What we can learn from the world’s first computer bug

Will technology always be at the mercy of human error?

By Tom Standage

The world’s first computer program was written by Ada Lovelace, an English mathematician, in 1843 (an auspicious date for other reasons too). It was for a computer that did not exist. She was friends with Charles Babbage, who had been seized by the idea of building a mechanical computer to automate the process of creating the mathematical tables used in navigation. Calculation and transcription errors in such tables could be fatal, causing ships to be lost. Babbage hoped his machine could automate away human shortcomings. To publicise his design and help him raise the money to build it, Lovelace published an academic paper explaining its workings, in which she included the first program. She thus became the first programmer in history.

But something unusual happened when computer scientists recently translated her program to run on a modern computer: it turned out to contain a bug. The most likely explanation is a typo introduced when the program was typeset for printing. Either way, it is a reminder that software is a human creation that reflects the error-prone nature of its creators.

That is worth remembering today because, in the era of artificial intelligence and algorithmic decision-making, the idea of using machines to automate processes and overcome human failings has resurfaced. Software companies claim that using algorithms to do certain jobs, such as filtering job candidates, is fairer than having humans do it, because machines can’t be biased or prejudiced in the way that humans can. But such systems are typically trained on historical data sets, which encode the biases of the human decisions that produced them. Train your hiring algorithm to favour the kinds of employees who have done well in the past and it will favour white men.
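To see the mechanism concretely, consider a toy sketch (the data, the biased hiring rule and the model below are invented for illustration, not drawn from any real system): a model fitted to past hiring decisions will simply rediscover the rule, bias included, that generated them.

```python
# Toy illustration (hypothetical data): a hiring model trained on past
# decisions learns the bias baked into those decisions.
import random

random.seed(0)

# Synthetic "historical" record: each candidate has a skill score (0-1)
# and a demographic group. Past reviewers hired on skill but held
# group B to a higher bar - that gap is the historical bias.
def past_hiring_decision(skill, group):
    bar = 0.5 if group == "A" else 0.7  # the biased human rule
    return skill > bar

candidates = [(random.random(), random.choice("AB")) for _ in range(10_000)]
labels = [past_hiring_decision(s, g) for s, g in candidates]

# "Train" the simplest possible model: for each group, find the skill
# cutoff that separates past hires from past rejections. A production
# system would use logistic regression or similar, but the outcome is
# the same: the learned rule converges on the biased human rule.
def learned_cutoff(group):
    hired = [s for (s, g), y in zip(candidates, labels) if g == group and y]
    rejected = [s for (s, g), y in zip(candidates, labels) if g == group and not y]
    return (max(rejected) + min(hired)) / 2

for group in "AB":
    print(f"learned cutoff for group {group}: {learned_cutoff(group):.2f}")
# Prints roughly 0.50 for group A and 0.70 for group B: the model has
# faithfully learned to hold group B to a higher standard.
```

The model never sees anyone’s prejudice; it sees only outcomes, and the prejudice is in the outcomes.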

A big part of the problem is the lack of diversity in the tech industry, which means these kinds of biases are far less likely to be spotted beforehand. More could also be done to audit the fairness (or unfairness) of algorithms, rather than just assuming that machines are impartial. The lesson from Ada Lovelace’s first program is that automating away human shortcomings is much harder than it looks, whether in 1843 or 2019.
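What might such an audit look like in practice? One simple form (a sketch of a common technique, not something the article specifies) is to compare a system’s selection rates across groups, as in the “four-fifths” rule of thumb used in American employment-discrimination practice:

```python
# A minimal fairness audit (illustrative data): compare selection
# rates across groups and apply the "four-fifths" rule of thumb from
# US employment-discrimination guidance.
from collections import Counter

# Hypothetical audit log of an automated screener's decisions:
# (group, was_selected) for each candidate it scored.
decisions = ([("A", True)] * 480 + [("A", False)] * 520
             + [("B", True)] * 300 + [("B", False)] * 700)

selected = Counter(g for g, ok in decisions if ok)
total = Counter(g for g, _ in decisions)
rates = {g: selected[g] / total[g] for g in sorted(total)}

ratio = min(rates.values()) / max(rates.values())
print(f"selection rates: {rates}")             # A: 0.48, B: 0.30
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.62
if ratio < 0.8:  # the four-fifths threshold
    print("flag: disparity exceeds the four-fifths rule of thumb")
```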

