Why AI is set to fix human error

AI is 2017’s technology buzzword. It can turn doodles into palatable clip art and it’s certainly better than humans at chess, but the picture isn’t entirely positive. Many worry that AI may take over jobs, replacing the human element in hundreds of industries. Worse still, stories of malicious AI running rampant seem to be gaining traction.

Despite what people may think, society is still far from a future where AI could cause harm to humans – in fact, it’s important to remember that AI is created by humans in the first place. The only thing that can make AI misbehave is human error. By focusing on how AI might attack humanity, businesses are missing a more pressing issue – the threat is not self-aware AI, but AI that isn’t smart enough.

>See also: How AI and metadata are taking the hard work out of content discovery

Software just isn’t smart enough

Coders spend approximately 20% of their time running tests to make sure their software works properly, but that doesn’t mean technology is free of human error – basic coding mistakes still crop up with alarming regularity. This may sound like a problem confined to the tech industry, but its consequences often extend much further.

In 2014, the Heartbleed bug caused panic when it emerged that a simple coding error – a missing bounds check – could let attackers read sensitive data straight out of a server’s memory. More recently, Wonga’s data breach affected 245,000 UK customers. Both incidents could have been prevented by eliminating errors in code.
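To see how small such an error can be, here is a deliberately simplified Python sketch of a Heartbleed-style flaw – not OpenSSL’s actual C code, just an illustration of the same class of mistake: a server echoes back however many bytes the client *claims* to have sent, without checking that claim against the real payload size.

```python
# Simplified illustration of a Heartbleed-style bug (not OpenSSL's real code).
# A reused buffer may still hold data left behind by earlier requests.

BUFFER = bytearray(64)

def store_secret(secret: bytes) -> None:
    # An earlier request leaves sensitive data behind in the buffer.
    BUFFER[:len(secret)] = secret

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    BUFFER[:len(payload)] = payload
    # BUG: trusts the client's claimed length, echoing back stale bytes.
    return bytes(BUFFER[:claimed_len])

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # FIX: a one-line bounds check rejects inconsistent requests.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds payload")
    return payload[:claimed_len]

store_secret(b"password=hunter2")
leaked = heartbeat_vulnerable(b"hi", 18)  # client sent 2 bytes, asks for 18
# 'leaked' now contains leftover secret bytes from the earlier request
```

The entire fix is a single comparison – exactly the kind of check a human can forget and an automated tool can verify exhaustively.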

The issue doesn’t stop there – recent data suggests hacking attacks on UK businesses have cost investors £42 billion, with the UK targeted by dozens of serious cyber attacks each month. Software grows more complex, and more vulnerable, by the minute, and we can’t rely on humans to analyse millions of lines of code. The more complexity we introduce, the harder it becomes for humans to test.

>See also: AI: the greatest threat in human history?

As technology becomes more complex, it is increasingly impractical for humans alone to keep software safe. The fatal Tesla Autopilot crash demonstrated that human error, rather than “bad” AI, is usually to blame: the Autopilot’s sensors failed to distinguish a white trailer against a bright sky – a failure we can attribute to human error within the code.

So how can it be made better?

The way to fix this is with AI itself. AI can test software for us, replacing the human element. It can do what humans have done for years – but better – and learn along the way, fixing issues that would otherwise plague our software. That should lower the risk of cyber attacks and make software safer to use.

This capability already exists. DiffBlue, for example, has created AI that automates traditional coding tasks: bug fixing, test writing, finding and fixing exploits, refactoring code, translating from one language to another, and creating original code to fit specifications. It should mean that increasingly complex software is comprehensively tested by other software, eliminating human error.
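To make the test-writing idea concrete, here is a hypothetical illustration – not DiffBlue’s actual output – of what machine-generated unit tests aim for: systematically probing the edge cases a busy human tester tends to skip.

```python
# A function with a classic edge-case bug: it crashes on an empty list.
def average(values):
    return sum(values) / len(values)

# Tests of the kind a generator might emit after exploring the input space:
def test_average_typical():
    assert average([2, 4, 6]) == 4.0

def test_average_single():
    assert average([10]) == 10.0

def test_average_empty_list():
    # The generated edge case that exposes the bug a human might miss.
    try:
        average([])
        found_bug = False
    except ZeroDivisionError:
        found_bug = True
    assert found_bug
```

The value of automation here is coverage: a tool can enumerate boundary inputs (empty, single-element, extreme values) for every function in a codebase, a task no human team can sustain at scale.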

>See also: AI and automation will be far more significant than Brexit

Society remains a long way from AI with free will, but that doesn’t mean today isn’t an amazing point in history.

People’s day-to-day lives are becoming ever more entwined with technology, as they rely more and more on their phones, computers and tablets. The internet of things is looming, and people need to know that they are safe using it.

There’s no logic behind the fear that AI will cause harm when, in fact, it can provide the security society needs in the age of technology, as well as the time and space to be more creative. By automating tasks, AI has opened up the market to all sorts of innovative jobs – and by removing human error, society could be approaching an era of technological perfection.


Sourced by Daniel Kroening, Professor of Computer Science at Oxford University and CEO at DiffBlue


