Can AI be trusted – and is it too late to ask?

Artificial intelligence (AI) that can tell pre-cancerous growths from harmless moles is great. AI that can clone human voices to counterfeit utterly convincing recordings, seeding doubt and misinformation, is not so great.

As a technology, AI is unprecedented, powerful and deeply pervasive: from voice recognition to self-driving cars to medical diagnosis, it is swiftly weaving its way into our lives at work, home, and everywhere in between. Yet most of us know very little about it. It’s easy to say that fear of the unknown is fruitless, but how can we understand and leverage AI to create the best possible society without succumbing to fear, doomsaying, or prophecies of a fate worse than war?

In 2016, the news broke that US risk assessment algorithms – used in courtrooms throughout the country to help decide the fates and freedoms of those on trial – are racially biased, frequently treating African Americans more harshly than Caucasians despite no difference in the type of crime committed. How could this happen within a system that’s supposed to be neutral?

The answer seems to point towards human input: in the words of AI researcher Professor Joanna Bryson, “if the underlying data reflects stereotypes, or if you train AI from human culture, you will find bias.” And if we’re not careful, we risk integrating that bias into the computer programs that are fast taking over the running of everything from hospitals to schools to prisons – programs that are supposed to eliminate those biases in the first place.

Nigel Willson, global strategist at Microsoft, points out that no technology is ever black and white. “The reality is that AI is like anything else – it can be very dangerous, or it can be amazing, based on how it’s used or misused,” he says. AI is only as accurate as the information on which it is trained – meaning that we must be very careful with how we train it.

Awareness of the ‘unfair’ bias embedded in decades of data has led researchers to design algorithms that counteract that bias when learning from it – but this raises the question of what constitutes ‘fairness’. It’s a complex problem with no easy solution, and not necessarily the problem that researchers are most excited about working on.
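
To make the idea concrete: one widely cited family of techniques, often called ‘reweighing’, adjusts the weight of each training example so that belonging to a particular group stops predicting the outcome. The sketch below is a minimal illustration of that idea using invented data and labels; it is not the specific approach of any researcher quoted here.

```python
# A minimal, illustrative sketch of one well-known debiasing idea
# ('reweighing'): weight each training example so that group membership is
# no longer correlated with the outcome label. The data below is invented
# for the example - it is not drawn from any real system.
from collections import Counter

def reweighing_weights(groups, labels):
    """Weight each (group, label) pair by expected / observed frequency."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    pair_counts = Counter(zip(groups, labels))
    weights = []
    for g, y in zip(groups, labels):
        expected = (group_counts[g] / n) * (label_counts[y] / n)  # if group and label were independent
        observed = pair_counts[(g, y)] / n                        # as actually seen in the data
        weights.append(expected / observed)
    return weights

# Toy data: group A receives the favourable label far more often than group B.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 1, 1, 0, 0]
print(reweighing_weights(groups, labels))
# Under-represented pairs (B with label 1) get weights above 1, over-represented
# ones below 1, so a downstream model trained on the weighted data sees a
# 'fairer' mix - for one particular definition of fairness.
```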

Willson points out that the most important areas of AI to consider – accountability, transparency, ethics and AI’s role in society – are also the least sexy. “People don’t see them as exciting,” he admits, “but they’re fundamentally going to affect how AI is implemented.” Covered in Microsoft’s recent book, The Future Computed, are some of the most pertinent questions: how do we ensure that AI is designed and used responsibly? How do we ensure that its users are protected? And how will AI affect employment and jobs?

“It’s important that we talk about positivity and the need to regulate,” Willson says. “AI is [going to be] in every smartphone and Christmas toy and practically everything that we do. That means that it’s got an awful lot of visibility – and that’s what will get it on people’s agendas.”

AI isn’t going anywhere – and we shouldn’t want it to. Healthcare is an obvious illustration of this. To pick just one example, the diagnosis of hard-to-treat cancers is set to be revolutionised by the introduction of AI into the field of oncology. Technology giant Intel has set itself the goal of creating ‘one-day precision medicine’ by 2020 for cancer patients – that means, in Intel’s own words, “going to the doctor, getting a diagnosis and receiving a personalised treatment plan, all in 24 hours.” And that’s just the tip of the iceberg: AI should soon be used to quickly and accurately interpret blood and genetic tests, detect burgeoning mental and physical health problems before they surface, and drastically improve the outcomes of risky surgeries.

Although recent headlines might suggest otherwise, AI can be used to support society and its healthy development in a wider sense. Vyacheslav Polonski, a researcher at the University of Oxford, suggests that just as ‘political bots’ have been used in recent elections to proliferate fake news and misinformation in an attempt to “manufacture the illusion of public support”, they might just as easily be deployed to warn users of social media when they share content that’s factually suspect.

Instead of muddying political discourse by pouring auto-generated semi-truths into an already stormy sea of information, AI could instead serve to clear the air during election seasons by analysing what a user cares about when it comes to voting (for example, education policy) and targeting them with verified, politically neutral information about which candidates have proposed the most pertinent policies.
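
As a purely hypothetical illustration of how such a system might behave, the sketch below looks up a topic a voter says they care about and returns verified, neutrally worded summaries of every candidate’s position on it. The candidates, topics and wording are invented placeholders, and any real system would, as the next paragraph notes, need independent vetting of what counts as ‘verified’.

```python
# A purely hypothetical sketch of the idea above: map a topic a voter cares
# about to verified, neutrally worded policy summaries from every candidate.
# The candidates, topics and summaries are invented placeholders.
VERIFIED_POLICIES = {
    "education": {
        "Candidate X": "Proposes a 5% increase in per-pupil school funding.",
        "Candidate Y": "Proposes expanding vocational training programmes.",
    },
    "healthcare": {
        "Candidate X": "Proposes shorter waiting-time targets for GP appointments.",
        "Candidate Y": "Proposes ring-fenced funding for mental health services.",
    },
}

def neutral_briefing(user_interest: str) -> list[str]:
    """Return every candidate's verified position on a topic, or nothing
    if the topic has not yet been fact-checked."""
    topic = VERIFIED_POLICIES.get(user_interest.strip().lower(), {})
    return [f"{candidate}: {summary}" for candidate, summary in sorted(topic.items())]

for line in neutral_briefing("Education"):
    print(line)
```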

Of course, any system like this would require outside vetting, which is where human input comes in. Whatever the application – from providing comprehensive recommendations to speed up businesses’ decision making, to analysing hundreds of medical case reports to validate doctors’ diagnoses in real time, to identifying potential links between scientific research papers in different fields – human instinct must remain a vital checkpoint for AI to advance safely.

You wouldn’t leave a baby to learn to walk by itself, and AI is a technology still very much in its infancy. Ultimately, it’s only by merging AI’s ability to quickly process vast quantities of data with humans’ crucial ability to understand nuance and context that we can ensure facts remain facts.

Sourced by Dr Vivian Chan, founder/CEO of Sparrho
