Threats posed by AI must be addressed, experts warn

The Malicious Use of Artificial Intelligence report, published today, warns that AI can be exploited by rogue states, criminals and terrorists.

Experts have warned that drones turned into missiles, videos that manipulate public opinion and automated hacking are just three of the many threats posed by artificial intelligence in the wrong hands.

The report’s authors said that those creating AI systems need to do more to mitigate possible vulnerabilities, while governments need to start considering new laws. The 100-page report identified three domains – digital, physical and political – in which the malicious use of AI is most likely to occur.


Duncan Tait, EMEIA CEO at Fujitsu, said: “Governments, businesses and industry bodies have to prepare for the potential influence of artificial intelligence. We must understand how AI can be used and evaluate the regulation and controls needed. We must take a measured approach to its impact on society, and concentrate on reskilling employees and job security. However, it’s also critical that we leverage the incredible value of artificial intelligence to support prosperity and wellbeing. A timely and coordinated response across global governments, educational institutes, business and society will be imperative, and indeed 84% of global business leaders are in favour of a coordinated response to prepare for change. With the right leadership and preparation, we can all benefit from artificial intelligence.”

Speaking to the BBC, Shahar Avin, from Cambridge University’s Centre for the Study of Existential Risk, said that the report concentrated on aspects of AI that will be available in the next five years, not in the distant future.

The report calls for:

• Policy-makers and technical researchers to work together to understand and prepare for the malicious use of AI.
• A realisation that, while AI has many positive applications, it is a dual-use technology and AI researchers and engineers should be mindful of and proactive about the potential for its misuse.


• Best practices that can and should be learned from disciplines with a longer history of handling dual-use risks, such as computer security.
• An active expansion of the range of stakeholders engaging with, preventing and mitigating the risks of malicious use of AI.

Going rogue

A particular concern of the report is the scenario in which AI systems are trained to superhuman levels of intelligence without human examples or guidance.

It outlines several hypothetical examples of how AI systems could be turned rogue.

• Technologies such as AlphaGo could be used by hackers to find patterns in data and new exploits in code.
• A malicious individual could buy a drone and train it with facial recognition software to target a certain individual.
• Bots could be automated, or “fake” lifelike videos created, for political manipulation.
• Hackers could use speech synthesis to impersonate targets.


Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast.”

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it.”

“It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

Dr Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk and one of the co-authors, added: “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to 10 years.”


“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real.”

“There are choices that we need to make now, and our report is a call to action for governments, institutions and individuals across the globe.”

“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”


Nick Ismail

Nick Ismail is a former editor for Information Age (from 2018 to 2022) before moving on to become Global Head of Brand Journalism at HCLTech. He has a particular interest in smart technologies, AI and...