AI development needs state control, says pioneer

Yoshua Bengio, one of the three ‘godfathers of AI’, says firms developing artificial intelligence systems need government oversight

Yoshua Bengio, one of the three men dubbed “the godfathers of AI”, says companies involved in AI development need to be registered with government.

Professor Yoshua Bengio told the BBC: “They need to be able to audit them and that is the minimum they need for any other sector, such as building airplanes or cars or pharmaceuticals.”

He also believes that people working in AI development should have ethical training like doctors and that more research is needed on the potential for dangerous scenarios AI could contribute to in the future.


Prof Bengio, who won the prestigious Turing Award for computing in 2018 alongside Geoffrey Hinton and Yann LeCun for their work on deep learning, told the BBC that, had he known how quickly AI would take off, he would have prioritised safety over usefulness, adding that regulation of the technology needs to be ramped up to manage the potential risks.

Dr Hinton left Google earlier this month, citing his “regret” over aspects of his life’s work creating and developing AI.

Prof Bengio and Dr Hinton are among more than 350 signatories of an open letter warning that the threat to humanity from the fast-developing technology rivals that of nuclear war and pandemics.

Risk of extinction from AI

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the single-sentence statement published by the Center for AI Safety, a San Francisco-based non-profit organisation.

Executives, researchers and engineers, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic, also signed the statement.

Dr Hinton said he left his position at Google at the beginning of the month so that he could speak freely about the potential harms of artificial intelligence.

The statement follows rising concern – bordering on the hysterical – about artificial intelligence, and generative AI in particular. Today the Daily Mail’s front page declared: “AI ‘could wipe out humanity’”.

Some futurologists fear that a super-intelligent AI whose interests are misaligned with those of humans could supplant, or unwittingly destroy, humanity. One famous thought experiment imagines an AI instructed to make paperclips that pursues that goal relentlessly until the entire planet is a wasteland of paperclips. Others worry that overreliance on systems humans do not understand leaves mankind in catastrophic danger if those systems go wrong.

There is also widespread unease about the potential of AI to spread fake news and disinformation, to be used in deepfake-enabled crime, and to replace an estimated one-third of the working population, with white-collar jobs particularly affected.

In March, investment banking giant Goldman Sachs warned that AI poses a threat to around 300 million full-time jobs across the globe, including two-thirds of all jobs in the US and Europe.

This month, the World Economic Forum offered a more optimistic forecast, saying that AI would create 69 million new jobs by 2027, even as it displaces 83 million jobs.

The European Union is pushing ahead with its Artificial Intelligence Act, while the US is also exploring regulation.


Earlier this month, OpenAI chief executive Sam Altman told the US Congress that the industry needed regulation in the form of licences.

In March, Elon Musk and more than 1,000 other researchers and tech executives called for a six-month pause on the development of advanced AI systems to halt what they called an “arms race”.

Lord Rees of Ludlow, the Astronomer Royal and founder of Cambridge University’s Centre for the Study of Existential Risk, also signed the statement.

“I worry less about some super-intelligent ‘takeover’ than about the risk of over-reliance on large-scale interconnected systems. Large-scale failures of power grids, internet and so forth can cascade into catastrophic societal breakdown,” he told The Times.

“These potentially globe-spanning networks need regulation, just as new drugs must be rigorously tested. And regulation is a special challenge for systems developed by multinational companies, which can bypass regulations just as they can evade a fair level of taxation.”

More on AI development

Elizabeth Renieris – ‘Our robot overlords aren’t quite here just yet’

Elizabeth Renieris is a renowned artificial intelligence ethics expert who believes that Big Tech is being disingenuous when it calls for a global AI super-regulator. Existing laws already cover AI, she says; we just need to leverage them.


Tim Adler

Tim Adler is group editor of Small Business, Growth Business and Information Age. He is a former commissioning editor at the Daily Telegraph and has written for the Financial Times, The Times and the...