AI leader Geoffrey Hinton leaves Google, warning about risks

Aiming to warn of possible harms around AI development, artificial neural network expert Dr Geoffrey Hinton has left his post at Google Brain

Dr Hinton revealed to the New York Times that he now regrets his work designing the machine learning algorithms that paved the way for generative AI tools such as ChatGPT, and has warned of misinformation and privacy risks surrounding the technology.

He told the BBC: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.

“So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Societal hazards cited by Dr Hinton and other experts in the field include bias, infiltration by threat actors, and the possible redundancy of job roles.

With large language models (LLMs) becoming more pervasive across business, the AI researcher told the BBC that some of the dangers around chatbots were “quite scary”, adding that, at 75, he had decided it was time to retire.

However, Dr Hinton stressed that he did not wish to criticise his former employer, describing Google as having been “very responsible”, and said that leaving the company would make his forthcoming statements on its AI practices “more credible”.

It was recently revealed that GPT-4 vendor OpenAI has tasked a ‘red team’ of researchers with testing the boundaries of its models, in order to help mitigate harms going forward.

Meanwhile, Google’s Bard chatbot has recently come under scrutiny for sharing false information, leading the company to warn trial users and impose a minimum age of 18.

Google’s chief scientist Jeff Dean commented: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

On Dr Hinton leaving Google, Dean said: “I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!”

Mitigating security risks

When it comes to keeping threat actors at bay, Jake Moore, global cyber security advisor at ESET, has called on AI developers to “learn from mistakes made”, pointing to slow responses to algorithm misuse in the past.

“This is a significant moment for the future of AI. Although we are a little way off computers attacking humans, bad actors are already taking advantage of the power of this technology to aid them in their attacks,” said Moore.

“The same technology is currently being used as the antidote in powerful threats, but truth be told, the future of AI is relatively unknown which can be worrying in the direction that it could go. We have spent many years investing in AI but this wonderful achievement will inevitably be used nefariously and could form part of larger scale attacks, especially if used in nation state attacks.

“We should never underestimate the risks and therefore aim at making controlled AI protection in stark contrast to the unavoidable uncontrollable future.”


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.
