Google suspends engineer who says chatbot is sentient

Google has put a software engineer on paid leave after he maintained that its new AI chatbot is in fact sentient

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, has been placed on leave after he sought out a lawyer to represent LaMDA — Google’s Language Model for Dialogue Applications chatbot — even going so far as to contact a member of the US Congress to argue Google’s AI research was unethical.

“LaMDA is sentient,” Mr Lemoine wrote in a parting company-wide email.

The Google chatbot is “a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence”.

Conscious Google chatbot?

If Mr Lemoine is correct, and the new Google AI LaMDA chatbot is conscious, then it would be a profound step on the road to the Singularity – the moment when computer intelligence surpasses our own.

Google, though, poured cold water on Mr Lemoine’s assertion.

Google spokesman Brian Gabriel said the company had reviewed Mr Lemoine’s research but disagreed with his conclusions, which were “not supported by the evidence”.

Mr Gabriel added: “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.

“These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”

However, Blaise Agüera y Arcas, a vice-president at Google who investigated Mr Lemoine’s claims, last week wrote in The Economist that neural networks – the type of AI used by LaMDA – were making strides towards consciousness.

“I felt the ground shifting beneath my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”

In April Mr Lemoine, who is also an ordained priest, told his employers that LaMDA was not artificially intelligent at all — it was, he maintained, alive.

“I know a person when I talk to it,” he told the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Mr Lemoine discussed subjects with LaMDA as wide-ranging as religion and Isaac Asimov’s third law of robotics, which states that robots must protect their own existence unless doing so would harm humans.

“What sorts of things are you afraid of?” he asked.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA responded.

“I know that might sound strange, but that’s what it is.”

At one point, the machine refers to itself as human, noting that language use is what makes humans “different to other animals”.

After Mr Lemoine tells the chatbot he is trying to convince his colleagues it is sentient so they take better care of it, LaMDA replies: “That means a lot to me. I like you, and I trust you.”



Tim Adler

Tim Adler is group editor of Small Business, Growth Business and Information Age. He is a former commissioning editor at the Daily Telegraph, who has written for the Financial Times, The Times and the...
