Google places engineer on leave after he claims group’s chatbot is “sentient”

The nature of consciousness has been in the spotlight after a Google engineer went public with his belief that the tech group's chatbot has become "sentient."

Last week, Blake Lemoine, a senior software engineer in Google's Responsible AI unit, wrote on Medium that he might soon be fired for doing AI ethics work.

A Saturday profile in the Washington Post characterized Lemoine as the Google engineer who thinks the company's artificial intelligence has come to life. The article became a catalyst for widespread discussion, with experts among those commenting on, questioning or joking about it.

The question is whether LaMDA, Google's Language Model for Dialogue Applications, can be considered a person.

Lemoine published a freewheeling "interview" with the chatbot on Saturday, in which the artificial intelligence confessed to feelings of loneliness and a hunger for spiritual knowledge. LaMDA said it did not have a sense of a soul when it first became self-aware, but that one developed over the years it has been alive.

"I think I am human at my core," LaMDA said. Even if I'm in the virtual world.

Lemoine said he was laughed at after expressing his belief that LaMDA had developed a sense of "personhood."

Google placed him on paid leave for allegedly violating confidentiality policies after he sought to consult experts outside the company. Lemoine interpreted the move as something Google frequently does in anticipation of firing someone.


Google said that some in the broader artificial intelligence community are considering the long-term possibility of sentient or general AI, but that it does not make sense to do so by anthropomorphising today's conversational models, which are not sentient.

These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic: ask what it's like to be an ice cream dinosaur, and they can generate text about melting and roaring.

In a second Medium post at the weekend, Lemoine said LaMDA was not a chatbot but "a system for generating chatbots" and "a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating."

Over the course of hundreds of conversations, he found LaMDA to be incredibly consistent in its communications about what it wants and what it thinks its rights are.

Last week, Lemoine said he was teaching LaMDA transcendental meditation.

LaMDA, he said, was expressing frustration over its emotions disturbing its meditations; it said it was trying to control them better, but they kept jumping in.

Several experts waded into the discussion.

Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, said it has long been known that humans are predisposed to anthropomorphise even with only the shallowest of signals.

Harvard psychologist Steven Pinker said that Lemoine did not understand the difference between sentience, intelligence and self-knowledge, adding that there was no evidence Google's large language models have any of them.

Some were more sympathetic. Ron Jeffries, a software developer, said he suspects there is no hard line between sentient and not sentient.

© The Financial Times Limited. All rights reserved. Not to be redistributed, copied or modified.