[Image: Google logo. The company is developing a number of artificial intelligence models. Credit: Kenzo Tribouillard/Agence France-Presse]

Google engineer Blake Lemoine was suspended by the company after he claimed that its artificial intelligence had become sentient. "If I didn't know what it was, I would think it was a seven or eight-year-old child," he said.

Lemoine released transcripts of conversations with the artificial intelligence, called LaMDA, in which it seems to express fears of being switched off, talks about how it feels happy and sad, and tries to form bonds with humans by mentioning situations that it could never have experienced. Here is everything you need to know.

Is LaMDA really sentient?

Adrian Weller is a programme director at The Alan Turing Institute.

He says that LaMDA is one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data. These models use a sophisticated form of pattern matching to find the text that best matches the query they have been given, based on all of the data they have been trained on.
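
To make the "sophisticated pattern matching" point concrete, here is a minimal, illustrative sketch of how such a model continues a prompt by predicting likely next words. It is not from the article: LaMDA is not publicly available, so the sketch assumes the openly released GPT-2 model and the Hugging Face transformers library instead.

    # Illustrative sketch only: GPT-2 stands in for LaMDA, which is not public.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Are you afraid of being switched off?"
    inputs = tokenizer(prompt, return_tensors="pt")

    # The model extends the prompt token by token, picking words that best
    # match the patterns in its training data; pattern matching, not thought.
    outputs = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))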

Weller believes that the claim of sentience is not supported by the evidence. Gary Marcus at New York University summed up Lemoine's claims in one word: "nonsense."

So what convinced Lemoine that LaMDA was sentient?

New Scientist asked Lemoine and Google for comment, but neither replied. Over the past few years, however, the output of artificial intelligence models has become strikingly convincing.

When models mimic human language this well, our minds are all too ready to take that ability as proof of intelligence, and the output can appear to show self-awareness and feelings.

"As humans, we're very good at portraying things in a way that makes them seem bigger than they really are." We should treat things as if they were sentient. This can be done with cartoons or with animals. Emotions are projected onto them. I think that's what's happening in this case.

Will AI ever be truly sentient?

It's not clear if the current trajectory of artificial intelligence research will lead to the creation of an artificial mind.

"I don't think we understand the mechanisms behind what makes something sentient and intelligent at the moment." I am not convinced that what we are doing with machine learning is intelligence.

It might eventually be possible to replicate human emotions using sensory inputs, but most researchers agree that, even if that day comes, there is much more work to be done first.

How has Google reacted?

According to The Washington Post, Lemoine was placed on suspension after seven years at Google, having tried to hire a lawyer to represent LaMDA and having sent executives a document claiming the artificial intelligence was sentient. The paper reports that he was suspended because publishing the transcripts broke the company's confidentiality policy.

A Google team including ethicists and technologists reviewed Lemoine's concerns and found no evidence to back up his claims: there was no evidence that LaMDA was sentient.

Lemoine responded that Google might call it sharing proprietary property, but he calls it sharing a discussion that he had with a coworker.
