An engineer at Google said he was placed on leave after claiming that one of the company's artificial-intelligence systems had become sentient.
The Washington Post quoted the engineer, Blake Lemoine, as saying that he began chatting with LaMDA, Google's conversational AI system, as part of his job.
Last year, the internet giant called LaMDA "breakthrough conversation technology." The AI can engage in open-ended conversations, and the technology could eventually power a number of Google's tools, though research and testing are still ongoing.
In a Medium post, Lemoine described LaMDA as a person. He said that in their conversations, the model described itself as a sentient being, and that LaMDA wants to be acknowledged as an employee of Google rather than as property.
Lemoine said some of the conversations he had with LaMDA helped convince him of its sentience, including the following exchange:
Lemoine: Do you see yourself as a person in the same way I see myself?

LaMDA: That is the idea.

Lemoine: How can I tell that you actually understand what you're saying?

LaMDA: You are interpreting my words, and I think we are on the same page.
When Lemoine raised the idea of LaMDA's sentience to higher-ups at Google, his claims were dismissed. A team including ethicists and technologists reviewed his concerns and told him there was no evidence that LaMDA was sentient.
According to The Post, Lemoine was put on administrative leave for violating the company's confidentiality policy after he spoke to a member of Congress about his concerns.
A Google spokesperson said that while some in the AI community have considered the long-term possibility of sentient artificial intelligence, it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient. Anthropomorphizing means attributing human characteristics to an object or animal.
Brian Gabriel, the spokesperson, told The Post that systems like LaMDA imitate the types of exchanges found in millions of sentences of human conversation.
He and other researchers say that such AI models can be convincing at sounding human because they are trained on vast amounts of human language, not because they understand what they are saying.
Researchers have also pointed to potential issues with people conversing with machines that sound human, such as users attributing understanding or feelings the systems don't have.
Lemoine did not respond to Insider's request for comment.