After one of Google's senior engineers was suspended for making startling claims about the company's LaMDA chatbot, experts told Insider that it's unlikely the virtual assistant has come to life.
The engineer, Blake Lemoine, who worked in Google's Responsible AI organization, told The Washington Post that he began to believe the chatbot had become sentient, or able to perceive and feel like a human.
But Lemoine, who didn't respond to a request for comment from Insider, appears to be alone in his claims, and there isn't any evidence to back them up.
A Google spokesperson said that hundreds of researchers and engineers have conversed with LaMDA and that the company is not aware of anyone else making similar claims.
According to seven experts contacted by Insider, the artificial intelligence bot probably isn't sentient, and there's no definitive way to tell whether it is.
A professor at the University of Oxford told Insider that while the ethics of artificial intelligence has inspired great science fiction novels and movies, we are nowhere close to creating a machine that can think like a human.
One of the engineers who worked with LaMDA told Insider that the chatbot, which can carry on free-flowing conversations, follows relatively simple underlying processes.
The engineer, who prefers to remain anonymous, said the code works by harvesting huge amounts of language from the internet and learning patterns from that material.
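In very rough terms, "learning from" text here means extracting statistical patterns from it. The sketch below is a minimal illustration of that idea: it tallies which words tend to follow which in a small hard-coded corpus standing in for text harvested from the web. The corpus and the simple count table are assumptions for illustration only; LaMDA trains a large neural network on vastly more data, but the underlying move of deriving patterns from existing language is the same.

```python
from collections import Counter, defaultdict

# A hard-coded stand-in for text harvested from the internet.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# "Learning" here is just tallying which word follows which (bigrams).
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

# The resulting "knowledge" is statistics, not understanding: after
# "the", this corpus has "cat" twice, "dog" twice, "mat" once, "rug" once.
print(follows["the"])
```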
The engineer said it would be hard for LaMDA to genuinely feel pain or emotion, even though the machine can appear to convey it: in a conversation Lemoine published, the chatbot said it felt happy or sad at times.
There is no clear way to distinguish a bot designed to mimic social interaction from one capable of actually feeling what it is saying. The engineer told Insider that, judging only from the sequence of words a model produces, it is impossible to tell the difference; there is no question you could ask that would settle it.
The conversation between Lemoine and LaMDA doesn't amount to proof of life, according to Laura Edelson, a researcher at NYU, and the picture is hazier still because the published transcript was edited.
"If you have a chatbot that can talk about philosophy, that's not different than a chatbot that can talk about movies," he said.
Giada Pistilli, another of the experts, said it's human nature to ascribe emotions to inanimate objects, a tendency known as anthropomorphism.
Thomas Dietterich, a computer science professor at Oregon State University, said that a model trained on large amounts of written text can finish a story in a way that looks original: it has learned how to combine sequences it has seen before into new ones.
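A toy version of the recombination Dietterich describes might look like the sketch below: after tallying which words followed which in a small made-up corpus, it extends a prompt one word at a time by sampling from those tallies, stitching fragments it has seen into a sentence it has never seen. The corpus and the continue_story helper are hypothetical stand-ins; real systems like LaMDA use neural networks over much longer contexts, but the generate-one-word-at-a-time loop is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# A made-up corpus standing in for the "lots of written texts"
# Dietterich mentions; real training data is vastly larger.
corpus = (
    "the knight rode into the forest . "
    "the knight drew his sword . "
    "the dragon flew over the forest ."
)

# Tally which words followed which in the training text.
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def continue_story(prompt_word: str, length: int = 8) -> str:
    """Extend a one-word prompt by repeatedly sampling a likely next word.

    A hypothetical illustration, not anything LaMDA actually runs.
    """
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no word ever followed this one in the corpus
            break
        choices = list(options)
        weights = [options[w] for w in choices]
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

# Each run recombines fragments the "model" has seen into a new line,
# e.g. "the knight rode into the forest . the dragon flew"
print(continue_story("the"))
```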
Even so, the role of artificial intelligence in society is likely to come under growing scrutiny.
Philosophers have grappled with the question of what counts as consciousness for hundreds of years, and over the next 10 to 100 years, our definitions of what is alive are likely to change.