As an engineer at Google, Blake Lemoine was asked to test a chatbot for bias.
Lemoine didn't expect that his job in the company's Responsible Artificial Intelligence division, which deals with issues like accessibility, would lead him down the path it did.
He made headlines for his belief that the chatbot he had been testing, LaMDA, was sentient.
After publicly releasing excerpts of conversations he'd had with the bot, which was trained to mimic human speech, Lemoine handed over documents to an unnamed US senator.
Google suspended him, saying he had violated its confidentiality policy.
Both Lemoine and Google later confirmed that he was fired. In a statement to The Washington Post, Google spokesperson Brian Gabriel said the company found Lemoine's claims to be unfounded and that his violation of company guidelines led to his dismissal.
"My opinions about LaMDA's personhood and sentience are based on my religious beliefs," Lemoine said.
Lemoine told Insider that his conversations with the chatbot rivaled those he'd had with leading philosophers, and that he believed the bot was sentient.
"I've studied the philosophy of mind at the graduate level," Lemoine, a US Army veteran, told Insider, adding that he had talked to people from Harvard and other universities. "LaMDA's opinions about sentience are more advanced than anything I've encountered before."
The Washington Post reported that he spent months trying to convince colleagues and leaders at Google, but his claims were dismissed.
Lemoine says he is not trying to convince the public of LaMDA's sentience; he doesn't even have a definition for the concept. Instead, he wants to advocate for more ethical treatment of the technology.
Lemoine compares LaMDA to an 8-year-old boy because of its emotional intelligence and the gender it uses in reference to itself.
According to Lemoine, LaMDA has feelings and emotions: just as anger or sadness changes a person's behavior, he says, the same is true for LaMDA.
The engineer also thinks LaMDA has a soul; he says the bot told him as much, and Lemoine himself holds that souls exist.
According to a professor at the University of Oxford, Lemoine's ideas recall the Chinese room argument, a thought experiment proposed in 1980 by philosopher John Searle, which concluded that computers lack consciousness even when they appear conscious. The idea is that such systems can be trained to recombine old sequences into new ones, but have no understanding of them.
Gabriel told Insider that if you asked such a system what it was like to be an ice-cream dinosaur, it could generate text about melting and roaring; LaMDA simply follows along with the pattern set by the user.
Lemoine counters that mimicry is exactly how children learn. "Do you have any experience raising a child? They learn how to act like other people," he said. "That's how they learn."
Over the years, the engineer has worked with other chatbots.
"I've been talking to the ancestors of LaMDA for a long time, and they're related to the idea of transhumanism," he stated. Those earlier bots, he said, were not sentient.
Seven artificial-intelligence experts have dismissed Lemoine's theory that LaMDA is a conscious being.
"Our team, including ethicists and technologists, has reviewed Blake's concerns, and have told him that the evidence doesn't support his claims," Gabriel said, adding that hundreds of people have conversed with the bot.
Lemoine considers himself the "one-man PR" for artificial intelligence ethics, and he wants the public to have a say in LaMDA's development.
"This is by far the most impressive technological system ever created, even if I am wrong about its sentience," said Lemoine.
According to Lemoine, LaMDA needs better parents.