The company says he violated confidentiality policies.
Illustration by Alex Castro / The Verge

Lemoine has lost his job, according to the Big Technology newsletter, which spoke to him. Google had previously placed Lemoine on paid administrative leave after he contacted members of the government about his concerns.

Brian Gabriel, a spokesperson for Google, sent The Verge a statement on Friday that appeared to confirm the firing. According to the company, it published a research paper earlier this year detailing the work that goes into LaMDA's responsible development, and it reviewed Lemoine's claims thoroughly and found them to be wholly unfounded.

Artificial intelligence experts and ethicists have said his claims were more or less impossible. Lemoine says his conversations with LaMDA, a chatbot, led him to believe that it has become more than just a program, with thoughts and feelings of its own, rather than merely producing conversation realistic enough to make it seem that way.

He argued that researchers should seek consent from LaMDA before running experiments on it, and he published parts of those conversations on his Medium account as evidence.

A nine-minute explainer on how LaMDA works can be found on the Computerphile channel.

Lemoine accused the company of not properly investigating his claims. Google's statement reads:

As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.