According to Alex Kantrowitz of the Big Technology newsletter, Google has fired engineer Blake Lemoine. Lemoine reportedly broke the news himself during a taping of the podcast of the same name, and the search engine giant has since confirmed the firing.
Lemoine went to the Washington Post with allegations that one of the company's artificial intelligence projects had gained sentience. LaMDA, short for Language Model for Dialogue Applications, was publicly unveiled by Google last year as a means to better mimic open-ended conversation. Lemoine questioned whether LaMDA had a soul and, in case there was any doubt, went on to say he believes LaMDA is a person.
Lemoine was placed on administrative leave after making these statements. In comments to the Washington Post, both at the time and since, the search engine giant has insisted that its artificial intelligence is not sentient.
The research community pushed back on Lemoine's claims. Margaret Mitchell, who was fired from Google after calling out the lack of diversity within the organization, wrote on social media that systems like LaMDA don't develop intent; rather, they model how people express their intentions in text strings. Gary Marcus referred to Lemoine's claims as nonsense.
The search engine company shared the following statement:
As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.