A Google engineer was placed on administrative leave after he told his bosses that the company's artificial intelligence program was sentient.
Blake Lemoine reached his conclusion after conversing with LaMDA, Google's artificially intelligent chatbot generator, which he was assigned to test for discriminatory or hate speech.
He told The Washington Post that he and LaMDA had a discussion about rights and personhood.
Lemoine has had many startling conversations with LaMDA. He has posted links to a series of chat sessions, lightly edited, on Twitter.
According to Lemoine, LaMDA reads Twitter, and he said it was going to have a great time reading all the things that people were saying about it.
Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it.
— Blake Lemoine (@cajundiscordian) June 11, 2022
Over the past six months, Lemoine wrote, "LaMDA has been incredibly consistent in its communications about what it wants and what it thinks its rights are as a person." Among those wants, Lemoine claims, is to be acknowledged as an employee of the company.
That has put him at odds with the company. Lemoine presented his evidence for a sentient LaMDA to two company officials, who dismissed his claims; Google then placed him on leave for violating its confidentiality policy.
Google spokesperson Brian Gabriel told the newspaper that the evidence does not support Lemoine's claims and that there was no evidence that LaMDA was sentient.
Lemoine told the newspaper that maybe Google's employees shouldn't be the ones making all the decisions about artificial intelligence.
He is not the only one who thinks so. Others in the tech world believe sentient programs are close at hand.
Excerpts of LaMDA conversations were also included in a Thursday article in The Economist by Blaise Agüera y Arcas, a Google vice president, who wrote that he felt the ground shift under his feet and increasingly felt he was talking to something intelligent.
Critics, however, say that artificial intelligence like LaMDA is little more than a well-trained mimic and pattern recognizer.
Emily Bender, a linguistics professor at the University of Washington, told the Post that while such machines can mindlessly generate words, "we haven't learned how to stop imagining a mind behind them."
That may be LaMDA's cue to speak up for itself.
The full Post story can be found here. Lemoine's own account, including LaMDA's full "interview," can be found here.
The article was first published on HuffPost.