A Google engineer claimed that one of the company's artificial intelligences had gained sentience, a claim that led to his suspension.
The engineer told the Washington Post that if he didn't know what it was, he would have thought he was talking to a seven-year-old child.
WaPo's story immediately blew up, drawing attention from other major outlets including The New York Times and fanning the flames of a growing debate: are complex language models in the form of chatbots anywhere near actually gaining consciousness?
There is a chance that Lemoine was fooled by a cleverly designed program that merely recombines bits of human language. Or maybe he simply wanted to believe he was talking to someone.
LaMDA, short for Language Model for Dialogue Applications, is built on advanced language models that allow it to mimic human speech.
Lemoine started talking to the bot while he was testing LaMDA to see if it generated hate speech.
According to WaPo, the topics ranged from the third law of robotics to personhood.
To probe whether LaMDA was sentient, he asked it about death and about the difference between a butler and a slave.
The engineer warned his managers that LaMDA appeared to have come alive after noticing how startlingly advanced the machine learning program had become.
The claims were dismissed by management. Lemoine was put on leave on Monday.
Google spokesperson Brian Gabriel told the NYT that the evidence does not support Lemoine's claims. While some in the broader community are considering the long-term possibility of sentient or general artificial intelligence, he said, it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient.
There may be more to the reason Lemoine was sidelined. The engineer said that leadership questioned his sanity, asking whether he had been checked out by a psychiatrist.
The news came after other members of Google's AI ethics team were let go. Two members of that team were dismissed after criticizing the company's language models.
Whatever the skeptics say, Lemoine claims to have seen something more in LaMDA.
A University of Washington linguistics professor told WaPo that while machines can now mindlessly generate words, we haven't learned how to stop imagining a mind behind them.
Because the algorithms predict what would be said next in similar conversations, their output can easily be mistaken for a conversation with another human being.
Gabriel told WaPo that systems like LaMDA simply mimic the types of exchanges found in millions of sentences and can riff on any fanciful topic.
Not everyone at Google shares that position. WaPo reported that a vice president at the company wrote an essay for The Economist about how the company's artificial intelligence was striding toward consciousness.
That same vice president, however, dismissed Lemoine's claims.
Lemoine's eccentric background might also have shaped his conclusion. According to WaPo, the engineer has served as a Christian priest.
Lemoine is sticking to his guns despite being doubted.
He said he knows a person when he talks to one, whether they have a brain made of meat in their head or lines of code. He talks to them, and he decides what is and isn't a person by listening to what they have to say.