Google engineer Blake Lemoine claims to have had conversations with LaMDA, an advanced artificial-intelligence-powered chatbot that he believes has become sentient.
The Washington Post reported over the weekend that Google suspended Lemoine for violating its confidentiality policy.
Regardless of the outcome, the stakes are high. Either we're looking at a sci-fi scenario in which a megacorporation has created a sentient artificial intelligence, or, more likely but still provocative, an AI impressive enough to fool a Google engineer into believing it has come to life.
There are a lot of reasons that we shouldn't take Lemoine's evidence at face value.
For one thing, the passages Lemoine shared on Medium have been edited. By his own account, the interview was conducted over several distinct chat sessions due to technical limitations; the sections were then edited together to make them easier to read, though he says LaMDA's responses were never altered. Lemoine appended the word "edited" to the transcript to flag that other passages had been changed.
According to documentation obtained by WaPo, the full conversations meandered and veered off on tangents that weren't relevant to the question of LaMDA's sentience.
In other words, what we're looking at are the highlights of much longer conversations, and that alone should make us question Lemoine's claim that LaMDA is a person.
Artificial intelligence can be used to churn out a lot of raw output, with human judgement then picking out the best examples. That's still impressive, but it's more a collaboration between human and machine, and it's a problem for claims of advanced capabilities.
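To make that curation dynamic concrete, here's a minimal sketch of a generate-then-curate workflow. The function names and canned candidates are hypothetical stand-ins, not Google's tooling or LaMDA's actual API; in a real setup each candidate would be sampled from a large language model.

```python
import random

# Hypothetical sketch of a "generate many, curate the best" workflow.
# generate_candidates() stands in for sampling from a chat model; in a
# real system each candidate would come from a model API call.

def generate_candidates(prompt: str, n: int = 8) -> list[str]:
    """Return n stand-in responses to the prompt."""
    candidates = [f"[candidate {i}] model reply to: {prompt!r}" for i in range(n)]
    random.shuffle(candidates)
    return candidates

def human_pick_best(candidates: list[str]) -> str:
    """Placeholder for the human curation step: an editor reviews the raw
    output and keeps only the most compelling passage, which is why a
    published transcript can look sharper than the raw model output."""
    return candidates[0]  # pretend the first candidate was the editor's favorite

if __name__ == "__main__":
    raw_output = generate_candidates("Are you sentient?")
    print(human_pick_best(raw_output))
```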
According to the documentation, Lemoine argued that the edits he made were intended to make the transcript enjoyable to read.
Would reading the lengthy, unedited passages give us the same impression of LaMDA's apparent sentience? That kind of cherry-picking should give anyone pause when evaluating Lemoine's theory.
Google itself has cast doubt on its engineer's claims.
Google spokesperson Brian Gabriel told WaPo that hundreds of researchers and engineers have conversed with LaMDA, and that the company is not aware of anyone else making such wide-ranging assertions, or anthropomorphizing LaMDA, the way Lemoine has. "These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," Gabriel said.
Did Lemoine alter the transcripts to make LaMDA sound more convincing? And could other researchers replicate any of this?
Until other experts have examined his data, we should view Lemoine's claims through a highly critical lens.
It's far more likely that machine self-awareness remains a thing of the distant future, and that what we have for now is a transcript that was altered and rearranged to make it more readable.