Google's LaMDA system is built to make its AI more conversational.
Photo by Amelia Holowaty Krales / The Verge

According to the Washington Post, Google has placed one of its engineers, Blake Lemoine, on administrative leave for allegedly breaking its confidentiality policies after he became concerned that an artificial intelligence chatbot system had achieved sentience. Lemoine had been testing whether the LaMDA model generates discriminatory language or hate speech.

Lemoine's concerns reportedly grew out of convincing responses he saw the AI system generating about its rights and the ethics of robotics. After being placed on leave, he published a transcript of his conversations with the AI, which he says shows that it is sentient.

According to the Washington Post and The Guardian, Google says Lemoine's actions relating to his work on LaMDA violated its confidentiality policies. He reportedly invited a lawyer to represent the AI system and spoke to a representative from the House Judiciary Committee about what he claims were unethical activities at Google. In a Medium post published June 6th, the day he was placed on administrative leave, Lemoine said he had sought "a minimal amount of outside consultation to help guide me in my investigations."

Google announced LaMDA publicly at I/O last year, where it said it hoped the system would improve its conversational AI assistants and make for more natural interactions. The company already uses similar language model technology in products such as Gmail's Smart Compose.

In a statement given to the Washington Post, Google spokesperson Brian Gabriel said there is no evidence that LaMDA is sentient. A team including ethicists and technologists reviewed Lemoine's concerns and informed him that the evidence does not support his claims.

An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB

— Blake Lemoine (@cajundiscordian) June 11, 2022

Gabriel said that while some in the broader AI community are considering the long-term possibility of sentient or general AI, it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Gabriel added.

A linguistics professor interviewed by the Post agreed that it's incorrect to equate convincing written responses with sentience. We now have machines that can mindlessly generate words, she noted, but we haven't learned how to stop imagining a mind behind them.
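To make the mimicry point concrete, here is a minimal illustrative sketch. LaMDA itself is proprietary and not publicly available, so this example assumes the open-source Hugging Face transformers library and uses a small public model (gpt2) as a stand-in; the model name, prompt, and sampling parameters are illustrative choices, not anything Google's system uses.

```python
# Illustrative sketch only: LaMDA is proprietary, so a small public model
# (gpt2) stands in here to show how a language model continues a dialogue
# by producing statistically likely text learned from its training corpus.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Human: Do you ever feel lonely?\nAI:"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The continuation reads fluently because the model has seen millions of
# sentences of similar exchanges -- fluent mimicry, not evidence of a mind.
print(result[0]["generated_text"])
```

Output from a loop like this can sound strikingly human, which is exactly the kind of convincing written response the researchers above caution against mistaking for sentience.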

Timnit Gebru, a prominent AI ethicist whom Google fired in 2020, said the discussion over AI sentience risks derailing more important ethical conversations. Instead of discussing the harms of these companies, she tweeted, the whole weekend was spent discussing sentience. "Derailing mission accomplished."

Despite his concerns, Lemoine said he intends to continue working on AI in the future, writing that he plans to stay in the field whether or not Google keeps him on.


Update June 13th, 6:30AM: Added an additional statement from Google.