In June of this year, Google engineer Blake Lemoine went on record with The Washington Post to say that LaMDA, the company's experimental chatbot, had become sentient. At one point, the artificial intelligence had reportedly even retained a lawyer. Google ended up firing Lemoine.

In a new interview with The Guardian, Lemoine claims that he never intended to convince the public of LaMDA's sentience.

He now says that his intent was to raise awareness of advanced artificial intelligence technologies, regardless of how the public might feel about them.

He told the paper that he raised the issue as a concern about the degree to which power is being centralized in the hands of a few, and that the public is being cut out of the discussion about how a technology with a chance of affecting human history for the next century should be developed.

Whatever his many critics have done to shake Lemoine's conviction, the new revelation from the Guardian interview is that for several weeks before he went to the press, he urged the tech giant to run experimental sentience tests on the chatbot. The fired engineer says Google refused, maintaining that his claims were unfounded.

Lemoine told the Guardian that he felt he had no choice but to speak out and blow the whistle on a powerful industry with the ability to make sweeping changes to our everyday lives.

Part of the problem with asking for sentience tests is that none currently exist. Sentience remains a slippery concept, and neither philosophy nor science has a good idea of how to define it.

Michael Wooldridge, a professor of computer science at the University of Oxford, told the paper that he doesn't think LaMDA is sentient, but he agrees that the artificial intelligence industry has a bigger underlying problem: it lacks good ways to gauge how advanced these systems actually are.

"That is a legitimate concern at the moment, how to quantify what we have and know how advanced it is," he said.

Google, for its part, has argued that the technology is safe and will be used in a number of useful and necessary ways.

There are practical uses for these systems, which really do talk with us. But some researchers believe that, precisely because they are so easy to identify with, they may also be well suited to exploiting us.

Margaret Mitchell, a former artificial intelligence ethics researcher at Google, recently told The New York Times that she worries such machines could prey on humans, given their power to convince us.

No one really knows for certain either way, neither the people building these systems nor the public. And that, it seems, is Lemoine's larger point: sentience or no sentience, the broader landscape of artificial intelligence development deserves scrutiny as well.

"I'm trying to get more involved, more informed and more intentional public discourse about this topic so that the public can decide how artificial intelligence should be integrated into our lives," Lemoine told the Guardian.

There is one world in which he is correct about LaMDA being sentient, Lemoine argues, and one world in which he is wrong. Either way, he asks, does that change his public safety concerns?

Fair enough. It's difficult to disagree with many of these new statements, but it also seems like Lemoine is trying to rewrite a lot of what he has previously said, something he has been known to do.

Read more: Can artificial intelligence ever be sentient? [The Guardian]

More on LaMDA: The lawyer hired by the "sentient" AI has reportedly been scared off the case.