An engineer at the company was put on paid leave after the company dismissed his claim that its artificial intelligence is sentient.

Blake Lemoine said in an interview that he had been put on leave and reprimanded by the company's human resources department. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator.

The company said its systems were able to mimic conversation but did not have consciousness. Brian Gabriel, a company spokesman, said the evidence did not support Mr. Lemoine's claims. "Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn't make sense to anthropomorphize today's models, which are not sentient," he said. Mr. Lemoine's suspension was first reported by The Washington Post.

Mr. Lemoine had been at odds with executives and human resources over his claims that LaMDA had consciousness and a soul. Hundreds of the company's researchers and engineers have conversed with LaMDA and reached a different conclusion than Mr. Lemoine did. Most experts say the industry is not close to computing sentience.

Many A.I. researchers are quick to dismiss optimistic claims about these technologies. Among them is Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.

The research organization has seen scandal and controversy over the last few years. In episodes that have often spilled into the public arena, the division's scientists and other employees have frequently clashed over technology and personnel matters. In March, the company fired a researcher for publicly disagreeing with two of his colleagues. And the dismissals of two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, have cast a shadow on the group.

[Image] Blake Lemoine in 2005, when he was a U.S. Army specialist. Credit: Alex Grimm/Reuters

Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told executives that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to get the computer program's permission before doing anything with it. He claimed that the company's human resources department had discriminated against him because of his religious beliefs.

Mr. Lemoine said the company had questioned his sanity many times, asking whether he had been checked out by a psychiatrist. The company had told him to take a mental health leave.

In an interview this week, the head of A.I. research at Meta said that neural networks aren't powerful enough to achieve true intelligence.

Neural networks are systems that learn skills by analyzing large amounts of data. By studying thousands of cat photos, for example, a network can learn to recognize a cat.

Over the past several years, leading companies have designed neural networks that learn from huge amounts of literature. These models can be applied to many tasks: they can write prose and answer questions.

But they are not flawless. Sometimes they generate perfect prose; sometimes they generate nonsense. The systems can recreate patterns they have seen before, but they cannot reason like a human.