Sentient artificial intelligence is not yet a reality. Racist and sexist artificial intelligence? That is already a reality.

Blake Lemoine, a Google engineer and mystic Christian priest, believes that the company's large language model LaMDA has become sentient, complete with a soul. While that claim has been widely rejected in the artificial intelligence community, Lemoine has also explained how his work on LaMDA began.

Testing the model for harmful biases in relation to sexual orientation, gender identity, ethnicity, and religion was the real-world problem that started his work with the artificial intelligence.

Lemoine doesn't believe an impartial system exists. The question, he said, was whether or not LaMDA had any harmful biases, and he found plenty.

As far as he could tell, the Google team had done a good job repairing those biases. When asked about LaMDA's tendencies, Lemoine said he wouldn't use the term "racism"; the real question, he claims, is whether the stereotypes it uses would be endorsed by the people it describes.

Lemoine's hesitancy to label LaMDA's "bugs" as racist or sexist highlights an ongoing battle within the artificial intelligence community. Many feel it falls on men in tech like Lemoine to keep calling attention to the biases in artificial intelligence, since Black women who speak out about them are too often ignored or pushed out.

Timnit Gebru, for her part, has said that she doesn't want to talk about sentient robots when there are humans harming other humans.

Artificial intelligence has a long history of perpetuating harmful stereotypes, and Google is neither new to nor unaware of these issues.

In 2015, Google Photos tagged 80 photos of Jacky Alciné, a Black software developer, and his friend into an album labeled "gorillas." The app relied on a neural network that analyzed enormous sets of data to categorize subjects like people and gorillas.

It was the engineers' responsibility to make sure the data feeding the photo system was accurate, and to correct the problem when it failed. Instead, The New York Times reported, Google simply eliminated "gorilla" as a photo category rather than retraining its neural network.

Companies like Microsoft, IBM, and Amazon face the same biases. According to the Times, the artificial intelligence powering their facial recognition technology encounters far higher error rates when identifying the sex of women with darker skin tones than when identifying the sex of people with lighter skin.
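To make that disparity concrete, here is a minimal sketch of how audits of facial analysis systems surface it: rather than reporting a single overall accuracy, they compare error rates per subgroup. The predictions and labels below are invented for illustration, not real benchmark data.

```python
def error_rate(predictions, labels):
    """Fraction of examples the classifier got wrong."""
    wrong = sum(p != t for p, t in zip(predictions, labels))
    return wrong / len(labels)

# Hypothetical gender predictions for two subgroups of face images.
darker_skin_preds  = ["M", "M", "F", "M", "F", "M", "F", "M"]
darker_skin_truth  = ["F", "M", "F", "F", "F", "M", "M", "F"]
lighter_skin_preds = ["F", "M", "F", "M", "F", "M", "F", "M"]
lighter_skin_truth = ["F", "M", "F", "M", "F", "M", "M", "M"]

# A single headline accuracy would hide the gap between the groups.
print(error_rate(darker_skin_preds, darker_skin_truth))    # 0.5
print(error_rate(lighter_skin_preds, lighter_skin_truth))  # 0.125
```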

In 2020, Gebru published a paper with six other researchers criticizing large language models like LaMDA and their tendency to parrot the words in the data they learn from: if the data contains biases, the model will repeat those biases when it generates language. The paper also criticized the push to train language models on ever-larger datasets, which lets the artificial intelligence learn to mimic human language even more convincingly, the very trap Lemoine fell into.
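A toy example can make that "parroting" mechanism concrete. The sketch below is not how LaMDA works; it simply counts word co-occurrences in a tiny invented corpus, a stand-in for the statistical patterns a real language model absorbs from vastly larger text. The skew in the training text comes straight back out.

```python
from collections import Counter

# A toy stand-in for a statistical language model: it can only echo
# the co-occurrence patterns in its training text, so any stereotype
# in the corpus is reproduced in its output. The corpus is invented.
corpus = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the doctor said she was busy",
    "the nurse said she would help",
    "the nurse said she was kind",
    "the nurse said he was kind",
]

def pronoun_counts(word):
    """Count which pronouns co-occur with `word` in the training text."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if word in words:
            counts.update(w for w in words if w in ("he", "she"))
    return counts

# The skew in the data becomes the skew in anything built on it:
print(pronoun_counts("doctor"))  # Counter({'he': 2, 'she': 1})
print(pronoun_counts("nurse"))   # Counter({'she': 2, 'he': 1})
```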

Gebru says she was fired by Google in December 2020 after a dispute over the paper. A few months later, the company also fired Dr. Margaret Mitchell, a co-author of the paper and founder of Google's ethical artificial intelligence team.

Despite a supposed commitment to "responsible AI," Google still faces ethical AI problems that leave no time for sentient AI claims.

After the drama, and an admitted hit to its reputation, Google promised to double its research staff studying responsible AI. According to Recode, CEO Sundar Pichai pledged his support to fund more ethical AI projects. Yet some of the few people remaining on the ethical artificial intelligence team feel the company may no longer listen to their ideas.

A year later, two more prominent members of the ethical artificial intelligence team, Alex Hanna and Dylan Baker, left Google to work for Gebru at her Distributed AI Research (DAIR) Institute. With the already small team grown even smaller, the job of researching LaMDA's biases fell to Lemoine.

As more and more societal functions turn to artificial intelligence, it's more important than ever to keep examining how these systems affect the people who depend on them. We can't afford to have our police systems, transportation methods, translation services, and more warped by the racism and sexism built into them. And when white men in technology choose to focus on questions like AI sentience rather than on these biases, the biases will continue to exist.

Giada Pistilli, an ethicist at Hugging Face, has said that there is a big gap between the current narrative of artificial intelligence and what it can actually do. The narrative provokes fear, amazement, and excitement all at once, but it is mostly based on lies designed to sell products and take advantage of the hype.