There is good news and bad news. The good news: the chatbot almost certainly isn't sentient. The bad news: there are plenty of other problems with the technology.

The story centers on LaMDA, an artificial-intelligence chatbot built by Google. Blake Lemoine, an engineer at the company, claimed that LaMDA had achieved sentience.

Seven artificial intelligence experts dismissed Lemoine's theory that the chatbot he had worked with was a conscious being.

The experts told Insider that artificial intelligence doesn't have to be sentient to cause real harm.

It's well documented that artificial intelligence can replicate and amplify historical human discrimination.

Amazon shut down a recruitment tool it had developed because it was consistently discriminating against female applicants.

It can be difficult to see that these systems' predictions are often based on little more than the opinions of the people whose past decisions they learned from.

"Maybe it's fun to speculate on how'sentient' the auto- generation of historically correlated word strings appears, but that's a disingenuous exercise when, right now, algorithmic predictions are exclusion, stereotyping, and unfairly targeting individuals and communities based on data pulled from, say,

According to a recent paper by a professor, artificial intelligence not only shows bias against protected characteristics like race and gender, it also finds new ways to discriminate against people.

Which browser you use to submit a job application, for example, could be read differently by different systems.

There is no legal framework in place to stop artificial intelligence from discriminating in these ways.

Artificial intelligence can pick up patterns of past injustice in hiring, lending, or criminal justice, the professor says, and it can also sort people into new groups that aren't protected by the law.

These issues need to be solved urgently. Sentience is a bridge we can cross if and when we get close to it; let's address these problems first.

Laura Edelson, a computer science researcher at New York University, said artificial intelligence systems also give the people who use them a way to dodge responsibility for discrimination.

Machine learning systems can be used to make decisions that people don't want to take responsibility for themselves, she says; the blame can always be placed on the system.

More pressing issues around artificial intelligence are overshadowed by the hype surrounding it.

By derailing the work of world-class artificial intelligence ethics researchers, the sentience debate leaves little time or media attention for the growing harms that predictive systems are enabling.