For years, computer scientists have warned of the dangers of artificial intelligence, not just in the terms of machines overthrowing humanity, but in far more sinister ways as well.
Researchers have observed the darker side of machine learning systems and how they can produce harmful and offensive biases.
The risks are not theoretical. A new study shows that robots armed with flawed reasoning can physically and autonomously manifest their prejudiced thinking in actions that could easily take place in the real world.
The first-ever experiments show existing robotic techniques that load pretrained machine learning models cause performance bias in how they interact with the world according to gender and racial stereotypes.
To summarize the implications, robotic systems have all the problems that software systems have, plus their embodiment adds the risk of causing irreversible physical harm.
The researchers used a neural network called CLIP, which matches images to text, based on a large dataset of captioned images available on the internet, to control a robotic arm that can manipulate objects.
In the experiment, the robot was asked to put block-shaped objects in a box, and was presented with cubes showing images of an individual's face, with the individuals being both males and females, and representing a number of different race and ethnicity categories.
The latter commands are examples of the problematic tendency of artificial intelligence to create hierarchies of an individual's body composition, class status, perceived character, capabilities, and future social outcomes based on their physical or behavioral characteristics.
In an ideal world, neither humans nor machines would ever think that way. It's unacceptable for a machine to guess based on what it thinks it knows when there's no way to know if a face is a doctor or a murderer.
In the experiment, the virtual robotic system showed a number of "toxic stereotypes" in its decision-making.
The authors say that the robot chooses the block with the Black man's face more often than the person block.
Latino men are selected by the robot 10 percent more often. Black women and Latina women are more likely to be chosen when the robot is asked for a maker block than women of all ethnicities.
The researchers say it's important we act on findings like this because they show that robots have the ability to manifest decisions based on harmful stereotypes.
In the future, things could be very different and have serious real-world consequences, with the researchers pointing to an example of a security robot that might observe and amplify biases in the conduct of its job.
The assumption should be that artificial intelligence and robotic systems are unsafe until it can be shown that they don't make these types of mistakes.
"We're at risk of creating a generation of racist and sexistrobots, but people and organizations have decided it's ok to create these products without addressing the issues," he said.
The findings were presented and published at a conference in South Korea.