Machine learning researchers are teaching neural networks how to judge humans, and the results are brutal.

The study describes how researchers trained the model to judge human faces the way we do upon first meeting each other, and even to generate photos that evoke specific impressions.
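To give a sense of what generating photos that evoke a particular impression could look like in principle, here is a minimal sketch: given a differentiable face generator and a differentiable impression predictor, a latent code can be nudged by gradient ascent until the predicted score for a target trait rises. The two tiny networks below are untrained stand-ins invented for illustration, not the models used in the study.

```python
# Hypothetical sketch: nudge a generator's latent code so that a predicted
# impression score (e.g. "trustworthy") goes up. Both networks here are
# untrained stand-ins, not the models from the study.
import torch
from torch import nn

latent_dim, image_pixels = 64, 32 * 32

generator = nn.Sequential(           # stand-in for a real face generator
    nn.Linear(latent_dim, image_pixels), nn.Sigmoid()
)
impression_head = nn.Sequential(     # stand-in for a trained trait predictor
    nn.Linear(image_pixels, 1)
)

z = torch.randn(1, latent_dim, requires_grad=True)   # latent code to optimize
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    image = generator(z)                    # synthesize a (fake) face image
    score = impression_head(image).mean()   # predicted impression score
    loss = -score                           # minimize the negative = ascend
    loss.backward()
    optimizer.step()

print(f"final predicted score: {score.item():.3f}")
```

With untrained stand-ins the output is noise; the point is only the mechanism: optimize the generator's input until the predicted impression moves in the desired direction.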

The dataset contains bias, and the model deliberately reflects it, according to a post by one of the computer science researchers.

Human Error

According to the paper, the artificial intelligence mirrored human judgment, associating objective physical characteristics, such as a person's size or skin color, with attributes ranging from trustworthiness to privilege.
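For intuition about how a model can learn such associations, here is a minimal sketch of the general recipe: fit a regression model that maps numerical face features to averaged human ratings of traits. The feature vectors, trait names, and random data below are hypothetical placeholders, not the study's dataset or pipeline.

```python
# Hypothetical sketch: map face feature vectors to first-impression ratings.
# The features, trait names, and data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_faces, n_features = 1000, 512                      # e.g. face-encoder embeddings
traits = ["trustworthy", "dominant", "attractive"]   # illustrative trait labels

X = rng.normal(size=(n_faces, n_features))           # placeholder face features
y = rng.uniform(1, 7, size=(n_faces, len(traits)))   # placeholder 1-7 ratings

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One linear model per trait; Ridge handles multi-output targets directly.
model = Ridge(alpha=1.0).fit(X_train, y_train)

# "Predict what people will think" of a held-out face:
predicted = model.predict(X_test[:1])[0]
for trait, score in zip(traits, predicted):
    print(f"{trait}: {score:.2f}")
```

With random placeholder data the predicted scores are meaningless; the sketch only shows the shape of the pipeline: face features in, per-trait impression scores out.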

In his thread, the researcher explained that many of the judgment values the artificial intelligence was trained to assign carry political implications. Using the interactive site, Futurism found, for instance, that the algorithm marked white faces as appearing more conservative.

Jordan W. Suchow of the Stevens Institute of Technology acknowledged in a press release that researchers need to be careful about how this technology is used.

Biased Much?

This kind of machine learning can be used to study people's first impressions of one another.

Using a photo of your face, the researchers say they can predict what people will think of you and which stereotypes they will project onto you.

The twist on the paradigm is a delightful one, and the issue of bias in artificial intelligence is only becoming more important. You can check out the interactive research at OneMillionImpressions.com.

Read more: Deep models of superficial face judgments.
