Twitter has announced the results of an open competition to find algorithmic bias in its photo-cropping software. After user experiments last year suggested that the automatic cropping favored white faces over Black faces, the company disabled the feature in March. It then launched an algorithmic bug bounty to analyze the problem in more depth, and the competition's entries confirmed those earlier findings.

Rumman Chowdhury, director of Twitter's META team, praised the entrants for demonstrating the real-life consequences of algorithmic bias when she presented the results at the DEF CON 29 conference. Chowdhury said that examining bias in models is not only about the experimental [...], but also about how it shapes the way we think in society. She described a dynamic of life imitating art imitating life: we create beauty filters because we believe that is what beauty looks like, those filters end up training our models, and the result is unrealistic ideas about what it means for a person to be attractive.

The competition's first-place entry, which claimed the top prize of $3,500, came from Bogdan Kulynych, a graduate student at EPFL in Switzerland. Kulynych used an AI program called StyleGAN2 to generate a large number of realistic faces, systematically varying skin tone, facial features, and slimness. He then fed these variants into Twitter's photo-cropping algorithm to see which ones it preferred. As Kulynych notes in his summary, these algorithmic biases amplify biases in society, literally cropping out people who don't meet the algorithm's preferences for body weight, age, and skin color.

Such biases are more widespread than you might realize. Another entrant, Vincenzo di Cicco, won special mention for his innovative approach: he showed that the cropping algorithm also preferred emoji with lighter skin tones over those with darker skin tones.
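Kulynych's probing strategy can be sketched in miniature. The snippet below is a toy illustration, not Twitter's actual model or Kulynych's code: `saliency_score` is a hypothetical stand-in (mean brightness) for the cropping model's saliency output, and the uniform color patches stand in for StyleGAN2-generated face variants. The point is the experimental design: score controlled variants and check whether the ranking tracks an attribute like skin tone.

```python
import numpy as np

def saliency_score(image: np.ndarray) -> float:
    # Hypothetical stand-in for the cropping model's saliency output.
    # Mean brightness is used purely for illustration; it makes this
    # toy scorer systematically favor lighter images.
    return float(image.mean())

def rank_variants(variants: dict) -> list:
    # Score every variant and sort from most to least preferred.
    scores = {name: saliency_score(img) for name, img in variants.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Synthetic stand-in images: uniform patches whose brightness plays the
# role of skin tone (a real probe would use generated face variants
# that differ only in the attribute under test).
variants = {
    "lighter": np.full((64, 64, 3), 200, dtype=np.uint8),
    "medium":  np.full((64, 64, 3), 128, dtype=np.uint8),
    "darker":  np.full((64, 64, 3), 60,  dtype=np.uint8),
}

ranking = rank_variants(variants)
# If the ordering tracks tone alone, that is evidence of bias in the scorer.
print([name for name, _ in ranking])  # → ['lighter', 'medium', 'darker']
```

Because the variants differ in only one attribute, any consistent preference ordering can be attributed to that attribute rather than to incidental image content.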
Roya Pakzad, founder of the tech advocacy organization Taraaz, placed third with work showing that these biases extend to written features as well: her analysis demonstrated that the algorithm regularly cropped images to emphasize English text.

While the results of Twitter's bias competition may seem discouraging, confirming how pervasive societal bias in algorithms can be, they also show how tech companies can address these problems by opening their systems to outside scrutiny. Chowdhury noted that entrants in a competition like this can dive deeply into a particular type of harm or bias, something corporate teams rarely have the luxury of doing.

Twitter's openness stands in contrast to the responses of other tech companies confronted with similar problems. When MIT researcher Joy Buolamwini uncovered racial and gender biases in Amazon's facial recognition algorithms, for example, the company mounted a substantial campaign to discredit her research before eventually placing a temporary ban on law enforcement use of those algorithms.

Patrick Hall, an AI researcher who served as a judge in Twitter's competition, said that such biases exist in all AI systems and that companies must work proactively to find them. AI and machine learning are the Wild West, Hall said, no matter how skilled your data science team is: if you aren't finding your bugs, and bug bounties aren't finding your bugs, then who is? Because you do have bugs.