Facebook has apologized for an error in which its AI applied a "primates" label to a video of Black men, calling it unacceptable and saying it was investigating to prevent it from happening again. According to a report from the New York Times, users who recently watched a June 27th video from the UK tabloid the Daily Mail featuring Black men were shown an automated prompt asking whether they would like to keep seeing videos about "Primates."
Facebook has since disabled the entire topic recommendation feature, a company spokesperson told The Verge in an email on Saturday.
The spokesperson called the mislabeling an unacceptable error and said the company was investigating the issue to prevent similar behavior from happening again. The spokesperson added that while Facebook has made improvements to its AI, the company knows it is not perfect and has more progress to make, and apologized to anyone who may have seen the offensive recommendations.
This is the latest example of artificial intelligence tools displaying racial or gender bias, with facial recognition tools shown to be particularly prone to misidentifying people of color. Google apologized in 2015 after its Photos app tagged photos of Black people as gorillas. Facebook has previously said it was studying whether its AI-powered algorithms, including Instagram's, were racially biased.
In April, the US Federal Trade Commission warned that AI tools exhibiting troubling racial or gender biases may violate consumer protection laws if they are used to make decisions about credit, housing, or employment. FTC privacy attorney Elisa Jillson wrote that companies should hold themselves accountable, or be ready for the FTC to do it for them.