Facebook has apologized for putting an offensive label on a video featuring Black men. According to The New York Times, users who recently watched a Daily Mail video featuring Black men saw a prompt asking if they would like to "[k]eep seeing videos regarding Primates." In a statement to The New York Times, the social network apologized for the "unacceptable mistake." It has also disabled the recommendation feature responsible for the message while it investigates the cause to prevent similar errors from happening again.
Company spokesperson Dani Lever said in a statement that while Facebook has made improvements to its AI, the technology still isn't perfect and there is more progress to be made. "We apologize to those who might have seen these offensive suggestions," she added.
Artificial intelligence has well-documented problems with gender and racial bias. Facial recognition technology is still far from perfect and can misidentify women and people of color. Two Black men were wrongfully arrested in Detroit last year because of false facial recognition matches. In 2015, Google Photos tagged photos of Black people as "gorillas"; a few years later, Wired discovered that Google's fix was simply to remove the word "gorilla" from search results and image tags.
In an attempt to address the problem, the social network shared with the AI community a dataset it had created a few months ago. It includes over 40,000 videos featuring 3,000 paid actors who shared their age and gender, and Facebook even hired professionals to light the shoots and label the participants' skin tones so AI systems can learn what different people look like under varying lighting conditions. Still, this incident clearly shows that Facebook and the wider AI community have plenty of work left to do.