Several Facebook users who recently watched a Daily Mail video featuring Black men reported receiving a prompt asking whether they would like to see more videos about "primates".
The caption appeared in bold text underneath the video and read "Keep watching videos about primates?" Users could respond by clicking "Yes" or "Dismiss". The prompt is part of an AI-powered Facebook feature that attempts to infer users' interests in order to deliver relevant content to their News Feeds.
The video showed white men calling the police on Black men and the encounters that followed. It had nothing to do with primates. Facebook apologized to the New York Times, stating that it had made an "unacceptable mistake" and was investigating ways to prevent it from happening again.
Darci Groves, a former Facebook content design manager, posted the label to a product feedback forum and shared it on Twitter after a friend spotted the prompt and sent her a screenshot.
The offensive label is especially unacceptable given the enormous trove of user-uploaded photos at Facebook's disposal, imagery the company can draw on to train and validate its recognition tools. AI is imperfect, but the company is responsible for training its algorithms properly, and this error cannot be blamed on a lack of resources.
Facebook has done little to earn trust here: it has yet to offer a transparent plan for fixing its AI problems, on top of its mishandling of past racial justice matters within the company. Although it apologized, it offered no concrete steps beyond disabling the recommendation feature and promising to "prevent it from happening again".
This approach is particularly disappointing in light of Facebook's recent decision to block researchers from accessing tools and accounts that allowed them to examine user data and ad activity on the platform. Facebook claimed the move was required by a settlement it had reached with the Federal Trade Commission, a defense the FTC has directly refuted.
Given all this, it is difficult to trust that Facebook will respond to this AI gaffe quickly or effectively. If the company is serious about building and using AI tools in an inclusive way, it must clearly state how it plans to fix the problem.