Facebook claims it uses AI to identify and remove posts containing hate speech and violence, but the technology doesn't really work, report says

Facebook's artificial intelligence removes less than 5% of the hate speech viewed on its platform
The Wall Street Journal has released a new report that reveals flaws in the platform’s strategy to remove dangerous content.

Frances Haugen, a Facebook whistleblower, said the company relies dangerously on AI and algorithms.


Facebook claims it uses artificial intelligence to identify and remove posts containing hate speech and violence, but internal documents reviewed by The Wall Street Journal show the technology isn't working well.

According to the Journal, senior Facebook engineers estimated that the company's automated systems removed posts that generated just 2% of the hate-speech views on the platform. A separate group of Facebook employees reached a similar conclusion, estimating that the systems removed posts accounting for only 3% to 5% of hate-speech views, along with 0.6% of content that violated the company's rules against violence and incitement.

The Journal's Sunday report is the latest chapter in its "Facebook Files" series, which found that the company ignores its impact on everything from the mental health and safety of young girls who use Instagram to misinformation, human trafficking, and gang violence. The company has called the reports "mischaracterizations."

According to the Journal, Facebook CEO Mark Zuckerberg said he believed Facebook's AI would be able to remove "the vast majority" of problematic content before 2020. Facebook maintains that its AI takes down most hate speech and violent content before users ever see it; in a February report, the company put that detection rate at 97%.

The Journal reported that some groups, including academics and civil rights organizations, remain skeptical of Facebook's statistics because the company's numbers don't line up with external studies.

Rashad Robinson, president of Color of Change, told the Journal that Facebook never shows its work: "We ask: What's the numerator? What's the denominator? How did you get that number?"

Guy Rosen, Facebook's head of integrity, told the Journal that while the documents it reviewed were not current, the information in them had shaped Facebook's decisions about AI-driven content moderation. Rosen said it's more important to look at how the prevalence of hate speech on Facebook is declining overall.
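One way to read the dispute over those figures: the 97% detection rate and the 3% to 5% removal estimates may simply use different denominators, which is the numerator-and-denominator question Robinson raises. Here is a minimal sketch with entirely hypothetical numbers (none of them Facebook's), showing how a 97% proactive detection rate, measured against content Facebook removed, can coexist with a roughly 4% removal share, measured against all hate speech on the platform:

```python
# Hypothetical, illustrative numbers only -- not Facebook's internal data.
hate_posts_on_platform = 100_000  # assumed: all hate-speech posts that appeared
posts_removed = 4_000             # assumed: hate-speech posts taken down by moderation
removed_found_by_ai = 3_880       # assumed: removed posts AI flagged before any user report

# Facebook's headline metric: of the content it removed, how much did AI catch first?
proactive_detection_rate = removed_found_by_ai / posts_removed

# The internal documents' metric: of all hate speech, how much did removals eliminate?
removal_share = posts_removed / hate_posts_on_platform

print(f"Proactive detection rate: {proactive_detection_rate:.0%}")  # 97%
print(f"Share of hate speech removed: {removal_share:.0%}")         # 4%
```

Under these assumed numbers, both statistics are arithmetically true at once, which is why critics argue the headline detection rate alone doesn't show how much hate speech actually comes down.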

Facebook did not immediately respond to Insider's request for comment.

Haugen, a former Facebook employee, testified before Congress last Wednesday about how heavily Facebook relies on AI and algorithms. Haugen said that because Facebook's algorithms choose which content to show users based on engagement, the posts the platform pushes hardest are often angry, divisive, and sensational ones laced with misinformation.

Haugen said software should be "human-scaled," so that humans have conversations together rather than computers facilitating whom they get to hear from.

Facebook's algorithms sometimes can't discern what constitutes hate speech or violence, which can leave dangerous videos and posts on the platform for too long. From October to December 2020, Facebook removed almost 6.7 million pieces of organized-hate content from its platforms. Some of the removed posts related to organ selling, pornography, and gun violence, the Journal reported.

Still, its systems can miss some content, such as violent videos and recruitment posts shared by people involved in gang violence, human trafficking, and drug cartels.