Facebook Pushes Back Against Report That Claims Its AI Sucks at Detecting Hate Speech

Guy Rosen, Facebook's vice president of integrity, pushed back in a blog post against claims that the company fails to moderate harmful content, asserting that the prevalence of hate speech on the platform has dropped by almost half since July 2020. The post was apparently a response to a series of damning Wall Street Journal reports and to testimony by whistleblower Frances Haugen, who described the many ways she alleges the social media company is poisoning society.

Rosen said that data taken from leaked documents is being used to build a narrative that Facebook's technology for fighting hate speech is inadequate and that the company deliberately misrepresents its progress, a characterization he called false.

He said that Facebook does not want hate on its platform, and neither do its users or advertisers, and that the company is transparent about its efforts to remove it. The leaked documents, he argued, show that integrity work is a multi-year effort, with teams working continually to improve Facebook's systems, identify problems, and build solutions.

He said that Facebook's success in combating hate speech cannot be judged solely by how much content it removes; the declining visibility of such content is a more important measure. Facebook tracks this internally as the prevalence of hate speech, which Rosen said has fallen by almost 50% over the last three quarters to 0.05% of content viewed, or about five views out of every 10,000.

He explained that Facebook often takes a cautious approach before removing content outright. When the company suspects, but is not yet certain, that a post, page, or entire group violates its rules, its internal systems may instead limit the content's distribution or remove it from users' recommendations.


Rosen said that prevalence tells Facebook what violating content people actually see because the company missed it, and that it is the most objective way to evaluate progress because it provides the most complete picture.

Rosen's post also followed the Journal's most recent Facebook exposé, published Sunday. According to the Journal, Facebook employees expressed concern that the company was not reliably screening out offensive content. The Journal reported that Facebook cut the time its human reviewers spent on hate-speech complaints and reduced the overall number of complaints, shifting instead toward AI enforcement of the platform's rules. According to employees, this served to inflate the apparent success of Facebook's moderation technology in its public statistics.


A Journal report from March, citing an internal research team, found that Facebook's automated systems removed only a small fraction of posts containing hate speech. The same systems flagged and removed just 0.6% of content that violated Facebook's policies against violence and incitement.

Haugen repeated these statistics in her testimony to a Senate subcommittee, arguing that while Rosen claims only a small percentage of users ever encounter offensive material, Facebook's algorithmic systems catch only a tiny fraction of that content. Haugen previously worked as a lead product manager on Facebook's civic misinformation team and later joined its threat intelligence team. As part of her whistleblowing efforts, she provided the Journal with a trove of internal documents that revealed the company's inner workings and the results of its internal research.


Facebook has strongly disputed the reports. Nick Clegg, the company's vice president of global affairs, called them deliberate mischaracterizations that rely on cherry-picked quotes from leaked material to present a skewed view of the broader facts.