Guy Rosen, Facebook's vice president for integrity, wrote in a blog post Sunday that hate speech on the platform had fallen by 50% over the last three years. He also rejected as false the narrative that the technology the company uses to combat hate speech is inadequate and that it deliberately misrepresents its progress.
Rosen said the company does not want hate on its platform, and that the documents show integrity work is a long-term process. He added that Facebook's teams continually work to improve its systems, identify problems, and develop solutions.
The post appeared to be a response to an article in Sunday's Wall Street Journal, which reported that Facebook employees responsible for keeping offending content off the platform do not believe the company can reliably screen it.
According to internal documents reviewed by the WSJ, Facebook cut the time human reviewers spent on hate speech complaints two years ago and made other adjustments that reduced the number of complaints. The paper said this created the impression that Facebook's artificial intelligence was more effective at enforcing company rules than it actually was.
The WSJ also reported that a team of Facebook employees found the company's automated systems were removing posts that generated only 3% to 5% of the views of hate speech on the platform, and less than 1% of the content that violated its rules against violence and incitement.
Rosen said that focusing only on content removals was the wrong way to look at how the company fights hate speech, because Facebook needs to be confident that something is hate speech before it removes it.
He said the company believes the more important measure is the prevalence of hate speech, meaning how much of it people actually see on the platform, and how Facebook reduces that figure using its various tools. Rosen said that for every 10,000 views of content on Facebook, five were views of hate speech. He explained that prevalence captures the violating content people see because the company missed it, calling it the most objective way to assess progress because it provides the most complete picture.
However, internal documents obtained by the WSJ showed that some significant content evaded Facebook's detection, including videos of car crashes showing people with severe injuries, as well as threats against trans children.
The WSJ's series of reports on Facebook was based on documents provided by whistleblower Frances Haugen, who testified before Congress about the company's awareness of the potential negative effects its Instagram platform could have on teenagers. Facebook has disputed the reporting based on its internal documents.