Facebook's misinformation and violence problems are reportedly worse in India

Frances Haugen, the Facebook whistleblower, has leaked material suggesting the extent of the company's problems with extremism in certain regions. According to documents Haugen provided to The New York Times and The Wall Street Journal, Facebook knew it fostered violence and misinformation in India. The social network apparently lacked the resources to address the spread of harmful material in the country and failed to respond adequately when tensions flared.
An early 2021 case study found that much of the dangerous content from groups such as Rashtriya Swayamsevak Sangh and Bajrang Dal wasn't flagged on Facebook or WhatsApp, owing to a lack of the technical know-how needed to spot content written in Bengali, Hindi and other languages. At the same time, Facebook declined to mark the RSS for removal due to "political sensitivities," and Bajrang Dal, which is linked to Prime Minister Modi's party, hadn't been taken down despite an internal Facebook call to remove its material. The company also maintained a white list of politicians exempt from fact-checking.

The leaked data also shows Facebook was still struggling to combat hate speech as recently as five months ago. Like an earlier US test, the research demonstrated how quickly Facebook's recommendation engine could steer users toward harmful content: a dummy account that followed Facebook's recommendations for three weeks was subjected to a "near constant bombardment" of divisive nationalism, misinformation and violence.

As with previous scoops, Facebook said the leaks didn't tell the whole story. Spokesperson Andy Stone argued the data was incomplete and didn't account for the third-party fact-checkers the company relies on heavily outside the US. He added that Facebook had invested significantly in hate speech detection technology for languages like Bengali and Hindi, and that it was continuing to improve that technology.

The company separately published a lengthier defense of its practices. It claimed to use an industry-leading process for reviewing and prioritizing countries at high risk of violence every six months, and said its teams considered long-term issues and history alongside current events and dependence on its apps. It added that it engaged with local communities and was constantly "refining" its policies.

The response didn't address some of the concerns, however. India is Facebook's largest single market, with 340 million users, yet 87 percent of the company's misinformation budget is devoted to the US. Even with third-party fact-checkers at work, that imbalance suggests India isn't getting a proportionate share of attention. Facebook also didn't address claims that it was tip-toeing around certain people and groups beyond its previous statement that it enforced its policies without regard for position or association. In other words, it's not clear Facebook's misinformation and violence problems will improve in the near future.