Facebook’s leaked tier list: how the company decides which countries need protection

At the end of 2019, the Facebook employees responsible for protecting the network from harm gathered at an event called the Civic Summit to discuss the year ahead. Leaders announced where they would devote resources to strengthen protections for upcoming global elections, and where they would not. Following what has become standard practice at the company, they sorted the world's countries into tiers.

Brazil, India, and the United States were placed in tier zero, the highest priority. Facebook set up war rooms to monitor the network continuously, built dashboards to track network activity, and alerted local election officials to any problems.

Germany, Iran, Israel, and Italy were placed in tier one. They would receive similar resources, minus some resources for enforcing Facebook's rules and for alerts outside the period immediately surrounding the election.

Twenty-two more countries were added to tier two. They would have to go without the war rooms, which Facebook also calls enhanced operations centers.

The rest of the world was placed in tier three. Facebook would review election-related material only if it was escalated by content moderators; otherwise, it would not intervene.

The documents show how widely content moderation resources vary from country to country.

The system was described in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Frances Haugen's legal counsel. The redacted versions were obtained by a consortium of news organizations, including Platformer and The Verge. Some of the documents were the basis of earlier reporting in The Wall Street Journal.

The files include a wide variety of documents describing the company's internal research, its efforts to promote user safety and well-being, and its struggles to remain relevant to younger audiences. They highlight the degree to which Facebook employees are aware of gaps in their knowledge about issues of public interest, and their efforts to learn more.

If one theme stands out, it is the wide variation in content moderation resources granted to different countries, based on criteria that are not public or subject to external review. In its home country, the United States, Facebook offers enhanced services designed to protect public discourse: translating the service and its community standards into official languages, building AI classifiers to detect hate speech and misinformation, and staffing teams to respond quickly to hoaxes.

In countries like Ethiopia, little or none of that may be available. Community standards may not be translated into all of a country's official languages. Machine learning classifiers to detect hate speech and other harms are never built. Fact-checking partnerships do not exist. War rooms never open.

It is not unusual for a company to allocate resources differently across markets. But given Facebook's outsized role in civic discourse (in some of these countries, it effectively is the internet), these disparities should be cause for concern.

For years, activists and lawmakers around the world have criticized the inequalities in the company's content moderation. The Facebook Papers offer a detailed look at where Facebook provides a higher standard of care, and where it does not.

The disparities are everywhere.

As of last year, Facebook did not have misinformation classifiers in Myanmar, Pakistan, or Ethiopia, countries it had designated as being at the greatest risk.

It also lacked hate speech classifiers in Ethiopia, which is in the midst of a bloody civil war.

And as of December 2020, an effort to place language specialists in at-risk countries had succeeded in only six of the ten tier one countries, and in none of the tier two countries.

Miranda Sissons, Facebook's director of human rights policy, said this approach reflects best practices recommended in the United Nations' Guiding Principles on Business and Human Rights. Those principles call on businesses to consider the human rights impact of their work and to mitigate harms based on their severity and scale.

Sissons, a human rights activist and diplomat, joined Facebook in 2019, the same year the company began developing its approach to "at-risk countries": places where social cohesion is in decline and where Facebook's powers of amplification risk inciting violence.


The threat is real. Other documents in the Facebook Papers show how a new account created in India that year was quickly exposed to hate speech and misinformation simply by following Facebook's recommendations; that research was detailed by The New York Times on Saturday. Even in the United States, where Facebook invests the most in content moderation, documents show how employees were overwhelmed by the misinformation that flooded the platform in the days before the attack on the US Capitol on January 6, 2021. Those records were described by The Washington Post and others over the weekend.

The documents also show that Facebook is capable of sophisticated intelligence operations when it chooses to be. One unpublished case study on adversarial harmful networks in India focused on the Rashtriya Swayamsevak Sangh, or RSS, a nationalist paramilitary organization that used pages and groups to spread misleading, inflammatory, and anti-Muslim content.

The investigation found that a single user linked to the RSS had generated over 30 million views. But it also found that Facebook was largely blind to the problem: its lack of Hindi and Bengali classifiers meant that much of this content was never flagged or acted upon.

One possible solution, punishing RSS accounts, was complicated by the group's ties to India's nationalist government. The authors wrote that they had not yet put forward a nomination to designate the group, given political sensitivities.

Facebook is the largest social network and likely spends more on integrity efforts than its peers. Sissons said that, ideally, the company's community standards would be translated, and its AI content moderation capabilities built, for every country in which Facebook operates. The United Nations supports only six official languages, she noted, while Facebook has native speakers moderating posts in more than 70.

Even in countries where Facebook's tiers limit its investments, Sissons said, the company's systems continuously scan the globe for political instability and other risks of escalating violence so that it can adapt. Training hate speech classifiers takes many months and is expensive; other interventions can be deployed more quickly.

The Verge reviewed documents that show how cost pressures may have affected the company's approach to monitoring the platform.

These are difficult trade-offs.

In a May 2019 note titled "Maximizing Human Review," the company said it would begin making it harder for users to report hate speech, in hopes of reducing the burden on its content moderators. It also said it would close reports without resolution when the reported problem was not severe or when few people had seen it.

According to the author, 75 percent of hate speech reports did not violate Facebook's community standards, and reviewers' time was better spent proactively looking for worse violations.

Expense was also a concern. The author noted that the company was clearly ahead of its budget for [third-party content moderator] review because enforcement work had been front-loaded. To land on budget, it would need to scale back capacity (via efficiency improvements and natural attrition), which would mean reviewer capacity was significantly reduced by the end of the year and would force trade-offs.

Even in countries the tier system identifies as high-risk, employees have found their resources stretched.


One team wrote that it also paid significant costs for crisis response, since support for at-risk countries comes at a premium, and that it had recently been pulled in to respond to protests in Pakistan, violence in Bangladesh, and India's election.

According to the note, once a country has been designated a priority, it takes roughly a year to build hate speech classifiers and improve enforcement there. But not every country is prioritized, and the trade-offs can be very difficult.

The note says the company should prioritize building classifiers for countries with ongoing violence rather than temporary violence; in the latter case, it should rely instead on rapid-response tools.


It is clear from hundreds of documents that many Facebook employees have grappled earnestly with abuses of the platform and have built a range of sophisticated systems to limit them. They also face external pressures beyond their control: the rise of right-wing authoritarianism in the United States and India did not begin on Facebook, and the power of figures like Narendra Modi and Donald Trump to promote violence or instability should not be discounted.

Still, it is hard not to marvel at Facebook's size and complexity, even for the people responsible for operating it; at the opaque nature of systems such as its at-risk countries work stream; and at the lack of accountability in cases like Myanmar, where everything spiraled out of control.

Some of the most compelling documents in the Facebook Papers are also the most mundane: cases where an employee wonders aloud what would happen if Facebook changed this input, or reduced that harm at the expense of a growth metric. At other times, they struggle to explain why the algorithm shows more civic content to men than to women, or why a bug allowed a violence-inciting group in Sri Lanka to automatically add half a million people, without their consent, over a three-day period.

Above all, there is a pervasive sense that, at some fundamental level, no one quite knows what is going on.

Comment threads pile up in the documents as everyone scratches their heads. An employee quits and leaks the documents to the press. The communications team surveys the findings and produces a somber blog post affirming that there is still much work to do.

Congress growls. Facebook changes its name. And the world's countries await their fate, neatly organized into tiers.