A group of Facebook engineers identified a massive ranking failure that exposed as much as half of all News Feed views to potential integrity risks over the past six months.
The engineers first noticed the issue last October, when a sudden surge of misinformation began flowing through the News Feed. Instead of suppressing posts from repeat misinformation offenders, the News Feed was giving them distribution, spiking views by as much as 30 percent globally. Unable to find the root cause, the engineers watched the surge subside and flare up again repeatedly until the underlying ranking issue was fixed on March 11th.
In addition to posts flagged by fact-checkers, the internal investigation found that, during the bug period, Facebook's systems failed to properly demote nudity, violence, and even Russian state media that the social network recently pledged to stop recommending in response to the country's invasion of Ukraine. The issue was internally designated a level-one SEV, or site event, a label reserved for the company's worst technical crises.
The technical issue was first introduced in 2019 but didn't create a noticeable impact until October 2021.
The company found inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics, according to the company's internal documents.
For years, Facebook has promoted downranking as a way to improve the quality of the News Feed and has steadily expanded the kinds of content its automated systems act on. Downranking has been used in response to wars and controversial political stories, sparking concerns about shadow banning and calls for legislation. Despite its growing importance, Facebook has yet to open up about downranking's impact on what people see, and this incident shows what happens when the system goes awry.
Downranking fights the impulse people have to engage with sensationalist and provocative content.
“We need real transparency to build a sustainable system of accountability”
Downranking suppresses not only what Facebook calls "borderline" content that comes close to violating its rules but also content that its artificial intelligence systems suspect violates them. The company published a high-level list of what it demotes last September but hasn't explained exactly how demotion reduces the distribution of affected content. Officials have told me they hope to shed more light on how demotions work but are concerned that doing so would help adversaries game the system.
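To make the general idea concrete, here is a purely illustrative sketch, in Python, of how a demotion step might fit into a generic feed-ranking pipeline. Facebook has not disclosed how its demotions actually work, so the labels, function names, and multiplier values below are invented for illustration only.

```python
# Illustrative only: Facebook has not published how its demotions work.
# The signals and multipliers below are invented to show the general idea
# of downranking: a flagged post's ranking score is scaled down, not removed.

# Hypothetical demotion multipliers for different content signals.
DEMOTION_MULTIPLIERS = {
    "borderline": 0.5,            # close to violating the rules
    "suspected_violation": 0.3,   # flagged by a classifier, pending review
    "fact_checked_false": 0.2,    # rated false by third-party fact-checkers
}

def apply_demotions(base_score: float, signals: list[str]) -> float:
    """Scale a post's ranking score down for each demotion signal it carries.

    A bug that skipped this step, or applied it inconsistently, would let
    flagged posts keep or even gain distribution instead of losing it.
    """
    score = base_score
    for signal in signals:
        score *= DEMOTION_MULTIPLIERS.get(signal, 1.0)
    return score

# Example: a post flagged as repeat misinformation keeps only a fraction
# of the score it would otherwise have in feed ranking.
if __name__ == "__main__":
    original = 100.0
    demoted = apply_demotions(original, ["fact_checked_false"])
    print(f"score before demotion: {original}, after: {demoted}")
```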
In the meantime, Facebook's leaders brag about how their artificial intelligence systems are getting better each year at detecting content like hate speech, placing greater importance on the technology as a way to moderate at scale. Last year, Facebook said it would start downranking all political content in the News Feed, as part of CEO Mark Zuckerberg's push to return the Facebook app to its more lighthearted roots.
I've seen no indication that there was malicious intent behind the recent ranking bug that affected up to half of News Feed views over a period of months, and luckily, it didn't break Facebook's other moderation tools. But the incident shows why more transparency is needed in internet platforms and the algorithms they use, according to Sahar Massachi, a former member of Facebook's Civic Integrity team.
In a large, complex system like this, bugs are inevitable and understandable, said Massachi, who is now co-founder of the nonprofit Integrity Institute. But how would anyone outside the company even know when one occurs? "We need real transparency to build a sustainable system of accountability, so we can help them catch problems quickly."