Facebook Whistleblower Testified That Company’s Algorithms Are Dangerous: Here’s Why

The Conversation is an online publication that covers the latest research. It has granted permission to reproduce the essay below.
Frances Haugen, a former Facebook product manager, testified before the U.S. Senate on Oct. 5, 2021, that the company's social media platforms harm children, stoke division and weaken democracy.

Haugen was the primary source for the Wall Street Journal's exposé on the company. She called Facebook's algorithms dangerous, said Facebook executives were aware of the threat but put profits before people, and called on Congress to regulate the company.

Social media platforms rely heavily on user behavior to decide what content you see. In particular, they watch for content that people respond to by liking, commenting on and sharing. Troll farms, organizations that spread provocative content, exploit this by copying high-engagement content and posting it as their own, which helps them reach a wide audience.

As a computer scientist who studies how large numbers of people interact using technology, I can see the logic behind these algorithms. I also see substantial pitfalls in how social media companies apply that logic in practice.

From lions in the savanna to Facebook likes

The wisdom of the crowds concept assumes that using signals from other people's actions, opinions and preferences as a guide leads to sound decisions. For example, collective predictions are normally more accurate than individual ones. Collective intelligence is used to predict financial markets, elections and even disease outbreaks.
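For intuition, here is a minimal sketch of that statistical idea, using made-up numbers rather than data from any study mentioned here: the average of many independent, noisy guesses usually lands closer to the truth than a typical individual guess.

```python
import random

# Minimal illustration of the "wisdom of the crowds" intuition, with made-up
# numbers: average many independent, noisy guesses about a hidden quantity and
# compare the crowd's error with a typical individual's error.

random.seed(0)

TRUE_VALUE = 100.0   # the hidden quantity everyone is estimating
CROWD_SIZE = 1000

guesses = [random.gauss(TRUE_VALUE, 20) for _ in range(CROWD_SIZE)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
typical_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"Crowd error:              {crowd_error:.2f}")               # near zero
print(f"Typical individual error: {typical_individual_error:.2f}")  # much larger
```

The catch, as discussed below, is that this averaging only works when the individual judgments are reasonably independent of one another.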

Through millions of years of evolution, these principles have been encoded in the human brain in the form of cognitive biases with names like familiarity, mere exposure and the bandwagon effect. If everyone around you starts running, you should start running too; maybe someone saw a lion coming, and running could save your life. You may not know why right away, but it is wiser to ask questions later.

Your brain picks up clues from the environment, including from your peers, and uses simple rules to translate them into decisions. These rules work well in most situations because they rest on sound assumptions: that people often act rationally, that it is unlikely many of them are wrong, and that the past predicts the future.

Technology lets people draw on signals from much larger numbers of others, most of whom they do not know. Artificial intelligence applications make heavy use of these popularity or "engagement" signals, from selecting search engine results to recommending music and videos, and from suggesting friends to ranking posts in news feeds.

Not all viral content deserves to go viral

Our research shows that virtually all web technology platforms, including social media and news recommendation systems, have a strong popularity bias. When applications are driven by engagement cues rather than explicit search queries, popularity bias can lead to harmful unintended consequences.

Social media platforms like Facebook, Instagram and YouTube rely heavily on AI algorithms to rank and recommend content. These algorithms take as input what you like, comment on and share. Their goal is to maximize engagement by finding out what people like and ranking it at the top of their feeds.

[Video: An introduction to the Facebook algorithm.]
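To make the mechanics concrete, here is a hypothetical sketch of engagement-based ranking. The signals and weights are invented for illustration; they are not Facebook's actual formula.

```python
from dataclasses import dataclass

# Hypothetical engagement-based ranking: score each post by a weighted sum of
# engagement signals and show the highest-scoring posts first. The weights
# below are invented for illustration, not Facebook's actual formula.

@dataclass
class Post:
    author: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Assumed weights: comments and shares count for more than likes.
    return 1.0 * post.likes + 4.0 * post.comments + 8.0 * post.shares

def rank_feed(posts):
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("local news outlet", likes=120, comments=10, shares=5),
    Post("provocative troll page", likes=80, comments=90, shares=60),
    Post("friend's vacation photo", likes=40, comments=12, shares=1),
])
for post in feed:
    print(f"{engagement_score(post):7.1f}  {post.author}")
```

Note how the provocative post outranks the others simply because it draws more comments and shares, regardless of its quality.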

On the surface this seems plausible. If people like credible news, expert opinions and fun videos, these algorithms should surface such high-quality content. But the wisdom of the crowds makes a key assumption here: that what is popular will also be high quality.

We tested this assumption by studying an algorithm that ranks items using a mix of quality and popularity. We found that popularity bias generally tends to lower the overall quality of content. The reason is that engagement is not a reliable indicator of quality when only a few people have seen an item. In those cases, engagement generates a noisy signal, and the algorithm is likely to amplify that initial noise. Once a low-quality item becomes popular enough, it keeps getting amplified.
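The toy simulation below, which is not the model used in our study, illustrates the dynamic under assumed numbers: a feed that ranks purely by engagement counts tends to lock in whichever item happens to attract the first few clicks.

```python
import random

# Toy simulation (not the study's actual model): a feed that ranks purely by
# engagement counts. Because the first few clicks are essentially random,
# a mediocre item can lock in the top spot and keep getting amplified.

def run_once(num_items=20, rounds=2000):
    quality = [random.random() for _ in range(num_items)]  # hidden "true" quality
    engagements = [0] * num_items
    for _ in range(rounds):
        # Show the most-engaged item; break early ties at random.
        shown = max(range(num_items), key=lambda i: (engagements[i], random.random()))
        # The viewer engages with probability equal to the item's quality.
        if random.random() < quality[shown]:
            engagements[shown] += 1
    winner = max(range(num_items), key=lambda i: engagements[i])
    return quality[winner], max(quality)

random.seed(1)
results = [run_once() for _ in range(50)]
avg_winner = sum(w for w, _ in results) / len(results)
avg_best = sum(b for _, b in results) / len(results)
print(f"Average quality of the item the feed amplified: {avg_winner:.2f}")
print(f"Average quality of the best item available:     {avg_best:.2f}")
```

Across many runs, the amplified item is typically well below the best one available, because early noise rather than quality decides which item takes off.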

Engagement bias also affects people directly. Information spreads online through "complex contagion": the more often people see an idea online, the more likely they are to adopt it and reshare it. When social media tells people an item is going viral, their cognitive biases kick in and translate into the nearly irresistible urge to pay attention to it and share it.
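One standard way to formalize complex contagion is a threshold model, sketched below with assumed parameters rather than those of any particular study: a person reshares an idea only after seeing it from several contacts, so spread depends on repeated exposure.

```python
# Threshold-model sketch of complex contagion (assumed parameters, not any
# particular study's model): on a ring where each person knows their four
# nearest neighbors, someone adopts an idea only after seeing it from at
# least `threshold` contacts.

def spread(num_people, threshold, seeds):
    neighbors = {
        p: [(p - 2) % num_people, (p - 1) % num_people,
            (p + 1) % num_people, (p + 2) % num_people]
        for p in range(num_people)
    }
    adopted = set(range(seeds))          # a cluster of adjacent early sharers
    changed = True
    while changed:
        changed = False
        for person in range(num_people):
            if person in adopted:
                continue
            exposures = sum(1 for n in neighbors[person] if n in adopted)
            if exposures >= threshold:
                adopted.add(person)
                changed = True
    return len(adopted)

for threshold in (1, 2, 3):
    reached = spread(num_people=200, threshold=threshold, seeds=4)
    print(f"threshold {threshold}: idea reached {reached} of 200 people")
```

When a single exposure is enough, or when reinforcement from a couple of contacts accumulates, the idea sweeps the whole ring; when people demand more confirmation than their neighborhood can supply, it stalls near the original sharers.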

Not-so-wise crowds

Recently, we conducted an experiment using Fakey, a news literacy app developed by our lab. The game simulates a news feed like those of Facebook and Twitter, mixing current articles from mainstream sources with articles from fake news, junk science, hyperpartisan and conspiracy sources. Players earn points for liking and sharing news from trusted sources and for flagging low-credibility articles for fact-checking.

Our research shows that players are more likely to like or share, and less likely to flag, articles from low-credibility sources when they can see that many other users have engaged with them. Exposure to engagement metrics thus creates a vulnerability.

The wisdom of the crowds fails here because it rests on the assumption that each person judges independently. That assumption can break down for several reasons.

First, because people tend to befriend similar people online, their online communities are not very diverse. The ease of unfriending those with whom they disagree pushes people into homogeneous communities, often called echo chambers.

Second, because many of a person's friends are friends of one another, they influence each other. One experiment showed that knowing what music your friends listen to affects your own stated preferences. Your social desire to conform distorts your independent judgment.

Third, popularity signals can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called link farms and other schemes that manipulate search algorithms. Social media platforms are only beginning to learn about their own vulnerabilities.

People aiming to manipulate the information market have created fake accounts, such as trolls and social bots, and organized fake networks. They have flooded platforms to make it appear that a conspiracy theory or a political candidate is popular, tricking both the platforms' algorithms and people's cognitive biases. They have even altered the structure of social networks to create illusions about majority opinions.

Dialing down engagement

What can be done? Technology platforms are currently on the defensive. They have become more aggressive during elections in taking down fake accounts and misinformation. But these efforts can amount to a game of whack-a-mole.

Another preventive option is to add friction, that is, to slow down the spread of information. High-frequency behaviors such as automated liking and sharing could be curbed by CAPTCHA tests, which require a human response, or by fees. This would reduce opportunities for manipulation, and people would be able to pay more attention to what is in front of them. It would leave less room for engagement bias to influence people's decisions.
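As a purely hypothetical sketch of that kind of friction (the thresholds and rate-limiting logic here are invented for illustration, not any platform's actual policy), a platform could require a challenge whenever an account shares faster than a human plausibly would:

```python
from collections import deque

# Hypothetical friction mechanism (thresholds invented for illustration): if an
# account shares more than MAX_SHARES within WINDOW_SECONDS, require a
# CAPTCHA-style challenge before the next share goes through.

MAX_SHARES = 10
WINDOW_SECONDS = 60.0

class ShareRateLimiter:
    def __init__(self):
        self._recent = {}   # account_id -> deque of recent share timestamps

    def needs_challenge(self, account_id, now):
        shares = self._recent.setdefault(account_id, deque())
        # Drop timestamps that fall outside the sliding window.
        while shares and now - shares[0] > WINDOW_SECONDS:
            shares.popleft()
        if len(shares) >= MAX_SHARES:
            return True      # suspiciously fast sharing: add friction
        shares.append(now)
        return False

limiter = ShareRateLimiter()
for second in range(12):
    if limiter.needs_challenge("account-123", now=float(second)):
        print(f"share at t={second}s: challenge required before posting")
    else:
        print(f"share at t={second}s: allowed")
```

An automated account blasting out a share every second quickly hits the challenge, while a person sharing a handful of items per minute never notices the check.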

It would also help if social media companies adjusted their algorithms to rely less on engagement signals and more on quality signals when deciding which content to serve. Perhaps the whistleblower revelations will provide the needed impetus.

This article was originally published by The Conversation. Read the original article there.