The Facebook whistleblower says its algorithms are dangerous. Here’s why.

Former and current Facebook employees have told me that this is why the company cannot say with confidence what its news feed will show users. Different teams can have competing objectives, and the system has grown so complex and unwieldy that it is hard for anyone to keep track of all of its components.
As a result, the company's main process for quality control is experimentation and measurement. As I previously wrote:

Teams use FBLearner to train machine-learning models, for example to better catch content that violates Facebook's community standards. They then test each new model on a small subset of Facebook's users to measure how it changes engagement metrics such as likes, comments, and shares, says Krishna Gade, who served as the engineering manager for the news feed from 2016 to 2018.

A model that reduces engagement is thrown out. Otherwise, it is deployed and continuously monitored. Gade explained on Twitter that engineers would get notifications every few days when metrics such as likes or comments were down. They would then work out what had caused the problem and whether any models needed retraining.
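To make that workflow concrete, here is a minimal sketch of an experiment-and-monitor loop of the kind Gade describes: test a candidate model on a small user sample, compare its engagement against the current model, and alert engineers when a deployed metric dips. Every function name, threshold, and simulated number below is a hypothetical illustration, not FBLearner's actual API.

```python
import random

# Hypothetical engagement metrics tracked for each model variant.
METRICS = ("likes", "comments", "shares")


def measure_engagement(model_id: str, user_sample: list[int]) -> dict[str, float]:
    """Stand-in for serving a model variant to a user sample and logging
    average per-user engagement. A real system would read these from logs;
    here the numbers are simulated."""
    rng = random.Random(f"{model_id}-{len(user_sample)}")
    return {metric: rng.uniform(0.8, 1.2) for metric in METRICS}


def should_deploy(candidate: dict[str, float], control: dict[str, float]) -> bool:
    """A candidate that reduces engagement on any metric is thrown out."""
    return all(candidate[m] >= control[m] for m in METRICS)


def monitor(current: dict[str, float], baseline: dict[str, float],
            alert_threshold: float = 0.95) -> list[str]:
    """After deployment, flag any metric that drops below a fraction of its
    baseline so engineers can investigate and retrain if needed."""
    return [
        f"{m} at {current[m] / baseline[m]:.0%} of baseline"
        for m in METRICS
        if current[m] < alert_threshold * baseline[m]
    ]


if __name__ == "__main__":
    users = list(range(10_000))
    sample = random.sample(users, 500)                 # small test subset
    control = measure_engagement("prod-model", sample)
    candidate = measure_engagement("candidate-v2", sample)

    if should_deploy(candidate, control):
        print("deploying candidate; alerts:", monitor(candidate, control))
    else:
        print("candidate reduced engagement; discarding it")
```

Note that in a loop like this, the deploy-or-discard decision hinges entirely on engagement metrics, which is the property Haugen's testimony focuses on.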

How did Facebook's content ranking lead to misinformation and hate speech?

During her testimony, Haugen repeatedly returned to the idea that Facebook's algorithm incites misinformation, hate speech, and even ethnic violence.

"Facebook ... knows, they have acknowledged in public, that engagement-based ranking can be dangerous without integrity and security systems, but then has not rolled out those integrity and security systems to most languages around the world," she told the Senate today. "It is tearing families apart. It is even causing ethnic violence in countries like Ethiopia."

Here's what I have written previously about this topic:

Machine-learning models that maximize engagement favor controversy, misinformation, and extremism: simply put, people like outrageous stuff.

Sometimes this can exacerbate existing political tensions. The most tragic example is Myanmar, where viral fake news and hate speech against the Rohingya Muslim minority fueled a religious conflict that escalated into a full-blown genocide. After years of downplaying its role, Facebook admitted in 2018 that it had not done enough "to help prevent our platform from being used to foment division and incite offline violence."

Haugen also mentioned that Facebook has known about this for some time. According to previous reporting, the company has been studying the phenomenon since at least 2016.

According to the Wall Street Journal, Facebook researcher Monica Lee found that the platform was not only hosting a large number of extremist groups but also promoting them to its users: "64% of all extremist group joins are due to our recommendation tools," mainly thanks to the models behind the "Groups You Should Join" and "Discover" features.

In 2017, Chris Cox, Facebook's longtime chief product officer, formed a task force to determine whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would come at the cost of engagement. A mid-2018 document reviewed by the Journal proposed several possible fixes, including tweaking the recommendation algorithms to suggest a wider range of groups for people to join. But it acknowledged that some of the ideas were "antigrowth"; in the end, the task force was disbanded.

Facebook employees I spoke with confirmed these findings as well.

A former Facebook AI researcher who joined the company in 2018 says he and his team conducted study after study confirming the same basic idea: models that maximize engagement increase polarization. They could easily track users' stances on various issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints, and over time the users measurably became more polarized, he says.
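As an illustration of the dynamic that researcher describes (and not of Facebook's actual models), here is a toy simulation: a recommender greedily picks the content with the highest expected engagement, engagement is assumed to peak for content slightly more extreme than a user's current stance, and users shift a little toward whatever they are shown. Every number and update rule here is an invented assumption.

```python
import random

random.seed(0)


def expected_engagement(user_stance: float, item_stance: float) -> float:
    """Assume engagement peaks for content slightly more extreme than the
    user's current view (an invented modeling assumption, not Facebook data)."""
    pull = 0.15 if user_stance >= 0 else -0.15
    return max(0.0, 1.0 - abs((user_stance + pull) - item_stance))


def recommend(user_stance: float, catalog: list[float]) -> float:
    """Greedy, engagement-maximizing choice over the available content."""
    return max(catalog, key=lambda item: expected_engagement(user_stance, item))


def simulate(steps: int = 50) -> float:
    catalog = [i / 10 for i in range(-10, 11)]                # stances from -1.0 to 1.0
    users = [random.uniform(-0.2, 0.2) for _ in range(1000)]  # start with mild views

    for _ in range(steps):
        # Each user sees the item the recommender picks for them and
        # shifts 10% of the way toward its stance.
        users = [s + 0.1 * (recommend(s, catalog) - s) for s in users]

    return sum(abs(s) for s in users) / len(users)


if __name__ == "__main__":
    # Mean |stance| starts near 0.1 and drifts toward 1.0 as steps increase.
    print(f"mean |stance| after 50 steps: {simulate():.2f}")
```

Even with mild starting opinions, the average absolute stance in this sketch drifts steadily outward, which mirrors the pattern the researcher describes of users becoming measurably more polarized over time.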