Twitter’s research shows that its algorithm favors conservative views

In a blog post, Twitter reported that its algorithm favors right-leaning content, though it is not clear why. The findings come from an internal study of the algorithmic amplification of political content on Twitter.
Twitter examined millions of tweets posted between April 1st and August 15th, 2020. The tweets came from elected officials and news outlets in Canada, France, Germany, and the UK. Twitter found that right-leaning accounts received more algorithmic amplification than left-leaning ones in every country it studied, with the exception of Germany.

On Twitter and Facebook, negative posts about political outgroups are more popular than positive ones

Twitter says it doesn't know why its algorithm favors right-leaning material. But Steve Rathje, a PhD candidate who studies social media, has published research that helps explain why divisive content about political outgroups is more likely to go viral.

Rathje spoke with The Verge about Twitter's findings. He said that negative posts about political outgroups tend to receive more engagement on Twitter and Facebook than positive ones, which was also a key finding of his own study. In other words, if a Democrat posts negative content about a Republican (or vice versa), it will generally receive more engagement.

Rathje's research suggests that right-leaning tweets may spark more outrage and thereby gain more visibility, meaning Twitter's algorithm problem may have more to do with toxic tweeting than with any specific political bias. As mentioned above, Twitter's research found that Germany was the only country studied where the algorithm showed no right-leaning bias. This could be due to Germany's agreement with Google, Facebook, and Twitter to remove hate speech within 24 hours. Some users even change their country setting to Germany to prevent Nazi imagery from appearing on Twitter.

Twitter has been working to improve the way people tweet for quite some time. In 2020, it tested a feature that warns users when they are about to post a rude reply, and just this year it piloted a message that appears when it suspects you are getting into a Twitter fight. These moves suggest Twitter is already aware of the problems it has with hateful and bullying posts.

Frances Haugen, the whistleblower who revealed a trove of internal Facebook documents, claims that Facebook's algorithm favors hate speech and divisive content. Twitter could be in much the same situation, but it is getting ahead of any potential leak by sharing some of its internal research itself.

Rathje pointed to another study which found that moral outrage boosted the virality of posts from both liberal and conservative viewpoints, but more so for conservative ones. He says that features that drive virality on social media, such as algorithmic promotion, should be researched further to determine whether they are responsible for the success of right-leaning posts.