Should bad science be taken off social media?

By Rachel Schraer, Health reporter


How do you get rid of bad information?

Understanding science and making health decisions can have life-or-death consequences.

People have died after being deterred from taking vaccines by misleading information they read online.

There are inaccurate or completely made-up claims about 5G and the origins of Covid-19 that have been linked to violence.

But scientists, whose careers rest on the understanding that facts can and should be disputed and that evidence changes, can find removing information to be a lot like censorship.

The Royal Society is the oldest continuously operating scientific institution in the world, and it is trying to grapple with the challenges posed by our newest ways of communicating information.

It advises against social media companies removing content that is legal but harmful. Instead, the report's authors believe, social media sites should adjust their algorithms to stop people making money off false claims.

But researchers who are experts in tracking how misinformation spreads online disagree with that view.

The Center for Countering Digital Hate (CCDH) believes there are times when the best course is to remove content, particularly when it is highly harmful and spreading widely.

The team points to Plandemic, a video that went viral at the start of the pandemic, making dangerous and false claims designed to scare people away from effective ways of reducing harm from the virus, such as vaccines and masks. The video was eventually taken down.

The video's sequel, Plandemic 2, fell flat after being restricted on major platforms, achieving nothing like the first video's reach.

Prof Rasmus Kleis Nielsen is the director of the Reuters Institute for the Study of Journalism at the University of Oxford.

Science misinformation, he says, can lead to disproportionate harm, even though it makes up a relatively small part of most people's media diet.

Even so, he thinks many citizens would have their worst suspicions confirmed if established institutions took a more hands-on role in limiting people's access to information.


The Royal Society says that removing content may deepen feelings of distrust and can be exploited to promote misinformation. It may cause more harm than good by driving misinformation towards harder-to-address corners of the internet.

Yet the fact that those corners are harder to reach is part of the point: it reduces the chance that someone not already committed to potentially harmful beliefs will be exposed to them by chance.

Some of the violent protests driven at least in part by conspiracy theories had their origin on mainstream platforms such as Facebook. And there is no evidence that removing content drives people into more harmful beliefs.

Scientific misinformation is not new.

The incorrect belief in a link between the MMR vaccine and autism came from a published academic paper, while unfounded fears about the harms of water fluoridation were driven by the print media, campaign groups and word of mouth.

What has changed is the speed at which false claims travel and the huge number of people who can read them.

One way to tackle misinformation is to make it harder to find and share, and less likely to appear on someone's feed.

Media caption: Reality Check answers your vaccine concerns.

Prof Gina Neff explained that this approach ensures people can still speak their minds; they just are not guaranteed an audience of millions.

They can still post it, but the platforms don't have to make it go viral.

The Institute for Strategic Dialogue, a think tank which monitors extremists, points out that a lot of misinformation relies on the appropriation and misuse of genuine data and research.

This kind of false information can take a long time to debunk, because fact-checkers have to explain how and why the data has been misused.

The Royal Society supports fact-checking.

One of the most common pieces of vaccine misinformation over the past year has been the notion that large numbers of people are being harmed by the jab, a claim based on a misinterpretation of figures.

A small group of accounts spreading misinformation had a disproportionate influence on the public debate on social media, according to research.

Many of these accounts have been labelled by fact-checkers as sharing false or misleading content on multiple occasions, yet remain live.

The Royal Society's report did not address the removal of the accounts of influencers who are prolific spreaders of harmful misinformation.

Yet removing such accounts, known as de-platforming, is seen as an important tool by many experts, and research shows it can be successful.

David Icke's ability to reach people was reduced when he was removed from YouTube.

After his videos were banned from YouTube, views of his videos on BitChute declined. Before the ban, his videos had been viewed more than nine million times online.

Kate Shemirani, a former nurse and prolific spreader of Covid misinformation, saw her reach diminish, at least in the short term, after she was de-platformed.

But current models of de-platforming need to be developed further, says Prof Martin Innes, one of the paper's authors: it is not enough simply to take down a piece of content.

Instead, research points to the need to disrupt the whole network.

But he believes this level of sophistication is not yet embedded in the way we tackle misinformation that could put people in danger.