Meta asks whether it's time to let "harmful health misinformation" back on Facebook

Managing the pandemic remains a challenge in the US, where some cities and counties are considering reinstating mask mandates amid a chronic nursing shortage.

Meta is already thinking about what a return to normal might look like, despite the recent increase in deaths, and it has begun asking whether it's time to once again let health misinformation spread on its platforms.

On Tuesday, Meta's president of global affairs, Nick Clegg, wrote in a statement that Meta is reconsidering its blanket removal of posts that promote false claims about vaccines, masks, and social distancing. Meta is asking its Oversight Board to weigh in on whether the current COVID-19 misinformation policy is still appropriate now that the "extraordinary circumstances at the beginning of the pandemic" have passed.

During the pandemic, Meta for the first time began removing entire categories of information from its platforms, creating tension between two of the company's core values.

"We are asking for an advisory opinion from the Oversight Board on whether Meta's current measures to address COVID-19 misinformation under our harmful health misinformation policy continue to be appropriate, or whether we should address this misinformation through other means, like labeling or demoting it either directly or through our third-party fact-checking program," Clegg wrote.

The Oversight Board has accepted Meta's request and is expected to receive a large number of public submissions. After the board has considered all input and issued its recommendations, Meta will have 60 days to explain how it will or won't act on them.

Meta isn't bound by any decision the Oversight Board makes, and even if the board approves a shift to less aggressive moderation, critics are likely to read the move as Meta seeking cover so that loosened restrictions aren't perceived as an internal decision.


Why change the policy now?

The Oversight Board can take months to produce an opinion, and the company says it wants feedback now so that it can act more thoughtfully during future pandemics.

Before changing its name to Meta, Facebook spent a year cracking down on anti-vax misinformation, starting with measures similar to the ones Clegg now suggests reverting to: the company fact-checked more posts containing misinformation, banned ads that spread it, and limited the reach of some posts.

Despite those steps, anti-vax content on Facebook grew during the pandemic and spread quickly to neutral audiences who had not yet formed an opinion on COVID-19 vaccines. Facebook, motivated by profits, was slow to respond to the vaccine hesitancy this caused, and the accounts selling or profiting from vaccine misinformation were shown to be the ones reaching furthest into neutral users' newsfeeds.

Only after Congress investigated did Facebook change its policy, concluding that some misinformation could lead to an imminent risk of physical harm. Roughly 25 million pieces of content were ultimately removed, despite the company's stated commitment to protecting free speech.

In Clegg's framing, Meta has a duty to reconsider whether it acted rashly in removing all those posts, so that the next time a crisis hits, clearer guidance is on hand that adequately weighs free speech against misinformation concerns. Under that view, Meta's harmful health misinformation policy should only be used to limit the spread of misinformation when official sources of information are unavailable, as they were at the start of the pandemic but no longer are.

In times when there are obvious official sources of information, should technology companies have less of an obligation to limit the spread of misinformation?

Facebook itself has already demonstrated how hard it is to control the spread of misinformation, even with a ban on harmful health misinformation in place.

Meta did not immediately respond to a request for comment.