
The new policy for handling misinformation during a crisis was announced on Thursday. It sets standards for limiting or blocking the promotion of accounts that are seen as spreading misinformation.

Content moderation is about more than just leaving up or taking down content, according to Yoel Roth, the company's head of safety and integrity.


The new policy covers false reporting of events, false allegations involving weapons or the use of force, and broader misinformation about atrocities or the international response.

During emergencies, users rush to share information, and misinformation can quickly go viral. The sheer speed of events makes it difficult to apply normal verification or fact-checking systems.

Under the new policy, misinformation will be placed behind a warning label, similar to the existing labels for explicit imagery, and its algorithmic promotion will be blocked.

The standards are meant to be limited in scope. The policy will initially apply to content concerning the ongoing Russian invasion of Ukraine, but the company expects to extend the rules to all emerging crises going forward. The policy defines a crisis as a situation in which there is a widespread threat to life, physical safety, health, or basic subsistence.

The policy comes at a delicate time for the company, with the approval of its sale to Musk in limbo. Musk has pledged to scale back the company's moderation systems in favor of a maximalist view of free speech. With Musk claiming the deal is on hold, it's unclear when or how his ideas will be implemented.