
Meta is temporarily allowing Facebook and Instagram users in some countries to post content that would usually be forbidden, such as calls for harm to, or even the death of, Russian soldiers. The change was first reported by the news agency Reuters. According to that report, calls for the death of Russian President Vladimir Putin will also be allowed, as long as they don't contain threats toward other people or indicators of credibility.

As a result of the Russian invasion of Ukraine, Meta says it has temporarily made allowances for forms of political expression that would normally violate its rules, such as violent speech like "death to the Russian invaders."

The New York Times confirmed that the policy applies to users posting from Russia, Poland, and several other countries. As Vice reported in 2021, Facebook had made similar exceptions in certain earlier cases.

Meta's community standards on hate speech and violence have received continual updates since the company first began publishing them publicly, and this change is one example of how platforms have adjusted their treatment of content since the fighting began.

An update to the Reuters report includes the text of the message sent to moderators:

We are issuing a spirit-of-the-policy allowance to allow T1 violent speech that would otherwise be removed under the Hate Speech policy when: (a) targeting Russian soldiers, EXCEPT prisoners of war, or (b) targeting Russians where it’s clear that the context is the Russian invasion of Ukraine (e.g., content mentions the invasion, self-defense, etc.).

Facebook's moderation guidelines would normally dictate removing language that dehumanizes or attacks a group based on its identity. In the context of the invasion, however, moderators are directed to read posts from the listed countries about generic Russian soldiers as a proxy for the Russian military as a whole and, absent credible statements attached, not to take action on them.

It's not clear whether such posts would have been removed even without that direction. The hate speech policy contains many carve-outs and exceptions, and it states that more information is needed before it's enforced against content like the following:

Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic. Facebook looks at a range of signs to determine whether there is a threat of harm in the content. These include but are not limited to: content that could incite imminent violence or intimidation; whether there is a period of heightened tension such as an election or ongoing conflict; and whether there is a recent history of violence against the targeted protected group. In some cases, we may also consider whether the speaker is a public figure or occupies a position of authority.

It's not yet known how the Russian government will respond to the report, and there haven't been any updates from Roskomnadzor, the communications regulator that banned Facebook in the country.