YouTube on June 6 tightened its policies to combat hateful and supremacist content, as it resolved to protect the platform from being used to incite hatred, harassment, discrimination and violence.
“Today, we’re taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status,” YouTube said in a blog post.
Citing examples, it said this would include videos that promote or glorify Nazi ideology, which is inherently discriminatory. Also, YouTube will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary School (in Connecticut, United States), took place.
The latest move assumes significance as digital platforms and social media companies have come under increased global scrutiny on issues like hate content, fake news and data privacy.
India, too, has taken a tough stance on these burning issues.
Earlier this week, IT Minister Ravi Shankar Prasad had warned that misuse of digital platforms will not be tolerated in India.
The IT Ministry has already begun work on tightening rules for social media and online companies, and has held wide public consultations on the proposed changes.
The YouTube blog post said that it has been investing in policies, resources and products needed to protect the YouTube community from harmful content.
“In 2017, we introduced a tougher stance towards videos with supremacist content, including limiting recommendations and features like comments and the ability to share the video,” it said, adding that the step dramatically reduced views of these videos, by 80 per cent on average.
The company said it will begin enforcing the updated policy (on tackling hateful and supremacist content) on June 6, but it will take time for the systems to fully ramp up. YouTube will gradually expand the coverage over the next several months.
“We recognize some of this content has value to researchers and NGOs looking to understand hate in order to combat it, and we are exploring options to make it available to them in the future. And as always, context matters, so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events,” it added.
YouTube noted it is critical that the monetisation systems reward trusted creators who add value to the platform.
“We have longstanding advertiser-friendly guidelines that prohibit ads from running on videos that include hateful content and we enforce these rigorously…In the case of hate speech, we are strengthening enforcement of our existing YouTube partner program policies,” it added.
Channels that repeatedly brush up against its hate speech policies will be suspended from the YouTube Partner Program. This means they cannot run ads on their channel or use other monetisation features like Super Chat, it added.
The openness of YouTube’s platform has helped creativity and access to information thrive, the blog noted.
“It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence,” YouTube added.