Much of Oasis's plan remains idealistic. One proposal is to use machine learning to detect hate speech, but as Karen reported last year, such models often fail to catch hate speech before it spreads. Wang defends Oasis's promotion of artificial intelligence, saying that AI is only as good as the data it is trained on. Moderation practices differ from platform to platform, but all are working toward better accuracy, faster reaction times, and safety-by-design prevention.
The consortium's future goals are outlined in a seven-page document. Wang says the first several months of work have centered on forming advisory groups to help shape those goals.
The plan's content moderation strategy is vague. Wang wants companies to hire a diverse set of content moderators so they can better understand and combat harassment of people of color, but the plan offers no further steps toward achieving that goal.
Data on which users are being abusive is important for identifying repeat offenders. Tech companies will work with nonprofits, government agencies, and law enforcement to create safety policies, and Oasis will have a law enforcement response team tasked with notifying police about harassment and abuse. It is not yet clear how that team will work with law enforcement.
That cooperation raises the question of how privacy and safety will be balanced.
Despite the lack of concrete details, the experts I spoke to consider the standards document a good first step. A lawyer specializing in technology and human rights says it is a good thing that Oasis is looking at self-regulation.
Tech companies have worked together in this way before. Some agreed to exchange information with the Global Internet Forum to Counter Terrorism, and companies that sign on to it self-regulate.
Lucy Sparrow, a researcher at the School of Computing and Information Systems at the University of Melbourne, says that Oasis offers companies something to work with, rather than waiting for them to come up with the language themselves or waiting for a third party to do that work.
Baking ethics into design from the start, as Oasis pushes for, is admirable, and Sparrow's research shows it makes a difference. She says that Oasis is encouraging companies to think about ethics from the beginning.
But she says ethical design might not be enough. Tech companies have been criticized for taking advantage of consumers who lack legal expertise.
Sparrow doesn't believe that a group of tech companies will act in consumers' best interest. She says it raises two questions: How much do we trust capital-driven corporations to control safety? And how much control do we want tech companies to have over our lives?
Users have a right to both safety and privacy, but those needs can be in tension.