Moderation may work for harms caused by individual pieces of content. We need this new approach because many structural problems manifest broadly across a product rather than through any single piece of content. A famous example of this kind of structural issue is Facebook's 2012 experiment, which showed that users' affect (their mood, as measured by their behavior on the platform) shifted measurably depending on which version of the product they were exposed to.
After the results became public, Facebook stopped this type of experimentation. But ceasing to measure these effects does not mean the effects went away; the product still produces them.
Structural problems often begin with product choices. Product managers at technology companies like Facebook, YouTube, and TikTok are paid handsomely to maximize time spent and engagement on their platforms. Most product changes are first deployed to small test audiences through randomized controlled trials, and the resulting growth metrics feed into Objectives and Key Results (OKRs) that can determine bonuses and promotions. The teams charged with responding to harm usually sit downstream of these decisions and have less authority to address their root causes: they can respond to acute harms, but they cannot fix problems built into the products themselves.
With the right attention and focus, the question of societal harms could be turned on its head. Consider the revelations about Facebook's impact on teens' mental health. In response, Facebook said it had studied whether teens felt the product had a negative effect on their mental health and whether that perception caused them to use the product less. In other words, the study treated mental health primarily as a question of user engagement; it was the potential impact on engagement, not the harm itself, that made the study worth running.
Evaluating systemic harm won't be easy. We would have to figure out what can be measured, what to require of companies, and which issues to prioritize in any assessment.
Financial interests can run counter to meaningful limits on product development and growth; that conflict is a standard case for regulation that works on behalf of the public. Whether through a new legal mandate for the Federal Trade Commission or harm-mitigation guidelines from a new governmental agency, the regulator's job would be to work with technology companies' product development teams to design implementable protocols for assessing meaningful signals of harm.
Adding these types of protocols should be straightforward for the largest companies, because they have already built randomized controlled trials into their development processes to measure efficacy. Once the standards were defined, the testing itself could be executed without direct regulatory participation. Companies would be required to include diagnostic questions alongside their normal growth-related questions and to make the resulting data accessible to external reviewers. We describe this procedure in more detail in our upcoming conference paper.
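As a rough illustration of what such a protocol might look like inside an existing A/B-testing pipeline, the sketch below computes both the usual engagement lift and a diagnostic delta for a treatment arm and packages them in one report suitable for external review. The schema and names (`ArmMetrics`, `well_being_survey_score`, `summarize`) are hypothetical assumptions for illustration, not any company's actual system.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class ArmMetrics:
    """Per-user metrics collected in one experiment arm (hypothetical schema)."""
    time_spent_minutes: list       # normal growth-related metric
    well_being_survey_score: list  # diagnostic question, e.g. a 1-5 self-report


def summarize(control: ArmMetrics, treatment: ArmMetrics) -> dict:
    """Report the engagement lift and the harm-diagnostic delta side by side,
    so the same experiment readout can be shared with external reviewers."""
    return {
        "engagement_lift": mean(treatment.time_spent_minutes) - mean(control.time_spent_minutes),
        "well_being_delta": mean(treatment.well_being_survey_score) - mean(control.well_being_survey_score),
        "n_control": len(control.time_spent_minutes),
        "n_treatment": len(treatment.time_spent_minutes),
    }


if __name__ == "__main__":
    control = ArmMetrics([12.0, 15.5, 9.0], [3.8, 4.0, 3.5])
    treatment = ArmMetrics([14.0, 18.0, 11.0], [3.2, 3.6, 3.1])
    print(summarize(control, treatment))
```

The point of the sketch is that the diagnostic metric rides along with the growth metric the company already collects; no new experimental machinery is needed, only an agreed-upon set of questions and a reporting format.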
When products that reach tens of millions of people are tested for their ability to boost engagement, companies would also need to ensure that those products satisfy a "don't make the problem worse" principle. Over time, the same standard could be applied retroactively, rolling back the effects of already-approved products.
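A minimal sketch of what a "don't make the problem worse" launch gate could look like, assuming the hypothetical report produced by the experiment sketch above and a tolerance negotiated between the company and the regulator:

```python
# Largest acceptable drop in the well-being diagnostic (hypothetical value).
HARM_TOLERANCE = -0.05


def launch_decision(report: dict) -> str:
    """Approve a change only if it does not worsen the diagnostic metric,
    regardless of how large the engagement lift is."""
    if report["well_being_delta"] < HARM_TOLERANCE:
        return "blocked: diagnostic metric regressed beyond tolerance"
    if report["engagement_lift"] <= 0:
        return "no launch: no engagement benefit"
    return "eligible for launch"
```

The same check, run periodically against already-launched features, is one way the retroactive rollback described above could be operationalized.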