Let's talk about whether infrastructure providers should take more responsibility for content moderation than they traditionally have.

I.

Kiwi Farms, a nearly 10-year-old web forum founded by a former administrator of 8chan, has become notorious for its online harassment campaigns against LGBT people. Most recently, the wave of anti-trans legislation in the United States led to terrifying threats and violence against Clara Sorrenti, a well-known Twitch streamer who spoke out against the legislation.

Two reporters at NBC News described the situation:

Sorrenti, known to fans of her streaming channel as “Keffals,” says that when her front door opened on Aug. 5, the first thing she saw was a police officer’s gun pointed at her face. It was just the beginning of a weekslong campaign of stalking, threats and violence against Sorrenti that ended up making her flee the country.

Police say Sorrenti’s home in London, Ontario, had been swatted after someone impersonated her in an email and claimed she was planning to carry out a mass shooting outside London’s City Hall. After Sorrenti was arrested, questioned and released, the London police chief vowed to investigate and find whoever made the threat. Those officers were themselves eventually doxxed on Kiwi Farms and threatened. The people who threatened and harassed Sorrenti, her family and the police investigating her case have not been identified.

Sorrenti started a campaign to get Cloudflare to stop providing security services to Kiwi Farms. Thanks to her popularity, #DropKiwiFarms and #CloudflareProtectsTerrorists both trended on Twitter. The question was what Cloudflare would do about it.

Most casual web surfers may never have heard of Cloudflare, but the modern internet could hardly function without companies like it. And Cloudflare provided at least three services that were valuable to Kiwi Farms.

Twice before in its history, Cloudflare has confronted related high-profile moderation controversies

By caching the site’s contents, generating thousands of copies and storing them at endpoints around the world, Cloudflare made Kiwi Farms faster and easier to access. By absorbing distributed denial-of-service (DDoS) attacks, which can crash sites by overwhelming them with bot traffic, it kept the forum online. And by hiding the identity of Kiwi Farms’ web hosting company, as Alex Stamos pointed out, it prevented people from pressuring the hosting provider to take action against the site.

Cloudflare knows all of this, and it has endeavored to make principled arguments for operating the way it does. Twice before in its history, it has confronted related high-profile controversies: in 2017, when it turned off protection for the neo-Nazi site the Daily Stormer, and again in 2019, when it did the same for 8chan. In both cases, the company warned that the decisions would create more pressure on infrastructure providers to shut down other websites, and that this would ultimately hurt marginalized groups.

The company echoed that sentiment in a blog post last week, one that did not mention Kiwi Farms by name. Here are CEO Matthew Prince and head of public policy Alissa Starzak:

“Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online. We believe cyberattacks, in any form, should be relegated to the dustbin of history.”

To its credit, Cloudflare has been more principled than most in developing these policies. In the company’s view, the closer you are to hosting, recommending, and otherwise driving attention to content, the more responsibility you have for removing harmful material. The further away you are from hosting and recommending, the more hesitant you should be to intervene.

The logic is that the people who host and recommend content bear the most responsibility for what gets consumed, while companies further down the stack bear less. You don’t want a company several layers removed deciding what belongs on a photo-sharing site.

These policies are undeniably convenient to Cloudflare

Cloudflare argues that if people want content taken down, they should pass laws dictating its removal, since laws emerge from a democratic process and carry more legitimacy than a private company’s judgment calls. I like the idea of making moderation decisions more accountable to the public, even if I’m wary of the government intervening in matters of speech.

At the same time, these policies are undeniably convenient for the company. They spare Cloudflare from having to wrestle with most moderation issues, which helps it serve the largest possible number of customers, stay out of hot-button cultural debates, and stay off the radar of regulators who are increasingly skeptical of tech companies.

Pushing content moderation off onto someone else is usually smart business: unless policing speech is necessary for the survival of the business, there isn’t much upside in doing it.

II.

Return to that line from the company’s post: cyberattacks “are not the appropriate mechanism for addressing problematic content online.” The idea is that Cloudflare wants to take attacks off the table for everyone, and that harassment should be fought in other ways.

That would be all well and good if everyone from local police departments to national lawmakers took online harassment seriously and had developed a coordinated strategy to protect its victims.

But they don’t. And so, as inconvenient as it is for the company, Cloudflare has become a legitimate pressure point in the effort to stop these harassers from committing further acts of violence. Yes, Kiwi Farms could probably find other security providers. But Cloudflare’s decisions to stop serving the Daily Stormer and 8chan pushed both sites out of the mainstream.

Cloudflare’s decision arguably made it complicit in whatever happened

By continuing to protect Kiwi Farms, Cloudflare arguably made itself complicit in whatever happened to Sorrenti, and to anyone else the site’s mob decided to target. At least three people targeted by Kiwi Farms campaigns have died by suicide.

And for all of its claims about wanting to bring an end to cyberattacks, Cloudflare provides security services to makers of cyberattack software, including one run by Sergiy P. Usatyuk, who was convicted of operating a large distributed denial-of-service-for-hire scheme. According to Usatyuk, Cloudflare makes money from such schemes because it can sell protection to the victims.

Cloudflare likes to compare itself to a fire department, one that puts out fires no matter how bad the person living in the home may be. What the analogy leaves out is that Cloudflare is also protecting the people lighting the fires, and making money putting them out.

None of this is to say there aren’t good reasons for Cloudflare to stay out of most moderation debates. There are. But the company does choose whom it will deploy its security guards for, and in this case that choice protected a small group of the worst people on the internet.

III.

Stamos predicted that the company’s stance wouldn’t hold: suicides have already been linked to Kiwi Farms, he noted, and soon a doctor, activist, or trans person is going to get doxxed and killed, or a mass shooter is going to be inspired there, and the ensuing investigation will surface the killer’s links to the site.

Fortunately, it hasn’t yet come to that. But on Saturday, citing credible threats against specific individuals that had escalated over the previous several days, Cloudflare stopped protecting Kiwi Farms.

"This is an extraordinary decision for us to make and, given Cloudflare's role as an internet infrastructure provider, a dangerous one that we are not comfortable with." Over the last 48 hours, we believe there is an emergency and immediate threat to human life that is unlike anything we've ever seen from any other customer before.

“We do not believe we have the political legitimacy to determine generally what is and is not online”

It feels like a huge failure of social policy that the safety of Sorrenti and other people targeted by online mobs comes down to whether a handful of companies will keep protecting the mobs’ organizing spaces. It is crazy that we are handing what should be a responsibility of law enforcement to for-profit companies.

The company wrote last week that it does not believe it has the political legitimacy to determine what belongs online by restricting security or core internet services. And it probably doesn’t.

But sometimes your hand gets forced. And when it does, the right thing to do is not to ask Congress to pass a law telling you what to do. It’s to stop providing those services.

Admittedly, there isn’t always a clear moment when a forum full of trolling tips over into incitement of violence. Far-right actors rely on stochastic terror: dehumanizing groups of people and hinting that it sure would be nice if someone did something about them, while never quite giving an explicit order.

Infrastructure providers can’t turn a blind eye until the last possible moment

That strategy has been effective in no small part because it is designed to resist content moderation: it gives cover to infrastructure providers looking for reasons not to act. It has become a loophole the far right can exploit, confident that so long as they never explicitly call for murder, they will remain in the platforms’ good graces.

That loophole needs to be closed. Infrastructure providers should generally stay out of moderation debates, but when the services they provide aid real-world violence, they can’t turn a blind eye. They should recognize groups that organize harassment campaigns earlier, and use their leverage to prevent the loss of life that would otherwise be traced back to the tech stack those groups sat upon.

Cloudflare often invokes its desire to protect vulnerable and marginalized groups. And fighting for a free and open internet that can resist pressure from authoritarian governments is genuinely important work. But protecting the vulnerable and marginalized also means offering protection to the people being attacked by your own customers.

In this case, I’m glad Cloudflare came around. Next time, I hope it gets there quicker.