
When a gunman pulled into the parking lot of a Buffalo, New York, grocery store on Saturday to carry out a racist attack, his camera was already rolling.

According to CNN, the suspect broadcast live video from his point of view, showing shoppers in the parking lot as he arrived and then following him inside as he began a rampage that killed 10 people and injured three. The video was removed and the user suspended less than two minutes after the violence began, according to Samantha Faught, Twitch's head of communications for the Americas. The Washington Post reported that 22 people saw the attack unfold online.

After the fact, though, millions saw the footage. The recording was viewed more than 3 million times on Streamable before it was removed, and copies and links spread across major platforms like Facebook in the hours after the attack, according to The New York Times.

This isn't the first time a perpetrator of a mass shooting has broadcast the violence live online. In 2019, a gunman livestreamed his attack on two mosques in Christchurch, New Zealand, on Facebook; the platform removed 1.5 million videos of the attack within 24 hours. Even so, a tide of violent, racist, and antisemitic content derived from the original was still being reshared days after the killings.

Rasty Turek, CEO of Pex, a company that builds content identification tools, says that if Twitch indeed disrupted and took down the stream within two minutes of the violence beginning, that response time was remarkably fast.

Turek calls that an achievement unprecedented in comparison to many other platforms. Faught says the stream was removed mid-broadcast but did not respond to questions about how long the shooter was broadcasting before the violence began or how Twitch was notified of the stream.

“The challenge is what happens with that video afterwards.”

Turek acknowledges that getting moderation response time down to zero is not the right way to think about the problem, because livestreaming has become so widely accessible. In his view, the challenge is not how many people watch the stream itself but what happens with the video afterwards: how platforms handle the copies and reuploads.

Big tech companies have built a system for exactly this problem. The Global Internet Forum to Counter Terrorism (GIFCT), a coalition founded by major platforms, aims to prevent the spread of terrorist content online, in part through a shared database of hashes, or digital fingerprints, of known violent material. After the Christchurch attack, the coalition said it would begin tracking far-right content and groups online, and material related to the Buffalo shooting has been added to the database so that member platforms can catch and take down reposted content.

But even with GIFCT coordinating a centralized response, implementation remains a problem: though such efforts are admirable, not every company participates, and the coalition's practices aren't always clearly carried out.

And there are plenty of smaller companies that simply don't have the resources for that kind of moderation.

And though the Buffalo stream itself was caught early (the Christchurch shooter, by contrast, was able to broadcast for 17 minutes on Facebook), Streamable's slower response meant that by the time the video was removed, millions had viewed it and a link to it had been shared hundreds of times, according to The New York Times. Hopin, the company that owns Streamable, did not respond to a request for comment.

The Streamable link has since been taken down, but portions of the recording survive elsewhere, leaving major platforms like Facebook, TikTok, and Twitter to remove and suppress the reshared versions of the video.

Days after the shooting, portions of the video that users reuploaded to Twitter and TikTok remain.

YouTube spokesperson Jack Malon says the company has removed content filmed by the Buffalo shooter. The platform is also prominently surfacing videos from authoritative sources in search results and recommendations, Malon says, which makes reuploads harder to find.

A Twitter spokesperson who declined to be named due to safety concerns said the company was removing videos and media related to the incident. TikTok did not respond to multiple requests for comment.

Meta says multiple versions of the video are being added to a database that helps Facebook automatically detect and remove reuploads, and that links to external platforms hosting the content are being permanently blocked.

Still, days into the week, clips from the livestream continued to circulate. On Monday afternoon, The Verge viewed a Facebook post containing two clips from the alleged livestream: one showing the attacker driving into the parking lot talking to himself, another showing a person pointing a gun at someone inside a store as they screamed in terror. A caption on the clip suggested the victim was spared because they were white. The post was taken down after The Verge asked about it.

Given how widely the original clip has spread and how many times it has been cut, edited, and reposted, the footage will likely never fully disappear.

Maria Y. Rodriguez, an assistant professor at the University at Buffalo School of Social Work who studies social media and its effects on communities of color, says that moderating content while preserving free speech online takes discipline, not just around the Buffalo video but in the day-to-day decisions platforms make.

Platforms need support in the form of regulation that can offer parameters, Rodriguez says, including standards for how violent content is detected and how it is moderated.

Certain practices on the part of platforms could minimize harm to the public, like sensitive content filters that give users the option to view potentially upsetting material or simply scroll past. Hate crimes are not new and are likely to happen again. Moderation can limit how far violent material travels, but the question of what to do with the perpetrators is what has kept Rodriguez up at night.

She says, "What do we do about him and other people like him?"