Yoel Roth said Musk was warned Twitter Blue verification would go disastrously wrong

Karissa Bell | 11.29.22
A view of the Twitter logo at its corporate headquarters in San Francisco, California, U.S. November 18, 2022. REUTERS/Carlos Barria

Yoel Roth, the former head of trust and safety at Twitter, said in his first interview since leaving the company that he believes the platform is less safe under Elon Musk. Speaking at an event hosted by the Knight Foundation, Roth was asked whether Twitter is safer now; he responded, "I don't think so."

In the chaotic days after Musk's takeover, Roth was one of the only top executives to publicly discuss what was happening on the micro-blogging site. When a surge of racist slurs hit the platform as the result of a coordinated troll campaign, Roth explained what was being done to stop the attacks, and Musk pointed to those explanations from his own account.

Although Roth was initially optimistic, he said a breakdown in "procedural legitimacy" led him to leave. He noted that Musk had stated he wanted to form a "moderation council" before making major policy decisions, but Musk quickly showed he would rather make those decisions on his own.

— Aaron Rupar (@atrupar) November 29, 2022

"I was optimistic because he said things that were consistent with establishing a moderation council," Roth said, but his optimism soon faded.

Roth said that his team had warned Musk ahead of the Twitter Blue verification launch, but Musk ignored them. "It went off the rails in the way that we anticipated, and there weren't the protections that needed to be in place to address it upfront."

The comments come as Musk prepares to relaunch verification on the social network. According to Musk, there will be different colored badges for businesses and individuals, along with a manual verification process.

After the mass layoffs and resignations at the company, Roth said, users should pay close attention to whether key safety features, like blocking and muting, still work. If protected tweets stop working, he warned, it's a symptom that something is deeply wrong.

Roth added that the loss of veteran policy and safety employees would hurt the platform.

The question, he said, is whether enough people remain who understand the kinds of malicious campaigns that target the service; in his view, the company no longer has enough people who can do that work.
