The new machine learning feature will help streamers protect their channels from people trying to evade bans. "Suspicious User Detection" flags accounts it suspects of ban evasion as either "likely" or "possible" ban evaders.
In the case of the former, the tool will prevent any messages those accounts send from showing up in chat. Streamers and the moderators who help them run their channel will still be able to see those messages, so they can decide whether to ban that person. By default, possible ban evaders can send messages in chat, but they will be flagged by the system. Streamers also have the option to prevent those accounts from sending messages in the first place.
The tool is powered by a machine learning model that takes a number of signals into account and compares that data against accounts previously banned from a Creator's channel to assess the likelihood the account is evading a previous ban.
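To make the two flag tiers concrete, here is a minimal sketch of how a likelihood score from such a model might be mapped onto the "likely" and "possible" categories and the chat behavior described above. The thresholds, function names, and score values are assumptions for illustration; Twitch has not published the details of its model or moderation pipeline.

```python
# A purely illustrative sketch of the two-tier flagging flow described above.
# The thresholds and API surface are assumptions, not Twitch's actual system.

LIKELY_THRESHOLD = 0.9    # assumed cutoff for "likely" ban evaders
POSSIBLE_THRESHOLD = 0.6  # assumed cutoff for "possible" ban evaders


def classify_account(evasion_score: float) -> str:
    """Map a model-produced likelihood onto the two flag tiers."""
    if evasion_score >= LIKELY_THRESHOLD:
        return "likely"
    if evasion_score >= POSSIBLE_THRESHOLD:
        return "possible"
    return "clear"


def handle_message(evasion_score: float, restrict_possible: bool = False) -> dict:
    """Decide how a flagged account's chat message is surfaced.

    Messages from "likely" ban evaders are held so only the streamer and mods
    see them; "possible" ban evaders post normally but are flagged, unless the
    streamer opts to restrict them as well.
    """
    flag = classify_account(evasion_score)
    if flag == "likely" or (flag == "possible" and restrict_possible):
        return {"show_in_chat": False, "visible_to_mods": True, "flag": flag}
    return {"show_in_chat": True, "visible_to_mods": True,
            "flag": flag if flag != "clear" else None}


# Example: a borderline account posts in a channel that restricts "possible" flags.
print(handle_message(0.72, restrict_possible=True))
# -> {'show_in_chat': False, 'visible_to_mods': True, 'flag': 'possible'}
```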
Suspicious User Detection will be turned on for everyone by default, but it won't ban anyone automatically. It's impossible to create a machine learning tool that is 100 percent accurate in every context, which is why, the company says, streamers make the final call on who can participate in their community. The tool will learn from the actions they take, and the accuracy of its predictions should improve over time.
The summer of 2021 saw a phenomenon called "hate raids," which Twitch struggled to contain. In these attacks, malicious individuals used thousands of bots to target creators from marginalized communities. Some creators walked away from the platform for a day in protest of the company's lack of action.