Twitch has released a tool that uses machine learning to detect users attempting to re-enter chat channels from which they were previously banned for abusive behavior.
According to the gaming-focused live-streaming site, "bad actors" frequently create new accounts to continue abusing users after a ban. The new system alerts streamers and chat moderators when someone appears to be a "likely" or "potential" ban evader. It is part of Twitch's ongoing efforts to combat harassment and hate speech.
The company has drawn criticism over "hate raids," in which malicious streamers direct their followers, or even automated bots, to other channels to attack someone.
Frequently, the victims are members of minority or marginalized communities.
Creators had pressed Twitch, which is owned by Amazon, to do more to combat hate speech.
Twitch unveiled "phone-verified chat" in September, allowing streamers to require some or all users to verify their phone numbers before chatting.
It also filed a lawsuit against anonymous individuals who were allegedly participating in “chat-based attacks against marginalized streamers” in the same month.
Twitch stated the new suspicious-user identification technology is “driven by machine learning” and detects ban evaders using “several account signals.”
The new system will be switched on by default, but creators and moderators will be able to adjust its settings or turn it off.
According to Twitch, the tool compares several indicators, including the behavior and account attributes of users attempting to join a chat channel, against those of banned accounts, and flags potential ban evaders in two ways. Messages from accounts flagged as "likely" evaders will be blocked from chat, while messages from accounts flagged as "potential" evaders will continue to appear but will be marked for moderators to review.
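Twitch has not published details of its model, but the two-tier flagging it describes can be sketched in simplified form. In the sketch below, every signal name, weight, and threshold is an invented assumption for illustration; the real system uses undisclosed "account signals" and machine learning rather than hand-tuned rules.

```python
# Hypothetical sketch of two-tier ban-evasion flagging, loosely modeled on
# Twitch's description. Signal names, weights, and thresholds are invented
# assumptions, not Twitch's actual implementation.

def evasion_score(account, banned_profiles):
    """Compare a joining account's signals against previously banned accounts
    and return the highest similarity score found."""
    best = 0.0
    for banned in banned_profiles:
        score = 0.0
        if account["network_fingerprint"] == banned["network_fingerprint"]:
            score += 0.5  # assumed signal: shared network fingerprint
        if account["chat_style"] == banned["chat_style"]:
            score += 0.3  # assumed signal: similar chat behavior
        if account["account_age_days"] < 7:
            score += 0.2  # assumed signal: freshly created account
        best = max(best, score)
    return best

def classify(score):
    """Map a score to the two flag levels the article describes.
    Thresholds are illustrative only."""
    if score >= 0.7:
        return "likely"     # messages blocked from chat
    if score >= 0.4:
        return "potential"  # messages shown but flagged to moderators
    return "clear"
```

A new account matching a banned account's network fingerprint and chat style would score highly and be classified "likely," so its messages would be held back; a weaker partial match would land in the "potential" tier and merely be flagged.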