A new AI built to combat toxic players in online games has banned 20,000 Counter-Strike: Global Offensive players in its first six weeks. The AI's job is to analyze messages in the in-game chat.
The AI, called Minerva, was built by the team behind the FACEIT platform, which organized the CS:GO London Major in 2018, in collaboration with Google Cloud and Jigsaw, a technology incubator within Google.
Minerva started checking CS:GO chat messages in late August. In its first month and a half, it flagged 7,000,000 messages as toxic chat, issued 90,000 warnings, and banned 20,000 players.
Minerva is trained through machine learning. The first time it detects toxic messages from a player, it issues a warning for verbal abuse; it also flags spam messages.
Within a few seconds of a match finishing, Minerva sends a warning or ban notice to the offending player, and the punishment grows harsher for repeat offenders.
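FACEIT has not published Minerva's internal rules, so the sketch below is only a rough illustration of that post-match flow: a placeholder keyword check stands in for the real machine-learning classifier, and the penalty ladder (warning, chat bans, cooldown) is a hypothetical example of punishments escalating for repeat offenders.

```python
from dataclasses import dataclass
from typing import List, Optional

# Toy stand-in for Minerva's trained classifier; the real system uses an ML model
# built with Google Cloud and Jigsaw, whose internals are not public.
TOXIC_TERMS = {"idiot", "trash", "uninstall"}

def looks_toxic(message: str) -> bool:
    """Flag a message if it contains one of the placeholder toxic terms."""
    return any(term in message.lower() for term in TOXIC_TERMS)

# Hypothetical escalation ladder: the penalty names and order are illustrative,
# not FACEIT's actual rules.
PENALTIES = ["warning", "24h chat ban", "7d chat ban", "30d cooldown"]

@dataclass
class PlayerRecord:
    offenses: int = 0  # matches in which this player's chat was flagged

def resolve_match(player: PlayerRecord, chat_log: List[str]) -> Optional[str]:
    """Run after a match ends: flag toxic chat and escalate the penalty."""
    if any(looks_toxic(msg) for msg in chat_log):
        player.offenses += 1
        # Repeat offenders climb the ladder and stay at the harshest penalty.
        return PENALTIES[min(player.offenses - 1, len(PENALTIES) - 1)]
    return None  # clean match, no notice sent

player = PlayerRecord()
print(resolve_match(player, ["gg wp"]))                 # None
print(resolve_match(player, ["uninstall, you trash"]))  # warning (first offense)
print(resolve_match(player, ["what an idiot team"]))    # 24h chat ban (repeat offense)
```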
With the AI's Help, Toxic Chat Rates Went Down Significantly
With the AI's help, the number of toxic messages decreased by 20% between August and September, and the number of unique players sending toxic messages fell by 8%.
The trial began after months of testing, and it is only the first step in launching Minerva into online games. "Minerva's in-game chat detection functions well as the first step towards our vision for the advancement of this AI," FACEIT said on its blog.
"We are really pleased with this initial step because it is a strong foundation that will enable us to improve Minerva until it can finally detect and deal with all types of abusive behavior in real time."
"In the coming weeks we will announce a new system that will support Minerva in its training," FACEIT added.