Blizzard tests deployment of machine learning for moderation of Overwatch
Blizzard tests the use of machine learning to tackle unwanted language use in Overwatch. This is done in several languages. In the long term, the technology must also be able to assess more than just language, such as behaviour.
Jeff Kaplan, game director of Blizzard's Overwatch shooter, says in an interview with Kotaku that the company is experimenting with machine learning and is trying to teach a system what unwanted language is. The goal of deploying AI is to tackle such language faster, without having to wait for a player to report it. The system covers unwanted language in several languages, such as English and Korean. At the moment, Blizzard is using it to deal with only the most blatant cases.
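Blizzard has not published details of its system, but teaching a model what unwanted language is typically means training a text classifier on labelled chat examples. A minimal sketch in Python with scikit-learn, using invented example data, gives an idea of the approach; it is an illustration under those assumptions, not Blizzard's actual method:

```python
# Minimal sketch of a chat-toxicity classifier. The training data below
# is invented for illustration; Blizzard's real model and data are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: chat lines labelled 1 (unwanted) or 0 (fine).
chat_lines = [
    "gg well played everyone",
    "nice shot, keep it up",
    "uninstall the game you are useless",
    "you are trash, leave",
]
labels = [0, 0, 1, 1]

# Character n-grams cope better with leetspeak and deliberate
# misspellings than plain word tokens.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(chat_lines, labels)

# Score a new message; a real system would only act above a high
# threshold, matching the "most blatant cases" approach described above.
score = model.predict_proba(["you are all trash"])[0][1]
print(f"toxicity score: {score:.2f}")
```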
"In everything related to reporting and punishing players, you need to start with the most extreme examples and see how the rules can be adjusted," says Kaplan to the site. The detection of unwanted language would not analyze messages between friends directly. In the long term, it must also be possible to detect undesirable behaviour in the game. It is unclear how far Blizzard has come with this. Kaplan says: "That is the next step. For example, how do you know if Mei's ice cream wall in the spawn room has been built by a monkey?
The Overwatch team is also looking at ways to reward positive behaviour in the game. Together with companies such as Twitch and League of Legends maker Riot Games, it is part of the so-called Fair Play Alliance, which works on 'healthy communities' in online games. In LoL, such a system already exists in the form of the Honor system. Machine learning is also used for moderation elsewhere, for example for the comments on New York Times articles, which are scored with technology from Google sister company Jigsaw.
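Jigsaw's comment-scoring technology is exposed through its Perspective API, which returns a toxicity score between 0 and 1 for a piece of text. A hedged sketch of such a request, following the publicly documented endpoint and request shape, with a placeholder API key:

```python
# Sketch of a call to Jigsaw's Perspective API. Endpoint and request
# shape follow the public documentation; API_KEY is a placeholder.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder, obtained via Google Cloud
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

payload = {
    "comment": {"text": "you are all useless, uninstall"},
    "requestedAttributes": {"TOXICITY": {}},
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    result = json.load(response)

# The summary score is a probability-like value between 0 and 1;
# a moderation pipeline would flag comments above a chosen threshold.
score = result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"toxicity: {score:.2f}")
```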