Anyone who has played video games with voice chat over the past decade knows that there are some risks involved. You might be greeted by friendly teammates, but you can also hear the most venomous language you have ever heard in your life.
Riot Games, the developer behind such highly popular titles as League of Legends and Valorant, is thinking about this. And taking action.
The developer is today announcing changes to its privacy notice that allow it to capture and evaluate voice comms when a report is filed over disruptive behavior. The policy change is Riot-wide, meaning players of all of its games must accept the updated terms. However, the only title currently slated to use these new capabilities is Valorant, as it is Riot's most voice-chat-heavy game.
The plan is to store the relevant audio data in the region of the account in question and evaluate it to see whether the behavioral agreement was violated. The process begins only when a report is filed; it is not an always-on arrangement. If a violation is found, the audio will be made available to the offending player and eventually deleted once there is no further need for review. If no violation is found, the data will be deleted.
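The policy described above amounts to a simple lifecycle: audio is evaluated only after a report, retained while under review, and deleted either way once review ends. The sketch below illustrates that flow in Python; every name in it is hypothetical (Riot has not disclosed any implementation details), and `violates_agreement` is a stand-in for whatever detection system the company ultimately builds.

```python
from dataclasses import dataclass

# Hypothetical sketch of the report-triggered flow described above.
# None of these names come from Riot; they only illustrate the stated
# policy: evaluate on report, surface evidence on violation, and delete
# the audio in either case once review is complete.

@dataclass
class AudioClip:
    clip_id: str
    region: str          # stored in the region of the reported account
    deleted: bool = False

def handle_report(clip: AudioClip, violates_agreement) -> str:
    """Evaluate a reported clip; delete it once review is complete."""
    if violates_agreement(clip):
        outcome = "violation: evidence made available to the player"
    else:
        outcome = "no violation"
    clip.deleted = True  # the data is removed in either case
    return outcome
```

Note that there is no always-on path here: `handle_report` is the only entry point, mirroring the claim that evaluation begins with a report being filed.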
Before we move on, let me just say that this is a big fucking deal. Publishers and developers have long known that toxicity in gaming is not only a terrible user experience, but that it keeps large swaths of potential gamers from participating at all.
“Players are experiencing a lot of pain in voice comms, and it takes the form of a variety of disruptive behaviors that can be quite harmful,” said Head of Player Dynamics Weszt Hart. “We recognize that, and we have made a promise to players that we will do everything we can in this space.”
Voice chat often makes a game richer and more fun. Especially during the pandemic, people are craving more human connection. But in a stressful environment like competitive gaming, that connection can turn sour.
As a gamer myself, I can safely say that some of the saddest experiences of my life have occurred while playing video games with strangers.
To be clear, Riot is not getting specific about how this voice chat moderation will work. The first step is an update to its privacy notice, which gives players a heads-up and empowers the company to begin evaluating voice comms.
Policing voice comms is incredibly difficult. Not only do you need to be transparent with users and update any legal documents (which is arguably the easiest step, and the one Riot is taking today), but you must also develop the right technology to detect violations across all players, all while protecting privacy.
I spoke with Hart and Data Protection Officer and CISO Chris Hymes about the changes. Both stated that the actual system for detecting behavioral violations within voice comms is still under development. It could center on automated voice-to-text transcription, with the resulting text passing through the same systems used for text chat moderation, or it may lean more heavily on machine learning that can detect violations from the voice audio alone.
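The first approach described above — transcribe the audio, then reuse the existing text moderation pipeline — can be sketched as below. This is purely illustrative: `transcribe` is a stand-in for a real speech-to-text model, and the blocklist check is a deliberately naive placeholder for Riot's actual (undisclosed) text moderation system.

```python
# Illustrative sketch of the transcription-first approach: convert
# voice to text, then feed the transcript through the same kind of
# check a text-chat moderation system would apply. All names here are
# hypothetical, not Riot's.

BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder terms

def transcribe(audio_bytes: bytes) -> str:
    # Stand-in: a real system would run a speech-to-text model here.
    return audio_bytes.decode("utf-8", errors="ignore")

def moderate_text(text: str) -> bool:
    """Return True if the transcript contains a blocked term."""
    tokens = text.lower().split()
    return any(token in BLOCKED_TERMS for token in tokens)

def moderate_voice(audio_bytes: bytes) -> bool:
    """Transcribe, then reuse the text moderation check."""
    return moderate_text(transcribe(audio_bytes))
```

The appeal of this design is reuse: the hard-won rules and tooling already built for text chat apply unchanged, with transcription as the only new component. The alternative Riot mentions, classifying the raw audio directly, would capture tone and context that a transcript loses, but requires an entirely new model.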
“We’re looking at the technologies, and we’re trying to narrow down what we want to launch,” Hart said. “We are spending a lot of time and effort in this space, and we have a pretty good idea of the direction we’re going to take. But what we want is to have some audio to work with, to better understand which of the approaches we’re looking at is going to be the best. To do that, we have to be able to do some real processing, not just make an educated guess.”
To get to that answer as quickly as possible, he said, this first step of updating the privacy notice had to take effect.
Hart and Hymes also stated that some layer of human moderation will be included to ensure that whatever system is developed is functioning properly, and that it could eventually be rolled out in other languages and other titles, as the system is initially being developed for Valorant in North America.
Advances in machine learning and natural language processing are making this kind of development far more feasible than it was 10, or even two, years ago. But even in a world where a machine learning algorithm can accurately detect abusive language in all its nuance, there is still another obstacle.
Gaming, even from one title to another, has its own language. There is an entire lexicon of words and phrases used by gamers that aren't used in everyday life. This adds yet another layer of complexity to developing such a system.
Nevertheless, this is an important step toward ensuring that Riot Games titles, and hopefully other titles as well, become inclusive environments where anyone who wants to play feels safe and able to do so.
And Riot is careful to note that developing a healthy game is a holistic effort. Everything from game design to anti-cheat measures to behavioral guidelines and moderation has an impact on the player's overall experience.
Alongside this announcement, the company is also updating its terms of service with a revised global refund policy and new language around anti-cheat software for current and future Riot titles.