
Valorant thinks it can fix its toxicity and abuse problem by monitoring voice chats, but players disagree

Over the years, there have been many popular online multiplayer games, from Call of Duty: Warzone to Overwatch. Players love the draw of team-based games, and with the explosion of esports and esports teams (whose players make a tidy sum if they win events), the appeal has never been stronger. But as with many other things in life, not everything is sunshine and rainbows.

Currently, one of the most popular multiplayer games out there is Valorant. Unfortunately, its success is almost counterbalanced by the toxicity that plagues its community, a level of toxicity previously seen only in League of Legends (but that’s a story for another time). First of all, let’s take a look at what Valorant actually is.

What is ‘Valorant’?

Valorant is a free-to-play (F2P) five versus five (5v5) first-person shooter (FPS) developed by Riot Games (ironically, the same company behind League of Legends). The main game mode is a version of ‘Search and Destroy’, a common match type in many FPS games, but this one comes with a twist. The attacking team needs to plant a bomb (called a ‘spike’) at a designated site on the map while the defending team tries to stop them. Rounds last 100 seconds, and the first team to win 13 rounds wins the match, with a maximum of 25 rounds per game. It’s worth noting that if your entire team is wiped out before a round is over, you will automatically lose that round.

At the beginning of each round, players have 30 seconds to buy weapons and armour, and if they die, they’re forced to wait until the next round to respawn. Players also choose one of the many playable characters, called Agents, at the beginning of every match, each equipped with a multitude of abilities such as healing teammates or conjuring a protective wall out of thin air. Valorant additionally has different Acts, essentially big updates which introduce new Agents, maps and modes.

With its fresh twist on the FPS genre and its incorporation of some multiplayer online battle arena (MOBA) mechanics, which Riot Games is well known for, Valorant looks like a fun and interesting game to dive into. But it’s only when you do that the real problems begin to surface.

Toxicity and abuse within the community

Toxicity in video games isn’t a new thing. From the verbal abuse seen in League of Legends to the Fortnite streamer Ninja’s obscene outbursts and arguably even the practice of teabagging, online gaming is rife with immaturity and poor taste.

To get a better understanding of what is going on in the Valorant scene, SCREENSHOT spoke to Twitch streamer Cloutchazerr about her experiences with toxicity in the game. As an up-and-coming content creator, Clout streams a lot of Valorant and is an active part of the game’s community. Unfortunately, she has also been subjected to a tirade of abuse while playing.

“There are some people who don’t care, who are creepy because they know I’m a girl, and ask if I’m a little boy or a girl since I usually don’t have like an ‘uwu’ anime girl voice, you know? And some [players] are just straight up toxic, sexist and even racist when they know I’m Chinese,” she shared.

It seems that the abuse Clout receives isn’t even about her performance in the game; it mostly comes in the form of personal attacks, which is extremely concerning to see. She did admit to getting agitated herself, but never to the extent of the abuse she receives. “Sometimes I get heated when people are genuinely bad at the game but I’ve been banned so many times just by saying shit to my harasser. Who doesn’t get a little angry at people doing bad, you know? Like baiting, not shooting, not trading etc.”

The abuse has gotten so bad that Clout even considered leaving the game at one point. “Oh yeah, I’ve been talking about leaving but I can’t cause I’ve already spent too much on Valorant,” she joked. But despite the toxicity, she still can’t bring herself to leave. “Valorant is great when you actually play it for fun and with friends because it’s a socialising game too.”

Valorant has two tiers of play. You can jump into unrated matches, which are casual affairs, or participate in ranked matches, which are more serious and very competitive. It’s in the latter that Clout noted most of the toxicity lies. “There are some mad people on ‘unrated’ too, but I’d like to say 80 per cent are on ‘ranked’. Because it’s more serious, I guess.”

Throughout the interview, one theme kept cropping up: as a woman and a POC, Clout felt she was targeted far more than her male counterparts. “I’m not saying for other genders and for other people but tremendously being a woman in Valorant, we get treated so differently. My girlfriends that are my mods are mostly 13 to 15 [year-olds] and they get asked a lot if they are ‘boys’ and people always think that there are only two genders,” she explained.

“I feel like the ‘girls can’t game better than guys’ stereotype still applies heavily on Valorant, especially on the Singaporean server,” Clout continued. “It’s just normal to get shat on and cursed at, people being sexist, racist, toxic, etc. So when we get ‘good’ or non-toxic teammates, it’s just very surprising. Toxicity, racism and sexism is so normalised [in Valorant] that it’s sad to see honestly.”

Curb your tongue

Riot Games has announced that, starting 13 July 2022, it will begin collecting data from voice chats in North American Valorant games to combat the disruptive and despicable behaviour within the game. It will use this data to get its AI model “in a good enough place for a beta launch later this year,” as per the company’s statement.

“This is brand new tech and there will for sure be growing pains,” Riot Games wrote. “But the promise of a safer and more inclusive environment for everyone who chooses to play is worth it.” When asked whether or not she thought this would work, Clout told SCREENSHOT, “The thing that I hate about not having a mod for voice chat is that when people say the n-word, slurs, sexist or toxic things in general, it won’t be detected as fast as text chat because voice chat is usually a ‘he said/she said’ moment.”

“I think it’s better?” she continued. “I’m not sure though, I’d have to experience it myself, but for the most part, it is good. Good to monitor the ‘he said/she said’ situation.”

While Riot Games is actively and publicly working on making its game a safer and more pleasant place for people to socialise and game together, it seems that it will take a while before the fruits of its labour are seen. The sexist and racist abuse documented by Clout, and no doubt experienced by many other women and POCs throughout the community, is quite alarming—and even with the technology being developed to combat this kind of behaviour, there is a long, long way to go before online gaming spaces are as inclusive and as safe as we’d like them to be.

Should social media require ID to end online abuse? We look at the pros and cons

Ever read the book Lord of the Flies? To spare you the spoilers, the tale follows a group of schoolboys, stranded on a remote island, who slowly descend into the barbaric and brutal nature of humankind when stripped of social inhibitions. Why am I telling you this? It’s somewhat relevant to the despicable display we’ve seen on social media over the last few days. We’ve previously seen it with trolling, and now we’re seeing it in the spike of severe racial abuse on social media. When hiding behind social media’s shield of anonymity, often perceived as a consequence-free space, some, unfortunately, let their true colours show.

Psychologists have a specific term for this form of behaviour: the online disinhibition effect, the lack of restraint one feels when communicating online compared to doing so in person. And it’s a problem that’s only getting worse. But how do we tackle it? Some believe requiring ID to set up a social media account may be the answer, but it’s not quite as simple as it first seems.

The case for required ID to set up a social media account

On the surface, the argument seems fairly simple. In the face of the disgusting wave of racial abuse that followed England’s loss in the Euro 2020 final over the weekend, a fix may be to require every user to present valid identification. That way, if someone were to commit a hate crime online, it would be easier to identify the individual. Likewise, requiring ID for social media would help break the perception that spaces like Twitter or Facebook are ‘the Wild West of the internet’, hammering home that actions do in fact have consequences, even online.

And I have to admit, it’s a strong argument. It’s been backed by industry experts, who argue that social media companies must start requiring users to verify their identity when opening an account to end the idea that such internet platforms are “consequence-free” areas for abuse, according to the Evening Standard.

Dr Bill Mitchell, Director of Policy at BCS, The Chartered Institute for IT, said that people should be asked to verify their identity in order to use social media platforms, even going as far as to argue that this could be implemented without compromising users’ personal privacy. He said, “Despite the boycotts and some technical changes from big tech companies, some people still see social media as a consequence-free playground for racial abuse.”

“IT experts think these platforms should ask to verify their real ID behind account handles; at the same time, public anonymity is important to large groups of people and no-one should have to use their real name online and any verification details behind the account must be rigorously protected,” Mitchell continued.

But is it actually achievable to implement? Can we really screen all 3.96 billion people who use social media worldwide for identification? Recent research from BCS seems to suggest so, at least for those living in the UK. The study found that 56 per cent of tech experts believe linking social media accounts to true identities is technically achievable. Mitchell continued, “Tech experts want users to be accountable for what they say, and they see few technical barriers to verifying the real ID behind account handles.”

A double-edged sword

The premise of requiring ID for starting social media accounts has sparked wide debate online. In a recent tweet, musician James Arthur said: “ID verification to set up any sort of social media account seems like a no brainer to me? Why are we still letting tossers online hurl abuse at anyone they please without any repercussions? It’s a no brainer to me.”

Other users had a different take on the matter. Quite rightly, some expressed concern that handing ID to big social media companies could lead to privacy breaches and impinge on their freedom of speech. One user said, “I don’t want potential employers checking my social media profile for my political opinions. I don’t want to be doxxed by people who disagree with me. I want people with hostile governments to be free to express themselves.”

The truth is, the premise of mandatory identification for social media is a double-edged sword: both of the people above have valid arguments. In fact, I’d go as far as to say I wouldn’t trust Zuck with the data from my Spotify Wrapped, let alone my personal details, given the outright dangerous handling of data we’ve seen from Facebook over the last decade or so. However, it’s clear something needs to be done to tackle racist discourse online, and the current methods of preventing such behaviour are about as effective as reflective camouflage.

This debate isn’t new either. A petition pushing for mandatory ID to set up social media accounts, which gained over 620,000 signatures, was brought to parliament in May 2021. At the time, the government voted against requiring ID, stating it would restrict “all users’ right to anonymity.”

“Introducing compulsory user verification for social media would disproportionately impact users who rely on anonymity to protect their identity. These users include young people exploring their gender or sexual identity, whistleblowers, journalists and victims of abuse. Introducing a new legal requirement, whereby only verified users can access social media, would force these users to disclose their identity and increase risk of harm to their personal safety,” the government said in a statement.

In the debate, the government also concluded that “users without ID, or users who are reliant on ID from family members, would experience a serious restriction of their online experience, freedom of expression and rights. Research from the Electoral Commission suggests that there are 3.5 million people in the UK who do not currently have access to a valid photo ID.” Ironically, this hasn’t stopped the Tory party from pushing for mandatory ID for voting in the UK. A little contradictory, don’t you think?

Regardless, it’s clear that this debate isn’t quite as simple as it first appears. Although experts claim that we could implement such measures, this doesn’t necessarily mean that we should. Of course, Twitter has labelled the abuse as “abhorrent,” saying it has “absolutely no place” on its platform. It went on to highlight how its machine learning-based automation and human review have “swiftly removed over 1000 tweets and permanently suspended a number of accounts for violating our rules.” But is it enough? Many, especially those personally affected by such abuse, would argue it isn’t. Where do we go from here?