Ever read the book Lord of the Flies? To spare you the spoilers, the tale essentially follows a group of schoolboys, stranded on a remote island, who slowly descend into the barbaric and brutal nature of humankind when stripped of social inhibitions. Why am I telling you this? It’s somewhat relevant to the despicable display we’ve seen on social media over the last few days. We’ve previously seen it with trolling, and now, we’re seeing it in the spike of severe racial abuse on social media. When hiding behind social media’s shield of anonymity—often perceived as a consequence-free space—some, unfortunately, let their true colours show.
Psychologists have a specific term for this form of behaviour: the online disinhibition effect, which is the lack of restraint one feels when communicating online compared to doing so in person. And it’s a problem that’s only getting worse. But how do we tackle it? Some believe requiring ID to set up a social media account may be the answer—but it’s not quite as simple as it first seems.
On the surface, the argument seems fairly simple. In the face of the disgusting rise in racist abuse that followed England’s loss in the Euro 2020 final over the weekend, a fix may be to require every user to present valid identification. That way, if someone were to commit a hate crime online, it would be easier to identify the individual. Likewise, requiring ID for social media would help break the perception that spaces like Twitter or Facebook are ‘the Wild West of the internet’, hammering home that actions do in fact have consequences, even online.
And I have to admit, it’s a strong argument. It’s been backed by industry experts, who argue that social media companies must start requiring users to verify their identity when opening an account to end the idea that such internet platforms are “consequence-free” areas for abuse, according to the Evening Standard.
Dr Bill Mitchell, Director of Policy at BCS, The Chartered Institute for IT, said that people should be asked to verify their identity in order to use social media platforms—even going as far as to argue that this could be implemented without compromising users’ personal privacy. He said, “Despite the boycotts and some technical changes from big tech companies, some people still see social media as a consequence-free playground for racial abuse.”
“IT experts think these platforms should ask users to verify the real ID behind account handles; at the same time, public anonymity is important to large groups of people, no-one should have to use their real name online, and any verification details behind the account must be rigorously protected,” Mitchell continued.
But is it actually feasible to implement? Can we really screen all 3.96 billion people who use social media worldwide for identification? Recent research from BCS seems to suggest so, at least for those living in the UK. The study found that 56 per cent of tech experts believe linking social media accounts to true identities is technically achievable. Mitchell continued, “Tech experts want users to be accountable for what they say, and they see few technical barriers to verifying the real ID behind account handles.”
The premise of requiring ID for starting social media accounts has sparked wide debate online. In a recent tweet, musician James Arthur said: “ID verification to set up any sort of social media account seems like a no brainer to me? Why are we still letting tossers online hurl abuse at anyone they please without any repercussions? It’s a no brainer to me.”
Other users had a different take on the matter. Quite rightly, some expressed concern that handing ID to big social media companies could lead to privacy breaches and impinge on their freedom of speech. One user said, “I don’t want potential employers checking my social media profile for my political opinions. I don’t want to be doxxed by people who disagree with me. I want people with hostile governments to be free to express themselves.”
The truth is, mandatory identification for social media is a double-edged sword: both of the people above have valid arguments. In fact, I’d go as far as to say I wouldn’t trust Zuck with my Spotify Wrapped data—let alone my personal details—given the outright dangerous handling of data we’ve seen from Facebook over the last decade or so. However, it’s clear something needs to be done to tackle racist discourse online—and the current methods of preventing such behaviour are about as effective as reflective camouflage.
This debate isn’t new either. A petition pushing for mandatory ID to set up social media accounts, which gained over 620,000 signatures, was brought to parliament in May 2021. At the time, the government rejected the proposal, stating it would restrict “all users’ right to anonymity.”
“Introducing compulsory user verification for social media would disproportionately impact users who rely on anonymity to protect their identity. These users include young people exploring their gender or sexual identity, whistleblowers, journalists and victims of abuse. Introducing a new legal requirement, whereby only verified users can access social media, would force these users to disclose their identity and increase risk of harm to their personal safety,” the government said in a statement.
In the debate, the government also concluded that “users without ID, or users who are reliant on ID from family members, would experience a serious restriction of their online experience, freedom of expression and rights. Research from the Electoral Commission suggests that there are 3.5 million people in the UK who do not currently have access to a valid photo ID.” Ironically, this hasn’t stopped the Tory party from pushing for mandatory ID for voting in the UK. A little contradictory, don’t you think?
Regardless, it’s clear that this debate isn’t quite as simple as it first appears. Although experts claim that we could implement such measures, this doesn’t necessarily mean that we should. Of course, Twitter has labelled the abuse as “abhorrent,” saying it has “absolutely no place” on its platform. It went on to highlight how its machine learning-based automation and human review have “swiftly removed over 1000 tweets and permanently suspended a number of accounts for violating our rules.” But is it enough? Many, especially those personally affected by such abusive statements, would argue it isn’t. Where do we go from here?
Watching porn online will never be the same, at least in the UK. Now known as ‘the porn block’, the age-verification law for commercial porn sites was passed as part of the 2017 Digital Economy Act and was initially expected to be in place by April 2018. But because of its controversial nature, many delays stopped it from being put into action. Although a precise date hasn’t been set just yet, Margot James, Minister for the Department for Digital, Culture, Media and Sport, told MPs, “We expect it to be in force by Easter of next year”.
While we wait for a commencement date, we need to question what this ‘block’ will actually change and weigh the pros and cons. The problem lies not only in the fact that it might change porn and the way it is perceived—because let’s be honest, a lot has to change in the porn industry—but also in what it means for our freedom and our right to privacy. Imagine how many teenagers would give up on expanding their sexual journey through PornHub’s best picks if they had to give out their phone number and email address first, let alone their parents’ credit card details.
This new age-check requirement will apply to any website or online platform that provides pornography. Businesses that refuse to comply will be fined up to £250,000, and regulators will be able to block porn websites if they fail to show that they are denying access to under-18s. While the main idea behind this law makes perfect sense—to protect minors from being exposed to porn at too young an age—many of its other aspects and repercussions can be criticised.
The most obvious inconvenience is a practical one: how users will actually prove their age. The first option, called AgeID, will direct users to a non-pornographic page, where they will be asked to provide personal data—credit card details, phone numbers and emails—to prove their age.
The second option will require users to buy age-verification cards that are only valid for 24 hours. These cards will contain a code to be entered on the page to prove the user is over 18. They could cost up to £8 and a trip to your local off-licence.
Although both options sound tedious, it should be said that any young child having access to pornographic content is concerning. A study commissioned by the National Society for the Prevention of Cruelty to Children (NSPCC) shows that 53 per cent of 11-16-year-olds surveyed have seen sexually explicit content online. With that in mind, it is understandable that people fear children are becoming more and more desensitised to certain things. But what happened to parental controls and privacy settings?
Looked at from another angle, though, this law reveals more problems. No matter how much its critics choose to deny it, pornography has a big influence on us as a society. Yes, it reflects misogynistic views, unrealistic depictions of bodies, stereotypical ideas and so much more. But it can also influence our vision of gender, intimacy and beauty in good ways.
With more and more independent pornographic film producers coming onto the scene, the porn industry is slowly starting to show a more artistic and realistic side. More focus is now put on the diversity of sex and on queer, trans and non-Western people. As flawed as pornography can be, it can be used to communicate comprehensive and open-minded sex education at a time when formal sex education remains restricted in many ways and in many countries.
And then there is the issue of privacy that this law poses. No one wants to give out that kind of private information when landing on a porn website. The company MindGeek—which owns PornHub, YouPorn and others—is already notorious for its multiple data breaches (seven since 2012). This just shows how risky it could be to put your information out there when trying to watch explicit content—especially since MindGeek will be the company operating AgeID.
This law will serve the corporate interests of the biggest adult entertainment companies while putting users’ personal information at risk. The UK’s ‘porn block’ could mean data collection, leaks and blackmail. Are you willing to take that risk just for a bit of ‘adult content’? As for protecting underage viewers: if they don’t already know how to change their IP address, they’ll always be able to look at explicit content on social media. In other words, the ‘porn block’ solves one problem by creating many more—because top-down restrictions aren’t always the right solution.