
Facebook’s lies, Twitch’s leaks and TikTok’s takeover: here are Ofcom’s new rules

As Facebook’s failures and conscious negligence in protecting its users from explicit content continue to surface, the Office of Communications (Ofcom) is stepping up to enforce stricter rules for certain platforms. In the UK, apps like TikTok and Twitch could soon face large fines and heavy restrictions if they fail to adhere to the government-approved regulatory body’s new rules. US Representative Alexandria Ocasio-Cortez (commonly referred to as AOC) argues that a major obstacle to monitoring Facebook is its monopoly over its own ecosystem and all its acquired platforms, and that it should be subject to independent checks in the same way as TikTok and Twitch.

The politician wrote on Twitter, “If Facebook’s monopolistic behaviour was checked back when it should’ve been (perhaps around the time it started acquiring competitors like Instagram), the continents of people who depend on WhatsApp and IG for either communication or commerce would be fine right now. Break them up.” She further stated that such a monopoly could have potentially dangerous implications for democracy as we know it.

Ofcom is an authoritative regulatory body, approved by the UK government, that monitors and oversees industries spanning telecommunications, broadcasting (both television and radio) and even the postal sector in the country. The organisation’s responsibilities cover a multitude of areas, including complaints, codes and policies, licensing, competition, research and protecting the radio spectrum from abuse. Now, it has released new rules for video-sharing platforms (VSPs) operating in the UK.

In a first for Europe, VSPs like Snapchat, Twitch, Vimeo, OnlyFans (which has had its fair share of content moderation controversies) and of course, TikTok are now subject to million-pound fines—or in more serious cases site-wide suspensions—imposed by Ofcom if they fail to clamp down on hate speech, child sexual abuse material and inappropriate content evident on their platforms. While the broadcasting regulator’s main priority is child sexual abuse material, other rules include a crackdown on terrorism-related content and racism.

Ofcom found that a third of VSP users have come across such content on the sites listed above. So, in order to protect those users, the organisation, although itself unable to assess individual pieces of content, will closely monitor a platform’s ability to act swiftly and effectively in removing content that violates guidelines. VSPs will be required to prepare and properly enforce clear guidelines for uploading content, develop an easy-to-use reporting and complaints process, and apply robust age-verification restrictions to certain content.

Users, it seems, have had enough of witnessing such content online: the whole of Twitch has been leaked. VGC has reported that an anonymous user posted a 125 GB torrent link to 4chan, believed to contain the platform’s comprehensive source code history, reports of creator payouts, Twitch clients and more; the leak (which is publicly available) was reportedly carried out to “foster more disruption and competition in the online video streaming space” because “their community is a disgusting toxic cesspool.” This is not the first time Twitch has come under fire from its users for its failure to moderate abuse, as many boycotted the site over ‘hate raids’ targeted at black and LGBTQ streamers.

If the new rules set out by the UK watchdog are violated or not rigorously upheld, Ofcom will be able to impose a penalty of up to 5 per cent of a platform’s turnover, or £250,000. These rules and punishments only apply to VSPs with a UK regional headquarters, meaning platforms like Netflix and YouTube are exempt from Ofcom’s oversight and are instead monitored by regulators in the countries where they are based. YouTube, for example, would answer to the Irish authorities.

Ofcom’s Chief Executive, Dame Melanie Dawes, stated that “online videos play a huge role in our lives now, particularly for children… But many people see hateful, violent or inappropriate material while using them.” She continued, “The platforms where these videos are shared now have a legal duty to take steps to protect their users. So we’re stepping up our oversight of these tech companies, while also gearing up for the task of tackling a much wider range of online harms in the future.”

Facebook is building an Instagram for kids. Could it put young users at risk?

On 16 March 2021, Instagram announced a major security update aiming to make the platform safer for younger audiences. The measure introduced restrictions on Direct Messages (DMs) between teens and adults they don’t follow, constant prompts to be cautious about online interactions as well as encouragement to make their accounts private. Barely two days later, BuzzFeed News obtained an internal Facebook post confirming the company’s plans to launch a separate version of Instagram altogether for children under the age of 13. What could possibly go wrong, right?

“I’m excited to announce that going forward, we have identified youth work as a priority for Instagram and have added it to our H1 priority list,” Vishal Shah, Instagram’s vice president of product, wrote on the employee board. The message outlined two ‘youth pillars’ the platform would be focusing on: “accelerating integrity and privacy work to ensure the safest possible experience for teens” along with “building a version of Instagram that allows people under the age of 13 to safely use the platform for the first time.”

“Kids are increasingly asking their parents if they can join apps that help them keep up with their friends,” Joe Osborne, a Facebook spokesman, said in a supporting statement. Osborne further highlighted the absence of child-friendly social networking apps in the present market—hence the company’s work on building additional products like Messenger Kids to fill the gap.

“We’re exploring bringing a parent-controlled experience to Instagram to help kids keep up with their friends, discover new hobbies and interests and more,” he added. Given the app’s popularity among teens, Instagram ultimately seeks to tap into children under the age of 13 as a viable growth segment. Shifting this potential segment onto a separate platform not only helps Instagram regulate the experience but also expands its user base and ‘future-proofs’ the app’s demand in the lifestyle of the next generation.

However, child safety experts and health advocates were quick to jump onto the scene, digging up the demographic’s brushes with predators on the social media platform—thereby urging Facebook to scrap all plans of implementing ‘Kidstagram’.

A sticky situation rooted in cons

In a letter coordinated by the non-profit youth advocacy group Campaign for a Commercial-Free Childhood, more than 20 groups and dozens of individual researchers labelled ‘Instagram for kids’ as a tool that will “put young users at great risks.” Citing a “growing body of research” demonstrating the negative effects of social media on the youth, the letter implored Mark Zuckerberg to scrap the project.

“Instagram, in particular, exploits young people’s fear of missing out and desire for peer approval to encourage children and teens to constantly check their devices and share photos with their followers,” the letter read, adding how the platform’s “relentless focus on appearance, self-presentation, and branding presents challenges to adolescents’ privacy and wellbeing.”

The letter further highlighted how the effects of Instagram, already shown to be negative for teens, will be even graver for those under the age of 13. “Young children are highly persuadable by algorithmic prediction of what they might click on next, and we are very concerned about how automated decision making would determine what children see and experience on a kids’ Instagram platform,” the letter said.

Although the groups agree that “something needs to be done to protect the millions of children who have lied about their age to create Instagram accounts,” they outlined how “launching a version of Instagram for children under 13 is not the right remedy.” The letter thereby urged the tech giant to abandon its plans that are still “in the early planning stages.”

“Doing so would send a strong message that Facebook understands the vast concerns about the effects its business model is having on young people and is open to solutions that truly benefit children and teens, not just Facebook’s market share,” the letter concluded.

Although Facebook has yet to comment on the letter, at a hearing on the company’s antitrust concerns earlier this year, Zuckerberg shrugged off all criticisms of the platform, stating that “there is clearly a large number of people under the age of 13 who would want to use a service like Instagram” to “stay connected with friends.”

A challenge based on previous testaments

Let’s be honest here, the backlash that ‘Instagram for kids’ is getting isn’t surprising, especially given the case studies of Messenger Kids and YouTube Kids.

The tech giant’s previous attempt to dip its toes into the coveted market segment, with Messenger Kids in 2017, was quick to run into problems. Two years after its launch, Facebook uncovered a major design flaw that made it possible for kids to enter group chats with strangers without their parents’ authorisation. In the following weeks, Facebook quietly shut down those group chats and alerted users, without making any public statement disclosing the issue.

YouTube is yet another platform that has run into trouble after launching its child-friendly alternative. Launched in 2015, YouTube Kids has had to crack down on inappropriate videos being displayed to its users. Earlier this month, the House Subcommittee on Economic and Consumer Policy hammered the service for its low-quality content, high degree of product placement and insufficient content moderation. Just last week, Viacom, Disney, and 10 advertising technology firms came to a settlement in a lawsuit that accused these companies of deploying tracking software in child-focused apps without parental consent.

While Adam Mosseri, the head of Instagram, swears by the upcoming platform’s transparency and control features, stating that it will carry no ads at all, a plethora of researchers have accused the tech giant of “normalising the idea that social connections exist to be monetised.”

“From a privacy perspective, you’re just legitimising children’s interactions being monetised in the same way that all of the adults using these platforms are,” said Priya Kumar, a PhD candidate at the University of Maryland. In an interview with BuzzFeed News, Kumar mentioned how many of the children using YouTube Kids often end up migrating to the main platform either by choice or by accident, a bane for parents but a boon for companies. “Just because you have a platform for kids, it doesn’t mean the kids are going to stay there,” she added.

Although Messenger Kids and YouTube Kids have raised concerning issues, some independent entertainment companies have successfully tapped into the coveted market segment. Moonbug, for example, is a media network delivering developmentally appropriate and accessible content to children all over the world. Using the data it collects to “understand the needs, wants and current trends” of its young audience, the on-demand video platform distributes fun and safe content via its kid-friendly app.

While Instagram is constantly introducing new security and anti-bullying tools, it might still be far from solving the problem altogether. After all, if kids under 13 can lie about their age on Instagram, what stops adults from lying about their age on such ‘under-13’ platforms? Maybe ‘Kidstagram’ should remain a hashtag for the time being.