Facebook vows to ban Taliban-related content. How exactly?

By Monica Athnasious

Published Aug 17, 2021 at 01:27 PM

Reading time: 3 minutes


Facebook has told BBC News that it has now officially banned Taliban-related content from its platform. The news comes after a devastating week in Afghanistan in which the extremist organisation took the country's capital, Kabul, virtually unopposed and seized the presidential palace. The Taliban, like other radical groups, has long used social media to spread its messages and propaganda. Now, in the wake of this swift takeover, questions are being raised about the challenges social media giants will face in dealing with content made by the Taliban.

In the report, Facebook explained how it follows the “authority of the international community” when making such decisions. A spokesperson for the social media giant told the BBC, “The Taliban is sanctioned as a terrorist organisation under US law and we have banned them from our services under our Dangerous Organisation policies. This means we removed accounts maintained by or on behalf of the Taliban and prohibit praise, support, and representation of them.”

The spokesperson added that the team dedicated to this moderation effort would comprise Afghanistan experts, namely native speakers with in-depth knowledge of the local context who would be better placed to spot violating content on the platform.

Facebook’s ‘Dangerous Organisation’ policies

Facebook’s Dangerous Organisation moderation policies state that the company assesses entities that proclaim a “violent mission” under three tiers. Tier 1 covers those that engage in violence, hatred and human rights violations offline, including “terrorist, hate and criminal organisations.” Under this tier, the platform also prohibits praise of content that showcases violence, terrorism, murder or racist ideologies, as well as praise of the perpetrators of such hatred.

Its second tier “focuses on entities that engage in violence against state or military actors but do not generally target civilians—what we call ‘Violent Non-State Actors.’ We remove all substantive support and representation of these entities, their leaders, and their prominent members. We remove any praise of these groups’ violent activities.”

The third and final tier states that any content pertaining to hate speech or demonstrating intent to engage in violence would also be removed. This would include “militarised social movements, violence-inducing conspiracy networks and individuals and groups banned for promoting hatred.”

Moderation failures

In spite of these regulations, social media platforms, Facebook chief among them, have a long history of failing to enforce such policies. An obvious example is the racist abuse England players suffered online after the Euro 2020 final. Recent studies have also suggested that the use of emojis in abusive posts can shield them from being taken down, due to flaws in moderation algorithms. Even in the Taliban context, the moderation has yet to prove effective: Reuters reported that WhatsApp, Facebook’s end-to-end encrypted messaging service, is still being used by Taliban members to communicate with Afghans, despite the group’s prohibition under the Dangerous Organisation policies.

Facebook’s Dangerous Organisation policies are not without criticism either. Although the company claims the policies make room for “neutral content”, posts in which users comment on, report or discuss these subjects, it has wrongfully taken down such posts in the past. Critics argue that the policies’ definitions are too broad and have called on the company to spell them out in greater detail.

On top of this, Facebook has yet to address the impact that monitoring Taliban content will have on its employees. The individuals recruited for the job (as mentioned earlier) will be Afghans themselves, and prolonged exposure to this content could be deeply damaging to their mental health. It was only last year that the company agreed to pay $52 million to its content moderators over post-traumatic stress disorder (PTSD) claims, as first investigated by The Verge.

No uniformity

Other social media platforms have also been questioned about how they will approach the issue; many are split on the proper response and have not followed in Facebook’s footsteps. Taliban spokesmen have reportedly been using Twitter to post direct updates on the takeover to hundreds of thousands of followers. When asked by Reuters about the group’s use of the platform, Twitter pointed to its policies on violent content but declined to say how it classifies or defines such groups.

In its investigation, Reuters was likewise unable to get a clear answer from the video platform YouTube on whether it bans the Taliban. The company said only that it relied on governmental definitions of what constitutes a “Foreign Terrorist Organisation (FTO) in order to make any enforcements.” As YouTube pointed out, the US does not currently designate the Taliban as an FTO; it is instead categorised as a “specially designated global terrorist.” Does the platform genuinely require these specific designations, or is this a get-out-of-jail-free card?
