Facebook has a VIP programme that lets celebrities and politicians avoid moderation

According to a new report in The Wall Street Journal, Facebook has for years operated a little-known programme called ‘XCheck’, or ‘cross check’, which allows celebrities, politicians, and other members of America’s elite to avoid the kinds of moderation policies that the average user is subject to. In other words, they’ve been receiving special treatment because of their fame and have been allowed to play by their own rules.

According to the publication’s investigation, the programme was created to avoid “PR fires,” the public backlash that occurs when Facebook makes a mistake affecting a high-profile user’s account. Under XCheck, if one of these accounts breaks the rules, the violation is sent to a separate team and reviewed by Facebook employees rather than by the non-employee moderators who typically handle rule-breaking content.

Although Facebook had previously divulged the existence of cross check, it neglected to mention that “most of the content flagged by the XCheck system faced no subsequent review,” as stated in The Wall Street Journal’s report. In one incident, Brazilian football star Neymar da Silva Santos Júnior posted nude photos of a woman who had accused him of sexual assault. In case you weren’t sure, such a post is a blatant violation of Facebook’s rules on non-consensual nudity, and rule-breakers are typically banned from the platform. And yet the cross check system “blocked Facebook’s moderators from removing the video,” and the post was viewed nearly 60 million times before it was eventually removed. Neymar’s account faced no other consequences.

Other recipients of such privileges have included former President Donald Trump (prior to his two-year suspension from the platform earlier this year), his son Donald Trump Jr., right-wing commentator Candace Owens, and Senator Elizabeth Warren, among others. In most cases, the individuals who are ‘whitelisted’, or given a pass on moderation enforcement, are unaware that it is happening.

Employees at Facebook seem to have been aware that XCheck is problematic for quite some time. “We are not actually doing what we say we do publicly,” company researchers said in a 2019 memo entitled ‘The Political Whitelist Contradicts Facebook’s Core Stated Principles’. “Unlike the rest of our community, these people can violate our standards without any consequences.”

Last year alone, the cross check system enabled rule-breaking content to be viewed more than 16 billion times before being removed, according to internal Facebook documents cited by The Wall Street Journal. The report also says Facebook ‘misled’ its Oversight Board, which pressed the company on its cross check system back in June when weighing in on how the company should handle Trump’s “indefinite suspension.” The company told the board at the time that the system only affected “a small number” of its decisions and that it was “not feasible” to share more data.

“The Oversight Board has expressed on multiple occasions its concern about the lack of transparency in Facebook’s content moderation processes, especially relating to the company’s inconsistent management of high-profile accounts,” the Oversight Board shared on Twitter. “The Board has repeatedly made recommendations that Facebook be far more transparent in general, including about its management of high-profile accounts, while ensuring that its policies treat all users fairly.”

‘What does Facebook have to say about all that?’ you might be wondering. The social media giant told The Wall Street Journal that its reporting was based on “outdated information” and that the company has been trying to improve the cross check system. “In the end, at the center of this story is Facebook’s own analysis that we need to improve the program,” Facebook spokesperson Andy Stone wrote in a statement. “We know our enforcement is not perfect and there are tradeoffs between speed and accuracy.”

These recent revelations could lead to new investigations into Facebook’s content moderation policies. Already, some information related to cross check has been “turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection,” according to The Wall Street Journal. It’s not a good look, given that just a few weeks ago Facebook received backlash after one of its AI recommendation systems asked users if they wanted to “keep seeing videos about primates” under a newspaper video featuring black men.

Watch out Zucko, the cracks are showing.

Facebook apologises after its AI labels black men ‘primates’

One of the platform’s artificial intelligence recommendation systems asked Facebook users who watched a newspaper video featuring black men whether they wanted to “keep seeing videos about primates.” It’s only the latest in a long-running series of errors that have raised concerns over racial bias in AI.

Facebook told the BBC it “was clearly an unacceptable error,” disabled the system and opened an investigation. “We apologise to anyone who may have seen these offensive recommendations,” the social media giant continued.

Back in 2015, Google came under fire after its new Photos app categorised photos in one of the most racist ways possible. On 28 June, computer programmer Jacky Alciné found that the feature kept tagging pictures of him and his girlfriend as “gorillas.” He tweeted at Google asking what kind of sample images the company had used that would allow such a terrible mistake to happen.

“Google Photos, y’all fucked up. My friend’s not a gorilla,” read his now-deleted tweet. Google’s chief social architect Yonatan Zunger responded quickly, apologising for the mistake: “No, this is not how you determine someone’s target market. This is 100% Not OK.”

The company said it was “appalled and genuinely sorry,” though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word “gorilla.”

In July 2020, Facebook announced it would form new internal teams to examine whether its algorithms are racially biased. The recent “primates” recommendation “was an algorithmic error on Facebook” and did not reflect the content of the video, a representative told BBC News.

“We disabled the entire topic-recommendation feature as soon as we realised this was happening so we could investigate the cause and prevent this from happening again. As we have said, while we have made improvements to our AI, we know it’s not perfect and we have more progress to make,” they continued.

In May 2021, Twitter admitted racial biases in the way its “saliency algorithm” cropped previews of images. Studies have also shown biases in the algorithms powering some facial recognition systems.

Research has repeatedly shown that AI systems are often biased. Over the last few years, society has begun to grapple with just how easily human prejudices can find their way into AI systems. Being aware of these risks and seeking to minimise them is an urgent priority at a time when many firms are looking to deploy AI solutions. Algorithmic bias can take many forms, including gender bias, racial prejudice and age discrimination.

First and foremost, however, we need to recognise that AI isn’t perfect. Developing more inclusive algorithms, with the specific goal of removing social bias, is the only way to prevent mistakes like these from happening again. Until then, AI will continue to make racist mistakes, driven by human error, for years to come.