The Twitter account Politics For All (PFA) was permanently banned from the platform over the weekend due to “multiple or severe violations” of the company’s manipulation and spam rules. A Twitter spokesperson further noted that the news aggregator had been “artificially amplifying” its audience and would not be allowed to return.
The suspension extends to News For All, Football For All, and the personal account of Nick Moar, the 19-year-old who founded the accounts while he was still in school. PFA’s popularity grew rapidly over the course of 2021 by “aggressively aggregating news stories published and reported by mainstream outlets,” The Guardian wrote when reporting on the matter. “Its understanding of what would go viral on Twitter attracted hundreds of thousands of followers, including MPs and government ministers,” the publication continued.
But Moar’s knack for finding viral news was not the only problem Twitter had with his accounts: PFA was also accused of distorting stories by focusing on the specifics most likely to trend. Mainstream journalists also complained that the account’s use of emojis in its breaking news posts would often attract more social media shares than the same posts originally shared by the outlets that actually reported the stories. On the surface, it looks like PFA was banned simply for having a clearer understanding of how social media works than mainstream news outlets do. But it should also be noted that the use of emojis was recently found to shield abusive posts from being taken down. Does the same apply to biased and misleading posts?
In response to the permanent ban, a PFA insider told The Telegraph, “The fact that Twitter will allow The Taliban on their platform, but not a simple news aggregator is quite something. We will be appealing this decision.”
Many Twitter users had come to see the account as a valuable source of information for disrupting political news cycles: it highlighted stories that might otherwise have gone unnoticed before publishing a second tweet linking to the original article. Its fan base included the likes of former Manchester United footballer Gary Neville, who was among those calling for the account to be reinstated.
So @Twitter has permanently banned @politicsforall . A very dark move. What the hell is going on? #ReinstatePoliticsForAll
— Gary Neville (@GNev2) January 5, 2022
Moar, who launched the first account when he was 17, is a Conservative supporter and an advocate for Brexit, but PFA aimed to be impartial (emphasis on aimed). “Artificially amplifying or disrupting conversations through the use of multiple accounts is a violation of the Twitter rules,” the message from the company stated. “This includes overlapping accounts: operating multiple accounts with overlapping personas or substantially similar content. Mutually interacting accounts: operating multiple accounts that interact with one another in order to inflate or manipulate the prominence of specific tweets and accounts. And coordination: creating multiple accounts to post duplicative content or create fake engagement,” it continued.
Twitter’s ban of three relatively popular news aggregation accounts could attract political scrutiny of the platform. Social media companies will soon be regulated by Ofcom under the forthcoming online harms legislation. Let’s just hope this doesn’t mean that our beloved @SimplePolitics will get the sack too.
A week ago, Twitter unveiled its new ‘Super Follow’ feature, which will allow users who opt in to charge followers $4.99 a month for extra content, including subscriber-only newsletters, deals and discounts, and exclusive tweets. Twitter users were quick to bash the OnlyFans and Patreon hybrid.
This week, the social media platform has made another move bound to upset some of its users; it is working on a new strike system that could lead to some users getting permanently banned for promoting vaccine misinformation.
Like Facebook did in mid-February, Twitter had previously banned harmful anti-vaxxer propaganda targeting the COVID-19 vaccines out of concern that it could make people more hesitant to get vaccinated. Now, the social media platform is adding more layers to its approach in order to make it more effective, a move that other platforms like Facebook and YouTube could certainly learn from.
On Monday 1 March, Twitter announced in a blog post that tweets deemed to be harmful misinformation will be labelled with links directing people to content curated by Twitter, public health resources, or the company’s rules. At the same time, users who continue to post such tweets will be subject to a strike policy: a user who repeatedly posts vaccine misinformation and accumulates five strikes could be permanently suspended from the platform.
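For a rough sense of how an escalating strike system like this might work in practice, here is a minimal Python sketch. Only the five-strike threshold comes from Twitter’s announcement; the class, function names and the wording of the outcomes are entirely made up for illustration and are not Twitter’s actual implementation.

```python
# Hypothetical sketch of a strike-based enforcement ladder.
# Only the five-strike permanent-suspension threshold is taken from
# Twitter's 1 March announcement; everything else is illustrative.

from collections import defaultdict

PERMANENT_SUSPENSION_THRESHOLD = 5  # strikes, per the announcement


class StrikeLedger:
    def __init__(self):
        self._strikes = defaultdict(int)  # user_id -> strike count

    def record_violation(self, user_id: str) -> str:
        """Add a strike for a rule-breaking tweet and report the outcome."""
        self._strikes[user_id] += 1
        count = self._strikes[user_id]
        if count >= PERMANENT_SUSPENSION_THRESHOLD:
            return f"{user_id}: {count} strikes -> permanent suspension"
        return f"{user_id}: {count} strikes -> tweet labelled, strike recorded"


ledger = StrikeLedger()
for _ in range(5):
    print(ledger.record_violation("example_user"))
```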
“Our goal with these product interventions is to provide people with additional context and authoritative information about COVID-19,” said the company. “Through the use of the strike system, we hope to educate people on why certain content breaks our rules so they have the opportunity to further consider their behavior and their impact on the public conversation.”
The new labels and strikes mentioned above have not yet been fully rolled out. To start with, Twitter says that labels will only be applied by human moderators, beginning with content in English. This will allow the social network to train its algorithms to eventually make these rulings on their own, a process that will take some time to develop. As Recode reported last year, Twitter’s automated labelling appeared to flag posts that weren’t misinformation simply because of the keywords they used.
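To see why that kind of keyword-driven automation misfires, consider the toy example below. It is purely hypothetical and in no way Twitter’s classifier; the watchlist and function are invented for illustration, and the point is simply that a keyword match catches a debunking tweet just as readily as the myth it debunks.

```python
# Hypothetical illustration of naive keyword flagging, not Twitter's system.
# A simple watchlist match cannot tell misinformation apart from a debunking.

WATCHLIST = {"5g", "microchip"}  # illustrative terms only


def naive_flag(tweet: str) -> bool:
    """Flag a tweet if it contains any watchlisted keyword."""
    words = {word.strip(".,!?").lower() for word in tweet.split()}
    return not WATCHLIST.isdisjoint(words)


print(naive_flag("5G towers spread the virus"))        # True: misinformation
print(naive_flag("No, 5G does not spread the virus"))  # True: the debunk gets flagged too
```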
But labels and strikes are not the only tools Twitter is deploying against vaccine misinformation. In late January, the social media platform also announced that it was developing a new tool called Birdwatch, designed to crowdsource expertise and beat back false narratives in a Wikipedia-like forum that will eventually be connected to Twitter’s main app. The company, as it has throughout the pandemic, has been trying to elevate authoritative voices like Anthony Fauci’s to speak on vaccine-related issues. It’s also working with the White House to clamp down on vaccine misinformation.
But misinformation about the coronavirus pandemic doesn’t stop at vaccines; Twitter also started applying labels to COVID-19 claims that it deemed misleading but not harmful enough to warrant removal, such as the idea that 5G cellular networks were somehow linked to the virus.
How well Twitter’s new policies will work in actually curbing vaccine misinformation remains to be seen. Experts have highlighted that not all anti-vaccine content is framed in terms of factual claims, and have warned that simply taking down false information about vaccines isn’t always the best way to curb vaccine hesitancy, as the US’ measles outbreaks have previously shown.