
Twitter shadowban: what is it and how can you tell if you’re shadow banned?

What does being shadowbanned mean?

Shadowbanning is the act of blocking or partially blocking a user or their content from an online community. But here’s the trick: anyone who gets shadowbanned is not notified, and most users only realise it after a while. It’s a simple way for social media companies to let spammers keep spamming without anyone else in the community (or outside of it) seeing what they do.

Most users only realise they’ve been shadowbanned after noticing a significant dip in likes on new posts, or when someone else looking for their account finds that they are unable to tag or search for it.

If you want to learn more about getting shadowbanned on Instagram, we previously wrote about how you can tell and what you can do against it. However, if you think you might be shadowbanned on Twitter or simply want to learn more about the different types of shadowbans on the social media platform, here’s everything you need to know.

Just like on Instagram, your Twitter account can get shadowbanned too—the platform even has different kinds of shadowbans depending on what it needs to penalise you for. Interestingly, in July 2018, the company wrote a blog post on the matter titled Setting the record straight on shadow banning.

The point of the article was clear: Twitter doesn’t shadowban its users—or so it says. Most websites and apps deny that they shadowban, which is why, technically, there’s no way to know for sure that it has happened to you.

If you suspect that your account has been shadowbanned, a change in the platform’s search or newsfeed algorithm might actually be to blame. “Since the algorithms are the property of social media companies, it’s not in their best interest to reveal everything about them publicly,” explains digital marketer Neil Patel.

Regardless of whether you’ve been penalised deliberately or accidentally, the effect remains the same: hardly anyone can see your posts.

Twitter shadowbanning

As mentioned above, Twitter claimed that it does not shadowban. However, in the same blog post, it also says that it “ranks tweets and search results” to “address bad-faith actors.” In other words, if Twitter thinks you’re a spammer or a troll, its algorithm will penalise your content.

“Bad-faith actors” is simply the platform’s way of describing someone whose content needs to be demoted. But what are the specific factors that Twitter uses to tell if you’re a “bad-faith actor”?

Reasons you might be shadowbanned on Twitter

– Your account is less than a year old
– Your account has fewer than 500 followers
– You haven’t confirmed the email address linked to your Twitter account
– You followed or retweeted a tweet from a suspicious/spam account
– You’ve been muted by other Twitter users
– You’ve been retweeted by a suspicious/spam account
– You’ve been blocked by other Twitter users
– You haven’t uploaded a picture to your profile
– You’ve followed too many Twitter accounts in too little time
– You’ve posted too much in too little time
– You’ve been copying and pasting the same tweet too many times (same applies to images and links) 
– You’ve been engaging a lot with accounts that don’t follow you
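
To make the list above more concrete, here is a minimal, purely illustrative Python sketch that checks an account against a few of these factors. Every field name and threshold in it is an assumption made up for demonstration; Twitter has never published the exact signals or weights it uses.

```python
from dataclasses import dataclass

# Purely illustrative: the fields and thresholds below are assumptions,
# not Twitter's actual signals or weights.
@dataclass
class Account:
    age_days: int
    followers: int
    email_confirmed: bool
    has_profile_picture: bool
    follows_last_hour: int
    tweets_last_hour: int
    duplicate_tweet_ratio: float  # share of recent tweets that are copy-pasted

def shadowban_risk_flags(account: Account) -> list[str]:
    """Return which of the commonly cited risk factors apply to an account."""
    flags = []
    if account.age_days < 365:
        flags.append("account is less than a year old")
    if account.followers < 500:
        flags.append("fewer than 500 followers")
    if not account.email_confirmed:
        flags.append("email address not confirmed")
    if not account.has_profile_picture:
        flags.append("no profile picture")
    if account.follows_last_hour > 50:   # arbitrary example threshold
        flags.append("following too many accounts in too little time")
    if account.tweets_last_hour > 30:    # arbitrary example threshold
        flags.append("posting too much in too little time")
    if account.duplicate_tweet_ratio > 0.5:
        flags.append("copy-pasting the same tweet too many times")
    return flags

if __name__ == "__main__":
    me = Account(age_days=90, followers=120, email_confirmed=False,
                 has_profile_picture=True, follows_last_hour=5,
                 tweets_last_hour=2, duplicate_tweet_ratio=0.1)
    for flag in shadowban_risk_flags(me):
        print("possible risk factor:", flag)
```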

What can you do to avoid getting shadowbanned on Twitter?

First of all, make sure you have completed your Twitter profile. Start by confirming your email address and uploading a profile picture. Secondly, Twitter (like any other social media platform) hates spammers, so don’t spam people and don’t be overly promotional. If you’re trying to sell a product or service and are posting too much, other users might block or report your account, which can contribute to a shadowban.
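
If you schedule tweets with a bot or another tool, one easy precaution against looking spammy is to enforce a minimum gap between posts yourself. The sketch below is only a rough illustration of that idea: post_tweet is a hypothetical placeholder (not part of any real Twitter library), and the ten-minute gap is an arbitrary example, not a documented Twitter limit.

```python
import time

def post_tweet(text: str) -> None:
    # Placeholder: a real tool would call whatever Twitter client you actually use.
    print(f"posting: {text}")

def post_slowly(tweets: list[str], min_gap_seconds: float = 600) -> None:
    """Post a batch of tweets with a self-imposed gap to avoid spam-like bursts."""
    for i, text in enumerate(tweets):
        post_tweet(text)
        if i < len(tweets) - 1:  # no need to wait after the last tweet
            time.sleep(min_gap_seconds)

if __name__ == "__main__":
    # A one-second gap here only so the demo finishes quickly; the default of
    # 600 seconds (10 minutes) is the arbitrary example gap mentioned above.
    post_slowly(["First update of the day", "A follow-up a little later"],
                min_gap_seconds=1)
```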

Lastly, don’t be a troll. Avoid getting into online arguments, as this kind of behaviour could easily lead other Twitter users to block, mute or report your account.

Twitter’s different types of shadowbans

1. The No Search Suggestion Ban

This type of ban stops an account from showing up in search suggestions and people search results when it is searched for by someone who is logged out. Twitter seems to take tie strength, or a similar metric, into account: while the account may still be suggested to users it is strongly tied to, it may not be shown to others.

2. The No Search Ban

This ban hides your tweets from search results entirely, no matter whether the quality filter is turned on or off, and it applies to hashtag searches as well. For active accounts, this type of ban seems to be limited in time.

3. The Ghost Ban

This is what is also referred to as a conventional shadowban or thread banning. It comprises a search ban and, on top of that, rips threads apart by hiding the affected user’s replies from other people. Everything will look perfectly normal to the affected user, but many others will not be able to see their replies at all. Reasons for this ban include behaviour like excessive tweeting or following. Again, this type of ban seems to be limited in time for active accounts.

4. The Reply Barrier Ban

If Twitter’s signals determine that an account might engage in harmful behaviour, it will hide that account’s replies behind a barrier and only load them when ‘Show more replies’ is clicked. This behaviour is personalised, which means that Twitter will not hide the tweets of accounts that you follow.

In some cases, Twitter classifies accounts as offensive. When it does, replies are hidden behind a second barrier within the ‘Show more replies’ section. This may depend on the conversation you participated in.

If you’d like to see whether your account is currently suffering from any of these bans, you can use a shadowban test.
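
Third-party shadowban testers generally work by comparing what a logged-out visitor can see with what you can see from your own account. The Python sketch below only captures that decision logic for the four ban types described above; the Observations fields are placeholders you would have to fill in yourself (for instance by checking logged-out search pages manually), since Twitter provides no official endpoint for this.

```python
from dataclasses import dataclass

@dataclass
class Observations:
    """What a logged-out visitor can see about an account (placeholder inputs)."""
    appears_in_search_suggestions: bool
    tweets_appear_in_search: bool
    replies_visible_in_threads: bool
    replies_behind_show_more: bool

def diagnose(obs: Observations) -> list[str]:
    """Map logged-out observations onto the ban types described above."""
    bans = []
    if not obs.appears_in_search_suggestions:
        bans.append("No Search Suggestion Ban")
    if not obs.tweets_appear_in_search:
        bans.append("No Search Ban")
    if not obs.replies_visible_in_threads:
        bans.append("Ghost Ban")
    elif obs.replies_behind_show_more:
        bans.append("Reply Barrier Ban")
    return bans or ["no shadowban detected"]

if __name__ == "__main__":
    obs = Observations(appears_in_search_suggestions=True,
                       tweets_appear_in_search=False,
                       replies_visible_in_threads=True,
                       replies_behind_show_more=False)
    print(diagnose(obs))  # ['No Search Ban']
```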

Other platforms like Instagram, Reddit, TikTok, Facebook, YouTube, and even LinkedIn often shadowban their users for different reasons, so make sure you always stick to the Terms of Service, don’t spam, avoid banned hashtags, avoid posting about illegal topics, and treat others politely.

A shadowban can be one of the most frustrating things, especially if you don’t feel like you deserve one. Maybe you don’t agree with the social media algorithm about what is or isn’t inappropriate, or maybe you think you were having a constructive debate while the algorithm thinks you were being a troll—who knows?

Hopefully, our extra tips will help you avoid being shadowbanned in the future. Good luck out there!

Opinion

Amnesty International report reveals that Twitter is a ‘toxic place’ for women

By Yair Oded


Dec 21, 2018

A recent study, the result of the Troll Patrol project compiled jointly by Amnesty International and Element AI, a Canadian AI software firm, finds that black women journalists and politicians are 84 percent more likely than white women to be the target of abusive tweets on Twitter. The study, carried out with the support of thousands of volunteers, examined roughly 228,000 tweets sent to 778 women politicians and journalists in the U.S. and U.K. in 2017. The report’s disturbing findings have sparked an international uproar and a barrage of criticism against the social media giant, which apparently fails to curb hate speech on its platform.

The study found that a total of 1.1 million dehumanising tweets were sent to the women examined, which is the equivalent of one every 30 seconds, and that 7.1 percent of all tweets sent to these women were abusive. Amnesty International regards such trolling as a violation of these women’s human rights, stating that “Our conclusion is that online abuse [works] against the freedom of expression for women because it gets them to withdraw, it gets them to limit their conversations and sometimes to leave the platform altogether… But we never really knew how big a problem was because Twitter holds all the data. Every time we ask for reports, they’re very vague, telling us that they’re taking some small steps. … Because they didn’t give us the data, we had to do it ourselves.”

Amnesty was soon joined by public figures, politicians, and organisations who criticised Twitter’s ineffective mechanism for removing abusive content and the company’s failure to properly enforce its recent policy revisions, which were meant to strengthen monitoring of dangerous and offensive tweets. Twitter’s shares reportedly took a 12 percent nosedive yesterday, after being referred to as “toxic”, “uninvestable”, and “the Harvey Weinstein of social media” by an influential research firm called Citron. “The hate on Twitter is real and the company is not taking proper steps to curb the problem,” Citron said in a statement, adding that the company’s failure to “effectively tackle violence and abuse on the platform has a chilling effect on freedom of expression online.”

Similarly to Facebook, Twitter relies heavily on AI algorithms to spot and remove content deemed inappropriate, violent, or discriminatory. Yet such systems often fail to pick up on hate speech that relies on context and is not easily discernible. A tweet like “Go back to the kitchen, where you belong”, for instance, is less likely to be spotted by an AI system than “all women are scum”.
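
To see why context is so hard, consider a toy keyword-based filter (a deliberate oversimplification of what real moderation systems do): it catches the explicitly abusive phrase but lets the contextual insult pass untouched. The keyword list and logic here are invented purely for illustration.

```python
# Toy example only: real moderation systems are far more sophisticated than a
# keyword list, but they can share this failure mode for context-dependent abuse.
ABUSIVE_KEYWORDS = {"scum", "trash", "vermin"}

def naive_flag(tweet: str) -> bool:
    """Flag a tweet only if it contains an explicitly abusive keyword."""
    words = {word.strip(".,!?").lower() for word in tweet.split()}
    return bool(words & ABUSIVE_KEYWORDS)

print(naive_flag("all women are scum"))                        # True: caught
print(naive_flag("Go back to the kitchen, where you belong"))  # False: missed, the abuse is contextual
```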

Twitter, on its part, claims that ‘problematic speech’ is difficult to define and that often it’s hard to determine what counts as dehumanising. “I would note that the concept of ‘problematic’ content for the purposes of classifying content is one that warrants further discussion,” Vijaya Gadde, Twitter’s legal officer, said in a statement, adding that “We work hard to build globally enforceable rules and have begun consulting the public as part of the process.”

There is no doubt that social media companies such as Twitter must further develop their content monitoring tools by fusing AI algorithms with, yes, air-breathing, real-world humans, who remain indispensable and irreplaceable. This therefore becomes not only a technology issue but an HR and resource-allocation one; Twitter and its social media buds should increase the size of their content moderation divisions until machines can truly master the art of reading comprehension.

Other than forcing Twitter to get tougher on hate speech, people must take a moment to truly reflect on who the report identifies as the primary target of trolling: minority women who raise their voices publicly, whether as a U.K. parliament member, U.S. Congresswoman or Senator, or a journalist. The challenge is then not only to remove content that abuses minority women and discourages them from remaining visible and active, but also to recognise that we live in a society that still aggressively tries to silence them.

As we do so, we must simultaneously explore who the people behind the 1.1 million abusive tweets are; what segments of the population do they belong to? Who, exactly, are those so terrified of the prospect of having minority women fight for their rights? Should we lean on commonly-held assumptions regarding their identity, or will data about them reveal a more complicated story? All of these questions should be the subject of further research, without which we can never truly tackle the plague of racism and misogyny.