A recent study, the Troll Patrol project, compiled jointly by Amnesty International and Element AI, a Canadian AI software firm, finds that black female journalists and politicians are 84 percent more likely than their white counterparts to be the target of hate speech on Twitter. The study, carried out with the support of thousands of volunteers, examined roughly 228,000 tweets sent to 778 women politicians and journalists in the U.S. and U.K. in 2017. The report’s disturbing findings have sparked an international uproar and a barrage of criticism against the social media giant, which has apparently failed to curb hate speech on its platform.
The study found that a total of 1.1 million dehumanising tweets were sent to the women examined, the equivalent of one every 30 seconds, and that 7.1 percent of all tweets sent to these women were abusive. Amnesty International regards such trolling as a violation of these women’s human rights, stating that “Our conclusion is that online abuse [works] against the freedom of expression for women because it gets them to withdraw, it gets them to limit their conversations and sometimes to leave the platform altogether… But we never really knew how big a problem [it] was because Twitter holds all the data. Every time we ask for reports, they’re very vague, telling us that they’re taking some small steps. … Because they didn’t give us the data, we had to do it ourselves.”
Amnesty was soon joined by public figures, politicians, and organisations who criticised Twitter’s ineffective mechanisms for removing abusive content and the company’s failure to properly enforce its recent policy revisions meant to strengthen monitoring of dangerous and offensive tweets. Twitter’s shares reportedly took a 12 percent nosedive yesterday, after being referred to as “toxic”, “uninvestable”, and “the Harvey Weinstein of social media” by Citron, an influential research firm. “The hate on Twitter is real and the company is not taking proper steps to curb the problem,” Citron said in a statement, adding that the company’s failure to “effectively tackle violence and abuse on the platform has a chilling effect on freedom of expression online.”
Like Facebook, Twitter relies heavily on AI algorithms to spot and remove content deemed inappropriate, violent, or discriminatory. Yet such algorithms often fail to pick up on hate speech that relies on context and is not easily discernible. A tweet like “Go back to the kitchen, where you belong”, for instance, is less likely to be flagged by an automated system than “all women are scum”.
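The failure mode described above can be made concrete with a toy sketch. The snippet below is purely illustrative: the blocklist and the filter logic are hypothetical stand-ins, not any real moderation system used by Twitter or Facebook. It shows why matching on surface-level abusive vocabulary catches the overt insult but lets the context-dependent one through.

```python
# A minimal sketch of why word-matching filters miss contextual abuse.
# ABUSIVE_KEYWORDS is a hypothetical blocklist, for illustration only.
ABUSIVE_KEYWORDS = {"scum", "trash", "vermin"}

def keyword_filter(tweet: str) -> bool:
    """Flag a tweet if it contains any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & ABUSIVE_KEYWORDS)

# The overtly abusive tweet is caught...
print(keyword_filter("all women are scum"))                        # True
# ...but the contextually abusive one contains no "bad" words
# at all, so it slips through untouched.
print(keyword_filter("Go back to the kitchen, where you belong"))  # False
```

Real production classifiers are far more sophisticated than this, but the underlying gap is the same: abuse expressed through context rather than vocabulary carries no lexical signal for the model to latch onto.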
Twitter, for its part, claims that ‘problematic speech’ is difficult to define and that it is often hard to determine what counts as dehumanising. “I would note that the concept of ‘problematic’ content for the purposes of classifying content is one that warrants further discussion,” Vijaya Gadde, Twitter’s legal officer, said in a statement, adding that “We work hard to build globally enforceable rules and have begun consulting the public as part of the process.”
There is no doubt that social media companies such as Twitter must further develop their content-monitoring tools by fusing AI algorithms with, yes, real-world, air-breathing humans, who remain indispensable and irreplaceable. This, therefore, becomes not only a technology issue but also one of HR and resource allocation; Twitter and its social media peers should increase the size of their content-moderation divisions until machines can truly master the art of reading comprehension.
Beyond forcing Twitter to get tougher on hate speech, people must take a moment to truly reflect on who the report identifies as the primary target of trolling: minority women who raise their voices publicly, whether as a U.K. Parliament member, a U.S. Congresswoman or Senator, or a journalist. The challenge, then, is not only to remove content that abuses minority women and discourages them from remaining visible and active, but also to recognise that we live in a society that still aggressively tries to silence them.
As we do so, we must simultaneously explore who the people behind the 1.1 million abusive tweets are: what segments of the population do they belong to? Who, exactly, are those so terrified of the prospect of having minority women fight for their rights? Should we lean on commonly held assumptions about their identity, or will data about them reveal a more complicated story? All of these questions should be the subject of further research, without which we can never truly tackle the plague of racism and misogyny.