An unpredictable result of social media’s mass adoption over the last few years is the embarrassing, rash, and almost relatable blow-ups of public figures who take to the platforms to vent or express very personal beliefs. Displays of “Twitter finger” wars, name-calling, and illogical breakdowns have become common among politicians and even company CEOs.
What’s unfathomable is that these large, high-stakes companies, and even political parties, have no real procedures in place to filter and moderate the publications coming from their CEOs or party leaders. Among Fortune 500 CEOs, 61 percent have no social media presence at all; but given the benefits of being active and cultivating a following, it’s easy to understand why even the extremely powerful fall prey to social media addiction.
Social media has helped companies benefit from what is known as thought leadership: the attempt by companies to position their executive leaders as influencers. In the age of social media influencers, the inevitable rise of the tycoon influencer is now on full blast for everyone to click, follow and stream.
Free PR and immediate conversations with hundreds of followers are appealing and obviously profitable, but where should the line be drawn? Whether it be the president of the United States or the CEOs of large and influential companies, a question of ethics must be asked, one that weighs free speech, economic effect, and the reactions of social media citizens in turn. Elon Musk’s vocal presence on social media, alongside peers such as Mark Zuckerberg and even Jeff Bezos, is an obvious example. In a report on CEOs’ social media behaviour, CEO.com writes, “Social media… has a major impact on brand reputation. A CEO can either participate in the discussion and influence it, or risk the implications of allowing his or her corporate image to be decided in the court of public opinion.” These tweets and blasts have real and long-lasting effects on markets as well as on the personal lives of many. So is it time for social media companies to take an even larger stand on user punishment? Should the government be involved?
The banning of extremists such as Alex Jones stands as a positive example of social media companies cracking down on the damage done by popular influencers, but even so, many problems arose. Twitter’s lateness in joining other social media outlets in temporarily banning the high-profile user showed that the company itself did not know how to tackle the censorship of his extreme and abusive tweets while staying true to its ethos as a free-speech platform on which everyone can express their views.
The concept of social media influencers is a well-known one in the world of consumer brands, but what if these influencers have the power to change policy and generate immense profit for their own benefit at the expense of others? There is no easy answer moving forward. Regardless, I will continue to follow these highly prominent, prolific social media moguls as I wait for their next scandalous comment to retweet.
A recent study, the Troll Patrol project, compiled jointly by Amnesty International and Element AI, a Canadian AI software firm, finds that black women journalists and politicians are 84 percent more likely than white women to be the target of hate speech on Twitter. The study, carried out with the support of thousands of volunteers, examined roughly 228,000 tweets sent to 778 women politicians and journalists in the U.S. and U.K. in 2017. The report’s disturbing findings have sparked an international uproar and a barrage of criticism against the social media giant, which apparently fails to curb hate speech on its platform.
The study found that a total of 1.1 million dehumanising tweets were sent to the women examined, which is the equivalent of one every 30 seconds, and that 7.1 percent of all tweets sent to these women were abusive. Amnesty International regards such trolling as a violation of these women’s human rights, stating that “Our conclusion is that online abuse [works] against the freedom of expression for women because it gets them to withdraw, it gets them to limit their conversations and sometimes to leave the platform altogether… But we never really knew how big a problem was because Twitter holds all the data. Every time we ask for reports, they’re very vague, telling us that they’re taking some small steps. … Because they didn’t give us the data, we had to do it ourselves.”
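The “one every 30 seconds” figure follows directly from the 1.1 million total; a back-of-the-envelope check (assuming, as the study period suggests, that the tweets were spread across a single year) confirms it:

```python
# Hypothetical sanity check of the study's rate claim, assuming the
# 1.1 million abusive tweets were spread evenly across one year (2017).
abusive_tweets = 1_100_000
seconds_per_year = 365 * 24 * 60 * 60   # 31,536,000 seconds

interval = seconds_per_year / abusive_tweets
print(round(interval, 1))  # ≈ 28.7 seconds between tweets, i.e. roughly one every 30 seconds
```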
Amnesty was soon joined by public figures, politicians, and organisations who criticised Twitter’s inadequate mechanism for removing abusive content and the company’s failure to properly enforce its recent policy revisions meant to strengthen monitoring of dangerous and offensive tweets. Twitter’s shares reportedly took a 12 percent nosedive yesterday, after the company was called “toxic”, “uninvestable”, and “the Harvey Weinstein of social media” by an influential research firm, Citron. “The hate on Twitter is real and the company is not taking proper steps to curb the problem,” Citron said in a statement, adding that the company’s failure to “effectively tackle violence and abuse on the platform has a chilling effect on freedom of expression online.”
Like Facebook, Twitter relies heavily on AI algorithms to spot and remove content deemed inappropriate, violent, or discriminatory. Yet such systems often fail to pick up on hate speech that depends on context and is not easily discernible. A tweet like “Go back to the kitchen, where you belong”, for instance, is less likely to be flagged by an algorithm than “all women are scum”.
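The gap can be sketched with a toy example (not Twitter’s actual system): a naive keyword filter, the simplest kind of automated moderation, catches overtly abusive wording but sails straight past context-dependent abuse. The blocklist below is hypothetical.

```python
# Illustrative sketch only: a naive keyword-based filter.
# ABUSIVE_TERMS is a hypothetical blocklist, not any platform's real one.
ABUSIVE_TERMS = {"scum", "trash"}

def naive_flag(tweet: str) -> bool:
    """Flag a tweet only if it contains a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return not words.isdisjoint(ABUSIVE_TERMS)

print(naive_flag("all women are scum"))                        # True  (caught)
print(naive_flag("Go back to the kitchen, where you belong"))  # False (missed)
```

Both tweets are abusive, but only the one with an explicit slur-like keyword trips the filter; the second requires understanding what the sentence implies, which is exactly where keyword and shallow-pattern systems break down.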
Twitter, for its part, claims that “problematic speech” is difficult to define and that it is often hard to determine what counts as dehumanising. “I would note that the concept of ‘problematic’ content for the purposes of classifying content is one that warrants further discussion,” Vijaya Gadde, Twitter’s legal officer, said in a statement, adding that “We work hard to build globally enforceable rules and have begun consulting the public as part of the process.”
There is no doubt that social media companies such as Twitter must further develop their content-monitoring tools by fusing AI algorithms with, yes, air-breathing, real-world humans, who remain indispensable and irreplaceable. This therefore becomes not only a technology issue but an HR and resource-allocation one; Twitter and its social media buds should increase the size of their content-moderation divisions until machines can truly master the art of reading comprehension.
Beyond pressing Twitter to get tougher on hate speech, people must take a moment to truly reflect on whom the report identifies as the primary target of trolling: minority women who raise their voices publicly, whether as a U.K. Member of Parliament, a U.S. Congresswoman or Senator, or a journalist. The challenge is then not only to remove content that abuses minority women and discourages them from remaining visible and active, but also to recognise that we live in a society that still aggressively tries to silence them.
As we do so, we must simultaneously explore who is behind the 1.1 million abusive tweets: what segments of the population do they belong to? Who, exactly, is so terrified of the prospect of minority women fighting for their rights? Should we lean on commonly held assumptions about their identity, or will data about them reveal a more complicated story? All of these questions should be the subject of further research, without which we can never truly tackle the plague of racism and misogyny.