Over the past year and a half, TikTok has been rapidly taking over Southeast Asia, and has made impressive strides in the U.S. and Europe, situating itself as the next ‘it’ app in the social media landscape. Alas, the 15-second video app has been used as a vehicle of egregious hate speech, racist vitriol, and violent attacks, particularly in India.
An investigation by WIRED revealed that thousands across India have taken to TikTok to spread racist and violent messages against members of groups who are perceived to be lower than them on the caste system’s social ladder.
In one case, Venkataraman, a 28-year-old man in the state of Tamil Nadu, posted a video in which he drunkenly yelled slurs against Dalits, the group ranked lowest in India's Hindu caste system. "Fight us if you are a real man, you Dalit dogs. You bastards are worthless in front of us. We'll butcher you lowlifes," Venkataraman says in the video, which he claimed he shot at the encouragement of his 18-year-old friend. As the video went viral, a wave of protests broke out in the area, and Dalits demanded that action be taken against Venkataraman. He then blamed the video and the backlash on his friend, whom he strangled to death.
Overall, tens of thousands of TikTok videos have reportedly promoted hate speech or carried casteist hashtags. Over a two-month period this summer, WIRED came across 500 TikTok videos that included caste-based hate speech, incitement to violence, and threats. In a growing number of cases, the rapid proliferation and ubiquity of such hate speech has encouraged people to take the fight off the screen and commit acts of violence in real life. Thus far, 18 incidents of violence in India, ten of which resulted in deaths, have been linked either directly or indirectly to TikTok.
Responding to the investigation, TikTok stated: "The team had identified the videos cited before WIRED contacted us and were in the removal process, but we continuously work to improve our capabilities to do even better." The company also appointed a special grievance officer for India in August.
Yet court documents procured by WIRED reveal that the company is currently failing to curb the volume of hate speech spreading on its platform in India. Over a five-month period between November 2018 and April 2019, TikTok removed 36,365 videos that breached its rules on hate speech and religion, and 12,309 videos that included dangerous behaviour and violence. Still, the court documents show that only one in ten of the 677,544 videos reported was eventually removed, and that the reported videos account for just 0.00006 percent of the total videos uploaded. While this data makes it difficult to measure TikTok's true impact on the proliferation of hate speech in India, it indicates that the company has failed to establish an effective screening mechanism to moderate content on its app.
“The problem with TikTok is that they are not very open to advocacy or engaging with civil society. Not even to the standards of its American counterparts,” said Thenmozhi Soundararajan, executive director of Equality Labs, a South Asian human rights group. “I think they’d rather pay the fines and don’t care.”
TikTok has also inspired the wrath of numerous lawmakers and judges in India, who have been vocal in their opposition to the app and its influence over the Indian population. Following a ruling by an Indian court that the app was disseminating “pornographic” and “inappropriate” content, Google and Apple removed TikTok from their app stores last April, and didn’t reinstate it until millions of videos had been taken off the platform.
The power of social media platforms to exacerbate tensions, and their role as potential vehicles of hate, should not be taken lightly. TikTok is not the only company struggling to build a proper system to curb hate speech and halt the spread of misinformation, yet with its position as the most popular kid on the block, at least in Southeast Asia, comes an even greater responsibility to lead such efforts.
TikTok—get your act together.
For many years, women living in Saudi Arabia have had to contend with guardianship: the men in their lives hold complete control over where they go, where they study, and whether they can drive or move around freely. Exercising that control has traditionally required piles of paperwork, processed by the government through an intensely bureaucratic system. Now an app called Absher, created by the Saudi government, digitises that process. Among other services, it notifies men of the whereabouts of the women under their guardianship, alerting them when those women leave or enter the country (supposedly without their permission), and it allows men to revoke those women's ability to travel.
Ever since the issue first gained press coverage, women in Saudi Arabia, along with feminists internationally, have called on Google and Apple to remove the app from their app stores. But the app essentially translates a form of oppression and control that already exists in real life into digital form; for women in Saudi Arabia, it simply meant that a request to leave the country or travel without a chaperone could be refused faster, rather than getting lost in government bureaucracy and refused a few months later. This is also part of a larger question: how much do Apple and Google know about the apps on their app stores?
This is not the first time that an app with dangerous real-world consequences has been brought to the attention of large technology companies. On one hand, there is the problem of malicious apps: malware disguised as versions of popular apps, like Tinder, or spread through third-party app stores. These are dangerous in a different way than Absher; such apps may steal credit card information or use geotagging to find out where you are, but they cannot change anything about where you can and can't go. Security experts have warned about them before, but it's a different issue from what is happening with an app like Absher, which is, for all intents and purposes, legal.
In this case, however, the problem may have arisen because it's not immediately obvious that Absher enables this kind of control over women. Absher also hosts a variety of other services, such as passport checks and document scanning, so it may not have been obvious to moderators that the app would be used in this way. As a New York Times article documented in January, women are trying to leave Saudi Arabia in greater numbers than before, partially enabled by technology. Some have used websites and WhatsApp groups to coordinate with other women; some were even able to open Absher on their male relatives' phones and change the settings to let themselves travel and escape to safety.
On a larger scale, apps like Absher proliferate because they aren't technically illegal. What Absher does violate is international human rights law, but so does the government that created and uses it. Trying to remove Absher would therefore likely cause a firestorm, particularly given the relationship between Saudi Arabia and Silicon Valley, while doing little to change a fundamentally broken system. If large international bodies and other groups haven't been able to alter the misogynistic system of guardianship, removing the app from the app store is unlikely to do so.
These problems arise because the process of developing an app and putting it on the app stores, both Apple's and Google's, is fairly straightforward. Google, whose approval process is laxer than Apple's notoriously strict one, has also come under fire for the apps it lets proliferate on Google Play. A report from WIRED UK found that supposedly child-friendly apps being sold on Google Play were anything but. Content moderation on the app store is a problem Google has had to reckon with, but has done precious little about. Absher is also different because the government of Saudi Arabia created the app, making it harder to take down than an app from an ordinary developer.
But Apple and Google do have the ability to intervene and remove apps from their app stores when it's deemed necessary. Recently in India, TikTok was deemed a danger to the population, particularly given how many of its users were under the age of 18; within a week, it was removed from both app stores over concerns about paedophilia.
After all, Saudi Arabia’s repressive policies towards women are not a state secret; human rights organisations and activists have been raising the alarm about them for years, so it’s hard to believe the issue simply passed under the radar. The fact that companies are enabling this kind of human rights abuse should surely be a cause for concern for anyone who cares about freedom or equality.