The use of emojis protects abusive posts from being taken down, new study shows

By Alma Fabiani

Published Aug 16, 2021 at 02:55 PM

Reading time: 2 minutes

Nowadays, you really don’t need to be that tech-savvy to know what emojis are—you know, those ‘yellow little dudes’ you can add to your texts and captions, as my grandma likes to call them. Online trolls know them just as well—and trolls, statistically (and, I’ll add, unsurprisingly), tend to be men more often than not. Ironically, we’ve also previously seen that men tend to use fewer emojis than women. And last but not least, we even learned that using emojis can help you get laid more often. Make it make sense, right?

‘Where is she going with this?’ I can already hear you say. Let me enlighten you. Yet another study has revealed something new about the use of emojis, especially on social media platforms. The use of emojis online could help decrease the chances of abusive posts being tracked and taken down.

A new report published by the Oxford Internet Institute discovered that online posts which contain abuse are becoming less likely to be identified by some algorithms because of the emojis added to them. The report revealed that, while most algorithms used to track down hate posts work well with text-only posts, they struggle considerably with posts that include emojis.

As a result, many posts containing harmful and abusive language, along with a couple of emojis, manage to remain on social media. Reporting on the same study, Digital Information World cites one example of this which took place during the Euro 2020 final, when players from England received racist remarks upon losing. Many of these racist posts were never detected or deleted by the algorithm “since almost all of them contained a monkey emoji.”

Why can’t algorithms detect emoji-using trolls?

Social media algorithms all go through the same ‘learning process’—even TikTok’s impressively accurate one. Depending on what they’ll soon be asked to do, they are first shown examples of what they need to look out for on the platform. For example, when an algorithm is being ‘trained’ to detect abusive comments, it is trained on databases. The problem is that, until recently, these databases mostly contained text and rarely had emojis in them. Because of this, when a platform’s algorithm comes across abusive posts with emojis, it automatically sorts them as acceptable when they’re obviously not.
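To see why that training gap matters, here’s a toy sketch: a classifier that only ever saw plain-text abuse has learned no signal for emojis, so an emoji-laden insult sails straight through. Everything below—the sample posts and the crude word-list ‘model’—is a hypothetical simplification for illustration, not the actual models or data the researchers worked with.

```python
# A toy sketch of the training gap (all posts here are made up, and the
# word-list "model" is a deliberate oversimplification of real classifiers).

# Text-only training data, like the older databases the report describes:
TRAINING = {
    "what an awful loser": 1,       # 1 = abusive
    "total idiot behaviour": 1,
    "what an amazing match": 0,     # 0 = acceptable
    "have a nice day": 0,
}

# "Train": keep words that appear in abusive posts but never in benign ones.
abusive_vocab = {w for t, y in TRAINING.items() if y == 1 for w in t.split()}
benign_vocab = {w for t, y in TRAINING.items() if y == 0 for w in t.split()}
signal_words = abusive_vocab - benign_vocab

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any word learned from abusive examples."""
    return any(word in signal_words for word in post.split())

# Plain-text abuse is caught...
print(is_flagged("that player is an idiot"))   # True
# ...but the same insult delivered as an emoji never appeared in training,
# so the model has no signal for it and waves the post through.
print(is_flagged("that player is a 🐒"))        # False
```

Real moderation models are far more sophisticated than a word list, but the failure mode is the same: a model can only react to patterns it has seen during training, which is exactly why the Oxford team’s fix was a new, emoji-rich training database rather than a new algorithm.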

“A recent analysis disclosed that Instagram accounts that posted racist content but used emojis within it were three times less likely to be banned than those accounts that posted racist content without using emojis,” Digital Information World added.

That’s where Oxford University came in to hopefully save the day. Researchers put forward a solution to this problem, developing a new database of around 4,000 sentences that contained different emojis. They then used the new database to train an artificial intelligence-based model, which was then tested on different kinds of hate comments, such as those targeting minorities, religions and the LGBTQ+ community.

Google’s model, named ‘Perspective API’, was found to be only 14 per cent efficient when tested with the dataset created by the researchers. In comparison, the AI model created by the researchers (called HatemojiTrain) improved on Google’s efficiency by about 30 to 80 per cent.

Shortly after publishing their report, the team of researchers from Oxford University shared its database online for other developers and companies to use as soon as possible. Yet so far, it doesn’t seem to be making much noise online.
