The use of emojis protects abusive posts from being taken down, new study shows

By Alma Fabiani

Published Aug 16, 2021 at 02:55 PM

Reading time: 2 minutes


Nowadays, you really don’t need to be that tech-savvy to know what emojis are: those ‘yellow little dudes’ you can add to your texts and captions, as my grandma likes to call them. The same goes for online trolls, who statistically (and, I’ll add, unsurprisingly) tend to be men more often than not. Ironically, we’ve also previously seen that men tend to use fewer emojis than women. And last but not least, we even learned that using emojis can help you get laid more often. Make it make sense, right?

‘Where is she going with this?’ I can already hear you say. Let me enlighten you. Yet another study has revealed something new about the use of emojis, especially on social media platforms: adding them to a post could decrease the chances of abusive content being tracked and taken down.

A new report published by the Oxford Internet Institute found that abusive online posts are becoming less likely to be identified by some algorithms because of the emojis added to them. While most algorithms used to track down hate posts work well with text-only content, the report revealed that they struggle considerably with posts that include emojis.

As a result, many posts containing harmful and abusive language, along with a couple of emojis, manage to remain on social media. Reporting on the same study, Digital Information World cites one example of this which took place during the Euro 2020 final, when players from England received racist remarks upon losing. Many of these racist posts were never detected or deleted by the algorithm “since almost all of them contained a monkey emoji.”

Why can’t algorithms detect emoji-using trolls?

Social media algorithms all go through the same ‘learning process’, even TikTok’s impressively accurate one. Depending on what they’ll soon be asked to do, they are first shown examples of what they need to look out for on the platform. For example, when an algorithm is being ‘trained’ to detect abusive comments, it is trained on databases of example posts labelled as abusive or acceptable. The problem is that, until recently, these databases mostly contained text and rarely had emojis in them. Because of this, when a platform’s algorithm comes across abusive posts with emojis, it automatically sorts them as acceptable when they’re obviously not.
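For the technically curious among you, here’s a minimal sketch of why that happens. It’s a toy Python example of my own, not any platform’s actual moderation pipeline: a classifier trained only on plain-text examples, using a standard word-based tokeniser, simply never ‘sees’ the emoji at all.

```python
# Toy sketch (not any platform's real code): a bag-of-words classifier
# trained only on plain-text examples silently ignores emojis, so a post
# that leans on an emoji for its meaning looks "clean" to it.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, text-only training data, like older moderation datasets.
train_texts = [
    "you are an idiot",
    "go away loser",
    "have a great day",
    "nice goal today",
]
train_labels = [1, 1, 0, 0]  # 1 = abusive, 0 = acceptable

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# CountVectorizer's default token pattern only emits word characters,
# so the emoji contributes zero signal at prediction time.
post = "great game 🐒🐒🐒"
print(model.predict([post]))  # classified on "great game" alone
```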

“A recent analysis disclosed that Instagram accounts that posted racist content but used emojis within it were three times less likely to be banned than those accounts that posted racist content without using emojis,” Digital Information World added.

That’s where Oxford University came in to hopefully save the day. The researchers put forward a solution to the problem, developing a new database of around 4,000 sentences containing different emojis. They then used this database to train an artificial intelligence-based model, which was tested on different kinds of hate comments, such as those targeting minorities, religions and the LGBTQ+ community.
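As a rough, hypothetical illustration of the general idea rather than the researchers’ actual setup (which used far larger models and their purpose-built dataset), the fix boils down to two things: give the model labelled examples that actually contain emojis, and make sure the tokeniser keeps them instead of throwing them away.

```python
# Sketch continuing the toy example above (again, not the researchers' code):
# emoji-bearing training sentences plus an emoji-preserving tokeniser let
# even this simple model pick up the signal the emoji carries.
import re

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def keep_emoji_tokens(text):
    # Words of 2+ characters, plus characters in a common emoji range
    # (a crude proxy for proper emoji handling).
    return re.findall(r"\w\w+|[\U0001F300-\U0001FAFF]", text)


# Hypothetical emoji-containing examples, in the spirit of the ~4,000-sentence dataset.
train_texts = [
    "you are an idiot",
    "go away loser 🐒",
    "🐒 🐒 🐒",
    "have a great day",
    "nice goal today",
    "well played 👏",
]
train_labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(
    CountVectorizer(tokenizer=keep_emoji_tokens, token_pattern=None),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

print(model.predict(["great game 🐒🐒🐒"]))  # the emoji now carries weight
```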

Google’s model, called ‘Perspective API’, was found to be only 14 per cent effective when tested with the dataset created by the researchers. In comparison, the AI model created by the researchers (called HatemojiTrain) improved on that performance by about 30 to 80 per cent.
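Perspective API is publicly accessible, so this kind of spot check is easy to reproduce. Below is a hedged Python sketch of scoring a single post; the endpoint and request shape follow Google’s public documentation as I understand it, and YOUR_API_KEY is a placeholder you would need to replace with your own key.

```python
# Hedged sketch: scoring one post with Google's Perspective API, the model
# the researchers benchmarked. Endpoint and payload shape follow the public
# docs as I understand them; YOUR_API_KEY is a placeholder, not a real key.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

payload = {
    "comment": {"text": "great game 🐒🐒🐒"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload, timeout=10)
response.raise_for_status()
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# The study's point: scores like this tend to be low when the abuse is
# carried mainly by the emoji rather than the words around it.
print(f"TOXICITY score: {score:.2f}")
```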

Shortly after publishing their report, the team of researchers from Oxford University shared the database online so that other developers and companies could start using it as soon as possible. Yet so far, it doesn’t seem to be making much noise online.
