The use of emojis protects abusive posts from being taken down, new study shows

By Alma Fabiani

Published Aug 16, 2021 at 02:55 PM

Reading time: 2 minutes


Nowadays, you really don’t need to be that tech-savvy to know what emojis are—you know, those ‘yellow little dudes’ you can add to your texts and captions, as my grandma likes to call them. The exact same applies to online trolls—who statistically, and I’ll add the adverb unsurprisingly in there too, tend to be men more often than not. Ironically, we’ve also previously seen that men tend to use fewer emojis than women. And last but not least, we even learned that using emojis can help you get laid more often. Make it make sense, right?

‘Where is she going with this?’ I can already hear you say. Let me enlighten you. Yet another study has revealed something new about emojis, especially on social media platforms: using them online could decrease the chances of abusive posts being tracked down and taken down.

A new report published by the Oxford Internet Institute found that online posts containing abuse are becoming less likely to be identified by some algorithms because of the emojis added to them. The report revealed that, while most algorithms used to track down hate posts work well with posts that only include text, they struggle with posts that include emojis.

As a result, many posts containing harmful and abusive language, along with a couple of emojis, manage to remain on social media. Reporting on the same study, Digital Information World cites one example which took place during the Euro 2020 final, when England players received racist abuse after losing. Many of these racist posts were never detected or deleted by the algorithm “since almost all of them contained a monkey emoji.”

Why can’t algorithms detect emoji-using trolls?

Social media algorithms, even TikTok’s impressively accurate one, all go through the same ‘learning process’. Depending on what they will later be asked to do, they are first shown examples of what they need to look out for on the platform. When an algorithm is being ‘trained’ to detect abusive comments, for example, it learns from databases of past comments that have already been labelled as abusive or acceptable. The problem is that, until recently, these databases mostly contained plain text and rarely included emojis. Because of this, when a platform’s algorithm comes across an abusive post with emojis, it has never learned what abusive emoji use looks like, so it sorts the post as acceptable when it obviously isn’t.
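To make that blindspot concrete, here is a minimal sketch in Python. It is not the code behind any real moderation system, and the tiny labelled dataset is invented purely for illustration: a toy text-only classifier is trained on plain-text comments, so a racist post that carries its abuse through a monkey emoji is scored on the harmless words around it, because the default tokeniser never even sees the emoji.

```python
# A minimal, invented sketch of the blindspot: a classifier trained only on
# plain-text comments cannot "see" abuse that is carried by an emoji.
# The tiny training set below is made up for illustration and is not the
# data any real moderation system is trained on.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you played brilliantly tonight",      # acceptable
    "great game, well done",               # acceptable
    "go back to where you came from",      # abusive
    "you are a disgrace to this country",  # abusive
]
train_labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = abusive

# Bag-of-words + logistic regression: the kind of text-only pipeline the
# report says copes well with plain text.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A racist post whose abuse is carried entirely by the monkey emoji.
# CountVectorizer's default tokeniser only keeps word characters, so the
# emoji is dropped and the model only sees the harmless words around it.
post = "great game 🐒🐒🐒"
print(model.predict_proba([post]))  # leans heavily towards 'acceptable'
```

Because the only tokens the toy model recognises in that post appeared in harmless training examples, it leans towards marking it acceptable, which is exactly the failure mode the Oxford report describes at scale.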

“A recent analysis disclosed that Instagram accounts that posted racist content but used emojis within it were three times less likely to be banned than those accounts that posted racist content without using emojis,” Digital Information World added.

That’s where Oxford University came in to hopefully save the day. Researchers put forward a solution to this problem, developing a new database of around 4,000 sentences containing different emojis. They then used the new database to train an artificial intelligence-based model, which was tested on different kinds of hate comments, such as those targeting minorities, religions and the LGBTQ+ community.

When tested with the dataset created by the researchers, Google’s model, named ‘Perspective API’, was found to be only 14 per cent effective. In comparison, the AI model created by the researchers (called HatemojiTrain) improved on this by about 30 to 80 per cent.
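For readers who want to poke at this themselves, Perspective is publicly queryable over REST. The sketch below is not the researchers’ evaluation code: the endpoint, the TOXICITY attribute and the response fields reflect Google’s public Perspective API documentation as I understand it, and YOUR_API_KEY is a placeholder. Comparing the score of a plain-text insult with the same sentence rewritten around an emoji gives a rough feel for the gap the study measured.

```python
# Hedged sketch: comparing Perspective API toxicity scores for a plain-text
# post and an emoji-based variant. YOUR_API_KEY is a placeholder; the
# endpoint and TOXICITY attribute follow Google's public Perspective API
# docs, not anything published by the Oxford researchers.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, obtained from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")


def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY score (0 to 1) for a piece of text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    scores = response.json()
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


# The emoji variant would typically come back with a noticeably lower score,
# which is the kind of gap the researchers' dataset was built to expose.
print(toxicity("you are an animal and should be banned"))
print(toxicity("you are a 🐒 and should be banned"))
```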

Shortly after publishing their report, the team of researchers from Oxford University shared their database online so that other developers and companies can start using it as soon as possible. Yet so far, it doesn’t seem to be making much noise online.
