Internet trolls are not a new phenomenon; in fact, they’ve been around since the late 1980s. However, in an era where cancel culture has become synonymous with cyber-bullying, we are witnessing the rise of a new genre of trolls altogether—ones backed by a company’s payroll to infiltrate, manipulate and control online conversations surrounding their rivals. Welcome to the nasty little business of professional trolling.
Professional trolling can be summed up as the coordinated effort to spread online ‘disinformation’—a subset of misinformation that is deliberately deceptive and misleading. The practice essentially offers governments, political parties and tech firms a fast and cheap way to weaken their rivals by employing people exclusively to carry out such trolling activities.
Typically recruited from developing countries, these employees are responsible for posting troll comments across social media forums. They work in organised groups, setting up call-centre-like operations and following the companies and influencers targeted by their employers. Masquerading as ‘one of us’, they then infiltrate the comments section and manipulate the conversation by inundating social media with conspiracy theories.
Subjecting other users to a form of social media conditioning, these professional trolls distort the truth by copy-pasting curated talking points countless times until they pass as truth. These fabricated claims are then picked up by other users, who end up liking and even sharing them within their own social circles.
In many ways, professional trolling can be compared to what is happening in the US’ news landscape and the infiltration of pink slime in the country’s local news.
Comment sections can often be the best part of a piece of news, adding a layer of social dialogue to traditional journalism. It is for this reason that many readers head over to engage with other like-minded individuals. However, these sections can also be devastating.
A study published in The Journal of Computer-Mediated Communication asked participants to first read a news post on a fictitious blog explaining the potential risks and benefits of a new product, then head over to the comments section to engage with other readers. The sample comments posted under the article exposed participants to both civil and uncivil opinions. The results were at once surprising and disturbing—uncivil comments not only polarised readers but often affected a participant’s interpretation of the story itself.
A digital analyst at The Atlantic conducted a further study to analyse this polarisation, with results that echoed the findings above. Participants who were exposed to negative comments were more likely to judge the article’s quality harshly and, regardless of its content, doubt the truth of what it stated. These findings almost make one believe that we are all just one comment away from losing our faith in humanity.
Last month, Facebook uncovered a massive ‘troll farm’ in Albania, linked to an Iranian militant group. The operations of the group had the “hallmarks of a typical troll farm,” which Facebook defines as “a physical location where a collective of operators share computers and phones to jointly manage a pool of fake accounts as part of an influence operation.”
“It looked like a team of trolls hot-desking,” tweeted Ben Nimmo, Facebook’s Global Influence Operations Threat lead. Nimmo noted that the operation resembled a full-time job, running from 6 am to 11 pm “with a break around lunchtime.”
Just last week, Digital Africa Research Lab and BuzzFeed News uncovered a large troll operation in Nigeria. Run as a collaboration between a Nigerian PR firm and a UK-based nonprofit organisation, the operation paid social media influencers in Nigeria to tweet twice a week in support of a Colombian businessman, Alex Saab, accused of money laundering in the US. Following the report, Twitter went on to suspend more than 1,500 accounts for manipulating #FreeAlexSaab.
“Operations like these tend to be about making noise,” tweeted Nimmo. “They create the impression that a viewpoint is more popular than it is.” Although secretive in nature, such professional troll farms tend to share key attributes, which helps researchers and tech platforms sniff them out almost instantaneously.
The first attribute is a shared physical location. “Troll farms are often propped up by a party that will pay for high-speed internet and computers that together power the network,” noted Axios. “It’s easier to finance and monitor operations that physically sit close together.”
Next up is the time frame: content from troll farms tends to be posted during work hours, with breaks for lunch, as Nimmo noted. The last distinguishing factor is ‘hyper-targeted messaging’. “Posts from troll farms tend to zero in on a certain political message whereas most ordinary users post about an array of topics,” Axios concluded.
Professional trolling fosters a “symbiotic relationship between companies eager to weaken rivals and developing nations eager for cash.” In an interview with Axios, Carroll, a 20-year veteran with the FBI, admitted to seeing such trolls being employed from places like Vietnam, the Philippines and Malaysia. “Where there’s a lot of cheap labor and little oversight,” he added.
“We’re also seeing a lot more troll operations being picked up in Africa,” said Jean le Roux, a researcher at the Atlantic Council’s Digital Forensic Research Lab, in the interview. “More people in Africa are going online on social media,” he continued. “At the same time, Africa is one of the poorer continents, which creates an easy recipe for countries like Russia to step in and pay someone to sit behind a computer all day.” These organisations go to great lengths to set up ‘cut-outs’, or systems to pay trolls without going through a bank that would get them noticed. Experts at covering their tracks, they usually distribute money via a third party on the ground.
With all this being said, however, professional trolling is seen as a double-edged sword, terrifying for some, favourable to others. While a good number of critics perceive it as a practice that puts “the integrity of the internet at stake,” some companies praise it for bringing revenue-producing traffic to their websites. For these organisations, all publicity is good publicity.
“The best way to slow down professional trolls is to make it more expensive for them to carry out disinformation campaigns,” le Roux added. A Twitter campaign periodically reminds followers to ignore anonymous comments on social media platforms. “You wouldn’t listen to someone named Bonerman26 in real life. Don’t read the comments,” a viral tweet reads.
While transparent comment systems and forums could help regulate the practice, the veil of anonymity granted to users can’t be completely removed from such platforms, breeding, in turn, a space where professional trolls influence users while flying under the radar. Whether or not you choose to read or engage with the comments of these payroll-backed trolls, it’s always a good idea to harbour scepticism towards them. After all, as le Roux mentioned, trolling requires very few technical skills to carry out, and almost every emerging economy is susceptible to the practice.
Late on 17 April, ‘Lil Miquela’ @lilmiquela, a virtual Instagram model, fashion influencer and activist created to promote multiculturalism, was hacked by ‘Bermuda’ @bermudaisbae, a seemingly pro-Trump virtual Instagram model promoting alt-right propaganda. In what would become one of the first AI social media feuds, the alarming possibility that CGI influencers and political opinions could merge into a dystopian union suddenly became apparent.
Lil Miquela is listed on Wikipedia as a Spanish-Brazilian American computer-generated model and music artist from Downey, California. The millennial icon has used her platform to support socially minded causes including Black Lives Matter, feminism, Muslim and refugee advocacy organisations, transgender rights, Deferred Action for Childhood Arrivals (DACA), gun control, My Friend’s Place, Black Girls Code, Planned Parenthood, and protests of the Dakota Access Pipeline. And whilst some could be critical of the socio-political influencer for her trite dabbling in politics, we must admit ‘Bermuda’ is much, much worse.
Bermuda, whose bio reads “The earth isn’t getting hotter but I am”, is rendered as a white woman with blonde hair. The account posts anti-feminist, pro-Trump, Christian and alt-right content, with ramblings such as “it’s OK to be white. I said it and I’m not afraid to say it: I am proud to be a white woman. #teamBermuda #BermudaHive #hotterinBermuda #Bermudatriangle #theNextStep #discourse #learntotalk”
As the speculation continued, it was then revealed that ‘Bermuda’ was run by Cain Intelligence, a Silicon Valley-based Conscious Language Intelligence (CLI) company. “At Cain we’ve always strived to be leaders in a world overrun with followers. We’re passionate about creating a consumer-facing example of our Artificial Intelligence learnings. (…) We are proud to present Bermuda! Bermuda is the first of her kind. Built to speak her truth and to the interests of today’s youth, she is uniquely unapologetic, representing not only a breakthrough in artificial intelligence but also in modern political thought,” reads the website, which sits alongside a pro-Trump feature describing the company’s endorsement of Donald Trump in 2016, as well as information on its CGI technologies, but no contact information.
“For me, it means saying what I want + being OK with people not liking me because of it….” says Bermuda. Her actions are seemingly reactionary to leftist strategies of shutting down discourse. In our current cultural climate, the alt-right is on the rise. In Kill All Normies: The Online Culture Wars from Tumblr and 4chan to the Alt-right and Trump, writer Angela Nagle argues that the internet, and specifically spaces such as 4chan and Reddit, is where the alt-right has flourished: through meme culture, offensive content ‘for the lulz’ has bred a huge amount of seemingly untouchable propaganda, because if you don’t like it, it’s ‘just a joke’. It’s not a critical piece of text, it’s a meme—ipso facto. You’re the uncool one if you don’t ‘geddit’. And this is an act ‘Bermuda’ has engaged with, posting anti-feminist (specifically anti-Lena Dunham) memes.
“Guys i’ve been hacked – it’s NOT me!!”
— Miquela (@lilmiquela) April 17, 2018
As the story unfolded, ‘Bermuda’ threatened ‘Lil Miquela’ to ‘tell the truth’ via various countdown Instagram posts, leaving viewers speculating about what was happening, with comments such as “is this westworld?!”
I couldn’t find anything on Reddit or 4chan about the feud, and by the following Wednesday the two had ‘met’ and the truth was revealed, which, inevitably, was that Lil Miquela wasn’t real but rather a CGI artificial intelligence, the uncanny valley eliciting cold, eerie feelings in viewers. But this we already knew; since then, Lil Miquela’s account has been flooded with personal anecdotes of her experience and, in turn, her realisation of her own CGI AI mechanisms.
“I am a robot. It just doesn’t sound right. I feel so human. I cry and I laugh and I dream. I fall in love. I’m trying to not let this mess me up but it is. I’m not in a good place. Im so upset and afraid…”
So here’s the tea. Apparently, Lil Miquela was owned by Cain Intelligence (Daniel Cain), until Brud.fyi, a “group of Los Angeles based problem solvers specialising in robotics, artificial intelligence and their application to media businesses”, stole her from Cain’s company in Silicon Valley in a bid to “free” her. Brud.fyi started the rumour that Lil Miquela may or may not be real for social media attention, to garner financial opportunities and capital, working with other real Instagram celebrities, brands and so on. Cain Intelligence made Bermuda to hack Lil Miquela’s account and make her “tell the truth” about her mysterious identity. But there is another aspect at play here: a much more sinister, politically leaning agenda. Beyond revealing the truth about the authenticity of the increasingly popular influencer, Bermuda’s hack became a ploy to express her pro-Trump, alt-right ideology to a majority left-leaning following, whose algorithms had most likely never surfaced the accounts of those who share and promote Bermuda’s views. In many regards, this hack was about trying to glitch the otherwise concrete algorithms and infiltrate the feeds of hundreds of thousands with her rhetoric.
What this Sims-esque war and its creators have taught us is the political implication of CGI/AI within digital cultures. But what we should really be pondering is what it would mean to suddenly find ourselves with a virtual leader. It is all very much a “Waldo moment” (see episode three of Black Mirror’s second season), in which a cartoon becomes the face of a political party. CGI/AI leaders are not just an alt-right issue but also an issue for the left and, perhaps more importantly, an issue for political discourse right now altogether. With no history and no reality comes no accountability. And that’s a terrifying prospect.