On 18 December 2019, Netflix released a three-episode show called Don’t F**k With Cats: Hunting An Internet Killer, opening a gruesome Pandora’s box most people had forgotten about. The show follows the story of Luka Magnotta, a former adult entertainer turned sadistic cat-torturer and, eventually, killer. In hindsight, the documentary also shed light back on bestgore.com, the most popular and arguably the most disturbing shock website, and on the moral and legal controversies behind its existence.
In 2012, a video called 1 Lunatic 1 Icepick was published on bestgore.com showing Magnotta brutally murdering Lin Jun, an engineering student from China who had moved to Montreal, Canada, for his studies. After dismembering the body, Magnotta sent several of Jun’s body parts to Canadian political parties, among other recipients. The video is a 10-minute, unbearable sequence of images, described in its caption by Mark Marek, Best Gore’s founder, as “without a doubt the sickest thing you will have ever seen in your entire life.”
Shortly after posting the video, Magnotta was tried and found guilty of first-degree murder in 2014, while Mark Marek was charged by the Canadian authorities with “corrupting morals,” based on a law from 1949 which states that anyone who “makes, prints, publishes, distributes, circulates, or has in his possession for the purpose of publication, distribution or circulation any obscene written matter, picture, model, phonograph record or other thing whatever” risks going to jail. Under the law, the word ‘obscenity’ describes any material that mixes violence, sex, and degradation, as reported by Adrianne Jeffries in an article published by The Verge. In it, Jeffries questions Marek’s responsibility in this story, asking whether it is right that he faced jail for posting a video of the murder.
Despite the sickening violence depicted in 1 Lunatic 1 Icepick, when you scroll through Best Gore, Magnotta’s video is in good company. The Canadian website features some of the most graphic real-world violence in existence, much of it inflicted by human hands. Among its categories, users can find gang executions, ISIS beheadings, car accidents and videos depicting cases of police brutality from all over the world. With an average of 200,000 pageviews a day, the demand for this type of content is, to say the least, high.
Shock, or gore, websites started appearing in 1996, when rotten.com was founded. Rotten started the ‘trend’ by mostly featuring still images of car accidents and medical conditions, but it was in the early 2000s that ogrish.com paved the way for a category of its own. These websites are technically legal, and some, like theYNC (theync.com) and goregrish (goregrish.com), are still live today. In the US, such websites are largely shielded by Section 230 of the Communications Decency Act (CDA), which means that a platform hosting user-generated content is generally not held responsible for what that content portrays.
The circulation of violent content online is part of an ongoing debate that repeatedly puts the internet’s freedom at stake, but what price are users willing to pay to keep the internet a free, and to some extent unregulated, space? Despite the voyeuristic and sadistic motives that most likely drive many of Best Gore’s users, the idea behind Best Gore and Marek’s manifesto rests on a (relatively) reasonable basis.
Among the website’s several statements on freedom of speech on the internet and the threat of online censorship, Marek writes, “Harm to freedom of expression caused by censorship of content just because some may deem it blasphemous, obscene or morally-corrupting would be devastating and should be of utmost concern to all people of conscience. […] And this is where Best Gore steps in, as the website has played a pivotal role in exposing lies which were declared as official truths by the mainstream media, exposed countless cases of police brutality, governments sanctioned terrorism, war profiteering, fear mongering and other unsavory activities which enslave the people in injustice.”
The issue with online toxicity is that we don’t seem to be able to pinpoint whether what we see online influences real-life actions or vice versa. If this violence exists in real life, is there a point in censoring its representation online? The internet has entailed a moral and ethical compromise since day one, and it’s websites such as Best Gore that remind us of how severe this paradox can get. Whether we can handle it is up to you and me.
Jordan Peterson is now a household name. He’s known as a psychologist at the University of Toronto, a best-selling author, and, perhaps most prominently, as a representative of the free speech movement. His writing and lectures around his book, 12 Rules for Life: An Antidote to Chaos, have made him something of a celebrity among many different internet communities, particularly among young men who feel that they’re being censored elsewhere. It’s perhaps unsurprising then that he has now created his own explicitly free speech-focused social network called thinkspot. With the platform now live, it’s important we look at the idea behind it. Is an anti-censorship social media network a good idea?
Thinkspot operates on a subscription-based model so that people posting on the website can monetise their content (much like YouTubers, Instagram influencers and OnlyFans users). There is also a minimum comment length (50 words), so that people can’t post insults too easily and actually have to give some thought to what they write (an idea that sounds full of promise but probably only means people have to be more creative about how they insult others). Before the platform was even live, Peterson confirmed that popular alt-right personalities like YouTuber Carl Benjamin, who ran as a UKIP candidate for the European Parliament, and Dave Rubin, who also hosts a popular show on YouTube, were on board as beta testers for the website. On thinkspot, the only way for someone to be asked to leave the website or have their content removed is if a court deems that content illegal.
The platform, subtly presented as an alternative social network for the alt-right, where the terms and conditions are dictated by the absolute need to maintain standards of free speech, came against the backdrop of complaints from conservative thinkers and the alt-right about the censorship of right-wing views. Spurred by these allegations of bias, Senate Republicans held a hearing on this very topic and introduced a bill to make sure that social media companies remove content in a “politically neutral” manner, whatever that means.
Many on the right, whether in the US or elsewhere, have long alleged that social media companies like Facebook, YouTube, and Twitter are biased against conservative viewpoints. They point to the example of Steven Crowder, a YouTube personality whose channel was demonetised after he mocked Vox journalist Carlos Maza, used homophobic slurs against him and incited his audience to follow suit. It’s worth noting that Crowder’s account remains active, although he can’t make any money from it.
That’s why Peterson offered this ‘haven’ as an antidote to what he and his followers call censorship, but there are a couple of issues he has yet to address. For example, Peterson wanted the site to rely on an upvoting and downvoting system similar to Reddit’s community-based model of approval and moderation. In practice, this could simply mean that any view that doesn’t fit the consensus among thinkspot users gets downvoted. It’s unlikely that progressives have been signing up in droves, given the platform’s guidelines as well as the demographics of Peterson’s ‘fanbase’. No one in their right mind would want to join a website where simply pointing out that you’re a woman will probably get you trolled and insulted (as long as it’s done in at least 50 words).
The idea behind thinkspot is to place an emphasis on thoughtful and respectful conversation, although how the site itself is organised remains unclear. Yet thoughtful or respectful conversation, however it’s defined, requires some sort of moderation, even at a minimal level. That means some guidelines would have to be imposed, either algorithmically or through human intervention. For people who have come to thinkspot to ‘flee censorship’, this may not be received well, and it could even feed further victim complexes, given that the allegations of liberal bias against conservatives already stand on very thin ground.
Peterson’s free speech crusade is also, arguably, wilfully naive. Websites like 4chan, Reddit, and Voat, which started with broadly similar principles of providing an open forum for discussion, harbour thriving communities that feed off hatred and have, in some cases, been directly linked to extreme violence around the world. Peterson’s own sense of importance also leads him to believe that courts won’t be paying attention to the content posted on thinkspot, or that they won’t be able to act fast enough.
As cases involving large social media companies and free speech have shown, things can move quickly if there’s enough media attention (and it’s likely there will be). Simply posting inflammatory or offensive messages on a public message board is enough to attract other people with similar views, which means that thinkspot might be the no-holds-barred free speech platform some have been waiting for, but it won’t come without a cost. We’re still waiting to see whether Jordan Peterson’s thinkspot will ever be in the spotlight.