TikToker designs controversial ‘euthanasia boat ride’ which plummets passengers to death

Euthanasia, the act of intentionally ending a person’s life with the aim of alleviating great pain or suffering, is a highly controversial topic that is continuously debated, especially given that its legitimacy and legality differ between countries. In Switzerland, for example, while active euthanasia (administering a lethal injection to someone who requests it) remains illegal, supplying the means for suicide is legal as long as the final act is performed by the person wishing to die. This led the country to legalise a new method of assisted suicide in December 2021 in the form of the Sarco suicide pod, a 3D-printed, portable, coffin-like capsule with windows.

More recently, however, the discourse surrounding euthanasia has resurfaced, this time on TikTok, after content creator @sm.coasters shared a video of a ‘one way only’ boat ride imagined to help those wanting to end their lives do so in a “totally humane way.”

The account regularly shares mock-ups of interesting yet awfully sinister designs for roller coasters and rides. The difference with the viral ride pictured below, which currently has 6.6 million views on the platform, is that it can only be experienced once.

In the viral video, a floating carriage can be seen sitting at the top of a huge structure—only to be nudged over the edge and fall a terribly long way down into what appears to be a pool at the bottom. Upon landing, the carriage is supposed to bounce up into the air before touching down back in the water, undamaged.

As you might expect from a video with so many views, the clip’s comment section is home to many users joking about the morbid invention. One person wrote, “Knowing my luck the boat won’t land the right way up and I’d survive.” Another said, “Yep, that would do it.”

[Embedded TikTok video from @sm.coasters: “Boat ride euthanasia - totally humane way to go” (original sound - SM.Coasters)]

Believe it or not, this is not the first time a ‘fun’ concept has been conceived for euthanasia. In 2021, another video made the rounds on TikTok of a journey to death designed by Lithuanian artist Julijonas Urbonas back in 2016. Named the ‘Euthanasia Coaster’, the ride was reshared on the gen Z-first platform by TikToker @lukedavidson_, who explained a bit more about the invention.

Imagined as a single-seat joy ride that lifts the passenger 500 metres up in the air, it would then send them swooping down through seven loop-the-loops at a deadly speed. Reaching 100 metres per second, the ride would induce a euphoric sensation before “extreme G-force” killed the person riding it.
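For a rough sense of where that 100 metres per second figure comes from, a simple free-fall estimate lands in the same ballpark for a 500-metre drop. This is a back-of-the-envelope sketch using textbook physics, ignoring air resistance and the shape of the track, and not something taken from Urbonas’ own design documents:

```python
import math

# Rough free-fall estimate for the figures quoted in the article: a 500 m drop,
# ignoring air resistance and track friction (an assumption for illustration,
# not a detail from Urbonas' actual design).
g = 9.81            # gravitational acceleration, m/s^2
drop_height = 500   # metres

# v = sqrt(2 * g * h), from conservation of energy
top_speed = math.sqrt(2 * g * drop_height)

print(f"Estimated speed at the bottom: {top_speed:.0f} m/s "
      f"({top_speed * 3.6:.0f} km/h)")
# Prints roughly 99 m/s (~357 km/h), in line with the article's 100 m/s figure.
```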

“Your blood is rushed to your lower extremities so there is a lack of blood in your brain, so your brain starts to suffocate. When your brain starts to suffocate, people become euphoric. Usually pilots experience such extreme forces for just a few seconds, but riders in the rollercoaster experience it for one minute and nobody has experienced this for such a long time,” Urbonas said while presenting the design at the HUMAN+ exhibition at Trinity College in Dublin. He concluded that such a method of assisted suicide introduces an element of ritual to a death that has been “medicalised, secularised, sterilised.”

[Embedded TikTok video from @lukedavidson_: “You Can Only Ride This Roller Coaster Once #facts” (sound: Paris - Else)]

Needless to say, Urbonas’ ride never made it past its prototype phase. Whether the latest euthanasia boat ride will meet the same fate remains to be seen.

AI

An algorithm is challenging the most intelligent human minds in debate. What happens when it wins?

In 2019, globally recognised debate champion Harish Natarajan took part in a live debate with a five-and-a-half-foot-tall rectangular computer screen, in front of around 800 people. The topic of discussion? Whether or not preschool should be subsidised. The topic isn’t really the point here, however; the fact that Natarajan had a heated discussion with a computer system is. That particular algorithm has also evolved rather quickly since then, and it’s inching closer to engaging in the type of complex human interaction that formal argumentation represents.

For a bit of context: IBM’s Deep Blue was the first computer to defeat a reigning world chess champion, beating Garry Kasparov in 1997, and fourteen years later IBM’s Watson defeated the all-star Jeopardy! players Brad Rutter and Ken Jennings. By this point, the idea of intelligent computers was firmly established, but most of the tests used to prove it were based on clear winner-or-loser outcomes. In other words, the coding behind such technological masterminds followed a defined, binary algorithmic path to victory, which suggested that a system able to handle the nuance of complex conversation with human beings was still not possible. That is, until (potentially) now.

A study published in Nature shows startling progress within the artificial intelligence (AI) industry, specifically with IBM’s new creation, ‘Project Debater’: an algorithm that is increasing the likelihood that a computer may soon be able to understand and operate within what could be described as the ‘grey area’ separating humans from technology.

The study consists of IBM researchers from all over the world reporting on the AI system’s progress. Following on from the 2019 debate with Natarajan, a series of similar tests between Project Debater and three other expert human debaters has been recorded and evaluated across nearly 80 different topics by 15 members of a virtual audience.

As reported by Scientific American, in these human-against-machine contests, “neither side is allowed access to the Internet. Instead, each is given 15 minutes to ‘collect their thoughts,’ as Christopher P. Sciacca, manager of communications for IBM Research’s global labs, puts it. This means the human debater can take a moment to jot down ideas about a topic at hand, such as subsidised preschool, while Project Debater combs through millions of previously stored newspaper articles and Wikipedia entries, analysing specific sentences and commonalities and disagreements on particular topics. Following the prep time, both sides alternately deliver four-minute speeches, and then each gives a two-minute closing statement.”

Although Project Debater has come a long way, it still hasn’t managed to argue past the human debaters, but to be fair, neither can most other humans. What is the real point of having an AI system that can argue, anyway? Well, humans live online, and bots frequently chat with us without us being able to tell whether we’re talking to one or not. One goal is to make that distinction even harder for us to spot.

Researcher Chris Reed from the University of Dundee, who isn’t part of the Project Debater team, wrote in a commentary also published in Nature that “More than 50 laboratories worldwide are working on the problem, including teams at all the large software corporations.” This leads us to realise that these systems aren’t going anywhere anytime soon. As Futurism wrote on the topic, to prepare ourselves for what’s to come, “we should all perhaps start thinking about how to choose our battles. Before you get sucked into another online argument, keep in mind it might just be some bot on the other end that will endlessly engage in the fight until you just walk away—or waste hours screaming into the digital void.”

Models of what constitutes a ‘good argument’ are diverse; on the other hand, a good debate can amount to, as Reed puts it, “little more than formalised intuitions.” The commentary goes on to pose that the challenge argument-technology systems face will basically be whether to treat arguments as local fragments of discourse influenced by an isolated set of considerations, or to “weave them into the larger tapestry of societal-scale debates.” Reed writes that “this is about engineering the problem to be tackled, rather than engineering the solution.”

In the real, human world, there are no clear boundaries that determine an argument; solutions are more often than not subject to a range of contextual ideas. However, if Project Debater is further adapted and in turn proves successful, Reed comments: “Given the wildfires of fake news, the polarisation of public opinion and the ubiquity of lazy reasoning, that ease belies an urgent need for humans to be supported in creating, processing, navigating and sharing complex arguments—support that AI might be able to supply. So although Project Debater tackles a grand challenge that acts mainly as a rallying cry for research, it also represents an advance towards AI that can contribute to human reasoning.”

In essence, AI systems are defined by a machine’s ability to perform a task usually associated with intelligent human beings. Argument and debate are fundamental to the way humans interact with and react to the world around them, and so, in a human world that has become increasingly parallel in importance to the online one, having an intelligent system of interactions and reactions will simply solidify that parallel. Whether that is, in fact, a good or bad thing is still open to human debate.