Forget SpaceX or Hyperloop, Elon Musk’s ambitious neurotechnology project has just made a breakthrough akin to your wildest sci-fi fantasy: a monkey is now able to play Pong solely with its mind. The demonstration by the company Neuralink is a prime example of a brain-machine interface in action. With human trials set to start later this year, what does this mean for humanity as we know it?
Last year, the company successfully implanted a chip into a pig’s brain to measure visual information and sensory data from its snout. Last month, the company successfully implanted a chip into a monkey’s brain so it could play Pong—the two-dimensional sports game that simulates table tennis—using only its mind. It’s safe to say Neuralink is making (brain) waves within the emerging neurotechnology industry.
It all started with a coin-sized disc, called a ‘link’, which is implanted into the monkey’s brain by a precision surgical robot, connecting thousands of micro-threads from the chip to the neurons responsible for controlling motion. The nine-year-old monkey, called Pager—presumably unaware that he’s the centrepiece of a scientific breakthrough and internet fame—had two Neuralink devices implanted, one on each side of his brain, six weeks earlier. Pager was then taught to use a joystick to move a cursor to targets on a screen in exchange for a banana smoothie. What could possibly go wrong, right?
The ‘link’ device then records the monkey’s neural activity while he interacts with the joystick and cursor. The narrator of the video explains this is only possible thanks to thousands of tiny wires implanted into Pager’s motor cortex—the part of the brain that coordinates hand and arm movements. The data is then fed into a decoder algorithm, which predicts Pager’s intended hand movements in real time.
Neuralink claims that once the decoder is calibrated, the monkey is free to control the cursor without relying on the joystick—essentially controlling the cursor with only its mind. The joystick is then deactivated as the video shows the monkey playing Pong with, and only with, its mind. It’s proof of the astonishing scientific advances we humans can achieve—Pager is able to play Pong telepathically with more accuracy than I ever could on my 2008 flip phone.
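Neuralink has not published the details of its decoder, but the calibrate-then-predict pipeline described above can be illustrated with a toy version: fit a linear (ridge-regression) mapping from simulated neural firing rates to 2D cursor velocities during a “joystick” calibration phase, then decode intended velocity from neural activity alone. Every number and variable here is invented for demonstration.

```python
import numpy as np

# Toy sketch of a motor decoder, NOT Neuralink's actual algorithm.
# We simulate firing rates on some electrode channels, assume a hidden
# linear relationship to hand velocity, and recover it with ridge regression.

rng = np.random.default_rng(0)

n_channels = 64        # simulated electrode channels
n_samples = 2000       # "calibration" samples recorded during joystick use

# Hidden ground-truth mapping from neural activity to (vx, vy).
true_weights = rng.normal(size=(n_channels, 2))

# Simulated firing rates, and the noisy hand velocities they encode.
rates = rng.normal(size=(n_samples, n_channels))
velocities = rates @ true_weights + 0.1 * rng.normal(size=(n_samples, 2))

# Calibration step: fit decoder weights with ridge regression,
# closed form W = (X^T X + lambda I)^-1 X^T Y.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_channels),
                    rates.T @ velocities)

# "Joystick off": decode intended cursor velocity from new activity alone.
new_rates = rng.normal(size=(1, n_channels))
predicted_velocity = new_rates @ W
```

In this simplified picture, “calibrating the decoder” just means fitting `W` while the joystick provides ground-truth movements; once `W` is learned, the joystick is redundant and the predicted velocities can drive the cursor directly.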
To put it bluntly, it’s too early to tell. However, there is reason to believe we’re witnessing the emergence of a new technology that could have a serious impact on society. Bearing in mind that most of this is still hypothetical—aside from Pager’s ability to play a video game telepathically, which is now documented science—let’s start with the positives.
Neuralink claims that the technology could assist people who are paralysed by brain or spinal-cord injuries, giving them the ability to control computerised devices with their minds—similar to how Pager was able to control a cursor with just his brain. If all goes to plan, it would be an invaluable way for paraplegics, quadriplegics or stroke victims to live a free and autonomous life. The ‘link’ chip might also be able to connect with other technology, for instance, making prosthetic limbs feel ‘real’.
This experiment’s success also touches upon how the technology could, theoretically, be a valuable treatment for psychological and neurological conditions like depression or addiction—Neuralink even claims it could restore senses for those who are blind or deaf. This is all rather far out, but there’s reason to be cautiously optimistic about how developments in neurotechnology could drastically change medicine as we know it, and for the good.
It’s worth noting the positives go beyond therapeutic value too. The technology could offer a faster way of interacting with computers—we wouldn’t be limited to the QWERTY keyboard anymore, instead, we’d be able to send messages at the speed of thought. Granted, this would make being ghosted by your Tinder match that extra bit painful.
Scientists have also theorised that the technology could connect brains to the cloud. This would essentially change human intelligence as we know it—an individual’s ‘native’ intelligence could be augmented by accessing cloud-based artificial intelligence. It sounds wacky now, but imagine explaining Google to someone in the early 90s.
Alright, I’m going to burst the positive bubble here: criminals have always adapted to new technology in order to exploit the vulnerable, and they most likely always will. It happened with credit cards, with the internet, and it even happened with COVID-19—there’s no reason to believe that once this technology goes mainstream, it will be invulnerable to those with bad intentions.
Scientists warn that without “bulletproof security”, hackers could access implanted chips, causing them to malfunction or misdirecting their actions. Think of the Wallace and Gromit film in which an evil penguin hacks a pair of robotic trousers to steal a diamond from a museum, framing Wallace in the process—only with much darker consequences. A device vulnerable to such attacks could be fatal for the very disabled individuals the technology is meant to benefit.
It’s an ethical and philosophical issue that still plagues the neurotechnology field to this day. And if that wasn’t complicated enough, some have raised concerns that AI working through a brain-machine interface could take control of the host’s brain through nanotechnology. Musk himself has previously warned that AI poses an existential threat to humanity—claiming it is set to overtake humans in less than five years.
It’s a tricky ethical minefield to manoeuvre. And if animal testing wasn’t ethically fraught enough, human trials are set to start later this year. Scientists have warned that we must devote enough time and effort to building safeguards. If implemented safely, however, the technology could bring enormous benefits to society.
As for me: I’m a writer, not a scientist; there’s little value I can add to the discussion beyond what I’ve already said. I guess it’s a waiting game—if in twenty years I can order a pizza just by thinking (and my brain isn’t hacked by cybercriminals), I’ll be happy knowing science has done its job.
Some of you might not remember, but August 2017 was definitely microchipping’s ‘breakthrough’ moment. Wisconsin-based company Three Square Market lined up its employees in its cafeteria to be implanted with microchips. Soon after, the media dubbed the event the “chipping party” and bombarded us with futuristic tales of cashless payments and phoneless calls. We were all going to be transhuman. Fast forward to 2019, and most of us are still microchip-free. So why are microchips still not the norm?
Let’s start with the obvious—the idea of living with a chip under your skin, potentially monitoring your every move, is not an easy one to accept. After the famous chipping party, religious groups panicked, convinced that the small implants branded people with “the mark of the beast”, and accused Three Square Market of being the antichrist. While that sounded highly improbable, more rational worries have since been raised, and some have proved valid.
Microchip implants are similar to the ones we’ve long been putting under the skin of livestock and pets. In other words, these tiny identification tags have been around for a while. Kevin Warwick, a professor of cybernetics at Reading University, had a chip implanted in his hand as early as 1998. Warwick wanted to show the world not only that it was doable, but that microchipping was the future. To him, fusing technology with our bodies had to be the next big step. He obviously didn’t take data breaches and the exploitation of labour into consideration.
So far, the few people who have been microchipped are digitally savvy people from wealthy countries. Well-educated Swedes—“chips and beer” evenings were quite big in Stockholm—hardly represent the world’s population. Talking to The Guardian, Urs Gasser, executive director of Harvard’s Berkman Klein Center for Internet and Society, made a very valid point about the way people reacted to the chipping party, stating that “[s]eeing employees get implanted at the workplace made people question what it means to be an employee. Are you a person being paid for your work, or are you the property of the company you work for?” These implants probably weren’t the “mark of the beast”, but they definitely evoked the dystopian world of 1984.
The implications that microchipping technology carries with it are exactly what stops us from getting chipped. For most people, having a chip under the skin equals increased worker surveillance. And, more recently, it could also mean data collection going up a notch—something we clearly don’t need while Facebook is still around. After all, microchips could surveil us non-stop and monitor us wherever we go. Remember the nightmare that was Snap Map, when all your Snapchat contacts could see where you were at any time of the day? Imagine that with microchips, only this time without the option to turn the signal off.
The same kinds of concerns have recently been raised by lawmakers in the US. Arkansas, New Jersey and Tennessee have started drafting bills to make involuntary microchipping illegal, while Nevada has already passed its own version. The message is: feel free to microchip yourself if you want, but don’t force anyone else to do the same, especially not your employees. And yet, surveillance tech is still a big issue in the US, with some companies pushing their employees to wear an Apple Watch at all times to monitor their health. There doesn’t seem to be much difference between that and microchips.
To look at microchipping in a different light from this ethically problematic one, Screen Shot spoke to Jowan Österlund, founder and CEO of the microchipping company Biohax, and the body piercing professional who microchipped the lucky employees at 2017’s chipping party. Österlund, like many others, can’t ignore the risks that microchips could entail, but to him, fear is part of a healthy process: “People say that [they’re scared of microchips] while they’re on Facebook, on their smartphones logged into their Google accounts, so I can’t really take that seriously. I wouldn’t want people to blindly accept any kind of new technologies, because that would be repeating the same mistakes again. If people are scared, it’s good—they get informed, and they’re not afraid anymore.”
Österlund furthered his point by highlighting the many benefits that microchips could bring to adopters, such as health-related tracking: “you could get all the information you need about your heart rate patterns, your sleeping patterns, your blood oxygenation, and breathing patterns.” According to him, most of us react badly to microchips because of the small body modification they require. What needs to change, for them to be completely safe, are the regulations, which have already proved problematic even before most of us have a chip implant.
Instead of panicking about microchips, our attention should be focused on demanding stronger labour regulations and data protection laws, such as Europe’s General Data Protection Regulation (GDPR). In other words, don’t shoot the messenger. Microchips could be our society’s game changer—the only question is: are we ready for that kind of change?