Imagine if I told you that a few years down the line humans would be able to merge themselves with computers. Well, soon you won’t have to imagine it anymore. Recent studies show that we are closer to a transhumanist era than we thought. But what exactly is transhumanism?
Transhumanism is the philosophical belief that human evolution should come to its natural end and from there continue through human intervention, essentially merging biology with technology. In the past two decades, we’ve seen technology transform and improve radically. Even I, despite being born in 1998, remember a time before WiFi and 4G, when we couldn’t even use the landline while the internet was switched on. Fast forward to today: we have people replacing human partners with sex robots, self-driving cars in development, and employee-less stores. Transhumanism is the next step.
A recent study by scientists at Harvard and the University of Surrey suggests that we are on the verge of entering the transhumanist era: the researchers have manufactured nanoscale probes that read intracellular electrical activity from neurons. Such probes could measure the electric current that runs within our cells and push progress on human-machine interfaces. In other words, in the near future, science may have the ability to turn us into literal machines.
And this should come as no surprise, especially given Elon Musk’s recent developments with Neuralink. The company recently unveiled some of the technology it has been working on, including a device implanted in paralysed people that allows them to control phones and computers using electrical signals from their brains. This would mean that people who previously couldn’t communicate, or struggled to, could now do so through technology, as a computer would quite literally be able to read their thoughts.
Similarly, Facebook recently released an update on its own brain-reading computer interfaces. The company funded research at the University of California, San Francisco (UCSF), which published the results of an experiment decoding people’s speech: implanted electrodes linked to a computer read words and phrases directly from the brain.
Such topics inspire heated debate, and it’s understandable why. While Neuralink’s and Facebook’s developments may improve the lives of many, the merging of humans with technology is controversial. Many argue that we are already transhuman, given our growing screen addiction and reliance on technology. This is where the increased discussion of transhumanism comes from.
Elon Musk himself believes we need to merge with AI in order to avoid losing control of superintelligent technology and to prevent technological unemployment. Humanity+, a non-profit organisation that advocates for transhumanism, promotes the use of AI to expand human capacities and “increase human performance”, as it puts it, “outside the realms of what is considered to be normal for humans”; it already has over 6,000 members. There are threads on Quora where people discuss their desire to become transhuman, and one of HBO’s most recent shows, Years and Years, presents us with a transhuman teenager in our foreseeable future. The point is, there is a growing demand for transhumanism, and we need to talk about it.
While it is evident that merging biology with technology can bring major improvements to healthcare and medicine, it is still uncertain what other features transhumanism will let us implement in ourselves. It could develop into anything from having a Google search engine inside our brains to taking photographs with our eyes. PhD student Anqi Zhang, who was part of the research team at Harvard, says that “the area of brain-machine interfaces will see significant advancement in the next 10 to 15 years”, meaning we will see various implementations very soon. Of course, this all sounds far-fetched and bizarre, and perhaps rightly so.
There are a number of things that could go wrong, one being that devices implanted in our brains could gradually take over the functions of specific parts of them. And if we ever reach the ‘perfection’ that transhumanism depicts, it would be difficult to know where to draw the line and stop. Nevertheless, we are certainly shifting toward a transhuman era, and all we can do is stay hopeful that, if adequately regulated, it will improve the lives of many.
According to Amazon, we suck at handling our emotions, so they’re offering to do it for us. The company that gave us the Echo and everyone’s favourite voice to come home to, Alexa, has announced it is working on a voice-activated wearable device that can detect our emotions. Based on the user’s voice, the device (unfortunately not a mood ring) can discern the wearer’s emotional state and, theoretically, coach them on how to respond to their own feelings and to other people. Amazon already knows our shopping habits and our personal and financial information; now it wants our souls too. Welcome to the new era of mood-based marketing, and possibly the end of humanity as we know it.
Emotional AI and voice recognition technology have been on the rise, and according to Annette Zimmermann, “By 2022, your personal device will know more about your emotional state than your own family.” Unlike the marketing of the past, which captured your location, what you bought, or what you liked, it’s no longer about what we say but how we say it: the intonation of our voices, the speed at which we talk, the words we emphasise, and even the pauses in between.
Voice analysis and emotional AI are the future, and Amazon plans to be a leader in wearable AI. Using the same software that powers Alexa, this emotion detector will use microphones and voice activation to recognise and analyse a user’s voice, identifying emotions through vocal pattern analysis. Through these vocal biomarkers, it can identify everything from base emotions such as anger, fear, and joy to nuanced feelings like boredom, frustration, disgust, and sorrow. The secretive Lab126, the hardware development group behind Amazon’s Fire phone, Echo speaker, and Alexa, is building the device (code name: Dylan). Although it is still in early development, Amazon filed a patent for it in October 2018.
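Amazon’s actual pipeline is proprietary, but vocal pattern analysis of this kind typically starts with simple prosodic measurements of the audio, such as how much of a clip is silence (pauses) and how loud the voiced parts are. Here is a minimal, illustrative sketch in Python; the feature set, thresholds, and function names are my own assumptions, not Amazon’s:

```python
import numpy as np

def prosodic_features(signal, sr=16000, frame_ms=25):
    """Compute two crude 'vocal biomarkers' from a mono waveform:
    the fraction of frames that are silent (pauses) and the mean
    energy of the voiced frames. Real systems use far richer
    features (pitch contours, spectral shape, speaking rate)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)      # per-frame energy
    threshold = 0.1 * energy.max()             # simple energy gate
    silent = energy < threshold
    return {
        "pause_ratio": float(silent.mean()),
        "mean_voiced_energy": float(energy[~silent].mean()),
    }

# Synthetic example: one second of a 220 Hz tone followed by one
# second of silence, standing in for speech and a pause.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t)
pause = np.zeros(sr)
features = prosodic_features(np.concatenate([speech, pause]), sr=sr)
print(features["pause_ratio"])  # → 0.5: half the clip is silence
```

A real emotion classifier would feed features like these (plus many more) into a trained model; this sketch only shows the first, measurable step.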
This technology has been around since 2009. Companies already working in the space include CompanionMx, a clinical app that uses voice analysis to document a patient’s emotional progress and suggest ways to improve; VoiceSense, which analyses customers’ investment styles and predicts employee hiring and turnover; and Affectiva, born out of the MIT Media Lab, which produces emotional AI for marketing firms, healthcare, gaming, automotive, and almost every other facet of modern life you can think of.
So why is Amazon getting into it now? Amazon’s data goldmine, combined with emotional AI, promises a bigger payout than anything Apple or Fitbit can offer. Combining a user’s mood with their browsing and purchasing history will improve Amazon’s recommendations, refine its target demographics, and sharpen how it sells you stuff.
From a business standpoint, this is quite practical. When it comes down to it, we’ll still need products; health products are one example. You won’t care so much about the bleak implications of targeted marketing when you’re recommended the perfect flu meds while you’re sick. Mood-based marketing makes sense because mood and emotions affect our decision-making: if you were going through a breakup, you’d be more apt to buy an Adele album than if you were in a relationship. But this goes deeper than knowing what type of shampoo we like or which genre of movie we prefer. It is invasive, and it takes away control over our own purchasing power. They’re digging into how we feel: our essence and, if you believe in it, our souls.
One must ask: who is coding this emotion detector? Whose emotional bias will influence what it identifies as an appropriate emotional response? Kate Crawford of the AI Now Institute voiced these concerns in her 2018 speech at the Royal Society, emphasising that the people behind the tech matter most, because their choices will shape how millions of people behave, as well as future generations.
For instance, if a Caucasian man coded this tech, could it accurately identify the emotional state of a black woman wearing the device? How do you detect the feeling that follows a microaggression if the person coding the tech has never experienced one? What about emotions that don’t translate from one language to another? Another concern is that we won’t be able to trust ourselves about how we feel. If we ask where the closest ice cream shop is and the device asks if we’re sad, will we become sad? Can it brainwash us into feeling how it wants us to feel? After decades of using GPS, we no longer know how to navigate without it. Will a similar dependency sever our ability to feel and react emotionally on our own; in other words, to be human?
Taking all this information in, I’m still, weirdly, not mad at the idea of a mood detector. It has potential as an aid: people with conditions such as PTSD, autism, or Asperger’s syndrome could benefit, whether in their own interactions with others or by helping loved ones better understand them. So should we allow non-sentient machines that have never experienced frustration, disappointment, or heartache to tell us how to feel? Part of me says hell no, but part of me wouldn’t mind help with handling my emotions. If we are aware of all the positive and negative implications, we can interact with this technology and use it responsibly. If we see it as an aid and not as a guide, it could help us communicate better with others and with ourselves. Or it could obliterate what is left of our humanity. Sorry, that was a bit heavy-handed, but I can’t help it; I’m human.