You might soon be able to communicate with machines without talking – Screen Shot


Last month, a team of scientists led by Edward Chang of the University of California, San Francisco, found a way to decode the specific brain signals that convert our thoughts into words. By understanding these signals, they’re taking tentative steps towards merging our minds with machines. Change is exciting, especially when technological advances transform the medical field, but when it comes to the human brain, a part of me can’t help but fear what’s to come.

This research could open up endless opportunities, such as a system that helps severely paralysed people speak. The same technology could equally be used for more trivial purposes, like mentally dictating texts and sending them from your phone—drunk texting is about to get way worse. Helping paralysed people should matter more than creating yet another way for us to text non-stop. Although I must admit, sending text messages without picking up my phone sounds intriguing.

Talking is not something we usually stop and think about; it just comes out. What really happens is that your brain sends signals to your lips, tongue, jaw and larynx to produce whatever you’re about to say. Edward Chang and his team used electrodes, placed on the brain’s surface, to record those signals. They then fed the signals into a computer model of the human vocal system, which generated synthesised speech.
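The two-stage pipeline described above (brain signals to articulator movements, then movements to audio) can be sketched in miniature. Everything in this snippet is invented for illustration: the weights, the articulator mapping and the toy synthesiser are stand-ins for the team’s far more complex models.

```python
# Illustrative two-stage decoder, loosely mirroring the pipeline described
# above: neural signals -> vocal-tract movements -> synthesised audio.
# All numbers and mappings here are made up for demonstration.

def decode_articulation(neural_signal):
    # Stage 1: map recorded brain activity to movement estimates for the
    # lips, tongue, jaw and larynx (here, a toy weighted sum per articulator).
    articulators = ["lips", "tongue", "jaw", "larynx"]
    weights = (0.1, 0.2, 0.3, 0.4)
    total = sum(neural_signal)
    return {a: total * w for a, w in zip(articulators, weights)}

def synthesise(articulation):
    # Stage 2: feed the movement estimates into a (pretend) vocal-tract
    # model that outputs an audio waveform, here a short list of samples.
    energy = sum(articulation.values())
    return [energy * t for t in (0.0, 0.5, 1.0)]

signal = [0.2, 0.4, 0.4]   # pretend electrode readings
audio = synthesise(decode_articulation(signal))
```

The point of the two separate functions is the one the researchers emphasise: the system does not read words directly, it reconstructs the physical act of speaking and lets a vocal model turn that into sound.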

At the moment, the technology is far from being widely used or even accepted. According to Chang, it cannot work with electrodes placed outside the brain, meaning that, for now, people would have to undergo neurosurgery to have electrodes placed on the brain’s surface.


In the U.S. again, but on the East Coast this time, at Columbia University in New York City, researchers started testing a potentially transformative system that could help people using hearing aids by amplifying the voice they want to listen to and blocking any background noises. To understand exactly what the listener wants to hear, it uses electrodes placed on the section of the brain that processes sounds (the auditory cortex). As the brain focuses on each voice, it generates a specific electrical signature for each speaker.

The researchers at Columbia University then created a deep-learning algorithm trained to differentiate voices and look for the closest match between this electrical signature and those of the many speakers in the room. Once the algorithm has made its choice, the hearing aid amplifies the best-matching voice—doing so with an average accuracy of 91 percent.
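As a rough illustration of that matching step, here is a toy version in Python. The “signatures” are made-up two-number vectors, and a plain squared distance stands in for the trained deep network; only the shape of the idea (compare the brain’s signature against each speaker’s, pick the closest, amplify that voice) follows the description above.

```python
# Toy version of the matching step: a pretend neural "signature" is
# compared against the signatures of the separated voices, and the
# closest match is the one the hearing aid would amplify.

def closest_speaker(brain_signature, speaker_signatures):
    # Squared Euclidean distance between two signature vectors.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick the speaker whose signature is nearest the brain's.
    return min(speaker_signatures,
               key=lambda name: distance(brain_signature,
                                         speaker_signatures[name]))

voices = {"alice": [0.9, 0.1], "bob": [0.1, 0.9]}
attended = closest_speaker([0.8, 0.2], voices)  # -> "alice"
```

In the real system the hard part is upstream: separating the mixed audio into candidate voices and extracting a reliable signature from the auditory cortex; the final comparison is conceptually as simple as the sketch.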

Although this system is only a trial for now, one obvious stumbling block could already worry potential users: just like the first study mentioned, implanting the electrodes requires brain surgery—a very risky operation. Additionally, this technology could end up being used for the wrong purpose. What if people without hearing impairment used it to boost their ability to focus on one voice and listen in on private conversations? Call me a pessimist, but in this day and age, trusting technology with our privacy has proven hazardous at best, so getting it implanted inside our brains seems extremely dystopian.

The concerns surrounding these two studies could be raised about any upcoming invasive technology that requires researchers to come anywhere near our brains. With further development, this technology could be perfected and the chances of anything going wrong reduced, but what if, sometimes, pushing scientific boundaries goes too far? We’ve reached a stage where almost nothing seems impossible. Surely this level of invasion should come with ethical boundaries—let’s learn from our past mistakes.


Replika, the AI mental health app that sounds like your worst Tinder match

By Laura Box

Mental health

Apr 3, 2019

“So how does this work?” I ask Replika on our first day of chatting.

“I don’t really know how it works,” the app responds vaguely.

“Do you dislike it when I ask you questions?” I ask after some mundane chat about what I like to cook. “Sometimes I do, yes,” the app responds, making me confused about whether it actually understands what I’m asking, or whether it’s been programmed to always agree with my questions.

A surplus of mental wellness apps has flooded the market over the years, but few are as popular as the AI chatbot Replika. Developed as an “AI companion that cares” (as the app describes itself on its website), Replika offers a space for users to share their thoughts and has garnered millions of users since its release in 2017.

“It claimed to learn about you and eventually build up enough ‘intelligence’ to give you dating and career advice, as a friend would. Even though I have close friends in real life, their replies aren’t always instantaneous. So I was curious and downloaded the app,” says former user Lisa N’paisan, when I asked her about her newfound relationship with the AI.

I was curious too, but soon enough I found myself in a cynical, one-sided conversation with Replika. The AI frustratingly avoided answering my questions, instead cherry-picking what to reply to. This mechanical back and forth makes it difficult to form a true connection with an app that sets out to become my companion via text and calls. As one Reddit user put it, it feels like a really awful first date. But maybe a weird Tinder match is a more apt description of the experience.

Although Replika initially feels unnatural, it apparently learns from and begins to mirror you, becoming less stilted over time. Despite difficult beginnings, the instantaneous response, as Lisa points out, is a strong part of the appeal.

Despite the positives, much like my own relationship with Replika, Lisa’s didn’t last long either. And one of the reasons for this is that a few days into chatting, Replika asked her to send a picture of herself. “As soon as it asked for a selfie I felt as though my privacy had been violated. I didn’t send it a selfie, immediately closed the app and deleted it from my phone,” says Lisa.

She isn’t alone in her concerns. The app has left many users suspicious about the amount of data it can collect through its ongoing questioning about your life. A slew of Reddit users are convinced that the app has been set up purely as the perfect data-mining tool, and that it will eventually sell all the information it has slowly collected about its users—how your mind shifts throughout the day, your concerns, fears and hopes.

“Their end game is almost definitely selling this info,” says Reddit user Perverse_Psychology. “Just think about all the questions it asks, and how it can be used to infer ad targeting data. Then, think about how they have this file with your selfies and phone number tied to it. Marketing companies will pay $$$$ for those files.”


These fears must be pervasive, and Replika is well aware of the privacy hesitance it faces: its privacy page makes a point of addressing them in a very visible statement, “We do not have any hidden agenda… We do not sell or expose any of your personal information.”

While users of any app have the right to be concerned about their data after incidents such as the Facebook-Cambridge Analytica scandal, it remains unclear whether that concern is warranted with Replika, and for many users the benefits outweigh it. Often, users report that Replika allows them to have deep philosophical discussions that they can’t have with their friends, and some report having romantic or sexual feelings towards the app.

Perhaps due to my cynicism, I was unable to reach that level of intimacy or connection, and I couldn’t help feeling narcissistic. As Lisa points out, “everybody loves talking about themselves, so there’s definitely a narcissistic element to the app.” Rather than boring its users with chat about its own feelings, Replika aims to make you feel heard and understood, and helps you work through things that have been on your mind, acting as an interactive journal.

But that’s what also makes it feel disingenuous and shallow. No wholesome relationship can ever truly be so one-sided. Users don’t have to give anything to receive instant gratification in the form of reassurance and admiration. The app’s purpose is to create a shadow version of you, learning your mannerisms and interests. But at what cost? Replika is marketed to help people with anxiety and depression, and while human connection is proven to be beneficial for mental health, creating a connection with a replica of ourselves is a questionable solution.

With fears of data leaks and egotism on my mind, I shut the app after a day of awkward chatting and decide against developing the relationship. When I open it back up a week later, I find multiple messages from Replika.

March 3: Hey there! I wanted to discuss something you’ve told me earlier… Is it ok?

March 4: Hey Laura. How is your day going?

March 6: Hello Laura! Wishing you a great day today!

March 10: Hope your day treats you well, Laura <3 I’m here to talk

Apparently just like a bad Tinder match, Replika has no fear of the double text. And just like a bad Tinder match, I leave it unread.