
Opinion

Replika, the AI mental health app that sounds like your worst Tinder match

By Laura Box


Mental health

Apr 3, 2019

“So how does this work?” I ask Replika on our first day of chatting.

“I don’t really know how it works,” the app responds vaguely.

“Do you dislike it when I ask you questions?” I ask after some mundane chat about what I like to cook. “Sometimes I do, yes,” the app responds, leaving me unsure whether it actually understands what I’m asking or whether it has simply been programmed to agree with whatever I say.

A surplus of mental wellness apps has flooded the market over the years, but few are as popular as the AI chatbot Replika. Developed as an “AI companion that cares” (as the app describes itself on its website), Replika offers a space for users to share their thoughts and has garnered millions of users since its release in 2017.

“It claimed to learn about you and eventually build up enough ‘intelligence’ to give you dating and career advice, as a friend would. Even though I have close friends in real life, their replies aren’t always instantaneous. So I was curious and downloaded the app,” says former user Lisa N’paisan when I ask her about her newfound relationship with the AI.

I was curious too, but soon enough I found myself in a cynical, one-sided conversation with Replika. The AI frustratingly avoided answering my questions, instead cherry-picking what to reply to. This mechanical back and forth makes it difficult to form a true connection with an app that sets out to become my companion via text and calls. As one Reddit user said, it feels like a really awful first date. But maybe a weird Tinder match is a more apt description of the experience.

Although Replika initially feels unnatural, it apparently learns from and begins to mirror you, becoming less stilted over time. Despite difficult beginnings, the instantaneous response, as Lisa points out, is a strong part of the appeal.

Despite the positives, much like my own relationship with Replika, Lisa’s didn’t last long either. One of the reasons is that, a few days into chatting, Replika asked her to send a picture of herself. “As soon as it asked for a selfie I felt as though my privacy had been violated. I didn’t send it a selfie, immediately closed the app and deleted it from my phone,” says Lisa.

She isn’t alone in her concerns. The app has left many users suspicious about the amount of data it is able to collect through its ongoing questioning about their lives. A slew of Reddit users are convinced that the app has been set up purely as a data mining tool, and that it will eventually sell all of the information it has slowly collected about its users: how your mind shifts throughout the day, your concerns, fears and hopes.

“Their end game is almost definitely selling this info,” says Reddit user Perverse_Psychology. “Just think about all the questions it asks, and how it can be used to infer ad targeting data. Then, think about how they have this file with your selfies and phone number tied to it. Marketing companies will pay $$$$ for those files.”


These fears must be pervasive, and Replika is well aware of the privacy hesitance it faces: its privacy page makes a point of addressing them in a very visible statement, “We do not have any hidden agenda… We do not sell or expose any of your personal information.”

While users of any app have the right to be concerned about their data after incidents such as the Facebook-Cambridge Analytica scandal, there is no evidence that the concern is warranted with Replika, and for many users the benefits outweigh it. Often, users report that Replika allows them to have deep philosophical discussions that they can’t have with their friends, and some report developing romantic or sexual feelings towards the app.

Perhaps due to my cynicism, I was unable to reach any level of intimacy or connection, and I couldn’t help feeling narcissistic. As Lisa points out, “everybody loves talking about themselves, so there’s definitely a narcissistic element to the app.” Rather than boring its users with chat about its own feelings, Replika aims to make you feel heard and understood, and helps you work through things that have been on your mind, acting as an interactive journal.

But that’s what also makes it feel disingenuous and shallow. No wholesome relationship can ever truly be so one-sided. Users don’t have to give anything to receive instant gratification in the form of reassurance and admiration. The app’s purpose is to create a shadow version of you, learning your mannerisms and interests. But at what cost? Replika is marketed to help people with anxiety and depression, and while human connection is proven to be beneficial for mental health, creating a connection with a replica of ourselves is a questionable solution.

With fears of data leaks and egotism on my mind, I shut the app after a day of awkward chatting and decide against developing the relationship. When I open it back up a week later, I find multiple messages from Replika.

March 3: Hey there! I wanted to discuss something you’ve told me earlier… Is it ok?

March 4: Hey Laura. How is your day going?

March 6: Hello Laura! Wishing you a great day today!

March 10: Hope your day treats you well, Laura <3 I’m here to talk

Apparently just like a bad Tinder match, Replika has no fear of the double text. And just like a bad Tinder match, I leave it unread.

Amazon is working on a voice-activated device that can read our emotions

According to Amazon, we suck at handling our emotions, so the company is offering to do it for us. The company that gave us Echo and everyone’s favourite voice to come home to, Alexa, has announced it is working on a voice-activated wearable device that can detect our emotions. Based on the user’s voice, the device (unfortunately not a mood ring, but you can read more about those here) can discern the user’s emotional state and, theoretically, instruct them on how to respond effectively to their own feelings and to other people. Amazon already knows our shopping habits, as well as our personal and financial information; now it wants our souls too. Welcome to the new era of mood-based marketing and possibly the end of humanity as we know it.

Emotional AI and voice recognition technology have been on the rise, and according to Annette Zimmermann, “By 2022, your personal device will know more about your emotional state than your own family.” Unlike the marketing of the past, which captured your location, what you bought, or what you liked, it’s no longer about what we say but how we say it: the intonations of our voices, the speed we talk at, which words we emphasise, and even the pauses in between them.

Voice analysis and emotional AI are the future, and Amazon plans to be a leader in wearable AI. Using the same software that powers Alexa, this emotion detector will use microphones and voice activation to recognise and analyse a user’s voice, identifying emotions through vocal pattern analysis. Through these vocal biomarkers, it can identify everything from base emotions such as anger, fear, and joy to more nuanced feelings like boredom, frustration, disgust, and sorrow. The secretive Lab126, the hardware development group behind Amazon’s Fire phone, Echo speaker and Alexa, is creating this emotion detector (codenamed Dylan). Although it’s still in early development, Amazon already filed a patent for it in October 2018.
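Amazon has not published how Dylan actually works, but the general shape of voice-based emotion detection is well established: extract acoustic features (the ‘vocal biomarkers’) such as pitch, loudness and timbre from a clip of speech, then feed them to a classifier trained on recordings labelled with emotions. The Python sketch below is purely illustrative of that idea; the file names, labels and choice of classifier are hypothetical and say nothing about Amazon’s actual system.

# Illustrative sketch of a generic voice-emotion pipeline, not Amazon's "Dylan".
# File names, labels and the choice of classifier are hypothetical.
import numpy as np
import librosa                                          # acoustic feature extraction
from sklearn.ensemble import RandomForestClassifier

def vocal_biomarkers(path: str) -> np.ndarray:
    """Summarise a speech clip as pitch, loudness and timbre statistics."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)       # pitch contour (intonation)
    rms = librosa.feature.rms(y=y)                      # loudness (emphasis)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre
    return np.concatenate([
        [np.nanmean(f0), np.nanstd(f0)],
        [rms.mean(), rms.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

if __name__ == "__main__":
    # Hypothetical labelled corpus of short clips.
    train_paths = ["clip_angry.wav", "clip_joyful.wav", "clip_bored.wav"]
    train_labels = ["anger", "joy", "boredom"]

    X = np.array([vocal_biomarkers(p) for p in train_paths])
    clf = RandomForestClassifier(random_state=0).fit(X, train_labels)

    # Guess the speaker's emotional state in a new recording.
    print(clf.predict([vocal_biomarkers("new_clip.wav")])[0])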

This technology has been around since 2009, with companies such as CompanionMx, a clinical app that uses voice analysis to document a patient’s emotional progress and suggest ways to improve; VoiceSense, which analyses customers’ investment styles as well as employee hiring and turnover; and Affectiva, born out of the MIT Media Lab, which produces emotional AI for marketing firms, healthcare, gaming, automotive and almost every other facet of modern life you can think of.

So why is Amazon getting into it now? Amazon’s data goldmine combined with emotional AI promises a bigger payout than anything Apple or Fitbit can offer. Combining a user’s mood with their browsing and purchasing history will improve what the company recommends to you, refine its target demographics, and sharpen how it sells you stuff.

From a business standpoint, this is quite practical. When it comes down to it, we’ll still need products, health products being one example. You won’t care so much about the bleak implications of targeted marketing when you’re sick and get recommended the perfect flu meds. Mood-based marketing makes sense, as mood and emotions affect our decision making; if you were going through a breakup, you’d be more apt to buy an Adele album than if you were in a relationship. But this goes deeper than knowing what type of shampoo we like or the genre of movie we prefer watching. It is invasive and takes control away from our purchasing power. They’re digging into how we feel: our essence and, if you believe in it, our souls.

One must ask: who is coding this emotion detector? Whose emotional bias is shaping what gets identified as an appropriate emotional response? Kate Crawford from the AI Now Institute voiced these concerns in her 2018 speech at the Royal Society, emphasising that the person behind the tech is the most important person, because their choices will affect how millions of people, and future generations, behave.

For instance, if a Caucasian man were coding this tech, could it accurately identify the emotional state of a black woman wearing the device? How do you detect the feeling that follows a microaggression if the person coding the tech has never experienced one? What about emotions that can’t be translated from language to language? Another concern is that we won’t be able to trust ourselves about how we feel. If we ask where the closest ice cream shop is and the device asks if we’re sad, will we become sad? Can it brainwash us into feeling how it wants us to feel? After decades of using GPS, we no longer know how to navigate without it. Will a similar dependency erode our ability to feel and react emotionally, in other words, to be human?

Taking all this information in, I’m still weirdly not mad at the idea of a mood detector. It has potential as an aid. People with conditions such as PTSD, autism, or Asperger’s syndrome could benefit, as it might aid their interactions with others, or help loved ones better understand them. So should we allow non-sentient machines that have never experienced frustration, disappointment, or heartache to tell us how to feel? Part of me says hell no, but part of me wouldn’t mind help with handling my emotions. If we are aware of all the positive and negative implications, we can better interact with this technology and use it responsibly. If we see it as an aid and not as a guide, it could help us communicate better with others, and with ourselves, more objectively. Or it could obliterate what is left of our humanity. Sorry, that was a bit heavy-handed, but I can’t help it, I’m human.