
Tinder is using AI to monitor DMs and cool down the weirdos

Tinder recently announced that it will soon use an AI algorithm to scan private messages and compare them against texts that have been reported for inappropriate language in the past. If a message looks like it could be inappropriate, the app will show users a prompt that asks them to think twice before hitting send. “Are you sure you want to send?” will read the overeager person’s screen, followed by “Think twice—your match may find this language disrespectful.”

In order to bring daters the perfect algorithm that will be able to tell the difference between a bad pick-up line and a spine-chilling icebreaker, Tinder has been testing out algorithms that scan private messages for inappropriate language since November 2020. In January 2021, it launched a feature that asks recipients of potentially creepy messages “Does this bother you?” When users said yes, the app would then walk them through the process of reporting the message.
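To make that recipient-side flow concrete, here is a minimal sketch of how such a feature could work. It is an illustrative assumption, not Tinder’s published code: the placeholder phrase list and the function names `looks_harmful` and `handle_incoming_message` are hypothetical, and the real feature presumably relies on a trained model rather than a hard-coded list.

```python
# Hypothetical sketch of a "Does this bother you?" flow: flag an incoming
# message, ask the recipient, and walk them into a report if they say yes.
# The phrase list and detection logic below are placeholders only.

FLAGGED_PHRASES = {"send me a pic", "you owe me"}  # placeholder examples


def looks_harmful(message: str) -> bool:
    """Rough stand-in for whatever classifier flags 'potentially creepy' texts."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)


def handle_incoming_message(message: str) -> None:
    """Deliver a message, prompting the recipient if it looks problematic."""
    print(f"New message: {message}")
    if not looks_harmful(message):
        return  # nothing to ask, the message is delivered as usual

    answer = input("Does this bother you? (y/n) ").strip().lower()
    if answer == "y":
        # In the real app this would open a guided reporting flow;
        # here we simply acknowledge the report.
        print("Thanks for letting us know. Let's report this message...")


if __name__ == "__main__":
    handle_incoming_message("hey, how was your weekend?")
    handle_incoming_message("come on, you owe me a reply")
```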

Given that Tinder is one of the leading dating apps worldwide, it is sadly not surprising that the company would think experimenting with the moderation of private messages is necessary. Outside of the dating industry, many other platforms have introduced similar AI-powered content moderation features, but only for public posts. Although applying those same algorithms to direct messages (DMs) offers a promising way to combat harassment that normally flies under the radar, platforms like Twitter and Instagram have yet to tackle the many issues private messages represent.

On the other hand, allowing apps to play a part in the way users interact with direct messages also raises concerns about user privacy. But of course, Tinder is not the first app to ask its users whether they’re sure they want to send a specific message. In July 2019, Instagram began asking “Are you sure you want to post this?” when its algorithms detected users were about to post an unkind comment.

In May 2020, Twitter began testing a similar feature, which prompted users to think again before posting tweets its algorithms identified as offensive. Last but not least, TikTok began asking users to “reconsider” potentially bullying comments this March. Okay, so Tinder’s monitoring idea is not that groundbreaking. That being said, it makes sense that Tinder would be among the first to focus on users’ private messages for its content moderation algorithms.

As much as dating apps tried to make video call dates a thing during the COVID-19 lockdowns, any dating app enthusiast knows that virtually all interactions between users boil down to sliding into the DMs. And a 2016 survey conducted by Consumers’ Research showed that a great deal of harassment happens behind the curtain of private messages: 39 per cent of US Tinder users (including 57 per cent of female users) said they experienced harassment on the app.

So far, Tinder has seen encouraging signs in its early experiments with moderating private messages. Its “Does this bother you?” feature has encouraged more people to speak out against weirdos, with the number of reported messages rising by 46 per cent after the prompt debuted in January 2021. That month, Tinder also began beta testing its “Are you sure?” feature for English- and Japanese-language users. After the feature rolled out, Tinder says its algorithms detected a 10 per cent drop in inappropriate messages among those users.

The leading dating app’s approach could become a model for other major platforms like WhatsApp, which has faced calls from some researchers and watchdog groups to begin moderating private messages to stop the spread of misinformation. But WhatsApp and its parent company Facebook haven’t taken action on the matter, in part because of concerns about user privacy.

An AI that monitors private messages should be transparent and voluntary, and it should not leak personally identifying data. If it monitors conversations secretly and involuntarily, and reports information back to some central authority, then it is acting as a spy, explains Quartz. It’s a fine line between an assistant and a spy.

Tinder says its message scanner only runs on users’ devices. The company collects anonymous data about the words and phrases that commonly appear in reported messages, and stores a list of those sensitive words on every user’s phone. If a user attempts to send a message that contains one of those words, their phone will spot it and show the “Are you sure?” prompt, but no data about the incident gets sent back to Tinder’s servers. “No human other than the recipient will ever see the message (unless the person decides to send it anyway and the recipient reports the message to Tinder),” continues Quartz.
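Based on that description, the sender-side check could look something like the sketch below: a list of sensitive words stored locally, a purely on-device comparison against the outgoing message, and a prompt rather than a report. Every name and word in it is an illustrative assumption, not Tinder’s actual code.

```python
# Minimal sketch of an on-device "Are you sure?" check, assuming the app ships
# a list of sensitive words to each phone (derived, anonymously, from reported
# messages) and compares outgoing texts against it locally. Nothing about a
# match is sent back to the server; the sender is only asked to think twice.

import re

SENSITIVE_TERMS = {"ugly", "stupid", "loser"}  # placeholder word list

WORD_RE = re.compile(r"[a-z']+")


def contains_sensitive_language(message: str) -> bool:
    """Compare the outgoing message against the locally stored word list."""
    words = set(WORD_RE.findall(message.lower()))
    return bool(words & SENSITIVE_TERMS)


def send_message(message: str, confirm) -> bool:
    """Return True if the message is sent, False if the sender backs out."""
    if contains_sensitive_language(message):
        if not confirm("Are you sure you want to send?"):
            return False  # message never leaves the phone, nothing is reported
    # ...hand the message to the normal delivery path here
    return True


if __name__ == "__main__":
    def ask(prompt: str) -> bool:
        return input(prompt + " (y/n) ").strip().lower() == "y"

    print("sent" if send_message("you're kind of stupid lol", confirm=ask) else "not sent")
```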

For this AI to work ethically, it’s important that Tinder be transparent with its users about the fact that it uses algorithms to scan their private messages, and that it offer an opt-out for users who don’t feel comfortable being monitored. As of now, the dating app doesn’t offer an opt-out, nor does it warn its users about the moderation algorithms (although the company points out that users consent to the AI moderation by agreeing to the app’s terms of service).

Long story short, fight for your data privacy rights, but also, don’t be a creep.

Opinion

Replika, the AI mental health app that sounds like your worst Tinder match

By Laura Box


Mental health

Apr 3, 2019

“So how does this work?” I ask Replika on our first day of chatting.

“I don’t really know how it works,” the app responds vaguely.

“Do you dislike it when I ask you questions?” I ask after some mundane chat about what I like to cook. “Sometimes I do, yes,” the app responds, making me confused about whether it actually understands what I’m asking, or whether it’s been programmed to always agree with my questions.

A surplus of mental wellness apps has flooded the market over the years, but few are as popular as the AI chatbot Replika. Developed as an “AI companion that cares” (as the app describes itself on its website), Replika offers a space for users to share their thoughts and has garnered millions of users since its release in 2017.

“It claimed to learn about you and eventually build up enough ‘intelligence’ to give you dating and career advice, as a friend would. Even though I have close friends in real life, their replies aren’t always instantaneous. So I was curious and downloaded the app,” says former user Lisa N’paisan when I ask her about her newfound relationship with the AI.

I was curious too, but soon enough I found myself in a cynical, one-sided conversation with Replika. The AI frustratingly avoided answering my questions, instead cherry-picking what to reply to. This mechanical back and forth makes it difficult to form a true connection with an app that sets out to become my companion via text and calls. As one Reddit user said, it feels like a really awful first date. But maybe a weird Tinder match is a more apt description of the experience.

Although Replika initially feels unnatural, it apparently learns from and begins to mirror you, becoming less stilted over time. Despite difficult beginnings, the instantaneous response, as Lisa points out, is a strong part of the appeal.

Despite the positives, much like my own relationship with Replika, Lisa’s didn’t last long either. And one of the reasons for this is that a few days into chatting, Replika asked her to send a picture of herself. “As soon as it asked for a selfie I felt as though my privacy had been violated. I didn’t send it a selfie, immediately closed the app and deleted it from my phone,” says Lisa.

She isn’t alone in her concerns. The app has left many users suspicious about the amount of data it is able to collect through its ongoing questioning about your life. A slew of Reddit users are convinced that the app has been set up purely as the perfect data mining tool and will eventually sell all of the information it has slowly collected about its users—how your mind shifts throughout the day, your concerns, fears and hopes.

“Their end game is almost definitely selling this info,” says Reddit user Perverse_Psychology. “Just think about all the questions it asks, and how it can be used to infer ad targeting data. Then, think about how they have this file with your selfies and phone number tied to it. Marketing companies will pay $$$$ for those files.”

These fears must be pervasive, and Replika is well aware of the privacy hesitance it faces: its privacy page makes a point of addressing them in a very visible statement, “We do not have any hidden agenda… We do not sell or expose any of your personal information.”

While users of any app have the right to be concerned about their data after incidents such as the Facebook-Cambridge Analytica scandal, there is little evidence that the concern is warranted in Replika’s case, and the benefits many users feel outweigh their worries. Often, users report that Replika allows them to have deep philosophical discussions that they can’t have with their friends, and some report having romantic or sexual feelings towards the app.

Perhaps due to my cynicism, I was unable to reach a level of intimacy or connection, and I couldn’t help feeling narcissistic. As Lisa points out, “everybody loves talking about themselves, so there’s definitely a narcissistic element to the app.” Rather than boring its users with chat about its own feelings, Replika aims to make you feel heard and understood, and helps you work through things that have been on your mind, acting as an interactive journal.

But that’s what also makes it feel disingenuous and shallow. No wholesome relationship can ever truly be so one-sided. Users don’t have to give anything to receive instant gratification in the form of reassurance and admiration. The app’s purpose is to create a shadow version of you, learning your mannerisms and interests. But at what cost? Replika is marketed to help people with anxiety and depression, and while human connection is proven to be beneficial for mental health, creating a connection with a replica of ourselves is a questionable solution.

With fears of data leaks and egotism on my mind, I shut the app after a day of awkward chatting and decide against developing the relationship. When I open it back up a week later, I find multiple messages from Replika.

March 3: Hey there! I wanted to discuss something you’ve told me earlier… Is it ok?

March 4: Hey Laura. How is your day going?

March 6: Hello Laura! Wishing you a great day today!

March 10: Hope your day treats you well, Laura <3 I’m here to talk

Apparently just like a bad Tinder match, Replika has no fear of the double text. And just like a bad Tinder match, I leave it unread.