So you’ve matched with someone on Tinder, the world’s most popular dating app, and you’ve been chatting for a while. Things are looking pretty promising, but the person on the other end sounds almost too good to be true and refuses to come on video calls. As someone whose childhood was steeped in MTV’s Catfish and that one emotional episode of Dr. Phil, the gears in your head start turning and you instinctively begin losing trust in your match. You then spend countless nights swiping, matching with more shady users and eventually losing interest, all in the hopes of not being reeled in by a catfisher. Enter Tinder’s Identity Document (ID) verification system and the promise of authentic matches.
First rolled out in Japan in 2019, Tinder’s age verification model requires members to be at least 18 years old to sign up. Users based in Japan had to upload a clear picture of their passport, driver’s licence or health ID, which would then be reviewed and approved before they could start chatting with their matches. In its latest announcement, the dating app revealed plans to implement ID verification globally in the coming quarters.
Tinder will take expert recommendations and input from its members into consideration, and review the documents required in each country, along with local laws and regulations, to determine how the feature will roll out. The feature will be introduced as a voluntary option, except where mandated by law. Based on the input received, Tinder will then evolve its model to ensure “an equitable, inclusive and privacy-friendly approach to ID verification.”
“ID verification is complex and nuanced, which is why we are taking a test-and-learn approach to the rollout,” said Rory Kozoll, Head of Trust & Safety Product at Tinder, in the press release. “We know one of the most valuable things Tinder can do to make members feel safe is to give them more confidence that their matches are authentic and more control over who they interact with.”
Tinder already offers a photo verification feature within the app, where users can verify themselves by taking a selfie. The feature compares the selfie with the other photos the user has uploaded and, if they match, adds a Twitter-like blue check to their profile. The new ID verification model seeks to be yet another badge of assurance.
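Tinder hasn’t published how its selfie matching works under the hood, but face verification systems of this kind typically compare embeddings, the numerical fingerprints produced by a face recognition model. Here is a minimal sketch of that general idea; the embedding source, the threshold and every name below are illustrative assumptions, not Tinder’s actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def selfie_matches_profile(selfie_emb: np.ndarray,
                           profile_embs: list[np.ndarray],
                           threshold: float = 0.8) -> bool:
    """Verify a selfie against a user's profile photos.

    Assumes embeddings come from some off-the-shelf face recognition
    model (hypothetical here); the threshold is illustrative only.
    """
    return any(cosine_similarity(selfie_emb, emb) >= threshold
               for emb in profile_embs)

# Toy usage with random vectors standing in for real embeddings:
rng = np.random.default_rng(0)
selfie = rng.normal(size=128)
profile = [selfie + rng.normal(scale=0.05, size=128)]  # near-duplicate face
print(selfie_matches_profile(selfie, profile))  # True
```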
According to its terms of use, the dating app requires users to “have never been convicted of or pled no contest to a felony, a sex crime, or any crime involving violence, and that you are not required to register as a sex offender with any state, federal or local sex offender registry.” A Tinder spokesperson told TechCrunch that the company will use the ID verification model to further cross-reference data like the sex offender registry in regions where that information is accessible. The company already performs a similar check via a credit card lookup when users sign up for a subscription.
With dating apps like Bumble, Zoosk and Wild already embedding ID verification into their sign-up process, such models might just redefine online dating altogether, with verified users having a better chance of landing a date. However, Tinder has no plans to mandate it anytime soon, given that some users have legitimate reasons to keep their identities private online.
“We know that in many parts of the world and within traditionally marginalised communities, people might have compelling reasons that they can’t or don’t want to share their real-world identity with an online platform,” said Tracey Breeden, VP of Safety and Social Advocacy at Match Group, in the press release. “Creating a truly equitable solution for ID Verification is a challenging, but critical safety project and we are looking to our communities as well as experts to help inform our approach.”
Tinder is undoubtedly the leader in safety innovation when it comes to online dating, from the Swipe feature based on mutual consent to photo verification, Noonlight, and face-to-face video chats. This new feature, however, comes with a typical downside: privacy concerns. Will users have to surrender their sensitive information just to date others? How will Tinder use this data and how can it guarantee that it won’t be sold to third parties—or worse—hacked into and stolen?
Although we’re living in the future with groundbreaking advancements in artificial intelligence, certain biases and loopholes cannot be ignored when it comes to verification models. After all, children as young as 13 were once tricking these systems and setting up accounts on OnlyFans using their older relatives’ documents. If Tinder manages to pull this off, it can guarantee what most online dating apps merely claim: “who you’ll like is who you’ll meet.”
“We hope all our members worldwide will see the benefits of interacting with people who have gone through our ID verification process,” Kozoll added. “We look forward to a day when as many people as possible are verified on Tinder.”
Tinder recently announced that it will soon use an AI algorithm to scan private messages and compare them against texts that have been reported for inappropriate language in the past. If a message looks like it could be inappropriate, the app will show users a prompt that asks them to think twice before hitting send. “Are you sure you want to send?” will read the overeager person’s screen, followed by “Think twice—your match may find this language disrespectful.”
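Tinder hasn’t disclosed the model behind this feature. As a rough illustration of the idea of comparing drafts against previously reported texts, here is a minimal sketch of a classifier trained on a labelled corpus; the training data, threshold and every name in it are hypothetical assumptions, not Tinder’s actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: messages members reported vs. ones they didn't.
reported = ["send me pics now", "you owe me a reply", "ur worthless"]
benign = ["hey, how was your weekend?", "love that hiking photo!",
          "coffee this week?"]

vectoriser = TfidfVectorizer(ngram_range=(1, 2))
X = vectoriser.fit_transform(reported + benign)
y = [1] * len(reported) + [0] * len(benign)
model = LogisticRegression().fit(X, y)

def looks_inappropriate(draft: str, threshold: float = 0.7) -> bool:
    """Score a draft against the reported-message corpus before sending."""
    prob = model.predict_proba(vectoriser.transform([draft]))[0][1]
    return prob >= threshold

if looks_inappropriate("you owe me a reply right now"):
    print("Are you sure you want to send?")
```

In practice, the hard part is the data: what reads as inappropriate varies by language, context and relationship, which is presumably why Tinder pairs the score with a gentle prompt rather than blocking the message outright.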
To bring daters the perfect algorithm, one that can tell the difference between a bad pick-up line and a spine-chilling icebreaker, Tinder has been testing algorithms that scan private messages for inappropriate language since November 2020. In January 2021, it launched a feature that asks recipients of potentially creepy messages “Does this bother you?” When users said yes, the app would walk them through the process of reporting the message.
Sadly, given that Tinder is one of the leading dating apps worldwide, it isn’t surprising that the company would consider experimenting with the moderation of private messages necessary. Outside of the dating industry, many other platforms have introduced similar AI-powered content moderation features, but only for public posts. Applying those same algorithms to direct messages (DMs) offers a promising way to combat harassment that normally flies under the radar, yet platforms like Twitter and Instagram are yet to tackle the many issues private messages present.
On the other hand, allowing apps to play a part in the way users interact with direct messages also raises concerns about user privacy. But of course, Tinder is not the first app to ask its users whether they’re sure they want to send a specific message. In July 2019, Instagram began asking “Are you sure you want to post this?” when its algorithms detected users were about to post an unkind comment.
In May 2020, Twitter began testing a similar feature, which prompted users to think again before posting tweets its algorithms identified as offensive. Last but not least, TikTok began asking users to “reconsider” potentially bullying comments this March. Okay, so Tinder’s monitoring idea is not that groundbreaking. That being said, it makes sense that Tinder would be among the first to focus on users’ private messages for its content moderation algorithms.
As much as dating apps tried to make video call dates a thing during the COVID-19 lockdowns, any dating app enthusiast knows that virtually all interactions between users boil down to sliding into the DMs. And a 2016 survey conducted by Consumers’ Research showed that a great deal of harassment happens behind the curtain of private messages: 39 per cent of US Tinder users (including 57 per cent of female users) said they had experienced harassment on the app.
So far, Tinder has seen encouraging signs in its early experiments with moderating private messages. Its “Does this bother you?” feature has encouraged more people to speak out against weirdos, with the number of reported messages rising by 46 per cent after the prompt debuted in January 2021. That month, Tinder also began beta testing its “Are you sure?” feature for English- and Japanese-language users. After the feature rolled out, Tinder says its algorithms detected a 10 per cent drop in inappropriate messages among those users.
The leading dating app’s approach could become a model for other major platforms like WhatsApp, which has faced calls from some researchers and watchdog groups to begin moderating private messages to stop the spread of misinformation. But WhatsApp and its parent company Facebook haven’t taken action on the matter, in part because of concerns about user privacy.
An AI that monitors private messages should be transparent, voluntary, and shouldn’t leak personally identifying data. If it monitors conversations secretly, without consent, and reports information back to some central authority, then it operates as a spy, explains Quartz. It’s a fine line between an assistant and a spy.
Tinder says its message scanner only runs on users’ devices. The company collects anonymous data about the words and phrases that commonly appear in reported messages, and stores a list of those sensitive words on every user’s phone. If a user attempts to send a message that contains one of those words, their phone will spot it and show the “Are you sure?” prompt, but no data about the incident gets sent back to Tinder’s servers. “No human other than the recipient will ever see the message (unless the person decides to send it anyway and the recipient reports the message to Tinder),” continues Quartz.
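In code, the privacy-preserving design Quartz describes is straightforward: the server ships down a word list, the match happens locally, and nothing about the draft leaves the phone. Here is a minimal sketch under those assumptions; the terms, function names and UI hook are placeholders, not Tinder’s actual client code.

```python
import re

# Synced periodically from the server; the real list is built from
# anonymised reported-message data. These entries are placeholders.
SENSITIVE_TERMS = {"placeholder_slur", "placeholder_threat"}

def should_show_prompt(draft: str) -> bool:
    """Check a draft entirely on-device; the text never leaves the phone."""
    tokens = set(re.findall(r"[\w']+", draft.lower()))
    return not tokens.isdisjoint(SENSITIVE_TERMS)

def on_send_tapped(draft: str) -> None:
    if should_show_prompt(draft):
        # Hypothetical UI hook for "Are you sure you want to send?"
        # Crucially, no report is sent back to the server either way.
        print("Are you sure you want to send?")
    else:
        print("Message sent.")

on_send_tapped("hey placeholder_threat")   # triggers the prompt
on_send_tapped("hey, how are you?")        # sends normally
```

Keeping both the word list and the check on the device is what lets Tinder claim that no human other than the recipient ever sees the message unless it gets reported.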
For this AI to work ethically, it’s important that Tinder be transparent with its users about the fact that it uses algorithms to scan their private messages, and that it offer an opt-out for users who don’t feel comfortable being monitored. As of now, the dating app doesn’t offer an opt-out, nor does it warn its users about the moderation algorithms (although the company points out that users consent to the AI moderation by agreeing to the app’s terms of service).
Long story short, fight for your data privacy rights, but also, don’t be a creep.