
RentAFriend has surged during lockdown: should we put a price tag on friendship?

What’s the price of a pint in London? £6. A fancy meal? £40. Good company? Priceless… right? Well, think again. The age-old saying goes that “money can’t buy happiness,” which is true to some extent. However, with the assistance of the website RentAFriend, it can buy you friends. As an extroverted person who’s been largely stripped of social contact for the last year, I’ll admit that, at face value, having a few more contacts in my phonebook would definitely make me happier.

We humans have a bad habit of putting a price on pretty much anything (at least in Western capitalist society). However, is putting a price on friendship a step over the line? At a glance, it all seems like a pretty innocent concept—a solution to a problem. Moved to a new country and don’t have any friends? Don’t worry, RentAFriend has got your back. But dig a little deeper and the unethical nature of the service begins to rear its head.

No, it’s not a scam. RentAFriend is a legitimate business with an international membership in the hundreds of thousands. I’ll give the founder Scott Rosenbaum the benefit of the doubt, as I’m sure many people on the site have found meaningful, long-lasting relationships. With over 620,000 members, the law of large numbers makes that all but certain. But are the founders solving the problem, or getting rich by exploiting the loneliness that is deeply woven into society? Now, that’s an interesting ethical conundrum to explore.

How RentAFriend works

RentAFriend is essentially an online database full of potential contacts who could be your next BFF—on the condition you pay a membership fee. The site straight up looks like something from the mid-2000s and reminds me of the old-school dating websites only Boomers found useful, before Tinder and Bumble shook things up.

On top of the initial membership fee, people searching for new friends also have to fork out extra to ‘hire’ the friend they’ve chosen. According to the official RentAFriend website, these rates start at “just $10 an hour,” but a quick Google search shows that some friends charge up to $50 an hour.

Alright, I respect the hustle as much as the next guy, but if you really think your friendship is worth $50 an hour, get off your high horse and take a look at yourself: you’re either incredibly deluded or a self-absorbed idiot.

Capitalisation on mental health and loneliness

Jokes aside, this service is at best distasteful and at worst exploitative. Levels of loneliness in Great Britain have increased since spring 2020. Between 3 April and 3 May 2020, 2.6 million British adults said they felt lonely either “often” or “always.” During the winter months, between October 2020 and February 2021, that number increased to 3.7 million, which is around 7.2 per cent of the British adult population.

According to Mind, “feeling lonely isn’t in itself a mental health problem, but the two are strongly linked. Feeling lonely can also have a negative impact on your mental health, especially if these feelings have lasted a long time.” As loneliness, and the mental health problems linked to it, has increased, so has the user base of RentAFriend, which has grown by 20 per cent since the pandemic began.

Putting a paywall on friendship

It would be fine if the service was free, but it’s not; in fact, it’s pretty expensive. RentAFriend is the embodiment of a friendship paywall, leaving those lonely yet wealthy enough to afford it out of pocket, and those who are struggling financially out of the loop. This is particularly problematic when you look at the statistics. According to the ONS, areas with a higher concentration of younger people and areas with higher rates of unemployment tended to have higher rates of loneliness between October 2020 and February 2021. Consequently, those who would benefit from the service the most are the ones least likely to be able to afford it.

Using the word dystopian almost comes naturally when writing about these new kinds of services, but in this instance it really is fitting. I’ll try my best not to sound like an out-of-touch, anti-tech Boomer, but the ‘rent a friend’ model personifies our disconnect from meaningful relationships, and how greedy entrepreneurs are willing to exploit that disconnect. If you are feeling lonely right now, know you are not alone. As tempting as it might be, I’d advise against using RentAFriend; instead, invest your time, and money, in hobbies or community activities. You’re likely to find friends with similar interests who are invested in you for you, not just your wallet.

Tinder is using AI to monitor DMs and cool down the weirdos

Tinder recently announced that it will soon use an AI algorithm to scan private messages and compare them against texts that have been reported for inappropriate language in the past. If a message looks like it could be inappropriate, the app will show users a prompt that asks them to think twice before hitting send. “Are you sure you want to send?” the overeager sender’s screen will read, followed by “Think twice—your match may find this language disrespectful.”

In order to bring daters the perfect algorithm, one that can tell the difference between a bad pick-up line and a spine-chilling icebreaker, Tinder has been testing algorithms that scan private messages for inappropriate language since November 2020. In January 2021, it launched a feature that asks recipients of potentially creepy messages “Does this bother you?” When users said yes, the app would then walk them through the process of reporting the message.


As one of the leading dating apps worldwide, Tinder sadly has every reason to think that experimenting with the moderation of private messages is necessary. Outside of the dating industry, many other platforms have introduced similar AI-powered content moderation features, but only for public posts. Although applying those same algorithms to direct messages (DMs) offers a promising way to combat harassment that normally flies under the radar, platforms like Twitter and Instagram are yet to tackle the many issues private messages represent.

On the other hand, allowing apps to play a part in the way users interact with direct messages also raises concerns about user privacy. But of course, Tinder is not the first app to ask its users whether they’re sure they want to send a specific message. In July 2019, Instagram began asking “Are you sure you want to post this?” when its algorithms detected users were about to post an unkind comment.

In May 2020, Twitter began testing a similar feature, which prompted users to think again before posting tweets its algorithms identified as offensive. Last but not least, TikTok began asking users to “reconsider” potentially bullying comments this March. Okay, so Tinder’s monitoring idea is not that groundbreaking. That being said, it makes sense that Tinder would be among the first to focus on users’ private messages for its content moderation algorithms.

As much as dating apps tried to make video call dates a thing during the COVID-19 lockdowns, any dating app enthusiast knows that virtually all interactions between users boil down to sliding into the DMs. And a 2016 survey conducted by Consumers’ Research showed that a great deal of harassment happens behind the curtain of private messages: 39 per cent of US Tinder users (including 57 per cent of female users) said they experienced harassment on the app.

So far, Tinder has seen encouraging signs in its early experiments with moderating private messages. Its “Does this bother you?” feature has encouraged more people to speak out against weirdos, with the number of reported messages rising by 46 per cent after the prompt debuted in January 2021. That month, Tinder also began beta testing its “Are you sure?” feature for English- and Japanese-language users. After the feature rolled out, Tinder says its algorithms detected a 10 per cent drop in inappropriate messages among those users.

The leading dating app’s approach could become a model for other major platforms like WhatsApp, which has faced calls from some researchers and watchdog groups to begin moderating private messages to stop the spread of misinformation. But WhatsApp and its parent company Facebook haven’t taken action on the matter, in part because of concerns about user privacy.

An AI that monitors private messages should be transparent, voluntary, and not leak personally identifying data; if it monitors conversations secretly, involuntarily, and reports information back to some central authority, then it is, as Quartz explains, defined as a spy. It’s a fine line between an assistant and a spy.

Tinder says its message scanner only runs on users’ devices. The company collects anonymous data about the words and phrases that commonly appear in reported messages, and stores a list of those sensitive words on every user’s phone. If a user attempts to send a message that contains one of those words, their phone will spot it and show the “Are you sure?” prompt, but no data about the incident gets sent back to Tinder’s servers. “No human other than the recipient will ever see the message (unless the person decides to send it anyway and the recipient reports the message to Tinder),” continues Quartz.
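That description implies a simple client-side mechanism: a locally stored word list, a check before send, and no server round trip. Here is a minimal sketch of what such a flow could look like, with the caveat that everything in it (the function names, the word list, the delivery stub) is hypothetical and not Tinder’s actual implementation:

```python
# Hypothetical sketch of on-device message screening, based on Quartz's
# description. None of this is Tinder's real code; the names and the
# word list are invented for illustration.

SENSITIVE_TERMS = {"flagged_word_1", "flagged_word_2"}  # stand-in for the list
# Tinder reportedly derives from anonymised reported-message data

def should_prompt(message: str) -> bool:
    """Check an outgoing message against the local word list.

    Runs entirely on the sender's device; nothing leaves the phone.
    """
    words = {word.strip(".,!?").lower() for word in message.split()}
    return not words.isdisjoint(SENSITIVE_TERMS)

def send_message(message: str, confirmed: bool = False) -> None:
    """Show the 'Are you sure?' prompt for flagged messages; otherwise deliver."""
    if should_prompt(message) and not confirmed:
        # The incident is handled locally; nothing is reported to a server.
        print("Are you sure you want to send?")
        return  # the sender can edit, cancel, or resend with confirmed=True
    deliver_to_match(message)  # hypothetical stand-in for the real send path

def deliver_to_match(message: str) -> None:
    print(f"(delivered) {message}")
```

The detail worth noticing is that both the flag and the prompt live and die on the phone: under this design, Tinder’s servers only ever learn about a message if the recipient reports it.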

For this AI to work ethically, it’s important that Tinder be transparent with its users about the fact that it uses algorithms to scan their private messages, and that it offer an opt-out for those who don’t feel comfortable being monitored. As of now, the dating app offers no opt-out, nor does it warn its users about the moderation algorithms (although the company points out that users consent to the AI moderation by agreeing to the app’s terms of service).

Long story short, fight for your data privacy rights, but also, don’t be a creep.