
Software engineers scam Reddit users with AI-generated nudes in new social experiment

Artificial intelligence (AI) has become intrinsically linked to the ways in which we now navigate the modern world. Whether it’s diagnosing medical conditions or stealing our jobs, technology has officially managed to seep its way into every facet of our lives. Most recently, AI was utilised by a pair of software engineers to conduct a highly questionable, if not intriguing, social experiment.

According to news outlet Firstpost, two individuals took it upon themselves to trick a cohort of unassuming Redditors into paying for nudes of a “beautiful woman.” The only catch? The photos they received were not of an actual woman; rather, they were of a computer-generated woman named Claudia.

Reddit is a platform that can attract some of the seediest netizens online, so it’s no wonder the engineers were able to find a number of people more than willing to splash the cash in the hope of receiving nude pictures.

As reported by Firstpost, the two men behind the scheme made up to $100 selling sensitive images of Claudia. One paying customer even tried to invite the AI-generated woman on a date, claiming that he earned six figures and could show her a good time…

The pair used the text-to-image AI model Stable Diffusion to create Claudia, instructing the software to construct a woman taking a selfie “without makeup, with black hair, shoulder length hair, plain background, straight hair and hair fringe.”
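
For readers curious about the mechanics, below is a minimal sketch of how a text-to-image prompt along those lines can be run through Stable Diffusion using Hugging Face’s diffusers library. The model checkpoint, generation parameters and prompt wording here are illustrative assumptions, not the engineers’ actual setup.

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (assumed model ID,
# not necessarily the version the engineers used)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA-capable GPU

# A descriptive prompt in the style the article reports
prompt = (
    "selfie of a woman without makeup, black shoulder-length straight hair "
    "with a fringe, plain background"
)

# Generate one image; step count and guidance scale are common defaults
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("selfie.png")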

It should also be noted that while it’s unknown if the engineers specified an age, the AI did create a woman who looked remarkably young—something that makes the entire experiment feel that bit more unsavoury.

Are AI-generated nudes a good thing or a bad thing?

In many ways, this event accurately reflects the lengths to which some individuals will go in order to obtain nude photos of women online. And while sex workers are of course fully entitled to sell images of themselves, this particular method was unregulated and therefore raises a number of ethical issues, regardless of whether the woman in question is real or computer-generated.

For one thing, Stable Diffusion is freely available to the public, and while the company behind it states that users are forbidden from creating “obscene, vulgar, lascivious, offensive and pornographic” pictures, it’s impossible to keep track of everyone who has access to the software. It’s upsetting, yet all but certain, that predators have utilised the technology to create disturbing images, and most likely shared them with other criminals online.

There’s also an argument to be made that the use of AI-generated nudes, and indeed the future idea of completely AI-generated porn, takes direct business and profit away from sex workers.

However, we know that young women can be subject to extreme exploitation online, whether it’s having your face deepfaked onto a nude image or falling victim to revenge porn. This raises the question: could fully AI-generated nudes lessen the need for individuals to take advantage of real human beings?

Now, this is an incredibly multi-layered story, and one that’s created a number of divisive debates online. Global video content publisher Brut recently shared a clip on its Instagram discussing the experiment, and users definitely had a lot to say in the comments section.

A post shared by Brut AI (@brut.ai)

One netizen wrote: “I don’t see the problem with AI-generated nudes. It beats the continued exploitation and trafficking of women,” while another noted: “If you think this fake ass looking person is real you deserve to get scammed I’m so sorry.”

It’s a complicated topic with a number of matters to address. The development of technology has always gone hand in hand with global ethical dilemmas, and this is only one clear example of it playing out in real life. Whether or not you agree with the engineers’ experiment, it’s definitely spurred on a conversation that needs to be had.


Thinking about deepfake porn and the case of streamer QTCinderella’s exploitation

Over the last couple of months, artificial intelligence (AI) and its seemingly endless possibilities have exploded into public consciousness, with innovations like chatbot ChatGPT and all-in-one image editing tool Lensa dominating the internet and forcing us to rethink the future of technology altogether.

AI, aka technology that imitates the cognitive capabilities of a human, has been around for years (hey, Siri!), but ever-improving synthetic media technologies are now throwing up all manner of ethical issues, not least because their developers often make use of whatever data is freely available to them.

If a stranger asked for a copy of your passport, would you hand it over without asking who they were, and why they needed it? What about your bank details? Or your fingerprint?

Chances are, the answer is no. But when it comes to giving away our personal data online, the boundaries have become blurred. We’ve all clicked ‘accept cookies’ in a fit of impatience (or because it sounded kind of cute, like a snack from grandma).

And then there’s our faces and bodies. Most of us don’t think twice before posting pictures of ourselves online. We do this knowing anyone can access public Instagram and TikTok accounts, and that even private profiles can get hacked. Why? Well, because everyone’s doing it.

Tons of people now make an excellent living out of sharing content. But what if our information is used against our will, or out of context? What if we’ve lost online autonomy over our own bodies?

Twitch stars QTCinderella and Pokimane just found out the answer to this frightening question. The two famous streamers make money through video content in which they’re often seen decorating cakes or playing games. What they don’t do is make sex tapes. But, thanks to sophisticated AI technology, there are dark corners of the internet where it now looks like they do. And anyone can pay to watch.

QTCinderella’s likeness was overlaid onto a pre-existing adult video without her permission. What’s worse, her fellow high-profile streamer (and so-called friend) Brandon Ewing, aka Atrioc, was caught downloading it, and views skyrocketed. Fans pointed out the transgression after he uploaded a video of himself in which his computer screen showed him downloading the deepfake. He issued a tearful apology, but the damage was already done.

QTCinderella live streamed her response to the creators responsible, and to Atrioc himself. “I’m so exhausted and I think you guys need to know what pain looks like, because this is it. This is what it looks like to feel violated. This is what it feels like to be taken advantage of, this is what it looks like to see yourself naked against your will, being spread all over the internet. This is what it looks like,” she stated.

She continued: “Fuck the fucking internet. Fuck Atrioc for showing it to thousands of people. Fuck the people DMing me pictures of myself from that website. Fuck you all.”

Technology has moved fast since Jordan Peele first made waves with a deepfake video of former US President Barack Obama back in 2018. The filmmaker teamed up with BuzzFeed to make it look like Obama was saying things he never said, using nothing but the simple-to-use FakeApp. This spoof alone proved how easy it is to spread fake images and videos around the globe.

And now, as technology advances, so do the opportunities for the sexual exploitation of women through deepnudes. In October 2020, visual threat intelligence company Sensity AI published a report stating that fake images had been created of over 680,000 women, generated from as little as a single photograph.

The effects of such violations can be far-reaching, causing mental health issues like depression and disordered thinking. QTCinderella shared via Twitter that the events had negatively affected her self-perception.

The streamer vowed to sue the people who made the website, but because AI technology is still relatively new, it won’t be easy. Federal revenge porn law in the US does not technically address deepfakes, so gaining justice may prove impossible, or at the very least extremely complicated.

What’s worse is that the videos have opened the door to internet trolls weighing in on the events, many jibing that QTCinderella has nothing to be upset about and claiming that abuse “comes with the territory” of being famous. Spoiler alert: it doesn’t.

SCREENSHOT spoke with psychologist Zoe Mallett about internet security and how emotionally draining online abuse can be. The expert explained: “Firstly, it’s okay to feel heightened emotions if you are being abused online, it’s the same if you’re being abused in real life. It hurts, we feel rejected, we feel sad, and it can make us feel isolated.”

Mallett continued: “Often, we can feel ashamed or embarrassed so we don’t talk to our friends or family about it. Especially if we feel that telling our family may run the risk of them policing us with where we spend our time online. We also have to take into consideration that online bullying and trolling is still quite new territory, and it’s hard to know who is behind the comments, and their purpose.”

Online abuse is gaining greater traction within academic research and, as Mallett explained, theorists have found that “trolls possess dark personality traits, including psychopathy, narcissism and sadism. This can help us better understand that, statistically, this abuse is coming from those experiencing very serious disorders. We can look at this to help us try and take away feelings of the comments being a personal attack.”

Of course, this reassurance feels lacklustre when you realise that individuals are now charging minimal amounts of money to make deepfakes. For the price of lunch, anyone can request an embarrassing or explicit fake video be made of an ex-partner, family member, supposed friend or classmate. It’s an incredibly unsettling reality.

It’s impossible to control the behaviour of anyone but ourselves, but there are things we can do to safeguard our personal information. For starters, we can set our personal accounts to ‘private’, make sure our passwords are secure, and block any suspicious accounts we don’t recognise.

Put aside some time to conduct an online audit. Have a think about how you look to the outside world, and whether you’re okay with that. Delete or archive any images or captions you wouldn’t feel comfortable being shared.

Mallett also recommends speaking to trusted individuals about your online activity and taking time away from your screen to consider other perspectives and distance yourself from the intensity of social media and internet culture.

Think twice before jumping on the latest social media trend, too. Ask yourself why the website or app needs your details, and if you’re comfortable sharing them. Would you want your information or photographs to be stored (and possibly sold on) by people you don’t know?

Something to bear in mind next time you share a photo with 3,000 people you’ve never met, or upload ten selfies to a brand new app in exchange for a sexy AI avatar—the results aren’t always as glamorous as you may think. As the expert noted, “You can get the same dopamine hit that the online world gives you with human touch, human connection and getting out into nature. Keep reminding yourself that there is a world outside of the online one.”

Props to QTCinderella for speaking out on her experience, drawing attention to this complicated issue, and reminding us about the sometimes dangerous consequences of sharing personal pictures online.