What does the viral deepfake advert of Joe Rogan say about the future of advertising? – Screen Shot



The dangers of deepfake and AI technology seem to be constantly on the rise. From the gross and harmful exploitation of people’s likenesses for sexually explicit imagery to artists facing copyright nightmares over machine-generated images trained on their stolen artwork, AI is constantly spawning new risks and problems for society. And the latest viral news that a TikTok advertisement is using artificial recreations of celebrities to promote products is just the tip of the iceberg.

In the latest AI catastrophe to hit the internet, a viral advert sees controversial podcast host Joe Rogan trying to sell you an alleged libido-boosting supplement. The clip uses actual footage from an episode of The Joe Rogan Experience, edited with deepfake technology to make it look like Rogan is praising the benefits of “that Alpha Grind product.” An AI voice dubbing tool completes the auditory illusion that the American commentator actually said these words. The result is chillingly convincing.

Rogan pulls in an average of 11 million listeners per episode. So, a product endorsement from him is likely to boost your exposure and net you a nice rise in sales. Until now, though, securing that kind of publicity meant forking out gigantic fees to the celebrity in question. With AI, that barrier disappears.

While the dubbed videos aren’t completely perfect, it’s easy to see how an absent-minded viewer might fall for this scam—believing that the product being sold is endorsed by a celebrity they trust, or who might have authority on the goods in question.

As users began flooding Twitter with their surprise and shock at the false advert, one user perfectly captured some of the serious issues surrounding the viral moment, reminding us how dangerous deepfake technology has been for women for years. It’s only when a hyper-masculine man such as Rogan is used or manipulated that other men take note.


According to Mashable, the original posting of the ad has since been removed from TikTok and the originally circulating tweet that housed the viral clip has since been disabled due to a complaint from “the copyright owner.”

What are the dangers of AI?

The clips were removed from the video-sharing platform because they violated its “harmful misinformation policy,” and the account that initially posted the ads was banned. So, “what’s the issue?” one Twitter user asked. Andrew D. Huberman, professor of neurobiology at Stanford and the other guest in the original clip, replied: “They created a false conversation we never had. We were talking about something very different.”

What these adverts highlight is the very real danger that deepfakes like this present. There is an inherent ethical problem with content that breaches the boundaries of consent, image rights and consumer trust.

Misinformation and falsehoods are easily fabricated too, thanks to machine learning technology, and the possibilities are, well, terrifying. First it’s fake celebrity endorsements; next it’s political speeches and deepfake cover-ups.

While the fake endorsement may seem tame, we’re on a very scary path, one where we need better ways to verify the authenticity of videos like this, especially as they continue to get more and more realistic. Don’t even get me started on the threat that AI poses to employment opportunities for artists and creators.

I’m glad that the social media platforms housing the content acted fast and doled out significant punishments to the offenders. It’s strange knowing that it’s up to Silicon Valley to ensure that stuff like this doesn’t get normalised. I don’t want to live in a world where I can’t verify that the person I’m listening to is real. Do you?


Thinking about deepfake porn and the case of streamer QTCinderella’s exploitation

Over the last couple of months, artificial intelligence (AI) and its seemingly endless possibilities have exploded into public consciousness, with innovations like chatbot ChatGPT and all-in-one image editing tool Lensa dominating the internet and forcing us to rethink the future of technology altogether.

AI—that is, systems designed to imitate the cognitive capabilities of a human—has been around for years (hey, Siri!), but ever-improving synthetic media technologies are now throwing up all manner of ethical issues, not least because their developers often help themselves to whatever data is freely available to them.

If a stranger asked for a copy of your passport, would you hand it over without asking who they were, and why they needed it? What about your bank details? Or your fingerprint?

Chances are, the answer is no. But when it comes to giving away our personal data online, the boundaries have become blurred. We’ve all clicked ‘accept cookies’ in a fit of impatience (or because it sounded kind of cute, like a snack from grandma).

And then there’s our faces and bodies. Most of us don’t think twice before posting pictures of ourselves online. We do this knowing anyone can access public Instagram and TikTok accounts, and that even private profiles can get hacked. Why? Well, because everyone’s doing it.

Tons of people now make an excellent living out of sharing content. But what if our information is used against our will, or out of context? What if we’ve lost online autonomy over our own bodies?

Twitch stars QTCinderella and Pokimane just found out the answer to this frightening question. The two famous streamers make money through video content in which they’re often seen decorating cakes or playing games. What they don’t do is make sex tapes. But, thanks to sophisticated AI technology, there are dark corners of the internet where it now looks like they do. And anyone can pay to watch.

QTCinderella’s likeness was overlaid onto a pre-existing adult video without her permission. What’s worse, fellow high-profile streamer (and so-called friend) Brandon Ewing, aka Atrioc, was caught downloading it, sending views skyrocketing. Fans pointed out the transgression after he uploaded a video of himself in which his computer screen showed him downloading the deepfake. He issued a tearful apology, but the damage was already done.

QTCinderella live streamed her response to the creators responsible, and to Atrioc himself. “I’m so exhausted and I think you guys need to know what pain looks like, because this is it. This is what it looks like to feel violated. This is what it feels like to be taken advantage of, this is what it looks like to see yourself naked against your will, being spread all over the internet. This is what it looks like,” she stated.

She continued: “Fuck the fucking internet. Fuck Atrioc for showing it to thousands of people. Fuck the people DMing me pictures of myself from that website. Fuck you all.”

Technology has moved fast since Jordan Peele first made waves with a deepfake video of President Obama back in 2018. The filmmaker teamed up with BuzzFeed to make it look like the former US president was speaking nonsense, using nothing but the simple-to-use FakeApp. That spoof alone proved how easy it was to spread fake images and videos around the globe.

And now, as technology advances, so do the opportunities for the sexual exploitation of women through deepnudes. In October 2020, visual threat intelligence company Sensity AI published a report stating that over 680,000 women had had fake images made of them, some from as little as a single photograph.

The effects of such violations can be far-reaching, causing mental health issues like depression and disordered thinking. QTCinderella shared via Twitter that the events had negatively affected her self-perception.

The streamer vowed to sue the people who made the website but, because AI technology is still relatively new, that won’t be easy. Federal revenge porn law in the US does not explicitly address deepfakes, so it may be impossible, or at the very least very complicated, for her to obtain justice.

What’s worse, the videos have invited internet trolls to weigh in on the events, many jibing that QTCinderella has nothing to be upset about and claiming that such abuse “comes with the territory” of being famous. Spoiler alert: it doesn’t.

SCREENSHOT spoke with psychologist Zoe Mallett on internet security as well as how emotionally exhausting and draining online abuse can be. The expert explained: “Firstly, it’s okay to feel heightened emotions if you are being abused online, it’s the same if you’re being abused in real life. It hurts, we feel rejected, we feel sad, and it can make us feel isolated.”

Mallett continued: “Often, we can feel ashamed or embarrassed so we don’t talk to our friends or family about it. Especially if we feel that telling our family may run the risk of them policing us with where we spend our time online. We also have to take into consideration that online bullying and trolling is still quite new territory, and it’s hard to know who is behind the comments, and their purpose.”

Online abuse is gaining greater traction within academic research and, as Mallett explained, theorists have found that “trolls possess dark personality traits, including psychopathy, narcissism and sadism. This can help us better understand that, statistically, this abuse is coming from those experiencing very serious disorders. We can look at this to help us try and take away feelings of the comments being a personal attack.”

Of course, this reassurance feels lacklustre when you realise that individuals are now charging minimal amounts of money to make deepfakes. For the price of lunch, anyone can request an embarrassing or explicit fake video be made of an ex-partner, family member, supposed friend or classmate. It’s an incredibly unsettling reality.

It’s impossible to control the behaviour of anyone but ourselves. But there are things we can do to safeguard our personal information. For starters, this can consist of setting our personal accounts to ‘private’, making sure our passwords are secure, and blocking any suspicious accounts we don’t recognise.

Put aside some time to conduct an online audit. Have a think about how you look to the outside world, and whether you’re okay with that. Delete or archive any images or captions you wouldn’t feel comfortable being shared.

Mallett also recommends speaking to trusted individuals about your online activity and taking time away from your screen to consider other perspectives and distance yourself from the intensity of social media and internet culture.

Think twice before jumping on the latest social media trend, too. Ask yourself why the website or app needs your details, and if you’re comfortable sharing them. Would you want your information or photographs to be stored (and possibly sold on) by people you don’t know?

Something to bear in mind next time you share a photo with 3,000 people you’ve never met, or upload ten selfies to a brand new app in exchange for a sexy AI avatar—the results aren’t always as glamorous as you may think. As the expert noted, “You can get the same dopamine hit that the online world gives you with human touch, human connection and getting out into nature. Keep reminding yourself that there is a world outside of the online one.”

Props to QTCinderella for speaking out on her experience, drawing attention to this complicated issue, and reminding us about the sometimes dangerous consequences of sharing personal pictures online.