The dangers of deepfake and AI technology seem to be constantly on the rise. From the gross and harmful exploitation of people's likenesses for sexually explicit imagery to artists facing copyright nightmares over stolen artwork generated through machine learning, AI is constantly spawning new risks and problems for society. And the latest viral news that a TikTok advertisement is using artificial recreations of celebrities to promote products is just the tip of the iceberg.
In the latest AI catastrophe to hit the internet, a viral advert sees controversial podcast host Joe Rogan trying to sell you alleged libido-boosting supplements. The clip uses actual footage from an episode of The Joe Rogan Experience which has been edited using deepfake technology to make it look like Rogan is praising the benefits of "that Alpha Grind product." An AI voice dubbing tool further creates the auditory illusion that the American commentator said these words. The result is chillingly convincing.
Rogan pulls in an average of 11 million listeners per episode, so a product endorsement from him is likely to boost your exposure and net you a nice rise in sales. Until now, securing that kind of publicity meant forking out gigantic fees. With the capabilities of AI, that's no longer an obstacle.
While the dubbed videos aren't completely perfect, it's easy to see how an absent-minded viewer might fall for this scam, believing that the product being sold is endorsed by a celebrity they trust, or one who might have authority on the goods in question.
As users began flooding Twitter with their surprise and shock at the false advert, one user perfectly captured some of the serious issues surrounding the viral moment, reminding us how dangerous deepfake technology has been for women for years. It's only when a hyper-masculine man such as Rogan is used or manipulated that other men take note.
According to Mashable, the original posting of the ad has since been removed from TikTok, and the tweet that housed the viral clip has been disabled following a complaint from "the copyright owner."
The clips were removed from the video-sharing platform because they violated its "harmful misinformation policy," and the account that initially posted the ads was banned. So, "what's the issue?" one Twitter user asked. Andrew D. Huberman, professor of neurobiology at Stanford and the other guest in the original clip, replied: "They created a false conversation. We never had. We were talking about something very different."
These adverts highlight the very real danger that deepfakes like this present. There is an inherent ethical problem with content that violates the boundaries of consent, image rights and consumer trust.
Misinformation and falsehoods are easily created too, thanks to machine learning technology, and the possibilities are, well, terrifying. First it's fake celebrity endorsements; next it's political speeches and deepfake cover-ups.
While the fake endorsement may seem tame, we're on a very scary path, one where we need better tools to verify the authenticity of videos like this, especially as they continue to get more and more realistic. Don't even get me started on the threat that AI poses to employment opportunities for artists and creators.
I'm glad that the social media platforms housing the content acted fast and doled out significant punishments to the offenders. It's strange knowing that it's up to Silicon Valley to ensure that stuff like this doesn't get normalised. I don't want to live in a world where I can't verify that the person I'm listening to is real. Do you?