An investigation by the women’s magazine Glamour into the spread of deepfake pornography revealed that the UK’s most popular search engines are leading users towards nefarious deepfake tutorials and explicit AI software.
The report found that Google, Microsoft’s Bing and Yahoo Search—the most-used search engines in the UK according to Statista—are all currently displaying results for tutorials explaining how to create deepfake porn and use face-swapping technology. This is deeply worrying given how widespread and harmful deepfake artificial intelligence (AI) has become.
Other results provided access to explicit AI software and ‘undress’ or ‘nudify’ apps, showing that despite government and industry efforts to combat the spread of harmful pornography, the content remains searchable and shareable.
Of course, not everyone has malicious intent, but the accessibility of these tools makes it possible for anyone to mock up explicit images of a celebrity, an ex-girlfriend, or anyone else using simple imagery of their face—with or without their consent.
In recent months, SCREENSHOT has reported extensively on the proliferation of artificially created pornography and how it has affected notable celebrities such as Taylor Swift, Jenna Ortega, Sabrina Carpenter, and Sydney Sweeney. These cases, however, are only the most visible examples of a practice that is limiting the freedoms and heightening the anxiety of women across the globe, who are vulnerable to being exploited through these methods.
Research by McAfee shows that in the last few months, 27 per cent of internet users have had their likeness used to create sexually explicit content and that people are increasingly concerned about their images being used without their consent.
Similarly, reports from the identity theft protection company Home Security Heroes revealed a 550 per cent surge in deepfake videos online in 2023. And with accounts of adolescents being subjected to this practice on the rise, protecting its citizens’ safety online is not just a necessity for the UK government but a duty.
On top of this, 91 per cent of Glamour magazine’s readership of 12 million agrees that deepfake pornography poses a direct threat to women’s safety.
As deepfake abuse has increasingly broken into the mainstream, digital platforms and the UK government have introduced policies and practices to curb abusive content. In April 2024, for example, the UK government announced a crackdown on people who create sexually explicit deepfakes, to be implemented through amendments to the Criminal Justice Bill, including new offences for creating this kind of content. However, the amendment was criticised for hinging on the creator’s intent to cause harm rather than on the subject’s lack of consent.
Companies such as Meta, Pornhub, OnlyFans and TikTok have all joined initiatives with charities and organisations that focus on women’s online safety to respond to the issue. Likewise, Reddit claims to have hired more staff to remove harmful content that violates its policies.
Yet it is becoming apparent that these measures are not enough to tackle the rise in deepfakes, since they merely moderate this content rather than ban it outright.
“It’s too little, too late,” Professor Clare McGlynn, a leading expert on tech-facilitated abuse, told the women’s magazine. “There is an entire ecosystem around the creation and distribution of sexually explicit deepfakes that has been allowed to proliferate.”