Fans of Elordi took to social media to compare the video with genuine images of the actor, pointing out a critical detail: the real Jacob Elordi has a distinctive birthmark that is absent from the fake video. One user stated: “The way this video is fake ‘cause Jacob Elordi actually has a birthmark right under his left chest. Y’all gays should know better 😂”
Another added: “Hi, there’s a porn-deep fake video (where they put the face of one person on another) of Jacob Elordi circulating around, I ask that if you see any of the videos or montages, REPORT IT AS HARASSMENT, this is sick and can reach other proportions, if u see it around, report it.”
This incident is far from isolated. The misuse of deepfake technology has become increasingly common, with numerous celebrities falling victim to fabricated videos. Jenna Ortega, Sabrina Carpenter, and Bobbi Althoff are just a few notable names who have been similarly targeted. These incidents highlight the growing challenge of protecting personal and professional reputations in a digital age where AI technology can convincingly replicate individuals.
The rise of deepfake technology poses significant ethical and legal questions. While the technology itself has the potential for beneficial uses, its misuse for creating false and harmful content is a growing concern.
Efforts to combat deepfake proliferation include developing more sophisticated detection tools and enacting stricter regulations. Tech companies and governments are increasingly recognising the need to address this issue to protect individuals from malicious content. Additionally, raising public awareness about the existence and capabilities of deepfakes is crucial. By educating people on how to spot inconsistencies and question suspicious content, the impact of such deceptive videos can be mitigated.
In the case of Elordi, the swift action of fans in identifying the video as a deepfake demonstrates the power of collective vigilance. It underscores the importance of critical thinking and community support in the fight against the misuse of AI technology. As deepfake incidents continue to rise, these elements will be vital in safeguarding truth and maintaining the integrity of digital spaces.
In the UK, the Ministry of Justice (MoJ) has announced that the creation of sexually explicit deepfake images will soon become a criminal offence under new legislation.