
AI

Deepfakes can effectively fool facial recognition services, study suggests

By Jack Ramage

Aug 4, 2021


Deepfakes are slipping into the mainstream. If you’re still unaware of what deepfakes actually are, let me fill you in: deepfakes are videos generated by Artificial Intelligence (AI) that take a person in an existing video and replace their likeness with someone else’s. If you haven’t stumbled across deepfake technology on your daily social media binge yet, chances are that you will soon. Only recently, deepfake footage of Tom Cruise—posted to an unverified TikTok account—racked up a whopping 11 million views on the app. According to Deeptrace, an AI tech startup, the number of deepfakes on the internet increased by 330 per cent between October 2019 and June 2020, reaching over 50,000 at their peak. Creepy, right?

Now, let’s be clear, not all deepfakes are used with malicious intent. In fact, they’ve been used quite humorously in the past—cue the Nick Cage deepfake compilation—but as with all things, if the technology falls into the wrong hands, it can have a detrimental impact on the lives of innocent people. Deepfakes have already been used to generate pornographic material of actors, leading to a spike in non-consensual deepfake porn in recent years. Cybercriminals have even used deepfake software to impersonate the CEO of a UK-based energy firm, demanding a fraudulent transfer of $243,000.

And if that wasn’t enough, scientists now warn that deepfake technology has hit a new, somewhat terrifying, milestone: effectively fooling commercial facial recognition services.

The deeply-fake deepfake effect

A paper published by researchers at Sungkyunkwan University in Suwon, South Korea, showed that both Amazon’s and Microsoft’s facial recognition Application Programming Interfaces (APIs) can be fooled by commonly used deepfake-generating methods. The researchers used AI models trained on five different datasets—three publicly available and two that they created themselves—containing the faces of Hollywood movie stars, singers, athletes, and politicians. In total, they created 8,119 deepfakes from the datasets. From each deepfake video, they extracted multiple faceshots and submitted them to the APIs in question. Microsoft’s Azure Cognitive Services was fooled 78 per cent of the time, whereas Amazon’s API was fooled 68 per cent of the time.
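The evaluation loop the researchers describe (extract face shots from each deepfake video, submit them to a verification service, and count how often the service accepts the impersonation) can be sketched as follows. This is a minimal Python illustration, not the study's actual code: `verify_face` here is a hypothetical stub standing in for a real call to a service such as Azure Face or Amazon Rekognition.

```python
from dataclasses import dataclass

@dataclass
class FaceShot:
    """A face crop extracted from one frame of a deepfake video."""
    video_id: str
    target_identity: str   # the person the deepfake impersonates
    image_bytes: bytes

def verify_face(shot: FaceShot) -> bool:
    """Hypothetical stand-in for a commercial verification call.
    Returns True if the service matches the shot to the target
    identity, i.e. the deepfake fooled it. Stubbed for illustration:
    a toy rule pretends larger payloads are higher-quality fakes."""
    return len(shot.image_bytes) > 100

def fool_rate(shots: list[FaceShot]) -> float:
    """Fraction of submitted face shots the API accepted as genuine."""
    if not shots:
        return 0.0
    fooled = sum(verify_face(s) for s in shots)
    return fooled / len(shots)
```

In the real study, this rate was computed per generation method and per API, which is how the 78 and 68 per cent figures above were obtained.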

“From experiments, we find that some deepfake generation methods are of greater threat to recognition systems than others and that each system reacts to deepfake impersonation attacks differently,” the researchers at Sungkyunkwan University wrote.

They continued, “We believe our research findings can shed light on better designing robust web-based APIs, as well as appropriate defence mechanisms, which are urgently needed to fight against malicious use of deepfakes.”

“Assuming the underlying face recognition API cannot distinguish the deepfake impersonator from the genuine user, it can cause many privacy, security, and repudiation risks, as well as numerous fraud cases,” the researchers warn. “Voice and video deepfake technologies can be combined to create multimodal deepfakes and used to carry out more powerful and realistic phishing attacks … [And] if the commercial APIs fail to filter the deepfakes on social media, it will allow the propagation of false information and harm innocent individuals.”

So… what next?

As the researchers have warned, it’s clear that the rapidly evolving nature of deepfake technology could pose a significant risk to privacy and security. The issue has sparked what could be thought of as a technological ‘arms race’ between deepfake developers with ill intent and deepfake detectors attempting to rid the web of fraudulent deepfakes. Microsoft recently launched its own deepfake-combating solution: a tool that analyses a still photo or video and provides a confidence score indicating whether the media has been artificially manipulated.

Deepfake detectors work in much the same way as deepfakes themselves, using machine learning models to detect manipulated videos. However, deepfake creators have also found ways to fool detectors, embedding adversarial examples in every frame to confuse the AI system. Attacks of this nature have reportedly achieved success rates of between 78 and 99 per cent.
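The adversarial trick mentioned above can be illustrated with a toy sketch in the style of the fast gradient sign method (FGSM), a standard way of crafting adversarial examples. This assumes the attacker has already computed the sign of the detector's loss gradient for each pixel (the hard part, omitted here); nudging every pixel slightly in that direction leaves the frame visually unchanged to a human but can flip the detector's verdict.

```python
def perturb_frame(pixels, gradient_sign, epsilon=2):
    """Apply an FGSM-style adversarial perturbation to one video frame.

    pixels:        flat list of 0-255 intensities for the frame
    gradient_sign: +1 or -1 per pixel, the sign of the detector's
                   loss gradient (obtained elsewhere, e.g. by
                   backpropagating through the detector model)
    epsilon:       perturbation budget; small enough to be invisible

    Each pixel is shifted by epsilon in the gradient direction and
    clamped back to the valid 0-255 range.
    """
    return [min(255, max(0, p + epsilon * s))
            for p, s in zip(pixels, gradient_sign)]
```

Repeating this over every frame of a video is what lets a deepfake slip past a machine-learning detector while looking identical to the original fake.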

The future is still unknown: it’s difficult to predict the future of the tech industry in general—let alone a technology that has only really surfaced within the last five years or so. Some forecast a dystopian future in which deepfakes evolve to a point where you can’t trust any footage online. Others take a more optimistic approach, comparing deepfakes to animation and suggesting that they could bring a new wave of content production. However, as facial recognition becomes more embedded in our lives, from unlocking your phone to more stringent security measures, it’s clear that steps need to be taken to stop cybercriminals from misusing the technology for their own purposes.

AI

Creating deepfakes of your dead relatives out of nostalgia? It’s a thing, but it comes with a hefty price

By Alma Fabiani

Mar 17, 2021


We’ve previously witnessed the appearance of AI-generated deepnudes, which, by the way, were later revealed to have been made using real images of sexual abuse—how awful. Around the same time, Kanye West thought it would be a great idea to buy his then-wife Kim Kardashian a talking hologram of her late father, Robert Kardashian. No need to ponder why they’re currently getting divorced. Even deepfake memes became popular!

All in all, it’s safe to say that deepfakes have comfortably infiltrated our lives, just like the rest of the surprisingly recent technologies we now take for granted daily. And because one trend is never enough for gen Zers—I should know, I am one myself—deepfakes now play a part in yet another trend of the moment: nostalgia.

From Y2K fashion to the viral retro music genre vaporwave, it’s obvious that we have a thing for reminiscence and sentimentality, even for eras we weren’t born in time for. This explains why we’re now all going berserk for MyHeritage’s new free feature called ‘deep nostalgia’, which allows users to upload pictures of their late relatives (or anyone else, for that matter—someone uploaded a photograph of the legendary Rosalind Franklin ‘just because’) and have them come to life, eyes swivelling, faces tilting, and all that jazz.

The Black Mirror-esque technology has already taken TikTok by storm, with users sharing videos of themselves showing their parents AI-generated animations of their great-great-grandfathers, grandmothers, and other relatives, inevitably leading to emotional reactions and sometimes tears.

The creepy yet fascinating tool comes from MyHeritage, the Israeli online genealogy platform mostly known for its DNA test kits, which provide customers with DNA matching and ethnicity estimates. But MyHeritage’s AI-powered viral deepfakery isn’t as complicated as it seems: the company is simply tugging on your heartstrings to grab data that can then be used to drive sign-ups for its other (paid) services. In other words, selling DNA tests is its main business, not ‘making it’ on TikTok, although that’s always a plus for any company.

It’s free to animate a photo using the deep nostalgia tech on MyHeritage’s website, but you don’t get to see the result until you hand over at least an email address and agree to its privacy policy and terms and conditions, both of which have attracted a number of concerns over the years.

As TechCrunch explains, last year for example, “the Norwegian Consumer Council reported MyHeritage to the national consumer protection and data authorities after a legal assessment of the T&Cs found the contract it asks customers to sign to be ‘incomprehensible’.”

Back in 2018, MyHeritage also suffered a major data breach. The data from that breach was later found for sale on the dark web, among a wider cache of hacked account info pertaining to several other services.

That being said, if you’re able to set aside the ethics of encouraging people to drag their long-lost relatives into the dark hole that is MyHeritage’s cross-sell DNA testing, then yes, the deepfake tool is pretty impressive.

But MyHeritage is not the only company to be praised (or condemned) for the deep nostalgia trend. Another Israeli company, D-ID, helped power it. A TechCrunch Disrupt Battlefield alumnus, D-ID started out building tech to digitally de-identify faces, with an eye on protecting images and video from being identifiable by facial recognition algorithms. Oh, the irony!

The company released a demo video of the newer, photo-animating technology last year. The tech uses a driver video to animate the photo, mapping facial features from the photo onto that base driver to create a “live portrait.”

“The Live Portrait solution brings still photos to life. The photo is mapped and then animated by a driver video, causing the subject to move its head and facial features, mimicking the motions of the driver video,” D-ID said in a press release. “This technology can be implemented by historical organizations, museums, and educational programs to animate well-known figures.” So, not really your great-great-uncle.
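The driver-video idea described in the press release can be sketched in a few lines: track how the driver's facial landmarks move between frames, then apply the same offsets to landmarks detected in the still photo. This is a hypothetical simplification for illustration only; D-ID's actual pipeline is not public, and a real system would warp pixels with a learned model rather than shift landmark points directly.

```python
def animate_landmarks(photo_landmarks, driver_frames):
    """Map driver-video motion onto a still photo's facial landmarks.

    photo_landmarks: list of (x, y) landmark points for the photo
    driver_frames:   per-frame landmark lists for the driver video,
                     with frame 0 treated as the neutral pose

    For each driver frame, compute how each landmark moved relative
    to the neutral frame, and apply that offset to the photo's
    corresponding landmark. A renderer (not shown) would then warp
    the photo to follow the returned landmark sequence.
    """
    neutral = driver_frames[0]
    animated = []
    for frame in driver_frames:
        offsets = [(fx - nx, fy - ny)
                   for (fx, fy), (nx, ny) in zip(frame, neutral)]
        animated.append([(px + dx, py + dy)
                         for (px, py), (dx, dy) in zip(photo_landmarks,
                                                       offsets)])
    return animated
```

The first output frame always equals the original photo landmarks, since the neutral driver frame contributes zero offsets; subsequent frames carry the driver's head tilts and eye movements over to the photo.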

Like all good things in life, MyHeritage’s deep nostalgia feature is not completely free—after the first few free nostalgia hits, users are asked to pay a monthly fee. I would be lying if I said I’m not going to be one of the many to have a fiddle with the tool; however, knowing that a paywall is bound to cut my nostalgia mania short is a welcome thought.
