
On LinkedIn, companies are leveraging deepfakes’ trustworthiness to boost sales

How would you describe a typical corporate headshot of someone on LinkedIn? A close-cropped image of the person with a slightly stiff smile, slicked hair and blurred background? Well, that’s exactly what Renée DiResta of the Stanford Internet Observatory thought when she received a connection request from another user.

“Quick question—have you ever considered or looked into a unified approach to message, video, and phone on any device, anywhere?” the sender, Keenan Ramsey, wrote in the message, mentioning that she and DiResta both belonged to a LinkedIn group for entrepreneurs. Ramsey punctuated her greeting with a grinning face emoji before moving on to a pitch about software. Nothing suspicious here so far, just more corporate spam that you either take the bait on or ignore entirely.

But this wasn’t the case for DiResta’s trained eyes. For starters, Ramsey was wearing only one earring in her profile picture. While some strands of her hair blended into the blurry background—which, upon closer inspection, looked like nothing in particular—others disappeared and then reappeared. Then came the placement of her eyes. DiResta noted that they were aligned right in the middle of the image, a tell-tale sign of an AI-generated deepfake.
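That centred-eyes tell is mechanical enough to check in code. Below is a rough sketch of the heuristic, assuming the open-source face_recognition library is installed; the filename and the threshold are purely illustrative, and a centred eye midpoint is only a weak signal, not proof of a fake.

```python
# A rough sketch of the eye-alignment heuristic, using the open-source
# face_recognition library. GAN face generators in the StyleGAN family
# tend to place the eyes at a near-fixed position in the frame, so an eye
# midpoint suspiciously close to the image centre is one (weak) signal.
import face_recognition

def eye_midpoint_offset(path):
    image = face_recognition.load_image_file(path)
    height, width = image.shape[:2]
    landmarks = face_recognition.face_landmarks(image)
    if not landmarks:
        return None  # no face found
    # Average all left- and right-eye landmark points into one midpoint.
    points = landmarks[0]["left_eye"] + landmarks[0]["right_eye"]
    mid_x = sum(x for x, _ in points) / len(points)
    mid_y = sum(y for _, y in points) / len(points)
    # Offset from the image centre, as a fraction of image size.
    return abs(mid_x - width / 2) / width, abs(mid_y - height / 2) / height

offset = eye_midpoint_offset("profile_photo.jpg")  # hypothetical file
if offset and max(offset) < 0.02:  # illustrative threshold, not tuned
    print("Eyes sit almost exactly at the centre - worth a closer look")
```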

“The face jumped out at me as being fake,” DiResta, who has studied Russian disinformation campaigns and anti-vaccine conspiracies in the past, told NPR. “In the course of my work, I look at a lot of these things, mostly in the context of political influence operations,” she mentioned. “But all of a sudden, here was a fake person in my inbox reaching out to me.” DiResta also noted how Ramsey’s profile featured a run-of-the-mill description of RingCentral, the software company where she claimed to work, along with a brief job history. The profile also listed an undergraduate business degree from New York University and a generic set of interests: CNN, Unilever, Amazon and Melinda Gates.

These insights led DiResta and her colleague Josh Goldstein at the Stanford Internet Observatory to launch a full-blown investigation into the purpose and potential harm of deepfake profiles infiltrating professional platforms like LinkedIn. The result? The researchers uncovered more than 1,000 profiles using AI-generated faces, belonging to more than 70 different companies, on the platform.

When NPR further investigated the matter, the media outlet found that most of these profiles are used to drum up sales for the companies that they claim to work for. “Accounts like Keenan Ramsey’s send messages to potential customers. Anyone who takes the bait gets connected to a real salesperson who tries to close the deal,” NPR noted. “Think telemarketing for the digital age.”

For companies that have turned to fake profiles, the tactic’s major appeal is its potential to reach more customers without beefing up their own workforce or hitting LinkedIn’s limits on messages. From a business perspective, it’s undoubtedly cheaper to make fake social media accounts with AI-generated faces than to hire actual people to run real accounts. Plus, research suggests the AI-generated faces can be even more convincing than those of real people.

NPR further highlighted how the demand for online sales leads has exploded over the pandemic, given how much harder it has become for teams to pitch their products in person. What’s shocking, however, is that several of the more than 70 businesses listed as employers on these fake profiles told NPR that they had hired external vendors to help with sales, but said they had not authorised the use of computer-generated images and were surprised to learn of it.

“This is not how we do business,” Heather Hinton, RingCentral’s chief information security officer told NPR. “This was for us a reminder that technology is changing faster than even those of us who are watching it can keep up with. And we just have to be more and more vigilant as to what we do and what our vendors are going to do on our behalf.”

Robert Balderas—CEO of Bob’s Containers in Texas—on the other hand, admitted that the company hired a firm named airSales to boost its business. Although Balderas knew airSales was creating LinkedIn profiles for people who described themselves as “business development representatives” for his company, he thought that “they were real people who worked for airSales.” However, Bob’s Containers had stopped working with airSales before NPR inquired about the profiles.

After Stanford researchers DiResta and Goldstein alerted LinkedIn about the marketing practice, the platform investigated the concern and has since removed profiles that have broken its policies—including rules against creating fake profiles or falsifying information.

“Our policies make it clear that every LinkedIn profile must represent a real person. We are constantly updating our technical defences to better identify fake profiles and remove them from our community, as we have in this case,” LinkedIn spokesperson Leonna Spilman said in a statement to NPR. “At the end of the day it’s all about making sure our members can connect with real people, and we’re focused on ensuring they have a safe environment to do just that.”

As for the consumer-focused companies in question, this marketing tactic, be it intentional or not, puts at risk the very trust they seek to build with their audience. So as the deepfake technology currently used to propagate misinformation and harassment online makes its way into the corporate world, remember: the eyes, Chico, they never lie.


AI-generated fake faces are more trustworthy than real people, new study reveals

Can you tell the difference between a human and a machine? Well, recent research has shown that we find AI-engineered fake faces more trustworthy than those of real people.

Pulling the wool over our eyes is no easy feat but, over time, fake images of people have become less and less distinguishable from real ones. Sophie J. Nightingale of Lancaster University, UK, and Hany Farid of the University of California, Berkeley, investigated whether fake faces created by machine learning models could trick people into believing they were real. Their study was published in Proceedings of the National Academy of Sciences USA (PNAS).

In the paper, AI programs called GANs (generative adversarial networks) produced fake images of people by “pitting two neural networks against each other,” New Scientist explained. One network, called the ‘generator’, produced a series of synthetic faces, ever-evolving like an essay’s rough draft. The other network, known as the ‘discriminator’, was first trained on real images, after which it graded the generated output by comparing it to its bank of real face data.

Beginning with a few tiny pixels, the generator, with feedback from the discriminator, started to create increasingly realistic images. In fact, they were so realistic that the discriminator itself could no longer tell which ones were fake.
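For the technically curious, the tug-of-war New Scientist describes fits in a few dozen lines. Below is a minimal sketch of a GAN training loop, assuming PyTorch and toy network sizes; production face generators such as StyleGAN are vastly more elaborate, and every name and dimension here is illustrative.

```python
# Minimal GAN training loop in PyTorch: a sketch of the generator vs
# discriminator tug-of-war, not the StyleGAN-scale model used for faces.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy sizes; face models are far larger

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# Discriminator: scores how "real" an image looks, from 0 to 1.
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCELoss()

def train_step(real_images):  # real_images: (batch, image_dim) in [-1, 1]
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1. Train the discriminator: real images should score 1, fakes 0.
    fake_images = G(torch.randn(batch, latent_dim)).detach()
    d_loss = loss(D(real_images), ones) + loss(D(fake_images), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2. Train the generator: its fakes should fool D into scoring 1.
    fake_images = G(torch.randn(batch, latent_dim))
    g_loss = loss(D(fake_images), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

As the two losses push against each other over many such steps, the generator's output drifts from noise towards images the discriminator can no longer separate from the real thing, which is exactly the point at which human judges start to struggle too.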

To measure just how far the technology has advanced, Nightingale and Farid tested these images on 315 participants. Recruited through a crowdsourcing website, they were asked whether they could distinguish 400 fake photos from 400 pictures of real people. The set of photographs featured 100 faces from each of four ethnic groups: white, black, East Asian and South Asian. As for the result? The test group had a slightly worse-than-chance accuracy rate: around 48.2 per cent.

A second group of 219 participants was also tested. However, this group received training in how to recognise computer-generated faces. These participants scored a higher accuracy rating of 59 per cent, but according to New Scientist, Nightingale considered the improvement “negligible.”

The study found that it was harder for participants to differentiate real from computer-generated faces when the people featured were white. One proposed reason is that the synthesis software was trained disproportionately on white faces rather than those of other ethnic groups.

The most interesting part of this study comes from tests conducted on a separate group of 223 participants who were asked to rate the trustworthiness of a mix of 128 real and fake photographs. On a scale ranging from one (very untrustworthy) to seven (very trustworthy), participants rated the fake faces as eight per cent more trustworthy, on average, than the pictures of real people. A marginal difference, but a difference nonetheless.

Taking a step back to look at the extreme ends of the results, the four faces rated most untrustworthy were all real, whereas three of the top-rated faces on the trustworthiness scale were fake.

The results of these experiments have prompted the researchers to call for safeguards to prevent the circulation of deepfakes online. Not sure what a deepfake is? According to The Guardian, the phenomenon is “the 21st century’s answer to Photoshopping”: deepfakes use a form of artificial intelligence called deep learning to make images of fake events. Hence the name.

Ditching their telltale “uncanny valley” look, deepfakes have evolved to become “increasingly convincing,” as stated in Scientific American. They have been implicated in a string of online crimes including fraud, impersonation, the spreading of propaganda and cyber defamation, as well as sexual crimes like revenge porn. This is even more troubling given that AI-generated images of people can easily be obtained online by scammers, who can then use them to create fake social media profiles. Industry efforts like the Deepfake Detection Challenge of 2020, which crowdsourced models for spotting manipulated media, show how seriously the problem is being taken.

“Anyone can create synthetic content without specialized knowledge of Photoshop or CGI,” said Nightingale, who went on to share her thoughts on the options we can consider to limit the risks of such technology. Developing countermeasures against deepfakes has become a game of Whack-A-Mole, a cyber “arms race.” Possibilities for developers include adding watermarks to generated pictures and flagging fakes when they pop up. However, Nightingale acknowledged that these measures barely make the cut: “In my opinion, this is bad enough. It’s just going to get worse if we don’t do something to stop it.”
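To make the watermarking idea concrete, here is a toy sketch assuming the Pillow imaging library; the filenames are hypothetical. Research proposals typically embed robust, imperceptible marks at generation time rather than a visible stamp, but even this crude version shows the intent: label synthetic output at the source.

```python
# A toy illustration of the watermarking countermeasure using Pillow:
# stamp a generated image with a visible provenance label. Real schemes
# embed imperceptible, tamper-resistant watermarks; this is only a sketch.
from PIL import Image, ImageDraw

def stamp_synthetic(path, out_path, label="AI-GENERATED"):
    image = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Draw the label in the bottom-left corner using the default font.
    draw.text((10, image.height - 20), label, fill=(255, 0, 0))
    image.save(out_path)

stamp_synthetic("gan_face.png", "gan_face_marked.png")  # hypothetical files
```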

“We should be concerned because these synthetic faces are incredibly effective for nefarious purposes, for things like revenge porn or fraud, for example,” Nightingale summed up.