In a jaw-dropping revelation, supporters of Donald Trump, the former US President and current candidate in the 2024 presidential election, have been accused of unleashing AI-generated images to sway Black voters towards the Republican party.
BBC One's investigative documentary series Panorama unearthed a trove of deepfakes featuring Black individuals seemingly endorsing Trump.
Although Trump has actively sought support from Black voters, who were crucial to Joe Biden's 2020 victory, there's currently no direct proof linking these images to his campaign. Instead, the AI-generated images found by the BBC appear to have been made and shared by US voters themselves.
Prominent Trump supporter and Florida-based radio host Mark Kaye openly admitted to creating one of these deceptive images. Defending his actions, Kaye told the BBC: "I'm not a photojournalist. I'm not out there taking pictures of what's really happening. I'm a storyteller."

He added: "I'm not claiming it is accurate. I'm not saying, 'Hey, look, Donald Trump was at this party with all of these African American voters. Look how much they love him.' If anybody's voting one way or another because of one photo they see on a Facebook page, that's a problem with that person, not with the post itself."
The investigation revealed dozens of AI-generated images shared widely on social media. The one shared by Kaye depicted Trump smiling with his arms around a group of Black women, while another showed him in front of a house surrounded by young Black men.
The latter image, accompanied by a fabricated story suggesting that Trump had spontaneously stopped his motorcade to meet the men, gained thousands of likes on X (formerly Twitter). The BBC investigation exposed these images as fake, citing telltale signs of AI manipulation such as overly shiny skin and missing fingers.
Some users called it out, but others seemed to believe the image was real.
The creator of the image featuring Trump with young Black men, known only as "Shaggy" from Michigan, reportedly blocked a BBC reporter when questioned about the images. At the time, the post had over 1.3 million likes on X.
Coinciding with these revelations, today, Monday 4 March, MAGA Inc, the main political action committee backing Trump, is set to launch an advertising campaign targeting Black voters in the key states of Georgia, Michigan, and Pennsylvania. This move underscores the strategic importance of reaching out to demographic groups that have historically leaned towards the Democratic Party.
Trump's history of racial controversies, from his inflammatory remarks to divisive policies, has created a significant gap in support among Black voters. Despite a decrease in Black voter enthusiasm for President Joe Biden, a mere 25 per cent of Black Americans had a favourable view of Trump, according to a December 2023 AP-NORC poll.
Furthermore, a February 2024 poll conducted by the New York Times and Siena College found that in six key swing states, 71 per cent of Black voters would back Biden in 2024, a steep drop from the 92 per cent nationally that helped him win the White House during the last election.
AI companies have typically declared that their tools are not to be deployed in political campaigns, although enforcement of this policy has been inconsistent. OpenAI recently banned a developer from using its tools after they created a bot imitating Dean Phillips, a Democratic presidential candidate with slim chances of winning. While Phillips' campaign had initially endorsed the bot, OpenAI intervened following a report by The Washington Post, determining that it violated the rules against using its technology for political campaigns.
AI-related confusion extends beyond electoral politics. In recent days, an audio clip surfaced on social media, purportedly featuring the principal of Pikesville High School in Baltimore engaging in a racist verbal attack against Jewish individuals and Black students. The union representing the principal has asserted that the audio is a product of AI.
Hany Farid, a professor at the University of California, Berkeley, specialises in digital forensics and the authentication of digital media. Speaking to CBS News about the Pikesville High School incident, the expert pointed to several clues, such as the consistent rhythm of the speech and indications of editing, which suggest the audio may indeed be fake. However, commenters on social media overwhelmingly appear to believe the audio is authentic. The school district has responded by opening an investigation into the matter.
But back to the latest AI-generated images, which are still flooding X as you read this. Ben Nimmo, who formerly led Meta's work countering foreign influence operations, told the BBC that the confusion brewed by these new-age fakes is a playground for foreign governments itching to manipulate elections.
Moreover, genuine influencers are likely to become the primary targets: "Anybody who has a substantial audience in 2024 needs to start thinking, how do I vet anything which gets sent to me? How do I make sure that I don't unwittingly become part of some kind of foreign influence operation?" Nimmo explained.
The expert continued: "These influencers, innocent in their intentions, could unwittingly become Trojan horses for foreign operations. Picture this: foreign puppeteers sliding misinformation into these influencers' pockets, who then unleash it on their ready-made audiences, making it seem like genuine American voices are echoing the narrative."
Social media giants claim to have policies to combat this kind of disinformation, with Meta touting new measures to tackle AI-generated content during elections. But, as we know, AI evolves far faster than tech companies can often keep up with.