In 2022, every new dawn seems to bring yet another controversial AI tool to the internet. Gone are the days when netizens leveraged DALL·E 2 and the racist image-spewing DALL·E mini merely to generate silly art for Twitter shitposting. Instead, recent months have witnessed the deployment of these tools to win legit art competitions, replace human photographers in the publication industry, and even steal original fan art in a growing list of ethical, copyright, and dystopian nightmares.
Just when we thought the innovations on the AI generator front might have come to a relative standstill, a new tool is now roasting people beyond recovery based solely on their selfies.
Dubbed CLIP Interrogator and created by AI generative artist @pharmapsychotic, the tool essentially aids in figuring out “what a good prompt might be to create new images like an existing one.” For instance, take the case of the AI thief who ripped off a Genshin Impact fan artist by taking a screenshot of their work-in-progress livestreamed on Twitch, feeding it into an online image generator to “complete” it first, and uploading the AI version of the art to Twitter six hours before the original artist. The swindler then had the audacity to accuse the artist of theft and proceeded to demand credit for their creation.
With CLIP Interrogator, the thief could essentially upload the ripped screenshot and get a series of text prompts that would help accurately generate similar art using other text-to-image generators like DALL·E mini. The process is a bit cumbersome, but it opens up a whole new realm of possibilities for AI-powered tools.
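For the nerds among us, the interrogator also exists as an open-source Python package that can be run locally instead of waiting on the overloaded online demo. Below is a minimal sketch based on the clip-interrogator package’s documented usage; the image path is a placeholder and exact model names may differ between versions.

```python
# Minimal sketch: turn an existing image into a text prompt with the
# open-source clip-interrogator package (pip install clip-interrogator).
# "screenshot.png" is a placeholder path for this example.
from PIL import Image
from clip_interrogator import Config, Interrogator

image = Image.open("screenshot.png").convert("RGB")

# ViT-L-14/openai is the CLIP variant commonly paired with Stable Diffusion 1.x
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

# Returns a prompt string that, when fed to a text-to-image generator,
# should produce images resembling the input
print(ci.interrogate(image))
```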
On Twitter, however, people are using CLIP Interrogator to upload their own selfies and get verbally destroyed by a bot. The tool called one user a “beta weak male,” described a second as “extremely gendered with haunted eyes,” and went on to dub a third “Joe Biden as a transgender woman.” It also seemed to specifically reference porn websites when hit up with images of women in tank tops. Are we surprised? Not in the least. Disappointed? As usual.
CLIP Interrogator is rude AF
AI researcher as a muppet, with a punchable face.
Also thanks but no thanks for pointing out my graying hair pic.twitter.com/OMQOpexuZC
— Mark Riedl (best stuff @[email protected]) (@mark_riedl) October 20, 2022
just did a CLIP interrogator of one of my shirtless pics and it called me a "tall emaciated man wolf hybrid"
— fareed (@it_is_fareed) October 21, 2022
"beta weak male" absolutely DESTROYED by the CLIP interrogator 😭 pic.twitter.com/ORO56BSHxa
— Nick is neutraal 🐀 (@ikisnick) October 21, 2022
Since I don’t exactly trust an AI with my own selfies (totally not that I can’t handle the blatant roasting or anything), I decided to test the tool by uploading some viral images of public figures. On my list were resident vampire boi Machine Gun Kelly (MGK), his best bud Pete Davidson, and, of course, selfie aficionado Kim Kardashian.
After several refreshes and dragging minutes of “Error: This application is too busy. Keep trying!”, I finally got CLIP Interrogator to generate text prompts based on one of MGK’s infamous mirror selfies. “Non-binary, angst white, Reddit, Discord,” the tool spat.
Meanwhile, the American rapper’s bud Davidson got “Yung lean, criminal mugshot, weirdcore, pitbull, and cursed image,” to name a few. For reference, the picture in question was the shirtless selfie that the Saturday Night Live star took to hit back at Kanye West while he was dating Kim Kardashian. Speaking of the fashion mogul, Kardashian’s viral diamond necklace selfie was described by the AI tool as “inspired by Brenda Chamberlain, wearing a kurta, normal distributions, wig.”
As noted by Futurism, CLIP Interrogator is “built on OpenAI’s Contrastive Language-Image Pre-Training (CLIP) neural network that was released in 2021, and hosted by Hugging Face, which has dedicated some extra resources to deal with the crush of traffic.” As the tool remains over-trafficked, further details are hazy at this point.
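In plainer terms, CLIP scores how well a snippet of text matches an image, and a tool like the interrogator can rank long lists of candidate phrases (artist names, art styles, website tags) by that score before stitching the best matches into a prompt. Here’s a rough, hand-wavy illustration of that scoring step using OpenAI’s publicly released CLIP weights via Hugging Face Transformers; the image path and candidate phrases are made up for the example.

```python
# Rough illustration of CLIP's image-text scoring, the building block
# behind prompt-guessing tools like CLIP Interrogator.
# Uses the openai/clip-vit-base-patch32 checkpoint from Hugging Face;
# "selfie.jpg" and the candidate phrases are placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("selfie.jpg")
candidates = ["criminal mugshot", "weirdcore", "cursed image", "oil painting"]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds the image-text similarity scores; higher means a better match
scores = outputs.logits_per_image.softmax(dim=1)[0]
for phrase, score in sorted(zip(candidates, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {phrase}")
```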
All we know for sure is that the roast bot has a long way to go when it comes to biases, especially when used by netizens to comment on their own selfies. And given how Twitter has recorded 320 tweets under the search term ‘CLIP Interrogator’ as of today, it seems like the tool is here to stay for a while.
I’m still kind of tech-illiterate so excuse me if I have no idea what I’m talking about, but something about uploading your face to Clip Interrogator feels like a bad idea.
— t👀nzi (@t00nzi) October 25, 2022
Alongside crypto, NFTs, deepfakes, digital clones, and the metaverse—in all its flop era glory with a pathetic daily active user count—one of the hottest innovations guaranteed to shape our digital lives in the future is none other than the AI-powered text-to-image generator.
To be honest, the technology is pretty fun to use. All you need to do is enter a random text prompt (for instance, ‘Walter White as a Starbucks barista’) and wait a few seconds to witness your nonsensical string of words magically generate a corresponding image. Sometimes, the results are silly and hilarious—making the technology in question a handy tool for shitposters on Twitter. More often than not, however, the AI-powered creations are nothing less than impressive, even harbouring the potential to pass as high-quality art drawn by a legit human being. And therein lies the ethical, copyright, and dystopian nightmare.
While AI art generators like DALL·E 2, DALL·E mini (which has a track record of spewing awfully racist images, by the way), Stable Diffusion, and Midjourney have proven to be a popular pastime for casual users and tech enthusiasts alike, they have also triggered an ethical landmine for artists and their livelihoods.
In September 2022, a digital artwork generated using Midjourney won first place in Colorado State Fair’s annual art competition. Shortly after the announcement, the decision was met with immense backlash from participants who accused the winner of, essentially, cheating. To date, publications like The Atlantic have been spotted using AI-generated artworks at the top of their articles, a space typically reserved for photographs and illustrations taken or made by humans with proper credentials and fees.
We’re watching the death of artistry unfold right before our eyes — if creative jobs aren’t safe from machines, then even high-skilled jobs are in danger of becoming obsolete
What will we have then?
— OmniMorpho (@OmniMorpho) August 31, 2022
In the same month, digital distribution platform Steam witnessed the listing of This Girl Does Not Exist, a video game that is completely generated using AI—from its story and art to even its music and special effects. And don’t even get me started on the parallel timeline of AI-generated film scripts, trailers, and posters popping up on YouTube and Twitter with every passing day. The technology has massive potential spanning several industries, which only cements its status as a walking pile of copyright infringement.
What makes this AI different is that it's explicitly trained on current working artists. You can see below that the AI generated image(left) even tried to recreate the artist's logo of the artist it ripped off.
This thing wants our jobs, its actively anti-artist. pic.twitter.com/4zXDeaIUzw
— RJ Palmer (@arvalis) August 14, 2022
Now, with the sole mission of topping this entire ethical debate, comes theft using AI image generators. Yes, you read that right. On 11 October, popular Korean anime artist AT was reportedly streaming the process of painting Raiden Shogun, a playable character in the action role-playing game Genshin Impact, in front of a live audience on Twitch.
Before they could even finish the fan art and post it to Twitter, a viewer took a screenshot of the work-in-progress, fed it into an AI image generator called NovelAI, and “completed” it first.
Uploading the resultant creation to Twitter six hours before AT’s stream ended, the thief then proceeded to demand credit from the original artist. “You posted like 5-6 hours after me, and for that kind of drawing, you can make it fast,” the swindler tweeted, as first reported by Kotaku. “You took [a] reference [from] an AI image but at least admit it.”
During a Twitch stream AT (@haruno_intro) had their art stolen.
The thief then finished the sketch by using NovelAI and posted on their Twitter before AT finish it.
Then had the AUDACITY to demand a "proper reference" from them. pic.twitter.com/Twv7oWSMaW
— Genel Jumalon ✈️ Fan Expo Cleveland (@GenelJumalon) October 13, 2022
The art thief even went as far as to post a Q&A session nobody asked for. “I’m self-taught. It’s been like one week since I started to learn art,” they tweeted in reply to their own AI-generated, or should I call it AI-replicated, image. “I think I’m gifted,” they added to a question that read: “What art school have you attempted?”
Well, I don’t know what this new-age swindler was getting at with their antics, but I think we can all collectively agree that they could have gotten away with the theft had they not attacked AT for posting their original art. And therein lies yet another concern.
As of today, the thief has deleted their Twitter account following the backlash and reports garnered from both artists and the Genshin Impact community. The discourse has also triggered a conversation about AI art software being used for online harassment and scams involving creative souls—in turn, reducing their skills to mere content farms. Meanwhile, several artists have also reminded their audiences to keep backups of their process streams.
Heck, some have even tweeted that the incident has left them feeling dubious about streaming their work-in-progress again. “Now any of us can be accused by art thieves of ‘stealing’ because their AI art ‘finished’ the piece first,” wrote one artist.
— AT (@haruno_intro) October 12, 2022