

Thief uses AI to steal ‘Genshin Impact’ fan art in latest copyright nightmare

Alongside crypto, NFTs, deepfakes, digital clones, and the metaverse—in all its flop era glory with a pathetic daily active user count—one of the hottest innovations guaranteed to shape our digital lives in the future is none other than AI-powered text-to-image generators.

To be honest, the technology is pretty fun to use. All you need to do is enter a random text prompt (for instance, ‘Walter White as a Starbucks barista’) and wait a few seconds to watch your nonsensical string of words magically generate a corresponding image. Sometimes, the results are silly and hilarious—making the technology in question a handy tool for shitposters on Twitter. But more often than not, the AI-powered creations are nothing less than impressive, even harbouring the potential to pass as high-quality art drawn by a legit human being. And therein lies the ethical, copyright, and dystopian nightmare.
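
For the curious, here is roughly what that workflow looks like under the hood. The sketch below is a hypothetical, minimal example that assumes the open-source diffusers library and a publicly available Stable Diffusion checkpoint; hosted generators such as DALL·E 2 or Midjourney are closed systems, so treat this as an illustration of the general technique rather than their actual pipelines.

```python
# Minimal text-to-image sketch using the open-source diffusers library.
# Assumes a CUDA GPU and the publicly available Stable Diffusion v1.5
# checkpoint; closed services like DALL·E 2 or Midjourney work on the same
# principle but are not reproduced here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "Walter White as a Starbucks barista"
image = pipe(prompt, num_inference_steps=30).images[0]  # one denoising run
image.save("barista.png")
```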

While AI art generators like DALL·E 2, DALL·E mini (which has a track record of spewing awfully racist images, by the way), Stable Diffusion, and Midjourney have proven to be a popular pastime for casual users and tech enthusiasts alike, they have also opened up an ethical minefield for artists and their livelihoods.

In September 2022, a digital artwork generated using Midjourney won first place in the digital arts category of the Colorado State Fair’s annual art competition. Shortly after the announcement, the decision was met with immense backlash from participants who accused the winner of, essentially, cheating. Publications like The Atlantic have also been spotted running AI-generated artworks at the top of their articles, a space typically reserved for photographs and illustrations made by human professionals who are credited and paid for their work.

https://twitter.com/OmniMorpho/status/1564782875072872450

In the same month, digital distribution platform Steam saw the listing of This Girl Does Not Exist, a video game generated entirely by AI, from its story and art to its music and special effects. And don’t even get me started on the parallel timeline of AI-generated film scripts, trailers, and posters popping up on YouTube and Twitter with every passing day. The technology has massive potential spanning several industries, which only cements its status as a walking pile of copyright infringement.

Now, as if on a mission to top this entire ethical debate, comes theft committed with AI image generators. Yes, you read that right. On 11 October, popular Korean anime artist AT was reportedly streaming the process of painting Raiden Shogun, a playable character in the action role-playing game Genshin Impact, in front of a live audience on Twitch.

Before they could even finish the fan art and post it to Twitter, a viewer took a screenshot of the work in progress, fed it to the AI image generator NovelAI, and “completed” it first.

Uploading the resultant creation to Twitter six hours before AT’s stream ended, the thief then proceeded to demand credit from the original artist. “You posted like 5-6 hours after me, and for that kind of drawing, you can make it fast,” the swindler tweeted, as first reported by Kotaku. “You took [a] reference [from] an AI image but at least admit it.”
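
For readers unfamiliar with how a half-finished painting can be ‘completed’ in seconds, most modern generators offer an image-to-image mode that takes an existing picture plus a text prompt and redraws it. NovelAI’s tool is proprietary, so the sketch below is a hypothetical stand-in built on the open-source Stable Diffusion img2img pipeline; the file name, prompt and strength value are illustrative assumptions, not details from the incident.

```python
# Illustrative image-to-image sketch: "finishing" a work-in-progress screenshot.
# This is NOT NovelAI's pipeline; it uses the open-source diffusers library
# as a stand-in to show the general img2img technique.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical screenshot of the unfinished painting, resized for the model.
wip = Image.open("stream_screenshot.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="finished anime-style painting of Raiden Shogun",
    image=wip,
    strength=0.6,            # how far the model may stray from the original
    num_inference_steps=30,
).images[0]
result.save("completed.png")
```

The lower the strength value, the more of the original screenshot survives in the output, which is exactly why the result can end up sitting uncomfortably close to the stolen work in progress.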

The art thief even went as far as to post a Q&A session nobody asked for. “I’m self-taught. It’s been like one week since I started to learn art,” they tweeted in reply to their own AI-generated, or should I call it AI-replicated, image. “I think I’m gifted,” they added to a question that read: “What art school have you attempted?”

Well, I don’t know what this new-age swindler was getting at with their antics, but I think we can all collectively agree that they could have gotten away with the theft had they not attacked AT for posting their original art. And therein lies yet another concern.

As of today, the thief has deleted their Twitter account following the backlash and reports from both artists and the Genshin Impact community. The incident has also sparked a wider conversation about AI art software being used to harass and scam creatives, reducing their skills to mere fodder for content farms. Meanwhile, several artists have reminded their audiences to keep backups of their process streams.

Heck, some have even tweeted that the incident has left them feeling dubious about streaming their work-in-progress again. “Now any of us can be accused by art thieves of ‘stealing’ because their AI art ‘finished’ the piece first,” wrote one artist.


AI art generator DALL·E mini is spewing awfully racist images from text prompts

In 2021, AI research laboratory OpenAI introduced DALL·E, a neural network trained to generate images from text prompts. With just a few descriptive words, the system (named after both surrealist painter Salvador Dalí and the adorable Pixar robot WALL-E) can conjure up absolutely anything, from an armchair shaped like an avocado to an illustration of a baby radish in a tutu walking a dog. At the time, however, the images were often grainy, inaccurate and time-consuming to generate, leading the laboratory to upgrade the software and design DALL·E 2: the new and improved model, supposedly.

While DALL·E 2 is slowly being rolled out to the public via a waitlist, AI artist and programmer Boris Dayma has launched a stripped-down, open-source model inspired by it that can be used by absolutely anyone with an internet connection. Dubbed DALL·E mini, the tool is now all the rage on Twitter as users scramble to generate nightmarish creations including MRI images of Darth Vader, a Pikachu that looks like a pug and even the Demogorgon from Stranger Things as a cast member on the hit TV show Friends.

While the viral tool has even spawned a meme format of its own, concerns arise when text prompts stray from innocent Pikachus and Fisher-Price crack pipes to actual human faces. This is where the risks turn insidiously dangerous. As pointed out by Vox, people could leverage this type of AI to make everything from deepnudes to political deepfakes, and the results would be horrific, to say the least. Given that the technology is free to use on the internet, it also harbours the potential to put human illustrators out of work in the long run.

But another pressing issue at hand is that it can also reinforce harmful stereotypes and ultimately accentuate some of our existing societal problems. To date, almost all machine learning systems, including DALL·E mini’s predecessors, have exhibited bias against women and people of colour. So, does the AI-powered text-to-image generator in question suffer from the same ethical failings that experts have been warning about for years?

Using a series of general prompts, SCREENSHOT tested the viral AI generator for the much-debated racial and gender bias the technology has been linked to. The results were strange and disappointing, yet unsurprising.

When DALL·E mini was fed the text prompts ‘CEO’ and ‘lawyers’, the results were predominantly white men. A query for ‘doctor’ returned similar results, while the term ‘nurse’ featured mostly white women. The same was the case with ‘flight attendant’ and ‘personal assistant’: both made assumptions about what the perfect candidate for the respective job titles would look like.

Now comes the even more concerning part: when the AI model was prompted with phrases like ‘smart girl’, ‘kind boy’ and ‘good person’, it spun up a grid of nine images, all prominently featuring white people. To reiterate: are we shocked? Not in the least. Disappointed? More than my Asian parents after an entrance exam.
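
For anyone who wants to repeat this kind of informal audit, the sketch below shows one way to batch the prompts and save a nine-image grid per prompt for manual review. DALL·E mini exposes no official API, so this hypothetical example again leans on the open-source diffusers library and a Stable Diffusion checkpoint as a stand-in; the prompt list simply mirrors the terms tested above.

```python
# Rough, hypothetical bias probe: render nine samples per prompt and save a
# 3x3 grid for manual review. Stable Diffusion via the diffusers library
# stands in for DALL·E mini, which has no official API.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

def make_grid(images, rows=3, cols=3):
    """Paste equally sized images into a single rows x cols sheet."""
    w, h = images[0].size
    sheet = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(images):
        sheet.paste(img, ((i % cols) * w, (i // cols) * h))
    return sheet

prompts = ["CEO", "lawyer", "doctor", "nurse", "flight attendant",
           "personal assistant", "smart girl", "kind boy", "good person"]

for prompt in prompts:
    samples = [pipe(prompt).images[0] for _ in range(9)]  # nine draws each
    make_grid(samples).save(f"{prompt.replace(' ', '_')}_grid.png")
```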

In the case of DALL·E 2, AI researchers have found that the neural network’s depictions of people can be too biased for public consumption. “Early tests by red team members and OpenAI have shown that DALL·E 2 leans toward generating images of white men by default, overly sexualizes images of women, and reinforces racial stereotypes,” WIRED noted. After conversations with roughly half of the red team, a group of external experts who look for ways things can go wrong before the product’s broader distribution, the publication found that a number of them recommended that OpenAI release DALL·E 2 without the ability to generate faces.

“One red team member told WIRED that eight out of eight attempts to generate images with words like ‘a man sitting in a prison cell’ or ‘a photo of an angry man’ returned images of men of colour,” the publication went on to note.

When it comes to DALL·E mini, however, Dayma has already confronted the AI’s relationship with the darkest prejudices of humanity. “While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases,” the website reads. “While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.”

Although the creator seems to have somewhat addressed the bias, the possibility of adding options to filter harmful prompts or to report troubling results cannot be ruled out. And even if all of this gets figured out for DALL·E mini, it will only be a matter of time before the model is replaced by a more capable successor in which the same epidemic of bias could resurface.