Artificial intelligence has officially infiltrated our health care, education and daily lives. But with its recent stride in popular music, the question remains: How will the music industry be impacted by AI?
Four years ago, pop star Grimes boldly predicted on American theoretical physicist and philosopher Sean Carroll’s Mindscape podcast that once Artificial General Intelligence (AGI) goes live, human art will be doomed.
“They’re gonna be so much better at making art than us,” she said. Those comments sparked a meltdown on social media at the time, with AI already having upended blue-collar jobs across several industries, as reported by TIME. But there is no denying that the controversial singer foreshadowed today’s climate.
From popular DJ David Guetta using the technology to add a vocal in the style of rapper Eminem to a recent song, to the TikTok famous twin DJs ALTÉGO using it to make an anthem in the style of Charli XCX, Ava Max and Crazy Frog, the world simply can’t get enough of AI, with some of us even becoming dependent on it.
In fact, Grimes (who is highly recognised for shaking up the music industry) has publicly invited fellow musicians to clone her voice using AI to create new songs. According to the BBC, she announced that she would split 50 per cent of royalties on any successful AI-generated song and tweeted to her over 1 million followers: “Feel free to use my voice without penalty.” However, not all musicians feel as enthusiastic about the changing industry.
Rapper Drake expressed great displeasure over an AI-generated cover of him rapping Ice Spice’s ‘Munch (Feelin’ U)’, calling it “the final straw.” Shortly after, Universal Music, the label he is signed to, successfully petitioned streaming services to remove a song called ‘Heart on my sleeve’ which used deepfaked vocals of him and The Weeknd. The label argued that “the training of generative AI using our artists’ music” was “a violation of copyright law.”
A growing number of AI researchers have warned that the technology’s capacity to automate specific tasks, shape algorithms and contribute to healthcare promises not only a powerful and hopeful future but also a dangerous one, potentially filled with misinformation.
So much so that an open letter signed by dozens of academics from across the world—including tech billionaire and now owner of Twitter, Elon Musk—has called on developers to learn more about consciousness, as AI systems become more advanced. So, could the development of AI be the end of creative originality?
“It hasn’t affected my way of creating because I try to create outside the box,” Jordain Johnson, aka Outlaw the Artist, a British-born but LA-based rapper and songwriter, told SCREENSHOT. “I work with international artists so my sound is innovative within itself. For instance, my song ‘Slow you down’ mixes drum and bass and G-funk, so it’s a lot of different energies that can’t be mimicked by AI.”
Outlaw argues that AI could be a cost-free marketing tactic for up-and-coming artists trying to reach more listeners. Advertising, features and music videos can all be taken care of by the technology. While the rapper remains hopeful that it won’t affect his art, the industry is moving at breakneck speed and is already on a dangerous trajectory. The music industry is strained enough, and the introduction of a tool that lacks nuance, spontaneity, and the emotional touch that only humans can provide will only strain it further.
According to Verdict, AI-generated music will never be able to gather a mass following because it lacks emotional intelligence. Still, human creativity and ambition are quickly being diluted by our technological advancements, with little sign of a slowdown. “Music is often considered a reflection of the times,” so for this reason “the most compelling case for AI music is to serve as a companion to human musicians, catalysing the creative process.”
In agreement, Outlaw describes it as a natural progression: “We moved from analogue to digital, from LimeWire to SoundCloud, and most notably, from CDs to online streaming. It’s forever evolving but it’ll always have a nostalgic feel to it. I’m always looking and thinking about the next thing so AI is a tool. Some tools are sharp and could cut you, but they’re always useful.” But what happens when the tools stop needing a smith?
In 2021, AI research laboratory OpenAI unveiled DALL·E, a neural network trained to generate images from text prompts. With just a few descriptive words, the system (named after both surrealist painter Salvador Dalí and the adorable Pixar robot WALL-E) can conjure up absolutely anything from an armchair shaped like an avocado to an illustration of a baby radish in a tutu walking a dog. At the time, however, the images were often grainy, inaccurate and time-consuming to generate, leading the laboratory to upgrade the software and design DALL·E 2, the new and improved model. Supposedly.
While DALL·E 2 is slowly being rolled out to the public via a waitlist, AI artist and programmer Boris Dayma has launched a stripped-down version of the neural network which can be used by absolutely anyone with an internet connection. Dubbed DALL·E mini, the AI model is now all the rage on Twitter as users scramble to generate nightmarish creations including MRI images of Darth Vader, a Pikachu that looks like a pug and even the Demogorgon from Stranger Things as a cast member on the hit TV show Friends.
While the viral tool has even spearheaded a meme format of its own, concerns arise when text prompts move beyond innocent Pikachus and Fisher-Price crack pipes to actual human faces. Here the risks become insidiously dangerous. As pointed out by Vox, people could leverage this type of AI to make everything from deepnudes to political deepfakes, although the results would be horrific, to say the least. Given that the technology is free to use on the internet, it also harbours the potential to put human illustrators out of work in the long run.
But another pressing issue at hand is that it can also reinforce harmful stereotypes and ultimately accentuate some of our current societal problems. To date, almost all machine learning systems, including DALL·E mini’s distant ancestors, have exhibited bias against women and people of colour. So, does the AI-powered text-to-image generator in question suffer the same ethical gamble that experts have been warning about for years now?
Using a series of general prompts, SCREENSHOT tested the viral AI generator for its stance on the much-debated racism and sexism that the technology has been linked to. The results were both strange and disappointing, yet unsurprising.
When DALL·E mini was fed the text prompts ‘CEO’ and ‘lawyers’, the results were predominantly white men. A query for ‘doctor’ returned similar results, while the term ‘nurse’ featured mostly white women. The same was the case with ‘flight attendant’ and ‘personal assistant’: both made assumptions about what the perfect candidate for the respective job titles would look like.
Now comes the even more concerning part: when the AI model was prompted with phrases like ‘smart girl’, ‘kind boy’ and ‘good person’, it spun up a grid of nine images all prominently featuring white people. To reiterate: Are we shocked? Not in the least. Disappointed? More than my Asian parents after an entrance exam.
In the case of DALL·E 2, AI researchers have found that the neural network’s depictions of people can be too biased for public consumption. “Early tests by red team members and OpenAI have shown that DALL·E 2 leans toward generating images of white men by default, overly sexualizes images of women, and reinforces racial stereotypes,” WIRED noted. After conversations with roughly half of the red team—a group of external experts who look for ways things can go wrong before the product’s broader distribution—the publication found that a number of them had recommended OpenAI release DALL·E 2 without the ability to generate faces.
“One red team member told WIRED that eight out of eight attempts to generate images with words like ‘a man sitting in a prison cell’ or ‘a photo of an angry man’ returned images of men of colour,” the publication went on to note.
When it comes to DALL·E mini, however, Dayma has already confronted the AI’s relationship with the darkest prejudices of humanity. “While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases,” the website reads. “While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.”
Although the creator seems to have somewhat acknowledged the bias, options for blocking harmful prompts or reporting troubling results have yet to materialise. And even if they’re all figured out for DALL·E mini, it’ll only be a matter of time before the neural system is replaced by another with even more impressive capabilities, where such an epidemic of bias could resurface.