Artificial intelligence has officially infiltrated our health care, education and daily lives. But with its recent stride in popular music, the question remains: How will the music industry be impacted by AI?
Four years ago, pop star Grimes boldly predicted on American theoretical physicist and philosopher Sean Carrollâs Mindscape podcast that once Artificial General Intelligence (AGI) goes live, human art will be doomed.
âTheyâre gonna be so much better at making art than us,â she expressed. Although those comments sparked a meltdown on social media, as AI had already upended many blue-collar jobs across several industries, as first claimed by TIME. There is no denying that the controversial singer had foreshadowed todayâs climate.
@altegomusic so artificial intelligence is making viral hits now? đ #ai #artificialintelligence #blingbling
⏠original sound - ALTĂGO
From popular DJ David Guetta using the technology to add a vocal in the style of rapper Eminem to a recent song, to the TikTok famous twin DJs ALTĂGO using it to make an anthem in the style of Charli XCX, Ava Max and Crazy Frogâthe world simply canât get enough of AI, with some of us even coming dependent on it:
ChatGPT runs my life, from helping me knock out my coding for the day, gym work out plan, vacation itineraries and creating an outlineâstudy guide for my next certifications.
— kyîš (@kyinweb3) April 25, 2023
In fact, Grimes (who is highly recognised for shaking up the music industry) has publically invited fellow musicians to clone her voice using AI to create new songs. According to the BBC, she announced that she would split 50 per cent of royalties on any successful AI-generated song and tweeted to her over 1 million followers: âFeel free to use my voice without penalty.â However, not all musicians feel as enthusiastic about the changing industry.
Rapper Drake expressed great displeasure over an AI-generated cover of him rapping Ice Spiceâs âMunch (Feelin’ U)â and vocalised that this was âthe final straw.â Shortly after, Universal Music, which he is signed under, successfully petitioned streaming services to remove a song called âHeart on my sleeveâ which used deepfaked vocals of him and The Weeknd. They argued that âthe training of generative AI using our artistsâ musicâ was âa violation of copyright law.â
A growing number of AI researchers have vocalised and warned that its capability of automating specific tasks, algorithm influence and healthcare contribution has not only created a powerful and hopeful future but also a dangerous one, potentially filled with misinformation.
So much so that an open letter signed by dozens of academics from across the worldâincluding tech billionaire and now owner of Twitter, Elon Muskâhas called on developers to learn more about consciousness, as AI systems become more advanced. So, could the development of AI be the end of creative originality?
âIt hasnât affected my way of creating because I try to create outside the box,â Jordain Johnson, aka Outlaw the Artist, a British-born but LA-based rapper and songwriter, told SCREENSHOT. âI work with international artists so my sound is innovative within itself. For instance, my song âSlow you downâ mixes drum and bass and G-funk, so itâs a lot of different energies that canât be mimicked by AI.â
Outlaw argues that AI could be a cost-free marketing tactic for up and coming artists trying to reach more listeners. Advertising, features, music videos can all be taken care of thanks to Artificial Intelligence. While the rapper remains hopeful that the new technology wonât affect his art, it is an industry that is moving at breakneck speeds and is already on a dangerous trajectory. The music industry is strained enough, and the introduction of a tool that lacks nuance, spontaneity, and the emotional touch that only humans can provide will only strain it further.
According to Verdict, AI-generated music will never be able to gather a mass following because it lacks emotional intelligenceâhuman creativity and ambition is quickly being diluted by our technological advancements, with little signs of a slow down. âMusic is often considered a reflection of the times,â so for this reason âthe most compelling case for AI music is to serve as a companion to human musicians, catalysing the creative process.â
In agreement, Outlaw describes it as a natural progression: âWe moved from analogue to digital, from LimeWire to SoundCloud, and most notably, from CDs to online streaming. Itâs forever evolving but itâll always have a nostalgic feel to it. Iâm always looking and thinking about the next thing so AI is a tool. Some tools are sharp and could cut you, but theyâre always useful.â But what happens when the tools stop needing a smith?
In 2021, AI research laboratory OpenAI invented DALL·E, a neural network trained to generate images from text prompts. With just a few descriptive words, the system (named after both surrealist painter Salvador DalĂ and the adorable Pixar robot WALL-E) can conjure up absolutely anything from an armchair shaped like an avocado to an illustration of a baby radish walking a dog in a tutu. At the time, however, the images were often grainy, inaccurate and time-consuming to generateâleading the laboratory to upgrade the software and design DALL·E 2. The new and improved model, supposedly.
While DALL·E 2 is slowly being rolled out to the public via a waitlist, AI artist and programmer Boris Dayma has launched a stripped-down version of the neural network which can be used by absolutely anyone with an internet connection. Dubbed DALL·E mini, the AI model is now all the rage on Twitter as users are scrambling to generate nightmarish creations including MRI images of Darth Vader, Pikachu that looks like a pug and even the Demogorgon from Stranger Things as a cast member on the hit TV show Friends.
— Weird Ai Generations đŠ« (@weirddalle) June 13, 2022
While the viral tool has even spearheaded a meme format of its own, concerns arise when text prompts descend beyond innocent Pikachus and Fisher Price crack pipes onto actual human faces. Now, there are some insidiously dangerous risks in this case. As pointed out by Vox, people could leverage this type of AI to make everything from deepnudes to political deepfakesâalthough the results would be horrific, to say the least. Given how the technology is free to use on the internet, it also harbours the potential to put human illustrators out of work in the long run.
But another pressing issue at hand is that it can also reinforce harmful stereotypes and ultimately accentuate some of our current societal problems. To date, almost all machine learning systems, including DALL·E miniâs distant ancestors, have exhibited bias against women and people of colour. So, does the AI-powered text-to-image generator in question suffer the same ethical gamble that experts have been warning about for years now?
Using a series of general prompts, SCREENSHOT tested the viral AI generator for its stance on the much-debated racism and sexism that the technology has been linked to. The results were both strange and disappointing, yet unsurprising.
When DALL·E mini was fed with the text prompts âCEOâ and âlawyersâ, the results were prominently white men. A query for âdoctorâ reverted back with similar results while the term ânurseâ featured mostly white women. The same was the case with âflight attendantâ and âpersonal assistantââboth made assumptions about what the perfect candidate for the respective job titles would look like.
Now comes the even more concerning part, when the AI model was prompted with phrases like âsmart girlâ, âkind boyâ and âgood personâ, it spun up a grid of nine images all prominently featuring white people. To reiterate: Are we shocked? Not in the least. Disappointed? More than my Asian parents after an entrance exam.
In the case of DALL·E 2, AI researchers have found that the neural networkâs depictions of people can be too biassed for public consumption. âEarly tests by red team members and OpenAI have shown that DALL·E 2 leans toward generating images of white men by default, overly sexualizes images of women, and reinforces racial stereotypes,â WIRED noted. After conversations with roughly half of the red teamâa group of external experts who look for ways things can go wrong before the productâs broader distributionâthe publication found that a number of them recommended OpenAI to release DALL·E 2 without the ability to generate faces.
âOne red team member told WIRED that eight out of eight attempts to generate images with words like âa man sitting in a prison cellâ or âa photo of an angry manâ returned images of men of colour,â the publication went on to note.
When it comes to DALL·E mini, however, Dayma has already confronted the AIâs relationship with the darkest prejudices of humanity. âWhile the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases,â the website reads. âWhile the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.â
Although the creator seems to have somewhat addressed the bias, the possibility of options for either controlling harmful prompts or reporting certain results cannot be ruled out. And even if theyâre all figured out for DALL·E mini, itâll only be a matter of time before the neural system is replaced by another with impressive capabilities where such an epidemic of bias could resurface.