Companies are paying big bucks for people skilled at ChatGPT, and even Elon Musk is worried – Screen Shot
Deep Dives Level Up Newsletters Saved Articles Challenges

AI

Companies are paying big bucks for people skilled at ChatGPT, and even Elon Musk is worried

On 28 March 2023, The Wall Street Journal published an article titled The Jobs Most Exposed to ChatGPT, which highlighted that “AI tools could more quickly handle at least half of the tasks that auditors, interpreters and writers do now.” A terrifying prospect for many, myself included.

Heck, even Twitter menace Elon Musk is freaking out, having recently signed a letter that warns of potential risks to society and civilisation by human-competitive AI systems in the form of economic and political disruptions. Oh, him and 1,000 other experts, including Apple co-founder Steve Wozniak, and Yoshua Benigo, often referred to as one of the “godfathers of AI.”

The letter, which was issued by the non-profit Future of Life Institute, calls for a six-month halt to the “dangerous race” to develop systems more powerful than OpenAI’s newly launched GPT-4. Of course, no one’s listening.

Despite the general worry those of us working in creative industries are currently feeling about the possibility of ChatGPT and other AI chatbot tools coming to take our jobs, some companies are offering six-figure salaries to a select few who are great at wringing results out of them, as initially reported by Bloomberg.

Considering the current hype the technology is undergoing, as well as the immense potential it holds, it’s not surprising that we’re already witnessing a jobs market burgeoning for so-called “prompt engineer” positions—with salaries of up to $335,000 per annum.

To put it simply, these roles would require applicants to be ChatGPT wizards who know how to harness its power as effectively as humanly possible and who can train other employees on how to use those tools to their best ability. The rest of the work would still be left to the AI.

Speaking to Bloomberg, Albert Phelps, one of these lucky prompt engineers who works at a subsidiary of the Accenture consultancy firm in the UK, shared that the job entails being something of an “AI whisperer.”

He added that educational background doesn’t play as big of a part in this role as it does in countless others, with ChatGPT experts who have degrees as disparate as history, philosophy, and English. “It’s wordplay,” Phelps told the publication. “You’re trying to distil the essence or meaning of something into a limited number of words.”

Aged only 29, Phelps studied history before going into financial consulting and ultimately pivoting to AI. On a typical day at his job, the AI virtuoso and his colleagues will write about five different prompts and have 50 individual interactions with large language models such as ChatGPT.

Though Mark Standen, the owner of an AI, automation, and machine learning staffing business in the UK and Ireland, told Bloomberg that prompt engineering is “probably the fastest-moving IT market I’ve worked in for 25 years,” adding that “expert prompt engineers can name their price,” it should be noted that people can also sell their prompt-writing skills for $3 to $10 a pop.

PromptBase, for example, is a marketplace for “buying and selling quality prompts that produce the best results, and save you money on API costs.” On there, prompt engineers can keep 80 per cent of every sale of their prompt, and on custom prompt jobs. Meanwhile, the platform takes 20 per cent.

But when it comes to the crème de la crème, Standen went on to note that while the jobs start at the pound sterling equivalent of about $50,000 per year in the UK, there are candidates in his company’s database looking for between $250,000 and $360,000 per year.

There’s obviously no way of knowing if or when the hype surrounding prompt engineers will ever die down, in turn lowering the six-figure salaries that are currently being offered to more ‘standard’ rates. But one thing is for sure, AI tools aren’t going anywhere anytime soon, and they’re coming for our jobs.

Time to improve those prompt-feeding skills, I guess.

AI

AI art generator DALL·E mini is spewing awfully racist images from text prompts

In 2021, AI research laboratory OpenAI invented DALL·E, a neural network trained to generate images from text prompts. With just a few descriptive words, the system (named after both surrealist painter Salvador Dalí and the adorable Pixar robot WALL-E) can conjure up absolutely anything from an armchair shaped like an avocado to an illustration of a baby radish walking a dog in a tutu. At the time, however, the images were often grainy, inaccurate and time-consuming to generate—leading the laboratory to upgrade the software and design DALL·E 2. The new and improved model, supposedly.

While DALL·E 2 is slowly being rolled out to the public via a waitlist, AI artist and programmer Boris Dayma has launched a stripped-down version of the neural network which can be used by absolutely anyone with an internet connection. Dubbed DALL·E mini, the AI model is now all the rage on Twitter as users are scrambling to generate nightmarish creations including MRI images of Darth Vader, Pikachu that looks like a pug and even the Demogorgon from Stranger Things as a cast member on the hit TV show Friends.

While the viral tool has even spearheaded a meme format of its own, concerns arise when text prompts descend beyond innocent Pikachus and Fisher Price crack pipes onto actual human faces. Now, there are some insidiously dangerous risks in this case. As pointed out by Vox, people could leverage this type of AI to make everything from deepnudes to political deepfakes—although the results would be horrific, to say the least. Given how the technology is free to use on the internet, it also harbours the potential to put human illustrators out of work in the long run.

But another pressing issue at hand is that it can also reinforce harmful stereotypes and ultimately accentuate some of our current societal problems. To date, almost all machine learning systems, including DALL·E mini’s distant ancestors, have exhibited bias against women and people of colour. So, does the AI-powered text-to-image generator in question suffer the same ethical gamble that experts have been warning about for years now?

Using a series of general prompts, SCREENSHOT tested the viral AI generator for its stance on the much-debated racism and sexism that the technology has been linked to. The results were both strange and disappointing, yet unsurprising.

When DALL·E mini was fed with the text prompts ‘CEO’ and ‘lawyers’, the results were prominently white men. A query for ‘doctor’ reverted back with similar results while the term ‘nurse’ featured mostly white women. The same was the case with ‘flight attendant’ and ‘personal assistant’—both made assumptions about what the perfect candidate for the respective job titles would look like.

Now comes the even more concerning part, when the AI model was prompted with phrases like ‘smart girl’, ‘kind boy’ and ‘good person’, it spun up a grid of nine images all prominently featuring white people. To reiterate: Are we shocked? Not in the least. Disappointed? More than my Asian parents after an entrance exam.

In the case of DALL·E 2, AI researchers have found that the neural network’s depictions of people can be too biassed for public consumption. “Early tests by red team members and OpenAI have shown that DALL·E 2 leans toward generating images of white men by default, overly sexualizes images of women, and reinforces racial stereotypes,” WIRED noted. After conversations with roughly half of the red team—a group of external experts who look for ways things can go wrong before the product’s broader distribution—the publication found that a number of them recommended OpenAI to release DALL·E 2 without the ability to generate faces.

“One red team member told WIRED that eight out of eight attempts to generate images with words like ‘a man sitting in a prison cell’ or ‘a photo of an angry man’ returned images of men of colour,” the publication went on to note.

When it comes to DALL·E mini, however, Dayma has already confronted the AI’s relationship with the darkest prejudices of humanity. “While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases,” the website reads. “While the extent and nature of the biases of the DALL·E mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups. Work to analyze the nature and extent of these limitations is ongoing, and will be documented in more detail in the DALL·E mini model card.”

Although the creator seems to have somewhat addressed the bias, the possibility of options for either controlling harmful prompts or reporting certain results cannot be ruled out. And even if they’re all figured out for DALL·E mini, it’ll only be a matter of time before the neural system is replaced by another with impressive capabilities where such an epidemic of bias could resurface.