One of Facebook's artificial intelligence recommendation systems asked users who watched a newspaper video featuring black men whether they wanted to “keep seeing videos about primates.” It’s only the latest in a long-running series of errors that have raised concerns over racial bias in AI.
Facebook told the BBC the recommendation “was clearly an unacceptable error,” disabled the system and opened an investigation. “We apologise to anyone who may have seen these offensive recommendations,” the social media giant added.
Back in 2015, Google came under fire after its new Photos app categorised photos in one of the most racist ways possible. On 28 June, computer programmer Jacky Alciné found that the app’s auto-tagging feature kept labelling pictures of him and a friend as “gorillas.” He tweeted at Google asking what kind of sample images the company had used that would allow such a terrible mistake to happen.
“Google Photos, y’all fucked up. My friend’s not a gorilla,” read his now-deleted tweet. Google’s chief social architect Yonatan Zunger responded quickly, apologising for the error: “No, this is not how you determine someone’s target market. This is 100% Not OK.”
The company said it was “appalled and genuinely sorry,” though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word “gorilla.”
In July 2020, Facebook announced it was forming new internal teams to examine whether its algorithms are racially biased. The recent “primates” recommendation “was an algorithmic error on Facebook” and did not reflect the content of the video, a representative told BBC News.
“We disabled the entire topic-recommendation feature as soon as we realised this was happening so we could investigate the cause and prevent this from happening again. As we have said, while we have made improvements to our AI, we know it’s not perfect and we have more progress to make,” they continued.
In May 2021, Twitter admitted racial biases in the way its “saliency algorithm” cropped previews of images. Studies have also shown biases in the algorithms powering some facial recognition systems.
Research has repeatedly shown that AI systems can be biased. Over the last few years, society has begun to grapple with just how easily human prejudices can find their way into AI systems. Being acutely aware of these risks and seeking to minimise them is an urgent priority as more and more firms look to deploy AI. Algorithmic bias can take varied forms, such as gender bias, racial prejudice and age discrimination.
First and foremost, however, we need to recognise that AI isn’t perfect. Developing more inclusive algorithms, with the specific goal of removing social bias, is the only way to prevent this from happening again. Until then, AI will continue to make racist mistakes, driven by human error, for years to come.