Earlier this week, news broke that Facebook is planning to change its company name at its annual Connect conference on 28 October (or even earlier, according to The Verge). At first, most publications tried their hand at guessing where the sudden change was coming from—was it to distract us from Frances Haugen, also known as the Facebook whistleblower? Or was Mark Zuckerberg planning something else—something bigger?
As more speculation continued in the media, a “source with direct knowledge of the matter” let the cat out of the bag when speaking to The Verge. From there, things became clearer. “The coming name change is meant to signal the tech giant’s ambition to be known for more than social media and all the ills that entail,” wrote the publication. The rebrand would help position the Facebook app as one of many products under a parent company overseeing groups like Instagram, WhatsApp, Oculus, and more—emphasis being placed on ‘more’.
Sure, Zuckerberg wants the world to acknowledge that his company is about way more than Facebook. I find it hard to imagine someone not knowing that already. Facebook has more than 10,000 employees building consumer hardware like AR glasses (which Zuckerberg believes will eventually be as omnipresent as smartphones).
Heck, about three months ago, the man told The Verge that, over the next several years, “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company.” In other words, he warned us.
In recent years, Facebook has been steadily laying the groundwork for a greater focus on the next generation of technology—whatever that means. This past summer, it set up a dedicated Metaverse team. More recently, it announced that the head of AR and VR, Andrew Bosworth, will be promoted to Chief Technology Officer (CTO). And just a couple of days ago, it announced plans to hire 10,000 more employees to work on the Metaverse in Europe.
The Metaverse is “going to be a big focus, and I think that this is just going to be a big part of the next chapter for the way that the internet evolves after the mobile internet,” Zuckerberg told The Verge’s Casey Newton this summer. “And I think it’s going to be the next big chapter for our company too, really doubling down in this area.” You know what they say on Love Island, ‘don’t put all your eggs in one basket’, and although it may look like Zuckerberg is not one to make such a misstep, it is clear that the company is expecting quite a lot from this whole Metaverse ordeal. But what are we even talking about here?
While Facebook has been heavily promoting the idea of the Metaverse in recent weeks, it’s still not a widely understood concept. The term was originally coined by sci-fi novelist Neal Stephenson in his 1992 novel Snow Crash to describe a virtual world people escape to from a dystopian real world.
“As of today, the closest thing we have to a Metaverse is online games,” Jack Ramage wrote for Screen Shot in July. “Ever played the game Roblox? If you haven’t, you’re probably somewhat aware of the concept—or at least, you’ve seen videos of screaming children playing the game. Essentially, the online multiplayer game, which is targeted towards children and whose parent company is valued at over 44 billion dollars, is based in a digital sandbox world where its users can program as well as play games created by other users. According to CNBC, the game is often considered an example of a Metaverse. Minecraft, a vast open-world sandbox game, is also considered by some to be a Metaverse,” he continued.
The way Ramage sees it, the Metaverse could soon replace what we currently know as the modern-day internet—“with all the same content but fewer limitations as to where and how that content can be accessed.” So, what’s next for Facebook’s rebranding? First, a new name, then a clear explanation of what the Metaverse is and how we will, without a doubt, all end up addicted to it.
Facebook users who watched a newspaper video featuring black men were asked if they wanted to “keep seeing videos about primates” by one of the platform’s artificial intelligence recommendation systems. It’s only the latest in a long-running series of errors that have raised concerns over racial bias in AI.
Facebook told the BBC it “was clearly an unacceptable error,” disabled the system and opened an investigation. “We apologise to anyone who may have seen these offensive recommendations,” the social media giant continued.
Back in 2015, Google came under fire after its new Photos app categorised photos in one of the most racist ways possible. On 28 June, computer programmer Jacky Alciné found that the feature kept tagging pictures of him and a friend as “gorillas.” He tweeted at Google asking what kind of sample images the company had used that would allow such a terrible mistake to happen.
“Google Photos, y’all fucked up. My friend’s not a gorilla,” read his now-deleted tweet. Google’s chief social architect Yonatan Zunger responded quickly, apologising for the feature: “No, this is not how you determine someone’s target market. This is 100% Not OK.”
The company said it was “appalled and genuinely sorry,” though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word “gorilla.”
In July 2020, Facebook announced it would form new internal teams to examine whether its algorithms are racially biased. The recent “primates” recommendation “was an algorithmic error on Facebook” and did not reflect the content of the video, a representative told BBC News.
“We disabled the entire topic-recommendation feature as soon as we realised this was happening so we could investigate the cause and prevent this from happening again. As we have said while we have made improvements to our AI, we know it’s not perfect and we have more progress to make,” they continued.
In May 2021, Twitter admitted racial biases in the way its “saliency algorithm” cropped previews of images. Studies have also shown biases in the algorithms powering some facial recognition systems.
Research has repeatedly shown that AI systems are biased more often than we might think. Over the last few years, society has begun to grapple with exactly how much human prejudice can find its way into AI systems. Being aware of these threats and seeking to minimise them is an urgent priority as more and more firms look to deploy AI solutions. Algorithmic bias takes varied forms, such as gender bias, racial prejudice and age discrimination.
First and foremost, however, we need to recognise that AI isn’t perfect. Developing more inclusive algorithms—with the specific goal of removing social bias—is the only way to prevent this from happening again. Until then, AI will continue to make racist mistakes, driven by human error, for years to come.