Meta’s new AI can deceive and beat humans at a classic board game – Screen Shot


After art, music, and politics, it seems like AI is now on a mission to conquer arcade and board games. To date, computers have aced chess, Go, Pong, and Pac-Man. In fact, during the COVID-19 pandemic, AI-powered board games even proved to be a helpful tool for reducing social isolation among older adults.

Now, after building a bot that outbluffs humans at poker, scientists over at Meta have created a program capable of far more complex gameplay, one that can strategise, understand other players’ intentions, and negotiate, or even manipulate, plans with them through text messages.

Meet CICERO, the bot that can play Diplomacy better than humans

Meta has revealed that its latest AI bot, dubbed CICERO, can play the game Diplomacy better than most humans. For those of you who haven’t played, Diplomacy is a 1959 board game set in a stylised version of Europe. Players assume the role of different countries, and their objective is to gain control of territories by making strategic agreements and plans of action.

And this is exactly what makes the innovation both notable and distressing at the same time. “What sets Diplomacy apart is that it involves cooperation, it involves trust, and most importantly, it involves natural language communication and negotiation with other players,” Noam Brown, research scientist at Meta AI, told Popular Science.

Essentially, there are no dice or cards affecting the gameplay in Diplomacy. Instead, your ability to negotiate is what determines your success. The game is hence built on human interaction rather than the moves and manoeuvres that decide a game like chess.

CICERO combines the strategic thinking made possible by the AI that conquered games like chess with the language processing of conversational AI like BlenderBot and LaMDA. Meta AI equipped the bot with a 2.7-billion-parameter language model and trained it on over 40,000 rounds of webDiplomacy.net, a free-to-play web version of the game.

In a bid to cement its status as the “first AI to achieve human-level performance” in the board game, CICERO participated in a five-game league tournament and came in second place out of 19 participants—with double the average score of its opponents.

Meta also roped in “Diplomacy World Champion” Andrew Goff to support its claims. “A lot of human players will soften their approach or they’ll start getting motivated by revenge and CICERO never does that,” the expert shared in a blog post. “It just plays the situation as it sees it. So it’s ruthless in executing its strategy, but it’s not ruthless in a way that annoys or frustrates other players.”

The bot is far from perfect. Maybe that’s a good thing?

During the pandemic, a team of Johns Hopkins undergraduates developed an AI-powered board game styled after Hasbro’s Guess Who? In the real game, each player’s goal is to identify which character their opponent has selected as quickly as possible. In the team’s digital iteration, on the other hand, humans can have a seemingly authentic conversation with their AI opponent, asking questions like “Is this person wearing glasses?”

The biggest technical feat for the student team at the time was training the natural language processing model so that the game would generate human-like responses.

In the case of CICERO, it should be noted that the AI bot is not always upfront about its intentions. Because its early versions were outright deceptive, researchers had to add filters to make it lie less. That being said, CICERO understands that other players may also be deceptive. “Deception exists on a spectrum, and we’re filtering out the most extreme forms of deception, because that’s not helpful,” Brown admitted. “But there are situations where the bot will strategically leave out information.”
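Meta has described this filtering only at a high level, but the core idea can be sketched in miniature: check each candidate message against the moves the bot actually plans to make, and drop any message that promises something else. The following toy illustration assumes made-up function names and data structures; it is not CICERO’s actual pipeline:

```python
# Toy sketch of a deception filter: discard candidate messages that
# promise a move contradicting the plan the bot intends to execute.
# Illustrative only; CICERO's real filter works on neural-model outputs.

def is_consistent(message_promises: dict, planned_moves: dict) -> bool:
    """A message is 'honest' if every move it promises matches the plan."""
    return all(planned_moves.get(unit) == move
               for unit, move in message_promises.items())

def filter_messages(candidates, planned_moves):
    """Keep only candidate messages whose promises match the plan."""
    return [msg for msg, promises in candidates
            if is_consistent(promises, planned_moves)]

# The bot's actual plan for this turn (hypothetical Diplomacy units).
planned = {"A Vienna": "hold", "F Trieste": "move Venice"}

candidates = [
    ("I'll hold Vienna and move my fleet to Venice.",
     {"A Vienna": "hold", "F Trieste": "move Venice"}),
    ("I promise my fleet stays in Trieste.",   # an outright lie
     {"F Trieste": "hold"}),
]

print(filter_messages(candidates, planned))
# Only the first, truthful message survives the filter.
```

Note that a filter like this only blocks explicit contradictions; strategically leaving out information, which Brown says the bot still does, would sail straight through it.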

At the end of the day, however, the bot is far from perfect. As noted by Meta itself, CICERO sometimes generates inconsistent dialogue that can undermine its objectives. In one example shared by the company, the AI, playing as Austria, contradicted its own first message by then asking Italy to move to Venice.

“We’re accounting for the fact that players do not act like machines, they could behave irrationally, they could behave suboptimally. If you want to have AI acting in the real world, that’s necessary to have them understand that humans are going to behave in a human-like way, not in a robot-like way,” Brown said, hoping that Diplomacy can serve as a safe sandbox to advance research in human-AI interaction.

Brown also noted that the techniques underlying the bot are “quite general,” and that he can imagine other engineers building on this research in ways that lead to more useful personal assistants and chatbots.


Meet CLIP Interrogator, the rude AI that bullies people based on their selfies

In 2022, every dawn seems to bring a new controversial AI tool to the internet. Gone are the days when netizens leveraged DALL·E 2 and the racist image-spewing DALL·E mini to generate silly art for Twitter shitposting. Instead, recent months have witnessed these tools being deployed to win legitimate art competitions, replace human photographers in the publishing industry, and even steal original fan art, in a growing series of ethical, copyright, and dystopian nightmares.

Just when we thought the innovations on the AI generator front may have come to a relative standstill, a new tool is now roasting people beyond recovery solely based on their selfies.

Dubbed CLIP Interrogator and created by AI generative artist @pharmapsychotic, the tool essentially helps figure out “what a good prompt might be to create new images like an existing one.” Take the case of the AI thief who ripped off a Genshin Impact fan artist: they took a screenshot of the artist’s work-in-progress during a Twitch livestream, fed it into an online image generator to “complete” it, and uploaded the AI version of the art to Twitter six hours before the original artist finished. The swindler then had the audacity to accuse the artist of theft and demand credit for the creation.

With CLIP Interrogator, the thief could simply upload the ripped screenshot and get back a series of text prompts that would help generate similar art using other text-to-image generators like DALL·E mini. The process is a bit cumbersome, but it opens up a whole new realm of possibilities for AI-powered tools.
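Under the hood, CLIP-style models score how well a piece of text matches an image by comparing their embeddings, and a prompt “interrogator” can then rank candidate phrases by that score. Here is a self-contained toy of that ranking step, with made-up embedding vectors standing in for the ones a real neural network would produce:

```python
import math

# Toy illustration of CLIP-style prompt ranking. A real interrogator
# embeds the image and each candidate phrase with a neural network;
# the three-dimensional vectors below are invented for the example.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

image_embedding = [0.9, 0.1, 0.3]          # pretend image-encoder output
candidate_phrases = {                      # pretend text-encoder outputs
    "oil painting":  [0.8, 0.2, 0.4],
    "anime style":   [0.1, 0.9, 0.2],
    "mirror selfie": [0.85, 0.05, 0.35],
}

# Rank phrases by similarity to the image; the top ones form the prompt.
ranked = sorted(candidate_phrases,
                key=lambda p: cosine(image_embedding, candidate_phrases[p]),
                reverse=True)
print(ranked)  # "mirror selfie" scores highest against this image vector
```

The actual tool runs this kind of matching over enormous banks of artist names, styles, and descriptors, which is also where its stranger and more offensive outputs come from.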

On Twitter, however, people are using CLIP Interrogator to upload their own selfies and get verbally destroyed by a bot. The tool called one user a “beta weak male,” a second “extremely gendered with haunted eyes,” and went on to dub a third “Joe Biden as a transgender woman.” It also seemed to reference porn websites specifically when fed images of women in tank tops. Are we surprised? Not in the least. Disappointed? As usual.

Since I don’t exactly trust an AI with my own selfies (totally not because I can’t handle the blatant roasting or anything), I decided to test the tool by uploading some viral images of public figures. On my list were resident vampire boi Machine Gun Kelly (MGK), his best bud Pete Davidson and, of course, selfie aficionado Kim Kardashian.

After several refreshes and long minutes of “Error: This application is too busy. Keep trying!”, I finally got CLIP Interrogator to generate text prompts based on one of MGK’s infamous mirror selfies. “Non-binary, angst white, Reddit, Discord,” the tool spat.

Meanwhile, the American rapper’s bud Davidson got “Yung lean, criminal mugshot, weirdcore, pitbull, and cursed image,” to name a few. For reference, the picture in question was the shirtless selfie that the Saturday Night Live star took to hit back at Kanye West while dating Kim Kardashian. As for the fashion mogul herself, Kardashian’s viral diamond necklace selfie was described by the AI tool as “inspired by Brenda Chamberlain, wearing a kurta, normal distributions, wig.”

As noted by Futurism, CLIP Interrogator is “built on OpenAI’s Contrastive Language-Image Pre-Training (CLIP) neural network that was released in 2021, and hosted by Hugging Face, which has dedicated some extra resources to deal with the crush of traffic.” As the tool remains over-trafficked, further details are hazy at this point.

All we know for sure is that the roast bot has a long way to go when it comes to biases, especially when used by netizens to comment on their own selfies. And given how Twitter has recorded 320 tweets under the search term ‘CLIP Interrogator’ as of today, it seems like the tool is here to stay for a while.