
AI gone rogue: chess-playing robot breaks 7-year-old opponent’s finger in tournament

In June 2022, Google engineer Blake Lemoine claimed that the company’s artificial intelligence chatbot had become sentient and even had preferred pronouns. “I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an interview conducted by Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

While Google fired the engineer for “violating” its security policies, a new incident of AI gone rogue is making headlines worldwide, this time involving a game of strategic thinking and intellectual concentration that should leave no room for violence.

According to several Russian media outlets, a chess-playing robot abruptly grabbed and snapped the finger of a seven-year-old boy during a match at the Moscow Open.

“The robot broke the child’s finger,” Sergey Lazarev, president of the Moscow Chess Federation, told TASS. “This is of course bad.” In CCTV footage that has since gone viral on Twitter, the chess-playing robot can be seen making its move and depositing a captured piece into a basket before seemingly glitching and pinching the boy’s finger for several seconds. A woman, followed by three men, then rushes in, prying the robot off the child and ushering him away.

According to officials, the seven-year-old returned to the tournament the next day and finished his matches with a cast around his finger. They also seemed to place part of the blame for the incident on the boy himself. “The child made a move, and after that we need to give time for the robot to answer, but the boy hurried, and the robot grabbed him,” Lazarev explained. “We have nothing to do with the robot. It was rented by us [and] has been exhibited in many places for a long time with specialists.”

In an interview with RIA Novosti, Russian Chess Federation vice president Sergey Smagin further termed the incident “a coincidence” and went on to note that the robot was “absolutely safe.”

“It has performed at many opens. Apparently, children need to be warned. It happens,” he said. While talking to Russian news outlet Baza, Smagin added: “There are certain safety rules and the child, apparently, violated them. When he made his move, he did not realise he first had to wait. This is an extremely rare case, the first I can recall.”

Baza also noted that the robot “grabbed the boy’s index finger and squeezed it hard”—at the time, the bot was playing a match against three children at once. The outlet further named the boy as Christopher and said he was one of the 30 best chess players in the Russian capital in the under-nines category.

While the child’s parents reportedly “want to contact the prosecutor’s office,” Smagin mentioned that there was no talk of banning the robot. Instead, he and Lazarev suggested its operators look into updated safety measures. “It will be necessary to analyse why this happened,” he said. “The robot has a very talented inventor. It may be necessary to install an additional protection system.”

Meanwhile, several users on Twitter have questioned why a chess-playing robot was equipped with enough industrial force to snap someone’s finger in the first place. “Why does the robot arm have enough strength to break a finger, when it only needs strength enough to lift a chess piece? Is it a standard industrial robot arm?” one wrote.

“They didn’t bother to design a specialised robot. They took an industrial robot and plugged it into their chess program,” another observed. The incident has also raised concerns about child safety when it comes to AI-powered devices, especially since some countries are even planning to introduce pint-sized robots in preschools to prepare children for the AI age.

New study proves that robots can show empathy… towards other robots

If you thought robots would forever remain heartless, today is the day I tell you to start rethinking things. A robot at the Columbia University School of Engineering has displayed a glimmer of empathy towards a partner robot during a recent study, learning to predict its partner’s future actions solely by observing it. And if it can do it once, what’s to stop it from doing it again? So, how is this possible, and what does it truly mean for the future of robotics?

The study, which is being led by mechanical engineering Professor Hod Lipson and colleagues Carl Vondrick and Boyuan Chen, is part of a broader effort to equip robots with the ability to understand as well as anticipate the goals of other robots, purely from visual observations.

“Our findings begin to demonstrate how robots can see the world from another robot’s perspective,” Chen said in a statement. The engineers believe that this kind of behaviour anticipation and prediction is an essential cognitive ability that will eventually allow machines to understand and coordinate with surrounding agents, even without being explicitly programmed to do so.

Most studies of this kind of machine behaviour modelling rely on symbolic or curated sensory inputs, as well as built-in knowledge relevant to a certain task. For this experiment, the engineers proposed that an observer (the non-acting robot) could model the behaviour of the acting robot simply by watching its actions.

To test this hypothesis, a non-verbal, non-symbolic experiment was set up. The display of robotic understanding took place in a playpen about 3 x 2 feet in size. One robot, with its partner robot watching, was programmed to find and move toward any green circle visible to its camera. Sometimes the robot could see a green circle directly; other times the circle would be blocked by a tall red box, making the robot either move toward a different green circle in sight or simply not move at all.
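
The paper does not publish the actor robot’s control code, but the behaviour it describes is simple enough to sketch. Purely as illustration, and with every name below hypothetical, a minimal Python version of a “move toward any visible green circle, otherwise stay put” policy might look like this:

```python
import numpy as np

def find_green_circle(frame: np.ndarray) -> tuple | None:
    """Return the (x, y) pixel centre of a visible green circle, or None.

    A crude colour-threshold detector: pixels that are strongly green and
    weakly red/blue are treated as part of a circle. This is a hypothetical
    stand-in for whatever vision pipeline the real robot used.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    mask = (g > 150) & (r < 100) & (b < 100)
    if not mask.any():
        return None  # circle hidden, e.g. behind the tall red box
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def act(frame: np.ndarray, position: np.ndarray, step: float = 1.0) -> np.ndarray:
    """One control step: move toward the nearest visible green circle.

    If no circle is visible from this vantage point, the robot stays put --
    exactly the behaviour the observer robot has to infer from the outside.
    """
    target = find_green_circle(frame)
    if target is None:
        return position  # nothing visible: don't move
    direction = np.array(target) - position
    distance = np.linalg.norm(direction)
    if distance < step:
        return np.array(target, dtype=float)  # arrived at the circle
    return position + step * direction / distance  # unit step toward it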

After the partner robot observed its pal in the playpen move around for a couple of hours, it could then anticipate and predict what the other would do with 98 per cent accuracy in varying situations. “The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy,” Chen explained.

To put it simply: one robot watches everything the other robot does, and from that observation alone it develops an understanding of whether the robot in the pen can see a green circle to move towards. That, in turn, lets it predict what its partner will do next, simply by watching it.
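
The study’s observer network takes in raw images of the scene and predicts the actor’s future behaviour. The sketch below is a heavily simplified, hypothetical stand-in (the class, names and training setup are mine, not the paper’s) that reduces the problem to classifying which of a few coarse outcomes the actor will choose, rather than predicting its full trajectory:

```python
import torch
import torch.nn as nn

class BehaviourPredictor(nn.Module):
    """Observer network: raw image of the playpen in, predicted action out.

    A deliberately tiny stand-in for the visual-prediction model in the
    Columbia study. Here the output is a choice between three coarse
    outcomes (move to the left circle, move to the right circle, stay put).
    """

    def __init__(self, num_outcomes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> 32 features
        )
        self.head = nn.Linear(32, num_outcomes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)
        return self.head(x)  # logits over the possible outcomes

# Training sketch: each example pairs an observer-camera frame with the
# outcome the acting robot actually produced a moment later.
model = BehaviourPredictor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, outcomes: torch.Tensor) -> float:
    optimiser.zero_grad()
    loss = loss_fn(model(frames), outcomes)  # did we predict what it did?
    loss.backward()
    optimiser.step()
    return loss.item()
```

A couple of hours of watching, as described above, would amount to thousands of such (frame, outcome) pairs, which is how the observer can end up anticipating its partner with the reported 98 per cent accuracy.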

Lipson, Vondrick and Chen suggest that this approach may be a precursor to what is called ‘Theory of Mind’, one of the hallmarks of primate social cognition. Pinpointing and replicating this ability is fundamental to creating true artificial intelligence.

What is Theory of Mind exactly, and what is its importance within robotic engineering?

According to the study, a human child at around the age of three begins to recognise that other humans may have a different worldview from their own. The child “will learn that a toy can be hidden from a caretaker who is not present in the room, or that other people may not share the same desires and plans as they do.” This ability to recognise that different agents have different mental states, goals and plans is often referred to as Theory of Mind, or ‘ToM’.

The origins of ToM are difficult to pinpoint because cognitive processes leave no fossil record, so researchers can only speculate about how the ability arose. ToM experiments are also notoriously difficult to carry out, as it is challenging for a researcher to query the state of mind of the observer in order to determine whether they truly understand an actor’s mental state and plans. By contrast, the study states that “In older children and adults, the state of mind of the observer can be queried directly by formulating a verbal question about the actor, such as ‘Tell me where Sally will look for the ball?’”

Understandably, the quest to obtain empathetic responses from robots remains somewhat of a grey area for now, and while the behaviours of the robots in this particular study are evidently far less sophisticated than those of humans, researchers still believe they could be the beginning of granting future robots a ToM; in other words, AI that understands enough to predict, and therefore to truly think. Okay, so they may still be heartless physically, but aren’t our hearts in our heads anyway?