If you thought robots would forever remain heartless, today is the day I tell you to start rethinking things. A robot at the Columbia University School of Engineering has displayed a glimmer of empathy towards a partner robot during a recent study, and has now learned to predict its partner robot’s future actions solely by observing it. And if it can do it once, what’s to stop it doing it again? So, how is this possible, and what does it truly mean for the future of robotics?
The study, led by mechanical engineering Professor Hod Lipson with colleagues Carl Vondrick and Boyuan Chen, is part of a broader effort to equip robots with the ability to understand, and anticipate, the goals of other robots purely from visual observations.
“Our findings begin to demonstrate how robots can see the world from another robot’s perspective,” Chen said in a statement. The engineers believe that this kind of behaviour anticipation and prediction is an essential cognitive ability, one that will eventually allow machines to understand and coordinate with surrounding agents without that understanding having to be explicitly programmed in.
Most prior studies of machine behaviour modelling rely on symbolic or hand-selected sensory inputs, plus built-in knowledge relevant to a specific task. For this experiment, the engineers proposed that an observer (the non-acting robot) could model the behaviour of the acting robot by watching its actions alone.
To test this hypothesis, the team set up a non-verbal, non-symbolic experiment. The display of robotic understanding took place in a playpen about 3 x 2 feet in size. One robot, with its partner watching, was programmed to find and move toward any green circle it could see through its camera. Sometimes the robot could see a green circle directly; other times the circle would be hidden behind a tall red box, making the robot head for a different green circle in sight, or simply not move at all.
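For the technically curious, here is a minimal sketch of the kind of hard-coded policy the acting robot follows, reduced to a toy 2-D model of the pen. The real robot works directly from its camera feed rather than from coordinates, and every name here (Point, line_blocked, choose_move) is hypothetical, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def line_blocked(robot: Point, target: Point, box_min: Point, box_max: Point) -> bool:
    """Crude occlusion test: does the midpoint of the sight line fall inside the box?
    (A real system would ray-cast; the study's robot simply uses its camera image.)"""
    mid_x = (robot.x + target.x) / 2
    mid_y = (robot.y + target.y) / 2
    return box_min.x <= mid_x <= box_max.x and box_min.y <= mid_y <= box_max.y

def choose_move(robot: Point, circles: list[Point], box_min: Point, box_max: Point):
    """Head for the first green circle the robot can 'see'; stay put if none is visible."""
    for circle in circles:
        if not line_blocked(robot, circle, box_min, box_max):
            return circle  # move toward this visible circle
    return None            # every circle is occluded: do not move
```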
After the partner robot had observed its pal move around the playpen for a couple of hours, it could anticipate and predict what the other would do with 98 per cent accuracy across varying situations. “The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy,” Chen explained.
Put simply: the observer robot watches the whole scene from the outside, works out from that view alone whether the robot in the pen can see a green circle to move towards, and from there predicts what its partner will do next, just by watching it.
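To make that concrete, here is a minimal sketch of what the observer might learn, assuming we simplify the task to: given one overhead frame of the pen, classify the actor’s next move (towards circle 0, towards circle 1, or stay put). The published system predicts richer visual outputs than this; the PyTorch classifier below is only illustrative, and names like FramePredictor are hypothetical.

```python
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    """Tiny CNN: one 64x64 RGB overhead frame in, logits over possible moves out."""
    def __init__(self, num_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),   # 64x64 -> 30x30
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),  # 30x30 -> 13x13
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, 64), nn.ReLU(),
            nn.Linear(64, num_actions),                             # logits over moves
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)

model = FramePredictor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, actions: torch.Tensor) -> float:
    """One gradient step on a batch of (frame, observed move) pairs collected
    while watching the actor; a couple of hours of footage yields thousands."""
    optimiser.zero_grad()
    loss = loss_fn(model(frames), actions)
    loss.backward()
    optimiser.step()
    return loss.item()
```

The point of the toy version is the same as in the study: nothing about circles, boxes or occlusion is programmed into the observer, which must recover all of it from raw pixels and the actor’s eventual moves.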
Lipson, Vondrick and Chen suggest that this approach may be a precursor to what is called ‘Theory of Mind’, one of the hallmarks of primate social cognition. Pinpointing and replicating it is fundamental to creating true artificial intelligence.
According to the study, a human child at about the age of three begins to recognise that other humans may see the world differently. The child “will learn that a toy can be hidden from a caretaker who is not present in the room, or that other people may not share the same desires and plans as they do.” This ability to recognise that different agents have different mental states, goals and plans is often referred to as Theory of Mind, or ‘ToM’.
The origins of ToM are difficult to pinpoint because cognitive processes leave no fossil record; in that sense, asking when it first emerged is a bit like asking about the origins of consciousness. ToM experiments are also notoriously difficult to carry out, as it is challenging for a researcher to query the state of mind of the observer in order to determine whether it truly understands an actor’s mental state and plans. With humans, by contrast, the study notes that “In older children and adults, the state of mind of the observer can be queried directly by formulating a verbal question about the actor, such as ‘Tell me where Sally will look for the ball?’”
Understandably, the quest to obtain empathetic responses from robots remains something of a grey area for now, and while the behaviours of the robots in this particular study are evidently far less sophisticated than human ones, the researchers still believe they could be the beginning of granting future robots a Theory of Mind. Or, put another way, AI that understands enough to predict, and therefore, perhaps, truly think. Okay, so they may still be physically heartless, but aren’t our hearts in our heads anyway?