If you thought robots would forever remain heartless, today is the day I tell you to start rethinking things. A robot at the Columbia University School of Engineering has displayed a glimmer of empathy towards a partner robot during a recent study, and has now learned to predict its partner’s future actions solely by observing it. And if it can do it once, what’s to stop it from doing it again? So, how is this possible, and what does it truly mean for the future of robotics?
The study, led by mechanical engineering professor Hod Lipson with colleagues Carl Vondrick and Boyuan Chen, is part of a broader effort to equip robots with the ability to understand and anticipate the goals of other robots purely from visual observations.
Chen said in a statement that “Our findings begin to demonstrate how robots can see the world from another robot’s perspective.” The engineers believe that this kind of behaviour anticipation and prediction is an essential cognitive ability, one that will eventually allow machines to understand and coordinate with the agents around them without having to be explicitly programmed for every scenario.
Most studies of machine behaviour modelling of this kind rely on symbolic or curated sensory inputs, as well as built-in knowledge relevant to a specific task. For this experiment, the engineers proposed instead that an observer (the non-acting robot) could model the behaviour of the acting robot simply by watching its actions.
To test this hypothesis, a non-verbal, non-symbolic experiment was set up. The display of robotic understanding took place in a playpen about 3 x 2 feet in size. One robot, with its partner watching, was programmed to find and move toward any green circle it could see through its camera. Sometimes the robot could see a green circle directly; at other times the circle would be blocked by a tall red box, in which case the robot would either move toward a different green circle in sight or simply not move at all.
After the partner robot had watched its pal move around the playpen for a couple of hours, it could anticipate and predict what the other would do with 98 per cent accuracy in varying situations. “The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy,” Chen explained.
To put it simply once more: one robot watches everything the other robot does, and through that observation alone it works out whether the robot in the pen can see a green circle to move towards or not. From that, it can predict what its partner will do next, simply by watching it.
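For the technically curious, the logic the observer has to internalise can be spelled out by hand. To be clear, this is not the team’s method: their observer was a neural network trained end-to-end on hours of raw overhead video, with no hand-coded geometry. The minimal Python sketch below (the names `Box` and `predict_action` are my own, purely illustrative) just makes the inference explicit: check whether the red box blocks the actor’s line of sight to each green circle, then predict a move toward the nearest visible circle, or no move at all.

```python
# Illustrative sketch only -- the study's observer *learned* this behaviour
# from raw video; nothing below reflects the researchers' actual code.

from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned rectangle standing in for the tall red box."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def blocks(self, a, b, steps=100):
        """Crude line-of-sight test: sample points along the segment from
        a to b and report whether any of them fall inside the box."""
        (ax, ay), (bx, by) = a, b
        for i in range(steps + 1):
            t = i / steps
            x, y = ax + t * (bx - ax), ay + t * (by - ay)
            if self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max:
                return True
        return False

def predict_action(actor_pos, circles, red_box):
    """Predict the actor's move: head for the nearest *visible* green
    circle, or stay put if the red box hides every circle."""
    visible = [c for c in circles if not red_box.blocks(actor_pos, c)]
    if not visible:
        return "stay"
    target = min(visible, key=lambda c: (c[0] - actor_pos[0]) ** 2
                                        + (c[1] - actor_pos[1]) ** 2)
    return f"move toward circle at {target}"

# One circle hidden behind the box, one in plain sight:
box = Box(x_min=0.4, y_min=0.0, x_max=0.6, y_max=0.5)
print(predict_action((0.1, 0.25), [(0.9, 0.25), (0.2, 0.8)], box))
# -> move toward circle at (0.2, 0.8)
```

The remarkable part of the study is that the observer robot was never given anything like the rules above; it had to discover the equivalent of this visibility-and-goal reasoning for itself, purely from pixels.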
Lipson, Vondrick and Chen suggest that this approach may be a precursor to what is called ‘Theory of Mind’, one of the hallmarks of primate social cognition. Pinpointing and replicating this ability is seen as fundamental to creating true artificial intelligence.
According to the study, a human child at around the age of three begins to recognise that other humans may have a worldview that differs from their own. The child “will learn that a toy can be hidden from a caretaker who is not present in the room, or that other people may not share the same desires and plans as they do.” This ability to recognise that different agents have different mental states, goals and plans is often referred to as Theory of Mind, or ‘ToM’.
The origins of ToM are difficult to pinpoint because cognitive processes leave no fossil record; much like the origins of consciousness, they can only be speculated about. ToM experiments are also notoriously difficult to carry out, as it is challenging for a researcher to query the state of mind of the observer in order to determine whether it truly understands an actor’s mental state and plans. As the study notes, “In older children and adults, the state of mind of the observer can be queried directly by formulating a verbal question about the actor, such as ‘Tell me where Sally will look for the ball?’”
Understandably, the quest to obtain empathetic responses from robots is something of a gray area for now, and while the behaviours of the robots in this particular study are evidently far less sophisticated than a human’s, the researchers believe they could be the beginning of granting future robots a ToM: AI that understands enough to predict, and therefore to truly think. Okay, so they may still be physically heartless, but aren’t our hearts in our heads anyway?
In a new essay published in The International Journal of Astrobiology, Joseph Gale from The Hebrew University of Jerusalem and his co-authors raised awareness of what recent advances in artificial intelligence (AI) could mean for the future of humanity and robots. The essay focuses specifically on pattern recognition and self-learning, while also anticipating a fundamental shift in the relationship between superintelligence and humans. The futurist Ray Kurzweil predicted that the singularity would occur in 2045, but Gale believes the event may be more imminent, especially with the advent of quantum computing. So what exactly is the singularity, and what does it mean for humanity?
The term ‘the singularity’ has different definitions depending on who you ask, and it often overlaps with ideas like transhumanism. Broadly speaking, though, the singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically created cognitive capacity far beyond what is currently possible for humans, and should the singularity occur, technology will advance beyond our ability to foresee or control its outcomes. Basically, the singularity will be the moment when the abilities of a computer overtake the abilities of the human brain. It’s a little concerning, I know.
As we know, a human brain is ‘wired’ differently to a computer, and this may be why certain tasks are simple for us but challenging for today’s AI. Nor does the size of a brain, or the number of neurons it contains, equate to higher intelligence. Whales and elephants, for example, have more neurons in their brains than humans do, and yet they are not more intelligent than us.
When the singularity occurs, which, given the power we currently hold over the technology, should come down to if and when we let it, the human race may very well begin its decline. As theoretical physicist Stephen Hawking once predicted when speaking to the BBC, “The development of full artificial intelligence could spell the end of the human race.”
Hawking came to this conclusion partly through the technology he used to communicate, which involved a basic form of AI and which he relied on because of the motor neuron disease he lived with. According to Kurzweil’s book The Singularity Is Near, humans may soon be fully replaced by AI, or by some hybrid form of humans and machines.
American writer Lev Grossman explained this prospect in Time magazine by saying that “Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a superintelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks…”
Future posed an interesting thought experiment on the leap from ‘supercomputers to superintelligence’, proposing that we ask our elders whether they ever dared to think that one day in the future (meaning now) everyone would be posting and sharing images and information about one another on a social network called Facebook. Or that they would soon be able to receive answers to any and every question from a mysterious entity called Google. Chances are they would answer negatively, and who could blame them?
The thing is that very few people would have imagined the future that is now, even those who assumed certain technologies would become widespread or fundamentally change society. But here we are, and whatever we now imagine of our own futures may turn out to be an exaggerated version of those ideas, or nothing like them at all.
In hindsight, changes of any kind always prove dramatic, and this is most definitely the case with technology. These sorts of dramatic shifts in thinking are what we call a singularity. The term originally derives from mathematics, where it describes a point whose exact properties we are incapable of deciphering, where the equations simply stop making sense. Today it also describes a point in time that could completely change the way we view ourselves, and function, as human beings.
With the singularity potentially approaching, AI will essentially improve itself once it learns how to, and it will do so over and over again without our help. Humans will remain biological machines, but if this superintelligent AI were kept on a tight leash, we could still use it to our advantage, harnessing the advances it produces to expose and discover the wonders we haven’t yet been able to uncover in our world, and beyond.
Truthfully, a singularity of some kind is most definitely due to arrive; arguably, it already has within the gaming world and in professional fields like health care. That being said, some people may struggle with the reality of such a time arriving, and some may ignore it altogether (while still, ignorantly, using a mobile phone or calculator). Both of these approaches will leave them disastrously behind, while others will realise that the path ahead relies on ever-increasing collaboration between humankind and computers. I argue that the dawn of the singularity is here, that it possibly arrived decades ago, and that only in hindsight will we recognise this point in time as dramatic.