The idea of machines performing tasks and even exhibiting human-like intelligence, or artificial intelligence (AI), goes back at least to Descartes and Leibniz. However, it really began to take shape after the Second World War, when it was discovered that electronic computers could not only crunch numbers but also manipulate symbols. Since the 1950s, and more than six decades of research and development later, AI is still nowhere near the cognitive abilities of a human child, let alone the reasoning of a fully grown adult.
What has come out of the time spent on the idea, however, is a field split into two categories: Artificial Narrow Intelligence (ANI), which is what we have today, and Artificial General Intelligence (AGI), which is what we ‘hope’ to achieve. What are the differences between the two, and why are we still striving towards AGI?
Today’s machines match human accuracy only in a narrow set of tasks: recognising speech and text, parsing the structure of sentences, mining legal documents and translating stories from one language to another with reasonable accuracy. Outside those tasks, however, today’s AI ranges from mediocre to outright bad. In the pursuit of AGI, what we have actually built is ANI: narrow intelligence, or in plainer terms, a one-track mind, not yet open to all the possibilities. An ANI system will complete, to human accuracy, only the tasks it is programmed to complete.
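To make that narrowness concrete, here is a minimal, purely illustrative sketch in Python; every name and word list in it is invented for this example rather than drawn from any real system. The classifier does its one programmed job passably, but has no way to even represent a task outside it:

```python
# A toy "narrow intelligence": a keyword-based sentiment classifier.
# The word lists below are invented for illustration; no real system
# is this simple, but the failure mode is the same in kind.

POSITIVE = {"good", "great", "excellent", "happy", "love"}
NEGATIVE = {"bad", "awful", "terrible", "sad", "hate"}

def classify_sentiment(text: str) -> str:
    """Label text by counting the only words the system knows about."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "unknown"  # no signal at all outside its one programmed task

# In-domain: the single task it was built for.
print(classify_sentiment("what a great happy day"))  # -> positive
print(classify_sentiment("this film was awful"))     # -> negative

# Out-of-domain: a trivial question for a general intelligence.
print(classify_sentiment("what is two plus two"))    # -> unknown
```

Scaled up enormously, this is still the shape of ANI: capable within the set of tasks it was built for, and blank outside it.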
Joseph Weizenbaum, professor of computer science at MIT and creator of the famous ELIZA program, drew a distinction between computer power and human reason. In today’s terminology, computer power is the ability to run algorithms at incredible speed; this is ANI, which follows the paths of an algorithm.
AGI, on the other hand, will pursue the goal it was programmed for, but it will also self-develop, meaning it can deviate from its set of instructions and follow its own rules. The concept of ‘general intelligence’ as a whole (the word general already implying breadth) refers to the capacity for efficient cross-domain optimisation. Artificial intelligence researcher Ben Goertzel describes it as “the ability to achieve complex goals in complex environments using limited computational resources.”
Human intelligence, which is ultimately what these machines are meant to emulate, is variable: not all humans are equally intelligent. That said, there are several traits a generally intelligent system should possess to mimic a functionally intelligent human, such as common sense, background knowledge and memory, transfer learning, abstraction and causality.
Theoretically, it is possible to replicate the brain functions listed above, but doing so has not proven practicable. According to the mathematician Roger Penrose, human thinking, unlike computing, is not algorithmic. Human thinking and computer thinking are fundamentally different: a computer can make the right decisions in concrete situations, but humans possess the wisdom to see the whole.
In contrast to Penrose, Yuval Noah Harari, the Israeli historian and author of Sapiens and 21 Lessons for the 21st Century, argues that our decisions are not the result of ‘some mysterious will’ but in fact the result of “millions of neurons calculating probabilities within a split second,” which sounds very much like how algorithms function. He writes that because our “emotions and desires are in fact no more than biochemical algorithms, there is no reason why computers cannot decipher these algorithms—and do so far better than any Homo sapiens.”
MIT roboticist Rodney Brooks predicts that AGI won’t appear until at least the year 2300, and has publicly stated that “it is a fraught time understanding the true promise and dangers of AI. Most of what we read in the headlines… is, I believe, completely off the mark.” Richard Sutton, professor of computer science at the University of Alberta, said in a 2017 talk that “understanding human-level AI will be a profound scientific achievement (and economic boon) and may well happen by 2030 (25 per cent chance), or by 2040 (50 per cent chance)—or never (10 per cent chance).” Evidently, the timeline for AGI is an inherently grey area.
What matters most is probably preparation: for robots and AGI of any level of intelligence to succeed in a human world, we must first want to interact with them without fear. AGI will have to reciprocate understanding and emotions correctly. Humans find it difficult enough to interpret emotions within our own species, so for AGI to reach something we ourselves don’t understand, and may never, is a distant prospect. The fact of the matter is: what’s the rush?