

An algorithm is challenging the most intelligent human minds in debate. What happens when it wins?

In 2019, the globally recognised debate champion Harish Natarajan took part in a live debate with a five-and-a-half-foot-tall rectangular computer screen, in front of around 800 people. The topic of discussion? Whether or not preschool should be subsidised. The topic is not really the point here, however; the fact that Natarajan had a heated discussion with a computer system is. That particular algorithm has evolved rather quickly since then, and it is inching closer to engaging in the type of complex human interaction that formal argumentation represents.

For a bit of context: IBM’s Deep Blue was the first computer to defeat a reigning chess champion, Garry Kasparov, in 1997. Fourteen years later, IBM’s Watson defeated the all-star Jeopardy! players Brad Rutter and Ken Jennings. By this point, the idea of intelligent computers was well established, but the tests that got us there were based on clear winner-or-loser outcomes. In other words, the coding behind these technological masterminds followed a defined, binary algorithmic path to victory, which suggested that a system able to handle the nuance of complex conversation with human beings was still out of reach. That is, until (potentially) now.

A study published in Nature shows startling progress within the artificial intelligence (AI) industry, specifically with IBM’s new creation, ‘Project Debater’: an algorithm that is increasing the likelihood that a computer may soon be able to understand and interact with the ‘grey area’ that separates humans from technology.

The study consists of IBM researchers from all over the world reporting on the AI system’s progress. Following on from the 2019 debate with Natarajan, a series of similar tests between Project Debater and three expert human debaters has been recorded, covering nearly 80 different topics and evaluated by 15 members of a virtual audience.

As reported by Scientific American, in “these human-against-machine contests, neither side is allowed access to the Internet. Instead, each is given 15 minutes to ‘collect their thoughts,’ as Christopher P. Sciacca, manager of communications for IBM Research’s global labs, puts it. This means the human debater can take a moment to jot down ideas about a topic at hand, such as subsidised preschool, while Project Debater combs through millions of previously stored newspaper articles and Wikipedia entries, analysing specific sentences and commonalities and disagreements on particular topics. Following the prep time, both sides alternately deliver four-minute speeches, and then each gives a two-minute closing statement.”

Although Project Debater has come a long way, it still hasn’t managed to out-argue the human debaters, though to be fair, neither can most other humans. What is the point of an AI system that can argue, anyway? Well, humans live online, and bots frequently chat to us without our being able to tell whether the other party is human or not. One goal is to make that distinction even harder to draw.

Researcher Chris Reed from the University of Dundee, who isn’t part of the Project Debater team, wrote in a commentary also published in Nature that “More than 50 laboratories worldwide are working on the problem, including teams at all the large software corporations.” In other words, these systems aren’t going anywhere. As Futurism wrote on the topic, to prepare ourselves for what’s to come, “we should all perhaps start thinking about how to choose our battles. Before you get sucked into another online argument, keep in mind it might just be some bot on the other end that will endlessly engage in the fight until you just walk away—or waste hours screaming into the digital void.”

Models of what constitutes a ‘good argument’ are diverse, and a good debate, as Reed puts it, can amount to “little more than formalised intuitions.” His commentary goes on to suggest that the challenge argument-technology systems face is whether to treat arguments as local fragments of discourse shaped by an isolated set of considerations, or to “weave them into the larger tapestry of societal-scale debates.” Reed writes that “this is about engineering the problem to be tackled, rather than engineering the solution.”

In the real, human world, there are no clear boundaries that determine an argument; solutions are more often than not subject to a range of contextual considerations. However, if Project Debater is further developed and proves successful, Reed comments, “Given the wildfires of fake news, the polarisation of public opinion and the ubiquity of lazy reasoning, that ease belies an urgent need for humans to be supported in creating, processing, navigating and sharing complex arguments—support that AI might be able to supply. So although Project Debater tackles a grand challenge that acts mainly as a rallying cry for research, it also represents an advance towards AI that can contribute to human reasoning.”

In essence, AI systems are defined by a machine’s ability to perform a task usually associated with intelligent human beings. Argument and debate are fundamental to the way humans interact with and react to the world around them, and in a human world that has become increasingly parallel in importance to another, the internet, having an intelligent system of interactions and reactions will simply solidify that parallel. Whether that is a good or a bad thing is still open to human debate.


Singularity explained simply: will our technological growth soon become uncontrollable?

In a new essay published in The International Journal of Astrobiology, Joseph Gale from The Hebrew University of Jerusalem and his co-authors raised awareness of what recent advances in artificial intelligence (AI) could mean for the future of humanity and robots. The study focuses specifically on pattern recognition and self-learning, while also suggesting a fundamental shift in superintelligence’s relationship with humans. The futurist Ray Kurzweil predicted that the singularity would occur in 2045, but Gale believes the event may be more imminent, especially with the advent of quantum computing. What exactly is the singularity, and what does it mean for humanity?

What is the singularity?

The term ‘the singularity’ has different definitions depending on who you ask, and it often overlaps with ideas like transhumanism. Broadly speaking, however, the singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically created cognitive capacity far beyond what is currently possible for humans, and should the singularity occur, technology would advance beyond our ability to foresee or control its outcomes. Basically, the singularity will be the moment when the abilities of a computer overtake those of the human brain; it’s a little concerning, I know.

As we know, a human brain is ‘wired’ differently to a computer, and this may be the reason as to why certain tasks are simple for us but challenging for today’s AI. The size of the brain or the number of neurons it contains doesn’t equate to higher intelligence either. For example, whales and elephants have double the number of neurons in their brains compared to humans, and yet, they are not more intelligent than us.

When the singularity occurs (and whether it does should come down to if and when we let it, given our current power over the situation), the human race may very well begin its decline. As the theoretical physicist Stephen Hawking once told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Hawking’s warning was informed by the technology he used to communicate, made necessary by the motor neuron disease he lived with, which involved a basic form of AI. According to Kurzweil’s book The Singularity Is Near, humans may soon be fully replaced by AI, or by some hybrid of humans and machines.

American writer Lev Grossman explained this prospect in Time magazine: “Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a superintelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks…”

Futurism posed an interesting thought experiment on the leap from ‘supercomputers to superintelligence’, proposing that we ask our elders whether they ever dared think that one day in the future (meaning now), everyone would be posting and sharing images and information about one another on a social network called Facebook. Or whether they ever imagined that they would be able to receive answers to any and every question from a mysterious entity called Google. Chances are they would answer negatively, and who could blame them?

The thing is that very few would have imagined the future that is now, even those who assumed certain technologies would become widespread or would fundamentally change society. But here we are, and what we now imagine of our own futures may turn out to be an exaggerated version of those ideas, or nothing like them at all.

Changes of any kind always look dramatic in hindsight, and that is most definitely the case with technology. These sorts of dramatic shifts in thinking are what is called a singularity. The term originally derives from mathematics, where it describes a point whose exact properties we are incapable of deciphering, where the equations break down and make no sense. Now the term describes a point that could completely change the way we view, as well as function as, human beings.

Singularity and AI regeneration

If the singularity does approach, AI will essentially improve itself once it learns how to, and will do so over and over again without our help. Humans will remain biological machines, but if this superintelligent AI were kept on a tight leash, we could still use it to our advantage, harnessing its advances to explore and discover the wonders of our world that we haven’t yet been able to uncover, and beyond.

Truthfully, a singularity of some kind is most definitely due to arrive; in some domains, such as gaming and professional fields like healthcare, it arguably already has. That being said, some humans may struggle with the reality of such a time arriving, and some may ignore it altogether (while still using a mobile phone or calculator, ignorantly). While both of these approaches will most definitely fall disastrously behind, others will realise that the path ahead relies on ever-closer collaboration between humankind and computers. I argue that the dawn of the singularity is here, possibly that it arrived decades ago, and that only in hindsight will we recognise this point in time as dramatic.