The last couple of years have seen both rampant fear and hysterical excitement over the implementation of autonomous and intelligent systems (A/IS) in our everyday lives. While some of Silicon Valley’s resourceful entrepreneurs are busy profiting from the AI frenzy, a group of scientists, academics, and professors has joined forces under the name of The Council on Extended Intelligence (CXI) to find a different, more beneficial approach to autonomous and intelligent technologies. Their goal: to build the basis for A/IS that is not exclusively profit-driven and, most importantly, refuses to accept the machine-versus-human competition narrative that currently dominates public discourse.
According to Joichi Ito, Director of the MIT Media Lab and a member of The Council on Extended Intelligence, the only ones currently benefiting from Artificial Intelligence technologies are those who master them: those who live within what he calls the ‘singularity bubble’, while the rest of us are left outside the conversation as passive users of overwhelmingly powerful technologies.
The central problem with technological singularity—and the main reason why The Council on Extended Intelligence is working to change this narrative—is that it implies a future in which a super-intelligent technology supersedes human reason and becomes a sovereign, threatening entity. Fuelled by this narrative, media coverage of AI and super algorithms has helped inflate the possible consequences of such exponential growth in machine intelligence—AI taking over our jobs, bots outsmarting humans and eventually taking control of the world—in ways previously confined to sci-fi movies.
At the same time, the use of Artificial Intelligence for surveillance and data exploitation has only increased mistrust of this technology and fed the belief that AI is indeed engineered to work against us. “Widespread surveillance, combined with social-engineering techniques, has eroded trust and can ultimately lead to authoritarianism and the proliferation of systems that reinforce systemic biases rather than correct them. The Council is actively working against this paradigm—in which people have no agency over their identity and their data—as being fundamentally at odds with an open and free society,” reads a statement on The Council on Extended Intelligence’s website. From participatory design to the Digital Identity project—which is set to create a Data Policy template for governments and organisations, giving individuals and society the tools to reclaim their digital identity—the CXI is paving the way for a future where people see intelligent machines not as opposites but as an extension of our own capacities.
To counter the dystopian, machine-driven future prophesied by singularity theory, the CXI is opening a debate to promote collaboration between humans and technology, employing participatory design structures to build intelligent and autonomous machines. In other words, the CXI is made up of tech experts and professionals who believe that technology should be developed differently than it has been by governments and private corporations so far.
“Instead of thinking about machine intelligence in terms of humans vs machines, we should consider the system that integrates humans and machines—not Artificial Intelligence but Extended Intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems,” explains Ito in Resisting Reduction: A Manifesto, a call to action and one of the fundamental texts behind the CXI.
The Council on Extended Intelligence is only beginning to promote its philosophical and technological agenda. But as our society delves deeper into the algorithmic age, overwhelmed by the speed at which technology seems to be taking control over every aspect of our lives, we are starting to understand why it is crucial to rethink that model now that its flaws are visible. Shifting the narrative behind it really is the first step to regaining agency over our future—and that is precisely what Ito and his colleagues are trying to do.