The last couple of years have seen both rampant fear and hysterical excitement over the implementation of autonomous and intelligent systems (A/IS) in our everyday lives. While some of Silicon Valley’s resourceful entrepreneurs are busy profiting from the AI boom, a group of scientists, academics, and professors have joined forces under the name of The Council on Extended Intelligence (CXI) to find a different, more beneficial approach to autonomous and intelligent technologies. Their goal: to build the basis for A/IS that is not exclusively profit-driven and, most importantly, refuses to accept the machine-versus-human competition narrative that currently dominates.
According to Joichi Ito, Director of the MIT Media Lab and a member of The Council on Extended Intelligence, the only ones currently benefiting from Artificial Intelligence technologies are those who master them—those who live within what he calls the ‘singularity bubble’—while the rest of us are left outside the conversation as passive users of overwhelmingly powerful technologies.
The central problem with technological singularity—and the main reason why The Council on Extended Intelligence is working to change this narrative—is that it implies a future where a super-intelligent technology supersedes human reason and becomes a sovereign and threatening entity. Fuelled by this narrative, media coverage of AI and super algorithms has helped inflate the possible consequences of such exponential growth in machine intelligence—AI taking over our jobs, bots outsmarting humans and eventually taking control of the world—in ways that had previously been predicted only in sci-fi movies.
At the same time, the use of Artificial Intelligence for surveillance and data exploitation has only increased mistrust towards this technology and the belief that AI is indeed engineered to work against us. “Widespread surveillance, combined with social-engineering techniques, has eroded trust and can ultimately lead to authoritarianism and the proliferation of systems that reinforce systemic biases rather than correct them. The Council is actively working against this paradigm—in which people have no agency over their identity and their data—as being fundamentally at odds with an open and free society,” reads a text on The Council on Extended Intelligence’s website. From participatory design to the Digital Identity project, which is set to create a Data Policy template for governments and organisations to give individuals and society the tools to reclaim their digital identity, the CXI is paving the way for a future where people see intelligent machines not as opposites but as an extension of our own capacities.
To counter the dystopian machine-driven future prophesied by singularity theory, the CXI is opening a debate to promote collaboration between humans and technology by employing participatory design structures in the building of intelligent and autonomous machines. In other words, the CXI is made up of tech experts and professionals who believe that technology should be developed differently than it has been by governments and private corporations so far.
“Instead of thinking about machine intelligence in terms of humans vs machines, we should consider the system that integrates humans and machines—not Artificial Intelligence but Extended Intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems,” explains Ito in Resisting Reduction: A Manifesto, a call to action and one of the fundamental texts behind the CXI.
The Council on Extended Intelligence is only beginning to promote its philosophical and technological agenda. But as our society delves deeper into the algorithmic age, overwhelmed by the speed at which technology seems to be taking control of every aspect of our lives, it becomes clear why it is crucial to rethink the model now that we can see its flaws. Shifting the narrative behind it really is the first step to regaining agency over our future—and that is precisely what Ito and his fellow colleagues are trying to do.
China has just undergone its annual Spring Festival, known around the world as the biggest human migration on the planet, with nearly 3 billion passenger trips made across the country between the end of January and the beginning of March. For the special occasion, the Chinese government provided police officers in megacities such as Zhengzhou with a brand new AI device that is meant to enable the recognition of wanted criminals in as little as 100 milliseconds.
As the travel rush for the Lunar New Year fills the nation’s train stations, officers have been wearing facial recognition sunglasses, the GLXSS ME, an AI appliance that enables the police to track suspects even in the most crowded of locations. According to a report published in the Wall Street Journal, during the testing period of this technology the Chinese police were able to capture seven suspects, as well as 26 individuals who were travelling under false identities.
Produced by the Beijing-based company LLVision Technology Co, which employs former engineers from Google, Microsoft, Intel, and China Aeronautics, these glasses signal the next step in government surveillance. “GLXSS Force has been put into combat service. Many successful results are reported, such as seized suspects and criminal vehicles,” reads the LLVision website. The ‘mobile surveillance device’, as the company calls these special specs, has certainly proven successful in tracking suspects—but what about the unauthorised profiling of other citizens?
The specs are the most recent device introduced into China’s expanding AI-based social surveillance agenda, which is becoming particularly committed to facial recognition technologies that target citizens. In recent years, China has invested millions in the development of tracking technologies, the most obvious and striking example being its Social Credit System, a points system that gives individuals a score out of 800-900 for behaving as ‘good’ or ‘bad’ citizens. With over 200 million CCTV cameras and rigid biometric surveillance, people’s movements and actions are under the constant watch of AI devices whose presence is increasingly ubiquitous and government-owned.
The technology behind GLXSS ME is not fundamentally different from that of CCTV cameras, but it is more refined: CCTV cannot reach and follow suspects everywhere, the images are often blurry, and by the time targets are identified they may have already moved out of the camera’s field of vision. But “by making wearable glasses, with AI on the front end, you get instant and accurate feedback. You can decide right away what the next interaction is going to be,” Wu Fei, the company’s chief executive, told the WSJ in an interview.
The smart sunglasses embody the intensification of state surveillance pursued by the Chinese government in collaboration with facial-recognition companies such as LLVision, and show how easily the technology can fall into the wrong hands. Make no mistake here: the increased ‘safety’ of civilians comes at the very high cost of everyone’s privacy. In China’s approach, making it harder to get away with criminal activity is directly tied to pervasive, day-to-day surveillance on the ground.
China’s serious tilt towards using facial recognition technology for security and surveillance purposes comes as no surprise, but this new product certainly adds a darker twist to the state of policing already active in the country. And although China is steps ahead in the AI race compared to Europe or even the U.S., every time a device designed to police citizens is deployed by a government, everyone’s privacy rights become more vulnerable. That is definitely the case with GLXSS ME.