China has just undergone its annual Spring Festival, whose travel rush is known around the world as the biggest human migration on the planet, with nearly 3 billion passenger trips made across the country between the end of January and the beginning of March. For the occasion, the Chinese government provided police officers in megacities such as Zhengzhou with a brand new AI device meant to identify wanted criminals in as little as 100 milliseconds.
As the travel rush for the Lunar New Year fills the nation's train stations, officers have been wearing facial recognition sunglasses, the GLXSS ME, an AI appliance that enables the police to track suspects even in the most crowded of locations. According to a report published in the Wall Street Journal, during the testing period of this technology the Chinese police were able to capture seven suspects and 26 individuals travelling under false identities.
Produced by the Beijing-based company LLVision Technology Co, which employs former engineers from Google, Microsoft, Intel, and China Aeronautics, these glasses signal the next step in government surveillance. "GLXSS Force has been put into combat service. Many successful results are reported, such as seized suspects and criminal vehicles," reads the LLVision website. The mobile surveillance device, as the company calls these special specs, has certainly proven successful in tracking suspects, but what about the unauthorised profiling of other citizens?
The specs are the most recent addition to China's expanding AI-based social surveillance agenda, which is becoming particularly committed to facial recognition technologies that target citizens. In recent years, China has been investing millions in the development of tracking technologies, the most striking example being its Social Credit System, a points system that gives individuals a score out of 800-900 for behaving as 'good' or 'bad' citizens. With over 200 million CCTV cameras and rigid biometric surveillance, people's movements and actions have been under the constant watch of AI devices, whose presence is becoming increasingly ubiquitous and government-owned.
The technology behind GLXSS ME is not fundamentally different from that of CCTV cameras, but it is refined: CCTV cannot reach and follow suspects everywhere, the images are blurry, and by the time targets are identified they have often already moved out of the camera's field of vision. As Wu Fei, the company's chief executive, told the WSJ in an interview: "By making wearable glasses, with AI on the front end, you get instant and accurate feedback. You can decide right away what the next interaction is going to be."
The smart sunglasses embody the intensification of state surveillance pursued by the Chinese government in collaboration with facial-recognition companies such as LLVision, and show how easily the technology can fall into the wrong hands. Make no mistake here: the increasing 'safety' of civilians comes at the very high cost of everyone's privacy. In China's approach, making it harder to get away with criminal activity goes hand in hand with day-to-day surveillance on the ground.
China’s serious tilt towards using facial recognition technology for security and surveillance purposes comes as no surprise, but this new product certainly adds a darker twist to the state of policing already active in the country. And although China is steps ahead in the AI race compared to Europe or even the U.S., every time a device designed to police citizens gets used by a government, everyone’s privacy rights become more vulnerable. And that is definitely the case with GLXSS ME.
For many years, women living in Saudi Arabia have had to contend with guardianship: essentially, the men in their lives hold complete control over where they go, where they study, and whether they are even able to drive or move around freely. That control has often been contingent on reams of paperwork, enforced by the government through an intensely bureaucratic system. Now, an app called Absher, created by the Saudi government, digitises that process. Among its other services, Absher alerts a male guardian to the whereabouts of the women in his charge, notifying him whenever they leave or enter the country (supposedly without his permission), and gives him the power to revoke their ability to travel.
Women in Saudi Arabia, as well as feminists internationally, have called on Google and Apple to remove the app from their app stores ever since the issue first gained press coverage. But the app essentially digitises a system of oppression and control that already exists in real life. For women in Saudi Arabia, it just meant that a request to leave the country or travel somewhere without a chaperone could be refused faster, rather than getting lost in government bureaucracy or refused a few months later. This is also part of a larger question: how much do Apple and Google know about the apps on their app stores?
This is not the first time that an app with dangerous real-world consequences has been brought to the attention of large technology companies. On one hand, there is the problem of malicious apps: malware disguised as versions of popular apps such as Tinder, or hidden in third-party app stores. These are dangerous in a different way than Absher. Such apps might steal users' credit card information, or use geotagging to find out where you are, but they cannot change anything about where you can and cannot go. Security experts have spoken about this before, but it is a separate issue from what is happening with an app like Absher, which is, for all intents and purposes, legal.
In this case, however, the problem may have arisen because it is not immediately obvious that Absher enables this kind of control over women. Absher also hosts a variety of other services, such as passport checks and document scanning, so it may not have initially been apparent to moderators that the app would be used in this way. As a New York Times article documented in January, women are trying to leave Saudi Arabia in greater numbers than before, partially enabled by technology. Some were able to use websites and WhatsApp groups to coordinate with other women; some were even able to open Absher on their male relatives' phones and change the settings to let themselves travel and escape to safety.
On a larger scale, apps like Absher proliferate because they are not technically illegal. What Absher does violate is international human rights law, but so does the government that created and operates it. Trying to remove Absher would therefore potentially cause a firestorm, particularly given the relationship between Saudi Arabia and Silicon Valley, while doing little to change a fundamentally broken system. If large international bodies and other groups have not been able to alter the misogynistic system of guardianship, removing the app from the app store is unlikely to do so.
These problems arise because the process of developing an app and putting it on an app store, for both Apple and Google, is fairly straightforward. Google, in comparison to Apple, which has a stricter approval process, has also come under fire for the apps it lets proliferate on Google Play. A report from WIRED UK found that supposedly child-friendly apps being sold on Google Play were anything but. Content moderation on the app store is a problem Google has had to reckon with, but has done precious little about. The Absher case is also different because the app was created by the government of Saudi Arabia, which makes it harder to take down than simply confronting an app developer.
But Apple and Google do have the ability to intervene and remove apps from their app stores when it is deemed necessary. Recently, in India, TikTok was deemed a menace to the population, particularly given how many of its users were under the age of 18. A week later, TikTok was removed from both app stores in the country over concerns about paedophilia.
After all, Saudi Arabia's repressive policies towards women are not a state secret; human rights organisations and activists have been raising the alarm about them for years, so it is hard to believe the app's purpose simply passed under the radar. The fact that companies are enabling this kind of human rights abuse should surely be a cause for concern for anyone who cares about freedom or equality.