In the first week of December, COVID-19 cases and deaths in South Korea hit a record high—placing hospitals under intense pressure with fear mounting over the new Omicron variant. As of today, 13 December 2021, the Korea Disease Control and Prevention Agency (KDCA) reported 5,817 new infections with 40 COVID-related fatalities and 24 Omicron cases—bringing the total to 114 since the confirmation of the variant in the country.
With little to no sign of slowing, Prime Minister Kim Boo-kyum admitted that South Korea could be forced to take “extraordinary measures” to tackle the surge. Now, a nationally-funded project in Bucheon—one of the most densely populated cities on the outskirts of Seoul—is preparing to use artificial intelligence (AI) and facial recognition, leveraging thousands of closed-circuit video cameras to track citizens infected with the virus. All of this, despite concerns about privacy and consent.
According to a business plan submitted to the Ministry of Science and ICT, obtained by Reuters, the pilot programme will use AI-powered algorithms and facial recognition software to “analyse footage gathered by more than 10,820 security cameras and track an infected person’s movements, anyone they had close contact with and whether they were wearing a mask.”
Set to launch in January 2022, Bucheon officials believe the system will reduce the intense workload of contact tracers while helping authorities use such tracing teams more efficiently. Although the country already has an aggressive contact-tracing system in place that harvests everything from credit card records to cell phone locations—among other personal information, of course—the process is not fully automated. It relies on a large number of epidemiological investigators, who are often overloaded with 24-hour shifts as they trace the route maps of infected citizens and reach out to the patients’ potential primary contacts.
In a city with a population of 830,000 people, the measures make sense—from a regulatory perspective. “It sometimes takes hours to analyse a single [piece of] CCTV footage. Using visual recognition technology will enable that analysis in an instant,” Bucheon Mayor Jang Deog-cheon tweeted back in 2020, as noted by Reuters, while bidding for national funding of the project. Receiving 1.6 billion won ($1.36 million) from the Ministry of Science and ICT, the city has invested 500 million won ($420,000) of the budget to build the system.
Capable of simultaneously tracking up to ten people in five to ten minutes, the system cuts down the time it takes to trace one person manually—which, according to the business plan, takes about half an hour to one hour each. The pilot project is also designed to corroborate the testimonies of COVID-19 patients “who aren’t always truthful about their activities and whereabouts” with concrete facts to back them up. The Ministry of Science and ICT has no plans to introduce the system nationwide; for now, the AI-powered project requires a team of about ten staff gathered at a public health centre.
When invasive tracking methods were first introduced in the country at the start of the pandemic, the systems garnered wide public support. However, some human rights advocates and legislators have expressed concerns that the government will retain and leverage such data beyond the needs of the pandemic.
South Korea is not the only country in the mix either. According to a report by Columbia Law School in New York, countries like China, Russia, India, Poland and Japan—as well as several US states—have rolled out or at least experimented with facial recognition systems for tracking COVID-19 patients. So, is the exchange of personal data and privacy of citizens for convenience, order and safety a fair trade-off? Or are governments using the pandemic as an excuse to normalise surveillance?
“The government’s plan to become a ‘Big Brother’ on the pretext of COVID is a neo-totalitarian idea,” Park Dae-chul, a legislator from the main opposition People Power Party, told Reuters. “It is absolutely wrong to monitor and control the public via CCTV using taxpayers’ money and without the consent from the public.”
According to a Bucheon official, however, privacy concerns are unfounded—given that the system blurs the faces of those who are not the subject of the tracking session. “There is no privacy issue here as the system traces the confirmed patient based on the Infectious Disease Control and Prevention Act,” the official told Reuters. “Contact tracers stick to that rule so there is no risk of data spill or invasion of privacy.” Although the act states that patients must consent to the use of facial recognition tracking, the system can still legally trace those who withhold consent using their silhouette and clothes.
The Korea Disease Control and Prevention Agency also noted that such technology is “lawful as long as it is used within the realm of the disease control and prevention law.” The system’s core purpose—efficiently digitising manual labour—holds immense potential, don’t get me wrong, but it has to come with the guarantee of safety from external threats.
With everything moving into the digital space and the metaverse, hacking and professional trolling have become constant threats to our privacy. As South Korea also experiments with the controversial technology to detect child abuse at daycares, the country has to make sure such intrusive technology does not come at the cost of its own citizens.
Facial recognition is one of the many surveillance technologies that have been the subject of controversy over the past couple of years, for legitimate reasons: the privacy breaches it results in are highly problematic while its regulation remains blurred. In the UK, for instance, it is used alongside closed-circuit TV networks, while in China the police have used it to identify suspect citizens and protesters—in conjunction with other ‘classic’ biometric surveillance devices—via facial recognition glasses that can pick out wanted criminals from a crowd in as little as 100 milliseconds.
But in the US, things might be changing. Since the killing of George Floyd in May 2020, US Democratic lawmakers have introduced several bills in order to restrict and monitor the use of biometric surveillance. The Facial Recognition and Biometric Technology Moratorium Act, proposed by Senators Ed Markey of Massachusetts and Jeff Merkley of Oregon as well as by Representatives Pramila Jayapal of Washington and Ayanna Pressley of Massachusetts, would make it illegal for federal law enforcement agencies to use the technology.
At the same time, Democratic Senator Chris Coons from Delaware and Republican Senator Mike Lee from Utah proposed the Facial Recognition Technology Warrant Act, which instead would require federal law enforcement to first receive a warrant from a judge before tracking a suspect live for a period longer than three days. Moreover, the bill would limit the period of surveillance allowed to a maximum of 30 days. Court administrators would also need to be informed of the request for tracking.
It’s no coincidence that both bills have been introduced at this specific time. Recent studies have highlighted the bias present in AI surveillance technology as well as its racial targeting, and at a moment when systemic racism within US law enforcement agencies continues to be exposed, concerns over the use of facial recognition technology are rising.
Senator Ed Markey recently stated that “Facial recognition technology doesn’t just pose a grave threat to our privacy; it physically endangers Black Americans and other minority populations in our country.” A recent piece published by the New York Times reported the story of an innocent Black man in Michigan who was misidentified by facial recognition software and unjustly arrested.
Surprisingly, the push for more restrictive laws on facial recognition technology has also come from tech companies themselves, though only after being pressured by activists to do so. In 2018, a letter addressed to Jeff Bezos and signed by around 70 civil rights and research organisations asked Amazon to reconsider providing governments with facial recognition technology.
Among different points, the letter read: “People should be free to walk down the street without being watched by the government. Facial recognition in American communities threatens this freedom. In overpoliced communities of color, it could effectively eliminate it. The federal government could use this facial recognition technology to continuously track immigrants as they embark on new lives. Local police could use it to identify political protesters captured by officer body cameras. With Rekognition, Amazon delivers these dangerous surveillance powers directly to the government.”
The letter was followed by 150,000 petition signatures delivered to Amazon by the American Civil Liberties Union (ACLU) of Washington, as well as an internal memo from the company’s employees and a tweet from the activist Amazon employees group Amazonians: We Won’t Build It calling out Bezos’ hypocrisy when it came to publicly backing the BLM movement.
Despite what seemed like an initial disinterest from Bezos towards the requests, two weeks ago Amazon stated that it is going to place a one-year moratorium on Rekognition (the company’s facial recognition system), following in the steps of fellow tech giants IBM and Microsoft, which recently announced they would stop providing the government and the police with their systems until the technology is properly regulated.
In light of the recent protests demanding radical change in US law enforcement, scrutiny of facial recognition surveillance seems imperative. The proposed bills, along with the change of policy from the tech companies providing the surveillance systems, could potentially set a new direction for governmental use of an extremely dangerous surveillance tool that was already revealing its flaws at the expense of innocent citizens.