
‘Unlawful and unethical’: UK police urged to ban facial recognition in all public places

In 2021, several reports outlined the Metropolitan (Met) Police’s plans to supercharge surveillance following the purchase of a new facial recognition system. Dubbed Retrospective Facial Recognition (RFR), the technology essentially allowed police to process historic images from CCTV feeds, social media, and other sources in a bid to track down suspects.

“Those deploying it can, in effect, turn back the clock to see who you are, where you’ve been, what you have done and with whom, over many months or even years,” Ella Jakubowska, policy advisor at European Digital Rights, told WIRED at the time—adding that the tech can “suppress people’s free expression, assembly, and ability to live without fear.”

In March 2021, a report found that RFR was being used by a total of six police forces in England and Wales. Despite this, the technology has largely evaded legal scrutiny, as has the deployment of Live Facial Recognition (LFR) systems in public spaces. LFR essentially scans people’s faces as they walk down streets and compares them to a suspect watchlist in real-time, in true Minority Report style.
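Conceptually, both RFR and LFR come down to the same operation: convert a captured face into a numerical embedding and compare it against stored embeddings of people on a watchlist, either live from a camera feed or retrospectively over archived footage. The sketch below is a minimal, purely illustrative version of that matching step in Python; the embeddings, threshold, and function names are invented for demonstration, and real police systems rely on proprietary vendor models and pipelines.

```python
import numpy as np

# Purely illustrative sketch: real facial recognition systems use proprietary
# face-embedding models and vendor-specific matching pipelines. The threshold
# and names here are hypothetical.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (higher means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face_embedding, watchlist, threshold=0.8):
    """Return (person_id, score) of the best match above the threshold,
    or (None, threshold) if nothing clears it.

    `watchlist` maps a person's ID to a stored embedding. In live (LFR) use
    this runs on every face in a camera feed; in retrospective (RFR) use the
    same comparison is run over faces extracted from archived footage.
    """
    best_id, best_score = None, threshold
    for person_id, stored in watchlist.items():
        score = cosine_similarity(face_embedding, stored)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id, best_score

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
watchlist = {f"suspect_{i}": rng.normal(size=128) for i in range(3)}
probe = watchlist["suspect_1"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(match_against_watchlist(probe, watchlist))
```

The crucial point, reflected in the threshold parameter, is that a “match” is only a similarity score crossing a cut-off, which is exactly where misidentifications can creep in.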

The UK’s current policy on facial recognition

Over the years, several critics have argued that the use of both RFR and LFR encroaches on individual privacy and social justice. Given the technology’s controversial history of misidentifying people of colour, leading to wrongful arrests, and even putting LGBTQ+ lives at risk, experts have warned against invasive surveillance tools capable of tracking the public on a massive scale.

“In the US, we have seen people being wrongly jailed thanks to RFR,” Silkie Carlo, director of civil liberties group Big Brother Watch, told WIRED. “A wider public conversation and strict safeguards are vital before even contemplating an extreme technology like this, but the Mayor of London has continued to support expensive, pointless, and rights-abusive police technologies.”

In July 2019, the House of Commons Science and Technology Committee recommended restrictions on the use of LFR until concerns regarding the technology’s bias and efficacy were resolved. Then, in August 2020, the UK’s Court of Appeal found that South Wales Police’s use of LFR was unlawful and, in September 2021, the United Nations High Commissioner for Human Rights called for a moratorium on the use of LFR.

However, the Met has stated that it will continue to deploy both systems as and when it deems appropriate. “Each police force is responsible for their own use of LFR technologies,” a spokesperson said. “The Met is committed to give prior notification for its overt use of LFR to locate people on a watchlist. We will continue to do this where a policing purpose to deploy [it] justifies the use of LFR.” Meanwhile, Scotland has reported its intention to introduce the technology by 2026.

A history of unchecked biases and errors

Between 2016 and 2019, the Met reportedly deployed LFR 12 times across London. The first case was at the Notting Hill Carnival in 2016, the UK’s biggest African-Caribbean celebration, where one attendee was falsely matched against a real-time database of criminal records. Similarly, at Notting Hill Carnival in 2017, two people were falsely matched while another individual was correctly identified but was no longer wanted.

“Face recognition software has been proven to misidentify ethnic minorities, young people, and women at higher rates,” Electronic Frontier Foundation (EFF), a leading nonprofit defending civil liberties in the digital world, noted in an open letter in September 2022. “And reports of deployments in spaces like Notting Hill Carnival—where the majority of attendees are black—exacerbate concerns about the inherent bias of face recognition technologies and the ways that government use amplifies police powers and aggravates racial disparities.”

After a temporary suspension during the COVID-19 pandemic, the police force resumed its deployment of LFR across central London. “On 28 January 2022, one day after the UK Government relaxed mask-wearing requirements, the Met deployed LFR with a watchlist of 9,756 people. Four people were arrested, including one who was misidentified and another who was flagged on outdated information,” EFF highlighted, adding that the tech was once again deployed outside Oxford Street tube station, where it reportedly scanned around 15,600 people’s data and resulted in four “true alerts” and three arrests.

“The Met has previously admitted to deploying LFR in busy areas to scan as many people as possible, despite face recognition data being prone to error. This can implicate people for crimes they haven’t committed.”

According to a 2020 report, London was the third most-surveilled city in the world, with over 620,000 cameras. As of 2022, it has dropped to eighth on the list, with 127,373 cameras in total. Another report claimed that, between 2011 and 2022, the number of CCTV cameras more than doubled across the London boroughs.

UK police and the ‘unlawful and unethical’ use of LFR

After analysing LFR use by the Met and South Wales police, a new study by the University of Cambridge has now concluded that the controversial technology should be banned in “all public spaces.”

The report, authored by the University’s Minderoo Centre for Technology and Democracy, examined three deployments of LFR, one by the Met police and two by South Wales police, using an audit tool based on current legal guidelines. Based on the findings, the experts joined a growing list of calls to ban the use of facial recognition in streets, airports, and any public spaces—the very areas where police believe it would be most valuable.

In all three cases, the team noted that important information about police use of the technology is “kept from view,” with scant demographic data published on arrests or other outcomes, making it difficult to evaluate whether the tools “perpetuate racial profiling.”

Apart from this lack of transparency, the researchers also found little accountability on the part of the forces involved, with no clear recourse for people or communities negatively affected by police use, or rather misuse, of the tech. “Police forces are not necessarily answerable or held responsible for harms caused by facial recognition technology,” stated the report’s lead author, Evani Radiya-Dixit. “We find that all three of these deployments fail to meet the minimum ethical and legal standards based on our research on police use of facial recognition. To protect human rights and improve accountability in how technology is used, we must ask what values we want to embed in technology.”

According to the report, at least ten police forces in England and Wales have trialled facial recognition to date, including its use for operational policing purposes. While facial recognition is seen as a fast, efficient, and cheap way to track down persons of interest, the report also noted that officers are increasingly under-resourced and overburdened—leading to a plethora of other policing concerns.


China is using AI to stalk its citizens and predict future crimes. What could possibly go wrong?

Back in December 2021, COVID-19 deaths in South Korea hit a record high, prompting then Prime Minister Kim Boo-kyum to admit that the country could be forced to take “extraordinary measures” to tackle the surge. The plans included the use of AI and facial recognition, leveraging thousands of closed-circuit video cameras to track citizens infected with the virus.

At the time, the public raised several concerns about the technology’s attack on privacy and consent. Is the exchange of personal data for convenience, order and safety a fair trade-off for citizens? Or are governments using the pandemic as an excuse to normalise surveillance?

Now, reports are surfacing that the police in China are buying technology that harnesses vast surveillance data to predict crime and protests before they happen. What’s worse is that the systems in question target people deemed potential troublemakers in the eyes of an algorithm and the Chinese authorities—including not only citizens with a criminal past but also vulnerable groups like ethnic minorities, migrant workers, people with a history of mental illness and those diagnosed with HIV.

According to a New York Times (NYT) report, more than 1.4 billion people living in China are being recorded by police cameras that are installed everywhere from street corners and subway ceilings to hotel lobbies and apartment buildings. Heck, even their phones are being tracked, their purchases monitored and their online chats censored. “Now, even their future is under surveillance,” the publication noted.

The latest generation of technology is capable of warning the police if a drug user makes too many calls to the same number or a victim of fraud travels to Beijing to petition the government for payment. “They can signal officers each time a person with a history of mental illness gets near a school,” NYT added.

Procurement details and other documents reviewed by the publication also highlighted how the technology extends the boundaries of social and political control and integrates them ever deeper into people’s lives. “At their most basic, they justify suffocating surveillance and violate privacy, while in the extreme they risk automating systemic discrimination and political repression,” the report mentioned.

In 2020, authorities in southern China allegedly denied a woman’s request to move to Hong Kong to be with her husband after software warned them that the marriage was suspicious. An investigation later revealed that the two were “not often in the same place at the same time and had not spent the Spring Festival holiday together.” The police then concluded that the marriage had been faked to obtain a migration permit.

So, given that Chinese authorities don’t require warrants to collect personal information, how can we know whether the future has been accurately predicted if the police intervene before it even happens? According to experts, even if the software fails to accurately predict human behaviour, it can be considered ‘successful’ since the surveillance itself helps curb unrest and crime to a certain extent.

“This is an invisible cage of technology imposed on society,” said Maya Wang, a senior China researcher with Human Rights Watch. “The disproportionate brunt of it being felt by groups of people that are already severely discriminated against in Chinese society.”

In 2017, entrepreneur Yin Qi, who founded an artificial intelligence start-up called Megvii, first introduced a computer system capable of predicting crimes. At the time, he told Chinese state media that if cameras detected a person spending hours at a stretch at a train station, the system could flag a possible pickpocket.

Fast forward to 2022, and the police in Tianjin have reportedly bought software made by Megvii competitor Hikvision that aims to predict protests. At its core, the system collects data on Chinese petitioners—a general term used to describe people who try to file complaints about local officials with higher authorities in the country. The model then analyses each citizen’s likelihood of petitioning based on their social and family relationships, past trips and personal situations, helping authorities create individual profiles—with fields for officers to describe the temperament of the protester, including “paranoid,” “meticulous” and “short tempered.”
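To make that description concrete, the sketch below shows in Python the kind of structured record such a profiling system might reduce a person to. Everything here (the field names, weights and scoring) is invented purely for illustration and does not reflect how Hikvision’s actual software works.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: field names, weights and scoring are invented
# to show the *kind* of structured profile the NYT report describes, not the
# workings of any real system.

@dataclass
class PetitionerProfile:
    name: str
    past_petitions: int                # prior complaints filed
    recent_trips_to_beijing: int       # travel history flagged by the system
    family_members_petitioning: int    # social and family relationships
    temperament: str = "unknown"       # free-text field filled in by officers
    notes: list[str] = field(default_factory=list)

    def naive_risk_score(self) -> float:
        """Toy weighted sum standing in for a 'likelihood to petition' estimate."""
        return (0.5 * self.past_petitions
                + 0.3 * self.recent_trips_to_beijing
                + 0.2 * self.family_members_petitioning)

profile = PetitionerProfile("example person", past_petitions=2,
                            recent_trips_to_beijing=1,
                            family_members_petitioning=0,
                            temperament="meticulous")
print(profile.naive_risk_score())
```

Even in this toy form, the design choice is visible: subjective, officer-entered labels sit alongside a numerical score, and both feed decisions about a person before they have done anything.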

“It would be scary if there were actually people watching behind the camera, but behind it is a system,” Yin told state media back in 2017. “It’s like the search engine we use every day to surf the internet—it’s very neutral. It’s supposed to be a benevolent thing.” He went on to add that with such surveillance, “the bad guys have nowhere to hide.”