Inevitably, the AI revolution of the last few years has seen machine learning technologies deployed for increasingly punitive ends, creeping not just into on-the-street policing but into the judicial system too. Much of the warranted criticism of this shift has treated AI tools in the justice system as a strictly US phenomenon, but make no mistake: automated policing is well underway here in the UK too.
Just last week the Metropolitan Police came under fire once again for using automated facial recognition (AFR) software outside Romford train station, the final deployment in a controversial trial that has been underway since 2016 and has seen the tech tested in various spots around London.
AFR works by scanning the faces of passers-by in real time and cross-referencing them with the police’s own database of wanted suspects. If someone is flagged as a match (or, as was the case in Romford, simply tries to cover their face), the officers manning the cameras are alerted and quickly apprehend the ‘suspect’ to search and question. Described by the anti-surveillance campaign group Big Brother Watch as “lawless and authoritarian”, and criticised for its lack of transparency and legal ambiguity, the system collects biometric data without consent, posing serious questions about privacy and offering a further example of technology’s increasingly sinister encroachment on public space.
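For the technically curious, the sketch below shows in rough outline how a watchlist check of this kind tends to work: each detected face is reduced to a numerical ‘embedding’ and compared against stored embeddings of watchlist subjects, with anything above a similarity threshold triggering an alert. Everything here, from the embedding size to the threshold to the function names, is a hypothetical illustration rather than a detail of the Met’s actual system.

```python
# Illustrative sketch only: how a live facial-recognition watchlist check
# might work in principle. The embedding size, threshold, and names are
# assumptions, not details of any deployed police system.
import numpy as np

EMBEDDING_DIM = 128       # hypothetical size of a face-embedding vector
MATCH_THRESHOLD = 0.6     # hypothetical similarity cut-off for raising an alert

# Pretend watchlist: in a real deployment these vectors would come from a
# face-embedding model run over custody or suspect photographs.
rng = np.random.default_rng(0)
watchlist = {
    "suspect_a": rng.normal(size=EMBEDDING_DIM),
    "suspect_b": rng.normal(size=EMBEDDING_DIM),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(probe: np.ndarray) -> list[tuple[str, float]]:
    """Return every watchlist entry whose similarity to the probe face
    exceeds the threshold; each hit would trigger an alert to officers."""
    hits = []
    for name, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score >= MATCH_THRESHOLD:
            hits.append((name, score))
    return hits

# A passer-by's face, as an embedding produced by the same (hypothetical) model.
passerby = rng.normal(size=EMBEDDING_DIM)
print(check_against_watchlist(passerby))  # usually [] at this threshold
```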
Even setting aside the predictable and clichéd references to Orwell’s 1984 surveillance dystopia, there are plenty of other reasons to be wary. Firstly, it’s both incredibly expensive and incredibly useless. Instead of using the money to put more officers on the street or invest in youth services, which would surely go further in curbing the wave of knife crime that is the Met’s stated rationale for this new approach, millions have been spent on acquiring and testing this technology. And, unsurprisingly, it’s totally ineffective. An FOI request filed by Big Brother Watch found that during the Met’s tests last year, the system’s matches were incorrect 98 per cent of the time. Bearing that in mind, and returning to privacy rights for a second, the Met’s defence that “only images which alert against a watch list subject will be retained for a limited period” is suddenly hardly a reassuring one.
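To see why a failure rate like that isn’t surprising, consider a back-of-the-envelope calculation. The numbers below are entirely hypothetical, but they show how a scanner that looks accurate on a per-face basis can still produce overwhelmingly false alerts when almost nobody in the crowd is actually on the watchlist.

```python
# Back-of-the-envelope illustration (hypothetical numbers, not the Met's):
# why a scanning system can be "mostly wrong" even if each individual
# comparison seems accurate, when genuine watchlist subjects are rare.
faces_scanned = 100_000          # assumed crowd size over a deployment
on_watchlist = 10                # assumed genuine watchlist subjects in that crowd
true_positive_rate = 0.90        # assumed chance of catching a real subject
false_positive_rate = 0.005      # assumed chance of wrongly flagging anyone else

true_alerts = on_watchlist * true_positive_rate                      # ~9
false_alerts = (faces_scanned - on_watchlist) * false_positive_rate  # ~500

share_of_alerts_that_are_wrong = false_alerts / (true_alerts + false_alerts)
print(f"{share_of_alerts_that_are_wrong:.0%} of alerts are false")   # ~98%
```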
Another societal problem posed by this automated policing is a common product of machine learning itself: the entrenchment of racial biases. Speaking to HuffPost UK, which was reporting on the use of AFR in Romford, Hannah Couchman of the human rights advocacy group Liberty made the case that AFR is disproportionately used to target black people. To get even a rudimentary understanding of how these technologies could be deployed along the lines of racial prejudice, all you need do is look at where the Met has been testing it: not just in places such as Romford and Stratford, among the most racially diverse areas of London, but, perhaps more tellingly, at the already heavily policed Notting Hill Carnival for two consecutive years, where 95 people were falsely flagged as criminals in 2017.
It’s not just facial recognition driving the rise of automated policing either. A report called ‘Policing By Machine’, published yesterday by Liberty, revealed that twelve police forces across the UK have been using ‘predictive mapping programs’ that collate data on previous crimes to identify ‘high-risk’ areas on a map, allowing them to decide where to increase police presence. Liberty also found that three forces were using a machine learning programme called the Harm Assessment Risk Tool (HART) that supposedly enabled them to predict the likelihood of individuals committing, or falling victim to, certain crimes, from stalking and burglary to domestic violence and sexual assault. HART comes to these conclusions using 34 pieces of data, ranging from your criminal record to your gender, income, name and postcode.
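To give a sense of what a tool like this involves, here is a minimal sketch of the kind of risk-scoring classifier the report describes: a model trained on past records and a handful of personal attributes, spitting out a risk label for a new individual. This is not HART itself; every feature, data point and model choice below is invented for illustration.

```python
# Minimal sketch of the *kind* of risk-scoring classifier described in the
# Liberty report, not HART itself. All data and features here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical inputs, echoing the kinds of fields the article lists:
# prior offences, age, and a postcode district encoded as a number.
X = np.column_stack([
    rng.integers(0, 10, n),    # prior offences
    rng.integers(18, 70, n),   # age
    rng.integers(0, 50, n),    # postcode district, label-encoded
])
y = rng.choice(["low", "moderate", "high"], size=n)   # past outcomes (synthetic)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# "Scoring" a new individual: 3 prior offences, aged 24, postcode district 17.
print(model.predict([[3, 24, 17]]))
```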
Though many argue that algorithms can’t be biased, that they’re somehow magically neutral entities free from human influence, the masses of raw data they rely on certainly are. In its dependence on pre-existing crime data, automated policing only serves to reinforce the racial biases that already consume the criminal justice system, sometimes discriminating in an overtly racist fashion and sometimes via seemingly trivial things like postcode or socio-economic class, which the Liberty report refers to as “proxies for race.” With machine learning simply duplicating the deep-seated racist practices of the police, a dangerous form of automation bias can take hold: officers become over-dependent on these algorithmic decisions, abandoning reason and simply refusing to question them, even when faced with contradictory evidence.
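A toy example makes the ‘proxy’ point concrete. In the synthetic, deliberately exaggerated scenario below, the protected attribute is never shown to the model, yet a correlated feature such as postcode lets it reproduce the disparity baked into the historical records anyway.

```python
# Toy demonstration of a "proxy" feature: even if a protected attribute is
# excluded from training, a correlated feature like postcode can stand in for
# it. All data here is synthetic and exaggerated to make the effect visible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000

group = rng.integers(0, 2, n)                                 # protected attribute, never shown to the model
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)    # strongly correlated with group
# Historical "stop" records skewed against group 1 (biased training labels).
stopped = (rng.random(n) < np.where(group == 1, 0.6, 0.1)).astype(int)

model = LogisticRegression().fit(postcode.reshape(-1, 1), stopped)

for pc in (0, 1):
    print(f"postcode {pc}: predicted stop probability "
          f"{model.predict_proba([[pc]])[0, 1]:.2f}")
# The model reproduces the disparity purely through postcode.
```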
Whether it’s using this already-biased data pool for facial recognition, designating supposedly ‘high-risk’ areas that are likely already disproportionately policed, or conducting psychological assessments to determine potential criminality, the automation of our police force not only threatens our privacy but could easily perpetuate disciplinary biases across not just race but also class, gender, sexuality or any other attribute through which discrimination, human or institutional, could be replicated.
If you thought Fox News was as scary as news channels can get, you had better think again. Earlier this month, Chinese news agency Xinhua aired its (and the world’s) first ‘AI’ news anchor, a digital version of real-world Xinhua news anchor Qiu Hao. “This is my very first day at Xinhua News Agency,” declared the impeccably dressed artificial news anchor, who then congratulated himself on his ability to “tirelessly” deliver news, in both English and Chinese, 24/7. But as viewers across China and around the world eye Digital Hao with awe, some have already questioned whether he truly constitutes an AI breakthrough, and whether his emergence is good or bad news for the world of media.
Xinhua’s anchor was developed through machine learning that imitates the voice, facial expressions and gestures of real-life broadcasters in order to forge what the company describes as “a lifelike image instead of a cold robot.” Yet despite Xinhua’s best efforts to simulate an actual human, many complain that watching the ‘AI’ anchor isn’t so pleasant, thanks to his flat, arrhythmic, monotone delivery.
His questionable imitation of human expression isn’t the digital anchor’s only problem. As pointed out by Will Knight, a senior editor for AI at MIT Technology Review, Xinhua’s virtual anchor is hardly an example of true AI. Knight argues that while machine learning enabled the news agency to come very close to flawlessly mimicking a real-life news persona, its staff still have to feed the text to his virtual alter ego. “We should… always be really careful I think about the use of the term AI, and in this context you don’t want to suggest that this anchor is actually exhibiting any intelligence, because it’s not, it’s just like a kind of very sophisticated digital puppet,” Knight told CNBC. “What actually creates those images and the movement of the lips and the voice of this anchor is using algorithms that are related to artificial intelligence. But to call this an AI anchor is slightly overselling it.”
Beyond misrepresenting the field of AI and diverting the public further from the true core and purpose of its research, Xinhua’s brand of virtual news anchor risks diminishing the quality of the news itself, given that the ‘person’ delivering it is incapable of intelligent analysis. Since most media and news agencies adhere to some worldview, their anchors often deliver content through the filter of their socio-political affiliation. And while this practice flavours the news with a considerable degree of bias, it also helps the viewer engage in a more profound analysis of events, as opposed to mindlessly digesting them as a passive consumer of information.
News anchors draw our focus to particular issues as they surface and shed light on aspects of them we wouldn’t otherwise consider; they have the power to humanise stories and situate them within a broader context. But Digital Hao and his fellow virtualites stifle this kind of analysis, essentially granting those behind the scenes who feed them the information absolute power over the content viewers are exposed to, without the responsibility of actually delivering it. Then again, for the Chinese Communist Party this is of course a desirable outcome. And although in the West freedom of speech is still not as brutally infringed upon as in China, we’d be foolish to think there aren’t figures in power here who are drooling at the thought of gaining full control over our intake of news.
Knight and others criticise Xinhua’s virtual anchor for being a sham AI, incapable of real intelligence and a servant of autocrats who wish to stifle any unwanted analysis of the news. While their claims are legitimate, wouldn’t the alternative be even more terrifying? What if we finally managed to manufacture a computer programme actually capable of intelligently analysing written text, and such technology were used to deliver news to the masses? Could true AI news anchors be trusted to exercise sound judgement, honesty and transparency as they narrate our societies’ stories? That may still be far down the road, but it is safe to predict that a digital ‘24/7’ Jeanine Pirro would mark the beginning of the apocalypse.