With over two million people behind bars at any given time, the United States has the highest incarceration rate of any country in the world. And with one-third of federal correctional officer jobs standing vacant, US prisons are currently using cooks, teachers and nurses to guard inmates. Is there a more ‘effective’ way to keep track of inmates? Enter Artificial Intelligence (AI), the flawed technology currently engulfing everything from TV shows to deepfakes that fool facial recognition.
Congress is now asking US prisons to evaluate whether Artificial Intelligence could be used to analyse prisoners’ phone calls, in hopes of finding discussions that “may be of concern.” And it’s a persuasive argument, at least at first glance: implementing such a system would save time, freeing up officials for other duties instead of having them monitor each call.
Verus, one of the many AI systems under consideration, can automatically transcribe each call and analyse it for flagged phrases or words. Its developers have even gone as far as claiming that the system has helped prevent “at least a few” suicides. And yet, there are always two sides to every coin: privacy advocates stress that such measures could breach inmates’ privacy and allow them to be wrongfully flagged as suspicious due to mistakes made by the algorithm.
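Verus’s actual implementation isn’t public, but the basic mechanics of flagging a transcript are simple enough to sketch. The minimal Python example below assumes a transcript has already been produced by a speech-to-text model; the watchlist phrases and the sample call are entirely hypothetical.

```python
# Minimal sketch of transcript keyword-flagging. This is NOT Verus's
# actual system; the watchlist and sample transcript are hypothetical.
import re

# Hypothetical watchlist of phrases an operator might flag.
FLAGGED_PHRASES = ["hurt myself", "contraband", "burner phone"]

def flag_transcript(transcript: str) -> list[dict]:
    """Return every flagged phrase found, with surrounding context."""
    hits = []
    for phrase in FLAGGED_PHRASES:
        for match in re.finditer(re.escape(phrase), transcript, re.IGNORECASE):
            # Keep 40 characters either side so a human can review the hit.
            start = max(0, match.start() - 40)
            end = min(len(transcript), match.end() + 40)
            hits.append({"phrase": phrase, "context": transcript[start:end]})
    return hits

# Hypothetical usage: scan a batch of transcribed calls.
calls = {"call_001": "...honestly I'm gonna hurt myself if this keeps up..."}
for call_id, text in calls.items():
    for hit in flag_transcript(text):
        print(call_id, hit["phrase"], "->", hit["context"])
```

Even in this toy form, the weak link is obvious: the flagging step is only as good as the transcript fed into it, and every transcription error becomes a potential false alarm.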
Let’s be real, people in incarceration have very little privacy as it is. Some on the more conservative end of the political spectrum may be inclined to argue that privacy is something a prisoner voluntarily sacrifices when committing a crime. However, it’s important to note that, regardless of the crime at hand, privacy is a fundamental human right, one recognised in the UN’s Universal Declaration of Human Rights. Also keep in mind that calling from prison is a two-way street, involving both the prisoner and an individual on the outside. The new proposal has raised concerns that innocent people on the outside could feel their own privacy is being violated, leaving them unable to talk about sensitive topics.
Although recent advancements in Artificial Intelligence have brought the technology into the mainstream, from Uber to your banking app, it’s important to remind ourselves that we’re not yet at the point of singularity. Actually, we’re far from it. AI is notorious for incorrectly identifying people of colour, not only in photographs but in audio too. It’s fair to say that, as it stands, AI-led transcription earns a C+ grade at best. Even more worryingly, a 2020 study by Stanford University and Georgetown University found that the tech is particularly flawed when transcribing the voices of black people. The high error rates in transcribing the speech of people of colour have been attributed to a number of factors, such as a lack of sufficient sample data to train the algorithms in the first place.
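For context, transcription accuracy is typically measured as word error rate (WER): the number of word insertions, deletions and substitutions needed to turn the machine’s transcript into a human-verified reference, divided by the reference’s length. A rough sketch of the standard calculation, with a hypothetical example:

```python
# Rough sketch of word error rate (WER), the standard metric behind
# transcription-accuracy findings like the study described above.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Classic dynamic-programming edit distance, counted over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: one misheard word out of five is a 20% WER.
print(wer("i will call you tomorrow", "i will tall you tomorrow"))  # 0.2
```

A single misheard word out of five already means one in five words is wrong, and in a system built to hunt for flagged phrases, a misheard word is exactly how an innocent sentence gets reported.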
Artificial Intelligence has been linked to false arrests and higher incarceration rates too. There have been several cases in which a black person was falsely arrested as a result of a facial recognition program incorrectly identifying them. Couple this with the fact that black men in the US are six times more likely than white men to be behind bars, as well as the fact that such surveillance technology is often deployed in areas with predominantly black populations, and you get an idea of why, in some cases, we definitely shouldn’t hand the hammer of justice solely to a man-made algorithm.
The proposition of handing over the task of monitoring prisoners’ phone calls to Artificial Intelligence may be persuasive, but it’s not as simple as it seems in practice. For a system to be embedded in the justice system, it should be flawless. However, as previous cases have shown, AI is not exactly reliable, and such mistakes could land innocent people behind bars.
Likewise, implementing AI to monitor calls could breach the privacy of people outside of prison. Reuters reported the story of Heather Bollin, a woman who speaks regularly over the phone with her fiancé, a man who is currently incarcerated. Those calls are already monitored, which is stressful enough, but the new AI system would subject every call to automated checks. In the interview, Bollin said, “It’s very unsettling—what if I say something wrong on a call? It could be misconstrued by this technology, and then he could be punished?”
From privacy concerns to automatic racial profiling, the possibilities for bias are endless. And for people like Bollin on the other end of the line, conversations with her fiancé now mean forfeiting even more of her privacy. “We are supposed to be free people, we are not incarcerated,” she added. “But it feels like my rights are constantly being violated.”