This isn’t just another story about a clever algorithm. It’s a battle over privacy, ethics, and who foots the bill for surveillance. As one MIT Technology Review article laid bare, this is where the abstract debate about AI ethics gets dangerously real.
What Exactly Is This “Prison AI”?
At its core, the technology Securus is using isn’t entirely new. It’s a large language model (LLM), the same family of AI that powers chatbots and writes university essays. But instead of being trained on the open internet, this model has been fed a diet of private conversations—phone calls, video chats, and messages between incarcerated people and their families. Kevin Elder, the president of Securus, boasted that the company can point this AI at “an entire treasure trove [of data] to detect and understand when crimes are being thought about or contemplated.”
Think of it as a digital snitch that never sleeps. It scans every conversation in real time, flagging keywords, sentiments, and patterns that its algorithm deems suspicious. The goal, Securus claims, is to intercept contraband drops, prevent violence, and disrupt criminal enterprises before they happen. On the surface, who could argue with preventing crime? But the road to hell, as they say, is paved with good intentions and, in this case, questionable data-mining practices.
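To make that concrete, here is a deliberately simplified Python sketch of what keyword flagging over a transcribed call can look like. This is not Securus’s system (the company has not published its models or rules); the watchlist, the `scan_transcript` function, and the sample call are all invented for illustration. The reported product layers a large language model over transcripts, but the underlying idea of surfacing snippets that match a “suspicious” vocabulary is similar in spirit.

```python
# A hypothetical sketch of keyword flagging on a transcribed call.
# The watchlist and names below are invented; Securus's real rules are not public.
import re
from dataclasses import dataclass

# In a real deployment this vocabulary would be curated or learned,
# which is exactly where bias and over-reach can creep in.
WATCHLIST = {"package", "drop", "burner"}


@dataclass
class Flag:
    term: str      # the watchlist term that matched
    context: str   # a snippet of the call around the match


def scan_transcript(transcript: str) -> list[Flag]:
    """Return a Flag for every watchlist term found in the transcript."""
    flags = []
    lowered = transcript.lower()
    for term in WATCHLIST:
        for match in re.finditer(re.escape(term), lowered):
            start = max(0, match.start() - 30)
            end = min(len(transcript), match.end() + 30)
            flags.append(Flag(term=term, context=transcript[start:end]))
    return flags


if __name__ == "__main__":
    call = "Tell your cousin the package arrived, I love you, see you Sunday."
    for flag in scan_transcript(call):
        print(f"FLAGGED {flag.term!r}: ...{flag.context}...")
```

Even this toy version shows the core problem: the word “package” in an otherwise affectionate call is enough to raise a flag, and someone, or something, downstream then decides what that flag means.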
The Ethical Minefield of Digital Eavesdropping
This brings us squarely to the thorny issue of inmate surveillance ethics. Prisons have always monitored communications, but this automated, all-seeing eye is a different beast entirely. It’s one thing for a human guard to listen in on a call flagged as high-risk; it’s another for an AI to analyse every single word uttered by an entire prison population, building predictive profiles along the way.
Advocacy groups like the ACLU are, rightly, sounding the alarm. Corene Kendrick of the ACLU’s National Prison Project posed a chilling question: “Are we really trying to stop crime before it happens because we’re monitoring every utterance and thought of incarcerated people?” This isn’t a sci-fi film; it’s a policy being rolled out in American correctional facilities.
The Illusion of Choice
Securus and its defenders will argue that inmates “consent” to this monitoring. But let’s take a closer look at these so-called legal consent frameworks. When an inmate wants to call their partner or child, they are presented with a take-it-or-leave-it proposition: agree to be recorded and analysed, or don’t speak to them at all.
Bianca Tylek, who leads the advocacy group Worth Rises, puts it bluntly: “That’s coercive consent; there’s literally no other way you can communicate with your family.” It’s a classic example of a power imbalance. Consent implies a genuine choice, which simply doesn’t exist here. This isn’t a “terms and conditions” checkbox you scroll past; it’s a digital gatekeeper controlling a fundamental human need for connection.
From Surveillance to Prophecy: The Peril of Recidivism Prediction
The implications of this technology go beyond real-time monitoring. The data gathered is a goldmine for another controversial application of AI in the justice system: recidivism prediction. By analysing years of call data, these systems claim to be able to identify individuals who are more likely to re-offend upon release.
The dangers here are immense. These models are trained on data from a highly specific, stressed, and often biased environment. What happens if the AI learns to associate certain slang, accents, or expressions of frustration with a higher risk profile? It could create a self-fulfilling prophecy, where individuals are flagged and treated with greater suspicion, not because of their actions, but because of a biased algorithm’s prediction. The potential for reinforcing existing societal biases is not just a risk; it’s a near certainty.
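To see the mechanism, consider a deliberately tiny, made-up example in Python. Nothing here reflects any vendor’s actual model; the feature names, the records, and the `label_rate` helper are invented. The point is purely mechanical: if the historical “high risk” labels were themselves produced by skewed flagging decisions, then any statistic or model built on top of them will report that a harmless trait such as regional slang “predicts” risk.

```python
# A toy illustration of how proxy features can absorb bias in risk prediction.
# The data and feature names are invented for the example; this is not a real model.
from collections import Counter

# Each record: (uses_regional_slang, prior_flags, was_labelled_high_risk)
# The labels come from past flagging decisions. If those decisions were already
# skewed against certain speech patterns, that skew is baked into the data.
training_data = [
    (1, 0, 1), (1, 1, 1), (1, 0, 1), (1, 0, 0),
    (0, 1, 1), (0, 0, 0), (0, 0, 0), (0, 0, 0),
]


def label_rate(feature_index: int) -> dict[int, float]:
    """Estimate P(high-risk label | feature value) from the toy data."""
    totals, positives = Counter(), Counter()
    for record in training_data:
        value, label = record[feature_index], record[-1]
        totals[value] += 1
        positives[value] += label
    return {value: positives[value] / totals[value] for value in totals}


print("P(high risk | uses slang):  ", label_rate(0))
print("P(high risk | prior flags): ", label_rate(1))
```

In this made-up data, people who use the slang are labelled high-risk three times as often (0.75 versus 0.25), so any model fitted to it would learn to penalise how someone speaks rather than anything they have done. That is the self-fulfilling prophecy in miniature.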
Follow the Money: The Regulatory Twist
So, who pays for this sophisticated surveillance apparatus? You might assume it’s the law enforcement agencies who want it. You’d be wrong. In a move that has sparked outrage and legal challenges, the Federal Communications Commission (FCC) made a crucial rule change. As detailed in the original report, the FCC revised its 2024 reforms, allowing telecom companies like Securus to pass the costs of surveillance technology directly on to the inmates and their families.
This isn’t a small fee. The FCC’s ruling effectively allowed rate caps on inmate calls to be set a staggering 66% higher to cover these “security” costs. In essence, families are now funding the very systems that are eavesdropping on their private conversations. It’s a brilliant, if morally bankrupt, business model: create a problem (the need for mass surveillance) and then charge the subjects of that surveillance to solve it.
This decision wasn’t unanimous. FCC Commissioner Anna Gomez voiced strong opposition, arguing that security costs should be borne by law enforcement, not the families of the incarcerated. The ruling has already prompted 14 states to sue the FCC, turning this into a legal battleground.
A Fork in the Road for Prison Tech Reform
The Securus saga highlights a critical juncture for prison tech reform. Technology is not inherently good or evil, but its application in a setting with such a profound power imbalance requires intense scrutiny. Could technology be used to improve lives in prison? Absolutely. Think educational resources, better access to mental health services, or tools that genuinely help with rehabilitation.
However, the current trajectory seems focused on control and surveillance over rehabilitation. The debate isn’t about whether to use technology; it’s about what kind of technology we choose to implement and who benefits. Are we building tools for empowerment and second chances, or are we simply building more effective digital prisons?
The rise of correctional facility AI forces us to confront uncomfortable questions. It’s easy for tech executives in Silicon Valley to talk about “treasure troves” of data and crime prevention. But for the people on the other end of the line, this technology represents a new and invasive form of control, one that they are being forced to pay for.
The future of justice technology is being written today, not in code, but in these regulatory battles and ethical debates. Will we allow private companies to build predictive policing models on the backs of the most vulnerable, or will we demand transparency, accountability, and a focus on human dignity? What do you think the correct balance is between security and privacy in this context?


