It seems every government department on the planet is rushing to sprinkle a bit of AI magic on their operations, promising a new era of sleek, data-driven efficiency. But as the New Zealand Department of Corrections just discovered, handing out powerful new tools without iron-clad rules and supervision is a recipe for trouble. This isn’t just a Kiwi issue; it’s a global cautionary tale about the very real problem of government AI misuse. The incident highlights the urgent need for a serious conversation about public sector AI ethics and underscores the massive gap that often exists between creating a policy and actually getting people to follow it.
What Happens When the AI Gets Ahead of the Policy?
Let’s break down what happened. The New Zealand Department of Corrections, in a bid to modernise, rolled out Microsoft Copilot to its staff in November 2025. The idea was to help with routine administrative tasks. The rules seemed clear: do not, under any circumstances, enter personal or sensitive information into the AI. Yet, as reported by RNZ, a “small number of incidents” occurred where staff did precisely that, using the tool to help draft sensitive reports like Extended Supervision Orders.
The department’s leadership immediately labelled this use “unacceptable”, and rightly so. While they claim that uptake of the tool has been “relatively low,” with only about 30 percent of staff using it, the fact that anyone used it for restricted work is the real story here. This is a classic case of administrative AI risks coming home to roost. The very tool intended to streamline paperwork became a potential vector for a serious data breach.
Think of it like giving everyone in an office a state-of-the-art power drill. It’s fantastic for putting up shelves, but you’d be horrified if you found someone trying to perform dental work with it. The tool isn’t inherently bad, but its application in a sensitive context, without the right training and oversight, is a disaster waiting to happen. Thankfully, in this case, the department says no private information was actually breached, and the matter was referred to the Office of the Privacy Commissioner, but that feels more like luck than good management.
Governance: The Boring But Essential Bit
So, how do you stop people from using a drill as a dentist’s tool? This is where governance and robust policy enforcement frameworks come in. It’s not enough to just write a memo and hope for the best.
Corrections seems to be learning this lesson on the fly. They have an “AI assurance officer” and a dedicated “AI working group” in place, which is a solid start. They’re also now implementing auditing measures to monitor how these tools are being used. This move from a passive “don’t do this” policy to active oversight is the critical step that many organisations miss.
The problem is that policy is often seen as a box-ticking exercise. An organisation creates a document, uploads it to the intranet, and considers the job done. But a policy without enforcement is just a suggestion. Effective governance means creating a system where compliance is monitored, and violations have clear and immediate consequences. For public sector bodies handling incredibly sensitive data—from prisoner information to citizen tax records—this isn’t optional. It’s the fundamental basis of public trust.
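To make “active oversight” a little more concrete, here is a minimal, purely illustrative sketch in Python of the kind of automated prompt-log audit an organisation could layer on top of a written policy. The log format, pattern names, and matching rules are assumptions made up for this example; they are not anything Corrections has described, and a real deployment would rely on the department’s own data classification rules and tooling.

```python
import re
from dataclasses import dataclass

# Illustrative patterns only; a real system would use the organisation's
# own data classification rules, not this made-up list.
SENSITIVE_PATTERNS = {
    "nhi_style_identifier": re.compile(r"\b[A-HJ-NP-Z]{3}\d{4}\b"),
    "date_of_birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "restricted_term": re.compile(
        r"extended supervision order|psychological assessment", re.IGNORECASE
    ),
}

@dataclass
class Finding:
    user: str
    pattern: str
    excerpt: str

def audit_prompt_log(entries):
    """Scan (user, prompt_text) pairs and flag anything matching a sensitive pattern."""
    findings = []
    for user, prompt in entries:
        for name, pattern in SENSITIVE_PATTERNS.items():
            match = pattern.search(prompt)
            if match:
                findings.append(Finding(user=user, pattern=name, excerpt=match.group(0)))
    return findings

if __name__ == "__main__":
    # Hypothetical log entries to show the check in action.
    sample_log = [
        ("analyst_01", "Summarise the attached public annual report."),
        ("case_mgr_07", "Draft an extended supervision order for offender DOB 03/11/1981."),
    ]
    for f in audit_prompt_log(sample_log):
        print(f"FLAG: {f.user} matched {f.pattern}: '{f.excerpt}'")
```

Even a crude check like this changes the dynamic: instead of a memo nobody rereads, violations are surfaced to a human reviewer while there is still time to act.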
You Can’t Just Email a Memo and Call It Training
This incident shines a harsh light on the failures of employee compliance training. Corrections insists it takes technology misuse “extremely seriously”, but the very existence of these incidents suggests the initial training and communication fell short.
Simply telling employees “don’t put personal data here” is clearly not enough. Effective training needs to explain the why.
– Show, Don’t Just Tell: Instead of a dry document, training should include real-world scenarios. “Here is an approved use case for Copilot, like summarising a public report. Now, here is an example of a prohibited use, like drafting a prisoner’s psychological assessment, and here’s why it’s prohibited.”
– Explain the Black Box: Staff need to understand, in simple terms, what happens to the data they input into a large language model. Explaining that this data could be used to train the model, stored on third-party servers, and potentially exposed, makes the risk tangible.
– Continuous Reinforcement: AI isn’t a one-and-done training topic. As the technology evolves, so must the training programmes. Regular refreshers and updates are essential to keep pace with new features and emerging risks.
Without this deep, ongoing educational effort, you create a culture where employees, often with the best intentions of being more efficient, can inadvertently cause a major security or ethical incident.
The Looming Spectres of Privacy and Ethics
At the heart of this entire saga are the intertwined issues of privacy and ethics. The Office of the Privacy Commissioner noted that privacy concerns were limited because existing safeguards meant personal data shouldn’t have been entered into Copilot in the first place. That’s a fragile shield. The policy was the only thing standing between standard procedure and a significant privacy breach.
This is the core challenge of public sector AI ethics. The goal is to use technology to improve public services, but the duty to protect citizen data is absolute. In the context of corrections, for example, using AI to analyse trends in contraband smuggling is one thing. Using it to draft a report that could determine an individual’s freedom is another matter entirely. The potential for baked-in bias, factual inaccuracies (hallucinations), or data leaks is simply too high.
As government bodies continue down this path, they must move beyond simply asking “Can we do this?” and start asking “Should we do this?” The establishment of cross-government working groups, like the one the Government Chief Digital Officer is involved in, is a positive sign. But these groups need to be empowered to set hard boundaries, not just draft well-meaning guidelines. The future of AI in government will be defined not by the cleverness of the algorithms, but by the strength and integrity of the ethical frameworks we build around them.
What is truly needed is a cultural shift, where every public servant sees themselves as a guardian of data ethics. What steps do you think are most critical for building that culture?