Getting AI Right in Government: Problem Clarity, Shared Accountability, and the Speed of Trust

Emma McDonald (Stats New Zealand) explains why agencies should start with the policy problem, build cross-functional AI governance owned by senior leaders, and design for explainability that makes humans accountable, not the model.

Benji Crooks 19 February 2026

Benji Crooks, Marketing Director at Public Sector Network, sits down with Emma McDonald (Director, Centre of Data Ethics and Innovation, Stats New Zealand) to discuss what “getting AI right” means in practice, how to build strong AI governance beyond the “tech-only” mindset, and the key ethical risks to manage, ahead of her appearance at Cybersecurity Showcase New Zealand in Wellington.


Benji Crooks: Great. So first of all, could you introduce yourself, your role and the company you work for?

Emma McDonald: Excellent. Emma McDonald. I am the Director of the Centre of Data Ethics and Innovation at Stats New Zealand, and while that sounds very, very grand, it’s three or four of us and some sticky tape. We’re here focusing on data ethics across digital technology, such as AI, as well as the statistical modelling that produces our national statistics.

Benji Crooks: Excellent. So focusing on that AI part, what does getting AI right in government actually mean in practice?

Emma McDonald: I think the first thing for me, and the thing that I always stress, is actually understanding what your problem is to start off with. It’s doing that whole proper policy thing right: what problem are you trying to solve, and what technology is best for it? Sometimes it will be AI, sometimes it’ll be a pen and paper. So it’s understanding that it’s not about the tool itself but the problem you’re trying to solve, and being really clear on why you’re using it and why it is the best option for what you’re trying to do.

So I think there is a big push at the moment for everyone to show that they’re on board with AI and they’re doing all the cool, sexy stuff, but always go back to really clean hygiene: what are we using it for, why are we using it, is this the best tool?

Benji Crooks: Absolutely. And I guess with all this cool, sexy stuff that they’re doing with AI, what would you say are the things that agencies should do to put strong AI governance in place?

Emma McDonald: It’s a really, really good question, and there’s a lot of conversation going on around AI governance at the moment. A lot of the time we default to “It’s a technology, so therefore the tech bros have to look after it.” And it’s not.

It is a fundamental thing that is changing the way society works and the way that we work. And so when we’re looking at governance it has to start at the senior leadership level, and there needs to be people who take the accountability of it.

And if you’re looking at places like Australia, the Digital Transformation Agency has put in place the requirement that boards and chief executives be the accountable officers. Because you need it at that level, but it’s also the recognition that it’s not just the tech people. It’s not just the privacy or the legal or the ethics people. Actually everyone needs to have eyes on it, because there are so many different aspects to how it is changing things, and to where the harms may come from and where the opportunities are, that everyone needs to be able to look at it through different lenses.

So when I talk about practical steps, get your crew together and understand that your crew should have diversity amongst it and actually different points of view. And that’s the benefit of it. You actually want to have those robust discussions so that you know that you’re getting it right.

Benji Crooks: Absolutely. And then I guess once that governance is in place and you have the AI systems up and running, how do you make it as explainable and accessible for public use, not just technically accurate?

Emma McDonald: Yeah, and this is one of the things where I always sort of joke around about the old thing of you’ve got to be able to explain it to your grandmother, or I will more often say “You need to be able to explain it to your drunk friend down the pub.”

A lot of the time we’re not necessarily gonna understand the black box, particularly with generative AI and the fact that it works quite differently to traditional AI methods. So be aware of how it is trained and what the data is going into the training model.

And always remember regardless of what AI you are using, it is a tool. You’re still the grownup in the room. You’re still the one who is accountable for the decision.

So any decision that comes out of it, you need to be able to sit there and go “Yeah, I back this decision, and if anyone asks me questions about it, I will be able to defend this decision just like I would with a decision that I made myself.”

And so people need to have as much information as they need to feel comfortable with that. Understand what’s gone in, understand the prompts that have been used in order to be able to get what comes out, but also understand that if you put the same prompt in the next day, you’re gonna get something slightly different. So understand where the differences will come from, and actually what are the errors that sit within there.

But you’re still the adult in the room, is my key message with anything that you talk about.

Benji Crooks: Exactly. So moving on to risks, and I guess there’s a lot of risk with AI. What are the biggest risks you’re most focused on, privacy, security, or something else?

Emma McDonald: I mean, all of the above. All of them. Everything’s a risk, but also there’s a risk in getting too scared and not using it, because there’s real benefit of being able to use it. So I’d just like to say that as well.

But when I talk about data ethics, I come down to six harms. Data ethics sits at the base of all of the data-driven technologies, right, and you need to think about the scale and the reach. This technology can reach a million people in a heartbeat, and you won’t necessarily even know where it’s gone because it’s happening so fast. You’re not necessarily going to know that harm has occurred until it turns up in a newspaper. So understand that a lot of this is invisible to you.

There’s also a collective impact, because when you’re using data in certain ways, people who look a bit like me will be impacted because of the way we collect data: people who look the same are all just put into one wee box, so “you must all be like that,” as opposed to recognising individuality. It’s the collective nature of how you bring data together.

There’s also a power dynamic going on, right. People who understand data do very well and they have the power that most people don’t. Most people don’t understand data and people will just give it away because they don’t think about the power or where it’s gonna go. So understanding there are information asymmetries going on there, and to be aware of that when you’re working with the public.

There’s also obviously really big consent issues going on. It’s really hard to do consent with AI because you don’t know where the data’s gonna end up. So if you can’t do consent, act like you have to get consent anyway and do the right thing. Don’t just be like “Oh it’s all gone, we can’t do anything.” Still do the things.

And then obviously there are the technical complexities around it. It’s data. People go, “I hate maths, I hate data, ick. It’s just awful, I was so bad at maths at school.” And so they seize up around the whole idea because they don’t understand the technologies. They don’t necessarily have to, and that’s what we need to make sure of: that they’re protected from any of these harms.

We do a lot of the heavy lifting for them and they understand what they need to within it, but we’ve got space so that people can ask questions and there can’t be any stupid questions in this space, right. Because none of us know what’s going on. Who knows what it’s gonna look like tomorrow?

Benji Crooks: Yeah, absolutely. It’s the same on the training side of our business: we’re developing AI courses, and they’re constantly changing. In the next year there’ll be different kinds of AI learning out there and different AI systems that we’ll have to teach on.

Emma McDonald: Yeah. And what I worry about is that you’ve got the shiny-shiny over here, and you’re not looking over there at the thing that’s probably going to change the world, because we’re all looking at the shiny.

Benji Crooks: Absolutely. And it’s all happening so fast. It’s such a big bubble that at some point it’s gonna burst, and we won’t know what’s left. But just go along with the ride, right?

Emma McDonald: We’ve been in this change now for pretty much my whole life, and I’m really old.

Benji Crooks: Honestly, yeah. I still think of AI as a relatively new thing, but it’s been around for six or seven years now since I first remember it being introduced. Hasn’t it?

Emma McDonald: Well yeah, so generative AI came around at the end of 2022, but the term AI was coined in the 1950s, around the time of Alan Turing’s early work, and back in 1843 Ada Lovelace identified that machines would be able to do what humans did. So for well over 100 years people have had these concepts.

It’s just that the shift with generative AI, the fact that it’s built on neural networks as opposed to more traditional linear-type models, has been the fundamental change. It’s opened things up so that more humans can actually get in and use AI, compared with the machine learning approaches. But machine learning still has huge amounts of power, and it will still fundamentally change the world, particularly when it loops in with neural networks and generative AI.

That’s going into way too much detail, Benji, but it’s a really interesting point because AI is gonna change how we work.

Benji Crooks: Mm-hmm. And I guess going back to, we’ll be seeing you at Cybersecurity Showcase in New Zealand, in Wellington. One question I want to ask is what do you hope people will take away from the panel discussion you’re on?

Emma McDonald: I think the thing that I want people to take away is there is huge possibilities, but we need to be aware of where the harms are going to come from, particularly when the harms can be invisible.

And you need to go at the speed of trust, as the Privacy Commissioner would say, but we also need to go at a speed that provides room to learn and room to fail, because it is a new technology.

A lot of us are using it in different ways, so we need the space to be able to learn, to see where the failures are in a safe way, to realise when something is not an ideal use, and then to continuously learn as we go.

Because we are not gonna get it right, and we need to have the space to be able to innovate, but do it in a safe way where we can actually spend some time thinking.

With AI, I’m noticing that everything’s so fast now, because we can do things so quickly with ChatGPT. But you still need the time to process with your human brain and actually think, “This could be where the harm comes from,” particularly in really sensitive areas.



Hear Emma McDonald live

Hear Emma McDonald at Cybersecurity Showcase New Zealand (Wellington). Join the discussion on how public sector teams can adopt AI responsibly, strengthen governance and accountability, and manage hidden harms by creating room for learning while moving at the speed of trust.

Published by

Benji Crooks, Marketing Director, Delegate Sales