In one of the most candid and thought-provoking sessions of the Showcase, David Shanks urged the public sector to confront an uncomfortable reality: government can neither opt out of digital transformation nor rush into it blindly. As he opened, he joked about being the final speaker before lunch — but quickly shifted into one of the most grounded discussions of the day on navigating the dual forces of innovation and risk.
“You could be forgiven for thinking government is not moving fast enough to seize AI’s opportunities… but equally, the real challenge might be using these technologies without making big mistakes or causing harm.”
Shanks framed the AI moment as a fork in the road: a future of efficient, equitable services on one path — and a future defined by surveillance, security breaches, environmental strain, and social harms on the other. History, he reminded the room, suggests we will land somewhere in between. “We’ll make mistakes, we’ll course-correct, and the task is to strike the balance as best as we can.”
A 12-year-old story: The moment that changed his view of tech forever
Shanks recalled a meeting 12 years ago in a secret Silicon Valley lab — a moment he described as foundational to his thinking. Behind NDAs and closed doors, scientists were developing “biological analog computers”: chips that mimicked the architecture of biological brains.
At that time, the chips had reached “mouse-level intelligence,” with “cat-level already on the roadmap.”
“This was twelve years ago. I don’t know what they’re doing today, but they won’t have stopped.”
Beyond technical capability, he noted the military and geopolitical drivers accelerating AI development through DARPA and similar funding streams. Even then, the scientists pushing the boundaries admitted they could not fully grasp the systemic, ethical, and social implications of what they were building.
“The technology is increasingly opaque, difficult to parse, and difficult to predict… We are on a roller coaster, and even if we wanted to get off, we cannot.”
The regulatory gap no one saw coming
Jumping forward to 2017, Shanks reflected on his time as New Zealand’s Chief Censor — a role he described with humour:
“A dystopian title in its own right.”
He shared the now-infamous case of 13 Reasons Why, a Netflix series that included a graphic suicide scene that would have been restricted in cinemas but slipped through because streaming platforms fell between regulatory categories. With New Zealand’s high youth suicide rates, the stakes were severe. Shanks used his authority to manually classify the content — but the case highlighted a deeper problem: analog regulatory frameworks cannot keep pace with real-time digital platforms.
“The system simply wasn’t designed for this. There were gaps everywhere — and those gaps are widening with AI.”
The case pointed to a wider pattern: emerging technologies evolve faster than legislation, safeguards, and public expectations.
Where government goes from here
Shanks’ overarching message was neither doom nor hype — but realism:
AI will evolve whether the government is ready or not.
Regulation must shift from slow, reactive enforcement to agile, principles-based oversight.
Public good must stay at the centre as risks grow alongside capabilities.
Above all, he pushed leaders to stay grounded.
“We’re navigating everything in between — and our responsibility is to do it with care, humility, and awareness.”
Session takeaways:
AI adoption is inevitable — driven not just by commercial interests but global military and geopolitical imperatives.
Regulatory systems lag behind — digital platforms and AI models evolve far faster than analog governance structures.
Risk and innovation must advance together — neither maximal acceleration nor excessive caution will serve the public good.
Opaque technologies require new oversight frameworks — especially as AI systems become more complex and unpredictable.
The government’s role is to protect citizens while enabling progress — not choose between extremes.