Foreword
Over the past year, the public sector conversation about AI has matured. We have moved beyond first principles and baseline governance questions, and into a more demanding reality: expectations are rising, budgets are tight, and the pace of change is being set globally. At the same time, the most important decisions we need to make are not about a single tool or model, but about how we design the foundations and operating practices that let AI deliver value safely, repeatedly, and at scale.
The Technology, Data & AI stream at the Government Innovation Showcase Victoria 2026 made that shift clear. Speakers challenged us to separate hype from outcomes, and to treat AI as a capability uplift that reimagines ways of working, not something “added on” to existing processes. This, in turn, is reshaping how public sector organisations operate. At scale, success depends on shifting from reinventing solutions to building shared foundations. It means capturing knowledge, reusing proven approaches, and extending platforms instead of rebuilding them.
The strongest message was practical. Meaningful gains come from redesigning end-to-end work and embedding the capabilities that support it, particularly the data, integration, and governance layers that provide context, control, and continuity in day-to-day operations.
A structural tension remains between short-term delivery and long-term transformation. As the Department of Government Services noted, organisations need to "tactically fix for today while reinventing for tomorrow", aligning with Gartner’s “defend and extend” model. This requires actively managing both horizons: delivering immediate improvements while continuing to invest in the capabilities that deliver significant returns at scale. Annual funding cycles will continue to constrain this balance, making it critical to manage the gap between ambition and sustained investment.
The priority is clear. Align foundational work to your AI ambition from the outset. Treat governance as an enabler of trusted outcomes, not an overhead. And prioritise a unified context layer that allows AI to be embedded into everyday operations and deliver measurable outcomes at scale.
Trust was the consistent throughline. Whether the focus was privacy, transparency, cyber resilience, information quality or service delivery, the stream reinforced that public value depends on being able to “show your working” and stand up to scrutiny. That means clear accountability, human checkpoints where they matter, strong metadata and lineage, and information integrity that is ready for both audit and action. It also means building organisational confidence to pursue innovation where it genuinely lifts the scale, quality and outputs needed to deliver on the productivity promise.
This article brings together the key takeaways from each session, highlighting what they signal for public sector leaders right now. The intent is to make the insights easy to apply: where to focus effort, what to prioritise first, and how to build the conditions for AI that is trustworthy, resilient, and genuinely useful.
Opening the day, Minister Danny Pearson left a clear challenge: be bold, be brave.
Hype vs Reality – Lessons Learnt in AI Integration.
Syed Ahmed, Executive Director of Data and Analytics, Department of Transport and Planning
- This session cut through the hype by framing AI as a workforce capability shift, not a replacement story. The practical message was that teams should expect modest gains from “chatbot-style” use (because humans still need to validate), and focus attention on where the real value sits: automating end-to-end processes and workflows.
- It walked through three applied patterns and what they mean in practice: conversational AI (~20% uplift, but only with training and human review), agentic workflows (2x–20x gains when you automate multi-step work like briefs and incident analysis), and AI-assisted engineering (meaningful efficiency, but only if you redesign the whole delivery process rather than speeding up old steps).
- The takeaway for leaders was that successful adoption is less about buying tools and more about operating AI like a new team member: define the business problem first, get your data definitions and context right, build guardrails and logging, and choose vendors that integrate cleanly with legacy systems. The warning was clear: AI outputs are inherently unpredictable, so accountability, governance, and capability-building become more important, not less.
Extending governance reach. Elevating quality. Powered by AI.
Cassandra Bisset, Vice President Strategy, Objective
- This session reframed the last year of public sector AI work as a shift from “getting governance right” to dealing with a much faster, global pace of change. The speaker pointed attendees to forward-looking signals (such as futurist Amy Webb's inaugural Convergence Outlook, launched at SXSW in the US) and argued that government leaders need to plan beyond immediate fixes. The direction of technology is already shaping long-horizon issues like critical assets (water, power), care systems, and national competitiveness.
- The core message for agencies was that “do more with less” is colliding with a hard reality: AI projects are expensive, budgets are tight, and around 50% of AI projects are failing. The speaker’s interpretation was that many efforts stall amid extensive discussion of risk and governance, without clear, practical “breadcrumb trails” for teams to follow through to implementation, eroding internal confidence. At the centre of this gap is information scaffolding: the foundation work needed to avoid joining the roughly 50% of projects that fail due to information quality and sensitivity-screening issues.
- The practical takeaway: stop chasing surface-level productivity wins. Invest in the unsexy foundation work that makes AI trustworthy and scalable: information integrity, curated and contextualised datasets (especially unstructured content), and risk controls built in by design. This also means establishing a “trust centre” that can stand up to audit and public scrutiny. This is the work we’re increasingly doing alongside organisations, helping teams strengthen their information scaffolding so AI can move from experimentation to reliable use. For most roles, the implication is clear: the next wave will come from preparing and enriching information so AI can deliver reliable insight, not from deploying shiny tools on messy data.
Privacy, Data Governance, Transparency and Trust
Sean Morrison, Information Commissioner, Office of the Victorian Information Commissioner
Imka Seecharan, Chief Technology and Information Officer, City of Melbourne
Darshil Mehta, Principal, Data Governance and Capability, Australian Super
Veli Fikret, Senior Director – Data Management, Australian Taxation Office
Charlie Farah, Field CTO, Analytics/AI, Qlik
- This panel demystified “trustworthy AI” as less about new technology and more about disciplined operating practice: leadership-owned frameworks, stopping shadow AI, training and literacy, and baking governance in before deployments rather than bolting it on afterward. The repeated warning was that shiny, rushed implementations create avoidable risk and reputational harm.
- The practical governance model they converged on was “governance + assurance”: not just policies, but the ability to show your working end-to-end. That means observability, auditability, human-in-the-loop checkpoints, and strong data foundations like metadata, classification, tagging, and lineage so agencies can explain what data was used, for what purpose, and how a decision was produced.
- The “what this means for my role” takeaway was that culture and capability-building are now first-order requirements, not nice-to-haves. Agencies need AI literacy (and “killer questions”, not just killer apps), a non-punitive innovation environment within guardrails, shared registers/councils to triage use cases, and cross-agency collaboration to avoid reinventing the wheel. The session landed on a cautious stance: the goal is not “balance” for its own sake, but measured, step-by-step adoption because significant harms to public trust are hard to reverse.
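The “show your working” discipline the panel described can be made concrete. As a minimal sketch only, the snippet below shows one way an agency might record what data was used, for what purpose, and whether a human checkpoint occurred for each AI-assisted output. All field names, values, and the record structure are illustrative assumptions, not a standard or any agency’s actual schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical audit record -- field names are illustrative, not a standard.
@dataclass
class DecisionRecord:
    use_case: str        # registered use case (e.g. from a shared register)
    datasets: list       # datasets consulted, with classification tags
    purpose: str         # stated purpose limitation
    model_version: str   # which model/prompt version produced the output
    human_reviewed: bool # human-in-the-loop checkpoint completed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    use_case="incident-brief-summarisation",
    datasets=[{"name": "incident_reports", "classification": "OFFICIAL"}],
    purpose="operational reporting",
    model_version="summariser-v3",
    human_reviewed=True,
)

# Serialise so the record can be stored, queried, and later audited.
audit_line = json.dumps(asdict(record))
```

Even a simple append-only log like this supports the panel’s “governance + assurance” point: policies say what should happen, while records like these let an agency demonstrate end-to-end what actually did.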
Building Resilient, AI-Ready Government Services: Governance, Security and Innovation
Matt Fowler, Director, Asia Pacific and Japan Field PLM, Campus and Branch, Hewlett Packard Enterprises
- This session positioned the network as the quiet enabler of public sector AI ambitions: agencies want reliable digital experiences for staff and citizens, faster incident resolution, better visibility, and more time for innovation, but they are battling rising complexity, tighter resourcing, and a growing security attack surface.
- It clarified two distinct “AI and networking” plays: AI for networking (AIOps that uses telemetry, ML, and GenAI assistants to predict issues, cut noise, and speed troubleshooting for non-experts) and networking for AI (low-latency, high-throughput, secure data centre networks that support model training and inference, including private/in-house models for security and control).
- The practical implication for IT leaders was that AI value depends on fundamentals: pervasive security (not bolt-ons), real-time telemetry, and open integration so tools can share context (the talk flagged Model Context Protocol as “USB‑C for AI” thinking). The outcomes to aim for are measurable operational relief, like fewer support tickets, fewer site visits, and less “keep the lights on” effort, so teams can redirect capacity into new digital services and AI-enabled experiences.
What makes AI work? Assessing Emerging Business Demands and Requirements for IT to Support CX, Digital, Data and AI.
Nikhil Patinge, Director – WoVG Digital Integration Services | Technology & Digital Platforms, Department of Government Services
Facilitator: Ash Dhareshwar, Director, Strategy and Innovation | Infrastructure Services, Cenitex
- This session defined “AI” in plain terms as digital intelligence at scale, and argued the real challenge is not everyday consumer tools, but enterprise AI that actually reshapes end-to-end workflows inside government and large organisations. The speaker’s point was that the “ChatGPT moment” did not automatically translate into workplace transformation because enterprise AI needs context, controls, and integration to function reliably.
- The core takeaway was that enterprise AI success is mostly a data + integration problem: AI sits on top, but it only becomes useful when you can pull the right data and knowledge from many siloed systems through an integration “fabric” that provides context. This fabric was framed as a logistics layer (getting the right ingredients to the right place) and also a regulatory abstraction layer (enforcing what can and cannot be shared, purpose limits, and governance rules without blocking collaboration).
- For leaders, the practical message was to invest in “digital public infrastructure” that makes AI scalable and resilient: clean data foundations, standardised APIs, integration that survives machinery-of-government changes (abstracting endpoints away from department names), and security/governance/identity baked into the platform. The role of industry is then to build workflow-level solutions on top of these governed, standardised building blocks, which improves speed to delivery and lowers cost per transaction over time.
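The idea of integration that survives machinery-of-government changes can be sketched simply: consumers bind to stable capability names rather than department-specific endpoints, so a restructure only requires updating a registry, not every client. The sketch below is a minimal illustration of that abstraction pattern; the capability names, URLs, and registry design are all hypothetical.

```python
# Hypothetical service registry: clients resolve stable capability names,
# never hard-coding department-specific hostnames. A machinery-of-government
# change then means one registry update, not changes in every consumer.
REGISTRY = {
    "transport.incidents": "https://api.dtp.example.gov.au/v1/incidents",
    "planning.permits": "https://api.dtp.example.gov.au/v1/permits",
}

def resolve(capability: str) -> str:
    """Resolve a stable capability name to its current endpoint."""
    try:
        return REGISTRY[capability]
    except KeyError:
        raise KeyError(f"No endpoint registered for capability '{capability}'")

# A restructure moves permits to a different department: only the
# registry entry changes, while every caller keeps using the same name.
REGISTRY["planning.permits"] = "https://api.dgs.example.gov.au/v1/permits"
```

In practice this layer would also carry the governance rules the session described (purpose limits, what can and cannot be shared), but the core design choice is the same: name the capability, not the department.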