As CEO of the Public Sector Network, I’ve recently returned from a year of Government Innovation Week events around the world – from Australia and New Zealand to Canada, the UK and the US.
These conferences underscore a clear message: governments are on the move. Leaders are under pressure to improve citizen experience, boost efficiency and strengthen security, all while dealing with tight budgets and legacy systems. What we learned in 2025 is that technology – especially AI – has enormous potential to help meet these goals, but only as part of a broader effort. AI is not a silver bullet on its own; it must be woven into cybersecurity strategies, digital services, data governance and modernization plans.
Across our events, we saw public sector executives doubling down on citizen-centered design and trust. In fact, a recurring theme from roundtable discussions is that “citizens don’t get to ‘switch providers,’ so the bar is higher for government, not lower” . We heard repeatedly that user experience can’t be an afterthought – it is the strategy. One leader described elevating customer satisfaction metrics to the top governance table, using voice-of-customer data to drive every decision. In this Gov 3.0 world, government must be wrapped around people, not the other way around . This means digital services and customer experience (CX) are front and center.
AI as an Enabler – Not a Standalone Solution
One major takeaway is that AI is an enabler, not a standalone cure. It powers improvements in many areas, but its success depends on strong foundations in people, data and culture. For example, we learned that AI can supercharge legacy modernization. Many agencies struggle with decades-old systems, but AI tools – especially advanced code generators and intelligent agents – are being used to bridge that gap. As Deloitte notes, some government teams are deploying AI to “drive rapid prototyping, accelerate legacy modernizations and provide customer support,” and even using “advanced code generation tools [that] can accelerate the process of understanding legacy systems… and generating the resulting code of modernized systems.” . In practice, this means governments are piloting AI assistants that analyze old code, write requirements, or even draft new application code under human guidance. This isn’t about replacing staff, but about liberating our technical experts to focus on design and outcomes rather than rote rewriting. The result: incremental, faster modernization that builds a foundation for long-term transformation .
AI is also crucial in cybersecurity and trust. We heard at every event that cyber risk is one of the biggest threats governments face. Interestingly, the very capabilities of AI that drive innovation can also introduce new cyber risks. As one industry report puts it, AI creates a paradox: the same tools that enable better services also open new attack vectors . Yet leading agencies are flipping this on its head. For example, governments are embedding security into their AI initiatives from the start – applying strong access controls, isolating models and testing them against adversarial attacks. Forward-looking agencies are even using AI offensively: adversarial testing (trying to fool their own systems) and AI-driven monitoring to detect intrusions in real time . They’re “hardening models against manipulation” and setting up automated AI threat detection . Moreover, strategies like Zero Trust and data sovereignty are becoming mandatory. One new trend is “perimeterless security,” where unified Zero Trust networks and shared defense teams help protect citizen data across agencies . The lesson is clear: AI can strengthen security, but only if security is baked into every step. By planning for hybrid deployments (cloud, on-premises, and edge computing) and by using AI tools for defense, governments can harness AI’s promise and keep the public’s trust .
In digital services and CX, AI’s impact is equally pronounced. We saw agencies experimenting with chatbots, virtual assistants and personalized portals. But success here depends on truly understanding citizens. Our roundtable conversations echoed a single refrain: “Human experience is not a side project – it is the strategy.” . Governments are moving away from siloed digital products toward orchestrated experiences that span multiple agencies and life events. For example, one country runs continuous user-discovery loops to refine platforms in real time, measuring success by adoption, speed of benefit and equity of access – not just uptime or internal milestones. These are Gov 3.0 principles in action: measuring “adoption and completion rates, time-to-benefit, burden hours saved” instead of vanity metrics . AI fits into this by providing analytics, personalization and automation: predictive service routing, automated identity verification, natural-language assistants in multiple languages – all reducing friction and improving access. But AI-driven CX only works if we have the right data governance and architecture underneath.
Data, Governance and Cloud – Building Resilient Foundations
A theme that came through strongly is that data is the lifeblood of AI and digital government. Throughout 2025, public sector leaders told us they’re investing in data governance, modern data infrastructure and cloud strategies. In multiple countries, governments are launching new Chief Data or Chief Digital Officers and funding centralized data platforms. The idea is to create shared data foundations – common identity services, payment systems and analytics platforms – to break down silos . For example, one government described a reusable “platform operating model” for identity and notifications across agencies, governed by a single product team. This approach echoes what PSN has advocated: “experience-led portfolios” that prioritize citizen value and fund common capabilities as shared products .
AI’s role here is twofold. First, it helps governments make sense of data through analytics and machine learning. Second, it requires new ways of managing that data responsibly. In our Global PSN community, we’ve seen emphasis on AI governance frameworks. One recent industry outlook put it well: “Enterprise-wide AI governance and data maturity are positioning government agencies to adopt AI responsibly, enhancing service quality while safeguarding privacy, fairness and public confidence.” . This means policies and processes – like ethical review boards for algorithms, strict data-protection standards and model transparency (some agencies are publishing AI model cards) – become the scaffolding around AI initiatives.
Infrastructure is another key factor. As governments ramp up AI use, they’re grappling with where to run the compute. Deloitte warns of an “AI infrastructure reckoning” : cloud costs can skyrocket for heavy AI workloads, so many agencies will adopt a mix of on-premises data centers for steady AI tasks and cloud or edge computing for variable or latency-sensitive work . In practice, we’re advising agencies to think strategically: could edge computing, for example, allow a rural clinic to run AI diagnostics even if the network goes down, or help first responders analyze drone footage on the fly? Conversely, do we need backup plans if a private cloud service fails during a crisis? The bottom line is governments must align compute decisions with mission .
Finally, data sovereignty and interoperability remain high priorities. Several PSN forums discussed how to balance cross-border data sharing with local control. Some nations are exploring secure data hubs or “data trusts” so AI models can learn from broad datasets without violating privacy laws. Others focus on open data standards to enable cross-agency services – a core part of the Gov 3.0 vision of a single government journey for citizens .
Collaboration, Trust and Leadership – The Human Factor
Technology alone won’t get us to Gov 3.0. Across every discussion we hosted, one point was crystal clear: leadership and culture are the ultimate catalysts . Leaders who publicly tie funding to citizen outcomes and make CX metrics non-negotiable see results. One PSN member shared that simply adding “customer satisfaction” as a key decision criteria changed behavior organization-wide. Another described how the COVID crisis was used to permanently embed agile practices, rather than slipping back to old habits.
We also saw that trust must be earned through action, not declarations. In roundtables, participants insisted that buzzwords like “we value privacy” mean little unless backed by transparency and results. One attendee memorably quipped, “If you have to keep saying ‘trust,’ maybe you haven’t earned it yet.” . Practical steps are emerging, such as publishing open service standards, sharing incident logs, and even running resilience drills publicly. PSN’s own events emphasize the SPRITE framework (Security, Privacy, Resilience, Inclusion, Transparency, Ethics) as a pragmatic guide. Agencies are “operationalizing trust” by embedding SPRITE into procurement and design – for example, by making open-data dashboards and AI risk disclosures a requirement for projects . In short, building citizen confidence is now as important as building an app.
Collaboration came up as a “multiplier,” too. Rather than every department reinventing the wheel, leaders want to share code, playbooks and lessons. This is why PSN and FGI are championing shared “workspaces” and communities of practice. As one insight notes, “Transformation slows when everyone starts from scratch; it accelerates when people build on one another’s work.” . We saw practical examples: joint user research sessions across ministries, co-written contracts, and national hubs for reusable digital tools. Agencies that fund common platforms (identity, payments, notifications) as shared products are the bright spots on the Gov 3.0 map . These models – experience-driven investment and open documentation – are literally turning isolated innovation into institutional capability.
Lastly, the workforce imperative cannot be ignored. Governments are committing to “building an AI-fluent workforce” . That means retraining existing staff, running fellowship programs, and even working with schools to cultivate AI skills for future civil servants. We heard repeatedly that no agency can get the benefits of AI and cloud without investing in people. As one leader put it, the move from Gov 2.0 to Gov 3.0 is “not only about technology; it’s about enabling the humans who design, deliver and govern it” .
Conclusion: The Road to 2026
Wrapping up Government Innovation Week 2025, it’s clear we stand at an inflection point. Governments from Canberra to Ottawa to Washington DC are telling us they’re ready to treat citizens like customers – tightly integrated across services, protected by robust security, and empowered by data. The trends we see for 2026 all revolve around harnessing AI and digital tech within a human-centered, collaborative framework.
Public Sector Network will continue to convene leaders around this shared vision. As one conclusion from our roundtables put it, the promise for the year ahead is simple: “We will wrap government around people – and we’ll do it together, through shared ideas, open playbooks, and collective delivery.” . That’s a fitting challenge for 2026. By aligning leadership, culture and technology – and by using AI as the powerful enabler it is meant to be – government can become more efficient, secure and inclusive than ever before.