State Scene Setter: Government Spending, Direction and Priorities
Judy Hurditch, Managing Director and Principal Analyst, Intermedium
- This session opened the event by framing the day’s purpose and context: who the chair is (a long-time public sector tech researcher and former Deputy Commissioner of Taxation), why digital and AI change is accelerating, and how the PSN community and app are meant to help attendees stay connected and access content beyond today.
- For your organisation, the practical focus was “where are we really at with AI?”: the live poll prompted leaders to assess where AI value will land (productivity, citizen experience, risk and compliance, etc.), how ready the organisation is to move beyond pilots into operations, and what’s needed to innovate responsibly (guidelines, assurance frameworks, tools, training, shared use cases, experts).
- The takeaway is that AI maturity is not just technology adoption; it is governance, capability, and execution: you should be able to clearly articulate your current maturity level, what "responsible deployment" requires in your context, and what you need next to move from experimentation to scaled delivery, using peer sentiment and benchmarks as a reality check.
Ministerial Address
Hon Danny Pearson MP, Minister for Government Services, Victorian Government
- This session set a clear direction for Victoria's innovation agenda: the state is positioning digital infrastructure, especially data centres (framed as the "digital spine"), as a foundational economic play in a global race for investment, productivity, and competitiveness.
- For your organisation, the message was that AI adoption is now a leadership and service-design challenge, not a tech experiment: the focus should be on measurable citizen outcomes (less friction, faster access to services, reduced admin burden for frontline staff) and a more consistent, “one government” digital experience rather than siloed, department-by-department delivery.
- The critical enabler is social licence and trust: the talk argued against both heavy-handed fear-based regulation and “let it rip” deployment, and instead pushed for transparent public value narratives, ethical safeguards, nationally consistent assurance, and serious investment in upskilling and transition pathways (so AI disruption is managed, not ignored).
Secretary's Discussion
Matt Carrick, Secretary, Department of Jobs, Skills, Industry and Regions
- This session connected AI directly to jobs, skills, and productivity, arguing it is both a growth sector in its own right and a “general purpose” capability that will lift (or reshape) productivity across the Victorian economy, including inside government where budgets and workforce capacity are increasingly constrained.
- For your role in the public sector, the practical message was: use AI to stretch limited capability and improve outcomes, not just automate for automation’s sake. The examples (V/Line transport planning and State Trustees will-making) framed AI as a way to shift effort from slow technical work to higher-value human work, which should translate into better service quality and better asset utilisation.
- The enabling agenda is a combined skills pipeline + infrastructure pipeline: universities and TAFEs are positioned as the engine for AI-literate graduates and mid-career upskilling (including partnerships like La Trobe–NEXTDC–NVIDIA and TAFE-led centres of excellence). In parallel, Victoria is pursuing data centres and renewables as linked strategic investments (over $20B in the pipeline) that create the compute foundation for AI, with an explicit emphasis on doing this responsibly by balancing power and water needs.
Innovation Under Pressure: How Governments Modernise Without Losing Control
Neville Pinto, Strategic Security Advisor - ANZ, Splunk
- This session reframed “innovation under pressure” as a control and visibility problem, not an ideas problem: public sector teams are being pushed to modernise and adopt AI while facing worsening cyber threats, budget constraints, skills shortages, and complex political and operational realities.
- For your organisation, the core warning was that AI and automation will amplify what already exists: if data pipelines, integrations, dependencies, and operational telemetry are weak or fragmented across legacy and cloud, AI will move fast but produce unreliable outcomes, increase complexity, and raise your risk profile, especially through configuration drift, unknown dependencies, and integration failures.
- The practical takeaway is a leadership choice: do not let automation outrun governance. Build “security as infrastructure” with three explicit layers (control, assurance, visibility), pursue disciplined simplification to reduce operational complexity, and keep institutions understandable, observable, and controllable because in government, scalable innovation ultimately runs on trust.
Executive Panel: The Digital Citizen of 2030 - Thinking Ahead to Prioritise Now
Adam Carthew, Executive Director, Digital Platforms, Department of Government Services
Ashleigh Hart, Chief eHealth Strategy Officer, Victorian Department of Health
Dr Steve Hodgkinson, Former Chief Digital Officer, Victoria Police
Kate Tollenaar, Government Leader and Department of Defence Managing Director, IBM
- This panel argued that the “digital citizen of 2030” is effectively today’s citizen with rising expectations: people will judge government against the best digital experiences they get elsewhere, and if services are slow, fragmented, or inconsistent, frustration quickly becomes a trust problem. The priority is joined-up delivery across agencies so citizens are not forced to be the “integration layer” moving data and context between departments.
- For your organisation, the message was to standardise the foundations and execute reliably at scale: reuse common building blocks (for example whole-of-government platforms like API gateways and identity verification) so teams are not repeatedly solving the same problems, and so you can deliver hundreds of changes a year safely. Innovation is framed as a delivery capability, sustained funding model, and disciplined ways of working, not a series of one-off initiatives.
- Trust in 2030 was positioned as verifiable and continuous, not assumed: citizens should be able to prove what’s needed (for example proof-of-age) without oversharing personal data, and agencies must invest in security, privacy, and resilience to prevent major outages and breaches. The panel also cautioned that AI should not “digitise bureaucracy” by default, and pushed for product + service design approaches (and governance) that remove friction at the root, while preparing now for emerging risks like post-quantum security.
Transforming service delivery with Agentic AI: Presented by Salesforce
Matt Henderson, Inspector, VIC Police
John McLaverty, Program Manager, VIC Police
- This session framed the core challenge as scaling government service delivery without scaling cost, while still building (and keeping) citizen trust. The proposed solution is “context” at scale: using government’s rich but siloed data to support empathetic service delivery, with controlled, need-to-know sharing rather than broad replication.
- For your organisation, the practical message was that moving from automation → generative AI → agentic AI only works if you provide strong grounding and clear responsibility boundaries. “Responsible AI” was described as a shared model: agencies and vendors should build in protections like PII masking, auditability, and controlled data handling, and define what digital agents are permitted to do in the same way you define staff access and delegations.
- The Victoria Police case study showed a concrete “what this looks like in practice”: AI-generated police narratives convert scattered form inputs and free-text into a structured format police can act on, cutting minutes of manual work per report at scale and reducing follow-up calls. The next step is conversational, multilingual reporting (an “Officer Vic” interface) that guides users through mandatory questions, checks for emergencies, and improves accessibility for people with literacy or language barriers, while still keeping human review where required for quality and safety.
Executive Panel: Creating A Modern, Relevant and Trustworthy Victorian Public Service: Fostering A "Future-Ready" Workforce in the AI Era
Catherine de Fontenay, Commissioner, Productivity Commission
Kelly Crosthwaite, Deputy Secretary, Bushfire and Forest Services, Department of Energy, Environment and Climate Action
Lisa Ryan, Advisory Board Member, Victorian Skills Authority
Facilitator: Ash Dhareshwar, Director, Strategy and Innovation | Infrastructure Services, Cenitex
- This session framed the “future workforce” question as less about predicting specific job titles, and more about building a public service that can learn continuously as AI capabilities change faster than formal workforce planning cycles. The speakers argued for replacing fear with a practical stance: assume AI will be embedded everywhere, and focus on making upskilling and formal learning easy, ongoing, and accessible.
- For your organisation, the emphasis was on trustworthy adoption: AI should be treated as a tool with clear accountability (you cannot “blame the robot”), backed by governance, ethics, bias awareness, and strong data management. The leadership risk is getting stuck debating individual productivity tools, rather than doing the harder work of data readiness, automation governance, and organisational redesign to safely capture value at scale.
- The practical workforce shift is toward digital readiness + human judgment: everyone needs baseline digital/AI literacy (with a useful model of digital experts, digital-enabled roles, and digitally informed roles), but the differentiator is critical thinking, creativity, data interpretation, and decision quality. High-stakes environments reinforced that AI should assist and improve human judgment, not replace it, and that agencies should use senior expertise both to train AI and to deliberately develop junior staff capability in a world where “grunt work” is increasingly automated.
AI Governance – How to Get it Right
Lauren Solomon, Special Advisor, Governance Practice, National AI Centre
Facilitator: Ash Dhareshwar, Director, Strategy and Innovation | Infrastructure Services, Cenitex
- This session clarified the National AI Centre’s role as enabling AI uptake through voluntary guidance, practical support, and sector engagement (not regulation), with a focus on helping organisations capture AI benefits while managing risk.
- For your organisation, the key lesson was to make responsible AI guidance usable in the real world: meet teams where they are (early vs advanced maturity), reuse existing governance (privacy, cyber, records, info management) rather than inventing new structures, and supplement principles with templates, playbooks, and tools that people can actually implement.
- The practical takeaway was a balanced approach to capability and trust: encourage safe experimentation and learning ("fail safely"), but keep leadership attention on the harder enablers like data readiness, organisational transparency, workforce engagement, and feedback loops. On transparency statements and training, the direction was "context-led": more openness tends to build trust, but avoid meaningless disclosure overload, and treat AI literacy as essential across all levels even if not formally mandated.