Harnessing AI in the Australian Public Sector: Insights from Leading Practitioners
Artificial intelligence is no longer a futuristic concept—it is actively reshaping how Australian government departments operate, make decisions, and deliver services. Yet, implementing AI in complex public sector environments requires a strategic approach, executive buy-in, and workforce engagement.
A recent panel discussion brought together government and private sector leaders to share lessons, practical applications, and success stories in AI adoption.
Measuring Productivity and Finding Value
Heather Cotching, Assistant Secretary and Branch Head of Data, Digital and Analytics at the Department of the Prime Minister and Cabinet, highlighted the challenges of evaluating productivity in public sector roles.
“We have constantly shifting priorities and infinite task lists. Even small gains—like reducing the time spent drafting reports, agendas, or meeting notes—can justify the investment in AI licenses.”
By focusing on incremental improvements, Heather explained, departments can make a strong business case for AI adoption. Even if AI only saves a few minutes per task, when scaled across hundreds of staff and repetitive tasks, the cumulative savings are significant. Research from the Digital Transformation Agency suggests that modest productivity improvements can translate into meaningful cost reductions, freeing staff to focus on higher-value work.
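As a rough illustration of that scaling argument, the back-of-envelope calculation below uses entirely hypothetical figures (minutes saved per task, tasks per day, headcount) rather than numbers cited by the panel; the point is only that small per-task savings compound quickly across a large workforce.

```python
# Back-of-envelope estimate of cumulative time savings from small per-task gains.
# All figures below are illustrative assumptions, not data from the panel.

MINUTES_SAVED_PER_TASK = 5      # e.g. a faster first draft of a meeting note
TASKS_PER_DAY = 4               # routine drafting/summarising tasks per person
STAFF = 300                     # staff with access to an AI licence
WORKING_DAYS_PER_YEAR = 220

minutes_per_year = MINUTES_SAVED_PER_TASK * TASKS_PER_DAY * STAFF * WORKING_DAYS_PER_YEAR
hours_per_year = minutes_per_year / 60

print(f"Estimated time saved: {hours_per_year:,.0f} hours per year")
# With these assumptions: 5 * 4 * 300 * 220 = 1,320,000 minutes, about 22,000 hours per year.
```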
AI in Policing: Speed, Safety, and Insights
For Hiro Noda, Coordinator for AI and Emerging Technology at the Australian Federal Police, AI is transforming investigative work.
“We deal with huge volumes of data—from phones to computers—and manually reviewing it could take months.”
Hiro described how AI now helps identify objects, translate text, and flag harmful content, accelerating investigations and reducing human exposure to dangerous material. Beyond processing speed, AI enables officers to detect subtle patterns in youth communications, including emojis and slang, which may indicate child protection risks. In doing so, AI doesn’t replace investigators—it allows them to focus on interpreting insights and taking action.
Defining Problems and Outcomes
Clara Lubbers, Director of Strategy in Digital Services at the Department of Health, Disability and Ageing, stressed that clear problem definition is key to demonstrating AI’s value.
“If you clearly define the problem and the outcomes, it becomes much easier to demonstrate the value to executives.”
Clara cited the Therapeutic Goods Administration, where AI has shortened medicine approval timelines and improved productivity. By connecting AI projects to measurable outcomes—such as faster approvals, reduced backlog, or better service delivery—public sector leaders can show tangible benefits to executives, making it easier to secure support and funding.
Humans in the Loop: Building Trust
For Dan Saldi, Founder and Applied AI Director at Xaana, trust in AI outputs is critical.
“Ownership still lies with humans. Traceable outputs with citations and references give executives confidence to rely on AI in decision-making.”
Dan explained that executives are more likely to trust AI when it can provide evidence for its recommendations. For instance, when AI models cite specific documents or datasets, decision-makers can verify results, making AI a transparent and accountable tool rather than a “black box.” This human-in-the-loop approach ensures that final decisions remain guided by expert judgment.
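A minimal sketch of what that pattern can look like in practice is shown below. It is not drawn from any system the panelists described; the structures and names are hypothetical. The idea is simply that a recommendation carries its supporting citations and cannot be actioned until a named reviewer signs off.

```python
# Illustrative human-in-the-loop pattern: an AI recommendation carries its
# citations and only becomes actionable after a human reviewer approves it.
# Names and fields are hypothetical, not from any system discussed by the panel.
from dataclasses import dataclass, field


@dataclass
class Citation:
    source: str       # e.g. a document title or dataset name
    reference: str    # e.g. a section, page, or record identifier


@dataclass
class Recommendation:
    summary: str
    citations: list[Citation] = field(default_factory=list)
    approved_by: str | None = None   # set only after human review

    def approve(self, reviewer: str) -> None:
        """Record the human reviewer who takes ownership of the decision."""
        if not self.citations:
            raise ValueError("Recommendation has no citations to verify.")
        self.approved_by = reviewer

    @property
    def actionable(self) -> bool:
        """Only approved recommendations can feed into a decision."""
        return self.approved_by is not None


rec = Recommendation(
    summary="Prioritise backlog category X for manual review.",
    citations=[Citation(source="Quarterly intake report", reference="Table 3")],
)
assert not rec.actionable          # blocked until a human signs off
rec.approve(reviewer="Duty analyst")
assert rec.actionable
```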
Workforce Engagement and Training
Eleanor Williams, Managing Director of the Australian Centre for Evaluation at the Treasury, highlighted the workforce’s role in AI adoption.
“Public servants are essentially training AI when they break tasks into smaller pieces and structure them in a way the AI can execute.”
Eleanor emphasized that successful adoption depends on empowering “super users” to champion AI internally while providing structured training for the broader workforce. This approach ensures that AI tools are used effectively and responsibly, creating a culture where staff are confident experimenting with AI to improve outcomes.
Securing Executive Buy-In
The panel shared practical strategies for gaining executive support:
Heather Cotching: “The conversation changes completely when you can show them a working AI model rather than just talk about it in theory.” Demonstrations allow executives to see AI in action, making its capabilities concrete rather than abstract.
Hiro Noda: “Executives need to give the workforce explicit permission to use AI responsibly, so staff aren’t left unsure about what’s allowed.” Clear organizational guidance encourages experimentation without fear of compliance breaches.
Clara Lubbers: Focus on outcomes. Highlighting measurable benefits aligns AI projects with strategic goals, increasing executive confidence.
Dan Saldi: Ensure human oversight and traceable citations. Transparency and accountability foster trust and reduce hesitation to adopt AI solutions.
Eleanor Williams: Use super users to drive adoption and embed training across the organization. Skilled internal champions accelerate uptake and help maintain consistent standards across teams.
Proven Impact
The panel highlighted real-world outcomes from AI adoption:
AI-driven hospital discharge analytics reduced readmissions by 60%, resulting in millions of dollars in savings from avoidable healthcare costs.
AI-assisted policing accelerated investigations and minimized investigators’ exposure to harmful material.
These examples demonstrate that AI is more than just a productivity tool—it enhances safety, mitigates risk, and facilitates evidence-based decision-making.
AI as a Force Multiplier
“Even small steps with AI can create huge productivity gains,” Heather Cotching said.
Panelists agreed that AI’s true potential lies in augmenting human capability, not replacing it. By defining clear outcomes, empowering staff, and demonstrating real-world applications, the Australian public sector is unlocking AI’s transformative possibilities—one practical project at a time.