Benji Crooks, Marketing Director at Public Sector Network, sat down with Phillip Wagner, principal cybersecurity specialist, ahead of his upcoming Public Sector Network course, Artificial Intelligence (AI) Policy Development. Wagner shared why most AI training is too generic, and how a practical, tailored policy, clear approval and assessment roles, and high-level guardrails can help public sector teams adopt AI safely, build trust, and move from experimentation to confident implementation.
Q: For the audience, can you introduce yourself and the course you'll be teaching at Public Sector Network?
Phillip Wagner: Hello, I’m Phillip Wagner. I’m a principal cybersecurity specialist and run my own company. The course is about how to develop an organisation-centric artificial intelligence policy, putting rules around the use of AI in your organisation.
Q: There’s a lot of AI training in the market right now. What makes this course different, especially for public sector leaders and practitioners?
Phillip Wagner: There’s a lot of generic AI training out there. This course is highly specific. First, it teaches you how to write an AI-centric policy tailored to your organisation. Second, I was the first person in Australia to write an approved AI policy for a Commonwealth department. Third, participants receive a template as a Microsoft Word document. We work through it together, and you can ask questions; by the end of the two days you will have already written a substantial part of your policy.
Q: What are the learning outcomes? What should participants leave the course with?
Phillip Wagner: Participants will understand what AI actually is, and what rules need to be put in place, including who can access it, what data can and cannot be processed, what systems and applications are approved, and how to conduct risk assessments of AI products and services. By the end, participants should feel confident implementing these controls in their organisation to keep it safe.
Q: AI governance and policy are changing constantly. How does the course help participants develop policies that stay relevant?
Phillip Wagner: If you don’t have a policy, you don’t have rules in place to guide what people can and can’t do, and AI use becomes a free-for-all. A good policy creates rules that allow your organisation to use AI safely and in a controlled manner. Governance is growing rapidly across the country. I’m a governance specialist, and I teach governance as part of my practice, including AI governance. The same applies in cybersecurity: assess risk, measure it, mitigate it, and be prepared for incidents if something goes wrong. It’s also about keeping information safe and reassuring Australians that their information is being protected.
Q: Guardrails are a major part of responsible AI. What safeguards do you focus on at a high level?
Phillip Wagner: At a high level, we cover who the approving authority is for AI products and services in your organisation, who the assessing authority is that reviews them and recommends approval or rejection, and the responsibilities at each level. That includes individual staff members, supervisors and managers, vendors and suppliers, and contractual requirements. Finally, we cover CEO and board-level accountability, because they are ultimately responsible if there is a serious breach.
Q: We’re seeing strong momentum around AI capability across the public sector. Where do you see this course fitting into that uplift?
Phillip Wagner: The course gives organisations the tool, the policy, and the process to use AI correctly and safely. It’s like learning to drive a car: driving has risks, but rules make it safer. We also teach methods to pilot AI in your environment in a way that won’t compromise the rest of your systems. When we ran this course last year we had around 25 participants, including very senior leaders across state, federal, and local government.
Q: Is the course open to everyone who deals with AI in some way?
Phillip Wagner: Yes. We designed it to be broadly available. The material references Commonwealth guidance as well as strong state-based examples, and we take an agnostic approach by using the best available standards. For local government, where there isn’t always specific legislation, the guidance is to adopt the highest practical standard. Participants leave with a template they can edit and tailor, with prompts and comments to help them make it fit their organisation.
Join Phillip Wagner for the online training course Artificial Intelligence (AI) Policy Development. In this practical course, you’ll learn how to create clear rules for AI use in your organisation, set approval and assessment responsibilities, and apply high-level guardrails to support safe, confident adoption.