Navigating AI Governance
In Australia, applying AI effectively in government departments to process internal information means navigating a complex and evolving regulatory space. This environment includes a mix of voluntary standards, national frameworks, and international guidelines, each offering a different pathway but often overlapping in purpose and scope. Understanding which Responsible AI tools are most appropriate for a government context is essential, though challenging, given the myriad options available.
The Australian AI Ethics Framework provides a foundational set of principles, including fairness, privacy, and accountability. The Voluntary AI Safety Standard lays out a set of guardrails based on these ethical principles, and the National Framework for the Assurance of AI in Government translates both into a more step-by-step process. This framework is designed to align AI development with ethical norms and societal values, but integrating it into a government context is not always straightforward. Each department’s mission and unique requirements often demand tailored interpretations of these broad principles, especially when applied to the diverse and sensitive internal information managed by government agencies.
Two key international standards are being referenced by Australian government policy and processes: ISO/IEC 42001:2023 and ISO/IEC 38507:2022. ISO/IEC 42001 provides a detailed set of controls for implementing management systems for AI systems, and will eventually provide a certification pathway for AI systems meeting established governance criteria, making it a promising tool for both public and private sector entities seeking formal recognition of their AI practices, similar to ISO/IEC 27001 for cyber security. ISO/IEC 38507 is particularly relevant to organisations already following the ISO/IEC 38500 series of standards on IT governance, providing guidance on how to address specific governance and policy requirements for AI systems. However, implementing these global standards locally can be complex, as they don’t always align seamlessly with specific Australian regulatory requirements.
Essential Risk Assessments, Frameworks, and Practical Tools for a Secure Future
The first step in any of these governance frameworks is a risk assessment. The Queensland Government has developed the Foundational AI Risk Assessment (FAIRA) to provide a practical process for conducting a risk assessment, including recommendations for risk mitigation activities that can be implemented at a number of different levels within an organisation. Although developed to support Queensland Government departments and agencies, FAIRA isn’t specific to those organisations and is readily available for use by both private and public sector organisations.
Federally, the Digital Transformation Agency has produced guidance on how to implement the National AI Assurance Framework. While aimed at organisations that are formally part of the DTA’s pilot implementation of the assurance framework, the guidance documents are a useful resource for any organisation seeking to implement responsible AI controls. Government departments implementing AI solutions to process sensitive information can benefit from the DTA’s guidance on each of the specific Voluntary AI Safety Standard Guardrails, including pointers to specific legislation, Australian Public Service policies, and processes and tools for tackling tasks like de-identification of private information, implementing data quality controls to address risks around fairness, and considerations around testing the accuracy, reliability and safety of AI systems.
Adopting a Responsible AI strategy in government requires a deliberate and structured approach, and each organisation will have its own specific challenges with respect to the kind of data being used and the kind of tasks AI is being applied to. Departments should take the time to assess which tools and frameworks align best with their missions and responsibilities, identify any gaps within their existing processes and capabilities, and create a plan to address them.
To support this process, KJR is holding dedicated AI Governance Workshops – available in major cities – giving government leads and practitioners the opportunity to learn how to responsibly deploy AI within their departments. By balancing conceptual understanding with practical application, these workshops aim to help agencies navigate the complex AI regulation maze, ensuring AI use that is compliant, effective, and ethically sound. For workshop details, see AI Governance Workshop – KJR.
About our partner
KJR
KJR provides independent quality engineering that gives organisations the confidence to deploy complex, high-risk technology and AI systems. We focus on decisions, not just defects, helping government move from ambition to outcomes that are practical, responsible and built to last.

We partner closely with public servants to deliver complex initiatives in highly regulated environments. Our strength lies in understanding how government really operates, from policy intent and procurement through to security, privacy, accessibility and ethics - and translating strategy into delivery with confidence.

Unlike large consultancies that prioritise scale, or vendors that lead with tools, KJR is commercially independent and vendor-agnostic. We specialise in real-world implementation, supporting agencies to design and deliver transparent and explainable AI and digital solutions aligned to whole-of-government frameworks.

KJR brings:
* Deep experience delivering technology programs within government constraints
* A strong commitment to responsible and human-centred AI
* End-to-end capability across strategy, delivery, assurance and change
* A focus on capability uplift, leaving agencies stronger and more self-sufficient

Founded in 1997, KJR is known for working shoulder-to-shoulder with government teams to de-risk innovation and deliver lasting impact. KJR helps government move faster - safely, transparently and with lasting impact.