The Mandatory Guardrails for High-risk AI

Amy Lepinay, 5 January 2025

In Australia, we’ve seen a wave of new frameworks and policies aimed at ensuring the responsible use of AI. From the Voluntary AI Ethics Principles introduced in 2019 to the recent National AI Safety Framework and the proposed mandatory guardrails for high-risk uses of AI, the landscape may seem overwhelming at first. But despite the apparent complexity, a clearer picture is emerging. These policies are designed to create a unified approach, helping both public and private sector organisations navigate AI’s risks while still reaping its many benefits.


Fear-based Approach vs Risk-based Mindset

At KJR, we’re committed to helping organisations understand these frameworks and adopt AI responsibly. Many organisations are hesitant to adopt AI because of perceived risks. The key is to move from a fear-based approach to a risk-based mindset. Start small – identify a single AI use case, develop proportional controls, and build from there. The Voluntary AI Safety Standard provides a good checklist of things to consider when putting appropriate governance controls in place, and the Foundational AI Risk Assessment guideline developed by the Queensland Government provides a very practical approach to risk assessment. These approaches are built on more detailed international standards but are more accessible for everyday use.
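
To make the “start small” idea concrete, here is a minimal sketch of how a team might record a single AI use case, its identified risks and the proportional controls around it. It is purely illustrative: the use case, risk ratings and controls below are assumptions of ours, and the actual assessment criteria and checklists come from the Voluntary AI Safety Standard and the Queensland guideline themselves.

# A hypothetical "start small" AI risk register entry, sketched in Python.
# The fields, ratings and controls below are illustrative assumptions only,
# not criteria taken from the Voluntary AI Safety Standard or the Queensland guideline.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    purpose: str
    risks: dict = field(default_factory=dict)       # risk description -> "low" / "medium" / "high"
    controls: list = field(default_factory=list)    # controls proportional to the identified risks

    def needs_escalation(self) -> bool:
        # Escalate to a fuller review if any identified risk is rated high.
        return any(rating == "high" for rating in self.risks.values())

# Start small: one use case, a handful of risks, controls proportional to them.
use_case = AIUseCase(
    name="Customer email triage assistant",
    purpose="Suggest a category and a draft reply for incoming support email",
    risks={
        "incorrect categorisation": "medium",
        "exposure of personal information": "high",
        "staff over-reliance on drafts": "low",
    },
    controls=[
        "Human review of every draft before it is sent",
        "Redact personal information before the model sees the email",
        "Monthly sampling of outputs against a quality checklist",
    ],
)

print(f"{use_case.name}: escalate for fuller review = {use_case.needs_escalation()}")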

In our recent VDML podcast episode on practical AI assurance, we had the pleasure of hosting two distinguished guests: James Gauci, founder of the AI governance platform Ethē, and Sean Johnson of Lakefield Drive, who works extensively with boards on developing AI policy and adoption strategy. It was a great conversation and there’ll certainly be a need for a part two, but some key takeaways include:

Ignoring AI is not a practical option: turning off AI to avoid regulation is more common than you might expect at the moment, but as AI becomes embedded in our everyday tools, the right governance processes, staff training and, most importantly, an organisational culture of responsible AI usage will be essential to avoid harm. Attempting to block legitimate use of AI-enabled tools will only create a culture of clandestine usage, which increases risk rather than reducing it.

Appropriate regulation is an innovation enabler: Australian frameworks for responsible AI usage give clarity around the steps required to ensure that AI can be used for the benefit of all. Because they set out specific expectations around processes such as risk assessment, testing and monitoring, and transparency about AI usage, organisations wanting to use AI to enhance the way they work can invest in AI-enabled solutions with confidence, without exposing themselves to unexpected risks arising from inappropriate usage of AI or from AI solutions that are inherently unsafe in a specific context.


KJR's Feedback on the Mandatory Guardrails for High-risk AI

The journey to responsible AI usage is, of course, ongoing, and there is more work to be done, especially in areas of high-risk AI usage. Like many organisations around the country, KJR provided feedback on the federal government’s proposal to introduce mandatory guardrails for high-risk AI. A formal regulatory framework that goes beyond voluntary compliance is essential to deliver certainty around the appropriate usage of AI in contexts such as healthcare and public safety. The key to the effectiveness of such a framework will be ensuring that responsibility for AI safety is distributed appropriately across the technology supply chain: vendors of AI models and solutions have a duty of care to develop technology that meets the legal and ethical requirements of their users, and deployers of AI solutions have a duty of care to apply that technology safely. Developing the domestic capability to build and deploy AI responsibly is an essential step towards Australia being an active participant in the AI industry.

By embracing AI responsibly, we can transform this technology into a tool for meaningful impact. Whether through mandatory guardrails or proactive governance, the goal remains the same: to harness the benefits of AI while ensuring its safe and ethical use.

Read KJR's response to the proposal for mandatory guardrails: https://consult.industry.gov.au/ai-mandatory-guardrails/submission/view/228


Published by

Amy Lepinay, Marketing Coordinator

About our partner

KJR

KJR is a Software Quality Engineering consultancy that delivers. We are industry leaders in software assurance and AI implementation, trusted by Australian federal and state government departments, ASX 100 companies and the start-up ecosystem to help drive digital advancements for industry advantage. Specialising in AI and data assurance, governance, DevSecOps and testing, we provide advisory and implementation services to key industry sectors including government, defence, health, utilities and IT.

Founded in 1997, we have built a strong CSR charter with practices and policies intended to impact the world positively. We have dedicated our efforts to improving the world through community and innovative technology projects that help Indigenous communities across Australia identify cultural and business opportunities. Our culture is one of inclusion, collaboration and welcoming diversity, which underpins our focus on creating connections, fostering collaborative engagements and working with like-minded partners for the benefit of the wider community through technology solutions.

Our services: https://kjr.com.au/
View all our case studies: https://kjr.com.au/case-studies/
