Getting AI Under Control: The Case for Strong Governance and Management

Why organisations are moving beyond AI principles to practical governance, and how ISO/IEC 42001 helps embed accountable, auditable and risk-based AI management into day-to-day operations.

Laura Galindo 16 February 2026

Artificial intelligence is now firmly embedded in the way organisations operate. It influences decisions, processes and outcomes across business functions. As a result, there is greater urgency to ensure that effective AI governance and management are in place and that AI risks are understood and treated.

As AI use becomes more routine, questions around accountability, risk and oversight naturally come to the surface. Organisations are increasingly expected, and in many cases required, to understand where AI is in use, how it affects decisions and outcomes, and how they would explain or justify those outcomes if challenged.

These insights are shaped by the work of Erin Casteel, a Lead Auditor at Intertek SAI Global and a long-standing contributor to the development of international ISO management system standards and IT best practices. With deep experience auditing complex organisations across multiple sectors, and in shaping International Standards for information security, privacy, risk management, service management and artificial intelligence, Erin brings a rare perspective: one that connects how AI should be governed and managed with how AI governance and management currently plays out inside organisations.


From what you’re seeing, what is driving organisations to look more seriously at AI governance and management now, even where formal regulation or audits may still feel some way off?

Over the last several years, AI has moved from innovation and business cases to operations – today AI risks urgently need to be understood and managed. AI is already being used to make or influence business decisions. It is being integrated into core platforms, including HR, finance, CRM, information security and development. Leaders need to know the answers to questions like: “What happens when it is wrong?” and “How do we prove we acted responsibly?” These are governance questions.

Boards are realising that if an AI system harms a customer or an employee, saying “it was the model” is not a sufficient defence. Financial and reputational impacts are real and increasing over time, both because of the proliferation of AI and because a growing comfort level with its use can obscure very real risks. AI failures resulting from hallucinated advice, biased outcomes or leaked training data can lead to negative media attention, social media backlash and customer loss. AI governance and management becomes the mechanism to understand what AI is actually in use – for example, where staff may be using public LLMs with sensitive data, increasing data leakage risks without visibility, formal risk assessment or approval. AI governance and management sets guardrails without banning innovation and brings AI into the organisation’s existing risk and control practices.
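
To make that visibility concrete, here is a minimal sketch of what an entry in an internal AI use register might look like, assuming the organisation keeps such an inventory; the field names, statuses and approval workflow are illustrative assumptions, not requirements drawn from ISO/IEC 42001.

```python
# Illustrative sketch only: a simple AI use register entry.
# Field names and statuses are assumptions for the example.
from dataclasses import dataclass, field
from enum import Enum

class ApprovalStatus(Enum):
    UNASSESSED = "unassessed"    # discovered but not yet risk assessed
    APPROVED = "approved"        # assessed and approved for the stated use
    RESTRICTED = "restricted"    # approved only with specific guardrails
    PROHIBITED = "prohibited"    # use not permitted for this data or purpose

@dataclass
class AIUseRecord:
    system_name: str                  # e.g. a public LLM or an "AI-assisted" SaaS feature
    business_owner: str               # accountable owner, not just the technical team
    decisions_influenced: list[str]   # which decisions or outcomes it affects
    data_categories: list[str]        # e.g. customer PII, HR data, source code
    status: ApprovalStatus = ApprovalStatus.UNASSESSED
    guardrails: list[str] = field(default_factory=list)

# Example: shadow use of a public LLM surfaced into the register
record = AIUseRecord(
    system_name="Public LLM chatbot (browser)",
    business_owner="Head of Customer Support",
    decisions_influenced=["drafting customer responses"],
    data_categories=["customer PII"],
)
print(record.status)  # ApprovalStatus.UNASSESSED -> triggers risk assessment and approval
```

Even a register this simple makes shadow AI visible and gives risk assessment and approval something concrete to attach to.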

Even if an organisation believes it is not yet ready for ISO/IEC 42001 certification, it should read the standard, understand the requirements and begin to consider how ISO/IEC 42001 can be used to manage AI risks, whether the organisation is developing, providing or simply using AI. The good news is that ISO management system standards share a common structure and a common set of requirements, so if an organisation already has, for example, a quality management system aligned with ISO 9001, this can be leveraged significantly to achieve ISO/IEC 42001 certification. Any organisation that already holds ISO/IEC 27001 certification has a particular advantage with ISO/IEC 42001, because risk is managed in exactly the same way, using a Statement of Applicability.


Based on your experience, how do you see ISO/IEC 42001 shaping how organisations approach responsible and auditable use of AI in the coming years?

Organisations have been talking about responsible AI for a very long time now, but for many the focus has been limited to defining sets of principles, ethics statements or slide decks. That is understandable – AI is complex and its impact on all industry sectors is still being understood. ISO/IEC 42001 pushes AI governance and management out of theory and embeds it in day-to-day operations, similar to what ISO/IEC 27001 did for information security. It is the next logical step for an organisation wanting to operationalise AI governance and management. AI becomes auditable, like information security, privacy and quality.

Organisations today need to ask: “How do we decide whether to deploy an AI system?” “How do we monitor and treat ongoing AI risk?” “What happens when AI fails or causes harm?” The focus shifts to managing AI risks based on the organisation’s risk appetite, its legal, regulatory and contractual requirements, and its business objectives. It makes responsible AI tangible and facilitates proportionality, so organisations stop over-engineering controls for low-risk AI while under-controlling high-risk AI – for example, AI that affects customer and business decisions, automation and safety.

Importantly, ISO/IEC 42001 clarifies responsibilities and accountabilities for personnel and other parties. And because ISO/IEC 42001, like all ISO management system standards, is built on Plan-Do-Check-Act, it facilitates measurement and evaluation, internal audit, management review, corrective actions and continual improvement. This gives the organisation the necessary visibility and control over how AI is being used, along with mechanisms to address issues effectively and keep improving.


Which aspects of AI governance and management are most commonly underestimated in the early stages?

One of the assumptions organisations can make initially is that AI governance is a purely technical matter. In reality, accountability spans business owners (for the impact of decisions), legal and compliance teams (for liability, claims and customer harm), risk and assurance teams (for controls and monitoring) and IT operations (for deployment and changes). Many organisations do not initially recognise how cross-functional AI decisions are, or the importance of having clear AI owners.

Another assumption is that as long as the organisation validates the model before launch, all will be well. In fact, risks that increase after go-live include data drift, model decay, prompt changes and unintended reuse in new contexts. Traditional SDLC thinking does not map neatly onto continuously learning or prompt-driven systems, so the organisation will need to adapt to these differences.
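
As an illustration of what ongoing monitoring after go-live can look like, here is a minimal sketch that flags distribution drift in a single input feature; the choice of a two-sample Kolmogorov–Smirnov test and the alert threshold are assumptions for the example, not something prescribed by ISO/IEC 42001.

```python
# Illustrative sketch only: a scheduled post-deployment drift check on one feature.
import numpy as np
from scipy.stats import ks_2samp

def feature_drift_alert(reference: np.ndarray, recent: np.ndarray,
                        p_threshold: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test: flag drift when recent production
    data differs significantly from the validation-time reference sample."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < p_threshold

# Example with simulated data: the feature's distribution shifts after go-live
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # captured at validation
recent = rng.normal(loc=0.4, scale=1.0, size=5_000)     # recent production inputs
if feature_drift_alert(reference, recent):
    print("Drift detected: re-validate the model or escalate per the AI policy")
```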

There can also be an assumption that having a human in the loop provides some sort of magical safety net. In reality, humans can over-trust AI outputs, or rubber-stamp them under time pressure, and the human in the loop often lacks sufficient guidance on when to override. In other words, human oversight is frequently poorly planned and designed, undocumented, and missing clear criteria for escalation or override.

Yet another assumption, in regard to third-party and supplier AI risk, is that the vendor handles the AI risk. However, the organisation still owns the customer outcomes, any regulatory exposure and the ethical impacts, so these third-party risks must be clearly understood and treated. For example, contracts with suppliers may not clarify AI accountability and may not include supplier assurance or ongoing monitoring.

Finally, organisations can assume “we have mitigated the risks”, when in reality some risks are explicitly accepted rather than mitigated, including bias trade-offs, accuracy thresholds and explainability limits. Risk acceptance criteria need to be aligned with the organisation’s risk appetite, including legal and regulatory requirements, which means the organisation cannot accept risks outside that appetite. Organisations are also sometimes uncomfortable formally recording accepted AI risks, but failing to record an accepted risk is not an excuse and it does not lower the risk.
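
As a sketch of what formally recording an accepted risk can look like, the example below checks a residual risk score against a defined appetite before it can be accepted; the five-point scales and the appetite threshold are illustrative assumptions, not figures from ISO/IEC 42001.

```python
# Illustrative sketch only: recording an accepted AI risk against a risk appetite.
from dataclasses import dataclass

APPETITE_THRESHOLD = 6  # assumed maximum residual score the organisation will accept

@dataclass
class AcceptedAIRisk:
    description: str      # e.g. "explainability limits of a third-party scoring model"
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible) .. 5 (severe)
    accepted_by: str      # accountable owner who signed off
    justification: str    # why the residual risk is tolerable

    @property
    def residual_score(self) -> int:
        return self.likelihood * self.impact

    def within_appetite(self) -> bool:
        # Risks outside appetite cannot simply be "accepted"; they must be treated further.
        return self.residual_score <= APPETITE_THRESHOLD

risk = AcceptedAIRisk(
    description="Explainability limits of third-party scoring model",
    likelihood=2,
    impact=3,
    accepted_by="Chief Risk Officer",
    justification="Human review of all adverse decisions; quarterly monitoring",
)
print(risk.residual_score, risk.within_appetite())  # 6 True -> record it, don't hide it
```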


Why do organisations often struggle to clearly identify what should be in scope of an AI management system, particularly where AI is embedded in tools or services?

Most organisations have experience scoping traditional IT systems, where there is a named application with a clear owner and a defined boundary. Embedded AI breaks that model. AI is buried in SaaS platforms, features are “AI-assisted” rather than “AI systems”, and models update silently in the background. As a result, teams can underestimate their reliance on AI. Suppliers also often market smart, intelligent or AI-powered features without providing technical clarity, and they can change AI functionality over time without a contract change.

Basically, if the organisation cannot see the AI, it does not get scoped, which also means the full risks may not be understood. Ownership is often fragmented across the organisation – between IT, the business, procurement, and risk and compliance – so there can be the impression that scoping is someone else’s responsibility. There can also be confusion between “AI as a feature” and “AI as a system”, leading teams to assume that if it is just one feature it is low risk, or that “if we didn’t build it, it is out of scope.” ISO/IEC 42001 focuses on the use and impact of AI, not just its development.

A small, embedded AI feature can influence hiring, credit, access, prioritisation or security decisions. It can also introduce bias or regulatory exposure. There can also be a fear of over-scoping – in other words, a fear of opening a can of worms. Organisations then try to scope defensively, with narrow definitions, artificial boundaries and exclusions that are convenient rather than risk-based. A way to address this is for the organisation to ask different questions. Rather than asking “did we build the AI?”, they can ask “which tools affect people, customers, safety, money or rights?” or “where would we struggle to explain an outcome if required?”
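
Those questions can be turned into a simple scoping screen. The sketch below is illustrative only: the criteria and the rule that any “yes” brings a feature into scope are assumptions for the example, not wording from ISO/IEC 42001.

```python
# Illustrative sketch only: a risk-based scoping screen for embedded AI features.
IMPACT_QUESTIONS = {
    "affects_people_or_rights": "Does it affect people, customers, safety, money or rights?",
    "influences_decisions": "Does it influence hiring, credit, access, prioritisation or security decisions?",
    "hard_to_explain": "Would we struggle to explain an outcome if challenged?",
    "supplier_can_change_it": "Can the supplier change the AI behaviour without a contract change?",
}

def in_scope(answers: dict[str, bool]) -> bool:
    """Treat a feature as in scope of the AI management system if any impact
    question is answered 'yes', regardless of who built the AI."""
    return any(answers.get(key, False) for key in IMPACT_QUESTIONS)

# Example: an "AI-assisted" SaaS feature used to shortlist job applicants
answers = {
    "affects_people_or_rights": True,
    "influences_decisions": True,
    "hard_to_explain": True,
    "supplier_can_change_it": True,
}
print(in_scope(answers))  # True: embedded or not, it belongs in scope
```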

It is important to keep in mind that initial scoping of the AI management system, including which AI systems are covered, can be iterative – start where you are, and maturity and capability will improve over time, but do not wait for perfection. This is too urgent.


Why are AI risk assessments and AI system impact assessments such a critical part of building confidence, capability and audit readiness?

Risk management is an overall management activity that addresses risks to the organisation as a whole, including the way AI system developers conduct development projects, AI system providers manage the relationship with their customers, and AI system users make use of the functions of AI systems. An AI system impact assessment, by contrast, focuses specifically on the reasonably foreseeable impacts arising from the intended or actual use of a particular AI system. Rather than taking a whole-of-organisation view, it adopts a more product- or service-oriented perspective and is typically carried out by the teams directly responsible for developing, deploying or technically managing the AI system. While distinct from general risk management, AI system impact assessments form a critical part of the overall risk management lifecycle and provide essential inputs into broader AI risk assessments.

AI risk assessments and AI system impact assessments are critical because they provide the evidence that an organisation understands how its AI can affect people, decisions and outcomes – and has made deliberate, accountable choices about how to manage those risks. They force organisations to answer the most important questions: What AI do we actually use? What decisions or outcomes does it influence? Who or what could be harmed, and how? Confidence comes from transparency, not from simply claiming “low risk”. Over time, teams will move from asking “do we need an AI risk assessment?” to “what level of assessment does this AI use case justify?” The organisation will clearly understand, and be able to effectively manage, its legal, regulatory and compliance risks regarding AI, as well as its security, privacy and misuse risks, strategic and reputational risks, and workforce and culture risks.

Risk and impact assessments provide a defensible basis for scope and applicability, evidence that risks were identified before incidents occurred, and clear traceability across AI use cases, risks, controls and acceptance of residual risk. They enable AI governance that is real rather than performative. They also facilitate proportionate, scalable governance, so low-impact use cases can have lighter-touch governance and effort is focused where harm, safety or trust impacts are highest. AI adoption can then be scaled without scaling risk blindly.


Conducting AI risk assessments and AI system impact assessments is a requirement of ISO/IEC 42001. For further information, ISO/IEC 23894 provides guidance on AI risk management and ISO/IEC 42005 provides guidance for organisations conducting AI system impact assessments.


Published by

Laura Galindo, Marketing Coordinator, Intertek SAI Global

About our partner

Intertek SAI Global

At Intertek SAI Global, we understand the organisational challenges of building stakeholder trust and confidence at all stages of maturity. We work with organisations to help them meet stakeholder expectations for quality, safety, sustainability, integrity and desirability in any market and industry worldwide, while embedding critical risk-based thinking and a continuous improvement culture. Intertek SAI Global has offices in 21 countries and services clients globally, delivering more than 125,000 audits and training more than 100,000 students through its Assurance Learning courses each year.

Learn more