Overview
Clarity before Commitment: A decision-readiness training course for senior public-sector leaders facing high-consequence technology and transformation commitments
You are being asked to approve AI and technology investments under pressure.
Ministers want momentum. Your organisation is already experimenting with AI, often ahead of formal governance. Proposals keep arriving. Some are framed as pilots. Some look like business cases. Others are ideas with just enough credibility to feel hard to say no to.
Each one promises speed, savings, or better outcomes.
Your job is to decide which of these are worth committing to, and which ones are not ready yet.
That decision is harder than it looks.
Across decades of delivery, independent reviews consistently show that large technology programs rarely achieve full success, with industry reporting placing “full success” at a meagre 16%–31%.
In the public sector, the consequences are not abstract. Failure can escalate into multi-billion-dollar outcomes, such as Canada’s Phoenix pay system (from an initial $310 million budget to $5.1+ billion in costs) and the UK Emergency Services Network, where £2 billion was spent with, in audit terms, “nothing to show for it.”
Most proposals are written around the technology, not the outcome. They describe what the tool can do, not what will actually change for citizens, staff, or the system you’re responsible for. They rarely spell out what the decision locks in, what it rules out, or what needs to be true elsewhere in the organisation for the investment to succeed. And in many cases, no one person truly owns the result.
This matters because when technology investments fail, it is rarely because the technology didn’t work or the delivery team lacked capability.
They fail because the organisation committed too early.
Audit insight points to a recurring pattern: technology decisions are taken too early, before the business problem and the full change required are properly understood.
The problem wasn’t delivery. The problem was the decision.
This course gives you a practical way to test whether an AI or technology proposal is ready for commitment, or whether it needs more work before you sign off — so you can slow down the commitment (not the work), protect options, and build decisions that stand up to Treasury, audit, and executive scrutiny.
What this course is (and is not)
- This is not an AI technology briefing
- It is not an implementation methodology, platform selection guide, or delivery playbook.
- This is a decision discipline for executives and senior decision makers.
- The focus is building the discipline and tools to answer: “What must be true before we commit so that value is likely and the decision is defensible?”
- The same discipline applies to any significant technology investment. AI is where the pressure is most acute right now, which is why it is the focus of this course.
Learning Outcomes
By the end of this course, you will be able to:
- Distinguish delivery risk from decision risk, and understand which one determines whether an AI investment delivers value
- Apply the Clarity, Consequences, Control model to any AI or technology proposal, whether it's a pilot, a business case, or an in-progress program
- Use the Decision Readiness Ladder to locate where a proposal or initiative sits
- Identify early warning signals that a proposal or program direction is moving toward commitment before it's ready
- Apply three diagnostic questions to any investment decision, and explain and defend your position under audit, ministerial and board scrutiny
Online Training
Is This AI Investment Worth It? A Decision Model for Government Leaders
Session details
Large-scale technology and transformation programs often fail not during delivery, but at the moment leaders commit without full clarity. This executive-level course helps public-sector leaders strengthen decision readiness before approving major technology and transformation investments. Join peers from across government for a practical, discussion-based program that will help you:
- Assess whether a major initiative is truly ready for executive commitment
- Identify early warning signs of premature technology decisions
- Apply the Clarity–Consequences–Control model to test defensibility
- Sequence approvals and procurement decisions to protect value
Group discounts available (10+ participants). Contact: [email protected]
Familiarity with the topic is required
Key Sessions
Day 1: The Pattern and the Model
Welcome and Opening Remarks
Module 1 – Introduction to the patterns in AI implementation
Focus: The evidence on AI investment outcomes, and why the gap between a compelling demo and a defensible commitment is wider than most organisations realise.
- What the data shows about AI project outcomes
- Analysis of real AI program case studies – the good and the bad
- The different types of AI: what's new, what's proven, and where each is suited
- What audit findings and program reviews point to as challenges to mitigate
Focus: The structural pressures that push experienced leaders toward commitment before the conditions are met.
- Budget cycles, ministerial announcements, procurement momentum and 'no choice' narratives that compress decision timelines
- How AI amplifies the pressure: vendor-funded research, FOMO narratives and pilots that quietly become commitments without a deliberate decision point
- The accountability asymmetry: the political cost of saying 'not yet' is immediate, while the cost of premature commitment shows up years later
- Strategies for managing the pressures
Focus: Three conditions that must be present before any AI or technology commitment is defensible.
- Clarity: Can you state the intended outcome in terms a citizen would understand? Do you know, end to end, how the work is done today?
- Consequences: What does this decision lock in? What options does it close? What has to be decided before this can be decided?
- Control: Is there one person accountable for the business outcome, with the authority to make trade-offs and the safety to say, 'this isn't working'?
- Scenario Discussion
Focus: A diagnostic for locating where a proposal sits, measured by the conditions that are present rather than the effort that's been spent.
- Six levels of readiness, from 'problem identified' through to 'committed and accountable'
- How to assess work that is in progress, and whether the original conditions for commitment still hold
- The critical gap between 'prepared' and 'committed', and how preparation ensures it is clear what you are committing to
- How the Ladder maps to the Commonwealth Investment Oversight Framework
Day 2: Application Under Pressure
Welcome Remarks and Recap
Early Warning Signals: Spotting Proposals That Aren't Ready
Focus: The practical signals that show up in papers, meetings and procurement activity when a proposal is moving toward commitment faster than its readiness justifies.
- Language signals: ambitious words doing the work that measurable outcomes should be doing ('modernise', 'transform', 'leverage AI')
- Governance signals: regular steering committees, detailed risk registers, but nobody who can name the single owner of the business outcome
- Procurement signals: technology selected before the problem is fully understood, and 'we'll work it out during implementation' becoming the default answer to unresolved design questions
Focus: Converting the decision-readiness model into three questions that can be put to use immediately.
- The three questions, what a satisfactory answer looks like, and what to do when you don't get one
- How these questions connect to audit defence, ministerial accountability and Treasury scrutiny
- Why asking these questions protects you whether the investment succeeds or fails, and why not asking them leaves you exposed either way
Focus: How to hold decision discipline when everyone around you is pushing for commitment.
- Slowing the commitment without slowing the work: how to keep preparation moving at pace while withholding formal commitment until the conditions are met
- The language of protecting the investment: how to frame 'not yet' as a positive action rather than an obstacle
- Recognising when vendor-funded research or analyst reports are shaping the narrative, and how to apply source discipline without being dismissed as anti-innovation
- When to accept gaps and proceed anyway, and how to document that decision so it stands up
Focus: Making the discipline operational from the next steering committee onward.
- Five things to ask for in the next steering committee pack
- How to shift conversations from delivery progress to outcomes and readiness
- How to reduce audit exposure while improving the likelihood that AI investments deliver value
- Reflection and action planning
Meet Your Facilitator
Luke Halliday
Former Chief Technology Officer, Victorian Government
Luke Halliday previously served as the Whole of Victorian Government Chief Technology Officer, leading major reform programs and advising senior leaders on high-consequence technology decisions.
Luke has operated on all sides of technology commitments: as a founder and CEO of a technology company, as a CIO in critical infrastructure, as a whole-of-government technology leader, and as an advisor. This experience across vendor, owner, executive, and government perspectives informs a practical approach to decision readiness and decision integrity in complex institutional environments.
Register Today
Join this training for professionals working within the Public Sector
Extra Early Bird – Ends 5 Jun
A$995 per person + tax ($400 saving)
Early Bird – Ends 3 Jul
A$1,195 per person + tax
Regular – Ends 18 Aug
A$1,395 per person + tax
For group or payment enquiries or custom training solutions, please contact [email protected]
Can't see what you need?
Download our training catalogue to review all available topics