We’re releasing our latest report today. Click on the image below to download it.
Why are transformation approvals (e.g. business case approvals, vendor selections, project delivery commitments) forced to look perfect when delivery is anything but?
That is the quiet contradiction at the heart of many (most?) digital transformation programmes. We build business cases as if inputs are known, outcomes are predictable and benefits can be estimated with precision. Worse still, stakeholders then expect those business case guesstimates to guarantee returns on investment.
You know the examples: numbers crunched and refined to within an inch of their lives, quoted to five decimal places of accuracy.
But once delivery starts, the real world turns up: data quality is imperfect, estimates are guesses, vendor performance varies and integration risk behaves in nonlinear ways (as do the delivery team members). Yet the governance model often remains frozen in its earlier fiction.
That is the core flaw. Many transformation programmes are not failing because the teams are lazy or the tools are poor. They’re failing because stakeholders expect certainty and perfection, so transformation leaders feel forced to model uncertainty as certainty.
The result is a familiar pattern: one-and-done decisions, overconfident forecasts, approval based more on narrative strength than genuine likelihood and very little outcome traceability after the decision gate has been passed.
The decision gate might take months or even years to navigate. Yet the real learning only starts once the project is underway, by which point the most important gates have already closed.
This is why the “burning rings of fire” metaphor works so well. In many organisations, transformation has a very small number of large, painful and binary approval moments (rings of fire): solution or vendor selection, business case approval and then project delivery commitment. Each ring is navigated once, then disappears. The organisation optimises for passing gates, not improving outcomes.
That is a governance model built on theatrical presentations for approval gatekeepers, not on ongoing knowledge gathering, learning and adaptive decision-making.
So what is the alternative?
Oddly enough, the better mental model may come from fields that live with uncertainty every day.
Actuaries and bookmakers do not ask, “Will this outcome happen?” They ask, “What are the odds right now?”
- They assign probabilities instead of pretending uncertainty has been eliminated
- They update beliefs as signals arrive
- They optimise expected value across many bets rather than searching for a single perfect answer
- They accept that being wrong often is fine, provided the pricing was rational
- They look to the past for patterns that help them to model the future (arguably unlike most telco gated processes)
- Most importantly, they re-price continuously because the world moves and the model must move with it (a simple sketch of this repricing loop follows below).
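To make that repricing idea concrete, here is a minimal sketch in Python. It uses a simple beta-binomial model and hypothetical milestone results of our own invention, not anything prescribed by the report:

```python
from dataclasses import dataclass

@dataclass
class Odds:
    """Beta-binomial belief about the chance a milestone lands on time."""
    successes: float = 1.0  # prior pseudo-count of on-time milestones
    failures: float = 1.0   # prior pseudo-count of late milestones

    @property
    def p_on_time(self) -> float:
        return self.successes / (self.successes + self.failures)

    def reprice(self, on_time: bool) -> None:
        """Update the belief as each new delivery signal arrives."""
        if on_time:
            self.successes += 1
        else:
            self.failures += 1

# Start at even odds, then reprice as (hypothetical) milestone evidence arrives.
odds = Odds()
for signal in [True, False, False, True, False]:
    odds.reprice(signal)
    print(f"Current odds of on-time delivery: {odds.p_on_time:.2f}")
```

The point is not the maths. It is that the number moves every time the world does, instead of being locked in at the approval gate.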
As the pack puts it, OSS tries to be perfect. Actuaries build models that are designed to work with uncertainty.
That shift in mindset is profound. It means moving away from a handful of significant binary decisions with fixed assumptions and one-and-done governance (not to mention potential trauma and blame attached).
Instead, transformation leaders can think in terms of many smaller decisions, probability-weighted bets, adaptive repricing and evidence-led learning loops. This does not make decisions weaker. It simplifies, de-risks and expedites decision-making because the discussion is no longer trapped inside the false promise of certainty, or the risk and trauma that comes with it.
The pack then introduces a practical decision operating system built around five parts.
First, frame the bet. Every major transformation decision should have odds attached. That means explicitly defining the upside, downside, expected value, confidence level and time horizon. Instead of asking whether a vendor is “best”, ask what the organisation is betting on and what evidence currently supports that bet. Ask whether the cultures of buyer and seller align. Ask whether this is a marriage worth starting and, subsequently, one worth remaining in.
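As one way of picturing a framed bet, the sketch below reduces a decision to explicit numbers. The figures and field names are our own hypothetical illustration, not a formula from the pack:

```python
from dataclasses import dataclass

@dataclass
class FramedBet:
    """A transformation decision expressed as an explicit bet (illustrative fields)."""
    description: str
    upside: float        # value created if the bet pays off
    downside: float      # value destroyed if it does not (negative)
    p_success: float     # current confidence the bet pays off (0..1)
    horizon_months: int  # time horizon over which the bet resolves

    @property
    def expected_value(self) -> float:
        return self.p_success * self.upside + (1 - self.p_success) * self.downside

bet = FramedBet(
    description="Select Vendor A for the OSS inventory replacement",
    upside=12_000_000,
    downside=-5_000_000,
    p_success=0.6,
    horizon_months=24,
)
print(f"EV over {bet.horizon_months} months: ${bet.expected_value:,.0f}")  # $5,200,000
```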
Second, score the odds. Replace binary scores and winner-takes-all rankings with probability bands, confidence intervals and scenarios. Compare vendors, roadmaps and business cases not just on weighted checklists, but on delivery risk, schedule and cost overrun likelihood, time-to-value and sensitivity to data quality or organisational complexity. That is a much better conversation than pretending the difference between first and second place in a scorecard is objective truth. Technically speaking, there’s generally little difference between vendors when things get to the pointy end.
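To illustrate what probability bands might look like in practice, here is a minimal Monte Carlo sketch, assuming (purely for illustration) triangular cost distributions and two hypothetical vendors:

```python
import random

def overrun_probability(low: float, mode: float, high: float,
                        budget: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the chance a cost lands above budget,
    treating the estimate as a distribution rather than a point value."""
    random.seed(42)  # fixed seed so the illustration is reproducible
    over = sum(random.triangular(low, high, mode) > budget
               for _ in range(trials))
    return over / trials

# Hypothetical cost bands (in $M) for two shortlisted vendors, against a $10M budget.
bands = {"Vendor A": (7, 9, 16), "Vendor B": (8, 10, 13)}
for vendor, (low, mode, high) in bands.items():
    p = overrun_probability(low, mode, high, budget=10)
    print(f"{vendor}: ~{p:.0%} probability of a cost overrun")
```

That turns “Vendor A scored 82, Vendor B scored 79” into “here is each vendor’s probability of blowing the budget”, which is a far more honest basis for the decision.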
Third, run the loop. Decisions should not freeze once approved. They should be re-priced whenever evidence, scope or value shifts. Milestones completed, new operational data, variance entering a risk band, changes in sequencing or weakening benefit realisation should all trigger reconsideration. Decisions should move like markets, not freeze like contracts.
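One lightweight way to operationalise those triggers is to encode each risk band and check it at every checkpoint. The thresholds below are invented for illustration:

```python
def needs_repricing(baseline: float, actual: float,
                    tolerance: float = 0.10) -> bool:
    """Flag a decision for re-pricing when variance leaves its risk band.
    Illustrative rule: relative variance beyond `tolerance` triggers review."""
    variance = abs(actual - baseline) / baseline
    return variance > tolerance

# Hypothetical checkpoint data: schedule has slipped from 12 to 14 months.
if needs_repricing(baseline=12, actual=14):
    print("Variance outside risk band -- reopen the decision and reprice the odds.")
```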
Fourth, record the learning. A decision journal forces teams to capture what they believed at the time, what evidence supported it, what had not yet been seen, what probability was assigned and what would trigger a revisit. This prevents hindsight from rewriting the story and turns each project into an institutional learning asset rather than an isolated memory. Future models can then be built on current and past learnings.
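A journal entry does not need heavyweight tooling. As a sketch (the field names are our own, not mandated by the pack), each record might capture exactly the elements listed above:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionJournalEntry:
    """One journal record: what we believed, on what evidence, and what reopens it."""
    decided_on: date
    decision: str
    belief: str                                          # what we believed at the time
    evidence: list[str] = field(default_factory=list)    # what supported it
    unseen: list[str] = field(default_factory=list)      # what we had not yet seen
    probability: float = 0.5                             # confidence assigned at the time
    revisit_triggers: list[str] = field(default_factory=list)

entry = DecisionJournalEntry(
    decided_on=date(2025, 1, 15),
    decision="Approve phase 1 of the inventory migration",
    belief="Data quality is good enough for automated reconciliation",
    evidence=["Sample audit of 5% of records", "Vendor proof of concept"],
    unseen=["Full legacy data extract", "Integration test results"],
    probability=0.7,
    revisit_triggers=["Reconciliation error rate above 2%", "Phase 1 slips a quarter"],
)
```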
Fifth, stress-test the future. Premortem and backcast workshops ask two powerful questions: if this fails, why did it fail? If it succeeds, what had to be true? Those two lenses surface hidden dependencies, weak control points, capability gaps, sponsorship issues and sequencing problems before commitment hardens.
Finally, review decisions properly. Separate decision quality from outcome quality. A good decision can still have a bad outcome. A bad decision can still have a good outcome if luck intervenes. Organisations that confuse luck with skill will repeat poor decisions and miss the chance to improve their models.
The payoff is not perfect foresight. The payoff is a higher probability of better decisions, made earlier and revised faster. That means better decisions through explicit risk visibility and more rational vendor selection; better outcomes through earlier course correction and less sunk-cost drift; and better organisations through learning loops and continuous repricing. You do not eliminate uncertainty. You get better at navigating it.
In other words, the future of transformation may not belong to the teams that can tell the cleanest or grandest approval story. It may belong to the teams that can most honestly price uncertainty and adapt as reality unfolds. That is a very different mindset and, quite possibly, a much better one.



