How do you build experience when AI handles all the experiences?

The promise is simple: let AI handle the easy work, and humans focus on the hard work, the challenging work, the fun work.

But the hard work is hard precisely because it depends on intuition and experience, and those come from years of guided practice on the easy work (the apprenticeship).

So if AI absorbs all of the experiences of the beginner stage (the apprenticeship), we don’t get “more experts” – we get a missing generation of them.

Or do we? We probably do if we keep learning the way we always have. Perhaps we just need to invent an entirely new way of learning?

.

The concept that follows came to me as an epiphany while trying to find a systematic way to avoid the most catastrophic outages (Sev1s) for a tier-1 telco client.

Like many telcos around the world, this client faces what’s known as the talent cliff. Their networks, systems and processes are getting increasingly complex and so are the outages. Unfortunately, the only people able to solve these most challenging problems all seem to be on the cusp of retirement age. Their fault-fix knowledge comes from the experience of solving all manner of problems with these networks for decades.

.

The Background, the Cliff and the Asymptote

For most of human history, knowledge and experience had a reliable engine: volume.

You did lots of small, repetitive tasks. You made low-stakes mistakes. You saw enough variations to build pattern recognition. You learned what “normal” looks like. This allowed you to spot “weird” quickly. That was the apprenticeship pathway.

Let’s look at this through the concepts in the graph below, which formed the heart of my epiphany (there’s also a small code sketch of the curve after the list):

  • In any given field, our learning / knowledge follows the blue line
  • It starts off slowly, progresses quickly, then eventually reaches the phase of diminishing returns
  • That is, the blue line approaches the asymptote of 100% knowledge, which we can never reach. We can never know everything there is to know about any subject or field
  • Before we can master a subject (sector 2 – purple box), we must first go through an apprenticeship (sector 1 – blue box)
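
To make the shape of that blue line concrete, here’s a minimal sketch in Python. The logistic form, the steepness, and the sector 1 boundary are illustrative assumptions of mine, not values taken from any real learning data:

```python
# A minimal, assumed model of the learning curve described above:
# an S-curve that starts slowly, accelerates, then hits diminishing
# returns as it approaches (but never reaches) the 100% asymptote.
import math

K_RATE = 1.0          # steepness of the curve (assumed)
T_MID = 4.0           # years at which learning is fastest (assumed)
APPRENTICE_END = 3.0  # illustrative sector 1 / sector 2 boundary

def knowledge(years: float) -> float:
    """Fraction of 'complete' knowledge after `years` in a field."""
    return 1.0 / (1.0 + math.exp(-K_RATE * (years - T_MID)))

for years in (1, 3, 5, 10, 20, 40):
    sector = 1 if years <= APPRENTICE_END else 2
    print(f"year {years:>2}: {knowledge(years):6.1%} of the asymptote (sector {sector})")
```

However you parameterise it, the important property is the flattening tail: past a certain point, every extra year buys less and less new knowledge.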

But AI is disrupting human history.

The curve of knowledge doesn’t change.

It’s just that AI can absorb huge volumes of beginner work overnight. That’s a productivity win, but it is also a training / learning shockwave.

If humans delegate to AI and stop doing the reps in sector 1, they also lose the opportunity to ever reach sector 2 by building the instincts and mastery those reps used to create.

If you’re not a master today and delegate everything to AI, then you’ll never have the chance to reach sector 2 (or at least not what constitutes mastery today).

The way out starts by separating two things we often bundle together: knowledge and judgement.

.

Step 1 – Accept the asymptote (for humans and for AI)

In any field, learning is asymptotic. You can improve forever, but you never reach “complete”. Returns diminish over time because the remaining improvements are smaller, subtler, and more dependent on unique context.

That’s true for people, and it’s true for AI. Models can be astonishingly fluent, but they still have edges: novelty, missing context, shifting constraints, and situations where the right move depends on judgement rather than retrieval. Black swan events.

So the goal is not “humans must know everything”. Nobody can. The goal is “humans must remain the override” when the situation reaches the edge of what AI can safely do.

.

Step 2 – Reframe the target: train the override, not the recall

In the old model, the constraint was knowledge acquisition. Today, knowledge is increasingly cheap. You can generate explanations, summaries, and options on demand. We’ve gone from knowledge being a scarce resource to arguably being too abundant.

The new constraint is at the moment when AI doesn’t have the answer. Or cannot have it. Or has the wrong answer.

Not because the AI is broken, but because of the asymptote. Reality is messy. It changes. The most important details often go unnoticed, and some of the knowledge that matters isn’t written down anywhere.

It’s the moment when the AI must ask the master to choose what matters and/or what to do next. This is the override.

However, to be the override, you need to either:

  • Have enough knowledge to fill in the gaps yourself – to proceed with incomplete information, or to recognise when the AI’s output is misaligned with reality (old-school mastery)

  • Not have enough knowledge, but have a skillset for what to do when neither you nor the AI has certainty yet (next-gen mastery)

That’s the pivot from the old version of mastery (experience from “time spent near the work”) to the new version of mastery (learning to make good judgement calls under uncertainty).

.

Step 3 – The two hard modes: edge-of-map and information overwhelm

Sometimes you face the edge-of-map problem: novelty, ambiguity, missing context, no proven playbook.

But increasingly (as AI models get ever better), you face the opposite: information overload. Too many signals. Too many plausible explanations. Too many possible options. Too much noise to process in time. Analysis paralysis.

In both cases, the skill is decision-making, not knowledge gathering or answer-finding. Ask yourself (see the sketch after this list):

  • What is the actual problem?

  • What matters most right now?

  • What can I safely ignore?

  • What is uncertain but decision-relevant?

  • Is this a choice of no return?

  • What would I check first if I only had 10 minutes?
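
To make these questions harder to skip, here’s a hypothetical sketch that refuses to accept a decision until each one has an answer. The class and field names are my own invention, not any real incident-management API:

```python
# Hypothetical forcing function: you cannot commit to an action until
# every triage question above has been answered.
from dataclasses import dataclass

@dataclass
class TriageCall:
    actual_problem: str                 # what is the actual problem?
    matters_most_now: str               # what matters most right now?
    safe_to_ignore: list[str]           # what can I safely ignore?
    uncertain_but_relevant: list[str]   # uncertain but decision-relevant
    irreversible: bool                  # is this a choice of no return?
    first_check_10min: str              # first check with only 10 minutes
    committed_action: str = ""          # filled in only when you commit

    def commit(self, action: str) -> str:
        # Refuse to commit while any question is unanswered.
        unanswered = [k for k, v in vars(self).items()
                      if k != "committed_action" and v in ("", [])]
        if unanswered:
            raise ValueError(f"Answer these first: {unanswered}")
        self.committed_action = action
        return action
```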

If you only train recall, the result is obvious. AI will beat you. If you train sensemaking, prioritisation, and judgement, you may just have a way of staying valuable.

This approach is what I refer to as the Ninja Academy or Chaos Gym.

.

Step 4 – Replace apprenticeship with a Chaos Gym

If the old apprenticeship gave you years of gradual exposure, you need a new mechanism that creates concentrated exposure to the unknown and the chaotic, on purpose.

Instead of facing a black swan / Sev1 only once or twice a year, engineer your training so you face them daily, deliberately developing your Black Swan Muscles.

Build a Chaos Gym: a weekly practice session where the point is not “get the right answer”, it’s “make good calls when the situation is stressful, unclear and the inputs are noisy”.

Keep it simple (a minimal drill sketch follows this list):

  • Use scenarios with incomplete information and time pressure

  • Add overwhelm on purpose: conflicting signals, too much data, stakeholder interruptions, plausible-but-wrong summaries

  • Force decisions: learners must commit to a next action, not just list possibilities

  • Review the reasoning, not the heroics:

    • What did you assume?

    • What did you verify?

    • What did you ignore, and why?

    • What trade-off did you choose?
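
Here’s a minimal sketch of what one Chaos Gym drill could look like in practice. The scenarios, noise injections and five-minute time limit are placeholders, not a prescription:

```python
# A toy Chaos Gym drill: incomplete scenario + injected noise +
# forced commitment + a review of the reasoning, not the heroics.
import random
import time

SCENARIOS = [
    "Packet loss climbing on a core link; the alarms are 20 minutes stale.",
    "Billing mediation backlog growing fast; root cause unknown.",
]
NOISE = [
    "Conflicting signal: a second dashboard shows the link as healthy.",
    "Stakeholder interruption: an exec demands an ETA right now.",
    "Plausible-but-wrong summary: 'it's probably Tuesday's change'.",
]
REVIEW = [
    "What did you assume?",
    "What did you verify?",
    "What did you ignore, and why?",
    "What trade-off did you choose?",
]

def run_drill(time_limit_s: int = 300) -> None:
    deadline = time.monotonic() + time_limit_s
    print("SCENARIO:", random.choice(SCENARIOS))
    for blast in random.sample(NOISE, k=2):        # overwhelm on purpose
        print("NOISE:", blast)
    action = input("Commit to ONE next action: ")  # forced decision
    late = time.monotonic() > deadline
    print(f"Committed{' (LATE)' if late else ''}: {action}")
    for question in REVIEW:                        # review the reasoning
        print("REVIEW:", question)

if __name__ == "__main__":
    run_drill()
```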

This is the core of the NONA-style idea: if AI does away with the easy and the mundane, humans are only left with the more chaotic and less predictable. Therefore, Ninja training has to simulate chaos and unknowns deliberately, rather than hoping people “pick it up” through years of routine work.

In fact, you could even argue that if our Ninjas currently only experience a few Sev1s a year, they’re not really getting enough chances to truly develop Black Swan Mastery. It’s just that they’re better prepared to handle it than anyone else.

.

Step 5 – Use AI to multiply chaos, not remove it

Most teams use AI to reduce struggle.

That feels productive, but it can quietly remove the friction that builds knowledge and the opportunity to test your judgement.

Those “struggles” were also training fuel and pattern recognition for current-day masters.

Therefore, we need to flip how we use AI (the last rule below is sketched in code after this list):

  • Use AI to generate scenarios, variants, and noise

  • Use AI to act as an adversary: add misleading clues, change constraints mid-stream, make two causes look identical

  • Make a rule: humans commit first, AI critiques second
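
The last rule is the one most worth building into your workflow. Here’s a hedged sketch of what “humans commit first, AI critiques second” could look like; ask_model is a stand-in for whichever LLM client you actually use:

```python
# Sketch of the 'humans commit first, AI critiques second' rule.
# ask_model is a placeholder, not a real API.

def ask_model(prompt: str) -> str:
    """Stand-in for your LLM client; returns a canned critique so the
    sketch runs without any real API call."""
    return "Have you verified the alarm timestamps before acting?"

def commit_then_critique(scenario: str) -> dict[str, str]:
    # 1. The human commits to an action BEFORE seeing any AI output.
    human_action = input(f"{scenario}\nYour committed next action: ")

    # 2. Only then does the AI weigh in, and only as a critic.
    critique = ask_model(
        "Critique this incident response. Do not propose your own fix "
        f"first.\nScenario: {scenario}\nCommitted action: {human_action}"
    )
    return {"action": human_action, "critique": critique}
```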

.

If our AI tools remove the reps, we also have to use AI to manufacture them elsewhere – via replays, simulations, and guided drills. If we don’t, the talent cliff + AI will eventually leave us with fewer people who can validate what the automation is telling them.

The point is not to reject AI. The point is to use it cleverly in the learning system.

Today’s experts often look like they’re approaching the asymptote. They can fill in almost any blank because they’ve seen so many variations that it all feels familiar to them. They earned that by doing years of high-volume “easy work” and accumulating a deep internal library of patterns.

The next generation won’t get the same opportunity to “know everything” or “experience everything.”

Instead, this next generation will need to be masters at handling anything and making quick, clear decisions on everything.

If this article was helpful, subscribe to the Passionate About OSS Blog to get each new post sent directly to your inbox. 100% free of charge and free of spam.
