The last four articles have explored the way some telcos are using AI technologies counter-productively today – AI entanglement, transformation planning, dependency visibility and the hazards of autonomy. The final question for this article is whether telcos can or will ever earn the right to unplug their legacy OSS.
In Part 4, we ended with a deliberate gap. We touched upon how autonomy only seems to have emerged so far, in other industries, when complexity was radically reduced, not when smarter automation layers or rules were added. We also hinted at an uncomfortable truth. That kind of subtraction did not happen by accident.
This final article resolves that gap, introduces more applicable precedents for consideration when targeting Autonomous Networks (AN) and Autonomous Operations (AO), and brings the whole series together.
What Tesla FSD actually simplified, and why it mattered
Tesla’s Full Self-Driving (FSD) evolution is often hailed as a pivotal moment in the use of AI to manage complex scenarios.
Tesla’s Full Self-Driving (FSD) version 12 represents one of the most radical transformations in autonomous vehicle history – the complete replacement of traditional programming logic with end-to-end neural networks. This shift from 300,000 lines of carefully crafted C++ code to a system that learns from millions of examples of human driving marks a fundamental change in how machines can be taught to navigate our world.
Fred Pope (here)
Over time, Tesla progressively removed vast amounts of hand-coded logic.
This shift was not a single cut-over project, nor two stacks developed in parallel inside Tesla, but a phased architectural evolution:
- Early FSD and Autopilot used a combination of rule-based logic and separate perception, planning, and control modules
- Over successive versions, Tesla incrementally moved perception into neural networks while retaining classical code for decision logic
- With v12 and subsequent training efforts, Tesla replaced large portions of decision-making logic itself with neural networks trained on data from its fleet
This dramatic code reduction would not have been possible without an unprecedented data collection advantage:
- Tesla vehicles on the road continuously generate massive amounts of real-world driving data from millions of vehicles (much like telemetry signals on telco networks)
- This data is essential for training neural networks to handle the enormous variability of real driving scenarios
- Without this large, diverse dataset, the shift from explicit rules to learned behaviour would not be feasible (whether in autonomous driving or other complex domains like telco)
This is the critical detail that makes Tesla’s story hard for telcos to replicate.
Tesla could remove explicit logic because it had something to replace it with: massive volumes of coherent, real-world data combined with continuous feedback loops – a learning system that improved through exposure to real situations, not static assumptions.
More applicable autonomy precedents for telco
But autonomous vehicles are clearly not telco networks.
Fortunately, we don’t only need to look at a driving analogy to find operational precedents that map more closely to AN and AO. In hyperscale cloud environments, many teams are already pushing beyond manual operations into what is better described as autonomation (automated sub-flows) rather than autonomy (closed-loop, broad-scope, resilient self-governance).
They’re automating slices of the incident lifecycle with strong guardrails and measurable outcomes rather than claiming end-to-end self-governance.
A good example is Microsoft’s DiLink, which uses neural embeddings across incident text and a service dependency graph to automatically link related incidents across services. This reduces human triage load by improving how quickly responders can connect symptoms to upstream causes.
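DiLink’s internals aren’t public in detail, so the sketch below only illustrates the general idea: embed each incident’s text, then link incidents whose embeddings sit close together. The toy bag-of-words `embed` function is a stand-in assumption for a learned neural encoder, and the threshold, incident IDs and descriptions are all illustrative.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' -- a stand-in for a learned neural encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_incidents(incidents, threshold=0.5):
    """Return pairs of incident IDs whose descriptions look related."""
    vecs = {i: embed(t) for i, t in incidents.items()}
    ids = sorted(vecs)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if cosine(vecs[a], vecs[b]) >= threshold]

incidents = {
    "INC-101": "database latency spike in billing service",
    "INC-102": "billing service timeouts, database latency suspected",
    "INC-103": "CDN certificate expiry on edge nodes",
}
print(link_incidents(incidents))  # the two billing/database incidents get linked
```

In a production setting the embeddings would come from a trained model and the candidate pairs would be constrained by a service dependency graph, but the triage benefit is the same: responders start from clusters of related symptoms rather than isolated tickets.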
Meta’s DrP goes further in the direction of autonomation, automating large parts of the investigation workflow through programmatic playbooks at scale. Meta reports measurable MTTR (mean time to resolution) reductions across hundreds of teams. It is not “full autonomy”, but it is a strong precedent for operational decision support and automated investigation steps.
Finally, Microsoft Research’s AIOpsLab treats operations as an interactive environment for agents, combining incident simulation, telemetry observation, and agent orchestration to evaluate how AI agents might handle tasks across the incident lifecycle. It’s a framework for building and assessing these operational agents, rather than a claim that autonomy has already been achieved.
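DrP’s playbooks are internal to Meta, so the following is only a generic illustration of the pattern: an investigation playbook modelled as an ordered list of diagnostic checks, each inspecting the incident context and returning evidence, with the run ending in an escalation or an auto-resolution. Every step name, field and threshold here is a hypothetical example, not anyone’s actual runbook.

```python
# A programmatic playbook: an ordered list of diagnostic steps.
# Each step returns (finding, suspicious?) given the incident context.

def recent_deploy_check(ctx):
    mins = ctx.get("minutes_since_deploy")
    if mins is not None and mins < 30:
        return (f"deploy {mins} min before alert", True)
    return ("no recent deploy", False)

def error_rate_check(ctx):
    rate = ctx.get("error_rate", 0.0)
    return (f"error rate {rate:.0%}", rate > 0.05)

def dependency_health_check(ctx):
    unhealthy = [d for d, ok in ctx.get("dependencies", {}).items() if not ok]
    return (f"unhealthy upstreams: {unhealthy}" if unhealthy else "upstreams healthy",
            bool(unhealthy))

PLAYBOOK = [recent_deploy_check, error_rate_check, dependency_health_check]

def run_playbook(ctx):
    """Run every step, collect evidence, and recommend a next action."""
    evidence = [(step.__name__, *step(ctx)) for step in PLAYBOOK]
    suspects = [name for name, _, suspicious in evidence if suspicious]
    action = (f"escalate: investigate {suspects[0]}" if suspects
              else "auto-resolve: no anomaly found")
    return evidence, action

ctx = {"minutes_since_deploy": 12, "error_rate": 0.08,
       "dependencies": {"auth": True, "billing": False}}
evidence, action = run_playbook(ctx)
print(action)
```

The point of the pattern is that each step is cheap, auditable and individually testable; the “autonomation” lies in running the checks automatically and handing humans the assembled evidence, not in removing humans from the decision.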
The common thread is not that these organisations have “solved autonomy”. They are, however, a far closer analogue for telco AN and AO than a single monolithic autonomy story, and they point back to the same lessons we’ve been building throughout this series.
If we want higher levels of autonomation, we need fewer variants, clearer dependencies and tighter learning loops.
What the first four articles collectively taught us
At this point, it is worth stepping back.
Part 1 showed that many AI initiatives today unintentionally reinforce legacy OSS and BSS through new forms of entanglement.
Part 2 discussed the various transformation planning models and how well they’re suited to AI initiatives. It demonstrated that transformation outcomes are determined less by technology choice and more by the transformation objectives – evolutionary or revolutionary.
Part 3 argued that subtraction is an essential element of digital transformation, adding that we should first achieve quantitative visibility of dependencies and coupling before embarking on transformations.
Part 4 challenged the autonomy narrative directly, showing examples of how adding complexity takes us further away from the significant changes we’re seeking.
Together, these lessons point to a single conclusion:
Whether or not AI is part of your transformation, the limiting factor is complexity. We need a mindset of drastic simplification before any autonomous system can be achieved.
Design principles for OSS that can eventually be unplugged
If telcos genuinely want the option to move on from today’s OSS and BSS, different design principles are required:
- Design for subtraction, not addition or permanence
- Reduce variants before adding “intelligence”
- Increase repeatability (also reducing variants). Standardisation is a prerequisite for learning and continual improvement
- Establish learning loops before optimisation loops. Decisions must be measured by outcomes, not just execution
- Make dependencies quantified, explicit and owned. Hidden coupling is the enemy of subtraction projects
- Visualise your functionality long-tail diagram. Understand what really moves the needle, re-factoring and re-inventing what’s important, not adding to the code-base / logic-base for little benefit
- Make data sets coherent and real-time. Facilitate continual learning, rather than an outdated, static view of reality
- Measure progress by what is removed, not what is added. The Net Simplicity Score is a success metric
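The series doesn’t prescribe a formula for the Net Simplicity Score, so the sketch below is one hypothetical way to operationalise “measure progress by what is removed”: score each release period by subtracting additions (code, variants, integrations) from removals, weighted by how strongly each contributes to coupling. The weights, field names and sample numbers are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ReleaseDelta:
    """Inventory changes in one release period (all counts are illustrative)."""
    loc_added: int
    loc_removed: int
    variants_added: int
    variants_removed: int
    integrations_added: int
    integrations_removed: int

# Hypothetical weights: product/process variants and point-to-point
# integrations couple systems far more tightly than raw lines of code,
# so removing one of them counts for much more.
WEIGHTS = {"loc": 1, "variants": 500, "integrations": 1000}

def net_simplicity_score(d: ReleaseDelta) -> int:
    """Positive score = the estate got simpler this period."""
    return (WEIGHTS["loc"] * (d.loc_removed - d.loc_added)
            + WEIGHTS["variants"] * (d.variants_removed - d.variants_added)
            + WEIGHTS["integrations"] * (d.integrations_removed - d.integrations_added))

q1 = ReleaseDelta(loc_added=12000, loc_removed=30000,
                  variants_added=1, variants_removed=4,
                  integrations_added=0, integrations_removed=2)
print(net_simplicity_score(q1))  # prints 21500
```

The exact weights matter less than the sign discipline: a team that only ever adds will trend negative, which makes creeping complexity visible on the same dashboard as feature delivery.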
These principles do not require abandoning existing systems overnight. They require ensuring that today’s decisions do not make tomorrow’s impossible.
What other principles have I missed?
Closing thoughts (and discussions)
Legacy OSS and BSS will probably not disappear any time soon. But they can be designed to become increasingly unnecessary, progressively replaced by next-generation alternatives.
Getting to next-gen is going to require new, longer-range thinking about your transformation planning though. This includes quantitative visibility of all your dependencies, and a willingness to prioritise subtraction over constant addition.
At PAOSS, we increasingly focus on helping operators do exactly that. We’d be delighted to help you build foundations that support long-term change. Not necessarily to replace legacy systems tomorrow, but to ensure today’s decisions don’t close off future options.
AI offers great potential as a disruptive change agent. It can represent a bridge only when systems are allowed to simplify. Otherwise, it quietly becomes the noose for your digital transformation.
This concludes the five-part series. If nothing else, we hope it challenges a number of default assumptions. We also hope it elicits further discussion.
We’d love to hear your thoughts because this is an area that’s changing rapidly. Newer, better ways of working, architectures and systems are appearing every day. Learning is an infinite game.






2 Responses
Excellent series of articles, Ryan
Thanks James!
That’s high praise coming from someone who creates so much fantastic content for a living!!