Many (myself included) believe that recent progress in AI-based technologies should accelerate digital transformation (even if only by inspiring us to think differently about the digital systems we already have).
However, in Part 1 of this series we explored a paradox: the most common uses of AI today are not loosening the grip of legacy OSS/BSS, but quietly perpetuating them for years, even decades, to come through new forms of entanglement.
In Part 2, we took that concept further and argued that moving on from legacy requires a clear decision first:
Are we pursuing revolutionary change, or are we committed to evolutionary transformation while keeping legacy systems in place?
That decision matters. If we choose the revolutionary path, then before AI can act as a bridge to disruptive transformation, we must first learn how to unravel the entanglement.
When tightly coupled systems become a single organism
Earlier in this series I described two very different OSS/BSS systems I worked with early in my career. System #1, which I loved, was richly cross-linked, with everything referencing everything else. System #2, which I had a grudging respect for, was far less sophisticated, with minimal linking keys and weak coupling between domains.

The hyper-connected System #1 felt powerful and insightful, but it was a beast to work with. Whilst on paper it seemed like a modular architecture, it was really a well-engineered single organism (as you can see in the tightly meshed image of System #1 above).
Fast forward to today and many organisations’ digital systems are like this too. They start out as a patchwork of different, modular systems (a bit like System #2 above), but evolve into the monoliths that we desperately try to avoid and claim we no longer have (“we’re microservices, not a monolith”… well, kind of).
Not a monolith in theory, but with each new integration, correlation rule, enrichment and shortcut tightening the coupling, we end up with digital estates that act as a single organism (ie a monolith), moving from right to left on the continuum above.
Most AI and automation investments simply accelerate this right-to-left process. They don’t just consume data, they embed assumptions, reinforce workflows, and optimise existing paths, further tightening our binding to legacy systems. Over time, intelligence doesn’t simplify the organism. It makes it more efficient as a single organism.
Transformation becomes really difficult. You can’t meaningfully replace or remove parts of a system that behaves as one.
Disentanglement is not the outcome of transformation. It’s a fundamental prerequisite. It requires subtraction projects, which we almost never do.
Pruning a rose-bush gives better, more abundant flowers and future growth, but we don’t like pruning our precious digital systems.
The thing I find humorous is that most of our “Agile” additive projects sit at the blue-arrow, far-right end of the long-tail diagram. The stuff that’s easy to do, adds some functionality quickly, but barely moves the needle. Agile projects rarely consist of refactoring the red-box, core functionality that does move the needle.
As indicated above, the disruptive force that is AI allows us to think differently. To totally re-invent the core, as indicated by the green arrow below. But we’ll need to disentangle the blue-arrow customisations! 🙂

When subtraction breaks ecosystems instead of simplifying them
Most experienced telco practitioners have lived through at least one subtraction project that went badly wrong. A system that was switched off because “everyone knew” what depended on it. SMEs were consulted. Workshops were held. Confidence was high. The change was approved.
And then the system was turned off in production.
Uh oh! We didn’t realise the branch we pruned actually had a bunch of roses attached to it!
Billing broke. Assurance went dark. Orders stalled. Entire ecosystems failed in ways no one predicted. Not because people were careless, but because they were relying on the tribal knowledge within the team (in environments where most of the tribal knowledge had left the building due to retirement, retrenchment, etc).
The most dangerous dependencies are rarely the obvious ones. They’re embedded in configuration and code written years earlier. They’re in systems we never even realised were connected. When subtraction is based on intuition rather than evidence, failure is an outcome we almost have to expect.
Is there a more systematic approach? I believe so, as we’ll describe in more detail below. But I must also call out that I’ve only done some aspects of it, not the full map we describe. Not yet anyway!
To unpick a system, you first need to see the interconnections
Just as with the process diagrams we talked about here, most digital systems artefacts lie by omission. They simply aren’t aware of the full scope of relationships / dependencies.
We need a quantitative, not qualitative, understanding of our dependencies.
Static diagrams age almost immediately. Documentation decays as soon as it’s written. In-flight and BAU activities make changes every day. Tribal knowledge is almost always qualitative (ie best guesses).
If you can’t see how systems interact, how data flows, how decisions propagate and how dependencies entangle, you’re about to embark on a risky subtraction project.
Why so many dependencies remain invisible
Many dependencies are intentionally abstracted to make life easier for day-to-day operators.
But a transformation isn’t a day-to-day activity (hopefully).
Point-to-point integrations are buried in scripts, adapters and middleware. Other dependencies live in data models, rules, integrations and assumptions that no one documents because they just feel “obvious” at the time.
Some dependencies aren’t even technical. They’re organisational.
We need a better way of systematically capturing all the different types of dependencies.
Next, we’ll walk you through some ideas about how to achieve this, using many of the tools you probably already possess.
Using discovery tools to map integrations, not just networks
Most telcos already have powerful discovery and reconciliation tools. But they’re typically used to collect information about networks (eg physical topology, inventory, data integrity / confidence, firewall rules, user logs, virtual containers, CMDBs and much more).
We only need a lateral shift in focus. Instead of mapping networks, we can use them to build a map of dependencies across our digital systems.
Instead of asking what assets exist, we ask which systems call which. Who consumes whose data. Where decisions flow. Which hardware contains which software. Which users / groups are allowed access to which data.
We already have most of the data. We just have to figure out what to do with it and how to collate / prepare it.
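As a rough sketch of what that collation step might look like, here’s a minimal Python example that collapses a flow-log export into system-to-system dependency edges. The CSV format, system names and hit counts are all invented for illustration; real exports from discovery or firewall tools will look different, but the aggregation step is much the same.

```python
import csv
import io
from collections import defaultdict

# Hypothetical flow-log export (source system, destination system, port, hits).
# Real discovery / firewall exports will differ, but the shape is typical.
FLOW_LOG = """src,dst,port,hits
billing,crm,8443,1021
assurance,inventory,443,88
orders,billing,8443,412
orders,inventory,443,57
"""

def edges_from_flows(log_text):
    """Collapse raw flow records into weighted system-to-system edges."""
    weights = defaultdict(int)
    for row in csv.DictReader(io.StringIO(log_text)):
        weights[(row["src"], row["dst"])] += int(row["hits"])
    return dict(weights)

edges = edges_from_flows(FLOW_LOG)
for (src, dst), hits in sorted(edges.items()):
    print(f"{src} -> {dst} ({hits} observed calls)")
```

The output of a step like this (a weighted edge list) is exactly what feeds the dependency graph discussed below.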
So here’s the concept in a simple use-case format:
As A Transformation Consultant I Want To efficiently understand all of the dependencies within a current OSS/BSS/management environment, So That I can understand all possible impacts and create a comprehensive transformation plan before adding, removing and/or modifying anything within that environment
[Yes, that’s use case #001 out of dozens from our Dependency Mapping Tool prototype]
I believe that knowledge graphs are the key here.
Whilst the image below shows a network graph, we can easily turn it into a dependency graph.

By modelling OSS, BSS, data, agents, etc as nodes (instead of the network nodes shown above), and their interactions as edges, we get a living model of dependencies that can be queried and tested. More importantly, it can be dynamically updated by linking to discovery, reconciliation, logs and other data sources.
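To make that node-and-edge model concrete, here’s a minimal sketch in plain Python (standard library only; a real implementation would more likely sit in a graph database or an inventory tool like Kuwaiba). The system names, types and relationships are purely illustrative:

```python
# Nodes: systems and agents in the digital estate, with a type attribute.
# All names here are invented for illustration.
nodes = {
    "CRM":       {"type": "BSS"},
    "Billing":   {"type": "BSS"},
    "Inventory": {"type": "OSS"},
    "Assurance": {"type": "OSS"},
    "AI-Agent":  {"type": "agent"},
}

# Directed edges: (consumer, provider, relationship label).
edges = [
    ("Billing",   "CRM",       "reads customer data from"),
    ("Assurance", "Inventory", "enriches alarms from"),
    ("AI-Agent",  "Assurance", "consumes events from"),
    ("AI-Agent",  "Inventory", "queries topology from"),
]

def dependents_of(system):
    """Which systems directly depend on `system`?"""
    return sorted(src for src, dst, _ in edges if dst == system)

# Everything that breaks (directly) if Inventory disappears:
print(dependents_of("Inventory"))
```

Because the edges carry relationship labels, the same structure can hold data flows, decision flows and organisational dependencies side by side, and can be refreshed whenever discovery or reconciliation runs.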
The diagram above shows a graph from Kuwaiba, the network inventory tool that we use for our OSS Sandpit Project. We can plug dependency data into it directly because its data model is extensible. However, it’s also possible to use a custom graph for your data sources and aggregation tools if they have similar flexibility to Kuwaiba.
Graphs allow us to ask questions that matter for transformation:
- What’s impacted if this system disappears?
- Who’s impacted?
- How are other in-flight projects impacted (or cause impacts)?
- In what sequence should project activities be performed?
Most importantly, graphs turn subtraction projects from a leap of faith into a systematic design activity.
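Questions like these become straightforward graph queries. The sketch below, again using invented system names and only Python’s standard library, answers two of them: the transitive impact set of removing a system, and a safe retirement sequence derived by topological sorting:

```python
from collections import defaultdict, deque
from graphlib import TopologicalSorter

# Illustrative dependency edges: (dependent, dependency) — "X depends on Y".
DEPENDS_ON = [
    ("Billing", "CRM"),
    ("Orders", "Billing"),
    ("Orders", "Inventory"),
    ("Assurance", "Inventory"),
]

def impact_set(system):
    """Everything transitively impacted if `system` is switched off."""
    rdeps = defaultdict(set)
    for dependent, dependency in DEPENDS_ON:
        rdeps[dependency].add(dependent)
    impacted, queue = set(), deque([system])
    while queue:  # breadth-first walk up the reverse-dependency edges
        for dep in rdeps[queue.popleft()]:
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return sorted(impacted)

def retirement_order():
    """A safe sequence: retire dependents before their dependencies."""
    graph = defaultdict(set)
    for dependent, dependency in DEPENDS_ON:
        graph[dependency].add(dependent)  # dependent must be retired first
    return list(TopologicalSorter(graph).static_order())

print(impact_set("Inventory"))  # who breaks if Inventory goes away
print(retirement_order())       # one valid decommissioning sequence
```

The same reverse-walk answers “who’s impacted?” for people and projects too, once organisational dependencies are modelled as edges alongside the technical ones.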
Obviously visibility alone doesn’t simplify systems. But it’s a great start to understanding the full complexity of the current-state organism, which subsequently makes simplification / pruning possible (ie shifting from left-to-right in the continuum above).
In Part 4, we’ll leverage this concept and explore why subtraction is the real path to revolutionary transformation and objectives such as autonomous networks.
PS. If you’re already aware of any tools specifically designed for mapping dependency graphs, please let me know.




