Is AI a digital transformation bridge or noose? (part 1 – AI Entanglement)

Like microservices before them, AI and agentic solutions are increasingly seen as the panacea for digital transformation.

In telco circles, AI is often framed as the fastest path to escape. A way to finally move beyond the clunky, legacy worlds of OSS/BSS. Piecemeal AI projects promise quick wins, modern capabilities, and a stepping stone towards grand visions like Autonomous Networks and Autonomous Operations.

But when you step back and map out the long game, can you see a troubling pattern emerging?

Many operators are quietly reinforcing the very systems they want to obsolete.


Is a hyper-connected OSS/BSS landscape a strength or a trap?

I was very lucky. The first two OSS/BSS I worked with used fundamentally different design philosophies. Seeing two polar-opposite approaches side by side made it easy to spot patterns that guide my thinking to this day. In particular, it helped me identify a pattern that entangles the very legacy solutions network operators are desperate to extricate themselves from. That pattern is the basis for this series of articles.

But I digress. Let’s get back to those two opposing patterns.

The first was richly cross-linked. Everything referenced everything else. Faults, inventory, services, customers, tickets, performance metrics, etc. All tightly woven together in the data model and the user interface.

The second solution had almost no cross-linking of data. Each domain, each data set and each tool was almost totally isolated. There were almost no linking keys between data sets. Correlations had to be inferred or manually bridged.

Think of them as two ends of a spectrum, each with its own pros and cons.

I loved the first system. Because there were so many linking keys, I could ask questions of the data that my curious mind had never been able to answer before. Obscure questions. Obscure answers. If I wanted to create heat-maps of which customer segments or service types were most impacted by network faults, I could. If I wanted to see which regions had the lowest take-up rates among homes within 200m of existing network infrastructure, I could.

The system felt powerful, cohesive, insightful and operationally efficient. Circa 2001, we’d developed discovery tools that not only collected nodal and connectivity data, but also achieved cross-domain stitching of data sets.
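To make that cross-linking concrete, here’s a minimal sketch of the fault heat-map question above. The data sets, columns and values are entirely hypothetical, but the principle holds: when shared linking keys exist, an obscure question becomes a couple of joins.

```python
import pandas as pd

# Hypothetical extracts from a richly cross-linked OSS/BSS data model.
# The shared keys (service_id, customer_id) are what make questions easy.
faults = pd.DataFrame({
    "fault_id": [1, 2, 3, 4],
    "service_id": ["S1", "S2", "S2", "S3"],
    "region": ["North", "North", "South", "South"],
})
services = pd.DataFrame({
    "service_id": ["S1", "S2", "S3"],
    "service_type": ["FTTH", "FTTH", "Ethernet"],
    "customer_id": ["C1", "C2", "C3"],
})
customers = pd.DataFrame({
    "customer_id": ["C1", "C2", "C3"],
    "segment": ["Consumer", "Consumer", "Business"],
})

# Which customer segments and service types are most impacted by faults?
heatmap = (
    faults
    .merge(services, on="service_id")    # fault -> service (linking key 1)
    .merge(customers, on="customer_id")  # service -> customer (linking key 2)
    .groupby(["region", "segment", "service_type"])
    .size()
    .rename("fault_count")
    .reset_index()
)
print(heatmap)
```

In the second system, with no linking keys, answering the same question would mean fuzzy-matching on names, locations or timestamps. Possible, but manual and unreliable.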

However, there was a major problem with the first system. Hyper-connected data sets need beautifully curated data. Hyper-connection also creates a condition where everything becomes dependent on everything else. And once you reach that point of hyper-dependency, removal is no longer an easy option. Let’s put a pin in that important concept because we’ll keep coming back to it throughout the rest of this series. Beautifully linked data and systems become a risk-management exercise.
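To see why removal stops being easy, picture the integration landscape as a dependency graph. The sketch below uses made-up system names and plain Python, but the arithmetic is the point: anything with dependants can’t simply be switched off.

```python
# Hypothetical integration map: each system -> the systems it consumes from.
depends_on = {
    "fault_mgmt":   ["inventory", "ticketing"],
    "ai_triage":    ["fault_mgmt", "inventory"],
    "billing":      ["inventory", "crm"],
    "provisioning": ["inventory", "crm"],
    "ticketing":    ["crm"],
    "inventory":    [],
    "crm":          [],
}

# Count inbound dependencies: how many systems rely on each one.
dependants = {name: 0 for name in depends_on}
for upstreams in depends_on.values():
    for upstream in upstreams:
        dependants[upstream] += 1

# A system is only trivially removable if nothing depends on it.
for name, count in sorted(dependants.items(), key=lambda kv: -kv[1]):
    verdict = ("safe to remove" if count == 0
               else f"{count} dependants, removal needs coordination")
    print(f"{name:12s} {verdict}")
```

Notice that the inventory system, often the one operators most want to replace, carries the most dependants. Every new linking key moves it further from “replaceable”.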

I call it the chess-board analogy: every piece’s value and vulnerability depends on the position of all the others, so you can’t move or remove one piece without reassessing the whole board.

Now, onto the second system.

I was FAR less impressed with the second system. It was clunkier, less elegant and so unsophisticated that it almost felt like a high-school project. But over time, something else became clear. Changes were easier. Components could be modified, replaced or removed with less fear of unintended consequences. Data migration, discovery and reconciliation were incredibly easy by comparison.

And the reality is that most operators only use OSS or BSS tools within their specific domain or locus of control. Not many OSS/BSS users need to ask the kind of obscure questions that fascinate me.

Which of the two approaches is better? It depends!  😉

More on that later in the series.


Are AI ambitions about autonomy – or just incremental escape?

I’ve intentionally tried to be contrarian and avoid talking about AI in my blog articles (but have failed miserably, as this series will attest!).

But one thing I have noticed is that it’s incredibly divisive. Some commenters rave about it. Others denounce it. It’s the biggest telco religious war since trying to define what’s OSS and what’s BSS (where does BSS end and OSS begin?).

This tension is amplified by today’s AI narratives. At the strategic level, telcos talk about bold ambitions. Autonomous Networks. Autonomous Operations. Zero-touch everything. AI as the intelligence layer that finally breaks the cost and complexity curve.

These ambitions resonate for good reasons. OSS environments are expensive to run, slow to change and, man-oh-man, are they ever politically complex! AI offers a vision where intelligence replaces manual effort and automation replaces process sprawl. It feels like a way out.

Others argue that none of this will happen in their lifetimes. I’m no Ray Kurzweil, so I’ll not make any predictions about who is more right.

Regardless, let’s look at what is actually happening on the ground. Most AI initiatives are not sweeping transformations. They are narrow, additive projects. A model here to predict faults. An agent there to recommend actions. A bot to triage alarms or optimise parameters. An automation to reduce manual involvement by human operators.

They’re mostly small, achievable projects that fill the gaps between different systems. Each one is justified on speed-to-value, a relatively small budget and a finishing line that can easily be seen. Relatively small risk. Each one deliberately scoped to avoid bringing core OSS down.


Where does AI actually attach itself inside the OSS stack?

However, despite all the rhetoric about AI-centric architectures and intelligence layers, AI projects rarely act as a major disruption to the legacy OSS landscape. They mostly just plug in, consuming or enhancing OSS data. They therefore inherit OSS semantics. They tend to align with existing workflows and automations.

AI models need historical data, current state and authoritative definitions. Where do you think all of that comes from? Yep, it lives in OSS/BSS.

Algorithmic approaches like AI are still dependent on interfaces, schemas, reconciliation logic and operational conventions. At this stage, at least.
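Here’s a sketch of what that dependency looks like in code. Every endpoint, URL, field name and parameter below is hypothetical, but the pattern is the one described above: each feature a model consumes is another OSS schema, and another operational convention, that the “new” AI now inherits.

```python
import requests

OSS_BASE = "https://oss.example.internal/api"  # placeholder, not a real endpoint

def build_features(alarm_id: str) -> dict:
    """Assemble model features for one alarm. Every field fetched here is an
    OSS/BSS dependency the AI inherits: its schema, its semantics and its
    reconciliation quirks."""
    alarm = requests.get(f"{OSS_BASE}/alarms/{alarm_id}").json()
    node = requests.get(f"{OSS_BASE}/inventory/nodes/{alarm['node_id']}").json()
    history = requests.get(
        f"{OSS_BASE}/alarms",
        params={"node_id": alarm["node_id"], "days": 30},
    ).json()

    return {
        "severity": alarm["severity"],    # severity scale defined by the FM tool
        "vendor": node["vendor"],         # vendor taxonomy defined by inventory
        "node_role": node["role"],        # role model defined by inventory
        "alarms_last_30d": len(history),  # only as good as OSS data retention
    }
```

Rename one field in the inventory schema and the “independent” AI breaks. That’s the inheritance.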


How do piecemeal AI wins quietly reinforce legacy systems?

Each quick-win AI, automation or microservice project accelerates entanglement. When an AI pilot works, it gets promoted. It moves from experiment to production. SLAs appear. Dependencies harden. What was once “temporary” becomes business-critical.

Each success justifies the next integration. Another data feed. Another callback. Another exception path. None of these feel dangerous in isolation. In fact, they feel like progress. But collectively, they thicken the web around existing OSS. As world-famous OSS architect Warren Buffett (almost) once said, “The chains of integration are too light to be felt until they are too heavy to be broken.”

The irony is that AI often increases the value of the very systems it depends on. By making OSS data more actionable and workflows more efficient, AI makes those systems harder to switch off. Intelligence doesn’t displace the platform. It strengthens it.

To have any chance of replacing legacy OSS/BSS, we must first undertake ruthless simplification programmes. We have to remove the cross-linking, not increase it.

I bet you’ve experienced projects similar to the examples I’ve seen, described below:

  1. A telco with a PNI (Physical Network Inventory) tool that much of the team has wanted to replace with an alternate vendor’s product for close to a decade. However, the PNI has had 1000+ customisations made to it. All of these need to be unpicked before the tool can be churned, and then hundreds of customisations need to be added back in to make the replacement work with adjacent systems. And all the parts need replacing while the plane is in the air, 24x7x365!
  2. A telco with an FM (Fault Management) tool that pulls data from hundreds of sources and has even more customised rules (enrichments, suppressions, correlations, root-cause analysis, etc.) that need cross-checking and replacement.


If AI is meant to obsolete OSS, is this really the way?

Since I first launched the PassionateAboutOSS blog in 2012, people have been asking why I selected that domain name. In fact, in the first week after launch, one high-profile author told me to change it quickly because the hundreds of senior telco people he speaks with were all telling him that OSS was dead and about to be replaced with something else.

In many ways, OSS are painful and antiquated and worthy of replacement.

Yet OSS have somehow endured.

They keep adapting. Telcos keep adding to them. And with every additive project, replacement becomes a little harder to coordinate.

New approaches like microservices and AI are not loosening the grip of OSS/BSS. In many cases, they’re tightening the grip and making these systems even harder to obsolete.

Before asking whether AI can replace OSS (as many hope), there is a more fundamental question to answer.

How do we reduce AI entanglement?

In Part 2, we’ll look at how AI initiatives are actively increasing coupling, along with the various models of transformation.
In Part 3, we’ll describe ways to visualise dependencies and design subtraction projects to reduce coupling.
In Part 4, we’ll look at reframing the path to autonomous networks via subtraction rather than addition.
Then finally, in Part 5, we’ll discuss some design principles that allow OSS to be unplugged… eventually.

If this article was helpful, subscribe to the Passionate About OSS Blog to get each new post sent directly to your inbox. 100% free of charge and free of spam.
