One of the things I find incredibly interesting when I look at the Simplified TAM diagram below is that of all the arrows indicating a workflow, only one lacks systems designed to manage that operational workflow:
- Assurance has trouble tickets
- Fulfilment has service orders
- Field operations has work orders
- Even billing has order-to-cash sequences

Do you also find it interesting that network inventory was never designed as a workflow-native system?
But, how do we make it so?
I believe we should take a closer look at a sample / simplified asset management lifecycle, such as the diagram below.
What do you notice about it?
The chevrons, and the dot-points under them, reveal just how many different systems and flows are involved. We’ll explore that in more detail.

Whenever I think about architecting an OSS, my first thoughts tend to start with Network inventory, as it’s the repository most responsible for holding all information about the network’s resources, with associations to services and customers (amongst many other things).
Yet, by design, almost all Network inventory solutions are based on a state repository architecture, not a state transition engine.
That design assumption might have made sense when networks were slower, hardware-driven and operational models were siloed.
But it no longer makes sense in a world where assets are continuously moving through design, procurement, deployment, optimisation and retirement loops. If the repository model was already straining in the more static networks of yesteryear, it’s even less suited to today’s more dynamic networks.
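To make the distinction concrete, here’s a minimal sketch (all class names, states and IDs are illustrative, not from any real product) contrasting a repository that only stores the latest snapshot with an engine that validates and records every transition:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# State repository: only the latest snapshot survives; history is overwritten.
class InventoryRepository:
    def __init__(self):
        self.state = {}  # asset_id -> current lifecycle state

    def set_state(self, asset_id, state):
        self.state[asset_id] = state  # no record of how we got here

# State transition engine: every change is an event we can replay and audit.
@dataclass
class Transition:
    asset_id: str
    from_state: str
    to_state: str
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class LifecycleEngine:
    # Legal lifecycle moves (design -> procurement -> deployment -> ... loops)
    ALLOWED = {("planned", "procured"), ("procured", "deployed"),
               ("deployed", "optimised"), ("deployed", "retired"),
               ("optimised", "retired")}

    def __init__(self):
        self.current = {}   # same snapshot a repository would hold...
        self.history = []   # ...plus the full transition log it wouldn't

    def transition(self, asset_id, to_state):
        from_state = self.current.get(asset_id, "planned")
        if (from_state, to_state) not in self.ALLOWED:
            raise ValueError(f"illegal transition {from_state} -> {to_state}")
        self.current[asset_id] = to_state
        self.history.append(Transition(asset_id, from_state, to_state))

engine = LifecycleEngine()
engine.transition("RTR-001", "procured")
engine.transition("RTR-001", "deployed")
```

The repository can answer “what state is RTR-001 in?”; only the engine can answer “how and when did it get there, and was that path legal?”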
The lifecycle shown in the asset lifecycle diagram makes something very clear:
- The asset journey does not live inside a single workflow domain
- It crosses many organisational jurisdictions (eg planners, procurement, suppliers, installers, etc)
- It traverses many different core systems including procurement, logistics, warehousing, field service, assurance, finance and compliance
- Every boundary introduces integrations, handoffs and data translations
It also introduces three invisible planes:
- Physical flow: equipment moving through the world
- Control flow: work orders, approvals, contracts to keep the physical assets moving through the lifecycle
- Data flow: tracking information between systems (inventory, telemetry, operational state, current owner, etc)
Many failures happen when these planes get out of sync.
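A toy example of that drift, with entirely hypothetical serial numbers and site names: each plane holds its own view of one asset, and a simple reconciliation check exposes where they disagree.

```python
# Three independent "planes", each with its own view of asset SN1234.
physical = {"SN1234": {"location": "site-A"}}                      # where it really is
control  = {"SN1234": {"work_order": "WO-77", "dest": "site-B"}}   # where work says it should go
data     = {"SN1234": {"location": "warehouse-3"}}                 # what inventory still believes

def out_of_sync(serial):
    """Return all plane views of the asset's location if they disagree."""
    views = {
        "physical": physical[serial]["location"],
        "control_dest": control[serial]["dest"],
        "data": data[serial]["location"],
    }
    # Any disagreement between planes is a reconciliation candidate.
    return views if len(set(views.values())) > 1 else {}

print(out_of_sync("SN1234"))
```

Here the unit is physically at site-A, a work order is moving it to site-B, and inventory thinks it never left the warehouse, which is exactly the kind of three-way mismatch that surfaces months later as a stranded asset.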
Asset Lifecycle Challenges / Consequences
These discrepancies create seven systemic consequences that most operators have simply normalised:
- Swivel-chair operations as engineers jump between ERP, FSM, inventory and assurance
- A spider-web of tools and integrations attempting to synchronise state transitions, identity, location, responsibility and much more
- Perpetual reconciliation because no system owns the lifecycle and there’s no end-to-end tracking ID through all of the chevrons / systems
- Fallouts are commonplace, leading to stranded assets, unfinished flows and sub-optimal work practices
- And since there are rarely any KPIs monitored or quantified across the end-to-end flow, everything effectively becomes “best effort”
- Without end-to-end tracking IDs, we can’t build a quantified BPMN diagram that helps us show where the bottlenecks are in our asset lifecycles
- Without end-to-end tracking IDs, it’s more difficult to build cross-stack tracing agents like those mentioned in our earlier jugglers juggling post, “Telco is a Circus with Thousands of Balls in the Air”
Inventory knows what exists.
Workflow tools know what is happening.
No platform truly understands what is happening to what exists across time.
Are you getting the sense that this gap is far more strategic than it first appears?
Next-generation Network Inventory and Asset Lifecycle Management
To me, this suggests an opportunity for a new class of product.
A network inventory product built around a simple premise:
Treat the asset lifecycle itself as the primary workflow
Not just an asset and connectivity repository
Not procurement workflows
Not service workflows
Not maintenance workflows
But a single orchestrated workflow (Plan to Build to Operate, or P2B2O) that manages the lifecycle graph, where the asset is the anchor object.
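One way to picture the anchor-object idea, as a hedged sketch rather than a real schema (every ID and domain name below is invented): a single lifecycle node per asset, with every domain’s records hanging off it, so end-to-end lineage falls out of one key.

```python
from collections import defaultdict

class AssetAnchor:
    """One lifecycle-graph node per asset; all domain records link to it."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.links = defaultdict(list)  # domain -> list of record IDs

    def attach(self, domain, record_id):
        self.links[domain].append(record_id)

    def lineage(self):
        # End-to-end trace across P2B2O, keyed by the single asset ID.
        return {domain: ids for domain, ids in self.links.items()}

asset = AssetAnchor("OLT-0042")
asset.attach("procurement", "PO-1009")    # Plan
asset.attach("field_service", "WO-3301")  # Build
asset.attach("assurance", "TT-8876")      # Operate
```

The point isn’t the data structure, it’s the inversion: instead of each domain system holding its own key for the asset, the asset holds the keys to each domain system.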
Such a platform would fundamentally change the operating posture of a telco:
- Far fewer integrations because lifecycle context is native
- Dramatically less data mapping
- End-to-end lineage from design intent to retirement
- Real-time chain-of-custody
- Reduced operational friction
- Higher inventory trust
- Stronger financial alignment
- Automation that is continuously asset-, state- and topology-aware
Implications
There are also two deeper architectural implications worth calling out:
- Today’s OSS stacks were largely optimised around services. Yet capital intensity in telecom still sits overwhelmingly inside physical and logical assets
- Because of this, a lifecycle-native product doesn’t neatly fit inside TM Forum’s ODA model (or the TAM model before it). The closest candidates are:
  - Production Domain Orchestration: but too service-centric
  - Resource Domain: but too inventory-centric
  - Enterprise Domain: but too business-centric
Whoever builds the lifecycle-native control plane effectively creates the economic nervous system of the operator. When linked together with fulfilment, assurance and billing workflows, it tracks all aspects of the ROI:
- Equipment cost
- Cost to install
- Cost to maintain
- Resources used by each service
- Revenue generated by each service (and apportioned by resource)
- And as the diagram below indicates, OSS/BSS are the profit engine of every telco business

This new solution becomes an Asset Lifecycle Operating System that sits horizontally across the stack and quietly eliminates hundreds of micro-frictions that operators currently just accept as unavoidable.
If anyone actually builds this, the question will become “Why were these myriad solutions ever separate?”
The sad reality
However, I’m the first to acknowledge that this is an ideal end-state.
Reaching the ideal of a Lifecycle Operating System for network assets is not primarily a technology challenge. It is a human-factor problem.
ERP teams optimise for financial control, warehouse teams for stock accuracy, field organisations for execution velocity, network teams for uptime and procurement for cost discipline. Each has evolved its own tools, data models, approval structures and risk tolerances over decades.
Those systems are not loosely connected components waiting to be orchestrated. They are deeply entangled, embedded operating environments with contractual dependencies, audit implications and established ways-of-working.
Attempting to unify them is less like integrating software and more like realigning tectonic plates (or convincing Liverpool and Arsenal to merge).
Even when Open APIs exist, semantic misalignment remains. The same asset can mean a capital object in finance, a serialised unit in logistics, a configuration endpoint in the network and a directly swappable component in maintenance.
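That semantic misalignment is easy to demonstrate with a contrived example (all keys and record shapes here are invented): the same serial number appears in three domain systems, each indexing it by a different identity.

```python
# Three domain views of the same physical unit, each keyed differently.
finance   = {"FA-2210": {"serial": "SN1234", "book_value": 12000}}  # capital object
logistics = {"SN1234": {"status": "in_transit"}}                    # serialised unit
network   = {"olt-3.slot-2": {"serial": "SN1234", "admin": "up"}}   # config endpoint

def views_of(serial):
    """Find every domain record that refers to one serial number."""
    hits = {}
    for name, table in (("finance", finance),
                        ("logistics", logistics),
                        ("network", network)):
        for key, record in table.items():
            # The serial may be the key itself, or buried inside the record.
            if key == serial or record.get("serial") == serial:
                hits[name] = key
    return hits

print(views_of("SN1234"))
```

Even this trivial join only works because we happen to know where each system hides the serial number; at enterprise scale, agreeing on that mapping is precisely the operational-authority question the next paragraph describes.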
Resolving those competing truths requires more than integration. It demands agreement on operational authority, which is where most transformation programmes quietly stall.
The vendor creating such a product would need to get all the silos within every client / prospect aligned and signed off.
This is why the path to a lifecycle-native operating model is far more likely to be evolutionary.
The winning pattern will probably not begin with a grand platform replacement, but with evolving Network Inventory product user interfaces to be more process-centric. I’ve started to see some evidence of vendors thinking of UI/UX/CX as being process-driven.
The end state is unquestionably difficult, but the stepping-stones to get there are not totally implausible.
We first need to tell everyone who will listen that current-day operational drag is so huge that architectural reinvention of these collective tools is an economic necessity.
Who’s with me?? 😀




