Most people think AI innovation depends on breakthroughs in models and training algorithms.
But those are just the surface.
Underneath, OSS and BSS aren’t the constraint; their coordination could become the ultimate facilitator of intelligence.
Sam Altman’s testimony at the Senate Committee Hearings on 8 May 2025 only underscores that:
“I think it’s hard to overstate how important energy is to the future here… Eventually the cost of intelligence, the cost of AI will converge to the cost of energy… the abundance of it will be limited by the abundance of energy. So in terms of long term strategic investments for the US to make, I can’t think of anything more important.”
If interested, you can watch the snippet (or the full 3+ hour hearing) via the video below.
I increasingly find myself thinking about the intersections of Venn Diagrams.
Three Industries Are Quietly Building the Same Stack
AI is accelerating demand across three foundational infrastructure layers (the three circles in the Venn diagram) where intelligence is at the centre:
- Energy
- Connectivity (comms networks), and
- Compute (data centres)
These layers are managed by what are effectively operational and business support systems (OSS and BSS), though they go by different names in each industry.
They were historically designed in silos: telco OSS for comms networks, DCIM for data centres, SCADA and Energy Management Systems for energy networks.
And yet, having worked on systems for each of these industries, I can tell you that structurally, they’re solving very similar problems. There are even similarities in the user interfaces they use.
The metrics and algorithms behind each domain’s systems are different, but the general functionalities are really similar. Each manages:
- Complex inventories of physical and logical assets (often with map/geo or topological connectivity views)
- Assurance tools to monitor for faults and anomalies
- Orchestration / provisioning / fulfilment tools to make configuration changes on their respective infrastructure
- Capacity planning for current and future demand, and
- Billing based on usage (and associated customer management – BSS stacks)
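To make the structural similarity concrete, here’s a minimal sketch of those five capabilities expressed as one domain-agnostic support system. All class, field and asset names (e.g. `SupportSystem`, `feeder-1`) are invented for illustration; no real OSS works exactly like this.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    domain: str          # "energy" | "comms" | "compute"
    capacity: float      # MW, Gbps, or GPU-hours depending on domain
    used: float = 0.0

class SupportSystem:
    def __init__(self):
        self.inventory = {}   # 1. inventory of physical/logical assets
        self.alarms = []      # 2. assurance: faults and anomalies
        self.usage = {}       # 5. usage records feeding billing (BSS)

    def register(self, asset: Asset):
        self.inventory[asset.asset_id] = asset

    def provision(self, asset_id: str, amount: float) -> bool:
        # 3. orchestration/fulfilment: apply a configuration change
        asset = self.inventory[asset_id]
        if asset.used + amount > asset.capacity:
            self.alarms.append((asset_id, "capacity_exceeded"))
            return False
        asset.used += amount
        self.usage[asset_id] = self.usage.get(asset_id, 0.0) + amount
        return True

    def headroom(self, asset_id: str) -> float:
        # 4. capacity planning: remaining headroom for future demand
        a = self.inventory[asset_id]
        return a.capacity - a.used

    def bill(self, rate: float) -> dict:
        # 5. billing based on usage
        return {aid: used * rate for aid, used in self.usage.items()}

oss = SupportSystem()
oss.register(Asset("feeder-1", "energy", capacity=10.0))
oss.provision("feeder-1", 4.0)
print(oss.headroom("feeder-1"))   # 6.0
print(oss.bill(rate=2.0))         # {'feeder-1': 8.0}
```

Swap `domain` from "energy" to "comms" or "compute" and nothing else changes – which is precisely the point about convergent architectures.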
These commonalities aren’t coincidental. They’re architectural convergences shaped by the need to manage large-scale, distributed infrastructure / networks. What’s changing now is that these infrastructures are no longer just parallel. They’re increasingly interdependent, especially in the world of AI.
AI runs in data centres, consuming a lot of power in the process. The amount of power drawn by AI workloads means data centres are increasingly being located near energy sources. Energy sources, especially renewables, aren’t always located near large population centres. This means comms networks are required to take the intelligence developed from AI in DCs near power sources and distribute it to populations spread around the world.
Whereas power sources, data centres and comms networks were previously developed in isolation, we’re now seeing them being developed as clusters.
So dear reader. Do you see where this is going?
Yes – Jointly coordinated projects, jointly coordinated infrastructure, jointly coordinated support systems.
So, let’s take a closer look at what they might look like. We already see significant overlap between telco OSS and DCIM solutions. We already see comms networks carrying SCADA and related energy management traffic for power companies. They all need to work in unison to deliver reliable AI at scale. They all collect masses of operational data.
However, I’m yet to see data shared outside their own bubbles. This means that the networks are already working together, but their systems and analytics platforms are not combining data to make more informed operational decisions (yet).
Let’s break this down into the constituent parts.
Inventory: The Shared Struggle for Asset Truth
Whether it’s a fibre route, a server rack, or a transformer, every domain grapples with the same fundamental question: Where are my assets, and what depends on them? In telco, inventory systems map logical services onto physical connectivity. In energy, asset registries underpin fault location and switching decisions. In data centres, DCIM systems track capacity, utilisation, and environmental constraints. But as AI reshapes how we deploy compute, energy, and network capacity, asset visibility needs to extend across domains. A training workload running in a remote data centre may depend on both the availability of carbon-free power and the latency of the telco network. Inventory needs to evolve from isolated asset maps into integrated infrastructure intelligence.
Some even use the same GIS (Geographical Information System) to visualise the data, but leave them as disparate data layers for separate operational teams to manage.
When it comes down to it, all forms of networks (telco, energy, LAN/WAN, water, rail, road, etc) are effectively nodes and arcs/connections. Any network inventory solution with a flexible data model will be able to handle a multi-utility network model, as we showed in this sandpit model of a renewable energy + supervisory + comms network.
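That nodes-and-arcs idea can be shown in a few lines. This is a toy inventory, assuming nothing beyond Python dicts, that holds energy and comms layers in one graph while still giving each operational team its own filtered view. The node names ("wind-farm-1", "substn-A", etc.) are made up for illustration.

```python
class NetworkInventory:
    def __init__(self):
        self.nodes = {}   # node_id -> {"type": ..., "layer": ...}
        self.arcs = []    # (a, b, {"layer": ...})

    def add_node(self, node_id, node_type, layer):
        self.nodes[node_id] = {"type": node_type, "layer": layer}

    def add_arc(self, a, b, layer):
        self.arcs.append((a, b, {"layer": layer}))

    def layer(self, layer):
        # One operational team's view: the sub-network for a single domain
        nodes = {n for n, d in self.nodes.items() if d["layer"] == layer}
        arcs = [(a, b) for a, b, d in self.arcs if d["layer"] == layer]
        return nodes, arcs

inv = NetworkInventory()
inv.add_node("wind-farm-1", "generator", layer="energy")
inv.add_node("substn-A", "substation", layer="energy")
inv.add_node("exchange-B", "fibre_hub", layer="comms")
inv.add_arc("wind-farm-1", "substn-A", layer="energy")
inv.add_arc("substn-A", "exchange-B", layer="comms")   # SCADA backhaul link

energy_nodes, energy_arcs = inv.layer("energy")
print(sorted(energy_nodes))   # ['substn-A', 'wind-farm-1']
print(energy_arcs)            # [('wind-farm-1', 'substn-A')]
```

The interesting bit is the shared graph: "substn-A" appears in both the energy layer and (via the SCADA backhaul arc) the comms layer, which is exactly the cross-domain dependency a siloed inventory can’t see.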
Planning for Power, Packets, and Processing
Each domain uses planning tools to forecast capacity and guide investment. Telcos model traffic growth and service rollout; energy providers plan grid reinforcement and load balancing; data centres forecast cooling, floor space, and power draw.
Many of them even have sophisticated scenario / what-if planning tools to simulate different situations occurring across their managed infrastructure.
In the AI era, these plans can no longer remain separate. A GPU farm doesn’t just need electricity. It needs cheap, reliable electricity, available where low-latency fibre routes already exist. Similarly, the ability to distribute AI inference across edge locations depends on regional power availability and network reachability. Planning systems need shared data models and interoperable assumptions. Without them, we risk building stranded capacity in places that can’t support the full chain of AI workload execution.
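A hedged sketch of what cross-domain planning means in practice: a candidate data-centre site only qualifies if power price, fibre latency AND grid headroom clear their thresholds together, rather than in three separate plans. Every figure and site name below is invented.

```python
def site_viable(site, max_price=60.0, max_latency_ms=10.0, min_headroom_mw=50.0):
    # All three domains must pass at once: energy, comms, compute
    return (site["power_price_mwh"] <= max_price
            and site["fibre_latency_ms"] <= max_latency_ms
            and site["grid_headroom_mw"] >= min_headroom_mw)

candidates = [
    {"name": "coastal-wind-hub", "power_price_mwh": 35.0,
     "fibre_latency_ms": 8.0, "grid_headroom_mw": 120.0},
    {"name": "metro-edge", "power_price_mwh": 90.0,
     "fibre_latency_ms": 2.0, "grid_headroom_mw": 30.0},
]

viable = [s["name"] for s in candidates if site_viable(s)]
print(viable)   # ['coastal-wind-hub']
```

Note that "metro-edge" fails despite having the best latency – stranded capacity in a place that can’t support the full chain of AI workload execution.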
Monitoring, Managing and Assuring your Network/s
Just like the network inventory example above, a flexibly-designed assurance solution will be able to handle any form of real-time data, whether that’s alarms, alerts, logs, telemetry, etc. Whether the real-time feeds are coming from telco or energy or water or other types of network device, they still get presented in time-series visuals or alarm lists or similar.
AI-based tools (eg AIOps) are increasingly helping to manage networks, being able to handle the assurance streams at a scale that humans simply can’t process.
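The "flexibly-designed assurance" point can be sketched as a normalisation step: raw events from telco, energy and data-centre sources are mapped into one alarm record shape so a single time-series view (or AIOps engine) can consume them. The field names and thresholds below are assumptions for illustration, not any vendor’s schema.

```python
# Illustrative per-metric thresholds (invented values)
THRESHOLDS = {"transformer_temp_c": 90.0, "link_util_pct": 95.0,
              "rack_inlet_temp_c": 35.0}

def normalise(raw):
    # raw event: {"source": ..., "metric": ..., "value": ..., "ts": ...}
    severity = "critical" if raw["value"] > THRESHOLDS[raw["metric"]] else "info"
    return {"ts": raw["ts"], "domain": raw["source"],
            "metric": raw["metric"], "value": raw["value"],
            "severity": severity}

feed = [
    {"source": "energy", "metric": "transformer_temp_c", "value": 97.0, "ts": 1},
    {"source": "telco",  "metric": "link_util_pct",      "value": 62.0, "ts": 2},
    {"source": "dc",     "metric": "rack_inlet_temp_c",  "value": 36.5, "ts": 3},
]

alarms = [normalise(e) for e in feed]
critical = [a["metric"] for a in alarms if a["severity"] == "critical"]
print(critical)   # ['transformer_temp_c', 'rack_inlet_temp_c']
```

Once everything is in one record shape, the same alarm list shows a hot transformer next to a hot server rack – the kind of cross-domain correlation siloed assurance stacks never surface.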
From Fulfilment to Closed-Loop Control
Provisioning a service, be it energy delivery, bandwidth allocation, or compute tenancy, follows a similar logic across domains. We accept an intent, match it to resources, execute a workflow and voila, a customer is now serviced.
Telcos have led the way in automation with service fulfilment platforms. Energy providers are catching up with demand-side orchestration. Data centres increasingly rely on API-driven infrastructure-as-code. Clearly, all three industries are heading toward closed-loop automation.
AI inference models monitor real-time conditions (eg network jitter, transformer load, thermal limits) and trigger dynamic reconfiguration. But this loop only works if the OSS logic in each domain is interoperable. An example: a spike in energy prices triggers AI to reschedule training workloads in a lower-cost zone. This requires dynamic reallocation of compute and re-routing of data traffic. This level of coordination demands orchestration across power, network, and processing layers, via shared fulfilment and assurance logic.
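The energy-price example above can be sketched as a single control-loop function: when a zone’s spot price breaches a ceiling, workloads migrate to the cheapest zone and each migration implies a traffic re-route. Zone names, prices and route identifiers are all invented; a real orchestrator would sit across three separate OSS stacks.

```python
def rebalance(workloads, zone_prices, price_ceiling, routes):
    """Move workloads out of zones whose spot price exceeds the ceiling,
    recording the compute migration and the network re-route it implies."""
    cheapest = min(zone_prices, key=zone_prices.get)
    actions = []
    for wl, zone in list(workloads.items()):
        if zone_prices[zone] > price_ceiling and zone != cheapest:
            workloads[wl] = cheapest                     # compute layer change
            actions.append(("migrate", wl, zone, cheapest))
            actions.append(("reroute", routes[zone], routes[cheapest]))  # network layer
    return actions

workloads = {"train-job-1": "zone-a", "infer-svc-2": "zone-b"}
zone_prices = {"zone-a": 140.0, "zone-b": 40.0}          # $/MWh, energy layer input
routes = {"zone-a": "fibre-path-1", "zone-b": "fibre-path-2"}

actions = rebalance(workloads, zone_prices, price_ceiling=100.0, routes=routes)
print(workloads)   # {'train-job-1': 'zone-b', 'infer-svc-2': 'zone-b'}
```

The function only works because it can read energy prices, write compute placement and drive network routes in one pass – the shared fulfilment and assurance logic the paragraph above argues for.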
Today, energy networks pay for distributed suppliers (eg residential rooftop solar) to provide energy into the grid. With abundant edge generation, I can foresee a time when the energy networks pay even more for those who soak up large-scale energy from the grid to balance loads. What better real-time energy sponge than a data centre with workloads on demand?
Billing Is Where the Abstraction Becomes Real
OSS manages operations. BSS decides who pays for what. And across all three domains, billing systems are adapting to dynamic, usage-based pricing. Telcos have long billed for time, volume and QoS tiers. Data centres now charge for compute cycles, storage, data transfer and power draw.
Energy grids are evolving beyond static tariffs to real-time billing based on market exposure, time-of-use, and carbon intensity. All three require mediation layers to translate raw telemetry into billable events and to allocate those charges across multi-tenant environments.
As infrastructure becomes more dynamic, BSS must evolve into a real-time abstraction layer: not just measuring what happened, but interpreting what value was delivered, under what conditions, and to whom.
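A toy mediation layer makes the pattern tangible: raw usage telemetry from the three domains is rated into per-tenant charges. The unit names, rates and tenants are all invented; real mediation also handles time-of-use windows, disputes and settlement, which this deliberately omits.

```python
# Invented per-unit rates spanning energy, comms and compute usage
RATES = {"kwh": 0.25, "gb_transferred": 0.02, "gpu_hours": 2.50}

def mediate(telemetry):
    """Translate raw usage records into billable charges per tenant."""
    invoices = {}
    for rec in telemetry:
        charge = rec["quantity"] * RATES[rec["unit"]]
        invoices[rec["tenant"]] = round(invoices.get(rec["tenant"], 0.0) + charge, 2)
    return invoices

telemetry = [
    {"tenant": "acme", "unit": "kwh",            "quantity": 1000},
    {"tenant": "acme", "unit": "gpu_hours",      "quantity": 40},
    {"tenant": "beta", "unit": "gb_transferred", "quantity": 500},
]

print(mediate(telemetry))   # {'acme': 350.0, 'beta': 10.0}
```

The single rate table is the abstraction at work: one BSS mediation pass can price kilowatt-hours, gigabytes and GPU-hours side by side for the same tenant.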
My Final Thought: OSS and BSS aren’t just backend tools. In the age of AI, they’re the control and monetisation planes of our inter-dependent physical intelligence infrastructure. To unlock abundance, it seems there are significant opportunities for them to converge. Not just technically, but operationally, commercially, and strategically.
What about Your Final Thought?