These are two questions I’ve often pondered in the context of OSS / telco data, but never had a definitive answer to. I just spotted a new report from CEBR on LinkedIn that helps to quantify the answers.
What Exactly Is Real-time?
When it comes to managing large, complex networks, having access to real-time data is non-negotiable. I’ve always found it interesting to hear what “real-time” actually means to different people. For some, real-time is measured in minutes; for others, it’s measured in milliseconds. As shown below, the CEBR report indicates that nearly half of all businesses now expect their “real-time” telemetry to be measured in seconds or milliseconds, up significantly from only a third last year.
This is an interesting result because most telcos I’m familiar with still collect their telemetry in 5-minute, or worse, 15-minute batch intervals. And that’s just the cadence at the source (e.g. devices and/or EMS / NMS). It doesn’t factor in the speed of the ETL (extract, transform, load) pipelines or the downstream processing / visualisation steps. By the time it completes that entire journey, data can take an hour or more to reach any person or system that can actually act on it.
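To put rough numbers on that journey, here’s a minimal latency-budget sketch. The stage names and durations are my own illustrative assumptions (not figures from the CEBR report or any specific telco), but they show how a 15-minute batch cadence at the source compounds with downstream steps into an hour-plus event-to-action delay.

```python
# Illustrative latency budget: assumed stage durations, not measured values.
from dataclasses import dataclass


@dataclass
class Stage:
    name: str
    worst_case_min: float  # assumed worst-case delay contributed by this stage


# Hypothetical pipeline: data waits for the next source batch, then passes
# through scheduled ETL, aggregation and visualisation / alerting steps.
PIPELINE = [
    Stage("device / EMS / NMS batch interval", 15),
    Stage("ETL (extract, transform, load) run", 20),
    Stage("aggregation / enrichment job", 15),
    Stage("visualisation refresh / alert evaluation", 10),
]


def worst_case_latency(stages: list[Stage]) -> float:
    """Sum each stage's worst-case delay to estimate event-to-action latency."""
    return sum(s.worst_case_min for s in stages)


for s in PIPELINE:
    print(f"{s.name:<45} {s.worst_case_min:>5.0f} min")
print(f"{'worst-case event-to-action latency':<45} {worst_case_latency(PIPELINE):>5.0f} min")
```

Even with these fairly conservative assumptions the total lands at around an hour, which is why meeting “seconds or milliseconds” expectations means rethinking the collection cadence and the whole pipeline, not just one stage.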
For consuming systems like AIOps tools, which attempt to make predictive recommendations, the traditional telco understanding of “real-time” just isn’t up to speed (sorry about the bad pun there). An hour from event to action can barely be considered real-time.
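To illustrate why that matters for predictive use cases, here’s a toy sketch (again, entirely my own assumptions rather than anything from the report or a real AIOps product) of a rolling z-score anomaly check. The point it makes is simple: however fast the detection logic runs, time-to-detect is floored by how stale the telemetry already is when it arrives.

```python
# Toy rolling z-score check on a synthetic series; the detector is effectively
# instant, so the staleness of the incoming telemetry dominates time-to-detect.
import random
from collections import deque
from statistics import mean, stdev

TELEMETRY_STALENESS_MIN = 60  # assumed batch + ETL + processing delay (hypothetical)


def is_anomalous(window, value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations from the window mean."""
    if len(window) < 10:
        return False
    spread = stdev(window)
    return spread > 0 and abs(value - mean(window)) / spread > threshold


random.seed(0)
samples = [5.0 + random.gauss(0, 0.5) for _ in range(30)] + [50.0]  # spike at the end

window = deque(maxlen=60)
for minute, sample in enumerate(samples):
    if is_anomalous(window, sample):
        print(f"Spike flagged at sample {minute}, but roughly "
              f"{TELEMETRY_STALENESS_MIN} minutes after it happened in the network.")
    window.append(sample)
```

Tuning the threshold or the window doesn’t help if the data itself is an hour old by the time it’s evaluated.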
I’m curious to get your thoughts. At your telco (or your clients’, if you’re at a vendor / integrator / supplier), what are the bottlenecks to achieving faster telemetry? [note that there’s additional content below the survey]
What Are the Impacts of Real-time Data (Anomaly Detection / Reduction Indicators)?
The CEBR report also helps to quantify some of the impacts of having data that’s not quite real-time, as indicated in the following four graphs.
These figures indicate that telemetry is arriving on the order of seconds, which is much faster than I had expected.
Interestingly, 100% of telcos in the UK saw a reduction in costs after implementing faster real-time data pipelines (where anomalies result in at least a moderate amount of loss).
How Valuable Is Real-time Data?
I have a foreboding, but possibly mistaken, sense that the telemetry data at many telcos just isn’t fast enough to provide the speed of insights that ops teams need. If so, what is the opportunity cost? How valuable is data that does arrive in real-time? Or, to ask it another way, how costly is data that isn’t quite real-time enough?
The CEBR report helps to answer these questions too, via the following graphs.
A revenue increase of nearly $300M as a result of real-time data represents a significant gain, with most of the projected impact coming from America.
I should caveat this by saying the report doesn’t show the methodology behind these numbers. And having been behind the veil and seen the lack of sophistication in some analyst estimates, I’m always a bit skeptical about the veracity of these types of figures.
Despite these question marks, it still seems likely that faster data results in faster insights.
Regardless, when paired with consuming systems like AIOps, faster insights should translate into tangible benefits (e.g. being able to fix before failure, resolve issues via workarounds like re-routing to avoid SLA impacts, or sway a customer on the verge of churning), as well as intangible benefits (e.g. advance notifications to customers).
Faster insights should also directly relate to improved customer experiences, as reflected by 9 out of 10 respondents reporting moderate or significant improvement after implementing real-time data initiatives.
I’d love to hear your thoughts and experiences. Has speeding up your processing rates resulted in significant tangible / intangible benefits at your organisation or for your customers? Please leave us a comment below.