Network operators often talk about five-nines (i.e. 99.999% up-time on any given network device) as a measure of the reliability of their networks. However, I personally don’t believe that many of the operators’ customers would consider it a measure of network quality.
Five-nines is a metric widely cited inside the walls of operators, partly because it sounds impressive but also because it is relatively easy to measure. Devices issue performance / up-time metrics to an external system (e.g. an OSS) and each device can be tracked in isolation.
However, a customer’s perception of quality has little to do with the up-time of any given device in the network. A customer’s quality measure is much more sophisticated than that. Yes, it includes the availability of their service, but it also includes ease of ordering, ease of billing, ease of fault resolution and so on. Quality becomes a measure of people and process, not just technology.
The customer’s evaluation of quality is much harder for an OSS to derive because it’s not just a collection of isolated performance data streams; it’s the combination of multiple different data sets.
I believe that the way to derive a customer’s quality metric revolves around the user’s journey. For example, when designing a product, we design a process from the customer’s first contact through to their service being available for use (O2A – order to activation), from trouble to resolution (T2R), and so on. This provides the benchmark for what the ideal process should look like. However, actual user journeys often deviate from the ideal workflow, getting caught in loops that degrade the user’s experience and take longer than the ideal processing times.
The variance from the user journey benchmark represents the customer’s real perception of quality… or perhaps I should put that another way: the variance from the customer’s expectation of the benchmark represents their real perception of quality, not five-nines.
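To make the idea concrete, here is a minimal sketch of how such a variance metric might be computed. All of the step names, timings and the benchmark itself are hypothetical; a real OSS would source these from its workflow engine and would likely weight steps differently.

```python
# Illustrative sketch (step names and timings are hypothetical):
# compare an actual customer journey against a benchmark O2A workflow
# and measure the time variance that degrades perceived quality.

# Benchmark: ideal processing time (in hours) for each step of an
# order-to-activation (O2A) journey.
BENCHMARK_O2A = {
    "order_capture": 1.0,
    "credit_check": 2.0,
    "service_design": 4.0,
    "activation": 1.0,
}

def journey_variance(actual_steps):
    """Return total hours spent beyond the benchmark, counting repeated
    visits to a step (workflow loops) against the ideal single pass."""
    spent = {}
    for step, hours in actual_steps:  # actual journey as (step, hours) events
        spent[step] = spent.get(step, 0.0) + hours
    return sum(
        max(0.0, spent.get(step, 0.0) - ideal)
        for step, ideal in BENCHMARK_O2A.items()
    )

# An actual journey that looped back through credit check and design.
actual = [
    ("order_capture", 1.0),
    ("credit_check", 2.0),
    ("service_design", 4.0),
    ("credit_check", 3.0),    # loop: re-checked after a design change
    ("service_design", 2.0),  # loop: service redesigned
    ("activation", 1.5),
]

print(journey_variance(actual))  # → 5.5 hours over the ideal journey
```

The point of the sketch is only that the metric is derived by combining data sets (the designed process plus the actual workflow history), not by polling devices in isolation.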
But can your OSS measure and manage such a metric?
If so, do you have initiatives in place to continually improve your user journey benchmark?