Defective quality analysis (part 2)

“The telco industry is well known for having a five nines (ie 99.999%) up-time engineering standard. That’s about 5 minutes of down-time per year. That’s pretty impressive, although granted it still leaves room for improvement.

OSS are used to measure figures like up-time (and many more of course). This gives CSPs the data to track and manage towards zero defects, or no down-time per year. Admirable ambitions for any organisation indeed.

There’s only one slight problem with this perspective. A telco might think that zero defects equals high quality. Unfortunately, what the telco thinks is completely irrelevant. It’s what their customers think the definition of quality is that is important.”

From an earlier post entitled “Defective quality analysis.”

In the previous post (see link above), I raised the need to:

  1. Identify what the CSP’s customers really equate quality with; and
  2. Identify ways to measure and report on those metrics (a simple example follows below).
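
As an aside, the five-nines arithmetic in the quote above is easy to verify, and it’s exactly the kind of easily measurable metric that point 2 refers to. Here’s a minimal Python sketch (the function name is my own, purely for illustration) that converts an availability percentage into an annual down-time budget:

```python
# Minimal sketch: convert an availability percentage into an annual
# down-time budget, per the "five nines" arithmetic quoted above.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_budget_minutes(availability_pct: float) -> float:
    """Annual down-time allowance (in minutes) for a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for nines in ("99.9", "99.99", "99.999"):
    print(f"{nines}% up-time -> {downtime_budget_minutes(float(nines)):.1f} min/year")

# 99.999% up-time -> ~5.3 minutes per year, matching the figure in the quote.
```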

Whilst researching my upcoming OSS market research report today, I came across an interesting study from Ericsson ConsumerLab. Its findings are presented in the diagram below:

[Diagram: Ericsson ConsumerLab findings on the factors shaping customers’ perceptions of their service provider]

This graph clearly shows that network performance is the number one factor in isolation. However, when the percentages are combined, customer service and “the offer” emerge as more important factors than network performance.

Are these findings reflective of your customers’ sentiments? Do you think that your OSS can identify leading indicators that reflect the metrics shown in the graph, or does it require customer survey analysis?
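
To illustrate what such a leading indicator might look like, here is a hypothetical sketch that blends network, customer service and offer-related metrics into a single perception score. The metric names and weights below are invented purely for illustration; a real model would need to be calibrated against survey findings like Ericsson’s:

```python
# Hypothetical sketch only: blend OSS-measurable metrics into a single
# customer-perception index, weighted so that customer service and "the
# offer" together outweigh network performance (per the study above).
# All metric names and weights are illustrative assumptions, not a standard.

WEIGHTS = {
    "network_performance": 0.35,  # eg up-time, throughput, drop-outs
    "customer_service": 0.40,     # eg ticket resolution time, first-call fix rate
    "offer": 0.25,                # eg price/plan competitiveness
}

def perception_score(metrics: dict) -> float:
    """Weighted blend of normalised (0-1) metric scores into one 0-1 index."""
    return sum(weight * metrics[name] for name, weight in WEIGHTS.items())

# Example: a strong network can still be dragged down by weak customer service.
print(perception_score({
    "network_performance": 0.95,
    "customer_service": 0.55,
    "offer": 0.70,
}))  # -> 0.7275
```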


2 Responses

  1. Network Performance is a complete misnomer for a consumer report like this. How does Joe Average know what network performance is? Coverage is as explicit a ‘performance’ metric as you can get. After that, consumer experience is a complex mix of network, handset and application/OTT performance.

    If I’m sat on a train waiting to leave Paddington, and I can’t stream Crash Course History from my YouTube app in HD, is that ‘Network Performance’?

    Assuming we agree the answer is ‘it depends, it’s complicated’, what can the CSP do about it (without rubbing up the net neutrality supporters the wrong way)?

  2. Great points James.

    I see it as the customer’s “perception” of performance. Whether that perception is tied to justifiable stats or not, it remains their perception. When it comes to NPS, it’s the customer’s perception that is fact, because that’s what causes them to promote (or disparage) the service provider…and/or continue to subscribe to the service (or not). (A quick sketch of the NPS calculation follows this reply.)

    Sometimes the perception is tied to measurable stats (eg drop-outs, throughput, packets dropped, etc) but as you say, there are a range of factors in the end-to-end provision of a service and many of them are outside the control of the CSP.
    The main suggestion I was hinting at was that network performance is only one of a raft of parameters that make up a customer’s overall perception of quality. The research is indicating that customer service and “the offer” are more important in the customer’s eyes, yet we don’t hear many NOCs analysing those types of stats. They tend to only monitor up-time or other easily measurable stats.
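
For readers unfamiliar with the NPS mechanics mentioned above, here is a minimal sketch of the standard calculation (promoters score 9-10, detractors score 0-6, and NPS is the percentage-point difference between the two groups):

```python
# Minimal sketch of the standard Net Promoter Score calculation:
# respondents rate 0-10; NPS = %promoters (9-10) - %detractors (0-6).

def nps(scores: list) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3, 9, 10, 5, 8]))  # -> 10.0
```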
