Yesterday’s blog discussed how many of the KPIs gathered and used by OSS / BSS can conflict with one another, even within a single organisation. It then posed a question:
“Have you ever seen an organisation define a simplification metric as one of their highest-profile KPIs?”
One of the biggest hurdles facing OSS projects, as described here in The triple constraint of OSS, is complexity. We feel that it’s as important as other big-ticket metrics such as NPS (Net Promoter Score).
Unfortunately, it can be a challenge to justify these projects because they cost money and the return (cost reduction and/or revenue benefit) upon which to build a business case can appear intangible (ie hard to measure or demonstrate). Fortunately, there are many metrics that you can build a business case around, including the ones referenced in TM Forum’s GB935 below:
• decrease customer risk
• decrease excessive contacts
• decrease information loss
• decrease launch time
• decrease operating cost
• decrease problems
• decrease revenue loss
• decrease process time
• decrease waiting time
• decrease time to revenue
• decrease time to market
• increase customer satisfaction
• increase margin
• increase market share
• increase productivity
• increase revenue
The interesting thing about GB935 is that, for all the metrics it does describe, it doesn’t propose an industry-wide simplification metric. And there are so many factors that influence an OSS/BSS’s level of complexity!
In light of that, I’m going to propose a “catch-all” simplicity metric today. Hopefully it will allow subtraction projects to be easily justified, just as the NPS metric has helped justify customer experience initiatives.
You might be wondering what the benchmark would even be used for. Well:
- To provide a marker that internal simplification efforts/projects can be measured against (and justified by)
- To provide a marker for comparison against external “systems” (ie those of other organisations)… a bit like the “simplicity” comparison between the two MP3 players shown at the bottom of this post https://passionateaboutoss.com/the-3-states-of-oss-consciousness/
So without further ado, the proposed metric is… drumroll please…
- The NSS (Net Simplicity Score), which could be further broken down into:
- The NCSS (Net Customer Simplicity Score) – A ranking from 0 (lowest) to 10 (highest) of how easy it is to choose and use the company / product / service. This is an external metric (ie your customers’ ranking of the level of difficulty they face)
- The NOSS (Net Operator Simplicity Score) – A ranking from 0 (lowest) to 10 (highest) of how easy it is to choose and use the company / product / service. This is an internal metric (ie for operators to rank the complexity of systems and their constituent applications / data / processes)
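Since the NSS is pitched as a simplicity analogue of NPS, one plausible way to turn those 0–10 rankings into a single net score is to borrow the NPS calculation directly. The thresholds below (9–10 counts as “finds it simple”, 0–6 as “finds it complex”) are an assumption carried over from NPS, not something defined in this post:

```python
def net_simplicity_score(ratings):
    """NPS-style net score from 0-10 simplicity ratings (an assumed mapping).

    Ratings of 9-10 count as "finds it simple", 0-6 as "finds it complex",
    and 7-8 as neutral. Returns a value from -100 to +100.
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    simple = sum(1 for r in ratings if r >= 9)     # promoters, in NPS terms
    complex_ = sum(1 for r in ratings if r <= 6)   # detractors, in NPS terms
    return round(100 * (simple - complex_) / len(ratings))

# Example: seven operators rate an internal system (a NOSS survey)
print(net_simplicity_score([9, 10, 7, 6, 3, 8, 9]))  # → 14
```

The same function would serve for either NCSS (customer survey responses) or NOSS (operator survey responses); only the audience being surveyed changes.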
PS. Feedback on the above came from Ronald Hasenberger (see link here). He rightly pointed out that just because something is simple for users to interact with doesn’t mean it’s simple behind the scenes – often it’s exactly the opposite. In fact, making something simple for users usually takes a lot of extra work.
So perhaps there’s a third simplicity factor to add to the two bullets listed above:
- The NSSS (Net System Simplicity Score) – this one requires a more sophisticated algorithm than a simple aggregate of perceptions. It’s also the one that truly reflects the systems we design and build. Perhaps the first two are an initial set of proxies that help drive complexity out of our solutions, but it’s Ronald’s third metric that we need to develop to make the biggest impact?