“63% of all statistics are made up… including this one”
A CSP I know has a detailed set of KPIs assigned to the various parts of its provisioning factory. The CSP’s CEO gave a key customer’s orders the highest priority through the provisioning factory. But after a year, almost none of those orders had made their way through the factory.
Whilst the customer’s services had been ported away from the previous provider but not yet moved onto the CSP’s network, the CSP was losing money. Lots of money. The CEO couldn’t understand why the highest-priority orders weren’t being completed and the bottom line was being impacted, so he was prepared to provide resources to remedy the situation.
One by one, the different parts of the provisioning factory were approached and asked how they were faring. One by one, they painted a glowing picture of their throughput and presented KPIs to demonstrate their efficiency. All were offered extra resources to help process more orders. None took the offer up. All were meeting their KPIs.
But none of the orders were getting through the factory.
This tells me four things:
- Each part of the provisioning factory had learned ways of portraying their statistics in a favourable light
- There was no end-to-end KPI to track whether orders were actually making their way through the system
- The process had eddies of re-processing that prevented orders from getting into the full flow of the stream
- But most importantly, their whole KPI method needed to be turned upside down to focus on orders exiting the factory
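The gap described above can be sketched in a toy model (all stage names and numbers here are hypothetical, not taken from the CSP in question): if each stage's KPI counts every order it touches, including re-work eddies, every stage can report healthy throughput while the end-to-end KPI of orders exiting the factory sits at zero.

```python
from dataclasses import dataclass

# Hypothetical sketch: each stage counts every touch (including re-processing),
# so per-stage KPIs look green even when no order exits end-to-end.

@dataclass
class Stage:
    name: str
    touched: int = 0  # per-stage KPI: orders "processed", incl. re-work

def run_factory(num_orders: int, stages: list[Stage], rework_loops: int) -> int:
    completed = 0  # end-to-end KPI: orders that actually exit the factory
    for _ in range(num_orders):
        for stage in stages:
            # Each re-processing eddy inflates the stage's own throughput count
            stage.touched += 1 + rework_loops
        # Model the eddies: orders caught in re-work never reach the exit
        if rework_loops == 0:
            completed += 1
    return completed

stages = [Stage("port-in"), Stage("network-config"), Stage("billing")]
done = run_factory(num_orders=100, stages=stages, rework_loops=3)

for s in stages:
    print(s.name, s.touched)    # every stage reports 400 orders "processed"
print("exited factory:", done)  # end-to-end KPI shows 0
```

Measured stage by stage, the factory looks busy and efficient; measured by exits, it has produced nothing, which is exactly why the KPI method needs inverting to focus on orders leaving the factory.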
Would it take an external consultant to perform an unbiased review of the process and KPIs? Or would setting stretch targets for each unit within the provisioning factory force internal operators to find a more streamlined way?
Or do you have an even better alternative?