This week of posts has followed the theme of the cost of quality. Data quality that is.
But how do you calculate the cost of bad data quality?
Yesterday’s post mentioned starting with PNI (Physical Network Inventory). PNI covers the cables, splices / joints, patch panels, ducts, pits, etc. Unlike active network equipment, this data rarely has a programmable interface to reconcile against electronically. That makes it prone to errors of many types – mistakes in manual entry, reconfigurations that are never documented, assets that are lost or stolen, assets that are damaged or degraded, etc.
Some costs resulting from poor PNI data quality (DQ) can be considered primary costs. These include SLA breaches caused by an inability to identify a fault within an SLA window due to incorrect / incomplete / indecipherable design data. These costs are the most obvious and the easiest to calculate because they result in SLA penalties. If a network operator misses a few of these with tier-1 clients, that becomes the disaster referred to in yesterday’s post.
But the true cost of quality is in the ripple-out effects. The secondary costs. These include the many factors that result in unnecessary truck rolls. With truck rolls come extra costs including contractor costs, delayed revenues, design rework costs, etc.
Other secondary effects include:
- Downstream data maintenance in systems that rely on PNI data
- Code in downstream systems that caters for poor data quality, which in turn increases the costs of complexity, such as:
  - Additional testing
  - Additional fixing
  - Additional curation
- Delays in the ability to get new products to market
- Reduced ability to accurately price products (due to variation in real costs caused by the extra complexity)
- Reduced impact of automations (due to the increased number of variants to handle)
- Potential to undermine Machine Learning / Artificial Intelligence engines, which rely on reliable and consistent data at scale
There are probably more sophisticated ways to calculate the cost of quality across all these factors and more, but in most cases I just use a simple multiplier:
- The number of instances of each DQ event (eg the number of additional truck rolls); multiplied by
- A rule-of-thumb cost impact for each event (eg the cost of each additional truck roll)
Sometimes the rules-of-thumb are challenging to estimate, so I tend to err on the side of conservatism. I figure that even if the rules-of-thumb aren’t perfectly accurate, at least they produce a real cost estimate rather than just anecdotal evidence.
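The simple multiplier above can be sketched in a few lines of code. The event types and dollar figures below are purely hypothetical placeholders, not real benchmarks – substitute your own counts and conservative rules of thumb.

```python
# A minimal sketch of the "instances x rule-of-thumb cost" multiplier.
# All event categories and unit costs here are hypothetical examples.

def dq_cost_estimate(events: dict[str, tuple[int, float]]) -> float:
    """Sum (instance count x rule-of-thumb unit cost) across DQ event types."""
    return sum(count * unit_cost for count, unit_cost in events.values())

# Hypothetical annual figures: (number of instances, conservative cost per event)
events = {
    "additional_truck_rolls": (120, 400.0),    # eg contractor call-out cost
    "sla_breaches":           (3, 25_000.0),   # eg penalty per tier-1 breach
    "design_rework_jobs":     (45, 900.0),     # eg hours of re-design work
}

print(f"Estimated annual cost of poor PNI DQ: ${dq_cost_estimate(events):,.0f}")
```

Keeping each unit cost deliberately conservative means the total is defensible: stakeholders can argue the real figure is higher, but rarely that it is lower.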
More important still are the tertiary and less tangible costs of brand damage (also known as Customer Experience / CX or reputation damage). We’ll talk a little more about that tomorrow.