Whoa! What an awful week.
Last Monday, I promised to write a follow-up post to “Things gone wrong” the next day. Well, true to form, that clearly didn’t happen, due to something going wrong.
If you plan on taking gastro for a test-drive like I did, I’d strongly recommend against it. 🙂
Anyway, back to the follow-up post, which continues to draw parallels with how other industries measure quality and their rates of failure:
Data quality is an interesting one. As vendors / integrators, can we just blame bad customer data (analogous to carmakers blaming bad roads), as we often do, or do we have to build solutions that cope with bad data (as carmakers must do to keep TGW rates down)?
As an industry, we know that source data will be unreliable in most cases, particularly for physical network data, yet we still design data models that demand a level of precision the source data can’t deliver.
If we are cross-linking and chaining datasets, we are increasing our chance of cascading problems.
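To make that compounding concrete, here’s a minimal back-of-the-envelope sketch. The 95% per-dataset accuracy figure is purely an assumption for illustration, not a measured value:

```python
# Sketch: how per-dataset accuracy compounds when records are
# cross-linked across a chain of datasets. Assumes independent
# error rates per dataset (a simplification).

def chained_accuracy(per_dataset_accuracy: float, num_datasets: int) -> float:
    """Probability a record survives a chain of independent lookups intact."""
    return per_dataset_accuracy ** num_datasets

if __name__ == "__main__":
    for n in (1, 2, 3, 5):
        print(f"{n} chained dataset(s) at 95% accuracy each -> "
              f"{chained_accuracy(0.95, n):.1%} end-to-end")
```

Even with each dataset 95% accurate, chaining five of them drops end-to-end reliability to roughly 77%, which is why cascading problems show up so quickly.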
I can’t help thinking that if we are to achieve levels of precision that compare well with other industries, we need to develop whole new approaches to improving data quality and coping with poor data.