More things gone wrong

Whoa! What an awful week.
Last Monday, I promised to write a follow-up post to “Things gone wrong” the next day. Well, true to form, that clearly didn’t happen, because something went wrong.

If you plan on taking gastro for a test-drive like I did, I’d strongly recommend against it. 🙂

Anyway, back to the follow-up post, which continues to draw parallels with how other industries measure quality and their rates of failure:
Data quality is an interesting one. As vendors / integrators, can we simply blame bad customer data (analogous to carmakers blaming bad roads), as we often do, or must we build solutions that cope with bad data (as carmakers must do to keep their TGW rates down)?

As an industry, we know that source data is going to be unreliable in most cases, particularly for physical network data, yet we still design data models that demand a high degree of precision.

And if we are cross-linking and chaining datasets, we are increasing the chance of cascading problems.

I can’t help thinking that if we are to achieve levels of precision that compare well with other industries, then we need to develop whole new approaches to improving data quality and to coping with poor data.

If this article was helpful, subscribe to the Passionate About OSS Blog to get each new post sent directly to your inbox. 100% free of charge and free of spam.
