The law of cascading problems

“Engineers like to solve problems. If there are no problems handily available, they will create their own problems.”

Scott Adams

As Scott Adams of Dilbert fame observed, we engineers love to solve problems. In the world of OSS we don’t need to create our own, because there are already so many handily available (although, for some reason, we often make more of our own anyway).

One of the problems that await us on every job is the law of cascading problems. It goes something like this:

Let’s say we have data sets where our data accuracy levels are as follows:

  • Locations are 90%
  • Pits are 95%
  • Ducts are 90%
  • Cables are 90%
  • Joints are 85%
  • Patch panels are 85%
  • Active devices are 95%
  • Cards are 95%
  • Ports are 90%
  • Bearer circuits are 85%

That all sounds pretty promising, doesn’t it (I’ve seen data sets that are much less reliable than that)? Yet if we trace an end-to-end circuit through all of these objects, we end up with a success rate of only around 35% (i.e. multiply all of these percentages together).

The accuracy of each individual data set can lull many an OSS data newbie into a false sense of security. The combined effect of roughly 65% fallout quickly gets their attention though (as well as that of their bosses, if they’re oblivious to the law of cascading data problems too).

How do you fix it? Go back to the source. Find ways of making your source data more accurate. There are usually no quick fixes, just lots of remediation effort.

This earlier post provides some helpful hints on how to improve your data integrity. Do you have any other suggestions to add to the list?
