In the law of cascading problems, let’s say we have data sets where our data accuracy levels are as follows:
- Locations are 90%
- Pits are 95%
- Ducts are 90%
- Cables are 90%
- Joints are 85%
- Patch panels are 85%
- Active devices are 95%
- Cards are 95%
- Ports are 90%
- Bearer circuits are 85%
That all sounds pretty promising, doesn't it (we've all seen data sets that are much less reliable than that)? But if we create an end-to-end circuit through all of these objects, we end up with a success rate of only around 35% (i.e. multiply all of these percentages together).
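The multiplication is easy to sketch in a few lines of Python (the accuracy figures are the ones listed above):

```python
import math

# Accuracy of each data set, as listed above
accuracies = {
    "locations": 0.90,
    "pits": 0.95,
    "ducts": 0.90,
    "cables": 0.90,
    "joints": 0.85,
    "patch panels": 0.85,
    "active devices": 0.95,
    "cards": 0.95,
    "ports": 0.90,
    "bearer circuits": 0.85,
}

# An end-to-end circuit is only correct if every object along it is
# correct, so the individual probabilities multiply together.
end_to_end = math.prod(accuracies.values())
print(f"{end_to_end:.1%}")  # prints "34.5%"
```

Even though no single data set is worse than 85% accurate, the chain as a whole is right only about a third of the time.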
The same principle applies to the law of hyper-expanding test cases. If each of the data sets above has a number of distinct variants (e.g. there are 10 different styles of joints, 6 styles of bearer circuits, etc.), you can multiply them together to get your total number of test combinations.
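As a sketch of that multiplication (only the joint and bearer-circuit counts come from the text; the other variant counts are made-up placeholders):

```python
import math

# Variant counts per data set. The 10 joints and 6 bearer circuits are
# the figures quoted above; the rest are hypothetical examples.
variant_counts = {
    "joints": 10,
    "bearer circuits": 6,
    "cables": 5,
    "ports": 4,
}

# Full coverage means one test per combination of variants
total_combinations = math.prod(variant_counts.values())
print(total_combinations)  # prints 1200
```

With ten data sets in the chain rather than four, it is easy to see how the total climbs into the millions.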
As you can guess, this can add up to an enormous number of combinations to test. I recently had an example where "full" test coverage equated to over 2 million combinations, and that was only counting positive tests (i.e. excluding intentionally invalid data designed to flush out unhandled defects).
Writing and conducting 2 million test cases sounds crazy enough, but the effort is even greater because each of those tests needs to be underpinned by test data. Clearly full coverage isn't viable in such a situation.
In these cases:
- I start with a Venn diagram of each set and determine the variants within each set
- I then try to build a test case for each variant while locking down the other sets
- For example
- Set A – all variants of A, other sets only 1 common variant
- Set B – all variants of B (except the one already covered in the Set A pass), with all other sets, including A, locked to a single common variant
- And so on through all sets
- De-prioritise variants that are unimportant and/or rare (thus filtering the list down further)
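The steps above can be sketched as a small generator (the set names and variant lists are hypothetical placeholders, and I've picked each set's first variant as its "common" locked value):

```python
# Hypothetical sets and their variants
sets = {
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2"],
    "C": ["C1", "C2", "C3", "C4"],
}

# Lock each set to one common variant (here, simply its first)
common = {name: variants[0] for name, variants in sets.items()}

test_cases = []
for name, variants in sets.items():
    for variant in variants:
        # The all-common combination only needs covering once
        if variant == common[name] and test_cases:
            continue
        case = dict(common)   # every other set held at its common variant
        case[name] = variant  # vary only the current set
        test_cases.append(case)

# 3 + 1 + 3 = 7 cases instead of the full 3 * 2 * 4 = 24 combinations
print(len(test_cases))  # prints 7
```

Every variant of every set still appears in at least one test case, but the count grows roughly with the *sum* of the variant counts rather than their product, which is what makes the 2-million-combination problem tractable.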