“Regression testing means re-testing an application after its code has been modified to verify that it still functions correctly. Regression testing consists of re-running existing test cases and checking that code changes did not break any previously working functions, inadvertently introduce errors or cause earlier fixed issues to reappear. These test cases should be run as often as possible with an automated regression testing tool, so that code modifications that damage how the application works can be quickly identified and fixed. Regression testing starts as soon as there is anything to test at all. The regression test suite grows as the application moves ahead and test engineers add test cases that test new or rewritten code.”
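To make the quoted definition concrete, here is a minimal sketch of an automated regression suite in Python. The function under test (`normalise_hostname`) and the recorded cases are invented for illustration; the point is the pattern: capture known-good input/output pairs, then re-run them all after every change.

```python
# A minimal regression suite: re-run recorded test cases after every change.
# `normalise_hostname` stands in for any function under test (hypothetical).

def normalise_hostname(name: str) -> str:
    """Function under test: trim, lower-case and strip the domain suffix."""
    return name.strip().lower().split(".")[0]

# Each recorded case pairs an input with the output observed when the
# behaviour was last known to be good.
REGRESSION_CASES = [
    ("CORE-RTR-01.example.net", "core-rtr-01"),
    ("  Edge-Sw-7.example.net ", "edge-sw-7"),
    ("dist-rtr-02", "dist-rtr-02"),
]

def run_regression(cases):
    """Return (input, expected, actual) tuples for every failing case."""
    failures = []
    for given, expected in cases:
        actual = normalise_hostname(given)
        if actual != expected:
            failures.append((given, expected, actual))
    return failures

if __name__ == "__main__":
    failures = run_regression(REGRESSION_CASES)
    print(f"{len(REGRESSION_CASES) - len(failures)} passed, "
          f"{len(failures)} failed")
```

In practice you'd hand this job to a test runner such as pytest and a scheduler/CI pipeline, but the loop above is the whole idea: the suite only grows, and any code change that breaks a previously passing case is caught immediately.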
When operating a large network, and/or one that contains many different device types, there is a regular schedule of patches and changes to coordinate, and each patch necessitates some level of re-checking. In my travels, I’ve only seen this re-checking done in a limited way. I have yet to see network operators use the structured regression testing techniques that programmers and testers apply when checking code, as per the approach described in the quote above.
In yesterday’s blog, we discussed the need for ongoing attention to your automations. As described, automations need many pieces of an end-to-end puzzle to be perfectly aligned, so I see automated testing and regression testing techniques as the next step towards methodical, repeatable testing. This form of testing could be applied on a scheduled basis to check for exceptions at each step in an automation, ensuring nothing gets lost along the way. It could also be used following any changes to the network or associated systems.
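As a sketch of what per-step checking might look like, the snippet below walks a sequence of named checkpoints along an automated workflow and stops at the first exception. The step names and check functions are entirely hypothetical stand-ins; real checks would query the actual order, inventory and network systems involved in your automation.

```python
# A sketch of step-by-step checks along an automated workflow. Step names
# and check functions are hypothetical stubs; real implementations would
# query the systems behind each step of the automation.

def check_order_captured(order_id):
    return True   # stub: query the order-management system

def check_design_assigned(order_id):
    return True   # stub: query the inventory / design system

def check_device_configured(order_id):
    return True   # stub: query the network element or EMS

WORKFLOW_CHECKS = [
    ("order captured", check_order_captured),
    ("design assigned", check_design_assigned),
    ("device configured", check_device_configured),
]

def run_workflow_regression(order_id):
    """Run each check in order; return the first failing step, or None."""
    for step_name, check in WORKFLOW_CHECKS:
        if not check(order_id):
            return step_name  # downstream steps can't be trusted past here
    return None
```

Run on a schedule (and after every network or system change), a report of "first failing step per transaction" tells you exactly where things are getting lost along the way.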
The interesting part about this approach is the need for clever analytics to trace a transaction through the various elements in the end-to-end workflow, because it’s not always a case of one transaction in equals one transaction out. Quite the contrary. There are many places in a given workflow where a transaction can either get lost or spawn multiple child transactions, so you’d need to build an intelligent system that takes this into account.
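One way to handle the one-in-doesn’t-equal-one-out problem is to tag every record with the id of its originating (root) transaction, then count occurrences of each root at each stage. The sketch below assumes exactly that correlation id is available; the stage names and ids are invented for illustration.

```python
# A sketch of transaction tracing across workflow stages, assuming each
# observed record carries the id of its originating (root) transaction so
# spawned children can be tied back to their parent. All ids are invented.
from collections import defaultdict

# (stage, root_id) observations harvested from each system's logs
OBSERVED = [
    ("order",      "T1"), ("order",      "T2"),
    ("activation", "T1"), ("activation", "T1"),  # T1 spawned two activations
    ("billing",    "T1"), ("billing",    "T1"),
    # T2 never reached activation or billing: it was lost along the way
]

STAGES = ["order", "activation", "billing"]

def trace(observed):
    """Count occurrences of each root transaction at each stage."""
    counts = defaultdict(lambda: defaultdict(int))
    for stage, root_id in observed:
        counts[root_id][stage] += 1
    return counts

def lost_transactions(observed):
    """Root ids seen at the first stage but missing from a later stage."""
    counts = trace(observed)
    return sorted(
        root for root, by_stage in counts.items()
        if by_stage[STAGES[0]] and not all(by_stage[s] for s in STAGES[1:])
    )
```

Note that spawning is legitimate here (T1 fanning out to two activations isn’t an error), which is exactly why a naive count-in-versus-count-out comparison fails and the reconciliation has to be keyed per root transaction.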
Who’s up for the challenge? Or, more importantly, who’s already solved this challenge? If so, how?