Yesterday we described the three steps on the path to Zero Touch Assurance (ZTA):
- Monitoring – Observing the events that happen in the network and responding to them manually
- Post-cognition – Monitoring events / trends in the network, comparing them to past situations (using analytics to identify repeating patterns), and using that history to recommend (or automate) a response (sketched in code below)
- Pre-cognition – Identifying events / trends that have never happened in the network before, yet still being able to provide a recommended / automated response
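To make the gap between steps 2 and 3 concrete, here’s a minimal sketch of what post-cognition amounts to: looking up the current event against a store of past events and the responses that resolved them. The event fields, store and matching rule below are hypothetical stand-ins, not the schema of any real OSS; the point is the shape of the logic. Anything that misses the history is exactly the case that step 3, pre-cognition, would have to handle.

```python
# A minimal sketch of the post-cognition step: match an incoming event
# against a history of (event, response) pairs. All names here are
# hypothetical, not drawn from any specific OSS product.

from dataclasses import dataclass, field

@dataclass
class Event:
    alarm_type: str
    affected_node: str
    severity: int

@dataclass
class HistoryStore:
    # Past events paired with the response that resolved them
    records: list[tuple[Event, str]] = field(default_factory=list)

    def record(self, event: Event, response: str) -> None:
        self.records.append((event, response))

    def recommend(self, event: Event) -> str | None:
        """Return the response used for the closest past event, if any."""
        for past, response in self.records:
            if (past.alarm_type == event.alarm_type
                    and past.severity >= event.severity):
                return response
        return None  # never seen before: the case pre-cognition would need to cover

history = HistoryStore()
history.record(Event("LINK_DOWN", "core-router-7", 5), "reroute via backup path")

incoming = Event("LINK_DOWN", "edge-router-3", 4)
print(history.recommend(incoming) or "escalate to a human operator")
```

A production matcher would of course use richer similarity (topology, time windows, correlated alarms) rather than simple alarm-type equality, but the dependence on history is the same.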
At face value, it seems that we need pre-cognition to achieve ZTA, but we also seem to be some distance away from achieving step 3 technologically (please correct me if I’m wrong here!). But today we pose a possible alternative approach, using only the more achievable step 2 technology.
The weakness of Post-cognition is that it’s only as useful as the history of past events it can call upon. But rather than waiting for events to occur naturally, perhaps we could constantly trigger simulated events and reactions to seed the historical database with a far richer set of data to call upon. In other words, pull all the levers, so that no event the network can throw at us is truly unprecedented. The problem with this brute-force approach is that the constant tinkering could trigger a catastrophic network failure. We want to build up a library of all possible situations, but without putting live customer services at risk.
So we could instead run the riskier, cascading or long-run variants on what other industries might call a “digital twin” of the network. Since an OSS by its nature stores all the current operating data about a given network, it could already be considered a digital twin. We’d just need to build the sophisticated, predictive simulations to run on that twin.
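Continuing the sketch above (again with hypothetical names, and a placeholder where a real network simulator would sit), the brute-force seeding loop is simple: enumerate fault scenarios, apply each to the twin, and record whichever remediation recovered service, so the post-cognition store is populated before the event ever occurs in the live network.

```python
# A minimal sketch of seeding the event history via a digital twin.
# The fault catalogue, node names and remediation list are hypothetical;
# a real twin would replay actual topology, traffic and failure propagation.

import itertools

FAULT_TYPES = ["LINK_DOWN", "CARD_FAILURE", "CONGESTION", "FIBRE_CUT"]
NODES = ["core-router-7", "edge-router-3", "agg-switch-12"]
REMEDIATIONS = ["reroute via backup path", "restart card", "shed low-priority traffic"]

def simulate_on_twin(fault: str, node: str, remediation: str) -> bool:
    """Pretend to apply the fault and remediation on the twin and report
    whether service recovered. This is a placeholder rule standing in for
    a genuine predictive simulation."""
    return hash((fault, node, remediation)) % 3 != 0  # stand-in outcome

history = []  # (fault, node, best_remediation) tuples seeding the post-cognition store

# "Pull all the levers": sweep every fault/node combination on the twin,
# keeping the first remediation that recovers service - no live customers at risk.
for fault, node in itertools.product(FAULT_TYPES, NODES):
    for remediation in REMEDIATIONS:
        if simulate_on_twin(fault, node, remediation):
            history.append((fault, node, remediation))
            break

print(f"Seeded {len(history)} event/response pairs from the twin")
```

The interesting engineering is, of course, hidden inside simulate_on_twin; that’s where the predictive simulation work lies.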
More to come tomorrow, when we discuss how data collection impacts our ability to achieve ZTA.