Further to yesterday’s post asking whether your data is AI-ready for virtualised network assurance use cases, I thought I’d add a few more notes.
The two reasons given were:
- Our data sets haven’t yet accumulated much elastic / dynamic network data
- Our data is riddled with error-prone, human-generated entries
On the latter point in particular, I sense that we’re going to have to completely re-architect the way we collect and store assurance data. We’re almost certainly going to have to think in terms of automated assurance actions, with automated logging of those actions, to avoid the errors that human data creation / logging introduces. The question becomes whether it’s worthwhile trying to wrangle all of our old data into formats that the AI engines can cope with, or whether we just start afresh with new models. (This brings to mind the recent “perfect data” discussion.)
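To make that concrete, here’s a minimal sketch (in Python) of what a machine-generated assurance record might look like, in contrast to free-text human log entries. Every field name and value here is my own illustrative assumption, not a proposed standard:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum
import json
import uuid

class ActionResult(Enum):
    SUCCESS = "success"
    FAILED = "failed"
    ROLLED_BACK = "rolled_back"

@dataclass
class AssuranceEvent:
    """A machine-generated assurance record: every field is typed and
    populated by the automation itself, so no free-text human entry can
    introduce typos or inconsistent formats. (Illustrative schema only.)"""
    resource_id: str                 # e.g. a VNF or service instance ID
    metric: str                      # the KPI that triggered the action
    observed_value: float
    threshold: float
    action_taken: str                # the automated remediation invoked
    result: ActionResult
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialise to a consistent, machine-readable format."""
        d = asdict(self)
        d["result"] = self.result.value
        return json.dumps(d)

# Example: an automated action logs itself, rather than relying on a
# human operator's ticket notes after the fact.
event = AssuranceEvent(
    resource_id="vnf-0042",
    metric="packet_loss_pct",
    observed_value=3.7,
    threshold=1.0,
    action_taken="scale_out",
    result=ActionResult.SUCCESS,
)
print(event.to_json())
```

The point of the typed, enumerated fields is that the AI engines downstream never have to parse ambiguous prose; that discipline is much easier to impose on new data than to retrofit onto old.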
It will be one thing to identify patterns, but quite another to identify the optimal response activities and to automate them.
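As a toy illustration of that gap, the sketch below maps detected patterns to candidate responses via a static playbook. All of the pattern and action names are hypothetical; the hard part, choosing the optimal response in context, is exactly what a static lookup like this can’t do:

```python
from typing import Callable, Dict

# Hypothetical remediation actions; in practice these would call the
# orchestrator's APIs (all names here are illustrative assumptions).
def scale_out(resource_id: str) -> None:
    print(f"scaling out {resource_id}")

def restart_vnf(resource_id: str) -> None:
    print(f"restarting {resource_id}")

def reroute_traffic(resource_id: str) -> None:
    print(f"rerouting traffic away from {resource_id}")

# Step one (pattern detection) only pays off once each pattern maps to
# a response (step two). A lookup table is the naive version of that
# second step; ranking responses by context is where the real work is.
PLAYBOOK: Dict[str, Callable[[str], None]] = {
    "capacity_exhaustion": scale_out,
    "process_hang": restart_vnf,
    "link_degradation": reroute_traffic,
}

def respond(pattern: str, resource_id: str) -> None:
    action = PLAYBOOK.get(pattern)
    if action is None:
        # Unknown patterns still need a human in the loop.
        print(f"no automated response for {pattern}; raising a ticket")
        return
    action(resource_id)

respond("capacity_exhaustion", "vnf-0042")
```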
If we get these steps right, does it become logical that the NOC (network operations centre) and SOC (security operations centre) become conjoined… at least much more so than they tend to be today? In other words, does incident management merge network incidents and security incidents onto common analysis and response platforms? If so, does that imply another complete re-architecture? It certainly changes the operations model.
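If the NOC and SOC did converge, one plausible starting point is a common incident shape that both domains feed into. Again, this is a thought-starter sketch with assumed field names, not a reference design:

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    NETWORK = "network"
    SECURITY = "security"

@dataclass
class Incident:
    """A common incident shape lets one correlation / response platform
    handle both NOC and SOC events (schema is illustrative only)."""
    domain: Domain
    source_system: str   # e.g. fault manager, SIEM
    resource_id: str
    severity: int        # 1 (critical) .. 5 (informational)
    summary: str

def triage(incident: Incident) -> str:
    # A single triage path means the same correlation logic sees both
    # domains, e.g. a DDoS (security) that first presents as
    # congestion (network).
    queue = "joint-ops" if incident.severity <= 2 else incident.domain.value
    return f"route to {queue} queue"

print(triage(Incident(Domain.SECURITY, "siem", "edge-router-7", 1,
                      "traffic anomaly consistent with DDoS")))
```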
I’d love to hear your thoughts and predictions.