Our OSS/BSS collect a lot of information. But how much of it is used and in what ways? How do the users find the information they need to make decisions?
In some cases, our OSS completely overload the user with information. Performance metrics are a classic example. Our network might have hundreds of nodes, with each node collecting dozens or hundreds of performance data points every few minutes. If we just present this as hundreds of adjacent time-series graphs then we’re making the task difficult for our users. Finding the few actionable data points is like trying to find one specific paragraph somewhere in a warehouse full of printed pages. Our users might never find it.
In other cases, we underwhelm our users, not giving them all the data they need. How often do you hear of network operations staff receiving an alarm and then connecting to the device via CLI (Command Line Interface) to perform additional diagnostics? Or of a capacity planner jumping between inventory, utilisation graphs, calculators, maps and more?
For the overwhelming cases, look to deliver roll-ups / drill-downs of information. For example, show a rolled-up heat-map of all metrics across the whole of the network (eg green, orange, red). But then let users progressively drill down (eg by region, by colour / severity, by domain, etc) to find what they’re seeking.
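As a rough illustration of the roll-up approach (the thresholds, regions and node names below are all assumptions for the example), raw per-node metrics can be classified into heat-map colours and then aggregated to the worst severity per region, giving the top-level view that users drill down from:

```python
# A minimal sketch of the roll-up idea: classify each raw metric reading
# into a severity colour, then aggregate to the worst severity per region.
# Node names, regions and thresholds are illustrative assumptions.

SEVERITY_ORDER = {"green": 0, "orange": 1, "red": 2}

def classify(value, warn=70.0, crit=90.0):
    """Map a raw utilisation percentage to a heat-map colour."""
    if value >= crit:
        return "red"
    if value >= warn:
        return "orange"
    return "green"

def roll_up(readings):
    """readings: list of (region, node, metric_value) tuples.
    Returns the worst severity seen in each region."""
    worst = {}
    for region, _node, value in readings:
        colour = classify(value)
        if SEVERITY_ORDER[colour] >= SEVERITY_ORDER.get(worst.get(region, "green"), 0):
            worst[region] = colour
    return worst

readings = [
    ("north", "node-01", 45.0),
    ("north", "node-02", 93.5),   # pushes "north" to red
    ("south", "node-07", 72.1),   # orange
]
print(roll_up(readings))  # {'north': 'red', 'south': 'orange'}
```

The same aggregation can then be re-run with finer grouping keys (region + domain, region + severity, etc) to support each drill-down step.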
For the underwhelming cases, walk the journey with the users – map the user personas, their most important activities, the data points they need to perform those activities (or generate via those activities) and determine whether supplementary data is required.
Personally, I’ve tended to find that cross-linked data has given me great insights. I like being able to query data and mash it up to test ad-hoc hypotheses. We don’t necessarily need all the cross-linked data baked into our OSS/BSS tools, but we do need great data query and visualisation tools.
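To make the mash-up idea concrete, here’s a toy sketch in Python (all device ids, fields and figures are invented for illustration) that joins an inventory extract with a utilisation extract to test an ad-hoc hypothesis, such as whether one device model runs consistently hotter:

```python
# A toy illustration of "mashing up" two cross-linked data sets to test an
# ad-hoc hypothesis. All field names and records are invented for the example.

inventory = {
    "dev-1": {"model": "X100", "install_year": 2012},
    "dev-2": {"model": "X200", "install_year": 2019},
    "dev-3": {"model": "X100", "install_year": 2013},
}
utilisation = {"dev-1": 88.0, "dev-2": 41.0, "dev-3": 79.5}

def avg_util_by_model(inv, util):
    """Join the two data sets on device id, then average utilisation per model."""
    totals = {}
    for dev_id, rec in inv.items():
        if dev_id in util:
            totals.setdefault(rec["model"], []).append(util[dev_id])
    return {model: sum(vals) / len(vals) for model, vals in totals.items()}

print(avg_util_by_model(inventory, utilisation))
# {'X100': 83.75, 'X200': 41.0}
```

In practice the same join happens inside a query or visualisation tool rather than hand-written code, but the principle is identical: a shared key linking otherwise siloed data sets.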
That’s one of the reasons the data visualisation block features prominently in my OSS Sandpit architecture.
When considering which tool to use, I looked beyond the norm, as typical telco data management tools tend to be limited in how flexibly they let you query and visualise data.
I decided to look into the types of tools used heavily in the finance industry. In part because I wanted to test Candlestick / Bollinger Band functionality, but also because finance handles massive data sets (often time-series data) and needs to present them in ways that let actionable insights be derived quickly and easily. The chart types I wanted to explore included:
- Sankey diagrams – to show relative activity flow volumes
- Candlestick / Bollinger – to show trends and exceptions (see BBs on ChartIQ, which is also integrated into Kx Dashboards)
- Radar views – to map complex comparisons such as performance or utilisation of assets over multiple months
- Word clouds – to show most common text / phrases in log files
- Geo – to overlay data such as customer counts, device counts, utilisation, percentage connections, network health, etc onto map views
- Radial convergence – to show the volume of interconnections between devices / ports
- Hierarchical edge bundles – to show network hierarchies for root-cause-trace (RCT). This might include tying together the virtualisation stacks for networks like 5G, where it’s proving to be a challenge to identify root-cause.
- Area grouping – to show predominant network connectivity between areas
- Circular ties – to show graph data relationships
- Not to mention all your typical graphing models, such as:
  - Bubble charts – to show relative volumes
  - Scatter-plots – to show network performance vs line length
  - Bar charts
  - Pie charts
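Of the list above, the Candlestick / Bollinger approach translates most directly to network telemetry. As a rough sketch (the window size, threshold multiplier and sample latency data are all invented for illustration), a Bollinger-style exception check over a metric series might look like this; tools like ChartIQ render the full band overlay natively:

```python
import statistics

# A hedged sketch of the Bollinger Band idea applied to a network metric
# series: flag any reading that falls outside the mean +/- k standard
# deviations of the preceding rolling window. Window, k and the sample
# data are illustrative assumptions.

def bollinger_exceptions(series, window=5, k=2.0):
    """Return (index, value) pairs that pierce the band computed
    from the window of readings immediately before them."""
    flagged = []
    for i in range(window, len(series)):
        chunk = series[i - window : i]
        mu = statistics.mean(chunk)
        sigma = statistics.stdev(chunk)
        if abs(series[i] - mu) > k * sigma:
            flagged.append((i, series[i]))
    return flagged

latency_ms = [20, 21, 19, 20, 22, 21, 20, 45, 21, 20]  # 45 is the anomaly
print(bollinger_exceptions(latency_ms))  # [(7, 45)]
```

Comparing each point against the band of the *preceding* window (rather than a window that includes the point itself) keeps a single large spike from masking its own detection.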