Operators have developed many unique insights into what impacts the health of their networks.
For example, mobile operators know that maintenance cycles are shorter in coastal areas than in warm, dry inland areas (yes, due to rust). Other operators see a high percentage of power-related faults. Still others are impacted by failures caused by lightning strikes.
Near-real-time weather pattern and lightning strike data is now readily accessible, potentially for use by our OSS.
I was just speaking with one such operator last week who said, “We looked at it [using lightning strike data] but we ended up jumping at shadows most of the time. We actually started… looking for DSLAM alarms which will show us clumps of power failures and strikes, then we investigate those clumps and determine a cause. Sometimes we send out a single truck to collect artifacts, photos of lightning damage to cables, etc.”
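The clump-hunting approach the operator describes could be sketched as a simple time-window clustering over alarm records. Everything below — the record shape, site names, window size, and clump threshold — is illustrative, not the operator's actual system:

```python
from datetime import datetime, timedelta

# Hypothetical DSLAM alarm records: (timestamp, site_id). Names are illustrative.
alarms = [
    (datetime(2024, 1, 5, 14, 2), "DSLAM-A"),
    (datetime(2024, 1, 5, 14, 3), "DSLAM-B"),
    (datetime(2024, 1, 5, 14, 4), "DSLAM-C"),
    (datetime(2024, 1, 5, 14, 5), "DSLAM-D"),
    (datetime(2024, 1, 6, 9, 0), "DSLAM-E"),  # isolated alarm, not a clump
]

def find_clumps(alarms, window=timedelta(minutes=15), min_size=3):
    """Group alarms into time-windowed clusters and keep only clusters
    large enough to suggest a common external cause (e.g. a storm cell)."""
    alarms = sorted(alarms)
    clumps, current = [], []
    for ts, site in alarms:
        # Start a new cluster when the gap since the last alarm exceeds the window
        if current and ts - current[-1][0] > window:
            if len(current) >= min_size:
                clumps.append(current)
            current = []
        current.append((ts, site))
    if len(current) >= min_size:
        clumps.append(current)
    return clumps

clumps = find_clumps(alarms)
# The four co-timed alarms form one clump; the isolated alarm is filtered out,
# which is exactly the "jumping at shadows" noise the operator wanted to avoid.
```

Each surviving clump would then become a candidate for investigation — correlating it against lightning-strike or power-grid data, or dispatching that single truck to collect artifacts.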
That discussion got me wondering about what other lateral approaches are used by operators to assure their networks. For example:
- What external data sources do you use (eg meteorology, lightning strike, power feed data from power suppliers or sensors, sensor networks, etc)?
- Do you use them proactively or reactively (eg to apply engineering techniques that prevent faults, or to diagnose a fault after the event)?
- Have you built algorithms (eg root-cause, predictive maintenance, etc) to utilise your external data sources?
- If so, do those algorithms help establish automated, closed-loop detect-and-respond cycles?
- By measuring and managing, have you created quantifiable improvements in your network health?
I’d love to hear about your clever and unique insight-generation ideas. Or even the ideas you’ve proposed that haven’t been built yet.