The common data store trend

Some time back, we discussed a modern twist on OSS architecture that is underpinned by a common data model.
 
Time to discuss this a little more visually.
 
As the blue boxes on the left side of the diagram below show, you may have many different data sources (some master, some slaved). You may have a single OSS tool (monolithic solution) or you may have many OSS tools (best-of-breed approach).
 
You may have multiple BSS, NMS and even direct connections to network devices. You may even have other sources of data that you’ve never used before, such as weather patterns, lightning strikes, asset management prediction modelling, SCADA data, HVAC data, building access / security events, etc.
 
The common data model allows you to aggregate those data sets to provide insights that have never been readily accessible to you before.
 
So let’s look at a few key points:
  1. Existing network-layer systems (eg NMS, NE and their mediation devices) currently pull (near) real-time data (ie alarms and performance) out of the network and feed it to an OSS directly. They may also push inventory discovery data to the OSS, although that is usually loaded less frequently (typically once daily).
  2. The common data model provides a few options for data flows: 
    1. If the data store is performant enough, the network layer could feed real-time data to the data store, which on-forwards it to the OSS
    2. Multi-home the data from the network to both the data store and the OSS simultaneously
    3. Feed data from the network to the OSS, which may (or may not) process it before pushing to the data store
  3. Just a quick note regarding data flows: the network will tend to be the master for real-time / assurance flows. However, manual input tends to be the master for design / fulfil flows, so the OSS becomes the master of inventory data, as per this link
  4. The question then becomes where the data enrichment happens (ie appending inventory-related data to alarms) to help with root-cause and service-impact calculations. Enrichment / correlation probably needs to happen in the OSS’s real-time engine, but it could source enrichment data directly from the network, from the OSS’s inventory, or from the common data store (a minimal sketch of the enrichment step follows this list)
  5. If modern ETL tools (eg SNMP and syslog collectors, etc) allow you to do your own ETL to a common data store, a vendor OSS would only need one mediation device (ie to take data from the data store), rather than separate ones to pull from all the different NMS/EMS/NEs in your network. This has the potential to reduce mediation license costs from your OSS vendor
  6. Having said that, if you have difficult / proprietary interfaces that make it a challenge to do all of your own ETL, then it might be best to let your OSS vendor build your mediation / ETL engines
  7. The big benefit of the common data store is you can choose a best-of-breed approach but still have a common data model to build Business Intelligence queries and reports around
  8. The common data store also takes load off the production OSS application / data servers. Queries and reports can be run against the common data platform, freeing up CPU cycles on the OSS for faster user interactions
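
To make point 4 above a little more concrete, here’s a minimal sketch (in Python) of what the enrichment step can look like. The field names, the (device, port) lookup key and the inventory records are purely illustrative assumptions rather than any particular vendor’s schema, and in practice the lookup data would be sourced from the network, the OSS’s inventory or the common data store, as described above.

```python
# Minimal alarm-enrichment sketch. All field names below (device_id, port,
# circuit_id, service_id, customer) are hypothetical placeholders, not any
# specific vendor's schema.

def enrich_alarm(alarm: dict, inventory: dict) -> dict:
    """Append inventory-related context to a raw network alarm."""
    key = (alarm.get("device_id"), alarm.get("port"))
    record = inventory.get(key, {})
    return {
        **alarm,
        "circuit_id": record.get("circuit_id"),   # which circuit rides this port
        "service_id": record.get("service_id"),   # which service the circuit carries
        "customer": record.get("customer"),       # who is impacted
    }

# Inventory lookup keyed by (device, port) - built from whichever of the three
# enrichment sources in point 4 you choose.
inventory = {
    ("MEL-PE-01", "GigabitEthernet0/1"): {
        "circuit_id": "CCT-000123",
        "service_id": "SVC-ENT-4567",
        "customer": "Example Corp",
    },
}

raw_alarm = {"device_id": "MEL-PE-01", "port": "GigabitEthernet0/1", "severity": "major"}
print(enrich_alarm(raw_alarm, inventory))
```

The same pattern scales up inside a real-time correlation engine; the main design decision is which of the three enrichment sources in point 4 that lookup is built from.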

The Common Data Model is supported by a few key advancements:

  1. In the past, the mediation layer (ie getting data out of the network and into the OSS) was a challenge. Network operators didn’t tend to want to do this themselves. This introduced a dependency on software suppliers / integrators to build mediation devices and sell them to operators as part of their OSS/BSS solutions. But there’s been a proliferation of highly scalable ETL (Extract, Transform, Load) tools in recent years
  2. Many networks used to have proprietary interfaces that required significant expertise to integrate with. The increasing ubiquity of IP networking and common interfaces (eg SNMP and web interfaces like RESTful, JSON, SOAP, XML) to the network layer makes ETL simpler (see the sketch after this list)
  3. Massively scalable databases that don’t have as much dependency on relational integrity and can ingest data from myriad sources
  4. A proliferation of data visualisation tools that are user-friendly enough that you no longer have to be a coder capable of writing complex SQL queries
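
Following on from point 2 above, here’s a small sketch of what “doing your own ETL” can look like once the network layer presents a common interface. It assumes a hypothetical EMS that exposes alarms over a RESTful JSON API (the URL and field names are invented for illustration), and it uses Python’s built-in sqlite3 purely as a stand-in for whatever scalable data store actually underpins your common data model.

```python
# Minimal ETL sketch: RESTful JSON in, common data model out.
# The URL and the vendor field names are hypothetical; sqlite3 is only a
# stand-in for your real common data store.
import sqlite3
import requests

EMS_URL = "https://ems.example.com/api/alarms"   # hypothetical endpoint

def extract() -> list[dict]:
    """Pull raw alarm records from the (assumed) EMS REST interface."""
    response = requests.get(EMS_URL, timeout=10)
    response.raise_for_status()
    return response.json()

def transform(raw: list[dict]) -> list[tuple]:
    """Map vendor-specific fields onto the common data model's columns."""
    return [
        (r.get("neName"), r.get("alarmText"), r.get("perceivedSeverity"), r.get("raisedAt"))
        for r in raw
    ]

def load(rows: list[tuple]) -> None:
    """Write the normalised rows into the common data store (sqlite3 stand-in)."""
    with sqlite3.connect("common_data_store.db") as db:
        db.execute(
            "CREATE TABLE IF NOT EXISTS alarms "
            "(device TEXT, text TEXT, severity TEXT, raised_at TEXT)"
        )
        db.executemany("INSERT INTO alarms VALUES (?, ?, ?, ?)", rows)

if __name__ == "__main__":
    load(transform(extract()))
```

Swap the extract step for an SNMP or syslog collector and the load step for your store of choice and the shape stays the same, which is exactly what makes the single-mediation-device approach in point 5 of the earlier list feasible.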
 

If this article was helpful, subscribe to the Passionate About OSS Blog to get each new post sent directly to your inbox. 100% free of charge and free of spam.
