Synchronicity and OSS data integrity

"Trust is built with consistency."
Lincoln Chafee.

A subscriber to this blog, a talented and insightful engineer from the Philippines, asked the following question: "how to sync network information and OSS inventory, and how to sustain data integrity through KPIs?"

This is a brilliant question, and a highly complex one to resolve due to the nuances of live networks. Every network is different, but here's the approach I take to ensure the inventory recorded in the OSS aligns as closely as possible with the real network inventory:

  1. The network you’re managing – Get an understanding of the network you’re trying to model (as well as the target inventory solutions you need to model data into). Identify the types of inventory that you’ll be recording in your OSS (eg locations, devices, cards, circuits, etc) as well as the services and topologies that interconnect across them
  2. The data sources – Identify the sources of information that will be used to derive the data in your OSS (eg from the devices, spreadsheets, diagrams, etc)
  3. Data collection mechanisms – Identify the mechanism for getting the data from the network to the OSS (eg manual creation through the OSS application, bulk load via data migration, auto-discovery via a discovery interface, etc). Wherever viable, try to establish a programmatic interface to transfer data between network and OSS to avoid data entry errors
  4. Data master model – Once you've categorised all data flows from network to OSS, you'll need to identify which is the data master. In most cases the network is THE source of truth, so it is usually the data master. However, there are some cases where the OSS needs to be the master and initiate updates to the network (eg cross-domain circuits are designed in the OSS and then pushed down to the devices via a provisioning interface or work orders)
  5. Load sequence – You may also have to identify the sequence in which data is loaded. For example, a new device may have to be loaded into your OSS database before its cards can be loaded, due to referential integrity in the database (see the first sketch after this list)
  6. Reconciliation feedback loop/s – Identify the reconciliation and remediation feedback loop/s to ensure data sets are monitored and managed as regularly as possible (see the second sketch after this list). Avoid manual audits wherever possible because they tend to be expensive, time-consuming and don't perpetuate data integrity. Feedback velocity is important, of course.
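
To make point 5 concrete, here's a minimal Python sketch of a dependency-ordered load. The oss client and its create_device / create_card methods are hypothetical stand-ins for whatever inventory API you're loading into:

```python
def load_in_sequence(oss, devices, cards):
    """Load parent records before children to satisfy referential integrity."""
    device_ids = {}
    for dev in devices:
        # Cards reference their parent device, so devices must be created first.
        device_ids[dev["name"]] = oss.create_device(dev)
    for card in cards:
        parent_id = device_ids.get(card["device_name"])
        if parent_id is None:
            # No parent loaded yet: fail loudly rather than violate the foreign key.
            raise LookupError(f"device {card['device_name']} has not been loaded")
        oss.create_card(card, parent_id=parent_id)
```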
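
Point 6 then hinges on a keyed diff between what discovery reports and what the OSS believes. A minimal sketch, assuming both sources can be keyed on a common identifier (name here is purely illustrative):

```python
def reconcile(network_records, oss_records, key="name"):
    """Compare a network-discovered snapshot against the OSS inventory."""
    net = {r[key]: r for r in network_records}
    oss = {r[key]: r for r in oss_records}

    missing_from_oss = [net[k] for k in net.keys() - oss.keys()]  # in network only
    stale_in_oss = [oss[k] for k in oss.keys() - net.keys()]      # in OSS only
    mismatched = [(oss[k], net[k])                                # in both, attributes differ
                  for k in net.keys() & oss.keys() if oss[k] != net[k]]
    return missing_from_oss, stale_in_oss, mismatched
```

Run on a schedule, the three output sets become the work queue for your remediation process, which is what keeps the loop turning.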

A few other notes:

  • The step that most organisations forget, or are unable to achieve, is the last one. It is the feedback loop that is essential for maintaining integrity, so you have to seek mechanisms that enforce it and keep it running regularly. Those mechanisms could be process-driven, data-driven or delivered through integrations
  • Make sure your reconciliation process (point 6) also has an exception bin to catch any data records that haven't parsed successfully. You'll have to process these separately, perhaps even refining your feedback loop to pick them up on the next pass (see the sketch after this list)
  • When the interface mechanism (point 3) is a manual process, KPIs are a helpful tool for incentivising the feedback loop (point 6).
  • Always seek to develop discovery interfaces as your first priority wherever viable, because they provide a perpetual solution rather than the ad-hoc migrate/reconcile alternatives
  • If you do develop discovery interfaces, I don't recommend auto-updates unless you have complete faith in the reliability of your discovery tools. A faulty auto-updater can quickly corrupt your production database, so I recommend manually accepting any inventory items identified by the discovery interface. A relatively benign example is flapping ports on a device (ie a fault means they're reporting as up, down, up, down, etc) generating many superfluous records in your OSS database
  • Discovery within domains (ie devices, cards, ports and perhaps even intra-domain circuits) is a great starting point. Cross-domain data (ie mapping a circuit or service across more than one domain or data feed) is a much bigger challenge for the modeller. It requires preparation of a model that supports end-to-end services, links in with intra-domain data models/conventions and has the linking keys between data sets to allow assimilation/fusion. Refer also to My Graphene Data Model
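
Pulling a few of those notes together, here's a rough sketch of the exception bin, manual-accept queue and flap suppression described above. The parse callable and record shapes are hypothetical:

```python
from collections import Counter

def triage_discoveries(raw_records, parse, flap_threshold=3):
    """Sort raw discovery output into a manual-accept queue and an exception bin."""
    candidates, exceptions = [], []
    seen = Counter()
    for raw in raw_records:
        try:
            rec = parse(raw)  # your domain-specific parser
        except ValueError as err:
            exceptions.append((raw, str(err)))  # exception bin: retry on a later pass
            continue
        seen[rec["id"]] += 1
        if seen[rec["id"]] > flap_threshold:
            continue  # eg a flapping port reporting up/down repeatedly: suppress, don't record
        candidates.append(rec)  # awaits manual acceptance; never auto-applied to the database
    return candidates, exceptions
```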



3 Responses

  1. You are quite right on feedback velocity, especially given the fast dynamics of the access network. Any significant delay would render the discovered data stale. Thanks.

  2. Data sync/integrity is key to having all subsystems, such as those in OSS solutions, supporting a common goal. The complexity of these solutions means that data in different data stores quickly grows out of sync, introducing operational inefficiencies.
    In my experience with telecom data integrity solutions there are two key functions that must be supported:
    1. Data Discrepancy Detection – There must be a fast solution for data comparison between different data sources that can easily scale to handle the huge volumes of data supporting telecom systems. It is important to focus only on the "differences" between sources and not waste time examining data that is already in sync. A discrepancy solution must be easily configurable to handle comparing different types of data sources, and requires a rules engine or equivalent interface to limit and then prioritise the huge numbers of data discrepancies that typically arise.
    2. Repeatable and automated processes – Data discrepancies are introduced continuously between data sources through normal day-to-day operations. A "one-off" fix is not sufficient; this is a life-long activity. Automation of the basic processes of a solution enables experienced personnel to concentrate on the business of maintaining "Synchronicity", not the business of maintaining the solution.

    However, whether repeatable processes can be extended to automated reconciliation of differences between data sources depends on the experience and maturity of those managing the solution. Telecom data sources are huge and complex, needing careful analysis of all possible causes of data discrepancy that must be managed by automated solutions.

  3. Hi Doug,

    As one of the rare individuals who excels in this art-form within OSS, I greatly appreciate your helpful feedback for fellow readers.

    Your notes provide valuable additional detail around my point 6 in particular.

    Ryan
