“Trust is built with consistency.”
A subscriber to this blog, a talented and insightful engineer from the Philippines, asked the following question: “How do you sync network information and OSS inventory, and how do you sustain data integrity through KPIs?”
This is a brilliant question and highly complex to resolve due to the nuances of live networks. Every network is different, but here’s the approach that I take to ensure the inventory recorded in the OSS is as closely aligned as possible to the real network inventory:
1. The network you’re managing – Get an understanding of the network you’re trying to model (as well as the target inventory solutions you need to model data into). Identify the types of inventory that you’ll be recording in your OSS (eg locations, devices, cards, circuits, etc) as well as the services and topologies that interconnect across them
2. The data sources – Identify the sources of information that will be used to derive the data in your OSS (eg from the devices, spreadsheets, diagrams, etc)
3. Data collection mechanisms – Identify the mechanism for getting the data from the network to the OSS (eg manual creation through the OSS application, bulk load via data migration, auto-discovery via a discovery interface, etc). Wherever viable, try to establish a programmatic interface to transfer data between network and OSS to avoid data entry errors
4. Data master model – Once you’ve categorised all data flows from network to OSS, you’ll need to identify which is the data master. In most cases the network is THE source of truth, so it is usually the data master. However, there are some cases where the OSS needs to be the master and initiate updates to the network (eg cross-domain circuits are designed in the OSS and then pushed down to the devices via a provisioning interface or work orders)
5. Load sequence – You may also have to identify the sequence in which data is loaded. For example, a new device may have to be loaded into your OSS database before its cards can be loaded, due to referential integrity in the database
6. Reconciliation feedback loop/s – Identify the reconciliation and remediation feedback loops to ensure data sets are monitored and managed as regularly as possible. You want to avoid manual audits wherever possible because they tend to be expensive, time-consuming and don’t perpetuate data integrity between audits. Feedback velocity is important, of course: the shorter the loop, the sooner discrepancies are detected and corrected
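To make the reconciliation loop in the final step concrete, here’s a minimal Python sketch that diffs network-discovered records against OSS inventory and emits candidate actions for review rather than applying them automatically. The record keys and attributes (“dev-1”, “model”, etc) are illustrative assumptions, not a real OSS schema:

```python
# Minimal reconciliation sketch. Identifiers and attributes are
# hypothetical, not drawn from any particular OSS product.

def reconcile(discovered, oss):
    """Diff network-discovered records against OSS inventory records.

    Both inputs are dicts keyed by a unique identifier. Returns
    candidate creates, updates and retirements for human review,
    rather than applying them directly to the database.
    """
    to_create = {k: v for k, v in discovered.items() if k not in oss}
    to_retire = {k: v for k, v in oss.items() if k not in discovered}
    to_update = {
        k: discovered[k]
        for k in discovered.keys() & oss.keys()  # present in both
        if discovered[k] != oss[k]               # but attributes drifted
    }
    return to_create, to_update, to_retire


# Example: the network reports two devices; the OSS holds one stale
# attribute (dev-1) and one record no longer seen on the network (dev-9).
network = {"dev-1": {"model": "X100"}, "dev-2": {"model": "X200"}}
oss = {"dev-1": {"model": "X100-old"}, "dev-9": {"model": "Z1"}}

create, update, retire = reconcile(network, oss)
# create holds dev-2, update holds dev-1, retire holds dev-9
```

Running this on a schedule, and feeding the three buckets back into a review/remediation workflow, is one way to keep the feedback loop turning regularly rather than relying on occasional manual audits.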
A few other notes:
- The step that most organisations forget or are unable to achieve is the last one. It is the feedback loop that is essential for maintaining integrity, so you have to seek mechanisms that enforce it and keep it running regularly. Those mechanisms could be process-driven, data-driven or built into integrations
- Make sure your reconciliation process (point 6) also has an exception bin to spit out any data records that fail to parse or validate. You’ll have to process these separately, perhaps even refining your feedback loop to pick them up on the next pass
- When the interface mechanism (point 3) is a manual process, KPIs are a helpful tool for incentivising the feedback loop (point 6).
- Always seek to develop discovery interfaces as your first priority wherever viable because they provide a perpetual solution rather than the other ad-hoc migrate/reconcile methods
- If you do develop discovery interfaces, I don’t recommend auto-updates unless you have complete faith in the reliability of your discovery tools. A faulty auto-updater can quickly corrupt your production database, so it is recommended that you manually accept any inventory items identified by the discovery interface. A relatively benign example: flapping ports on a device (ie a fault causes them to report up, down, up, down, etc) can generate many superfluous records in your OSS database
- Discovery within domains (ie devices, cards, ports and perhaps even intra-domain circuits) is a great starting point. Cross-domain data (ie mapping a circuit or service across more than one domain or data feed) is a much bigger challenge for the modeller. It requires preparation of a model that supports end-to-end services, links in with intra-domain data models/conventions and has the linking keys between data sets to allow assimilation/fusion. Refer also to My Graphene Data Model
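The cross-domain challenge above hinges on those linking keys. Here’s a small Python sketch of the idea: two hypothetical domain feeds (transport and IP) each describe their own segment of a circuit, and a shared linking key (“circuit_id” in this example; real feeds rarely agree so neatly) is used to stitch the segments into one end-to-end view:

```python
# Sketch of cross-domain assimilation/fusion. The feed structures and
# the "circuit_id" linking key are illustrative assumptions only.

transport_feed = [
    {"circuit_id": "CCT-100", "segment": "site-A to site-B", "domain": "transport"},
]
ip_feed = [
    {"circuit_id": "CCT-100", "segment": "PE-1 to PE-2", "domain": "ip"},
]

def stitch(*feeds):
    """Group segments from multiple domain feeds by their linking key."""
    circuits = {}
    for feed in feeds:
        for record in feed:
            circuits.setdefault(record["circuit_id"], []).append(record)
    return circuits

end_to_end = stitch(transport_feed, ip_feed)
# end_to_end["CCT-100"] now holds both domain segments, giving a
# cross-domain view of the one circuit
```

In practice the hard part is not the join itself but agreeing on (or deriving) a key that both domains actually carry, which is exactly the preparation work the modelling step describes.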