Data integrity planning

“My current project is on data integrity, where a telco has a number of bespoke inventory systems, systems acquired through M&As, and COTS products. I am trying to put together an end-to-end storyline in terms of the process for data extraction, data investigation and analysis, data correction, and the impact on upstream and downstream systems. There are a number of scenarios to consider in each phase/task, such as manual versus automatic correction, mitigations for impact, the checks and verifications required before making any change to data, etc.
Requesting you to share your views on an end-to-end process for data integrity activities on network inventory data, and what scenarios are possible for each task.”
Deepty, a regular reader of PAOSS.

This is a relatively common exercise for telcos to go through. Unfortunately, every telco's data integrity planning project is different and as bespoke as the systems that make up its OSS.

Despite every system being different, the actual approach can be broken down into a few common steps (sketched in code after the list):

  1. Map out the origin of each data set at its source, understanding whether the data is system generated via integrated systems, manually created, migrated from other sources, derived from engineering rules, etc.
  2. Show the flow of interactions that the data undergoes as it touches other systems and processes as well as the reconciliations that are performed on the data
  3. Show the destinations for the data as it flows into databases, systems and reports
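
To make the three steps concrete, here is a minimal sketch (in Python, with entirely hypothetical system and data-set names) of capturing a data set's origin, the systems it touches along the way, and its destinations in machine-readable form before any diagramming begins:

```python
# A minimal sketch of steps 1-3: origin, interactions, destinations.
# All system and data-set names are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class DataSetLineage:
    name: str
    origin: str                # eg "manually created", "system generated"
    source_system: str
    interactions: list = field(default_factory=list)  # systems/processes it touches
    destinations: list = field(default_factory=list)  # databases, systems, reports

lineages = [
    DataSetLineage(
        name="fibre_splice_records",
        origin="manually created",
        source_system="CableRecordSystem",
        interactions=["InventoryMaster", "nightly reconciliation"],
        destinations=["CapacityReport", "FaultManagement"],
    ),
]

for ds in lineages:
    print(f"{ds.name}: {ds.source_system} ({ds.origin}) -> "
          f"{' -> '.join(ds.interactions)} -> {', '.join(ds.destinations)}")
```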

This three-step process can range from a simple task for smaller OSS implementations to an absolutely massive one for large, complex, massively integrated OSS suites. I tend to use diagramming tools like Visio to map high-level data flows (from system to system) and spreadsheets to map specific data flows at a database-field level.
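
The spreadsheet view of the same exercise is really just a table of field-level mappings. As an illustration only (the system, table and field names below are invented), each row might record where a field comes from, where it lands, and how it is reconciled:

```python
# One field-level flow mapping, as it might appear in a spreadsheet row.
# System, table and field names are illustrative, not a real product schema.
field_mappings = [
    {
        "source_system": "CableRecordSystem",
        "source_field": "splice_tray.tray_id",
        "target_system": "InventoryMaster",
        "target_field": "equipment.port_ref",
        "transformation": "prefix with site code",
        "reconciled_by": "nightly batch compare",
    },
]

# A quick completeness check: flag any mapping with no reconciliation defined.
unreconciled = [m for m in field_mappings if not m.get("reconciled_by")]
print(f"{len(unreconciled)} field mapping(s) lack a reconciliation step")
```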

Once I have a better feel for the flows, I can identify the points that are susceptible to quality-control issues. But even more importantly, I can better understand what processes to put in place so that a tight feedback loop improves, rather than deteriorates, data integrity.
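
Once the flows are mapped, those susceptible points can be probed with simple automated checks. The sketch below assumes a simplified record structure (not any particular product's schema) and flags two common inventory integrity problems, duplicate identifiers and orphaned references:

```python
# Two illustrative integrity checks over simplified inventory records.
cards = [{"id": "card-1"}, {"id": "card-2"}, {"id": "card-2"}]   # duplicate id
ports = [{"id": "port-1", "card_id": "card-1"},
         {"id": "port-2", "card_id": "card-9"}]                  # orphan reference

card_ids = [c["id"] for c in cards]
duplicates = {i for i in card_ids if card_ids.count(i) > 1}
orphans = [p["id"] for p in ports if p["card_id"] not in card_ids]

print(f"duplicate card ids: {duplicates or 'none'}")
print(f"ports referencing missing cards: {orphans or 'none'}")
```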

And I should reiterate here: it is the feedback loop design that is fundamental to retaining a high level of data quality. Without feedback loops in place, telcos tend to go through expensive data clean-up exercises on a regular basis. Feedback loops often require clever designs, especially when the data sources aren't near-real-time programmatic interfaces (eg for passive equipment recorded in cable record systems).
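
One way to make the feedback loop concrete is a recurring reconciliation that compares a trusted source (eg network discovery) against the inventory and classifies each discrepancy as either safe to auto-correct or needing manual review, echoing the manual-versus-automatic correction scenarios in Deepty's question. A minimal sketch, assuming dict-shaped records and an invented auto-correct policy:

```python
# A minimal feedback-loop sketch: compare a trusted source (eg network
# discovery) against inventory, auto-correct low-risk fields, queue the rest.
# Field names and the auto-correct policy are assumptions for illustration.
AUTO_CORRECTABLE = {"status"}   # low-risk fields only; verify before widening

def reconcile(discovered: dict, inventory: dict):
    auto_fixes, manual_queue = [], []
    for key, live_value in discovered.items():
        if inventory.get(key) == live_value:
            continue                          # in sync, nothing to do
        change = (key, inventory.get(key), live_value)
        (auto_fixes if key in AUTO_CORRECTABLE else manual_queue).append(change)
    return auto_fixes, manual_queue

fixes, queue = reconcile(
    discovered={"status": "in-service", "port_count": 48},
    inventory={"status": "planned", "port_count": 24},
)
print("auto-correct:", fixes)    # status mismatch: low risk, apply
print("manual review:", queue)   # port_count mismatch: investigate first
```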

A blog post entitled “Synchronicity” covered a similar set of questions from another PAOSS subscriber, Elver, almost exactly one year ago. Over the last year, he and his team have diligently worked through a major data integrity project at a tier-one telco. He sent me a message recently to let me know how successful the project had been; he was delighted with the team's results, as were other stakeholders in their organisation. An outstanding achievement for all their hard work!
