As we all know, data quality can be a big problem for OSS at almost every operator around the world.
Much of this problem stems from the fact that most telcos have a lot of passive infrastructure / objects in their networks. Passive devices don’t tend to have programmatic interfaces that allow their current status or details to be polled, which makes it easy for passive device data accuracy to decay. And with physical infrastructure being the lowest-order data in the hierarchy, many other layers of data depend upon it. Inaccuracies there can ripple out and infect other data points.
The point about decay is an important one. The accuracy of data tends to decay over time unless it’s maintained. In the case of physical infrastructure, there are initial data points when it’s first designed, installed, tested and marked up in as-built documentation. After that, infrastructure like cables, pits, racks, etc can remain in-situ without any updates for decades. There’s no single “ping” that tests whether the physical object still reconciles with its data points in the OSS.
If an object gets moved or modified in the field, but nobody updates the OSS data points, then a discrepancy arises, often unnoticed.
For example, a fibre cable is shown in an OSS as having a single circuit connected (on fibres 1 and 2). However, a field technician has since added another circuit (on fibres 3 and 4), but that new circuit wasn’t updated in OSS records. This discrepancy will remain unnoticed until an error occurs. Say the design for a new customer circuit specifies splicing fibres 3 and 4. When the technician arrives on site with the design pack and sees that 3 and 4 are already in use, they just splice fibres 5 and 6 instead… The service gets activated, but there’s no process in place to update OSS records. The problem compounds.
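To make the compounding effect concrete, here’s a minimal sketch of that fibre scenario in Python. The data structures and circuit names are hypothetical, purely for illustration: the OSS believes only fibres 1 and 2 are in use, while the field reality now includes fibres 3 and 4.

```python
# Hypothetical fibre allocation records: what the OSS believes vs what's
# actually spliced in the field after the undocumented change.
oss_records = {1: "CIRCUIT-A", 2: "CIRCUIT-A"}
field_state = {1: "CIRCUIT-A", 2: "CIRCUIT-A",
               3: "CIRCUIT-B", 4: "CIRCUIT-B"}

def find_discrepancies(oss: dict, field: dict) -> dict:
    """Return fibres whose OSS record disagrees with the field observation."""
    all_fibres = oss.keys() | field.keys()
    return {
        f: (oss.get(f, "spare"), field.get(f, "spare"))
        for f in all_fibres
        if oss.get(f, "spare") != field.get(f, "spare")
    }

print(find_discrepancies(oss_records, field_state))
# -> {3: ('spare', 'CIRCUIT-B'), 4: ('spare', 'CIRCUIT-B')}
```

The catch, of course, is that nothing routinely supplies the field_state side of this comparison, so the check never runs and the mismatch quietly grows with every subsequent job.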
If the discrepancies (and impacts such as repeat truck rolls) get too large, then the operator might decide to “ping” the data again (ie conduct a site audit) or try to infer a data fix using algorithmic approaches [Refer to big loop – little loop data fix article].
However, there’s another possible way to “ping” and reconcile the data more regularly, simply through day-to-day activities using modern techniques.
Each time a field worker goes to site, they must take a photo of the infrastructure they’re working on (and nearby infrastructure too). The photos are submitted with each job as artefacts of completion. The OSS then scans the photos for asset identifiers (eg QR codes affixed to the asset, or via other AI image recognition techniques) to confirm whether the equipment is still at the site and to capture other related data (eg which ports are connected in patch panels, which coloured fibres are spliced, etc). Those data points are then used to reconcile with OSS data, effectively resetting the data decay cycle.
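As a rough sketch of how that scan-and-reconcile step might work, the snippet below decodes QR codes from a job photo using OpenCV’s built-in detector, then diffs the results against the assets the OSS expects at that site. The get_expected_assets lookup and the site/asset identifiers are hypothetical placeholders, not a real OSS API; assume each QR code simply encodes the asset’s OSS identifier.

```python
import cv2

def assets_seen_in_photo(photo_path: str) -> set[str]:
    """Decode every QR code visible in a job-completion photo."""
    img = cv2.imread(photo_path)
    if img is None:
        return set()
    detector = cv2.QRCodeDetector()
    found, decoded, _points, _ = detector.detectAndDecodeMulti(img)
    # Drop empty strings (codes detected but not decodable).
    return {asset_id for asset_id in decoded if asset_id} if found else set()

def reconcile(site_id: str, photo_path: str, get_expected_assets) -> dict:
    """Compare assets sighted in the photo against OSS records for the site.

    get_expected_assets(site_id) -> set of asset IDs the OSS believes are
    on site (a hypothetical OSS lookup supplied by the caller).
    """
    seen = assets_seen_in_photo(photo_path)
    expected = get_expected_assets(site_id)
    return {
        "confirmed": seen & expected,    # sighted and on record
        "missing": expected - seen,      # on record but not sighted
        "unexpected": seen - expected,   # sighted but not on record
    }
```

Confirmed sightings reset the decay clock for those assets, while the “missing” and “unexpected” buckets can be queued for investigation or automated correction, all as a by-product of artefacts the field worker was submitting anyway.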
Once wearables become more commonplace, it’s likely that these physical asset pings will be done using augmented reality image analysis rather than photos, whilst workers simply go about their daily activities.