OSS/BSS Testing – The importance of test data

Today’s post is the third part in a series about OSS/BSS testing (part 1, part 2).

Many people think about OSS/BSS testing in terms of application functionality and non-functional requirements. They also think about entry criteria / prerequisites such as the environments, application builds / releases, test case development and perhaps even the integrations required.

However, an often overlooked aspect of OSS/BSS functionality testing is the data required to underpin the tests.

Factors to consider include:

  • Programmatically collectable data – this refers to data that can be collected from data sources. Great examples are near-real-time alarm and performance data that can be collected by connecting to the network, either to real devices / NMS or to simulators
  • Manually created or migrated data – this refers to data that is seeded into the OSS/BSS database. This could be manually created or script-loaded / migrated. Common examples of this are inventory data, especially information about passive infrastructure like buildings, cables, patch-panels, racks, etc. In some cases, even data that can be collected via a programmatic interface still needs to be augmented with manually created or migrated data
  • Base data – for consistency of test results, there usually needs to be a consistent data baseline to underpin the tests.
  • Reversion to baseline – if/when there are test cases that modify the base data set (eg provisioning a new service that reserves resources such as ports on a device), there may need to be a method of roll-back (or reinstatement) to base state. In other cases, a series of tests may require a cascading series of dependent changes (eg reserving a port before activating a service). These examples may need to be rolled back as a sequence
  • Automated / Regression testing – if test automation and/or regression testing is to be performed, automated reversion to the consistent base data set will also be required; otherwise discrepancies will appear between automated test cycles
  • Migration of base data – for consistency of results between phases, there also needs to be consistency of data across the different environments on which testing is performed. This may require migration of base data between environments (see yesterday’s post about transitions between environments)
  • Multi-homing of data – particularly for real-time data, sources such as devices / NMS may issue data to more than one destination (eg different environments).
  • Reconciliation / Equivalency testing – when multi-homing of data sources is possible, or when Current and Future PROD are running in parallel, equivalency testing can be performed by comparing the processed data in each destination system (eg between current and future state mediation devices / probes / MDDs). Transition planning will also be important here as we plan to migrate from Current PROD to Future PROD environments
  • Data Migration Testing – this is testing to confirm the end-to-end validity and completeness of migrated data sets
  • PROD data cuts – once a system is cut over to production status, it’s common to take copies of PROD data and load it onto lower environments. However, care should be taken that the PROD data doesn’t overwrite specially crafted base data (see “base data” dot-point above)
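The “reversion to baseline” idea above can be sketched as a small test fixture that snapshots the base data set and restores it between test cycles. This is only an illustration: the dict-based store and the `reserve_port` / `revert` helpers are hypothetical stand-ins for a real OSS/BSS database and its rollback mechanism.

```python
import copy

class InventoryFixture:
    """Holds a consistent base data set and restores it between test cycles."""

    def __init__(self, base_data):
        # Keep an untouched snapshot of the agreed baseline.
        self._baseline = copy.deepcopy(base_data)
        self.data = copy.deepcopy(base_data)

    def reserve_port(self, device, port):
        # Example of a test step that mutates state (eg service provisioning).
        self.data[device]["free_ports"].remove(port)
        self.data[device]["reserved_ports"].append(port)

    def revert(self):
        # Roll back to the baseline so the next cycle starts from a known state.
        self.data = copy.deepcopy(self._baseline)


base = {"router-1": {"free_ports": ["ge-0/0/1", "ge-0/0/2"], "reserved_ports": []}}
fixture = InventoryFixture(base)

fixture.reserve_port("router-1", "ge-0/0/1")  # a test run modifies the data set
fixture.revert()                              # automated reversion to base
```

In a real environment the “snapshot” is more likely a database backup, container image or environment clone, but the principle is the same: every automated cycle begins from an identical base state.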
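Equivalency testing, as described above, essentially reduces to comparing the processed data sets in two destination systems keyed on a common identifier. A minimal sketch (the `alarm_id` key and the record shapes are assumptions for illustration, not a real mediation-device schema):

```python
def equivalency_report(current_records, future_records, key="alarm_id"):
    """Compare processed data sets from two environments (eg Current vs Future PROD)."""
    current = {r[key]: r for r in current_records}
    future = {r[key]: r for r in future_records}
    return {
        "missing_in_future": sorted(current.keys() - future.keys()),
        "unexpected_in_future": sorted(future.keys() - current.keys()),
        "mismatched": sorted(
            k for k in current.keys() & future.keys() if current[k] != future[k]
        ),
    }


report = equivalency_report(
    [{"alarm_id": "A1", "severity": "major"}, {"alarm_id": "A2", "severity": "minor"}],
    [{"alarm_id": "A1", "severity": "major"}, {"alarm_id": "A3", "severity": "minor"}],
)
```

An empty report on all three counts is the evidence that the future-state mediation / probe stack processes the multi-homed feed equivalently to the current one.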
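Data migration testing likewise lends itself to simple automated checks of completeness (did every record arrive?) and validity (is every mandatory field populated?). A sketch, assuming flat row dicts and a hypothetical list of required fields:

```python
def migration_checks(source_rows, target_rows, required_fields):
    """End-to-end completeness and validity checks on a migrated data set."""
    issues = []
    # Completeness: every source record should appear in the target.
    if len(source_rows) != len(target_rows):
        issues.append(f"row count mismatch: {len(source_rows)} vs {len(target_rows)}")
    # Validity: mandatory fields must be present and non-empty after migration.
    for i, row in enumerate(target_rows):
        missing = [f for f in required_fields if not row.get(f)]
        if missing:
            issues.append(f"row {i} missing required fields: {missing}")
    return issues


issues = migration_checks(
    source_rows=[{"site": "SYD-01", "rack": "R1"}, {"site": "MEL-02", "rack": "R4"}],
    target_rows=[{"site": "SYD-01", "rack": "R1"}, {"site": "MEL-02", "rack": ""}],
    required_fields=["site", "rack"],
)
```

Real migration suites would add per-field transformation rules and referential-integrity checks (eg a cable’s ends must reference ports that exist), but row-count and mandatory-field checks catch a surprising share of migration defects.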

