I was speaking with a friend today about an old OSS assurance product that is undergoing a refresh and investment after years of stagnation.
He indicated that it was to come with about 20 out-of-the-box adaptors for data collection. I found that interesting because it was replacing a product that probably had in excess of 100 adaptors. Seemed like a major backward step… until my friend pointed out the types of adaptor in this new product iteration – Splunk, AWS, etc.
Of course!!
Our OSS no longer collect data directly from the network. We have web-scaled processes sucking everything out of the network / EMS, aggregating it and transforming / indexing / storing it (ETL – Extract Transform Load). Then, like any other IT application, our OSS just collect what we need from a data set that has already been consolidated and homogenised.
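As a rough sketch of what that flow can look like – with hypothetical EMS feed formats, made-up field names and an in-memory SQLite table standing in for a web-scale store like Splunk or a data lake – the ETL side might be as simple as:

```python
# Minimal ETL sketch. The vendor feed formats, field names and the SQLite
# "consolidated store" are all hypothetical stand-ins for a web-scale pipeline.
import sqlite3
from datetime import datetime, timezone

# --- Extract: raw events as each hypothetical EMS/device type emits them ---
RAW_FEEDS = {
    "vendor_a_ems": [{"alarmId": "17", "sev": "MAJ", "ne": "MEL-RTR-01"}],
    "vendor_b_ems": [{"id": 904, "severity": 2, "node": "SYD-OLT-07"}],
}

def transform(source, event):
    """Map each vendor-specific record onto one homogenised schema."""
    if source == "vendor_a_ems":
        return (event["ne"], {"MAJ": "major"}.get(event["sev"], "minor"), source)
    if source == "vendor_b_ems":
        sev = {1: "critical", 2: "major"}.get(event["severity"], "minor")
        return (event["node"], sev, source)
    raise ValueError(f"unknown source {source}")

# --- Load: one consolidated, indexed store that the OSS queries later ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE alarms (node TEXT, severity TEXT, source TEXT, loaded_at TEXT)")
for source, events in RAW_FEEDS.items():
    for event in events:
        node, severity, src = transform(source, event)
        db.execute("INSERT INTO alarms VALUES (?, ?, ?, ?)",
                   (node, severity, src, datetime.now(timezone.utc).isoformat()))
db.commit()

# The OSS no longer talks to each EMS; it just pulls from the consolidated set.
for row in db.execute("SELECT node, severity FROM alarms WHERE severity = 'major'"):
    print(row)
```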
I don’t know why I’d never thought about it like this before (ie building an architecture that doesn’t even consider connecting to the multitude of network / device / EMS types). In doing so, we lose the direct connection to the source, but we also reduce our integration tax load (directly to the OSS at least).
Another side benefit of this approach is that the data store then serves data to the various consumers, thus taking load off our OSS servers, which have traditionally fed data to many consumers.
This is perfect for organisations that perform a lot of business intelligence (BI) and analytics. They can burn cycles on the data store rather than slowing down real-time operational transactions.
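To illustrate that split, a hypothetical BI consumer can run its aggregations straight against the consolidated store (again sketched with an in-memory SQLite table and made-up alarm records), so the query load never lands on the operational OSS:

```python
# Sketch of a BI/analytics consumer hitting the consolidated store directly.
# The table and alarm records are hypothetical stand-ins for the real store.
import sqlite3

store = sqlite3.connect(":memory:")
store.execute("CREATE TABLE alarms (node TEXT, severity TEXT, source TEXT)")
store.executemany("INSERT INTO alarms VALUES (?, ?, ?)", [
    ("MEL-RTR-01", "major", "vendor_a_ems"),
    ("SYD-OLT-07", "major", "vendor_b_ems"),
    ("SYD-OLT-07", "minor", "vendor_b_ems"),
])

# A typical analytics cut: alarm counts by node and severity, computed on the
# data store rather than on the real-time operational OSS.
query = """
    SELECT node, severity, COUNT(*) AS alarm_count
    FROM alarms
    GROUP BY node, severity
    ORDER BY alarm_count DESC
"""
for node, severity, count in store.execute(query):
    print(f"{node:12s} {severity:8s} {count}")
```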