Our Blue Book OSS/BSS Vendors Directory lists over 400 vendors. That figure alone shows just how fragmented the market is. This level of fragmentation hurts the industry in many ways, including:
- Duplication – Let’s say 100 of the 400 vendors offer alarm / fault management capabilities. That means 100 teams are duplicating effort to create similar functionality. Isn’t that re-inventing the wheel, again and again? Wouldn’t the effort be better spent developing new / improved functionality, rather than repeating what’s already available (more or less)? And it’s not just coding, but testing, training, etc. The talent pool overlaps on duplicated work at the expense of taking us faster into an innovative future
- Profit-share – The collective revenues of all those vendors must be spread across many investors. Consolidated profits would most likely lead to more coordinated innovation (and less of the duplication above). And just think of how much capability has been lost in tools developed by companies that are no longer commercially viable
- Overhead – Closely related is that every one of these organisations carries an overhead that supports the real work (i.e. product development, project implementation, etc). Consolidation would bring greater economies of scale
- Consistency – With 400+ vendors, there are 400+ visions / designs / architectures / approaches. This means the cost and complexity of integration hurts us and our customers. The number of variants makes it impossible for everything to easily bolt together – notwithstanding the wonderful alignment mechanisms that TM Forum, MEF, etc create (via their products Frameworx, OpenAPIs, MEF 3.0 Framework, LSO, etc). At the end of the day, they create recommendations that vendors can interpret as they see fit. It seems the integration points are proliferating rather than consolidating
- Repeatability and Quality – Repeatability tends to provide a platform for continual improvement. If you do something repeatedly, you have more opportunities to refine it. Unfortunately, the bespoke nature of OSS/BSS implementations (and products) means there’s not a lot of repeatability. Linus’s Law (“given enough eyeballs, all bugs are shallow”) also works against us, with eyeballs spread thinly across many code-bases. And the spread of our variant trees means we can never achieve sufficient test coverage, leading to higher end-to-end failure / fall-out rates than should be acceptable
- Portability – Because each product and implementation is so different, it can be difficult to readily transfer skills between organisations. An immensely knowledgeable, talented and valuable OSS expert at one organisation will likely still need to serve an apprenticeship period at a new organisation before becoming nearly as valuable
- Analysis Paralysis – If you’re looking for a new vendor / product, you generally need to consider dozens of alternatives. And it’s not like the decisions are easy. Each vendor provides a different set of functionality, pros and cons. It’s never a simple “apples-for-apples” comparison (although we at PAOSS have refined ways to make comparisons simpler). It’s certainly not like a cola-lover having to choose between Coke and Pepsi. The cost and ramifications of an OSS/BSS procurement decision are, obviously, decidedly more significant too
- Requirement Spread – Because there are so many vendors with so many niches and such a willingness to customise, our customers tend to have few constraints when compiling a list of requirements for their OSS/BSS. As described in the Lego analogy, reducing the number of building blocks, perhaps counter-intuitively, can actually enhance creativity and productivity
- Shared Insight – Our OSS/BSS collect eye-watering amounts of data. However, every data set is unique – in collection / ETL approach, network under management, product offerings, even naming conventions, etc. This makes it challenging to truly benchmark between organisations, share insights, or share seeded data for cognitive tools
However, I’m very cognisant that OSS come in all shapes and sizes. They all have nuanced requirements and need unique consideration. Yet many of our customers stand on a burning platform and desperately need us to create better outcomes for them.
From the points listed above, the industry is calling out for consolidation – especially in the foundational functionality that is so heavily duplicated – inventory / resource management, alarms, performance, workflows, service ordering, provisioning, security, infrastructure scalability, APIs, etc, etc.
If we had a consistent foundation for all to work on, we could then more easily take the industry forward. It becomes a platform for continuous improvement of core functionality, whilst allowing more widespread customisation / innovation at its edges.
But who could provide such a platform and lead its over-arching vision?
- I don’t think it can be a traditional vendor. Despite there being 400+ vendors, I’m not aware of any that cover the entire scope of TM Forum’s TAM map. Nor do any hold enough market share currently to try to commandeer the foundational platform
- TM Forum wouldn’t want to compromise its subscriber base by creating something that overlaps with existing offerings
- Solution Integrators often perform a similar role today, combining a multitude of different OSS/BSS offerings on behalf of their customers. But none have a core of foundational products that they’ve rolled out to enough customers to achieve critical mass
- I like the concept of what ONAP is trying to do in rallying global carriers to a common cause. However, its size and complexity also worry me. That it’s a core component of the Linux Foundation (LF Networking) gives it a better chance of creating a core foundation via collaborative means rather than dictatorial ones
We’d love to hear your thoughts. Is fragmentation a good thing or a bad thing? Do you suggest a better way? Leave us a message below.