I’d love to ask you an important question… how do we benchmark OSS/BSS complexity? That is, how do we measure how complex our systems are and thereby provide a signpost for simplification?
A colleague has opined that the number of apps in a stack could be used as a proxy. I can see where he’s going with that, but I feel it doesn’t account for architectural differences such as monolith versus microservices.
I’d love to hear your thoughts via the comments box below.
FWIW, here are some additional thoughts from me, but please don’t let them bias your opinions:
- For me, complexity relates to the efficiency of getting tasks done:
- How much time to complete certain tasks
- How many button clicks
- How much swivel-chairing
- How many CPU cycles for automated tasks
- How much admin overhead
- How much duplicated effort and/or rework
- However, there are so many different tasks done within an OSS/BSS stack that it’s difficult to provide a complexity metric that compares one OSS/BSS stack with another, or compares a single stack before and after changes are made
- In some cases the complexity happens inside the OSS/BSS “black box” (eg tools within the suite aren’t seamlessly integrated, causing operators to perform dual-entry that leads to data inconsistency and downstream re-work)
- In other cases the complexity is inherited from outside the black box (eg product offerings have hundreds of possible variants that are imperceptibly different in the customer’s eyes). I call this The OSS Pyramid of Pain
- In many cases, the complexity of an OSS/BSS stack is less about the systems and integrations, and more about the complexity of The Decision Tree that spans the stack. The spread of the Decision Tree is impacted by:
- The OSS/BSS applications
- Support applications (eg authentication, security, data management, resilience / availability, etc)
- System interfaces (internal and external)
- User interfaces
- Process designs
- Product definitions
- Work practices
- Data models
- Design rules
- Network topologies
- etc, etc
- The more complex the Decision Tree, the more complex it is to transform our OSS. It loosely aligns with what I call The Chessboard Analogy
- The development strategy used also has an impact, be it monolithic, best-of-breed, hosted or in-house developed. For example, an in-house-developed solution is likely to have less functionality-bloat than a COTS (off-the-shelf) solution, because the COTS solution needs to include additional functionality to support the requirements of multiple customers
- And finally, a benchmark is only as useful as the actions it triggers. How do we codify a complexity metric that accounts for the equally complex array of contributing factors described above?
- Perhaps we could take a somewhat abstracted approach like the NPS (Net Promoter Score) does, thus creating an NSS (Net Simplicity Score)
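To make the NSS idea a little more concrete, here’s a minimal sketch of how such a score could be calculated, borrowing the NPS arithmetic (percentage of promoters minus percentage of detractors). The function name and the assumption that operators rate each task’s simplicity on a 0–10 survey scale are mine, purely for illustration:

```python
def net_simplicity_score(ratings):
    """NPS-style score: % of 9-10 ("simple") ratings minus % of 0-6
    ("complex") ratings, on a hypothetical 0-10 operator survey scale."""
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)    # rated the task simple
    detractors = sum(1 for r in ratings if r <= 6)   # rated the task complex
    return round(100 * (promoters - detractors) / len(ratings))

# Example: six operator responses after completing a workflow
print(net_simplicity_score([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors -> 0
```

Like NPS, this deliberately throws away detail in exchange for a single comparable number, which is exactly the trade-off an abstracted benchmark would be making.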
As mentioned above, I’d love to hear your thoughts on how we can benchmark the level of complexity in our OSS/BSS. Please leave your comments below.
Read the Passionate About OSS Blog for more or Subscribe to the Passionate About OSS Blog by Email