How does OSS architecture cope with exponential growth

Yesterday’s blog covered how exponential growth in ICT industries has been (and will continue to be) a challenge for all of us in OSS-land.

We’ve already seen some fundamental changes in OSS in recent years to cope with the massive growth in device counts, bandwidth demands and so on. Hyper-scaled hardware/software platforms have become much more common, as have the load-balanced interfaces that support this scaling.

We also have the beginnings of software-defined networks, which allow innovation at the speed of software within the network. This is a plus for our ability to scale to meet exponential demand, but a challenge for management software, which must keep up with even more explosive growth and with transient services (i.e. the rapid spin-up / tear-down of services).
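To make the "transient services" point concrete, here's a minimal sketch (all names and timestamps invented) of the kind of short-lived inventory record an OSS must now track. Where a traditional service might live in inventory for years, a software-defined one can be spun up and torn down within minutes:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical inventory record for a transient, software-defined service.
@dataclass
class TransientService:
    service_id: str
    spun_up: datetime
    torn_down: Optional[datetime] = None

    def lifetime(self) -> Optional[timedelta]:
        """Return how long the service existed, or None if still active."""
        if self.torn_down is None:
            return None
        return self.torn_down - self.spun_up

# A whole service lifecycle can now fit inside a coffee break.
svc = TransientService("vpn-0042", spun_up=datetime(2018, 1, 1, 9, 0))
svc.torn_down = datetime(2018, 1, 1, 9, 7)
print(svc.lifetime())  # 0:07:00
```

The management challenge isn't the record itself, of course; it's that inventory, assurance and billing were all designed around services that change rarely, not ones that churn like this.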

What we haven’t coped with so well is the complexity that comes from having far more devices, more device types and much more rapid change within those device types, not to mention the number of network-connectivity configurations they support. The same rate of change shows up in the product/service variants that marketers develop on the back of newly available network topologies and services. These are just some of the complexity multipliers that OSS “catch” downstream of decisions made by others.
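The word "multipliers" is literal: these factors compound, they don't add. A toy calculation (all numbers invented purely for illustration) shows how quickly the variant space an OSS must model gets out of hand:

```python
# Toy illustration (every number here is invented): complexity multiplies.
device_types = 50
software_versions_per_type = 6
configs_per_version = 20

device_variants = device_types * software_versions_per_type * configs_per_version
print(device_variants)  # 6000 distinct device variants to model

# Pairwise connectivity between n devices grows quadratically: n * (n - 1) / 2.
n_devices = 10_000
possible_links = n_devices * (n_devices - 1) // 2
print(possible_links)  # 49995000 possible links
```

Modest-looking per-factor numbers produce thousands of device variants and tens of millions of potential connections, which is why linear-effort approaches to OSS data modelling eventually break.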

The complexity explosion is going to continue unabated, which challenges the triple constraint of OSS. There are really only four ways I can see to handle the exponential growth in complexity hitting our OSS:

  1. A reduction in complexity from a variety of factors including moving from big-grid to small-grid OSS models
  2. Abstraction of complexity by element management layers beneath the OSS, including intent-based demands
  3. Simplifying what our OSS are attempting to do. They currently have so much functionality baked in (most of which customers never use) that maintaining and building upon it is becoming unviable. Much of that functionality needs to be stripped out and handled as data experiments, which in turn requires a new approach to data management
  4. Using algorithmic learning approaches built into our OSS to handle a level of complexity that the human brain struggles with. This includes machine learning / artificial intelligence (ML / AI) of course, as well as leveraging service-defined architectures (think microservices) to create the building blocks of software-driven scaling. Algorithmic learning needs lots of source data to learn from, so the touchpoint explosion will contribute the very data these approaches require
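Option 2, intent-based abstraction, deserves a concrete illustration. The idea is that the OSS declares *what* it wants and a layer beneath works out *how*. Here's a minimal sketch, with every name, field and command format invented for the example; a real controller would select devices, paths and vendor syntax itself:

```python
# Minimal sketch of intent-based abstraction (all names hypothetical).
# The OSS states *what* it wants; an element-management layer decides *how*.

def compile_intent(intent):
    """Translate a declarative intent into per-hop provisioning commands.
    A real controller would choose the path and emit vendor-specific syntax."""
    cmds = []
    for hop in intent["path"]:
        cmds.append(f"{hop}: provision {intent['bandwidth_mbps']} Mbps, "
                    f"latency <= {intent['max_latency_ms']} ms")
    return cmds

# The OSS hands over an outcome, not a pile of device configs.
intent = {
    "service": "ent-vpn-07",
    "bandwidth_mbps": 500,
    "max_latency_ms": 20,
    "path": ["edge-a", "core-1", "edge-b"],
}
for cmd in compile_intent(intent):
    print(cmd)
```

The payoff for the OSS is that device-level churn (new vendors, new syntax, new topologies) is absorbed below the intent boundary rather than rippling up into every OSS data model.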

How are you currently looking to strip complexity out of your OSS and/or systematically benefit from innovation at the speed of software?

