The modern OSS cycle – to build rather than buy

Have you noticed a changing trend where some of the largest service providers in the world are reverting to building their own OSS / orchestration (ie writing software) rather than buying off-the-shelf? The trends pushing this cycle are software-defined networking and agile development models.

In the earliest days of OSS, service providers made their own tools. They developed for their own internal needs and eventually banded together to write standards like ITU-T's M.3400. Over time the market changed: more off-the-shelf products became available with baked-in functionality, and service providers decided not to develop tools that already existed and just needed configuration.

[Figure: OSS Development Cycle]

But now, it's all about custom-built microservices, APIs, browser-based user interfaces, DevOps, machine-to-machine interfaces to accommodate automations and, the pièce de résistance, orchestration. Instead of collaborating on standards, we're collaborating on open-source. The modern OSS/orchestration approach is all about rapidly (and continually) rolling out smaller, custom updates.
I acknowledge that open-source could be considered "off-the-shelf" in the sense of being a pre-existing code base, and that there are some existing products that slot into certain blocks within architecture stacks (eg application servers, orchestrators, etc).

Sounds good… However, I have two slight reservations about the current approach that may lead to an eventual cycle back to off-the-shelf:

  • The relative scarcity of resources who are skilled and experienced enough to build these new wild-west models (noting tongue-in-cheek that over time the wild-west will be tamed and more of the functional building blocks will become productised and available off-the-shelf). The previous blog was designed to be prerequisite reading for this point. The question becomes whether next-generation languages will allow everyone to code and, if not, whether service providers will want so many pigeon-holed resources (ie developers) on their payroll. Would organisations prefer teams who can make the configuration changes of the previous cycle rather than code changes?
  • The long-term challenge of code maintenance – large developments work well while the project is underway and there is a large pool of resources who know what's happening and roughly how the code works. But as the projects diminish and the products go into maintenance mode, those resources move on. Will the infinite cycle of DevOps actually last forever, with the code continually renewed, or will it be like other cycles, where future redundancies strip an organisation of anyone who knew how today's products worked? At least with off-the-shelf products, vendors are obliged to retain the skills to know and evolve their products until obsolescence.

Is it inevitable that the current cycle of in-house development will revert to off-the-shelf solutions once DevOps loses steam and software eventually stops eating the world? How far off is that?

BTW, I'm not trying to suggest that the current off-the-shelf product model is the right way, harking back to the past, because that model has its own limitations too (eg too bloated with baked-in functionality, too inflexible unless the vendor is developing to the beat of your drum, etc). Perhaps the next generation will have fewer warts (or just a different type)?
