“I should say… that in the real world exponential curves don’t continue forever. We get S-curves which closely mimic exponential curves in the beginning, but then tail off after a while, often as new technologies hit physical limits which prevent further progress. What seems to happen in practice is that some new technology emerges on its own S-curve which allows overall progress to stay on something approximating an exponential curve.
The chart above shows interlocking S-curves for change in society over the last 6,000 years. That’s as macro as it gets, but if you break down each of those S-curves they will in turn be composed of their own interlocking S-curves. The industrial age, for example, was kicked off by the spinning jenny and other simple machines to automate elements of the textile industry, but was then kicked on by canals, steam power, trains, the internal combustion engine, and electricity. Each of these had its own S-curve, starting slowly, accelerating fast and then slowing down again. And to the people at the time the change would have seemed as rapid as change seems to us now. It’s only from our perspective looking back that change seems to have been slower in the past. Once again, that’s only because we make the mistake of thinking in absolute rather than relative terms.”
That’s Nic Brisbourne, writing here.
I love that Nic has taken the time to visualise and articulate what many of us can perceive.
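To make the S-curve point concrete, here’s a minimal numerical sketch in Python (with invented parameters) of why an S-curve is indistinguishable from an exponential in its early stages. The standard logistic function grows exponentially while it’s far below its ceiling, then tails off, exactly as Nic describes:

```python
import numpy as np

# Logistic (S-curve): L / (1 + exp(-k * (t - t0))), with ceiling L,
# growth rate k and midpoint t0. Parameters are arbitrary illustrations.
L, k, t0 = 1.0, 1.0, 10.0

def s_curve(t):
    return L / (1.0 + np.exp(-k * (t - t0)))

def pure_exponential(t):
    # The early-time approximation: for t << t0, the logistic is
    # ~ L * exp(k * (t - t0)), i.e. plain exponential growth.
    return L * np.exp(k * (t - t0))

for t in [1, 3, 5, 9, 12, 15]:
    s, e = s_curve(t), pure_exponential(t)
    print(f"t={t:2d}  s-curve={s:.5f}  exponential={e:.5f}  ratio={s/e:.2f}")

# The ratio sits near 1.0 while t << t0 (the curves overlap), then the
# S-curve tails off toward its physical limit L while the exponential
# keeps climbing - the divergence described in the quote above.
```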
Bringing the exponential / S-curve concept into OSS, we’re at a stage in the development of OSS that seems faster than at any other time during my career. Technology changes in adjacent industries are flowing into OSS, dragging it (perhaps kicking and screaming) into a very different future. Technologies such as continuous integration, cloud-scaling, big-data / graph databases, network virtualisation, robotic process automation (RPA) and many others are making OSS look very different to how they did only five years ago. In fact, we probably need these technologies just to keep pace with the others. For example, the touchpoint explosion caused by network virtualisation and IoT means we need improved database technologies to cope. In turn, that introduces complexity and change that is almost impossible for people to keep track of, driving the need for RPA… etc.
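As an illustration of why graph structures suit that touchpoint explosion, here’s a small sketch using Python and the networkx library (all node names are invented for illustration). Questions like “which services are impacted if this host dies?” become simple traversals rather than multi-table joins:

```python
import networkx as nx

# A toy inventory graph: physical hosts, the virtual network functions
# (VNFs) running on them, and the services stitched across those VNFs.
g = nx.DiGraph()
g.add_edge("host-01", "vFirewall-a", relation="hosts")
g.add_edge("host-01", "vRouter-a", relation="hosts")
g.add_edge("host-02", "vRouter-b", relation="hosts")
g.add_edge("vFirewall-a", "service-VPN-123", relation="supports")
g.add_edge("vRouter-a", "service-VPN-123", relation="supports")
g.add_edge("vRouter-b", "service-VPN-123", relation="supports")

# Impact analysis: which services are at risk if host-01 fails?
impacted = {
    svc
    for vnf in g.successors("host-01")
    for svc in g.successors(vnf)
}
print(impacted)  # {'service-VPN-123'}
```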
But then, there are also things that aren’t changing.
Many of our OSS have been built through millions of developer-days of effort. That forces a monumental decision onto the owners of those OSS – to keep up with advances, you need to rapidly overhaul / re-write / supersede / obsolete all that effort and replace it with something that can track the exponential curve. The monolithic OSS of the past simply won’t be able to keep pace, so highly modular solutions, drawing on external tools such as cloud and development automation, are going to be the only way to stay on the curve.
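As a sketch of what that modularity might look like in practice (the interface and class names here are hypothetical, not any particular vendor’s design), each capability of the monolith can be put behind a narrow interface, so implementations can be replaced one module at a time rather than in another big-bang rewrite:

```python
from abc import ABC, abstractmethod

# One hypothetical decomposition: every capability sits behind a small
# interface, so an individual module can be rewritten or replaced
# without touching the rest of the OSS.
class InventoryAdapter(ABC):
    @abstractmethod
    def get_device(self, device_id: str) -> dict: ...

    @abstractmethod
    def update_port_status(self, device_id: str, port: str, status: str) -> None: ...

class LegacyInventoryAdapter(InventoryAdapter):
    """Wraps the existing monolith's database while it is still in service."""
    def get_device(self, device_id: str) -> dict:
        return {"id": device_id, "source": "legacy-db"}  # stubbed for the sketch

    def update_port_status(self, device_id: str, port: str, status: str) -> None:
        print(f"legacy update: {device_id}/{port} -> {status}")

# A cloud-native replacement would implement the same interface, letting
# the surrounding OSS swap implementations module by module.
```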
All of these technologies rely on programmable interfaces (APIs) to interlock. There is one major component of a telco’s network that doesn’t have an API yet – the physical (passive) network. We don’t have real-time data feeds or programmable control mechanisms to update and manage these typically unreliable data sources. Yet they are the foundation that everything else is built upon, so for me this is the biggest digitalisation challenge / road-block that we face. Collectively, we don’t seem to be tackling it with as much rigour as it probably deserves.
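To be clear, no such API exists today – that’s exactly the gap. But as a thought experiment, here’s roughly what a record in a programmable passive-network inventory might look like, with the staleness and confidence fields that today’s unreliable data sources would demand (all field and function names here are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime

# Purely illustrative: a record for one fibre splice, if field updates
# ever flowed into the inventory programmatically.
@dataclass
class SpliceRecord:
    splice_id: str
    location: str            # e.g. pit / joint enclosure identifier
    fibres_in: list[str]
    fibres_out: list[str]
    last_verified: datetime  # when a person or instrument last confirmed it
    confidence: float        # 0..1, how much we trust this record

def needs_audit(record: SpliceRecord, max_age_days: int = 365) -> bool:
    """Flag records that are stale or low-confidence for a field audit."""
    age_days = (datetime.now() - record.last_verified).days
    return age_days > max_age_days or record.confidence < 0.8
```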
2 Responses
Actually, we could sort of “digitize” the physical network by massive pre-provisioning. It would require substantial funding and monopolies to set it up and exploit it. In the European Union this was on the agenda at some point in the mid-1990s, but was eventually discarded in favor of infrastructure-based competition.
Hi Roland
Interesting! I’d love to hear more about how this works. I assume it would align well with the vCPE concept of modern virtualised networking? i.e. ubiquitous connectivity and “never” replacing the CPE, just reprovisioning it to suit whatever network function you need it to perform?