Rebuilding the OSS boat during the voyage

“When you evaluate the market, you’re looking to understand where people are today. You empathize. Then you move into storytelling. You tell the story of how you can make their lives better. That’s the “after” state.
All marketing ever does is articulate that shift. You have to have the best product, and you’ve got to be the best at explaining what you do… at articulating your value, so people say, ‘Holy crap, I get it and I want it.’”

Ryan Deiss

How does “the market” currently feel about our OSS?

See the comments in this post for a perspective from Roger: “It takes too long to get changes made. It costs $xxx,xxx for ‘you guys’ to get out of bed to make a change.” Roger makes these comments from the perspective of an OSS customer, and I think they probably reflect the thoughts of many OSS customers globally.

The question for us as OSS developers / integrators is how to make lives better for all the Rogers out there. The key is in the word change. Let’s look at this from the context of Theseus’s paradox, which is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object.

Vendor products have definitely become more modular since I first started working on OSS. However, they probably haven’t become atomic enough to be readily replaced and enhanced whilst in-flight like the Ship of Theseus. Taking on the perspective of a vendor, I probably wouldn’t want my products to be easily replaceable either (I’d want the moat that Warren Buffett talks about)… [although I’d like to think that I’d want to create irreplaceable products because of what they deliver to customers, including total lifecycle benefits, not because of technical / contractual lock-in].

Customers are taking Theseus matters into their own hands through agile / CI/CD methodologies and the use of microservices. These approaches have merit, but they’re not ideal because the customer needs more custom development resourcing on their payroll than they’d prefer. I’m sure this pendulum will swing back towards more off-the-shelf solutions in coming years.

That’s why I believe the next great OSS will come from open source roots, where modules will evolve based on customer needs and anyone can (continually) make evolutionary, even revolutionary, improvements whilst on their OSS voyage (ie the full life-cycle).

OSS chat bots

Some pilots are proving quite interesting in enhancing network operations, especially when AI / ML is used in the form of natural language processing – NOC operators talking to an ML system in natural language, asking questions in the context of an alarm about deep topics (BGP, 4G eUTRAN, etc). Do more with L1 engineers.
– Use of familiar chat front-ends like FB Messenger or Hangouts to converse with an OSSChatBot, which continuously learns (using ML) the intents of the questions being asked and responds by bringing information back from the OSS using APIs (performance of a cell site, fuel level on a tower site, weather information across a cluster of sites, etc). Ease of use; opens new channels for using the OSS.
– Training the AI system on TAC tickets, vendor manuals and historical tickets – the ability to search and index unstructured data and recommend solutions for problems when they occur, or are about to occur, informed via the OSS. Efficiency increase; lower NOC costs.
– Network planning using ML APIs. Ticket prioritisation using ML analytics (which tickets to act upon first – certainly not only by severity).

Steelysan, in response to an earlier post entitled, “A thousand AI flowers to bloom for OSS.”

Steelysan has raised some brilliant use cases for AI, chatbots specifically, in the assurance domain (hat-tip Steelysan!). OSSChatBots are the logical progression from the knowledge bases of the distant past and the rules / filters / state-machines of current-generation tools, yet I hadn’t considered using chat bots in this manner.

I wonder whether there are similar use cases in the fulfillment domain? Perhaps we can look at it from two perspectives, the common case and the exception case.

If looking at the common case, there is still potential for improved efficiency in fulfillment processes, particularly in the management of orders through to activation. We already have work order management and workforce management tools, but both could be enhanced due to the number of human interactions (handoffs, approvals, task assignment, confirmed completions, etc). An OSSChatBot could handle many of the standard tasks currently handled by order managers.
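As a minimal sketch of how such an OSSChatBot might route a recognised intent to an OSS API – noting that the endpoint, intent names and payloads below are purely hypothetical assumptions, not any real OSS interface:

```python
# Minimal sketch: route NLP-classified intents to (hypothetical) OSS API calls.
import requests

OSS_API = "https://oss.example.com/api"  # hypothetical base URL

INTENT_PATHS = {
    # intent name -> OSS API path template (both illustrative assumptions)
    "order_status":    "/orders/{order_id}/status",
    "task_assignment": "/orders/{order_id}/tasks/next",
}

def handle_message(intent: str, entities: dict) -> str:
    """Map an NLP-classified intent to an OSS API call and summarise the result."""
    path = INTENT_PATHS.get(intent)
    if path is None:
        return "Sorry, I can't help with that yet - escalating to an order manager."
    resp = requests.get(OSS_API + path.format(**entities), timeout=10)
    resp.raise_for_status()
    return f"{intent}: {resp.json()}"

# Example: after the NLP layer classifies "where is order 1234 up to?":
# print(handle_message("order_status", {"order_id": "1234"}))
```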

If looking at the exceptional case, where there is a jeopardy or fall-out state reached, the AI behind an OSSChatBot could guide an inexperienced operator through remedial actions.

These types of tools have the potential to give more power to L1 NOC operators and free up higher levels to do higher-value activities.

Using OSS to handle more sophisticated compliance issues

You’re probably aware that our OSS and/or underlying EMS/NMS have been assisting organisations with their governance, regulatory and compliance (GRC) requirements for some time. Our OSS have provided tools that automatically enforce and validate compliance against configuration policies.
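As a rough illustration of that kind of policy enforcement, here’s a minimal sketch of a data-driven compliance audit; the policy rules and config fields are illustrative assumptions only:

```python
# Minimal sketch: validate device configs against compliance policy rules,
# the kind of check an OSS/EMS might run across the network nightly.

POLICY = {
    "snmp_v3_only":    lambda cfg: cfg.get("snmp_version") == 3,
    "telnet_disabled": lambda cfg: not cfg.get("telnet_enabled", False),
    "ntp_redundant":   lambda cfg: len(cfg.get("ntp_servers", [])) >= 2,
}

def audit(device_name: str, config: dict) -> list[str]:
    """Return the list of policy rules this device's config violates."""
    return [rule for rule, check in POLICY.items() if not check(config)]

violations = audit("core-router-01", {
    "snmp_version": 2,
    "telnet_enabled": True,
    "ntp_servers": ["10.0.0.1", "10.0.0.2"],
})
print(violations)  # ['snmp_v3_only', 'telnet_disabled']
```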

Naturally, network virtualisation and network as a service (NaaS) offerings are going to place an increased dependence on those policies and tools due to the potentially transient services they invoke.

But the more interesting developments are likely to arise out of the regulatory environment space. As more businesses become digital and more personal / confidential data gets gathered, governments are going to create more regulations to comply with. This article by Michael Vizard highlights some of the major regulatory changes underway in the USA alone that will impact how we maintain our communications networks.

I’d hate to alarm you…

… but I will because I’m alarmed. 🙂

The advent of network virtualisation, cloud-scaling and API / microservice-centric OSS means that the security attack surface changes significantly compared with old-style solutions. We now have to consider a more diverse application stack, often where parts of the stack are outside our control because they’re As A Service (XaaS) offerings from other suppliers. Even a DevOps implementation approach can introduce vulnerabilities.

With these new approaches, service providers are tending to take on more of the development / integration effort internally. This means that service providers can no longer rely so heavily on their vendors / integrators to ensure that their OSS solutions are hardened. Security definitely takes a much bigger step up in the list of requirements / priorities / challenges on a modern OSS implementation.
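To make that concrete, here’s a minimal sketch of one hardening measure – verifying a bearer token before any request reaches OSS logic. It uses the PyJWT library; the secret handling, claim names and scopes are assumptions for illustration:

```python
# Minimal sketch: reject unauthenticated/unauthorised calls at the API edge.
import jwt  # pip install PyJWT

SECRET = "replace-with-a-vaulted-secret"  # assumption: fetched from a secrets store

def authorise(auth_header: str, required_scope: str) -> dict:
    """Verify a Bearer token's signature and check it grants the required scope."""
    if not auth_header.startswith("Bearer "):
        raise PermissionError("missing bearer token")
    claims = jwt.decode(auth_header[7:], SECRET, algorithms=["HS256"])
    if required_scope not in claims.get("scopes", []):  # 'scopes' claim is an assumption
        raise PermissionError(f"token lacks scope {required_scope!r}")
    return claims

# Usage: claims = authorise(request_headers["Authorization"], "inventory:read")
```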

This article from Aricent provides a few insights on the security implications of a modern API architecture.

* Please note that I am not endorsing Aricent products here as I have never used them, but they do provide some thought-provoking ideas for those tasked with securing their OSS.

It’s not the tech, it’s the timing

For those of us in the technology consulting game, we think we’re pretty clever at predicting the next big technology trend. But with the proliferation of information over the Internet, it’s not that difficult to see potentially beneficial technologies coming over the horizon. We can all see network virtualisation, Internet-connected sensor networks, artificial intelligence, etc coming.

The challenge is actually picking the right timing for introducing them. Too early and nobody is ready to buy – death by cashflow. Too late and others have already established a customer base – death by competition.

The technology consultants that stand apart from the rest are the ones who not only know that a technology is going to change the world, but also know WHEN it will change the world and HOW to bring it into reality.

We’ve talked recently about utilising exponential tech like AI in OSS, and the picture below tells some of the story in relation to WHEN.
[Figure: Exponential curve opportunity cost]
Incremental improvements to the status quo initially keep pace with the effort of bringing a new, exponential technology into being. But over time the gap progressively widens, as does the opportunity cost of staying with incremental change. Some parts of the OSS industry have been guilty of making only incremental improvements to the platforms they’ve invested so much time into.

What the graph suggests is to embark on investigations / trials / proofs-of-concept in the exponential new tech now, so that in-house tribal knowledge is developed in readiness for when the time is right to introduce it to customers.

Falling off a cliff vs going to the sky

Have you noticed how the curves we’re dealing with in the service provider industry are either falling off a cliff (eg voice revenues) or going to the sky (eg theoretical exponential growth like IoE)?

Here in the OSS industry, we’re stuck in the middle of these two trend curves too. Falling revenues mean a reduced appetite for big IT projects. However, the excitement surrounding exponential technologies like SDN/NFV, IoE and AI is providing just enough inspiration for change, including new projects to plug the revenue shortfalls.

The question for sponsors becomes whether they see OSS as being:

  • Inextricably linked to the cliff-curve – servicing the legacy products with falling revenue curves and therefore falling project investment; or
  • Able to be the tools that allow their exciting new exponential tools (and hopefully exponential revenues) to be operationalized and therefore attract new project investment

The question for us in the OSS industry is whether we’re able to evangelise and adapt our offerings to ensure sponsors see us as point 2 rather than point 1.


WOW

No, not “Wow!” the exclamation, but the acronym W-O-W.

Wow stands for Walk Out Working. In other words, when a customer comes into a retail store, they walk out with a working service rather than exasperation. Whilst many customers wouldn’t be aware of it, there are lots of things that have to happen in an OSS / BSS for a customer to be wowed (sketched as a pipeline after this list):

  • Order entry
  • Identity checks / approvals
  • Credit checks
  • Inventory availability
  • Rapid provisioning
  • Rapid back-of-house processes to accommodate service activation (eg billing, service order management, SLAs, etc)
  • etc
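Here’s the pipeline sketch mentioned above: each WOW step is a check the order must pass before the customer can walk out working. The step implementations are stubs (assumptions), standing in for real OSS / BSS integrations:

```python
# Minimal sketch: the WOW checklist as an automated fulfilment pipeline.

def always_pass(order: dict) -> bool:
    return True  # stub standing in for a real OSS/BSS integration

STEPS = [
    ("order entry", always_pass),
    ("identity check", always_pass),
    ("credit check", always_pass),
    ("inventory availability", always_pass),
    ("rapid provisioning", always_pass),
    ("billing / SLA activation", always_pass),
]

def walk_out_working(order: dict) -> bool:
    """Run every WOW step in sequence; stop at the first failure."""
    for name, step in STEPS:
        if not step(order):
            print(f"WOW blocked at: {name}")
            return False
    return True  # the customer walks out working

print(walk_out_working({"customer": "Jane", "service": "mobile-postpaid"}))
```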

How long does all of this reasonably take for different product offerings? For mobile services, WOW is often feasible today. For fixed-line services, delivery is often measured in weeks, so the service provider would literally need to provide the customer with accommodation to accommodate WOW.

With virtualised networking, perhaps even fixed-line services can be wowed, provided physical connectivity already exists and a virtually-enabled CPE is installed at the customer premises – a vCPE that can be remotely configured and doesn’t require a truck roll or physical commissioning activities out in the field.

Wow, the acronym and the exclamation, become more feasible for more service delivery scenarios in a virtualised networking world. It then lays out the challenge to the OSS / BSS to keep up. Could your OSS / BSS, product offerings and related processes meet a WOW target of minutes rather than hours or days?

Software is eating the world…. and eating your job?

A funny thing happened today. I was looking for a reference to Marc Andreessen’s original, “software is eating the world,” quote and came across an article on TechCrunch that expressed many of the same thoughts I was going to write about. However, it doesn’t specifically cover the service provider and OSS industries so I’ll push on, with a few borrowed quotes along the way (in italics, like the following).

“Today, the idea that ‘every company needs to become a software company’ is considered almost a cliché. No matter your industry, you’re expected to be reimagining your business to make sure you’re not the next local taxi company or hotel chain caught completely off guard by your equivalent of Uber or Airbnb. But while the inclination to not be ‘disrupted’ by startups or competitors is useful, it’s also not exactly practical. It is decidedly non-trivial for a company in a non-tech traditional industry to start thinking and acting like a software company.”
[or the traditional tech industry like service providers???]

This is completely true of the dilemma facing service providers the world over. A software-centric network, whether SDN, NFV or others, is nearly inevitable. While the important metrics don’t necessarily stack up yet for SDN, software will continue to swarm all over the service provider market. Meanwhile, the challenge is that the existing workforces at these companies, often numbering in the hundreds of thousands of people, don’t have the skills, or the interest in developing the skills, essential for the software-defined service provider of the (near) future.

Even worse for those people, many of the existing roles will be superseded by the automations we’re building in software. Companies like AT&T have been investing in software as a future mode of operation for nearly a decade and are starting to reap the rewards now. Many of their counterparts have barely started the journey.

This old post provided the following diagram:
[Figure: Venn diagram of networks (blue) and IT / software development (green), with their yellow intersection]
The blue circle is pushing further into the realm of the green to provide a larger yellow intersection, whereby network engineers will no longer be able to just configure devices, but will need to augment their software development skills. For most service providers, there just aren’t enough IT resources around to make the shift (although with appropriate re-skilling programs and 1+ million IT / engineering graduates coming out of universities in India and China every year, that is perhaps a moot point).

Summarising, I have two points to note:

  1. Bet on the yellow intersect point – service providers will require the converged skill-sets of IT and networks (including security) in larger volumes… but consider whether the global availability of these resources has the potential to keep salaries low over the longer term* (maybe the red intersection point is the one for you to target?)
  2. OSS is software and networking (and business) – however, my next post will consider the cyclical nature of a service provider building their own software vs. buying off-the-shelf products to configure to their needs

Will software eat your job? Will software eat my job? To consider this question, I ask whether AI [Artificial Intelligence] will develop to the point where it does a better job at consultancy than I can (or any consultant, for that matter). The answer is a resounding and inevitable yes… for some aspects of consultancy, it already can. Can a bot consider far more possible variants for a given consulting problem than a person can, and give a more optimal answer? Yes. The follow-up question is which skills a consulter-bot will find more difficult to usurp. Creativity? Relationships? Left-field innovation?

* This is a major generalisation, I know – there are sectors of the IT market where there will be major shortages (like a possible AI skills crunch in the next 2-5 years, or even SDN in that timeframe), sometimes because the newness of the technology has prevented a talent pool from developing yet, sometimes just from supply / demand misalignments.

Hot on the heels of ECOMP comes Indigo

Since releasing ECOMP (Enhanced Control, Orchestration, Management and Policy platform), AT&T has been busy on a data sharing environment called Indigo, which it announced at the AT&T Developer Summit.

Like ECOMP, AT&T is looking to launch Indigo as an Open Source project through the Linux Foundation, hoping for community collaboration.

As you all know, machine learning and artificial intelligence (ML/AI) get better with more data. This project is intended to bring a community effort to the development of a data network that enhances accessibility to data and overcomes obstacles such as security, privacy, commercial sensitivities as well as other technical challenges.

AT&T announced that it will provide further details in coming weeks, which I’ll look to keep you abreast of. This is an important development for our industry for a range of reasons including the insights and efficiencies that ML/AI can deliver from the data it observes (as well as innovative revenue stream possibilities), but I’m most interested in the closed feedback loops that are needed in the OSS / ECOMP space.

For further information, check out this report on SDX Central.

The end of cloud computing

… but we’ve only just started, and we haven’t even got close to figuring out how to manage it yet (from an aggregated view I mean, not just within a single vendor platform)!!

This article from Peter Levine of Andreessen Horowitz predicts “The end of cloud computing.”

Now I’m not so sure that this headline is going to play out in the near future, but Peter Levine does make a really interesting point in his article (and its embedded 25 min video). There are a number of nascent technologies, such as autonomous vehicles, that will need their edge devices to process immense amounts of data locally without having to backhaul it to centralised cloud servers for processing.

Autonomous vehicles will need to consume data in real-time from a multitude of in-car sensors, but only a small percentage of that data will need to be transmitted back over networks to a centralised cloud base. But that backhauled data will be important for the purpose of aggregated learning, analytics, etc, the findings of which will be shipped back to the edge devices.

Edge or fog compute is just one more platform type for our OSS to stay abreast of into the future.

Marc Andreessen’s platform play for OSS

Marc Andreessen describes platforms as “a system that can be programmed and therefore customized by outside developers — users — and in that way, adapted to countless needs and niches that the platform’s original developers could not have possibly contemplated, much less had time to accommodate.”

Platform thinking is an important approach for service providers if they want to recapture market share from the OTT play. As the likes of Facebook have shown, a relatively limited-value software platform becomes enormously more valuable if you can convince others to contribute via content and innovation (as evidenced in FB’s market cap to assets ratio compared with traditional service providers).

As an OSS industry, we have barely scratched the surface of platform thinking. Sure, ours are large platforms used by many users, we sometimes offer the ability to deliver customer portals, and more recently we’re starting to offer up APIs and microservices.
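As a minimal sketch of what “offering up an API” can look like – a single OSS capability (inventory lookup) exposed as a microservice that outside developers could build upon. The route and data model are illustrative assumptions, not any vendor’s actual API:

```python
# Minimal sketch: one OSS capability exposed for outside developers to program against.
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory stand-in for a real inventory datastore
INVENTORY = {"SYD-POP-01": {"type": "router", "status": "in-service"}}

@app.get("/api/v1/inventory/<device_id>")
def get_device(device_id: str):
    """Return the inventory record for a device, or 404 if unknown."""
    device = INVENTORY.get(device_id)
    if device is None:
        return jsonify(error="not found"), 404
    return jsonify(device)

# app.run()  # then: GET /api/v1/inventory/SYD-POP-01
```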

As we’ve spoken about before, many of the OSS on the market today are the accumulation of many years (decades?) of baked-in functionality (ie product thinking). Unfortunately this baked-in approach assumes that the circumstances that functionality was designed to cater for are identical (or nearly identical) for all customers and won’t change over time. The dynamic and customised nature of OSS clearly tells us that this assumption is not right.

Product thinking doesn’t facilitate the combinatory innovation opportunities represented by nascent technologies such as cloud delivery, network virtualization, network security, Big Data, Machine Learning and Predictive Analytics, resource models, orchestration and automation, wireless sensor networks, IoT/ M2M, Self-organizing Networks (SON) and software development models like DevOps. See more in my research report, The Changing Landscape of OSS.

Platforms are powerful, not just because of the cloud, but also the crowd. With OSS, we’re increasingly utilising cloud delivery and scaling models, but we probably haven’t found a way of leveraging the crowd to gain the extreme network effects that the likes of FB have tapped into. That’s largely because our OSS are constrained by “on-premises” product thinking for our customers. We allow customers to connect internally (some may argue that point! 😉 ), but aside from some community forums or annual customer events, we don’t tend to provide the tools for our global users to connect and share value.

In not having this aggregated view, we also limit our potential on inherent platform advantages such as analytics, machine-learning / AI, combinatorial innovation, decentralised development and collaborative value sharing.

Do you agree that it all starts with re-setting the “on-prem” thinking or are we just not ready for this yet (politically, technically, etc)?

[Noting that there are exceptions that already exist of course, both vendor and customer-side. Also noting that distributed datasets don’t preclude centralised / shared analytics, ML/AI, etc, but segregation of data (meta data even) from those centralised tools does!]


A hat-tip to the authors of a Platform Thinking Point of View from Infosys, whose document has helped frame some of the ideas covered in this blog.

Standard Operating Procedures… or Variable Operating Procedures

Yesterday’s blog discussed the important but (perhaps) mythical concept of Standard Operating Procedures (SOPs) for service providers and their OSS / BSS. The number of variants, which I can only see amplifying into the future, makes it almost futile to try to implement SOPs.

I say “almost futile” because SOPs are theoretically possible if the number of variants can be significantly compressed.

But given that I haven’t seen any real evidence of this happening at any of the big xSPs I’ve worked with, I’ll discuss an alternate approach, which I call Variable Operating Procedures (VOP). It breaks down into the following key attributes (with a small sketch in code after the list):

  1. Process designs that are based on states, which often correlate with key activities / milestones such as notifications, approvals, provisioning activities, etc for each journey type (eg O2A [Order to Activate] for each product type, T2R [Trouble to Resolve] for each problem type, etc). There is less reliance on the sequencing or conditionals for each journey type that represent the problem for SOPs and related customer experience (but I’ll come back to that later in step 4B)
  2. Tracking of every user journey through each of these states (ie they have end-to-end identifiers that track their progress through their various states to ensure nothing slips through the cracks between handoffs)
  3. Visualising user journeys through Sankey diagrams (see diagram here) that show transitions between each of the states and show where inefficiencies exist
  4. A closed feedback loop (see diagram and description of the complete loop here) that:
    1. Monitors progress of a task through various states, some of which may be mandatory
    2. Uses machine learning to identify the most efficient path for processing an end-to-end journey. This means there is no pre-ordained sequence of activities to traverse the various states; instead, the system notes the sequence that results in the most efficient end-to-end journey (one that also meets success criteria such as readiness for service, customer approval / satisfaction, etc)
    3. Uses those efficient-path learnings to provide decision-support recommendations for each subsequent traversal of the same or a similar journey type. It should be noted that operators shouldn’t be forced into a decision, as the natural variations in operator resolutions for a journey type will potentially identify even more efficient paths (this is the Darwinian aspect of the feedback loop, and it could be reinforced through gamification techniques amongst operators)
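Here’s the small sketch promised above, covering attributes 1-3: journeys carry an end-to-end identifier, every state transition is recorded, and transition counts aggregate directly into the source / target / value links a Sankey diagram needs. The state names are illustrative assumptions for an O2A journey:

```python
# Minimal sketch: end-to-end journey tracking with Sankey-ready transition counts.
from collections import Counter
from datetime import datetime, timezone

class Journey:
    def __init__(self, journey_id: str, journey_type: str):
        self.journey_id = journey_id          # end-to-end identifier (attribute 2)
        self.journey_type = journey_type      # eg "O2A:mobile-postpaid"
        self.history = [("created", datetime.now(timezone.utc))]

    def transition(self, new_state: str):
        """Record a state transition with a timestamp (attribute 1)."""
        self.history.append((new_state, datetime.now(timezone.utc)))

def sankey_links(journeys: list[Journey]) -> Counter:
    """Count state-to-state transitions across journeys (attribute 3)."""
    links = Counter()
    for j in journeys:
        states = [state for state, _ in j.history]
        links.update(zip(states, states[1:]))
    return links

j = Journey("ORD-0001", "O2A:mobile-postpaid")
for state in ["validated", "provisioned", "activated"]:
    j.transition(state)
print(sankey_links([j]))
```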

It’s clear that the SOP approach isn’t working for large service providers currently, but OSS rely on documented SOPs to control workflows for any given journey type. The proposed VOP approach is better suited to the digital supply chains and exponential network models (eg SDN/NFV, IoT) of the future, but will require a significant overhaul of OSS workflow processing designs.

Is VOP the solution, or have you seen a better response to the SOP problem?

Standard operating procedures… or are they?

Standard Operating Procedures (SOPs) have been pivotal in codifying and standardising the use cases of service providers for many years. The theory goes that if you can standardise a process, then you can repeatably produce high quality and streamline the process in a cycle of continual improvement.

The repeatability objective is one* of two primary recollections from a book I read many years ago called The E-Myth Revisited by Michael Gerber. It’s one of the best-selling business books of all time and had a significant impact on my thinking about inherent repeatability within a business. It’s an objective that is still highly relevant to the OSS industry, represented in part by the “automate everything” mantra that pervades today.

There’s only one slight problem with the concept of an SOP – I tend to find that there is no standard operating procedure. Our inability to control complexity, in OSS and other parts of the service provider business model, has created a multitude of variants for any given product line. The greater the number of variants in a process flow, the harder it is to maintain a standard. You’ve all seen the processes I’m talking about, haven’t you – the ones with complex conditional paths, off-page references to side flows, many hand-overs and hand-backs between swim-lanes, incomplete mappings, etc.

With the changing digital supply chain predicted by John Reilly in his work on The Value Fabric, the growing number of swim-lanes, and the more transient nature of process state change introduced by network virtualisation and the touchpoint explosion, it’s obvious that the associated process complexity is only going to amplify.

This becomes extremely challenging for the traditional OSS / BSS, where standardised workflows are the building blocks upon which automations are constructed.

More on the alternatives tomorrow.

* BTW, the other recollection from The E-Myth is the mindset difference between the predominant Technicians (creating a job for oneself) vs Entrepreneurs (creating a self-sustaining business).

NFV has the potential to amplify the OSS pyramid of pain

In two recent posts, we’ve discussed the changing world order of OSS with NFV as a catalyst, and highlighted the challenges posed by maintaining legacy product offerings (the pyramid of OSS pain).

These two paradigms (and others, such as the touchpoint explosion) are dragging traditional service providers closer to a significant crossroads.

Network virtualisation will lead to a new array of products to be developed, providing increased potential for creativity by product and marketing departments. Unless carefully navigated, this could prove to have a major widening effect on The OSS pyramid of pain. NFV will make it easier for administrators to drop new (virtual) devices into the network at will. More devices mean more interoperability ramifications.

If network virtualisation can’t be operationalised by a service provider, then it’s likely that the finger of blame will be pointed at the OSS‘s inability to cope with change, even though the real culprit is likely to exist further upstream, at product / feature level.

I’ve heard the argument that traditional service providers are bound in chains by their legacy product offerings when compared with the OTT players like Google, Facebook, etc. To an extent this is true, but I’d also argue that the OTT players have been more ruthless in getting rid of their non-performers (think Google Buzz and Reader and many more, Facebook’s SuperPoke and Paper, etc), not to mention building architectures to scale up or tear down as required.

The main point of difference between the service provider products remaining in service and the discontinued OTT products is revenue. For service providers, their products are directly revenue-generating whereas the OTT players tend to harvest their revenues from other means (eg advertising on their flagship products). But this is only part of the story.

For traditional service providers, product revenue invariably takes precedence over product lifecycle profitability. The latter would take into account the complexity costs that flow down through the other layers of the pyramid of pain, but this is too hard to measure. I would hazard a guess that many surviving service provider products are actually profitability cannibalisers (as per the whale curve) when the total product lifecycle is tallied up.

The crossroads mentioned above presents two options:

  1. Continue on the current path where incremental projects are prioritised based on near-immediate payback on investment
  2. Undertake ruthless simplification at all levels of the pyramid of pain, a pruning effect from which new (profitability) growth can appear in a virtualised networking world

But clearly I’m not a CFO, or even an accountant. Do you think you can describe to me why they’ll choose option 1 every time?

NFV as the catalyst for a new OSS world order

“Operators that seek to implement NFV without preparing their OSS to support it are unlikely to be successful in capturing the new revenue-generating and cost-saving opportunities. OSS should not be an afterthought; it will continue to be central to the operational efficiency and agility of the service provider.”
James Crawshaw
in “Next-Gen OSS for Hybrid Virtualized Networks.”

The quote above comes via an article by Ray Le Maistre entitled, “NFV Should Be Catalyst for OSS Rethink.” Ray’s blog also states, “OSS transformation projects have a poor track record, notes Crawshaw, while enabling integrated management and implementing an evolved OSS will require a new set of skills, process engineering and “a change in mindset that is open to concepts such as DevOps and fail-fast,” notes the analyst.”

Ray comments at the bottom of his article, “There are certain things that will be relevant to all operators and none that will be relevant to all… in theory, a very smart OSS/NFV savvy systems integrator should be cleaning up RIGHT NOW in this market.”

These are some great insights from James and Ray. Unfortunately, they also mark what is a current dilemma for providers who are committing to a virtualised world. Whereas the legacy OSS had a lot of pre-baked functionality already coded, and needed more configuration than code development, the new world of OSS does need more coder-centric integration work.

The question I am pondering is whether this is:

  1. A permanent shift (ie a majority of work will be done by coders to make API-based customisations); or
  2. Whether it is just a temporary state pending the wild-west of NFV/OSS/DevOps to mature to a point where the main building blocks / capabilities are developed and we revert to a heavy bias towards configuration, not development

If it becomes the former, as an industry we need to focus our attention on developing more of the skill-sets that sit in the yellow intersection of the Venn diagram below. There are too few people with paired expertise in networks (particularly virtualised networks) and IT / coding (particularly in our current DevOps / API environment). In fact, in a world of increasingly software-defined networking, does it become increasingly difficult to be a networking expert without the IT / coding skills?
[Figure: Venn diagram of networks and IT / coding, with their yellow intersection and a red zone adding business understanding]

Even rarer is the mythical beast that sits in the red union of this Venn diagram – the one who actually gets what this all means for the organisations implementing this new world order and can navigate them through it. To paraphrase Ray, “a systems integrator that has great expertise in the red zone should be cleaning up RIGHT NOW in this market.”

But are they really mythical beasts? Do you know any??

Your OSS, but with an added zero!

“We’re woefully unprepared to deal with orders of magnitude.

Ten times as many orders.
One-tenth the number of hospital visits.
Ten times the traffic.
One-tenth the revenue.
Ten times as fast.

Because dramatic shifts rarely happen, we bracket everything on the increment, preparing for just a relatively small change here or there.
We think we’re ready for a 1 inch rise in sea level, but ten inches is so foreign we work hard to not even consider it.
Except that messages now travel 50 times faster than they used to, sent to us by 100 times as many people as we grew up expecting. Except that we’re spending ten times as much time with a device, and one-tenth as much time reading a book.

Here it comes. The future adds a zero.”
Seth Godin, here in his blog post, “Times 10.”

The future of comms has an added zero. IoT adds an extra zero to the devices we’re managing. Network virtualisation adds an extra zero to the number of service orders, due to the shorter life-cycle of those orders. By necessity, edge / fog computing reduces the load on comms links compared with the current cloud model, but possibly introduces platforms that have never needed operational support tools before (eg connected cars). Machine learning models add more than an extra zero to the number of iterations of use cases (eg design, assurance, etc) that can be tested to find an optimal outcome.

We’ve spoken before of the opportunity cost of using incremental rather than exponential OSS planning.

If all of your competitors are bracketing everything on the increment, but you’re not only planning for the exponential, but building solutions that leverage the exponential, do you think you might hold an advantage when the future adds a zero?

OSS at the centre of the universe

Historically, the center of the Universe had been believed to be a number of locations. Many mythological cosmologies included an axis mundi, the central axis of a flat Earth that connects the Earth, heavens, and other realms together. In the 4th century BC Greece, the geocentric model was developed based on astronomical observation, proposing that the center of the Universe lies at the center of a spherical, stationary Earth, around which the sun, moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the sun was believed to be the center of the Universe, with the planets (including Earth) and stars orbiting it.
In the early 20th century, the discovery of other galaxies and the development of the Big Bang theory led to cosmological models of a homogeneous, isotropic Universe (which lacks a central point) that is expanding at all points.

Perhaps I fall into a line of thinking as outdated as the axis mundi, but I passionately believe that the OSS is the centre of the universe around which all other digital technologies revolve. Even the sexy new “saviour” technologies like Internet of Things (IoT), network virtualisation, etc can only reach their promised potential if there are operational tools sitting in the background managing them and their life-cycle of processes efficiently. And the other “hero” technologies such as analytics, machine learning, APIs, etc aren’t able to do much without the data collected by operational tools.

No matter how far and wide I range in the consulting world of communications technologies and the multitude of industries they impact, I still see them coming back to what OSS can do to improve what they do.

Many people say that OSS is no longer relevant. Has the ICT world moved on to the geocentric model, the heliocentric model or even the Big Bang model? If so, what is at its centre?

Am I just blinded by being in what Sir Ken Robinson calls my Element? “When people are in their Element, they connect with something fundamental to their sense of identity, purpose, and well-being. Being there provides a sense of self-revelation, of defining who they really are and what they’re really meant to be doing with their lives.” Am I struggling to see the world from a perspective other than my own?

What is the opportunity cost of not embarking on machine-led decision support?

In earlier blogs, we’ve referenced this great article on SingularityHub to show how the exponentiality of technological progress tends to surprise us as change initially creeps up on us, then overwhelms us in situations like this:
[Figure: Singularity Hub's power law diagram]

But what if we changed the “exponential growth surprise factor” in the diagram above to “opportunity cost?”

Further to yesterday’s blog about the uselessness, yet value, of making predictions, we can use the exponentiality of machines (described by Moore’s Law) to attempt to build our OSS to follow the upper trajectory, rather than the lower, in the diagram.
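A rough back-of-the-envelope illustration of why those trajectories diverge, assuming (purely for illustration) 10% annual incremental improvement versus a Moore’s-Law-style doubling every two years:

```python
# Illustrative arithmetic only: 10%/year (incremental) vs doubling every 2 years.
for year in range(0, 11, 2):
    incremental = 1.10 ** year        # compounding 10% per year
    exponential = 2 ** (year / 2)     # doubling every two years
    print(f"year {year:2d}: incremental {incremental:5.2f}x, "
          f"exponential {exponential:6.2f}x, gap {exponential - incremental:6.2f}x")
# By year 10 the gap (~32x vs ~2.6x) is the opportunity cost of incrementalism.
```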

But how?
Chances are that you’ve already predicted it yourself.

The industry is already talking about big data analytics, machine learning, IoT, virtualised networks, software-centric OSS and more. These are machine-led innovations. We’re relatively early in the life-cycle of these concepts, so we’re not yet seeing the divergence of the two trajectories when mapped onto the diagram above.

But if you’re planning your OSS for the next 5-10 years and you’re not taking even little steps into machine-driven decision support technologies now, then I wonder whether you’ll be regretting the missed opportunity years into the future.

Why is mass customisation so important for the future of OSS?

“McDonald’s hit a peak moment of productivity by getting to a mythical scale, with a limited menu and little in the way of customization. They could deliver a burger for a fraction of what it might take a diner to do it on demand.
McDonald’s now challenges the idea that custom has to cost more, because they’ve invested in mass customization.
Things that are made on demand by algorithmic systems and robots cost more to set up, but once they do, the magic is that the incremental cost of one more unit is really low. If you’re organized to be in the mass customization business, then the wind of custom everything is at your back.
The future clearly belongs to these mass customization opportunities, situations where there is little cost associated with stop and start, little risk of not meeting expectations, where a robot and software are happily shifting gears all day long.”
Seth Godin
in “On demand vs. in stock”

We’ve all experienced the modern phenomenon of “the market of one.” We all want a solution to our own specific needs, whilst wanting to pay economy-of-scale prices. For example, to continue the burger theme, I rarely order a burger without requesting a change from the menu item (they all seem to add onions and tomatoes, which I don’t like).

One of the challenges of the OSS market segments I tend to do most work in (tier-one telcos and utilities) is that they’ve always needed a market-of-one approach to fit their needs (ie heavy customisation, with few projects being even similar to previous ones). This approach comes with an associated cost of customisation (during commissioning and forever after), as well as the challenge of finding the right people to drive those customisations (yes, you may’ve noticed a shortage too!).

If we can overcome this challenge with a model of repeatability overlaid onto customised requirements (ie mass customisation) then we’re going to reduce costs, reduce risks, reduce our reliance on a limited supply of resources and improve quality / reliability.

But OSS is a bit more complex than the burger business (I imagine, having never learnt much about the science of making and delivering burgers to order). So where do we start on our repeatability mantra? Here are a few ideas, but I’m sure you can think of many more (idea 5 is sketched in code after the list):

  1. Systematising the OSS product ordering process, whether you make the process self-serve (ie customers can log on to build a shopping cart) or more likely, you streamline the order collection for your sales agents to build a shopping cart with customers
  2. Providing decision support for the install process, guiding the person doing the install in real-time rather than handing them an admin guide. The process of setting up databases, high-availability, applications, schema, etc will invariably be a little different for each customer, and can often take days for top-end installs
  3. Reducing core functionality down to the features that virtually every customer will use, working hard to make those features highly modularised. They become the building blocks that customisations can be built around
  4. Building a platform that is designed to easily plug in additional functionality to create bespoke solutions. This specifically includes clever user-experience design to help customers find the right plug-ins for their requirements, rather than confusing them with a vast array of choice
  5. Wherever possible, give the flexibility in data rather than in applications / code
  6. Modularisation of products and processes as well as functionality
  7. Build models of intent that are abstracted from the underlying technology layers
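Here’s the small sketch promised above for idea 5 – keeping the flexibility in data rather than code, so a new product variant becomes a catalogue edit rather than a software release. The catalogue entries and capability names are illustrative assumptions:

```python
# Minimal sketch: product rules live in data an operator can edit, not in code.
PRODUCT_CATALOGUE = {
    "fibre-100":  {"max_speed_mbps": 100,  "requires": ["ont", "fibre_path"]},
    "fibre-1000": {"max_speed_mbps": 1000, "requires": ["ont", "fibre_path", "xgs_pon"]},
}

def can_offer(product: str, site_capabilities: set) -> bool:
    """Offer a product only if the site has every capability the data demands."""
    rules = PRODUCT_CATALOGUE[product]
    return set(rules["requires"]) <= site_capabilities

print(can_offer("fibre-100", {"ont", "fibre_path"}))   # True
print(can_offer("fibre-1000", {"ont", "fibre_path"}))  # False: site lacks xgs_pon
```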

The transient demands of a future of virtualised networks make this modularity and repeatability more important than they’ve ever been for OSS.

Can you imagine how you’ll interact with your OSS in 10 years?

Here’s a slightly mind-blowing fact for you – a child born when the iPhone was announced will be 10 years old in two months (a piece of trivia courtesy of Ben Evans).

That’s nearly 10 years of digitally-native workers coming into the telco workforce and 10 years of not-so-digitally-native workers exiting it. We once marvelled that a generation had joined the workforce that had never experienced life without the Internet. The generation that has never experienced life without mobile Internet, apps, etc is now on the march.

The smart-phone revolution spawned by the iPhone has changed, and will continue to change, the way we interact with information. By contrast, there hasn’t really been much change in the way we interact with our OSS, has there? Sure, there are a few mobility apps that help the field workforce, sales agents, etc, and we’re now (mostly) using browsers as our clients, but the majority of OSS users still interact with OSS servers via networked PCs fitted with a keyboard and mouse. Not much friction has been removed.

The question remains: how will other burgeoning technologies, such as augmented reality and gesture-based computing, impact the way we interact with our OSS in the coming decade? Are they also destined only to supplement the tasks of operators whose work has a mobile / spatial component, like the field workforce?

Machine learning and artificially intelligent assistants represent the greater opportunity to change how we interact with our OSS, but only if we radically change our user interfaces to play to their strengths. The overcrowded nature of our current OSS GUIs doesn’t readily accommodate small form-factor displays or speech / gesture interactions. An OSS GUI built around a search / predictive / precognitive interaction model is the more likely stepping stone to drastically different OSS interactions in the next ten years – a far more frictionless OSS future.
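As a closing thought-starter, here’s a minimal sketch of what a search-first, predictive interaction might rank on – matching OSS entities and surfacing the ones this operator opens most often. The entities and usage counts are illustrative assumptions:

```python
# Minimal sketch: predictive search that ranks matches by this operator's usage history.
def predictive_search(query: str, entities: dict[str, int]) -> list[str]:
    """Return entity names matching the query, most-frequently-used first."""
    q = query.lower()
    hits = [name for name in entities if q in name.lower()]
    return sorted(hits, key=lambda name: -entities[name])

usage = {"SYD-POP-01": 42, "SYD-POP-02": 3, "MEL-POP-01": 17}  # opens per entity
print(predictive_search("syd", usage))  # ['SYD-POP-01', 'SYD-POP-02']
```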