It’s not the tech, it’s the timing

Those of us in the technology consulting game like to think we're pretty clever at predicting the next big technology trend. But with the proliferation of information over the Internet, it's not that difficult to see potentially beneficial technologies coming over the horizon. We can all see network virtualisation, Internet-connected sensor networks, artificial intelligence, etc. coming.

The challenge is actually picking the right timing for introducing them. Too early and nobody is ready to buy – death by cashflow. Too late and others have already established a customer base – death by competition.

The technology consultants that stand apart from the rest are the ones who not only know that a technology is going to change the world, but also know WHEN it will change the world and HOW to bring it into reality.

We've talked recently about utilising exponential tech like AI in OSS, and the picture below tells some of the story in relation to WHEN.
Exponential curve opportunity cost
Incremental improvements to the status quo initially keep pace with efforts to bring a new, exponential technology into being. But over time the gap progressively increases, as does the opportunity cost of staying with incremental change. Some parts of the OSS industry have been guilty of trying to make incremental improvements to the platforms that they've invested so much time into.

What the graph suggests is to embark on incremental investigations / trials / proofs-of-concept with the new (exponential) tech so that in-house tribal knowledge is developed in readiness for when the time is right to introduce it to customers.

Falling off a cliff vs going to the sky

Have you noticed how the curves we’re dealing with in the service provider industry are either falling off a cliff (eg voice revenues) or going to the sky (eg theoretical exponential growth like IoE)?

Here in the OSS industry, we're stuck in the middle of these two trend curves too. Falling revenues mean reduced appetite for big IT projects. However, the excitement surrounding exponential technologies like SDN/NFV, IoE and AI is providing just enough inspiration for change, including new projects intended to plug the revenue shortfalls.

The question for sponsors becomes whether they see OSS as being

  • Inextricably linked to the cliff-curve – servicing the legacy products with falling revenue curves and therefore falling project investment; or
  • Able to be the tools that allow their exciting new exponential technologies (and hopefully exponential revenues) to be operationalised and therefore attract new project investment

The question for us in the OSS industry is whether we're able to evangelise and adapt our offerings to ensure sponsors see us as point 2 rather than point 1.

Wow

No, not “Wow!” the exclamation but the acronym W-O-W.

Wow stands for Walk Out Working. In other words, if a customer comes into a retail store, they walk out with a working service rather than exasperation. Whilst many customers wouldn't be aware of it, there are lots of things that have to happen in an OSS / BSS for a customer to be wowed (a rough sketch of chaining these steps follows the list):

  • Order entry
  • Identity checks / approvals
  • Credit checks
  • Inventory availability
  • Rapid provisioning
  • Rapid back-of-house processes to accommodate service activation (eg billing, service order management, SLAs, etc)
  • etc
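To make that checklist a little more concrete, here's a minimal sketch (in Python, with invented step names and stubbed checks – not any particular OSS / BSS API) of how those steps might be chained into a single walk-out-working flow with a hard time budget:

```python
import time

# Hypothetical WOW pipeline: each step returns True on success. The step names mirror the
# checklist above; real OSS / BSS integrations would replace these stubs.

def order_entry(order):         return bool(order.get("product"))
def identity_check(order):      return order.get("id_verified", False)
def credit_check(order):        return order.get("credit_ok", False)
def inventory_available(order): return order.get("resources_free", False)
def rapid_provisioning(order):  return True   # e.g. push config to a vCPE - no truck roll
def back_of_house(order):       return True   # billing, service order management, SLAs, etc

WOW_STEPS = [order_entry, identity_check, credit_check,
             inventory_available, rapid_provisioning, back_of_house]

def walk_out_working(order, budget_seconds=600):
    """Run every step within a total time budget; any failure breaks the WOW promise."""
    start = time.time()
    for step in WOW_STEPS:
        if not step(order):
            return False, f"failed at {step.__name__}"
        if time.time() - start > budget_seconds:
            return False, f"time budget exceeded at {step.__name__}"
    return True, "customer walks out working"

print(walk_out_working({"product": "mobile-5G", "id_verified": True,
                        "credit_ok": True, "resources_free": True}))
```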

How long does all of this reasonably take for different product offerings? For mobile services, this is often feasible. For fixed line services, delivery is often measured in weeks, so the service provider would (literally) need to provide accommodation to accommodate WOW.

With virtualised networking, perhaps even fixed line services can be wowed, if physical connectivity already exists and a virtually enabled CPE is installed at the customer premises – a vCPE that can be remotely configured and doesn't require a truck roll or physical commissioning activities out in the field.

Wow, the acronym and the exclamation, become more feasible for more service delivery scenarios in a virtualised networking world. It then lays out the challenge to the OSS / BSS to keep up. Could your OSS / BSS, product offerings and related processes meet a WOW target of minutes rather than hours or days?

Software is eating the world… and eating your job?

A funny thing happened today. I was looking for a reference to Marc Andreessen’s original, “software is eating the world,” quote and came across an article on TechCrunch that expressed many of the same thoughts I was going to write about. However, it doesn’t specifically cover the service provider and OSS industries so I’ll push on, with a few borrowed quotes along the way (in italics, like the following).

“Today, the idea that “every company needs to become a software company” is considered almost a cliché. No matter your industry, you’re expected to be reimagining your business to make sure you’re not the next local taxi company or hotel chain caught completely off guard by your equivalent of Uber or Airbnb. But while the inclination to not be “disrupted” by startups or competitors is useful, it’s also not exactly practical. It is decidedly non-trivial for a company in a non-tech traditional industry to start thinking and acting like a software company.”
[or the traditional tech industry like service providers???]

This is completely true of the dilemma facing service providers the world over. A software-centric network, whether SDN, NFV, or others, is nearly inevitable. While the important metrics don't necessarily stack up yet for SDN, software will continue to swarm all over the service provider market. Meanwhile, the challenge is that the existing workforce at these companies, often numbering in the hundreds of thousands of people, doesn't have the skills, or the interest in developing the skills, essential for the software-defined service provider of the (near) future.

Even worse for those people, many of the existing roles will be superseded by the automations we’re building in software. Companies like AT&T have been investing in software as a future mode of operation for nearly a decade and are starting to reap the rewards now. Many of their counterparts have barely started the journey.

This old post provided the following diagram:
Network / Software / Business Venn diagram
The blue circle is pushing further into the realm of the green to provide a larger yellow intersection, whereby network engineers will no longer be able to just configure devices, but will need to augment their software development skills. For most service providers, there just aren't enough IT resources around to make the shift (although with appropriate re-skilling programs and 1+ million IT/Engineering graduates coming out of universities in India and China every year, that is perhaps a moot point).

Summarising, I have two points to note:

  1. Bet on the yellow intersect point – service providers will require the converged skill-sets of IT and networks (include security in this) in larger volumes… but consider whether the global availability of these resources has the potential to keep salaries low over the longer term* (maybe the red intersection point is the one for you to target?)
  2. OSS is software and networking (and business) – however, my next post will consider the cyclical nature of a service provider building their own software vs. buying off-the-shelf products to configure to their needs

Will software eat your job? Will software eat my job? To consider this question, I would ask whether AI (Artificial Intelligence) will develop to the point that it does a better job at consultancy than I can (or any consultant, for that matter). The answer is a resounding and inevitable yes… for some aspects of consultancy it already can. Can a bot consider far more possible variants for a given consulting problem than a person can and give a more optimal answer? Yes. In response, the follow-up question is: what skills will a consulter-bot find more difficult to usurp? Creativity? Relationships? Left-field innovation?

* This is a major generalisation, I know – there are sectors of the IT market where there will be major shortages (like a possible AI skills crunch in the next 2-5 years, or even SDN in that timeframe), sometimes due to the newness of the technology preventing a talent pool from being developed yet, sometimes just due to supply / demand misalignments.

Hot on the heels of ECOMP comes Indigo

Since releasing ECOMP (Enhanced Control, Orchestration, Management and Policy), AT&T has been busy building a data sharing environment called Indigo, which it announced at the AT&T Developer Summit.

As with ECOMP, AT&T is looking to launch Indigo as an open source project through the Linux Foundation, hoping for community collaboration.

As you all know, machine learning and artificial intelligence (ML/AI) get better with more data. This project is intended to bring a community effort to the development of a data network that enhances accessibility to data and overcomes obstacles such as security, privacy, commercial sensitivities as well as other technical challenges.

AT&T announced that it will provide further details in coming weeks, which I’ll look to keep you abreast of. This is an important development for our industry for a range of reasons including the insights and efficiencies that ML/AI can deliver from the data it observes (as well as innovative revenue stream possibilities), but I’m most interested in the closed feedback loops that are needed in the OSS / ECOMP space.

For further information, check out this report on SDX Central.

The end of cloud computing

…. but we’ve only just started and we haven’t even got close to figuring out how to manage it yet (from an aggregated view I mean, not just within a single vendor platform)!!

This article from Peter Levine of Andreessen Horowitz predicts "The end of cloud computing."

Now I’m not so sure that this headline is going to play out in the near future, but Peter Levine does make a really interesting point in his article (and its embedded 25 min video). There are a number of nascent technologies, such as autonomous vehicles, that will need their edge devices to process immense amounts of data locally without having to backhaul it to centralised cloud servers for processing.

Autonomous vehicles will need to consume data in real-time from a multitude of in-car sensors, but only a small percentage of that data will need to be transmitted back over networks to a centralised cloud base. But that backhauled data will be important for the purpose of aggregated learning, analytics, etc, the findings of which will be shipped back to the edge devices.

Edge or fog compute is just one more platform type for our OSS to stay abreast of into the future.

Marc Andreessen’s platform play for OSS

Marc Andreessen describes platforms as “a system that can be programmed and therefore customized by outside developers — users — and in that way, adapted to countless needs and niches that the platform’s original developers could not have possibly contemplated, much less had time to accommodate.”

Platform thinking is an important approach for service providers if they want to recapture market share from the OTT play. As the likes of Facebook have shown, a relatively limited-value software platform becomes enormously more valuable if you can convince others to contribute via content and innovation (as evidenced in FB’s market cap to assets ratio compared with traditional service providers).

As an OSS industry, we have barely scratched the surface on platform thinking. Sure, our OSS are large platforms used by many users; we sometimes offer the ability to deliver customer portals, and more recently we're starting to offer up APIs and microservices.

As we've spoken about before, many of the OSS on the market today are the accumulation of many years (decades?) of baked-in functionality (ie product thinking). Unfortunately this baked-in approach assumes that the circumstances the functionality was designed to cater for are identical (or nearly identical) for all customers and won't change over time. The dynamic and customised nature of OSS clearly tells us that this assumption is not right.

Product thinking doesn’t facilitate the combinatory innovation opportunities represented by nascent technologies such as cloud delivery, network virtualization, network security, Big Data, Machine Learning and Predictive Analytics, resource models, orchestration and automation, wireless sensor networks, IoT/ M2M, Self-organizing Networks (SON) and software development models like DevOps. See more in my research report, The Changing Landscape of OSS.

Platforms are powerful, not just because of the cloud, but also the crowd. With OSS, we’re increasingly utilising cloud delivery and scaling models, but we probably haven’t found a way of leveraging the crowd to gain the extreme network effects that the likes of FB have tapped into. That’s largely because our OSS are constrained by “on-premises” product thinking for our customers. We allow customers to connect internally (some may argue that point! 😉 ), but aside from some community forums or annual customer events, we don’t tend to provide the tools for our global users to connect and share value.

In not having this aggregated view, we also limit our potential to benefit from inherent platform advantages such as analytics, machine-learning / AI, combinatorial innovation, decentralised development and collaborative value sharing.

Do you agree that it all starts with re-setting the “on-prem” thinking or are we just not ready for this yet (politically, technically, etc)?

[Noting that there are exceptions that already exist of course, both vendor and customer-side. Also noting that distributed datasets don’t preclude centralised / shared analytics, ML/AI, etc, but segregation of data (meta data even) from those centralised tools does!]

 

A hat-tip to the authors of a Platform Thinking Point of View from Infosys, whose document has helped frame some of the ideas covered in this blog.

Standard Operating Procedures… or Variable Operating Procedures

Yesterday's blog discussed the important, but (perhaps) mythical, concept of Standard Operating Procedures (SOPs) for service providers and their OSS / BSS. The number of variants, which I can only see amplifying into the future, makes it almost futile to try to implement SOPs.

I say “almost futile” because SOPs are theoretically possible if the number of variants can be significantly compressed.

But given that I haven't seen any real evidence of this happening at any of the big xSPs that I've worked with, I'll discuss an alternative approach, which I call Variable Operating Procedures (VOP), broken into the following key attributes (a rough sketch of the tracking idea follows the list):

  1. Process designs that are based on states, which often correlate with key activities / milestones such as notifications, approvals, provisioning activities, etc for each journey type (eg O2A [Order to Activate] for each product type, T2R [Trouble to Resolve] for each problem type, etc). There is less reliance on the sequencing or conditionals for each journey type that represent the problem for SOPs and related customer experience (but I’ll come back to that later in step 4B)
  2. Tracking of every user journey through each of these states (ie they have end-to-end identifiers that track their progress through their various states to ensure nothing slips through the cracks between handoffs)
  3. Visualising user journeys through Sankey diagrams (see diagram here) that show transitions between each of the states and show where inefficiencies exist
  4. A closed feedback loop (see diagram and description of the complete loop here) that:
    1. Monitors progress of a task through various states, some of which may be mandatory
    2. Uses machine learning to identify the most efficient path for processing an end-to-end journey. This means there is no pre-ordained sequence of activities to traverse the various states; instead, the system notes the sequence that results in the most efficient end-to-end journey (one that also meets success criteria such as readiness for service, customer approval / satisfaction, etc)
    3. Uses efficient path learnings to provide decision support recommendations for each subsequent traversal of the same/similar journey type. It should be noted that operators shouldn't be forced into a decision, as the natural variations in operator resolutions for a journey type will potentially identify even more efficient paths (the Darwinian aspect of the feedback loop, which could be reinforced through gamification techniques amongst operators)
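As a thought experiment only (all identifiers invented), here's a minimal Python sketch of attributes 2 and 3: each journey carries an end-to-end identifier, every state change is recorded, and the transition counts fall out as exactly the input a Sankey diagram – or an efficient-path learner – would consume:

```python
from collections import Counter, defaultdict

# Hypothetical journey tracker: every journey has an end-to-end id and an ordered list of states.
transitions = Counter()        # (from_state, to_state) -> count, the raw input for a Sankey diagram
journeys = defaultdict(list)   # journey_id -> states visited so far

def record_state(journey_id, new_state):
    """Record a state change so nothing slips through the cracks between handoffs."""
    states = journeys[journey_id]
    if states:
        transitions[(states[-1], new_state)] += 1
    states.append(new_state)

# Example: two O2A (Order to Activate) journeys taking different paths to the same outcome
record_state("O2A-0001", "order_entered")
record_state("O2A-0001", "credit_checked")
record_state("O2A-0001", "provisioned")
record_state("O2A-0001", "activated")

record_state("O2A-0002", "order_entered")
record_state("O2A-0002", "provisioned")    # skipped a state - visible in the transition counts
record_state("O2A-0002", "activated")

for (src, dst), count in transitions.items():
    print(f"{src} -> {dst}: {count}")      # feed these triples into a Sankey plotting library
```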

It’s clear that the SOP approach isn’t working for large service providers currently, but OSS rely on documented SOPs to control workflows for any given journey type. The proposed VOP approach is better suited to the digital supply chains and exponential network models (eg SDN/NFV, IoT) of the future, but will require a significant overhaul of OSS workflow processing designs.

Is VOP the solution, or have you seen a better response to the SOP problem?

Standard operating procedures… or are they?

Standard Operating Procedures (SOP) have been pivotal in codifying and standardising the use cases of service providers for many years. The theory goes that if you can standardise a process, then you can produce repeatably high quality and streamline it in a cycle of continual improvement.

The repeatability objective is one* of two primary recollections from a book that I read many years ago called The E-Myth Revisited by Michael Gerber. It's one of the best selling business books of all time and places significant emphasis on building inherent repeatability into a business. It is an objective that is still highly relevant to the OSS industry, in part represented by the "automate everything" mantra that pervades today.

There's only one slight problem with the concept of an SOP – I tend to find that there is no standard operating procedure. Our inability to control complexity, in OSS and other parts of the service provider business model, has created a multitude of variants for any given product line. The greater the number of variants in a process flow, the harder it is to maintain a standard. You've all seen the processes I talk about, haven't you – the ones with complex conditional paths, off-page references to side flows, many hand-overs and hand-backs between swim-lanes, incomplete mappings, etc.

With the changing digital supply chain predicted by John Reilly in his work on The Value Fabric, the growing number of swim-lanes, and the more transient nature of process state change brought about by network virtualisation and the touchpoint explosion, it's obvious that the associated process complexity is only going to amplify.

This becomes extremely challenging for the traditional OSS / BSS, where standardised workflows are the building blocks upon which automations are constructed.

More on the alternatives tomorrow.

* BTW, the other recollection from The E-Myth is the mindset difference between the predominant Technicians (creating a job for oneself) vs Entrepreneurs (creating a self-sustaining business).

NFV has the potential to amplify the OSS pyramid of pain

In two recent posts, we’ve discussed the changing world order of OSS with NFV as a catalyst, and highlighted the challenges posed by maintaining legacy product offerings (the pyramid of OSS pain).

These two paradigms (and others such as the touchpoint explosion) are dragging traditional service providers closer to a significant crossroad.

Network virtualisation will lead to a new array of products to be developed, providing increased potential for creativity by product and marketing departments. Unless carefully navigated, this could prove to have a major widening effect on The OSS pyramid of pain. NFV will make it easier for administrators to drop new (virtual) devices into the network at will. More devices mean more interoperability ramifications.

If network virtualisation can’t be operationalised by a service provider, then it’s likely that the finger of blame will be pointed at the OSS‘s inability to cope with change, even though the real culprit is likely to exist further upstream, at product / feature level.

I’ve heard the argument that traditional service providers are bound in chains by their legacy product offerings when compared with the OTT players like Google, Facebook, etc. To an extent this is true, but I’d also argue that the OTT players have been more ruthless in getting rid of their non-performers (think Google Buzz and Reader and many more, Facebook’s SuperPoke and Paper, etc), not to mention building architectures to scale up or tear down as required.

The main point of difference between the service provider products remaining in service and the discontinued OTT products is revenue. For service providers, their products are directly revenue-generating whereas the OTT players tend to harvest their revenues from other means (eg advertising on their flagship products). But this is only part of the story.

For traditional service providers, product revenue invariably takes precedence over product lifecycle profitability. The latter would take into account the complexity costs that flow down through the other layers of the pyramid of pain, but this is too hard to measure. I would hazard a guess that many surviving service provider products are actually profitability cannibalisers (as per the whale curve) when the total product lifecycle is tallied up.

The crossroads mentioned above provides two options:

  1. Continue on the current path where incremental projects are prioritised based on near-immediate payback on investment
  2. Undertake ruthless simplification at all levels of the pyramid of pain, a pruning effect from which new (profitability) growth can appear in a virtualised networking world

But clearly I'm not a CFO, or even an accountant. Do you think you can describe to me why they'll choose option 1 every time?

NFV as the catalyst for a new OSS world order

“operators that seek to implement NFV without preparing their OSS to support it are unlikely to be successful in capturing the new revenue-generating and cost-saving opportunities. OSS should not be an afterthought; it will continue to be central to the operational efficiency and agility of the service provider.”
James Crawshaw
in “Next-Gen OSS for Hybrid Virtualized Networks.”

The quote above comes via an article by Ray Le Maistre entitled, “NFV Should Be Catalyst for OSS Rethink.” Ray’s blog also states, “OSS transformation projects have a poor track record, notes Crawshaw, while enabling integrated management and implementing an evolved OSS will require a new set of skills, process engineering and “a change in mindset that is open to concepts such as DevOps and fail-fast,” notes the analyst.”

Ray comments at the bottom of his article, “There are certain things that will be relevant to all operators and none that will be relevant to all… in theory, a very smart OSS/NFV savvy systems integrator should be cleaning up RIGHT NOW in this market.”

These are some great insights from James and Ray. Unfortunately, they also mark what is a current dilemma for providers who are committing to a virtualised world. Whereas legacy OSS had a lot of pre-baked functionality already coded, and needed more configuration than code development, the new world of OSS needs more coder-centric integration work.

The question I am pondering is whether this is:

  1. A permanent shift (ie a majority of work will be done by coders to make API-based customisations); or
  2. Whether it is just a temporary state, pending the wild-west of NFV/OSS/DevOps maturing to a point where the main building blocks / capabilities are developed and we revert to a heavy bias towards configuration, not development

If it becomes the former, as an industry we need to be focussing our attention on developing more of the skill-sets that sit in the yellow overlap of the Venn diagram below. There are too few people with paired expertise in networks (particularly virtualised networks) and IT / coding (particularly in our current DevOps / API environment). In fact, in a world of increasingly software-defined networking, does it become increasingly difficult to be a networking expert without the IT / coding skills?

Even rarer is the mythical beast that sits in the red union of this Venn diagram – the one who actually gets what this all means for the organisations implementing this new world order and is navigating them through it. To paraphrase Ray, “a systems integrator that has great expertise in the red zone should be cleaning up RIGHT NOW in this market.”

But are they really mythical beasts? Do you know any??

Your OSS, but with an added zero!

“We’re woefully unprepared to deal with orders of magnitude.

Ten times as many orders.
One-tenth the number of hospital visits.
Ten times the traffic.
One-tenth the revenue.
Ten times as fast.

Because dramatic shifts rarely happen, we bracket everything on the increment, preparing for just a relatively small change here or there.
We think we’re ready for a 1 inch rise in sea level, but ten inches is so foreign we work hard to not even consider it.
Except that messages now travel 50 times faster than they used to, sent to us by 100 times as many people as we grew up expecting. Except that we’re spending ten times as much time with a device, and one-tenth as much time reading a book.

Here it comes. The future adds a zero.”
Seth Godin, here in his blog post, “Times 10.”

The future of comms has an added zero. IoT adds an extra zero to the number of devices we're managing. Network virtualisation adds an extra zero to the number of service orders, due to the shorter life-cycle of orders. By necessity, edge/fog computing reduces the load on comms links compared with the current cloud model, but possibly introduces platforms that have never needed operational support tools before (eg connected cars). Machine learning models add more than an extra zero to the number of iterations of use cases (eg design, assurance, etc) that can be tested to find an optimal outcome.

We’ve spoken before of the opportunity cost of using incremental rather than exponential OSS planning.

If all of your competitors are bracketing everything on the increment, while you're not only planning for the exponential but building solutions that leverage it, do you think you might hold an advantage when the future adds a zero?

OSS at the centre of the universe

“Historically, the center of the Universe had been believed to be a number of locations. Many mythological cosmologies included an axis mundi, the central axis of a flat Earth that connects the Earth, heavens, and other realms together. In 4th-century BC Greece, the geocentric model was developed based on astronomical observation, proposing that the center of the Universe lies at the center of a spherical, stationary Earth, around which the sun, moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the sun was believed to be the center of the Universe, with the planets (including Earth) and stars orbiting it.
In the early 20th century, the discovery of other galaxies and the development of the Big Bang theory led to the development of cosmological models of a homogeneous, isotropic Universe (which lacks a central point) that is expanding at all points.”
Wikipedia.

Perhaps I fall into a line of thinking as outdated as the axis mundi, but I passionately believe that the OSS is the centre of the universe around which all other digital technologies revolve. Even the sexy new “saviour” technologies like Internet of Things (IoT), network virtualisation, etc can only reach their promised potential if there are operational tools sitting in the background managing them and their life-cycle of processes efficiently. And the other “hero” technologies such as analytics, machine learning, APIs, etc aren’t able to do much without the data collected by operational tools.

No matter how far and wide I range in the consulting world of communications technologies and the multitude of industries they impact, I still see them coming back to what OSS can do to improve what they do.

Many people say that OSS is no longer relevant. Has the ICT world moved on to the geocentric, heliocentric or even big bang model? If so, what is at their centre?

Am I just blinded by what Sir Ken Robinson describes as, “When people are in their Element, they connect with something fundamental to their sense of identity, purpose, and well-being. Being there provides a sense of self-revelation, of defining who they really are and what they’re really meant to be doing with their lives.” Am I struggling to see the world from a perspective other than my own?

What is the opportunity cost of not embarking on machine-led decision support?

In earlier blogs, we’ve referenced this great article on SingularityHub to show how the exponentiality of technological progress tends to surprise us as change initially creeps up on us, then overwhelms us in situations like this:
Singularity Hub's power law diagram

But what if we changed the “exponential growth surprise factor” in the diagram above to “opportunity cost?”

Further to yesterday's blog about the uselessness but value of making predictions, we can use the exponentiality of machines (described by Moore's Law) to attempt to build our OSS to follow the upper trajectory rather than the lower trajectory in the diagram.

But how?
Chances are that you’ve already predicted it yourself.

The industry is already talking about big data analytics, machine learning, IoT, virtualised networks, software like OSS and more. These are machine-led innovations. We’re relatively early in the life-cycle of these concepts, so we’re not yet seeing the divergence of the two trajectories when mapped onto the diagram above.

But if you’re planning your OSS for the next 5-10 years and you’re not seeking to make even little steps into machine-driven decision support technologies now, then I wonder if you may be regretting the missed opportunity years into the future?

Why is mass customisation so important for the future of OSS?

“McDonald’s hit a peak moment of productivity by getting to a mythical scale, with a limited menu and little in the way of customization. They could deliver a burger for a fraction of what it might take a diner to do it on demand.
McDonald’s now challenges the idea that custom has to cost more, because they’ve invested in mass customization.
Things that are made on demand by algorithmic systems and robots cost more to set up, but once they do, the magic is that the incremental cost of one more unit is really low. If you’re organized to be in the mass customization business, then the wind of custom everything is at your back.
The future clearly belongs to these mass customization opportunities, situations where there is little cost associated with stop and start, little risk of not meeting expectations, where a robot and software are happily shifting gears all day long.”
Seth Godin
in “On demand vs. in stock.”

We’ve all experienced the modern phenomenon of “the market of one.” We all want a solution to our own specific needs, whilst wanting to pay for it at an economy of scale. For example, to continue the burger theme, I rarely order a burger without having to request a change from the menu item (they all seem to put onions and tomatoes on, which I don’t like).

One of the challenges of the OSS market segments I tend to do most work in (the tier-one telcos and utilities) is that they've always needed a market-of-one approach to fit their needs (ie heavy customisation, with few projects being even similar to any previous ones). This approach comes with an associated cost of customisation (during commissioning and forever after) as well as the challenge of finding the right people to drive these customisations (yes, you may've noticed a shortage too!).

If we can overcome this challenge with a model of repeatability overlaid onto customised requirements (ie mass customisation) then we’re going to reduce costs, reduce risks, reduce our reliance on a limited supply of resources and improve quality / reliability.

But OSS is a bit more complex than the burger business (I imagine, having never learnt much about the science of making and delivering burgers to order). So where do we start on our repeatability mantra? Here are a few ideas (a rough sketch of the plug-in idea follows the list), but I'm sure you can think of many more:

  1. Systematising the OSS product ordering process, whether you make the process self-serve (ie customers can log on to build a shopping cart) or more likely, you streamline the order collection for your sales agents to build a shopping cart with customers
  2. Providing decision support for the install process, guiding the person doing the install in real-time rather than giving them an admin guide. The process of setting up databases, high-availability, applications, schema, etc will invariably be a little different for each customer and can often take days for top-end installs
  3. Reducing core functionality down to the features that virtually every customer will use, working hard to make those features highly modularised. They become the building blocks that customisations can be built around
  4. Building a platform that is designed to easily plug in additional functionality to create bespoke solutions. This specifically includes clever user experience design to help customers find the right plug-ins for their requirements rather than confusing them with a vast array of choice
  5. Wherever possible, give the flexibility in data rather than in applications / code
  6. Modularisation of products and processes as well as functionality
  7. Build models of intent that are abstracted from the underlying technology layers
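Purely as a sketch (invented names, not any product's API), here's one way ideas 3 to 6 might hang together: a small core, optional capabilities registered as plug-ins, and per-customer variation held in data rather than code:

```python
# Hypothetical plug-in registry: core functionality stays small, customisations are registered
# as modules, and per-customer variation lives in data (config) rather than in code.

PLUGINS = {}

def plugin(name):
    """Decorator that registers an optional capability against the core platform."""
    def register(func):
        PLUGINS[name] = func
        return func
    return register

@plugin("sla_reporting")
def sla_reporting(order, config):
    return f"SLA report every {config.get('sla_report_days', 30)} days for {order['service']}"

@plugin("auto_provision")
def auto_provision(order, config):
    return f"provisioning {order['service']} via {config.get('provision_target', 'default EMS')}"

def fulfil(order, customer_config):
    """Run only the plug-ins this customer has switched on - the mass customisation path."""
    return [PLUGINS[name](order, customer_config)
            for name in customer_config.get("enabled_plugins", [])
            if name in PLUGINS]

print(fulfil({"service": "EVPL-100M"},
             {"enabled_plugins": ["auto_provision", "sla_reporting"],
              "provision_target": "vCPE controller", "sla_report_days": 7}))
```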

The transient demands facilitated by a future of virtualised networks make this modularity and repeatability more important than they've ever been for OSS.

Can you imagine how you’ll interact with your OSS in 10 years?

Here's a slightly mind-blowing fact for you – a child born when the iPhone was announced will be 10 years old in 2 months (a piece of trivia courtesy of Ben Evans).

That’s nearly 10 years of digitally native workers coming into the telco workforce and 10 years of not-so-digitally native workers exiting it. We marvelled that there was a generation that had joined the workforce that had never experienced life without the Internet. The generation that has never experienced life without mobile Internet, apps, etc is now on the march.

The smart-phone revolution spawned by the iPhone has changed, and will continue to change, the way we interact with information. By contrast, there hasn't really been much change in the way that we interact with our OSS, has there? Sure, there are a few mobility apps that help the field workforce, sales agents, etc, and we're now using browsers as our clients (mostly), but a majority of OSS users still interact with OSS servers via networked PCs fitted with a keyboard and mouse. Not much friction has been removed.

The question remains about how other burgeoning technologies such as augmented reality and gesture-based computing will impact how we interact with our OSS in the coming decade. Are they also destined to only supplement the tasks of operators that have a mobile / spatial component to their tasks, like the field workforce?

Machine learning and Artificially Intelligent assistants represent the greater opportunity to change how we interact with our OSS, but only if we radically change our user interfaces to facilitate their strengths. The overcrowded nature of our current OSS doesn't readily accommodate small form-factor displays or speech / gesture interactions. An OSS GUI built around a search / predictive / precognitive interaction model is the more likely stepping stone to drastically different OSS interactions in the next ten years. A far more frictionless OSS future.
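To make the search / predictive idea slightly more tangible, here's a toy sketch (not any product's UI framework) that ranks candidate OSS actions by a mix of prefix match and the operator's usage history – the kind of ranking a precognitive command box might start from:

```python
# Toy predictive command box: rank OSS actions by prefix match plus the operator's usage history.

ACTIONS = ["create service order", "check alarm history", "run network health report",
           "provision vCPE", "raise trouble ticket", "view SLA dashboard"]

USAGE_COUNTS = {"check alarm history": 42, "raise trouble ticket": 17, "provision vCPE": 9}

def suggest(query, limit=3):
    """Score each action: strong boost for a prefix match, weaker boost for a substring match,
    plus how often this operator has used it before."""
    scored = []
    for action in ACTIONS:
        score = USAGE_COUNTS.get(action, 0)
        if action.startswith(query):
            score += 100
        elif query in action:
            score += 50
        if score:
            scored.append((score, action))
    return [action for _, action in sorted(scored, reverse=True)[:limit]]

print(suggest("ch"))   # likely: ['check alarm history']
print(suggest("v"))    # prefix, substring and usage all contribute to the ranking
```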

OSS Mission Control requires horizontal feedback

“It took Sheryl Sandberg exactly 2 sentences to give the best career advice you’ll hear today. I want you to ponder the following question for a moment, because it’s one of the most important questions you’ll ever answer…
The question was posed to Sheryl Sandberg: “What’s the number one thing you look for in someone who can scale with a company?”
Sandberg’s reply: “Someone who takes feedback well. Because people who can take feedback well are people who can learn and grow quickly.”
Boom goes the dynamite!”

From inc.com.

This is the third in a series about contrasting the way NASA looks at mission control compared with a typical NOC (and the OSS / BSS that support them) [Post 1 and Post 2 can be found here].

NASA is constantly monitoring signals coming from sensors on its single payload and using that feedback to learn and grow (or recover) quickly. It looks at feedback in a horizontal sense, monitoring the single payload’s performance across a wide range of sensors / systems.

Our NOCs are also constantly monitoring signals, but due to the way our OSS / BSS are usually configured, they tend to react and respond in what I’ll refer to as “vertical loops” or “mini loops.” They look at silos (eg is a device in the transmission domain still performing within expected thresholds? If not, try to rectify it). Generally speaking, our OSS don’t tend to do horizontal feedback loops quite so well.

Network virtualisation appears to be having an influence on this thinking though. The orchestration mindset is taking more of a service / contract / SLA perspective, whereby if a part of the service is degrading / failing, then just shoot it and automatically spin up a replacement.
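A minimal sketch of that horizontal, service-level loop (invented names and random stand-in telemetry, not a real orchestrator API): instead of each domain silo watching only its own thresholds, the loop checks every component of the end-to-end service and replaces whatever is degrading:

```python
import random

# Hypothetical service-level closed loop: monitor every component that makes up a service
# (horizontally, across domains) and, if one degrades, "shoot it" and spin up a replacement.

def component_healthy(component):
    return random.random() > 0.2   # stand-in for real telemetry / threshold checks

def heal_service(service):
    for i, component in enumerate(service["components"]):
        if not component_healthy(component):
            replacement = f"{component}-replacement"
            print(f"{component} degraded -> spinning up {replacement}")
            service["components"][i] = replacement
    return service

service = {"name": "enterprise-vpn-042",
           "components": ["vFirewall-3", "vRouter-7", "vLoadBalancer-1"]}
heal_service(service)   # one pass of the loop; in practice this runs continuously against SLA targets
```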

How many of our OSS are learning and growing from this concept? What are you seeing out in the field?

Does your organisation have the culture to handle new OSS models?

We've recently talked about how the two service provider business model extremes – OTT / DSP (Over the Top / Digital Service Provider) versus REIT / TaaU (Telco as a Utility) – are affecting OSS.

The fast-twitch OSS that services the OTT / DSP model is bringing about some fascinating changes in the way service providers procure “assets.” They’re no longer buying equipment (or software) outright, but buying as a managed service. This sees the vendor taking most of the capital risk and sharing in the rewards of the product offering whilst the service provider reduces risk and CAPEX outlay but still gets a return from the product. The more financially successful the product, the more the vendor stands to make.

I first saw this model with Unified Communications services being offered through the CSPs back in the 2000s and was amazed at the audacity. It was effectively the selling of ice to the Eskimos (selling carrier services to the carriers), but a very bold and clever approach that was successful for many reasons.

The software-defined era we're currently embarking on increases the likelihood of third-party, performance-based, sell-through offerings on networks and network management. However, we're already starting to see that this is causing some cultural friction. Service provider operations groups have traditionally thought in terms of engineering their own solutions and allocating CAPEX for their implementation. This new model sees the design largely outsourced to vendors and an OPEX-based payment structure.

I mention the OPEX structure because operational teams typically get allocated a fixed budget, but if a product becomes wildly successful then the pay-per-use pricing model means the carrier is making lots of money, while the vendor is also expecting its share. Unfortunately the fixed-budget thinking of carriers doesn't accommodate the vendors easily. I'm sure this will work itself out as carriers evolve with these new models, but it can be a challenge in the short-term.

How does OSS architecture cope with exponential growth?

Yesterday’s blog covered how exponential growth in ICT industries has been (and will continue to be) a challenge for all of us in OSS-land.

We’ve already seen some fundamental changes in OSS in recent years to be able to cope with the massive growth in device counts, bandwidth demands, etc. We’ve seen hyper-scaled hardware/software platforms becoming much more common as well as the load-balanced interfaces that can cope with this scaling.

We also have the beginnings of software-defined networks, which allow innovation at the speed of software in the network. This is a plus in terms of our ability to scale to meet exponential demands, but a challenge for management software to keep up with what will be even more explosive growth and transient services (ie spin-up / tear-down of services).

What we haven’t coped so well with is the complexity that comes from having far more devices, more device types and much more rapid change within those device types. And that’s not even mentioning the number of configurations of network connectivity they support. This rate of change is also apparent in the product/service variants that marketers develop on the back of the newly available network topologies / services. These are just some of the complexity multipliers that OSS “catches” downstream of decisions made by others.

The complexity explosion is going to continue unabated, which challenges the triple constraint of OSS. There are really only four ways I can see for handling the exponential growth in complexity hitting our OSS (a small sketch of the intent-abstraction idea follows the list):

  1. A reduction in complexity from a variety of factors including moving from big-grid to small-grid OSS models
  2. Abstraction of complexity by element management layers beneath the OSS, including intent-based demands
  3. Simplifying what our OSS are attempting to do. They currently have so much functionality baked in (most of which customers don’t use), that maintaining and building upon it is becoming unviable. Much of this functionality needs to be stripped out and handled as data experiments, which needs a new approach to data management
  4. Using algorithmic learning approaches built into our OSS to handle a level of complexity that the human brain struggles with. This includes machine learning / artificial intelligence (ML / AI) of course, as well as leveraging service defined architectures (think microservices) to create the building blocks of software-driven scaling. Algorithmic learning needs lots of source data to learn from, so the touchpoint explosion will contribute
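As an illustration of option 2 only (every name here is invented), an intent-based request lets the OSS declare the outcome it wants and leaves the element management layer to absorb the device-level complexity:

```python
# Hypothetical intent-based abstraction: the OSS states the outcome it wants,
# and domain adapters (element management layers) translate it into vendor/device specifics.

INTENT = {
    "service": "point-to-point-ethernet",
    "endpoints": ["site-A", "site-B"],
    "bandwidth_mbps": 500,
    "latency_ms_max": 10,
    "protection": "required",
}

def realise_intent(intent, adapters):
    """Ask each domain adapter whether it can honour the intent; the OSS never sees device syntax."""
    plans = []
    for adapter in adapters:
        plan = adapter(intent)
        if plan is not None:
            plans.append(plan)
    return plans or ["no domain can satisfy this intent"]

def optical_domain(intent):
    if intent["bandwidth_mbps"] >= 100:
        return f"optical path {intent['endpoints'][0]}<->{intent['endpoints'][1]} with protection"
    return None

def packet_domain(intent):
    return f"L2VPN at {intent['bandwidth_mbps']} Mbps, SLA {intent['latency_ms_max']} ms"

print(realise_intent(INTENT, [optical_domain, packet_domain]))
```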

How are you currently looking to strip complexity out of your OSS and/or systematically benefit from innovation at the speed of software?

How can OSS keep up with exponential progress?

We’ve all heard of Moore’s Law, which predicts the semiconductor industry’s ability to exponentially increase transistor density in an integrated circuit. “Moore’s prediction proved accurate for several decades, and has been used in the semiconductor industry to guide long-term planning and to set targets for research and development. Advancements in digital electronics are strongly linked to Moore’s law: quality-adjusted microprocessor prices, memory capacity, sensors and even the number and size of pixels in digital cameras… Moore’s law describes a driving force of technological and social change, productivity, and economic growth.”

Its exponentiality has also proven to be helpful for long-term planning in many industries that rely on electronics / computing, and that includes the communications industry. By nature, we tend to think in linear terms, and exponentiality is harder for us to comprehend (as shown by the old anecdote about the number of grains of wheat on a chessboard).
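For the record, the arithmetic behind that anecdote fits in a couple of lines – one grain doubled across the 64 squares of the board:

```python
# The chessboard doubling anecdote, worked through: 1 + 2 + 4 + ... + 2^63 grains of wheat.
total_grains = sum(2**square for square in range(64))   # equals 2**64 - 1
print(f"{total_grains:,}")   # 18,446,744,073,709,551,615 - exponential growth in one line
```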

The problem, as described in a great article on SingularityHub is that the exponentiality of technological progress tends to surprise us as change initially creeps up on us, then overwhelms us in situations like this:
Singularity Hub's power law diagram

The level of complexity that has hit OSS in the last decade has been profound and to be honest, has probably overwhelmed the linear thinking models we’ve applied to OSS. The continued growth from technologies such as network virtualisation, Internet of Things, etc is going to lead to a touchpoint explosion that will make the next few years even more difficult for our current OSS models (especially for the many tools that exist today that have evolved from decade-old frameworks).

Countering exponential growth requires exponential thinking. We know we’re going to deal with vastly greater touch-points and vastly greater variants and vastly greater complexity (see more in the triple-constraint of OSS). Too many OSS projects are already buckling under the weight of this complexity.

Small-grid OSS is my great hope, as well as widespread machine-learning to augment our linear thinking. But these are just a starting-point that I’ll continue to explore and report on here on PAOSS.

I’d love to hear your exponential thoughts on our industry and what our OSS need to do to keep pace.