It’s all about the variants

I’ve been involved in telecom operations for decades, and I’ve learned that there is nothing in networking as inertial as OSS/BSS. A large minority of my experts (but still a minority) think that we should scrap the whole OSS/BSS model and simply integrate operations tasks with the service models of SDN and NFV orchestration. That’s possible too, and we’ll have to wait to see if there’s a sign that this more radical approach—which would really be event-driven—will end up the default path.”
Tom Nolle
here.

Another thought-provoking article from Tom above. It’s worth clicking on the link for an extended read.

I agree with his assertion about OSS / BSS being inertial and designed against principles of the distant past.

I believe the glacial speed of change primarily comes down to one thing – variants.

Networks support lots of variants. Product offerings / bundles introduce more. Processes more still. Entwined best-of-breed integrations add yet more. And so on. Now multiply these numbers together and you get the number of variants that an OSS needs to handle.

That’s the number of variants that must be designed for, developed for, integrated / configured for, tested against, data migrated for, supported and handed over into ops… not to mention the defects that arise from overlooking variants or new variants being introduced with new networks, services, etc.

So whether it’s the old approach or a new event-driven approach, if we can collectively slash the number of variants, we can go a long way to reducing the entanglement.

My fear at the moment is that virtualised networks are going to add a vast number of new variants before anyone thinks of stripping any of the old ones out.

What happens if we cross high-speed trading with OSS?

The law of diminishing marginal utility is a theory in economics that says that with increased consumption, satisfaction decreases.
You are at a park on a winter’s day, and someone is selling hot dogs for $1 each. You eat one. It tastes good and satisfies you, so you have another one, and another etc. Eventually, you eat so many that your satisfaction from each hot dog you consume drops. You are less willing to pay the $1 you have to pay for a hot dog. You would only consume another if the price drops. But that won’t happen, so you leave, and demand for the hot dogs falls.”
Wikibooks’ Supply and Demand.
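To put the textbook concept into symbols (my shorthand, not Wikibooks’): if $U(q)$ is the total satisfaction from consuming $q$ hot dogs, then diminishing marginal utility simply says

$$\frac{dU}{dq} > 0 \qquad \text{but} \qquad \frac{d^2U}{dq^2} < 0$$

Each extra hot dog still adds some satisfaction, just less than the one before, and once that marginal satisfaction is worth less than the $1 price, the buying stops.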

Yesterday’s blog went back to the basics of supply and demand to try to find ways to innovate with our OSS. Did the proposed model help you spawn any great new ideas?

If we look at another fundamental consumer model, the Law of Diminishing Marginal Utility, we can see that with more consumption comes less satisfaction from each additional unit consumed. Sounds like what’s happening for telcos globally. There’s ever greater consumption of data, but an increasing disinterest in who transports that data. Network virtualisation, 5G and IoT are sometimes quoted as the saviours of the telco industry (looking only at the supply side), but they’re inevitably going to bring more data to the table, leading to more disinterest, right? Sounds like a race to the bottom.

Telcos were highly profitable in times of data shortage, but in this age of abundance a new model is required rather than just speeds and feeds. As OSS providers, we also have to think beyond just bringing greater efficiency to the turn-on of speeds and feeds. But this is completely contra to the way we normally think, isn’t it? Completely contra to the way we build our OSS business cases.

What products / services can OSS complement that are in short supply and are highly valued? Some unique content (eg Seinfeld) or apps (Pokemon Go) might fit this criterion, but only for relatively short time periods. Even content and apps are becoming more abundant and less valued. Perhaps the answer is in fulfilling short-term inefficiencies in supply and demand (eg dynamic pricing, dynamic offers, unmet demands, etc) as posed in yesterday’s blog. All of these features require us to look at our OSS data with a completely different lens though.

Our analytics engines might be less focused on time-to-repair and perhaps more on metrics such as analysis-to-offer time (a cycle that currently takes us months, thus missing transient market inefficiency windows). And not just for the service providers, but as a service for their customers. Is this high-speed trading crossed with OSS?

How to disrupt through your OSS – a base principles framework

We’ve all heard the stories about the communications services industry being ripe for disruption. In fact, many over-the-top (OTT) players, like Skype and WhatsApp, have already proven this for basic communications services, let alone the value-add applications that leverage CSP connectivity.

As much as through the innovative technologies they’ve built, the OTT players have thrived on some really interesting business models that service gaps in the market. The oft-quoted examples here are platforms like Airbnb, which provide clients with access to accommodation without bearing the capital costs of building housing.

In thinking about the disruption of the CSP industry and how OSS can provide an innovative vehicle for meeting customer demands, the framework below builds upon the basic principles of supply and demand:

Supply and Demand opportunities in a service provider

The objective of the diagram is to identify areas where innovative OSS models could be applied, taking a different line of thinking than just fulfillment / assurance / inventory:

  • Supply – how can OSS influence supply factors such as:
    • Identifying latent supply (eg un-used capacity) and how incremental revenues could be generated from it, such as building dynamic offers (see the sketch after this list)
    • Unbundling supply by stripping multiple elements apart to supply smaller desirable components rather than all of the components of a cross-subsidised bundle. Or vice versa, taking smaller components and bundling them together to supply something new
    • Changing the costs structures of supply, which could include virtualisation, automation and many of the other big buzz-words swirling around our industry
  • Demand – how can OSS / BSS build upon demand factors such as:
  • Marketplace / Platforms – how can OSS / BSS better facilitate trade between a CSP’s subscribers, who are both producers / suppliers and consumers. CSPs traditionally provide data connectivity, but the OTT players tend to provide the higher perceived value of trade-connectivity, including:
    • Bringing buyers and sellers together on a common platform
    • Providing end-to-end trade support from product development, sales / marketing, trading transactions, escrow / billing / clearing-house; and then
    • Analytics on gathered data to improve the whole cycle
  • Supply chain – how can OSS / BSS help customers to efficiently bring the many pieces of a supply chain together. This is sometimes overlooked but can be one of the hardest parts of a business model to replicate (and therefore build a sustainable competitive advantage from). For OSS / BSS, it includes factors such as:
    • As technologies get more complex, but more modular, partnerships and their corresponding interfaces become more important, playing into microservices strategies
    • Identifying delivery inefficiencies, which include customer impediment factors such as ordering, delivery, activations. Many CSPs have significant challenges in this area, so efficiency opportunities abound
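To make the latent supply idea above a little more concrete, here’s a rough sketch in Python. The field names, thresholds and discount are entirely hypothetical illustrations rather than any real inventory schema: scan utilisation data for under-used links and turn the spare headroom into a dynamic offer.

```python
# A hypothetical sketch of the "latent supply" idea: scan inventory
# utilisation from the OSS and turn spare capacity into a dynamic offer.
# Field names and thresholds are illustrative assumptions, not a real schema.
from dataclasses import dataclass

@dataclass
class Link:
    link_id: str
    capacity_mbps: int
    peak_utilisation: float  # 0.0 - 1.0, from performance management data

def find_latent_supply(links, threshold=0.5):
    """Links running well below capacity are candidates for dynamic offers."""
    return [l for l in links if l.peak_utilisation < threshold]

def build_dynamic_offer(link, discount=0.3):
    spare_mbps = int(link.capacity_mbps * (1 - link.peak_utilisation))
    return {
        "link_id": link.link_id,
        "offer": f"{spare_mbps} Mbps burst capacity",
        "discount": discount,   # priced to move while the capacity sits idle
    }

inventory = [Link("SYD-MEL-01", 10000, 0.35), Link("SYD-BNE-02", 10000, 0.82)]
offers = [build_dynamic_offer(l) for l in find_latent_supply(inventory)]
print(offers)
```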

These are just a few of the ideas coming out of the framework above. Can the questions it poses help you to frame your next OSS innovation roadmap (ie taking it beyond just the typical tech roadmap)?

Crossing the OSS tech chasm

When discussing yesterday’s post about increasing feedback loops in OSS, the technology gap on exponential technologies such as IoT, network virtualisation and machine learning reminded me of Geoffrey Moore’s “Crossing the Chasm” as shown in the graph below.

Crossing the chasm

In the context of the abovementioned technologies, the chasm isn’t represented by the adoption of a product (as per Moore’s graph) but by the level of sophistication required to move to future iterations. [Noting that the sophistication chasm may also affect the take-up rate, as the OSS vendors that can cross the chasm to utilising more advanced machine learning, robotics and automation will have a distinct competitive advantage].

This gets back to the argument for developing these exponential technologies, even if only by investing in the measure and feedback steps of the control loop initially.
Singularity Hub's power law diagram
Image courtesy of Singularity Hub.

Getting ahead of feedback

Amazon is making its Greengrass functional programming cloud-to-premises bridge available to all customers…
This is an important signal to the market in the area of IoT, and also a potentially critical step in deciding whether edge (fog) computing or centralized cloud will drive cloud infrastructure evolution…
The most compelling application for [Amazon] Lambda is event processing, including IoT. Most event processing is associated with what are called “control loop” applications, meaning that an event triggers a process control reaction. These applications typically demand a very low latency for the obvious reason that if, for example, you get a signal to kick a defective product off an assembly line, you have a short window to do that before the product moves out of range. Short control loops are difficult to achieve over hosted public cloud services because the cloud provider’s data center isn’t local to the process being controlled. [Amazon] Greengrass is a way of moving functions out of the cloud data center and into a server that’s proximate to the process.”
Tom Nolle.

It seems to me that closed-loop thinking is going to be one of the biggest factors to impact OSS in coming years.

Multi-purpose machine learning (requiring feedback loops like the one described in the link above) is needed by OSS on many levels. IoT requires automated process responses as described by Tom in his quote above. Virtualised networks will evolve to leverage distributed, automated responsiveness to events to ensure optimal performance in the network.

But I’m surprised at how (relatively) little thought seems to be allocated to feedback loop thinking currently within our OSS projects. We’re good at designing forward paths, but not quite so good at measuring the outcomes of our many variants and feeding that insight back into earlier steps in the workflow.

We need to get better at the measure and control steps in readiness for when technologies like machine-learning, IoT and network virtualisation catch up. The next step after that will be distributing the decision making process out to where it can make a real-time difference.
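As a way of picturing those measure and control steps, here’s a minimal sketch of the loop. It assumes nothing about any particular vendor’s APIs (every function name is a placeholder); the point is that the “decide” step is pluggable, so a hard-coded rule today can be swapped for a machine-learning model, or pushed out towards the edge, without changing the loop itself.

```python
# A minimal sketch (not any vendor's API) of the measure / decide / act
# control loop discussed above. All names are hypothetical.
import time
from typing import Callable, Dict

def measure() -> Dict[str, float]:
    """Collect telemetry from the network (stubbed here)."""
    return {"cpu_load": 0.92, "packet_loss": 0.03}

def simple_rule(metrics: Dict[str, float]) -> str:
    """Today's decision logic: a hard-coded threshold."""
    return "scale_out" if metrics["cpu_load"] > 0.85 else "no_action"

def act(decision: str) -> None:
    """Push the decision back into the network (stubbed here)."""
    print(f"Applying action: {decision}")

def control_loop(decide: Callable[[Dict[str, float]], str], interval_s: int = 60) -> None:
    while True:
        metrics = measure()          # measure
        decision = decide(metrics)   # decide (rule today, ML model tomorrow)
        act(decision)                # act / control
        time.sleep(interval_s)

# One pass of the loop, for illustration:
act(simple_rule(measure()))
```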

OSS S-curves

I should say… that in the real world exponential curves don’t continue for ever. We get S-curves which closely mimic exponential curves in the beginning, but then tail off after a while, often as new technologies hit physical limits which prevent further progress. What seems to happen in practice is that some new technology emerges on its own S-curve which allows overall progress to stay on something approximating an exponential curve.
Socio tech

The chart above shows interlocking S-curves for change in society over the last 6,000 years. That’s as macro as it gets, but if you break down each of those S-curves they will in turn be comprised of their own interlocking S-curves. The industrial age, for example, was kicked off by the spinning jenny and other simple machines to automate elements of the textile industry, but was then kicked on by canals, steam power, trains, the internal combustion engine, and electricity. Each of these had its own S-curve, starting slowly, accelerating fast and then slowing down again. And to the people at the time the change would have seemed as rapid as change seems to us now. It’s only from our perspective looking back that change seems to have been slower in the past. Once again, that’s only because we make the mistake of thinking in absolute rather than relative terms.”
Nic Brisbourne
here.
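For those who like to see it written down, the standard way to formalise Nic’s point is the logistic curve:

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

Early on (for $t \ll t_0$) this behaves almost exactly like exponential growth, $f(t) \approx L\,e^{k(t - t_0)}$, but as $t$ grows it flattens towards the ceiling $L$, the physical limit that forces the next technology onto its own S-curve.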

I love that Nic has taken the time to visualise and articulate what many of us can perceive.

Bringing the exponential / S-curve concept into OSS, we’re at a stage in the development of OSS that seems faster than at any other time during my career. Technology changes in adjacent industries are flowing into OSS, dragging it (perhaps kicking and screaming) into a very different future. Technologies such as continuous integration, cloud-scaling, big-data / graph databases, network virtualisation, robotic process automation (RPA) and many others are making OSS look very different from what they did only five years ago. In fact, we probably need these technologies just to keep pace with the others. For example, the touchpoint explosion caused by network virtualisation and IoT means we need improved database technologies to cope. In turn this introduces a complexity and rate of change that is almost impossible for people to keep track of, driving the need for RPA… and so on.

But then, there are also things that aren’t changing.

Many of our OSS have been built through millions of developer days of effort. That forces a monumental decision for the owners of that OSS – to keep up with advances, you need to rapidly overhaul / re-write / supersede / obsolete all that effort and replace it with something that keeps track of the exponential curve. The monolithic OSS of the past simply won’t be able to keep pace, so highly modular solutions, drawing on external tools like cloud, development automation and the like are going to be the only way to track the curve.

All of these technologies rely on programmable interfaces (APIs) to interlock. There is one major component of a telco’s network that doesn’t have an API yet – the physical (passive) network. We don’t have real-time data feeds or programmable control mechanisms to update and manage these typically unreliable data sources. They are the foundation that everything else is built upon though, so for me this is the biggest digitalisation challenge / road-block that we face. Collectively, we don’t seem to be tackling it with as much rigour as it probably deserves.

The Second Law of OSS Thermodynamics

Applying the Second Law of Thermodynamics to understanding reality, Boyd infers that individuals or organizations that don’t communicate with the outside world by getting new information about the environment or by creating new mental models act like a “closed system.” And just as a closed system in nature will have increasing entropy, or disorder, so too will a person or organization experience mental entropy or disorder if they’re cut off from the outside world and new information.
The more we rely on outdated mental models even while the world around us is changing, the more our mental “entropy” goes up.
Think of an army platoon that’s been cut off from communication with the rest of the regiment. The isolated platoon likely has an idea, or mental model, of where the enemy is located and their capabilities, but things have changed since they last talked to command. As they continue to work with their outdated mental model against a changing reality, confusion, disorder, and frustration are the results.”
Brett & Kate McKay
here.

Does the description above resonate with you in relation to OSS? It does for me on a couple of levels.

The first is in relation to data quality. If you create a closed system, one where there aren’t continual, ongoing improvement efforts, entropy and disorder go up. If you only do big-bang data fix projects intermittently, you will experience entropy until the next big (and usually expensive) data remediation effort. And if your processes don’t have any improvement mechanisms, they tend to fall into a data death spiral.
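By way of illustration (a hypothetical sketch, not a prescription for any particular toolset), “continual improvement” for data quality can be as simple as a scheduled reconciliation between what the network reports and what the inventory believes, so drift is surfaced while it is still small rather than left to accumulate until the next big-bang remediation:

```python
# A rough sketch of continual data-quality improvement: regularly reconcile
# discovered network data against the inventory and flag drift as it appears.
# All names are hypothetical; real discovery / inventory APIs would sit
# behind the stubs.
def discover_network():
    return {"port-1": "up", "port-2": "down", "port-3": "up"}

def read_inventory():
    return {"port-1": "up", "port-2": "up"}  # drifted and incomplete

def reconcile():
    network, inventory = discover_network(), read_inventory()
    drift = {k: (inventory.get(k), v) for k, v in network.items()
             if inventory.get(k) != v}
    # In a closed system this drift (entropy) just accumulates; here we
    # surface it every cycle so it can be corrected while it is still small.
    return drift

print(reconcile())  # {'port-2': ('up', 'down'), 'port-3': (None, 'up')}
```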

The second is in relation to innovation. We can sometimes get stuck in our own mental models of what this industry is about. The technologies that are in enormous flux around us and impacting OSS mean that our mental models should be evolving on a daily / weekly / monthly basis. Unfortunately, sometimes we get stuck in the bubble of our current project and don’t get the feedback into the system that we need. PAOSS was spawned from a situation like that. For 18 months, I was leading a $100m+ project that had little to do with OSS and was increasingly drawing me away from my passion for OSS. PAOSS was my way of popping the bubble.

But I know my mental models still need to shift more than ever before (strong convictions, weakly held). Technologies such as CI/CD, AI/ML, virtualised networking, IoT and many more are changing the world we operate in. This needs vigilance to continually re-orient. That’s fun if you enjoy change, but many of our stakeholders don’t. Change management becomes increasingly important, yet increasingly underestimated.

Some even say that OSS is an old mental model (to which I counter by saying that new operational models look nothing like the old ones, but they’re still operational assistance tools, as per Theseus’ ship).

Theseus’ OSS transformation

Last week we compared OSS to Theseus’ ship, how it was constantly being rebuilt mid-voyage, then posed the question of whether it was still actually the same ship.

OSS transformation is increasingly becoming a Theseus’ ship model. In the past, we may’ve had the chance to do big-bang cutovers, where we simply migrated users from one solution to another. This doesn’t seem to be the case anymore, probably because of the complexity of systems in our OSS ecosystems. We’re now being asked to replace components while the ship keeps moving, which isn’t suited to the behemoth, monolithic OSS of the past.

This new model requires a much more modular approach to solution design, creating components that can be shared and consumed by other components, with relationships in data rather than system interconnectedness. In other words, careful planning to avoid the chess-board analogy.

In some ways, we probably have the OTT (over the top) play to thank for this new modular approach. We’ve now become more catalog-driven, agile, web-scaled and microservices-oriented in our way of thinking, giving us the smaller building blocks to change out pieces mid-voyage. The behemoth OSS never (rarely?) allowed this level of adaptability.

This complexity of transformation is probably part of the reason why behemoth software stacks across all industries are going to become increasingly rare in coming years.

In our case the Theseus paradox is an easy one to answer. If we change out each component of our OSS mid-voyage, it’s still our OSS, but it looks vastly different to the one that we started the voyage with.

Rebuilding the OSS boat during the voyage

When you evaluate the market, you’re looking to understand where people are today. You empathize. Then you move into storytelling. You tell the story of how you can make their lives better. That’s the “after” state.
All marketing ever does is articulate that shift. You have to have the best product, and you’ve got to be the best at explaining what you do… at articulating your value, so people say, ‘Holy crap, I get it and I want it.’

Ryan Deiss.

How does “the market” currently feel about our OSS?

See the comments in this post for a perspective from Roger, “It takes too long to get changes made. It costs $xxx,xxx for “you guys” to get out of bed to make a change.” Roger makes these comments from the perspective of an OSS customer and I think they probably reflect the thoughts of many OSS customers globally.

The question for us as OSS developers / integrators is how to make lives better for all the Rogers out there. The key is in the word change. Let’s look at this from the context of Theseus’ paradox, which is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object.

Vendor products have definitely become more modular since I first started working on OSS. However, they probably haven’t become atomic enough to be readily replaced and enhanced whilst in-flight like the Ship of Theseus. Taking on the perspective of a vendor, I probably wouldn’t want my products to be easily replaceable either (I’d want the moat that Warren Buffett talks about)… [although I’d like to think that I’d want to create irreplaceable products because of what they deliver to customers, including total lifecycle benefits, not because of technical / contractual lock-in].

Customers are taking Theseus matters into their own hands through agile / CI-CD methodologies and the use of micro-services. These approaches have merit, but they’re not ideal because they mean the customer needs more custom development resourcing on their payroll than they’d prefer. I’m sure this pendulum will swing back again towards more off-the-shelf solutions in coming years.

That’s why I believe the next great OSS will come from open source roots, where modules will evolve based on customer needs and anyone can (continually) make evolutionary, even revolutionary, improvements whilst on their OSS voyage (ie the full life-cycle).

OSS chat bots

Some pilots are proving quite interesting in enhancing Network Operations. Especially when AI / ML is used in the form of Natural language processing – NOC operators are talking to a ML system in natural language and asking questions in context to an alarm about deep topics (BGP, 4G eUTRAN, etc). Do more with L1 engineers.
– Use of familiar chat front ends like FB Messenger, Hangouts to have a conversation with an OSSChatBot which learns using ML, continuously, the intents of the questions being asked and responds by bringing information back from OSS using APIs. (Performance of a cell site, or Fuel level on a tower site, Weather information in a cluster of sites etc). Ease of use, Open new channels to use OSS
– Training the AI system on TAC tickets, Vendor manuals, Historical tickets – ability to search and index unstructured data and recommend solutions for problems when they occur or are about to occur – informed via OSS. Efficiency increase, lower NOC costs.
– Network planning using ML APIs. Ticket prioritization using ML analytics (which ones to act upon first – certainly not only by severity)

Steelysan
in response to an earlier post entitled, “A thousand AI flowers to bloom for OSS.”

Steelysan has raised some brilliant use cases for AI, chatbots specifically, in the assurance domain (hat-tip Steelysan!). OSSChatBots are the logical progression of the knowledge bases of the distant past or the rules / filters / state-machines of the current generation tools, yet I hadn’t considered using chat bots in this manner.

I wonder whether there are similar use cases in the fulfillment domain? Perhaps we can look at it from two perspectives, the common case and the exception case.

If looking at the common case, there is still potential for improved efficiency in fulfillment processes, particularly in the management of orders through to activation. We already have work order management and workforce management tools, but both could be enhanced due to the number of human interactions (handoffs, approvals, task assignment, confirmed completions, etc). An OSSChatBot could handle many of the standard tasks currently handled by order managers.

If looking at the exception case, where a jeopardy or fall-out state is reached, the AI behind an OSSChatBot could guide an inexperienced operator through remedial actions.

These types of tools have the potential to give more power to L1 NOC operators and free up higher levels to do higher-value activities.
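To make the flow tangible, here’s a minimal sketch of the OSSChatBot pattern Steelysan describes: classify the intent of a natural-language question, then answer it from OSS data via an API. The keyword matcher below merely stands in for a real NLP / ML intent model, and the OSS endpoints and values are hypothetical placeholders.

```python
# A minimal sketch of an OSSChatBot: intent classification (a stand-in for
# a real ML/NLP model) followed by an OSS API lookup. Endpoints are fake.
INTENTS = {
    "cell_performance": ["performance", "throughput", "kpi"],
    "site_fuel_level":  ["fuel", "generator"],
    "ticket_priority":  ["ticket", "priority", "act on first"],
}

def classify_intent(question: str) -> str:
    q = question.lower()
    for intent, keywords in INTENTS.items():
        if any(k in q for k in keywords):
            return intent
    return "unknown"

def query_oss(intent: str, site: str) -> str:
    # Placeholder for calls such as GET /oss/api/sites/{site}/kpis
    fake_oss = {"cell_performance": "DL throughput 42 Mbps (last hour)",
                "site_fuel_level": "Generator fuel at 68%"}
    return fake_oss.get(intent, "Sorry, I can't answer that yet.")

def chatbot_reply(question: str, site: str = "SITE-1234") -> str:
    return query_oss(classify_intent(question), site)

print(chatbot_reply("What's the performance of cell SITE-1234?"))
```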

Using OSS to handle more sophisticated compliance issues

You’re probably aware that our OSS and/or underlying EMS/NMS have been assisting organisations with their governance, regulatory and compliance (GRC) requirements for some time. Our OSS have provided tools that automatically enforce and validate compliance against configuration policies.
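As a simple illustration of what “validate compliance against configuration policies” can look like in practice (a hypothetical sketch only, with made-up policy fields), the check is essentially a comparison of each device’s running configuration against the policy, with violations reported for follow-up or auto-remediation:

```python
# An illustrative (hypothetical) configuration compliance check: compare a
# device's running configuration against a policy and report violations.
POLICY = {
    "ssh_enabled": True,
    "telnet_enabled": False,
    "allow_default_snmp_community": False,
}

def check_compliance(device_name, running_config):
    violations = []
    if running_config.get("telnet_enabled") != POLICY["telnet_enabled"]:
        violations.append("telnet must be disabled")
    if not running_config.get("ssh_enabled"):
        violations.append("ssh must be enabled")
    if running_config.get("snmp_community") == "public":
        violations.append("default SNMP community string in use")
    return device_name, violations

print(check_compliance("router-01", {"ssh_enabled": True,
                                     "telnet_enabled": True,
                                     "snmp_community": "public"}))
```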

Naturally, network virtualisation and network as a service (NaaS) offerings are going to place an increased dependence on those policies and tools due to the potentially transient services they invoke.

But the more interesting developments are likely to arise out of the regulatory environment space. As more businesses become digital and more personal / confidential data gets gathered, governments are going to create more regulations to comply with. This article by Michael Vizard highlights some of the major regulatory changes underway in the USA alone that will impact how we maintain our communications networks.

I’d hate to alarm you…

… but I will because I’m alarmed. 🙂

The advent of network virtualisation, cloud-scaling and API / microservice-centric OSS means that the security attack surface changes significantly compared with old-style solutions. We now have to consider a more diverse application stack, often where parts of the stack are outside our control because they’re as-a-service (XaaS) offerings from other suppliers. Even a DevOps implementation approach can introduce vulnerabilities.

With these new approaches, service providers are tending to take on more of the development / integration effort internally. This means that service providers can no longer rely so heavily on their vendors / integrators to ensure that their OSS solutions are hardened. Security definitely takes a much bigger step up in the list of requirements / priorities / challenges on a modern OSS implementation.

This article from Aricent provides a few insights on the security implications of a modern API architecture.

* Please note that I am not endorsing Aricent products here as I have never used them, but they do provide some thought-provoking ideas for those tasked with securing their OSS.

It’s not the tech, it’s the timing

Those of us in the technology consulting game like to think we’re pretty clever at predicting the next big technology trend. But with the proliferation of information over the Internet, it’s not that difficult to see potentially beneficial technologies coming over the horizon. We can all see network virtualisation, Internet-connected sensor networks, artificial intelligence, etc coming.

The challenge is actually picking the right timing for introducing them. Too early and nobody is ready to buy – death by cashflow. Too late and others have already established a customer base – death by competition.

The technology consultants that stand apart from the rest are the ones who not only know that a technology is going to change the world, but also know WHEN it will change the world and HOW to bring it into reality.

We’ve talked recently about utilising exponential tech like AI in OSS and the picture below tells some of the story in relation to WHEN.
Exponential curve opportunity cost
Incremental improvements to the status quo initially keep pace with efforts to bring a new, exponential technology into being. But over time the gap progressively widens, as does the opportunity cost of staying with incremental change. Some parts of the OSS industry have been guilty of only making incremental improvements to the platforms they’ve invested so much time into.

What the graph suggests is to embark on incremental investigations / trials / proofs-of-concept in the new tech so that in-house tribal knowledge is developed in readiness for when the time is right to introduce it to customers.

Falling off a cliff vs going to the sky

Have you noticed how the curves we’re dealing with in the service provider industry are either falling off a cliff (eg voice revenues) or going to the sky (eg theoretical exponential growth like IoE)?

Here in the OSS industry, we’re stuck in the middle of these two trend curves too. Falling revenues mean reduced appetite for big IT projects. However, the excitement surrounding exponential technologies like SDN/NFV, IoE and AI is providing just enough inspiration for change, including the new projects needed to plug the revenue shortfalls.

The question for sponsors becomes whether they see OSS as being:

  • Inextricably linked to the cliff-curve – servicing the legacy products with falling revenue curves and therefore falling project investment; or
  • Able to be the tools that allow their exciting new exponential technologies (and hopefully exponential revenues) to be operationalised and therefore attract new project investment

The question for us in the OSS industry is whether we’re able to evangelise and adapt our offerings to ensure sponsors see us as point 2 rather than point 1.

Wow

No, not “Wow!” the exclamation but the acronym W-O-W.

Wow stands for Walk Out Working. In other words, if a customer comes into a retail store, they walk out with a working service rather than exasperation. Whilst many customers wouldn’t be aware of it, there are lots of things that have to happen in an OSS / BSS for a customer to be wowed:

  • Order entry
  • Identity checks / approvals
  • Credit checks
  • Inventory availability
  • Rapid provisioning
  • Rapid back-of-house processes to accommodate service activation (eg billing, service order management, SLAs, etc)
  • etc

How long does all of this reasonably take for different product offerings? For mobile services, this is often feasible. For fixed line services, delivery is often measured in weeks, so the service provider would need to provide the customer with accommodation to accommodate WOW.

With virtualised networking, perhaps even fixed line services can be wowed if physical connectivity already exists and a virtually enabled CPE is installed at the customer premises. A vCPE that can be remotely configured and doesn’t require a truck roll or physical commissioning activities out in the field.

Wow, the acronym and the exclamation, become more feasible for more service delivery scenarios in a virtualised networking world. It then lays out the challenge to the OSS / BSS to keep up. Could your OSS / BSS, product offerings and related processes meet a WOW target of minutes rather than hours or days?
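Here’s one way to picture that target, as a hypothetical order-to-activate pipeline where each OSS / BSS step from the list above is a stub (real order management, credit, inventory and activation APIs would sit behind them) and the whole flow is timed against a “minutes, not days” budget:

```python
# A hypothetical sketch of the WOW (Walk Out Working) pipeline: every step
# is a stub, and the end-to-end flow is timed against a WOW budget.
import time

def order_entry(order):        time.sleep(0.1); return order
def identity_check(order):     time.sleep(0.1); return order
def credit_check(order):       time.sleep(0.1); return order
def reserve_inventory(order):  time.sleep(0.1); return order
def provision_service(order):  time.sleep(0.1); return order
def start_billing(order):      time.sleep(0.1); return order

WOW_BUDGET_SECONDS = 15 * 60   # walk-out-working target: 15 minutes

def fulfil(order):
    start = time.time()
    for step in (order_entry, identity_check, credit_check,
                 reserve_inventory, provision_service, start_billing):
        order = step(order)
    elapsed = time.time() - start
    return "WOW" if elapsed <= WOW_BUDGET_SECONDS else f"missed by {elapsed - WOW_BUDGET_SECONDS:.0f}s"

print(fulfil({"service": "vCPE broadband", "customer": "C-001"}))
```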

Software is eating the world… and eating your job?

A funny thing happened today. I was looking for a reference to Marc Andreessen’s original, “software is eating the world,” quote and came across an article on TechCrunch that expressed many of the same thoughts I was going to write about. However, it doesn’t specifically cover the service provider and OSS industries so I’ll push on, with a few borrowed quotes along the way (in italics, like the following).

Today, the idea that “every company needs to become a software company” is considered almost a cliché. No matter your industry, you’re expected to be reimagining your business to make sure you’re not the next local taxi company or hotel chain caught completely off guard by your equivalent of Uber or Airbnb. But while the inclination to not be “disrupted” by startups or competitors is useful, it’s also not exactly practical. It is decidedly non-trivial for a company in a non-tech traditional industry to start thinking and acting like a software company.”
[or the traditional tech industry like service providers???]

This is completely true of the dilemma facing service providers the world over. A software-centric network, whether SDN, NFV, or others, is nearly inevitable. While the important metrics don’t necessarily stack up yet for SDN, software will continue to swarm all over the service provider market. Meanwhile, the challenge is that the existing workforce at these companies, often numbering in the hundreds of thousands of people, doesn’t have the skills, or the interest in developing the skills, essential for the software-defined service provider of the (near) future.

Even worse for those people, many of the existing roles will be superseded by the automations we’re building in software. Companies like AT&T have been investing in software as a future mode of operation for nearly a decade and are starting to reap the rewards now. Many of their counterparts have barely started the journey.

This old post provided the following diagram:
Network_Software_Business_Venn
The blue circle is pushing further into the realm of the green to provide a larger yellow intersection, whereby network engineers will no longer be able to just configure devices, but will need to augment their software development skills. For most service providers, there just aren’t enough IT resources around to make the shift (although with appropriate re-skilling programs and 1+ million IT/Engineering resources coming out of universities in India and China every year, that is perhaps a moot point).

Summarising, I have two points to note:

  1. Bet on the yellow intersect point – service providers will require the converged skill-sets of IT and networks (including security) in larger volumes… but consider whether the global availability of these resources has the potential to keep salaries low over the longer term* (maybe the red intersection point is the one for you to target?)
  2. OSS is software and networking (and business) – however, my next post will consider the cyclical nature of a service provider building their own software vs. buying off-the-shelf products to configure to their needs

Will software eat your job? Will software eat my job? To consider this question, I would ask whether AI [Artificial Intelligence] will develop to the point that it does a better job at consultancy than I can (or any consultant, for that matter). The answer is a resounding and inevitable yes… for some aspects of consultancy it already can. Can a bot consider far more possible variants for a given consulting problem than a person can and give a more optimal answer? Yes. In response, the follow-up question is what skills will a consulter-bot find more difficult to usurp? Creativity? Relationships? Left-field innovation?

* This is a major generalisation here I know – there are sectors of the IT market where there will be major shortages (like a possible AI skills crunch in the next 2-5 years or even SDN in that timeframe), sometimes due to the newness of the technology preventing a talent pool from being developed yet, sometimes just for supply / demand misalignments.

Hot on the heels of ECOMP comes Indigo

Since releasing ECOMP (Enhanced Control, Orchestration, Management and Policy), AT&T has been busy on a data-sharing environment called Indigo, which it announced at the AT&T Developer Summit.

Like ECOMP, AT&T is looking to launch Indigo as an Open Source project through the Linux Foundation, hoping for community collaboration.

As you all know, machine learning and artificial intelligence (ML/AI) get better with more data. This project is intended to bring a community effort to the development of a data network that enhances accessibility to data and overcomes obstacles such as security, privacy, commercial sensitivities as well as other technical challenges.

AT&T announced that it will provide further details in coming weeks, which I’ll look to keep you abreast of. This is an important development for our industry for a range of reasons including the insights and efficiencies that ML/AI can deliver from the data it observes (as well as innovative revenue stream possibilities), but I’m most interested in the closed feedback loops that are needed in the OSS / ECOMP space.

For further information, check out this report on SDX Central.

The end of cloud computing

… but we’ve only just started and we haven’t even got close to figuring out how to manage it yet (from an aggregated view I mean, not just within a single vendor platform)!!

This article from Peter Levine of Andreessen Horowitz predicts “The end of cloud computing.”

Now I’m not so sure that this headline is going to play out in the near future, but Peter Levine does make a really interesting point in his article (and its embedded 25 min video). There are a number of nascent technologies, such as autonomous vehicles, that will need their edge devices to process immense amounts of data locally without having to backhaul it to centralised cloud servers for processing.

Autonomous vehicles will need to consume data in real-time from a multitude of in-car sensors, but only a small percentage of that data will need to be transmitted back over networks to a centralised cloud base. But that backhauled data will be important for the purpose of aggregated learning, analytics, etc, the findings of which will be shipped back to the edge devices.

Edge or fog computing is just one more platform type for our OSS to stay abreast of into the future.

Marc Andreessen’s platform play for OSS

Marc Andreessen describes platforms as “a system that can be programmed and therefore customized by outside developers — users — and in that way, adapted to countless needs and niches that the platform’s original developers could not have possibly contemplated, much less had time to accommodate.”

Platform thinking is an important approach for service providers if they want to recapture market share from the OTT play. As the likes of Facebook have shown, a relatively limited-value software platform becomes enormously more valuable if you can convince others to contribute via content and innovation (as evidenced in FB’s market cap to assets ratio compared with traditional service providers).

As an OSS industry, we have barely scratched the surface on platform thinking. Sure, our OSS are large platforms used by many users; we sometimes offer the ability to deliver customer portals, and more recently we’re starting to offer up APIs and microservices.

As we’ve spoken about before, many of the OSS on the market today are the accumulation of many years (decades?) of baked-in functionality (ie product thinking). Unfortunately this baked-in approach assumes that the circumstances that functionality was designed to cater for are identical (or nearly identical) for all customers and won’t change over time. The dynamic and customised nature of OSS clearly tells us that this assumption is not right.

Product thinking doesn’t facilitate the combinatory innovation opportunities represented by nascent technologies such as cloud delivery, network virtualization, network security, Big Data, Machine Learning and Predictive Analytics, resource models, orchestration and automation, wireless sensor networks, IoT/ M2M, Self-organizing Networks (SON) and software development models like DevOps. See more in my research report, The Changing Landscape of OSS.

Platforms are powerful, not just because of the cloud, but also the crowd. With OSS, we’re increasingly utilising cloud delivery and scaling models, but we probably haven’t found a way of leveraging the crowd to gain the extreme network effects that the likes of FB have tapped into. That’s largely because our OSS are constrained by “on-premises” product thinking for our customers. We allow customers to connect internally (some may argue that point! 😉 ), but aside from some community forums or annual customer events, we don’t tend to provide the tools for our global users to connect and share value.

In not having this aggregated view, we also limit our potential on inherent platform advantages such as analytics, machine-learning / AI, combinatorial innovation, decentralised development and collaborative value sharing.

Do you agree that it all starts with re-setting the “on-prem” thinking or are we just not ready for this yet (politically, technically, etc)?

[Noting that there are exceptions that already exist of course, both vendor and customer-side. Also noting that distributed datasets don’t preclude centralised / shared analytics, ML/AI, etc, but segregation of data (meta data even) from those centralised tools does!]

 

A hat-tip to the authors of a Platform Thinking Point of View from Infosys, whose document has helped frame some of the ideas covered in this blog.

Standard Operating Procedures… or Variable Operating Procedures

Yesterday’s blog discussed the important, but (perhaps) mythical, concept of Standard Operating Procedures (SOPs) for service providers and their OSS / BSS. The number of variants, which I can only see amplifying into the future, makes it almost futile to try to implement SOPs.

I say “almost futile” because SOPs are theoretically possible if the number of variants can be significantly compressed.

But given that I haven’t seen any real evidence of this happening at any of the big xSPs that I’ve worked with, I’ll discuss an alternative approach, which I call Variable Operating Procedures (VOP), broken into the following key attributes (a rough sketch in code follows the list):

  1. Process designs that are based on states, which often correlate with key activities / milestones such as notifications, approvals, provisioning activities, etc for each journey type (eg O2A [Order to Activate] for each product type, T2R [Trouble to Resolve] for each problem type, etc). There is less reliance on the sequencing or conditionals for each journey type, which are what make SOPs and the related customer experience so problematic (but I’ll come back to that in step 4B)
  2. Tracking of every user journey through each of these states (ie they have end-to-end identifiers that track their progress through their various states to ensure nothing slips through the cracks between handoffs)
  3. Visualising user journeys through Sankey diagrams (see diagram here) that show transitions between each of the states and show where inefficiencies exist
  4. A closed feedback loop (see diagram and description of the complete loop here) that:
    1. Monitors progress of a task through various states, some of which may be mandatory
    2. Uses machine learning to identify the most efficient path for processing an end-to-end journey. This means that there is no pre-ordained sequence of activities to traverse the various states; instead the system notes the sequence that results in the most efficient end-to-end journey (one that also meets success criteria such as readiness for service, customer approval / satisfaction, etc)
    3. Uses efficient path learnings to provide decision support recommendations for each subsequent traversal of the same / similar journey type. It should be noted that operators shouldn’t be forced into a decision, as the natural variations in operator resolutions for a journey type will potentially identify even more efficient paths (the Darwinian aspect of the feedback loop, which could be reinforced through gamification techniques amongst operators)
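To make the VOP idea a little more concrete, here’s a rough sketch with hypothetical names throughout, and a simple “fastest historical path” selection standing in for the machine-learning step: journeys are tracked as sequences of states rather than a fixed SOP, completed journeys are scored, and the best-known path is offered back as a non-binding recommendation.

```python
# A rough sketch of the VOP concept: journeys are recorded as the sequence
# of states they traversed, and the historically most efficient path for a
# journey type becomes decision support (not a mandate) for the next one.
from collections import defaultdict

class JourneyTracker:
    def __init__(self):
        self.completed = defaultdict(list)   # journey_type -> [(path, duration_hours)]

    def record(self, journey_type, path, duration_hours):
        """Called when a journey (eg an O2A order) reaches its final state."""
        self.completed[journey_type].append((tuple(path), duration_hours))

    def recommend_path(self, journey_type):
        """Decision support only: operators can still deviate (the Darwinian part)."""
        history = self.completed.get(journey_type)
        if not history:
            return None
        return min(history, key=lambda h: h[1])[0]

tracker = JourneyTracker()
tracker.record("O2A-broadband", ["ordered", "credit_checked", "designed", "activated"], 30)
tracker.record("O2A-broadband", ["ordered", "designed", "credit_checked", "activated"], 22)
print(tracker.recommend_path("O2A-broadband"))
# ('ordered', 'designed', 'credit_checked', 'activated')
```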

It’s clear that the SOP approach isn’t working for large service providers currently, but OSS rely on documented SOPs to control workflows for any given journey type. The proposed VOP approach is better suited to the digital supply chains and exponential network models (eg SDN/NFV, IoT) of the future, but will require a significant overhaul of OSS workflow processing designs.

Is VOP the solution, or have you seen a better response to the SOP problem?