A new, more sophisticated closed-loop OSS model

Back in early 2014, PAOSS posted an article about the importance of closed loop designs in OSS, which included the picture below:

OSS / DSS feedback loop

It generated quite a bit of discussion at the time and led to my being introduced to two companies that were separately working on interesting aspects of this theoretical closed-loop system. [Interestingly, whilst being global companies, they both had strong roots tying back to my home town of Melbourne, Australia.]

More recently, Brian Levy of TM Forum has published a more sophisticated closed-loop system, in the form of a Knowledge Defined Network (KDN), as seen in the diagram below:
Brian Levy Closed Loop OSS
I like that this control loop utilises relatively nascent technologies like intent-based networking and constantly improving machine-learning capabilities (as well as analytics for delta detection) to form a future OSS / KDN model.

The one thing I’d add is the concept of inputs (in the form of use cases such as service orders or new product types) as well as outputs / outcomes such as service activations for customers and not just the steady-state operations of a self-regulating network. Brian Levy’s loop is arguably more dependent on the availability and accuracy of data, so it needs to be initially seeded with inputs (and processing of workflows).
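To make the shape of that seeded loop a little more concrete, here's a minimal sketch in Python. Everything in it (the StubNetwork class, the field names, the order format) is my own invention for illustration, not anything from Brian Levy's KDN model:

```python
# Minimal closed-loop sketch (illustrative names only, not Brian Levy's model).

class StubNetwork:
    """Stands in for real discovery / activation interfaces."""
    def __init__(self):
        self.state = {}
    def discover(self):                 # measure step
        return dict(self.state)
    def apply(self, target, desired):   # act step (e.g. service activation)
        self.state[target] = desired

def detect_delta(observed, intended):
    """Analytics step: where does the network diverge from intent?"""
    return {k: v for k, v in intended.items() if observed.get(k) != v}

def control_loop(network, intent, inbound_orders):
    for order in inbound_orders:        # inputs seed the intended state
        intent.update(order)
    observed = network.discover()
    for target, desired in detect_delta(observed, intent).items():
        network.apply(target, desired)  # outputs: activations for customers

net = StubNetwork()
control_loop(net, intent={}, inbound_orders=[{"svc-123": "active"}])
print(net.state)  # {'svc-123': 'active'}
```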

Current-day OSS are too complex and variable (ie un-repeatable), so perhaps this represents an architectural path towards a simpler future OSS – in terms of human interaction at least – although the technology required to underpin it will be very sophisticated. The sophistication will be palatable if we can deliver the all-important repeatability described in, “I want a business outcome, not a deployment challenge.” BTW. This refers to repeatability / reusability across organisations, not just being able to repeatedly run workflows within organisations.

An OSS knowledge plane for SDN

“We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so.
We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The knowledge plane is novel in its reliance on the tools of AI and cognitive systems. We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective.”
David Clark et al., in “A Knowledge Plane for the Internet.”

We know that SDN is built around the concepts of the data plane and the control plane. SDN also proposes centralised knowledge of, and management of, the network. David Clark and his contemporaries are proposing a machine-driven cognitive layer that could sit on top of SDN's control plane.

The other facet of the knowledge plane concept is that it becomes an evolving data-driven approach rather than the complex process-driven approach to getting tasks done.
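To illustrate the distinction, here's a toy sketch of what a data-driven, advisory knowledge plane might look like. The interface is entirely hypothetical (it isn't from Clark's paper), but it shows the idea of maintaining high-level models and offering advice, rather than executing fixed processes:

```python
# Toy knowledge-plane interface (hypothetical; not from Clark's paper).
class KnowledgePlane:
    def __init__(self):
        self.model = {}   # high-level model: service -> expected behaviour

    def learn(self, service, expectation):
        self.model[service] = expectation

    def advise(self, service, observation):
        """Advice, not commands: compare observation to the learned model."""
        expected = self.model.get(service)
        if expected is None:
            return "no model yet - cannot explain"
        if observation == expected:
            return "behaving as intended"
        return f"deviation: expected {expected}, observed {observation}"

kp = KnowledgePlane()
kp.learn("vpn-42", "latency<20ms")
print(kp.advise("vpn-42", "latency=45ms"))
```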

Brian Levy & Barry Graham have authored a paper entitled, “TM FORUM FUTURE ARCHITECTURE STRATEGY,” which discusses the knowledge plane in more detail as well as providing interesting next-generation OSS/BSS architecture concepts.

I want a business outcome, not a deployment challenge

“We can look and take lessons on how services evolved in the cloud space. Our customers have expressed how they want to take these services and want a business outcome, not a deployment challenge.”
Shawn Hakl.

Make no mistake, cloud OSS is still a deployment challenge (at this nascent stage at least), but in the context of OSS, Shawn Hakl’s quote asks the question, “who carries the burden of that deployment challenge?”

The big service providers have traditionally opted to take on the deployment challenge, almost wearing it as a badge of honour. I get it, because if done well, OSS can be a competitive differentiator.

The cloud model (ie hosted by a trusted partner) becomes attractive from the perspective of repeatability, from the efficiency of doing the same thing repeatedly at scale. Unfortunately this breaks down in a couple of ways for OSS (currently at least).

Firstly, the term “trusted partner” is a rare commodity between OSS providers and consumers for many different reasons (including trust from a security perspective, which is the most common pushback against using hosted OSS). Secondly, we haven’t unlocked the repeatability problem. Every organisation has different networks, different services, different processes, even different business models.

Cloud-hosted OSS represents a big opportunity into the future if we first focus on identifying the base ingredients of repeatability amongst all the disparity. Catalogs (eg service catalogs, product catalogs, virtual network device catalogs) are the closest we have so far. Intent abstraction models follow this theme too, as does platform-thinking / APIs. Where else?
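To show what I mean by catalogs as a repeatability ingredient, here's a hedged sketch of a reusable catalog entry. All of the field names are invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical catalog entry structure; field names are illustrative only.
@dataclass
class CatalogEntry:
    name: str                                             # e.g. "business-internet-100"
    decomposes_into: list = field(default_factory=list)   # reusable components
    activation_intent: str = ""                           # abstract intent, not device commands

broadband = CatalogEntry(
    name="business-internet-100",
    decomposes_into=["access-tail", "cpe-vnf", "core-transit"],
    activation_intent="connectivity: site<->internet, 100Mbps, SLA gold",
)
print(broadband)
```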

People pay for two things. What about OSS?

“People pay for two things:
Results: You do something they couldn’t do for themselves.
Convenience: You do something they don’t want to do for themselves, or you make something difficult easier to do.”
Ramit Sethi.

I really like the way Ramit has broken an infinite number of variants down to just two key categories. Off the top of my head, these categories of payment (ie perceived value) seem to hold true for most industries, but let's unpack how they align with OSS.

In traditional OSS, most of the functionality / capability we provide falls into the convenience category.

In assurance, we tend to act as aggregators and coordinators, the single pane of glass for network health and remedial actions such as trouble-ticketing. But there's no reason why we couldn't manage those alarms directly from our EMS and tickets through spreadsheets. It's just more convenient to use an OSS.

In fulfilment, we also pull all the pieces together, potentially from a number of different systems ranging from BSS, inventory, EMS and more. Again, it’s just more convenient to use an OSS.

But looking into the future, the touchpoint explosion, the sheer scale of events hitting our assurance tools and the elastic nature of fulfilment on virtualised networks mean that humans can't manage by themselves. OSS and high-speed decision support tools will be essential to deliver results.
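Even a trivial example shows why decision support becomes a results play at scale. The toy triage below (thresholds and field names invented) collapses thousands of duplicate alarms into a shortlist that no human could produce by eyeballing raw feeds:

```python
from collections import Counter

# Toy triage: collapse duplicate alarms and surface the noisiest sources.
# Fields and the top_n cut-off are invented for illustration.
def triage(alarms, top_n=3):
    counts = Counter((a["source"], a["type"]) for a in alarms)
    return counts.most_common(top_n)

alarms = [{"source": "node-7", "type": "LOS"}] * 5000 + \
         [{"source": "node-9", "type": "BER"}] * 12
print(triage(alarms))  # node-7 LOS dominates: likely one root cause
```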

One other slight twist on this story though. All of the convenience we try to create using our OSS can actually result in less convenience. If we develop 1,000 tools that, in isolation, each do something they [our customers] don't want to do for themselves, each adds value. But if those tools in aggregate slow down our OSS significantly, increase support costs (lifetime costs) and make them inflexible to essential changes, then they're actually reducing the convenience. On this point I have a motto – just because we can, doesn't mean we should (build extra convenience tools into our OSS).

It’s all about the variants

“I’ve been involved in telecom operations for decades, and I’ve learned that there is nothing in networking as inertial as OSS/BSS. A large minority of my experts (but still a minority) think that we should scrap the whole OSS/BSS model and simply integrate operations tasks with the service models of SDN and NFV orchestration. That’s possible too, and we’ll have to wait to see if there’s a sign that this more radical approach—which would really be event-driven—will end up the default path.”
Tom Nolle, here.

Another thought-provoking article from Tom above. It’s worth clicking on the link for an extended read.

I agree with his assertion about OSS / BSS being inertial and designed against principles of the distant past.

I believe the glacial speed of change primarily comes down to one thing – variants.

Networks support lots of variants. Product offerings / bundles introduce more. Processes add more still. Entwined best-of-breed integrations introduce yet more. Now multiply these numbers together and you get the number of variants that an OSS needs to handle.
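A hypothetical back-of-envelope multiplication (all numbers invented) shows how quickly this compounds:

```python
# Invented numbers, purely to show the multiplication effect.
network_types = 8      # access, core, transport, mobile, ...
product_variants = 50  # offers, bundles, grandfathered plans
process_paths = 12     # order-to-activate, assure, bill, ...
integrations = 6       # best-of-breed touchpoints per flow

variants = network_types * product_variants * process_paths * integrations
print(variants)  # 28800 combinations to design, test and support
```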

That’s the number of variants that must be designed for, developed for, integrated / configured for, tested against, data migrated for, supported and handed over into ops… not to mention the defects that arise from overlooking variants or new variants being introduced with new networks, services, etc.

So whether it’s the old approach or a new event-driven approach, if we can collectively slash the number of variants, we can go a long way to reducing the entanglement.

My fear at the moment is that virtualised networks are going to add a vast number of new variants before anyone thinks of stripping any of the old ones out.

What happens if we cross high-speed trading with OSS?

The law of diminishing marginal utility is a theory in economics that says that with increased consumption, satisfaction decreases.
“You are at a park on a winter’s day, and someone is selling hot dogs for $1 each. You eat one. It tastes good and satisfies you, so you have another one, and another etc. Eventually, you eat so many that your satisfaction from each hot dog you consume drops. You are less willing to pay the $1 you have to pay for a hot dog. You would only consume another if the price drops. But that won’t happen, so you leave, and demand for the hot dogs falls.”
Wikibooks’ Supply and Demand.

Yesterday’s blog went back to the basics of supply and demand to try to find ways to innovate with our OSS. Did the proposed model help you spawn any great new ideas?

If we look at another fundamental consumer model, the Law of Diminishing Marginal Utility, we can see that with more consumption comes less consumer satisfaction. Sounds like what's happening for telcos globally. There's ever greater consumption of data, but an increasing disinterest in who transports that data. Network virtualisation, 5G and IoT are sometimes quoted as the saviours of the telco industry (looking only at the supply side), but they're inevitably going to bring more data to the table, leading to more disinterest, right? Sounds like a race to the bottom.

Telcos were highly profitable in times of data shortage but in this age of abundance, a new model is required rather than just speeds and feeds. As OSS providers, we also have to think beyond just bringing greater efficiency to the turn-on of speeds and feeds. But this is completely contra to the way we normally think isn’t it? Completely contra to the way we build our OSS business cases.

What products / services can OSS complement that are in short supply and highly valued? Some unique content (eg Seinfeld) or apps (eg Pokemon Go) might fit these criteria, but only for relatively short time periods. Even content and apps are becoming more abundant and less valued. Perhaps the answer is in fulfilling short-term inefficiencies in supply and demand (eg dynamic pricing, dynamic offers, un-met demands, etc) as posed in yesterday's blog. All of these features require us to look at our OSS data with a completely different lens though.

Our analytics engines might be less focused on time-to-repair and perhaps more on metrics such as analysis-to-offer (which currently takes us months, thus missing transient market inefficiency windows). And not just for the service providers, but as a service for their customers. Is this high-speed trading crossed with OSS?
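Taken literally, high-speed trading crossed with OSS might look something like the toy sketch below: watch utilisation and emit time-boxed offers when spare capacity appears. All thresholds and field names are invented:

```python
# Toy "capacity trading" loop; thresholds and fields are invented.
def spot_offers(links, utilisation, discount=0.3, threshold=0.5):
    offers = []
    for link in links:
        if utilisation[link] < threshold:         # transient surplus
            offers.append({"link": link,
                           "discount": discount,
                           "valid_for_mins": 30})  # short inefficiency window
    return offers

print(spot_offers(["syd-mel", "mel-per"], {"syd-mel": 0.35, "mel-per": 0.8}))
```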

How to disrupt through your OSS – a base principles framework

We’ve all heard the stories about the communications services industry being ripe for disruption. In fact many over-the-top (OTT) players, like Skype and WhatsApp, have already proven this for basic communications services, let alone the value-add applications that leverage CSP connectivity.

As much as the innovative technologies they’ve built, the OTT players have thrived via some really interesting business models to service gaps in the market. The oft-quoted examples here are platforms like Airbnb providing clients with access to accommodation without having the capital costs of building housing.

In thinking about the disruption of the CSP industry and how OSS can provide an innovative vehicle to meeting customer demands, the framework below builds upon the basic principles of supply and demand:

Supply and Demand opportunities in a service provider

The objective of the diagram is to identify areas where innovative OSS models could be applied, taking a different line of thinking than just fulfillment / assurance / inventory:

  • Supply – how can OSS influence supply factors such as:
    • Identifying latent supply (eg un-used capacity) and how incremental revenues could be generated from it, such as building dynamic offers
    • Unbundling supply by stripping multiple elements apart to supply smaller desirable components rather than all of the components of a cross-subsidised bundle. Or vice versa, taking smaller components and bundling them together to supply something new
    • Changing the costs structures of supply, which could include virtualisation, automation and many of the other big buzz-words swirling around our industry
  • Demand – how can OSS / BSS build upon demand factors such as:
    • Marketplace / Platforms – how can OSS / BSS better facilitate trade between a CSP's subscribers, who are both producers / suppliers and consumers. CSPs traditionally provide data connectivity, but the OTT players tend to provide higher perceived value of trade-connectivity including:
      • Bringing buyers and sellers together on a common platform
      • Providing end-to-end trade support from product development, sales / marketing, trading transactions, escrow / billing / clearing-house; and then
      • Analytics on gathered data to improve the whole cycle
    • Supply chain – how can OSS / BSS help customers to efficiently bring the many pieces of a supply chain together. This is sometimes overlooked but can be one of the hardest parts of a business model to replicate (and therefore build a sustainable competitive advantage from). For OSS / BSS, it includes factors such as:
      • As technologies get more complex, but more modular, partnerships and their corresponding interfaces become more important, playing into microservices strategies
      • Identifying delivery inefficiencies, which include customer impediment factors such as ordering, delivery and activations. Many CSPs have significant challenges in this area, so efficiency opportunities abound

These are just a few of the ideas coming out of the framework above. Can the questions it poses help you to frame your next OSS innovation roadmap (ie taking it beyond just the typical tech roadmap)?

Crossing the OSS tech chasm

When discussing yesterday’s post about increasing feedback loops in OSS, the technology gap on exponential technologies such as IoT, network virtualisation and machine learning reminded me of Geoffrey Moore’s “Crossing the Chasm” as shown in the graph below.

Crossing the chasm

In the context of the abovementioned technologies, the chasm isn't represented by the adoption of a product (as per Moore's graph) but by the level of sophistication required to move to future iterations. [Noting that the sophistication chasm may also affect the take-up rate, as the OSS vendors that can cross the chasm to utilising more advanced machine learning, robotics and automation will have a distinct competitive advantage.]

This gets back to the argument for developing these exponential technologies, even if only by investing in the measure and feedback steps of the control loop initially.
Singularity Hub's power law diagram
Image courtesy of Singularity Hub.

Getting ahead of feedback

“Amazon is making its Greengrass functional programming cloud-to-premises bridge available to all customers…
This is an important signal to the market in the area of IoT, and also a potentially critical step in deciding whether edge (fog) computing or centralized cloud will drive cloud infrastructure evolution…
The most compelling application for [Amazon] Lambda is event processing, including IoT. Most event processing is associated with what are called “control loop” applications, meaning that an event triggers a process control reaction. These applications typically demand a very low latency for the obvious reason that if, for example, you get a signal to kick a defective product off an assembly line, you have a short window to do that before the product moves out of range. Short control loops are difficult to achieve over hosted public cloud services because the cloud provider’s data center isn’t local to the process being controlled. [Amazon] Greengrass is a way of moving functions out of the cloud data center and into a server that’s proximate to the process.”
Tom Nolle.

It seems to me that closed-loop thinking is going to be one of the biggest factors to impact OSS in coming years.

Multi-purpose machine learning (requiring feedback loops like the one described in the link above) is needed by OSS on many levels. IoT requires automated process responses as described by Tom in his quote above. Virtualised networks will evolve to leverage distributed, automated responsiveness to events to ensure optimal performance in the network.

But I’m surprised at how (relatively) little thought seems to be allocated to feedback loop thinking currently within our OSS projects. We’re good at designing forward paths, but not quite so good at measuring the outcomes of our many variants and using those to feed back insight into earlier steps in the workflow.

We need to get better at the measure and control steps in readiness for when technologies like machine-learning, IoT and network virtualisation catch up. The next step after that will be distributing the decision making process out to where it can make a real-time difference.
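As a toy illustration of why the measure and control steps matter (and why they're migrating to the edge), here's a sketch of an event handler with a latency budget. The numbers and names are all invented:

```python
import time

# Toy edge control loop: act only if the decision fits the control window.
CONTROL_WINDOW_MS = 50   # invented budget, e.g. kick a product off the line

def handle_event(event, decide, act):
    start = time.monotonic()
    action = decide(event)                       # local (edge) inference
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms <= CONTROL_WINDOW_MS:
        act(action)
    else:
        print(f"missed window ({elapsed_ms:.1f}ms) - log for offline learning")

handle_event({"sensor": "belt-3", "reading": 9.1},
             decide=lambda e: "reject" if e["reading"] > 9 else "pass",
             act=print)
```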

OSS S-curves

“I should say… that in the real world exponential curves don’t continue for ever. We get S-curves which closely mimic exponential curves in the beginning, but then tail off after a while, often as new technologies hit physical limits which prevent further progress. What seems to happen in practice is that some new technology emerges on its own S-curve which allows overall progress to stay on something approximating an exponential curve.
Socio tech

The chart above shows interlocking S-curves for change in society over the last 6,000 years. That’s as macro as it gets, but if you break down each of those S-curves they will in turn be comprised of their own interlocking S-curves. The industrial age, for example, was kicked off by the spinning jenny and other simple machines to automate elements of the textile industry, but was then kicked on by canals, steam power, trains, the internal combustion engine, and electricity. Each of these had its own S-curve, starting slowly, accelerating fast and then slowing down again. And to the people at the time the change would have seemed as rapid as change seems to us now. It’s only from our perspective looking back that change seems to have been slower in the past. Once again, that’s only because we make the mistake of thinking in absolute rather than relative terms.”
Nic Brisbourne, here.

I love that Nic has taken the time to visualise and articulate what many of us can perceive.

Bringing the exponential / S-curve concept into OSS, we're at a stage in the development of OSS that seems faster than at any other time during my career. Technology changes in adjacent industries are flowing into OSS, dragging it (perhaps kicking and screaming) into a very different future. Technologies such as continuous integration, cloud-scaling, big-data / graph databases, network virtualisation, robotic process automation (RPA) and many others are making OSS look very different to what they did only five years ago. In fact, we probably need these technologies just to keep pace with the others. For example, the touchpoint explosion caused by network virtualisation and IoT means we need improved database technologies to cope. In turn this introduces a complexity and change that is almost impossible for people to keep track of, driving the need for RPA… etc.
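Incidentally, Nic's interlocking S-curves claim is easy to check numerically. A toy demonstration (all parameters invented): sum a few offset logistic curves and the envelope approximates exponential growth:

```python
import math

# Each technology follows a logistic S-curve L/(1+e^(-k(t-t0)));
# offsetting and summing a few of them approximates exponential growth.
def s_curve(t, ceiling, midpoint, steepness=1.0):
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 30, 5):
    total = sum(s_curve(t, ceiling=10**i, midpoint=8*i) for i in range(1, 4))
    print(t, round(total, 1))   # roughly multiplies each step: exponential-ish
```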

But then, there are also things that aren’t changing.

Many of our OSS have been built through millions of developer days of effort. That forces a monumental decision for the owners of that OSS – to keep up with advances, you need to rapidly overhaul / re-write / supersede / obsolete all that effort and replace it with something that keeps track of the exponential curve. The monolithic OSS of the past simply won’t be able to keep pace, so highly modular solutions, drawing on external tools like cloud, development automation and the like are going to be the only way to track the curve.

All of these technologies rely on programmable interfaces (APIs) to interlock. There is one major component of a telco’s network that doesn’t have an API yet – the physical (passive) network. We don’t have real-time data feeds or programmable control mechanisms to update it and manage these typically unreliable data sources. They are the foundation that everything else is built upon though so for me, this is the biggest digitalisation challenge / road-block that we face. Collectively, we don’t seem to be tackling it with as much rigour as it probably deserves.

The Second Law of OSS Thermodynamics

“Applying the Second Law of Thermodynamics to understanding reality, Boyd infers that individuals or organizations that don’t communicate with the outside world by getting new information about the environment or by creating new mental models act like a “closed system.” And just as a closed system in nature will have increasing entropy, or disorder, so too will a person or organization experience mental entropy or disorder if they’re cut off from the outside world and new information.
The more we rely on outdated mental models even while the world around us is changing, the more our mental “entropy” goes up.
Think of an army platoon that’s been cut off from communication with the rest of the regiment. The isolated platoon likely has an idea, or mental model, of where the enemy is located and their capabilities, but things have changed since they last talked to command. As they continue to work with their outdated mental model against a changing reality, confusion, disorder, and frustration are the results.”
Brett & Kate McKay, here.

Does the description above resonate with you in relation to OSS? It does for me on a couple of levels.

The first is in relation to data quality. If you create a closed system, one where there aren't continual, ongoing improvement efforts, entropy and disorder go up. If you only do big-bang data fix projects intermittently, you will experience entropy until the next big (and usually expensive) data remediation effort. And if your processes don't have any improvement mechanisms, they tend to fall into a data death spiral.
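The open-system alternative is continual, scheduled measurement rather than episodic audits. A minimal sketch (the scoring rule and field names are invented):

```python
# Minimal continual data-quality check (scoring rule is invented).
REQUIRED = ("id", "location", "status", "last_audited")

def quality_score(records):
    complete = sum(all(r.get(f) for f in REQUIRED) for r in records)
    return complete / max(len(records), 1)

def nightly_job(inventory, threshold=0.95):
    score = quality_score(inventory)
    if score < threshold:          # feed back before entropy compounds
        print(f"quality {score:.0%} below {threshold:.0%}: queue remediation")

nightly_job([{"id": 1, "location": "ex-A", "status": "ok", "last_audited": "2023-01-01"},
             {"id": 2, "location": None, "status": "ok", "last_audited": None}])
```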

The second is in relation to innovation. We can sometimes get stuck in our own mental models of what this industry is about. The technologies that are in enormous flux around us and impacting OSS mean that our mental models should be evolving on a daily / weekly / monthly basis. Unfortunately, sometimes we get stuck in the bubble of our current project and don’t get the feedback into the system that we need. PAOSS was spawned from a situation like that. For 18 months, I was leading a $100m+ project that had little to do with OSS and was increasingly drawing me away from the passion for OSS. PAOSS was my way of popping the bubble.

But I know my mental models still need to shift more than ever before (strong convictions, weakly held). Technologies such as CI/CD, AI/ML, virtualised networking, IoT and many more are changing the world we operate in. This needs vigilance to continually re-orient. That’s fun if you enjoy change, but many of our stakeholders don’t. Change management becomes increasingly important, yet increasingly underestimated.

Some even say that OSS is an old mental model (to which I counter by saying that new operational models look nothing like the old ones, but they’re still operational assistance tools, as per Theseus’s boat).

Theseus’ OSS transformation

Last week we compared OSS to Theseus’ ship, how it was constantly being rebuilt mid-voyage, then posing the question whether it was still actually the same ship.

OSS transformation is increasingly becoming a Theseus’ ship model. In the past, we may’ve had the chance to do big-bang cutovers, where we simply migrated users from one solution to another. This doesn’t seem to be the case anymore, probably because of the complexity of systems in our OSS ecosystems. We’re now being asked to replace components while the ship keeps moving, which isn’t suited to the behemoth, monolithic OSS of the past.

This new model requires a much more modular approach to solution design, creating components that can be shared and consumed by other components, with relationships in data rather than system interconnectedness. In other words, careful planning to avoid the chess-board analogy.

In some ways, we probably have the OTT (over the top) play to thank for this new modular approach. We've now become more catalog-driven, agile, web-scaled and microservices-oriented in our way of thinking, giving us the smaller building blocks to change out pieces mid-voyage. The behemoth OSS never (rarely?) allowed this level of adaptability.

This complexity of transformation is probably part of the reason why behemoth software stacks across all industries are going to become increasingly rare in coming years.

In our case the Theseus paradox is an easy one to answer. If we change out each component of our OSS mid-voyage, it’s still our OSS, but it looks vastly different to the one that we started the voyage with.

Rebuilding the OSS boat during the voyage

“When you evaluate the market, you’re looking to understand where people are today. You empathize. Then you move into storytelling. You tell the story of how you can make their lives better. That’s the “after” state.
All marketing ever does is articulate that shift. You have to have the best product, and you’ve got to be the best at explaining what you do… at articulating your value, so people say, ‘Holy crap, I get it and I want it.’”
Ryan Deiss.

How does “the market” currently feel about our OSS?

See the comments in this post for a perspective from Roger, “It takes too long to get changes made. It costs $xxx,xxx for “you guys” to get out of bed to make a change.” Roger makes these comments from the perspective of an OSS customer and I think they probably reflect the thoughts of many OSS customers globally.

The question for us as OSS developers / integrators is how to make lives better for all the Rogers out there. The key is in the word change. Let’s look at this from the context of Theseus’s paradox, which is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object.

Vendor products have definitely become more modular since I first started working on OSS. However, they probably haven’t become atomic enough to be readily replaced and enhanced whilst in-flight like the Ship of Theseus. Taking on the perspective of a vendor, I probably wouldn’t want my products to be easily replaceable either (I’d want the moat that Warren Buffett talks about)… [although I’d like to think that I’d want to create irreplaceable products because of what they deliver to customers, including total lifecycle benefits, not because of technical / contractual lock-in].

Customers are taking Theseus matters into their own hands through agile / CI-CD methodologies and the use of micro-services. These approaches have merit, but they're not ideal because they mean the customer needs more custom development resourcing on their payroll than they'd prefer. I'm sure this pendulum will swing back again towards more off-the-shelf solutions in coming years.

That’s why I believe the next great OSS will come from open source roots, where modules will evolve based on customer needs and anyone can (continually) make evolutionary, even revolutionary, improvements whilst on their OSS voyage (ie the full life-cycle).

OSS chat bots

“Some pilots are proving quite interesting in enhancing Network Operations, especially when AI / ML is used in the form of natural language processing – NOC operators are talking to a ML system in natural language and asking questions in context to an alarm about deep topics (BGP, 4G eUTRAN, etc). Do more with L1 engineers.
– Use of familiar chat front ends like FB Messenger and Hangouts to have a conversation with an OSSChatBot which learns continuously, using ML, the intents of the questions being asked and responds by bringing information back from OSS using APIs (performance of a cell site, fuel level on a tower site, weather information in a cluster of sites, etc). Ease of use; opens new channels to use OSS.
– Training the AI system on TAC tickets, vendor manuals and historical tickets – the ability to search and index unstructured data and recommend solutions for problems when they occur or are about to occur, informed via OSS. Efficiency increase, lower NOC costs.
– Network planning using ML APIs. Ticket prioritization using ML analytics (which ones to act upon first – certainly not only by severity).”

Steelysan, in response to an earlier post entitled, “A thousand AI flowers to bloom for OSS.”

Steelysan has raised some brilliant use cases for AI, chatbots specifically, in the assurance domain (hat-tip Steelysan!). OSSChatBots are the logical progression of the knowledge bases of the distant past or the rules / filters / state-machines of the current generation tools, yet I hadn’t considered using chat bots in this manner.

I wonder whether there are similar use cases in the fulfillment domain? Perhaps we can look at it from two perspectives, the common case and the exception case.

If looking at the common case, there is still potential for improved efficiency in fulfillment processes, particularly in the management of orders through to activation. We already have work order management and workforce management tools, but both could be enhanced due to the number of human interactions (handoffs, approvals, task assignment, confirmed completions, etc). An OSSChatBot could handle many of the standard tasks currently handled by order managers.

If looking at the exceptional case, where there is a jeopardy or fall-out state reached, the AI behind an OSSChatBot could guide an inexperienced operator through remedial actions.
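Here's a minimal sketch of the intent-routing idea behind an OSSChatBot. A real implementation would use NLP / ML intent models; this one just keyword-matches, and every name in it is illustrative:

```python
# Toy OSSChatBot router: keyword matching stands in for real NLP intent models.
def get_cell_performance(site): return f"{site}: 97.2% availability (stub)"
def get_fuel_level(site):       return f"{site}: fuel at 62% (stub)"

INTENTS = {
    "performance": get_cell_performance,   # would call an OSS API
    "fuel": get_fuel_level,                # would call a site-sensor API
}

def chatbot(question, site):
    for keyword, handler in INTENTS.items():
        if keyword in question.lower():
            return handler(site)
    return "Sorry, I don't know that one yet - escalating to L2."

print(chatbot("What's the performance of this cell?", "SITE-0042"))
```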

These types of tools have the potential to give more power to L1 NOC operators and free up higher levels to do higher-value activities.

Using OSS to handle more sophisticated compliance issues

You’re probably aware that our OSS and/or underlying EMS/NMS have been assisting organisations with their governance, regulatory and compliance (GRC) requirements for some time. Our OSS have provided tools that automatically enforce and validate compliance against configuration policies.

Naturally, network virtualisation and network as a service (NaaS) offerings are going to place an increased dependence on those policies and tools due to the potentially transient services they invoke.
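At its simplest, automated compliance enforcement is a diff between policy and rendered configuration, re-run every time a transient service spins up. A hedged sketch (the policy format and keys are invented):

```python
# Toy compliance check; the policy format is invented for illustration.
POLICY = {"ssh_version": "2", "telnet_enabled": "false", "snmp_community": "custom"}

def check_compliance(device_config):
    violations = {k: device_config.get(k)
                  for k, required in POLICY.items()
                  if device_config.get(k) != required}
    return violations   # empty dict means compliant

print(check_compliance({"ssh_version": "2", "telnet_enabled": "true",
                        "snmp_community": "custom"}))
# {'telnet_enabled': 'true'} -> flag, auto-remediate or raise a GRC ticket
```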

But the more interesting developments are likely to arise out of the regulatory environment space. As more businesses become digital and more personal / confidential data is gathered, governments are going to create more regulations to comply with. This article by Michael Vizard highlights some of the major regulatory changes underway in the USA alone that will impact how we maintain our communications networks.

I’d hate to alarm you…

… but I will because I’m alarmed. 🙂

The advent of network virtualisation, cloud-scaling and API / microservice-centric OSS means that the security attack surface changes significantly compared with old-style solutions. We now have to consider a more diverse application stack, often where parts of the stack are outside our control because they’re As A Service (XaaS) offerings from other suppliers. Even a DevOps implementation approach can introduce vulnerabilities.

With these new approaches, service providers are tending to take on more of the development / integration effort internally. This means that service providers can no longer rely so heavily on their vendors / integrators to ensure that their OSS solutions are hardened. Security definitely takes a much bigger step up in the list of requirements / priorities / challenges on a modern OSS implementation.

This article from Aricent provides a few insights on the security implications of a modern API architecture.

* Please note that I am not endorsing Aricent products here as I have never used them, but they do provide some thought-provoking ideas for those tasked with securing their OSS.

It’s not the tech, it’s the timing

For those of us in the technology consulting game, we think we’re pretty clever at predicting the next big technology trend. But with the proliferation of information over the Internet, it’s not that difficult to see potentially beneficial technologies coming over the horizon. We can all see network virtualisation, Internet-connected sensor networks, artificial intelligence, etc coming.

The challenge is actually picking the right timing for introducing them. Too early and nobody is ready to buy – death by cashflow. Too late and others have already established a customer base – death by competition.

The technology consultants that stand apart from the rest are the ones who not only know that a technology is going to change the world, but also know WHEN it will change the world and HOW to bring it into reality.

We've talked recently about utilising exponential tech like AI in OSS, and the picture below tells some of the story in relation to WHEN:
Exponential curve opportunity cost
Incremental improvements to the status quo initially keep up with trying to bring a new, exponential technology into being. But over time the gap progressively increases, as does the opportunity cost of staying with incremental change. Some aspects of the OSS industry have been guilty of trying to make incremental improvements to the platforms that they've invested so much time into.

What the graph suggests is to embark on investigations / trials / proofs-of-concept with the new tech early, so that in-house tribal knowledge is developed in readiness for when the time is right to introduce it to customers.

Falling off a cliff vs going to the sky

Have you noticed how the curves we’re dealing with in the service provider industry are either falling off a cliff (eg voice revenues) or going to the sky (eg theoretical exponential growth like IoE)?

Here in the OSS industry, we're stuck in the middle of these two trend curves too. Falling revenues mean a reduced appetite for big IT projects. However, the excitement surrounding exponential technologies like SDN/NFV, IoE and AI is providing just enough inspiration for change, including the new projects needed to plug the revenue shortfalls.

The question for sponsors becomes whether they see OSS as being

  • Inextricably linked to the cliff-curve – servicing the legacy products with falling revenue curves and therefore falling project investment; or
  • Able to be the tools that allow their exciting new exponential tools (and hopefully exponential revenues) to be operationalized and therefore attract new project investment

The question for us in the OSS industry is whether we're able to evangelise and adapt our offerings to ensure sponsors see us as point 2 rather than point 1.

Wow

No, not “Wow!” the exclamation but the acronym W-O-W.

Wow stands for Walk Out Working. In other words, if a customer comes into a retail store, they walk out with a working service rather than exasperation. Whilst many customers wouldn’t be aware of it, there are lots of things that have to happen in an OSS / BSS for a customer to be wowed:

  • Order entry
  • Identity checks / approvals
  • Credit checks
  • Inventory availability
  • Rapid provisioning
  • Rapid back-of-house processes to accommodate service activation (eg billing, service order management, SLAs, etc)
  • etc

How long does all of this reasonably take for different product offerings? For mobile services, this is often feasible. For fixed line services, delivery is often measured in weeks, so the service provider would need significant acceleration to accommodate WOW.

With virtualised networking, perhaps even fixed line services can be wowed if physical connectivity already exists and a virtually enabled CPE is installed at the customer premises – a vCPE that can be remotely configured and doesn't require a truck roll or physical commissioning activities out in the field.
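Here's a sketch of what a minutes-scale WOW target implies: each step from the list above gets a time budget, and the total has to fit inside the walk-out window. All budgets are invented:

```python
# Invented time budgets for the WOW steps listed above (in seconds).
WOW_PIPELINE = [
    ("order entry", 60), ("identity check", 30), ("credit check", 20),
    ("inventory availability", 10), ("vCPE remote provisioning", 120),
    ("billing / SLA setup", 60),
]
WALK_OUT_BUDGET = 10 * 60   # ten minutes in-store

total = sum(seconds for _, seconds in WOW_PIPELINE)
print(f"end-to-end: {total}s vs budget {WALK_OUT_BUDGET}s "
      f"-> {'WOW' if total <= WALK_OUT_BUDGET else 'not yet'}")
```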

Wow, the acronym and the exclamation, become more feasible for more service delivery scenarios in a virtualised networking world. It then lays out the challenge to the OSS / BSS to keep up. Could your OSS / BSS, product offerings and related processes meet a WOW target of minutes rather than hours or days?

Software is eating the world…. and eating your job?

A funny thing happened today. I was looking for a reference to Marc Andreessen’s original, “software is eating the world,” quote and came across an article on TechCrunch that expressed many of the same thoughts I was going to write about. However, it doesn’t specifically cover the service provider and OSS industries so I’ll push on, with a few borrowed quotes along the way (in italics, like the following).

“Today, the idea that “every company needs to become a software company” is considered almost a cliché. No matter your industry, you’re expected to be reimagining your business to make sure you’re not the next local taxi company or hotel chain caught completely off guard by your equivalent of Uber or Airbnb. But while the inclination to not be “disrupted” by startups or competitors is useful, it’s also not exactly practical. It is decidedly non-trivial for a company in a non-tech traditional industry to start thinking and acting like a software company.”
[or the traditional tech industry like service providers???]

This is completely true of the dilemma facing service providers the world over. A software-centric network, whether SDN, NFV or others, is nearly inevitable. While the important metrics don't necessarily stack up yet for SDN, software will continue to swarm all over the service provider market. Meanwhile, the challenge is that the existing workforce at these companies, often in the hundreds of thousands of people, doesn't have the skills, or the interest in developing the skills, essential for the software-defined service provider of the (near) future.

Even worse for those people, many of the existing roles will be superseded by the automations we’re building in software. Companies like AT&T have been investing in software as a future mode of operation for nearly a decade and are starting to reap the rewards now. Many of their counterparts have barely started the journey.

This old post provided the following diagram:
Network_Software_Business_Venn
The blue circle is pushing further into the realm of the green to provide a larger yellow intersection, whereby network engineers will no longer be able to just configure devices, but will need to augment their software development skills. For most service providers, there just aren't enough IT resources around to make the shift (although with appropriate re-skilling programs and 1+ million IT / engineering graduates coming out of universities in India and China every year, that is perhaps a moot point).

Summarising, I have two points to note:

  1. Bet on the yellow intersect point – service providers will require the converged skill-sets of IT and networks (include security in this) in larger volumes… but consider whether the global availability of these resources has the potential to keep salaries low over the longer term* (maybe the red intersection point is the one for you to target?)
  2. OSS is software and networking (and business) – however, my next post will consider the cyclical nature of a service provider building their own software vs. buying off-the-shelf products to configure to their needs

Will software eat your job? Will software eat my job? To consider this question, I would ask whether AI [Artificial Intelligence] will develop to the point that it does a better job at consultancy than I can (or any consultant, for that matter). The answer is a resounding and inevitable yes… for some aspects of consultancy it already can. Can a bot consider far more possible variants for a given consulting problem than a person can and give a more optimal answer? Yes. In response, the follow-up question is: what skills will a consultant-bot find more difficult to usurp? Creativity? Relationships? Left-field innovation?

* This is a major generalisation, I know – there are sectors of the IT market where there will be major shortages (like a possible AI skills crunch in the next 2-5 years, or even SDN in that timeframe), sometimes due to the newness of the technology preventing a talent pool from being developed yet, sometimes just due to supply / demand misalignments.