Cannibalisation intrigues me

We’ve all heard the Kodak story. They invented digital cameras but stuck them in a drawer because digital was going to cannibalise their dominant photographic film revenue stream… a decision that eventually led to bankruptcy.

Swisscom invented an equivalent of WhatsApp years before WhatsApp came onto the market. It allowed users (only Swisscom users, not external / global customers BTW) to communicate via a single app – calls, chat, pictures, videos, etc. Swisscom parked it because it was going to cannibalise their voice and SMS revenue streams. That product, iO, is now discontinued. Meanwhile, WhatsApp achieved an exit of nearly $22B by selling to Facebook.

Some network operators are baulking at offering SD-WAN as it may cannibalise their MPLS service offerings. It will be interesting to see how this story plays out.

What also intrigues me is where cannibalisation is going to come for the OSS industry. What is the format of network operationalisation that’s simpler, more desirable to customers, probably cheaper, but completely destroys current revenue models? Do any of the vendors already have such capability but have parked it in a drawer because of revenue destruction?

History seems to have proven that it’s better to cannibalise your own revenues than allow your competitors to do so.

The biggest OSS loser

You are so much more likely to put effort into something when you know whether it will pay off and what the gains will be. Not knowing how things will turn out undermines your motivation and makes you delay taking action.”
Dr Theo Tsaousides
in his book, Brainblocks.

Have you seen the reality TV show, “The Biggest Loser?” I rarely watch TV, but have noticed that it’s been a runaway hit in the ratings here in Australia (and overseas apparently). Why has it been so successful and what does it have to do with OSS?

Well, according to Dr Tsaousides, the success of the show comes down to the obvious body-shape / fitness transformations each of the contestants makes over each season of the show. But more specifically, “You need to watch only one season from beginning to end and you will start craving to be a contestant on the show, regardless of your current weight… Seeing the people’s amazing transformation over a few months is a much more convincing way to start working out and eating well than being told by your doctor that you need to lose weight and about the cardiovascular advantages of exercise. Forecasting a positive outcome, especially when dealing with something new and unfamiliar, leads to action.”

Can you see how this might be a useful technique when planning an OSS transformation?

Change management is always a challenging task on any large OSS transformation. It’s always best to have the entire OSS user population involved in the change, but that’s not always feasible for large groups of users.

It’s one of the reasons I’m always a big advocate for getting a baseline, sandpit version of off-the-shelf OSS stood up and available for the user population to start interacting with. This is particularly helpful if the sandpit is perceptibly better than the current one.

To paraphrase, “Forecasting a positive outcome (via the OSS sandpit), especially when dealing with something new and unfamiliar (the future state after OSS transformation), leads to action (more excitement, engagement and less pushback from the user population during the course of the transformation).”

Do you think the biggest loser technique could work on your next OSS transformation?

Is your data getting too heavy for your OSS to lift?

Data mass is beginning to exhibit gravitational properties – it’s getting heavy – and eventually it will be too big to move.”
Guy Lupo
in this article on TM Forum’s Inform that also includes contributions from George Glass and Dawn Bushaus.

Really interesting concept, and article, linked above.

The touchpoint explosion is helping to make our data sets ever bigger… and heavier.

In my earlier days in OSS, I was tasked with leading the migration of large sets of data into relational databases for use by OSS tools. I was lucky enough to spend years working on a full-scope OSS (ie its central database housed data for inventory management, alarm management, performance management, service order management, provisioning, etc).

Having all those data sets in one database made it incredibly powerful as an insight generation tool. With a few SQL joins, you could correlate almost any data sets imaginable. But it was also a double-edged sword. Firstly, ensuring that all of the sets would have linking keys (and with high data quality / reliability) was a data migrator’s nightmare. Secondly, all those joins being done by the OSS made it computationally heavy. It wasn’t uncommon for a device list query to take the OSS 10 minutes to provide a response in the PROD environment.

There’s one concept that makes GIS tools inherently more capable of lifting heavier data sets than OSS – they generally load data in layers (that can be turned on and off in the visual pane) and, unlike OSS, don’t attempt to stitch the different sets together. The correlation between data sets is achieved through geographical proximity scans, either algorithmically or just by the human eye of the operator.

If we now consider real-time data (eg alarms/events, performance counters, etc), we can take a leaf out of Einstein’s book and correlate by space and time (ie by geographical and/or time-series proximity between otherwise unrelated data sets). Just wondering – How many OSS tools have you seen that use these proximity techniques? Very few in my experience.
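
As a minimal sketch of what space-and-time correlation could look like (the field names, thresholds and data below are all hypothetical, purely for illustration – not a description of any particular OSS), the snippet pairs two otherwise unlinked data sets, alarms and customer outage reports, purely by geographical and time-series proximity rather than by linking keys:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class Event:
    source: str          # eg "alarm" or "outage_report"
    timestamp: datetime
    lat: float
    lon: float
    detail: str

def km_between(a: Event, b: Event) -> float:
    """Great-circle (haversine) distance between two events, in km."""
    lat1, lon1, lat2, lon2 = map(radians, [a.lat, a.lon, b.lat, b.lon])
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def correlate_by_proximity(alarms, reports, max_km=2.0, max_gap=timedelta(minutes=15)):
    """Pair events that are close in both space and time - no linking keys or joins required."""
    return [(a, r) for a in alarms for r in reports
            if abs(a.timestamp - r.timestamp) <= max_gap and km_between(a, r) <= max_km]

alarms = [Event("alarm", datetime(2018, 8, 6, 10, 2), -33.868, 151.209, "LOS on port 3")]
reports = [Event("outage_report", datetime(2018, 8, 6, 10, 9), -33.870, 151.207, "customer call")]
print(correlate_by_proximity(alarms, reports))   # one space/time match, despite no common key
```

A brute-force nested comparison like this obviously wouldn’t scale to millions of events, but spatial and time-window indexing gives the same result without the computational weight of fully stitched joins.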

BTW. I’m the first to acknowledge that a stitched data set (ie via linking keys such as device ID between data sets) is definitely going to be richer than uncorrelated data sets. Nonetheless, this might be a useful technique if your data is getting too heavy for your OSS to lift (eg simple queries are causing minutes of downtime / delay for operators).

Intent to simplify our OSS

The left-hand panel of the triptych below shows the current state of interactions with most OSS. There are hundreds of variants inbound via external sources (ie multi-channel) and even internal sources (eg different service types). Similarly, there are dozens of networks (and downstream systems), each with different interface models. Each needs different formatting, so integration costs escalate.
Intent model OSS

The intent model of network provisioning standardises the network interface, drastically simplifying the task of the OSS and the number of variants it has to handle. This becomes particularly relevant in a world of NFVs, where any vendor’s device type (a router, say) can be handled via a single intent command rather than via separate interfaces to each vendor’s device / EMS northbound interface. The unique aspects of each vendor’s implementation are abstracted from the OSS.
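
As a rough sketch of that abstraction (the vendor names, CLI syntax and intent fields below are entirely hypothetical, not any real northbound interface), the OSS expresses a single vendor-neutral intent and per-vendor adapters hide the implementation detail:

```python
from abc import ABC, abstractmethod

class VendorAdapter(ABC):
    """Translates a vendor-neutral intent into vendor-specific configuration."""
    @abstractmethod
    def apply(self, intent: dict) -> str: ...

class VendorAAdapter(VendorAdapter):
    def apply(self, intent: dict) -> str:
        return f"set interfaces {intent['port']} unit 0 family inet address {intent['ip']}"

class VendorBAdapter(VendorAdapter):
    def apply(self, intent: dict) -> str:
        return f"interface {intent['port']}\n ip address {intent['ip']}"

def provision(intent: dict, adapter: VendorAdapter) -> str:
    # The OSS only ever states *what* it wants; the adapter decides *how*.
    return adapter.apply(intent)

intent = {"action": "create_l3_interface", "port": "ge-0/0/1", "ip": "10.0.0.1/30"}
print(provision(intent, VendorAAdapter()))
print(provision(intent, VendorBAdapter()))
```

Adding a new vendor then means adding a new adapter, not touching the OSS’s own logic.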

The next step would be in standardising the interface / data model upstream of the OSS. That’s a more challenging task!!

Introducing our OSS expert registry, for making connections in the OSS industry

Here at Passionate About OSS, we’re passionate about making OSS happen. We have an extensive network of contacts. We just naturally tend to find ourselves making connections between the many experts in our network. Connecting those who are hoping to find an OSS expert with an OSS expert hoping to be found.

We’ve just introduced a new free-of-charge OSS expert registry to help people find OSS experts when they need to. This registry is intended to cover the buy-side and sell-side of the OSS market. Click on the link above to check it out.

Facebook’s algorithmic feed for OSS

This is the logic that led Facebook inexorably to the ‘algorithmic feed’, which is really just tech jargon for saying that instead of this random (i.e. ‘time-based’) sample of what’s been posted, the platform tries to work out which people you would most like to see things from, and what kinds of things you would most like to see. It ought to be able to work out who your close friends are, and what kinds of things you normally click on, surely? The logic seems (or at any rate seemed) unavoidable. So, instead of a purely random sample, you get a sample based on what you might actually want to see. Unavoidable as it seems, though, this approach has two problems. First, getting that sample ‘right’ is very hard, and beset by all sorts of conceptual challenges. But second, even if it’s a successful sample, it’s still a sample… Facebook has to make subjective judgements about what it seems that people want, and about what metrics seem to capture that, and none of this is static or even in principle perfectible. Facebook surfs user behaviour.”
Ben Evans
here.

Most of the OSS I’ve seen tend to be akin to Facebook’s old ‘chronological feed’ (where users need to sift through thousands of posts to find what’s most interesting to them).

The typical OSS GUI has thousands of functions (usually displayed on a screen all at once – via charts, menus, buttons, pull-downs, etc). But of all of those available functions, any given user probably only interacts with a handful.
Current-style OSS interface

Most OSS give their users the opportunity to customise their menus, colour schemes, even filters. For some roles such as network ops, designers, order entry operators, there are activity lists, often with sophisticated prioritisation and skills-based routing, which starts to become a little more like the ‘algorithmic feed.’

However, unlike the random nature of information hitting the Facebook feed, there is a more explicit set of things that an OSS user is tasked to achieve. It is a little more directed, like a Google search.

That’s why I feel the future OSS GUI will be more like a simple search bar (like Google) that will provide a direction of intent as well as some recent / regular activity icons. Far less clutter than the typical OSS. The graphs and activity lists that we know and love would still be available to users, but the way of interacting with the OSS to find the most important stuff quickly needs to get more intuitive. In future it may even get predictive in knowing what information will be of interest to you.
OSS interface of the future
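
To make the ‘algorithmic feed’ idea a little more concrete, here’s a minimal sketch (hypothetical function names and weighting, not any vendor’s actual GUI logic) of how an OSS could rank which functions to surface for a given operator, favouring frequent and recent use:

```python
from collections import Counter
from datetime import datetime, timedelta

def rank_functions(usage_log, now=None, half_life_days=14, top_n=5):
    """Rank OSS functions for one user: frequent, recent use scores highest.

    usage_log: list of (function_name, timestamp) tuples for that user.
    """
    now = now or datetime.utcnow()
    scores = Counter()
    for func, ts in usage_log:
        age_days = (now - ts).total_seconds() / 86400
        scores[func] += 0.5 ** (age_days / half_life_days)   # exponential recency decay
    return [func for func, _ in scores.most_common(top_n)]

log = [
    ("alarm_list", datetime.utcnow() - timedelta(hours=1)),
    ("alarm_list", datetime.utcnow() - timedelta(days=1)),
    ("device_search", datetime.utcnow() - timedelta(days=30)),
]
print(rank_functions(log))   # ['alarm_list', 'device_search']
```

The ranked handful of functions could then sit alongside the search bar, with everything else a query away rather than cluttering the screen.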

Are we better off waiting for OSS technology to catch up?

Yesterday’s post discussed Dave Duggal’s concept of 20th century OSS being all about centralizing command and control to gain efficiency through vertical integration and mass standardization, whilst 21st century OSS are about decentralization – gaining efficiency through horizontal integration of partner ecosystems and mass customization.

We talked about transitioning from a telco market driven by economies of scale (the 20th century benchmark) to a “market of one” (21st century target state), where fully personalised experience exists and is seamless across all channels.

Dave wrote the original article back in 2016. Two years on and some of the technology in our OSS is just starting to catch up to Dave’s concepts. To be completely honest, we still haven’t architected or built the decentralised OSS that truly offer wide-scale partner ecosystems or customer personalisation, particularly at a scale that is cost-viable.

So I’m going to ask a really pointed question. If our OSS are still better suited to 20th century markets and can’t handle the incalculable number of variants that come with a fully personalised customer experience, are we better off waiting for the technology to catch up before trying to build business models that cater to the “market of one?”

Why? Well, as Gadi Solotorevsky, Chief Technology Officer of cVidya, says in this post on TM Forum’s Inform, “…digital customers aren’t known for their patience and/or tolerance for errors (I should know – I’m one of them). And any serious glitch, e.g. an error in charging, will not only push them towards a competitor – did I mention how easy it is to change digital service providers? It will probably also find its way to social media, causing a ripple effect. The same goes for the partners who are enabling operators to offer cool digital services in the first place.”

Better to have a business model that is simpler and repeatable / reliable at massive scale than attempt a 21st century model where it’s the fall-outs that are scaling.

I’d love to hear your thoughts.

BTW. Kudos to those organisations investing in the bleeding edge tech that are attempting to solve what Dave refers to as “the challenge of our times.” I’m certainly not going to criticise their bold efforts. Just highlighting the point that many operators have 21st century ambitions of their OSS whilst only having 20th century capabilities currently.

Extending the OSS beyond a customer’s locus of control

While the 20th century was all about centralizing command and control to gain efficiency through vertical integration and mass standardization, 21st century automation is about decentralization – gaining efficiency through horizontal integration of partner ecosystems and mass customization, as in the context-aware cloud where personalized experience across channels is dynamically orchestrated.
The operational challenge of our time is to coordinate these moving parts into coherent and manageable value chains. Instead of building yet another siloed and brittle application stack, the age of distributed computing requires that we re-think business architecture to avoid becoming hopelessly entangled in a “big ball of CRUD”.”
Dave Duggal
here on TM Forum’s Inform back in May 2016.

We’ve quickly transitioned from a telco services market driven by economies of scale (Dave’s 20th century comparison) to a “market of one” (21st century), where the market wants a personalised experience that seamlessly crosses all channels.

By and large, the OSS world is stuck between the two centuries. Our tools are largely suited to the 20th century model (in some cases, today’s OSS originated in the 20th century after all), but we know we need to get to personalisation at scale and have tried to retrofit them. We haven’t quite made the jump to the model Dave describes yet, although there are positive signs.

It’s interesting. Telcos have the partner ecosystems, but the challenge is that the entire ecosystem still tends to be centrally controlled by the telco. This is the so-called best-of-breed model.

In the truly distributed model Dave talks about, the telcos would get the long tail of innovation / opportunity by extending their value chain beyond their own OSS stack. They could build an ecosystem that includes partners outside their locus of control. Outside their CAPEX budget too, which is the big attraction. The telcos get to own their customers, build products that are attractive to those customers, gain revenues from those products / customers, but not incur the big capital investment of building the entire OSS stack. Their partners build (and share profits from) external components.

It sounds attractive right? As-a-service models are proliferating and some are steadily gaining take-up, but why is it still not happening much yet, relatively speaking? I believe it comes down to control.

Put simply, the telcos don’t yet have the right business architectures to coordinate all the moving parts. From my customer observation at least, there are too many fall-outs as customer journeys hand off between components within the internally controlled partner ecosystem. This is especially true when we talk omni-channel. A fully personalised solution leaves too many integration / data variants to provide complete test coverage. For example, just at the highest level of an architecture, I’ve yet to see a solution that tracks end-to-end customer journeys across all components of the OSS/BSS as well as channels such as online, IVR, apps, etc.
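
One building block that would help (a sketch only, with hypothetical component names – not a claim about any particular architecture) is a single journey identifier minted at the first touchpoint and carried by every downstream component, so journeys can be reassembled end-to-end and fall-outs show up as journeys that stop short:

```python
import uuid

def start_journey(channel: str) -> dict:
    """Create a journey context at the first customer touchpoint (online, IVR, app, etc)."""
    return {"journey_id": str(uuid.uuid4()), "channel": channel, "steps": []}

def record_step(journey: dict, component: str, event: str) -> None:
    """Each OSS / BSS component appends its step against the same journey_id."""
    journey["steps"].append({"component": component, "event": event})

journey = start_journey("online")
record_step(journey, "order_capture", "order submitted")
record_step(journey, "provisioning_oss", "service activated")
record_step(journey, "billing_bss", "first invoice generated")
print(journey["journey_id"], len(journey["steps"]), "steps")
```

In practice the identifier would travel in message headers or API payloads, but the principle is the same: one key that survives every hand-off, inside and outside the telco’s locus of control.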

Dave rightly points out that this is the challenge of our times. If we can coherently and confidently manage moving parts across the entire value chain, we have more chance of extending the partner ecosystem beyond the telco’s locus of control.

OSS feature parity. A functionality arms race

OSS Vendor 1. “I have 1 million features.” (Dr Evil puts finger in mouth)
OSS Vendor 2. “Yeah, well I have 1,000,001 features in my OSS.”

This is the arms-race that we see in OSS, just like almost any other tech product. I imagine that vendors get into this arms-race because they wish to differentiate. Better to differentiate on functionality than price. If there’s feature parity, then the only differentiator is price. We all know that doesn’t end well!

But I often ask myself a few related questions:

  • Of those million features, how many are actually used regularly?
  • As a vendor, do you have logging that actually allows you to know which features are being used? (see the sketch after this list)
  • Taking the Whale Curve perspective, even if they are being used, how many of those features are actually contributing to the objectives of the vendor?
    • Do they clearly contribute towards making sales?
    • Do customers delight in using them?
    • Would customers be irate if you removed them?
    • etc
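
On that second question, here’s a very rough sketch of the kind of instrumentation that would answer it (hypothetical feature names, not any vendor’s actual telemetry) – a simple decorator that counts each time a GUI / API feature is actually invoked:

```python
from collections import Counter
from functools import wraps

feature_usage = Counter()

def track_feature(name):
    """Decorator that counts every invocation of a feature."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            feature_usage[name] += 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

@track_feature("alarm_list.export_csv")
def export_alarm_list_csv(alarms):
    return "\n".join(a["id"] for a in alarms)

export_alarm_list_csv([{"id": "A1"}, {"id": "A2"}])
print(dict(feature_usage))   # periodically ship these counts back to product management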

Earlier this week, I spoke about a friend who created an alarm management tool by himself over a weekend. It didn’t have a million features, but it did have all of what I’d consider to be the most important ones. It did look like a lot of other alarm managers that are now on the market. The GUI based on alarm lists still pervades.

If they all look alike, and all have feature parity, how do you differentiate? If you try to add more features, is it safe to assume that those features will deliver diminishing returns?

But is an alarm list and the flicking of tickets the best way to manage network health?

What if, instead of seeking incremental improvement, someone went back to the most important requirements and considered whether the current approach is meeting those customer needs? I have a strong suspicion that customer feedback will indicate that there are definitely flaws to overcome, especially on high event volume networks.

Clever use of large data volumes provides a level of pre-cognition and automation that wasn’t available when simple alarm lists were first invented. This in turn potentially changes the way that operators can engage with network monitoring and management.
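
As one small illustration of what “clever use of large data volumes” might mean in practice (a sketch under assumed field names, not a description of any product), raw alarms can be collapsed into candidate incidents by resource and time window before an operator ever sees a list:

```python
from collections import defaultdict
from datetime import timedelta

def group_into_incidents(alarms, window=timedelta(minutes=5)):
    """Collapse an alarm stream into candidate incidents.

    alarms: list of dicts with 'resource' (eg a site or device id) and 'timestamp'.
    Alarms on the same resource within `window` of the previous alarm share an incident.
    """
    incidents = defaultdict(list)           # resource -> list of incidents (each a list of alarms)
    for alarm in sorted(alarms, key=lambda a: a["timestamp"]):
        open_incidents = incidents[alarm["resource"]]
        if open_incidents and alarm["timestamp"] - open_incidents[-1][-1]["timestamp"] <= window:
            open_incidents[-1].append(alarm)   # extend the most recent incident
        else:
            open_incidents.append([alarm])     # start a new incident on this resource
    return incidents
```

The operator then works a handful of incidents rather than thousands of events, and the grouping function is the natural place to plug in the more predictive, machine-learnt smarts later.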

What if someone could identify a whole new user interface / approach that overcame the current flaws and exceeded the key requirements? Would that be more of a differentiator than adding a 1,000,002nd feature?

If you’re looking for a comparison, there were plenty of MP3 players on the market with a heap of features, many more than the iPod. We all know how that one played out!

Pitching an OSS? Don’t call it OSS.

If you asked me how to sell cybersecurity, I wouldn’t call it cybersecurity.” The raw truth of the statement hit me like a lightning bolt between the eyes. Cybersecurity might loosely describe what we do, and we tell people it’s what we’re selling, but it’s not what people buy.
Safety. Assurance. Peace of mind. Confidence. These are the kinds of things that people buy, concepts which ordinary people can understand and relate to because they are feelings which they have experienced themselves. Cybersecurity is not a next gen firewall, or multi-layered endpoint protection with machine learning and threat sandbox technology. Cybersecurity is not risk management or ISO27001 policies. Cybersecurity is being able to use the Internet in any way I can imagine without having to worry I might lose my family photos, get robbed, or get in trouble with my boss. If you could (honestly) sell me “worry free Internet”, I’d buy it in a heartbeat, and so would everyone you know.”
Corch X
, here.

Sound familiar?
If you asked me how to sell OSS, I wouldn’t call it OSS. Doh! Now you enlighten me… after I’ve already chosen the domain name, PassionateAboutOSS.com. After I’ve already written over 2,000 posts on topics like orchestration, microservices, cloud-native, DevOps, and every other technical buzzword. Time to start again from scratch.

One thing in my favour is that you, the audience I’m interacting with, also speak in the same jargon. These are the terms we use to communicate with each other. To get things started. To get things done. To get things delivered.

That’s all fine if we’re only interacting with like-minded OSS experts. However, of the thousands of people who interact with our OSS / BSS, only a small percentage are OSS experts. A majority of people use the tools rather than designing, building or commissioning them.

The people who use the tools have a huge range of job roles and reasons for needing to use our OSS / BSS. Just like with cybersecurity, the core reasons could be Safety. Assurance. Peace of mind. Confidence. But they might also include Speed. Efficiency. Reliability. Repeatability. Simplicity. Monetisation. Insight. And more.

The challenge we have is that so much of the benefit that our OSS and BSS deliver is intangible. We might talk about orchestration delivering speed, simplicity, reliability, etc. But how do we establish a more tangible link?

How do we achieve the equivalent of what the “Intel Inside” marketing ploy delivered, which made people associate an otherwise obscure integrated circuit with a premium feature to consider when they bought their next computing device? How do we ensure that people know that our OSS / BSS is the master of puppets that makes our networks dance? It’s our OSS / BSS that are pulling all the strings of operationalisation, connecting customers with networks.

Would an EoL be beneficial for OSS?

In the world of networking, it’s common for devices to go EOL (end-of-life). Capital spend and depreciation models are based around refresh cycles of around 5-7 years. Vendors reinforce this refresh cycle by designing obsolescence into maintenance, support and part supplies. Customers tend to simply submit to the risk of having no vendor support by buying the next generation replacements.

But how often do you hear of an OSS going EOL? Not often right? They tend to get written off only when the cost of upkeep outweighs new revenues.

I know, I can hear you saying that software is different from hardware and of course I agree with you. I’d partially counter by claiming that software architectures and development platforms also have a discernible useful life, just like physical network devices. If you doubt that, I’m sure you’ve seen OSS tools with origins in the 1990s that are still being developed upon. I tend to believe that product usefulness becomes asymptotic for its vendors. With the speed of change and the proliferation of new platforms, useful lives are getting ever-shorter.

Would a pre-ordained product replacement life-cycle be beneficial for the OSS industry? It has some merits.

For a start, planned obsolescence enforces designs with interchangeability, in line with the small-grid OSS described yesterday. It promotes short-term enhancements towards long-term visions. It becomes easier for customers to write off their investment and inject new capital into the vendor market. It penalises the Frankenstein integrations that tend to become increasingly burdensome (to vendor and customer) into the future. It enforces those mythical beasts of telco software – subtraction projects. It promotes innovation to avoid the asymptotic benefit deterioration curve shown below:
Asymptotic OSS feature development

As the asymptote is being reached, a new jumping-off point commences with the new product.

But it’s a difficult status-quo to break. Vendors have invested millions of developer hours into their products. Taking a product EoL is effectively throwing that invested effort away. For carriers, it means the risk and cost of breaking integrations / processes and replacing them with new ones.

I’d love to hear your thoughts on whether an EOL model might be relevant / useful for your OSS.

The future of work and its impact on OSS

Many years ago, I worked on a seriously big OSS transformation for one of the region’s biggest telcos. Everything was big on the project: the investment, the resources, the documentation. Everything except the outcomes. There was so much inefficiency that I often spoke about making one day of progress for every ten on site. Meetings, bureaucracy, impossible approval cycles, customer re-organisations, over-analysis, etc all added up to stagnation.

This contrasted so much with some of the amazing small teams I’ve worked alongside. Teams that worked cohesively, cleverly and just got stuff done with almost no resources. It’s one of the reasons I feel that the future of work, even for the very large organisations, will be via small teams. Outsourced to small, efficient teams / organisations. The gig economy, and the proliferation of tools that support it, make it an obvious approach to take, especially for very large organisations to leverage. Proof of work technologies, such as those building upon the discovery of blockchain, will provide further impetus to use smaller teams of experts.

Experts like a friend and colleague of mine who once built an alarm management tool in a weekend, by himself. It also happened to be more sophisticated than his employer’s existing tool that had taken years of combined developer effort by a larger team.

Maybe I’ll be proven wrong, but I see the transition to this model of work as being inevitable. The question I have is how to make our OSS more accommodating of this work model. Behemoth OSS stacks won’t. Highly modular OSS made up of many smaller components probably will, as long as they don’t succumb to the OSS chessboard analogy, where the pulleys and strings make it impossible for small, interchangeable teams to decipher and manage.

A small-grid OSS model is the one I’d be backing in.

OSS – like a duck on a pond

Let’s start with a basic question. “What does an OSS need to do?”

The basic answer is, “make operations easier.”

The real answer(s) is so much more nuanced than that of course. The term easier can also encapsulate other words such as faster, more accurate, more repeatable, cheaper, etc.

Designing, building, operating and maintaining a sizable network is extremely challenging, despite network operators around the world, and the vendors that supply to them, employing some of the best and brightest. So we design OSS and related tools / processes to make operations easier.

Yet I sometimes wonder whether we achieve that aim – to make operations easier. Seems to me that we tend to focus more on just replicating functions at a higher layer in the management stack. That is, moving the function to the OSS rather than EMS/NMS, without really making it much easier operationally.

Let’s start at the user interface (UI). How often are they intuitive enough for an experienced network operator to start doing tasks with negligible OSS expert guidance?
Let’s look at deployments. How often are the projects low on effort, risk, cost and complexity?
Let’s look at flexibility (ie in-flight modifications or transformations). How often do we actually deliver flexibility to our customers through our OSS? To ask the same as above, how often are our changes low on effort, risk, cost and complexity?

As a small step towards providing an answer, I wonder whether it’s a case of making the hard things look easy and the easy things look hard.

We want to make the really hard operational things much easier to do within an OSS because that’s the primary purpose of an OSS. That’s the example of a duck on a pond. The OSS is gliding along effortlessly across the top of the water, but under the water it is paddling furiously.

Conversely, we want to make the really easy* operational things look hard to do within an OSS so that we’re not constantly being asked to build functionality / complexity into our OSS that doesn’t warrant being there. It diffuses the intent of the OSS. Just because we can, doesn’t mean we should.

OSS implementation, but without the dependencies

One of the challenges with getting a new OSS or OSS transformation project completed can be the large number of dependencies that can cause momentum gridlock. If you’re looking to deliver business value in one big-bang, which is a really common approach to delivering OSS projects, then you end up juggling many different activities and hoping they all align at the right times.

I’ve noticed that the vendors tend to design their delivery schedules around big-bang / waterfall approaches like below.
Big-bang OSS delivery

Many vendors will even assure you that this is their standard practice and are hesitant to consider changes to their “best practice” delivery scheduling. Having been involved in many of these types of deliveries in the past, on both vendor and customer side, I can assure you that they rarely work well.

Generally speaking, the gridlocks occur on the customer-side, but the result is detrimental to customer and vendor alike. Hold-ups mean inefficient allocation of resources as well as the resultant cost / time over-runs.

The alternative is to apply a bit more lateral thinking to how you break down the work into smaller chunks. The aims of this lateral-thinking work breakdown are two-fold:

  1. Breaking up the work in a way that best avoids dependencies; whilst also
  2. Delivering some sort of value to the customer

There are many dependencies on a typical OSS project – hardware, procurement, IT infrastructure, network connectivity, security, approvals, integrations, licensing, resource availability, data quality and many more. However, each different customer, their org chart and project has its own unique mix of dependencies, so I don’t subscribe to the “best practice” argument to project delivery.

The diagram below shows an example of an alternate breakdown. The business value chunks that are delivered might be tiny in some cases, but at least momentum can be demonstrated. Rather than having a mass of entwined dependencies, you can isolate and minimise dependencies for that sliver of business value. When the dependency (or dependencies) has cleared, you can jump straight onto the next activity from an existing build-state rather than having to align all the activities to land with perfect precision.
Incremental OSS work breakdown

OSS that are profitable, difficult, or important?

Apple became the first company to be worth a trillion dollars. They did that by spending five years single-mindedly focusing on doing profitable work. They’ve consistently pushed themselves toward high margin luxury goods and avoided just about everything else. Belying their first two decades, when they focused on breakthrough work that was difficult and perhaps important, nothing they’ve done recently has been either…
Profitable, difficult, or important — each is an option. A choice we get to make every day. ‘None of the above’ is also available, but I’m confident we can seek to do better than that.”
Seth Godin
in this post.

I encourage you to view the entire post at the link above. It gives definitions (and examples) of organisations that focus on profitable, difficult or important activities.

In OSS, the organisations that focus on the profitable are the ones investing heavily on glossy sales / marketing and only making incremental improvements to products that have been around for years.

Then there are others that are doing the difficult and innovative and complex work (ie the sexy work for all of us tech-heads). This recent article about ONAP talks about the fantastic tech-driven ambitions of that program, but then distills it down to the business objectives.

That leaves us with the important – the business needs / objectives – and this is where the customers come in. Speak with any OSS customer (or customer’s customer for that matter) and you’ll tend to find frustrations with their OSS. Frustration with complexity, time to deliver / modify, cost to deliver / modify, risks, functionality constraints, etc.

This is a simplification of course, but do you notice that as an industry, our keen focus on the profitable and difficult might just be holding us back from doing the important?

OSS designed as a bundle, or bundled after?

Over the years I’m sure you’ve seen many different OSS demonstrations. You’ve probably also seen presentations by vendors / integrators that have shown multiple different products from their suite.

How integrated have they appeared to you?

  1. Have they seemed tightly integrated, as if carved from a single piece of stone?
  2. Or have they seemed loosely integrated, a series of obviously different stones joined together with some mortar?
  3. Or perhaps even barely associated, a series of completely different objects (possibly through product acquisition) branded under a common marketing name?

There are different pros and cons with each approach. Tight integration possibly suits a greenfields OSS. Looser integration perhaps better suits carve-off for best-of-breed customer architecture models.

I don’t know about you, but I always prefer to be given the impression that an attempt has been made to ensure consistency in the bundling. Consistency of user-interface, workflow, data modelling/presentation, reports, etc. With modern presentation layers, database technologies and the availability of UX / CX expertise, this should be less of a hurdle than it has been in the past.

If ONAP is the answer, what are the questions?

ONAP provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.
By unifying member resources, ONAP is accelerating the development of a vibrant ecosystem around a globally shared architecture and implementation for network automation–with an open standards focus–faster than any one product could on its own.”
Part of the ONAP charter from onap.org.

The ONAP project is gaining attention in service provider circles. The Steering Committee of the ONAP project hints at the types of organisations investing in the project. The statement above summarises the mission of this important project. You can bet that the mission has been carefully crafted. As such, one can assume that it represents what these important stakeholders jointly agree to be the future needs of their OSS.

I find it interesting that there are quite a few technical terms (eg policy-driven orchestration) in the mission statement, terms that tend to pre-empt the solution. However, I don’t feel that pre-emptive technical solutions are the real mission, so I’m going to try to reverse-engineer the statement into business needs. Hopefully the business needs (the “why? why? why?” breakdown below) articulate a set of questions / needs that all OSS can work to, as opposed to replicating the technical approach that underpins ONAP.

Phrase: real-time
Interpretation: The ability to make instantaneous decisions
Why 1: To adapt to changing conditions
Why 2: To take advantage of fleeting opportunities or resolve threats
Why 3: To optimise key business metrics such as financials
Why 4: As CSPs are under increasing pressure from shareholders to deliver on key metrics

Phrase: policy-driven orchestration
Interpretation: To use policies to increase the repeatability of key operational processes
Why 1: Repeatability provides the opportunity to improve efficiency, quality and performance
Why 2: Allows an operator to service more customers at less expense
Why 3: Improves corporate profitability and customer perceptions
Why 4: As CSPs are under increasing pressure from shareholders to deliver on key metrics

Phrase: policy-driven automation
Interpretation: To use policies to increase the amount of automation that can be applied to key operational processes (see the sketch after this breakdown)
Why 1: Automated processes provide the opportunity to improve efficiency, quality and performance
Why 2: Allows an operator to service more customers at less expense
Why 3: Improves corporate profitability and customer perceptions

Phrase: physical and virtual network functions
Interpretation: Our networks will continue to consist of physical devices, but we will increasingly introduce virtualised functionality
Why 1: Physical devices will continue to exist into the foreseeable future, but virtualisation represents an exciting approach into the future
Why 2: Virtual entities are easier to activate and manage (assuming sufficient capacity exists)
Why 3: Physical equipment supply, build, deploy and test cycles are much longer and labour intensive
Why 4: Virtual assets are more flexible, faster and cheaper to commission
Why 5: Customer services can be turned up faster and cheaper

Phrase: software, network, IT and cloud providers and developers
Interpretation: With this increase in virtualisation, we find an increasingly large and diverse array of suppliers contributing to our value-chain. These suppliers contribute via software, network equipment, IT functions and cloud resources
Why 1: CSPs can access innovation and efficiency occurring outside their own organisation
Why 2: CSPs can leverage the opportunities those innovations provide
Why 3: CSPs can deliver more attractive offers to customers
Why 4: Key metrics such as profitability and customer satisfaction are enhanced

Phrase: rapidly automate new services
Interpretation: We want the flexibility to introduce new products and services far faster than we do today
Why 1: CSPs can deliver more attractive offers to customers faster than competitors
Why 2: Key metrics such as market share, profitability and customer satisfaction are enhanced, as is cashflow

Phrase: support complete lifecycle management
Interpretation: The components that make up our value-chain are changing and evolving so quickly that we need to cope with these changes without impacting customers across any of their interactions with their service
Why 1: Customer satisfaction is a key metric and a customer’s experience spans the entire lifecycle of their service
Why 2: CSPs don’t want customers to churn to competitors
Why 3: Key metrics such as market share, profitability and customer satisfaction are enhanced

Phrase: unifying member resources
Interpretation: To reduce the amount of duplicated and under-synchronised development currently being done by the member bodies of ONAP
Why 1: Collaboration and sharing reduces the effort each member body must dedicate to their OSS
Why 2: A reduced resource pool is required
Why 3: Costs can be reduced whilst still achieving a required level of outcome from OSS

Phrase: vibrant ecosystem
Interpretation: To increase the level of supplier interchangeability
Why 1: To reduce dependence on any supplier/s
Why 2: To improve competition between suppliers
Why 3: Lower prices, greater choice and greater innovation tend to flourish in competitive environments
Why 4: CSPs, as customers of the suppliers, benefit

Phrase: globally shared architecture
Interpretation: To make networks, services and support systems easier to interconnect across the global communications network
Why 1: Collaboration on common standards reduces the integration effort between each member at points of interconnect
Why 2: A reduced resource pool is required
Why 3: Costs can be reduced whilst still achieving interconnection benefits
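
To make “policy-driven automation” slightly more tangible (the rules and handles below are purely illustrative – not ONAP’s actual policy framework), a policy is essentially a declarative condition / action pair that the platform evaluates without human intervention:

```python
policies = [
    {   # hypothetical closed-loop rules
        "name": "scale_out_vnf_on_high_cpu",
        "condition": lambda metrics: metrics["cpu_utilisation"] > 0.85,
        "action": lambda ctx: ctx["orchestrator"].scale_out(ctx["vnf_id"]),
    },
    {
        "name": "raise_ticket_on_packet_loss",
        "condition": lambda metrics: metrics["packet_loss"] > 0.02,
        "action": lambda ctx: ctx["ticketing"].create(ctx["vnf_id"], "packet loss breach"),
    },
]

def evaluate_policies(metrics, ctx):
    """Fire every matching policy's action - the 'automation' in policy-driven automation."""
    for policy in policies:
        if policy["condition"](metrics):
            policy["action"](ctx)

class _Stub:                                    # stand-ins for real orchestration / ticketing systems
    def scale_out(self, vnf_id): print(f"scaling out {vnf_id}")
    def create(self, vnf_id, summary): print(f"ticket raised for {vnf_id}: {summary}")

evaluate_policies({"cpu_utilisation": 0.9, "packet_loss": 0.0},
                  {"vnf_id": "vFW-1", "orchestrator": _Stub(), "ticketing": _Stub()})
```

The business value comes not from the rule syntax but from the repeatability: the same conditions always trigger the same actions, at machine speed.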

As indicated in earlier posts, ONAP is an exciting initiative for the CSP industry for a number of reasons. My fear for ONAP is that it becomes such a behemoth of technical complexity that it becomes too unwieldy for use by any of the member bodies. I use the analogy of ATM versus Ethernet here, where ONAP is equivalent to ATM in power and complexity. The question is whether there’s an Ethernet answer to the whys that ONAP is trying to solve.

I’d love to hear your thoughts.

(BTW. I’m not saying that the technologies the ONAP team is investigating are the wrong ones. Far from it. I just find it interesting that the mission is starting with a technical direction in mind. I see parallels with the OSS radar analogy.)

Where are the reliability hotspots in your OSS?

As you already know, there are two categories of downtime – unplanned (eg failures) and planned (eg upgrades / maintenance).

Planned downtime sounds a lot nicer (for operators) but the reality is that you could call both types “incidents” – they both impact (or potentially impact) the customer. We sometimes underestimate that fact.

Today’s question is whether you’re able to identify where the hotspots are in your OSS suite when you combine both types of downtime. Can you tell which outages are service-impacting?

In a round-about way, I’m asking whether you already have a dashboard that monitors uptime of all the components (eg applications, probes, middleware, infra, etc) that make up your complete OSS / BSS estate? If you do, does it tell you what you anecdotally know already, or are there sometimes surprises?

Does the data give you the evidence you need to negotiate with the implementers of problematic components (eg patch cadence, the need for reliability fixes, streamlining the patch process, reduction in customisations, etc)? Does it give you reason to make architectural changes (eg webscaling)?
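
A minimal sketch of the dashboard calculation implied above (the component names, figures and reporting period are assumptions for illustration): combine planned and unplanned downtime records per component and rank the hotspots:

```python
from collections import defaultdict

# Each record: (component, downtime_minutes, category) where category is "planned" or "unplanned"
downtime_records = [
    ("inventory_app", 120, "planned"),
    ("inventory_app", 45, "unplanned"),
    ("fault_probe_07", 300, "unplanned"),
    ("middleware_bus", 60, "planned"),
]

def downtime_hotspots(records, period_minutes=30 * 24 * 60):
    """Total downtime (planned + unplanned) and availability per component over the period."""
    totals = defaultdict(int)
    for component, minutes, _category in records:
        totals[component] += minutes
    report = {
        component: {"downtime_min": mins,
                    "availability_pct": round(100 * (1 - mins / period_minutes), 3)}
        for component, mins in totals.items()
    }
    return dict(sorted(report.items(), key=lambda kv: -kv[1]["downtime_min"]))

print(downtime_hotspots(downtime_records))   # fault_probe_07 tops the list
```

Fold in a service-impact flag per outage and the same report starts to answer the negotiation questions above with evidence rather than anecdote.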

Chasing the big OSS waves

The diagram below attempts to show how the entire market (whether that’s the supplier-side or the buyer-side) will absorb a given new feature.

The leaders pick up the concept at T0 and then it takes another few years before the laggards implement it.
OSS Buyer Developer Curve

Most of us in the OSS implementation world crave to be at the leading edge of change – I know I do. The right-side of the curve is definitely the sexier side to be on. It’s part of the reason this blog exists – to stay abreast of the exciting new ideas, projects and technologies that are coming through in OSS. Funnily enough, there are probably even people within most of the laggards who are already excited about a new concept not long after T0, but are just unable to implement it until much later.

Supplier sales-pitches also tend to focus on the right side of the curve. That’s where the buzz is. That’s where the premiums are, the rewards for being first to market. It’s the customers on the right-side of the curve that are most attractive as sales targets for many suppliers.

But I also wonder whether the increasing proliferation of tech options within OSS means there’s also increasing inefficiency for suppliers (and possibly buyers) on the right side of the curve? Do we focus all our development efforts on ONAP or [insert any of millions of other alternative platforms, technologies, ideas, etc] today? What if the mass-market goes down an alternate path to the one you’ve chosen? How long before you identify a divergence from the mass-market trend? What’s the impact of changing direction (or not)? Are you bound to spill some blood by playing on the bleeding edge?

The left side of the graph is arguably more predictable. You can already see where the market is trending. Has the whole concept just been hype or has this new thing really made a difference for customers? Most of the implementation hurdles are likely to have already been resolved. Products have matured. More integrations, reports, etc have been developed. The waters have already been charted.

I don’t have the numbers to back this up, but I also have a suspicion that there’s less supplier competition for the business of laggard or follower customers. I’ve seen some companies that have thrived on this model. They get a nice unimpeded ride on the back of the wave whilst everyone else is fighting to catch the front-edge of it.

Chasing the left side of the curve might seem counter-intuitive because it clearly represents a falling market. But there’s always the next wave to jump onto, each with similar predictability and reduced competition.

Not only that, but a majority of the most important OSS use-cases have been around for many years. It’s increasingly difficult to find new functionality that delivers tangible benefits. Whilst other suppliers have jumped off to chase the next big thing, the followers can keep refining their solutions for what matters most.

Let me pose the question this way – Can you think of a single OSS product that is so refined that it can’t do the basics any better than it already does? Nope??