How OSS/BSS facilitated Telkomsel’s structural revenue changes

The following two slides were presented by Monty Hong of Indonesia’s Telkomsel at Digital Transformation Asia 2018 last week. They provide a fascinating insight into the changing landscape of comms revenues that providers are grappling with globally and the associated systems decisions that Telkomsel has made.

The first shows the drastic declines in revenues from Telkomsel’s traditional telco products (orange line), contrasted with the rapid rise in revenues from content such as video and gaming.
Telkomsel Revenue Curve

The second shows where Telkomsel is repositioning itself into additional segments of the content value-chain (red chevrons at top of page show where Telkomsel is playing).
Telkomsel gaming ecosystem

Telkomsel has chosen to transform its digital core to specifically cater for this new revenue model, built around a single API ecosystem. One focus of this transformation is support for a multi-speed architectural model: traditional back-end systems (eg OSS/BSS and systems of record) are expected to change rarely, whilst customer-facing systems are expected to be highly agile to cater to changing customer needs.
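To make the multi-speed idea a little more concrete, here’s a minimal sketch (in Python, with entirely hypothetical class and product names) of how a thin API facade can decouple fast-changing customer-facing channels from a slow-changing system of record:

```python
# Hypothetical sketch of a "multi-speed" decoupling layer: the stable,
# rarely-changing system of record sits behind a thin API facade, while
# fast-changing customer-facing apps only ever code against the facade.

from dataclasses import dataclass


@dataclass
class ServiceOrder:
    order_id: str
    product_code: str
    customer_id: str


class LegacyBssAdapter:
    """Wraps the slow-moving back-end (system of record). Its interface is
    versioned and changed rarely, on the back-end's release cadence."""

    def submit_order(self, order: ServiceOrder) -> str:
        # In reality this would call the BSS's own integration mechanism
        # (files, queues, SOAP, etc). Stubbed here for illustration.
        return f"BSS-REF-{order.order_id}"


class OrderApi:
    """The single API ecosystem that customer-facing apps consume.
    New channels (app, web, chatbot) iterate against this contract
    without touching the back-end adapters."""

    def __init__(self, backend: LegacyBssAdapter):
        self._backend = backend

    def place_order(self, customer_id: str, product_code: str) -> dict:
        order = ServiceOrder(order_id="ORD-0001", product_code=product_code,
                             customer_id=customer_id)
        reference = self._backend.submit_order(order)
        return {"status": "ACCEPTED", "reference": reference}


if __name__ == "__main__":
    api = OrderApi(LegacyBssAdapter())
    print(api.place_order(customer_id="CUST-42", product_code="GAME-PASS"))
```

The point of the shape is that channel teams iterate against the OrderApi contract at their own cadence, while changes behind LegacyBssAdapter follow the back-end’s slower release cycle.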

More about the culture of this change tomorrow.

OSS that capture value, not just create it

I’ve just had a really interesting first day at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia). The quality of presentations was quite high. Some great thought-provoking ideas!!

Nik Willetts kicked off his keynote with the following quote, which I’m paraphrasing, “Telcos need to start capturing value, not just creating it as they have for the last decade.”

For me, this is THE key takeaway for this event, above any of the other interesting technical discussions from day 1 (and undoubtedly on the agenda for the next 2 days too).

The telecommunications industry has made a massive contribution to the digital lifestyle that we now enjoy. It has been instrumental in adding enormous value to our lives and our economy. But all the while, telecommunications providers globally have been experiencing diminishing profitability and share-of-wallet (as described in this earlier post). Clearly the industry has created enormous value, but hasn’t captured as much as it would’ve liked.

The question to ask is how our thinking, and our OSS/BSS stacks, will contribute to capturing more value for our customers. As described in the share of wallet post above, the premium end of the value chain has always been in the content (think in terms of phone conversations in days gone by, or the myriad of comms techniques today such as email, live chat, blogs, etc). That’s what the customer pays for – the experience – not the networks or systems that facilitate it.

Nik’s comments made me think of Andrew Carnegie. Monopolies such as the telecommunications organisations of the past and Andrew Carnegie’s steel business owned vast swathes of the value chain (Carnegie Steel Company owned the mines which extracted the raw materials needed to make steel, controlled the transportation used to deliver the materials and the product, and ran the mills used for steel production). Buyers didn’t care about the mines or mills or transportation. Customers were paying for the end product because it was what helped them achieve their goals, whether that was the railway tracks needed by the railroads or the beams needed by construction companies.

The Internet has allowed enormous proliferation of the premium-end of the telecommunications value chain. It’s too late to stuff that genie back into the bottle. But to Nik’s further comment, we can help customers achieve their goals by becoming their “do-it-yourself” digital partners.

Our customers now look to platforms like Facebook, Instagram, Google, WordPress, Amazon, etc to build their marketing, order capture, product / content delivery, commercial transactions, etc. I really enjoyed Monty Hong’s presentation, which showed how Telkomsel’s OSS/BSS is helping to embed Telkomsel into customers’ digital lifestyles / value-chains. It’s a perfect example of the “biggest OSS loser” concept discussed in yesterday’s post.

The biggest OSS loser

“You are so much more likely to put effort into something when you know whether it will pay off and what the gains will be. Not knowing how things will turn out undermines your motivation and makes you delay taking action.”
Dr Theo Tsaousides
in his book, Brainblocks.

Have you seen the reality TV show, “The Biggest Loser?” I rarely watch TV, but have noticed that it’s been a runaway hit in the ratings here in Australia (and overseas apparently). Why has it been so successful and what does it have to do with OSS?

Well, according to Dr Tsaousides, the success of the show comes down to the obvious body-shape / fitness transformations each of the contestants makes over each season of the show. But more specifically, “You need to watch only one season from beginning to end and you will start craving to be a contestant on the show, regardless of your current weight… Seeing the people’s amazing transformation over a few months is a much more convincing way to start working out and eating well than being told by your doctor that you need to lose weight and about the cardiovascular advantages of exercise. Forecasting a positive outcome, especially when dealing with something new and unfamiliar, leads to action.”

Can you see how this might be a useful technique when planning an OSS transformation?

Change management is always a challenging task on any large OSS transformation. It’s always best to have the entire OSS user population involved in the change, but that’s not always feasible for large groups of users.

It’s one of the reasons I’m always a big advocate for getting a baseline, sandpit version of off-the-shelf OSS stood up and available for the user population to start interacting with. This is particularly helpful if the sandpit is perceptibly better than the current one.

To paraphrase, “Forecasting a positive outcome (via the OSS sandpit), especially when dealing with something new and unfamiliar (the future state after OSS transformation), leads to action (more excitement, engagement and less pushback from the user population during the course of the transformation).”

Do you think the biggest loser technique could work on your next OSS transformation?

Are telco services and SLAs no longer relevant?

I wonder if we’re reaching the point where “telecommunication services” is no longer a relevant term? By association, SLAs are also a bust. But what are they replaced by?

A telecommunication service used to effectively be the allocation of a carrier’s resources for use by a specific customer. Now? Well, less so:

  1. Service consumption channel alternatives are increasing – from TV and radio, to PC, to mobile, to tablet, to YouTube, to Insta, to Facebook, to a million others.
    Consumption sources are even more prolific.
  2. Customer contact channel alternatives are also increasing – from contact centres, to IVR, to online, to mobile apps, to Twitter, etc.
  3. A service bundle often utilises third-party components, some of which are “off-net”.
  4. Virtualisation is increasingly abstracting services from specific resources. They’re now loosely coupled with resource pools and rely on high availability / elasticity to ensure customer service continuity. Not only that, but those resource pools might extend beyond the carrier’s direct control and out to cloud provider infrastructure.

The growing variant-tree takes the concept beyond the reach of “customer services” and evolves it into “customer experiences.”

The elements that made up a customer service in the past tended to fall within the locus of control of a telco and its OSS. The modern customer experience extends far beyond the control of any one company or its OSS. An SLA – Service Level Agreement – only pertains to the sub-set of an experience that can be measured by the OSS. We can only aspire to offer an ELA – Experience Level Agreement – because we don’t yet have the mechanisms by which to measure or manage the entire experience.

The metrics that matter most for telcos today tend to revolve around customer experience (eg NPS). But aside from customer surveys, ratings and derived / contrived metrics, we don’t have electronic customer experience measurements.
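To illustrate the measurement gap, here’s a small, hypothetical sketch: an SLA figure can be derived from the components the OSS actually monitors, but several segments of the end-to-end experience have no electronic measurement at all, which is why an ELA remains aspirational. All figures and segment names are invented.

```python
# Illustrative only: an SLA can be computed from components the OSS can
# actually measure, but whole segments of the end-to-end experience
# (device, third-party content platforms, partner cloud) have no
# electronic measurement, so an "ELA" can't yet be calculated.

measured_availability = {        # hypothetical monthly figures from the OSS
    "access_network": 0.9995,
    "core_network": 0.9999,
    "on_net_service_platform": 0.9990,
}

unmeasured_segments = ["customer_device", "ott_content_platform",
                       "partner_cloud", "third_party_cdn"]

sla_availability = 1.0
for availability in measured_availability.values():
    sla_availability *= availability   # series availability of measured parts

print(f"SLA (measured sub-set only): {sla_availability:.4%}")
print(f"Experience segments with no electronic measure: {unmeasured_segments}")
```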

Customer services are dead; long live the new king, customer experience… if only we can invent a way to measure the whole scope of what makes up a customer experience.

Introducing our OSS expert registry, for making connections in the OSS industry

Here at Passionate About OSS, we’re passionate about making OSS happen. We have an extensive network of contacts. We just naturally tend to find ourselves making connections between the many experts in our network. Connecting those who are hoping to find an OSS expert with an OSS expert hoping to be found.

We’ve just introduced a new free-of-charge OSS expert registry to help people find OSS experts when they need to. This registry is intended to cover the buy-side and sell-side of the OSS market. Click on the link above to check it out.

Facebook’s algorithmic feed for OSS

“This is the logic that led Facebook inexorably to the ‘algorithmic feed’, which is really just tech jargon for saying that instead of this random (i.e. ‘time-based’) sample of what’s been posted, the platform tries to work out which people you would most like to see things from, and what kinds of things you would most like to see. It ought to be able to work out who your close friends are, and what kinds of things you normally click on, surely? The logic seems (or at any rate seemed) unavoidable. So, instead of a purely random sample, you get a sample based on what you might actually want to see. Unavoidable as it seems, though, this approach has two problems. First, getting that sample ‘right’ is very hard, and beset by all sorts of conceptual challenges. But second, even if it’s a successful sample, it’s still a sample… Facebook has to make subjective judgements about what it seems that people want, and about what metrics seem to capture that, and none of this is static or even in principle perfectible. Facebook surfs user behaviour.”
Ben Evans
here.

Most of the OSS I’ve seen tend to be akin to Facebook’s old ‘chronological feed’ (where users need to sift through thousands of posts to find what’s most interesting to them).

The typical OSS GUI has thousands of functions (usually displayed on a screen all at once – via charts, menus, buttons, pull-downs, etc). But of all of those available functions, any given user probably only interacts with a handful.
Current-style OSS interface

Most OSS give their users the opportunity to customise their menus, colour schemes, even filters. For some roles such as network ops, designers and order entry operators, there are activity lists, often with sophisticated prioritisation and skills-based routing, which start to become a little more like the ‘algorithmic feed.’

However, unlike the random nature of information hitting the Facebook feed, there is a more explicit set of things that an OSS user is tasked to achieve. It is a little more directed, like a Google search.

That’s why I feel the future OSS GUI will be more like a simple search bar (like Google) that captures a direction of intent, alongside a few recent / regular activity icons. Far less clutter than the typical OSS. The graphs and activity lists that we know and love would still be available to users, but the way of interacting with the OSS to find the most important stuff quickly needs to become more intuitive. In future it may even become predictive, knowing what information will be of interest to you.
OSS interface of the future
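As a thought experiment, a search-first front door might look something like the toy sketch below: rank the available OSS functions by how well they match the typed intent, boosted by the user’s recent activity. The function names and scoring approach are purely illustrative.

```python
# A toy sketch of the "search bar as OSS front door" idea: rank the
# available functions by how well they match the typed intent, boosted
# by how recently the user has used them. Function names are invented.

from datetime import datetime, timedelta

oss_functions = {
    "create_service_order": "raise a new customer service order",
    "run_alarm_correlation": "investigate and correlate network alarms",
    "view_capacity_report": "view utilisation and capacity trend reports",
    "trace_customer_service": "trace a customer service across the network",
}

recent_usage = {   # hypothetical per-user activity timestamps
    "run_alarm_correlation": datetime.now() - timedelta(hours=2),
    "view_capacity_report": datetime.now() - timedelta(days=6),
}


def search(intent: str) -> list[str]:
    """Score each function by keyword overlap with the intent, plus a
    small boost if the user has used that function recently."""
    terms = set(intent.lower().split())
    scored = []
    for name, description in oss_functions.items():
        overlap = len(terms & set(description.split()))
        recency_boost = 1 if name in recent_usage else 0
        scored.append((overlap + recency_boost, name))
    return [name for score, name in sorted(scored, reverse=True) if score > 0]


print(search("correlate alarms"))   # -> ['run_alarm_correlation', 'view_capacity_report']
```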

OSS collaboration rooms. Getting to the coal-face

A number of years ago I heard about an OSS product that introduced collaborative rooms for network operators to collectively solve challenging network health events. It was in line with some of my own thinking about the use of collaboration techniques to solve cross-domain or complex events. But the concept hasn’t caught on in the way that I expected. I was curious why, so I asked around some friends and colleagues who are hands-on managing networks every day.

The answer showed that I hadn’t got close enough to understanding the psyche at the coal-face. It seems that operators have a preference for the current approach, the tick and flick of trouble tickets until the solution forms and the problem is solved.

This shows the psyche of collaboration at a micro scale. I wonder if it holds true at a macro scale too?

No CSP has an everywhere footprint (admittedly cloud providers are close to everywhere though, in part through global presence, in part through coverage of the access domain via their own networks and/or OTT connectivity). For customers that need to cross geo-footprints, carriers take a tick and flick approach in terms of OSS. The OSS of one carrier passes orders to the other carrier’s OSS. Each OSS stays within the bounds of its organisation’s locus of control (see this blog for further context).

To me, there seems to be an opportunity for carriers to get out of their silos. To leverage collaboration for speed, coverage, etc by designing offerings in OSS design rooms rather than standards workshops. A global product catalog sandpit, as it were, in which carriers design offerings. Every carrier’s service offering / API / contract resides there for other carriers to interact with.
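To make the idea more tangible, here’s a hypothetical sketch of what such a shared catalog could look like: each carrier publishes offerings with a footprint and a pointer to its API / contract, and any other carrier can query for coverage when designing a cross-footprint offering. All names and URLs are invented.

```python
# A hypothetical sketch of the "global product catalog sandpit": each
# carrier publishes its offerings (with footprint and a machine-readable
# contract reference) into a shared registry that other carriers can
# query when designing cross-footprint offerings.

from dataclasses import dataclass, field


@dataclass
class CatalogEntry:
    carrier: str
    offering: str
    footprint: set[str]          # e.g. ISO country codes covered
    api_contract_url: str        # pointer to the offering's API / contract


@dataclass
class SharedCatalog:
    entries: list[CatalogEntry] = field(default_factory=list)

    def publish(self, entry: CatalogEntry) -> None:
        self.entries.append(entry)

    def find_coverage(self, countries: set[str]) -> list[CatalogEntry]:
        """Offerings from any carrier that cover at least one requested country."""
        return [e for e in self.entries if e.footprint & countries]


catalog = SharedCatalog()
catalog.publish(CatalogEntry("CarrierA", "EthernetAccess", {"AU", "NZ"},
                             "https://example.com/carrier-a/ethernet/contract"))
catalog.publish(CatalogEntry("CarrierB", "EthernetAccess", {"SG", "ID"},
                             "https://example.com/carrier-b/ethernet/contract"))

print([e.carrier for e in catalog.find_coverage({"ID", "AU"})])  # both carriers
```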

But once again, I may not be close enough to understanding the psyche at the coal-face. If you work at this coal-face, I’d love to get your opinions on why this would or would not work.

Extending the OSS beyond a customer’s locus of control

“While the 20th century was all about centralizing command and control to gain efficiency through vertical integration and mass standardization, 21st century automation is about decentralization – gaining efficiency through horizontal integration of partner ecosystems and mass customization, as in the context-aware cloud where personalized experience across channels is dynamically orchestrated.
The operational challenge of our time is to coordinate these moving parts into coherent and manageable value chains. Instead of building yet another siloed and brittle application stack, the age of distributed computing requires that we re-think business architecture to avoid becoming hopelessly entangled in a “big ball of CRUD”.”
Dave Duggal
here on TM Forum’s Inform back in May 2016.

We’ve quickly transitioned from a telco services market driven by economies of scale (Dave’s 20th century comparison) to a “market of one” (21st century), where the market wants a personalised experience that seamlessly crosses all channels.

By and large, the OSS world is stuck between the two centuries. Our tools are largely suited to the 20th century model (in some cases, today’s OSS originated in the 20th century after all), but we know we need to get to personalisation at scale and have tried to retrofit them. We haven’t quite made the jump to the model Dave describes yet, although there are positive signs.

It’s interesting. Telcos have the partner ecosystems, but the challenge is that the entire ecosystem still tends to be centrally controlled by the telco. This is the so-called best-of-breed model.

In the truly distributed model Dave talks about, the telcos would get the long tail of innovation / opportunity by extending their value chain beyond their own OSS stack. They could build an ecosystem that includes partners outside their locus of control. Outside their CAPEX budget too, which is the big attraction. The telcos get to own their customers, build products that are attractive to those customers, and gain revenues from those products / customers, but not incur the big capital investment of building the entire OSS stack. Their partners build (and share profits from) external components.

It sounds attractive right? As-a-service models are proliferating and some are steadily gaining take-up, but why is it still not happening much yet, relatively speaking? I believe it comes down to control.

Put simply, the telcos don’t yet have the right business architectures to coordinate all the moving parts. From my customer observation at least, there are too many fall-outs as customer journeys hand off between components within the internally controlled partner ecosystem. This is especially true when we talk omni-channel. A fully personalised solution leaves too many integration / data variants to provide complete test coverage. For example, just at the highest level of an architecture, I’ve yet to see a solution that tracks end-to-end customer journeys across all components of the OSS/BSS as well as channels such as online, IVR, apps, etc.
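For illustration only, the sketch below shows the sort of end-to-end journey tracking I mean: if every channel and OSS/BSS component stamps its events with a common journey identifier, fall-outs can be detected no matter which hand-off failed. The event structure and step names are assumptions for the example.

```python
# Illustrative sketch of end-to-end journey tracking: events from every
# channel and OSS/BSS component carry a common journey identifier, so
# fall-outs (journeys that stall before completion) can be spotted
# regardless of where the hand-off failed.

from collections import defaultdict

events = [
    {"journey_id": "J-001", "channel": "web", "component": "catalog", "step": "offer_selected"},
    {"journey_id": "J-001", "channel": "web", "component": "crm",     "step": "order_captured"},
    {"journey_id": "J-001", "channel": "n/a", "component": "oss",     "step": "service_activated"},
    {"journey_id": "J-002", "channel": "app", "component": "catalog", "step": "offer_selected"},
    {"journey_id": "J-002", "channel": "ivr", "component": "crm",     "step": "order_captured"},
    # J-002 never reaches activation: a fall-out between the CRM and the OSS.
]

journeys = defaultdict(list)
for event in events:
    journeys[event["journey_id"]].append(event["step"])

fallouts = {jid: steps for jid, steps in journeys.items()
            if "service_activated" not in steps}

print("Fall-outs:", fallouts)   # -> {'J-002': ['offer_selected', 'order_captured']}
```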

Dave rightly points out that this is the challenge of our times. If we can coherently and confidently manage moving parts across the entire value chain, we have more chance of extending the partner ecosystem beyond the telco’s locus of control.

Pitching an OSS? Don’t call it OSS.

“‘If you asked me how to sell cybersecurity, I wouldn’t call it cybersecurity.’ The raw truth of the statement hit me like a lightning bolt between the eyes. Cybersecurity might loosely describe what we do, and we tell people it’s what we’re selling, but it’s not what people buy.
Safety. Assurance. Peace of mind. Confidence. These are the kinds of things that people buy, concepts which ordinary people can understand and relate to because they are feelings which they have experienced themselves. Cybersecurity is not a next gen firewall, or multi-layered endpoint protection with machine learning and threat sandbox technology. Cybersecurity is not risk management or ISO27001 policies. Cybersecurity is being able to use the Internet in any way I can imagine without having to worry I might lose my family photos, get robbed, or get in trouble with my boss. If you could (honestly) sell me “worry free Internet”, I’d buy it in a heartbeat, and so would everyone you know.”
Corch X
here.

Sound familiar?
If you asked me how to sell OSS, I wouldn’t call it OSS. Doh! Now you enlighten me… after I’ve already chosen the domain name, PassionateAboutOSS.com. After I’ve already written over 2,000 posts on topics like orchestration, microservices, cloud-native, DevOps, and every other technical buzzword. Time to start again from scratch.

One thing in my favour is that you, the audience I’m interacting with, also speak in the same jargon. These are the terms we use to communicate with each other. To get things started. To get things done. To get things delivered.

That’s all fine if we’re only interacting with like-minded OSS experts. However, of the thousands of people who interact with our OSS / BSS, only a small percentage are OSS experts. A majority of people use the tools rather than designing, building or commissioning them.

The people who use the tools have a huge range of job roles and reasons for needing to use our OSS / BSS. Just like with cybersecurity, the core reasons could be Safety. Assurance. Peace of mind. Confidence. But they might also include Speed. Efficiency. Reliability. Repeatability. Simplicity. Monetisation. Insight. And more.

The challenge we have is that so much of the benefit that our OSS and BSS deliver is intangible. We might talk about orchestration delivering speed, simplicity, reliability, etc. But how do we establish a more tangible link?

How do we achieve the equivalent of what the “Intel Inside” marketing ploy delivered, which made people associate an otherwise obscure integrated circuit with a premium feature to consider when they bought their next computing device? How do we ensure that people know that our OSS / BSS is the master of puppets that makes our networks dance? It’s our OSS / BSS that are pulling all the strings of operationalisation, connecting customers with networks.

OSS – like a duck on a pond

Let’s start with a basic question. “What does an OSS need to do?”

The basic answer is, “make operations easier.”

The real answer is, of course, so much more nuanced than that. The term easier can also encapsulate other words such as faster, more accurate, more repeatable, cheaper, etc.

Designing, building, operating and maintaining a sizable network is extremely challenging, despite network operators around the world, and the vendors that supply to them, employing some of the best and brightest. So we design OSS and related tools / processes to make operations easier.

Yet I sometimes wonder whether we achieve that aim – to make operations easier. It seems to me that we tend to focus more on just replicating functions at a higher layer in the management stack. That is, moving a function up from the EMS/NMS to the OSS without really making it much easier operationally.

Let’s start at the user interface (UI). How often is it intuitive enough for an experienced network operator to start doing tasks with negligible OSS expert guidance?
Let’s look at deployments. How often are the projects low on effort, risk, cost and complexity?
Let’s look at flexibility (ie in-flight modifications or transformations). How often do we actually deliver flexibility to our customers through our OSS? To ask the same as above, how often are our changes low on effort, risk, cost and complexity?

As a small step towards providing an answer, I wonder whether it’s a case of making the hard things look easy and the easy things look hard.

We want to make the really hard operational things much easier to do within an OSS because that’s the primary purpose of an OSS. That’s the example of a duck on a pond. The OSS is gliding along effortlessly across the top of the water, but under the water it is paddling furiously.

Conversely, we want to make the really easy* operational things look hard to do within an OSS so that we’re not constantly being asked to build functionality / complexity into our OSS that doesn’t warrant being there. It diffuses the intent of the OSS. Just because we can, doesn’t mean we should.

Do the laws of physics prevent you from making an OSS pivot?

Aircraft carrier
Image linked from GCaptain.com.

As you already know, the word pivot has become common in the world of business, particularly the world of start-ups. It’s a euphemism for a significant change in strategic direction. In the context of today’s post, I love the word pivot because it implies a rapid change in direction, something that’s seemingly impossible for most of our OSS and the customers who use them.

I like to use analogies. It’s no coincidence that some of the analogies posted here on PAOSS relate to the challenge in making strategic change in our OSS. Here are just three of those analogies:

The OSS inertia principle relates classical physics to our OSS, where Force equals Mass x Acceleration (F = ma). In other words, the greater the mass (of your OSS), the more force must be applied to reach a given acceleration (ie to effect a change).

The OSS chess-board analogy talks about the rubber bands and pulleys (ie integrations) that enmesh the pieces on our OSS chessboard. This means that other pieces get dragged out of position whenever we try to move any individual piece and chaos ensues.

The aircraft carrier analogy compares OSS (and the CSPs they service) with navies of old. In days gone by, CSPs enjoyed command of the sea. Their boats were big, powerful and mobile enough to move around the world. However, their size required significant planning to change course. The newer application and content communications models are analogous to the advent of aviation. The over the top (OTT) business model has the speed, flexibility, lower cost base and diversity of aircraft. Air supremacy has changed the competitive dynamic. CSPs and our OSS can’t quickly change from being a navy to being an air force, so the aircraft carrier approach looks to the future whilst working within the constraints of the past.

When making day-to-day changes within, and to, your OSS, does the ability to pivot ever come to mind?

Do you intentionally ensure it stays small, modular and limit its integrations to simplify your game of OSS chess?
If constrained by existing mass that you simply can’t eliminate, do you seek to transform via OSS’s aviation equivalents?
Or like many of the OSS around the world, are you just making them larger, enmeshed behemoths that will never be able to change the laws of physics and achieve a pivot?

Do any of our global target architectures represent such behemoths?

Build an OSS and they will come… or sometimes not

Build it and they will come.

This is not always true for OSS. Let me recount a few examples.

The project team is disconnected from the users – The team that’s building the OSS in parallel to existing operations doesn’t (or isn’t able to) engage with the end users of the OSS. Once it comes time for cut-over, the end users want to stick with what they know and don’t use the shiny new OSS. From painful experience I can attest that stakeholder management is under-utilised on large OSS projects.

Turf wars – Different groups within a customer are unable to gain consensus on the solution. For example, the operational design team gains the budget to build an OSS but the network assurance team doesn’t endorse this decision. The assurance team then decides not to endorse or support the OSS that is designed and built by the design team. I’ve seen an OSS worth tens of millions of dollars turned off less than 2 years after handover because of turf wars. Stakeholder management again, although this could be easier said than done in this situation.

It sounded like a good idea at the time – The very clever OSS solution team keeps coming up with great enhancements that don’t get used, for whatever reason (eg not fit-for-purpose, lack of awareness of its existence by users, lack of training, etc). I’ve seen a customer introduce over 500 customisations to an off-the-shelf solution, yet hundreds of those customisations hadn’t been touched by users in the full year prior to a utilisation analysis. That’s right, not even used once in the preceding 12 months. Some made sense because they were once-off tools (eg custom migration activities), but many didn’t.

The new OSS is a scary beast – The new solution might be perfect for what the customer has requested in terms of functionality. But if the solution differs greatly from what the operators are used to, it can be too intimidating to be used. A two-week classroom-based training course at the end of an OSS build doesn’t provide sufficient learning to take up all the nuances of the new system like the operators have developed with the old solution. Each significant new OSS needs an apprenticeship, not just a short-course.

It’s obsolete before it’s finished – OSS work in an environment of rapid change – networks, IT infrastructure, organisation models, processes, product offerings, regulatory shifts, disruptive innovation, etc. The longer an OSS takes to implement, the greater the likelihood of obsolescence. All the more reason for designing for incremental delivery of business value rather than big-bang delivery.

What other examples have you experienced where an OSS has been built, but the users haven’t come?

Falsely rewarding based on OSS existence rather than excellence

There’s a common belief that most jobs see people rewarded for presence rather than performance. That is, they’re encouraged to be on site from 9am to 5pm rather than being given free rein over their work schedules as long as key outcomes are met / exceeded.

In OSS vendor / product selection there’s a similar concept. Contracts are often awarded based on existence rather than excellence. When evaluating a product, if it’s able to do a majority of the functions in the long list of requirements then the box is ticked.

However, this doesn’t take into account that there are usually only a very small number of functions that any given customer’s OSS needs to perform at a very high level of efficiency. All the others are effectively just nice to have. That’s the 80/20 rule at work.

When guiding a customer through their vendor selections, I always take them through an exercise to identify the use-cases / functions that really matter. Then we ensure that the demos or proofs of concept focus closely on how excellent the OSS is at those most important factors.
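A simple way to express that shift from existence to excellence is a weighted score over the critical use-cases only, as in the hypothetical sketch below (use-cases, weights and ratings are all invented):

```python
# A simple sketch of scoring for excellence rather than existence: weight
# the handful of use-cases that really matter and score how well each
# vendor performs them in a demo / PoC (say 0-10), instead of ticking
# a box for every requirement the product merely "has".

critical_use_cases = {          # use-case: weighting (sums to 1.0)
    "auto_design_and_activate_fttx_order": 0.4,
    "cross_domain_alarm_correlation": 0.35,
    "bulk_device_discovery_and_reconciliation": 0.25,
}

vendor_scores = {               # excellence ratings observed in PoC (0-10)
    "VendorA": {"auto_design_and_activate_fttx_order": 9,
                "cross_domain_alarm_correlation": 5,
                "bulk_device_discovery_and_reconciliation": 7},
    "VendorB": {"auto_design_and_activate_fttx_order": 6,
                "cross_domain_alarm_correlation": 8,
                "bulk_device_discovery_and_reconciliation": 8},
}

for vendor, scores in vendor_scores.items():
    weighted = sum(weight * scores[uc] for uc, weight in critical_use_cases.items())
    print(f"{vendor}: weighted excellence = {weighted:.2f} / 10")
```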

OSS implementation, but without the dependencies

One of the challenges with getting a new OSS or OSS transformation project completed can be the large number of dependencies that can cause momentum gridlock. If you’re looking to deliver business value in one big-bang, which is a really common approach to delivering OSS projects, then you end up juggling many different activities and hoping they all align at the right times.

I’ve noticed that the vendors tend to design their delivery schedules around big-bang / waterfall approaches like below.
Big-bang OSS delivery

Many vendors will even assure you that this is their standard practice and are hesitant to consider changes to their “best practice” delivery scheduling. Having been involved in many of these types of deliveries in the past, on both vendor and customer side, I can assure you that they rarely work well.

Generally speaking, the gridlocks occur on the customer-side, but the result is detrimental to customer and vendor alike. Hold-ups mean inefficient allocation of resources as well as the resultant cost / time over-runs.

The alternative is to apply a bit more lateral thinking to how you break the work down into smaller chunks. The aims of this lateral-thinking work breakdown are two-fold:

  1. Breaking up the work so that it best avoids dependencies; whilst also
  2. Delivering some sort of value to the customer

There are many dependencies on a typical OSS project – hardware, procurement, IT infrastructure, network connectivity, security, approvals, integrations, licensing, resource availability, data quality and many more. However, each different customer, their org chart and project has its own unique mix of dependencies, so I don’t subscribe to the “best practice” argument to project delivery.

The diagram below shows an example of an alternative breakdown. The business value chunks that are delivered might be tiny in some cases, but at least momentum can be demonstrated. Rather than having a mass of entwined dependencies, you can isolate and minimise the dependencies for each sliver of business value. When a dependency has cleared, you can jump straight onto the next activity from an existing build-state rather than having to align all the activities to land in perfect precision.
Incremental OSS work breakdown

OSS that are profitable, difficult, or important?

“Apple became the first company to be worth a trillion dollars. They did that by spending five years single-mindedly focusing on doing profitable work. They’ve consistently pushed themselves toward high margin luxury goods and avoided just about everything else. Belying their first two decades, when they focused on breakthrough work that was difficult and perhaps important, nothing they’ve done recently has been either…
Profitable, difficult, or important — each is an option. A choice we get to make every day. ‘None of the above’ is also available, but I’m confident we can seek to do better than that.”
Seth Godin
in this post.

I encourage you to view the entire post at the link above. It gives definitions (and examples) of organisations that focus on profitable, difficult or important activities.

In OSS, the organisations that focus on the profitable are the ones investing heavily in glossy sales / marketing and only making incremental improvements to products that have been around for years.

Then there are others that are doing the difficult and innovative and complex work (ie the sexy work for all of us tech-heads). This recent article about ONAP talks about the fantastic tech-driven ambitions of that program, but then distills it down to the business objectives.

That leaves us with the important – the business needs / objectives – and this is where the customers come in. Speak with any OSS customer (or customer’s customer for that matter) and you’ll tend to find frustrations with their OSS. Frustration with complexity, time to deliver / modify, cost to deliver / modify, risks, functionality constraints, etc.

This is a simplification of course, but do you notice that as an industry, our keen focus on the profitable and difficult might just be holding us back from doing the important?

OSS designed as a bundle, or bundled after?

Over the years I’m sure you’ve seen many different OSS demonstrations. You’ve probably also seen presentations by vendors / integrators that have shown multiple different products from their suite.

How integrated have they appeared to you?

  1. Have they seemed tightly integrated, as if carved from a single piece of stone?
  2. Or have they seemed loosely integrated, a series of obviously different stones joined together with some mortar?
  3. Or perhaps even barely associated, a series of completely different objects (possibly through product acquisition) branded under a common marketing name?

There are different pros and cons with each approach. Tight integration possibly suits a greenfields OSS. Looser integration perhaps better suits carve-off for best-of-breed customer architecture models.

I don’t know about you, but I always prefer to be given the impression that an attempt has been made to ensure consistency in the bundling. Consistency of user-interface, workflow, data modelling/presentation, reports, etc. With modern presentation layers, database technologies and the availability of UX / CX expertise, this should be less of a hurdle than it has been in the past.

Where are the reliability hotspots in your OSS?

As you already know, there are two categories of downtime – unplanned (eg failures) and planned (eg upgrades / maintenance).

Planned downtime sounds a lot nicer (for operators) but the reality is that you could call both types “incidents” – they both impact (or potentially impact) the customer. We sometimes underestimate that fact.

Today’s question is whether you’re able to identify where the hotspots are in your OSS suite when you combine both types of downtime. Can you tell which outages are service-impacting?

In a round-about way, I’m asking whether you already have a dashboard that monitors the uptime of all the components (eg applications, probes, middleware, infrastructure, etc) that make up your complete OSS / BSS estate. If you do, does it tell you what you anecdotally know already, or are there sometimes surprises?

Does the data give you the evidence you need to negotiate with the implementers of problematic components (eg patch cadence, the need for reliability fixes, streamlining the patch process, reduction in customisations, etc)? Does it give you reason to make architectural changes (eg webscaling)?
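If you don’t already have such a view, even a crude aggregation like the sketch below can reveal the hotspots once planned and unplanned downtime are combined. Component names, figures and the hotspot threshold are invented; in practice the data would come from your monitoring and change-management systems.

```python
# A minimal sketch of combining planned and unplanned downtime per
# component to find the hotspots in an OSS/BSS estate.

outages = [   # (component, category, minutes of downtime this quarter)
    ("inventory_app", "planned", 240),
    ("inventory_app", "unplanned", 90),
    ("fault_mgmt_probe", "unplanned", 610),
    ("middleware_bus", "planned", 120),
    ("billing_mediation", "planned", 300),
    ("billing_mediation", "unplanned", 45),
]

totals: dict[str, int] = {}
for component, _category, minutes in outages:
    totals[component] = totals.get(component, 0) + minutes

# Rank the estate by combined (planned + unplanned) downtime.
for component, minutes in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <-- hotspot" if minutes > 300 else ""
    print(f"{component:20s} {minutes:4d} min{flag}")
```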

Stop looking for exciting new features for your OSS

“The iPhone disrupted the handset business, but has not disrupted the cellular network operators at all, though many people were convinced that it would. For all that’s changed, the same companies still have the same business model and the same customers that they did in 2006. Online flight booking doesn’t disrupt airlines much, but it was hugely disruptive to travel agents. Online booking (for the sake of argument) was sustaining innovation for airlines and disruptive innovation for travel agents.
Meanwhile, the people who are first to bring the disruption to market may not be the people who end up benefiting from it, and indeed the people who win from the disruption may actually be doing something different – they may be in a different part of the value chain. Apple pioneered PCs but lost the PC market, and the big winners were not even other PC companies. Rather, most of the profits went to Microsoft and Intel, which both operated at different layers of the stack. PCs themselves became a low-margin commodity with fierce competition, but PC CPUs and operating systems (and productivity software) turned out to have very strong winner-takes-all effects.”
Ben Evans
on his blog about Tesla.

As usual, Ben makes some thought-provoking points. The ones above have coaxed me into thinking about OSS from a slightly different perspective.

I’d tended to look at OSS as a product to be consumed by network operators (and further downstream by the customers of those network operators). I figured that if our OSS delivered benefit to the downstream customers, the network operators would thrive and would therefore be prepared to invest more into OSS projects. In a way, it’s a bit like a sell-through model.

But the ideas above give some alternatives for OSS providers to reduce dependence on network operator budgets.

Traditional OSS fit within a value-chain that’s driven by customers who wish to communicate. In the past, the telephone network was perceived as the most valuable part of that value-chain. These days, digitisation and competition have meant that the perceived value of the network has dropped to being a low-margin commodity in most cases. We’re generally not prepared to pay a premium for a network service. The Microsofts and Intels of the communications value-chain are far more diverse. It’s the Googles, Facebooks, Instagrams, YouTubes, etc that are perceived to deliver most value to end customers today.

If I were looking for a disruptive OSS business model, I wouldn’t be looking to add exciting new features within the existing OSS model. In fact, I’d be looking to avoid our current revenue dependence on network operators (ie the commoditising aspects of the communications value-chain). Instead I’d be looking for ways to contribute to the most valuable aspects of the chain (eg apps, content, etc). Or even better, to engineer a more exceptional comms value-chain than we enjoy today, with an entirely new type of OSS.

Chasing the big OSS waves

The diagram below attempts to show how the entire market (whether that’s the supplier-side or the buyer-side) will absorb a given new feature.

The leaders pick up the concept at T0 and then it takes another few years before the laggards implement it.
OSS Buyer Developer Curve

Most of us in the OSS implementation world crave to be at the leading edge of change. The right-side of the curve is definitely the sexier side to be on. I know I do. It’s part of the reason this blog exists – to stay abreast of the exciting new ideas, projects and technologies that are coming through in OSS. Funnily enough, there’s probably even people within most of the laggards who are already excited about a new concept not long after T0, but are just unable to implement it until much later.

Supplier sales-pitches also tend to focus on the right side of the curve. That’s where the buzz is. That’s where the premiums are, the rewards for being first to market. It’s the customers on the right-side of the curve that are most attractive as sales targets for many suppliers.

But I also wonder whether the increasing proliferation of tech options within OSS means there’s also increasing inefficiency for suppliers (and possibly buyers) on the right side of the curve? Do we focus all our development efforts on ONAP or [insert any of millions of other alternative platforms, technologies, ideas, etc] today? What if the mass-market goes down an alternate path to the one you’ve chosen? How long before you identify a divergence from the mass-market trend? What’s the impact of changing direction (or not)? Are you bound to spill some blood by playing on the bleeding edge?

The left side of the graph is arguably more predictable. You can already see where the market is trending. Has the whole concept just been hype or has this new thing really made a difference for customers? Most of the implementation hurdles are likely to have already been resolved. Products have matured. More integrations, reports, etc have been developed. The waters have already been charted.

I don’t have the numbers to back this up, but I also have a suspicion that there’s less supplier competition for the business of laggard or follower customers. I’ve seen some companies that have thrived on this model. They get a nice unimpeded ride on the back of the wave whilst everyone else is fighting to catch the front-edge of it.

Chasing the left side of the curve might seem counter-intuitive because it clearly represents a falling market. But there’s always the next wave to jump onto, each with similar predictability and reduced competition.

Not only that, but a majority of the most important OSS use-cases have been around for many years. It’s increasingly difficult to find new functionality that delivers tangible benefits. Whilst other suppliers have jumped off to chase the next big thing, the followers can keep refining their solutions for what matters most.

Let me pose the question this way – Can you think of a single OSS product that is so refined that it can’t do the basics any better than it already does? Nope??

If your partners don’t have to talk to you then you win

“If your partners don’t have to talk to you then you win.”
Guy Lupo.

Put another way, the best form of customer service is no customer service (ie your customers and/or partners are so delighted with your automated offerings that they have no reason to contact you). They don’t want to contact you anyway (generally speaking). They just want to consume a perfectly functional and reliable solution.

In the deep, distant past, our comms networks required operators. But then we developed automated dialling / switching. In theory, the network looked after itself and people made billions of calls per year unassisted.

Something happened in the meantime though. Telco operators the world over started receiving lots of calls about their platforms and products. You could say that they’re unwanted calls. The telcos even have an acronym, CVR – Call Volume Reduction – that describes their ambitions to reduce the number of customer calls that reach contact centre agents. Tools such as chatbots and IVR have sprung up to reduce the number of calls that an operator fields.

Network as a Service (NaaS), the context of Guy’s comment above, represents the next new tool that will aim to drive CVR (amongst a raft of other benefits). NaaS theoretically allows customers to interact with network operators via impersonal contracts (in the form of APIs). The challenge will be in the reliability – ensuring that nothing falls between the cracks in any of the layers / platforms that combine to form the NaaS.
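As a purely illustrative example of an “impersonal contract”, the sketch below shows a partner submitting a service order to a hypothetical NaaS endpoint over HTTP rather than raising it with a human. The endpoint, payload fields and schema are assumptions, not any real carrier’s API; real NaaS interfaces (eg TM Forum’s Open APIs) define their own contracts.

```python
# A hedged sketch of the "impersonal contract" idea: a partner consumes
# network capability through a machine-readable API rather than phone
# calls or emails. The endpoint and payload below are hypothetical.

import json
from urllib import request

ORDER_ENDPOINT = "https://api.example-telco.com/naas/v1/serviceOrder"  # hypothetical

order = {
    "externalId": "PARTNER-REQ-1001",
    "serviceType": "evpn-l2",
    "bandwidthMbps": 500,
    "aEnd": "SYD-DC-01",
    "zEnd": "MEL-DC-03",
}


def place_naas_order(payload: dict) -> dict:
    """Submit a service order to the (hypothetical) NaaS endpoint."""
    req = request.Request(
        ORDER_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as response:          # would need auth in reality
        return json.loads(response.read())


if __name__ == "__main__":
    print("Would POST:", json.dumps(order, indent=2))
    # print(place_naas_order(order))  # uncomment against a real endpoint
```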

In the world of NaaS creation, Guy is exactly right – “If your partners [and customers] don’t have to talk to you then you win.” As always, it’s complexity that leads to gaps. The more complex the NaaS stack, the less likely you are to achieve CVR.