OSS transformation is hard. What can we learn from open source?

Have you noticed an increasing presence of open-source tools in your OSS recently? Have you also noticed that open-source is helping to trigger transformation? Have you thought about why that might be?

Some might rightly argue that it is the cost factor. You could also claim that they tend to help resolve specific, but common, problems. They’re smaller and modular.

I’d argue that the reason relates to our most recent two blog posts. They’re fast to install and they’re easy to run in parallel for comparison purposes.

If you’re designing an OSS, can you introduce the same concepts? Your OSS might be for internal purposes or to sell to market. Either way, if you make it fast to build and easy to use, you have a greater chance of triggering transformation.

If you have a behemoth OSS to “sell,” transformation persuasion is harder. The customer needs to rally more resources (funds, people, time) just to compare with what they already have. If you have a behemoth on your hands, you need to try even harder to be faster, easier and more modular.

Identifying the fault-lines that trigger OSS churn

Most people slog through their days in a dark funk. They almost never get to do anything interesting or go to interesting places or meet interesting people. They are ignored by marketers who want them to buy their overpriced junk and be grateful for it. They feel disrespected, unappreciated and taken for granted. Nobody wants to take the time to listen to their fears, dreams, hopes and needs. And that’s your opening.
John Carlton

Whilst the quote above may relate to marketing, it also has parallels in the build and run phases of an OSS project. We talked about the trauma of OSS yesterday, where the OSS user feels so much trauma with their current OSS that they’re willing to go through the trauma of an OSS transformation. Clearly, a procurement event must be preceded by a significant trauma!

Sometimes that trauma has its roots in the technical, where the existing OSS just can’t do (or be made to do) the things that are most important to the OSS user. Or it can’t do them reliably, at scale, in time, cost-effectively, or without significant risk / change. That’s a big factor certainly.

However, the churn trigger appears to more often be a human one. The users feel disrespected, unappreciated and taken for granted. But here’s an interesting point that might surprise some users – the suppliers also often feel disrespected, unappreciated and taken for granted.

I have the privilege of working on both sides of the equation, often even as the intermediary between both sides. Where does the blame lie? Where do the fault-lines originate? The reasons are many and varied of course, but like a marriage breakup, it usually comes down to relationships.

Where the communication method is through hand-grenades being thrown over the fence (eg management by email and by contractual clauses), results are clearly going to follow a deteriorating arc. Yet many OSS relationships structurally start from a position of us and them – the fence is erected – from day one.

Coming from a technical background, it took me far too deep into my career to come to this significant realisation – the importance of relationships, not just the quest for technical perfection. The need to listen to both sides’ fears, dreams, hopes and needs.

OSS data that’s even more useless than useless

About 6-8 years ago, I was becoming achingly aware that I’d passed well beyond an information overload (I-O) threshold. More information was reaching my brain each day than I was able to assimilate, process and archive. What to do?

Well, I decided to stop reading newspapers and watching the news, in fact almost all television. I figured that those information sources were empty calories for the brain. At first it was just a trial, but I found that I didn’t miss it much at all and continued. Really important news seemed to find me at the metaphorical water-cooler anyway.

To be completely honest, I’m still operating beyond the I-O threshold, but at least it’s (arguably) now a more healthy information diet. I’m now far more useless at trivia game shows, which could be embarrassing if I ever sign up as a contestant on “Who Wants to be a Millionaire.” And missing out on the latest news sadly makes me far less capable of advising the Queen on how to react to Meghan Markle’s latest royal “atrocity.” The crosses we bear.

But I’m digressing markedly (and Markle-ey) from what this blog is all about – O.S.S.

Let me ask you a question – Is your OSS data like almost everybody else’s (ie also in I-O mode)?

Seth Godin recently quoted 3 rules of data:
“First, don’t collect data unless it has a non-zero chance of changing your actions.
Second, before you seek to collect data, consider the costs of processing that data.
Third, acknowledge that data collected isn’t always accurate, and consider the costs of acting on data that’s incorrect.”

If I remove the double-negative from rule #1: only collect data if it has even the slightest chance of changing your actions.

Most people take the perspective that we might as well collect everything because storage is just getting so cheap (and we should keep it, not because we ever use it, but just in case our AI tools eventually find some relevance locked away inside it).

In the meantime, pre-AI (rolls eyes), Seth’s other two rules provide further sanity to the situation. Storing data is cheap, except where it has to be timely and accurate enough to base decisive, reliable actions on.

So, let me go back to that revised rule #1. How much of the data in your OSS database / data-lake / data-warehouse / etc has even the slightest chance of changing your actions? As a percentage??
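If you wanted to put even a rough number against that percentage, and assuming (hypothetically) your data store tracked a last-read timestamp per record, a query as simple as the following sketch would do. The table and column names are invented purely for illustration.

```python
# Minimal sketch: what share of collected records has never been read since ingestion?
# "performance_records" and "last_read_at" are hypothetical names, not any real schema.
import sqlite3

conn = sqlite3.connect("oss_data.db")  # stand-in for your data-lake / warehouse

total = conn.execute("SELECT COUNT(*) FROM performance_records").fetchone()[0]
never_read = conn.execute(
    "SELECT COUNT(*) FROM performance_records WHERE last_read_at IS NULL"
).fetchone()[0]

if total:
    print(f"{never_read / total:.1%} of records have never influenced anything since collection")
```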

I suspect a majority is never used. And as most of it ages, it becomes even more useless than useless. One wonders, why are we storing it then?

Becoming the Microsoft of the OSS industry

On Tuesday we pondered, “Would an OSS duopoly be a good thing?”

It cited two examples of operating systems amongst other famous duopolies:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)

Yesterday we provided an example of why consolidation is so much more challenging for OSS companies than say for Coke or Pepsi.

But maybe an operating system model could represent a path to overcome many of the challenges faced by the OSS industry. What if there were a Linux for OSS?

  • One where the drivers for any number of device types are already handled and we don’t have to worry about south-bound integrations anymore (mostly). When new devices come onto the market, they need to have agents designed to interact with the common, well-understood agents on the operating system (see the sketch after this list)
  • One where the user interface is generally defined and can be built upon by any number of other applications
  • One where data storage and handling is already pre-defined and additional utilities can be added to make data even easier to interact with
  • One where much of the underlying technical complexity is already abstracted and the higher-value functionality can be built on top
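To make that first point a little more concrete, here’s a rough sketch of what such a common agent/driver contract might look like. All of the class and method names are hypothetical, purely to illustrate how south-bound specifics could sit behind one well-understood interface.

```python
# Rough sketch of the "common agent" idea: one well-understood contract that every
# new device type implements, keeping south-bound specifics out of the OSS itself.
# All names here are hypothetical.
from abc import ABC, abstractmethod
from typing import Dict, List


class SouthboundAgent(ABC):
    """The common contract every device-type driver would fulfil."""

    @abstractmethod
    def connect(self, address: str) -> None:
        ...

    @abstractmethod
    def collect_alarms(self) -> List[Dict]:
        ...

    @abstractmethod
    def collect_inventory(self) -> List[Dict]:
        ...


class ExampleRouterAgent(SouthboundAgent):
    """One possible driver; a new device vendor would supply another just like it."""

    def connect(self, address: str) -> None:
        self.address = address  # a real agent would open an SNMP / NETCONF / API session here

    def collect_alarms(self) -> List[Dict]:
        return [{"device": self.address, "severity": "minor", "text": "example alarm"}]

    def collect_inventory(self) -> List[Dict]:
        return [{"device": self.address, "component": "example line card"}]
```

The OSS above that interface wouldn’t care whether the agent underneath speaks SNMP, NETCONF or a vendor API – which is precisely the point.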

It seems to me to be a great starting point for solving many of the items listed as awaiting exponential improvement in this OSS Call for Innovation manifesto.

Interestingly, I can’t foresee any of today’s biggest OSS players developing such an operating system without a significant mindset shift. They have the resources to become the Microsoft / Apple / Google of the OSS market, but appear to be quite closed-door in their thinking. Waiting for disruption from elsewhere.

Could ONAP become the platform / OS?

Let me relate this by example. TM Forum recently ran an event called DTA in Kuala Lumpur. It was an event for sharing ideas, conversations and letting the market know all about their products. All of the small to medium suppliers were happy to talk about their products, services and offerings. By contrast, I was ordered out of the rooms of one leading, but some might say struggling, vendor because I was only a walk-up. A walk-up representing a potential customer of theirs, but they didn’t even ask the question about how I might be of value to them (nor vice versa).

Would an OSS duopoly be a good thing?

The products/vendors page here on PAOSS currently has a couple of hundred entries. We’re working on an extended list that will almost double that number. More news on that shortly.

The level of fragmentation fascinates me, but if I’m completely honest, it probably disappoints me too. It’s great that it’s providing the platform for a long-tail of innovation. It’s exciting that there are so many niche opportunities. But it disappoints me because there’s so much duplication. How many alarm / performance / inventory / etc management tools are there? Can you imagine how many developer hours have been duplicated on similar feature development between products? And because there are so many different patterns, the total number of integration variants across the industry puts a huge integration tax on us all.

Compare this to the strength of duopoly markets such as:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)
  • Boeing / Airbus (commercial aircraft)
  • Visa / Mastercard (credit cards / payments)
  • Coca Cola / Pepsi (beverages, etc)

These duopolies have allowed for consolidation of expertise, effort, revenues/profits, etc. Most also provide a platform upon which smaller organisations / suppliers can innovate without having to re-invent everything (eg applications build upon operating systems, parts for aircraft, etc).

Buuuut……

Then I think about the impediments to achieving drastic consolidation through mergers and acquisitions (M&A) in the OSS industry.

There are opportunities to find complementary product alignment because no supplier services the entire OSS estate (where I’m using TM Forum’s TAM as a guide to the breadth of the OSS estate). However, it would be much harder to approach duopoly in OSS for a number of reasons:

  • Almost every OSS implementation is unique. Even if some of the products start out in common, they usually become quickly customised in terms of integrations, configurations, processes, etc
  • Interfaces to networks and other systems can vary so much. Modern EMS / devices / systems are becoming a little more consistent with IP, SNMP, Web APIs, etc. However, we still tend to have a lot of legacy protocols to interface with our networks
  • Consolidation of product lines becomes much harder, partly because of the integrations above, but partly because the functionality-sets and workflows differ so vastly between similar products (eg inventory management tools)
  • Similarly, architectures and build platforms (eg programming languages) are not easily compatible
  • Implementations are often regional for a variety of reasons – regulatory, local partnerships / relationships, language, corporate culture, etc
  • Customers can be very change-averse, even when they’re instigating the change

By contrast, we regularly hear of Coca Cola buying up new brands. It’s relatively easy for Coke to add a new product line (or lines) without having much impact on existing lines.

We also hear about Google’s acquisitions, adding complementary products into its product line or simply for the purpose of acquiring new talent / expertise. There are also acquisitions for the purpose of removing competitors or buying into customer bases.

Harder in all cases in the OSS industry.

Tomorrow we’ll share a story about an M&A attempting to buy into a customer base.

Then on Thursday, a story awaits on a possibly disruptive strategy towards consolidation in OSS.

Think for a moment…

“Many of the most important new companies, including Google, Facebook, Amazon, Netflix, Snapchat, Uber, Airbnb and more are winning not by giving good-enough solutions…, but rather by delivering a superior experience….”
Ben Thompson, stratechery.com

Think for a moment about the millions of developer hours that have gone into creating today’s OSS tools. Think also for a moment about how many of those tools are really clunky to use, install, configure, administer. How many OSS tools have truck-loads of functionality baked in that is just distracting, features that you’re never going to need or use? Conversely, how many are intuitive enough for a high-school student, let’s say, to use for the first time and become effective within a day of self-driven learning?

Let’s say an OSS came along that had all of the most important features (the ones customers really pay for, not the flashy, nice-to-have features) and offered a vastly superior user experience and user interface. Let’s say it took the market by storm.

With software and cloud delivery, it becomes harder to sustain differentiation. Innovative features and services are readily copied. But have a think about how hard it would be for the incumbent OSS to pick apart the complexity of their code, developed across those millions of developer hours, and throw swathes of it away – overhauling in an attempt to match a truly superior OSS experience.

Can you see why I’m bemused that we’re not replacing developers with more UX experts? We can surely create more differentiation through vastly improved experience than we can in creating new functionality (almost all of the most important functionality has already been developed and we’re now investing developer time on the periphery).

Treating your OSS/BSS suite like a share portfolio

Like most readers, I’m sure your OSS/BSS suite consists of many components. What if you were to look at each of those components as assets? In a share portfolio, you analyse your stocks to see which assets are truly worth keeping and which should be divested.

We don’t tend to take such a long-term analytical view of our OSS/BSS components. We may regularly talk about their performance anecdotally, but I’m talking about a strategic analysis approach.

If you were to look at each of your OSS/BSS components, where would you put them in the BCG Matrix?
[Image: BCG matrix, sourced from NetMBA]

How many of your components are giving a return (whatever that may mean in your organisation) and/or have significant growth potential? How many are dogs that are a serious drain on your portfolio?

From an investor’s perspective, we seek to double down on our day-to-day investment in cash-cows and stars. Equally, we seek to divest our dogs.
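If you wanted to run that strategic analysis a little more formally, the quadrant sorting itself is trivial; the hard (and valuable) part is scoring return and growth honestly. A minimal sketch with invented components, scores and thresholds:

```python
# Minimal sketch: place OSS/BSS components into BCG quadrants from two scores.
# Components, scores (0-10) and the threshold of 5 are all illustrative only.
components = {
    "fault_management": (8, 2),     # (return_score, growth_score)
    "inventory": (3, 1),
    "orchestration": (6, 9),
    "legacy_reporting": (1, 1),
}

def bcg_quadrant(return_score: float, growth_score: float) -> str:
    high_return, high_growth = return_score >= 5, growth_score >= 5
    if high_return and high_growth:
        return "star - double down"
    if high_return:
        return "cash cow - protect"
    if high_growth:
        return "question mark - investigate"
    return "dog - candidate to divest"

for name, (ret, growth) in components.items():
    print(f"{name}: {bcg_quadrant(ret, growth)}")
```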

But that’s not always the case with our OSS/BSS portfolio. We sometimes spend so much of our daily activity tweaking around the edges, trying to fix our dogs or just adding more things into our OSS/BSS suite – all of which distracts us from increasing the total value of our portfolio.

To paraphrase this Motley Fool investment strategy article into an OSS/BSS context:

  • Holding too many shares in a portfolio can crowd out returns for good ideas – being precisely focused on what’s making a difference rather than being distracted by having too many positions. Warren Buffett recommends taking 5-10 positions in companies that you are confident in holding forever (or for a very long period of time), rather than constantly switching. I shall note though that software could arguably be considered to be more perishable than the institutions we invest in – software doesn’t tend to last for decades (except some OSS perhaps  😀 )
  • Good ideas are scarce – ensuring you’re not getting distracted by the latest trends and buzzwords
  • Competitive knowledge advantage – knowing your market segment / portfolio extremely well and how to make the most of it, rather than having to up-skill on every new tool that you bring into the suite
  • Diversification isn’t lost – ensuring there is suitable vendor/product diversification to minimise risk, but also being open to long-term strategic changes in the product mix

Day-trading of OSS / BSS tools might be a fun hobby for those of us who solution them, but is it as beneficial as long-run investment?

I’d love to hear your thoughts and experiences.

How to kill the OSS RFP (part 4)

This is the fourth, and final part (I think) in the series on killing the OSS RFI/RFP process, a process that suppliers and customers alike find to be inefficient. The concept is based on an initiative currently being investigated by TM Forum.

The previous three posts focused on the importance of trusted partnerships and the methods to develop them via OSS procurement events.

Today’s post takes a slightly different tack. It proposes a structural obsolescence that may lead to the death of the RFP. We might not have to kill it. It might die a natural death.

Actually, let me take that back. I’m sure RFPs won’t die out completely as a procurement technique. But I can see a time when RFPs are far less common and significantly different in nature to today’s procurement events.

How??
Technology!
That’s the answer all technologists cite to any form of problem of course. But there’s a growing trend that provides a portent to the future here.

It comes via the XaaS (As a Service) model of software delivery. We’re increasingly building and consuming cloud-native services. OSS of the future, the small-grid model, are likely to consume software as services from multiple suppliers.

And rather than having to go through a procurement event like an RFP to form each supplier contract, the small grid model will simply be a case of consuming one/many services via API contracts. The API contract (eg OpenAPI specification / swagger) will be available for the world to see. You either consume it or you don’t. No lengthy contract negotiation phase to be had.
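To give a feel for how lightweight that consumption step could be, here’s a hedged sketch: fetch a supplier’s published OpenAPI document, see which operations it offers, then call one. The base URL, path and token below are placeholders, not any real supplier’s endpoints.

```python
# Minimal sketch of consuming a published API contract rather than negotiating one.
# The supplier URL, path and credential are hypothetical placeholders.
import requests

BASE = "https://api.example-oss-supplier.com"

# The contract itself is public: anyone can inspect what the service offers
spec = requests.get(f"{BASE}/openapi.json").json()
for path, methods in spec.get("paths", {}).items():
    print(path, list(methods.keys()))

# Consuming a service then becomes an authenticated call against that contract
response = requests.get(
    f"{BASE}/serviceInventory/v1/service",
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
)
print(response.status_code)
```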

Now as mentioned above, the RFP won’t die, but evolve. We’ll probably see more RFPs formed between customers and the services companies that will create customised OSS solutions (utilising one/many OSS supplier services). And these RFPs may not be with the massive multinational services companies of today, but increasingly through smaller niche service companies. These micro-RFPs represent the future of OSS work, the gig economy, and will surely be facilitated by smart-RFP / smart-contract models (like the OSS Justice League model).

How to kill the OSS RFP (part 3)

As the title suggests, this is the third in a series of articles spawned by TM Forum’s initiative to investigate better procurement practices than using RFI / RFP processes.

There’s no doubt the RFI / RFP / contract model can be costly and time-consuming. To be honest, I feel the RFI / RFP process can be a reasonably good way of evaluating and identifying a new supplier / partner. I say “can be” because I’ve seen some really inefficient ones too. I’ve definitely refined and improved my vendor procurement methodology significantly over the years.

I feel it’s not so much the RFI / RFP that needs killing (significant disruption maybe), but its natural extension, the contract development and closure phase that can be significantly improved.

As mentioned in the previous two parts of this series (part 1 and part 2), the main stumbling block is human nature, specifically trust.

Have you ever been involved in the contract phase of a large OSS procurement event? How many pages did the contract end up being? Well over a hundred? How long did it take to reach agreement on all the requirements and clauses in that document?

I’d like to introduce the concept of a Minimum Viable Contract (MVC) here. An MVC doesn’t need most of the content that appears in a typical contract. It doesn’t attempt to predict every possible eventuality during the many years the OSS will survive for. Instead it focuses on intent and the formation of a trusting partnership.

I once led a large, multi-organisation bid response. Our response had dozens of contributors, many person-months of effort expended, included hundreds of pages of methodology and other content. It conformed with the RFP conditions. It seemed justified on a bid that exceeded $250M. We came second on that bid.

The winning bidder responded with a single page that included intent and fixed price amount. Their bid didn’t conform to RFP requests. Whereas we’d sought to engender trust through content, they’d engendered trust through relationships (in a part of the world where we couldn’t match the winning bidder’s relationships). The winning bidder’s response was far easier for the customer to evaluate than ours. Undoubtedly their MVC was easier and faster to gain agreement on.

An MVC is definitely a more risky approach for a customer to initiate when entering into a strategically significant partnership. But just like the sports-star transfer comparison in part 2, it starts from a position of trust and seeks to build a trusted partnership in return.

This is a highly contrarian view. What are your thoughts? Would you ever consider entering into an MVC on a big OSS procurement event?

How to kill the OSS RFP (part 2)

Yesterday’s post discussed an initiative that TM Forum is currently investigating – trying to identify an alternate OSS procurement process to the traditional RFI/RFP/contract approach.

It spoke about trusting partnerships being the (possibly) mythological key to killing off the RFP.

Have you noticed how much fear there is going into any OSS procurement event? Fear from suppliers and customers alike. That’s understandable because there are so many horror stories that both sides have heard of, or experienced, from past procurement events. The going-in position is of excitement, fear and an intention to ensure all loopholes are covered through reams of complex contractual terms and conditions. DBC – death by contract.

I’m a huge fan of Australian Rules Football (aka AFL). I’m lucky enough to have been privy to the inside story behind one of the game’s biggest ever player transfers.

The player, a legend of the game, had a history of poor behaviour. With each new contract, his initial club had inserted more and more T&Cs that attempted to control his behaviour (and protect the club from further public relations fallouts). His final contract was many pages long, with significant discussion required by player and club to reach agreement on each clause.

In the meantime, another club attempted to poach the superstar. Their contract offer fit on a single page and had no behaviour / discipline clauses. It was the same basic pro-forma that everyone on the team signed up to. The player was shocked. He asked where all the other clauses were. The answer from the poaching club was, to paraphrase, “why would we need those clauses? We trust you to do the right thing.” It became a significant component of the new club getting their man. And their man went on to deliver upon that trust, both on-field and off, over many years. He built one of the greatest careers ever.

I wonder whether this is just an outlier example? Could the same simplified contract model apply to OSS procurement, helping to build the trusting partnerships that everyone in the industry desires? As the initiator of the procurement event, does the customer control the first important step towards building a trusting partnership that lasts for many years?

How to kill the OSS RFP

TM Forum is currently investigating ways to procure OSS without resorting to the current RFI / RFP approach. It has published the following survey results.
[Image: “Kill the RFP” survey results]

As it shows, the RFI / RFP isn’t fit for purpose for suppliers or customers. It’s not just the RFI/RFP process either. We could extend this further and include the contract / procurement process that bolts onto the back of the RFP.

I feel that part of the process remains relevant – the part that allows customers to evaluate the supplier/s that are best-fit for the customer’s needs. The part that is cumbersome relates to the time, effort and cost required to move from evaluation into formation of a contract.

I believe that this becomes cumbersome because of trust.

“Every OSS supplier wants to achieve “trusted” status with their customers. Each supplier wants to be the source trusted to provide the best vision of the future for each customer. Similarly, each OSS customer wants a supplier they can trust and seek guidance from.”
Past PAOSS post.

However, OSS contracts (and the RFPs that lead into them) seem to be the antithesis of trust. They generally work on the assumption that every loophole must be closed that a supplier or vendor could leverage to rort the other.

There are two problems with this:

  • OSS transformations are complex projects and all loopholes can never be covered
  • OSS platforms tend to have a useful life of many years, which makes predicting the related future requirements, trends, challenges, opportunities, technologies, etc difficult to plan for

As a result, OSS RFI/RFP/contracts are so cumbersome. Often, it’s the nature of the RFP itself that makes the whole process cumbersome. The OSS Radar analogy shows an alternative mindset.

Mark Newman of TM Forum states, “…the telecoms industry is transitioning to a partnership model to benefit from innovative new technologies and approaches, and to make decisions and deploy new capabilities more quickly.”
The trusted partnership model is ideal. It allows both parties to avoid the contract development phase and deliver together efficiently. The challenge is human nature (ie we come back to trust).

I wonder whether there is merit in using an independent arbiter? A customer uses the RFI/RFP approach to find a partner or partners, but then all ongoing work is evaluated by the arbiter to ensure balance / trust is maintained between customer (and their need for fair pricing, quality products, etc) and supplier (and their need for realistic requirements, reasonable payment times, etc).

I’d love to hear your thoughts and experiences around partnerships that have worked well (or why they’ve worked badly). Have you ever seen examples where the arbitration model was (or wasn’t) helpful?

That’s not where to disrupt your OSS

The diagram below comes from an actual client’s functionality usage profile.
[Image: Long tail of OSS functionality usage]

The x-axis shows the functionality / use-cases. The y-axis shows the number of uses (it could equally represent usefulness or value).

Each big-impact demand (ie individual bars on the left-side of the graph) warrants separate investigation. The bars on the right side (ie the long tail in the red box) don’t. They might be worth investigating if we could treat some/all as a cohort though.
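For anyone wanting to reproduce this kind of analysis on their own usage data, the arithmetic is simple. A minimal sketch with made-up numbers: rank features by usage, then find where the cumulative share crosses, say, 80% – roughly where the long tail (the red box) begins.

```python
# Minimal sketch: rank features by usage and locate the start of the long tail.
# The feature names and counts are invented for illustration.
usage = {
    "alarm_list": 12000,
    "ticket_create": 9500,
    "service_activation": 7000,
    "inventory_search": 5200,
    "custom_report_17": 40,
    "legacy_export": 12,
    "feature_x": 3,
}

ranked = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)
total = sum(usage.values())

cumulative = 0
for position, (feature, count) in enumerate(ranked, start=1):
    cumulative += count
    if cumulative / total >= 0.8:
        print(f"Top {position} of {len(ranked)} features cover 80% of all usage;")
        print(f"the remaining {len(ranked) - position} features form the long tail.")
        break
```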

The left side of the graph represents the functionality / use-cases that have been around for decades. Every OSS has them. They’re so common and non-differentiated that they’re not remotely sexy. Customers / stakeholders aren’t going to be wowed by them. They’re just going to expect them. Our product developers have already delivered that functionality, have moved on and are now looking for new things to work on.

And where does the new stuff reside? Generally as new bars on the right side of the graph. That’s the law of diminishing returns territory right there! You’re unlikely to move the needle from out there.

Does this graph convince you to send your most skilled craftsmen back to do more tinkering / disrupting at the left side of the graph… as opposed to adding new features at the right side? Does it inspire you to dream up exciting cohort management techniques for the red box? Perhaps it even persuades you to cull some of the long-tail features that are chewing up lifecycle effort (eg code management, regression testing, complexity tax)?

If it does convince you, don’t forget to think about how you’re going to market it. How are you going to make the left side sexy / differentiated again? Are you going to have to prove just how much easier, cheaper, faster, more efficient, more profitable, etc it is? That brings us back to the OSS proof-of-worth discussion we had yesterday. It also brings us back to Sutton’s Law – go to where the money is.

The OSS proof-of-worth dilemma

Earlier this week we posted an article describing Sutton’s Law of OSS, which effectively tells us to go where the money is. The article suggested that in OSS we instead tend towards the exact opposite – the inverse-Sutton – we go to where the money isn’t. Instead of robbing banks like Willie Sutton, we break into a cemetery and aimlessly look for the cash register.

A good friend responded with the following, “Re: The money trail in OSS … I have yet to meet a senior exec. / decision maker in any telco who believes that any OSS component / solution / process … could provide benefit or return on any investment made. In telco, OSS = cost. I’ve tried very hard and worked with many other clever people also trying hard to find a way to pitch OSS which overcomes this preconception. BSS is often a little easier … in many cases it’s clear that “real money” flows through BSS and needs to be well cared for.”

He has a strong argument. The cost-out mentality is definitely ingrained in our industry.

We are saddled with the burden of proof. We need to prove, often to multiple layers of stakeholders, the true value of the (often intangible) benefits that our OSS deliver.

The same friend also posited, “The consequence is the necessity to establish beneficial working relationships with all key stakeholders – those who specify needs, those who design and implement solutions and those, especially, who hold or control the purse strings. [To an outsider] It’s never immediately obvious who these people are, nor what are their key drivers. Sometimes it’s ambition to climb the ladder, sometimes political need to “wedge” peers to build empires, sometimes it’s necessity to please external stakeholders – sometimes these stakeholders are vendors or government / international agencies. It’s complex and requires true consultancy – technical, business, political … at all levels – to determine needs and steer interactions.”

Again, so true. It takes more than just technical arguments.

I’m big on feedback loops, but also surprised at how little they’re used in telco – at all levels.

  • We spend inordinate amounts of time building and justifying business cases, but relatively little measuring the actual benefits produced after we’ve finished our projects (or gaining the learnings to improve the next round of business cases)
  • We collect data in our databases, obliviously let it age, realise at some point in the future that we have a data quality issue and perform remedial actions (eg audits, fixes, etc) instead of designing closed-loop improvement cycles that ensure DQ isn’t constantly deteriorating
  • We’re great at spending huge effort in gathering / arguing / prioritising requirements, but don’t always run requirements traceability all the way into testing and operational rollout.
  • etc

Which leads us back to the burden of proof. Our OSS have all the data in the world, but how often do we use it to justify and persuade – to prove?

Our OSS products have so many modules and technical functionality (so much of it effectively duplicated from vendor to vendor). But I’ve yet to see any vendor product that allows their customer, the OSS operators, to automatically gather proof-of-worth stats (ie executive-ready reports). Nor have I seen any integrator build proof-of-worth consultancy into their offer, whereby they work closely with their customer to define and collect the metrics that matter. BTW. If this sounds hard, I’d be happy to discuss how I approach this task.
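To be clear, this isn’t any vendor’s method – but as a hedged sketch of how little might be needed, even a simple before/after cut of data already coursing through the OSS (invented trouble-ticket records, in this case) yields an executive-ready proof-of-worth figure:

```python
# Minimal sketch: an executive-ready "proof-of-worth" figure from ticket data.
# The records and field meanings are invented purely for illustration.
from statistics import mean

tickets = [
    # (handled_by_automation, hours_to_restore)
    (False, 9.5), (False, 7.0), (False, 11.2),
    (True, 1.3), (True, 0.8), (True, 2.1),
]

manual_hours = [hours for automated, hours in tickets if not automated]
auto_hours = [hours for automated, hours in tickets if automated]

print(f"Mean time to restore (manual):    {mean(manual_hours):.1f} h")
print(f"Mean time to restore (automated): {mean(auto_hours):.1f} h")
print(f"Improvement delivered by OSS automation: {1 - mean(auto_hours) / mean(manual_hours):.0%}")
```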

So let me leave you with three important questions today:

  1. Have you also experienced the overwhelming burden of the “OSS = cost” mentality?
  2. If so, do you have any suggestions / experiences on how you’ve overcome it?
  3. Does the proof-of-worth product functionality sound like it could be useful (noting that it doesn’t even have to be a product feature, but a custom report / portal using data that’s constantly coursing through our OSS databases)?

Sutton’s Law of OSS

Willie Sutton was an accomplished bank robber, particularly during the 1920s and 1930s. Named after Willie, Sutton’s Law effectively states, “I go to where the money is,” which was supposedly Sutton’s response to a reporter’s question asking why he robbed banks instead of easier targets.

Interestingly for the OSS industry, we seem to follow the inverse of Sutton’s Law. We go to where the money isn’t. In other words, we mostly attempt to build business cases around the “cost-out” model, helping our customers achieve cost savings. These savings are in the form of automations that lead to reductions in head-count, cost of doing business, etc. Think about the common buzz-words – AI, machine learning, virtualisation, etc. Are they Sutton, or inverse-Sutton?

Truth be told, we do still go to where the money is because our customers (the network operators) are willing to spend money to save even more money. But you can see where I’m coming from can’t you?

Let me pose a question for you: who is more likely to be comfortable spending money, someone who is confident in making money from the investment or someone who is going to save money from an investment?

I’d back Sutton’s Law and respond with the former. But we don’t tend to follow Sutton’s Law very often. It can often be challenging because so many of the benefits of our OSS and BSS are intangible. We’re seen as cost centres because we don’t do a good enough job of showing how important we are at operationalising everything that happens in a service provider’s network (and business).

At TM Forum’s DTA event a couple of weeks ago, I was pleased to see that some of the big telco API initiatives (eg Telkomsel, Telstra’s Network as a Service [NaaS] and China Mobile’s Data Security and Privacy Management Framework) are starting to make a real impact. The API model represents our strongest industry-wide push towards revenue-based business cases in years (that I can remember anyway).

Monty Hong of Telkomsel (Indonesia) made a presentation that provides a useful guide for future telco value-stream / revenue-models, effectively showing Sutton’s Law at play:
http://passionateaboutoss.com/how-oss-bss-facilitated-telkomsels-structural-revenue-changes.

The API model is an interesting one though. As well as revenue-in, it also potentially represents a cost out model (ie reduced cost of sales), a platform play (ie leveraging the network effect by allowing partners to build their own revenues on top), but on the downside also potentially triggers revenue cannibalisation.

Personally, I’m considering Sutton’s Law more in terms of our customers’ customers (ie end users of communication services, like the gamers in the Monty Hong link) rather than customers (ie the comms service providers that want to reduce costs).

I’d love to hear about your perception of Sutton’s Law in OSS. Where do you think the money is?

The Jeff Bezos prediction for OSS

“If we start to focus on ourselves, instead of focusing on our customers, that will be the beginning of the end … We have to try and delay that day for as long as possible.”
Jeff Bezos

Jeff Bezos recently predicted that Amazon is likely to fail and/or go bankrupt at some point in time, as history has eventually proven for most high-flying companies. The quote above was part of that discussion.

I’ve worked with quite a few organisations that have been in the midst of some sort of organisational re-structure – upsizing, downsizing, right-scaling, transforming – whatever the words that might be in effect. Whilst it would be safe to say that all of those companies were espousing being focused on their customers, the organisational re-structures always seem to cause inward-facing behaviour. It’s human nature that change triggers fear, concerns about job security, opportunities to expand empires, etc.

And in these types of inward-facing environments in particular, I’ve seen some really interesting decisions made around OSS projects. When making these decisions, customer experience has clearly been a long way down the list of priorities!! And in the current environment of significant structural change in the telco industry, these stimulants of internal-facing behaviour appear to be growing.

Whilst many people want to see OSS projects as technical delivery solutions and focus on the technology, the people and culture aspects of the project can often be even more challenging. They can also be completely underestimated.

What have your experiences been? Do you agree that customer-facing vision, change management and stakeholder management can be just as vital as technical brilliance on an OSS implementation team?

The culture required to support Telkomsel’s OSS/BSS transformation

Yesterday’s post described the ways in which Telkomsel has strategically changed their value-chain to attract revenues with greater premiums than the traditional model of a telco. They’ve used a new digital core and an API framework to help facilitate their business model transformation. As promised yesterday, we’ll take a slightly closer look at the culture of Telkomsel’s transformation today.

Monty Hong of Telkomsel presented the following slides during a presentation at TM Forum’s DTA (Digital Transformation Asia) last week.

The graph below shows the need for patience and ongoing commitment to major structural transformations like the one Telkomsel underwent.

[Image: Telkomsel's commitment to transformation]

The curve above tends to represent the momentum and morale I’ve felt on most large OSS projects. Unfortunately, I’ve also been involved in projects where project sponsors haven’t stayed the journey beyond the dip (Q4/5 in the graph above) and haven’t experienced the benefits of the proposed project. This graph articulates well the message that change management and stakeholder / sponsor champions are important, but often overlooked, components of an OSS transformation.

The diagram below helps to articulate the benefits of an open API model being made accessible to external market-places. We’re entering an exciting time for OSS, with previously hidden, back-end telco functionality now being increasingly presented to the market (if even only as APIs into the black-box).

[Image: Telkomsel's internal/external API influences]

Amongst many other benefits, it helps to bring the customer closer to implementers of these back-end systems.

How OSS/BSS facilitated Telkomsel’s structural revenue changes

The following two slides were presented by Monty Hong of Indonesia’s Telkomsel at Digital Transformation Asia 2018 last week. They provide a fascinating insight into the changing landscape of comms revenues that providers are grappling with globally and the associated systems decisions that Telkomsel has made.

The first shows the drastic declines in revenues from Telkomsel’s traditional telco products (orange line), contrasted with the rapid rise in revenues from content such as video and gaming.
[Image: Telkomsel revenue curve]

The second shows where Telkomsel is repositioning itself into additional segments of the content value-chain (red chevrons at top of page show where Telkomsel is playing).
[Image: Telkomsel gaming ecosystem]

Telkomsel has chosen to transform its digital core to specifically cater for this new revenue model with one API ecosystem. One of the focuses of this transformation is to support a multi-speed architectural model. Traditional back-end systems (eg OSS/BSS and systems of record) are expected to rarely change, whilst customer-facing systems are expected to be highly agile to cater to changing customer needs.

More about the culture of this change tomorrow.

OSS that capture value, not just create it

I’ve just had a really interesting first day at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia ). The quality of presentations was quite high. Some great thought-provoking ideas!!

Nik Willetts kicked off his keynote with the following quote, which I’m paraphrasing, “Telcos need to start capturing value, not just creating it as they have for the last decade.”

For me, this is THE key takeaway for this event, above any of the other interesting technical discussions from day 1 (and undoubtedly on the agenda for the next 2 days too).

The telecommunications industry has made a massive contribution to the digital lifestyle that we now enjoy. It has been instrumental in adding enormous value to our lives and our economy. But all the while, telecommunications providers globally have been experiencing diminishing profitability and share-of-wallet (as described in this earlier post). Clearly the industry has created enormous value, but hasn’t captured as much as it would’ve liked.

The question to ask is how will our thinking and our OSS/BSS stacks help to contribute to capturing more value for our customers. As described in the share of wallet post above, the premium end of the value chain has always been in the content (think in terms of phone conversations in days gone by, or the myriad of comms techniques today such as email, live chat, blogs, etc, etc). That’s what the customer pays for – the experience – not the networks or systems that facilitate it.

Nik’s comments made me think of Andrew Carnegie. Monopolies such as the telecommunications organisations of the past and Andrew Carnegie’s steel business owned vast swathes of the value chain (Carnegie Steel Company owned the mines which extracted the raw materials needed to make steel, controlled the transportation used to deliver the materials and the product, and ran the mills used for steel production). Buyers didn’t care for the mines or mills or transportation. Customers were paying for the end product as it is what helped them achieve their goals, whether that was the railway tracks needed by the railroads or the beams needed by construction companies.

The Internet has allowed enormous proliferation of the premium-end of the telecommunications value chain. It’s too late to stuff that genie back into the bottle. But to Nik’s further comment, we can help customers achieve their goals by becoming their “do-it-yourself” digital partners.

Our customers now look to platforms like Facebook, Instagram, Google, WordPress, Amazon, etc to build their marketing, order capture, product / content delivery, commercial transactions, etc. I really enjoyed Monty Hong‘s presentation that showed how Telkomsel’s OSS/BSS is helping to embed Telkomsel into customers’ digital lifestyles / value-chains. It’s a perfect example of the biggest OSS loser proof discussed in yesterday’s post.

Pitching an OSS? Don’t call it OSS.

If you asked me how to sell cybersecurity, I wouldn’t call it cybersecurity.” The raw truth of the statement hit me like a lightning bolt between the eyes. Cybersecurity might loosely describe what we do, and we tell people it’s what we’re selling, but it’s not what people buy.
Safety. Assurance. Peace of mind. Confidence. These are the kinds of things that people buy, concepts which ordinary people can understand and relate to because they are feelings which they have experienced themselves. Cybersecurity is not a next gen firewall, or multi-layered endpoint protection with machine learning and threat sandbox technology. Cybersecurity is not risk management or ISO27001 policies. Cybersecurity is being able to use the Internet in any way I can imagine without having to worry I might lose my family photos, get robbed, or get in trouble with my boss. If you could (honestly) sell me “worry free Internet”, I’d buy it in a heartbeat, and so would everyone you know.”
Corch X, here.

Sound familiar?
If you asked me how to sell OSS, I wouldn’t call it OSS. Doh! Now you enlighten me… after I’ve already chosen the domain name, PassionateAboutOSS.com. After I’ve already written over 2,000 posts on topics like orchestration, microservices, cloud-native, DevOps, and every other technical buzzword. Time to start again from scratch.

One thing in my favour is that you, the audience I’m interacting with, also speak in the same jargon. These are the terms we use to communicate with each other. To get things started. To get things done. To get things delivered.

That’s all fine if we’re only interacting with like-minded OSS experts. However, of the thousands of people who interact with our OSS / BSS, only a small percentage are OSS experts. A majority of people use the tools rather than designing, building or commissioning them.

The people who use the tools have a huge range of job roles and reasons for needing to use our OSS / BSS. Just like with cybersecurity, the core reasons could be Safety. Assurance. Peace of mind. Confidence. But they might also include Speed. Efficiency. Reliability. Repeatability. Simplicity. Monetisation. Insightful. And more.

The challenge we have is that so much of the benefit that our OSS and BSS deliver is intangible. We might talk about orchestration delivering speed, simplicity, reliability, etc. But how do we establish a more tangible link?

How do we achieve the equivalent of what the “Intel Inside” marketing ploy delivered, which made people associate an otherwise obscure integrated circuit with a premium feature to consider when they bought their next computing device? How do we ensure that people know that our OSS / BSS is the master of puppets that makes our networks dance? It’s our OSS / BSS that are pulling all the strings of operationalisation, connecting customers with networks.

For fear of OSS investment

Friday’s post discussed three analogies about the challenges of performing an OSS pivot.

The biggest challenge in initiating the transformation / replacement of any significant OSS is fear. There are many OSS out there whose “owners” want to change and need to change… but fear changing because a significant pivot would mean a “bet the farm” decision.

The fear is completely understandable. These are highly complex projects, with many potential pitfalls, that consume massive amounts of resources (time, money, people). The risks can be huge for sponsors / stakeholders / investors. Failure of these projects can be career changing. The upside potential rarely balances the downside risk.

So, the only choice we have is to present pivots that aren’t “bet the farm” decisions.

The delivery approach of a bet the farm pivot tends to look like this:
[Image: The bet-the-farm OSS transformation approach]

The less risky, dependency-reduced, stepping-stone transformation tends to look a bit like this, but probably with a lot more verticals, as described here:
[Image: The stepping-stone OSS transformation approach]