Becoming the Microsoft of the OSS industry

On Tuesday we pondered, “Would an OSS duopoly be a good thing?”

That post cited two operating systems amongst other famous duopolies:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)

Yesterday we provided an example of why consolidation is so much more challenging for OSS companies than, say, for Coke or Pepsi.

But maybe an operating system model could represent a path to overcome many of the challenges faced by the OSS industry. What if there were a Linux for OSS?

  • One where the drivers for any number of device types are already handled and we don’t have to worry about south-bound integrations anymore (mostly). When new devices come onto the market, they need to have agents designed to interact with the common, well-understood agents on the operating system
  • One where the user interface is generally defined and can be built upon by any number of other applications
  • One where data storage and handling is already pre-defined and additional utilities can be added to make data even easier to interact with
  • One where much of the underlying technical complexity is already abstracted and the higher-value functionality can be built on top

It seems to me to be a great starting point for solving many of the items listed as awaiting exponential improvement in the OSS Call for Innovation manifesto.

Interestingly, I can’t foresee any of today’s biggest OSS players developing such an operating system without a significant mindset shift. They have the resources to become the Microsoft / Apple / Google of the OSS market, but appear to be quite closed-door in their thinking, waiting for disruption from elsewhere.

Could ONAP become the platform / OS?

Let me relate this by example. TM Forum recently ran an event called DTA in Kuala Lumpur. It was an event for sharing ideas, starting conversations and letting the market know all about their products. All of the small-to-medium suppliers were happy to talk about their products, services and offerings. By contrast, I was ordered out of the rooms of one leading, but some might say struggling, vendor because I was only a walk-up. A walk-up representing a potential customer of theirs, but they didn’t even ask how I might be of value to them (or vice versa).

A sad example of the challenges facing OSS supplier consolidation

Yesterday’s post, “Would an OSS duopoly be a good thing?” talked about the benefits and challenges of consolidation of the number of suppliers in the OSS market.

I also promised that today I’ll share an example of the types of challenge that can be faced.

An existing OSS supplier (Company A) had developed a significant foothold in the T1 telco market around Asia. They had quite a wide range of products from their total suite installed at each of these customers.

Another OSS supplier (Company X) then acquired Company A. I wasn’t privy to the reasoning behind the purchase but I can surmise that it was a case of customer and revenue growth, primarily to up-sell Company X’s complementary products into Company A’s customers. There was a little bit of functionality overlap, but not a huge amount. In fact Company A’s functionality, if integrated into Company X’s product suite, would’ve given them significantly greater product reach.

To date, the acquisition hasn’t been a good one for Company X. They haven’t been able to up-sell to any of Company A’s existing customers, probably because there are some significant challenges relating to the introduction of that product into Asia. Not only that, but Company A’s customers had been expecting greater support and new development under new management. When it didn’t arrive (there were no new revenues to facilitate Company X investing in it), those customers started to plan OSS replacement projects.

I understand some integration efforts were investigated between Company A and Company X products, but it just wasn’t an easy fit.

As you can see, quite a few of the challenges of consolidation that were spoken about yesterday were present in this single acquisition.

Would an OSS duopoly be a good thing?

The products/vendors page here on PAOSS has a couple of hundred entries. We’re currently working on an extended list that will almost double that number. More news on that shortly.

The level of fragmentation fascinates me, but if I’m completely honest, it probably disappoints me too. It’s great that it provides the platform for a long tail of innovation. It’s exciting that so many niche opportunities exist. But it disappoints me because there’s so much duplication. How many alarm / performance / inventory / etc management tools are there? Can you imagine how many developer hours have been duplicated on similar feature development between products? And because there are so many different patterns, the total number of integration variants across the industry puts a huge integration tax on us all.
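To make that integration tax concrete, here’s a minimal Python sketch (the vendor names and every field name below are invented for illustration): the same network alarm arrives in two vendor-specific shapes, and each shape needs its own hand-written adapter into a common internal model.

```python
# Toy illustration of the "integration tax": one real-world event,
# two hypothetical vendor wire formats, two adapters to maintain.

def from_vendor_a(payload: dict) -> dict:
    """Adapter for hypothetical Vendor A's alarm format."""
    return {
        "device": payload["ne_name"],
        "severity": payload["sev"].upper(),
        "description": payload["alarm_text"],
    }

def from_vendor_b(payload: dict) -> dict:
    """Adapter for hypothetical Vendor B's alarm format."""
    return {
        "device": payload["managedElement"],
        "severity": payload["perceivedSeverity"].upper(),
        "description": payload["probableCause"],
    }

# The same link-down event, as each vendor would emit it:
alarm_a = {"ne_name": "MEL-RTR-01", "sev": "critical", "alarm_text": "Link down"}
alarm_b = {"managedElement": "MEL-RTR-01", "perceivedSeverity": "Critical", "probableCause": "Link down"}

normalised = [from_vendor_a(alarm_a), from_vendor_b(alarm_b)]
print(normalised[0] == normalised[1])  # True: identical event, duplicated adapter effort
```

Multiply those adapters across hundreds of products, protocols and OSS stacks and the scale of the duplicated developer effort becomes apparent.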

Compare this to the strength of duopoly markets such as:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)
  • Boeing / Airbus (commercial aircraft)
  • Visa / Mastercard (credit cards / payments)
  • Coca Cola / Pepsi (beverages, etc)

These duopolies have allowed for consolidation of expertise, effort, revenues/profits, etc. Most also provide a platform upon which smaller organisations / suppliers can innovate without having to re-invent everything (eg applications built upon operating systems, parts for aircraft, etc).

Buuuut……

Then I think about the impediments to achieving drastic consolidation through mergers and acquisitions (M&A) in the OSS industry.

There are opportunities to find complementary product alignment because no supplier services the entire OSS estate (where I’m using TM Forum’s TAM as a guide to the breadth of the OSS estate). However, it would be much harder to approach duopoly in OSS for a number of reasons:

  • Almost every OSS implementation is unique. Even if some of the products start out in common, they quickly become customised in terms of integrations, configurations, processes, etc
  • Interfaces to networks and other systems can vary so much. Modern EMS / devices / systems are becoming a little more consistent with IP, SNMP, Web APIs, etc. However, our networks still tend to carry a lot of legacy protocols that we have to interface with
  • Consolidation of product lines becomes much harder, partly because of the integrations above, but partly because the functionality-sets and workflows differ so vastly between similar products (eg inventory management tools)
  • Similarly, architectures and build platforms (eg programming languages) are not easily compatible
  • Implementations are often regional for a variety of reasons – regulatory, local partnerships / relationships, language, corporate culture, etc
  • Customers can be very change-averse, even when they’re instigating the change

By contrast, we regularly hear of Coca Cola buying up new brands. It’s relatively easy for Coke to add new product lines without having much impact on existing lines.

We also hear about Google’s acquisitions, adding complementary products into its product line or simply acquiring new talent / expertise. There are also acquisitions for the purpose of removing competitors or buying into customer bases.

Harder in all cases in the OSS industry.

Tomorrow we’ll share a story about an M&A attempting to buy into a customer base.

Then on Thursday, a story awaits on a possibly disruptive strategy towards consolidation in OSS.

Nobody dabbles at dentistry

“There are some jobs that are only done by accredited professionals.
And then there are most jobs, jobs that some people do for fun, now and then, perhaps in front of the bathroom mirror.
It’s difficult to find your footing when you’re a logo designer, a comedian or a project manager. Because these are gigs that many people think they can do, at least a little bit.”
Seth Godin here.

I’d love to plagiarise the entire post from Seth above, but instead suggest you have a look at the other pearls of wisdom he shares in the link above.

So where does OSS fit in Seth’s thought model? Well, you don’t need an accreditation like a dentist does. Most of the best I’ve met haven’t had any OSS certifications to speak of.

Does the layperson think they can do an OSS role? Most people have never heard of OSS, so I doubt they would believe they could do a role as readily as they could see themselves being logo designers. But the best I’ve met have often come from fields other than telco / IT / network-ops.

One of my earliest OSS projects was for a new carrier in a country that had just deregulated. They were the second fixed-line carrier in this country and tried to poach staff from the incumbent. Few people crossed over. To overcome this lack of experience, the carrier built an OSS team that consisted of a mathematician, an architect, an automotive engineer, a really canny project manager and an assortment of other non-telco professionals.

The executives of that company clearly felt they could develop a team of experts (or just had no choice but to try). The strategy didn’t work out very well for them. It didn’t work out very well for us either. We were constantly trying to bring their team up to speed on the fundamentals in order to use the tools we were developing / delivering (remembering that as one of my first OSS projects, I was still desperately trying to bring myself up to speed – still am for that matter).

As Seth also states, “If you’re doing one of these non-dentist jobs, the best approach is to be extraordinarily good at it. So much better than an amateur that there’s really no room for discussion.” That takes hunger (an appetite to learn without an accredited syllabus).

It also needs humility though. Even the most extraordinarily good OSS proponents barely scratch the surface of all there is to learn about OSS. It’s the breadth of sub-expertise that probably explains why there is no single accreditation that covers all of OSS.

The bird’s wings analogy for OSS RFPs

“A bird sitting on a tree is never afraid of the branch breaking, because her trust is not in the branch but in her own wings.”
Unknown.

Last month, we posted a series entitled “How to kill the RFP.” The RFP is a common mechanism for reaching a purchasing agreement between OSS provider and network operator. Unfortunately, it’s deemed to be a non-ideal approach by many buyers and sellers alike. One of the key concepts discussed was trust. In the context of the quote above, the branch is the contract formed out of the RFP (Request for Proposal) and the bird’s wings represent the partnership being formed.

We (and our procurement teams) spend a lot of time in the formation of the contract. We want to fortify the branch to ensure it never breaks. We build massive scaffolding around it. But just like the bird analogy, the initial contract is just a starting point. The bird may wish to come back to the branch / contract from time to time. However, over the (hopefully) 10+ year lifespan of the OSS, the contract will never be able to accommodate all possible eventualities (flight paths).

Focus on building trust in the wings (the relationship) and have faith they will overcome any frailties that appear in the branch (the contract) in the long run.

There may be breaches of trust from either / both sides during the lifespan of the relationship. But the end-game should be really clear – an early OSS churn is a bad outcome for both supplier and customer.

OSS come in all shapes and sizes

As the OSS vendors / suppliers page here on PAOSS shows, there are a LOT of different OSS options, making it an extremely fragmented market. But there’s something of a reason for that fragmentation – customer requirements for OSS come in all shapes and sizes. Here are four of the major categories that I’ve been lucky enough to work on.

Tier 1 telcos – the OSS of these organisations are best classified as having to cope with scale. Scale comes in multiple dimensions. The number of network devices under management tends to be large, as does the number of device types. The number of subscribers and customer services tends to be large, not to mention the large amount of change occurring on a daily basis. The number of process variants and system integrations also tends to be large. And being at scale means that they’re more likely to be able to justify the cost of customisations and automations – either to off-the-shelf products or via purpose-built tools. Budgets, both CAPEX and OPEX, also tend to be large. Except where niche slices of the total OSS suite are being delivered, the vendors that service this market are also large, in terms of revenues and in the number of services staff available to service each customer’s unique needs. In the case of the telco, the business (and revenue model) is built around the network, so it gets the clear attention of the organisation’s executives.

Enterprise customers – these OSS tend to be at the other end of the spectrum, even when the enterprise is large (eg banks). Networks tend to be more homogeneous, being IT/IP-centric. Services tend to be less customer-specific (ie for journaling costs at a business unit level rather than individual subscribers) but follow ITSM process models, so the service management daily delta is not at the same scale as the Tier 1 telco. For enterprise customers, the network is rarely core business, even if it is mission-critical to the business. As such, attention and budgets tend to be much smaller. In turn, this means that the smaller, self-service or open-source OSS products / suppliers tend to be present in this segment.

Then there are two categories of organisation that fit between the two previous ends of the spectrum:

Tier 2/3 telcos, MVNOs and data centres – Similar to the Tier-1 telco, just not at the same scale, which has implications on the nature of their OSS. They generally need all the same types of OSS tools as the T1s, just not catering for the same number of variants. Due to cost constraints, there may be one or a few significant OSS building blocks such as inventory, assurance or orchestration, but often there will also be enterprise-grade and/or open-source products in their OSS stack. CAPEX and OPEX budgets are smaller, so clever jack-of-all-trades OSS experts are often on the operational teams delivering sophisticated solutions on shoe-string budgets. Some of the best OSS experts I’ve come across can trace their roots back to these origins.

Utilities – the OSS of these organisations are a fascinating mix of the first two categories above, because enterprise-grade OSS often aren’t really fit-for-purpose and carrier-grade OSS don’t quite suit either. Except in the case of multi-utilities (eg power + telco), these organisations tend to have very little service management change, mainly because they tend to have few to no external customers. This makes them similar to enterprise OSS. But like telcos, they often have networks that are more varied than the typical IT/IP-centric networks under management in enterprise-land. They often have less common network topologies and protocols, including older and even proprietary models that enterprise-grade OSS rarely support without expensive mediation. Just like the enterprise, the telco network (and hence the OSS) of a utility is not core business and can’t be justified through driving incremental revenues. However, it is generally mission-critical to the core business (eg tele-protection circuits are in place to ensure resilience of the electricity supply across the power network). As such, telco network health / reliability and asset management tend to be the main focus of these OSS. And whereas telcos can delegate some responsibility for network security to their customers (using the dumb-pipe excuse), utilities bear full responsibility for the security of their telco networks and the critical infrastructure that these networks and OSS tools support.

These are only broad categories and there are more than 50 shades of grey in between. Are there any other broad categories that you feel I’m missing?

What to read from a simple little OSS job advertisement from AWS

Not sure if you noticed, but AWS posted this job advertisement on LinkedIn a couple of days ago – Business Portfolio Leader – Telecom OSS/BSS Solutions.

The advertisement includes the following text:
“Amazon Web Services (AWS) is leading the next paradigm shift in computing and is looking for a world class candidate to manage an elite portfolio of strategic AWS technology partners focused on the Operation support System (OSS) and Business Support System (BSS) applications within telecommunications segment. Your job will be to use these strategic partners to develop OSS and BSS applications on AWS infrastructure and platform.”

How do you read this advertisement? I have a few different perspectives to pose to you:

I can’t predict AWS’ future success with this initiative, but I’m assuming they’re creating the role because they see a big opportunity that they wish to capture. They have plenty of places they could otherwise invest, so they must believe the opportunity is big (eg the industry of OSS suppliers selling to CSPs is worth multi-billions of dollars and is waiting to be disrupted).

OSS/BSS are typically seen by CSPs as a very expensive (and risky) cost of doing business. I’m certain there’s a business model for any organisation (possibly AWS and its tech partners) that can significantly improve the OSS/BSS delivery costs/risks for CSPs.

The ad identifies CSPs (specifically the term, “major telecom infrastructure providers”) as the target customer. You could argue that the CSPs won’t want to support a competitor in AWS. But the CSPs I’m dealing with can’t get close to matching AWS cost structures, so they’re partnering with AWS and others anyway. Not just for private cloud, but for public and hybrid cloud too. The clip-the-ticket / partnership selling model appears to be becoming more common for telcos globally, so the fear-of-competition barrier “seems” to be coming down a little.

The other big challenge facing the role is network and data security. What’s surprised me most are core network services like directory services (used for internal authentication/AAA purposes). I never thought I’d see these outsourced to third-party cloud providers, but have seen the beginnings of it recently. If CSPs consume those, then OSS/BSS must be up for grabs at some CSPs too. For example, I’d imagine that OSS/BSS tools were amongst the 1,000 business apps that Verizon is moving to AWS.

The really interesting future consideration could be the advanced innovation that AWS et al could bring to the OSS space, and in ways that the telcos and OSS suppliers simply can’t. This recent post showed Google’s intent to bring AI to network operations. It could revolutionise the OSS/BSS industry. Not just for CSPs, but for their customers as well (eg their enterprise-grade OSS). Could it even represent another small step towards the OSS Doomsday Scenario posed here?

And just who are the “strategic partners” that AWS is referring to? I assume this old link might give at least one clue.

I’m certainly no Nostradamus, so I’d love to get your opinions on what ramifications this strategic hire will have on the OSS/BSS industry we know today.

How to kill the OSS RFP (part 4)

This is the fourth, and final part (I think) in the series on killing the OSS RFI/RFP process, a process that suppliers and customers alike find to be inefficient. The concept is based on an initiative currently being investigated by TM Forum.

The previous three posts focused on the importance of trusted partnerships and the methods to develop them via OSS procurement events.

Today’s post takes a slightly different tack. It proposes a structural obsolescence that may lead to the death of the RFP. We might not have to kill it. It might die a natural death.

Actually, let me take that back. I’m sure RFPs won’t die out completely as a procurement technique. But I can see a time when RFPs are far less common and significantly different in nature to today’s procurement events.

How??
Technology!
That’s the answer all technologists cite to any form of problem of course. But there’s a growing trend that provides a portent to the future here.

It comes via the XaaS (As a Service) model of software delivery. We’re increasingly building and consuming cloud-native services. OSS of the future, the small-grid model, are likely to consume software as services from multiple suppliers.

And rather than having to go through a procurement event like an RFP to form each supplier contract, the small grid model will simply be a case of consuming one/many services via API contracts. The API contract (eg an OpenAPI / Swagger specification) will be available for the world to see. You either consume it or you don’t. No lengthy contract negotiation phase to be had.
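As a minimal sketch of what “consume it or don’t” looks like in practice (the service name, endpoints and fields below are hypothetical, and the document is heavily trimmed compared to a real OpenAPI specification): a published, machine-readable contract lets a would-be consumer discover exactly what a service offers, with no negotiation phase at all.

```python
# A hypothetical, heavily trimmed OpenAPI-style contract for an
# imaginary inventory service, published for the world to see.
contract = {
    "openapi": "3.0.0",
    "info": {"title": "Hypothetical Inventory Service", "version": "1.0"},
    "paths": {
        "/devices": {"get": {"summary": "List managed devices"}},
        "/devices/{id}/ports": {"get": {"summary": "List ports on a device"}},
    },
}

def list_operations(spec: dict) -> list:
    """Enumerate the operations a consumer can rely on, straight from the contract."""
    ops = []
    for path, methods in spec["paths"].items():
        for method, detail in methods.items():
            ops.append(f"{method.upper()} {path} - {detail['summary']}")
    return sorted(ops)

for op in list_operations(contract):
    print(op)
```

The key design point is that the contract itself is the product interface: there is nothing to negotiate, only a decision about whether the published operations meet your needs.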

Now as mentioned above, the RFP won’t die, but evolve. We’ll probably see more RFPs formed between customers and the services companies that will create customised OSS solutions (utilising one/many OSS supplier services). And these RFPs may not be with the massive multinational services companies of today, but increasingly through smaller niche service companies. These micro-RFPs represent the future of OSS work, the gig economy, and will surely be facilitated by smart-RFP / smart-contract models (like the OSS Justice League model).

How to kill the OSS RFP (part 3)

As the title suggests, this is the third in a series of articles spawned by TM Forum’s initiative to investigate better procurement practices than using RFI / RFP processes.

There’s no doubt the RFI / RFP / contract model can be costly and time-consuming. To be honest, I feel the RFI / RFP process can be a reasonably good way of evaluating and identifying a new supplier / partner. I say “can be” because I’ve seen some really inefficient ones too. I’ve definitely refined and improved my vendor procurement methodology significantly over the years.

I feel it’s not so much the RFI / RFP that needs killing (significant disruption maybe), but its natural extension, the contract development and closure phase that can be significantly improved.

As mentioned in the previous two parts of this series (part 1 and part 2), the main stumbling block is human nature, specifically trust.

Have you ever been involved in the contract phase of a large OSS procurement event? How many pages did the contract end up being? Well over a hundred? How long did it take to reach agreement on all the requirements and clauses in that document?

I’d like to introduce the concept of a Minimum Viable Contract (MVC) here. An MVC doesn’t need most of the content that appears in a typical contract. It doesn’t attempt to predict every possible eventuality during the many years the OSS will survive for. Instead it focuses on intent and the formation of a trusting partnership.

I once led a large, multi-organisation bid response. Our response had dozens of contributors, many person-months of effort expended, included hundreds of pages of methodology and other content. It conformed with the RFP conditions. It seemed justified on a bid that exceeded $250M. We came second on that bid.

The winning bidder responded with a single page that included intent and fixed price amount. Their bid didn’t conform to RFP requests. Whereas we’d sought to engender trust through content, they’d engendered trust through relationships (in a part of the world where we couldn’t match the winning bidder’s relationships). The winning bidder’s response was far easier for the customer to evaluate than ours. Undoubtedly their MVC was easier and faster to gain agreement on.

An MVC is definitely a more risky approach for a customer to initiate when entering into a strategically significant partnership. But just like the sports-star transfer comparison in part 2, it starts from a position of trust and seeks to build a trusted partnership in return.

This is a highly contrarian view. What are your thoughts? Would you ever consider entering into an MVC on a big OSS procurement event?

How to kill the OSS RFP (part 2)

Yesterday’s post discussed an initiative that TM Forum is currently investigating – trying to identify an alternate OSS procurement process to the traditional RFI/RFP/contract approach.

It spoke about trusting partnerships being the (possibly) mythological key to killing off the RFP.

Have you noticed how much fear there is going into any OSS procurement event? Fear from suppliers and customers alike. That’s understandable because there are so many horror stories that both sides have heard of, or experienced, from past procurement events. The going-in position is of excitement, fear and an intention to ensure all loopholes are covered through reams of complex contractual terms and conditions. DBC – death by contract.

I’m a huge fan of Australian Rules Football (aka AFL). I’m lucky enough to have been privy to the inside story behind one of the game’s biggest ever player transfers.

The player, a legend of the game, had a history of poor behaviour. With each new contract, his initial club had inserted more and more T&Cs that attempted to control his behaviour (and protect the club from further public relations fallouts). His final contract was many pages long, with significant discussion required by player and club to reach agreement on each clause.

In the meantime, another club attempted to poach the superstar. Their contract offer fit on a single page and had no behaviour / discipline clauses. It was the same basic pro-forma that everyone on the team signed up to. The player was shocked. He asked where all the other clauses were. The answer from the poaching club was, to paraphrase, “why would we need those clauses? We trust you to do the right thing.” It became a significant component of the new club getting their man. And their man went on to deliver upon that trust, both on-field and off, over many years. He built one of the greatest careers ever.

I wonder whether this is just an outlier example? Could the same simplified contract model apply to OSS procurement, helping to build the trusting partnerships that everyone in the industry desires? As the initiator of the procurement event, does the customer control the first important step towards building a trusting partnership that lasts for many years?

How to kill the OSS RFP

TM Forum is currently investigating ways to procure OSS without resorting to the current RFI / RFP approach. It has published the following survey results.
Kill the RFP.

As it shows, the RFI / RFP isn’t fit for purpose for suppliers or customers. And it’s not just the RFI / RFP process. We could extend this further to include the contract / procurement process that bolts onto the back of the RFP.

I feel that part of the process remains relevant – the part that allows customers to evaluate the supplier/s that are best-fit for the customer’s needs. The part that is cumbersome relates to the time, effort and cost required to move from evaluation into formation of a contract.

I believe that this becomes cumbersome because of a lack of trust.

“Every OSS supplier wants to achieve “trusted” status with their customers. Each supplier wants to be the source trusted to provide the best vision of the future for each customer. Similarly, each OSS customer wants a supplier they can trust and seek guidance from.”
Past PAOSS post.

However, OSS contracts (and the RFPs that lead into them) seem to be the antithesis of trust. They generally work on the assumption that every loophole that either party could leverage to rort the other must be closed.

There are two problems with this:

  • OSS transformations are complex projects and all loopholes can never be covered
  • OSS platforms tend to have a useful life of many years, which makes predicting the related future requirements, trends, challenges, opportunities, technologies, etc difficult to plan for

As a result, OSS RFI / RFP / contract processes are cumbersome. Often, it’s the nature of the RFP itself that makes the whole process cumbersome. The OSS Radar analogy shows an alternative mindset.

Mark Newman of TM Forum states, “…the telecoms industry is transitioning to a partnership model to benefit from innovative new technologies and approaches, and to make decisions and deploy new capabilities more quickly.”
The trusted partnership model is ideal. It allows both parties to avoid the contract development phase and deliver together efficiently. The challenge is human nature (ie we come back to trust).

I wonder whether there is merit in using an independent arbiter? A customer uses the RFI/RFP approach to find a partner or partners, but then all ongoing work is evaluated by the arbiter to ensure balance / trust is maintained between customer (and their need for fair pricing, quality products, etc) and supplier (and their need for realistic requirements, reasonable payment times, etc).

I’d love to hear your thoughts and experiences around partnerships that have worked well (or why they’ve worked badly). Have you ever seen examples where the arbitration model was (or wasn’t) helpful?

Why does everyone know an operator’s business better than the operator?

The headline today blatantly steals from a post by William Webb. You can read his entire, brilliant post here. All quotes below are from the article.

William’s concept aligns quite closely with yesterday’s article regarding external insights that don’t quite marry up with the real situation faced by operators.

“At the Great Telco Debate this week there was no shortage of advice for operators. Some counselled them to move up the value chain or branch out into related areas. Others to build “it” so that they would come… But there were no operators actually talking about doing these things.”
Funny because it’s true.

“In most industries the working assumption is that a company knows its customers better than outsiders… But this assumption of knowing your customers seems not to hold in the mobile telecoms industry. It appears that the industry assumes that the mobile operators do not know their customers, but that they – the suppliers generally – understand them better.”
Interesting. So this is a case of the suppliers purportedly knowing their customer (the operators) but also their customer’s customer (the end-users of comms services). This concept is almost definitely true of network suppliers. I don’t feel that this is common for OSS suppliers though. In fact it’s an area that could definitely be improved upon – an awareness of our customers’ customers.

“At the Great Telco Debate, Nokia spoke about how the telcos needed to be bold, to build networks [eg 5G] for which there was no current business plan on the basis that revenue streams would materialise. Telling your customer to do something which cannot be justified economically seems a risky way to ensure a good long-term relationship.”

I actually laughed out loud at the truth behind this one. So many related stories to tell. Another day perhaps!

“The operators have been advised for decades that they are in a business that is increasingly becoming a utility and that they need to “move up the value chain” or find some other growth opportunity. This advice seems to be predicated on the view that nobody wants to be a utility, that it is essential for organisations to grow, and that moving around the value chain is easy to do. All merit further investigation. Utility businesses are stable, low-risk and normally profitable. Many companies do not grow but thrive nevertheless. But most problematic, mobile operators have been trying to “move up the value chain” for many years, with conspicuous lack of success.”

The CSP vs DSP business model. There is absolutely a position for both speeds in the telco marketplace. Which is better? Depends on your investment objectives and risk/reward profile.

“Most operators, sensibly, appear to be ignoring all this unsolicited advice and getting on with running their networks reliably while delivering ever-more data capacity for ever-lower tariffs. Of course, they listen to ideas emanating from around the industry, but they know their business, their financial constraints, and their competitive and regulatory environment.”

As indicated in yesterday’s post, every client situation is different. We might look at the technical similarities between projects, but the differences go beyond that. A supplier or consultant can’t easily replace local knowledge, especially across financial and regulatory environments.

The OSS proof-of-worth dilemma

Earlier this week we posted an article describing Sutton’s Law of OSS, which effectively tells us to go where the money is. The article suggested that in OSS we instead tend towards the exact opposite – the inverse-Sutton – we go to where the money isn’t. Instead of robbing banks like Willie Sutton, we break into a cemetery and aimlessly look for the cash register.

A good friend responded with the following, “Re: The money trail in OSS … I have yet to meet a senior exec. / decision maker in any telco who believes that any OSS component / solution / process … could provide benefit or return on any investment made. In telco, OSS = cost. I’ve tried very hard and worked with many other clever people also trying hard to find a way to pitch OSS which overcomes this preconception. BSS is often a little easier … in many cases it’s clear that “real money” flows through BSS and needs to be well cared for.”

He has a strong argument. The cost-out mentality is definitely ingrained in our industry.

We are saddled with the burden of proof. We need to prove, often to multiple layers of stakeholders, the true value of the (often intangible) benefits that our OSS deliver.

The same friend also posited, “The consequence is the necessity to establish beneficial working relationships with all key stakeholders – those who specify needs, those who design and implement solutions and those, especially, who hold or control the purse strings. [To an outsider] It’s never immediately obvious who these people are, nor what are their key drivers. Sometimes it’s ambition to climb the ladder, sometimes political need to “wedge” peers to build empires, sometimes it’s necessity to please external stakeholders – sometimes these stakeholders are vendors or government / international agencies. It’s complex and requires true consultancy – technical, business, political … at all levels – to determine needs and steer interactions.”

Again, so true. It takes more than just technical arguments.

I’m big on feedback loops, but also surprised at how little they’re used in telco – at all levels.

  • We spend inordinate amounts of time building and justifying business cases, but relatively little measuring the actual benefits produced after we’ve finished our projects (or gaining the learnings to improve the next round of business cases)
  • We collect data in our databases, obliviously let it age, realise at some point in the future that we have a data quality issue and perform remedial actions (eg audits, fixes, etc) instead of designing closed-loop improvement cycles that ensure DQ isn’t constantly deteriorating
  • We’re great at spending huge effort in gathering / arguing / prioritising requirements, but don’t always run requirements traceability all the way into testing and operational rollout.
  • etc
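To make the second bullet concrete, here’s a minimal sketch (all class and field names are my own invention, not from any real OSS product) of what a closed-loop data-quality cycle might look like: instead of letting data age until an audit, every cycle measures quality and feeds the gaps straight back into a remediation queue.

```python
# Hypothetical closed-loop data-quality cycle: measure on every sync,
# queue fixes immediately, rather than waiting for a periodic audit.
from dataclasses import dataclass, field

@dataclass
class DeviceRecord:
    device_id: str
    site: str = ""
    mgmt_ip: str = ""

@dataclass
class DqLoop:
    threshold: float = 0.95          # minimum acceptable completeness
    remediation_queue: list = field(default_factory=list)

    def completeness(self, records):
        """Fraction of records with every key field populated."""
        if not records:
            return 1.0
        complete = sum(1 for r in records if r.site and r.mgmt_ip)
        return complete / len(records)

    def run_cycle(self, records):
        score = self.completeness(records)
        if score < self.threshold:
            # Feed the gaps straight back as work items, closing the loop
            self.remediation_queue.extend(
                r.device_id for r in records if not (r.site and r.mgmt_ip))
        return score

loop = DqLoop()
records = [DeviceRecord("R1", "SYD", "10.0.0.1"),
           DeviceRecord("R2", "", "10.0.0.2"),   # missing site
           DeviceRecord("R3", "MEL", "")]        # missing mgmt IP
score = loop.run_cycle(records)
print(round(score, 2), loop.remediation_queue)   # 0.33 ['R2', 'R3']
```

The point isn’t the scoring formula (completeness is just one dimension of DQ); it’s that the measurement and the remediation trigger live in the same loop.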

Which leads us back to the burden of proof. Our OSS have all the data in the world, but how often do we use it to justify and persuade – to prove?

Our OSS products have so many modules and so much technical functionality (much of it effectively duplicated from vendor to vendor). But I’ve yet to see any vendor product that allows its customers, the OSS operators, to automatically gather proof-of-worth stats (ie executive-ready reports). Nor have I seen any integrator build proof-of-worth consultancy into their offer, whereby they work closely with their customer to define and collect the metrics that matter. BTW. If this sounds hard, I’d be happy to discuss how I approach this task.
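To show how small the gap is between the data we already hold and an executive-ready figure, here’s a hedged sketch of a proof-of-worth roll-up. The task names, baseline minutes and hourly rate are all assumptions for illustration; in practice they’d be agreed with the customer as part of defining the metrics that matter.

```python
# Hypothetical proof-of-worth roll-up: turn raw automated-task counts
# into a labour-cost-avoided figure. Baselines and rates are assumed.
MANUAL_MINUTES = {"provision_port": 45, "close_alarm": 10}  # assumed manual effort
HOURLY_RATE = 80.0                                          # assumed loaded cost

def proof_of_worth(automated_tasks):
    """Estimate labour cost avoided by automated task executions."""
    minutes_saved = sum(MANUAL_MINUTES.get(t, 0) for t in automated_tasks)
    return {"tasks": len(automated_tasks),
            "hours_saved": minutes_saved / 60,
            "cost_avoided": round(minutes_saved / 60 * HOURLY_RATE, 2)}

# eg one month of automated activations and auto-cleared alarms
report = proof_of_worth(["provision_port"] * 100 + ["close_alarm"] * 500)
print(report)
```

A custom report or portal doing little more than this, refreshed monthly, already shifts the conversation from “OSS = cost” towards demonstrated return.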

So let me leave you with three important questions today:

  1. Have you also experienced the overwhelming burden of the “OSS = cost” mentality?
  2. If so, do you have any suggestions / experiences on how you’ve overcome it?
  3. Does the proof-of-worth product functionality sound like it could be useful (noting that it doesn’t even have to be a product feature, but a custom report / portal using data that’s constantly coursing through our OSS databases)?

Sutton’s Law of OSS

Willie Sutton was an accomplished bank robber, particularly during the 1920s and 1930s. Named after Willie, Sutton’s Law effectively states, “I go to where the money is,” which was supposedly Sutton’s response to a reporter’s question asking why he robbed banks instead of easier targets.

Interestingly for the OSS industry, we seem to follow the inverse of Sutton’s Law. We go to where the money isn’t. In other words, we mostly attempt to build business cases around the “cost-out” model, helping our customers achieve cost savings. These savings are in the form of automations that lead to reductions in head-count, cost of doing business, etc. Think about the common buzz-words – AI, machine learning, virtualisation, etc. Are they Sutton, or inverse-Sutton?

Truth be told, we do still go to where the money is because our customers (the network operators) are willing to spend money to save even more money. But you can see where I’m coming from can’t you?

Let me pose a question for you: who is more likely to be comfortable spending money, someone who is confident in making money from the investment, or someone who is going to save money from an investment?

I’d back Sutton’s Law and respond with the former. But we don’t tend to follow Sutton’s Law very often. It can often be challenging because so many of the benefits of our OSS and BSS are intangible. We’re seen as cost centres because we don’t do a good enough job of showing how important we are at operationalising everything that happens in a service provider’s network (and business).

At TM Forum’s DTA event a couple of weeks ago, I was pleased to see that some of the big telco API initiatives (eg Telkomsel, Telstra’s Network as a Service [NaaS] and China Mobile’s Data Security and Privacy Management Framework) are starting to make a real impact. The API model represents our strongest industry-wide push towards revenue-based business cases in years (that I can remember anyway).

Monty Hong of Telkomsel (Indonesia) made a presentation that provides a useful guide for future telco value-stream / revenue-models, effectively showing Sutton’s Law at play:
http://passionateaboutoss.com/how-oss-bss-facilitated-telkomsels-structural-revenue-changes.

The API model is an interesting one though. As well as revenue-in, it also potentially represents a cost-out model (ie reduced cost of sales) and a platform play (ie leveraging the network effect by allowing partners to build their own revenues on top), but on the downside it also potentially triggers revenue cannibalisation.

Personally, I’m considering Sutton’s Law more in terms of our customers’ customers (ie end users of communication services, like the gamers in the Monty Hong link) rather than customers (ie the comms service providers that want to reduce costs).

I’d love to hear about your perception of Sutton’s Law in OSS. Where do you think the money is?

Cannibalisation intrigues me

We’ve all heard the Kodak story. They invented digital cameras but stuck them in a drawer because it was going to cannibalise their dominant position in the photographic film revenue stream… eventually leading to bankruptcy.

Swisscom invented an equivalent of WhatsApp years before WhatsApp came onto the market. It allowed users (only Swisscom users, not external / global customers BTW) to communicate via a single app – calls, chat, pictures, videos, etc. Swisscom parked it because it was going to cannibalise their voice and SMS revenue streams. That product, iO, is now discontinued. Meanwhile, WhatsApp achieved an exit of nearly $22B by selling to Facebook.

Some network operators are baulking at offering SD-WAN as it may cannibalise their MPLS service offerings. It will be interesting to see how this story plays out.

What also intrigues me is where cannibalisation is going to come for the OSS industry. What is the format of network operationalisation that’s simpler, more desirable to customers, probably cheaper, but completely destroys current revenue models? Do any of the vendors already have such capability but have parked it in a drawer because of revenue destruction?

History seems to have proven that it’s better to cannibalise your own revenues than allow your competitors to do so.

The biggest OSS loser

“You are so much more likely to put effort into something when you know whether it will pay off and what the gains will be. Not knowing how things will turn out undermines your motivation and makes you delay taking action.”
Dr Theo Tsaousides
in his book, Brainblocks.

Have you seen the reality TV show, “The Biggest Loser?” I rarely watch TV, but have noticed that it’s been a runaway hit in the ratings here in Australia (and overseas apparently). Why has it been so successful and what does it have to do with OSS?

Well, according to Dr Tsaousides, the success of the show comes down to the obvious body-shape / fitness transformations each of the contestants makes over each season of the show. But more specifically, “You need to watch only one season from beginning to end and you will start craving to be a contestant on the show, regardless of your current weight… Seeing the people’s amazing transformation over a few months is a much more convincing way to start working out and eating well than being told by your doctor that you need to lose weight and about the cardiovascular advantages of exercise. Forecasting a positive outcome, especially when dealing with something new and unfamiliar, leads to action.”

Can you see how this might be a useful technique when planning an OSS transformation?

Change management is always a challenging task on any large OSS transformation. It’s always best to have the entire OSS user population involved in the change, but that’s not always feasible for large groups of users.

It’s one of the reasons I’m always a big advocate for getting a baseline, sandpit version of off-the-shelf OSS stood up and available for the user population to start interacting with. This is particularly helpful if the sandpit is perceptibly better than the current one.

To paraphrase, “Forecasting a positive outcome (via the OSS sandpit), especially when dealing with something new and unfamiliar (the future state after OSS transformation), leads to action (more excitement, engagement and less pushback from the user population during the course of the transformation).”

Do you think the biggest loser technique could work on your next OSS transformation?

Is your data getting too heavy for your OSS to lift?

“Data mass is beginning to exhibit gravitational properties – it’s getting heavy – and eventually it will be too big to move.”
Guy Lupo
in this article on TM Forum’s Inform that also includes contributions from George Glass and Dawn Bushaus.

Really interesting concept, and article, linked above.

The touchpoint explosion is helping to make our data sets ever bigger… and heavier.

In my earlier days in OSS, I was tasked with leading the migration of large sets of data into relational databases for use by OSS tools. I was lucky enough to spend years working on a full-scope OSS (ie its central database housed data for inventory management, alarm management, performance management, service order management, provisioning, etc, etc).

Having all those data sets in one database made it incredibly powerful as an insight generation tool. With a few SQL joins, you could correlate almost any data sets imaginable. But it was also a double-edged sword. Firstly, ensuring that all of the sets would have linking keys (and with high data quality / reliability) was a data migrator’s nightmare. Secondly, all those joins being done by the OSS made it computationally heavy. It wasn’t uncommon for a device list query to take the OSS 10 minutes to provide a response in the PROD environment.
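To illustrate the pattern (not the actual schema, which I’ve invented here), the power and the cost both come from linking keys. Using an in-memory SQLite database, one join across an inventory table and an alarms table that share a device_id answers a cross-domain question in a single query:

```python
# Sketch of the "full-scope OSS" join pattern: two domains (inventory,
# alarms) correlated via a shared device_id linking key. Tables invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE inventory (device_id TEXT PRIMARY KEY, site TEXT);
CREATE TABLE alarms (alarm_id INTEGER, device_id TEXT, severity TEXT);
INSERT INTO inventory VALUES ('R1', 'SYD'), ('R2', 'MEL');
INSERT INTO alarms VALUES (1, 'R1', 'CRITICAL'), (2, 'R1', 'MINOR'),
                          (3, 'R2', 'MAJOR');
""")

# One join answers "which sites have critical alarms?" -- but every such
# query leans on the quality of the device_id keys, and at production
# scale these joins are what make the OSS computationally heavy.
rows = db.execute("""
    SELECT i.site, COUNT(*) AS criticals
    FROM inventory i JOIN alarms a ON a.device_id = i.device_id
    WHERE a.severity = 'CRITICAL'
    GROUP BY i.site
""").fetchall()
print(rows)  # [('SYD', 1)]
```

With millions of rows and a dozen joined domains instead of two, this same shape of query is exactly what turned a device list into a 10-minute wait.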

There’s one concept that makes GIS tools more inherently capable of lifting heavier data sets than OSS – they generally load data in layers (that can be turned on and off in the visual pane) and unlike OSS, don’t attempt to stitch the different sets together. The correlation between data sets is achieved through geographical proximity scans, either algorithmically, or just by the human eye of the operator.

If we now consider real-time data (eg alarms/events, performance counters, etc), we can take a leaf out of Einstein’s book and correlate by space and time (ie by geographical and/or time-series proximity between otherwise unrelated data sets). Just wondering – How many OSS tools have you seen that use these proximity techniques? Very few in my experience.
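As a thought experiment on how such a proximity technique might look (this is my own minimal sketch, not how any particular tool implements it), two events with no shared linking key can still be flagged as related if they fall within a distance window and a time window of each other:

```python
# Minimal spatio-temporal correlation sketch: group otherwise-unlinked
# events by geographic and time-series proximity, with no linking key.
import math

def correlated(e1, e2, max_km=5.0, max_secs=300):
    """True if two events are close in both space and time."""
    # Equirectangular approximation -- adequate at city scale
    dx = (e2["lon"] - e1["lon"]) * 111.32 * math.cos(math.radians(e1["lat"]))
    dy = (e2["lat"] - e1["lat"]) * 111.32
    dist_km = math.hypot(dx, dy)
    dt = abs(e2["t"] - e1["t"])
    return dist_km <= max_km and dt <= max_secs

alarm = {"lat": -33.868, "lon": 151.209, "t": 1000}   # eg a power alarm
event = {"lat": -33.870, "lon": 151.215, "t": 1120}   # nearby perf event
far   = {"lat": -37.813, "lon": 144.963, "t": 1050}   # different city
print(correlated(alarm, event), correlated(alarm, far))  # True False
```

The window sizes (5 km, 5 minutes here) are pure assumptions and would need tuning per event type, but note that nothing in this check required a join or a shared key.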

BTW. I’m the first to acknowledge that a stitched data set (ie via linking keys such as device ID between data sets) is definitely going to be richer than uncorrelated data sets. Nonetheless, this might be a useful technique if your data is getting too heavy for your OSS to lift (eg simple queries are causing minutes of downtime / delay for operators).

Intent to simplify our OSS

The left-hand panel of the triptych below shows the current state of interactions with most OSS. There are hundreds of variants inbound via external sources (ie multi-channel) and even internal sources (eg different service types). Similarly, there are dozens of networks (and downstream systems), each with different interface models. Each needs different formatting, and integration costs escalate.
Intent model OSS

The intent model of network provisioning standardises the network interface, drastically simplifying the task of the OSS and reducing the variants it must handle. This becomes particularly relevant in a world of NFVs, where any vendor’s device of a given type (a router, say) can be handled via a single intent command rather than a separate integration to each vendor’s device / EMS northbound interface. The unique aspects of each vendor’s implementation are abstracted from the OSS.
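A toy sketch of that abstraction (every class, method and command string here is invented for illustration): the OSS issues one declarative intent northbound, and thin per-vendor adapters translate it into each vendor’s own southbound interface.

```python
# Hypothetical intent engine: one northbound call, vendor-specific
# translation hidden behind per-vendor adapters (all names invented).
class VendorAAdapter:
    def apply(self, intent):
        return f"vendorA-cli: create-vpn {intent['service_id']}"

class VendorBAdapter:
    def apply(self, intent):
        return f"vendorB-netconf: <vpn id='{intent['service_id']}'/>"

class IntentEngine:
    """Single northbound interface; vendor differences stay southbound."""
    def __init__(self):
        self.adapters = {"vendorA": VendorAAdapter(),
                         "vendorB": VendorBAdapter()}

    def provision(self, intent, vendor):
        # The OSS only ever expresses the intent; the adapter decides
        # what vendor-specific command that becomes.
        return self.adapters[vendor].apply(intent)

engine = IntentEngine()
intent = {"service_id": "VPN-42"}          # one intent, any vendor
print(engine.provision(intent, "vendorA"))
print(engine.provision(intent, "vendorB"))
```

Adding a new vendor then means writing one adapter, not touching every workflow in the OSS, which is exactly the simplification the intent model promises.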

The next step would be in standardising the interface / data model upstream of the OSS. That’s a more challenging task!!

Introducing our OSS expert registry, for making connections in the OSS industry

Here at Passionate About OSS, we’re passionate about making OSS happen. We have an extensive network of contacts. We just naturally tend to find ourselves making connections between the many experts in our network. Connecting those who are hoping to find an OSS expert with an OSS expert hoping to be found.

We’ve just introduced a new free-of-charge OSS expert registry to help people find OSS experts when they need to. This registry is intended to cover the buy-side and sell-side of the OSS market. Click on the link above to check it out.

Facebook’s algorithmic feed for OSS

“This is the logic that led Facebook inexorably to the ‘algorithmic feed’, which is really just tech jargon for saying that instead of this random (i.e. ‘time-based’) sample of what’s been posted, the platform tries to work out which people you would most like to see things from, and what kinds of things you would most like to see. It ought to be able to work out who your close friends are, and what kinds of things you normally click on, surely? The logic seems (or at any rate seemed) unavoidable. So, instead of a purely random sample, you get a sample based on what you might actually want to see. Unavoidable as it seems, though, this approach has two problems. First, getting that sample ‘right’ is very hard, and beset by all sorts of conceptual challenges. But second, even if it’s a successful sample, it’s still a sample… Facebook has to make subjective judgements about what it seems that people want, and about what metrics seem to capture that, and none of this is static or even in principle perfectible. Facebook surfs user behaviour.”
Ben Evans
here.

Most of the OSS I’ve seen tend to be akin to Facebook’s old ‘chronological feed’ (where users need to sift through thousands of posts to find what’s most interesting to them).

The typical OSS GUI has thousands of functions (usually displayed on a screen all at once – via charts, menus, buttons, pull-downs, etc). But of all of those available functions, any given user probably only interacts with a handful.
Current-style OSS interface

Most OSS give their users the opportunity to customise their menus, colour schemes, even filters. For some roles such as network ops, designers, order entry operators, there are activity lists, often with sophisticated prioritisation and skills-based routing, which starts to become a little more like the ‘algorithmic feed.’

However, unlike the random nature of information hitting the Facebook feed, there is a more explicit set of things that an OSS user is tasked to achieve. It is a little more directed, like a Google search.

That’s why I feel the future OSS GUI will be more like a simple search bar (like Google) that will provide a direction of intent as well as some recent / regular activity icons. Far less clutter than the typical OSS. The graphs and activity lists that we know and love would still be available to users, but the way of interacting with the OSS to find the most important stuff quickly needs to get more intuitive. In future it may even get predictive in knowing what information will be of interest to you.
OSS interface of the future