Only do the OSS that only you can do

A friend of mine has a great saying, “only do what only you can do.”

Do you think that this holds true for the companies undergoing digital transformation? Banks are now IT companies. Insurers are IT companies. Car manufacturers are now IT companies. Telcos are, well, some are IT companies.

We’ve spoken before about the skill transformations that need to happen within telcos if they’re to become IT companies. Some are actively helping their workforce to become more developer-centric. Some of the big telcos that I’ve been assisting in the last few years are embarking on bold Agile-led IT transformations. They’re cutting more of their own code and managing their own IT developments.

That’s exciting news for all of us in OSS. Even if it loses the name OSS in future, telcos will still need software that efficiently operationalises their networks. We have the overlapping skills in software, networks, business and operations.

But I wonder about the longevity of the in-house approach unless we focus clearly on the quote above. If all development is brought in-house, we end up with a lot of duplication across the industry. I’m not sure it makes sense to do all the heavy-lifting of building custom OSS tools when that heavy-lifting has already been done elsewhere.

It’s the old ebb and flow between in-house and outsourced OSS.

In my very humble opinion, it’s not just the choice between in-house and outsourced that matters. The more important decision is to develop in-house only the tools that only you can build (ie the strategic differentiators).

A single glass of pain or single pane of glass??

Is your OSS a single pane of glass, or a single glass of pain?

You can tell I’m being a little flippant here. People often (perhaps idealistically) talk about OSS as being the single pane of glass (SPOG) to manage a network.

I say “idealistically” for a couple of reasons:

  1. There are usually many personas who interact with an OSS, each with vastly different user interface (UI) needs
  2. There is usually more than one OSS product in a client’s OSS suite, often from different vendors, with varying levels of integration

Where a single pane of glass can be a true ambition is as a consolidated health-status dashboard / portal. Invariably, this portal is used by executive / leader / manager personas who want to quickly see a single-screen health status that covers all networks and/or parts of the OSS suite. When things go wrong, this portal becomes the single glass of pain.

These single panes tend to be heavily customised for each organisation, as every organisation has a unique set of metrics-that-matter. For those designing these panes, the key is not to fill them with vanity metrics, but to show information that the leader can action.
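
To make that principle concrete, here’s a tiny sketch in Python. It’s purely an illustration, not any product’s data model; the metric structure and field names are my own assumptions. The idea is that every metric on the pane carries an owner and a next action, a test that a vanity metric can never pass:

```python
# A sketch of "actionable, not vanity" dashboard design: every metric on
# the pane carries an owner and a next action. All names/fields are
# illustrative assumptions, not any product's data model.
from dataclasses import dataclass

@dataclass
class PaneMetric:
    name: str
    status: str       # "green" / "amber" / "red"
    owner: str        # who acts when it isn't green
    next_action: str  # the action the leader can trigger

pane = [
    PaneMetric("Core network availability", "green", "NOC", "none"),
    PaneMetric("Orders stuck in-flight > 48h", "red", "Fulfilment lead",
               "escalate to the order-fallout team"),
]

# A vanity metric (eg "total alarms ever processed") has no owner or next
# action, so it never earns a place on the pane.
for metric in pane:
    if metric.status != "green":
        print(f"{metric.name}: {metric.status} -> {metric.owner}: {metric.next_action}")
```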

But the interesting perspective here is whether the single glass of pain is even relevant within your organisation’s culture. It’s just my opinion, but I prefer for coal-face workers to be empowered to make rapid recovery actions rather than requiring direction from up high in the org-chart. Coal-face workers generally have different tools with UIs that *should* help them monitor, manage and repair super-efficiently.

To get back to the “idealistic” comment above, each OSS UI needs to be fit-for-purpose for each unique persona (eg designers, product owners, network operations, etc). To me this implies that there is no single pane of glass…

I should caveat that by citing the example of an OSS search interface, something I’ve yet to see in OSS… although even that would just be a front end to dozens of persona-specific panes of glass.

OSS orgitecture

So far this week we’ve been focusing on ways to improve the OSS transformation process. Monday provided 7 models for achieving startup-like efficiency for larger OSS transformations. Tuesday provided suggestions for speeding up the transition from OSS PoC to getting the solution into production, specifically strategies for absorbing an OSS PoC into production.

Both of these posts talk about the speed of getting things done outside the bureaucracy of big operators, big networks and big OSS. Today, as the post title suggests, we’re going to look at orgitecture – how re-designing the structure and culture of an organisation can help streamline digital transformations.

Do you agree with the premise that smaller entities (eg Agile autonomous groups, partners, consultants, etc) can get OSS tasks done more efficiently when operating at arms-length of the larger entity (eg the carrier)? I believe that this is a first principle of physics at play.

If you’ve worked under this arms-length arrangement in the past, you’ll also know that at some point those delivery outcomes need to get integrated back into the big entity. It’s what we referred to yesterday as absorption, where the level of integration effort falls on a continuum between minimally absorbed to fully absorbed.

OSS orgitecture is the re-architecture of the people, processes, culture and org structure to better allow for the absorption process. In the past, all the safety-checks (eg security, approvals, ops handover, etc) were designed on the assumption that internal teams were doing the work. They’re not always a great fit, especially when it comes to documentation review and approval.

For example, I have a belief that the effectiveness of documentation review and approval is inversely proportional to the number of reviewers (in most, but not all cases). Unfortunately, when an external entity is delivering, there tends to be inherently less trust than if an internal entity was delivering. As such, the safety-checks increase.

Another example is when the large organisation uses Agile delivery models, but uses supply partners to deliver scopes of work. The partners are able to assign effort in a sequential / waterfall manner, but can be delayed by only getting timeslices of attention from the client’s staff (ie resources are made available according to Agile sprint planning).

Security and cutover planning mechanisms such as Change Review Boards (CRB) have also been designed around old internal delivery models. They also need to be reconsidered to facilitate a pipeline of externally-implemented change.

Perhaps the biggest orgitecture factor is in getting multiple internal business units to work together effectively. In the old world we needed all the business units to reach consensus for a new product to come to market. Sales/Marketing/Products had to work with OSS/IT and Networks. Each of these units tends to have a vastly different culture and a different cadence for getting its tasks done. Delivering a new product was as much an organisational challenge as a technical one, and often took months. Those times-to-market are not feasible in a world of software, where competitive advantages are fleeting. External entities can potentially help or hinder these timeframes. Careful design of small autonomous teams has the potential to improve abstraction at the interlocks, but culture remains the potential roadblock.

I’m excited by the opportunity for OSS delivery improvement coming from leveraging the gig economy. But if big OSS transformations are to make use of these efficiency gains, then we may also need to consider culture and process refinement as part of the change management.

Seven OSS transformation efficiency models

Do you work in a large organisation? Have you also worked in smaller organisations?
Where have you felt more efficient?

I’ve been lucky enough to work on some massive OSS transformations for large T1 telcos. But I’ve always noticed the inefficiency of working on these projects when embedded inside the bureaucracy of the beast. With all of the documentation, sign-offs, meetings, politics, consensus-gaining, budget allocations, etc, it can sometimes feel so inefficient. On some past projects, I’ve felt I could accomplish more in a day outside the beast than in a week or more inside it.

This makes sense when applying Newton’s second law of motion, F = m × a, to OSS projects. In other words, the greater the mass (of the organisation), the more force must be applied to reach a given acceleration (ie to effect a change).

It’s one of the reasons I love working within a small entity (Passionate About OSS) but delivering into big entities (the big telcos and utilities). It’s also why I strongly believe that the big entities need to better leverage smaller working groups to facilitate big OSS change. Not just OSS transformation, but any project where the size of the culture and technology stack is prohibitive.

Here are a few ways to bring a start-up’s efficiency to a big OSS transformation:

  1. Agile methodologies – If done well, Agile can be great at breaking transformations down into smaller, more manageable pieces. The art is in designing small autonomous teams / responsibilities and breakdown of work to minimise dependencies
  2. Partnerships – Using smaller, external partners to deliver outcomes (eg product builds or service offerings) that can be absorbed into the big organisation. There are varying levels of absorption here – from an external, “clip-the-ticket” offering to offerings that are fully absorbed into the large entity’s OSS/BSS stack
  3. Consultancies – Similar to partnerships, but using smaller teams to implement professional services
  4. Spin-out / spin-in teams – Separating small teams of experts out from the bureaucracy of the large organisation so that they can achieve rapid progress
  5. Smart contracts / RFPs – I love the potential for smart contracts to automate the offer of small chunks of work to trusted partners to bid upon and then deliver upon
  6. Externalised Proofs of Concept (PoC) – One of the big challenges in implementing for large organisations is all of the safety checks that slow progress. Many, such as security and privacy mechanisms, are completely justified for a production network. But when a concept needs to be proved, such as user journeys, product integrations, sand-pit environments, etc, then cloud-based PoCs can be brilliant
  7. Alternate brands – Have you also noticed that some of the tier-1 telcos have been spinning out low-cost and/or niche brands with much leaner OSS/BSS stacks, offerings and related culture lately? It’s a clever business model on many levels. Combined with the strangler fig transformation approach, this might just represent a pathway for the big brand to shed many of their OSS/BSS legacy constraints

Can you think of other models that I’ve missed?

The key to these strategies is not so much the carve-out, the process of getting small teams to do tasks efficiently, but the absorb-in process. For example, how to absorb a cloud-based PoC back into the PROD network, where all safety checks (eg security, privacy, operations acceptance, etc) still need to be performed. More on that in tomorrow’s post.

OSS Best Practices, cough, splutter

“Organizations that seek transformations frequently bring in an army of outside consultants [or implementers in the case of OSS] who tend to apply one-size-fits-all solutions in the name of “best practices.” Our approach to transforming our respective organizations is to rely instead on insiders — staff who have intimate knowledge about what works and what doesn’t in their daily operations.”
Behnam Tabrizi, Ed Lam, Kirk Gerard and Vernon Irvin here

I don’t know about you, but the term “best practices” causes me to make funny noises. A cross between a laugh, cough, derisive snicker and chortle. This noise isn’t always audible, but it definitely sounds inside my head any time someone mentions best practices in the field of OSS.

There are two reasons for my bemusement, no, actually there’s a third, which I’ll share as the story that follows. The first two reasons are:

  • That every OSS project is so different that chaos theory applies. I’m all for systematising aspects of OSS projects to create repeatable building blocks (like TM Forum does with tools such as eTOM). But as much as I build and use repeatable approaches, I know they always have to be customised for each client situation
  • That a “best practices” mind-set can prevent the outsiders / implementers from listening to insiders

Luckily, out of all the OSS projects I’ve worked on, there’s only been one where the entire implementation team has stuck with their “best practices” mantra throughout the project.

The team used this phrase as the intellectual high-ground over their OSS-novice clients. To paraphrase their words, “This is best practice. We’ve done it this way successfully for dozens of customers in the past, so you must do it our way.” Interestingly, this project was the most monumental failure of any OSS I’ve worked on.

The implementation team’s organisation lost out because the project was halted part-way through. The client lost out because they had almost no OSS functionality to show for their resource investment.

The project was canned largely because the implementation company wasn’t prepared to budge from their “best practices” thinking. To be honest, their best practices approaches were quite well formed. The only problem was that the changes they were insisting on (to accommodate their 10-person team of outsiders) would’ve caused major re-organisation of the client’s 100,000-person company of insiders. The outsiders / implementers either couldn’t see that or were so arrogant that they wanted the client to bend anyway.

That was a failure on their behalf no doubt, but not the monumental failure. I could see the massive cultural disconnect between client and implementer very early. I could even see the way to fix it (I believe). I was their executive advisor (the bridge between outsiders and insiders), so the monumental failure was mine. It wasn’t through lack of trying, but I was unable to persuade either party to consider the other’s perspective.

Without compromise, the project became compromised.

Can OSS/BSS assist CX? We’re barely touching the surface

Have you ever experienced an epic customer experience (CX) fail when dealing with a network service operator, like the one I described yesterday?

In that example, the OSS/BSS, and possibly the associated people / process, had a direct impact on poor customer experience. Admittedly, that 7 truck-roll experience was a number of years ago now.

We have fewer excuses these days. Smart phones and network connected devices allow us to get OSS/BSS data into the field in ways we previously couldn’t. There’s no need for printed job lists, design packs and the like. Our OSS/BSS can leverage these connected devices to give far better decision intelligence in real time.

If we look to the logistics industry, we can see how parcel tracking technologies help to automatically provide status / progress to parcel recipients. We can see how recipients can also modify their availability, which automatically adjusts logistics delivery sequencing / scheduling.

This has multiple benefits for the logistics company:

  • Increases first-time delivery rates
  • Improves the ability to automatically notify customers (eg email, SMS, chatbots)
  • Decreases customer enquiries / complaints
  • Decreases the amount of time the truck drivers need to spend communicating back to base and with clients
  • But most importantly, it improves the customer experience

Logistics is an interesting challenge for our OSS/BSS due to the sheer volume of customer interaction events handled each day.

But there’s another area that excites me even more, one where CX is improved through better data quality:

  • It’s the ability for field workers to interact with OSS/BSS data in real-time
  • To see the design packs
  • To compare with field situations
  • To update the data where there is inconsistency.

Even more excitingly, to introduce augmented reality to assist with decision intelligence for field work crews:

  • To provide an overlay of what fibres need to be spliced together
  • To show exactly which port a patch-lead needs to connect to
  • To show where an underground cable route goes
  • To show where a cable runs through trayway in a data centre
  • etc, etc

We’re barely touching the surface of how our OSS/BSS can assist with CX.

Calculating the cost of quality

This week of posts has followed the theme of the cost of quality. Data quality that is.

But how do you calculate the cost of bad data quality?

Yesterday’s post mentioned starting with PNI (Physical Network Inventory). PNI is the cables, splices / joints, patch panels, ducts, pits, etc. This data doesn’t tend to have a programmable interface to electronically reconcile with. This makes it prone to errors of many types – mistakes in manual entry, reconfigurations that are never documented, assets that are lost or stolen, assets that are damaged or degraded, etc.

Some costs resulting from poor PNI data quality (DQ) can be considered primary costs. This includes SLA breaches caused by an inability to identify a fault within an SLA window due to incorrect / incomplete / indecipherable design data. These costs are the most obvious and easy to calculate because they result in SLA penalties. If a network operator misses a few of these with tier 1 clients then this is the disaster referred to yesterday.

But the true cost of quality is in the ripple-out effects. The secondary costs. These include the many factors that result in unnecessary truck rolls. With truck rolls come extra costs including contractor costs, delayed revenues, design rework costs, etc.

Other secondary effects include:

  • Downstream data maintenance in systems that rely on PNI data
  • Code in downstream systems that caters for poor data quality, which in turn increases the costs of complexity such as:
    • Additional testing
    • Additional fixing
    • Additional curation
  • Delays in the ability to get new products to market
  • Reduced ability to accurately price products (due to variation in real costs caused by extra complexity)
  • Reduced impact of automations (due to increased variants)
  • Potential to impact Machine Learning / Artificial Intelligence engines, which rely on reliable and consistent data at scale
  • etc

There are probably more sophisticated ways to calculate the cost of quality across all these factors and more, but in most cases I just use a simple multiplier:

  • Number of instances of DQ events (eg number of additional truck rolls); multiplied by
  • A rule-of-thumb cost impact of each event (eg the cost of each additional truck roll)

Sometimes the rules-of-thumb are challenging to estimate, so I tend to err on the side of conservatism. I figure that even if the rules-of-thumb aren’t perfectly accurate, at least they produce a real cost estimate rather than just anecdotal evidence.
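
To make the multiplier concrete, here’s a minimal sketch in Python. The event types, volumes and rule-of-thumb unit costs below are purely illustrative assumptions, not real carrier figures:

```python
# A minimal sketch of the simple multiplier described above. The event
# types, volumes and rule-of-thumb unit costs are purely illustrative
# assumptions, not real carrier figures.

DQ_EVENTS = {
    # event type: (instances per year, conservative cost per event)
    "additional truck rolls": (1200, 400),
    "SLA breaches":           (15, 50_000),
    "design rework jobs":     (300, 900),
}

def annual_cost_of_quality(events: dict) -> int:
    """Sum of (instances x rule-of-thumb unit cost) across DQ event types."""
    return sum(count * unit_cost for count, unit_cost in events.values())

for name, (count, unit_cost) in DQ_EVENTS.items():
    print(f"{name}: {count} x ${unit_cost:,} = ${count * unit_cost:,}")
print(f"Total annual cost of poor data quality: ${annual_cost_of_quality(DQ_EVENTS):,}")
```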

And most importantly, there are the tertiary and less tangible costs of brand damage (ie Customer Experience (CX) or reputation damage). We’ll talk a little more about that tomorrow.


Waiting for the disaster to invest in the data

Have you seen OSS tools where the applications are brilliant but consigned to failure by bad data? I definitely have! I call it the data death spiral. It’s a well known fact in the industry that bad data can ruin an OSS. You know it. I know it. Everyone knows it.

But how many companies do you know that invest in data quality? I mean truly invest in it.

The status quo is not to invest in the data, but in the disaster. That is, the disaster caused by the data!

Being a data nerd, I find it mind-boggling that this is so. My only assumption to date is that we don’t adequately measure the cost of quality. Or, more to the point, the cost impact resulting from bad data.

I recently attempted to model the cost of quality. My model focuses on the ripple-out impacts from poor PNI (Physical Network Inventory) quality data alone. Using conservative numbers, the cost of quality is in the millions for the first carrier I applied it to.

Why do you think operators wait for the disaster before investing in the data? What alternate techniques do you use to focus attention, and investment, on the data?

The OSS Tinder effect

On Friday, we provided a link to an inspiring video showing Rolls-Royce’s vision of an operations centre. That article is a follow-on from other recent posts about the pros and cons of using MVPs (Minimum Viable Products) as an OSS transformation approach.

I’ve been lucky to work on massive OSS projects. Projects that have taken months / years of hard implementation grind to deliver an OSS for clients. One was as close to perfect (technically) as I’ve been involved with. But, alas, it proved to be a failure.

How could that be, you’re wondering? Well, it’s what I refer to as the Tinder Effect. On Tinder, first appearances matter. Liked or disliked at the swipe of a hand.

Many new OSS are delivered to users who are already familiar with one or more OSS. If they’re not as pretty or as functional or as intuitive as what the users are accustomed to, then your OSS gets a swipe to the left. As we found out on that project (a ‘we’ that included all the client’s stakeholders and sponsors), first impressions can doom an otherwise successful OSS implementation.

Since then, I’ve invested a lot more time into change management. Change management that starts long before delivery and handover. Long before designs are locked in. Change management that starts with hearts and minds. And starts by involving the end users early in the change process. Getting them involved in the vision, even if not quite as elaborate as Rolls-Royce’s.

OSS transformation is hard. What can we learn from open source?

Have you noticed an increasing presence of open-source tools in your OSS recently? Have you also noticed that open-source is helping to trigger transformation? Have you thought about why that might be?

Some might rightly argue that it is the cost factor. You could also claim that they tend to help resolve specific, but common, problems. They’re smaller and modular.

I’d argue that the reason relates to our most recent two blog posts. They’re fast to install and they’re easy to run in parallel for comparison purposes.

If you’re designing an OSS, can you introduce the same concepts? Your OSS might be for internal purposes or to sell to market. Either way, if you make it fast to build and easy to use, you have a greater chance of triggering transformation.

If you have a behemoth OSS to “sell,” transformation persuasion is harder. The customer needs to rally more resources (funds, people, time) just to compare with what they already have. If you have a behemoth on your hands, you need to try even harder to be faster, easier and more modular.

Identifying the fault-lines that trigger OSS churn

Most people slog through their days in a dark funk. They almost never get to do anything interesting or go to interesting places or meet interesting people. They are ignored by marketers who want them to buy their overpriced junk and be grateful for it. They feel disrespected, unappreciated and taken for granted. Nobody wants to take the time to listen to their fears, dreams, hopes and needs. And that’s your opening.
John Carlton

Whilst the quote above may relate to marketing, it also has parallels in the build and run phases of an OSS project. We talked about the trauma of OSS yesterday, where the OSS user feels so much trauma with their current OSS that they’re willing to go through the trauma of an OSS transformation. Clearly, a procurement event must be preceded by a significant trauma!

Sometimes that trauma has its roots in the technical, where the existing OSS just can’t do (or be made to do) the things that are most important to the OSS user. Or it can’t do them reliably, at scale, in time, cost-effectively, or without significant risk / change. That’s certainly a big factor.

However, the churn trigger appears more often to be a human one. The users feel disrespected, unappreciated and taken for granted. But here’s an interesting point that might surprise some users – the suppliers also often feel disrespected, unappreciated and taken for granted.

I have the privilege of working on both sides of the equation, often even as the intermediary between both sides. Where does the blame lie? Where do the fault-lines originate? The reasons are many and varied of course, but like a marriage breakup, it usually comes down to relationships.

Where the communication method is through hand-grenades being thrown over the fence (eg management by email and by contractual clauses), results are clearly going to follow a deteriorating arc. Yet many OSS relationships structurally start from a position of us and them – the fence is erected – from day one.

Coming from a technical background, it took me far too deep into my career to come to this significant realisation – the importance of relationships, not just the quest for technical perfection. The need to listen to both sides’ fears, dreams, hopes and needs.

OSS data that’s even more useless than useless

About 6-8 years ago, I was becoming achingly aware that I’d passed well beyond an information overload (I-O) threshold. More information was reaching my brain each day than I was able to assimilate, process and archive. What to do?

Well, I decided to stop reading newspapers and watching the news, in fact almost all television. I figured that those information sources were empty calories for the brain. At first it was just a trial, but I found that I didn’t miss it much at all and continued. Really important news seemed to find me at the metaphorical water-cooler anyway.

To be completely honest, I’m still operating beyond the I-O threshold, but at least it’s (arguably) now a more healthy information diet. I’m now far more useless at trivia game shows, which could be embarrassing if I ever sign up as a contestant on “Who Wants to be a Millionaire.” And missing out on the latest news sadly makes me far less capable of advising the Queen on how to react to Meghan Markle’s latest royal “atrocity.” The crosses we bear.

But I’m digressing markedly (and Markle-ey) from what this blog is all about – O.S.S.

Let me ask you a question – Is your OSS data like almost everybody else’s (ie also in I-O mode)?

Seth Godin recently quoted 3 rules of data:
“First, don’t collect data unless it has a non-zero chance of changing your actions.
Second, before you seek to collect data, consider the costs of processing that data.
Third, acknowledge that data collected isn’t always accurate, and consider the costs of acting on data that’s incorrect.”

If I remove the double-negative from rule #1: only collect data if it has even the slightest chance of changing your actions.

Most people take the perspective that we might as well collect everything because storage is just getting so cheap (and we should keep it, not because we ever use it, but just in case our AI tools eventually find some relevance locked away inside it).

In the meantime, pre-AI (rolls eyes), Seth’s other two rules provide further sanity to the situation. Storing data is cheap, except where it has to be timely and accurate enough to take decisive, reliable actions on.
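
One way to read Seth’s three rules together is as a single expected-value test. Here’s a hedged sketch in Python; the formula is just my interpretation of the rules, not anything Seth prescribes, and every input is an estimate you’d have to supply per data set:

```python
# A sketch of Seth's three rules combined into one expected-value test.
# The formula is my own interpretation of the rules, and every input is
# an estimate you'd have to supply per data set.

def worth_collecting(p_changes_action: float,      # rule 1: chance the data changes what you do
                     value_if_actioned: float,     # value of that changed action
                     processing_cost: float,       # rule 2: cost to collect/process/store
                     p_inaccurate: float,          # rule 3: chance the data is wrong
                     cost_of_acting_on_bad: float  # cost of acting on incorrect data
                     ) -> bool:
    expected_benefit = p_changes_action * value_if_actioned
    expected_downside = processing_cost + p_inaccurate * cost_of_acting_on_bad
    return expected_benefit > expected_downside

# eg a metric with a 0.1% chance of ever changing an action rarely passes:
print(worth_collecting(0.001, 10_000, 500, 0.05, 2_000))  # False
```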

So, let me go back to the revised quote in bold. How much of the data in your OSS database / data-lake / data-warehouse / etc has even the slightest chance of changing your actions? As a percentage??

I suspect a majority is never used. And as most of it ages, it becomes even more useless than useless. One wonders, why are we storing it then?

Becoming the Microsoft of the OSS industry

On Tuesday we pondered, “Would an OSS duopoly be a good thing?”

It cited two examples of operating systems amongst other famous duopolies:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)

Yesterday we provided an example of why consolidation is so much more challenging for OSS companies than say for Coke or Pepsi.

But maybe an operating system model could represent a path to overcome many of the challenges faced by the OSS industry. What if there were a Linux for OSS?

  • One where the drivers for any number of device types are already handled and we don’t have to worry about south-bound integrations anymore (mostly). When new devices come onto the market, they need agents designed to interact with the common, well-understood agents on the operating system (see the sketch after this list)
  • One where the user interface is generally defined and can be built upon by any number of other applications
  • One where data storage and handling is already pre-defined and additional utilities can be added to make data even easier to interact with
  • One where much of the underlying technical complexity is already abstracted and the higher-value functionality can be built on top
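
As a hedged illustration of that first point, here’s a minimal Python sketch of what a common driver contract might look like. The class and method names are my own assumptions, not any existing platform’s API:

```python
# A sketch of a common "driver" contract for an OSS operating system.
# Class and method names are illustrative assumptions only, not any
# existing platform's API.
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    """The well-understood contract that every new device agent must satisfy."""

    @abstractmethod
    def collect_alarms(self) -> list[dict]: ...

    @abstractmethod
    def collect_inventory(self) -> dict: ...

class AcmeRouterDriver(DeviceDriver):
    """A vendor supplies this; applications above never see the vendor protocol."""

    def __init__(self, address: str):
        self.address = address

    def collect_alarms(self) -> list[dict]:
        # Vendor-specific southbound calls (SNMP, NETCONF, CLI, ...) live here
        return [{"severity": "major", "source": self.address, "text": "link down"}]

    def collect_inventory(self) -> dict:
        return {"device": self.address, "cards": [], "ports": []}

# Higher-value applications are written once, against the abstraction:
def count_active_alarms(drivers: list[DeviceDriver]) -> int:
    return sum(len(d.collect_alarms()) for d in drivers)

print(count_active_alarms([AcmeRouterDriver("10.0.0.1"), AcmeRouterDriver("10.0.0.2")]))  # 2
```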

It seems to me to be a great starting point for solving many of the items listed as awaiting exponential improvement in this OSS Call for Innovation manifesto.

Interestingly, I can’t foresee any of today’s biggest OSS players developing such an operating system without a significant mindset shift. They have the resources to become the Microsoft / Apple / Google of the OSS market, but appear to be quite closed-door in their thinking. Waiting for disruption from elsewhere.

Could ONAP become the platform / OS?

Let me relate this by example. TM Forum recently ran an event called DTA in Kuala Lumpur. It was an event for sharing ideas and conversations, and for letting the market know all about the participants’ products. All of the small-to-medium suppliers were happy to talk about their products, services and offerings. By contrast, I was ordered out of the rooms of one leading, but some might say struggling, vendor because I was only a walk-up. A walk-up representing a potential customer of theirs, but they didn’t even ask how I might be of value to them (nor vice versa).

Would an OSS duopoly be a good thing?

The products/vendors page here on PAOSS has a couple of hundred entries currently. We’re currently working on an extended list that will almost double the number on it. More news on that shortly.

The level of fragmentation fascinates me, but if I’m completely honest, it probably disappoints me too. It’s great that it’s providing the platform for a long-tail of innovation. It’s exciting that there’s so many niche opportunities that exist. But it disappoints me because there’s so much duplication. How many alarm / performance / inventory / etc management tools are there? Can you imagine how many developer hours have been duplicated on similar feature development between products? And because there are so many different patterns, it means the total number of integration variants across the industry is putting a huge integration tax on us all.

Compare this to the strength of duopoly markets such as:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)
  • Boeing / Airbus (commercial aircraft)
  • Visa / Mastercard (credit cards / payments)
  • Coca Cola / Pepsi (beverages, etc)

These duopolies have allowed for consolidation of expertise, effort, revenues/profits, etc. Most also provide a platform upon which smaller organisations / suppliers can innovate without having to re-invent everything (eg applications build upon operating systems, parts for aircraft, etc).

Buuuut……

Then I think about the impediments to achieving drastic consolidation through mergers and acquisitions (M&A) in the OSS industry.

There are opportunities to find complementary product alignment because no supplier services the entire OSS estate (where I’m using TM Forum’s TAM as a guide to the breadth of the OSS estate). However, it would be much harder to approach duopoly in OSS for a number of reasons:

  • Almost every OSS implementation is unique. Even if some of the products start out in common, they usually become quickly customised in terms of integrations, configurations, processes, etc
  • Interfaces to networks and other systems can vary so much. Modern EMS / devices / systems are becoming a little more consistent with IP, SNMP, web APIs, etc. However, our networks still tend to carry a lot of legacy protocols that our OSS must interface with
  • Consolidation of product lines becomes much harder, partly because of the integrations above, but partly because the functionality-sets and workflows differ so vastly between similar products (eg inventory management tools)
  • Similarly, architectures and build platforms (eg programming languages) are not easily compatible
  • Implementations are often regional for a variety of reasons – regulatory, local partnerships / relationships, language, corporate culture, etc
  • Customers can be very change-averse, even when they’re instigating the change

By contrast, we regularly hear of Coca Cola buying up new brands. It’s relatively easy for Coke to add a new product line/s without having much impact on existing lines.

We also hear about Google’s acquisitions, adding complementary products into its product line or simply acquiring new talent / expertise. There are also acquisitions for the purpose of removing competitors or buying into customer bases.

Harder in all cases in the OSS industry.

Tomorrow we’ll share a story about an M&A attempting to buy into a customer base.

Then on Thursday, a story awaits on a possibly disruptive strategy towards consolidation in OSS.

Think for a moment…

Many of the most important new companies, including Google, Facebook, Amazon, Netflix, Snapchat, Uber, Airbnb and more are winning not by giving good-enough solutions…, but rather by delivering a superior experience….”
Ben Thompson, stratechery.com

Think for a moment about the millions of developer hours that have gone into creating today’s OSS tools. Think also for a moment about how many of those tools are really clunky to use, install, configure, administer. How many OSS tools have truck-loads of functionality baked in that is just distracting, features that you’re never going to need or use? Conversely, how many are intuitive enough for a high-school student, let’s say, to use for the first time and become effective within a day of self-driven learning?

Let’s say an OSS came along that had all of the most important features (the ones customers really pay for, not the flashy, nice-to-have features) and offered a vastly superior user experience and user interface. Let’s say it took the market by storm.

With software and cloud delivery, it becomes harder to sustain differentiation. Innovative features and services are readily copied. But have a think about how hard it would be for the incumbent OSS to pick apart the complexity of their code, developed across those millions of developer hours, and throw swathes of it away – overhauling in an attempt to match a truly superior OSS experience.

Can you see why I’m bemused that we’re not replacing developers with more UX experts? We can surely create more differentiation through vastly improved experience than we can in creating new functionality (almost all of the most important functionality has already been developed and we’re now investing developer time on the periphery).

Treating your OSS/BSS suite like a share portfolio

Like most readers, I’m sure your OSS/BSS suite consists of many components. What if you were to look at each of those components as assets? In a share portfolio, you analyse your stocks to see which assets are truly worth keeping and which should be divested.

We don’t tend to take such a long-term analytical view of our OSS/BSS components. We may regularly talk about their performance anecdotally, but I’m talking about a strategic analysis approach.

If you were to look at each of your OSS/BSS components, where would you put them in the BCG Matrix?
[Image: the BCG Matrix, sourced from NetMBA here.]

How many of your components are giving a return (whatever that may mean in your organisation) and/or have significant growth potential? How many are dogs that are a serious drain on your portfolio?

From an investor’s perspective, we seek to double-down our day-to-day investment in cash-cows and stars. Equally, we seek to divest our dogs.
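
If you wanted to make that assessment systematic rather than anecdotal, a simple classifier along BCG lines might look like the Python sketch below. The scores and the 0.5 thresholds are placeholder assumptions you’d calibrate to whatever return and growth mean in your organisation:

```python
# A sketch of classifying OSS/BSS components into BCG quadrants. The
# scores and the 0.5 thresholds are placeholder assumptions; calibrate
# them to whatever "return" and "growth" mean in your organisation.

def bcg_quadrant(return_score: float, growth_score: float) -> str:
    high_return = return_score >= 0.5
    high_growth = growth_score >= 0.5
    if high_return and high_growth:
        return "star"           # double-down
    if high_return:
        return "cash cow"       # harvest / protect
    if high_growth:
        return "question mark"  # invest selectively
    return "dog"                # candidate for divestment

portfolio = {  # component: (return_score, growth_score), both 0..1
    "inventory management": (0.7, 0.2),
    "legacy fault manager": (0.2, 0.1),
    "orchestration":        (0.4, 0.9),
}
for component, (ret, growth) in portfolio.items():
    print(f"{component}: {bcg_quadrant(ret, growth)}")
```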

But that’s not always the case with our OSS/BSS portfolio. We sometimes spend so much of our daily activity tweaking around the edges, trying to fix our dogs or just adding more things into our OSS/BSS suite – all of which distracts us from increasing the total value of our portfolio.

To paraphrase this Motley Fool investment strategy article into an OSS/BSS context:

  • Holding too many shares in a portfolio can crowd out returns for good ideas – being precisely focused on what’s making a difference rather than being distracted by having too many positions. Warren Buffett recommends taking 5-10 positions in companies that you are confident in holding forever (or for a very long period of time), rather than constantly switching. I shall note though that software could arguably be considered to be more perishable than the institutions we invest in – software doesn’t tend to last for decades (except some OSS perhaps  😀 )
  • Good ideas are scarce – ensuring you’re not getting distracted by the latest trends and buzzwords
  • Competitive knowledge advantage – knowing your market segment / portfolio extremely well and how to make the most of it, rather than having to up-skill on every new tool that you bring into the suite
  • Diversification isn’t lost – ensuring there is suitable vendor/product diversification to minimise risk, but also being open to long-term strategic changes in the product mix

Day-trading of OSS / BSS tools might be a fun hobby for those of us who solution them, but is it as beneficial as long-run investment?

I’d love to hear your thoughts and experiences.

How to kill the OSS RFP (part 4)

This is the fourth, and final, part (I think) in the series on killing the OSS RFI/RFP process, a process that suppliers and customers alike find inefficient. The concept is based on an initiative currently being investigated by TM Forum.

The previous three posts focused on the importance of trusted partnerships and the methods to develop them via OSS procurement events.

Today’s post takes a slightly different tack. It proposes a structural obsolescence that may lead to the death of the RFP. We might not have to kill it. It might die a natural death.

Actually, let me take that back. I’m sure RFPs won’t die out completely as a procurement technique. But I can see a time when RFPs are far less common and significantly different in nature to today’s procurement events.

How??
Technology!
That’s the answer all technologists cite to any form of problem of course. But there’s a growing trend that provides a portent to the future here.

It comes via the XaaS (As a Service) model of software delivery. We’re increasingly building and consuming cloud-native services. OSS of the future, the small-grid model, are likely to consume software as services from multiple suppliers.

And rather than having to go through a procurement event like an RFP to form each supplier contract, the small-grid model will simply be a case of consuming one/many services via API contracts. The API contract (eg an OpenAPI / Swagger specification) will be available for the world to see. You either consume it or you don’t. No lengthy contract negotiation phase to be had.
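
As a hedged illustration of how different that is from a procurement event, consuming a published API contract can be as simple as the Python sketch below. The endpoint, auth scheme and payload are hypothetical placeholders, not a real supplier’s API:

```python
# A sketch of "you either consume it or you don't": activating a service
# via a supplier's published API contract. The URL, auth scheme and
# payload are hypothetical placeholders, not a real supplier's API.
import requests

SUPPLIER_API = "https://api.example-oss-supplier.com/v1"

def activate_service(service_type: str, site_id: str, api_key: str) -> dict:
    """Order a service against the supplier's OpenAPI-described contract."""
    response = requests.post(
        f"{SUPPLIER_API}/service-orders",
        json={"serviceType": service_type, "siteId": site_id},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()  # the contract also defines the error responses
    return response.json()

# No RFP, no contract negotiation phase - just a call against the contract:
# order = activate_service("ethernet-100M", "SITE-1234", api_key="...")
```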

Now as mentioned above, the RFP won’t die, but evolve. We’ll probably see more RFPs formed between customers and the services companies that will create customised OSS solutions (utilising one/many OSS supplier services). And these RFPs may not be with the massive multinational services companies of today, but increasingly through smaller niche service companies. These micro-RFPs represent the future of OSS work, the gig economy, and will surely be facilitated by smart-RFP / smart-contract models (like the OSS Justice League model).

How to kill the OSS RFP (part 3)

As the title suggests, this is the third in a series of articles spawned by TM Forum’s initiative to investigate better procurement practices than using RFI / RFP processes.

There’s no doubt the RFI / RFP / contract model can be costly and time-consuming. To be honest, I feel the RFI / RFP process can be a reasonably good way of evaluating and identifying a new supplier / partner. I say “can be” because I’ve seen some really inefficient ones too. I’ve definitely refined and improved my vendor procurement methodology significantly over the years.

I feel it’s not so much the RFI / RFP that needs killing (significant disruption maybe), but its natural extension, the contract development and closure phase that can be significantly improved.

As mentioned in the previous two parts of this series (part 1 and part 2), the main stumbling block is human nature, specifically trust.

Have you ever been involved in the contract phase of a large OSS procurement event? How many pages did the contract end up being? Well over a hundred? How long did it take to reach agreement on all the requirements and clauses in that document?

I’d like to introduce the concept of a Minimum Viable Contract (MVC) here. An MVC doesn’t need most of the content that appears in a typical contract. It doesn’t attempt to predict every possible eventuality during the many years the OSS will survive for. Instead it focuses on intent and the formation of a trusting partnership.

I once led a large, multi-organisation bid response. Our response had dozens of contributors, many person-months of effort expended, included hundreds of pages of methodology and other content. It conformed with the RFP conditions. It seemed justified on a bid that exceeded $250M. We came second on that bid.

The winning bidder responded with a single page that included intent and fixed price amount. Their bid didn’t conform to RFP requests. Whereas we’d sought to engender trust through content, they’d engendered trust through relationships (in a part of the world where we couldn’t match the winning bidder’s relationships). The winning bidder’s response was far easier for the customer to evaluate than ours. Undoubtedly their MVC was easier and faster to gain agreement on.

An MVC is definitely a more risky approach for a customer to initiate when entering into a strategically significant partnership. But just like the sports-star transfer comparison in part 2, it starts from a position of trust and seeks to build a trusted partnership in return.

This is a highly contrarian view. What are your thoughts? Would you ever consider entering into an MVC on a big OSS procurement event?

How to kill the OSS RFP (part 2)

Yesterday’s post discussed an initiative that TM Forum is currently investigating – trying to identify an alternate OSS procurement process to the traditional RFI/RFP/contract approach.

It spoke about trusting partnerships being the (possibly) mythological key to killing off the RFP.

Have you noticed how much fear there is going into any OSS procurement event? Fear from suppliers and customers alike. That’s understandable because there are so many horror stories that both sides have heard of, or experienced, from past procurement events. The going-in position is of excitement, fear and an intention to ensure all loopholes are covered through reams of complex contractual terms and conditions. DBC – death by contract.

I’m a huge fan of Australian Rules Football (aka AFL). I’m lucky enough to have been privy to the inside story behind one of the game’s biggest ever player transfers.

The player, a legend of the game, had a history of poor behaviour. With each new contract, his initial club had inserted more and more T&Cs that attempted to control his behaviour (and protect the club from further public relations fallouts). His final contract was many pages long, with significant discussion required by player and club to reach agreement on each clause.

In the meantime, another club attempted to poach the superstar. Their contract offer fit on a single page and had no behaviour / discipline clauses. It was the same basic pro-forma that everyone on the team signed up to. The player was shocked. He asked where all the other clauses were. The answer from the poaching club was, to paraphrase, “why would we need those clauses? We trust you to do the right thing.” It became a significant component of the new club getting their man. And their man went on to deliver upon that trust, both on-field and off, over many years. He built one of the greatest careers ever.

I wonder whether this is just an outlier example? Could the same simplified contract model apply to OSS procurement, helping to build the trusting partnerships that everyone in the industry desires? As the initiator of the procurement event, does the customer control the first important step towards building a trusting partnership that lasts for many years?

How to kill the OSS RFP

TM Forum is currently investigating ways to procure OSS without resorting to the current RFI / RFP approach. It has published the following survey results.
Kill the RFP.

As it shows, the RFI / RFP isn’t fit for purpose for suppliers and customers alike. And it’s not just the RFI/RFP process. We could extend this further to include the contract / procurement process that bolts onto the back of the RFP.

I feel that part of the process remains relevant – the part that allows customers to evaluate the supplier/s that are best-fit for the customer’s needs. The part that is cumbersome relates to the time, effort and cost required to move from evaluation into formation of a contract.

I believe that this becomes cumbersome because of trust (or, more precisely, the lack of it).

“Every OSS supplier wants to achieve “trusted” status with their customers. Each supplier wants to be the source trusted to provide the best vision of the future for each customer. Similarly, each OSS customer wants a supplier they can trust and seek guidance from.”
Past PAOSS post.

However, OSS contracts (and the RFPs that lead into them) seem to be the antithesis of trust. They generally work on the assumption that every loophole must be closed that a supplier or vendor could leverage to rort the other.

There are two problems with this:

  • OSS transformations are complex projects and all loopholes can never be covered
  • OSS platforms tend to have a useful life of many years, which makes predicting the related future requirements, trends, challenges, opportunities, technologies, etc difficult to plan for

As a result, OSS RFIs / RFPs / contracts become cumbersome. Often, it’s the nature of the RFP itself that weighs the whole process down. The OSS Radar analogy shows an alternative mindset.

Mark Newman of TM Forum states, “…the telecoms industry is transitioning to a partnership model to benefit from innovative new technologies and approaches, and to make decisions and deploy new capabilities more quickly.”
The trusted partnership model is ideal. It allows both parties to avoid the contract development phase and deliver together efficiently. The challenge is human nature (ie we come back to trust).

I wonder whether there is merit in using an independent arbiter? A customer uses the RFI/RFP approach to find a partner or partners, but then all ongoing work is evaluated by the arbiter to ensure balance / trust is maintained between customer (and their need for fair pricing, quality products, etc) and supplier (and their need for realistic requirements, reasonable payment times, etc).

I’d love to hear your thoughts and experiences around partnerships that have worked well (or why they’ve worked badly). Have you ever seen examples where the arbitration model was (or wasn’t) helpful?