The OSS Minimum Feature Set is Not The Goal

“This minimum feature set (sometimes called the “minimum viable product”) causes lots of confusion. Founders act like the “minimum” part is the goal. Or worse, that every potential customer should want it. In the real world not every customer is going to get overly excited about your minimum feature set. Only a special subset of customers will and what gets them breathing heavy is the long-term vision for your product.

The reality is that the minimum feature set is 1) a tactic to reduce wasted engineering hours (code left on the floor) and 2) to get the product in the hands of early visionary customers as soon as possible.

You’re selling the vision and delivering the minimum feature set to visionaries not everyone.”
Steve Blank here.

A recent blog series discussed the use of pilots as an OSS transformation and augmentation change agent:

  • I have the need for OSS speed
  • Re-framing an OSS replacement strategy
  • OSS transformation is hard. What can we learn from open source?

Note that you can replace the term pilot in these posts with MVP – Minimum Viable Product.

The attraction of getting an MVP / pilot version of your OSS into the hands of users is familiarity and momentum. The solution becomes more tangible and therefore needs less documentation (eg architecture, designs, requirements gathering, etc) to describe foreign concepts to customers. The downside of the MVP / pilot is that not every customer will “get overly excited about your minimum feature set.”

As Steve says, “Only a special subset of customers will and what gets them breathing heavy is the long-term vision for your product.” The challenge for all of us in OSS is articulating the long-term vision and making it compelling… and not just leaving the product in its pilot state (we’ve all seen this happen, haven’t we?)

We’ll provide an example of a long-term vision tomorrow.

PS. I should also highlight that the maximum feature set isn’t the goal either.

The layers of ITIL redundancy

Today’s post is something of a heretical one, especially for the believers in ITIL. In the world of OSS, we look to build in layers of resiliency, not layers of redundancy.

The following diagram and subsequent text in italics describe a typical ITIL process and are all taken from https://www.computereconomics.com/article.cfm?id=1074

Example of relationship between ITIL incidents, problems, and changes

The sequence of events as shown in Figure 1 is as follows:

  • At TIME = 0, an External Event is detected by the Incident Management process. This could be as simple as a customer calling to say that service is unavailable or it could be an automated alert from a system monitoring device. The incident owner logs and classifies this as incident i2. Then, the incident owner tries to match i2 to known errors, work-arounds, or temporary fixes, but cannot find a match in the database.
  • At TIME = 1, the incident owner dispatches a problem request to the Problem Management process anticipating a work-around, temporary fix, or other assistance. In doing so, the incident owner has prompted the creation of Problem p2.
  • At TIME = 2, the problem owner of p2 returns the expected temporary fix to the incident owner of i2. Note that both i2 and p2 are active and exist simultaneously. The incident owner for i2 applies the temporary fix.
  • In this case, the work-around requires a change request. So, at TIME = 3, the incident owner for i2 initiates change request c2.
  • The change request c2 is applied successfully, and at TIME = 4, c2 is closed. Note that for a while i2, p2 and c2 all exist simultaneously.
  • Because c2 was successful, the incident owner for i2 can now confirm that the incident is resolved. At TIME = 5, i2 is closed. However, p2 remains active while the problem owner searches for a permanent fix. The problem owner for p2 would be responsible for implementing the permanent fix and initiating any necessary change requests.

But I look at it slightly differently. At their root, why do Incident Management, Problem Management and Change Management exist? They’re all just mechanisms for resolving a system health issue. If we detect an event/s and fix it, we don’t have to expend all the effort of flicking tickets around.

Thinking within the T2R paradigm of trouble -> incident -> problem -> change -> resolve holds us back. If we can skip the middle steps and immediately associate a resolution with the event/s, we get a whole lot more efficient. If we can immediately relate a trigger with a reaction, we can also get rid of the intermediate ticket flickers and the processing cycle time.

So, the NOC of the future surely requires us to build a trigger -> reaction recommendation engine (and body of knowledge). That’s a more powerful tool to supply to our NOC operators than incidents, problems and change requests. (Easier to write about than to actually solve though of course)
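
To make that slightly more concrete, here’s a toy sketch of what such an engine might look like, assuming the body of knowledge is simply a tally of which resolutions have worked for which event signatures in the past (all names here are hypothetical):

```python
from collections import defaultdict

class TriggerReactionEngine:
    """A toy body of knowledge mapping event signatures to ranked reactions."""

    def __init__(self):
        # signature -> {resolution: count of successful applications}
        self.knowledge = defaultdict(lambda: defaultdict(int))

    def record_outcome(self, signature: str, resolution: str):
        """Learn from a resolved event (eg mined from historical tickets)."""
        self.knowledge[signature][resolution] += 1

    def recommend(self, signature: str, top_n: int = 3) -> list:
        """Recommend reactions for a new event, best-scoring first."""
        candidates = self.knowledge.get(signature, {})
        return sorted(candidates, key=candidates.get, reverse=True)[:top_n]

engine = TriggerReactionEngine()
engine.record_outcome("LOS on 10GE port", "check/replace optics")
engine.record_outcome("LOS on 10GE port", "check/replace optics")
engine.record_outcome("LOS on 10GE port", "reseat fibre patch lead")

print(engine.recommend("LOS on 10GE port"))
# ['check/replace optics', 'reseat fibre patch lead']
```

A real NOC engine would obviously need far smarter signature matching and confidence scoring, but even this shape skips the intermediate ticket flicking entirely.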

OSS transformation is hard. What can we learn from open source?

Have you noticed an increasing presence of open-source tools in your OSS recently? Have you also noticed that open-source is helping to trigger transformation? Have you thought about why that might be?

Some might rightly argue that it is the cost factor. You could also claim that they tend to help resolve specific, but common, problems. They’re smaller and modular.

I’d argue that the reason relates to our most recent two blog posts. They’re fast to install and they’re easy to run in parallel for comparison purposes.

If you’re designing an OSS, can you introduce the same concepts? Your OSS might be for internal purposes or to sell to market. Either way, if you make it fast to build and easy to use, you have a greater chance of triggering transformation.

If you have a behemoth OSS to “sell,” transformation persuasion is harder. The customer needs to rally more resources (funds, people, time) just to compare with what they already have. If you have a behemoth on your hands, you need to try even harder to be faster, easier and more modular.

I have the need for OSS speed

You already know that speed is important for OSS users. They / we don’t want to wait minutes for the OSS to respond to a simple query. That’s obvious, right? The bleeding obvious.

But that’s not what today’s post is about. So then, what is it about?

Actually, it follows on from yesterday’s post about re-framing an OSS replacement strategy. If a parallel pilot OSS can be stood up in weeks, then it helps persuasion. If the OSS is also fast for operators to learn, then it helps persuasion. Why is that important? How can speed help with persuasion?

Put simply:

  • It takes x months of uncertainty out of the evaluators’ lives
  • It takes x months of parallel processing out of the evaluators’ lives
  • It also takes x months of task-switching out of the evaluators’ lives
  • Given x months of their lives back, customers will be more easily persuaded

It also helps with the parallel bake-off if your pilot OSS shows a speed improvement.

Whether we’re the buyer or seller in an OSS pilot, it’s incumbent upon us to increase speed.

You may ask how. Many ways, but I’d start with a mass-simplification exercise.

Re-framing an OSS replacement strategy

Friday’s post posed a re-framing exercise that asked you (whether customer, seller or integrator) to run a planning exercise as if you MUST offer a money-back guarantee on your OSS (whether internal or external). It’s designed to force a change in mindset from risk mitigation to risk removal.

We have another re-framing exercise for you today.

As we all know, incumbent OSS can be really difficult to replace / usurp. It becomes a massive exercise for a customer to change the status quo. And when you’re on the team that’s trying to instigate change (again whether you’re internal or external to the OSS customer organisation), you want to minimise the barriers to change.

The ideal replacement approach is to put a parallel pilot in place (which also bears some similarity with the strangler fig analogy). Unfortunately the pilot approach doesn’t get used as often as it could because pilot implementation projects tend to take months to stand up. This implies significant effort and cost, which in turn implies a major procurement event needs to occur.

If the parallel pilot could be stood up in days or a couple of weeks, then it becomes a more useful replacement persuasion strategy.

So today’s re-framing exercise is to ask yourself: what could you do to stand up a pilot version of your OSS in only days/weeks and at very little cost?

Let me add an extra twist to that exercise. When I say stand up the OSS in days/weeks, I also mean to hand over to the users, which means that it has to be intuitive enough for operators to begin using with almost no training. Don’t forget that the parallel solution is unlikely to have additional resources to operate it. It’s likely that the same workforce will need to operate incumbent and pilot, performing a comparison.

So, what could you do to stand up a pilot version of your OSS in only days/weeks, at very little cost and with almost immediate take-up by users?

What’s the one big factor holding back your OSS? And the exercise to reduce it

Earlier this week we talked about some of the emotions we experience in the OSS industry, including the trauma of OSS and anxiety relating to OSS.

To avoid these types of miserable feelings, it’s human nature to seek to limit them. We over-analyse, we over-specify, we over-engineer, we over-document, we over-contract, we over-react, we over-estimate (nah, actually we almost never over-estimate do we?), we over-resource (well, actually, we don’t seem to do that very often either). Anyway, you get the “over” idea.

What is the one big factor that leads to all of these overs? What is the one big factor that makes our related costs and delivery times become overs too?

Have you guessed yet?

The answer is…… drum-roll please…… RISK.

Let’s face it. OSS projects are as full as a centipede’s sock drawer when it comes to risk. The customer carries risk, the supplier carries risk, the integrators carry risk, the sponsors carry risk, the end-users carry risk, the implementers carry risk. What a burden! And it is a burden that impacts in many ways, as indicated in the triple constraint of OSS projects.

Anyone who’s done more than a few OSS projects knows there are many risks, and tends to respond by going into over-mode (ie all the overs mentioned above). That’s a clever strategy. It’s called risk mitigation.

But today’s post isn’t about risk mitigation. It takes a contrarian approach. Let me explain.

Have you noticed how many companies build risk reduction techniques into their sales models? Phrases like “money-back guarantee” abound. This technique is designed to remove most of the risk for the customer and also remove the associated barrier to purchase. To be fair, it might not actually be a case of removing the risk, but directing all of the risk onto the seller. Marketers call it risk reversal.

I’m sure you’re thinking, “well that’s fine for high-volume, low-cost products like burgers or books, but not so easy for complex, customised solutions like OSS.” I hear you!

I’m not actually asking you to offer a money-back guarantee for your OSS, although Passionate About OSS does offer that all the way from our products through to our high-end consultancy services.

What I am asking you to do (whether customer, seller or integrator) is to run a planning exercise as if you MUST offer a money-back guarantee. What that forces is a change of mindset from risk mitigation to risk removal. It forces consideration of what are the myriad risks “in the system” (for customer, seller and integrator) and how can they be removed? Here are a few risk planning suggestions FWIW.

Set the following challenge for your analysts and engineers – Don’t come to me with a business case for the one-million-and-first feature to add, but prove your brilliance by showing me the business case for the risks you will remove. Risk reduction rather than feature-add or cost-out business cases.

Let me know what you discover and what your results are.

OSS data that’s even more useless than useless

About 6-8 years ago, I was becoming achingly aware that I’d passed well beyond an information overload (I-O) threshold. More information was reaching my brain each day than I was able to assimilate, process and archive. What to do?

Well, I decided to stop reading newspapers and watching the news, in fact almost all television. I figured that those information sources were empty calories for the brain. At first it was just a trial, but I found that I didn’t miss it much at all and continued. Really important news seemed to find me at the metaphorical water-cooler anyway.

To be completely honest, I’m still operating beyond the I-O threshold, but at least it’s (arguably) now a more healthy information diet. I’m now far more useless at trivia game shows, which could be embarrassing if I ever sign up as a contestant on “Who Wants to be a Millionaire.” And missing out on the latest news sadly makes me far less capable of advising the Queen on how to react to Meghan Markle’s latest royal “atrocity.” The crosses we bear.

But I’m digressing markedly (and Markle-ey) from what this blog is all about – O.S.S.

Let me ask you a question – Is your OSS data like almost everybody else’s (ie also in I-O mode)?

Seth Godin recently shared 3 rules of data:
“First, don’t collect data unless it has a non-zero chance of changing your actions.
Second, before you seek to collect data, consider the costs of processing that data.
Third, acknowledge that data collected isn’t always accurate, and consider the costs of acting on data that’s incorrect.”

If I remove the double-negative from rule #1, it becomes: only collect data if it has even the slightest chance of changing your actions.

Most people take the perspective that we might as well collect everything because storage is just getting so cheap (and we should keep it, not because we ever use it, but just in case our AI tools eventually find some relevance locked away inside it).

In the meantime, pre-AI (rolls eyes), Seth’s other two rules provide further sanity to the situation. Storing data is cheap, except where it has to be timely and accurate enough to act on decisively and reliably.

So, let me go back to that revised rule. How much of the data in your OSS database / data-lake / data-warehouse / etc has even the slightest chance of changing your actions? As a percentage??

I suspect a majority is never used. And as most of it ages, it becomes even more useless than useless. One wonders, why are we storing it then?
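
If you wanted to put a number on that percentage, one crude starting point is to check each collected metric against a register of the actions it could conceivably trigger. A minimal sketch, where the metric names and the registry itself are purely illustrative:

```python
# Rule #1 as a collection gate: a metric is only worth collecting if at
# least one action is registered against it. Illustrative names only.
ACTION_REGISTRY = {
    "link_utilisation": ["trigger capacity augmentation"],
    "device_unreachable": ["raise incident", "page on-call engineer"],
    "fan_speed_rpm": [],    # collected today, but drives no action at all
}

def worth_collecting(metric: str) -> bool:
    return bool(ACTION_REGISTRY.get(metric))

collected = [m for m in ACTION_REGISTRY if worth_collecting(m)]
unused_pct = 100 * (1 - len(collected) / len(ACTION_REGISTRY))
print(f"{unused_pct:.0f}% of metrics have no chance of changing our actions")
```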

Becoming the Microsoft of the OSS industry

On Tuesday we pondered, “Would an OSS duopoly be a good thing?”

It cited two examples of operating systems amongst other famous duopolies:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)

Yesterday we provided an example of why consolidation is so much more challenging for OSS companies than, say, for Coke or Pepsi.

But maybe an operating system model could represent a path to overcome many of the challenges faced by the OSS industry. What if there were a Linux for OSS?

  • One where the drivers for any number of device types are already handled and we don’t have to worry about south-bound integrations anymore (mostly). When new devices come onto the market, they need to have agents designed to interact with the common, well-understood agents on the operating system (see the sketch after this list)
  • One where the user interface is generally defined and can be built upon by any number of other applications
  • One where data storage and handling is already pre-defined and additional utilities can be added to make data even easier to interact with
  • One where much of the underlying technical complexity is already abstracted and the higher value functionality can be built on top
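
To illustrate the first point in that list, here’s a hypothetical sketch of a common south-bound driver contract, where the platform owns one well-understood interface and device vendors ship plugins against it (every name below is invented for illustration):

```python
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    """The contract every device-type plugin must implement."""

    @abstractmethod
    def discover(self) -> dict:
        """Return inventory (cards, ports, etc) in the platform's common model."""

    @abstractmethod
    def collect_events(self) -> list:
        """Return events/alarms normalised to the platform's event model."""

class Acme10GSwitchDriver(DeviceDriver):
    """A vendor-supplied plugin for a fictional device type."""

    def discover(self) -> dict:
        return {"vendor": "Acme", "model": "10G-48", "ports": 48}

    def collect_events(self) -> list:
        return [{"severity": "major", "text": "port 12 LOS"}]

# The platform, not each OSS application, owns the integration:
drivers = [Acme10GSwitchDriver()]
inventory = [driver.discover() for driver in drivers]
```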

It seems to me to be a great starting point for solving many of the items listed as awaiting exponential improvement in this OSS Call for Innovation manifesto.

Interestingly, I can’t foresee any of today’s biggest OSS players developing such an operating system without a significant mindset shift. They have the resources to become the Microsoft / Apple / Google of the OSS market, but appear to be quite closed-door in their thinking. Waiting for disruption from elsewhere.

Could ONAP become the platform / OS?

Let me relate this by example. TM Forum recently ran an event called DTA in Kuala Lumpur. It was an event for sharing ideas and conversations, and for letting the market know all about the exhibitors’ products. All of the small to medium suppliers were happy to talk about their products, services and offerings. By contrast, I was ordered out of the rooms of one leading, but some might say struggling, vendor because I was only a walk-up. A walk-up representing a potential customer of theirs, yet they didn’t even ask how I might be of value to them (nor vice versa).

Would an OSS duopoly be a good thing?

The products/vendors page here on PAOSS currently has a couple of hundred entries. We’re working on an extended list that will almost double that number. More news on that shortly.

The level of fragmentation fascinates me, but if I’m completely honest, it probably disappoints me too. It’s great that it’s providing the platform for a long-tail of innovation. It’s exciting that there are so many niche opportunities. But it disappoints me because there’s so much duplication. How many alarm / performance / inventory / etc management tools are there? Can you imagine how many developer hours have been duplicated on similar feature development between products? And because there are so many different patterns, the total number of integration variants across the industry puts a huge integration tax on us all.

Compare this to the strength of duopoly markets such as:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)
  • Boeing / Airbus (commercial aircraft)
  • Visa / Mastercard (credit cards / payments)
  • Coca Cola / Pepsi (beverages, etc)

These duopolies have allowed for consolidation of expertise, effort, revenues/profits, etc. Most also provide a platform upon which smaller organisations / suppliers can innovate without having to re-invent everything (eg applications built upon operating systems, parts for aircraft, etc).

Buuuut……

Then I think about the impediments to achieving drastic consolidation through mergers and acquisitions (M&A) in the OSS industry.

There are opportunities to find complementary product alignment because no supplier services the entire OSS estate (where I’m using TM Forum’s TAM as a guide to the breadth of the OSS estate). However, it would be much harder to approach duopoly in OSS for a number of reasons:

  • Almost every OSS implementation is unique. Even if some of the products start out in common, they usually become quickly customised in terms of integrations, configurations, processes, etc
  • Interfaces to networks and other systems can vary so much. Modern EMS / devices / systems are becoming a little more consistent with IP, SNMP, Web APIs, etc. However, our networks still tend to carry a lot of legacy protocols that we have to interface with
  • Consolidation of product lines becomes much harder, partly because of the integrations above, but partly because the functionality-sets and workflows differ so vastly between similar products (eg inventory management tools)
  • Similarly, architectures and build platforms (eg programming languages) are not easily compatible
  • Implementations are often regional for a variety of reasons – regulatory, local partnerships / relationships, language, corporate culture, etc
  • Customers can be very change-averse, even when they’re instigating the change

By contrast, we regularly hear of Coca Cola buying up new brands. It’s relatively easy for Coke to add a new product line/s without having much impact on existing lines.

We also hear about Google’s acquisitions, adding complementary products into its product line or simply for the purpose of acquiring new talent / expertise. There are also acquisitions for the purpose of removing competitors or buying into customer bases.

Harder in all cases in the OSS industry.

Tomorrow we’ll share a story about an M&A attempting to buy into a customer base.

Then on Thursday, a story awaits on a possibly disruptive strategy towards consolidation in OSS.

Do you have a nagging OSS problem you cannot solve?

On Friday, we published a post entitled, “Think for a moment…”, which posed the question of whether we might be better served looking back at our most important existing features and streamlining them, rather than inventing new features that have little impact.

Over the weekend, a promotional email landed in my inbox from Nightingale Conant. It is completely unrelated to OSS, yet the steps outlined below seem to be a fairly good guide for identifying what to reinvent within our existing OSS.

“Go out and talk to as many people [customers or potential] as you can, and ask them the following questions:
1. Do you have a nagging problem you cannot solve?
2. What is that problem?
3. Why is solving that problem important to you?
4. How would solving that problem change the quality of your life?
5. How much would you be willing to pay to solve that problem?

Note: Before you ask these questions, make sure you let the people know that you’re not going to sell them anything. This way you’ll get quality answers.
After you’ve talked with numerous people, you’ll begin to see a pattern. Look for the common problems that you believe you could solve. You’ll also know how much people are willing to pay to have their problem solved and why.”

I’d be curious to hear back from you. Do those first 4 questions identify a pattern that relates to features you’ve never heard of before or features that your OSS already performs (albeit perhaps not as optimally as it could)?

Think for a moment…

“Many of the most important new companies, including Google, Facebook, Amazon, Netflix, Snapchat, Uber, Airbnb and more are winning not by giving good-enough solutions…, but rather by delivering a superior experience….”
Ben Thompson, stratechery.com

Think for a moment about the millions of developer hours that have gone into creating today’s OSS tools. Think also for a moment about how many of those tools are really clunky to use, install, configure, administer. How many OSS tools have truck-loads of functionality baked in that is just distracting, features that you’re never going to need or use? Conversely, how many are intuitive enough for a high-school student, let’s say, to use for the first time and become effective within a day of self-driven learning?

Let’s say an OSS came along that had all of the most important features (the ones customers really pay for, not the flashy, nice-to-have features) and offered a vastly superior user experience and user interface. Let’s say it took the market by storm.

With software and cloud delivery, it becomes harder to sustain differentiation. Innovative features and services are readily copied. But have a think about how hard it would be for the incumbent OSS to pick apart the complexity of their code, developed across those millions of developer hours, and throw swathes of it away – overhauling in an attempt to match a truly superior OSS experience.

Can you see why I’m bemused that we’re not replacing developers with more UX experts? We can surely create more differentiation through vastly improved experience than we can in creating new functionality (almost all of the most important functionality has already been developed and we’re now investing developer time on the periphery).

Nobody dabbles at dentistry

“There are some jobs that are only done by accredited professionals.
And then there are most jobs, jobs that some people do for fun, now and then, perhaps in front of the bathroom mirror.
It’s difficult to find your footing when you’re a logo designer, a comedian or a project manager. Because these are gigs that many people think they can do, at least a little bit.”
Seth Godin here.

I’d love to plagiarise the entire post from Seth above, but instead suggest you have a look at the other pearls of wisdom he shares in the link above.

So where does OSS fit in Seth’s thought model? Well, you don’t need an accreditation like a dentist does. Most of the best OSS practitioners I’ve met haven’t had any certifications to speak of.

Does the layperson think they can do an OSS role? Most people have never heard of OSS, so I doubt they would believe they could do a role as readily as they could see themselves being logo designers. But the best I’ve met have often come from fields other than telco / IT / network-ops.

One of my earliest OSS projects was for a new carrier in a country that had just deregulated. They were the second fixed-line carrier in this country and tried to poach staff from the incumbent. Few people crossed over. To overcome this lack of experience, the carrier built an OSS team that consisted of a mathematician, an architect, an automotive engineer, a really canny project manager and an assortment of other non-telco professionals.

The executives of that company clearly felt they could develop a team of experts (or just had no choice but to try). The strategy didn’t work out very well for them. It didn’t work out very well for us either. We were constantly trying to bring their team up to speed on the fundamentals in order to use the tools we were developing / delivering (remembering that as one of my first OSS projects, I was still desperately trying to bring myself up to speed – still am for that matter).

As Seth also states, “If you’re doing one of these non-dentist jobs, the best approach is to be extraordinarily good at it. So much better than an amateur that there’s really no room for discussion.” That needs hunger (a hunger to learn without an accredited syllabus).

It also needs humility though. Even the most extraordinarily good OSS proponents barely scratch the surface of all there is to learn about OSS. It’s the breadth of sub-expertise that probably explains why there is no single accreditation that covers all of OSS.

To link or not to link your OSS. That is the question

The first OSS project I worked on had a full-suite, single vendor solution. All products within the suite were integrated into a single database and that allowed their product developers to introduce a lot of cross-linking. That has its strengths and weaknesses.

The second OSS suite I worked with came from one of the world’s largest network vendors and integrators. Their suite primarily consisted of third-party products that they integrated together for the customer. It was (arguably) best-of-breed, all implemented as a single solution, but since the products were disparate, there was very little cross-linking. This approach also has strengths and weaknesses.

I’d become so used to the massive data migration and cross-referencing exercise required by the first OSS that I was stunned by the lack of time allocated by the second vendor for their data migration activities. The first took months and a significant level of expertise. The second took days and only required fairly simple data sets. That’s a plus for the second OSS.

However, the second OSS was severely lacking in cross-domain data, which impacted the richness of insight that could be easily unlocked.

Let me give an example to provide better context.

We know that a trouble ticketing system is responsible for managing the tracking, reporting and resolution of problems in a network operator’s network. This could be as simple as a repository storing a problem identifier and a list of notes on the actions performed to resolve the problem. There’s almost no cross-linking required.

A more referential ticketing system might have links to:

  • Alarm management – to show the events linked to the problem
  • Inventory management – to show the impacted (or possibly impacted) resources
  • Service management – to show the services impacted
  • Customer management – to show the customers impacted and possibly the related customer interactions
  • Spares management – to show the life-cycle of physical resources impacted
  • Workforce management – to manage the people / teams performing restorative actions
  • etc

The referential ticketing system gives far richer information, obviously, but you have to trade that off against the amount of integration and data maintenance that needs to go into supporting it. The question to ask is what level of linking is justifiable from a cost-benefit perspective.
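
To show that difference in data-model terms, here’s a small sketch of the two ticket types described above (the field names are illustrative only, not drawn from any particular product):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SimpleTicket:
    """The bare repository model: an identifier plus resolution notes."""
    problem_id: str
    notes: list = field(default_factory=list)

@dataclass
class ReferentialTicket(SimpleTicket):
    """The cross-linked model: optional keys into surrounding domains."""
    alarm_ids: list = field(default_factory=list)      # alarm management
    resource_ids: list = field(default_factory=list)   # inventory management
    service_ids: list = field(default_factory=list)    # service management
    customer_ids: list = field(default_factory=list)   # customer management
    spare_ids: list = field(default_factory=list)      # spares management
    work_order_id: Optional[str] = None                # workforce management

# Every linked field above implies an integration to build and data to
# keep synchronised -- the cost side of the cost-benefit question.
```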

Am I being an OSShole?

“Am I being an asshole?” In other words, am I pointing out problems or am I finding solutions?
Ramit Sethi.

One of the things I’ve noticed working on large and small OSS teams is that people who excel at finding solutions thrive in both. The ones who thrive on only identifying problems seemingly only function in large organisations.

In a small team, everyone needs to contribute to the many problems that need resolving. There’s a clear line of sight to what’s being delivered. I’ve tended to find that the pure problem-finders feel uncomfortable being the only ones not clearly delivering.

But there’s absolutely a role for identifying problems or for asking the question that completely re-frames the problem. One of the best I’ve seen is a CEO of a publicly listed company. He had virtually no knowledge of OSS, but could listen to half an hour of technical, round-in-circles discussions, then interject with a summary or question that re-framed and simplified the solution. The team then had a clear direction to implement. The CEO didn’t find the solution directly, but he was an instrumental component in the team reaching a solution.

The question to pose though is whether the question asker is being an OSShole or an agent provocateur*.

* BTW, I use this term within the context of being a change agent, someone who contributes to finding a solution, as opposed to the literal sense, which is to incite others into performing illegal acts.

Treating your OSS/BSS suite like a share portfolio

Like most readers, I’m sure your OSS/BSS suite consists of many components. What if you were to look at each of those components as assets? In a share portfolio, you analyse your stocks to see which assets are truly worth keeping and which should be divested.

We don’t tend to take such a long-term analytical view of our OSS/BSS components. We may regularly talk about their performance anecdotally, but I’m talking about a strategic analysis approach.

If you were to look at each of your OSS/BSS components, where would you put them in the BCG Matrix?
BCG matrix
Image sourced from NetMBA here.

How many of your components are giving a return (whatever that may mean in your organisation) and/or have significant growth potential? How many are dogs that are a serious drain on your portfolio?

From an investor’s perspective, we seek to double down on our day-to-day investment in cash-cows and stars. Equally, we seek to divest our dogs.

But that’s not always the case with our OSS/BSS portfolio. We sometimes spend so much of our daily activity tweaking around the edges, trying to fix our dogs or just adding more things into our OSS/BSS suite – all of which distracts us from increasing the total value of our portfolio.
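
As a quick illustration of that portfolio review, here’s a toy sketch that places each component in its BCG quadrant based on two judgment calls, whether it’s giving a return and whether it has significant growth potential (the component names and judgments are purely illustrative):

```python
def bcg_quadrant(giving_return: bool, high_growth: bool) -> str:
    """Map two portfolio judgments onto a BCG quadrant."""
    if giving_return and high_growth:
        return "star"           # double down
    if giving_return:
        return "cash cow"       # harvest and protect
    if high_growth:
        return "question mark"  # decide: invest or divest
    return "dog"                # candidate for divestment

portfolio = {
    "fault management": (True, False),
    "legacy inventory": (False, False),
    "service orchestration": (True, True),
}
for name, (giving_return, high_growth) in portfolio.items():
    print(f"{name}: {bcg_quadrant(giving_return, high_growth)}")
```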

To paraphrase this Motley Fool investment strategy article into an OSS/BSS context:

  • Holding too many shares in a portfolio can crowd out returns for good ideas – being precisely focused on what’s making a difference rather than being distracted by having too many positions. Warren Buffett recommends taking 5-10 positions in companies that you are confident in holding forever (or for a very long period of time), rather than constantly switching. I shall note though that software could arguably be considered to be more perishable than the institutions we invest in – software doesn’t tend to last for decades (except some OSS perhaps  😀 )
  • Good ideas are scarce – ensuring you’re not getting distracted by the latest trends and buzzwords
  • Competitive knowledge advantage – knowing your market segment / portfolio extremely well and how to make the most of it, rather than having to up-skill on every new tool that you bring into the suite
  • Diversification isn’t lost – ensuring there is suitable vendor/product diversification to minimise risk, but also being open to long-term strategic changes in the product mix

Day-trading of OSS / BSS tools might be a fun hobby for those of us who solution them, but is it as beneficial as long-run investment?

I’d love to hear your thoughts and experiences.

2019 predictions for OSS

Well, this is the time of year when people make big predictions for the coming year. But let me start by saying the headline is something of a misnomer. I’m not clever enough to have any predictions for 2019 for a couple of reasons:

  1. There are far too many clever people working across the myriad fields of expertise that make up an OSS for me to possibly guess which might gain traction this year
  2. I’m yet to figure out whether there are any consistent patterns or cycles like Moore’s Law that uniquely define progress in OSS. On the contrary, you could claim that there are any number of metrics that might define progress for OSS (or for any individual OSS stack). But I’ll also be honest enough to say that I haven’t tried applying any of these futurology techniques to find any useful patterns either.
    Futurism Methodologies

Instead, I’ll call out the many industry-wide challenges / opportunities that are still waiting to be solved in 2019. Many of these same challenges / opportunities have been around since I first started working on OSS projects circa 2000.

The Passionate About OSS Call for Innovation paper outlines a list of starting points where exponential improvements await.

I’m not sure if any will be solved in 2019 but I will make the prediction that the thousands of very clever people working in the OSS industry will make some exciting steps forward this year. Hopefully they’re some of the quantum leaps that await and not only the ever-present, but still highly challenging, incremental improvements.

How to build a personal, cloud-native OSS sandpit

As a project for 2019, we’re considering the development of a how-to training course that provides a step-by-step guide to build your own OSS sandpit to play with. It will be built around cloud-native and open-source components. It will be cutting-edge and micro-scaled (but highly scalable in case you want to grow it).

Does this sound like something you’d be interested in hearing more about?

Like or comment if you’d like us to keep you across this project in 2019.

I’d love to hear your thoughts on what the sandpit should contain. What would be important to you? We already have a set of key features identified, but will refine it based on community feedback.

How to kill the OSS RFP (part 4)

This is the fourth, and final, part (I think) in the series on killing the OSS RFI/RFP process, a process that suppliers and customers alike find to be inefficient. The concept is based on an initiative currently being investigated by TM Forum.

The previous three posts focused on the importance of trusted partnerships and the methods to develop them via OSS procurement events.

Today’s post takes a slightly different tack. It proposes a structural obsolescence that may lead to the death of the RFP. We might not have to kill it. It might die a natural death.

Actually, let me take that back. I’m sure RFPs won’t die out completely as a procurement technique. But I can see a time when RFPs are far less common and significantly different in nature to today’s procurement events.

How??
Technology!
That’s the answer all technologists cite to any form of problem, of course. But there’s a growing trend that provides a portent to the future here.

It comes via the XaaS (as-a-Service) model of software delivery. We’re increasingly building and consuming cloud-native services. OSS of the future (the small-grid model) are likely to consume software as services from multiple suppliers.

And rather than having to go through a procurement event like an RFP to form each supplier contract, the small-grid model will simply be a case of consuming one/many services via API contracts. The API contract (eg an OpenAPI / Swagger specification) will be available for the world to see. You either consume it or you don’t. No lengthy contract negotiation phase to be had.
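
As a sketch of what “consume it or don’t” might look like in practice, assuming a supplier publishes its contract at a well-known path (the URL, endpoints and payloads below are entirely hypothetical):

```python
import json
import urllib.request

BASE = "https://api.example-oss-supplier.com/v1"   # hypothetical supplier

def fetch_contract() -> dict:
    """Pull the supplier's published OpenAPI document -- the 'contract'."""
    with urllib.request.urlopen(f"{BASE}/openapi.json") as resp:
        return json.load(resp)

def create_ticket(summary: str) -> dict:
    """Consume the service exactly as the published contract defines it."""
    body = json.dumps({"summary": summary}).encode()
    req = urllib.request.Request(
        f"{BASE}/tickets",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Onboarding a new supplier then becomes a coding exercise against their published contract, rather than a negotiation.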

Now as mentioned above, the RFP won’t die, but evolve. We’ll probably see more RFPs formed between customers and the services companies that will create customised OSS solutions (utilising one/many OSS supplier services). And these RFPs may not be with the massive multinational services companies of today, but increasingly with smaller niche service companies. These micro-RFPs represent the future of OSS work, the gig economy, and will surely be facilitated by smart-RFP / smart-contract models (like the OSS Justice League model).

How to kill the OSS RFP (part 3)

As the title suggests, this is the third in a series of articles spawned by TM Forum’s initiative to investigate better procurement practices than using RFI / RFP processes.

There’s no doubt the RFI / RFP / contract model can be costly and time-consuming. To be honest, I feel the RFI / RFP process can be a reasonably good way of evaluating and identifying a new supplier / partner. I say “can be” because I’ve seen some really inefficient ones too. I’ve definitely refined and improved my vendor procurement methodology significantly over the years.

I feel it’s not so much the RFI / RFP that needs killing (significant disruption, maybe), but its natural extension, the contract development and closure phase, that can be significantly improved.

As mentioned in the previous two parts of this series (part 1 and part 2), the main stumbling block is human nature, specifically trust.

Have you ever been involved in the contract phase of a large OSS procurement event? How many pages did the contract end up being? Well over a hundred? How long did it take to reach agreement on all the requirements and clauses in that document?

I’d like to introduce the concept of a Minimum Viable Contract (MVC) here. An MVC doesn’t need most of the content that appears in a typical contract. It doesn’t attempt to predict every possible eventuality during the many years the OSS will survive for. Instead it focuses on intent and the formation of a trusting partnership.

I once led a large, multi-organisation bid response. Our response had dozens of contributors, many person-months of effort expended, and hundreds of pages of methodology and other content. It conformed with the RFP conditions. It seemed justified on a bid that exceeded $250M. We came second on that bid.

The winning bidder responded with a single page that included intent and a fixed price amount. Their bid didn’t conform to the RFP requests. Whereas we’d sought to engender trust through content, they’d engendered trust through relationships (in a part of the world where we couldn’t match the winning bidder’s relationships). The winning bidder’s response was far easier for the customer to evaluate than ours. Undoubtedly their MVC was easier and faster to gain agreement on.

An MVC is definitely a more risky approach for a customer to initiate when entering into a strategically significant partnership. But just like the sports-star transfer comparison in part 2, it starts from a position of trust and seeks to build a trusted partnership in return.

This is a highly contrarian view. What are your thoughts? Would you ever consider entering into an MVC on a big OSS procurement event?

How to kill the OSS RFP (part 2)

Yesterday’s post discussed an initiative that TM Forum is currently investigating – trying to identify an alternate OSS procurement process to the traditional RFI/RFP/contract approach.

It spoke about trusting partnerships being the (possibly) mythological key to killing off the RFP.

Have you noticed how much fear there is going into any OSS procurement event? Fear from suppliers and customers alike. That’s understandable because there are so many horror stories that both sides have heard of, or experienced, from past procurement events. The going-in position is of excitement, fear and an intention to ensure all loopholes are covered through reams of complex contractual terms and conditions. DBC – death by contract.

I’m a huge fan of Australian Rules Football (aka AFL). I’m lucky enough to have been privy to the inside story behind one of the game’s biggest ever player transfers.

The player, a legend of the game, had a history of poor behaviour. With each new contract, his initial club had inserted more and more T&Cs that attempted to control his behaviour (and protect the club from further public relations fallouts). His final contract was many pages long, with significant discussion required by player and club to reach agreement on each clause.

In the meantime, another club attempted to poach the superstar. Their contract offer fit on a single page and had no behaviour / discipline clauses. It was the same basic pro-forma that everyone on the team signed up to. The player was shocked. He asked where all the other clauses were. The answer from the poaching club was, to paraphrase, “why would we need those clauses? We trust you to do the right thing.” It became a significant component of the new club getting their man. And their man went on to deliver upon that trust, both on-field and off, over many years. He built one of the greatest careers ever.

I wonder whether this is just an outlier example? Could the same simplified contract model apply to OSS procurement, helping to build the trusting partnerships that everyone in the industry desires? As the initiator of the procurement event, does the customer control the first important step towards building a trusting partnership that lasts for many years?