How to bring your art and your science to your OSS

In the last two posts, we’ve discussed repeatability within the field of OSS implementation – paint-by-numbers vs artisans and then resilience vs precision in delivery practices.

Now I’d like you to have a think about how those posts overlay onto this quote by Karl Popper:
“Non-reproducible single occurrences are of no significance to science.”

Every OSS implementation is different. That means that every one is a non-reproducible single occurrence. But if we bring this mindset into our OSS implementations, it means we’re applying artisanal methods rather than the scientific method to the project.

I’m all for bringing more art, more creativity, more resilience into our OSS projects.

I’m also advocating for more science though. More repeatability. More precision. Whilst every OSS project may be different at a macro level, there are a lot of similarities in the micro-elements. There tend to be similarities in sequences of activities if you pay close attention to the rhythms of your projects. Perhaps our products can use techniques to spot and leverage those similarities too.
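
To make that concrete, here’s a minimal sketch (in Python, using the standard difflib module) of one way a product could spot repeated micro-sequences across two projects’ activity lists. The activity names are purely hypothetical, not drawn from any real project plan.

```python
# Minimal sketch: spotting repeatable micro-sequences across two OSS projects.
# The activity lists are hypothetical examples only.
from difflib import SequenceMatcher

project_a = ["site survey", "design pack", "procure hardware", "rack and stack",
             "configure device", "discover in OSS", "activate service", "handover"]
project_b = ["desktop audit", "design pack", "procure hardware", "rack and stack",
             "configure device", "discover in OSS", "billing setup", "handover"]

matcher = SequenceMatcher(None, project_a, project_b)
for block in matcher.get_matching_blocks():
    if block.size >= 2:  # only report runs of two or more shared activities
        print("Shared sequence:", project_a[block.a:block.a + block.size])

print(f"Overall similarity: {matcher.ratio():.0%}")
```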

In other words, bring your art and your science to your OSS. Please leave a comment below. I’d love to hear the techniques you use to achieve this.

The Mona Lisa of OSS

All OSS rely on workflows to make key outcomes happen. Outcomes like activating a customer order, resolving a fault, billing customers, etc. These workflows often touch multiple OSS/BSS products and/or functional capabilities. There’s not always a single best way to achieve an outcome.

If you’re responsible for your organisation’s workflows do you want to build a paint-by-numbers approach where each process is repeatable?
Or do you want the bespoke paintings, which could unintentionally lead to a range in quality from Leonardo’s Mona Lisa to my 3-year-old’s finger painting?

Apart from new starters, who thrive on a paint-by-numbers approach at first, every person who uses an OSS wants to feel like an accomplished artisan. They want to have the freedom to get stuff done with their own unique brush-strokes. They certainly don’t want to follow a standard, pre-defined pattern day-in and day-out. That would be so boring and demoralising. I don’t blame them. I’d be exactly the same.

This is perhaps why some organisations don’t have documented workflows, or at least they only have loosely defined ones. It’s just too hard to capture all the possibilities on one swim-lane chart.

I’m all for having artisans on the team who are able to handle the rarer situations (eg process fall-outs) with bespoke processes. But bespoke processes should never be the norm. Continual improvement thrives on a strong level of repeatability.

To me, bespoke workflows are not necessarily an indication of a team of free spirited artists that need to be regimented, but of processes with too many variants. Click on this link to find recommendations for reducing the level of bespoke processes in your organisation.

Are processes bespoke or paint-by-numbers in your organisation?

BTW. We’ll take a slightly different perspective on workflow repeatability in tomorrow’s post.

Can OSS/BSS assist CX? We’re barely touching the surface

Have you ever experienced an epic customer experience (CX) fail when dealing with a network service operator, like the one I described yesterday?

In that example, the OSS/BSS, and possibly the associated people / process, had a direct impact on poor customer experience. Admittedly, that 7 truck-roll experience was a number of years ago now.

We have fewer excuses these days. Smart phones and network connected devices allow us to get OSS/BSS data into the field in ways we previously couldn’t. There’s no need for printed job lists, design packs and the like. Our OSS/BSS can leverage these connected devices to give far better decision intelligence in real time.

If we look to the logistics industry, we can see how parcel tracking technologies help to automatically provide status / progress to parcel recipients. We can see how recipients can also modify their availability, which automatically adjusts logistics delivery sequencing / scheduling.

This has multiple benefits for the logistics company:

  • Increases first-time delivery rates
  • Improves the ability to automatically notify customers (eg email, SMS, chatbots)
  • Decreases customer enquiries / complaints
  • Decreases the amount of time the truck drivers need to spend communicating back to base and with clients
  • Most importantly, improves the customer experience

Logistics is an interesting challenge for our OSS/BSS due to the sheer volume of customer interaction events handled each day.

But there’s another area that excites me even more, one where CX is improved through better data quality:

  • The ability for field workers to interact with OSS/BSS data in real-time
  • To see the design packs
  • To compare with field situations
  • To update the data where there is inconsistency.

Even more excitingly, to introduce augmented reality to assist with decision intelligence for field work crews:

  • To provide an overlay of what fibres need to be spliced together
  • To show exactly which port a patch-lead needs to connect to
  • To show where an underground cable route goes
  • To show where a cable runs through trayway in a data centre
  • etc, etc

We’re barely touching the surface of how our OSS/BSS can assist with CX.

The 7 truck-roll fail

In yesterday’s post we talked about the cost of quality. We talked about examples of primary, secondary and tertiary costs of bad data quality (DQ). We also highlighted that the tertiary costs, including the damage to brand reputation, can be one of the biggest factors.

I often cite an example where it took 7 truck rolls to connect a service to my house a few years ago. This provider was unable to provide an estimate of when their field staff would arrive each day, so it meant I needed to take a full day off work on each of those 7 occasions.

The primary cost factors are fairly obvious, for me, for the provider and for my employer at the time. On the direct costs alone, it would’ve taken many months, if not years, for the provider to recoup their install costs. Most of that cost was attributable to the OSS/BSS and associated processes.

Many of those 7 truck rolls were a direct result of having bad or incomplete data:

  • They didn’t record that it was a two storey house (and therefore needed a crew with “working at heights” certification and gear)
  • They didn’t record that the install was at a back room at the house (and therefore needed a higher-skilled crew to perform the work)
  • The existing service was installed underground, but they had no records of the route (they went back to the designs and installed a completely different access technology because replicating the existing service was just too complex)

Customer Experience (CX), aka brand damage, is the greatest of all cost of quality factors when you consider studies such as those mentioned below.

“A dissatisfied customer will tell 9-15 people about their experience. Around 13% of dissatisfied customers tell more than 20 people.”
White House Office of Consumer Affairs
(according to customerthink.com).

Through this page alone, I’ve told a lot more than 20 (although I haven’t mentioned the provider’s name, so perhaps it doesn’t count! 🙂  ).

But the point is that my 7 truck-roll example above could’ve been avoided if the provider’s OSS/BSS gave better information to their field workers (or perhaps enforced that the field workers populated useful data).

We’ll talk a little more tomorrow about modern Field Services tools and how our OSS/BSS can impact CX in a much more positive way.

Calculating the cost of quality

This week of posts has followed the theme of the cost of quality. Data quality that is.

But how do you calculate the cost of bad data quality?

Yesterday’s post mentioned starting with PNI (Physical Network Inventory). PNI is the cables, splices / joints, patch panels, ducts, pits, etc. This data doesn’t tend to have a programmable interface to electronically reconcile with. This makes it prone to errors of many types – mistakes in manual entry, reconfigurations that are never documented, assets that are lost or stolen, assets that are damaged or degraded, etc.

Some costs resulting from poor PNI data quality (DQ) can be considered primary costs. This includes SLA breaches caused by an inability to identify a fault within an SLA window due to incorrect / incomplete / indecipherable design data. These costs are the most obvious and easy to calculate because they result in SLA penalties. If a network operator misses a few of these with tier 1 clients then this is the disaster referred to yesterday.

But the true cost of quality is in the ripple-out effects. The secondary costs. These include the many factors that result in unnecessary truck rolls. With truck rolls come extra costs including contractor costs, delayed revenues, design rework costs, etc.

Other secondary effects include:

  • Downstream data maintenance in systems that rely on PNI data
  • Code in downstream systems that caters for poor data quality, which in turn increases the costs of complexity such as:
    • Additional testing
    • Additional fixing
    • Additional curation
  • Delays in the ability to get new products to market
  • Reduced ability to accurately price products (due to variation in real costs caused by extra complexity)
  • Reduced impact of automations (due to increased variants)
  • Potential to impact Machine Learning / Artificial Intelligence engines, which rely on reliable and consistent data at scale
  • etc

There are probably more sophisticated ways to calculate the cost of quality across all these factors and more, but in most cases I just use a simple multiplier:

  • Number of instances of DQ events (eg number of additional truck rolls); multiplied by
  • A rule-of-thumb cost impact of each event (eg the cost of each additional truck roll)

Sometimes the rules-of-thumb are challenging to estimate, so I tend to err on the side of conservatism. I figure that even if the rules-of-thumb aren’t perfectly accurate, at least they produce a real cost estimate rather than just anecdotal evidence.
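
For illustration, here’s a minimal sketch of that simple multiplier. The event types, counts and per-event costs below are placeholder assumptions; substitute your own rules of thumb.

```python
# Minimal sketch of the simple cost-of-poor-quality multiplier described above.
# Event counts and per-event costs are placeholder assumptions only.
dq_events = {
    # event type: (instances per year, rule-of-thumb cost per instance)
    "additional truck roll": (400, 350.0),
    "SLA breach":            (12, 10000.0),
    "design rework":         (150, 800.0),
}

total = 0.0
for event, (count, unit_cost) in dq_events.items():
    cost = count * unit_cost
    total += cost
    print(f"{event}: {count} x ${unit_cost:,.0f} = ${cost:,.0f}")

print(f"Estimated annual cost of poor data quality: ${total:,.0f}")
```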

And most important of all are the tertiary and less tangible costs of brand damage (also known as Customer Experience, CX, or reputation damage). We’ll talk a little more about that tomorrow.

Where an absence of OSS data can still provide insights

The diagram below has some parallels with OSS. The story however is a little long before it gets to the OSS part, so please bear with me.

[Diagram: distribution of hits recorded on aircraft returning from WWII missions]

The diagram shows an analysis the US Navy performed during WWII on where planes were being shot. The theory was that they should be reinforcing the areas that received the most hits (ie the wing-tips, central part of the body, etc, as per the diagram).

Abraham Wald, a statistician, had a completely different perspective. His rationale was that the hits would be more uniform and the blank sections on the diagram above represent the areas that needed reinforcement. Why? Well, the planes that were hit there never made it home for analysis.

In OSS, this is akin to the device or EMS that has failed and is unable to send any telemetry data. No alarms are appearing in our alarm lists for those devices / EMS because they’re no longer capable of sending any.

That’s why we use heartbeat mechanisms to confirm a system is alive (and capable of sending alarms). These come in the form of pollers, pingers or just watching for other signs of life such as network traffic of any form.
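
As a rough illustration, here’s a minimal sketch of that idea in Python: flag any device or EMS that has gone quiet for longer than a threshold, precisely because silence is the one thing it can’t report for itself. The source names and threshold are hypothetical.

```python
# Minimal heartbeat sketch: flag any source that has sent no sign of life
# (alarm, poll response, traffic, etc) within a threshold window.
# Source names and the threshold are hypothetical.
import time

SILENCE_THRESHOLD = 300  # seconds of silence before we raise a flag

# last time we saw *anything* from each source
last_seen = {
    "ems-north-01": time.time() - 60,
    "router-core-07": time.time() - 900,  # silent for 15 minutes
}

def silent_sources(last_seen, threshold=SILENCE_THRESHOLD, now=None):
    now = now if now is not None else time.time()
    return [src for src, ts in last_seen.items() if now - ts > threshold]

for src in silent_sources(last_seen):
    print(f"HEARTBEAT MISSED: {src} has been silent for over {SILENCE_THRESHOLD}s")
```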

In what other areas of OSS can an absence of data demonstrate an insight?

The OSS Minimum Feature Set is Not The Goal

“This minimum feature set (sometimes called the “minimum viable product”) causes lots of confusion. Founders act like the “minimum” part is the goal. Or worse, that every potential customer should want it. In the real world not every customer is going to get overly excited about your minimum feature set. Only a special subset of customers will and what gets them breathing heavy is the long-term vision for your product.

The reality is that the minimum feature set is 1) a tactic to reduce wasted engineering hours (code left on the floor) and 2) to get the product in the hands of early visionary customers as soon as possible.

You’re selling the vision and delivering the minimum feature set to visionaries not everyone.”
Steve Blank here.

A recent blog series discussed the use of pilots as an OSS transformation and augmentation change agent.
  • I have the need for OSS speed
  • Re-framing an OSS replacement strategy
  • OSS transformation is hard. What can we learn from open source?

Note that you can replace the term pilot in these posts with MVP – Minimum Viable Product.

The attraction in getting an MVP / pilot version of your OSS into the hands of users is familiarity and momentum. The solution becomes more tangible and therefore needs less documentation (eg architecture, designs, requirement gathering, etc) to describe foreign concepts to customers. The downside of the MVP / pilot is that not every customer will “get overly excited about your minimum feature set.”

As Steve says, “Only a special subset of customers will and what gets them breathing heavy is the long-term vision for your product.” The challenge for all of us in OSS is articulating the long-term vision and making it compelling…. and not just leaving the product in its pilot state (we’ve all seen this happen haven’t we?)

We’ll provide an example of a long-term vision tomorrow.

PS. I should also highlight that the maximum feature set isn’t the goal either.

The layers of ITIL redundancy

Today’s is something of a heretical post, especially for the believers in ITIL. In the world of OSS, we look to build in layers of resiliency and not layers of redundancy.

The following diagram and subsequent text in italics describes a typical ITIL process and is all taken from https://www.computereconomics.com/article.cfm?id=1074

Example of relationship between ITIL incidents, problems, and changes

The sequence of events as shown in Figure 1 is as follows:

  • At TIME = 0, an External Event is detected by the Incident Management process. This could be as simple as a customer calling to say that service is unavailable or it could be an automated alert from a system monitoring device. The incident owner logs and classifies this as incident i2. Then, the incident owner tries to match i2 to known errors, work-arounds, or temporary fixes, but cannot find a match in the database.
  • At TIME = 1, the incident owner dispatches a problem request to the Problem Management process anticipating a work-around, temporary fix, or other assistance. In doing so, the incident owner has prompted the creation of Problem p2.
  • At TIME = 2, the problem owner of p2 returns the expected temporary fix to the incident owner of i2.  Note that both i2 and p2 are active and exist simultaneously. The incident owner for i2 applies the temporary fix.
  • In this case, the work-around requires a change request.  So, at Time = 3, the incident owner for i2 initiates change request, c2.
  • The change request c2 is applied successfully, and at TIME = 4, c2 is closed. Note that for a while i2, p2 and c2 all exist simultaneously.
  • Because c2 was successful, the incident owner for i2 can now confirm that the incident is resolved. At TIME = 5, i2 is closed. However, p2 remains active while the problem owner searches for a permanent fix. The problem owner for p2 would be responsible for implementing the permanent fix and initiating any necessary change requests.

But I look at it slightly differently. At their root, why do Incident Management, Problem Management and Change Management exist? They’re all just mechanisms for resolving a system* health issue. If we detect an event/s and fix it, we don’t have to expend all the effort of flicking tickets around.

Thinking within the T2R paradigm of trouble -> incident -> problem -> change -> resolve holds us back. If we can skip the middle steps and immediately associate a resolution with the event/s, we get a whole lot more efficient. If we can immediately relate a trigger with a reaction, we can also get rid of the intermediate ticket flickers and the processing cycle time.

So, the NOC of the future surely requires us to build a trigger -> reaction recommendation engine (and body of knowledge). That’s a more powerful tool to supply to our NOC operators than incidents, problems and change requests. (Easier to write about than to actually solve though of course)
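
To illustrate the concept (a sketch only, not any product’s implementation), here’s what a bare-bones trigger -> reaction body of knowledge could look like: past resolutions are tallied per event signature and the most frequently successful ones are recommended, skipping the intermediate tickets. All event signatures and resolutions below are invented.

```python
# Minimal sketch of a trigger -> reaction body of knowledge.
# Past resolutions are tallied per event signature; the most frequently
# successful ones are recommended. All names below are invented.
from collections import Counter, defaultdict

history = defaultdict(Counter)  # event signature -> Counter of resolutions that worked

def record_resolution(event_signature, resolution):
    history[event_signature][resolution] += 1

def recommend(event_signature, top_n=3):
    return history[event_signature].most_common(top_n)

# Build the body of knowledge from (hypothetical) past events
record_resolution("LOS on 10GE port", "replace optic")
record_resolution("LOS on 10GE port", "replace optic")
record_resolution("LOS on 10GE port", "re-seat patch lead")
record_resolution("BGP session down", "bounce session after config audit")

print(recommend("LOS on 10GE port"))
# [('replace optic', 2), ('re-seat patch lead', 1)]
```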

OSS transformation is hard. What can we learn from open source?

Have you noticed an increasing presence of open-source tools in your OSS recently? Have you also noticed that open-source is helping to trigger transformation? Have you thought about why that might be?

Some might rightly argue that it is the cost factor. You could also claim that they tend to help resolve specific, but common, problems. They’re smaller and modular.

I’d argue that the reason relates to our most recent two blog posts. They’re fast to install (don’t need to get bogged down in procurement) and they’re easy to run in parallel for comparison purposes.

If you’re designing an OSS can you introduce the same concepts? Your OSS might be for internal purposes or to sell to market. Either way, if you make it fast to build and easy to use, you have a greater chance of triggering transformation.

If you have a behemoth OSS to “sell,” transformation persuasion is harder. The customer needs to rally more resources (funds, people, time) just to compare with what they already have. If you have a behemoth on your hands, you need to try even harder to be faster, easier and more modular.

I have the need for OSS speed

You already know that speed is important for OSS users. They / we don’t want to wait for minutes for the OSS to respond to a simple query. That’s obvious right? The bleeding obvious.

But that’s not what today’s post is about. So then, what is it about?

Actually, it follows on from yesterday’s post about re-framing OSS transformation. If a parallel pilot OSS can be stood up in weeks then it helps persuasion. If the OSS is also fast for operators to learn, then it helps persuasion. Why is that important? How can speed help with persuasion?

Put simply:

  • It takes x months of uncertainty out of the evaluators’ lives
  • It takes x months of parallel processing out of the evaluators’ lives
  • It also takes x months of task-switching out of the evaluators’ lives
  • Given x months of their lives back, customers will be more easily persuaded

It also helps with the parallel bake-off if your pilot OSS shows a speed improvement.

Whether we’re the buyer or seller in an OSS pilot, it’s incumbent upon us to increase speed.

You may ask how. Many ways, but I’d start with a mass-simplification exercise.

Re-framing an OSS replacement strategy

Friday’s post posed a re-framing exercise that asked you (whether customer, seller or integrator) to run a planning exercise as if you MUST offer a money-back guarantee on your OSS (whether internal or external). It’s designed to force a change in mindset from risk mitigation to risk removal.

We have another re-framing exercise for you today.

As we all know, incumbent OSS can be really difficult to replace / usurp. It becomes a massive exercise for a customer to change the status quo. And when you’re on the team that’s trying to instigate change (again whether you’re internal or external to the OSS customer organisation), you want to minimise the barriers to change.

The ideal replacement approach is to put a parallel pilot in place (which also bears some similarity with the strangler fig analogy). Unfortunately the pilot approach doesn’t get used as often as it could because pilot implementation projects tend to take months to stand up. This implies significant effort and cost, which in turn implies a major procurement event needs to occur.

If the parallel pilot could be stood-up in days or a couple of weeks, then it becomes a more useful replacement persuasion strategy.

So today’s re-framing exercise is to ask yourself: what could you do to stand up a pilot version of your OSS in only days/weeks and at very little cost?

Let me add an extra twist to that exercise. When I say stand up the OSS in days/weeks, I also mean to hand over to the users, which means that it has to be intuitive enough for operators to begin using with almost no training. Don’t forget that the parallel solution is unlikely to have additional resources to operate it. It’s likely that the same workforce will need to operate incumbent and pilot, performing a comparison.

So, what could you do to stand up a pilot version of your OSS in only days/weeks, at very little cost and with almost immediate take-up by users?

What’s the one big factor holding back your OSS? And the exercise to reduce it

Earlier this week we talked about some of the emotions we experience in the OSS industry: the trauma of OSS and anxiety relating to OSS.

To avoid these types of miserable feelings, it’s human nature to seek to limit them. We over-analyse, we over-specify, we over-engineer, we over-document, we over-contract, we over-react, we over-estimate (nah, actually we almost never over-estimate do we?), we over-resource (well, actually, we don’t seem to do that very often either). Anyway, you get the “over” idea.

What is the one big factor that leads to all of these overs? What is the one big factor that makes our related costs and delivery times become overs too?

Have you guessed yet?

The answer is…… drum-roll please…… RISK.

Let’s face it. OSS projects are as full as a centipede’s sock drawer when it comes to risk. The customer carries risks, the supplier carries risk, the integrators carry risk, the sponsors carry risk, the end-users carry risk, the implementers carry risk. What a burden! And it is a burden that impacts in many ways, as indicated in the triple constraint of OSS projects.

Anyone who’s done more than a few OSS projects knows there are many risks and they tend to respond by going into over-mode (ie all the overs mentioned above). That’s a clever strategy. It’s called risk mitigation.

But today’s post isn’t about risk mitigation. It takes a contrarian approach. Let me explain.

Have you noticed how many companies build risk reduction techniques into their sales models? Phrases like “money-back guarantee” abound. This technique is designed to remove most of the risk for the customer and also remove the associated barrier to purchase. To be fair, it might not actually be a case of removing the risk, but directing all of the risk onto the seller. Marketers call it risk reversal.

I’m sure you’re thinking, “well that’s fine for high-volume, low-cost products like burgers or books, but not so easy for complex, customised solutions like OSS.” I hear you!

I’m not actually asking you to offer a money-back guarantee for your OSS, although Passionate About OSS does offer that all the way from our products through to our high-end consultancy services.

What I am asking you to do (whether customer, seller or integrator) is to run a planning exercise as if you MUST offer a money-back guarantee. What that forces is a change of mindset from risk mitigation to risk removal. It forces consideration of what are the myriad risks “in the system” (for customer, seller and integrator) and how can they be removed? Here are a few risk planning suggestions FWIW.

Set the following challenge for your analysts and engineers – Don’t come to me with a business case for the one-million-and-first feature to add, but prove your brilliance by showing me the business case for the risks you will remove. Risk reduction rather than feature-add or cost-out business cases.

Let me know what you discover and what your results are.

OSS data that’s even more useless than useless

About 6-8 years ago, I was becoming achingly aware that I’d passed well beyond an information overload (I-O) threshold. More information was reaching my brain each day than I was able to assimilate, process and archive. What to do?

Well, I decided to stop reading newspapers and watching the news, in fact almost all television. I figured that those information sources were empty calories for the brain. At first it was just a trial, but I found that I didn’t miss it much at all and continued. Really important news seemed to find me at the metaphorical water-cooler anyway.

To be completely honest, I’m still operating beyond the I-O threshold, but at least it’s (arguably) now a more healthy information diet. I’m now far more useless at trivia game shows, which could be embarrassing if I ever sign up as a contestant on “Who Wants to be a Millionaire.” And missing out on the latest news sadly makes me far less capable of advising the Queen on how to react to Meghan Markle’s latest royal “atrocity.” The crosses we bear.

But I’m digressing markedly (and Markle-ey) from what this blog is all about – O.S.S.

Let me ask you a question – Is your OSS data like almost everybody else’s (ie also in I-O mode)?

Seth Godin recently quoted 3 rules of data:
“First, don’t collect data unless it has a non-zero chance of changing your actions.
Second, before you seek to collect data, consider the costs of processing that data.
Third, acknowledge that data collected isn’t always accurate, and consider the costs of acting on data that’s incorrect.”

If I remove the double-negative from rule #1 – Only collect data if it has even the slightest chance of changing your actions.

Most people take the perspective that we might as well collect everything because storage is just getting so cheap (and we should keep it, not because we ever use it, but just in case our AI tools eventually find some relevance locked away inside it).

In the meantime, pre-AI (rolls eyes), Seth’s other two rules provide further sanity to the situation. Storing data is cheap, except where it has to be timely and accurate enough to base decisive, reliable actions on.

So, let me go back to the revised quote in bold. How much of the data in your OSS database / data-lake / data-warehouse / etc has even the slightest chance of changing your actions? As a percentage??

I suspect a majority is never used. And as most of it ages, it becomes even more useless than useless. One wonders, why are we storing it then?

Becoming the Microsoft of the OSS industry

On Tuesday we pondered, “Would an OSS duopoly be a good thing?”

It cited two examples of operating systems amongst other famous duopolies:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)

Yesterday we provided an example of why consolidation is so much more challenging for OSS companies than say for Coke or Pepsi.

But maybe an operating system model could represent a path to overcome many of the challenges faced by the OSS industry. What if there were a Linux for OSS?

  • One where the drivers for any number of device types are already handled and we don’t have to worry about south-bound integrations anymore (mostly). When new devices come onto the market, they need to have agents designed to interact with the common, well-understood agents on the operating system (see the sketch after this list)
  • One where the user interface is generally defined and can be built upon by any number of other applications
  • One where data storage and handling is already pre-defined and additional utilities can be added to make data even easier to interact with
  • One where much of the underlying technical complexity is already abstracted and the higher-value functionality can be built on top
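
Here’s the sketch promised in the first bullet: a minimal illustration of a common southbound “driver” contract, so the platform rather than each application owns device integration. The interface and the example driver are hypothetical, not any existing project’s API.

```python
# Minimal sketch of a common southbound "driver" contract. The platform owns the
# integration detail; applications above it only see the DeviceDriver interface.
# The interface and example driver are hypothetical.
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    """Contract every device-type driver must honour."""

    @abstractmethod
    def discover(self) -> dict:
        """Return inventory attributes for the device in a common format."""

    @abstractmethod
    def collect_alarms(self) -> list:
        """Return currently active alarms in a common format."""

class SnmpRouterDriver(DeviceDriver):
    def __init__(self, address: str):
        self.address = address

    def discover(self) -> dict:
        # A real driver would walk SNMP tables here; stubbed for the sketch.
        return {"address": self.address, "type": "router"}

    def collect_alarms(self) -> list:
        return []  # stubbed

# Applications never see SNMP (or any other protocol) details directly.
drivers = [SnmpRouterDriver("10.0.0.1")]
print([d.discover() for d in drivers])
```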

It seems to me to be a great starting point for solving many of the items listed as awaiting exponential improvement in this OSS Call for Innovation manifesto.

Interestingly, I can’t foresee any of today’s biggest OSS players developing such an operating system without a significant mindset shift. They have the resources to become the Microsoft / Apple / Google of the OSS market, but appear to be quite closed-door in their thinking. Waiting for disruption from elsewhere.

Could ONAP become the platform / OS?

Let me relate this by example. TM Forum recently ran an event called DTA in Kuala Lumpur. It was an event for sharing ideas, conversations and letting the market know all about their products. All of the small to medium suppliers were happy to talk about their products, services and offerings. By contrast, I was ordered out of the rooms of one leading, but some might say struggling, vendor because I was only a walk-up. A walk-up representing a potential customer of them, but they didn’t even ask the question about how I might be of value to them (nor vice versa).

Would an OSS duopoly be a good thing?

The products/vendors page here on PAOSS has a couple of hundred entries currently. We’re working on an extended list that will almost double that number. More news on that shortly.

The level of fragmentation fascinates me, but if I’m completely honest, it probably disappoints me too. It’s great that it’s providing the platform for a long tail of innovation. It’s exciting that there are so many niche opportunities. But it disappoints me because there’s so much duplication. How many alarm / performance / inventory / etc management tools are there? Can you imagine how many developer hours have been duplicated on similar feature development between products? And because there are so many different patterns, the total number of integration variants across the industry puts a huge integration tax on us all.

Compare this to the strength of duopoly markets such as:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)
  • Boeing / Airbus (commercial aircraft)
  • Visa / Mastercard (credit cards / payments)
  • Coca Cola / Pepsi (beverages, etc)

These duopolies have allowed for consolidation of expertise, effort, revenues/profits, etc. Most also provide a platform upon which smaller organisations / suppliers can innovate without having to re-invent everything (eg applications built upon operating systems, parts for aircraft, etc).

Buuuut……

Then I think about the impediments to achieving drastic consolidation through mergers and acquisitions (M&A) in the OSS industry.

There are opportunities to find complementary product alignment because no supplier services the entire OSS estate (where I’m using TM Forum’s TAM as a guide to the breadth of the OSS estate). However, it would be much harder to approach duopoly in OSS for a number of reasons:

  • Almost every OSS implementation is unique. Even if some of the products start out in common, they usually become quickly customised in terms of integrations, configurations, processes, etc
  • Interfaces to networks and other systems can vary so much. Modern EMS / devices / systems are becoming a little more consistent with IP, SNMP, Web APIs, etc. However, our networks still tend to have a lot of legacy protocols to interface with
  • Consolidation of product lines becomes much harder, partly because of the integrations above, but partly because the functionality-sets and workflows differ so vastly between similar products (eg inventory management tools)
  • Similarly, architectures and build platforms (eg programming languages) are not easily compatible
  • Implementations are often regional for a variety of reasons – regulatory, local partnerships / relationships, language, corporate culture, etc
  • Customers can be very change-averse, even when they’re instigating the change

By contrast, we regularly hear of Coca Cola buying up new brands. It’s relatively easy for Coke to add a new product line/s without having much impact on existing lines.

We also hear about Google’s acquisitions, adding complementary products into its product line or simply for the purpose of acquiring new talent / expertise. There are also acquisitions for the purpose of removing competitors or buying into customer bases.

Harder in all cases in the OSS industry.

Tomorrow we’ll share a story about an M&A attempting to buy into a customer base.

Then on Thursday, a story awaits on a possibly disruptive strategy towards consolidation in OSS.

Do you have a nagging OSS problem you cannot solve?

On Friday, we published a post entitled, “Think for a moment…” which posed the question of whether we might be better served looking back at our most important existing features and streamlining them, rather than inventing new features that have little impact.

Over the weekend, a promotional email landed in my inbox from Nightingale Conant. It is completely unrelated to OSS, yet the steps outlined below seem to be a fairly good guide for identifying what to reinvent within our existing OSS.

Go out and talk to as many people [customers or potential] as you can, and ask them the following questions:
1. Do you have a nagging problem you cannot solve?
2. What is that problem?
3. Why is solving that problem important to you?
4. How would solving that problem change the quality of your life?
5. How much would you be willing to pay to solve that problem?

Note: Before you ask these questions, make sure you let the people know that you’re not going to sell them anything. This way you’ll get quality answers.
After you’ve talked with numerous people, you’ll begin to see a pattern. Look for the common problems that you believe you could solve. You’ll also know how much people are willing to pay to have their problem solved and why.

I’d be curious to hear back from you. Do those first 4 questions identify a pattern that relates to features you’ve never heard of before or features that your OSS already performs (albeit perhaps not as optimally as it could)?

Think for a moment…

“Many of the most important new companies, including Google, Facebook, Amazon, Netflix, Snapchat, Uber, Airbnb and more are winning not by giving good-enough solutions…, but rather by delivering a superior experience….”
Ben Thompson
, stratechery.com

Think for a moment about the millions of developer hours that have gone into creating today’s OSS tools. Think also for a moment about how many of those tools are really clunky to use, install, configure, administer. How many OSS tools have truck-loads of functionality baked in that is just distracting, features that you’re never going to need or use? Conversely, how many are intuitive enough for a high-school student, let’s say, to use for the first time and become effective within a day of self-driven learning?

Let’s say an OSS came along that had all of the most important features (the ones customers really pay for, not the flashy, nice-to-have features) and offered a vastly superior user experience and user interface. Let’s say it took the market by storm.

With software and cloud delivery, it becomes harder to sustain differentiation. Innovative features and services are readily copied. But have a think about how hard it would be for the incumbent OSS to pick apart the complexity of their code, developed across those millions of developer hours, and throw swathes of it away – overhauling in an attempt to match a truly superior OSS experience.

Can you see why I’m bemused that we’re not replacing developers with more UX experts? We can surely create more differentiation through vastly improved experience than we can in creating new functionality (almost all of the most important functionality has already been developed and we’re now investing developer time on the periphery).

Nobody dabbles at dentistry

“There are some jobs that are only done by accredited professionals.
And then there are most jobs, jobs that some people do for fun, now and then, perhaps in front of the bathroom mirror.
It’s difficult to find your footing when you’re a logo designer, a comedian or a project manager. Because these are gigs that many people think they can do, at least a little bit.”
Seth Godin here.

I’d love to plagiarise the entire post from Seth above, but instead suggest you have a look at the other pearls of wisdom he shares in the link above.

So where does OSS fit in Seth’s thought model? Well, you don’t need an accreditation like a dentist does. Most of the best I’ve met haven’t had any OSS certifications to speak of.

Does the layperson think they can do an OSS role? Most people have never heard of OSS, so I doubt they would believe they could do a role as readily as they could see themselves being logo designers. But the best I’ve met have often come from fields other than telco / IT / network-ops.

One of my earliest OSS projects was for a new carrier in a country that had just deregulated. They were the second fixed-line carrier in this country and tried to poach staff from the incumbent. Few people crossed over. To overcome this lack of experience, the carrier built an OSS team that consisted of a mathematician, an architect, an automotive engineer, a really canny project manager and an assortment of other non-telco professionals.

The executives of that company clearly felt they could develop a team of experts (or just had no choice but to try). The strategy didn’t work out very well for them. It didn’t work out very well for us either. We were constantly trying to bring their team up to speed on the fundamentals in order to use the tools we were developing / delivering (remembering that as one of my first OSS projects, I was still desperately trying to bring myself up to speed – still am for that matter).

As Seth also states, “If you’re doing one of these non-dentist jobs, the best approach is to be extraordinarily good at it. So much better than an amateur that there’s really no room for discussion.” That needs hunger (a hungriness to learn without an accredited syllabus).

It also needs humility though. Even the most extraordinarily good OSS proponents barely scratch the surface of all there is to learn about OSS. It’s the breadth of sub-expertise that probably explains why there is no single accreditation that covers all of OSS.

To link or not to link your OSS. That is the question

The first OSS project I worked on had a full-suite, single vendor solution. All products within the suite were integrated into a single database and that allowed their product developers to introduce a lot of cross-linking. That has its strengths and weaknesses.

The second OSS suite I worked with came from one of the world’s largest network vendors and integrators. Their suite primarily consisted of third-party products that they integrated together for the customer. It was (arguably) a best-of-breed all implemented as a single solution, but since the products were disparate, there was very little cross-linking. This approach also has strengths and weaknesses.

I’d become so used to the massive data migration and cross-referencing exercise required by the first OSS that I was stunned by the lack of time allocated by the second vendor for their data migration activities. The first took months and a significant level of expertise. The second took days and only required fairly simple data sets. That’s a plus for the second OSS.

However, the second OSS was severely lacking in cross-domain data, which impacted the richness of insight that could be easily unlocked.

Let me give an example to give better context.

We know that a trouble ticketing system is responsible for managing the tracking, reporting and resolution of problems in a network operator’s network. This could be as simple as a repository for storing a problem identifier and a list of notes performed to resolve the problem. There’s almost no cross-linking required.

A more referential ticketing system might have links to:

  • Alarm management – to show the events linked to the problem
  • Inventory management – to show the impacted resources (or possibly impacted)
  • Service management – to show the services impacted
  • Customer management – to show the customers impacted and possibly the related customer interactions
  • Spares management – to show the life-cycle of physical resources impacted
  • Workforce management – to manage the people / teams performing restorative actions
  • etc

The referential ticketing system gives far richer information, obviously, but you have to trade that off against the amount of integration and data maintenance that needs to go into supporting it. The question to ask is what level of linking is justifiable from a cost-benefit perspective.
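
To make the contrast concrete, here’s a minimal sketch of the two ticket shapes described above. The field names are illustrative only, not any product’s schema.

```python
# Minimal sketch: a bare ticket (identifier plus notes) versus a referential
# ticket that carries links into other OSS/BSS domains. Field names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SimpleTicket:
    ticket_id: str
    notes: List[str] = field(default_factory=list)

@dataclass
class ReferentialTicket(SimpleTicket):
    alarm_ids: List[str] = field(default_factory=list)       # alarm management
    resource_ids: List[str] = field(default_factory=list)    # inventory management
    service_ids: List[str] = field(default_factory=list)     # service management
    customer_ids: List[str] = field(default_factory=list)    # customer management
    spare_ids: List[str] = field(default_factory=list)       # spares management
    work_order_ids: List[str] = field(default_factory=list)  # workforce management

ticket = ReferentialTicket(
    ticket_id="TT-1001",
    notes=["Fibre cut suspected"],
    alarm_ids=["ALM-553"],
    service_ids=["SVC-EPL-204"],
    customer_ids=["CUST-8821"],
)
print(ticket)
```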

Am I being an OSShole?

“Am I being an asshole?” In other words, am I pointing out problems or am I finding solutions?
Ramit Sethi.

One of the things I’ve noticed working on large and small OSS teams is that people who excel at finding solutions thrive in both. The ones who thrive on only identifying problems seemingly only function in large organisations.

In a small team, everyone needs to contribute to the many problems that need solving. There’s a clear line of sight to what’s being delivered. I’ve tended to find that the pure problem-finders feel uncomfortable being the only ones not clearly delivering.

But there’s absolutely a role for identifying problems or for asking the question that completely re-frames the problem. One of the best I’ve seen is a CEO of a publicly listed company. He had virtually no knowledge of OSS, but could listen to half an hour of technical, round-in-circles discussions, then interject with a summary or question that re-framed and simplified the solution. The team then had a clear direction to implement. The CEO didn’t find the solution directly, but he was an instrumental component in the team reaching a solution.

The question to pose though is whether the question asker is being an OSShole or an agent provocateur*.

* BTW, I use this term within the context of being a change agent, someone who contributes to finding a solution, as opposed to the literal sense, which is to incite others into performing illegal acts.