The colour palette analogy of OSS

Let’s say you act for a service provider and the diagram below represents the number of variations you could offer to customers – the number that are technically supported by your solution.
13,824,000 Colours
That’s 13,824,000 colours.

By comparison, the following diagram contains just 20 colours:
20 Colours

If I asked you what colours are in the upper diagram, would you say red, orange, yellow, green, blue, etc? Is it roughly the same response as to the lower diagram?

If you’re the customer, and know you want an “orange*” product, will you be able to easily choose between the many thousands of different orange hues available in the upper diagram? Would you feel short-changed if you were only offered the two orange hues in the lower diagram instead of thousands? Or might you even be relieved to have a much easier decision to make?

The analogy here to OSS is that just because our solutions can support millions of variants, doesn’t mean we should. If our OSS try to offer millions of variants, it means we have to design, then build, then test, then post-sale support millions of variants.

However, in reality, we can’t provide 100% coverage across so many variants – we aren’t able to sufficiently design, then build, then test, then post-sale support every one of the millions of variants. We end up overlooking some or accept risk on some or estimate a test spread that bypasses others. We’ve effectively opened the door to fall-outs.

And it’s fall-outs that tend to create larger customer dissatisfaction metrics than limited colour palettes.

Just curious – if you’ve delivered OSS into large service providers, have you ever seen evidence of palette analysis (ie variant reduction analysis) across domains (ie products, marketing, networks, digital, IT, field-work, etc)?

Alternatively, have you ever pushed back on decisions made upstream to say you’ll only support a smaller sub-set of options? This doesn’t seem to happen very often.

* When I’m talking about colours, I’m using the term figuratively, not necessarily the hues on a particular handset being sold through a service provider.

The PAOSS Call for Innovation has been released

I’ve been promising to release an OSS Call for Innovation, a manifesto of what OSS can become – a manifesto that also describes areas where exponential improvements are just waiting to happen.

It can be found here:
http://passionateaboutoss.com/oss-call-for-innovation/
And you’ll also notice that it’s a new top-level menu item here on PAOSS.

Each time I’ve released one of these vision-statement style reports in the past, I’ve been pleasantly surprised to find that some of the visions are already being worked on by someone in the industry.

Are there any visions that I’ve overlooked? I’d love to see your comments on the page and spread the word on all the amazing innovations that you’re working on and/or dreaming about.

Who can make your OSS dance?

OSS tend to be powerful software suites that can do millions of things. Experts at the vendors / integrators know how to pull the puppet’s strings and make it dance. As a reader of PAOSS, chances are that you are one of those experts. I’ve sat through countless vendor demonstrations, but I’m sure you’ll still be able to wow me with a demo of what your OSS can do.

Unfortunately, most OSS users don’t have that level of expertise, experience or training to pull all of your OSS’s strings. Most only use the tiniest sub-set of functionality.

If we look at the millions of features of your OSS in a decision tree format, how easy will it be for the regular user to find a single leaf on your million-leaf tree? To increase complexity further, OSS workflows actually require the user group to hop from one leaf, to another, to another. Perhaps it’s not even as conceptually simple as a tree structure, but a complex inter-meshing of leaves. That’s a lot of puppet-strings to know and control.

A question for you – You can make your OSS dance, but can your customers / users?

What can you do to assist users to navigate the decision tree? A few thoughts below:

  1. Prune the decision tree – chances are that many of the branches of your OSS are never / rarely used, so why are they there?
  2. Natural language search – a UI that allows users to just ask questions. The tool interprets those questions and navigates the tree by itself (ie it abstracts the decision tree from the user, so they never need to learn how to navigate it)
  3. Use decision support – machine assistance to guide users in navigating efficiently through the decision tree
  4. Restrict access to essential branches – design the GUI to ensure a given persona can only see the clusters of options they will use (eg via the use of role-based functionality filtering)

I’d love to hear your additional thoughts on how to make it easier for users to make your (their) OSS dance.
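The role-based functionality filtering idea in point 4 above is simple to sketch: map each persona to the branches of the feature tree they actually use, and hide the rest. A minimal illustration (the personas, branches and feature names below are all hypothetical):

```python
# Minimal sketch of role-based functionality filtering:
# each persona only sees the feature branches mapped to its role.
FEATURE_TREE = {
    "assurance": ["view-alarms", "create-ticket", "run-diagnostics"],
    "fulfilment": ["create-order", "modify-order", "design-service"],
    "reporting": ["run-report", "export-data"],
}

# Hypothetical persona-to-branch mapping
PERSONA_BRANCHES = {
    "noc-operator": ["assurance", "reporting"],
    "order-handler": ["fulfilment", "reporting"],
}

def visible_features(persona: str) -> list[str]:
    """Return only the features this persona's branches expose."""
    branches = PERSONA_BRANCHES.get(persona, [])
    return [f for b in branches for f in FEATURE_TREE[b]]

print(visible_features("noc-operator"))
```

A NOC operator never even sees the fulfilment branch, so the decision tree they must navigate is pruned before they log in.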

Deciding whether to PoC or to doc

As recently discussed with two friends and colleagues, Raman and Darko, Proofs of Concept (PoC) or Minimum Viable Product (MVP) implementations can be a double-edged sword.

By building something without fully knowing the end-game, you are potentially building tech-debt that may be very difficult to work around without massive (or complete) overhaul of what you’ve built.

The alternative is to go through a process of discovery to build a detailed document showing what you think the end product might look like.

I’m all for leaving important documentation behind for those who come after us, for those who maintain the solutions we create or for those who build upon our solutions. But you’ll notice the past-tense in the sentence above.

There are pros and cons with each approach, but I tend to believe in documentation in the “as-built” sense. However, there is a definite need for some up-front diagrams/docs too (eg inspiring vision statements, use cases, architecture diagrams, GUI/UX designs, etc).

The two biggest reasons I find for conducting PoCs are:

  • Your PoC delivers something tangible, something that stakeholders far and wide can interact with to test assumptions, usefulness, usability, boundary cases, etc. The creation of a doc can devolve into an almost endless set of “what-if” scenarios and opinions, especially when there are large groups of (sometimes militant) stakeholders
  • You’ve already built something – your PoC establishes the momentum that is oh-so-vital on OSS projects. Even if you incur tech-debt, or completely overhaul what you’ve worked on, you’re still further into the delivery cycle than if you spend months documenting. Often OSS change management can be a bigger obstacle than the technical challenge and momentum is one of change management’s strongest tools

I’m all for deep, reflective thinking but that can happen during the PoC process too. To paraphrase John Kennedy, “Don’t think, don’t hope, (don’t document), DO!” 🙂

This is the best OSS book I’ve ever read

This post is about the most inspiring OSS book I’ve ever read, and yet it doesn’t contain a single word that is directly about OSS (so clearly I’m not spruiking my own OSS-centric book here 😉 ).
It’s a book that outlines the resolutions to so many of the challenges being faced by traditional communications service providers (CSPs) as well as the challenges faced by their OSS.

It resonates strongly with me because it reflects so many of my beliefs, but articulates them brilliantly through experiences from some of the most iconic organisations of our times – through their successes and failures.

And the title?

Insanely Simple: The Obsession That Drives Apple’s Success.
Book by Ken Segall.
Insanely Simple

OSS is downstream of so many complexity choices that this book needs to be read far beyond the boundaries of OSS. Having said that, we’re incredibly good at adding so many of our own layers of complexity.

Upcoming blogs here on PAOSS will surely share some of its words of wisdom.

One unasked last question for OSS business cases

OSS business case evaluators routinely ask many questions that relate to key metrics like return on investment, capital to be outlaid, expected returns, return on investment, and more of the same circular financial questions. 🙂

They do also ask a few technical questions to assess risk – of doing the project or of not doing it. Timeframes and resources come into play, but again tend to land back on the same financial metric(s). Occasionally they’ll ask how the project will impact the precious NPS (Net Promoter Score), which we all know is a simple estimate to calculate (ie pluck out of thin air).

As you can tell, I’m being a little tongue-in-cheek here so far.

One incredibly important question that I’ve never heard asked, but is usually relatively easy to determine is, “Will this change make future upgrades harder?”

The answer to this question will determine whether the project will have a snowballing effect on the TCO (total cost of ownership – yes, another financial metric that actually isn’t ROI) of the OSS. Any customisation to off-the-shelf tools will invariably add to the complexity of performing future upgrades. If customisations feed data to additional customisations, then there is a layer multiplier to add to the snowball effect.

Throw in enough multi-layered (meshed?) customisations and otherwise routine upgrades start to become massive undertakings. If upgrades are taking months of planning, then your OSS clearly no longer facilitates the level of flexibility that is essential for modern service providers.

The burden of tech-debt insidiously finds its way into OSS stacks, so when evaluating change, don’t forget that one additional question, “Will this change make future upgrades harder?”

Use cases for architectural smoke-tests

“I often leverage use-case design and touch-point mapping through the stack to ensure that all of the use-cases can be turned into user-journeys, process journeys and data journeys. This process can pick up the high-level flows, but more importantly, the high-level gaps in your theoretical stack.”

Yesterday’s blog discussed the use of use cases to test a new OSS architecture. TM Forum’s eTOM is the go-to model for process mapping for OSS / BSS. Their process maps define multi-level standards (in terms of granularity of process mapping) to promote a level of process repeatability across the industry. Their clickable model allows you to drill down through the layers of interest to you (note that this is available for members only though).

For quickly smoke-testing an OSS stack though, I tend to use a simpler list of use cases for 80/20 coverage:

  • Service qualification (SQ)
  • Adding new customers
  • New customer orders (order handling)
  • Changes to orders (adds / moves / changes / deletes / suspends / resumes)
  • Logging an incident
  • Running a report
  • Creating a new product (for sale to customers)
  • Tracking network health (which may include tracking of faults, performance, traffic engineering, QoS analysis, etc)
  • Performing network intelligence (viewing inventory, capacity, tracing paths, sites, etc)
  • Performing service intelligence (viewing service health, utilised resources, SLA threshold analysis, etc)
  • Extracting configurations (eg network, device, product, customer or service configs)
  • Tracking customer interactions (and all internal / external events that may impact customer experience such as site visits, bills, etc)
  • Running reports (of all sorts)
  • Data imports
  • Data exports
  • Performing an enquiry (by a customer, for the purpose of sales, service health, parameters, etc)
  • Bill creation

There are many more that may be required depending on what your OSS stack needs to deliver, but hopefully this is a starting point to help your own smoke tests.
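One lightweight way to run this kind of smoke test is to record, for each use case, which layers of the stack it should touch, then flag any use case that lacks an end-to-end path. A hypothetical sketch (the layer and use-case names are invented for illustration):

```python
# Hypothetical smoke test: map each use case to the stack layers it
# touches, then flag any required layers it doesn't yet reach.
STACK_LAYERS = ["portal", "bss", "oss", "ems", "network"]

USE_CASE_TOUCHPOINTS = {
    "service-qualification": ["portal", "bss", "oss"],
    "order-handling": ["portal", "bss", "oss", "ems", "network"],
    "logging-an-incident": ["portal", "oss"],
    "bill-creation": ["bss"],
}

def gaps(use_case: str, required: list[str]) -> list[str]:
    """Return the required layers this use case doesn't touch."""
    touched = set(USE_CASE_TOUCHPOINTS.get(use_case, []))
    return [layer for layer in required if layer not in touched]

# Order handling should traverse every layer of the stack
print(gaps("order-handling", STACK_LAYERS))          # []
print(gaps("service-qualification", STACK_LAYERS))   # ['ems', 'network']
```

Any non-empty gap list is either a deliberate scope decision or a hole in the theoretical stack worth chasing down before contract award.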

Use-case driven OSS architecture

When it comes to designing a multi-vendor (sometimes also referred to as best-of-breed) OSS architecture stack, there is never a truer saying than, “the devil is in the detail.”

Oftentimes, it’s just not feasible to design every interface / integration / data-flow traversing a theoretical OSS stack (eg pre-contract award, whilst building a business case, etc). That level of detail is developed during detailed design or perhaps tech-spikes in the Agile world.

In this interim state, I often leverage use-case design and touch-point mapping through the stack to ensure that all of the use-cases can be turned into user-journeys, process journeys and data journeys. This process can pick up the high-level flows, but more importantly, the high-level gaps in your theoretical stack.

The alternative to canned OSS reports

Reports are an important interaction type into any OSS, obviously. What’s less well observed is the time (ie cost) it can take to create and curate canned reports. [BTW in my crude terminology, Canned Reports are ones where the report format and associated query is created / coded and designed to be run more than once in the future]

I’ve seen situations where an organisation has requested many, many custom reports, which have been costly to set up, but then once set up, have not been used again after user acceptance testing. I know of one company that had over 500 canned reports developed and only ~100 had been used more than a few times in the 12 months prior to when I checked.

My preferred option is to create an open data model that can be queried via a reporting engine, one that allows operators to intuitively create their own ad-hoc reports and then save them for future re-use (and share them if desired).
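As a toy illustration of that preferred option, using SQLite as a stand-in for the open data model: operators run ad-hoc SQL against the model and can save any query for future re-use (the table, columns and report names are invented for this sketch):

```python
import sqlite3

# Stand-in for an open, queryable OSS data model
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE services (id INTEGER, status TEXT)")
db.executemany("INSERT INTO services VALUES (?, ?)",
               [(1, "active"), (2, "active"), (3, "faulted")])

# Saved ad-hoc reports: name -> SQL, created by operators, shareable
saved_reports: dict[str, str] = {}

def save_report(name: str, sql: str) -> None:
    saved_reports[name] = sql

def run_report(name: str) -> list[tuple]:
    return db.execute(saved_reports[name]).fetchall()

save_report("faulted-services",
            "SELECT id FROM services WHERE status = 'faulted'")
print(run_report("faulted-services"))  # [(3,)]
```

The point isn’t the technology; it’s that the report catalogue grows from actual operator demand rather than a costly up-front guess at 500 canned reports.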

The top-down, bottom-up design process

When planning out a full-stack business / network / services management solution, I tend to follow the top-down, bottom-up design process.

Let’s take the TMN pyramid as a starting point:
TMN Pyramid
Image courtesy of www.researchgate.net

Bottom-up: When designing the assurance stream (eg alarms, performance, etc), I start at the bottom (Network Elements), understanding what devices exist in the network and what events / notifications / monitors they will issue. From there, I seek to understand what tool/s will manage them (Element Management / Network Management), then keep climbing the stack to understand how to present key information that impacts services and the business.

Top-down: When designing the fulfilment stream (eg designs, provisioning, moves/adds/changes, configuration, etc), I generally start at the top* (Business Management), understand what services are being offered (Service Management), and figure out how those workflows propagate down into the network and associated tools (such as ticketing, workforce management, service order provisioning, etc).

This helps to build a conceptual architecture (ie a layered architecture with a set of building blocks, where the functional building blocks start out blank). From the conceptual architecture, we can then identify the tools / people / processes that will deliver on the functions within each blank box.

This approach ensures we have the big picture in mind before getting bogged down into the minutiae of integrations, data flows, configurations of tools, etc.

To get momentum quickly, I tend to start with the bottom-up side as data flows (eg SNMP traps) are more standardised and the tools tend to need less configuration to get some (but not all) raw alarms / traps into management tools, perhaps just sand-pit versions. For the next step, the top-down part, I tend to create just one simple service scenario, and perhaps even design the front-end to use The Mechanical Turk model described yesterday, and follow the flow manually down through the stack and into element management or network layers. Then grow both streams from there!

* Note that I’m assuming services are already flowing through the network under management and/or the team has already figured out the services they’re offering. In a completely green-fields situation, the capabilities of the network might determine the product set that can be offered to customers (bottom-up), but generally it will be top-down.

The OSS Mechanical Turk

“The Mechanical Turk… was a fake chess-playing machine constructed in the late 18th century. From 1770 until its destruction by fire in 1854 it was exhibited by various owners as an automaton, though it was eventually revealed to be an elaborate hoax.
The Turk was in fact a mechanical illusion that allowed a human chess master hiding inside to operate the machine. With a skilled operator, the Turk won most of the games played during its demonstrations around Europe and the Americas for nearly 84 years, playing and defeating many challengers including statesmen such as Napoleon Bonaparte and Benjamin Franklin.”
Wikipedia.

This ingenious contraption can be mirrored in certain situations within the OSS industry.

I once heard of an OSS fulfilment solution that had consumed a couple of years of effort and millions of dollars before management decided to try an alternate path because there was still no end in sight. There was so much sunk cost that it was a difficult decision.

The problem statement was delivered to a new team brought in from outside the organisation.

They had it working within a single weekend!!

How?

They had focused on what the end customers needed and developed an efficient self-service portal (a front end) that created tickets. The tickets were then manually entered into the back-end systems. Any alerts from the back-end systems were fed back into the portal.

It did the job because transaction volumes were low enough to be processed manually. The first approach failed because integrations, workflows and exception-handling were enormously complex and they were laser-focused on perfect automation.

The Mechanical Turk approach to this OSS conundrum proved to be far more successful. It doesn’t work in all situations but it could be used more often than it is.
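The pattern is simple enough to sketch: the portal only needs to queue tickets for humans to key into the back-end systems, with status updates flowing back to the customer. A minimal illustration (all class and method names here are hypothetical):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Ticket:
    id: int
    request: str
    status: str = "queued"

@dataclass
class TurkPortal:
    """Self-service front end; the 'automation' behind it is people."""
    _queue: deque = field(default_factory=deque)
    _tickets: dict = field(default_factory=dict)
    _next_id: int = 1

    def submit(self, request: str) -> int:
        t = Ticket(self._next_id, request)
        self._tickets[t.id] = t
        self._queue.append(t)
        self._next_id += 1
        return t.id

    def next_for_operator(self) -> Ticket:
        # A human pulls the next ticket and keys it into back-end systems
        return self._queue.popleft()

    def update_status(self, ticket_id: int, status: str) -> None:
        # Alerts from the back-end systems fed back into the portal
        self._tickets[ticket_id].status = status

    def status(self, ticket_id: int) -> str:
        return self._tickets[ticket_id].status

portal = TurkPortal()
tid = portal.submit("new broadband service at 12 Example St")
work = portal.next_for_operator()   # a human enters this manually
portal.update_status(work.id, "completed")
print(portal.status(tid))  # completed
```

Because the complexity of integrations and exception handling sits with the humans, the portal can ship in a weekend; the manual middle can then be automated piecemeal, if and when volumes justify it.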

Warren Buffett’s “avoid at all costs” OSS backlog

During the last week, this blog-roll has talked about the benefits, but also the challenges facing implementation techniques like Agile in the world of OSS. There’s no doubt that they’re good at breaking down challenges into smaller pieces for implementation. Unfortunately there’s also the risk of doing for the sake of doing – stuffing more stuff into the backlog – without necessarily thinking of the long-term implications.

Warren Buffett’s “two-list” prioritisation strategy could be an interesting adjunct to Agile, microservices and the like for OSS. Rather than putting all 25 of the listed goals into backlog and prioritising the top 5, the Buffett technique sees only the top 5 entering the backlog and the remaining 20 put into an avoid-at-all-cost bucket… at least until the top 5 are complete.

I know I’m as guilty as Mike Flint in not just tackling the top 5, but keeping the next 20 bubbling away. Is your OSS implementation queue taking the same approach as Buffett or me?
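The mechanics of the two-list technique are trivial to express; the discipline is in honouring the second list. A sketch:

```python
def two_list_split(goals: list[str], circled: set[str]) -> tuple[list[str], list[str]]:
    """Buffett's technique: everything not circled goes on the
    avoid-at-all-cost list until the circled items are complete."""
    backlog = [g for g in goals if g in circled]
    avoid_at_all_cost = [g for g in goals if g not in circled]
    return backlog, avoid_at_all_cost

goals = [f"goal-{n}" for n in range(1, 26)]  # the full list of 25
circled = {"goal-1", "goal-2", "goal-3", "goal-4", "goal-5"}
backlog, avoid = two_list_split(goals, circled)
print(len(backlog), len(avoid))  # 5 20
```

Applied to an OSS backlog, only the `backlog` list is ever groomed; the `avoid` list gets no attention until the top 5 are done.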

Should your OSS have an exit strategy in mind?

What does the integration map of your OSS suite look like? Does it have lots of meatballs with a spaghetti of interconnections? Is it so entangled that even creating an integration map would be a nightmare?

Similarly, how many customisations have you made to your out-of-the-box tools?

In recent posts we’ve discussed the frenzy of doing without necessarily considering the life-time costs of all those integrations and customisations. There’s no doubt that most of those tweaks will add capability to the solution, but the long-term costs aren’t always factored in.

We also talk about ruthless subtraction projects. There are many reasons why it’s easier to talk about reduction projects than actually achieve them. Mostly it’s because the integrations and customisations have entwined the solutions so tightly that it’s nearly impossible to wind them back.

But what if, like many start-ups, you had an exit strategy in mind when introducing a new OSS tool into your suite? There is an inevitability of obsolescence in OSS, either through technology change, moving business requirements, breakdowns in supplier / partnership relationships, etc. However, most tools stay around for longer than their useful shelf life because of their stickiness. So why not keep rigid control over the level of stickiness via your exit strategy?

My interpretation of an exit strategy is to ensure by-play with other systems happens at data level rather than integrations and customisations to the OSS tools. It also includes ruthless minimisation of snowball dependencies* within that data. Just because integrations can be done, doesn’t mean they should be done.

* Snowball dependencies are when one builds on another, builds on another, which is a common OSS occurrence.
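Snowball dependencies can even be measured: if you record which customisation builds on which, the longest chain gives you a rough “snowball depth” for each item. A hypothetical sketch (the customisation names and graph are invented):

```python
# Hypothetical dependency graph: customisation -> what it builds on
DEPENDS_ON = {
    "custom-report-x": ["custom-field-y"],
    "custom-field-y": ["custom-table-z"],
    "custom-table-z": [],
    "standalone-tweak": [],
}

def snowball_depth(item: str) -> int:
    """Longest chain of customisations beneath this one."""
    deps = DEPENDS_ON.get(item, [])
    if not deps:
        return 0
    return 1 + max(snowball_depth(d) for d in deps)

# Anything with depth > 1 is a snowball worth scrutinising
print(snowball_depth("custom-report-x"))   # 2
print(snowball_depth("standalone-tweak"))  # 0
```

Keeping this depth ruthlessly low is one concrete expression of the exit strategy: shallow dependencies are the ones you can actually wind back.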

People pay for two things. What about OSS?

“People pay for two things:
Results: You do something they couldn’t do for themselves.
Convenience: You do something they don’t want to do for themselves, or you make something difficult easier to do.”
Ramit Sethi.

I really like the way Ramit has broken an infinite number of variants down to just two key categories. Off the top of my head, these categories of payment (ie perceived value) seem to hold true for most industries, but we can unpack how they align with OSS.

In traditional OSS, most of the functionality / capability we provide falls into the convenience category.

In assurance, we tend to act as aggregators and coordinators, the single pane of glass of network health and remedial actions such as trouble-ticketing. But there’s no reason why we couldn’t manage those alarms from our EMS and tickets through spreadsheets. It’s just more convenient to use an OSS.

In fulfilment, we also pull all the pieces together, potentially from a number of different systems ranging from BSS, inventory, EMS and more. Again, it’s just more convenient to use an OSS.

But looking into the future, the touchpoint explosion, the sheer scale of events hitting our assurance tools and the elastic nature of fulfilment on virtualised networks mean that humans can’t manage by themselves. OSS and high-speed decision support tools will be essential to deliver results.

One other slight twist on this story though. All of the convenience we try to create using our OSS can actually result in less convenience. If we develop 1,000 tools that, in isolation, do something they [our customers] don’t want to do for themselves, it adds value. But if those tools in aggregate slow down our OSS significantly, increase support costs (lifetime costs) and make them inflexible to essential changes, then it’s actually reducing the convenience. On this point I have a motto – Just because we can, doesn’t mean we should (build extra convenience tools into our OSS).

Six things in a disruptive ring

The diagram below shows the six phases in a customer life-cycle as defined by Forrester Research:
Forrester life-cycle

It also represents a map of the omni-channel experience for customers and approximates hand-off points. As far as the customer is concerned, the experience should be a seamless continual loop regardless of whether they engage via retail outlet, online, contact centre, chatbot, IVR, etc (or more likely, a somewhat random mix of all).

From a CSP‘s systems perspective, there are usually completely disparate functions that are designed in isolation, perhaps with only loose integration / hand-off between segments in the ring at best. Typically, the only thing that entwines the ring is people and process. For example, that might be a contact centre operator who hopefully has some level of visibility of each of the segments (but often doesn’t) and can tie the pieces together elegantly for the customer.

If we truly want a robust omni-channel experience for our customers, then all systems need to be designed with a seamless continual loop in mind. We can’t expect human-cost reduction automations like chat-bots or online self-service to thread the pieces together unless we can track every single user’s journey, via common linking keys, through each system.
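The “common linking keys” point deserves emphasis: stitching a journey together is only possible if every channel’s events carry the same customer key. A sketch of merging events from hypothetical channel systems into one time-ordered journey (all data here is invented):

```python
from operator import itemgetter

# Events from separate channel systems, each tagged with the same
# customer key -- the critical design decision
retail_events = [{"customer": "c42", "ts": 3, "segment": "buy"}]
web_events = [{"customer": "c42", "ts": 1, "segment": "discover"},
              {"customer": "c42", "ts": 2, "segment": "explore"}]
contact_centre = [{"customer": "c42", "ts": 4, "segment": "ask"}]

def journey(customer: str, *sources: list) -> list[str]:
    """Merge per-channel events into one time-ordered journey."""
    events = [e for src in sources for e in src if e["customer"] == customer]
    return [e["segment"] for e in sorted(events, key=itemgetter("ts"))]

print(journey("c42", retail_events, web_events, contact_centre))
# ['discover', 'explore', 'buy', 'ask']
```

Without that shared key, the contact-centre operator (or chatbot) is back to guessing which fragments belong to which customer.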

Our OSS/BSS are better positioned than any others to provide this seamless interlock. We’re typically involved in the Buy and Use segments. We’re often called upon for the Ask segment and sometimes for the Engage. If we can also loop in data from the Discover and Explore segments (usually handled by digital or retail/sales), then we have access to the pieces. Then it just remains for us to pull the jigsaw pieces together.

I’m actually really excited by what this ring-thinking could translate to for CSPs – disruptive customer experience models. It gives the opportunity to re-imagine our systems from the customer experience out, as alluded to in the recent OSS Singapore analogy.

Getting ahead of feedback

“Amazon is making its Greengrass functional programming cloud-to-premises bridge available to all customers…
This is an important signal to the market in the area of IoT, and also a potentially critical step in deciding whether edge (fog) computing or centralized cloud will drive cloud infrastructure evolution…
The most compelling application for [Amazon] Lambda is event processing, including IoT. Most event processing is associated with what are called “control loop” applications, meaning that an event triggers a process control reaction. These applications typically demand a very low latency for the obvious reason that if, for example, you get a signal to kick a defective product off an assembly line, you have a short window to do that before the product moves out of range. Short control loops are difficult to achieve over hosted public cloud services because the cloud provider’s data center isn’t local to the process being controlled. [Amazon] Greengrass is a way of moving functions out of the cloud data center and into a server that’s proximate to the process.”
Tom Nolle.

It seems to me that closed-loop thinking is going to be one of the biggest factors to impact OSS in coming years.

Multi-purpose machine learning (requiring feedback loops like the one described in the link above) is needed by OSS on many levels. IoT requires automated process responses as described by Tom in his quote above. Virtualised networks will evolve to leverage distributed, automated responsiveness to events to ensure optimal performance in the network.

But I’m surprised at how (relatively) little thought seems to be allocated to feedback loop thinking currently within our OSS projects. We’re good at designing forward paths, but not quite so good at measuring the outcomes of our many variants and using those to feed back insight into earlier steps in the workflow.

We need to get better at the measure and control steps in readiness for when technologies like machine-learning, IoT and network virtualisation catch up. The next step after that will be distributing the decision making process out to where it can make a real-time difference.
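At its simplest, a closed loop is just measure, decide, act, then measure again. A toy sketch of a proportional control step nudging a metric back toward a setpoint (the numbers and gain are invented purely for illustration):

```python
def control_step(measured: float, setpoint: float, gain: float = 0.5) -> float:
    """One iteration of a simple proportional control loop:
    return the corrective action to apply to the system."""
    error = setpoint - measured
    return gain * error

# Toy closed loop: each action feeds back into the next measurement
value, setpoint = 10.0, 20.0
for _ in range(10):
    value += control_step(value, setpoint)
print(round(value, 2))  # converges toward the setpoint of 20.0
```

The hard part in OSS isn’t this arithmetic; it’s instrumenting the “measure” step so that outcomes of our many workflow variants are actually captured and fed back.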

The first date principle of product development

“…don’t ask your customers what they like or don’t like about your product. Or what they’d change if they could. That’s all about you. If you want really insightful answers, ask them about themselves instead. You can find out a ton about you by asking them about them.
Jason Fried
.

We’ve previously discussed how the first date analogy applies to selling an OSS. Until reading Jason’s quote above, I hadn’t equated it to product development too!

If you want feedback from customers (or potential customers), you’re generally going to get more valuable information by asking questions about them and understanding their situation than by asking what they think about you. Interestingly, this ties in well with the initial empathising phase of Design Thinking.

OSS that are painful and full of denial

It’s quite common, especially in enterprise technology, for something to propose a new way to solve an existing problem. It can’t be used to solve the problem in the old way, so ‘it doesn’t work’, and proposes a new way, and so ‘no-one will want that’. This is how generational shifts work – first you try to force the new tool to fit the old workflow, and then the new tool creates a new workflow. Both parts are painful and full of denial, but the new model is ultimately much better than the old.”
Ben Evans
from the same post as yesterday’s blog on PAOSS.

The other part that Ben glosses over, but which is highly relevant to OSS, is staying with the old tools and old workflows because the change chasm is so wide – this third part is even more painful and full of denial.

The monolith OSS of our recent past (and current state in many cases) are no longer a feasible option. These monoliths tended to be built around highly structured, hyper-connected relational databases, which was/is both a strength and a weakness.

There are nuances of course, but a lot of inventory relationships are unchanging – a conductor / fibre goes in a sheath, in a cable, in a duct, in a trench. A port is on a card, which is in a device, that’s in a rack, that’s in a room, in a building.

In a relational database, these are all built upon joins. And in many OSS, there are thousands of hours of development that have built layers and layers of joins onto these base capabilities to give more advanced capabilities. But the computational complexity of all these joins leads to response times that can run into many minutes, not viable for operators that require near-real-time responsiveness – think network fault rectification where SLAs are tight.

For inventory in particular, the generational shift mentioned by Ben appears to be the graph database.
“[The] difference between graph databases and relational databases is that the connections between nodes directly link in such a way that relating data becomes a simple matter of following connections. You avoid the join index lookup performance problem by specifying connections at insert time, so that the data graph can be walked rather than calculated at query time.
This property, only found in native graph databases, is known as index-free adjacency, and it allows queries to traverse millions of nodes per second, offering response times that are several orders of magnitude faster than with relational databases for connected queries (e.g., friend-of-friend/shortest path).”
Johan Svensson here.

A transition to a graph DB represents new ways with new tools, and obsolescence of a lot of past efforts. But the next generational shift that obsoletes graph DBs surely isn’t far away either.
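The index-free adjacency idea can be illustrated in a few lines: store each record’s neighbours directly on the record, so a traversal follows references instead of computing joins at query time. A toy sketch of walking the containment hierarchy mentioned above (port → card → device → rack):

```python
# Toy illustration of index-free adjacency: each record holds a direct
# reference to its parent, so 'joins' become pointer-following.
class Node:
    def __init__(self, name: str, parent: "Node | None" = None):
        self.name = name
        self.parent = parent  # set at insert time, not looked up at query time

rack = Node("rack-01")
device = Node("router-a", parent=rack)
card = Node("card-3", parent=device)
port = Node("port-12", parent=card)

def containment_path(node: Node) -> list[str]:
    """Walk from a port up to its rack by following stored links."""
    path = []
    while node is not None:
        path.append(node.name)
        node = node.parent
    return path

print(containment_path(port))  # ['port-12', 'card-3', 'router-a', 'rack-01']
```

A relational model would answer the same question with a chain of join lookups, each one an index probe; here the cost is constant per hop regardless of table size, which is the crux of the graph DB argument.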

The big bets on the monoliths no longer seem viable. It seems to me that the highly modular, interfaced, small-grid OSS is the way of the future.

The story of Mike Flint

Mike Flint was Warren Buffett’s personal airplane pilot for 10 years. (Flint has also flown four US Presidents, so I think we can safely say he is good at his job.) According to Flint, he was talking about his career priorities with Buffett when his boss asked the pilot to go through a 3-step exercise.

Here’s how it works…

STEP 1: Buffett started by asking Flint to write down his top 25 career goals. So, Flint took some time and wrote them down. (Note: you could also complete this exercise with goals for a shorter timeline. For example, write down the top 25 things you want to accomplish this week.)

STEP 2: Then, Buffett asked Flint to review his list and circle his top 5 goals. Again, Flint took some time, made his way through the list, and eventually decided on his 5 most important goals.

Note: If you’re following along at home, pause right now and do these first two steps before moving on to Step 3.

STEP 3: At this point, Flint had two lists. The 5 items he had circled were List A and the 20 items he had not circled were List B.

Flint confirmed that he would start working on his top 5 goals right away. And that’s when Buffett asked him about the second list, “And what about the ones you didn’t circle?”

Flint replied, “Well, the top 5 are my primary focus, but the other 20 come in a close second. They are still important so I’ll work on those intermittently as I see fit. They are not as urgent, but I still plan to give them a dedicated effort.”

To which Buffett replied, “No. You’ve got it wrong, Mike. Everything you didn’t circle just became your Avoid-At-All-Cost list. No matter what, these things get no attention from you until you’ve succeeded with your top 5.”
James Clear here.

For me, this story articulates one of the challenges with the Agile approach to OSS. There are generally many, many little items being inserted into a backlog, possibly even many, many Epics being created. Sure, the highest-priority activities will tend to rise up the list, but is it the prioritisation of the minutiae instead of the big wins?

Do we take the “avoid at all costs” approach to anything that isn’t in the top 5? There are always lots of little tweaks that can be done to our OSS that give us a sense of accomplishment when cleared from the backlog. They may even make life easier for some of the OSS operators. But are they contributing to the most important obstacles facing our organisations?

Can we learn from the Mike Flint story when setting OSS development priorities?

Customers buy the basic and learn to love the features

“Most customers buy the basic and learn to love the features, but the whole customer experience is based on trying to sell the features.”
Roger Gibson.

This statement appears oh-so-true in the OSS sales pitches that I’ve observed.

In many cases the customer really only needs the basic, but when every vendor is selling the features, customers also tend to get caught up in the features. “The basic” effectively represents Pareto’s 20% of functionality that is required by 80% (or more) of customers. However, since every vendor has the basic, they don’t feel differentiated enough to sell that 20%. They sell the remaining 80%, “the features,” that give the perception of uniqueness compared with all the other vendors.

When I help customers through the vendor selection process, I take a different perspective though. I work with the customer to understand what “the basic” of their business model is and where the OSS will channel the most impact. It’s not the hundreds of sexy features, the ones that will get used on rare occasions, but “the basic” that gets used hundreds (or thousands, or millions) of times every day. I then work with the customers to figure out a way of benchmarking which vendor solution delivers the greatest efficiency on their basic.

A couple of side benefits come from this strategy too:

  • The basic is usually the easiest part of an OSS to roll-out and get delivery momentum going (and momentum is such an important feature of delivering large, complex OSS projects)
  • Once delivery momentum is established and the customer’s operators are using the basic, there are still hundreds of features to play with and enhance over time. Hundreds more little wins that enhance the customer experience, building the love.