Networks lead. OSS are an afterthought. Time for a change?

In a recent post, we described how changing conditions in networks (eg topologies, technologies, etc) cause us to reconsider our OSS.

Networks always lead and OSS (or any form of network management including EMS/NMS) is always an afterthought. Often a distant afterthought.

But what if we spun this around? What if OSS initiated change in our networks / services? After all, OSS is the platform that operationalises the network. So instead of attempting to cope with a huge variety of network options (which introduces a massive number of variants and in turn, massive complexity, which we’re always struggling with in our OSS), what if we were to define the ways that networks are operationalised?

Let’s assume we want to lead. What has to happen first?

Network vendors tend to lead currently because they’re offering innovation in their networks, but more importantly on the associated services supported over the network. They’re prepared to take the innovation risk knowing that operators are looking to invest in solutions they can offer to their customers (as products / services) for a profit. The modus operandi is for operators to look to network vendors, not OSS vendors / integrators, to help to generate new revenues. It would take a significant perception shift for operators to break this nexus and seek out OSS vendors before network vendors. For a start, OSS vendors have to create a revenue generation story rather than the current tendency towards a cost-out business case.

ONAP provides an interesting new line of thinking though. As you know, it’s an open-source project that represents multiple large network operators banding together to build an innovative new approach to OSS (even if it is being driven by network change – the network virtualisation paradigm shift in particular). With a white-label, software-defined network as a target, we have a small opening. But to turn this into an opportunity, our OSS need to provide innovation in the services being pushed onto the SDN. That innovation has to be in the form of services/products that are readily monetisable by the operators.

Who’s up for this challenge?

As an aside:
If we did take the lead, would our OSS look vastly different to what’s available today? Would they unequivocally need to use the abstract model to cope with the multitude of scenarios?

A purple cow in our OSS paddock

A few years ago, I read a book that had a big impact on the way I thought about OSS and OSS product development. Funnily enough, the book had nothing to do with OSS or product development. It was a book about marketing – a subject that I wasn’t very familiar with at the time, but am now fascinated with.

And the book? Purple Cow by Seth Godin.

The premise behind the book is that when we go on a trip into the countryside, we notice the first brown or black cows, but after a while we don’t pay attention to them anymore. The novelty has worn off and we filter them out. But if there were a purple cow, that would be remarkable. It would definitely stand out from all the other cows and be talked about. Seth promoted the concept of building something into your products that makes them remarkable and worth talking about.

I recently heard an interview with Seth. Despite the book being launched in 2003, apparently he’s still asked on a regular basis whether idea X is a purple cow. His answer is always the same – “I don’t decide whether your idea is a purple cow. The market does.”

That one comment brought a whole new perspective to me. As hard as we might try to build something into our OSS products that creates a word-of-mouth buzz, ultimately we don’t decide if it’s a purple cow concept. The market does.

So let me ask you a question. You’ve probably seen plenty of different OSS products over the years (I know I have). How many of them are so remarkable that you want to talk about them with your OSS colleagues, or even have a single feature that’s remarkable enough to discuss?

There are a lot of quite brilliant OSS products out there, but I would still classify almost all of them as brown cows. Brilliant in their own right, but unremarkable for their relative sameness to lots of others.

The two stand-out purple cows for me in recent times have been CROSS’ built-in data quality ranking and Moogsoft’s Incident Room model. But it’s not for me to decide. The market will ultimately decide whether these features are actual purple cows.

I’d love to hear about your most memorable OSS purple cows.

You may also be wondering how to go about developing your own purple OSS cow. Well, I start by asking, “What are people complaining about?” or “What are our biggest issues?” That’s where the opportunities lie. Once those issues have been discovered, the challenge is to solve the problem/s in an entirely different, but better, way. I figure that if people care enough to complain about those issues, then they’re sure to talk about any product that solves the problem for them.

Designing OSS to cope with greater transience

“There are three broad models of networking in use today. The first is the adaptive model where devices exchange peer information to discover routes and destinations. This is how IP networks, including the Internet, work. The second is the static model where destinations and pathways (routes) are explicitly defined in a tabular way, and the final is the central model where destinations and routes are centrally controlled but dynamically set based on policies and conditions.”
Tom Nolle here.

OSS of decades past worked best with static networks. Services / circuits that were predominantly “nailed up” and (relatively) rarely changed after activation. This took the real-time aspect out of play and justified the significant manual effort required to establish a new service / circuit.

However, adaptive and centrally managed networks have come to dominate the landscape now. In fact, I’m currently working on an assignment where DWDM, a technology that was once largely static, is now being augmented to introduce an SDN controller and dynamic routing (at optical level no less!).

This paradigm shift changes the fundamentals of how OSS operate. Apart from the physical layer, network connectivity is now far more transient, so our tools must be able to cope with that. Not only that, but the changes are too frequent to justify the manual effort of the past.

To tie in with yesterday’s post, we are again faced with the option of abstract / generic modelling or specific modelling.

Put another way, we either have to come up with adaptive / algorithmic mechanisms to deal with that transience (the specific model), or mimic “nailed-up” concepts (the abstract model).

More on the implications of this tomorrow.

Which OSS tool model do you prefer – Abstract or Specific?

There’s something I’ve noticed about OSS products – they are either designed to be abstract / flexible or they are designed to cater for specific technologies / topologies.

When designed from the abstract perspective, the tools are built around generic core data models. For example, whether a virtual / logical device, a physical device, a customer premises device, a core network device, an access network device, a transmission device, a security device, etc, they would all be lumped into a single object called a device (but with flexibility to cater for the many differences).

When designed from the specific perspective, the tools are built with exact network and/or service models in mind. That could be 5G, DWDM, physical plant, etc, and when any new technology / topology comes along, a new plug-in has to be built to augment the tools.
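To make the contrast concrete, here’s a minimal sketch (all class and attribute names are my own assumptions for illustration, not taken from any real product): the abstract approach funnels everything into one generic object with flexible attributes, while the specific approach hard-codes a class or plug-in per technology.

```python
from dataclasses import dataclass, field
from typing import Any

# Abstract approach: one generic "device" object absorbs every variant via flexible attributes.
@dataclass
class Device:
    name: str
    role: str                                  # e.g. "CPE", "core", "access", "security"
    layer: str = "physical"                    # or "logical" / "virtual"
    attributes: dict[str, Any] = field(default_factory=dict)  # technology-specific detail

# Specific approach: a dedicated class (or plug-in) per technology / topology.
@dataclass
class DwdmMux:
    name: str
    wavelengths: int

@dataclass
class GNodeB:
    name: str
    cell_count: int

# A new technology lands in the abstract model as configuration, not a code change:
olt = Device(name="OLT-01", role="access", attributes={"technology": "GPON", "pon_ports": 16})
```

The trade-off, as discussed below, is that the generic object needs lateral thinking (and discipline) to keep it meaningful.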

The big advantage of the abstract approach is obvious (if they truly do have a flexible core design) – you don’t have to do any product upgrades to support a new technology / topology. You just configure the existing system to mimic the new technology / topology.

The first OSS product I worked with had a brilliant core design and was able to cope with any technology / topology that we were faced with. It often just required some lateral thinking to make the new stuff fit into the abstract data objects.

What I also noticed was that operators always wanted to customise the solution so that it became more specific. They were effectively trying to steer the product from an open / abstract / flexible tool-set to the other end of the scale. They generally paid significant amounts to achieve specificity – to mould the product to exactly what their needs were at that precise moment in time.

However, even as an OSS newbie, I found that to be really short-term thinking. A network (and the services it carries) is constantly evolving. Equipment goes end-of-life / end-of-support every few years. Useful life models are usually approx 5-7 years and capital refresh projects tend to be ongoing. Then, of course, vendors are constantly coming up with new products, features, topologies, practices, etc.

Given this constant change, I’d much rather have a set of tools that are abstract and flexible rather than specific but less malleable. More importantly, I’ll always try to ensure that any customisations retain the integrity of the abstract and flexible core rather than steering the product towards specificity.

How about you? What are your thoughts?

An OSS conundrum with many perspectives

“Even aside from the OSS impact, it illustrates the contrast between “bottom-up” planning of networks (new card X is cheaper/has more ports) and “top down” (what do we need to change to reduce our costs/increase capacity).”
Robert Curran.

Robert’s quote above is in response to a post called “Trickle-down impact planning.”

Robert makes a really interesting point. Adding a new card type is a relatively common event for a big network operator. It’s a relatively minor challenge for the networks team – a BAU (Business as Usual) activity in fact. But if you follow the breadcrumbs, the impact to other parts of the business can be quite significant.

Your position in your organisation possibly dictates your perspective on the alternative approaches Robert discusses above. Networks, IT, planning, operations, sales, marketing, projects/delivery, executive – all will have different impacts and a different field of view on the conundrum. This makes it an interesting problem to solve – which viewpoint is the “right” one to tackle the challenge from?

My “solutioning” background tends to align with the top down viewpoint, but today we’ll take a look at this from the perspective of how OSS can assist from either direction.

Bottom Up: In an ideal world, our OSS and associated processes would be able to identify a new card (or similar) and just ripple changes out without interface changes. The first OSS I worked on did this really well. However, it was a “single-vendor” solution so the ripples were self-contained (mostly). This is harder to control in the more typical “best-of-breed” OSS stacks of today. There are architectural mechanisms for controlling the ripples out but it’s still a significant challenge to solve. I’d love to hear from you if you’re aware of any vendors or techniques that do this really well.

Top Down: This is where things get interesting. Should top-down impact analysis even be the task of an OSS/BSS? Since it’s a common / BAU operational task, you could argue it is. If so, how do we create OSS tools* that help with organisational impact / change / options analysis and not just network impact analysis? How do we build the tools* (sketched roughly after the list below) that can:

  1. Predict the rippling impacts
  2. Allow us to estimate the impact of each
  3. Present options (if relevant) and
  4. Provide a cost-benefit comparison to determine whether any of the options are viable for development

* When I say “tools,” this might be a product, but it could just mean a process, data extract, etc.
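As a hedged illustration of points 1 and 2 (nothing more than a sketch; the system names and effort figures below are my own assumptions), the ripple prediction is essentially a walk over a dependency graph of downstream systems, with a rough effort estimate attached to each impacted node.

```python
from collections import deque

# Which systems consume data about the changed entity (e.g. a new card type).
DOWNSTREAM = {
    "physical_inventory": ["logical_inventory", "fault_mgmt", "performance_mgmt", "cmdb"],
    "logical_inventory": ["impact_reports"],
    "fault_mgmt": ["noc_dashboards"],
    "performance_mgmt": ["capacity_reports"],
}

EFFORT_DAYS = {"logical_inventory": 10, "fault_mgmt": 8, "performance_mgmt": 8,
               "cmdb": 5, "impact_reports": 3, "noc_dashboards": 2, "capacity_reports": 2}

def ripple(change_point: str) -> dict[str, int]:
    """Breadth-first walk of systems touched by a change, with rough effort per system."""
    impacted, queue = {}, deque([change_point])
    while queue:
        for node in DOWNSTREAM.get(queue.popleft(), []):
            if node not in impacted:
                impacted[node] = EFFORT_DAYS.get(node, 0)
                queue.append(node)
    return impacted

impacts = ripple("physical_inventory")
print(impacts, "| total person-days:", sum(impacts.values()))
```

Steps 3 and 4 (options and cost-benefit) could then be layered over the same graph, but that’s where the “should we build it” question below really bites.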

I have the sense that this type of functionality falls into the category of, “just because you can, doesn’t mean you should… build it into your OSS.” Have you seen an OSS/BSS with this type of impact analysis functionality built-in?

When your ideas get stolen

When your ideas get stolen. A few meditations from Seth Godin:
“Good for you. Isn’t it better that your ideas are worth stealing? What would happen if you worked all that time, created that book or that movie or that concept and no one wanted to riff on it, expand it or run with it? Would that be better?
You’re not going to run out of ideas. In fact, the more people grab your ideas and make magic with them, the more of a vacuum is sitting in your outbox, which means you will be prompted to come up with even more ideas, right?
Ideas that spread win. They enrich our culture, create connection and improve our lives. Isn’t that why you created your idea in the first place?
The goal isn’t credit. The goal is change.”

A friend of mine has lots of great ideas. Enough to write a really valuable blog. Unfortunately he’s terrified that someone else will steal those ideas. In the meantime, he’s missing out on building a really important personal brand for himself. Do you know anyone like him?

The great thing about writing a daily blog is that it forces you to generate lots of ideas. It forces you to be constantly thinking about your subject matter and how it relates to the world. Putting them out there in the hope that others want to run with them, in the hope that they spread. In the hope that others will expand upon them and make them more powerful, teaching you along the way. At over 2000 posts now, it’s been an immensely enriching experience for me anyway. As Seth states, the goal is definitely change and we can all agree that OSS is in desperate need of change.

It is incumbent on all of us in the OSS industry to come up with a constant stream of ideas – big and small. That’s what we tend to do on a daily basis right? Do yours tend towards the smaller end of the scale, to resolve daily delivery tasks or the larger end of the scale, to solve the industry’s biggest problems?

Of your biggest ideas, how do you get them out into the world for others to riff on? How many of your ideas have been stolen and made a real difference?

If someone rips off your ideas, it’s a badge of honour and you know that you’ll always generate more…unless you don’t let your idea machine run.

Re-writing the Sales vs Networks cultural divide

“Brand, marketing, pricing and sales were seen as sexy. Networks and IT were the geeks no one seemed to speak to or care about. … This isolation and excommunication of our technical team had created an environment of disillusion. If you wanted something done the answer was mostly ‘No – we have no budget and no time for that’. Our marketing team knew more about loyalty points … than about our own key product, the telecommunications network.”
Olaf Swantee, from his book, “4G Mobile Revolution”

Great note here (picked up by James Crawshaw at Heavy Reading). It talks about the great divide that always seems to exist between Sales / Marketing and Network / Ops business units.

I’m really excited about the potential for next generation OSS / orchestration / NaaS (Network as a Service) architectures to narrow this divide though.

In this case:

  1. The Network is offered as a microservice (let’s abstractly call them Resource Facing Services [RFS]);
  2. Sales / Marketing construct customer offerings (let’s call them Customer Facing Services [CFS]) from those RFS; and
  3. There’s a catalog / orchestration layer that marries the CFS with the cohesive set of RFS

The third layer becomes a meet-in-the-middle solution where Sales / Marketing comes together with Network / Ops – and where they can discuss what customers want and what the network can provide.

The RFS are suitably abstracted so that Sales / Marketing doesn’t need to understand the network and complexity that sits behind the veil. Perhaps it’s time for Networks / Ops to shine, where the RFS can be almost as sexy as CFS (am I falling too far into the networks / geeky side of the divide?  🙂  )

The CFS are infinitely composable from RFS (within the constraints of the RFS that are available), allowing Sales / Marketing teams to build whatever they want and the Network / Ops teams don’t have to be constantly reacting to new customer offerings.
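As a toy sketch of those three layers (all service names and parameters are assumptions, not a real catalog), the orchestration layer’s job is essentially to decompose a marketed CFS into the RFS that the network teams expose:

```python
# Resource Facing Services: what Networks / Ops expose, abstracted from underlying complexity.
RFS_CATALOG = {
    "rfs.l3vpn":           {"owner": "networks", "params": ["bandwidth_mbps", "sites"]},
    "rfs.internet_access": {"owner": "networks", "params": ["bandwidth_mbps"]},
    "rfs.voip_trunk":      {"owner": "networks", "params": ["channels"]},
}

# Customer Facing Services: what Sales / Marketing compose and sell.
CFS_CATALOG = {
    "cfs.business_bundle": ["rfs.internet_access", "rfs.voip_trunk"],
    "cfs.enterprise_wan":  ["rfs.l3vpn"],
}

def decompose(cfs_name: str) -> list[str]:
    """The catalog / orchestration layer in miniature: map an offer to network services."""
    return CFS_CATALOG[cfs_name]

print(decompose("cfs.business_bundle"))   # ['rfs.internet_access', 'rfs.voip_trunk']
```

The interesting conversations happen around that middle mapping: what Sales wants to sell versus what the network can actually expose.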

I wonder if this revolution will give Olaf cause to re-write this section of his book in a few years, or whether we’ll still have the same cultural divide despite the exciting new tools.

Blown away by one innovation. Now to extend on it

Our most recent two posts, from yesterday and Friday, have talked about one stunningly simple idea that helps to overcome one of OSS’ biggest challenges – data quality. Those posts have stimulated quite a bit of dialogue and it seems there is some consensus about the cleverness of the idea.

I don’t know if the idea will change the OSS landscape (hopefully), or just continue to be a strong selling point for CROSS Network Intelligence, but it has prompted me to think a little longer about innovating around OSS’ biggest challenges.

Our standard approach of just adding more coats of process around our problems, or building up layers of incremental improvements isn’t going to solve them any time soon (as indicated in our OSS Call for Innovation). So how?

Firstly, we have to be able to articulate the problems! If we know what they are, perhaps we can then take inspiration from the CROSS innovation to spur us into new ways of thinking?

Our biggest problem is complexity. That has infiltrated almost every aspect of our OSS. There are so many posts about identifying and resolving complexity here on PAOSS that we might skip over that one in this post.

I decided to go back to a very old post that used the Toyota 5-whys approach to identify the real cause of the problems we face in OSS [I probably should update that analysis because I have a whole bunch of additional ideas now, as I’m sure you do too… suggested improvements welcomed BTW].

What do you notice about the root-causes in that 5-whys analysis? Most of the biggest causes aren’t related to system design at all (although there are plenty of problems to fix in that space too!). CROSS has tackled the data quality root-cause, but almost all of the others are human-centric factors – change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc. Yet, we always seem to see OSS as a technical problem.

How do you fix those people challenges? Ken Segall puts it this way, “When process is king, ideas will never be. It takes only common sense to recognize that the more layers you add to a process, the more watered down the final work will become.” Easier said than done, but a worthy objective!

I’ve just been blown away by the most elegant OSS innovation I’ve seen in decades

Looking back, I now consider myself extremely lucky to have worked with an amazing product on the first OSS project I worked on (all the way back in 2000). And I say amazing because the underlying data models and core product architecture are still better than any other I’ve worked with in the two decades since. The core is the most elegant, simple and powerful I’ve seen to date. Most importantly, the models were designed to cope with any technology, product or service variant that could be modelled as a hierarchy, whether physical or virtual / logical. I never found a technology that couldn’t be modelled into the core product and it required no special overlays to implement a new network model. Sadly, the company no longer exists and the product is languishing on the books of the company that bought out the assets but isn’t leveraging them.

Having been so spoilt on the first assignment, I’ve been slightly underwhelmed by the level of elegant innovation I’ve observed in OSS since. That’s possibly part of the reason for the OSS Call for Innovation published late last year. There have been many exciting innovations introduced since, but many modern tools are still more complex and complicated than they should be, for implementers and operators alike.

But during a product demo last week, I was blown away by an innovation that was so simple in concept, yet so powerful that it is probably the single most impressive innovation I’ve seen since that first OSS. Like any new elegant solution, it left me wondering why it hasn’t been thought of previously. You’re probably wondering what it is. Well first let me start by explaining the problem that it seeks to overcome.

Many inventory-based OSS rely on highly structured and hierarchical data. This is a double-edged sword. Significant inter-relationship of data increases the insight generation opportunities, but the downside is that it can be immensely challenging to get the data right (and to maintain a high-quality data state). Limited data inter-relationships make the project easier to implement, but tend to allow less rich data analyses. In particular, connectivity data (eg circuits, cables, bearers, VPNs, etc) can be a massive challenge because it requires the linking of separate silos of data, often with no linking key. In fact, the data quality problem was probably one of the most significant root-causes of the demise of my first OSS client.

Now getting back to the present. The product feature that blew me away was the first I’ve seen that allows significant inter-relationship of data (yet in a simple data model), but still copes with poor data quality. Let’s say your OSS has a hierarchical data model that comprises Location, Rack, Equipment, Card, Port (or similar) and you have to make a connection from one device’s port to another’s. In most cases, you have to build up the whole pyramid of data perfectly for each device before you can create a customer connection between them. Let’s also say that for one device you have a full pyramid of perfect data, but for the other end, you only know the location.

The simple feature is to connect a port to a location now, or any other point to point on the hierarchy (and clean up the far-end data later on if you wish). It also allows the intermediate hops on the route to be connected at any point in the hierarchy. That’s too simple, right? Yet most inventory tools don’t allow connections to be made between different levels of their hierarchies. For implementers, data migration / creation / cleansing gets a whole lot simpler with this approach. But what’s even more impressive is that the solution then assigns a data quality ranking to the data that’s just been created. The quality ranking is subsequently considered by tools such as circuit design / routing, impact analysis, etc. However, you’ll have noted that the data quality issue still hasn’t been fixed. That’s correct, so this product then provides the tools that show where quality rankings are lower, thus allowing remediation activities to be prioritised.
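To illustrate the concept only (this is my own minimal sketch, not CROSS’s actual data model or scoring method), imagine connection endpoints that may sit at any level of the hierarchy, with a quality ranking derived from how precisely each end is known:

```python
from dataclasses import dataclass

HIERARCHY = ["location", "rack", "equipment", "card", "port"]   # coarse -> precise

@dataclass
class Endpoint:
    level: str    # one of HIERARCHY
    ref: str      # identifier of the inventory object at that level

@dataclass
class Connection:
    a: Endpoint
    b: Endpoint

    @property
    def quality(self) -> float:
        """0.0 = both ends known only to a location; 1.0 = both ends resolved to a port."""
        score = lambda e: HIERARCHY.index(e.level) / (len(HIERARCHY) - 1)
        return (score(self.a) + score(self.b)) / 2

# One end fully resolved, the other only known to a location - connectable now, cleaned up later.
c = Connection(Endpoint("port", "SYD-R3-EQ7-C2-P1"), Endpoint("location", "MEL-EXCHANGE-02"))
print(round(c.quality, 2))   # 0.5 - a low ranking that routing / impact tools can weigh up
```

The ranking then doubles as a remediation worklist: sort connections by quality and fix the worst first.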

If you have an inventory data quality challenge and / or are wondering the name of this product, it’s CROSS, from the team at CROSS Network Intelligence (www.cross-ni.com).

Is your data AI-ready (part 2)

Further to yesterday’s post that posed the question about whether your data was AI ready for virtualised network assurance use cases, I thought I’d raise a few more notes.

The two reasons posed were:

  1. Our data sets haven’t had time to collect much elastic / dynamic network data yet
  2. Our data is riddled with human-generated data that is error-prone

In the latter case in particular, I sense that we’re going to have to completely re-architect the way we collect and store assurance data. We’re almost certainly going to have to think in terms of automated assurance actions and related logging to avoid the errors of human data creation / logging. The question becomes whether it’s worthwhile trying to wrangle all of our old data into formats that the AI engines can cope with, or whether we just start afresh with new models. (This brings to mind the recent “perfect data” discussion).
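As one hedged example of what “automated assurance actions and related logging” might produce (the field names below are my assumptions, not a standard schema), machine-generated records are structured, timestamped and free of the free-text ambiguity that trips up AI engines:

```python
import json
from datetime import datetime, timezone

# A hypothetical machine-written assurance record, created by the automation itself.
event = {
    "event_id": "a1b2c3",
    "detected_at": datetime.now(timezone.utc).isoformat(),
    "source": "vnf.firewall-07",
    "symptom": "packet_loss",
    "measured_value": 0.12,          # fraction of packets lost
    "threshold": 0.05,
    "action_taken": "scale_out",     # the automated response, logged automatically
    "action_result": "resolved",
}
print(json.dumps(event, indent=2))
```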

It will be one thing to identify patterns, but another thing entirely to identify optimum response activities and to automate those.

If we get these steps right, does it become logical that the NOC (network) and SOC (security operations centre) become conjoined… at least much more so than they tend to be today? In other words, does incident management merge network incidents and security incidents onto common analysis and response platforms? If so, does that imply another complete re-architecture? It certainly changes the operations model.

I’d love to hear your thoughts and predictions.

Drinking from the OSS firehose

“Most people know what they want, but don’t know how to get it. When you don’t know the next step, you procrastinate or feel lost. But a little research can turn a vague desire into specific actions.
For example: When musicians say, “I need a booking agent”, I ask, “Which one? What’s their name?”
You can’t act on a vague desire. But with an hour of research you could find the names of ten booking agents that work with ten artists you admire. Then you’ve got a list of the next ten people you need to contact.
A life coach told me that most of his job is just helping people get specific. Once they turn a vague goal into a list of specific steps, it’s easy to take action.”
Derek Sivers, in his blog, “Get Specific!”

In a post last week, I spoke about feeling like never before that I’m at an OSS cross-road, looking towards a set of paths. The paths all contribute heavily to the next-generations of OSS, but there’s the feeling of dread that no one person will have the ability to step out each path. The paths I’m talking about include network virtualisation, data-science / artificial-intelligence / machine-learning, open-source deployments like ONAP, cloud infrastructure and delivery models, and so many more. Each represents a life’s work to become a fully-fledged expert.

In the past, a single OSS polymath could potentially scramble along a majority of the paths and understand the terrain within their local OSS environment. But that’s becoming increasingly less likely as we become ever more dependent upon the interconnection of disparate expertise.

This represents a growing risk. If nobody understands the whole terrain, how do we map out Derek’s “list of specific steps” on our complex OSS projects? If we can’t adequately break down the work, we’re at risk of running projects as a set of vague, disjointed activities. So I imagine you’re wondering how we do “Get Specific!”?

Most technology experts appear to me to have a predilection to plan projects from the bottom up (ie building up a solution from their detailed understanding of some parts of the project). However, on projects as complex as OSS, I’ve never seen a bottom-up plan come together efficiently. Nobody knows enough of the details to build up the entire plan.

Instead, I prefer the top-down approach of building a WBS (work breakdown structure), progressively diving deeper into the details and turning the vague goal into a tree of ever more specific steps. I consider the ability to break down complex projects into manageable chunks of work as my only real super-power, but in reality it largely just comes from using the WBS approach.
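By way of illustration only (the project content below is entirely made up), a WBS is just a tree that gets expanded level by level until the leaves are specific, estimable work packages:

```python
# A vague goal decomposed top-down into progressively more specific steps.
WBS = {
    "Replace fault management system": {
        "Discovery & requirements": ["Stakeholder interviews", "Current-state alarm audit"],
        "Solution design": ["Integration architecture", "Data model mapping"],
        "Build & integrate": ["Northbound API adapters", "Alarm enrichment rules"],
        "Transition": ["Parallel run", "Cutover & handover to ops"],
    }
}

def walk(node, depth=0):
    """Print the tree with indentation reflecting the level of decomposition."""
    if isinstance(node, dict):
        for name, child in node.items():
            print("  " * depth + name)
            walk(child, depth + 1)
    else:
        for leaf in node:
            print("  " * depth + leaf)

walk(WBS)
```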

Okay, it might sound a bit like a waterfall model (depends on how you design the tree really), but it beats the “trying to drink from a firehose” alternative model.

Which approach works best for you?

After the boys of OSS have gone

Something has always bothered me about the medical profession. Whenever you visit a GP (General Practitioner), unless you need to come back for test results or ongoing treatment, the doctor never finds out if their diagnoses / prescriptions have been effective. In my experience at least, they don’t call to see whether there were any complications, allergic reactions to treatments, improvement in condition, etc and only find out if you make a follow-up appointment. As a result, they never close the feedback loop or gather a potentially rich source of data on their efficacy.

I sometimes wonder whether this is true of OSS implementers too. There can be a tendency to move from one implementation project to the next, from one customer to the next, without having the time to circle back on previous clients. Any unrealised but ongoing problems are handed over to operations and/or product support teams, so the implementers may not get to see them. Alternatively the team might also be consistently missing out on identifying opportunities for value-add on their projects.

If you’re an implementer (as I often am), how do you close the loop to find out what you could be doing better? Do you retain dialog with customers after handover? Do you question your support teams about what client problems / enquiries are landing on their desk? Do you ever book follow-up sessions with client staff at scheduled intervals after handover? Are you always engaged on an operational handover period where you have the chance to see post-handover challenges first-hand?

Just like a doctor, you’re bound to hear of any major or catastrophic outcomes after a “patient’s” initial visit. But what about the niggling ailments your clients have that could be easily rectified for all future clients… if only you knew of them?

I’d love to hear the thoughts from implementers on how they’re continually upping their game. Similarly, if you’re in ops / support, what experiences (ie messes) are consistently landing with you to clean up after the implementers have moved on? Do you have any suggestions for how they (we) could close the loop better with you?

Note: For all the highly talented women out there in OSS-land, please note that I’m not overlooking you. The title of my post is just a play on Don Henley’s famous song.

Trickle-down impact planning

We introduced the concept of The Trickle-down Effect last year, an effect that sees the most minor changes trickling down through an OSS stack, with much bigger consequences than expected.

“The trickle-down effect can be insidious, turning a nice open COTS solution into a beast that needs constant attention to cope with the most minor of operational changes. The more customisations made, the more gnarly the beast tends to be.”

Here’s an example I saw recently. An internal business unit wanted to introduce a new card type into the chassis set they managed. Speaking with the physical inventory team, it seemed the change was quite small and a budget was developed for the works… but the budget (dollars / time / risk) was about to blow out in a big way.

The new card wasn’t being picked up in their fault-management or performance management engines. It wasn’t picked up in key reports, nor was it being identified in the configuration management database or logical inventory. Every one of these systems needed interface changes. Not massive change obviously, but collectively the budget blew out by 10x and expedite changes pushed out the work previously planned by each of the interface development and testing teams.

These trickle-down impacts were known… by some people… but weren’t communicated to the business unit responsible for managing the new card type. There’s a possibility that they may not have even added the new card type had they realised the full OSS cost consequences.

Are these trickle-down impacts known and readily communicated within your OSS change processes?

One sentence to make most OSS experts cringe

Let me warn you. The following sentence is going to make many OSS experts cringe, maybe even feel slightly disgusted, but take the time to read the remainder of the post and ponder how it fits within your specific OSS context/s.

“Our OSS need to help people spend money!”

Notice the word is “help” and not “coerce?” This is not a post about turning our OSS into sales tools, well, not directly anyway.

May I ask you a question – Do you ever spend time thinking about how your OSS is helping your customer’s customer (which I’ll refer to as the end-customer) to spend their money? And I mean making it easier for them to buy the stuff they want to buy in return for some form of value / utility, not trick or coerce them into buying stuff they don’t want.

Let me step you through the layers of thinking here.

The first layer for most OSS experts is their direct customer, which is usually the service provider or enterprise that buys and operates the OSS. We might think they are buying an OSS, but we’re wrong. An organisation buys an OSS, not because it wants an Operational Support System, but because it wants Operational Support.

The second layer is a distinct mindset change for most OSS experts. Following on from the first layer, OSS has the potential to be far more than just operational support. Operational support conjures up the image of being a cost-centre, or something that is a necessary evil of doing business (ie in support of other revenue-raising activities). To remain relevant and justify OSS project budgets, we have to flip the cost-centre mentality and demonstrate a clear connection with revenue chains. The more obvious the connection, the better. Are you wondering how?

That’s where the third layer comes in. We have to think hard about the end-customer and empathise with their experiences. These experiences might be a consumer to a service provider’s (your direct customer) product offerings. It might even be a buying cycle that the service provider’s products facilitate. Either way, we need to simplify their ability to buy.

So let’s work back up through those layers again:
Layer 3 – If end-customers find it easier to buy stuff, then your customer wins more revenue (and brand value)
Layer 2 – If your customer sees that its OSS / BSS has unquestionably influenced revenue increase, then more is invested on OSS projects
Layer 1 – If your customer recognises that your OSS / BSS has undeniably influenced the increased OSS project budget, you too get entrusted with a greater budget to attempt to repeat the increased end-customer buy cycle… but only if you continue to come up with ideas that make it easier for people (end-customers) to spend their money.

At what layer does your thinking stop?

The pruning saw technique for OSS fall-out management

Many different user journeys flow through our OSS every day. These include external / customer journeys, internal / operator journeys and possibly even machine-to-machine or system journeys. Unfortunately, not all of these journeys are correctly completed through to resolution.

The incomplete or unsatisfactory journeys could include inter-system fall-outs, customer complaints, service quality issues, and many more.

If we categorise and graph these unsuccessful journeys, we will often find a graph like the one below. This example shows that a small number of journey categories account for a large proportion of the problematic journeys. However, it also shows a long tail of other categories that are individually insignificant, but collectively significant.

Long tail of OSS demands

The place where everybody starts (including me) is to focus on the big wins on the left side of the graph. It makes sense that this is where your efforts will have the biggest impacts. Unfortunately, since that’s where everybody focusses effort, chances are the significant gains have already been achieved and optimisation is already quite high (but numbers are still high due to constraints within the existing solution stack).

The long tail intrigues me. It’s harder to build a business case for because there are many causes and little return for solving them. Alternatively, if we can slowly and systematically remove many of these rarer events, we’re removing the noise and narrowing our focus on the signal, thus simplifying our overall solution.

Since they’re statistically rarer, we can often afford to be more ruthless in the ways that we prevent them from occurring. But how do we identify and prevent them?

Each of the bars on the chart above represent leaves on a decision tree (faulty leaves in this case). If we work our way back up the branches, we can ruthlessly prune. Examples could be:

  • The removal of obscure service options that lead to faults
  • Reduction (or expansion or refinement) of process designs to better cope with boundary cases that generate problems
  • Removal of grandfathered products that have few customers, generate losses and consume disproportionate support effort
  • Removal or refinement of interfaces, especially east-west systems that contribute little to an end-to-end process but lead to many fall-outs
  • Identification of improved exception handling, either within systems or at hand-off points

I’m sure you can think of many more, especially when you start tracing back up decision trees with a pruning saw in hand!!
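As a simple sketch of the categorise-and-rank step that produces a chart like the one above (the categories, counts and the 5% cut-off are entirely made-up assumptions), a few lines are enough to split the head of the curve from the long tail:

```python
from collections import Counter

fallouts = (["address_mismatch"] * 420 + ["port_exhaustion"] * 310 + ["billing_handoff"] * 180
            + ["legacy_product_X"] * 9 + ["obscure_option_Y"] * 7 + ["manual_workaround_Z"] * 5)

counts = Counter(fallouts).most_common()
total = sum(n for _, n in counts)

for category, n in counts:
    zone = "head" if n / total >= 0.05 else "long tail"   # arbitrary 5% share cut-off
    print(f"{category:22s} {n:4d}  {n / total:6.1%}  {zone}")
```

The head categories get the usual optimisation attention; the long-tail categories become candidates for the pruning saw.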


How smart contracts might reduce risk and enhance trust on OSS projects

Last Friday, we spoke about all wanting to develop trusted OSS supplier / customer relationships but rarely finding them and a contrarian factor for why trust is so hard to achieve in OSS – complexity.

Trust is the glue that allows OSS projects to happen. Not only that, it becomes a catch-22 with complexity. If OSS partners don’t trust each other, requirements, contracts, etc get more complex as a self-protection barrier. But with every increase in complexity, there becomes an increasing challenge to deliver and hence, risk of further reduction in trust.

On a smaller scale, you’ve seen it on all projects – if the project starts to falter, increased monitoring attention is placed on the project, which puts increased administrative load on the project team and reduces the time they have to deliver the intended outcomes. Sometimes the increased admin / reporting gains the attention of sponsors and brings access to additional resources, but usually it just detracts from the available delivery capability.

Vish Nandlall also associates trust and complexity in organisational models in a LinkedIn post.

This is one of the reasons I’m excited about what smart contracts can do for the organisations and OSS projects of the future. Just as “Likes” and “Supplier Rankings” have facilitated online trust models, smart contract success rankings have the ability to do the same for OSS suppliers, large and small. For example, rather than needing to engage “Big Vendor A” to build your entire, monolithic OSS stack, if an operator develops simpler, more modular work breakdowns (eg microservices), then they can engage “Freelancer B” and “Small Vendor C” to make valuable contributions on smaller risk increments. Being lower in complexity and risk means B and C have a greater chance of engendering trust, and their historical contract success ranking encourages them to treat trust as a key metric.

Potential OSS failures aren’t always technical

I recently attended an event where a brainstorming question was posed about how a particular next-gen OSS concept might fail. Interesting exercise!

There were a lot of super-clever technical people in the room and the brainstorming was fascinating. We dived deeply into their experiences and all the potential technical reasons for failure.

But I was left with an overwhelming feeling that:

    1. Most, if not all, of those technical hurdles could be overcome if given enough resources
    2. None of the more likely causes of failure were brought up, including:
      • People-related factors (or organisational change factors) such as resistance to change, a shortage of skills in a nascent area, stakeholder management, lack of “champion” support if momentum slows, inability to reach consensus on scope / design, etc
      • Financial viability factors such as inability to deliver on time/cost/scope, parallel operations and maintenance of legacy, lower additional benefit than predicted in the business case

That’s where I’ve noticed a greater proportion of OSS project failures anyway. Does this align with your experiences?

Finding the most important problems to solve

The problem with OSS is that there are too many problems. We don’t have to look too hard to find a problem that needs solving.

An inter-related issue is that we’re (almost always) constrained by resources and aren’t able to solve every problem we find. I have a theory – As much as you are skilled at solving OSS problems, it’s actually your skill at deciding which problem to solve that’s more important.

With continuous release methodologies gaining favour, it’s easy to prioritise on the most urgent or easiest problems to solve. But what if we were to apply the Warren Buffett 20 punch-card approach to tackling OSS problems?

“I could improve your ultimate financial welfare by giving you a ticket with only twenty slots in it so that you had twenty punches – representing all the investments that you got to make in a lifetime. And once you’d punched through the card, you couldn’t make any more investments at all. Under those rules, you’d really think carefully about what you did, and you’d be forced to load up on what you’d really thought about. So you’d do so much better.”
Warren Buffett.

I’m going through this exact dilemma at the moment – am I so busy giving attention to the obvious problems that I’m not allowing enough time to discover the most important ones? I figure that anyone can see and get caught up in the noise of the obvious problems, but only a rare few can listen through it…

Bringing Eminem’s blank canvas to OSS

“When you start out in your career, you have a blank canvas, so you can paint anywhere that you want because the shit ain’t been painted on yet. And then your second album comes out, and you paint a little more and you paint a little more. By the time you get to your seventh and eighth album you’ve already painted all over it. There’s nowhere else to paint.”
Eminem, on Rick Rubin and Malcolm Gladwell’s Broken Record podcast.

To each their own. Personally, Eminem’s music has never done it for me, whether his first or eighth album, but the quote above did strike a chord (awful pun).

It takes many, many hours to paint in the detail of an OSS painting. By the time a product has been going for a few years, there’s not much room left on the canvas and the detail of the existing parts of the work is so nuanced that it’s hard to contemplate painting over.

But this doesn’t consider that over the years, OSS have been painted on many different canvases. First there were mainframes, then client-server, relational databases, XaaS, virtualisation (of servers and networks), and a whole continuum in between… not to mention the future possibilities of blockchain, AI, IoT, etc. And that’s not even considering the changes in programming languages along the way. In fact, new canvases are now presenting themselves at a rate that’s hard to keep up with.

The good thing about this is that we have the chance to start over with a blank canvas each time, to create something uniquely suited to that canvas. However, we invariably attempt to bring as much of the old thinking across as possible, immediately leaving little space left to paint something new. Constraints that existed on the old canvas don’t always apply to each new canvas, but we still have a habit of bringing them across anyway.

We don’t always ask enough questions like:

  • Does this existing process still suit the new canvas?
  • Can we skip steps?
  • Can we obsolete any of the old / unused functionality?
  • Are old and new architectures (at all levels) easily transmutable?
  • Does the user interface need to be ported or replaced?
  • Do we even need a user interface (assuming the rise of machine-to-machine with IoT, etc)?
  • Does the old data model have any relevance to the new canvas?
  • Do the assurance rules of fixed-network services still apply to virtualised networks?
  • Do the fulfillment rules of fixed-network services still apply to virtualised networks?
  • Are there too many devices to individually manage or can they be managed as a cohort?
  • Does the new model give us access to new data and/or techniques that will allow us to make decisions (or derive insights) differently?
  • Does the old billing or revenue model still apply to the new platform?
  • Can we increase modularity and abstraction between modules?

“The real reason “blockchain” or “AI” may actually change businesses now or in the future, isn’t that the technology can do remarkable things that can’t be done today, it’s that it provides a reason for companies to look at new ways of working, new systems and finally get excited about what can be done when you build around technology.”
Tom Goodwin.

The chains of integration are too light until…

“Chains of habit are too light to be felt until they are too heavy to be broken.”
Warren Buffett (although he attributed it to an unknown author, perhaps originating with Samuel Johnson).

What if I were to replace the word “habit” in the quote above with “OSS integration” or “OSS customisation” or “feature releases?”

The elegant quote reflects a simple image: the chains of feature releases are light at t0, but much heavier at t0+100.

Like habits, if we could project forward and see the effects, would we allow destructive customisations to form in their infancy? The problem is that we don’t see them as destructive at infancy. We don’t see how they entangle.