The layers of ITIL redundancy

Today’s is something of a heretical post, especially for the believers in ITIL. In the world of OSS, we look to build in layers of resiliency and not layers of redundancy.

The following diagram and subsequent text in italics describe a typical ITIL process; both are taken from https://www.computereconomics.com/article.cfm?id=1074

Example of relationship between ITIL incidents, problems, and changes

The sequence of events as shown in Figure 1 is as follows:

  • At TIME = 0, an External Event is detected by the Incident Management process. This could be as simple as a customer calling to say that service is unavailable or it could be an automated alert from a system monitoring device. The incident owner logs and classifies this as incident i2. Then, the incident owner tries to match i2 to known errors, work-arounds, or temporary fixes, but cannot find a match in the database.
  • At TIME = 1, the incident owner dispatches a problem request to the Problem Management process anticipating a work-around, temporary fix, or other assistance. In doing so, the incident owner has prompted the creation of Problem p2.
  • At TIME = 2, the problem owner of p2 returns the expected temporary fix to the incident owner of i2. Note that both i2 and p2 are active and exist simultaneously. The incident owner for i2 applies the temporary fix.
  • In this case, the work-around requires a change request. So, at TIME = 3, the incident owner for i2 initiates change request c2.
  • The change request c2 is applied successfully, and at TIME = 4, c2 is closed. Note that for a while i2, p2 and c2 all exist simultaneously.
  • Because c2 was successful, the incident owner for i2 can now confirm that the incident is resolved. At TIME = 5, i2 is closed. However, p2 remains active while the problem owner searches for a permanent fix. The problem owner for p2 would be responsible for implementing the permanent fix and initiating any necessary change requests.
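As an aside (and not from the source article), the overlapping lifetimes of i2, p2 and c2 are easy to model. Here’s a toy Python sketch of the timeline above:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    """A minimal ITIL record: an incident, problem or change request."""
    ticket_id: str
    opened_at: int                 # TIME units from the example above
    closed_at: int | None = None   # None = still active

    def is_active(self, t: int) -> bool:
        return self.opened_at <= t and (self.closed_at is None or t < self.closed_at)

# The timeline described in the example
i2 = Ticket("i2", opened_at=0, closed_at=5)   # incident
p2 = Ticket("p2", opened_at=1)                # problem: open until a permanent fix
c2 = Ticket("c2", opened_at=3, closed_at=4)   # change request

for t in range(6):
    active = [x.ticket_id for x in (i2, p2, c2) if x.is_active(t)]
    print(f"TIME={t}: active={active}")
# At TIME=3, i2, p2 and c2 all exist simultaneously.
```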

But I look at it slightly differently. At their root, why do Incident Management, Problem Management and Change Management exist? They’re all just mechanisms for resolving a system* health issue. If we detect an event (or events) and fix the underlying issue directly, we don’t have to expend all the effort of flicking tickets around.

Thinking within the T2R paradigm of trouble -> incident -> problem -> change -> resolve holds us back. If we can skip the middle steps and immediately associate a resolution with the event/s, we get a whole lot more efficient. If we can immediately relate a trigger with a reaction, we can also get rid of the intermediate ticket flickers and the processing cycle time.

So, the NOC of the future surely requires us to build a trigger -> reaction recommendation engine (and body of knowledge). That’s a more powerful tool to supply to our NOC operators than incidents, problems and change requests. (Easier to write about than to actually solve though of course)
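To make that idea a little more concrete, here’s a minimal sketch of such a recommendation engine. Every field name and resolution string below is hypothetical, and a real engine would need far richer event correlation than this naive signature:

```python
from collections import Counter, defaultdict

class TriggerReactionEngine:
    """Toy body of knowledge: maps event signatures to past resolutions."""

    def __init__(self):
        self.history = defaultdict(Counter)

    def signature(self, event: dict) -> str:
        # Naive signature: device type + alarm code. A real engine would
        # cluster on richer features (topology, timing, correlated events).
        return f"{event['device_type']}:{event['alarm_code']}"

    def record(self, event: dict, resolution: str) -> None:
        """Learn from a resolved event, with no intermediate tickets."""
        self.history[self.signature(event)][resolution] += 1

    def recommend(self, event: dict, top_n: int = 3):
        """Rank reactions by how often they resolved similar triggers."""
        return self.history[self.signature(event)].most_common(top_n)

engine = TriggerReactionEngine()
engine.record({"device_type": "router", "alarm_code": "LOS"}, "restart optical module")
engine.record({"device_type": "router", "alarm_code": "LOS"}, "swap SFP")
engine.record({"device_type": "router", "alarm_code": "LOS"}, "restart optical module")
print(engine.recommend({"device_type": "router", "alarm_code": "LOS"}))
# [('restart optical module', 2), ('swap SFP', 1)]
```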

OSS transformation is hard. What can we learn from open source?

Have you noticed an increasing presence of open-source tools in your OSS recently? Have you also noticed that open-source is helping to trigger transformation? Have you thought about why that might be?

Some might rightly argue that it is the cost factor. You could also claim that they tend to help resolve specific, but common, problems. They’re smaller and modular.

I’d argue that the reason relates to our most recent two blog posts. They’re fast to install (don’t need to get bogged down in procurement) and they’re easy to run in parallel for comparison purposes.

If you’re designing an OSS can you introduce the same concepts? Your OSS might be for internal purposes or to sell to market. Either way, if you make it fast to build and easy to use, you have a greater chance of triggering transformation.

If you have a behemoth OSS to “sell,” transformation persuasion is harder. The customer needs to rally more resources (funds, people, time) just to compare with what they already have. If you have a behemoth on your hands, you need to try even harder to be faster, easier and more modular.

I have the need for OSS speed

You already know that speed is important for OSS users. They / we don’t want to wait minutes for the OSS to respond to a simple query. That’s obvious, right? The bleeding obvious.

But that’s not what today’s post is about. So then, what is it about?

Actually, it follows on from yesterday’s post about re-framing OSS transformation. If a parallel pilot OSS can be stood up in weeks, it helps persuasion. If the OSS is also fast for operators to learn, that helps persuasion too. Why is that important? How can speed help with persuasion?

Put simply:

  • It takes x months of uncertainty out of the evaluators’ lives
  • It takes x months of parallel processing out of the evaluators’ lives
  • It also takes x months of task-switching out of the evaluators’ lives
  • Given x months of their lives back, customers will be more easily persuaded

It also helps with the parallel bake-off if your pilot OSS shows a speed improvement.

Whether we’re the buyer or seller in an OSS pilot, it’s incumbent upon us to increase speed.

You may ask how. Many ways, but I’d start with a mass-simplification exercise.

Re-framing an OSS replacement strategy

Friday’s post posed a re-framing exercise that asked you (whether customer, seller or integrator) to run a planning exercise as if you MUST offer a money-back guarantee on your OSS (whether internal or external). It’s designed to force a change in mindset from risk mitigation to risk removal.

We have another re-framing exercise for you today.

As we all know, incumbent OSS can be really difficult to replace / usurp. It becomes a massive exercise for a customer to change the status quo. And when you’re on the team that’s trying to instigate change (again whether you’re internal or external to the OSS customer organisation), you want to minimise the barriers to change.

The ideal replacement approach is to put a parallel pilot in place (which also bears some similarity with the strangler fig analogy). Unfortunately the pilot approach doesn’t get used as often as it could because pilot implementation projects tend to take months to stand up. This implies significant effort and cost, which in turn implies a major procurement event needs to occur.

If the parallel pilot could be stood up in days or a couple of weeks, then it becomes a more useful replacement persuasion strategy.

So today’s re-framing exercise is to ask yourself: what could you do to stand up a pilot version of your OSS in only days/weeks and at very little cost?

Let me add an extra twist to that exercise. When I say stand up the OSS in days/weeks, I also mean hand it over to the users, which means it has to be intuitive enough for operators to begin using with almost no training. Don’t forget that the parallel solution is unlikely to have additional resources to operate it. It’s likely that the same workforce will need to operate both incumbent and pilot while performing the comparison.
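One low-effort way to support that comparison is to shadow-run the same read-only queries against both systems and log any divergence automatically. A minimal sketch, where both systems’ query interfaces are faked stand-ins:

```python
def shadow_compare(query, incumbent, pilot, log):
    """Serve the incumbent's answer, quietly compare against the pilot."""
    primary = incumbent(query)
    try:
        candidate = pilot(query)
        if candidate != primary:
            log.append({"query": query, "incumbent": primary, "pilot": candidate})
    except Exception as err:   # a pilot failure must never impact users
        log.append({"query": query, "pilot_error": repr(err)})
    return primary

# Hypothetical stand-ins for each OSS's query function
divergences = []
answer = shadow_compare(
    "inventory.count(type='ONT')",
    incumbent=lambda q: 10_412,
    pilot=lambda q: 10_409,
    log=divergences,
)
print(answer, divergences)
```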

So, what could you do to stand up a pilot version of your OSS in only days/weeks, at very little cost and with almost immediate take-up by users?

What’s the one big factor holding back your OSS? And the exercise to reduce it

We’ve talked about some of the emotions we experience in the OSS industry earlier this week, the trauma of OSS and anxiety relating to OSS.

To avoid these types of miserable feelings, it’s human nature to seek to limit them. We over-analyse, we over-specify, we over-engineer, we over-document, we over-contract, we over-react, we over-estimate (nah, actually we almost never over-estimate do we?), we over-resource (well, actually, we don’t seem to do that very often either). Anyway, you get the “over” idea.

What is the one big factor that leads to all of these overs? What is the one big factor that makes our related costs and delivery times become overs too?

Have you guessed yet?

The answer is…… drum-roll please…… RISK.

Let’s face it. OSS projects are as full as a centipede’s sock drawer when it comes to risk. The customer carries risks, the supplier carries risk, the integrators carry risk, the sponsors carry risk, the end-users carry risk, the implementers carry risk. What a burden! And it is a burden that impacts in many ways, as indicated in the triple constraint of OSS projects.

Anyone who’s done more than a few OSS projects knows there are many risks and they tend to respond by going into over-mode (ie all the overs mentioned above). That’s a clever strategy. It’s called risk mitigation.

But today’s post isn’t about risk mitigation. It takes a contrarian approach. Let me explain.

Have you noticed how many companies build risk reduction techniques into their sales models? Phrases like “money-back guarantee” abound. This technique is designed to remove most of the risk for the customer and also remove the associated barrier to purchase. To be fair, it might not actually be a case of removing the risk, but directing all of the risk onto the seller. Marketers call it risk reversal.

I’m sure you’re thinking, “well that’s fine for high-volume, low-cost products like burgers or books, but not so easy for complex, customised solutions like OSS.” I hear you!

I’m not actually asking you to offer a money-back guarantee for your OSS, although Passionate About OSS does offer that all the way from our products through to our high-end consultancy services.

What I am asking you to do (whether customer, seller or integrator) is to run a planning exercise as if you MUST offer a money-back guarantee. What that forces is a change of mindset from risk mitigation to risk removal. It forces consideration of what are the myriad risks “in the system” (for customer, seller and integrator) and how can they be removed? Here are a few risk planning suggestions FWIW.

Set the following challenge for your analysts and engineers – Don’t come to me with a business case for the one-million-and-first feature to add, but prove your brilliance by showing me the business case for the risks you will remove. Risk reduction rather than feature-add or cost-out business cases.

Let me know what you discover and what your results are.

Identifying the fault-lines that trigger OSS churn

Most people slog through their days in a dark funk. They almost never get to do anything interesting or go to interesting places or meet interesting people. They are ignored by marketers who want them to buy their overpriced junk and be grateful for it. They feel disrespected, unappreciated and taken for granted. Nobody wants to take the time to listen to their fears, dreams, hopes and needs. And that’s your opening.
John Carlton

Whilst the quote above may relate to marketing, it also has parallels in the build and run phases of an OSS project. We talked about the trauma of OSS yesterday, where the OSS user feels so much trauma with their current OSS that they’re willing to go through the trauma of an OSS transformation. Clearly, a procurement event must be preceded by a significant trauma!

Sometimes that trauma has its roots in the technical, where the existing OSS just can’t do (or be made to do) the things that are most important to the OSS user. Or it can’t do them reliably, at scale, in time, cost-effectively, or without significant risk / change. That’s a big factor certainly.

However, the churn trigger appears to more often be a human one. The users feel disrespected, unappreciated and taken for granted. But here’s an interesting point that might surprise some users – the suppliers also often feel disrespected, unappreciated and taken for granted.

I have the privilege of working on both sides of the equation, often even as the intermediary between both sides. Where does the blame lie? Where do the fault-lines originate? The reasons are many and varied of course, but like a marriage breakup, it usually comes down to relationships.

Where the communication method is through hand-grenades being thrown over the fence (eg management by email and by contractual clauses), results are clearly going to follow a deteriorating arc. Yet many OSS relationships structurally start from a position of us and them – the fence is erected – from day one.

Coming from a technical background, it took me far too deep into my career to come to this significant realisation – the importance of relationships, not just the quest for technical perfection. The need to listen to both sides’ fears, dreams, hopes and needs.

Addressing the trauma of OSS

You also have to understand their level of trauma. Your product, service or information is selling a solution to someone who is in trauma. There are different levels, from someone who needs a nail to finish the swing set in their backyard to someone who just found out they have a life-threatening disease. All of your customers had something happen in their life, where the problem got to an unmanageable point that caused them to actively search for your solution.
A buying decision is an emotional decision.”
John Carlton

My clients tend to fall into three (fairly logical) categories:

  1. They’re looking to buy an OSS
  2. They’re looking to sell an OSS
  3. They’re in the process of implementing an OSS

Category 3 clients tend to bring a very technical perspective to the task. Lists of requirements, architectures, designs, processes, training, etc.

Category 2 clients tend to also bring a technical perspective to the task. Lists of features, processes, standards, workflows, etc.

Category 1 clients also tend to break down the buying decision in a technical manner. Lists of requirements, evaluation criteria, ranking/weighting models, etc.

But what’s interesting about this is that category 1 is actually a very human initiative. It precedes the other two categories (ie it is the lead action). And category 1 clients tend to only reach this state of needing help due to a level of trauma. The buying decision is an emotional decision.

Nobody wants to go through an OSS replacement or the procurement event that precedes it. It’s also a traumatic experience for the many people involved. As much as I love being involved in these projects, I wouldn’t wish them on anyone.

I wonder whether taking the human perspective, actively putting ourselves in the position of understanding the trauma the buyer is experiencing, might change the way we approach all three categories above?

That is, taking less of a technical approach (although that’s still important of course), but more on addressing the trauma. As the first step, do you step back to understand what is the root-cause of your customer’s unique trauma?

Zero Touch Assurance – ZTA (part 3)

This is the third in a series on ZTA, following on from yesterday’s post that suggested intentionally triggering events to allow the accumulation of a much larger library of historical network data.

Today we’ll look at the impact of data collection on our ability to achieve ZTA and refer back to part 1 in the series too.

  1. Monitoring – There is monitoring the events that happen in the network and responding manually
  2. Post-cognition – There is monitoring events that happen in the network, comparing them to past events and actions (using analytics to identify repeating patterns), using the past to recommend (or automate) a response
  3. Pre-cognition – There is identifying events that have never happened in the network before, yet still being able to provide a recommended / automated response

In my early days of OSS projects, it was common for network performance data to be collected at 15-minute intervals at best. Sometimes even less frequently, if polling put too much load on the processor of a given network device. That was useful for long and medium term trend analysis, but averaging across the 15-minute period meant that significant performance events could be missed. Back in those days it was mostly Stage 1 – Monitoring. Stage 2, Post-cognition, was unsophisticated at a system level (eg manually adjusting threshold-crossing event levels), so it relied on talented operators who could remember similar events in the past.

If we want to reach the goal of ZTA, we have to drastically reduce measurement / polling / notification intervals. Ideally, we want near-real-time data collection across the following dimensions (a skeleton of the resulting pipeline is sketched after the list):

  • To extract (from the device/EMS)
  • To transform / normalise the data (different devices may use different counter models for example)
  • To load
  • To identify patterns (15 minute poll cycles disguise too many events)
  • To compare with past patterns
  • To compare with past responses / results
  • To recommend or automate a response
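To illustrate – and every function below is a hypothetical stand-in, not a real collector – a near-real-time pipeline covering those dimensions might be skeletonised like this:

```python
import time

def poll(device):
    """Extract: a pretend near-real-time counter read from a device/EMS."""
    return {"device": device, "ts": time.time(), "counters": {"if_util": 0.97}}

def normalise(raw):
    """Transform: map vendor-specific counters onto a common model."""
    return {"device": raw["device"], "ts": raw["ts"],
            "metric": "utilisation", "value": raw["counters"]["if_util"]}

def detect_pattern(sample, threshold=0.95):
    """Identify: stand-in for real pattern detection (clustering, ML, etc)."""
    return "congestion" if sample["value"] > threshold else None

PAST_RESPONSES = {"congestion": "reroute traffic / add capacity"}  # seeded history

def pipeline(devices):
    for device in devices:
        sample = normalise(poll(device))   # extract + transform (+ load)
        pattern = detect_pattern(sample)   # identify and compare patterns
        if pattern:
            yield device, pattern, PAST_RESPONSES.get(pattern)  # recommend

for device, pattern, response in pipeline(["core-rtr-01"]):
    print(f"{device}: {pattern} -> recommended: {response}")
```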

I’m sure you can see the challenge here. The faster the poll cycle, the more data that needs to be processed. It can be a significant feat of engineering to process large data volumes at near-real-time speeds (streaming analytics) on large networks.

Zero Touch Assurance – ZTA (part 2)

Yesterday we described the three steps on the path to Zero Touch Assurance:

  1. Monitoring – Monitoring the events that happen in the network and responding manually
  2. Post-cognition – Monitoring events / trends that happen in the network, comparing them to past situations (using analytics to identify repeating patterns), using the past to recommend (or automate) a response
  3. Pre-cognition – Identification of events / trends that have never happened in the network before, yet still being able to provide a recommended / automated response

At face value, it seems that we need pre-cognition to be able to achieve ZTA, but we also seem to be some distance away from achieving step 3 technologically (please correct me if I’m wrong here!). But today we pose a possible alternative approach, using only the more achievable step 2 technology.

The weakness of Post-cognition is that it’s only as useful as the history of past events that it can call upon. But rather than waiting for events to naturally occur, perhaps we could constantly trigger simulated events and reactions to seed the historical database with a far greater set of data to call upon. In other words, pull all the levers to ensure that there is no event that has never happened before. The problem with this brute-force approach is that the constant tinkering could trigger a catastrophic network failure. We want to build up a library of all possible situations, but without putting live customer services at risk.

So we could instead run many of the more risky, cascading or long-run variants on what other industries might call a “digital twin” of the network. By virtue of storing all the current operating data about a given network, an OSS could already be considered a digital twin. We’d just need to build the sophisticated, predictive simulations to run on the twin.
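A minimal sketch of that seeding loop, assuming a twin that can replay invented failure scenarios (the run_on_twin interface and the scenario lists are made up purely for illustration):

```python
import itertools

def run_on_twin(scenario):
    """Hypothetical: replay a failure scenario on the digital twin and
    return the observed event signature plus the best-known response."""
    return f"events:{scenario}", f"response:{scenario}"

# Enumerate levers to pull, so no event pattern is ever seen for the first time
LINK_FAILURES = ["cut-fibre-A", "cut-fibre-B"]
LOAD_PROFILES = ["normal-load", "peak-load"]

historical_db = {}
for scenario in itertools.product(LINK_FAILURES, LOAD_PROFILES):
    signature, response = run_on_twin(scenario)
    historical_db[signature] = response   # seed the post-cognition library

print(len(historical_db), "seeded event patterns")
```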

More to come tomorrow when we discuss how data collection impacts our ability to achieve ZTA.

Zero Touch Assurance – ZTA (part 1)

A couple of years ago, we published a series on pre-cognitive OSS based on the following quote by Ben Evans about three classes of search/discovery:

  1. There is giving you what you already know you want (Amazon, Google)
  2. There is working out what you want (Amazon and Google’s aspiration)
  3. And then there is suggesting what you might want (Heywood Hill).

Today, I look to apply a similar model towards the holy grail of OSS – Zero Touch Assurance (ZTA).

  1. Monitoring – There is monitoring the events that happen in the network and responding manually
  2. Post-cognition – There is monitoring events that happen in the network, comparing them to past events and actions (using analytics to identify repeating patterns), using the past to recommend (or automate) a response
  3. Pre-cognition – There is identifying events that have never happened in the network before, yet still being able to provide a recommended / automated response

The third step, pre-cognition, is where the supposed holy grail lies. It’s where everyone talks about the prospect of AI solving all of our problems. It seems we’re still a way off this happening.

But I wonder whether the actual ZTA solution might be more of a brute-force version of step 2 – post-cognition?

More on that tomorrow.

A sad example of the challenges facing OSS supplier consolidation

Yesterday’s post, “Would an OSS duopoly be a good thing?” talked about the benefits and challenges of consolidation of the number of suppliers in the OSS market.

I also promised that today I’ll share an example of the types of challenge that can be faced.

An existing OSS supplier (Company A) had developed a significant foot-hold in the T1 telco market around Asia. A wide range of products from their total suite was installed at each of these customers.

Another OSS supplier (Company X) then acquired Company A. I wasn’t privy to the reasoning behind the purchase but I can surmise that it was a case of customer and revenue growth, primarily to up-sell Company X’s complementary products into Company A’s customers. There was a little bit of functionality overlap, but not a huge amount. In fact Company A’s functionality, if integrated into Company X’s product suite, would’ve given them significantly greater product reach.

To date, the acquisition hasn’t been a good one for Company X. They haven’t been able to up-sell to any of Company A’s existing customers, probably because there are some significant challenges relating to the introduction of that product into Asia. Not only that, but Company A’s customers had been expecting greater support and new development under new management. When it didn’t arrive (there were no new revenues to facilitate Company X investing in it), those customers started to plan OSS replacement projects.

I understand some integration efforts were investigated between Company A and Company X products, but it just wasn’t an easy fit.

As you can see, quite a few of the challenges of consolidation that were spoken about yesterday were all present in this single acquisition.

Would an OSS duopoly be a good thing?

The products/vendors page here on PAOSS currently has a couple of hundred entries. We’re working on an extended list that will almost double that number. More news on that shortly.

The level of fragmentation fascinates me, but if I’m completely honest, it probably disappoints me too. It’s great that it’s providing the platform for a long tail of innovation. It’s exciting that so many niche opportunities exist. But it disappoints me because there’s so much duplication. How many alarm / performance / inventory / etc management tools are there? Can you imagine how many developer hours have been duplicated on similar feature development between products? And because there are so many different patterns, the total number of integration variants across the industry puts a huge integration tax on us all.

Compare this to the strength of duopoly markets such as:

  • Microsoft / Apple (PC operating systems)
  • Google / Apple (smartphone operating systems)
  • Boeing / Airbus (commercial aircraft)
  • Visa / Mastercard (credit cards / payments)
  • Coca Cola / Pepsi (beverages, etc)

These duopolies have allowed for consolidation of expertise, effort, revenues/profits, etc. Most also provide a platform upon which smaller organisations / suppliers can innovate without having to re-invent everything (eg applications built upon operating systems, parts for aircraft, etc).

Buuuut……

Then I think about the impediments to achieving drastic consolidation through mergers and acquisitions (M&A) in the OSS industry.

There are opportunities to find complementary product alignment because no supplier services the entire OSS estate (where I’m using TM Forum’s TAM as a guide to the breadth of the OSS estate). However, it would be much harder to approach duopoly in OSS for a number of reasons:

  • Almost every OSS implementation is unique. Even if some of the products start out in common, they usually become quickly customised in terms of integrations, configurations, processes, etc
  • Interfaces to networks and other systems can vary so much. Modern EMS / devices / systems are becoming a little more consistent with IP, SNMP, Web APIs, etc. However, our networks still tend to carry a lot of legacy protocols that we must interface with
  • Consolidation of product lines becomes much harder, partly because of the integrations above, but partly because the functionality-sets and workflows differ so vastly between similar products (eg inventory management tools)
  • Similarly, architectures and build platforms (eg programming languages) are not easily compatible
  • Implementations are often regional for a variety of reasons – regulatory, local partnerships / relationships, language, corporate culture, etc
  • Customers can be very change-averse, even when they’re instigating the change

By contrast, we regularly hear of Coca Cola buying up new brands. It’s relatively easy for Coke to add a new product line/s without having much impact on existing lines.

We also hear about Google’s acquisitions, adding complementary products into its product line or simply acquiring new talent / expertise. There are also acquisitions for the purpose of removing competitors or buying into customer bases.

Harder in all cases in the OSS industry.

Tomorrow we’ll share a story about an M&A attempting to buy into a customer base.

Then on Thursday, a story awaits on a possibly disruptive strategy towards consolidation in OSS.

Do you have a nagging OSS problem you cannot solve?

On Friday, we published a post entitled, “Think for a moment…” which posed the question of whether we might be better-served looking back at our most important existing features and streamlining them, rather than inventing new features that have little impact.

Over the weekend, a promotional email landed in my inbox from Nightingale Conant. It is completely unrelated to OSS, yet the steps outlined below seem to be a fairly good guide for identifying what to reinvent within our existing OSS.

Go out and talk to as many people [customers or potential] as you can, and ask them the following questions:
1. Do you have a nagging problem you cannot solve?
2. What is that problem?
3. Why is solving that problem important to you?
4. How would solving that problem change the quality of your life?
5. How much would you be willing to pay to solve that problem?

Note: Before you ask these questions, make sure you let the people know that you’re not going to sell them anything. This way you’ll get quality answers.
After you’ve talked with numerous people, you’ll begin to see a pattern. Look for the common problems that you believe you could solve. You’ll also know how much people are willing to pay to have their problem solved and why.

I’d be curious to hear back from you. Do those first 4 questions identify a pattern that relates to features you’ve never heard of before or features that your OSS already performs (albeit perhaps not as optimally as it could)?

Think for a moment…

Many of the most important new companies, including Google, Facebook, Amazon, Netflix, Snapchat, Uber, Airbnb and more are winning not by giving good-enough solutions…, but rather by delivering a superior experience….”
Ben Thompson, stratechery.com

Think for a moment about the millions of developer hours that have gone into creating today’s OSS tools. Think also for a moment about how many of those tools are really clunky to use, install, configure, administer. How many OSS tools have truck-loads of functionality baked in that is just distracting, features that you’re never going to need or use? Conversely, how many are intuitive enough for a high-school student, let’s say, to use for the first time and become effective within a day of self-driven learning?

Let’s say an OSS came along that had all of the most important features (the ones customers really pay for, not the flashy, nice-to-have features) and offered a vastly superior user experience and user interface. Let’s say it took the market by storm.

With software and cloud delivery, it becomes harder to sustain differentiation. Innovative features and services are readily copied. But have a think about how hard it would be for the incumbent OSS to pick apart the complexity of their code, developed across those millions of developer hours, and throw swathes of it away – overhauling in an attempt to match a truly superior OSS experience.

Can you see why I’m bemused that we’re not replacing developers with more UX experts? We can surely create more differentiation through vastly improved experience than we can in creating new functionality (almost all of the most important functionality has already been developed and we’re now investing developer time on the periphery).

Nobody dabbles at dentistry

There are some jobs that are only done by accredited professionals.
And then there are most jobs, jobs that some people do for fun, now and then, perhaps in front of the bathroom mirror.
It’s difficult to find your footing when you’re a logo designer, a comedian or a project manager. Because these are gigs that many people think they can do, at least a little bit.”
Seth Godin here.

I’d love to plagiarise the entire post from Seth above, but instead suggest you have a look at the other pearls of wisdom he shares in the link above.

So where does OSS fit in Seth’s thought model? Well, you don’t need an accreditation like a dentist does. Most of the best I’ve met haven’t had any OSS certifications to speak of.

Does the layperson think they can do an OSS role? Most people have never heard of OSS, so I doubt they would believe they could do a role as readily as they could see themselves being logo designers. But the best I’ve met have often come from fields other than telco / IT / network-ops.

One of my earliest OSS projects was for a new carrier in a country that had just deregulated. They were the second fixed-line carrier in this country and tried to poach staff from the incumbent. Few people crossed over. To overcome this lack of experience, the carrier built an OSS team that consisted of a mathematician, an architect, an automotive engineer, a really canny project manager and an assortment of other non-telco professionals.

The executives of that company clearly felt they could develop a team of experts (or just had no choice but to try). The strategy didn’t work out very well for them. It didn’t work out very well for us either. We were constantly trying to bring their team up to speed on the fundamentals in order to use the tools we were developing / delivering (remembering that as one of my first OSS projects, I was still desperately trying to bring myself up to speed – still am for that matter).

As Seth also states, “If you’re doing one of these non-dentist jobs, the best approach is to be extraordinarily good at it. So much better than an amateur that there’s really no room for discussion.” That needs hunger (a hungriness to learn without an accredited syllabus).

It also needs humility though. Even the most extraordinarily good OSS proponents barely scratch the surface of all there is to learn about OSS. It’s the breadth of sub-expertise that probably explains why there is no single accreditation that covers all of OSS.

Am I being an OSShole?

“Am I being an asshole?” In other words, am I pointing out problems or am I finding solutions?
Ramit Sethi.

One of the things I’ve noticed working on large and small OSS teams is that people who excel at finding solutions thrive in both. The ones who thrive on only identifying problems seemingly only function in large organisations.

In a small team, everyone needs to contribute to solving the many problems that arise. There’s a clear line of sight to what’s being delivered. I’ve tended to find that the pure problem-finders feel uncomfortable being the only ones not clearly delivering.

But there’s absolutely a role for identifying problems or for asking the question that completely re-frames the problem. One of the best I’ve seen is a CEO of a publicly listed company. He had virtually no knowledge of OSS, but could listen to half an hour of technical, round-in-circles discussions, then interject with a summary or question that re-framed and simplified the solution. The team then had a clear direction to implement. The CEO didn’t find the solution directly, but he was an instrumental component in the team reaching a solution.

The question to pose though is whether the question asker is being an OSShole or an agent provocateur*.

* BTW, I use this term within the context of being a change agent, someone who contributes to finding a solution, as opposed to the literal sense, which is to incite others into performing illegal acts.

Treating your OSS/BSS suite like a share portfolio

Like most readers, I’m sure your OSS/BSS suite consists of many components. What if you were to look at each of those components as assets? In a share portfolio, you analyse your stocks to see which assets are truly worth keeping and which should be divested.

We don’t tend to take such a long-term analytical view of our OSS/BSS components. We may regularly talk about their performance anecdotally, but I’m talking about a strategic analysis approach.

If you were to look at each of your OSS/BSS components, where would you put them in the BCG Matrix?
BCG matrix (image sourced from NetMBA)

How many of your components are giving a return (whatever that may mean in your organisation) and/or have significant growth potential? How many are dogs that are a serious drain on your portfolio?
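As a rough illustration (the component names, numbers and cut-offs below are all invented), the quadrant mapping could be automated with something like:

```python
def bcg_quadrant(growth: float, share: float,
                 growth_cutoff: float = 0.10, share_cutoff: float = 0.5) -> str:
    """Map a component's growth potential and share of operational value
    onto the four BCG quadrants."""
    if growth >= growth_cutoff:
        return "star" if share >= share_cutoff else "question mark"
    return "cash cow" if share >= share_cutoff else "dog"

# An invented portfolio of OSS/BSS components: (growth, share of value)
portfolio = {
    "fault management": (0.02, 0.8),
    "inventory": (0.15, 0.6),
    "legacy provisioning": (0.01, 0.1),
}
for name, (growth, share) in portfolio.items():
    print(f"{name}: {bcg_quadrant(growth, share)}")
```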

From an investor’s perspective, we seek to double down on our day-to-day investment in cash-cows and stars. Equally, we seek to divest our dogs.

But that’s not always the case with our OSS/BSS portfolio. We sometimes spend so much of our daily activity tweaking around the edges, trying to fix our dogs or just adding more things into our OSS/BSS suite – all of which distracts us from increasing the total value of our portfolio.

To paraphrase this Motley Fool investment strategy article into an OSS/BSS context:

  • Holding too many shares in a portfolio can crowd out returns for good ideas – being precisely focused on what’s making a difference rather than being distracted by having too many positions. Warren Buffett recommends taking 5-10 positions in companies that you are confident in holding forever (or for a very long time), rather than constantly switching. I’ll note, though, that software is arguably more perishable than the institutions we invest in – software doesn’t tend to last for decades (except some OSS perhaps 😀)
  • Good ideas are scarce – ensuring you’re not getting distracted by the latest trends and buzzwords
  • Competitive knowledge advantage – knowing your market segment / portfolio extremely well and how to make the most of it, rather than having to up-skill on every new tool that you bring into the suite
  • Diversification isn’t lost – ensuring there is suitable vendor/product diversification to minimise risk, but also being open to long-term strategic changes in the product mix

Day-trading of OSS / BSS tools might be a fun hobby for those of us who solution them, but is it as beneficial as long-run investment?

I’d love to hear your thoughts and experiences.

2019 predictions for OSS

Well, this is the time of year when people make big predictions for the coming year. But let me start by saying the headline is something of a misnomer. I’m not clever enough to have any predictions for 2019 for a couple of reasons:

  1. There are far too many clever people working across the myriad fields of expertise that make up an OSS for me to possibly guess which might gain traction this year
  2. I’m yet to figure out whether there are any consistent patterns or cycles, like Moore’s Law, that uniquely define progress in OSS. On the contrary, you could claim that there are any number of metrics that might define progress for OSS (or for any individual OSS stack). But I’ll also be honest enough to say that I haven’t tried applying any of these futurology techniques to find useful patterns either.
    [Image: Futurism methodologies]

Instead, I’ll call out the many industry-wide challenges / opportunities that are still waiting to be solved in 2019. Many of these same challenges / opportunities have been around since I first started working on OSS projects in circa 2000.

The Passionate About OSS Call for Innovation paper outlines a list of starting points where exponential improvements await.

I’m not sure if any will be solved in 2019 but I will make the prediction that the thousands of very clever people working in the OSS industry will make some exciting steps forward this year. Hopefully they’re some of the quantum leaps that await and not only the ever-present, but still highly challenging, incremental improvements.

My favourite OSS saying

My favourite OSS saying – “Just because you can, doesn’t mean you should.”

OSS are amazing things. They’re designed to gather, process and compile all sorts of information from all sorts of sources. I like to claim that OSS/BSS are the puppet masters of any significant network operator because they reach into every corner of the business, assisting with the processes carried out by almost every business unit.

They can be (and have been) adapted to fulfil all sorts of weird and wonderful requirements. That’s the great thing about software. It can be *easily* modified to do almost anything you want. But just because you can, doesn’t mean you should.

In many cases, we have looked at a problem from a technical perspective and determined that our OSS can (and did) solve it. But if the same problem were also looked at from business and/or operational perspectives, would it make sense for our OSS to solve it?

Some time back, I was involved in a micro-project that added one new field to an existing report. Sounds simple. Unfortunately, by the time all the rigorous deployment and transition processes had been followed to get the update into PROD, the support bill from our team alone ran into tens of thousands of dollars. Months later, I found out that the business unit that had requested the additional field had a bug in their code and wasn’t even picking up the extra field. Nobody had even noticed until a secondary bug prompted another developer to ask how the original code was functioning.

It wasn’t deemed important enough to fix. Many tens of thousands of dollars were wasted because we didn’t think to ask up the design tree why the functionality was (wasn’t) important to the business.

Other examples are when we use the OSS to solve a problem through expensive customisation / integration when manual processes could do the job more cost-efficiently.

Another example was a client that had developed hundreds of customisations to automate away annoying / cumbersome, but incredibly rare, tasks. The efficiency of removing those tasks didn’t come close to compensating for the expense of building the automations / tools. Just one sample of those tools was a $1000 efficiency improvement for a ~$200,000 project cost… on a task that had only been run twice in the preceding 5 years.

 

How to kill the OSS RFP (part 4)

This is the fourth, and final, part (I think) in the series on killing the OSS RFI/RFP process, a process that suppliers and customers alike find inefficient. The concept is based on an initiative currently being investigated by TM Forum.

The previous three posts focused on the importance of trusted partnerships and the methods to develop them via OSS procurement events.

Today’s post takes a slightly different tack. It proposes a structural obsolescence that may lead to the death of the RFP. We might not have to kill it. It might die a natural death.

Actually, let me take that back. I’m sure RFPs won’t die out completely as a procurement technique. But I can see a time when RFPs are far less common and significantly different in nature to today’s procurement events.

How??
Technology!
That’s the answer all technologists cite for any kind of problem, of course. But there’s a growing trend that provides a portent of the future here.

It comes via the XaaS (As a Service) model of software delivery. We’re increasingly building and consuming cloud-native services. OSS of the future, the small-grid model, are likely to consume software as services from multiple suppliers.

And rather than having to go through a procurement event like an RFP to form each supplier contract, the small grid model will simply be a case of consuming one/many services via API contracts. The API contract (eg OpenAPI specification / swagger) will be available for the world to see. You either consume it or you don’t. No lengthy contract negotiation phase to be had.
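For illustration, consuming such a service could be as simple as the sketch below. The supplier URL, endpoint and response fields are entirely hypothetical:

```python
import requests

# Hypothetical supplier endpoint, published via an OpenAPI spec
BASE_URL = "https://alarms.example-oss-supplier.com/v1"

def get_active_alarms(severity: str = "critical") -> list:
    """Consume the alarm service exactly as its API contract describes."""
    resp = requests.get(f"{BASE_URL}/alarms",
                        params={"severity": severity, "state": "active"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()["alarms"]

# 'Contract negotiation' collapses to: read the spec, call the endpoint
if __name__ == "__main__":
    for alarm in get_active_alarms():
        print(alarm["id"], alarm["summary"])
```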

Now, as mentioned above, the RFP won’t die, but it will evolve. We’ll probably see more RFPs formed between customers and the services companies that create customised OSS solutions (utilising one/many OSS supplier services). And these RFPs may not be with the massive multinational services companies of today, but increasingly with smaller niche service companies. These micro-RFPs represent the future of OSS work, the gig economy, and will surely be facilitated by smart-RFP / smart-contract models (like the OSS Justice League model).