3 categories of OSS investment justification

Insurer IAG has modelled the financial cost that a data breach or ransomware attack would have on its business, in part to understand how much proposed infosec investments might offset its losses.
Head of cybersecurity and governance Ian Cameron told IBM Think 2018 in Sydney that the “value-at-risk modelling” project called upon the company’s actuarial expertise to put numbers on different types and levels of security threats.
“Because we’re an insurance company, we can use actuarial methods to price or model what the costs of a loss event would be,” Cameron said.
“If we have a major data breach or a major ransomware attack, we’ve done some really great work in the past 12 months to model the net cost of losses to our organisation in terms of the loss of productivity, the cost of advertising to address the concerns of our customers, the legal costs, and the costs of regulatory oversight.
“We’ve been able to work out the distribution of loss from a small event to a very big event.”
Ry Crozier
on IT News.

There are really only three main categories of benefit that an OSS can be built around:

  • Cost reduction
  • Revenue generation / increase
  • Brand value (ie insurance of the brand, via protection of customer perception of the brand)

The last on the list is rarely used (in my experience) to justify OSS/BSS investment. The IAG experience of costing out infosec risk to operations and brand is an interesting one. It’s also one that has some strong parallels for the OSS/BSS of network operators.

Many people in the telecoms industry treat OSS/BSS as an afterthought and/or an expensive cost centre. Those people fail to recognise that the OSS/BSS are the operationalisation engines that allow customers to use the network assets.

Just as IAG was able to do through actuarial analysis, a telco’s OSS/BSS team could “work out the distribution of loss from a small event to a very big event” (for the telco’s brand value). Consider the loss of repute during sustained network outages. Consider the impact of negative word-of-mouth from billing mistakes. Consider how revenue leakage analysis and predictive network health management might offset losses.

Can the IAG approach work for justifying your investment in OSS/BSS?

Do you use any other major categories for justifying OSS/BSS spend?

OSS stepping stone or wet cement

“Very often, what is meant to be a stepping stone turns out to be a slab of wet cement that will harden around your foot if you do not take the next step soon enough.”
Richelle E. Goodrich.

Not sure about your parts of the world, but I’ve noticed the terms “tactical” (ie stepping stone solution) and “strategic” (ie long-term solution) entering the architectural vernacular here in Australia.

OSS seem to be full of tactical solutions. We’re always on a journey to somewhere else. I love that mindset – getting moving now, but still keeping the future in mind. There’s just one slight problem… how many times have we seen a tactical solution that was built years before? Perhaps it’s not actually a problem at all in some cases – the short-term fix is obviously “good enough” to have survived.

As a colleague insightfully pointed out last week – “if you create a tactical solution without also preparing a strategic solution, you don’t have a tactical solution, you have a solution.”

When architecting your OSS solutions, do you find yourself more easily focussing on the tactical, the strategic, or is having an eye on both the essential part of your solution?

OSS compromise, no, prioritise

On Friday, we talked about how making compromises on OSS can actually be a method for reducing risk. We used the OSS vendor selection process to discuss the point, where many stakeholders contribute to the list of requirements that help to select the best-fit product for the organisation.

To continue with this same theme, I’d like to introduce you to a way of prioritising requirements that borrows from the risk / likelihood matrix commonly used in project management.

The diagram below shows the matrix as it applies to OSS.
OSS automation grid

The y-axis shows the frequency of use (of a particular feature / requirement). The x-axis shows the time / cost savings that will result from having this functionality or automation.

If you add two extra columns to your requirements list, the frequency of use and the resultant savings, you’ll quickly identify which requirements are highest priority (green) based on business benefit. Naturally there are other factors to consider, such as cost-benefit, but it should quickly narrow down to your 80/20 that will allow your OSS to make the most difference.
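As a rough sketch of how simple this prioritisation can be in practice (the requirement names, numbers and quadrant thresholds below are illustrative assumptions only, not a prescribed format):

```python
# A minimal sketch of the frequency-vs-savings prioritisation matrix.
# Requirement names, scores and quadrant thresholds are illustrative only.

requirements = [
    # (requirement, frequency of use per day, time / cost saving per use in minutes)
    ("Auto-create ticket from alarm", 500, 5),
    ("Bulk service activation",       120, 30),
    ("Ad-hoc inventory report",         2, 15),
]

def quadrant(frequency, saving, freq_threshold=50, saving_threshold=10):
    """Map a requirement onto the matrix: green = high frequency AND high saving."""
    if frequency >= freq_threshold and saving >= saving_threshold:
        return "green (do first)"
    if frequency >= freq_threshold or saving >= saving_threshold:
        return "amber (do next)"
    return "red (defer)"

# Rank by total benefit (frequency x saving) to surface the 80/20 candidates
for name, freq, save in sorted(requirements, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:32} benefit={freq * save:>6}  ->  {quadrant(freq, save)}")
```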

The same model can be used to sub-prioritise too. For example, you might have a requirement to activate orders – but some orders will occur very frequently, whilst other order types occur rarely. In this case, when configuring the order activation functionality, it might make sense to prioritise on the green order types first.

OSS compromise, not compromised

“When you’ve got multiple powerful parties involved in a decision, compromise is unavoidable. The point is not that compromise is a necessary evil. Rather, compromise can be valuable in itself, because it demonstrates that you’ve made use of diverse opinions, which is a way of limiting risk.”
Chip and Dan Heath
in their book, Decisive.

This risk perspective on compromise (ie diversity of thought) is a fascinating one in the context of OSS.

Let’s just look at Vendor Selection as one example scenario. In the lead-up to buying a new OSS, there are always lots of different requirements that are thrown into the hat. These requirements are likely to come from more than one business unit, and from a diverse set of actors / contributors. This process, the OSS Thrashing process, tends to lead to some very robust discussions. Even in the highly unlikely event of every requirement being met by a single OSS solution, there are still compromises to be made in terms of prioritisation on which features are introduced first. Or which functionality is dropped / delayed if funding doesn’t permit.

The more likely situation is that each of the product options will have different strengths and weaknesses, each aligning better or worse to the needs of some of the requirement contributors. When the final decision is made, some requirements will be included, others precluded. Compromise isn’t an option, it’s a reality. The question posed by the Heath brothers’ perspective is whether all requirement contributors enter the OSS vendor selection process prepared for compromise (thus diversity of thought), or whether one actor / business-unit seeks to steamroll the process (thus introducing greater risk)?

The OSS transformation dilemma

There’s a particular carrier that I know quite well that appears to despise a particular OSS vendor… but keeps coming back to them… and keeps getting let down by them… but keeps coming back to them. And I’m not just talking about support of their existing OSS, but whole new tools.

It never made sense to me… until reading Seth Godin’s blog today. In it, he states, “…this market segment knows that things that are too good to be true can’t possibly work, and that’s fine with them, because they don’t actually want to change–they simply want to be able to tell themselves that they tried. That the organization they paid their money to failed, of course it wasn’t their failure. Once you see that this short-cut market segment exists, you can choose to serve them or to ignore them. And you can be among them or refuse to buy in.”

It starts to make sense. The same carrier has a tendency to spend big money on the big-4 consultants whenever an important decision needs to be made. If the big, ambitious project then fails, the carrier’s project sponsors can say that the big-4 organization they paid their money to failed.

Does that ring true of any telco you’ve worked with? That they don’t actually want to change–they simply want to be able to tell themselves that they tried (or be seen to have tried) with their OSS transformation?

Are we actually stuck in one big dilemma? Are our OSS already failing so badly that we desperately need to transform them, yet are those transformations so hard that they’re destined to fail? If so, then Seth’s insightful observation gives the appearance of progress AND protection from the pain of failure.

Not sure about you, but I’ll take Seth’s “refuse to buy in” option and try to incite change.

From PoC to OSS sandpit

You all know I’m a fan of training operators in OSS sandpits (and as apprenticeships during the build phase) rather than a week or two of classroom training at the end of a project.

To reduce the re-work in building a sandpit environment, which will probably be a dev/test environment rather than a production environment, I like to go all the way back to the vendor selection process.

Running a Proof of Concept (PoC) is a key element of vendor selection in my opinion. The PoC should only include a small short-list of pre-selected solutions so as not to waste the time of the operator or the vendors / integrators. But once short-listed, the PoC should be a cut-down reflection of the customer’s context. Where feasible, it should connect to some real devices / apps (maybe lab devices / apps, possibly via a common / simple interface like SNMP). This takes some time on both sides to set up, but it shows how easily (or not) the solution can integrate with the customer’s active network, BSS, etc. It should be specifically set up to show the device types, alarm types, naming conventions, workflows, etc that fit the customer’s specific context. That allows the customer to understand the new OSS in terms they’re familiar with.

And since the effort has been made to set up the PoC, doesn’t it make sense to make further use of it rather than just throwing it away? If the winning bidder then leaves the PoC environment in the hands of the customer, it becomes the sandpit to play in. The big benefit for the winning bidder is that the customer will hopefully have fewer “what if?” questions to distract the project team during the implementation phase. Questions can be answered with demonstrations in the sandpit environment, even if only partially, rather than with empty words.

Post Implementation Review (PIR)

Have you noticed that OSS projects need to go through extensive review before their business cases are funded? That makes sense. They tend to be a big investment after all. Many OSS projects fail, so we want to make sure this one doesn’t, and we perform thorough planning / due-diligence.

But I do find it interesting that we spend less time and effort on Post Implementation Reviews (PIRs). We might do the review of the project, but do we compare with the Cost Benefit Analysis (CBA) that undoubtedly goes into each business case?

OSS Project Analysis Scales

Even more interesting is that we spend even less time and effort performing ongoing analysis of an implemented OSS against the CBA.

Why interesting? Well, if we took the time to figure out what has really worked, we’d have better (and more persuasive) data to improve our future business cases. Not only that, we’d have more chance of reducing the effort on the business case side of the scale compared with current approaches (as per the diagrams above).

What do you think?

The OSS dart-board analogy

“The dartboard, by contrast, is not remotely logical, but is somehow brilliant. The 20 sector sits between the dismal scores of five and one. Most players aim for the triple-20, because that’s what professionals do. However, for all but the best darts players, this is a mistake. If you are not very good at darts, your best opening approach is not to aim at triple-20 at all. Instead, aim at the south-west quadrant of the board, towards 19 and 16. You won’t get 180 that way, but nor will you score three. It’s a common mistake in darts to assume you should simply aim for the highest possible score. You should also consider the consequences if you miss.”
Rory Sutherland
on Wired.

When aggressive corporate goals and metrics are combined with brilliant solution architects, we tend to aim for triple-20 with our OSS solutions, don’t we? The problem is, when it comes to delivery, we don’t tend to have the laser-sharp precision of a professional darts player, do we? No matter how experienced we are, there tend to be hidden surprises – some technical, some personal (or should I say inter-personal?), some contractual, etc – that deflect our aim.

The OSS dart-board analogy asks the question about whether we should set the lofty goals of a triple-20 [yellow circle below], with high risk of dismal results if we miss (think too about the OSS stretch-goal rule); or whether we’re better to target the 19/16 corner of the board [blue circle below] that has scaled back objectives, but a corresponding reduction in risk.

OSS Dart-board Analogy

Roland Leners posed the following brilliant question, “What if we built OSS and IT systems around people’s willingness to change instead of against corporate goals and metrics? Would the corporation be worse off at the end?” in response to a recent post called, “Did we forget the OSS operating model?”

There are too many facets to Roland’s question to count here, but I suspect that in many cases the corporate goals / metrics are akin to the triple-20 focus, whilst the team’s willingness to change aligns to the 19/16 corner. And the latter is bound to reduce delivery risk.

I’d love to hear your thoughts!!

The OSS farm equipment analogy

OSS End of Financial Year
It’s an interesting season as we come up to the EOFY (end of financial year – on 30 June). Budget cycles are coming to an end. At organisations that don’t carry un-spent budgets into the next financial year, the looming EOFY triggers a use-it-or-lose-it mindset.

In some cases, organisations are almost forced to allocate funds on OSS investments even if they haven’t always had the time to identify requirements and / or model detailed return projections. That’s normally anathema to me because an OSS’ reputation is determined by the demonstrable value it creates for years to come. However, I can completely understand a client’s short-term objectives. The challenge we face is to minimise any risk of short-term spend conflicting with long-term objectives.

I take the perspective of allocating funds to build the most generally useful asset (BTW, I like Robert Kiyosaki’s simple definition of an asset: “in reality, an asset is only something that puts money in your pocket”). In the case of OSS, putting money in one’s pocket needs to consider earnings [or cost reductions] that exceed outgoings such as maintenance, licensing, operations, etc, as well as cost of capital. Not a trivial task!

So this is where the farm equipment analogy comes in.

If we haven’t had the chance to conduct demand estimation (eg does the telco’s market want the equivalent of wheat, rice, stone fruit, etc) or product mix modelling (ie which mix of those products will bear optimal returns) then it becomes hard to predict what type of machinery is best fit for our future crops. If we haven’t confirmed that we’ll focus efforts on wheat, then it could be a gamble to invest big in a combine harvester (yet). We probably also don’t want to invest capital and ongoing maintenance on a fruit tree shaker if our trees won’t begin bearing fruit for another few years.

Therefore, a safer investment recommendation would be on a general-purpose machine that is most likely to be useful for any type of crop (eg a tractor).

In OSS terminology, if you’re not sure if your product mix will provision 100 customers a day or 100,000 then it could be a little risky to invest in an off-the-shelf orchestration / provisioning engine. Still potentially risky, but less so, would be to invest in a resource and service inventory solution (if you have a lot of network assets), alarm management tools (if you process a lot of alarms), service order entry, workforce management, etc.

Having said that, a lot of operators already have a strong gut-feel for where they intend to get returns on their investment. They may not have done the numbers extensively, but they know their market roadmap. If wheat is your specialty, go ahead and get the combine harvester.

I’d love to get your take on this analogy. How do you invest capital in your OSS without being sure of the projections (given that we’re never sure on projections becoming reality)?

How to run an OSS PoC

This is the third in a series describing the process of finding the right OSS solution for your specific needs and getting estimated pricing to help you build a business case.

The first post described the overall OSS selection process we use. The second described the way we poll the market and prepare a short-list of OSS products / vendors based on current capabilities.

Once you’ve prepared the short-list it’s time to get into specifics. We generally do this via a PoC (Proof of Concept) phase with the short-listed suppliers. We have a few very specific principles when designing the PoC:

  • We want it to reflect the operator’s context so that they can grasp what’s being presented (which can be a challenge when a vendor runs their own generic demos). This “context” is usually in the form of using the operator’s device types, naming conventions, service types, etc. It also means setting up a network scenario that is representative of the operator’s, which could be a hypothetical model, a small segment of a real network, lab model or similar
  • PoC collateral must clearly describe the PoC and related context. It should clearly identify the important scenarios and selection criteria. Ideally it should logically complement the collateral provided in the previous step (ie the requirement gathering)
  • We want it to focus on the most important conditions. If we take the 80/20 rule as a guide, we will quickly identify the most common service types, devices, configurations, functions, reports, etc that we want to model
  • Identify efficacy across those most important conditions. Don’t just look for the functionality that implements those conditions, but also the speed at which they can be performed at the scale required by the operator. This could include bulk load or processing capabilities and may require simulators (or real integrations – see below) to generate volume
  • We want it to be as simple as is feasible so that it minimises the effort required of both suppliers and operators
  • Consider a light-weight integration if possible. One of the biggest challenges with an OSS is getting data in and out. If you can get a rapid integration with a real network (eg a microservice, SNMP traps, syslog events or similar – see the sketch after this list), it will give an indication of the integration challenges ahead. However, note the previous point, as it might be quite time-consuming for both operator and supplier to set up a real-time integration
  • Take note of the level of resourcing required by each supplier to run the PoC (eg how many supplier staff, server scaling, etc.). This will give an indication of the level of resourcing the operator will need to allocate for the actual implementation, including organisational change management factors
  • Attempt to offer PoC platform consistency so that all suppliers are on a level playing field, which might be through designing the PoC on common devices or topologies with common interfaces. You may even look to go the opposite way if you think the rarity of your conditions could be a deal-breaker
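On the light-weight integration point above, here’s a minimal sketch of the kind of rapid, real-network feed that can often be stood up quickly: catching syslog events from a lab device. It uses only the Python standard library; the port and the handling are illustrative assumptions, and a real PoC would route the events into the candidate OSS rather than printing them:

```python
# Minimal sketch: listen for syslog events from a lab device over UDP.
# Port 5514 is used here (instead of the standard 514) to avoid needing root.
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is a (data, socket) tuple
        data = self.request[0].strip().decode("utf-8", errors="replace")
        # In a PoC, you might map each event into the operator's naming
        # conventions and push it to the candidate OSS's northbound interface
        print(f"event from {self.client_address[0]}: {data}")

if __name__ == "__main__":
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
        server.serve_forever()
```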

Note that we tend to scale the size / complexity / reality of the PoC to the scale of the project budget, out of consideration for vendor and operator alike. If it’s a small project / budget, then we do a light PoC. If it’s a massive transformation, then the PoC definitely has to go deeper (ie more integrations, more scenarios, more data migration and integrity challenges, etc)… although ultimately our customers decide how deep they’re comfortable in going.

Best of luck and feel free to contact us if we can assist with the running of your OSS PoC.

Powerful ranking systems with hidden variables

There are ratings and rankings that ostensibly exist to give us information (and we are supposed to use that information to change our behavior).
But if we don’t know what variables matter, how is it supposed to be useful?
Just because it can be easily measured with two digits doesn’t mean that it’s accurate, important or useful.
[Marketers learned a long time ago that people love rankings and daily specials. The best way to boost sales is to put something in a little box on the menu, and, when in doubt, rank things. And sometimes people even make up the rankings.]

Seth Godin
here.

Are there any rankings that are made up in OSS? Our OSS collect an amazing amount of data so there’s rarely a need to make up the data we present.

Are they based on hidden variables? Generally, we use raw counters and / or well known metrics so we’re usually quite transparent with what our OSS present.

What about when we’re trying to select the right vendor to fulfil the OSS needs of our organisation? As Seth states, “Just because it can be easily measured with two digits* doesn’t mean that it’s accurate, important or useful.” [* In this case, I’m thinking of a 2 x 2 matrix]

The interesting thing about OSS ranking systems is that there is so much nuance in the variables that matter. There are potentially hundreds of evaluation criteria, and even vast contrasts in how a given criterion can be interpreted.

For example, a criterion might be “time to activate a service” (TTAS). A vendor might have a really efficient workflow for activating single services manually, but no bulk load or automation interface. For one operator (which does single activations manually), the TTAS metric for that product would be great, but for another operator (which does thousands of activations a day and tries to automate), the TTAS metric for the same product would be awful.
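To make that context-dependence concrete, here’s a hypothetical weighted-scoring sketch. All criteria, weights and scores are invented for illustration; the point is simply that the identical product lands at opposite ends of the ranking depending on whose weights you apply:

```python
# Hypothetical sketch: one product scored against two operator contexts.
# Criterion scores (0-10) and operator weights are invented for illustration.
product_scores = {
    "manual activation workflow": 9,
    "bulk / automated activation": 2,
}

operator_weights = {
    "Operator A (a few manual orders per day)": {
        "manual activation workflow": 0.8,
        "bulk / automated activation": 0.2,
    },
    "Operator B (thousands of automated orders per day)": {
        "manual activation workflow": 0.1,
        "bulk / automated activation": 0.9,
    },
}

for operator, weights in operator_weights.items():
    total = sum(product_scores[c] * w for c, w in weights.items())
    print(f"{operator}: weighted TTAS score = {total:.1f} / 10")
# Same product: 7.6 / 10 for Operator A, but only 2.7 / 10 for Operator B
```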

As much as we love ranking systems… there are hundreds of products on the market (in some cases, hundreds of products in a single operator’s OSS stack), each fitting unique operator needs differently… so a 2 x 2 matrix is never going to cut it as a vendor selection tool… not even as a short-listing tool.

Better to build yourself a vendor selection framework. You can find a few OSS product / vendor selection hints here based on the numerous vendor / product selections I’ve helped customers with in the past.

Designing an OSS from NFRs backwards

When we’re preparing a design (or capturing requirements) for a new or updated OSS, I suspect most of us design with functional requirements (FRs) in mind. That is, our first line of thinking is on the shiny new features or system behaviours we have to implement.

But what if we were to flip this completely? What if we were to design against Non-Functional Requirements (NFRs) instead? [In case you’re not familiar with NFRs, they’re the requirements that measure how well a solution performs its functions (eg speed, scale, availability) rather than the features / behaviours themselves]

What if we already have all the really important functionality in our OSS (the 80/20 rule suggests you will), but those functions are just really inefficient to use? What if we can meet the FR of searching a database for a piece of inventory… but our loaded system takes 5 mins to return the results of the query? It doesn’t sound like much, but if it’s an important task that you’re doing dozens of times a day, then you’re wasting hours each day. Worse still, if it’s a system task that needs to run hundreds of times a day…

I personally find NFRs to be really hard to design for because we usually won’t know response times until we’ve actually built the functionality and tried different load / fail-over / pattern (eg different query types) models on the available infrastructure. Yes, we can benchmark, but that tends to be a bit speculative.

Unfortunately, if we’ve built a solution that works, but end up with queries that take minutes… when our SLAs might be 5-15 mins, then we’ve possibly failed in our design role.

We can claim that it’s not our fault. We only have finite infrastructure (eg compute, storage, network), each with inherent performance constraints. It is what it is, right?… maybe.

What if we took the perspective of determining our most important features (the 80/20 rule again), setting NFR benchmarks for each and then designing the solution back from there? That is, putting effort into making our most important features super-efficient rather than adding new nice-to-have features (features that will increase load, thus making NFRs harder to hit mind you!)?
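One way to hold that line is to encode each important feature’s NFR target as an executable benchmark from day one, so any new feature that pushes a key task past its budget fails visibly. A minimal sketch, where the query function, target value and timing approach are all assumptions for illustration:

```python
# Minimal sketch: executable NFR targets for the most important features.
# The target values and the stand-in query are assumptions for illustration.
import time

NFR_TARGETS_SECONDS = {"inventory_search": 2.0}

def inventory_search(term):
    """Stand-in for the real query, run under representative data load."""
    time.sleep(0.5)  # simulate query latency
    return [term]

def check_nfr(name, func, *args):
    start = time.perf_counter()
    func(*args)
    elapsed = time.perf_counter() - start
    target = NFR_TARGETS_SECONDS[name]
    print(f"{name}: {elapsed:.2f}s (target {target:.1f}s) "
          f"{'PASS' if elapsed <= target else 'FAIL'}")
    return elapsed <= target

check_nfr("inventory_search", inventory_search, "MUX-0001")
```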

In this new world of open-source, we have more “product control” than we’ve probably had before. This gives us more of a chance to start with the non-functionals and work back towards a product. An example might be redesigning our inventory to work with Graph database technology rather than the existing relational databases.

How feasible is this NFR concept? Do you know anyone in OSS who does it this way? Do you have any clever tricks for ensuring your developed features stay within NFR targets?

Blown away by one innovation. Now to extend on it

Our most recent two posts, from yesterday and Friday, have talked about one stunningly simple idea that helps to overcome one of OSS’ biggest challenges – data quality. Those posts have stimulated quite a bit of dialogue and it seems there is some consensus about the cleverness of the idea.

I don’t know if the idea will change the OSS landscape (hopefully), or just continue to be a strong selling point for CROSS Network Intelligence, but it has prompted me to think a little longer about innovating around OSS’ biggest challenges.

Our standard approach of just adding more coats of process around our problems, or building up layers of incremental improvements, isn’t going to solve them any time soon (as indicated in our OSS Call for Innovation). So how do we solve them?

Firstly, we have to be able to articulate the problems! If we know what they are, perhaps we can then take inspiration from the CROSS innovation to spur us into new ways of thinking?

Our biggest problem is complexity. That has infiltrated almost every aspect of our OSS. There are so many posts about identifying and resolving complexity here on PAOSS that we might skip over that one in this post.

I decided to go back to a very old post that used the Toyota 5-whys approach to identify the real cause of the problems we face in OSS [I probably should update that analysis because I have a whole bunch of additional ideas now, as I’m sure you do too… suggested improvements welcomed BTW].

What do you notice about the root-causes in that 5-whys analysis? Most of the biggest causes aren’t related to system design at all (although there are plenty of problems to fix in that space too!). CROSS has tackled the data quality root-cause, but almost all of the others are human-centric factors – change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc. Yet, we always seem to see OSS as a technical problem.

How do you fix those people challenges? Ken Segall puts it this way, “When process is king, ideas will never be. It takes only common sense to recognize that the more layers you add to a process, the more watered down the final work will become.” Easier said than done, but a worthy objective!

Those who rule perfect data…

A Passionate About OSS article last month spoke of how the investment strategy of a $106 billion VC fund has changed my thinking on our OSS’ most valuable asset. Masayoshi Son is quoted in that article as follows:

“Those who rule data will rule the entire world. That’s what people of the future will say.”

But one question keeps coming back to me… if you’re ruling poor quality data, will you rule nothing whatsoever?

Along the same lines, the old adage, “practice makes perfect,” is not very helpful if you’re not practicing in a constructive way. A better (albeit somewhat impossible) variant on the adage would be “PERFECT practice makes perfect.”

Let me share an example. There is a product that is completely ground-breaking in its ability to automate and optimise designs of large-scale network roll-outs – designs that include outside plant and access network technologies. In bake-offs with some of the best available network designers, this product and its algorithm consistently beat the humans by far more than 25% (when measured by capital costs, implementation time and various other metrics).

Its one challenge in taking over the world and automating every future network design is having a base set of data that is so perfect that no re-design work is required. For example, if the base data says a duct route is available and has capacity for inserting a cable, then the product assumes it can use the duct in its optimal design. But when the field techs arrive at site, they find the duct is too badly damaged to use or already filled to capacity with other cables that can’t be overhauled. A new optimal design has to be calculated to consider the lack of availability of that duct.

The tool still gives great results, even after all the manual intervention, but perfect source data would give breathtaking results.

So I’d look to make one small tweak to Masayoshi Son’s quote. “Those who rule PERFECT* data will rule the entire world. That’s what people of the future will say.”

* whereby perfect means as high in quality as realistically possible.

So, perhaps those expensive data audits and cumbersome data quality processes will have a far greater ROI (Return on Investment) in future than any of us could ever estimate.

How smart contracts might reduce risk and enhance trust on OSS projects

Last Friday, we spoke about how we all want to develop trusted OSS supplier / customer relationships but rarely find them, and about a contrarian factor behind why trust is so hard to achieve in OSS – complexity.

Trust is the glue that allows OSS projects to happen. Not only that, it becomes a catch-22 with complexity. If OSS partners don’t trust each other, requirements, contracts, etc get more complex as a self-protection barrier. But every increase in complexity makes delivery more challenging and hence risks a further reduction in trust.

On a smaller scale, you’ve seen it on all projects – if the project starts to falter, increased monitoring attention is placed on the project, which puts increased administrative load on the project team and reduces the time they have to deliver the intended outcomes. Sometimes the increased admin / reporting gains the attention of sponsors and access to additional resources, but usually it just detracts from the available delivery capability.

Vish Nandlall also associates trust and complexity in organisational models in his LinkedIn post below:

This is one of the reasons I’m excited about what smart contracts can do for the organisations and OSS projects of the future. Just as “Likes” and “Supplier Rankings” have facilitated online trust models, smart-contract success rankings have the ability to do the same for OSS suppliers, large and small. For example, rather than needing to engage “Big Vendor A” to build your entire, monolithic OSS stack, if an operator develops simpler, more modular work breakdowns (eg microservices), then they can engage “Freelancer B” and “Small Vendor C” to make valuable contributions on smaller risk increments. Being lower in complexity and risk means B and C have a greater chance of engendering trust, and their historical contract success ranking encourages them to treat trust as a key metric.
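As a thought experiment, the success ranking behind such a model could be as simple as a shared ledger of delivered vs failed contract increments per supplier. A hypothetical sketch (supplier names and records are invented, and a real smart-contract implementation would live on a distributed ledger rather than in a Python list):

```python
# Hypothetical sketch: a supplier trust ranking derived from small, modular
# contract increments. All names and outcomes below are invented.
from collections import defaultdict

contract_ledger = [
    # (supplier, contracted increment, delivered successfully?)
    ("Freelancer B",   "microservice: alarm enrichment", True),
    ("Freelancer B",   "microservice: syslog collector", True),
    ("Small Vendor C", "module: outage dashboard",       True),
    ("Small Vendor C", "module: billing feed",           False),
]

outcomes = defaultdict(lambda: {"delivered": 0, "total": 0})
for supplier, _increment, success in contract_ledger:
    outcomes[supplier]["total"] += 1
    outcomes[supplier]["delivered"] += int(success)

for supplier, o in outcomes.items():
    ranking = o["delivered"] / o["total"]
    print(f"{supplier}: {o['delivered']}/{o['total']} increments delivered "
          f"(success ranking {ranking:.0%})")
```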

I found a way to save ten million dollars

Yesterday’s post about egos in OSS contained the following Dilbert cartoon:
Dilbert - I found a way to save a million dollars.
It reminded me of a story from many years ago.

I was working in a developing country, advising the board of a tier-one telco on the implementation of their first-ever OSS (they’d only ever operated their networks at NMS level previously). During the analysis phase I came across some data that showed an interesting opportunity for an innovation relating to their voice Points of Interconnect (PoI).

From a back-of-the-napkin analysis, it seemed that a ~$50-100k investment could’ve improved the company’s profit by at least $10M. I ran the concept, and the numbers, past their head of switching. His response was, “I think you’re right… but I’m not going to recommend it.”

You could say that I was a little bewildered.

He then followed with, “You have to see this from my perspective. If I recommend this project and it succeeds, I receive no benefit. I’m not due for promotion for another two years at the earliest. I will barely receive any recognition at all, certainly no financial reward. The company receives all the benefits. But if the project fails, I will be put aside, passed over for any future promotions. It would be a career killer.”

He was right. I hadn’t seen it from his perspective… still not sure that I do, but as a consultant, I was only ever passing through their corporate culture rather than having a 4-5 decade career embedded within it.

It wasn’t within my OSS scope, but I quietly mentioned it to the board. They delegated the decision back to the head of switching. The project was not recommended to proceed, not even for further analysis.

It’s interesting, the human factors that come into play when project investment is under evaluation, isn’t it? What human factors have you seen influence purchasing decisions?

Bad OSS ego decisions

“A long, long time ago Dennis Haslinger told me that most of the most serious mistakes I would make in life would be bad ego decisions. I have found that to be true.”
Gary Halbert.

OSS is an industry filled with highly intelligent people. In every country I’ve visited to work on OSS assignments, perhaps excluding Vietnam, my colleagues have been predominantly male. Dare I say it, do those two preceding facts imply that a significant ego level exists on many (most?) OSS projects (or has that just been a coincidence of my experiences)?

Given that OSS projects tend to cross business units, inter-departmental power plays like the one described in the Dilbert comic below can become just another potential pitfall.
Dilbert - I found a way to save a million dollars

To be honest, I can’t recall any examples where ego (mine or others’) has led to serious mistakes as such, but I’ve seen many cases where it’s led to serious stagnation and delays in project delivery that have been extremely costly.

One example is cited in this post, where the intellectual brilliance of one person caused a document to blow out from 30 pages to 150+, causing a 3+ month delay and more than $100k of additional cost.

Stakeholder management and change management are highly underestimated factors in the success of OSS projects.

PS. The “intellectual brilliance” link above also talks about the possible benefits of smart contracts in OSS delivery. I wonder whether smart contracts will reduce some of the ego-related stagnation on OSS projects, or simply shift it from the delivery phase to the up-front smart contract agreement phase, thus introducing more “what if scenario” stagnation?

Raising the OSS horizon

With the holiday period looming for many of us, we will have the head-space to reflect – on the year(s) gone and to ponder the one(s) upcoming. I’d like to pose the rhetorical question, “What do you expect to reflect on?”

It’s probably safe to say that a majority of OSS experts are engaged in delivery roles. Delivery roles tend to require great problem-solving skills. That’s one of the exciting aspects of being an OSS expert after all.

There’s one slight problem though. Delivery roles tend to have a focus on the immediacy of delivery, a short-term problem-solving horizon. This generates incremental improvements like new dashboards within an existing dashboard framework, refined processes, next-release software upgrades, new releases that add to the accumulation of tech-debt, etc, etc.

That’s great, highly talented, admirable work, often exactly what our customers are requesting, but not necessarily what our industry needs most.

We need the revolutionary, not the evolutionary. And that means raising our horizons – to identify and comprehend the bigger challenges and then solving those. That is the intent of the OSS Call for Innovation – to lift our vision to a more distant horizon.

When you reflect during this holiday period, how distant will your horizon be?

PS. Upon your own reflection, are there additional big challenges or exponential opportunities that should be captured in the OSS Call for Innovation?

The strangulation of OSS feature releases

The diagram below provides a time-sequence view of how tech-debt accumulation eventually strangles new OSS feature releases unless the drastic measures described are taken.

The increasing percentage of tech debt

At start-up (t0), the system is brand new and has no legacy to maintain, so all effort can be dedicated to delivering new features (or products) as well as testing to ensure control of quality.

But over time (t0 + 10, where 10 is a nominal metric that could be days, years, release cycles, etc), effort is now required to maintain existing functionality / infrastructure. Not only that, but the test load increases. New features need to be tested, and regression testing needs to be done on the legacy because there are now more variants to consider. You’ll notice that less of the effort is now available for adding new features.

The rest of the chart is self-explanatory I hope. Over a longer period of time, so much effort is required just to maintain and assure the status quo that there is almost no time left to add new features. Any new features come with a significant testing and maintenance load.
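A toy model of the chart’s shape (the per-cycle rates below are invented purely to illustrate the trend, not measured from any real OSS):

```python
# Toy model of the chart above: each release cycle permanently locks in a
# little more maintenance and regression-testing effort, squeezing out the
# effort available for new features. The rates are invented for illustration.
MAINTENANCE_PER_CYCLE = 0.005  # fraction of total effort locked in per cycle
TESTING_PER_CYCLE = 0.004

for t in (0, 10, 50, 100):
    locked = min(1.0, t * (MAINTENANCE_PER_CYCLE + TESTING_PER_CYCLE))
    print(f"t0+{t:<3}: effort available for new features = {1.0 - locked:.0%}")
# t0+0: 100%, t0+10: 91%, t0+50: 55%, t0+100: 10% ... unless subtraction
# projects (re-factoring) claw some of the locked-in effort back
```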

Many traditional telcos (Mammoths) and their OSS suites have found themselves at t0+100. The legacy is so large and entwined that it’s a massive undertaking to make any pivotal change (the chess-board analogy).

This is where startups and the digital / cloud players have a significant disruptive advantage over the Mammoths. They’re at t0 to t0+10 (if the metric is in years) and can roll out proportionally more new features.

What the chart above doesn’t show is subtraction projects, the effort required to ensure the legacy maintenance load and number of variants (ie testing load) are hacked away at every opportunity. The digital players call this re-factoring and the telcos, well, they don’t really have a name for it because they rarely do it (do they?).

Telcos (and their OSS suites) are like hoarders, starting off with an empty house (t0) and progressively filling it with stuff until they can barely see any carpet for the clutter (t0+100). It generally takes the intervention of an outsider to force a de-cluttering because the hoarder no longer notices the problem.

The risk with the Agile / DevOps / continuous-release movement that’s currently underway is that it’s rapidly speeding up the release cadence. We might be near t0 now, but we’re going to get to t0+100 far faster than we did when release cadences were slower.

Can we all see that an additional colour MUST be added to the time-series chart above – the colour that represents reductionist effort? I’m so passionate about this that it’s a strong thread running through the arc of my next book (keep an eye out for upcoming posts as I’ll be seeking your help and insights on it in the lead-up to launch).

10 ways to #GetOutOfTheBuilding

Eric Ries’ “The Lean Startup,” has a short chapter entitled, “Get out of the Building.” It basically describes getting away from your screen – away from reading market research, white papers, your business plan, your code, etc – and out into customer-land. Out of your comfort zone and into a world of primary research that extends beyond talking to your uncle (see video below for that reference!).

This concept applies equally well to OSS product developers as it does to start-up entrepreneurs. In fact, the concept is so important that the chapter name has inspired its own hashtag (#GetOutOfTheBuilding).

This YouTube video provides 10 tips for getting out of the building (I’ve started the clip at Tendai Charasika’s list of 10 ways but you may want to scroll back a bit for his more detailed descriptions).

But there’s one thing that’s even better than getting out of the building and asking questions of customers. After all, customers don’t always tell the complete truth (even when they have good intentions). No, the better research is to observe what they do, not what they say. #ObserveWhatTheyDoNotWhatTheySay

This could be by being out of the building and observing customer behaviour… or it could be through looking at customer usage statistics generated by your OSS. That data might just show what a customer is doing… or not doing (eg customers might do small volume transactions through the OSS user interface, but have a hack for bulk transactions because the UI isn’t efficient at scale).
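As a sketch of what #ObserveWhatTheyDoNotWhatTheySay might look like against your own OSS usage statistics (the log structure and figures are assumptions for illustration):

```python
# Sketch: spot the "bulk hack" pattern in OSS usage stats - plenty of small
# transactions via the UI, but most of the volume arriving by a side channel.
# The log structure and figures are assumptions for illustration only.
transactions = [
    {"channel": "ui",              "batch_size": 1},
    {"channel": "ui",              "batch_size": 2},
    {"channel": "ui",              "batch_size": 1},
    {"channel": "csv_upload_hack", "batch_size": 5000},
]

by_channel = {}
for t in transactions:
    stats = by_channel.setdefault(t["channel"], {"count": 0, "volume": 0})
    stats["count"] += 1
    stats["volume"] += t["batch_size"]

for channel, s in by_channel.items():
    print(f"{channel}: {s['count']} transactions, {s['volume']} records")
# If most of the *volume* bypasses the UI, the UI probably isn't efficient
# at scale - something customers rarely volunteer in interviews.
```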

Not sure if it’s indicative of the industry as a whole, but my experience working for / with vendors is that they don’t heavily subscribe to either of these hashtags when designing and refining their products.

Does your OSS collect primary data to #ObserveWhatTheyDoNotWhatTheySay? If it does, do you ever make use of it? Or do you prefer to talk with your uncle (does he know much about OSS BTW)?