The chains of integration are too light until…

“Chains of habit are too light to be felt until they are too heavy to be broken.”
Warren Buffett (although he attributed it to an unknown author; it perhaps originated with Samuel Johnson).

What if I were to replace the word “habit” in the quote above with “OSS integration” or “OSS customisation” or “feature releases?”

The elegant quote reflects this image:

The chains of feature releases are light at t0. They’re much heavier at t0+100.

Like habits, if we could project forward and see the effects, would we allow destructive customisations to form in their infancy? The problem is that we don’t see them as destructive in their infancy. We don’t see how they entangle.

Posing a Network Data Synchronisation Protocol (NDSP) concept

Data quality is one of the biggest challenges we face in OSS. A product could be technically perfect, but if the data being pumped into it is poor, then the user experience of the product will be awful – the OSS becomes unusable, and that in itself generates a data quality death spiral.

This becomes even more important for the autonomous, self-healing, programmable, cooperative networks being developed (think IoT, virtualised networks, Self-Organizing Networks). If we look at IoT networks for example, they’ll be expected to operate unattended for long periods, but with code and data auto-propagating between nodes to ensure a level of self-optimisation.

So today I’d like to pose a question. What if we could develop the equivalent of Network Time Protocol (NTP) for data? Just as NTP synchronises clocking across networks, Network Data Synchronisation Protocol (NDSP) would synchronise data across our networks through a feedback-loop / synchronisation algorithm.

Of course there are differences from NTP. NTP only tries to coordinate one data field (time) along a common scale (a 64+64 bit timestamp continuum). The closest parallel for network data is in life-cycle state changes (eg in-service, port up/down, etc).

For NTP, the stratum of each clock is defined (see the stratum diagram below, from Wikipedia).

This has analogies with data, where some data sources can be seen to be more reliable than others (ie primary sources rather than secondary or tertiary sources). However, there are scenarios where stratum 2 sources (eg OSS) might push state changes down through stratum 1 (eg NMS) and into stratum 0 (the network devices). An example might be renaming of a hostname or pushing a new service into the network.
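
To make the stratum idea more concrete, here’s a minimal sketch of how an NDSP-style reconciliation rule might work. It’s entirely illustrative (the Attribute structure, strata assignments and hostnames are mine, not from any standard): trust the source closest to the network by default, but let an explicit push-down from a higher stratum win.

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    value: str
    stratum: int               # 0 = network device, 1 = NMS/EMS, 2 = OSS inventory
    pushed_down: bool = False  # explicit intent (eg a rename) from a higher stratum

def reconcile(candidates: list[Attribute]) -> Attribute:
    """Choose the authoritative value for one attribute of one device.

    Default rule: trust the lowest stratum (closest to the network).
    Exception: an explicit push-down (eg an OSS-initiated hostname change
    or new service) overrides values read from lower strata.
    """
    push_downs = [c for c in candidates if c.pushed_down]
    if push_downs:
        return max(push_downs, key=lambda c: c.stratum)  # highest-intent change wins
    return min(candidates, key=lambda c: c.stratum)

# The device and NMS agree, but the OSS has pushed down a rename
hostname = reconcile([
    Attribute("SYD-CORE-01", stratum=0),
    Attribute("SYD-CORE-01", stratum=1),
    Attribute("SYD-CORE-R1", stratum=2, pushed_down=True),
])
print(hostname.value)  # SYD-CORE-R1 – to be propagated down to strata 1 and 0
```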

One challenge would be the vastly different data sets and how to disseminate / reconcile them across the network without overloading it with management / communications packets. Another would be format consistency. I once had a device type with four different port naming conventions, and that was just within its own NMS! Imagine how many port name variations (and translations) might exist across the multiple inventories in our networks. The good thing about the NDSP concept is that it might force greater consistency across different vendor platforms.

Another challenge is that NDSP would become a huge security target, since it would have both the power to change configurations and reach throughout the network.

So what do you think? Has the NDSP concept already been developed? Have you implemented something similar in your OSS? What are the scenarios in which it could succeed? Or fail?

I’m predicting the demise of the OSS horse

“What will telcos do about the 30% of workers AI is going to displace?”
Dawn Bushaus

That question, which is the headline of Dawn’s article on TM Forum’s Inform platform, struck me as being quite profound.

As an aside, I’m not interested in the number – the 30% – because I concur with Tom Goodwin’s sentiments on LinkedIn, “There is a lot of nonsense about AI.
Next time someone says “x% of businesses will be using AI by 2020” or “AI will be worth $xxxBn by 2025” or any of those other generic crapspeak comments, know that this means nothing.
AI is a VERY broad area within computer science that includes about 6-8 very different strands of work. It spans robotics, image recognition, machine learning, natural language processing, speech recognition and far more. Nobody agrees on what is and isn’t in this.
This means it covers everything from superintelligence to artificial creativity to chatbots.”

For the purpose of this article, let’s just say that in 5 years AI will replace a percentage of jobs that we in tech / telco / OSS are currently doing. I know at least a few telcos that have created updated operating plans built around a headcount reduction much greater than the 30% mentioned in Dawn’s article. This is despite the touchpoint explosion and increased complexity that is already beginning to crash down onto us and will continue apace over the next 5 years.

Now, assuming you expect to still be working in 5 years’ time and are worried that your role might be in the disappearing 30% (or whatever the percentage turns out to be), what do you do now?

First, figure out what the modern equivalents of the horse are in the context of Warren Buffett’s quote below:

“What you really should have done in 1905 or so, when you saw what was going to happen with the auto is you should have gone short horses. There were 20 million horses in 1900 and there’s about 4 million now. So it’s easy to figure out the losers, the loser is the horse. But the winner is the auto overall. [Yet] 2000 companies (carmakers) just about failed.”

It seems impossible to predict how AI (all strands) might disrupt tech / telco / OSS in the next 5 years – and, like the auto industry, even harder to predict the winners (the technologies, the companies, the roles). However, it’s almost certainly easier to predict the losers.

Massive amounts are being invested into automation (by carriers, product vendors and integrators), so if the investments succeed, operational roles are likely to be net losers. OSS are typically built to make operational roles more efficient – but if swathes of operator roles are automated, then does operational support also become a net loser? In its current form, probably yes.

Second, if you are a modern-day horse, ponder which of your skills are transferable into the future (eg chassis building, brakes, steering, etc) and which are not (eg buggy-whip making, horse-manure collecting, horse grooming, etc). Assuming operator-driven OSS activities will diminish, but automation (with or without AI) will increase, can you take your current networks / operations knowledge and combine that with up-skilling in data, software and automation tools?

Even if OSS user interfaces are made redundant by automation and AI, we’ll still need to feed the new technologies with operations-style data, seed their learning algorithms and build new operational processes around them.

The next question is double-edged – for both individuals and telcos alike – how are you up-skilling for a future without horses?

Are your various device inventory repositories in synch?

Does your organisation have a number of different device inventory repositories?
Hint: You might even be surprised by how many you have.

Examples include:

  • Physical network inventory
  • Logical network inventory
  • DNS records
  • CMDB (Config Management DB)
  • IPAM (IP Address Management)
  • EMS (Element Management Systems)
  • SIEM (Security Information and Event Management)
  • Desktop / server management
  • not to mention the management information base on the devices themselves (the only true primary data source)

Have you recently checked how in-synch they are? As a starting point, are device counts aligned (noting that different repositories perhaps only cover subsets of device ranges)? If not, how many devices cross-match between repositories?

If they’re out of synch, do you have a routine process for triangulating / reconciling / cleansing? Even better, do you have an automated, closed-loop process for ensuring synchronisation across the different repositories?
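
As a starting point for the cross-match mentioned above, a sketch like the one below can flag devices that are missing from one or more repositories. The repository names, device names and the (deliberately crude) normalisation rule are all illustrative:

```python
def normalise(name: str) -> str:
    """Crude normalisation so 'Syd-Core-01.example.net' matches 'SYD-CORE-01'."""
    return name.strip().lower().split(".")[0]

repos = {
    "physical_inventory": {"SYD-CORE-01", "MEL-EDGE-07", "BNE-AGG-02"},
    "cmdb":               {"syd-core-01.example.net", "mel-edge-07", "per-edge-11"},
    "ipam":               {"SYD-CORE-01", "PER-EDGE-11"},
}

normalised = {repo: {normalise(d) for d in devices} for repo, devices in repos.items()}
all_devices = set().union(*normalised.values())

for device in sorted(all_devices):
    present = [repo for repo, devices in normalised.items() if device in devices]
    missing = [repo for repo in normalised if repo not in present]
    if missing:
        print(f"{device}: in {present}, missing from {missing}")
```

Real-world matching is messier (different device ranges, naming conventions and coverage), but even a crude report like this gives a baseline to reconcile against.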

Would anyone like to offer some thoughts on the many reasons why it’s important to have inventory alignment?

I’ll give a few little hints:

  • Security
  • Assurance
  • Activations
  • Integrations
  • Asset lifecycle management
  • Financial management (ie asset depreciation)

The concept of DevOps is missing one really important thing

There’s a concept that’s building a buzz across all digital industries – you may’ve heard of it – it’s a little thing called DevOps. Someone (most probably a tester) decided to extend it and now you might even hear the #DevTestOps moniker being mentioned.

In the ultimate of undeserved acknowledgements, I even get a reference on Wikipedia’s DevOps page. It references this DevOps life-cycle diagram from an earlier post that I can take no credit for:

However, there is one really important chevron missing from the DevOps infinite loop above. Can you picture what it might be?

If I show you the time series below, does it help identify what’s missing from the DevOps infinite loop? I refer to the diagram below as The Tech-Debt Wreck.
The increasing percentage of tech debt
If I give you a hint that it primarily relates to the grey band in the time series above, would that help?

Okay, okay. I’m sure you’ve guessed it already, but the big thing missing from the DevOps loop is pruning, or what I refer to as subtraction projects (others might call it re-factoring). Without pruning, the rapid release mantra of DevOps will take the digital world from t0 to t0+100 faster than at any time before in our history.

As a result, I’m advocating a variation on DevOps… or DevTestOps even… I want you to preach a revised version of the label – let’s start a movement called #DevTestPruneOps. Actually, the pruning should go at the start, before each dev / test cycle, but by calling it #PruneDevTestOps, I fear its lineage might get lost.

Torturous OSS version upgrades

Have you ever worked on an OSS where a COTS (Commercial Off-The-Shelf) solution has been so heavily customised that implementing the product’s next version upgrade has become a massive challenge? The solution has become so entangled that if the product was upgraded, it would break the customisations and/or integrations that are dependent upon that product.

This trickle-down effect is the perfect example of The Chess-board Analogy or The Tech-debt Wreck at work. Unfortunately, it is far too common, particularly in large, complex OSS environments.

The OSS then either has to:

  • skip the upgrade or
  • take a significant cost/effort hit and perform an upgrade that might otherwise be quite simple.

If the operator decides to take the “skip” path for a few upgrades in a row, then it gets further from the vendor’s baseline and potentially misses out on significant patches, functionality or security hardening. Then, when finally making the decision to upgrade, a much more complex project ensues.

It’s just one more reason why a “simple” customisation often has a much greater life-cycle cost than was initially envisaged.

How to reduce the impact?

  1. We’ve recently spoken about using RPA tools for pseudo-integrations, allowing the operator to leave the COTS product unchanged while using RPA to shift data between applications
  2. Attempt to achieve business outcomes via data / process / config changes to the COTS product rather than customisations
  3. Enforce a policy of integration as a last resort as a means of minimising the chess-board implications (ie attempting to solve problems via processes, in data, etc before considering any integration or customisation)
  4. Enforce modularity in the end-to-end architecture via carefully designed control points, microservices, etc

There are probably many other methods that I’m forgetting about whilst writing the article. I’d love to hear the approach/es you use to minimise the impact of COTS version upgrades. Similarly, have you heard of any clever vendor-led initiatives that are designed to minimise upgrade costs and/or simplify the upgrade path?

A summary of RPA uses in an OSS suite

This is the sixth and final post in a series about the four styles of RPA (Robotic Process Automation) in OSS.

Over the last few days, we’ve looked into the following styles of RPA used in OSS, their implementation approaches, pros / cons and the types of automation they’re best suited to:

  1. Automating repeatable tasks – using an algorithmic approach to completing regular, mundane tasks
  2. Streamlining processes / tasks – using an algorithmic approach to assist an operator during a process or as an alternate integration technique
  3. Predefined decision support – guiding operators through a complex decision process
  4. As part of a closed-loop system – that provides a learning, improving solution

RPA tools can significantly improve the usability of an OSS suite, especially for end-to-end processes that jump between different applications (in the many ways mentioned in the above links).

However, there can be a tendency to use the power of RPAs to “solve all problems” (see this article about automating bad processes). That can introduce a life-cycle of pain for operators and RPA admins alike. Like any OSS integration, we should look to keep the design as simple and streamlined as possible before embarking on implementation (subtraction projects).

RPA in OSS feedback loops

This is the fifth in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The fourth of those styles is as part of a closed-loop system such as the one described here. Here’s a diagram from that link:
OSS / DSS feedback loop

This is the most valuable style of RPA because it represents a learning and improving system.

Note though that RPA tools only represent the DSS (Decision Support System) component of the closed loop, so they need to be supplemented with the other components. Also note that an RPA tool can only perform the DSS role in this loop if it can accept feedback (eg via an API) and modify its output in response. The RPA tool could then perform tasks that are fully automated (ie machine-to-machine) or semi-automated (ie decision support for humans).
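
As a toy illustration of that loop (the thresholds, numbers and names below are invented, not from any particular RPA product), the DSS component makes a decision, receives feedback on the outcome, and adjusts its future behaviour accordingly:

```python
class DecisionSupport:
    """Stand-in for the DSS role an RPA tool could play in the closed loop."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold  # confidence required before acting automatically

    def decide(self, confidence: float) -> str:
        return "auto-execute" if confidence >= self.threshold else "refer-to-operator"

    def feedback(self, decision: str, outcome_ok: bool) -> None:
        # Crude adaptation: tighten after a bad automated action,
        # relax slightly when referred actions keep turning out fine.
        if decision == "auto-execute" and not outcome_ok:
            self.threshold = min(0.99, self.threshold + 0.05)
        elif decision == "refer-to-operator" and outcome_ok:
            self.threshold = max(0.50, self.threshold - 0.01)

dss = DecisionSupport()
for confidence, outcome_ok in [(0.85, False), (0.85, True), (0.70, True)]:
    decision = dss.decide(confidence)
    dss.feedback(decision, outcome_ok)
    print(decision, round(dss.threshold, 2))
```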

Setting up this type of solution can be far more challenging than the earlier styles of RPA use, but the results are potentially the most powerful too.

Almost any OSS process could be enhanced by this closed-loop model. It’s just a case of whether the benefits justify the effort. Broad examples include assurance (network health / performance), fulfilment / activations, operations, strategy, etc.

Using RPA as an alternate OSS integration

This is the third in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The second of those styles is Streamlining processes / tasks by following an algorithmic approach to simplify processes for operators.

These can be particularly helpful during swivel-chair processes, where multiple disparate systems are only partially integrated but each needs the same data (ie reducing the amount of duplicated data entry between systems). As well as streamlining the process, this also improves data consistency rates.

The most valuable aspect of this style of RPA is that it can minimise the amount of integration between systems, thus potentially reducing solution maintenance into the future. The RPA can even act as the integration technique where an API, or its documentation, isn’t available (think legacy systems here).
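
The swivel-chair flow has a simple shape, sketched below. In a real deployment the read / write functions would be the RPA tool driving each application’s screens (or a legacy export); the function and field names here are invented stand-ins so the flow itself is runnable:

```python
def read_order_from_crm(order_id: str) -> dict:
    # Stand-in for the RPA tool scraping / exporting from the source system
    return {"order_id": order_id, "customer": "Acme Pty Ltd", "service": "EVPL-100M"}

def write_to_inventory(record: dict) -> None:
    print(f"[inventory] created {record['service']} for {record['customer']}")

def write_to_billing(record: dict) -> None:
    print(f"[billing] raised charge for order {record['order_id']}")

def swivel_chair(order_id: str) -> None:
    record = read_order_from_crm(order_id)  # data entered once, at the source
    for target in (write_to_inventory, write_to_billing):
        target(record)  # no re-keying, so no opportunity for divergence

swivel_chair("ORD-12345")
```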

Will it take open source to unlock OSS potential?

I have this sense that the other OSS, open source software, holds the key to the next wave of OSS (Operational Support Systems) innovation.

Why? Well, as yesterday’s post indicated (through Nic Brisbourne), “it’s hard to do big things in a small way.” I’d like to put a slight twist on that concept by saying, “it’s hard to do big things in a fragmented way.” [OSS is short for fragmented after all]

The skilled resources in OSS are so widely spread across many organisations (doing a lot of duplicated work) that we can’t reach a critical mass of innovation. Open source projects like ONAP represent a possible path to critical mass through sharing and augmenting code. They provide the foundation upon which bigger things can be built. If we don’t uplift the foundations across the whole industry quickly, we risk losing relevance (just ask our customers for their gripes list!).

BTW. Did you notice the news that six Linux Foundation open source networking projects have just merged into one? The six initial projects are ONAP, OPNFV, OpenDaylight, FD.io, PNDA, and SNAS. The new project is called the LF Networking Fund (LFN).

But you may ask how organisations can protect their trade secrets whilst embracing open source innovation. Derek Sivers provides a fascinating story and line of thinking in “Why my code and ideas are public.” I really recommend having a read about Valerie.

Alternatively, if you’re equating open source with free / unprofitable, this link provides a list of highly successful organisations with significant open source contributions. There are plenty of creative ways to be rewarded for open source effort.

Comment below if you agree or disagree about whether we need open source to unlock the potential of OSS innovation.

It’s hard to do big things in a small way

“it’s hard to do big things in a small way, so I suspect incumbents have more of an advantage than they do in most industries.”
Nic Brisbourne

The quote above came from a piece about the rise of ConstructTech (ie building houses via means such as 3D printing). However, it is equally true of the OSS industry.

Our OSS tend to be behemoths, or at least the ones I work on seem to be. They’ve been developed over many years and have millions of sunk person-hours invested in them. And they’ve been customised to each client’s business like vines wrapped around a pillar. This gives enormous incumbency power and acts as a barrier to smaller innovators having a big impact in the world of OSS.

Want an example of it being hard to do big things in a small way? Ever heard of ONAP? AT&T is a massive telco with revenues to match, committed to a more software-centric future, and has developed millions of lines of code, yet it still needs the broader industry to help flesh out its vision for ONAP.

There are occasionally niche products developed but it’s definitely hard to do big things in a small way. The small grid analogy proposed earlier gives more room for the long tail of innovation, allowing smaller innovators to impact the larger ecosystem.

Write a comment below if you’d like to point out an outlier to this trend.

The two types of disruptive technologists

OSS is an industry that’s undergoing constant, massive change. But it still hasn’t been disrupted in the modern sense of that term. It’s still waiting for its Uber/AirBnB moment, where the old way becomes almost obsolete upon the introduction of a new way. OSS is not just waiting for disruption, but primed for it.

It’s a massive industry in terms of revenues, but it’s still far from delivering everything that customers want/need. It’s potentially even holding back the large-scale service provider industry from being even more influential / efficient in the current digital communications world. Our recent OSS Call for Innovation spelled out the challenges and opportunities in detail.

Today we’ll talk about the two types of disruptive technologists – one that assists change and one that hinders.

The first disruptive technologist is a rare beast – they’re the innovators who create solutions that are distinctly different from anything else in the market, changing the market (for the better) in the process. As discussed in this recent post, most of the significant changes occurring to OSS have been extrinsic (from adjacent industries like IT or networking rather than OSS). We need more of these.

The second disruptive technologist is all too common – they’re the technologists whose actions disrupt an OSS implementation. They’re usually well-intended, but can get in the way of innovation in two main ways:
1) By not looking beyond incremental change to existing solutions
2) Halting momentum by creating and resolving a million “what if?” scenarios

Most of us probably fall into the second category more often than the first. We need to reverse that trend individually and collectively though don’t we?

Would you like to nominate someone who stands out as being the first type of disruptive technologist and why?

How “what if?” scenarios can halt a project

Let’s admit it; we’ve all worked on an OSS project that has gone into a period of extended stagnation because of a fear of the unknown. I call them “What if?” scenarios. They’re the scenarios where someone asks, “What if x happens?” and then the team gets side-tracked whilst finding an answer / resolution. The problem with “What if?” scenarios is that many of them will never happen, or will happen on such rare occasions that the impact will be negligible. They’re the opposite end of the Pareto Principle – they’re the 20% that take up the 80% of effort / budget / time. They need to be minimised and/or mitigated.

In some cases, the “what if?” questions come from a lack of understanding about the situation, the product suite and / or the future solution. That’s completely understandable because we can never predict all of the eventualities of an OSS project at the outset. That’s the OctopOSS at work – you think you have all of the tentacles under control, but another one always comes and whacks you on the back of the head.

The best way to reduce the “what if?” questions from getting out of control is to give stakeholders a sandpit / MVP / rapid-prototype / PoC environment to interact with.

The benefit of the prototype environment is that it delivers something tangible, something that stakeholders far and wide can interact with and test assumptions, usefulness, usability, boundary cases, scalability, etc. Stakeholders get to understand the context of the product and get a better feeling for what the end solution is going to look like. That way, many of the speculative “what ifs?” are bypassed and you start getting into the more productive dialogue earlier. The alternative, the creation of a document or discussion, can devolve into an almost endless set of “what-if” scenarios and opinions, especially when there are large groups of (sometimes militant) stakeholders.

The more dangerous “what if?” questions come from the experts. They’re the ones who demonstrate their intellectual prowess by finding scenario after scenario that nobody else is considering. I have huge admiration for those who can uncover potential edge cases, race conditions, loopholes in code, etc. The challenge is that they can be extremely hard to document, test for and circumvent. They’re also often very difficult to quantify or prove a likelihood of occurrence, thus consuming significant resources.

Rather than divert resources to resolving all these “what if?” questions one-by-one, I try to seek a higher-order “safety-net” solution. This might be in the form of exception handling, try-catch blocks, fall-out analysis reports, etc. Or, it might mean assigning a watching brief on the problem and handling it only if it arises in future.
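
Here’s one possible shape for that safety-net, sketched with invented order structures and function names: the known “what ifs?” are handled explicitly, while anything unanticipated is caught, logged to a fall-out queue and only handled if it actually occurs (and often enough to matter):

```python
import logging

fallout_queue = []  # in practice: a fall-out report or ticketing system

def process_service_order(order: dict) -> None:
    if "service" not in order:
        raise ValueError("order missing service type")  # a known "what if?"
    print(f"activated {order['service']}")

def with_safety_net(order: dict) -> None:
    try:
        process_service_order(order)
    except Exception as exc:
        # The higher-order catch-all: record the edge case so its frequency
        # can be quantified before anyone builds dedicated handling for it.
        logging.warning("fall-out on order %s: %s", order.get("id"), exc)
        fallout_queue.append({"order": order, "error": str(exc)})

with_safety_net({"id": 1, "service": "NBN-50"})
with_safety_net({"id": 2})  # an unanticipated case becomes fall-out, not a crash
print(f"{len(fallout_queue)} order(s) awaiting manual review")
```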

What is your OSS answer : question ratio?

Experts know a lot…. obviously.
They have lots of answers… obviously.

There are lots of OSS experts. Combined, they know A LOT!!

Powerful indeed, but not sure if that’s what we need right now. I feel like we’re in a bit of an OSS innovation funk. The biggest improvements in OSS are coming from outside OSS – extrinsic improvement.

Where’s the intrinsic improvement coming from? Do we need someone to shake it up (do we need everyone to shake it up?)? Do we need new thinking to identify and create new patterns? To re-organise and revolutionise what the experts already know. Or do we need to ask the massive questions that re-frame the situation for the experts?

So, considering this funky moment in time, is the real expert the one who knows lots of answers… or the person who can catalyse change by asking the best mind-shift questions?

May I ask you – As an OSS expert, are you prouder of your answers…. or your questions?

To tackle that from a different angle – What is your answer : question ratio? Are you such an important expert that your day is so full of giving brilliant answers that you have no time left to ruminate and develop brilliant questions?

If so, can we take some of your answer time back and re-prioritise it please?

In the words of Socrates, “I cannot teach anybody anything, I can only make them think.”

The exposure effect can work for or against OSS projects

The exposure effect (no, not the one circulating through Hollywood) has a few interesting implications for OSS.

“The mere-exposure effect is a psychological phenomenon by which people tend to develop a preference for things merely because they are familiar with them.”
Wikipedia

In effect, it’s the repetition that drills familiarity, comfort, but also bias, into our sub-conscious. Repetition doesn’t make a piece of information true, but it can make us believe it’s true.

Many OSS experts are exposed to particular vendors/products for a number of years during their careers, and in doing so, the exposure effect can build. It can have a subtle bias on vendor selection, whereby the evaluators choose the solution/s they know ahead of the best-fit solution for their organisation. Perhaps having independent vendor selection facilitators who are familiar with many products can help to reduce this bias?

The exposure effect can also appear through sales and marketing efforts. By regularly contacting customers and repetitively promoting their wares, the customer builds a familiarity with the product. In theory it works for OSS products as it does with beer commercials. This can work for or against, depending on the situation.

In the case for, it can help to build a guiding coalition to get a complex, internal OSS project approved and supported through the challenging times that await every OSS project. I’d even go so far as to say, “you should use it to help build a guiding coalition,” rather than, “you can use it to help build a guiding coalition.” Never underestimate the importance of organisational change management on an OSS project.

In the case against, it can again develop a bias towards vendors / products that aren’t best-fit for the organisation. Similarly, if a “best-fit” product doesn’t take the time to develop repetition, they may never even get considered in a selection process, as highlighted in the diagram below.

7 touches of sales
Courtesy of the OnlineMarketingInstitute.

The 10 minute / 1 minute / 10 second OSS challenge

Check out the video below, which gives an example of the 10 minute / 1 minute / 10 second challenge (you can check out more of them here).

When given 10 minutes to sketch Spiderman, the result is far richer than when the artist is given only 10 seconds… well obviously!!

But let me pose a question. If Sketch B were compiled from 60 sequential 10-second updates (ie Sketch B would also take 10 minutes of total sketching time), do you think the final sketch would look as impressive as the single 10-minute sketch (Sketch A)? The total sketching time is the same, but will the results be similar?

From the 10s sketch above, you can see that the composition is not as precise. Subsequent updates would have to work around the initial structural flaws.

Do you wonder whether this is somewhat analogous to creating OSS using continuous development frameworks like Agile or DevOps? By having tightly compressed (eg weekly) release cycles, are we compromising the structure from the start?

I’m a big believer in rapid prototyping with subsequent incremental improvements instead of the old big-bang OSS delivery model. I’m also impressed with automated dev / test / release frameworks. However, I’m concerned that rapid release cycles can enforce unnecessary deadlines and introduce structural compromises that are difficult to fix mid-flight.

Keeping the OSS executioner away

“With the increasing pace of change, the moment a research report, competitive analysis, or strategic plan is delivered to a client, its currency and relevance rapidly diminishes as new trends, issues, and unforeseen disrupters arise.”
Soren Kaplan

By the same token as the quote above, does it follow that the currency and relevance of an OSS rapidly diminishes as soon as it is delivered to a client?

In the case of research reports, analyses and strategic plans, currency diminishes because the static data sets upon which they’re built are also losing currency. That’s not the case for an OSS – they are data collection and processing engines for streaming (ie constantly refreshing) data. [As an aside here – relevance can still decrease if data quality is steadily deteriorating, irrespective of currency. Meanwhile, currency can decrease if the ever-expanding pool of OSS data becomes so large as to be unmanageable, or if responsiveness is usurped by newer data processing technologies.]

However, as with research reports, analyses and strategic plans, the value of an OSS is not so much related to the data collected, but the questions asked of, and answers / insights derived from, that data.

Apart from the asides mentioned above, the currency and relevance of an OSS only diminish as a result of new trends, issues and disrupters if new questions cannot be, or are not being, asked with them.

You’ll recall from yesterday’s post that, “An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data,” is as true of OSS tools as it is of OSS consultants. I’m constantly surprised that so few OSS are designed with intuitive, flexible data interrogation tools built in. It seems that product teams are happy to delegate that responsibility to off-the-shelf reporting tools or leave it up to the client to build their own.

The future of telco / service provider consulting

“Change happens when YOU and I DO things. Not when we argue.”
James Altucher

We recently discussed how ego can cause stagnation in OSS delivery. The same post also indicated how smart contracts potentially streamline OSS delivery and change management.

Along similar analytical lines, there’s a structural shift underway in traditional business consulting, as described in a recent post contrasting “clean” and “dirty” consulting. There’s an increasing skepticism in traditional “gut-feel” or “set-and-forget” (aka clean) consulting and a greater client trust in hard data / analytics and end-to-end implementation (dirty consulting).

Clients have less need for consultants who just turn the ignition and lay out sketchy directions, but increasingly need ones who can help drive the car all the way to their desired destination.

Consultants capable of meeting these needs for the telco / service provider industries have:

  • Extensive coal-face (delivery) experience, seeing and learning from real success and failure situations / scenarios
  • An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data
  • An ability to build repeatable frameworks (including the development of smart contracts)
  • A mix of business, IT and network / tech expertise, like all valuable tripods

Have you noticed that the four key features above are perfectly aligned with having worked in OSS? OSS/BSS data stores contain information that’s relevant to all parts of a telco / service provider business. That makes us perfectly suited to being the high-value consultants of the future, not just contractors into operations business units.

Few consultancy tasks are productisable today, but as technology continues to advance, traditional consulting roles will increasingly be replaced by IP (Intellectual Property) frameworks, data analytics, automations and tools… as long as the technology provides real business benefit.

Bad OSS ego decisions

“A long, long time ago Dennis Haslinger told me that most of the most serious mistakes I would make in life would be bad ego decisions. I have found that to be true.”
Gary Halbert

OSS is an industry filled with highly intelligent people. In every country I’ve visited to work on OSS assignments, perhaps excluding Vietnam, my colleagues have been predominantly male. Dare I say it, do those two preceding facts imply that a significant ego level exists on many (most?) OSS projects (or has it just been a coincidence in my experience)?

Given that OSS projects tend to cross business units, inter-departmental power plays like the one described in the Dilbert comic below can become just another potential pitfall.
Dilbert - I found a way to save a million dollars

To be honest, I can’t recall any examples where ego (mine or others’) has led to serious mistakes as such, but I’ve seen many cases where it’s led to serious stagnation and delays in project delivery that have been extremely costly.

One example is cited in this post, where the intellectual brilliance of one person caused a document to blow out from 30 pages to 150+, causing a 3+ month delay and more than $100k additional cost.

Stakeholder management and change management are highly underestimated factors in the success of OSS projects.

PS. The “intellectual brilliance” link above also talks about the possible benefits of smart contracts in OSS delivery. I wonder whether smart contracts will reduce some of the ego-related stagnation on OSS projects, or simply shift it from the delivery phase to the up-front smart contract agreement phase, thus introducing more “what if scenario” stagnation?

Raising the OSS horizon

With the holiday period looming for many of us, we will have the head-space to reflect – on the year(s) gone and to ponder the one(s) upcoming. I’d like to pose the rhetorical question, “What do you expect to reflect on?”

It’s probably safe to say that a majority of OSS experts are engaged in delivery roles. Delivery roles tend to require great problem-solving skills. That’s one of the exciting aspects of being an OSS expert after all.

There’s one slight problem though. Delivery roles tend to have a focus on the immediacy of delivery, a short-term problem-solving horizon. This generates incremental improvements like new dashboards within an existing dashboard framework, refining processes, next release software upgrades, releasing new stuff that adds to the accumulation of tech-debt, etc, etc.

That’s great, highly talented, admirable work, often exactly what our customers are requesting, but not necessarily what our industry needs most.

We need the revolutionary, not the evolutionary. And that means raising our horizons – to identify and comprehend the bigger challenges and then solving those. That is the intent of the OSS Call for Innovation – to lift our vision to a more distant horizon.

When you reflect during this holiday period, how distant will your horizon be?

PS. Upon your own reflection, are there additional big challenges or exponential opportunities that should be captured in the OSS Call for Innovation?