Using OSS/BSS to steer the ship

For network operators, our OSS and BSS touch most parts of the business. The network, and the services it carries, are core business, so most business units contribute to that core business. As such, our OSS and BSS provide many of the metrics used by those business units.

This is a privileged position to be in. We get to see which indicators are most important to the business, as well as the levers used to control those indicators. From this vantage point, we also get to see the aggregated impact of all these KPIs.

In your years of working on OSS / BSS, how many times have you seen key business indicators that conflict between business units? They generally become most apparent on cross-team projects, where the objectives of one internal team directly conflict with those of another.

In theory, a KPI tree can be used to improve consistency and ensure all business units are pulling towards a common objective… [but what if, like most organisations, there are many objectives? Does that mean you have a KPI forest and the trees end up fighting for light?]

But here’s a thought… Have you ever seen an OSS/BSS suite with the ability to easily build KPI trees? I haven’t. I’ve seen thousands of standalone reports containing myriad indicators, but never a consolidated roll-up of metrics. I have seen a few products that show operational metrics rolled up into a single dashboard, but not business metrics. They appear to have been designed to show an information hierarchy, but not with KPI trees specifically in mind.
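For what it’s worth, the core of KPI-tree functionality doesn’t seem conceptually hard. Here’s a minimal sketch of a weighted roll-up, with entirely hypothetical metrics and weightings:

```python
from dataclasses import dataclass, field

@dataclass
class KPINode:
    """A node in a KPI tree: a metric that rolls up into its parent."""
    name: str
    weight: float = 1.0                   # contribution to the parent's roll-up
    value: float = 0.0                    # leaf metrics carry normalised values
    children: list["KPINode"] = field(default_factory=list)

    def score(self) -> float:
        """Roll leaf metrics up the tree as a weighted average."""
        if not self.children:
            return self.value
        total = sum(child.weight for child in self.children)
        return sum(child.score() * child.weight for child in self.children) / total

# Hypothetical tree: one corporate objective fed by two business-unit KPIs.
objective = KPINode("Customer experience", children=[
    KPINode("Churn attainment", weight=2.0, value=0.82),
    KPINode("MTTR attainment", weight=1.0, value=0.74),
])
print(f"{objective.name}: {objective.score():.2f}")  # 0.79
```

With trees like this stitched together across business units, conflicting KPIs would at least become visible in one place rather than buried in thousands of standalone reports.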

What do you think? Does it make sense for us to offer KPI trees as base product functionality from our reporting modules? Would this functionality help our OSS/BSS add more value back into the businesses we support?

Operator involvement on OSS projects

“You cannot simply have your end users give some specifications then leave while you attempt to build your new system. They need to be involved throughout the process. Ultimately, it is their tool to use.”
José Manuel De Arce, here.

As an OSS consultant and implementer, I couldn’t agree more with José’s quote above. José, by the way, is an OSS Manager at Telefónica, so he sits on the operator’s side of the implementation equation. I’m glad he takes the perspective he does.

Unfortunately, many OSS operators are so busy with operations, they don’t get the time to help with defining and tuning the solutions that are being built for them. It’s understandable. They are measured by their ability to keep the network (and services) running and in a healthy state.

From the implementation side, it reminds me of this old comic:
[Comic: “Too busy”]

The comic reminds me of OSS implementations for two reasons:

  1. Without ongoing input from operators, you can only guess at how the new tools could improve their efficacy and mitigate their challenges
  2. Without ongoing involvement from operators, they don’t learn the nuances of how the new tool works or the scenarios it’s designed to resolve… what I refer to as an OSS apprenticeship

I’ve seen it time after time on OSS implementations (and other projects for that matter) – [As a customer] you get back what you put in.

The Goldilocks OSS story

We all know the story of Goldilocks and the Three Bears where Goldilocks chooses the option that’s not too heavy, not too light, but just right.

The same model applies to OSS – finding / building a solution that’s not too heavy, not too light, but just right. To be honest, we probably tend to veer towards the too heavy, especially over time. We put more complexity into our architectures, integrations and customisations… because we can… which ends up burdening us and our solutions.

A perfect example is AT&T offering its ECOMP project (now part of the even bigger LF Networking Fund) up for open source in the hope that others would contribute and help mature it. To continue the fairytale analogy, it’s an admission that it’s too heavy even for one of the global heavyweights to handle by itself.

The ONAP Charter has some great plans including, “…real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.”

These are fantastic ambitions to strive for, especially at the Papa Bear end of the market. I have huge admiration for those who are creating and chasing bold OSS plans. But what about the large majority of customers that fall into the Goldilocks category? Is our field of vision so heavy (ie so grand and so far into the future) that we’re missing the opportunity to solve the business problems of our customers and make a difference for them with lighter solutions today?

TM Forum’s Digital Transformation World is due to start in just over two weeks. It will be fascinating to see how many of the presentations and booths consider the Goldilocks requirements. There probably won’t be many because it’s just not as sexy a story as one that mentions heavy solutions like policy-driven orchestration, zero-touch automation, AI / ML / analytics, self-scaling / self-healing networks, etc.

[I should also note that I fall into the category of loving to listen to the heavy solutions too!!]

Powerful ranking systems with hidden variables

There are ratings and rankings that ostensibly exist to give us information (and we are supposed to use that information to change our behavior).
But if we don’t know what variables matter, how is it supposed to be useful?
Just because it can be easily measured with two digits doesn’t mean that it’s accurate, important or useful.
[Marketers learned a long time ago that people love rankings and daily specials. The best way to boost sales is to put something in a little box on the menu, and, when in doubt, rank things. And sometimes people even make up the rankings.]

Seth Godin, here.

Are there any rankings that are made up in OSS? Our OSS collect an amazing amount of data so there’s rarely a need to make up the data we present.

Are they based on hidden variables? Generally, we use raw counters and / or well-known metrics, so we’re usually quite transparent about what our OSS present.

What about when we’re trying to select the right vendor to fulfill the OSS needs of our organisation? As Seth states, “Just because it can be easily measured with two digits* doesn’t mean that it’s accurate, important or useful.” [* In this case, I’m thinking of a 2 x 2 matrix].

The interesting thing about OSS ranking systems is that there is so much nuance in the variables that matter. There are potentially hundreds of evaluation criteria, and even vast contrasts in how to interpret a given criterion.

For example, a criterion might be “time to activate a service” (TTAS). A vendor might have a really efficient workflow for activating single services manually, but no bulk-load or automation interface. For one operator (which does single activations manually), the TTAS metric for that product would be great; for another operator (which does thousands of activations a day and tries to automate), the TTAS metric for the same product would be awful.

As much as we love ranking systems… there are hundreds of products on the market (in some cases, hundreds of products in a single operator’s OSS stack), each fitting unique operator needs differently… so a 2 x 2 matrix is never going to cut it as a vendor selection tool… not even as a short-listing tool.

Better to build yourself a vendor selection framework. You can find a few OSS product / vendor selection hints here based on the numerous vendor / product selections I’ve helped customers with in the past.
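To make the contrast with a 2 x 2 matrix concrete, a selection framework is really just weighted, operator-specific scoring. A minimal sketch follows – the products, criteria and weights are all invented – but note how the same two products rank differently under each operator’s weightings:

```python
# Hypothetical weighted-criteria scoring. Scores are 0-10 per criterion.
products = {
    "Product A": {"manual_activation": 9, "bulk_automation": 2, "integration": 6},
    "Product B": {"manual_activation": 5, "bulk_automation": 9, "integration": 7},
}

# Each operator weights the criteria to reflect its own operations.
weights_manual_ops = {"manual_activation": 0.6, "bulk_automation": 0.1, "integration": 0.3}
weights_automated_ops = {"manual_activation": 0.1, "bulk_automation": 0.6, "integration": 0.3}

def rank(products: dict, weights: dict) -> list:
    """Return products sorted by weighted score, best first."""
    scores = {name: sum(crit[k] * w for k, w in weights.items())
              for name, crit in products.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank(products, weights_manual_ops))     # Product A wins (7.4 vs 6.0)
print(rank(products, weights_automated_ops))  # Product B wins (8.0 vs 3.9)
```

Same products, opposite rankings – which is exactly why a one-size-fits-all quadrant can’t short-list for you.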

Designing OSS to cope with greater transience (part 2)

This is the second episode discussing the significant change to OSS thinking caused by modern network models. Yesterday’s post discussed how there has been a paradigm shift from static networks (think PDH) to dynamic / transient networks (think SDN/NFV) and that OSS are faced with a similar paradigm shift in how they manage modern network models.

We can either come up with adaptive / algorithmic mechanisms to deal with that transience, or mimic the “nailed-up” concepts of the past.

Let’s take Carrier Ethernet as a basis for explanation, with its E-Line service model [We could similarly analyse E-LAN and E-Tree service models, but maybe another day].

An E-Line is a point-to-point service between an A-end UNI (User-Network Interface) and a Z-end UNI, connected by an EVC (Ethernet Virtual Connection). The EVC is a conceptual pipe that is carried across a service provider’s network – a pipe that can actually span multiple network assets / links.

In our OSS, we can apply either:

  1. Abstract Model – Just mimic the EVC as a point-to-point connection between the two UNIs
  2. Specific Model – Attempt to tie network assets / links associated with the conceptual pipe to the EVC construct

The abstract OSS can be set up just once, delegating the responsibility for real-time switching / transience within the EVC to network controllers / EMS. This is the simpler model, but it adds less value, particularly to assurance use-cases.

The specific OSS must either have the algorithms / policies to dynamically manage the EVC or to dynamically associate assets to the EVC. This is obviously much more sophisticated, but provides operators with a more real-time view of network utilisation and health.
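As a rough illustration of the two approaches, here’s a minimal sketch (the class names are hypothetical, not drawn from any MEF specification):

```python
from dataclasses import dataclass, field

@dataclass
class EVC:
    """Abstract model: the EVC is just a pipe between two UNIs, set up once."""
    name: str
    a_end_uni: str
    z_end_uni: str

@dataclass
class SpecificEVC(EVC):
    """Specific model: the EVC also tracks the underlying assets / links it
    currently traverses, refreshed from network controllers / EMS."""
    carried_over: list[str] = field(default_factory=list)

    def resync(self, controller_path: list[str]) -> None:
        # In practice this association would be driven by an algorithm /
        # policy fed from the controller, not a manual call like this one.
        self.carried_over = controller_path

evc = SpecificEVC("EVC-001", a_end_uni="UNI-A", z_end_uni="UNI-Z")
evc.resync(["switch-1:eth0", "link-17", "switch-4:eth3"])  # transient path
```

The abstract model stops at the base class; the specific model carries the extra (constantly changing) association, which is where the sophistication and the assurance value both live.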

An OSS conundrum with many perspectives

“Even aside from the OSS impact, it illustrates the contrast between “bottom-up” planning of networks (new card X is cheaper/has more ports) and “top down” (what do we need to change to reduce our costs/increase capacity).”
Robert Curran.

Robert’s quote above is in response to a post called “Trickle-down impact planning.”

Robert makes a really interesting point. Adding a new card type is a relatively common event for a big network operator. It’s a relatively minor challenge for the networks team – a BAU (Business as Usual) activity in fact. But if you follow the breadcrumbs, the impact to other parts of the business can be quite significant.

Your position in your organisation possibly dictates your perspective on the alternative approaches Robert discusses above. Networks, IT, planning, operations, sales, marketing, projects/delivery, executive – all will have different impacts and a different field of view on the conundrum. This makes it an interesting problem to solve – which viewpoint is the “right” one to tackle the challenge from?

My “solutioning” background tends to align with the top down viewpoint, but today we’ll take a look at this from the perspective of how OSS can assist from either direction.

Bottom Up: In an ideal world, our OSS and associated processes would be able to identify a new card (or similar) and just ripple changes out without interface changes. The first OSS I worked on did this really well. However, it was a “single-vendor” solution so the ripples were self-contained (mostly). This is harder to control in the more typical “best-of-breed” OSS stacks of today. There are architectural mechanisms for controlling the ripples out but it’s still a significant challenge to solve. I’d love to hear from you if you’re aware of any vendors or techniques that do this really well.

Top Down: This is where things get interesting. Should top-down impact analysis even be the task of an OSS/BSS? Since it’s a common / BAU operational task, then you could argue it is. If so, how do we create OSS tools* that help with organisational impact / change / options analysis and not just network impact analysis? How do we build the tools* that can:

  1. Predict the rippling impacts
  2. Allow us to estimate the impact of each
  3. Present options (if relevant) and
  4. Provide a cost-benefit comparison to determine whether any of the options are viable for development

* When I say “tools,” this might be a product, but it could just mean a process, data extract, etc. A toy sketch of step 1 follows below.
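As a thought experiment, step 1 (predicting the rippling impacts) might look like a breadth-first walk over a dependency graph that spans more than just the network domain. The dependencies below are entirely hypothetical:

```python
from collections import deque

# Hypothetical dependency graph: what gets touched when a new card type
# is introduced. Edges point from a change to the things it impacts.
impacts = {
    "new card type": ["inventory model", "EMS interface"],
    "inventory model": ["capacity reports", "service qualification"],
    "EMS interface": ["alarm mappings", "provisioning workflows"],
    "provisioning workflows": ["sales order forms", "training material"],
}

def ripple(change: str) -> list[str]:
    """Breadth-first walk of everything downstream of a change."""
    seen, queue, order = {change}, deque([change]), []
    while queue:
        for nxt in impacts.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                order.append(nxt)
    return order

print(ripple("new card type"))
# ['inventory model', 'EMS interface', 'capacity reports', ...]
```

Steps 2 to 4 would hang cost and benefit estimates off each node in that walk – which is where the real organisational effort (and value) would be.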

I have the sense that this type of functionality falls into the category of, “just because you can, doesn’t mean you should… build it into your OSS.” Have you seen an OSS/BSS with this type of impact analysis functionality built-in?

Designing an OSS from NFRs backwards

When we’re preparing a design (or capturing requirements) for a new or updated OSS, I suspect most of us design with functional requirements (FRs) in mind. That is, our first line of thinking is on the shiny new features or system behaviours we have to implement.

But what if we were to flip this completely? What if we were to design against Non-Functional Requirements (NFRs) instead? [In case you’re not familiar with NFRs, they’re the requirements that specify how well a solution performs – think response times, scalability, availability – rather than what features / behaviours it provides]

What if we already have all the really important functionality in our OSS (the 80/20 rule suggests we will), but those functions are just really inefficient to use? What if we can meet the FR of searching a database for a piece of inventory… but our loaded system takes 5 minutes to return the results of the query? It doesn’t sound like much, but if it’s an important task that you’re doing dozens of times a day, then you’re wasting hours each day. Worse still, if it’s a system task that needs to run hundreds of times a day…

I personally find NFRs to be really hard to design for because we usually won’t know response times until we’ve actually built the functionality and tried different load / fail-over / pattern (eg different query types) models on the available infrastructure. Yes, we can benchmark, but that tends to be a bit speculative.

Unfortunately, if we’ve built a solution that works, but end up with queries that take minutes… when our SLAs might be only 5-15 minutes end-to-end, then we’ve possibly failed in our design role.

We can claim that it’s not our fault. We only have finite infrastructure (eg compute, storage, network), each with inherent performance constraints. It is what it is right?…. maybe.

What if we took the perspective of determining our most important features (the 80/20 rule again), setting NFR benchmarks for each and then designing the solution back from there? That is, putting effort into making our most important features super-efficient rather than adding new nice-to-have features (features that will increase load, thus making NFRs harder to hit, mind you!)?
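One way of making that concrete would be to capture NFR budgets for the vital few features as executable checks that run alongside the functional tests. A minimal sketch (the budgets are arbitrary examples, not recommendations):

```python
import time

# Hypothetical NFR budgets (seconds) for the top 20% of features.
NFR_BUDGETS = {"inventory_search": 2.0, "alarm_correlation": 0.5}

def assert_within_budget(feature: str, func, *args):
    """Run a feature under test and fail loudly if it blows its NFR budget."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    budget = NFR_BUDGETS[feature]
    assert elapsed <= budget, f"{feature} took {elapsed:.2f}s, budget is {budget:.2f}s"
    return result

# Usage: wrap the real query (here just a stand-in) in the budget check.
assert_within_budget("inventory_search", lambda q: q.lower(), "PORT-1/1/3")
```

Budgets like these would at least make NFR regressions visible with every build, rather than discovering them on loaded production infrastructure.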

In this new world of open source, we have more “product control” than we’ve probably ever had. This gives us more of a chance to start with the non-functionals and work back towards a product. An example might be redesigning our inventory to work with graph database technology rather than the existing relational databases.
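For instance (a purely illustrative sketch, not tied to any particular graph product), the connectivity questions our inventories are asked constantly become natural traversals in a graph model, where a relational model might need recursive joins:

```python
# Inventory as an adjacency list: devices as nodes, connections as edges.
graph = {
    "router-1": ["switch-1", "switch-2"],
    "switch-1": ["router-1", "cpe-55"],
    "switch-2": ["router-1"],
    "cpe-55": ["switch-1"],
}

def connected(a: str, b: str, seen=None) -> bool:
    """Is there any path between two inventory items? A depth-first walk."""
    seen = seen or set()
    if a == b:
        return True
    seen.add(a)
    return any(connected(n, b, seen) for n in graph[a] if n not in seen)

print(connected("cpe-55", "switch-2"))  # True: cpe-55 -> switch-1 -> router-1 -> switch-2
```

Whether the graph approach actually hits the NFR targets is exactly the kind of thing we’d want to benchmark before committing, per the budget-check idea above.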

How feasible is this NFR concept? Do you know anyone in OSS who does it this way? Do you have any clever tricks for ensuring your developed features stay within NFR targets?

Re-writing the Sales vs Networks cultural divide

“Brand, marketing, pricing and sales were seen as sexy. Networks and IT were the geeks no one seemed to speak to or care about. … This isolation and excommunication of our technical team had created an environment of disillusion. If you wanted something done the answer was mostly ‘No – we have no budget and no time for that’. Our marketing team knew more about loyalty points … than about our own key product, the telecommunications network.”
Olaf Swantee, from his book, “4G Mobile Revolution.”

Great note here (picked up by James Crawshaw at Heavy Reading). It talks about the great divide that always seems to exist between Sales / Marketing and Network / Ops business units.

I’m really excited about the potential for next generation OSS / orchestration / NaaS (Network as a Service) architectures to narrow this divide though.

In this case:

  1. The Network is offered as microservices (let’s abstractly call them Resource Facing Services [RFS]);
  2. Sales / Marketing construct customer offerings (let’s call them Customer Facing Services [CFS]) from those RFS; and
  3. There’s a catalog / orchestration layer that marries the CFS with the cohesive set of RFS

The third layer becomes a meet-in-the-middle solution where Sales / Marketing comes together with Network / Ops – and where they can discuss what customers want and what the network can provide.

The RFS are sufficiently abstracted that Sales / Marketing doesn’t need to understand the network and complexity that sits behind the veil. Perhaps it’s time for Networks / Ops to shine, where the RFS can be almost as sexy as CFS (am I falling too far into the networks / geeky side of the divide?  🙂  )

The CFS are freely composable from the RFS that are available, allowing Sales / Marketing teams to build whatever they want, while the Network / Ops teams don’t have to be constantly reacting to new customer offerings.
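Here’s a toy sketch of what that meet-in-the-middle catalog layer might look like (all the service names are invented for illustration):

```python
# Hypothetical catalog: Network / Ops publish RFS; Sales / Marketing
# compose CFS from them without needing to see behind the veil.
rfs_catalog = {
    "rfs.broadband.50mb": "50Mbps access pipe",
    "rfs.voip.line": "Voice over IP line",
    "rfs.static.ip": "Static IPv4 address",
}

cfs_catalog = {
    "cfs.home.office.bundle": ["rfs.broadband.50mb", "rfs.voip.line", "rfs.static.ip"],
    "cfs.streaming.pack": ["rfs.broadband.50mb"],
}

def validate_offering(cfs_name: str) -> bool:
    """The orchestration layer refuses a CFS built on RFS that don't exist."""
    return all(rfs in rfs_catalog for rfs in cfs_catalog[cfs_name])

for cfs in cfs_catalog:
    print(cfs, "->", "orderable" if validate_offering(cfs) else "invalid")
```

The catalog is the shared language: Sales / Marketing work on the left-hand side, Network / Ops on the right, and the orchestration layer marries the two.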

I wonder if this revolution will give Olaf cause to re-write this section of his book in a few years, or whether we’ll still have the same cultural divide despite the exciting new tools.

Blown away by one innovation. Now to extend on it

Our most recent two posts, from yesterday and Friday, have talked about one stunningly simple idea that helps to overcome one of OSS‘ biggest challenges – data quality. Those posts have stimulated quite a bit of dialogue and it seems there is some consensus about the cleverness of the idea.

I don’t know if the idea will change the OSS landscape (hopefully), or just continue to be a strong selling point for CROSS Network Intelligence, but it has prompted me to think a little longer about innovating around OSS‘ biggest challenges.

Our standard approach of just adding more coats of process around our problems, or building up layers of incremental improvements isn’t going to solve them any time soon (as indicated in our OSS Call for Innovation). So how?

Firstly, we have to be able to articulate the problems! If we know what they are, perhaps we can then take inspiration from the CROSS innovation to spur us into new ways of thinking?

Our biggest problem is complexity. That has infiltrated almost every aspect of our OSS. There are so many posts about identifying and resolving complexity here on PAOSS that we might skip over that one in this post.

I decided to go back to a very old post that used the Toyota 5-whys approach to identify the real cause of the problems we face in OSS [I probably should update that analysis because I have a whole bunch of additional ideas now, as I’m sure you do too… suggested improvements welcomed BTW].

What do you notice about the root-causes in that 5-whys analysis? Most of the biggest causes aren’t related to system design at all (although there are plenty of problems to fix in that space too!). CROSS has tackled the data quality root-cause, but almost all of the others are human-centric factors – change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc. Yet, we always seem to see OSS as a technical problem.

How do you fix those people challenges? Ken Segall puts it this way, “When process is king, ideas will never be. It takes only common sense to recognize that the more layers you add to a process, the more watered down the final work will become.” Easier said than done, but a worthy objective!

How smart contracts might reduce risk and enhance trust on OSS projects

Last Friday, we spoke about all wanting to develop trusted OSS supplier / customer relationships but rarely finding them, and about a contrarian factor for why trust is so hard to achieve in OSS – complexity.

Trust is the glue that allows OSS projects to happen. Not only that, it forms a catch-22 with complexity. If OSS partners don’t trust each other, requirements, contracts, etc get more complex as a self-protection barrier. But with every increase in complexity comes an increasing challenge to deliver and hence a risk of further reduction in trust.

On a smaller scale, you’ve seen it on all projects – if the project starts to falter, increased monitoring attention is placed on the project, which puts increased administrative load on the project team and reduces the time they have to deliver the intended outcomes. Sometimes the increased admin / reporting gains the attention of sponsors and access to additional resources, but usually it just detracts from the available delivery capability.

Vish Nandlall also associates trust and complexity in organisational models in his LinkedIn post.

This is one of the reasons I’m excited about what smart contracts can do for the organisations and OSS projects of the future. Just as “Likes” and “Supplier Rankings” have facilitated online trust models, smart contract success rankings have the ability to do the same for OSS suppliers, large and small. For example, rather than needing to engage “Big Vendor A” to build your entire, monolithic OSS stack, if an operator develops simpler, more modular work breakdowns (eg microservices), then they can engage “Freelancer B” and “Small Vendor C” to make valuable contributions on smaller risk increments. Being lower in complexity and risk means B and C have a greater chance of engendering trust, while their historical contract success ranking gives them a strong incentive to treat trust as a key metric.
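To illustrate the idea (without tying it to any particular blockchain platform), a supplier’s trust score might simply be a function of their recorded contract outcomes:

```python
from dataclasses import dataclass

@dataclass
class ContractOutcome:
    """One completed work increment, as it might be recorded on-chain."""
    supplier: str
    delivered_on_time: bool
    accepted: bool

# A hypothetical public ledger of outcomes across many operators.
ledger = [
    ContractOutcome("Freelancer B", True, True),
    ContractOutcome("Freelancer B", True, False),
    ContractOutcome("Small Vendor C", False, True),
]

def trust_score(supplier: str) -> float:
    """Fraction of a supplier's increments delivered on time AND accepted."""
    records = [r for r in ledger if r.supplier == supplier]
    wins = sum(r.delivered_on_time and r.accepted for r in records)
    return wins / len(records) if records else 0.0

print(trust_score("Freelancer B"))  # 0.5
```

Because each increment is small, a supplier builds (or erodes) that score quickly – which is the behavioural lever that monolithic contracts never had.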

Fast / Slow OSS processes

Yesterday’s post discussed using smart contracts and Network as a Service (NaaS) to give a network the properties that will allow it to self-heal.

It mentioned a couple of key challenges, one being that there will always be physical activities such as cable-cut fixes, faulty equipment replacement, and physical equipment expansion / contraction / lifecycle management.

In a TM Forum presentation last week, Sylvain Denis of Orange proposed the theory of fast and slow OSS processes. Fast – soft factories (software and logical resources) within the operations stack are inherently automatable (notwithstanding the complexities and cost-benefit dilemma of actually building automations). Slow – physical factories are slow processes as they usually rely on human tasks and/or have location constraints.

Orchestration relies on programmatic interfaces to both. Not all physical factories have programmatic interfaces in all OSS / BSS stacks yet. Being able to handle dual-speed processes / factories will remain a key requirement for the foreseeable future.
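As a rough sketch of the dual-speed idea, an orchestrator might dispatch each task to a soft (fast) or physical (slow) factory behind a single programmatic interface. All the names here are invented:

```python
import uuid

def soft_factory(task: dict) -> dict:
    """Fast path: logical / software resources, fulfilled programmatically."""
    return {"task": task["name"], "status": "completed", "speed": "fast"}

def physical_factory(task: dict) -> dict:
    """Slow path: raises a work order for a human field task instead."""
    return {"task": task["name"], "speed": "slow",
            "status": "work-order " + uuid.uuid4().hex[:8]}

def orchestrate(task: dict) -> dict:
    # The orchestrator sees one interface; speed is just a routing decision.
    factory = soft_factory if task["type"] == "logical" else physical_factory
    return factory(task)

print(orchestrate({"name": "activate VLAN", "type": "logical"}))
print(orchestrate({"name": "replace faulty card", "type": "physical"}))
```

The hard part isn’t the routing, of course – it’s giving the slow, human-dependent factories a programmatic interface in the first place.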

Dan Pink’s 6 critical OSS senses

I recently wrote an article that spoke about the obsolescence of jobs in OSS, particularly as a result of Artificial Intelligence.

But an article by someone much more knowledgeable about AI than me, Rodney Brooks, had this to say, “We are surrounded by hysteria about the future of artificial intelligence and robotics — hysteria about how powerful they will become, how quickly, and what they will do to jobs.” He then describes The Seven Deadly Sins of AI Predictions here.

Back into my box I go, tail between my legs! Nonetheless, the premise of my article still holds true. The world of OSS is changing quickly and we’re constantly developing new automations, so our roles will inevitably change. My article also proposed some ideas on how to best plan our own adaptation.

That got me thinking… Many people in OSS are “left-brain” dominant, right? But left-brained jobs (ie repeatable, predictable, algorithmic) can be more easily outsourced or automated, making them more prone to obsolescence. That concept reminded me of Daniel Pink’s premise in A Whole New Mind, where right-brained skills become more valuable, so this is where our training should be focused. He argues that we’re on the cusp of a new era that will favor “conceptual” thinkers like artists, inventors and storytellers. [and OSS consultants??]

He also implores us to enhance six critical senses, namely:

  • Design – the ability to create something that’s emotionally and/or visually engaging
  • Story – to create a compelling and persuasive narrative
  • Symphony – the ability to synthesise new insights, particularly from seeing the big picture
  • Empathy – the ability to understand and care for others
  • Play – to create a culture of games, humour and play, and
  • Meaning – to find a purpose that will provide an almost spiritual fulfillment.

I must admit that I hadn’t previously thought about adding these factors to my development plan. Had you?
Do you agree with Dan Pink or will you continue to opt for left-brain skills / knowledge enhancement?

Will it take open source to unlock OSS potential?

I have this sense that the other OSS, open source software, holds the key to the next wave of OSS (Operational Support Systems) innovation.

Why? Well, as yesterday’s post indicated (through Nic Brisbourne), “it’s hard to do big things in a small way.” I’d like to put a slight twist on that concept by saying, “it’s hard to do big things in a fragmented way.” [OSS is short for fragmented after all]

The skilled resources in OSS are spread so widely across so many organisations (doing a lot of duplicated work) that we can’t reach a critical mass of innovation. Open source projects like ONAP represent a possible path to critical mass through sharing and augmenting code. They provide the foundation upon which bigger things can be built. If we don’t uplift the foundations across the whole industry quickly, we risk losing relevance (just ask our customers for their gripes list!).

BTW. Did you notice the news that six Linux Foundation open source networking projects have just merged into one? The six initial projects are ONAP, OPNFV, OpenDaylight, FD.io, PNDA, and SNAS. The new project is called the LF Networking Fund (LFN).

But you may ask how organisations can protect their trade secrets whilst embracing open source innovation. Derek Sivers provides a fascinating story and line of thinking in “Why my code and ideas are public.” I really recommend having a read about Valerie.

Alternatively, if you’re equating open source with free / unprofitable, this link provides a list of highly successful organisations with significant open source contributions. There are plenty of creative ways to be rewarded for open source effort.

Comment below if you agree or disagree about whether we need open source software to unlock the potential of OSS innovation.

What is your OSS answer : question ratio?

Experts know a lot…. obviously.
They have lots of answers… obviously.

There are lots of OSS experts. Combined, they know A LOT!!

Powerful indeed, but I’m not sure that’s what we need right now. I feel like we’re in a bit of an OSS innovation funk. The biggest improvements in OSS are coming from outside OSS – extrinsic improvement.

Where’s the intrinsic improvement coming from? Do we need someone to shake it up (do we need everyone to shake it up?)? Do we need new thinking to identify and create new patterns? To re-organise and revolutionise what the experts already know. Or do we need to ask the massive questions that re-frame the situation for the experts?

So, considering this funky moment in time, is the real expert the one who knows lots of answers… or the person who can catalyse change by asking the best mind-shift questions?

May I ask you – As an OSS expert, are you prouder of your answers…. or your questions?

To tackle that from a different angle – What is your answer : question ratio? Are you such an important expert that your day is so full of giving brilliant answers that you have no time left to ruminate and develop brilliant questions?

If so, can we take some of your answer time back and re-prioritise it please?

In the words of Socrates, “I cannot teach anybody anything, I can only make them think.”

Customers don’t invest in OSS. What do they invest in?

“An organisation buys an OSS, not because it wants an Operational Support System, but because it wants Operational Support.”

So if our customers are not investing in our OSS, what are they actually investing in? Easy! They’re investing in the ability to solve their own problems and opportunities in future.

If we don’t actually understand operations, what chance do we have to deliver operational support? We keep hearing the term, “customer experience this,” “CX that,” so it must be important right? Operational support staff might be a few steps removed from us (intentionally or unintentionally) but they are our “real” customers and the only way we can develop a solution that empathises with them is by spending time with them and listening (not always easy for us know-it-all OSS builder-types).

And just because we have a history in ops doesn’t mean we can assume we already know this customer’s operations. Operations are different at each organisation.

So, are we sure we understand the nature, extent and context of the unique problem/s that this customer needs to solve (not wants to solve)?

If the customer thinks they have a problem, they do have a problem

Omni-channel is an interesting concept because it generates two distinctly different views.
The customer will use whichever channel (eg digital, apps, contact-centre, IVR, etc) that they want to use.
The service provider will try to push the customer onto whichever channel suits the service provider best.

The customer will often want to use digital or apps, back-ended by OSS – whether that’s to place an order, make configuration changes, etc. The service provider is happy for the customer to use these low-cost, self-service channels.

But when the customer has a problem, they’ll often try to self-diagnose, then prefer to speak with a person who has the skills to trouble-shoot and work with the back-end systems and processes. Unfortunately, the service provider still tries to push the customer into low-cost, self-service channels. Ooops!

If the customer thinks they have a problem, they do have a problem (even if technically, they don’t).
Omni-channel means giving customers the channels that they want to work via, not the channels the service provider wants them to work via.
Call Volume Reduction (CVR) projects (which can overlap into our OSS) sometimes lose sight of this fact just because the service provider has their heart set on reducing costs.

Funding beyond the walls of operations

“You can have more – if you become more.”
Jim Rohn.

I believe that this is as true of our OSS as it is of ourselves.

Many people use the name Operational Support Systems to put an electric fence around our OSS, limiting their use to just operational activities. However, the reach, awareness and power of what they (we) offer goes far beyond that.

We have powerful insights to deliver to almost every department in an organisation – beyond just operations and IT. But first we need to understand the challenges and opportunities faced by those departments so that we can give them something useful.

That doesn’t necessarily mean expensive integrations – it can be as simple as unlocking the knowledge that already resides in our data.

Looking for additional funding for your OSS? Start by seeking ways to make it more valuable to more people… or even one step further back – seeking to understand the challenges beyond the walls of operations.

When low OSS performance is actually high performance

“It’s not unusual for something to be positioned as the high performance alternative. The car that can go 0 to 60 in three seconds, the corkscrew that’s five times faster, the punch press that’s incredibly efficient…
The thing is, though, that the high performance vs. low performance debate misses something. High at what?
That corkscrew that’s optimized for speed is more expensive, more difficult to operate and requires more maintenance.
That car that goes so fast is also more difficult to drive, harder to park and generally a pain in the neck to live with.
You may find that a low-performance alternative is exactly what you need to actually get your work done. Which is the highest performance you can hope for.”
Seth Godin, in this article, What sort of performance?

Whether selecting a vendor / product, designing requirements or building an OSS solution, we can sometimes lose track of what level of performance is actually required to get the work done, can’t we?

How many times have you seen a requirement sheet that specifies a Ferrari, but you know the customer lives in a location with potholed and cobblestoned roads? Is it right to spec them – sell them – build them – charge them for a Ferrari?

I have to admit to being guilty of this one too. I have gotten carried away with what the OSS can do, nearer the higher-performance end of the spectrum, rather than taking the more pragmatic view of what the customer really needs.

Automations, custom reports and integrations are the perfect OSS examples of low performance actually being high performance. We spend a truckload of money on these types of features to avoid manual tasks (curse having to do those manual tasks)… when a simple cost-benefit analysis would reveal that it makes a lot more sense to stay manual in many cases.

Keeping the OSS executioner away

“With the increasing pace of change, the moment a research report, competitive analysis, or strategic plan is delivered to a client, its currency and relevance rapidly diminishes as new trends, issues, and unforeseen disrupters arise.”
Soren Kaplan.

By the same token as the quote above, does it follow that the currency and relevance of an OSS rapidly diminishes as soon as it is delivered to a client?

In the case of research reports, analyses and strategic plans, currency diminishes because the static data sets upon which they’re built are also losing currency. That’s not the case for an OSS – they are data collection and processing engines for streaming (ie constantly refreshing) data. [As an aside – relevance can still decrease if data quality is steadily deteriorating, irrespective of currency. Meanwhile, currency can decrease if the ever-expanding pool of OSS data becomes so large as to be unmanageable, or if responsiveness is usurped by newer data processing technologies]

However, as with research reports, analyses and strategic plans, the value of an OSS is not so much related to the data collected, but the questions asked of, and answers / insights derived from, that data.

Apart from the asides mentioned above, the currency and relevance of an OSS only diminish as a result of new trends, issues and disrupters if new questions cannot be, or are not being, asked with it.

You’ll recall from yesterday’s post that, “An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data,” is as true of OSS tools as it is of OSS consultants. I’m constantly surprised that so few OSS are designed with intuitive, flexible data interrogation tools built in. It seems that product teams are happy to delegate that responsibility to off-the-shelf reporting tools or leave it up to the client to build their own.

Raising the OSS horizon

With the holiday period looming for many of us, we will have the head-space to reflect – on the year(s) gone and to ponder the one(s) upcoming. I’d like to pose the rhetorical question, “What do you expect to reflect on?”

It’s probably safe to say that a majority of OSS experts are engaged in delivery roles. Delivery roles tend to require great problem-solving skills. That’s one of the exciting aspects of being an OSS expert after all.

There’s one slight problem though. Delivery roles tend to have a focus on the immediacy of delivery, a short-term problem-solving horizon. This generates incremental improvements like new dashboards within an existing dashboard framework, refining processes, next release software upgrades, releasing new stuff that adds to the accumulation of tech-debt, etc, etc.

That’s great, highly talented, admirable work, often exactly what our customers are requesting, but not necessarily what our industry needs most.

We need the revolutionary, not the evolutionary. And that means raising our horizons – to identify and comprehend the bigger challenges and then solving those. That is the intent of the OSS Call for Innovation – to lift our vision to a more distant horizon.

When you reflect during this holiday period, how distant will your horizon be?

PS. Upon your own reflection, are there additional big challenges or exponential opportunities that should be captured in the OSS Call for Innovation?