The OSS MoSCoW requirement prioritisation technique

Since the soccer World Cup is currently taking place in Russia, I thought I'd include a reference to the MoSCoW technique in today's blog. It can be used as part of your vendor selection process for OSS requirement prioritisation.

The term MoSCoW itself is an acronym derived from the first letter of each of four prioritization categories (Must have, Should have, Could have, and Won’t have), with the interstitial Os added to make the word pronounceable.”
Wikipedia.

It can be used to rank the importance an OSS operator gives to each of their requirements, such as the sample below:

Index | Description    | MoSCoW
1     | Requirement #1 | Must have (mandatory)
2     | Requirement #2 | Should have (desired)
3     | Requirement #3 | Could have (optional)
4     | Requirement #4 | Won't have (not required)

But that's only part of the story in a vendor selection process – the operator's wish-list. It needs to be cross-referenced with a vendor or solution integrator's ability to deliver on those wishes. This is where the following compliance codes come in:

  • FC – Fully Compliant – This functionality is available in the baseline installation of the proposed OSS
  • NC – Non-compliant – This functionality is not able to be supported by the proposed OSS
  • PC – Partially Compliant – This functionality is partially supported by the proposed OSS (a vendor description of what is / isn’t supported is required)
  • WC – Will Comply – This functionality is not available in the baseline installation of the proposed software, but will be provided via customisation of the proposed OSS
  • CC – Can Comply – This functionality is possible, but not part of the offer and can be provided at additional cost

So a quick example might look like the following:

Index | Description    | MoSCoW                | Compliance | Comment
1     | Requirement #1 | Must have (mandatory) | FC         |
2     | Requirement #2 | Should have (desired) | PC         | Can only deliver the functionality of 2a, not 2b in this solution

Yellow columns are created by the operator / customer; blue cells are populated by the vendor / SI. I usually also add blue columns to indicate which product module delivers the compliance, plus room to pose questions / assumptions back to the customer.
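To show how the two halves knit together, here's a minimal sketch (hypothetical requirement IDs and vendor responses, nothing from the template itself) of cross-referencing the operator's MoSCoW priorities with a vendor's compliance codes, flagging any mandatory requirement the vendor can't meet in the baseline product:

```python
# Hypothetical data purely for illustration - not the actual template structure.
requirements = {
    "REQ-1": "Must have",
    "REQ-2": "Should have",
    "REQ-3": "Could have",
}

vendor_response = {   # compliance code per requirement, as self-assessed by the vendor
    "REQ-1": "FC",
    "REQ-2": "PC",
    "REQ-3": "NC",
}

def gating_issues(reqs: dict[str, str], response: dict[str, str]) -> list[str]:
    """Return the mandatory requirements that aren't fully compliant in the baseline product."""
    return [
        req_id
        for req_id, priority in reqs.items()
        if priority == "Must have" and response.get(req_id, "NC") != "FC"
    ]

print(gating_issues(requirements, vendor_response))   # [] means no show-stoppers for this vendor
```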

BTW. I'd only heard of the MoSCoW acronym recently, but I've been using a similar technique for years. My prioritisation approach is a little simpler: I just use Mandatory, Preferred and Optional.

The OSS dart-board analogy

The dartboard, by contrast, is not remotely logical, but is somehow brilliant. The 20 sector sits between the dismal scores of five and one. Most players aim for the triple-20, because that’s what professionals do. However, for all but the best darts players, this is a mistake. If you are not very good at darts, your best opening approach is not to aim at triple-20 at all. Instead, aim at the south-west quadrant of the board, towards 19 and 16. You won’t get 180 that way, but nor will you score three. It’s a common mistake in darts to assume you should simply aim for the highest possible score. You should also consider the consequences if you miss.”
Rory Sutherland
on Wired.

When aggressive corporate goals and metrics are combined with brilliant solution architects, we tend to aim for triple-20 with our OSS solutions, don't we? The problem is, when it comes to delivery, we don't tend to have the laser-sharp precision of a professional darts player, do we? No matter how experienced we are, there tend to be hidden surprises – some technical, some personal (or should I say inter-personal?), some contractual, etc – that deflect our aim.

The OSS dart-board analogy asks whether we should set the lofty goal of a triple-20 [yellow circle below], with a high risk of dismal results if we miss (think too about the OSS stretch-goal rule); or whether we're better off targeting the 19/16 corner of the board [blue circle below], with scaled-back objectives but a corresponding reduction in risk.

OSS Dart-board Analogy
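To make the risk/reward contrast concrete, here's a toy model in Python. The probabilities are purely illustrative assumptions (not real darts statistics): the expected score per dart comes out roughly similar, but the aggressive aim carries a far higher chance of a dismal dart.

```python
# Toy risk/reward model for the dart-board analogy. All probabilities are
# illustrative assumptions, not real darts statistics.

def profile(outcomes: dict[int, float]) -> tuple[float, float]:
    """Return (expected score, probability of a 'dismal' dart scoring 5 or less)."""
    expected = sum(score * p for score, p in outcomes.items())
    dismal = sum(p for score, p in outcomes.items() if score <= 5)
    return expected, dismal

# Aiming at treble-20: an amateur rarely hits the tiny treble bed, and a miss
# sprays across the neighbouring 20, 5 and 1 singles.
treble_20 = {60: 0.05, 20: 0.95 / 3, 5: 0.95 / 3, 1: 0.95 / 3}

# Aiming at the broad 19/16 quadrant: nearly every dart lands on a solid single.
quadrant_19_16 = {19: 0.25, 16: 0.25, 7: 0.25, 3: 0.25}

for name, outcomes in [("treble-20", treble_20), ("19/16 quadrant", quadrant_19_16)]:
    exp, dismal = profile(outcomes)
    print(f"{name:15s} expected {exp:5.1f} per dart, chance of a dismal dart {dismal:.0%}")
```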

Roland Leners posed the following brilliant question, "What if we built OSS and IT systems around people's willingness to change instead of against corporate goals and metrics? Would the corporation be worse off at the end?" in response to a recent post called, "Did we forget the OSS operating model?"

There are too many facets to Roland's question to cover here, but I suspect that in many cases the corporate goals / metrics are akin to the triple-20 focus, whilst the team's willingness to change aligns with the 19/16 corner. And the latter is bound to reduce delivery risk.

I’d love to hear your thoughts!!

1.045 Trillion reasons to re-consider your OSS strategy

The global Internet of Things (IoT) market will be worth $1.1 trillion in revenue by 2025 as market value shifts from connectivity to platforms, applications and services. By that point, there will be more than 25 billion IoT connections (cellular and non-cellular), driven largely by growth in the industrial IoT market. The Asia Pacific region is forecast to become the largest global IoT region in terms of both connections and revenue.
Although connectivity revenue will grow over the period, it will only account for 5 per cent of the total IoT revenue opportunity by 2025, underscoring the need for operators to expand their capabilities beyond connectivity in order to capture a greater share of market value."
GSMA Intelligence, referred to here.

Let's look at these projected numbers. The GSMA Intelligence report forecasts that only 5 cents in every dollar of IoT spend (of a $1.1T market opportunity) will be allocated to connectivity. That leaves $1.045T on the table if network operators focus only on connectivity.
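As a quick sanity-check on that headline number (using only the figures quoted above):

```python
# Back-of-envelope check on the headline number (figures from the quoted
# GSMA Intelligence forecast; all values in trillions of USD).
total_iot_market = 1.1          # forecast IoT revenue by 2025
connectivity_share = 0.05       # connectivity's share of that revenue

connectivity_revenue = total_iot_market * connectivity_share      # 0.055T, ie $55B
beyond_connectivity = total_iot_market - connectivity_revenue     # 1.045T

print(f"Connectivity-only opportunity: ${connectivity_revenue:.3f}T")
print(f"Left on the table            : ${beyond_connectivity:.3f}T")
```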

Traditional OSS tend to focus on managing connectivity – less so on managing marketplaces, customer-facing platforms and applications. Does that headline number – $1.045T – provide you with an incentive to re-consider what your OSS manages and future use cases?

IoT OSS market opportunity

IoT requires slightly different OSS thinking:

  • Rather than integrating to a (relatively) small number of device types, IoT will have an almost infinite number of sensor types from a huge range of suppliers.
  • Rather than managing devices individually, their sheer volume means that devices will need to be increasingly managed in cohorts via policy controls.
  • Rather than a fairly narrow set of network-comms based services, functionality explodes into diverse areas like metering, vehicle fleets, health-care, manufacturing, asset controls, etc, so IoT controllers will need to be developed by a much longer tail of suppliers (meaning open development platforms and/or scalable certification processes to integrate into the IoT controller platforms).
  • There are undoubtedly many, many additional differences.

Caveat: I haven’t evaluated the claims / numbers in the GSMA Intelligence report. This blog is just to prompt a thought-experiment around hypothetical projections.

The paint the fence automation analogy

There are so many actions that could be automated by / with / in our OSS. It can be hard to know where to start, can't it? One approach is to look at where the largest amounts of manual effort are being expended by operators. Another is to employ the "paint the fence" analogy.

When envisaging fulfilment workflows, it’s easiest to picture actions that start with a customer and wipe down through the OSS / BSS stack.

When envisaging assurance workflows, it’s easiest to picture actions that start in the network and wipe up through the OSS / BSS stack.
Paint the fence OSS analogy

Of course there are exceptions to these rules, but to go a step further, wipe down = revenue, wipe up = costs. We want to optimise both through automation of course.

Like ensuring paint coverage when painting a fence, OSS automation has the potential to best improve Customer Experience coverage when we use brushstrokes down and up.

On the downstroke, it’s through faster service activations, quotes, response times, etc. On the upstroke, it’s through network reliability (downtime reduction), preventative maintenance, expedited notifications, etc.

You'll notice that these are indicators that are favourable to the customers. I'm sure it won't take much sleuthing to see the association with trailing metrics that are favourable to the network operators though, right?

OSS, the great multipliers

Skills multiply labors by two, five, 10, 50, 100 times. You can chop a tree down with a hammer, but it takes about 30 days. That’s called labor. But if you trade the hammer in for an ax, you can chop the tree down in about 30 minutes. What’s the difference in 30 days and 30 minutes? Skills—skills make the difference.”
Jim Rohn
, here.

OSS can be great labour multipliers. They can deliver baked-in “skills” that multiply labors by two, five, 10, 50, 100 times. They can be not just a hammer, not just an axe, but a turbo-charged chainsaw.

The more pertinent question to ask of our OSS though is, "Why are we chopping this tree down?" Is it the right tree to achieve a useful outcome? Do the benefits outweigh the cost of the hammer / axe / chainsaw?

What if our really clever OSS engineers come up with a brilliant design that can reduce a task from 30 days to 30 minutes… but it takes us 35 days to design/build/test/deploy the customisation for a once-off use? We're clearly cutting down the wrong tree.

What if instead we could reduce this same task from 30 days to 1 day with just a quick analysis and process change? It’s nowhere near as sexy or challenging for us OSS engineers though. The very clever 30 minute solution is another case of “just because we can, doesn’t mean we should.”
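As a rough way to test which tree we're chopping, here's a back-of-envelope sketch. The 35-day, 30-minute and 1-day figures come from the examples above; the 2 days of analysis effort for the process change is my own assumption, purely for illustration.

```python
# "Which tree are we chopping?" check - hypothetical figures only.

def net_days_saved(build_days: float, manual_days: float, new_days: float, runs: int) -> float:
    """Days saved across 'runs' executions, minus the one-off cost of building the change."""
    return runs * (manual_days - new_days) - build_days

# Clever customisation: 35 days to build, reduces a 30-day task to ~30 minutes, used once.
clever = net_days_saved(build_days=35, manual_days=30, new_days=0.02, runs=1)

# Simple process change: assume ~2 days of analysis, reduces the task to 1 day, used once.
simple = net_days_saved(build_days=2, manual_days=30, new_days=1, runs=1)

print(f"Clever 30-minute solution: {clever:+.1f} days")   # negative = wrong tree
print(f"Simple process change    : {simple:+.1f} days")
```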

How to identify a short-list of best-fit OSS suppliers for you

In yesterday’s post, we talked about how to estimate OSS pricing. One of the key pillars of the approach was to first identify a short-list of vendors / integrators best-suited to implementing your specific OSS, then working closely with them to construct a pricing model.

Finding the right vendor / integrator can be a complex challenge. There are dozens, if not hundreds, of OSS / BSS solutions to choose from, and there are rarely like-for-like comparators. There are some generic comparison tools such as Gartner's Magic Quadrant, but there's no way they can cater for the nuanced requirements of each OSS operator.

Okay, so you don't want to hear about problems. You want solutions. Well, today's post provides a description of the approach we've used and refined across the many product / vendor selection processes we've conducted with OSS operators.

We start with a short-listing exercise. You won’t want to deal with dozens of possible suppliers. You’ll want to quickly and efficiently identify a small number of candidates that have capabilities that best match your needs. Then you can invest a majority of your precious vendor selection time in the short-list. But how do you know the up-to-date capabilities of each supplier? We’ll get to that shortly.

For the short-listing process, I use a requirement gathering and evaluation template. You can find a PDF version of the template here. Note that the content within it is outdated and I now tend to use a more benefit-centric classification rather than a feature-centric classification, but the template itself is still applicable.

STEP ONE – Requirement Gathering
The first step is to prepare a list of requirements (as per page 3 of the PDF):
Requirement Capture.
The left-most three columns in the diagram above (in white) are filled out by the operator, classifying each requirement and how important it is (ie mandatory, etc). The depth of requirements (column 2) is up to you and can range from specific technical details to high-level objectives. They could even take the form of user stories or intended benefits.

STEP TWO – Issue your requirement template to a list of possible vendors
Once you’ve identified the list of requirements, you want to identify a list of possible vendors/integrators that might be able to deliver on those requirements. The PAOSS vendor/product list might help you to identify possible candidates. We then send the requirement matrix to the vendors. Note that we also send an introduction pack that provides the context of the solution the OSS operator needs.

STEP THREE – Vendor Self-analysis
The right-most three columns in the diagram above (in aqua) are designed to be filled out by the vendor/integrator. The suppliers are best suited to fill out these columns because they best understand their own current offerings and capabilities.
Note that the Status column is a pick-list of compliance levels, where FC = Fully Compliant. See page 2 of the template for the other definitions. Given that it is a self-assessment, you may choose to adjust the Status (vendor self-rankings) if you know better and/or ask further questions to validate the assessments.
The “Module” column identifies which of the vendor’s many products would be required to deliver on the requirement. This column becomes important later on as it will indicate which product modules are most important for the overall solution you want. It may allow you to de-prioritise some modules (and requirements) if price becomes an issue.

STEP FOUR – Compare Responses
Once all the suppliers have returned their matrix of responses, you can compare them at a high level based on the summary matrix (on page 1 of the template).
OSS Requirement Summary
For each of the main categories, you’ll be able to quickly see which vendors are the most FC (Fully Compliant) or NC (Non-Compliant) on the mandatory requirements.
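As an illustration of that comparison step, here's a minimal sketch (with an assumed data layout and made-up responses, not the template's actual format) that rolls vendor responses up into a summary of compliance levels on the mandatory requirements, per category:

```python
# Roll vendor responses up into a "summary matrix" view - illustrative data only.
from collections import Counter, defaultdict

# (category, MoSCoW priority, vendor, compliance code)
responses = [
    ("Inventory", "Must have", "Vendor A", "FC"),
    ("Inventory", "Must have", "Vendor B", "NC"),
    ("Assurance", "Must have", "Vendor A", "PC"),
    ("Assurance", "Must have", "Vendor B", "FC"),
    ("Assurance", "Should have", "Vendor A", "WC"),  # ignored below (not mandatory)
]

summary: dict[tuple[str, str], Counter] = defaultdict(Counter)
for category, priority, vendor, compliance in responses:
    if priority == "Must have":
        summary[(vendor, category)][compliance] += 1

for (vendor, category), counts in sorted(summary.items()):
    print(f"{vendor} / {category}: {dict(counts)}")
```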

Of course you’ll need to analyse more deeply than just the Summary Matrix, but across all the vendor selection processes we’ve been involved with, there has always been a clear identification of the suppliers of best fit.

Hopefully the process above is fairly clear. If not, contact us and we’d be happy to guide you through the process.

Getting a price estimate for your OSS

Sometimes a simple question deserves a simple answer: "A piece of string is twice as long as half its length". This is a brilliant answer… if you have its length… Without a strategy, how do you know if it is successful? It might be prettier, but is it solving a defined business problem, saving or making money, or fulfilling any measurable goals? In other words: can you measure the string?"
Carmine Porco
here.

I was recently asked by a university student how to obtain OSS pricing for a paper-based assignment. To make things harder, the target client was to be a tier-2 telco with a small SDN / NFV network.

As you probably know already, very few OSS providers make their list prices known. The few vendors that do tend to focus on the high volume, self-serve end of the market, which I’ll refer to as “Enterprise Grade.” I haven’t heard of any “Telco Grade” OSS suppliers making their list prices available to the public.

There are so many variables when finding the right OSS for a customer’s needs and the vendors have so much pricing flexibility that there is no single definitive number. There are also rarely like-for-like alternatives when selecting an OSS vendor / product. Just like the fabled piece of string, the best way is to define the business problem and get help to measure it. In the case of OSS pricing, it’s to design a set of requirements and then go to market to request quotes.

Now, I can't imagine many vendors being prepared to invest their valuable time in developing pricing based on paper studies, but I have found them to be extremely helpful when there's a real buyer. I'll caveat that by saying that if the customer (eg service provider) you're working with is prepared to invest the time to help put a list of requirements together, then you have a starting point to approach the market for customised pricing.

We’ve run quite a few of these vendor selections and have refined the process along the way to streamline for vendors and customers alike. Here’s a template we’ve used as a starting point for discussions with customers:

OSS vendor selection process

Note that each customer will end up with a different mapping of the diagram above to suit their specific needs. We also have existing templates (eg Questionnaire, Requirement Matrix, etc) to support the selection process where needed.

If you’re interested in reading more about the process of finding the right OSS vendor and pricing for you, click here and here.

Of course, we’d also be delighted to help if you need assistance to develop an OSS solution, get OSS pricing estimates, develop a workable business case and/or find the right OSS vendor/products for you.

Using OSS/BSS to steer the ship

For network operators, our OSS and BSS touch most parts of the business. The network, and the services it carries, are core business, so a majority of business units contribute to that core business. As such, our OSS and BSS provide many of the metrics used by those business units.

This is a privileged position to be in. We get to see what indicators are most important to the business, as well as the levers used to control those indicators. From this privileged position, we also get to see the aggregated impact of all these KPIs.

In your years of working on OSS / BSS, how many times have you seen key business indicators conflict between business units? They generally become most apparent on cross-team projects, where the objectives of one internal team directly conflict with those of another.

In theory, a KPI tree can be used to improve consistency and ensure all business units are pulling towards a common objective… [but what if, like most organisations, there are many objectives? Does that mean you have a KPI forest and the trees end up fighting for light?]

But here’s a thought… Have you ever seen an OSS/BSS suite with the ability to easily build KPI trees? I haven’t. I’ve seen thousands of standalone reports containing myriad indicators, but never a consolidated roll-up of metrics. I have seen a few products that show operational metrics rolled-up into a single dashboard, but not business metrics. They appear to have been designed to show an information hierarchy, but not necessarily with KPI trees in mind specifically.
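For what it's worth, here's a minimal sketch (not based on any product) of what base KPI-tree functionality could look like: leaf metrics roll up into parent indicators through weights, which is exactly the consolidated view I've never seen offered out of the box.

```python
# A toy KPI tree with weighted roll-up - illustrative structure and values only.
from dataclasses import dataclass, field

@dataclass
class KpiNode:
    name: str
    value: float = 0.0   # leaf metric value (normalised 0..1)
    children: list[tuple["KpiNode", float]] = field(default_factory=list)  # (child, weight)

    def score(self) -> float:
        """Leaf nodes return their own value; branches return the weighted roll-up."""
        if not self.children:
            return self.value
        return sum(child.score() * weight for child, weight in self.children)

# Illustrative tree: a corporate objective fed by two business-unit KPIs.
activation_speed = KpiNode("Time to activate (Ops)", value=0.7)
network_uptime = KpiNode("Network availability (Eng)", value=0.9)
customer_experience = KpiNode(
    "Customer experience",
    children=[(activation_speed, 0.4), (network_uptime, 0.6)],
)

print(f"{customer_experience.name}: {customer_experience.score():.2f}")
```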

What do you think? Does it make sense for us to offer KPI trees as base product functionality from our reporting modules? Would this functionality help our OSS/BSS add more value back into the businesses we support?

Operator involvement on OSS projects

You cannot simply have your end users give some specifications then leave while you attempt to build your new system. They need to be involved throughout the process. Ultimately, it is their tool to use.”
José Manuel De Arce
here.

As an OSS consultant and implementer, I couldn’t agree more with José’s quote above. José, by the way is an OSS Manager at Telefónica, so he sits on the operator’s side of the implementation equation. I’m glad he takes the perspective he does.

Unfortunately, many OSS operators are so busy with operations, they don’t get the time to help with defining and tuning the solutions that are being built for them. It’s understandable. They are measured by their ability to keep the network (and services) running and in a healthy state.

From the implementation side, it reminds me of this old comic:
Too busy

The comic reminds me of OSS implementations for two reasons:

  1. Without ongoing input from operators, you can only guess at how the new tools could improve their efficacy and mitigate their challenges
  2. Without ongoing involvement from operators, they don’t learn the nuances of how the new tool works or the scenarios it’s designed to resolve… what I refer to as an OSS apprenticeship

I’ve seen it time after time on OSS implementations (and other projects for that matter) – [As a customer] you get back what you put in.

The Goldilocks OSS story

We all know the story of Goldilocks and the Three Bears where Goldilocks chooses the option that’s not too heavy, not too light, but just right.

The same model applies to OSS – finding / building a solution that’s not too heavy, not too light, but just right. To be honest, we probably tend to veer towards the too heavy, especially over time. We put more complexity into our architectures, integrations and customisations… because we can… which end up burdening us and our solutions.

A perfect example is AT&T offering its ECOMP project (now part of the even bigger Linux Foundation Network Fund) up for open source in the hope that others would contribute and help mature it. As a fairytale analogy, it’s an admission that it’s too heavy even for one of the global heavyweights to handle by itself.

The ONAP Charter has some great plans including, “…real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.”

These are fantastic ambitions to strive for, especially at the Papa Bear end of the market. I have huge admiration for those who are creating and chasing bold OSS plans. But what about the large majority of customers that fall into the Goldilocks category? Is our field of vision so heavy (ie so grand and so far into the future) that we're missing the opportunity to solve the business problems of our customers and make a difference for them with lighter solutions today?

TM Forum’s Digital Transformation World is due to start in just over two weeks. It will be fascinating to see how many of the presentations and booths consider the Goldilocks requirements. There probably won’t be many because it’s just not as sexy a story as one that mentions heavy solutions like policy-driven orchestration, zero-touch automation, AI / ML / analytics, self-scaling / self-healing networks, etc.

[I should also note that I fall into the category of loving to listen to the heavy solutions too!! ]

Powerful ranking systems with hidden variables

There are ratings and rankings that ostensibly exist to give us information (and we are supposed to use that information to change our behavior).
But if we don’t know what variables matter, how is it supposed to be useful?
Just because it can be easily measured with two digits doesn’t mean that it’s accurate, important or useful.
[Marketers learned a long time ago that people love rankings and daily specials. The best way to boost sales is to put something in a little box on the menu, and, when in doubt, rank things. And sometimes people even make up the rankings.]

Seth Godin
here.

Are there any rankings that are made up in OSS? Our OSS collect an amazing amount of data so there’s rarely a need to make up the data we present.

Are they based on hidden variables? Generally, we use raw counters and / or well known metrics so we’re usually quite transparent with what our OSS present.

What about when we're trying to select the right vendor to fulfill the OSS needs of our organisation? As Seth states, "Just because it can be easily measured with two digits* doesn't mean that it's accurate, important or useful." [* In this case, I'm thinking of a 2 x 2 matrix].

The interesting thing about OSS ranking systems is that there is so much nuance in the variables that matter. There are potentially hundreds of evaluation criteria and even vast contrasts in how to interpret a given criterion.

For example, a criterion might be "time to activate a service." A vendor might have a really efficient workflow for activating single services manually, but no bulk-load or automation interface. For one operator (which does single activations manually), the TTAS metric for that product would be great, but for another operator (which does thousands of activations a day and tries to automate), the TTAS metric for the same product would be awful.
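A tiny sketch (hypothetical criteria, scores and weights) of why the hidden variables matter: the same product, scored on the same criteria, ranks very differently once each operator applies its own weighting.

```python
# Same product, same criteria scores - different operator weightings, different result.

criteria_scores = {                 # vendor's capability per criterion, 0..10 (made up)
    "manual_activation_workflow": 9,
    "bulk_load_automation": 2,
}

operator_weights = {
    "small_manual_operator": {"manual_activation_workflow": 0.9, "bulk_load_automation": 0.1},
    "high_volume_operator": {"manual_activation_workflow": 0.2, "bulk_load_automation": 0.8},
}

for operator, weights in operator_weights.items():
    score = sum(criteria_scores[criterion] * weight for criterion, weight in weights.items())
    print(f"{operator}: weighted TTAS-related score = {score:.1f} / 10")
```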

As much as we love ranking systems… there are hundreds of products on the market (in some cases, hundreds of products in a single operator’s OSS stack), each fitting unique operator needs differently… so a 2 x 2 matrix is never going to cut it as a vendor selection tool… not even as a short-listing tool.

Better to build yourself a vendor selection framework. You can find a few OSS product / vendor selection hints here based on the numerous vendor / product selections I’ve helped customers with in the past.

Designing OSS to cope with greater transience (part 2)

This is the second episode discussing the significant change to OSS thinking caused by modern network models. Yesterday’s post discussed how there has been a paradigm shift from static networks (think PDH) to dynamic / transient networks (think SDN/NFV) and that OSS are faced with a similar paradigm shift in how they manage modern network models.

We can either come up with adaptive / algorithmic mechanisms to deal with that transience, or mimic the “nailed-up” concepts of the past.

Let's take Carrier Ethernet as a basis for explanation, with its E-Line service model [We could similarly analyse E-LAN and E-Tree service models, but maybe another day].

An E-Line is a point-to-point service between an A-end UNI (User-Network Interface) and a Z-end UNI, connected by an EVC (Ethernet Virtual Connection). The EVC is a conceptual pipe that is carried across a service provider’s network – a pipe that can actually span multiple network assets / links.

In our OSS, we can apply either:

  1. Abstract Model – Just mimic the EVC as a point-to-point connection between the two UNIs
  2. Specific Model – Attempt to tie network assets / links associated with the conceptual pipe to the EVC construct

The abstract OSS can be set up just once and delegate the responsibility of real-time switching / transience within the EVC to network controllers / EMS. This is the simpler model, but doesn’t add as much value to assurance use-cases in particular.

The specific OSS must either have the algorithms / policies to dynamically manage the EVC or to dynamically associate assets to the EVC. This is obviously much more sophisticated, but provides operators with a more real-time view of network utilisation and health.
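Here's a minimal sketch (illustrative classes only, not a standard information model) of the difference between the two options in an OSS inventory:

```python
# Contrasting the abstract and specific EVC models - illustrative classes only.
from dataclasses import dataclass, field

@dataclass
class AbstractEvc:
    """Option 1: mimic the EVC as a simple A-end to Z-end pipe and delegate the
    underlying path (and its transience) to the network controllers / EMS."""
    evc_id: str
    a_end_uni: str
    z_end_uni: str

@dataclass
class SpecificEvc(AbstractEvc):
    """Option 2: additionally tie the (transient) network assets / links carrying the
    EVC, kept current by policy and/or feeds from the controller."""
    supporting_links: list[str] = field(default_factory=list)

    def reroute(self, new_links: list[str]) -> None:
        # In the specific model the OSS must keep pace with the network's
        # real-time re-routing decisions.
        self.supporting_links = new_links

evc = SpecificEvc("EVC-001", a_end_uni="UNI-SYD-01", z_end_uni="UNI-MEL-07",
                  supporting_links=["LINK-1", "LINK-4", "LINK-9"])
evc.reroute(["LINK-2", "LINK-4", "LINK-9"])
print(evc)
```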

An OSS conundrum with many perspectives

Even aside from the OSS impact, it illustrates the contrast between “bottom-up” planning of networks (new card X is cheaper/has more ports) and “top down” (what do we need to change to reduce our costs/increase capacity).”
Robert Curran.

Robert’s quote above is in response to a post called “Trickle-down impact planning.”

Robert makes a really interesting point. Adding a new card type is a relatively common event for a big network operator. It’s a relatively minor challenge for the networks team – a BAU (Business as Usual) activity in fact. But if you follow the breadcrumbs, the impact to other parts of the business can be quite significant.

Your position in your organisation possibly dictates your perspective on the alternative approaches Robert discusses above. Networks, IT, planning, operations, sales, marketing, projects/delivery, executive – all will have different impacts and a different field of view on the conundrum. This makes it an interesting problem to solve – which viewpoint is the “right” one to tackle the challenge from?

My “solutioning” background tends to align with the top down viewpoint, but today we’ll take a look at this from the perspective of how OSS can assist from either direction.

Bottom Up: In an ideal world, our OSS and associated processes would be able to identify a new card (or similar) and just ripple changes out without interface changes. The first OSS I worked on did this really well. However, it was a “single-vendor” solution so the ripples were self-contained (mostly). This is harder to control in the more typical “best-of-breed” OSS stacks of today. There are architectural mechanisms for controlling the ripples out but it’s still a significant challenge to solve. I’d love to hear from you if you’re aware of any vendors or techniques that do this really well.

Top Down: This is where things get interesting. Should top-down impact analysis even be the task of an OSS/BSS? Since it's a common / BAU operational task, you could argue it is. If so, how do we create OSS tools* that help with organisational impact / change / options analysis and not just network impact analysis? How do we build the tools* that can do the following (a rough sketch follows the list):

  1. Predict the rippling impacts
  2. Allow us to estimate the impact of each
  3. Present options (if relevant) and
  4. Provide a cost-benefit comparison to determine whether any of the options are viable for development

* When I say “tools,” this might be a product, but it could just mean a process, data extract, etc.
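As a thought experiment on point 1 above, here's a rough sketch (hypothetical dependencies, not product functionality) of predicting rippling impacts by walking a dependency graph outward from the changed item:

```python
# Walk a dependency graph to find everything downstream of a change - made-up data.
from collections import deque

# "If X changes, these items may be impacted."
dependents = {
    "new_card_type": ["inventory_model", "sparing_policy"],
    "inventory_model": ["service_qualification", "capacity_reports"],
    "sparing_policy": ["field_ops_process"],
    "service_qualification": ["product_catalog"],
}

def ripple_impacts(changed_item: str) -> list[str]:
    """Breadth-first walk of everything downstream of the changed item."""
    impacted, queue = [], deque(dependents.get(changed_item, []))
    seen = {changed_item}
    while queue:
        item = queue.popleft()
        if item in seen:
            continue
        seen.add(item)
        impacted.append(item)
        queue.extend(dependents.get(item, []))
    return impacted

print(ripple_impacts("new_card_type"))   # impacted items, in breadth-first order
```

Steps 2-4 (estimation, options and cost-benefit comparison) would then hang off each impacted item, whether in a product or just a spreadsheet.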

I have the sense that this type of functionality falls into the category of, “just because you can, doesn’t mean you should… build it into your OSS.” Have you seen an OSS/BSS with this type of impact analysis functionality built-in?

Designing an OSS from NFRs backwards

When we’re preparing a design (or capturing requirements) for a new or updated OSS, I suspect most of us design with functional requirements (FRs) in mind. That is, our first line of thinking is on the shiny new features or system behaviours we have to implement.

But what if we were to flip this completely? What if we were to design against Non-Functional Requirements (NFRs) instead? [In case you're not familiar with NFRs, they're the requirements that measure how well a solution performs (eg speed, scalability, availability) rather than what features / behaviours it provides]

What if we already have all the really important functionality in our OSS (the 80/20 rule suggests you will), but those functions are just really inefficient to use? What if we can meet the FR of searching a database for a piece of inventory… but our loaded system takes 5 mins to return the results of the query? It doesn't sound like much, but if it's an important task that you're doing dozens of times a day, then you're wasting hours each day. Worse still, if it's a system task that needs to run hundreds of times a day…

I personally find NFRs to be really hard to design for because we usually won’t know response times until we’ve actually built the functionality and tried different load / fail-over / pattern (eg different query types) models on the available infrastructure. Yes, we can benchmark, but that tends to be a bit speculative.

Unfortunately, if we’ve built a solution that works, but end up with queries that take minutes… when our SLAs might be 5-15 mins, then we’ve possibly failed in our design role.

We can claim that it’s not our fault. We only have finite infrastructure (eg compute, storage, network), each with inherent performance constraints. It is what it is right?…. maybe.

What if we took the perspective of determining our most important features (the 80/20 rule again), setting NFR benchmarks for each and then designing the solution back from there? That is, putting effort into making our most important features super-efficient rather than adding new nice-to-have features (features that will increase load, thus making NFRs harder to hit mind you!)?
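A minimal sketch of what designing from the NFR backwards could look like in practice (the target, query function and threshold are all assumptions for illustration): the benchmark is pinned down first and enforced as an automated check, so any feature work that breaches it fails fast.

```python
# NFR-first: set the response-time target up front and enforce it as a check.
import time

NFR_TARGET_SECONDS = 2.0   # assumed benchmark for the "search inventory" feature

def search_inventory(query: str) -> list[str]:
    # Placeholder for the real inventory lookup being benchmarked.
    time.sleep(0.1)
    return [f"item matching {query}"]

def test_search_inventory_meets_nfr() -> None:
    start = time.perf_counter()
    search_inventory("port on NE-1234")
    elapsed = time.perf_counter() - start
    assert elapsed <= NFR_TARGET_SECONDS, f"NFR breached: {elapsed:.2f}s > {NFR_TARGET_SECONDS}s"

test_search_inventory_meets_nfr()
print("search_inventory meets its NFR target")
```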

In this new world of open-source, we have more “product control” than we’ve probably had before. This gives us more of a chance to start with the non-functionals and work back towards a product. An example might be redesigning our inventory to work with Graph database technology rather than the existing relational databases.

How feasible is this NFR concept? Do you know anyone in OSS who does it this way? Do you have any clever tricks for ensuring your developed features stay within NFR targets?

Re-writing the Sales vs Networks cultural divide

Brand, marketing, pricing and sales were seen as sexy. Networks and IT were the geeks no one seemed to speak to or care about. … This isolation and excommunication of our technical team had created an environment of disillusion. If you wanted something done the answer was mostly ‘No – we have no budget and no time for that’. Our marketing team knew more about loyalty points … than about our own key product, the telecommunications network.”
Olaf Swantee
, from his book, “4G Mobile Revolution”

Great note here (picked up by James Crawshaw at Heavy Reading). It talks about the great divide that always seems to exist between Sales / Marketing and Network / Ops business units.

I’m really excited about the potential for next generation OSS / orchestration / NaaS (Network as a Service) architectures to narrow this divide though.

In this case:

  1. The network is offered as a set of microservices (let's abstractly call them Resource Facing Services [RFS]);
  2. Sales / Marketing construct customer offerings (let’s call them Customer Facing Services [CFS]) from those RFS; and
  3. There’s a catalog / orchestration layer that marries the CFS with the cohesive set of RFS

The third layer becomes a meet-in-the-middle solution where Sales / Marketing comes together with Network / Ops – and where they can discuss what customers want and what the network can provide.

The RFS are suitably abstracted so that Sales / Marketing doesn't need to understand the network and the complexity that sits behind the veil. Perhaps it's time for Networks / Ops to shine, where the RFS can be almost as sexy as the CFS (am I falling too far into the networks / geeky side of the divide?  🙂  )

The CFS are infinitely composable from RFS (within the constraints of the RFS that are available), allowing Sales / Marketing teams to build whatever they want and the Network / Ops teams don’t have to be constantly reacting to new customer offerings.
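Here's a minimal sketch (invented service names, with no specific product or TM Forum API implied) of the three layers described above: RFS published by Network / Ops, CFS composed by Sales / Marketing, and catalog entries that marry the two.

```python
# Catalog sketch: CFS composed from RFS building blocks - invented names only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResourceFacingService:
    name: str
    owner: str = "Network / Ops"

@dataclass
class CustomerFacingService:
    name: str
    components: list[ResourceFacingService]
    owner: str = "Sales / Marketing"

# RFS published into the catalog by the network teams.
catalog_rfs = {
    "access_4g": ResourceFacingService("access_4g"),
    "qos_gold": ResourceFacingService("qos_gold"),
    "static_ip": ResourceFacingService("static_ip"),
}

# A CFS composed purely from catalog entries - Sales never sees what's behind the veil.
business_broadband = CustomerFacingService(
    "Business Broadband Plus",
    components=[catalog_rfs["access_4g"], catalog_rfs["qos_gold"], catalog_rfs["static_ip"]],
)

print([rfs.name for rfs in business_broadband.components])
```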

I wonder if this revolution will give Olaf cause to re-write this section of his book in a few years, or whether we’ll still have the same cultural divide despite the exciting new tools.

Blown away by one innovation. Now to extend on it

Our most recent two posts, from yesterday and Friday, have talked about one stunningly simple idea that helps to overcome one of OSS‘ biggest challenges – data quality. Those posts have stimulated quite a bit of dialogue and it seems there is some consensus about the cleverness of the idea.

I don’t know if the idea will change the OSS landscape (hopefully), or just continue to be a strong selling point for CROSS Network Intelligence, but it has prompted me to think a little longer about innovating around OSS‘ biggest challenges.

Our standard approach of just adding more coats of process around our problems, or building up layers of incremental improvements isn’t going to solve them any time soon (as indicated in our OSS Call for Innovation). So how?

Firstly, we have to be able to articulate the problems! If we know what they are, perhaps we can then take inspiration from the CROSS innovation to spur us into new ways of thinking?

Our biggest problem is complexity. That has infiltrated almost every aspect of our OSS. There are so many posts about identifying and resolving complexity here on PAOSS that we might skip over that one in this post.

I decided to go back to a very old post that used the Toyota 5-whys approach to identify the real cause of the problems we face in OSS [I probably should update that analysis because I have a whole bunch of additional ideas now, as I’m sure you do too… suggested improvements welcomed BTW].

What do you notice about the root-causes in that 5-whys analysis? Most of the biggest causes aren’t related to system design at all (although there are plenty of problems to fix in that space too!). CROSS has tackled the data quality root-cause, but almost all of the others are human-centric factors – change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc. Yet, we always seem to see OSS as a technical problem.

How do you fix those people challenges? Ken Segall puts it this way, "When process is king, ideas will never be. It takes only common sense to recognize that the more layers you add to a process, the more watered down the final work will become." Easier said than done, but a worthy objective!

How smart contracts might reduce risk and enhance trust on OSS projects

Last Friday, we spoke about all wanting to develop trusted OSS supplier / customer relationships but rarely finding them, and about a contrarian factor for why trust is so hard to achieve in OSS: complexity.

Trust is the glue that allows OSS projects to happen. Not only that, it becomes a catch-22 with complexity. If OSS partners don’t trust each other, requirements, contracts, etc get more complex as a self-protection barrier. But with every increase in complexity, there becomes an increasing challenge to deliver and hence, risk of further reduction in trust.

On a smaller scale, you've seen it on all projects – if a project starts to falter, increased monitoring attention is placed on it, which puts increased administrative load on the project team and reduces the time they have to deliver the intended outcomes. Sometimes the increased admin / reporting gains the attention of sponsors and access to additional resources, but usually it just detracts from the available delivery capability.

Vish Nandlall also associates trust and complexity in organisational models in his LinkedIn post below:

This is one of the reasons I’m excited about what smart contracts can do for the organisations and OSS projects of the future. Just as “Likes” and “Supplier Rankings” have facilitated online trust models, smart contracts success rankings have the ability to do the same for OSS suppliers, large and small. For example, rather than needing to engage “Big Vendor A” to build your entire, monolithic OSS stack, if an operator develops simpler, more modular work breakdowns (eg microservices), then they can engage “Freelancer B” and “Small Vendor C” to make valuable contributions on smaller risk increments. Being lower in complexity and risk means B and C have a greater chance of engendering trust, but their historical contract success ranking forces them to develop trust as a key metric.

Fast / Slow OSS processes

Yesterday’s post discussed using smart contracts and Network as a Service (NaaS) to give a network the properties that will allow it to self-heal.

It mentioned a couple of key challenges, one being that there will always be physical activities such as cable-cut repairs, faulty equipment replacement, and physical equipment expansion / contraction / lifecycle-management.

In a TM Forum presentation last week, Sylvain Denis of Orange proposed the theory of fast and slow OSS processes. Fast – soft factories (software and logical resources) within the operations stack are inherently automatable (notwithstanding the complexities and cost-benefit dilemma of actually building automations). Slow – physical factories are slow processes as they usually rely on human tasks and/or have location constraints.

Orchestration relies on programmatic interfaces to both. Not all physical factories have programmatic interfaces in all OSS / BSS stacks yet. For the foreseeable future it will remain a key requirement to be able to handle dual-speed processes / factories.
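A minimal sketch (invented task names and a deliberately naive routing rule) of an orchestrator handling dual-speed factories: logical work goes straight to a programmatic interface, while physical work is queued as a field work order.

```python
# Dual-speed dispatch: fast (programmatic) vs slow (human / physical) factories.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    physical: bool   # physical work orders follow the slow path

def dispatch(task: Task) -> str:
    if task.physical:
        # Slow factory: raise a work order and wait on human / field completion.
        return f"{task.name}: queued as field work order (slow path)"
    # Fast factory: call the controller / EMS API and complete in-line.
    return f"{task.name}: activated via API (fast path)"

workflow = [
    Task("provision vCPE", physical=False),
    Task("splice fibre at pit 17", physical=True),
    Task("apply QoS policy", physical=False),
]

for task in workflow:
    print(dispatch(task))
```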

Dan Pink’s 6 critical OSS senses

I recently wrote an article that spoke about the obsolescence of jobs in OSS, particularly as a result of Artificial Intelligence.

But an article by someone much more knowledgeable about AI than me, Rodney Brooks, had this to say, “We are surrounded by hysteria about the future of artificial intelligence and robotics — hysteria about how powerful they will become, how quickly, and what they will do to jobs.” He then describes The Seven Deadly Sins of AI Predictions here.

Back into my box I go, tail between my legs! Nonetheless, the premise of my article still holds true. The world of OSS is changing quickly and we’re constantly developing new automations, so our roles will inevitably change. My article also proposed some ideas on how to best plan our own adaptation.

That got me thinking… Many people in OSS are “left-brain” dominant right? But left-brained jobs (ie repeatable, predictable, algorithmic) can be more easily out-sourced or automated, thus making them more prone to obsolescence. That concept reminded me of Daniel Pink’s premise in A Whole New Mind where right-brained skills become more valuable so this is where our training should be focused. He argues that we’re on the cusp of a new era that will favor “conceptual” thinkers like artists, inventors and storytellers. [and OSS consultants??]

He also implores us to enhance six critical senses, namely:

  • Design – the ability to create something that’s emotionally and/or visually engaging
  • Story – to create a compelling and persuasive narrative
  • Symphony – the ability to synthesise new insights, particularly from seeing the big picture
  • Empathy – the ability to understand and care for others
  • Play – to create a culture of games, humour and play, and
  • Meaning – to find a purpose that will provide an almost spiritual fulfillment.

I must admit that I hadn’t previously thought about adding these factors to my development plan. Had you?
Do you agree with Dan Pink or will you continue to opt for left-brain skills / knowledge enhancement?

Will it take open source to unlock OSS potential?

I have this sense that the other OSS, open source software, holds the key to the next wave of OSS (Operational Support Systems) innovation.

Why? Well, as yesterday’s post indicated (through Nic Brisbourne), “it’s hard to do big things in a small way.” I’d like to put a slight twist on that concept by saying, “it’s hard to do big things in a fragmented way.” [OSS is short for fragmented after all]

The skilled resources in OSS are so widely spread across many organisations (doing a lot of duplicated work) that we can't reach a critical mass of innovation. Open source projects like ONAP represent a possible path to critical mass through sharing and augmenting code. They provide the foundation upon which bigger things can be built. If we don't uplift the foundations across the whole industry quickly, we risk losing relevance (just ask our customers for their gripes list!).

BTW. Did you notice the news that six Linux Foundation open source networking projects have just merged into one? The six initial projects are ONAP, OPNFV, OpenDaylight, FD.io, PNDA, and SNAS. The new project is called the LF Networking Fund (LFN).

But you may ask how organisations can protect their trade secrets whilst embracing open source innovation. Derek Sivers provides a fascinating story and line of thinking in “Why my code and ideas are public.” I really recommend having a read about Valerie.

Alternatively, if you’re equating open source with free / unprofitable, this link provides a list of highly successful organisations with significant open source contributions. There are plenty of creative ways to be rewarded for open source effort.

Comment below if you agree or disagree about whether we need open source to unlock the potential of OSS innovation.