Operator involvement on OSS projects

“You cannot simply have your end users give some specifications then leave while you attempt to build your new system. They need to be involved throughout the process. Ultimately, it is their tool to use.”
José Manuel De Arce, here.

As an OSS consultant and implementer, I couldn’t agree more with José’s quote above. José, by the way, is an OSS Manager at Telefónica, so he sits on the operator’s side of the implementation equation. I’m glad he takes the perspective he does.

Unfortunately, many OSS operators are so busy with operations that they don’t get the time to help define and tune the solutions being built for them. It’s understandable. They are measured by their ability to keep the network (and services) running and in a healthy state.

From the implementation side, it reminds me of this old comic:
[Image: “Too busy” comic]

The comic reminds me of OSS implementations for two reasons:

  1. Without ongoing input from operators, you can only guess at how the new tools could improve their efficacy and mitigate their challenges
  2. Without ongoing involvement from operators, they don’t learn the nuances of how the new tool works or the scenarios it’s designed to resolve… what I refer to as an OSS apprenticeship

I’ve seen it time after time on OSS implementations (and other projects for that matter) – [As a customer] you get back what you put in.

The Goldilocks OSS story

We all know the story of Goldilocks and the Three Bears where Goldilocks chooses the option that’s not too heavy, not too light, but just right.

The same model applies to OSS – finding / building a solution that’s not too heavy, not too light, but just right. To be honest, we probably tend to veer towards the too heavy, especially over time. We put more complexity into our architectures, integrations and customisations… because we can… which end up burdening us and our solutions.

A perfect example is AT&T offering its ECOMP project (now part of the even bigger Linux Foundation Networking Fund) up for open source in the hope that others would contribute and help mature it. In our fairytale analogy, it’s an admission that it’s too heavy even for one of the global heavyweights to handle by itself.

The ONAP Charter has some great plans including, “…real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.”

These are fantastic ambitions to strive for, especially at the Papa Bear end of the market. I have huge admiration for those who are creating and chasing bold OSS plans. But what about for the large majority of customers that fall into the Goldilocks category? Is our field of vision so heavy (ie so grand and so far into the future) that we’re missing the opportunity to solve the business problems of our customers and make a difference for them with lighter solutions today?

TM Forum’s Digital Transformation World is due to start in just over two weeks. It will be fascinating to see how many of the presentations and booths consider the Goldilocks requirements. There probably won’t be many because it’s just not as sexy a story as one that mentions heavy solutions like policy-driven orchestration, zero-touch automation, AI / ML / analytics, self-scaling / self-healing networks, etc.

[I should also note that I fall into the category of loving to listen to the heavy solutions too!! ]

Powerful ranking systems with hidden variables

There are ratings and rankings that ostensibly exist to give us information (and we are supposed to use that information to change our behavior).
But if we don’t know what variables matter, how is it supposed to be useful?
Just because it can be easily measured with two digits doesn’t mean that it’s accurate, important or useful.
[Marketers learned a long time ago that people love rankings and daily specials. The best way to boost sales is to put something in a little box on the menu, and, when in doubt, rank things. And sometimes people even make up the rankings.]

Seth Godin, here.

Are there any rankings that are made up in OSS? Our OSS collect an amazing amount of data so there’s rarely a need to make up the data we present.

Are they based on hidden variables? Generally, we use raw counters and / or well known metrics so we’re usually quite transparent with what our OSS present.

What about when we’re trying to select the right vendor to fulfill the OSS needs of our organisation? As Seth states, “Just because it can be easily measured with two digits* doesn’t mean that it’s accurate, important or useful.” [* In this case, I’m thinking of a 2 x 2 matrix].

The interesting thing about OSS ranking systems is that there is so much nuance in the variables that matter. There are potentially hundreds of evaluation criteria, and even vast contrasts in how to interpret a given criterion.

For example, a criterion might be “time to activate a service” (TTAS). A vendor might have a really efficient workflow for activating single services manually, but no bulk-load or automation interface. For one operator (which does single activations manually), the TTAS metric for that product would be great, but for another operator (which does thousands of activations a day and tries to automate), the TTAS metric for the same product would be awful.

As much as we love ranking systems… there are hundreds of products on the market (in some cases, hundreds of products in a single operator’s OSS stack), each fitting unique operator needs differently… so a 2 x 2 matrix is never going to cut it as a vendor selection tool… not even as a short-listing tool.

Better to build yourself a vendor selection framework. You can find a few OSS product / vendor selection hints here based on the numerous vendor / product selections I’ve helped customers with in the past.
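To make the framework idea concrete, below is a rough sketch of how weighted, operator-specific scoring differs from a 2 x 2 matrix. Everything in it – the criteria, weights and scores – is an illustrative assumption built around the TTAS example above, not real product data.

```python
# A toy, context-weighted vendor evaluation. All criteria, weights and
# scores are illustrative assumptions, not real product data.

# Each operator weights the same criteria differently, reflecting its
# own context (eg manual vs bulk activations, per the TTAS example).
criteria_weights = {
    "manual_activation_efficiency": {"operator_a": 0.5, "operator_b": 0.1},
    "bulk_automation_interface":    {"operator_a": 0.1, "operator_b": 0.5},
    "data_model_flexibility":       {"operator_a": 0.2, "operator_b": 0.2},
    "total_cost_of_ownership":      {"operator_a": 0.2, "operator_b": 0.2},
}

# Hypothetical raw scores (0-10) for one product against each criterion.
product_scores = {
    "manual_activation_efficiency": 9,  # efficient manual workflow
    "bulk_automation_interface": 2,     # no bulk load / automation API
    "data_model_flexibility": 7,
    "total_cost_of_ownership": 6,
}

def weighted_score(operator: str) -> float:
    """Weighted sum of the product's scores using one operator's weights."""
    return sum(
        product_scores[criterion] * weights[operator]
        for criterion, weights in criteria_weights.items()
    )

for operator in ("operator_a", "operator_b"):
    print(f"{operator}: {weighted_score(operator):.1f} / 10")
# operator_a: 7.3 / 10, operator_b: 4.5 / 10 - same product, opposite fit.
```

The same product scores 7.3 for one operator and 4.5 for the other – exactly the nuance that a 2 x 2 matrix flattens away.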

Automated Network Operations as a Service (ANOaaS)

“Google has started applying its artificial intelligence (AI) expertise to network operations and expects to make its tools available to companies building virtual networks on its global cloud platform.
That could be a troubling sign for network technology vendors such as Ericsson AB (Nasdaq: ERIC), Huawei Technologies Co. Ltd. and Nokia Corp. (NYSE: NOK), which now see AI in the network environment as a potential differentiator and growth opportunity…
Google already uses software-defined network (SDN) technology as the bedrock of this infrastructure and last week revealed details of an in-development “Google Assistant for Networking” tool, designed to further minimize human intervention in network processes.
That tool would feature various data models to handle tasks related to network topology, configuration, telemetry and policy.”
Iain Morris, here on Light Reading.

This is an interesting, but predictable, turn of events, isn’t it? If (when?) automated network operations as a service (ANOaaS) is perfected, it has the ability to significantly change the OSS space, doesn’t it?

Let’s have a look at this from a few scenarios (and I’m considering ANOaaS from the perspective of any of the massive cloud providers who are also already developing significant AI/ML resource pools, not just Google).

Large Enterprise, Utilities, etc with small networks (by comparison to telco networks), where the network and network operations are simply a cost of doing business rather than core business. Virtual networks and ANOaaS seem like an attractive model for these types of customer (ignoring data sovereignty concerns and the myriad other local contexts for now). Outsourcing this responsibility significantly reduces CAPEX and head-count to run what’s effectively non-core business. This appears to represent a big disruptive risk for the many OSS vendors who service the Enterprise / Utilities market (eg SolarWinds, CA, etc).

T2/T3 Telcos with relatively small networks that tend to run lean operations. In this scenario, the network is core business but having a team of ML/AI experts is hard to justify. Automations are much easier to build for homogeneous (consistent) infrastructure platforms (like those of the cloud providers) than for those carrying different technologies (like T2/T3 telcos perhaps?). Combine complexity, lack of scale and lack of large ML/AI resource pools and it becomes hard for T2/T3 telcos to deliver cost-effective ANOaaS either internally or externally to their customer base. Perhaps outsourcing the network (ie VNO) and ANOaaS allows these operators to focus more on sales?

T1 Telcos have large networks, heterogeneous platforms and large workforces where the network is core business. The question becomes whether they can build network cloud at the scale and price-point of Amazon, Microsoft, Google, etc. This is partly dependent upon internal processes, but also on what vendors like Ericsson, Huawei and Nokia can deliver, as quoted as a risk above.

As you probably noticed, I just made up ANOaaS. Does a term already exist for this? How do you think it’s going to change the OSS and telco markets?

Is service personalisation the answer?

“The actions taken by the telecom industry have mostly been around cost cutting, both in terms of opex and capex, and that has not resulted in breaking the curve. Too few activities has been centered around revenue growth, such as focused activities in personalization, customer experience, segmentation, targeted offerings that become part of or drive ecosystems. These activities are essential if you want to break the curve; thus, it is time to gear up for growth… I am very surprised that very few, if any, service providers today collect and analyze data, create dynamic targeted offerings based on real-time insights, and do that per segment or individual.”
Lars Sandstrom, here.

I have two completely opposite and conflicting perspectives on the prevailing wisdom of personalised services (including segmentation of one and targeted offerings) in the telecoms industry.

Telcos tend to be large organisations. If I invest in a large organisation, it’s because the business model is simple, repeatable and has a moat (as Warren Buffett likes to say). Personalisation is contra to two of those three mantras – personalisation makes our OSS/BSS far more complicated and hence less repeatable (unless we build in lots of automations, which, BTW, are inherently more complex).

I’m more interested in reliable and enduring profitability than just revenue growth (not that I’m discounting the search for revenue growth of course). The complexity of personalisation leads to significant increases in systems costs. As such, you’d want to be really sure that personalisation is going to give an even larger up-tick in revenues (ie ROI). Seems like a big gamble to me.

For my traditional telco services, I don’t want personalised, dynamic offers that I have to evaluate and make decisions on regularly. I want set and forget (mostly). It’s a bit like my electricity – I don’t want to constantly choose between green electricity, blue electricity, red electricity – I just want my appliances to work when I turn on the switch and not have bill shock at the end of the month / quarter. In telco, it’s not just green / blue / red. We seem to want to create the millions of shades of the rainbow, which is a nightmare for OSS/BSS implementers.

I can see the argument, however, for personalisation in what I’ll call the over-the-top services (let’s include content and apps as well). Telcos tend to be much better suited to building the platforms that support the whole long tail than to selecting individual winners (except perhaps the supply of popular content like sport broadcasts, etc).

So, if I’m looking for a cool, challenging project or to sell some products or services (you’ll notice that the quote above is on a supplier’s blog BTW), then I’ll definitely recommend personalisation. But if I want my telco customers to be reliably profitable…

Am I taking a short-term view on this? Is personalisation going to be expected by all end-users in future, leaving providers with no choice but to go down this path??

Networks lead. OSS are an afterthought. Time for a change?

In a recent post, we described how changing conditions in networks (eg topologies, technologies, etc) cause us to reconsider our OSS.

Networks always lead and OSS (or any form of network management including EMS/NMS) is always an afterthought. Often a distant afterthought.

But what if we spun this around? What if OSS initiated change in our networks / services? After all, OSS is the platform that operationalises the network. So instead of attempting to cope with a huge variety of network options (which introduces a massive number of variants and in turn, massive complexity, which we’re always struggling with in our OSS), what if we were to define the ways that networks are operationalised?

Let’s assume we want to lead. What has to happen first?

Network vendors tend to lead currently because they’re offering innovation in their networks, but more importantly on the associated services supported over the network. They’re prepared to take the innovation risk knowing that operators are looking to invest in solutions they can offer to their customers (as products / services) for a profit. The modus operandi is for operators to look to network vendors, not OSS vendors / integrators, to help to generate new revenues. It would take a significant perception shift for operators to break this nexus and seek out OSS vendors before network vendors. For a start, OSS vendors have to create a revenue generation story rather than the current tendency towards a cost-out business case.

ONAP provides an interesting new line of thinking though. As you know, it’s an open-source project that represents multiple large network operators banding together to build an innovative new approach to OSS (even if it is being driven by network change – the network virtualisation paradigm shift in particular). With a white-label, software-defined network as a target, we have a small opening. But to turn this into an opportunity, our OSS need to provide innovation in the services being pushed onto the SDN. That innovation has to be in the form of services/products that are readily monetisable by the operators.

Who’s up for this challenge?

As an aside:
If we did take the lead, would our OSS look vastly different to what’s available today? Would they unequivocally need to use the abstract model to cope with the multitude of scenarios?

A purple cow in our OSS paddock

A few years ago, I read a book that had a big impact on the way I thought about OSS and OSS product development. Funnily enough, the book had nothing to do with OSS or product development. It was a book about marketing – a subject that I wasn’t very familiar with at the time, but am now fascinated with.

And the book? Purple Cow by Seth Godin.
[Image: Purple Cow book cover]

The premise behind the book is that when we go on a trip into the countryside, we notice the first brown or black cows, but after a while we don’t pay attention to them anymore. The novelty has worn off and we filter them out. But if there was a purple cow, that would be remarkable. It would definitely stand out from all the other cows and be talked about. Seth promoted the concept of building something into your products that makes them remarkable, worth talking about.

I recently heard an interview with Seth. Despite the book being launched in 2003, apparently he’s still asked on a regular basis whether idea X is a purple cow. His answer is always the same – “I don’t decide whether your idea is a purple cow. The market does.”

That one comment brought a whole new perspective to me. As hard as we might try to build something into our OSS products that create a word-of-mouth buzz, ultimately we don’t decide if it’s a purple cow concept. The market does.

So let me ask you a question. You’ve probably seen plenty of different OSS products over the years (I know I have). How many of them are so remarkable that you want to talk about them with your OSS colleagues, or even have a single feature that’s remarkable enough to discuss?

There are a lot of quite brilliant OSS products out there, but I would still classify almost all of them as brown cows. Brilliant in their own right, but unremarkable for their relative sameness to lots of others.

The two stand-out purple cows for me in recent times have been CROSS’ built-in data quality ranking and Moogsoft’s Situation Room model. But it’s not for me to decide. The market will ultimately decide whether these features are actual purple cows.

I’d love to hear about your most memorable OSS purple cows.

You may also be wondering how to go about developing your own purple OSS cow. Well, I start by asking, “What are people complaining about?” or “What are our biggest issues?” That’s where the opportunities lie. Having discovered those issues, the challenge is to solve the problem/s in an entirely different, but better, way. I figure that if people care enough to complain about those issues, then they’re sure to talk about any product that solves the problem for them.

Assuming the other person can’t come up with the answer

Just a quick word of warning. This blog starts off away from OSS, but please persevere. It ends up back with a couple of key OSS learnings.

Long ago in the technology consulting game, I came to an important realisation. When arriving on a fresh new client site, chances are that many of the “easy technical solutions” that pop into my head to solve the client’s situation have already been tried by the client. After all, the client is almost always staffed with clever people, but they also know the local context far better than me.

Alan Weiss captures the sentiment brilliantly in the quote below.
“I’ve found that in many instances a client will solve his or her own problem by talking it through while I simply listen. I may be asked to reaffirm or validate the wisdom of the solution, but the other person has engaged in some nifty self-therapy in the meantime.
I’m often told that I’m an excellent problem solver in these discussions! But all I’ve really done is listen without interrupting or even trying to interpret.
Here are the keys:
• Never feel that you’re not valuable if you’re not actively contributing.
• Practice “active listening”.
• Never cut-off or interrupt the other person.
• Stop trying to prove how smart you are.
• Stop assuming the other person can’t come up with the answer.”

I’m male and an Engineer, so some might say I’m predisposed to immediately jumping into problem solving mode before fully understanding a situation… I have to admit that I do have to fight really hard to resist this urge (and sometimes don’t succeed). But enough about stereotypes.

One of the techniques that I’ve found to be more successful is to pose investigative questions rather than posing “brilliant” answers. If any gaps appear, I then provide bridging connections (ie through broader industry trends, ideas, people, process, technology, contract, etc) that supplement the answers the client already has. These bridges might be built in the form of statements, but often it’s just through leading questions that allow the client to resolve / affirm for themselves.

But as promised earlier, this is more an OSS blog than a consulting one, so there is an OSS call-out.

You’ll notice in the first paragraph that I wrote “easy technical solutions,” rather than “easy solutions.” In almost all cases, the client representatives have great coverage of the technical side of the problems. They know their technology well, they’ve already tried (or thought about) many of the technology alternatives.

However, the gaps I’ve found to be surprisingly common aren’t related to technology at all. A Toyota five-why analysis shows they’re factors like organisational change management, executive buy-in, change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc, as described in this recent post.

It’s not coincidence then that the blog roll here on PAOSS often looks beyond the technology of OSS.

If you’re an OSS problem solver, three messages:
1) Stop assuming the other person (client, colleague, etc) can’t come up with the answer
2) Broaden your vision to see beyond the technology solution
3) Get great at asking questions (if you aren’t already of course)

Does this align or conflict with your experiences?

I will never understand…

“I will never understand why Advertising is an investment and customer service is a cost.
Let’s spend millions trying to reach people, but if they try to reach us, make our contact details impossible to find, incentivise call center workers to hang up as fast as possible or ideally outsource it to a bot. It’s absolute lunacy and it absolutely matters.”
Tom Goodwin, here.

Couldn’t agree more, Tom. In fact, we’ve spoken about this exact irony here on PAOSS a few times before (eg here, here and here).

Telcos call it CVR – Call Volume Reduction (ie reduction in the number of customers’ calls that are responded to by a real person who represents the telco). But what CVR really translates to is, “we’re happy for you to reach us on our terms (ie if you want to buy something from us), but not on your terms (ie you have a problem that needs to be resolved).” Unfortunately, customer service is the exact opposite – it has to be on the customer’s terms, not yours.

Even more unfortunately, many of the problems that need to be resolved are being caused in our OSS / BSS (not always “by” our OSS / BSS, but that’s another story). Worse still, the contact centre has no chance of determining where to start understanding the problem due to the complexity of fall-out management and the complicated data flows through our OSS / BSS.

Bill Gates said, “Your most unhappy customers are your greatest source of learning.”

Let me ask you a question – Do you have a direct line of learning from your unhappy customers to your backlog of OSS / BSS enhancements? Or even an indirect line of learning? Nope?? If so, you’re not alone.

Let me ask you another question – You’re an OSS expert. Do you have any idea what problems your customers are raising with your contact centre staff? Or perhaps that should be problems they’re not getting to raise with contact centre staff due to the “success” of CVR measures? Nope?? If so, you’re not alone here either.

Can you think of a really simple and obvious way to start fixing this?

Does the death of ATM bear comparison with telco-grade open-source OSS?

Hands up if you’re old enough to remember ATM. And I don’t mean the type of ATM that sits on the side of a building dispensing cash – no, I mean Asynchronous Transfer Mode.

For those who aren’t familiar with ATM, a little background. ATM was THE telco-grade packet-switching technology of choice for most carriers globally around the turn of the century. Who knows, there might still be some ATM switches/routers out there in the wild today.

ATM was a powerful beast, with enormous configurability and custom-designed with immense scale in mind. It was created by telco-grade standards bodies with the intent of carrying voice, video, data, whatever, over big data pipes.

With such a pedigree, you may be wondering how it was beaten out by a technology designed to cheaply connect small groups of computers clustered within 100 metres of each other (with a theoretical maximum bandwidth of 10 Mbps).

Why does the technology that scaled up to become carrier Ethernet live on in modern telco networks, whereas ATM is largely obsolete? Others may beg to differ, and there are probably a multitude of factors, but I feel it boils down to operational simplicity. Customers wanted operational simplicity, and operators didn’t want to need a degree in ATM just to be able to drive it. By being designed to be all things to all people (carriers), was ATM compromised from the start?

Now I’ll state up front that I love the initiative and collaboration being shown by many of the telcos in committing to open-source programs like ONAP. It’s a really exciting time for the industry. It’s a sign that the telcos are wresting control back from the vendors in terms of driving where the collective innovation goes.

Buuuuuuut…..

Just like with ATM, are the big open source programs just too big and too complicated? Do you need a 100% focus on ONAP to be able to make it work, or even to follow all the moving parts? Are these initiatives trying to be all things to all carriers instead of distilling needs down to simpler use cases?

“Sometimes the ‘right’ way to do it just doesn’t exist yet, but often it does exist but is very expensive. So, the question is whether the ‘cheap, bad’ solution gets better faster than the ‘expensive, good’ solution gets cheap. In the broader tech industry (as described in the ‘disruption’ concept), generally the cheap product gets good. The way that the PC grew and killed specialized professional hardware vendors like Sun and SGi is a good example. However, in mobile it has tended to be the other way around – the expensive good product gets cheaper faster than the cheap bad product can get good.”
Ben Evans, here.

Is there an Ethernet equivalent in the OSS world, something that’s “cheap, bad” but getting better (and getting customer buy-in) rapidly?

Blown away by one innovation. Now to extend on it

Our most recent two posts, from yesterday and Friday, have talked about one stunningly simple idea that helps to overcome one of OSS‘ biggest challenges – data quality. Those posts have stimulated quite a bit of dialogue and it seems there is some consensus about the cleverness of the idea.

I don’t know if the idea will change the OSS landscape (hopefully), or just continue to be a strong selling point for CROSS Network Intelligence, but it has prompted me to think a little longer about innovating around OSS‘ biggest challenges.

Our standard approach of just adding more coats of process around our problems, or building up layers of incremental improvements, isn’t going to solve them any time soon (as indicated in our OSS Call for Innovation). So how?

Firstly, we have to be able to articulate the problems! If we know what they are, perhaps we can then take inspiration from the CROSS innovation to spur us into new ways of thinking?

Our biggest problem is complexity. That has infiltrated almost every aspect of our OSS. There are so many posts about identifying and resolving complexity here on PAOSS that we might skip over that one in this post.

I decided to go back to a very old post that used the Toyota 5-whys approach to identify the real cause of the problems we face in OSS [I probably should update that analysis because I have a whole bunch of additional ideas now, as I’m sure you do too… suggested improvements welcomed BTW].

What do you notice about the root-causes in that 5-whys analysis? Most of the biggest causes aren’t related to system design at all (although there are plenty of problems to fix in that space too!). CROSS has tackled the data quality root-cause, but almost all of the others are human-centric factors – change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc. Yet, we always seem to see OSS as a technical problem.

How do you fix those people challenges? Ken Segall puts it this way, “When process is king, ideas will never be. It takes only common sense to recognize that the more layers you add to a process, the more watered down the final work will become.” Easier said than done, but a worthy objective!

I’ve just been blown away by the most elegant OSS innovation I’ve seen in decades

Looking back, I now consider myself extremely lucky to have worked with an amazing product on the first OSS project I worked on (all the way back in 2000). And I say amazing because the underlying data models and core product architecture are still better than any other I’ve worked with in the two decades since. The core is the most elegant, simple and powerful I’ve seen to date. Most importantly, the models were designed to cope with any technology, product or service variant that could be modelled as a hierarchy, whether physical or virtual / logical. I never found a technology that couldn’t be modelled into the core product and it required no special overlays to implement a new network model. Sadly, the company no longer exists and the product is languishing on the books of the company that bought out the assets but isn’t leveraging them.

Having been so spoilt on the first assignment, I’ve been slightly underwhelmed by the level of elegant innovation I’ve observed in OSS since. That’s possibly part of the reason for the OSS Call for Innovation published late last year. There have been many exciting innovations introduced since, but many modern tools are still more complex and complicated than they should be, for implementers and operators alike.

But during a product demo last week, I was blown away by an innovation that was so simple in concept, yet so powerful that it is probably the single most impressive innovation I’ve seen since that first OSS. Like any new elegant solution, it left me wondering why it hasn’t been thought of previously. You’re probably wondering what it is. Well first let me start by explaining the problem that it seeks to overcome.

Many inventory-based OSS rely on highly structured and hierarchical data. This is a double-edged sword. Significant inter-relationship of data increases the insight generation opportunities, but the downside is that it can be immensely challenging to get the data right (and to maintain a high-quality data state). Limited data inter-relationships make the project easier to implement, but tend to allow less rich data analyses. In particular, connectivity data (eg circuits, cables, bearers, VPNs, etc) can be a massive challenge because it requires the linking of separate silos of data, often with no linking key. In fact, the data quality problem was probably one of the most significant root-causes of the demise of my first OSS client.

Now getting back to the present. The product feature that blew me away was the first I’ve seen that allows significant inter-relationship of data (yet in a simple data model), but still copes with poor data quality. Let’s say your OSS has a hierarchical data model that comprises Location, Rack, Equipment, Card, Port (or similar) and you have to make a connection from one device’s port to another’s. In most cases, you have to build up the whole pyramid of data perfectly for each device before you can create a customer connection between them. Let’s also say that for one device you have a full pyramid of perfect data, but for the other end, you only know the location.

The simple feature is to connect a port to a location now, or any other point to point on the hierarchy (and clean up the far-end data later on if you wish). It also allows the intermediate hops on the route to be connected at any point in the hierarchy. That’s too simple, right? Yet most inventory tools don’t allow connections to be made between different levels of their hierarchies. For implementers, data migration / creation / cleansing gets a whole lot simpler with this approach. But what’s even more impressive is that the solution then assigns a data quality ranking to the data that’s just been created. The quality ranking is subsequently considered by tools such as circuit design / routing, impact analysis, etc. However, you’ll have noted that the data quality issue still hasn’t been fixed. That’s correct, so this product then provides the tools that show where quality rankings are lower, thus allowing remediation activities to be prioritised.

If you have an inventory data quality challenge and / or are wondering the name of this product, it’s CROSS, from the team at CROSS Network Intelligence (www.cross-ni.com).
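Conceptually, the feature might be modelled something like the sketch below. To be clear, this is only my reading of the idea – the class names, hierarchy levels and quality scale are my own assumptions, not CROSS’s actual data model.

```python
# My sketch of the concept only - names, hierarchy levels and the
# quality scale are assumptions, not the product's actual data model.

from dataclasses import dataclass

# Hierarchy levels, from coarsest to finest knowledge of an end-point.
HIERARCHY = ["location", "rack", "equipment", "card", "port"]

@dataclass
class InventoryNode:
    name: str
    level: str  # one of HIERARCHY

@dataclass
class Connection:
    a_end: InventoryNode
    b_end: InventoryNode

    @property
    def quality(self) -> float:
        """Rank data quality by how deep in the hierarchy each end is
        known: port-to-port scores 1.0; port-to-location scores far
        lower, flagging the record for prioritised remediation."""
        def depth(node: InventoryNode) -> float:
            return (HIERARCHY.index(node.level) + 1) / len(HIERARCHY)
        return (depth(self.a_end) + depth(self.b_end)) / 2

# Perfect data at one end, only a location at the other - the connection
# can still be created now and the far-end data cleaned up later.
circuit = Connection(
    InventoryNode("SYD01/R07/MUX-3/C2/P14", "port"),
    InventoryNode("MEL02", "location"),
)
print(f"quality ranking: {circuit.quality:.2f}")  # 0.60
```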

Is your data AI-ready (part 2)

Further to yesterday’s post that posed the question about whether your data is AI-ready for virtualised network assurance use cases, I thought I’d raise a few more notes.

The two reasons posed were:

  1. Our data sets haven’t had time to collect much elastic / dynamic network data yet
  2. Our data is riddled with human-generated data that is error-prone

On the latter point in particular, I sense that we’re going to have to completely re-architect the way we collect and store assurance data. We’re almost certainly going to have to think in terms of automated assurance actions and related logging to avoid the errors of human data creation / logging. The question becomes whether it’s worthwhile trying to wrangle all of our old data into formats that the AI engines can cope with, or whether we just start afresh with new models? (This brings to mind the recent “perfect data” discussion).
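As a thought-starter on the automated-logging angle, machine-generated assurance records might look something like the sketch below. The schema and field names are my assumptions for illustration, not any standard.

```python
# A thought-starter: machine-generated assurance logging with a fixed
# schema, so AI engines ingest consistent records rather than error-prone
# free text. Field names are assumptions, not a standard.

import json
from datetime import datetime, timezone

def log_assurance_action(alarm_id: str, action: str, outcome: str) -> str:
    """Emit an automated-assurance record with a fixed schema and
    controlled vocabularies - no free-text fields for anyone to mistype."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "alarm_id": alarm_id,
        "action": action,    # controlled vocabulary, eg "restart_vnf"
        "outcome": outcome,  # controlled vocabulary, eg "cleared"
        "actor": "auto-remediation-engine",  # never a human free-text entry
    }
    return json.dumps(record)

print(log_assurance_action("ALM-1138", "restart_vnf", "cleared"))
```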

It will be one thing to identify patterns, but another thing entirely to identify optimum response activities and to automate those.

If we get these steps right, does it become logical that the NOC (network) and SOC (security operations centre) become conjoined… at least much more so than they tend to be today? In other words, does incident management merge network incidents and security incidents onto common analysis and response platforms? If so, does that imply another complete re-architecture? It certainly changes the operations model.

I’d love to hear your thoughts and predictions.

After the boys of OSS have gone

Something has always bothered me about the medical profession. Whenever you visit a GP (General Practitioner), unless you need to come back for test results or ongoing treatment, the doctor never finds out if their diagnoses / prescriptions have been effective. In my experience at least, they don’t call to see whether there were any complications, allergic reactions to treatments, improvement in condition, etc and only find out if you make a follow-up appointment. As a result, they never close the feedback loop or gather a potentially rich source of data on their efficacy.

I sometimes wonder whether this is true of OSS implementers too. There can be a tendency to move from one implementation project to the next, from one customer to the next, without having the time to circle back on previous clients. Any unrealised but ongoing problems are handed over to operations and/or product support teams, so the implementers may not get to see them. Alternatively the team might also be consistently missing out on identifying opportunities for value-add on their projects.

If you’re an implementer (as I often am), how do you close the loop to find out what you could be doing better? Do you retain dialog with customers after handover? Do you question your support teams about what client problems / enquiries are landing on their desk? Do you ever book follow-up sessions with client staff at scheduled intervals after handover? Are you always engaged on an operational handover period where you have the chance to see post-handover challenges first-hand?

Just like a doctor, you’re bound to hear of any major or catastrophic outcomes after a “patient’s” initial visit. But what about the niggling ailments your clients have that could be easily rectified for all future clients… if only you knew of them?

I’d love to hear the thoughts from implementers on how they’re continually upping their game. Similarly, if you’re in ops / support, what experiences (ie messes) are consistently landing with you to clean up after the implementers have moved on? Do you have any suggestions for how they (we) could close the loop better with you?

Note: For all the highly talented women out there in OSS-land, please note that I’m not overlooking you. The title of my post is just a play on Don Henley’s famous song.

Trickle-down impact planning

We introduced the concept of The Trickle-down Effect last year, an effect that sees the most minor changes trickling down through an OSS stack, with much bigger consequences than expected.

“The trickle-down effect can be insidious, turning a nice open COTS solution into a beast that needs constant attention to cope with the most minor of operational changes. The more customisations made, the more gnarly the beast tends to be.”

Here’s an example I saw recently. An internal business unit wanted to introduce a new card type into the chassis set they managed. Speaking with the physical inventory team, it seemed the change was quite small and a budget was developed for the works… but the budget (dollars / time / risk) was about to blow out in a big way.

The new card wasn’t being picked up in their fault-management or performance management engines. It wasn’t picked up in key reports, nor was it being identified in the configuration management database or logical inventory. Every one of these systems needed interface changes. Not massive change obviously, but collectively the budget blew out by 10x and expedite changes pushed out the work previously planned by each of the interface development and testing teams.
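For illustration, even a simple machine-readable impact register could have surfaced those costs before the budget was set. The system and card names below are hypothetical:

```python
# A toy impact register: each OSS system declares which card types it
# recognises, so adding a new card surfaces every downstream change
# up front. All names are hypothetical.

systems_supporting = {
    "fault_management":   {"CARD-A", "CARD-B"},
    "performance_mgmt":   {"CARD-A", "CARD-B"},
    "key_reports":        {"CARD-A"},
    "config_mgmt_db":     {"CARD-A", "CARD-B"},
    "logical_inventory":  {"CARD-A"},
    "physical_inventory": {"CARD-A", "CARD-B", "CARD-C"},  # already updated
}

def trickle_down_impacts(new_card: str) -> list[str]:
    """Return every system needing interface/model changes for a new card."""
    return [name for name, cards in systems_supporting.items()
            if new_card not in cards]

print(trickle_down_impacts("CARD-C"))
# ['fault_management', 'performance_mgmt', 'key_reports',
#  'config_mgmt_db', 'logical_inventory']
```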

These trickle-down impacts were known… by some people… but weren’t communicated to the business unit responsible for managing the new card type. There’s a possibility that they may not have even added the new card type if they realised the full OSS cost consequences.

Are these trickle-down impacts known and readily communicated within your OSS change processes?

One sentence to make most OSS experts cringe

Let me warn you. The following sentence is going to make many OSS experts cringe, maybe even feel slightly disgusted, but take the time to read the remainder of the post and ponder how it fits within your specific OSS context/s.

“Our OSS need to help people spend money!”

Notice the word is “help” and not “coerce”? This is not a post about turning our OSS into sales tools – well, not directly anyway.

May I ask you a question – Do you ever spend time thinking about how your OSS is helping your customer’s customer (which I’ll refer to as the end-customer) to spend their money? And I mean making it easier for them to buy the stuff they want to buy in return for some form of value / utility, not trick or coerce them into buying stuff they don’t want.

Let me step you through the layers of thinking here.

The first layer for most OSS experts is their direct customer, which is usually the service provider or enterprise that buys and operates the OSS. We might think they are buying an OSS, but we’re wrong. An organisation buys an OSS, not because it wants an Operational Support System, but because it wants Operational Support.

The second layer is a distinct mindset change for most OSS experts. Following on from the first layer, OSS has the potential to be far more than just operational support. Operational support conjures up the image of being a cost-centre, or something that is a necessary evil of doing business (ie in support of other revenue-raising activities). To remain relevant and justify OSS project budgets, we have to flip the cost-centre mentality and demonstrate a clear connection with revenue chains. The more obvious the connection, the better. Are you wondering how?

That’s where the third layer comes in. We have to think hard about the end-customer and empathise with their experiences. These experiences might be as a consumer of a service provider’s (your direct customer’s) product offerings. It might even be a buying cycle that the service provider’s products facilitate. Either way, we need to simplify their ability to buy.

So let’s work back up through those layers again:
Layer 3 – If end-customers find it easier to buy stuff, then your customer wins more revenue (and brand value)
Layer 2 – If your customer sees that its OSS / BSS has unquestionably influenced a revenue increase, then more is invested in OSS projects
Layer 1 – If your customer recognises that your OSS / BSS has undeniably influenced the increased OSS project budget, you too get entrusted with a greater budget to attempt to repeat the increased end-customer buy cycle… but only if you continue to come up with ideas that make it easier for people (end-customers) to spend their money.

At what layer does your thinking stop?

How smart contracts might reduce risk and enhance trust on OSS projects

Last Friday, we spoke about how we all want to develop trusted OSS supplier / customer relationships but rarely find them, and about a contrarian factor for why trust is so hard to achieve in OSS – complexity.

Trust is the glue that allows OSS projects to happen. Not only that, it becomes a catch-22 with complexity. If OSS partners don’t trust each other, requirements, contracts, etc get more complex as a self-protection barrier. But with every increase in complexity, there becomes an increasing challenge to deliver and hence, risk of further reduction in trust.

On a smaller scale, you’ve seen it on all projects – if the project starts to falter, increased monitoring attention is placed on the project, which puts increased administrative load on the project team and reduces the time they have to deliver the intended outcomes. Sometimes the increased admin / reporting gains the attention of sponsors and access to additional resources, but usually it just detracts from the available delivery capability.

Vish Nandlall also associates trust and complexity in organisational models in a LinkedIn post.

This is one of the reasons I’m excited about what smart contracts can do for the organisations and OSS projects of the future. Just as “Likes” and “Supplier Rankings” have facilitated online trust models, smart contract success rankings have the ability to do the same for OSS suppliers, large and small. For example, rather than needing to engage “Big Vendor A” to build your entire, monolithic OSS stack, if an operator develops simpler, more modular work breakdowns (eg microservices), then they can engage “Freelancer B” and “Small Vendor C” to make valuable contributions on smaller risk increments. Being lower in complexity and risk means B and C have a greater chance of engendering trust, while their historical contract success ranking forces them to treat trust as a key metric.
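To sketch how that ranking might work – and this is purely a thought experiment, with the ledger structure and scoring rule being my own assumptions – a public record of contract outcomes could roll up into a supplier trust score:

```python
# A thought experiment only: a ledger of smart-contract outcomes rolled
# up into a supplier success ranking. Structure and scoring rule are
# my own assumptions.

from dataclasses import dataclass

@dataclass
class ContractOutcome:
    supplier: str
    scope: str          # a small, modular work increment (eg a microservice)
    delivered_on_time: bool
    accepted: bool

ledger = [
    ContractOutcome("freelancer_b", "inventory-api", True, True),
    ContractOutcome("freelancer_b", "alarm-enricher", True, True),
    ContractOutcome("small_vendor_c", "ticket-router", False, True),
]

def success_ranking(supplier: str) -> float:
    """Share of a supplier's contracts delivered on time and accepted."""
    entries = [c for c in ledger if c.supplier == supplier]
    wins = [c for c in entries if c.delivered_on_time and c.accepted]
    return len(wins) / len(entries) if entries else 0.0

print(success_ranking("freelancer_b"))    # 1.0
print(success_ranking("small_vendor_c"))  # 0.0
```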

An OSS niche market opportunity?

The survey found that 82 percent of service providers conduct less than half of customer transactions digitally, despite the fact that nearly 80 percent of respondents said they are moving forward with business-wide digital transformation programs of varying size and scale. This underscores a large perception gap in understanding, completing and benefiting from digitalization programs.

The study revealed that more than one-third of service providers have completed some aspect of digital transformation, but challenges persist; nearly three-quarters of service providers identify legacy systems and processes, challenges relating to staff and skillsets and business risk as the greatest obstacles to transforming digital services delivery.

Driving a successful digital transformation requires companies to transform myriad business and operational domains, including customer journeys, digital product catalogs, partner management platforms and networks via software-defined networking (SDN) and network functions virtualization (NFV).
Survey from Netcracker and ICT Intuition.

Interesting study from Netcracker and ICT Intuition. To re-iterate with some key numbers and take-aways:

  1. 82% of responding service providers can increase digital transactions by at least 50% (in theory).  Digital transactions tend to be significantly cheaper for service providers than manual transactions. However, some customers will work the omni-channel experience to find the channel that they’re most comfortable dealing with. In many cases, this means attempting to avoid digital experiences. As a side note, any attempts to become 100% digital are likely to require social / behavioural engineering of customers and/or to come with an associated churn rate
  2. Nearly 75% of responding service providers identify legacy systems / processes, skillsets and business risk as biggest challenges. This reads as putting a digital interface onto back-end systems like BSS / OSS tools. This is less of a challenge for newer operators that have been designed with digitalised customer interactions in mind. The other challenge for operators is that the digital front-ends are rarely designed to bolt onto the operators’ existing legacy back-end systems and need significant integration
  3. If an operator wants to build a digital transaction regime, they should expect an OSS / BSS transformation too.

To overcome these challenges, I’ve noticed that some operators have been building up separate (often low-cost) brands with digital-native front ends, back ends, processes and skills bases. These brands tend to target the ever-expanding digitally native generations and be seen as the stepping stone to obsoleting legacy solutions (and perhaps even legacy business models?).

I wonder whether this is a market niche for smaller OSS players to target and grow into whilst the big OSS brands chase the bigger-brother operator brands?

We all want to develop trusted OSS partnerships, so why does so much scepticism exist?

Every OSS supplier wants to achieve “trusted” status with their customers. Each supplier wants to be the source trusted to provide the best vision of the future for each customer.

I’m an independent consultant, so I have been lucky enough to represent many organisations on both sides of that equation. And in that position, I’ve been able to get a first-hand view of the perception of trust between OSS vendors / integrators (suppliers) and operators (customers). Let’s just say that in general, we’re working in an industry with more scepticism than trust.

So if trust is so important and such a desired status, where is it breaking down?

Whilst I’d like to assume that most people in our industry go into OSS projects with the very best of intentions, there are definitely some suppliers that act in untrustworthy ways, trying to trick and entrap their customers. For the rest of this post, I’m going to assume the best – assume that we all have great intentions. We’ll then look at why the trust relationships might be breaking down and some of the ways we can do better.

Jon Gordon provides a great list of 11 ways to build trust. Check out his link for a more detailed view, but the 11 factors are as follows:

  1. Say what you are going to do and then do what you say!
  2. Communicate, communicate, communicate
  3. Trust is built one day, one interaction at a time, and yet it can be lost in a moment because of one poor decision
  4. Value long term relationships more than short term success
  5. Sell without selling out. Focus more on your core principles and customer loyalty than short term commissions and profits.
  6. Trust generates commitment; commitment fosters teamwork; and teamwork delivers results.
  7. Be honest!
  8. Become a coach. Coach your customers. Coach your team at work
  9. Show people you care about them
  10. Always do the right thing. We trust those who live, walk and work with integrity.
  11. When you don’t do the right thing, admit it. Be transparent, authentic and willing to share your mistakes and faults

They all sound quite obvious don’t they? Do you also notice that many of the 11 (eg communication, transparency, admitting failure, doing what you say, etc) can be really easy to say but harder to do flawlessly under the pressure of complex OSS delivery projects (and ongoing operations)?

I know I certainly can’t claim a perfect track record on all of these items. Numbers 1 and 2 can be particularly difficult when under extreme delivery pressure, especially when things just aren’t going to plan technically and you’re focussing attention on regaining control of the situation. In those situations, communication and transparency are what the customer needs to maintain confidence, but the customer relationship takes time that also needs to be allocated to overcoming the technical challenges. It becomes a balancing act.

So, how do we position ourselves to make it easier to keep to these 11 best intentions? Simple. By making a concerted effort to reduce complexity… actually not so simple as it sounds, but rewarding if you can achieve it. The less complex your delivery projects (or operational models), the more repeatable and reliable a supplier’s OSS delivery becomes. The more reliable, the less friction and a reduced chance of fracturing relationships. Subsequently, the more chance of building and retaining trust.

Hat-tip to Robert Curran of Aria Networks for spawning a discussion about trust.

Interaction points with fast/slow processes

Further to yesterday’s post on fast / slow processes and factory platforms, a concept presented by Sylvain Denis of Orange in Melbourne last week, here’s a diagram from Sylvain’s presentation pack:

The yellow blocks represent the fast (automated) processes. The orange blocks represent the slow processes.

The next slide showed the human interaction points (blue boxes) into this API / factory stack.
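To translate the diagram into rough operational terms, here’s a sketch of the fast/slow split as I read it: requests hit the same factory API, fast (automated) steps run straight through, and slow steps queue at a human interaction point. The step names and queueing model are my assumptions from the description above, not Sylvain’s actual design.

```python
# A sketch of the fast/slow split: automated steps execute immediately
# (yellow blocks), manual steps queue for a human interaction point
# (orange blocks / blue boxes). All names are illustrative assumptions.

AUTOMATED_STEPS = {"validate_order", "allocate_resources", "activate_service"}
MANUAL_STEPS = {"design_review", "civil_works_approval"}

human_work_queue: list[str] = []

def run_step(step: str) -> str:
    if step in AUTOMATED_STEPS:
        return f"{step}: executed automatically (fast path)"
    # Slow path: park the step at a human interaction point.
    human_work_queue.append(step)
    return f"{step}: queued for human action (slow path)"

for step in ("validate_order", "design_review", "activate_service"):
    print(run_step(step))
```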