Can OSS/BSS assist CX? We’re barely touching the surface

Have you ever experienced an epic customer experience (CX) fail when dealing with a network service operator, like the one I described yesterday?

In that example, the OSS/BSS, and possibly the associated people / processes, had a direct impact on the poor customer experience. Admittedly, that 7-truck-roll experience was a number of years ago now.

We have fewer excuses these days. Smartphones and network-connected devices allow us to get OSS/BSS data into the field in ways we previously couldn’t. There’s no need for printed job lists, design packs and the like. Our OSS/BSS can leverage these connected devices to deliver far better decision intelligence in real time.

If we look to the logistics industry, we can see how parcel tracking technologies help to automatically provide status / progress to parcel recipients. We can see how recipients can also modify their availability, which automatically adjusts logistics delivery sequencing / scheduling.

This has multiple benefits for the logistics company:

  • It increases first-time delivery rates
  • It improves the ability to automatically notify customers (eg email, SMS, chatbots)
  • It decreases customer enquiries / complaints
  • It decreases the amount of time truck drivers need to spend communicating back to base and with clients
  • But most importantly, it improves the customer experience
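To make that availability-driven re-sequencing concrete, here’s a minimal sketch in Python. All the class and field names are invented for illustration; a real scheduler would also weigh travel time, priority and vehicle capacity.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Delivery:
    parcel_id: str
    window_start: time  # recipient's stated availability window
    window_end: time

def resequence(deliveries: list[Delivery]) -> list[Delivery]:
    """Order the run by earliest availability window.

    Only shows the availability-driven re-sort; real scheduling
    would combine this with routing and capacity constraints.
    """
    return sorted(deliveries, key=lambda d: (d.window_start, d.window_end))

run = [
    Delivery("P1", time(13, 0), time(17, 0)),
    Delivery("P2", time(9, 0), time(12, 0)),
]
# Recipient of P1 updates their availability to the morning...
run[0].window_start, run[0].window_end = time(8, 0), time(10, 0)
# ...and the run is automatically re-sequenced.
print([d.parcel_id for d in resequence(run)])  # ['P1', 'P2']
```

The OSS/BSS equivalent has the same shape: a field job list that automatically re-orders itself when customer availability or site-access data changes.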

Logistics is an interesting challenge for our OSS/BSS due to the sheer volume of customer interaction events handled each day.

But there’s another area that excites me even more, where CX is improved through improved data quality:

  • The ability for field workers to interact with OSS/BSS data in real-time
  • To see the design packs
  • To compare them with field situations
  • To update the data where there is inconsistency (as sketched below).
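A minimal sketch of that last reconciliation step, with invented record fields: diff what the field worker observes against the OSS inventory record and raise an update for each mismatch.

```python
def reconcile(oss_record: dict, field_observation: dict) -> dict:
    """Return the attributes where the field observation disagrees
    with the OSS inventory record, ready to feed an update workflow."""
    return {
        attr: {"oss": oss_record.get(attr), "field": observed}
        for attr, observed in field_observation.items()
        if oss_record.get(attr) != observed
    }

oss = {"splice_case": "SC-041", "tray": 3, "fibre_count": 72}
field = {"splice_case": "SC-041", "tray": 4, "fibre_count": 72}
print(reconcile(oss, field))  # {'tray': {'oss': 3, 'field': 4}}
```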

Even more excitingly, to introduce augmented reality to assist with decision intelligence for field work crews:

  • To provide an overlay of what fibres need to be spliced together
  • To show exactly which port a patch-lead needs to connect to
  • To show where an underground cable route goes
  • To show where a cable runs through trayway in a data centre
  • etc, etc

We’re barely touching the surface of how our OSS/BSS can assist with CX.

The OSS Minimum Feature Set is Not The Goal

"This minimum feature set (sometimes called the “minimum viable product”) causes lots of confusion. Founders act like the “minimum” part is the goal. Or worse, that every potential customer should want it. In the real world not every customer is going to get overly excited about your minimum feature set. Only a special subset of customers will and what gets them breathing heavy is the long-term vision for your product.

The reality is that the minimum feature set is 1) a tactic to reduce wasted engineering hours (code left on the floor) and 2) to get the product in the hands of early visionary customers as soon as possible.

You’re selling the vision and delivering the minimum feature set to visionaries, not everyone.”
Steve Blank here.

A recent blog series discussed the use of pilots as an OSS transformation and augmentation change agent:

  • I have the need for OSS speed
  • Re-framing an OSS replacement strategy
  • OSS transformation is hard. What can we learn from open source?

Note that you can replace the term pilot in these posts with MVP – Minimum Viable Product.

The attraction in getting an MVP / pilot version of your OSS into the hands of users is familiarity and momentum. The solution becomes more tangible and therefore needs less documentation (eg architecture, designs, requirement gathering, etc) to describe foreign concepts to customers. The downside of the MVP / pilot is that not every customer will “get overly excited about your minimum feature set.”

As Steve says, “Only a special subset of customers will and what gets them breathing heavy is the long-term vision for your product.” The challenge for all of us in OSS is articulating the long-term vision and making it compelling…. and not just leaving the product in its pilot state (we’ve all seen this happen haven’t we?)

We’ll provide an example of a long-term vision tomorrow.

PS. I should also highlight that the maximum feature set isn’t the goal either.

An OSS theatre of combat

Have you sat on both sides of the OSS procurement process? That is, been an OSS buyer (eg writing an RFP) and an OSS seller (eg responding to an RFP) on separate projects?

Have you noticed the amount of brain-power allocated to transferral of risk from both angles?

If you’re the buyer, you seek to transfer risk to the seller through clever RFP clauses.
If you’re the seller, you seek to transfer risk to the buyer through exclusions, risk margins, etc in your RFP response.

We openly collaborate on features during the RFP, contract formation, design and implementation phases. We’re open to finding the optimal technical solution throughout those phases.

But when it comes to risk, it’s bordering on passive-aggressive behaviour when you think about it. We’re also not so transparent or collaborative about risk in the pre-implementation phases. That increases the likelihood of combative risk / issue management during the implementation phase.

The trusting long-term relationship that both parties wish to foster starts off with a negative undercurrent.

The reality is that OSS projects carry significant risk. Both sides carry a large risk burden. It seems like we could be as collaborative on risks as we are on requirements and features.

Thoughts?

Addressing the trauma of OSS

"You also have to understand their level of trauma. Your product, service or information is selling a solution to someone who is in trauma. There are different levels, from someone who needs a nail to finish the swing set in their backyard to someone who just found out they have a life-threatening disease. All of your customers had something happen in their life, where the problem got to an unmanageable point that caused them to actively search for your solution. A buying decision is an emotional decision."
John Carlton

My clients tend to fall into three (fairly logical) categories:

  1. They’re looking to buy an OSS
  2. They’re looking to sell an OSS
  3. They’re in the process of implementing an OSS

Category 3 clients tend to bring a very technical perspective to the task. Lists of requirements, architectures, designs, processes, training, etc.

Category 2 clients tend to also bring a technical perspective to the task. Lists of features, processes, standards, workflows, etc.

Category 1 clients also tend to break down the buying decision in a technical manner. Lists of requirements, evaluation criteria, ranking / weighting models, etc.

But what’s interesting about this is that category 1 is actually a very human initiative. It precedes the other two categories (ie it is the lead action). And category 1 clients tend to only reach this state of needing help due to a level of trauma. The buying decision is an emotional decision.

Nobody wants to go through an OSS replacement or the procurement event that precedes it. It’s also a traumatic experience for the many people involved. As much as I love being involved in these projects, I wouldn’t wish them on anyone.

I wonder whether taking the human perspective, actively putting ourselves in the position of understanding the trauma the buyer is experiencing, might change the way we approach all three categories above?

That is, taking less of a technical approach (although that’s still important, of course) and focusing more on addressing the trauma. As the first step, do you step back to understand the root-cause of your customer’s unique trauma?

Do you have a nagging OSS problem you cannot solve?

On Friday, we published a post entitled, “Think for a moment…” which posed the question of whether we might be better served looking back at our most important existing features and streamlining them, rather than inventing new features that have little impact.

Over the weekend, a promotional email landed in my inbox from Nightingale Conant. It is completely unrelated to OSS, yet the steps outlined below seem to be a fairly good guide for identifying what to reinvent within our existing OSS.

Go out and talk to as many people [customers or potential] as you can, and ask them the following questions:
1. Do you have a nagging problem you cannot solve?
2. What is that problem?
3. Why is solving that problem important to you?
4. How would solving that problem change the quality of your life?
5. How much would you be willing to pay to solve that problem?

Note: Before you ask these questions, make sure you let the people know that you’re not going to sell them anything. This way you’ll get quality answers.
After you’ve talked with numerous people, you’ll begin to see a pattern. Look for the common problems that you believe you could solve. You’ll also know how much people are willing to pay to have their problem solved and why.

I’d be curious to hear back from you. Do those first 4 questions identify a pattern that relates to features you’ve never heard of before or features that your OSS already performs (albeit perhaps not as optimally as it could)?

To link or not to link your OSS. That is the question

The first OSS project I worked on had a full-suite, single vendor solution. All products within the suite were integrated into a single database and that allowed their product developers to introduce a lot of cross-linking. That has its strengths and weaknesses.

The second OSS suite I worked with came from one of the world’s largest network vendors and integrators. Their suite primarily consisted of third-party products that they integrated together for the customer. It was (arguably) best-of-breed, all implemented as a single solution, but since the products were disparate, there was very little cross-linking. This approach also has strengths and weaknesses.

I’d become so used to the massive data migration and cross-referencing exercise required by the first OSS that I was stunned by the lack of time allocated by the second vendor for their data migration activities. The first took months and a significant level of expertise. The second took days and only required fairly simple data sets. That’s a plus for the second OSS.

However, the second OSS was severely lacking in cross-domain data, which impacted the richness of insight that could be easily unlocked.

Let me give an example for better context.

We know that a trouble ticketing system is responsible for managing the tracking, reporting and resolution of problems in a network operator’s network. This could be as simple as a repository for storing a problem identifier and a list of the actions performed to resolve the problem. There’s almost no cross-linking required.

A more referential ticketing system might have links to:

  • Alarm management – to show the events linked to the problem
  • Inventory management – to show the impacted (or possibly impacted) resources
  • Service management – to show the services impacted
  • Customer management – to show the customers impacted and possibly the related customer interactions
  • Spares management – to show the life-cycle of physical resources impacted
  • Workforce management – to manage the people / teams performing restorative actions
  • etc

The referential ticketing system gives far richer information, obviously, but you have to trade that off against the amount of integration and data maintenance that needs to go into supporting it. The question to ask is what level of linking is justifiable from a cost-benefit perspective.
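As a rough sketch of the difference (all identifiers and field names hypothetical), the simple repository holds little more than an ID and notes, while the referential model carries keys into each neighbouring domain:

```python
from dataclasses import dataclass, field

@dataclass
class SimpleTicket:
    ticket_id: str
    notes: list[str] = field(default_factory=list)

@dataclass
class ReferentialTicket(SimpleTicket):
    alarm_ids: list[str] = field(default_factory=list)       # alarm management
    resource_ids: list[str] = field(default_factory=list)    # inventory management
    service_ids: list[str] = field(default_factory=list)     # service management
    customer_ids: list[str] = field(default_factory=list)    # customer management
    spare_ids: list[str] = field(default_factory=list)       # spares life-cycle
    work_order_ids: list[str] = field(default_factory=list)  # workforce management
```

Every extra list of keys buys richer impact analysis, but each one is an integration that must be populated and kept accurate, which is exactly the cost-benefit trade-off above.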

My favourite OSS saying

My favourite OSS saying – “Just because you can, doesn’t mean you should.”

OSS are amazing things. They’re designed to gather, process and compile all sorts of information from all sorts of sources. I like to claim that OSS/BSS are the puppet masters of any significant network operator because they reach into every corner of the business, assisting with the processes carried out by almost every business unit.

They can be (and have been) adapted to fulfill all sorts of weird and wonderful requirements. That’s the great thing about software. It can be *easily* modified to do almost anything you want. But just because you can, doesn’t mean you should.

In many cases, we have looked at a problem from a technical perspective and determined that our OSS can (and did) solve it. But if the same problem were also looked at from business and/or operational perspectives, would it make sense for our OSS to solve it?

Some time back, I was involved in a micro project that added 1 new field to an existing report. Sounds simple. Unfortunately, by the time all the rigorous deploy and transition processes were followed to get the update into PROD, the support bill from our team alone ran into tens of thousands of dollars. Months later, I found out that the business unit that had requested the additional field had a bug in their code and wasn’t even picking up the extra field. Nobody had even noticed until a secondary bug prompted another developer to ask how the original code was functioning.

It wasn’t deemed important enough to fix. Many tens of thousands of dollars were wasted because we didn’t think to ask up the design tree why the functionality was (wasn’t) important to the business.

Other examples arise when we use the OSS to solve a problem through expensive customisation / integration when manual processes could do the job more cost-efficiently.

Another example was a client that had developed hundreds of customisations to resolve annoying / cumbersome, but incredibly rare, tasks. The efficiency of removing those tasks didn’t come close to compensating for the expense of building the automations / tools. Just one sample of those tools delivered a $1,000 efficiency improvement at a ~$200,000 project cost… on a task that had only been run twice in the preceding 5 years.


How to build a personal, cloud-native OSS sandpit

As a project for 2019, we’re considering the development of a how-to training course that provides a step-by-step guide to build your own OSS sandpit to play with. It will be built around cloud-native and open-source components. It will be cutting-edge and micro-scaled (but highly scalable in case you want to grow it).

Does this sound like something you’d be interested in hearing more about?

Like or comment if you’d like us to keep you across this project in 2019.

I’d love to hear your thoughts on what the sandpit should contain. What would be important to you? We already have a set of key features identified, but will refine it based on community feedback.

The Theory of Evolution, OSS evolution

"Evolution says that biological change is a property of populations — that every individual is a trial run of an experimental combination of traits, and that at the end of the trial, you are done and discarded, and the only thing that matters is what aggregate collection of traits end up in the next generation. The individual is not the focus, the population is. And that’s hard for many people to accept, because their entire perception is centered on self and the individual.”
FreeThoughtBlog.

Have we almost reached the point where the same can be said for OSS workflows? In the past (and the present?) we had pre-defined process flows. There may be an occasional if/else decision gate, but we could capture most variants on a process diagram. These pre-defined processes were / are akin to a production line.

Process diagrams are becoming harder to lock down as our decision trees get more complicated. Technologies proliferate, legacy product lines don’t get obsoleted, the number of customer contact channels increases. Not only that, but we’re now marketing to a segment of one, treating every one of our customers as unique, whilst trying not to break our OSS / BSS.

Do we have the technology yet that allows each transaction / workflow instance to be treated as an experimental combination of attributes / tasks? More importantly, do we have the ability to identify any successful mutations that allow the population (ie the combination of all transactions) to get progressively better, faster, stronger?

It seems that to get to CX nirvana, being able to treat every customer completely uniquely, we need to first master an understanding of the population at scale. Conversely, to achieve the benefits of scale, we need to understand and learn from every customer interaction uniquely.

That’s evolution. The benchmark sets the pattern for future workflows until a variant / mutation proves better and establishes the new pattern for future workflows, and so the cycle continues.

The production line workflow model of the past won’t get us there. We need an evolution workflow model that is designed to accommodate infinite optionality and continually learn from it.

Does such a workflow tool exist yet? Actually, it’s more than a workflow tool. It’s a continually improving workflow loop.
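As a thought experiment, here’s one minimal way the “experimental combination” idea could look in code: an epsilon-greedy sketch (the variant names and completion-time metric are invented) that mostly runs the benchmark workflow variant, occasionally trials a mutation, and promotes whichever variant proves fitter.

```python
import random

# Completion times recorded per workflow variant (invented metric).
variants: dict[str, list[float]] = {"benchmark": [], "mutation_a": []}

def choose_variant(epsilon: float = 0.1) -> str:
    """Mostly exploit the fittest known variant; occasionally explore."""
    scored = {v: sum(t) / len(t) for v, t in variants.items() if t}
    if not scored or random.random() < epsilon:
        return random.choice(list(variants))
    return min(scored, key=scored.get)  # lowest mean completion time wins

def record_outcome(variant: str, completion_time: float) -> None:
    """Feed each finished workflow instance back into the population."""
    variants[variant].append(completion_time)

# Each transaction is a trial run of a variant; the benchmark only holds
# until a mutation's recorded outcomes beat it.
v = choose_variant()
record_outcome(v, completion_time=42.0)
```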

That’s not where to disrupt your OSS

The diagram below comes from an actual client’s functionality usage profile.
[Figure: Long tail of OSS]

The x-axis shows the functionality / use-cases. The y-axis shows the number of uses (it could equally represent usefulness or value).

Each big-impact demand (ie individual bars on the left-side of the graph) warrants separate investigation. The bars on the right side (ie the long tail in the red box) don’t. They might be worth investigating if we could treat some/all as a cohort though.

The left side of the graph represents the functionality / use-cases that have been around for decades. Every OSS has them. They’re so common and non-differentiated that they’re not remotely sexy. Customers / stakeholders aren’t going to be wowed by them. They’re just going to expect them. Our product developers have already delivered that functionality, have moved on and are now looking for new things to work on.

And where does the new stuff reside? Generally as new bars on the right side of the graph. That’s the law of diminishing returns territory right there! You’re unlikely to move the needle from out there.
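If you have usage counts per feature, the head / long-tail split can be derived mechanically. A minimal sketch, with the 80% threshold and the feature names invented for illustration:

```python
def split_long_tail(usage: dict[str, int], head_share: float = 0.8):
    """Features covering the first `head_share` of total usage form the
    head (investigate individually); the rest are the long tail (cohort)."""
    ranked = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)
    total, running, head = sum(usage.values()), 0, []
    for feature, count in ranked:
        if running / total >= head_share:
            break
        head.append(feature)
        running += count
    tail = [f for f, _ in ranked if f not in head]
    return head, tail

head, tail = split_long_tail({"assure": 900, "fulfil": 700, "report_x": 12, "tool_y": 3})
print(head, tail)  # ['assure', 'fulfil'] ['report_x', 'tool_y']
```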

Does this graph convince you to send your most skilled craftsmen back to do more tinkering / disrupting at the left side of the graph… as opposed to adding new features at the right side? Does it inspire you to dream up exciting cohort management techniques for the red box? Perhaps it even persuades you to cull some of the long-tail features that are chewing up lifecycle effort (eg code management, regression testing, complexity tax)?

If it does convince you, don’t forget to think about how you’re going to market it. How are you going to make the left side sexy / differentiated again? Are you going to have to prove just how much easier, cheaper, faster, more efficient, more profitable, etc it is? That brings us back to the OSS proof-of-worth discussion we had yesterday. It also brings us back to Sutton’s Law – go to where the money is.

The OSS proof-of-worth dilemma

Earlier this week we posted an article describing Sutton’s Law of OSS, which effectively tells us to go where the money is. The article suggested that in OSS we instead tend towards the exact opposite – the inverse-Sutton – we go to where the money isn’t. Instead of robbing banks like Willie Sutton, we break into a cemetery and aimlessly look for the cash register.

A good friend responded with the following, “Re: The money trail in OSS … I have yet to meet a senior exec. / decision maker in any telco who believes that any OSS component / solution / process … could provide benefit or return on any investment made. In telco, OSS = cost. I’ve tried very hard and worked with many other clever people also trying hard to find a way to pitch OSS which overcomes this preconception. BSS is often a little easier … in many cases it’s clear that “real money” flows through BSS and needs to be well cared for.”

He has a strong argument. The cost-out mentality is definitely ingrained in our industry.

We are saddled with the burden of proof. We need to prove, often to multiple layers of stakeholders, the true value of the (often intangible) benefits that our OSS deliver.

The same friend also posited, “The consequence is the necessity to establish beneficial working relationships with all key stakeholders – those who specify needs, those who design and implement solutions and those, especially, who hold or control the purse strings. [To an outsider] It’s never immediately obvious who these people are, nor what are their key drivers. Sometimes it’s ambition to climb the ladder, sometimes political need to “wedge” peers to build empires, sometimes it’s necessity to please external stakeholders – sometimes these stakeholders are vendors or government / international agencies. It’s complex and requires true consultancy – technical, business, political … at all levels – to determine needs and steer interactions.”

Again, so true. It takes more than just technical arguments.

I’m big on feedback loops, but also surprised at how little they’re used in telco – at all levels.

  • We spend inordinate amounts of time building and justifying business cases, but relatively little measuring the actual benefits produced after we’ve finished our projects (or gaining the learnings to improve the next round of business cases)
  • We collect data in our databases, obliviously let it age, realise at some point in the future that we have a data quality issue and perform remedial actions (eg audits, fixes, etc) instead of designing closed-loop improvement cycles that ensure DQ isn’t constantly deteriorating (see the sketch after this list)
  • We’re great at spending huge effort in gathering / arguing / prioritising requirements, but don’t always run requirements traceability all the way into testing and operational rollout.
  • etc
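On that second point, a closed-loop DQ cycle doesn’t need to be elaborate. A minimal sketch, where the required fields, threshold and the fetch / remediation hooks are all invented placeholders:

```python
def dq_score(records: list[dict], required: tuple[str, ...]) -> float:
    """Share of records with every required attribute populated."""
    if not records:
        return 1.0
    complete = sum(all(r.get(f) not in (None, "") for f in required) for r in records)
    return complete / len(records)

def dq_loop(fetch_records, raise_remediation, threshold: float = 0.98) -> None:
    """Run on a schedule: measure continuously and open remediation work the
    moment quality drifts below threshold, instead of waiting years for the
    crisis audit."""
    score = dq_score(fetch_records(), ("site_id", "port", "service_id"))
    if score < threshold:
        raise_remediation(f"DQ completeness {score:.1%} is below {threshold:.0%}")
```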

Which leads us back to the burden of proof. Our OSS have all the data in the world, but how often do we use it to justify and persuade – to prove?

Our OSS products have so many modules and so much technical functionality (much of it effectively duplicated from vendor to vendor). But I’ve yet to see any vendor product that allows their customers, the OSS operators, to automatically gather proof-of-worth stats (ie executive-ready reports). Nor have I seen any integrator build proof-of-worth consultancy into their offer, whereby they work closely with their customer to define and collect the metrics that matter. BTW. If this sounds hard, I’d be happy to discuss how I approach this task.
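A proof-of-worth report doesn’t even need new instrumentation if the baseline and current figures already sit in the OSS. A minimal sketch (the metric names and numbers are invented) of turning them into an executive-ready before / after statement:

```python
def proof_of_worth(baseline: dict[str, float], current: dict[str, float]) -> str:
    """One line per metric: baseline vs current, with percentage change."""
    lines = []
    for metric, before in baseline.items():
        after = current[metric]
        change = (after - before) / before * 100
        lines.append(f"{metric}: {before:g} -> {after:g} ({change:+.0f}%)")
    return "\n".join(lines)

print(proof_of_worth(
    {"mean_time_to_restore_hrs": 6.0, "truck_rolls_per_fault": 1.8},
    {"mean_time_to_restore_hrs": 4.2, "truck_rolls_per_fault": 1.2},
))
# mean_time_to_restore_hrs: 6 -> 4.2 (-30%)
# truck_rolls_per_fault: 1.8 -> 1.2 (-33%)
```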

So let me leave you with three important questions today:

  1. Have you also experienced the overwhelming burden of the “OSS = cost” mentality?
  2. If so, do you have any suggestions / experiences on how you’ve overcome it?
  3. Does the proof-of-worth product functionality sound like it could be useful (noting that it doesn’t even have to be a product feature, but a custom report / portal using data that’s constantly coursing through our OSS databases)?

The Jeff Bezos prediction for OSS

"If we start to focus on ourselves, instead of focusing on our customers, that will be the beginning of the end … We have to try and delay that day for as long as possible.”
Jeff Bezos

Jeff Bezos recently predicted that Amazon is likely to fail and/or go bankrupt at some point in time, as history has eventually proven for most high-flying companies. The quote above was part of that discussion.

I’ve worked with quite a few organisations that have been in the midst of some sort of organisational re-structure – upsizing, downsizing, right-scaling, transforming – whatever the term in effect. Whilst it would be safe to say that all of those companies were espousing being focused on their customers, the organisational re-structures always seem to cause inward-facing behaviour. It’s human nature: change triggers fears about job security, opportunities to expand empires, etc.

And in these types of inward-facing environments in particular, I’ve seen some really interesting decisions made around OSS projects. When making these decisions, customer experience has clearly been a long way down the list of priorities!! And in the current environment of significant structural change in the telco industry, these stimulants of internal-facing behaviour appear to be growing.

Whilst many people want to see OSS projects as technical delivery solutions and focus on the technology, the people and culture aspects of the project can often be even more challenging. They can also be completely underestimated.

What have your experiences been? Do you agree that customer-facing vision, change management and stakeholder management can be just as vital as technical brilliance on an OSS implementation team?

OSS that capture value, not just create it

I’ve just had a really interesting first day at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia). The quality of presentations was quite high. Some great thought-provoking ideas!!

Nik Willetts kicked off his keynote with the following quote, which I’m paraphrasing, “Telcos need to start capturing value, not just creating it as they have for the last decade.”

For me, this is THE key takeaway for this event, above any of the other interesting technical discussions from day 1 (and undoubtedly on the agenda for the next 2 days too).

The telecommunications industry has made a massive contribution to the digital lifestyle that we now enjoy. It has been instrumental in adding enormous value to our lives and our economy. But all the while, telecommunications providers globally have been experiencing diminishing profitability and share-of-wallet (as described in this earlier post). Clearly the industry has created enormous value, but hasn’t captured as much as it would’ve liked.

The question to ask is how our thinking and our OSS/BSS stacks will help to capture more value for our customers. As described in the share of wallet post above, the premium end of the value chain has always been in the content (think in terms of phone conversations in days gone by, or the myriad comms techniques today such as email, live chat, blogs, etc). That’s what the customer pays for – the experience – not the networks or systems that facilitate it.

Nik’s comments made me think of Andrew Carnegie. Monopolies such as the telecommunications organisations of the past and Andrew Carnegie’s steel business owned vast swathes of the value chain (Carnegie Steel Company owned the mines which extracted the raw materials needed to make steel, controlled the transportation used to deliver the materials and the product, and ran the mills used for steel production). Buyers didn’t care for the mines or mills or transportation. Customers were paying for the end product as it is what helped them achieve their goals, whether that was the railway tracks needed by the railroads or the beams needed by construction companies.

The Internet has allowed enormous proliferation of the premium-end of the telecommunications value chain. It’s too late to stuff that genie back into the bottle. But to Nik’s further comment, we can help customers achieve their goals by becoming their “do-it-yourself” digital partners.

Our customers now look to platforms like Facebook, Instagram, Google, WordPress, Amazon, etc to build their marketing, order capture, product / content delivery, commercial transactions, etc. I really enjoyed Monty Hong’s presentation that showed how Telkomsel’s OSS/BSS is helping to embed Telkomsel into customers’ digital lifestyles / value-chains. It’s a perfect example of the biggest OSS loser proof discussed in yesterday’s post.

Telco services that are bigger, faster, better and the OSS that supports that

We all know of the tectonic shifts in the world of telco services, profitability and business models.

One common trend is for telcos to offer pipes that are bigger and faster. Seems like a commoditising business model to me, but our OSS still need to support it. How? Through enabling efficiency at scale: building tools, GUIs, workflows, integrations, sales pipelines, etc that enable telcos to march seamlessly towards offering ever bigger / faster pipes. An OSS/BSS stack that supports this could represent one of the few remaining sustainable competitive advantages, so any such OSS/BSS could be highly valuable to its owner.

But if the bigger/faster pipe model is commoditising and there’s little differentiation between competing telcos’ OSS/BSS on service activation, then what is the alternative? Services that are better? But what is “better”? More to the point, what is sustainably better (ie can’t be easily copied by competitors)? Services that are “better” are likely to come in many different forms, but they’re unlikely to be related to the pipe (except maybe reliability / SLA / QoS). They’re more likely to be in the “bundling,” which may include premium content, apps, customer support, third-party products, etc. An OSS/BSS that is highly flexible in supporting any mix of bundling becomes important. Product / service catalogs are one of many possible examples.
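To illustrate the flexibility point, here’s a minimal catalog sketch in which a bundle is just data composed over catalog items, so a new bundle is a configuration change rather than a code change. The SKUs, prices and discount rule are all invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogItem:
    sku: str
    name: str
    monthly_price: float

CATALOG = {
    "FIB-1000": CatalogItem("FIB-1000", "1 Gbps fibre pipe", 89.0),
    "TV-SPORT": CatalogItem("TV-SPORT", "Premium sport content", 25.0),
    "SUP-PLUS": CatalogItem("SUP-PLUS", "Priority support", 10.0),
}

def bundle_price(skus: list[str], discount: float = 0.10) -> float:
    """A bundle is a list of SKUs plus a pricing rule; nothing else changes."""
    return sum(CATALOG[s].monthly_price for s in skus) * (1 - discount)

print(bundle_price(["FIB-1000", "TV-SPORT", "SUP-PLUS"]))  # ≈ 111.6
```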

An even bigger differentiator is not bigger / faster / better, but different (if perceived by the market as being invaluably different). The challenge with being different is that “different” tends to be fleeting. It tends to only last for a short period of time before competitors catch up. Since many of the differences available to telco services are defined in software, the window of opportunity is getting increasingly short… except when it comes to the OSS/BSS being able to operationalise that differentiator. It’s not uncommon for a new feature to take 9+ months to get to market, with changes to the OSS/BSS taking up a significant chunk of the project’s critical path. Having an OSS/BSS stack that can repeatedly get a product / feature to market much faster than competing telcos provides greater opportunity to capture the market during the window of difference.

OSS collaboration rooms. Getting to the coal-face

A number of years ago I heard about an OSS product that introduced collaborative rooms for network operators to collectively solve challenging network health events. It was in line with some of my own thinking about the use of collaboration techniques to solve cross-domain or complex events. But the concept hasn’t caught on in the way that I expected. I was curious why, so I asked around some friends and colleagues who are hands-on managing networks every day.

The answer showed that I hadn’t got close enough to understanding the psyche at the coal-face. It seems that operators have a preference for the current approach, the tick and flick of trouble tickets until the solution forms and the problem is solved.

This shows the psyche of collaboration at a micro scale. I wonder if it holds true at a macro scale too?

No CSP has an everywhere footprint (admittedly cloud providers are close to everywhere though, in part through global presence, in part through coverage of the access domain via their own networks and/or OTT connectivity). For customers that need to cross geo-footprints, carriers take a tick and flick approach in terms of OSS. The OSS of one carrier passes orders to the other carrier’s OSS. Each OSS stays within the bounds of its organisation’s locus of control (see this blog for further context).

To me, there seems to be an opportunity for carriers to get out of their silos. To leverage collaboration for speed, coverage, etc by designing offerings in OSS design rooms rather than standards workshops. A global product catalog sandpit, as it were, for carriers to design offerings in. Every carrier’s service offering / API / contract resides there for other carriers to interact with.

But once again, I may not be close enough to understanding the psyche at the coal-face. If you work at this coal-face, I’d love to get your opinions on why this would or would not work.

OSS feature parity. A functionality arms race

OSS Vendor 1. “I have 1 million features.” (Dr Evil puts finger in mouth)
OSS Vendor 2. “Yeah, well I have 1,000,001 features in my OSS.”

This is the arms-race that we see in OSS, just like almost any other tech product. I imagine that vendors get into this arms-race because they wish to differentiate. Better to differentiate on functionality than price. If there’s feature parity, then the only differentiator is price. We all know that doesn’t end well!

But I often ask myself a few related questions:

  • Of those million features, how many are actually used regularly?
  • As a vendor, do you have logging that actually allows you to know which features are being used (a minimal sketch follows this list)?
  • Taking the Whale Curve perspective, even if they’re being used, how many of those features actually contribute to the objectives of the vendor?
    • Do they clearly contribute towards making sales?
    • Do customers delight in using them?
    • Would customers be irate if you removed them?
    • etc
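On the logging question, even coarse telemetry answers it. A minimal sketch, assuming you can wrap your feature entry points; the feature name and counter store are illustrative only:

```python
from collections import Counter
from functools import wraps

FEATURE_USAGE = Counter()

def track_usage(feature_name: str):
    """Decorator for feature entry points; increments a per-feature counter."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            FEATURE_USAGE[feature_name] += 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

@track_usage("alarm_list_export")
def export_alarm_list():
    ...

export_alarm_list()
print(FEATURE_USAGE.most_common())  # [('alarm_list_export', 1)]
```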

Earlier this week, I spoke about a friend who created an alarm management tool by himself over a weekend. It didn’t have a million features, but it did have all of what I’d consider to be the most important ones. It also looked like a lot of the other alarm managers now on the market. The GUI based on alarm lists still pervades.

If they all look alike, and all have feature parity, how do you differentiate? If you try to add more features, is it safe to assume that those features will deliver diminishing returns?

But is an alarm list and the flicking of tickets the best way to manage network health?

What if, instead of seeking incremental improvement, someone went back to the most important requirements and considered whether the current approach is meeting those customer needs? I have a strong suspicion that customer feedback will indicate that there are definitely flaws to overcome, especially on high event volume networks.

Clever use of large data volumes provides a level of pre-cognition and automation that wasn’t available when simple alarm lists were first invented. This in turn potentially changes the way that operators can engage with network monitoring and management.
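As one minimal illustration of that shift (the correlation window, fields and alarms are invented), even simple time-and-topology grouping compresses a raw alarm list into a shorter list of probable incidents:

```python
def group_alarms(alarms: list[dict], window_secs: int = 60) -> list[list[dict]]:
    """Group alarms that share a root resource and arrive close together.

    Real correlation engines add topology traversal and learned patterns;
    this only shows the list-to-incident compression.
    """
    groups: dict[tuple, list[dict]] = {}
    for alarm in sorted(alarms, key=lambda a: a["ts"]):
        key = (alarm["root_resource"], alarm["ts"] // window_secs)
        groups.setdefault(key, []).append(alarm)
    return list(groups.values())

alarms = [
    {"ts": 100, "root_resource": "NODE-7", "text": "LOS"},
    {"ts": 102, "root_resource": "NODE-7", "text": "AIS downstream"},
    {"ts": 500, "root_resource": "NODE-9", "text": "High temp"},
]
print(len(group_alarms(alarms)))  # 2 probable incidents, not 3 alarms
```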

What if someone could identify a whole new user interface / approach that overcame the current flaws and exceeded the key requirements? Would that be more of a differentiator than adding a 1,000,002nd feature?

If you’re looking for a comparison, there were plenty of MP3 players on the market with a heap of features, many more than the iPod. We all know how that one played out!

What if the OSS solution lies in its connections?

Imagine for a moment that you’re sitting in front of a pristine chess board, awaiting the opportunity to make your first move. All of the pieces have been exquisitely carved from stone, polished to a sheen. The rules of the game have been established for centuries, so you know exactly which piece is able to move in which sequences. Time to make the opening move.

You’ve studied the games of the masters who have preceded you and have planned your opening gambit, the procession of moves that will hopefully take you into a match-winning position. Due to your skills with modern automations, you’ve connected some of the chess pieces with delicate strings to implement your opening gambit with precision.

Unfortunately, after the first few moves, your strings are starting to pull the pieces out of position. Your opponent has countered well and you’re having to modify your initial plans. You introduce some additional pulleys and springs to help retain the rightful position of your pieces on the board and cope with unexpected changes in strategy. The automations are becoming ever more complex, taking more time to plan and implement than the actual next move.

The board is starting to devolve into unmanageable chaos.

Does this sound like the analogy of a modern OSS? It’s what I refer to as the chessboard analogy.

We’ve been at this OSS game for long enough to already have an understanding of all of the main pieces. TM Forum’s TAM provides this definition as a useful guide. The pieces are modular, elegant and quite well understood by its many players. The rules of the game haven’t really changed much. The main use cases of an OSS from decades ago (ie assure, fulfil, plan, build, etc) probably don’t differ significantly from those of today. This “should” set the foundations for interchangeability of applications.

We see programs of work like ONAP, where millions of lines of code are being developed to re-write the rules of the game. I’m a big advocate of many of the principles of ONAP, but I’m still not sure that such a massive re-write is what’s needed.

It’s not so much in the components of our OSS as in the connections between them where things tend to go awry.

"The foundation of all brilliance is seeing connections when no one else does.”
Richard Parkinson

This article distils ONAP from its answers back to the core questions. What if, instead of seeking an entirely new architectural stack, we focused on solving the core questions and the chessboard problem – the problem of connections?

Perhaps the answer to the connection problem lies in the interchangeable small grid OSS model discussed in yesterday’s article on planned OSS obsolescence.
But it probably also incorporates what ONAP calls, “real-time, policy-driven orchestration and automation,” to replace pre-defined processes. I wonder instead whether state-based transitions, guided by intent / policy rules and feedback loops (ie learning systems), might hold the key. An evolving and learning solution that shares similarities with the electrical pathways in our brain, which strengthen the more they’re used and diminish if no longer used.
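A minimal sketch of that strengthening-pathway idea (the states, reinforcement factors and outcome signal are all invented): a state machine whose transition weights grow when a path ends in success and decay when it doesn’t, so the pathways that work become progressively favoured.

```python
import random
from collections import defaultdict

# Preference weight per (state, next_state) transition; starts neutral.
weights = defaultdict(lambda: 1.0)

def next_state(state: str, options: list[str]) -> str:
    """Pick the next state with probability proportional to learned weight."""
    w = [weights[(state, o)] for o in options]
    return random.choices(options, weights=w)[0]

def reinforce(path: list[str], success: bool) -> None:
    """Strengthen transitions on a successful path; let others decay."""
    factor = 1.1 if success else 0.95
    for a, b in zip(path, path[1:]):
        weights[(a, b)] *= factor

# One workflow instance: traverse, then feed the outcome back.
step = next_state("ticket_open", ["auto_diagnose", "manual_triage"])
reinforce(["ticket_open", step], success=True)
```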

The future of work and its impact on OSS

Many years ago, I worked on a seriously big OSS transformation for one of the region’s biggest telcos. Everything about the project was big: the investment, the resources, the documentation. Everything except the outcomes. There was so much inefficiency that I often spoke about making one day of progress for every ten on site. Meetings, bureaucracy, impossible approval cycles, customer re-organisations, over-analysis, etc all added up to stagnation.

This contrasted so much with some of the amazing small teams I’ve worked alongside. Teams that worked cohesively, cleverly and just got stuff done with almost no resources. It’s one of the reasons I feel that the future of work, even for the very large organisations, will be via small teams: work outsourced to small, efficient teams / organisations. The gig economy, and the proliferation of tools that support it, make it an obvious approach to take, especially for very large organisations to leverage. Proof of work technologies, such as those building upon the discovery of blockchain, will provide further impetus to use smaller teams of experts.

Experts like a friend and colleague of mine who once built an alarm management tool in a weekend, by himself. It also happened to be more sophisticated than his employer’s existing tool that had taken years of combined developer effort by a larger team.

Maybe I’ll be proven wrong, but I see the transition to this model of work as being inevitable. The question I have is how to make our OSS more accommodating of this work model. Behemoth OSS stacks won’t be. Highly modular OSS made up of many smaller components probably will, as long as they don’t succumb to the OSS chessboard analogy, where the pulleys and strings make it impossible for small, interchangeable teams to decipher and manage.

A small-grid OSS model is the one I’d be backing in.

OSS – like a duck on a pond

Let’s start with a basic question. “What does an OSS need to do?”

The basic answer is, “make operations easier.”

The real answer(s) is so much more nuanced than that of course. The term easier can also encapsulate other words such as faster, more accurate, more repeatable, cheaper, etc.

Designing, building, operating and maintaining a sizable network is extremely challenging, despite network operators around the world, and the vendors that supply to them, employing some of the best and brightest. So we design OSS and related tools / processes to make operations easier.

Yet I sometimes wonder whether we achieve that aim – to make operations easier. Seems to me that we tend to focus more on just replicating functions at a higher layer in the management stack. That is, moving the function to the OSS rather than EMS/NMS, without really making it much easier operationally.

Let’s start at the user interface (UI). How often are they intuitive enough for an experienced network operator to start doing tasks with negligible OSS expert guidance?
Let’s look at deployments. How often are the projects low on effort, risk, cost and complexity?
Let’s look at flexibility (ie in-flight modifications or transformations). How often do we actually deliver flexibility to our customers through our OSS? To ask the same as above, how often are our changes low on effort, risk, cost and complexity?

As a small step towards providing an answer, I wonder whether it’s a case of making the hard things look easy and the easy things look hard.

We want to make the really hard operational things much easier to do within an OSS because that’s the primary purpose of an OSS. That’s the example of a duck on a pond. The OSS is gliding along effortlessly across the top of the water, but under the water it is paddling furiously.

Conversely, we want to make the really easy operational things look hard to do within an OSS so that we’re not constantly being asked to build functionality / complexity into our OSS that doesn’t warrant being there. It dilutes the intent of the OSS. Just because we can, doesn’t mean we should.

Do the laws of physics prevent you from making an OSS pivot?

[Image: Aircraft carrier. Linked from GCaptain.com]

As you already know, the word pivot has become common in the world of business, particularly the world of start-ups. It’s a euphemism for a significant change in strategic direction. In the context of today’s post, I love the word pivot because it implies a rapid change in direction, something that’s seemingly impossible for most of our OSS and the customers who use them.

I like to use analogies. It’s no coincidence that some of the analogies posted here on PAOSS relate to the challenge in making strategic change in our OSS. Here are just three of those analogies:

The OSS inertia principle relates classical physics to our OSS, where Force equals Mass x Acceleration (F = ma). In other words, the greater the mass (of your OSS), the more force must be applied to reach a given acceleration (ie to effect a change)

The OSS chess-board analogy talks about the rubber bands and pulleys (ie integrations) that enmesh the pieces on our OSS chessboard. This means that other pieces get dragged out of position whenever we try to move any individual piece and chaos ensues.

The aircraft carrier analogy compares OSS (and the CSPs they service) with navies of old. In days gone by, CSPs enjoyed command of the sea. Their boats were big, powerful and mobile enough to move around the world. However, their size requires significant planning to change course. The newer application and content communications models are analogous to the advent of aviation. The over the top (OTT) business model has the speed, flexibility, lower cost base and diversity of aircraft. Air supremacy has changed the competitive dynamic. CSPs and our OSS can’t quickly change from being a navy to being an airforce, so the aircraft carrier approach looks to the future whilst working within the constraints of the past.

When making day-to-day changes within, and to, your OSS, does the ability to pivot ever come to mind?

Do you intentionally ensure it stays small, modular and limit its integrations to simplify your game of OSS chess?
If constrained by existing mass that you simply can’t eliminate, do you seek to transform via OSS‘s aviation equivalents?
Or like many of the OSS around the world, are you just making them larger, enmeshed behemoths that will never be able to change the laws of physics and achieve a pivot?

Do any of our global target architectures represent such behemoths?