To link or not to link your OSS. That is the question

The first OSS project I worked on had a full-suite, single-vendor solution. All products within the suite were integrated into a single database, which allowed the product developers to introduce a lot of cross-linking. That has its strengths and weaknesses.

The second OSS suite I worked with came from one of the world’s largest network vendors and integrators. Their suite primarily consisted of third-party products that they integrated together for the customer. It was (arguably) a best-of-breed collection implemented as a single solution, but since the products were disparate, there was very little cross-linking. This approach also has strengths and weaknesses.

I’d become so used to the massive data migration and cross-referencing exercise required by the first OSS that I was stunned by the lack of time allocated by the second vendor for their data migration activities. The first took months and a significant level of expertise. The second took days and only required fairly simple data sets. That’s a plus for the second OSS.

However, the second OSS was severely lacking in cross-domain data, which impacted the richness of insight that could be easily unlocked.

Let me give an example to provide better context.

We know that a trouble ticketing system is responsible for managing the tracking, reporting and resolution of problems in a network operator’s network. This could be as simple as a repository for storing a problem identifier and a list of notes describing the actions taken to resolve the problem. There’s almost no cross-linking required.

A more referential ticketing system might have links to:

  • Alarm management – to show the events linked to the problem
  • Inventory management – to show the impacted (or possibly impacted) resources
  • Service management – to show the services impacted
  • Customer management – to show the customers impacted and possibly the related customer interactions
  • Spares management – to show the life-cycle of physical resources impacted
  • Workforce management – to manage the people / teams performing restorative actions
  • etc

The referential ticketing system gives far richer information, obviously, but you have to trade that off against the amount of integration and data maintenance that needs to go into supporting it. The question to ask is what level of linking is justifiable from a cost-benefit perspective.
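To make the cross-linking concrete, here’s a minimal sketch of what a referential ticket record might look like. It’s purely illustrative: the class and field names are hypothetical rather than drawn from any particular product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TroubleTicket:
    """Hypothetical referential ticket: each list holds identifiers that
    cross-link into an adjacent OSS domain."""
    ticket_id: str
    notes: List[str] = field(default_factory=list)           # the simple, link-free repository
    alarm_ids: List[str] = field(default_factory=list)       # alarm management
    resource_ids: List[str] = field(default_factory=list)    # inventory (impacted / possibly impacted)
    service_ids: List[str] = field(default_factory=list)     # service management
    customer_ids: List[str] = field(default_factory=list)    # customer management
    spare_part_ids: List[str] = field(default_factory=list)  # spares management
    work_order_ids: List[str] = field(default_factory=list)  # workforce management
```

Every extra list of identifiers represents another integration to build and another data set to keep synchronised, which is exactly the trade-off in question.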

My favourite OSS saying

My favourite OSS saying – “Just because you can, doesn’t mean you should.”

OSS are amazing things. They’re designed to gather, process and compile all sorts of information from all sorts of sources. I like to claim that OSS/BSS are the puppet masters of any significant network operator because they reach into every corner of the business, assisting with the processes carried out by almost every business unit.

They can be (and have been) adapted to fulfil all sorts of weird and wonderful requirements. That’s the great thing about software. It can be *easily* modified to do almost anything you want. But just because you can, doesn’t mean you should.

In many cases, we have looked at a problem from a technical perspective and determined that our OSS can (and did) solve it. But if the same problem were also looked at from business and/or operational perspectives, would it make sense for our OSS to solve it?

Some time back, I was involved in a micro-project that added one new field to an existing report. Sounds simple. Unfortunately, by the time all the rigorous deployment and transition processes had been followed to get the update into PROD, the support bill from our team alone ran into tens of thousands of dollars. Months later, I found out that the business unit that had requested the additional field had a bug in their code and wasn’t even picking up the extra field. Nobody had noticed until a secondary bug prompted another developer to ask how the original code was functioning.

It wasn’t deemed important enough to fix. Many tens of thousands of dollars were wasted because we didn’t think to ask up the design tree why the functionality was (or wasn’t) important to the business.

Other examples arise when we use the OSS to solve a problem through expensive customisation / integration when manual processes can do the job more cost-efficiently.

Another example was a client that had developed hundreds of customisations to resolve annoying / cumbersome, but incredibly rare, tasks. The efficiency gained by removing those tasks didn’t come close to compensating for the expense of building the automations / tools. Just one sample of those tools was a $1,000 efficiency improvement for a ~$200,000 project cost… on a task that had only been run twice in the preceding 5 years.
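A back-of-the-envelope payback calculation using those numbers makes the imbalance stark:

```python
build_cost = 200_000        # approximate cost of building the tool ($)
saving_per_run = 1_000      # efficiency improvement per execution ($)
runs_per_year = 2 / 5       # the task ran twice in 5 years

annual_saving = saving_per_run * runs_per_year    # $400 per year
payback_years = build_cost / annual_saving        # 500 years to break even
print(f"Payback period: {payback_years:.0f} years")
```

Even if the task became ten times more frequent, the tool would still take 50 years to pay for itself.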

 

How to build a personal, cloud-native OSS sandpit

As a project for 2019, we’re considering the development of a how-to training course that provides a step-by-step guide to build your own OSS sandpit to play with. It will be built around cloud-native and open-source components. It will be cutting-edge and micro-scaled (but highly scalable in case you want to grow it).

Does this sound like something you’d be interested in hearing more about?

Like or comment if you’d like us to keep you across this project in 2019.

I’d love to hear your thoughts on what the sandpit should contain. What would be important to you? We already have a set of key features identified, but will refine it based on community feedback.

The Theory of Evolution, OSS evolution

“Evolution says that biological change is a property of populations — that every individual is a trial run of an experimental combination of traits, and that at the end of the trial, you are done and discarded, and the only thing that matters is what aggregate collection of traits end up in the next generation. The individual is not the focus, the population is. And that’s hard for many people to accept, because their entire perception is centered on self and the individual.”
FreeThoughtBlog.

Have we almost reached the point where the same can be said for OSS workflows? In the past (and the present?) we had pre-defined process flows. There may be an occasional if/else decision gate, but we could capture most variants on a process diagram. These pre-defined processes were / are akin to a production line.

Process diagrams are becoming harder to lock down as our decision trees get more complicated. Technologies proliferate, legacy product lines don’t get obsoleted, the number of customer contact channels increases. Not only that, but we’re now marketing to a segment of one, treating every one of our customers as unique, whilst trying not to break our OSS / BSS.

Do we have the technology yet that allows each transaction / workflow instance to be treated as an experimental combination of attributes / tasks? More importantly, do we have the ability to identify any successful mutations that allow the population (ie the combination of all transactions) to get progressively better, faster, stronger?

It seems that to get to CX nirvana, being able to treat every customer completely uniquely, we need to first master an understanding of the population at scale. Conversely, to achieve the benefits of scale, we need to understand and learn from every customer interaction uniquely.

That’s evolution. The benchmark sets the pattern for future workflows until a variant / mutation identifies a better benchmark, which then establishes the new pattern for future workflows, and so on.

The production line workflow model of the past won’t get us there. We need an evolution workflow model that is designed to accommodate infinite optionality and continually learn from it.

Does such a workflow tool exist yet? Actually, it’s more than a workflow tool. It’s a continually improving loop workflow.
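To make the idea a little more tangible, here’s a minimal sketch of such a loop, assuming each completed workflow instance can be scored (eg by cycle time, cost or customer satisfaction). The function names and the mutation mechanism are hypothetical, purely to illustrate the benchmark-until-beaten pattern:

```python
import random

def evolve_workflows(variants, run_and_score, generations=100, mutate=None):
    """Treat each workflow instance as a trial run. The best-scoring variant
    is the benchmark pattern until a successful mutation displaces it."""
    benchmark = variants[0]
    best = run_and_score(benchmark)
    for _ in range(generations):
        # an experimental combination of tasks: mutate the benchmark if a
        # mutate() operator is supplied, otherwise trial another variant
        candidate = mutate(benchmark) if mutate else random.choice(variants)
        score = run_and_score(candidate)
        if score > best:                        # a successful mutation...
            benchmark, best = candidate, score  # ...sets the new pattern
    return benchmark
```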

That’s not where to disrupt your OSS

The diagram below comes from an actual client’s functionality usage profile.
[Figure: the long tail of OSS functionality usage]

The x-axis shows the functionality / use-cases. The y-axis shows the number of uses (it could equally represent usefulness or value).

Each big-impact demand (ie the individual bars on the left side of the graph) warrants separate investigation. The bars on the right side (ie the long tail in the red box) don’t. They might be worth investigating if we could treat some or all of them as a cohort though.
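By way of illustration, a simple cumulative-usage cut can separate the big-impact bars from the long tail in the red box. This sketch uses hypothetical usage counts:

```python
def split_long_tail(usage_by_feature, head_share=0.8):
    """Split features into a 'head' covering head_share of total usage
    and a long 'tail' that may be better managed as a cohort."""
    ranked = sorted(usage_by_feature.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(usage_by_feature.values())
    head, cumulative = [], 0
    for feature, uses in ranked:
        if cumulative >= head_share * total:
            break
        head.append(feature)
        cumulative += uses
    tail = [feature for feature, _ in ranked[len(head):]]
    return head, tail

# Hypothetical counts per use-case
head, tail = split_long_tail({"fault_list": 9000, "ticketing": 7000,
                              "provisioning": 5000, "custom_report_27": 3,
                              "bulk_fix_tool": 1})
```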

The left side of the graph represents the functionality / use-cases that have been around for decades. Every OSS has them. They’re so common and non-differentiated that they’re not remotely sexy. Customers / stakeholders aren’t going to be wowed by them. They’re just going to expect them. Our product developers have already delivered that functionality, have moved on and are now looking for new things to work on.

And where does the new stuff reside? Generally as new bars on the right side of the graph. That’s the law of diminishing returns territory right there! You’re unlikely to move the needle from out there.

Does this graph convince you to send your most skilled craftsmen back to do more tinkering / disrupting at the left side of the graph… as opposed to adding new features at the right side? Does it inspire you to dream up exciting cohort management techniques for the red box? Perhaps it even persuades you to cull some of the long-tail features that are chewing up lifecycle effort (eg code management, regression testing, complexity tax)?

If it does convince you, don’t forget to think about how you’re going to market it. How are you going to make the left side sexy / differentiated again? Are you going to have to prove just how much easier, cheaper, faster, more efficient, more profitable, etc it is? That brings us back to the OSS proof-of-worth discussion we had yesterday. It also brings us back to Sutton’s Law – go to where the money is.

The OSS proof-of-worth dilemma

Earlier this week we posted an article describing Sutton’s Law of OSS, which effectively tells us to go where the money is. The article suggested that in OSS we instead tend towards the exact opposite – the inverse-Sutton – we go to where the money isn’t. Instead of robbing banks like Willie Sutton, we break into a cemetery and aimlessly look for the cash register.

A good friend responded with the following, “Re: The money trail in OSS … I have yet to meet a senior exec. / decision maker in any telco who believes that any OSS component / solution / process … could provide benefit or return on any investment made. In telco, OSS = cost. I’ve tried very hard and worked with many other clever people also trying hard to find a way to pitch OSS which overcomes this preconception. BSS is often a little easier … in many cases it’s clear that “real money” flows through BSS and needs to be well cared for.”

He has a strong argument. The cost-out mentality is definitely ingrained in our industry.

We are saddled with the burden of proof. We need to prove, often to multiple layers of stakeholders, the true value of the (often intangible) benefits that our OSS deliver.

The same friend also posited, “The consequence is the necessity to establish beneficial working relationships with all key stakeholders – those who specify needs, those who design and implement solutions and those, especially, who hold or control the purse strings. [To an outsider] It’s never immediately obvious who these people are, nor what are their key drivers. Sometimes it’s ambition to climb the ladder, sometimes political need to “wedge” peers to build empires, sometimes it’s necessity to please external stakeholders – sometimes these stakeholders are vendors or government / international agencies. It’s complex and requires true consultancy – technical, business, political … at all levels – to determine needs and steer interactions.”

Again, so true. It takes more than just technical arguments.

I’m big on feedback loops, but also surprised at how little they’re used in telco – at all levels.

  • We spend inordinate amounts of time building and justifying business cases, but relatively little time measuring the actual benefits produced after we’ve finished our projects (or gaining the learnings to improve the next round of business cases)
  • We collect data in our databases, obliviously let it age, realise at some point in the future that we have a data quality issue and perform remedial actions (eg audits, fixes, etc) instead of designing closed-loop improvement cycles that ensure DQ isn’t constantly deteriorating
  • We’re great at spending huge effort in gathering / arguing / prioritising requirements, but don’t always run requirements traceability all the way into testing and operational rollout.
  • etc

Which leads us back to the burden of proof. Our OSS have all the data in the world, but how often do we use it to justify and persuade – to prove?

Our OSS products have so many modules and so much technical functionality (much of it effectively duplicated from vendor to vendor). But I’ve yet to see any vendor product that allows their customer, the OSS operators, to automatically gather proof-of-worth stats (ie executive-ready reports). Nor have I seen any integrator build proof-of-worth consultancy into their offer, whereby they work closely with their customer to define and collect the metrics that matter. BTW, if this sounds hard, I’d be happy to discuss how I approach this task.
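The raw material usually already exists in the OSS’s own activity logs. Here’s a minimal sketch of the kind of roll-up I mean; the log fields and the dollar rates are hypothetical placeholders that would need to be agreed with the purse-string holders:

```python
from collections import defaultdict

def proof_of_worth_report(activity_log, minutes_saved, cost_per_minute=2.0):
    """Roll OSS activity logs up into an executive-ready benefit estimate.
    activity_log: iterable of dicts like {"action": "auto_ticket_close"}.
    minutes_saved: agreed estimate of manual effort avoided per action type."""
    counts = defaultdict(int)
    for event in activity_log:
        counts[event["action"]] += 1
    lines, total = [], 0.0
    for action, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        benefit = n * minutes_saved.get(action, 0) * cost_per_minute
        total += benefit
        lines.append(f"{action}: {n} occurrences, est. ${benefit:,.0f} effort avoided")
    lines.append(f"TOTAL estimated benefit: ${total:,.0f}")
    return "\n".join(lines)
```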

So let me leave you with three important questions today:

  1. Have you also experienced the overwhelming burden of the “OSS = cost” mentality?
  2. If so, do you have any suggestions / experiences on how you’ve overcome it?
  3. Does the proof-of-worth product functionality sound like it could be useful (noting that it doesn’t even have to be a product feature, but could be a custom report / portal using data that’s constantly coursing through our OSS databases)?

The Jeff Bezos prediction for OSS

“If we start to focus on ourselves, instead of focusing on our customers, that will be the beginning of the end … We have to try and delay that day for as long as possible.”
Jeff Bezos.

Jeff Bezos recently predicted that Amazon is likely to fail and/or go bankrupt at some point in time, as history has eventually proven for most high-flying companies. The quote above was part of that discussion.

I’ve worked with quite a few organisations that have been in the midst of some sort of organisational re-structure – upsizing, downsizing, right-scaling, transforming – whatever words might be in effect. Whilst it would be safe to say that all of those companies were espousing being focused on their customers, the organisational re-structures always seem to cause inward-facing behaviour. It’s human nature that change triggers fear, concerns about job security, opportunities to expand empires, etc.

And in these types of inward-facing environments in particular, I’ve seen some really interesting decisions made around OSS projects. When making these decisions, customer experience has clearly been a long way down the list of priorities!! And in the current environment of significant structural change in the telco industry, these stimulants of inward-facing behaviour appear to be growing.

Whilst many people want to see OSS projects as technical delivery solutions and focus on the technology, the people and culture aspects of the project can often be even more challenging. They can also be completely underestimated.

What have your experiences been? Do you agree that customer-facing vision, change management and stakeholder management can be just as vital as technical brilliance on an OSS implementation team?

OSS that capture value, not just create it

I’ve just had a really interesting first day at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia). The quality of presentations was quite high. Some great thought-provoking ideas!!

Nik Willetts kicked off his keynote with the following quote, which I’m paraphrasing, “Telcos need to start capturing value, not just creating it as they have for the last decade.”

For me, this is THE key takeaway for this event, above any of the other interesting technical discussions from day 1 (and undoubtedly on the agenda for the next 2 days too).

The telecommunications industry has made a massive contribution to the digital lifestyle that we now enjoy. It has been instrumental in adding enormous value to our lives and our economy. But all the while, telecommunications providers globally have been experiencing diminishing profitability and share-of-wallet (as described in this earlier post). Clearly the industry has created enormous value, but hasn’t captured as much as it would’ve liked.

The question to ask is how our thinking and our OSS/BSS stacks will help to capture more value for our customers. As described in the share of wallet post above, the premium end of the value chain has always been in the content (think in terms of phone conversations in days gone by, or the myriad comms techniques today such as email, live chat, blogs, etc). That’s what the customer pays for – the experience – not the networks or systems that facilitate it.

Nik’s comments made me think of Andrew Carnegie. Monopolies such as the telecommunications organisations of the past and Andrew Carnegie’s steel business owned vast swathes of the value chain (Carnegie Steel Company owned the mines which extracted the raw materials needed to make steel, controlled the transportation used to deliver the materials and the product, and ran the mills used for steel production). Buyers didn’t care for the mines or mills or transportation. Customers were paying for the end product as it is what helped them achieve their goals, whether that was the railway tracks needed by the railroads or the beams needed by construction companies.

The Internet has allowed enormous proliferation of the premium-end of the telecommunications value chain. It’s too late to stuff that genie back into the bottle. But to Nik’s further comment, we can help customers achieve their goals by becoming their “do-it-yourself” digital partners.

Our customers now look to platforms like Facebook, Instagram, Google, WordPress, Amazon, etc to build their marketing, order capture, product / content delivery, commercial transactions, etc. I really enjoyed Monty Hong’s presentation that showed how Telkomsel’s OSS/BSS is helping to embed Telkomsel into customers’ digital lifestyles / value-chains. It’s a perfect example of the “biggest OSS loser” concept discussed in yesterday’s post.

Telco services that are bigger, faster, better and the OSS that support them

We all know of the tectonic shifts in the world of telco services, profitability and business models.

One common trend is for telcos to offer pipes that are bigger and faster. Seems like a commoditising business model to me, but our OSS still need to support it. How? Through enabling efficiency at scale: building tools, GUIs, workflows, integrations, sales pipelines, etc that enable telcos to march seamlessly towards offering ever bigger / faster pipes. An OSS/BSS stack that supports this could represent one of the few remaining sustainable competitive advantages, so any such OSS/BSS could be highly valuable to its owner.

But if the bigger/faster pipe model is commoditising and there’s little differentiation between competing telcos’ OSS/BSS on service activation, then what is the alternative? Services that are better? But what is “better”? More to the point, what is sustainably better (ie can’t be easily copied by competitors)? Services that are “better” are likely to come in many different forms, but they’re unlikely to be related to the pipe (except maybe reliability / SLA / QoS). They’re more likely to be in the “bundling,” which may include premium content, apps, customer support, third-party products, etc. An OSS/BSS that is highly flexible in supporting any mix of bundling becomes important. Product / service catalogs are one of many possible examples.
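To illustrate the flexibility I mean, here’s a minimal catalog sketch in which a sellable offer is just a composition of other entries, so a new bundle (pipe plus content plus support) becomes a data change rather than an integration project. All names and prices are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CatalogItem:
    sku: str
    name: str
    monthly_price: float
    components: List["CatalogItem"] = field(default_factory=list)  # bundles compose other items

    def total_price(self) -> float:
        return self.monthly_price + sum(c.total_price() for c in self.components)

# A "better" offer is a new composition, not a new code base
pipe = CatalogItem("PIPE-1G", "1Gbps broadband", 80.0)
content = CatalogItem("TV-PREM", "Premium content pack", 25.0)
support = CatalogItem("SUP-24x7", "24x7 priority support", 15.0)
bundle = CatalogItem("BNDL-01", "Entertainment bundle", 0.0, [pipe, content, support])
```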

An even bigger differentiator is not bigger / faster / better, but different (if perceived by the market as being invaluably different). The challenge with being different is that “different” tends to be fleeting. It tends to only last for a short period of time before competitors catch up. Since many of the differences available to telco services are defined in software, the window of opportunity is getting increasingly short… except when it comes to the OSS/BSS being able to operationalise that differentiator. It’s not uncommon for a new feature to take 9+ months to get to market, with changes to the OSS/BSS taking up a significant chunk of the project’s critical path. Having an OSS/BSS stack that can repeatedly get a product / feature to market much faster than competing telcos provides greater opportunity to capture the market during the window of difference.

OSS collaboration rooms. Getting to the coal-face

A number of years ago I heard about an OSS product that introduced collaborative rooms for network operators to collectively solve challenging network health events. It was in line with some of my own thinking about the use of collaboration techniques to solve cross-domain or complex events. But the concept hasn’t caught on in the way that I expected. I was curious why, so I asked around some friends and colleagues who are hands-on managing networks every day.

The answer showed that I hadn’t got close enough to understanding the psyche at the coal-face. It seems that operators have a preference for the current approach, the tick and flick of trouble tickets until the solution forms and the problem is solved.

This shows the psyche of collaboration at a micro scale. I wonder if it holds true at a macro scale too?

No CSP has an everywhere footprint (admittedly, cloud providers are close to everywhere, in part through global presence, in part through coverage of the access domain via their own networks and/or OTT connectivity). For customers whose services cross geo-footprints, carriers take a tick-and-flick approach in terms of OSS: the OSS of one carrier passes orders to the other carrier’s OSS. Each OSS stays within the bounds of its organisation’s locus of control (see this blog for further context).

To me, there seems to be an opportunity for carriers to get out of their silos. To leverage collaboration for speed, coverage, etc by designing offerings in OSS design rooms rather than standards workshops. A global product catalog sandpit, as it were, for carriers to design offerings in. Every carrier’s service offering / API / contract resides there for other carriers to interact with.

But once again, I may not be close enough to understanding the psyche at the coal-face. If you work at this coal-face, I’d love to get your opinions on why this would or would not work.

OSS feature parity. A functionality arms race

OSS Vendor 1. “I have 1 million features.” (Dr Evil puts finger in mouth)
OSS Vendor 2. “Yeah, well I have 1,000,001 features in my OSS.”

This is the arms race that we see in OSS, just like in almost any other tech product. I imagine that vendors get into this arms race because they wish to differentiate. Better to differentiate on functionality than on price. If there’s feature parity, then the only differentiator is price. We all know that doesn’t end well!

But I often ask myself a few related questions:

  • Of those million features, how many are actually used regularly?
  • As a vendor, do you have logging that actually allows you to know which features are being used? (See the sketch after this list.)
  • Taking the Whale Curve perspective, even if they are being used, how many of those features actually contribute to the objectives of the vendor?
    • Do they clearly contribute towards making sales?
    • Do customers delight in using them?
    • Would customers be irate if you removed them?
    • etc
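On the logging question above, even a thin instrumentation layer would answer it. A minimal sketch, assuming the product can wrap its feature entry points (the decorator and the in-memory counter are hypothetical; a real product would persist to its own store):

```python
import functools
from collections import Counter

feature_usage = Counter()  # hypothetical; persist this in a real product

def track_feature(name):
    """Wrap a feature's entry point so every invocation is counted."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            feature_usage[name] += 1
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@track_feature("alarm_list_export")
def export_alarm_list():
    ...  # existing feature logic
```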

Earlier this week, I spoke about a friend who created an alarm management tool by himself over a weekend. It didn’t have a million features, but it did have all of what I’d consider to be the most important ones. It did look like a lot of other alarm managers that are now on the market. The GUI based on alarm lists still pervades.

If they all look alike, and all have feature parity, how do you differentiate? If you try to add more features, is it safe to assume that those features will deliver diminishing returns?

But is an alarm list and the flicking of tickets the best way to manage network health?

What if, instead of seeking incremental improvement, someone went back to the most important requirements and considered whether the current approach is meeting those customer needs? I have a strong suspicion that customer feedback will indicate that there are definitely flaws to overcome, especially on high event volume networks.

Clever use of large data volumes provides a level of pre-cognition and automation that wasn’t available when simple alarm lists were first invented. This in turn potentially changes the way that operators can engage with network monitoring and management.
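As one small example of that pre-cognition, the event flood can be clustered before a human ever sees a flat list. A deliberately naive sketch, assuming each alarm carries a resource identifier and a timestamp:

```python
from collections import defaultdict

def cluster_alarms(alarms, window_secs=60):
    """Group alarms raised on the same resource within a time window so
    operators triage clusters rather than flicking individual rows.
    alarms: list of dicts like {"resource": "NODE-7", "time": 1700000000}."""
    clusters = defaultdict(list)
    for alarm in alarms:
        key = (alarm["resource"], alarm["time"] // window_secs)
        clusters[key].append(alarm)
    return list(clusters.values())
```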

What if someone could identify a whole new user interface / approach that overcame the current flaws and exceeded the key requirements? Would that be more of a differentiator than adding a 1,000,002nd feature?

If you’re looking for a comparison, there were plenty of MP3 players on the market with a heap of features, many more than the iPod. We all know how that one played out!

What if the OSS solution lies in its connections?

Imagine for a moment that you’re sitting in front of a pristine chess board, awaiting the opportunity to make your first move. All of the pieces have been exquisitely carved from stone, polished to a sheen. The rules of the game have been established for centuries, so you know exactly which piece is able to move in which sequences. Time to make the opening move.

You’ve studied the games of the masters who have preceded you and have planned your opening gambit, the procession of moves that will hopefully take you into a match-winning position. Due to your skills with modern automations, you’ve connected some of the chess pieces with delicate strings to implement your opening gambit with precision.

Unfortunately, after the first few moves, your strings are starting to pull the pieces out of position. Your opponent has countered well and you’re having to modify your initial plans. You introduce some additional pulleys and springs to help retain the rightful position of your pieces on the board and cope with unexpected changes in strategy. The automations are becoming ever more complex, taking more time to plan and implement than the actual next move.

The board is starting to devolve into unmanageable chaos.

Does this sound like the analogy of a modern OSS? It’s what I refer to as the chessboard analogy.

We’ve been at this OSS game for long enough to already have an understanding of all of the main pieces. TM Forum’s TAM provides this definition as a useful guide. The pieces are modular, elegant and quite well understood by the game’s many players. The rules of the game haven’t really changed much. The main use cases of an OSS from decades ago (ie assure, fulfil, plan, build, etc) probably don’t differ significantly from those of today. This “should” set the foundations for interchangeability of applications.

We see programs of work like ONAP, where millions of lines of code are being developed to re-write the rules of the game. I’m a big advocate of many of the principles of ONAP, but I’m still not sure that such a massive re-write is what’s needed.

It’s not so much in the components of our OSS as in the connections between them where things tend to go awry.

“The foundation of all brilliance is seeing connections when no one else does.”
Richard Parkinson.

This article distills ONAP from its answers back to the core questions. What if instead of seeking an entirely-new architectural stack, we focused on solving the core questions and the chessboard problem – the problem of connections?

Perhaps the answer to the connection problem lies in the interchangeable small grid OSS model discussed in yesterday’s article on planned OSS obsolescence.
But it probably also incorporates what ONAP calls, “real-time, policy-driven orchestration and automation,” to replace pre-defined processes. I wonder instead whether state-based transitions, being guided by intent/policy rules and feedback loops (ie learning systems) might hold the key. An evolving and learning solution that shares similarities with the electrical pathways in our brain, which strengthen the more they’re used and diminish if no longer used.

The future of work and its impact on OSS

Many years ago, I worked on a seriously big OSS transformation for one of the region’s biggest telcos. Everything about the project was big: the investment, the resources, the documentation. Everything except the outcomes. There was so much inefficiency that I often spoke about making one day of progress for every ten on site. Meetings, bureaucracy, impossible approval cycles, customer re-organisations, over-analysis, etc all added up to stagnation.

This contrasted so much with some of the amazing small teams I’ve worked alongside. Teams that worked cohesively, cleverly and just got stuff done with almost no resources. It’s one of the reasons I feel that the future of work, even for very large organisations, will be via small teams: work outsourced to small, efficient teams / organisations. The gig economy, and the proliferation of tools that support it, make this an obvious approach to take, especially for very large organisations. Proof-of-work technologies, such as those building upon the discovery of blockchain, will provide further impetus to use smaller teams of experts.

Experts like a friend and colleague of mine who once built an alarm management tool in a weekend, by himself. It also happened to be more sophisticated than his employer’s existing tool that had taken years of combined developer effort by a larger team.

Maybe I’ll be proven wrong, but I see the transition to this model of work as inevitable. The question I have is how to make our OSS more accommodating of this work model. Behemoth OSS stacks won’t be. Highly modular OSS made up of many smaller components probably will be, as long as they don’t succumb to the OSS chessboard analogy, where the pulleys and strings make it impossible for small, interchangeable teams to decipher and manage them.

A small-grid OSS model is the one I’d be backing in.

OSS – like a duck on a pond

Let’s start with a basic question. “What does an OSS need to do?”

The basic answer is, “make operations easier.”

The real answer is, of course, much more nuanced than that. The term “easier” can also encapsulate other words such as faster, more accurate, more repeatable, cheaper, etc.

Designing, building, operating and maintaining a sizable network is extremely challenging, despite network operators around the world, and the vendors that supply to them, employing some of the best and brightest. So we design OSS and related tools / processes to make operations easier.

Yet I sometimes wonder whether we achieve that aim of making operations easier. It seems to me that we tend to focus more on just replicating functions at a higher layer in the management stack. That is, moving the function to the OSS rather than the EMS/NMS, without really making it much easier operationally.

Let’s start at the user interface (UI). How often are they intuitive enough for an experienced network operator to start doing tasks with negligible OSS expert guidance?
Let’s look at deployments. How often are the projects low on effort, risk, cost and complexity?
Let’s look at flexibility (ie in-flight modifications or transformations). How often do we actually deliver flexibility to our customers through our OSS? To ask the same as above, how often are our changes low on effort, risk, cost and complexity?

As a small step towards providing an answer, I wonder whether it’s a case of making the hard things look easy and the easy things look hard.

We want to make the really hard operational things much easier to do within an OSS because that’s the primary purpose of an OSS. That’s the example of a duck on a pond. The OSS is gliding along effortlessly across the top of the water, but under the water it is paddling furiously.

Conversely, we want to make the really easy operational things look hard to do within an OSS so that we’re not constantly being asked to build functionality / complexity into our OSS that doesn’t warrant being there. It diffuses the intent of the OSS. Just because we can, doesn’t mean we should.

Do the laws of physics prevent you from making an OSS pivot?

[Image: aircraft carrier, linked from GCaptain.com]

As you already know, the word pivot has become common in the world of business, particularly the world of start-ups. It’s a euphemism for a significant change in strategic direction. In the context of today’s post, I love the word pivot because it implies a rapid change in direction, something that’s seemingly impossible for most of our OSS and the customers who use them.

I like to use analogies. It’s no coincidence that some of the analogies posted here on PAOSS relate to the challenge in making strategic change in our OSS. Here are just three of those analogies:

The OSS inertia principle relates classical physics to our OSS, where Force equals Mass x Acceleration (F = ma). In other words, the greater the mass (of your OSS), the more force must be applied to reach a given acceleration (ie to effect a change)

The OSS chess-board analogy talks about the rubber bands and pulleys (ie integrations) that enmesh the pieces on our OSS chessboard. This means that other pieces get dragged out of position whenever we try to move any individual piece and chaos ensues.

The aircraft carrier analogy compares OSS (and the CSPs they service) with the navies of old. In days gone by, CSPs enjoyed command of the sea. Their boats were big, powerful and mobile enough to move around the world. However, their size meant that changing course required significant planning. The newer application and content communications models are analogous to the advent of aviation. The over-the-top (OTT) business model has the speed, flexibility, lower cost base and diversity of aircraft. Air supremacy has changed the competitive dynamic. CSPs and our OSS can’t quickly change from being a navy to being an air force, so the aircraft carrier approach looks to the future whilst working within the constraints of the past.

When making day-to-day changes within, and to, your OSS, does the ability to pivot ever come to mind?

Do you intentionally ensure it stays small, modular and limit its integrations to simplify your game of OSS chess?
If constrained by existing mass that you simply can’t eliminate, do you seek to transform via OSS’s aviation equivalents?
Or like many of the OSS around the world, are you just making them larger, enmeshed behemoths that will never be able to change the laws of physics and achieve a pivot?

Do any of our global target architectures represent such behemoths?

Build an OSS and they will come… or sometimes not

Build it and they will come.

This is not always true for OSS. Let me recount a few examples.

The project team is disconnected from the users – The team that’s building the OSS in parallel to existing operations doesn’t (or isn’t able to) engage with the end users of the OSS. Once it comes time for cut-over, the end users want to stick with what they know and don’t use the shiny new OSS. From painful experience I can attest that stakeholder management is under-utilised on large OSS projects.

Turf wars – Different groups within a customer are unable to gain consensus on the solution. For example, the operational design team gains the budget to build an OSS but the network assurance team doesn’t endorse this decision. The assurance team then decides not to endorse or support the OSS that is designed and built by the design team. I’ve seen an OSS worth tens of millions of dollars turned off less than 2 years after handover because of turf wars. Stakeholder management again, although this could be easier said than done in this situation.

It sounded like a good idea at the time – The very clever OSS solution team keeps coming up with great enhancements that don’t get used, for whatever reason (eg non fit-for-purpose, lack of awareness of its existence by users, lack of training, etc). I’ve seen a customer that introduced over 500 customisations to an off-the-shelf solution, yet hundreds of those customisations hadn’t been touched by users within a full year prior to doing a utilisation analysis. That’s right, not even used once in the preceding 12 months. Some made sense because they were once-off tools (eg custom migration activities), but many didn’t.

The new OSS is a scary beast – The new solution might be perfect for what the customer has requested in terms of functionality. But if the solution differs greatly from what the operators are used to, it can be too intimidating to be used. A two-week classroom-based training course at the end of an OSS build doesn’t provide sufficient learning to take up all the nuances of the new system like the operators have developed with the old solution. Each significant new OSS needs an apprenticeship, not just a short-course.

It’s obsolete before it’s finished – OSS work in an environment of rapid change – networks, IT infrastructure, organisation models, processes, product offerings, regulatory shifts, disruptive innovation, etc, etc. The longer an OSS takes to implement, the greater the likelihood of obsolescence. All the more reason for designing for incremental delivery of business value rather than big-bang delivery.

What other examples have you experienced where an OSS has been built, but the users haven’t come?

Falsely rewarding based on OSS existence rather than excellence

There’s a common belief that most jobs see people rewarded for presence rather than performance. That is, they’re encouraged to be on site from 9am to 5pm rather than being given free rein over their work schedules as long as key outcomes are met / exceeded.

In OSS vendor / product selection there’s a similar concept. Contracts are often awarded based on existence rather than excellence. When evaluating a product, if it’s able to do a majority of the functions in the long list of requirements then the box is ticked.

However, this doesn’t take into account that there are usually only a very small number of functions that any given customer’s OSS needs to perform at a very high level of efficiency. All the others are effectively just nice-to-haves. That’s the 80/20 rule at work.

When guiding a customer through their vendor selections, I always take them through an exercise to identify the use-cases / functions that really matter. Then we ensure that the demos or proofs of concept focus closely on how excellent the OSS is at those most important factors.

OSS automations – just because we can, doesn’t mean we should

Automation is about using machines / algorithms to respond faster than humans can, or more efficiently than humans can, or more accurately than humans can… but only if the outcomes justify the costs. When it comes to automations, it’s a case of, “just because we can, doesn’t mean we should.”

The more complex the decision tree you’re trying to automate, the higher the costs and therefore the harder it becomes to cost-justify. So the first step in any automation is taking a lateral thinking approach to simplifying the decision tree.

This recent post highlighted a graph from Nokia’s Bell Labs and the financial dependency that network slicing has on operational automation:
[Graph: Nokia Bell Labs – network slicing’s profitability dependency on operational automation]

Let’s use the Toyota Five Whys technique to work our way through the implications of this:

Statement 0: As CSPs, we need to drastically reduce complexity in the processes / decision-trees across our whole organisation.

Why 1? So that we can apply significant levels of automation

Why 2? So that we can apply technologies / techniques such as network slicing or virtualisation that are cost-justifiable

Why 3? So that we can offer differentiated, premium services

Why 4? So that our offerings don’t become commodities

Why 5? So that we retain corporate profitability to return to shareholders and/or invest in further interesting projects

I love that we’re looking to any number of automation technologies / techniques to apply to our OSS. However, we’re bypassing the all-important Statement 0. We’re starting at Why 1 and partially missing the cost-justifiable part of Why 2. If our automation projects don’t prove cost-justifiable, then we never get the chance to reach Whys 3, 4 and 5.

Persona mapping for OSS PoCs

When selecting new applications for an OSS, or applications to augment an existing OSS, it always makes sense to me to run a Proof of Concept (PoC). But what do we want to demonstrate in that PoC? For me, we want to run demonstrations of the factors (eg features, use-cases, processes, etc) that justify the investment.

A simple exercise you can use is to identify the personas / roles that interact with the OSS. This could include personas such as NOC operator, strategic planner, network engineer, order entry, field ops, data / analytics, application administrator, etc. The actual personas will differ within each organisation of course.

For each of those personas, we can identify and interview an individual that represents that persona.

Interview questions include:

  1. What are the key responsibilities of your role?
  2. What is the most important goal / KPI for your role?
  3. How does this OSS (or proposed OSS) support you in meeting this goal?
  4. Describe the single most important process / function that you perform using the OSS.
  5. Why is it so important?
  6. How often do you perform this process / function?
  7. Please provide a short list of other important processes / functions you perform with this OSS.

We can then build this into a matrix and seek to prioritise into a set of use-cases. Based on time and cost constraints, we can then build the top-n of those use-cases into implementation scenarios for the PoC.
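A minimal sketch of that prioritisation step, assuming each interview row captures an importance score and a usage frequency (the scoring scheme is hypothetical):

```python
def prioritise_use_cases(interviews, top_n=5):
    """interviews: list of dicts like
    {"persona": "NOC operator", "process": "triage alarms",
     "importance": 5, "uses_per_week": 200}.
    Rank processes by importance x frequency, summed across personas."""
    scores = {}
    for row in interviews:
        scores[row["process"]] = (scores.get(row["process"], 0)
                                  + row["importance"] * row["uses_per_week"])
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [process for process, _ in ranked[:top_n]]
```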

OSS operationalisation at scale

We had a highly flexible network design team at a previous company. Not because we necessarily wanted one, but because we were forced into it by the client’s allocation of work.

Our team was largely based on casual workers because there was no way to predict whether we needed 2 designers or 50 in any given week. The workload being assigned by the client was incredibly lumpy.

But we were lucky. We only had design work. The lumpiness in design effort flowed down through the work stack into construction, test and deployment teams. The constructors had millions of dollars of equipment that they needed to mobilise and demobilise as the work ebbed and flowed. Unfortunately for the constructors, they’d prepared their rate cards on the assumption of a fairly consistent level of work coming through (it was a very big project).

This lumpiness didn’t work out for anyone in the delivery pipeline, the client included. It was actually quite instrumental in a few of the constructors going into liquidation. The client struggled to meet roll-out targets.

The allocation of work was being made via the client’s B/OSS stack. The B/OSS teams were blissfully unaware of the downstream impact of their sporadic allocation of designs. Towards the end of the project, they were starting to get more consistent and delivery teams started to get into more of a rhythm… just as the network was coming to the end of build.

As OSS builders, we sometimes get so wrapped up in delivering functionality that we can forget that one of the key requirements of an OSS is to operationalise at scale. In addition to UI / CX design, this might be something as simple as smoothing the effort allocation for work under our OSS’s management.
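Smoothing needn’t be sophisticated either. As a closing sketch (all names hypothetical), the OSS could buffer lumpy upstream allocations and release a steady weekly batch downstream:

```python
from collections import deque

class WorkSmoother:
    """Buffer lumpy design allocations and release them downstream
    (construction, test, deployment) at a steady weekly rate."""
    def __init__(self, release_rate_per_week):
        self.rate = release_rate_per_week
        self.backlog = deque()

    def receive(self, design_packs):
        self.backlog.extend(design_packs)  # lumpy arrivals from the B/OSS

    def release_week(self):
        batch_size = min(self.rate, len(self.backlog))
        return [self.backlog.popleft() for _ in range(batch_size)]  # steady flow
```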