The colour palette analogy of OSS

Let’s say you act for a service provider and the diagram below represents the number of variations you could offer to customers – the number that are technically supported by your solution.
[Image: a palette of 13,824,000 colours]
That’s 13,824,000 colours.

By comparison, the following diagram contains just 20 colours:
[Image: a palette of 20 colours]

If I asked you what colours are in the upper diagram, would you say red, orange, yellow, green, blue, etc? Is it roughly the same response as to the lower diagram?

If you’re the customer and know you want an “orange*” product, will you easily be able to choose between the many thousands of different orange hues available in the upper diagram? Would you feel disenfranchised if you were only offered the two orange hues in the lower diagram instead of thousands? Or might you even be relieved to have a much easier decision to make?

The analogy here to OSS is that just because our solutions can support millions of variants doesn’t mean we should offer them. If our OSS try to offer millions of variants, we have to design, then build, then test, then post-sale support millions of variants.

However, in reality we can’t provide 100% coverage across so many variants – we aren’t able to sufficiently design, build, test and support every one of the millions of variants. We end up overlooking some, accepting risk on others, or estimating a test spread that bypasses the rest. We’ve effectively opened the door to fall-outs.

And it’s fall-outs that tend to drive customer dissatisfaction far more than limited colour palettes do.
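As a back-of-the-envelope illustration (with entirely hypothetical product attributes), the variant explosion works just like the palette: multiply a few independent option sets together and the numbers balloon, while a curated subset stays manageable:

```python
from itertools import product

# 240 shades per channel across three channels gives the upper palette.
print(240 ** 3)  # 13824000

# Hypothetical product attributes a service provider might expose.
bandwidths = ["10M", "50M", "100M", "500M", "1G"]
access_types = ["FTTP", "FTTN", "HFC", "Fixed Wireless"]
sla_tiers = ["Best Effort", "Business", "Premium"]
addons = ["None", "Static IP", "Managed Router", "Both"]

# Every combination is a variant the OSS must design, build, test and support.
variants = list(product(bandwidths, access_types, sla_tiers, addons))
print(len(variants))  # 5 * 4 * 3 * 4 = 240

# A curated palette: only offer Premium SLAs on the top two bandwidths.
curated = [v for v in variants
           if v[2] != "Premium" or v[0] in ("500M", "1G")]
print(len(curated))  # 192
```

Even this toy catalogue produces 240 variants from four small attribute lists; a real provider’s option sets compound far faster.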

Just curious – if you’ve delivered OSS into large service providers, have you ever seen evidence of palette analysis (ie variant reduction analysis) across domains (ie products, marketing, networks, digital, IT, field-work, etc)?

Alternatively, have you ever pushed back on decisions made upstream to say you’ll only support a smaller sub-set of options? This doesn’t seem to happen very often.

* When I’m talking about colours, I’m using the term figuratively, not necessarily the hues on a particular handset being sold through a service provider.

One of the biggest insights we had…

“One of the biggest insights we had was that we decided not to try to manage your music library on the iPod, but to manage it in iTunes. Other companies tried to do everything on the device itself and made it so complicated that it was useless.”
Steve Jobs

How does this insight apply to OSS? Can this “off device” perspective help us in designing better OSS?

Let’s face it – many OSS are bordering on useless due to the complexity that’s built into the user experience. So what complexity can we take off the “device”? Let’s start by saying “the device” is the UI of our OSS (although the off-device perspective could be viewed much more broadly than that).

What are the complexities that we face when using an OSS?

  • Order entry / service design / service parameters / provisioning processes that are time-consuming and prone to errors
  • Searching / choosing / tracing resources, particularly on large networks, which can result in very slow response times
  • Navigating through multiple layers of inventory in CLI or tabular forms
  • Dealing with fixed processes that don’t accommodate the many weird and wonderful variants that we encounter
  • Dealing with workflows that cross multiple integration boundaries and slip through the cracks
  • Analysing flawed data, which generally produces flawed results
  • Identifying the proverbial needle in the haystack when something goes wrong
  • And many, many more

How can we take some of those complexities “off-device”?

  • Abstracting order and provisioning complexity through the use of catalogs and auto-populating as many values as possible
  • Using augmented decision support to assist operators through complex processes, choosing from layers of resources, finding root-causes to problems, etc
  • Using event-based processes that traverse process states rather than fixed processes, particularly where omni-channel interactions are available to customers
  • Using inventory discovery (and automated build-up / tear-down in virtualised networks) and decision support to present simpler navigations and views of resources
  • Off-device data grooming / curation to make data analysis more intuitive on-device
  • etc
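A minimal sketch of the first bullet – catalog-driven auto-population – assuming an illustrative catalog structure and field names (none of these come from any particular product):

```python
# A minimal sketch of catalog-driven auto-population (names are illustrative).
CATALOG = {
    "BIZ_FIBRE_100": {
        "bandwidth": "100M",
        "sla_tier": "Business",
        "cos_profile": "AF41",
        "vlan_range": (100, 199),
    },
}

def build_order(product_code, customer_id, site_id):
    """Expand a minimal order into a full provisioning request."""
    defaults = CATALOG[product_code]
    return {
        "customer_id": customer_id,
        "site_id": site_id,
        "product_code": product_code,
        **defaults,  # the operator never has to key these values in
    }

order = build_order("BIZ_FIBRE_100", "CUST-42", "SITE-7")
print(order["cos_profile"])  # AF41
```

The operator supplies three fields; the catalog supplies everything else, taking the complexity “off-device”.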

In effect, we’re describing the tasks of an “on-device” persona (typically day-to-day OSS operators that need greater efficiency) and “off-device” persona/s (these are typically OSS admins, configuration experts, integrators, data scientists, UI/UX experts, automation developers, etc who tune the OSS).

The augmented analytics journey

“Smart Data Discovery goes beyond data monitoring to help business users discover subtle and important factors and identify issues and patterns within the data so the organization can identify challenges and capitalize on opportunities. These tools allow business users to leverage sophisticated analytical techniques without the assistance of technical professionals or analysts. Users can perform advanced analytics in an easy-to-use, drag and drop interface without knowledge of statistical analysis or algorithms. Smart Data Discovery tools should enable gathering, preparation, integration and analysis of data and allow users to share findings and apply strategic, operational and tactical activities, and will suggest relationships, identify patterns, suggest visualization techniques and formats, highlight trends and patterns, and help to forecast and predict results for planning activities.

Augmented Data Preparation empowers business users with access to meaningful data to test theories and hypotheses without the assistance of data scientists or IT staff. It allows users access to crucial data and information and allows them to connect to various data sources (personal, external, cloud, and IT provisioned). Users can mash-up and integrate data in a single, uniform, interactive view and leverage auto-suggested relationships, JOINs, type casts, hierarchies and clean, reduce and clarify data so that it is easier to use and interpret, using integrated statistical algorithms like binning, clustering and regression for noise reduction and identification of trends and patterns. The ideal solution should balance agility with data governance to provide data quality and clear watermarks to identify the source of data.

Augmented Analytics automates data insight by utilizing machine learning and natural language to automate data preparation and enable data sharing. This advanced use, manipulation and presentation of data simplifies data to present clear results and provides access to sophisticated tools so business users can make day-to-day decisions with confidence. Users can go beyond opinion and bias to get real insight and act on data quickly and accurately.”
The definitions above come from a post by Kartik Patel entitled, “What is Augmented Analytics and Why Does it Matter?”
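As a small taste of the “integrated statistical algorithms” Kartik mentions, here’s a hedged sketch of equal-width binning over hypothetical latency samples – the kind of noise-vs-trend separation an augmented tool would perform behind the scenes:

```python
import statistics
from collections import Counter

# Hypothetical latency samples (ms) from a monitored service.
samples = [12, 14, 13, 15, 95, 14, 13, 110, 12, 16, 13, 14]

def equal_width_bins(data, n_bins):
    """Assign each sample to one of n_bins equal-width buckets."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_bins or 1
    return Counter(min(int((x - lo) / width), n_bins - 1) for x in data)

bins = equal_width_bins(samples, 4)
print(dict(bins))  # most samples land in the lowest bin

# The outliers stand out against the median baseline.
median = statistics.median(samples)
outliers = [x for x in samples if x > 3 * median]
print(outliers)  # [95, 110]
```

An augmented analytics tool would surface the same separation automatically, without the user ever seeing the binning step.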

Over the years I’ve loved playing with data and learnt so much from it – about networks, about services, about opportunities, about failures, about gaps, etc. However, modern statistical analysis techniques fall into one of the categories described in “You have to love being incompetent“, where I’m yet to develop the skills to a comfortable level. Revisiting my fifth year uni mathematics content is more nightmare than dream, so if augmented analytics tools can bypass the stats, I can’t wait to try them out.

The concepts described by Kartik above would take those data learning opportunities out of the data science labs and into the hands of the masses. Having worked with data science labs in the past, the value of the information has been mixed, all dependent upon which data scientist I dealt with. Some were great and had their fingers on the pulse of what data could resolve the questions asked. Others, not so much.

I’m excited about augmented analytics, but I’m even more excited about the layer that sits on top of it – the layer that manages, shares and socialises the aggregation of questions (and their answers). Data in itself doesn’t provide any great insight. It only responds when clever questions are asked of it.

OSS data has an immeasurable number of profound insights just waiting to be unlocked, so I can’t wait to see where this relatively nascent field of augmented analytics takes us.

If you can’t repeat it, you can’t improve it

“The cloud model (ie hosted by a trusted partner) becomes attractive from the perspective of repeatability, from the efficiency of doing the same thing repeatedly at scale.”
From, “I want a business outcome, not a deployment challenge.”

OSS struggles when it comes to repeatability – often within an organisation, but almost always when comparing between organisations. That’s why there’s so much fragmentation, which in turn holds the industry back because so much duplicated effort and brain-power is spread across the multitude of vendors in the market.

I’ve worked on many OSS projects, but none have even closely resembled each other, even back in the days when I regularly helped the same vendors deliver to different clients. That works well for my desire to have constant mental stimulation, but doesn’t build a very efficient business model for the industry.

Closed loop architectures are the way of the future for OSS, but only if we can make our solutions repeatable, measurable / comparable and hence, refinable (ie improvable). If we can’t then we may as well forget about AI. After all, AI requires lots of comparable data.

I’ve worked with service providers that have prided themselves on building bespoke solutions for every customer. I’m all for making every customer feel unique and having their exact needs met, but this can still be accommodated through repeatable building blocks with custom tweaks around the edges. Then there are the providers that have so many variants that you might as well be designing / building / testing an OSS for completely bespoke solutions.

You could even look at it this way – If you can’t implement a repeatable process / solution, then measure it, then compare it and then refine it, then you can’t create a customer offering that is improving.

Omnichannel will remain disjointed until…

Omnichannel is intended to be a strategy that provides customers with a seamless, consistent experience across all of their contact channels – channels that include online/digital, IVR, contact centre, mobile app, retail store, B2B portal, etc.

The challenge of delivering consistency across these platforms is that there is little cross-over between the organisations that deliver these tools. Each is a fragmented market in its own right and the only time interaction happens (in my experience at least) is on an as-needed basis for a given project.

Two keys to delivering seamless customer experience are the ability to identify unique customers and the ability to track their journeys through different channels. The problem is that some of these channels aren’t designed to uniquely identify customers and, even when they can, aren’t consistent with other products in their linking-key strategies.

A related problem is that user journeys won’t follow a single step-by-step sequence through the channels. So rather than process flows, user journeys need to be tracked as state transitions through their various life-cycles.

OSS/BSS are ideally situated to manage linking keys across channels (if the channels can provide the data) as well as handling state-transition user journeys.
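To make the state-transition idea concrete, here’s a minimal sketch (the linking key, channel and state names are all illustrative assumptions) of tracking one customer journey across channels as transitions rather than a fixed step-by-step flow:

```python
# Sketch: tracking one customer journey as state transitions across channels.
# The linking key ("CUST-42") and state names are illustrative assumptions.
TRANSITIONS = {
    ("browsing", "quote_requested"),
    ("quote_requested", "order_submitted"),
    ("order_submitted", "order_amended"),
    ("order_amended", "order_submitted"),
    ("order_submitted", "activated"),
}

journeys = {}  # linking_key -> current journey state

def record_event(linking_key, channel, new_state):
    """Record a channel touch-point as a life-cycle state transition."""
    current = journeys.get(linking_key, "browsing")
    if current != new_state and (current, new_state) not in TRANSITIONS:
        raise ValueError(f"invalid transition {current} -> {new_state}")
    journeys[linking_key] = new_state
    return (linking_key, channel, new_state)

# The same customer moves through three different channels.
record_event("CUST-42", "web", "quote_requested")
record_event("CUST-42", "contact_centre", "order_submitted")
record_event("CUST-42", "mobile_app", "activated")
print(journeys["CUST-42"])  # activated
```

Because the journey is keyed on the linking key, not the channel, the state survives every channel hop – which is exactly the consistency omnichannel promises.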

Omnichannel represents a significant opportunity, in part because there are two layers of buyers for such technology. The first is the service provider that wants to provide their customers with a truly omnichannel experience. The second is the service providers’ business customers, who want to offer consistent omnichannel experiences to their own end-customers.

Who is going to be the first to connect the various channel products / integrators together?

Use cases for architectural smoke-tests

“I often leverage use-case design and touch-point mapping through the stack to ensure that all of the use-cases can be turned into user-journeys, process journeys and data journeys. This process can pick up the high-level flows, but more importantly, the high-level gaps in your theoretical stack.”

Yesterday’s blog discussed the use of use cases to test a new OSS architecture. TM Forum’s eTOM is the go-to model for process mapping for OSS / BSS. Their process maps define multi-level standards (in terms of granularity of process mapping) to promote a level of process repeatability across the industry. Their clickable model allows you to drill down through the layers of interest to you (note that this is available for members only though).

In terms of quick smoke-testing an OSS stack though, I tend to use a simpler list of use cases for an 80/20 coverage:

  • Service qualification (SQ)
  • Adding new customers
  • New customer orders (order handling)
  • Changes to orders (adds / moves / changes / deletes / suspends / resumes)
  • Logging an incident
  • Running a report
  • Creating a new product (for sale to customers)
  • Tracking network health (which may include tracking of faults, performance, traffic engineering, QoS analysis, etc)
  • Performing network intelligence (viewing inventory, capacity, tracing paths, sites, etc)
  • Performing service intelligence (viewing service health, utilised resources, SLA threshold analysis, etc)
  • Extracting configurations (eg network, device, product, customer or service configs)
  • Tracking customer interactions (and all internal / external events that may impact customer experience such as site visits, bills, etc)
  • Running reports (of all sorts)
  • Data imports
  • Data exports
  • Performing an enquiry (by a customer, for the purpose of sales, service health, parameters, etc)
  • Bill creation

There are many more that may be required depending on what your OSS stack needs to deliver, but hopefully this is a starting point to help your own smoke tests.
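One way to operationalise this list is a simple coverage check: map each use case to the touch-points it must traverse through the stack and flag any with no mapping. A hedged sketch, with entirely illustrative system names:

```python
# Sketch of a coverage smoke-test: map each use case to the systems it must
# touch and flag any that have no mapped touch-points (names illustrative).
USE_CASES = {
    "Service qualification": ["CRM", "Inventory"],
    "New customer order": ["CRM", "Catalog", "Orchestrator", "Inventory"],
    "Logging an incident": ["Ticketing", "Fault Mgmt"],
    "Bill creation": [],  # gap: no touch-points mapped yet
}

gaps = [uc for uc, touchpoints in USE_CASES.items() if not touchpoints]
print(gaps)  # ['Bill creation']
```

Even this crude check surfaces the high-level gaps in a theoretical stack before any detailed design is committed.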

Use-case driven OSS architecture

When it comes to designing a multi-vendor (sometimes also referred to as best-of-breed) OSS architecture stack, there is never a truer saying than, “the devil is in the detail.”

Oftentimes, it’s just not feasible to design every interface / integration / data-flow traversing a theoretical OSS stack (eg pre-contract award, whilst building a business case, etc). That level of detail is developed during detailed design or perhaps tech-spikes in the Agile world.

In this interim state, I often leverage use-case design and touch-point mapping through the stack to ensure that all of the use-cases can be turned into user-journeys, process journeys and data journeys. This process can pick up the high-level flows, but more importantly, the high-level gaps in your theoretical stack.

A new, more sophisticated closed-loop OSS model

Back in early 2014, PAOSS posted an article about the importance of closed loop designs in OSS, which included the picture below:

[Image: OSS / DSS feedback loop]

It generated quite a bit of discussion at the time and led me to being introduced to two companies that were separately doing some interesting aspects of this theoretical closed loop system. [Interestingly, whilst being global companies, they both had strong roots tying back to my home town of Melbourne, Australia.]

More recently, Brian Levy of TM Forum has published a more sophisticated closed-loop system, in the form of a Knowledge Defined Network (KDN), as seen in the diagram below:
[Image: Brian Levy’s closed-loop OSS / KDN diagram]
I like that this control-loop utilises relatively nascent technologies like intent networking and the constantly improving machine-learning capabilities (as well as analytics for delta detection) to form a future OSS / KDN model.

The one thing I’d add is the concept of inputs (in the form of use cases such as service orders or new product types) as well as outputs / outcomes such as service activations for customers and not just the steady-state operations of a self-regulating network. Brian Levy’s loop is arguably more dependent on the availability and accuracy of data, so it needs to be initially seeded with inputs (and processing of workflows).

Current-day OSS are too complex and variable (ie un-repeatable), so perhaps this represents an architectural path towards a simpler future OSS – in terms of human interaction at least – although the technology required to underpin it will be very sophisticated. The sophistication will be palatable if we can deliver the all-important repeatability described in, “I want a business outcome, not a deployment challenge.” BTW. This refers to repeatability / reusability across organisations, not just being able to repeatedly run workflows within organisations.
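The essence of any such closed loop – observe a delta against intent, actuate a partial correction, re-measure – can be sketched in a few lines. This toy loop is my own illustration, not Brian Levy’s model; the step size and threshold are arbitrary:

```python
# A toy closed loop: drive observed state toward a declared intent.
# Step size (0.5) and tolerance are illustrative, not from any real KDN.
def closed_loop(intent, observed, max_iterations=20, tolerance=0.5):
    history = [observed]
    for _ in range(max_iterations):
        delta = intent - observed           # analytics: detect the delta
        if abs(delta) <= tolerance:         # within intent: steady state
            break
        observed += 0.5 * delta             # actuate a partial correction
        history.append(observed)            # re-measure the new state
    return observed, history

# Seed the loop with an input (eg a new service order raising target load).
final, history = closed_loop(intent=80.0, observed=20.0)
print(round(final, 1))  # converges to within tolerance of the intent
```

Note the seeding step: as argued above, the loop only does useful work once inputs (orders, product types) inject an intent for it to converge on.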

When in doubt, connect

“When in doubt, connect.
That’s what fast-growing, important organizations do.
Making stuff is great.
Making connections is even better.”
Seth Godin in his post here.

Simple words. Simple concept. Interesting message… viewed here through a traffic-light of OSS layers.

Layer 1 – A connection red light
The more connections an OSS has, the more enriched the data tends to become. By contrast, though, the more interconnected an OSS gets, the more inflexible and difficult it becomes to maintain. The chess-board analogy and the handshake analogy attest to the challenges associated with a highly connected OSS. In this case, when in doubt, don’t connect.

Layer 2 – A connection orange light
Five of the top seven companies (by market cap) in the world are tech companies (1. Apple, 2. Alphabet (Google), 3. Microsoft, 6. Amazon, 7. Facebook). They have become valuable through the power of connection. China Telecom represents one of the original connectors, the telecommunications carriers, and just makes it into the top 10 whilst Verizon and AT&T are both worth over $250B too. Our OSS allow these connections to be made – but they’re making a connection “for” the customer rather than making a connection “with” the customer. Until brand-OSS starts making a connection with the customers, we’ll never be fast growing like the tech titans. When in doubt, connect at a deeper level.

Layer 3 – A connection green light
Tripods or Linchpins are the most valuable of OSS resources because of their ability to connect (ideas, people, products, workflows, contracts, etc). They are the pinnacle of OSS implementers because their breadth of connections allows them to interconnect the most complex of OSS jigsaws. If you want to become an OSS tripod, then Seth’s lead is a great one to follow – When in doubt, connect.

Instead of the easy metrics…

“What is it that you hope to accomplish? Not what you hope to measure as a result of this social media strategy/launch, but to actually change, create or build?

An easy but inaccurate measurement will only distract you. It might be easy to calibrate, arbitrary and do-able, but is that the purpose of your work?

I know that there’s a long history of a certain metric being a stand-in for what you really want, but perhaps that metric, even though it’s tried, might not be true. Perhaps those clicks, views, likes and grps are only there because they’re easy, not relevant.

If you and your team can agree on the goal, the real goal, they might be able to help you with the journey…

System innovations almost always involve rejecting the standard metrics as a first step in making a difference. When you measure the same metrics, you’re likely to create the same outcomes. But if you can see past the metrics to the results, it’s possible to change the status quo.”
Seth Godin on his blog here.

There are a lot of standard metrics in OSS and comms networks. In the context of Seth’s post, I have two layers of metrics for you to think about. One layer is the traditional role of OSS – to provide statistics on the operation of the comms network / services / etc. The second layer is in the statistics of the OSS itself.

Layer 1 – Our OSS tend to provide a semi-standard set of metrics because service providers use similar metrics. We even had standards bodies helping providers get consistent in their metrics. But are those metrics still working for a modern carrier?
Can we disrupt the standard set of metrics by asking what a service provider is really wanting to achieve in our changing environment?

Layer 2 – What do we know about our own OSS? How are they used? How does that usage differ across clients? How does it differ between other products? What are the metrics that help sell an OSS (either internally or externally)?

The trickle-down effect

There’s an interesting thing with off-the-shelf OSS solutions that are subsequently highly customised by the buyer. I call it the trickle-down effect.

By nature, commercial-off-the-shelf (COTS) solutions tend to be designed to cope with as many variants as their designers can imagine. They’re designed to be inclusive in nature.

But customised COTS solutions tend to narrow down that field of view, adding specific actions, filters, etc to make workflows more efficient, reports more relevant, etc. Exclusive in nature.

The unintended result of being exclusive is the trickle-down effect. If you exclude / filter things out, chances are you’ll have to continually update those customisations. For example, if you’re filtering on a certain device type, but that device type is superseded, then you’ll have to change all of the filters, as well as anything downstream of that filter.

The trickle-down effect can be insidious, turning a nice open COTS solution into a beast that needs constant attention to cope with even the most minor of operational changes. The more customisations made, the more gnarly the beast tends to be.
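A miniature example of the effect, using made-up device records: a customisation filtered on a specific device type silently misses the superseded model, whereas filtering on the attribute you actually care about keeps working:

```python
# The trickle-down effect in miniature (all records are illustrative).
devices = [
    {"name": "core-01", "type": "ASR9000", "role": "core"},
    {"name": "core-02", "type": "ASR9900", "role": "core"},  # superseded model
]

# Brittle customisation: hard-coded to the old device type.
hard_coded = [d for d in devices if d["type"] == "ASR9000"]
print(len(hard_coded))  # 1 -- silently misses the new model

# More resilient: filter on the attribute you actually care about.
by_role = [d for d in devices if d["role"] == "core"]
print(len(by_role))  # 2
```

Every filter like the first one trickles down: when the device type changes, the filter and everything downstream of it must change too.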

The alternative to canned OSS reports

Reports are an important interaction type with any OSS, obviously. What’s less well observed is the time (ie cost) it can take to create and curate canned reports. [BTW in my crude terminology, canned reports are ones where the report format and associated query are created / coded and designed to be run more than once in the future.]

I’ve seen situations where an organisation has requested many, many custom reports, which have been costly to set up, but then once set up, have not been used again after user acceptance testing. I know of one company that had over 500 canned reports developed and only ~100 had been used more than a few times in the 12 months prior to when I checked.

My preferred option is to create an open data model that can be queried via a reporting engine, one that allows operators to intuitively create their own ad-hoc reports and then save them for future re-use (and share them if desired).
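In code terms, the idea is that report definitions become data – saved, shared and re-run by operators – rather than bespoke development. A minimal sketch over an illustrative open data model:

```python
# Sketch of ad-hoc reporting over an open data model: operators compose a
# query once, save it, and re-run or share it (the schema is illustrative).
SERVICES = [
    {"id": 1, "type": "fibre", "state": "active",    "region": "north"},
    {"id": 2, "type": "fibre", "state": "suspended", "region": "north"},
    {"id": 3, "type": "mobile", "state": "active",   "region": "south"},
]

saved_reports = {}

def save_report(name, predicate):
    """Persist an operator-composed query for re-use (and sharing)."""
    saved_reports[name] = predicate

def run_report(name):
    return [row for row in SERVICES if saved_reports[name](row)]

save_report("active_fibre", lambda r: r["type"] == "fibre"
            and r["state"] == "active")
print([r["id"] for r in run_report("active_fibre")])  # [1]
```

The 500-canned-report problem disappears when unused definitions are just rows that can be pruned, not code that was paid for up front.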

OSS billionaires with perfect abs

“If [more] information was the answer, then we’d all be billionaires with perfect abs.”
Derek Sivers

The sharing economy has made a deluge of information available to us at negligible cost. We have more information available at our fingertips than we can ever consume and process. So why don’t we all have massive bank balances and perfect abs? (although I’m sure most PAOSS readers do of course)

The answer is that information is only as good as the decisions we are able to make with it. More specifically, the ability to distill the information down to the insights that compel great decisions to be made.

As OSS implementers, we can easily bombard our users with enough information calories to make perfect abs an impossible dream. Too easily in fact.

It’s much harder to consistently produce insights of great value. Perhaps it even needs the unique billionaire’s lens to spot the insights hidden in the information. But that’s what makes billionaires so rare.

Herein lies the message I want to leave you with today – how do we OSS engineers train ourselves to see information through a billionaire / value lens rather than our more typical technical lenses? I’m not a billionaire, so I (we?) spend too much time thinking about technically correct solutions rather than thinking about valuable solutions. Do we have the right type of training / thinking to actually know what a valuable solution looks like?

What is your lead domino?

OSS can be complicated beasts with many tentacles, which can make starting a new project daunting. When I start, I like to use a WBS (work breakdown structure) to show all the tentacles (people, processes, technology, contracts) on a single page, then look for the lead domino (or dominoes).

A lead domino is the first piece, the one that starts the momentum and makes a load of other pieces easier or irrelevant.

Each project is different of course, but I tend to look to get base-level software operational as the lead domino. It doesn’t have to do much. It can be on the cloud, in a sandpit, on a laptop. It can show minimal functionality and not even any full use cases. It can represent only a simplistic persona, not the many personas an OSS normally needs to represent.

But what it does show is something tangible. Something that can be interacted with and built upon. Most importantly, it gives people a context and immediately takes away a lot of the “what-if?” questions that can derail the planning stage. It provides the basis to answer other questions. It provides a level of understanding that allows level-up questions to be asked.

Or it might be a hearts-and-minds exercise to get the organisation deeply engaged in the project and help overcome the complexities and challenges that will ensue. Or it could just be getting key infrastructure in place, like servers, databases or DCNs (data communications networks), that will allow the pieces of the puzzle to communicate with each other.

On an uplift project, it might be something more strategic like a straw-man of the architecture, an end-to-end transaction flow, a revised UI or a data model.

For OSS implementations, just like dominoes, it’s choosing the right place/s to kick-start the momentum.

Should your OSS have an exit strategy in mind?

What does the integration map of your OSS suite look like? Does it have lots of meatballs with a spaghetti of interconnections? Is it possibly so entangled that even creating an integration map would be a nightmare?

Similarly, how many customisations have you made to your out-of-the-box tools?

In recent posts we’ve discussed the frenzy of doing without necessarily considering the life-time costs of all those integrations and customisations. There’s no doubt that most of those tweaks will add capability to the solution, but the long-term costs aren’t always factored in.

We also talk about ruthless subtraction projects. There are many reasons why it’s easier to talk about reduction projects than actually achieve them. Mostly it’s because the integrations and customisations have entwined the solutions so tightly that it’s nearly impossible to wind them back.

But what if, like many start-ups, you had an exit strategy in mind when introducing a new OSS tool into your suite? There is an inevitability of obsolescence in OSS, either through technology change, moving business requirements, breakdowns in supplier / partnership relationships, etc. However, most tools stay around for longer than their useful shelf life because of their stickiness. So why not keep rigid control over the level of stickiness via your exit strategy?

My interpretation of an exit strategy is to ensure by-play with other systems happens at the data level rather than through integrations and customisations to the OSS tools. It also includes ruthless minimisation of snowball dependencies* within that data. Just because integrations can be done, doesn’t mean they should be done.

* Snowball dependencies are when one builds on another, builds on another, which is a common OSS occurrence.

The interview question tech recruiters will never ask, but should

Over the last few days, this blog has been diving into the career steps that OSS (and tech) specialists can make in readiness for the inevitable changes in employment dynamics that machine learning and AI will bring about.

You’ve heard all the stories about robots and AI taking all of our jobs. Any job that is repeatable and / or easy to learn will be automated. We’ve also seen that products are commoditising and many services are too, possibly because automation and globalisation are increasing the supply of both. Commoditisation means less profitability per unit to drive projects and salaries. We’ve already seen how this is impacting the traditional investors in OSS (eg CSPs).

Oh doom and gloom. So what do we do next?

Art!

That leads to my contrarian interview question. “So, tell me about your art.”

More importantly, using the comments section below, tell me about YOUR art. I’d love to hear your thoughts.

FWIW, here’s Seth Godin’s definition of art, which resonates with me, “Art isn’t only a painting. Art is anything that’s creative, passionate, and personal. And great art resonates with the viewer, not only with the creator.
What makes someone an artist? I don’t think it has anything to do with a paintbrush. There are painters who follow the numbers, or paint billboards, or work in a small village in China, painting reproductions. These folks, while swell people, aren’t artists. On the other hand, Charlie Chaplin was an artist, beyond a doubt. So is Jonathan Ive, who designed the iPod. You can be an artist who works with oil paints or marble, sure. But there are artists who work with numbers, business models, and customer conversations. Art is about intent and communication, not substances.”

If we paint our OSS by numbers (not with numbers), we’re more easily replaced. If we inspire with solutions that are unique, creative, passionate and personal, that’s harder for machines to replace.

Next step, man-machine partnerships

Getting literate in the language of the future posed the thought that data is the language of the future and therefore it is incumbent on all OSS practitioners to ensure their literacy.

It also posed that data literacy provides a stepping stone to a future where machine learning is more prevalent. This got me thinking. We currently think about man-machine interfaces where we design ways for humans and machines to get info into the system. But the next step, the future way of thinking will be man-machine partnerships, where we design ways to leverage machines to get more out of the system.

Data literacy will be essential to making this transition from MM interface to MM partnership. Machines (eg IoT and OSS) will get data into the system. Through partnerships with humans, machines will also be instrumental in the actions / insights coming out of the system.

Getting literate in the language of the future

As we all know, digitalisation of everything is decreasing barriers to entry and increasing the speed of change in almost every perceivable industry. Unfortunately, this probably also means the half-life of opportunity exploitation is also shrinking. The organisations that seem to be best at leveraging opportunities in the market are the ones that are able to identify and act the quickest.

They identify and act quickly… based on data. That’s why we hear about data-driven organisations and data-driven decision support. OSS collect enormous amounts of data every year. But it’s only those who can turn that information into action who are able to turn opportunities into outcomes.

Data is the language of the future (well today too of course), so literacy in that language will become increasingly important. I’m not expecting to become a highly competent data scientist any time soon, but I’m certainly not expecting to delegate completely to mathletes either.

The language of data is not just in the data sets, but also in the data processing techniques that will become increasingly important – regression, clustering, statistics, augmentation, pattern-matching, joining, etc. If you can’t speak the language, you can’t drive the change or ask the right questions to unearth the gems you’re seeking. Speaking the language allows you to take the tentative first steps towards machine learning and AI.

As a consultant, I see myself as a connector – of ideas, people, concepts, solutions – and I see OSS largely falling into the same category. But the consultancy, and/or OSS, skills of today will undoubtedly need to be augmented with connection of data too – connection of data sets, data analysis techniques, data models – to be able to prove consultancy hypotheses with real data. That’s where consultancy and OSS are going, so that’s where I need to go too.

Finding the metrics to invert the OSS iceberg

“Invert the iceberg – show just how much capability is hidden under the surface by mapping how dependent the organisation’s positive outcomes are on its OSS (OSS as the puppet-master). All the sexy stuff (eg 5G, IoT, network virtualisation, etc) can only be monetised if our OSS pull all the strings to make them happen.”
Yesterday’s post.

Have you noticed that network teams and network vendors tend to be far better at eliciting excitement from executive sponsors than those in charge of OSS? There is always far more interest in 5G, IoT, network virtualisation, etc than in the systems that support them. Do you think this is from the differences in messaging used? For example, networks are always about being a platform for new services, extra speeds, new revenues, increased functionality, etc. Networks are undoubtedly complex machines, but the metrics above are easily understood by execs, customers and almost anyone else who is involved in producing or consuming comms services.

BSS also have their metrics that are easily understood by those who don’t comprehend the underlying complexity of BSS. Metrics such as ARPU (Average Revenue Per User), Total Revenue Billed, Cost per Bill, etc.

But in OSS, what are our standard metrics? Are any of them stapled to the lifeblood of any business (ie money)? We talk about number of services activated, number of faults resolved*, mean-time-to-repair, etc. These are all back-end metrics that don’t elicit much interest from anyone who isn’t in ops… unless something goes wrong, like activation rates dipping or fault resolution times spiking.

What if we were to flip metrics like the following to better uncover the hidden part of the iceberg:

  • services activated -> value of services activated (based on revenues generated)
  • number of faults resolved -> customer satisfaction impact
  • mean-time-to-repair -> amount of revenue assured (ie prevention of loss, particularly against any SLAs existing on the service/s)
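As a hedged sketch of what the first of those flips might look like in practice, the snippet below contrasts a plain activation count with a revenue-weighted version. The record structure and dollar figures are invented for illustration; real OSS data models will differ.

```python
# Flipping a back-end ops metric into a money metric.
# The service records and revenues below are invented for illustration.

activations = [
    {"service": "ENT-001", "activated": True,  "monthly_revenue": 12000},
    {"service": "ENT-002", "activated": True,  "monthly_revenue": 800},
    {"service": "ENT-003", "activated": False, "monthly_revenue": 5000},
]

# Traditional back-end metric: a plain count
services_activated = sum(1 for a in activations if a["activated"])

# Flipped, exec-friendly metric: revenue attached to those activations
value_activated = sum(a["monthly_revenue"] for a in activations if a["activated"])

print(services_activated)  # 2
print(value_activated)     # 12800
```

The count says “ops did some work”; the dollar figure says “the OSS turned on $12,800 of monthly revenue”, which is the version an executive sponsor actually cares about.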

For event management and activation processes, it probably won’t take much to identify metrics with greater perceived value. For other systems like performance, trouble ticketing, work-force management, even inventory, we’ll have to be a bit more creative.

* Has there ever been a more “shoot the messenger” metric in service providers? Problems arise in the network, but the OSS cops the flak because they’re the reporters of the problems.

What happens if we cross high-speed trading with OSS?

The law of diminishing marginal utility is a theory in economics which says that as consumption increases, the satisfaction gained from each additional unit decreases.
“You are at a park on a winter’s day, and someone is selling hot dogs for $1 each. You eat one. It tastes good and satisfies you, so you have another one, and another, etc. Eventually, you eat so many that your satisfaction from each hot dog you consume drops. You are less willing to pay the $1 you have to pay for a hot dog. You would only consume another if the price drops. But that won’t happen, so you leave, and demand for the hot dogs falls.”
Wikibooks’ Supply and Demand.

Yesterday’s blog went back to the basics of supply and demand to try to find ways to innovate with our OSS. Did the proposed model help you spawn any great new ideas?

If we look at another fundamental consumer model, the law of diminishing marginal utility, we can see that with more consumption comes less consumer satisfaction. Sounds like what’s happening for telcos globally. There’s ever greater consumption of data, but an increasing disinterest in who transports that data. Network virtualisation, 5G and IoT are sometimes quoted as the saviours of the telco industry (looking only at the supply side), but they’re inevitably going to bring more data to the table, leading to more disinterest, right? Sounds like a race to the bottom.

Telcos were highly profitable in times of data shortage, but in this age of abundance a new model is required, rather than just speeds and feeds. As OSS providers, we also have to think beyond just bringing greater efficiency to the turn-on of speeds and feeds. But this is completely contra to the way we normally think, isn’t it? Completely contra to the way we build our OSS business cases.

What products / services can OSS complement that are in short supply and highly valued? Some unique content (eg Seinfeld) or apps (eg Pokemon Go) might fit these criteria, but only for relatively short time periods. Even content and apps are becoming more abundant and less valued. Perhaps the answer is in fulfilling short-term inefficiencies in supply and demand (eg dynamic pricing, dynamic offers, un-met demands, etc) as posed in yesterday’s blog. All of these features require us to look at our OSS data with a completely different lens though.

Our analytics engines might be less focused on time-to-repair and perhaps more on metrics such as time from analysis to offer (a cycle that currently takes us months, thus missing transient market-inefficiency windows). And not just for the service providers, but as a service for their customers. Is this high-speed trading crossed with OSS??
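As a purely speculative sketch of that high-speed-trading idea, the snippet below flags short-lived demand spikes against a trailing average, the kind of transient window a dynamic offer might target before it closes. The sampling interval, threshold factor and demand figures are all assumptions for illustration, not a real trading signal.

```python
# Spotting transient demand spikes in (invented) hourly demand samples.

def spike_windows(samples, window=3, factor=1.5):
    """Return the indices where demand exceeds factor x the trailing average."""
    spikes = []
    for i in range(window, len(samples)):
        trailing = sum(samples[i - window:i]) / window  # average of prior window
        if samples[i] > factor * trailing:
            spikes.append(i)
    return spikes

# Hypothetical hourly demand for some service; hours 4-5 are a short spike
hourly_demand = [100, 105, 98, 102, 250, 240, 110, 104]
print(spike_windows(hourly_demand))  # [4, 5]
```

In the high-speed-trading analogy, the spike detector is the easy part; the hard part is having OSS processes (offer generation, pricing, activation) fast enough to act inside a two-hour window.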