Micro-strangulation vs COTS customisation

Over the last couple of posts, we’ve referred to the following diagram and the glass ceiling it shows tech debt creating on OSS feature releases:
[Diagram: The increasing percentage of tech debt]

Yesterday’s post indicated that the current proliferation of microservices has the potential to amplify the strangulation.

So how does that compare with the previous approach that was built around COTS (Commercial off-the-shelf) OSS packages?

With COTS, the same time-series chart exists, except that the management of legacy, tech debt and so on falls largely to the COTS vendor, freeing up the service provider… until the service provider starts building customisations and the overhead becomes shared.

With microservices, the rationalisation responsibility is shifted to the in-house (or insourced) microservice developers.

And a third option: If the COTS is actually delivered via a cloud “OSS as a service” (OSSaaS) model, then there’s a greater incentive for the vendor to constantly re-factor and reduce clutter.

A fourth option, which I haven’t actually seen as a business model yet: once an accumulation of modular microservices begins to grow, vendors might start offering those microservices as COTS products.

Can the OSS mammoths survive extinction?

“Startups win with data. Mammoths go extinct with products.”
Jay Sharma

Interesting phraseology. I love the play on words with the term mammoths. There are some telcos that are mammoth in size but are threatened with extinction through changes in environment and new competitors appearing.

I tend to agree with the intent of the quote, but also have some reservations. For example, products are still a key part of the business model of digital phenoms like Google, Facebook, etc. It’s their compelling products that allow them to collect the all-important data. As consumers, we want the product; they get our data. We also want the products sold by the Mammoths, but perhaps they don’t leverage the data entwined in our usage (or more importantly, the advertising revenues that get attracted to all that usage) as well as the phenoms do.

Another interesting play on words exists here for the telcos – in the “winning with data.” Telcos are losing at data (their profitability per bit is rapidly declining to the point of commoditisation), so perhaps a mindset shift is required – moving from a business model built on the transport of data to one based on the understanding of, and learning from, data. It’s certainly not a lack of data that’s holding them back. Our OSS / BSS collect and curate plenty. The difference is that Google’s and Facebook’s customers are advertisers, whilst the Mammoths’ customers are subscribers.

As OSS providers, the question remains for us to solve – how can we provide the products that allow the Mammoths to win with data?

PS. The other part of this equation is the rise of data privacy regulations such as GDPR (General Data Protection Regulation). Is it just me, or do the Mammoths seem to attract more attention in relation to privacy of our data than the OTT service providers?

Analytics and OSS seasonality

Seasonality is an important factor for network and service assurance. It’s also known as time-of-day/week/month/year specific activity.

For example, we often monitor network health through the analysis of performance metrics (eg CPU utilisation) and set up thresholds to alert us if those metrics go above (or below) certain levels. The most basic threshold is a fixed one (eg if a CPU goes above 95% utilisation, then raise an alert). However, this might just create unnecessary activity. Perhaps we run an extract at 2am every night, which causes CPU utilisation to sit at nearly 100% for long periods. We don’t want to receive an alert in the middle of the night for what is expected behaviour.

Another example might be a higher network load for phone / SMS traffic on major holidays or during disaster events.

The great thing about modern analytics tools is that, as long as they have a long time-series of data, they can spot patterns of expected behaviour at certain times/dates that humans might not be observing, and adjust alerting accordingly. This reduces the number of spurious notifications for network assurance operators to chase down.
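As a minimal sketch of that idea (Python, with synthetic data and made-up numbers – not any particular vendor’s implementation), a per-hour-of-day baseline catches the 2am extract example above without waking anyone up:

```python
import numpy as np
import pandas as pd

# Synthetic CPU samples: quiet most of the day, a heavy extract at 2am.
rng = np.random.default_rng(seed=0)
idx = pd.date_range("2018-01-01", periods=24 * 90, freq="H")  # 90 days, hourly
cpu = rng.normal(30, 5, len(idx))
cpu[idx.hour == 2] = rng.normal(97, 1, (idx.hour == 2).sum())  # 2am extract

df = pd.DataFrame({"cpu_util": cpu}, index=idx)

# Per-hour-of-day baseline: mean and std of utilisation for each hour,
# so the 2am window earns its own (high) expected range.
baseline = df.groupby(df.index.hour)["cpu_util"].agg(["mean", "std"])

def is_anomalous(ts, value, n_sigma=3):
    """Alert only if the sample sits outside the expected band for that hour."""
    row = baseline.loc[ts.hour]
    return abs(value - row["mean"]) > n_sigma * row["std"]

print(is_anomalous(pd.Timestamp("2018-04-01 02:00"), 98.0))  # False: expected
print(is_anomalous(pd.Timestamp("2018-04-01 14:00"), 98.0))  # True: anomaly
```

The same 98% reading is business-as-usual at 2am but alert-worthy at 2pm, which is exactly the seasonality distinction a fixed threshold can’t make.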

10 ways to #GetOutOfTheBuilding

Eric Ries’ “The Lean Startup,” has a short chapter entitled, “Get out of the Building.” It basically describes getting away from your screen – away from reading market research, white papers, your business plan, your code, etc – and out into customer-land. Out of your comfort zone and into a world of primary research that extends beyond talking to your uncle (see video below for that reference!).

This concept applies equally well to OSS product developers as it does to start-up entrepreneurs. In fact the concept is so important that the chapter name has inspired its own hashtag (#GetOutOfTheBuilding).

This YouTube video provides 10 tips for getting out of the building (I’ve started the clip at Tendai Charasika’s list of 10 ways but you may want to scroll back a bit for his more detailed descriptions).

But there’s one thing that’s even better than getting out of the building and asking questions of customers. After all, customers don’t always tell the complete truth (even when they have good intentions). No, the better research is to observe what they do, not what they say. #ObserveWhatTheyDoNotWhatTheySay

That could mean being out of the building and observing customer behaviour… or it could be through looking at customer usage statistics generated by your OSS. That data might just show what a customer is doing… or not doing (eg customers might do small-volume transactions through the OSS user interface, but have a hack for bulk transactions because the UI isn’t efficient at scale).

Not sure if it’s indicative of the industry as a whole, but my experience working for / with vendors is that they don’t heavily subscribe to either of these hashtags when designing and refining their products.

Does your OSS collect primary data to #ObserveWhatTheyDoNotWhatTheySay? If it does, do you ever make use of it? Or do you prefer to talk with your uncle (does he know much about OSS BTW)?

Watching customers under an omnichannel strobe light

Omnichannel will remain full of holes until we figure out a way of tracking user journeys rather than trying to prescribe (design, document, maintain) process flows.

As a customer jumps between the various channels, they move between systems. In doing so, we tend to lose the ability to watch the customer’s journey as a single continuous flow. It’s like trying to watch customer behaviour under a strobe light… except that the light only strobes on for a few seconds every minute.

Theoretically, omnichannel is a great concept for customers because it allows them to step through any channel at any time to suit their unique behavioural preferences. In practice, it can be a challenging experience for customers because of a lack of consistency and flow between channels.

It’s a massive challenge for providers to deliver consistency and flow because the disparate channels have vastly different user interfaces and experiences. IVR, digital, retail, etc all come from completely different design roots.

Vendors are selling the dream of cost reductions through improved efficiency within their channels. Unfortunately this is the wrong place for a service provider to look. It’s the easier place to look, but the wrong place nonetheless. Processes already tend to be relatively efficient within a channel and data tends to be tracked well within a channel.

The much harder, but better place to seek benefits is through the cross-channel user journeys, the hand-offs between channels. That’s where the real competitive advantage opportunities lie.

Do you want dirty or clean automation?

Earlier in the week, we spoke about the differences between dirty and clean consulting, as posed by Dr Richard Claydon, and how it impacted the use of consultants on OSS projects.

The same clean / dirty construct applies to automation projects / tools such as RPA (Robotic Process Automation).

Clean Automation = simply building robotic automations (ie fixed algorithms) that manage existing process designs
Dirty Automation = understanding the process deeply first, optimising it for automation, then creating the automation.

The first is cheap(er) and easy(er)… in the short-term at least.
The second requires getting hands dirty, analysing flows, analysing work practices, analysing data / logs, understanding operator psychology, identifying inefficiencies, refining processes to make them better suited to automation, etc.

Dirty automation requires analysis, not just of the SOP (Standard Operating Procedure), but of the actual state-changes occurring from start to end of each process instance.
This also represents the better launching-off point to lead into machine-learning (ie cognitive automation), rather than algorithmic or robotic automation.
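As a minimal sketch of that state-change analysis (Python, with an invented event log and state names – purely illustrative), mining the transitions that actually occur surfaces the undocumented loops worth fixing before any automation is built:

```python
from collections import Counter

# Hypothetical event log: (ticket_id, state) records in time order.
# "Dirty" automation starts by mining what actually happens,
# not what the SOP says should happen.
events = [
    ("T1", "created"), ("T1", "assigned"), ("T1", "resolved"),
    ("T2", "created"), ("T2", "assigned"), ("T2", "reopened"),
    ("T2", "assigned"), ("T2", "resolved"),
]

# Group state sequences per ticket, then count observed transitions.
sequences = {}
for ticket, state in events:
    sequences.setdefault(ticket, []).append(state)

transitions = Counter(
    (a, b) for seq in sequences.values() for a, b in zip(seq, seq[1:])
)

# Transitions the SOP never documented (eg assigned -> reopened loops)
# are the candidates to understand and optimise before automating.
for (src, dst), count in transitions.most_common():
    print(f"{src} -> {dst}: {count}")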

Are we measuring OSS at the wrong end?

I have a really simple philosophical question to pose of you today – Are we measuring our OSS at the wrong end?
It seems that a vast majority of our OSS measurement is at the input end of a process rather than at the output.

Just a few examples:

  • Financial predictions in a business case vs Return on Invested Capital (ROIC) of that project
  • Implementation costs vs lifetime cost of ownership
  • Revenues vs profitability (of products, services, workflows, activities, etc)
  • OSS costs vs enablement of service and/or monetisation of assets (ie operationalising assets such as network equipment via service activation)
  • OSS incidents raised (or even resolved) vs insurance on brand value (ie prevention of negative word-of-mouth caused by network / service outages)

In each of these cases, it’s much easier to measure the inputs. However, the output measurements portray a far more powerful message, don’t you think?

6 principles of OSS UI design

“When we talk about building capabilities by design, there are a set of four core capabilities that you should keep in mind:

  • Designed for self-sufficiency: Enable an environment where the business user is capable of acquiring, blending, presenting, and visualizing their data discoveries. IT needs to move away from being command and control to being an information broker in a new kind of business-IT partnership that removes barriers, so that users have more options, more empowerment, and greater autonomy.
  • Designed for collaboration: Have tools and platforms that allow people to share and work together on different ideas for review and contribution. This further closes that business-IT gap, establishes transparency, and fosters a collective learning culture.
  • Designed for visualization: Data visualizations have been elevated to a whole new form of communication that leverages cognitive hardwiring, enriches visual discovery, and helps tell a story about data to move from understanding to insight.
  • Designed for mobility: It is not enough to be just able to consume information on mobile devices, instead users must be able to work and play with data “on the go” and make discovery a portable, personalized experience.”

Lindy Ryan in the book, “The Visual Imperative: Creating a Visual Culture of Data Discovery.”

When it comes to OSS specifically, I have two additional design principles:

  • Designed for Search – there is so much data in our OSS / BSS suites; some linked, some not; some normalised, some not; some cleansed, some not. This design principle allows abstraction from all those data challenges, allowing operators to make pseudo-natural-language requests for information. Note that this could be considered an overlap between points 1 and 3 in the prior list
  • Designed for user journeys – in an omni-channel world, the entry point and traversal of any OSS workflow could cross multiple channels (eg online, retail store, IVR, app, etc). This makes pre-defined workflows almost impossible to design / predict. Instead, an OSS / BSS suite must be able to handle complete flexibility of user journeys between state / event transitions, as sketched below
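To make that last principle concrete, here’s a minimal Python sketch of a journey modelled as state/event transitions rather than a fixed flow. The state, event and channel names are illustrative only, not from any product:

```python
from dataclasses import dataclass, field

@dataclass
class Journey:
    customer_id: str
    state: str = "started"
    history: list = field(default_factory=list)

    # Allowed state transitions – not a prescribed sequence of steps.
    TRANSITIONS = {
        ("started", "order_submitted"): "in_progress",
        ("in_progress", "order_amended"): "in_progress",
        ("in_progress", "order_completed"): "done",
        ("in_progress", "order_cancelled"): "cancelled",
    }

    def apply(self, event: str, channel: str):
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"{event} not valid from state {self.state}")
        self.history.append((self.state, event, channel))
        self.state = nxt

j = Journey("CUST-42")
j.apply("order_submitted", channel="web")   # starts online...
j.apply("order_amended", channel="retail")  # ...amended in store
j.apply("order_completed", channel="ivr")   # ...confirmed via IVR
print(j.state, j.history)
```

Because the journey only cares about valid transitions, the same order can weave through any combination of channels without a designer having pre-drawn that path.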

Building an OSS piggybank with scoreboard pressure

“The gameplan tells what you want to happen, but the scoreboard tells what is happening.”
John C Maxwell

Over the years, I’ve found it interesting that most of the organisations I’ve consulted to have significant hoops for a new OSS to jump through to get funded (the gameplan), but rarely spend much time on the results (the scoreboard)… apart from the burndown of capital during the implementation project.

From one perspective, that’s great for OSS implementers. With less accountability, we can move straight on to the next implementation and not have to justify whether our projects are worth the investment. It allows us to focus on justifying whether we’ve done a technically brilliant implementation instead.

However, from the other perspective, we’re short-changing ourselves if we’re not proving the value of our projects. We’re not building up the credits in the sponsor bank ahead of the inevitable withdrawals (ie when one of our OSS projects goes over time or budget, or functionality is reduced to bring it back within time/budget). It’s the lack of credits that makes sponsors skeptical of any OSS investment’s value and forces the aforementioned jumping through hoops.

One of our OSS’s primary functions is to collect and process data – to be the central nervous system for our organisations. We have the data to build the scoreboards. Perhaps we just don’t apply enough creativity to proving the enormous value of what our OSS are facilitating.

Do you ever consider whether you’re on the left or right side of this ledger / scoreboard?

If OSS is my hammer, am I only seeing nails?

OSS is a powerful multi-purpose tool, much like a hammer.

If OSS is my only tool, do I see all problems as nails that I have to drive home with my OSS?

The downside of this is that the solution then needs to be designed, built, integrated, tested, released, supported, upgraded and maintained, and its data curated. The Total Cost of Ownership (TCO) for a given problem extends far beyond the time-frame envisaged during most solutioning exercises.

To be honest, I’ve probably been guilty of using OSS to solve problems before seeking alternatives in the past.

What if our going-in position was that answers should be found elsewhere – outside OSS – and OSS simply becomes the all-powerful last resort? The sledgehammer rather than the ball-pein hammer.

With all this big data I keep hearing about, has anyone ever seen any stats relating to the real life-time costs of OSS customisations made by a service provider to its off-the-shelf OSS? If such data exists, I’d love to see what the cost-benefit break-even point might look like and what we could learn from it. I assume we’re contributing to our very own Whale Curve but have nothing to back that assumption up yet.

Big circle. Little circle. Crossing the red line

Data quality is the bane of many a telco. If the data quality is rubbish then the OSS tools effectively become rubbish too.

Feedback loops are one of the most underutilised tools in a data fix arsenal. However, few people realise that there are what I call big circle feedback loops as well as little circles.

The little circle uses feedback within the data alone – using data to compare and reconcile other data. That can produce good results, but it’s only part of the story. Many data challenges extend further than that if you’re seeking a resolution.
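As a toy example of the little circle (Python; the port records and field names are invented for illustration), a reconciliation pass compares inventory records against network-discovered data and reports the drift:

```python
# "Little circle" reconciliation: data compared against other data.
inventory = {
    "PORT-001": {"status": "in_service", "card": "16xGE"},
    "PORT-002": {"status": "spare", "card": "16xGE"},
}
discovered = {
    "PORT-001": {"status": "in_service", "card": "16xGE"},
    "PORT-002": {"status": "in_service", "card": "8x10GE"},  # drifted
}

for port, disc in discovered.items():
    inv = inventory.get(port)
    if inv is None:
        print(f"{port}: discovered in network but not in inventory")
        continue
    for key, value in disc.items():
        if inv.get(key) != value:
            print(f"{port}: {key} mismatch (inventory={inv.get(key)}, network={value})")
```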

The big circle is designing feedback loops that incorporate data quality into end-to-end processes, which includes the field-work part of the process.

Redline markups have been the traditional mechanism to get feedback from the field back into improving OSS data. For example, if designers issue a design pack out to field techs that prove to be incorrect, then techs return the design with redline markups to show what they’ve implemented in the field instead.

With mobile technology and the right software tools, field workers could directly update data. Unfortunately this model doesn’t seem to fit into practices that have been around for decades.

There remain great opportunities to improve the efficiency of big circle feedback loops. They probably need a new way of thinking, but still need to fit into the existing context of field workers.

Dirty tickets done dirt cheap

The only way to get rid of Dirty Tickets of Work (DToW) is to get rid of Tickets of Work (ToW)

DToW is terminology used in Telstra to indicate that incorrect information has been entered into the ToW or where the field tech hasn’t been able to complete the ToW as originally designed / planned. I’m not sure if only Telstra uses this terminology. I haven’t heard it used at any other service provider I’ve worked at.

A DToW is an important metric because it effectively means the job has just become more expensive due to quality issues. It probably means re-design effort, perhaps data audit / remediation, and an extra truck roll… at a minimum.

I love the concept and am proposing to extend it to other workflows like Dirty Service Orders, Dirty Trouble Tickets, Dirty API calls, Dirty Processing (fall-outs), etc.

Because of the quality / cost implications, many very clever people have spent a lot of effort wrestling with solutions to this problem. Technical solutions, process solutions, data solutions, user interface solutions. To my knowledge, the problem remains to be solved, not just at Telstra, but at every other Telco that uses a different name for the same metric.

Now we could take the traditional (eg Six Sigma) approach, which is improving the quality of all the ingredients of a ToW. Or, we could take the lightbulb perspective posed in the opening quote and ask how we can build a solution that doesn’t require ToWs or SOs or TTs, etc.

That might just start a revolution for OSS.

How do we get to zero field work? Ubiquitous and over-provisioned connectivity, virtualised networks and CPE (vCPE) and colour-palette solution simplicity are surely a starting point. Blanket wireless networks and a greater use of feedback loop thinking probably help too.

How do we get to no service orders? I’m thinking consumption-based billing here – not, as you might first assume, that I’m espousing free use. But perhaps free use is an option too, as there are plenty of other revenue models available to clever service providers.

How do we get to no trouble tickets?
Self-healing, highly resilient, elastic networks (and OSS). Also, robotic event processing and automated pattern-recognition / decision-support / root-cause. The perspective here is a “no moving parts” electronics analogy – where Solid State Drives (SSD) tend to be more reliable than spinning drives.

Hat tip to Roger Gibson again for seeding a couple of ideas here.

A quick OSS complexity checker

The following quick checklist will give you a feel for whether your OSS is too complex for general users:

  1. Who are the personas that interact with your OSS? (Give those personas names and attributes to bring them to life)
  2. What are they trying to achieve with your OSS? (What specific use cases do they fulfil?)
  3. How many hours a week do the personas dedicate to those tasks (ie full-time, part-time, occasional)?
  4. Compare that with how many hours per week it actually takes them to become (and stay) proficient

Don’t just estimate, collect actual user experiences / feelings.

Readers of this blog probably spend all our working hours on our OSS, but for many of our users, OSS are only a part-time means to an end. Many of the users of our OSS will also be in roles with high turnover (and therefore high training costs). As such, our user experience design has to assume a lower level of expertise than our peers have.

Now to extend the list above just a little further:

5. How do we use our understanding of item 2 above to monetise our OSS (either internally or externally)?

6. Is there a clear association between a customer’s investment (ie item 5) and the value it’s creating for them (ie a value multiplier)?

If the value equation (item 6) is too complex, your OSS will get lumped into the “cost-centre” bucket that is holding our industry back.

The colour palette analogy of OSS

Let’s say you act for a service provider and the diagram below represents the number of variations you could offer to customers – the number that are technically supported by your solution.
[Image: a palette of 13,824,000 colours]
That’s 13,824,000 colours.

By comparison, the following diagram contains just 20 colours:
[Image: a palette of 20 colours]

If I asked you what colours are in the upper diagram, would you say red, orange, yellow, green, blue, etc? Is it roughly the same response as to the lower diagram?

If you’re the customer, and know you want an “orange*” product, will you be able to easily identify between the many thousands of different orange hues available in the upper diagram? Would you be disenfranchised if you were only offered the two orange hues in the lower diagram instead of thousands? Or might you even be relieved to have a much easier decision to make?

The analogy here to OSS is that just because our solutions can support millions of variants, doesn’t mean we should. If our OSS try to offer millions of variants, it means we have to design, then build, then test, then post-sale support millions of variants.
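To put some numbers on that variant explosion, here’s a tiny Python sketch; the option dimensions and counts are entirely made up for the arithmetic:

```python
from math import prod

# Hypothetical option dimensions for a product offering.
options = {
    "speed_tiers": 12,
    "contract_terms": 6,
    "access_technologies": 5,
    "bundled_extras": 8,
    "cpe_models": 10,
    "service_levels": 4,
}

print(prod(options.values()))  # 115,200 variants to design/build/test/support

# Constrain each dimension to at most three supported choices...
constrained = {k: min(v, 3) for k, v in options.items()}
print(prod(constrained.values()))  # 729 - a palette you can actually test
```

Six modest-looking dimensions multiply out to six figures of variants; trimming each dimension collapses the combinatorics to something testable.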

However, in reality, we can’t provide 100% coverage across so many variants – we aren’t able to sufficiently design, then build, then test, then post-sale support every one of the millions of variants. We end up overlooking some, accepting risk on others, or estimating a test spread that bypasses the rest. We’ve effectively opened the door to fall-outs.

And it’s fall-outs that tend to create larger customer dissatisfaction metrics than limited colour palettes.

Just curious – if you’ve delivered OSS into large service providers, have you ever seen evidence of palette analysis (ie variant reduction analysis) across domains (ie products, marketing, networks, digital, IT, field-work, etc)?

Alternatively, have you ever pushed back on decisions made upstream to say you’ll only support a smaller sub-set of options? This doesn’t seem to happen very often.

* When I’m talking about colours, I’m using the term figuratively, not necessarily the hues on a particular handset being sold through a service provider.

One of the biggest insights we had…

“One of the biggest insights we had was that we decided not to try to manage your music library on the iPod, but to manage it in iTunes. Other companies tried to do everything on the device itself and made it so complicated that it was useless.”
Steve Jobs

How does this insight apply to OSS? Can this “off device” perspective help us in designing better OSS?

Let’s face it – many OSS are bordering on useless due to the complexity that’s built into the user experience. So what complexity can we take off the “device”? Let’s start by saying “the device” is the UI of our OSS (although noting the off-device perspective could be viewed much more broadly than that).

What are the complexities that we face when using an OSS?

  • The process of order entry / service design / service parameters / provisioning can be time-consuming and prone to errors
  • Searching / choosing / tracing resources, particularly on large networks, can result in very slow response times
  • Navigating through multiple layers of inventory in CLI or tabular forms can be challenging
  • Dealing with fixed processes that don’t accommodate the many weird and wonderful variants that we encounter
  • Dealing with workflows that cross multiple integration boundaries and slip through the cracks
  • Analysing flawed data, which generally produces flawed results
  • Identifying the proverbial needle in the haystack when something goes wrong
  • And many, many more

How can we take some of those complexities “off-device”?

  • Abstracting order and provisioning complexity through the use of catalogs and auto-populating as many values as possible (see the sketch after this list),
  • Using augmented decision support to assist operators through complex processes, choosing from layers of resources, finding root-causes to problems, etc,
  • Using event-based processes that traverse process states rather than fixed processes, particularly where omni-channel interactions are available to customers
  • Using inventory discovery (and automated build-up / tear-down in virtualised networks) and decision support to present simpler navigations and views of resources
  • Off-device data grooming / curation to make data analysis more intuitive on-device
  • etc
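As a concrete illustration of the first item, here’s a minimal sketch of catalog-driven auto-population. The catalog entry, parameter names and values are all hypothetical:

```python
# The catalog holds the defaults "off-device"; the operator keys in only
# the customer-specific deltas "on-device".
CATALOG = {
    "BIZ_FIBRE_100": {
        "bandwidth_mbps": 100,
        "cos_profile": "business-default",
        "cpe_model": "NTU-2000",
        "activation_workflow": "fibre-standard",
    },
}

def build_order(offer_id, **customer_values):
    """Merge catalog defaults with the few values an operator must enter."""
    order = dict(CATALOG[offer_id])  # defaults come from the catalog
    order.update(customer_values)    # only the deltas are keyed in
    return order

order = build_order("BIZ_FIBRE_100", site_id="SITE-1234", due_date="2018-07-01")
print(order)
```

The operator’s screen shrinks from dozens of parameters to two, which is exactly the iPod/iTunes division of labour.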

In effect, we’re describing the tasks of an “on-device” persona (typically day-to-day OSS operators that need greater efficiency) and “off-device” persona/s (these are typically OSS admins, configuration experts, integrators, data scientists, UI/UX experts, automation developers, etc who tune the OSS).

The augmented analytics journey

“Smart Data Discovery goes beyond data monitoring to help business users discover subtle and important factors and identify issues and patterns within the data so the organization can identify challenges and capitalize on opportunities. These tools allow business users to leverage sophisticated analytical techniques without the assistance of technical professionals or analysts. Users can perform advanced analytics in an easy-to-use, drag and drop interface without knowledge of statistical analysis or algorithms. Smart Data Discovery tools should enable gathering, preparation, integration and analysis of data and allow users to share findings and apply strategic, operational and tactical activities and will suggest relationships, identifies patterns, suggests visualization techniques and formats, highlights trends and patterns and helps to forecast and predict results for planning activities.

Augmented Data Preparation empowers business users with access to meaningful data to test theories and hypotheses without the assistance of data scientists or IT staff. It allows users access to crucial data and Information and allows them to connect to various data sources (personal, external, cloud, and IT provisioned). Users can mash-up and integrate data in a single, uniform, interactive view and leverage auto-suggested relationships, JOINs, type casts, hierarchies and clean, reduce and clarify data so that it is easier to use and interpret, using integrated statistical algorithms like binning, clustering and regression for noise reduction and identification of trends and patterns. The ideal solution should balance agility with data governance to provide data quality and clear watermarks to identify the source of data.

Augmented Analytics automates data insight by utilizing machine learning and natural language to automate data preparation and enable data sharing. This advanced use, manipulation and presentation of data simplifies data to present clear results and provides access to sophisticated tools so business users can make day-to-day decisions with confidence. Users can go beyond opinion and bias to get real insight and act on data quickly and accurately.”
The definitions above come from a post by Kartik Patel entitled, “What is Augmented Analytics and Why Does it Matter?”

Over the years I’ve loved playing with data and learnt so much from it – about networks, about services, about opportunities, about failures, about gaps, etc. However, modern statistical analysis techniques fall into one of the categories described in “You have to love being incompetent”, where I’m yet to develop the skills to a comfortable level. Revisiting my fifth-year uni mathematics content is more nightmare than dream, so if augmented analytics tools can bypass the stats, I can’t wait to try them out.

The concepts described by Kartik above would take those data learning opportunities out of the data science labs and into the hands of the masses. Having worked with data science labs in the past, the value of the information has been mixed, all dependent upon which data scientist I dealt with. Some were great and had their fingers on the pulse of what data could resolve the questions asked. Others, not so much.

I’m excited about augmented analytics, but I’m even more excited about the layer that sits on top of it – the layer that manages, shares and socialises the aggregation of questions (and their answers). Data in itself doesn’t provide any great insight. It only responds when clever questions are asked of it.

OSS data has an immeasurable number of profound insights just waiting to be unlocked, so I can’t wait to see where this relatively nascent field of augmented analytics takes us.
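To give a small taste of what sits beneath those definitions, here’s a Python sketch using synthetic data and scikit-learn; an augmented analytics tool would hide this entire step behind a drag-and-drop UI:

```python
# Let an algorithm propose the pattern rather than hand-coding thresholds:
# cluster ports by two metrics to surface behavioural groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=1)
# Synthetic samples: (mean utilisation %, error rate) per port.
quiet = rng.normal([20, 0.1], [5, 0.05], size=(50, 2))
busy = rng.normal([85, 0.1], [5, 0.05], size=(50, 2))
faulty = rng.normal([50, 2.0], [10, 0.5], size=(10, 2))
samples = np.vstack([quiet, busy, faulty])

labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(samples)
for cluster in range(3):
    centre = samples[labels == cluster].mean(axis=0)
    print(f"cluster {cluster}: util ~{centre[0]:.0f}%, errors ~{centre[1]:.2f}")
```

The clustering surfaces the small, error-prone group without anyone specifying what “error-prone” means up-front – the kind of suggested pattern the quote describes.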

If you can’t repeat it, you can’t improve it

“The cloud model (ie hosted by a trusted partner) becomes attractive from the perspective of repeatability, from the efficiency of doing the same thing repeatedly at scale.”
From, “I want a business outcome, not a deployment challenge.”

OSS struggles when it comes to repeatability – often within an organisation, but almost always when comparing between organisations. That’s why there’s so much fragmentation, which in turn holds the industry back because so much duplicated effort and brain-power is spread across the multitude of vendors in the market.

I’ve worked on many OSS projects, but none have even closely resembled each other, even back in the days when I regularly helped the same vendors deliver to different clients. That works well for my desire to have constant mental stimulation, but doesn’t build a very efficient business model for the industry.

Closed loop architectures are the way of the future for OSS, but only if we can make our solutions repeatable, measurable / comparable and hence refinable (ie improvable). If we can’t, then we may as well forget about AI. After all, AI requires lots of comparable data.

I’ve worked with service providers that have prided themselves on building bespoke solutions for every customer. I’m all for making every customer feel unique and having their exact needs met, but this can still be accommodated through repeatable building blocks with custom tweaks around the edges. Then there are the providers that have so many variants that you might as well be designing / building / testing an OSS for completely bespoke solutions.

You could even look at it this way – if you can’t implement a repeatable process / solution, measure it, compare it and refine it, then you can’t create a customer offering that keeps improving.

Omnichannel will remain disjointed until…

Omnichannel is intended to be a strategy that provides customers with a seamless, consistent experience across all of their contact channels – channels that include online/digital, IVR, contact centre, mobile app, retail store, B2B portal, etc.

The challenge of delivering consistency across these platforms is that there is little cross-over between the organisations that deliver these tools. Each is a fragmented market in its own right and the only time interaction happens (in my experience at least) is on an as-needed basis for a given project.

Two keys to delivering seamless customer experience are the ability to identify unique customers and the ability to track their journeys through different channels. The problem is that some of these channels aren’t designed to uniquely identify customers, and those that can often aren’t consistent with the other channels in their linking-key strategies.

A related problem is that user journeys won’t follow a single step-by-step sequence through the channels. So rather than process flows, user journeys need to be tracked as state transitions through their various life-cycles.

OSS/BSS are ideally situated to manage linking keys across channels (if the channels can provide the data) as well as handling state-transition user journeys.
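A minimal sketch of that linking-key idea (Python; all the identifiers are invented) resolves each channel’s native identifier to one golden customer ID so a journey can be stitched together:

```python
# Each channel knows the customer by a different identifier; the OSS/BSS
# resolves them to one golden ID so the journey becomes one thread.
LINK_KEYS = {
    ("web", "session-9f3c"): "CUST-42",
    ("ivr", "+61400000000"): "CUST-42",
    ("retail", "loyalty-7781"): "CUST-42",
}

journey = []

def record_event(channel, channel_id, event):
    customer = LINK_KEYS.get((channel, channel_id), "UNKNOWN")
    journey.append((customer, channel, event))

record_event("web", "session-9f3c", "plan_browsed")
record_event("ivr", "+61400000000", "order_enquiry")
record_event("retail", "loyalty-7781", "order_signed")

# All three touches resolve to CUST-42 - one continuous journey rather
# than three disconnected channel fragments.
print(journey)
```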

Omnichannel represents a significant opportunity, in part because there are two layers of buyers for such technology. The first is the service provider that wants to provide their customers with a truly omnichannel experience. The second is to provide omnichannel infrastructure to the service providers’ customers – business customers who want to offer consistent omnichannel experiences to their own end-customers.

Who is going to be the first to connect the various channel products / integrators together?

Use cases for architectural smoke-tests

“I often leverage use-case design and touch-point mapping through the stack to ensure that all of the use-cases can be turned into user-journeys, process journeys and data journeys. This process can pick up the high-level flows, but more importantly, the high-level gaps in your theoretical stack.”

Yesterday’s blog discussed the use of use cases to test a new OSS architecture. TM Forum’s eTOM is the go-to model for process mapping for OSS / BSS. Their process maps define multi-level standards (in terms of granularity of process mapping) to promote a level of process repeatability across the industry. Their clickable model allows you to drill down through the layers of interest to you (note that this is available for members only though).

In terms of quick smoke-testing an OSS stack though, I tend to use a simpler list of use cases for an 80/20 coverage:

  • Service qualification (SQ)
  • Adding new customers
  • New customer orders (order handling)
  • Changes to orders (adds / moves / changes / deletes / suspends / resumes)
  • Logging an incident
  • Running a report
  • Creating a new product (for sale to customers)
  • Tracking network health (which may include tracking of faults, performance, traffic engineering, QoS analysis, etc)
  • Performing network intelligence (viewing inventory, capacity, tracing paths, sites, etc)
  • Performing service intelligence (viewing service health, utilised resources, SLA threshold analysis, etc)
  • Extracting configurations (eg network, device, product, customer or service configs)
  • Tracking customer interactions (and all internal / external events that may impact customer experience such as site visits, bills, etc)
  • Running reports (of all sorts)
  • Data imports
  • Data exports
  • Performing an enquiry (by a customer, for the purpose of sales, service health, parameters, etc)
  • Bill creation

There are many more that may be required depending on what your OSS stack needs to deliver, but hopefully this is a starting point to help your own smoke tests.
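If you wanted to make the smoke test executable, one possible skeleton (Python; the component and use-case names are placeholders, not from eTOM or any product) maps each use case to the stack components its journey must touch and flags the gaps:

```python
# Components present in the proposed (theoretical) stack.
STACK = {"CRM", "order_mgmt", "inventory", "assurance", "billing"}

# Touch-points each use case's journey must traverse.
USE_CASE_TOUCHPOINTS = {
    "service_qualification": ["CRM", "inventory"],
    "new_customer_order": ["CRM", "order_mgmt", "inventory", "billing"],
    "log_incident": ["assurance", "CRM"],
    "create_product": ["product_catalog", "order_mgmt"],  # gap: no catalog
}

for use_case, touchpoints in USE_CASE_TOUCHPOINTS.items():
    missing = [t for t in touchpoints if t not in STACK]
    status = "OK" if not missing else f"GAP - missing {missing}"
    print(f"{use_case}: {status}")
```

Even a table this crude forces the architecture conversation onto journeys rather than boxes, which is the point of the smoke test.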

Use-case driven OSS architecture

When it comes to designing a multi-vendor (sometimes also referred to as best-of-breed) OSS architecture stack, there is never a truer saying than, “the devil is in the detail.”

Oftentimes, it’s just not feasible to design every interface / integration / data-flow traversing a theoretical OSS stack (eg pre-contract award, whilst building a business case, etc). That level of detail is developed during detailed design or perhaps tech-spikes in the Agile world.

In this interim state, I often leverage use-case design and touch-point mapping through the stack to ensure that all of the use-cases can be turned into user-journeys, process journeys and data journeys. This process can pick up the high-level flows, but more importantly, the high-level gaps in your theoretical stack.