I was speaking with a friend today about an old OSS assurance product that is undergoing a refresh and investment after years of stagnation.
He indicated that it was to come with about 20 out-of-the-box adaptors for data collection. I found that interesting because it was replacing a product that probably had in excess of 100 adaptors. Seemed like a major backward step… until my friend pointed out the types of adaptor in this new product iteration – Splunk, AWS, etc.
Our OSS no longer collect data directly from the network. We have web-scaled processes sucking everything out of the network / EMS, aggregating it and transforming / indexing / storing it. Then, like any other IT application, our OSS just collect what we need from a data set that has already been consolidated and homogenised.
I don’t know why I’d never thought about it like this before (ie building an architecture that doesn’t even consider connecting to the multitude of network / device / EMS types). In doing so, we lose the direct connection to the source, but we also reduce our integration tax load (directly to the OSS at least).
This is the third part of a series describing a really exciting analysis I’ve just finished.
Part 1 described how we can turn simple log files into a Sankey diagram that shows real-life process flows (not just a theoretical diagram drawn by BAs and SMEs), like below:
Part 2 described how the logs are broken down into a design tree and how we can assign weightings to each branch based on the data stored in the logs, as below:
I’ve already had lots of great feedback in relation to the Part 1 blog, especially from people who’ve had challenges capturing as-is processes. The feedback has been greatly appreciated, so I’m looking forward to helping them draw up their flow-charts on the way to optimising their process flows.
But that’s just the starting point. Today’s post is where things get really exciting (for me at least). Today we build on Part 2, not just recording weightings, but using them to assist future decisions.
We can use the decision tree to “predict forward” and help operators / algorithms make optimal decisions whilst working towards process completion. We can use a feedback loop to steer an operator (or application) down the most optimal branches of the tree (and/or avoid the fall-out variants).
This allows us to create a closed-loop, self-optimising, Decision Support System (DSS), as follows:
Using log data alone, we can perform decision optimisation based on “likelihood of success” or “time to complete” as per the weightings table. If supplemented with additional data, the weightings table could also allow decisions to be optimised by “cost to complete” or many other factors.
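To make that concrete, here’s a minimal sketch of how a DSS might pick the next branch from a weightings table. The table layout, field names and numbers below are purely illustrative assumptions, not a prescribed schema:

```python
# Minimal DSS sketch: recommend the next decision point (DP) using a
# weightings table derived from process logs. All values are illustrative.

# (current_dp, next_dp) -> stats accumulated from historical logs
weightings = {
    ("DP1", "DP2"): {"count": 5200, "successes": 5100, "avg_secs": 60},
    ("DP1", "DP3"): {"count": 800,  "successes": 640,  "avg_secs": 45},
    ("DP2", "DP3"): {"count": 4900, "successes": 4700, "avg_secs": 120},
}

def recommend_next(current_dp, optimise_by="success_rate"):
    """Return the next DP that maximises the chosen criterion."""
    candidates = []
    for (src, dst), s in weightings.items():
        if src != current_dp:
            continue
        if optimise_by == "success_rate":
            score = s["successes"] / s["count"]
        else:  # "time": lower is better, so negate for a max() comparison
            score = -s["avg_secs"]
        candidates.append((score, dst))
    return max(candidates)[1] if candidates else None

print(recommend_next("DP1"))                      # DP2 - highest success rate
print(recommend_next("DP1", optimise_by="time"))  # DP3 - fastest on average
```

An operator-facing DSS would sit on top of logic like this, nudging users (or automations) towards the highest-scoring branch at each decision point.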
The model has the potential to be used in “real-time” mode, using the constant stream of process logs to continually refine and adapt. For example:
If the long-term average of a process path is 1 minute, but there’s currently a problem with it and that path is failing, then another path (one that is otherwise slightly less optimised over the long-term) could be used until the first path is repaired
An operator happens to choose a new, more optimal path than has ever been identified previously (the delta function in the diagram). It then sets a new benchmark and informs the new approach via the DSS (Darwinian selection)
If you’re wondering how the DSS could be implemented, I can envisage a few ways:
Using existing RPA (Robotic Process Automation) tools [which are particularly relevant if the workflow box in the diagram above crosses multiple different applications (not just a single monolithic OSS/BSS)]
Providing a feedback path into the functionality of the OSS/BSS and its GUI
Via notifications (eg email, Slack, etc) to operators
Via a simple, more manual process like flow diagrams, work instructions, scorecards or similar
This visualisation is exciting because it shows how your processes are actually flowing (or not), as opposed to the theoretical process diagrams that are laboriously created by BAs in conjunction with SMEs. It also shows which branches in the flow are actually being utilised and where inefficiencies are appearing (and are therefore optimisation targets).
Some people have wondered how simple activity logs can be used to show the Sankey diagrams. Hopefully the diagram below helps to describe this. You scan the log data looking for variants / patterns of flows and overlay those onto a map of decision states (DPs). In the diagram above, there are only 3 DPs, but 303 different variants (sounds implausible, but there are many variants that do multiple loops through the 3 states and are therefore considered to be different variants).
The numbers / weightings you see on the Sankey diagram are the number* of instances (of a single flow type) that have transitioned between two DPs / states.
* Note that this is not the same as the count value that appears in the Weightings table. We’ll get to that in tomorrow’s post when we describe how to use the weightings data for decision support.
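For anyone wanting to try this on their own logs, here’s a minimal sketch of deriving the variants and the DP-to-DP transition counts. The column names (flow_id, activity, timestamp) and the ISO-sortable timestamp format are assumptions about the log layout, not a fixed schema:

```python
# Sketch: derive flow variants and DP-to-DP transition counts from a raw
# activity log. Assumes ISO-style timestamps that sort correctly as text.
import csv
from collections import Counter, defaultdict

flows = defaultdict(list)  # flow_id -> ordered list of activities (DPs)
with open("activity_log.csv", newline="") as f:
    for row in sorted(csv.DictReader(f), key=lambda r: r["timestamp"]):
        flows[row["flow_id"]].append(row["activity"])

# A variant is a distinct end-to-end path, including any loops
variants = Counter(tuple(path) for path in flows.values())

# Transition counts become the link widths on the Sankey diagram
transitions = Counter()
for path in flows.values():
    transitions.update(zip(path, path[1:]))

print(len(variants), "distinct variants")
for (src, dst), n in transitions.most_common(5):
    print(f"{src} -> {dst}: {n}")
```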
In your travels, I don’t suppose you’ve ever come across anyone having challenges capturing and/or optimising their as-is OSS/BSS process flows? Once or twice?? 🙂
Well I’ve just completed an analysis that I’m really excited about. It’s something I’ve been thinking about for some time, but only finished proving out over the weekend. I thought it might have relevance to you too. It quickly helps to visualise as-is processes and identify areas to optimise.
The method takes activity logs (eg from OSS, ITIL, WFM, SAP or similar) and turns them into a process diagram (a Sankey diagram) like below with real instance volumes. Much better than a theoretical process map designed by BAs and SMEs, don’t you think?? And much faster and more accurate too!!
A theoretical process map might just show a sequence of 3 steps, but the diagram above has used actual logs to show what’s really occurring. It highlights governance issues (skipped steps) and inefficiencies (ie the various loops) in the process too. Perfect for process improvement.
But more excitingly, it proves a path towards real-time “predict-forward” decision support without having to get into the complexities of AI. There’s more on that in the full analysis!
If this is of interest to you, let me know and I’ll be happy to walk you through the full analysis. Or if you want to know how your real as-is processes perform, I’d be happy to help turn your logs into visuals like the one above.
PS1. You might think you need a lot of fields to prepare the diagrams above. The good news is the only mandatory fields would be something like the following (a quick validation sketch appears after the list):
Flow type – eg Order type, project type or similar (only required if the extract contains multiple flow types mixed together. The diagram above represents just one flow type)
Flow instance identifier – eg Order number, project number or similar (the diagram above was based on data that had around 600,000 flow instances)
Activity identifier – eg Activity name (as per the 3 states in the diagram above), recorded against each flow instance. Note that these will ideally come from an enumerated list (ie a finite pick-list)
Timestamps – Start/end timestamp on each activity instance
If the log contains other details, such as the name of the operator who completed each activity, that can add richness, but it’s not mandatory.
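Here’s the validation sketch mentioned above, assuming a CSV extract and placeholder column names (map them to whatever your system actually exports):

```python
# Sanity-check an extract against the minimal field set described above.
import pandas as pd

MANDATORY = ["flow_type", "flow_id", "activity", "start_ts", "end_ts"]

df = pd.read_csv("extract.csv")
missing = [c for c in MANDATORY if c not in df.columns]
if missing:
    raise ValueError(f"extract is missing mandatory fields: {missing}")

df["start_ts"] = pd.to_datetime(df["start_ts"])
df["end_ts"] = pd.to_datetime(df["end_ts"])

# One diagram per flow type, so report on each type separately
for flow_type, group in df.groupby("flow_type"):
    print(f"{flow_type}: {group['flow_id'].nunique()} flow instances")
```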
PS2. The main objective of the analysis was to test concepts raised in the following blog posts:
For most end-customers, the OSS/BSS we create are merely back-office systems that they never see. The closest they get are the customer portals that they interact with to drive workflows through our OSS/BSS. And yet, our OSS/BSS still have a big part to play in customer experience. In times where customers can readily substitute one carrier for another, customer service has become a key differentiator for many carriers. It therefore also becomes a priority for our OSS/BSS.
Customers now have multiple engagement options (aka omni-channel) and form factors (eg in-person, phone, tablet, mobile phone, kiosk, etc). The only options we used to have were a call to a contact centre / IVR (Interactive Voice Response), a visit to a store, or a visit from an account manager for business customers. Now there are websites, applications, text messages, multiple social media channels, chatbots, portals, blogs, etc. They all represent different challenges as far as offering a seamless customer experience across all channels.
I’ve just noticed TM Forum’s “Omni-channel Guidebook” (GB994), which does a great job at describing the challenges and opportunities. For example, it explains the importance of identity. End-users can only get a truly seamless experience if they can be uniquely identified across all channels. Unfortunately, some channels (eg IVR, website) don’t force end-users to self-identify.
The Ovum report, “Optimizing Customer Service in a Multi Channel World, March 2011” indicates that around 74% of customers use 3 channels or more for engaging customer service. In most cases, it’s our OSS/BSS that provide the data that supports a seamless experience across channels. But what if we have no unique key? What if the unique key we have (eg phone number) doesn’t uniquely identify the different people who use that contact point (eg different family members who use the same fixed-line phone)?
We could use personality profiling across these channels, but we’ve already seen how that has worked out for Cambridge Analytica and Facebook in terms of customer privacy and security.
I’d love to hear how you’ve done cross-channel identity management in your OSS/BSS. Have you solved the omni-channel identity conundrum?
PS. One thing I find really interesting. The whole omni-channel thing is about giving customers (or potential customers) the ability to connect via the channel they’re most comfortable with. But there’s one glaring exception. When an end-user decides a phone conversation is the only way to resolve their issue (often after already trying the self-service options), they call the contact centre number. But many big telcos insist on trying to deflect as many calls as possible to self-service options (they call it CVR – call volume reduction), because contact centre staff are much more expensive per transaction than the automated channels. That seems to be an anti-customer-experience technique if you ask me. What are your thoughts?
The last four posts have discussed how our OSS/BSS need to cope with different modes of working to perform effectively. We started off with the thread of “group flow,” where multiple different users of our tools can work cohesively. Then we talked about how flow requires a lack of interruptions, yet many of the roles using our OSS actually need constant availability (ie to be constantly interrupted).
From a user experience (UI/UX) perspective, we need an awareness of the state the operator/s needs to be in to perform each step of an end-to-end process, be it:
Deep think or flow mode – where the operator needs uninterrupted time to resolve a complex and/or complicated activity (eg a design activity)
Constant availability mode – where the operator needs to quickly respond to the needs of others and therefore needs a stream of notifications / interruptions (eg network fault resolutions)
Group flow mode – where a group of operators need to collaborate effectively and cohesively to resolve a complex and/or complicated activity (eg resolve a cross-domain fault situation)
This is a strong argument for every OSS/BSS supplier to have UI/UX experts on their team. Yet most leave their UI/UX with their coders. They tend to take the perspective that if the function can be performed, it’s time to move on to building the next function. That was the same argument used by all MP3 player suppliers before the iPod came along with its beautiful form and function and dominated the market.
Interestingly, modern architectural principles potentially make UI/UX design more challenging. With old, monolithic OSS/BSS, you at least had more control over end-to-end workflows (I’m not suggesting we should go back to the monoliths BTW). These days, you need to accommodate the unique nuances / inconsistencies of third-party modules like APIs / microservices.
As Evan Linwood incisively identified, “I guess we live in the age of cloud based API providers, theoretically enabling loads of pre-canned integration patterns, but these may not be ideal for a large service provider… Definitely if the underlying availability isn’t there, but could also occur through things like schema mismanagement across multiple providers? (Which might actually be an argument for better management / B/OSS, rather than against the use of microservices!)”
Am I convincing any of you to hire more UI/UX resources? Or convincing you to register for UI/UX as your next training course instead of learning a ninth programming language?
Put simply, we need your assistance to take our OSS from this…
We contrasted this with the mechanisms used in most OSS that actually prevent flow-state from occurring. Today I’m going to dive into the work that goes into creating a new design (to activate a customer), and how our current OSS designs / processes inhibit flow.
“Being switched on at all times and expected to pick things up immediately makes us miserable,” says [Cal] Newport. “It mismatches with the social circuits in our brain. It makes us feel bad that someone is waiting for us to reply to them. It makes us anxious.”
Because it is so easy to dash off a quick reply on email, Slack or other messaging apps, we feel guilty for not doing so, and there is an expectation that we will do it. This, says Newport, has greatly increased the number of things on people’s plates. “The average knowledge worker is responsible for more things than they were before email. This makes us frenetic. We should be thinking about how to remove the things on their plate, not giving people more to do…
Going cold turkey on email or Slack will only work if there is an alternative in place. Newport suggests, as many others now do, that physical communication is more effective. But the important thing is to encourage a culture where clear communication is the norm.
Newport is advocating for a more linear approach to workflows. People need to completely stop one task in order to fully transition their thought processes to the next one. However, this is hard when we are constantly seeing emails or being reminded about previous tasks. Some of our thoughts are still on the previous work – an effect called attention residue.”
That resonates completely with me. So let’s consider that and look into the collaboration process of a stylised order activation:
Customer places an order via an order-entry portal
SQ (Service Qualification) and credit checks are performed (automated processes)
Order is broken into work order activities (automated process)
Designer1 picks up design work order activity from activity list and commences outside plant design (cables, pits, pipes). Her design pack includes:
Updating AutoCAD / GIS drawings to show outside plant (new cable in existing pit/pipe, plus lead-in cable)
Updating OSS to show splicing / patching changes
Creating a project BoQ (bill of quantities) in a spreadsheet
Designer2 picks up next work order activity from activity list and commences active network design. His design pack includes:
Allocation of CPE (Customer Premises Equipment) from warehouse
Allocation of IP address from ranges available in IPAM (IP address manager)
Configuration plan for CPE and network edge devices
FieldWorkTeamLeader reviews inside plant and outside plant designs and allocates to FieldWorker1. FieldWorker1 is also issued with a printed design pack and the required materials
FieldWorker1 commences build activities and finds out there’s a problem with the design. It indicates splicing the customer lead-in to fibres 1/2, but they appear to already be in use
So, what does FieldWorker1 do next?
The activity list / queue process has worked reasonably well up until this step in the process. It allowed each person to work autonomously, stay in deep focus and in the sequence of their own choosing. But now, FieldWorker1 needs her issue resolved within only a few minutes or she must move on to her next job (and next site). That would mean not only an additional truck-roll, but also an annoyed customer who now has to re-schedule and take an additional day off work to open their house for the installer.
FieldWorker1 now needs to collaborate quickly with Designer1, Designer2 and FieldWorkTeamLeader. But most OSS simply don’t provide the tools to do so. The go-forward decision in our example draws upon information from multiple sources (ie AutoCAD drawing, GIS, spreadsheet, design document, IPAM and the OSS). Not only that, but the print-outs given to the field worker don’t reflect real-time changes in data. Nor do they give any up-stream context that might help her resolve this issue.
So FieldWorker1 contacts the designers directly (and separately) via phone.
Designer1 and Designer2 have to leave deep-think mode to respond urgently to the notification from FieldWorker1 and then take minutes to pull up the data. Designer1 and Designer2 have to contact each other about conflicting data sets. Too much time passes. FieldWorker1 moves to her next job.
Our challenge as OSS designers is to create a collaborative workspace that has real-time access to all data (not just the local context as the issue probably lies in data that’s upstream of what’s shown in the design pack). Our workspace must also provide all participants with the tools to engage visually and aurally – to choreograph head-office and on-site resources into “group flow” to resolve the issue.
Even if such tools existed today, the question I still have is how we ensure our designers aren’t interrupted from their all-important deep-think mode. How do we prevent them from having to drop everything multiple times a day/hour? Perhaps the answer is in an organisational structure – where all designers have to cycle through the Design Support function (eg 1 day in a fortnight), to take support calls from field workers and help them resolve issues. It will give designers a greater appreciation for problems occurring in the field and also help them avoid responding to emails, slack messages, etc when in design mode.
Yesterday’s post talked about the difference between “flow state” and “office state” in relation to OSS delivery. It referenced a book I’m currently reading called Stealing Fire.
The post mainly focused on how the interruptions of “office state” actually inhibit our productivity, learning and ability to think laterally on our OSS. But that got me thinking that perhaps flow doesn’t just relate to OSS project delivery. It also relates to post-implementation use of the OSS we implement.
If we think about the various personas who use an OSS (such as NOC operators, designers, order entry operators, capacity planners, etc), do our user interfaces and workflows assist or inhibit them to get into the zone? More importantly, if those personas need to work collaboratively with others, do we facilitate them getting into “group flow?”
Stealing Fire suggests that it costs around $500k to train each Navy SEAL and around $4.25m to train each elite SEAL (DEVGRU). It also describes how this level of training allows DEVGRU units to quickly get into group flow and function together almost as if choreographed, even in high-pressure / high-noise environments.
Contrast this with collaborative activities within our OSS. We use tickets, emails, Slack notifications, work order activity lists, etc to collaborate. It seems to me that these are the precise instruments that prevent us from getting into flow individually. I assume it’s the same collectively. I can’t think back to any end-to-end OSS workflows that seem highly choreographed or seamlessly effective.
Think about it. If you experience significant rates of process fall-out / error, then it would seem to indicate an OSS that’s not conducive to group flow. Ditto for lengthy O2A (order to activate) or T2R (trouble to resolve) times. Ditto for bringing new products to market.
I’d love to hear your thoughts. Has any OSS environment you’ve worked in facilitated group flow? If so, was it the people and/or the tools? Alternatively, have the OSS you’ve used inhibited group flow?
PS. Stealing Fire details how organisations such as Google and DARPA are investing heavily in flow research. They can obviously see the pay-off from those investments (or potential pay-offs). We seem to barely even invest in UI/UX experts to assist with the designs of our OSS products and workflows.
Operators have developed many unique understandings of what impacts the health of their networks.
For example, mobile operators know that they have faster maintenance cycles in coastal areas than they do in warm, dry areas (yes, due to rust). Other operators have a high percentage of faults that are power-related. Others are impacted by failures caused by lightning strikes.
Near-real-time weather pattern and lightning strike data is now readily accessible, potentially for use by our OSS.
I was just speaking with one such operator last week who said, “We looked at it [using lightning strike data] but we ended up jumping at shadows most of the time. We actually started… looking for DSLAM alarms which will show us clumps of power failures and strikes, then we investigate those clumps and determine a cause. Sometimes we send out a single truck to collect artifacts, photos of lightning damage to cables, etc.”
That discussion got me wondering about what other lateral approaches are used by operators to assure their networks (a toy correlation sketch follows the list below). For example:
What external data sources do you use (eg meteorology, lightning strike, power feed data from power suppliers or sensors, sensor networks, etc)?
Do you use it in proactive or reactive mode (eg to diagnose a fault or to use engineering techniques to prevent faults)?
Have you built algorithms (eg root-cause, predictive maintenance, etc) to utilise your external data sources?
If so, do those algorithms help establish automated closed-loop detect and response cycles?
By measuring and managing, has it created quantifiable improvements in your network health?
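To illustrate the first question, here’s a toy sketch of correlating an alarm stream with an external lightning-strike feed by time and rough location. The data structures are entirely hypothetical; real feeds (eg from a weather bureau) have their own formats and need proper geodesic maths:

```python
# Toy correlation of alarms with lightning strikes. Illustrative only.
from datetime import datetime, timedelta

def near(a, b, km=10.0):
    """Very rough proximity test (flat-earth, ~111 km per degree)."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 * 111 <= km

def correlate(alarms, strikes, window=timedelta(minutes=5)):
    """Yield (alarm, strike) pairs that line up in time and space."""
    for alarm in alarms:
        for strike in strikes:
            if (abs(alarm["time"] - strike["time"]) <= window
                    and near(alarm["latlon"], strike["latlon"])):
                yield alarm, strike

alarms = [{"id": "DSLAM-17 PWR", "time": datetime(2019, 1, 5, 14, 2),
           "latlon": (-33.86, 151.21)}]
strikes = [{"time": datetime(2019, 1, 5, 14, 1), "latlon": (-33.87, 151.20)}]
print(list(correlate(alarms, strikes)))  # clumps worth sending a truck to
```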
I’d love to hear about your clever and unique insight-generation ideas. Or even the ideas you’ve proposed that haven’t been built yet.
TM Forum’s Open Digital Architecture (ODA) White Paper begins with the following statement:
Telecoms is at a crucial turning point. The last decade has dealt a series of punishing blows to an industry that had previously enjoyed enviable growth for more than 20 years. Services that once returned high margins are being reduced to commodities in the digital world, and our insatiable appetite for data demands continuous investment in infrastructure. On the other hand, communications service providers (CSPs) and their partners are in an excellent position to guide and capitalize on the next wave of digital revolution.
Clearly, a reduction in profitability leads to a reduction in cash available for projects – including OSS transformation projects. And reduced profitability almost inevitably leads executives to start thinking about head-count reduction too.
As Luke Clifton of Macquarie Telecom observed here, “Telstra is reportedly planning to shed 1,200 people from its enterprise business with many of these people directly involved in managing small-to-medium sized business customers. More than 10,000 customers in this segment will no longer have access to dedicated Account Managers, instead relegated to being managed by Telstra’s “Digital Hub”… Telstra, like the big banks once did, is seemingly betting that customers won’t leave them nor will they notice the downgrade in their service. It will be interesting to see how 10,000 additional organisations will be managed through a Digital Hub.
Simply put, you cannot cut quality people without cutting the quality of service. Those two ideals are intrinsically linked…”
As a fairly broad trend across the telco sector, projects and jobs are being cut, whilst technology change is forcing transformation. And as suggested in Luke’s “Digital Hub” quote above, it all leads to increased expectations on our OSS/BSS.
Pressure is coming at our OSS from all angles, and with no signs of abating.
To quote Queen, “Pressure. Pushing down on me. Pressing down on you.”
So it seems to me there are only three broad options when planning our OSS roadmaps:
We learn to cope with increased pressure (although this doesn’t seem like a viable long-term option)
We reduce the size (eg functionality, transaction volumes, etc) of our OSS footprint [But have you noticed that all of our roadmaps seem expansionary in terms of functionality, volumes, technologies incorporated, etc??]
We look beyond the realms of traditional OSS/BSS functionality (eg just servicing operations) and into areas of opportunity
TM Forum’s ODA White Paper goes on to state, “The growth opportunities attached to new 5G ecosystems are estimated to be worth over $580 billion in the next decade. Servicing these opportunities requires transformation of the entire industry. Early digital transformation efforts focused on improving customer experience and embracing new technologies such as virtualization, with promises of wide-scale automation and greater agility. It has become clear that these ‘projects’ alone are not enough. CSPs’ business and operating models, choice of technology partners, mindset, decision-making and time to market must also change. True digital business transformation is not an easy or quick path, but it is essential to surviving and thriving in the future digital market.”
BTW. I’m not suggesting 5G is the panacea or single opportunity here. My use of the quote above is drawing more heavily on the opportunities relating to digital transformation. Not of the telcos themselves, but digital transformation of their customers. If data is the oil of the 21st century, then our OSS/BSS and telco assets have the potential to be the miners and pipelines of that oil.
If / when our OSS go from being cost centres to revenue generators (directly attributable to revenue, not the indirect attribution by most OSS today), then we might feel some of the pressure easing off us.
Fault / Alarm management tools have lots of strings to their functionality bows to help operators focus in on the target/s that matter most. ITU-T’s recommendation X.733 provided an early framework and common model for classification of alarms. This allowed OSS vendors to build a standardised set of filters (eg severity, probable cause, etc). ITU-T’s recommendation M.3703 then provided a set of guiding use cases for managing alarms. These recommendations have been around since the 1990s (or possibly even before).
Despite these “noise reduction” tools being readily available, they’re still not “compressing” event lists enough in all cases.
I imagine, like me, you’ve heard many customer stories where so many new events are appearing in an event list each day that the NOC (network operations centre) just can’t keep up. Dozens of new events are appearing on the screen, then scrolling off the bottom of it before an operator has even had a chance to stop and think about a resolution.
So if humans can’t keep up with the volume, we need to empower machines with their faster processing capabilities to do the job. But to do so, we first have to take a step away from the noise and help build a systematic root-cause analysis (RCA) pipeline.
I call it a pipeline because there are generally a lot of RCA rules that are required. There are a few general RCA rules that can be applied “out of the box” on a generic network, but most need to be specifically crafted to each network.
So here’s a step-by-step guide to build your RCA pipeline (a small illustrative sketch follows the steps):
Scope – Identify your initial target / scope. For example, what are you seeking to prioritise:
Event volume reduction to give the NOC breathing space to function better
Identifying the “most important” events (noting you must first define what “most important” means)
Minimising SLA breaches
Gather Data – Gather incident and ticket data. Your OSS is probably already doing this, but you may need to pull data together from various sources (eg alarms/events, performance, tickets, external sources like weather data, etc)
Pattern Identification – Pattern identification and categorisation of incidents. This generally requires a pattern identification tool, ideally supplied by your alarm management and/or analytics supplier
Prioritise – Using a long-tail graph like below, prioritise pattern groups by the following (and in line with item #1 above):
Number of instances of the pattern / group (ie frequency)
Priority of instances (ie urgency of resolution)
Number of linked incidents (ie volume)
Other techniques, such as a cumulative / blended metric
Gather Resolution Knowledge – Understand current NOC approaches to fault-identification and triage, as well as what’s important to them (noting that they may have biases such as managing to vanity metrics)
Note any Existing Resolutions – Identify and categorise any existing resolutions and/or RCA rules (if data supports this)
Short-list Remaining Patterns – Overlay resolution patterns on the long-tail (to show which patterns are already solved for). Then identify the remaining priority patterns on the long-tail that don’t have a resolution yet.
Codify Patterns – Progressively set out to identify possible root-cause by analysing cause-effect such as:
Other (as helped to be defined by NOC operators)
Knowledge base – Create a knowledge base that itemises root-causes and supporting information
Build Algorithm / Automation – Create an algorithm for identifying root-cause and related alarms. Identify level of complexity, risks, unknowns, likelihood, control/monitoring plan for post-install, etc. Then build pilot algorithm (and possibly roll-back technique??). This might not just be an RCA rule, but could also include other automations. Automations could include creating a common problem and linking all events (not just root cause event but all related events), escalations, triggering automated workflows, etc
Test pilot algorithm (with analytics??)
Introduce algorithm into production use – But continue to monitor what’s being suppressed, to confirm the rule is behaving as intended
Repeat – Then repeat from steps 7 to 12 to codify the next most important pattern
Leading metrics – Identify leading metrics and/or preventative measures that could precede the RCA rule. Establish closed-loop automated resolution
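And here’s the illustrative sketch promised above, covering the Pattern Identification / Prioritise steps plus a toy example of the Build Algorithm step. The event fields and the rule itself are assumptions made for the example, not output from any real alarm manager:

```python
# Sketch: pattern identification, long-tail prioritisation and a toy RCA rule.
from collections import Counter

def signature(event):
    """Crude pattern signature; real tools use much richer clustering."""
    return (event["probable_cause"], event["resource_type"])

def long_tail(events):
    """Rank patterns by frequency - the head of the list is where an RCA
    rule pays off fastest."""
    return Counter(signature(e) for e in events).most_common()

def power_rca_rule(events):
    """Toy rule: if a site has a power failure, treat the site's other
    alarms as symptoms that can be linked / suppressed."""
    root_sites = {e["site"] for e in events if e["probable_cause"] == "POWER"}
    return [e for e in events
            if e["site"] in root_sites and e["probable_cause"] != "POWER"]

events = [
    {"site": "EXCH-A", "probable_cause": "POWER", "resource_type": "rectifier"},
    {"site": "EXCH-A", "probable_cause": "LOS",   "resource_type": "port"},
    {"site": "EXCH-B", "probable_cause": "LOS",   "resource_type": "port"},
]
print(long_tail(events))
print("linkable symptoms:", power_rca_rule(events))
```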
Note that this mapping is just my interpretation of the demarcation, not the definitive guide. It’s definitely open to differing opinions (ie religious wars).
Many of you will be familiar with the framework that the mapping is overlaid onto – TM Forum’s TAM (the Telecom Application Map), version R17.5.1 in this case. It is as close as we get to a standard mapping of OSS/BSS functionality modules. I find it to be a really useful guide, so today’s article is going to call on the TAM again.
As you would’ve noticed in the diagram above, there are many, many modules that make up the complete OSS/BSS estate. And you should note that the diagram above only includes Level 2 mapping. The TAM recommendation gets a lot more granular than this. This level of granularity can be really important for large, complex telcos.
For the OSS/BSS that support smaller telcos, network providers or utilities, this might be overkill. Similarly, there are OSS/BSS vendors that want to cover all or large parts of the entire estate for these types of customers. But as you’d expect, they don’t want to provide the same depth of functionality coverage that the big telcos might need.
As such, I thought I’d provide the cut-down TAM mapping below for those who want a less complex OSS/BSS suite.
It’s a really subjective mapping because each telco, provider or vendor will have their own perspective on mandatory features or modules. Hopefully it provides a useful starting point for planning a low complexity OSS/BSS.
Then what high-level functionality goes into these building blocks? That’s possibly even more subjective, but here are some hints:
One of the benefits of virtualisation or NaaS (Network as a Service) is that it provides a layer of programmability to your network. That is, to be able to instantiate network services by software through a network API. Virtualisation also tends to assume/imply that there is a huge amount of available capacity (the resource pool) that it can shift workloads between. If one virtual service instance dies or deteriorates, then just automatically spin up another. If one route goes down, customer services are automatically re-directed via alternate routes and the service is maintained. No problem…
But there are some problems that can’t be solved in software. You can’t just use software to fix a cable that’s been cut by an excavator. You can’t just use software to fix failed electronics. Modern virtualised networks can do a great job of self-healing, routing around the problem areas. But there are still physical failures that need to be repaired / replaced / maintained by a field workforce. NSA (Network Service Assurance) doesn’t tend to cover that.
Looking at the diagram below, NSA does a great job of the closed-loop assurance within the red circle. But it then needs to kick out to the green closed-loop assurance processes that are already driven by our OSS/BSS.
As described in the link above, “Perhaps if the NSA was just assuring the yellow cloud/s, any time it identifies any physical degradation / failure in the resource pool, it kicks a notification up to the Customer Service Assurance (CSA) tools in the OSS/BSS layers? The OSS/BSS would then coordinate 1) any required customer notifications and 2) any truck rolls or fixes that can’t be achieved programmatically; just like it already does today. The additional benefit of this two-tiered assurance approach is that NSA can handle the NFV / VNF world, whilst not trying to replicate the enormous effort that’s already been invested into the CSA (ie the existing OSS/BSS assurance stack that looks after PNFs, other physical resources and the field workforce processes that look after it all).”
Therefore, a key part of the NSA process is how it kicks up from closed-loop 1 to closed-loop 2. Then, after closed-loop 2 has repaired the physical problem, NSA needs to be aware that the repaired resource is now back in the pool of available resources. Does your NSA automatically notice this, or must it receive a notification from closed loop 2?
It could be as simple as NSA sending alarms into the alarm list with a clearly articulated root-cause. The alarm has a ticket/s raised against it. The ticket triggers the field workforce to rectify it and triggers customer assurance teams/tools to send notifications to impacted customers (if indeed they send notifications to customers who may not actually be affected yet due to the resilience measures that have kicked in). Standard OSS/BSS practice!
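As a minimal sketch of that two-tier hand-off, assuming a simple in-house event bus (the function and queue names are hypothetical):

```python
# Closed-loop 1 (NSA) kicking up to closed-loop 2 (OSS/BSS CSA). Illustrative.
import queue

csa_queue = queue.Queue()  # stands in for the OSS/BSS CSA intake

def nsa_handle(event):
    """Heal programmatically where possible, else kick up to the CSA."""
    if event["kind"] == "virtual":
        print(f"re-instantiating {event['resource']}")   # eg respawn a VNF
    else:
        # Physical fault: raise a clearly articulated root-cause for the
        # OSS/BSS to ticket, notify customers and roll a truck against.
        csa_queue.put({"root_cause": event["detail"],
                       "resource": event["resource"]})

def on_field_repair_complete(resource):
    """Closed-loop 2 tells the NSA the resource is back in the pool."""
    print(f"{resource} returned to the available resource pool")

nsa_handle({"kind": "physical", "resource": "fibre-123", "detail": "cable cut"})
print(csa_queue.get())
on_field_repair_complete("fibre-123")
```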
I spent some time with a client going through their OSS/BSS yesterday. They’re an Australian telco with a primarily home-grown, browser-based OSS/BSS. One of its features was something I’ve never seen in an OSS/BSS before. But really quite subtle and cool.
They have four tiers of users:
Super-admins (the carrier’s in-house admins),
Standard (their in-house users),
Partners (they use many channel partners to sell their services),
Customer (the end-users of the carrier’s services).
All users have access to the same OSS/BSS, but just with different levels of functionality / visibility, of course.
Anyway, the feature that I thought was really cool was that the super-admins have access to what they call the masquerade function. It allows them to masquerade as any other user on the system without having to log-out / login to other accounts. This allows them to see exactly what each user is seeing and experience exactly what they’re experiencing (notwithstanding any platform or network access differences such as different browsers, response times, etc).
This is clearly helpful for issue resolution, but I feel it’s even more helpful for design, feature release and testing across different personas.
In my experience at least, OSS/BSS builders tend to focus on a primary persona (eg the end-user) and can overlook multi-persona design and testing. The masquerade function can make this task easier.
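For those wondering what such a feature might look like under the hood, here’s a minimal sketch assuming session-based auth. Every name here is hypothetical; it is not the client’s actual design:

```python
# Sketch of a "masquerade" capability with an audit trail. Illustrative only.
import logging

logging.basicConfig(level=logging.INFO)

class Session:
    def __init__(self, user_id, role):
        self.user_id, self.role = user_id, role
        self.masquerading_as = None

    def masquerade(self, target_user_id):
        if self.role != "super-admin":
            raise PermissionError("only super-admins may masquerade")
        self.masquerading_as = target_user_id
        # Audit matters: actions render AS the target but are logged BY the admin
        logging.info("%s masquerading as %s", self.user_id, target_user_id)

    @property
    def effective_user(self):
        """The identity all views / permissions are evaluated against."""
        return self.masquerading_as or self.user_id

admin = Session("alice", "super-admin")
admin.masquerade("partner-042")
print(admin.effective_user)  # pages now render exactly as partner-042 sees them
```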
Network slicing allows operators to segment their network and configure each different slice to the specific needs of that customer (or group of customers). So rather than the network infrastructure being configured for the best compromise that suits all use-cases, instead each slice can be configured optimally for each use-case. That’s an exciting concept.
The big potential roadblock however, falls almost entirely on our OSS/BSS. If our operational tools require significant manual intervention on just one network now, then what chance do operators have of efficiently looking after many networks (ie all the slices)?
But something just dawned on me today. I was assuming that the onus for managing each slice would fall on the network operator. What if we take the approach that telcos use with security on network pipes instead? That is, the telco shifts the onus of security onto their customer (in most cases). They provide a dumb pipe and ask the customer to manage their own security mechanisms (eg firewalls) on the end.
In the case of network slicing, operators just provide “dumb slices.” The operator assumes responsibility for providing the network resource pool (VNFs – Virtual Network Functions) and the automation of slice management including fulfilment (ie adds, modifies, deletes, holds, etc) and assurance. But the customers take responsibility for actually managing their network (slice) with their own OSS/BSS (which they probably already have a suite of anyway).
This approach doesn’t seem to require the same level of sophistication. The main impacts I see (and I’m probably overlooking plenty of others) are:
There’s a new class of OSS/BSS required by the operators, that of automated slice management
The customers already have their own OSS/BSS, but they currently tend to focus on monitoring, ticketing, escalations, etc. Their new customer OSS/BSS would need to take more responsibility for provisioning, including traffic engineering
And I’d expect that to support customer-driven provisioning, the operators would probably need to provide ways for customers to programmatically interface with the network resources that make up their slice. That is, operators would need to offer network APIs or NaaS to their customers externally, not just for internal purposes
Determining the optimal slice model. For example, does the carrier offer:
A small number of slice types (eg video, IoT low latency, IoT low chat, etc), where each slice caters for a category of customers, but with many slice instances (one for each customer)
A small number of slice instances, where all customers in that category share the single slice
Customised slices for premium customers
A mix of the above
In the meantime, changes could be made as they have in the past, via customer portals, etc. A sketch of what the customer-side API interaction might look like follows below.
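Purely as an illustration, here’s what a customer-side client for a hypothetical slice-management API might look like. The endpoint paths, fields and auth scheme are invented for the sketch; no standard is implied:

```python
# Hypothetical customer-side slice management client. Illustrative only.
import requests

BASE = "https://api.example-operator.net/naas/v1"   # placeholder URL
HEADERS = {"Authorization": "Bearer <customer-token>"}

def get_slice(slice_id):
    """Read the current state / KPIs of the customer's slice."""
    return requests.get(f"{BASE}/slices/{slice_id}", headers=HEADERS).json()

def retune_slice(slice_id, **params):
    """Customer-driven traffic engineering on their own 'dumb slice'."""
    return requests.patch(f"{BASE}/slices/{slice_id}", json=params,
                          headers=HEADERS)

# eg tighten latency on an IoT control slice ahead of a firmware campaign:
#   retune_slice("slice-7", max_latency_ms=20, guaranteed_mbps=50)
```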
Back in the old days, Network Service Assurance probably had a different meaning than it might today.
Clearly it’s assurance of a network service. That’s fairly obvious. But it’s in the definition of “network service” where the old and new terminologies have the potential to diverge.
In years past, telco networks were “nailed up” and network functions were physical appliances. I would’ve implied (probably incorrectly, but bear with me) that a “network service” was “owned” by the carrier and was something like a bearer circuit (as distinct from a customer service or customer circuit). Those bearer circuits, using technologies such as DWDM, SDH, SONET, ATM, etc, potentially carried lots of customer circuits, so they were definitely worth assuring. And in those nailed-up networks, we knew exactly which network appliances / resources / bearers were being utilised. This simplified service impact analysis (SIA) and allowed targeted fault-fix.
In those networks the OSS/BSS was generally able to establish a clear line of association from customer service to physical resources as per the TMN pyramid below. Yes, some abstraction happened as information permeated up the stack, but awareness of connectivity and resource utilisation was generally retained end-to-end (E2E).
But in the more modern, virtualised network, it all goes a bit haywire, perhaps starting right back at the definition of a network service.
The modern “network service” is more aligned to ETSI’s NFV definition – “a composition of network functions and defined by its functional and behavioral specification. The Network Service contributes to the behaviour of the higher layer service, which is characterised by at least performance, dependability, and security specifications. The end-to-end network service behaviour is the result of a combination of the individual network function behaviours as well as the behaviours of the network infrastructure composition mechanism.”
They are applications running at OSI’s application layer that can be consumed by other applications. These network services include DNS, DHCP, VoIP, etc, but the concept of NaaS (Network as a Service) expands the possibilities further.
So now the customer services at the top of the pyramid (BSS / BML) are quite separated from the resources at the physical layer, other than to say the customer services consume from a pool of resources (the yellow cloud below). Assurance becomes more disconnected as a result.
OSS/BSS are able to tie customer services to pools of resources (the yellow cloud). And OSS/BSS tools also include PNI / WFM (Physical Network Inventory / Workforce Management) to manage the bottom, physical layer. But now there’s potentially an opaque gulf in the middle where virtualisation / NaaS exists.
The end-to-end association between customer services and the physical resources that carry them is lost. Unless we can find a way to establish E2E association, we just have to hope that our modern Network Service Assurance (NSA) tools make the yellow cloud robust to the point of infallibility. BTW. If the yellow cloud includes NaaS, then the NSA has to assure the NaaS gateway, catalog and all services instantiated through the gateway.
But as we know, there will always be failures in physical infrastructure (cable cuts, electronic malfunctions, etc). The individual resources can’t afford to be infallible, even if the resource pool seeks to provide collective resiliency.
Modern NSA has to find a way to manage the resource pool but also coordinate fault-fix in the physical resources that underpin it like the OSS used to do (still do??). They have to do more than just build policies and actions to ensure SLAs don’t they? They can seek to manage security, power, performance, utilisation and more. Unfortunately, not everything can be fixed programmatically, although that is a great place for NSA to start.
Perhaps if the NSA was just assuring the yellow cloud, any time it identifies any physical degradation / failure in the resource pool, it kicks a notification up to the Customer Service Assurance (CSA) tools in the OSS/BSS layers? The OSS/BSS would then coordinate 1) any required customer notifications and 2) any truck rolls or fixes that can’t be achieved programmatically; just like it already does today. The additional benefit of this two-tiered assurance approach is that NSA can handle the NFV / VNF world, whilst not trying to replicate the enormous effort that’s already been invested into the CSA (ie the existing OSS/BSS assurance stack that looks after PNFs, other physical resources and the field workforce processes that look after it all).
I’d love to hear your thoughts. Hopefully you can even correct me if/where I’m wrong.
I’d like to introduce the concept of CT/IR – Continual Test / Incremental Resilience. Analogous to CI/CD (Continuous Integration / Continuous Delivery) before it, CT/IR is a method to systematically and programmatically test the resilience of the network, then ensure that resilience is continually improving.
The continual, incremental improvement in resiliency potentially comes via multiple feedback loops:
Ideally, the existing resilience mechanisms work around or overcome any degradation or failure in the network
The continual triggering of faults into the network will provide additional seed data for AI/ML tools to learn from and improve upon, especially root-cause analysis (noting that in the case of CT/IR, the root-cause is certain – we KNOW the cause – because we triggered it – rather than reverse engineering what the cause may have been)
We can program the network to overcome the problem (eg turn up extra capacity, re-engineer traffic flows, change configurations, etc). Having the NaaS that we spoke about yesterday provides greater programmability for the network, by the way.
We can implement systematic programs / projects to fix endemic faults or weak spots in the network *
We can perform regression tests to constantly stress-test the network as it evolves through network augmentation, new device types, etc
Now, you may argue that no carrier in their right mind will allow intentional faults to be triggered. So that’s where we unleash the chaos monkeys on our digital twin technology and/or PSUP (Production Support) environments at first. Then on our prod network if we develop enough trust in it.
I live in Australia, which suffers from severe bushfires every summer. Our fire-fighters spend a lot of time back-burning during the cooler months to reduce flammable material and therefore the severity of summer fires. Occasionally the back-burns get out of control, causing problems. But they’re still done for the greater good. The same principle could apply to unleashing chaos monkeys on a production network… once you’re confident in your ability to control the problems that might follow.
* When I say network, I’m also referring to the physical and logical network, but also support functions such as EMS (Element Management Systems), NCM (Network Configuration Management tools), backup/restore mechanisms, service order replay processes in the event of an outage, OSS/BSS, NaaS, etc.
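To show the shape of a CT/IR cycle, here’s a toy sketch. The inject_fault / service_healthy / revert_fault callables are hypothetical hooks into your digital twin or PSUP environment, not real APIs:

```python
# Toy CT/IR cycle: inject a known fault, watch whether resilience mechanisms
# cope, and record the outcome as labelled data for the AI/ML feedback loop.
import random
import time

FAULTS = ["drop-link:core-7", "kill-vnf:vFW-3", "saturate:edge-12"]

def run_ct_ir_cycle(inject_fault, service_healthy, revert_fault, timeout_s=60):
    fault = random.choice(FAULTS)
    inject_fault(fault)               # root cause is certain - we caused it
    deadline = time.time() + timeout_s
    recovered = False
    while time.time() < deadline:
        if service_healthy():
            recovered = True          # resilience worked around the fault
            break
        time.sleep(1)
    revert_fault(fault)
    return {"fault": fault, "auto_recovered": recovered}  # labelled sample

# Demo with stub hooks - in reality these touch the twin / PSUP environment
print(run_ct_ir_cycle(lambda f: print("inject", f),
                      lambda: True,
                      lambda f: print("revert", f),
                      timeout_s=1))
```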
As the title suggests, NaaS has the potential to be as big a paradigm shift for networks (and OSS/BSS) as Agile has been for software development.
There are many facets to the Agile story, but for me one of the most important aspects is that it has taken end-to-end (E2E), monolithic thinking and has modularised it. Agile has broken software down into pieces that can be worked on by smaller, more autonomous teams than the methods used prior to it.
The same monolithic, E2E approach pervades the network space currently. If a network operator wants to add a new network type or a new product type/bundle, large project teams must be stood up. And these project teams must tackle E2E complexity, especially across an IT stack that is already a spaghetti of interactions.
But before I dive into the merits of NaaS, let me take you back a few steps, back into the past. Actually, for many operators, it’s not the past, but the current-day model.
As per the orange arrow, customers of all types (Retail, Enterprise and Wholesale) interact with their network operator through BSS (and possibly OSS) tools. [As an aside, see this recent post for a “religious war” discussion on where BSS ends and OSS begins]. The customer engagement occurs (sometimes directly, sometimes indirectly) via BSS tools such as:
Order Entry, Order Management
Product Catalog (Product / Offer Management)
SLA (Service Level Agreement) Management
If the customer wants a new instance of an existing service, then all’s good with the current paradigm. Where things become more challenging is when significant changes occur (as reflected by the yellow arrows in the diagram above).
For example, if any of the following are introduced, there are end-to-end impacts. They necessitate E2E changes to the IT spaghetti and require formation of a project team that includes multiple business units (eg products, marketing, IT, networks, change management to support all the workers impacted by system/process change, etc):
A new product or product bundle is to be taken to market
An end-customer needs a custom offering (especially in the case of managed service offerings for large corporate / government customers)
A new network type is added into the network
System and / or process transformations occur in the IT stack
If we just narrow in on point 3 above, fundamental changes are happening in network technology stacks already. Network virtualisation (SDN/NFV) and 5G are currently generating large investments of time and money. They’re fundamental changes because they also change the shape of our traditional OSS/BSS/IT stacks, as follows.
We now not only have Physical Network Functions (PNF) to manage, but Virtual Network Functions (VNF) as well. In fact it now becomes even more difficult because our IT stacks need to handle PNF and VNF concurrently. Each has their own nuances in terms of over-arching management.
The virtualisation of networks and application infrastructure means that our OSS see greater southbound abstraction. Greater southbound abstraction means we potentially lose E2E visibility of physical infrastructure. Yet we still need to manage E2E change to IT stacks for new products, network types, etc.
The diagram below shows how NaaS changes the paradigm. It de-couples the network service offerings from the network itself. Customer Facing Services (CFS) [as presented by BSS/OSS/NaaS] are de-coupled from Resource Facing Services (RFS) [as presented by the network / domains].
NaaS becomes a “meet-in-the-middle” tool. It effectively de-couples
The products / marketing teams (who generate customer offerings / bundles) from
The networks / operations teams (who design, build and maintain the network), and
The IT teams (who design, build and maintain the IT stack)
It allows product teams to be highly creative with their CFS offerings from the available RFS building blocks. Consider it like Lego. The network / ops teams create the building blocks and the products / marketing teams have huge scope for innovation. The products / marketing teams rarely need to ask for custom building blocks to be made.
You’ll notice that the entire stack shown in the diagram below is far more modular than the diagram above. Being modular makes the network stack more suited to being worked on by smaller autonomous teams. The yellow arrows indicate that modularity, both in terms of the IT stack and in terms of the teams that need to be stood up to make changes. Hence my claim that NaaS is to networks what Agile has been to software.
You will have also noted that NaaS allows the Network / Resource part of this stack to be broken into entirely separate network domains. Separation in terms of IT stacks, management and autonomy. It also allows new domains to be stood up independently, which accommodates the newer virtualised network domains (and their VNFs) as well as platforms such as ONAP.
The NaaS layer comprises the following (an illustrative API call appears after the list):
A TMF standards-based API Gateway
A Master Services Catalog
A common / consistent framework of presentation of all domains
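Here’s the illustrative API call mentioned above, modelled loosely on TM Forum’s Open APIs (eg TMF641 Service Ordering). The gateway URL is a placeholder and the payload is heavily simplified; consult the actual Open API specifications for the real schemas:

```python
# Loosely TMF641-shaped service order via a NaaS gateway. Illustrative only.
import requests

GATEWAY = "https://naas.example-telco.com/tmf-api"  # placeholder URL

def order_cfs(spec_id, **characteristics):
    """Submit a service order for a catalog CFS via the NaaS gateway."""
    payload = {
        "externalId": "PO-2019-0042",
        "serviceOrderItem": [{
            "action": "add",
            "service": {
                # A CFS from the master services catalog, decoupled from
                # whichever RFS / domains ultimately deliver it
                "serviceSpecification": {"id": spec_id},
                "serviceCharacteristic": [
                    {"name": k, "value": str(v)}
                    for k, v in characteristics.items()
                ],
            },
        }],
    }
    return requests.post(f"{GATEWAY}/serviceOrdering/v4/serviceOrder",
                         json=payload)

# eg a product team composing an offer from RFS building blocks:
#   order_cfs("cfs-business-internet-500", bandwidthMbps=500)
```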
The ramifications of this excite me even more than what’s shown in the diagram above. By offering access to the network via APIs and as a catalog of services, it allows a large developer pool to provide innovative offerings to end customers (as shown in the green box below). It opens up the long tail of innovation that we discussed last week.
Some telcos will open up their NaaS to internal or partner developers. Others are drooling at the prospect of offering network APIs for consumption by the market.
You’ve probably already identified this, but the awesome thing for the developer community is that they can combine services/APIs not just from the telcos but any other third-party providers (eg Netflix, Amazon, Facebook, etc, etc, etc). I could’ve shown these as East-West services in the diagram but decided to keep it simpler.
Developers are not constrained to offering communications services. They can now create / offer higher-order services that also happen to have communications requirements.
If you weren’t already on board with the concept, hopefully this article has convinced you that NaaS will be to networks what Agile has been to software.
Agree or disagree? Leave me a comment below.
PS1. I’ve used the old TMN pyramid as the basis of the diagram to tie the discussion to legacy solutions, not to imply size or emphasis of any of the layers.
PS2. Similarly, the size of the NaaS layer is to bring attention to it rather than to imply it is a monolithic stack in its own right. In reality, it is actually a much thinner shim layer architecturally.
PS3. The analogy between NaaS and Agile is to show similarities, not to imply that NaaS replaces Agile. They can definitely be used together.
PS4. I’ve used the term IT quite generically (operationally and technically) just to keep the diagram and discussion as simple as possible. In reality, there are many sub-functions like data centre operations, application monitoring, application control, applications development, product owner, etc. These are split differently at each operator.
Back in the earliest days of OSS (and networks for that matter), it was the telcos that generated almost all of the innovation. That effectively limited innovation to being developed by the privileged few, those who worked for the government-owned, monopoly telcos.
But over time, the financial leaders at those telcos felt the costs of their amazing research and development labs outweighed the benefits and shut them down (or starved them at best). OSS (and network) vendors stepped into the void to assume responsibility for most of the innovation. But there was a dilemma for the vendors (and for telcos and consumers too) – they needed to innovate fast enough to win work against their competitors, but slow enough to accrue revenues from the investment in their earlier innovations. And innovation was still being constrained to the privileged few, those who worked for vendors and integrators.
Now, the telcos are increasingly pushing to innovate wider and faster than the current vendor collective can accommodate. It means we have to reach further out to the long-tail of innovators. To open the floor beyond the privileged few. Excitingly, this opportunity appears to be looming.
“How?” you may ask.
Network as a Service (NaaS) and API platform offerings.
If every telco offers consumption of their infrastructure via API, it provides the opportunity for any developer to bundle their own unique offering of products, services, applications, hosting, etc and take it to market. If you’re heading to TM Forum’s Digital Transformation World (DTW) in Nice next week, there are a number of Catalyst projects on display in this space, including:
The challenge for the telcos is in how to support the growth of this model. To foster the vendor market, it was easy enough for the telcos to identify the big suppliers and funnel projects (and funding) through them. But now they have to figure out a funnel that’s segmented at a much smaller scale – to facilitate take-up by the millions of developers globally who might consume their products (network APIs in this case) rather than the hundreds/thousands of large suppliers.
This brings us back to smart contracts and micro-procurement as well as the technologies such as blockchain that support these models. This ties in with another TM Forum initiative to revolutionise the procurement event:
But an additional benefit for the telcos, if and when the NaaS platform model takes hold, is that the developers also become an unpaid salesforce for the telcos. The developers will be responsible for marketing and selling their own bundles, which will drive consumption and revenues on the telcos’ assets.
Exciting new business models and supply chains are bound to evolve out of this long tail of innovation.
All OSS products are excellent these days. And all OSS vendors know what the most important functionality is. They already have those features built into their products. That is, they’ve already added the all-important features at the left side of the graph.
But it also means product teams are tending to only add the relatively unimportant new features to the right edge of the graph (ie inside the red box). Relatively unimportant and therefore delivering minimal differential advantage.
The challenge for users is that there is a huge amount of relatively worthless functionality that they have to navigate around. This tends to make the user interfaces non-intuitive.
But another approach, a product-led differentiator, dawned on me when discussing the many sources of OSS friction in yesterday’s post. What if we asked our product teams to take a focus on designing solutions that remove friction instead of the typical approach of adding features (and complexity)?
Almost every OSS I’m aware of has many areas of friction. It’s what gives the OSS industry a bad name. But what if one vendor reduced friction to levels far less than any other competitor? Would it be a differentiator? I’m quite certain customers would be lining up to buy a frictionless OSS even if it didn’t have every perceivable feature.