Orders down, faults up

As mentioned in a post about Service and Resource Availability last week, I tend to think of OSS workflows in terms of an “orders down, faults up” flow direction. That means customers (services) sit at the top and the network (resources) at the bottom of the (TMN) pyramid.

I also think of inventory (yellow) as the point where Assurance / Faults (blue) and Fulfillment / Orders (purple) collide and enhance each other, as per the diagram below.

These are highly generic examples, but let’s take a closer look:

Assurance flow (blue) – an alarm or event in the network (NEL/NE layer) pushes up through the stack to the OSS as a fault. The inventory (network / service) helps to enrich the fault with additional information (eg device name, location, connectivity, correlation of this and other faults, etc) to help resolve the fault (either manually by operators or automatically by algorithms). It also helps associate the fault in the network/resource with the customer/s using those resources. This allows notifications to be issued to customers. Note that this simple flow doesn’t reflect examples such as an incident (ie when a customer notices a problem first and calls it in before the OSS has been able to issue a notification).

Fulfillment flow (purple) – a customer places an order (BML/BSS layer or above) and it pushes down through the stack, including changes in the network (NEL/NE layer). Once all the appropriate network changes have been made, the order is ready for use by the customer. Once again, inventory plays an important part, associating customer / service identifiers with suitable resources from the available resource pool. Generally the (customer facing) service orders won’t have the technology-specific details required to actually update the network configurations though. That’s where the inventory often helps to fill in the knowledge gaps and send technology-specific commands down into the network. [See Friday’s post for more information about CFS and RFS definitions and mappings]

Inventory flows (yellow) – an inventory is relevant to assurance and fulfillment flows if the BSS and network / resource layers don’t hold enough information for those flows to be fully processed. The enriching information stored by inventory must come from somewhere. Some of it comes from Discovery (usually an automated process of collecting from the network or other sources), or via Manual / Scripted Input (eg physical network designs including patch cables and splicing). Some data (eg splices) just can’t be collected automatically, as it relates to passive equipment that has no programmatic interface. This data just has to be created manually.
But arguably the more important inventory data is the record of mappings from customers (services) to network (resources). Inventory solutions are often where these linking keys / relationships are recorded.
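To make these flows a little more tangible, here’s a minimal Python sketch of the assurance-side enrichment described above. Every record, field name and identifier is hypothetical – the point is the join between alarm, inventory and customer, not any particular product’s data model:

```python
# Minimal sketch of inventory-based alarm enrichment.
# All records and field names are hypothetical illustrations.

inventory = {
    "NE-1234": {
        "name": "MEL-CORE-SW-01",
        "location": "Melbourne CBD Exchange, Rack 7",
        "services": ["SVC-001", "SVC-002"],  # services riding on this device
    },
}

service_to_customer = {
    "SVC-001": "Acme Pty Ltd",
    "SVC-002": "Beta Bank",
}

def enrich_alarm(alarm: dict) -> dict:
    """Enrich a raw network alarm with inventory data and impacted customers."""
    device = inventory.get(alarm["device_id"], {})
    impacted = [service_to_customer.get(s, "unknown")
                for s in device.get("services", [])]
    return {
        **alarm,
        "device_name": device.get("name", "unknown"),
        "location": device.get("location", "unknown"),
        "impacted_customers": impacted,  # enables customer notifications
    }

raw_alarm = {"device_id": "NE-1234", "severity": "critical", "type": "LOS"}
print(enrich_alarm(raw_alarm))
```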

These flows also tend to indicate the direction of data mastery. Whilst the network itself is the source of truth, Fulfillment flows will start at the BSS and customer / service / order data will tend to be mastered there before orders are pushed into the network as provisioning commands. For assurance flows, the network will tend to be the master source of data, but with enrichment along its path northbound.

Just keep in mind that there are many exceptions to these examples. Data and processes can flow in many different ways. The diagram above is just useful for helping newcomers to understand some conceptual processes and data models / flows.

Developing an OSS Training Plan

Last year a tier-1 telco asked me to develop a training / mentoring plan for graduates entering their OSS stream. Not just a short-term training plan, but a 4-5 year career development model for their team. They’re setting aside approximately a day a week for personal development for each trainee entering the program.

That blew me away. I was so impressed that they were willing to invest so much time into the long-term development of their staff.

I’ll share the 5-year OSS Training & Mentoring Plan below, but first a few things that came to mind when preparing it:

  • The key to a successful career is developing rare, enduring and valuable skills as fast as possible, particularly early on. The same goes for relationships. These traits have compounding effects, in the same way that compound interest works
  • Whilst there was a focus on the individual, the effort expended also had to deliver intrinsic as well as extrinsic benefits to the telco
  • It had to start with the most basic building blocks and terminologies, from a global perspective as well as within the telco’s specific context
  • It then had to build from the basics up to mastery-level subject matter
  • Learning OSS is not an occasional two-week training course, but an apprenticeship
  • Most “real” learning happens through experience in-person, interacting with others (colleagues, clients, sponsors and other stakeholders) and hands-on using tools / data / processes
  • It had to incorporate methods of understanding the telco’s specific context and contributing to it as fast as possible through work-related activities.
  • The reality is that even OSS experts tend to take months before becoming their most productive within a new environment
  • Unlike medicine, there is almost never one “best practice” way to do any OSS tasks
  • It had to look outside the telco’s environment and include interaction with external ideas and expertise through widely attended conferences
  • It had to cover all aspects of OSS from strategy to delivery to operations and all associated roles
  • If certifications can be gained, then all the better
  • Our industry changes quickly, so whilst planning out an indicative syllabus over 5 years, there has to be flexibility to adapt in-flight

Whilst the following OSS Training Plan will be overkill for most, it may help to give you some ideas for creating a factory to build your own Valuable OSS Tripods.

OSS Training and Mentoring Plan

OSS is such a broad subject to learn, and nobody’s learning journey will be the same. I’d love to hear your thoughts about what else should be included in (or omitted from) this OSS Training and Mentoring Plan. Are there any other OSS-related training courses that you’d like to suggest for inclusion?

What’s in your OSS for me?

May I ask you a question? Do the senior executives at your organisation ever USE your OSS/BSS?

I’d love to hear your answer.

My guess is that few, if any, do. Not directly anyway. They may depend on reports whose data comes from our OSS, but is that all?

Execs are ultimately responsible for signing off large budget allocations (in CAPEX and OPEX) for our OSS. But if they don’t see any tangible benefits, do the execs just see OSS as cost centres? And cost centres tend to become targets for cost reduction, right?

Building on last week’s OSS Scoreboard Analogy, the senior execs are the head coaches of the team. They don’t need the transactional data our OSS are brilliant at collating (eg every network device’s health metrics). They need insights at a corporate objective level.

How can we increase the executives’ “what’s in it for me?” ranking of the OSS/BSS we implement? We can start by considering OSS design through the lens of senior executive responsibilities:

  • Strategy / objective development
  • Strategy execution (planning and ongoing management to targets)
  • Clear communication of priorities and goals
  • Optimising productivity
  • Risk management / mitigation
  • Optimising capital allocation
  • Team development

And they are busy, so they need concise, actionable information.
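As a toy illustration of the gap, here’s a tiny Python sketch (with invented numbers and an assumed availability target) that rolls transactional device data up into the kind of objective-level indicator an exec might actually glance at:

```python
# Hypothetical roll-up of transactional OSS data into an exec-level KPI.
# The metric, numbers and target are all invented for illustration.
device_availability = [0.9999, 0.9982, 1.0, 0.97, 0.9995]  # per-device, from the OSS

network_health = sum(device_availability) / len(device_availability)
target = 0.999  # assumed corporate objective

status = "on track" if network_health >= target else "at risk"
print(f"Network health index: {network_health:.4f} ({status} vs target {target})")
```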

Do we deliver functionality that helps with any of those responsibilities? Rarely!

Could we? Definitely!

Should we? Again, I’d love to hear your thoughts!


Riffing with your OSS

Data collection and data science is becoming big business. Not just in telco – our OSS have always been one of the biggest data gatherers around – but across all sectors that are increasingly digitising (should I just say, “all sectors” because they’re all digitising?).

Why do you think we’re so keen to collect so much data?

I’m assuming that the first wave had mainly been because we’d almost all prefer to make data-driven decisions (ie decisions based on “proof”) rather than “gut-feel” decisions.

We’re increasingly seeing a second wave come through – to use data not just to identify trends and guide our decisions, but to drive automated actions.

I wonder whether this has the potential to insulate us from key insights / observations about the business, especially senior leaders who don’t have the time to “science” their data? Have teams already cleansed, manipulated, aggregated and presented the data, thus stripping out all the nuances before senior leaders ever even see it?

I regretfully don’t get to “play” with data as much as I used to. I say regretfully because looking at raw data sets often gives you the opportunity to identify trends, outliers, anomalies and patterns that might otherwise remain hidden. Raw data also gives you the opportunity to riff off it – to observe and then ask different questions of the data.

How about you? Do you still get the opportunity to observe and hypothesise using raw OSS/BSS data? Or do you make your decisions using data that’s already been sanitised (eg executive dashboards / reports)?


Over 30 Autonomous Networking User Stories

The following is a set of user stories I’ve provided to TM Forum to help with their current Autonomous Networking initiative.

They’re just an initial discussion point for others to riff off. We’d love to get your comments, additions and recommended refinements too.

As a Head of Network Operations, I want to Automatically maintain the health of my network (within expected tolerances if necessary) So that Customer service quality is kept to an optimal level with little or no human intervention
As a Head of Network Operations, I want to Ensure the overall solution is designed with machine-led automations as a guiding principle So that Human intervention cannot be easily engineered into the systems/processes
As a Head of Network Operations, I want to Automatically identify any failures of resources or services within the entire network So that All relevant data can be collected, logged, codified and earmarked for effective remedial action without human interaction
As a Head of Network Operations, I want to Automatically identify any degradation of resource or service performance within the network So that All relevant data can be collected, logged, codified and earmarked for effective remedial action without human interaction
As a Head of Network Operations, I want to Map each codified data set (for failure or degradation cases) to a remedial action plan So that Remedial activities can be initiated without human interaction
As a Head of Network Operations, I want to Identify which remedial activities can be initiated via a programmatic interface and which activities require manual involvement such as a truck roll So that Even manual activities can be automatically initiated
As a Head of Network Operations, I want to Ensure that automations are able to resolve all known failure / degradation scenarios So that Activities can be initiated for any failure or degradation and be automatically resolved through to closure (with little or no human intervention)
As a Head of Network Operations, I want to Ensure there is sufficient network resilience So that Any failure or degradation can be automatically bypassed (temporarily or permanently)
As a Head of Network Operations, I want to Ensure there is sufficient resilience within all support systems So that Any failure or degradation can be automatically bypassed (temporarily or permanently) to ensure customer service is maintained
As a Head of Network Operations, I want to Ensure that operator initiated changes (eg planned maintenance, software upgrades, etc) automatically generate change tracking, documentation and logging So that The change can be monitored (by systems and humans where necessary) to ensure there is minimal or no impact to customer services, but also to ensure resolution data is consistently recorded
As a Head of Network Operations, I want to Ensure that customer initiated changes (eg by raising an incident) automatically generate change tracking, documentation and logging So that The change can be monitored (by systems and humans where necessary) to ensure the incident is closed expediently, but also to ensure resolution data is consistently recorded
As a Head of Network Operations, I want to Initiate planned outages with or without triggering automated remedial activities So that The change agents can decide to use automations or not and ensure automations don’t adversely affect the activities that are scheduled for the planned outage window
As a Head of Network Operations, I want to Ensure that if an unplanned outage does occur, impacted customers are automatically notified (on first instance and via a communications sequence if necessary throughout the outage window) So that Customer experience can be managed as best possible
As a Head of Network Operations, I want to Ensure that if an unplanned outage does occur without a remedial action being triggered, a post-mortem analysis is initiated So that Automations can be revised to cope with this previously unhandled outage scenario
As a Head of Network Operations, I want to Ensure that even previously unseen failure scenarios can be handled by remedial automations So that Customer service quality is kept to an optimal level with little or no human intervention
As a Head of Network Operations, I want to Automatically monitor the effects of remedial actions So that Remedial automations don’t trigger race conditions that result in further degradation and/or downstream impacts
As a Head of Network Operations, I want to Be able to manually override any automations by following a documented sequence of events So that If a race condition is inadvertently triggered by an automation, it can be negated quickly and effectively before causing further degradation
As a Head of Network Operations, I want to Intentionally trigger network/service outages and/or degradations, including cascaded scenarios, on a scheduled and/or randomised basis So that The resilience of the network and systems can be thoroughly tested (and improved if necessary)
As a Head of Network Operations, I want to Intentionally trigger network/service outages and/or degradations, including cascaded scenarios on an ad-hoc basis So that The resilience of the network and systems can be thoroughly tested (and improved if necessary)
As a Head of Network Operations, I want to Perform scheduled compliance checks on the network So that Expected configurations and policies are in place across the network
As a Head of Network Operations, I want to Automatically generate scheduled reports relating to the effectiveness of the network, services and automations So that The overall solution health (including automations) can be monitored
As a Head of Network Operations, I want to Automatically generate dashboards (in near-real-time) relating to the effectiveness of the network, services and automations So that The overall solution health (including automations) can be monitored
As a Head of Network Operations, I want to Ensure that automations are able to extend across all domains within the solution So that Remedial actions aren’t constrained by system hand-offs
As a Head of Network Operations, I want to Ensure configuration backups are performed automatically on all relevant systems (eg EMS, OSS, etc) So that A recent good solution configuration can be stored as protection in case automations fail and corrupt configurations within the system
As a Head of Network Operations, I want to Ensure configuration restores are performed and tested automatically on all relevant systems (eg EMS, OSS, etc) So that A recent good solution configuration can be reverted to in case automations fail and corrupt configurations within the system
As a Head of Network Operations, I want to Ensure automations are able to manage the entire service lifecycle (add, modify/upgrade, suspend, restore, delete) So that Customer services can evolve to meet client expectations with little or no human intervention
As a Head of Network Operations, I want to Have a design and architecture that uses intent-based and/or policy-based actions So that The complexity of automations is minimised (eg automations don’t need to consider custom rules for different device makes/models, etc)
As a Head of Network Operations, I want to Ensure as many components of the solution (eg EMS, OSS, customer portals, etc) have programmatic interfaces (even if manual activities are required in back-end processes) So that Automations can initiate remedial actions in near real time
As a Head of Network Operations, I want to Ensure all components and data flows within the solution are securely hardened (eg encryption of data in motion and at rest) So that The power of the autonomous platform cannot be leveraged for nefarious purposes
As a Head of Network Operations, I want to Ensure that all required metrics can be automatically sourced from the network / systems in as near real time as feasible / useful So that Automations have the full set of data they need to initiate remedial actions and it is as up-to-date as possible for precise decision-making
As a Head of Network Operations, I want to Use the power of learning machines So that The sophistication and speed of remedial response is faster, more accurate and more reliable than if manual interaction were used
As a Head of Network Operations, I want to Record actual event patterns and replay scenarios offline So that Event clusters and response patterns can be thoroughly tested as part of the certification process prior to being released into production environments
As a Head of Network Operations, I want to Capture metrics that can be cross-referenced against event patterns and remedial actions So that Regressions and/or refinements can improve existing automations (ie continuous retraining of the model)
As a Head of Network Operations, I want to Be able to seed a knowledge base with relevant event/action data, whether the pattern source is from Production, an offline environment, a digital twin environment or other production-like environments So that The knowledge base is able to identify real scenarios, even ones that are intentionally initiated but could potentially cause network degradation if run in a production environment
As a Head of Network Operations, I want to Ensure that programmatic interfaces also allow for revert / rollback capabilities So that Remedial actions that aren’t beneficial can be rolled back to the previous state; OR other remedial actions are performed, allowing the automation to revert to original configuration / state
As a Head of Network Operations, I want to Be able to initiate circuit breakers to override any automations So that If a race condition is inadvertently triggered by an automation, it can be negated quickly and effectively before causing further degradation
As a Head of Network Operations, I want to Manually or automatically generate response-plans (ie documented sequences of activities) for any remedial actions fed back into the system So that Internal (eg quality control) or external (eg regulatory) bodies can review “best-practice” remedial activities at any point in time
As a Head of Network Operations, I want to Intentionally trigger catastrophic network failures (in non-prod environments) So that We can trial many remedial actions until we find an optimal solution to seed the knowledge base with
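To illustrate a few of the stories above (codified failure patterns mapped to remedial action plans, with previously unhandled scenarios diverted to post-mortem analysis), here’s a deliberately simplified Python sketch. The signatures and action plans are invented; a real implementation would draw on a seeded knowledge base:

```python
# Simplified sketch: map codified failure signatures to remedial action plans.
# Signatures and plans are invented placeholders.

remedial_plans = {
    ("fibre_cut", "core"): ["reroute_traffic", "dispatch_field_crew"],  # manual step, auto-initiated
    ("high_cpu", "edge"): ["restart_process", "rebalance_load"],
    ("config_drift", "any"): ["restore_last_good_config"],  # relies on automated backups
}

post_mortem_queue = []  # unhandled scenarios feed automation revisions

def handle_event(signature: tuple) -> list:
    """Return the remedial plan for a codified event, or fall back safely."""
    plan = remedial_plans.get(signature)
    if plan is None:
        # Previously unhandled scenario: initiate a post-mortem so the
        # automations can be revised to cope with it next time.
        post_mortem_queue.append(signature)
        return ["notify_operators"]  # safe fallback with human involvement
    return plan

print(handle_event(("fibre_cut", "core")))  # known: fully automated plan
print(handle_event(("dns_storm", "core")))  # unknown: fallback + post-mortem
```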

H-OSS-ton, we have a problem

You’ve probably all seen this scene from the Tom Hanks movie Apollo 13, right? But you’re probably wondering what it has to do with OSS.

Well, this scene came to mind when I was preparing a list of user stories required to facilitate Autonomous Networking.

More specifically, to the use-case where we want the Autonomous Network to quickly recover (as best it can) from unplanned catastrophic network failures.

Of course we don’t want catastrophic network failures in production environments, but if one does occur, we’d prefer that our learning machines already have some idea on how to respond to any unlikely situation. We don’t want them to be learning response mechanisms after a production event.

But similarly, we don’t want to trigger massive outages on production just to build up a knowledge base of possible cause-effect groupings. That would be ridiculous.

That’s where the Apollo 13 analogy comes into play:

  • The engineers on the ground (ie the non-prod environment) were tasked with finding a solution to the problem (as they said, “fitting a square peg in a round hole”)
  • The parts the engineers were given matched the parts available in the spacecraft (ie non-prod and prod weren’t an exact match, but enough of a replica to be useful)
  • The engineers were able to trial many combinations using the available parts until they found a workable resolution to the problem (even if it relied heavily on duct tape!)
  • Once the workable solution was found, it was codified (as a procedure manual) and transferred to the spacecraft (ie migrating seed data from non-prod to prod)

If I were responsible for building an Autonomous Network, I’d want to dream up as many failure scenarios as I could, initiate them in non-prod and then duct-tape* solutions together for them all… and then attempt to pre-seed those learnings into production.

* By “duct-tape” I mean letting the learning machine attempt to find optimal solutions by trialing different combinations of automated / programmatic and manual interventions.
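If you wanted to prototype that trial-and-seed loop, it might look something like the Python sketch below. The scenarios, candidate actions and scoring function are all placeholders for far more sophisticated machinery (eg a digital twin and a learning machine):

```python
import random

# Candidate remedial actions; a real system would compose programmatic
# and manual interventions (the "duct tape" described above).
candidate_actions = ["reroute", "restart", "rollback_config", "isolate_node"]

def simulate_outcome(scenario: str, action: str) -> float:
    """Stand-in for running an action in a non-prod / digital twin environment
    and scoring the recovery (0 = no help, 1 = full recovery)."""
    return random.random()

knowledge_base = {}  # scenario -> best known remedial action (seed data for prod)

for scenario in ["core_router_failure", "dns_cascade", "power_loss_site_a"]:
    trials = {a: simulate_outcome(scenario, a) for a in candidate_actions}
    knowledge_base[scenario] = max(trials, key=trials.get)

print(knowledge_base)  # this seed data is what gets migrated from non-prod to prod
```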

For those starting out in OSS product, here’s a tip

“For those starting out in product, here’s a tip: Design, Defaults*, Documentation, Details and Delivery really matter in software.”
Jeetu Patel.

* Note that you can interpret “Defaults” to be Out-Of-The-Box functionality offered by the product.

Let’s break those 5 D-words down and describe why they really matter to the OSS industry, shall we?

  • Design – The power of OSS product development tends to lie with engineering, ie the developers. I have huge admiration for the very clever and very talented engineers who create amazing products for us to use, buuutttttt……. I just have one reservation – is there a single OSS company that is design-driven? A single one that’s making intuitive, effective, beautiful experiences for their users? The answer is obvious – engineering teams hold sway over design teams in OSS. How many OSS vendors even have a dedicated design department??? See this article for more.
  • Defaults – Almost every OSS I know of has an enormous amount of “out-of-the-box” functionality baked in. You could even say that most have too much functionality. There’s functionality that might be really important for one customer but never even used by any of the vendor’s other customers. It just represents bloat for all the other customers, and potentially a distraction for their operators. I’m still bemused to see vendors trying to differentiate by adding obscure new default features rather than optimising for “must-have” functions. See this article for more. However, I must add that I’m starting to see a shift in some OSS. They’re moving away from having baked-in functionality and are moving to more data-repository-driven architectures. Interesting!!
  • Documentation – This is a really interesting factor! Some vendors make almost no documentation available until a prospect becomes a paying customer. Other vendors make their documentation available to the general public online and dedicate significant effort to maintaining their information library. The low-doc approach espoused by Agile could be argued to reduce document quality. However, it also reduces the chance of producing documentation that nobody will ever read! Personally, I believe vendors like Cisco have earnt a huge competitive advantage (in the networking space moreso than OSS) because of their training / certification (ie CCNA, etc) and self-learning (ie online documentation). See this article for more. As such, I’d tend to err on the side of over-documenting customer-facing collateral, and perhaps under-documenting internal-facing collateral unless it’s likely to be used regularly and by many.
  • Details – This is another item where there are two ends to the spectrum. That might surprise some people who would claim that attention to detail is paramount. Well, yes… in many cases on OSS projects, but certainly not all. Let me share a story on attention to detail on a past OSS project. And another story on seeking perfection. Sometimes we just need to find the right balance, and knowing when to prioritise resilience and when to favour precision becomes an art.
  • Delivery – I have two perspectives on this D-word. Firstly, the Steve Jobs inspired quote of “Real artists ship!” In other words, to laud the skill of shipping a product that provides value to the customer rather than holding off on a not-yet-perfected solution. But the second case is probably more important. OSS projects tend to be massive and complex transformation efforts. Our OSS are rarely self-installed like office software, so they require big delivery teams. Some products are easy to deliver/deploy. Others are a *&$%#! If you’re a product developer, please get out in the trenches with your delivery teams and find ways to make their job easier and/or more repeatable.

Net Simplicity Score (NSS) gets a little more complex

In last Tuesday’s post, I asked the community here on PAOSS and on TM Forum’s Engage platform for ideas about how you would benchmark complexity.

I also provided a reference to an old post that described the concept of a NSS (Net Simplicity Score) for our OSS/BSS.

Due to the complexity of factors that contribute to a complexity score, the NSS is a “catch-all” simplicity metric. Hopefully it will allow subtraction projects to be easily justified, just as the NPS (Net Promoter Score) metric has helped justify customer experience initiatives.

The NSS (Net Simplicity Score) could be further broken down into:

  • The NCSS (Net Customer Simplicity Score) – A ranking from 0 (lowest) to 10 (highest) of how easy it is to choose and use the company / product / service. This is an external metric (ie the ranking of the level of difficulty that your customers face)
  • The NOSS (Net Operator Simplicity Score) – A ranking from 0 (lowest) to 10 (highest) of how easy it is to choose and use the company / product / service. This is an internal metric (ie for operators to rank complexity of systems and their constituent applications / data / processes)

One interesting item of feedback came from Ronald Hasenberger. He rightly pointed out that just because something is simple for users to interact with, doesn’t mean it’s simple behind the scenes – often exactly the opposite. The iPod example I used in earlier posts is a case in point. The iPod was more intuitive than existing MP3 players, but a huge amount of design and engineering went into making it that way. The underlying “system” certainly wasn’t simple.

So perhaps there’s a third simplicity factor to add to the two bullets listed above:

  • The NSSS (Net System Simplicity Score) – and this one does require a more sophisticated algorithm than just an aggregate of perceptions. Not only that, but it’s the one that truly reflects the systems we design and build. I wonder whether the first two are an initial set of proxies that help drive complexity out of our solutions, and whether we need to develop Ronald’s third one to make the biggest impact?
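There’s no official algorithm behind any of these scores as far as I’m aware, but if you wanted to trial the first two, even a naive aggregation of survey rankings would get you started. The simple averaging below is purely an assumption on my part (an NPS-style promoter/detractor calculation would be another option):

```python
# Naive sketch of NCSS / NOSS aggregation from 0-10 survey rankings.
customer_rankings = [8, 9, 6, 7, 10, 5]  # external: ease of choosing / using
operator_rankings = [4, 3, 6, 5, 2, 4]   # internal: systems / apps / data / process

ncss = sum(customer_rankings) / len(customer_rankings)
noss = sum(operator_rankings) / len(operator_rankings)

print(f"NCSS: {ncss:.1f} / 10, NOSS: {noss:.1f} / 10")
# Per Ronald's point, a high NCSS can coexist with a complex underlying
# system, which is why the NSSS needs a more sophisticated algorithm.
```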

Again, I’d love to hear your thoughts!

What will get your CEO fired? (part 3)

In Monday’s article, we suggested that the three technical factors that could get the big boss fired are probably limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

In that article, we looked closely at a human factor and how current trends of open-source, Agile and microservices might actually exacerbate it. In yesterday’s article we looked at the broader set of catastrophic failure factors for us to investigate and monitor.

But let’s look at some of the broader examples under point 2 today. The market-serving factors we could consider that reduce the chances of the big boss getting fired are:

  1. Immediate visibility of key metrics by boss and execs (what are the metrics that matter, eg customer numbers, ARPU, churn, regulatory, media hot-buttons, network health, etc)

  2. Response to “voice of customer” (including customer feedback, public perception, etc)

  3. Human resources (incl up-skill for new tech, etc)

  4. Ability to implement quickly / efficiently

  5. Ability to handle change (to network topology, devices/vendors, business products, systems, etc)

  6. Measuring end-to-end user experience, not just “nodal” monitoring

  7. Scalability / Capacity (ability to serve customer demand now and into a foreseeable future)

What will get your CEO fired?

Not sure whether you have a clear answer to that question – either the thought is enticing (you want the CEO to get fired), unthinkable (you don’t want the CEO fired) or somewhere in between. You’d invariably get different answers from different employees within most organisations (you can’t please all of the people all of the time).

Today’s post has no singular answer either, so it poses a number of hypothetical questions. But let’s follow the thread of the question from a purely technical perspective (ie discounting inappropriate personal decisions, incompetence and the like).

If we look at the technical factors that could get the big boss fired, they’re probably limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

Let’s start with catastrophic failures, ones where entire networks and/or product lines go down for long periods (as opposed to branches of networks becoming faulty or intermittently failing). We’ve had quite a few catastrophic failures in my region in the last few years.

Interestingly, we’re designing networks and systems that should be more reliable day-by-day. In all likelihood they probably are. But with increased system complexity and programmed automation, I wonder whether we’re actually increasing the likelihood of catastrophic failures. That is, examples of cascading failures that are too complex for operators to contain.

Anecdotal evidence surrounding the failures mentioned above in my area suggest that a skills-gap after many retirements / redundancies has been partly to blame. The replacements haven’t been as well equipped as their predecessors to prevent the runaway train of catastrophic failure.

Do you think a similar risk awaits us with the current trend towards build-it-yourself OSS? We have all these custom-built, one-of-a-kind, complex systems being built currently. What happens in a few years’ time when their designers / implementers leave? Yes, their replacements might be able to interrogate code repositories, but will they know what changes to rapidly make when patterns of degradation start to appear? Will they have any chance of inherently understanding the logic behind these systems like those who created them? Will they be able to stop the runaway train/s?

This earlier post discussed the build vs buy cycle the market goes through.

OSS build vs buy cycle

There are many pros and cons of the build vs buy argument, but at least there’s a level of repeatability and modularity with off the shelf solutions.

I wonder if the CEOs of today/tomorrow who decide to build in-house are planning for sophisticated up-skilling of future replacements? Or will they just hand the replacements the keys and wish them luck? The latter approach keeps costs down, but it just might get the CEO fired!

We’ll take a closer look at the other two factors (“serve the market” and “profitability”) in the next few days.


OSS that make men feel more masculine and in command

“From watching ESPN, I’d learned about the power of information bombardment. ESPN strafes its viewers with an almost hysterical amount of data and details. Scrolling boxes. Panels. Bars. Graphics. Multi-angle camera perspectives. When exposed to a surfeit of data, men tend to feel more masculine and in command. Do most men bother to decipher these boxes, panels, bars and graphics? No – but that’s not really the point.”
Martin Lindstrom, in his book, “Small Data.”

I’ve just finished reading Small Data, a fascinating book that espouses forensic analysis of the lives of users (ie small data) rather than using big data methods to identify market opportunities. I like the idea of applying both approaches to our OSS products. After all, we need to make them more intuitive, endearing and ultimately, effective.

The quote above struck a chord in particular. Our OSS GUIs (user interfaces) can tend towards the ESPN model, can’t they? The following paraphrasing doesn’t seem completely at odds with most of the OSS that we interact with – “[the OSS] strafes its viewers with an almost hysterical amount of data and details.”

And if what Lindstrom says is an accurate psychological analysis, does it mean:

  1. The OSS GUIs we’re designing help make their developers “feel more masculine and in command” or
  2. Our OSS operators “feel more masculine and in command” or
  3. Both

Intriguingly, does the feeling of being more masculine and in command actually help or hinder their effectiveness?

I find it fascinating that:

  1. Our OSS/BSS form a multi billion dollar industry
  2. Our OSS/BSS are the beating heart of the telecoms industry, being wholly responsible for operationalising the network assets that so much capital is invested in
  3. So little effort is invested in making the human-to-OSS interface far more effective than it is today
  4. I keep hearing operators bemoan the complexities and challenges of wrangling their OSS, yet only hear “more functionality” being mentioned by vendors, never “better usability”

Maybe the last point comes from me being something of a rarity. Almost every one of the thousands of people I know in OSS either works for the vendor/supplier or the customer/operator. Conversely, I’ve represented both sides of the fence and often even sit in the middle acting as a conduit between buyers and sellers. Or am I just being a bit precious? Do you also spot the incongruence of point 4 on a regular basis?

Whether you’re buy-side or sell-side, would you love to make your OSS more effective? Let us know and we can discuss some of the optimisation techniques that might work for you.

Going to the OSS zoo

“There’s the famous quote that if you want to understand how animals live, you don’t go to the zoo, you go to the jungle. The Future Lab has really pioneered that within Lego, and it hasn’t been a theoretical exercise. It’s been a real design-thinking approach to innovation, which we’ve learned an awful lot from.”
Jorgen Vig Knudstorp.

This quote prompted me to ask the question – how many times during OSS implementations had I sought to understand user behaviour at the zoo versus the jungle?

By that I mean, how many times had I simply spoken with the user’s representative on the project team rather than directly with end users? What about the less obvious personas discussed in this earlier post about user personas? Had I visited the jungles where internal stakeholders (project sponsors, executives, data consumers, etc) or external stakeholders (end-customers, regulatory bodies, etc) go about their daily lives?

I can truthfully, but regretfully, say I’ve spent far more time at OSS zoos than in jungles. This is something I need to redress.

But at least I can claim to have spent most of my time in customer-facing roles.

Too many of the product development teams I’ve worked closely with don’t even visit OSS zoos let alone jungles in any given year. They never get close to observing real customers in their native environments.


OSS user heat-mapping

Over the many OSS implementation projects I’ve worked on, UI/UX (user interface / user experience) has been an afterthought (if even thought about at all). I know there are OSS UI/UX experts out there (I’ve met a handful), but none have ever been assigned to the projects I’ve worked on unfortunately. UI has always just been the domain of the developer. If the functionality worked (even if in a highly convoluted way), then the developer would move on to the next epic. The UI was almost never re-visited unless the functionality proved to be almost unusable.

So the question becomes, how do we observe, measure and trial UI/UX effectiveness?

Have you ever tried running a heat-mapping analysis over your OSS to show actual user behaviour?

Given that almost all OSS are now browser-based, there are plenty of heat-map tools available. They give results that might look something like this (but can also provide more granularity of analysis too):
Heat-map
Image source: https://www.tatvic.com/data-analytics-solutions/heat-map-integration/

Whereas these tools are generally used by web developers to improve retention and conversion rates on their pages (ie customers buying, clicking through on a banner ad, calls to action, etc), we’ll use them in a different way within our OSS. We’ll instead be looking for efficiency of movement, an indicator of whether the design of our page is intuitive or not. Are the operators of your OSS clicking in the right places (menus, titles, buttons, links, etc) to achieve certain outcomes?

I’d be particularly interested in comparing heat-maps of new operators (eg if you’ve installed a sand-pit environment at a client site for the first time and let the client’s operators loose) versus experienced teams. Depending on the OSS application you’re analysing, you may even be interested in observing different behaviours across different devices (eg desktops, phones, tablets).
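If you wanted to roll your own cohort comparison rather than rely on an off-the-shelf tool, a crude first pass might simply bin click coordinates per cohort, as in the Python sketch below. It assumes you’ve already captured (x, y) click events from your browser-based OSS; the coordinates are invented:

```python
from collections import Counter

def click_heatmap(clicks, bin_px=100):
    """Bin raw (x, y) click coordinates into a coarse grid."""
    return Counter((x // bin_px, y // bin_px) for x, y in clicks)

# Invented click captures for two cohorts using the same OSS screen.
new_operator_clicks = [(120, 80), (130, 90), (700, 400), (710, 410), (125, 85)]
experienced_clicks = [(120, 80), (122, 82), (121, 81)]

new_map = click_heatmap(new_operator_clicks)
exp_map = click_heatmap(experienced_clicks)

# Grid cells that new operators visit but experienced operators don't may
# hint at unintuitive navigation (inefficiency of movement).
print("New operators only:", set(new_map) - set(exp_map))
```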

There’s generally a LOT of functionality available within each OSS. Are we optimising the experience for the functionality that matters most? For web-page designers, that might mean ensuring all the most important information is “above-the-fold” (ie can be seen on the page without having to scroll down – noting that the “fold” will be different across different devices/resolutions). If they want a user to click on the “buy now” button, then they *may* want that to appear above the fold, ensuring the prospective buyer doesn’t have to go searching for it.

In the case of an OSS, you don’t want to hide the most important functionality under layers of menus. And don’t forget that different personas (eg designers, admins, execs, help-desk, NOC, etc) are likely to have different definitions of “important functionality.” You may want to optimise important UI elements for each different persona (if your OSS allows for that level of configurability).

I’m not endorsing Smartlook, but if you’d like to read more about heat-mapping techniques, you can start here.

I’m really excited by a just-finished OSS analysis (part 3)

This is the third part of a series describing a really exciting analysis I’ve just finished.

Part 1 described how we can turn simple log files into a Sankey diagram that shows real-life process flows (not just a theoretical diagram drawn by BAs and SMEs), like below:

Part 2 described how the logs are broken down into a design tree and how we can assign weightings to each branch based on the data stored in the logs, as below:
OSS Decision Tree Analysis

I’ve already had lots of great feedback in relation to the Part 1 blog, especially from people who’ve had challenges capturing as-is process. The feedback has been greatly appreciated so I’m looking forward to helping them draw up their flow-charts on the way to helping optimise their process flows.

But that’s just the starting point. Today’s post is where things get really exciting (for me at least). Today we build on part 2 and not just record weightings, but use them to assist future decisions.

We can use the decision tree to “predict forward” and help operators / algorithms make optimal decisions whilst working towards process completion. We can use a feedback loop to steer an operator (or application) down the most optimal branches of the tree (and/or avoid the fall-out variants).

This allows us to create a closed-loop, self-optimising, Decision Support System (DSS), as follows:

Note: Diagram sourced from https://passionateaboutoss.com/closing-the-loop-to-make-better-decisions, where further explanation is provided

Using log data alone, we can perform decision optimisation based on “likelihood of success” or “time to complete” as per the weightings table. If supplemented with additional data, the weightings table could also allow decisions to be optimised by “cost to complete” or many other factors.
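As a concrete (if heavily simplified) illustration of that “predict forward” idea, the Python sketch below recommends the next branch from a weightings table. The table structure and numbers are invented stand-ins for what the log analysis would actually produce:

```python
# Simplified "predict forward" recommendation from a weightings table.
# Keys are (current_state, next_state); stats are invented examples.
weightings = {
    ("DP1", "DP2"): {"count": 5000, "success_rate": 0.97, "avg_minutes": 1.0},
    ("DP1", "DP3"): {"count": 800,  "success_rate": 0.99, "avg_minutes": 4.5},
}

def recommend_next(state: str, optimise_for: str = "success_rate") -> str:
    """Pick the next decision point that optimises the chosen metric."""
    options = {nxt: stats for (cur, nxt), stats in weightings.items() if cur == state}
    if optimise_for == "avg_minutes":  # lower is better for time to complete
        return min(options, key=lambda n: options[n]["avg_minutes"])
    return max(options, key=lambda n: options[n][optimise_for])

print(recommend_next("DP1"))                              # optimise likelihood of success
print(recommend_next("DP1", optimise_for="avg_minutes"))  # optimise time to complete
```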

The model has the potential to be used in “real-time” mode, using the constant stream of process logs to continually refine and adapt. For example:

  • If the long-term average of a process path is 1 minute, but there’s currently a problem with it and that path is failing, then another path (one that is otherwise slightly less optimised over the long term) could be used until the first path is repaired
  • If an operator happens to choose a new, more optimal path than has ever been identified previously (the delta function in the diagram), it sets a new benchmark and the new approach is promoted via the DSS (Darwinian selection)

If you’re wondering how the DSS could be implemented, I can envisage a few ways:

  1. Using existing RPA (Robotic Process Automation) tools [which are particularly relevant if the workflow box in the diagram above crosses multiple different applications (not just a single monolithic OSS/BSS)]
  2. Providing a feedback path into the functionality of the OSS/BSS and its GUI
  3. Via notifications (eg email, Slack, etc) to operators
  4. Via a simple, more manual process like flow diagrams, work instructions, scorecards or similar
  5. You can probably envisage other methods

I’m really excited by a just-finished OSS analysis (part 2)

As the title suggests, this is the second part in a series describing a process flow visualisation, optimisation and decision support methodology that uses simple log data as input.

Yesterday’s post, part 1 in the series, showed the visualisation aspect in the form of a Sankey flow diagram.

This visualisation is exciting because it shows how your processes are actually flowing (or not), as opposed to the theoretical process diagrams that are laboriously created by BAs in conjunction with SMEs. It also shows which branches in the flow are actually being utilised and where inefficiencies are appearing (and are therefore optimisation targets).

Some people have wondered how simple activity logs can be used to show the Sankey diagrams. Hopefully the diagram below helps to describe this. You scan the log data looking for variants / patterns of flows and overlay those onto a map of decision states (DPs). In the diagram below, there are only 3 DPs, but 303 different variants (sounds implausible, but many variants do multiple loops through the 3 states and are therefore considered to be different variants).

OSS Decision Tree Analysis

The numbers / weightings you see on the Sankey diagram are the number* of instances (of a single flow type) that have transitioned between two DPs / states.

* Note that this is not the same as the count value that appears in the Weightings table. We’ll get to that in tomorrow’s post when we describe how to use the weightings data for decision support.
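For those wondering what the transition counting might look like in practice, here’s a bare-bones Python sketch. It assumes each flow instance has already been reduced to its ordered sequence of DP states; the data is invented:

```python
from collections import Counter

# Each flow instance reduced to its ordered sequence of DP states (invented).
flows = [
    ["DP1", "DP2", "DP3"],
    ["DP1", "DP2", "DP3"],
    ["DP1", "DP2", "DP1", "DP2", "DP3"],  # loops make this a distinct variant
]

# Count transitions between DP pairs: these become the Sankey link widths.
transitions = Counter(t for flow in flows for t in zip(flow, flow[1:]))

# Count whole-path variants: loops through the same states create new variants.
variants = Counter(tuple(flow) for flow in flows)

print(transitions)  # eg ('DP1', 'DP2'): 4, ('DP2', 'DP3'): 3, ('DP2', 'DP1'): 1
print(len(variants), "distinct variants")
```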

I’m really excited by a just-finished OSS analysis

In your travels, I don’t suppose you’ve ever come across anyone having challenges capturing and/or optimising their as-is OSS/BSS process flows? Once or twice?? 🙂

Well I’ve just completed an analysis that I’m really excited about. It’s something I’ve been thinking about for some time, but only finished proving on the weekend. I thought it might have relevance to you too. It quickly helps to visualise as-is processes and identify areas to optimise.

The method takes activity logs (eg from OSS, ITIL, WFM, SAP or similar) and turns them into a process diagram (a Sankey diagram) like the one below, with real instance volumes. Much better than a theoretical process map designed by BAs and SMEs, don’t you think?? And much faster and more accurate too!!

OSS Sankey process diagram

A theoretical process map might just show a sequence of 3 steps, but the diagram above has used actual logs to show what’s really occurring. It highlights governance issues (skipped steps) and inefficiencies (ie the various loops) in the process too. Perfect for process improvement.

But more excitingly, it proves a path towards real-time “predict-forward” decision support without having to get into the complexities of AI. More on that has been included in the analysis!

If this is of interest to you, let me know and I’ll be happy to walk you through the full analysis. Or if you want to know how your real as-is processes perform, I’d be happy to help turn your logs into visuals like the one above.

PS1. You might think you need a lot of fields to prepare the diagrams above. The good news is the only mandatory fields would be something like:

  1. Flow type – eg Order type, project type or similar (only required if the extract contains multiple flow types mixed together. The diagram above represents just one flow type)
  2. Flow instance identifier – eg Order number, project number or similar (the diagram above was based on data that had around 600,000 flow instances)
  3. Activity identifier – eg Activity name (as per the 3 states in the diagram above), recorded against each flow instance. Note that activity names will ideally come from an enumerated list (ie a finite pick-list)
  4. Timestamps – Start/end timestamp on each activity instance

If the log contains other details, such as the name of the operator who completed each activity, that can add richness, but it’s not mandatory.
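To make that minimal schema tangible, here’s what a tiny (entirely invented) extract might look like, and how easily it parses:

```python
import csv, io

# Invented sample of the four mandatory fields described above.
sample_log = """flow_type,flow_id,activity,start_ts,end_ts
Order-ADSL,ORD-100234,Submit Order,2019-05-01T09:00,2019-05-01T09:05
Order-ADSL,ORD-100234,Design Service,2019-05-01T09:05,2019-05-01T10:30
Order-ADSL,ORD-100234,Activate,2019-05-01T10:30,2019-05-01T10:32
"""

rows = list(csv.DictReader(io.StringIO(sample_log)))
print(rows[0]["activity"], "->", rows[1]["activity"])  # Submit Order -> Design Service
```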

PS2. The main objective of the analysis was to test concepts raised in earlier blog posts.

Can you solve the omni-channel identity conundrum for OSS/BSS?

For most end-customers, the OSS/BSS we create are merely back-office systems that they never see. The closest they get are the customer portals that they interact with to drive workflows through our OSS/BSS. And yet, our OSS/BSS still have a big part to play in customer experience. In times where customers can readily substitute one carrier for another, customer service has become a key differentiator for many carriers. It therefore also becomes a priority for our OSS/BSS.

Customers now have multiple engagement options (aka omni-channel) and form factors (eg in-person, phone, tablet, mobile phone, kiosk, etc). The only options we used to have were a call to a contact centre / IVR (Interactive Voice Response), a visit to a store, or a visit from an account manager for business customers. Now there are websites, applications, text messages, multiple social media channels, chatbots, portals, blogs, etc. Each represents a different challenge when it comes to offering a seamless customer experience across all channels.

I’ve just noticed TM Forum’s “Omni-channel Guidebook” (GB994), which does a great job at describing the challenges and opportunities. For example, it explains the importance of identity. End-users can only get a truly seamless experience if they can be uniquely identified across all channels. Unfortunately, some channels (eg IVR, website) don’t force end-users to self-identify.

The Ovum report, “Optimizing Customer Service in a Multi Channel World, March 2011” indicates that around 74% of customers use 3 channels or more for engaging customer service. In most cases, it’s our OSS/BSS that provide the data that supports a seamless experience across channels. But what if we have no unique key? What if the unique key we have (eg phone number) doesn’t uniquely identify the different people who use that contact point (eg different family members who use the same fixed-line phone)?
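I certainly don’t claim to have solved the conundrum, but the shape of one common approach (a cross-reference of channel-specific identifiers against a “golden” customer record) can be sketched in a few lines of Python. All identifiers below are fabricated, and the shared fixed-line entry shows exactly where the approach breaks down:

```python
from typing import List

# Sketch of a cross-channel identity cross-reference ("golden record" lookup).
# All identifiers are fabricated for illustration.
identity_xref = {
    ("web_login", "jsmith01"): ["CUST-789"],
    ("app_device", "A1B2-C3D4"): ["CUST-789"],
    ("phone", "+61-3-5550-1234"): ["CUST-789", "CUST-790"],  # shared fixed line
}

def resolve(channel: str, identifier: str) -> List[str]:
    """Return candidate golden customer IDs for a channel-specific identifier."""
    return identity_xref.get((channel, identifier), [])

print(resolve("web_login", "jsmith01"))     # unambiguous: ['CUST-789']
print(resolve("phone", "+61-3-5550-1234"))  # ambiguous: two household members
print(resolve("ivr", "anonymous"))          # []: the caller didn't self-identify
```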

We could use personality profiling across these channels, but we’ve already seen how that has worked out for Cambridge Analytica and Facebook in terms of customer privacy and security.

I’d love to hear how you’ve done cross-channel identity management in your OSS/BSS. Have you solved the omni-channel identity conundrum?

PS. One thing I find really interesting. The whole omni-channel thing is about giving customers (or potential customers) the ability to connect via the channel they’re most comfortable with. But there’s one glaring exception. When an end-user decides a phone conversation is the only way to resolve their issue (often after already trying the self-service options), they call the contact centre number. But many big telcos insist on trying to deflect as many calls as possible to self-service options (they call it CVR – call volume reduction), because contact centre staff are much more expensive per transaction than the automated channels. That seems to be an anti-customer-experience technique if you ask me. What are your thoughts?

The 3 states of OSS consciousness

The last four posts have discussed how our OSS/BSS need to cope with different modes of working to perform effectively. We started off with the thread of “group flow,” where multiple different users of our tools can work cohesively. Then we talked about how flow requires a lack of interruptions, yet many of the roles using our OSS actually need constant availability (ie to be constantly interrupted).

From a user experience (UI/UX) perspective, we need an awareness of the state the operator/s needs to be in to perform each step of an end-to-end process, be it:

  • Deep think or flow mode – where the operator needs uninterrupted time to resolve a complex and/or complicated activity (eg a design activity)
  • Constant availability mode – where the operator needs to quickly respond to the needs of others and therefore needs a stream of notifications / interruptions (eg network fault resolutions)
  • Group flow mode – where a group of operators need to collaborate effectively and cohesively to resolve a complex and/or complicated activity (eg resolve a cross-domain fault situation)

This is a strong argument for every OSS/BSS supplier to have UI/UX experts on their team. Yet most leave their UI/UX with their coders. They tend to take the perspective that if the function can be performed, it’s time to move on to building the next function. That was the same argument used by all MP3 player suppliers before the iPod came along with its beautiful form and function and dominated the market.

Interestingly, modern architectural principles potentially make UI/UX design more challenging. With old, monolithic OSS/BSS, you at least had more control over end-to-end workflows (I’m not suggesting we should go back to the monoliths BTW). These days, you need to accommodate the unique nuances / inconsistencies of third-party modules like APIs / microservices.

As Evan Linwood incisively identified, “I guess we live in the age of cloud based API providers, theoretically enabling loads of pre-canned integration patterns, but these may not be ideal for a large service provider… Definitely if the underlying availability isn’t there, but could also occur through things like schema mismanagement across multiple providers? (Which might actually be an argument for better management / B/OSS, rather than against the use of microservices!)”

Am I convincing any of you to hire more UI/UX resources? Or convincing you to register for UI/UX as your next training course instead of learning a ninth programming language?

Put simply, we need your assistance to take our OSS from this…
Old MP3 player

To this…
iPod

Stealing Fire for OSS (part 2)

Yesterday’s post talked about the difference between “flow state” and “office state” in relation to OSS delivery. It referenced a book I’m currently reading called Stealing Fire.

The post mainly focused on how the interruptions of “office state” actually inhibit our productivity, learning and ability to think laterally on our OSS. But that got me thinking that perhaps flow doesn’t just relate to OSS project delivery. It also relates to post-implementation use of the OSS we implement.

If we think about the various personas who use an OSS (such as NOC operators, designers, order entry operators, capacity planners, etc), do our user interfaces and workflows assist or inhibit them to get into the zone? More importantly, if those personas need to work collaboratively with others, do we facilitate them getting into “group flow?”

Stealing Fire suggests that it costs around $500k to train each Navy SEAL and around $4.25m to train each elite SEAL (DEVGRU). It also describes how this level of training allows DEVGRU units to quickly get into group flow and function together almost as if choreographed, even in high-pressure / high-noise environments.

Contrast this with collaborative activities within our OSS. We use tickets, emails, Slack notifications, work order activity lists, etc to collaborate. It seems to me that these are the precise instruments that prevent us from getting into flow individually. I assume it’s the same collectively. I can’t think back to any end-to-end OSS workflows that seem highly choreographed or seamlessly effective.

Think about it. If you experience significant rates of process fall-out / error, then it would seem to indicate an OSS that’s not conducive to group flow. Ditto for lengthy O2A (order to activate) or T2R (trouble to resolve) times. Ditto for bringing new products to market.

I’d love to hear your thoughts. Has any OSS environment you’ve worked in facilitated group flow? If so, was it the people and/or the tools? Alternatively, have the OSS you’ve used inhibited group flow?

PS. Stealing Fire details how organisations such as Google and DARPA are investing heavily in flow research. They can obviously see the pay-off from those investments (or potential pay-offs). We seem to barely even invest in UI/UX experts to assist with the designs of our OSS products and workflows.

Stealing fire for OSS

I’ve recently started reading a book called Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work. To completely over-generalise the subject matter, it’s about finding optimal performance states, aka finding flow. Not the normal topic of conversation for here on the PAOSS blog!!

However, the book’s content has helped to make the link between flow and OSS more palpable than you might think.

In the early days of working on OSS delivery projects, I found myself getting into a flow state on a daily basis – achieving more than I thought capable, learning more effectively than I thought capable and completely losing track of time. In those days of project delivery, I was lucky enough to get hours at a time without interruptions, to focus on what was an almost overwhelming list of tasks to be done. Over the first 5-ish years in OSS, I averaged an 85-hour week because I was just so absorbed by it. It was the source from where my passion for OSS originated. Or was it??

The book now has me pondering a chicken or egg conundrum – did I become so passionate about OSS that I could get into a state of flow or did I only become passionate about OSS because I was able to readily get into a state of flow with it? That’s where the book provides the link between getting in the zone and the brain chemicals that leave us with a feeling of ecstasis or happiness (not to mention the addictive nature of it). The authors describe this state of consciousness as Selflessness, Timelessness, Effortlessness, and Richness, or STER for short. OSS definitely triggered STER for me, but chicken or egg??

Having spent much of the last few years embedded in big corporate environments, I’ve found a decreased ability to get into the same flow state. Meetings, emails, messenger pop-ups, distractions from surrounding areas in open-plan offices, etc. They all interrupt. It’s left me with a diminishing opportunity to get in the zone. With that has come a growing unease and sense of sub-optimal productivity during “office hours.” It was increasingly disheartening that I could generally only get into the zone outside office hours. For example, whilst writing blogs on the train-trip or in the hours after the rest of my family was asleep.

Since making the concerted effort to leave that “office state,” I’ve been both surprised and delighted at the increased productivity. Not just that, but the ability to make better lateral connections of ideas and to learn more effectively again.

I’d love to hear your thoughts on this in the comments section below. Some big questions for you:

  1. Have you experienced a similar productivity gap between “flow state” and “office state” on your OSS projects?
  2. Have you had the same experience as me, where modern ways of working seem to be lessening the long chunks of time required to get into flow state?
  3. If yes, how can our sponsor organisations and our OSS products continue to progress if we’re increasingly working only in office state?