The four styles of RPA used in OSS

You’re probably already aware of RPA (Robotic Process Automation) tools. You’ve possibly even used one (or more) to enhance your OSS experience. In some ways, they’re a really good addition to your OSS suite. In some ways, potentially not. That all comes down to the way you use them.

There are four main ways that I see them being used (but happy for you to point out others):

  1. Automating repeatable tasks – following an algorithmic approach to getting regular, mundane tasks done (eg weekly report generation)
  2. Streamlining processes / tasks – again following an algorithmic approach to assist an operator during a process (eg reducing the amount of data entry when there is duplication between systems)
  3. Predefined decision support – to guide operators through a process that involves making different decisions based on the information being presented (eg in a highly regulated or complex process, with many options, RPA rules can ensure quality remains high)
  4. As part of a closed-loop system – if your RPA tool can handle changes to its rules through feedback (ie not just static rules) then it can become an important part of a learning, improving solution
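As a toy sketch of the gap between styles 1-3 and style 4, consider a static rule versus one whose threshold adjusts from operator feedback. All class names, thresholds and the learning rate here are hypothetical, purely for illustration:

```python
# Hypothetical sketch: a static RPA rule vs one that adapts from feedback.
class StaticRule:
    """Styles 1-3: a fixed algorithm that never changes."""
    def __init__(self, threshold):
        self.threshold = threshold

    def should_alert(self, value):
        return value > self.threshold


class FeedbackRule(StaticRule):
    """Style 4: the threshold drifts toward observed outcomes,
    so the rule participates in a closed learning loop."""
    def record_feedback(self, value, was_real_problem):
        # Nudge the threshold down if we missed a real problem,
        # up if we raised a false alarm (illustrative 10% learning rate).
        if was_real_problem and not self.should_alert(value):
            self.threshold -= 0.1 * (self.threshold - value)
        elif not was_real_problem and self.should_alert(value):
            self.threshold += 0.1 * (value - self.threshold)
```

The point of the sketch is only that a style-4 rule has a feedback channel; a production closed-loop system would obviously use something far more robust than a single drifting threshold.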

You’ll notice an increasing level of sophistication from 1-4. Not just sophistication but potential value to operators too.

We’ll take a closer look at the use of RPA in OSS over the next couple of days.

The evolving complexity of RCA

Root cause analysis (RCA) is one of the great challenges of OSS. As you know, it aims to identify the probable cause of an alarm-storm, where all alarms are actually related to a single fault.

In the past, my go-to approach was to start with a circuit hierarchy-based algorithm. If you had an awareness of the hierarchy of circuits, usually via inventory, then a lower-order fault (eg Loss of Signal on a transmission link caused by a cable break) could be used to suppress all higher-order alarms (ie from bearers or tributaries that were dependent upon the L1 link). That worked well in the fixed networks of the distant past (think SDH / PDH), not least because the approach was repeatable between different customer environments.
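The hierarchy-based suppression described above can be sketched in a few lines. Everything here (circuit names, the shape of the hierarchy map) is hypothetical, but the logic is the essence of the algorithm: if a circuit's ancestor is also alarming, the dependent alarm is suppressed and only the lowest-order alarm survives as the probable root cause.

```python
# Hypothetical sketch of circuit-hierarchy alarm suppression.
def suppress_dependent_alarms(alarms, parent_of):
    """alarms: set of circuit ids with active alarms.
    parent_of: maps a circuit to the lower-order circuit it rides on."""
    suppressed = set()
    for circuit in alarms:
        hop = parent_of.get(circuit)
        while hop is not None:
            if hop in alarms:          # an ancestor is also alarming
                suppressed.add(circuit)
                break
            hop = parent_of.get(hop)
    return alarms - suppressed         # candidate root causes only

# Example: a cable break on the L1 fibre raises alarms all the way up.
hierarchy = {"tributary-1": "bearer-A", "bearer-A": "fibre-7"}
roots = suppress_dependent_alarms({"tributary-1", "bearer-A", "fibre-7"},
                                  hierarchy)
# Only "fibre-7" survives as the probable root cause.
```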

Packet-switching data networks changed that to an extent, because a data service could traverse any number of links, on-net or off-net (ie leased links). The circuit hierarchy approach was still applicable, but needed to be supplemented with other rules.

Now virtualised networking is changing it again. RCA loses a little relevance in the virtualised layer. Workloads and resource allocations are dynamic and transient, making them less suited to fixed algorithms. The objective now becomes self-healing – if a failure is identified, failed resources are spun down and new ones spun up to take the load. The circuit hierarchy approach loses relevance, but perhaps infrastructure hierarchy still remains useful. Cable breaks, server melt-downs and hanging controller applications are all examples of root causes that will cause problems in higher layers.

Rather than fixed rules, machine-based pattern-matching is the next big hope for coping with dynamically changing networks.

The number of layers and the complexity of the network seem to be ever increasing, and with them RCA becomes more sophisticated… If only we could evolve to simpler networks rather than more complex ones. Wishful thinking?

If the customer thinks they have a problem, they do have a problem

Omni-channel is an interesting concept because it generates two distinctly different views.
The customer will use whichever channel (eg digital, apps, contact-centre, IVR, etc) that they want to use.
The service provider will try to push the customer onto whichever channel suits the service provider best.

The customer will often want to use digital or apps, back-ended by OSS – whether that’s to place an order, make configuration changes, etc. The service provider is happy for the customer to use these low-cost, self-service channels.

But when the customer has a problem, they’ll often try to self-diagnose, then prefer to speak with a person who has the skills to trouble-shoot and work with the back-end systems and processes. Unfortunately, the service provider still tries to push the customer into low-cost, self-service channels. Ooops!

If the customer thinks they have a problem, they do have a problem (even if technically, they don’t).
Omni-channel means giving customers the channels that they want to work via, not the channels the service provider wants them to work via.
Call Volume Reduction (CVR) projects (which can overlap into our OSS) sometimes lose sight of this fact just because the service provider has their heart set on reducing costs.

Funding beyond the walls of operations

“You can have more – if you become more.”
Jim Rohn.

I believe that this is as true of our OSS as it is of ourselves.

Many people use the name Operational Support Systems to put an electric fence around our OSS, limiting their use to just operational activities. However, the reach, awareness and power of what they (we) offer goes far beyond that.

We have powerful insights to deliver to almost every department in an organisation – beyond just operations and IT. But first we need to understand the challenges and opportunities faced by those departments so that we can give them something useful.

That doesn’t necessarily mean expensive integrations but unlocking the knowledge that resides in our data.

Looking for additional funding for your OSS? Start by seeking ways to make it more valuable to more people… or even one step further back – seeking to understand the challenges beyond the walls of operations.

When low OSS performance is actually high performance

“It’s not unusual for something to be positioned as the high performance alternative. The car that can go 0 to 60 in three seconds, the corkscrew that’s five times faster, the punch press that’s incredibly efficient…
The thing is, though, that the high performance vs. low performance debate misses something. High at what?
That corkscrew that’s optimized for speed is more expensive, more difficult to operate and requires more maintenance.
That car that goes so fast is also more difficult to drive, harder to park and generally a pain in the neck to live with.
You may find that a low-performance alternative is exactly what you need to actually get your work done. Which is the highest performance you can hope for.”
Seth Godin, in his article What sort of performance?

Whether selecting a vendor / product, designing requirements or building an OSS solution, we can sometimes lose track of what level of performance is actually required to get the work done can’t we?

How many times have you seen a requirement sheet that specifies a Ferrari, but you know the customer lives in a location with potholed and cobblestoned roads? Is it right to spec them – sell them – build them – charge them for a Ferrari?

I have to admit to being guilty of this one too. I have gotten carried away in what the OSS can do, nearer the higher performance end of the spectrum, rather than taking the more pragmatic view of what the customer really needs.

Automations, custom reports and integrations are the perfect OSS examples of low performance actually being high performance. We spend a truckload of money on these types of features to avoid manual tasks (curse having to do those manual tasks)… when a simple cost-benefit analysis would reveal that it makes a lot more sense to stay manual in many cases.
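A simple break-even sketch shows the kind of cost-benefit check worth running before automating a manual task. All figures below are hypothetical placeholders, not real project numbers:

```python
# Back-of-envelope break-even for automating a manual task.
def breakeven_years(build_cost, yearly_maintenance,
                    runs_per_year, minutes_per_run, hourly_rate):
    manual_cost_per_year = runs_per_year * (minutes_per_run / 60) * hourly_rate
    saving = manual_cost_per_year - yearly_maintenance
    if saving <= 0:
        return None  # the automation never pays for itself
    return build_cost / saving

# A weekly 30-minute report, automated for $40k build + $5k/yr upkeep:
years = breakeven_years(40_000, 5_000, 52, 30, 80)
# The manual cost ($2,080/yr) never covers the upkeep - staying manual wins.
```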

The double-edged sword of OSS/BSS integrations

…good argument for a merged OSS/BSS, wouldn’t you say?
John Malecki

The question above was posed in relation to Friday’s post about the currency and relevance of OSS compared with research reports, analyses and strategic plans as well as how to extend OSS longevity.

This is a brilliant, multi-faceted question from John. My belief is that it is a double-edged sword.

Out of my experiences with many OSS, one product stands out above all the others I’ve worked with. It’s an integrated suite of Fault Management, Performance Management, Customer Management, Product / Service Management, Configuration / orchestration / auto-provisioning, Outside Plant Management / GIS, Traffic Engineering, Trouble Ticketing, Ticket of Work Management, and much more, all tied together with the most elegant inventory data model I’ve seen.

Being a single vendor solution built on a relational database, the cross-pollination (enrichment) of data between all these different modules made it the most powerful insight engine I’ve worked with. With some SQL skills and an understanding of the data model, you could ask it complex cross-domain questions quite easily because all the data was stored in a single database. That edge of the sword made a powerful argument for a merged OSS/BSS.
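To illustrate the kind of cross-domain question a single relational store makes easy, here's a sketch using an invented, vastly simplified schema (the table and column names bear no relation to the real product's data model):

```python
import sqlite3

# Hypothetical three-domain schema: circuits, alarms and tickets.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE circuit (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE alarm   (circuit_id INTEGER, severity TEXT);
    CREATE TABLE ticket  (circuit_id INTEGER, status TEXT);
    INSERT INTO circuit VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO alarm   VALUES (1, 'critical'), (1, 'major'), (2, 'minor');
    INSERT INTO ticket  VALUES (1, 'open');
""")

# "Which customers have critical alarms on circuits with an open ticket?"
rows = db.execute("""
    SELECT DISTINCT c.customer
    FROM circuit c
    JOIN alarm  a ON a.circuit_id = c.id AND a.severity = 'critical'
    JOIN ticket t ON t.circuit_id = c.id AND t.status  = 'open'
""").fetchall()
```

When assurance, ticketing and inventory live in separate products, answering the same question means correlating exports from three systems; in one database it's a three-table join.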

Unfortunately, the level of cross-referencing that made it so powerful also made it really challenging to build an initial data set to facilitate all modules being inter-operable. By contrast, an independent inventory management solution could just pull data out of each NMS / EMS under management, massage the data for ingestion and then you’d have an operational system. The abovementioned solution also worked this way for inventory, but to get the other modules cross-referenced with the inventory required engineering rules, hand-stitched spreadsheets, rules of thumb, etc. Maintaining and upgrading also became challenges after the initial data had been created. In many cases, the clients didn’t have all of the data that was needed, so a data creation exercise needed to be undertaken.

If I had the choice, I would’ve done more of the cross-referencing at data level (eg via queries / reports) rather than entwining the modules together so tightly at application level… except in the most compelling cases. It’s an example of the chess-board analogy.

If given the option between merged (tightly coupled) and loosely coupled, which would you choose? Do you have any insights or experiences to share on how you’ve struck the best balance?

Keeping the OSS executioner away

“With the increasing pace of change, the moment a research report, competitive analysis, or strategic plan is delivered to a client, its currency and relevance rapidly diminishes as new trends, issues, and unforeseen disrupters arise.”
Soren Kaplan

By the same token as the quote above, does it follow that the currency and relevance of an OSS rapidly diminishes as soon as it is delivered to a client?

In the case of research reports, analyses and strategic plans, currency diminishes because the static data sets upon which they’re built are also losing currency. That’s not the case for an OSS – they are data collection and processing engines for streaming (ie constantly refreshing) data. [As an aside – relevance can still decrease if data quality is steadily deteriorating, irrespective of currency. Meanwhile currency can decrease if the ever-expanding pool of OSS data becomes so large as to be unmanageable, or if responsiveness is usurped by newer data processing technologies.]

However, as with research reports, analyses and strategic plans, the value of an OSS is not so much related to the data collected, but the questions asked of, and answers / insights derived from, that data.

Apart from the asides mentioned above, the currency and relevance of an OSS only diminish as a result of new trends, issues and disrupters if new questions cannot be, or are not being, asked of it.

You’ll recall from yesterday’s post that, “An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data,” is as true of OSS tools as it is of OSS consultants. I’m constantly surprised that so few OSS are designed with intuitive, flexible data interrogation tools built in. It seems that product teams are happy to delegate that responsibility to off-the-shelf reporting tools or leave it up to the client to build their own.

The future of telco / service provider consulting

“Change happens when YOU and I DO things. Not when we argue.”
James Altucher

We recently discussed how ego can cause stagnation in OSS delivery. The same post also indicated how smart contracts potentially streamline OSS delivery and change management.

Along similar analytical lines, there’s a structural shift underway in traditional business consulting, as described in a recent post contrasting “clean” and “dirty” consulting. There’s increasing skepticism of traditional “gut-feel” or “set-and-forget” (aka clean) consulting and greater client trust in hard data / analytics and end-to-end implementation (aka dirty consulting).

Clients have less need for consultants that just turn the ignition and lay out sketchy directions, but increasingly need ones that can help drive the car all the way to their desired destination.

Consultants capable of meeting these needs for the telco / service provider industries have:

  • Extensive coal-face (delivery) experience, seeing and learning from real success and failure situations / scenarios
  • An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data
  • An ability to build repeatable frameworks (including the development of smart contracts)
  • A mix of business, IT and network / tech expertise, like all valuable tripods

Have you noticed that the four key features above are perfectly aligned with having worked in OSS? OSS/BSS data stores contain information that’s relevant to all parts of a telco / service provider business. That makes us perfectly suited to being the high-value consultants of the future, not just contractors into operations business units.

Few consultancy tasks are productisable today, but as technology continues to advance, traditional consulting roles will increasingly be replaced by IP (Intellectual Property) frameworks, data analytics, automations and tools… as long as the technology provides real business benefit.

A deeper level of OSS connection

Yesterday we talked about the cuckoo-bird analogy and how it has prevented telcos from building more valuable platforms on top of their capital-intensive network platforms. With thanks to Dean Bubley, the post gave examples of how the most successful platform plays were platforms on platforms (eg Microsoft Office on Windows, iTunes on iOS, phones on physical networks, etc).

The telcos have found it difficult to build the second layer of platform on their data networks during the Internet age to keep the cuckoo chicks out of the nest.

Telcos are great at helping customers to make connections. OSS are great at establishing and maintaining those connections. But there’s a deeper level of connection waiting for us to support – helping the telcos’ customers to make valuable connections that they wouldn’t otherwise be able to make by themselves.

In the past, telcos provided yellow pages directories to help along these lines. The internet and social media have marginalised the value of this telco-owned asset in recent years.

But the telcos still own massive subscriber bases (within our OSS / BSS suites). How can our OSS / BSS facilitate a deeper level of connection, providing the telcos’ customers with valuable connections that they would not have otherwise made?

To reduce OSS dark data (or not)?

Dark data is the name for data that is collected but never used.
It’s said that 96-98% of all data is dark data (not that I can confirm or deny those claims).

Dark data forms the bottom layer of the DIKW (Data, Information, Knowledge, Wisdom) hierarchy.

What would the dark data percentage be within OSS do you think? Or more specifically, your OSS?

If you’re not going to use it, then why collect it?

I have two conflicting trains of thought here:

  • The Minimum Viable Data perspective; and
  • It’s relatively cheap and easy to collect / store raw data if an interface is already built, so hoard it all just in case your data scientists (or automated data algorithms) ever need it

Where do you sit on the data collection spectrum?

Micro-strangulation vs COTS customisation

Over the last couple of posts, we’ve referred to a diagram of the increasing percentage of tech debt over time, and its ability to create a glass ceiling on OSS feature releases.

Yesterday’s post indicated that the current proliferation of microservices has the potential to amplify the strangulation.

So how does that compare with the previous approach that was built around COTS (Commercial off-the-shelf) OSS packages?

With COTS, the same time-series chart exists; it’s just that the management of legacy, etc, falls largely to the COTS vendor, freeing up the service provider… until the service provider starts building customisations and the overhead becomes shared.

With microservices, the rationalisation responsibility is shifted to the in-house (or insourced) microservice developers.

And a third option: If the COTS is actually delivered via a cloud “OSS as a service” (OSSaaS) model, then there’s a greater incentive for the vendor to constantly re-factor and reduce clutter.

A fourth option, which I haven’t actually seen as a business model yet, is that once an accumulation of modular microservices begins to grow, vendors might begin to offer those microservices as a COTS offering.

Can the OSS mammoths survive extinction?

“Startups win with data. Mammoths go extinct with products.”
Jay Sharma

Interesting phraseology. I love the play on words with the term mammoths. There are some telcos that are mammoth in size but are threatened with extinction through changes in environment and the appearance of new competitors.

I tend to agree with the intent of the quote, but also have some reservations. For example, products are still a key part of the business model of digital phenoms like Google, Facebook, etc. It’s their compelling products that allow them to collect the all-important data. As consumers, we want the product; they get our data. We also want the products sold by the Mammoths, but perhaps they don’t leverage the data entwined in our usage (or more importantly, the advertising revenue that gets attracted to all that usage) as well as the phenoms do.

Another interesting play on words exists here for the telcos – in the “winning with data.” Telcos are losing at data (their profitability per bit is rapidly declining to the point of commoditisation), so perhaps a mindset shift is required. Moving the business model that’s built on the transport of data to a model based on the understanding of, and learning from, data. It’s certainly not a lack of data that’s holding them back. Our OSS / BSS collect and curate plenty. The difference is that Google’s and Facebook’s customers are advertisers, whilst the Mammoths’ customers are subscribers.

As OSS providers, the question remains for us to solve – how can we provide the products that allow the Mammoths to win with data?

PS. The other part of this equation is the rise of data privacy regulations such as GDPR (General Data Protection Regulation). Is it just me, or do the Mammoths seem to attract more attention in relation to privacy of our data than the OTT service providers?

Analytics and OSS seasonality

Seasonality is an important factor for network and service assurance. It’s also known as time-of-day/week/month/year specific activity.

For example, we often monitor network health through the analysis of performance metrics (eg CPU utilisation) and set up thresholds to alert us if those metrics go above (or below) certain levels. The most basic threshold is a fixed one (eg if a CPU goes above 95% utilisation, then raise an alert). However, this might just create unnecessary activity. Perhaps we run an extract at 2am every evening, which causes CPU utilisation to sit at nearly 100% for long periods of time. We don’t want to receive an alert in the middle of the night for what might be expected behaviour.

Another example might be a higher network load for phone / SMS traffic on major holidays or during disaster events.

The great thing about modern analytics tools is that as long as they have long time series of data, then they can spot patterns of expected behaviour at certain times/dates that humans might not be observing and adjust alerting accordingly. This reduces the number of spurious notifications for network assurance operators to chase up on.
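A minimal sketch of the idea: learn a per-hour baseline from history, then flag only readings that are unusual for that hour, so the expected 2am spike stays quiet. The data, thresholds and three-sigma rule below are purely illustrative:

```python
from statistics import mean, stdev

# Sketch: per-hour baselines make seasonality-aware thresholds.
def hourly_baseline(history):
    """history: list of (hour, cpu_percent) samples."""
    by_hour = {}
    for hour, value in history:
        by_hour.setdefault(hour, []).append(value)
    # Need at least two samples per hour to estimate a spread.
    return {h: (mean(v), stdev(v)) for h, v in by_hour.items() if len(v) > 1}

def is_anomalous(baseline, hour, value, sigmas=3):
    mu, sd = baseline[hour]
    return abs(value - mu) > sigmas * max(sd, 1e-9)

# 2am historically runs hot (batch extract); 2pm runs cool.
history = ([(2, v) for v in (97, 98, 99, 98)] +
           [(14, v) for v in (20, 22, 21, 23)])
base = hourly_baseline(history)
# 98% CPU at 2am is normal; the same reading at 2pm is an anomaly.
```

Real analytics tools do far more than this (trend removal, weekly and yearly cycles, confidence bands), but the principle is the same: compare each reading to what is expected at that time, not to one global threshold.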

10 ways to #GetOutOfTheBuilding

Eric Ries’ “The Lean Startup,” has a short chapter entitled, “Get out of the Building.” It basically describes getting away from your screen – away from reading market research, white papers, your business plan, your code, etc – and out into customer-land. Out of your comfort zone and into a world of primary research that extends beyond talking to your uncle (see video below for that reference!).

This concept applies equally well to OSS product developers as it does to start-up entrepreneurs. In fact the concept is so important that the chapter name has inspired its own hashtag (#GetOutOfTheBuilding).

This YouTube video provides 10 tips for getting out of the building (I’ve started the clip at Tendai Charasika’s list of 10 ways but you may want to scroll back a bit for his more detailed descriptions).

But there’s one thing that’s even better than getting out of the building and asking questions of customers. After all, customers don’t always tell the complete truth (even when they have good intentions). No, the better research is to observe what they do, not what they say. #ObserveWhatTheyDoNotWhatTheySay

This could be by being out of the building and observing customer behaviour… or it could be through looking at customer usage statistics generated by your OSS. That data might just show what a customer is doing… or not doing (eg customers might do small volume transactions through the OSS user interface, but have a hack for bulk transactions because the UI isn’t efficient at scale).

Not sure if it’s indicative of the industry as a whole, but my experience working for / with vendors is that they don’t heavily subscribe to either of these hashtags when designing and refining their products.

Does your OSS collect primary data to #ObserveWhatTheyDoNotWhatTheySay? If it does, do you ever make use of it? Or do you prefer to talk with your uncle (does he know much about OSS BTW)?

Watching customers under an omnichannel strobe light

Omnichannel will remain full of holes until we figure out a way of tracking user journeys rather than trying to prescribe (design, document, maintain) process flows.

As a customer jumps between the various channels, they move between systems. In doing so, we tend to lose the ability to watch the customer’s journey as a single continuous flow. It’s like trying to watch customer behaviour under a strobe light… except that the light only strobes on for a few seconds every minute.

Theoretically, omnichannel is a great concept for customers because it allows them to step through any channel at any time to suit their unique behavioural preferences. In practice, it can be a challenging experience for customers because of a lack of consistency and flow between channels.

It’s a massive challenge for providers to deliver consistency and flow because the disparate channels have vastly different user interfaces and experiences. IVR, digital, retail, etc all come from completely different design roots.

Vendors are selling the dream of cost reductions through improved efficiency within their channels. Unfortunately this is the wrong place for a service provider to look. It’s the easier place to look, but the wrong place nonetheless. Processes already tend to be relatively efficient within a channel and data tends to be tracked well within a channel.

The much harder, but better place to seek benefits is through the cross-channel user journeys, the hand-offs between channels. That’s where the real competitive advantage opportunities lie.
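One minimal way to start making those hand-offs visible is to stitch per-channel event logs into a single per-customer timeline. The event shapes, channel names and customer ids below are invented for illustration:

```python
# Sketch: merge per-channel logs into one time-ordered journey per
# customer, so cross-channel hand-offs become visible.
def stitch_journeys(*channel_logs):
    """Each log is a list of (timestamp, customer_id, channel, event)."""
    journeys = {}
    for log in channel_logs:
        for ts, customer, channel, event in log:
            journeys.setdefault(customer, []).append((ts, channel, event))
    for events in journeys.values():
        events.sort()  # chronological order reveals the hand-offs
    return journeys

ivr  = [(10, "cust-1", "IVR",    "reported fault")]
web  = [(5,  "cust-1", "web",    "ran self-diagnosis")]
shop = [(60, "cust-1", "retail", "asked for escalation")]
journey = stitch_journeys(ivr, web, shop)["cust-1"]
# -> web self-diagnosis, then IVR, then retail: two hand-offs to examine.
```

In practice the hard part isn't the merge, it's having a common customer identifier and consistent event capture across channels, which is exactly where the disparate design roots of each channel bite.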

Do you want dirty or clean automation?

Earlier in the week, we spoke about the differences between dirty and clean consulting, as posed by Dr Richard Claydon, and how it impacted the use of consultants on OSS projects.

The same clean / dirty construct applies to automation projects / tools such as RPA (Robotic Process Automation).

Clean Automation = simply building robotic automations (ie fixed algorithms) that manage existing process designs.
Dirty Automation = understanding the process deeply first, optimising it for automation, then creating the automation.

The first is cheap(er) and easy(er)… in the short-term at least.
The second requires getting hands dirty, analysing flows, analysing work practices, analysing data / logs, understanding operator psychology, identifying inefficiencies, refining processes to make them better suited to automation, etc.

Dirty automation requires analysis, not just of the SOP (Standard Operating Procedure), but the actual state-changes occurring from start to end of each iteration of process implementation.
This also represents the better launching-off point to lead into machine-learning (ie cognitive automation), rather than algorithmic or robotic automation.
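A tiny sketch of that state-change analysis: count the transitions that actually occur across process executions and compare them against the SOP’s intended flow. The states and runs below are hypothetical:

```python
from collections import Counter

# Sketch of "dirty" analysis: tally actual state transitions from logs.
def transition_counts(runs):
    """runs: list of state sequences, one per process execution."""
    counts = Counter()
    for states in runs:
        counts.update(zip(states, states[1:]))
    return counts

runs = [
    ["raised", "triaged", "fixed", "closed"],
    ["raised", "triaged", "rework", "triaged", "fixed", "closed"],  # loop!
]
counts = transition_counts(runs)
# The ('triaged', 'rework') edge reveals a loop the SOP never mentions -
# exactly the kind of inefficiency to refine before automating.
```

This is the crudest form of process mining, but even this much shows where reality diverges from the documented process.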

Are we measuring OSS at the wrong end?

I have a really simple philosophical question to pose of you today – Are we measuring our OSS at the wrong end?
It seems that a vast majority of our OSS measurement is at the input end of a process rather than at the output.

Just a few examples:

  • Financial predictions in a business case vs Return on Invested Capital (ROIC) of that project
  • Implementation costs vs lifetime ownership implication costs
  • Revenues vs profitability (of products, services, workflows, activities, etc)
  • OSS costs vs enablement of service and/or monetisation of assets (ie operationalising assets such as network equipment via service activation)
  • OSS incidents raised (or even resolved) vs insurance on brand value (ie prevention of negative word-of-mouth caused by network / service outages)

In each of these cases, it’s much easier to measure the inputs. However, the output measurements portray a far more powerful message don’t you think?

6 principles of OSS UI design

When we talk about building capabilities by design, there are a set of four core capabilities that you should keep in mind:

  • Designed for self-sufficiency: Enable an environment where the business user is capable of acquiring, blending, presenting, and visualizing their data discoveries. IT needs to move away from being command and control to being an information broker in a new kind of business-IT partnership that removes barriers, so that users have more options, more empowerment, and greater autonomy.
  • Designed for collaboration: Have tools and platforms that allow people to share and work together on different ideas for review and contribution. This further closes that business-IT gap, establishes transparency, and fosters a collective learning culture.
  • Designed for visualization: Data visualizations have been elevated to a whole new form of communication that leverages cognitive hardwiring, enriches visual discovery, and helps tell a story about data to move from understanding to insight.
  • Designed for mobility: It is not enough to be just able to consume information on mobile devices, instead users must be able to work and play with data “on the go” and make discovery a portable, personalized experience.

Lindy Ryan in the book, “The Visual Imperative: Creating a Visual Culture of Data Discovery.”

When it comes to OSS specifically, I have two additional design principles:

  • Designed for Search – there is so much data in our OSS / BSS suites; some linked, some not; some normalised, some not; some cleansed, some not. This design principle abstracts away those data challenges, allowing operators to make pseudo-natural-language requests for information. (This could be considered an overlap between points 1 and 3 in the prior list.)
  • Designed for user journeys – in an omni-channel world, the entry point and traversal of any OSS workflow could cross multiple channels (eg online, retail store, IVR, app, etc). This makes pre-defined workflows almost impossible to design / predict. Instead, an OSS / BSS suite must be able to handle complete flexibility of user journeys between state / event transitions

Building an OSS piggybank with scoreboard pressure

“The gameplan tells what you want to happen, but the scoreboard tells what is happening.”
John C Maxwell

Over the years, I’ve found it interesting that most of the organisations I’ve consulted to have significant hoops for a new OSS to jump through to get funded (the gameplan), but rarely spend much time on the results (the scoreboard)… apart from the burndown of capital during the implementation project.

From one perspective, that’s great for OSS implementers. With less accountability, we can move straight on to the next implementation and not have to justify whether our projects are worth the investment. It allows us to focus on justifying whether we’ve done a technically brilliant implementation instead.

However, from the other perspective, we’re short-changing ourselves if we’re not proving the value of our projects. We’re not building up the credits in the sponsor bank ahead of the inevitable withdrawals (ie when one of our OSS projects goes over time, budget or functionality is reduced to bring in time/budget). It’s the lack of credits that make sponsors skeptical of any OSS investment value and force the aforementioned jumping through hoops.

One of our OSS‘s primary functions is to collect and process data – to be the central nervous system for our organisations. We have the data to build the scoreboards. Perhaps we just don’t apply enough creativity to proving the enormous value of what our OSS are facilitating.

Do you ever consider whether you’re on the left or right side of this ledger / scoreboard?

If OSS is my hammer, am I only seeing nails?

OSS is a powerful multi-purpose tool, much like a hammer.

If OSS is my only tool, do I see all problems as nails that I have to drive home with my OSS?

The downside of this is that each OSS-driven solution then needs to be designed, built, integrated, tested, released, supported, upgraded, data-curated and maintained. The Total Cost of Ownership (TCO) for a given problem extends far beyond the time-frame envisaged during most solutioning exercises.

To be honest, I’ve probably been guilty of using OSS to solve problems before seeking alternatives in the past.

What if our going-in position was that answers should be found elsewhere – outside OSS – and OSS simply becomes the all-powerful last resort? The sledgehammer rather than the ball-pein hammer.

With all this big data I keep hearing about, has anyone ever seen any stats relating to the real life-time costs of OSS customisations made by a service provider to its off-the-shelf OSS? If such data exists, I’d love to see what the cost-benefit break-even point might look like and what we could learn from it. I assume we’re contributing to our very own Whale Curve but have nothing to back that assumption up yet.