Compiling “The Zen of OSS” perhaps?

A recent presentation just reminded me of “The Zen of Python.” It’s a collection of software principles – 19 written aphorisms (with a 20th famously left unwritten) – particularly as they relate to the Python programming language.

Since OSS are software at their core, (almost) all of the principles (not sure about the “Dutch” one) apply to OSS in a programming sense, but perhaps in a broader sense as well. I’d like to share two pairings:

Errors should never pass silently.
Unless explicitly silenced.

Unfortunately too many do pass silently, particularly across “best-of-breed” OSS stacks.
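
For those who haven’t seen the Python behaviours behind that pairing, here’s a minimal sketch (the `load_config_*` helpers are purely hypothetical): a bare catch-all lets every error pass silently, while `contextlib.suppress` silences one named error explicitly.

```python
import contextlib

# Anti-pattern: a catch-all that lets every error pass silently
def load_config_bad(path):
    try:
        return open(path).read()
    except Exception:
        return ""  # a typo'd path, a permissions problem or a genuine bug all vanish here

# Better: explicitly silence only the one error we expect
def load_config_good(path):
    with contextlib.suppress(FileNotFoundError):
        return open(path).read()
    return ""  # reached only when the file genuinely doesn't exist

print(load_config_bad("/no/such/file"))
print(load_config_good("/no/such/file"))
```

The second version still “silences” an error, but only the one that was explicitly anticipated – anything else surfaces loudly, which is exactly what we’d want from an OSS stack too.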

And

Now is better than never.
Although never is often better than right now.

An especially good hint if you’re working within an Agile model!

So that got me thinking (yes, scary, I know!). What would a Zen of OSS look like? I’d be delighted to accept your suggestions. Does one already exist (and no, I’m not referring to the vendor, Zenoss)?

In the meantime, I think I’ll have to prepare a list. However, you can be almost assured that the first principle of the Zen of OSS will be:

Just because you can, doesn’t mean you should.

This one OSS factor can give a sustainable advantage

The business case justifications of OSS tend to fall into four categories:

  • Revenue increase – the operationalisation and monetisation of an operator’s assets
  • Cost reduction – improving the operational efficiency of the operator
  • Insight generation – by leveraging the valuable data that an OSS collects
  • Brand value – this is a catch-all for many different ways an OSS can contribute to (or detract from) an operator’s brand

On the last point, we can break brand value down into defence (eg reducing outages that may damage the operator’s brand) and offence (eg faster time to market or activations that give the operator a competitive advantage).

But there’s one other special category that bears consideration – threat minimisation – which probably has elements of each of the four points above. If we take a really macro view of this, two of the biggest threats facing operators today are disruptive new business models and disruptive new products. Or you could take the complete opposite perspective and see it as opportunity maximisation (if you’re the one to capitalise on the disruptive opportunity first).

An operator’s OSS can have a massive influence on this category. If an operator takes months to force urgent changes through its OSS, then it can’t respond well to a disruptive threat. Similarly, opportunities / arbitrages only have a short window before the market responds, so a lack of OSS flexibility impacts an operator’s ability to maximise opportunity.

Having an OSS with greater agility than competitors can be a more significant, sustainable market advantage than most people in telecommunications realise.

Check out our OSS business case builder for a more detailed breakdown of factors that can help to build a persuasive OSS business case. Or just contact us and we’d be happy to help.

Dan Pink’s 6 critical OSS senses

I recently wrote an article that spoke about the obsolescence of jobs in OSS, particularly as a result of Artificial Intelligence.

But an article by someone much more knowledgeable about AI than me, Rodney Brooks, had this to say, “We are surrounded by hysteria about the future of artificial intelligence and robotics — hysteria about how powerful they will become, how quickly, and what they will do to jobs.” He then describes The Seven Deadly Sins of AI Predictions here.

Back into my box I go, tail between my legs! Nonetheless, the premise of my article still holds true. The world of OSS is changing quickly and we’re constantly developing new automations, so our roles will inevitably change. My article also proposed some ideas on how to best plan our own adaptation.

That got me thinking… Many people in OSS are “left-brain” dominant, right? But left-brained jobs (ie repeatable, predictable, algorithmic) can be more easily outsourced or automated, making them more prone to obsolescence. That concept reminded me of Daniel Pink’s premise in A Whole New Mind, where right-brained skills become more valuable, so that is where our training should be focused. He argues that we’re on the cusp of a new era that will favor “conceptual” thinkers like artists, inventors and storytellers. [and OSS consultants??]

He also implores us to enhance six critical senses, namely:

  • Design – the ability to create something that’s emotionally and/or visually engaging
  • Story – to create a compelling and persuasive narrative
  • Symphony – the ability to synthesise new insights, particularly from seeing the big picture
  • Empathy – the ability to understand and care for others
  • Play – to create a culture of games, humour and play, and
  • Meaning – to find a purpose that will provide an almost spiritual fulfillment.

I must admit that I hadn’t previously thought about adding these factors to my development plan. Had you?
Do you agree with Dan Pink or will you continue to opt for left-brain skills / knowledge enhancement?

Finding the most important problems to solve

The problem with OSS is that there are too many problems. We don’t have to look too hard to find a problem that needs solving.

An inter-related issue is that we’re (almost always) constrained by resources and aren’t able to solve every problem we find. I have a theory: as skilled as you may be at solving OSS problems, it’s actually your skill at deciding which problems to solve that matters more.

With continuous release methodologies gaining favour, it’s easy to prioritise on the most urgent or easiest problems to solve. But what if we were to apply the Warren Buffett 20 punch-card approach to tackling OSS problems?

“I could improve your ultimate financial welfare by giving you a ticket with only twenty slots in it so that you had twenty punches – representing all the investments that you got to make in a lifetime. And once you’d punched through the card, you couldn’t make any more investments at all. Under those rules, you’d really think carefully about what you did, and you’d be forced to load up on what you’d really thought about. So you’d do so much better.”
Warren Buffett.

I’m going through this exact dilemma at the moment – am I so busy giving attention to the obvious problems that I’m not allowing enough time to discover the most important ones? I figure that anyone can see and get caught up in the noise of the obvious problems, but only a rare few can listen through it…

OSS and Network of the Future architectures workshop in Melbourne this week

OSS and Network of the Future architectures workshop in Melbourne this week.

Click on the link above to register online.

TM Forum’s description of the event is as follows:
As part of the TM Forum Open Digital Architecture program, we are starting work on a standardized definition of Operational Domain Managers (ODM) capabilities. Telstra will be hosting an open TM Forum Local workshop on ODM to garner more industry feedback and we are looking for your participation.

This is the chance for you to provide input into the OSS and Networks of the Future architectures. While the workshop will introduce the ODA and ODM concepts, we are looking for feedback on your product plans supporting these types of architectures, understanding where we align, and what is missing. Small groups of participants will work on answering these types of questions:

  • Develop a definition of NaaS services and the required northbound API (ie towards IT and/or partners) that an ODM should support – from a minimum viable set towards a complete list
  • Service catalogue: service specification, service lifecycle management from design-time to run-time – minimum requirements
  • Service assurance: handling of closed-loop support within one domain vs a composite domain or across another CSP
  • Service instance: how is service-resource mapping and service impact assessment handled?
  • Service fulfillment: how can we track “Activation Error handling” within an atomic domain vs a composite domain? etc

Bringing Eminem’s blank canvas to OSS

“When you start out in your career, you have a blank canvas, so you can paint anywhere that you want because the shit ain’t been painted on yet. And then your second album comes out, and you paint a little more and you paint a little more. By the time you get to your seventh and eighth album you’ve already painted all over it. There’s nowhere else to paint.”
Eminem. (on Rick Rubin and Malcolm Gladwell’s Broken Record podcast)

To each their own. Personally, Eminem’s music has never done it for me, whether his first or eighth album, but the quote above did strike a chord (awful pun).

It takes many, many hours to paint in the detail of an OSS painting. By the time a product has been going for a few years, there’s not much room left on the canvas and the detail of the existing parts of the work is so nuanced that it’s hard to contemplate painting over.

But this doesn’t consider that over the years, OSS have been painted on many different canvases. First there were mainframes, then client-server, relational databases, XaaS, virtualisation (of servers and networks), and a whole continuum in between… not to mention the future possibilities of blockchain, AI, IoT, etc. And that’s not even considering the changes in programming languages along the way. In fact, new canvases are now presenting themselves at a rate that’s hard to keep up with.

The good thing about this is that we have the chance to start over with a blank canvas each time, to create something uniquely suited to that canvas. However, we invariably attempt to bring as much of the old thinking across as possible, immediately leaving little space left to paint something new. Constraints that existed on the old canvas don’t always apply to each new canvas, but we still have a habit of bringing them across anyway.

We don’t always ask enough questions like:

  • Does this existing process still suit the new canvas?
  • Can we skip steps?
  • Can we obsolete any of the old / unused functionality?
  • Are old and new architectures (at all levels) easily transmutable?
  • Does the user interface need to be ported or replaced?
  • Do we even need a user interface (assuming the rise of machine-to-machine with IoT, etc)?
  • Does the old data model have any relevance to the new canvas?
  • Do the assurance rules of fixed-network services still apply to virtualised networks?
  • Do the fulfillment rules of fixed-network services still apply to virtualised networks?
  • Are there too many devices to individually manage, or can they be managed as a cohort?
  • Does the new model give us access to new data and/or techniques that will allow us to make decisions (or derive insights) differently?
  • Does the old billing or revenue model still apply to the new platform?
  • Can we increase modularity and abstraction between modules?

“The real reason “blockchain” or “AI” may actually change businesses now or in the future, isn’t that the technology can do remarkable things that can’t be done today, it’s that it provides a reason for companies to look at new ways of working, new systems and finally get excited about what can be done when you build around technology.”
Tom Goodwin.

50 exercises to ignite your OSS innovation sessions

Every project starts with an idea… an idea that someone is excited enough to sponsor.

  1. Where are your ideas being generated from?
  2. How do they get cultivated and given time to grow?
  3. How do they get pitched, and how do they get heard?
  4. How are sponsors persuaded?
  5. How do they then get implemented?
  6. How do we amplify this cycle of innovation and implementation?

I’m fascinated by these questions in OSS for the reasons outlined in The OSS Call for Innovation.

If we look at the levels of innovation (to be honest, it’s probably more a continuum than bands / levels):

  1. Process Improvement
  2. Incremental Improvement (new integrations, feature enhancement, etc)
  3. Derivative Ideas (iPhone = internet + phone + music player)
  4. Quantum Innovation (Tablet computing, network virtualisation, cloud delivery models)
  5. Radical Innovations (transistors, cellular wireless networks, Claude Shannon’s Information Theory)

We have so many immensely clever people working in our industry and we’re collectively really good at the first two levels. Our typical mode of working – which could generally be considered fire-fighting (or dare I say it, Agile) – doesn’t provide the time and headspace to work on anything in the longer life-cycles of levels 3-5. These are the levels that can be more impactful, but it’s these levels where we need to carve out time specifically for innovation planning.

If you’re ever planning to conduct innovation fire-starter sessions, I really recommend reading Richard Brynteson’s, “50 Activities for Building Innovation.” As the title implies, it provides 50 (simple but powerful) exercises to help groups to generate ideas.

Please contact us if you’d like PAOSS to help facilitate your OSS idea firestarter or road-mapping sessions.

How the investment strategy of a $106 billion VC fund changed my OSS thinking

What is a service provider’s greatest asset?

Now I’m biased when considering the title question, but I believe OSS are the puppet-master of every modern service provider. They’re the systems that pull all of the strings of the organisation. They generate the revenue by operationalising and assuring the networks as well as the services they carry. They coordinate the workforce. They form the real-time sensor networks that collect and provide data, but more importantly, insights to all parts of the business. That, and so much more.

But we’re pitching our OSS all wrong. Let’s first consider how we raise revenue from OSS, be that internal (via internal sponsors) or external (a vendor/integrator selling to customers). Most revenue is generated either from products (fixed, leased, consumption revenue models) or from services (human effort).

This article from just last month ruminated, “An organisation buys an OSS, not because it wants an Operational Support System, but because it wants Operational Support,” but I now believe I was wrong – charting the wrong course in relation to the most valuable element of our OSS.

After researching Masayoshi Son’s Vision Fund, I’m certain we’re selling a fundamentally short-term vision. Yes, OSS are valuable for the operational support they provide, but their greatest value is as vast data collection and processing engines.

“Those who rule data will rule the entire world. That’s what people of the future will say.”
Masayoshi Son.

For those unfamiliar with Masayoshi Son, he’s Japan’s richest man, CEO of SoftBank, in charge of a monster (US$106 billion) venture capital fund called Vision Fund and is seen as one of the world’s greatest technology visionaries.

This article on Fortune explains Vision Fund’s foundational strategy: “…there’s a slide that outlines the market cap of companies during the Industrial Revolution, including the Pennsylvania Railroad, U.S. Steel, and Standard Oil. The next frontier, he [Son] believes, is the data revolution. As people like Andrew Carnegie and John D. Rockefeller were able to drastically accelerate innovation by having a very large ownership over the inputs of the Industrial Revolution, it looks like Son is trying to do something similar. The difference being he’s betting on the notion that data is one of the most valuable digital resources of modern day.”

Matt Barnard is CEO of Plenty, one of the companies that Vision Fund has invested in. He had this to say about the pattern of investments by Vision Fund, “I’d say the thing we have in common with his other investments is that they are all part of some of the largest systems on the planet: energy, transportation, the internet and food.”

Telecommunications falls into that category too. SoftBank already owns significant stakes in telecommunications and broadband network providers.

But based on the other investments made by Vision Fund so far, there appears to be less focus on operational data and more focus on customer activity and decision-making data. In particular, unravelling the complexity of customer data in motion.

OSS “own” service provider data, but I wonder whether we’re spending too much time thinking about operational data (and how to feed it into AI engines to get operational insights) and not enough on stitching customer-related insight sets together. That’s where the big value is, but we’re rarely thinking about it or pitching it that way… even though it is perhaps the most valuable asset a service provider has.

The chains of integration are too light until…

“Chains of habit are too light to be felt until they are too heavy to be broken.”
Warren Buffett (although he attributed it to an unknown author, perhaps originating with Samuel Johnson).

What if I were to replace the word “habit” in the quote above with “OSS integration,” “OSS customisation” or “feature releases”?

The elegant quote reflects this image:

The chains of feature releases are light at t0. They’re much heavier at t0+100.

Like habits, if we could project forward and see the effects, would we allow destructive customisations to form in their infancy? The problem is that we don’t see them as destructive at infancy. We don’t see how they entangle.

Posing a Network Data Synchronisation Protocol (NDSP) concept

Data quality is one of the biggest challenges we face in OSS. A product could be technically perfect, but if the data being pumped into it is poor, then the user experience of the product will be awful – the OSS becomes unusable, and that in itself generates a data quality death spiral.

This becomes even more important for the autonomous, self-healing, programmable, cooperative networks being developed (think IoT, virtualised networks, Self-Organizing Networks). If we look at IoT networks for example, they’ll be expected to operate unattended for long periods, but with code and data auto-propagating between nodes to ensure a level of self-optimisation.

So today I’d like to pose a question. What if we could develop the equivalent of Network Time Protocol (NTP) for data? Just as NTP synchronises clocking across networks, Network Data Synchronisation Protocol (NDSP) would synchronise data across our networks through a feedback-loop / synchronisation algorithm.

Of course there are differences from NTP. NTP coordinates only one data field (time) along a common scale (a timestamp measured on a 64+64-bit continuum). The closest parallel for network data is in life-cycle state changes (eg in-service, port up/down, etc).

For NTP, the stratum of the clock is defined (see the image below, from Wikipedia).

This has analogies with data, where some data sources can be seen to be more reliable than others (ie primary sources rather than secondary or tertiary sources). However, there are scenarios where stratum 2 sources (eg OSS) might push state changes down through stratum 1 (eg NMS) and into stratum 0 (the network devices). An example might be renaming of a hostname or pushing a new service into the network.
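
To make the stratum idea concrete, here’s a minimal Python sketch of stratum-based reconciliation. All of the sources, fields and values are hypothetical, and a real NDSP would obviously be a network protocol rather than a local function, but the core logic – prefer the most authoritative source and flag whoever disagrees with it – might look something like this:

```python
# Lower stratum = closer to the authoritative source (0 = the device itself).
# All sources, fields and values below are hypothetical.
records = [
    {"stratum": 0, "source": "device",    "hostname": "rtr-melb-01"},
    {"stratum": 1, "source": "NMS",       "hostname": "rtr-melb-01"},
    {"stratum": 2, "source": "inventory", "hostname": "RTR_MELB_1"},  # drifted
]

def reconcile(records, field):
    """Take the value held by the most authoritative (lowest-stratum) source."""
    return min(records, key=lambda r: r["stratum"])[field]

def detect_drift(records, field):
    """List the sources whose value disagrees with the authoritative one."""
    truth = reconcile(records, field)
    return [r["source"] for r in records if r[field] != truth]

print(reconcile(records, "hostname"))     # rtr-melb-01
print(detect_drift(records, "hostname"))  # ['inventory']
```

The interesting design question is the exception flow mentioned above: a hostname rename initiated at stratum 2 would need to invert this preference temporarily, pushing down through stratum 1 to stratum 0 before normal reconciliation resumes.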

One challenge would be the vastly different data sets and how to disseminate / reconcile them across the network without overloading it with management / communications packets. Another would be format consistency. I once had a device type with four different port naming conventions, and that was just within its own NMS! Imagine how many port name variations (and translations) might exist across the multiple inventories in our networks. The good thing about the NDSP concept is that it might force greater consistency across different vendor platforms.

Another would be security: NDSP would be a huge attack target, given both its power to change configurations and its reach throughout the network.

So what do you think? Has the NDSP concept already been developed? Have you implemented something similar in your OSS? What are the scenarios in which it could succeed? Or fail?

I’m predicting the demise of the OSS horse

“What will telcos do about the 30% of workers AI is going to displace?”
Dawn Bushaus

That question, which is the headline of Dawn’s article on TM Forum’s Inform platform, struck me as being quite profound.

As an aside, I’m not interested in the number – the 30% – because I concur with Tom Goodwin’s sentiments on LinkedIn, “There is a lot of nonsense about AI.
Next time someone says “x% of businesses will be using AI by 2020” or “AI will be worth $xxxBn by 2025” or any of those other generic crapspeak comments, know that this means nothing.
AI is a VERY broad area within computer science that includes about 6-8 very different strands of work. It spans robotics, image recognition, machine learning, natural language processing, speech recognition and far more. Nobody agrees on what is and isn’t in this.
This means it covers everything from superintelligence to artificial creativity to chatbots.”

For the purpose of this article, let’s just say that in 5 years AI will replace a percentage of jobs that we in tech / telco / OSS are currently doing. I know at least a few telcos that have created updated operating plans built around a headcount reduction much greater than the 30% mentioned in Dawn’s article. This is despite the touchpoint explosion and increased complexity that is already beginning to crash down onto us and will continue apace over the next 5 years.

Now, assuming you expect to still be working in 5 years’ time and are worried that your role might be in the disappearing 30% (or whatever the percentage turns out to be), what do you do now?

First, figure out what the modern equivalents of the horse are in the context of Warren Buffett’s quote below:

“What you really should have done in 1905 or so, when you saw what was going to happen with the auto is you should have gone short horses. There were 20 million horses in 1900 and there’s about 4 million now. So it’s easy to figure out the losers, the loser is the horse. But the winner is the auto overall. [Yet] 2000 companies (carmakers) just about failed.”

It seems impossible to predict how AI (all strands) might disrupt tech / telco / OSS in the next 5 years – and, like the auto industry, even harder to predict the winners (the technologies, the companies, the roles). However, it’s almost certainly easier to predict the losers.

Massive amounts are being invested into automation (by carriers, product vendors and integrators), so if the investments succeed, operational roles are likely to be net losers. OSS are typically built to make operational roles more efficient – but if swathes of operator roles are automated, then does operational support also become a net loser? In its current form, probably yes.

Second, if you are a modern-day horse, ponder which of your skills are transferable into the future (eg chassis building, brakes, steering, etc) and which are not (eg buggy-whip making, horse-manure collecting, horse grooming, etc). Assuming operator-driven OSS activities will diminish, but automation (with or without AI) will increase, can you take your current networks / operations knowledge and combine that with up-skilling in data, software and automation tools?

Even if OSS user interfaces are made redundant by automation and AI, we’ll still need to feed the new technologies with operations-style data, seed their learning algorithms and build new operational processes around them.

The next question is double-edged – for both individuals and telcos alike – how are you up-skilling for a future without horses?

Are your various device inventory repositories in synch?

Does your organisation have a number of different device inventory repositories?
Hint: You might even be surprised by how many you have.

Examples include:

  • Physical network inventory
  • Logical network inventory
  • DNS records
  • CMDB (Config Management DB)
  • IPAM (IP Address Management)
  • EMS (Element Management Systems)
  • SIEM (Security Information and Event Management)
  • Desktop / server management
  • not to mention the management information base on the devices themselves (the only true primary data source)

Have you recently checked how in-synch they are? As a starting point, are device counts aligned (noting that different repositories perhaps only cover subsets of device ranges)? If not, how many cross-match between repositories?

If they’re out of synch, do you have a routine process for triangulating / reconciling / cleansing? Even better, do you have an automated, closed-loop process for ensuring synchronisation across the different repositories?
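
As a starting point for that cross-matching, even a simple set comparison can reveal the gaps. A minimal sketch, with hypothetical repository contents:

```python
# Hypothetical device lists pulled from three repositories
repos = {
    "physical inventory": {"sw01", "sw02", "rtr01", "fw01"},
    "CMDB":               {"sw01", "sw02", "rtr01"},
    "DNS":                {"sw01", "rtr01", "fw01", "sw99"},
}

in_all = set.intersection(*repos.values())  # devices every repository agrees on
in_any = set.union(*repos.values())         # devices known to at least one

# Per-repository gaps: devices some other repository knows about but this one doesn't
gaps = {name: sorted(in_any - devices) for name, devices in repos.items()}

print(f"{len(in_all)} of {len(in_any)} devices cross-match in all repositories")
for name, missing in gaps.items():
    if missing:
        print(f"{name} is missing: {missing}")
```

In practice the hard part isn’t the set arithmetic – it’s normalising device identifiers (naming conventions, domains, casing) so the sets are comparable in the first place.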

Would anyone like to offer some thoughts on the many reasons why it’s important to have inventory alignment?

I’ll give a few little hints:

  • Security
  • Assurance
  • Activations
  • Integrations
  • Asset lifecycle management
  • Financial management (ie asset depreciation)

The concept of DevOps is missing one really important thing

There’s a concept that’s building a buzz across all digital industries – you may’ve heard of it – it’s a little thing called DevOps. Someone (most probably a tester) decided to extend it and now you might even hear the #DevTestOps moniker being mentioned.

In the ultimate of undeserved acknowledgements, I even get a reference on Wikipedia’s DevOps page. It references this DevOps life-cycle diagram from an earlier post that I can take no credit for:

However, there is one really important chevron missing from the DevOps infinite loop above. Can you picture what it might be?

If I show you this time series below, does it help identify what’s missing from the DevOps infinite loop? I refer to the diagram below as The Tech-Debt Wreck:
The increasing percentage of tech debt
If I give you a hint that it primarily relates to the grey band in the time series above, would that help?

Okay, okay. I’m sure you’ve guessed it already, but the big thing missing from the DevOps loop is pruning, or what I refer to as subtraction projects (others might call it re-factoring). Without pruning, the rapid release mantra of DevOps will take the digital world from t0 to t0+100 faster than at any time before in our history.

As a result, I’m advocating a variation on DevOps… or DevTestOps even… I want you to preach a revised version of the label – let’s start a movement called #DevTestPruneOps. Actually, the pruning should go at the start, before each dev / test cycle, but by calling it #PruneDevTestOps, I fear its lineage might get lost.

Speedcast secures AU$184m contract with NBN Co

Speedcast Secures Contract Valued at Up to AU$184 Million with NBN Co Australia.

Speedcast International Limited announced it has secured a 10-year contract with Australian government-owned infrastructure provider NBN Co to deliver enterprise-grade satellite services. Speedcast’s wholly-owned subsidiary and dedicated entity, Speedcast Managed Services, will partner with NBN Co to design, build and manage NBN Co’s enterprise satellite services. The value of the base network build and managed services project is AU$107 million, and with other demand-driven services the aggregate revenue is expected to be up to AU$184 million in total.

This new contract will be a transformational project for Speedcast in Australia. Speedcast will leverage its experience as Australia’s largest provider of enterprise-grade satellite services to build and operate, in support of NBN Co, a unique suite of satellite services targeted at enterprise and government customers in Australia. In order to deliver on its mission, Speedcast will set up a new office with world class specialists in Melbourne to support NBN Co. The services provided by Speedcast will complement NBN Co’s consumer satellite service and will serve to increase the availability of enterprise-grade cost-effective communications solutions for Australian businesses.

“We are honored and grateful to have been chosen for such an important program and we are excited to play a part in expanding the communication services available in Australia. This contract is another huge success for Speedcast in our efforts to provide next-generation communications solutions to our clients and partners around the world,” said Pierre-Jean Beylier, CEO, Speedcast. “I thank NBN Co for their trust in Speedcast’s ability to deliver reliable communications services enabling mission-critical applications that enterprise and government customers rely on; something we are passionate about and have been doing very successfully in Australia for many years as well as in over 100 countries around the world.”

As the largest provider of satellite communication services to enterprises in Australia and globally, Speedcast boasts a range of services, technical capabilities and service levels that are second to none in the industry. The innovative solutions and ideas that Speedcast brings will help NBN Co support its mission and provide enterprise customers in Australia with an industry-leading connectivity service.

Torturous OSS version upgrades

Have you ever worked on an OSS where a COTS (Commercial Off-The-Shelf) solution has been so heavily customised that implementing the product’s next version upgrade has become a massive challenge? The solution has become so entangled that if the product was upgraded, it would break the customisations and/or integrations that are dependent upon that product.

This trickle-down effect is the perfect example of The Chess-board Analogy or The Tech-debt Wreck at work. Unfortunately, it is far too common, particularly in large, complex OSS environments.

The OSS then either has to:

  • skip the upgrade or
  • take a significant cost/effort hit and perform an upgrade that might otherwise be quite simple.

If the operator decides to take the “skip” path for a few upgrades in a row, then it gets further from the vendor’s baseline and potentially misses out on significant patches, functionality or security hardening. Then, when finally making the decision to upgrade, a much more complex project ensues.

It’s just one more reason why a “simple” customisation often has a much greater life-cycle cost than was initially envisaged.

How to reduce the impact?

  1. Use RPA tools for pseudo-integrations (as we’ve spoken about recently), allowing the operator to leave the COTS product unchanged while using RPA to shift data between applications
  2. Attempt to achieve business outcomes via data / process / config changes to the COTS product rather than customisations
  3. Enforce a policy of integration as a last resort to minimise the chess-board implications (ie attempt to solve problems via processes, in data, etc before considering any integration or customisation)
  4. Enforce modularity in the end-to-end architecture via carefully designed control points, microservices, etc

There are probably many other methods that I’m forgetting about whilst writing the article. I’d love to hear the approach/es you use to minimise the impact of COTS version upgrades. Similarly, have you heard of any clever vendor-led initiatives that are designed to minimise upgrade costs and/or simplify the upgrade path?

A summary of RPA uses in an OSS suite

This is the sixth and final post in a series about the four styles of RPA (Robotic Process Automation) in OSS.

Over the last few days, we’ve looked into the following styles of RPA used in OSS, their implementation approaches, pros / cons and the types of automation they’re best suited to:

  1. Automating repeatable tasks – using an algorithmic approach to completing regular, mundane tasks
  2. Streamlining processes / tasks – using an algorithmic approach to assist an operator during a process or as an alternate integration technique
  3. Predefined decision support – guiding operators through a complex decision process
  4. As part of a closed-loop system – that provides a learning, improving solution

RPA tools can significantly improve the usability of an OSS suite, especially for end-to-end processes that jump between different applications (in the many ways mentioned in the above links).

However, there can be a tendency to use the power of RPAs to “solve all problems” (see this article about automating bad processes). That can introduce a life-cycle of pain for operators and RPA admins alike. Like any OSS integration, we should look to keep the design as simple and streamlined as possible before embarking on implementation (subtraction projects).

RPA in OSS feedback loops

This is the fifth in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The fourth of those styles is as part of a closed-loop system such as the one described here. Here’s a diagram from that link:
OSS / DSS feedback loop

This is the most valuable style of RPA because it represents a learning and improving system.

Note though that RPA tools only represent the DSS (Decision Support System) component of the closed-loop, so they need to be supplemented with the other components. Also note that an RPA tool can only perform the DSS role in this loop if it can accept feedback (eg via an API) and modify its output in response. The RPA tool could then perform fully automated tasks (ie machine-to-machine) or semi-automated ones (ie decision support for humans).
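As a rough illustration of the loop, here's a minimal Python sketch of a DSS component that proposes an action, accepts outcome feedback, and adjusts its confidence threshold in response. All class, method and parameter names here are hypothetical, invented purely for this sketch:

```python
# Minimal sketch of a closed-loop DSS: the decision component proposes an
# action, receives outcome feedback, and adjusts its threshold over time.
# All names are hypothetical; a real RPA tool would expose this via its API.

class ThresholdDSS:
    """Decides whether to auto-remediate an alarm based on a confidence score."""

    def __init__(self, threshold=0.8, step=0.05):
        self.threshold = threshold
        self.step = step

    def decide(self, confidence):
        # Fully automated (machine-to-machine) if confident enough,
        # otherwise refer the decision to a human operator.
        return "auto-remediate" if confidence >= self.threshold else "refer-to-operator"

    def feedback(self, confidence, outcome_ok):
        # Close the loop: a failed automated action raises the bar,
        # a successful one lowers it slightly (bounded to [0.5, 0.99]).
        if confidence >= self.threshold and not outcome_ok:
            self.threshold = min(0.99, self.threshold + self.step)
        elif outcome_ok:
            self.threshold = max(0.5, self.threshold - self.step)

dss = ThresholdDSS()
action = dss.decide(0.85)             # high confidence: fully automated path
dss.feedback(0.85, outcome_ok=False)  # failed remediation tightens the threshold
```

The learning here is deliberately trivial (a single moving threshold), but it shows the essential property the post describes: the output of the decision component changes as a function of the feedback it receives.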

Setting up this type of solution can be far more challenging than the earlier styles of RPA use, but the results are potentially the most powerful too.

Almost any OSS process could be enhanced by this closed-loop model. It’s just a case of whether the benefits justify the effort. Broad examples include assurance (network health / performance), fulfilment / activations, operations, strategy, etc.

The OSS / RPA parrot on the shoulder analogy

This is the fourth in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The third style is Decision Support. I refer to this style as the parrot on the shoulder because the parrot (RPA) guides the operator through their daily activities. It isn’t true automation but it can provide one of the best cost-benefit ratios of the different RPA styles. It can be a great blend of human-computer decision making.

OSS processes tend to have complex decision trees and need different actions performed depending on the information being presented. An example might be customer on-boarding, which includes credit and identity check sub-processes, followed by the customer service order entry.

The RPA can guide the operator to perform each of the steps along the process including the mandatory fields to populate for regulatory purposes. It can also recommend the correct pull-down options to select so that the operator traverses the correct branch of the decision tree of each sub-process.

This functionality can allow organisations to deliver less training than they would without decision support. It can be highly cost-effective in situations where:

  • There are many inexperienced operators, especially if there is high staff turnover such as in NOCs, contact centres, etc
  • It is essential to have high process / data quality
  • The solution isn’t intuitive and it is easy to miss steps, such as a process that requires an operator to swivel-chair between multiple applications
  • There are many branches on the decision tree, especially when some of the branches are rarely traversed, even by experienced operators

In these situations the cost of training can far outweigh the cost of building an OSS (RPA) parrot on each operator’s shoulder.
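To make the idea concrete, here's a minimal sketch of how a decision-support tree for the on-boarding example above might be represented, with the "parrot" flagging any mandatory fields that haven't yet been populated and recommending the next branch to traverse. Every node, field and function name is hypothetical, not taken from any real product:

```python
# Hypothetical decision tree for a customer on-boarding process.
# Each node carries an operator prompt, the mandatory fields for that
# sub-process, and the branch to take for each possible outcome.

DECISION_TREE = {
    "credit_check": {
        "prompt": "Run credit check",
        "mandatory_fields": ["customer_id", "date_of_birth"],
        "next": {"pass": "identity_check", "fail": "decline"},
    },
    "identity_check": {
        "prompt": "Verify identity documents",
        "mandatory_fields": ["document_type", "document_number"],
        "next": {"pass": "order_entry", "fail": "decline"},
    },
    "order_entry": {
        "prompt": "Enter the customer service order",
        "mandatory_fields": ["plan", "activation_date"],
        "next": {},  # leaf: process complete
    },
    "decline": {
        "prompt": "Advise customer of decline",
        "mandatory_fields": [],
        "next": {},
    },
}

def guide(step, data):
    """Return the operator prompt plus any mandatory fields still missing."""
    node = DECISION_TREE[step]
    missing = [f for f in node["mandatory_fields"] if f not in data]
    return node["prompt"], missing

def advance(step, outcome):
    """Traverse to the next branch of the tree for a sub-process outcome."""
    return DECISION_TREE[step]["next"].get(outcome)

# The parrot reminds the operator that date_of_birth is still required
# before the credit check sub-process can be completed.
prompt, missing = guide("credit_check", {"customer_id": "C123"})
```

Encoding rarely-traversed branches this way is precisely where the cost-benefit case is strongest: the tree remembers the branches that even experienced operators forget.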

Using RPA as an alternate OSS integration

This is the third in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The second of those styles is Streamlining processes / tasks by following an algorithmic approach to simplify processes for operators.

This style can be particularly helpful during swivel-chair processes, where multiple disparate systems are only partially integrated but each needs the same data (ie it reduces the amount of duplicated data entry between systems). As well as streamlining the process, it also improves data consistency rates.

The most valuable aspect of this style of RPA is that it can minimise the amount of integration between systems, thus potentially reducing solution maintenance into the future. The RPA can even act as the integration technique where an API isn't available or documented (think legacy systems here).
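As a rough sketch of the swivel-chair pattern, the snippet below uses two stand-in application classes (hypothetical names throughout) to show a bot reading a record once from a legacy source and re-keying it into a target system. In a real deployment the RPA tool would drive each application's screens rather than Python objects, but the shape of the flow is the same:

```python
# Sketch of the "swivel-chair" pattern: an RPA bot reads a record from a
# source application and re-keys the same fields into a target application,
# removing the duplicated manual data entry between the two systems.
# Both "apps" below are stand-ins invented for this illustration.

class LegacyInventoryApp:
    """Stand-in for a legacy system with no API: the bot 'reads its screen'."""
    def read_service_record(self, service_id):
        return {"service_id": service_id, "address": "1 Example St", "speed": "100M"}

class ModernAssuranceApp:
    """Stand-in for the target system the bot re-keys data into."""
    def __init__(self):
        self.records = {}

    def enter_record(self, record):
        self.records[record["service_id"]] = record

def swivel_chair_bot(source, target, service_id):
    # Read once from the source, write once to the target: a single
    # point of data entry keeps the two systems consistent.
    record = source.read_service_record(service_id)
    target.enter_record(record)
    return record

target = ModernAssuranceApp()
swivel_chair_bot(LegacyInventoryApp(), target, "SVC-001")
```

Because the bot is the only integration point, replacing either application later means re-teaching the bot rather than rebuilding a point-to-point interface, which is the maintenance benefit the post refers to.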