A summary of RPA uses in an OSS suite

This is the sixth and final post in a series about the four styles of RPA (Robotic Process Automation) in OSS.

Over the last few days, we’ve looked into the following styles of RPA used in OSS, their implementation approaches, pros / cons and the types of automation they’re best suited to:

  1. Automating repeatable tasks – using an algorithmic approach to completing regular, mundane tasks
  2. Streamlining processes / tasks – using an algorithmic approach to assist an operator during a process or as an alternate integration technique
  3. Predefined decision support – guiding operators through a complex decision process
  4. As part of a closed-loop system – that provides a learning, improving solution

RPA tools can significantly improve the usability of an OSS suite, especially for end-to-end processes that jump between different applications (in the many ways mentioned in the above links).

However, there can be a tendency to use the power of RPAs to “solve all problems” (see this article about automating bad processes). That can introduce a life-cycle of pain for operators and RPA admins alike. Like any OSS integration, we should look to keep the design as simple and streamlined as possible before embarking on implementation (subtraction projects).

The OSS / RPA parrot on the shoulder analogy

This is the fourth in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The third style is Decision Support. I refer to this style as the parrot on the shoulder because the parrot (RPA) guides the operator through their daily activities. It isn’t true automation but it can provide one of the best cost-benefit ratios of the different RPA styles. It can be a great blend of human-computer decision making.

OSS processes tend to have complex decision trees and need different actions performed depending on the information being presented. An example might be customer on-boarding, which includes credit and identity check sub-processes, followed by customer service order entry.

The RPA can guide the operator to perform each of the steps along the process including the mandatory fields to populate for regulatory purposes. It can also recommend the correct pull-down options to select so that the operator traverses the correct branch of the decision tree of each sub-process.
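
To make the parrot idea more concrete, here's a minimal sketch (in Python, with entirely hypothetical step and field names) of how a predefined decision tree might prompt an operator through each step, enforce mandatory fields and steer them down the correct branch. A real RPA tool would wrap this logic in its own workflow UI, but the guiding principle is much the same.

```python
# A toy decision-support "parrot": a hard-coded decision tree that guides
# an operator through hypothetical customer on-boarding steps.
# Step names, fields and branches are illustrative assumptions only.

DECISION_TREE = {
    "credit_check": {
        "prompt": "Run the credit check sub-process.",
        "mandatory_fields": ["customer_id", "credit_score"],
        "branches": {"pass": "identity_check", "fail": "manual_review"},
    },
    "identity_check": {
        "prompt": "Run the identity check sub-process.",
        "mandatory_fields": ["document_type", "document_number"],
        "branches": {"verified": "service_order_entry", "unverified": "manual_review"},
    },
    "service_order_entry": {
        "prompt": "Enter the customer service order.",
        "mandatory_fields": ["service_type", "install_address"],
        "branches": {},  # leaf node: end of the guided process
    },
    "manual_review": {
        "prompt": "Escalate to manual review.",
        "mandatory_fields": ["escalation_reason"],
        "branches": {},  # leaf node
    },
}

def guide_operator(start="credit_check"):
    """Walk the operator through each step, enforcing mandatory fields."""
    step = start
    while step:
        node = DECISION_TREE[step]
        print(f"\n== {step} ==\n{node['prompt']}")
        for field in node["mandatory_fields"]:
            # The parrot insists on the fields required for regulatory purposes
            while not input(f"  {field} (mandatory): ").strip():
                print("  This field is required.")
        if not node["branches"]:
            break  # reached the end of this branch of the tree
        options = " / ".join(node["branches"])
        choice = ""
        while choice not in node["branches"]:
            choice = input(f"  Select outcome [{options}]: ").strip()
        step = node["branches"][choice]

if __name__ == "__main__":
    guide_operator()
```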

This functionality can allow organisations to deliver less training than they would without decision support. It can be highly cost-effective in situations where:

  • There are many inexperienced operators, especially if there is high staff turnover such as in NOCs, contact centres, etc
  • It is essential to have high process / data quality
  • The solution isn’t intuitive and it is easy to miss steps, such as a process that requires an operator to swivel-chair between multiple applications
  • There are many branches on the decision tree, especially when some of the branches are rarely traversed, even by experienced operators

In these situations the cost of training can far outweigh the cost of building an OSS (RPA) parrot on each operator’s shoulder.

Using RPA as an alternate OSS integration

This is the third in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The second of those styles is Streamlining processes / tasks by following an algorithmic approach to simplify processes for operators.

This style can be particularly helpful during swivel-chair processes, where multiple disparate systems are partially integrated but each needs the same data (ie it reduces the amount of duplicated data entry between systems). As well as streamlining the process, it also improves data consistency rates.

The most valuable aspect of this style of RPA is that it can minimise the amount of integration between systems, thus potentially reducing solution maintenance into the future. The RPA can even act as the integration technique where an API isn't available, or where documentation doesn't exist (think legacy systems here).
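
As a rough illustration of RPA acting as the integration technique, the Python sketch below uses Selenium to read a value from one application's web UI and re-key it into another, standing in for legacy systems with no usable API. The URLs and element IDs are invented; a commercial RPA tool performs the same kind of UI-level driving, just with far more robust error handling and recording.

```python
# A hedged sketch of UI-level integration: copy an order reference from a
# hypothetical legacy system into a second system that offers no API.
# All URLs and element IDs below are made up for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Read the value from the source (legacy) application's web UI
    driver.get("https://legacy-oss.example.com/orders/12345")
    order_ref = driver.find_element(By.ID, "order-reference").text

    # Re-key it into the target application, avoiding duplicated data entry
    driver.get("https://inventory.example.com/new-record")
    driver.find_element(By.ID, "external-ref").send_keys(order_ref)
    driver.find_element(By.ID, "save-button").click()
finally:
    driver.quit()
```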

Using RPA to automate OSS activities

This is the second in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The first of those styles is automating repeatable tasks by following an algorithmic approach to complete regular, mundane tasks.

Running an OSS involves many high-value, challenging tasks for operators to perform. Unfortunately, it also involves many repetitive, simple (brain-dead?) tasks that need to be done too.

This might include collecting data from various sources and aggregating it into a single file or report for consumption by humans or machines. Other examples include admin clean-up tasks like accounts / tempfiles / processes / sessions and myriad simple process automations.
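
For example, a repeatable aggregation and clean-up task of this style might look something like the Python sketch below. The directory paths and file layout are assumptions for illustration only.

```python
# A minimal sketch of the "repeatable task" style: aggregate per-source CSV
# extracts into one report, then delete temp files older than seven days.
import csv
import time
from pathlib import Path

SOURCES = Path("/var/oss/exports")                   # hypothetical drop directory
REPORT = Path("/var/oss/reports/daily_summary.csv")  # hypothetical output
TMP_DIR = Path("/var/oss/tmp")
MAX_AGE_SECONDS = 7 * 24 * 3600

def aggregate():
    """Concatenate rows from every source CSV into a single report."""
    with REPORT.open("w", newline="") as out:
        writer = csv.writer(out)
        for src in sorted(SOURCES.glob("*.csv")):
            with src.open(newline="") as f:
                for row in csv.reader(f):
                    writer.writerow([src.name] + row)

def cleanup_tempfiles():
    """Delete temp files that haven't been modified within the age limit."""
    cutoff = time.time() - MAX_AGE_SECONDS
    for tmp in TMP_DIR.glob("*"):
        if tmp.is_file() and tmp.stat().st_mtime < cutoff:
            tmp.unlink()

if __name__ == "__main__":
    aggregate()
    cleanup_tempfiles()
```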

When we think of OSS automations, we often think of high value but complicated tasks like orchestrations, network self-healing, etc. They can be expensive and inflexible, not always delivering the perceived worth for the investment.

However, when thinking of RPA I think about the simplest stuff first. These are basic, consistent processes that are straightforward to define an algorithm for, making them the “low-hanging fruit” of OSS / RPA activities. They help to build momentum towards the bigger automation fish. Best of all, they free up your talented OSS operators to do more valuable activities.

Automating repeatable tasks is the most basic RPA style. We’ll step up the value chain with each additional style over the next few days.

The four styles of RPA used in OSS

You’re probably already aware of RPA (Robotic Process Automation) tools. You’ve possibly even used one (or more) to enhance your OSS experience. In some ways, they’re a really good addition to your OSS suite. In some ways, potentially not. That all comes down to the way you use them.

There are four main ways that I see them being used (but happy for you to point out others):

  1. Automating repeatable tasks – following an algorithmic approach to getting regular, mundane tasks done (eg weekly report generation)
  2. Streamlining processes / tasks – again following an algorithmic approach to assist an operator during a process (eg reducing the amount of data entry when there is duplication between systems)
  3. Predefined decision support – to guide operators through a process that involves making different decisions based on the information being presented (eg in a highly regulated or complex process, with many options, RPA rules can ensure quality remains high)
  4. As part of a closed-loop system – if your RPA tool can handle changes to its rules through feedback (ie not just static rules) then it can become an important part of a learning, improving solution (a minimal sketch of this feedback idea follows the list)
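
To illustrate what “changing rules through feedback” might mean in its simplest form, here's a toy Python sketch of an escalation rule whose threshold adapts to operator feedback rather than remaining static. The alarm scenario, threshold and step size are entirely hypothetical.

```python
# A toy closed-loop rule: escalate an alarm when its severity score exceeds
# a threshold, then nudge the threshold based on operator feedback.
# The scenario and all numbers are invented for illustration.

class AdaptiveEscalationRule:
    def __init__(self, threshold=0.7, step=0.05):
        self.threshold = threshold
        self.step = step

    def should_escalate(self, severity):
        return severity >= self.threshold

    def feedback(self, severity, was_real_incident):
        # False positive (escalated, but not real): raise the threshold
        if self.should_escalate(severity) and not was_real_incident:
            self.threshold = min(1.0, self.threshold + self.step)
        # False negative (stayed silent, but real): lower the threshold
        elif not self.should_escalate(severity) and was_real_incident:
            self.threshold = max(0.0, self.threshold - self.step)

rule = AdaptiveEscalationRule()
rule.feedback(severity=0.75, was_real_incident=False)  # learn from a false alarm
print(rule.threshold)  # ~0.75: the rule now escalates less eagerly
```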

You’ll notice an increasing level of sophistication from 1-4. Not just sophistication but potential value to operators too.

We’ll take a closer look at the use of RPA in OSS over the next couple of days.

Onboarding outsiders as a new OSS business model

“The majority of these new services [such as healthcare, content and media, autonomous vehicles, smart homes etc.] require partnerships and will be based on a platform business model where the customer is not aware of who is providing which part of the service and, to be quite frank, won’t care. All they will care about is the customer experience and the end-to-end delivery of the service that they have paid for. This is where the opportunity for the telco comes and we need to think beyond data!”
Aaron Boasman-Patel
here on TM Forum Inform.

Are your OSS tools already integrating with third-party services?

Do your catalog / orchestration engines already call upon microservices from outside your organisation? Perhaps it’s something as simple as providing a content service bundled with a service provider’s standard bit-pipe service. Perhaps it’s also bundled with an internal-facing analytics service or an outward-facing shopping cart service.

A telco isn’t going to want to (or be able to) provide all of these services but can use partnerships and catalog items to allow each unique customer to build the bundled offer they want.

This is where catalogs and microservices potentially represent a type of small-grid model. There are already many APIs from third-party providers, and the catalog / orchestration tools already exist to support the model. For many telcos, it will take a slight mindset shift – to embrace partnerships (ie to discard “not invented here” thinking); to allow their many existing bit-pipe subscribers to sell and bill through the telco platform (ie embrace sell-through); and to build platforms and processes that allow for simple certification and onboarding of third parties.
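
As a rough sketch of that model, the Python below shows a catalog-driven fulfilment step that mixes an internal action with a call to a partner microservice. The endpoint, payload and catalog structure are all hypothetical; a real orchestration engine would add certification, error handling and billing hooks around this core idea.

```python
# A simplified sketch of catalog-driven orchestration calling a third-party
# microservice as part of a bundled offer. URLs and payloads are invented.
import requests

CATALOG = {
    "premium-bundle": [
        {"type": "internal", "action": "provision_bitpipe"},
        {"type": "partner",
         "url": "https://api.content-partner.example.com/v1/subscriptions"},
    ]
}

def fulfil(offer, customer_id):
    for item in CATALOG[offer]:
        if item["type"] == "internal":
            print(f"Executing internal action: {item['action']}")
        else:
            # Sell-through: activate the partner service on the customer's behalf
            resp = requests.post(item["url"],
                                 json={"customer": customer_id},
                                 timeout=10)
            resp.raise_for_status()

fulfil("premium-bundle", "CUST-001")
```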

If your current OSS isn’t already integrating with third-party services, is it on your roadmap? Then again, does it suit your proposed future business models?

Will it take open source to unlock OSS potential?

I have this sense that the other OSS, open source software, holds the key to the next wave of OSS (Operational Support Systems) innovation.

Why? Well, as yesterday’s post indicated (through Nic Brisbourne), “it’s hard to do big things in a small way.” I’d like to put a slight twist on that concept by saying, “it’s hard to do big things in a fragmented way.” [And the OSS industry is nothing if not fragmented, after all.]

The skilled resources in OSS are so widely spread across many organisations (doing a lot of duplicated work) that we can’t reach a critical mass of innovation. Open source projects like ONAP represent a possible path to critical mass through sharing and augmenting code. They provide the foundation upon which bigger things can be built. If we don’t uplift the foundations across the whole industry quickly, we risk losing relevance (just ask our customers for their gripes list!).

BTW. Did you notice the news that six Linux Foundation open source networking projects have just merged into one? The six initial projects are ONAP, OPNFV, OpenDaylight, FD.io, PNDA, and SNAS. The new project is called the LF Networking Fund (LFN).

But you may ask how organisations can protect their trade secrets whilst embracing open source innovation. Derek Sivers provides a fascinating story and line of thinking in “Why my code and ideas are public.” I really recommend having a read about Valerie.

Alternatively, if you’re equating open source with free / unprofitable, this link provides a list of highly successful organisations with significant open source contributions. There are plenty of creative ways to be rewarded for open source effort.

Comment below if you agree or disagree about whether we need OSS (open source software) to unlock the potential of OSS (Operational Support Systems) innovation.

The two types of disruptive technologists

OSS is an industry that’s undergoing constant and massive change. But it still hasn’t been disrupted in the modern sense of the term. It’s still waiting for its Uber/AirBnB moment, where the old way is rendered almost obsolete by the introduction of a new way. OSS is not just waiting for disruption, but primed for it.

It’s a massive industry in terms of revenues, but it’s still far from delivering everything that customers want/need. It’s potentially even holding back the large-scale service provider industry from being even more influential / efficient in the current digital communications world. Our recent OSS Call for Innovation spelled out the challenges and opportunities in detail.

Today we’ll talk about the two types of disruptive technologists – one that assists change and one that hinders it.

The first disruptive technologist is a rare beast – they’re the innovators who create solutions that are distinctly different from anything else in the market, changing the market (for the better) in the process. As discussed in this recent post, most of the significant changes occurring to OSS have been extrinsic (from adjacent industries like IT or networking rather than OSS). We need more of these.

The second disruptive technologist is all too common – they’re the technologists whose actions disrupt an OSS implementation. They’re usually well-intended, but can get in the way of innovation in two main ways:
1) By not looking beyond incremental change to existing solutions
2) Halting momentum by creating and resolving a million “what if?” scenarios

Most of us probably fall into the second category more often than the first. We need to reverse that trend, individually and collectively, don’t we?

Would you like to nominate someone who stands out as being the first type of disruptive technologist and why?

How “what if?” scenarios can halt a project

Let’s admit it; we’ve all worked on an OSS project that has gone into a period of extended stagnation because of a fear of the unknown. I call them “What if?” scenarios. They’re the scenarios where someone asks, “What if x happens?” and then the team gets side-tracked whilst finding an answer / resolution. The problem with “What if?” scenarios is that many of them will never happen, or will happen so rarely that the impact is negligible. They’re at the opposite end of the Pareto Principle – the 20% that takes up 80% of the effort / budget / time. They need to be minimised and/or mitigated.

In some cases, the “what if?” questions come from a lack of understanding about the situation, the product suite and / or the future solution. That’s completely understandable because we can never predict all of the eventualities of an OSS project at the outset. That’s the OctopOSS at work – you think you have all of the tentacles under control, but another one always comes and whacks you on the back of the head.

The best way to reduce the “what if?” questions from getting out of control is to give stakeholders a sandpit / MVP / rapid-prototype / PoC environment to interact with.

The benefit of the prototype environment is that it delivers something tangible, something that stakeholders far and wide can interact with to test assumptions, usefulness, usability, boundary cases, scalability, etc. Stakeholders get to understand the context of the product and get a better feeling for what the end solution is going to look like. That way, many of the speculative “what ifs?” are bypassed and you start getting into the more productive dialogue earlier. The alternative, the creation of a document or discussion, can devolve into an almost endless set of “what if?” scenarios and opinions, especially when there are large groups of (sometimes militant) stakeholders.

The more dangerous “what if?” questions come from the experts. They’re the ones who demonstrate their intellectual prowess by finding scenario after scenario that nobody else is considering. I have huge admiration for those who can uncover potential edge cases, race conditions, loopholes in code, etc. The challenge is that they can be extremely hard to document, test for and circumvent. They’re also often very difficult to quantify or prove a likelihood of occurrence, thus consuming significant resources.

Rather than divert resources to resolving all these “what if?” questions one-by-one, I try to seek a higher-order “safety-net” solution. This might be in the form of exception handling, try-catch blocks, fall-out analysis reports, etc. Or, it might mean assigning a watching brief on the problem and handling it only if it arises in future.
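
As a simple illustration of the safety-net idea, the Python sketch below catches any unanticipated failure in a (placeholder) order-processing step, records it in a fall-out queue for later triage, and keeps the batch moving, rather than resolving every “what if?” scenario up front.

```python
# A hedged sketch of a "safety-net": unanticipated failures are caught,
# logged for fall-out analysis and handled only if/when they arise.
# The order structure and business logic are placeholders.
import logging

logging.basicConfig(level=logging.INFO)
fallout_queue = []  # stands in for a fall-out analysis report / work queue

def process_order(order):
    # Placeholder business logic: a real step might provision a service
    if "service" not in order:
        raise ValueError("order missing service definition")

def process_batch(orders):
    for order in orders:
        try:
            process_order(order)
        except Exception as exc:
            # Safety-net: capture the rare failure instead of pre-empting it
            logging.warning("Order %s fell out: %s", order.get("id"), exc)
            fallout_queue.append({"order": order, "error": str(exc)})

process_batch([{"id": 1, "service": "fibre"}, {"id": 2}])
print(f"{len(fallout_queue)} order(s) queued for fall-out analysis")
```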

The evolving complexity of RCA

Root cause analysis (RCA) is one of the great challenges of OSS. As you know, it aims to identify the probable cause of an alarm-storm, where all alarms are actually related to a single fault.

In the past, my go-to approach was to start with a circuit hierarchy-based algorithm. If you had an awareness of the hierarchy of circuits, usually via inventory, then when a lower-order fault occurred (eg Loss of Signal on a transmission link caused by a cable break), you could suppress all higher-order alarms (ie from the bearers or tributaries that depended upon that L1 link). That worked well in the fixed networks of the distant past (think SDH / PDH), and the approach was repeatable between different customer environments.
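
For illustration, here's a minimal Python sketch of that hierarchy-based suppression: given a child-to-parent circuit mapping from inventory, any alarm whose ancestor circuit is also alarmed is suppressed, leaving the lowest-order alarm as the probable root cause. The circuit names and alarm types are invented.

```python
# A minimal sketch of circuit hierarchy-based alarm suppression.
# child -> parent: which lower-order circuit carries each higher-order one.
CARRIED_BY = {
    "tributary-1": "bearer-A",
    "tributary-2": "bearer-A",
    "bearer-A": "fibre-link-1",
}

def correlate(alarms):
    """Mark each alarmed circuit as root cause or suppressed."""
    for circuit, alarm in alarms.items():
        # Walk up the hierarchy; if any ancestor is also alarmed, suppress
        parent = CARRIED_BY.get(circuit)
        suppressed = False
        while parent:
            if parent in alarms:
                suppressed = True
                break
            parent = CARRIED_BY.get(parent)
        status = "SUPPRESSED" if suppressed else "ROOT CAUSE"
        print(f"{circuit}: {alarm} -> {status}")

correlate({
    "fibre-link-1": "Loss of Signal",   # the cable break
    "bearer-A": "AIS",                  # higher-order consequence
    "tributary-1": "Path Unavailable",  # higher-order consequence
})
```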

Packet-switching data networks changed that to an extent, because a data service could traverse any number of links, on-net or off-net (ie leased links). The circuit hierarchy approach was still applicable, but needed to be supplemented with other rules.

Now virtualised networking is changing it again. RCA loses a little relevance in the virtualised layer. Workloads and resource allocations are dynamic and transient, making them less suited to fixed algorithms. The objective now becomes self-healing – if a failure is identified, failed resources are spun down and new ones spun up to take the load. The circuit hierarchy approach loses relevance, but perhaps infrastructure hierarchy still remains useful. Cable breaks, server melt-downs and hanging controller applications are all examples of root causes that will cause problems in higher layers.

Rather than fixed rules, machine-based pattern-matching is the next big hope for coping with dynamically changing networks.

The number of layers and the complexity of the network seem to be ever increasing, and with them RCA becomes more sophisticated… If only we could evolve to simpler networks rather than more complex ones. Wishful thinking?

Customers don’t invest in OSS. What do they invest in?

“An organisation buys an OSS, not because it wants an Operational Support System, but because it wants Operational Support.”

So if our customers are not investing in our OSS, what are they actually investing in? Easy! They’re investing in the ability to solve their own problems and opportunities in future.

If we don’t actually understand operations, what chance do we have of delivering operational support? We keep hearing the terms “customer experience this” and “CX that,” so it must be important, right? Operational support staff might be a few steps removed from us (intentionally or unintentionally), but they are our “real” customers, and the only way we can develop a solution that empathises with them is by spending time with them and listening (not always easy for us know-it-all OSS builder-types).

And just because we have a history in ops doesn’t mean we can assume we know it this time around. Operations are different at each organisation.

So, are we sure we understand the nature, extent and context of the unique problem/s that this customer needs to solve (not wants to solve)?

The exposure effect can work for or against OSS projects

The exposure effect (no, not the one circulating through Hollywood) has a few interesting implications for OSS.

“The mere-exposure effect is a psychological phenomenon by which people tend to develop a preference for things merely because they are familiar with them.”
Wikipedia

In effect, it’s the repetition that drills familiarity, comfort, but also bias, into our sub-conscious. Repetition doesn’t make a piece of information true, but it can make us believe it’s true.

Many OSS experts are exposed to particular vendors/products for a number of years during their careers, and in doing so, the exposure effect can build. It can have a subtle bias on vendor selection, whereby the evaluators choose the solution/s they know ahead of the best-fit solution for their organisation. Perhaps having independent vendor selection facilitators who are familiar with many products can help to reduce this bias?

The exposure effect can also appear through sales and marketing efforts. By regularly contacting customers and repetitively promoting its wares, a vendor builds the customer’s familiarity with the product. In theory it works for OSS products just as it does for beer commercials. This can work for or against, depending on the situation.

In the case for, it can help to build a guiding coalition to get a complex, internal OSS project approved and supported through the challenging times that await every OSS project. I’d even go so far as to say, “you should use it to help build a guiding coalition,” rather than, “you can use it to help build a guiding coalition.” Never underestimate the importance of organisational change management on an OSS project.

In the case against, it can again develop a bias towards vendors / products that aren’t best-fit for the organisation. Similarly, if a “best-fit” product doesn’t take the time to develop repetition, they may never even get considered in a selection process, as highlighted in the diagram below.

7 touches of sales
Courtesy of the OnlineMarketingInstitute.

If the customer thinks they have a problem, they do have a problem

Omni-channel is an interesting concept because it generates two distinctly different views:

  • The customer will use whichever channel (eg digital, apps, contact-centre, IVR, etc) they want to use.
  • The service provider will try to push the customer onto whichever channel suits the service provider best.

The customer will often want to use digital or apps, back-ended by OSS – whether that’s to place an order, make configuration changes, etc. The service provider is happy for the customer to use these low-cost, self-service channels.

But when the customer has a problem, they’ll often try to self-diagnose, then prefer to speak with a person who has the skills to trouble-shoot and work with the back-end systems and processes. Unfortunately, the service provider still tries to push the customer into low-cost, self-service channels. Ooops!

If the customer thinks they have a problem, they do have a problem (even if technically, they don’t).
Omni-channel means giving customers the channels that they want to work via, not the channels the service provider wants them to work via.
Call Volume Reduction (CVR) projects (which can overlap into our OSS) sometimes lose sight of this fact just because the service provider has their heart set on reducing costs.

When low OSS performance is actually high performance

“It’s not unusual for something to be positioned as the high performance alternative. The car that can go 0 to 60 in three seconds, the corkscrew that’s five times faster, the punch press that’s incredibly efficient…
The thing is, though, that the high performance vs. low performance debate misses something. High at what?
That corkscrew that’s optimized for speed is more expensive, more difficult to operate and requires more maintenance.
That car that goes so fast is also more difficult to drive, harder to park and generally a pain in the neck to live with.
You may find that a low-performance alternative is exactly what you need to actually get your work done. Which is the highest performance you can hope for.”
Seth Godin
in this article, What sort of performance?

Whether selecting a vendor / product, designing requirements or building an OSS solution, we can sometimes lose track of what level of performance is actually required to get the work done, can’t we?

How many times have you seen a requirement sheet that specifies a Ferrari, but you know the customer lives in a location with potholed and cobblestoned roads? Is it right to spec them – sell them – build them – charge them for a Ferrari?

I have to admit to being guilty of this one too. I have gotten carried away with what the OSS can do, nearer the higher-performance end of the spectrum, rather than taking the more pragmatic view of what the customer really needs.

Automations, custom reports and integrations are the perfect OSS examples of low performance actually being high performance. We spend a truckload of money on these types of features to avoid manual tasks (curse having to do those manual tasks)… when a simple cost-benefit analysis would reveal that it makes a lot more sense to stay manual in many cases.
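
As a back-of-the-envelope example (with every figure a made-up assumption), the arithmetic below compares three years of staying manual against building and maintaining an automation. In this scenario the automation costs roughly five times the manual effort it replaces.

```python
# A hypothetical cost-benefit check: does automating a task beat staying manual?
build_cost = 40_000          # $ to build and test the automation (assumed)
annual_maintenance = 8_000   # $ per year to keep it working through upgrades
task_minutes = 15            # manual effort per occurrence
occurrences_per_year = 200
loaded_rate_per_hour = 90    # $ fully-loaded operator cost
years = 3                    # evaluation horizon

manual_cost = task_minutes / 60 * loaded_rate_per_hour * occurrences_per_year * years
automation_cost = build_cost + annual_maintenance * years

print(f"Manual over {years} years:     ${manual_cost:,.0f}")      # $13,500
print(f"Automation over {years} years: ${automation_cost:,.0f}")  # $64,000
# In this made-up case, staying manual wins by a wide margin.
```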

The 10 minute / 1 minute / 10 second OSS challenge

Check out the video below, which gives an example of the 10 minute / 1 minute / 10 second challenge (you can check out more of them here).

When given 10 minutes to sketch Spiderman, the result is far richer than when the artist is given only 10 seconds… well obviously!!

But let me pose a question. If Sketch B was compiled from 60 sequential 10s updates (ie Sketch B would also take 10 mins total sketching time) do you think the final sketch would look as impressive as the 1 x 10 min sketch (Sketch A)? The total sketching time is the same, but will the results be similar?

From the 10s sketch above, you can see that the composition is not as precise. Subsequent updates would have to work around the initial structural flaws.

Do you wonder whether this is somewhat analogous to creating OSS using continuous development frameworks like Agile or DevOps? By having tightly compressed (eg weekly) release cycles, are we compromising the structure from the start?

I’m a big believer in rapid prototyping with subsequent incremental improvements instead of the old big-bang OSS delivery model. I’m also impressed with automated dev / test / release frameworks. However, I’m concerned that rapid release cycles can enforce unnecessary deadlines and introduce structural compromises that are difficult to fix mid-flight.

The future of telco / service provider consulting

“Change happens when YOU and I DO things. Not when we argue.”
James Altucher.

We recently discussed how ego can cause stagnation in OSS delivery. The same post also indicated how smart contracts potentially streamline OSS delivery and change management.

Along similar analytical lines, there’s a structural shift underway in traditional business consulting, as described in a recent post contrasting “clean” and “dirty” consulting. There’s increasing skepticism about traditional “gut-feel” or “set-and-forget” (aka clean) consulting, and greater client trust in hard data / analytics and end-to-end implementation (dirty consulting).

Clients have less need for consultants who just turn the ignition and lay out sketchy directions, but increasingly need ones who can help drive the car all the way to their desired destination.

Consultants capable of meeting these needs for the telco / service provider industries have:

  • Extensive coal-face (delivery) experience, seeing and learning from real success and failure situations / scenarios
  • An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data
  • An ability to build repeatable frameworks (including the development of smart contracts)
  • A mix of business, IT and network / tech expertise, like all valuable tripods

Have you noticed that the four key features above are perfectly aligned with having worked in OSS? OSS/BSS data stores contain information that’s relevant to all parts of a telco / service provider business. That makes us perfectly suited to being the high-value consultants of the future, not just contractors into operations business units.

Few consultancy tasks are productisable today, but as technology continues to advance, traditional consulting roles will increasingly be replaced by IP (Intellectual Property) frameworks, data analytics, automations and tools… as long as the technology provides real business benefit.

I found a way to save ten million dollars

Yesterday’s post about egos in OSS contained the following Dilbert cartoon:
Dilbert - I found a way to save a million dollars.
It reminded me of a story from many years ago.

I was working in a developing country, advising the board of a tier-one telco on the implementation of their first-ever OSS (they’d only ever operated their networks at NMS level previously). During the analysis phase I came across some data that showed an interesting opportunity for an innovation relating to their voice Points of Interconnect (PoI).

From a back-of-a-paper-napkin analysis, it seemed that a ~$50-100k investment could have improved the company’s profit by at least $10M. I ran the concept, and the numbers, past their head of switching. His response was, “I think you’re right… but I’m not going to recommend it.”

You could say that I was a little bewildered.

He then followed with, “You have to see this from my perspective. If I recommend this project and it succeeds, I receive no benefit. I’m not due for promotion for another two years at the earliest. I will barely receive any recognition at all, certainly no financial reward. The company receives all the benefits. But if the project fails, I will be put aside, passed over for any future promotions. It would be a career killer.”

He was right. I hadn’t seen it from his perspective… still not sure that I do, but as a consultant, I was only ever passing through their corporate culture rather than having a 4-5 decade career embedded within it.

It wasn’t within my OSS scope, but I quietly mentioned it to the board. They delegated the decision back to the head of switching. The project was not recommended to proceed, not even for further analysis.

It’s interesting what human factors come into play when a project investment is under evaluation, isn’t it? What human factors have you seen influence purchasing decisions?

Bad OSS ego decisions

“A long, long time ago Dennis Haslinger told me that most of the most serious mistakes I would make in life would be bad ego decisions. I have found that to be true.”
Gary Halbert.

OSS is an industry filled with highly intelligent people. In every country I’ve visited to work on OSS assignments, perhaps excluding Vietnam, my colleagues have been predominantly male. Dare I say it, do those two preceding facts imply that a significant ego level exists on many (most?) OSS projects (or has that just been a coincidence of my experiences)?

Given that OSS projects tend to cross business units, inter-departmental power plays like the one described in the Dilbert comic below can become just another potential pitfall.
Dilbert - I found a way to save a million dollars

To be honest, I can’t recall any examples where ego (mine or others’) has led to serious mistakes as such, but I’ve seen many cases where it has led to serious stagnation and delays in project delivery that have been extremely costly.

One example is cited in this post, where the intellectual brilliance of one person caused a document to blow out from 30 pages to 150+, causing a 3+ month delay and more than $100k additional cost.

Stakeholder management and change management are highly underestimated factors in the success of OSS projects.

PS. The “intellectual brilliance” link above also talks about the possible benefits of smart contracts in OSS delivery. I wonder whether smart contracts will reduce some of the ego-related stagnation on OSS projects, or simply shift it from the delivery phase to the up-front smart contract agreement phase, thus introducing more “what if scenario” stagnation?

Raising the OSS horizon

With the holiday period looming for many of us, we will have the head-space to reflect – on the year(s) gone and to ponder the one(s) upcoming. I’d like to pose the rhetorical question, “What do you expect to reflect on?”

It’s probably safe to say that a majority of OSS experts are engaged in delivery roles. Delivery roles tend to require great problem-solving skills. That’s one of the exciting aspects of being an OSS expert after all.

There’s one slight problem though. Delivery roles tend to have a focus on the immediacy of delivery, a short-term problem-solving horizon. This generates incremental improvements like new dashboards within an existing dashboard framework, refining processes, next release software upgrades, releasing new stuff that adds to the accumulation of tech-debt, etc, etc.

That’s great, highly talented, admirable work, often exactly what our customers are requesting, but not necessarily what our industry needs most.

We need the revolutionary, not the evolutionary. And that means raising our horizons – to identify and comprehend the bigger challenges and then solving those. That is the intent of the OSS Call for Innovation – to lift our vision to a more distant horizon.

When you reflect during this holiday period, how distant will your horizon be?

PS. Upon your own reflection, are there additional big challenges or exponential opportunities that should be captured in the OSS Call for Innovation?

Micro-strangulation vs COTS customisation

Over the last couple of posts, we’ve referred to the following diagram and its ability to create a glass ceiling on OSS feature releases:
The increasing percentage of tech debt

Yesterday’s post indicated that the current proliferation of microservices has the potential to amplify the strangulation.

So how does that compare with the previous approach that was built around COTS (Commercial off-the-shelf) OSS packages?

With COTS, the same time-series chart applies, except that the management of legacy, etc, falls largely to the COTS vendor, freeing up the service provider… until the service provider starts building customisations and the overhead becomes shared.

With microservices, the rationalisation responsibility is shifted to the in-house (or insourced) microservice developers.

And a third option: If the COTS is actually delivered via a cloud “OSS as a service” (OSSaaS) model, then there’s a greater incentive for the vendor to constantly re-factor and reduce clutter.

A fourth option, which I haven’t actually seen as a business model yet: once an accumulation of modular microservices begins to grow, vendors might begin to offer those microservices as a COTS offering.