The PAOSS Call for Innovation has been released

I’ve been promising to release an OSS Call for Innovation – a manifesto of what OSS can become, one that also describes areas where exponential improvements are just waiting to happen.

It can be found here:
http://passionateaboutoss.com/oss-call-for-innovation/
And you’ll also notice that it’s a new top-level menu item here on PAOSS.

Each time I’ve released one of these vision-statement style reports in the past, I’ve been pleasantly surprised to find that some of the visions are already being worked on by someone in the industry.

Are there any visions that I’ve overlooked? I’d love to see your comments on the page and to help spread the word about all the amazing innovations that you’re working on and/or dreaming about.

Call for Innovation by Swisscom, Telia and Proximus

Software Defined Networking (SDN) and Network Function Virtualization (NFV) are about to transform and disrupt the network operator industry.

That’s why Swisscom, Telia Company and Proximus, three leading telco providers from across Europe, are jointly issuing this unique Call for Innovation and invite startups and innovators developing “Next Generation Virtual Telco Functions & Services (SDN / NFV 2.0)” to apply until 23 October 2016.

The best startups will be able to pitch their solution in front of an expert jury and have the chance to be selected for a PoC project and future collaboration with all three telcos.
What we are looking for

The topics considered for this call are related to SDN/NFV development within the telecommunication industry. The baseline infrastructure for SDN/NFV is available and is largely based on the de-facto standards and open source-based systems of OpenStack/OPNFV/CloudFoundry. The next steps and next wave of innovations should be using that infrastructure for new networking functions, cloud-native implementation of existing network functions, and new Telco services we could offer in the market.”

From http://call-for-innovation.com/sdn-nfv

Let me start out with an apology. This is old news. The Swisscom / Telia Company / Proximus Call for Innovation closed nearly a year ago. However, I’m bringing it to your attention for what it means to the telco industry. It’s acknowledging that traditional suppliers to telcos are not always servicing the need for innovative approaches to the problems the telcos are facing. It’s targeting the long tail of innovation.

Whilst this is specifically seeking SDN/NFV innovations, what does a Call for Innovation look like for OSS? Keep an eye out here on PAOSS, as I’ll be publishing an OSS Call for Innovation shortly.

Do we actually need fewer intellectual giants?

Have you ever noticed that almost every person who works in OSS is extremely clever?
No?

They may not know the stuff that you know or even talk in the same terminologies that you and your peers use, but chances are they also know lots of stuff that you don’t.

OSS sets a very high bar. I’ve been lucky enough to cross into many different industries as a consultant. I’d have to say that there are more geniuses per capita in OSS than in any other industry / sector I’ve worked in.

So why then are so many of our OSS a shambles?

Is it groupthink? Do we need more diversity of thinking? Do we actually need fewer intellectual giants to create pragmatic, mere-mortal solutions?

Our current approach appears to be flawed. Perhaps Project Platypus gives us an alternative framework?

Actually, I don’t think we need fewer intellectual giants. But I do think we need our intellectual giants to have a greater diversity of experiences.

The OSS Think Big juxtaposition

I recently saw the advertisement below:

I’ve clipped only the last 10 seconds because that was the part that struck me. The ad is for BHP*, one of the world’s largest miners. The mining industry thinks in long-term projects because it takes many years to deliver results – for exploration, planning, approvals, for the infrastructure to be built and operationalised, etc.

Mining is “only” the process of pulling natural resources out of the ground, yet despite all of the complexities we face in OSS, mining projects tend to be far more complex than ours. The decade-long duration of projects means that technologies originally included in plans frequently become obsolete mid-flight and have to be re-planned. That means major contracts also have to be obsoleted and re-planned mid-flight. Workforce management operates at a completely different scale than it does for OSS.

Mining thinks in time-frames of decades. OSS transformations are planned in time-frames of years. OSS delivery, especially Agile deliveries, often only think in quarters (or much, much less).

In OSS, do we really Think Big?

But there’s a twist on this question. In the rare cases when we do think big, are we constraining ourselves by then falling into the “deliver big” mindset too? In OSS, I’ve always felt that we deliver most efficiently when very small numbers of very clever people group together.

So there’s the juxtaposition with the clip above – Think Big… Think Small.

When you’re thinking of OSS roadmaps, what’s your thinking time-frame?

* For disclosure, I’m not an investor in BHP to my knowledge, but perhaps my super fund is.

Getting past the first layer on the OSS onion

When you first start off trying to solve a problem, the first solutions you come up with are very complex, and most people stop there. But if you keep going, and live with the problem and peel more layers of the onion off, you can often times arrive at some very elegant and simple solutions.”
Steve Jobs.

The quote above pretty well describes my experience with OSS. The first solutions we come up with for a given problem are generally very complex… and that’s where we stop, because there are so many other problems to move on to next.

Does that reflect your experiences too?

Do we ever get the chance to take a deep breath, having completed all our roadmap items, and therefore have time to peel more layers off old problems?

In my experience this just doesn’t happen. So that just leaves us with solutions that are complex… to the detriment of OSS as a whole.

So the question for you today is: how do we give ourselves the time and space to peel more layers off our OSS onions?

My initial thought is that we should stop adding so many things to the roadmap – taking an 80/20 approach to roadmap prioritisation – leaving more time to refine the really important stuff. I’d love to hear your thoughts though.

The augmented analytics journey

Smart Data Discovery goes beyond data monitoring to help business users discover subtle and important factors and identify issues and patterns within the data so the organization can identify challenges and capitalize on opportunities. These tools allow business users to leverage sophisticated analytical techniques without the assistance of technical professionals or analysts. Users can perform advanced analytics in an easy-to-use, drag and drop interface without knowledge of statistical analysis or algorithms. Smart Data Discovery tools should enable gathering, preparation, integration and analysis of data and allow users to share findings and apply strategic, operational and tactical activities and will suggest relationships, identifies patterns, suggests visualization techniques and formats, highlights trends and patterns and helps to forecast and predict results for planning activities.

Augmented Data Preparation empowers business users with access to meaningful data to test theories and hypotheses without the assistance of data scientists or IT staff. It allows users access to crucial data and Information and allows them to connect to various data sources (personal, external, cloud, and IT provisioned). Users can mash-up and integrate data in a single, uniform, interactive view and leverage auto-suggested relationships, JOINs, type casts, hierarchies and clean, reduce and clarify data so that it is easier to use and interpret, using integrated statistical algorithms like binning, clustering and regression for noise reduction and identification of trends and patterns. The ideal solution should balance agility with data governance to provide data quality and clear watermarks to identify the source of data.

Augmented Analytics automates data insight by utilizing machine learning and natural language to automate data preparation and enable data sharing. This advanced use, manipulation and presentation of data simplifies data to present clear results and provides access to sophisticated tools so business users can make day-to-day decisions with confidence. Users can go beyond opinion and bias to get real insight and act on data quickly and accurately.”
The definitions above come from a post by Kartik Patel entitled, “What is Augmented Analytics and Why Does it Matter?”

Over the years I’ve loved playing with data and learnt so much from it – about networks, about services, about opportunities, about failures, about gaps, etc. However, modern statistical analysis techniques fall into one of the categories described in “You have to love being incompetent“, where I’m yet to develop the skills to a comfortable level. Revisiting my fifth year uni mathematics content is more nightmare than dream, so if augmented analytics tools can bypass the stats, I can’t wait to try them out.

The concepts described by Kartik above would take those data learning opportunities out of the data science labs and into the hands of the masses. Having worked with data science labs in the past, I’ve found the value of the information to be mixed, dependent largely upon which data scientist I dealt with. Some were great and had their fingers on the pulse of which data could resolve the questions being asked. Others, not so much.

I’m excited about augmented analytics, but I’m even more excited about the layer that sits on top of it – the layer that manages, shares and socialises the aggregation of questions (and their answers). Data in itself doesn’t provide any great insight. It only responds when clever questions are asked of it.
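To make the “clever questions” idea a little more concrete, here’s a minimal sketch of the kind of question an augmented analytics layer might automate – the data, field names and the question itself are all invented for illustration, not taken from any particular product:

```python
from collections import Counter

# Hypothetical inventory and alarm extracts - invented for illustration only.
inventory = [
    {"device_id": "D1", "type": "olt"}, {"device_id": "D2", "type": "olt"},
    {"device_id": "D3", "type": "router"}, {"device_id": "D4", "type": "router"},
    {"device_id": "D5", "type": "router"}, {"device_id": "D6", "type": "switch"},
]
alarms = [
    {"device_id": "D1", "severity": "critical"},
    {"device_id": "D1", "severity": "major"},
    {"device_id": "D2", "severity": "critical"},
    {"device_id": "D3", "severity": "minor"},
]

# The "question": which device types raise the most alarms per device in the fleet?
fleet_size = Counter(d["type"] for d in inventory)
type_of = {d["device_id"]: d["type"] for d in inventory}
alarm_count = Counter(type_of[a["device_id"]] for a in alarms)

for dev_type, count in alarm_count.most_common():
    rate = count / fleet_size[dev_type]
    print(f"{dev_type}: {count} alarms across {fleet_size[dev_type]} devices "
          f"({rate:.1f} alarms per device)")
```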

OSS data has an immeasurable number of profound insights just waiting to be unlocked, so I can’t wait to see where this relatively nascent field of augmented analytics takes us.

You have to love being incompetent

You have to love being incompetent in order to be competent.”
James Altucher.

I’m not sure that anyone loves feeling incompetent, but James’ quote is particularly relevant in the world of OSS. There are always so many changes underway that you’re constantly taken out of your comfort zone. But the question becomes: how do you overcome those phases / areas of incompetence?

Earlier in my career, I had more of an opportunity to embed myself into any area of incompetence, usually spawned by a technical challenge being faced, and pick it up through a combination of practical and theoretical research. That’s a little harder these days with less hands-on work and more management responsibility, not to mention more demands on time outside of hours.

In a way, it’s a bit like stepping up the layers of the TMN management pyramid.
TMN Pyramid
Image courtesy of www.researchgate.net.

With each step up, the context gets broader (eg more domains under management), but more abstracted from what’s happening in the network. Each subsequent step northbound does the same thing:

  • It abstracts – it only performs a sub-set of the lower layer’s functionality
  • It connects – it performs the task of connecting and managing a larger number of network elements than the lower layer

Conversely, each step down the management stack should produce a narrower (ie not so many device interconnections) but deeper field of view (ie a deeper level of information about fewer devices).
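As a loose illustration of that northbound abstraction (a toy model of my own, not anything defined in the TMN standards), each layer below manages more elements than the one beneath it but exposes only a summarised sub-set of the detail:

```python
# A loose, hypothetical illustration of northbound abstraction in the TMN stack:
# each layer aggregates more elements but exposes less detail than the layer below.

network_elements = [
    {"name": "NE-1", "domain": "access", "cpu": 61, "ports_down": 0, "temp_c": 41},
    {"name": "NE-2", "domain": "access", "cpu": 88, "ports_down": 2, "temp_c": 55},
    {"name": "NE-3", "domain": "core",   "cpu": 35, "ports_down": 0, "temp_c": 38},
]

def element_view(ne):
    """Element management layer: full detail, single element."""
    return ne

def network_view(elements, domain):
    """Network management layer: many elements, but only a health summary per domain."""
    in_domain = [ne for ne in elements if ne["domain"] == domain]
    return {
        "domain": domain,
        "elements": len(in_domain),
        "degraded": sum(1 for ne in in_domain if ne["cpu"] > 80 or ne["ports_down"]),
    }

def service_view(elements):
    """Service management layer: broadest scope, narrowest detail."""
    degraded = sum(network_view(elements, d)["degraded"]
                   for d in {ne["domain"] for ne in elements})
    return {"service_impacting_risk": degraded > 0}

print(element_view(network_elements[1]))           # deep, narrow
print(network_view(network_elements, "access"))    # broader, summarised
print(service_view(network_elements))              # broadest, most abstracted
```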

The challenge of OSS is in choosing where to focus curiosity and improvements – diving down the stack into new tech or looking up and sidewards?

The mafia… Pressure? What pressure?

OSS delivery teams can be quite tense environments to work within, can’t they? Deadlines, urgency, being in the customer’s line of sight and, did I mention, deadlines? [As an aside, I’m not sure which type of deadline is more stressful – the ongoing drain of fortnightly releases under Agile, or the chaos of a big-bang release preceded by lengthier periods of relative calm.]

When it comes to dealing with stress, I see two ends of a continuum:

  • The teflon end – get it off me, get it off me – the people who, when under stress, push stress onto everyone else and make the whole team more stressed
  • The sponge end – the people who are able to absorb the pressure around them and exude a calm that reduces stress contagion

I can completely understand those who fall at the teflon end, but I can’t admire them or aspire to work with them. I’m sure most would feel the same way. They let urgency overwhelm logic.

This reminds me of a project where the mafia were tightly entwined with a customer’s project team, constantly wrangling scope, approvals and payments to ensure “the organisation” profited. They were particularly “active” around delivery time.

One of the biggest of big-bang deliveries required me to stand in front of a large customer contingent for three days straight to demonstrate functionality and get grilled about processes, tools and data sets. At the end of the third day, we’d scheduled the demonstration of some brand new functionality.

It was a module that had been sold to the customer before even being conceptually architected, let alone built. [You know the story – every requirement on an RFP must be responded to with a “Complies”, even if it doesn’t]. My client (the vendor) was almost ready to back away from this multi-million dollar contract due to the complexity and time estimated to build the entirely new module from scratch. I stepped in and proposed a solution that stitched together four existing tools, some glue and only a few weeks of effort… but we’d never even had it working in the lab before entering into the demo.

At first pass, the demo failed. Being at the end of the three-day demo (and the hectic weeks leading up to it), my brain was fried. The customer agreed to take a short break while we investigated what went wrong. We were struggling to find a resolution, so I was proposing to delay demonstration of the new tool until the following day.

Luckily for me, the most junior member of our team sat in the background plugging away, trialling different fixes. He tapped me on the shoulder and told me that he thought he’d resolved the problem.

We regathered the customer’s team and presented the new module. The customer’s lead pushed an unknown configuration into the network and we waited while he checked whether our new tool had responded correctly. It had, and the customer was ecstatic.

We’d been saved by a very clever young man with an ability to absorb pressure like a sponge. I couldn’t thank him enough.

Omnichannel will remain disjointed until…

Omnichannel is intended to be a strategy that provides customers with a seamless, consistent experience across all of their contact channels – channels that include online/digital, IVR, contact centre, mobile app, retail store, B2B portal, etc.

The challenge of delivering consistency across these platforms is that there is little cross-over between the organisations that deliver these tools. Each is a fragmented market in its own right and the only time interaction happens (in my experience at least) is on an as-needed basis for a given project.

Two keys to delivering seamless customer experience are the ability to identify unique customers and the ability to track their journeys through different channels. The problem is that some of these channels aren’t designed to uniquely identify customers and, even when they can, aren’t consistent with other channels in their linking-key strategies.

A related problem is that user journeys won’t follow a single step-by-step sequence through the channels. So rather than process flows, user journeys need to be tracked as state transitions through their various life-cycles.

OSS/BSS are ideally situated to manage linking keys across channels (if the channels can provide the data) as well as handling state-transition user journeys.
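As a simplistic sketch of that idea (the linking key, channel names and states are purely illustrative), a journey tracked as state transitions rather than a fixed process flow might look something like this:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Purely illustrative: a journey keyed by one linking key (e.g. a customer ID),
# recorded as state transitions rather than a fixed, sequential process flow.

@dataclass
class Journey:
    linking_key: str                      # the unique customer identifier shared across channels
    state: str = "browsing"
    history: list = field(default_factory=list)

    def transition(self, new_state: str, channel: str):
        self.history.append((datetime.now(), self.state, new_state, channel))
        self.state = new_state

journey = Journey(linking_key="CUST-0042")
journey.transition("quote_requested", channel="web")
journey.transition("quote_clarified", channel="contact_centre")   # channel hop, same journey
journey.transition("order_submitted", channel="mobile_app")

for when, old, new, channel in journey.history:
    print(f"{journey.linking_key}: {old} -> {new} via {channel}")
```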

Omnichannel represents a significant opportunity, in part because there are two layers of buyers for such technology. The first is the service provider that wants to provide its customers with a truly omnichannel experience. The second is the service providers’ business customers, to whom omnichannel infrastructure can be offered so that they can deliver consistent omnichannel experiences to their own end-customers.

Who is going to be the first to connect the various channel products / integrators together?

In desperate search of OSS flow

Flow, also known as the zone, is the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. In essence, flow is characterized by complete absorption in what one does, and a resulting loss in one’s sense of space and time.”
Wikipedia.

It’s almost certainly no coincidence that a majority of the achievements I’m most proud of within the context of OSS have originated outside business hours. I strongly believe it all comes down to flow. In a day that is punctuated by meeting after meeting, there is no flow, no ability to get into deep focus. In the world of transaction-based doing, there is rarely the opportunity to generate flow.

Every OSS project I’ve worked on has been in desperate need of innovation. That’s not a criticism, but a statement of the whole industry having so many areas in which improvement is possible. But on your current and/or past projects, how many have fostered an environment where deep focus was possible for you or your colleagues? Where have your greatest achievements been spawned from?

Jason Fried of Basecamp and 37signals fame is an advocate of building an environment where flow can happen and starts with manager and meeting minimisation. The best managers I’ve worked with have been great at facilitating flow for their teams and buffered them from the M&M noise.

How can we all build an OSS environment where the thinkers get more time to think… about improving every facet of ideating, creating, building and implementing?

A new, more sophisticated closed-loop OSS model

Back in early 2014, PAOSS posted an article about the importance of closed loop designs in OSS, which included the picture below:

OSS / DSS feedback loop

It generated quite a bit of discussion at the time and led me to being introduced to two companies that were separately doing some interesting aspects of this theoretical closed loop system. [Interestingly, whilst being global companies, they both had strong roots tying back to my home town of Melbourne, Australia.]

More recently, Brian Levy of TM Forum has published a more sophisticated closed-loop system, in the form of a Knowledge Defined Network (KDN), as seen in the diagram below:
Brian Levy Closed Loop OSS
I like that this control-loop utilises relatively nascent technologies like intent networking and the constantly improving machine-learning capabilities (as well as analytics for delta detection) to form a future OSS / KDN model.

The one thing I’d add is the concept of inputs (in the form of use cases such as service orders or new product types) as well as outputs / outcomes, such as service activations for customers – not just the steady-state operations of a self-regulating network. Brian Levy’s loop is arguably more dependent on the availability and accuracy of data, so it needs to be initially seeded with inputs (and the processing of workflows).
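To illustrate what those inputs and outputs add – in toy form only, and not reflecting Brian Levy’s actual model – the sketch below treats service orders as changes to the desired state, with each loop cycle reconciling observed state against it:

```python
# A toy closed loop, for illustration only: service orders feed the desired state,
# and each cycle reconciles observed state against it (the "delta detection" step).

desired_state = {}   # intent: what the network should be delivering
observed_state = {}  # what monitoring says the network is actually delivering

def submit_service_order(service_id, intent):
    """Input to the loop: a new service order updates the desired state."""
    desired_state[service_id] = intent

def control_loop_cycle():
    """One pass of the loop: detect deltas and issue (simulated) remediation."""
    for service_id, intent in desired_state.items():
        if observed_state.get(service_id) != intent:
            print(f"Delta on {service_id}: activating/repairing to meet '{intent}'")
            observed_state[service_id] = intent   # pretend the activation succeeded
        else:
            print(f"{service_id} is in steady state")

submit_service_order("SVC-001", "100M broadband, active")
control_loop_cycle()   # output/outcome: the service gets activated
control_loop_cycle()   # subsequent cycles just confirm steady state
```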

Current-day OSS are too complex and variable (ie un-repeatable), so perhaps this represents an architectural path towards a simpler future OSS – in terms of human interaction at least – although the technology required to underpin it will be very sophisticated. The sophistication will be palatable if we can deliver the all-important repeatability described in “I want a business outcome, not a deployment challenge.” BTW, this refers to repeatability / reusability across organisations, not just being able to repeatedly run workflows within organisations.

From pipeline to platform

Amazon established an architecture to leverage its assets for implementing a wide range of business models in a repeatable way for the retail industry. In most cases, Amazon is exposing product offerings it doesn’t actually own. It carries some inventory for third parties, but its main tasks now are vetting retail partners, ensuring product quality, giving partners access to the marketplace, and arranging payment and delivery of purchases. The company’s business model is to provide an ever-growing number of products through a broad, simple solution that’s fast, efficient and highly effective.
A similar level of speed and efficiency is possible when the same approach is applied to an operator’s architecture. Easy access, exposure of product information, a managed ecosystem of partners, a simple solution that hides background complexity – many of the same things that make it so easy to shop with Amazon – can transform the way our products are created, modified, assembled, offered, personalized and delivered. In other words, operators need architectures that support platform business models analogous to the Amazon architecture to onboard offers from many, connecting suppliers to consumers and monetizing the end solution with settlement of payments to those involved. Many platforms are possible because of technology, but they’re successful because of trust among ecosystem partners, curator and consumers.”
From TM Forum’s “Digital Platform Reference Architecture Concepts and Principles” (IG1157 Release 17.0.0)

A recent post entitled, “I want a business outcome, not a deployment challenge,” discussed cloud deployment, hosted service offerings and trust – three important considerations to the Amazon-style platform play for future CSP business models.

Neither the architecture nor the trust currently exists to allow most telcos to follow a model where, “its main tasks now are vetting retail partners, ensuring product quality, giving partners access to the marketplace, and arranging [e-]payment and [e-]delivery of purchases.” Some telcos are already going down the path of complete transformation to services-driven platforms (ie SOA). This is as much about offering (web) services to internal silos as it is to external customers.

The real platform play will only happen when the “internal” services are also opened up to third-parties to provide, not just consume.
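As a hypothetical sketch of that distinction (the registry and function names are invented), a true platform lets third parties register services into the catalogue as providers, not just call the services the operator built:

```python
# Invented illustration: a service registry where third parties can both
# consume services and provide (register) their own - the "platform" distinction.

registry = {}

def provide(service_name, provider, handler):
    """Third parties can publish services into the platform, not just call them."""
    registry[service_name] = {"provider": provider, "handler": handler}

def consume(service_name, **kwargs):
    """Anyone (internal silo, partner or end customer) can invoke a registered service."""
    entry = registry[service_name]
    print(f"Invoking '{service_name}' supplied by {entry['provider']}")
    return entry["handler"](**kwargs)

# The operator exposes one of its own capabilities...
provide("qualify_address", "operator", lambda address: {"serviceable": True})
# ...and a third party registers a complementary offering on the same platform.
provide("install_booking", "partner_co", lambda address, date: {"booked": date})

print(consume("qualify_address", address="1 Example St"))
print(consume("install_booking", address="1 Example St", date="2018-03-01"))
```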

An OSS knowledge plane for SDN

We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so.
We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The knowledge plane is novel in its reliance on the tools of AI and cognitive systems. We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective.”
David Clark et al in “A Knowledge Plane for the Internet.”

We know that SDN is built around the concepts of the data plane and the control plane. SDN also proposes centralised knowledge of, and management of, the network. David Clark and his contemporaries are proposing a machine-driven cognitive layer that could sit on top of SDN’s control plane.

The other facet of the knowledge plane concept is that it becomes an evolving data-driven approach rather than the complex process-driven approach to getting tasks done.
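As a very rough sketch of that data-driven idea (nothing below comes from Clark’s paper – the model, thresholds and advice are invented), a knowledge plane accumulates observations, maintains a simple model of “normal”, and offers advice or an explanation rather than executing a fixed, process-driven script:

```python
# Invented sketch of a knowledge-plane-style advisor: it accumulates observations,
# maintains a simple model of "normal", and offers advice (or an explanation of
# why it can't help) instead of executing a fixed, process-driven script.

class KnowledgePlane:
    def __init__(self):
        self.observations = {}   # link_id -> list of observed utilisation samples

    def observe(self, link_id, utilisation):
        self.observations.setdefault(link_id, []).append(utilisation)

    def advise(self, link_id):
        samples = self.observations.get(link_id, [])
        if len(samples) < 3:
            return f"Cannot advise on {link_id}: not enough observations yet"
        baseline = sum(samples[:-1]) / (len(samples) - 1)
        if samples[-1] > 1.5 * baseline:
            return f"{link_id} is well above its learned baseline ({baseline:.0f}%): suggest re-routing"
        return f"{link_id} looks consistent with its learned behaviour"

kp = KnowledgePlane()
for sample in (40, 45, 42, 90):
    kp.observe("link-A", sample)
print(kp.advise("link-A"))
print(kp.advise("link-B"))   # explains why it cannot advise
```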

Brian Levy & Barry Graham have authored a paper entitled, “TM FORUM FUTURE ARCHITECTURE STRATEGY,” which discusses the knowledge plane in more detail as well as providing interesting next-generation OSS/BSS architecture concepts.

I want a business outcome, not a deployment challenge

We can look and take lessons on how services evolved in the cloud space. Our customers have expressed how they want to take these services and want a business outcome, not a deployment challenge.”
Shawn Hakl.

Make no mistake, cloud OSS is still a deployment challenge (at this nascent stage at least), but in the context of OSS, Shawn Hakl’s quote asks the question, “who carries the burden of that deployment challenge?”

The big service providers have traditionally opted to take on the deployment challenge, almost wearing it as a badge of honour. I get it, because if done well, OSS can be a competitive differentiator.

The cloud model (ie hosted by a trusted partner) becomes attractive from the perspective of repeatability, from the efficiency of doing the same thing repeatedly at scale. Unfortunately this breaks down in a couple of ways for OSS (currently at least).

Firstly, the term “trusted partner” is a rare commodity between OSS providers and consumers for many different reasons (including trust from a security perspective, which is the most common pushback against using hosted OSS). Secondly, we haven’t unlocked the repeatability problem. Every organisation has different networks, different services, different processes, even different business models.

Cloud-hosted OSS represents a big opportunity into the future if we first focus on identifying the base ingredients of repeatability amongst all the disparity. Catalogs (eg service catalogs, product catalogs, virtual network device catalogs) are the closest we have so far. Intent abstraction models follow this theme too, as do platform thinking and APIs. Where else?
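As a minimal sketch of why catalogs help with repeatability (the fields are illustrative, not any standards body’s data model), once an offering is described as catalog data rather than bespoke code, the same generic fulfilment routine can be reused across products – and, ideally, across organisations:

```python
from dataclasses import dataclass

# Illustrative only - not any standards body's catalog model.
@dataclass
class CatalogItem:
    product_code: str
    description: str
    resource_facing_services: list   # the reusable building blocks this product decomposes into
    lead_time_days: int

catalog = {
    "BB-100": CatalogItem("BB-100", "100M broadband", ["access_port", "cpe", "ip_service"], 5),
    "BB-1000": CatalogItem("BB-1000", "1G broadband", ["access_port", "cpe", "ip_service"], 5),
}

def fulfil(order_code):
    """One generic fulfilment routine, driven entirely by catalog data."""
    item = catalog[order_code]
    print(f"Fulfilling {item.description} ({item.product_code}), "
          f"lead time {item.lead_time_days} days")
    for rfs in item.resource_facing_services:
        print(f"  - activating building block: {rfs}")

fulfil("BB-1000")
```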

How would Einstein or Darwin manage an OSS?

Here are a few questions I reflect on:
– Am I excited to be doing what I’m doing or am I in aimless motion?
– Are the trade-offs between work and my relationships well-balanced?
– How can I speed up the process from where I am to where I want to go?
– What big opportunities am I not pursuing that I potentially could?
– What’s a small thing that will produce a disproportionate impact?
– What could probably go wrong in the next 6 months of my life?

Zat Rana, here on Business Insider.

The link above provides some insights into the way some of the world’s greatest innovators have tackled the challenges that lay before them. It espouses the benefits of Reflective Thinking versus the current mindset of Doing Thinking, as discussed here earlier.

If you were to follow Zat Rana’s suggestion of allocating two hours a week to reflective thinking, what are the seed questions you could ask? Would the list above work as a starting point? Perhaps something more specific to your situation?

Here are a few other possibilities:

  • Do I know what (my) OSS will look like in 5 years?
  • Am I satisfied that (my) OSS is actually helping outside operations?
  • What tangents could (my) OSS take to improve the world?
  • Where does complexity stem from that impacts (my) OSS?
  • Which areas can I pare back with negligible impact?
  • What is the OSS moonshot that changes the landscape forever?
  • What is my lead domino (ie what’s a small thing that will produce a disproportionate impact)?
  • Have I thought about what might impact business continuity?
  • How can I impact the bigger bodies (eg CSPs, vendors, standards bodies) around me?

Instead of the easy metrics…

What is it that you hope to accomplish? Not what you hope to measure as a result of this social media strategy/launch, but to actually change, create or build?

An easy but inaccurate measurement will only distract you. It might be easy to calibrate, arbitrary and do-able, but is that the purpose of your work?

I know that there’s a long history of a certain metric being a stand-in for what you really want, but perhaps that metric, even though it’s tried, might not be true. Perhaps those clicks, views, likes and grps are only there because they’re easy, not relevant.

If you and your team can agree on the goal, the real goal, they might be able to help you with the journey…

System innovations almost always involve rejecting the standard metrics as a first step in making a difference. When you measure the same metrics, you’re likely to create the same outcomes. But if you can see past the metrics to the results, it’s possible to change the status quo.”
Seth Godin on his blog here.

There are a lot of standard metrics in OSS and comms networks. In the context of Seth’s post, I have two layers of metrics for you to think about. One layer is the traditional role of OSS – providing statistics on the operation of the comms network / services / etc. The second layer is the statistics of the OSS itself.

Layer 1 – Our OSS tend to just provide a semi-standard set of metrics because service providers tend to use similar metrics. We’ve even had standards bodies helping providers achieve consistency in their metrics. But are those metrics still working for a modern carrier? Can we disrupt the standard set of metrics by asking what a service provider really wants to achieve in our changing environment?

Layer 2 – What do we know about our own OSS? How are they used? How does that usage differ across clients? How does it differ between products? What are the metrics that help sell an OSS (either internally or externally)?

The OSS Mechanical Turk

The Mechanical Turk… was a fake chess-playing machine constructed in the late 18th century. From 1770 until its destruction by fire in 1854 it was exhibited by various owners as an automaton, though it was eventually revealed to be an elaborate hoax.
The Turk was in fact a mechanical illusion that allowed a human chess master hiding inside to operate the machine. With a skilled operator, the Turk won most of the games played during its demonstrations around Europe and the Americas for nearly 84 years, playing and defeating many challengers including statesmen such as Napoleon Bonaparte and Benjamin Franklin.”
Wikipedia.

This ingenious contraption can be mirrored in certain situations within the OSS industry.

I once heard of an OSS fulfilment solution that had consumed a couple of years of effort and millions of dollars before management decided to try an alternate path because there was still no end in sight. There was so much sunk cost that it was a difficult decision.

The problem statement was delivered to a new team brought in from outside the organisation.

They had it working within a single weekend!!

How?

They had focused on what the end customers needed and developed an efficient self-service portal (a front end) that created tickets. The tickets were then manually entered into the back-end systems. Any alerts from the back-end systems were fed back into the portal.

It did the job because transaction volumes were low enough to be processed manually. The first approach had failed because the integrations, workflows and exception-handling were enormously complex, and the original team was laser-focused on perfect automation.
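In rough outline – and only as a hypothetical reconstruction, since I wasn’t privy to their code – the pattern is simply a clean, automated front end writing tickets to a queue that humans work through manually, with status updates fed back to the portal:

```python
from collections import deque

# Hypothetical reconstruction of the "Mechanical Turk" pattern: an automated
# front end, a manual back end, and status updates flowing back to the portal.

ticket_queue = deque()
portal_status = {}

def portal_submit(order_id, details):
    """Customer-facing portal: fully automated, just creates a ticket."""
    ticket_queue.append({"order_id": order_id, "details": details})
    portal_status[order_id] = "received"

def operator_processes_next():
    """A human keys the next ticket into the back-end systems by hand."""
    ticket = ticket_queue.popleft()
    # ... manual data entry into the legacy fulfilment stack happens here ...
    portal_status[ticket["order_id"]] = "completed"

portal_submit("ORD-1", {"product": "BB-100", "address": "1 Example St"})
print(portal_status["ORD-1"])   # received
operator_processes_next()
print(portal_status["ORD-1"])   # completed
```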

The Mechanical Turk approach to this OSS conundrum proved to be far more successful. It doesn’t work in all situations but it could be used more often than it is.

OSS billionaires with perfect abs

If [more] information was the answer, then we’d all be billionaires with perfect abs.”
Derek Sivers.

The sharing economy has made a deluge of information available to us at negligible cost. We have more information available at our fingertips than we can ever consume and process. So why don’t we all have massive bank balances and perfect abs? (although I’m sure most PAOSS readers do of course)

The answer is that information is only as good as the decisions we are able to make with it. More specifically, the ability to distill the information down to the insights that compel great decisions to be made.

As OSS implementers, we can easily bombard our users with enough information calories to make perfect abs an impossible dream. Too easily in fact.

It’s much harder to consistently produce insights of great value. Perhaps it even needs the unique billionaire’s lens to spot the insights hidden in the information. But that’s what makes billionaires so rare.

Herein lies the message I want to leave you with today – how do we OSS engineers train ourselves to see information more through a billionaire / value lens rather than our more typical technical lenses? I’m not a billionaire, so I (we?) spend too much time thinking about technically correct solutions rather than thinking about valuable solutions. Do we have the right type of training / thinking to actually know what a valuable solution looks like?

Is commission management the key for next-gen OSS?

Relationships with the things we ‘consume’ (rather than ‘own’) are increasing, and are being governed by ongoing supply arrangements between customers and vendors. What sits at the heart of these relationships, from a financial perspective, is billing. The entity that has the billing relationship with the customer essentially ‘owns’ the customer – they have the right to communicate with the customer regularly, and this can form the basis of a far deeper customer relationship.”
David Werdiger in his article “The Own-Lease-Rent-Subscribe Continuum”.

The snippet above is just a small part of a very interesting article about asset utilisation and ownership structures. The continuum that David speaks of is increasingly presenting opportunities towards the subscribe / consume end and breaking down barriers to entry for those with entrepreneurial spirit.

This has implications for our OSS from a number of angles. OSS are typically expensive to buy or build or run, as are the networks that they manage. As a result, in years past the only opportunities arising from these assets were for their owners, the communications service providers (CSPs).

The subscribe end of the continuum allows CSPs to share the capital risk with customers, which has traditionally been done via the selling of comms services. But with comms services commoditising, innovative subscription models become a more attractive way of sharing the risk. These include virtual network operators, off-payroll sales agents, managed service offerings and the like.

We’ve spoken previously about how APIs allow third-parties to uplift and upsell services offered by CSPs. From a similar, but slightly different perspective, efficient / effective commission management functionality within your OSS / BSS extends and incentivises the sales arm of any business (whether internal or external). You don’t need as many (any?) salespeople on the payroll, but you are still selling… if you incentivise the commission management process and associated earnings.
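As a toy example of that incentive logic (the rates, tiers and records are invented), commissions can be derived directly from the billing records the BSS already holds:

```python
# Toy commission calculation, driven off billing records - rates and tiers are invented.

billing_records = [
    {"partner": "agent_a", "service": "SVC-001", "monthly_revenue": 120.0},
    {"partner": "agent_a", "service": "SVC-002", "monthly_revenue": 80.0},
    {"partner": "agent_b", "service": "SVC-003", "monthly_revenue": 300.0},
]

def commission_rate(total_revenue):
    """Simple tiered incentive: sell more, earn a higher percentage."""
    return 0.08 if total_revenue >= 250 else 0.05

def calculate_commissions(records):
    totals = {}
    for record in records:
        totals[record["partner"]] = totals.get(record["partner"], 0) + record["monthly_revenue"]
    return {partner: round(total * commission_rate(total), 2)
            for partner, total in totals.items()}

print(calculate_commissions(billing_records))
# {'agent_a': 10.0, 'agent_b': 24.0}
```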

Whether additional selling and consumption is supported via human interfaces such as portal / UI or via machine interfaces such as API / microservices, it’s ultimately the billing that allows ownership of the customer. CSPs still have a significant strength from their billing base and clever design of our OSS / BSS can help to leverage that further in the consumption economy.

AIOps (Algorithmic IT Operations)

AIOps stands for Algorithmic IT Operations and is a new category as defined by Gartner research that is an evolution of what the industry previously referred to as ITOA (IT Operations and Analytics). We have reached a point where data science and algorithms are being successfully applied to automate traditionally manual tasks and processes in IT Operations. Now, algorithmics are being incorporated into tools that allow organizations to streamline operations even further by liberating humans from time-consuming and error prone processes, such as defining and managing an endless sprawl of rules and filters in legacy IT Management systems.

Algorithmic IT operations platforms offer increasingly wide and valuable sets of advanced analytical techniques. Although initially targeted at IT operations management use cases and data, they can also be applied by infrastructure and operations leaders to broader data sets to yield unique insights.

A goal of AIOps solutions is to make life better for us, but the line gets a bit blurry when humans interact with AIOps. The more advanced AIOps solutions will have neural-network technology built in that will learn from its operators, adapt and attempt to eliminate repetitive and tedious tasks.”
Sai Krishna here.

Yesterday’s post talked about how to reduce a 150-person OSS implementation team down to just 1. The concept of AIOps, if taken to its proposed conclusion, could lead to large reductions in OSS support teams too. In theory, you put the system/s in place, seed them and then let them learn for themselves rather than having a team implement lots of logic rules.
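To make the “seed it, then let it learn” idea tangible, here’s a deliberately simple sketch (not taken from Gartner or any vendor’s product) of a detector that learns its baseline from the data stream itself rather than relying on a hand-written threshold rule:

```python
import statistics

# Deliberately simple illustration: instead of a hand-written rule such as
# "alert when latency > 100ms", the detector learns its baseline from the stream.

class LearnedDetector:
    def __init__(self, window=20, sensitivity=3.0):
        self.window = window
        self.sensitivity = sensitivity
        self.samples = []

    def ingest(self, value):
        """Returns True if the value is anomalous relative to the learned baseline."""
        anomalous = False
        if len(self.samples) >= 5:
            mean = statistics.mean(self.samples)
            stdev = statistics.stdev(self.samples) or 1e-9
            anomalous = abs(value - mean) > self.sensitivity * stdev
        if not anomalous:                     # only learn from "normal" behaviour
            self.samples = (self.samples + [value])[-self.window:]
        return anomalous

detector = LearnedDetector()
latencies = [20, 22, 19, 21, 23, 20, 22, 95, 21]   # 95ms is the outlier
for latency in latencies:
    if detector.ingest(latency):
        print(f"Anomaly detected: {latency}ms latency")
```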

In Gartner’s report, they detail the monitoring capabilities of the top AIOps tools, dividing comprehensive monitoring into 11 categories that include historical data management, streaming data management, log data ingestion, wire data ingestion, document text ingestion, automated pattern discovery and prediction, anomaly detection, root cause determination, on-premise delivery, and software as a service.”
This article from Loom Systems provides some really interesting perspectives on AIOps and the corresponding Gartner report.

For full disclosure, I have no financial interests in Loom Systems or Gartner, nor have I used the Loom Systems tools to be able to promote or deride their market offerings.