An OSS doomsday scenario

If I start talking about doomsday scenarios where the global OSS job industry is decimated, most people will immediately jump to the conclusion that I’m predicting an artificial intelligence (AI) takeover. AI could have a role to play, but it’s not a key facet of the scenario I’m most worried about.
You’d think that OSS would be quite a niche industry, but there must be thousands of OSS practitioners in my home town of Melbourne alone. That’s partly due to large projects currently being run in Australia by major telcos such as nbn, Telstra, SingTel-Optus and Vodafone, not to mention all the smaller operators. Some of these projects are likely to scale back in coming months / years, meaning fewer seats in a game of OSS musical chairs. But this isn’t the doomsday scenario I’m hinting at in the title either. There will still be many roles at the telcos and the vendors / integrators that support them.

There are hundreds of OSS vendors in the market now, with no single dominant player. It’s a really fragmented market that would appear to be ripe for M&A (mergers and acquisitions). Ripe for consolidation, but massive consolidation is still not the doomsday scenario because there would still be many OSS roles in that situation.

The doomsday scenario I’m talking about is one where only one OSS gains domination globally. But how?

Most traditional telcos have a local geographic footprint with partners/subsidiaries in other parts of the world, but are constrained by the costs and regulations of a wired or cellular footprint to be able to reach all corners of the globe. All that uniqueness currently leads to the diversity of OSS offerings we see today. The doomsday scenario arises if one single network operator usurps all the traditional telcos and their legacy network / OSS / BSS stacks in one technological fell swoop.

How could a disruption of that magnitude happen? I’m not going to predict, but a satellite constellation such as SpaceX’s proposed Starlink has some of the hallmarks of such a scenario. By using low-earth orbit (LEO) satellites (ie lower latency than geostationary satellite solutions), point-to-point laser interconnects between them and peering / caching of data in the sky, it could fundamentally change the world of communications and OSS.
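
To put the LEO vs geostationary latency difference in rough numbers, here’s a back-of-the-envelope propagation-delay sketch (the LEO altitudes are indicative assumptions only, not confirmed constellation parameters):

```python
# Back-of-the-envelope propagation delay: LEO vs geostationary (GEO) satellites.
# LEO altitudes are indicative assumptions, not confirmed constellation parameters.
SPEED_OF_LIGHT_KM_S = 299_792  # km/s in a vacuum

def round_trip_ms(altitude_km: float) -> float:
    """Round-trip delay (ground -> satellite -> ground) in milliseconds."""
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

for name, altitude_km in [("LEO (~550 km)", 550),
                          ("LEO (~1,200 km)", 1_200),
                          ("GEO (35,786 km)", 35_786)]:
    print(f"{name}: {round_trip_ms(altitude_km):.1f} ms")
# LEO (~550 km): 3.7 ms
# LEO (~1,200 km): 8.0 ms
# GEO (35,786 km): 238.7 ms
```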

It has global reach, no need for carrier interconnect (hence no complex contract negotiations or OSS/BSS integration for that matter), no complicated lead-in negotiations or reinstatements, no long-haul terrestrial or submarine cable systems. None of the traditional factors that cost so much time and money to get customers connected and keep them connected (only the complication of getting and keeping the constellation of birds in the sky – but we’ll put that to the side for now). It would be hard for traditional telcos to compete.

I’m not suggesting that Starlink can or will be THE ubiquitous global communications network. What if Google, AWS or Microsoft added this sort of capability to their strengths in hosting / data? Such a model introduces a new, consistent network stack without the telcos’ tech debt burdens discussed here. The streamlined network model means the variant tree is millions of times simpler. And if the variant tree is that much simpler, so is the operations model and so is the OSS… with one distinct complication: it would need to scale to billions of customers rather than millions, and to trillions of events.

You might be wondering about all the enterprise OSS. Won’t they survive? Probably not. Comms networks are generally just an important means-to-an-end for enterprises. If the one global network provider were to service every organisation with local or global WANs, as well as all the hosting they would need, plus hosted zero-touch network operations like Google is already pre-empting, would organisations have a need to build or own an on-premises OSS?

One ubiquitous global network, with a single pared back but hyperscaled OSS, most likely purpose-built with self-healing and/or AI as core constructs (not afterthoughts / retrofits like for existing OSS). How many OSS roles would survive that doomsday scenario?

Do you have an alternative OSS doomsday scenario that you’d like to share?

Hat tip again to Jay Fenton for pointing out what Starlink has been up to.

OSS / BSS security getting a little cloudy

“Many systems are moving beyond simple virtualization and are being run on dynamic private or even public clouds. CSPs will migrate many to hybrid clouds because of concerns about data security and regulations on where data are stored and processed.
We believe that over the next 15 years, nearly all software systems will migrate to clouds provided by third parties and be whatever cloud native becomes when it matures. They will incorporate many open-source tools and middleware packages, and may include some major open-source platforms or sub-systems (for example, the size of OpenStack or ONAP today).”
Dr Mark H Mortensen
in an article entitled, “BSS and OSS are moving to the cloud: Analysys Mason” on Telecom Asia.

Dr Mortensen raises a number of other points relating to cloud models for OSS and BSS in the article linked above, including definitions of various cloud / virtualisation-related terms.

He also rightly points out that many OSS / BSS vendors are seeking to move to virtualised / cloud / as-a-Service delivery models (for reasons including maintainability, scalability, repeatability and other “ilities”).

The part that I find interesting with cloud models (I’ll use the term generically) is positioning of the security control point(s). Let’s start by assuming a scenario where:

  1. The Active Network (AN) is “on-net” – the network that carries live customer traffic (the routers, switches, muxes, etc) is managed by the CSP / operator [Noting though, that these too are possibly managed as virtual entities rather than owned].
  2. The “cloud” OSS/BSS is “off-net” – some vendors will insist on their multi-tenanted OSS/BSS existing within the public cloud

The diagram below shows three separate realms:

  1. The OSS/BSS “in the cloud”
  2. The operator’s enterprise / DC realm
  3. The operator’s active network realm

as well as the Security Control Points (SCPs) between them.

OSS BSS Cloud Security Control Points

The most important consideration in this architecture is that the Active Network remains operational (ie carrying customer traffic) even if the link to the DC and/or the link to the cloud is lost.

With that in mind, our second consideration is what aspects of network management need to reside within the AN realm. It’s not just the Active Network devices, but anything else that allows the AN to operate in an isolated state. This means that shared services like NTP / synch need a presence in the AN realm (even if not of the highest stratum within the operator’s time-synch solution).

What about Element Managers (EMS) that look after the AN devices? How about collectors / probes? How about telemetry data stores? How about network health management tools like alarm and performance management? How about user access management (LDAP, AD, IAM, etc)? Do they exist in the AN or DC realm?

Then if we step up the stack a little to what I refer to as East-West OSS / BSS tools like ticket management, workforce management, even inventory management – do we collect, process, store and manage these within the DC or are we prepared to shift any of this functionality / data out to the cloud? Or do we prefer it to remain in the AN realm and ensure only AN privileged users have access?

Which OSS / BSS tools remain on-net (perhaps as private cloud) and which can (or must) be managed off-net (public cloud)?
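
To make the placement question concrete, here’s a minimal sketch of how realm assignments might be captured and sanity-checked (the component list and placements are hypothetical examples, not recommendations):

```python
# Hypothetical mapping of OSS/BSS components to the three security realms.
# Placements are illustrative only; every operator's risk posture will differ.
REALM_PLACEMENT = {
    "network_devices":      "AN",     # routers, switches, muxes
    "ntp_synch":            "AN",     # must survive isolation from DC / cloud
    "element_managers":     "AN",
    "collectors_probes":    "AN",
    "alarm_management":     "DC",
    "performance_mgmt":     "DC",
    "iam_ldap":             "DC",
    "inventory_management": "DC",
    "ticket_management":    "CLOUD",
    "workforce_management": "CLOUD",
}

# Sanity check: everything the AN needs to operate in an isolated state
# must sit inside the AN realm (the most important consideration above).
AN_CRITICAL = {"network_devices", "ntp_synch", "element_managers"}
assert all(REALM_PLACEMENT[c] == "AN" for c in AN_CRITICAL), \
    "AN must remain operational even if links to DC and/or cloud are lost"
```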

Climb further up the stack and we get into the interesting part of cloud offerings. Not only do we potentially have the OSS/BSS (including East-West tools), but more excitingly, we could bring in services or solutions like content from external providers and bundle them with our own offerings.

We often hear about the tight security that’s expected (and offered) as part of the vendor OSS/BSS cloud solutions, but as you see, the tougher consideration for network management architects is actually the end-to-end security and where to position the security control points relative to all the pieces of the OSS/BSS stack.

Getting lost in the flow of OSS

“The myth is that people play games because they want to avoid challenging work. The reality is, people play games to engage in well-designed, challenging work. The only thing they are avoiding is poorly designed work. In essence, we are replacing poorly designed work with work that provides a more meaningful challenge and offers a richer sense of progress.
And we should note at this point that just because something is a game, it doesn’t mean it’s good. As we’ll soon see, it can be argued that everything is a game. The difference is in the design.
Really good games have been ruthlessly play-tested and calibrated to the point where achieving a state of flow is almost guaranteed for many. Play-testing is just another word for iterative development, which is essentially the conducting of progressive experiments.”
Dr Jason Fox
in his book, “The Game Changer.”

Reflect with me for a moment – when it comes to your OSS activities, in which situations do you consistently get into a state of flow?

For me, it’s in quite a few different scenarios, but one in particular stands out – building up a network model in an inventory management tool. This activity starts with building models / patterns of devices, services, connections, etc, then using the models to build a replica of the network, either manually or via data migration, within the inventory tool(s). I can lose complete track of time when doing this task. In fact I have almost every single time I’ve performed this task.

Whilst not being much of a gamer, I suspect it’s no coincidence that by far my favourite video game genre is empire-building strategy games like the Civilization series. Back in the old days, I could easily get lost in them for hours too. Could we draw a comparison from getting that same sense of achievement, seeing a network (of devices in OSS, of cities in the empire strategy games) grow rapidly as a result of your actions?

What about fans of first-person shooter games? I wonder whether they get into a state of flow on assurance activities, where they get to hunt down and annihilate every fault in their terrain?

What about fans of horse grooming and riding games? Well…. let’s not go there. 🙂

Anyway, enough of all these reflections and musings. I would like to share three concepts with you that relate to Dr Fox’s quote above:

  1. Gamification – I feel that there is MASSIVE scope for gamification of our OSS, but I’ve yet to hear of any OSS developers using game design principles
  2. Play-testing – How many OSS are you aware of that have been, “ruthlessly play-tested and calibrated?” In almost every OSS situation I’ve seen, as soon as functionality meets requirements, we stop and move on to the next feature. We don’t pause and try a few more variants to see which is most likely to result in a great design, refining the solution, “to the point where achieving a state of flow is almost guaranteed for many.”
  3. Richer Progress – How many of our end-to-end workflows are designed with, “a richer sense of progress” in mind? Feedback tends to come through retrospective reporting (if at all), rarely through the OSS game-play itself. Chances are that our end-to-end processes actually flow through multiple un-related applications, so it comes back to clever integration design to deliver more compelling feedback. We simply don’t use enough specialist creative designers in OSS

Reducing the lumps with OSS services

As promised in yesterday’s post about lumpy revenues for OSS product companies, today we’ll discuss OSS professional services revenues and the contrasting mindset compared with products.

Professional services revenues are a great way of smoothing out the lumpy revenue streams of traditional OSS product companies. There’s just one problem though. Of all the vendors I’ve worked with, each has a predilection – either a product mindset or a services mindset – and they struggle to do both well because the mindsets are quite different.

Not only that but we can break professional services into two categories:

  1. Product-related services – the installation and commissioning of products; and
  2. Consultancy-based services – the value-add services that drive business value from the OSS / BSS

Product companies provide product-related services, naturally. I can’t help but think that if we as an industry provided more of the consultancy-based services, we’d have more justification for greater spend on OSS / BSS (and smoother revenue streams in the process).

Having said that, PAOSS specialises in consultancy-based services (as well as install / commission / delivery services), so we’re always happy to help organisations that need assistance in this space!!

It’s all a bit lumpy

Being an OSS product supplier to telecom operators is a tough business. There is a constant stream of outgoings on developer costs, cost of sale, general overheads, etc. Unfortunately, revenue streams are rarely so smooth. In fact, they tend to be decidedly lumpy – large spikes of income stemming from customer implementations, unpredictable in their timing when forecasting inflows years in advance.

Not only that, but the risks are high due to the complexity and unknowns of OSS implementation projects as well as the lack of repeatability that was discussed in yesterday’s post.

Enduringly valuable businesses achieve their status through predictable, diversified, recurring (and preferably growing) revenue streams, so they need to be objectives of our OSS business models.

Annual maintenance fees (usually in the order of 20-22% of up-front list prices) are the most common recurring revenue model used by OSS product suppliers. Transaction-based pricing is another common model.
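
As a quick worked example of the maintenance model (the licence figure is hypothetical):

```python
# Illustrative maintenance revenue from a single licence sale.
# The up-front licence price is a hypothetical figure.
list_price = 1_000_000   # up-front licence fee
maintenance_rate = 0.20  # 20-22% of list price is typical

annual_maintenance = list_price * maintenance_rate
print(f"Annual maintenance: {annual_maintenance:,.0f}")             # 200,000
# Over a 5-year support horizon, maintenance rivals the original licence fee.
print(f"5-year maintenance total: {5 * annual_maintenance:,.0f}")   # 1,000,000
```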

Cloud subscription (consumption) based models are also becoming more common, although there are always challenges around convincing carriers of the security and sovereignty of such important tools and data being hosted off-site.

I’m fascinated with the platform-plays, like Salesforce, which is a mushrooming form of the subscription model because there’s an ecosystem (or marketplace) of sellers contributing to transaction volumes. OSS and BSS are the perfect platform play but I haven’t seen any built around this style of revenue model yet. [Please let me know if I’ve missed any].

It has also been interesting to observe Cisco’s market success on the back of a perceived revenue shift towards more software and services.

Whenever considering alternate revenue models, I refer back to this great image from Ross Dawson:
Revenue Models
Do any apply to your OSS? Can any apply to your OSS?

Tomorrow we’ll discuss OSS professional services revenues and the contrasting mindset compared with products.

How to run an OSS PoC

This is the third in a series describing the process of finding the right OSS solution for your specific needs and getting estimated pricing to help you build a business case.

The first post described the overall OSS selection process we use. The second described the way we poll the market and prepare a short-list of OSS products / vendors based on current capabilities.

Once you’ve prepared the short-list it’s time to get into specifics. We generally do this via a PoC (Proof of Concept) phase with the short-listed suppliers. We have a few very specific principles when designing the PoC:

  • We want it to reflect the operator’s context so that they can grasp what’s being presented (which can be a challenge when a vendor runs their own generic demos). This “context” is usually in the form of using the operator’s device types, naming conventions, service types, etc. It also means setting up a network scenario that is representative of the operator’s, which could be a hypothetical model, a small segment of a real network, lab model or similar
  • PoC collateral must clearly describe the PoC and related context. It should clearly identify the important scenarios and selection criteria. Ideally it should logically complement the collateral provided in the previous step (ie the requirement gathering)
  • We want it to focus on the most important conditions. If we take the 80/20 rule as a guide, we’ll quickly identify the most common service types, devices, configurations, functions, reports, etc that we want to model
  • Identify efficacy across those most important conditions. Don’t just look for the functionality that implements those conditions, but also the speed at which they can be done at a scale required by the operator. This could include bulk load or processing capabilities and may require simulators (or real integrations – see below) to generate volume
  • We want it to be as simple as is feasible so that it minimises the effort required of both suppliers and operators
  • Consider a light-weight integration if possible. One of the biggest challenges with an OSS is getting data in and out. If you can get a rapid integration with a real network (eg a microservice, SNMP traps, syslog events or similar) then it will give an indication of integration challenges ahead. However, note the previous point as it might be quite time-consuming for both operator and supplier to set up a real-time integration (a simulator sketch follows this list)
  • Take note of the level of resourcing required by each supplier to run the PoC (eg how many supplier staff, server scaling, etc.). This will give an indication of the level of resourcing the operator will need to allocate for the actual implementation, including organisational change management factors
  • Attempt to offer PoC platform consistency so that all suppliers are on a level playing field, which might be through designing the PoC on common devices or topologies with common interfaces. You may even look to go the opposite way if you think the rarity of your conditions could be a deal-breaker
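
Here’s the simulator sketch mentioned in the list above – a minimal way to generate synthetic syslog volume during a PoC (the collector address, port, message format and rates are assumptions; adjust them to whatever the product under evaluation actually ingests):

```python
# Minimal sketch: push synthetic syslog events at an OSS under evaluation
# to test bulk ingestion. Address, port, format and rate are assumptions.
import socket
import time

COLLECTOR = ("192.0.2.10", 514)  # hypothetical syslog collector (UDP)
EVENTS_PER_SECOND = 500          # scale this to the operator's real volumes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(10_000):
    # <134> = facility local0, severity informational
    msg = f"<134>poc-sim: linkDown interface=ge-0/0/{i % 48} seq={i}"
    sock.sendto(msg.encode(), COLLECTOR)
    if i and i % EVENTS_PER_SECOND == 0:
        time.sleep(1)  # crude rate limiting
sock.close()
```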

Note that we tend to scale the size/complexity/reality of the PoC to the scale of project budget out of consideration of vendor and operator alike. If it’s a small project / budget, then we do a light PoC. If it’s a massive transformation, then the PoC definitely has to go deeper (ie more integrations, more scenarios, more data migration and integrity challenges, etc)…. although ultimately our customers decide how deep they’re comfortable in going.

Best of luck and feel free to contact us if we can assist with the running of your OSS PoC.

How to identify a short-list of best-fit OSS suppliers for you

In yesterday’s post, we talked about how to estimate OSS pricing. One of the key pillars of the approach was to first identify a short-list of vendors / integrators best-suited to implementing your specific OSS, then working closely with them to construct a pricing model.

Finding the right vendor / integrator can be a complex challenge. There are dozens, if not hundreds of OSS / BSS solutions to choose from and there are rarely like-for-like comparators. There are some generic comparison tools such as Gartner’s Magic Quadrant, but there’s no way that they can cater for the nuanced requirements of each OSS operator.

Okay, so you don’t want to hear about problems. You want solutions. Well today’s post provides a description of the approach we’ve used and refined across the many product / vendor selection processes we’ve conducted with OSS operators.

We start with a short-listing exercise. You won’t want to deal with dozens of possible suppliers. You’ll want to quickly and efficiently identify a small number of candidates that have capabilities that best match your needs. Then you can invest a majority of your precious vendor selection time in the short-list. But how do you know the up-to-date capabilities of each supplier? We’ll get to that shortly.

For the short-listing process, I use a requirement gathering and evaluation template. You can find a PDF version of the template here. Note that the content within it is outdated and I now tend to use a benefit-centric classification rather than a feature-centric one, but the template itself is still applicable.

STEP ONE – Requirement Gathering
The first step is to prepare a list of requirements (as per page 3 of the PDF):
Requirement Capture.
The left-most three columns in the diagram above (in white) are filled out by the operator, classifying the list of requirements and how important each is (ie mandatory, etc). The depth of requirements (column 2) is up to you and can range from specific technical details to high-level objectives. They could even take the form of user-stories or intended benefits.

STEP TWO – Issue your requirement template to a list of possible vendors
Once you’ve identified the list of requirements, you want to identify a list of possible vendors/integrators that might be able to deliver on those requirements. The PAOSS vendor/product list might help you to identify possible candidates. We then send the requirement matrix to the vendors. Note that we also send an introduction pack that provides the context of the solution the OSS operator needs.

STEP THREE – Vendor Self-analysis
The right-most three columns in the diagram above (in aqua) are designed to be filled out by the vendor/integrator. The suppliers are best suited to fill out these columns because they best understand their own current offerings and capabilities.
Note that the status column is a pick-list of compliance level, where FC = Fully Compliant. See page 2 of the template for other definitions. Given that it is a self-assessment, you may choose to change the Status (vendor self-rankings) if you know better and/or ask more questions to validate the assessments.
The “Module” column identifies which of the vendor’s many products would be required to deliver on the requirement. This column becomes important later on as it will indicate which product modules are most important for the overall solution you want. It may allow you to de-prioritise some modules (and requirements) if price becomes an issue.

STEP FOUR – Compare Responses
Once all the suppliers have returned their matrix of responses, you can compare them at a high-level based on the summary matrix (on page 1 of the template):
OSS Requirement Summary
For each of the main categories, you’ll be able to quickly see which vendors are the most FC (Fully Compliant) or NC (Non-Compliant) on the mandatory requirements.

Of course you’ll need to analyse more deeply than just the Summary Matrix, but across all the vendor selection processes we’ve been involved with, there has always been a clear identification of the suppliers of best fit.
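
For what it’s worth, the comparison step also lends itself to simple scripting. Here’s a minimal sketch (the compliance weights and priority weights are my own assumptions, not part of the template):

```python
# Sketch: score vendor self-assessments from the requirement matrix.
# Weights are illustrative; align them with the definitions in the template.
COMPLIANCE_WEIGHT = {"FC": 1.0, "PC": 0.5, "NC": 0.0}  # fully / partially / non-compliant
PRIORITY_WEIGHT = {"Mandatory": 3, "Desirable": 1}

# (requirement_id, priority, {vendor: self-assessed status})
responses = [
    ("REQ-001", "Mandatory", {"Vendor A": "FC", "Vendor B": "PC"}),
    ("REQ-002", "Mandatory", {"Vendor A": "NC", "Vendor B": "FC"}),
    ("REQ-003", "Desirable", {"Vendor A": "FC", "Vendor B": "FC"}),
]

scores: dict[str, float] = {}
for _, priority, statuses in responses:
    for vendor, status in statuses.items():
        scores[vendor] = scores.get(vendor, 0.0) + \
            PRIORITY_WEIGHT[priority] * COMPLIANCE_WEIGHT[status]

for vendor, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: {score:.1f}")  # Vendor B: 5.5, Vendor A: 4.0
```

Note that a single weighted score can mask a non-compliant response against a mandatory requirement (as Vendor A’s REQ-002 shows), so keep the summary matrix view alongside any scoring.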

Hopefully the process above is fairly clear. If not, contact us and we’d be happy to guide you through the process.

Getting a price estimate for your OSS

“Sometimes a simple question deserves a simple answer: ‘A piece of string is twice as long as half its length’. This is a brilliant answer… if you have its length… Without a strategy, how do you know if it is successful? It might be prettier, but is it solving a defined business problem, saving or making money, or fulfilling any measurable goals? In other words: can you measure the string?”
Carmine Porco
here.

I was recently asked by a university student how to obtain OSS pricing for a paper-based assignment. To make things harder, the target client was to be a tier-2 telco with a small SDN / NFV network.

As you probably know already, very few OSS providers make their list prices known. The few vendors that do tend to focus on the high volume, self-serve end of the market, which I’ll refer to as “Enterprise Grade.” I haven’t heard of any “Telco Grade” OSS suppliers making their list prices available to the public.

There are so many variables when finding the right OSS for a customer’s needs and the vendors have so much pricing flexibility that there is no single definitive number. There are also rarely like-for-like alternatives when selecting an OSS vendor / product. Just like the fabled piece of string, the best way is to define the business problem and get help to measure it. In the case of OSS pricing, it’s to design a set of requirements and then go to market to request quotes.

Now, I can’t imagine many vendors being prepared to invest their valuable time in developing pricing based on paper studies, but I have found them to be extremely helpful when there’s a real buyer. I’ll caveat that by saying that if the customer (eg service provider) you’re working with is prepared to invest the time to help put a list of requirements together then you have a starting point to approach the market for customised pricing.

We’ve run quite a few of these vendor selections and have refined the process along the way to streamline for vendors and customers alike. Here’s a template we’ve used as a starting point for discussions with customers:

OSS vendor selection process

Note that each customer will end up with a different mapping of the diagram above to suit their specific needs. We also have existing templates (eg Questionnaire, Requirement Matrix, etc) to support the selection process where needed.

If you’re interested in reading more about the process of finding the right OSS vendor and pricing for you, click here and here.

Of course, we’d also be delighted to help if you need assistance to develop an OSS solution, get OSS pricing estimates, develop a workable business case and/or find the right OSS vendor/products for you.

Using OSS/BSS to steer the ship

For network operators, our OSS and BSS touch most parts of the business. The network, and the services it carries, are core business, so a majority of business units will be contributing to that core business. As such, our OSS and BSS provide many of the metrics used by those business units.

This is a privileged position to be in. We get to see what indicators are most important to the business, as well as the levers used to control those indicators. From this privileged position, we also get to see the aggregated impact of all these KPIs.

In your years of working on OSS / BSS, how many times have you seen key business indicators conflict between business units? The conflicts generally become more apparent on cross-team projects, where the objectives of one internal team directly conflict with those of another.

In theory, a KPI tree can be used to improve consistency and ensure all business units are pulling towards a common objective… [but what if, like most organisations, there are many objectives? Does that mean you have a KPI forest and the trees end up fighting for light?]

But here’s a thought… Have you ever seen an OSS/BSS suite with the ability to easily build KPI trees? I haven’t. I’ve seen thousands of standalone reports containing myriad indicators, but never a consolidated roll-up of metrics. I have seen a few products that show operational metrics rolled-up into a single dashboard, but not business metrics. They appear to have been designed to show an information hierarchy, but not necessarily with KPI trees in mind specifically.
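
To illustrate what base-product KPI-tree support might look like under the hood, here’s a minimal sketch (the node names, weights and weighted-average rollup rule are hypothetical):

```python
# Sketch of a KPI tree: child metrics roll up into parent objectives.
# Names, weights and the weighted-average rollup are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class KpiNode:
    name: str
    weight: float = 1.0           # contribution to the parent
    value: float | None = None    # leaf nodes carry measured values
    children: list["KpiNode"] = field(default_factory=list)

    def rollup(self) -> float:
        """Weighted average of children; leaves return their own value."""
        if not self.children:
            return self.value if self.value is not None else 0.0
        total_weight = sum(c.weight for c in self.children)
        return sum(c.weight * c.rollup() for c in self.children) / total_weight

tree = KpiNode("Customer Experience", children=[
    KpiNode("Mean Time to Restore", weight=2, value=0.8),   # normalised 0-1
    KpiNode("On-time Activations", weight=1, value=0.95),
])
print(f"{tree.name}: {tree.rollup():.2f}")  # Customer Experience: 0.85
```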

What do you think? Does it make sense for us to offer KPI trees as base product functionality from our reporting modules? Would this functionality help our OSS/BSS add more value back into the businesses we support?

Have I got an OSS deal for you!?!

Tending to be a low-volume, high-customisation, high-uniqueness product, OSS has a significantly different selling proposition than most “box drop” products.

Can you imagine if OSS salespeople used any of these “great deal” propositions (as described by Gary Halbert)?
“I’m going out of business.”
“I just had a fire and I’m having a fire sale.”
“I’m crazy.” (all used car dealers)
“I owe taxes and I’ve got to raise money fast to pay them.”
“I’ve lost my lease and I’ve got to sell this merchandise right away before it gets thrown into the street.”
“I’ve got to make space for some new merchandise that is arriving soon so I will sell you what I have on hand real cheap.”

Did the image of an OSS salesperson saying any of those, especially the first, bring a smile to your face?

Anyway, Gary’s article also goes on to say, “…I wrote: ‘and if you can find a way to use it, you can dramatically increase your sales volume.’
Now, compare that to this: ‘and if you can find a way to use it, you can make yourself a bushel of money!’
Isn’t that a lot more powerful? You bet! The words ‘dramatically increase your sales volume’ do not even begin to conjure up the visual imagery of ‘a bushel of money.’”

From what I’ve experienced on the client side of the buying equation, OSS selling propositions seem to be driven by functionality. I call it the functionality arms-race, where vendors compete on functionality rather than efficacy. In a way, it’s the “sales volume” variant mentioned by Gary above.

The other approach that does align more closely with the “bushel of money” variant is the cost-out discussion. It’s the, “if you implement this OSS, you’ll be able to reduce head-count in your operations team,” argument. That’s definitely important for any operator that sees their OSS as a cost-centre. However, it’s a “save a bushel of money” argument rather than the more powerful “make a bushel of money” argument.

In reply to a recent post, James Crawshaw of Light Reading wrote, “OSS/BSS represents around 2-3% of revenue and takes up around 10% of capex.” I initially read this as OSS/BSS contributing 2-3% of revenue (ie the higher the percentage the better). However, James clarified that our IT/OSS/BSS tend to consume 2-3% of revenue (ie the lower the percentage the better).

Can you imagine how these tiny wording/perspective differences could change the credibility of the whole OSS/BSS industry? As soon as our OSS make a bushel of money, then the selling proposition becomes a whole lot stronger.

The Goldilocks OSS story

We all know the story of Goldilocks and the Three Bears where Goldilocks chooses the option that’s not too heavy, not too light, but just right.

The same model applies to OSS – finding / building a solution that’s not too heavy, not too light, but just right. To be honest, we probably tend to veer towards the too heavy, especially over time. We put more complexity into our architectures, integrations and customisations… because we can… which ends up burdening us and our solutions.

A perfect example is AT&T offering its ECOMP project (now part of the even bigger Linux Foundation Networking Fund) up for open source in the hope that others would contribute and help mature it. As a fairytale analogy, it’s an admission that it’s too heavy even for one of the global heavyweights to handle by itself.

The ONAP Charter has some great plans including, “…real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.”

These are fantastic ambitions to strive for, especially at the Papa Bear end of the market. I have huge admiration for those who are creating and chasing bold OSS plans. But what about the large majority of customers that fall into the Goldilocks category? Is our field of vision so heavy (ie so grand and so far into the future) that we’re missing the opportunity to solve the business problems of our customers and make a difference for them with lighter solutions today?

TM Forum’s Digital Transformation World is due to start in just over two weeks. It will be fascinating to see how many of the presentations and booths consider the Goldilocks requirements. There probably won’t be many because it’s just not as sexy a story as one that mentions heavy solutions like policy-driven orchestration, zero-touch automation, AI / ML / analytics, self-scaling / self-healing networks, etc.

[I should also note that I fall into the category of loving to listen to the heavy solutions too!! ]

Training network engineers to code, not vice versa

Did any of you read the Light Reading link in yesterday’s post about Google creating automated network operations services? If you haven’t, it’s well worth a read.

If you did, then you may’ve also noticed a reference to Finland’s Elisa selling its automation smarts to other telcos. This is another interesting business model disruption for the OSS market, although I’ll reserve judgement on how disruptive it will be until Elisa sells to a few more operators.

What did catch my eye in the Elisa article (again by Light Reading’s Iain Morris), is this paragraph:
“Automation has not been hassle-free for Elisa. Instilling a software culture throughout the organization has been a challenge, acknowledges [Kirsi] Valtari. Rather than recruiting software expertise, Elisa concentrated on retraining the people it already had. During internal training courses, network engineers have been taught to code in Python, a popular programming language, and to write algorithms for a self-optimizing network (or SON). ‘The idea was to get engineers who were previously doing manual optimization to think about automating it,’ says Valtari. ‘These people understand network problems and so it is a win-win outcome to go down this route.’”

It provides a really interesting perspective on this diagram below (from a 2014 post about the ideal skill-set for the future of networking):

There is an undoubted increase in the level of network / IT overlap (eg SDN). Most operators appear to be taking the path of hiring for IT and hoping they’ll grow to understand networks. Elisa is going the opposite way and training their network engineers to code.

With either path, if they then train their multi-talented engineers to understand the business (the red intersect), they’ll have OSS experts on their hands, right folks? 😉

Automated Network Operations as a Service (ANOaaS)

“Google has started applying its artificial intelligence (AI) expertise to network operations and expects to make its tools available to companies building virtual networks on its global cloud platform.
That could be a troubling sign for network technology vendors such as Ericsson AB (Nasdaq: ERIC), Huawei Technologies Co. Ltd. and Nokia Corp. (NYSE: NOK), which now see AI in the network environment as a potential differentiator and growth opportunity…
Google already uses software-defined network (SDN) technology as the bedrock of this infrastructure and last week revealed details of an in-development ‘Google Assistant for Networking’ tool, designed to further minimize human intervention in network processes.
That tool would feature various data models to handle tasks related to network topology, configuration, telemetry and policy.”
Iain Morris
here on Light Reading.

This is an interesting, but predictable, turn of events isn’t it? If (when?) automated network operations as a service (ANOaaS) is perfected, it has the ability to significantly change the OSS space doesn’t it?

Let’s have a look at this from a few scenarios (and I’m considering ANOaaS from the perspective of any of the massive cloud providers who are also already developing significant AI/ML resource pools, not just Google).

Large Enterprise, Utilities, etc with small networks (by comparison to telco networks), where the network and network operations are simply a cost of doing business rather than core business. Virtual networks and ANOaaS seem like an attractive model for these types of customer (ignoring data sovereignty concerns and the myriad other local contexts for now). Outsourcing this responsibility significantly reduces CAPEX and head-count to run what’s effectively non-core business. This appears to represent a big disruptive risk for the many OSS vendors who service the Enterprise / Utilities market (eg SolarWinds, CA, etc).

T2/3 Telcos with relatively small networks that tend to run lean operations. In this scenario, the network is core business but having a team of ML/AI experts is hard to justify. Automations are much easier to build for homogeneous (consistent) infrastructure platforms (like those of the cloud providers) than for those carrying different technologies (like T2/T3 telcos perhaps?). Combine complexity, lack of scale and lack of large ML/AI resource pools and it becomes hard for T2/T3 telcos to deliver cost-effective ANOaaS either internally or externally to their customer base. Perhaps outsourcing the network (ie VNO) and ANOaaS allows these operators to focus more on sales?

T1 Telcos have large networks, heterogeneous platforms and large workforces where the network is core business. The question becomes whether they can build network cloud at the scale and price-point of Amazon, Microsoft, Google, etc. This is partly dependent upon internal processes, but also on what vendors like Ericsson, Huawei and Nokia can deliver, as noted as a risk in the quote above.

As you probably noticed, I just made up ANOaaS. Does a term already exist for this? How do you think it’s going to change the OSS and telco markets?

Networks lead. OSS are an afterthought. Time for a change?

In a recent post, we described how changing conditions in networks (eg topologies, technologies, etc) cause us to reconsider our OSS.

Networks always lead and OSS (or any form of network management including EMS/NMS) is always an afterthought. Often a distant afterthought.

But what if we spun this around? What if OSS initiated change in our networks / services? After all, OSS is the platform that operationalises the network. So instead of attempting to cope with a huge variety of network options (which introduces a massive number of variants and in turn, massive complexity, which we’re always struggling with in our OSS), what if we were to define the ways that networks are operationalised?

Let’s assume we want to lead. What has to happen first?

Network vendors tend to lead currently because they’re offering innovation in their networks, but more importantly on the associated services supported over the network. They’re prepared to take the innovation risk knowing that operators are looking to invest in solutions they can offer to their customers (as products / services) for a profit. The modus operandi is for operators to look to network vendors, not OSS vendors / integrators, to help to generate new revenues. It would take a significant perception shift for operators to break this nexus and seek out OSS vendors before network vendors. For a start, OSS vendors have to create a revenue generation story rather than the current tendency towards a cost-out business case.

ONAP provides an interesting new line of thinking though. As you know, it’s an open-source project that represents multiple large network operators banding together to build an innovative new approach to OSS (even if it is being driven by network change – the network virtualisation paradigm shift in particular). With a white-label, software-defined network as a target, we have a small opening. But to turn this into an opportunity, our OSS need to provide innovation in the services being pushed onto the SDN. That innovation has to be in the form of services/products that are readily monetisable by the operators.

Who’s up for this challenge?

As an aside:
If we did take the lead, would our OSS look vastly different to what’s available today? Would they unequivocally need to use the abstract model to cope with the multitude of scenarios?

A purple cow in our OSS paddock

A few years ago, I read a book that had a big impact on the way I thought about OSS and OSS product development. Funnily enough, the book had nothing to do with OSS or product development. It was a book about marketing – a subject that I wasn’t very familiar with at the time, but am now fascinated with.

And the book? Purple Cow by Seth Godin.
Purple Cow

The premise behind the book is that when we go on a trip into the countryside, we notice the first brown or black cows, but after a while we don’t pay attention to them anymore. The novelty has worn off and we filter them out. But if there was a purple cow, that would be remarkable. It would definitely stand out from all the other cows and be talked about. Seth promoted the concept of building something into your products that makes them remarkable, worth talking about.

I recently heard an interview with Seth. Despite the book being launched in 2003, apparently he’s still asked on a regular basis whether idea X is a purple cow. His answer is always the same – “I don’t decide whether your idea is a purple cow. The market does.”

That one comment brought a whole new perspective to me. As hard as we might try to build something into our OSS products that create a word-of-mouth buzz, ultimately we don’t decide if it’s a purple cow concept. The market does.

So let me ask you a question. You’ve probably seen plenty of different OSS products over the years (I know I have). How many of them are so remarkable that you want to talk about them with your OSS colleagues, or even have a single feature that’s remarkable enough to discuss?

There are a lot of quite brilliant OSS products out there, but I would still classify almost all of them as brown cows. Brilliant in their own right, but unremarkable for their relative sameness to lots of others.

The two stand-out purple cows for me in recent times have been CROSS’ built-in data quality ranking and Moogsoft’s Incident Room model. But it’s not for me to decide. The market will ultimately decide whether these features are actual purple cows.

I’d love to hear about your most memorable OSS purple cows.

You may also be wondering how to go about developing your own purple OSS cow. Well, I start by asking, “What are people complaining about?” or “What are our biggest issues?” That’s where the opportunities lie. Once you’ve discovered those issues, the challenge is solving the problem/s in an entirely different, but better, way. I figure that if people care enough to complain about those issues, then they’re sure to talk about any product that solves the problem for them.

Does the death of ATM bear comparison with telco-grade open-source OSS?

Hands up if you’re old enough to remember ATM here? And I don’t mean the type of ATM that sits on the side of a building dispensing cash – no I mean Asynchronous Transfer Mode.

For those who aren’t familiar with ATM, a little background. ATM was THE telco-grade packet-switching technology of choice for most carriers globally around the turn of the century. Who knows, there might still be some ATM switches/routers out there in the wild today.

ATM was a powerful beast, with enormous configurability and custom-designed with immense scale in mind. It was created by telco-grade standards bodies with the intent of carrying voice, video, data, whatever, over big data pipes.

With such pedigree, you may be wondering, then, how it was beaten out by a technology designed to cheaply connect small groups of computers clustered within 100 metres of each other (with a theoretical maximum bandwidth of 10Mbps).

Why does the technology that scaled up to become carrier Ethernet exist in modern telco networks, whereas ATM is largely obsolete? Others may beg to differ, and there are probably a multitude of factors, but I feel it boils down to operational simplicity. Customers wanted operational simplicity, and operators didn’t want to need a degree in ATM just to be able to drive it. By being designed to be all things to all people (carriers), was ATM compromised from the start?

Now I’ll state up front that I love the initiative and collaboration being shown by many of the telcos in committing to open-source programs like ONAP. It’s a really exciting time for the industry. It’s a sign that the telcos are wresting control back from the vendors in terms of driving where the collective innovation goes.

Buuuuuuut…..

Just like with ATM, are the big open source programs just too big and too complicated? Do you need a 100% focus on ONAP to be able to make it work, or even to follow all the moving parts? Are these initiatives trying to be all things to all carriers instead of distilling changing needs down to more simplified use cases?

“Sometimes the ‘right’ way to do it just doesn’t exist yet, but often it does exist but is very expensive. So, the question is whether the ‘cheap, bad’ solution gets better faster than the ‘expensive, good’ solution gets cheap. In the broader tech industry (as described in the ‘disruption’ concept), generally the cheap product gets good. The way that the PC grew and killed specialized professional hardware vendors like Sun and SGi is a good example. However, in mobile it has tended to be the other way around – the expensive good product gets cheaper faster than the cheap bad product can get good.”
Ben Evans
here.

Is there an Ethernet equivalent in the OSS world, something that’s “cheap, bad” but getting better (and getting customer buy-in) rapidly?

I’ve just been blown away by the most elegant OSS innovation I’ve seen in decades

Looking back, I now consider myself extremely lucky to have worked with an amazing product on the first OSS project I worked on (all the way back in 2000). And I say amazing because the underlying data models and core product architecture are still better than any other I’ve worked with in the two decades since. The core is the most elegant, simple and powerful I’ve seen to date. Most importantly, the models were designed to cope with any technology, product or service variant that could be modelled as a hierarchy, whether physical or virtual / logical. I never found a technology that couldn’t be modelled into the core product and it required no special overlays to implement a new network model. Sadly, the company no longer exists and the product is languishing on the books of the company that bought out the assets but isn’t leveraging them.

Having been so spoilt on the first assignment, I’ve been slightly underwhelmed by the level of elegant innovation I’ve observed in OSS since. That’s possibly part of the reason for the OSS Call for Innovation published late last year. There have been many exciting innovations introduced since, but many modern tools are still more complex and complicated than they should be, for implementers and operators alike.

But during a product demo last week, I was blown away by an innovation that was so simple in concept, yet so powerful that it is probably the single most impressive innovation I’ve seen since that first OSS. Like any new elegant solution, it left me wondering why it hasn’t been thought of previously. You’re probably wondering what it is. Well first let me start by explaining the problem that it seeks to overcome.

Many inventory-based OSS rely on highly structured and hierarchical data. This is a double-edged sword. Significant inter-relationship of data increases the insight generation opportunities, but the downside is that it can be immensely challenging to get the data right (and to maintain a high-quality data state). Limited data inter-relationships make the project easier to implement, but tend to allow less rich data analyses. In particular, connectivity data (eg circuits, cables, bearers, VPNs, etc) can be a massive challenge because it requires the linking of separate silos of data, often with no linking key. In fact, the data quality problem was probably one of the most significant root-causes of the demise of my first OSS client.

Now getting back to the present. The product feature that blew me away was the first I’ve seen that allows significant inter-relationship of data (yet in a simple data model), but still copes with poor data quality. Let’s say your OSS has a hierarchical data model that comprises Location, Rack, Equipment, Card, Port (or similar) and you have to make a connection from one device’s port to another’s. In most cases, you have to build up the whole pyramid of data perfectly for each device before you can create a customer connection between them. Let’s also say that for one device you have a full pyramid of perfect data, but for the other end, you only know the location.

The simple feature is to connect a port to a location now, or any other two points on the hierarchy (and clean up the far-end data later on if you wish). It also allows the intermediate hops on the route to be connected at any point in the hierarchy. That sounds too simple, right? Yet most inventory tools don’t allow connections to be made between different levels of their hierarchies. For implementers, data migration / creation / cleansing gets a whole lot simpler with this approach. But what’s even more impressive is that the solution then assigns a data quality ranking to the data that’s just been created. The quality ranking is subsequently considered by tools such as circuit design / routing, impact analysis, etc. However, you’ll have noted that the data quality issue still hasn’t been fixed. That’s correct, so this product then provides the tools that show where quality rankings are lower, thus allowing remediation activities to be prioritised.
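
To make the concept concrete, here’s a minimal sketch of the idea as I understood it from the demo – my own illustration, not the vendor’s actual data model:

```python
# Sketch: allow connections between ANY two levels of an inventory hierarchy,
# with a data quality ranking derived from how specific each endpoint is.
# My illustration of the concept only, not the vendor's actual model.
HIERARCHY = ["Location", "Rack", "Equipment", "Card", "Port"]

def quality(endpoint_level: str) -> float:
    """More specific endpoints (deeper in the hierarchy) rank higher."""
    return (HIERARCHY.index(endpoint_level) + 1) / len(HIERARCHY)

def connect(a_level: str, b_level: str) -> dict:
    """Create a connection record; quality reflects the weaker endpoint."""
    return {"a_end": a_level, "b_end": b_level,
            "quality": min(quality(a_level), quality(b_level))}

# Perfect data at one end (Port), only a Location known at the other.
link = connect("Port", "Location")
print(link)  # {'a_end': 'Port', 'b_end': 'Location', 'quality': 0.2}
# Remediation tools can then prioritise links with the lowest quality scores.
```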

If you have an inventory data quality challenge and / or are wondering the name of this product, it’s CROSS, from the team at CROSS Network Intelligence (www.cross-ni.com).

After the boys of OSS have gone

Something has always bothered me about the medical profession. Whenever you visit a GP (General Practitioner), unless you need to come back for test results or ongoing treatment, the doctor never finds out if their diagnoses / prescriptions have been effective. In my experience at least, they don’t call to see whether there were any complications, allergic reactions to treatments, improvement in condition, etc and only find out if you make a follow-up appointment. As a result, they never close the feedback loop or gather a potentially rich source of data on their efficacy.

I sometimes wonder whether this is true of OSS implementers too. There can be a tendency to move from one implementation project to the next, from one customer to the next, without having the time to circle back on previous clients. Any unrealised but ongoing problems are handed over to operations and/or product support teams, so the implementers may not get to see them. Alternatively, the team might be consistently missing opportunities to add value on their projects.

If you’re an implementer (as I often am), how do you close the loop to find out what you could be doing better? Do you retain dialog with customers after handover? Do you question your support teams about what client problems / enquiries are landing on their desk? Do you ever book follow-up sessions with client staff at scheduled intervals after handover? Are you always engaged on an operational handover period where you have the chance to see post-handover challenges first-hand?

Just like a doctor, you’re bound to hear of any major or catastrophic outcomes after a “patient’s” initial visit. But what about the niggling ailments your clients have that could be easily rectified for all future clients… if only you knew of them?

I’d love to hear the thoughts from implementers on how they’re continually upping their game. Similarly, if you’re in ops / support, what experiences (ie messes) are consistently landing with you to clean up after the implementers have moved on? Do you have any suggestions for how they (we) could close the loop better with you?

Note: For all the highly talented women out there in OSS-land, please note that I’m not overlooking you. The title of my post is just a play on Don Henley’s famous song.

How smart contracts might reduce risk and enhance trust on OSS projects

Last Friday, we spoke about all wanting to develop trusted OSS supplier / customer relationships but rarely finding them, and about a contrarian factor for why trust is so hard to achieve in OSS – complexity.

Trust is the glue that allows OSS projects to happen. Not only that, but trust and complexity form a catch-22. If OSS partners don’t trust each other, requirements, contracts, etc get more complex as a self-protection barrier. But with every increase in complexity comes an increasing challenge to deliver and hence, a risk of further reduction in trust.

On a smaller scale, you’ve seen it on all projects – if the project starts to falter, increased monitoring attention is placed on the project, which puts increased administrative load on the project team and reduces the time they have to deliver the intended outcomes. Sometimes the increased admin / reporting gains the attention of sponsors and access to additional resources, but usually it just detracts from the available delivery capability.

Vish Nandlall also associates trust and complexity in organisational models in his LinkedIn post below:

This is one of the reasons I’m excited about what smart contracts can do for the organisations and OSS projects of the future. Just as “Likes” and “Supplier Rankings” have facilitated online trust models, smart contract success rankings have the ability to do the same for OSS suppliers, large and small. For example, rather than needing to engage “Big Vendor A” to build your entire, monolithic OSS stack, if an operator develops simpler, more modular work breakdowns (eg microservices), then they can engage “Freelancer B” and “Small Vendor C” to make valuable contributions on smaller risk increments. Being lower in complexity and risk means B and C have a greater chance of engendering trust, and their historical contract success rankings give them an incentive to treat trust as a key metric.
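
As a rough sketch of how such a success ranking might be computed (the record fields and scoring rule are hypothetical):

```python
# Sketch: a supplier's historical contract-success ranking, in the spirit of
# "Likes" / seller ratings. Fields and scoring rule are hypothetical.
from dataclasses import dataclass

@dataclass
class ContractOutcome:
    supplier: str
    delivered_on_time: bool
    accepted_by_customer: bool

def success_ranking(history: list[ContractOutcome]) -> float:
    """Share of contracts delivered on time AND accepted (0.0 - 1.0)."""
    if not history:
        return 0.0
    wins = sum(1 for c in history if c.delivered_on_time and c.accepted_by_customer)
    return wins / len(history)

history = [
    ContractOutcome("Freelancer B", True, True),
    ContractOutcome("Freelancer B", True, False),
    ContractOutcome("Freelancer B", True, True),
]
print(f"Freelancer B ranking: {success_ranking(history):.2f}")  # 0.67
```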

An OSS niche market opportunity?

“The survey found that 82 percent of service providers conduct less than half of customer transactions digitally, despite the fact that nearly 80 percent of respondents said they are moving forward with business-wide digital transformation programs of varying size and scale. This underscores a large perception gap in understanding, completing and benefiting from digitalization programs.

The study revealed that more than one-third of service providers have completed some aspect of digital transformation, but challenges persist; nearly three-quarters of service providers identify legacy systems and processes, challenges relating to staff and skillsets and business risk as the greatest obstacles to transforming digital services delivery.

Driving a successful digital transformation requires companies to transform myriad business and operational domains, including customer journeys, digital product catalogs, partner management platforms and networks via software-defined networking (SDN) and network functions virtualization (NFV).”
Survey from Netcracker and ICT Intuition.

Interesting study from Netcracker and ICT Intuition. To re-iterate with some key numbers and take-aways:

  1. 82% of responding service providers can increase digital transactions by at least 50% (in theory). Digital transactions tend to be significantly cheaper for service providers than manual transactions. However, some customers will work the omni-channel experience to find the channel they’re most comfortable dealing with. In many cases, this means attempting to avoid digital experiences. As a side note, any attempt to become 100% digital is likely to require social / behavioural engineering of customers and/or acceptance of an associated churn rate
  2. Nearly 75% of responding service providers identify legacy systems / processes, skillsets and business risk as biggest challenges. This reads as putting a digital interface onto back-end systems like BSS / OSS tools. This is less of a challenge for newer operators that have been designed with digitalised customer interactions in mind. The other challenge for operators is that the digital front-ends are rarely designed to bolt onto the operators’ existing legacy back-end systems and need significant integration
  3. If an operator wants to build a digital transaction regime, they should expect an OSS / BSS transformation too.

To overcome these challenges, I’ve noticed that some operators have been building up separate (often low-cost) brands with digital-native front ends, back ends, processes and skills bases. These brands tend to target the ever-expanding digitally native generations and be seen as the stepping stone to obsoleting legacy solutions (and perhaps even legacy business models?).

I wonder whether this is a market niche for smaller OSS players to target and grow into whilst the big OSS brands chase the bigger-brother operator brands?