New OSS functionality or speed and scale?

We all know that revenue per bit (of data transferred across comms networks) is trending lower. How could we not? It’s posited as one of the reasons for the declining profitability of the industry. The challenge for telcos is how to engineer an environment where revenue per bit is low but the business remains cost-viable.

I’m sure there are differentiated comms products out there in the global market. However, for the many products that aren’t differentiated, there’s a risk of commoditisation. Customers of our OSS are increasingly moving into a paradigm of commoditisation, which in turn shapes the form our OSS must mould themselves to.

The OSS we deliver can either be the bane or the saviour. They can be a differentiator where otherwise there is none. For example, getting each customer’s order ready for service (RFS) faster than competitors. Or by processing orders at scale, yet at a lower cost-base through efficiencies / repeatability such as streamlined products, processes and automations.

OSS exist to improve efficiency at scale of course, but I wonder whether we lose sight of that sometimes? I’ve noticed that we have a tendency to focus on functionality (ie delivering new features) rather than scale.

This isn’t just the OSS vendors or implementation teams either by the way. It’s often apparent in customer requirements too. If you’ve been lucky enough to be involved with any OSS procurement processes, which side of the continuum was the focus – on introducing a raft of features, or narrowing the field of view down to doing the few really important things at scale and speed?

An alternate way of slicing OSS projects

One of the biggest challenges of big bang OSS project implementations is that all of the business value (ie the OSS and its data, workflows, integrations, etc) gets delivered at once, normally at the end of a lengthy exercise.

Ok, ok, so the delivery of value is not a challenge, it’s the implications of a big delivery of value that’s the challenge – implications that include:

  1. If the project runs out of funds before the project finishes, no value is delivered
  2. If there’s no modularity of delivery then the project team must stay the course of the original project plan. There’s no room for re-prioritising, dropping or adding delivery modules. Project plans are rarely perfect at first, after all
  3. Any changes in project plan tend to have knock-on effects into the rest of the delivery due to the sequential nature of typical project plans
  4. Any delivery of value represents a milestone, which in turn demonstrates momentum for the project… a key change management and team morale strategy
  5. Large deliverables represent the proverbial “pig in the python” – only one segment of the python (ie segment of the project delivery team) is engaged (hyper-engaged) whilst the other segments remain under-utilised.  This isn’t great for project flow or utilisation

When tasked with designing a project schedule, I’ve noticed that many vendors tend to follow the typical waterfall delivery and corresponding payment milestones (eg. design, then build, then test, then deploy, then hand over). The downside of this approach is that the business value (for the customer) is delivered at the end of the handover (ie big bang). There’s no business value in delivering design artefacts for example – the customer can’t use them to perform operational tasks.

The model I prefer sees incremental business value being delivered such as:

  • Proof of Concept (PoC) build
  • Sandpit build
  • Out of the box (OOTB) production build (ie. no customisations)
  • End-to-end use case #1 delivery (ie. design, build*, test, deploy, handover)
  • E2E use case #2 delivery
  • E2E use case #n delivery

where build* includes incremental configuration, customisation, integration, data migration, etc.
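To make the contrast concrete, here’s a rough sketch (in Python, with purely hypothetical milestone names, effort units and value figures) of why incremental delivery protects business value if funding runs dry:

```python
# Sketch: when does business value land under big-bang vs incremental
# delivery? Milestone names, effort units and value figures are hypothetical.

big_bang = [
    ("design", 30, 0), ("build", 60, 0), ("test", 20, 0),
    ("deploy", 10, 0), ("handover", 5, 100),  # all value arrives at the end
]

incremental = [
    ("PoC build", 15, 5), ("sandpit build", 10, 5),
    ("OOTB production build", 20, 20),
    ("E2E use case #1", 30, 30), ("E2E use case #2", 25, 20),
    ("E2E use case #n", 25, 20),
]

def value_if_funding_stops(plan, effort_budget):
    """Business value delivered if the project halts after effort_budget units."""
    spent = delivered = 0
    for _, effort, value in plan:
        if spent + effort > effort_budget:
            break  # this milestone never completes
        spent += effort
        delivered += value
    return delivered

# If funding runs out at 80 effort units:
print(value_if_funding_stops(big_bang, 80))      # 0 - nothing usable yet
print(value_if_funding_stops(incremental, 80))   # 60 - value already banked
```

The numbers are invented, but the shape of the problem isn’t: the big-bang plan delivers zero value until the final milestone completes.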

Just in time design

It’s interesting how we tend to go in cycles. Back in the early days of OSS, the network operators tended to build their OSS from the ground up. Then we went through a phase of using Commercial off-the-shelf (COTS) OSS software developed by third-party vendors. We now seem to be cycling back towards in-house development, but with collaboration that includes vendors and external assistance through open-source projects like ONAP. Interesting too how Agile fits in with these cycles.

Regardless of where we are in the cycle for our OSS, as implementers we’re always challenged with finding the Goldilocks amount of documentation – not too heavy, not too light, but just right.

The Agile Manifesto espouses, “working software over comprehensive documentation.” Sounds good to me! It perplexes me that some OSS implementations are bogged down by lengthy up-front documentation phases, especially if we’re basing the solution on COTS offerings. These can really stall the momentum of a project.

Once a solution has been selected (which often does require significant analysis and documentation), I’m more of a proponent of getting the COTS software stood up, even if only in a sandpit environment. This is where just-in-time (JIT) documentation comes into play. Rather than having every aspect of the solution documented (eg process flows, data models, high availability models, physical connectivity, logical connectivity, databases, etc, etc), we only need enough documentation for collaborative stakeholders to do their parts (eg IT to set up hardware / hosting, networks to set up physical connectivity, vendor to provide software, integrator to perform build, etc) to stand up a vanilla solution.

Then it’s time to start building trial scenarios through the solution. There’s usually quite a bit of trial and error in this stage, as we seek to optimise the scenarios for the intended users. Then we add a few more scenarios.

There’s little point trying to document the solution in detail before a scenario is trialled, but some documentation can be really helpful. For example, if the scenario is to build a small sub-section of a network, then draw up some diagrams of that sub-network that include the intended naming conventions for each object (eg device, physical connectivity, addresses, logical connectivity, etc). That allows you to determine whether there are unexpected challenges with naming conventions, data modelling, process design, etc. There are always unexpected challenges that arise!
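As a tiny illustration of that kind of early check, here’s a sketch that validates candidate device names against an agreed convention before any data is loaded into the inventory tool. The convention shown (SITE-TYPE-SEQ, eg “SYD01-RTR-001”) is purely hypothetical:

```python
import re

# Sketch: check candidate object names against an agreed naming convention
# before loading them into the inventory tool. The SITE-TYPE-SEQ convention
# below is hypothetical - substitute your own agreed pattern.
DEVICE_NAME = re.compile(r"^[A-Z]{3}\d{2}-(RTR|SW|FW)-\d{3}$")

def check_names(names):
    """Return the names that break the convention, for early review."""
    return [n for n in names if not DEVICE_NAME.match(n)]

trial = ["SYD01-RTR-001", "SYD01-SW-002", "Sydney-Router-1"]
print(check_names(trial))  # ['Sydney-Router-1']
```

Running a trial list through a check like this surfaces naming-convention surprises while they’re still cheap to fix.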

I figure you’re better off documenting the real challenges than theorising on the “what if?” challenges, which is what often happens with up-front documentation exercises. There are always brilliant stakeholders who can imagine millions of possible challenges, but these often bog the design phase down.

With JIT design, once the solution evolves, the documentation can evolve too… if there is an ongoing reason for its existence (eg as a user guide, for a test plan, as a training cheat-sheet, a record of configuration for fault-finding purposes, etc).

Interestingly, the first value in the Agile Manifesto is, “individuals and interactions over processes and tools.” This is where the COTS vs in-house-dev comes back into play. When using COTS software, individuals, interactions and processes are partly driven by what the tools support. COTS functionality constrains us but we can still use Agile configuration and customisation to optimise our solution for our customers’ needs (where cost-benefit permits).

Having a working set of vanilla tools allows our customers to get a much better feel for what needs to be done rather than trying to understand the intent of up-front design documentation. And that’s the key to great customer outcomes – having the customers knowledgeable enough about the real solution (not hypothetical solutions) to make the most informed decisions possible.

Of course there are always challenges with this JIT design model too, especially when third-party contracts are involved!

Using risk reversal to design OSS

There’s a concept in sales called “risk reversal” that takes all of the customers’ likely issues with a product and provides answers to alleviate customer concerns. I believe we can apply the same concept to OSS, not just to sell them, but to design them.

To borrow from a risk register page here on PAOSS, the major categories of risk that appear on almost all OSS projects are:

  • Organisational change management – the OSS will touch almost all parts of a business and a large number of people within the organisation. If all parts of the business are not conditioned to the change then the implementation will not be successful even if the technical deliverables are faultless. Change management has many, many layers but one way to minimise change management is to make the products and processes highly intuitive. I feel that intuitive OSS will come from a focus on design and simplification rather than our current focus on constantly adding more features. The aim should be to create OSS that are as easy for operators to start using as office tools like spreadsheets, word processors, presentation applications, etc
  • Data integrity – the OSS is only as good as the data that is being fed to it. If the quality of data in the OSS database is poor then operational staff will quickly lose faith in the tools. The product-based techniques that can be used to overcome this risk include:
    • Design tools / data model to cope with poor data quality, but also flag it as low confidence for future repair
    • Reduction in data relationships / dependencies (ie referential integrity) to ensure that quality problems don’t have a domino effect on OSS usability
    • Building checks and balances that ensure the data can be reconciled and quality remains high
    • Incorporate closed-loop processes to ensure data quality is continually improved, rather than the open-loop processes that tend to lead to data quality degradation
  • Application functionality mapping to real business needs – OSS have been around long enough to have all but run out of features for vendors to differentiate against. The truly useful functionality has arisen from real business needs. “Wish-list” functionality that adds little tangible business benefit or requires significant effort just adds product and project risk
  • Northbound Interface / Integration – the costs and risks of integrations are significant on each OSS project. There are many techniques that can be used to reduce risk, such as Minimum Viable Data (ie fewer data types to collect across an interface), standardised destination mapping models, etc, but the industry desperately needs major innovation here
  • Implementation – there are so many sources of risk within this category, as is to be expected on any large, complex project. Taking the PMP approach to risk reduction, we can apply the Triple Constraint model
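To illustrate the data integrity techniques above, here’s a minimal sketch of one closed-loop pass: reconciling inventory records against a discovery source and flagging mismatches as low confidence for future repair, rather than silently trusting either side (record shapes and field names are hypothetical):

```python
# Sketch: one closed-loop data-quality pass. Mismatches between the OSS
# inventory and a network discovery feed are flagged as low confidence
# (not deleted), so they can be queued for repair.
# Record shapes and field names are hypothetical.

inventory = {"rtr-01": {"model": "X100"}, "rtr-02": {"model": "X200"}}
discovered = {"rtr-01": {"model": "X100"}, "rtr-02": {"model": "X250"}}

def reconcile(inventory, discovered):
    for device_id, record in inventory.items():
        live = discovered.get(device_id)
        if live is None or live["model"] != record["model"]:
            record["confidence"] = "low"   # flag for future repair
        else:
            record["confidence"] = "high"
    return inventory

result = reconcile(inventory, discovered)
print(result["rtr-02"]["confidence"])  # 'low' - queued for repair
```

The key design choice is that suspect data stays visible and flagged instead of being overwritten, which is what keeps the loop closed.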

Aggregated OSS buying models

Last week we discussed a sell-side co-op business model. Today we’ll look at buy-side co-op models.

In other industries, we hear of buying groups getting great deals through aggregated buying volumes. This is a little harder to achieve with products that are as uniquely customised as OSS. It’s possible that OSS buy-side aggregation could occur for operators that are similar in nature but don’t compete (eg regional operators). Having said that, I’ve yet to see any co-ops formed to gain OSS group-purchase benefits. If you have, I’d love to hear about it.

In OSS, there are three approaches that aren’t exactly co-op buying models but do aggregate the evaluation and buying decision.

The most obvious is for corporations that run multiple carriers under one umbrella such as Telefonica (see Telefonica’s various OSS / BSS contract notifications here), SingTel (group contracts here), etisalat, etc. There would appear to be benefits in standardising OSS platforms across each of the group companies.

A far less formal co-op buying model I’ve noticed is the social-proof approach. This is where one, typically large, network operator in a region goes through an extensive OSS / BSS evaluation and chooses a vendor. Then there’s a domino effect where other, typically smaller, network operators also buy from the same vendor.

Even less formal again is by using third-party organisations like Passionate About OSS to assist with a standard vendor selection methodology. The vendors selected aren’t standardised because each operator’s needs are different, but the product / vendor selection methodology builds on the learnings of past selection processes across multiple operators. The benefit comes in the evaluation and decision frameworks.

Network slicing, another OSS activity

One business customer, for example, may require ultra-reliable services, whereas other business customers may need ultra-high-bandwidth communication or extremely low latency. The 5G network needs to be designed to be able to offer a different mix of capabilities to meet all these diverse requirements at the same time.
From a functional point of view, the most logical approach is to build a set of dedicated networks each adapted to serve one type of business customer. These dedicated networks would permit the implementation of tailor-made functionality and network operation specific to the needs of each business customer, rather than a one-size-fits-all approach as witnessed in the current and previous mobile generations which would not be economically viable.
A much more efficient approach is to operate multiple dedicated networks on a common platform: this is effectively what “network slicing” allows. Network slicing is the embodiment of the concept of running multiple logical networks as virtually independent business operations on a common physical infrastructure in an efficient and economical way.”
GSMA’s Introduction to Network Slicing.

Engineering a network is an exercise in compromises. There are many different optimisation levers to pull to engineer a set of network characteristics. In the traditional network, it was a case of pulling all the levers to find a middle-ground set of characteristics that supported all their service offerings.

QoS striping of traffic allowed for a level of differentiation of traffic handling, but the underlying network was still a balancing act of settings. Network virtualisation offers new opportunities. It allows unique segmentation via virtual networks, where each can be optimised for the specific use-cases of that network slice.

For years, I’ve been proposing the concept of telco offerings being like electricity networks – that we don’t need so many service variants. I should note that this analogy is not quite right. We do need a few different types of “electricity”, such as highly available (health monitoring), high-bandwidth (content streaming), extremely low latency (rapid reaction scenarios such as real-time sensor networks), etc.
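To make the different “types of electricity” idea concrete, here’s a sketch that models slices as capability profiles and picks the cheapest slice that satisfies a customer requirement (slice names and all figures are entirely hypothetical):

```python
# Sketch: network slices as capability profiles, plus a selector that finds
# the cheapest slice meeting a requirement. Names and figures are hypothetical.

slices = {
    "ultra-reliable": {"availability": 0.99999, "bandwidth_mbps": 50,   "latency_ms": 20, "cost": 9},
    "high-bandwidth": {"availability": 0.999,   "bandwidth_mbps": 1000, "latency_ms": 30, "cost": 6},
    "low-latency":    {"availability": 0.999,   "bandwidth_mbps": 100,  "latency_ms": 2,  "cost": 8},
    "best-effort":    {"availability": 0.99,    "bandwidth_mbps": 100,  "latency_ms": 50, "cost": 1},
}

def pick_slice(availability, bandwidth_mbps, latency_ms):
    """Return the cheapest slice satisfying the requirement, or None."""
    candidates = [
        (profile["cost"], name) for name, profile in slices.items()
        if profile["availability"] >= availability
        and profile["bandwidth_mbps"] >= bandwidth_mbps
        and profile["latency_ms"] <= latency_ms
    ]
    return min(candidates)[1] if candidates else None

# A real-time sensor network: modest bandwidth, very low latency
print(pick_slice(availability=0.999, bandwidth_mbps=10, latency_ms=5))  # 'low-latency'
```

Even in this toy form, it shows the coordination job our OSS inherits: matching each service’s requirements to a slice, across many slices at once.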

Now what do we need to implement and manage all these network slices?? Oh that’s right, OSS! It’s our OSS that will help to efficiently coordinate all the slicing and dicing that’s coming our way… to optimise all the levers across all the different network slices!

OSS, with drama, without drama. Your choice

A recent blog from Seth Godin brought back some memories from a past project.

Two ways to solve a problem and provide a service.
With drama. Make sure the customer knows just how hard you’re working, what extent you’re going to in order to serve. Make a big deal out of the special order, the additional cost, the sweat and the tears.
Without drama. Make it look effortless.
Either can work. Depends on the customer and the situation.
Seth Godin here.

Over the course of the long-running and challenging project, I worked under a number of different Program Directors. The second last (chronologically) took the team barrel-chested down the “With Drama” path whilst the last took the “Without Drama” approach.

The “With Drama” approach was very melodramatic and political, but to be honest, was also really draining. It was draining because of the high levels of contact (eg meetings, reports, etc), reducing the amount of productive delivery time.

The “Without Drama” approach did make it look effortless, because by comparison it was effortless. The Program Director took responsibility for peer-level contact and cleared the way for the delivery team to focus on delivering. The team was still working well over 60-hour weeks, but it was now more clearly focused on delivery tasks. Interestingly, this approach brought a seemingly endless project to a systematic and clean conclusion (ie delivery) within about three months.

Now I’m not sure about your experiences or preferences, but I’d go with the “Without Drama” OSS delivery approach every time. The emotional intensity required of the “With Drama” approach just isn’t sustainable over long-running projects like our OSS projects tend to be.

What are your thoughts / experiences?

How an OSS is like an F1 car

A recent post discussed the challenge of getting a timeslice of operations people to help build the OSS. That post surmised, “as the old saying goes, you get back what you put in. In the case of OSS I’ve seen it time and again that operations need to contribute significantly to the implementation to ensure they get a solution that fits their needs.”

I have a new saying for you today, this time from T.D. Jakes, “You can’t be committed to the dream. You have to be committed to the process.”

If you’re representing an organisation that is buying an OSS solution from a vendor / integrator, please consider these two adages above. Sometimes we’re good at forming the dream (eg business requirements, business case, etc) and expecting the vendor to conduct almost all of the process. While our network operations teams are hired for the process of managing the network, we also need their significant input on the process of building / configuring an OSS. The vendor / integrator can’t just develop it in isolation and then hand it over to ops with a few days of training at the end.

The process of bringing a new OSS into an organisation is not like buying a road car. With an OSS, you can’t just place an order with some optional features like paint and trim specified, then expect to start driving it as soon as it leaves the vendor’s assembly line. It’s more like an F1 car where the driver is in constant communications with the pit-crew, changing and tweaking and refining to optimise the car to the driver’s unique needs (and in turn to hopefully optimise the results).

At least, that’s what current-state OSS are like. Perhaps in the future… we’ll strive to refine our OSS to be more like a road-car – standardised and intuitive enough for operators to drive straight off the assembly line.

Automated testing and new starters

Can you guess what automated OSS testing and OSS new starters have in common?

Both are best front-loaded.

As a consultant, I’ve been a new starter on many occasions, as well as being assigned new starters on probably even more occasions. From both sides of that fence, it’s far more effective to front-load the new starter with training / knowledge to bring them up to speed / effectiveness, as soon as possible. Far more useful from the perspective of quality, re-work, self-sufficiency, etc.

Unfortunately, this is easier said than done. For a start, new starters are generally only required because the existing team is completely busy. So busy that it’s really hard to drop everything to make time to deliver up-front training. It reminds me of this diagram.
[Image: “We’re too busy” cartoon]

Front-loading of automated testing is similar… it takes lots of time to get it operational, but once in place it allows the team to focus on business outcomes faster.

In both cases, front-loading leads to a decrease in efficiency over the first few days, but tends to justify the effort soon thereafter. What other examples of front-loading can you think of in OSS?

OSS compromise, not compromised

When you’ve got multiple powerful parties involved in a decision, compromise is unavoidable. The point is not that compromise is a necessary evil. Rather, compromise can be valuable in itself, because it demonstrates that you’ve made use of diverse opinions, which is a way of limiting risk.”
Chip and Dan Heath
in their book, Decisive.

This risk perspective on compromise (ie diversity of thought), is a fascinating one in the context of OSS.

Let’s just look at Vendor Selection as one example scenario. In the lead-up to buying a new OSS, there are always lots of different requirements that are thrown into the hat. These requirements are likely to come from more than one business unit, and from a diverse set of actors / contributors. This process, the OSS Thrashing process, tends to lead to some very robust discussions. Even in the highly unlikely event of every requirement being met by a single OSS solution, there are still compromises to be made in terms of prioritisation on which features are introduced first. Or which functionality is dropped / delayed if funding doesn’t permit.

The more likely situation is that each of the product options will have different strengths and weaknesses, each possibly aligning better or worse to some of the requirement contributor needs. By making the final decision, some requirements will be included, others precluded. Compromise isn’t an option, it’s a reality. The perspective posed by the Heath brothers is whether all requirement contributors enter the OSS vendor selection process prepared for compromise (thus diversity of thought) or does one actor / business-unit seek to steamroll the process (thus introducing greater risk)?
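One way to make those compromises visible to every contributor is a weighted requirement matrix, where each business unit’s priorities show up as weights. Here’s a minimal sketch (requirements, weights and raw scores are all hypothetical):

```python
# Sketch: a weighted requirement matrix for comparing OSS products.
# Requirements, weights and raw scores are hypothetical - the point is
# that every contributor's priorities are visible in one shared model.

weights = {"fault_mgmt": 5, "inventory": 4, "reporting": 2, "apis": 3}

scores = {
    "Product A": {"fault_mgmt": 4, "inventory": 2, "reporting": 5, "apis": 3},
    "Product B": {"fault_mgmt": 3, "inventory": 5, "reporting": 2, "apis": 4},
}

def weighted_total(product_scores):
    """Sum of (requirement weight x raw score) across all requirements."""
    return sum(weights[req] * s for req, s in product_scores.items())

ranked = sorted(scores, key=lambda p: weighted_total(scores[p]), reverse=True)
for product in ranked:
    print(product, weighted_total(scores[product]))
```

The arithmetic is trivial; the value is in the argument it forces up-front, when contributors have to agree on the weights before any product is scored.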

Did we forget the OSS operating model?

When we have a big OSS transformation to undertake, we tend to start with the use cases / requirements, work our way through the technical solution and build up an implementation plan before delivering it (yes, I’ve heavily reduced the real number of steps there!).

However, we sometimes overlook the organisational change management part. That’s the process of getting the customer’s organisation aligned to assist with the transformation, not to mention being fully skilled up to accept handover into operations. I’ve seen OSS projects that were nearly perfect technically, but ultimately doomed because the customer wasn’t ready to accept handover. Seasoned OSS veterans probably already have plans in place for handling organisational change through stakeholder management, training, testing, thorough handover-to-ops processes, etc. You can find some hints on the Free Stuff pages here on PAOSS.

In addition, long-time readers here on PAOSS have probably already seen a few posts about organisational change management, but there’s a new gotcha that I’d like to add to the mix today – the changing operating model. This one is often overlooked. The changes made in a next-gen OSS can often have a profound effect on the to-be organisation chart. Roles and responsibilities that used to be clearly defined become blurred or made obsolete by the new solution.

This is particularly true for modern delivery models where cloud, virtualisation, as-a-service, etc change the dynamic. Demarcation points between IT, operations, networks, marketing, products, third-party suppliers, etc can need complete reconsideration. The most challenging part about understanding the re-mapping of operating models is that we often can’t even predict what they will be until we start using the new solution and refining our processes in-flight. We can start with a RACI and a bunch of “what if?” assumptions / scenarios to capture new operational mappings, but you can almost bet that it will need ongoing refinement.

Getting lost in the flow of OSS

The myth is that people play games because they want to avoid challenging work. The reality is, people play games to engage in well-designed, challenging work. The only thing they are avoiding is poorly designed work. In essence, we are replacing poorly designed work with work that provides a more meaningful challenge and offers a richer sense of progress.
And we should note at this point that just because something is a game, it doesn’t mean it’s good. As we’ll soon see, it can be argued that everything is a game. The difference is in the design.
Really good games have been ruthlessly play-tested and calibrated to the point where achieving a state of flow is almost guaranteed for many. Play-testing is just another word for iterative development, which is essentially the conducting of progressive experiments.”
Dr Jason Fox
in his book, “The Game Changer.”

Reflect with me for a moment – when it comes to your OSS activities, in which situations do you consistently get into a state of flow?

For me, it’s in quite a few different scenarios, but one in particular stands out – building up a network model in an inventory management tool. This activity starts with building models / patterns of devices, services, connections, etc, then using the models to build a replica of the network, either manually or via data migration, within the inventory tool(s). I can lose complete track of time when doing this task. In fact I have almost every single time I’ve performed this task.

Whilst not being much of a gamer, I suspect it’s no coincidence that by far my favourite video game genre is empire-building strategy games like the Civilization series. Back in the old days, I could easily get lost in them for hours too. Could we draw a comparison from getting that same sense of achievement, seeing a network (of devices in OSS, of cities in the empire strategy games) grow rapidly as a result of your actions?

What about fans of first-person shooter games? I wonder whether they get into a state of flow on assurance activities, where they get to hunt down and annihilate every fault in their terrain?

What about fans of horse grooming and riding games? Well…. let’s not go there. 🙂

Anyway, enough of all these reflections and musings. I would like to share three concepts with you that relate to Dr Fox’s quote above:

  1. Gamification – I feel that there is MASSIVE scope for gamification of our OSS, but I’ve yet to hear of any OSS developers using game design principles
  2. Play-testing – How many OSS are you aware of that have been, “ruthlessly play-tested and calibrated?” In almost every OSS situation I’ve seen, as soon as functionality meets requirements, we stop and move on to the next feature. We don’t pause and try a few more variants to see which is most likely to result in a great design, refining the solution, “to the point where achieving a state of flow is almost guaranteed for many”
  3. Richer Progress – How many of our end-to-end workflows are designed with, “a richer sense of progress” in mind? Feedback tends to come through retrospective reporting (if at all), rarely through the OSS game-play itself. Chances are that our end-to-end processes actually flow through multiple un-related applications, so it comes back to clever integration design to deliver more compelling feedback. We simply don’t use enough specialist creative designers in OSS

Getting a price estimate for your OSS

Sometimes a simple question deserves a simple answer: “A piece of string is twice as long as half its length”. This is a brilliant answer… if you have its length… Without a strategy, how do you know if it is successful? It might be prettier, but is it solving a defined business problem, saving or making money, or fulfilling any measurable goals? In other words: can you measure the string?
Carmine Porco
here.

I was recently asked how to obtain OSS pricing by a University student for a paper-based assignment. To make things harder, the target client was to be a tier-2 telco with a small SDN / NFV network.

As you probably know already, very few OSS providers make their list prices known. The few vendors that do tend to focus on the high volume, self-serve end of the market, which I’ll refer to as “Enterprise Grade.” I haven’t heard of any “Telco Grade” OSS suppliers making their list prices available to the public.

There are so many variables when finding the right OSS for a customer’s needs and the vendors have so much pricing flexibility that there is no single definitive number. There are also rarely like-for-like alternatives when selecting an OSS vendor / product. Just like the fabled piece of string, the best way is to define the business problem and get help to measure it. In the case of OSS pricing, it’s to design a set of requirements and then go to market to request quotes.

Now, I can’t imagine many vendors being prepared to invest their valuable time in developing pricing based on paper studies, but I have found them to be extremely helpful when there’s a real buyer. I’ll caveat that by saying that if the customer (eg service provider) you’re working with is prepared to invest the time to help put a list of requirements together then you have a starting point to approach the market for customised pricing.

We’ve run quite a few of these vendor selections and have refined the process along the way to streamline for vendors and customers alike. Here’s a template we’ve used as a starting point for discussions with customers:

[Diagram: OSS vendor selection process]

Note that each customer will end up with a different mapping of the diagram above to suit their specific needs. We also have existing templates (eg Questionnaire, Requirement Matrix, etc) to support the selection process where needed.

If you’re interested in reading more about the process of finding the right OSS vendor and pricing for you, click here and here.

Of course, we’d also be delighted to help if you need assistance to develop an OSS solution, get OSS pricing estimates, develop a workable business case and/or find the right OSS vendor/products for you.

Using OSS/BSS to steer the ship

For network operators, our OSS and BSS touch most parts of the business. The network, and the services it carries, are core business, so a majority of business units will contribute to that core business. As such, our OSS and BSS provide many of the metrics used by those business units.

This is a privileged position to be in. We get to see what indicators are most important to the business, as well as the levers used to control those indicators. From this privileged position, we also get to see the aggregated impact of all these KPIs.

In your years of working on OSS / BSS, how many times have you seen key business indicators that conflict between business units? They generally become more apparent on cross-team projects, where the objectives of one internal team directly conflict with the objectives of another internal team (or teams).

In theory, a KPI tree can be used to improve consistency and ensure all business units are pulling towards a common objective… [but what if, like most organisations, there are many objectives? Does that mean you have a KPI forest and the trees end up fighting for light?]

But here’s a thought… Have you ever seen an OSS/BSS suite with the ability to easily build KPI trees? I haven’t. I’ve seen thousands of standalone reports containing myriad indicators, but never a consolidated roll-up of metrics. I have seen a few products that show operational metrics rolled-up into a single dashboard, but not business metrics. They appear to have been designed to show an information hierarchy, but not necessarily with KPI trees in mind specifically.
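For what it’s worth, the mechanics of a KPI tree roll-up are simple enough to sketch. Here’s a minimal model where child metrics roll up to their parent via a weighted average (KPI names, weights and normalised values are hypothetical):

```python
# Sketch: a minimal KPI tree where child metrics roll up to their parent
# via a weighted average. Names, weights and values are hypothetical,
# and leaf values are assumed to be normalised to a 0..1 scale.

class KpiNode:
    def __init__(self, name, weight=1.0, value=None, children=None):
        self.name, self.weight = name, weight
        self.value = value            # leaf KPIs carry a measured value
        self.children = children or []

    def rolled_up(self):
        """Leaf: return its measured value. Branch: weighted average of children."""
        if not self.children:
            return self.value
        total_weight = sum(c.weight for c in self.children)
        return sum(c.weight * c.rolled_up() for c in self.children) / total_weight

tree = KpiNode("customer_experience", children=[
    KpiNode("order_to_activate_days", weight=2, value=0.7),
    KpiNode("network_availability",   weight=3, value=0.9),
])
print(round(tree.rolled_up(), 2))  # 0.82
```

The hard part isn’t the code, of course; it’s agreeing on the tree structure and weights across business units, which is exactly the consistency exercise a KPI tree is meant to force.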

What do you think? Does it make sense for us to offer KPI trees as base product functionality from our reporting modules? Would this functionality help our OSS/BSS add more value back into the businesses we support?

Operator involvement on OSS projects

You cannot simply have your end users give some specifications then leave while you attempt to build your new system. They need to be involved throughout the process. Ultimately, it is their tool to use.”
José Manuel De Arce
here.

As an OSS consultant and implementer, I couldn’t agree more with José’s quote above. José, by the way is an OSS Manager at Telefónica, so he sits on the operator’s side of the implementation equation. I’m glad he takes the perspective he does.

Unfortunately, many OSS operators are so busy with operations, they don’t get the time to help with defining and tuning the solutions that are being built for them. It’s understandable. They are measured by their ability to keep the network (and services) running and in a healthy state.

From the implementation side, it reminds me of this old comic:
Too busy

The comic reminds me of OSS implementations for two reasons:

  1. Without ongoing input from operators, you can only guess at how the new tools could improve their efficacy and mitigate their challenges
  2. Without ongoing involvement from operators, they don’t learn the nuances of how the new tool works or the scenarios it’s designed to resolve… what I refer to as an OSS apprenticeship

I’ve seen it time after time on OSS implementations (and other projects for that matter) – [As a customer] you get back what you put in.

The Goldilocks OSS story

We all know the story of Goldilocks and the Three Bears where Goldilocks chooses the option that’s not too heavy, not too light, but just right.

The same model applies to OSS – finding / building a solution that’s not too heavy, not too light, but just right. To be honest, we probably tend to veer towards the too heavy, especially over time. We put more complexity into our architectures, integrations and customisations… because we can… which ends up burdening us and our solutions.

A perfect example is AT&T offering its ECOMP project (now part of the even bigger Linux Foundation Network Fund) up for open source in the hope that others would contribute and help mature it. As a fairytale analogy, it’s an admission that it’s too heavy even for one of the global heavyweights to handle by itself.

The ONAP Charter has some great plans including, “…real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.”

These are fantastic ambitions to strive for, especially at the Pappa Bear end of the market. I have huge admiration for those who are creating and chasing bold OSS plans. But what about for the large majority of customers that fall into the Goldilocks category? Is our field of vision so heavy (ie so grand and so far into the future) that we’re missing the opportunity to solve the business problems of our customers and make a difference for them with lighter solutions today?

TM Forum’s Digital Transformation World is due to start in just over two weeks. It will be fascinating to see how many of the presentations and booths consider the Goldilocks requirements. There probably won’t be many because it’s just not as sexy a story as one that mentions heavy solutions like policy-driven orchestration, zero-touch automation, AI / ML / analytics, self-scaling / self-healing networks, etc.

[I should also note that I fall into the category of loving to listen to the heavy solutions too!!]

Powerful ranking systems with hidden variables

There are ratings and rankings that ostensibly exist to give us information (and we are supposed to use that information to change our behavior).
But if we don’t know what variables matter, how is it supposed to be useful?
Just because it can be easily measured with two digits doesn’t mean that it’s accurate, important or useful.
[Marketers learned a long time ago that people love rankings and daily specials. The best way to boost sales is to put something in a little box on the menu, and, when in doubt, rank things. And sometimes people even make up the rankings.]

Seth Godin
here.

Are there any rankings that are made up in OSS? Our OSS collect an amazing amount of data so there’s rarely a need to make up the data we present.

Are they based on hidden variables? Generally, we use raw counters and / or well known metrics so we’re usually quite transparent with what our OSS present.

What about when we’re trying to select the right vendor to fulfil the OSS needs of our organisation? As Seth states, Just because it can be easily measured with two digits* doesn’t mean that it’s accurate, important or useful. [* In this case, I’m thinking of a 2 x 2 matrix].

The interesting thing about OSS ranking systems is that there is so much nuance in the variables that matter. There are potentially hundreds of evaluation criteria and even vast contrasts in how to interpret a given criterion.

For example, a criterion might be “time to activate a service” (TTAS). A vendor might have a really efficient workflow for activating single services manually but have no bulk load or automation interface. For one operator (which does single activations manually), the TTAS metric for that product would be great, but for another operator (which does thousands of activations a day and tries to automate), the TTAS metric for the same product would be awful.

As much as we love ranking systems… there are hundreds of products on the market (in some cases, hundreds of products in a single operator’s OSS stack), each fitting unique operator needs differently… so a 2 x 2 matrix is never going to cut it as a vendor selection tool… not even as a short-listing tool.

Better to build yourself a vendor selection framework. You can find a few OSS product / vendor selection hints here based on the numerous vendor / product selections I’ve helped customers with in the past.
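To illustrate why per-operator context matters more than any universal ranking, here's a sketch of the weighted-scoring step such a framework might include. The products, criteria, raw scores and weights below are all made up for the example:

```python
# Illustrative only: the same two hypothetical products, assessed once
# against the same criteria, then ranked with per-operator weights.

# Raw capability scores per product (0-10), assessed once
scores = {
    "Product A": {"manual_activation": 9, "bulk_automation": 2, "cost": 7},
    "Product B": {"manual_activation": 5, "bulk_automation": 9, "cost": 4},
}

# Each operator weights the criteria to match its own context
operator_weights = {
    "Small operator (manual activations)":
        {"manual_activation": 5, "bulk_automation": 1, "cost": 4},
    "Large operator (automating at scale)":
        {"manual_activation": 1, "bulk_automation": 6, "cost": 3},
}

def rank(weights):
    """Return products ordered by weighted total score, best first."""
    totals = {
        product: sum(weights[c] * s for c, s in criteria.items())
        for product, criteria in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

for operator, weights in operator_weights.items():
    print(operator, "->", rank(weights))
```

With these numbers the small operator ranks Product A first while the large operator ranks Product B first – the same assessment data, opposite conclusions – which is precisely the nuance a single 2 x 2 matrix flattens away.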

Automated Network Operations as a Service (ANOaaS)

“Google has started applying its artificial intelligence (AI) expertise to network operations and expects to make its tools available to companies building virtual networks on its global cloud platform.
That could be a troubling sign for network technology vendors such as Ericsson AB (Nasdaq: ERIC), Huawei Technologies Co. Ltd. and Nokia Corp. (NYSE: NOK), which now see AI in the network environment as a potential differentiator and growth opportunity…
Google already uses software-defined network (SDN) technology as the bedrock of this infrastructure and last week revealed details of an in-development “Google Assistant for Networking” tool, designed to further minimize human intervention in network processes.
That tool would feature various data models to handle tasks related to network topology, configuration, telemetry and policy.”
Iain Morris
here on Light Reading.

This is an interesting, but predictable, turn of events, isn’t it? If (when?) automated network operations as a service (ANOaaS) is perfected, it has the ability to significantly change the OSS space, doesn’t it?

Let’s have a look at this from a few scenarios (and I’m considering ANOaaS from the perspective of any of the massive cloud providers who are also already developing significant AI/ML resource pools, not just Google).

Large Enterprise, Utilities, etc with small networks (by comparison to telco networks), where the network and network operations are simply a cost of doing business rather than core business. Virtual networks and ANOaaS seem like an attractive model for these types of customer (ignoring data sovereignty concerns and the myriad other local contexts for now). Outsourcing this responsibility significantly reduces CAPEX and head-count to run what’s effectively non-core business. This appears to represent a big disruptive risk for the many OSS vendors who service the Enterprise / Utilities market (eg Solarwinds, CA, etc, etc).

T2/3 Telcos with relatively small networks that tend to run lean operations. In this scenario, the network is core business but having a team of ML/AI experts is hard to justify. Automations are much easier to build for homogeneous (consistent) infrastructure platforms (like those of the cloud providers) than for those carrying different technologies (like T2/T3 telcos perhaps?). Combine complexity, lack of scale and lack of large ML/AI resource pools and it becomes hard for T2/T3 telcos to deliver cost-effective ANOaaS either internally or externally to their customer base. Perhaps outsourcing the network (ie VNO) and ANOaaS allows these operators to focus more on sales?

T1 Telcos have large networks, heterogeneous platforms and large workforces where the network is core business. The question becomes whether they can build network cloud at the scale and price-point of Amazon, Microsoft, Google, etc. This is partly dependent upon internal processes, but also on what vendors like Ericsson, Huawei and Nokia can deliver, as quoted as a risk above.

As you probably noticed, I just made up ANOaaS. Does a term already exist for this? How do you think it’s going to change the OSS and telco markets?

Is service personalisation the answer?

“The actions taken by the telecom industry have mostly been around cost cutting, both in terms of opex and capex, and that has not resulted in breaking the curve. Too few activities has been centered around revenue growth, such as focused activities in personalization, customer experience, segmentation, targeted offerings that become part of or drive ecosystems. These activities are essential if you want to break the curve; thus, it is time to gear up for growth… I am very surprised that very few, if any, service providers today collect and analyze data, create dynamic targeted offerings based on real-time insights, and do that per segment or individual.”
Lars Sandstrom
, here.

I have two completely opposite and conflicting perspectives on the pervading wisdom of personalised services (including segmentation of one and targeted offerings) in the telecoms industry.

Telcos tend to be large organisations. If I invest in a large organisation it’s because the business model is simple, repeatable and has a moat (as Warren Buffett likes to say). Personalisation is contra to two of those three mantras – personalisation makes our OSS/BSS far more complicated and hence less repeatable (unless we build in lots of automations, which BTW, are inherently more complex).

I’m more interested in reliable and enduring profitability than just revenue growth (not that I’m discounting the search for revenue growth of course). The complexity of personalisation leads to significant increases in systems costs. As such, you’d want to be really sure that personalisation is going to give an even larger up-tick in revenues (ie ROI). Seems like a big gamble to me.

For my traditional telco services, I don’t want personalised, dynamic offers that I have to evaluate and make decisions on regularly. I want set and forget (mostly). It’s a bit like my electricity – I don’t want to constantly choose between green electricity, blue electricity, red electricity – I just want my appliances to work when I turn on the switch and not have bill shock at the end of the month / quarter. In telco, it’s not just green / blue / red. We seem to want to create the millions of shades of the rainbow, which is a nightmare for OSS/BSS implementers.

I can see the argument however for personalisation in what I’ll call the over-the-top services (let’s include content and apps as well). Telcos tend to be much better suited to building the platforms that support the whole long tail than selecting individual winners (except perhaps supply of popular content like sport broadcasts, etc).

So, if I’m looking for a cool, challenging project or to sell some products or services (you’ll notice that the quote above is on a supplier’s blog BTW), then I’ll definitely recommend personalisation. But if I want my telco customers to be reliably profitable…

Am I taking a short-term view on this? Is personalisation going to be expected by all end-users in future, leaving providers with no choice but to go down this path??

Networks lead. OSS are an afterthought. Time for a change?

In a recent post, we described how changing conditions in networks (eg topologies, technologies, etc) cause us to reconsider our OSS.

Networks always lead and OSS (or any form of network management including EMS/NMS) is always an afterthought. Often a distant afterthought.

But what if we spun this around? What if OSS initiated change in our networks / services? After all, OSS is the platform that operationalises the network. So instead of attempting to cope with a huge variety of network options (which introduces a massive number of variants and in turn, massive complexity, which we’re always struggling with in our OSS), what if we were to define the ways that networks are operationalised?

Let’s assume we want to lead. What has to happen first?

Network vendors tend to lead currently because they’re offering innovation in their networks, but more importantly on the associated services supported over the network. They’re prepared to take the innovation risk knowing that operators are looking to invest in solutions they can offer to their customers (as products / services) for a profit. The modus operandi is for operators to look to network vendors, not OSS vendors / integrators, to help generate new revenues. It would take a significant perception shift for operators to break this nexus and seek out OSS vendors before network vendors. For a start, OSS vendors have to create a revenue generation story rather than the current tendency towards a cost-out business case.

ONAP provides an interesting new line of thinking though. As you know, it’s an open-source project that represents multiple large network operators banding together to build an innovative new approach to OSS (even if it is being driven by network change – the network virtualisation paradigm shift in particular). With a white-label, software-defined network as a target, we have a small opening. But to turn this into an opportunity, our OSS need to provide innovation in the services being pushed onto the SDN. That innovation has to be in the form of services/products that are readily monetisable by the operators.

Who’s up for this challenge?

As an aside:
If we did take the lead, would our OSS look vastly different to what’s available today? Would they unequivocally need to use the abstract model to cope with the multitude of scenarios?