To link or not to link your OSS. That is the question

The first OSS project I worked on had a full-suite, single-vendor solution. All products within the suite were integrated into a single database, which allowed the product developers to introduce a lot of cross-linking. That has its strengths and weaknesses.

The second OSS suite I worked with came from one of the world’s largest network vendors and integrators. Their suite primarily consisted of third-party products that they integrated together for the customer. It was (arguably) a best-of-breed suite, all implemented as a single solution, but since the products were disparate, there was very little cross-linking. This approach also has strengths and weaknesses.

I’d become so used to the massive data migration and cross-referencing exercise required by the first OSS that I was stunned by the lack of time allocated by the second vendor for their data migration activities. The first took months and a significant level of expertise. The second took days and only required fairly simple data sets. That’s a plus for the second OSS.

However, the second OSS was severely lacking in cross-domain data, which impacted the richness of insight that could be easily unlocked.

Let me give an example to provide better context.

We know that a trouble ticketing system is responsible for managing the tracking, reporting and resolution of problems in a network operator’s network. This could be as simple as a repository for storing a problem identifier and a list of notes describing the actions taken to resolve the problem. There’s almost no cross-linking required.

A more referential ticketing system might have links to:

  • Alarm management – to show the events linked to the problem
  • Inventory management – to show the impacted (or possibly impacted) resources
  • Service management – to show the services impacted
  • Customer management – to show the customers impacted and possibly the related customer interactions
  • Spares management – to show the life-cycle of physical resources impacted
  • Workforce management – to manage the people / teams performing restorative actions
  • etc

The referential ticketing system gives far richer information, obviously, but you have to trade that off against the amount of integration and data maintenance that needs to go into supporting it. The question to ask is what level of linking is justifiable from a cost-benefit perspective.
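
To make the contrast concrete, here’s a minimal sketch in Python (all class and field names are hypothetical, not drawn from any particular product). The standalone ticket needs only an identifier and notes, whereas every cross-linked field on the referential ticket implies an integration to build and data to keep in sync:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SimpleTicket:
    """Standalone ticket: no cross-domain links required."""
    ticket_id: str
    notes: List[str] = field(default_factory=list)

@dataclass
class ReferentialTicket(SimpleTicket):
    """Ticket cross-linked into adjacent OSS domains.

    Each list holds identifiers owned by another system, so every
    field implies an integration to build and data to keep in sync.
    """
    alarm_ids: List[str] = field(default_factory=list)       # alarm management
    resource_ids: List[str] = field(default_factory=list)    # inventory management
    service_ids: List[str] = field(default_factory=list)     # service management
    customer_ids: List[str] = field(default_factory=list)    # customer management
    spare_part_ids: List[str] = field(default_factory=list)  # spares management
    work_order_ids: List[str] = field(default_factory=list)  # workforce management
```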

Treating your OSS/BSS suite like a share portfolio

Like those of most readers, I’m sure your OSS/BSS suite consists of many components. What if you were to look at each of those components as assets? In a share portfolio, you analyse your stocks to see which assets are truly worth keeping and which should be divested.

We don’t tend to take such a long-term analytical view of our OSS/BSS components. We may regularly talk about their performance anecdotally, but I’m talking about a strategic analysis approach.

If you were to look at each of your OSS/BSS components, where would you put them in the BCG Matrix?
BCG matrix
Image sourced from NetMBA here.

How many of your components are giving a return (whatever that may mean in your organisation) and/or have significant growth potential? How many are dogs that are a serious drain on your portfolio?
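
If it helps to structure the exercise, here’s a rough sketch of the classification (thresholds, metrics and component names are invented for illustration; each organisation would define “return” and “growth” in its own terms):

```python
def bcg_quadrant(annual_return: float, growth_potential: float,
                 return_threshold: float = 0.0,
                 growth_threshold: float = 0.05) -> str:
    """Place an OSS/BSS component in a BCG quadrant.

    Thresholds are purely illustrative -- each organisation would
    define 'return' and 'growth' in its own terms.
    """
    if growth_potential >= growth_threshold:
        return "Star" if annual_return >= return_threshold else "Question mark"
    return "Cash cow" if annual_return >= return_threshold else "Dog"

# Hypothetical portfolio: component -> (annual return, growth potential)
portfolio = {
    "inventory management": (0.10, 0.02),
    "legacy ticketing": (-0.05, 0.01),
    "orchestration": (0.02, 0.15),
}
for name, (ret, growth) in portfolio.items():
    print(f"{name}: {bcg_quadrant(ret, growth)}")
```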

From an investor’s perspective, we seek to double down on our day-to-day investment in cash-cows and stars. Equally, we seek to divest our dogs.

But that’s not always the case with our OSS/BSS portfolio. We sometimes spend so much of our daily activity tweaking around the edges, trying to fix our dogs or just adding more things into our OSS/BSS suite – all of which distracts us from increasing the total value of our portfolio.

To paraphrase this Motley Fool investment strategy article into an OSS/BSS context:

  • Holding too many shares in a portfolio can crowd out returns for good ideas – being precisely focused on what’s making a difference rather than being distracted by having too many positions. Warren Buffett recommends taking 5-10 positions in companies that you are confident in holding forever (or for a very long period of time), rather than constantly switching. I should note, though, that software is arguably more perishable than the institutions we invest in – software doesn’t tend to last for decades (except some OSS perhaps 😀 )
  • Good ideas are scarce – ensuring you’re not getting distracted by the latest trends and buzzwords
  • Competitive knowledge advantage – knowing your market segment / portfolio extremely well and how to make the most of it, rather than having to up-skill on every new tool that you bring into the suite
  • Diversification isn’t lost – ensuring there is suitable vendor/product diversification to minimise risk, but also being open to long-term strategic changes in the product mix

Day-trading of OSS / BSS tools might be a fun hobby for those of us who solution them, but is it as beneficial as long-run investment?

I’d love to hear your thoughts and experiences.

2019 predictions for OSS

Well, this is the time of year when people make big predictions for the coming year. But let me start by saying the headline is something of a misnomer. I’m not clever enough to have any predictions for 2019 for a couple of reasons:

  1. There are far too many clever people working across the myriad fields of expertise that make up an OSS for me to possibly guess which might gain traction this year
  2. I’m yet to figure out whether there are any consistent patterns or cycles, like Moore’s Law, that uniquely define progress in OSS. Then again, you could claim that there are any number of metrics that might define progress for OSS (or for any individual OSS stack). But I’ll also be honest enough to say that I haven’t tried applying any of these futurology techniques to find any useful patterns either.
    Futurism Methodologies

Instead, I’ll call out the many industry-wide challenges / opportunities that are still waiting to be solved in 2019. Many of these same challenges / opportunities have been around since I first started working on OSS projects circa 2000.

The Passionate About OSS Call for Innovation paper outlines a list of starting points where exponential improvements await.

I’m not sure if any will be solved in 2019, but I will make the prediction that the thousands of very clever people working in the OSS industry will make some exciting steps forward this year. Hopefully they’ll be some of the quantum leaps that await, and not just the ever-present, but still highly challenging, incremental improvements.

OSS come in all shapes and sizes

As the OSS vendors / suppliers page here on PAOSS shows, there are a LOT of different OSS options, making it an extremely fragmented market. But there’s something of a reason for that fragmentation – customer requirements for OSS come in all shapes and sizes. Here are four of the major categories that I’ve been lucky enough to work on.

Tier 1 telcos – the OSS of these organisations are best classified as having to cope with scale. Scale comes in multiple dimensions. The number of network devices under management tends to be large, as does the variety of device types. The number of subscribers and customer services tends to be large, not to mention the large amount of change occurring on a daily basis. The number of process variants and system integrations also tends to be large. And being at scale means that they’re more likely to be able to justify the cost of customisations and automations – either to off-the-shelf products or via purpose-built tools. Budgets, both CAPEX and OPEX, also tend to be large. Except where niche slices of the total OSS suite are being delivered, the vendors that service this market are also large, both in terms of revenues and in the number of services staff available to service the customer’s unique needs. In the case of the telco, the business (and revenue model) is built around the network, so it gets the clear attention of the organisation’s executives.

Enterprise customers – these OSS tend to be at the other end of the spectrum, even when the enterprise is large (eg banks). Networks tend to be more homogeneous, being IT/IP-centric. Services tend to be less customer-specific (ie for journaling costs at a business-unit level rather than for individual subscribers) but follow ITSM process models, so the daily service management delta is not at the same scale as for a Tier 1 telco. For enterprise customers, the network is rarely core business, even if it is mission-critical to the business. As such, attention and budgets tend to be much smaller. In turn, this means that the smaller, self-service or open-source OSS products / suppliers tend to be present in this segment.

Then there are two categories of organisation that fit between the two previous ends of the spectrum:

Tier 2/3 telcos, MVNOs and data centres – similar to the Tier-1 telcos, just not at the same scale, which has implications for the nature of their OSS. They generally need all the same types of OSS tools as the T1s, just not catering for the same number of variants. Due to cost constraints, there may be one or a few significant OSS building blocks such as inventory, assurance or orchestration, but often there will also be enterprise-grade and/or open-source products in their OSS stack. CAPEX and OPEX budgets are smaller, so clever jack-of-all-trades OSS experts are often on the operational teams delivering sophisticated solutions on shoe-string budgets. Some of the best OSS experts I’ve come across can trace their roots back to these origins.

Utilities – the OSS of these organisations are a fascinating mix of the first two categories above, because enterprise-grade OSS often aren’t really fit-for-purpose and carrier-grade OSS don’t quite suit either. Except in the case of multi-utilities (eg power + telco), these organisations tend to have very little service management change, mainly because they tend to have few to no external customers. This makes them similar to enterprise OSS. But like telcos, they often have networks that are more varied than the typical IT/IP-centric networks under management in enterprise-land. They often have less common network topologies and protocols, including older and even proprietary models that enterprise-grade OSS rarely support without expensive mediation. Just like the enterprise, the telco network (and hence the OSS) of a utility is not core business and can’t be justified through driving incremental revenues. However, it is generally mission-critical to the core business (eg tele-protection circuits are in place to ensure resilience of the electricity supply across the power network). As such, network health / reliability and asset management tend to be the main focus of these OSS. And whereas telcos can delegate some responsibility for network security to their customers (using the dumb-pipe excuse), utilities bear full responsibility for the security of their telco networks and the critical infrastructure that these networks and OSS tools support.

These are only broad, general categories and there are more than 50 shades of grey in between. Are there any other broad categories that you feel I’m missing?

GE undergoes another re-structure. Does it unlock a competitive advantage?

GE has just announced plans to establish a new, independent company focused on building a comprehensive Industrial Internet of Things (IIoT) software portfolio.

The spun-out company will “start with $1.2 billion in annual software revenue and an existing global industrial customer base. The company is intended to be a GE wholly-owned, independently run business with a new brand and identity, its own equity structure, and its own Board of Directors. The proposed new organization aims to bring together GE Digital’s industry-leading IIoT solutions including the Predix platform, Asset Performance Management, Historian, Automation (HMI/SCADA), Manufacturing Execution Systems, Operations Performance Management, and the GE Power Digital and Grid Software Solutions businesses.”

A couple of months ago, we posed the question about cross-over use-cases / functionality / products / data / process between IoT platforms and OSS.

Sure, there are fundamental differences between what a sensor network management platform (oops, should I call that SNMP? That won’t cause any confusion, will it??) does and what an OSS does. However, there seems to be enough commonality, and enough potential for shared insight, for the two to collude.

As far as I’ve ascertained (happy to be told otherwise), GE is the only organisation that has significant offerings in both spaces – Predix in sensor network management and a multitude of OSS / asset tools including Smallworld. Up until now, I understand that Predix and the OSS tools have been kept in separate silos by GE. Placing the two sets of assets together in the new, as yet unnamed, digital business surely increases the likelihood of collaboration.

If GE really is the only organisation at the Venn-diagram convergence of IoT and OSS platforms, then it holds a competitive advantage in that niche. The only question that remains is to identify the use-cases and customers to which that niche (and its functionality) is relevant, if any.

PS. Just as an aside, the restructure also includes the announcement that GE is divesting a majority stake in ServiceMax, a product that is often bundled with its OSS offers, which it bought for $916M back in 2016. Silver Lake, a private equity firm, will take over that stake in early 2019.

How to kill the OSS RFP (part 4)

This is the fourth, and final part (I think) in the series on killing the OSS RFI/RFP process, a process that suppliers and customers alike find to be inefficient. The concept is based on an initiative currently being investigated by TM Forum.

The previous three posts focused on the importance of trusted partnerships and the methods to develop them via OSS procurement events.

Today’s post takes a slightly different tack. It proposes a structural obsolescence that may lead to the death of the RFP. We might not have to kill it. It might die a natural death.

Actually, let me take that back. I’m sure RFPs won’t die out completely as a procurement technique. But I can see a time when RFPs are far less common and significantly different in nature to today’s procurement events.

How??
Technology!
That’s the answer all technologists cite for any form of problem, of course. But there’s a growing trend that provides a portent of the future here.

It comes via the XaaS (As a Service) model of software delivery. We’re increasingly building and consuming cloud-native services. OSS of the future, the small-grid model, are likely to consume software as services from multiple suppliers.

And rather than having to go through a procurement event like an RFP to form each supplier contract, the small-grid model will simply be a case of consuming one/many services via API contracts. The API contract (eg OpenAPI specification / Swagger) will be available for the world to see. You either consume it or you don’t. No lengthy contract negotiation phase to be had.
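
As a hedged sketch of what “you either consume it or you don’t” might look like in practice (the supplier URL below is invented for illustration), a buyer could fetch a supplier’s published OpenAPI contract and enumerate its operations the same day, with no negotiation phase:

```python
import json
from urllib.request import urlopen

# Hypothetical supplier contract URL -- purely illustrative.
SPEC_URL = "https://api.example-oss-supplier.com/openapi.json"

def list_operations(spec_url: str) -> None:
    """Enumerate the operations a supplier's published OpenAPI contract exposes.

    The published contract is the 'negotiation': if the operations and
    schemas fit your need, you consume the service; if not, you don't.
    """
    with urlopen(spec_url) as resp:
        spec = json.load(resp)
    for path, methods in spec.get("paths", {}).items():
        for method, operation in methods.items():
            print(f"{method.upper():6} {path}  {operation.get('summary', '')}")

# list_operations(SPEC_URL)  # would print the service's entire API surface
```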

Now as mentioned above, the RFP won’t die, but evolve. We’ll probably see more RFPs formed between customers and the services companies that will create customised OSS solutions (utilising one/many OSS supplier services). And these RFPs may not be with the massive multinational services companies of today, but increasingly through smaller niche service companies. These micro-RFPs represent the future of OSS work, the gig economy, and will surely be facilitated by smart-RFP / smart-contract models (like the OSS Justice League model).

Why does everyone know an operator’s business better than the operator?

The headline today blatantly steals from a post by William Webb. You can read his entire, brilliant post here. All quotes below are from the article.

William’s concept aligns quite closely with yesterday’s article regarding external insights that don’t quite marry up with the real situation faced by operators.

“At the Great Telco Debate this week there was no shortage of advice for operators. Some counselled them to move up the value chain or branch out into related areas. Others to build “it” so that they would come… But there were no operators actually talking about doing these things.”
Funny because it’s true.

“In most industries the working assumption is that a company knows its customers better than outsiders… But this assumption of knowing your customers seems not to hold in the mobile telecoms industry. It appears that the industry assumes that the mobile operators do not know their customers, but that they – the suppliers generally – understand them better.”
Interesting. So this is a case of the suppliers purportedly knowing their customer (the operators) but also their customer’s customers (the end-users of comms services). This concept is almost certainly true of network suppliers. I don’t feel that it is common for OSS suppliers though. In fact it’s an area that could definitely be improved upon – an awareness of our customers’ customers.

“At the Great Telco Debate, Nokia spoke about how the telcos needed to be bold, to build networks [eg 5G] for which there was no current business plan on the basis that revenue streams would materialise. Telling your customer to do something which cannot be justified economically seems a risky way to ensure a good long-term relationship.”

I actually laughed out loud at the truth behind this one. So many related stories to tell. Another day perhaps!

“The operators have been advised for decades that they are in a business that is increasingly becoming a utility and that they need to “move up the value chain” or find some other growth opportunity. This advice seems to be predicated on the view that nobody wants to be a utility, that it is essential for organisations to grow, and that moving around the value chain is easy to do. All merit further investigation. Utility businesses are stable, low-risk and normally profitable. Many companies do not grow but thrive nevertheless. But most problematic, mobile operators have been trying to “move up the value chain” for many years, with conspicuous lack of success.”

The CSP vs DSP business model. There is absolutely a position for both speeds in the telco marketplace. Which is better? Depends on your investment objectives and risk/reward profile.

“Most operators, sensibly, appear to be ignoring all this unsolicited advice and getting on with running their networks reliably while delivering ever-more data capacity for ever-lower tariffs. Of course, they listen to ideas emanating from around the industry, but they know their business, their financial constraints, and their competitive and regulatory environment.”

As indicated in yesterday’s post, every client situation is different. We might look at the technical similarities between projects, but the differences go beyond that. A supplier or consultant can’t easily replace local knowledge, especially across financial and regulatory environments.

OSS answers that are simple but wrong vs complex but right

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills…”
John F. Kennedy

Let’s face it. The business of running a telco is complex. The business of implementing an OSS is complex. The excitement of working in our industry probably stems from the challenges we face, and from the impact we can make if/when we overcome them.

The cartoon below tells a story about telco and OSS consulting (I’m ignoring the “Science vs everything else” box for the purpose of this post, focusing only on the simple vs complex sign-post).

Simple vs Complex

I was recently handed a brochure from a consulting firm that outlined a step-by-step transformation approach for comms service providers of different categories. It described quarter-by-quarter steps to transform across OSS, BSS, networks, etc. Simple!

The problem with their prescriptive model was that they’d developed a stereotype for each of the defined carrier categories. By stepping through the model and comparing against some of my real clients, it was clear that their transformation approaches weren’t close to aligning to any of those clients’ real situations.

Every single assignment and customer has its own unique characteristics, its own nuances across many layers. Nuances that in some cases are never even visible to an outsider / consultant. Trying to prepare generic but prescriptive transformation models like this would seem to be a futile exercise.

I’m all for trying to bring repeatable methodologies into consulting assignments, but they can only act as general guidelines that need to be moulded to local situations. I’m all for bringing simplification approaches to consultancies too, as reflected by the number of posts that are categorised as “Simplification” here on PAOSS. We sometimes make things too complex, so we can simplify, but this definitely doesn’t imply that OSS or telco transformations are simple. There is no one-size-fits-all approach.

Back to the image above, there’s probably another missing arrow – Complex but wrong! And perhaps another answer with no specific path – Simple, but helpful in guiding us towards the summit / goal.

I can understand why telcos get annoyed with us consultants telling them how they should run their business, especially consultants who show no empathy for the challenges faced.

But more on that tomorrow!

The OSS proof-of-worth dilemma

Earlier this week we posted an article describing Sutton’s Law of OSS, which effectively tells us to go where the money is. The article suggested that in OSS we instead tend towards the exact opposite – the inverse-Sutton – we go to where the money isn’t. Instead of robbing banks like Willie Sutton, we break into a cemetery and aimlessly look for the cash register.

A good friend responded with the following, “Re: The money trail in OSS … I have yet to meet a senior exec. / decision maker in any telco who believes that any OSS component / solution / process … could provide benefit or return on any investment made. In telco, OSS = cost. I’ve tried very hard and worked with many other clever people also trying hard to find a way to pitch OSS which overcomes this preconception. BSS is often a little easier … in many cases it’s clear that “real money” flows through BSS and needs to be well cared for.”

He has a strong argument. The cost-out mentality is definitely ingrained in our industry.

We are saddled with the burden of proof. We need to prove, often to multiple layers of stakeholders, the true value of the (often intangible) benefits that our OSS deliver.

The same friend also posited, “The consequence is the necessity to establish beneficial working relationships with all key stakeholders – those who specify needs, those who design and implement solutions and those, especially, who hold or control the purse strings. [To an outsider] It’s never immediately obvious who these people are, nor what are their key drivers. Sometimes it’s ambition to climb the ladder, sometimes political need to “wedge” peers to build empires, sometimes it’s necessity to please external stakeholders – sometimes these stakeholders are vendors or government / international agencies. It’s complex and requires true consultancy – technical, business, political … at all levels – to determine needs and steer interactions.”

Again, so true. It takes more than just technical arguments.

I’m big on feedback loops, but also surprised at how little they’re used in telco – at all levels.

  • We spend inordinate amounts of time building and justifying business cases, but relatively little measuring the actual benefits produced after we’ve finished our projects (or gaining the learnings to improve the next round of business cases)
  • We collect data in our databases, obliviously let it age, realise at some point in the future that we have a data quality issue and perform remedial actions (eg audits, fixes, etc) instead of designing closed-loop improvement cycles that ensure DQ isn’t constantly deteriorating (see the sketch after this list)
  • We’re great at spending huge effort in gathering / arguing / prioritising requirements, but don’t always run requirements traceability all the way into testing and operational rollout.
  • etc
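
To illustrate the closed-loop idea from the second bullet, here’s a minimal sketch (rule names and thresholds are invented): a scheduled job scores data quality continuously and raises remediation tasks the moment a rule slips, rather than waiting years for an audit:

```python
from typing import Callable, Dict, List

# A DQ rule returns the fraction of records that pass (names are invented).
DQRule = Callable[[List[dict]], float]

def dq_closed_loop(records: List[dict], rules: Dict[str, DQRule],
                   threshold: float = 0.98) -> List[str]:
    """Score DQ rules and raise remediation tasks for any that slip.

    Run on a schedule, this catches decay as it happens instead of
    leaving it for a big-bang audit years later.
    """
    tasks = []
    for name, rule in rules.items():
        score = rule(records)
        if score < threshold:
            tasks.append(f"Remediate '{name}': pass rate {score:.1%} below {threshold:.0%}")
    return tasks

rules = {
    "device has a site reference":
        lambda recs: sum(1 for r in recs if r.get("site_id")) / max(len(recs), 1),
}
print(dq_closed_loop([{"site_id": "SYD-01"}, {"site_id": None}], rules))
```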

Which leads us back to the burden of proof. Our OSS have all the data in the world, but how often do we use it to justify and persuade – to prove?

Our OSS products have so many modules and so much technical functionality (much of it effectively duplicated from vendor to vendor). But I’ve yet to see any vendor product that allows their customers, the OSS operators, to automatically gather proof-of-worth stats (ie executive-ready reports). Nor have I seen any integrator build proof-of-worth consultancy into their offer, whereby they work closely with their customer to define and collect the metrics that matter. BTW. If this sounds hard, I’d be happy to discuss how I approach this task.
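
To make the proof-of-worth idea more tangible, here’s a hedged sketch of the kind of executive-ready stat such a report might compute (the metrics, costs and figures are invented placeholders, not from any real product):

```python
from dataclasses import dataclass

@dataclass
class TicketStats:
    period: str
    tickets_closed: int
    mttr_hours: float            # mean time to restore
    auto_resolved_fraction: float

def proof_of_worth(before: TicketStats, after: TicketStats,
                   cost_per_hour: float) -> str:
    """Turn raw OSS stats into an executive-ready one-liner.

    Hypothetical metric: restoration hours avoided, priced at an
    agreed hourly cost, plus the shift toward auto-resolution.
    """
    mttr_gain = before.mttr_hours - after.mttr_hours
    saving = mttr_gain * after.tickets_closed * cost_per_hour
    return (f"{after.period}: MTTR down {mttr_gain:.1f} hrs/ticket, "
            f"auto-resolution up {after.auto_resolved_fraction - before.auto_resolved_fraction:.0%}, "
            f"~${saving:,.0f} of restoration effort avoided")

print(proof_of_worth(TicketStats("2017", 1200, 6.5, 0.10),
                     TicketStats("2018", 1300, 4.0, 0.25), 150.0))
```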

So let me leave you with three important questions today:

  1. Have you also experienced the overwhelming burden of the “OSS = cost” mentality?
  2. If so, do you have any suggestions / experiences on how you’ve overcome it?
  3. Does the proof-of-worth product functionality sound like it could be useful (noting that it doesn’t even have to be a product feature, but a custom report / portal using data that’s constantly coursing through our OSS databases)?

OSS that capture value, not just create it

I’ve just had a really interesting first day at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia). The quality of presentations was quite high. Some great thought-provoking ideas!!

Nik Willetts kicked off his keynote with the following quote, which I’m paraphrasing, “Telcos need to start capturing value, not just creating it as they have for the last decade.”

For me, this is THE key takeaway for this event, above any of the other interesting technical discussions from day 1 (and undoubtedly on the agenda for the next 2 days too).

The telecommunications industry has made a massive contribution to the digital lifestyle that we now enjoy. It has been instrumental in adding enormous value to our lives and our economy. But all the while, telecommunications providers globally have been experiencing diminishing profitability and share-of-wallet (as described in this earlier post). Clearly the industry has created enormous value, but hasn’t captured as much as it would’ve liked.

The question to ask is how our thinking and our OSS/BSS stacks will contribute to capturing more value for our customers. As described in the share-of-wallet post above, the premium end of the value chain has always been in the content (think in terms of phone conversations in days gone by, or the myriad of comms techniques today such as email, live chat, blogs, etc, etc). That’s what the customer pays for – the experience – not the networks or systems that facilitate it.

Nik’s comments made me think of Andrew Carnegie. Monopolies such as the telecommunications organisations of the past and Andrew Carnegie’s steel business owned vast swathes of the value chain (Carnegie Steel Company owned the mines which extracted the raw materials needed to make steel, controlled the transportation used to deliver the materials and the product, and ran the mills used for steel production). Buyers didn’t care for the mines or mills or transportation. Customers were paying for the end product as it is what helped them achieve their goals, whether that was the railway tracks needed by the railroads or the beams needed by construction companies.

The Internet has allowed enormous proliferation of the premium-end of the telecommunications value chain. It’s too late to stuff that genie back into the bottle. But to Nik’s further comment, we can help customers achieve their goals by becoming their “do-it-yourself” digital partners.

Our customers now look to platforms like Facebook, Instagram, Google, WordPress, Amazon, etc to build their marketing, order capture, product / content delivery, commercial transactions, etc. I really enjoyed Monty Hong‘s presentation, which showed how Telkomsel’s OSS/BSS is helping to embed Telkomsel into customers’ digital lifestyles / value-chains. It’s a perfect example of the “biggest OSS loser” proof discussed in yesterday’s post.

Presence vs omni-presence and the green button of OSS design

In OSS there are some tasks that require availability (the green ‘available’ button on communicator tools). The Network Operations Centre (NOC) is one example. But does it require on-site presence in the NOC?

An earlier post showed how wrong I was about collaboration rooms. It seems that ticket flicking (and perhaps communication tools like Slack) is the preferred model. If this is the preferred model, then perhaps there is no need for a NOC… perhaps only a DR NOC (Disaster Recovery NOC).

“Truth is, there are hardly any good reasons to know if someone’s available or away at any given moment. If you truly need something from someone, ask them. If they respond, then you have what you needed. If they don’t, it’s not because they’re ignoring you – it’s because they’re busy. Respect that! Assume people are focused on their own work.
Are there exceptions? Of course. It might be good to know who’s around in a true emergency, but 1% occasions like that shouldn’t drive policy 99% of the time.”
Jason Fried on Signal v Noise

Customer service needs availability. But with a multitude of channels (for customers) and collaboration tools (for staff*), it decreasingly needs presence (except in retail outlets perhaps). You could easily argue that contact centres, online chat operators, etc don’t require presence, just availability.

The one area where I’m considering the paradox of presence is in OSS design / architecture. There are often many facets of a design that require multiple SMEs – OSS application, security, database, workflow, user-experience design, operations, IT, cloud etc.

When we get many clever SMEs in the one room, they often have so many ideas and so much expertise that the design process resembles an endless loop. Presence seems to inspire omnipresence (the need to show expertise across all facets of the design). Sometimes we achieve a lot in these design workshops. Sometimes we go around in circles almost entirely because of the cleverness of our experts. They come up with so many good ideas we end up in paralysis by analysis.

The idea I’m toying with is how to use the divide-and-conquer approach – carving up areas of responsibility and demarcation points so that each expert focuses on their own area. Having one expert come up with their best model within their black box of responsibility, and connecting that black box with the adjacent demarcation points. The benefits are also the detriments – a true double-edged sword. The benefit is having one true expert work through the options within the black box. The detriment is having only one expert work through the options within the black box.

In hindsight, there are some past projects on which I wish I’d tried the divide-and-conquer approach. On others, the collaboration model worked extremely well.

But to get back to presence, I wonder whether thrashing up front to define black boxes and demarcation points then allows the experts to do their thing remotely and become less inclined to analyse and opine on everyone else’s areas of expertise.

* I use the term staff to represent anyone representing the organisation (staff, contractor, consultant, freelancer, etc)

Build an OSS and they will come… or sometimes not

Build it and they will come.

This is not always true for OSS. Let me recount a few examples.

The project team is disconnected from the users – The team that’s building the OSS in parallel to existing operations doesn’t (or isn’t able to) engage with the end users of the OSS. Once it comes time for cut-over, the end users want to stick with what they know and don’t use the shiny new OSS. From painful experience I can attest that stakeholder management is under-utilised on large OSS projects.

Turf wars – Different groups within a customer are unable to gain consensus on the solution. For example, the operational design team gains the budget to build an OSS but the network assurance team doesn’t endorse this decision. The assurance team then decides not to endorse or support the OSS that is designed and built by the design team. I’ve seen an OSS worth tens of millions of dollars turned off less than 2 years after handover because of turf wars. Stakeholder management again, although this could be easier said than done in this situation.

It sounded like a good idea at the time – The very clever OSS solution team keeps coming up with great enhancements that don’t get used, for whatever reason (eg not fit-for-purpose, lack of awareness of their existence among users, lack of training, etc). I’ve seen a customer introduce over 500 customisations to an off-the-shelf solution, yet hundreds of those customisations hadn’t been touched by users in the full year prior to a utilisation analysis. That’s right, not even used once in the preceding 12 months. Some made sense because they were once-off tools (eg custom migration activities), but many didn’t.

The new OSS is a scary beast – The new solution might be perfect for what the customer has requested in terms of functionality. But if the solution differs greatly from what the operators are used to, it can be too intimidating to be used. A two-week classroom-based training course at the end of an OSS build doesn’t provide sufficient learning to take up all the nuances of the new system like the operators have developed with the old solution. Each significant new OSS needs an apprenticeship, not just a short-course.

It’s obsolete before it’s finished – OSS work in an environment of rapid change – networks, IT infrastructure, organisation models, processes, product offerings, regulatory shifts, disruptive innovation, etc, etc. The longer an OSS takes to implement, the greater the likelihood of obsolescence. All the more reason to design for incremental delivery of business value rather than big-bang delivery.

What other examples have you experienced where an OSS has been built, but the users haven’t come?

Falsely rewarding based on OSS existence rather than excellence

There’s a common belief that most jobs see people rewarded for presence rather than performance. That is, they’re encouraged to be on site from 9am to 5pm rather than being given free rein over their work schedules as long as key outcomes are met / exceeded.

In OSS vendor / product selection there’s a similar concept. Contracts are often awarded based on existence rather than excellence. When evaluating a product, if it’s able to do a majority of the functions in the long list of requirements then the box is ticked.

However, this doesn’t take into account that there are usually only a very small number of functions that any given customer’s OSS needs to perform at a very high level of efficiency. All the others are effectively just nice to have. That’s the 80/20 rule at work.

When guiding a customer through their vendor selections, I always take them through an exercise to identify the use-cases / functions that really matter. Then we ensure that the demos or proofs of concept focus closely on how excellent the OSS is at those most important factors.
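
One simple way to operationalise that exercise is weighted scoring, where the handful of critical use-cases dominates the outcome rather than the long tail of tick-boxes. A sketch (requirement names, weights and scores are all invented):

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted vendor score: the critical use-cases carry most of the weight."""
    return sum(scores[req] * weights[req] for req in weights) / sum(weights.values())

# Hypothetical: two critical use-cases outweigh the long tail of tick-boxes.
weights = {"alarm correlation at scale": 40,
           "service impact analysis": 40,
           "remaining 200 tick-box requirements": 20}
vendor_a = {"alarm correlation at scale": 9, "service impact analysis": 8,
            "remaining 200 tick-box requirements": 5}
vendor_b = {"alarm correlation at scale": 5, "service impact analysis": 5,
            "remaining 200 tick-box requirements": 9}  # ticks more boxes overall
print(f"Vendor A: {weighted_score(vendor_a, weights):.1f}")  # 7.8 -- excellence wins
print(f"Vendor B: {weighted_score(vendor_b, weights):.1f}")  # 5.8 -- existence loses
```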

OSS automations – just because we can, doesn’t mean we should

Automation is about using machines / algorithms to respond faster than humans can, or more efficiently than humans can, or more accurately than humans can… but only if the outcomes justify the costs. When it comes to automations, it’s a case of, “just because we can, doesn’t mean we should.”

The more complex the decision tree you’re trying to automate, the higher the costs and therefore the harder it becomes to cost-justify. So the first step in any automation is taking a lateral thinking approach to simplifying the decision tree.
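
A hedged back-of-envelope model of that cost-justification (all figures invented): the build cost grows with the number of decision-tree branches you try to cover, while the savings grow only with the share of events the automation actually handles:

```python
def payback_years(branches: int, build_cost_per_branch: float,
                  events_per_year: int, manual_cost_per_event: float,
                  coverage: float) -> float:
    """Years for an automation to pay for itself (all inputs hypothetical).

    Build cost grows with the decision-tree branches you cover; savings
    grow only with the share of events the automation actually handles.
    """
    build_cost = branches * build_cost_per_branch
    annual_saving = events_per_year * manual_cost_per_event * coverage
    return build_cost / annual_saving

# A simplified tree pays back fast; a sprawling one blows the case out.
print(f"{payback_years(10, 5_000, 20_000, 15, 0.80):.1f} years")   # ~0.2
print(f"{payback_years(500, 5_000, 20_000, 15, 0.95):.1f} years")  # ~8.8
```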

This recent post highlighted a graph from Nokia’s Bell Labs and the financial dependency that network slicing has on operational automation:
Nokia Network Slicing

Let’s use the Toyota Five Whys technique to work our way through the implications of this:

Statement 0: As CSPs, we need to drastically reduce complexity in the processes / decision-trees across our whole organisation.

Why 1? So that we can apply significant levels of automation

Why 2? So that we can apply technologies / techniques such as network slicing or virtualisation that are cost-justifiable

Why 3? So that we can offer differentiated, premium services

Why 4? So that our offerings don’t become commodities

Why 5? So that we retain corporate profitability to return to shareholders and/or invest in further interesting projects

I love that we’re looking to all number of automation technologies / techniques to apply to our OSS. However, we’re bypassing the all-important statement 0. We’re starting at Why 1 and partially missing the cost-justifiable part of Why 2. If our automation projects don’t prove cost-justifiable, then we never get the chance to reach whys 3, 4 and 5.

OSS holds the key to network slicing

“Network slicing opens new business opportunities for operators by enabling them to provide specialized services that deliver specific performance parameters. Guaranteeing stringent KPIs enables operators to charge premium rates to customers that value such performance. The flip side is that such agreements will inevitably come with tough contractual obligations and penalties when the agreed KPIs are not met…even high numbers of slices could be managed without needing to increase the number of operational staff. The more automation applied, the lower the operating costs. At 100 percent automation, there is virtually no cost increase with the number of slices. Granted this is a long-term goal and impractical in the short to medium term, yet even 50 percent automation will bring very significant benefits.”
From a paper by Nokia – “Unleashing the economic potential of network slicing.”

With typical communications services tending towards commoditisation, operators will naturally seek out premium customers. Customers with premium requirements such as latency, throughput, reliability, mobility, geography, security, analytics, etc.

These custom requirements often come with unique network configuration requirements. This is why network slicing has become an attractive proposition. The white paper quoted above makes an attempt at estimating profitability of network slicing including some sensitivity analyses. It makes for an interesting read.

The diagram below is one of many contained in the White Paper:
Nokia Network Slicing

It indicates that a significant level of automation is going to be required to achieve an equivalent level of operational cost to a single network. To re-state the quote, “The more automation applied, the lower the operating costs. At 100 percent automation, there is virtually no cost increase with the number of slices. Granted this is a long-term goal and impractical in the short to medium term, yet even 50 percent automation will bring very significant benefits.”
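
One simple way to read that claim (a sketch with invented numbers, not Nokia’s model): if only the un-automated share of work scales with slice count, operating cost flattens out completely as automation approaches 100 percent:

```python
def slicing_ops_cost(base_cost: float, per_slice_cost: float,
                     n_slices: int, automation: float) -> float:
    """Operating cost where only the un-automated share of work scales with slices."""
    return base_cost + (1 - automation) * per_slice_cost * n_slices

for automation in (0.0, 0.5, 1.0):
    costs = [slicing_ops_cost(100, 10, n, automation) for n in (1, 10, 100)]
    print(f"automation={automation:.0%}: cost at 1/10/100 slices = {costs}")
# At 100% automation the cost curve is flat no matter how many slices you run.
```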

Even 50% operational automation is a significant ambition. OSS hold the key to delivering on this ambition. Such ambitious automation goals mean we have to look at massive simplification of operational variant trees. Simplifications that include, but go far beyond, OSS, BSS and networks. This implies whole-stack simplification.

Stop looking for exciting new features for your OSS

“The iPhone disrupted the handset business, but has not disrupted the cellular network operators at all, though many people were convinced that it would. For all that’s changed, the same companies still have the same business model and the same customers that they did in 2006. Online flight booking doesn’t disrupt airlines much, but it was hugely disruptive to travel agents. Online booking (for the sake of argument) was sustaining innovation for airlines and disruptive innovation for travel agents.
Meanwhile, the people who are first to bring the disruption to market may not be the people who end up benefiting from it, and indeed the people who win from the disruption may actually be doing something different – they may be in a different part of the value chain. Apple pioneered PCs but lost the PC market, and the big winners were not even other PC companies. Rather, most of the profits went to Microsoft and Intel, which both operated at different layers of the stack. PCs themselves became a low-margin commodity with fierce competition, but PC CPUs and operating systems (and productivity software) turned out to have very strong winner-takes-all effects.”
Ben Evans on his blog about Tesla.

As usual, Ben makes some thought-provoking points. The ones above have coaxed me into thinking about OSS from a slightly different perspective.

I’d tended to look at OSS as a product to be consumed by network operators (and further downstream by the customers of those network operators). I figured that if our OSS delivered benefit to the downstream customers, the network operators would thrive and would therefore be prepared to invest more into OSS projects. In a way, it’s a bit like a sell-through model.

But the ideas above give some alternatives for OSS providers to reduce dependence on network operator budgets.

Traditional OSS fit within a value-chain that’s driven by customers who wish to communicate. In the past, the telephone network was perceived as the most valuable part of that value-chain. These days, digitisation and competition have meant that the perceived value of the network has dropped to that of a low-margin commodity in most cases. We’re generally not prepared to pay a premium for a network service. The Microsoft and Intel equivalents of the communications value-chain are far more diverse. It’s the Googles, Facebooks, Instagrams, YouTubes, etc that are perceived to deliver most value to end customers today.

If I were looking for a disruptive OSS business model, I wouldn’t be looking to add exciting new features within the existing OSS model. In fact, I’d be looking to avoid our current revenue dependence on network operators (ie the commoditising aspects of the communications value-chain). Instead I’d be looking for ways to contribute to the most valuable aspects of the chain (eg apps, content, etc). Or even better, to engineer a more exceptional comms value-chain than we enjoy today, with an entirely new type of OSS.

Persona mapping for OSS PoCs

When selecting new applications for an OSS or to augment an existing OSS, it always makes sense to me to run a Proof of Concept. But what do we want to demonstrate in that PoC? For me, we want to run demonstrations of the factors (eg features, use-cases, processes, etc) that justify the investment.

A simple exercise you can use is to identify the personas / roles that interact with the OSS. This could include personas such as NOC operator, strategic planner, network engineer, order entry, field ops, data / analytics, application administrator, etc. The actual personas will differ within each organisation of course.

For each of those personas, we can identify and interview an individual that represents that persona.

Interview questions include:

  1. What are the key responsibilities of your role?
  2. What is the most important goal / KPI for your role?
  3. How does this OSS (or proposed OSS) support you in meeting this goal?
  4. Describe the single most important process / function that you perform using the OSS.
  5. Why is it so important?
  6. How often do you perform this process / function?
  7. Please provide a short list of other important processes / functions you perform with this OSS.

We can then build this into a matrix and seek to prioritise into a set of use-cases. Based on time and cost constraints, we can then build the top-n of those use-cases into implementation scenarios for the PoC.
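
As a rough sketch of turning those interviews into a prioritised PoC scope (the personas, use-cases, scores and weighting below are invented placeholders):

```python
from collections import defaultdict

# Hypothetical interview output: (persona, use-case, importance 1-5, uses/week)
interviews = [
    ("NOC operator", "correlate alarms to root cause", 5, 200),
    ("Order entry", "qualify and submit a connect order", 4, 50),
    ("Field ops", "receive and close work orders", 4, 120),
    ("NOC operator", "raise and track a trouble ticket", 3, 80),
]

def prioritise(rows, top_n=3):
    """Rank use-cases by importance x frequency and keep the top-n for the PoC."""
    scores = defaultdict(float)
    for _persona, use_case, importance, freq in rows:
        scores[use_case] += importance * freq
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

for use_case, score in prioritise(interviews):
    print(f"{score:6.0f}  {use_case}")
```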

The OSS self-driving vehicle

I was lucky enough to get some time with a friend recently, a friend who’s running a machine-learning network assurance proof-of-concept (PoC).

He’s been really impressed with the results coming out of the PoC. However, one of the really interesting factors he’s been finding is how frequently BAU (business as usual) changes in the OSS data (eg changes in naming conventions, topologies, etc) would impact results. Little changes made by upstream systems effectively invalidated the baselines that the machine-learning engines had keyed in on. Those little changes meant the engine had to re-baseline / re-learn to build back up to previous insight levels. Alternatively, to avoid invalidating the baseline, it would require re-normalising all of the data prior to the identification of BAU changes.
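
As a hedged sketch of that re-normalisation idea (the naming conventions below are invented), mapping raw identifiers to a stable canonical form before baselining means a BAU renaming upstream doesn’t invalidate the learned baseline:

```python
import re

def canonical_device_name(raw: str) -> str:
    """Normalise raw device names to a stable canonical form before baselining.

    Hypothetical naming conventions: 'SYD01-RTR-07' and 'syd1.router07.example.net'
    describe the same router, so both normalise to 'syd1-rtr-07'.
    """
    m = re.match(r"(?i)([a-z]+)0*(\d+)[-.](?:rtr|router)[-.]?0*(\d+)", raw)
    if m:
        site, site_num, unit = m.groups()
        return f"{site.lower()}{int(site_num)}-rtr-{int(unit):02d}"
    return raw.lower()  # fall back rather than drop the record

assert canonical_device_name("SYD01-RTR-07") == "syd1-rtr-07"
assert canonical_device_name("syd1.router07.example.net") == "syd1-rtr-07"
```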

That got me wondering whether DevOps (or any other high-change environment) might actually hinder our attempts to get machine-led assurance optimisation. But more to the point, does constant change (at all levels of a telco business) hold us back from reaching our aim of closed-loop / zero-touch assurance?

Just like the proverbial self-driving car, will we always need someone at the wheel of our OSS, just in case a situation arises that the machines haven’t seen before and/or can’t handle? How far into the future will it be before we have enough trust to take our hands off the OSS wheel and let the machines drive closed-loop processes without observation by us?

Your OSS Justice League

Is it just me, or has there been a proliferation of superhero movies coming out at cinemas lately? Not only that, but movies where teams of superheroes link up to defeat the baddies (eg Deadpool 2, Justice League, etc)?

The thing that strikes me as interesting is that there’s rarely an overlap of super-powers within the team. They all have their different strengths and points of difference. The sum of the parts… blah, blah, blah.

Anyway, I’m curious whether you’ve noticed the same thing as me on OSS projects, that when there are multiple team members with significant skill / experience overlap, the project can bog down in indecision? I’ve noticed this particularly when there are many architects, often super-talented ones, on a project. Instead of getting the benefit of collaboration of great minds, we can end up with too many possibilities (and possibly egos) to work through and the project stagnates.

If you were to hand-pick your all-star cast for your OSS Justice League, just like in the movies, you’d look for a team of differentiated, but hopefully complementary, super-heroes, I assume. But I’m diverting from my main point here.

Each project, just like each formidable foe in the movies, is slightly different and needs slightly different super-powers to tackle it. When selecting a cast for a movie, directors have a global pool to choose from. When selecting a cast for an OSS project, directors have traditionally chosen from their own organisation, possibly with some outside hires to fill the long-term gaps.

With the increasing availability of freelance resources (ie people who aren’t intrinsically tied to carriers or vendors), the proposition of selecting a purpose-built project team of OSS super-heroes is actually beginning to become more possible. I’m wondering how much the gig economy will change the traditional OSS project team model in coming years.

I’d love to hear your thoughts and experiences on this.

Aggregated OSS buying models

Last week we discussed a sell-side co-op business model. Today we’ll look at buy-side co-op models.

In other industries, we hear of buying groups getting great deals through aggregated buying volumes. This is a little harder to achieve with products that are as uniquely customised as OSS. It’s possible that OSS buy-side aggregation could occur for operators that are similar in nature but don’t compete (eg regional operators). Having said that, I’ve yet to see any co-ops formed to gain OSS group-purchase benefits. If you have, I’d love to hear about it.

In OSS, there are three approaches that aren’t exactly co-op buying models but do aggregate the evaluation and buying decision.

The most obvious is for corporations that run multiple carriers under one umbrella, such as Telefonica (see Telefonica’s various OSS / BSS contract notifications here), SingTel (group contracts here), etisalat, etc. There would appear to be benefits in standardising OSS platforms across each of the group companies.

A far less formal co-op buying model I’ve noticed is the social-proof approach. This is where one, typically large, network operator in a region goes through an extensive OSS / BSS evaluation and chooses a vendor. Then there’s a domino effect where other, typically smaller, network operators also buy from the same vendor.

Even less formal again is the use of third-party organisations like Passionate About OSS to assist with a standard vendor selection methodology. The vendors selected aren’t standardised, because each operator’s needs are different, but the product / vendor selection methodology builds on the learnings of past selection processes across multiple operators. The benefit comes in the evaluation and decision frameworks.