Understanding where OSS POCs go wrong

We’ve regularly discussed how RFPs / PoCs are usually an inefficient way of finding new products or partners for your OSS (see this link). They tend to be expensive and time-consuming for all involved, regardless of whether you’re a supplier, buyer or integrator.

The short paper shown below by Peter Willis, the ex-Chief Researcher at BT, provides a really insightful insider’s perspective on why Proofs of Concept go wrong.

Among the points raised in the paper is a shortened list of reasons why PoCs don’t lead to adoption / sales:

  • Failure to demonstrate credible value to the telco
  • The value of the technology or component is not significant enough to give it sufficient priority
  • The PoC was not realistic enough
  • The PoC was not demonstrated to the right stakeholders
  • The technology does not fit into the telco’s operational model
  • The wider telco community has a negative view of the technology
  • Existing (incumbent) suppliers offer their own alternative technology
  • The technology demonstrated in the PoC gets overtaken by another technology

Great insights into a telco’s mindset and challenges from the buyer’s perspective.

It’s great to see that the Telecom Ecosystem Group (TEG) and TM Forum are both expending effort on finding a better way to bring buyers and sellers together.

Re-imagining network planning? New approaches to plan, design & build networks

In a recent article, we posed many ideas about how future OSS solutions might be re-imagined. We looked into how new approaches and technologies like AR (Augmented Reality) might change solutions and ways-of-working in the near future. This article triggered a conversation about next-generation planning systems with a couple of very clever OSS, orchestration and integration experts.

This conversation thread convinced us to take a closer look at this very important field within telco businesses. And it is important: the larger telcos allocate billions to network upgrades and augmentations every year, so an optimised return on that investment is essential.

But here’s the twist. Our Blue Book OSS/BSS Vendor Directory now has well over 500 listings, but none (that we’re currently aware of) produce a solution that caters for the entirety of the planning challenge:

  • New build / greenfield – identifying optimum allocation of capital for return on investment, not just for single network layers / domains, but across all
  • Network change management (MACD)
    • Predicted – based on capacity exhaustion or under-use, upgrade / life-cycle management, in-fill builds, specific events (eg catering for large crowds), etc
    • Unplanned – based on won or lost orders, traffic engineering, ramifications of fault-fix activities, widespread damage such as weather events, temporary builds such as COWs (Cells on Wheels), etc
  • Network performance optimisation – to cater for the changing demands on the network
  • New infrastructure – to evaluate, select and then onboard new technologies, topologies, devices, etc

Nor do any go beyond the planning stage to consider a feedback loop that takes operational data and optimises planning across the Plan, Design, Build, Operate and Maintain phases.

Current state of network / capacity planning tools

Whilst there don’t appear to be all-in-one planning solutions (please leave us a comment to correct us if we’re wrong), there are solutions that contribute to the planning challenge, including:

  1. Traditional infrastructure planning and project management tools for PNI / LNI networks (eg Smallworld, Bentley, Synchronoss, SunVizion, etc)
  2. Automated design tools for generating physical access networks (eg Biarri, Synchronoss, SunVizion, etc). Some only do greenfields, but some are now beginning to cope with brownfields / in-fill designs too
  3. Service design / planning capabilities within SOM / CPQ / etc types of solutions
  4. SDN (Software Defined Network) planning and automation suites such as Rakuten Symphony for mobile networks
  5. Many network performance tools that allow heat-mapping of network topology to show capacity / utilisation of devices, interfaces, gateways, load balancers, proxies, leased links, etc
  6. Orchestration type tools that coordinate automated tasks as well as generating work orders for manual tasks (eg design and build)
  7. Simulation tools of many sorts that allow network / service designs and configurations to be tested prior to implementation
  8. Other tools that are not OSS or telco specific. This includes Geographical Information Systems (GIS), Computer Aided Design (CAD), data science solutions, demographic analyses, regression test suites, etc

Each of these solution sets is comprehensive in its own right. Combining them across all domains, dimensions and possibilities seems like a nearly impossible task – a massive variant-tree. This probably explains the dearth of all-in-one planning solutions.


Opportunities for next-generation network / service planning solutions

Despite the complexity of this challenge, it still leaves us with some really interesting unfulfilled opportunities, including solutions that support:

  • Allocation of capital – at the moment, planning tends to be done with a very broad brush (ie update everything in this exchange area) rather than being more surgical in nature (eg replace individual branches of a network that need optimising). This pulls on the lever of “easiest to plan” rather than “optimal use of capital / time / resources.” A more targeted approach may take up-to-date signals from finance, assurance, demographics, planning, fulfilment, field-worker / skills availability, spares / life-cycle, VIM, etc data feeds to decide the best option at any point in time.
  • Multi-technology mix – like so many other things in OSS, planning tools tend to work best within a single domain (eg fibre access networks), leaving opportunities for generic, cross-domain, multi-layer planning / design
  • Seamless transition from planning, to design, to build instructions – there tend to be a lot of manual steps required to take a design and turn it into instructions for field workers to implement (eg design packs). AR and smart glass technology are likely to completely revolutionise the design-to-build life-cycle in coming years
  • Sophisticated capacity prediction – so far, I’ve really only seen capacity planning in two forms. One, as a blunt instrument, such as performing link exhaustion analysis (eg if capacity is trending towards 80% utilisation threshold, then start planning additional capacity). Two, as a customised data science exercise. Sophisticated prediction modelling and visualisation tools could fulfil the opportunity to be more targeted (looping back to the “allocation of capital” point above)
  • Scenario planning / modelling simulations – as we know, nobody can predict the future, but we can simulate scenarios that might give us hints about what the future will hold. A scenario pre-COVID could’ve been that we’re all forced or choose to WFH
  • Auto-capacity scaling – particularly with SDN, having the ability to turn capacity up/down depending on utilisation in real-time (not to mention pre-planning based on time of day / week / month / special-occasions) across multi-domain networks. We helped design and build a rudimentary threshold-based, multi-domain traffic engineering module for an OSS provider back in 2008, so this capability surely exists in more sophisticated forms today. These days, this might not just consider standard metrics like throughput, utilisation, packet-drops, etc but perhaps even looks at optimisation by protocol / slice / service / etc across much larger network dynamics. But this is more aligned to SON solutions. To bring it back to planning, perhaps an opportunity relates more to policy / intent / guard-rail / shaping management?
  • Dynamic pricing – as a result of the previous point, the ability to scale capacity up/down dynamically also provides the ability to offer dynamic pricing that incentivises preferred customer usage patterns. With increasing focus on sustainability initiatives and energy reduction in the telco sector, smarter capacity allocation is sure to play a part
  • Business continuity – serious issues inevitably arise on every network, so we have to expect the worst. This is no longer a case of only defend and protect; we must also take a detect and recover mindset. Yet we haven’t seen any tools that automate Business Continuity Plans (BCPs) to initiate recovery actions (eg switching entire network or segment profiles / configurations / capacities in certain scenarios) and ensure responses are rapid in crises.
  • Supply Chain Management – with more of the supply chain becoming software-defined, product catalogs can potentially assimilate a long tail of suppliers, allowing telcos to easily build capacity and capabilities (eg via a marketplace of third-party CNFs, VNFs, APIs, etc) and letting telco product teams design sophisticated new offerings based on internal and third-party services (eg CFS / RFS)
  • Auto Option Analysis – planning often involves considering more than one option and then making a choice on which is optimal. This multi-option approach consumes significant effort, with the unsuccessful options discarded, representing lost efficiency. An algorithmic approach to option analysis has the potential to consider “all” options and auto-generate a business case with options for consideration and approval (eg optimal design for costs, optimal design for implementation time, optimal reduction in risk, optimal minimisation of infrastructure, etc).
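The “blunt instrument” link-exhaustion analysis mentioned above can be illustrated with a tiny sketch: fit a linear trend to utilisation samples and estimate when the 80% threshold will be crossed. All figures below are hypothetical, and real planning tools would use far more sophisticated models than a straight line:

```python
# Minimal sketch of threshold-based link-exhaustion forecasting.
# Assumes evenly spaced utilisation samples (eg weekly averages, expressed
# as a fraction of link capacity). All figures are hypothetical.

def periods_until_threshold(samples, threshold=0.8):
    """Fit a straight line to utilisation samples and estimate how many
    sample periods remain until the threshold is crossed."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, samples)) / denom
    if slope <= 0:
        return None  # utilisation flat or falling: no exhaustion predicted
    intercept = mean_y - slope * mean_x
    crossing = (threshold - intercept) / slope  # x where trend hits threshold
    return max(0.0, crossing - (n - 1))        # periods beyond last sample

# Hypothetical link trending upward by ~2% per week from 60% utilisation
utilisation = [0.60, 0.62, 0.64, 0.66, 0.68]
print(periods_until_threshold(utilisation))  # roughly 6 periods to 80%
```

Even this naive approach hints at how an algorithmic planner could rank links by time-to-exhaustion and target capital where it’s needed soonest, rather than upgrading whole exchange areas at once.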

Summarising the planning opportunity

Network and capacity planning is a challenging, but vitally important aspect of running a communications network. Whilst there are existing tools and processes to support planning activities, there doesn’t appear to be a comprehensive, all-in-one planning solution. The complexity of building one to support many different client requirements appears to be a significant impediment to doing so.

Despite this, we have identified a number of opportunities for OSS solutions to help solve the planning challenge in new and innovative ways. But this list is by no means exhaustive.

As always, we’d love to hear your thoughts and suggestions. What solutions have we missed? What could be done differently? How would you tackle the planning challenge in innovative ways? Leave us your comment below.


What Exactly Is Real-time? How Valuable Is It?

These are two questions I’ve often pondered in the context of OSS / telco data, but never had a definitive answer to. I just spotted a new report from CEBR on LinkedIn that helps to provide quantification.

What Exactly Is Real-time?

When it comes to managing large, complex networks, having access to real-time data is a non-negotiable. I’ve always found it interesting to hear what people consider “real-time” means to them. For some, real-time is measured in minutes, for others it’s measured in milliseconds. As shown below, the CEBR report indicates that nearly half of all businesses now expect their “real-time” telemetry to be measured in seconds or milliseconds, up significantly from only a third last year. 

This is an interesting result because most telcos I’m familiar with still get their telemetry in 5 minute, or worse, 15 minute batch intervals. And that’s just the cadence at the source (eg devices and/or EMS / NMS). That doesn’t factor in the speed of the ETL (extract, transform, load) pipelines or the processing / visualisation steps. By the time it passes through its entire journey, data can take an hour or more before any person or system can perform any action on that data.
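To make that end-to-end journey concrete, here’s a back-of-envelope latency budget. Every stage figure below is a hypothetical assumption (not from the CEBR report), but the shape of the calculation matches the batch-interval-plus-pipeline pattern described above:

```python
# Back-of-envelope telemetry latency budget (all figures hypothetical).
# Worst case: a sample lands just after a batch window opens, so it waits
# the full polling interval before being exported at all.

stages_seconds = {
    "source batch interval (15 min polling)": 15 * 60,
    "EMS/NMS export & collection":            5 * 60,
    "ETL pipeline (extract/transform/load)":  20 * 60,
    "processing & visualisation refresh":     10 * 60,
}

total = sum(stages_seconds.values())
for stage, secs in stages_seconds.items():
    print(f"{stage}: {secs // 60} min")
print(f"worst-case event-to-action delay: {total / 60:.0f} min")  # 50 min
```

With even modestly pessimistic stage estimates, the total climbs past an hour – which is why “real-time” claims need to be measured across the whole pipeline, not just at the source.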

With consuming systems like AIOps tools, which attempt to make predictive recommendations, the traditional understanding of “real-time” within telco just isn’t up to speed (sorry about the bad pun there). An hour from event to action can barely be considered real-time.

I’m curious to get your thoughts. At your telco (or clients if you’re at a vendor / integrator / supplier), what are the bottleneck/s for achieving faster telemetry? [note that there’s additional content below the survey]

What are the Impacts of Real-time Data (Anomaly detection / reduction indicators)?

The CEBR report also helps to quantify some of the impacts of having data that’s not quite real-time, as indicated in the following four graphs.

These figures indicate telemetry is arriving on the order of seconds. This is much faster than I had expected.

Interestingly, 100% of telcos in the UK saw a reduction in costs after implementing faster real-time data pipelines (where anomalies result in at least a moderate amount of loss).

How Valuable Is Real-time Data?

I have a foreboding, but possibly mistaken, sense that the telemetry data at many telcos just isn’t fast enough to provide the speed of insights that ops teams need. If so, what is the opportunity cost? How valuable is data that does arrive in real-time? Or to ask another way, how costly is data that is not-quite-real-time enough?

The CEBR report helps to answer these questions too, via the following graphs.

A revenue increase of nearly $300M as a result of real-time data represents a significant gain, with most projected impact coming from America.

I should caveat this by saying the report doesn’t show the methodology for how these numbers are calculated. And having been inside the veil and seen the lack of sophistication behind some analyst estimates, I’m always a bit skeptical about the veracity of these types of numbers.

Despite these question marks, it still seems likely that faster data results in faster insights. 

Regardless, when paired with consuming systems like AIOps, faster insights should translate into tangible benefits (eg being able to fix before fail or resolve via workarounds like re-routing to avoid SLA impacts or swaying a customer on the verge of churning), but also intangible benefits (eg advance notifications to customers).

Faster insights should directly relate to improved customer experiences, as reflected by 9 out of 10 customers suggesting moderate or significant improvement after implementing real-time data initiatives.

I’d love to hear your thoughts and experiences. Has speeding up your processing rates resulted in significant tangible / intangible benefits at your organisation or customers? Please leave us a comment below.

The most comprehensive analysis I’ve seen of the OSS / BSS market

I’m delighted to share with you that Houlihan Lokey, a global investment bank, has just launched the most comprehensive market analysis I’ve seen covering connectivity software including OSS / BSS. I’m possibly biased though as I’ve been lucky enough to have played a small part in its direction and content.

Most of the reports that I’ve seen from this sector cover 20-30 organisations at best. This Houlihan Lokey report references hundreds of leading vendors. But there’s one other key difference about this report.

Whereas other reports are designed to help network operators identify which OSS/BSS vendors provide suitable capabilities for their needs, this report is designed to inform investors about the opportunities that exist at the convergence of software and telecom. In essence, software enabling telecoms and, more generally, connectivity (oftentimes for non-telco sectors, such as enterprise, energy, healthcare, and media). That said, the vendor segmentation provided in the report still aids with matching vendors to network operators’ functional needs.

As a highly fragmented market with well over 400 participating vendors, there is ample room for investors to influence and optimise the use of capital in the ecosystem as well as facilitating new innovation models.

The report does a deep-dive market analysis with the following five important sections:

  1. Market Dynamics
  2. Software Transformation Case Studies
  3. Ways Hyperscale Cloud Is Enabling Innovation
  4. Subsector Overviews
  5. Industry Insights

At this stage, the full report is being made available to selected companies and investor groups. If you’re interested in reading my contribution to Houlihan Lokey’s report, I would be delighted to connect and share more. And, if you’re evaluating M&A or capital raising opportunities, I’d be happy to introduce you to my friends at Houlihan Lokey.

[Update: HL has now made the exec summary of the report available for download and the full report available upon request – https://hl.com/insights/insights-article/?id=17179875243 ]

What if your career in telco were to end tomorrow?

Are you a dyed-in-the-wool telco person? Has your whole career, or a majority of it, been built around the telco industry?

If your career in telco were to cease tomorrow, what would you do next? 

As we mentioned in our previous article on “Burning Platforms”, the traditional telco business is trending towards an asymptote (see the black curve in the diagram below). But today’s post looks at the future of telco even closer to home – not the industry and its organisations, but the careers of all of us people in telco.

S-curve image courtesy of strategictoolkits.com

Telco networks are still vitally important, but the profit engine of traditional telco is expiring. With it goes innovation, projects, jobs. All of those things are facilitated by profitability. Many telcos in developed nations have long since moved into a financial engineering, asset selling and incremental innovation phase. Those aren’t indicators of rapid improvement on an S curve. They’re indicators of an asymptote.

Alternatively, what if there were a doomsday event that disrupts the market, slashing the relevance of traditional telco models and/or networks? 

Or what if you work for a supplier that is heavily dependent on telcos (or your role at the supplier is dependent on telco)? What if your telco customers’ access to capital dries up after they embark on massive 5G roll-outs, re-tooling and re-skilling that doesn’t bear fruit? Does that impact your employer’s revenues and the viability of your roles / projects?

If you’re a battle-worn telco person, where do you go next? What’s your red curve? What skills, assets, connections, etc will carry you into the red curve? Don’t have a red curve? Are you already on the lookout to find one?

I’ve seen many telco experts transition to roles with hyperscalers recently. They’ve sought out their red curve by investing in cloud awareness / skills and making the leap to telco cloud.

But cloud is far from being the only red curve candidate. Telco has precipitated a Cambrian explosion of red curves over many decades, yet hasn’t really been able to find its own of late.

Cloud is telco’s Kodak moment. As George Glass mentioned in his podcast with us, he was able to spot the opportunity of cloud at BT around the turn of the century, but the hyperscalers have usurped the telcos with that business / technology model.

The internet red curve was facilitated by telco and partly leveraged (via Internet connectivity), yet other opportunities were missed. Many telcos are now trying to transform their DNA to software but aren’t really proving to be as good at it as other software-first organisations (possibly for reasons mentioned in Bert Hubert’s video cited in the Burning Platform post).

There are many better red curve candidates in my opinion, ones that are better suited to the many strengths that telcos still retain.

But my red curve/s, the ones I’ve been investing in over the last few years, aren’t the same as yours. You have different strengths, skills, connections and capabilities. The question remains, “what would you do next if your telco career were to cease tomorrow?”

Just as the burning platform post suggested that telco needs a quest to find its red curve/s, so must we telco/OSS/BSS experts find our own jumping-off points. The burning platform is as much for us as it is for the telco industry.

Take a look at the second video in the Burning Platforms post. It provides some great suggestions about embarking on a pioneering quest to help find your red curve/s.

How to get Telco off its Burning Platform

I have a couple of really interesting videos to share with you today. I think they’re both brilliant, though perhaps I’m biased because they largely reinforce thoughts I already share with the stars of each video. Well, reinforce, but they also significantly expand the thinking with some enlightening dots that I hadn’t previously connected.

Nobody is going out on a limb to suggest that our lives are increasingly impacted by digital experiences. In the storefront, our loungerooms, our games rooms, on the move anywhere, these experiences are enabled by the digital devices we interact with and communicate through. These interactions and communications in turn are largely supported by telecommunications networks.

So, telcos clearly play a crucial role in our modern lifestyle. Arguably more so than ever before. Yet telcos, generally speaking, have also arguably never been more threatened in terms of their business viability. Profitability is down (generally), but costs are increasing and so are risks.

The traditional telco business model is a burning platform.

The Ernest Hemingway quote springs to mind – “How did you go bankrupt?” “Two ways. Gradually, then suddenly.”

Yet the need for comms networks isn’t going away any time soon. In fact, their importance is only escalating. So, between these two great opposing forces, it’s almost inevitable that change of significant magnitude is ahead. Like tectonic plates moving ever so slowly, but ultimately grinding and colliding, eventually causing a release of force of enormous magnitude.

Fault lines are forming, smaller tremors have already been felt. But the big quakes are surely coming.

The OSS and BSS industry is co-dependent with the telco industry. Dependent upon the success of service providers, but also influential in their relative effectiveness and profitability.

I’ve increasingly seen telco and OSS as nearing (passing?) the “next wave” stage of the S curve:

S-curve image courtesy of strategictoolkits.com

The question hangs in the air – what does the next wave look like? What skills, talents, networks, technologies, etc can be carried forward into the next opportunity for growth?

The success of OSS and telco are tightly coupled but also have their own identities. Do they continue to surf the same “next wave” (red curve above) together, or does each industry find disparate waves?

As referred to in a recent article about the great telco tower sell-off, Bert Hubert has published a brilliant video describing how telcos have outsourced and delegated their technological edge. They’re in the phase of selling off assets and divesting skills to support the financial engineering of their organisations (I’m generalising wildly here, as not all telcos fall into this category, but hopefully you’re willing to grant me this generalisation).

The video is long, but well worth the watch to give you a sense of where telcos are on the black curve above.

Then our second inspiring video stars Jason Fox and Aidan McCullen. It isn’t specifically about telco at all, but it does ask us to dial up our constructive discontent and embark on a pioneering quest. It takes us on the journey of finding the red curve that will provide us with a leaping-off point/s.

There are so many quotable ideas in this video (which is embedded below), but I’ll summarise with only a few:

  • Organisations are good at creating new capabilities but are hopeless at destroying and removing the detritus that just weighs the organisation down (refer to my earlier posts on inertia and subtraction projects)
  • Rather than just doing things the way they’ve always been done, they argue that 20+% of time needs to be engaged in more meaningful and thoughtful work
  • Senior leaders that are in a position to change the direction of an organisation are so busy dealing with short time slices and empire protection that there’s no time remaining for extensive reflection and planning. No time to cultivate emergent and divergent thinking
  • There’s a fixation on numbers and arbitrary benchmarks that lead to incrementalism. There’s little time devoted to contemplation, reflection or divergence, nor for embarking on a journey of learning and change
  • At approximately 31:00 there’s a discussion about the kraken of doom, a metaphor for disruption. It’s not a pot of gold, but a kraken at the end of the rainbow (or in our case, the declining slope on the black S curve for traditional telco models)
  • New story arcs (red S curves) need time and space where new thinking can flourish, and this isn’t just restricted to an occasional off-site strategy meeting
  • A quest is a journey to find viable alternative options beyond the default. A quest goes into the unknown to find answers that are exactly that – not yet known. Beyond the incremental. Every question (to the answers we seek) begins with a quest (a seeking of knowledge)
  • Leaders need to have a quiver of strategic options to jump to if/when the right conditions manifest. Most leaders don’t have the time to think / envision, let alone develop any quiver of options.
    (Currently this is largely outsourced too, by delegating to consultants or vendors that may have biases toward benefits for their own organisations. Consultants typically don’t have the same level of tribal knowledge that in-house leaders do, but they do bring stimulative ideas from broader industry involvement. However, this also poses the question: do consultants even have the time to think transformatively themselves, given that their focus is typically on utilisation rather than time out for introspection?)
  • Often leaders do get the call to adventure (such as seeing an inspiring idea at a conference), but generally don’t have the time to heed the call, and get too busy, eventually forgetting about the transformative call to action
    (Ponder also whether senior execs are typically built with the tools to accept and lead this type of quest. Do their skills and personality thrive on a journey of discovery of the unknown? Ponder also whether the explorative, risk-taking types tend to get killed-off lower in the org-pyramid by the aggressive politicians that instead rise to power in some organisations?)
  • A change-maker is lonely. The ones who do it best are the collaborators who can bring others on the journey. But Jason and Aidan ask you to recall the iconic scene in Jerry Maguire… when Tom Cruise (a leader in a large sports agency) asks, “who’s coming with me?” when forging out on his own. Feel the cringe when watching the excerpt provided!
  • The curse of productivity and efficiency doesn’t allow for exploration or growth or conducting experiments

It makes me wonder whether leaders should be encouraged to take individual sabbaticals to explore and quest. A sabbatical would give their 2IC the chance to act in the role, providing for that person’s growth while building organisational continuity protection. The sabbatical unlocks disruption. I imagine this sabbatical (of a month, for example) would be taken as an individual rather than as a collective, such as at a lengthy off-site.

Even if no specific outcome is achieved whilst on sabbatical, the leader goes back to their normal roles with greater savvy, awareness and forward thinking to project into a strategic future. Their journey of enlightenment is transformative (the type of transformation that should precede any digital transformation by the way).

Here’s the second video. Again it’s lengthy, but I hope you find it as inspiring as I did:

As an aside, this video partially explains why I left big-telco and now work from the outside in to these organisations. I loved the work on big, complex, impactful OSS suites. But I jumped off because I needed time away from meetings and short-term incrementalism to ruminate. I needed to claw back the time for investigating jumping off points onto the red S-curve. Time to discover parallel / overlapping universes of knowledge.

I’d love to hear your thoughts on the question posed earlier – What skills, talents, networks, technologies, etc can be carried forward into the next opportunity for growth (from the perspective of telco and/or OSS)?

What jumping off points do you think exist for telco and/or OSS? Leave us a comment below.

I’ve been exploring a range of them over the last couple of years and am excited by a number of the opportunities that await.

I’ll also leave you with the concept of disruptive models that have the potential to impact traditional telco (and OSS) sooner than you might think.

The great telco tower sell-off

You’ve probably noticed the great tower sell-off trend that’s underway (see this article by Robert Clark as an example). Actually, I should call it the sell-off / lease-back trend because carriers are selling their towers, but leasing back space on the towers for their antennae, radio units, etc.

Let’s look at this trend through the lens of two of my favourite OSS books (that mostly have little to do with OSS – not directly at least):

  • Rich Dad, Poor Dad by Robert Kiyosaki, and
  • The Idea Factory by Jon Gertner

The telcos are getting some fantastic numbers for the sale (or partial sale) of their towers, as Robert’s article identifies. Those are some seriously big numbers going back into the telcos, presumably for reinvestment.

Let’s look at this through the lens of Rich Dad first (quotes below sourced from here).

The number one rule [Rich Dad] taught me was: “You must learn the difference between an asset and a liability—and buy assets.”

The simple definition of an asset is something that puts money in your pocket.

The simple definition of a liability is something that takes money out of your pocket.

Towers, whilst requiring maintenance, would still seem to be an asset by this definition. They are often leased out to other telcos as well as aiding the delivery of services by the owning telco, thus putting money into their pockets. However, once sold and leased back, they become a liability, requiring money to be paid to the new asset owner.

Now I’m clearly more of an Engineer than an Accountant, but it seems fairly clear that tower sales -> lease-back is a selling of assets, acquisition of liabilities, thus contradicting Rich Dad’s number one rule. But that’s okay if the sale of one asset (eg towers or data centres) allows for the purchase (or creation) of other more valuable assets.

[Just an aside here, but I also assume the sale / lease-back models factor in attractive future leasing prices for the sellers for some period of time, such as 7-10 years. But I also wonder whether the lease-back models might become a little more extortionate after the initial contract period. It’s not like the telco can easily shift all their infrastructure off the now-leased towers, so they’re somewhat trapped I would’ve thought… But I have no actual insights into what these contracts look like, so it’s merely conjecture here].

Now let’s take a look through the lens of “The Idea Factory” next. This brilliant book tells the story about how Bell Labs (what was Bell, then AT&T’s research and development arm, now part of Nokia) played a crucial role in developing many of the primary innovations (transistors, lasers plus optical fibres, satellites, microwave, OSS, Claude Shannon’s Information Theory, Unix, various programming languages, solar cells and much more) we rely on today across telco and almost every other industry. These advances also paved the way for the rise of the Silicon Valley innovation engine of today.

Historically, this primary R&D gave telcos the opportunity to create valuable assets. However, most telcos divested their R&D arms years ago. They’ve also delegated most engineering and innovation to equipment / software vendors and via outsourcing agencies. I’d argue that most telcos don’t have the critical mass of engineering that allow them to create many assets anymore. They mostly only have the option of buying assets now. But we’ll come back to this a little later.

The cash raised from the sale of towers will undoubtedly be re-invested across various initiatives. Perhaps network extensions (eg 5G roll-outs), more towers (eg small-cell sites), or even OSS/BSS uplift (to cope with next-generation networks and services), amongst other things. [BTW. I’m hoping the funds are for re-investment, not shareholder dividends and the like 🙂 ]

Wearing my OSS-coloured glasses (as usual), I’d like to look at the OSS create / buy decision amongst the re-investment. But more specifically, whether OSS investment can be turned into a new and valuable asset.

In most cases, OSS are seen as a cost centre. Therefore a liability. They take money out of a carrier’s pockets. Naturally, I’ll always argue that OSS can be a revenue-generator (as you can see in more depth in this article).

But in this case, what is the asset? Is it the network? The services that the network carries? The OSS/BSS that brings the two together? Is it any one of these things that puts money in the pockets of carriers or all three working cohesively? I’d love to hear a CFO’s perspective here  😉

However, the thought I’d like to leave you with from today’s article is how carriers can actually create OSS that are definitively assets.

I’d like to show three examples:

The first is to create an OSS that enterprise clients are willing to pay money for. That is, an OSSaaS (OSS as a Service), where a carrier sells OSS-style services to enterprise clients to help run the clients’ own private networks.

The second is to create an OSS for your own purposes that you also sell to other carriers, like Rakuten is currently doing. [Note that Rakuten is largely doing this with in-house, rather than outsourced, expertise. It is also buying up companies like InnoEye and Robin.io to bring other vital IP / engineering of their solution in-house.]

The third is the NaaS (Network as a Service) model, where the OSS/BSS puts an API wrapper around the network (and possibly other services) to offer directly to external clients for consumption (and usually for internal consumption as well).

However, all of these models require some serious engineering. They require an investment of effort to create an asset. Do modern carriers still have the appetite to create assets like these?

Wayne Dyer coined the phrase, “It’s never crowded along the extra mile.” Rakuten is one of the few that is walking the extra mile with their OSS (and network), in an attempt to turn it into a true asset – one that directly puts money in their pocket. They’re investing heavily to make it so. How valuable will this asset become? Time will tell.

I’d love to hear your thoughts on this. For example, are there other OSS asset models you can think of that I’ve overlooked?

Hat tips to James and Bert for seeds of thought that inspired this article. More articles to follow on from a brilliant video of Bert’s in future  😉


The Network Automation Journey – It’s about Culture

I love this article by Tim Fiola, entitled “A Note to Management About the Network Automation Journey – It’s About Culture.” It does a brilliant job of explaining the real challenges around network automation. I’m going to riff off it a little bit today, drawing comparisons with OSS/BSS (which do happen to have a role to play in network automation).

He starts off with this gem:

“Most of the blog posts and information about network automation focus on the technical details and are directed toward the network engineers. There is more to the network automation transformation, however, than the technical aspects. In fact, one of the most important aspects of making the network automation transformation ‘stick’ is the cultural component. Network automation is a journey, and it does require a cultural shift throughout the organization, including engineering, management, and human resources (HR). Management needs to understand this cultural shift, so it can better enable the transformation and make it permanent.”

So true for network automation and OSS/BSS alike. There does tend to be a focus on the tech. But I’ve seen tech that’s perfect yet unusable because the culture / change-management aspects haven’t been incorporated into the transformation program. There’s a saying in golf, “Driving is for show. Putting is for dough [cash].” If we paraphrase this slightly for the OSS/BSS/automation world, it becomes, “Tech is for show. Change/culture is for dough.”

Tim continues:

“The technology part of an automated infrastructure is a solved problem: we have the technology. Ultimately, it’s the cultural component that makes network automation successful in the long term.”

I’m not sure that the automated infrastructure problem is 100% solved, especially for physical infrastructure, but his point remains valid.

“This cultural shift, at its heart, means changing the expectations around how many people it should take to run a network, along with the skill sets the people have. In order for the automation transformation to keep long-term momentum, it must be supported by management: everyone, from the first-line manager, executive management, and all the way to Human Resources, must play a part to sustain the appropriate culture.”

I’m perplexed by the conflicting messaging that often exists around network automation. Openly, there is the message of Zero Touch Assurance (ZTA) and the head-count reductions that underpin automation business cases. However, as Tim suggests, there does need to be a change in expectations about how the network is run. The conflicting, perhaps even subliminal, counter-message is that telcos are often built around an “empire-building” mindset, where a reduction in staff leads to a reduction in status for managers within the organisation. RFPs requesting OSS/BSS tools still seem to include requirements that assume a large team will run them and the network; they’re rarely framed to compare vendors on operational efficiency.

But I digress. Tim instead cleverly pivots to aligning automated infrastructure to corporate objectives and cashflow, as follows:

“Many firms have a high-level corporate objective to increase cash flow. A business is interested in executing as many value-generating workflows as possible because those workflows produce value and ultimately revenue. An automated infrastructure directly supports this high-level objective by allowing the business to:

  • Execute revenue-generating workflows quicker, which brings in more revenue sooner.
    • This interval, between when something is sold versus when the revenue is realized, is often called the quote-to-cash interval.
    • Shrinking the quote-to-cash interval helps cashflow.
  • Execute workflows with less friction, which leads to more workflows being executed, which leads to greater throughput and increased revenue.

Companies care about executing workflows quickly and without friction; when they do this, they minimize the quote-to-cash interval and increase the total amount of throughput, which increases cashflow.”

Exactly! The whole idea of investments in OSS, BSS and network automations is about executing workflows quickly and without friction. But they’re also about making things more repeatable, reliable and enduring, as suggested by Tim as follows:

“Worker turnover happens everywhere and all the time. In the worst case, when an employee leaves, all their knowledge leaves with them.

With an automated infrastructure, even when a person leaves, they leave behind a lot of their knowledge, enshrined in the automation infrastructure.”

Speaking of employees leaving,

“Another common high-level corporate objective is employee retention and satisfaction. For a moment, imagine your network engineers freed from high-volume, low-value intensive tasks. For example, many highly skilled network engineers are often relegated to perform tasks such as

  • O/S upgrades
  • [Etc deleted for brevity]

These types of tasks are well below the engineers’ skill set, but network engineers are often called upon to perform these tasks simply because they understand the technology and full context around the task. In addition to being below the skill set of a network engineer, a lot of these tasks also happen to be tasks that machines are great at.

Relieving your network engineers from repetitive, high-volume tasks that are below their skill set will improve employee satisfaction.

An automated infrastructure can perform these tasks and give engineers the time to use their high-level skills. This benefits the company because their employees are much happier; it also benefits the engineers because it allows them to exercise high-level skills that add more value.”

This shift to staff only performing higher-value activities should be an absolute objective of any investment in OSS, BSS and network automation. However, the skills, and the way they’re measured, change appreciably too:

“One of the most striking examples of daily tasks changing in an automated environment is configuration management: instead of manually configuring devices at the CLI, network engineers will transition to maintaining configuration templates. This is a great example of the power of abstraction: instead of trying to manage the configs on hundreds or even thousands of devices, the engineer will design and maintain a relatively small amount of configuration templates, and then use those templates to deploy consistent configurations to the many devices. So, your expectations around how your engineers spend their time will also have to change:

  • They will spend time on tasks such as templating, coding, etc. to automate away low-value tasks
  • They will spend more time on higher-level network engineering”
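Tim’s CLI-to-template point can be sketched in a few lines. Here’s a minimal illustration using Python’s standard-library string.Template; the device names, parameters and CLI dialect are hypothetical examples rather than any specific vendor’s syntax:

```python
from string import Template

# One interface-config template replaces hand-typed CLI on every device.
# Device names, parameters and the CLI dialect below are hypothetical examples.
IFACE_TEMPLATE = Template(
    "interface $iface\n"
    " description $descr\n"
    " ip address $ip $mask\n"
    " no shutdown"
)

devices = [
    {"iface": "GigabitEthernet0/1", "descr": "uplink-to-core",
     "ip": "10.0.0.1", "mask": "255.255.255.252"},
    {"iface": "GigabitEthernet0/2", "descr": "uplink-to-edge",
     "ip": "10.0.0.5", "mask": "255.255.255.252"},
]

# The engineer maintains one template; the tooling renders a consistent
# config per device instead of someone typing each one at the CLI.
configs = [IFACE_TEMPLATE.substitute(d) for d in devices]
for cfg in configs:
    print(cfg)
```

In practice this is what tools like Ansible or Jinja2-based pipelines do at scale, but the principle is the same: the engineer’s job shifts from typing configs to designing and maintaining the template.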

But some engineers will strongly resist change, as per our earlier reference to skilful execution of change management initiatives. For some engineers, their expertise with CLIs is their one-wood (to keep the golfing analogy going), their primary sense of corporate self-worth. But as Tim says,

“The CLI-to-template transition is one of many cultural shifts that comes along with an automated network infrastructure. As with any cultural shift, there will likely be resistance among some engineers who see this as a threat to their jobs, skill sets, certifications, etc…

It will be important to understand that this is a huge change for your engineering staff, so you will have to be patient and educate them on why these changes are needed and how they benefit the engineers. In addition, it may be necessary to provide multiple opportunities for technical training so, as fears and resistance subside, the engineers have the chance to get on board and learn how to operate in the new environment.”

Like I said, great article from Tim. Kudos to him for taking the time to share his thoughts with us all in his article. Please jump over via this link and check it out from cover to cover.

The Most Exciting OSS/BSS Innovations of 2022


We are currently living through a revolution for the OSS/BSS industry and the customers that we serve.

It’s not a question of if we need to innovate, but by how much and in what ways.

Change is surrounding us and impacting us in profound ways, triggered by new technologies, business models, design & delivery techniques, customer expectations and so much more.

Our why, the fundamental reason for OSS and BSS to exist, has remained relatively unchanged for decades. We’ve advanced so far, yet we are a long way from perfecting the tools and techniques required to operationalise service provider networks.

In people like you, we have an incredibly smart group of experts tackling this challenge every day, seeking new ways to unlock the many problems that OSS and BSS aim to resolve.

In this report, you’ll find 25 solutions that are unleashing bold ideas to empower change and solve problems old and new. 

I’m thrilled to present these solutions in the hope that they’ll excite and inspire you like they have me.

Click on the image below to download the report.

Or click the link – https://passionateaboutoss.com/wp-content/uploads/2022/03/25-Most-Exciting-Innovations-in-OSS_BSS-2022_FINAL1.pdf 

Vendors included, in no preferential order:

  1. 2operate
  2. ActivePort
  3. ADVA
  4. AN10
  5. Anodot
  6. Appearition / Air Inspect Australia
  7. Aria Networks
  8. Avanseus
  9. Bentley
  10. Cisco
  11. CSG
  12. Cubro
  13. DFG Consulting
  14. EnterpriseWeb
  15. Ericsson / Nvidia
  16. InfoVista
  17. KX
  18. Mavenir
  19. Neo4j
  20. Netcracker
  21. Rakuten Symphony
  22. Techsee
  23. Trendspek
  24. Twinkler
  25. Zeetta

If you enjoy this report, be sure to subscribe via the form below and receive notification of future reports.


What is an OSStracod?

Back on my first OSS project – all the way back, deep into the annals of time – the year 2000 to be exact, I was lucky enough to work with a really tight delivery team. We all lived in the same hotel in Taiwan and took taxis to work together. Each morning I’d read Dilbert in the newspaper. It seemed like every single day’s cartoon was based on our team’s recent experiences on that project. Almost every episode was shared with my taxi-mates. I’m sure there were many confused taxi drivers in Taipei who were left wondering why a bunch of crazy Aussies had burst into tears of laughter reading the cartoon section of the paper.

The point is, Dilbert was never about OSS, yet always seemed relevant. The same goes for Seth Godin. As a renowned marketer, he probably has no awareness of OSS, yet his articles are often so prescient for our industry. I often borrow his quotes or article snippets. Today I’m going to borrow an entire article below (you can find it here):

The ostracod is extinct. Over millions of years, with good reasons at every step, it evolved to become the creature it was.

And when we add up all of those little steps, we end up with a creature that was no longer fit for its environment.

Organizations develop like this. So do work practices, cultural systems and “the way we do things around here.”

I’m sure there was a really good reason twenty years ago for all the steps that are now involved in the thing you do right now, but your competitor, the one who is starting from scratch, is skipping most of them.

Every day we get a new chance to begin again. And if you don’t, someone else will.

I’m currently working on two projects relating to inventories of the future and another report that’s due imminently about exciting innovations in the OSS / BSS space.

Inventory is our OSStracod. Inventory solutions have evolved over the last million years (or so it seems), with billions of developer hours invested as little steps. The little decisions made twenty years ago are baked into the solutions we work with today. Our OSStracod has two familial branches – the vendor inventory solutions and the service provider inventory stacks.

Both branches have long and distinguished evolutionary histories. The accumulation of many little steps. However, in many cases, they are creatures that are struggling to fit into a changing environment. In addition, as we know, inventory transformations can be incredibly challenging, to the point that they’re scary to even contemplate.

Some people have even told me that inventory solutions are no longer relevant, that virtual infrastructure managers have abstracted away the need for these hard-to-perfect beasts. I’ve countered that they do remain very relevant.

Yet we stand at a real inflection point. Inventories of the past were built upon relational databases to manage networks that were largely “nailed up.” Inventories of the future will build upon the strengths of newer database models like graph and time-series, whilst also managing highly dynamic networks of greater size and scale.
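To illustrate why a graph model suits inventory, here’s a toy sketch in plain Python. The resource names and dependency edges are entirely hypothetical; a real inventory would use a graph database (eg Neo4j, one of the vendors in the report above) where this walk becomes a one-line path query:

```python
from collections import deque

# Hypothetical inventory modelled as a graph: keys are resources/services,
# values are the things that directly depend on them.
depends_on_me = {
    "fibre-segment-7": ["node-A"],
    "node-A": ["service-101", "service-102"],
    "service-101": [],
    "service-102": ["enterprise-vpn-9"],
    "enterprise-vpn-9": [],
}

def impacted(root):
    """Breadth-first walk to find everything impacted by a failed resource."""
    seen, queue = set(), deque([root])
    while queue:
        for dependant in depends_on_me.get(queue.popleft(), []):
            if dependant not in seen:
                seen.add(dependant)
                queue.append(dependant)
    return seen

# A cut on one fibre segment cascades to nodes, services and customer VPNs.
print(impacted("fibre-segment-7"))
```

This kind of service-impact traversal is awkward to express over a relational schema of join tables, but is the natural operation on a graph model, which is part of the inflection point described above.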

And these are only the tip of the iceberg of architectural advantages awaiting a competitor starting from scratch today. Is today the day you decide to begin again, or do you continue to make incremental evolutions to your OSStracod and build on its proud ancestry?

I don’t envy anyone having to make that decision, so let’s close with a relevant Dilbert cartoon:

The Secret Truth About OSS Innovation Limitations

It’s said that the best way to sell hardware is to have compelling software and the best way to sell network services is to have compelling information to share.

In the halcyon years for the telcos, they had both the network services and the singular mechanism for sharing compelling information. Voice / telephony was THE “compelling information sharing platform (CISP)”.

These days, compelling information is ultimately shared via network services (as data streams), but the CISP mechanisms / companies are far more diverse (eg voice & messaging like Skype, video like YouTube, social like Insta, publishing like Medium, conferencing like Zoom, business tools like hyperscaler platforms, etc, etc). The CISP is now a very long tail of platform / service offerings. The telcos lost the CISP dominance long ago.

In the halcyon years, customers joined the telcos not for their network services but for the CISP. This remains true today. Clients seek out the CISP providers and (generally) hope to get cheap/reliable network services merely as the delivery mechanism.

Our OSS/BSS (generally) only support the network services. If we do things right, we provide a competitive advantage for cheap/reliable delivery of network services. But are we too constrained in this thinking?

We’re currently creating a new report, “The Most Exciting OSS/BSS Innovations for 2022” (due for release in a couple of weeks, with more info to come). This process has stimulated thinking into what exactly is innovation in terms of OSS / BSS and what problems need to be solved. It’s not just limited to innovation in terms of cutting-edge products or exceptional talent. The report looks into innovation across:

  • DESIGN – usability, user interfaces
  • MARKETS – profit models, partnerships, structures, ecosystems, channels
  • TECHNIQUES – processes, delivery models, customer engagement
  • TECHNOLOGY – products, product performance, services, integrations

The challenge for telcos and OSS/BSS vendors alike is that we have the opportunity to think outside the box of providing platforms that support network services. How do we better tap into and/or better facilitate the CISPs? Do we simply target the big CISPs or do we target the entire long tail of vendors / products?

The OSS/BSS market is highly fragmented. There are 400+ vendors in our Blue Book OSS/BSS Vendor Directory. There’s a lot of overlapping functionality amongst this list. Lots of innovation exists, but usually only in incremental improvements.

I can’t help but think that the next big innovations in OSS/BSS/telco, the radical or exponential improvements, will start with thinking about solving problems for / with the broader ecosystem in which we operate. The partners. The adjacencies.

We’d love to hear your suggestions for innovations we must consider for our report.

The Mysterious Sector 4 in a Telco IT Stack

When it comes to driving efficiency and profitability in a service provider’s business, I feel there are three key pillars to consider (aside from strategic factors of course), as follows:

  1. The IT stack, led by the OSS/BSS
  2. The networks and
  3. The field services 

The networks are vital. They are effectively each organisation’s product because connectivity across the network is what customers are paying for. They must be operating effectively to attract and retain paying customers.

The field services are vital. They build and maintain the networks, ensuring there is a product to sell to customers.

The OSS/BSS are arguably even more vital. Some may argue that they only provide a support role (the middle S in OSS and in BSS would even suggest so!). They’re more than that. OSS/BSS are the Profit Engine of a service provider. 

But let’s take a closer look at the implications of effectiveness and profitability in the overlapping sectors in the Venn diagram above and why OSS/BSS are so important.

Sector 1. OSS / BSS <-> Networks

The OSS/BSS connects customers with the product (buyers with sellers). Even if we remove this powerful factor, OSS and BSS have other key roles to perform. They connect and coordinate. They hold the key to efficiency and utilisation and, in turn, profitability.

Sectors 2&3. OSS / BSS <-> Field Workforce and Network

The OSS/BSS manages the field workforce, assigning what to do and when. But before that, our OSS/BSS provide the tools to decide why the work needs doing. They identify:

  • What needs to be built (network designs)
  • What capacity is required (by identifying performance gaps now or in the future)
  • What customers need to be connected, where and how
  • What are the root problems that need to be fixed to ensure customers are well served

That covers sector 2. But sector 3 appears to be separated from the OSS/BSS. It is, in the sense that field workers work directly on the network. However, they generally only do so after direction from the OSS/BSS (eg via work orders, trouble tickets, etc). Hence, they’re merged here.

Summary – Efficiency and The Profit Engine

You might be wondering why I missed sector 4. You might also be wondering whether the last sentence above, with the OSS/BSS pulling the strings of field work on networks, represents sector 4. Well, yes. But be patient and we’ll come back to sector 4 with a slightly different (and potentially more powerful) perspective than that.

From the two previous sections, you would have noticed just how important OSS and BSS are to a network service provider. They directly influence:

  • Taking sales orders
  • Processing orders to ensure they’re activated
  • The time to market of new services and product offerings, the build of support infrastructure, the reactivation of damaged infrastructure or warrantied equipment and more
  • Identification of problems and what needs to be done to fix them
  • Preventative maintenance to minimise the degradations / outages occurring 
  • Allocation of resources and their lifecycle management
  • Optimising the capital in the network by balancing capacity (available resources vs utilisation)
  • Managing revenues (preparation of invoices, issuing bills, collections, etc)
  • Combining the many people, skills and their availability with assets, materials, certifications, etc to ensure work is prioritised and coordinated through to completion
  • The speed of getting people to site, and on to the next job after finishing one
  • The scalability / repeatability of the factors above and more
  • The identification of repeatable actions that are then worthy of automation and/or improvement
  • Logging and visualising the performance of people, processes and technologies (in real-time and over longer trend-lines), providing the benchmarks and levers to manage any of those factors if they’re going outside control bounds
  • The list goes on!!!

When you consider the daily volumes of each of those factors at large telcos, you’ll understand how a 5% improvement or deterioration in any of them has significant implications for profitability. The profitability of an organisation is massively helped, or hindered, by its OSS/BSS, though few people seem to realise it.

The Elusive Sector 4. Combining OSS/BSS, Network and Field Services.

I promised to come back to sector number 4 in the Venn diagram above and give a different perspective. There’s an opportunity that exists here that few are capitalising on yet.

But first a recap. Sector 1 is best characterised by the tools used by a NOC (Network Operations Centre) and SOC (Service Operations Centre), as well as their various metrics like time to repair, up-time / availability, etc.

Sectors 2 and 3 are best characterised by the tools used by a WOC (Workforce Operations Centre) and metrics like number of truck rolls, jobs completed, etc.

The analytics that are available at sector 4 are profound, but rarely used. Let me describe via a scenario:

  • There is an outage in region A identified in the NOC. They create a ticket for copper cable #1 to be replaced
  • The WOC picks up the ticket and a worker is sent to repair copper cable #1
  • This repeats over the next few weeks. Copper cables # 2, 3, 4, 5 and more are also repaired in region A
  • It’s clear from my description that a systemic problem exists in region A, but the NOC and WOC are both more transactional in nature and have SLAs to meet, so they’re not looking for endemic issues
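The region A pattern can be caught with surprisingly simple analytics sitting above the ticketing data. Here’s a minimal sketch in Python; the ticket records, field names and thresholds are hypothetical, and a real implementation would query the ticketing system rather than a hard-coded list:

```python
from collections import Counter
from datetime import date

# Hypothetical closed repair tickets: (region, asset_class, date_closed)
tickets = [
    ("region-A", "copper-cable", date(2022, 3, 1)),
    ("region-A", "copper-cable", date(2022, 3, 8)),
    ("region-A", "copper-cable", date(2022, 3, 15)),
    ("region-B", "copper-cable", date(2022, 3, 9)),
    ("region-A", "copper-cable", date(2022, 3, 20)),
]

def endemic_candidates(tickets, window_days=30, threshold=3):
    """Flag (region, asset_class) pairs with repeated repairs in the window."""
    latest = max(closed for _, _, closed in tickets)
    recent = [t for t in tickets if (latest - t[2]).days <= window_days]
    counts = Counter((region, asset) for region, asset, _ in recent)
    return {key: n for key, n in counts.items() if n >= threshold}

print(endemic_candidates(tickets))  # → {('region-A', 'copper-cable'): 4}
```

The NOC and WOC each see one ticket at a time; an overseer function like this sees the repetition, which is exactly the trigger for the cost/benefit analysis described below.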

The OSS/BSS has the ability to provide the role as an overseer, observing not just the transaction metrics, but also considering effectiveness and profitability. It is able to consider:

  • Each outage is having an impact on availability, potentially costing money via rebates and is damaging the brand
  • Each outage is introducing costs of repair teams as well as replacement material costs
  • The copper network doesn’t provide as much head-room for higher-speed offerings to customers, which represents an opportunity cost
  • The reliability and availability uplift of an optical node versus a copper node in region A
  • The long-term cost profile of a new fibre network versus a deteriorating copper network
  • The costs of inserting a new optical node and removal of the copper node (and associated cables, connectors, network termination devices, etc)
  • The customer profiles connected to the network that are likely to take higher speed services if made available
  • etc

With all of this knowledge at the fingertips of the OSS/BSS, it could decide to inject a new work order into the work queue, comprising:

  • A network design (possibly automated)
  • Acquisition of assets / materials (possibly automated)
  • Notification to customers of the network uplift (resulting in outage notifications, but also better performance and higher-speed offerings – possibly automated)
  • Creation / scheduling / dispatch of jobs to deconstruct / construct / commission infrastructure in region A
  • Notification to customers that the new service is available

The metrics that matter at sector 4 are less about transactions and more about higher-order objectives like effectiveness and profitability.

In many cases, network operators don’t have these near-real-time decision support tools at hand. If network uplift is to be performed, it’s decided across an entire service area by capacity planning teams rather than in more granular regions.

This is only one scenario for what could be achieved at sector 4. I’m sure we can imagine more if we start building the tools here.

Also consider the different speeds that the OSS/BSS need to cater for and optimally allocate as part of end-to-end workflows:

  • Some activities are fast – virtualised networks have the ability to self-manage / self-optimise so changes are occurring in this part of the network dynamically and automatically
  • Some activities are medium-speed – logical changes in networks such as configuration updates or logical connectivity, are performed with only a short turnaround and are pushed to the network
  • Other activities are slow – physical infrastructure builds such as new sites, cables, etc can often take months of planning and build activities (with many sub-activities to manage) 


How and why we need to Challenge our OSS Beliefs

The OSS / BSS industry tends to be quite technical in nature. The networks we manage are technical. The IT stacks we build on are technical. The workflows we design are technical. The product offerings we coordinate are technical. As such, the innovations we tend to propose are quite technical too. In some cases, too technical, but I digress.

You would’ve no doubt noticed that the organisations garnering most attention in the last few years (eg the hyperscalers, Airbnb, Uber, Twitter, Afterpay, Apple, etc) have also leveraged technology. However, whilst they’ve leveraged technology, you could argue that it’s actually not “technical innovation” that has radically changed the markets they service. In most cases, it’s actually the business model and design innovations, which have simply been facilitated by technology. Even Apple is arguably more of a design innovator than a technology innovator (eg there were MP3 players before the iPod came along and revolutionised the personal music player market).

Projects like Skype, open-source software and hyperscaled services have made fundamental changes to the telco industry. But what about OSS/BSS and the telcos themselves? Have we collectively innovated beyond the technical realm? Has there been any innovation from within that has re-framed the entire market?

As mentioned in an earlier post, I’ve recently been co-opted onto the TM Forum’s Transformation User Group to prepare a transformation framework and transformation guides for our industry. We’re creating a recipe book to help others to manage their OSS/BSS/telco/digital transformation projects. However, it only dawned on me over the weekend that I’d overlooked a really, really important consideration of any transformation – reframing!

Innovation needs to be a key part of any transformation, firstly because we need to consider how we’ll do things better. However, we also need to consider the future environment in which our transformed solutions will operate. Our transformed solution will (hopefully) still be relevant and remain operational 5, 10, even 15 years from now. For it to be relevant that far into the future, it must be able to flexibly cater for the environmental situation at that time. Oh, and by the way, when I say “environment,” I’m not talking climate change, but other situational change. This might include factors like:

  • Networks under management
  • Delivery mechanisms
  • Business models
  • Information Technology platforms 
  • Architectural models
  • Process models
  • Staffing models (vs automations)
  • Geographical models (eg local vs global services)
  • Product offerings (driven by customer needs)
  • Design / Develop / Test / Release pipelines
  • Availability of funding for projects, and is it capex, opex, clip-the-ticket, etc
  • Risk models and risk aversion levels
  • etc

Before embarking on each transformation project, we first need to challenge our current beliefs and assumptions to make sure they’re still valid now and can remain valid over the next few years. Our current beliefs and assumptions are based on past experiences and may not be applicable to the to-be environment that our transformations will exist within.

So, how do we go about challenging our own beliefs and assumptions in this environment of massive, ongoing change?

Well, you may wish to ask “re-framing questions” to test your beliefs. These may be questions such as (but not limited to):

  1. Who are our customers (could be internal or external) and how do they perceive our products? Have we asked them recently? Have we spent much time with them?
  2. What is our supply chain and could it be improved? Are there any major steps or painful elements in the supply chain that could be modified or removed?
  3. Are these products even needed under future scenarios / environments?
  4. What does / doesn’t move the needle? What even is “the needle”?
  5. What functionality is visible vs invisible to customers?
  6. What data would be useful but is currently unattainable?
  7. Do we know how cash flows in our clients’ and our own companies in relation to these solutions? Specifically, how is value generated?
  8. How easy are our products to use? How long does it take a typical user (internal / external) to reach proficiency?
  9. What personas DO we serve? What personas COULD we serve? What new personas WILL we serve?
  10. What is the value we add that justifies our solution’s existence? Could that value be monetised in other ways?
  11. Would alternative tech make a difference (voice recognition, AI, robotics, observability, biometric sensors, AR/VR, etc)?
  12. Are there any strategic relationships that could transform this solution?
  13. What does our team do better than anyone else (and what are they not as strong at)?
  14. What know-how does the team possess that others don’t?
  15. What features do we have that we just won’t need? Or that we absolutely MUST have?
  16. Are there likely to be changes to the networks we manage?
  17. Are there likely to be changes to the way we build and interconnect systems?
  18. Where does customer service fall on the continuum between self-service and high-contact relationships?
  19. What pricing model best facilitates optimal use of this solution (if applicable)? For example, does a consumption-based usage / pricing model fit better than a capital investment model? As Jeff Bezos says, “Your margin is my opportunity.” The incumbents have large costs that they need to feed (eg salaries, infrastructure, etc), whereas start-ups or volume-models allow for much smaller margins
  20. Where are the biggest risks of this transformation? Can they be eliminated, mitigated or transferred?
  21. What aspects of the solution can be fixed and what must remain flexible?
  22. What’s absorbing the most resources (time, money, people, etc) and could any of those resource-consumers be minimised, removed or managed differently?

It also dawned on me while writing this list that we can apply these reframing questions not just to our transformation projects, but to ourselves – for our own personal, ongoing transformation.

I’d love to get your perspective below. What other re-framing questions should we ask? What re-framing exercise has fundamentally changed the way you think or approach OSS/BSS/personal transformation projects?

How to Approach OSS Vendor Selection Differently than Most

Selecting a new OSS / BSS product, vendor or integrator for your transformation project can be an arduous assignment.

Every network operator and every project has a unique set of needs. On the other side of the equation, there are literally hundreds of vendors creating an even larger number of products to service those widely varied needs.

If you’re a typical buyer, how many of those products are you already familiar with? Five? Ten? Fifty? How do you know whether the best-fit product or supplier is within the list you already know? Perhaps the best-fit is actually amongst the hundreds of other products and suppliers you’re not familiar with yet. How much time do you have to research each one and distill down to a short-list of possible candidates to service your specific needs? Where do you start? Lots of web searches?

Then how do you go about doing a deeper analysis to find the one that’s the best fit for you out of those known products? The typical approach might follow a journey similar to the following:

The first step alone can take days, if not weeks, but also chews up valuable resources because many key stakeholders will be engaged in the requirement gathering process. The other downside of the requirements-first approach is that it becomes a wish-list that doesn’t always specify level of importance (eg “nice to have” versus “absolutely mandatory”).

Then, there’s no guarantee that any vendor will support every single one of the mandatory requirements. There’s always a level of compromise and haggling between stakeholders.

Next comes the RFP process, which can be even more arduous.

There has to be an easier way!

We think there is.

Our approach starts with the entire 400+ vendors in our OSS/BSS Vendor Directory. Then we apply one or two rounds of filters:

  1. Long-list Filter – Query by high-level product capability as per the diagram below. For example, if you want outside plant management, then we filter by 9b, which alone returns a list of over 60 candidate vendors, but we can narrow it down further by applying filters 10 and 14 as well if you need those functionalities
  2. Short-list Filter – We then sit with your team to prepare a list of approx. 20 high-level questions (eg regions the vendor works in, what support levels they provide, high-level functional questions, etc). We send this short questionnaire to the long-list of vendors for their response. Their collated responses usually then yield a short-list of 3-10 best-fit candidates that you/we can then perform a deeper evaluation on (how deep you dive depends on how thorough you want the review to be, which could include RFPs, PoCs and other steps).

The 2-step filter approach is arguably even quicker to prepare and more likely to identify the best-fit short-list solutions because it starts by assessing 400+ vendors, not just the small number that most clients are aware of.
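To make the long-list filtering step concrete, here’s a minimal sketch of it as a query over a vendor directory. The capability codes echo the example above (eg “9b” for outside plant management), but the vendor names and data structure are invented placeholders, not entries from our actual directory:

```python
# Hypothetical sketch of the long-list filter over a vendor directory.
# Capability codes (eg "9b" = outside plant management) mirror the example
# in the article; the vendor entries below are invented placeholders.

VENDOR_DIRECTORY = [
    {"name": "VendorA", "capabilities": {"9b", "10", "14"}},
    {"name": "VendorB", "capabilities": {"9b", "10"}},
    {"name": "VendorC", "capabilities": {"3", "7"}},
]

def long_list(directory, required_capabilities):
    """Return the names of vendors offering every required capability."""
    required = set(required_capabilities)
    return [v["name"] for v in directory if required <= v["capabilities"]]

# Filter by 9b alone, then narrow further by adding filters 10 and 14
print(long_list(VENDOR_DIRECTORY, ["9b"]))              # ['VendorA', 'VendorB']
print(long_list(VENDOR_DIRECTORY, ["9b", "10", "14"]))  # ['VendorA']
```

In practice the real directory holds 400+ vendors and many more capability dimensions, but the principle is the same: each additional filter is an intersection that shrinks the long-list toward a workable short-list.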

The next step (step 5 in the diagram above) also uses a contrarian approach. Rather than evaluating via an RFP that centres around a set of requirements, we instead identify:

  • The personas (people or groups) that need to engage with the OSS/BSS
  • The highest-priority activities each persona needs to perform with the OSS/BSS
  • End-to-end workflows that blend these activities into a prioritised list of demonstration scenarios

These steps quickly prioritise what’s most important for the to-be solution to perform. We describe the demonstration scenarios to the short-listed vendors and ask them to demonstrate how their solutions solve for those scenarios (as best they can). The benefit of this approach is that the client can review each vendor demonstration through their own context (ie the E2E workflows / scenarios they’ve helped to design).

This approach does not provide an itemised list of requirement compliance like the typical approach. However, we’d argue that even the requirement-led approach will (almost?) never identify a product of perfect fit – a response sheet full of “Will Comply” answers often just masks functionality that requires specific customisation.

Our “filtering” approach will uncover the solution of closest fit (in its out-of-the-box state) in a much more efficient way.

We should also highlight that the two chevron diagrams above are just sample vendor-selection flows. We actually customise them to each client’s specific requirements. For example, some clients require much more thorough analysis than others. Others have mandatory RFP and/or business case templates that need to be followed.

If you need help, either with:

  • Preparing a short-list of vendors for further evaluation, down from our known list of 400+; or
  • Performing a more thorough analysis to identify the best-fit solution/s,

then we’d be delighted to assist. Please leave us a note in the contact form below.

New. Just Launched – Mastering Your OSS (the eBook version)

We’re excited to announce that we’ve just launched the eBook version of Mastering Your OSS.

Click on the link above or here to read all about it.

Time to Kill the Typical OSS Partnership Model?

A couple of years ago Mark Newman and the content team at TM Forum created a seminal article, “Time to Kill the RFP? Reinventing IT Procurement for the 2020s.” There were so many great ideas within the article. We shared a number of concordant as well as divergent ideas (see references #1, #2, #3, #4, #5, #6, and others).

As Mark’s article described, the traditional OSS/BSS vendor selection process is deeply flawed for both buyer and seller. It’s time-consuming and costly. But worst of all, it tends to set the project on a trajectory towards conflict and disillusionment. That’s the worst possible start for a relationship that will ideally last for a decade or more (OSS and BSS projects are “sticky” because they’re difficult to transform / replace once in-situ).

Partnership is the key word in this discussion – as reiterated in Mark’s report and our articles back then as well.

Clearly this message of long-held partnerships is starting to resonate, as we see via the close and extensive partnerships that some of the big service providers have formed with third-parties for integration and other services. 

That’s great… but… in many cases it introduces its own problem for the implementation of OSS and BSS projects. A situation that is also deeply flawed.

Many partnerships are based around a time and materials (T&M) model. In other words, the carrier pays the third-party a set amount per day for the number of days each third-party-provided resource works. A third-party supplies solution architects at $x per day, business analysts at $y per day, developers at $z per day, project managers at… you get the picture. That sounds simple for all parties to wrap their heads around and come to mutually agreeable terms on. It’s so simple to comprehend that most carriers almost default to asking external contractors for their daily charge-out rates.

This approach is deeply flawed – ethically conflicted even. You may ask why…. Well, Alan Weiss articulates it best as follows:

When you charge by the hour, you’re in ethical conflict with the client. You only receive larger pay the longer you’re there, but the client is better served by how quickly you can meet objectives and leave.

Complex IT projects like OSS and BSS projects are the perfect example of this. If your partners are paid on a day rate, they’re financially incentivised to let delays, bureaucracy, endless meetings and general inefficiency prosper. In big organisations, these things tend to thrive even without any incentivisation!

Assuming a given project continues at a steady-state of resources, if a project goes twice as long as projected, then it also goes 100% over the client’s budget. By direct contrast, the third-party doubles their revenue on the project.

T&M partnership models disincentivise efficiency, yet efficiency is one of the primary reasons for the existence of OSS and BSS. They also disincentivise reusability. Why would a day-rater spend the extra time (in their own time) to systematise what they’ve learnt on a project when they know they will be paid by the hour to re-invent that same wheel on the next project?

Can you see why PAOSS only provides scope of work proposals (ie defined outcomes / deliverables / timelines and, most importantly, defined value) rather than day-rates (other than in exceptional circumstances)?

Let me cite just one example to illustrate the point (albeit a severe example of the point).

I was once assisting an OEM vendor to implement an OSS at a tier-1 carrier. This vendor also provided ongoing professional services support for tasks such as customisation. However, the vendor’s day-rates were slightly higher than the carrier was paying for similar roles (eg architects, developers, etc). The carrier invited a third-party to perform much of the customisation work because their day-rates were lower than the OEM.

Later on, I was tasked with reviewing a customisation written by the third-party because it wasn’t functioning as expected. On closer inspection, it had layers of nested function calls and lookups to custom tables in the OEM’s database (containing fixed values). It comprised around 1,500 lines of code. It must’ve taken weeks of effort to write, test and release into production via the change process that was in place. The sheer entanglement of the code took me hours to decipher. Once I finally grasped why it was failing and then interpreted the intent of what it should do, I took it back to a developer at the OEM. His response?

Oh, you’ve gotta be F#$%ing kidding me!

He then proceeded to replace the entire 1,500 lines and spurious lookup tables with half a line of code.

Let’s put that into an equation containing hypothetical numbers:

  • For the sake of comparison, let’s assume test and release costs are equivalent in both cases
  • OEM charges $1,000 per day for a dev
  • Third-party charges $900 per day for a dev
  • OEM developer (who knows how the OEM software works) takes 15 seconds to write the code = $0.52
  • Third-party dev takes (conservatively) 5 days to write the equivalent code (which didn’t work properly) = $4,500

In the grand scheme of this multi-million dollar project, the additional $4,499.48 was almost meaningless, but it introduced weeks of delays (diagnosis, re-dev, re-test, re-release, etc).

Now, let’s say the new functionality offered by this code was worth $50,000 to the carrier in efficiency gains. Who deserves to be rewarded $5,000 for value delivered?

  • The OEM who completed the task and got it working in seconds (and was compensated $0.52); or
  • The Third-party who never got it to work despite a week of trying (and was compensated $4,500)
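The hypothetical numbers above can be verified with a quick calculation (assuming an 8-hour working day when converting the OEM’s day rate into seconds):

```python
# Quick check of the hypothetical day-rate arithmetic in the example above.
# Assumes an 8-hour working day when converting the OEM rate to per-second cost.

OEM_DAY_RATE = 1000.0          # $ per day for an OEM developer
THIRD_PARTY_DAY_RATE = 900.0   # $ per day for a third-party developer
SECONDS_PER_DAY = 8 * 60 * 60  # 8-hour working day

oem_cost = OEM_DAY_RATE * (15 / SECONDS_PER_DAY)  # 15 seconds of work
third_party_cost = THIRD_PARTY_DAY_RATE * 5       # 5 days of work

print(f"OEM cost:         ${oem_cost:.2f}")                         # $0.52
print(f"Third-party cost: ${third_party_cost:.2f}")                 # $4500.00
print(f"Difference:       ${third_party_cost - oem_cost:.2f}")      # $4499.48
```

The absolute dollar difference is trivial on a multi-million dollar project; the real cost is the weeks of delay in diagnosis, re-development, re-testing and re-release.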

The hard part about scope-of-work projects is that someone has to scope them and define the value they deliver. That’s a whole lot harder and provides less flexibility than just assigning a day rate. But perhaps that in itself provides value. If we need to consider the value of what we’re producing, we might just find that some of the tasks in our agile backlog aren’t really worth doing.

If you’d like to share your thoughts on this, please leave a comment below.




The confused mind says no – the psychology of OSS purchasing

When it comes to OSS/BSS vendor selections, a typical customer might say they’re evaluating vendors on criteria that are a mix of technical and commercial (and other) factors. However, there’s more likely a much bigger and often hidden factor that drives a purchasing event. We’ll explore what that is shortly.

I’m currently reading the book, Alchemy: The Surprising Power of Ideas That Don’t Make Sense. It discusses a number of lateral, counter-intuitive approaches that have actually been successful and the psychological effects behind them.

The author, Rory Sutherland, proffers the idea that, “we make decisions… not only for the expected average outcome (but) also seek to minimise the possible variance, which makes sense in an uncertain world.” Also that, “A 1 percent chance of a nightmarish experience dwarfs a 99 percent chance of a 5 percent gain.”

Does that make you wonder about what the OSS/BSS buying experience feels like?

Are OSS/BSS vendors so busy promoting the 5% gain and the marginal differentiating features that they’re overlooking the white-knuckled fear of the nightmarish experience for OSS buyers?

OSS/BSS transformation projects tend to be large, complex and risky. There are a lot of unknowns, especially for organisations that tackle these types of projects rarely. Every project is different. The stakeholders signing off these projects are making massive investment decisions (relative to their organisation’s size) in the allocation of resources (financial, human and time). The ramifications of these buying decisions will last for years and can often be career-defining (in the positive or the negative depending on the success of the transformation).

As someone who assists organisations with their buying decisions, I can concur with the old saying that, “the confused mind says no.” I’d also suggest that the scared mind says F#$@ no! If the vendor bamboozles the buyer with jargon and features and data, it only amplifies the fears that they might be walking into a nightmarish experience.

Fear and confusion are the reason customers often seek out the vendors who are in the top-right corner of the Gartner quadrant, even when they’re just not the best-fit solution. It’s the reason for the old saying that nobody got fired for hiring IBM. It’s the reason OSS/BSS procurement events can be so arduous (9, 12, 18 months are the norm).

The counter-intuitive approach for vendors is to spend more time overcoming the fear and confusion than on technical demonstrations:

  • Simplify the messaging
  • Simplify the user experience (refer to the OSS intuition age)
  • Simplify the transformation
  • Provide work breakdowns and phasing to deliver early proof of value rather than a big-bang delivery way off into the future
  • Take time to learn and communicate in the customer’s voice and terminology rather than language that’s more comfortable to you
  • Provide working proofs-of-concept / sandpits of your solutions as early as possible for the customer to interact with
  • Allow customers to use these sandpit environments and self-help with extensive support collateral (eg videos, how-to’s) enabling the customer to build trust in you at their own pace
  • Show evidence of doing the important things really well and efficiently rather than a long-tail of marginal features
  • Show evidence of striving to ensure every customer gets a positive outcome. This includes up-front transparency of the challenges faced (and still being faced) on similar projects. Not just words, but evidence of your company’s actions on behalf of customers. This might include testimonials and referrals for every single customer
  • Show evidence of no prior litigations or rampant variations or cost escalations on past projects
  • Build trust to reduce fear and confusion (refer to “the relationship slider” in the three project responsibility sliders)
  • Provide examples of past challenges and risk mitigations. Even school the client on what they need to do to de-risk the project prior to commencement*

Can you think of other techniques / proofs that almost guarantee to the customer that they aren’t entering into a nightmarish situation?

* Note: I wrote Mastering your OSS with this exact concept in mind – to get customers ready for the transformation project they’re about to embark on and the techniques they can use to de-risk the project.


Are you kidding? We’ll never use open-source OSS/BSS!

Back in the days when I first started using OSS/BSS software tools, there was no way any respectable telco was going to use open-source software (the other oss, for which I’ll use lower-case in this article) in their OSS/BSS stacks. The arguments were plenty, and if we’re being honest, probably had a strong element of truth in many cases back then.

These arguments included:

  • Security – This is the most commonly cited aversion I’ve heard to open-source. Our OSS/BSS control our network, so they absolutely have to be secure. Secure across all aspects of the stack from network / infrastructure to data (at rest and in motion) to account access to applications / code, etc. The argument against open-source is that the code is open to anyone to view, so vulnerabilities can be identified by hackers. Another argument is that community contributors could intentionally inject vulnerabilities that aren’t spotted by the rest of the community
  • Quality – There is a perception that open-source projects are more hobby projects than professional ones. Related to that, hobbyists can’t expend enough effort to make the solution as feature-rich and/or user-friendly as commercial software
  • Flexibility – Large telcos tend to want to steer the products to their own unique needs via a lot of customisations. OSS/BSS transformation projects tend to be large enough to encourage proprietary software vendors to be paid to make the requested changes. Choosing open-source implies accepting the product (and its roadmap) is defined by its developer community unless you wish to develop your own updates
  • Support – Telcos run 24x7x365, so they often expect their OSS/BSS vendors to provide round-the-clock support as well. There’s a belief that open-source comes with a best-effort support model with no contracted service obligations. And if something does go drastically wrong, open-source licences disclaim all responsibility and liability
  • Continuity – Telcos not only run 24x7x365, but also expect to maintain this cadence for decades to come. They need to know that they can rely on their OSS/BSS today but also expect a roadmap of updates into the future. They can’t become dependent upon a hobbyist or community that decides they don’t want to develop their open-source project anymore

Luckily, these perceptions around open-source have changed in telco circles in recent years. The success of open-source organisations like Red Hat (acquired by IBM for $34 billion on annual revenues of $3.4 billion) has shown that valuable business models can be underpinned by open-source. There are many examples of open-source OSS/BSS projects driving valuable business models and associated professionalism. The change in perception has possibly also been driven by shifts in application architectures, from monolithic OSS/BSS to more modular ones. Having smaller modules has opened the door to the utilisation of building-block solutions like the Apache projects.

So let’s look at the same five factors above again, but through the lens of the pros rather than the cons.

  • Security – There’s no doubt that security is always a challenge, regardless of open-source or proprietary software, especially for an industry like OSS/BSS where most organisations still invest more heavily in innovation (new features/capabilities) than in security optimisations. Clearly the openness of code means vulnerabilities are more easily spotted in open-source than in “walled-garden” proprietary solutions – not just by nefarious actors, but by the development community as well. Linus’ Law suggests that “given enough eyeballs, all bugs (and security flaws) are shallow.” The question for open-source OSS/BSS is whether there are actually many eyeballs. All commercially successful open-source OSS/BSS vendors that I’m aware of have their own teams of professional developers who control every change to the code base, even on the rare occasions when there are community contributions. Similarly, many modern open-source OSS/BSS leverage other open-source modules that do have many eyes (eg Linux, SNMP libraries, Apache projects, etc). Proprietary solutions, by contrast, often rely on security through obscurity – the assumption that with almost no external “eyeballs,” vulnerabilities stay hidden. The fragmented nature of the OSS/BSS industry means that some proprietary tools have a tiny install base, which can lull some into a false sense of security. Open-source OSS/BSS manufacturers, on the other hand, know there’s a customer focus on security and have to mitigate this concern. The other interesting perspective of openness is that open-source projects can quite quickly be scrutinised for security-code-quality. An auditor has free rein to identify whether the code is professional and secure. With proprietary software, the customer’s auditor isn’t afforded the same luxury unless special access is granted to the code. With no code access, the auditor has to reverse-engineer for vulnerabilities rather than foresee them in the code.
  • Quality – There’s no doubt that many open-source OSS/BSS have matured and found valuable business models to sustain them. With the profitable business model has come increased resources, professionalism and quality. With the increased modularity of modern architectures, open-source OSS/BSS projects are able to perform very specific niche functionalities. Contrast this with the monolithic proprietary solutions that have needed to spread their resources thinner across a much wider functional estate. Also, successful open-source OSS/BSS organisations tend to focus on product development and product-related services (eg support), whereas the largest OSS/BSS firms tend to derive a much larger percentage of revenues from value-added services (eg transformations, customisations, consultancy, managed services, etc). The latter are more services-oriented companies than product companies. As inferred in the security point above, open-source also provides transparency relating to code-quality. A code auditor will quickly identify whether open-source code is of good quality, whereas proprietary software quality is hidden inside the black-box.
  • Flexibility – There has been a significant shift in telco mindsets in recent years, from an off-the-shelf to a build-your-own OSS/BSS stack. Telcos like AT&T have seen the achievements of the hyperscalers, observed the increased virtualisation of networks and realised they needed to have more in-house software development skills. Having in-house developers and access to the code-base of open-source means that telcos have (almost) complete control over their OSS/BSS destinies. They don’t need to wait for proprietary vendors to acknowledge, quote, develop and release new feature requests. They no longer rely on the vendor’s roadmap. They can just slip the required changes into their CI/CD pipeline and prioritise according to resource availability. Or if you don’t want to build a team of developers specifically skilled with your OSS/BSS, you can pick and choose – what functionality to develop in-house, versus what functionality you want to sponsor the open-source vendor to build
  • Support – Remember when I mentioned above that OSS/BSS organisations have found ways to build profitable business models around open-source software? In most cases, their revenues are derived from annual support contracts. The quality and coverage of their support (and the products that back it up) is directly tied to their income stream, so there’s commensurate professionalism assigned to support. As mentioned earlier, almost all the open-source OSS/BSS I’m aware of are developed by an organisation that controls all code change, not community consensus projects. This is a good thing when it comes to support, as the support partner is clear, unlike community-led open-source projects. Another support-related perspective is in the number of non-production environments that can be used for support, test, training, security analysis, etc. The cost of proprietary software means that it can be prohibitive to stand up additional environments for the many support use-cases. Apart from the underlying hardware costs and deployment effort, standing up additional open-source environments tends to be at no additional cost. Open-source also gives you greater choice in deciding whether to self-support your OSS/BSS in future (if you feel that your internal team knows the solution so intimately that they can fix any problem or functionality requirement that arises) rather than paying ongoing support contracts. Can the same be said for proprietary product support?
  • Continuity – This is perhaps the most interesting one for me. There is an assumption that big, commercial software vendors are more reliable than open-source vendors. This may (or may not) be the case. Plenty of commercial vendors have gone out of business, just as plenty of open-source projects have burned out or dwindled away. To counter the first risk, telcos pay to enter into software escrow agreements with proprietary vendors to ensure critical fixes and roadmap development can continue even in the event that a vendor ceases to operate. But an escrow contract may not cover the case where a commercial vendor chooses to obsolete a product line or simply fails to invest in new features or patches – a common event even from the world’s largest OSS/BSS providers. Under escrow arrangements, customers are effectively paying an insurance fee to have access to the code for organisational continuity purposes, not product continuity. Escrow may not cover that, but open-source code is available under any scenario. The more important continuity consideration is the data – and data is the reason OSS/BSS exist. When choosing a commercial provider, especially a cloud software / service provider, the data goes into a black box. What happens to the data inside the black box is proprietary, and often what comes out of it is too. Telcos will tend to have far more control of their data destinies for operational continuity if using open-source solutions. Speaking of cloud and SaaS-model and/or subscription-model OSS/BSS, customers are at the whim of the vendor’s product direction. Products, features and modules can easily be changed or deprecated by these vendors, with little recourse for the buyers. This can still happen in open-source and become an impediment for buyers too, but at least open-source provides buyers with the opportunity to control their own OSS/BSS destinies.

Now, I’m not advocating one or the other for your particular situation. As cited above, there are clearly pros and cons for each approach as well as different products of best-fit for different operators. However, open-source can no longer be as summarily dismissed as it was when I first started on my OSS/BSS journey. There are many fine OSS and BSS products and vendors in our Blue Book OSS/BSS Vendor Directory that are worthy of your consideration too when looking into your next product or transformation.

Edit: Thanks to Guy B., who rightly suggested that Scalability was another argument against open-source in the past. Ironically, open-source has been a significant influence on the almost unlimited scalability that many of our solutions enjoy today.

There’s an OSS Security Elephant in the Room!

The pandemic has been beneficial for the telco world in one way. For many who weren’t already aware, it’s now clear how incredibly important telecommunications providers are to our modern way of life. Not just for our ability to communicate with others, but our economy, the services we use, the products we buy and even more fundamentally, our safety.

Working in the telco industry, as I’m sure you do, you’ll also be well aware of all the rhetoric and politics around Chinese manufactured equipment (eg Huawei) being used in the networks of global telco providers. The theory is that having telecommunications infrastructure supplied by a third-party, particularly a third-party aligned with non-Western allies, puts national security interests at risk.

In this article, “5G: The outsourced elephant in the room,” Bert Hubert provides a brilliant look into the realities of telco network security that go far beyond just equipment supply. It breaks the national security threat into three key elements:

  • Spying (using compromised telco infrastructure to conduct espionage)
  • Availability (compromising and/or manipulating telco infrastructure so that it’s unable to work reliably)
  • Autonomy (being unable to operate a network or to recover from outages or compromises)

The first two are well understood and often discussed. The third is the real elephant in the room – the elephant that OSS/BSS (potentially) have a huge influence over. But we’ll get to that shortly.

Before we do, let’s summarise Bert’s analysis of security models. For 5G, he states that there’s an assumption that employees at national carriers design networks, buy equipment, install it, commission it and then hand it over to other employees to monitor and manage it. Oh, and to provide other specialised activities like lawful intercept, where a local legal system provides warrants to monitor the digital communications of (potentially) nefarious actors. Government bodies and taxpayers all assume the telcos have experienced staff with the expertise to provide all these services.

However, the reality is far different. Service providers have been outsourcing many of these functions for decades. New equipment is designed, deployed, configured, maintained and sometimes even financed by vendors for many global telcos. As Bert reinforces, “Just to let that sink in, Huawei (and their close partners) already run and directly operate the mobile telecommunication infrastructure for over 100 million European subscribers.”

But let’s be very clear here. It’s not just Huawei and it’s not just Chinese manufacturers. Nor is it just mobile infrastructure. It’s also cloud providers and fixed-line networks. It’s also American manufacturers. It’s also the integrators that pull these networks and systems together. 

Bert also points out that CDRs (Call Detail Records) have been outsourced for decades. There’s a strong trend for billing providers to supply their tools via SaaS delivery models. And what are CDRs? Only metadata. Metadata that describes a subscriber’s activities and whereabouts. Data that’s powerful enough to be used to assist with criminal investigations (via lawful intercept). But where has CDR / bill processing been outsourced to? China and Israel mostly.

Now, let’s take a closer look at the autonomy factor, the real elephant in the room. Many design and operations activities have been offshored to jurisdictions where staff are more affordable. The telcos usually put clean-room facilities in place to ensure a level of security is applied to any data handled off-shore. They also put in place contractual protection mechanisms.

Those are valid points, but still not the key point here. As Bert brilliantly summarises, “any worries about [offshore actors] being able to disrupt our communications through backdoors ignore the fact that all they’d need to do to disrupt our communications.. is to stop maintaining our networks for us!”

There might be an implicit trust in “Western” manufacturers or integrators (eg Ericsson, Nokia, IBM) to design, build and maintain networks. However, these organisations also outsource / insource labour to international destinations where labour costs are lower.

If the R&D, design, configuration and operations roles are all outsourced, where do the telcos find the local resources with requisite skills to keep the network up in times when force majeure (eg war, epidemic, crime, strikes, etc) interrupts a remote workforce? How do local resources develop the required skills if the roles don’t exist locally?

Bert proposes that automation is an important part of the solution. He has a point. Many outsourcing arrangements are time-and-materials contracts, so it’s in the suppliers’ best interests for maintenance activities to remain manual and time-consuming. He contrasts this with the hyperscalers (eg Google), who have built automations so that their networks and infrastructure need only minimal support crews.

Their support systems, unlike the legacy thinking of telco systems, have been designed with zero-touch / low-touch in mind.
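To make the zero-touch / low-touch idea concrete, here’s a minimal sketch of a closed-loop remediation pattern of the kind the hyperscalers rely on. All names (alarm codes, playbook actions) are hypothetical and purely illustrative; a real system would sit on top of telemetry, orchestration and change-control tooling.

```python
# Minimal closed-loop (zero-touch) remediation sketch.
# Known alarms map to automated playbook actions; anything unrecognised
# falls back to a human operator (the "low-touch" escape hatch).
# All identifiers here are hypothetical examples, not a real product API.

REMEDIATION_PLAYBOOKS = {
    "LINK_DOWN": "restart_interface",
    "HIGH_CPU": "restart_process",
    "CONFIG_DRIFT": "reapply_golden_config",
}


def remediate(alarm: str) -> str:
    """Return the automated action for a known alarm, else escalate."""
    return REMEDIATION_PLAYBOOKS.get(alarm, "escalate_to_operator")


def run_loop(alarms):
    """Process a batch of alarms, returning (alarm, action) pairs."""
    return [(alarm, remediate(alarm)) for alarm in alarms]
```

The design point is the ratio: the more alarms that resolve inside the loop without a human touching them, the smaller the support crew the network needs, which is exactly the economics Bert describes.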

If we do care about the stability, resiliency and privacy of our national networks, then something has to be done vastly differently. Having highly autonomous networks, OSS, BSS and related systems is a start. Having a highly skilled pool of local resources that can research, design, build, commission, operate and improve these systems would also seem important. If the business models of these telcos can’t support the higher costs of local resources, then perhaps national security interests might have to subsidise these skills?

I wonder whether the national carriers and/or local OSS/BSS / automation suppliers are lobbying on this point? I know a few governments have introduced security regulations that telcos must adhere to, ensuring they have suitable cyber-security mechanisms in place. They also have lawful intercept provisions. But do any have local operational autonomy provisions? None that I’m aware of, but feel free to leave us a comment about any you’re aware of.

PS. Hat tip to Jay for the link to Bert’s post.

How to make your OSS a Purple Cow

With well over 400 product suppliers in the OSS/BSS market, it can be really difficult to stand out from the other products. Part of the reason we compiled The Blue Book OSS/BSS Vendor Directory was to allow us to quickly recall one product from another. With so much overlapping functionality and similarities in their names, some vendors can “blend” into each other when we try to recall them.

And we spend much of our week working with and analysing the market, the products and the customers who use them. Imagine how difficult recalling each product would be for someone whose primary job is operating a network (which is what most OSS/BSS customers do for a living).

How then can a vendor make their offerings stand out amongst this highly fragmented product market?

Seth Godin is a legendary marketer and product maker (not in OSS or BSS products though). We refer to him often here on the Passionate About OSS blog because of his brilliant and revolutionary ideas. One of those ideas turned into a product manifesto entitled Purple Cow. This book made it into our list of best books for OSS/BSS practitioners.

Purple Cow describes “something phenomenal, something counterintuitive and exciting and flat-out unbelievable… Seth urges you to put a Purple Cow into everything you build, and everything you do, to create something truly noticeable.”

When you’re on a long trip in the countryside, seeing lots of brown or black cows soon gets boring, but if you saw a purple cow, you’d immediately take notice. This book provides the impetus to make your products stand out and drive word of mouth rather than having to differentiate via your marketing.

I’ve noticed the same effect when we pitch our Managed OSS/BSS Data Service to prospects. It’s the data collection and collation tools that drive most of the real business value (in our humble opinion), but it’s the visualisation tools that drive the wow factor amongst our prospects / customers.

It’s our ability to show 3D models of assets (eg towers, as per the animation below), to overlay real-time data onto those 3D models (ie digital twins), or to mash up many disparate data sources for presentation by powerful and intuitive visualisation engines.
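As a rough illustration of the collation step behind that kind of mashup, the sketch below merges a static asset inventory with live telemetry into one record per asset, the shape of data a digital-twin visualisation layer would then render. All field names and values here are invented for illustration, not taken from our actual service.

```python
# Hypothetical sketch: combining disparate data sources (static inventory
# plus real-time telemetry) into a single per-asset record for a
# digital-twin style visualisation. All names/values are illustrative.

inventory = {
    "TWR-001": {"type": "tower", "height_m": 45, "lat": -33.86, "lon": 151.21},
}

telemetry = {
    "TWR-001": {"tilt_deg": 1.2, "power_w": 840},
}


def build_twin_record(asset_id: str) -> dict:
    """Merge static and real-time attributes for one asset into one record."""
    record = {"asset_id": asset_id}
    record.update(inventory.get(asset_id, {}))   # slow-changing design data
    record.update(telemetry.get(asset_id, {}))   # fast-changing sensor data
    return record
```

In practice the hard part isn’t the merge itself but reconciling keys and data quality across sources, which is why (in our humble opinion) the collection and collation tooling carries most of the business value even though the visualisation carries the wow factor.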

This might seem counter-intuitive, but set aside your products’ technical functionality for a moment. Now ask yourself what is the biggest wow factor that stays with your customers and gets them talking with others in the industry? What’s phenomenal, counterintuitive, exciting and flat-out unbelievable? What’s your Purple Cow?

Having reviewed many OSS/BSS products, I can assure you there aren’t many Purple Cows in the OSS/BSS industry. If you’re responsible for products or marketing at an OSS/BSS vendor, that gives you a distinct opportunity.

Improvements in technologies such as 3D asset modelling, AI image recognition, advances in UX/CX, data collection / visualisation and many more give you the tools to be creative. The tools to be memorable. The tools to build a Purple Cow.

It’s difficult to stand out on functional or non-functional specifications (unless you leave others in your dust, such as being able to ingest data at 10-20x the rate of your nearest competitor). Those features might be where your biggest business value lies. In fact, that’s probably where it does lie.

However, it seems that the Purple Cows in OSS/BSS appear in the unexpected and/or surprising visual experiences.

Can you imagine building an OSS/BSS Purple Cow? If so, we’d be delighted to assist.