The Most Exciting OSS/BSS Innovations of 2022

Featured

We are currently living through a revolution for the OSS/BSS industry and the customers that we serve.

It’s not a question of if we need to innovate, but by how much and in what ways.

Change is surrounding us and impacting us in profound ways, triggered by new technologies, business models, design & delivery techniques, customer expectations and so much more.

Our why, the fundamental reasons for OSS and BSS to exist, is relatively unchanged in decades. We’ve advanced so far, yet we are a long way from perfecting the tools and techniques required to operationalise service provider networks.

In people like you, we have an incredibly smart group of experts tackling this challenge every day, seeking new ways to unlock the many problems that OSS and BSS aim to resolve.

In this report, you’ll find 25 solutions that are unleashing bold ideas to empower change and solve problems old and new. 

I’m thrilled to present these solutions in the hope that they’ll excite and inspire you like they have me.

Click on the image below to download the report.

Or click the link – https://passionateaboutoss.com/wp-content/uploads/2022/03/25-Most-Exciting-Innovations-in-OSS_BSS-2022_FINAL1.pdf 

Vendors included, in no preferential order:

  1. 2operate
  2. ActivePort
  3. ADVA
  4. AN10
  5. Anodot
  6. Appearition / Air Inspect Australia
  7. Aria Networks
  8. Avanseus
  9. Bentley
  10. Cisco
  11. CSG
  12. Cubro
  13. DFG Consulting
  14. EnterpriseWeb
  15. Ericsson / Nvidia
  16. InfoVista
  17. KX
  18. Mavenir
  19. Neo4j
  20. Netcracker
  21. Rakuten Symphony
  22. Techsee
  23. Trendspek
  24. Twinkler
  25. Zeetta

If you enjoy this report, be sure to subscribe via the form below and receive notification of future reports.

#PAOSSInnovation2022

New OSS Innovation Video Recording

The #Adtran #FibreBroadband Symposium was held in Melbourne on 3rd August 2022.

The video linked below shows the presentation we made at the conference:

It’s designed specifically for network builders, to show the relevance that OSS and BSS have for their worlds.

It also shows a very high level view of OSS past, present and future – including four possible future scenarios covering:

  • Private Networks
  • Digital Twins and Sensor / IoT Networks
  • Autonomous / Programmable Networks
  • Integration of Augmented Reality and OSS to provide new ways of working for the telecommunications industry (and beyond)

How AI and blast radii could inspire next-gen OSS designs

Artificial Intelligence is purported to be the panacea for so many high-volume problems within telco. For example, AI is being used for complex pattern recognition across diverse signal streams for tasks like churn reduction. AIOps solutions are making great strides in reducing noise within alarm / event lists, filtering and automating to limit the number of decisions that humans need to make. We’re also seeing AIOps-equivalent solutions starting to make progress on other high-volume streams like contact centre call handling and field work.

With this automated volume reduction occurring, we probably need to re-evaluate how our OSS user interfaces work. I see this as presenting across two dimensions:

  1. Pareto’s 20 – Whereas our UIs have historically been designed for handling transactions at volume, we can now start thinking instead about what’s slipping through the cracks. The ~80% is being automated, filtered, prioritised etc. The remaining ~20% (the 20% that’s really hard to characterise or triage or identify) is now where humans need to intervene to take it ever closer to 0% un-matched.
  2. Watching the watcher – We can’t just let the automations run without having some way of ensuring they’re doing the right things. We need a way of observing the automations. Even if they are designed to optimise for certain outcomes (eg truck-roll reduction), the environments they’re working within are so complex that secondary or tertiary effects (eg infrastructure costs) could quite feasibly become sub-optimal.

For a UI to handle Pareto’s 20, it seems we need a workbench that accommodates two key factors:

  • Blast Radii – The ability to observe the blast radius of any given event across all four proximities (time, topology, geography, object hierarchy), where the root-cause often triggers ripple-out alarms / events that can obfuscate the real root-cause
  • Collaboration – The ability to collaborate with experts across any domains

Let’s look at blast radius first. Tools such as AIOps are great at identifying patterns amongst data that is just too voluminous for people to handle. Networks are able to produce tens of thousands of performance metrics at streaming speeds. Multiply that by the increasing number of network devices and combine that with multi-domain dependencies (eg customer services, network domain interconnection, etc) and we have far too much data for humans to process every second (or whatever cadence your metrics tick over at). The same is true if you’re trying to triage an event that hasn’t been picked up by the automations. 

I believe our next-generation UIs can actually use AI to identify “probable / possible cause” clusters of related metrics and connections and present them in a workbench across space and time. Each stream (eg telemetry, alarms, flows, connections, etc) is presented as a layer on a map view, and the operator is able to turn layers on/off to visualise possible linkages / impacts. More importantly, the user will have the ability to scrub backwards and forwards in time to see the sequence and proximities of events. (Note: Naturally, we’ll need the network to propagate these data streams rapidly because true sequencing can be lost if there are significant time delays).
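To make this a little more concrete, here’s a minimal Python sketch of the idea – purely illustrative, with hypothetical event fields, neighbour maps and thresholds rather than any real OSS schema. It tests whether a candidate event falls within the “blast radius” of a seed event across the four proximities, and shows the time-scrubbing filter an operator might use in the workbench.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def _haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

@dataclass
class Event:
    event_id: str
    timestamp: datetime
    node: str                # topology reference
    site: tuple              # (lat, lon) geographic reference
    parent_object: str       # object-hierarchy reference (eg card -> device -> site)

def within_blast_radius(candidate: Event, seed: Event, topology_neighbours: dict,
                        time_window=timedelta(minutes=5), geo_radius_km=10.0) -> bool:
    """Crude test across the four proximities: time, topology, geography, hierarchy."""
    close_in_time = abs(candidate.timestamp - seed.timestamp) <= time_window
    close_in_topology = candidate.node in topology_neighbours.get(seed.node, set())
    close_in_geography = _haversine_km(candidate.site, seed.site) <= geo_radius_km
    close_in_hierarchy = candidate.parent_object == seed.parent_object
    # Toy heuristic: two or more proximities together = "probably related" to the seed.
    return sum([close_in_time, close_in_topology, close_in_geography, close_in_hierarchy]) >= 2

def scrub(events, at: datetime, lookback=timedelta(minutes=15)):
    """Time-scrubbing: return only the events visible at the selected point in time."""
    return [e for e in events if at - lookback <= e.timestamp <= at]
```

In a real workbench, the proximity weightings would be learned or tuned rather than hard-coded, but the structure of the problem is essentially this.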

We also need new ways to collaborate on complex triage – taking a more modern “chat” approach rather than ticket-and-flick-it as used historically. And when I say chat, this could mean chatting with other humans or could also pull in machine-input like line test results, social indicators, etc from something akin to a sophisticated chat-bot.

Now, coming back to watching the watcher, we always have to install guard-rails or threshold-boundaries on the automation streams to ensure we don’t get runaway feedback loops. The automations could effectively game the system, getting great results for their primary objectives while triggering unwanted side-effects. To avoid this, I believe we need more than just guard-rails: we also need highly visual presentations of data to see what’s happening inside the “black-box” of the automation. Similar to the blast-radius example above, I can envisage this being presented as heat-maps across space and time… amongst other visualisation techniques.
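As a rough illustration of the “watching the watcher” idea, the following sketch (hypothetical metric names and thresholds, not a real product feature) compares an automation’s current outcomes against a pre-automation baseline and flags any secondary metrics that have crept past their guard-rails.

```python
# Hypothetical guard-rail check: the metric names and thresholds are illustrative only.
GUARD_RAILS = {
    "truck_rolls_per_week": {"direction": "down"},           # primary objective
    "infra_cost_per_week": {"max_increase_pct": 5.0},        # secondary effect to watch
    "mean_time_to_restore_hrs": {"max_increase_pct": 10.0},  # tertiary effect to watch
}

def guard_rail_breaches(baseline: dict, current: dict) -> list:
    """Compare current automation outcomes against a pre-automation baseline."""
    breaches = []
    for metric, rule in GUARD_RAILS.items():
        if "max_increase_pct" not in rule:
            continue  # primary objective: tracked but not guarded here
        change_pct = 100.0 * (current[metric] - baseline[metric]) / baseline[metric]
        if change_pct > rule["max_increase_pct"]:
            breaches.append((metric, round(change_pct, 1)))
    return breaches

# Example: fewer truck rolls, but infrastructure cost has quietly crept up 8%.
baseline = {"truck_rolls_per_week": 120, "infra_cost_per_week": 100_000, "mean_time_to_restore_hrs": 4.0}
current  = {"truck_rolls_per_week": 90,  "infra_cost_per_week": 108_000, "mean_time_to_restore_hrs": 4.1}
print(guard_rail_breaches(baseline, current))  # [('infra_cost_per_week', 8.0)]
```

The heat-map / visual layer would then sit on top of checks like these, so humans can see where and why a guard-rail was breached.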

Can you envisage what our next-gen OSS UIs will look like and why? Do you agree or disagree with any of the perspectives above? Do you have theories about how we can do this better? I’d love to hear your thoughts via the comment box below.

Oh and PS, I’d love to see our user-interfaces look far more advanced than they typically are today – to match the level of sophistication that’s going on behind the scenes in our amazing OSS solutions. Check out the user interfaces from Jorge Almeida in his stunning showreel below. Many are presented as Augmented Reality views, as our OSS surely will be before long, once smart glass hardware catches up with software. As indicated recently, it’s not just the presentation of visual data that matters, but decision support to guide workers on what to do next!!

Understanding where OSS POCs go wrong

We’ve regularly discussed how RFPs / PoCs are usually an inefficient way of finding new products or partners for your OSS (see this link). They’re often inefficient regardless of whether you’re a supplier, buyer or integrator, and they tend to be expensive and time-consuming for all involved.

The short paper shown below by Peter Willis, the ex-Chief Researcher at BT, provides a really insightful insider’s perspective as to why Proofs of Concept go wrong.

In just one of the points raised in the paper, a shortened list of why PoCs don’t lead to adoption / sales includes:

  • Failure to demonstrate credible value to the telco
  • The value of the technology or component is not significant enough to give it sufficient priority
  • The PoC was not realistic enough
  • The PoC was not demonstrated to the right stakeholders
  • The technology does not fit into the telco’s operational model
  • The wider telco community has a negative view of the technology
  • Existing (incumbent) suppliers offer their own alternative technology
  • The technology demonstrated in the PoC gets overtaken by another technology

Great insights into a telco’s mindset and challenges from the buyer’s perspective.

It’s great to see that the Telecom Ecosystem Group (TEG) and TM Forum are both expending effort on finding a better way to bring buyers and sellers together.

Why KPIs, QoS and QoE aren’t good enough. What else must we do?

Earlier this week we talked about the data management challenge being solved, for the most part. However, we still have large gaps in the decision support challenge, mostly because we don’t have tools to help us understand the complex “systems / environments” in which decisions need to be made.

I thought the graph below might help to further articulate this thought.

Source: Rohde and Schwarz

Our OSS and data management solutions are fantastic at collecting and presenting KPIs (Key Performance Indicators), at the bottom of the pyramid. KPIs tend to be nodal in nature. That is, each network device publishes information that it sees locally. Large networks collect and present terabytes’ worth of telemetry and log data every day. This firehose of KPIs generally doesn’t help much on its own. There are too many metrics / KPIs for a human to possibly consume or evaluate, let alone make decisions with. Unfortunately, many OSS stop at the KPI level.

More advanced OSS know that we must help filter the KPIs. This is where quality of the network or Quality of Service (QoS) comes into play. QoS numbers tend to relate to end-to-end indicators such as jitter, packet loss, throughput, etc in packet-switched networks. This is a quantitative measure of the quality of a given service.

Unfortunately, QoS isn’t a direct representation of the customer’s actual experience. As customers, we might experience issues with voice, video, connectivity, etc when using applications whose packets are carried over the network. Quality of Experience (QoE) metrics are used to capture the customer’s perception of how satisfactory the service has been, regardless of how good the QoS metrics look. End-user experience metrics such as voice mean opinion score (MOS) have been developed to bridge the gap between QoS and QoE. QoE metrics are often measured more subjectively or qualitatively than QoS. 
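To illustrate the QoS-to-QoE leap, here’s a small Python sketch that maps two QoS measurements (one-way delay and packet loss) onto a MOS-style QoE score. The shape loosely follows the ITU-T G.107 E-model, but the impairment coefficients are illustrative placeholders rather than calibrated values – the point is simply that respectable-looking QoS numbers can still translate into a mediocre experience.

```python
# Sketch of mapping QoS measurements (delay, packet loss) to a QoE-style score (MOS).
# The impairment terms below are illustrative placeholders, not calibrated values.

def r_factor(one_way_delay_ms: float, packet_loss_pct: float) -> float:
    delay_impairment = 0.024 * one_way_delay_ms          # illustrative delay penalty
    if one_way_delay_ms > 177.3:
        delay_impairment += 0.11 * (one_way_delay_ms - 177.3)
    loss_impairment = 2.5 * packet_loss_pct              # illustrative loss penalty
    return 93.2 - delay_impairment - loss_impairment

def mos_from_r(r: float) -> float:
    """Standard R-factor to MOS conversion."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# Good QoS numbers can still hide a mediocre experience once delay and loss stack up.
print(round(mos_from_r(r_factor(one_way_delay_ms=40, packet_loss_pct=0.1)), 2))   # ~4.4
print(round(mos_from_r(r_factor(one_way_delay_ms=250, packet_loss_pct=2.0)), 2))  # ~3.8, noticeably lower
```

The R-to-MOS conversion at the end is the standard one; everything feeding into it here is simplified for illustration.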

Perfect. As network operators, we now know whether our customers are satisfied with their holistic experience using the network, applications, consuming information, etc. Some more advanced OSS manage to present metrics at the second (QoS) or third (QoE) layer of this pyramid.

Unfortunately, we still have a problem. If we see deterioration in KPI, QoS or QoE numbers, simply presenting this information in graphical format doesn’t actually tell us what we should do to fix the problem or to perform network / capital optimisation activities. The ideal situation is for these fixes / optimisations to be performed automatically – such as self organising / optimising networks (SON).

Most OSS still stop short at point 1 on the chart from the earlier article, providing a multitude of graphs or tables, irrespective of where on the pyramid they are.

What we actually want is specific information about what to do next – decision support: change the e-tilt on an antenna to better target where mobile phone users are, add an extra 1 Gbps of capacity to a bearer link so that customer download speeds improve, replace a copper pillar with a fibre node so that customers don’t experience so many outages, perform a node-split, and so on.

Taking the outputs of our OSS from the pyramid of metrics at point 1 to decision support at point 2 on this chart will make our OSS far more valuable. It will use the power of computers to show us how to continuously optimise the network. Unfortunately, to do so, we have to get far better at complex system modelling / analysis (or what the cool kids call “systems of engagement” according to Tony N – thanks for the heads-up on the latest nomenclature Tony!)

What is the Exciting Next Frontier of OSS Solution Development?

A discussion with an OSS vendor last week reminded me of an article, and a lecture by Mark Zangari that inspired it, which is now nearly 10 years old. It’s worth revisiting because the challenges described are still relevant for OSS today, still not adequately resolved.

At the time, Mark described for his audience at SFU that the data management problem had largely been solved (which is true, although improvements are still appearing on the market).

The diagram below (a slight variation on Mark’s presentation) alludes to the fact that there are comprehensive solutions available for Data, including:

  • Collection
  • Data Management
  • Analytics and
  • Presentation / Visualisation

However, we’ve all seen what Point #1 in this diagram looks like. We’ve seen OSS and/or Business Intelligence (BI) tools that provide us with hundreds of graphs of performance counters. Unfortunately, these still-raw graphs only help take us part-way along the journey to a decision. 

The output of many OSS tools is just raw metrics. BI tools are quite helpful for presenting large volumes of complex data in forms that the human brain can comprehend. The interesting thing is that our brains are already reasonably adept at understanding data. In other words, we’ve solved the part our brains were already good at – we’ve created databases, BI tools, algorithms, etc, etc for capturing, processing and loading data.

But the part that our brains are far less well suited for, the middle box representing the modelling of complex systems and decision support, is yet to be mastered. Instead we largely rely on human intuition to inform what action to take next – to carry us from the raw graphs at point 1 to definitive action/s at point 2. Our brains just aren’t sophisticated enough to consider all the variants and identify the optimal answer in most cases. Computers are typically far more adept at performing option analyses than we are.

Despite this, it seems we haven’t focused much attention on creating tools to help us model the complex interactions of systems (consisting of people, process, technology, funding, etc) that turn raw outputs into optimal decisions. Nearly a decade on from Mark’s oration, there is still very little tooling in the world of OSS that helps us resolve the middle box to make better decisions.

This is the next frontier for OSS solution development.

Flying COWs and OSS. A game-changer for Capacity Management?

A recent article here talked about re-imagining the planning and design process for telco networks. We’ve also been heavily involved with the use of drones to help deliver new ways of working in the telco industry lately too. With those two concepts still fresh in our memory, this article about flying COWs from AT&T really struck a chord.

But what is a COW? Clearly not the four-legged variety that graze in pastures around the world. For those of you not already familiar, a COW in telco vernacular is a Cell on Wheels. In other words, a self-contained, moveable cellular node that can stand up mobile network capacity wherever it is deployed. COWs are often deployed to major events where user density spikes for a short period (eg sporting events), in natural disasters where comms infrastructure has been wiped out and in remote locations where cellular service isn’t reliably available to support search and rescue efforts.

The article from AT&T though is not your typical COW. It’s a flying COW, a Cell on Wings (not Wheels).

What a great way to stand up extra cellular capacity at short notice!

The article doesn’t go into details about the costs of the COW itself or its ongoing operations, so we can only speculate whether it might become a scalable network capacity consideration in future. However, the article does mention that they’re currently tethered, although AT&T’s Art Pregler said the company is “working to autonomously fly without tethers for months without landing, using solar power. This solution may one day help bring broadband connectivity to rural and other underserved communities.”

Naturally, my mind goes to the OSS / BSS implications, factors such as:

  • Combining coverage mapping with demographics to identify ideal locations to hover a flying COW
  • Managing within local regulatory obligations / constraints for unmanned aircraft and radio spectrum licencing
  • Site lease management (if applicable)
  • COW design, configuration and backhaul / transmission
  • Asset lifecycle management (from design, build, commission, operations, relocations, decommission)
  • COW rollout management and handover to operations (including implications such as activating transmission, onboarding for management purposes, orchestration of virtualised workloads and connectivity / slices, address allocation, automated testing, customer connection readiness, charging, etc)
  • Self optimisation of the COW’s performance and backhaul via ongoing configuration tweaks
  • Additional ongoing observability and management where full automation / optimisation isn’t possible
  • Auto-updating inventory and topology views to accommodate the new node when each COW comes online in a new location (including the need for modelling inventory in three-dimensions, not just two)
  • Remote flight-path control (if applicable)
  • Reporting
  • Location management

But thinking further ahead, my mind goes to a more autonomous cellular network, where OSS identify coverage gaps and control squadrons of drones to programmatically leave their hangars and fill in the gaps – performing all the dot-points above (and more) automatically.  🙂
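As a toy illustration of the first step in that loop (not anything AT&T or any vendor has published), the sketch below scores hypothetical grid cells by combining their coverage shortfall with expected user demand to rank candidate hover locations for a flying COW. All field names and weightings are assumptions for the sake of the example.

```python
# Toy scoring pass over hypothetical grid cells: combine coverage shortfall with
# demand (demographics, event data) to rank candidate hover locations for a flying COW.

cells = [
    {"cell_id": "A1", "lat": -37.81, "lon": 144.96, "coverage_pct": 95, "expected_users": 1200},
    {"cell_id": "B4", "lat": -37.92, "lon": 145.10, "coverage_pct": 40, "expected_users": 3000},
    {"cell_id": "C7", "lat": -38.05, "lon": 145.30, "coverage_pct": 10, "expected_users": 150},
]

def gap_score(cell, demand_weight=1.0):
    """Higher score = more un-served demand = stronger COW candidate."""
    coverage_gap = (100 - cell["coverage_pct"]) / 100.0
    return demand_weight * cell["expected_users"] * coverage_gap

ranked = sorted(cells, key=gap_score, reverse=True)
for cell in ranked:
    print(cell["cell_id"], round(gap_score(cell)))
# B4 1800, C7 135, A1 60 -> B4 is the strongest candidate
```

A production version would obviously layer in terrain, backhaul line-of-sight, airspace regulations, flight endurance and much more, but the ranking principle is similar.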

I like what companies like Rakuten are trying to do with their autonomous cellular network ambitions. A drone-based approach potentially does away with some of the physical constraints on those models, such as having to plan around fixed tower / pole positions. It has the potential to take autonomy to a whole new level (notwithstanding regulatory constraints, of course).

What do you think? Is this just pie (or COW) in the sky thinking? Leave us a note in the comment section below.

GIS and BIM Take First Major Steps to Integration. So what?

When you see those two acronyms above, GIS and BIM, do you immediately grasp what this post is going to be about? Do you know what both of those acronyms represent? Their relevance to OSS? They’re not exactly widespread OSS terms.

Many would be familiar with GIS (Geographic Information Systems). We’ve all seen OSS data, especially network topologies, overlaid onto map / GIS views.

How about BIM (Building Information Modeling)? Have you ever come across BIM representations of telco assets? BIMs are used for the 3D representation of objects. They’ve been used by architects, civil engineers, construction workers and many others for years.  We haven’t had much use for them in telco yet. But that’s sure to change once we embrace asset lifecycle management in the third dimension – and end-to-end from design to build to maintain to decommission – to bring greater situational awareness to our widespread workforces.

As reported on Engineering News-Record, two of the dominant players in the network design space – Esri and Autodesk – have announced a collaborative product, which provides a cue for our future-of-work in telco too. Esri’s GIS and Autodesk’s CAD / BIM products had previously been evolving side-by-side for years, and merging the combined benefits of their data sets required significant manual intervention. The Esri – Autodesk collaboration means GIS-to-BIM workflows and experiences are now more seamless.

Why would two massive organisations (Autodesk has a market cap of $45B, Esri is a private company, but whose annual revenues are believed to exceed $1B) seek to work together on joint products? Probably a number of reasons, but the increasing application of digital twins / triplets is surely one market they’re seeking to serve.

This is an exciting space to see what other products follow and evolve to augment our OSS.

Re-imagining network planning? New approaches to plan, design & build networks

In a recent article, we posed many ideas about how future OSS solutions might be re-imagined. We looked into how new approaches and technologies like AR (Augmented Reality) might change solutions and ways-of-working in the near future. This article triggered a conversation about next-generation planning systems with a couple of very clever OSS, orchestration and integration experts.

This conversation thread convinced us to take a closer look at this very important field within telco businesses. And when we say important: the larger telcos allocate billions to network upgrades and augmentations every year, and an optimised return on that investment is essential.

But here’s the twist. Our Blue Book OSS/BSS Vendor Directory now has well over 500 listings, but none (that we’re currently aware of) produce a solution that caters for the entirety of the planning challenge:

  • New build / greenfield – identifying optimum allocation of capital for return on investment, not just for single network layers / domains, but across all
  • Network change management (MACD)
    • Predicted – based on capacity exhaustion or under-use, upgrade / life-cycle management, in-fill builds, specific events (eg catering for large crowds), etc
    • Unplanned – based on won or lost orders, traffic engineering, ramifications of fault-fix activities, widespread damage such as weather events, temporary builds such as COWs (Cells on Wheels), etc
  • Network performance optimisation – to cater for the changing demands on the network
  • New infrastructure – to evaluate, select and then onboard new technologies, topologies, devices, etc

Nor do they go beyond the planning stage and consider a feedback loop that takes data and optimises planning across Plan, Design, Build, Operate and Maintain phases.

Current state of network / capacity planning tools

Whilst there don’t appear to be all-in-one planning solutions (please leave us a comment to correct us if we’re wrong), there are solutions that contribute to the planning challenge, including:

  1. Traditional infrastructure planning and project management tools for PNI / LNI networks  (eg Smallworld, Bentley, Synchronoss, SunVizion, etc)
  2. Automated design tools for generating physical access networks (eg Biarri, Synchronoss, SunVizion, etc). Some only do greenfields, but some are now beginning to cope with brownfields / in-fill designs too
  3. Service design / planning tools by SOM / CPQ / etc types of solutions
  4. SDN (Software Defined Network) planning and automation suites such as Rakuten Symphony for mobile networks
  5. Many network performance tools that allow heat-mapping of network topology to show capacity / utilisation of devices, interfaces, gateways, load balancers, proxies, leased links, etc
  6. Orchestration type tools that coordinate automated tasks as well as generating work orders for manual tasks (eg design and build)
  7. Simulation tools of many sorts that allow network / service designs and configurations to be tested prior to implementation
  8. Other tools that are not OSS or telco specific. This includes Geographical Information Systems (GIS), Computer Aided Design (CAD), data science solutions, demographic analyses, regression test suites, etc

Each of these solution sets is comprehensive in its own right. Combining them across all domains, dimensions and possibilities seems like a nearly impossible task – a massive variant-tree. This probably explains the dearth of all-in-one planning solutions.

 

Opportunities for next-generation network / service planning solutions

Despite the complexity of this challenge, it still leaves us with some really interesting unfulfilled opportunities, including solutions that support:

  • Allocation of capital – at the moment, planning tends to be done with a very broad brush (ie update everything in this exchange area) rather than being more surgical in nature (eg, replace individual branches of a network that need optimising). This pulls on the lever of “easiest to plan” rather than “optimal use of capital / time / resources.” A more targeted approach may take up-to-date signals from finance, assurance, demographics, planning, fulfilment, field-worker / skills availability, spares / life-cycle, VIM, etc data feeds to decide best option at any point in time.
  • Multi-technology mix – like so many other things in OSS, planning tools tend to work best within a single domain (eg fibre access networks), leaving opportunities for generic, cross-domain, multi-layer planning / design
  • Seamless transition from planning, to design, to build instructions – there tend to be a lot of manual steps required to take a design and turn it into instructions for field workers to implement (eg design packs). AR and smart glass technology are likely to completely revolutionise the design-to-build life-cycle in coming years
  • Sophisticated capacity prediction – so far, I’ve really only seen capacity planning in two forms. One, as a blunt instrument, such as performing link exhaustion analysis (eg if capacity is trending towards an 80% utilisation threshold, then start planning additional capacity – see the sketch just after this list). Two, as a customised data science exercise. Sophisticated prediction modelling and visualisation tools could fulfil the opportunity to be more targeted (looping back to the “allocation of capital” point above)
  • Scenario planning / modelling simulations – as we know, nobody can predict the future, but we can simulate scenarios that might give us hints about what the future will hold. A scenario pre-COVID could’ve been that we’re all forced or choose to WFH
  • Auto-capacity scaling – particularly with SDN, having the ability to turn capacity up/down depending on utilisation in real-time (not to mention pre-planning based on time of day / week / month / special-occasions) across multi-domain networks. We helped design and build a rudimentary threshold-based, multi-domain traffic engineering module for an OSS provider back in 2008, so this capability surely exists in more sophisticated forms today. These days, this might not just consider standard metrics like throughput, utilisation, packet-drops, etc but perhaps even looks at optimisation by protocol / slice / service / etc across much larger network dynamics. But this is more aligned to SON solutions. To bring it back to planning, perhaps an opportunity relates more to policy / intent / guard-rail / shaping management?
  • Dynamic pricing – as a result of the previous point, the ability to scale capacity up/down dynamically also provides the ability to provide dynamic pricing to incentivise  preferred customer usage patterns. With increasing focus on sustainability initiatives and energy reduction in the telco sector, smarter capacity allocation is sure to play a part
  • Business continuity – Serious issues inevitably arise on every network, so we have to expect the worst. This is no longer a case of only defend and protect; we must also take a detect and recover mindset. We haven’t yet seen any tools that automate Business Continuity Plans (BCPs) to initiate recovery actions (eg switching entire network or segment profiles / configurations / capacities in certain scenarios) and ensure responses are rapid in crises.
  • Supply Chain Management – with more of the supply chain becoming software-defined, product catalogs potentially assimilate a long-tail of suppliers for telcos to easily build capacity and capabilities with (eg a marketplace of third-party CNF, VNF, API, etc). Allowing telco product teams to design sophisticated new offerings based on internal and third-party services (eg CFS / RFS)
  • Auto Option Analysis – planning often involves considering more than one option and then making a choice on which is optimal. This multi-option approach consumes significant effort, with the unsuccessful options discarded, representing lost efficiency. An algorithmic approach to option analysis has the potential to consider “all” options and auto-generate a business case with options for consideration and approval (eg optimal design for costs, optimal design for implementation time, optimal reduction in risk, optimal minimisation of infrastructure, etc).
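Here’s a minimal Python sketch of the “blunt instrument” link-exhaustion check referred to in the capacity-prediction point above: fit a straight line to recent daily utilisation samples and estimate when the 80% planning threshold will be crossed. The samples and threshold are illustrative only.

```python
# Minimal link-exhaustion forecast: fit a straight line to recent utilisation samples
# and estimate when the 80% planning threshold will be crossed. Purely illustrative.
def days_until_threshold(utilisation_pct: list, threshold: float = 80.0):
    n = len(utilisation_pct)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(utilisation_pct) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, utilisation_pct)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or declining: no exhaustion forecast
    today = utilisation_pct[-1]
    return max(0.0, (threshold - today) / slope)

# Daily samples trending up ~0.5% per day from 60%: about 11 days of headroom left.
samples = [60 + 0.5 * day for day in range(30)]
print(round(days_until_threshold(samples), 1))  # 11.0
```

The more sophisticated opportunity is to replace this single-link, single-metric extrapolation with models that weigh demand forecasts, costs and risk across domains.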

Summarising the planning opportunity

Network and capacity planning is a challenging, but vitally important aspect of running a communications network. Whilst there are existing tools and processes to support planning activities, there doesn’t appear to be a comprehensive, all-in-one planning solution. The complexity of building one to support many different client requirements appears to be a significant impediment to doing so.

Despite this, we have identified a number of opportunities for OSS solutions to help solve the planning challenge in new and innovative ways. But this list is by no means exhaustive.

As always, we’d love to hear your thoughts and suggestions. What solutions have we missed? What could be done differently? How would you tackle the planning challenge in innovative ways? Leave us your comment below.

 

What Exactly Is Real-time? How Valuable Is It?

These are two questions I’ve often pondered in the context of OSS / telco data, but never had a definitive answer to. I just spotted a new report from CEBR on LinkedIn that helps to provide quantification.

What Exactly Is Real-time?

When it comes to managing large, complex networks, having access to real-time data is a non-negotiable. I’ve always found it interesting to hear what people consider “real-time” means to them. For some, real-time is measured in minutes, for others it’s measured in milliseconds. As shown below, the CEBR report indicates that nearly half of all businesses now expect their “real-time” telemetry to be measured in seconds or milliseconds, up significantly from only a third last year. 

This is an interesting result because most telcos I’m familiar with still get their telemetry in 5 minute, or worse, 15 minute batch intervals. And that’s just the cadence at the source (eg devices and/or EMS / NMS). That doesn’t factor in the speed of the ETL (extract, transform, load) pipelines or the processing / visualisation steps. By the time it passes through its entire journey, data can take an hour or more before any person or system can perform any action on that data.
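To put some rough numbers on that journey (purely illustrative assumptions, not measurements from any particular telco), here’s the worst-case arithmetic:

```python
# Rough, illustrative worst-case "data age" budget for a batch-based telemetry pipeline.
# All durations are assumptions for the sake of the arithmetic, in minutes.
collection_interval = 15      # device / EMS / NMS batch cadence
etl_pipeline        = 20      # extract, transform, load
processing_and_viz  = 25      # aggregation, enrichment, dashboard refresh

worst_case_age = collection_interval + etl_pipeline + processing_and_viz
print(f"Worst-case age of an event by the time someone can act on it: {worst_case_age} min")
# ~60 minutes: hard to call that real-time for an AIOps engine making predictive calls.
```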

With consuming systems like AIOps tools, which attempt to make predictive recommendations, the traditional understanding of “real-time” within telco just isn’t up to speed (sorry about the bad pun there).  An hour from event to action can barely be considered real-time.

I’m curious to get your thoughts. At your telco (or your clients, if you’re at a vendor / integrator / supplier), what are the bottlenecks to achieving faster telemetry?

What are the Impacts of Real-time Data (Anomaly detection / reduction indicators)?

The CEBR report also helps to quantify some of the impacts of having data that’s not quite real-time, as indicated in the following four graphs.

These figures indicate telemetry arriving on the order of seconds, which is a much higher proportion of respondents than I had expected.

Interestingly, 100% of telcos in the UK saw a reduction in costs after implementing faster real-time data pipelines (where anomalies result in at least a moderate amount of loss).

How Valuable Is Real-time Data?

I have a foreboding, but possibly mistaken, sense that the telemetry data at many telcos just isn’t fast enough to provide the speed of insights that ops teams need. If so, what is the opportunity cost? How valuable is data that does arrive in real-time? Or to ask another way, how costly is data that is not-quite-real-time enough?

The CEBR report helps to answer these questions too, via the following graphs.

A revenue increase of nearly $300M as a result of real-time data represents a significant gain, with most projected impact coming from America.

I should caveat this by saying the report doesn’t show the methodology for how these numbers are calculated. And having been inside the veil and seen the lack of sophistication behind some analyst estimates, I’m always a bit skeptical about the veracity of these types of numbers.

Despite these question marks, it still seems likely that faster data results in faster insights. 

Regardless, when paired with consuming systems like AIOps, faster insights should translate into tangible benefits (eg being able to fix before fail or resolve via workarounds like re-routing to avoid SLA impacts or swaying a customer on the verge of churning), but also intangible benefits (eg advance notifications to customers).

Faster insights should directly relate to improved customer experiences, as reflected by 9 out of 10 customers suggesting moderate or significant improvement after implementing real-time data initiatives.

I’d love to hear your thoughts and experiences. Has speeding up your processing rates resulted in significant tangible / intangible benefits at your organisation or customers? Please leave us a comment below.

Really interesting OSS cross-over role

I have a few automated job searches set up for OSS roles. Not because I’m looking for a job (just projects), but to keep an eye on what the markets are looking for and what’s trending that I might not be aware of yet.

A really interesting role popped up on one of these searches this morning:
https://www.seek.com.au/job/56858624

No idea which company it’s for, but it’s the first one of its kind that I’ve seen… yet I’ve long thought it is a really important intersection with OSS.

As you’ll notice if you click on the link, it’s for a Senior Security Architect/Designer that specialises in OSS.

Of course I’ve seen job ads for Security Architects and for OSS Architects, but never combined (although I have worked with SAs that have done a lot of telco work and have OSS familiarity, so there’s a few of them out there).

As we all know, OSS and BSS have their tentacles spread throughout a telco’s management stack. They cross all the traditional security trust zones such as:

  • Active Network
  • Corporate Network
  • Enterprise Services
  • Demilitarised Zone
  • Centralised Management and
  • Others

Everyone would already be aware that segmentation and interconnectivity between devices / services within each of these zones is strictly controlled. Security layering can be coarse (eg firewall segmentation) or more granular (service publishing / proxying, service chaining, host based firewalls / IPS, etc)

What fewer people would be aware of is the complex chatter that goes on between the various devices / services in each of the domains. There are:

  • Identity Services (eg directory services, privileged access management, user access management and the roles/privileges/governance that they assign)
  • Access Gateway / Services (eg role-based access, session management, password management, SSO / SAML, secure API gateways, proxies, 2FA)
  • Shared Services (eg NTP, DNS, IPAM, DHCP, SMTP – email, log management, config management, CI/CD solutions, patch management, etc)
  • Security Services

These services often require primary / secondary architectures and related data reconciliation to allow them to function correctly and securely across different domains. For example, some DNS records should be accessible from within the active network, allowing NOC users to resolve addresses within the active network as well as selected external addresses, such as those used for downloading security patches. Conversely, machines inside the corporate domain might be able to resolve DNS records of the patch management server, but not of any devices within the active network. There is a lot of complexity and inter-dependency involved in setting it all up correctly.
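As a toy model of that split-horizon behaviour (hypothetical zone and record names, and deliberately not real DNS server configuration), the sketch below captures which trust zones are allowed to resolve which records:

```python
# Toy model of split-horizon DNS visibility across security trust zones.
# Zone names and records are hypothetical; a real deployment would express this
# in the DNS servers' own configuration (eg per-view zone files), not in Python.

VISIBILITY = {
    # record name: set of trust zones allowed to resolve it
    "patch-server.mgmt.example.net": {"active-network", "corporate"},
    "router01.active.example.net":   {"active-network", "centralised-management"},
    "intranet.corp.example.net":     {"corporate"},
}

def can_resolve(record: str, from_zone: str) -> bool:
    return from_zone in VISIBILITY.get(record, set())

# NOC users inside the active network can reach the patch server...
print(can_resolve("patch-server.mgmt.example.net", "active-network"))   # True
# ...but corporate machines cannot resolve devices inside the active network.
print(can_resolve("router01.active.example.net", "corporate"))          # False
```

In practice this policy lives in the DNS servers’ own per-view configuration and in firewall rules, but expressing it as data like this is also handy for reconciliation and audit.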

But then, taking things out of the solution architecture space, I also envisage possibilities for combining security and OSS data to improve threat detection and incident analysis. Coming back the other way, security patterns could feed signals into the traffic engineering that our OSS perform. Our OSS not only hold detailed logs of everything that’s happening on the active network devices, but also have performance indicators and topology awareness (physical and logical) to better understand attack / kill chains.

I’d love to hear from any of the rare beasts that operate in this crossover security / OSS space. I feel like there are many fascinating discoveries to be made. I’d love to hear your thoughts about the possibilities. Please leave a comment below to discuss.

 

Re-imagining your OSS? Remarkable approaches changing the world of telco

An updated call for innovation in the OSS industry

We are currently living through a revolution for the OSS/BSS industry and the customers that we serve.

It’s not a question of if we need to innovate, but by how much and in what ways.

Change is surrounding us and impacting us in profound ways, triggered by new technologies, business models, design & delivery techniques, customer expectations and so much more.

Our why, the fundamental reasons for OSS and BSS to exist, is relatively unchanged in decades. We’ve advanced so far, yet we are a long way from perfecting the tools and techniques required to operationalise service provider networks.

Imagine a future OSS where:

  1. Buying a new / transformed OSS is a relatively simple experience
  2. Customers find the implementation / integration experience quite fast and easy
  3. Implementers are able to implement and integrate simply, seamlessly and repeatably
  4. The user experience for customers and OSS operators is intuitive, insightful and efficient
  5. It’s not exactly the “zero-touch” that has been discussed, but only requires human interaction / intervention for the rarest of situations

This call for innovation, whilst offering no reward like the XPRIZE (yet), seeks out the best that we have to offer and the indicators of what OSS can become in the future. It provides existing pointers to that future, but seeks your input on what else is possible.

Innovation is not just a better product or technology. It’s a complex mix of necessity, evangelism, timing, distribution, exploration, marketing and much more. It’s not just about thinking big. In many cases, it’s about thinking small – taking one small obstacle and re-framing, tackling the problem differently than anyone else has previously.

We issue this Call for Innovation as a means of seeking out and amplifying the technologies, people, companies and processes that will transform, disrupt and, more importantly, improve the parallel worlds of OSS and BSS. Innovation represents the path to greater enthusiasm and greater investment in our OSS/BSS projects.

In this article we’ll look at:

  • Part 1 – Current State
  • Part 2 – Change Drivers
  • Part 3 – What might an OSS Future Look Like (including “a day in the life” examples)

Part 1 – Current State of OSS

1.1 An introduction to the current state of OSS

At the highest level, the use cases for OSS and BSS have barely changed since the earliest tools were developed.

We still want to:

  • Monitor and improve network / service health
  • Bring appealing new products to an eager market quickly
  • Accurately and efficiently bill customers for the services they use
  • Ensure optimal coordination, allocation and utilisation of available resources
  • Discover operational insights that can be actioned to quickly and enduringly improve the current situation
  • Ensure all stakeholders can securely interact with our tools and services via efficient / elegant interfaces and processes
  • Use technology to streamline and automate, allowing human resources to focus only on the most meaningful activities that they’re best-suited to


The problem statements we face still relate to doing all these use-cases, only cheaper, faster, better and more precisely.

Let’s take those use-cases to an even higher level and pose the question about customer interactions with our OSS:

  1. For how many customers (eg telcos) is the OSS buying experience an enjoyable one?
  2. For how many customers is the implementation / integration experience an enjoyable one?
  3. For how many implementers is the implementation / integration experience a simple, seamless, repeatable one?
  4. For how many customers is the user experience (ie of OSS operators) an enjoyable one?

Today, actual customer experiences in relation to the questions above might best be described as 1) Confusing, 2) Arduous, 3) Uniquely challenging and 4) Unintuitive.
The challenge is to break these down and reconstruct them so that they’re 1) Easy to follow, 2) Simple, 3) Less bespoke and 4) Navigable by novices.

Despite the best efforts of so many brilliant specialists, a subtle sense of disillusionment exists when some people discuss OSS/BSS solutions, to the point that some consider the name OSS, the brand of OSS, to be tarnished. Whilst there are many reasons for this pervasive disappointment, the real root-causes are arguably the big (projects, budgets, teams, complexity and expectations that are too big) and the small (ambition, improvement thinking, experimentation and tolerance of failure that are too small).

We have contributed to our own complexity and had complexity thrust upon us from external sources. The layers of complexity entangle us. This entanglement forces us into ongoing cycles of incremental change. Unfortunately, incremental change is impeding us from reaching for a vastly better future.

1.2 The 80/20 rule of OSS in a fragmented market

The diagram below shows Pareto’s 80/20 rule in the form of a telco functionality long-tail diagram:

  • The x-axis shows a list of OSS functionalities (within a hypothetical OSS product)
  • The y-axis shows the relative value or volume of transactions for each functionality (for the same hypothetical product)

True to Pareto’s Principle, this chart indicates that 80% of value is delivered by 20% of functionality (red rectangle). Meanwhile 20% of value is delivered by the remaining 80% of functionality (blue arrow).

If we take one sector of the OSS market – network inventory – there are well over 50 products / vendors servicing this market.

The functionalities in the red box are the non-negotiables because they’re the most important. All 50+ products will have these functionalities and have had them since their Minimum Viable Product first came onto the market. It also means 50+ sets of effort have been duplicated, with 50+ competitors fighting for the same pool of customers. It also means there are 50+ vendors for a buyer to consider when choosing their next product (often leading to analysis paralysis).

But rather than innovating, trying to improve the functionality that “moves the needle” for buyers (ie the red box), instead most vendors attempt to innovate in the long tail (ie the blue arrow).

Of course innovation is needed in the blue arrow, but innovation and consolidation is more desperately needed in the red box (see this article for more detail).

Part 2 – Change Drivers for OSS

2.1 Exponential opportunities

Exponential technologies are landing all around us from adjacent industries. With them, it becomes a question about how to remove the constraints of current OSS and unleash them.

We’ve all heard of Moore’s Law, which predicts the semiconductor industry’s ability to exponentially increase transistor density in an integrated circuit. “Moore’s prediction proved accurate for several decades, and has been used in the semiconductor industry to guide long-term planning and to set targets for research and development. Advancements in digital electronics are strongly linked to Moore’s law: quality-adjusted microprocessor prices, memory capacity, sensors and even the number and size of pixels in digital cameras… Moore’s law describes a driving force of technological and social change, productivity, and economic growth.”

Moore’s Law is starting to break down, but its exponentiality has proven to be helpful for long-term planning in many industries that rely on electronics / computing. That includes the communications industry. By nature, we tend to think in linear terms. Exponentiality is harder for us to comprehend (as shown by the old anecdote about the number of grains of wheat on a chessboard).

The problem, as described in a great article on SingularityHub, is that the exponentiality of technological progress tends to surprise us as change initially creeps up on us quietly, then overwhelms us in situations like this:

(source: Singularity Hub)

Hardware is scaling exponentially, yet our software is lagging and wetware (ie our thinking) could be said to be trailing even further behind. The level of complexity that has hit OSS in the last decade has been profound and has largely overwhelmed the linear thinking models we’ve applied to OSS. The continued growth from technologies such as network virtualisation, Internet of Things, etc is going to lead to a touchpoint explosion that will make the next few years even more difficult for our current OSS models (especially for the many tools that exist today that have evolved from decade-old frameworks).

Countering exponential growth requires exponential thinking, as described in this article on WIRED. We know we’re going to deal with vastly greater touch-points and vastly greater variants and vastly greater complexity (see more in the triple-constraint of OSS). Too many OSS projects are already buckling under the weight of this complexity.

So where to start?

2.2 Re-framing the Challenge of OSS Innovation

A journey of enlightenment is required. Arguably this type of transformation is required before a digital transformation can proceed. This starts by asking questions that challenge our beliefs about OSS and the customers and markets they serve. This link poses 22 re-framing questions that might help you on a journey to test your own beliefs, but don’t stop at those seed questions.

2.3 Driving Forces Impacting the Telco / OSS Industries

The following forces are driving future changes, both positive and negative, for the OSS industry and the customers it serves:

  1. OSS buyers (eg telcos) face diminished profitability due to competition in traditional services such as network access / carriage
  2. Telcos face a challenge in their ability to innovate (access to skills, capital, efficient coordination, constrained partner ecosystem, etc)
  3. Access to capital (incl. inflation, depreciation and interest rates following massive technology investments)
  4. The centre of gravity of innovation is rapidly shifting from West to East, as is access to skills (migration of jobs, skills and learning to India and China). Far more engineers are minted in China and India, and many OSS tasks are done in these regions, especially hands-on “creation” tasks. This means the vital hands-on “OSS apprenticeships” are largely only served at scale in these regions. As a result of this shift, and the outsourcing / offshoring of core tech skills, many telcos in the West have lost the capability to influence the innovation / evolution of the technologies upon which they depend
  5. Access to energy and energy efficiency are coming under increased scrutiny due to climate change and emissions obligations
  6. Networks, data and digital experiences are increasingly software-centric, yet telcos don’t have “software-first” DNA
  7. Digitalisation and the desire for more immersive digital experiences is increasing, within B2C (eg gaming, entertainment) and B2B (eg digital twins and so much more)
  8. Web3 / metaverse use-cases will intensify this trend
  9. Telcos have traditionally sold (and profited from) the onramp to the digital world (ie mobile phones), but will they continue to for web3-enabled devices like headsets?
  10. Regulatory interventions have always been significant for telco, but as we increasingly rely on digital experience, regulatory oversight is now increasing for entities that rely on communications networks (eg user data privacy regulations like GDPR)
  11. Trust, privacy, but also proof-of-identity are all becoming more important as protection mechanisms as digital experiences increase in the face of increased cyber threats
  12. Cyber threats have the potential to be too advanced for many enterprises’ security budgets
  13. Increased proliferation of technology arriving from adjacent industries introduces challenges for standardisation and ability to collaborate 
  14. The proliferation of disruptive technologies also makes it more difficult to choose the “right” solution / architecture for today and into a predictable future
  15. Society changes and consumer desires in relation to digital experiences are changing rapidly, and changing in different ways in different regions
  16. The telecom market is largely saturated in many countries, nearing “full” penetration 
  17. Many telcos, especially tier-ones, are large, bureaucratic organisations. The inefficiency of these operating models can be justified at large scales of build, but less so when margins are shrinking along with new-user growth (per the earlier full-penetration point)
  18. Shorter attention spans and an emphasis on short-term returns are limiting the ability to perform primary research or enduring problem solving. The appetite for tech literacy, especially for hard challenges (ie ones not already explained in a YouTube video), is diminishing
  19. Geopolitical risk is on the rise
  20. Telco technology solutions are increasingly moving to public / private cloud models, which is beyond the experiences of many telco veterans
  21. There is an increase in proliferation of devices / appliances (by volume, type and behaviour) on enterprise and telco networks, so understanding and/or visibility of nefarious behaviour is harder to identify, leading to greater cyber risks
  22. Miniaturization of electronics frees up space in exchanges, leading to available rackspace, connectivity and power at these valuable “edge” locations. This opens up not just opportunities within the telco, but via partnership / investment / leasing-models as a real-estate opportunity 
  23. Telcos aren’t software-first, nor do they have developer-centred management skills in executive positions. Successful software development requires single-minded focus, whereas software is one of many conflicting objectives for telcos (refer to this failure to prioritise article). Innovation is being seen from software-first business models (refer to this article about Rakuten), where the business is led by IT-centric management / teams rather than telco veterans
  24. Other opportunities like private networks and digital twins for industry seem like opportunities that are more aligned with current telco capabilities

2.4 Future OSS Scenario Planning

Before considering what the future might look like, we must acknowledge that nobody can predict the future. There are simply too many variables at play. The best we can do is propose possible future scenarios such as:

  1. Blue-sky (fundamental change for the better)
  2. Doomsday (fundamental change that disrupts)
  3. Depression (more of the same, but with decay / deterioration)
  4. Growth (more of the same, but with improvement)

From these scenarios, we can make decisions on how to best steer OSS innovation. <WIP link>

Part 3 – What the Future of OSS Might Look Like

3.1 The Pieces of the Future OSS Puzzle

The topics related to this Call for Innovation can be wide and varied (the big), yet sharp in focus (the small). They can relate directly to OSS technologies or innovative methods brought to OSS from adjacent fields but ultimately they’ll improve our lot. Not just change for the sake of the next cool tech, but change for the sake of improving the experience of anyone interacting with an OSS.

The following is just a small list of starting points where exponential improvements await:

  • OSS are designed for machine-to-machine (M2M) interactions as the primary interface. User interfaces are only designed for the rarest cases of interaction / intervention
  • Automations of various sorts handle the high volume, difficult and/or mundane activities, freeing humans up to focus on higher-value decision making
  • These rare interactions will not be via today’s User Interfaces (UIs) consisting of alarm lists, work orders, device configs, design packs, etc. OSS interactions will be via augmented reality, three-dimensional models and digital twins / triplets where data of many forms can be easily overlaid and turned into decisions. Smart phones have revolutionised the way workers interact with operational systems. Imminent releases in smart glass will further change ways of working, delivering greater situational awareness and remote collaboration
  • Decision support will guide the workforce in performing many of their daily actions, especially using augmented reality devices
  • OSS won’t just coordinate the repair of faults after they occur in systems and networks. They will increasingly predict faults before they occur, but more importantly will be used to increase network and service resiliency to cope with failures / degradations as they inevitably arise. This allows focus to shift from networks, nodes and infrastructure to customers and services
  • Will make increasingly sophisticated and granular decisions on how to leverage capital based on the balance of cost vs benefit. For example, taking inputs such as capacity planning, assurance events, customer sentiment, service levels and fieldwork scheduling to decide whether to fix a particular failure (eg cable water ingress) or to modernise the infrastructure and customer service capabilities
  • Use every field worker touch-point as an opportunity to reconcile physical infrastructure data to help overcome the challenge of data quality. This can be achieved via image processing to identify QR / barcode / RFID tags or similar whilst conducting day-to-day activities on-site. Image processing is backed up by sophisticated asset life-cycle mapping
  • Greater consolidation of product functionality, especially of the core features (as shown in the red box within the 80/20 diagram above)
  • Common, vendor-neutral network configuration data models to ensure new network makes, models and topologies are designed once (by the vendor) and easily represented consistently across all OSS globally (eg OpenConfig project)
  • True streaming data collection and management (eg telemetry, traffic management, billing, quality of service, security, etc) will be commonplace, as opposed to near-real-time (eg 15 min batch processing). Decisions at the speed of streaming, not 30+ minutes after the event like many networks today. [At the time of writing, one solution stands ahead of all others in this space]
  • A composable and/or natural language user interface, like a Google search, with the smarts to interrogate data and return insights. Alarm lists, trouble tickets, inventory lists, performance graphs and other traditional  approaches are waiting to be usurped by more intuitive user interfaces (UIs)
  • Data-driven search-based interactions rather than integration, configuration or programming languages wherever possible, to cut down on integration times and costs
  • Service / application layer abstraction provides the potential for platform sharing between network and OSS and dove-tailed integration
  • New styles of service and event modelling for the adaptive packet-based, virtual/physical hybrid networks of the future
  • A single omni-channel thread that traces every customer omni-channel interaction flow through OSS, BSS, web/digital, contact centres, retail, live chat, etc, leading to less fall-outs and responsibility “flicking” between domains in a service provider
  • Repeatable, rather than customised, integration. Catalogs (eg service catalogs, product catalogs, virtual network device catalogs) are the closest we have so far. Intent abstraction and policy models follow this theme too, as does platform-thinking / APIs.
  • Unstructured, flexible data sets rather than rigid, structured data models. OSS has a place for graph, time-series and relational data models. Speaking of data, we need new data platforms that easily support the cross-linking of time-series, graph and relational data sets (the holy trinity). A simple illustrative sketch of this cross-linking follows after this list
  • Highly de-centralised and/or distributed processing using small footprint, commoditised hardware at the provider and/or customer edge rather than the centralised (plus aggregators) model of today. This model reduces latency and management traffic carried over the network as only meta-data and/or consolidated data sets are shipped to centralised locations
  • The wow factor / usability is in the graphics and user-interface, the value (business case) is in the graph (the data)
  • A standardised, simplified integration mechanism between network and management on common virtualised platforms rather than proprietary interfaces connecting between different platforms
  • An open source core that anyone can develop plug-ins for, leading to long-tail innovation, whilst focusing the massive brainpower currently allocated to duplicated efforts (due to vendor fragmentation)
  • Cheaper, faster, simpler installations (at all levels, including contracts between suppliers and customers)
  • Transparency of documentation to make it easier for integrators
  • Ubiquitous training / learning programs – like what Cisco has achieved with its CCIE (and similar) certifications (not to mention the unlikely competitive advantage it has delivered)
  • Capable of handling a touchpoint explosion as network virtualisation and Internet of Things (IoT) will introduce vast numbers of devices to manage
  • Ruthless simplification of inputs leading into the OSS/BSS “black-box” (eg drastic reduction in the number of product offerings or SKUs and product variants) and inside the black box of OSS/BSS (eg process complexity, configuration variants, etc)
  • Machine learning and predictive analytics to help operators cope with the abovementioned touchpoint explosion
  • Increasing the perception of value provided by OSS, reversing the pervasive sentiment that OSS can only ever be a cost centre. OSS are too important to be mere cost centres, but the messaging of that value needs to come from us
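
To make the data cross-linking idea from the list above more concrete, here is a minimal Python sketch. It is illustrative only (the entity names, fields and structures are invented, not drawn from any particular product), but it shows how a topology graph, a relational service table and a time-series telemetry store can be joined via shared identifiers to answer an impact question.

```python
# Hypothetical, simplified example of cross-linking the "holy trinity":
# graph (topology), relational (customers / services) and time-series (telemetry).
# All names and values are illustrative only.

topology_graph = {                      # graph: which element feeds which
    "router-01": ["switch-07", "switch-08"],
    "switch-07": ["cpe-1234"],
}

service_records = [                     # relational: customer / service table
    {"service_id": "SVC-9001", "customer": "Acme Pty Ltd", "cpe": "cpe-1234"},
]

telemetry = {                           # time-series: metric samples per device
    "switch-07": [("2022-08-03T10:00Z", "cpu_util", 92.5)],
}

def services_impacted_by(device):
    """Walk the graph from a degraded device and join to customer records."""
    downstream = set(topology_graph.get(device, []))
    return [s for s in service_records if s["cpe"] in downstream]

# A CPU spike on switch-07 can now be traced straight to the affected customers.
print(services_impacted_by("switch-07"))
```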

3.2 Increasing Trust to Support Web3

The digital experiences we rely on today are evolving.  The third generation of the Internet (Web 3.0) is on the horizon, with many of its necessary elements already taking shape (eg blockchain / crypto-currencies, digital proof-of-ownership, virtual worlds, etc). It will essentially be a more immersive, secure, private, user-friendly and de-centralised version of the Internet we know today.

Trust will be a fundamental element of society’s up-take of web3 technologies. Access to these experiences will occur via the on-ramp of communications networks. Being regulated in their local jurisdictions, telcos have an opportunity to act in the role of privacy, security and consumer protection stewards for everyone entering the world of Web3. Leveraging the long-held position of trust that telcos have with their business and residential customers, OSS/BSS have the opportunity to deliver trust mechanisms for Web3.

Whether, and how, we tap into this opportunity remains to be seen.

3.3 A Day in the Life of Future OSS Users

Many people talk about the possibility of a zero-touch future OSS. I don’t foresee that, but do see a future of lower-touch, smarter-touch, different touch. The entire way we will interact with our OSS will change fundamentally – from dealing with our two-dimensional device screens (eg PCs, phones and tablets) to future devices that allow us to have enriched experiences in three dimensions – in reality and with augmented realities.

Capacity Planner – The CAD designs of the past were necessary because field workers needed printed design packs that showed network changes. Field workers needed to translate these drawings and designs into the three-dimensional worlds they experienced. Since capacity planners perform designs remotely, they make design decisions with incomplete awareness of the site (eg site furniture, etc). In the future (and today), designers will have 3D photogrammetric models of the site and can perform adds/moves/changes directly onto the model. Most of these designs will be generated automatically based on cost-benefit analysis. However, a human may be required to perform a quality audit of the design or generate any bespoke designs that aren’t catered for by the auto-designer. For example, certain infrastructure changes may be required before a new device type (make and model) can be dragged onto the 3D model along with other relevant annotations.

Field Worker – Field workers are already using mobility devices to aid the efficiency of work on site. This will change further when field workers use augmented reality headsets to see network change designs as overlays whilst they’re working. They will see the new device location marked on the tower (as described above) and know exactly which piece of equipment needs to be installed where. Connection details will also be shown, and image processing on the headset will identify whether connectors have been wrongly connected. Even installation guides will appear via the heads-up display of AR to aid techs with the build.
Similarly, a fibre splice technician will be guided as to which fibre strand / tube to splice to which other strand / tube, as image processing will identify cables and colours and match them up with designs. Where there are any discrepancies between field records and inventory records, these will be reconciled immediately, without the need for returning red-line markup as-built drawings to be transcribed into the OSS.
Perhaps most important is the automation that keeps passive infrastructure reconciled whilst field techs perform their daily activities. We all know that data quality deteriorates as it ages. Since passive infrastructure (racks, cables, patch panels, splice boxes, etc) is unable to send digital notifications, their data tends to be updated only through design drawings, which are rarely touched. However, field workers “see” this infrastructure whilst they’re on site. As field workers will now upload imagery of the site they’re working on (as photos or AR streams), image processing will automatically identify QR / barcode / RFID tags to identify where assets are in space and time, thus providing a “ping” on the data to refresh it and reconcile its data accuracy. Entire asset life-cycles are better tracked and correlated with who was on site when life-cycle statuses changed (eg adds / moves / changes in location or configuration).
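
As a rough illustration of this reconciliation loop, the logic might look something like the Python sketch below. It is purely a sketch: the record fields are hypothetical and the tag-decoding step is stubbed out where an image-processing library or vision service would normally sit.

```python
from datetime import datetime, timezone

# Hypothetical inventory store: asset tag -> inventory record.
inventory = {
    "QR-000123": {"asset": "Splice closure SC-17", "site": "Tower-042",
                  "last_verified": "2019-05-01T00:00:00+00:00"},
}

def decode_tags(image_bytes):
    """Placeholder for the QR / barcode decoding step. In practice an
    image-processing library or cloud vision service would return the tag
    identifiers visible in the field photo or AR stream."""
    return ["QR-000123"]          # illustrative result only

def reconcile(image_bytes, site, technician):
    """'Ping' every asset sighted in a field photo so its data stays fresh."""
    for tag in decode_tags(image_bytes):
        record = inventory.get(tag)
        if record is None:
            print(f"Unknown tag {tag} at {site}: raise a data-quality exception")
            continue
        record["last_verified"] = datetime.now(timezone.utc).isoformat()
        record["verified_by"] = technician
        if record["site"] != site:
            print(f"{tag} recorded at {record['site']} but sighted at {site}")

reconcile(b"...photo bytes...", site="Tower-042", technician="tech-88")
```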

Data Centre Repair Technician – Since most infrastructure in a data centre will be standardised, commoditised and virtualised, the primary tasks will be replacing failed hardware and performing patching. DC Techs will do a daily fix-run where their augmented reality devices will show which rack and shelf contain faulty equipment that needs to be replaced. Image processing will even identify whether the correct device or card is being installed in place of the failed unit.  AR headsets will also guide DC techs on patching / re-patching, ensuring the correct connectivity is enabled.

NOC (and/or WOC and SOC) operator – As with today, a NOC operator will monitor, diagnose and coordinate a fix. However, with AIOps tools automatically identifying and responding to all previously identified network health patterns, the NOC operator will only handle the rare event patterns that haven’t been previously codified. For these rare cases, the NOC operator will have an advanced diagnosis solution and user interface, where all network domain data and available data streams (events, telemetry, logs, changes, etc) can be visualised on a single palette. These temporal/spatial data streams can be dragged onto the UI to aid with diagnosis, although the AIOps will initially present the data streams on the palette that contain likely anomalies (rather than the hundreds of unassociated metric graphs that will be available from the network). The UI to support the rare cases will look fundamentally different to the UI that supports bulk processing today (eg alarm lists and tickets).
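
A heavily simplified sketch of that “show only the anomalous streams” behaviour follows. The scoring rule here is a naive z-score stand-in, invented for illustration, rather than whatever pattern recognition a given AIOps engine actually applies.

```python
from statistics import mean, stdev

# Hypothetical metric streams collected for an incident window.
streams = {
    "core-rtr-1 cpu":     [41, 43, 42, 44, 97],
    "core-rtr-1 traffic": [120, 118, 121, 119, 122],
    "edge-sw-9 errors":   [0, 0, 1, 0, 14],
}

def looks_anomalous(samples, threshold=3.0):
    """Naive stand-in for AIOps scoring: flag a stream whose latest sample
    sits more than `threshold` standard deviations away from its history."""
    history, latest = samples[:-1], samples[-1]
    sd = stdev(history) or 1.0
    return abs(latest - mean(history)) / sd > threshold

# Only the streams worth a human's attention land on the diagnosis palette.
palette = [name for name, s in streams.items() if looks_anomalous(s)]
print(palette)   # ['core-rtr-1 cpu', 'edge-sw-9 errors']
```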

Command and control (C&C) – Since the AIOps and SON (self-organising network) tools handle most situations automatically (eg auto-fix, self-optimise, auto-ticket), only rare situations require command and control mechanisms by the NOC team. However, these C&C situations are likely to be invoked during crisis, high severity and/or complex resolution scenarios. Rather than handling these scenarios by tick(et) and flick, the C&C tool will provide collaborative control of resources (people and technology) using sophisticated decision support. The C&C solution will be tightly coupled with business continuity plans (BCP) to drive pre-planned, coordinated responses. They will be designed to ensure BCPs are regularly tested and refined.

System Administrators – These teams will arguably become the most important people for whom OSS user interfaces (UIs) will be designed. These users will design, train, maintain and monitor the automations of future OSS. They will be responsible for keeping systems running, setting up product catalogs, designing workflows such as orchestration plans, identifying AIOps event patterns and response workflows, etc. They will be responsible for system configurations and data migrations to ensure the workflows for all other personas are seamless, immersive and intuitive. Whereas other persona UIs will be highly visual in nature, the dedicated UIs of system administrators are likely to look like the OSS UIs that we’re familiar with today (eg lists / logs, BPMN workflows, configs, technical attributes, network connectivity / topology maps, etc).

Product Designers – The product team will be provided with a visual product builder solution, where they can easily compose new offerings, contracts, options/variants, APIs, etc from atomic objects already available to them in the product catalog. Product Designers become the Lego builders of telco, limited only by their imaginations.

Marketing – The marketing team will be provided with sophisticated analytics that leverage OSS/BSS data to automatically identify campaign opportunities (eg subscribers that are churn risks, subscribers that are up-sell targets, prospects that aren’t subscribers but are within a designated coverage area, such as within 100m of a passing cable, etc).
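
As a toy illustration of turning OSS/BSS data into those campaign lists (the fields, scores and thresholds below are entirely fictional):

```python
# Fictional subscriber / prospect records blended from OSS/BSS sources.
records = [
    {"id": "S1", "subscriber": True,  "churn_score": 0.82, "plan": "basic"},
    {"id": "S2", "subscriber": True,  "churn_score": 0.10, "plan": "basic"},
    {"id": "P9", "subscriber": False, "distance_to_cable_m": 60},
]

churn_risks = [r["id"] for r in records
               if r.get("subscriber") and r["churn_score"] > 0.7]
upsell_targets = [r["id"] for r in records
                  if r.get("subscriber") and r["plan"] == "basic"
                  and r["churn_score"] < 0.3]
near_net_prospects = [r["id"] for r in records
                      if not r.get("subscriber")
                      and r.get("distance_to_cable_m", 10**9) <= 100]

print(churn_risks, upsell_targets, near_net_prospects)   # ['S1'] ['S2'] ['P9']
```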

Sales Teams – Most sales will occur via seamless self-service mechanisms (eg portals, apps, etc). Some may even occur via bundled applications (where third parties have utilised a carrier’s Network as a Service APIs to auto-generate telco services to be bundled with their own service offerings). Salespeople will only work with customers for rarer cases, but will use a visual quote and service design builder to collaborate with customers. Sales teams will even be able to walk clients through reality twins of service designs, such as showing where their infrastructure will reside in racks (in a DC), on towers, etc.

4. What do you think the future of OSS will look like?

We don’t claim to be able to predict the future. Many of the examples described above are already available in their nascent forms or provide a line-of-sight to these future scenarios. It’s quite likely that we’ve overlooked key initiatives, technologies and/or innovations. We’d love to hear your thoughts. What have we missed or misrepresented? Are you working on any innovations or products that you’d love to tell the world all about? Leave us a comment in the comment box below to share your predictions.

How to define a digital twin, digital triplet and reality twin?

The OSS we’ve been building for the last 30+ years aren’t too dissimilar to the digital twins the world is getting all excited by today.

I find it really interesting, not just because of the buzz, but because the term “digital twin” seems to mean different things to different people. Having seen what OSS are capable of, I’m somewhat underwhelmed by what people are calling their digital twins.

But within the context of what OSS have long been able to do, I’d like to define a few terminologies, including:

  1. Digital twins
  2. Digital triplets
  3. Reality twins
  4. Reality triplets

No idea if anyone else does or will agree with these terminologies, but at least they will provide a baseline to help me describe this world that’s significantly overlapping with OSS.

Digital Twin

First coined in the early 1990s (by David Gelernter), a digital twin is a virtual representation of a real-world entity. In the world of OSS, this is really just an inventory or asset management system. You have a visual representation of all the network devices and how they interconnect. Typically this represents physical inventory items (eg routers, switches, patch panels, cables, etc). But it could also represent logical inventory too (eg virtual circuits, virtual machines, etc).

Whilst the data in the digital twin may change from time to time, it remains relatively static (eg daily reconciliation).

The diagram below provides an example of a digital twin (in this case showing a topological view of a communications tower in my OSS sandpit):


Digital Triplet

A digital triplet is similar to a digital twin, but with one distinct difference. It also shows variable data, such as telemetry, that is changing on a regular basis. This could show the near-real-time performance metrics / graphs of devices in the network.

The OSS example below shows the network devices (circles), connectivity and telemetry (eg CPU and RAM inner circles, throughput, discards, etc) of a digital triplet.
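
To ground the twin / triplet distinction in something concrete, here is a minimal Python sketch (the device names and metrics are fabricated): the same static inventory model stands alone as a digital twin, and becomes a digital triplet once changing telemetry is overlaid.

```python
# Illustrative only: a tiny inventory model acting as a "digital twin",
# which becomes a "digital triplet" once variable telemetry is attached.

digital_twin = {
    "tower-042": {
        "devices": ["rru-1", "rru-2", "router-a"],
        "links": [("router-a", "rru-1"), ("router-a", "rru-2")],
    }
}   # relatively static: reconciled daily from inventory / asset systems

live_telemetry = {   # changes continuously (streamed or frequently polled)
    "router-a": {"cpu_pct": 37.0, "throughput_mbps": 840.2, "discards": 3},
    "rru-1":    {"cpu_pct": 12.5, "throughput_mbps": 310.9, "discards": 0},
}

def digital_triplet(twin, telemetry):
    """Overlay the variable data onto the static model."""
    return {
        site: {
            "links": data["links"],
            "devices": {d: telemetry.get(d, {}) for d in data["devices"]},
        }
        for site, data in twin.items()
    }

print(digital_triplet(digital_twin, live_telemetry))
```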

The examples above demonstrate that OSS have been digital twins / triplets for decades. Other industries are just starting to realise how our tools, data, processes and knowledge can be leveraged.

I’m really excited about taking the two-dimensional construct shown in the digital triplet above and turning it into an interactive three-dimensional model of a physical / logical / real-time network graph.

To my knowledge though, the reality twins / triplets we describe below are only just starting to appear with advances in 3D imaging technology becoming commercially available.

 

Reality Twin

The reality twin is similar to a digital twin, but shows a photorealistic representation of the assets. It provides greater spatial awareness of a site for remote workers. The example below shows a reality twin, a 3D model of a comms tower site, with assets such as antennas, remote radio units, mount groups and other appurtenances annotated (and reconciled with the inventory / OSS data sets).

 

Reality Triplet

Like the digital triplet, the main distinction between the twin and the triplet is the fact that the visualised data is changing dynamically. The Augmented Reality view of the tower below is only annotated with inventory data. However, the engine that drives it can also inject streaming data, such as telemetry relating to the assets on the tower from an alarm management or performance management solution in your OSS. It could also be overlaid with environmental data such as wind speed, temperature and air pressure if a weather station is mounted on the tower… or any other data you want to visualise for that matter.

Each of the four examples above provides situational enrichment for field workers, remote workers and customers alike. Combined with the extensive data our OSS and BSS collect, these visualisation techniques provide an exciting array of possibilities across the entire network and service life-cycle (plan, design, build, operate and maintain).

Of course there are any number of other types of twins (process twins, component twins, etc), especially as they apply to industries other than telco.

As mentioned earlier, there are many different definitions of what a Digital Twin is. Does the twin / triplet terminology used above resonate with you, especially as it relates to your OSS and real-world networks / assets? Please leave us a comment below to share what your definition of a digital twin is.

How valuable can OSS and IoT data be?

When people talk about the value of OSS data, the discussion invariably turns to privacy and some of the unconscionable things already done with our personal data. Off the top of my head, I can’t remember any telcos blatantly misusing the highly privileged information they have access to.

This is possibly a double-edged sword. It hints at the trust that has been earned over decades by many of the telco brands with our personal data. But it also possibly suggests that the telcos haven’t looked to monetise the data they have access to as aggressively as some other organisations have done.

Telcos have generally monetised by subscription to services rather than offering free services that are (possibly) surreptitiously funded by advertisers, et al. Vastly different business models. Telcos have seemed to be more cautious and stringent with the de-personalisation of any data they collect and use (although maybe that’s just a case of me being on the outside looking in to many of these organisations).

A recent discussion with an expert from an Internet of Things (IoT) dashboard / visualisation provider gave me an insight into the possibility for telcos to use data that is not just de-personalised, but was never even personalised to begin with.

His company connects to all sorts of sensors for smart city projects. Sensors that gather data across temperature, humidity, noise, air quality, parking bays, lighting, people / vehicle movement, number of mobile devices connecting to access points and much more. He told me he can tell when storms are brewing in the cities they monitor because he can see degradation in air quality long before anything appears on weather radars.

Telcos already have towers within close proximity to large swathes of the world’s population. Having these types of sensors (and more) mounted to every tower could provide additional sources from which to unlock really valuable insights and streaming decision support. Combine that with the wealth of knowledge available about our networks, the number of people connected, the number of services active, the volume of data being consumed, the geo-location of the crowds, etc, all of which has no individual personal identifiers.

Imagine if the GPS in our cars not only routed us around traffic snarls, but also around areas where air quality is poor or noise levels are dangerously high. Or perhaps guided us to the optimum combination of available car parking spaces and distance to walk to a sporting event we’re attending.

But it’s not the patterns we can imagine that are exciting. The thing I love even more about having access to these diverse streams of data is the potential insights we can unlock for civilisation, enterprise, etc that we could never have even imagined. Early predictions of storms and who knows what else?

Watch this space too as we’re about to start on some experiments using a new data tool that can visualise:

  • Tens/hundreds of millions of data points
  • Entities (eg devices, vehicles, etc) mapped across telemetry, space and time (temporal-spatial) 
  • Interactive layers that can be readily overlaid, turned on/off, and interacted with to highlight or de-highlight certain data trends

I’m really excited about the type of experiments we’re hoping to do with it.

 

Is your OSS like a broken watch – only accurate twice a day?

The broken watch analogy.

When the second hand on your analog watch stops moving, you’ll quickly realise that your watch has broken. You’ll know exactly what to do next. You won’t assume that time is broken or is somehow standing still. You’ll immediately recognise that your watch needs fixing, re-winding or replacing.

A watch is an amazing piece of machinery, with a level of complexity that few of us can fully comprehend, much less be able to build ourselves. Despite this, the user interface, the watch-face, is so elegantly simple that almost anyone can immediately understand – not only the current metrics (time and date), but also its operational status (working, broken or perhaps even degraded if the time shown isn’t accurate).

Once we start dealing with more complex systems, such as multi-technology telecommunications networks, we can easily lose our comprehension / awareness of the current metrics and status of the network:

  • Is it working correctly?
  • Is it working within expected tolerances?
  • Do we even know whether it’s working at all?

It’s also important to note that we need to ask these same three questions not just of the network, but of each customer service that is running across the network. Furthermore, it may also make sense to ask these questions of each of the devices and links that make up the network.

An end-user of a customer service is generally only interested in the watch-face for their specific service/s. Is the watch-face you provide them elegant enough to be able to answer the three questions above?

A network operator doesn’t have just one watch-face to monitor, but hundreds / thousands / millions. They can’t just have a screen with that many watch-faces; that’s far too inefficient to work with (although I have seen examples of OSS that do just that!!).

Instead, operators need to be able to toggle between an individual watch-face (of a single user) and an aggregated watch-face that answers the three questions above for all services, devices, sub-networks and the network holistically.

For a network operator, there are likely to be other layers of granularity too. Not just “whole of network” and “individual user”, but filters that allow an operator to narrow in on network performance by dimensions such as the following (a simple filtering sketch appears after this list):

  • Time ranges
  • Geographic regions
  • Topology zones
  • Network / vendor / domain types
  • Metric types
  • Customer/s
  • etc
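
The sketch below shows the filtering idea in miniature. It is a toy Python example (the field names and values are invented); real NPM solutions obviously do this at vastly greater scale and speed.

```python
# Toy example of slicing the same metric store by different dimensions.
samples = [
    {"ts": "2022-08-03T10:00Z", "region": "VIC", "domain": "5G",
     "metric": "latency_ms", "customer": "Acme", "value": 18.0},
    {"ts": "2022-08-03T10:00Z", "region": "NSW", "domain": "IP/MPLS",
     "metric": "latency_ms", "customer": "Beta Co", "value": 9.5},
]

def narrow(samples, **filters):
    """Return only the samples matching every requested dimension."""
    return [s for s in samples
            if all(s.get(k) == v for k, v in filters.items())]

# The same data answers a whole-of-domain question or a single-customer one.
print(narrow(samples, domain="5G"))
print(narrow(samples, customer="Beta Co", metric="latency_ms"))
```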

Some network operators use standard business intelligence (BI) tools to plot their watch-faces. Invariably, that leads to the problem described above: a screen full of dials, which is challenging to work with and make decisions upon.

Dedicated Network Performance Management (NPM) solutions are generally designed with operators in mind. Not only do they collect metrics at big data volumes (in some cases billions of xDRs), but they do so across different networks, domains and technologies (IP/MPLS, 5G, LTE, etc.).

NPMs provide operators with flexibility in the layers of granularity of data presented and the network intelligence shown. More importantly, they (hopefully) provide a user-interface that allows operational staff (and/or integrated systems) to respond to any deviations in expected behaviour, at macro or micro levels.

Just like a watch, it’s not always just the metric (time / date) or whether it accurately presents the current situation that are important. More important is how you use that information to decide what to do next.

If your current network performance management solution is not giving you an elegant way of deciding what to do next, it might be time for a new NPM solution. Don’t let your network performance monitoring solution be like a broken analog watch that only shows a seemingly accurate result twice a day.

The most comprehensive analysis I’ve seen of the OSS / BSS market

I’m delighted to share with you that Houlihan Lokey, a global investment bank, has just launched the most comprehensive market analysis I’ve seen covering connectivity software including OSS / BSS. I’m possibly biased though as I’ve been lucky enough to have played a small part in its direction and content.

Most of the reports that I’ve seen from this sector cover 20-30 organisations at best. This Houlihan Lokey report references hundreds of leading vendors. But there’s one other key difference about this report.

Whereas other reports are designed to help network operators identify which OSS/BSS vendors provide suitable capabilities for their needs, this report is designed to inform investors about the opportunities that exist at the convergence of software and telecom. In essence, software enabling telecoms and, more generally, connectivity (oftentimes for non-telco sectors, such as enterprise, energy, healthcare, and media). Oh, and the vendor segmentation provided in the report still helps match vendors to network operators’ functional needs too.

As a highly fragmented market with well over 400 participating vendors, there is ample room for investors to influence and optimise the use of capital in the ecosystem as well as facilitating new innovation models.

The report does a deep-dive market analysis with the following five important sections:

  1. Market Dynamics
  2. Software Transformation Case Studies
  3. Ways Hyperscale Cloud Is Enabling Innovation
  4. Subsector Overviews
  5. Industry Insights

At this stage, the full report is being made available to selected companies and investor groups. If you’re interested in reading my contribution to Houlihan Lokey’s report, I would be delighted to connect and share more. And, if you’re evaluating M&A or capital raising opportunities, I’d be happy to introduce you to my friends at Houlihan Lokey.

[Update: HL has now made the exec summary of the report available for download and the full report available upon request – https://hl.com/insights/insights-article/?id=17179875243 ]

What if your career in telco were to end tomorrow?

Are you a dyed-in-the-wool telco person? Has your whole career, or a majority of it, been built around the telco industry?

If your career in telco were to cease tomorrow, what would you do next? 

As we mentioned in our previous article on “Burning Platforms”, the traditional telco business is trending towards an asymptote (see the black curve in the diagram below). But today’s post looks at the future of telco even closer to home – not the industry and its organisations, but the careers of all of us people in telco.

S-curve image courtesy of strategictoolkits.com

Telco networks are still vitally important, but the profit engine of traditional telco is expiring. With it goes innovation, projects, jobs. All of those things are facilitated by profitability. Many telcos in developed nations have long since moved into a financial engineering, asset selling and incremental innovation phase. Those aren’t indicators of rapid improvement on an S curve. They’re indicators of an asymptote.

Alternatively, what if there were a doomsday event that disrupts the market, slashing the relevance of traditional telco models and/or networks? 

Or what if you work for a supplier that is heavily dependent on telcos (or your role at the supplier is dependent on telco)? What if your telco customers’ access to capital dries up after they embark on massive 5G roll-outs, re-tooling and re-skilling that doesn’t bear fruit? Does that impact your employer’s revenues and the viability of your roles / projects?

If you’re a battle-worn telco person, where do you go next? What’s your red curve? What skills, assets, connections, etc will carry you into the red curve? Don’t have a red curve? Are you already on the lookout to find one?

I’ve seen many telco experts transition to roles with hyperscalers recently. They’ve sought out their red curve by investing in cloud awareness / skills and making the leap to telco cloud.

But cloud is far from being the only red curve candidate. Telco has precipitated a Cambrian explosion of red curves over many decades, yet hasn’t really been able to find its own of late.

Cloud is telco’s Kodak moment. As George Glass mentioned in his podcast with us, he was able to spot the opportunity of cloud at BT around the turn of the century, but the hyperscalers have usurped the telcos with that business / technology model.

The internet red curve was facilitated by telco and partly leveraged (via Internet connectivity), yet other opportunities were missed. Many telcos are now trying to transform their DNA to software but aren’t really proving to be as good at it as other software-first organisations (possibly for reasons mentioned in Bert Hubert’s video cited in the Burning Platform post).

There are many better red curve candidates in my opinion, ones that are better suited to the many strengths that telcos still retain.

But my red curve/s, the ones I’ve been investing in over the last few years, aren’t the same as yours. You have different strengths, skills, connections and capabilities. The question remains, “what would you do next if your telco career were to cease tomorrow?”

Just as the Burning Platforms post suggested that telco needs to quest to find its red curve/s, so do we telco/OSS/BSS experts need to find our own jumping-off points. The burning platform is as much for us as it is for the telco industry.

Take a look at the second video in the Burning Platforms post. It provides some great suggestions about embarking on a pioneering quest to help find your red curve/s.

How to get Telco off its Burning Platform

I have a couple of really interesting videos to share with you today. I think they’re both brilliant, though perhaps I’m biased because they largely reinforce thoughts I already share with the stars of each video. Well, reinforce, but they also significantly expand the thinking with some enlightening dots that I hadn’t previously connected.

Nobody is going out on a limb to suggest that our lives are increasingly impacted by digital experiences. In the storefront, our loungerooms, our games rooms, on the move anywhere, these experiences are enabled by the digital devices we interact with and communicate through. These interactions and communications in turn are largely supported by telecommunications networks.

So, telcos clearly play a crucial role in our modern lifestyle. Arguably more so than ever before. Yet telcos, generally speaking, have also arguably never been more threatened in terms of their business viability. Profitability is down (generally), but costs are increasing and so are risks.

The traditional telco business model is a burning platform.

The Ernest Hemingway quote springs to mind – “How did you go bankrupt?” “Two ways. Gradually, then suddenly.”

Yet the need for comms networks isn’t going away any time soon. In fact their need and importance is only escalating. So, between these two great opposing forces, it’s almost inevitable that change of significant magnitude is ahead. Like tectonic plates moving ever so slowly, but ultimately grinding and colliding, eventually causing a release of force of enormous magnitude.

Fault lines are forming, smaller tremors have already been felt. But the big quakes are surely coming.

The OSS and BSS industry is co-dependent with the telco industry. Dependent upon the success of service providers, but also influential in their relative effectiveness and profitability.

I’ve increasingly seen telco and OSS as nearing (passing?) the “next wave” stage of the S curve:

S-curve image courtesy of strategictoolkits.com

The question hangs in the air – what does the next wave look like? What skills, talents, networks, technologies, etc can be carried forward into the next opportunity for growth?

The success of OSS and telco are tightly coupled but also have their own identities. Do they continue to surf the same “next wave” (red curve above) together, or does each industry find disparate waves?

As referred to in a recent article about the great telco tower sell-off, Bert Hubert has published a brilliant video describing how telcos have outsourced and delegated their technological edge. They’re in the phase of selling off assets and divesting skills to support the financial engineering of their organisations (I’m generalising wildly here, as not all telcos fall into this category, but hopefully you’re willing to grant me this generalisation).

The video is long, but well worth the watch to give you a sense of where telcos are on the black curve above.

Then our second inspiring video stars Jason Fox and Aidan McCullen. It isn’t specifically about telco at all, but it does ask us to dial up our constructive discontent and embark on a pioneering quest. It takes us on the journey of finding the red curve that will provide us with a leaping-off point/s.

There are so many quotable ideas in this video (which is embedded below), but I’ll summarise with only a few:

  • Organisations are good at creating new capabilities but are hopeless at destroying and removing the detritus that just weighs the organisation down (refer to my earlier posts on inertia and subtraction projects)
  • Rather than just doing things the way they’ve always been done, they argue that 20+% of time needs to be engaged in more meaningful and thoughtful work
  • Senior leaders that are in a position to change the direction of an organisation are so busy dealing with short time slices and empire protection that there’s no time remaining for extensive reflection and planning. No time to cultivate emergent and divergent thinking
  • There’s a fixation on numbers and arbitrary benchmarks that lead to incrementalism. There’s little time devoted to contemplation, reflection or divergence, nor for embarking on a journey of learning and change
  • At approximately 31:00 there’s a discussion about the kraken of doom, a metaphor for disruption. It’s not a pot of gold, but a kraken at the end of the rainbow (or in our case, the declining slope on the black S curve for traditional telco models)
  • New story arcs (red S curves) need time and space where new thinking can flourish, and this isn’t just restricted to an occasional off-site strategy meeting
  • A quest is a journey to find viable alternative options beyond the default. A quest goes into the unknown to find answers that are exactly that – not yet known. Beyond the incremental. Every question (to the answers we seek) begins with a quest (a seeking of knowledge)
  • Leaders need to have a quiver of strategic options to jump to if/when the right conditions manifest. Most leaders don’t have the time to think/vision, let alone develop any quiver of options. 
    (Currently this is largely outsourced too by delegating to consultants or vendors that may have biases toward benefits for their own organisations. Consultants typically don’t have the same level of tribal knowledge that the in-house leaders do, but they do bring stimulative ideas from a broader industry involvement.
    However, this also poses a question: do consultants even have the time to think transformatively themselves, given that their focus is typically on utilisation rather than time out for introspection?)
  • Often leaders do get the call to adventure (such as seeing an inspiring idea at a conference), but generally don’t have the time to heed the call, and get too busy, eventually forgetting about the transformative call to action
    (Ponder also whether senior execs are typically built with the tools to accept and lead this type of quest. Do their skills and personality thrive on a journey of discovery of the unknown? Ponder also whether the explorative, risk-taking types tend to get killed-off lower in the org-pyramid by the aggressive politicians that instead rise to power in some organisations?)
  • A change-maker is lonely. The ones who do it best are the collaborators who can bring others on the journey. But Jason and Aidan ask you to recall the iconic scene in Jerry Maguire… when Tom Cruise (a leader in a large sports agency) asks,  “who’s coming with me?” when forging out on his own. Feel the cringe when watching the excerpt provided!!
  • The curse of productivity and efficiency doesn’t allow for exploration or growth or conducting experiments

It makes me wonder whether leaders should be encouraged to take individual sabbaticals to explore and quest. This would also give their 2IC the chance to act in the role while the leader is away, providing for the 2IC’s personal growth and building organisational continuity protection. The sabbatical unlocks disruption. I imagine this sabbatical (of a month, for example) would be taken as an individual rather than as a collective, such as at a lengthy off-site.

Even if no specific outcome is achieved whilst on sabbatical, the leader goes back to their normal roles with greater savvy, awareness and forward thinking to project into a strategic future. Their journey of enlightenment is transformative (the type of transformation that should precede any digital transformation by the way).

Here’s the second video. Again it’s lengthy, but I hope you find it as inspiring as I did:

As an aside, this video partially explains why I left big-telco and now work from the outside in to these organisations. I loved the work on big, complex, impactful OSS suites. But I jumped off because I needed time away from meetings and short-term incrementalism to ruminate. I needed to claw back the time for investigating jumping off points onto the red S-curve. Time to discover parallel / overlapping universes of knowledge.

I’d love to hear your thoughts on the question posed earlier – What skills, talents, networks, technologies, etc can be carried forward into the next opportunity for growth (from the perspective of telco and/or OSS)?

What jumping off points do you think exist for telco and/or OSS? Leave us a comment below.

I’ve been exploring a range of them over the last couple of years and am excited by a number of the opportunities that await.

I’ll also leave you with the concept of disruptive models that have the potential to impact traditional telco (and OSS) sooner than you might think

Upcoming Webinar – Developing Practical Transformation User Guides

You may have noticed in a recent post that I’ve been contributing to the Transformation Project Framework (TPF) with TM Forum (document GB1011). It’s an important initiative that aims to help ease the stress and reduce the risk on complex OSS/BSS transformation projects.

In conjunction with the TM Forum and the Transformation User Guide (TUG) team, we’re about to deliver a webinar where you will have the opportunity to learn how to:

  •  Leverage members’ experiences in Transformation User Guides (TUG)
  • Utilise relevant Open Digital Architecture assets including eTOM using a Transformation Project Framework
  • Reduce intervals for setting up transformation projects and minimise the risks of poor planning
  • Work with the TM Forum Advisory Board and members to develop a suite of practical TUGs.

Click on this link to register for this webinar

 

The great telco tower sell-off

You’ve probably noticed the great tower sell-off trend that’s underway (see this article by Robert Clark as an example). Actually, I should call it the sell-off / lease-back trend because carriers are selling their towers, but leasing back space on the towers for their antennae, radio units, etc.

Let’s look at this trend through the lens of two of my favourite OSS books (that mostly have little to do with OSS – not directly at least):

  • Rich Dad, Poor Dad by Robert Kiyosaki, and
  • The Idea Factory by Jon Gertner

The telcos are getting some fantastic numbers for the sale (or partial sale) of their towers, as Robert’s article identifies. Those are some seriously big numbers going back into the telcos for reinvestment (presumably).

Let’s look at this through the lens of Rich Dad first (quotes below sourced from here).

The number one rule [Rich Dad] taught me was: “You must learn the difference between an asset and a liability—and buy assets.”

The simple definition of an asset is something that puts money in your pocket.

The simple definition of a liability is something that takes money out of your pocket.

Towers, whilst requiring maintenance, would still seem to be an asset by this definition. They are often leased out to other telcos as well as aiding the delivery of services by the owning telco, thus putting money into their pockets. However, once sold and leased back, they become a liability, requiring money to be paid to the new asset owner.

Now I’m clearly more of an Engineer than an Accountant, but it seems fairly clear that tower sales -> lease-back is a selling of assets, acquisition of liabilities, thus contradicting Rich Dad’s number one rule. But that’s okay if the sale of one asset (eg towers or data centres) allows for the purchase (or creation) of other more valuable assets.

[Just an aside here, but I also assume the sale / lease-back models factor in attractive future leasing prices for the sellers for some period of time such as 7-10 years. But I also wonder whether the lease-back models might become a little more extortionate after the initial contract period. It’s not like the telco can easily shift all their infrastructure off the now leased towers, so they’re somewhat trapped I would’ve thought…. But I have no actual insights into what these contracts look like, so it’s merely conjecture here].

Now let’s take a look through the lens of “The Idea Factory” next. This brilliant book tells the story about how Bell Labs (what was Bell, then AT&T’s research and development arm, now part of Nokia) played a crucial role in developing many of the primary innovations (transistors, lasers plus optical fibres, satellites, microwave, OSS, Claude Shannon’s Information Theory, Unix, various programming languages, solar cells and much more) we rely on today across telco and almost every other industry. These advances also paved the way for the rise of the Silicon Valley innovation engine of today.

Historically, this primary R&D gave telcos the opportunity to create valuable assets. However, most telcos divested their R&D arms years ago. They’ve also delegated most engineering and innovation to equipment / software vendors and via outsourcing agencies. I’d argue that most telcos don’t have the critical mass of engineering that allow them to create many assets anymore. They mostly only have the option of buying assets now. But we’ll come back to this a little later.

The cash raised from the sale of towers will undoubtedly be re-invested across various initiatives. Perhaps network extensions (eg 5G roll-outs), more towers (eg small-cell sites), or even OSS/BSS uplift (to cope with next-generation networks and services), amongst other things. [BTW. I’m hoping the funds are for re-investment, not shareholder dividends and the like 🙂 ]

Wearing my OSS-coloured glasses (as usual), I’d like to look at the OSS create / buy decision amongst the re-investment. But more specifically, whether OSS investment can be turned into a new and valuable asset.

In most cases, OSS are seen as a cost centre. Therefore a liability. They take money out of a carrier’s pockets. Naturally, I’ll always argue that OSS can be a revenue-generator (as you can see in more depth in this article):

But in this case, what is the asset? Is it the network? The services that the network carries? The OSS/BSS that brings the two together? Is it any one of these things that puts money in the pockets of carriers or all three working cohesively? I’d love to hear a CFO’s perspective here  😉

However, the thought I’d actually like to leave you with out of today’s article is how carriers can actually create OSS that are definitively assets.

I’d like to show three examples:

The first is to create an OSS that enterprise clients are willing to pay money for. That is, an OSSaaS (OSS as a Service), where a carrier sells OSS-style services to enterprise clients to help run the clients’ own private networks.

The second is to create an OSS for your own purposes that you also sell to other carriers, like Rakuten is currently doing. [Note that Rakuten is largely doing this with in-house, rather than outsourced, expertise. It is also buying up companies like InnoEye and Robin.io to bring other vital IP / engineering of their solution in-house.]

The third is the NaaS (Network as a Service) model, where the OSS/BSS is responsible for putting an API wrapper around the OSS and network (and possibly other services) to offer directly to external clients for consumption (and usually for internal consumption as well). 
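
As a loose illustration of the NaaS model (a sketch only; the endpoint names, payloads and stubbed logic are invented rather than taken from any carrier’s or standards body’s actual API), the OSS/BSS might expose network capability to external consumers through a thin service layer such as:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def design_and_activate(site_a, site_b, bandwidth_mbps):
    """Stub: in reality this would hand off to orchestration, inventory,
    assurance and billing within the OSS/BSS stack."""
    return {"service_id": "NAAS-0001", "status": "activating",
            "a_end": site_a, "b_end": site_b, "bandwidth_mbps": bandwidth_mbps}

@app.route("/naas/v1/connectivity", methods=["POST"])
def order_connectivity():
    """Hypothetical endpoint: a partner orders point-to-point connectivity."""
    order = request.get_json()
    result = design_and_activate(order["site_a"], order["site_b"],
                                 order["bandwidth_mbps"])
    return jsonify(result), 201

@app.route("/naas/v1/connectivity/<service_id>", methods=["GET"])
def service_status(service_id):
    """Hypothetical endpoint: poll activation / assurance status."""
    return jsonify({"service_id": service_id, "status": "active"})

if __name__ == "__main__":
    app.run(port=8080)
```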

However, all of these models require some serious engineering. They require an investment of effort to create an asset. Do modern carriers still have the appetite to create assets like these?

Wayne Dyer coined the phrase, “It’s never crowded along the extra mile.” Rakuten is one of the few that is walking the extra mile with their OSS (and network), in an attempt to turn it into a true asset – one that directly puts money in their pocket. They’re investing heavily to make it so. How valuable will this asset become? Time will tell.

I’d love to hear your thoughts on this. For example, are there other OSS asset models you can think of that I’ve overlooked?

Hat tips to James and Bert for seeds of thought that inspired this article. More articles to follow on from a brilliant video of Bert’s in future  😉

 

I was surprised by this OSS innovation

I’m in the privileged position, probably as a result of founding The Blue Book OSS/BSS Vendor Directory, to speak with many vendors each week. It also means I’m lucky enough to watch in on product demos on a regular basis too.

Last year DFG Consulting (www.dfgcon.si/en) reached out to tell me about their Interactively Assisted Converter (IAC) solution. Their demo showed that it was a really neat tool for taking unstructured, inaccurate data and fixing it ready for ingestion into OSS and/or GIS solutions.

It has a particular strength for connecting CAD or GIS drawings that appear connected to our human eyes, but have no (or limited) associations in the data. Without the data associations (eg Splice Case A connects to Fibre Cable B), any data imported into an OSS / GIS remains detached, just islands of separated information. And the whole benefit of our OSS is the ability to cross-link and traverse data chains to unlock actionable insights.
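
To show why those associations matter, here is a contrived Python sketch (the object names are made up). Once Splice Case A is actually linked to Fibre Cable B in the data, simple traversals become possible that detached records can never support:

```python
# Contrived example: once associations exist, an impact question becomes a
# simple walk of the data rather than a manual trawl through drawings.

connections = {                       # object -> objects it feeds
    "Splice Case A": ["Fibre Cable B"],
    "Fibre Cable B": ["Splice Case C", "Customer Lead-in 77"],
    "Splice Case C": ["Customer Lead-in 78"],
}

def downstream_of(asset, links=connections):
    """Return everything reachable from an asset (eg for a dig-up impact check)."""
    seen, stack = set(), [asset]
    while stack:
        for nxt in links.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return sorted(seen)

print(downstream_of("Splice Case A"))
# ['Customer Lead-in 77', 'Customer Lead-in 78', 'Fibre Cable B', 'Splice Case C']
```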

But it wasn’t the fact that IAC is a neat tool that surprised me. Nor that it’s able to automate, or semi-automate, the cross-linking of data points in an intuitive way (although that is impressive).

Having done a few data migrations over the years, we’d always done custom ingestion / cleanse / cross-link scripts because the data was always different from customer to customer. We might have re-used a proportion of our scripts / code, but ultimately, they were all still just scripts and all customised. I was impressed that DFG Consulting had turned that migration process into a tool with:

  • An intuitive user interface
  • A workflow-driven approach for handling the data
  • In-built “assists / wizards” to help the tool achieve full data conversion for users
  • In-built error detection and correction techniques
  • Flexible import and export mechanisms
  • Preparing an output of fully structured data that is ready to be migrated into a target OSS system

I thought that was really innovative – that they’d put all that time into productising what I’d always known to be a manually scripted task, done as needed per project. The best I’d previously been aware of were assistance tools like FME (Feature Manipulation Engine), but IAC takes a more sophisticated approach. That’s why I included the IAC solution in our recent “The Most Exciting Innovations in OSS/BSS” report.

But that’s still not what surprised me.

After seeing IAC in action, I reached out to quite a few connections in the industry who do a lot more data migration than I tend to do these days. I thought the tool was really interesting, but I also thought that most data being imported into OSS / GIS would already be structured today. I thought most operators would have already cross-linked their key data sets like design docs for use via OSS or GIS tools. In fact, I thought that most network operators would’ve done this a decade ago.

However, it was the feedback from industry sources that shocked me. It turns out that unstructured, unreliable data is still the norm on a large proportion of their new data migration projects. Especially outside plant (OSP) projects. I was thinking that IAC would’ve been absolutely brilliant for the migration projects of the 2000s, but that there wouldn’t be much call for it these days. Turns out I was totally wrong in my assumptions.

If you’d like to see how IAC solves some of the most important data ingestion use-cases, check out the videos below: