New OSS Innovation Video Recording

The #Adtran #FibreBroadband Symposium was held in Melbourne on 3rd August 2022.

The video linked below shows the presentation we gave at the conference:

It’s designed specifically for network builders, to show the relevance that OSS and BSS have for their worlds.

It also shows a very high-level view of OSS past, present and future – including four possible future scenarios covering:

  • Private Networks
  • Digital Twins and Sensor / IoT Networks
  • Autonomous / Programmable Networks
  • Integration of Augmented Reality and OSS to provide new ways of working for the telecommunications industry (and beyond)

Re-imagining your OSS? Remarkable approaches changing the world of telco

An updated call for innovation in the OSS industry

We are currently living through a revolution for the OSS/BSS industry and the customers that we serve.

It’s not a question of if we need to innovate, but by how much and in what ways.

Change is surrounding us and impacting us in profound ways, triggered by new technologies, business models, design & delivery techniques, customer expectations and so much more.

Our why, the fundamental reason for OSS and BSS to exist, has remained relatively unchanged for decades. We’ve advanced so far, yet we are a long way from perfecting the tools and techniques required to operationalise service provider networks.

Imagine a future OSS where:

  1. Buying a new / transformed OSS is a relatively simple experience
  2. Customers find the implementation / integration experience quite fast and easy
  3. Implementers are able to implement and integrate simply, seamlessly and repeatably
  4. The user experience for customers and OSS operators is intuitive, insightful and efficient
  5. It’s not exactly the “zero-touch” that has been discussed, but only requires human interaction / intervention for the rarest of situations

This call for innovation, whilst having no reward like the XPRIZE (yet), seeks out the best that we have to offer and indicators of what OSS can become. It highlights existing pointers to that future, but seeks your input on what else is possible.

Innovation is not just a better product or technology. It’s a complex mix of necessity, evangelism, timing, distribution, exploration, marketing and much more. It’s not just about thinking big. In many cases, it’s about thinking small – taking one small obstacle and re-framing, tackling the problem differently than anyone else has previously.

We issue this Call for Innovation as a means of seeking out and amplifying the technologies, people, companies and processes that will transform, disrupt and, more importantly, improve the parallel worlds of OSS and BSS. Innovation represents the path to greater enthusiasm and greater investment in our OSS/BSS projects.

In this article we’ll look at:

  • Part 1 – Current State
  • Part 2 – Change Drivers
  • Part 3 – What Might an OSS Future Look Like (including “a day in the life” examples)

Part 1 – Current State of OSS

1.1 An introduction to the current state of OSS

At the highest level, the use cases for OSS and BSS have barely changed since the earliest tools were developed.

We still want to:

  • Monitor and improve network / service health
  • Bring appealing new products to an eager market quickly
  • Accurately and efficiently bill customers for the services they use
  • Ensure optimal coordination, allocation and utilisation of available resources
  • Discover operational insights that can be actioned to quickly and enduringly improve the current situation
  • Ensure all stakeholders can securely interact with our tools and services via efficient / elegant interfaces and processes
  • Use technology to streamline and automate, allowing human resources to focus only on the most meaningful activities that they’re best-suited to


The problem statements we face still relate to delivering all of these use-cases, only cheaper, faster, better and more precisely.

Let’s take those use-cases to an even higher level and pose the question about customer interactions with our OSS:

  1. For how many customers (eg telcos) is the OSS buying experience an enjoyable one?
  2. For how many customers is the implementation / integration experience an enjoyable one?
  3. For how many implementers is the implementation / integration experience a simple, seamless, repeatable one?
  4. For how many customers is the user experience (ie of OSS operators) an enjoyable one?

Actual customer experiences in relation to the questions above today might be 1) Confusing, 2) Arduous, 3) Uniquely challenging and 4) Unintuitive.
The challenge is to break these down and reconstruct them so that they’re 1) Easy to follow, 2) Simple, 3) Less bespoke and 4) Navigable by novices.

Despite the best efforts of so many brilliant specialists, a subtle sense of disillusionment exists when some people discuss OSS/BSS solutions, to the point that some consider the name OSS, the brand of OSS, to be tarnished. Whilst there are many reasons for this pervasive disappointment, the real root-causes are arguably the big (big projects, budgets, teams, complexity and expectations) and the small (small ambition, incremental improvement / thinking, limited experimentation and low tolerance of failure).

We have contributed to our own complexity and had complexity thrust upon us from external sources. The layers of complexity entangle us. This entanglement forces us into ongoing cycles of incremental change. Unfortunately, incremental change is impeding us from reaching for a vastly better future.

1.2 The 80/20 rule of OSS in a fragmented market

The diagram below shows Pareto’s 80/20 rule in the form of a telco functionality long-tail diagram:

  • The x-axis shows a list of OSS functionalities (within a hypothetical OSS product)
  • The y-axis shows the relative value or volume of transactions for each functionality (for the same hypothetical product)

True to Pareto’s Principle, this chart indicates that 80% of value is delivered by 20% of functionality (red rectangle). Meanwhile 20% of value is delivered by the remaining 80% of functionality (blue arrow).
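For the more hands-on reader, here’s a minimal sketch that generates a synthetic long-tail like the one described above; the decay rate and the 50-functionality count are made-up illustration values, not measurements of any real product:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic long-tail: the value of each functionality decays geometrically
n_functionalities = 50
values = 100 * 0.9 ** np.arange(n_functionalities)  # hypothetical values
values /= values.sum()

cumulative = np.cumsum(values)
core = int(np.searchsorted(cumulative, 0.8)) + 1  # functionalities giving ~80%

print(f"{core} of {n_functionalities} functionalities "
      f"(~{core / n_functionalities:.0%}) deliver ~80% of the value")

plt.bar(range(n_functionalities), values)
plt.axvline(core - 0.5, color="red", linestyle="--",
            label=f"~80% of value (first {core} functionalities)")
plt.xlabel("OSS functionalities (ranked by value)")
plt.ylabel("Relative value / transaction volume")
plt.legend()
plt.show()
```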

If we take one sector of the OSS market – network inventory – there are well over 50 products / vendors servicing this market.

The functionalities in the red box are the non-negotiables because they’re the most important. All 50+ products will have these functionalities and have had them since their Minimum Viable Product first came onto the market. It also means 50+ sets of effort have been duplicated, with 50+ competitors fighting for the same pool of customers. It also means there are 50+ vendors for a buyer to consider when choosing their next product (often leading to analysis paralysis).

But rather than trying to improve the functionality that “moves the needle” for buyers (ie the red box), most vendors instead attempt to innovate in the long tail (ie the blue arrow).

Of course innovation is needed in the blue arrow, but innovation and consolidation are more desperately needed in the red box (see this article for more detail).

Part 2 – Change Drivers for OSS

2.1 Exponential opportunities

Exponential technologies are landing all around us from adjacent industries. With them, it becomes a question of how to remove the constraints of current OSS and unleash these technologies.

We’ve all heard of Moore’s Law, which predicts the semiconductor industry’s ability to exponentially increase transistor density in an integrated circuit. “Moore’s prediction proved accurate for several decades, and has been used in the semiconductor industry to guide long-term planning and to set targets for research and development. Advancements in digital electronics are strongly linked to Moore’s law: quality-adjusted microprocessor prices, memory capacity, sensors and even the number and size of pixels in digital cameras… Moore’s law describes a driving force of technological and social change, productivity, and economic growth.”

Moore’s Law is starting to break down, but its exponentiality has also proven to be helpful for long-term planning in many industries that rely on electronics / computing. That includes the communications industry. By nature, we tend to think in linear terms. Exponentiality is harder for us to comprehend (as shown by the old anecdote about the number of grains of wheat on a chessboard, verified in the snippet below).
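The chessboard anecdote makes the point in just a few lines of Python:

```python
# Grains of wheat doubling on each of a chessboard's 64 squares:
# 1 on the first square, 2 on the second, 4 on the third, and so on.
grains = sum(2 ** square for square in range(64))
print(f"{grains:,}")  # 18,446,744,073,709,551,615 (ie 2**64 - 1)

# Linear intuition expects the last square to hold roughly 64 grains;
# exponential reality puts more there than every prior square combined.
print(f"{2 ** 63:,} grains on the final square alone")
```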

The problem, as described in a great article on SingularityHub, is that the exponentiality of technological progress tends to surprise us as change initially creeps up on us quietly, then overwhelms us in situations like this:

(source: Singularity Hub)

Hardware is scaling exponentially, yet our software is lagging and wetware (ie our thinking) could be said to be trailing even further behind. The level of complexity that has hit OSS in the last decade has been profound and has largely overwhelmed the linear thinking models we’ve applied to OSS. The continued growth from technologies such as network virtualisation, Internet of Things, etc is going to lead to a touchpoint explosion that will make the next few years even more difficult for our current OSS models (especially for the many tools that exist today that have evolved from decades-old frameworks).

Countering exponential growth requires exponential thinking, as described in this article on WIRED. We know we’re going to deal with vastly greater touch-points and vastly greater variants and vastly greater complexity (see more in the triple-constraint of OSS). Too many OSS projects are already buckling under the weight of this complexity.

So where to start?

2.2 Re-framing the Challenge of OSS Innovation

A journey of enlightenment is required. Arguably this type of transformation is required before a digital transformation can proceed. This starts by asking questions that challenge our beliefs about OSS and the customers + markets they serve. This link poses 22 re-framing questions that might help you on a journey to test your own beliefs, but don’t stop at those seed questions.

2.3 Driving Forces Impacting the Telco / OSS Industries

The following forces are driving future changes, both positive and negative, for the OSS industry and the customers it serves:

  1. OSS buyers (eg telcos) face diminished profitability due to competition in traditional services such as network access / carriage
  2. Telcos face a challenge in their ability to innovate (access to skills, capital, efficient coordination, constrained partner ecosystem, etc)
  3. Access to capital (incl. inflation, depreciation and interest rates following massive technology investments)
  4. The centre of gravity of innovation is rapidly shifting from West to East, as is access to skills (migration of jobs, skills and learning to India and China). Far more engineers are minted in China and India, and many OSS tasks, especially hands-on “creation” tasks, are performed in these regions. This means the vital hands-on “OSS apprenticeships” are largely only served at scale in these regions. As a result of this shift, and the outsourcing / offshoring of core tech skills, many telcos in the West have lost the capability to influence the innovation / evolution of the technologies upon which they depend
  5. Access to energy and energy efficiency are coming under increased scrutiny due to climate change and emissions obligations
  6. Networks, data and digital experiences are increasingly software-centric, yet telcos don’t have “software-first” DNA
  7. Digitalisation and the desire for more immersive digital experiences are increasing, within B2C (eg gaming, entertainment) and B2B (eg digital twins and so much more)
  8. Web3 / metaverse use-cases will intensify this trend
  9. Telcos have traditionally sold (and profited from) the onramp to the digital world (ie mobile phones), but will they continue to for web3-enabled devices like headsets?
  10. Regulatory interventions have always been significant for telco, but as we increasingly rely on digital experience, regulatory oversight is now increasing for entities that rely on communications networks (eg user data privacy regulations like GDPR)
  11. Trust, privacy and proof-of-identity are all becoming more important protection mechanisms as digital experiences expand in the face of increasing cyber threats
  12. Cyber threats have the potential to be too advanced for many enterprises’ security budgets
  13. Increased proliferation of technology arriving from adjacent industries introduces challenges for standardisation and the ability to collaborate
  14. The proliferation of disruptive technologies also makes it more difficult to choose the “right” solution / architecture for today and into a predictable future
  15. Societal changes and consumer desires in relation to digital experiences are evolving rapidly, and in different ways in different regions
  16. The telecom market is largely saturated in many countries, nearing “full” penetration 
  17. Many telcos, especially tier-ones, are large, bureaucratic organisations. The inefficiency of these operating models can be justified at large build scale, but less so when margins are shrinking along with new-user growth (per the earlier full-penetration point)
  18. Shorter attention spans and an emphasis on short-term returns are limiting the ability to perform primary research or enduring problem solving. The appetite for tech literacy, especially for hard challenges (ie ones not already explained in a YouTube video), is diminishing
  19. Geopolitical risk is on the rise
  20. Telco technology solutions are increasingly moving to public / private cloud models, which is beyond the experiences of many telco veterans
  21. There is an increasing proliferation of devices / appliances (by volume, type and behaviour) on enterprise and telco networks, so nefarious behaviour is becoming harder to identify, leading to greater cyber risks
  22. Miniaturisation of electronics frees up space in exchanges, leading to available rackspace, connectivity and power at these valuable “edge” locations. This opens up opportunities not just within the telco, but also as a real-estate play via partnership / investment / leasing models
  23. Telcos aren’t software-first, nor do they have developer-centred management skills in executive positions. Successful software development requires single-minded solutions, whereas software is one of many conflicting objectives for telcos (refer to this failure to prioritise article). Innovation is being seen from software-first business models (refer to this article about Rakuten), where the business is led by IT-centric management / teams rather than telco veterans
  24. Adjacent opportunities like private networks and digital twins for industry seem more aligned with current telco capabilities

2.4 Future OSS Scenario Planning

Before considering what the future might look like, we must acknowledge that nobody can predict the future. There are simply too many variables at play. The best we can do is propose possible future scenarios such as:

  1. Blue-sky (fundamental change for the better)
  2. Doomsday (fundamental change that disrupts)
  3. Depression (more of the same, but with decay / deterioration)
  4. Growth (more of the same, but with improvement)

From these scenarios, we can make decisions on how to best steer OSS innovation. <WIP link>

Part 3 – What the Future of OSS Might Look Like

3.1 The Pieces of the Future OSS Puzzle

The topics related to this Call for Innovation can be wide and varied (the big), yet sharp in focus (the small). They can relate directly to OSS technologies or to innovative methods brought to OSS from adjacent fields, but ultimately they’ll improve our lot. Not just change for the sake of the next cool tech, but change for the sake of improving the experience of anyone interacting with an OSS.

The following is just a small list of starting points where exponential improvements await:

  • OSS are designed for machine-to-machine (M2M) interactions as the primary interface. User interfaces are only designed for the rarest cases of interaction / intervention
  • Automations of various sorts handle the high volume, difficult and/or mundane activities, freeing humans up to focus on higher-value decision making
  • These rare interactions will not be via today’s User Interfaces (UIs) consisting of alarm lists, work orders, device configs, design packs, etc. OSS interactions will be via augmented reality, three-dimensional models and digital twins / triplets where data of many forms can be easily overlaid and turned into decisions. Smart phones have revolutionised the way workers interact with operational systems. Imminent releases of smart glasses will further change ways of working, delivering greater situational awareness and remote collaboration
  • Decision support will guide the workforce in performing many of their daily actions, especially using augmented reality devices
  • OSS won’t just coordinate the repair of faults after they occur in systems and networks. They will increasingly predict faults before they occur, but more importantly will be used to increase network and service resiliency to cope with failures / degradations as they inevitably arise. This allows focus to shift from networks, nodes and infrastructure to customers and services
  • OSS will make increasingly sophisticated and granular decisions on how to leverage capital based on the balance of cost vs benefit. For example, taking inputs such as capacity planning, assurance events, customer sentiment, service levels and fieldwork scheduling to decide whether to fix a particular failure (eg cable water ingress) or to modernise the infrastructure and customer service capabilities
  • Use every field worker touch-point as an opportunity to reconcile physical infrastructure data to help overcome the challenge of data quality. This can be achieved via image processing to identify QR / barcode / RFID tags or similar whilst conducting day-to-day activities on-site. Image processing is backed up by sophisticated asset life-cycle mapping
  • Greater consolidation of product functionality, especially of the core features (as shown in the red box within the 80/20 diagram above)
  • Common, vendor-neutral network configuration data models to ensure new network makes, models and topologies are designed once (by the vendor) and easily represented consistently across all OSS globally (eg OpenConfig project)
  • True streaming data collection and management (eg telemetry, traffic management, billing, quality of service, security, etc) will be commonplace, as opposed to near-real-time (eg 15 min batch processing). Decisions at the speed of streaming, not 30+ minutes after the event like many networks today (see the streaming sketch after this list). [At the time of writing, one solution stands ahead of all others in this space]
  • A composable and/or natural language user interface, like a Google search, with the smarts to interrogate data and return insights. Alarm lists, trouble tickets, inventory lists, performance graphs and other traditional approaches are waiting to be usurped by more intuitive user interfaces (UIs)
  • Data-driven search-based interactions rather than integration, configuration or programming languages wherever possible, to cut down on integration times and costs
  • Service / application layer abstraction provides the potential for platform sharing between network and OSS and dove-tailed integration
  • New styles of service and event modelling for the adaptive packet-based, virtual/physical hybrid networks of the future
  • A single omni-channel thread that traces every customer interaction flow through OSS, BSS, web/digital, contact centres, retail, live chat, etc, leading to fewer fall-outs and less responsibility “flicking” between domains in a service provider
  • Repeatable, rather than customised, integration. Catalogs (eg service catalogs, product catalogs, virtual network device catalogs) are the closest we have so far. Intent abstraction and policy models follow this theme too, as does platform-thinking / APIs.
  • Unstructured, flexible data sets rather than structured data models. OSS has a place for graph, time-series and relational data models. Speaking of data, we need new data platforms that easily support the cross-linking of time-series, graph and relational data sets (the holy trinity)
  • Highly de-centralised and/or distributed processing using small footprint, commoditised hardware at the provider and/or customer edge rather than the centralised (plus aggregators) model of today. This model reduces latency and management traffic carried over the network as only meta-data and/or consolidated data sets are shipped to centralised locations
  • The wow factor / usability is in the graphics and user-interface, the value (business case) is in the graph (the data)
  • A standardised, simplified integration mechanism between network and management on common virtualised platforms rather than proprietary interfaces connecting between different platforms
  • An open-source core that anyone can develop plug-ins for, leading to long-tail innovation, whilst focusing the massive brainpower currently allocated to duplicated efforts (due to vendor fragmentation)
  • Cheaper, faster, simpler installations (at all levels, including contracts between suppliers and customers)
  • Transparency of documentation to make it easier for integrators
  • Ubiquitous training / learning programs – like what Cisco has achieved with its CCIE (and similar) certifications (not to mention the unlikely competitive advantage it has delivered)
  • Capable of handling a touchpoint explosion as network virtualisation and Internet of Things (IoT) will introduce vast numbers of devices to manage
  • Ruthless simplification of inputs leading into the OSS/BSS “black-box” (eg drastic reduction in the number of product offerings or SKUs and product variants) and inside the black box of OSS/BSS (eg process complexity, configuration variants, etc)
  • Machine learning and predictive analytics to help operators cope with the abovementioned touchpoint explosion
  • Increasing the perception of value provided by OSS, reversing the pervasive sentiment that OSS can only ever be a cost centre. OSS are too important to be just cost centres, but that requires better messaging of their value from all of us
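To make the streaming point above a little more concrete, here’s a toy sketch of per-sample decisioning in pure Python; the window size, threshold and telemetry values are all hypothetical:

```python
from collections import deque
from statistics import mean, stdev

def stream_decisions(samples, window=30, sigma=3.0):
    """Score each telemetry sample the moment it arrives, rather than
    waiting for a 15-30 minute batch job to process it."""
    history = deque(maxlen=window)
    for value in samples:
        if len(history) == window and stdev(history) > 0:
            anomalous = abs(value - mean(history)) > sigma * stdev(history)
            yield ("ANOMALY" if anomalous else "OK"), value
        history.append(value)

# Hypothetical utilisation telemetry arriving as a stream
telemetry = [50, 51, 49, 52, 50] * 10 + [95]
for status, value in stream_decisions(telemetry):
    if status == "ANOMALY":
        print(f"anomalous sample detected in-stream: {value}")
        # act now, eg trigger auto-remediation, not 30 minutes later
```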

3.2 Increasing Trust to Support Web3

The digital experiences we rely on today are evolving.  The third generation of the Internet (Web 3.0) is on the horizon, with many of its necessary elements already taking shape (eg blockchain / crypto-currencies, digital proof-of-ownership, virtual worlds, etc). It will essentially be a more immersive, secure, private, user-friendly and de-centralised version of the Internet we know today.

Trust will be a fundamental element of society’s up-take of web3 technologies. Access to these experiences will occur via the on-ramp of communications networks. Being regulated in their local jurisdictions, telcos have an opportunity to act in the role of privacy, security and consumer protection stewards for everyone entering the world of Web3. Leveraging the long-held position of trust that telcos have with their business and residential customers, OSS/BSS have the opportunity to deliver trust mechanisms for Web3.

Whether, and how, we tap into this opportunity remains to be seen.

3.3 A Day in the Life of Future OSS Users

Many people talk about the possibility of a zero-touch future OSS. I don’t foresee that, but do see a future of lower-touch, smarter-touch, different touch. The entire way we will interact with our OSS will change fundamentally – from dealing with our two-dimensional device screens (eg PCs, phones and tablets) to future devices that allow us to have enriched experiences in three dimensions – in reality and with augmented realities.

Capacity Planner – The CAD designs of the past were necessary because field workers needed printed design packs that showed network changes. Field workers needed to translate these drawings and designs into the 3 dimensional worlds they experienced. Since capacity planners perform designs remotely, they make design decisions with incomplete awareness of site (eg site furniture, etc). In the future (and today), designers will have 3D photogrammetric models of site and can perform adds/moves/changes directly onto the model. Most of these designs will be generated automatically based on cost-benefit analysis. However, a human may be required to perform a quality audit of the design or generate any bespoke designs that aren’t catered for by the auto-designer. For example, certain infrastructure changes may be required before being able to drag a new device type (make and model) onto the 3D model and include other relevant annotations. 

Field Worker – Field workers are already using mobility devices to aid the efficiency of work on site. This will change further when field workers use augmented reality headsets to see network change designs as overlays whilst they’re working. They will see the new device location marked on the tower (as described above) and know exactly which piece of equipment needs to be installed where. Connection details will also be shown, and image processing on the headset will identify whether connectors have been wrongly connected. Even installation guides will appear via the heads-up display of AR to aid techs with the build.
Similarly, a fibre splice technician will be guided on which strand / tube to splice to which other strand / tube, as image processing identifies cables and colours and matches them against designs. Where there are any discrepancies between field records and inventory records, these will be reconciled immediately, without the need to return red-line markups of as-built drawings to be transcribed into the OSS.
Perhaps most important is the automation that keeps passive infrastructure reconciled whilst field techs perform their daily activities. We all know that data quality deteriorates as it ages. Since passive infrastructure (racks, cables, patch panels, splice boxes, etc) is unable to send digital notifications, its data tends to be updated only through design drawings, which are rarely touched. However, field workers “see” this infrastructure whilst they’re on site. As field workers upload imagery of the site they’re working on (as photos or AR streams), image processing will automatically identify QR / barcode / RFID tags to pinpoint where assets are in space and time (as sketched below), thus providing a “ping” on the data to refresh it and reconcile its accuracy. Entire asset life-cycles are better tracked and correlated with who was on site when life-cycle statuses changed (eg adds / moves / changes in location or configuration).
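A minimal sketch of that tag-detection step, assuming OpenCV for the QR decoding; the inventory structure and reconciliation logic are hypothetical placeholders:

```python
import cv2  # OpenCV, used here for its built-in QR detector

def reconcile_assets_from_site_imagery(image_path, inventory):
    """Detect QR-tagged assets in field imagery and 'ping' their
    inventory records to confirm they were sighted on site."""
    image = cv2.imread(image_path)
    if image is None:
        return []
    found, asset_ids, _, _ = cv2.QRCodeDetector().detectAndDecodeMulti(image)
    if not found:
        return []

    sighted = []
    for asset_id in filter(None, asset_ids):
        if asset_id in inventory:
            inventory[asset_id]["last_sighted"] = "2022-08-03"  # eg job timestamp
            sighted.append(asset_id)
        else:
            # Seen on site but unknown to the OSS: flag for reconciliation
            print(f"Discrepancy: {asset_id} not found in inventory")
    return sighted

# Hypothetical inventory keyed by asset tag
inventory = {"SPLICE-CASE-0042": {"type": "splice case", "last_sighted": None}}
print(reconcile_assets_from_site_imagery("site_photo.jpg", inventory))
```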

Data Centre Repair Technician – Since most infrastructure in a data centre will be standardised, commoditised and virtualised, the primary tasks will be replacing failed hardware and performing patching. DC Techs will do a daily fix-run where their augmented reality devices will show which rack and shelf contain faulty equipment that needs to be replaced. Image processing will even identify whether the correct device or card is being installed in place of the failed unit.  AR headsets will also guide DC techs on patching / re-patching, ensuring the correct connectivity is enabled.

NOC (and/or WOC and SOC) operator – As with today, a NOC operator will monitor, diagnose and coordinate a fix. However, with AIOps tools automatically identifying and responding to all previously identified network health patterns, the NOC operator will only handle the rare event patterns that haven’t been previously codified. For these rare cases, the NOC operator will have an advanced diagnosis solution and user interface, where all network domain data and available data streams (events, telemetry, logs, changes, etc) can be visualised on a single palette. These temporal/spatial data streams can be dragged onto the UI to aid with diagnosis, although the AIOps will initially present the data streams on the palette that contain likely anomalies (rather than the hundreds of unassociated metric graphs that will be available from the network). The UI to support the rare cases will look fundamentally different to the UI that supports bulk processing today (eg alarm lists and tickets).

Command and control (C&C) – Since the AIOps and SON handles most situations automatically (eg auto-fix, self-optimise, auto-ticket), only rare situations require command and control mechanisms by the NOC team. However, these C&C situations are likely to be invoked during crisis, high severity and/or complex resolution scenarios. Rather than handling these scenarios by tick(et) and flick, the C&C tool will provide collaborative control of resources (people and technology) using sophisticated decision support. The C&C solution will be tightly coupled with business continuity plans (BCP) to drive pre-planned, coordinated responses. They will be designed to ensure BCPs are regularly tested and refined.

System Administrators – These teams will arguably become the most important people for whom OSS user interfaces (UIs) will be designed. These users will design, train, maintain and monitor the automations of future OSS. They will be responsible for keeping systems running, setting up product catalogs, designing workflows such as orchestration plans, identifying AIOps event patterns and response workflows, etc. They will be responsible for system configurations and data migrations to ensure the workflows for all other personas are seamless, immersive and intuitive. Whereas other persona UIs will be highly visual in nature, the dedicated UIs of system administrators are likely to look like the OSS UIs that we’re familiar with today (eg lists / logs, BPMN workflows, configs, technical attributes, network connectivity / topology maps, etc).

Product Designers – The product team will be provided with a visual product builder solution, where they can easily compose new offerings, contracts, options/variants, APIs, etc from atomic objects already available to them in the product catalog. Product Designers become the Lego builders of telco, limited only by their imaginations.

Marketing – The marketing team will be provided with sophisticated analytics that leverages OSS/BSS data to automatically identify campaign opportunities (eg subscribers that are churn risks, subscribers that are up-sell targets, prospects that aren’t subscribers but are within a designated coverage area such as within 100m of a passing cable, etc)

Sales Teams – Most sales will occur via seamless self-service mechanisms (eg portals, apps, etc). Some may even occur via bundled applications (where third-parties have utilised a carrier’s Network as a Service APIs to autogenerate telco services to be bundled with their own service offerings). Salespeople will only work with customers for rarer cases, but will use a visual quote and service design builder to collaborate with customers. Sales teams will even be able to walk clients through reality twins of service designs, such as showing where their infrastructure will reside in racks (in a DC), on towers, etc.

3.4 What do you think the future of OSS will look like?

We don’t claim to be able to predict the future. Many of the examples described above are already available in their nascent forms or provide a line-of-sight to these future scenarios. It’s quite likely that we’ve overlooked key initiatives, technologies and/or innovations. We’d love to hear your thoughts. What have we missed or misrepresented? Are you working on any innovations or products that you’d love to tell the world all about? Leave us a comment in the comment box below to share your predictions.

Upcoming Webinar – Developing Practical Transformation User Guides

You may have noticed in a recent post that I’ve been contributing to the Transformation Project Framework (TPF) with TM Forum (document GB1011). It’s an important initiative that aims to help ease the stress and reduce the risk of complex OSS/BSS transformation projects.

In conjunction with the TM Forum and the Transformation User Guide (TUG) team, we’re about to deliver a webinar where you will have the opportunity to learn how to:

  •  Leverage members’ experiences in Transformation User Guides (TUG)
  • Utilise relevant Open Digital Architecture assets including eTOM using a Transformation Project Framework
  • Reduce intervals for setting up transformation projects and minimise the risks of poor planning
  • Work with the TM Forum Advisory Board and members to develop a suite of practical TUGs.

Click on this link to register for this webinar

 

The great telco tower sell-off

You’ve probably noticed the great tower sell-off trend that’s underway (see this article by Robert Clark as an example). Actually, I should call it the sell-off / lease-back trend because carriers are selling their towers, but leasing back space on the towers for their antennae, radio units, etc.

Let’s look at this trend through the lens of two of my favourite OSS books (that mostly have little to do with OSS – not directly at least):

  • Rich Dad, Poor Dad by Robert Kiyosaki, and
  • The Idea Factory by Jon Gertner

The telcos are getting some fantastic numbers for the sale (or partial sale) of their towers, as Robert’s article identifies. Those are some seriously big numbers going back into the telcos for reinvestment (presumably).

Let’s look at this through the lens of Rich Dad first (quotes below sourced from here).

The number one rule [Rich Dad] taught me was: “You must learn the difference between an asset and a liability—and buy assets.”

The simple definition of an asset is something that puts money in your pocket.

The simple definition of a liability is something that takes money out of your pocket.

Towers, whilst requiring maintenance, would still seem to be an asset by this definition. They are often leased out to other telcos as well as aiding the delivery of services by the owning telco, thus putting money into their owners’ pockets. However, once they are leased back, they become a liability, requiring money to be paid to the new asset owner.

Now I’m clearly more of an Engineer than an Accountant, but it seems fairly clear that tower sale and lease-back is a selling of assets and an acquisition of liabilities, thus contradicting Rich Dad’s number one rule. But that’s okay if the sale of one asset (eg towers or data centres) allows for the purchase (or creation) of other more valuable assets.

[Just an aside here, but I also assume the sale / lease-back models factor in attractive future leasing prices for the sellers for some period of time, such as 7-10 years. But I also wonder whether the lease-back models might become a little more extortionate after the initial contract period. It’s not like the telco can easily shift all their infrastructure off the now-leased towers, so they’re somewhat trapped, I would’ve thought. But I have no actual insights into what these contracts look like, so it’s merely conjecture here].
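To make Rich Dad’s rule (and the aside above) concrete, here’s a toy discounted-cashflow comparison. Every number in it (sale price, lease cost, escalation, discount rate) is hypothetical:

```python
def npv(cashflows, rate=0.08):
    """Net present value of yearly cashflows (year 0 first)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

years = 15
sale_price = 500      # $M received up-front for the towers
maintenance = 5       # $M/year to keep owning and maintaining them
lease_cost = 40       # $M/year paid back to the towers' new owner
escalation = 0.05     # annual lease escalation after the initial term
initial_term = 7      # years at the contracted rate

keep_towers = [0] + [-maintenance] * years
sell_and_lease_back = [sale_price] + [
    -lease_cost * (1 + escalation) ** max(0, year - initial_term)
    for year in range(1, years + 1)
]

print(f"NPV, keep towers:      {npv(keep_towers):8.1f} $M")
print(f"NPV, sell + leaseback: {npv(sell_and_lease_back):8.1f} $M")
# Whether the sale "wins" depends on what the up-front cash is
# reinvested into, ie whether it buys or creates a better asset.
```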

Now let’s take a look through the lens of “The Idea Factory” next. This brilliant book tells the story of how Bell Labs (what was Bell’s, then AT&T’s, research and development arm, now part of Nokia) played a crucial role in developing many of the primary innovations (transistors, lasers and optical fibres, satellites, microwave, OSS, Claude Shannon’s Information Theory, Unix, various programming languages, solar cells and much more) we rely on today across telco and almost every other industry. These advances also paved the way for the rise of the Silicon Valley innovation engine of today.

Historically, this primary R&D gave telcos the opportunity to create valuable assets. However, most telcos divested their R&D arms years ago. They’ve also delegated most engineering and innovation to equipment / software vendors and outsourcing agencies. I’d argue that most telcos no longer have the critical mass of engineering that allows them to create many assets. They mostly only have the option of buying assets now. But we’ll come back to this a little later.

The cash raised from the sale of towers will undoubtedly be re-invested across various initiatives. Perhaps network extensions (eg 5G roll-outs), more towers (eg small-cell sites), or even OSS/BSS uplift (to cope with next-generation networks and services), amongst other things. [BTW. I’m hoping the funds are for re-investment, not shareholder dividends and the like 🙂 ]

Wearing my OSS-coloured glasses (as usual), I’d like to look at the OSS create / buy decision amongst the re-investment. But more specifically, whether OSS investment can be turned into a new and valuable asset.

In most cases, OSS are seen as a cost centre. Therefore a liability. They take money out of a carrier’s pockets. Naturally, I’ll always argue that OSS can be a revenue-generator (as you can see in more depth in this article).

But in this case, what is the asset? Is it the network? The services that the network carries? The OSS/BSS that brings the two together? Is it any one of these things that puts money in the pockets of carriers or all three working cohesively? I’d love to hear a CFO’s perspective here  😉

However, the thought I’d like to leave you with from today’s article is how carriers can actually create OSS that are definitively assets.

I’d like to show three examples:

The first is to create an OSS that enterprise clients are willing to pay money for. That is, an OSSaaS (OSS as a Service), where a carrier sells OSS-style services to enterprise clients to help run the clients’ own private networks.

The second is to create an OSS for your own purposes that you also sell to other carriers, like Rakuten is currently doing. [Note that Rakuten is largely doing this with in-house, rather than outsourced, expertise. It is also buying up companies like InnoEye and Robin.io to bring other vital IP / engineering of their solution in-house.]

The third is the NaaS (Network as a Service) model, where an API wrapper is put around the OSS/BSS and network (and possibly other services) to offer directly to external clients for consumption (and usually for internal consumption as well).
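As a sketch of what that third model’s surface might look like, here’s a minimal NaaS endpoint, assuming FastAPI; the route, payload fields and the provision() helper are all hypothetical:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Hypothetical NaaS API")

class ServiceRequest(BaseModel):
    service_type: str    # eg "ethernet-p2p"
    a_end: str           # site / termination identifiers
    z_end: str
    bandwidth_mbps: int

def provision(request: ServiceRequest) -> str:
    """Stand-in for the orchestration the OSS/BSS would perform:
    feasibility check, inventory reservation, activation, billing."""
    return f"SVC-{abs(hash((request.a_end, request.z_end))) % 10_000:04d}"

@app.post("/v1/services")
def create_service(request: ServiceRequest):
    service_id = provision(request)
    return {"service_id": service_id, "status": "provisioning"}
```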

However, all of these models require some serious engineering. They require an investment of effort to create an asset. Do modern carriers still have the appetite to create assets like these?

Wayne Dyer coined the phrase, “It’s never crowded along the extra mile.” Rakuten is one of the few that is walking the extra mile with their OSS (and network), in an attempt to turn it into a true asset – one that directly puts money in their pocket. They’re investing heavily to make it so. How valuable will this asset become? Time will tell.

I’d love to hear your thoughts on this. For example, are there other OSS asset models you can think of that I’ve overlooked?

Hat tips to James and Bert for seeds of thought that inspired this article. More articles to follow on from a brilliant video of Bert’s in future  😉

 

I was surprised by this OSS innovation

I’m in the privileged position, probably as a result of founding The Blue Book OSS/BSS Vendor Directory, to speak with many vendors each week. It also means I’m lucky enough to sit in on product demos on a regular basis too.

Last year DFG Consulting (www.dfgcon.si/en) reached out to tell me about their Interactively Assisted Converter (IAC) solution. Their demo showed that it was a really neat tool for taking unstructured, inaccurate data and fixing it ready for ingestion into OSS and/or GIS solutions.

It has a particular strength for connecting CAD or GIS drawings that appear connected to our human eyes, but have no (or limited) associations in the data. Without the data associations (eg Splice Case A connects to Fibre Cable B), any data imported into an OSS / GIS remains detached, just islands of separated info. And the whole benefit of our OSS is the ability to cross-link and traverse data chains to unlock actionable insights.
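To show why those associations matter, here’s a minimal sketch of the kind of traversal an OSS can only perform once records are cross-linked, assuming networkx; the node names are hypothetical:

```python
import networkx as nx

# Connectivity as it should exist after conversion / cross-linking
network = nx.Graph()
network.add_edge("Splice Case A", "Fibre Cable B")
network.add_edge("Fibre Cable B", "Splice Case C")
network.add_edge("Splice Case C", "Customer Lead-in D")

# With associations in place, an OSS can trace end-to-end...
print(nx.shortest_path(network, "Splice Case A", "Customer Lead-in D"))

# ...whereas unlinked records are just islands: no traversal is possible
islands = nx.Graph()
islands.add_nodes_from(["Splice Case A", "Fibre Cable B"])
print(nx.has_path(islands, "Splice Case A", "Fibre Cable B"))  # False
```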

But it wasn’t the fact that IAC is a neat tool that surprised me. Nor that it’s able to automate, or semi-automate, the cross-linking of data points in an intuitive way (although that is impressive).

Having done a few data migrations over the years, we’d always done custom ingestion / cleanse / cross-link scripts because the data was always different from customer to customer. We might have re-used a proportion of our scripts / code, but ultimately, they were all still just scripts and all customised. I was impressed that DFG Consulting had turned that migration process into a tool with:

  • An intuitive user interface
  • A workflow-driven approach for handling the data
  • In-built “assists / wizards” to help the tool achieve full data conversion for users
  • In-built error detection and correction techniques
  • Flexible import and export mechanisms
  • An output of fully structured data that is ready to be migrated into a target OSS system

I thought that was really innovative – that they’d put all that time into productising what I’d always known to be a manually scripted task, done as needed per project. The best I’d previously been aware of were assistance tools like FME (Feature Manipulation Engine), but IAC is a more sophisticated approach. That’s why I included the IAC solution in our recent “The Most Exciting Innovations in OSS/BSS” report.

But that’s still not what surprised me.

After seeing IAC in action, I reached out to quite a few connections in the industry who do a lot more data migration than I tend to do these days. I thought the tool was really interesting, but I also thought that most data being imported into OSS / GIS would already be structured today. I thought most operators would have already cross-linked their key data sets like design docs for use via OSS or GIS tools. In fact, I thought that most network operators would’ve done this a decade ago.

However, it was the feedback from industry sources that shocked me. It turns out that unstructured, unreliable data is still the norm on a large proportion of their new data migration projects. Especially outside plant (OSP) projects. I was thinking that IAC would’ve been absolutely brilliant for the migration projects of the 2000s, but that there wouldn’t be much call for it these days. Turns out I was totally wrong in my assumptions.

If you’d like to see how IAC solves some of the most important data ingestion use-cases, check out the videos below:

 

 

The Network Automation Journey – It’s about Culture

I love this article by Tim Fiola, entitled “A Note to Management About the Network Automation Journey – It’s About Culture.” It does a brilliant job of explaining the real challenges around network automation. I’m going to riff off it a little bit today, drawing comparisons with OSS/BSS (which do happen to have a role to play in network automation).

He starts off with this gem:

“Most of the blog posts and information about network automation focus on the technical details and are directed toward the network engineers. There is more to the network automation transformation, however, than the technical aspects. In fact, one of the most important aspects of making the network automation transformation ‘stick’ is the cultural component. Network automation is a journey, and it does require a cultural shift throughout the organization, including engineering, management, and human resources (HR). Management needs to understand this cultural shift, so it can better enable the transformation and make it permanent.”

So true for network automation and OSS/BSS alike. There does tend to be a focus on the tech. But I’ve seen tech that’s perfect, but is un-usable because the culture / change-management aspects haven’t been incorporated into the transformation program. There’s a saying in golf, “Driving is for show. Putting is for dough [cash].” If we paraphrase this slightly for the OSS/BSS/automation world, it becomes, “Tech is for show. Change/culture is for dough.”

Tim continues:

“The technology part of an automated infrastructure is a solved problem: we have the technology. Ultimately, it’s the cultural component that makes network automation successful in the long term.”

I’m not sure that the automated infrastructure problem is 100% solved, especially for physical infrastructure, but his point remains valid.

“This cultural shift, at its heart, means changing the expectations around how many people it should take to run a network, along with the skill sets the people have. In order for the automation transformation to keep long-term momentum, it must be supported by management: everyone, from the first-line manager, executive management, and all the way to Human Resources, must play a part to sustain the appropriate culture.”

I’m perplexed by the conflicting messaging that often exists around network automation. Openly, there is the message of Zero Touch Assurance (ZTA) and the head-count reductions that support automation business cases. However, as Tim suggests, there does need to be a change in expectations about how the network is run. The conflicting, perhaps even subliminal, counter-message to ZTA is that telcos are often built around an “empire-building” mindset, where a reduction of staff leads to a reduction in status for the managers within the organisation. The RFPs requesting OSS/BSS tools still seem to include requirements that assume a large team is required to run them and the network. They’re rarely about comparing vendors for operational efficiency.

But I digress. Tim instead cleverly pivots to aligning automated infrastructure to corporate objectives and cashflow, as follows:

“Many firms have a high-level corporate objective to increase cash flow. A business is interested in executing as many value-generating workflows as possible because those workflows produce value and ultimately revenue. An automated infrastructure directly supports this high-level objective by allowing the business to:

  • Execute revenue-generating workflows quicker, which brings in more revenue sooner.
    • This interval, between when something is sold versus when the revenue is realized, is often called the quote-to-cash interval.
    • Shrinking the quote-to-cash interval helps cashflow.
  • Execute workflows with less friction, which leads to more workflows being executed, which leads to greater throughput and increased revenue.

Companies care about executing workflows quickly and without friction; when they do this, they minimize the quote-to-cash interval and increase the total amount of throughput, which increases cashflow.”

Exactly! The whole idea of investments in OSS, BSS and network automation is to execute workflows quickly and without friction. But these investments are also about making things more repeatable, reliable and enduring, as suggested by Tim as follows:

“Worker turnover happens everywhere and all the time. In the worst case, when an employee leaves, all their knowledge leaves with them.

With an automated infrastructure, even when a person leaves, they leave behind a lot of their knowledge, enshrined in the automation infrastructure.”

Speaking of employees leaving,

“Another common high-level corporate objective is employee retention and satisfaction. For a moment, imagine your network engineers freed from high-volume, low-value intensive tasks. For example, many highly skilled network engineers are often relegated to perform tasks such as

  • O/S upgrades
  • [Etc deleted for brevity]

These types of tasks are well below the engineers’ skill set, but network engineers are often called upon to perform these tasks simply because they understand the technology and full context around the task. In addition to being below the skill set of a network engineer, a lot of these tasks also happen to be tasks that machines are great at.

Relieving your network engineers from repetitive, high-volume tasks that are below their skill set will improve employee satisfaction.

An automated infrastructure can perform these tasks and give engineers the time to use their high-level skills. This benefits the company because their employees are much happier; it also benefits the engineers because it allows them to exercise high-level skills that add more value.”

This shift to staff only performing higher-value activities should be an absolute objective of any investment in OSS, BSS and network automation. However, the skills, and the way they’re measured, changes appreciably too:

“One of the most striking examples of daily tasks changing in an automated environment is configuration management: instead of manually configuring devices at the CLI, network engineers will transition to maintaining configuration templates. This is a great example of the power of abstraction: instead of trying to manage the configs on hundreds or even thousands of devices, the engineer will design and maintain a relatively small amount of configuration templates, and then use those templates to deploy consistent configurations to the many devices. So, your expectations around how your engineers spend their time will also have to change:

  • They will spend time on tasks such as templating, coding, etc. to automate away low-value tasks
  • They will spend more time on higher-level network engineering”
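To illustrate the template-driven shift Tim describes, here’s a minimal sketch assuming Jinja2 (a common templating choice in network automation); the template and device variables are simplified examples of mine, not his:

```python
from jinja2 import Template

# One template replaces hand-typed CLI across hundreds of devices
TEMPLATE = Template("""\
hostname {{ hostname }}
{% for iface in interfaces -%}
interface {{ iface.name }}
 description {{ iface.description }}
 ip address {{ iface.ip }}
{% endfor -%}
""")

devices = [
    {"hostname": "core-rtr-01",
     "interfaces": [{"name": "ge-0/0/0", "description": "uplink",
                     "ip": "10.0.0.1/31"}]},
    # ...hundreds more variable sets, ideally from a source-of-truth system
]

for device in devices:
    print(TEMPLATE.render(**device))  # push via your deployment tooling
```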

But some engineers will strongly resist change, as per our earlier reference to skilful execution of change management initiatives. For some engineers, their expertise with CLIs is their one-wood (to keep the golfing analogy going), their primary sense of corporate self-worth. But as Tim says,

“The CLI-to-template transition is one of many cultural shifts that comes along with an automated network infrastructure. As with any cultural shift, there will likely be resistance among some engineers who see this as a threat to their jobs, skill sets, certifications, etc…

It will be important to understand that this is a huge change for your engineering staff, so you will have to be patient and educate them on why these changes are needed and how they benefit the engineers. In addition, it may be necessary to provide multiple opportunities for technical training so, as fears and resistance subside, the engineers have the chance to get on board and learn how to operate in the new environment.”

Like I said, great article from Tim. Kudos to him for taking the time to share his thoughts with us all in his article. Please jump over via this link and check it out from cover to cover.

The Most Exciting OSS/BSS Innovations of 2022

Featured

We are currently living through a revolution for the OSS/BSS industry and the customers that we serve.

It’s not a question of if we need to innovate, but by how much and in what ways.

Change is surrounding us and impacting us in profound ways, triggered by new technologies, business models, design & delivery techniques, customer expectations and so much more.

Our why, the fundamental reason for OSS and BSS to exist, has remained relatively unchanged for decades. We’ve advanced so far, yet we are a long way from perfecting the tools and techniques required to operationalise service provider networks.

In people like you, we have an incredibly smart group of experts tackling this challenge every day, seeking new ways to unlock the many problems that OSS and BSS aim to resolve.

In this report, you’ll find 25 solutions that are unleashing bold ideas to empower change and solve problems old and new. 

I’m thrilled to present these solutions in the hope that they’ll excite and inspire you like they have me.

Click on the image below to download the report.

Or click the link – https://passionateaboutoss.com/wp-content/uploads/2022/03/25-Most-Exciting-Innovations-in-OSS_BSS-2022_FINAL1.pdf 

Vendors included, in no preferential order:

  1. 2operate
  2. ActivePort
  3. ADVA
  4. AN10
  5. Anodot
  6. Appearition / Air Inspect Australia
  7. Aria Networks
  8. Avanseus
  9. Bentley
  10. Cisco
  11. CSG
  12. Cubro
  13. DFG Consulting
  14. EnterpriseWeb
  15. Ericsson / Nvidia
  16. InfoVista
  17. KX
  18. Mavenir
  19. Neo4j
  20. Netcracker
  21. Rakuten Symphony
  22. Techsee
  23. Trendspek
  24. Twinkler
  25. Zeetta

If you enjoy this report, be sure to subscribe via the form below and receive notification of future reports.

#PAOSSInnovation2022

A couple of exciting announcements

We have a couple of exciting announcements to make today.

The Transformation Project Framework (TPF) guidebook (GB1011) we’ve been working on with TM Forum has been launched with a pre-production status. That means it has passed peer review and is open for member comments. It’s still a work in progress, but you’ll see the basis for how it can help guide your next transformation.

On a related note, I was delighted to accept the TM Forum Outstanding Contributor award last night in recognition of the work the Transformation User Guide (TUG) group has done on TPF and a number of exemplar transformation guides. I’m excited to see what continues to come out of this group in the coming year/s.

Big thanks to Tony Kalcina, Dave Milham, Juan Salcedo and the rest of the Transformation User Guide team for their assistance!

Congratulations to the award winners shown in the graphic above. You can find out more details about their fantastic achievements here.

Two Exciting OSS Situation Enrichment Innovations

OSS have long been used to enrich operator experiences in many different ways.

Raw alarms are cross-linked and enriched with inventory data (eg make/model, nearest neighbour, etc) to give operators in the NOC additional context to help them diagnose and repair.

Inventory data is enriched with GIS / map data to provide equipment location, map views of assets and connectivity views to provide network planners with a better sense of the networks they’re designing augmentations for.

Customer services are cross-linked and enriched with service performance data to graph against SLA targets and utilisation data to bill against.

There are many others of course.
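As a minimal sketch of the first pattern above (alarms joined against inventory), here’s the join in plain Python; all field names and values are hypothetical:

```python
# Hypothetical inventory, keyed by the resource ID that alarms reference
inventory = {
    "NE-1001": {"make": "Acme", "model": "MX-5", "site": "Melbourne CBD",
                "nearest_neighbour": "NE-1002"},
}

raw_alarms = [
    {"resource_id": "NE-1001", "severity": "critical", "text": "LOS on port 3"},
]

def enrich(alarm, inventory):
    """Cross-link a raw alarm with inventory attributes for the NOC."""
    return {**alarm, **inventory.get(alarm["resource_id"], {})}

for alarm in raw_alarms:
    print(enrich(alarm, inventory))
# -> severity and text plus make, model, site and nearest neighbour,
#    all in the single record the NOC operator sees
```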

However, there are two latent problem areas where technology innovation is just reaching a point where it can provide advanced situation enrichment to help network operators:

  1. Remote workers gaining an immersive awareness of the situation on site
  2. Field workers tapping into enriched information whilst on site, allowing them to make more informed decisions

1. Immersive Awareness of Site Situations

Remote workers such as network planners, NOC operators, OH&S coordinators, procurement and many others make decisions as best they can without being fully aware of the situation in the field. Our OSS and BSS record many useful details, but don’t record all of the site-specific nuances that might influence operator actions or recommendations.

Digital Twin platforms are now providing precision models of site, allowing remote operators to walk through 3D photographic representations of site. These models also allow remote operators to record accurate measurements and gain greater spatial awareness of the situation at site. More importantly, as we’ve demonstrated in our OSS sandpit, these 3D photogrammetric models can be enriched with vast arrays of information from OSS. In the example shown below, the 3D models are enriched with asset IDs (and much more):

 

Immersive Awareness Using Back Office Information

With the spatial world being cross-linked and enriched with OSS data above, we can now overlay and unlock a wealth of additional information for field workers, with the topology and attribute mappings shown below being just the tip of the iceberg.

Field workers have historically been given limited information, in the shape of design packs, A0 / A1 drawings, etc. For example, these design packs were unable to give field workers the current-state health posture of the network at time of arrival on site. If they needed additional information, such as nearest adjacent points to perform deeper diagnosis and testing, they would have to call a request in to head office. If there were any discrepancies between the design and the real situation on site, they also needed to engage with head office via “blunt instruments” like red-line markups on printouts and/or phone conversations.

Modern mobility applications are providing field workers with the ability to access whatever information they need, as well as a more direct, near-real-time interface to designers and other back-office staff as well as the wealth of OSS data available.

Further advances in augmented / virtual reality (AR/VR) provide field workers with enriched views of the sites they’re working in. Identification of hazards, assets, work instructions, current network health metrics, decision support and plenty of other OSS-based information can be overlaid onto what field workers are seeing on site, enriching their decision-making processes.

Apologies for the shaky video below, recorded through my phone, showing the projection of a tower site on our boardroom table.

The examples shown above are also discussed in two innovation reports that we’re currently finalising and will release to market shortly. Further information about these reports will be made available soon.

 

How to Maintain Control of your OSS in a Cloud Outage

There was a brilliant article from Matt Kapko over at SDXCentral last week entitled, “AWS Outage Stresses Telco Cloud Challenges.” It specifically highlighted lengthy outages on AWS in December and the downstream impact cloud outages can have for telcos that have dependencies on third-party cloud providers. 

“The benefits of public cloud are clear — efficiency, scalability, and the ability to consolidate functions with less equipment. For telco operators this translates to better economics, business agility, and accelerated innovation at the pace of software,” explained Don Alusha in the article.

That’s all completely true and it makes sense for telcos to leverage public cloud. There are a few considerations to take into account though:

  • Despite fantastic infrastructure resiliency mechanisms, cloud providers aren’t immune from outages
  • Carriers aren’t immune from outages either, so I’m not taking sides relating to infallibility in this article. [BTW. If any reader is aware of confirmed metrics that show security and availability comparisons between cloud and carrier infrastructure I’d love to see them!!]
  • Carrier infrastructure “tends” to be more localised, so outages may take out services within a region or perhaps even a country. That means subscribers are impacted within the affected area
  • However, with an increasing number of carriers leveraging cloud infrastructure, cloud outages are likely to lead to a more global impact area
  • In the past, carriers tended to build their own active networks (core infrastructure), but 5G is changing that paradigm, as will all future network models that embrace virtualisation and cloud-native concepts
  • Carriers traditionally owned and managed their own infrastructure, and therefore the network was within their locus of control (ie carriers could prioritise what got fixed and allocate resources to optimise management of the infrastructure, around both business-as-usual and catastrophic outages)
  • Leveraging cloud infrastructure means the telcos no longer have as much ability to prioritise or control the events relating to fault restoration
  • And it’s not just network infrastructure that’s impacted here. When carriers have OSS and BSS in the cloud, they lose the ability to manage the network, systems and even the workforce during an outage.

If we (simplistically) think of networks as being the data / customer plane and OSS/BSS as being the control plane, then with a cloud outage we have the potential to lose both planes. In most traditional telco outages, it’s either the control OR the data plane that’s impacted, not both. Again, cloud outages will tend to have a broader impact.

This adds an interesting additional layer of complexity into our High Availability (HA) planning, doesn’t it? We previously generated HA designs for our network and HA designs for our OSS/BSS (in isolation, more or less). But if both of these are just overlays on cloud infrastructure, then we’re abstracting HA design, as well as services, to the cloud operators.

What do we do to overcome this? 

Don Alusha further explains, “Operators should hedge their bets in alternative and competing cloud platforms to change the structure of current systems and processes to produce more of what is desirable and less of that which is undesirable.”

Well, that’s true. HA models tend to be built around diversity, avoiding any Single Points of Failure (SPoF). But how does that impact the design of our OSS and BSS to ensure we maintain control of our control plane? Do we design our solutions to be decoupled and stretched across different regions/zones and even cloud providers??
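As one heavily hedged illustration (a sketch, not a recommendation), a control-plane client could fail across replicated endpoints in independent clouds, regions or even an on-prem instance of last resort. Every endpoint, path and the /health convention below is hypothetical:

```python
# Hedged sketch only: an OSS client that fails across replicated control-plane
# endpoints in independent clouds/regions, with an on-prem instance as a last
# resort. Endpoints, paths and the /health convention are all hypothetical.
import urllib.request
import urllib.error

CONTROL_PLANE_ENDPOINTS = [
    "https://oss.cloud-a.example.com",  # primary (cloud provider A)
    "https://oss.cloud-b.example.com",  # replica (cloud provider B)
    "https://oss.onprem.example.com",   # last-resort on-prem instance
]

def first_healthy(endpoints: list, timeout: float = 2.0) -> str | None:
    """Return the first endpoint that answers its health check, else None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(f"{url}/health", timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # that endpoint (or its whole cloud) is down; try the next
    return None

active = first_healthy(CONTROL_PLANE_ENDPOINTS)
print(f"Routing OSS traffic via: {active or 'NO CONTROL PLANE AVAILABLE'}")
```

Of course, this conveniently ignores the hard parts – data replication, consistency and split-brain avoidance between those instances – which is exactly where the real HA design effort would lie.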

Does this slideshare from Kai Waehner provide a few thoughts for event-streaming platforms?

I’d love to get your thoughts on all of this as I certainly don’t have the answers to these conundrums!!

BTW. Matt’s article made me think back to autonomy being the OSS Security Elephant in the Room in this earlier article, which was in turn inspired by this article about 5G security by Bert Hubert. They might be worth a read too!

 

Is there an air-gap in our OSS innovation super-loops?

If you work in an OSS/BSS product company, are you able to guess what percentage of your team spends time with clients (customer-facing)? And of that percentage, how many observe the end-user interacting with your products (user-facing)?

There is a really important distinction between these two groups, not to mention the implication it has on product innovation, but I’ll get to that shortly.

We all know that great products serve the needs of their users, solving their most important problems. We also know that when there’s an air-gap between OEM vendors and clients, products will tend to struggle to meet end-user needs.

Conversely, we don’t really want our devs sitting alongside the customer for long (even on Agile / insourcing projects) because there can be a tendency for development to veer away from the big-picture roadmap. Products can become client-specific rather than client-generic in those scenarios, which isn’t great if you’re seeking solutions with repeatability / reusability.

Let’s revisit those two opening questions.

What types of roles are customer-facing?

  • Sales
  • Implementation teams
    • Project Management / Coordination roles
    • Architects
    • Business Analysts
    • Testers
    • Trainers
    • Developers (sometimes)
  • Support teams
    • Post-handover support
    • Ongoing Product support

That probably means more than half of your roles are customer-facing. That’s great. The architects, BAs and testers in particular act as a conduit between the client and your developers, forming an important innovation loop. They deal with the client’s project team almost every day for long periods of time. Their role is to deal with the clients and translate for developers. If they ask the right questions of the right people, and can then translate the answers into a message that resonates and that the devs can develop against, then all is fine. That should mean the air-gap is minimised.

I’ve probably spent close to 10 years in these sorts of client-facing roles on OSS/BSS implementation projects. That alone should give me a pretty good understanding of client needs right?

Unfortunately, there’s a serious flaw in this thinking! Take another look at the second question in the opening paragraph and see whether you can spot what the flaw is.

When you’re on an implementation team, you’re most often dealing with the client’s project team. The client’s project team is simply telling you what they think they need. In many cases the solution isn’t yet available for use for large periods of the project, so you can only talk about it, not actually use it. Not only that, but the client’s project team may not actually include many end-users. They help build OSS/BSS solutions, but might not actually use them. The OEM’s implementation team does most of their work prior to go-live of the solution. Almost everything prior to go-live is dealing with hypothetical situations. Even UAT (User Acceptance Testing) tends to only cover hypothetical scenarios.

The only true customer experience is what happens after go-live using production solutions. Unfortunately, the problem here is that most implementation teams don’t stick around long after go-live. They’re normally off to the next implementation project. At least that’s what happened most often in my case.

So let’s come back to the second question – how many (of your OEM team) observe the end-user interacting with your products?

From the list above, it’s really only your support staff – post-implementation support and ongoing product support – and they tend to only interact with end-users remotely, often only via email / tickets. You might also throw in trainers if you’re training client staff on production-like systems / scenarios (although most vendors just give the same generic training to all clients).

There’s a well-held belief that asking customers what they want will only provide incremental improvement, as per Henry Ford’s oft-quoted statement about customers asking for faster horses. Radical innovation is more likely to come from deep customer engagement – observing what they do with your solutions and understanding what they’re trying to achieve, plus their pain-points.

Therefore, let me ask you – are we putting our best observers / translators / innovation-detectives in these few real end-user-facing roles? Are we tasking them with finding the biggest problems worthy of innovation or are they just fault-fixing? Are we asking them to collect data that will guide our next-generation product developments? Would our developers and architects even listen if our observers did come up with amazing insights?

I suspect we’re leaving our best-fit roles out of our innovation super-loops! Are we air-gapping our innovation opportunities?

Would it actually be in our best interests to stay with the end-users for longer after go-live, partly to give them greater confidence handling the many scenarios that pop up, but also as a way of gaining a deeper understanding (and metrics) of our customers in the wild?

Please share your thoughts below!

The Mysterious Sector 4 in a Telco IT Stack

When it comes to driving efficiency and profitability in a service provider’s business, I feel there are three key pillars to consider (aside from strategic factors of course), as follows:

  1. The IT stack, led by the OSS/BSS
  2. The networks and
  3. The field services 

The networks are vital. They are effectively each organisation’s product, because connectivity across the network is what customers are paying for. The network must perform well to attract and retain paying customers.

The field services are vital. They build and maintain the networks, ensuring there is a product to sell to customers.

The OSS/BSS are arguably even more vital. Some may argue that they only provide a support role (the middle S in OSS and in BSS would even suggest so!). They’re more than that. OSS/BSS are the Profit Engine of a service provider. 

But let’s take a closer look at the implications of effectiveness and profitability in the overlapping sectors in the Venn diagram above and why OSS/BSS are so important.

Sector 1. OSS / BSS <-> Networks

The OSS/BSS connects customers with the product (buyers with sellers). Even if we remove this powerful factor, OSS and BSS have other key roles to perform. They connect and coordinate. They hold the key to efficiency and utilisation and, in turn, profitability.

Sectors 2&3. OSS / BSS <-> Field Workforce and Network

The OSS/BSS manages the field workforce, assigning what to do and when. But before that, our OSS/BSS provide the tools to decide why the work needs doing. They identify:

  • What needs to be built (network designs)
  • What capacity is required (by identifying performance gaps now or in the future)
  • What customers need to be connected, where and how
  • What are the root problems that need to be fixed to ensure customers are well served

That covers sector 2. But sector 3 appears to be separated from the OSS/BSS. It is, in the sense that field workers work directly on the network. However, they generally only do so after direction from the OSS/BSS (eg via work orders, trouble tickets, etc). Hence, they’re merged here.

Summary – Efficiency and The Profit Engine

You might be wondering why I missed sector 4. You might also be wondering whether the last sentence above, with the OSS/BSS pulling the strings of field work on networks, represents sector 4. Well, yes, but be patient and we’ll come back to sector 4, but with a slightly different (and potentially more powerful) perspective than that.

From the two previous sections, you would have noticed just how important OSS and BSS are to a network service provider. They directly influence:

  • Taking sales orders
  • Processing orders to ensure they’re activated
  • The time to market of new services and new product offerings, the build of support infrastructure, the reactivation of damaged infrastructure or warrantied equipment, and more
  • Identification of problems and what needs to be done to fix them
  • Preventative maintenance to minimise the degradations / outages occurring 
  • Allocation of resources and their lifecycle management
  • Optimising the capital in the network by balancing capacity (available resources vs utilisation)
  • Managing revenues (preparation of invoices, issuing bills, collections, etc)
  • Combining the many people, skills and their availability with assets, materials, certifications, etc to ensure work is prioritised and coordinated through to completion
  • The speed of getting people to site, and on to the next job after finishing the last
  • The scalability / repeatability of the factors above and more
  • The identification of repeatable actions that are then worthy of automation and/or improvement
  • Logging and visualising the performance of people, processes and technologies (in real-time and over longer trend-lines), providing the benchmarks and levers to manage any of those factors if they’re going outside control bounds
  • The list goes on!!!

When you consider the daily volumes of each of those factors at large telcos, you’ll understand how a 5% improvement or deterioration in any one of them will have significant implications for profitability. The profitability of an organisation is massively helped, or hindered, by its OSS/BSS, though few people seem to realise it.
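A quick back-of-envelope calculation shows the leverage involved. Every number below is hypothetical, but the orders of magnitude are plausible for a large telco:

```python
# Back-of-envelope illustration only - every number below is hypothetical.
# The point: at telco volumes, a 5% swing in a single factor is enormous.
truck_rolls_per_day = 2_000
cost_per_truck_roll = 400    # dollars
working_days_per_year = 250

annual_cost = truck_rolls_per_day * cost_per_truck_roll * working_days_per_year
swing = 0.05  # a 5% improvement (or deterioration)

print(f"Annual truck-roll cost: ${annual_cost:,}")                         # $200,000,000
print(f"A 5% swing is worth:    ${int(annual_cost * swing):,} per year")   # $10,000,000
```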

The Elusive Sector 4. Combining OSS/BSS, Network and Field Services.

I promised to come back to sector number 4 in the Venn diagram above and give a different perspective. There’s an opportunity that exists here that few are capitalising on yet.

But first a recap. Sector 1 is best characterised by the tools used by a NOC (Network Operations Centre) and SOC (Service Operations Centre), as well as their various metrics like time to repair, up-time / availability, etc.

Sectors 2 and 3 are best characterised by the tools used by a WOC (Workforce Operations Centre) and metrics like number of truck rolls, jobs completed, etc.

The analytics that are available at sector 4 are profound, but rarely used. Let me describe via a scenario:

  • There is an outage in region A identified in the NOC. They create a ticket for copper cable #1 to be replaced
  • The WOC picks up the ticket and a worker is sent to repair copper cable #1
  • This repeats over the next few weeks. Copper cables # 2, 3, 4, 5 and more are also repaired in region A
  • It’s clear from my description that a systemic problem exists in region A, but the NOC and WOC are both more transactional in nature and have SLAs to meet, so they’re not looking for endemic issues (see the sketch below)
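Here’s a minimal sketch of how an overseer might surface that endemic pattern from ticket data alone. The ticket fields and the threshold are hypothetical:

```python
# Hedged sketch of the 'overseer' idea: look across completed tickets for
# endemic patterns that transactional NOC/WOC views miss. The ticket fields
# and the threshold are hypothetical.
from collections import Counter

closed_tickets = [
    {"region": "A", "asset_class": "copper_cable"},
    {"region": "A", "asset_class": "copper_cable"},
    {"region": "A", "asset_class": "copper_cable"},
    {"region": "A", "asset_class": "copper_cable"},
    {"region": "A", "asset_class": "copper_cable"},
    {"region": "B", "asset_class": "optical_node"},
]

ENDEMIC_THRESHOLD = 3  # same failure class, same region, within the window

repair_counts = Counter((t["region"], t["asset_class"]) for t in closed_tickets)
for (region, asset_class), count in repair_counts.items():
    if count >= ENDEMIC_THRESHOLD:
        print(f"Endemic issue? {count} x {asset_class} repairs in region {region}"
              " - time for an uplift business case, not another truck roll")
```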

The OSS/BSS has the ability to act as an overseer, observing not just the transactional metrics, but also considering effectiveness and profitability. It is able to consider:

  • Each outage is having an impact on availability, potentially costing money via rebates and damaging the brand
  • Each outage is introducing costs of repair teams as well as replacement material costs
  • The copper network doesn’t provide as much head-room for higher-speed offerings to customers, which represents an opportunity cost
  • The reliability and availability uplift of an optical node versus a copper node in region A
  • The long-term cost profile of a new fibre network versus a deteriorating copper network
  • The costs of inserting a new optical node and removal of the copper node (and associated cables, connectors, network termination devices, etc)
  • The customer profiles connected to the network that are likely to take higher speed services if made available
  • etc

With all of this knowledge at the fingertips of the OSS/BSS, it could decide to inject a new work order into the work queue: starting with a network design (possibly automated), then acquisition of assets / materials (possibly automated), notification to customers of the network uplift (outage notifications, but also better performance and higher-speed offerings – possibly automated), creation / scheduling / dispatch of jobs to deconstruct / construct / commission infrastructure in region A, and finally notification to customers that the new services are available.

The metrics that matter at sector 4 are less about transactions and more about higher-order objectives like effectiveness and profitability.

In many cases, network operators don’t have these near-real-time decision support tools at hand. If network uplift is to be performed, it’s decided across an entire service area by capacity planning teams rather than in more granular regions.

This is only one scenario for what could be achieved at sector 4. I’m sure we can imagine more if we start building the tools here.

Also consider the different speeds that the OSS/BSS need to cater for and optimally allocate as part of end-to-end workflows:

  • Some activities are fast – virtualised networks have the ability to self-manage / self-optimise so changes are occurring in this part of the network dynamically and automatically
  • Some activities are medium-speed – logical changes in networks, such as configuration updates or logical connectivity, are performed with only a short turnaround and are pushed to the network
  • Other activities are slow – physical infrastructure builds such as new sites, cables, etc can often take months of planning and build activities (with many sub-activities to manage) 

 

How and why we need to Challenge our OSS Beliefs

The OSS / BSS industry tends to be quite technical in nature. The networks we manage are technical. The IT stacks we build on are technical. The workflows we design are technical. The product offerings we coordinate are technical. As such, the innovations we tend to propose are quite technical too. In some cases, too technical, but I digress.

You would’ve no doubt noticed that the organisations garnering most attention in the last few years (eg the hyperscalers, Airbnb, Uber, Twitter, Afterpay, Apple, etc) have also leveraged technology. However, whilst they’ve leveraged technology, you could argue that it’s actually not “technical innovation” that has radically changed the markets they service. In most cases, it’s actually the business model and design innovations, which have simply been facilitated by technology. Even Apple is arguably more of a design innovator than a technology innovator (eg there were MP3 players before the iPod came along and revolutionised the personal music player market).

Projects like Skype, open-source software and hyperscaled services have made fundamental changes to the telco industry. But what about OSS/BSS and the telcos themselves? Have we collectively innovated beyond the technical realm? Has there been any innovation from within that has re-framed the entire market?

As mentioned in an earlier post, I’ve recently been co-opted onto the TM Forum’s Transformation User Group to prepare a transformation framework and transformation guides for our industry. We’re creating a recipe book to help others to manage their OSS/BSS/telco/digital transformation projects. However, it only dawned on me over the weekend that I’d overlooked a really, really important consideration of any transformation – reframing!

Innovation needs to be a key part of any transformation, firstly because we need to consider how we’ll do things better. However, we also need to consider the future environment that our transformed solutions will operate within. Our transformed solution will (hopefully) still be relevant and remain operational 5, 10, 15 years from now. For it to be relevant that far into the future, it must be able to flexibly cater for the environmental situation of the future. Oh, and by the way, when I say “environment,” I’m not talking climate change, but other situational change. This might include factors like:

  • Networks under management
  • Delivery mechanisms
  • Business models
  • Information Technology platforms 
  • Architectural models
  • Process models
  • Staffing models (vs automations)
  • Geographical models (eg local vs global services)
  • Product offerings (driven by customer needs)
  • Design / Develop / Test / Release pipelines
  • Availability of funding for projects, and whether it is capex, opex, clip-the-ticket, etc
  • Risk models and risk aversion levels
  • etc

Before embarking on each transformation project, we first need to challenge our current beliefs and assumptions to make sure they’re not only valid now, but can remain valid into the coming years. Our current beliefs and assumptions are based on past experiences and may not be applicable to the to-be environment that our transformations will exist within.

So, how do we go about challenging our own beliefs and assumptions in this environment of massive, ongoing change?

Well, you may wish to ask “re-framing questions” to test your beliefs. These may be questions such as (but not limited to):

  1. Who are our customers (could be internal or external) and how do they perceive our products? Have we asked them recently? Have we spent much time with them?
  2. What is our supply chain and could it be improved? Are there any major steps or painful elements in the supply chain that could be modified or removed?
  3. Are these products even needed under future scenarios / environments?
  4. What does / doesn’t move the needle? What even is “the needle”?
  5. What functionality is visible vs invisible to customers?
  6. What data would be useful but is currently unattainable?
  7. Do we know how cash flows in our clients’ and our own companies in relation to these solutions? Specifically, how is value generated?
  8. How easy are our products to use? How long does it take a typical user (internal / external) to reach proficiency?
  9. What personas DO we serve? What personas COULD we serve? What new personas WILL we serve?
  10. What is the value we add that justifies our solution’s existence? Could that value be monetised in other ways?
  11. Would alternative tech make a difference (voice recognition, AI, robotics, observability, biometric sensors, AR/VR, etc)?
  12. Are there any strategic relationships that could transform this solution?
  13. What does our team do better than anyone else (and what are they not as strong at)?
  14. What know-how does the team possess that others don’t?
  15. What features do we have that we just won’t need? Or that we absolutely MUST have?
  16. Are there likely to be changes to the networks we manage?
  17. Are there likely to be changes to the way we build and interconnect systems?
  18. Where does customer service fall on the continuum between self-service and high-contact relationships?
  19. What pricing model best facilitates optimal use of this solution (if applicable)? For example, does a consumption-based usage / pricing model fit better than a capital investment model? As Jeff Bezos says, “Your margin is my opportunity.” The incumbents have large costs that they need to feed (eg salaries, infrastructure, etc), whereas start-ups or volume-models allow for much smaller margins
  20. Where are the biggest risks of this transformation? Can they be eliminated, mitigated or transferred?
  21. What aspects of the solution can be fixed and what must remain flexible?
  22. What’s absorbing the most resources (time, money, people, etc) and could any of those resource-consumers be minimised, removed or managed differently?

It also dawns on me when writing this list that we can apply these reframing questions not just to our transformation projects, but to ourselves – for our own personal, ongoing transformation.

I’d love to get your perspective below. What other re-framing questions should we ask? What re-framing exercise has fundamentally changed the way you think or approach OSS/BSS/personal transformation projects?

How to Approach OSS Vendor Selection Differently than Most

Selecting a new OSS / BSS product, vendor or integrator for your transformation project can be an arduous assignment.

Every network operator and every project has a unique set of needs. Counter to that, there are literally hundreds of vendors creating an even larger number of products to service those widely varied sets of needs.

If you’re a typical buyer, how many of those products are you already familiar with? Five? Ten? Fifty? How do you know whether the best-fit product or supplier is within the list you already know? Perhaps the best-fit is actually amongst the hundreds of other products and suppliers you’re not familiar with yet. How much time do you have to research each one and distill down to a short-list of possible candidates to service your specific needs? Where do you start? Lots of web searches?

Then how do you go about doing a deeper analysis to find the one that’s best fit for you out of those known products? The typical approach might follow a journey similar to the following:

The first step alone can take days, if not weeks, and also chews up valuable resources because many key stakeholders will be engaged in the requirement-gathering process. The other downside of the requirements-first approach is that it becomes a wish-list that doesn’t always specify the level of importance of each item (eg “nice to have” versus “absolutely mandatory”).

Then, there’s no guarantee that any vendor will support every single one of the mandatory requirements. There’s always a level of compromise and haggling between stakeholders.

Next comes the RFP process, which can be even more arduous.

There has to be an easier way!

We think there is.

Our approach starts with the entire 400+ vendors in our OSS/BSS Vendor Directory. Then we apply one or two rounds of filters:

  1. Long-list Filter – Query by high-level product capability as per the diagram below. For example, if you want outside plant management, then we filter by 9b, which alone returns a list of over 60 candidate vendors, but we can narrow it down further by applying filters 10 and 14 as well if you need those functionalities (see the sketch below)
  2. Short-list Filter – We then sit with your team to prepare a list of approx. 20 high-level questions (eg regions the vendor works in, what support levels they provide, high-level functional questions, etc). We send this short questionnaire to the long-list of vendors for their clarification. Their collated responses usually then yield a short-list of 3-10 best-fit candidates that you/we can then perform a deeper evaluation on (how deep you dive depends on how thorough you want the review to be, which could include RFPs, PoCs and other steps).
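The mechanics of that long-list filter are simple enough to show in a few lines. The directory entries and capability codes below are invented for the example:

```python
# Toy illustration of the long-list filter. The directory entries and
# capability codes below are invented for the example.
vendor_directory = [
    {"name": "VendorOne",   "capabilities": {"9b", "10", "14"}},
    {"name": "VendorTwo",   "capabilities": {"9b", "10"}},
    {"name": "VendorThree", "capabilities": {"3", "7"}},
]

def long_list(directory: list, required: set) -> list:
    """Return the vendors whose capability tags cover every required code."""
    return [v["name"] for v in directory if required <= v["capabilities"]]

print(long_list(vendor_directory, {"9b"}))              # broad long-list
print(long_list(vendor_directory, {"9b", "10", "14"}))  # narrowed further
```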

The 2-step filter approach is arguably even quicker to prepare and more likely to identify the best-fit short-list solutions because it starts by assessing 400+ vendors, not just the small number that most clients are aware of.

The next step (step 5 in the diagram above) also uses a contrarian approach. Rather than evaluating via an RFP that centres around a set of requirements, we instead identify:

  • The personas (people or groups) that need to engage with the OSS/BSS
  • The highest-priority activities each persona needs to perform with the OSS/BSS
  • End-to-end workflows that blend these activities into a prioritised list of demonstration scenarios

These steps quickly prioritise what’s most important for the to-be solution to perform. We describe the demonstration scenarios to the short-listed vendors and ask them to demonstrate how their solutions solve for those scenarios (as best they can). The benefit of this approach is that the client can review each vendor demonstration through their own context (ie the E2E workflows / scenarios they’ve helped to design).

This approach does not provide an itemised list of requirement compliance like the typical approach. However, we’d argue that even the requirement-led approach will (almost?) never identify a product of perfect fit, even if it’s filled with “Will Comply” responses for functionality that requires specific customisation.

Our “filtering” approach will uncover the solution of closest fit (in out-of-the-box state) in a much more efficient way.

We should also highlight that the two chevron diagrams above are just sample vendor-selection flows. We actually customise them to each client’s specific requirements. For example, some clients require much more thorough analysis than others. Others have mandatory RFP and/or business case templates that need to be followed.

If you need help, either with:

  • Preparing a short-list of vendors for further evaluation, down from our known list of 400+; or
  • Performing a more thorough analysis to identify the best-fit solution/s

then we’d be delighted to assist. Please leave us a note in the contact form below.

New report – Inventory of the Future

I’ve recently been co-opted to lead the development of an “Inventory of the Future” transformation guide on behalf of TM Forum and thought you, the readers, might have some interesting feedback to contribute.

I had previously prepared an “Inventory of the Future” discussion paper prior to being invited into the TMF discussions about a month ago. However, my original pack had too much emphasis on a possible target state (so those slides have been omitted from the attachment below).

Click to access Inventory_of_the_Future_v9a.pdf

Instead, before attempting to define a target state under the guise of TM Forum, we first need to understand:

  1. Where carriers are currently at with their inventory solutions
  2. What they want to achieve next with their inventory and
  3. What the future environment (in say 5 years from now) that inventory will exist within will look like. Eg:
    1. Will it still integrate with orchestration (resource allocation), fault management (enrichment, SIA, RCA, etc), etc?
    2. Will it manage a vastly different type / style of network (such as massive networks of sensors / IoT that are managed as a cohort rather than individually)?
    3. etc

This, dear reader, is where your opinions will be so valuable.

Do the objectives (slide 3) and problem statements (slide 4) resonate with you? Or do you have an entirely different perspective?

Would love to hear your thoughts in the comment section below!! Alternatively, if you’d prefer to share your ideas directly, or would like to see the rest of the report (as it currently stands), please leave me a personal message.

New. Just Launched – Mastering Your OSS (the eBook version)

We’re excited to announce that we’ve just launched the eBook version of Mastering Your OSS.

Click on the link above or here to read all about it.

Do we need more dummies working on our OSS?

I was reading an article by Chris Campbell that states, “In his book How to Rule the World, Brian J. Ford reveals the relationship between sophistication and complexity; what he calls ‘obscurantism.’ In short, the more sophisticated the problem-solver, the more complex (and costly) the solution.”

Does that resonate with you in the world of OSS/BSS? Is it because we have so many unbelievably clever, sophisticated people working on them that they are so complex (and costly)??

Do we actually need more stupid, unsophisticated people (like me) to be asking the stupid questions and coming up with the simple solutions to problems?? Do we need to intentionally apply the dummy-lens??

Sadly, with so much mental horsepower spread across the industry, the dummy-lens is probably shouted down in most instances (in favour of the sophisticated / clever solutions… that turn out to be complex and costly). Check out this article about what impact complexity has on the triple constraint of project management.

I’d love to get your thoughts.

Even better than that, I’d love to hear any “dummy lens” ideas you have that must be considered right away!!

Drop us a note in the comment section below.

Time to Kill the Typical OSS Partnership Model?

A couple of years ago Mark Newman and the content team at TM Forum created a seminal article, “Time to Kill the RFP? Reinventing IT Procurement for the 2020s.” There were so many great ideas within the article. We shared a number of concordant as well as divergent ideas (see references #1, #2, #3, #4, #5, #6, and others).

As Mark’s article described, the traditional OSS/BSS vendor selection process is deeply flawed for both buyer and seller. It’s time-consuming and costly. But worst of all, it tends to set the project on a trajectory towards conflict and disillusionment. That’s the worst possible start for a relationship that will ideally last for a decade or more (OSS and BSS projects are “sticky” because they’re difficult to transform / replace once in-situ).

Partnership is the key word in this discussion – as reiterated in Mark’s report and our articles back then as well.

Clearly this message of long-held partnerships is starting to resonate, as we see via the close and extensive partnerships that some of the big service providers have formed with third-parties for integration and other services. 

That’s great…. but…… in many cases it introduces its own problem for the implementation of OSS and BSS projects – a situation that is also deeply flawed.

Many partnerships are based around a time and materials (T&M) model. In other words, the carrier pays the third-party a set amount per day for the number of days each third-party-provided resource works. A third-party supplies solution architects (at $x per day), business analysts (at $y per day), developers (at $z per day), project managers at… you get the picture. That sounds simple for all parties to wrap their heads around and come to mutually agreeable terms on. It’s so simple to comprehend that most carriers almost default to asking external contractors for their daily charge-out rates.

This approach is deeply flawed – ethically conflicted even. You may ask why…. Well, Alan Weiss articulates it best as follows:

When you charge by the hour, you’re in ethical conflict with the client. You only receive larger pay for the longer you’re there, but the client is better served by how quickly you can meet objectives and leave.

Complex IT projects like OSS and BSS implementations are the perfect example of this. If your partners are paid on a day rate, they’re financially incentivised to let delays, bureaucracy, endless meetings and general inefficiency prosper. In big organisations, these things tend to thrive even without any incentivisation!

Assuming a given project continues at a steady-state of resources, if a project goes twice as long as projected, then it also goes 100% over the client’s budget. By direct contrast, the third-party doubles their revenue on the project.

T&M partnership models disincentivise efficiency, yet efficiency is one of the primary reasons for the existence of OSS and BSS. They also disincentivise reusability. Why would a day-rater spend the extra time (in their own time) to systematise what they’ve learnt on a project when they know they will be paid by the hour to re-invent that same wheel on the next project?

Can you see why PAOSS only provides scope of work proposals (ie defined outcomes / deliverables / timelines and, most importantly, defined value) rather than day-rates (other than in exceptional circumstances)??

Let me cite just one example to illustrate the point (albeit a severe example of the point).

I was once assisting an OEM vendor to implement an OSS at a tier-1 carrier. This vendor also provided ongoing professional services support for tasks such as customisation. However, the vendor’s day-rates were slightly higher than the carrier was paying for similar roles (eg architects, developers, etc). The carrier invited a third-party to perform much of the customisation work because their day-rates were lower than the OEM’s.

Later on, I was tasked with reviewing a customisation written by the third-party because it wasn’t functioning as expected. On closer inspection, it had layers of nested function calls and lookups to custom tables in the OEM’s database (containing fixed values). It comprised around 1,500 lines of code. It must’ve taken weeks of effort to write, test and release into production via the change process that was in place. The sheer entanglement of the code took me hours to decipher. Once I finally grasped why it was failing and then interpreted the intent of what it should do, I took it back to a developer at the OEM. His response?

Oh, you’ve gotta be F#$%ing kidding me!

He then proceeded to replace the entire 1,500 lines and spurious lookup tables with half a line of code.

Let’s put that into an equation containing hypothetical numbers:

  • For the sake of the comparison, let’s assume test and release effort is equivalent for both parties
  • OEM charges $1,000 per day for a dev
  • Third-party charges $900 per day for a dev
  • OEM developer (who knows how the OEM software works) takes 15 seconds to write the code = $0.52
  • Third-party dev takes (conservatively) 5 days to write the equivalent code (which didn’t work properly) = $4,500

In the grand scheme of this multi-million dollar project, the additional $4,499.48 was almost meaningless, but it introduced weeks of delays (diagnosis, re-dev, re-test, re-release, etc).
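Tabulating those same hypothetical numbers makes the incentive problem plain – under T&M, compensation tracks elapsed time, not value delivered:

```python
# The article's hypothetical numbers, tabulated. Under T&M, compensation
# tracks elapsed time rather than value delivered.
OEM_DAY_RATE = 1_000
THIRD_PARTY_DAY_RATE = 900
SECONDS_PER_DAY = 8 * 3600  # assuming an 8-hour working day

oem_cost = OEM_DAY_RATE * 15 / SECONDS_PER_DAY  # 15 seconds of work
third_party_cost = THIRD_PARTY_DAY_RATE * 5     # 5 days of work

print(f"OEM (working half-line of code, 15s): ${oem_cost:,.2f}")          # $0.52
print(f"Third-party (broken code, 5 days):    ${third_party_cost:,.2f}")  # $4,500.00
```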

Now, let’s say the new functionality offered by this code was worth $50,000 to the carrier in efficiency gains. Who deserves to be rewarded $5,000 (say, 10% of the value delivered)?

  • The OEM who completed the task and got it working in seconds (and was compensated $0.52); or
  • The Third-party who never got it to work despite a week of trying (and was compensated $4,500)

The hard part about scope-of-work projects is that someone has to scope them and define the value they deliver. That’s a whole lot harder, and provides less flexibility, than just assigning a day rate. But perhaps that in itself provides value. If we need to consider the value of what we’re producing, we might just find that some of the tasks in our agile backlog aren’t really worth doing.

If you’d like to share your thoughts on this, please leave a comment below.


What Pottery and SpaceX Rockets Have in Common with OSS will Surprise You

In our previous article, we described five new OSS/BSS automation design rules applied by Elon Musk. Today, we’ll continue on into the second part of the Starbase video tour and lock onto another brilliant insight from Elon.

From 5:10 to 11:00 in the video below, Elon and Tim (of Everyday Astronaut) discuss why failure, in certain scenarios, is a viable option even in the world of rocket development.

An edited excerpt from the video is provided below.

[Elon] We have just a fundamentally different optimization for Starship versus say, like the polar extreme would be Dragon. Dragon, there can be no failures ever. Everything’s gotta be tested six ways to Sunday. There has to be tons of margin. There can never be a failure ever for any reason whatsoever.
Then Falcon is a little less conservative. It is possible for us to have, say, a failure of the booster on landing. That’s not the end of the world.
And then for Starship, it’s like the polar opposite of Dragon: we’re iterating rapidly in order to create the first ever fully reusable rocket, orbital rocket.
Anyway, it’s hard to iterate, though, when people are on every mission. You can’t just be blowing stuff up ’cause you’re gonna kill people. Starship does not have anyone on board so we can blow things up.

[Tim] Are you just hoping that by the time you put people on it, you’ve flown it say 100, 200 times, and you’re familiar with all the failure modes, and you’ve mitigated it to a high degree of confidence.

[Elon] Yeah.

Interesting…. Very interesting…

How does that relate to OSS? Well, first I’d like to share with you another story, this time about pottery, that I also found fascinating.

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A,” forty pounds a “B,” and so on. Those being graded on “quality,” however, needed to produce only one pot—albeit a perfect one—to get an “A.” Well, come grading time and a curious fact emerged: the works of the highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work—and learning from their mistakes—the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.
David Bayles and Ted Orland in their book, “Art and Fear: Observations on the Perils (And Rewards) of Art making,” which I discussed previously in this post years ago.

These two stories are all about being given the licence to iterate…. within an environment where there was also a licence to fail.

The consequences of failure for our OSS/BSS usually fall somewhere between these two examples. Not quite as catastrophic as crashing a rocket, but more impactful than creating some bad pottery.

But even within each of these, there are different scales of consequence. Elon presciently identifies that it’s less catastrophic to crash an unmanned rocket than it is to blow up a manned rocket, killing the occupants. That’s why he has different platforms. He can use his unmanned Starship platform to rapidly iterate, flying a lot more often than the manned Dragon platform by reducing compliance checks, redundancy and safety margins. 

So, let me ask you, what are our equivalents of Starship and Dragon?
Answer: Non-Prod and Prod environments!

We can set up non-production environments where it’s safe to crash the OSS/BSS rocket without killing anyone. We can reduce the compliance, run lots of tests / iterations and rapidly learn, identifying the risks and unknowns.

With OSS/BSS being software, we’re probably already doing this. Nothing particularly new in the paragraph above. But I’m less interested in ensuring the reliability of our OSS/BSS (although that’s important of course). I’m actually more interested in ensuring the reliability of the networks that our OSS/BSS manage.

What if we instead changed the lens to using the OSS/BSS to intentionally test / crash the (non-production or lab) network (and EMS, NMS, OSS/BSS too perhaps)? I previously discussed the concept of CT/IR – Continual Test / Incremental Resilience – here, which is analogous to CI/CD (Continuous Integration / Continuous Delivery) in software. CT/IR is a method to automatically, systematically and programmatically test the resilience of the network, then use the learnings to harden the network and ensure resilience is continually improving.

Like the SpaceX scenario, we can use the automated destructive testing approach to pump out high testing volumes and variants that could not occur if operating in risk-averse environments like Dragon / Production. Where planned, thoroughly tested changes to production may only be allowed during defined and pre-approved change windows, intentionally destructive tests can be run day and night on non-prod environments.
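To sketch what that might look like in practice (and this is only a sketch – the failure modes and the inject_failure function below are placeholders, not a real API; a real implementation would drive your lab orchestration / EMS tooling):

```python
# Heavily hedged sketch of a CT/IR loop: continually inject failures into a
# NON-PRODUCTION network, record what didn't self-heal, and feed the
# learnings back into hardening. Everything here is a placeholder.
import random

FAILURE_MODES = ["kill_link", "kill_node", "corrupt_config", "saturate_port"]

def inject_failure(environment: str, mode: str) -> bool:
    """Placeholder: trigger 'mode' in the lab network and report whether it
    self-healed within its target. Simulated here with a random outcome."""
    return random.random() > 0.2

def ctir_loop(iterations: int, environment: str = "non-prod") -> list:
    assert environment != "prod", "destructive testing is for Starship, not Dragon"
    weaknesses = []
    for _ in range(iterations):
        mode = random.choice(FAILURE_MODES)
        if not inject_failure(environment, mode):
            weaknesses.append(mode)  # a hardening candidate (and AIOps seed data)
    return weaknesses

print(ctir_loop(100))  # the failure modes that need hardening next
```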

We could even pre-seed AIOps data sets with all the boundary failure cases, ready for introduction into production environments, without those failure modes ever having been seen in prod.

 

Zooming in and out of your OSS

Our previous post talked about using the following frame of reference to think bigger about our OSS/BSS projects and their possibilities.

It’s the multitudes of experts at Level 1 that get projects done and products released. All hail the doers!!

As Rory Sutherland indicated in his book, Alchemy – The surprising power of ideas that don’t make sense, “No one complained that Darwin was being trivial in comparing the beaks of finches from one island to another because his ultimate inferences were so interesting.”

In the world of OSS/BSS, we do need people who are comparing the beaks of finches. We need to zoom down into the details, to understand the data. 

But if you’re planning an OSS/BSS project or product; leading a team; consulting; or marketing / selling an OSS/BSS product or service, you also need to zoom out. You need to be like Darwin to look for, and comprehend the ramifications of, the details.

This is why I use a WBS (Work Breakdown Structure) to break down almost every OSS/BSS project I work on. I start with the problem statements at levels 2-5 of the reference framework above (depending on the project) and try to take in the broadest view of the project. I then start breaking down the work at the coarsest level of granularity. From there, we can zoom in as far into the details as we need to. It provides a plan on a page that all stakeholders can readily zoom out of and back into, seeing where they fit in the overall scheme of the project.
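As a toy illustration (the breakdown itself is invented), a WBS can be modelled as a nested structure, with a depth limit acting as the zoom control between the plan-on-a-page and the detail:

```python
# Toy illustration only - the breakdown itself is invented. A WBS as a
# nested structure, with a depth limit acting as the 'zoom' control.
WBS = {
    "OSS Transformation": {
        "Environments & Infrastructure": {"Non-prod build": {}, "PROD carve-out": {}},
        "Data Migration": {"Inventory load": {}, "Reconciliation": {}},
        "Integrations": {"Alarm feed": {}, "Billing feed": {}},
    }
}

def show(node: dict, depth: int = 0, max_depth: int = 99) -> None:
    """Print the WBS down to max_depth levels."""
    if depth > max_depth:
        return
    for name, children in node.items():
        print("  " * depth + name)
        show(children, depth + 1, max_depth)

show(WBS, max_depth=1)  # zoomed out: the plan on a page
show(WBS)               # zoomed in: every work package
```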

With this same perspective of zooming in and out, I often refer to the solution architecture analogy. SAs tend to create designs for an end-state – the ideal solution that will appear at the end of a project or product. Having implemented many OSS/BSS, I’ve also seen examples where the end-state simply isn’t achievable because of the complexity of all the moving parts (The Chessboard Analogy). The SAs haven’t considered all the intermediate states that the delivery team needs to step through, or the constraints that pop up during OSS/BSS transformations. Their designs haven’t considered the detailed challenges along the way.

Interestingly, those challenges often have little to do with the OSS/BSS you’re implementing. It could be things like:

  • A lack of a suitable environment for dev, test, staging, etc purposes
  • A lack of non-PROD infrastructure to test with, leading to the challenging carve-out and protection of PROD whilst still conducting integration testing
  • A new OSS/BSS introduces changes in security or shared services (eg DNS, UAM, LDAP, etc) models that need to be adapted on the fly before the OSS/BSS can function
  • Carefully disentangling parts of an existing OSS/BSS before stitching in elements of your new functionality (The Strangler Fig transformation model)
  • In-flight project changes that are moving the end-state or need to be progressively implemented in lock-step with your OSS/BSS phased release, which is especially common across integrations and infrastructure
  • Changes in underlying platforms or libraries that your code-base depends on
  • Refactoring of other products like microservices 
  • The complex nuances of organisational change management (since our OSS/BSS often trigger significant change events)
  • Changes in market landscape
  • The many other dependencies that may or may not be foreseeable at the start of the journey

You need to be able to zoom out to consider and include these types of adjacencies when planning an OSS/BSS.

I’ve only seen one person successfully manage an OSS/BSS project using bottom-up work breakdown (although he was an absolute genius, and it did take him 14-hour days, 7 days a week, throughout the project to stay on top of his 10,000+ row Gantt chart and all of the moving dependencies within it). I’ve also seen other bottom-up thinkers nearing mental breakdown trying to keep their highly complex OSS/BSS projects under control.

Being able to zoom up and down may be your only hope for maintaining sanity in this OSS/BSS world (although it might already be too late for me…. and you???)

If you need guidance with the breakdown of work on your OSS/BSS project or need to reconsider the approach to a project that’s veering off target, give us a call. We’d be happy to discuss.