A seismic shift is happening

A seismic shift is happening that will change the world of OSS/BSS so fundamentally in the next few years that it totally blows my mind. It will be the biggest in my 20+ year career in the telco industry.

The following pic is from an IA Summit Keynote by Charles Lamanna, the Corporate VP of Business Apps & Platforms at Microsoft. He leads the development of business apps within the Microsoft Dynamics 365 platform. This keynote is brilliant and should be watched by anyone who has an influence on the direction of OSS and BSS product development. I’ll explain more below.

[Keynote slide image]

An inflection point as significant as the one described above represents great opportunity, but also great risk (for those that pivot and for those that don’t). It signifies major upheaval in the way OSS and BSS applications will be architected, how their user interfaces will look, and how workflows are performed. The OSS/BSS applications we know today will change dramatically, as will the benefits they deliver to our clients (eg telcos) and to our clients’ customers (eg subscribers to telco services). The next generation of applications will be even more intelligent and personalised than the current one.

This is an opportunity to think vastly differently about our OSS/BSS and where they fit. A chance to re-imagine by first projecting forward to what the environment they operate in will look like five years from now. That world will not only be driven by AI. It will be a world where Augmented Reality is as prevalent a task enabler as smartphones are today. It will exist within frameworks where many human activities will be guided by decision support (driven by the data collected and assembled by our OSS/BSS insight engines).

This will surely be the biggest inflection point in my 20+ year career in OSS. During the past 20 years, we’ve gone through:

  • Drastic changes in networks under management (as described in yesterday’s article)
  • Convergence of IT and telco with:
    • Network virtualisation and containerisation
    • Changes in delivery models like Agile, DevSecOps, etc
    • A shift towards low-code / no-code development
    • Changes in data and analytics, from relational to graph and time-series databases, with big-data / data-warehouse models of data aggregation, and with productised ETL pipelines
    • Perhaps most importantly from an architectural perspective, cloud-hosting, web-scaling and microservices
    • Closely related, changes in the style of APIs
    • Data governance regulations like GDPR

The hits just keep on coming. These have changed our underlying architectures and infrastructure, but they haven’t massively influenced the fundamental user interfaces or customer workflows / journeys.

However, AI will change this. For anyone developing an OSS/BSS roadmap, AI can either just facilitate more of the same, but slightly better, or trigger a complete re-imagination.

It could trigger shifts in thinking such as:

  • The move from human-centric user interfaces and workflows to data / AI / machine-driven apps. Instead of humans driving apps, the apps / AI drive humans. Instead of choosing to search and navigate through an application, the application is responsible for surfacing only the information I must action
  • From interfaces designed for individuals to work on, and today’s tick-and-flick workflow mentality, to a more collaborative digital teamwork approach. Not just humans in the team, but digital assistants and AI supplementing the collaborative team too. Rather than people using tools that are driven by AI behind the scenes, we’ll have AI collaborating with humans as co-pilots for decision support (as GPT-3 is giving us a glimpse of with co-authoring)
  • From a perspective of having to sort through the noise and make human correlations that allow us to then work on anomalies / discrepancies, to only working on activities where a human touch is most needed
  • From working with a narrow sliver of data, where we only focus on what our human minds can comprehend (eg I’ll look at CPU utilisation, throughput and up/down time metrics) to using machines to sort through thousands of metrics and petabytes of data at streaming speeds looking for patterns or anomalies that should be brought to our attention
  • From data-light to data-rich environments
  • From human-led decisions to nuanced, machine-assisted decision support
  • From a side-car interaction with data (eg via printed design packs, PCs / monitors in a NOC, smart-phone screens) to an immersive one with augmented reality (AR). I analogise this transition to being similar to when we used to drive to the desired destination using paper maps to now being directed by GPS via heads-up displays
  • From human-generated or system-generated data to large language models like OpenAI’s GPT-3 or DALL-E 2 that auto-generate new content
  • From working through daily diagnoses and actions via screens, menus, scripts, etc to explaining the problem and goal in natural language
  • From sight-centric user interactions (ie looking at monitors) to greater use of other senses such as gestures / touch and sounds / voice
  • From following static process flows to performing data mining in real-time to optimise process outcomes by steering around bottlenecks that are occurring in real-time (eg product shortages, supply-chain issues, over-utilised staff, microservice outages, natural disasters, etc)
  • From having applications where there are in-built scripting languages to get things done, and a reliance on developers trained in the language (eg Magik in Smallworld), to low-code / no-code customisations where non-coders can achieve outcomes (thus bypassing the global shortage of coders).
    [As an important aside here, there are over 1 billion people in the world who can use Excel (arguably a low-code platform), but only ~10M who can code in Python. It’s no coincidence that Excel is still heavily used in telco / OSS / BSS style workflows. Some of my clients feel guilty about that, but shouldn’t. A common problem you hear telcos talk about is the skills gap, where they just can’t get enough coders to transform into modern, software-first organisations. If telcos were to prioritise the development / purchase of low-code OSS/BSS tools that can be used by existing staff, they could leap right over the skills gap. Development goes from coder-centric to user-centric, more democratised, so anyone can contribute to building functionality]
  • From having pre-defined user interfaces with a million functionalities baked in (making most OSS non-intuitive) to being able to use iterative natural language conversations to get the output / results you want (anyone who has used AI tools like DALL-E or Midjourney to auto-generate images will know what I mean by having to iterate)
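The metric-triage bullet above (machines scanning every metric and surfacing only genuine anomalies for human attention) can be sketched in a few lines. This is a minimal, hypothetical illustration: the metric names and threshold are invented, and a real OSS would use streaming detectors over live telemetry rather than a batch z-score:

```python
import statistics

def surface_anomalies(history, latest, threshold=3.0):
    """Flag metrics whose latest reading deviates strongly from history.

    history: dict of metric name -> list of past readings
    latest:  dict of metric name -> newest reading
    Returns only the metrics (and their z-scores) a human should look at.
    """
    flagged = {}
    for metric, past in history.items():
        mean = statistics.mean(past)
        std = statistics.pstdev(past)
        if std == 0:
            continue  # flat series: no basis for a z-score
        z = abs(latest[metric] - mean) / std
        if z > threshold:
            flagged[metric] = round(z, 1)
    return flagged

# Hypothetical NOC metrics: only port_errors has drifted far from normal
history = {
    "cpu_util": [40, 42, 41, 39, 43, 41],
    "throughput_mbps": [910, 905, 915, 920, 908, 912],
    "port_errors": [0, 1, 0, 2, 1, 0],
}
latest = {"cpu_util": 44, "throughput_mbps": 914, "port_errors": 57}

print(surface_anomalies(latest=latest, history=history))  # only port_errors surfaces
```

The point of the pattern is the inversion described above: the human never scans cpu_util or throughput_mbps at all; the machine watches everything and escalates only the outlier.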

So here are some confronting questions for OSS/BSS creators:

  • Are we willing to throw away large chunks of what we’ve developed (apps, code, processes)? As the old saying goes, if you don’t cannibalise your business, someone else will do it for you. Perhaps even companies you don’t yet know as competitors. New OSS/BSS tools will arise that are AI-first, not AI-retrofitted, just as some are cloud-native / cloud-first rather than retrofitted
  • How many of your product teams are already building prototypes of AI-led intelligent apps?
  • How many are prioritising low-code user interfaces, or even interactive natural language user interfaces?
  • How many are building AR-ready apps in readiness for when revolutionary Augmented Reality (AR) headsets / glasses land on the market and fundamentally change our ways of working?
  • How many are re-designing UIs to leverage the power of AI and AR tools?
  • How many are re-designing workflows for the disruptive platforms that follow?
  • How many of you are upskilling yourself to remain relevant beyond this inflection point?

Does the confluence of all of this change represent an opportunity to not just re-factor but entirely re-build using modern software architectures and next-generation concepts? It’s a large chasm to jump, and not one that can be crossed incrementally.

If you didn’t already click on the keynote speech link above, I seriously encourage you to watch it below. Hopefully your mind is as blown as mine was!

One last aside. I love the four pillars of fundamental learning discussed at around 1:01:00 into the video:

  • Cloud
  • Mobility
  • AI
  • Hands-free devices (incl. AR / mixed-reality)
