OSS/BSS in the clouds, updated

We posted an article in July entitled “OSS / BSS in the clouds,” which looked at the OSS, BSS and related telco infrastructure platforms offered by AWS, Google, Microsoft and their partners. It followed a number of announcements by the hyperscalers about their bigger pushes into telco, with a particular focus on 5G models and the edge compute technologies that support them.

Following VMware’s announcement of its 5G Telco Cloud Portfolio on 1st Sept 2020, we’ve now retro-fitted VMware into the earlier post. Click the link “OSS / BSS in the clouds” to view the updated article.

Is omni-channel more disadvantage than advantage for telcos?

In our post on Monday, we discussed how some commodity providers have a structural advantage through lower cost of production (eg Rio Tinto in iron ore). Telcos have the potential to achieve a similar advantage on their commodity services too.

It also mentioned that the first principle behind that advantage is simplicity (of systems, overheads, processes, offerings, etc). Many of these simplification factors are controllable by our OSS/BSS, but most land on us as the result of upstream decisions that flow down for resolution. These decisions (eg running 50+ mobile offering variants, including grandfathered services) make our OSS/BSS far more complex.

Another example is omni-channel customer engagement, which includes:

  • Call centres
  • Digital / websites
  • Email
  • SMS
  • Chatbots
  • Mobile Apps
  • IVR (Interactive Voice Response)
  • Social channels
  • Smart-home assistants
  • USSD (Unstructured Supplementary Service Data)

I completely get that we want to allow customers (or potential customers) to choose the contact mechanism/s they’re most comfortable with. Tight coupling / orchestration across these channels is important for customer loyalty.

Unfortunately, the task of coordinating all of these systems is complex. Linking and tracking customer journeys through multiple channels is even more challenging. For example, websites and IVR channels don’t require customers to self-identify, making it difficult to cross-link them with in-app touch-point data where identification is inherent. Some flows are personalised, some are interactive, some are batched.

Gaining consolidated insights from these disparate data sources is imperative for customer satisfaction. 
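
To illustrate why that cross-linking is hard, here’s a minimal Python sketch (channel names, identifiers and events are all invented for illustration) that stitches anonymous web / IVR touch-points to a customer only when an authenticated channel has exposed a shared identifier such as a cookie or phone number:

```python
from collections import defaultdict

# Each touch-point carries whatever identifiers its channel happens to capture.
touchpoints = [
    {"channel": "app", "customer_id": "C123", "cookie": "ck-9", "event": "viewed bill"},
    {"channel": "web", "cookie": "ck-9", "event": "browsed plans"},
    {"channel": "ivr", "phone": "+61400000001", "event": "reported fault"},
    {"channel": "app", "customer_id": "C123", "phone": "+61400000001", "event": "logged in"},
]

def stitch(touchpoints):
    """Map anonymous identifiers (cookie, phone) to a customer_id wherever an
    authenticated touch-point has exposed both, then group the journey."""
    id_map = {}
    for tp in touchpoints:
        cust = tp.get("customer_id")
        if cust:
            for key in ("cookie", "phone"):
                if key in tp:
                    id_map[(key, tp[key])] = cust

    journeys = defaultdict(list)
    for tp in touchpoints:
        cust = tp.get("customer_id")
        if not cust:  # try to resolve via a previously-seen identifier
            for key in ("cookie", "phone"):
                if (key, tp.get(key)) in id_map:
                    cust = id_map[(key, tp[key])]
                    break
        journeys[cust or "unresolved"].append((tp["channel"], tp["event"]))
    return journeys

for customer, events in stitch(touchpoints).items():
    print(customer, events)
```

Even in this toy version, any touch-point that never shares an identifier with an authenticated session remains “unresolved” – which is exactly where real omni-channel journeys fall apart.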

Clearly a typical telco omni-channel story doesn’t represent simplicity, especially for the OSS/BSS that support it! That presents a barrier to achieving the cost of production advantage we discussed.

How do telcos compare with Apple, Amazon, Google and others? Do they offer as many customer contact channels? Do they have as many variants in their “extended supply chain?” Do they have as many potential fall-out points in their O2A, T2R, P2O, etc workflows? It appears to me that most telcos are at a structural disadvantage here (on costs and CX).

We’d love to hear your thoughts. Please leave us your perspectives in the comments section below.  

Is scaled OSS/BSS multi-tenancy a thing?

We talked yesterday about the commoditisation of telco services and the part that OSS/BSS have to play in differentiation. We also talked about telcos retaining a few competitive advantages despite the share-of-wallet inroads made by OTT, software and cloud service providers recently. Managed services is one area where some of those advantages converge.

Quite a few years ago I worked with one of Australia’s largest telcos on a managed service account for one of Australia’s big four banks. The managed service contract was worth a couple of billion dollars. It covered the whole gamut of what a telco can offer: voice, data, mobility and unified communications of course, but also branch / office comms fit-outs, international WANs, IT services, security, custom help-desk, dedicated service delivery management and much more.

Big companies tend to prefer to deal with big companies, especially when the services are complex and wide-ranging. A bank like this one could realistically only negotiate with a handful of telcos because only a few could provide viable size and scope to meet their needs… if they wanted to keep it to a consolidated procurement event.

This bank wasn’t the telco’s only large managed services contract; it had quite a few similar clients. The deals were all incredibly cut-throat, contested against the other big telcos, but they were still profitable, especially once the variations started coming in. The telcos are not in a race to the bottom on managed services the way they are on the commodity data services we discussed yesterday.

I found it interesting that the telcos mainly focused on providing these managed service deals to the ASX200 (Australia’s top 200 companies by market capitalisation), give or take. So they service the top 200 (ish) with managed services and the long, long tail with retail services. But of course mid-market companies fall into the gap in between – big enough to require custom solutions, but too small to warrant the type of managed services the bank was getting.

So, now let me put the OSS/BSS spin on this situation (yes, it took me a while). At this telco, each of its big managed service contracts had completely bespoke OSS/BSS/portal solutions. Of course they had a core OSS/BSS – the factory that handled assurance and fulfilment for customers large and small – but every managed service had completely unique, satellite OSS/BSS/portals at their customer interface. For example, the core ran eTOM, the customer interface often demanded ITIL.

This telco couldn’t offer managed services to the mid-market because it was too expensive to set up the unique tooling each time. The bespoke approach meant there was almost no repeatability or economy of scale in processes, reporting, data science, skill portability, etc.

It bewildered me that they hadn’t invested the time into creating a cookie-cutter approach to all these satellite OSS/BSS/portals. Sure, each would need some customisation, but there was also significant potential for repeatability / consistency.

Now this may sound a bit bewildering to you too. But it’s not just the telco’s short-term thinking that leads to this situation. Most OSS/BSS vendors don’t build products with this type of multi-tenancy in mind.

The ability to spin up/down satellite OSS/BSS/portals, each with configurability but consistency, is not common. Importantly, nor is secure partitioning that ensures each customer only sees their data even when using shared infrastructure (including NOC / WOC / SOC as well as network infra and core OSS/BSS).
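
To make that idea more concrete, below is a minimal sketch (my own illustration, not this telco’s design) of tenant-scoped access over a shared alarm store. Real deployments would enforce the partitioning in the database, API gateway or identity layer rather than in application code, but the principle is the same: every query from a satellite portal is forced through a tenant filter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alarms (tenant_id TEXT, severity TEXT, summary TEXT)")
conn.executemany(
    "INSERT INTO alarms VALUES (?, ?, ?)",
    [
        ("bank-a",   "critical", "Core router PSU failure"),
        ("bank-a",   "minor",    "Branch 042 link flapping"),
        ("retail-b", "major",    "Store WAN degraded"),
    ],
)

def alarms_for_tenant(conn, tenant_id):
    """Every read path goes through this tenant-scoped query, so a managed-service
    customer only ever sees its own slice of the shared NOC data."""
    rows = conn.execute(
        "SELECT severity, summary FROM alarms WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(alarms_for_tenant(conn, "bank-a"))    # only bank-a's alarms
print(alarms_for_tenant(conn, "retail-b"))  # only retail-b's alarms
```

The cookie-cutter part is then a matter of templated configuration (branding, catalogs, reports) layered over this shared, partitioned core, rather than a new stack per customer.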

You know, I don’t recall ever hearing anyone else talk about this type of use-case. Multi-tenancy, yes, to an extent, but not at managed services scale.

Is this even a thing? Should it be? Could it be? You be the judge! I’d love to hear your thoughts / experiences. Please leave a comment below if your OSS/BSS stack supports this use-case or if your telco has developed a cookie-cutter approach. What approach do you use?

OSS’s Influence on Cost of Production

Since widespread deregulation of telecommunications globally, the transport of data has become a commodity. Perhaps it always was, but increased competition has steadily driven down the price per bit. It’s likely to continue on that path too. Meanwhile the expected throughputs and consumption of data services are ramping ever upwards, which requires ongoing investment in networks by their operators.

By definition, a commodity is a product/service that is indistinguishable between providers. The primary differentiator is price because there are no other features that make a buyer choose one provider over another. 

At face value, that’s true of commodities such as oil or iron ore, just as it is for data. What could be less differentiated than ones and zeroes?

However, as the charts below show for oil and iron ore, commodities aren’t always a level playing field. Some suppliers have significant differentiation – via cost of production advantages over their competitors.

Cost of Oil Production

Iron Ore Cost of Production

What if we created a $/bit graph of telcos, similar to the oil graph above, showing opex and capex splits? Which countries / operators would have a significant advantage? Which telcos are like Rio Tinto, comfortable in the knowledge that no matter how low the spot price goes, their competitors will be deeply unprofitable (and possibly out of business) long before they will be?

Virtualisation of network infrastructure (SDN, NFV, cloud) has been the much-hyped saviour of telco business models. However, like me, you’ve probably started hearing news that savings from these architectures just aren’t eventuating. Cost-bases are possibly even going up because of their increased complexity. Either way, incremental cost reduction tends to have diminishing returns anyway.

It would seem that telcos are left with three choices:

  1. Have structurally lower production costs, like Kuwait and Rio Tinto; or
  2. Differentiate / innovate, limiting exposure to the raw data / connectivity services that so many telcos still rely upon; or
  3. Both of the above

OSS/BSS have a part to play in each of these options.

Structurally lower cost comes not from cost reduction but from having far simpler variant trees (ie smaller numbers of product offerings, network types, system integrations, service configuration styles, support models, obsolescence of legacy / grandfathered products, minimal process options, etc, etc). Some of this happens within the OSS/BSS, but a lot more of it stems from upstream decisions.
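
To see why the variant tree matters so much, here’s a back-of-the-envelope calculation (all figures purely illustrative) of how quickly the combinations that an OSS/BSS must order, bill, assure and test multiply:

```python
# Illustrative only: each dimension a telco supports multiplies the variant tree
# that its OSS/BSS (and test teams) must cover end-to-end.
offerings      = 50   # live + grandfathered product variants
access_types   = 4    # eg FTTH, HFC, fixed wireless, mobile
billing_models = 3    # prepaid, postpaid, enterprise contract
support_models = 3    # self-service, standard, premium

complex_variants = offerings * access_types * billing_models * support_models
simple_variants  = 8 * 2 * 1 * 2   # a deliberately simplified portfolio

print(f"Complex portfolio:    {complex_variants} end-to-end variants")  # 1800
print(f"Simplified portfolio: {simple_variants} end-to-end variants")   # 32
```

The simplified portfolio isn’t “cost reduction” in the usual sense; it structurally removes most of the paths that would otherwise need to be built, integrated, tested and supported forever.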

Differentiation / innovation means being able to innovate through experiences / insights / content / trust / partnerships / ecosystems / local-presence in ways that other organisations can’t. It’s unlikely to come from software alone or from cloud infrastructure, because others have proven they can do those far more effectively than telcos. As much as they wish otherwise, it’s just not in the DNA of many. Yet that’s where most attention seems to be. Meanwhile, OSS/BSS are waiting to be the glue that can leverage the competitive advantages that telcos do still hold.

How fragmentation is harming the OSS/BSS industry

Our Blue Book OSS/BSS Vendors Directory lists over 400 vendors. That alone shows it’s a highly fragmented market. This amount of fragmentation hurts the industry in many ways, including:

  • Duplication – Let’s say 100 of the 400 vendors offer alarm / fault management capabilities. That means there are 100 teams duplicating effort in creating similar functionality. Isn’t that re-inventing the wheel, again and again? Wouldn’t the effort be better spent developing new / improved functionality, rather than repeating what’s already available (more or less)? And it’s not just coding, but testing, training, etc. The talent pool overlaps on duplicated work at the expense of taking us faster into an innovative future
  • Profit-share – The collective revenues of all those vendors need to be spread across many investors. Consolidated profits would most likely lead to more coordination of innovation (and less of the duplication above). And just think how much capability has been lost in tools developed by companies that are no longer commercially viable
  • Overhead – Closely related is that every one of these organisations has an overhead that supports the real work (ie product development, project implementation, etc). Consolidation would bring greater economies of scale
  • Consistency – With 400+ vendors, there are 400+ visions / designs / architectures / approaches. This means the cost and complexity of integration hurts us and our customers. The number of variants makes it impossible for everything to easily bolt together – notwithstanding the wonderful alignment mechanisms that TM Forum, MEF, etc create (via their products Frameworx, Open APIs, MEF 3.0 Framework, LSO, etc). At the end of the day, they create recommendations that vendors can interpret as they see fit. It seems the integration points are proliferating rather than consolidating
  • Repeatability and Quality – Repeatability tends to provide a platform for continual improvement. If you do something repeatedly, you have more opportunities to refine. Unfortunately, the bespoke nature of OSS/BSS implementations (and products) means there’s not a lot of repeatability. Linus’s Law (“given enough eyeballs, all bugs are shallow”) also works against us, with eyeballs spread thinly across many code-bases. And the spread of our variant trees means that we can never have sufficient test coverage, meaning higher end-to-end failure / fall-out rates than should be acceptable
  • Portability – Because each product and implementation is so different, it can be difficult to readily transfer skills between organisations. An immensely knowledgeable, talented and valuable OSS expert at one organisation will likely still need to do an apprenticeship period at a new organisation before becoming nearly as valuable
  • Analysis Paralysis – If you’re looking for a new vendor / product, you generally need to consider dozens of alternatives. And it’s not like the decisions are easy. Each vendor provides a different set of functionality, pros and cons. It’s never a simple “apples-for-apples” comparison (although we at PAOSS have refined ways to make comparisons simpler). It’s certainly not like a cola-lover having to choose between Coke and Pepsi. The cost and ramifications of an OSS/BSS procurement decision are decidedly more significant too, obviously
  • Requirement Spread – Because there are so many vendors with so many niches and such a willingness to customise, our customers tend to have few constraints when compiling a list of requirements for their OSS/BSS. As described in the Lego analogy, reducing the number of building blocks, perhaps counter-intuitively, can actually enhance creativity and productivity
  • Shared Insight – Our OSS/BSS collect eye-watering amounts of data. However, every data set is unique – collection / ETL approach, network under management, product offerings, even naming conventions, etc. This makes it challenging to truly benchmark between organisations, or share insights, or share seeded data for cognitive tools

However, I’m very cognisant that OSS come in all shapes and sizes. They all have nuanced requirements and need unique consideration. Yet many of our customers stand on a burning platform and desperately need us to create better outcomes for them.

From the points listed above, the industry is calling out for consolidation – especially in the foundational functionality that is so heavily duplicated – inventory / resource management, alarms, performance, workflows, service ordering, provisioning, security, infrastructure scalability, APIs, etc, etc. 

If we had a consistent foundation for all to work on, we could then more easily take the industry forward. It becomes a platform for continuous improvement of core functionality, whilst allowing more widespread customisation / innovation at its edges.

But who could provide such a platform and lead its over-arching vision?

  • I don’t think it can be a traditional vendor. Despite there being 400+ vendors, I’m not aware of any that cover the entire scope of TM Forum’s TAM map. Nor do any hold enough market share currently to try to commandeer the foundational platform
  • TM Forum wouldn’t want to compromise its member base by creating something that overlaps with existing member offerings
  • Solution Integrators often perform a similar role today, combining a multitude of different OSS/BSS offerings on behalf of their customers. But none have a core of foundational products that they’ve rolled out to enough customers to achieve critical mass
  • I like the concept of what ONAP is trying to do to rally global carriers to a common cause. However, its size and complexity also worry me. That it’s a core component of the Linux Foundation (LF Networking) gives it more chance of creating a core foundation via collaborative means rather than dictatorial ones

We’d love to hear your thoughts. Is fragmentation a good thing or a bad thing? Do you suggest a better way? Leave us a message below.

OSS / BSS in the clouds

Have you noticed the recent up-tick in headlines around telco offerings by hyperscalers AWS, Google and Microsoft? Or the multi-cloud telco models (the middleware) supplied by VMware and Red Hat?

Whilst previous generations of wireless connectivity have focussed on voice and data capabilities, 5G is architected to better enable new business models. Edge compute (both on-prem / device-edge and provider edge) and the related services that support 5G use-cases appear to be the leading drivers behind recent announcements. These use-cases will need to be managed by our OSS/BSS on behalf of the telco operators and their customers.

Meanwhile, top-tier OSS/BSS users are also continuing to adopt cloud-native OSS/BSS initiatives, as described in this Infographic from Analysys Mason / Amdocs. Analysys Mason estimates that over 90% of CSPs in North America, Asia–Pacific and Europe will have their OSS/BSS stacks running on cloud infrastructure by 2022, with well over 60% on hybrid cloud.

However, just how much of the CSP OSS/BSS stack will be on the cloud remains in question. According to TM Forum’s research, most CSPs have deployed less than 5% of their operations software in the public cloud.

In today’s article, we take a closer look at cloud offerings for OSS/BSS. We’ll cover the hyperscalers (AWS, Google Cloud and Microsoft Azure) as well as the multi-cloud platforms of VMware and Red Hat:

AWS

https://aws.amazon.com/telecom/

The following diagrams come from the Amazon Telco Symposium. The first diagram shows the AWS Telecom Engagement Model (noting the OSS/BSS bubble).

The second diagram provides some insight into important offerings in AWS’ push into the 5G / telco edge, such as Greengrass, SiteWise, SageMaker and more.

 

AWS services such as the following have been used as part of home-grown offerings for years:

  • Wavelength (low latency), Lambda (serverless) or EC2 – compute services for processing applications/code
  • S3, EFS, Glacier, Elastic, Snow Family, etc – data storage for collecting logs, etc
  • EKS or ECS – for Kubernetes / Docker container / cluster management
  • VPC – for separate environment deployments
  • VPN – to tie VPCs to networks / clouds / DCs
  • ELB – for load balancing
  • ELK – for log management consisting of three open source projects: Elasticsearch, Logstash, and Kibana
  • Aurora, RDS, Redshift, DynamoDB, Neptune, KDB, Cassandra (via Keyspaces), etc – databases
  • Kibana, QuickSight, etc – data visualisation
  • SageMaker, Augmented AI, Lex, etc – AI / ML tools
  • And much more

Telco architects have combined these with commercial and open-source products like Apache Kafka, NiFi and Spark to build home-grown OSS/BSS tools.
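
As a hedged illustration of that home-grown pattern (the bucket, topic ARN, log keywords and trigger are all assumptions for this sketch, not a recommended design), an AWS Lambda function might scan device log files as they land in S3 and publish fault indications to an SNS topic:

```python
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Assumed values for this sketch only.
ALERT_TOPIC_ARN = "arn:aws:sns:ap-southeast-2:123456789012:noc-alerts"
KEYWORDS = ("LINK-DOWN", "POWER-FAIL", "BGP-NEIGHBOR-DOWN")

def handler(event, context):
    """Triggered by S3 object-created events for raw device log files."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        hits = [line for line in body.splitlines() if any(k in line for k in KEYWORDS)]
        if hits:
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject=f"{len(hits)} fault indications in {key}",
                Message="\n".join(hits[:20]),  # cap the message size
            )
```

In practice this is where Kafka, NiFi or Spark would take over for enrichment and correlation, but the point is how little scaffolding the hyperscaler primitives require.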

However, there’s been an increasing trend for OSS/BSS vendors to publish their offerings on the AWS marketplace too, including:

  • Moogsoft
  • Zoho / ManageEngine (eg OpManager, Network Config Mgr)
  • Solarwinds
  • Domotz Pro
  • Lumeta CloudVisibility
  • Flowmon
  • Hyperglance
  • Mphasis InfraGraf (Forecasting & Planning)
  • KloudGin (Work Order Mgmt)
  • Kx (network performance)
  • And Bosch, ThingsBoard, ThingPark, ThingLogix, etc, etc (if we extend into IoT device management)

AWS Marketplace tends to show the solutions that are more standardised / fixed-price in nature (Telecoms section in Marketplace). Many other OSS/BSS vendors such as Netcracker, CSG, Intraway and Camvio don’t appear in the AWS marketplace but have customisable, AWS-ready solutions for clients. These companies have their own sales arms obviously, but also train the AWS global salesforce in their products.

Google Cloud

https://cloud.google.com/solutions/telecommunications

According to Google Cloud’s strategy for the telecom industry, Google Cloud is focusing on three strategic areas to support telecommunications companies:

  • Helping telecommunications companies monetise 5G as a business services platform, including:

    • The Global Mobile Edge Cloud (GMEC) strategy, which will deliver a portfolio and marketplace of 5G solutions built jointly with telecommunications companies; an open cloud platform for developing network-centric applications; and a global distributed edge for deploying these solutions

    • Anthos for Telecom, which will bring its Anthos cloud application platform to the network edge, allowing telecommunications companies to run their applications wherever it makes the most sense. Anthos for Telecom—based on open-source Kubernetes—will provide an open platform for network-centric applications.
  • Empowering telecommunications companies to better engage their customers through data-driven experiences by:

    • Empowering telecommunications companies to transform their customer experiences through data- and AI-driven technologies. Google’s BigQuery platform provides a scalable data analytics solution—with machine learning built-in so telecommunications companies can store, process, and analyze data in real time, and build personalization models on top of this data

    • Contact Center AI assists telecommunications companies with customer service. Contact Center AI gives companies 24/7 access to conversational self-service, with seamless hand-offs to human agents for more complex issues. It also empowers human agents with continuous support during their calls by identifying intent and providing real-time, step-by-step assistance
    • AI and retail solutions including omni-channel marketing, sales and service, personalisation and recommendations, and virtual-agent presence in stores
  • Assisting them in improving operational efficiencies across core telecom systems. This allows operators to move OSS, BSS and network functions from their own environments to the Google Cloud

This LightReading report even highlights how Google has been engaged to provide extensive knowledge transfer to some telcos.

This press release from March 2020 announced that Google would partner with Amdocs to support the telecom industry to:

  • Deliver Amdocs solutions to Google Cloud: Amdocs will run its digital portfolio on Google Cloud’s Anthos, enabling communications service providers (CSPs) to deploy across hybrid and multi-cloud configurations
  • Develop new enterprise-focused 5G edge computing solutions: Amdocs and Google Cloud will create new industry solutions for CSPs to monetize over 5G networks at the edge
  • Help CSPs leverage data and analytics to improve services: Amdocs will make its Data Hub and Data Intelligence analytics solutions available on Google Cloud. Amdocs and Google Cloud will also develop a new, comprehensive analytics solution to help CSPs leverage data to improve the reliability of their services and customer experiences.
  • Partner on Site Reliability Engineering (SRE) services: The companies will share tools, frameworks, and best practices for SRE and DevOps

On the same day the Google / Amdocs partnership was announced, Netcracker Technology announced it would deploy its entire Digital BSS/OSS and Orchestration stack on Google Cloud. These applications are cloud native, deployed as a set of reusable microservices that run over on-prem or public cloud on top of container platforms such as Google Kubernetes Engine (GKE).

Optiva (formerly Redknee) has also adopted a Google Cloud strategy, using Google Cloud Spanner and the Google Cloud Platform (GCP) to underpin its Charging Engine.

This article from CIMI Corp provides some great additional reading about why Google is a credible telco cloud provider.

Microsoft Azure

https://www.microsoft.com/en-us/industry/telecommunications

Microsoft has also announced an intention to better serve telecom operators at the convergence of cloud and comms networks through its Azure platform.

The diagram below, from this Microsoft blog, shows their coverage map of offerings for CSP customers (Azure for Operators):

Microsoft's Telco Offering Coverage Map

The blog also indicates that their offering is built upon:

  • Interconnect – 170 points of presence and 20,000 peering connections globally. More than 200 operators are already integrated with the Azure network via their ExpressRoute service
  • Edge – offered by the Azure platform, whether at enterprise edge, network edge, network core or cloud
  • Network Functions – This is where Microsoft distinguishes itself from AWS and Google: its ability to offer network functions, particularly for 5G via RAN and Mobile Packet Core offerings (see more about Microsoft’s Affirmed and Metaswitch acquisitions below)
  • Cloud – Incorporates a marketplace with capabilities including OSS/BSS, IoT (via IoT Central), Machine Learning / AI, Azure Cognitive Services (APIs/SDKs that help developers build cognitive intelligence across Decisions, Vision, Speech, Language and Web-search)

“We will continue to partner with existing suppliers, emerging innovators and network equipment partners to share roadmaps and explore expanded opportunities to work together, including in the areas of radio access networks (RAN), next-generation core, virtualized services, orchestration and operations support system/business support system (OSS/BSS) modernization,” states Yousef Khalidi in this Microsoft post.

The acquisitions of Metaswitch Networks (a provider of virtualised network software) and Affirmed Networks (a provider of virtualised, cloud-native mobile network solutions) show further evidence of Microsoft’s ambitions in the telco / cloud domain.

Like the partnerships described with AWS and Google above, Netcracker has also partnered with Microsoft, offering its Netcracker Digital BSS/OSS and Orchestration applications on Microsoft Azure. This article also describes that, “Netcracker is collaborating with Microsoft to integrate Azure Machine Learning (ML) and AI services with Netcracker’s Advanced Analytics to add intelligent contextual decisioning and recommendations to enable more personalized customer engagements.”

Meanwhile, Amdocs and Microsoft have been working on making ONAP available on the Azure platform. Nokia and Microsoft are partnering on “…cloud, Artificial Intelligence (AI) and Internet of Things (IoT), bringing together Microsoft cloud solutions and Nokia’s expertise in mission-critical networking.”

Microsoft Azure Edge Zones are offered through Azure, with select carriers and operators, or as private customer zones. They bring compute, storage, and service availability closer to the customer / device for low-latency + high-throughput use cases.

The recent announcement of an AT&T and Microsoft alliance as well as deals involving Telefónica (with Aura, its AI-powered digital assistant), SK Telecom (5G-based cloud gaming), Reliance Jio (cloud solutions), NTT (enterprise solution offerings), and Etisalat (future networks) show an increasing presence for Azure within the telco domain.

Netcracker has integrated its solution with Microsoft business applications (eg Office 365, Dynamics 365 and OneDrive), as other OSS/BSS providers undoubtedly have too.

Microsoft also has its own OSS/BSS offering in Dynamics 365 Field Service Management.

VMware Telco Cloud

telco.vmware.com 

VMware telco cloud architecture

VMware’s recently announced 5G Telco Cloud Portfolio has been designed to give network operators the platform to accelerate 5G and edge implementation. Its key differentiator from the examples provided above is that it allows operators to run containerised workloads across private, telco, edge and public clouds. This is seen as an important feature, allowing telcos to avoid cloud partner lock-in.

The press release above indicates that, “VMware is evolving its VMware vCloud NFV solution to Telco Cloud Infrastructure, providing CSPs a consistent and unified platform delivering consistent operations for both Virtual Network Functions (VNFs) and Cloud Native Network Functions (CNNFs) across telco networks. Telco Cloud Infrastructure is designed to optimize the delivery of network services with telco centric enhancements, supporting distributed cloud deployments…
Tightly integrated with Telco Cloud Infrastructure, VMware’s Telco Cloud Automation intelligently automates the end-to-end lifecycle management of network functions and services to simplify operations and accelerate service delivery while optimizing resource utilization. Telco Cloud Automation also now supports infrastructure and Containers-as-a-Service (CaaS) management automation to streamline workload placement and deliver optimal infrastructure resource allocation. It also significantly simplifies the 5G and telco edge network expansions through zero-touch-provisioning (ZTP) whenever capacity is required.”

Red Hat

www.redhat.com/telco

Red Hat’s origins, of course, are in open-source tools such as Red Hat Linux. They’ve now evolved to become a leading provider of open-source solutions for enterprise, delivering Linux, cloud, container and Kubernetes technology. Red Hat was acquired by IBM in 2019.

Red Hat’s telco offerings are built upon the premise that service providers will use more open-source, multi-vendor solutions to underpin their OSS, BSS and networks of the future. Red Hat aims to offer open infrastructure to facilitate service provider initiatives such as NFV, 5G, OpenRAN and Edge Compute. This includes coordination of telco and IT infrastructure, but also the applications and data that are intertwined with them.

Red Hat’s telco proposition is supported by:

  • OpenShift – a cloud compute platform as a service (PaaS), built upon Docker containers (containerised applications) that are managed by Kubernetes. This is particularly relevant to support telco cloud models that provide virtualised network functions. It also helps to deliver the edge compute infrastructure that’s becoming synonymous with 5G
  • OpenStack – a set of components, mostly deployed as infrastructure as a service (IaaS) that help manage compute, storage and networks. Some of the components are shown in the diagram below sourced from redhat.com
    OpenStack Components (not complete)
  • Ansible Automation Platform – to automate network configuration, fault remediation, security updates, and more
  • Marketplace – to assist service providers in finding, buying and deploying certified, container-based software
  • Telco Ecosystem Program – that brings together enterprise and community partners to deliver integrated telco solutions. Partners include Affirmed, Altiostar, Atos, Cisco, Ericsson, Amdocs, MYCOM OSI, Zabbix, Metaswitch, Nokia, Juniper and more
  • Consulting – offering service resources that include Innovation Labs, training and consulting
  • And other solutions such as Ceph Storage, Cloud Suite, Quay, JBoss suite, Integration, Insights, Fuse and more.

Summary

CSPs (Communications Service Providers) find themselves in a catch-22 position with cloud providers. Their own OSS/BSS, and those of their suppliers, have an increasing reliance on cloud provider services and infrastructure. Due to the cloud providers’ economies of scale, efficiency of delivery, scalability and long-tail of service offerings (from the providers themselves and their marketplaces), CSPs aren’t able to compete. The complexity of public cloud (security, scalability, performance, interoperability, etc) also makes it a quandary for CSPs. It’s already a challenge (commercially and technically) to run the networks they do, but it would be prohibitively difficult to expand their coverage further to include public cloud.

Yet, by investing heavily in cloud services, CSPs are funding further growth of said cloud providers, making themselves less competitive with, but more reliant on, cloud services. Telco architects are becoming ever more adept at leveraging the benefits of cloud. An example is being able to spin up apps without having to wait for massive infrastructure projects to be completed first, which has been a massive dependency (ie time delay) for many OSS/BSS projects.

In the distant past, CSPs had the killer apps, being voice and WAN data. These services supported the long-tail of business (eg salespeople from every industry in the world would make sales calls via telephony services) and customers were willing to pay a premium for these services.

The long-tail of business is now omni-channel, and the killer apps are content, experiences, data and the apps that support them. Being the killer apps, whoever supplies them also takes the premium and share-of-wallet. AWS, Google and Microsoft are supplying more of today’s killer apps (or the platforms that support them) than CSPs are.

The risk for CSPs is that cloud providers and over-the-top players will squeeze most of the profits from massive global investments in 5G. This is exacerbated if telco architects get their cloud architectures wrong and OPEX costs spiral out of control. Whether architectures are optimal or not, CSPs will fund much of the cloud infrastructure. But if CSPs don’t leverage cloud provider offerings, the infrastructure will cost even more, take longer to get to market and constrain them to a local presence, leaving them at a competitive disadvantage to other CSPs.

If I were a cloud provider, I’d be happy for CSPs to keep providing the local, physical, outside plant networks (however noting recent investments in local CSPs such as Amazon’s $2B stake in Bharti Airtel and Google’s $4.7 billion investment in Jio Platforms* not to mention Google Fiber and sub-sea fibre roll-outs such as this). It’s CAPEX intensive and needs a lot of human interaction to maintain / augment the widely distributed infrastructure. That means a lot is paid on non-effective time (ie the travel-time of techs travelling to site to fix problems, managing resources and/or coordinating repairs with property owners). Not only that, but there tends to be a lot of regulatory overhead managing local infrastructure / services as well as local knowledge / relationships. Governments want to ensure all their constituents have access to communications services at affordable prices. All the while, revenue per bit is continuing to drop, so merely shuffling bits around is a business model with declining profitability.

With declining profitability, operational efficiency improvements and cost reduction becomes even more important. OSS/BSS tools are vital for delivering improved productivity. But CSPs are faced with the challenge of transforming from legacy, monolithic OSS/BSS to more modern, nimble solutions. The more modular, flexible OSS/BSS of today and in future roadmaps are virtualised, microservice-based and designed for continuous delivery / DevOps. This is painting CSPs into a cloud-based future.

Like I said, a catch-22 for CSPs!

But another interesting strategy by Google is that its Anthos hybrid cloud platform will run multi-cloud workloads, including workloads on AWS and Microsoft Azure. Gartner predicts that >75% of midsize and large organisations will have adopted a multi-cloud and/or hybrid IT strategy by 2021 to prevent vendor lock-in. VMware (Dell) and Red Hat (IBM) are others creating multi-cloud / hybrid-cloud offerings. This gives CSPs the potential to develop a near-global presence for virtualised telco functions. But will cloud providers get there before the telcos do?

For those of us supporting or delivering OSS/BSS, our future is in the clouds either way. It’s a rapidly evolving landscape, so watch this space.

 

* Note: Google is not the only significant investor in Jio:

Investor                      US$B     Stake
Facebook                      5.7      10%
Silver Lake Partners          1.43     2.08%
Mubadala                      1.3      1.85%
ADIA (UAE sovereign fund)     0.8      1.16%
Saudi Arabia sovereign fund   1.6      2.32%
TPG                           0.64     0.93%
Catterton                     0.27     0.39%
Intel                         0.253    0.39%
Qualcomm                      0.097    0.15%
Google                        4.7      7.70%
TOTALS                        16.79    ~27%

 

OSS sell money!

Huh? But they’re just cost centres aren’t they?
 
Nope, they sell financial outcomes – they reduce downtime, they turn on revenue, they improve productivity by coordinating the workforce, etc…
 
But they only “sell money” if they can help stakeholders clearly see the money! I mean “actually” see it, not “read between the lines” see it! (so many benefits of OSS are intangible, so we have to help make the financial benefits more obvious).
 
They don’t sell network performance metrics or orchestration plans or AI or any other tech chatter. They sell money in the form of turning on customers that pay to use comms services. They sell insurance policies (ie service reliability) that keep customers from churning.
 
Or to think of it another way, could you estimate (in a dollar amount) the consequences of not having the OSS/BSS? What would the cost to your organisation be?
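
One rough way to put a number on it (every figure below is a placeholder; substitute your own) is to tally the avoided outage minutes, SLA penalties and truck rolls:

```python
# Placeholder figures – substitute your organisation's own numbers.
revenue_per_minute   = 8_000    # $ of service revenue at risk per minute of outage
outage_minutes_saved = 120      # faster detection / repair thanks to the OSS
sla_penalties        = 250_000  # contractual rebates avoided per year
truck_rolls_avoided  = 400      # dispatches avoided per year via remote diagnosis
cost_per_truck_roll  = 350

value = (revenue_per_minute * outage_minutes_saved
         + sla_penalties
         + truck_rolls_avoided * cost_per_truck_roll)

print(f"Indicative annual value delivered: ${value:,.0f}")  # $1,350,000
```

Even with modest inputs, the result tends to be the kind of “money” that’s worth showing stakeholders explicitly.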

OSS diamonds are forever (part 2)

Wednesday’s post discussed how OPEX is forever, just like the slogan for diamonds.
 
As discussed, some aspects of Operational Expenses are well known when kicking off a new OSS project (eg annual OSS license / support costs). Others can slip through the cracks – what I referred to as OPEX leakage (eg third-party software, ongoing maintenance of software customisations).
 
OPEX leakage might be an unfair phrase. If there’s a clear line of sight from the expenses to a profitable return, then it’s not leakage. If costs (of data, re-work, cloud services, applications, etc) are proliferating with no clear benefit, then the term “leakage” is probably fair.
 
I’ve seen examples of Agile and cloud implementation strategies where leakage has occurred. And even the supposedly “cheap” open-source strategies have led to surprises. OPEX leakage has caused project teams to scramble as their financial year progressed and budgets were unexpectedly being exceeded.
 
Oh, and one other observation to share that you may’ve seen examples of, particularly if you’ve worked on OSS in large organisations – OPEX incurred by one business unit but the benefit derived by other business units. This can cause significant problems for the people responsible for divisional budgets, even if it’s good for the business as a whole.
 
Let me explain by example: an operations delivery team needs extra logging capability, so they stand up a new open-source tool. They make customisations so that log data can be collected for all of their network types. All log data is then sent to the organisation’s cloud instance. The operations delivery team now owns the lifecycle maintenance costs. However, the cost of cloud (compute and storage) and data lake licensing has now escalated, but Operations doesn’t foot that bill. They’ve just handed that “forever” budgetary burden to another business unit.
 
The opposite can also be true. The costs of build and maintenance might be borne by IT or Ops, but the benefits in revenue or CX (customer experience) are gladly accepted by business-facing units.
 
Both types of project could give significant whole-of-company benefit. But the unit doing the funding will tend to choose projects that are less effective overall if it means their own business unit derives the benefit (especially if individuals’ bonuses are tied to those results).
 
OSS can be powerful tools, giving and receiving benefit from many different business units. However, the more OPEX-centric OSS projects that we see today are introducing new challenges to get funded and then supported across their whole life-cycle.
 
PS. Just like diamonds bought at retail prices, there’s a risk that the financials won’t look so great a year after purchase. If that’s the case, you may have to seek justification on intangible benefits.  😉
 
PS2. Check out Robert’s insightful comment to the initial post, including the following question, “I wonder how many OSS procurements are justified on the basis of reducing the Opex only *of the current OSS*, rather than reducing the cost of achieving what the original OSS was created to do? The former is much easier to procure (but may have less benefit to the business). The latter is harder (more difficult analysis to do and change to manage, but payoff potentially much larger).”

Crossing the OSS chasm

Geoffrey Moore’s seminal book, “Crossing the Chasm,” described the psychological chasm between early buyers and the mainstream market.

Crossing the Chasm

Seth Godin cites Moore’s work, “Moore’s Crossing the Chasm helped marketers see that while innovation was the tool to reach the small group of early adopters and opinion leaders, it was insufficient to reach the masses. Because the masses don’t want something that’s new, they want something that works…

The lesson is simple:

– Early adopters are thrilled by the new. They seek innovation.

– Everyone else is wary of failure. They seek trust.”
 

I’d reason that almost all significant OSS buyer decisions fall into the “mainstream market” section in the diagram above.  Why? Well, an organisation might have the 15% of innovators / early-adopters conceptualising a new OSS project. However, sign-off of that project usually depends on a team of approvers / sponsors. Statistics suggest that 85% of the team is likely to exist in a mindset beyond the chasm and outweigh the 15%. 

The mainstream mindset is seeking something that works and something they can trust.

But OSS / digital transformation projects are hard to trust. They’re all complex and unique. They often fail to deliver on their promises. They’re rarely reliable or repeatable. They almost all require a leap of faith (and/or a burning platform) for the buyer’s team to proceed.

OSS sellers seek to differentiate from the 400+ other vendors (of course). How do they do this? Interestingly, by pitching their innovations and uniqueness mostly.

Do you see the gap here? The seller is pitching the left side of the chasm and the buyer cohort is on the right.

I wonder whether our infuriatingly lengthy sales cycles (often 12-18 months) could be reduced if only we could engineer our products and projects to be more mainstream, repeatable, reliable and trustworthy, whilst being less risky.

This is such a dilemma though. We desperately need to innovate, to take the industry beyond the chasm. Should we innovate by doing new stuff? Or should we do the old, important stuff in new and vastly improved ways? A bit of both??

Do we improve our products and transformations so that they can be used / performed by novices rather than designed for use by all the massive intellects that our industry seems to currently consist of?

 
Diamonds are Forever and so is OSS OPEX

Sourced from: www.couponraja.in

I sometimes wonder whether OPEX is underestimated when considering OSS investments, or at least some facets (sorry, awful pun there!) of it.

Cost-out (aka head-count reduction) seems to be the most prominent OSS business case justification lever. So that’s clearly not underestimated. And the move to cloud is also an OPEX play in most cases, so it’s front of mind during the procurement process too. I’m nought for two so far! Hopefully the next examples are a little more persuasive!

Large transformation projects tend to have a focus on the up-front cost of the project, rightly so. There’s also an awareness of ongoing license costs (usually 20-25% of OSS software list price per annum). Less apparent costs can be found in the exclusions / omissions. This is where third-party OPEX costs (eg database licenses, virtualisation, compute / storage, etc) can be (not) found.

That’s why you should definitely consider preparing a TCO (Total Cost of Ownership) model that includes CAPEX and OPEX that’s normalised across all options when making a buying decision.
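
For example, a minimal TCO sketch (all inputs are illustrative assumptions, not real vendor figures) that normalises up-front CAPEX plus discounted annual OPEX across candidate options might look like this:

```python
def five_year_tco(capex, annual_licence, annual_third_party, annual_internal_ops,
                  years=5, discount_rate=0.08):
    """Net present cost of one option: up-front CAPEX plus discounted annual OPEX."""
    opex = annual_licence + annual_third_party + annual_internal_ops
    return capex + sum(opex / (1 + discount_rate) ** y for y in range(1, years + 1))

options = {
    # capex, licence/support (~20-25% of list), third-party (DB/cloud/compute), internal ops
    "Vendor A (on-prem)": five_year_tco(2_000_000, 450_000, 150_000, 300_000),
    "Vendor B (SaaS)":    five_year_tco(  400_000, 900_000,  50_000, 200_000),
    "Open source + SI":   five_year_tco(1_200_000, 100_000, 250_000, 500_000),
}

for name, tco in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:20s} ~${tco:,.0f}")
```

The point of normalising like this is that the “exclusions” (third-party licences, cloud, internal operations effort) sit in the same column for every option, so they can’t quietly disappear from one bid and not another.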

But the more subtle OPEX leakage occurs through customisation. The more customisation from “off-the-shelf” capability, the greater the variation from baseline, the larger the ongoing costs of maintenance and upgrade. This is not just on proprietary / commercial software, but open-source products as well.

And choosing Agile almost implies ongoing customisation. One of the things about Agile is it keeps adding stuff (apps, data, functions, processes, code, etc) via OPEX. It’s stack-ranked, so it’s always the most important stuff (in theory). But because it’s incremental, it tends to be less closely scrutinised than during a CAPEX / procurement event. Unless carefully monitored, there’s a greater chance for OPEX leakage to occur.

And as we know, OPEX, like diamonds, is forever (ie the costs re-appear year after year).

When OSS experts are wrong

“When experts are wrong, it’s often because they’re experts on an earlier version of the world.”
Paul Graham.
 
OSS experts are often wrong. Not only because of the “earlier version of the world” paradigm mentioned above, but also the “parallel worlds” paradigm that’s not explicitly mentioned. That is, they may be experts on one organisation’s OSS (possibly from spending years working on it), but have relatively little transferable expertise on other OSS.
 
It would be nice if the OSS world view never changed and we could just get more and more expert at it, approaching an asymptote of expertise. Alas, it’s never going to be like that. Instead, we experience a world that’s changing across some of our most fundamental building blocks.
 
“We are the sum total of our experiences.”
B.J. Neblett.
 
My earliest forays into OSS had a heavy focus on inventory. The tie-in between services, logical and physical inventory (and all use-cases around it) was probably core to me becoming passionate about OSS. I might even go as far as saying I’m “an Inventory guy.”
 
Those early forays occurred when there was a scarcity mindset in network resources. You provisioned what you needed and only expanded capacity within tight CAPEX envelopes. Managing inventory and optimising revenue using these scarce resources was important. We did that with the help of Inventory Management (IM) tools. Even end-users had a mindset of resource scarcity. 
 
But the world has changed. We now operate with a cloud-inspired abundance mindset. We over-provision physical resources so that we can just spin up logical / virtual resources whenever we wish. We have meshed, packet-switched networks rather than nailed-up circuits. Generally speaking, cost per resource has fallen dramatically, so we now get far higher port density, compute capacity, bits per dollar, etc. Customers of the cloud generation assume an abundance of capacity that is even available in small, consumption-based increments. In many parts of the world we can also assume ubiquitous connectivity.
 
So, as “an inventory guy,” I have to question whether the scarcity to abundance transformation might even fundamentally change my world-view on inventory management. Do I even need an inventory management solution or should I just ask the network for resources when I want to turn on new customers and assume the capacity team has ensured there’s surplus to call upon?
 
Is the enormous expense we allocate to building and reconciling a digital twin of the network (ie the data gathered and used by Inventory Management) justified? Could we circumvent many of the fallouts (and a multitude of other problems) that occur because the inventory data doesn’t accurately reflect the real network?
 
For example, in the old days I always loved how much easier it was to provision a customer’s mobile / cellular or IN (Intelligent Network) service than a fixed-line service. It was easier because fixed-line service needed a whole lot more inventory allocation and reservation logic and process. Mobile / IN services didn’t rely on inventory, only an availability of capacity (mostly). Perhaps the day has almost come where all services are that easy to provision?
 
Yes, we continue to need asset management and capacity planning. Yes, we still need inventory management for physical plant that has no programmatic interface (eg cables, patch-panels, joints, etc). Yes, we still need to carefully control the capacity build-out to CAPEX to revenue balance (even more so now in a lower-profitability operator environment). But do many of the other traditional Inventory Management and resource provisioning use cases go away in a world of abundance?
 
I’d love to hear your opinions, especially from all you other “inventory guys” (and gals)!! Are your world-views, expertise and experiences changing along these lines too or does the world remain unchanged from your viewing point?
 
Hat tip to Garry for the seed of this post!

Google’s Circular Economy in OSS

OSS wear many hats and help many different functions within an organisation. One function that OSS assists might be surprising to some people – the CFO / Accounting function.

The traditional service provider business model tends to be CAPEX-heavy, with significant investment required on physical infrastructure. Since assets need to be depreciated and life-cycle managed, Accountants have an interest in the infrastructure that our OSS manage via Inventory Management (IM) tools.

I’ve been lucky enough to work with many network operators and see vastly different asset management approaches used by CFOs. These strategies have ranged from fastidious replacement of equipment as soon as depreciation cycles have expired through to building networks using refurbished equipment that has already passed manufacturer End-of-Life dates. These strategies fundamentally affect the business models of these operators.

Given that telecommunications operator revenues are trending lower globally, I feel it’s incumbent on us to use our OSS to deliver positive outcomes to operator business models.

With this in mind, I found this article entitled, “Circular Economy at Work in Google Data Centers,” to be quite interesting. It cites, “Google’s circular approach to optimizing end of life of servers based on Total Cost of Ownership (TCO) principles have resulted in hundreds of millions per year in cost avoidance.”

Google Asset Lifecycle

Asset lifecycle management is not your typical focus area for OSS experts, but an area where we can help add significant value for our customers!

Some operators use dedicated asset management tools such as SAP. Others use OSS IM tools. Others reconcile between both. There’s no single right answer.

For a deeper dive into ideas where our OSS can help in asset lifecycle management (which Google describes as its Circular Economy and manages using the ReSOLVE framework), I really recommend reviewing the article link above.

If you need to develop such a tool using machine learning models, reach out to us and we’ll point you towards some tools equivalent to ReSOLVE to augment your OSS.

I was wrong. Forget about investing in your OSS UI

I must’ve written dozens of posts about us needing to collectively invest a lot more effort into UI / UX. I’ve written quite a few over the last few months especially. This one in particular springs to mind.

As an industry, we typically don’t do user experience journeys (UX) or user interfaces (UI) very well at all yet. I know thousands of OSS experts, but only 2 who specialise in UI / UX! That ratio is far too small…

… but then something dawned on me when writing the Autonomous Networking post earlier this week – all effort invested into UI (and most effort on UX) is pointless if we succeed in building autonomous networks. You get the implication, don’t you? Truly autonomous networks are machine-driven, so you don’t need users, UI or UX.

Oh, I should make one last point though. If you don’t expect to get all of your network operations activities to midday on the Autonomous Networks Clock, within the next couple of years, then you probably should still invest in your UI / UX!!

As a network owner….

….I want to make my network so observable, reliable, predictable and repeatable that I don’t need anyone to operate it.

That’s clearly a highly ambitious goal. Probably even unachievable if we say it doesn’t need anyone to run it. But I wonder whether this has to be the starting point we take on behalf of our network operator customers?

If we look at most networks, OSS, BSS, NOC, SOC, etc (I’ll call this whole stack “the black box” in this article), they’ve been designed from the ground up to be human-driven. We’re now looking at ways to automate as many steps of operations as possible.

If we were to instead design the black-box to be machine-driven, how different would it look?

In fact, before we do that, perhaps we have to take two unique perspectives on this question:

  1. Retro-fitting existing black-boxes to increase their autonomy
  2. Designing brand new autonomous black-boxes

I suspect our approaches / architectures will be vastly different.

The first will require an incredibly complex measure, command and control engine to sit over the top of the existing black box. It will probably also need to reach into many of the components that make up the black box and exert control over them. This approach has many similarities with what we already do in the OSS world. The only exception would be that we’d need to be a lot more “closed-loop” in our thinking. I should also re-iterate that this is incredibly complex because it inherits an existing “decision tree” of enormous complexity and adds further convolution.

The second approach holds a great deal more promise. However, it will require a vastly different approach on many levels:

  1. We have to take a chainsaw to the decision tree inside the black box. For example:
    • We start by removing as much variability from the network as possible. Think of this like other utilities such as water or power. Our electricity service only has one feed-type for almost all residential and business customers, yet it still allows us great flexibility in what we plug into it. What if a network operator were to simply offer a “broadband dial-tone” service and let end users decide what they overlay on that bit-stream?
    • This reduces the “protocol stack” in the network (think of this in terms of the long list of features / tick-boxes on any router’s brochure)
    • As well as reducing network complexity, it drastically reduces the variables an end-user needs to decide from. The operator no longer needs 50 grandfathered, legacy products 
    • This also reduces the decision tree in BSS-related functionality like billing, rating, charging, clearing-house
    • We achieve a (globally?) standardised network services catalog that’s completely independent of vendor offerings
    • We achieve a more standardised set of telemetry data coming from the network
    • In turn, this drives a more standardised and minimal set of service-impact and root-cause analyses
  2. We design data input/output methods and interfaces (to the black box and to any of its constituent components) to have closed-loop immediacy in mind. At the moment we tend to have interfaces that allow us to interrogate the network and push changes into the network separately rather than tasking the network to keep itself within expected operational thresholds
  3. We allow networks to self-regulate and self-heal, not just within a node, but between neighbours without necessarily having to revert to centralised control mechanisms like OSS
  4. All components within the black-box, down to device level, are programmable. [As an aside, we need to consider how to make the physical network more programmable or reconcilable, considering that cables, (most) patch panels, joints, etc don’t have APIs. That’s why the physical network tends to give us the biggest data quality challenges, which ripples out into our ability to automate networks]
  5. End-to-end data flows (ie controls) are to be near-real-time, not constrained by processing lags (eg 15 minute poll cycles, hourly log processing cycles, etc) 
  6. Data minimalism engineering (see the sketch after this list). It’s currently not uncommon for network devices to produce dozens, if not hundreds, of different metrics. Most are never used manually by operators, nor are they likely to be used by learning machines. This increases data processing, distribution and storage overheads. If we only produce what is useful, then it should improve data flow times (point 5 above). Therefore learning machines should be able to control which data sets they need from network devices and at what cadence. The learning engine can start off collecting all metrics, then progressively turn them off as it deems metrics unnecessary. This could also extend to controlling log-levels (ie how much granularity of data is generated for a particular log, event, performance counter)
  7. Perhaps we even offer AI-as-a-service, whereby any of the components within the black-box can call upon a centralised AI service (and the common data lake that underpins it) to assist with localised self-healing, self-regulation, etc. This facilitates closed-loop decisions throughout the stack rather than just an over-arching command and control mechanism
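
On point 6 above, here’s a toy sketch of how a learning loop might progressively switch off low-value telemetry. The variance test is just a stand-in for whatever importance measure a real learning engine would use; metric names and samples are invented:

```python
from statistics import pvariance

def prune_metrics(history, min_variance=1e-6):
    """history: {metric_name: [recent samples]}.
    Near-constant series carry little information for humans or learning
    machines, so flag them as candidates to stop collecting."""
    keep, drop = [], []
    for name, samples in history.items():
        (keep if pvariance(samples) > min_variance else drop).append(name)
    return keep, drop

history = {
    "if_in_octets":   [10_200, 180_500, 95_000, 240_100],
    "cpu_util_pct":   [12.0, 14.5, 11.8, 55.2],
    "chassis_temp_c": [40.0, 40.0, 40.0, 40.0],   # never moves
    "fan_speed_rpm":  [6000, 6000, 6000, 6000],   # never moves
}

keep, drop = prune_metrics(history)
print("keep collecting:", keep)   # high-information metrics
print("stop collecting:", drop)   # candidates to disable at the device / poller
```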

I’m barely exposing the tip of the iceberg here. I’d love to get your thoughts on what else it will take to bring fully autonomous networks to reality.

What will get your CEO fired? (part 4)

In Monday’s article, we suggested that the three technical factors that could get the big boss fired are probably only limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

In that article, we looked closely at a human factor and how current trends of open-source, Agile and microservices might actually exacerbate it. In yesterday’s article we looked at market-serving factors for us to investigate and monitor.

But let’s look at point 3 today. The profitability factors we could consider that reduce the chances of the big boss getting fired are:

  1. Ability to see revenues in near-real-time (revenues are relatively easy to collect, so we use these numbers a lot. Profitability measures are much harder because of the shared allocation of fixed costs; a simple allocation sketch follows this list)

  2. Ability to see cost breakdown (particularly which parts of the technical solution are most costly, such as what device types / topologies are failing most often)

  3. Ability to measure profitability by product type, customer, etc

  4. Are there more profitable or cost-effective solutions available?

  5. Is there greater profitability that could be unlocked by simplification?
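
To illustrate why point 1 is harder than it sounds, here’s a minimal sketch (not a recommended costing model) that allocates a shared fixed-cost pool to products pro-rata to revenue and derives a per-product margin. The allocation driver (revenue share) and all of the figures are illustrative assumptions only.

```python
def product_profitability(revenues, direct_costs, shared_fixed_costs):
    """Allocate shared fixed costs pro-rata to revenue and return margin per product.

    revenues / direct_costs: dicts keyed by product name.
    shared_fixed_costs: a single cost pool (eg network opex) with no obvious owner.
    """
    total_revenue = sum(revenues.values())
    results = {}
    for product, revenue in revenues.items():
        allocated_fixed = shared_fixed_costs * (revenue / total_revenue)  # assumed driver
        margin = revenue - direct_costs.get(product, 0.0) - allocated_fixed
        results[product] = {
            "revenue": revenue,
            "allocated_fixed": round(allocated_fixed, 2),
            "margin": round(margin, 2),
        }
    return results

# Example: two products sharing a $1.0m fixed-cost pool
print(product_profitability(
    revenues={"broadband": 2_000_000, "mobile": 3_000_000},
    direct_costs={"broadband": 1_200_000, "mobile": 1_600_000},
    shared_fixed_costs=1_000_000,
))
```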

What will get your CEO fired? (part 3)

In Monday’s article, we suggested that the three technical factors that could get the big boss fired are probably only limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

In that article, we looked closely at a human factor and how current trends of open-source, Agile and microservices might actually exacerbate it. In yesterday’s article we looked at the broader set of catastrophic failure factors for us to investigate and monitor.

But let’s look at some of the broader examples under point 2 today. The market-serving factors we could consider that reduce the chances of the big boss getting fired are:

  1. Immediate visibility of key metrics by boss and execs (what are the metrics that matter, eg customer numbers, ARPU, churn, regulatory, media hot-buttons, network health, etc)

  2. Response to “voice of customer” (including customer feedback, public perception, etc)

  3. Human resources (incl up-skill for new tech, etc)

  4. Ability to implement quickly / efficiently

  5. Ability to handle change (to network topology, devices/vendors, business products, systems, etc)

  6. Measuring end-to-end user experience, not just “nodal” monitoring (see the sketch after this list)

  7. Scalability / Capacity (ability to serve customer demand now and into a foreseeable future)
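
On point 6, a quick illustration of why nodal monitoring alone can mislead: every hop can pass its own threshold while the end-to-end journey still breaches the customer’s experience target. The thresholds and latencies below are invented purely for illustration.

```python
def e2e_experience(hop_latencies_ms, per_hop_threshold_ms=50, e2e_threshold_ms=200):
    """Each hop may pass its nodal check while the end-to-end journey still fails."""
    nodal_ok = all(lat <= per_hop_threshold_ms for lat in hop_latencies_ms)
    e2e_latency = sum(hop_latencies_ms)
    e2e_ok = e2e_latency <= e2e_threshold_ms
    return {"nodal_ok": nodal_ok, "e2e_latency_ms": e2e_latency, "e2e_ok": e2e_ok}

# Six hops, each comfortably inside its 50ms nodal threshold,
# yet the end-to-end journey (270ms) breaches the 200ms experience target.
print(e2e_experience([45, 48, 44, 46, 42, 45]))
```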

What will get your CEO fired? (part 2)

In Monday’s article, we suggested that the three technical factors that could get the big boss fired are probably only limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

In that article, we looked closely at a human factor and how current trends of open-source, Agile and microservices might actually exacerbate it.

But let’s look at some of the broader examples under point 1 today. The failure factors we could consider that might result in the big boss getting fired are:

  1. Availability (nodal and E2E; see the availability sketch after this list)

  2. Performance (nodal and E2E)

  3. Security (security trust model – cloud vs corporate vs active network and related zones)

  4. Remediation times, systems & processes (Assurance), particularly effectiveness of process for handling P1 (Priority 1) incidents

  5. Resilience Architecture

  6. Disaster Recovery Plan (incl Backup and Restore process, what black-swan events the organisation is susceptible to, etc)

  7. Supportability and Maintenance Routines

  8. Change and Release Management approaches

  9. Human resources (incl business continuity risk of losing IP, etc)

  10. Where are the SPoFs (Single Points of Failure)?
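
On point 1, here’s a tiny sketch of why nodal and E2E availability are different measures: for elements in series, availabilities multiply, so a chain of individually compliant nodes can still miss an end-to-end target. The figures are illustrative only.

```python
from functools import reduce

def serial_availability(node_availabilities):
    """End-to-end availability of elements in series is the product of the parts."""
    return reduce(lambda a, b: a * b, node_availabilities, 1.0)

# Five nodes, each at 99.9% availability, give roughly 99.5% end to end,
# which already misses a 99.9% end-to-end SLA despite every node meeting it.
nodes = [0.999] * 5
print(f"E2E availability: {serial_availability(nodes):.4%}")
```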

We should note too that these should be viewed through two lenses:

  • The lens of the network our OSS/BSS is managing and
  • The lens of the systems (hardware/software/cloud) that make up our OSS/BSS

OSS/BSS procurement is flawed from the outset

You may’ve noticed that things have been a little quiet on this blog in recent weeks. We’ve been working on a big new project that we’ll be launching here on PAOSS on Monday. We can’t reveal what this project is just yet, but we can let you in on a little hint. It aims to help overcome one of the biggest problem areas faced by those in the comms network space.

Further clues will be revealed in this week’s series of posts.

The industry we work in is worth tens of billions of dollars annually. We rely on that investment to fund the OSS/BSS projects (and ops/maintenance tasks) that keep many thousands of us busy. Obviously those funds get distributed by project sponsors in the buyers’ organisations. For many of the big projects, sponsors are obliged to involve the organisation’s procurement team.

That’s a fairly obvious path. But I often wonder whether the next step on that path is full of contradictions and flaws.

Do you agree with me that the 3 KPIs sponsors expect from their procurement teams are:

  1. Negotiate the lowest price
  2. Eliminate as many risks as possible
  3. Create a contract to manage the project by

If procurement achieves these 3 things, sponsors will generally be delighted. High-fives for the buyers who screw the vendor prices right down. Seems pretty obvious, right? So where’s the contradiction? Well, let’s look at these same 3 KPIs from a different perspective – a more seller-centric perspective:

  1. I want to win the project, so I’ll set a really low price, perhaps even a loss-leader. However, our company can’t survive if our projects lose money, so I’ll be actively generating variations throughout the project
  2. Every project of this complexity has inherent risks, so if my buyer is “eliminating” risks, they’re actually just pushing risks onto me. So I’ll use any mechanisms I can to push risks back on my buyer to even the balance again
  3. We all know that complex projects throw up unexpected situations that contracts can’t predict (except with catch-all statements that aim to push all risk onto sellers). We also both know that if we manage the project by contractual clauses and interpretations, then we’re already doomed to fail (or are already failing by the time we start to manage by contract clauses)

My 3 contrarian KPIs to request from procurement are:

  1. Build relationships / trust – build a framework and environment that facilitates a mutually beneficial, long-lasting buyer/seller relationship (ie procurement gets judged on partnership length ahead of cost reduction)
  2. Develop a team – build a framework and environment that allows the buyer-seller collective to overcome risks and issues (ie mutual risk mitigation rather than independent risk deflection)
  3. Establish clear and shared objectives – ensure both parties are completely clear on how the project will make the buyer’s organisation successful. Then both constantly evolve to deliver benefits that outweigh costs (ie focus on the objectives rather than clauses – don’t sweat the small stuff (or purely technical stuff))

Yes, I know they’re idealistic and probably unrealistic. Just saying that the current KPI model tends to introduce flaws from the outset.

OSS Persona 10:10:10 Mapping

We sometimes attack OSS/BSS planning at quite a transactional level. For example, think about the process of gathering detailed requirements at the start of a project. They tend to be detailed and transactional, don’t they? This type of requirement gathering sits more in the WHAT and HOW rings of Simon Sinek’s Golden Circle.

Just curious, do you have a persona map that shows all of the different user groups that interact with your OSS/BSS?
More importantly, do you deeply understand WHY they interact with your OSS/BSS? Not just on a transaction-by-transaction level, but in the deeper context of how the organisation functions? Perhaps even on a psychological level?

If you do, you’re in a great position to apply the 10:10:10 mapping rule. That is, to describe how you’re adding value to each user group 10 minutes from now, 10 days from now and 10 months from now…

[Image: OSS Persona 10:10:10 Mapping table]

The mapping table could describe current tense (ie how your OSS/BSS is currently adding value), or as a planning mechanism for a future tense (ie how your OSS/BSS can add value in the future).
This mapping table can act as a guide for the evolution of your solution.

I should also point out that the diagram above only shows a sample of the internal personas that directly interact with your OSS/BSS. But I’d encourage you to look further. There are other personas that have direct and indirect engagement with your OSS/BSS. These include internal stakeholders like project sponsors, executives, data consumers, etc. They also include external stakeholders such as end-customers, regulatory bodies, etc.
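
As a trivial illustration of the structure (the personas and value statements below are placeholders, not recommendations), a 10:10:10 map can start out as simply as this, covering internal and external personas alike:

```python
# Placeholder personas and value statements, for illustration only.
persona_10_10_10 = {
    "NOC operator (internal)": {
        "10 minutes": "Alarms correlated to a probable root cause",
        "10 days":    "Recurring fault patterns surfaced for problem management",
        "10 months":  "Fewer P1s through pre-emptive maintenance insights",
    },
    "End-customer (external)": {
        "10 minutes": "Order status visible in the self-service portal",
        "10 days":    "Proactive notifications about planned outages",
        "10 months":  "Faster delivery of the offerings they have asked for",
    },
}

for persona, horizons in persona_10_10_10.items():
    print(persona)
    for horizon, value in horizons.items():
        print(f"  {horizon}: {value}")
```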

If you need assistance to unlock your current state through persona mapping, real process mapping, etc, and then to plan out your target-state, Passionate About OSS would be delighted to help.

Can you solve the omni-channel identity conundrum for OSS/BSS?

For most end-customers, the OSS/BSS we create are merely back-office systems that they never see. The closest they get are the customer portals that they interact with to drive workflows through our OSS/BSS. And yet, our OSS/BSS still have a big part to play in customer experience. In times when customers can readily substitute one carrier for another, customer service has become a key differentiator for many carriers. It therefore also becomes a priority for our OSS/BSS.

Customers now have multiple engagement options (aka omni-channel) and form factors (eg in-person, phone, tablet, mobile phone, kiosk, etc). The only options we used to have were a call to a contact centre / IVR (Interactive Voice Response), a visit to a store, or a visit from an account manager for business customers. Now there are websites, applications, text messages, multiple social media channels, chatbots, portals, blogs, etc. Each represents a different challenge when it comes to offering a seamless customer experience across all channels.

I’ve just noticed TM Forum’s “Omni-channel Guidebook” (GB994), which does a great job at describing the challenges and opportunities. For example, it explains the importance of identity. End-users can only get a truly seamless experience if they can be uniquely identified across all channels. Unfortunately, some channels (eg IVR, website) don’t force end-users to self-identify.

The Ovum report, “Optimizing Customer Service in a Multi Channel World” (March 2011), indicates that around 74% of customers use 3 or more channels to engage customer service. In most cases, it’s our OSS/BSS that provide the data that supports a seamless experience across channels. But what if we have no unique key? What if the unique key we have (eg phone number) doesn’t uniquely identify the different people who use that contact point (eg different family members who use the same fixed-line phone)?
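
As a toy illustration of the matching problem (not a recommended identity-resolution design), the sketch below links touch-points from different channels on whatever identifiers they happen to share. The record structure and field names are invented for illustration; note how the IVR touch-point can only be linked via a phone number that several family members might share.

```python
from collections import defaultdict

# Toy touch-point records from different channels; field names are illustrative.
touch_points = [
    {"channel": "app", "customer_id": "C123", "msisdn": "0400111222"},
    {"channel": "ivr", "msisdn": "0400111222"},  # no self-identification; possibly a shared line
    {"channel": "web", "cookie": "abc42", "email": "sam@example.com"},
    {"channel": "contact_centre", "customer_id": "C123", "email": "sam@example.com"},
]

def link_touch_points(records, keys=("customer_id", "msisdn", "email")):
    """Group touch-points that share any common identifier (naive; no transitive merging)."""
    index = defaultdict(list)
    for i, rec in enumerate(records):
        for key in keys:
            if key in rec:
                index[(key, rec[key])].append(i)
    # keep only identifiers that appear in more than one touch-point
    return {k: v for k, v in index.items() if len(v) > 1}

for (key, value), record_ids in link_touch_points(touch_points).items():
    channels = [touch_points[i]["channel"] for i in record_ids]
    print(f"{key}={value} links: {channels}")
```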

We could use personality profiling across these channels, but we’ve already seen how that has worked out for Cambridge Analytica and Facebook in terms of customer privacy and security.

I’d love to hear how you’ve done cross-channel identity management in your OSS/BSS. Have you solved the omni-channel identity conundrum?

PS. One thing I find really interesting: the whole omni-channel push is about giving customers (or potential customers) the ability to connect via the channel they’re most comfortable with. But there’s one glaring exception. When an end-user decides a phone conversation is the only way to resolve their issue (often after already trying the self-service options), they call the contact centre number. Yet many big telcos insist on deflecting as many calls as possible back to self-service options (they call it CVR – call volume reduction), because contact centre staff are much more expensive per transaction than the automated channels. That seems to be an anti-customer-experience technique if you ask me. What are your thoughts?