“…the top 15% of today’s residential subscribers in the US are said to account for about 95% of carrier profits! Thus, many service providers are looking to Next Generation Network (NGN) services as a means to attract and/or retain the most lucrative customers.”
Joseph C. Crimi in this white paper from Telcordia.*
Do you know which 15% of customers account for 95% of your profits? What are the attributes of those customers? Why are they so profitable? Is it from heavy usage or from consumption of multiple services? Do you have demographics or other information that will allow you to classify these customers (noting that a Telco may only hold the demographic details of one subscriber [eg a parent] in a household where there might be multiple consumers of Telco services [eg children])? Do you know which services they consume or, perhaps more pertinently, which services they don't consume? Do these target customers exhibit above- or below-average levels of churn, and what are their churn risks? Are you able to identify non-customers who exhibit these attributes from other data you have access to? With what level of likelihood can you predict that certain attributes will lead to particular service patterns, usage or profitability?
These types of questions, and I'm sure you'll be able to identify many more, may not seem relevant to an OSS at first glance. But it is exactly these questions that help justify OSS projects.
Firstly, the OSS is the ideal collator of the statistics that can answer these questions, making it invaluable beyond network operations. Other teams such as sales, marketing, products and the executive will also find value in this data. This in turn helps to drive advocacy for OSS budgets beyond network ops, where budgets anecdotally seem to be diminishing.
Secondly, OSS developers should take these results into account when prioritising implementation efforts, whether in relation to the functionality being delivered, data set migration priorities, roll-out schedules, process mapping (and refinement efforts), COTS product choice, product / service designs, supply chain refinements, Next-Gen Networks (and/or services) to roll out, etc.
Pareto’s 80/20 principle is often an important way of prioritising effort on vastly complex and configurable OSS solutions. The type of analysis described above offers a way to determine what the real priorities are, rather than relying on educated guesses. This is particularly true for vendors or systems integrators that aren’t deeply cognisant of their CSP’s real priorities; their “educated” guesses may be based on other CSPs / environments that don’t reflect the current CSP’s ecosystem.
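As a rough illustration, a Pareto-style profitability analysis like the one hinted at above can be sketched in a few lines. This is a minimal sketch with entirely fabricated sample data; the function name and the subscriber profit figures are assumptions for illustration, not drawn from any real OSS or billing schema.

```python
# Minimal sketch of a Pareto (cumulative-profit) analysis over a
# per-subscriber profitability extract. All figures below are
# fabricated for illustration.

def pareto_share(profits, top_fraction):
    """Return the share of total profit contributed by the top
    `top_fraction` of customers, ranked by profit (descending)."""
    ranked = sorted(profits, reverse=True)
    cutoff = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:cutoff]) / sum(ranked)

if __name__ == "__main__":
    # Fabricated monthly profits: a few heavy users plus a long
    # tail of low-margin subscribers (100 customers in total).
    profits = [500, 420, 380, 350, 300] + [10] * 95
    share = pareto_share(profits, 0.15)
    print(f"Top 15% of subscribers -> {share:.0%} of profit")
    # prints "Top 15% of subscribers -> 71% of profit"
```

Run against a real profitability extract, the same calculation would show how concentrated a given CSP's profit actually is, and so how sharply an 80/20-style prioritisation should be applied.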
* Note that I haven’t been able to verify the figures in this report, nor determine the date when it was published. Given that Telcordia is now part of Ericsson, it’s safe to assume the paper is at least a couple of years old. However, if the numbers remain broadly accurate today, they provide a few interesting perspectives for OSS developers to consider.