Delivering an innovative perspective on Operational Support Systems Consulting and Integration
Category Archives: Executive Series
OSS are tools that have a big part to play in modern ICT organisations. They can make the difference between effective and ineffective operation of electronic assets. The Executive Series of blogs provides information targeted at senior executives on how to utilise OSS to deliver better business outcomes.
A recent blog from Seth Godin brought back some memories from a past project.
“Two ways to solve a problem and provide a service. With drama. Make sure the customer knows just how hard you’re working, what extent you’re going to in order to serve. Make a big deal out of the special order, the additional cost, the sweat and the tears. Without drama. Make it look effortless. Either can work. Depends on the customer and the situation.”
Seth Godin here.
Over the course of the long-running and challenging project, I worked under a number of different Program Directors. The second last (chronologically) took the team barrel-chested down the “With Drama” path whilst the last took the “Without Drama” approach.
The “With Drama” approach was melodramatic and political and, to be honest, really draining. It was draining because of the high levels of contact (eg meetings, reports, etc), which reduced the amount of productive delivery time.
The “Without Drama” approach did make it look effortless, because by comparison it was effortless. The Program Director took responsibility for peer-level contact and cleared the way for the delivery team to focus on delivering. The team was still working well over 60-hour weeks, but it was now more clearly focused on delivery tasks. Interestingly, this approach brought a seemingly endless project to a systematic and clean conclusion (ie delivery) within about three months.
Now I’m not sure about your experiences or preferences, but I’d go with the “Without Drama” OSS delivery approach every time. The emotional intensity required of the “With Drama” approach just isn’t sustainable over long-running projects like our OSS projects tend to be.
A recent post discussed the challenge of getting a timeslice of operations people to help build the OSS. That post surmised, “as the old saying goes, you get back what you put in. In the case of OSS I’ve seen it time and again that operations need to contribute significantly to the implementation to ensure they get a solution that fits their needs.”
I have a new saying for you today, this time from T.D. Jakes, “You can’t be committed to the dream. You have to be committed to the process.”
If you’re representing an organisation that is buying an OSS solution from a vendor / integrator, please consider these two adages above. Sometimes we’re good at forming the dream (eg business requirements, business case, etc) and expecting the vendor to conduct almost all of the process. While our network operations teams are hired for the process of managing the network, we also need their significant input on the process of building / configuring an OSS. The vendor / integrator can’t just develop it in isolation and then hand it over to ops with a few days of training at the end.
The process of bringing a new OSS into an organisation is not like buying a road car. With an OSS, you can’t just place an order with some optional features like paint and trim specified, then expect to start driving it as soon as it leaves the vendor’s assembly line. It’s more like an F1 car where the driver is in constant communications with the pit-crew, changing and tweaking and refining to optimise the car to the driver’s unique needs (and in turn to hopefully optimise the results).
At least, that’s what current-state OSS are like. Perhaps in the future… we’ll strive to refine our OSS to be more like a road-car – standardised and intuitive enough for operators to drive straight off the assembly line.
The following is the result of a survey question posed by TM Forum:
I’m not sure how the numbers tally, but conceptually the graph above paints an interesting perspective of why orchestration is important. The graph indicates the why.
But in this case, for me, the why is the by-product of the how. The main attraction of orchestration models is in how we can achieve modularity. All of the business outcomes mentioned in the graph above will only be achievable as a result of modularity.
Put another way, rather than having the integration spaghetti of an “old-school” OSS / BSS stack, orchestration (and orchestration plans) potentially provides the ability to provide clearer demarcation and abstraction all the way from product design down into transactions that hit the network… not to mention the meet-in-the-middle points between business units.
Demarcation points support catalog items (perhaps as APIs / microservices with published contracts), allowing building-block design of products rather than involvement of (and disputes between) business units all down the line of product design. This facilitates the speed (34%) and services on demand (28%) objectives stated in the graph.
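The building-block idea above can be sketched in code. The following is a minimal, purely illustrative sketch (all item names and contracts are hypothetical, not from any real product catalog): each catalog item publishes its contract as required inputs and guaranteed outputs, so a product designer can compose and validate a bundle without involving every business unit down the line.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CatalogItem:
    """A catalog building block with a published contract (hypothetical model)."""
    name: str
    inputs: frozenset   # parameters the item requires before it can be fulfilled
    outputs: frozenset  # facts the item guarantees once fulfilled

@dataclass
class Product:
    """A product composed from catalog building blocks."""
    name: str
    items: list = field(default_factory=list)

    def is_satisfiable(self, provided: set) -> bool:
        """Check that each item's inputs are met, either by customer-supplied
        data or by the outputs of earlier items in the orchestration plan."""
        available = set(provided)
        for item in self.items:
            if not item.inputs <= available:
                return False
            available |= item.outputs
        return True

# Illustrative building blocks: outputs of one item feed inputs of the next
access = CatalogItem("provision_access", frozenset({"address"}), frozenset({"port_id"}))
voip = CatalogItem("activate_voip", frozenset({"port_id"}), frozenset({"phone_number"}))

bundle = Product("broadband_plus_voice", [access, voip])
print(bundle.is_satisfiable({"address"}))  # True: port_id flows from access into voip
```

The point of the sketch is the demarcation: product design only needs each block's published contract, not knowledge of how any block is implemented behind its API.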
But I used the term “old-school” with intent above. The modularity mentioned above was already achieved in some older OSS too. The ability to carve up, sequence, prioritise and re-construct a stream of service orders was already achievable by some provisioning + workflow engines of the past.
The business outcomes remain the same now as they were then, but perhaps orchestration takes it to the next level.
In 1998 Berkshire Hathaway acquired a reinsurance company called General Re. “The only significant staff change that followed the merger was the elimination of General Re’s investment unit. Some 150 people had been in charge of deciding where to invest the company’s funds; they were replaced with just one individual – Warren Buffett.”
Robert G. Hagstrom in, “The Warren Buffett Way.”
Buffett was able to replace 150 people, and significantly outperform them, because they were conducting (relatively) small value, high volume transactions and he did the exact opposite.
Compare this with Gemini Waghmare’s thoughts on BSS, “It used to be that operators differentiated by pricing. Complex bundles, friends and family plans, rollover minutes and megabytes were used as ways to win over consumers. This drove significant investment into charging platforms and product catalogs. The internet economy runs on one-click purchases and a recurring flat rate. Roaming and overages are going away and transactional VOD (video on-demand) makes way for subscription VOD.
It’s not uncommon for operators to have 10,000 price plans while Netflix has three. Facebook and Google make billions of dollars without charging a cent.
Operators would do well to deprecate the value of their charging systems and invest instead in cloud and flat-rate billing with added focus on collecting, normalizing and monetizing user data. By simplifying subscription models with lightweight billing platforms, the scale and cost of BSS will drop dramatically. After all, there is no differentiation left in out-bundling competitors,” quoted here on Inform. There are some brilliant insights in this link, so I recommend taking a closer look, BTW.
10,000+ pricing plans definitely sounds like the equivalent of General Re before Buffett arrived. Having only 3 pricing plans would be more like the Buffett approach; it would change the dynamic of BSS tools and the size of the teams that use them! It would certainly change the dynamic for OSS too. The number of variants we’d be asked to handle would diminish, making it much easier to build and operate our OSS. Due to all the down-stream inefficiencies, you could actually argue that there is only negative differentiation left in out-bundling competitors.
As an aside… Interesting comment that, “Facebook and Google make billions of dollars without charging a cent.” I’d beg to differ. Whilst consumers of the service aren’t billed, advertisers certainly are, which I assume still needs a billing engine… one that probably has quite a bit of algorithmic complexity.
OSS are there to do just that – support operations. So as OSS implementers we have to do just that too.
But as the old saying goes, you get back what you put in. In the case of OSS I’ve seen it time and again that operations need to contribute significantly to the implementation to ensure they get a solution that fits their needs.
Just one problem here though. Operations are hired to operate the network, not build OSS. Now let’s assume the operations team does decide to commit heavily to your OSS build, thus taking away from network ops at some level (unless they choose to supplement the ops team).
That still leaves operations team leaders with a dilemma. Do they take certain SMEs out of ops to focus entirely on the OSS build (and thus act as nominees for the rest of the team) or do they cycle many of their ops people through the dual roles (at risk of task-switching inefficiency)?
There are pros and cons with each aren’t there? Which would you choose and why? Do you have an alternate approach?
“CSPs’ needs in orchestration are evolving in parallel on several dimensions. These can be considered hierarchically. At the highest level is software that has an end-to-end service role, as is the case in the ONAP project. This software generally supports a service life-cycle perspective, containing functions from design and service creation, to provisioning and activation, to operations management, analysis, upgrade and evolution.
Beneath this tier, in a resource-facing sense, is software that simplifies deployment and operation of virtual system infrastructures in cloud-native applications, NFV, vco/CORD and MEC. This carries the overall tag of MANO and incorporates the domains of NFV (with NFVO, for deployment and operation of virtualized network functions) and virtualized infrastructure management (or VIM, for automating deployment and operation of virtual system infrastructures). Open source developments are significant at each of these layers of orchestration, and each contains a significant portion of the overall orchestration TAM.
In parallel is the functionality for managing hybrid virtual and physical infrastructures, which is the reality in most CSP environments. This can be thought of as a lateral branch to MANO for virtualized infrastructures in the orchestration stack.
Together these categories make up the TAM [Total Addressable Market] for orchestration solutions with CSPs. This is a high-priority area of focus for CSPs and is one of the highest growth areas of software innovation and development in support of their service delivery needs. We expect the TAM for orchestration software to triple from 2018 through 2023 at a CAGR of 32.5%. This is partially because of the nascent level of the offerings at the current time, as well as the high priority that CSPs and their vendor suppliers are placing on the domain.”
“Succeeding on an Open Field: The Impact of Open Source Technologies on the Communication Service Provider Ecosystem,” an ACG Research Report.
Whilst the title of this blog is just one of the headline numbers in this report by ACG Research, there are a number of other interesting call-outs, so it’s well worth having a closer read of the report.
The research has been funded by the Linux Foundation, so it naturally has a focus on open-source solutions for network operators (CSPs). Here’s another quote from the report relating to open-source, “The main motivations behind the push for open source solutions in CSP operations are not simply focused on cost reduction as a goal. CSPs are thinking strategically and globally. There is a realization that the competitive landscape for communication and information services is changing rapidly, and it includes global, webscale service providers and over-the-top solutions.
Leading CSPs want industry collaboration and cooperation to solve common challenges…
Their top three motivations are:
• Unifying multiple service providers around a common approach
• Avoiding vendor lock-in and dependencies on a single vendor
• Accessing a broader talent pool than your own organization or any one vendor could provide”
The first bullet-point is where the CSPs diverge from the likes of AWS and Google. Whilst the CSPs, each with their local geographical reach, seek global unification through standardisation (ie to ensure simpler interconnection), AWS and Google appear to be seeking global reach and global domination (making unification efforts irrelevant for them).
Just curious though. What if global domination does come to pass in the next few years? Will there be a three-fold increase in the orchestration market or complete decimation? Check out this earlier post that describes an OSS doomsday scenario.
Global CSPs have significant revenue streams that won’t disappear by 2023 and are certain to put up a fight against becoming obsolete under that doomsday scenario. It seems that open source and orchestration are key weapons in this global battle, so we’re bound to see some big investments in this space.
“Insurer IAG has modelled the financial cost that a data breach or ransomware attack would have on its business, in part to understand how much proposed infosec investments might offset its losses.
Head of cybersecurity and governance Ian Cameron told IBM Think 2018 in Sydney that the “value-at-risk modelling” project called upon the company’s actuarial expertise to put numbers on different types and levels of security threats.
“Because we’re an insurance company, we can use actuarial methods to price or model what the costs of a loss event would be,” Cameron said.
“If we have a major data breach or a major ransomware attack, we’ve done some really great work in the past 12 months to model the net cost of losses to our organisation in terms of the loss of productivity, the cost of advertising to address the concerns of our customers, the legal costs, and the costs of regulatory oversight.
“We’ve been able to work out the distribution of loss from a small event to a very big event.”
Ry Crozier on IT News.
There are really only three main categories of benefit that an OSS can be built around:
Revenue generation / increase
Cost reduction
Brand value (ie insurance of the brand, via protection of customer perception of the brand)
The last on the list is rarely used (in my experience) to justify OSS/BSS investment. The IAG experience of costing out infosec risk to operations and brand is an interesting one. It’s also one that has some strong parallels for the OSS/BSS of network operators.
Many people in the telecoms industry treat OSS/BSS as an afterthought and/or an expensive cost centre. Those people fail to recognise that the OSS/BSS are the operationalisation engines that allow customers to use the network assets.
Just as IAG was able to do through actuarial analysis, a telco’s OSS/BSS team could “work out the distribution of loss from a small event to (be) a very big event” (for the telco’s brand value). Consider the loss of repute during sustained network outages. Consider the impact of negative word-of-mouth from billing mistakes. Consider how revenue leakage analysis and predictive network health management might offset losses.
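To show the sort of actuarial exercise IAG describes, here's a hedged, back-of-envelope Monte Carlo sketch of a loss distribution. All parameters (Poisson event frequency, lognormal severity) are invented for illustration, not calibrated to any real operator; the point is only that "distribution of loss from a small event to a very big event" is a computable artefact:

```python
import math
import random
import statistics

def simulate_annual_loss(freq_lambda=2.0, sev_mu=11.0, sev_sigma=1.5):
    """One simulated year: Poisson-distributed event count, lognormal loss per
    event. Parameter values are illustrative assumptions only."""
    # Poisson draw via Knuth's inversion method (keeps this stdlib-only)
    events, p, threshold = 0, 1.0, math.exp(-freq_lambda)
    while True:
        p *= random.random()
        if p <= threshold:
            break
        events += 1
    return sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(events))

random.seed(42)
losses = sorted(simulate_annual_loss() for _ in range(10_000))
var_95 = losses[int(0.95 * len(losses))]  # 95th-percentile annual loss
expected = statistics.fmean(losses)
print(f"expected annual loss ~ {expected:,.0f}, 95% VaR ~ {var_95:,.0f}")
```

A real value-at-risk model would, of course, use calibrated frequency/severity inputs per threat type (data breach, ransomware, sustained outage), but even this toy version yields the small-event-to-big-event distribution the quote describes.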
Can the IAG approach work for justifying your investment in OSS/BSS?
Do you use any other major categories for justifying OSS/BSS spend?
There’s a particular carrier that I know quite well that appears to despise a particular OSS vendor… but keeps coming back to them… and keeps getting let down by them… but keeps coming back to them. And I’m not just talking about support of their existing OSS, but whole new tools.
It never made sense to me… until reading Seth Godin’s blog today. In it, he states, “…this market segment knows that things that are too good to be true can’t possibly work, and that’s fine with them, because they don’t actually want to change–they simply want to be able to tell themselves that they tried. That the organization they paid their money to failed, of course it wasn’t their failure. Once you see that this short-cut market segment exists, you can choose to serve them or to ignore them. And you can be among them or refuse to buy in.”
It starts to make sense. The same carrier has a tendency to spend big money on the big-4 consultants whenever an important decision needs to be made. If the big, ambitious project then fails, the carrier’s project sponsors can say that the big-4 organization they paid their money to failed.
Does that ring true of any telco you’ve worked with? That they don’t actually want to change–they simply want to be able to tell themselves that they tried (or be seen to have tried) with their OSS transformation?
Are we actually stuck in one big dilemma? Are our OSS transformations actually so hard that they’re destined to fail, yet are already failing so badly that we desperately need to transform them? If so, then Seth’s insightful observation gives the appearance of progress AND protection from the pain of failure.
Not sure about you, but I’ll take Seth’s “refuse to buy in” option and try to incite change.
If I start talking about doomsday scenarios where the global OSS job industry is decimated, most people will immediately jump to the conclusion that I’m predicting an artificial intelligence (AI) takeover. AI could have a role to play, but is not a key facet of the scenario I’m most worried about.
You’d think that OSS would be quite a niche industry, but there must be thousands of OSS practitioners in my home town of Melbourne alone. That’s partly due to large projects currently being run in Australia by major telcos such as nbn, Telstra, SingTel-Optus and Vodafone, not to mention all the smaller operators. Some of these projects are likely to scale back in coming months / years, meaning fewer seats in a game of OSS musical chairs. But this isn’t the doomsday scenario I’m hinting at in the title either. There will still be many roles at the telcos and the vendors / integrators that support them.
There are hundreds of OSS vendors in the market now, with no single dominant player. It’s a really fragmented market that would appear to be ripe for M&A (mergers and acquisitions). Ripe for consolidation, but massive consolidation is still not the doomsday scenario because there would still be many OSS roles in that situation.
The doomsday scenario I’m talking about is one where only one OSS gains domination globally. But how?
Most traditional telcos have a local geographic footprint with partners/subsidiaries in other parts of the world, but are constrained by the costs and regulations of a wired or cellular footprint to be able to reach all corners of the globe. All that uniqueness currently leads to the diversity of OSS offerings we see today. The doomsday scenario arises if one single network operator usurps all the traditional telcos and their legacy network / OSS / BSS stacks in one technological fell swoop.
How could a disruption of that magnitude happen? I’m not going to predict, but a satellite constellation such as the one proposed by Starlink has some of the hallmarks of such a scenario. By using low-earth orbit (LEO) satellites (ie lower latency than geostationary satellite solutions), point-to-point laser interconnects between them and peering / caching of data in the sky, it could fundamentally change the world of communications and OSS.
It has global reach, no need for carrier interconnect (hence no complex contract negotiations or OSS/BSS integration for that matter), no complicated lead-in negotiations or reinstatements, no long-haul terrestrial or submarine cable systems. None of the traditional factors that cost so much time and money to get customers connected and keep them connected (only the complication of getting and keeping the constellation of birds in the sky – but we’ll put that to the side for now). It would be hard for traditional telcos to compete.
I’m not suggesting that Starlink can or will be THE ubiquitous global communications network. What if Google, AWS or Microsoft added this sort of capability to their strengths in hosting / data? Such a model introduces a new, consistent network stack without the telcos’ tech debt burdens discussed here. The streamlined network model means the variant tree is millions of times simpler. And if the variant tree is that much simpler, so is the operations model and so is the OSS… with one distinct contradiction. It would need to scale for billions of customers rather than millions and trillions of events.
You might be wondering about all the enterprise OSS. Won’t they survive? Probably not. Comms networks are generally just an important means-to-an-end for enterprises. If the one global network provider were to service every organisation with local or global WANs, as well as all the hosting they would need, and hosted zero-touch network operations like Google is already pre-empting, would organisations have a need to build or own an on-premises OSS?
One ubiquitous global network, with a single pared back but hyperscaled OSS, most likely purpose-built with self-healing and/or AI as core constructs (not afterthoughts / retrofits like for existing OSS). How many OSS roles would survive that doomsday scenario?
Do you have an alternative OSS doomsday scenario that you’d like to share?
Hat tip again to Jay Fenton for pointing out what Starlink has been up to.
It’s an interesting season as we come up to the EOFY (end of financial year – on 30 June). Budget cycles are coming to an end. At organisations that don’t carry un-spent budgets into the next financial year, the looming EOFY triggers a use-it-or-lose-it mindset.
In some cases, organisations are almost forced to allocate funds on OSS investments even if they haven’t always had the time to identify requirements and / or model detailed return projections. That’s normally anathema to me because an OSS‘ reputation is determined by the demonstrable value it creates for years to come. However, I can completely understand a client’s short-term objectives. The challenge we face is to minimise any risk of short-term spend conflicting with long-term objectives.
I take the perspective of allocating funds to build the most generally useful asset (BTW, I like Robert Kiyosaki’s simple definition of an asset: “in reality, an asset is only something that puts money in your pocket.”) In the case of OSS, putting money in one’s pocket needs to consider earnings (or cost reductions) that exceed outgoings such as maintenance, licensing, operations, etc, as well as cost of capital. Not a trivial task!
So this is where the farm equipment analogy comes in.
If we haven’t had the chance to conduct demand estimation (eg does the telco’s market want the equivalent of wheat, rice, stone fruit, etc) or product mix modelling (ie which mix of those products will bear optimal returns) then it becomes hard to predict what type of machinery is best fit for our future crops. If we haven’t confirmed that we’ll focus efforts on wheat, then it could be a gamble to invest big in a combine harvester (yet). We probably also don’t want to invest capital and ongoing maintenance on a fruit tree shaker if our trees won’t begin bearing fruit for another few years.
Therefore, a safer investment recommendation would be on a general-purpose machine that is most likely to be useful for any type of crop (eg a tractor).
In OSS terminology, if you’re not sure if your product mix will provision 100 customers a day or 100,000 then it could be a little risky to invest in an off-the-shelf orchestration / provisioning engine. Still potentially risky, but less so, would be to invest in a resource and service inventory solution (if you have a lot of network assets), alarm management tools (if you process a lot of alarms), service order entry, workforce management, etc.
Having said that, a lot of operators already have a strong gut-feel for where they intend to get returns on their investment. They may not have done the numbers extensively, but they know their market roadmap. If wheat is your specialty, go ahead and get the combine harvester.
I’d love to get your take on this analogy. How do you invest capital in your OSS without being sure of the projections (given that we’re never sure on projections becoming reality)?
“The global Internet of Things (IoT) market will be worth $1.1 trillion in revenue by 2025 as market value shifts from connectivity to platforms, applications and services. By that point, there will be more than 25 billion IoT connections (cellular and non-cellular), driven largely by growth in the industrial IoT market. The Asia Pacific region is forecast to become the largest global IoT region in terms of both connections and revenue.
Although connectivity revenue will grow over the period, it will only account for 5 per cent of the total IoT revenue opportunity by 2025, underscoring the need for operators to expand their capabilities beyond connectivity in order to capture a greater share of market value.”
GSMA Intelligence, referred to here.
Let’s look at these projected numbers. The GSMA Intelligence report forecasts only 5 cents in every dollar of IoT spend (of a $1.1T market opportunity) will be allocated to connectivity. That leaves $1.045T on the table if network operators just focus on connectivity.
Traditional OSS tend to focus on managing connectivity – less so on managing marketplaces, customer-facing platforms and applications. Does that headline number – $1.045T – provide you with an incentive to re-consider what your OSS manages and future use cases?
IoT requires slightly different OSS thinking:
Rather than integrating to a (relatively) small number of device types, IoT will have an almost infinite number of sensor types from a huge range of suppliers.
Rather than managing devices individually, their sheer volume means that devices will need to be increasingly managed in cohorts via policy controls.
Rather than a fairly narrow set of network-comms based services, functionality explodes into diverse areas like metering, vehicle fleets, health-care, manufacturing, asset controls, etc, etc so IoT controllers will need to be developed by a much longer-tail of suppliers (meaning open development platforms and/or scalable certification processes to integrate into the IoT controller platforms).
There are undoubtedly many, many additional differences.
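The cohort-and-policy idea from the list above can be made concrete with a small sketch. Everything here is hypothetical (device attributes, policy shapes, action strings are invented for illustration): the point is that a policy targets a cohort by attribute match, so millions of devices are managed by a handful of rules rather than individually.

```python
from collections import defaultdict

# Illustrative device records: (device_id, attributes)
devices = [
    ("meter-001", {"type": "power_meter", "fw": "1.2", "region": "east"}),
    ("meter-002", {"type": "power_meter", "fw": "1.0", "region": "west"}),
    ("tracker-01", {"type": "fleet_tracker", "fw": "2.1", "region": "east"}),
]

# A policy targets a cohort by attribute match, never an individual device
policies = [
    {"match": {"type": "power_meter"}, "action": "report_interval=900s"},
    {"match": {"fw": "1.0"},           "action": "schedule_fw_upgrade"},
]

def actions_for(attrs: dict) -> list:
    """All policy actions that apply to a device with these attributes."""
    return [p["action"] for p in policies
            if all(attrs.get(k) == v for k, v in p["match"].items())]

# Group devices into action cohorts instead of issuing per-device commands
cohorts = defaultdict(list)
for dev_id, attrs in devices:
    for action in actions_for(attrs):
        cohorts[action].append(dev_id)

print(dict(cohorts))
# {'report_interval=900s': ['meter-001', 'meter-002'],
#  'schedule_fw_upgrade': ['meter-002']}
```

Adding a millionth meter changes nothing in the policy set; it simply falls into the existing cohorts, which is the scaling property IoT device volumes demand.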
Caveat: I haven’t evaluated the claims / numbers in the GSMA Intelligence report. This blog is just to prompt a thought-experiment around hypothetical projections.
“In the past, business-oriented groups have had ideas about what they want to do and then they come to us… Now, they want to know what technology can bring to the table and then they’ll work on the business plan.
So there’s a big gap here. It’s a phenomenon that’s been happening in the last year and it’s an uncomfortable place for IT. We’re not used to having to lead in that way. We have been more in the order-taker business.”
Veenod Kurup, Liberty Global, Group CIO.
That’s a really thought-provoking insight from Veenod isn’t it? Technology driving the business rather than business driving the technology. Technology as the business advantage.
I’ll be honest here – I never thought I’d see that day, although I guess as e-business increases, the dependency on tech increases in lockstep. I’m passionate about tech, but also of the opinion that the tech is only a means to an end.
So if what Veenod says is reflective of a macro-trend across all industry then he’s right in saying that we’re going to have some very uncomfortable situations for some tech experts. Many will have to widen their field of view from a tech-only vision to a business vision.
Maybe instead the business-oriented groups could just come to the OSS / BSS department and speak with our valuable tripods. After all, we own and run the information and systems where business (BSS) meets technology (OSS) right? Or as previously reiterated on this blog, we have the opportunity to take the initiative and demonstrate the business value within our OSS / BSS.
PS. hat-tip to Dawn Bushaus for unearthing the quote from Veenod in her article on Inform.
“All over the world – from America’s National Football League (NFL) to the National Basketball Association (NBA), from our own AFL to NRL – athletes and coaches are cultivating club cultures in which tales of personal hardship and woe are welcome, even desirable. All are clamouring to embrace the biggest buzzword in professional sport: vulnerability.
The most publicised incarnation of this shift was the “Triple H” sessions used at AFL winners Richmond last year, where once a fortnight a player stood and shared three personal stories about a Hero, Hardship and Highlight from their life.”
Konrad Marshall, GoodWeekend.
I’ve just done a quick rummage through the OSS and/or tech-related books in my bookshelves. Would you like to know what I noticed? None mention teams, teamwork or culture, let alone the V-word quoted above – vulnerability. Funny that.
I say funny, because each of the highest-performing teams I’ve worked with have also had great team culture. Conversely, the worst-performing teams I’ve worked with have seen ego overpower vulnerability and empathy. Does that resonate with your experiences too?
Professor Amy Edmondson of Harvard Business School refers to psychological safety as, “a team climate characterized by interpersonal trust and mutual respect in which people are comfortable being themselves.” Graeme Cowan states, “We become more open-minded, resilient, motivated, and persistent when we feel safe. Humor increases, as does solution-finding and divergent thinking — the cognitive process underlying creativity.”
Yet why is it that we don’t seem to rate team factors when it comes to OSS delivery? “Team bonding” in stylishly inverted commas can come across as a bit ridiculous, but informal culture building seems to be more valuable than any of the technical alignment workshops we tend to build into our project plans.
OSS are built by teams, for teams, clearly. They’re often built in politically charged situations. They’re also usually built in highly complex environments, where complexities abound in technology and process, but even more so within the people involved. Not only that, but they’re regularly built across divisional lines of business units or organisations over which (hopefully metaphorical) hand grenades can be easily thrown.
Underestimate psychological safety and vulnerability across the entire stakeholder group at your peril on OSS projects. We could benefit from looking outside the walls of OSS, to models used by sporting teams in particular, where team culture is invested in far more heavily because of the proven performance benefits they’ve delivered.
Dymphna Boholt describes it thus:
“The reality is that if everything lands in a project – if every element does what it needs to do when it was supposed to do it, and everything went off without a hitch, that’s actually a stunning success. It’s also incredibly rare.
But we set that impossibly high standard as our benchmark, and then think that everything short of that is a failure.
And in that way, the grading scale should almost be reversed, so that if you put them side by side, it’d look like:
Success = Stunning Success
Partial Failure = Total Success
Total Failure = Success
Miserable Failure = Useful learning experience
Total Trainwreck = Failure.”
You know, I’d never, ever looked at OSS projects from this perspective before. I’d always taken the view that if a project I’d worked on didn’t deliver on all expectations (the triple constraint of cost, time, scope/functionality), then it had failed to meet expectations, no matter how big the achievements of the project team may have been.
Dymphna has a really interesting point though. The chances of everything going to plan on a large, complex OSS project are almost zero. There are always challenges to overcome, no matter how skillful the team, how great the planning. It’s why I call it the OctopOSS (just when you think you have all the tentacles tied down, another comes and whacks you on the back of the head).
What if we instead treated the definition of “success” on our OSS projects as an implausibly unlikely stretch goal, to act as a guiding vision for our team? What if we also re-set the expectation benchmark of stakeholders to Dymphna’s right-hand column, not their incumbent expectation at the left?
Would that be letting ourselves off the hook for “failing,” for not meeting our promises, for under-delivering? Or is it fair to resign ourselves to reality rather than delusional benchmarks? Is Dymphna’s right-hand scale effectively like passing an exam only because the bell-curve is used and your classmates have overwhelmingly scored even lower than you?
I’m asking myself, why is it that some of the projects I’m most proud of (of the achievements of my colleagues and I) are also perhaps the biggest failures (on time, cost, scope or customer usability)?
Are you as challenged by Dymphna’s perspective as I am? I’d love to hear your thoughts.
Being an OSS product supplier to telecom operators is a tough business. There is a constant stream of outgoings on developer costs, cost of sale, general overheads, etc. Unfortunately, revenue streams are rarely so smooth. In fact, they tend to be decidedly lumpy: large but unpredictable spikes of income (difficult to forecast years in advance) stemming from customer implementations.
Not only that, but the risks are high due to the complexity and unknowns of OSS implementation projects as well as the lack of repeatability that was discussed in yesterday’s post.
Enduringly valuable businesses achieve their status through predictable, diversified, recurring (and preferably growing) revenue streams, so they need to be objectives of our OSS business models.
Annual maintenance fees (usually in the order of 20-22% of up-front list prices) are the most common recurring revenue model used by OSS product suppliers. Transaction-based pricing is another common model.
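To see why maintenance fees matter so much to smoothing lumpy licence income, here's a minimal sketch. All the figures and the 20% rate applied are hypothetical, purely for illustration:

```python
# Hypothetical illustration: recurring maintenance revenue built up
# from one-off licence sales. All figures are invented for the example.

MAINTENANCE_RATE = 0.20  # 20% of up-front list price, per year

# (year, up-front licence revenue) for new customer implementations
licence_sales = [(1, 1_000_000), (2, 0), (3, 2_500_000), (4, 0), (5, 500_000)]

installed_base = 0.0  # cumulative list price under maintenance
for year, licence in licence_sales:
    installed_base += licence
    maintenance = installed_base * MAINTENANCE_RATE
    total = licence + maintenance
    print(f"Year {year}: licence={licence:>9,.0f}  "
          f"maintenance={maintenance:>9,.0f}  total={total:>9,.0f}")
```

The licence column spikes and collapses from year to year, while the maintenance column only ever grows with the installed base, which is exactly the predictable, recurring stream described above.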
Cloud subscription (consumption) based models are also becoming more common, although there are always challenges around convincing carriers of the security and sovereignty of such important tools and data being hosted off-site.
I’m fascinated with the platform-plays, like Salesforce, which is a mushrooming form of the subscription model because there’s an ecosystem (or marketplace) of sellers contributing to transaction volumes. OSS and BSS are the perfect platform play but I haven’t seen any built around this style of revenue model yet. [Please let me know if I’ve missed any].
It has also been interesting to observe Cisco’s market success on the back of a perceived revenue shift towards more software and services.
Whenever considering alternate revenue models, I refer back to this great image from Ross Dawson:
Do any apply to your OSS? Can any apply to your OSS?
Tomorrow we’ll discuss OSS professional services revenues and the contrasting mindset compared with products.
In yesterday’s post, we talked about how to estimate OSS pricing. One of the key pillars of the approach was to first identify a short-list of vendors / integrators best-suited to implementing your specific OSS, then working closely with them to construct a pricing model.
Finding the right vendor / integrator can be a complex challenge. There are dozens, if not hundreds, of OSS / BSS solutions to choose from and there are rarely like-for-like comparators. There are some generic comparison tools such as Gartner’s Magic Quadrant, but there’s no way that they can cater for the nuanced requirements of each OSS operator.
Okay, so you don’t want to hear about problems. You want solutions. Well, today’s post describes the approach we’ve used and refined across the many product / vendor selection processes we’ve conducted with OSS operators.
We start with a short-listing exercise. You won’t want to deal with dozens of possible suppliers. You’ll want to quickly and efficiently identify a small number of candidates that have capabilities that best match your needs. Then you can invest a majority of your precious vendor selection time in the short-list. But how do you know the up-to-date capabilities of each supplier? We’ll get to that shortly.
For the short-listing process, I use a requirement gathering and evaluation template. You can find a PDF version of the template here. Note that the content within it is outdated and I now tend to use a more benefit-centric classification rather than a feature-centric one, but the template itself is still applicable.
STEP ONE – Requirement Gathering
The first step is to prepare a list of requirements (as per page 3 of the PDF):
The left-most three columns in the diagram above (in white) are filled out by the operator, classifying the list of requirements and how important each one is (ie mandatory, etc). The depth of requirements (column 2) is up to you and can range from specific technical details to high-level objectives. They could even take the form of user stories or intended benefits.
STEP TWO – Issue your requirement template to a list of possible vendors
Once you’ve identified the list of requirements, you want to identify a list of possible vendors/integrators that might be able to deliver on those requirements. The PAOSS vendor/product list might help you to identify possible candidates. We then send the requirement matrix to the vendors. Note that we also send an introduction pack that provides the context of the solution the OSS operator needs.
STEP THREE – Vendor Self-analysis
The right-most three columns in the diagram above (in aqua) are designed to be filled out by the vendor/integrator. The suppliers are best suited to fill out these columns because they best understand their own current offerings and capabilities.
Note that the status column is a pick-list of compliance levels, where FC = Fully Compliant. See page 2 of the template for other definitions. Given that it is a self-assessment, you may choose to adjust the Status (vendor self-rankings) if you know better and/or ask further questions to validate the assessments.
The “Module” column identifies which of the vendor’s many products would be required to deliver on the requirement. This column becomes important later on as it will indicate which product modules are most important for the overall solution you want. It may allow you to de-prioritise some modules (and requirements) if price becomes an issue.
STEP FOUR – Compare Responses
Once all the suppliers have returned their matrix of responses, you can compare them at a high level based on the summary matrix (on page 1 of the template).
For each of the main categories, you’ll be able to quickly see which vendors are the most FC (Fully Compliant) or NC (Non-Compliant) on the mandatory requirements.
Of course you’ll need to analyse more deeply than just the Summary Matrix, but across all the vendor selection processes we’ve been involved with, there has always been a clear identification of the suppliers of best fit.
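The summary comparison in Step Four is essentially a tally of compliance statuses per vendor, split by requirement priority. Here's a minimal sketch of that tally. The requirement IDs, vendor names and responses are all hypothetical, and I've assumed a simple FC / PC / NC status pick-list (the template defines the full set):

```python
# A minimal sketch of Step Four: tallying each vendor's self-assessed
# compliance statuses against the operator's requirements. All IDs,
# vendors and statuses below are hypothetical examples.

from collections import Counter

# requirement_id -> {vendor: status}, using an assumed pick-list:
# FC = Fully Compliant, PC = Partially Compliant, NC = Non-Compliant
responses = {
    "REQ-001": {"VendorA": "FC", "VendorB": "PC"},
    "REQ-002": {"VendorA": "FC", "VendorB": "FC"},
    "REQ-003": {"VendorA": "NC", "VendorB": "FC"},
}
mandatory = {"REQ-001", "REQ-002"}  # the subset the operator flagged Mandatory

def summarise(vendor):
    """Count one vendor's statuses, split into mandatory vs optional."""
    tally = Counter()
    for req, statuses in responses.items():
        priority = "mandatory" if req in mandatory else "optional"
        tally[(priority, statuses[vendor])] += 1
    return tally

for vendor in ("VendorA", "VendorB"):
    print(vendor, dict(summarise(vendor)))
```

A real selection would weight these counts (an NC on a mandatory requirement matters far more than one on a nice-to-have), but even this simple roll-up tends to surface the suppliers of best fit quickly.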
Hopefully the process above is fairly clear. If not, contact us and we’d be happy to guide you through the process.
“Sometimes a simple question deserves a simple answer: “A piece of string is twice as long as half its length”. This is a brilliant answer… if you have its length… Without a strategy, how do you know if it is successful? It might be prettier, but is it solving a defined business problem, saving or making money, or fulfilling any measurable goals? In other words: can you measure the string?”
I was recently asked how to obtain OSS pricing by a University student for a paper-based assignment. To make things harder, the target client was to be a tier-2 telco with a small SDN / NFV network.
As you probably know already, very few OSS providers make their list prices known. The few vendors that do tend to focus on the high volume, self-serve end of the market, which I’ll refer to as “Enterprise Grade.” I haven’t heard of any “Telco Grade” OSS suppliers making their list prices available to the public.
There are so many variables when finding the right OSS for a customer’s needs and the vendors have so much pricing flexibility that there is no single definitive number. There are also rarely like-for-like alternatives when selecting an OSS vendor / product. Just like the fabled piece of string, the best way is to define the business problem and get help to measure it. In the case of OSS pricing, it’s to design a set of requirements and then go to market to request quotes.
Now, I can’t imagine many vendors being prepared to invest their valuable time in developing pricing based on paper studies, but I have found them to be extremely helpful when there’s a real buyer. I’ll caveat that by saying that if the customer (eg service provider) you’re working with is prepared to invest the time to help put a list of requirements together, then you have a starting point to approach the market for customised pricing.
We’ve run quite a few of these vendor selections and have refined the process along the way to streamline for vendors and customers alike. Here’s a template we’ve used as a starting point for discussions with customers:
Note that each customer will end up with a different mapping of the diagram above to suit their specific needs. We also have existing templates (eg Questionnaire, Requirement Matrix, etc) to support the selection process where needed.
If you’re interested in reading more about the process of finding the right OSS vendor and pricing for you, click here and here.
Of course, we’d also be delighted to help if you need assistance to develop an OSS solution, get OSS pricing estimates, develop a workable business case and/or find the right OSS vendor/products for you.
For network operators, our OSS and BSS touch most parts of the business. The network, and the services it carries, are core business, so a majority of business units will be contributing to that core business. As such, our OSS and BSS provide many of the metrics used by those business units.
This is a privileged position to be in. We get to see what indicators are most important to the business, as well as the levers used to control those indicators. From this privileged position, we also get to see the aggregated impact of all these KPIs.
In your years of working on OSS / BSS, how many times have you seen key business indicators that conflict between business units? They generally become more apparent on cross-team projects, where the objectives of one internal team directly conflict with the objectives of another internal team (or teams).
In theory, a KPI tree can be used to improve consistency and ensure all business units are pulling towards a common objective… [but what if, like most organisations, there are many objectives? Does that mean you have a KPI forest and the trees end up fighting for light?]
But here’s a thought… Have you ever seen an OSS/BSS suite with the ability to easily build KPI trees? I haven’t. I’ve seen thousands of standalone reports containing myriad indicators, but never a consolidated roll-up of metrics. I have seen a few products that show operational metrics rolled-up into a single dashboard, but not business metrics. They appear to have been designed to show an information hierarchy, but not necessarily with KPI trees in mind specifically.
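For anyone picturing what base-product KPI tree functionality might look like, here's a minimal sketch of the core mechanic: leaf metrics rolling up to a parent business objective via weights. The tree, the weights and the scores are all hypothetical:

```python
# A minimal sketch of a KPI tree: leaf metrics roll up to parent KPIs
# via a weighted sum. The tree, weights and scores are all hypothetical.

kpi_tree = {
    "Customer Experience": {            # top-level business objective
        "Service Availability": 0.6,    # child KPI and its weight
        "Mean Time to Repair": 0.4,
    },
}
# Normalised scores for the leaf metrics (eg fed from OSS/BSS reports)
leaf_scores = {"Service Availability": 0.95, "Mean Time to Repair": 0.80}

def rollup(kpi):
    """Return a KPI's score: its leaf value, or the weighted sum of children."""
    if kpi in leaf_scores:
        return leaf_scores[kpi]
    children = kpi_tree[kpi]
    return sum(weight * rollup(child) for child, weight in children.items())

print(rollup("Customer Experience"))  # 0.6 * 0.95 + 0.4 * 0.80
```

The recursion is the interesting part: the same roll-up works however deep the tree goes, which is exactly what a consolidated (rather than standalone-report) view of business metrics would need.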
What do you think? Does it make sense for us to offer KPI trees as base product functionality from our reporting modules? Would this functionality help our OSS/BSS add more value back into the businesses we support?
Tending to be a low-volume, high-customisation, high-uniqueness product, OSS has a significantly different selling proposition than most “box drop” products.
Can you imagine if OSS salespeople used any of these “great deal” propositions (as described by Gary Halbert)? “I’m going out of business.”
“I just had a fire and I’m having a fire sale.”
“I’m crazy.” (all used car dealers)
“I owe taxes and I’ve got to raise money fast to pay them.”
“I’ve lost my lease and I’ve got to sell this merchandise right away before it gets thrown into the street.”
“I’ve got to make space for some new merchandise that is arriving soon so I will sell you what I have on hand real cheap.”
Did the image of an OSS salesperson saying any of those, especially the first, bring a smile to your face?
Anyway, Gary’s article also goes on to say, “…I wrote: “and if you can find a way to use it, you can dramatically increase your sales volume.”
Now, compare that to this: “and if you can find a way to use it, you can make yourself a bushel of money!”
Isn’t that a lot more powerful? You bet! The words “dramatically increase your sales volume” do not even begin to conjure up the visual imagery of “a bushel of money.””
From what I’ve experienced on the client side of the buying equation, OSS selling propositions seem to be driven by functionality. I call it the functionality arms-race, where vendors compete on functionality rather than efficacy. In a way, it’s the “sales volume” variant mentioned by Gary above.
The other approach that does align more closely with the “bushel of money” variant is the cost-out discussion. It’s the, “if you implement this OSS, you’ll be able to reduce head-count in your operations team,” argument. That’s definitely important for any operator that sees their OSS as a cost-centre. However, it’s a “save a bushel of money” argument rather than the more powerful “make a bushel of money” argument.
In reply to a recent post, James Crawshaw of Light Reading wrote, “OSS/BSS represents around 2-3% of revenue and takes up around 10% of capex.” I initially read this as OSS/BSS contributing 2-3% of revenue (ie the higher the percentage the better). However, James clarified that our IT/OSS/BSS tend to consume 2-3% of revenue (ie the lower the percentage the better).
Can you imagine how these tiny wording/perspective differences could change the credibility of the whole OSS/BSS industry? As soon as our OSS make a bushel of money, then the selling proposition becomes a whole lot stronger.
We all know the story of Goldilocks and the Three Bears where Goldilocks chooses the option that’s not too heavy, not too light, but just right.
The same model applies to OSS – finding / building a solution that’s not too heavy, not too light, but just right. To be honest, we probably tend to veer towards the too heavy, especially over time. We put more complexity into our architectures, integrations and customisations… because we can… which ends up burdening us and our solutions.
The ONAP Charter has some great plans including, “…real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.”
These are fantastic ambitions to strive for, especially at the Pappa Bear end of the market. I have huge admiration for those who are creating and chasing bold OSS plans. But what about for the large majority of customers that fall into the Goldilocks category? Is our field of vision so heavy (ie so grand and so far into the future) that we’re missing the opportunity to solve the business problems of our customers and make a difference for them with lighter solutions today?
TM Forum’s Digital Transformation World is due to start in just over two weeks. It will be fascinating to see how many of the presentations and booths consider the Goldilocks requirements. There probably won’t be many because it’s just not as sexy a story as one that mentions heavy solutions like policy-driven orchestration, zero-touch automation, AI / ML / analytics, self-scaling / self-healing networks, etc.
[I should also note that I fall into the category of loving to listen to the heavy solutions too!! ]