“The House Appropriations Committee unanimously accepted an amendment to an appropriations bill on Thursday that reinforces sanctions against Chinese telecommunications company ZTE, a rebuke to President Trump, who earlier this week tweeted support for the company,” reported TheHill.com.
As part of our ongoing commitment to open and collaborative innovation, we’re working with SKT, Intel Corporation and the OpenStack Foundation to launch a new open infrastructure project called Airship. This project builds on the foundation laid by the OpenStack-Helm project launched in 2017. It lets cloud operators manage sites at every stage from creation through minor and major updates, including configuration changes and OpenStack upgrades. It does all this through a unified, declarative, fully containerized, and cloud-native platform.
Simply put, Airship lets you build a cloud more easily than ever before. Whether you’re a telecom, manufacturer, health care provider, or an individual developer, Airship makes it easy to predictably build and manage cloud infrastructure.
It’s built using microservices, which we think are the future of software development, and embraces cloud native principles out of the box. This lets each Airship microservice perform one specific role in the cloud delivery and management process, and do it well. The ultimate goal of Airship is to help operators take hardware from loading dock to an OpenStack cloud, all while ensuring first-class life cycle management of that cloud once it enters production.
The initial focus of this project is the implementation of a declarative platform to introduce OpenStack on Kubernetes (OOK) and the lifecycle management of the resulting cloud, with the scale, speed, resiliency, flexibility, and operational predictability demanded of network clouds.
“Declarative” might be a new term to some readers, but it’s a simple concept with huge benefits. In a nutshell, every aspect of your cloud is defined in standardized documents that give you extremely flexible and fine-grained control of your cloud infrastructure. You simply manage the documents themselves and submit them, and the platform takes care of the rest. This includes determining what has changed since the last submission and orchestrating those changes.
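As a toy illustration of the concept (this is not Airship’s actual document schema or workflow, just a sketch of the idea), a declarative platform compares the newly submitted documents against the last-applied ones and orchestrates only the difference:

```python
# Toy sketch of declarative reconciliation: work out what changed
# between the last-applied site definition and the new submission.

def diff_documents(previous: dict, current: dict) -> dict:
    """Return the added, removed and modified top-level entries."""
    added = {k: current[k] for k in current.keys() - previous.keys()}
    removed = {k: previous[k] for k in previous.keys() - current.keys()}
    modified = {
        k: (previous[k], current[k])
        for k in current.keys() & previous.keys()
        if current[k] != previous[k]
    }
    return {"added": added, "removed": removed, "modified": modified}

# Last-applied site definition vs. the newly submitted one
# (keys and values invented for illustration).
last_applied = {"openstack_version": "queens", "compute_nodes": 20}
submitted = {"openstack_version": "rocky", "compute_nodes": 20,
             "monitoring": True}

changes = diff_documents(last_applied, submitted)
# The platform would then orchestrate only these changes.
```

The operator never issues imperative commands; they edit the documents and the platform works out the delta.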
AT&T is contributing code for Airship that started in collaboration with SKT, Intel and a number of other companies in 2017. It’s the foundation of AT&T’s network cloud that will run our 5G core supporting the late 2018 launch of 5G service in 12 cities. Airship will also be used by Akraino Edge Stack, which is a new Linux Foundation project. Akraino is intended to create an open source software stack supporting high-availability cloud services optimized for edge computing systems and applications.
Airship will fuel and accelerate our Network AI initiative which houses several of our other open source projects. We want to build and nurture an open ecosystem of developers who can work together to advance this technology and deploy it within their own organizations.
Ryan van Wyk, assistant vice president of Cloud Platform Development at AT&T Labs, describes it like this: “Airship is going to allow AT&T and other operators to deliver cloud infrastructure predictably that is 100% declarative, where Day Zero is managed the same as future updates via a single unified workflow, and where absolutely everything is a container from the bare metal up.”
Ryan and his team will follow this blog post with a more in-depth introduction to project Airship in the next few days.
“We are pleased to bring continued innovation with Airship, extending the work we started in 2016 with the OpenStack and Kubernetes communities to create a continuum for modern and open infrastructure. Airship will bring new network edge capabilities to these stacks and Intel is committed to working with this project and the many other upstream projects to continue our focus of upstream first development and accelerating the industry.” – Imad Sousou, corporate vice president and general manager of the Open Source Technology Center at Intel
As promised in yesterday’s post about lumpy revenues for OSS product companies, today we’ll discuss OSS professional services revenues and the contrasting mindset compared with products.
Professional services revenues are a great way of smoothing out the lumpy revenue streams of traditional OSS product companies. There’s just one problem though. Of all the vendors I’ve worked with, I’ve found that they have a predilection for one or the other – a product mindset or a services mindset – and they struggle to do both well because the mindsets are quite different.
Not only that but we can break professional services into two categories:
- Product-related services – the installation and commissioning of products; and
- Consultancy-based services – the value-add services that drive business value from the OSS / BSS
Product companies provide product-related services, naturally. I can’t help but think that if we as an industry provided more of the consultancy-based services, we’d have more justification for greater spend on OSS / BSS (and smoother revenue streams in the process).
Having said that, PAOSS specialises in consultancy-based services (as well as install / commission / delivery services), so we’re always happy to help organisations that need assistance in this space!!
Being an OSS product supplier to telecom operators is a tough business. There is a constant stream of outgoings on developer costs, cost of sale, general overheads, etc. Unfortunately, revenue streams are rarely so smooth. In fact, they tend to be decidedly lumpy – unpredictable in timing (when forecasting inflows years in advance) and arriving as large spikes of income from customer implementations.
Not only that, but the risks are high due to the complexity and unknowns of OSS implementation projects as well as the lack of repeatability that was discussed in yesterday’s post.
Enduringly valuable businesses achieve their status through predictable, diversified, recurring (and preferably growing) revenue streams, so they need to be objectives of our OSS business models.
Annual maintenance fees (usually in the order of 20-22% of up-front list prices) are the most common recurring revenue model used by OSS product suppliers. Transaction-based pricing is another common model.
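As a quick worked example (the figures are purely illustrative, using the ~20% rate mentioned above):

```python
# Illustrative numbers only: recurring maintenance revenue on a
# one-off licence sale, at the ~20-22% of list price cited above.

list_price = 1_000_000          # up-front licence list price ($)
maintenance_rate = 0.20         # 20% annual maintenance fee

annual_maintenance = list_price * maintenance_rate
five_year_total = list_price + 5 * annual_maintenance

print(annual_maintenance)  # 200000.0
print(five_year_total)     # 2000000.0
```

Over five years, the maintenance stream doubles the deal value in this example – and, unlike the up-front licence spike, it arrives smoothly and predictably.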
Cloud subscription (consumption) based models are also becoming more common, although there are always challenges around convincing carriers of the security and sovereignty of such important tools and data being hosted off-site.
I’m fascinated with platform plays, like Salesforce, a mushrooming form of the subscription model where an ecosystem (or marketplace) of sellers contributes to transaction volumes. OSS and BSS are perfect platform plays, but I haven’t seen any built around this style of revenue model yet. [Please let me know if I’ve missed any].
It has also been interesting to observe Cisco’s market success on the back of a perceived revenue shift towards more software and services.
Whenever considering alternate revenue models, I refer back to this great image from Ross Dawson:
Do any apply to your OSS? Can any apply to your OSS?
Tomorrow we’ll discuss OSS professional services revenues and the contrasting mindset compared with products.
“Each problem that I solved became a rule, which served afterwards to solve other problems.”
– René Descartes
On a recent project, I spent quite a lot of time thinking in terms of problem statements, then mapping them into solutions that could be broken down for assignment to lots of delivery teams – feeding their Agile backlogs.
On that assignment, like the multitude of OSS projects in the past, there has been very little repetition in the solutions. The people, org structure, platforms, timelines, objectives, etc all make for a highly unique solution each time. And high uniqueness doesn’t easily equate to repeatability. If there’s no repeatability, there’s no point building repeatable tools to improve efficiency. But repeatability is highly desirable for the purpose of reliability, continual improvement, economy of scale, etc.
However, if we look a step above the solution, above the use cases, above the challenges, we have key problem statements and they do tend to be more consistent (albeit still nuanced for each OSS). These problem statements might look something like:
- We need to find a new vendor / solution to do X (where X is the real problem statement)
- Realised risks have impacted us badly on past projects (so we need to minimise risk on our upcoming transformation)
- We don’t get new products out to market fast enough to keep up with competitor Y and are losing market share to them
- Our inability to resolve network faults quickly is causing customers to lose confidence in us
It’s at this level that we begin to have more repeatability, so it’s at this level that it makes sense to create rules, frameworks, etc that are re-usable and refinable. You’ll find some of the frameworks I use under the Free Stuff menu above.
It seems that I’m an OSS map-maker by nature, wanting to take the journey but also map it out for re-use and refinement.
I’d love to hear whether it’s a common trait and inherent in many of you too. Similarly, I’d love to hear about how you seek out and create repeatability.
I recently had something of a perspective-flip moment in relation to automation within the realm of OSS.
In the past, I’ve tended to tackle the automation challenge from the perspective of applying automated / scripted responses to tasks that are done manually via the OSS. But it’s dawned on me that I had it the wrong way around! That’s only an incremental perspective on the main objective of automation – global zero-touch networks.
If we take all of the tasks performed by all of the OSS around the globe, the number of variants is incalculable… which probably means the zero-touch problem is unsolvable (we might be able to solve for many situations, but not all).
The more solvable approach would be to develop a more homogeneous approach to network self-care / self-optimisation. In other words, the majority of the zero-touch challenge is actually handled at the equivalent of EMS level and below (I’m perhaps using out-dated self-healing terminology, but hopefully terminology that’s familiar to readers) and only cross-domain issues bubble up to OSS level.
As the diagram below describes, each layer up abstracts but connects (as described in more detail in “What an OSS shouldn’t do”). That is, each higher layer in the stack reduces the amount of information/control within each domain it’s responsible for, but it assumes a broader responsibility for connecting domains together.
The abstraction process reduces the number of self-healing variants the OSS needs to handle. But to cope with the complexity of self-caring for connected domains, we need a more homogeneous set of health information being presented up from the network.
Whereas the intent model is designed to push actions down into the lower layers with a standardised, simplified language, this would be the reverse – pushing network health knowledge up to higher layers to deal with… in a standard, consistent approach.
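Here’s a hypothetical sketch of that “bubble up” model – all the names and fields are invented for illustration – where each domain self-heals locally and escalates only cross-domain issues, normalised into a homogeneous format before they reach the OSS:

```python
# Hypothetical sketch: domains attempt EMS-level self-healing first
# and escalate only cross-domain issues, in a standardised record.

def handle_in_domain(event):
    """Self-heal inside the domain; escalate only what crosses domains."""
    if not event.get("cross_domain"):
        return None  # resolved locally by domain-level self-healing
    # Normalise before pushing up, so the OSS sees a homogeneous format.
    return {
        "domain": event["domain"],
        "severity": event["severity"],
        "summary": event["summary"],
    }

events = [
    {"domain": "radio", "severity": "minor",
     "summary": "cell restart", "cross_domain": False},
    {"domain": "transport", "severity": "major",
     "summary": "link down affecting core", "cross_domain": True},
]

escalated = [e for e in (handle_in_domain(ev) for ev in events)
             if e is not None]
```

The OSS-level zero-touch problem shrinks because it only ever sees the (normalised) cross-domain residue, not every raw event variant.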
And BTW, unlike the prevailing approach of today, I’m clearly saying that when unforeseen (or not previously experienced) scenarios appear within a domain, they’re not just kicked up to the OSS – the domains are stoic enough to deal with the situation inside the domain.
News last week of ZTE ceasing major operations due to US embargoes has taken another turn.
President Trump has now tweeted, “President Xi of China, and I, are working together to give massive Chinese phone company, ZTE, a way to get back into business, fast. Too many jobs in China lost. Commerce Department has been instructed to get it done!”
What will be the next twist in this saga?
Sigma Systems announced a major deal with Indonesia’s leading mobile network operator, Telkomsel.
With more than 190 million customers, Telkomsel is currently the largest mobile operator in Indonesia. Telkomsel has consistently implemented the latest mobile technology and was the first to commercially launch 4G LTE mobile services in the country. Entering the digital era, Telkomsel continues to expand its digital business to incorporate advertising, lifestyle, mobile financial services, and Internet of Things.
In support of their digital mandate, Telkomsel has selected Sigma Systems as a partner, establishing Sigma Catalog as the central enterprise catalog to underpin their evolving business.
“We are pleased to partner with Sigma to deploy a B/OSS platform that enables the rapid creation of personalized, micro-segmented offers to our customers. Sigma’s agile delivery methodology and product-centric approach ultimately supports Telkomsel’s mission of building a Digital Indonesia,” said Montgomery Hong, CIO at Telkomsel.
Sigma Systems CEO, Tim Spencer, commented: “Telkomsel is at the forefront of digital transformation in the region, and recognizes the critical role a catalog-driven solution plays in accelerating the creation, selling and delivery of innovative and deeply personalized market offerings. Sigma is honored to work with Indonesia’s leading mobile operator as they transition into a truly digital business.”
As a result of an export denial order by the U.S. Department of Commerce’s Bureau of Industry and Security (BIS), ZTE has opted to cease all major operating activities and trading of shares on the Hong Kong Stock Exchange.
The ban on ZTE comes as a result of selling products to Iran. It prevents ZTE from purchasing components (eg semiconductors), software (eg licensed components of Android OS) or technology from US manufacturers. It impacts ZTE products from smartphones to routing and switching gear.
Huawei is also under investigation by the US Department of Justice.
This could have widespread ramifications for the telecommunications industry, as operators will no longer be able to source replacement parts or upgrades. You could speculate that wholesale network replacement projects will soon follow, as well as trickle-down effects into OSS/BSS.
Note that I’m not just talking about ZTEsoft OSS/BSS here. I’m talking about all the other OSS/BSS that will need to be updated as a result of the seemingly inevitable ZTE equipment change-out. And I’m talking about the secondary, tertiary, etc impacts as well as all subsequent ripples. It starts with 5G but the ramifications could be much bigger over the next few years.
This is the third in a series describing the process of finding the right OSS solution for your specific needs and getting estimated pricing to help you build a business case.
The first post described the overall OSS selection process we use. The second described the way we poll the market and prepare a short-list of OSS products / vendors based on current capabilities.
Once you’ve prepared the short-list it’s time to get into specifics. We generally do this via a PoC (Proof of Concept) phase with the short-listed suppliers. We have a few very specific principles when designing the PoC:
- We want it to reflect the operator’s context so that they can grasp what’s being presented (which can be a challenge when a vendor runs their own generic demos). This “context” is usually in the form of using the operator’s device types, naming conventions, service types, etc. It also means setting up a network scenario that is representative of the operator’s, which could be a hypothetical model, a small segment of a real network, lab model or similar
- PoC collateral must clearly describe the PoC and related context. It should clearly identify the important scenarios and selection criteria. Ideally it should logically complement the collateral provided in the previous step (ie the requirement gathering)
- We want it to focus on the most important conditions. If we take the 80/20 rule as a guide, we’ll quickly identify the most common service types, devices, configurations, functions, reports, etc that we want to model
- Identify efficacy across those most important conditions. Don’t just look for the functionality that implements those conditions, but also the speed at which they can be done at a scale required by the operator. This could include bulk load or processing capabilities and may require simulators (or real integrations – see below) to generate volume
- We want it to be as simple as is feasible so that it minimises the effort required of both suppliers and operators
- Consider a light-weight integration if possible. One of the biggest challenges with an OSS is getting data in and out. If you can get a rapid integration with a real network (eg a microservice, SNMP traps, syslog events or similar) then it will give an indication of integration challenges ahead. However, note the previous point as it might be quite time-consuming for both operator and supplier to set up a real-time integration
- Take note of the level of resourcing required by each supplier to run the PoC (eg how many supplier staff, server scaling, etc.). This will give an indication of the level of resourcing the operator will need to allocate for the actual implementation, including organisational change management factors
- Attempt to offer PoC platform consistency so that all suppliers are on a level playing field, which might be through designing the PoC on common devices or topologies with common interfaces. You may even look to go the opposite way if you think the rarity of your conditions could be a deal-breaker
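To illustrate the light-weight integration point above, here’s a minimal sketch (illustrative only – a real PoC would point network elements at this host) of receiving syslog events over UDP and parsing the RFC 3164 priority field, the kind of rapid, read-only feed that can flag integration challenges early:

```python
import socket

# Minimal "light-weight integration" sketch: receive a syslog
# datagram over UDP and split out its facility/severity/message.

def parse_syslog(raw):
    """Split a syslog datagram into (facility, severity, message)."""
    text = raw.decode("utf-8", errors="replace")
    pri = int(text[1:text.index(">")])   # "<PRI>" prefix per RFC 3164
    return pri // 8, pri % 8, text[text.index(">") + 1:]

# Stand-alone demo: loop a sample event through a local UDP socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))              # ephemeral port for the demo
port = sock.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"<30>interface ge-0/0/1 up", ("127.0.0.1", port))

facility, severity, message = parse_syslog(sock.recvfrom(1024)[0])
sock.close()
sender.close()
```

Priority 30 decodes to facility 3 (daemon) and severity 6 (informational) – a few dozen lines like this can feed real network events into a PoC without a heavyweight integration effort.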
Note that we tend to scale the size/complexity/reality of the PoC to the scale of project budget out of consideration of vendor and operator alike. If it’s a small project / budget, then we do a light PoC. If it’s a massive transformation, then the PoC definitely has to go deeper (ie more integrations, more scenarios, more data migration and integrity challenges, etc)…. although ultimately our customers decide how deep they’re comfortable in going.
Best of luck and feel free to contact us if we can assist with the running of your OSS PoC.
In yesterday’s post, we talked about how to estimate OSS pricing. One of the key pillars of the approach was to first identify a short-list of vendors / integrators best-suited to implementing your specific OSS, then working closely with them to construct a pricing model.
Finding the right vendor / integrator can be a complex challenge. There are dozens, if not hundreds, of OSS / BSS solutions to choose from and there are rarely like-for-like comparators. There are some generic comparison tools such as Gartner’s Magic Quadrant, but there’s no way that they can cater for the nuanced requirements of each OSS operator.
Okay, so you don’t want to hear about problems. You want solutions. Well today’s post provides a description of the approach we’ve used and refined across the many product / vendor selection processes we’ve conducted with OSS operators.
We start with a short-listing exercise. You won’t want to deal with dozens of possible suppliers. You’ll want to quickly and efficiently identify a small number of candidates that have capabilities that best match your needs. Then you can invest a majority of your precious vendor selection time in the short-list. But how do you know the up-to-date capabilities of each supplier? We’ll get to that shortly.
For the short-listing process, I use a requirement gathering and evaluation template. You can find a PDF version of the template here. Note that the content within it is out-dated and I now tend to use a more benefit-centric classification rather than feature-centric classification, but the template itself is still applicable.
STEP ONE – Requirement Gathering
The first step is to prepare a list of requirements (as per page 3 of the PDF):
The left-most three columns in the diagram above (in white) are filled out by the operator, who classifies a list of requirements and indicates how important they are (ie mandatory, etc). The depth of requirements (column 2) is up to you and can range from specific technical details to high-level objectives. They could even take the form of user stories or intended benefits.
STEP TWO – Issue your requirement template to a list of possible vendors
Once you’ve identified the list of requirements, you want to identify a list of possible vendors/integrators that might be able to deliver on those requirements. The PAOSS vendor/product list might help you to identify possible candidates. We then send the requirement matrix to the vendors. Note that we also send an introduction pack that provides the context of the solution the OSS operator needs.
STEP THREE – Vendor Self-analysis
The right-most three columns in the diagram above (in aqua) are designed to be filled out by the vendor/integrator. The suppliers are best suited to fill out these columns because they best understand their own current offerings and capabilities.
Note that the status column is a pick-list of compliance level, where FC = Fully Compliant. See page 2 of the template for other definitions. Given that it is a self-assessment, you may choose to change the Status (vendor self-rankings) if you know better and/or ask more questions to validate the assessments.
The “Module” column identifies which of the vendor’s many products would be required to deliver on the requirement. This column becomes important later on as it will indicate which product modules are most important for the overall solution you want. It may allow you to de-prioritise some modules (and requirements) if price becomes an issue.
STEP FOUR – Compare Responses
Once all the suppliers have returned their matrix of responses, you can compare them at a high level based on the summary matrix (on page 1 of the template).
For each of the main categories, you’ll be able to quickly see which vendors are the most FC (Fully Compliant) or NC (Non-Compliant) on the mandatory requirements.
Of course you’ll need to analyse more deeply than just the Summary Matrix, but across all the vendor selection processes we’ve been involved with, there has always been a clear identification of the suppliers of best fit.
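To illustrate the comparison step, here’s a sketch of rolling vendor responses up into a summary count across mandatory requirements. FC and NC come from the template; the vendor names, requirements and the PC (Partially Compliant) code are invented for this example:

```python
from collections import Counter

# Invented example data: (requirement, priority, compliance status)
# per vendor. FC = Fully Compliant, PC = Partially Compliant,
# NC = Non-Compliant.
responses = {
    "Vendor A": [("Alarm mgmt", "Mandatory", "FC"),
                 ("Inventory", "Mandatory", "PC"),
                 ("Reporting", "Optional", "NC")],
    "Vendor B": [("Alarm mgmt", "Mandatory", "FC"),
                 ("Inventory", "Mandatory", "FC"),
                 ("Reporting", "Optional", "FC")],
}

def summarise(vendor_rows):
    """Tally compliance codes across mandatory requirements only."""
    return Counter(status for _, priority, status in vendor_rows
                   if priority == "Mandatory")

summary = {vendor: summarise(rows)
           for vendor, rows in responses.items()}
```

Even this crude tally makes the best-fit candidates jump out; the deeper per-requirement analysis then focuses on the few vendors worth the effort.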
Hopefully the process above is fairly clear. If not, contact us and we’d be happy to guide you through the process.
Netcracker Technology announced that the French operator Bouygues Telecom has extended its use of Netcracker’s Revenue Management solution. Bouygues Telecom already uses Netcracker’s solution for its mobile subscribers and plans to use the platform for fixed-line subscribers in Fall 2018.
As part of this initiative, Netcracker delivered an upgraded Revenue Management solution to Bouygues Telecom to support long-term growth strategies. The highly configurable and scalable Netcracker BSS platform will help Bouygues Telecom streamline core customer-facing processes and reduce operational costs.
“Throughout our partnership, Netcracker has validated its dedication and ability to continuously support our mission-critical billing operations,” said Alain Moustard, Chief Information Officer at Bouygues Telecom. “We extended our partnership with Netcracker and its next-generation Revenue Management solution to meet the digitalization transformation requirements from our market.”
“Leveraging scalable BSS is critical for service providers to deliver and bill for the highly digital and innovative services that customers expect today,” said Roni Levy, General Manager of Europe at Netcracker. “We are excited to continue our partnership with Bouygues Telecom as it strives to provide its customers with the most innovative, user-friendly digital services.”
Nokia has acquired SpaceTime Insight to expand its Internet of Things (IoT) portfolio and IoT analytics capabilities, and accelerate the development of new IoT applications for key vertical markets.
Based in San Mateo, California, with offices in the U.S., Canada, U.K., India and Japan, SpaceTime Insight provides machine learning-powered analytics and IoT applications for some of the world’s largest transportation, energy and utilities organizations, including Entergy, FedEx, NextEra Energy, Singapore Power and Union Pacific Railroad. Its machine learning models and other advanced analytics, designed specifically for asset-intensive industries, predict asset health with a high degree of accuracy and optimize related operations. As a result, SpaceTime Insight’s applications help customers reduce cost and risk, increase operational efficiencies, reduce service outages and more.
The acquisition supports Nokia’s software strategy by bringing SpaceTime Insight’s sales expertise and proven track record in IoT application development, machine learning and data science to the Nokia Software IoT product unit. It will strengthen Nokia’s IoT software portfolio and IoT analytics capabilities, and accelerate the development of Nokia’s IoT offerings to deliver high-value IoT applications and services to new and existing customers.
The addition of SpaceTime Insight will also broaden the company’s ability to deliver new, advanced applications for key vertical markets, including energy, logistics, transportation and utilities.
Paul Lau, Chief Grid Strategy and Operations Officer at Sacramento Municipal Utility District, said: “We’ve partnered with SpaceTime to help us be more responsive, more efficient and ultimately able to deliver more value to our customers. Combining their innovative solutions with Nokia’s world-class portfolio will provide customers with powerful new tools to better manage assets, maximize efficiencies and deliver new capabilities.”
Bhaskar Gorti, president of Nokia Software, said: “Adding SpaceTime to Nokia Software is a strong step forward in our strategy, and will help us deliver a new class of intelligent solutions to meet the demands of an increasingly interconnected world. Together, we can empower customers to realize the full value of their people, processes and assets, and enable them to deliver rich, world-class digital experiences.”
SpaceTime Insight and its CEO Rob Schilling will join the IoT product unit within the Nokia Software business group.
Rob Schilling, CEO of SpaceTime Insight, said: “Today marks a transformational moment for SpaceTime, and I’m delighted to join forces with one of the world’s top organizations – a global brand that is reshaping the future of networking and intelligent software. I am excited for this incredible opportunity to help accelerate and scale Nokia’s IoT business and provide a new class of next-generation IoT solutions customers cannot find anywhere else.”
“Sometimes a simple question deserves a simple answer: “A piece of string is twice as long as half its length”. This is a brilliant answer… if you have its length… Without a strategy, how do you know if it is successful? It might be prettier, but is it solving a defined business problem, saving or making money, or fulfilling any measurable goals? In other words: can you measure the string?”
Carmine Porco here.
I was recently asked by a university student how to obtain OSS pricing for a paper-based assignment. To make things harder, the target client was to be a tier-2 telco with a small SDN / NFV network.
As you probably know already, very few OSS providers make their list prices known. The few vendors that do tend to focus on the high volume, self-serve end of the market, which I’ll refer to as “Enterprise Grade.” I haven’t heard of any “Telco Grade” OSS suppliers making their list prices available to the public.
There are so many variables when finding the right OSS for a customer’s needs and the vendors have so much pricing flexibility that there is no single definitive number. There are also rarely like-for-like alternatives when selecting an OSS vendor / product. Just like the fabled piece of string, the best way is to define the business problem and get help to measure it. In the case of OSS pricing, it’s to design a set of requirements and then go to market to request quotes.
Now, I can’t imagine many vendors being prepared to invest their valuable time in developing pricing based on paper studies, but I have found them to be extremely helpful when there’s a real buyer. I’ll caveat that by saying that if the customer (eg service provider) you’re working with is prepared to invest the time to help put a list of requirements together then you have a starting point to approach the market for customised pricing.
We’ve run quite a few of these vendor selections and have refined the process along the way to streamline for vendors and customers alike. Here’s a template we’ve used as a starting point for discussions with customers:
Note that each customer will end up with a different mapping of the diagram above to suit their specific needs. We also have existing templates (eg Questionnaire, Requirement Matrix, etc) to support the selection process where needed.
Of course, we’d also be delighted to help if you need assistance to develop an OSS solution, get OSS pricing estimates, develop a workable business case and/or find the right OSS vendor/products for you.
For network operators, our OSS and BSS touch most parts of the business. The network, and the services it carries, are core business, so a majority of business units will be contributing to that core business. As such, our OSS and BSS provide many of the metrics used by those business units.
This is a privileged position to be in. We get to see what indicators are most important to the business, as well as the levers used to control those indicators. From this privileged position, we also get to see the aggregated impact of all these KPIs.
In your years of working on OSS / BSS, how many times have you seen key business indicators that conflict between business units? They generally become more apparent on cross-team projects, where the objectives of one internal team directly conflict with those of another internal team (or teams).
In theory, a KPI tree can be used to improve consistency and ensure all business units are pulling towards a common objective… [but what if, like most organisations, there are many objectives? Does that mean you have a KPI forest and the trees end up fighting for light?]
But here’s a thought… Have you ever seen an OSS/BSS suite with the ability to easily build KPI trees? I haven’t. I’ve seen thousands of standalone reports containing myriad indicators, but never a consolidated roll-up of metrics. I have seen a few products that show operational metrics rolled-up into a single dashboard, but not business metrics. They appear to have been designed to show an information hierarchy, but not necessarily with KPI trees in mind specifically.
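To make the idea concrete, here’s a sketch of what base KPI-tree functionality might look like: leaf metrics rolling up through weighted branches to a single business indicator. The metric names, weights and values are purely illustrative:

```python
# Illustrative KPI tree: leaves carry normalised metric values,
# branches carry (child, weight) pairs that roll up to a parent score.

def rollup(node):
    """Recursively compute a node's score from its weighted children."""
    if "value" in node:                     # leaf metric
        return node["value"]
    return sum(weight * rollup(child)
               for child, weight in node["children"])

kpi_tree = {
    "name": "Customer experience",
    "children": [
        ({"name": "Mean time to repair",  "value": 0.8}, 0.5),
        ({"name": "Order fallout rate",   "value": 0.6}, 0.3),
        ({"name": "First-call resolution", "value": 0.9}, 0.2),
    ],
}

score = rollup(kpi_tree)  # 0.5*0.8 + 0.3*0.6 + 0.2*0.9 = 0.76
```

With a structure like this, conflicting team-level KPIs become visible as siblings under a shared parent, rather than living in thousands of disconnected standalone reports.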
What do you think? Does it make sense for us to offer KPI trees as base product functionality from our reporting modules? Would this functionality help our OSS/BSS add more value back into the businesses we support?
“You cannot simply have your end users give some specifications then leave while you attempt to build your new system. They need to be involved throughout the process. Ultimately, it is their tool to use.”
José Manuel De Arce here.
As an OSS consultant and implementer, I couldn’t agree more with José’s quote above. José, by the way is an OSS Manager at Telefónica, so he sits on the operator’s side of the implementation equation. I’m glad he takes the perspective he does.
Unfortunately, many OSS operators are so busy with operations, they don’t get the time to help with defining and tuning the solutions that are being built for them. It’s understandable. They are measured by their ability to keep the network (and services) running and in a healthy state.
From the implementation side, it reminds me of this old comic:
The comic reminds me of OSS implementations for two reasons:
- Without ongoing input from operators, you can only guess at how the new tools could improve their efficacy and mitigate their challenges
- Without ongoing involvement from operators, they don’t learn the nuances of how the new tool works or the scenarios it’s designed to resolve… what I refer to as an OSS apprenticeship
I’ve seen it time after time on OSS implementations (and other projects for that matter) – [As a customer] you get back what you put in.
OSS tends to be a low-volume, high-customisation, high-uniqueness product, so it has a significantly different selling proposition than most “box drop” products.
Can you imagine if OSS salespeople used any of these “great deal” propositions (as described by Gary Halbert)?
“I’m going out of business.”
“I just had a fire and I’m having a fire sale.”
“I’m crazy.” (all used car dealers)
“I owe taxes and I’ve got to raise money fast to pay them.”
“I’ve lost my lease and I’ve got to sell this merchandise right away before it gets thrown into the street.”
“I’ve got to make space for some new merchandise that is arriving soon so I will sell you what I have on hand real cheap.”
Did the image of an OSS salesperson saying any of those, especially the first, bring a smile to your face?
Anyway, Gary’s article goes on to say, “…I wrote: “and if you can find a way to use it, you can dramatically increase your sales volume.”
Now, compare that to this: “and if you can find a way to use it, you can make yourself a bushel of money!”
Isn’t that a lot more powerful? You bet! The words “dramatically increase your sales volume” do not even begin to conjure up the visual imagery of “a bushel of money.””
From what I’ve experienced on the client side of the buying equation, OSS selling propositions seem to be driven by functionality. I call it the functionality arms-race, where vendors compete on functionality rather than efficacy. In a way, it’s the “sales volume” variant mentioned by Gary above.
The other approach that does align more closely with the “bushel of money” variant is the cost-out discussion. It’s the, “if you implement this OSS, you’ll be able to reduce head-count in your operations team,” argument. That’s definitely important for any operator that sees their OSS as a cost-centre. However, it’s a “save a bushel of money” argument rather than the more powerful “make a bushel of money” argument.
In reply to a recent post, James Crawshaw of Light Reading wrote, “OSS/BSS represents around 2-3% of revenue and takes up around 10% of capex.” I initially read this as OSS/BSS contributing 2-3% of revenue (ie the higher the percentage the better). However, James clarified that our IT/OSS/BSS tend to consume 2-3% of revenue (ie the lower the percentage the better).
Can you imagine how these tiny wording/perspective differences could change the credibility of the whole OSS/BSS industry? As soon as our OSS make a bushel of money, then the selling proposition becomes a whole lot stronger.
We all know the story of Goldilocks and the Three Bears where Goldilocks chooses the option that’s not too heavy, not too light, but just right.
The same model applies to OSS – finding / building a solution that’s not too heavy, not too light, but just right. To be honest, we probably tend to veer towards the too heavy, especially over time. We put more complexity into our architectures, integrations and customisations… because we can… which end up burdening us and our solutions.
A perfect example is AT&T offering its ECOMP project (now part of the even bigger Linux Foundation Network Fund) up for open source in the hope that others would contribute and help mature it. As a fairytale analogy, it’s an admission that it’s too heavy even for one of the global heavyweights to handle by itself.
The ONAP Charter has some great plans including, “…real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.”
These are fantastic ambitions to strive for, especially at the Papa Bear end of the market. I have huge admiration for those who are creating and chasing bold OSS plans. But what about the large majority of customers that fall into the Goldilocks category? Is our field of vision so heavy (ie so grand and so far into the future) that we’re missing the opportunity to solve the business problems of our customers and make a difference for them with lighter solutions today?
TM Forum’s Digital Transformation World is due to start in just over two weeks. It will be fascinating to see how many of the presentations and booths consider the Goldilocks requirements. There probably won’t be many, because it’s just not as sexy a story as one that mentions heavy solutions like policy-driven orchestration, zero-touch automation, AI / ML / analytics, self-scaling / self-healing networks, etc. [I should also note that I fall into the category of loving to listen to the heavy solutions too!!]
Netcracker Technology announced that Charter Communications has executed a long-term extension of its BSS and professional services relationship with Netcracker as part of its large-scale standardization program.
Charter, the second largest cable service provider in the United States which operates under the Spectrum brand, provides services to more than 25 million business and residential customers across the country. Following the acquisition of Time Warner Cable and Bright House Networks in 2016, Charter is standardizing and simplifying core customer-facing services and processes across the combined business.
Netcracker’s BSS solution provides Charter with CRM, ordering and billing capabilities for approximately half of its expanded customer base. As part of the multiyear extension, Charter will continue to leverage Netcracker’s solution and services to reduce operating costs, support increasing customer demands for complex digital services, and improve and standardize customer-facing services and processes.
“Netcracker’s Revenue Management and CRM solutions give us the flexibility and functionality needed to deliver the best possible experience to our customers as they demand more digital services,” said Mike Ciszek, Senior Vice President of Billing Operations at Charter. “We will continue leveraging Netcracker’s offerings as a means to standardize core customer processes.”
“The cable market is rapidly evolving, with new expectations around the delivery of high-value digital services, increasingly personalized multiservice bundles, and more efficient customer interactions,” said Christopher Finn, General Manager of North America at Netcracker. “As one of the largest cable operators in the United States, these capabilities are of the utmost importance for Charter. We are excited to extend our relationship with Charter and help the company meet these mission-critical objectives.”
One of the challenges facing OSS / BSS product designers is which platform/s to tie the roadmap to.
Let’s use a couple of examples.
In the past, most outside plant (OSP) designs were done in AutoCAD, so it made sense to build OSP design tools around AutoCAD. However in making that choice, the OSP developer becomes locked into whatever product directions AutoCAD chooses in future. And if AutoCAD falls out of favour as an OSP design format, the earlier decision to align with it becomes an even bigger challenge to work around.
A newer example is for supply chain related tools to leverage Salesforce. The tools and marketplace benefits make great sense as a platform to leverage today. But it also represents a roadmap lock-in that developers hope will remain beneficial in the years / decades to come.
I haven’t seen an OSS / BSS product built around Facebook, but there are plenty of other tools that have been. Given Facebook’s recent travails, I wonder how many product developers are feeling at risk due to their reliance on the Facebook platform?
The same concept extends to the other platforms we choose – operating systems, programming languages, messaging models, data storage models, continuous development tools, etc, etc.
Anecdotally, it seems that new platforms are coming online at an ever greater rate, so the chance / risk of platform churn is surely much higher than when AutoCAD was chosen back in the 1980s – 1990s.
The question becomes how to leverage today’s platform benefits whilst minimising dependency and allowing easy swap-out. There’s too much nuance to answer this definitively, but one key strategy is to keep core logic models outside the dependent platform (ie ensuring business logic isn’t built into AutoCAD or similar).
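The “logic outside the platform” strategy is essentially a thin-adapter architecture. Here’s a minimal sketch of the idea, using an invented OSP design example (the function names, the slack ratio and the canvas classes are all hypothetical, not any real AutoCAD or vendor API):

```python
from abc import ABC, abstractmethod

# Platform-neutral core: the OSP design logic lives here, with no knowledge
# of AutoCAD, Salesforce or any other host platform.
def cable_length_required(spans_m, slack_ratio=0.03):
    """Pure business logic: total cable length with a slack allowance."""
    return sum(spans_m) * (1 + slack_ratio)

# Thin adapter layer: only this part knows about the platform, so swapping
# platforms means rewriting a small adapter, not the core logic.
class DesignCanvas(ABC):
    @abstractmethod
    def draw_route(self, spans_m): ...

class AutoCadCanvas(DesignCanvas):   # hypothetical AutoCAD-facing adapter
    def draw_route(self, spans_m):
        return f"AutoCAD polyline over {len(spans_m)} spans"

class WebCanvas(DesignCanvas):       # drop-in replacement if the platform churns
    def draw_route(self, spans_m):
        return f"SVG path over {len(spans_m)} spans"

spans = [120.0, 85.0, 60.0]
print(round(cable_length_required(spans), 2))
print(AutoCadCanvas().draw_route(spans))
```

If AutoCAD (or Salesforce, or Facebook) falls out of favour, only the adapter classes need rewriting; the `cable_length_required`-style logic survives the platform churn untouched.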