OSS Best Practices, cough, splutter

“Organizations that seek transformations frequently bring in an army of outside consultants [or implementers in the case of OSS] who tend to apply one-size-fits-all solutions in the name of “best practices.” Our approach to transforming our respective organizations is to rely instead on insiders — staff who have intimate knowledge about what works and what doesn’t in their daily operations.”
Behnam Tabrizi, Ed Lam, Kirk Gerard and Vernon Irvin here

I don’t know about you, but the term “best practices” causes me to make funny noises. A cross between a laugh, cough, derisive snicker and chortle. This noise isn’t always audible, but it definitely sounds inside my head any time someone mentions best practices in the field of OSS.

There are two reasons for my bemusement. No, actually there’s a third, which I’ll share as the story that follows. The first two reasons are:

  • Every OSS project is so different that chaos theory applies. I’m all for systematising aspects of OSS projects to create repeatable building blocks (like TM Forum does with tools such as eTOM). But as much as I build and use repeatable approaches, I know they always have to be customised for each client situation
  • “Best practices” becomes a mind-set that can prevent the outsiders / implementers from listening to the insiders

Luckily, out of all the OSS projects I’ve worked on, there’s only been one where the entire implementation team has stuck with their “best practices” mantra throughout the project.

The team used this phrase as the intellectual high-ground over their OSS-novice clients. To paraphrase their words, “This is best practice. We’ve done it this way successfully for dozens of customers in the past, so you must do it our way.” Interestingly, this project was the most monumental failure of any OSS I’ve worked on.

The implementation team’s organisation lost out because the project was halted part-way through. The client lost out because they had almost no OSS functionality to show for their resource investment.

The project was canned largely because the implementation company wasn’t prepared to budge from their “best practices” thinking. To be honest, their best practices approaches were quite well formed. The only problem was that the changes they were insisting on (to accommodate their 10-person team of outsiders) would’ve caused major re-organisation of the client’s 100,000-person company of insiders. The outsiders / implementers either couldn’t see that or were so arrogant that they wanted the client to bend anyway.

That was a failure on their part no doubt, but not the monumental failure. I could see the massive cultural disconnect between client and implementer very early. I could even see the way to fix it (I believe). I was their executive advisor (the bridge between outsiders and insiders), so the monumental failure was mine. It wasn’t through lack of trying, but I was unable to persuade either party to consider the other’s perspective.

Without compromise, the project became compromised.

Using my graphene analogy to help fix OSS data

By now I’m sure you’ve heard about graph databases. You may’ve even read my earlier article about the benefits graph databases offer when modelling network inventory when compared with relational databases. But have you heard the Graphene Database Analogy?

I equate OSS data migration and data quality improvement with graphene, which is made up of single layers of carbon atoms in hexagonal lattices (planes).

The graphene data model

There are four concepts of interest with the graphene model:

  1. Data Planes – Preparing and ingesting data from siloes (eg devices, cards, ports) is relatively easy, ie building planes of data (black carbon atoms and bonds above)
  2. Bonds between planes – It’s the interconnections between siloes (eg circuits, network links, patch-leads, joints in pits, etc) that are usually trickier. So I envisage alignment of nodes (on the data plane or graph, not necessarily network nodes) as equivalent to bonds between carbon atoms on separate planes (red/blue/aqua lines above).
    Alignment comes in many forms:

    1. Through spatial alignment (eg a joint and pit have the same geospatial position, so the joint is probably inside the pit)
    2. Through naming conventions (eg same circuit name associated with two equipment ports)
    3. Various other linking-key strategies
    4. Nodes on each data plane can potentially be snapped together (either by an operator or an algorithm) if you find consistent ways of aligning nodes that are adjacent across planes
  3. Confidence – I like to think about data quality in terms of confidence-levels. Some data is highly reliable, other data sets less so. For example, if you have two equipment ports with a circuit name identifier, then your confidence level might be 4 out of 4* because you know the exact termination points of that circuit. Conversely, let’s say you just have a circuit with a name that follows a convention of “LocA-LocB-speed-index-type” but has no associated port data. In that case you only know that the circuit terminates at LocationA and LocationB, but not which building, rack, device, card or port, so your confidence level might only be 2 out of 4 (a simple code sketch of this alignment and confidence scoring appears below).
  4. Visualisation – Having these connected planes of data allows you to visualise heat-map confidence levels (and potentially gaps in the graph) on your OSS data, thus identifying where data-fix (eg physical audits) is required

* the example of a circuit with two related ports above might not always achieve 4 out of 4 if other checks are applied (eg if there are actually 3 ports with that associated circuit name in the data but we know it should represent a two-ended patch-lead).

Note: The diagram above (from graphene-info.com) shows red/blue/aqua links between graphene layers as capturing hydrogen, but is useful for approximating the concept of aligning nodes between planes
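
To make the analogy a little more concrete, below is a minimal Python sketch of the node / bond / confidence idea. Everything in it (the record names, the circuit naming convention, the one-metre spatial tolerance and the 0-4 confidence scale) is a fabricated assumption for illustration only, not a reference to any particular OSS product or data set.

```python
# A minimal, illustrative sketch of the "graphene" data model described above.
# All data, naming conventions and the 0-4 confidence scale are hypothetical.
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Node:
    """A record on one data plane (eg a port, circuit, joint or pit)."""
    plane: str                  # which silo / plane the record came from
    key: str                    # identifier within that plane
    attrs: dict = field(default_factory=dict)

@dataclass
class Bond:
    """An inferred link ("bond") between nodes on different planes."""
    a: Node
    b: Node
    method: str                 # how the alignment was made
    confidence: int             # 0 (no trust) .. 4 (fully corroborated)

def align_by_name(ports, circuits):
    """Bond a circuit to any port that references the circuit's name."""
    bonds = []
    for c in circuits:
        matches = [p for p in ports if p.attrs.get("circuit_name") == c.key]
        conf = 4 if len(matches) == 2 else 2   # both ends known => full confidence
        for p in matches:
            bonds.append(Bond(c, p, "naming-convention", conf))
    return bonds

def align_by_position(joints, pits, tolerance_m=1.0):
    """Bond a joint to a pit when their geospatial positions coincide."""
    bonds = []
    for j in joints:
        for p in pits:
            dx = j.attrs["x"] - p.attrs["x"]
            dy = j.attrs["y"] - p.attrs["y"]
            if hypot(dx, dy) <= tolerance_m:
                bonds.append(Bond(j, p, "spatial", 3))
    return bonds

# Example planes (fabricated data)
ports = [Node("port", "DEV1/1/1", {"circuit_name": "SYD-MEL-10G-001-ETH"}),
         Node("port", "DEV2/3/7", {"circuit_name": "SYD-MEL-10G-001-ETH"})]
circuits = [Node("circuit", "SYD-MEL-10G-001-ETH"),
            Node("circuit", "SYD-BNE-1G-042-ETH")]   # no port data => a gap on the graph
joints = [Node("joint", "J-1001", {"x": 100.2, "y": 55.1})]
pits = [Node("pit", "P-2001", {"x": 100.5, "y": 55.0})]

for bond in align_by_name(ports, circuits) + align_by_position(joints, pits):
    print(f"{bond.a.key} <-> {bond.b.key} via {bond.method}, confidence {bond.confidence}/4")
```

In a production OSS you would hold these nodes and bonds in a graph database rather than in-memory lists, with the confidence score carried as an edge property. That edge property is also what makes the heat-map visualisation in point 4 relatively straightforward to produce.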

All OSS products are excellent. So where’s the advantage?

“You don’t get differential advantage from your products, it’s from the way you speak to and relate to your customers. All products are excellent these days.”

The quote above paraphrases Malcolm McDonald from a podcast about his book, “Malcolm McDonald on Value Propositions: How to Develop Them, How to Quantify Them.”

This quote had nothing to do with OSS specifically, but consider for a moment how it relates to OSS.

Consider it also in relation to the diagram below.
Long-tail features

Let’s say the x-axis on this graph shows a list of features within any given OSS product. And the y-axis shows a KPI that measures the importance of each feature (eg number of uses, value added by using that feature, etc).

As Professor McDonald indicates, all OSS products are excellent these days. And all product vendors know what the most important features are. As a result, they all know they must offer the features that appear on the left-side of the image. Since all vendors do the left-side, it seems logical to differentiate by adding features to the far-right of the image, right?

Well actually, there’s almost no differential advantage at the far-right of the image.

Now what if we consider the second part of Prof McDonald’s statement on differential advantage, “…it’s from the way you speak to and relate to your customers.”

To me it implies that the differential advantage in OSS is not in the products, but in the service wrapper that is placed around them. You might be saying, “but we’re an OSS product company. We don’t want to get into services.” As described in this earlier post, there are two layers of OSS services.

One of the layers mentioned is product-related services (eg product installation, product configuration, product release management, license management, product training, data migration, product efficiency / optimisation, etc). None of these items would appear as features on the long-tail diagram above. Perhaps as a result, it’s these items that are often less than excellent in vendor offerings. It’s often in these items where complexity, time, cost and risk are added to an OSS project, increasing stress for clients.

If Prof McDonald is correct and all OSS products are excellent, then perhaps it’s in the services wrapper where the true differential advantage is waiting to be unlocked. This will come from spending more time relating to customers than cutting more code.

What if we take it a step further? What if we seek to better understand our clients’ differential advantages in their markets? Perhaps this is where we will unlock an entirely different set of features that will introduce new bands on the left-side of the image. I still feel that amazing OSS/BSS can give carriers significant competitive advantage in their marketplace. And the converse can give significant competitive disadvantage!

Are you desperately seeking to increase your OSS’s differential advantage? Contact us at Passionate About OSS to help map out a way.

Competition closing on 23 March

In case you didn’t notice, we launched a competition yesterday. It’s a 5 question survey and respondents are in with a chance of winning 1 of 5 copies of my book, Mastering Your OSS.

Not only that, but it’s your chance to give back to the next generation of OSS experts coming through. You’ll achieve this by sharing your experiences on your earliest OSS projects. We’ll then turn those answers into e-books that we’ll make available for free here on PAOSS.

I’m really looking forward to reading the answers from you, the exceptionally talented OSS people who I know read this blog!

How did your first OSS project make you feel?

Can you remember how you felt during the initial weeks of your first OSS project?

I can vividly recall how out of my depth I felt on my first OSS project. I was in my 20s and had relocated to a foreign country for the first time. I had a million questions (probably more actually). The challenges seemed immense (they were). I was working with talented and seasoned telco veterans who had led as many as 500 people, but had little OSS experience either. None of us had worked together before. We were all struggling. We were all about to embark on an incredibly steep learning curve, by far the steepest of my career to date.

Now through PAOSS I’m looking to create a new series of free e-books to help those starting out on their own OSS journey.

But this isn’t about my experiences. That would be a perspective limited to just one unique journey in OSS. No, this survey is all about you. I’d love to capture your feelings, experiences, insights and opinions to add much greater depth to the material.

Are you up for it? Are you ready to answer just five one-line questions to help the next generation of OSS experts?

We’ve created a survey below that closes on 23 March 2019. The best 5 responses will win a free copy of my physical book, “Mastering Your OSS” (valued at US$49.97+P/H).

Click on the link below to start the 5 question survey:

https://passionateaboutoss.com/how-did-your-first-oss-make-you-feel

Can OSS/BSS assist CX? We’re barely touching the surface

Have you ever experienced an epic customer experience (CX) fail when dealing with a network service operator, like the one I described yesterday?

In that example, the OSS/BSS, and possibly the associated people / process, had a direct impact on poor customer experience. Admittedly, that 7 truck-roll experience was a number of years ago now.

We have fewer excuses these days. Smart phones and network connected devices allow us to get OSS/BSS data into the field in ways we previously couldn’t. There’s no need for printed job lists, design packs and the like. Our OSS/BSS can leverage these connected devices to give far better decision intelligence in real time.

If we look to the logistics industry, we can see how parcel tracking technologies help to automatically provide status / progress to parcel recipients. We can see how recipients can also modify their availability, which automatically adjusts logistics delivery sequencing / scheduling.

This has multiple benefits for the logistics company:

  • It increases first time delivery rates
  • Improves the ability to automatically notify customers (eg email, SMS, chatbots)
  • Decreases customer enquiries / complaints
  • Decreases the amount of time the truck drivers need to spend communicating back to base and with clients
  • But most importantly, it improves the customer experience

Logistics is an interesting challenge for our OSS/BSS due to the sheer volume of customer interaction events handled each day.

But there’s another area that excites me even more, where CX is improved through improved data quality:

  • It’s the ability for field workers to interact with OSS/BSS data in real-time
  • To see the design packs
  • To compare with field situations
  • To update the data where there is inconsistency.

Even more excitingly, to introduce augmented reality to assist with decision intelligence for field work crews:

  • To provide an overlay of what fibres need to be spliced together
  • To show exactly which port a patch-lead needs to connect to
  • To show where an underground cable route goes
  • To show where a cable runs through trayway in a data centre
  • etc, etc

We’re barely touching the surface of how our OSS/BSS can assist with CX.

The 7 truck-roll fail

In yesterday’s post we talked about the cost of quality. We talked about examples of primary, secondary and tertiary costs of bad data quality (DQ). We also highlighted that the tertiary costs, including the damage to brand reputation, can be one of the biggest factors.

I often cite an example where it took 7 truck rolls to connect a service to my house a few years ago. This provider was unable to provide an estimate of when their field staff would arrive each day, so it meant I needed to take a full day off work on each of those 7 occasions.

The primary cost factors are fairly obvious, for me, for the provider and for my employer at the time. On the direct costs alone, it would’ve taken many months, if not years, for the provider to recoup their install costs. Most of that cost was attributable to the OSS/BSS and associated processes.

Many of those 7 truck rolls were a direct result of having bad or incomplete data:

  • They didn’t record that it was a two storey house (and therefore needed a crew with “working at heights” certification and gear)
  • They didn’t record that the install was at a back room at the house (and therefore needed a higher-skilled crew to perform the work)
  • The existing service was installed underground, but they had no records of the route (they went back to the designs and installed a completely different access technology because replicating the existing service was just too complex)

Customer Experience (CX), aka brand damage, is the greatest of all cost of quality factors when you consider studies such as those mentioned below.

“A dissatisfied customer will tell 9-15 people about their experience. Around 13% of dissatisfied customers tell more than 20 people.”
White House Office of Consumer Affairs
(according to customerthink.com).

Through this page alone, I’ve told a lot more than 20 (although I haven’t mentioned the provider’s name, so perhaps it doesn’t count! 🙂  ).

But the point is that my 7 truck-roll example above could’ve been avoided if the provider’s OSS/BSS gave better information to their field workers (or perhaps enforced that the field workers populated useful data).

We’ll talk a little more tomorrow about modern Field Services tools and how our OSS/BSS can impact CX in a much more positive way.

Calculating the cost of quality

This week of posts has followed the theme of the cost of quality. Data quality that is.

But how do you calculate the cost of bad data quality?

Yesterday’s post mentioned starting with PNI (Physical Network Inventory). PNI is the cables, splices / joints, patch panels, ducts, pits, etc. This data doesn’t tend to have a programmable interface to electronically reconcile with. This makes it prone to errors of many types – mistakes in manual entry, reconfigurations that are never documented, assets that are lost or stolen, assets that are damaged or degraded, etc.

Some costs resulting from poor PNI data quality (DQ) can be considered primary costs. This includes SLA breaches caused by an inability to identify a fault within an SLA window due to incorrect / incomplete / indecipherable design data. These costs are the most obvious and easy to calculate because they result in SLA penalties. If a network operator misses a few of these with tier 1 clients then this is the disaster referred to in yesterday’s post.

But the true cost of quality is in the ripple-out effects. The secondary costs. These include the many factors that result in unnecessary truck rolls. With truck rolls come extra costs including contractor costs, delayed revenues, design rework costs, etc.

Other secondary effects include:

  • Downstream data maintenance in systems that rely on PNI data
  • Code in downstream systems that caters for poor data quality, which in turn increases the costs of complexity such as:
    • Additional testing
    • Additional fixing
    • Additional curation
  • Delays in the ability to get new products to market
  • Reduced ability to accurately price products (due to variation in real costs caused by extra complexity)
  • Reduced impact of automations (due to increased variants)
  • Potential to impact Machine Learning / Artificial Intelligence engines, which rely on reliable and consistent data at scale
  • etc

There are probably more sophisticated ways to calculate the cost of quality across all these factors and more, but in most cases I just use a simple multiplier:

  • Number of instances of DQ events (eg number of additional truck rolls); multiplied by
  • A rule-of-thumb cost impact of each event (eg the cost of each additional truck roll)

Sometimes the rules-of-thumb are challenging to estimate, so I tend to err on the side of conservatism. I figure that even if the rules-of-thumb aren’t perfectly accurate, at least they produce a real cost estimate rather than just anecdotal evidence.
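
To make that multiplier concrete, here’s a tiny Python sketch of the calculation. Every event count and per-event cost below is an invented placeholder to show the arithmetic, not an industry benchmark.

```python
# Illustrative cost-of-quality estimate using the simple multiplier approach
# described above. All figures are fabricated placeholders.
dq_events = {
    # category: (DQ events per year, rule-of-thumb cost per event in $)
    "Additional truck rolls":        (400, 450),
    "SLA breaches (tier 1 clients)": (6, 25_000),
    "Design rework packages":        (120, 800),
    "Downstream data fixes":         (2_000, 35),
}

total = 0
for category, (count, unit_cost) in dq_events.items():
    cost = count * unit_cost
    total += cost
    print(f"{category:32s} {count:>6,} x ${unit_cost:>7,} = ${cost:>10,}")

print(f"\nEstimated annual cost of poor DQ: ${total:,}")
```

Even with deliberately conservative rules-of-thumb, a simple table like this gives stakeholders a tangible number to react to, which is the whole point of the exercise.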

And more importantly, there are the tertiary and less tangible costs of brand damage (also known as Customer Experience or CX or reputation damage). We’ll talk a little more about that tomorrow.

 

Waiting for the disaster to invest in the data

Have you seen OSS tools where the applications are brilliant but consigned to failure by bad data? I definitely have! I call it the data death spiral. It’s a well known fact in the industry that bad data can ruin an OSS. You know it. I know it. Everyone knows it.

But how many companies do you know that invest in data quality? I mean truly invest in it.

The status quo is not to invest in the data, but in the disaster. That is, the disaster caused by the data!

As a data nerd, I struggle to understand why that is. My only assumption to date is that we don’t adequately measure the cost of quality. Or, more to the point, the cost impact that results from bad data.

I recently attempted to model the cost of quality. My model focuses on the ripple-out impacts from poor PNI (Physical Network Inventory) quality data alone. Using conservative numbers, the cost of quality is in the millions for the first carrier I applied it to.

Why do you think operators wait for the disaster before investing in the data? What alternate techniques do you use to focus attention, and investment, on the data?

Where an absence of OSS data can still provide insights

The diagram below has some parallels with OSS. The story however is a little long before it gets to the OSS part, so please bear with me.

[Diagram: WWII analysis of where returning planes had been hit]

The diagram shows analysis that the US Navy performed during WWII on where returning planes had been hit. The theory was that they should be reinforcing the areas that received the most hits (ie the wing-tips, central part of the body, etc, as per the diagram).

Abraham Wald, a statistician, had a completely different perspective. His rationale was that hits would actually be spread fairly uniformly across the aircraft, so the blank sections on the diagram represent the areas that needed reinforcement. Why? Well, the planes that were hit there never made it home for analysis.

In OSS, this is akin to the device or EMS that has failed and is unable to send any telemetry data. No alarms are appearing in our alarm lists for those devices / EMS because they’re no longer capable of sending any.

That’s why we use heartbeat mechanisms to confirm a system is alive (and capable of sending alarms). These come in the form of pollers, pingers or just watching for other signs of life such as network traffic of any form.
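
As a rough illustration of the poller / pinger concept, here’s a bare-bones Python sketch. The hostnames, the TCP-connect probe and the timing thresholds are all assumptions for illustration; a real OSS is more likely to use ICMP, SNMP or vendor heartbeat APIs, and would feed a proper alarm pipeline rather than printing to the console.

```python
# Bare-bones heartbeat poller: raise a synthetic "loss of telemetry" alarm for
# any device / EMS that hasn't shown signs of life within the grace period.
# Hostnames and thresholds are illustrative only.
import socket
import time

DEVICES = {"ems-north.example.net": 443, "ems-south.example.net": 443}
TIMEOUT_S = 5                                   # wait this long for each probe
GRACE_S = 3 * 60                                # tolerate this much silence
last_seen = {host: time.time() for host in DEVICES}

def probe(host, port):
    """Return True if a TCP connection can be opened (a crude 'pinger')."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_S):
            return True
    except OSError:
        return False

def poll_once():
    now = time.time()
    for host, port in DEVICES.items():
        if probe(host, port):
            last_seen[host] = now
        elif now - last_seen[host] > GRACE_S:
            # The silent device can't raise an alarm for itself,
            # so the heartbeat poller raises one on its behalf.
            print(f"ALARM: loss of telemetry from {host}")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)                          # poll every minute
```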

In what other areas of OSS can an absence of data demonstrate an insight?

Telefónica and Ericsson sign AI-powered Network Ops

Telefónica and Ericsson sign AI-powered Network Operations agreement.

Ericsson and Telefónica, one of the world’s largest communications service providers, have penned a new four-to-six-year managed services deal for AI-powered Network Operations in the UK, Colombia, Peru, Ecuador and Uruguay.

Through its global Network Operations Centers (NOCs), Ericsson will provide services spanning day-to-day monitoring and service desk, change management, and problem and incident management – all powered by its leading AI and automation solutions. The deal supports and reinforces Telefónica’s strategy to focus on increased use of AI-based automation for evolved network operations.

Juan Manuel Caro, Global Director of Operations & Customer experience, Telefónica, says: “Expanding our long-term partnership with Ericsson with the implementation and support of their global Network Operation Centers will now allow us to build a more agile network, while implementing new tools and developing technologies for the network and our customers. AI and automation are key pillars of the network operations of the future.”

Arun Bansal, President and Head of Ericsson Europe and Latin America, Ericsson, says: “Ericsson and Telefónica have a long-standing partnership in technology and services. This new deal reflects both our ambitions to develop and drive networks based in automation, machine learning and AI and we’re working closely with Telefónica to make this a reality”.

Mobily deploys Ericsson

Mobily deploys Ericsson full stack Telecom Cloud.

Mobily Saudi Arabia has deployed Ericsson’s full stack telecom cloud solution, focusing on transforming its wireless network and providing a 5G Cloud Core. Mobily will gain a flexible, agile, and programmable network to improve customer experience and support the development of new services.

Moezid bin Nasser Al Harbi, Chief Technology Officer, Mobily, says: “The cloud will help meet the growing demand for new network services to help improve productivity and efficiency.”

Through Ericsson Cloud Packet Core, powered by Ericsson Cloud Infrastructure including Ericsson Orchestrator, Mobily customers will receive benefits such as reduced latency, tailored vertical network solutions and optimized distribution of key network capabilities.

Rafiah Ibrahim, Head of Ericsson Middle East and Africa, says: “Ericsson is committed to collaborating with Mobily in modernizing and managing its capacity growth, adding sites, and migrating to datacenters. We are taking another step in Mobily’s digital transformation journey through Ericsson’s core network solutions, bringing new levels of performance, flexibility and optimization.”

Ericsson extends OSSii to include 5G

Ericsson committed to extending OSSii agreement to include 5G.

Ericsson, Huawei and Nokia have agreed to initiate discussions to extend an OSSii (Operation Support System Interoperability Initiative) Memorandum of Understanding (MoU) to cover 5G network technology.

The aim is to enable and simplify interoperability between OSS systems, reducing overall OSS integration costs and enabling shorter time-to-market for 5G.

The scope will also look at simplifying integration capabilities towards multi-vendor operators’ OSS systems, maximizing the use of intelligent and autonomous technologies.

Ignacio Más, Head of Technology Strategy, Solution Areas OSS, Ericsson, says: “Extending the OSSii MoU scope with 5G demonstrates the strength of Ericsson’s support for multi-vendor interoperability. We fully support, and are committed to, the initiative that has helped service providers globally to leverage advancements in telco networks and meet the demands for optimization across their networks.”

OSSii – the Operations Support Systems interoperability initiative – was initiated in May 2013 to promote OSS interoperability between different OSS vendors’ equipment.

Telefónica UK selects Netcracker

Telefónica UK Selects Netcracker’s End-to-End BSS/OSS Suite.

NEC Corporation and Netcracker Technology have signed an agreement to partner with Telefónica UK to implement Netcracker’s end-to-end BSS/OSS suite. The partnership further expands Netcracker’s global, strategic relationship with Telefónica and underscores Netcracker’s continued market leadership in BSS and OSS.

Telefónica UK will use Netcracker’s comprehensive BSS/OSS suite comprising its Revenue Management, Customer Management and Operations Management solutions. In addition, Netcracker will also provide a comprehensive set of services, including Agile- and DevOps-based development, configuration and delivery.

Telefónica UK will integrate Netcracker’s revenue management and customer management capabilities into its customer-facing environment. It will also integrate Netcracker’s OSS capabilities within its IT and Network operations. To ensure rapid integration and delivery, Netcracker will leverage its Blueprint methodology to optimize delivery.

“Service providers around the world are choosing to work with us as leading suppliers of next-gen business and operations solutions. Our solutions allow our customers to become more agile and, in turn, give their customers the differentiated offerings they demand,” said Roni Levy, General Manager of Europe at Netcracker. “We are excited to help our customers deliver the services and experiences their customers have come to expect.”

Ericsson and Airtel to build AI use cases for Network Operations

Ericsson and Airtel to further build real-world AI use cases for Network Operations.

Ericsson and Bharti Airtel (“Airtel”), India’s leading telecom services provider, announced their collaboration for building intelligent and predictive network operations. Leveraging its developments in Artificial Intelligence (AI) and automation, Ericsson will support Airtel to proactively address network complexity and boost user experience.

Combining deep domain expertise with advanced technologies like AI and automation, Ericsson managed services provides the performance, reliability, and flexibility to meet the dynamic needs of consumers and enterprises, while intelligently monitoring and managing networks to drive operational efficiencies.

Ericsson and Airtel are industry leaders in the use of AI and automation, driving the future of network operations. Having completed proof of concept trials, the companies are expanding their co-creation partnership to industrialise AI use cases.

Bradley Mead, Head of Ericsson Network Managed Services, says: “I’m delighted that we are able to innovate together with Airtel, which confirms a joint commitment to our long-standing partnership where together we can showcase what is possible with AI/ML as we transform into truly data-driven operations that will deliver business benefits on a new level.”

Randeep Shekhon, CTO, Bharti Airtel, says: “Airtel has always been a pioneer in introducing new network technologies to serve customers. AI / ML will be key to Airtel’s customer experience centric operations management. Our partnership with Ericsson is an example of the power of real-world and collaborative innovation. With these initiatives, we would continue to maintain our network differentiation and provide superior customer experience. We look forward to taking this journey together to the next level.”

Telecom Egypt and Ericsson apply AI

Telecom Egypt and Ericsson apply Artificial Intelligence to operate telco cloud.

Telecom Egypt and Ericsson have completed the successful deployment of Artificial Intelligence (AI) on Telecom Egypt’s full-stack telco cloud infrastructure. The objective is to operate the telco cloud environment intelligently and efficiently to enable cloud automation and orchestration.

The telecom industry is moving into cloud automation, especially with the introduction of cloud native technology in 5G. Artificial Intelligence assets provide an efficient method for cloud visualization, with the ability to monitor internal traffic between NFVI layers, in addition to providing a fast way to identify faults and generate suggestions for resolution.

Adel Hamed, Chief Executive Officer at Telecom Egypt says: “We are keen to lead the way in the region when it comes to Artificial Intelligence, as it paves the road for implementation of new technologies across all our markets. Partnering with Ericsson enables us to achieve our strategic goals when it comes to enhanced operational effectiveness and customer experience.”

A key benefit in the case of cloud is that the software is divided into smaller components. This means Telecom Egypt can be selective about what it chooses to upgrade in terms of software and manage these upgrades more easily on a live network with minimal disruption.

Rafiah Ibrahim, Head of Ericsson Middle East and Africa says: “This successful pilot showcases the possibility for operators to deploy Artificial Intelligence on a broader scale. By using Ericsson’s technology, operators such as Telecom Egypt are able to build global standard agile networks and speed up the introduction of new services.”

The OSS Tinder effect

On Friday, we provided a link to an inspiring video showing Rolls-Royce’s vision of an operations centre. That article is a follow-on from other recent posts about the pros and cons of using MVPs (Minimum Viable Products) as an OSS transformation approach.

I’ve been lucky to work on massive OSS projects. Projects that have taken months / years of hard implementation grind to deliver an OSS for clients. One was as close to perfect (technically) as I’ve been involved with. But, alas, it proved to be a failure.

How could that be, you’re wondering? Well, it’s what I refer to as the Tinder Effect. On Tinder, first appearances matter. Liked or disliked at the swipe of a hand.

Many new OSS are delivered to users who are already familiar with one or more OSS. If they’re not as pretty or as functional or as intuitive as what the users are accustomed to, then your OSS gets a swipe to the left. As we found out on that project (a ‘we’ that included all the client’s stakeholders and sponsors), first impressions can doom an otherwise successful OSS implementation.

Since then, I’ve invested a lot more time into change management. Change management that starts long before delivery and handover. Long before designs are locked in. Change management that starts with hearts and minds. And starts by involving the end users early in the change process. Getting them involved in the vision, even if not quite as elaborate as Rolls-Royce’s.

Proving you’re the best of the best at OSS

Way back in 2015, the PAOSS blog proposed the idea of an OSS Olympics to help identify the best OSS’ers in the world.

It suggested the following 10 events (noting that operators can use tools of their own choosing):

  1. Order handling – set up a particular type of order, with complexities, and see who can generate the order the fastest into a pseudo-BSS order repository
  2. Service configuration and activation – being the fastest to fulfill a complex service request on a network platform
  3. Problem resolution – surely OSS’s equivalent of the 100m sprint, the blue riband event, to solve complex network problem scenarios the fastest
  4. Customer QoS and SLA management – perhaps the 200m sprint, being the fastest to solve a degraded network situation
  5. Resource performance management – engineering the traffic flows on a network to provide the best steady-state performance
  6. Bill invoice management – taking a data set containing different products, bundles, services and customers and generating correct bills for all customers in the fastest time
  7. Supplier / Partner settlements and payments management – similar to the above but taking S/P interconnect data and providing correct settlement statements the fastest
  8. Product and Offer management – to build a fully functional product offering the fastest (could be a team event)
  9. Service development – to build a functional service offering
  10. Knowledge management – to deliver a decision support capability that provides the fastest access to process documentation in 5 real-time situations / events

Well, it seems that the concept is now a reality. Number three on the list, the blue-riband event, now exists as the Boss of the NOC (BOTN) competition, hosted by Splunk and Arcus Data. Apparently this is the third year of the event, having first been hosted in 2017.

This event only relates to diagnosis using Splunk tools, but it’s a great concept. All you Splunk gurus, jump on board!

It’s on Thursday, March 28, 2019 from 9:00am – 3:30pm in San Diego.

Telefónica Movistar México selects Ericsson for CX

Telefónica Movistar México selects Ericsson Expert Analytics to enhance customer experience.

Ericsson has been selected by Telefónica Movistar México, a subsidiary of Telefónica South America, to deploy the Service Operation Center (SOC) platform to enhance subscribers’ experience through Ericsson Expert Analytics.

The new SOC platform, based on Ericsson’s solution, will be deployed to gain visibility of Telefónica Movistar México’s service operations, quality, and end-customer experience. This will give the service provider actionable insights in real-time to help improve service quality and increase customer satisfaction.

Hector Gimenez, Chief Technical Officer, Telefónica Movistar México, says: “With Ericsson Expert Analytics we can now get a complete end-to-end view of our services, along with real-time insights, that allow us to accelerate decisions and actions that enhance the experience for our subscribers. We will leverage the data captured through Ericsson’s solution to relentlessly pursue operational efficiencies in both technical and non-technical areas and thus increase our focus on our customers.”

Arun Bansal, President and Head of Ericsson in Europe and Latin America, says: “Service providers around the world are increasingly turning to Ericsson Expert Analytics to gain greater visibility of their service operations. This visibility highlights opportunities to improve the customer experience in a wide range of areas, resulting in higher levels of customer satisfaction and loyalty. These are key metrics for service providers to increase revenue.”

The Rolls Royce vision of OSS

Yesterday’s post mentioned the importance of setting a future vision as part of your MVP delivery strategy.

As Steve Blank said here, “Founders act like the ‘minimum’ part is the goal. Or worse, that every potential customer should want it. In the real world not every customer is going to get overly excited about your minimum feature set… You’re selling the vision and delivering the minimum feature set to visionaries, not everyone.”

Yesterday’s post promised to give you an example of an exciting vision. Not just any vision, the Rolls-Royce version of a vision.

We’ve all seen examples of customers wanting a Rolls-Royce OSS solution. Here’s a video that’s as close as possible to Rolls-Royce’s own vision of an OSS solution.