How did your first OSS project make you feel?


Can you remember how you felt during the initial weeks of your first OSS project?

I can vividly recall how out of my depth I felt on my first OSS project. I was in my 20s and had relocated to a foreign country for the first time. I had a million questions (probably more, actually). The challenges seemed immense (they were). I was working with talented and seasoned telco veterans who had led teams of as many as 500 people, but they had little OSS experience either. None of us had worked together before. We were all struggling. We were all about to embark on an incredibly steep learning curve, by far the steepest of my career to date.

Now through PAOSS I’m looking to create a new series of free e-books to help those starting out on their own OSS journey.

But this isn’t about my experiences. That would be a perspective limited to just one unique journey in OSS. No, this survey is all about you. I’d love to capture your feelings, experiences, insights and opinions to add much greater depth to the material.

Are you up for it? Are you ready to answer just five one-line questions to help the next generation of OSS experts?

We’ve created a survey below that closes on 23 March 2019. The best 5 responses will win a free copy of my physical book, “Mastering Your OSS” (valued at US$49.97 + P/H).

Click on the link below to start the 5-question survey:

https://passionateaboutoss.com/how-did-your-first-oss-make-you-feel

How to bring your art and your science to your OSS

In the last two posts, we’ve discussed repeatability within the field of OSS implementation – paint-by-numbers vs artisans and then resilience vs precision in delivery practices.

Now I’d like you to have a think about how those posts overlay onto this quote by Karl Popper:
“Non-reproducible single occurrences are of no significance to science.”

Every OSS implementation is different. That means that every one is a non-reproducible single occurrence. But if we bring this mindset into our OSS implementations, it means we’re bringing an artisanal rather than a scientific method to the project.

I’m all for bringing more art, more creativity, more resilience into our OSS projects.

But I’m also advocating more science. More repeatability. More precision. Whilst every OSS project may be different at a macro level, there are a lot of similarities in the micro-elements. There tend to be similarities in sequences of activities if you pay close attention to the rhythms of your projects. Perhaps our products can use techniques to spot and leverage similarities too.

In other words, bring your art and your science to your OSS. Please leave a comment below. I’d love to hear the techniques you use to achieve this.

OSS resilience vs precision

“Resilience is what happens when we’re able to move forward even when things don’t fit together the way we expect. [OSS project anyone???] And tolerances are an engineer’s measurement of how well the parts meet spec.
One way to ensure that things work out the way you hope is to spend the time and money to ensure that every part, every form, every worker meets spec. Tighten your spec, increase precision and you’ll discover that systems become more reliable.
The other alternative is to embrace the fact that nothing is ever exactly on spec, and to build resilient systems.
You’ll probably find that while precision feels like the way forward, resilience, the ability to thrive when things go wrong, is a much safer bet.”
Seth Godin, here.

Yesterday’s post talked about the difference between having a team of artisans versus a team that paints by numbers. Seth’s blog provides a similar comparison. Instead of comparing by talent, Seth compares by attitude.

I’m really conflicted by Seth’s comparison.

From the side of past experience, resilience is a massive factor in overcoming the many obstacles faced on implementation projects. I’ve yet to work on an OSS project where all challenges were known at inception.

From the side of an ideal future, precision and repeatability are essential factors in improving the triple constraint of OSS delivery and increasing reliability for customers. And whilst talking about the future, the concept of network slicing (which holds the key for 5G) is dependent upon OSS repeatability and efficiency.

So which do we focus on? Building a vastly talented, experienced and resilient implementation team? Or building a highly reliable, repeatable implementation system? Both, most likely.

But what if you only get to choose one? Which do you focus on (for you and your team/system)?

The Mona Lisa of OSS

All OSS rely on workflows to make key outcomes happen. Outcomes like activating a customer order, resolving a fault, billing customers, etc. These workflows often touch multiple OSS/BSS products and/or functional capabilities. There’s not always a single best way to achieve an outcome.

If you’re responsible for your organisation’s workflows, do you want to build a paint-by-numbers approach where each process is repeatable?
Or do you want the bespoke paintings, which could unintentionally lead to a range in quality from Leonardo’s Mona Lisa to my 3-year-old’s finger painting?

Apart from new starters, who thrive on a paint-by-numbers approach at first, every person who uses an OSS wants to feel like an accomplished artisan. They want to have the freedom to get stuff done with their own unique brush-strokes. They certainly don’t want to follow a standard, pre-defined pattern day-in and day-out. That would be so boring and demoralising. I don’t blame them. I’d be exactly the same.

This is perhaps why some organisations don’t have documented workflows, or at least they only have loosely defined ones. It’s just too hard to capture all the possibilities on one swim-lane chart.

I’m all for having artisans on the team who are able to handle the rarer situations (eg process fall-outs) with bespoke processes. But bespoke processes should never be the norm. Continual improvement thrives on a strong level of repeatability.

To me, bespoke workflows are not necessarily an indication of a team of free-spirited artists that needs to be regimented, but of processes with too many variants. Click on this link to find recommendations for reducing the level of bespoke processes in your organisation.

Are processes bespoke or paint-by-numbers in your organisation?

BTW. We’ll take a slightly different perspective on workflow repeatability in tomorrow’s post.

OSS Best Practices, cough, splutter

“Organizations that seek transformations frequently bring in an army of outside consultants [or implementers in the case of OSS] who tend to apply one-size-fits-all solutions in the name of “best practices.” Our approach to transforming our respective organizations is to rely instead on insiders — staff who have intimate knowledge about what works and what doesn’t in their daily operations.”
Behnam Tabrizi, Ed Lam, Kirk Gerard and Vernon Irvin, here.

I don’t know about you, but the term “best practices” causes me to make funny noises. A cross between a laugh, cough, derisive snicker and chortle. This noise isn’t always audible, but it definitely sounds inside my head any time someone mentions best practices in the field of OSS.

There are two reasons for my bemusement. No, actually there’s a third, which I’ll share as the story that follows. The first two reasons are:

  • That every OSS project is so different that chaos theory applies. I’m all for systematising aspects of OSS projects to create repeatable building blocks (like TM Forum does with tools such as eTOM). But as much as I build and use repeatable approaches, I know they always have to be customised for each client situation
  • That “best practices” becomes a mind-set that can prevent the outsiders / implementers from listening to insiders

Luckily, out of all the OSS projects I’ve worked on, there’s only been one where the entire implementation team has stuck with their “best practices” mantra throughout the project.

The team used this phrase as the intellectual high-ground over their OSS-novice clients. To paraphrase their words, “This is best practice. We’ve done it this way successfully for dozens of customers in the past, so you must do it our way.” Interestingly, this project was the most monumental failure of any OSS I’ve worked on.

The implementation team’s organisation lost out because the project was halted part-way through. The client lost out because they had almost no OSS functionality to show for their resource investment.

The project was canned largely because the implementation company wasn’t prepared to budge from their “best practices” thinking. To be honest, their best practices approaches were quite well formed. The only problem was that the changes they were insisting on (to accommodate their 10-person team of outsiders) would’ve caused major re-organisation of the client’s 100,000-person company of insiders. The outsiders / implementers either couldn’t see that or were so arrogant that they wanted the client to bend anyway.

That was a failure on their part no doubt, but not the monumental failure. I could see the massive cultural disconnect between client and implementer very early. I could even see the way to fix it (I believe). I was their executive advisor (the bridge between outsiders and insiders), so the monumental failure was mine. It wasn’t through lack of trying, but I was unable to persuade either party to consider the other’s perspective.

Without compromise, the project became compromised.

Using my graphene analogy to help fix OSS data

By now I’m sure you’ve heard about graph databases. You may’ve even read my earlier article about the benefits graph databases offer when modelling network inventory when compared with relational databases. But have you heard the Graphene Database Analogy?

I equate OSS data migration and data quality improvement with graphene, which is made up of single layers of carbon atoms arranged in hexagonal lattices (planes).

The graphene data model

There are four concepts of interest with the graphene model:

  1. Data Planes – Preparing and ingesting data from siloes (eg devices, cards, ports) is relatively easy, ie building planes of data (the black carbon atoms and bonds above)
  2. Bonds between planes – It’s the interconnections between siloes (eg circuits, network links, patch-leads, joints in pits, etc) that are usually trickier. So I envisage alignment of nodes (on the data plane or graph, not necessarily network nodes) as equivalent to bonds between carbon atoms on separate planes (red/blue/aqua lines above).
    Alignment comes in many forms:

    1. Through spatial alignment (eg a joint and a pit have the same geospatial position, so the joint is probably inside the pit)
    2. Through naming conventions (eg the same circuit name associated with two equipment ports)
    3. Through various other linking-key strategies

    Nodes on each data plane can potentially be snapped together (either by an operator or an algorithm) if you find consistent ways of aligning nodes that are adjacent across planes (see the sketch below)
  3. Confidence – I like to think about data quality in terms of confidence levels. Some data is highly reliable, other data sets less so. For example, if you have two equipment ports with a circuit name identifier, then your confidence level might be 4 out of 4* because you know the exact termination points of that circuit. Conversely, let’s say you just have a circuit with a name that follows a convention of “LocA-LocB-speed-index-type” but has no associated port data. In that case you only know that the circuit terminates at LocationA and LocationB, but not which building, rack, device, card or port, so your confidence level might only be 2 out of 4.
  4. Visualisation – Having these connected planes of data allows you to visualise heat-map confidence levels (and potentially gaps in the graph) across your OSS data, thus identifying where data-fix activities (eg physical audits) are required

* The example of a circuit with two related ports above might not always achieve 4 out of 4 if other checks are applied (eg if there are actually 3 ports with that associated circuit name in the data but we know it should represent a two-ended patch-lead).

Note: The diagram above (from graphene-info.com) shows red/blue/aqua links between graphene layers as capturing hydrogen, but is useful for approximating the concept of aligning nodes between planes
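To make the alignment and confidence concepts a little more concrete, here’s a minimal Python sketch. It’s only an illustration of the idea under invented assumptions: the record structures, circuit naming and scoring thresholds are all hypothetical, not from any real OSS product.

```python
# A minimal sketch (not from any real OSS product) of snapping nodes
# together across data planes and scoring the result. All record
# structures, names and thresholds here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    plane: str                      # which data silo the record came from
    key: str                        # linking key, eg a circuit name
    attrs: dict = field(default_factory=dict)

def align_by_key(plane_a, plane_b):
    """Naming-convention alignment: snap together nodes on two planes
    that share a linking key (eg the same circuit name on two ports)."""
    index = {n.key: n for n in plane_b}
    return [(a, index[a.key]) for a in plane_a if a.key in index]

def spatially_aligned(a, b, tolerance_m=1.0):
    """Spatial alignment: a joint and a pit at (almost) the same
    geospatial position probably belong together."""
    (ax, ay), (bx, by) = a.attrs["xy"], b.attrs["xy"]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= tolerance_m

def circuit_confidence(circuit_name, port_plane):
    """Crude 1-4 confidence score, following the text: 4/4 when exactly
    two ports carry the circuit name, 2/4 when only the naming
    convention tells us the A and B locations."""
    matches = [p for p in port_plane if p.key == circuit_name]
    if len(matches) == 2:
        return 4    # both termination points known
    if len(matches) > 2:
        return 3    # ambiguous, eg 3 ports on a two-ended patch-lead
    return 2        # LocA/LocB from the name only

ports = [Node("port", "SYD-MEL-10G-001"), Node("port", "SYD-MEL-10G-001")]
print(circuit_confidence("SYD-MEL-10G-001", ports))   # -> 4
```

In practice, scores like these would feed the heat-map visualisation described in point 4.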

All OSS products are excellent. So where’s the advantage?

“You don’t get differential advantage from your products, it’s from the way you speak to and relate to your customers. All products are excellent these days.”

The quote above paraphrases Malcolm McDonald from a podcast about his book, “Malcolm McDonald on Value Propositions: How to Develop Them, How to Quantify Them.”

This quote had nothing to do with OSS specifically, but consider for a moment how it relates to OSS.

Consider it also in relation to the diagram below.
Long-tail features

Let’s say the x-axis on this graph shows a list of features within any given OSS product. And the y-axis shows a KPI that measures the importance of each feature (eg number of uses, value added by using that feature, etc).

As Professor McDonald indicates, all OSS products are excellent these days. And all product vendors know what the most important features are. As a result, they all know they must offer the features that appear on the left-side of the image. Since all vendors do the left-side, it seems logical to differentiate by adding features to the far-right of the image, right?

Well actually, there’s almost no differential advantage at the far-right of the image.
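To illustrate the shape of that argument, here’s a toy Python sketch. The feature names, usage counts and the 99% cut-off are all invented; the only point is that the head of the curve is table stakes while the tail adds almost nothing.

```python
# Toy long-tail illustration with invented usage counts per feature.
features = {"alarm list": 9000, "ticketing": 7000, "inventory search": 5000,
            "bulk export": 300, "custom theming": 40, "ascii report": 5}

total = sum(features.values())
cumulative = 0
# Rank features by the importance KPI, then split head from tail at an
# arbitrary 99% of cumulative usage.
for name, uses in sorted(features.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += uses
    side = "head (table stakes)" if cumulative <= 0.99 * total else "tail"
    print(f"{name:18} {uses:6}  {side}")
```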

Now what if we consider the second part of Prof McDonald’s statement on differential advantage, “…it’s from the way you speak to and relate to your customers.”

To me it implies that the differential advantage in OSS is not in the products, but in the service wrapper that is placed around them. You might be saying, “but we’re an OSS product company. We don’t want to get into services.” As described in this earlier post, there are two layers of OSS services.

One of the layers mentioned is product-related services (eg product installation, product configuration, product release management, license management, product training, data migration, product efficiency / optimisation, etc). None of these items would appear as features on the long-tail diagram above. Perhaps as a result, it’s these items that are often less than excellent in vendor offerings. It’s often in these items where complexity, time, cost and risk are added to an OSS project, increasing stress for clients.

If Prof McDonald is correct and all OSS products are excellent, then perhaps it’s in the services wrapper where the true differential advantage is waiting to be unlocked. This will come from spending more time relating to customers than cutting more code.

What if we take it a step further? What if we seek to better understand our clients’ differential advantages in their markets? Perhaps this is where we will unlock an entirely different set of features that will introduce new bands on the left-side of the image. I still feel that amazing OSS/BSS can give carriers significant competitive advantage in their marketplace. And the converse can give significant competitive disadvantage!

Are you desperately seeking to increase your OSS‘s differential advantage? Contact us at Passionate About OSS to help map out a way.

Competition closing on 23 March

In case you didn’t notice, we launched a competition yesterday. It’s a 5-question survey, and respondents are in with a chance of winning 1 of 5 copies of my book, Mastering Your OSS.

Not only that, but it’s your chance to give back to the next generation of OSS experts coming through. You’ll achieve this by sharing your experiences on your earliest OSS projects. In turn, we’ll be turning these answers into e-books that we’ll make available free here on PAOSS.

I’m really looking forward to reading the answers from you, the exceptionally talented OSS people who I know read this blog!

Can OSS/BSS assist CX? We’re barely touching the surface

Have you ever experienced an epic customer experience (CX) fail when dealing with a network service operator, like the one I described yesterday?

In that example, the OSS/BSS, and possibly the associated people / process, had a direct impact on poor customer experience. Admittedly, that 7 truck-roll experience was a number of years ago now.

We have fewer excuses these days. Smart phones and network connected devices allow us to get OSS/BSS data into the field in ways we previously couldn’t. There’s no need for printed job lists, design packs and the like. Our OSS/BSS can leverage these connected devices to give far better decision intelligence in real time.

If we look to the logistics industry, we can see how parcel tracking technologies help to automatically provide status / progress to parcel recipients. We can see how recipients can also modify their availability, which automatically adjusts logistics delivery sequencing / scheduling.

This has multiple benefits for the logistics company:

  • It increases first time delivery rates
  • Improves the ability to automatically notify customers (eg email, SMS, chatbots)
  • Decreases customer enquiries / complaints
  • Decreases the amount of time the truck drivers need to spend communicating back to base and with clients
  • But most importantly, it improves the customer experience

Logistics is an interesting challenge for our OSS/BSS due to the sheer volume of customer interaction events handled each day.

But there’s another area that excites me even more, where CX is improved through better data quality. It’s the ability for field workers to interact with OSS/BSS data in real-time:

  • To see the design packs
  • To compare them with field situations
  • To update the data where there is inconsistency.

Even more excitingly, to introduce augmented reality to assist with decision intelligence for field work crews:

  • To provide an overlay of what fibres need to be spliced together
  • To show exactly which port a patch-lead needs to connect to
  • To show where an underground cable route goes
  • To show where a cable runs through trayway in a data centre
  • etc, etc

We’re barely touching the surface of how our OSS/BSS can assist with CX.

The 7 truck-roll fail

In yesterday’s post we talked about the cost of quality. We talked about examples of primary, secondary and tertiary costs of bad data quality (DQ). We also highlighted that the tertiary costs, including the damage to brand reputation, can be one of the biggest factors.

I often cite an example where it took 7 truck rolls to connect a service to my house a few years ago. This provider was unable to provide an estimate of when their field staff would arrive each day, so it meant I needed to take a full day off work on each of those 7 occasions.

The primary cost factors are fairly obvious, for me, for the provider and for my employer at the time. On the direct costs alone, it would’ve taken many months, if not years, for the provider to recoup their install costs. Most of it attributable to the OSS/BSS and associated processes.

Many of those 7 truck rolls were a direct result of having bad or incomplete data:

  • They didn’t record that it was a two storey house (and therefore needed a crew with “working at heights” certification and gear)
  • They didn’t record that the install was at a back room at the house (and therefore needed a higher-skilled crew to perform the work)
  • The existing service was installed underground, but they had no records of the route (they went back to the designs and installed a completely different access technology because replicating the existing service was just too complex)

Customer Experience (CX), aka brand damage, is the greatest of all cost of quality factors when you consider studies such as those mentioned below.

“A dissatisfied customer will tell 9-15 people about their experience. Around 13% of dissatisfied customers tell more than 20 people.”
White House Office of Consumer Affairs (according to customerthink.com).

Through this page alone, I’ve told a lot more than 20 (although I haven’t mentioned the provider’s name, so perhaps it doesn’t count! 🙂  ).

But the point is that my 7 truck-roll example above could’ve been avoided if the provider’s OSS/BSS gave better information to their field workers (or perhaps enforced that the field workers populated useful data).

We’ll talk a little more tomorrow about modern Field Services tools and how our OSS/BSS can impact CX in a much more positive way.

Calculating the cost of quality

This week of posts has followed the theme of the cost of quality. Data quality that is.

But how do you calculate the cost of bad data quality?

Yesterday’s post mentioned starting with PNI (Physical Network Inventory). PNI is the cables, splices / joints, patch panels, ducts, pits, etc. This data doesn’t tend to have a programmable interface to electronically reconcile with. This makes it prone to errors of many types – mistakes in manual entry, reconfigurations that are never documented, assets that are lost or stolen, assets that are damaged or degraded, etc.

Some costs resulting from poor PNI data quality (DQ) can be considered primary costs. This includes SLA breaches caused by an inability to identify a fault within an SLA window due to incorrect / incomplete / indecipherable design data. These costs are the most obvious and easy to calculate because they result in SLA penalties. If a network operator misses a few of these with tier 1 clients then this is the disaster referred to yesterday.

But the true cost of quality is in the ripple-out effects. The secondary costs. These include the many factors that result in unnecessary truck rolls. With truck rolls come extra costs including contractor costs, delayed revenues, design rework costs, etc.

Other secondary effects include:

  • Downstream data maintenance in systems that rely on PNI data
  • Code in downstream systems that caters for poor data quality, which in turn increases the costs of complexity such as:
    • Additional testing
    • Additional fixing
    • Additional curation
  • Delays in the ability to get new products to market
  • Reduced ability to accurately price products (due to variation in real costs caused by extra complexity)
  • Reduced impact of automations (due to increased variants)
  • Potential to impact Machine Learning / Artificial Intelligence engines, which rely on reliable and consistent data at scale
  • etc

There are probably more sophisticated ways to calculate the cost of quality across all these factors and more, but in most cases I just use a simple multiplier:

  • Number of instances of DQ events (eg number of additional truck rolls); multiplied by
  • A rule-of-thumb cost impact of each event (eg the cost of each additional truck roll)

Sometimes the rules-of-thumb are challenging to estimate, so I tend to err on the side of conservatism. I figure that even if the rules-of-thumb aren’t perfectly accurate, at least they produce a real cost estimate rather than just anecdotal evidence.
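As a worked example of the multiplier, the sketch below uses entirely made-up event counts and rule-of-thumb unit costs, not figures from any real carrier:

```python
# Hypothetical DQ cost model: instances per year x rule-of-thumb unit cost.
# All numbers are invented for illustration.
DQ_EVENTS = {
    "additional truck rolls": (400, 1_000),    # (events/year, $ per event)
    "SLA breaches":           (12, 50_000),
    "design rework items":    (150, 2_500),
}

total = sum(count * unit_cost for count, unit_cost in DQ_EVENTS.values())
print(f"Estimated annual cost of poor data quality: ${total:,}")
# -> Estimated annual cost of poor data quality: $1,375,000
```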

And more importantly, there are the tertiary and less tangible costs of brand damage (also known as Customer Experience (CX) or reputation damage). We’ll talk a little more about that tomorrow.

 

Waiting for the disaster to invest in the data

Have you seen OSS tools where the applications are brilliant but consigned to failure by bad data? I definitely have! I call it the data death spiral. It’s a well-known fact in the industry that bad data can ruin an OSS. You know it. I know it. Everyone knows it.

But how many companies do you know that invest in data quality? I mean truly invest in it.

The status quo is not to invest in the data, but in the disaster. That is, the disaster caused by the data!

Being a data nerd, I find it baffling that this is so. My only assumption to date is that we don’t adequately measure the cost of quality. Or more to the point, the cost impact resulting from bad data.

I recently attempted to model the cost of quality. My model focuses on the ripple-out impacts from poor PNI (Physical Network Inventory) quality data alone. Using conservative numbers, the cost of quality is in the millions for the first carrier I applied it to.

Why do you think operators wait for the disaster before investing in the data? What alternate techniques do you use to focus attention, and investment, on the data?

Where an absence of OSS data can still provide insights

The diagram below has some parallels with OSS. The story however is a little long before it gets to the OSS part, so please bear with me.

[Diagram: WWII aircraft marked with the locations of recorded bullet holes]

The diagram shows an analysis the US Navy performed during WWII of where returning planes had been shot. The theory was that they should be reinforcing the areas that received the most hits (ie the wing-tips, central part of the body, etc, as per the diagram).

Abraham Wald, a statistician, had a completely different perspective. His rationale was that the hits would be more uniform and the blank sections on the diagram above represent the areas that needed reinforcement. Why? Well, the planes that were hit there never made it home for analysis.

In OSS, this is akin to the device or EMS that has failed and is unable to send any telemetry data. No alarms are appearing in our alarm lists for those devices / EMS because they’re no longer capable of sending any.

That’s why we use heartbeat mechanisms to confirm a system is alive (and capable of sending alarms). These come in the form of pollers, pingers or just watching for other signs of life such as network traffic of any form.
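As a minimal sketch of the poller idea (the device names are invented, the ping flags shown are Linux-style, and a production OSS would more likely use SNMP polling or traps than shelling out to ping):

```python
# Minimal heartbeat poller sketch: one ICMP echo per device, and raise an
# alarm for any device that stays silent - it can't tell us itself.
import subprocess

DEVICES = ["ems-01.example.net", "router-07.example.net"]  # illustrative names

def is_alive(host: str, timeout_s: int = 2) -> bool:
    """True if the host answers a single ping within the timeout."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],  # Linux-style flags
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for device in DEVICES:
    if not is_alive(device):
        print(f"ALARM: heartbeat lost for {device}")
```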

In what other areas of OSS can an absence of data demonstrate an insight?

Telefónica and Ericsson sign AI-powered Network Ops

Telefónica and Ericsson sign AI-powered Network Operations agreement.

Ericsson and Telefónica, one of the world’s largest communications service providers, have penned a new four-to-six-year managed services deal for AI-powered Network Operations in the UK, Colombia, Peru, Ecuador and Uruguay.

Through its global Network Operations Centers (NOCs), Ericsson will provide services spanning day-to-day monitoring and service desk, change management, and problem and incident management – all powered by its leading AI and automation solutions. The deal supports and reinforces Telefónica’s strategy to focus on increased use of AI-based automation for evolved network operations.

Juan Manuel Caro, Global Director of Operations & Customer Experience, Telefónica, says: “Expanding our long-term partnership with Ericsson with the implementation and support of their global Network Operation Centers will now allow us to build a more agile network, while implementing new tools and developing technologies for the network and our customers. AI and automation are key pillars of the network operations of the future.”

Arun Bansal, President and Head of Ericsson Europe and Latin America, Ericsson, says: “Ericsson and Telefónica have a long-standing partnership in technology and services. This new deal reflects both our ambitions to develop and drive networks based on automation, machine learning and AI, and we’re working closely with Telefónica to make this a reality.”

Mobily deploys Ericsson

Mobily deploys Ericsson full stack Telecom Cloud.

Mobily Saudi Arabia has deployed Ericsson’s full stack telecom cloud solution, focusing on transforming its wireless network and providing a 5G Cloud Core. Mobily will gain a flexible, agile, and programmable network to improve customer experience and support the development of new services.

Moezid bin Nasser Al Harbi, Chief Technology Officer, Mobily, says: “The cloud will help meet the growing demand for new network services to help improve productivity and efficiency.”

Through Ericsson Cloud Packet Core, powered by Ericsson Cloud Infrastructure including Ericsson Orchestrator, Mobily customers will receive benefits such as reduced latency, tailored vertical network solutions and optimized distribution of key network capabilities.

Rafiah Ibrahim, Head of Ericsson Middle East and Africa, says: “Ericsson is committed to collaborating with Mobily in modernizing and managing its capacity growth, adding sites, and migrating to datacenters. We are taking another step in Mobily’s digital transformation journey through Ericsson’s core network solutions, bringing new levels of performance, flexibility and optimization.”

Ericsson extends OSSii to include 5G

Ericsson committed to extending OSSii agreement to include 5G.

Ericsson, Huawei and Nokia have agreed to initiate discussions to extend an OSSii (Operation Support System Interoperability Initiative) Memorandum of Understanding (MoU) to cover 5G network technology.

The aim is to enable and simplify interoperability between OSS systems, reducing overall OSS integration costs and enabling shorter time-to-market for 5G.

The scope will also look at simplifying integration capabilities towards multi-vendor operators’ OSS systems, maximizing the use of intelligent and autonomous technologies.

Ignacio Más, Head of Technology Strategy, Solution Areas OSS, Ericsson, says: “Extending the OSSii MoU scope with 5G demonstrates the strength of Ericsson’s support for multi-vendor interoperability. We fully support, and are committed to, the initiative that has helped service providers globally to leverage advancements in telco networks and meet the demands for optimization across their networks.”

OSSii – the Operations Support Systems interoperability initiative – was initiated in May 2013 to promote OSS interoperability between different OSS vendors’ equipment.

Telefónica UK selects Netcracker

Telefónica UK Selects Netcracker’s End-to-End BSS/OSS Suite.

NEC Corporation and Netcracker Technology have signed an agreement to partner with Telefónica UK to implement Netcracker’s end-to-end BSS/OSS suite. The partnership further expands Netcracker’s global, strategic relationship with Telefónica and underscores Netcracker’s continued market leadership in BSS and OSS.

Telefónica UK will use Netcracker’s comprehensive BSS/OSS suite comprising its Revenue Management, Customer Management and Operations Management solutions. In addition, Netcracker will provide a comprehensive set of services, including Agile- and DevOps-based development, configuration and delivery.

Telefónica UK will integrate Netcracker’s revenue management and customer management capabilities into its customer-facing environment. It will also integrate Netcracker’s OSS capabilities within its IT and Network operations. To ensure rapid integration and delivery, Netcracker will leverage its Blueprint methodology to optimize delivery.

“Service providers around the world are choosing to work with us as leading suppliers of next-gen business and operations solutions. Our solutions allow our customers to become more agile and, in turn, give their customers the differentiated offerings they demand,” said Roni Levy, General Manager of Europe at Netcracker. “We are excited to help our customers deliver the services and experiences their customers have come to expect.”

Ericsson and Airtel to build AI use cases for Network Operations

Ericsson and Airtel to further build real-world AI use cases for Network Operations.

Ericsson and Bharti Airtel (“Airtel”), India’s leading telecom services provider, have announced their collaboration on building intelligent and predictive network operations. Leveraging its developments in Artificial Intelligence (AI) and automation, Ericsson will support Airtel in proactively addressing network complexity and boosting user experience.

Combining deep domain expertise with advanced technologies like AI and automation, Ericsson managed services provides the performance, reliability, and flexibility to meet the dynamic needs of consumers and enterprises as well as intelligently monitoring and managing networks to drive operational efficiencies.

Ericsson and Airtel are industry leaders in the use of AI and automation, driving the future of network operations. Having completed proof-of-concept trials, the companies are expanding their co-creation partnership to industrialise AI use cases.

Bradley Mead, Head of Ericsson Network Managed Services, says: “I’m delighted that we are able to innovate together with Airtel, which confirms a joint commitment to our long-standing partnership, where together we can showcase what is possible with AI/ML as we transform into truly data-driven operations that will deliver business benefits on a new level.”

Randeep Shekhon, CTO, Bharti Airtel, says: “Airtel has always been a pioneer in introducing new network technologies to serve customers. AI/ML will be key to Airtel’s customer-experience-centric operations management. Our partnership with Ericsson is an example of the power of real-world and collaborative innovation. With these initiatives, we will continue to maintain our network differentiation and provide superior customer experience. We look forward to taking this journey together to the next level.”

Telecom Egypt and Ericsson apply AI

Telecom Egypt and Ericsson apply Artificial Intelligence to operate telco cloud.

Telecom Egypt and Ericsson have completed the successful deployment of Artificial Intelligence (AI) across Telecom Egypt’s full-stack telco cloud infrastructure. The objective is to operate the telco cloud environment intelligently and efficiently, enabling cloud automation and orchestration.

The telecom industry is moving into cloud automation, especially with the introduction of cloud-native architectures in 5G. Artificial Intelligence assets provide an efficient method for cloud visualization, with the ability to monitor internal traffic between NFVI layers, in addition to providing a fast way to identify faults and generate suggestions for resolution.

Adel Hamed, Chief Executive Officer at Telecom Egypt says: “We are keen to lead the way in the region when it comes to Artificial Intelligence, as it paves the road for implementation of new technologies across all our markets. Partnering with Ericsson enables us to achieve our strategic goals when it comes to enhanced operational effectiveness and customer experience.”

A key benefit in the case of cloud is that the software is divided into smaller components. This means Telecom Egypt can be selective about what it chooses to upgrade in terms of software and manage these upgrades more easily on a live network with minimal disruption.

Rafiah Ibrahim, Head of Ericsson Middle East and Africa says: “This successful pilot showcases the possibility for operators to deploy Artificial Intelligence on a broader scale. By using Ericsson’s technology, operators such as Telecom Egypt are able to build global standard agile networks and speed up the introduction of new services.”