Further rebukes for Trump and ZTE

First ZTE was banned, then given a lifeline by President Trump; now Trump himself has been rebuked.

“The House Appropriations Committee unanimously accepted an amendment to an appropriations bill on Thursday that reinforces sanctions against Chinese telecommunications company ZTE, a rebuke to President Trump, who earlier this week tweeted support for the company,” reported TheHill.com.

AT&T, SKT and Intel to Launch a New Open Infrastructure Project, Airship

AT&T Working With SKT and Intel to Launch a New Open Infrastructure Project, Airship.

As part of our ongoing commitment to open and collaborative innovation, we’re working with SKT, Intel Corporation and the OpenStack Foundation to launch a new open infrastructure project called Airship. This project builds on the foundation laid by the OpenStack-Helm project launched in 2017. It lets cloud operators manage sites at every stage from creation through minor and major updates, including configuration changes and OpenStack upgrades. It does all this through a unified, declarative, fully containerized, and cloud-native platform.

Simply put, Airship lets you build a cloud more easily than ever before. Whether you’re a telecom, manufacturer, health care provider, or an individual developer, Airship makes it easy to predictably build and manage cloud infrastructure.

It’s built using microservices, which we think are the future of software development, and embraces cloud native principles out of the box. This lets each Airship microservice perform one specific role in the cloud delivery and management process, and do it well. The ultimate goal of Airship is to help operators take hardware from loading dock to an OpenStack cloud, all while ensuring first-class life cycle management of that cloud once it enters production.

The initial focus of this project is the implementation of a declarative platform to introduce OpenStack on Kubernetes (OOK) and the lifecycle management of the resulting cloud, with the scale, speed, resiliency, flexibility, and operational predictability demanded of network clouds.

“Declarative” might be a new term to some readers, but it’s a simple concept with huge benefits. In a nutshell, every aspect of your cloud is defined in standardized documents that give you extremely flexible, fine-grained control of your cloud infrastructure. You simply manage and submit the documents, and the platform takes care of the rest, including determining what has changed since the last submission and orchestrating those changes.
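
To make the declarative concept more tangible, here’s a minimal sketch in Python (the fields are invented; Airship’s actual documents are much richer, YAML-based manifests). You submit a new set of documents, and the platform diffs them against the last submission to work out what to orchestrate:

    # Minimal sketch of declarative handling: the operator only describes the
    # desired end state; the platform derives the changes. (Fields invented.)
    last_submitted = {
        "compute_nodes": 40,
        "openstack_release": "queens",
        "neutron": {"mtu": 1500},
    }

    new_submission = {
        "compute_nodes": 44,            # scale out
        "openstack_release": "rocky",   # upgrade
        "neutron": {"mtu": 1500},       # unchanged
    }

    def diff(old: dict, new: dict, path: str = "") -> list:
        """Return the changes the platform must orchestrate."""
        changes = []
        for key, value in new.items():
            here = f"{path}/{key}"
            if isinstance(value, dict):
                old_value = old.get(key)
                changes += diff(old_value if isinstance(old_value, dict) else {},
                                value, here)
            elif old.get(key) != value:
                changes.append((here, old.get(key), value))
        return changes

    for path, was, now in diff(last_submitted, new_submission):
        print(f"orchestrate: {path}: {was} -> {now}")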

AT&T is contributing code for Airship that started in collaboration with SKT, Intel and a number of other companies in 2017. It’s the foundation of AT&T’s network cloud that will run our 5G core supporting the late 2018 launch of 5G service in 12 cities. Airship will also be used by Akraino Edge Stack, which is a new Linux Foundation project. Akraino is intended to create an open source software stack supporting high-availability cloud services optimized for edge computing systems and applications.

Airship will fuel and accelerate our Network AI initiative which houses several of our other open source projects. We want to build and nurture an open ecosystem of developers who can work together to advance this technology and deploy it within their own organizations.

Ryan van Wyk, assistant vice president of Cloud Platform Development at AT&T Labs, describes it like this: “Airship is going to allow AT&T and other operators to deliver cloud infrastructure predictably that is 100% declarative, where Day Zero is managed the same as future updates via a single unified workflow, and where absolutely everything is a container from the bare metal up.”

Ryan and his team will follow this blog post with a more in-depth introduction to project Airship in the next few days.

“We are pleased to bring continued innovation with Airship, extending the work we started in 2016 with the OpenStack and Kubernetes communities to create a continuum for modern and open infrastructure. Airship will bring new network edge capabilities to these stacks and Intel is committed to working with this project and the many other upstream projects to continue our focus of upstream first development and accelerating the industry.” – Imad Sousou, corporate vice president and general manager of the Open Source Technology Center at Intel

To get involved in this new Open Infrastructure Project for OpenStack, please attend one of the OpenStack Vancouver Summit talks, or go to airshipit.org.

Oracle buys DataScience.com

Oracle Buys DataScience.com.

Oracle announced that it has signed an agreement to acquire DataScience.com, whose platform centralizes data science tools, projects and infrastructure in a fully governed workspace.

Data science teams use the platform to organize work, easily access data and computing resources, and execute end-to-end model development workflows. Leading organizations like Amgen, Rio Tinto, and Sonos are using the DataScience.com platform to improve productivity, reduce operational costs and deploy machine learning solutions faster to power their digital transformations.

DataScience.com empowers data scientists to deliver the business-changing insights executives expect in less time with self-service access to open source tools, data and computing resources, while also improving the ability of IT teams to support that work. Oracle embeds Artificial Intelligence (AI) and machine learning capabilities across its software as a service (SaaS) and platform as a service (PaaS) solutions, including big data, analytics and security operations, to enable digital transformations. Together, Oracle and DataScience.com will provide customers with a single data science platform that leverages Oracle Cloud Infrastructure and the breadth of Oracle’s integrated SaaS and PaaS offerings to help them realize the full potential of machine learning.

“Every organization is now exploring data science and machine learning as a key way to proactively develop competitive advantage, but the lack of comprehensive tooling and integrated machine learning capabilities can cause these projects to fall short,” said Amit Zavery, Executive Vice President of Oracle Cloud Platform, Oracle. “With the combination of Oracle and DataScience.com, customers will be able to harness a single data science platform to more effectively leverage machine learning and big data for predictive analysis and improved business results.”

“Data science requires a comprehensive platform to simplify operations and deliver value at scale,” said Ian Swanson, CEO of DataScience.com. “With DataScience.com, customers leverage a robust, easy-to-use platform that removes barriers to deploying valuable machine learning models in production. We are extremely enthusiastic about joining forces with Oracle’s leading cloud platform so customers can realize the benefits of their investments in data science.”

DGIT Systems acquires Inomial

DGIT Systems acquires billing systems vendor Inomial.

DGIT Systems announced the acquisition of Inomial Pty Ltd, a billing systems vendor with a strong customer base predominantly located in the Asia Pacific region.

“Inomial’s suite of billing-related products complements DGIT’s award-winning Telflow Service Delivery Platform,” Greg Tilton, CEO of DGIT Systems, said today. “The acquisition of Inomial provides us with significant value: our offer now supports a full-service proposition for our customers, delivered as a cloud or on-premises option.”

The combined Telflow and Inomial value proposition realises the TM Forum Open Digital Architecture, powered by TM Forum Open APIs, with solutions spanning from digital customer channels to virtualised networks.

DGIT Systems and Inomial have worked together as strategic partners with a common group of customers for some time, and collaboration between their R&D programs has produced suites aligned on a dynamic microservices architecture, in keeping with the very latest in IT architecture thinking. DGIT’s acquisition will facilitate a seamless Quote-to-Order-to-Activate-to-Bill flow covering complex enterprise network solutions, managed services and B2B-integrated domestic or global wholesale.

Michael Lawrey, Chairman at DGIT Systems welcomed Inomial Pty Ltd founder Mark Lillywhite to the Board of DGIT Systems noting that “Mark’s ongoing involvement and contribution to DGIT Systems through his role as a Board member and Head of the Billing Systems Division will be invaluable and further strengthen our leadership in this space. DGIT is on a mission to provide the most capable Quote-to-Cash and Order-to-Activate platform for the Telco and Service Provider industries.”

Reducing the lumps with OSS services

As promised in yesterday’s post about lumpy revenues for OSS product companies, today we’ll discuss OSS professional services revenues and the contrasting mindset compared with products.

Professional services revenues are a great way of smoothing out the lumpy revenue streams of traditional OSS product companies. There’s just one problem though. Of all the vendors I’ve worked with, I’ve found that they always have a predilection – they either have a product mindset or a services mindset and struggle to do both well because the mindsets are quite different.

Not only that, but we can break professional services into two categories:

  1. Product-related services – the installation and commissioning of products; and
  2. Consultancy-based services – the value-add services that drive business value from the OSS / BSS

Product companies provide product-related services, naturally. I can’t help but think that if we as an industry provided more of the consultancy-based services, we’d have more justification for greater spend on OSS / BSS (and smoother revenue streams in the process).

Having said that, PAOSS specialises in consultancy-based services (as well as install / commission / delivery services), so we’re always happy to help organisations that need assistance in this space!!

HPE buys Plexxi

HPE to Acquire Plexxi.

Ric Lewis of HPE writes in his blog…
“Our customers live in a hybrid world, running a mix of workloads on traditional IT, as well as private, managed and public clouds. They need to be able to move at cloud-like speed, regardless of where the data lives. HPE is focused on delivering a portfolio of products and services that simplify hybrid IT, helping customers to move faster and to drive business value.

Today, I’m thrilled to announce that we are taking another important step to deliver on this promise of simplification and speed: we’ve reached an agreement to acquire Plexxi, a leading provider of software-defined data fabric networking technology. The company was founded in 2010 and is focused on enabling data center modernization and hybrid cloud with its software-defined data fabric.

Plexxi’s technology will extend HPE’s market-leading software-defined compute and storage capabilities into the high-growth, software-defined networking market, expanding our addressable market and strengthening our offerings for customers and partners. By seamlessly combining Plexxi’s next-generation data center fabric with HPE’s existing software-defined infrastructure, HPE can deliver a true cloud-like experience in the data center. Through this acquisition, we will deliver hyperconverged and composable solutions with a next-generation data network fabric that can automatically create or re-balance bandwidth to workload needs. This will increase agility and efficiency, and accelerate how quickly companies deploy applications and draw business value from their data.

We see two clear opportunities to integrate Plexxi’s innovative technology:

First, we intend to integrate Plexxi technology into our hyperconverged solutions. Building on last year’s SimpliVity acquisition, Plexxi will enable us to deliver the industry’s only hyperconverged offering that incorporates compute, storage and data fabric networking into a single solution, with a single management interface and support. The combined HPE SimpliVity plus Plexxi solution will provide customers with a highly dynamic workload-based model to better align IT resources to business priorities. HPE’s hyperconverged business has been growing at four times the market and HPE was recently named a leader in the Gartner Magic Quadrant for hyperconverged. With Plexxi, we’ll be able to extend this lead over the competition.

Second, Plexxi’s technology will extend our composable infrastructure portfolio, called HPE Synergy. Composable infrastructure, built on HPE OneView, is a new category of infrastructure that delivers fluid pools of storage and compute resources that can be composed and recomposed as business needs dictate. In the near future with Plexxi, we will deliver a composable rack solution that will seamlessly extend our composable fabric to a broader set of use cases across the data center.”

Verizon to move 1,000+ apps to AWS

Verizon is migrating over 1,000 business-critical applications and database backend systems to AWS.
Courtesy of Businesswire.

Amazon Web Services announced that Verizon Communications has selected AWS as its preferred public cloud provider. Verizon is migrating over 1,000 business-critical applications and database backend systems to AWS, several of which also include the migration of production databases to Amazon Aurora—AWS’s relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.

Verizon first started working with AWS in 2015 and has several successful business and consumer applications already running in the cloud. This latest wave of migrations to AWS is part of a corporate-wide initiative at Verizon to increase agility and reduce costs through the use of cloud computing. Standardizing on AWS will enable Verizon to access the most comprehensive set of cloud capabilities available today so it can deliver innovative applications and services in hours versus weeks. To ensure that Verizon’s developers are able to invent on behalf of its customers, the company has also invested in building AWS-specific training facilities, called “dojos,” where its employees can quickly ramp up on AWS technologies and learn how to innovate with speed and at scale.

“We are making the public cloud a core part of our digital transformation, upgrading our database management approach to replace our proprietary solutions with Amazon Aurora,” said Mahmoud El-Assir, Senior Vice President of Global Technology Services at Verizon. “The agility we’ve gained by moving to the world’s leading public cloud has helped us better serve our customers. Working with AWS complements our focus on efficiency, speed, and innovation within our engineering culture, and has enabled us to quickly deliver the best, most efficient customer experiences.”

“Millions of consumers and businesses rely on Verizon to communicate and connect them to every corner of the world at a time when the stakes have never been higher to keep customers satisfied and engaged,” said Mike Clayville, Vice President, Worldwide Commercial Sales at AWS. “We look forward to continuing our work with Verizon as their preferred public cloud provider, helping them to continually transform their business and innovate on behalf of their customers. The combination of Verizon’s team of builders with AWS’s extensive portfolio of cloud services and expertise means that Verizon’s options for delighting their customers are virtually unlimited.”

It’s all a bit lumpy

Being an OSS product supplier to telecom operators is a tough business. There is a constant stream of outgoings on developer costs, cost of sale, general overheads, etc. Unfortunately, revenue streams are rarely so smooth. In fact, they tend to be decidedly lumpy: large spikes of income stemming from customer implementations, on timelines that are unpredictable when forecasting inflows years in advance.

Not only that, but the risks are high due to the complexity and unknowns of OSS implementation projects as well as the lack of repeatability that was discussed in yesterday’s post.

Enduringly valuable businesses achieve their status through predictable, diversified, recurring (and preferably growing) revenue streams, so these need to be objectives of our OSS business models.

Annual maintenance fees (usually in the order of 20-22% of up-front list prices) are the most common recurring revenue model used by OSS product suppliers. Transaction-based pricing is another common model.
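
To put rough numbers on it (all figures hypothetical), here’s a quick sketch of how a 20% annual maintenance fee converts a single licence spike into a steady stream:

    # Hypothetical example: how annual maintenance smooths a lumpy licence sale.
    list_price = 1_000_000        # up-front licence sale (the year-0 spike)
    maintenance_rate = 0.20       # typical 20-22% of list price per year

    for year in range(6):
        licence = list_price if year == 0 else 0
        maintenance = list_price * maintenance_rate if year >= 1 else 0
        print(f"year {year}: licence ${licence:,}, maintenance ${maintenance:,.0f}")

    # At 20%, five years of maintenance matches the original licence revenue.
    print(f"5-year maintenance total: ${5 * list_price * maintenance_rate:,.0f}")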

Cloud subscription (consumption) based models are also becoming more common, although there are always challenges around convincing carriers of the security and sovereignty of such important tools and data being hosted off-site.

I’m fascinated with the platform-plays, like Salesforce, which is a mushrooming form of the subscription model because there’s an ecosystem (or marketplace) of sellers contributing to transaction volumes. OSS and BSS are the perfect platform play but I haven’t seen any built around this style of revenue model yet. [Please let me know if I’ve missed any].

It has also been interesting to observe Cisco’s market success on the back of a perceived revenue shift towards more software and services.

Whenever considering alternate revenue models, I refer back to this great image from Ross Dawson:
Revenue Models
Do any apply to your OSS? Can any apply to your OSS?

Tomorrow we’ll discuss OSS professional services revenues and the contrasting mindset compared with products.

Being an OSS map-maker

“Each problem that I solved became a rule, which served afterwards to solve other problems.”
René Descartes

On a recent project, I spent quite a lot of time thinking in terms of problem statements, then mapping them into solutions that could be broken down for assignment to lots of delivery teams – feeding their Agile backlogs.

On that assignment, like the multitude of OSS projects in the past, there has been very little repetition in the solutions. The people, org structure, platforms, timelines, objectives, etc all make for a highly unique solution each time. And high uniqueness doesn’t easily equate to repeatability. If there’s no repeatability, there’s no point building repeatable tools to improve efficiency. But repeatability is highly desirable for the purpose of reliability, continual improvement, economy of scale, etc.

However, if we look a step above the solution, above the use cases, above the challenges, we have key problem statements and they do tend to be more consistent (albeit still nuanced for each OSS). These problem statements might look something like:

  • We need to find a new vendor / solution to do X (where X is the real problem statement)
  • Realised risks have impacted us badly on past projects (so we need to minimise risk on our upcoming transformation)
  • We don’t get new products out to market fast enough to keep up with competitor Y and are losing market share to them
  • Our inability to resolve network faults quickly is causing customers to lose confidence in us
  • etc

It’s at this level that we begin to have more repeatability, so it’s at this level that it makes sense to create rules, frameworks, etc that are re-usable and refinable. You’ll find some of the frameworks I use under the Free Stuff menu above.

It seems that I’m an OSS map-maker by nature, wanting to take the journey but also to map it out for re-use and refinement.

I’d love to hear whether it’s a common trait and inherent in many of you too. Similarly, I’d love to hear about how you seek out and create repeatability.

An OSS automation mind-flip

I recently had something of a perspective-flip moment in relation to automation within the realm of OSS.

In the past, I’ve tended to tackle the automation challenge from the perspective of applying automated / scripted responses to tasks that are done manually via the OSS. But it’s dawned on me that I had it the wrong way around! That’s only an incremental perspective on the main objective of automation: global zero-touch networks.

If we take all of the tasks performed by all of the OSS around the globe, the number of variants is incalculable… which probably means the zero-touch problem is unsolvable (we might be able to solve for many situations, but not all).

The more solvable approach would be to develop a more homogeneous approach to network self-care / self-optimisation. In other words, the majority of the zero-touch challenge is actually handled at the equivalent of EMS level and below (I’m perhaps using out-dated self-healing terminology, but hopefully terminology that’s familiar to readers) and only cross-domain issues bubble up to OSS level.

As the diagram below describes, each layer up abstracts but connects (as described in more detail in “What an OSS shouldn’t do”). That is, each higher layer in the stack reduces the amount of information/control within the domain it’s responsible for, but it assumes a broader responsibility for connecting domains together.
OSS abstract and connect

The abstraction process reduces the number of self-healing variants the OSS needs to handle. But to cope with the complexity of self-caring for connected domains, we need a more homogeneous set of health information being presented up from the network.

Whereas the intent model is designed to push actions down into the lower layers with a standardised, simplified language, this would be the reverse – pushing network health knowledge up to higher layers to deal with… in a standard, consistent approach.

And BTW, unlike the pervading approach of today, I’m clearly saying that when unforeseen (or not previously experienced) scenarios appear within a domain, they’re not just kicked up to the OSS; the domains are stoic enough to deal with the situation themselves.
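
To illustrate the shape of this idea, here’s a deliberately simplistic sketch in Python (the schema is invented; a real homogeneous health model would be far richer). Each domain self-heals locally and only escalates the cross-domain residue to the OSS:

    # Toy sketch (invented schema): domains report health in one homogeneous
    # format, self-heal locally, and only escalate what they can't resolve.
    from dataclasses import dataclass

    @dataclass
    class HealthEvent:
        domain: str         # eg "radio", "transport", "core"
        severity: str       # standardised: "ok" | "degraded" | "failed"
        cross_domain: bool  # does resolution involve another domain?

    def resolved_in_domain(event: HealthEvent) -> bool:
        """The domain is 'stoic': it handles anything it can locally."""
        return not event.cross_domain and event.severity != "failed"

    def escalate_to_oss(events):
        """Only the residue bubbles up, so the OSS sees far fewer variants."""
        return [e for e in events if not resolved_in_domain(e)]

    events = [
        HealthEvent("radio", "degraded", cross_domain=False),   # fixed locally
        HealthEvent("transport", "failed", cross_domain=True),  # bubbles up
    ]
    for e in escalate_to_oss(events):
        print(f"OSS orchestrates across domains: {e.domain} ({e.severity})")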

ZTE lifeline from Trump?

Last week’s news of ZTE ceasing major operations due to US embargoes has taken another turn.

President Trump has now tweeted, “President Xi of China, and I, are working together to give massive Chinese phone company, ZTE, a way to get back into business, fast. Too many jobs in China lost. Commerce Department has been instructed to get it done!”

What will be the next twist in this saga?

Sigma Systems signs with Telkomsel

Sigma Systems Supports Telkomsel in Building a Digital Indonesia.

Sigma Systems announced a major deal with Indonesia’s leading mobile network operator, Telkomsel.

With more than 190 million customers, Telkomsel is currently the largest mobile operator in Indonesia. Telkomsel has consistently implemented the latest mobile technology and was the first to commercially launch 4G LTE mobile services in the country. Entering the digital era, Telkomsel continues to expand its digital business to incorporate advertising, lifestyle, mobile financial services, and Internet of Things.

In support of their digital mandate, Telkomsel has selected Sigma Systems as a partner, establishing Sigma Catalog as the central enterprise catalog to underpin their evolving business.

“We are pleased to partner with Sigma to deploy a B/OSS platform that enables the rapid creation of personalized, micro-segmented offers to our customers. Sigma’s agile delivery methodology and product-centric approach ultimately supports Telkomsel’s mission of building a Digital Indonesia,” said Montgomery Hong, CIO at Telkomsel.

Sigma Systems CEO, Tim Spencer, commented: “Telkomsel is at the forefront of digital transformation in the region, and recognizes the critical role a catalog-driven solution plays in accelerating the creation, selling and delivery of innovative and deeply personalized market offerings. Sigma is honored to work with Indonesia’s leading mobile operator as they transition into a truly digital business.”

KCOM partners with Amdocs

Amdocs to provide KCOM with a service delivery platform offering capabilities for next generation networks including flexible bandwidth, time-bound bandwidth and SD-WAN services.

Amdocs announced that KCOM, a leading provider of communications, applications and integration services to the UK enterprise and consumer market, has selected an Amdocs service delivery platform to enhance its new next generation network infrastructure. Amdocs will provide a new orchestration platform that will support KCOM’s zero-touch customer service portal giving it the capability to select bandwidth on-demand, time-bound bandwidth and software-defined networking in wide area network (SD-WAN) options.

KCOM has deployed fibre-to-the-premises capability to 150,000 properties in Hull and East Yorkshire in the north of England, representing 75 percent of its network in the region. It expects to have deployed full fibre across its entire network area by March 2019 and already has more customers with full fibre connections than with ADSL connections. KCOM has ambitious plans to exploit its new network, which will ultimately see it deliver new Network Function Virtualization (NFV) based services to its customers. These will include service activation of orders where network components are managed through the delivery and activation process. Amdocs will provide automated software management of the network components to ensure that KCOM’s voice network service is capable of being delivered through online portals.

KCOM will also be migrating its public switched telephone network (PSTN) residential customers to Voice Over Internet Protocol (VoIP) Softswitch as part of this project, where the service delivery platform from Amdocs will be able to manage and automate any fulfilment steps without affecting the voice service received by residential customers.

“We want to improve the services we deliver to our customers and the experience we offer them. We’re starting with our voice services, so we need a platform that can deliver this phase and then be deployed more widely across our business as the basis for other services,” said Sean Royce, Executive Vice President for Technology, Service and Operations at KCOM. “By working with Amdocs, we are gaining a partner that shares KCOM’s ambitions and can grow with us.”

“KCOM is fully embracing the transition to virtualized networks which will see it transform its business and offer exceptional network performance to its customers,” said Gary Miles, chief marketing officer, Amdocs. “This will allow KCOM to provide service order management capabilities that can be expanded to other services and customer segments as it grows, and to stay competitive by improving network speed, capabilities and customer engagement.”

Vivo expands with Netcracker

Vivo Expands Netcracker’s Service Management as Part of Large-Scale Digital Transformation.

Netcracker Technology announced that Vivo, Telefónica Group’s Brazilian subsidiary, has upgraded and expanded its use of Netcracker’s Service Management solution. The solution will help Vivo standardize provisioning and activation for all B2B and B2C mobile services.

Vivo is the leading communications service provider in Brazil, delivering fixed-line and mobile voice, television and internet broadband services to approximately 97 million customers across the country.

Upgrading and expanding Netcracker’s Service Management solution is part of Vivo’s larger digital transformation and will enable Vivo to accelerate core mobile service delivery and management, ensuring speedy activation, provisioning and assurance. This expansion comes shortly after the announcement that Vivo extended its use of Netcracker’s Revenue Management solution.

“Netcracker’s proven ability to deliver sophisticated solutions that enable and support digital transformation strategies largely influenced our decision to upgrade and expand our use of its service management platform,” said Adriana Lika, IT Director at Vivo. “By standardizing and streamlining the way we activate and provision mobile services, we will be able to provide our customers with a better experience.”

“In order to keep up with the demands from millions of subscribers in the rapidly digitalizing Latin American market, it’s critical for service providers to be agile,” said Fabio Gatto, General Manager of Latin America at Netcracker. “We are excited to continue working with Vivo and be the provider of choice for revenue and service management.”

ZTE stops operating activities and suspends trading

As a result of an export denial order by the U.S. Department of Commerce’s Bureau of Industry and Security (BIS), ZTE has opted to cease all major operating activities and trading of shares on the Hong Kong Stock Exchange.

The ban on ZTE comes as a result of selling products to Iran. It prevents ZTE from purchasing components (eg semiconductors), software (eg licensed components of Android OS) or technology from US manufacturers. It impacts ZTE products from smartphones to routing and switching gear.

Huawei is also under investigation by the US Department of Justice.

This could have widespread ramifications for the telecommunications industry as operators will now no longer be able to source replacement parts or upgrades. You could speculate that wholesale network replacement projects will soon follow as well as the trickle-down effect into OSS/BSS.

Note that I’m not just talking about ZTEsoft OSS/BSS here. I’m talking about all the other OSS/BSS that will need to be updated as a result of the seemingly inevitable ZTE equipment change-out. And I’m talking about the secondary, tertiary, etc impacts as well as all subsequent ripples. It starts with 5G but the ramifications could be much bigger over the next few years.

How to run an OSS PoC

This is the third in a series describing the process of finding the right OSS solution for your specific needs and getting estimated pricing to help you build a business case.

The first post described the overall OSS selection process we use. The second described the way we poll the market and prepare a short-list of OSS products / vendors based on current capabilities.

Once you’ve prepared the short-list it’s time to get into specifics. We generally do this via a PoC (Proof of Concept) phase with the short-listed suppliers. We have a few very specific principles when designing the PoC:

  • We want it to reflect the operator’s context so that they can grasp what’s being presented (which can be a challenge when a vendor runs their own generic demos). This “context” is usually in the form of using the operator’s device types, naming conventions, service types, etc. It also means setting up a network scenario that is representative of the operator’s, which could be a hypothetical model, a small segment of a real network, lab model or similar
  • PoC collateral must clearly describe the PoC and related context. It should clearly identify the important scenarios and selection criteria. Ideally it should logically complement the collateral provided in the previous step (ie the requirement gathering)
  • We want it to focus on the most important conditions. If we take the 80/20 rule as a guide, we can quickly identify the most common service types, devices, configurations, functions, reports, etc that we want to model
  • Identify efficacy across those most important conditions. Don’t just look for the functionality that implements those conditions, but also the speed at which they can be done at a scale required by the operator. This could include bulk load or processing capabilities and may require simulators (or real integrations – see below) to generate volume
  • We want it to be as simple as is feasible so that it minimises the effort required of both suppliers and operators
  • Consider a light-weight integration if possible. One of the biggest challenges with an OSS is getting data in and out. If you can get a rapid integration with a real network (eg a microservice, SNMP traps, syslog events or similar; see the sketch after this list) then it will give an indication of integration challenges ahead. However, note the previous point as it might be quite time-consuming for both operator and supplier to set up a real-time integration
  • Take note of the level of resourcing required by each supplier to run the PoC (eg how many supplier staff, server scaling, etc.). This will give an indication of the level of resourcing the operator will need to allocate for the actual implementation, including organisational change management factors
  • Attempt to offer PoC platform consistency so that all suppliers are on a level playing field, which might be through designing the PoC on common devices or topologies with common interfaces. You may even look to go the opposite way if you think the rarity of your conditions could be a deal-breaker
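
As an example of the light-weight integration principle above, here’s a minimal sketch of a syslog-over-UDP listener that tallies event volumes during a PoC (the port and the bare-bones handling are assumptions for illustration only):

    # Minimal PoC integration sketch: receive syslog events over UDP and tally
    # volume per source, to gauge whether a candidate OSS could keep up with
    # the operator's real event rates. (Port/handling chosen for illustration.)
    import socket
    from collections import Counter

    HOST, PORT = "0.0.0.0", 5514   # unprivileged stand-in for syslog's 514

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    counts = Counter()

    try:
        while True:
            data, (src, _port) = sock.recvfrom(4096)
            counts[src] += 1
            print(f"{src}: {data.decode(errors='replace').strip()}")
    except KeyboardInterrupt:
        print("events per source:", dict(counts))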

Note that we tend to scale the size/complexity/reality of the PoC to the scale of the project budget, out of consideration for vendor and operator alike. If it’s a small project / budget, then we do a light PoC. If it’s a massive transformation, then the PoC definitely has to go deeper (ie more integrations, more scenarios, more data migration and integrity challenges, etc)… although ultimately our customers decide how deep they’re comfortable going.

Best of luck and feel free to contact us if we can assist with the running of your OSS PoC.

Telefónica Spain transforms DCs with Nuage

Telefónica Spain transforms its data centers with Nokia high-performance routing and Nuage Networks Virtualized Cloud Services.

Nokia and its venture focused on software-defined networking (SDN), Nuage Networks, are partnering with Telefónica Spain to build an open, elastic and highly secure data center network infrastructure, dramatically expanding the agility, scale and efficiency of its cloud-based services.

A key part of Telefónica’s cloud vision is to offer enterprises the ability to easily order, customize and configure value-added services through a self-service portal for on-demand delivery. Having already deployed an SD-WAN infrastructure in 2017, Telefónica is leveraging and extending that investment to include modern software-defined data centers (SDDC). The Nuage Networks Virtualized Cloud Services (VCS) solution automates secure connectivity and network services across efficient and advanced datacenter fabrics powered by Nokia and Nuage Networks routers. This modernization will propel Telefónica service offerings ranging from enterprise hosting and co-location to enterprise wide-area networks (WAN) and enterprise cloud infrastructure.

The Nuage Networks VCS solution allows Telefónica to accelerate the provisioning of new customers, applications and networks with cloud-scale efficiency. The solution automatically establishes networking configurations, with quality of service (QoS) and security policies. It also enables zero-touch, policy-based network automation of applications running on any infrastructure, whether virtual machines, containers or bare-metal servers. The solution is OpenStack compliant and fully certified with the Red Hat Enterprise Linux OpenStack platform.

Additionally, Nuage Networks will enable seamless hybrid cloud interconnection between private data centers, Telefónica’s SDDC and public clouds, where Telefónica’s customers require solutions to address the needs of cloud-based applications such as cloudbursting, latency optimization, and virtualized networking and routing services.

To implement Telefónica’s advanced leaf-spine datacenter architecture – a specialized topology that minimizes latency and bottlenecks – Nokia and Nuage Networks are delivering routing platforms with the density, flexibility, and cost efficiency to meet Telefónica’s objectives across the full range of interfaces. Telefónica’s hosted infrastructure ensures that enterprises get the highest level of agility and responsiveness, while avoiding the complexity and risks of managing their own cloud.

Joaquín Mata, director of operations, network and IT at Telefónica España, said: “To meet the rapidly emerging business requirements for agility and on-demand deployments, we moved aggressively to build our business connectivity services around a new cloud-based architecture. Nuage Networks provided us with a highly scalable SDN architecture that could support all our services across all our regions without disruption. We are confident our customers will significantly improve their businesses with these new cloud-based services.”

Sunil Khandekar, founder and chief executive officer of Nuage Networks from Nokia, said: “The IT, communications and service needs of today’s enterprises have much higher demands than just a few years ago and therefore require new technologies to support them. We worked closely with Telefónica to assure the Nuage Networks SDN solutions address the requirements of its entire network infrastructure from the data center to remote WAN sites around the globe. Enterprise customers who need more flexibility and agility to quickly propel themselves into new markets can get it through trusted providers like Telefónica.”

Overview of the solution to be deployed:

  • The Nuage Networks VCS enables Telefónica to automate the configuration, management and optimization of virtual networks in the datacenter, including bandwidth, QoS policy, and security services.
  • The Nuage Networks VCS provides per-tenant micro-segmentation and access controls to individual applications and workloads, irrespective of whether they are bare metal, virtual machines, or containers.
  • Nuage Networks enables Telefónica to deliver SD-WAN and SDDC services using a single common Nuage Networks Virtualized Services Platform (VSP), paving the way for a massively multi-tenant, fully automated and highly secure SDN infrastructure that spans the datacenter, the branch and the cloud.
  • Telefónica’s SDDC solution combines high performance routing and gateway functionality delivered by Nokia’s FP4-powered 7750 SR-1 routers for datacenter gateway and tera-leaf functionality. Nokia 7250 IXR-10 routers are deployed as super spine nodes, delivering massive density for 100GbE interconnection. Both platforms share the common SR OS operating system, proven over years of deployment in networks of leading operators, including Telefónica. Virtualized instantiations of network functions such as route reflectors (VSR-RR) are also based on the SR OS, and seamlessly deployed alongside SDDC & carrier SDN implementations.
  • The Nuage Networks 210 WBX will be used as a data center leaf router, offering a high density, flexible, cost-effective solution for 1GbE, 10GbE, 25GbE, 40GbE, 50GbE and 100GbE interfaces.

How to identify a short-list of best-fit OSS suppliers for you

In yesterday’s post, we talked about how to estimate OSS pricing. One of the key pillars of the approach was to first identify a short-list of vendors / integrators best-suited to implementing your specific OSS, then working closely with them to construct a pricing model.

Finding the right vendor / integrator can be a complex challenge. There are dozens, if not hundreds, of OSS / BSS solutions to choose from and there are rarely like-for-like comparators. There are some generic comparison tools such as Gartner’s Magic Quadrant, but there’s no way that they can cater for the nuanced requirements of each OSS operator.

Okay, so you don’t want to hear about problems. You want solutions. Well today’s post provides a description of the approach we’ve used and refined across the many product / vendor selection processes we’ve conducted with OSS operators.

We start with a short-listing exercise. You won’t want to deal with dozens of possible suppliers. You’ll want to quickly and efficiently identify a small number of candidates that have capabilities that best match your needs. Then you can invest a majority of your precious vendor selection time in the short-list. But how do you know the up-to-date capabilities of each supplier? We’ll get to that shortly.

For the short-listing process, I use a requirement gathering and evaluation template. You can find a PDF version of the template here. Note that the content within it is out-dated and I now tend to use a more benefit-centric classification rather than feature-centric classification, but the template itself is still applicable.

STEP ONE – Requirement Gathering
The first step is to prepare a list of requirements (as per page 3 of the PDF):
Requirement Capture.
The left-most three columns in the diagram above (in white) are filled out by the operator, who classifies the list of requirements and how important each one is (ie mandatory, etc). The depth of requirements (column 2) is up to you and can range from specific technical details to high-level objectives. They could even take the form of user stories or intended benefits.

STEP TWO – Issue your requirement template to a list of possible vendors
Once you’ve identified the list of requirements, you want to identify a list of possible vendors/integrators that might be able to deliver on those requirements. The PAOSS vendor/product list might help you to identify possible candidates. We then send the requirement matrix to the vendors. Note that we also send an introduction pack that provides the context of the solution the OSS operator needs.

STEP THREE – Vendor Self-analysis
The right-most three columns in the diagram above (in aqua) are designed to be filled out by the vendor/integrator. The suppliers are best suited to fill out these columns because they best understand their own current offerings and capabilities.
Note that the Status column is a pick-list of compliance levels, where FC = Fully Compliant. See page 2 of the template for other definitions. Given that it is a self-assessment, you may choose to change the Status (the vendor’s self-ranking) if you know better, and/or ask more questions to validate the assessments.
The “Module” column identifies which of the vendor’s many products would be required to deliver on the requirement. This column becomes important later on as it will indicate which product modules are most important for the overall solution you want. It may allow you to de-prioritise some modules (and requirements) if price becomes an issue.

STEP FOUR – Compare Responses
Once all the suppliers have returned their matrix of responses, you can compare them at a high level based on the summary matrix (on page 1 of the template):
OSS Requirement Summary
For each of the main categories, you’ll be able to quickly see which vendors are the most FC (Fully Compliant) or NC (Non-Compliant) on the mandatory requirements.
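
As an illustration of how those responses might be rolled up for comparison, here’s a short sketch in Python. The scoring weights, the “PC” (Partially Compliant) level and the sample requirements are my own assumptions for illustration, not part of the template:

    # Illustrative roll-up of vendor self-assessments. The weights and the
    # sample data below are assumptions, not part of the PAOSS template.
    COMPLIANCE_SCORE = {"FC": 1.0, "PC": 0.5, "NC": 0.0}
    PRIORITY_WEIGHT = {"mandatory": 3, "preferred": 2, "optional": 1}

    responses = {  # hypothetical vendor responses per requirement
        "Vendor A": [("alarm correlation", "mandatory", "FC"),
                     ("service catalog", "preferred", "PC")],
        "Vendor B": [("alarm correlation", "mandatory", "PC"),
                     ("service catalog", "preferred", "FC")],
    }

    for vendor, rows in responses.items():
        achieved = sum(PRIORITY_WEIGHT[p] * COMPLIANCE_SCORE[s] for _, p, s in rows)
        possible = sum(PRIORITY_WEIGHT[p] for _, p, _ in rows)
        print(f"{vendor}: {achieved / possible:.0%} weighted compliance")

A roll-up like this is only a starting point; a single NC against a mandatory requirement may matter far more than the aggregate score suggests.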

Of course you’ll need to analyse more deeply than just the Summary Matrix, but across all the vendor selection processes we’ve been involved with, there has always been a clear identification of the suppliers of best fit.

Hopefully the process above is fairly clear. If not, contact us and we’d be happy to guide you through the process.

Bouygues Telecom extends with Netcracker

Bouygues Telecom Selects Netcracker’s Revenue Management Solution in its Drive Toward Convergence.

Netcracker Technology announced that the French operator Bouygues Telecom has extended its use of Netcracker’s Revenue Management solution. Bouygues Telecom already uses Netcracker’s solution for its mobile subscribers and plans to use the platform for fixed-line subscribers in Fall 2018.

As part of this initiative, Netcracker delivered an upgraded Revenue Management solution to Bouygues Telecom to support long-term growth strategies. The highly configurable and scalable Netcracker BSS platform will help Bouygues Telecom streamline core customer-facing processes and reduce operational costs.

“Throughout our partnership, Netcracker has validated its dedication and ability to continuously support our mission-critical billing operations,” said Alain Moustard, Chief Information Officer at Bouygues Telecom. “We extended our partnership with Netcracker and its next-generation Revenue Management solution to meet the digital transformation requirements of our market.”

“Leveraging scalable BSS is critical for service providers to deliver and bill for the highly digital and innovative services that customers expect today,” said Roni Levy, General Manager of Europe at Netcracker. “We are excited to continue our partnership with Bouygues Telecom as it strives to provide its customers with the most innovative, user-friendly digital services.”

Nokia acquires SpaceTime Insight

Nokia acquires SpaceTime Insight to expand its IoT software portfolio and accelerate vertical application development.

Nokia has acquired SpaceTime Insight to expand its Internet of Things (IoT) portfolio and IoT analytics capabilities, and accelerate the development of new IoT applications for key vertical markets.

Based in San Mateo, California, with offices in the U.S., Canada, U.K., India and Japan, SpaceTime Insight provides machine learning-powered analytics and IoT applications for some of the world’s largest transportation, energy and utilities organizations, including Entergy, FedEx, NextEra Energy, Singapore Power and Union Pacific Railroad. Its machine learning models and other advanced analytics, designed specifically for asset-intensive industries, predict asset health with a high degree of accuracy and optimize related operations. As a result, SpaceTime Insight’s applications help customers reduce cost and risk, increase operational efficiencies, reduce service outages and more.

The acquisition supports Nokia’s software strategy by bringing SpaceTime Insight’s sales expertise and proven track record in IoT application development, machine learning and data science to the Nokia Software IoT product unit. It will strengthen Nokia’s IoT software portfolio and IoT analytics capabilities, and accelerate the development of Nokia’s IoT offerings to deliver high-value IoT applications and services to new and existing customers.

The addition of SpaceTime Insight will also broaden the company’s ability to deliver new, advanced applications for key vertical markets, including energy, logistics, transportation and utilities.

Paul Lau, Chief Grid Strategy and Operations Officer at Sacramento Municipal Utility District, said: “We’ve partnered with SpaceTime to help us be more responsive, more efficient and ultimately able to deliver more value to our customers. Combining their innovative solutions with Nokia’s world-class portfolio will provide customers with powerful new tools to better manage assets, maximize efficiencies and deliver new capabilities.”

Bhaskar Gorti, president of Nokia Software, said: “Adding SpaceTime to Nokia Software is a strong step forward in our strategy, and will help us deliver a new class of intelligent solutions to meet the demands of an increasingly interconnected world. Together, we can empower customers to realize the full value of their people, processes and assets, and enable them to deliver rich, world-class digital experiences.”

SpaceTime Insight and its CEO Rob Schilling will join the IoT product unit within the Nokia Software business group.

Rob Schilling, CEO of SpaceTime Insight, said: “Today marks a transformational moment for SpaceTime, and I’m delighted to join forces with one of the world’s top organizations, a global brand that is reshaping the future of networking and intelligent software. I am excited for this incredible opportunity to help accelerate and scale Nokia’s IoT business and provide a new class of next-generation IoT solutions customers cannot find anywhere else.”