Optimisation Support Systems

We’ve heard of OSS being an acronym for operational support systems, operations support systems, even open source software. I have a new one for you today – Optimisation Support Systems – that exists for no purpose other than to drive a mindset shift.

“I think we have to transition from “expectations” in a hype sense to “expectations” in a goal sense. NFV is like any technology; it depends on a business case for what it proposes to do. There’s a lot wrong with living up to hype (like, it’s impossible), but living up to the goals set for a technology is never unrealistic. Much of the hype surrounding NFV was never linked to any real business case, any specific goal of the NFV ISG.”
Tom Nolle
in his blog here.

This is a really profound observation (and entire blog) from Tom. Our technology, OSS included, tends to be surrounded by “hyped” expectations – partly from our own optimistic desires, partly from vendor sales pitches. It’s far easier to build our expectations from hype than to actually understand and specify the goals that really matter. Goals that are end-to-end in nature and preferably quantifiable.

When embarking on a technology-led transformation, our aim is to “make things better,” obviously. A list of hundreds of functional requirements might help. However, having an up-front, clear understanding of the small number of use cases you’re optimising for tends to define much clearer goal-driven expectations.

Security and privacy as an OSS afterthought?

I often talk about OSS being an afterthought for network teams. I find that they’ll often design the network before thinking about how they’ll operationalise it with an OSS solution. That applies both to network products (eg developing a new device and only thinking about building the EMS later) and to building the networks themselves.

It can be a bit frustrating because we feel we can give better solutions if we’re in the discussion from the outset. As OSS people, I’m sure you’ll back me up on this one. But we can’t go getting all high and mighty just yet. We might just be doing the same thing… but to security, privacy and analytics teams.

In terms of security, we’ll always consider security-based requirements (usually around application security, access management, etc) in our vendor / product selections. We’ll also include Data Control Network (DCN) designs and security appliance (eg firewalls, IPS, etc) effort in our implementation plans. Maybe we’ll even prescribe security zone plans for our OSS. But security is more than that (check out this post for example). We often overlook the end-to-end aspects such as central authentication, API hardening, server / device patching, data sovereignty, etc and it then gets picked up by the relevant experts well into the project implementation.

Another one is privacy. Regulations like GDPR and the Facebook trials show us the growing importance of data privacy. I have to admit that historically, I’ve been guilty on this one, figuring that the more data sets I could stitch together, the greater the potential for unlocking amazing insights. Just one problem with that model – the more data sets that are stitched together, the more likely that privacy issues arise.

We increasingly have to figure out ways to weave security, privacy and analytics into our OSS planning up-front and not just think of them as overlays that can be developed after all of our key decisions have been made.

New OSS functionality or speed and scale?

We all know that revenue per bit (of data transferred across comms networks) is trending lower. How could we not? It’s posited as one of the reasons for declining profitability of the industry. The challenge for telcos is how to engineer an environment of low revenue per bit but still be cost viable.

I’m sure there are differentiated comms products out there in the global market. However, for the many products that aren’t differentiated, there’s a risk of commoditisation. Customers of our OSS are increasingly moving into a paradigm of commoditisation, which in turn impacts the form our OSS must mould themselves to.

The OSS we deliver can either be the bane or the saviour. They can be a differentiator where otherwise there is none. For example, getting each customer’s order ready for service (RFS) faster than competitors. Or by processing orders at scale, yet at a lower cost-base through efficiencies / repeatability such as streamlined products, processes and automations.

OSS exist to improve efficiency at scale of course, but I wonder whether we lose sight of that sometimes? I’ve noticed that we have a tendency to focus on functionality (ie delivering new features) rather than scale.

This isn’t just the OSS vendors or implementation teams either by the way. It’s often apparent in customer requirements too. If you’ve been lucky enough to be involved with any OSS procurement processes, which side of the continuum was the focus – on introducing a raft of features, or narrowing the field of view down to doing the few really important things at scale and speed?

Expanding your bag of OSS tricks

Let me ask you a question – when you’ve expanded your bag of tricks that help you to manage your OSS, where have they typically originated?

By reading? By doing? By asking? Through mentoring? Via training courses?
Relating to technical? People? Process? Product?
Operations? Network? Hardware? Software?
Design? Procure? Implement / delivery? Test? Deploy?
By retrospective thinking? Creative thinking? Refinement thinking?
Other?

If you were to highlight the questions above that are most relevant to the development of your bag of tricks, how much coverage does your pattern show?

There are so many facets to our OSS (ie. tentacles on the OctopOSS) aren’t there? We have to have a large bag of tricks. Not only that, we need to be constantly adding new tricks too right?

I tend to find that our typical approaches to OSS knowledge transfer cover only a small subset (think about discussion topics at OSS conferences that tend to just focus on the technical / architectural)… yet don’t align with how we (or maybe just I) have developed capabilities in the past.

The question then becomes, how do we facilitate the broader learnings required to make our OSS great? To introduce learning opportunities for ourselves and our teams across vaguely related fields such as project management, change management, user interface design, process / workflows, creative thinking, etc, etc.

An alternate way of slicing OSS projects

One of the biggest challenges of big bang OSS project implementations is that all of the business value (ie the OSS and its data, workflows, integrations, etc) gets delivered at once, normally at the end of a lengthy exercise.

Ok, ok, so the delivery of value is not a challenge, it’s the implications of a big delivery of value that’s the challenge – implications that include:

  1. If the project runs out of funds before the project finishes, no value is delivered
  2. If there’s no modularity of delivery then the project team must stay the course of the original project plan. There’s no room for prioritising or dropping or including delivery modules. Project plans are rarely perfect at first after all
  3. Any changes in project plan tend to have knock-on effects into the rest of the delivery due to the sequential nature of typical project plans
  4. Any delivery of value represents a milestone, which in turn demonstrates momentum for the project… a key change management and team morale strategy
  5. Large deliverables represent the proverbial “pig in the python” – only one segment of the python (ie segment of the project delivery team) is engaged (hyper-engaged) whilst the other segments remain under-utilised.  This isn’t great for project flow or utilisation

When tasked with designing a project schedule, I’ve noticed that many vendors tend to follow the typical waterfall delivery and corresponding payment milestones (eg. design, then build, then test, then deploy, then hand over). The downside of this approach is that the business value (for the customer) is delivered at the end of the handover (ie big bang). There’s no business value in delivering design artefacts for example – the customer can’t use them to perform operational tasks.

The model I prefer sees incremental business value being delivered such as:

  • Proof of Concept (PoC) build
  • Sandpit build
  • Out of the box (OOTB) production build (ie. no customisations)
  • End-to-end use case #1 delivery (ie. design, build*, test, deploy, handover)
  • E2E use case #2 delivery
  • E2E use case #n delivery

where build* includes incremental configuration, customisation, integration, data migration, etc.

Telstra hosts “OSS & Networks for the Future architecture” tomorrow

I’m looking forward to dropping in on an “OSS & Networks for the Future architecture” seminar being hosted by Telstra tomorrow. Hope to see you there.

The agenda is as follows:
8:30 Welcome & registration | Johanne Mayer – Moderator (Global evangelist NaaS 2020, Telstra)
9:00 Introduction | Gary Traver (Director Media Product Engineering, Telstra)
9:15 TMF Open Digital Architecture Update | Ken Dilbeck (VP, Collaborative R&D, TM Forum)
10:00 TMF Open API Rel 18.0 | Pierre Gauthier (Chief API architecture, TM Forum)
11:00 Break
11:15 NaaS API Component Suite & Operational Domain Manager (ODM) | Corey Clinger (OSS Expert, Telstra)
11:45 Service Modeling and Exposure | Raman Bhalla (Chief architect NaaS, Telstra)
12:30 Lunch & onsite demo
13:30 MEF Update | Dan Pitt (Senior VP, MEF)
14:15 Closed-Loop Assurance across domains | Paula Rujak (Head of Architecture, Network 2020, Telstra)
14:45 Telstra NaaS Transformation Learning | Guy Lupo (GM, Head of NaaS 2020, Telstra)
15:30 Break
15:45 ETSI ZSM Update | Klaus Martiny (Deutsche Telekom and ETSI ZSM chair)
16:30 Futurism: Who will be Driving your Network? | Guy Lupo (GM, Head of NaaS 2020, Telstra)

Please send us an email if you’d like to get a summary of the event.

Zero touch network & Service Management (ZSM)

Zero touch network & Service Management (ZSM) is a next-gen, closed-loop network management initiative hosted by ETSI. An ETSI blog has just described the first ZSM Proof of Concept (PoC). The slide deck describing the PoC, supplied by EnterpriseWeb, can be found here.

The diagram below shows the conceptual closed-loop assurance architecture used within the PoC (diagram: ETSI ZSM PoC).

It contains some similar concepts to a closed-loop traffic engineering project designed by PAOSS back in 2007, but with one big difference. That 2007 project was based on a single-vendor solution, as opposed to the open, multi-vendor PoC demonstrated here. Both were based on the principle of using assurance monitors to trigger fulfillment responses. For example, ours used SLA threshold breaches on voice switches to trigger automated remedial response through the OSS‘s provisioning engine.

For this newer example, ETSI’s blog details, “The PoC story relates to a congestion event caused by a DDoS (Distributed Denial of Service) attack that results in a decrease in the voice quality of a network service. The fault is detected by service monitoring within one or more domains and is shared with the end-to-end service orchestrator which correlates the alarms to interpret the events, based on metadata and metrics, and classifies the SLA violations. The end-to-end service orchestrator makes policy-based decisions which trigger commands back to the domain(s) for remediation.”
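To make the closed-loop principle a little more concrete, here’s a minimal sketch in Python. It’s illustrative only – the helpers (collect_metrics, remediate) and the SLA threshold are hypothetical, and a real ZSM implementation spans domain orchestrators and an end-to-end service orchestrator rather than a single polling script – but the monitor → classify → remediate loop is the same idea as both the PoC and our 2007 project.

```python
# Minimal closed-loop assurance sketch (illustrative only).
# collect_metrics() and remediate() are hypothetical placeholders.

import time

SLA_MOS_THRESHOLD = 3.5   # hypothetical voice-quality SLA (Mean Opinion Score)


def collect_metrics(service_id: str) -> dict:
    """Placeholder for per-domain service monitoring / telemetry."""
    return {"service_id": service_id, "mos": 3.1, "domain": "core"}


def classify_violation(metrics: dict) -> bool:
    """E2E orchestrator correlates events and classifies SLA violations."""
    return metrics["mos"] < SLA_MOS_THRESHOLD


def remediate(metrics: dict) -> None:
    """Policy-based decision triggers commands back to the affected domain."""
    print(f"Triggering remediation in {metrics['domain']} for {metrics['service_id']}")


def closed_loop(service_id: str, interval_s: int = 60) -> None:
    """Monitor -> classify -> remediate, repeated: the essence of the loop."""
    while True:
        metrics = collect_metrics(service_id)
        if classify_violation(metrics):
            remediate(metrics)
        time.sleep(interval_s)
```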

You’ll notice one of the key call-outs in the diagram above is real-time inventory. That was much harder for us to achieve back in 2007 than it is now with virtualised network and compute layers providing real-time telemetry. We used inventory that was only auto-discovered once daily and had to build in error handling, whilst relying on over-provisioned physical infrastructure.

It’s exciting to see these types of projects being taken forward by ETSI, EnterpriseWeb, et al.

Celcom selects Huawei cloud OSS

Celcom Partners with Huawei to Apply a Cloud-Based Platform for Digitized Network Operations.

Celcom Axiata Berhad inked an agreement with Huawei Technologies (Malaysia) Sdn. Bhd. to apply the Cloud-based Digitized Operation Platform, Software as a Service (SaaS) solution.

Celcom will be the first in the country to adopt a full suite cloud-based Operation Support Service (OSS) system to accelerate agility in their automation and the intelligence of network management, and to pave the way for their journey towards becoming a digital company.

The Digitized Operation Platform brings together Artificial Intelligence (AI) and Machine Learning technology powered by Huawei’s award-winning Operation Web Services (OWS) platform, to enhance Celcom’s capabilities in managing increasingly complex networks and services. It also enables Celcom to transform their daily operations from reactive to proactive and predictive, and further solidify their relentless drive to achieve excellence in customer experience.

The agreement to acquire the platform for Celcom’s network operation was signed by Amandeep Singh, Chief Technology Officer of Celcom Axiata Berhad and Baker Zhouxin, Chief Executive Officer, Huawei Technologies (Malaysia) Sdn. Bhd., and also witnessed by Bassaharil Mohd Yusop, Head of Procurement, Celcom Axiata Berhad and Tang Qibing, President of Global Technical Service Department, Huawei Technologies Co. Ltd.

Through this partnership, Huawei aims to leverage its Digitized Operation AUTomation & INtelligence Services Solution (AUTIN™), and share global experience with Celcom to achieve a visualised, automated and intelligent network operation.

Amandeep Singh, Chief Technology Officer of Celcom Axiata Berhad, said that the partnership signifies Celcom’s ongoing commitment in delivering the best network experience to the customers.

“Celcom will constantly continue the evolution of its network with the latest technologies to bring an awesome experience for Malaysians. It is critical that we explore the capabilities of new generation technology with global partners like Huawei.”

“The Digitized Operation Platform will increase Celcom’s efficacy in managing our daily operations, readiness in managing potential issues and continuous improvements in our network,” he said.

Huawei Global Technical Services President Tang Qibing said, “I’m pleased that Celcom chose Huawei as a partner in its digital transformation journey. We certainly believe that Huawei’s AUTIN™ solution will accelerate Celcom’s transition from traditional operations with repetitive manual processes into automated operations. Our vision is to build an ecosystem with strategic partners like Celcom, third parties and other industries to unlock incredible value through new services and innovations, which will ultimately benefit everyone in the telecommunications industry.”

VMware to acquire Dell EMC Service Assurance Suite

VMware to Acquire Dell EMC Service Assurance Suite.

VMware, Inc. announced a definitive agreement to acquire the technology and team of Dell EMC Service Assurance Suite – software spanning network health, performance monitoring and root cause analysis for communications service providers (CSPs) and their customers – from Dell EMC. The addition of the Dell EMC Service Assurance Suite technology to the VMware Telco NFV portfolio equips CSPs with the ability to maintain operational reliability in their core network, cloud, and IT domains across physical and virtual infrastructure—enabling them to operationalize competitive new services faster.

As customers bridge current services from 4G to 5G, service assurance becomes critical. The Dell EMC Service Assurance Suite provides automated capabilities to operators via accurate root cause analysis management. VMware, a leader in network functions virtualization (NFV) infrastructure, will leverage the Dell EMC Service Assurance Suite to help customers accelerate their virtual network function deployments with end-to-end service assurance once the deal closes.

The Dell EMC Service Assurance Suite team adds a deep bench of talent with engineering expertise and 10+ years of customer relationships. The core Dell EMC Service Assurance Suite offering is already well known to CSPs for its superior troubleshooting capabilities. More than 50 CSPs worldwide, including many Tier 1 operators, leverage Dell EMC Service Assurance Suite capabilities to enable new services for a range of customers, including enterprises, federal and local governments. Upon the deal closing, VMware plans to invest in growing the capabilities of the platform as a key component in the Telco NFV portfolio and focusing on modernization and intelligent automation. After the deal closes, Dell EMC customers will continue to have access to the Dell EMC Service Assurance Suite’s solutions pursuant to a commercial reseller agreement in place between VMware and Dell EMC.

Faced with top-line and bottom-line pressures, operators are moving from a packaged hardware approach to an NFV-driven, software-defined approach for their core network environments. While this move is critical to operators’ ability to deliver agile services and capitalize upon new opportunities, their capacity to virtualize quickly is hampered by a lack of effective root cause analysis. This is an increasingly important area of focus, given the rapid changes happening in operator networks as they deploy 4G services like Voice Over LTE (VoLTE) and prepare for 5G-driven advanced applications supporting IoT, Artificial Intelligence, Machine Learning and Augmented Reality/Virtual Reality.

The Dell EMC Service Assurance Suite provides assurance capabilities to deliver service impact and root-cause analysis with visibility across physical and virtual networks, and cloud environments, to identify how resources are being consumed and whether service level agreements are being met. This transparency enables CSPs to visualize, analyze and optimize their environments to enable faster resolution times; proactive identification of issues is proven to provide better return on NFV and IT investments. The Dell EMC Service Assurance Suite is complemented by leading VMware technologies, including VMware vCloud NFV, VMware vRealize Operations, VMware vRealize Network Insight, Wavefront by VMware and VMware NSX SD-WAN by VeloCloud.

This acquisition demonstrates VMware’s growing commitment to the telecommunications industry. It also reinforces the “better together” synergy between VMware and Dell EMC. Additionally, CSP customers will benefit from the combination of VMware and Dell EMC solutions.

“As carriers are readying for 5G, they are increasingly virtualizing edge and core networks with network functions virtualization, or NFV. Service assurance is a critical need for any network. The Dell EMC Service Assurance Suite’s established software and services capabilities, combined with VMware’s trademark innovation, will empower CSPs to modernize and accelerate the transformation of their networks through NFV upon closing,” said Shekar Ayyar, executive vice president, Strategy and Corporate Development and General Manager Telco NFV Group, VMware. “The Dell EMC Service Assurance Suite team is primed to accelerate our NFV business and help drive it forward with unprecedented service assurance.”

Just in time design

It’s interesting how we tend to go in cycles. Back in the early days of OSS, the network operators tended to build their OSS from the ground up. Then we went through a phase of using Commercial off-the-shelf (COTS) OSS software developed by third-party vendors. We now seem to be cycling back towards in-house development, but with collaboration that includes vendors and external assistance through open-source projects like ONAP. Interesting too how Agile fits in with these cycles.

Regardless of where we are in the cycle for our OSS, as implementers we’re always challenged with finding the Goldilocks amount of documentation – not too heavy, not too light, but just right.

The Agile Manifesto espouses, “working software over comprehensive documentation.” Sounds good to me! It perplexes me that some OSS implementations are bogged down by lengthy up-front documentation phases, especially if we’re basing the solution on COTS offerings. These can really stall the momentum of a project.

Once a solution has been selected (which often does require significant analysis and documentation), I’m more of a proponent of getting the COTS software stood up, even if only in a sandpit environment. This is where just-in-time (JIT) documentation comes into play. Rather than having every aspect of the solution documented (eg process flows, data models, high availability models, physical connectivity, logical connectivity, databases, etc, etc), we only need enough documentation for collaborative stakeholders to do their parts (eg IT to set up hardware / hosting, networks to set up physical connectivity, vendor to provide software, integrator to perform build, etc) to stand up a vanilla solution.

Then it’s time to start building trial scenarios through the solution. There’s usually quite a bit of trial and error in this stage, as we seek to optimise the scenarios for the intended users. Then we add a few more scenarios.

There’s little point trying to document the solution in detail before a scenario is trialled, but some documentation can be really helpful. For example, if the scenario is to build a small sub-section of a network, then draw up some diagrams of that sub-network that include the intended naming conventions for each object (eg device, physical connectivity, addresses, logical connectivity, etc). That allows you to determine whether there are unexpected challenges with naming conventions, data modelling, process design, etc. There are always unexpected challenges that arise!

I figure you’re better off documenting the real challenges than theorising on the “what if?” challenges, which is what often happens with up-front documentation exercises. There are always brilliant stakeholders who can imagine millions of possible challenges, but these often bog the design phase down.

With JIT design, once the solution evolves, the documentation can evolve too… if there is an ongoing reason for its existence (eg as a user guide, for a test plan, as a training cheat-sheet, a record of configuration for fault-finding purposes, etc).

Interestingly, the first value in the Agile Manifesto is, “individuals and interactions over processes and tools.” This is where the COTS vs in-house-dev comes back into play. When using COTS software, individuals, interactions and processes are partly driven by what the tools support. COTS functionality constrains us but we can still use Agile configuration and customisation to optimise our solution for our customers’ needs (where cost-benefit permits).

Having a working set of vanilla tools allows our customers to get a much better feel for what needs to be done rather than trying to understand the intent of up-front design documentation. And that’s the key to great customer outcomes – having the customers knowledgeable enough about the real solution (not hypothetical solutions) to make the most informed decisions possible.

Of course there are always challenges with this JIT design model too, especially when third-party contracts are involved!

Using risk reversal to design OSS

There’s a concept in sales called “risk reversal” that takes all of the customers’ likely issues with a product and provides answers to alleviate customer concerns. I believe we can apply the same concept to OSS, not just to sell them, but to design them.

To borrow from a risk register page here on PAOSS, the major categories of risk that appear on almost all OSS projects are:

  • Organisational change management – the OSS will touch almost all parts of a business and a large number of people within the organisation. If all parts of the business are not conditioned to the change then the implementation will not be successful even if the technical deliverables are faultless. Change management has many, many layers but one way to minimise change management is to make the products and processes highly intuitive. I feel that intuitive OSS will come from a focus on design and simplification rather than our current focus on constantly adding more features. The aim should be to create OSS that are as easy for operators to start using as office tools like spreadsheets, word processors, presentation applications, etc
  • Data integrity – the OSS is only as good as the data being fed to it. If the quality of data in the OSS database is poor then operational staff will quickly lose faith in the tools. The product-based techniques that can be used to overcome this risk include:
    • Design tools / data model to cope with poor data quality, but also flag it as low confidence for future repair
    • Reduction in data relationships / dependencies (ie referential integrity) to ensure that quality problems don’t have a domino effect on OSS usability
    • Building checks and balances that ensure the data can be reconciled and quality remains high (a small reconciliation sketch follows this list)
    • Incorporate closed-loop processes to ensure data quality is continually improved, rather than the open-loop processes that tend to lead to data quality degradation
  • Application functionality mapping to real business needs – OSS have been around long enough to have all but run out of features for vendors to differentiate against. The truly useful functionality has arisen from real business needs. “Wish-list” functionality that adds little tangible business benefit or requires significant effort is just adding product and project risk
  • Northbound Interface / Integration – Costs and risks of integrations are significant on each OSS project. There are many techniques that can be used to reduce risk such as a Minimum Viable Data (ie less data types to collect across an interface), standardised destination mapping models, etc but the industry desperately needs major innovation here
  • Implementation – there are so many sources of risk within this category, as is to be expected on any large, complex project. Taking the PMP approach to risk reduction, we can apply the Triple Constraint model
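To illustrate the reconciliation idea from the data integrity bullet above, here’s a minimal sketch. The inventory and discovered data sets are invented for the example; the point is simply to flag discrepancies as low-confidence rather than silently trusting either source.

```python
# Minimal data reconciliation sketch (illustrative only).
# inventory and discovered are hypothetical dicts keyed by device name.

def reconcile(inventory: dict, discovered: dict) -> list:
    """Compare OSS inventory against discovered network data and flag discrepancies."""
    findings = []
    for device, inv_record in inventory.items():
        net_record = discovered.get(device)
        if net_record is None:
            findings.append((device, "in inventory but not discovered", "low-confidence"))
        elif inv_record != net_record:
            findings.append((device, "attribute mismatch", "low-confidence"))
    for device in discovered.keys() - inventory.keys():
        findings.append((device, "discovered but not in inventory", "low-confidence"))
    return findings


# Example usage with toy data
inventory = {"rtr-01": {"model": "X1", "ports": 48}, "rtr-02": {"model": "X2", "ports": 24}}
discovered = {"rtr-01": {"model": "X1", "ports": 48}, "rtr-03": {"model": "X1", "ports": 48}}
for finding in reconcile(inventory, discovered):
    print(finding)
```

The flagged items then feed the closed-loop repair processes mentioned in the last sub-bullet, rather than being left to degrade silently.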

Aggregated OSS buying models

Last week we discussed a sell-side co-op business model. Today we’ll look at buy-side co-op models.

In other industries, we hear of buying groups getting great deals through aggregated buying volumes. This is a little harder to achieve with products that are as uniquely customised as OSS. It’s possible that OSS buy-side aggregation could occur for operators that are similar in nature but don’t compete (eg regional operators). Having said that, I’ve yet to see any co-ops formed to gain OSS group-purchase benefits. If you have, I’d love to hear about it.

In OSS, there are three approaches that aren’t exactly co-op buying models but do aggregate the evaluation and buying decision.

The most obvious is for corporations that run multiple carriers under one umbrella such as Telefonica (see Telefonica’s various OSS / BSS contract notifications here), SingTel (group contracts here), etisalat, etc. There would appear to be benefits in standardising OSS platforms across each of the group companies.

A far less formal co-op buying model I’ve noticed is the social-proof approach. This is where one, typically large, network operator in a region goes through an extensive OSS / BSS evaluation and chooses a vendor. Then there’s a domino effect where other, typically smaller, network operators also buy from the same vendor.

Even less formal again is using third-party organisations like Passionate About OSS to assist with a standard vendor selection methodology. The vendors selected aren’t standardised because each operator’s needs are different, but the product / vendor selection methodology builds on the learnings of past selection processes across multiple operators. The benefit comes in the evaluation and decision frameworks.

An OSS data creation brain-fade

Many years ago, I made a data migration blunder that slowed a production OSS down to a crawl. Actually, less than a crawl. It almost became unusable.

I was tasked with creating a production database of a carrier’s entire network inventory, including data migration for a bunch of Nortel Passport ATM switches (yes, it was that long ago).

  • There were around 70 of these devices in the network
  • 14 usable slots in each device (ie slots not reserved for processing, resilience, etc)
  • Depending on the card type there were different port densities, but let’s say there were 4 physical ports per slot
  • Up to 2,000 VPIs per port
  • Up to 65,000 VCIs per VPI
  • The customer was running SPVC

To make it easier for the operator to create a new customer service, I thought I should script-create every VPI/VCI on every port on every device. That would allow the operator to just select any available VPI/VCI from within the OSS when provisioning (or later, auto-provisioning) a service.

There was just one problem with this brainwave. For this particular OSS, each VPI/VCI represented a logical port that became an entry alongside physical ports in the OSS‘s ports table… You can see what’s about to happen can’t you? If only I could’ve….

My script auto-created nearly 510 billion VCI logical ports; well over 500 billion records in the ports table if you also include VPIs and physical ports…. in a production database. And that was just the ATM switches!
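For the curious, here’s the back-of-the-envelope maths based on the figures listed above (assuming every usable port carried the full 2,000 x 65,000 VPI/VCI combinations):

```python
# Back-of-the-envelope check on the ports-table explosion described above
devices = 70
slots_per_device = 14          # usable slots only
ports_per_slot = 4
vpis_per_port = 2_000
vcis_per_vpi = 65_000

physical_ports = devices * slots_per_device * ports_per_slot   # 3,920
vpis = physical_ports * vpis_per_port                           # 7,840,000
vcis = vpis * vcis_per_vpi                                       # 509,600,000,000

print(physical_ports, vpis, vcis, physical_ports + vpis + vcis)
```

That’s roughly 3,920 physical ports, 7.84 million VPIs and about 509.6 billion VCIs, all landing in a single ports table.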

So instead of making life easier for the operators, it actually brought the OSS‘s database to a near stand-still. Brilliant!!

Luckily for me, it was a greenfields OSS build and the production database was still being built up in readiness for operational users to take the reins. I was able to strip all the ports out and try again with a less idiotic data creation plan.

The reality was that there’s no way the customer could’ve ever used the 2,000 x 65,000 VPI/VCI groupings I’d created on every single physical port. Put it this way, there were far fewer than 130 million services across all service types across all carriers across that whole country!

Instead, we just changed the service activation process to manually add new VPI/VCIs into the database on demand as one of the pre-cursor activities when creating each new customer service.

Ever since that experience, I’ve held to the Minimum Viable Data (MVD) mantra.

Network slicing, another OSS activity

“One business customer, for example, may require ultra-reliable services, whereas other business customers may need ultra-high-bandwidth communication or extremely low latency. The 5G network needs to be designed to be able to offer a different mix of capabilities to meet all these diverse requirements at the same time.
From a functional point of view, the most logical approach is to build a set of dedicated networks each adapted to serve one type of business customer. These dedicated networks would permit the implementation of tailor-made functionality and network operation specific to the needs of each business customer, rather than a one-size-fits-all approach as witnessed in the current and previous mobile generations which would not be economically viable.
A much more efficient approach is to operate multiple dedicated networks on a common platform: this is effectively what “network slicing” allows. Network slicing is the embodiment of the concept of running multiple logical networks as virtually independent business operations on a common physical infrastructure in an efficient and economical way.”
GSMA’s Introduction to Network Slicing.

Engineering a network is an exercise in compromise. There are many different optimisation levers to pull to engineer a set of network characteristics. In the traditional network, it was a case of pulling all the levers to find a middle-ground set of characteristics that supported all of the operator’s service offerings.

QoS striping of traffic allowed for a level of differentiation of traffic handling, but the underlying network was still a balancing act of settings. Network virtualisation offers new opportunities. It allows unique segmentation via virtual networks, where each can be optimised for the specific use-cases of that network slice.

For years, I’ve been proposing the concept of telco offerings being like electricity networks – that we don’t need so many service variants. I should note that this analogy is not quite right. We do have a few different types of “electricity” such as highly available (health monitoring), high-bandwidth (content streaming), extremely low latency (rapid reaction scenarios such as real-time sensor networks), etc.
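To give a feel for what those few “types of electricity” might look like as slice definitions, here’s a hypothetical sketch. The profile names and target figures are invented for illustration only and aren’t drawn from any standard.

```python
# Hypothetical slice profiles illustrating the different "types of electricity".
# Names and target figures are invented for illustration only.
from dataclasses import dataclass


@dataclass
class SliceProfile:
    name: str
    availability: float       # target availability, e.g. 0.99999
    max_latency_ms: float     # one-way latency budget
    min_bandwidth_mbps: float


SLICE_PROFILES = [
    SliceProfile("highly-available (health monitoring)", 0.99999, 50.0, 1.0),
    SliceProfile("high-bandwidth (content streaming)",   0.999,   100.0, 25.0),
    SliceProfile("low-latency (real-time sensors)",      0.9999,  5.0,   0.5),
]

for p in SLICE_PROFILES:
    print(f"{p.name}: availability {p.availability}, "
          f"latency <= {p.max_latency_ms} ms, bandwidth >= {p.min_bandwidth_mbps} Mbps")
```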

Now what do we need to implement and manage all these network slices?? Oh that’s right, OSS! It’s our OSS that will help to efficiently coordinate all the slicing and dicing that’s coming our way… to optimise all the levers across all the different network slices!

The Pentagon creates a “Do Not Buy” list? Including OSS vendors?

Pentagon Creates ‘Do Not Buy’ List of Russian, Chinese Software.

The Pentagon is working on a software “do not buy” list to block vendors who use software code originating from Russia and China, a top Defense Department acquisitions official said on Friday.

Apparently the Pentagon started compiling the list about six months ago. Suspicious companies are put on a list that is circulated to the military’s software buyers. Now the Pentagon is working with the three major defense industry trade associations — the Aerospace Industries Association, National Defense Industrial Association and Professional Services Council — to alert contractors small and large.

Does anyone know whether there are any OSS vendors on this list? One would assume that Huawei and ZTE Soft would be. Who else?

The OSS co-op business model

“A co-operative is a member-owned business structure with at least five members, all of whom have equal voting rights regardless of their level of involvement or investment. All members are expected to help run the cooperative.”
Small Business WA.

The co-op business model has fascinated me since doing some tech projects in the dairy industry in the deep distant past. The dairy co-ops empower collaboration of dairy farmers where the might of the collective outweighs that of each individually. As the collective, they’ve been able to establish massive processing plants, distribution lines, bargaining power, etc. The dairy co-ops are a sell-side collaboration.

By contrast open source projects like ONAP represent an interesting hybrid – part buy-side collaboration (ie the service providers acquiring software to run their organisations) and part sell-side (ie the vendors contributing code to the project alongside the service providers).

I’ve long been intrigued by the potential for a pure sell-side co-operative in OSS.

As we all know, the OSS market is highly fragmented (just look at the number of vendors / products on this page), which means inefficiency because of the duplicated effort across vendors. A level of market efficiency comes from mergers and acquisitions. In addition, some comes from vendors forming partnerships to offer more complete solutions to a given customer requirement list.

But the key to a true sell-side OSS co-operative would be in the definition above – “at least five members.” Perhaps it’s an open-source project that brings them together. Perhaps it’s an extended partnership.

As Tom Nolle stated in an article that prompted the writing of today’s post, “On the vendor side, commoditization tends to force consolidation. A vendor who doesn’t have a nice market share has little to hope for but slow decline. A couple such vendors (like Infinera and Coriant, recently) can combine with the hope that the combination will be more survivable than the individual companies were likely to be. Consolidation weeds out industry inefficiencies like parallel costly operations structures, and so makes the remaining players stronger.”

Imagine for a moment if, instead of having developers spread across 100 alarm management tools, that same developer pool could take a consolidated set of 5 alarm management products forward? Do you think we’d get better, more innovative, more complete products faster?

Having said that, co-ops have their weaknesses too.

What do you think? Could such a model work? Would it be a disaster?

OSS, with drama, without drama. Your choice

A recent blog from Seth Godin brought back some memories from a past project.

“Two ways to solve a problem and provide a service.
With drama. Make sure the customer knows just how hard you’re working, what extent you’re going to in order to serve. Make a big deal out of the special order, the additional cost, the sweat and the tears.
Without drama. Make it look effortless.
Either can work. Depends on the customer and the situation.”
Seth Godin here.

Over the course of the long-running and challenging project, I worked under a number of different Program Directors. The second last (chronologically) took the team barrel-chested down the “With Drama” path whilst the last took the “Without Drama” approach.

The “With Drama” approach was very melodramatic and political, but to be honest, was also really draining. It was draining because of the high levels of contact (eg meetings, reports, etc), reducing the amount of productive delivery time.

The “Without Drama” approach did make it look effortless, because by comparison it was effortless. The Program Director took responsibility for peer-level contact and cleared the way for the delivery team to focus on delivering. The team was still working well over 60 hour weeks, but it was now more clearly focused on delivery tasks. Interestingly, this approach brought a seemingly endless project to a systematic and clean conclusion (ie delivery) within about three months.

Now I’m not sure about your experiences or preferences, but I’d go with the “Without Drama” OSS delivery approach every time. The emotional intensity required of the “With Drama” approach just isn’t sustainable over long-running projects like our OSS projects tend to be.

What are your thoughts / experiences?

How an OSS is like an F1 car

A recent post discussed the challenge of getting a timeslice of operations people to help build the OSS. That post surmised, “as the old saying goes, you get back what you put in. In the case of OSS I’ve seen it time and again that operations need to contribute significantly to the implementation to ensure they get a solution that fits their needs.”

I have a new saying for you today, this time from T.D. Jakes, “You can’t be committed to the dream. You have to be committed to the process.”

If you’re representing an organisation that is buying an OSS solution from a vendor / integrator, please consider these two adages above. Sometimes we’re good at forming the dream (eg business requirements, business case, etc) and expecting the vendor to conduct almost all of the process. While our network operations teams are hired for the process of managing the network, we also need their significant input on the process of building / configuring an OSS. The vendor / integrator can’t just develop it in isolation and then hand it over to ops with a few days of training at the end.

The process of bringing a new OSS into an organisation is not like buying a road car. With an OSS, you can’t just place an order with some optional features like paint and trim specified, then expect to start driving it as soon as it leaves the vendor’s assembly line. It’s more like an F1 car where the driver is in constant communications with the pit-crew, changing and tweaking and refining to optimise the car to the driver’s unique needs (and in turn to hopefully optimise the results).

At least, that’s what current-state OSS are like. Perhaps in the future… we’ll strive to refine our OSS to be more like a road-car – standardised and intuitive enough for operators to drive straight off the assembly line.

Orchestration looks a bit like provisioning

The following is the result of a survey question posed by TM Forum:
(Chart: Number 1 Driver for Orchestration)

I’m not sure how the numbers tally, but conceptually the graph above paints an interesting perspective of why orchestration is important. The graph indicates the why.

But in this case, for me, the why is the by-product of the how. The main attraction of orchestration models is in how we can achieve modularity. All of the business outcomes mentioned in the graph above will only be achievable as a result of modularity.

Put another way, rather than having the integration spaghetti of an “old-school” OSS / BSS stack, orchestration (and orchestration plans) potentially provides clearer demarcation and abstraction all the way from product design down into transactions that hit the network… not to mention the meet-in-the-middle points between business units.

Demarcation points support catalog items (perhaps as APIs / microservices with published contracts), allowing building-block design of products rather than involvement of (and disputes between) business units all down the line of product design. This facilitates the speed (34%) and services on demand (28%) objectives stated in the graph.
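As a loose illustration of the building-block idea (not any specific TM Forum Open API), here’s a hypothetical sketch in which each catalog item publishes a small contract and an orchestration plan is assembled only from items whose contracts the order satisfies. All names and fields are invented.

```python
# Hypothetical sketch of catalog items as building blocks with published contracts.
# Not based on any specific TM Forum Open API; names are illustrative only.
from dataclasses import dataclass
from typing import Callable


@dataclass
class CatalogItem:
    name: str
    required_inputs: list            # the item's published "contract"
    activate: Callable[[dict], str]  # the action that eventually hits the network


def build_order_plan(items: list, order: dict) -> list:
    """Assemble an orchestration plan from catalog items whose contracts the order satisfies."""
    plan = []
    for item in items:
        missing = [p for p in item.required_inputs if p not in order]
        if missing:
            raise ValueError(f"{item.name} is missing inputs: {missing}")
        plan.append(item)
    return plan


# Example: a broadband product composed of two building blocks
catalog = [
    CatalogItem("provision-access", ["site_id", "speed_mbps"],
                lambda o: f"access provisioned at {o['site_id']}"),
    CatalogItem("assign-ip", ["site_id"],
                lambda o: f"IP assigned for {o['site_id']}"),
]

order = {"site_id": "SITE-001", "speed_mbps": 100}
for step in build_order_plan(catalog, order):
    print(step.activate(order))
```

The published contract is the demarcation point: product designers compose building blocks against it without needing to negotiate each order type with every downstream business unit.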

But I used the term “old-school” with intent above. The modularity mentioned above was already achieved in some older OSS too. The ability to carve up, sequence, prioritise and re-construct a stream of service orders was already achievable by some provisioning + workflow engines of the past.

The business outcomes remain the same now as they were then, but perhaps orchestration takes it to the next level.

A defacto spatial manager

Many years ago, I was lucky enough to lead a team responsible for designing a complex inside and outside plant network in a massive oil and gas precinct. It had over 120 buildings and more than 30 networked systems.

We were tasked with using CAD (Computer Aided Design) and Office tools to design the comms and security solution for the precinct. And when I say security, not just network security, but building access control, number plate recognition, coast guard and even advanced RADAR amongst other things.

One of the cool aspects of the project was that it was more three-dimensional than a typical telco design. A telco cable network is usually planned on x and y coordinates because the y coordinate is usually on one or two planes (eg all ducts are at say 0.6m below ground level or all catenary wires between poles are at say 5m above ground). However, on this site, cable trays ran at all sorts of levels to run around critical gas processing infrastructure.

We actually proposed to implement a light-weight OSS for management of the network, including outside plant assets, due to the easy maintainability compared with CAD files. The customer’s existing CAD files may have been perfect when initially built / handed over, but were nearly useless to us because of all the undocumented changes that had happened in the ensuing period. However, the customer was used to CAD files and wanted to stay with CAD files.

This led to another cool aspect of the project – we had to build out defacto OSS data models to capture and maintain the designs.

We modelled:

  • The support plane (trayway, ducts, sub-ducts, trenches, lead-ins, etc)
  • The physical connectivity plane (cables, splices, patch-panels, network termination points, physical ports, devices, etc)
  • The logical connectivity plane (circuits, system connectivity, asset utilisation, available capacity, etc)
  • Interconnection between these planes
  • Life-cycle change management

This definitely gave a better appreciation for the type of rules, variants and required data sets that reside under the hood of a typical OSS.
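For illustration, here’s a minimal sketch of how those planes and their interconnections might be modelled. The class names and fields are invented and hugely simplified; the real design captured far more rules, variants and life-cycle states than this.

```python
# Minimal, invented sketch of the three planes described above and their links.
from dataclasses import dataclass


@dataclass
class Duct:                      # support plane
    id: str
    route: list                  # e.g. (x, y, z) points to capture the 3D path


@dataclass
class Cable:                     # physical connectivity plane
    id: str
    carried_in: list             # duct / trayway ids - the link to the support plane
    fibre_count: int


@dataclass
class Circuit:                   # logical connectivity plane
    id: str
    a_end: str                   # physical termination point ids
    z_end: str
    rides_on: list               # cable ids - the link to the physical plane


# Example: a circuit between two buildings, riding a cable in a 3D-routed duct
duct = Duct("DUCT-01", route=[(0, 0, 0.6), (120, 0, 0.6), (120, 35, 4.2)])
cable = Cable("CAB-01", carried_in=[duct.id], fibre_count=24)
circuit = Circuit("CCT-01", a_end="BLD-A/PP-01/P1", z_end="BLD-B/PP-03/P7", rides_on=[cable.id])
print(circuit)
```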

Have you ever had a non-OSS project that gave you a better appreciation / understanding of OSS?

I’m also curious. Have any of you ever designed your physical network plane in three dimensions? With a custom or an out-of-the-box tool?