Unexpected OSS indicators

Yesterday’s post talked about using customer contacts as a real-time proxy metric for friction in the business, which could also be a directional indicator for customer experience.

That got me wondering what other proxy metrics might be used to provide predictive indicators of what’s happening in your network, OSS and/or BSS. Apparently, “Colt aims to enhance its service assurance capabilities by taking non-traditional data (signal strength, power, temperature, etc.) from network elements (cards, links, etc.) to predict potential faults,” according to James Crawshaw here on LightReading.

What about environmental metrics like humidity, temperature, movement, power stability/disturbance?

I’d love to hear about the proxies you use, or any unexpected metrics you’ve found that shine a spotlight on friction in your organisation.

Shooting the OSS messenger

NPS, or Net Promoter Score, has become commonly used in the telecoms industry in recent years. In effect, it is a metric that measures friction in the business. If NPS is high, the business runs more smoothly. Customers are happy with the service and want to buy more of it. They’re happy with the service so they don’t need to contact the business. If NPS is low, it’s harder to make sales and there’s the additional cost of time dealing with customer complaints, etc (until the customer goes away of course).
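
For anyone unfamiliar with the mechanics, NPS is derived from a single 0–10 “would you recommend us?” survey question. Here’s a minimal sketch of the standard calculation (scores of 9–10 count as promoters, 0–6 as detractors; the survey responses below are purely hypothetical):

    def nps(scores):
        """Net Promoter Score from 0-10 survey responses: % promoters minus % detractors."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    # Hypothetical survey responses from nine customers
    print(round(nps([10, 9, 8, 7, 6, 3, 9, 10, 5]), 1))  # 11.1 -> slightly more promoters than detractors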

NPS can be easy to measure via survey, but a little more challenging as a near-real-time metric. What if we used customer contacts (via all channels such as phone, IVR, email, website, live-chat, etc) as a measure of friction? But more importantly, how does any of this relate to OSS / BSS? We’ll get to that shortly (I hope).

BSS (billing, customer relationship management, etc) and OSS (service health, network performance, etc) tend to be the final touchpoints of a workflow before reaching a customer. When the millions of workflows through a carrier are completing without customer contact, friction is low. When there are problems, call volumes go up and friction / inefficiency goes up with them. When there are problems, the people (or systems) dealing with the calls (eg contact centre operators) tend to start with OSS / BSS tools and then work their way back up the funnel to identify the cause of friction and attempt to resolve it.
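
As a purely hypothetical illustration of the customer-contact proxy, a near-real-time friction index could be as simple as contacts per thousand completed workflows, tracked per product or order type (all names and numbers below are invented):

    from collections import Counter

    # Hypothetical daily counts: completed workflows and customer contacts, per product
    completed = Counter({"broadband": 4200, "mobile": 9100, "enterprise_vpn": 310})
    contacts = Counter({"broadband": 126, "mobile": 91, "enterprise_vpn": 47})

    # Friction index: contacts per 1,000 completed workflows (higher = more friction)
    for product, done in completed.items():
        index = 1000 * contacts[product] / done
        print(f"{product:15s} {index:6.1f} contacts per 1,000 workflows")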

The problem is that the OSS / BSS tools are often seen as the culprit because that’s where the issue first becomes apparent. It’s easier to log an issue against the OSS than to keep tracking back to the real source of the problem. Many times, it’s a case of shooting the messenger. Not only that, but if we’re not actually identifying the source of the problem then it becomes systemic (ie the poor customer experience perpetuates).

Maybe there’s a case for us to get better at tracking the friction caused further upstream of our OSS / BSS and to give more granular investigative tools to the call takers. Even if we do, our OSS / BSS are still the ones delivering the message.

Taking SMEs out of ops to build an OSS

OSS are there to do just that – support operations. So as OSS implementers we have to do just that too.

But as the old saying goes, you get back what you put in. In the case of OSS, I’ve seen it time and again that operations need to contribute significantly to the implementation to ensure they get a solution that fits their needs.

Just one problem here though. Operations are hired to operate the network, not build OSS. Now let’s assume the operations team does decide to commit heavily to your OSS build, thus taking away from network ops at some level (unless they choose to supplement the ops team).

That still leaves operations team leaders with a dilemma. Do they take certain SMEs out of ops to focus entirely on the OSS build (and thus act as nominees for the rest of the team) or do they cycle many of their ops people through the dual roles (at risk of task-switching inefficiency)?

There are pros and cons with each, aren’t there? Which would you choose and why? Do you have an alternate approach?

Broadcom buys CA Technologies

Weirdest. Acquisition. Ever. Broadcom buys CA Technologies.

Broadcom to Acquire CA Technologies for $18.9 Billion in Cash.

Broadcom Inc., a semiconductor device supplier to the wired, wireless, enterprise storage, and industrial end markets, and CA Technologies, [a] provider of information technology (IT) management software and solutions, announced that the companies have entered into a definitive agreement under which Broadcom has agreed to acquire CA to build one of the world’s leading infrastructure technology companies.

Under the terms of the agreement, which has been approved by the boards of directors of both companies, CA’s shareholders will receive $44.50 per share in cash. This represents a premium of approximately 20% to the closing price of CA common stock on July 11, 2018, the last trading day prior to the transaction announcement, and a premium of approximately 23% to CA’s volume-weighted average price (“VWAP”) for the last 30 trading days. The all-cash transaction represents an equity value of approximately $18.9 billion, and an enterprise value of approximately $18.4 billion.

Hock Tan, President and Chief Executive Officer of Broadcom, said, “This transaction represents an important building block as we create one of the world’s leading infrastructure technology companies. With its sizeable installed base of customers, CA is uniquely positioned across the growing and fragmented infrastructure software market, and its mainframe and enterprise software franchises will add to our portfolio of mission critical technology businesses. We intend to continue to strengthen these franchises to meet the growing demand for infrastructure software solutions.”

“We are excited to have reached this definitive agreement with Broadcom,” said Mike Gregoire, CA Technologies Chief Executive Officer. “This combination aligns our expertise in software with Broadcom’s leadership in the semiconductor industry. The benefits of this agreement extend to our shareholders who will receive a significant and immediate premium for their shares, as well as our employees who will join an organization that shares our values of innovation, collaboration and engineering excellence. We look forward to completing the transaction and ensuring a smooth transition.”

The transaction is expected to drive Broadcom’s long-term Adjusted EBITDA margins above 55% and be immediately accretive to Broadcom’s non-GAAP EPS. On a combined basis, Broadcom expects to have last twelve months non-GAAP revenues of approximately $23.9 billion and last twelve months non-GAAP Adjusted EBITDA of approximately $11.6 billion.

As a global leader in mainframe and enterprise software, CA’s solutions help organizations of all sizes develop, manage, and secure complex IT environments that increase productivity and enhance competitiveness. CA leverages its learnings and development expertise across its Mainframe and Enterprise Solutions businesses, resulting in cross enterprise, multi-platform support for customers. The majority of CA’s largest customers transact with CA across both its Mainframe and Enterprise Solutions portfolios. CA benefits from predictable and recurring revenues with the average duration of bookings exceeding three years. CA operates across 40 countries and currently holds more than 1,500 patents worldwide, with more than 950 patents pending.

Financing and Path to Completion

Broadcom intends to fund the transaction with cash on hand and $18.0 billion in new, fully-committed debt financing. Broadcom expects to maintain an investment grade rating, given its strong cash flow generation and intention to rapidly de-leverage.

The transaction is subject to customary closing conditions, including the approval of CA shareholders and antitrust approvals in the U.S., the EU and Japan.

Careal Property Group AG and affiliates, who collectively own approximately 25% of the outstanding shares of CA common stock, have entered into a voting agreement to vote in favor of the transaction.

The closing of the transaction is expected to occur in the fourth calendar quarter of 2018.

The OSS Matrix – the blue or the red pill?

[Image: OSS Matrix]
OSS tend to be very good at presenting a current moment in time – the current configuration of the network, the health of the network, the activities underway.

Some (but not all) tend to struggle to cope with other moments in time – past and future.

Most have tools that project into the future for the purpose of capacity planning, such as link saturation estimation (based on projecting forward from historical trend-lines). Predictive analytics is a current buzz-word as research attempts to predict future events and mitigate them now.
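
To illustrate that trend-line style of capacity planning, here’s a minimal sketch that fits a straight line to historical utilisation samples and estimates when a link would cross a saturation threshold (all figures are hypothetical, and real planners would use far richer models):

    # Minimal linear-extrapolation sketch for link saturation estimation.
    # Utilisation samples (%) taken at equal intervals, eg weekly averages.
    samples = [52.0, 54.5, 56.0, 58.5, 61.0, 62.5]
    threshold = 80.0  # utilisation level treated as "saturated"

    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    # Least-squares slope and intercept, no external libraries needed
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean

    if slope <= 0:
        print("Utilisation is flat or falling; no saturation forecast from this trend")
    else:
        periods_remaining = (threshold - intercept) / slope - (n - 1)
        print(f"Link projected to reach {threshold}% utilisation in ~{periods_remaining:.1f} more periods")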

Most also have the ability to look into the past – to look at historical logs to give an indication of what happened previously. However, trawling historical logs can be painful and tends towards forensic analysis. We can generally see who (or what) performed an action at a precise timestamp, but it’s not so easy to correlate the surrounding context in which that action occurred. OSS rarely present a fully-stitched view in the GUI that shows the state of everything else around that past snapshot in time. At least, not to the same extent that the OSS GUI can stitch and present current state together.

But the scenario that I find most interesting is for the purpose of network build / maintenance planning. Sometimes these changes occur as isolated events, but are more commonly run as projects, often with phases or milestone states. For network designers, it’s important to differentiate between assets (eg cables, trenches, joints, equipment, ports, etc) that are already in production versus assets that are proposed for installation in the future.

And naturally those states cross over at cut-in points. The proposed new branch of the network needs to connect to the existing network at some time in the future. Designers need to see available capacity now (eg spare ports), but be able to predict with confidence that capacity will still be available for them in the future. That’s where the “reserved” status comes into play, which tends to work for physical assets (eg physical ports) but can be more challenging for logical concepts like link utilisation.

In large organisations, it can be even more challenging because there’s not just one augmentation project underway, but many. In some cases, there can be dependencies where one project relies on capacity that is being stood up by other future projects.

Not all of these projects / plans will make it into production (eg funding is cut or a more optimal design option is chosen), so there is also the challenge of deprecating planned projects. Capability is required to find whether any other future projects are dependent on this deprecated future project.
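
Here’s a hypothetical sketch of what that dependency check could look like, with planned projects declaring which other planned projects they rely on (the project names and structure are invented purely for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class PlannedProject:
        name: str
        depends_on: set = field(default_factory=set)  # names of other planned projects it relies on

    # Hypothetical future projects, some relying on capacity stood up by others
    projects = {
        "metro-ring-aug": PlannedProject("metro-ring-aug"),
        "dc-spine-upgrade": PlannedProject("dc-spine-upgrade", {"metro-ring-aug"}),
        "cbd-fibre-infill": PlannedProject("cbd-fibre-infill", {"metro-ring-aug"}),
    }

    def impacted_by_deprecation(deprecated: str):
        """Return the planned projects that rely on capacity from the deprecated project."""
        return [p.name for p in projects.values() if deprecated in p.depends_on]

    # Before cancelling a planned project, check what else would be stranded
    print(impacted_by_deprecation("metro-ring-aug"))  # ['dc-spine-upgrade', 'cbd-fibre-infill']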

It can get incredibly challenging to develop this time/space matrix in OSS. If you’re a developer of OSS, the question becomes whether you want to take the blue or red pill.

Front-loading with OSS auto-discovery

Yesterday’s post discussed the merits of front-loading effort on knowledge transfer of new starters and automated testing, whilst acknowledging the challenges that often prevent that from happening.

Today we look at the front-loading benefits of building OSS / network auto-discovery tools.

We all know that OSS are only as good as the data we seed them with. As the old saying goes, garbage in, garbage out.

Assurance / network-health data is generally collected directly from the network in near real time, typically using SNMP traps, syslog events and similar. The network is generally the data master for this assurance-style data, so it makes sense to pull data from the network wherever possible (ie bottom-up data flows).
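
As a simple illustration of that bottom-up flow, collection can be as basic as listening for syslog messages pushed by network devices. This is a bare-bones UDP listener sketch (the port is a non-privileged stand-in for syslog’s usual 514, and real collectors obviously parse, normalise and buffer far more carefully):

    import socket

    # Minimal syslog-style UDP listener; RFC 3164/5424 messages arrive as plain datagrams.
    # Port 5514 is used here to avoid needing root privileges for the standard port 514.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5514))

    while True:
        data, (src_ip, _port) = sock.recvfrom(4096)
        # A real mediation layer would parse facility/severity and map fields to naming standards;
        # this sketch just tags each event with its source device for downstream correlation.
        print(f"{src_ip}: {data.decode(errors='replace').strip()}")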

Fulfilment data, in the form of customer orders, network designs, etc are often captured in external systems first (ie as master) and pushed into the network as configurations (ie top-down data flows).
[Image: Bottom-up and top-down OSS data flows]

These two flows meet in the middle, as they both tend to rely on network inventory and/or resources. Bottom-up, network faults need to be tied to inventory / resource / topology information (eg fibre cuts, port failures, device failures, etc), which is important for fault identification (Root Cause Analysis [RCA]). Similarly top-down, customer services / circuits / contracts tend to consume inventory, capacity and/or other resources.

Looking end-to-end and correlating network health (assurance) to customer service health (fulfilment) (eg as Service Level Agreement [SLA] analysis, Service Impact Analysis [SIA]) tends to only be possible due to reconciliation via inventory / resource data sets as linking keys.
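
A toy example of that meet-in-the-middle correlation, where inventory identifiers (port IDs in this case) act as the linking keys between network alarms and customer services (all identifiers are hypothetical):

    # Inventory identifiers act as the linking keys between assurance and fulfilment data.
    services_by_port = {  # fulfilment view: which customer service consumes which port
        "MEL-PE1:ge-0/0/1": "CUST-00042-EthernetAccess",
        "MEL-PE1:ge-0/0/2": "CUST-00107-EthernetAccess",
    }

    alarms = [  # assurance view: faults raised against network resources
        {"resource": "MEL-PE1:ge-0/0/1", "alarm": "LOS"},
        {"resource": "MEL-PE1:ge-0/0/7", "alarm": "LOS"},
    ]

    # Service Impact Analysis: map each alarm to a customer service via the inventory key
    for a in alarms:
        service = services_by_port.get(a["resource"])
        if service:
            print(f"Alarm {a['alarm']} on {a['resource']} impacts {service}")
        else:
            print(f"Alarm {a['alarm']} on {a['resource']} has no known service impact")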

Seeding an OSS with inventory / resource data can be done via three methods (or a combination of them):

  1. Data migration (eg script-loading from external sources such as spreadsheet or CSV files)
  2. Manual data creation
  3. Auto-discovery (ie collection of data directly from the network)

Options 1 and 2 are probably the more traditional methods of initially seeding OSS databases, mainly because they tend to be faster to demonstrate progress.

Option 3 is the front-loading option that can be challenging in the short-to-medium term but will hopefully prove beneficial in the longer term (just like knowledge transfer and automated testing).

It might seem easy to just suck data directly out of the network, but the devil is definitely in the detail, details such as:

  • Choosing optimal timings to poll the network without saturating it (if notifications aren’t being pushed by the network), not to mention session handling of these long-running data transfers
  • Building the mediation layer to perform protocol conversion and data mappings
  • Field translation / mapping to common naming standards. This can be much more difficult than it sounds and is key to allow the assurance and fulfilment meet-in-the-middle correlations to occur (as described above)
  • Reconciliation between the data being presented by the network and what’s already in the OSS database (a minimal reconciliation sketch follows this list). In theory, the data presented by the network should always be right, but there are scenarios where it’s not (eg flapping ports giving the appearance of assets being present / absent from network audit data, assets in test / maintenance mode that aren’t intended to be accepted into the OSS inventory pool, lifecycle transitions from planned to built, etc)
  • Discrepancy and exception handling rules
  • All of the above make it challenging for “siloed” data sets, but perhaps even more challenging is in the discovery and auto-stitching of cross-domain data sets (eg cross-domain circuits / services / resource chains)
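
As flagged in the reconciliation bullet above, here’s a hedged sketch of the basic discrepancy detection: comparing what the network presents against what the OSS inventory already holds (the field names and sample data are purely illustrative):

    def reconcile(discovered: dict, inventory: dict):
        """Compare network-discovered assets against the OSS inventory.
        Both dicts map asset_id -> attribute dict; returns the three discrepancy sets."""
        only_in_network = {k: v for k, v in discovered.items() if k not in inventory}
        only_in_oss = {k: v for k, v in inventory.items() if k not in discovered}
        mismatched = {
            k: {"network": discovered[k], "oss": inventory[k]}
            for k in discovered.keys() & inventory.keys()
            if discovered[k] != inventory[k]
        }
        return only_in_network, only_in_oss, mismatched

    # Hypothetical example: a card the OSS doesn't know about, and a port in a different lifecycle state
    discovered = {"NE1/slot2": {"type": "10G-card"}, "NE1/port5": {"state": "up"}}
    inventory = {"NE1/port5": {"state": "planned"}, "NE1/slot3": {"type": "1G-card"}}
    print(reconcile(discovered, inventory))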

The vexing question arises – do you front-load and seed via auto-discovery or perform a creation / migration that requires more manual intervention? In some cases, it could even be a combination of both as some domains are easier to auto-discover than others.

Automated testing and new starters

Can you guess what automated OSS testing and OSS new starters have in common?

Both are best front-loaded.

As a consultant, I’ve been a new starter on many occasions, as well as being assigned new starters on probably even more occasions. From both sides of that fence, it’s far more effective to front-load the new starter with training / knowledge to bring them up to speed / effectiveness, as soon as possible. Far more useful from the perspective of quality, re-work, self-sufficiency, etc.

Unfortunately, this is easier said than done. For a start, new starters are generally only required because the existing team is completely busy. So busy that it’s really hard to drop everything to make time to deliver up-front training. It reminds me of this diagram.
[Image: We're too busy]

Front-loading of automated testing is similar… it takes lots of time to get it operational, but once in place it allows the team to focus on business outcomes faster.

In both cases, front-loading leads to a decrease in efficiency over the first few days, but tends to justify the effort soon thereafter. What other examples of front-loading can you think of in OSS?

The OSS breathing coach analogy

“To paraphrase the great Chinese philosopher Lao Tzu, resisting change is like trying to hold your breath – even if you’re successful, it won’t end well.”
Michael McQueen, here.

OSS is an interesting dichotomy. At one end of the scale, you have the breath holders – those who want the status quo to remain so that they can bring their OSS (and/or network) under control. At the other end, you have the hyperventilators – those who want to force a constant stream of change to overcome any perceived shortfalls in the current solution.

The more desirable state is probably a balance between breath holding and hyperventilation:

  • If you’re an OSS implementer (eg vendor, integrator, internal project delivery team), then you rely on change – as long as it’s enough to deliver an income, but not so much as to overwhelm you.
  • If you’re an OSS operator, then you long for an OSS that does its role perfectly and evolves at a manageable speed, allowing you stability.

The art of change management in OSS is to act as a breathing coach – to find the collective balance of respiration that suits the organisation whilst considering the natural tendencies of all of the different contributors to the project.

Just like breathing, change might seem simple, but is often completely underestimated as a result. But spend some time with any breathing coach or OSS change manager and you’ll find that there are many techniques that they call upon to find optimal balance.

Market for orchestration to triple from 2018 to 2023… but…

CSPs’ needs in orchestration are evolving in parallel on several dimensions. These can be considered hierarchically. At the highest level is software that has an end-to-end service role, as is the case in the ONAP project. This software generally supports a service life-cycle perspective, containing functions from design and service creation, to provisioning and activation, to operations management, analysis, upgrade and evolution.
Beneath this tier, in a resource-facing sense, is software that simplifies deployment and operation of virtual system infrastructures in cloud-native applications, NFV, vco/CORD and MEC. This carries the overall tag of MANO and incorporates the domains of NFV (with NFVO, for deployment and operation of virtualized network functions) and virtualized infrastructure management (or VIM, for automating deployment and operation of virtual system infrastructures). Open source developments are significant at each of these layers of orchestration, and each contains a significant portion of the overall orchestration TAM.
In parallel is the functionality for managing hybrid virtual and physical infrastructures, which is the reality in most CSP environments. This can be thought of as a lateral branch to MANO for virtualized infrastructures in the orchestration stack.
Together these categories make up the TAM [Total Addressable Market] for orchestration solutions with CSPs. This is a high-priority area of focus for CSPs and is one of the highest growth areas of software innovation and development in support of their service delivery needs. We expect the TAM for orchestration software to triple from 2018 through 2023 at a CAGR of 32.5%. This is partially because of the nascent level of the offerings at the current time, as well as the high priority that CSPs and their vendor suppliers are placing on the domain.”
“Succeeding on an Open Field: The Impact of Open Source Technologies on the Communication Service Provider Ecosystem,” an ACG Research Report.

Whilst the title of this blog is just one of the headline numbers in this report by ACG Research, there are a number of other interesting call-outs, so it’s well worth having a closer read of the report.

The research has been funded by the Linux Foundation, so it naturally has a focus on open-source solutions for network operators (CSPs). Here’s another quote from the report relating to open-source, “The main motivations behind the push for open source solutions in CSP operations are not simply focused on cost reduction as a goal. CSPs are thinking strategically and globally. There is a realization that the competitive landscape for communication and information services is changing rapidly, and it includes global, webscale service providers and over-the-top solutions.
Leading CSPs want industry collaboration and cooperation to solve common challenges…
Their top three motivations are:
• Unifying multiple service providers around a common approach
• Avoiding vendor lock-in and dependencies on a single vendor
• Accessing a broader talent pool than your own organization or any one vendor could provide.”

The first bullet-point is where the CSPs diverge from the likes of AWS and Google. Whilst the CSPs, each with their local geographical reach, seek global unification through standardisation (ie to ensure simpler interconnection), AWS and Google appear to be seeking global reach and global domination (making unification efforts irrelevant for them).

Just curious though. What if global domination does come to pass in the next few years? Will there be a three-fold increase in the orchestration market or complete decimation? Check out this earlier post that describes an OSS doomsday scenario.

Global CSPs have significant revenue streams that won’t disappear by 2023 and will be certain to put up a fight against becoming obsolescent under that doomsday scenario. It seems that open source and orchestration are key weapons in this global battle, so we’re bound to see some big investments in this space.

3 categories of OSS investment justification

Insurer IAG has modelled the financial cost that a data breach or ransomware attack would have on its business, in part to understand how much proposed infosec investments might offset its losses.
Head of cybersecurity and governance Ian Cameron told IBM Think 2018 in Sydney that the “value-at-risk modelling” project called upon the company’s actuarial expertise to put numbers on different types and levels of security threats.
“Because we’re an insurance company, we can use actuarial methods to price or model what the costs of a loss event would be,” Cameron said.
“If we have a major data breach or a major ransomware attack, we’ve done some really great work in the past 12 months to model the net cost of losses to our organisation in terms of the loss of productivity, the cost of advertising to address the concerns of our customers, the legal costs, and the costs of regulatory oversight.
“We’ve been able to work out the distribution of loss from a small event to a very big event.”
Ry Crozier
on IT News.

There are really only three main categories of benefit that an OSS can be built around:

  • Cost reduction
  • Revenue generation / increase
  • Brand value (ie insurance of the brand, via protection of customer perception of the brand)

The last on the list is rarely used (in my experience) to justify OSS/BSS investment. The IAG experience of costing out infosec risk to operations and brand is an interesting one. It’s also one that has some strong parallels for the OSS/BSS of network operators.

Many people in the telecoms industry treat OSS/BSS as an afterthought and/or an expensive cost centre. Those people fail to recognise that the OSS/BSS are the operationalisation engines that allow customers to use the network assets.

Just as IAG was able to do through actuarial analysis, a telco’s OSS/BSS team could “work out the distribution of loss from a small event to (be) a very big event” (for the telco’s brand value). Consider the loss of repute during sustained network outages. Consider the impact of negative word-of-mouth from billing mistakes. Consider how revenue leakage analysis and predictive network health management might offset losses.
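
As a purely hypothetical illustration of applying that actuarial style of reasoning to OSS/BSS, an expected annual loss can be built from estimated event likelihoods and impacts, then compared against the cost of the investment that mitigates them (every figure below is invented):

    # Hypothetical loss events an OSS/BSS investment might mitigate.
    # probability = estimated chance per year, impact = estimated cost if the event occurs ($)
    loss_events = [
        {"event": "sustained network outage", "probability": 0.10, "impact": 8_000_000},
        {"event": "systemic billing errors", "probability": 0.25, "impact": 1_200_000},
        {"event": "revenue leakage", "probability": 0.60, "impact": 900_000},
    ]

    annual_expected_loss = sum(e["probability"] * e["impact"] for e in loss_events)
    print(f"Expected annual loss: ${annual_expected_loss:,.0f}")  # $1,640,000 with these invented figures
    # An OSS/BSS investment that halves these likelihoods could then be weighed against its own cost.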

Can the IAG approach work for justifying your investment in OSS/BSS?

Do you use any other major categories for justifying OSS/BSS spend?

OSS stepping stone or wet cement

Very often, what is meant to be a stepping stone turns out to be a slab of wet cement that will harden around your foot if you do not take the next step soon enough.”
Richelle E. Goodrich.

Not sure about your parts of the world, but I’ve noticed the terms “tactical” (ie stepping stone solution) and “strategic” (ie long-term solution) entering the architectural vernacular here in Australia.

OSS seem to be full of tactical solutions. We’re always on a journey to somewhere else. I love that mindset – getting moving now, but still keeping the future in mind. There’s just one slight problem… how many times have we seen a tactical solution that was built years before? Perhaps it’s not actually a problem at all in some cases – the short-term fix is obviously “good enough” to have survived.

As a colleague insightfully pointed out last week – “if you create a tactical solution without also preparing a strategic solution, you don’t have a tactical solution, you have a solution.”

When architecting your OSS solutions, do you find yourself more easily focussing on the tactical, the strategic, or is having an eye on both the essential part of your solution?

OSS compromise, no, prioritise

On Friday, we talked about how making compromises on OSS can actually be a method for reducing risk. We used the OSS vendor selection process to discuss the point, where many stakeholders contribute to the list of requirements that help to select the best-fit product for the organisation.

To continue with this same theme, I’d like to introduce you to a way of prioritising requirements that borrows from the risk / likelihood matrix commonly used in project management.

The diagram below shows the matrix as it applies to OSS.
[Image: OSS automation grid]

The y-axis shows the frequency of use (of a particular feature / requirement). The x-axis shows the time / cost savings that will result from having this functionality or automation.

If you add two extra columns to your requirements list, the frequency of use and the resultant savings, you’ll quickly identify which requirements are highest priority (green) based on business benefit. Naturally there are other factors to consider, such as cost-benefit, but it should quickly narrow down to your 80/20 that will allow your OSS to make the most difference.
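
A minimal sketch of that two-column scoring, with hypothetical requirements ranked by frequency of use multiplied by the saving each use delivers:

    # Each requirement carries the two extra columns described above:
    # uses_per_month (frequency of use) and saving_minutes (time saved per use).
    requirements = [
        {"name": "auto-activate broadband orders", "uses_per_month": 3000, "saving_minutes": 12},
        {"name": "bulk alarm suppression", "uses_per_month": 200, "saving_minutes": 25},
        {"name": "legacy report export", "uses_per_month": 4, "saving_minutes": 60},
    ]

    # Simple benefit score: total minutes saved per month (frequency x saving per use)
    for r in sorted(requirements, key=lambda r: r["uses_per_month"] * r["saving_minutes"], reverse=True):
        score = r["uses_per_month"] * r["saving_minutes"]
        print(f"{score:8d} min/month  {r['name']}")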

The same model can be used to sub-prioritise too. For example, you might have a requirement to activate orders – but some orders will occur very frequently, whilst other order types occur rarely. In this case, when configuring the order activation functionality, it might make sense to prioritise on the green order types first.

OSS compromise, not compromised

When you’ve got multiple powerful parties involved in a decision, compromise is unavoidable. The point is not that compromise is a necessary evil. Rather, compromise can be valuable in itself, because it demonstrates that you’ve made use of diverse opinions, which is a way of limiting risk.”
Chip and Dan Heath
in their book, Decisive.

This risk perspective on compromise (ie diversity of thought), is a fascinating one in the context of OSS.

Let’s just look at Vendor Selection as one example scenario. In the lead-up to buying a new OSS, there are always lots of different requirements that are thrown into the hat. These requirements are likely to come from more than one business unit, and from a diverse set of actors / contributors. This process, the OSS Thrashing process, tends to lead to some very robust discussions. Even in the highly unlikely event of every requirement being met by a single OSS solution, there are still compromises to be made in terms of prioritisation on which features are introduced first. Or which functionality is dropped / delayed if funding doesn’t permit.

The more likely situation is that each of the product options will have different strengths and weaknesses, each possibly aligning better or worse to some of the requirement contributor needs. By making the final decision, some requirements will be included, others precluded. Compromise isn’t an option, it’s a reality. The perspective posed by the Heath brothers is whether all requirement contributors enter the OSS vendor selection process prepared for compromise (thus diversity of thought) or does one actor / business-unit seek to steamroll the process (thus introducing greater risk)?

The OSS transformation dilemma

There’s a particular carrier that I know quite well that appears to despise a particular OSS vendor… but keeps coming back to them… and keeps getting let down by them… but keeps coming back to them. And I’m not just talking about support of their existing OSS, but whole new tools.

It never made sense to me… until reading Seth Godin’s blog today. In it, he states, “…this market segment knows that things that are too good to be true can’t possibly work, and that’s fine with them, because they don’t actually want to change–they simply want to be able to tell themselves that they tried. That the organization they paid their money to failed, of course it wasn’t their failure. Once you see that this short-cut market segment exists, you can choose to serve them or to ignore them. And you can be among them or refuse to buy in.”

It starts to make sense. The same carrier has a tendency to spend big money on the big-4 consultants whenever an important decision needs to be made. If the big, ambitious project then fails, the carrier’s project sponsors can say that the big-4 organization they paid their money to failed.

Does that ring true of any telco you’ve worked with? That they don’t actually want to change–they simply want to be able to tell themselves that they tried (or be seen to have tried) with their OSS transformation?

Are we actually stuck in one big dilemma? Are our OSS transformations actually so hard that they’re destined to fail, yet are already failing so badly that we desperately need to transform them? If so, then Seth’s insightful observation gives the appearance of progress AND protection from the pain of failure.

Not sure about you, but I’ll take Seth’s “refuse to buy in” option and try to incite change.

Are we making our OSS lives easier?

As an implementer of OSS, what’s the single factor that makes it challenging for us to deliver on any of the three constraints of project delivery? Complexity. Or put another way, variants. The more variants, the less chance we have of delivering on time, cost or functionality.

So let me ask you, is our next evolution simpler? No, actually. At least, it doesn’t seem so to me.

For all their many benefits, are virtualised networks simpler? We can apply abstractions to give a simpler view to higher layers in the stack, but we’ve actually only introduced more layers. Virtualisation will also bring an even higher volume of devices, transactions, etc to monitor, so we’re going to have to develop complex ways of managing these factors in cohorts.

We’re big on automations to simplify the roles of operators. But automations don’t make the task simpler for OSS implementers. Once we build a whole bunch of complex automations it might give the appearance of being simpler. But under the hood, it’s not. There are actually more moving parts.

Are we making it simpler through repetition across the industry? Nope, with the proliferation of options we’re getting more diverse. For example, back in the day, we only had a small number of database options to store our OSS data in (I won’t mention the names, I’m sure you know them!). But what about today? We have relational databases of course, but also have so many more options. What about virtualisation options? Mediation / messaging options? Programming languages? Presentation / reporting options? The list goes on. Each different OSS uses a different suite of tools, meaning less standardisation.

Our OSS lives seem to be getting harder by the day!

From PoC to OSS sandpit

You all know I’m a fan of training operators in OSS sandpits (and as apprenticeships during the build phase) rather than a week or two of classroom training at the end of a project.

To reduce the re-work in building a sandpit environment, which will probably be a dev/test environment rather than a production environment, I like to go all the way back to the vendor selection process.
[Image: From PoC to OSS sandpit]

Running a Proof of Concept (PoC) is a key element of vendor selection in my opinion. The PoC should only include a small short-list of pre-selected solutions so as not to waste the time of the operator or the vendors / integrators. But once short-listed, the PoC should be a cut-down reflection of the customer’s context. Where feasible, it should connect to some real devices / apps (maybe lab devices / apps, possibly via a common/simple interface like SNMP). This takes some time on both sides to set up, but it shows how easily (or not) the solution can integrate with the customer’s active network, BSS, etc. It should be specifically set up to show the device types, alarm types, naming conventions, workflows, etc that fit into the customer’s specific context. That allows the customer to understand the new OSS in terms they’re familiar with.

And since the effort has been made to set up the PoC, doesn’t it make sense to make further use of it rather than just throw it away? If the winning bidder then leaves the PoC environment in the hands of the customer, it becomes the sandpit to play in. The big benefit for the winning bidder is that hopefully the customer will have fewer “what if?” questions that distract the project team during the implementation phase. Questions can be demonstrated, even if only partially, using the sandpit environment rather than empty words.

Post Implementation Review (PIR)

Have you noticed that OSS projects need to go through extensive review to get their business cases funded? That makes sense. They tend to be a big investment after all. Many OSS projects fail, so we want to make sure this one doesn’t, and we perform thorough planning / due-diligence.

But I do find it interesting that we spend less time and effort on Post Implementation Reviews (PIRs). We might do the review of the project, but do we compare with the Cost Benefit Analysis (CBA) that undoubtedly goes into each business case?

[Image: OSS Project Analysis Scales]

Even more interesting is that we spend even less time and effort performing ongoing analysis of an implemented OSS against the CBA.

Why interesting? Well, if we took the time to figure out what has really worked, we might have better (and more persuasive) data to improve our future business cases. Not only that, but there’d be more chance to reduce the effort on the business case side of the scale compared with current approaches (as per the diagram above).

What do you think?

OSS – just in time rather than just in case

We all know that once installed, OSS tend to stay in place for many years. Too much effort to air-lift in. Too much effort to air-lift back out, especially if tightly integrated over time.

The monolithic COTS (commercial off-the-shelf) tools of the past would generally be commissioned and customised during the initial implementation project, with occasional integrations thereafter. That meant we needed to plan out what functionality might be required in future years and ask for it to be implemented, just in case. Along with all the baked-in functionality that is never needed, plus the just-in-case features that are possibly never used, we ended up with a lot of bloat in our OSS.

With the current approach of implementing core OSS building blocks, then utilising rapid release and microservice techniques, we have an ongoing enhancement train. This provides us with an opportunity to build just in time, to build only functionality that we know to be essential.

This has pluses and minuses. On the plus side, we have more opportunity to restrict delivery to only what’s needed. On the minus side, a just in time mindset can build a stop-gap culture rather than strategic, long-term thinking. It’s always good to have long-term thinkers / planners on the team to steer the rapid release implementations (and reductions / refactoring) and avoid a new cause of bloat.

ONF executes new Strategic Plan

ONF Hits The Ground Running with Execution of New Strategic Plan.

Providing an update to its previously announced strategic plan aimed at creating a robust supply chain for open source solutions for operators, the Open Networking Foundation (ONF) today announced key milestones achieved. The achievements include the formation of the Technical Leadership Team (TLT), finalization on the initial focus areas for Reference Designs (RDs) and that four key new supply chain partners have joined as Partners (ONF’s top membership tier) to invest in the Reference Design activities, including ADTRAN, Dell EMC, Edgecore Networks and Juniper Networks.

Reference Designs for the Edge & Access Cloud

ONF’s operator leadership, which includes AT&T, China Unicom, Comcast, Google, Deutsche Telekom, Telefonica, NTT Group, and Turk Telekom, have agreed on a focused set of open source Reference Designs they communally plan to take to deployment in the near future as they begin to build out their edge & access clouds.  Furthermore, work on these RDs and their corresponding Exemplar Platforms have begun, with expectations that drafts, first implementations and production deployments will take place this year.

Reference Designs are “blueprints” for how to put common modular components together to create platforms based on open source to address specific deployment use cases for the emerging edge cloud.

  • SEBA-RD (SDN Enabled Broadband Access): Lightweight reference design based on a variant of R-CORD. Supports a multitude of virtualized access technologies at the edge of the carrier network, including PON, G.Fast, and eventually DOCSIS and more.
  • NFV Fabric-RD: SDN-native spine-leaf data center fabric optimized for edge applications.
  • UPAN-RD (Unified, Programmable & Automated Network): Next generation SDN reference design, leveraging P4 to enable flexible data plane programmability and network embedded VNF acceleration.
  • ODTN-RD (Open Disaggregated Transport Network): Open multi-vendor optical networks.

 

Details on the Reference Designs, including the Operators committing to each, can be found in the ONF Reference Design blog post.

 Trailblazing Open Source Platforms

In addition to the Reference Designs that are all tightly scoped and on the verge of deployment, the ONF will continue to pursue open source platforms that lead next-generation activities for the industry.  These activities include:

  • M-CORD: Virtualized next-generation 5G mobile core and RAN platform.
  • CORD (Multi-access Edge Cloud): Brings together all the access technologies into a single edge cloud, and enables third-party edge applications to run on the carrier’s network.

 

The ONF operators remain passionate about the full CORD vision, but recognize that 5G technologies are still in formation and, for that reason, deployments are perhaps a year behind the other exemplar platforms.

Driving Formation of a New Supply Chain

To support operators’ impending deployment of these Reference Designs, a number of tier-1 vendors have joined the efforts as ONF partners to contribute their skills, expertise and technologies to help realize the RDs. ADTRAN, Dell EMC, Edgecore Networks and Juniper Networks are actively participating as supply chain partners in this reference design process. Each brings unique skills and complementary competencies, and by working together the partnership will be able to expedite the production readiness of the various solutions.

  • ADTRAN: A leader in installed base broadband and PON deployments, Adtran is joining the ONF partnership to serve as a system integrator, helping operators assemble solutions based on the new Reference Designs – leveraging open source platforms and interworking these solutions with existing OSS/BSS systems.
  • Dell EMC: With leadership in servers and data center expertise, and a history of embracing disaggregated switching architectures, Dell EMC joins the partnership as an “ODM++” positioned to assemble open solutions and provide global reach and logistical support to scale operator deployments.
  • Edgecore Networks: Edgecore Networks has been a leader in open switching and open PON systems, embracing the white box movement early and providing the majority of open access hardware white boxes to date. By joining as a full partner, Edgecore will be offering operators an advanced selection of open hardware choices for broadband access and more, all pre-integrated with ONF Reference Designs and Exemplar Platforms.
  • Juniper Networks: Juniper Networks simplifies the complexities of networking with products, solutions and services in the cloud era. A recognized leader in advanced networking technologies, Juniper is joining the ONF to embrace the open source movement for edge cloud transformation.

These new partners complement ONF’s existing supply chain partners Ciena, Intel, Radisys and Samsung, each of whom is playing an equally essential role in the new open source ecosystem.

 

ONF Partner Commentary

 ADTRAN
“Over the last several years, ADTRAN has fully embraced open, disaggregated and software-defined attributes as the core principles in our Mosaic-branded access focused solutions. We are encouraged by ONF’s new strategic plan and strong operator commitment and believe ADTRAN is a perfect fit as a supply chain partner,” said ADTRAN Senior Vice President of Technology and Strategy Jay Wilson. “ADTRAN intends to contribute across multiple reference design initiatives by applying access domain expertise both in software and systems integration.”

Dell EMC

“Dell EMC’s Open Networking initiative is about choice, flexibility and agility for enterprises and service providers, without a compromise on technology,” said Gavin Cato, Senior Vice President, Dell EMC Networking Engineering. “Dell EMC is a pioneer in open networking and committed to advancing the ONF efforts for harmonizing open source initiatives and transforming networking operations.”

Edgecore Networks

“Edgecore has been delivering open network solutions to data center operators and service providers for years, and understands the important role of the ONF as the premier community that defines and develops solutions based on open technology to meet the real world requirements of service providers,” said George Tchaparian, CEO, Edgecore Networks.  “We have been contributing to ONF work on CORD and are excited now to become a supplier partner contributing to the reference designs that will accelerate deployment of open networking.”

Juniper Networks

“At Juniper Networks, we are huge proponents of the open-infrastructure movement. In fact, as a testament to that, we open-sourced our Contrail SDN controller in 2013. We also became the first major network solution provider to incorporate P4 and P4Runtime earlier this year, enabling ubiquitous control across all our platforms. The industry’s need for open-source solutions has only grown since then as service providers more and more are working to ensure that a majority of their networks include open-source infrastructure to build best-of-breed solutions and encourage innovation. We are excited to be joining the Open Networking Foundation to continue fostering innovation in open networking.”

– Bikash Koley, Chief Technology Officer, Juniper Networks

 

TLT Leadership

Comcast

“Comcast is excited about the positive benefits open-source platforms and solutions are delivering across the industry.  ONF’s work on reference designs represents an important step forward in open source development for the edge cloud, and we’re looking forward to participating.”

– Dr. Robert L. Howald, Comcast and ONF TLT Chair

AT&T

“AT&T has been a strong advocate of ONF’s open-source initiatives and we believe we are now at the cusp of deploying open source based solutions for next-generation broadband access, further driving open solutions to the edges of networks. Reference Designs are being formed to help ensure the success of this effort, and to clearly indicate to the ecosystem where we and our fellow operators are headed.”

– Al Blackburn, AT&T and ONF TLT Vice-Chair

 

Reference Design Details

Additional details on the Reference Designs can be found in this blog post.

Would you ever alarm your lab equipment?

Something curious dawned on me the other day – I wondered how many people / organisations actively manage alarms / alerts being generated by their lab equipment?

At first glance, this would seem silly. Lab environments are in constant flux, in all sorts of semi-configured situations, and therefore likely to be alarming their heads off at different times.

As such, it would seem even sillier to send alarms, syslogs, performance counters, etc from your lab boxes through to your production alarm management platform. Events on your lab equipment simply shouldn’t be distracting your NOC teams from handling events on your active network, right?

But here’s why I’m curious, in a word: DevOps. Does the DevOps model now mean that some lab equipment stays in a relatively stable state and performs near-mission-critical activities like automated regression testing? Therefore, does it make sense to selectively monitor / manage some lab equipment?
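
If that selective monitoring were attempted, one hypothetical approach is to forward production events as usual but only pass through lab events whose sources are explicitly whitelisted as hosting those near-mission-critical functions (all addresses and rules below are invented):

    # Hypothetical forwarding rule: drop lab events unless the source device is whitelisted
    # as hosting a stable, near-mission-critical function (eg an automated regression rig).
    LAB_SUBNET_PREFIX = "10.99."            # assumed lab addressing convention
    MONITORED_LAB_DEVICES = {"10.99.1.20"}  # regression-test rig the NOC does want to see

    def forward_to_noc(event: dict) -> bool:
        src = event["source_ip"]
        if not src.startswith(LAB_SUBNET_PREFIX):
            return True                      # production event: always forward
        return src in MONITORED_LAB_DEVICES  # lab event: forward only if whitelisted

    events = [
        {"source_ip": "10.1.5.7", "alarm": "LINK_DOWN"},      # production device
        {"source_ip": "10.99.1.20", "alarm": "TEST_FAILED"},   # monitored lab rig
        {"source_ip": "10.99.3.44", "alarm": "LINK_DOWN"},     # unmanaged lab churn
    ]
    print([e["alarm"] for e in events if forward_to_noc(e)])  # ['LINK_DOWN', 'TEST_FAILED']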

In what other scenarios might we wish to send lab alarms to the NOC (and I’m not talking about system integration testing type scenarios, but ongoing operational scenarios)?