The Autonomous Network / OSS Clock

In yesterday’s post, we talked about what needs to happen for a network operator to build an autonomous network. Many of the factors extended beyond the direct control of the OSS stack. We also looked at the difference between designing network autonomy for an existing OSS versus a ground-up build of an autonomous network.

We mostly looked at the ground-up build yesterday (at the expense of legacy augmentation).

So let’s take a slightly closer look at legacy automation. Like any legacy situation, you need to first understand current state. I’ve heard colleagues discuss the level of maturity of an existing network operations stack in terms of a single metric.

However, I feel that this might miss some of the nuances of the situation. For example, different activities are likely to be at different levels of maturity. Hence, the attempt at benchmarking the current situation on the OSS or Autonomous Networking clock below.

OSS Autonomy Clock

Sample activities are shown in grey boxes to demonstrate the concept (I haven’t yet invested enough time into what the actual breakdown of activities should be). A rough scoring sketch follows the list below.

  • Midnight is no monitoring capability
  • 3AM is Reactive Mode (ie reacting to data presented by the network / systems)
  • 6AM is Predictive Mode (ie using historical learnings to identify future situations)
  • 9AM is Prescriptive / Pre-cognitive Mode (ie using historical learnings, or pre-cognitive capabilities to identify what to do next)
  • Mid-day is Autonomous Networking (ie to close the loop and implement / control actions that respond to current situations automatically)
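
To make the clock a little more tangible, here’s a rough scoring sketch in Python. The activities, clock positions and the crude averaging are all placeholders of my own invention, not a definitive maturity model.

```python
# A minimal sketch of benchmarking activities against the OSS autonomy clock.
# The activity names and clock positions are illustrative only (hypothetical),
# not a definitive breakdown of operational activities.

CLOCK = {
    "midnight": 0,   # no monitoring capability
    "3am": 3,        # reactive
    "6am": 6,        # predictive
    "9am": 9,        # prescriptive / pre-cognitive
    "midday": 12,    # autonomous (closed loop)
}

# Hypothetical current-state assessment: each activity sits at its own "time"
activities = {
    "fault management": "6am",
    "capacity planning": "3am",
    "service activation": "9am",
    "configuration backup": "midnight",
}

def clock_summary(assessment):
    """Return per-activity hours plus a simple average as an overall indicator."""
    hours = {name: CLOCK[pos] for name, pos in assessment.items()}
    average = sum(hours.values()) / len(hours)
    return hours, average

if __name__ == "__main__":
    hours, average = clock_summary(activities)
    for name, hour in sorted(hours.items(), key=lambda kv: kv[1]):
        print(f"{name:22s} -> {hour:2d} o'clock")
    print(f"overall (crude average)  -> {average:.1f} o'clock")
```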

As always, I’d love to hear your thoughts!

As a network owner….

….I want to make my network so observable, reliable, predictable and repeatable that I don’t need anyone to operate it.

That’s clearly a highly ambitious goal. Probably even unachievable if we say it doesn’t need anyone to run it. But I wonder whether this has to be the starting point we take on behalf of our network operator customers?

If we look at most networks, OSS, BSS, NOC, SOC, etc (I’ll call this whole stack “the black box” in this article), they’ve been designed from the ground up to be human-driven. We’re now looking at ways to automate as many steps of operations as possible.

If we were to instead design the black-box to be machine-driven, how different would it look?

In fact, before we do that, perhaps we have to take two distinct perspectives on this question:

  1. Retro-fitting existing black-boxes to increase their autonomy
  2. Designing brand new autonomous black-boxes

I suspect our approaches / architectures will be vastly different.

The first will require an incredibly complex measure, command and control engine to sit over the top of the existing black box. It will probably also need to reach into many of the components that make up the black box and exert control over them. This approach has many similarities with what we already do in the OSS world. The only exception would be that we’d need to be a lot more “closed-loop” in our thinking. I should also re-iterate that this is incredibly complex because it inherits an existing “decision tree” of enormous complexity and adds further convolution.

The second approach holds a great deal more promise. However, it will require a vastly different approach on many levels:

  1. We have to take a chainsaw to the decision tree inside the black box. For example:
    • We start by removing as much variability from the network as possible. Think of this like other utilities such as water or power. Our electricity service only has one feed-type for almost all residential and business customers. Yet it still allows us great flexibility in what we plug into it. What if a network operator were to simply offer a “broadband dial-tone” service and let end users decide what they overlay on that bit-stream?
    • This reduces the “protocol stack” in the network (think of this in terms of the long list of features / tick-boxes on any router’s brochure)
    • As well as reducing network complexity, it drastically reduces the variables an end-user needs to choose from. The operator no longer needs 50 grandfathered, legacy products.
    • This also reduces the decision tree in BSS-related functionality like billing, rating, charging, clearing-house
    • We achieve a (globally?) standardised network services catalog that’s completely independent of vendor offerings
    • We achieve a more standardised set of telemetry data coming from the network
    • In turn, this drives a more standardised and minimal set of service-impact and root-cause analyses
  2. We design data input/output methods and interfaces (to the black box and to any of its constituent components) with closed-loop immediacy in mind. At the moment we tend to have interfaces that allow us to interrogate the network and push changes into it separately, rather than tasking the network to keep itself within expected operational thresholds
  3. We allow networks to self-regulate and self-heal, not just within a node, but between neighbours without necessarily having to revert to centralised control mechanisms like OSS
  4. All components within the black-box, down to device level, are programmable. [As an aside, we need to consider how to make the physical network more programmable or reconcilable, considering that cables, (most) patch panels, joints, etc don’t have APIs. That’s why the physical network tends to give us the biggest data quality challenges, which ripples out into our ability to automate networks]
  5. End-to-end data flows (ie controls) are to be near-real-time, not constrained by processing lags (eg 15 minute poll cycles, hourly log processing cycles, etc) 
  6. Data minimalism engineering. It’s currently not uncommon for network devices to produce dozens, if not hundreds, of different metrics. Most are never used by operators manually, nor are they likely to be used by learning machines. This increases data processing, distribution and storage overheads. If we only produce what is useful, then it should improve data flow times (point 5 above). Therefore learning machines should be able to control which data sets they need from network devices and at what cadence. The learning engine can start off collecting all metrics, then progressively turn them off as it deems them unnecessary (see the sketch after this list). This could also extend to controlling log-levels (ie how much granularity of data is generated for a particular log, event, performance counter)
  7. Perhaps we even offer AI-as-a-service, whereby any of the components within the black-box can call upon a centralised AI service (and the common data lake that underpins it) to assist with localised self-healing, self-regulation, etc. This facilitates closed-loop decisions throughout the stack rather than just an over-arching command and control mechanism
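
To illustrate point 6 (data minimalism), here’s a minimal sketch of a learning engine that starts by collecting every metric a device offers, then prunes or slows down the ones it never uses. The class names, metric names and thresholds are hypothetical; a real implementation would sit behind whatever device / EMS telemetry APIs are available.

```python
# A rough sketch of the "data minimalism" idea in point 6: a learning engine
# starts by collecting every metric a device offers, then progressively turns
# off (or slows down) the ones it never uses. Device, metric and API names are
# hypothetical; a real implementation would sit behind the device/EMS APIs.

from dataclasses import dataclass, field

@dataclass
class MetricSubscription:
    name: str
    cadence_seconds: int = 60          # start with a default poll cycle
    enabled: bool = True
    times_used_by_models: int = 0      # how often learning models consumed it

@dataclass
class TelemetryController:
    subscriptions: dict = field(default_factory=dict)

    def start_collecting_everything(self, metric_names):
        for name in metric_names:
            self.subscriptions[name] = MetricSubscription(name)

    def record_model_usage(self, metric_name):
        self.subscriptions[metric_name].times_used_by_models += 1

    def prune(self, min_usage=1):
        """Disable metrics the learning machines never used; slow down rarely-used ones."""
        for sub in self.subscriptions.values():
            if sub.times_used_by_models < min_usage:
                sub.enabled = False                 # stop producing it at the source
            elif sub.times_used_by_models < 10:
                sub.cadence_seconds = 300           # reduce cadence for low-value metrics

controller = TelemetryController()
controller.start_collecting_everything(["cpu", "temp", "optical_rx_power", "fan_speed"])
controller.record_model_usage("optical_rx_power")
controller.prune()
for sub in controller.subscriptions.values():
    print(sub)
```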

I’m barely exposing the tip of the iceberg here. I’d love to get your thoughts on what else it will take to bring fully autonomous networks to reality.

OSS are not just a #$%&ing cost centre

It seems that OSS/BSS are always an afterthought. And always seen as a cost centre rather than a revenue generator.

Now I’m biased of course, but I think that’s such a narrow view. And we need everyone in our industry to spread the same gospel. 

I like to think of it like this… Sales teams identify the customers and revenue (let’s call them THE BUY-SIDE). Network teams build the assets that service the customer needs (let’s call them THE SELL-SIDE). But the OSS/BSS are the profit engine because they bring buyers and sellers together.

They initiate revenues (Fulfillment / Activation workflows), they retain revenues (Assurance workflows) and they can identify, then minimise costs (automations, analytics, leakage management, identify ineffective work practices, identify simplification opportunities, workforce coordination, etc, etc). They also have a strong influence on customer experience.

OSS/BSS operationalise the assets (to deliver the services that customers pay for).

How much revenue do our OSS/BSS operationalise? All of it (unless some orders are for professional services and/or are activated directly on the network without touching OSS or BSS systems).

The OSS/BSS also provide the strategic levers for management to pull in future. In times when long-term competitive advantages are hard to find, your OSS/BSS can give significant competitive advantage (if flexible and effective) or hinder it (if inflexible / unadaptable).

The digital transformation paradox twins

There’s an old adage that “the confused mind always says no.”

Consider this from your own perspective. If you’re in a state of confusion about something, are you likely to commit wholeheartedly or will you look to delay / procrastinate?

The paradox for digital transformation is that our projects are almost always complex, but complexity breeds confusion and uncertainty. Transformation may be urgently needed, but it’s really hard to persuade stakeholders and sponsors to commit to change if they don’t have a clear picture of the way forward.

As change agents, we face another paradox. It’s our task to simplify the messaging, but our messaging should not imply that the project will be simple. That will just set unrealistic expectations for our stakeholders (“but this project was supposed to be simple,” they say).

Like all paradoxes, there’s no perfect solution. However, one technique that I’ve found to be useful is to narrow down the choices. Not by discarding them outright, but by figuring out filters – ways to quickly include or exclude branches of the decision tree.

Let’s take the example of OSS vendor selection. An organisation asks itself, “what is the best-fit OSS/BSS for our needs?” The Blue Book OSS/BSS Vendor Directory will show that there are well over 400 OSS/BSS providers to choose from. Confusion!

So let’s figure out what our needs are. We could dive into really detailed requirement gathering, but that in itself requires many complex decisions. What if we instead just use a few broad needs as our first line of filtering? We know we need an outside plant management tool. Our list of 400+ now becomes 20. There’s still confusion, but we’re now more targeted.

But 20 is still a lot to choose from. A slightly deeper level of filtering should allow us to get to a short list of 3-5. The next step is to test those 3-5 to see which does the best at fulfilling the most important needs of the organisation. Chances are that the best-fit won’t fulfil every requirement, but generally it will clearly fulfil more than any of the other alternatives. It’s best-fit, not perfect fit.
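
As a trivial illustration of that first-pass filtering, here’s what the broad-needs filter might look like in code. The vendor entries and capability tags are invented; in practice you’d be working from a directory like The Blue Book.

```python
# A toy illustration of filtering a long vendor list with a few broad needs
# before any detailed requirements work. Vendor names and capability tags are
# entirely made up.

vendors = [
    {"name": "VendorA", "capabilities": {"outside plant", "inventory"}},
    {"name": "VendorB", "capabilities": {"fault management"}},
    {"name": "VendorC", "capabilities": {"outside plant", "workforce management"}},
    # ... imagine 400+ entries here
]

def shortlist(vendor_list, must_have):
    """First-pass filter: keep only vendors covering every broad 'must have' need."""
    return [v for v in vendor_list if must_have <= v["capabilities"]]

first_cut = shortlist(vendors, {"outside plant"})
print([v["name"] for v in first_cut])   # 400+ becomes a much shorter list
```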

We haven’t made the project less complex, but we have simplified the decision. We’ve arrived at the “best” option, so the way forward should be clear, right?

Unfortunately, it’s not always that easy. Even though the best way forward has been identified, there are still uncertainties in the minds of stakeholders, caused purely by the complexity of the upcoming project. I’ve seen examples where the choice of vendor has been clear, with the best-fit clearly surpassing the next-best, but the buyer is still indecisive. I completely get it. Our task as change agents is to reduce doubts and increase transformation confidence.

What will get your CEO fired? (part 4)

In Monday’s article, we suggested that the three technical factors that could get the big boss fired are probably only limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

In that article, we looked closely at a human factor and how current trends of open-source, Agile and microservices might actually exacerbate it. In yesterday’s article we looked at market-serving factors for us to investigate and monitor.

But let’s look at point 3 today. The profitability factors we could consider that reduce the chances of the big boss getting fired are:

  1. Ability to see revenues in near-real-time (revenues are relatively easy to collect, so we use these numbers a lot; profitability measures are much harder because of the shared allocation of fixed costs – see the simple allocation sketch after this list)

  2. Ability to see cost breakdown (particularly which parts of the technical solution are most costly, such as what device types / topologies are failing most often)

  3. Ability to measure profitability by product type, customer, etc

  4. Are there more profitable or cost-effective solutions available?

  5. Is there greater profitability that could be unlocked by simplification?
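
To show why point 1 (and point 3) is harder than it sounds, here’s a deliberately simple sketch that allocates shared fixed costs in proportion to revenue to estimate per-product margin. The figures and the allocation method are illustrative only; real allocation models are far more contested.

```python
# A simplified sketch of estimating profitability per product when fixed costs
# are shared. Fixed costs are allocated in proportion to revenue, which is only
# one (crude) allocation method; all the numbers are invented.

revenue_by_product = {"broadband": 600_000, "voice": 250_000, "iot": 150_000}
direct_cost_by_product = {"broadband": 320_000, "voice": 140_000, "iot": 90_000}
shared_fixed_costs = 200_000   # network, OSS/BSS platforms, NOC, etc.

total_revenue = sum(revenue_by_product.values())

for product, revenue in revenue_by_product.items():
    allocated_fixed = shared_fixed_costs * revenue / total_revenue
    profit = revenue - direct_cost_by_product[product] - allocated_fixed
    margin = profit / revenue
    print(f"{product:10s} profit={profit:10,.0f}  margin={margin:6.1%}")
```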

What will get your CEO fired? (part 3)

In Monday’s article, we suggested that the three technical factors that could get the big boss fired are probably only limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

In that article, we looked closely at a human factor and how current trends of open-source, Agile and microservices might actually exacerbate it. In yesterday’s article we looked at the broader set of catastrophic failure factors for us to investigate and monitor.

But let’s look at some of the broader examples under point 2 today. The market-serving factors we could consider that reduce the chances of the big boss getting fired are:

  1. Immediate visibility of key metrics by boss and execs (what are the metrics that matter, eg customer numbers, ARPU, churn, regulatory, media hot-buttons, network health, etc)

  2. Response to “voice of customer” (including customer feedback, public perception, etc)

  3. Human resources (incl up-skill for new tech, etc)

  4. Ability to implement quickly / efficiently

  5. Ability to handle change (to network topology, devices/vendors, business products, systems, etc)

  6. Measuring end-to-end user experience, not just “nodal” monitoring

  7. Scalability / Capacity (ability to serve customer demand now and into a foreseeable future)

What will get your CEO fired? (part 2)

In Monday’s article, we suggested that the three technical factors that could get the big boss fired are probably only limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

In that article, we looked closely at a human factor and how current trends of open-source, Agile and microservices might actually exacerbate it.

But let’s look at some of the broader examples under point 1 today. The failure factors we could consider that might result in the big boss getting fired are:

  1. Availability (nodal and E2E)

  2. Performance (nodal and E2E)

  3. Security (security trust model – cloud vs corporate vs active network and related zones)

  4. Remediation times, systems & processes (Assurance), particularly effectiveness of process for handling P1 (Priority 1) incidents

  5. Resilience Architecture

  6. Disaster Recovery Plan (incl Backup and Restore process, what black-swan events the organisation is susceptible to, etc)

  7. Supportability and Maintenance Routines

  8. Change and Release Management approaches

  9. Human resources (incl business continuity risk of losing IP, etc)

  10. Where are the SPoFs (Single Points of Failure)? (a quick graph-based sketch follows below)

We should note too that these should be viewed through two lenses:

  • The lens of the network our OSS/BSS is managing and
  • The lens of the systems (hardware/software/cloud) that make up our OSS/BSS
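
On point 10, one quick way to hunt for SPoFs programmatically is to model the topology (the network itself, or the systems that make up the OSS/BSS) as a graph and look for articulation points, using the networkx library. The topology below is invented purely for illustration.

```python
# A quick sketch for point 10 (SPoFs): treat the network (or the OSS/BSS
# deployment itself) as a graph and look for articulation points, ie nodes
# whose failure disconnects part of the topology. Topology shown is invented.

import networkx as nx

topology = nx.Graph()
topology.add_edges_from([
    ("core-1", "core-2"),
    ("core-1", "agg-1"), ("core-2", "agg-1"),      # agg-1 is dual-homed upstream
    ("agg-1", "access-1"), ("agg-1", "access-2"),  # but everything below hangs off agg-1
])

single_points_of_failure = list(nx.articulation_points(topology))
print(single_points_of_failure)   # ['agg-1'] is a candidate for redundancy investment
```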

What will get your CEO fired?

Not sure whether you have a clear answer to that question – either the thought is enticing (you want the CEO to get fired), unthinkable (you don’t want the CEO fired) or somewhere in between.  You’d invariably get different answers from different employees within most organisations (you can’t please all of the people all of the time).

Today’s post doesn’t have a singular answer either, so it poses a number of hypothetical questions. But let’s follow the thread of the question from a purely technical perspective (ie discounting inappropriate personal decisions, incompetence and the like).

If we look at the technical factors that could get the big boss fired, they’re probably only limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

Let’s start with catastrophic failures, ones where entire networks and/or product lines go down for long periods (as opposed to branches of networks becoming faulty or intermittently failing). We’ve had quite a few catastrophic failures in my region in the last few years.

Interestingly, we’re designing networks and systems that should be more reliable day-by-day. In all likelihood they probably are. But with increased system complexity and programmed automation, I wonder whether we’re actually increasing the likelihood of catastrophic failures. That is, examples of cascading failures that are too complex for operators to contain.

Anecdotal evidence surrounding the failures mentioned above suggests that a skills gap following many retirements / redundancies has been partly to blame. The replacements haven’t been as well equipped as their predecessors to prevent the runaway train of catastrophic failure.

Do you think a similar risk awaits us with the current trend towards build-it-yourself OSS? We have all these custom-built, one-of-a-kind, complex systems being built currently. What happens in a few years’ time when their designers / implementers leave? Yes, their replacements might be able to interrogate code repositories, but will they know what changes to rapidly make when patterns of degradation start to appear? Will they have any chance of inherently understanding the logic behind these systems like those who created them? Will they be able to stop the runaway train/s?

This earlier post discussed the build vs buy cycle the market goes through.

OSS build vs buy cycle

There are many pros and cons of the build vs buy argument, but at least there’s a level of repeatability and modularity with off the shelf solutions.

I wonder if the CEOs of today/tomorrow are planning for sophisticated up-skilling of replacements in future when they decide to build in-house today? Or will they just hand the replacements the keys and wish them luck? The latter approach keeps costs down, but it just might get the CEO fired!

We’ll take a closer look at the other two factors (“serve the market” and “profitability”) in the next few days.

A lighter-touch OSS procurement approach (part 3)

We’ve spoken at length about TM Forum’s “Time to kill the RFP? Reinventing IT procurement for the 2020s” report so far this week. We’ve also spoken about the feeling that the OSS/BSS RFP (Request For Proposal) still has relevance in some situations… as long as it takes a lighter touch than most. We’ve spoken about a more pragmatic approach that aims to find best available fit (for key objectives through stages of filtering) rather than perfect fit (for all requirements through detailed analyses). And I should note that “best available fit” includes measurement against these three contrarian procurement KPIs ahead of the traditional ones.

Yesterday’s post discussed how we get to a short list with minimal involvement of buyers and sellers, with the promise that we’d discuss the detailed analysis stage today.

It’s where we do use an RFP, but with thought given to the many pain-points cited so brilliantly by Mark Newman and team in the abovementioned TM Forum report.

The RFP provides the mechanism to firm up pricing and architecture, but is also closely tied to a PoC (Proof of Concept) demonstration. The RFP helps to prioritise the order in which PoCs are performed. PoCs tend to be very time consuming for buyer and seller. So if there’s a clear leader from the paper studies so far, then they will demonstrate first.

If there’s not a clear difference, or if the prime candidate’s demonstration identified significant gaps, then additional PoCs are run.

And to ensure the PoCs are run against the objectives that matter most, we use scenarios that were prioritised during part 1 of this series.

Next steps are to form the more detailed designs, commercials / contracts and ratify that the business case still holds up.

In yesterday’s post, I also promised to share our “starting-point” procurement methodology. I say starting point because each buyer situation is different and we tend to customise it to each buyer’s needs. It’s useful for starting discussions.

The overall methodology diagram is shown below:

PAOSS vendor selection process

A few key notes here:

  1. The process looks much heavier than it really is… if you use traditional procurement processes as an indicator
  2. We have existing templates for all the activities marked in yellow
  3. The activity marked in blue partially represents the project we’re getting really excited to introduce to you tomorrow

OSS/BSS procurement is flawed from the outset

You may’ve noticed that things have been a little quiet on this blog in recent weeks. We’ve been working on a big new project that we’ll be launching here on PAOSS on Monday. We can’t reveal what this project is just yet, but we can let you in on a little hint. It aims to help overcome one of the biggest problem areas faced by those in the comms network space.

Further clues will be revealed in this week’s series of posts.

The industry we work in is worth tens of billions of dollars annually. We rely on that investment to fund the OSS/BSS projects (and ops/maintenance tasks) that keep many thousands of us busy. Obviously those funds get distributed by project sponsors in the buyers’ organisations. For many of the big projects, sponsors are obliged to involve the organisation’s procurement team.

That’s a fairly obvious path. But I often wonder whether the next step on that path is full of contradictions and flaws.

Do you agree with me that the 3 KPIs sponsors expect from their procurement teams are:

  1. Negotiate the lowest price
  2. Eliminate as many risks as possible
  3. Create a contract to manage the project by

If procurement achieves these 3 things, sponsors will generally be delighted. High-fives for the buyers that screw the vendor prices right down. Seems pretty obvious right? So where’s the contradiction? Well, let’s look at these same 3 KPIs from a different perspective – a more seller-centric perspective:

  1. I want to win the project, so I’ll set a really low price, perhaps even loss-leader. However, our company can’t survive if our projects lose money, so I’ll be actively generating variations throughout the project
  2. Every project of this complexity has inherent risks, so if my buyer is “eliminating” risks, they’re actually just pushing risks onto me. So I’ll use any mechanisms I can to push risks back on my buyer to even the balance again
  3. We all know that complex projects throw up unexpected situations that contracts can’t predict (except with catch-all statements that aim to push all risk onto sellers). We also both know that if we manage the project by contractual clauses and interpretations, then we’re already doomed to fail (or are already failing by the time we start to manage by contract clauses)

My 3 contrarian KPIs to request from procurement are:

  1. Build relationships / trust – build a framework and environment that facilitates a mutually beneficial, long-lasting buyer/seller relationship (ie procurement gets judged on partnership length ahead of cost reduction)
  2. Develop a team – build a framework and environment that allows the buyer-seller collective to overcome risks and issues (ie mutual risk mitigation rather than independent risk deflection)
  3. Establish clear and shared objectives – ensure both parties are completely clear on how the project will make the buyer’s organisation successful. Then both constantly evolve to deliver benefits that outweigh costs (ie focus on the objectives rather than clauses – don’t sweat the small stuff (or purely technical stuff))

Yes, I know they’re idealistic and probably unrealistic. Just saying that the current KPI model tends to introduce flaws from the outset.

The OSS “out of control” conundrum

Over the years in OSS, I’ve spent a lot of my time helping companies create their OSS / BSS strategies and roadmaps. Sometimes clients come from the buy side (eg carriers, utilities, enterprise), other times clients come from the sell side (eg vendors, integrators). There’s one factor that seems to be most commonly raised by these clients, and it comes from both sides.

What is that one factor? Well, we’ll come back to what that factor is a little later, but let’s cover some background first.

OSS / BSS covers a fairly broad estate of functionality:
OSS and BSS overlaid onto the TAM

Even if only covering a simplified version of this map, very few suppliers can provide coverage of the entire estate. That implies two things:

  1. Integrations; and
  2. Relationships

If you’re from the buy-side, you need to manage both to build a full-function OSS/BSS suite. If you’re from the sell-side, you’re either forced into dealing with both (reactive) or sometimes you can choose to develop those to bring a more complete offering to market (proactive).

You will have noticed that both are double-ended. Integrations bring two applications / functions together. Relationships bring two organisations together.

This two-ended concept means there’s always a “far-side” that’s outside your control. It’s in our nature to worry about what’s outside our control. We tend to want to put controls around what we can’t control. Not only that, but it’s incumbent on us as organisation planners to put mitigation strategies in place.

Which brings us back to the one factor that is raised by clients on most occasions – substitution – how do we minimise our exposure to lock-in with an OSS product / service partner/s if our partnership deteriorates?

Well, here are some thoughts:

  1. Design your own architecture with product / partner substitution in mind (and regularly review your substitution plan because products are always evolving)
  2. Develop multiple integrations so that you always have active equivalency. This is easier for sell-side “reactives” because their different customers will have different products to integrate to (eg an OSS vendor that is able to integrate with four different ITSM tools because they have different customers with each of those variants) – see the adapter sketch below
  3. Enhance your own offerings so that you no longer require the partnership, but can do it yourself
  4. Invest in your partnerships to ensure they don’t deteriorate. This is the OSS marriage analogy where ongoing mutual benefits encourage the relationship to continue.
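
On points 1 and 2, here’s a bare-bones sketch of what designing for substitution can look like at the integration layer: code against a thin adapter interface rather than any single partner’s product. The adapter classes and method bodies below are illustrative stubs, not real product APIs.

```python
# A bare-bones sketch of keeping integrations substitutable by coding against a
# thin adapter interface rather than any single ITSM product. The adapters and
# method bodies are illustrative stubs, not real product APIs.

from abc import ABC, abstractmethod

class TicketingAdapter(ABC):
    """What our OSS needs from *any* ITSM tool, nothing more."""
    @abstractmethod
    def create_incident(self, summary: str, severity: int) -> str:
        """Return the ticket reference in the far-side system."""

class ItsmToolAAdapter(TicketingAdapter):
    def create_incident(self, summary, severity):
        # would call that vendor's API here; stubbed for the sketch
        return f"TOOLA-{hash((summary, severity)) % 10000}"

class ItsmToolBAdapter(TicketingAdapter):
    def create_incident(self, summary, severity):
        return f"TOOLB-{hash((summary, severity)) % 10000}"

def raise_network_incident(adapter: TicketingAdapter, summary: str):
    # OSS logic never mentions a specific product, so swapping partners is a
    # configuration change rather than a re-integration project
    return adapter.create_incident(summary, severity=2)

print(raise_network_incident(ItsmToolAAdapter(), "Link down on agg-1"))
```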

Can you solve the omni-channel identity conundrum for OSS/BSS?

For most end-customers, the OSS/BSS we create are merely back-office systems that they never see. The closest they get are the customer portals that they interact with to drive workflows through our OSS/BSS. And yet, our OSS/BSS still have a big part to play in customer experience. In times where customers can readily substitute one carrier for another, customer service has become a key differentiator for many carriers. It therefore also becomes a priority for our OSS/BSS.

Customers now have multiple engagement options (aka omni-channel) and form factors (eg in-person, phone, tablet, mobile phone, kiosk, etc). The only options we used to have were a call to a contact centre / IVR (Interactive Voice Response), a visit to a store, or a visit from an account manager for business customers. Now there are websites, applications, text messages, multiple social media channels, chatbots, portals, blogs, etc. They all represent different challenges as far as offering a seamless customer experience across all channels.

I’ve just noticed TM Forum’s “Omni-channel Guidebook” (GB994), which does a great job at describing the challenges and opportunities. For example, it explains the importance of identity. End-users can only get a truly seamless experience if they can be uniquely identified across all channels. Unfortunately, some channels (eg IVR, website) don’t force end-users to self-identify.

The Ovum report, “Optimizing Customer Service in a Multi Channel World, March 2011” indicates that around 74% of customers use 3 channels or more for engaging customer service. In most cases, it’s our OSS/BSS that provide the data that supports a seamless experience across channels. But what if we have no unique key? What if the unique key we have (eg phone number) doesn’t uniquely identify the different people who use that contact point (eg different family members who use the same fixed-line phone)?
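
For what it’s worth, here’s a deliberately naive sketch of a cross-channel identity map, just to make the “unique key” problem concrete. It ignores the hard parts (shared phone numbers, probabilistic matching, consent / privacy), which is exactly where the conundrum lives.

```python
# A simplistic sketch of a cross-channel identity map: each channel identifier
# (phone number, web login, app device ID, etc) resolves to one customer record.
# Deliberately naive; all identifiers and keys below are invented.

identity_map = {
    ("ivr", "+61400000001"): "CUST-1001",
    ("web", "jane.doe@example.com"): "CUST-1001",
    ("app", "device-8f3a"): "CUST-1001",
    ("ivr", "+61388880000"): None,   # shared fixed line: cannot resolve to one person
}

def resolve(channel: str, channel_id: str):
    """Return a unique customer key, or None if the channel can't identify one."""
    return identity_map.get((channel, channel_id))

print(resolve("web", "jane.doe@example.com"))   # CUST-1001
print(resolve("ivr", "+61388880000"))           # None -> prompt for self-identification
```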

We could use personality profiling across these channels, but we’ve already seen how that has worked out for Cambridge Analytica and Facebook in terms of customer privacy and security.

I’d love to hear how you’ve done cross-channel identity management in your OSS/BSS. Have you solved the omni-channel identity conundrum?

PS. One thing I find really interesting. The whole omni-channel thing is about giving customers (or potential customers) the ability to connect via the channel they’re most comfortable with. But there’s one glaring exception. When an end-user decides a phone conversation is the only way to resolve their issue (often after already trying the self-service options), they call the contact centre number. But many big telcos insist on trying to deflect as many calls as possible to self-service options (they call it CVR – call volume reduction), because contact centre staff are much more expensive per transaction than the automated channels. That seems to be an anti-customer-experience technique if you ask me. What are your thoughts?

Stealing Fire for OSS (part 2)

Yesterday’s post talked about the difference between “flow state” and “office state” in relation to OSS delivery. It referenced a book I’m currently reading called Stealing Fire.

The post mainly focused on how the interruptions of “office state” actually inhibit our productivity, learning and ability to think laterally on our OSS. But that got me thinking that perhaps flow doesn’t just relate to OSS project delivery. It also relates to post-implementation use of the OSS we implement.

If we think about the various personas who use an OSS (such as NOC operators, designers, order entry operators, capacity planners, etc), do our user interfaces and workflows assist or inhibit them to get into the zone? More importantly, if those personas need to work collaboratively with others, do we facilitate them getting into “group flow?”

Stealing Fire suggests that it costs around $500k to train each Navy SEAL and around $4.25m to train each elite SEAL (DEVGRU). It also describes how this level of training allows DEVGRU units to quickly get into group flow and function together almost as if choreographed, even in high-pressure / high-noise environments.

Contrast this with collaborative activities within our OSS. We use tickets, emails, Slack notifications, work order activity lists, etc to collaborate. It seems to me that these are the precise instruments that prevent us from getting into flow individually. I assume it’s the same collectively. I can’t think back to any end-to-end OSS workflows that seem highly choreographed or seamlessly effective.

Think about it. If you experience significant rates of process fall-out / error, then it would seem to indicate an OSS that’s not conducive to group flow. Ditto for lengthy O2A (order to activate) or T2R (trouble to resolve) times. Ditto for bringing new products to market.

I’d love to hear your thoughts. Has any OSS environment you’ve worked in facilitated group flow? If so, was it the people and/or the tools? Alternatively, have the OSS you’ve used inhibited group flow?

PS. Stealing Fire details how organisations such as Google and DARPA are investing heavily in flow research. They can obviously see the pay-off from those investments (or potential pay-offs). We seem to barely even invest in UI/UX experts to assist with the designs of our OSS products and workflows.

Stealing fire for OSS

I’ve recently started reading a book called Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work. To completely over-generalise the subject matter, it’s about finding optimal performance states, aka finding flow. Not the normal topic of conversation for here on the PAOSS blog!!

However, the book’s content has helped to make the link between flow and OSS more palpable than you might think.

In the early days of working on OSS delivery projects, I found myself getting into a flow state on a daily basis – achieving more than I thought capable, learning more effectively than I thought capable and completely losing track of time. In those days of project delivery, I was lucky enough to get hours at a time without interruptions, to focus on what was an almost overwhelming list of tasks to be done. Over the first 5-ish years in OSS, I averaged an 85 hour week because I was just so absorbed by it. It was the source from where my passion for OSS originated. Or was it??

The book now has me pondering a chicken or egg conundrum – did I become so passionate about OSS that I could get into a state of flow or did I only become passionate about OSS because I was able to readily get into a state of flow with it? That’s where the book provides the link between getting in the zone and the brain chemicals that leave us with a feeling of ecstasis or happiness (not to mention the addictive nature of it). The authors describe this state of consciousness as Selflessness, Timelessness, Effortlessness, and Richness, or STER for short. OSS definitely triggered STER for me, but chicken or egg??

Having spent much of the last few years embedded in big corporate environments, I’ve found a decreased ability to get into the same flow state. Meetings, emails, messenger pop-ups, distractions from surrounding areas in open-plan offices, etc. They all interrupt. It’s left me with a diminishing opportunity to get in the zone. With that has come a growing unease and sense of sub-optimal productivity during “office hours.” It was increasingly disheartening that I could generally only get into the zone outside office hours. For example, whilst writing blogs on the train-trip or in the hours after the rest of my family was asleep.

Since making the concerted effort to leave that “office state,” I’ve been both surprised and delighted at the increased productivity. Not just that, but the ability to make better lateral connections of ideas and to learn more effectively again.

I’d love to hear your thoughts on this in the comments section below. Some big questions for you:

  1. Have you experienced a similar productivity gap between “flow state” and “office state” on your OSS projects?
  2. Have you had the same experience as me, where modern ways of working seem to be lessening the long chunks of time required to get into flow state?
  3. If yes, how can our sponsor organisations and our OSS products continue to progress if we’re increasingly working only in office state?

282 million reasons for increased OSS/BSS scrutiny

“The hotel group Marriott International has been told by the UK Information Commissioner’s Office that it will be fined a little over £99 million (A$178 million) over a data breach that occurred in December last year… This is the second fine for data breaches announced by the ICO on successive days. On Monday, it said British Airways would be fined £183.39 million (A$329.1 million) for a data breach that occurred in September 2018.”
Sam Varghese of ITwire.

The scale of the fines issued to Marriott and BA is mind-boggling.

Here’s a link to the GDPR (General Data Protection Regulation) fine regime and determination process. Fines can be issued by GDPR policing agencies of up to €20 million, or 4% of the worldwide annual revenue of the prior financial year, whichever is higher.
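
In code form, that upper bound is simply the greater of the two figures (for the more serious tier of infringements). The revenue figure in the example is invented; actual fines are then scaled according to the determination factors below.

```python
# The upper bound described above: the greater of EUR 20 million or 4% of
# prior-year worldwide annual revenue (for the more serious tier of
# infringements). The revenue figure used here is invented for illustration.

def gdpr_fine_cap(worldwide_annual_revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * worldwide_annual_revenue_eur)

print(f"{gdpr_fine_cap(5_000_000_000):,.0f}")   # a EUR 5bn-revenue firm: cap of 200,000,000
```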

Determination is based on the following questions:

  1. Nature of infringement: number of people affected, damage they suffered, duration of infringement, and purpose of processing
  2. Intention: whether the infringement is intentional or negligent
  3. Mitigation: actions taken to mitigate damage to data subjects
  4. Preventative measures: how much technical and organizational preparation the firm had previously implemented to prevent non-compliance
  5. History: (83.2e) past relevant infringements, which may be interpreted to include infringements under the Data Protection Directive and not just the GDPR, and (83.2i) past administrative corrective actions under the GDPR, from warnings to bans on processing and fines
  6. Cooperation: how cooperative the firm has been with the supervisory authority to remedy the infringement
  7. Data type: what types of data the infringement impacts; see special categories of personal data
  8. Notification: whether the infringement was proactively reported to the supervisory authority by the firm itself or a third party
  9. Certification: whether the firm had qualified under approved certifications or adhered to approved codes of conduct
  10. Other: other aggravating or mitigating factors may include financial impact on the firm from the infringement

The two examples listed above provide 282 million reasons for governments to police data protection more stringently than they do today. The regulatory pressure is only going to increase, right? As I understand it, these processes are only enforced in reactive mode currently. What if the regulators move to proactive mode?

Question for you – Looking at #7 above, do you think the customer information stored in your OSS/BSS is more or less “impactful” than that of Marriott or British Airways?

Think about this question in terms of the number of daily interactions you have with hotels and airlines versus telcos / ISPs. I’ve stayed in Marriott hotels for over a year in accumulated days. I’ve boarded hundreds of flights. But I can’t begin to imagine how many of my data points the telcos / ISP could potentially collect every day. It’s in our OSS/BSS data stores where those data points are most likely to end up.

Do you think our OSS/BSS are going to come under increasing GDPR-like scrutiny in coming years? Put it this way, I suspect we’re going to become more familiar with risk management around the 10 dot points above than we have been in the past.

The great OSS squeeeeeeze

TM Forum’s Open Digital Architecture (ODA) White Paper begins with the following statement:

Telecoms is at a crucial turning point. The last decade has dealt a series of punishing blows to an industry that had previously enjoyed enviable growth for more than 20 years. Services that once returned high margins are being reduced to commodities in the digital world, and our insatiable appetite for data demands continuous investment in infrastructure. On the other hand, communications service providers (CSPs) and their partners are in an excellent position to guide and capitalize on the next wave of digital revolution.

Clearly, a reduction in profitability leads to a reduction in cash available for projects – including OSS transformation projects. And reduced profitability almost inevitably leads executives to start thinking about head-count reduction too.

As Luke Clifton of Macquarie Telecom observed here, “Telstra is reportedly planning to shed 1,200 people from its enterprise business with many of these people directly involved in managing small-to-medium sized business customers. More than 10,000 customers in this segment will no longer have access to dedicated Account Managers, instead relegated to being managed by Telstra’s “Digital Hub”… Telstra, like the big banks once did, is seemingly betting that customers won’t leave them nor will they notice the downgrade in their service. It will be interesting to see how 10,000 additional organisations will be managed through a Digital Hub. Simply put, you cannot cut quality people without cutting the quality of service. Those two ideals are intrinsically linked…”

As a fairly broad trend across the telco sector, projects and jobs are being cut, whilst technology change is forcing transformation. And as suggested in Luke’s “Digital Hub” quote above, it all leads to increased expectations on our OSS/BSS.

Pressure is coming at our OSS from all angles, and with no signs of abating.

To quote Queen, “Pressure. Pushing down on me. Pressing down on you.”

So it seems to me there are only three broad options when planning our OSS roadmaps:

  1. We learn to cope with increased pressure (although this doesn’t seem like a viable long-term option)
  2. We reduce the size (eg functionality, transaction volumes, etc) of our OSS footprint [But have you noticed that all of our roadmaps seem expansionary in terms of functionality, volumes, technologies incorporated, etc??]
  3. We look beyond the realms of traditional OSS/BSS functionality (eg just servicing operations) and into areas of opportunity

TM Forum’s ODA White Paper goes on to state, “The growth opportunities attached to new 5G ecosystems are estimated to be worth over $580 billion in the next decade. Servicing these opportunities requires transformation of the entire industry. Early digital transformation efforts focused on improving customer experience and embracing new technologies such as virtualization, with promises of wide-scale automation and greater agility. It has become clear that these ‘projects’ alone are not enough. CSPs’ business and operating models, choice of technology partners, mindset, decision-making and time to market must also change. True digital business transformation is not an easy or quick path, but it is essential to surviving and thriving in the future digital market.”

BTW. I’m not suggesting 5G is the panacea or single opportunity here. My use of the quote above is drawing more heavily on the opportunities relating to digital transformation. Not of the telcos themselves, but digital transformation of their customers. If data is the oil of the 21st century, then our OSS/BSS and telco assets have the potential to be the miners and pipelines of that oil.

If / when our OSS go from being cost centres to revenue generators (directly attributable to revenue, not the indirect attribution by most OSS today), then we might feel some of the pressure easing off us.

OSS change…. but not too much… oh no…..

Let me start today with a question:
Does your future OSS/BSS need to be drastically different to what it is today?

Please leave me a comment below, answering yes or no.

I’m going to take a guess that most OSS/BSS experts will answer yes to this question, that our future OSS/BSS will change significantly. It’s the reason I wrote the OSS Call for Innovation manifesto some time back. As great as our OSS/BSS are, there’s still so much need for improvement.

But big improvement needs big change. And big change is scary, as Tom Nolle points out:
“IT vendors, like most vendors, recognize that too much revolution doesn’t sell. You have to creep up on change, get buyers disconnected from the comfortable past and then get them to face not the ultimate future but a future that’s not too frightening.”

Do you feel like we’re already in the midst of a revolution? Cloud computing, web-scaling and virtualisation (of IT and networks) have been partly responsible for it. Agile and continuous integration/delivery models too.

The following diagram shows a “from the moon” level view of how I approach (almost) any new project.

The key to Tom’s quote above is in step 2. Just how far, or how ambitious, into the future are you projecting your required change? Do you even know what that future will look like? After all, the environment we’re operating within is changing so fast. That’s why Tom is suggesting that for many of us, step 2 is just a “creep up on it change.” The gap is essentially small.

The “creep up on it change” means just adding a few new relatively meaningless features at the end of the long tail of functionality. That’s because we’ve already had the most meaningful functionality in our OSS/BSS for decades (eg customer management, product / catalog management, service management, service activation, network / service health management, inventory / resource management, partner management, workforce management, etc). We’ve had the functionality, but that doesn’t mean we’ve perfected the cost or process efficiency of using it.

So let’s say we look at step 2 with a slightly different mindset. Let’s say we don’t try to add any new functionality. We lock that down to what we already have. Instead we do re-factoring and try to pull the efficiency levers, which means changes to:

  1. Platforms (eg cloud computing, web-scaling and virtualisation as well as associated management applications)
  2. Methodologies (eg Agile, DevOps, CI/CD, noting of course that they’re more than just methodologies, but also come with tools, etc)
  3. Process (eg User Experience / User Interfaces [UX/UI], supply chain, business process re-invention, machine-led automations, etc)

It’s harder for most people to visualise what the Step 2 Future State looks like. And if it’s harder to envisage Step 2, how do we then move onto Steps 3 and 4 with confidence?

This is the challenge for OSS/BSS vendors, suppliers, integrators and implementers. How do we, “get buyers disconnected from the comfortable past and then get them to face not the ultimate future but a future that’s not too frightening?” And I should point out that it’s not just buyers we need to get disconnected from the comfortable past, but ourselves, myself definitely included.

Two concepts to help ease long-standing OSS problems

There’s a famous Zig Ziglar quote that goes something like, “You can have everything in life you want, if you will just help enough other people get what they want.”

You could safely assume that this was written for the individual reader, but there is some truth in it within the OSS context too. For the OSS designer, builder, integrator, does the statement “You can have everything in your OSS you want, if you will just help enough other people get what they want,” apply?

We often just think about the O in OSS – Operations people, when looking for who to help. But OSS/BSS has the ability to impact far wider than just the Ops team/s.

The halcyon days of OSS were probably in the 1990’s to early 2000’s when the term OSS/BSS was at its most sexy and exciting. The big telcos were excitedly spending hundreds of millions of dollars. Those projects were huge… and hugely complex… and hugely fun!

With that level of investment, there was the expectation that the OSS/BSS would help many people. And they did. But the lustre has come off somewhat since then. We’ve helped sooooo many people, but perhaps didn’t help enough people enough. Just speak with anybody involved with an OSS/BSS stack and you’ll hear hints of a large gap that exists between their current state and a desired future state.

Do you mind if I ask two questions?

  1. When you reflect on your OSS activities, do you focus on the technology, the opportunities or the problems?
  2. Do you look at the local, day-to-day activities or the broader industry?

I tend to find myself focusing on the problems – how to solve them within the daily context of customer challenges, but also on the broader industry problems when I take the time to reflect, such as when writing these blogs.

The part I find interesting is that we still face most of the same problems today that we did back in the 1990’s-2000’s. The same source of risks. We’ve done a fantastic job of helping many people get what they want on their day-to-day activities (the incremental). We still haven’t cracked the big challenges though. That’s why I wrote the OSS Call for Innovation, to articulate what lays ahead of us.

It’s why I’m really excited about two of the concepts we’ve discussed this week:

Auto-releasing chaos monkeys to harden your network (CT/IR)

In earlier posts, we’ve talked about using Netflix’s chaos monkey approach as a way of getting to Zero Touch Assurance (ZTA). The chaos monkeys intentionally trigger faults in the network as a means of ensuring resilience. Not just for known degradation / outage events, but to unknown events too.

I’d like to introduce the concept of CT/IR – Continual Test / Incremental Resilience. Analogous to CI/CD (Continuous Integration / Continuous Delivery) before it, CT/IR is a method to systematically and programmatically test the resilience of the network, and then ensure that resilience continually improves.

This is done by storing a knowledge base of failure cases, pre-emptively triggering them and then recording the results as seed data (for manual or AI / ML observations). Using traditional techniques, we look at event logs and try to reverse-engineer what the root-cause MIGHT be. In the case of CT/IR, the root-cause is certain. We KNOW the root-cause because we systematically and intentionally triggered it.
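
To make the CT/IR loop a little more concrete, here’s a high-level sketch. Everything in it is assumed for illustration (the failure cases, the environment name, the stubbed trigger call); the point is simply that every record carries a root cause we know with certainty, because we injected it ourselves.

```python
# A high-level sketch of a CT/IR loop: failure cases live in a knowledge base,
# get triggered deliberately (in a digital twin or PSUP environment first!),
# and each run is recorded with a *known* root cause as labelled seed data for
# later manual or ML analysis. All names and values are illustrative.

import datetime
import random

failure_knowledge_base = [
    {"id": "FC-001", "description": "Pull down primary link on agg-1"},
    {"id": "FC-002", "description": "Kill EMS polling process"},
    {"id": "FC-003", "description": "Exhaust capacity on core-1 to core-2 trunk"},
]

def trigger_failure(case, environment="digital-twin"):
    """Stub: would call the orchestration / NaaS APIs to inject the failure."""
    return {"recovered_automatically": random.choice([True, False]),
            "time_to_recover_s": random.randint(5, 600)}

def ct_ir_run(environment="digital-twin"):
    seed_data = []
    for case in failure_knowledge_base:
        observed = trigger_failure(case, environment)
        seed_data.append({
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "environment": environment,
            "root_cause": case["id"],      # certain, because we injected it ourselves
            **observed,
        })
    return seed_data

for record in ct_ir_run():
    print(record)
```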

The continual, incremental improvement in resiliency potentially comes via multiple feedback loops:

  1. Ideally, the existing resilience mechanisms work around or overcome any degradation or failure in the network
  2. The continual triggering of faults into the network will provide additional seed data for AI/ML tools to learn from and improve upon, especially root-cause analysis
  3. We can program the network to overcome the problem (eg turn up extra capacity, re-engineer traffic flows, change configurations, etc). Having the NaaS that we spoke about yesterday, provides greater programmability for the network by the way.
  4. We can implement systematic programs / projects to fix endemic faults or weak spots in the network *
  5. We can perform regression tests to constantly stress-test the network as it evolves through network augmentation, new device types, etc

Now, you may argue that no carrier in their right mind will allow intentional faults to be triggered. So that’s where we unleash the chaos monkeys on our digital twin technology and/or PSUP (Production Support) environments at first. Then on our prod network if we develop enough trust in it.

I live in Australia, which suffers from severe bushfires every summer. Our fire-fighters spend a lot of time back-burning during the cooler months to reduce flammable material and therefore the severity of summer fires. Occasionally the back-burns get out of control, causing problems. But they’re still done for the greater good. The same principle could apply to unleashing chaos monkeys on a production network… once you’re confident in your ability to control the problems that might follow.

* When I say network, I’m referring not just to the physical and logical network, but also to support functions such as EMS (Element Management Systems), NCM (Network Configuration Management tools), backup/restore mechanisms, service order replay processes in the event of an outage, OSS/BSS, NaaS, etc.

NaaS is to networks what Agile is to software

After Telstra’s NaaS (Network as a Service) program won a TM Forum excellence award, I promised yesterday to share a post that describes why I’m so excited about the concept of NaaS.

As the title suggests above, NaaS has the potential to be as big a paradigm shift for networks (and OSS/BSS) as Agile has been for software development.

There are many facets to the Agile story, but for me one of the most important aspects is that it has taken end-to-end (E2E), monolithic thinking and has modularised it. Agile has broken software down into pieces that can be worked on by smaller, more autonomous teams than the methods used prior to it.

The same monolithic, E2E approach pervades the network space currently. If a network operator wants to add a new network type or a new product type/bundle, large project teams must be stood up. And these project teams must tackle E2E complexity, especially across an IT stack that is already a spaghetti of interactions.

But before I dive into the merits of NaaS, let me take you back a few steps, back into the past. Actually, for many operators, it’s not the past, but the current-day model.

Networks become Agile with NaaS (the TMN model)

As per the orange arrow, customers of all types (Retail, Enterprise and Wholesale) interact with their network operator through BSS (and possibly OSS) tools. [As an aside, see this recent post for a “religious war” discussion on where BSS ends and OSS begins]. The customer engagement occurs (sometimes directly, sometimes indirectly) via BSS tools such as:

  • Order Entry, Order Management
  • Product Catalog (Product / Offer Management)
  • Service Management
  • SLA (Service Level Agreement) Management
  • Billing
  • Problem Management
  • Customer Management
  • Partner Management
  • etc

If the customer wants a new instance of an existing service, then all’s good with the current paradigm. Where things become more challenging is when significant changes occur (as reflected by the yellow arrows in the diagram above).

For example, if any of the following are introduced, there are end-to-end impacts. They necessitate E2E changes to the IT spaghetti and require formation of a project team that includes multiple business units (eg products, marketing, IT, networks, change management to support all the workers impacted by system/process change, etc)

  1. A new product or product bundle is to be taken to market
  2. An end-customer needs a custom offering (especially in the case of managed service offerings for large corporate / government customers)
  3. A new network type is added into the network
  4. System and / or process transformations occur in the IT stack

If we just narrow in on point 3 above, fundamental changes are happening in network technology stacks already. Network virtualisation (SDN/NFV) and 5G are currently generating large investments of time and money. They’re fundamental changes because they also change the shape of our traditional OSS/BSS/IT stacks, as follows.

Networks become Agile with NaaS (the virtualisation model)

We now not only have Physical Network Functions (PNF) to manage, but Virtual Network Functions (VNF) as well. In fact it now becomes even more difficult because our IT stacks need to handle PNF and VNF concurrently. Each has its own nuances in terms of over-arching management.

The virtualisation of networks and application infrastructure means that our OSS see greater southbound abstraction. Greater southbound abstraction means we potentially lose E2E visibility of physical infrastructure. Yet we still need to manage E2E change to IT stacks for new products, network types, etc.

The diagram below shows how NaaS changes the paradigm. It de-couples the network service offerings from the network itself. Customer Facing Services (CFS) [as presented by BSS/OSS/NaaS] are de-coupled from Resource Facing Services (RFS) [as presented by the network / domains].

NaaS becomes a “meet-in-the-middle” tool. It effectively de-couples:

  • The products / marketing teams (who generate customer offerings / bundles) from
  • The networks / operations teams (who design, build and maintain the network), and
  • The IT teams (who design, build and maintain the IT stack)

It allows product teams to be highly creative with their CFS offerings from the available RFS building blocks. Consider it like Lego. The network / ops teams create the building blocks and the products / marketing teams have huge scope for innovation. The products / marketing teams rarely need to ask for custom building blocks to be made.
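
To make the Lego analogy concrete, here’s a tiny, hypothetical sketch of a master services catalog in which CFS are nothing more than compositions of published RFS building blocks. The service names are invented.

```python
# The Lego idea in data-structure form: a hypothetical master services catalog
# where customer-facing services (CFS) are just compositions of resource-facing
# service (RFS) building blocks published by the network / ops teams.

rfs_building_blocks = {
    "RFS-ACCESS-FTTP-100M": {"domain": "access"},
    "RFS-IP-VPN-SITE":      {"domain": "core"},
    "RFS-SECURITY-FW-STD":  {"domain": "security"},
}

cfs_catalog = {
    # Product / marketing teams compose offers without asking for new blocks
    "CFS-BUSINESS-BROADBAND": ["RFS-ACCESS-FTTP-100M"],
    "CFS-MANAGED-BRANCH":     ["RFS-ACCESS-FTTP-100M", "RFS-IP-VPN-SITE",
                               "RFS-SECURITY-FW-STD"],
}

def validate_catalog(cfs, rfs):
    """Every CFS must decompose entirely into published RFS building blocks."""
    for name, blocks in cfs.items():
        missing = [b for b in blocks if b not in rfs]
        if missing:
            raise ValueError(f"{name} references unpublished RFS: {missing}")

validate_catalog(cfs_catalog, rfs_building_blocks)
print("catalog OK:", list(cfs_catalog))
```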

You’ll notice that the entire stack shown in the diagram below is far more modular than the diagram above. Being modular makes the network stack more suited to being worked on by smaller autonomous teams. The yellow arrows indicate that modularity, both in terms of the IT stack and in terms of the teams that need to be stood up to make changes. Hence my claim that NaaS is to networks what Agile has been to software.

Networks become Agile with NaaS (the NaaS model)

You will have also noted that NaaS allows the Network / Resource part of this stack to be broken into entirely separate network domains. Separation in terms of IT stacks, management and autonomy. It also allows new domains to be stood up independently, which accommodates the newer virtualised network domains (and their VNFs) as well as platforms such as ONAP.

The NaaS layer comprises:

  • A TMF standards-based API Gateway
  • A Master Services Catalog
  • A common / consistent framework of presentation of all domains
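
And here’s a hypothetical sketch of what consuming that NaaS layer might look like: ordering a CFS through a TMF-style API gateway. The gateway URL, endpoint path and payload fields are my assumptions for illustration; real deployments follow whichever TMF Open API versions they’ve adopted.

```python
# A hypothetical sketch of ordering a CFS from the master services catalog via
# a TMF-style API gateway. The gateway URL, endpoint path and payload fields
# are assumptions for illustration, not a definitive TMF Open API payload.

import json
import urllib.request

NAAS_GATEWAY = "https://naas.example.com/api"   # hypothetical gateway

def order_cfs(cfs_id: str, customer_id: str):
    payload = {
        "externalId": f"order-for-{customer_id}",
        "serviceOrderItem": [{
            "action": "add",
            "service": {"serviceSpecification": {"id": cfs_id}},
        }],
    }
    request = urllib.request.Request(
        f"{NAAS_GATEWAY}/serviceOrdering/v4/serviceOrder",   # assumed path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:   # live network call
        return json.load(response)

# order_cfs("CFS-MANAGED-BRANCH", "CUST-1001")   # uncomment against a real gateway
```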

The ramifications of this NaaS layer excite me even more than what’s shown in the diagram above. By offering access to the network via APIs and as a catalog of services, it allows a large developer pool to provide innovative offerings to end customers (as shown in the green box below). It opens up the long tail of innovation that we discussed last week.
Networks become Agile with NaaS (the developer model)

Some telcos will open up their NaaS to internal or partner developers. Others are drooling at the prospect of offering network APIs for consumption by the market.

You’ve probably already identified this, but the awesome thing for the developer community is that they can combine services/APIs not just from the telcos but any other third-party providers (eg Netflix, Amazon, Facebook, etc, etc, etc). I could’ve shown these as East-West services in the diagram but decided to keep it simpler.

Developers are not constrained to offering communications services. They can now create / offer higher-order services that also happen to have communications requirements.

If you weren’t already on board with the concept, hopefully this article has convinced you that NaaS will be to networks what Agile has been to software.

Agree or disagree? Leave me a comment below.

PS1. I’ve used the old TMN pyramid as the basis of the diagram to tie the discussion to legacy solutions, not to imply size or emphasis of any of the layers.

PS2. I use the terms OSS/BSS as per the TMN pyramid. The actual demarcation line between what OSS and BSS do tends to be grey and triggers religious wars, as per the post earlier this week.

PS3. Similarly, the size of the NaaS layer is to bring attention to it rather than to imply it is a monolithic stack in its own right. In reality, it is a much thinner shim layer architecturally.

PS4. The analogy between NaaS and Agile is to show similarities, not to imply that NaaS replaces Agile. They can definitely be used together.

PS5. I’ve used the term IT quite generically (operationally and technically) just to keep the diagram and discussion as simple as possible. In reality, there are many sub-functions like data centre operations, application monitoring, application control, applications development, product owner, etc. These are split differently at each operator.