OSS diamonds are forever (part 2)

Wednesday’s post discussed how OPEX is forever, just like the slogan for diamonds.
 
As discussed, some aspects of Operational Expenses are well known when kicking off a new OSS project (eg annual OSS license / support costs). Others can slip through the cracks – what I referred to as OPEX leakage (eg third-party software, ongoing maintenance of software customisations).
 
OPEX leakage might be an unfair phrase. If there’s a clear line of sight from the expenses to a profitable return, then it’s not leakage. If costs (of data, re-work, cloud services, applications, etc) are proliferating with no clear benefit, then the term “leakage” is probably fair.
 
I’ve seen examples of Agile and cloud implementation strategies where leakage has occurred. And even the supposedly “cheap” open-source strategies have led to surprises. OPEX leakage has caused project teams to scramble as the financial year progressed and budgets were unexpectedly exceeded.
 
Oh, and one other observation to share that you may’ve seen examples of, particularly if you’ve worked on OSS in large organisations – OPEX being incurred by one business unit while the benefit is derived by a different business unit. This can cause significant problems for the people responsible for divisional budgets, even if it’s good for the business as a whole.
 
Let me explain by example: An operations delivery team needs extra logging capability, so they stand up a new open-source tool. They make customisations so that log data can be collected for all of their network types. All log data is then sent to the organisation’s cloud instance. The operations delivery team now owns lifecycle maintenance costs. However, the costs of cloud (compute and storage) and data lake licensing have now escalated, but Operations doesn’t foot that bill. They’ve just handed that “forever” budgetary burden to another business unit.
 
The opposite can also be true. The costs of building and maintaining might be borne by IT or ops, but the benefits in revenue or CX (customer experience) are gladly accepted by business-facing units.
 
Both types of project could give significant whole-of-company benefit. But the unit doing the funding will tend to choose less effective projects if it means their own business unit derives the benefit (especially if individuals’ bonuses are tied to those results).
 
OSS can be powerful tools, giving and receiving benefits across many different business units. However, the more OPEX-centric OSS projects that we see today are introducing new challenges in getting funded and then supported across their whole life-cycle.
 
PS. Just like diamonds bought at retail prices, there’s a risk that the financials won’t look so great a year after purchase. If that’s the case, you may have to seek justification on intangible benefits.  😉
 
PS2. Check out Robert’s insightful comment to the initial post, including the following question, “I wonder how many OSS procurements are justified on the basis of reducing the Opex only *of the current OSS*, rather than reducing the cost of achieving what the original OSS was created to do? The former is much easier to procure (but may have less benefit to the business). The latter is harder (more difficult analysis to do and change to manage, but payoff potentially much larger).”

Crossing the OSS chasm

Geoffrey Moore’s seminal book, “Crossing the Chasm,” described the psychological chasm between early buyers and the mainstream market.

Crossing the Chasm

Seth Godin cites Moore’s work, “Moore’s Crossing the Chasm helped marketers see that while innovation was the tool to reach the small group of early adopters and opinion leaders, it was insufficient to reach the masses. Because the masses don’t want something that’s new, they want something that works…

The lesson is simple:

– Early adopters are thrilled by the new. They seek innovation.

– Everyone else is wary of failure. They seek trust.”
 

I’d reason that almost all significant OSS buyer decisions fall into the “mainstream market” section in the diagram above. Why? Well, an organisation might have the 15% of innovators / early-adopters conceptualising a new OSS project. However, sign-off of that project usually depends on a team of approvers / sponsors. Statistically, around 85% of that team is likely to hold a mindset from beyond the chasm, outweighing the 15%.

The mainstream mindset is seeking something that works and something they can trust.

But OSS / digital transformation projects are hard to trust. They’re all complex and unique. They often fail to deliver on their promises. They’re rarely reliable or repeatable. They almost all require a leap of faith (and/or a burning platform) for the buyer’s team to proceed.

OSS sellers seek to differentiate from the 400+ other vendors (of course). How do they do this? Interestingly, mostly by pitching their innovations and uniqueness.

Do you see the gap here? The seller is pitching the left side of the chasm and the buyer cohort is on the right.

I wonder whether our infuriatingly lengthy sales cycles (often 12-18 months) could be reduced if only we could engineer our products and projects to be more mainstream, repeatable, reliable and trustworthy, whilst being less risky.

This is such a dilemma though. We desperately need to innovate, to take the industry beyond the chasm. Should we innovate by doing new stuff? Or should we do the old, important stuff in new and vastly improved ways? A bit of both??

Do we improve our products and transformations so that they can be used / performed by novices rather than designed for use by all the massive intellects that our industry seems to currently consist of?

Diamonds are Forever and so is OSS OPEX


I sometimes wonder whether OPEX is underestimated when considering OSS investments, or at least some facets (sorry, awful pun there!) of it.

Cost-out (aka head-count reduction) seems to be the most prominent OSS business case justification lever. So that’s clearly not underestimated. And the move to cloud is also an OPEX play in most cases, so it’s front of mind during the procurement process too. I’m nought for two so far! Hopefully the next examples are a little more persuasive!

Large transformation projects tend to have a focus on the up-front cost of the project, and rightly so. There’s also an awareness of ongoing license costs (usually 20-25% of OSS software list price per annum). Less apparent costs can be found in the exclusions / omissions. This is where third-party OPEX costs (eg database licenses, virtualisation, compute / storage, etc) can be (not) found.

That’s why it’s worth preparing a TCO (Total Cost of Ownership) model that includes CAPEX and OPEX, normalised across all options, when making a buying decision.
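To make that tangible, here’s a minimal sketch of the sort of TCO comparison I mean. The cost categories, figures and discount rate below are purely illustrative assumptions, not recommendations:

```python
# Minimal TCO sketch (illustrative only). All cost categories, figures and the
# discount rate are hypothetical assumptions used to compare options side-by-side.

def total_cost_of_ownership(capex, annual_opex, years=5, discount_rate=0.08):
    """Sum up-front CAPEX plus discounted annual OPEX over the evaluation period."""
    tco = capex
    for year in range(1, years + 1):
        yearly = sum(annual_opex.values())
        tco += yearly / ((1 + discount_rate) ** year)  # simple NPV treatment
    return tco

# Hypothetical option: make the "less apparent" third-party OPEX explicit
option_a = total_cost_of_ownership(
    capex=1_000_000,
    annual_opex={
        "oss_licence_support": 220_000,       # ~22% of list price per annum
        "third_party_db_licences": 60_000,
        "cloud_compute_storage": 90_000,
        "customisation_maintenance": 120_000,
    },
)
print(f"Option A 5-year TCO: ${option_a:,.0f}")
```

Run the same calculation for each option on the short list and the “forever” costs become much harder to overlook.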

But the more subtle OPEX leakage occurs through customisation. The more customisation from “off-the-shelf” capability, the greater the variation from baseline and the larger the ongoing costs of maintenance and upgrade. This applies not just to proprietary / commercial software, but to open-source products as well.

And choosing Agile almost implies ongoing customisation. One of the things about Agile is it keeps adding stuff (apps, data, functions, processes, code, etc) via OPEX. It’s stack-ranked, so it’s always the most important stuff (in theory). But because it’s incremental, it tends to be less closely scrutinised than during a CAPEX / procurement event. Unless carefully monitored, there’s a greater chance for OPEX leakage to occur.

And as we know, OPEX, like diamonds, is forever (ie the costs re-appear year after year).

A billion dollar bid

A few years ago I was lucky enough to be invited to lead a bid. I say lucky because the partner organisations are two of the most iconic firms in the tech industry. The bid was for bleeding-edge work, potentially worth well over a billion dollars. I was a little surprised to be honest. I mean, two tech titans, with many very, very clever people, much cleverer than me. Why would they need to look outside and engage me?

As it turned out, the answer became clear within the first few meetings. And whilst the project had little to do with OSS, it certainly had (has) parallels in the world of OSS.

Both of the organisations were highly siloed. Each product / capability silo had immense talent and immense depth to it. Our combined team had many PhDs who could discuss their own silo for hours, but could only point me in the general direction of what plugged into their products. 

Clearly, I was engaged to figure out the required end-to-end solution for the customer and then how to bolt the two sets of silos into that solution framework.

The same is true when looking for OSS solution gaps, in my experience at least. If you look into a domain or a product, the functionality / capability is usually quite well defined, understood and supported. For example, alarm / event managers are invariably very good at managing alarm / event lists.

If you’re going to find gaps, they’re more likely to be found in the end-to-end solution – in the handoffs, responsibility demarcation points, interfaces and processes that cross between silos. That’s why external consultancies can prove valuable for large organisations. They generally look into the cross-domain solution performance.

As you’d already know, the end-to-end solution is a combination of people, process and technology. Even so, as the “manager of managers,” I’m not sure our OSS tech is solving this problem as well as it could. Is there even a “glue” product that’s missing from our OSS/BSS stack?

Sure, we have some tools that fit this purpose – workflow engines, messaging buses, orchestration engines, data lakes, etc. Yet I still feel there’s an opportunity to do it far better. And the opportunity probably extends far beyond just OSS and into the broader IT industry.

What have you done to help solve this problem on your OSS suites?

PS. If you’re wondering what happened to the bid: the team was excited to have made the shortlist of 3, but then the behemoths decided to withdraw from the race. Turns out that winning the bid could’ve jeopardised the even bigger supply contracts they already had with the client. Boggles the mind to think there were bigger contracts already in play!!

 

Inventory Management re-states its case

In a post last week we posed the question of whether Inventory Management still retains relevance. There are certainly use cases where it remains unquestionably needed. But perhaps there are others that are no longer required, relics of old-school processes and data flows.
 
If you have an extensive OSP (Outside Plant) network, you have almost no option but to store all this passive infrastructure in an Inventory Management solution. You don’t have the option of having an EMS (Element Management System) console / API to tell you the current design/location/status of the network. 
 
In the modern world of ubiquitous connection and overlay / virtual networks, Inventory Management might be less essential than it once was. For service qualification, provisioning and perhaps even capacity planning, everything you need to know is available on demand from the EMS/s. The network is a more correct version of the network inventory than an external repository (ie Inventory Management) can hope to be, even if you have great success with synchronisation.
 
But I have a couple of other new-age use-cases to share with you where Inventory Management still retains relevance.
 
One is for connectivity (okay, so this isn’t exactly a new-age use-case, but the scenario I’m about to describe is). If we have a modern overlay / virtual network, anything that stays within a domain is likely to be better served by its EMS equivalent. Especially since, with advanced routing protocols, connectivity is no longer as simple as physical connections or nearest neighbours. But anything that goes cross-domain and/or off-net needs a mechanism to correlate, coordinate and connect. That’s the role the Inventory Manager is able to play (conceptually).
 
The other is for digital twinning. OSS (including Inventory Management) was the “original twin.” It was an offline mimic of the production network. But I cite Inventory Management as having a new-age requirement for the digital twin. I increasingly foresee the need for predictive scenarios to be modelled outside the production network (ie in the twin!). We want to try failure / degradation scenarios. We want to optimise our allocation of capital. We want to simulate and optimise customer experience under different network states and loads. We’re beginning to see the compute power that’s able to drive these scenarios (and more) at scale.
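To make the twinning idea a bit more concrete, here’s a minimal sketch of using an offline copy of inventory to test a failure scenario without touching the production network. The topology, services and failure case are invented purely for illustration:

```python
# Minimal "digital twin" failure-scenario sketch. The topology, services and
# failure case below are invented purely to illustrate the concept.

from collections import deque

# Offline copy of inventory: node -> connected nodes
topology = {
    "core-1": ["agg-1", "agg-2"],
    "agg-1": ["core-1", "access-1"],
    "agg-2": ["core-1", "access-2"],
    "access-1": ["agg-1"],
    "access-2": ["agg-2"],
}
# Services and the inventory nodes they traverse (end-points first and last)
services = {
    "svc-gold-01": ["access-1", "agg-1", "core-1"],
    "svc-gold-02": ["access-2", "agg-2", "core-1"],
}

def reachable(topo, start, failed):
    """Breadth-first search of the twin, ignoring failed nodes."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen or node in failed:
            continue
        seen.add(node)
        queue.extend(topo.get(node, []))
    return seen

def impacted_services(topo, svcs, failed):
    """Return services whose end-points can no longer reach each other."""
    impacted = []
    for name, path in svcs.items():
        a, b = path[0], path[-1]
        if a in failed or b in failed or b not in reachable(topo, a, failed):
            impacted.append(name)
    return impacted

# "What if agg-1 fails?" - asked of the twin, not the production network
print(impacted_services(topology, services, failed={"agg-1"}))  # ['svc-gold-01']
```

The same pattern extends (with real data volumes and far better algorithms) to the capacity, customer experience and cost scenarios mentioned above.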
 
Is it possible to handle these without an Inventory Manager (or equivalent)?

When OSS experts are wrong

“When experts are wrong, it’s often because they’re experts on an earlier version of the world.”
Paul Graham.
 
OSS experts are often wrong. Not only because of the “earlier version of the world” paradigm mentioned above, but also the “parallel worlds” paradigm that’s not explicitly mentioned. That is, they may be experts on one organisation’s OSS (possibly from spending years working on it), but have relatively little transferable expertise on other OSS.
 
It would be nice if the OSS world view never changed and we could just get more and more expert at it, approaching an asymptote of expertise. Alas, it’s never going to be like that. Instead, we experience a world that’s changing across some of our most fundamental building blocks.
 
“We are the sum total of our experiences.”
B.J. Neblett.
 
My earliest forays into OSS had a heavy focus on inventory. The tie-in between services, logical and physical inventory (and all use-cases around it) was probably core to me becoming passionate about OSS. I might even go as far as saying I’m “an Inventory guy.”
 
Those early forays occurred when there was a scarcity mindset in network resources. You provisioned what you needed and only expanded capacity within tight CAPEX envelopes. Managing inventory and optimising revenue using these scarce resources was important. We did that with the help of Inventory Management (IM) tools. Even end-users had a mindset of resource scarcity. 
 
But the world has changed. We now operate with a cloud-inspired abundance mindset. We over-provision physical resources so that we can just spin up logical / virtual resources whenever we wish. We have meshed, packet-switched networks rather than nailed-up circuits. Generally speaking, cost per resource has fallen dramatically, so we now buy much higher port density, more compute capacity, more bits per dollar, etc. Customers of the cloud generation assume an abundance of capacity that is even available in small, consumption-based increments. In many parts of the world we can also assume ubiquitous connectivity.
 
So, as “an inventory guy,” I have to question whether the scarcity to abundance transformation might even fundamentally change my world-view on inventory management. Do I even need an inventory management solution or should I just ask the network for resources when I want to turn on new customers and assume the capacity team has ensured there’s surplus to call upon?
 
Is the enormous expense we allocate to building and reconciling a digital twin of the network (ie the data gathered and used by Inventory Management) justified? Could we circumvent many of the fallouts (and a multitude of other problems) that occur because the inventory data doesn’t accurately reflect the real network?
 
For example, in the old days I always loved how much easier it was to provision a customer’s mobile / cellular or IN (Intelligent Network) service than a fixed-line service. It was easier because fixed-line service needed a whole lot more inventory allocation and reservation logic and process. Mobile / IN services didn’t rely on inventory, only an availability of capacity (mostly). Perhaps the day has almost come where all services are that easy to provision?
 
Yes, we continue to need asset management and capacity planning. Yes, we still need inventory management for physical plant that has no programmatic interface (eg cables, patch-panels, joints, etc). Yes, we still need to carefully control the balance between capacity build-out, CAPEX and revenue (even more so now in a lower-profitability operator environment). But do many of the other traditional Inventory Management and resource provisioning use cases go away in a world of abundance?
 

 

I’d love to hear your opinions, especially from all you other “inventory guys” (and gals)!! Are your world-views, expertise and experiences changing along these lines too or does the world remain unchanged from your viewing point?
 
Hat tip to Garry for the seed of this post!

The Autonomous Network / OSS Clock

In yesterday’s post, we talked about what needs to happen for a network operator to build an autonomous network. Many of the factors extended beyond the direct control of the OSS stack. We also looked at the difference between designing network autonomy for an existing OSS versus a ground-up build of an autonomous network.

We mostly looked at the ground-up build yesterday (at the expense of legacy augmentation).

So let’s take a slightly closer look at legacy automation. Like any legacy situation, you need to first understand current state. I’ve heard colleagues discuss the level of maturity of an existing network operations stack in terms of a single metric.

However, I feel that this might miss some of the nuances of the situation. For example, different activities are likely to be at different levels of maturity. Hence, the attempt at benchmarking the current situation on the OSS or Autonomous Networking clock below.

OSS Autonomy Clock

Sample activities are shown in grey boxes to demonstrate the concept (I haven’t invested enough time into what the actual breakdown of activities might be yet). A rough sketch of how such a benchmark might be recorded follows the list below.

  • Midnight is no monitoring capability
  • 3AM is Reactive Mode (ie reacting to data presented by the network / systems)
  • 6AM is Predictive Mode (ie using historical learnings to identify future situations)
  • 9AM is Prescriptive / Pre-cognitive Mode (ie using historical learnings, or pre-cognitive capabilities to identify what to do next)
  • Mid-day is Autonomous Networking (ie to close the loop and implement / control actions that respond to current situations automatically)
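One hedged way of recording such a benchmark could be as simple as mapping each activity to its current clock position. The activity names and positions below are placeholders, per the caveat above:

```python
# Sketch of benchmarking operational activities against the OSS Autonomy Clock.
# Activity names and their current clock positions are placeholders only.

CLOCK = {
    0: "No monitoring",
    3: "Reactive",
    6: "Predictive",
    9: "Prescriptive / Pre-cognitive",
    12: "Autonomous (closed-loop)",
}

current_state = {
    "Alarm handling": 3,       # reacting to data presented by the network
    "Capacity planning": 6,    # trendlines / historical learnings
    "Config compliance": 9,    # prescriptive recommendations, applied manually
    "Traffic engineering": 0,  # not yet monitored
}

for activity, hour in sorted(current_state.items(), key=lambda kv: kv[1]):
    print(f"{activity:20s} -> {hour:2d} o'clock ({CLOCK[hour]})")
```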

As always, I’d love to hear your thoughts!

As a network owner….

….I want to make my network so observable, reliable, predictable and repeatable that I don’t need anyone to operate it.

That’s clearly a highly ambitious goal. Probably even unachievable if we say it doesn’t need anyone to run it. But I wonder whether this has to be the starting point we take on behalf of our network operator customers?

If we look at most networks, OSS, BSS, NOC, SOC, etc (I’ll call this whole stack “the black box” in this article), they’ve been designed from the ground up to be human-driven. We’re now looking at ways to automate as many steps of operations as possible.

If we were to instead design the black-box to be machine-driven, how different would it look?

In fact, before we do that, perhaps we have to take two unique perspectives on this question:

  1. Retro-fitting existing black-boxes to increase their autonomy
  2. Designing brand new autonomous black-boxes

I suspect our approaches / architectures will be vastly different.

The first will require an incredibly complex measure, command and control engine to sit over the top of the existing black box. It will probably also need to reach into many of the components that make up the black box and exert control over them. This approach has many similarities with what we already do in the OSS world. The only exception would be that we’d need to be a lot more “closed-loop” in our thinking. I should also re-iterate that this is incredibly complex because it inherits an existing “decision tree” of enormous complexity and adds further convolution.

The second approach holds a great deal more promise. However, it will require a vastly different approach on many levels:

  1. We have to take a chainsaw to the decision tree inside the black box. For example:
    • We start by removing as much variability from the network as possible. Think of this like other utilities such as water or power. Our electricity service only has one feed-type for almost all residential and business customers. Yet it still allows us great flexibility in what we plug into it. What if a network operator were to simply offer a “broadband dial-tone” service and end users decide what they overlay on that bit-stream?
    • This reduces the “protocol stack” in the network (think of this in terms of the long list of features / tick-boxes on any router’s brochure)
    • As well as reducing network complexity, it drastically reduces the variables an end-user needs to decide from. The operator no longer needs 50 grandfathered, legacy products 
    • This also reduces the decision tree in BSS-related functionality like billing, rating, charging, clearing-house
    • We achieve a (globally?) standardised network services catalog that’s completely independent of vendor offerings
    • We achieve a more standardised set of telemetry data coming from the network
    • In turn, this drives a more standardised and minimal set of service-impact and root-cause analyses
  2. We design data input/output methods and interfaces (to the black box and to any of its constituent components) to have closed-loop immediacy in mind. At the moment we tend to have interfaces that allow us to interrogate the network and push changes into the network separately rather than tasking the network to keep itself within expected operational thresholds
  3. We allow networks to self-regulate and self-heal, not just within a node, but between neighbours without necessarily having to revert to centralised control mechanisms like OSS
  4. All components within the black-box, down to device level, are programmable. [As an aside, we need to consider how to make the physical network more programmable or reconcilable, considering that cables, (most) patch panels, joints, etc don’t have APIs. That’s why the physical network tends to give us the biggest data quality challenges, which ripples out into our ability to automate networks]
  5. End-to-end data flows (ie controls) are to be near-real-time, not constrained by processing lags (eg 15 minute poll cycles, hourly log processing cycles, etc) 
  6. Data minimalism engineering. It’s currently not uncommon for network devices to produce dozens, if not hundreds, of different metrics. Most are never used by operators manually, nor are likely to be used by learning machines. This increases data processing, distribution and storage overheads. If we only produce what is useful, then it should improve data flow times (point 5 above). Therefore learning machines should be able to control which data sets they need from network devices and at what cadence. The learning engine can start off collecting all metrics, then progressively turn them off as it deems metrics unnecessary (see the sketch after this list). This could also extend to controlling log-levels (ie how much granularity of data is generated for a particular log, event or performance counter)
  7. Perhaps we even offer AI-as-a-service, whereby any of the components within the black-box can call upon a centralised AI service (and the common data lake that underpins it) to assist with localised self-healing, self-regulation, etc. This facilitates closed-loop decisions throughout the stack rather than just an over-arching command and control mechanism
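On point 6 in particular, here’s a hedged sketch of how a learning engine might progressively throttle metric collection. The metric names, usefulness scores, thresholds and cadences are all illustrative assumptions:

```python
# Sketch of "data minimalism": start by collecting everything, then progressively
# slow down or disable metrics that rarely contribute to decisions.
# Metric names, usefulness scores and cadences are illustrative assumptions.

collection_plan = {  # metric -> poll cadence in seconds (None = disabled)
    "if_octets_in": 60, "if_octets_out": 60, "cpu_util": 60,
    "fan_speed": 60, "obscure_counter_17": 60,
}
usefulness = {  # eg how often each metric contributed to a model or decision
    "if_octets_in": 0.92, "if_octets_out": 0.90, "cpu_util": 0.40,
    "fan_speed": 0.05, "obscure_counter_17": 0.0,
}

def refine_plan(plan, scores, keep=0.5, slow=0.1):
    """Keep useful metrics at full cadence, back off marginal ones, disable the rest."""
    new_plan = {}
    for metric, cadence in plan.items():
        score = scores.get(metric, 0.0)
        if score >= keep:
            new_plan[metric] = cadence       # keep as-is
        elif score >= slow:
            new_plan[metric] = cadence * 10  # back off to a slower poll cycle
        else:
            new_plan[metric] = None          # stop producing / collecting it
    return new_plan

print(refine_plan(collection_plan, usefulness))
```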

I’m barely exposing the tip of the iceberg here. I’d love to get your thoughts on what else it will take to bring fully autonomous networks to reality.

For those starting out in OSS product, here’s a tip

“For those starting out in product, here’s a tip: Design, Defaults*, Documentation, Details and Delivery really matter in software.”
Jeetu Patel.

* Note that you can interpret “Defaults” to be Out-Of-The-Box functionality offered by the product.

Let’s break those 5 D-words down and describe why they really matter to the OSS industry, shall we?

  • Design – The power of OSS product development tends to lie with engineering, ie the developers. I have huge admiration for the very clever and very talented engineers who create amazing products for us to use, buuutttttt……. I just have one reservation – is there a single OSS company that is design-driven? A single one that’s making intuitive, effective, beautiful experiences for their users? The obvious answer is of course engineering teams hold sway over design teams in OSS – how many OSS vendors even have a dedicated design department??? See this article for more.
  • Defaults – Almost every OSS I know of has an enormous amount of “out-of-the-box” functionality baked in. You could even say that most have too much functionality. There’s functionality that might be really important for one customer but never even used by any of the vendor’s other customers. It just represents bloat for all the other customers, and potentially a distraction for their operators. I’m still bemused to see vendors trying to differentiate by adding obscure new default features rather than optimising for “must-have” functions. See this article for more. However, I must add that I’m starting to see a shift in some OSS. They’re moving away from having baked-in functionality and are moving to more data-repository-driven architectures. Interesting!!
  • Documentation – This is a really interesting factor! Some vendors make almost no documentation available until a prospect becomes a paying customer. Other vendors make their documentation available for the general public online and dedicate significant effort to maintaining their information library. The low-doc approach espoused by Agile could be argued to be reducing document quality. However, it also reduces the chance of producing documentation that nobody will read ever! Personally, I believe vendors like Cisco have earnt a huge competitive advantage (in the networking space moreso than OSS) because of their training / certification (ie CCNA, etc) and self-learning (ie online documentation). See this article for more. As such, I’d tend to err on over-documenting for customer-facing collateral. And perhaps under-documenting for internal-facing collateral unless it’s likely to be used regularly and by many.
  • Details – This is another item where there are two ends to the spectrum. That might surprise some people who would claim that attention to detail is paramount. Well, yes…. in many cases, but certainly not all on OSS projects. Let me share a story on attention to detail on a past OSS project. And another story on seeking perfection. Sometimes we just need to find the right balance, and knowing when to prioritise resilience and when to favour precision becomes an art.
  • Delivery – I have two perspectives on this D-word. Firstly, the Steve Jobs inspired quote of “Real artists ship!” In other words, to laud the skill of shipping a product that provides value to the customer rather than holding off on a not-yet-perfected solution. But the second case is probably more important. OSS projects tend to be massive and complex transformation efforts. Our OSS are rarely self-installed like office software, so they require big delivery teams. Some products are easy to deliver/deploy. Others are a *&$%#! If you’re a product developer, please get out in the trenches with your delivery teams and find ways to make their job easier and/or more repeatable.

Opinions wanted – How to Benchmark OSS/BSS complexity

I’d love to ask you an important question…  how do we benchmark OSS/BSS complexity? To measure how complex our systems are and therefore provide a signpost for simplification.

A colleague has opined that the number of apps in a stack could be used as a proxy. I can see where he’s going with that, but I feel that it doesn’t account for architectural differences such as monolith versus microservices.

I’d love to hear your thoughts via the comments box below.

FWIW, here are some additional thoughts from me, but please don’t let them bias your opinions:

  • For me, complexity relates to the efficiency of getting tasks done:
    • How much time to complete certain tasks
    • How many button clicks
    • How much swivel-chairing
    • How many CPU cycles for automated tasks
    • How much admin overhead
    • How much duplicated effort and/or rework
  • However, there are so many different tasks done within an OSS/BSS stack that it’s difficult to provide a complexity metric that compares one OSS/BSS stack with another. Or compares a single stack before/after changes are made
  • In some cases the complexity happens inside the OSS/BSS “black box” (eg tools within the suite aren’t seamlessly integrated, causing operators to perform dual-entry that leads to data inconsistency and downstream re-work)
  • In other cases the complexity is inherited from outside the black box (eg product offerings have hundreds of possible variants that are imperceptibly different in the customer’s eyes). I call this The OSS Pyramid of Pain
  • In many cases, the complexity of an OSS/BSS stack is less about the systems and integrations, and more about the complexity of The Decision Tree that spans the stack. The spread of the Decision Tree is impacted by:
    • The OSS/BSS applications
    • Support applications (eg authentication, security, data management, resilience / availability, etc)
    • System interfaces (internal and external)
    • User interfaces
    • Process designs
    • Product definitions
    • Work practices
    • Data models
    • Design rules
    • Network topologies
    • etc, etc
  • The more complex the Decision Tree, the more complex it is to transform our OSS. It loosely aligns with what I call The Chessboard Analogy
  • The development strategy used also has an impact, be it monolithic, best-of-breed, hosted or in-house developed. For example, an in-house-developed solution is likely to have less functionality-bloat than a COTS (off-the-shelf) solution. The COTS solution needs to include additional functionality to enable it to support the requirements of multiple customers
  • And finally, a benchmark is only as useful as the actions that it triggers. How do we codify a complexity metric that has the equally complex array of contributions described above?
  • Perhaps we could take a somewhat abstracted approach like the NPS (Net Promoter Score) does, thus creating an NSS (Net Simplicity Score). A rough sketch of what that might look like follows below
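If we did try an NPS-style abstraction, a minimal sketch of an NSS calculation is below. The task metrics, weights and scoring scale are entirely illustrative assumptions, not a proposed standard:

```python
# Sketch of a "Net Simplicity Score" (NSS). Tasks, baselines, weights and the
# scoring scale are illustrative assumptions only.

def net_simplicity_score(task_metrics, weights):
    """Compare each task's effort against a baseline; positive = simpler than baseline."""
    scores = []
    for task, observed in task_metrics.items():
        ratio = observed["actual"] / observed["baseline"]  # >1 means more complex
        scores.append(weights.get(task, 1.0) * (1.0 - ratio))
    return 100 * sum(scores) / sum(weights.get(t, 1.0) for t in task_metrics)

tasks = {
    "provision_service": {"baseline": 30, "actual": 45},    # minutes per order
    "close_p1_incident": {"baseline": 120, "actual": 90},   # minutes per incident
    "swivel_chair_entries": {"baseline": 5, "actual": 12},  # re-keyed records per order
}
weights = {"provision_service": 2.0, "close_p1_incident": 2.0, "swivel_chair_entries": 1.0}
print(f"NSS: {net_simplicity_score(tasks, weights):.1f}")  # negative = more complex
```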

As mentioned above, I’d love to hear your thoughts on how we can benchmark the level of complexity in our OSS/BSS. Please leave your comments below.

 

The digital transformation paradox twins

There’s an old adage that “the confused mind always says no.”

Consider this from your own perspective. If you’re in a state of confusion about something, are you likely to commit wholeheartedly or will you look to delay / procrastinate?

The paradox for digital transformation is that our projects are almost always complex, but complexity breeds confusion and uncertainty. Transformation may be urgently needed, but it’s really hard to persuade stakeholders and sponsors to commit to change if they don’t have a clear picture of the way forward.

As change agents, we face another paradox. It’s our task to simplify the messaging, but our messaging should not imply that the project will be simple. That will just set unrealistic expectations for our stakeholders (“but this project was supposed to be simple,” they say).

Like all paradoxes, there’s no perfect solution. However, one technique that I’ve found to be useful is to narrow down the choices. Not by discarding them outright, but by figuring out filters – ways to quickly include or exclude branches of the decision tree.

Let’s take the example of OSS vendor selection. An organisation asks itself, “what is the best-fit OSS/BSS for our needs?” The Blue Book OSS/BSS Vendor Directory will show that there are well over 400 OSS/BSS providers to choose from. Confusion!

So let’s figure out what our needs are. We could dive into really detailed requirement gathering, but that in itself requires many complex decisions. What if we instead just use a few broad needs as our first line of filtering? We know we need an outside plant management tool. Our list of 400+ now becomes 20. There’s still confusion, but we’re now more targeted.

But 20 is still a lot to choose from. A slightly deeper level of filtering should allow us to get to a short list of 3-5. The next step is to test those 3-5 to see which does the best at fulfilling the most important needs of the organisation. Chances are that the best-fit won’t fulfil every requirement, but generally it will clearly fulfil more than any of the other alternatives. It’s best-fit, not perfect fit.
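As a highly simplified illustration of that funnel, the filtering itself can be almost mechanical once the broad needs are agreed. The vendor records and need tags below are invented; in practice they’d come from a directory like the one described elsewhere on this blog:

```python
# Sketch of coarse, first-line vendor filtering: a few broad needs as tags,
# not hundreds of detailed requirements. Vendor records and tags are invented.

vendors = [
    {"name": "Vendor A", "capabilities": {"outside_plant", "inventory", "gis"}},
    {"name": "Vendor B", "capabilities": {"fault", "performance"}},
    {"name": "Vendor C", "capabilities": {"outside_plant", "workforce"}},
    # ... imagine 400+ of these entries sourced from a vendor directory
]

broad_needs = {"outside_plant"}          # first filter: the must-have domain(s)
important_needs = {"gis", "workforce"}   # second filter: deeper, but still high-level

long_list = [v for v in vendors if broad_needs <= v["capabilities"]]
short_list = sorted(long_list,
                    key=lambda v: len(important_needs & v["capabilities"]),
                    reverse=True)

print([v["name"] for v in long_list])   # eg ['Vendor A', 'Vendor C']
print([v["name"] for v in short_list])  # ranked by fit against the deeper needs
```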

We haven’t made the project less complex, but we have simplified the decision. We’ve arrived at the “best” option, so the way forward should be clear right?

Unfortunately, it’s not always that easy. Even though the best way forward has been identified, there are still uncertainties in the minds of stakeholders, caused purely by the complexity of the upcoming project. I’ve seen examples where the choice of vendor has been clear, with the best-fit clearly surpassing the next-best, but the buyer is still indecisive. I completely get it. Our task as change agents is to reduce doubts and increase transformation confidence.

What will get your CEO fired? (part 2)

In Monday’s article, we suggested that the three technical factors that could get the big boss fired are probably only limited to:

  1. Repeated and/or catastrophic failure (of network, systems, etc)
  2. Inability to serve the market (eg offerings, capacity, etc)
  3. Inability to operate network assets profitably

In that article, we looked closely at a human factor and how current trends of open-source, Agile and microservices might actually exacerbate it.

But let’s look at some of the broader examples under point 1 today. The failure factors we could consider that might result in the big boss getting fired are:

  1. Availability (nodal and E2E)

  2. Performance (nodal and E2E)

  3. Security (security trust model – cloud vs corporate vs active network and related zones)

  4. Remediation times, systems & processes (Assurance), particularly effectiveness of process for handling P1 (Priority 1) incidents

  5. Resilience Architecture

  6. Disaster Recovery Plan (incl Backup and Restore process, what black-swan events the organisation is susceptible to, etc)

  7. Supportability and Maintenance Routines

  8. Change and Release Management approaches

  9. Human resources (incl business continuity risk of losing IP, etc)

  10. Where are the SPoFs (Single Points of Failure)

We should note too that these should be viewed through two lenses:

  • The lens of the network our OSS/BSS is managing and
  • The lens of the systems (hardware/software/cloud) that make up our OSS/BSS

“The Blue Book OSS/BSS Vendor Directory” from Passionate About OSS has officially launched

We’re excited to announce that “The Blue Book OSS/BSS Vendor Directory” has officially gone live here at https://passionateaboutoss.com/directory

It provides a comprehensive directory of over 400 suppliers that produce OSS, BSS and/or related network management tools. Company details, product details and functionality classifications are included.

The Blue Book OSS / BSS Vendor Directory

Every network operator has a unique set of needs from their operational software – software that includes OSS (Operational Support Systems), BSS (Business Support Systems), NMS (Network Management Systems) and the many other related tools.

To service those many and varied needs, a large number of different products have been created by some very clever developers. But it’s a highly fragmented market. There are literally hundreds of product options out there and they all have different capabilities.

If you’re a typical buyer, how many of those products are you familiar with? Five? Ten? Fifty? How do you know whether the best-fit product or supplier is within the list you already know? Perhaps the best-fit is actually amongst the hundreds of other products and suppliers you’re not familiar with yet. How much time do you have to research each one and distill down to a short-list of possible candidates to service your specific needs? Where do you start? Lots of web searches? There has to be an easier way.

What if you’re a seller? These products tend to have lengthy life-cycles once they’ve been installed so it might be years before a prospect actually enters the buying phase. Yet there are so many prospects out there at different phases of their buying windows. There are bound to be some live ones at any time that suit your capabilities. The challenge for you as a supplier is how to make those prospects aware of you. You don’t have the time to establish trusted relationships with hundreds, perhaps even thousands, of buyers across the globe (or maybe just within your region/s). Wouldn’t you love to be presented with qualified prospects who are in (or nearing) their buying window?

Well, we at Passionate About OSS have created The Blue Book OSS/BSS Vendor Directory to simplify the task of bringing buyers and sellers together. With over 400 suppliers listed (and climbing), we provide a single, comprehensive repository for searching, matching and connecting. The tools allow you to do it yourself, or we can help you using the approaches we’ve developed, used and refined over the years.

Now just click on “Directory” to start your journey of searching, matching and connecting (and updating your listing if you’re a supplier).

A lighter-touch OSS procurement approach (part 3)

We’ve spoken at length about TM Forum’s, “Time to kill the RFP? Reinventing IT procurement for the 2020s,” report so far this week. We’ve also spoken about the feeling that the OSS/BSS RFP (Request For Proposal) still has relevance in some situations… as long as it’s more of a lighter-touch than most. We’ve spoken about a more pragmatic approach that aims to find best available fit (for key objectives through stages of filtering) rather than perfect fit (for all requirements through detailed analyses). And I should note that “best available fit” includes measurement against these three contrarian procurement KPIs ahead of the traditional ones.

Yesterday’s post discussed how we get to a short list with minimal involvement of buyers and sellers, with the promise that we’d discuss the detailed analysis stage today.

It’s where we do use an RFP, but with thought given to the many pain-points cited so brilliantly by Mark Newman and team in the abovementioned TM Forum report.

The RFP provides the mechanism to firm up pricing and architecture, but is also closely tied to a PoC (Proof of Concept) demonstration. The RFP helps to prioritise the order in which PoCs are performed. PoCs tend to be very time consuming for buyer and seller. So if there’s a clear leader from the paper studies so far, then they will demonstrate first.

If there’s not a clear difference, or if the prime candidate’s demonstration identified significant gaps, then additional PoCs are run.

And to ensure the PoCs are run against the objectives that matter most, we use scenarios that were prioritised during part 1 of this series.

Next steps are to form the more detailed designs, commercials / contracts and ratify that the business case still holds up.

In yesterday’s post, I also promised to share our “starting-point” procurement methodology. I say starting point because each buyer situation is different and we tend to customise it to each buyer’s needs. It’s useful for starting discussions.

The overall methodology diagram is shown below:

PAOSS vendor selection process

A few key notes here:

  1. The process looks much heavier than it really is… if you use traditional procurement processes as an indicator
  2. We have existing templates for all the activities marked in yellow
  3. The activity marked in blue partially represents the project we’re getting really excited to introduce to you tomorrow

 

A lighter-touch OSS procurement approach (part 2)

Yesterday’s post described the approach to get from 400+ possible OSS/BSS suppliers/products down to a more manageable list without:

  1. Having to get into significant discussions with vendors (yet)
  2. Gathering all your stakeholders together to prepare a detailed list of requirements

We’ll call this “the long list,” which might consist of 5-20 suppliers. We use this evaluation technique (which we’ll share more about on Monday) to ensure we’ve looked at the broad market of suppliers rather than just the few the buyer already knows.

The next step we follow helps us to get to a much smaller list, which we’ll call “the short list.”

For this, we do need to contact vendors (the long list) and we do need to prepare a list of requirements to add to the objectives and key workflows we’ve previously identified. The requirements won’t need to be detailed, but will still probably number into the 100s – some from our pick-list, others customised to each client’s needs.

Then we engage in what we refer to as an EOI (Expression of Interest) phase. Our EOIs are not just a generic market capability analysis like many buyers conduct. Ours seek indicative vendor compliance (to objectives and requirements) and indicative pricing based on the dimensions we supply. We’ve refined this model over the years to make it quite quick and (relatively) easy for vendors to respond to.

Using compliance to measure suitability and indicative pricing to plug in to our long-term TCO (Total Cost of Ownership) model, the long list usually becomes a clear short list of 1-5 very quickly.
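For what it’s worth, here’s a hedged sketch of how those EOI responses might be combined into a ranking. The compliance figures, pricing and the weighting between them are invented for illustration only:

```python
# Sketch of turning EOI responses (indicative compliance + indicative pricing)
# into a ranked short list. All figures and the weighting are invented.

eoi_responses = [
    {"vendor": "Vendor A", "compliance": 0.86, "tco_5yr": 2_400_000},
    {"vendor": "Vendor B", "compliance": 0.78, "tco_5yr": 1_900_000},
    {"vendor": "Vendor C", "compliance": 0.91, "tco_5yr": 3_300_000},
]

cheapest = min(r["tco_5yr"] for r in eoi_responses)

def fit_score(resp, compliance_weight=0.6):
    """Blend compliance with relative cost; higher is better."""
    cost_score = cheapest / resp["tco_5yr"]  # 1.0 for the cheapest option
    return compliance_weight * resp["compliance"] + (1 - compliance_weight) * cost_score

for resp in sorted(eoi_responses, key=fit_score, reverse=True):
    print(f'{resp["vendor"]}: fit = {fit_score(resp):.2f}')
```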

Now we can get into detailed discussions with a very small number of best-fit suppliers without having wasted much time of buyer or seller. 

More on the detailed discussions tomorrow!

A lighter-touch OSS procurement approach (part 1)

You may have noticed that we’ve run a series of posts about OSS/BSS procurement, and about the RFP process by association.

One of the first steps in the traditional procurement process is preparing a strategy and detailed set of requirements.

As TM Forum’s, “Time to kill the RFP? Reinventing IT procurement for the 2020s,” report describes:
“Before an RFP can be issued, the CSP’s IT or network team must produce a document detailing the strategy for implementing a technology or delivering a service, which is a lengthy process because of the number of stakeholders involved and the need to describe requirements in a way that satisfies them all.”

The problem with most requirements documents, the ones I’ve seen at least, is that they tend to get down into a deep, deep level of detail. And when it’s down in that level of detail, contrasting opinions from different stakeholders can make it really difficult to reach agreement. Have you ever been in a room with many high-value (and high cost) stakeholders spending days debating the semantics (and wording) of requirements? Every stakeholder group needs a say and needs to be heard.

The theory is that you need a great level of detail to evaluate supplier offerings for best-fit. Well, maybe, but not in the initial stages.

First things first – I seek to find out what’s really important for the organisation. That rarely comes from a detailed requirements spreadsheet, but by determining the things that are done most often and/or add the most value to the buyer’s organisation. I use persona mapping, long-tail and perhaps whale-curve mapping approaches to determine this.
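For the long-tail / whale-curve part of that analysis, here’s a minimal sketch of ranking workflows by their contribution to see how few of them dominate. The workflow names, volumes and values are invented:

```python
# Sketch of a long-tail / whale-curve style analysis: rank workflows by their
# contribution, then see how few account for most of the value. Data is invented.

workflows = {  # workflow -> (monthly volume, value per execution in $)
    "assure_p1_incident": (40, 5000),
    "provision_broadband": (3000, 60),
    "modify_enterprise_vpn": (200, 400),
    "bill_run_adjustment": (15, 300),
    "adhoc_reporting": (500, 5),
}

contributions = sorted(
    ((name, vol * val) for name, (vol, val) in workflows.items()),
    key=lambda item: item[1],
    reverse=True,
)
total = sum(value for _, value in contributions)

cumulative = 0
for name, value in contributions:
    cumulative += value
    print(f"{name:22s} {value:>9,.0f}  cumulative {100 * cumulative / total:5.1f}%")
```

The handful of workflows at the top of that list tend to be the ones worth building the evaluation criteria around.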

Persona mapping means identifying all the groups within the buyer’s organisation that need to interact with the OSS/BSS (current and proposed). Then sitting with each group to determine what they need to achieve, who they need to interact with and what their workflows look like. That also gives a chance for all groups to be heard.

From this, we can collaboratively determine some high-level evaluation criteria, maybe only 15-20 to start with. You’d be surprised at how quickly these 15-20 criteria can help with initial supplier filtering.

Armed with the initial 15-20 evaluation criteria and the project we’re getting excited to launch on Monday, we can get to a relevant list of possible suppliers quite quickly. It allows us to do a broad market search to compile a list of suppliers, not just from the 5-10 suppliers the buyer already knows about, but from the 400+ suppliers/products available on the market. And we don’t even have to ask the suppliers to fill out any lengthy requirement response spreadsheets / forms yet.

We’ll continue the discussion over the next two days. We’ll also share our procurement methodology pack on Sunday.

Do I support the death penalty (of OSS RFPs)? Hmmm….

As per yesterday’s post, I’ll continue to reference a TM Forum report called, “Time to kill the RFP? Reinventing IT procurement for the 2020s” today. Mark Newman and the team have captured and discussed so many layers to the OSS/BSS procurement process.

There’s no doubt the current stereotypical RFP approach to procurement is broken. It needs to be done differently. That’s why we have been doing it differently with customers for years now (another hint regarding a project we’re getting excited to announce this Monday).

The TM Forum report is really powerful and well worth a read. There are a few additional (and somewhat random) thoughts that go through my head when considering the death of the RFP:

  1. The TM Forum report is primarily coming at the problem from the perspective of a carrier that is constantly steering the development of its own systems, as implied through this quote, “The fundamental problem with the RFP process is that in a fast-paced technology environment, where cloud and software are fast becoming preferred options, it is difficult for CSPs to describe in lengthy, written documents what they want and need. The processes are simply too complex and cumbersome to support modern, Agile methods of working.”
  2. That perspective is particularly applicable for some buyers, ones that have committed to having significant developer resources available to build exactly what they want. That could be in the form of in-house developers, contract developers, long-term panel arrangements with suppliers or similar
  3. Others, perhaps such as utilities, enterprise and some telcos want to focus on their core business and delegate OSS/BSS configuration and customisation to third-parties.
  4. Some of those rely on COTS (commercial off the shelf) software to leverage the benefits of innovation, cost and development time that have been spread across multiple customers. Their budgets simply don’t allow for custom-built solutions
  5. COTS solutions, be they on-prem through to cloud service models, are almost never going to be a perfect fit for a buyer’s needs. They’re designed to generically suit many buyers, so a certain amount of bloat becomes part of the trade-off
  6. In recent weeks, I’ve seen two entirely in-house developed OSS/BSS. They fit their organisations like a glove and there’s almost no bloat at all. In fact it would be almost impossible for a COTS solution to replace what they’ve built. In both cases it’s taken a decade of ongoing development to get to that position. Most buyers don’t have that amount of time to get it right though unfortunately
  7. Commercial realities imply a pragmatic approach is taken to procurement – which product/s provide default capability that best aligns with the buyer’s most important objectives.
  8. RFPs often get bogged down at the far right-hand side of the long-tail of requirements (where impact tends to be negligible), or in trying to completely re-sculpt the solution to be the perfect fit (that it’s unlikely to ever be)
  9. In my experience at least, the best-fit (not perfect fit) solution, or very short list of solutions, usually becomes apparent fairly quickly [we’ll share more about how we do that tomorrow]. It’s then just a case of testing objectives, assumptions and gaps (eg via a proof-of-concept) and getting to a mutually beneficial commercial agreement
  10. As one respondent in the TM Forum report put it, “The RFP glorifies the process, not the outcome.” A healthy dose of outcome-driven pragmatism helps to reduce glorification of the RFP process
  11. Also in my experience at least, scope of works quotes from vendors (which RFPs tend to lead to) tend to be written in a waterfall style that don’t fit into Agile frameworks very effectively. That can be partially overcome by slicing and dicing the SoW in ways that are more conducive to Agile delivery
  12. With so much fragmentation in the OSS/BSS market already (there are over 400 in our vendor directory), that means the talent pool of creators is thinly spread. Many of those 400 have duplicated functionality, which isn’t great for the industry’s overall progress. Custom development for each different buyer spreads the talent pool even further… unless buyers can get economies of development scale through shared platforms like ONAP

In summary, I love the concept of avoiding massive procurement events. I still can’t help but think the RFP still fits in there somewhere for many buyers… as long as we ensure we glorify the outcomes and de-emphasise the process. It’s just that we use RFPs like a primitive instrument and inflict blunt-force trauma, rather than using surgical precision.

Lobbying hard for the death penalty for OSS RFPs

Earlier this year, the TM Forum published a really insightful report called, “Time to kill the RFP? Reinventing IT procurement for the 2020s.” There are so many layers to the OSS/BSS procurement discussion and Mark Newman and team have done a fantastic job of capturing them. We’ll expand on a few of those layers in a series of posts this week.

For example, section 2 articulates the typical RFI / RFP / RFQ approach. It’s clear to see why the typical approach is flawed. Yesterday’s post pondered whether procurement events are flawed from the initial KPIs that are set by buyers. Today we’ll take a look at the process that follows.

Two quotes from the TM Forum report frame some of the challenges with RFPs from buyer and seller viewpoints respectively:
QUOTE 1 (Buyer-side) – “CSPs normally distribute RFPs to a group of three to eight suppliers. These are most likely existing suppliers, previous vendors or companies the CSP is aware of through its own technology scouting. Suppliers are likely to include systems integrators who rely on other vendors to fulfill elements of the contract, and CSPs tend to invite bidders offering a range of options.
For example, they may invite a supplier that is likely to offer a good price, one that is a ‘safe’, low-risk option, and the incumbent supplier, which in many cases the CSP is looking to replace.
The document itself is likely to be several hundred pages long, a large portion of it comprising details of technology requirements, with suppliers asked to specify whether they comply with each requirement.”
The question I’d ask about this process is how does the CSP choose 3-8 out of the 400+ vendors that supply the OSS/BSS market? Does their “own technology scouting” adequately discount the hundreds of others that could potentially be best-fit for their needs?

QUOTE 2 (Seller-side) – “We were holed up in our hotel for a month working feverishly on different aspects of the bid. We had 15 people there in total, and we were asked to come in for meetings with five different teams. The meetings go on and on, and you really have no idea when they’re going to finish.”
Let’s do the sums on this situation. 15 people x 25 days x $1500 per day (a round figure that includes accommodation, meals, etc) = $562,500. That’s over half a million dollars just for the seller-side of the post-RFP evaluation phase. Now let’s say there were 4 sellers going through this – that’s $2.25 million of seller effort expended on a single evaluation. [Just a small aside here – reading between the lines, do you suspect the buyer was taking the seller on a journey into the minutiae or focusing on what will move the needle for them? Re-read that through the lens of yesterday’s contrasting KPI perspectives]

You can see exactly why Mark has proposed that it’s, “Time to kill the RFP,” at least in its traditional form. These two quotes lobby hard for the death penalty. More on that tomorrow!

Also note that another hint was contained above in the lead-up to a project launch on Monday that we’re really excited about.

OSS/BSS procurement is flawed from the outset

You may’ve noticed that things have been a little quiet on this blog in recent weeks. We’ve been working on a big new project that we’ll be launching here on PAOSS on Monday. We can’t reveal what this project is just yet, but we can let you in on a little hint. It aims to help overcome one of the biggest problem areas faced by those in the comms network space.

Further clues will be revealed in this week’s series of posts.

The industry we work in is worth tens of billions of dollars annually. We rely on that investment to fund the OSS/BSS projects (and ops/maintenance tasks) that keep many thousands of us busy. Obviously those funds get distributed by project sponsors in the buyers’ organisations. For many of the big projects, sponsors are obliged to involve the organisation’s procurement team.

That’s a fairly obvious path. But I often wonder whether the next step on that path is full of contradictions and flaws.

Do you agree with me that the 3 KPIs sponsors expect from their procurement teams are:

  1. Negotiate the lowest price
  2. Eliminate as many risks as possible
  3. Create a contract to manage the project by

If procurement achieves these 3 things, sponsors will generally be delighted. High-fives for the buyers that screw the vendor prices right down. Seems pretty obvious right? So where’s the contradiction? Well, let’s look at these same 3 KPIs from a different perspective – a more seller-centric perspective:

  1. I want to win the project, so I’ll set a really low price, perhaps even loss-leader. However, our company can’t survive if our projects lose money, so I’ll be actively generating variations throughout the project
  2. Every project of this complexity has inherent risks, so if my buyer is “eliminating” risks, they’re actually just pushing risks onto me. So I’ll use any mechanisms I can to push risks back on my buyer to even the balance again
  3. We all know that complex projects throw up unexpected situations that contracts can’t predict (except with catch-all statements that aim to push all risk onto sellers). We also both know that if we manage the project by contractual clauses and interpretations, then we’re already doomed to fail (or are already failing by the time we start to manage by contract clauses)

My 3 contrarian KPIs to request from procurement are:

  1. Build relationships / trust – build a framework and environment that facilitates a mutually beneficial, long-lasting buyer/seller relationship (ie procurement gets judged on partnership length ahead of cost reduction)
  2. Develop a team – build a framework and environment that allows the buyer-seller collective to overcome risks and issues (ie mutual risk mitigation rather than independent risk deflection)
  3. Establish clear and shared objectives – ensure both parties are completely clear on how the project will make the buyer’s organisation successful. Then both constantly evolve to deliver benefits that outweigh costs (ie focus on the objectives rather than clauses – don’t sweat the small stuff (or purely technical stuff))

Yes, I know they’re idealistic and probably unrealistic. Just saying that the current KPI model tends to introduce flaws from the outset.

Moving from traditional assurance to AIOps, what are the differences?

We’re going to look into assurance models of the past versus the changing assurance demands that are appearing these days. The diagrams below are highly stylised for discussion purposes so they’re unlikely to reflect actual implementations, but we’ll get to that.

Old Assurance Architecture
Under the old model, the heart of the OSS/BSS was the database (almost exclusively a relational database). It would gather data, via probes/MDDs/collectors, from the network under management (Note: I’ve shown the sources as devices, but they could equally be EMS/NMS). The mediation device drivers (MDDs) would take feeds from the network and homogenise them to be suitable for very precise loading into tables in the relational databases.

This data came in the form of alarms/events, performance counters, syslogs and not much else. It could come in all sorts of common (eg SNMP) or obscure forms / protocols. Some would come via near-real-time notifications, but a lot was polled at cycles such as 5 or 15 mins.
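As a rough illustration of that homogenisation step, the sketch below maps a raw notification onto the columns of a relational alarm table. The field names and severity mapping are invented rather than any particular vendor’s format:

```python
# Sketch of a mediation device driver (MDD) homogenising a raw notification into
# a row suitable for a relational ALARMS table. Field names / mappings are invented.

from datetime import datetime, timezone

SEVERITY_MAP = {1: "CRITICAL", 2: "MAJOR", 3: "MINOR", 4: "WARNING", 5: "CLEAR"}

def normalise_event(raw):
    """Map a raw, vendor-specific event dict onto the columns of an ALARMS table."""
    return {
        "device_id": raw.get("agent_address"),
        "event_time": datetime.fromtimestamp(raw.get("timestamp", 0), tz=timezone.utc),
        "severity": SEVERITY_MAP.get(raw.get("vendor_severity"), "INDETERMINATE"),
        "event_type": raw.get("trap_oid", "unknown"),
        "additional_text": raw.get("varbinds", ""),
    }

raw_event = {"agent_address": "10.1.2.3", "timestamp": 1_700_000_000,
             "vendor_severity": 2, "trap_oid": "1.3.6.1.4.1.9999.0.1",
             "varbinds": "linkDown ge-0/0/1"}
print(normalise_event(raw_event))
```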

Then the OSS/BSS applications (eg Assurance, Inventory, etc) would consume data from the database and write other data back to the database. There were some automations, such as hard-coded suppression rules, etc.

The automations were rarely closed-loop (ie to actually resolve the assurance challenge). There were also software assistants such as trendlines and threshold alerts to help capacity planners.

There was little overlap into security assurance – occasionally there might have been a notification of device configuration varying from a golden config, or indirect indicators through performance graphs / thresholds.

New Assurance Architecture
But so many aspects of the old world have been changing within our networks and IT systems. The active network, the resilience mechanisms, the level of virtualisation, the release management methods, containerisation, microservices, etc. The list goes on and the demands have become more complex, but also far more dynamic.

Let’s start with the data sources this time, because this impacts our choice of data storage mechanism. We still receive data from the active network devices (and EMS/NMS), but we also now source data from other sources. They might be internal sources from IT, security, etc, but could also be external sources like social indicators. The 4 Vs of data between old and new models have fundamentally changed:

  • Volume – we’re seeing far more data
  • Variety – the sources are increasing and the structure of data is no longer as homogenised as it once was (in fact unstructured data is now commonplace)
  • Velocity – we’re receiving incoming data at any number of different velocities, often at far higher frequency than the 15 minute poll cycles of the past
  • Veracity (or trustworthiness) – our systems of old relied on highly dependable data due to its relational nature, and could easily fall into a data death spiral if data quality deteriorated. Now we accept data with questionable integrity and need to work around it

Again the data storage mechanism is at the heart of the solution. In this case it’s a (probably) unstructured data lake rather than a relational database because of the 4 Vs above. The data that it stores must still be stored in a way that allows cross-referencing to happen with other data sets (ie the role of the indexer), but not as homogenised as a relational database.

The 4 Vs also fundamentally change the way we have to make use of the data. It surpasses our ability to process in a manual or semi-manual way (where semi-manual implies the traditional rules-based automations like suppression, root-cause analysis, etc). We have no choice but to increase dependency on machine-driven tools as automations need to become:

  • More closed-loop in nature – that is, to not just consolidate and create a ticket, but also to automate the resolution and ticket closure
  • More abundant – doing even more of the mundane, recurring tasks like auto-sizing resources (particularly virtual environments), restarting services, scheduling services, log clean-up, etc

To be honest, we probably passed the manual/semi-manual tipping point many years ago. In the meantime we’ve done as best we could, eagerly waiting until the machine tools like ML (Machine Learning) and AI (Artificial Intelligence) could catch up and help out. This is still fairly nascent, but AIOps tools are becoming increasingly prevalent.
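To illustrate what “closed-loop” means in practice, here’s a minimal sketch of the detect / remediate / verify / close pattern described above. The detection rule, remediation action and ticketing calls are placeholders for whatever your own stack exposes:

```python
# Sketch of a closed-loop automation: detect -> remediate -> verify -> close/escalate.
# The detection rule, remediation action and ticketing functions are placeholders.

import time

def detect_high_memory(device):      # placeholder detector
    return device["mem_util"] > 0.9

def restart_service(device):         # placeholder remediation action
    print(f"restarting offending service on {device['name']}")
    device["mem_util"] = 0.4         # pretend the restart worked

def closed_loop(device, open_ticket, close_ticket, escalate):
    if not detect_high_memory(device):
        return
    ticket = open_ticket(device, "memory utilisation above 90%")
    restart_service(device)
    time.sleep(1)                    # allow the change to take effect
    if detect_high_memory(device):
        escalate(ticket)             # automation failed: hand to a human
    else:
        close_ticket(ticket)         # loop closed without any human effort

# Example wiring with stub ticketing functions
device = {"name": "edge-router-7", "mem_util": 0.95}
closed_loop(device,
            open_ticket=lambda d, msg: print(f"ticket opened: {d['name']} - {msg}") or "TKT-1",
            close_ticket=lambda t: print(f"{t} auto-closed"),
            escalate=lambda t: print(f"{t} escalated"))
```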

The exciting thing is that once we start harnessing the potential of these machine tools, our AIOps should allow us to ask far more than just network health questions like past models. They could allow us to ask marketing / cost / capacity / profitability / security questions like:

  • Where do I urgently need to increase capacity in the network (and can you automatically just make this happen – a more “just in time” provisioning of resources rather than planning months ahead)
  • Where could I re-position capacity around the network to reduce costs, improve performance, improve customer experience, meet unmet demand
  • Where should sales/marketing teams focus their efforts to best service unmet demand (could be based on current network or in sequence with network build-out that’s due to become ready-for-service)
  • Where are the areas of un-met demand compared with our current network footprint
  • With an available budget of $x, what ratio of maintenance, replacement and expansion is it best spent on, and where
  • How do we better understand profitability vectors in the network compared to just the more crude revenue metrics (note that profitability vectors could include service density, amount of maintenance on the supporting infrastructure, customer interactions, churn, etc on a geographic or similar basis)
  • Where (and how) can we progressively automate a more intent or policy-driven auto-remediation of the network (if we don’t already have a consistent approach to config management)
  • What policies can we tweak to get better performance from the network on a more real-time basis (eg tweaking QoS policies based on current traffic in various parts of the network)
  • Can we look at past maintenance trends to automatically set routine maintenance schedules that are customised by device, region, device type, loads, etc rather than using a one-size-fits-all maintenance schedule
  • Can I determine, on a real-time basis, what services are using which resources to get a true service impact estimate in a dynamic, packet-switched network environment
  • What configurations (or misconfigurations) in the network pose security vulnerability threats
  • If a configuration change is identified, can it be automatically audited and reported on (and perhaps even quarantined) before then being authorised (manually or automatically?)
  • What anomalies are we seeing that could represent security events
  • Can we utilise end-to-end constructs such as network services, customer services, product lifecycle, device lifecycle, application performance (as well as the traditional network performance) to enhance context and correlation
  • And so many more that can’t be as easily accommodated by traditional assurance tools