The differences between Inventory, Asset and Config Management in an OSS

We recently discussed the differences between PNI (Physical Network Inventory) and LNI (Logical Network Inventory) solutions that appear as part of many OSS stacks. 

As promised, today we’ll talk about the subtle differences between:

  • Inventory Management Systems 
  • Asset Management Systems and
  • Configuration Management Databases (CMDB)
  • We might even discuss Virtual Infrastructure Managers (VIM) and Resource Managers, as well as Config Managers (different from CMDB), too

Inventory vs Asset vs CMDB

To be honest, the diagram above doesn’t show adequate overlap. Each of these systems has a slightly different purpose, usually for a slightly different set of personas. However, they all play a part in managing the resources that make up an organisation’s Active Network (the network segment dedicated to carrying customer traffic, as opposed to internal corporate traffic).

Let’s start with Inventory Management Systems (IMS) because IMHO, these are the tools that were traditionally responsible for managing service-provider networks. These are the tools typically used by network planners, network engineers, capacity planners and other back-office operational staff.  As mentioned in the link above, these tools can be further broken down into:

  • PNI (Physical Network Inventory) – The physical devices like switches, routers, firewalls as well as the outside plant (OSP) like cables, joints, etc. Generally only used by operators with large, wide-spread networks of physical assets, especially outside plant.
  • LNI (Logical Network Inventory) – The set of objects that are formed using physical infrastructure (and possibly associations to other logical objects). This could include circuits, VLANs and other overlay network topologies, as well as the management of attributes like bandwidth, protocols and other network functionality.

These tools tend to focus on the key physical/logical/virtual resources that comprise an operator’s active network (AN). However, they often also support functionality that crosses into other domains such as asset and config management.

Asset Management Systems (AMS), as the name implies, have a more “financial” purpose; where assets are objects of intrinsic financial value to an organisation. AMS tools tend to be used by the accounting and asset management teams. They’re used to track current value (purchase price minus depreciation), warranties, spares management, life-cycles / refresh / end-of-life of assets and their contracts, as well as reactive and predictive maintenance. AMS will tend to store information about most of the Active Network Physical devices. This means they will have records for the same devices as PNI, but often with different information / attributes. They won’t tend to store LNI-related data. However, AMS will often keep information about assets in addition to Active Network devices. This could include software licenses and more.

The Configuration Management Database (CMDB) is more of an IT Service Management (ITSM) term. Like many IT concepts, it is increasingly being used in parts of service provider networks. A CMDB is a database of Configuration Items (CIs), where CIs can be logical or physical entities. CIs may (or may not) be physical devices (PNI) or logical resource entities (LNI) and may (or may not) represent tangible financial value (assets). The main purpose of CIs is to store information about IT services that allows other ITSM processes, such as Incident, Problem and Change Management, to be performed efficiently.

Not only is there functional overlap between these systems, there’s often also terminology overlap and/or misalignments. Different vendors have different levels of functionality and support alternate use-cases, so the areas of overlap differ between organisations.

Oh, and I also promised to mention VIMs and Config Managers:

Virtual Infrastructure Managers (VIM) are responsible for managing the virtual resources made available by physical infrastructure like compute, storage and network devices. In some cases, VIMs instantiate virtual network functions (VNFs) or virtual machines (VMs) that can look almost identical to any other device stored in LNI, PNI, AMS and/or CMDB. In fact, instances of these VNFs and VMs may even appear in those systems.

Config Management (as opposed to, but also potentially overlapping with, CMDB) is all about managing the configurations of devices in the network (often active network and corporate network). Each device, such as a router, has a configuration that tells the hardware how to function, where to route traffic, which packets to prioritise, where to send management logs (to the OSS), etc. Being able to monitor and manage these configurations centrally and consistently is the purpose of Config Managers. These are mostly used by network engineers to set policies and golden configs (ie the config templates that all devices of that type must adhere to). For example, you may have hundreds/thousands of devices in your network and want to re-point all management traffic to a new server as part of an OSS upgrade. Rather than configuring each device separately and manually, you can use the config management tool to push config changes out to the network.
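To make that re-pointing scenario a little more concrete, here's a minimal sketch (in Python, using the open-source Netmiko library) of pushing the same change across a fleet of devices. The device names, credentials and addresses are all hypothetical, and a real golden-config rollout would add validation, rollback and change-window controls:

```python
# A minimal sketch (not a tested runbook) of pushing a golden-config change
# across many devices: re-pointing syslog to a new management server.
# Device list, credentials and IP addresses are hypothetical.
from netmiko import ConnectHandler  # pip install netmiko

OLD_SYSLOG_SERVER = "10.0.0.10"  # hypothetical server being retired
NEW_SYSLOG_SERVER = "10.0.0.50"  # hypothetical new OSS log collector

devices = [
    {"device_type": "cisco_ios", "host": f"edge-router-{i:03d}.example.net",
     "username": "netops", "password": "********"}
    for i in range(1, 4)  # in practice this could be hundreds/thousands
]

config_commands = [
    f"no logging host {OLD_SYSLOG_SERVER}",  # remove the old destination
    f"logging host {NEW_SYSLOG_SERVER}",     # point at the new one
]

for device in devices:
    with ConnectHandler(**device) as conn:
        conn.send_config_set(config_commands)  # push the change
        conn.save_config()                     # persist to startup config
    print(f"{device['host']}: syslog re-pointed")
```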

Leave us a message to describe how your organisation uses these (and other) tools.

Various forms of OSS Inventory

After reading other recent posts such as “Orders Down, Faults Up” and “How is OSS/BSS service and resource availability supposed to work?” an avid reader of the PAOSS blog posed the following brilliant question:

Do you have any thoughts on geospatial vs non geospatial network inventory systems? How often do you see physical plant mapping in a separate system from network inventory, with linkages or integrations between them, vs how often do you see physical and logical inventory being captured primarily in a geospatially oriented system?

Boy do I ever have some thoughts on this topic!! I’m sure you do too, so I’d love to hear what you think in the comments section below.

I was lucky. The first OSS/BSS that I worked on (all the way back in 2000), had both geo and non-geo (topology) views. It also had a brilliantly flexible data model that accommodated physical and logical inventory. All tightly integrated into one package. There aren’t many tools that can do all of that even today. Like I said, I was lucky to have this as a starting point!!

Like all things OSS/BSS, it starts with the personas and the key tasks they need to perform. Or from the supplier’s perspective, which customer personas they’re most actively targeting.

For example, if you have a significant Outside Plant (OSP) Network, then geo-positioning is vital. The exchanges and comms huts are easy enough to find, but pits, cable routes, easements, etc are often harder to find. It’s not uncommon for a field tech to waste time searching for a pit that’s covered in dirt, grass or snow. And knowing the exact cable route in geo view is helpful for sending field techs to the exact location of a fault (ie helping them to pinpoint the location of the bright yellow excavator that has just sliced through your inter-capital link). Geo-view is also important for OSP designers and the field workforce that builds the OSP network.

But other personas don’t care about seeing the detailed cable route. They just want to see a point-to-point topological link to represent physical connections between the ports on adjacent devices. This helps them to quickly understand the network or circuit / service view. They may also like to see an alarm overlay on the topology to quickly determine which parts of the network aren’t performing as expected. For these personas, seeing all the geo-detail just acts as visual noise that they need to subconsciously filter out to understand the topology view.

These personas also tend to want topological views of the network, not just the physical but the logical and virtual network / service overlays too.

In most cases that I can think of, the physical / OSP inventory tools show the physical devices (ports even) that the OSP network connects into. Their main focus is on the cables, joints, pits, pipes, catenaries, poles, lead-ins, patch-panels, patch-leads, splitters, etc. But showing the termination of cables onto active equipment (Inside Plant or ISP) is an important linking key between the physical and logical views.

The physical port (on the physical device) becomes the key demarcation between physical and logical worlds. The physical port connects physical cables / leads, but it also acts as the anchor point from which to create logical ports to which logical connections are made. As a result, the physical device and port tend to be shown in both physical (geo) and logical inventory tools. They also tend to be shown in both physical and logical network topology views.
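As a hypothetical illustration of that anchoring role, here's a sketch of a data model in which the physical port is the shared demarcation object that both PNI and LNI reference. All class and attribute names are invented for the example, not drawn from any particular product:

```python
# Hypothetical sketch: the physical port as the demarcation point between
# physical (PNI) and logical (LNI) inventory. All names are invented.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class PhysicalPort:
    device: str                     # eg "MEL-PE-01" (PNI device record)
    name: str                       # eg "GigabitEthernet0/0/1"
    cable_id: Optional[str] = None  # physical side: terminating OSP cable

    @property
    def key(self) -> str:
        """Linking key shared by the PNI and LNI views of this port."""
        return f"{self.device}:{self.name}"


@dataclass
class LogicalPort:
    anchor: PhysicalPort        # logical side hangs off the physical port
    vlan: Optional[int] = None
    circuit_ids: list = field(default_factory=list)


port = PhysicalPort("MEL-PE-01", "GigabitEthernet0/0/1", cable_id="CAB-00042")
logical = LogicalPort(anchor=port, vlan=210, circuit_ids=["CCT-1001"])
print(logical.anchor.key)  # MEL-PE-01:GigabitEthernet0/0/1
```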

In the case of the original OSS/BSS I worked on, it had separate visualisation tools for geo, network and circuit/service, but all underpinned by a common data model.

What’s the best way? Different personas will have different perspectives of course. I prefer for physical and logical inventories to be integrated out of the box (to allow simple cross-ref visually and in queries)…. but I also prefer for them to have different views (eg geo, topology, network, circuit/service) to suit different situations.

I also find it helpful if each of those views allows drilling down deeper into specific sections of the graph if necessary. I’d prefer not to have all of those different views overlaid onto a geo visualisation. Too much visual clutter IMHO, but others may love it that way.

Oh, and having separate LNI (Logical Network Inventory) and PNI (Physical Network Inventory) can be a tricky thing to reconcile. The LNI will almost always have programmatic interfaces (APIs) to collect data from, but will generally have to amalgamate many different sources. Meanwhile, the PNI consists of mostly passive equipment and therefore has no API to collect latest info from. I tend to use strategies at the above-mentioned demarcation point (ie physical ports) to help establish linking keys between LNI and PNI.
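As a rough sketch of that linking-key strategy, the example below joins LNI port records (collected via APIs) to PNI port records (often manually maintained) on a composite device:port key. The record shapes are invented for illustration:

```python
# Rough sketch: reconciling LNI and PNI records on a (device, port) linking
# key. Record shapes are invented; real normalisation is vendor-specific.

def linking_key(record: dict) -> str:
    # Normalise case so minor naming drift between systems doesn't break
    # the join (real rules also handle aliases like "GE0/0/1").
    return f"{record['device'].upper()}:{record['port'].upper()}"

pni_records = [{"device": "mel-pe-01", "port": "GigabitEthernet0/0/1",
                "cable": "CAB-00042"}]
lni_records = [{"device": "MEL-PE-01", "port": "gigabitethernet0/0/1",
                "circuit": "CCT-1001"},
               {"device": "MEL-PE-01", "port": "gigabitethernet0/0/2",
                "circuit": "CCT-1002"}]

pni_by_key = {linking_key(r): r for r in pni_records}
matched, orphans = [], []
for record in lni_records:
    pni = pni_by_key.get(linking_key(record))
    (matched if pni else orphans).append({**(pni or {}), **record})

print(f"matched: {len(matched)}, LNI orphans to investigate: {len(orphans)}")
```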

BTW. There’s one aspect of the question, “How often do you see physical plant mapping in a separate system from network inventory” that I haven’t fully answered. I’ll cover the question of asset management vs inventory management vs CMDB (Configuration Management Database) in more detail in an upcoming post. [Ed. See link here]

OSS sell money!

Huh? But they’re just cost centres aren’t they?
 
Nope, they sell financial outcomes – they reduce downtime, they turn on revenue, they improve productivity by coordinating the workforce, etc…
 
But they only “sell money” if they can help stakeholders clearly see the money! I mean “actually” see it, not “read between the lines” see it! (so many benefits of OSS are intangible, so we have to help make the financial benefits more obvious).
 
They don’t sell network performance metrics or orchestration plans or AI or any other tech chatter. They sell money in the form of turning on customers that pay to use comms services. They sell insurance policies (ie service reliability) that keep customers from churning.
 
Or to think of it another way, could you estimate (in a dollar amount) the consequences of not having the OSS/BSS? What would the cost to your organisation be?

Crossing the OSS chasm

Geoffrey Moore’s seminal book, “Crossing the Chasm,” described the psychological chasm between early buyers and the mainstream market.

Crossing the Chasm

Seth Godin cites Moore’s work, “Moore’s Crossing the Chasm helped marketers see that while innovation was the tool to reach the small group of early adopters and opinion leaders, it was insufficient to reach the masses. Because the masses don’t want something that’s new, they want something that works…

The lesson is simple:

– Early adopters are thrilled by the new. They seek innovation.

– Everyone else is wary of failure. They seek trust.”
 

I’d reason that almost all significant OSS buyer decisions fall into the “mainstream market” section in the diagram above. Why? Well, an organisation might have the 15% of innovators / early adopters conceptualising a new OSS project. However, sign-off of that project usually depends on a team of approvers / sponsors, and the adoption curve suggests that 85% of that team is likely to hold a mindset from beyond the chasm, outweighing the 15%.

The mainstream mindset is seeking something that works and something they can trust.

But OSS / digital transformation projects are hard to trust. They’re all complex and unique. They often fail to deliver on their promises. They’re rarely reliable or repeatable. They almost all require a leap of faith (and/or a burning platform) for the buyer’s team to proceed.

OSS sellers seek to differentiate from the 400+ other vendors (of course). How do they do this? Interestingly, mostly by pitching their innovations and uniqueness.

Do you see the gap here? The seller is pitching the left side of the chasm and the buyer cohort is on the right.

I wonder whether our infuriatingly lengthy sales cycles (often 12-18 months) could be reduced if only we could engineer our products and projects to be more mainstream, repeatable, reliable and trustworthy, whilst being less risky.

This is such a dilemma though. We desperately need to innovate, to take the industry beyond the chasm. Should we innovate by doing new stuff? Or should we do the old, important stuff in new and vastly improved ways? A bit of both??

Do we improve our products and transformations so that they can be used / performed by novices rather than designed for use by all the massive intellects that our industry seems to currently consist of?

For those starting out in OSS product, here’s a tip

“For those starting out in product, here’s a tip: Design, Defaults*, Documentation, Details and Delivery really matter in software.”
Jeetu Patel.

* Note that you can interpret “Defaults” to be Out-Of-The-Box functionality offered by the product.

Let’s break those 5 D-words down and describe why they really matter to the OSS industry, shall we?

  • Design – The power of OSS product development tends to lie with engineering, ie the developers. I have huge admiration for the very clever and very talented engineers who create amazing products for us to use, buuutttttt……. I just have one reservation – is there a single OSS company that is design-driven? A single one that’s making intuitive, effective, beautiful experiences for their users? The obvious answer is no – engineering teams hold sway over design teams in OSS. How many OSS vendors even have a dedicated design department??? See this article for more.
  • Defaults – Almost every OSS I know of has an enormous amount of “out-of-the-box” functionality baked in. You could even say that most have too much functionality. There’s functionality that might be really important for one customer but never even used by any of the vendor’s other customers. It just represents bloat for all the other customers, and potentially a distraction for their operators. I’m still bemused to see vendors trying to differentiate by adding obscure new default features rather than optimising for “must-have” functions. See this article for more. However, I must add that I’m starting to see a shift in some OSS. They’re moving away from having baked-in functionality and are moving to more data-repository-driven architectures. Interesting!!
  • Documentation – This is a really interesting factor! Some vendors make almost no documentation available until a prospect becomes a paying customer. Other vendors make their documentation available to the general public online and dedicate significant effort to maintaining their information library. The low-doc approach espoused by Agile could be argued to reduce document quality. However, it also reduces the chance of producing documentation that nobody will ever read! Personally, I believe vendors like Cisco have earnt a huge competitive advantage (in the networking space moreso than OSS) because of their training / certification (ie CCNA, etc) and self-learning (ie online documentation) offerings. See this article for more. As such, I’d tend to err on over-documenting for customer-facing collateral. And perhaps under-documenting for internal-facing collateral, unless it’s likely to be used regularly and by many.
  • Details – This is another item where there are two ends to the spectrum. That might surprise some people who would claim that attention to detail is paramount. Well, yes…. in many cases, but certainly not all on OSS projects. Let me share a story on attention to detail on a past OSS project. And another story on seeking perfection. Sometimes we just need to find the right balance, and knowing when to prioritise resilience and when to favour precision becomes an art.
  • Delivery – I have two perspectives on this D-word. Firstly, the Steve Jobs inspired quote of “Real artists ship!” In other words, to laud the skill of shipping a product that provides value to the customer rather than holding off on a not-yet-perfected solution. But the second case is probably more important. OSS projects tend to be massive and complex transformation efforts. Our OSS are rarely self-installed like office software, so they require big delivery teams. Some products are easy to deliver/deploy. Others are a *&$%#! If you’re a product developer, please get out in the trenches with your delivery teams and find ways to make their job easier and/or more repeatable.

Exactly what is an OSS’s “intuition age”?

I’m currently reading a book entitled, “Jony Ive. The genius behind Apple’s greatest products.”

I’d like to share a paragraph with you from it (and probably expect a few more in coming days):

“…Apple’s internal culture heavily favored the engineers within the product groups. The design process was engineering driven. In the early days of Frog Design, the engineers had bent over backward to help implement the design team’s ambitions, but now the power had shifted. The different engineering groups gave their products in development to Brunner’s group, who were expected to merely “skin” them.

Brunner wanted to shift the power from engineering to design. He started thinking strategically… The idea was to get ahead of the engineering groups and start to make Apple more of a design-driven company rather than a marketing or engineering one.”

That’s an unbelievably insightful conclusion Robert Brunner made. If he wanted to turn Apple into a design-driven company, then he’d have to prepare design concepts that looked further into the future than where the engineers were up to. Products like the iPod and iPad are testimony that Brunner’s strategy worked.

We face the same situation in OSS today. The power of product development tends to lie with engineering, ie the developers. I have huge admiration for the very clever and very talented engineers who create amazing products for us to use, buuutttttt…….

I just have one reservation – is there a single OSS company that is design-driven? A single one that’s making intuitive, effective, beautiful experiences for their users? Of course engineering holds power over design in OSS – how many OSS vendors even have a dedicated design department???

Let me give a comparison (albeit a slightly unfair one). Both of my children were reasonably adept at navigating their way around our iPad (for multiple use cases) by the age of three. What would the equivalent “intuition age” be for navigating our OSS?

If you’re a product manager, have you ever tried it? Have you ever considered benchmarking it (or an equivalent usability metric) and seeing what you could do to improve it for your OSS products?

“The Blue Book OSS/BSS Vendor Directory” from Passionate About OSS has officially launched

We’re excited to announce that “The Blue Book OSS/BSS Vendor Directory” has officially gone live here at https://passionateaboutoss.com/directory

It provides a comprehensive directory of over 400 suppliers that produce OSS, BSS and/or related network management tools. Company details, product details and functionality classifications are included.

The Blue Book OSS / BSS Vendor Directory

Every network operator has a unique set of needs from their operational software – software that includes OSS (Operational Support Systems), BSS (Business Support Systems), NMS (Network Management Systems) and the many other related tools.

To service those many and varied needs, a large number of different products have been created by some very clever developers. But it’s a highly fragmented market. There are literally hundreds of product options out there and they all have different capabilities.

If you’re a typical buyer, how many of those products are you familiar with? Five? Ten? Fifty? How do you know whether the best-fit product or supplier is within the list you already know? Perhaps the best-fit is actually amongst the hundreds of other products and suppliers you’re not familiar with yet. How much time do you have to research each one and distill down to a short-list of possible candidates to service your specific needs? Where do you start? Lots of web searches? There has to be an easier way.

What if you’re a seller? These products tend to have lengthy life-cycles once they’ve been installed so it might be years before a prospect actually enters the buying phase. Yet there are so many prospects out there at different phases of their buying windows. There are bound to be some live ones at any time that suit your capabilities. The challenge for you as a supplier is how to make those prospects aware of you. You don’t have the time to establish trusted relationships with hundreds, perhaps even thousands, of buyers across the globe (or maybe just within your region/s). Wouldn’t you love to be presented with qualified prospects who are in (or nearing) their buying window?

Well we at Passionate About OSS have created The Blue Book OSS/BSS Vendor Directory to simplify the task of bringing buyers and sellers together. With over 400 suppliers listed (and climbing), we provide a single, comprehensive repository for searching, matching and connecting. The tools allow you to do it yourself, or we can help you using the approaches we’ve developed, used and refined over the years.

Now just click on “Directory” to start your journey of searching, matching and connecting (and updating your listing if you’re a supplier).

A lighter-touch OSS procurement approach (part 3)

We’ve spoken at length about TM Forum’s, “Time to kill the RFP? Reinventing IT procurement for the 2020s,” report so far this week. We’ve also spoken about the feeling that the OSS/BSS RFP (Request For Proposal) still has relevance in some situations… as long as it’s more of a lighter-touch than most. We’ve spoken about a more pragmatic approach that aims to find best available fit (for key objectives through stages of filtering) rather than perfect fit (for all requirements through detailed analyses). And I should note that “best available fit” includes measurement against these three contrarian procurement KPIs ahead of the traditional ones.

Yesterday’s post discussed how we get to a short list with minimal involvement of buyers and sellers, with the promise that we’d discuss the detailed analysis stage today.

It’s where we do use an RFP, but with thought given to the many pain-points cited so brilliantly by Mark Newman and team in the abovementioned TM Forum report.

The RFP provides the mechanism to firm up pricing and architecture, but is also closely tied to a PoC (Proof of Concept) demonstration. The RFP helps to prioritise the order in which PoCs are performed. PoCs tend to be very time consuming for buyer and seller. So if there’s a clear leader from the paper studies so far, then they will demonstrate first.

If there’s not a clear difference, or if the prime candidate’s demonstration identified significant gaps, then additional PoCs are run.

And to ensure the PoCs are run against the objectives that matter most, we use scenarios that were prioritised during part 1 of this series.

Next steps are to form the more detailed designs, commercials / contracts and ratify that the business case still holds up.

In yesterday’s post, I also promised to share our “starting-point” procurement methodology. I say starting point because each buyer situation is different and we tend to customise it to each buyer’s needs. It’s useful for starting discussions.

The overall methodology diagram is shown below:

PAOSS vendor selection process

A few key notes here:

  1. The process looks much heavier than it really is… if you use traditional procurement processes as an indicator
  2. We have existing templates for all the activities marked in yellow
  3. The activity marked in blue partially represents the project we’re getting really excited to introduce to you tomorrow

 

A lighter-touch OSS procurement approach (part 2)

Yesterday’s post described the approach to get from 400+ possible OSS/BSS suppliers/products down to a more manageable list without:

  1. Having to get into significant discussions with vendors (yet)
  2. Gathering all your stakeholders together to prepare a detailed list of requirements

We’ll call this “the long list,” which might consist of 5-20 suppliers. We use this evaluation technique (which we’ll share more about on Monday) to ensure we’ve looked at the broad market of suppliers rather than just the few the buyer already knows.

The next step we follow helps us to get to a much smaller list, which we’ll call “the short list.”

For this, we do need to contact vendors (the long list) and we do need to prepare a list of requirements to add to the objectives and key workflows we’ve previously identified. The requirements won’t need to be detailed, but will still probably number into the 100s – some from our pick-list, others customised to each client’s needs.

Then we engage in what we refer to as an EOI (Expression of Interest) phase. Our EOIs are not just a generic market capability analysis like many buyers conduct. Ours seek indicative vendor compliance (to objectives and requirements) and indicative pricing based on the dimensions we supply. We’ve refined this model over the years to make it quite quick and (relatively) easy for vendors to respond to.

Using compliance to measure suitability and indicative pricing to plug in to our long-term TCO (Total Cost of Ownership) model, the long list usually becomes a clear short list of 1-5 very quickly.
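For illustration only, here's a toy sketch of how indicative compliance scores (weighted by objective importance) and indicative pricing might combine to surface a short list. Every vendor name, weight, score and dollar figure below is invented:

```python
# Toy sketch: combining weighted compliance with indicative 5-year TCO to
# rank a long list. All names, weights and figures are invented.

objective_weights = {"service activation": 5, "fault management": 4,
                     "reporting": 2}

long_list = {
    "Vendor A": {"compliance": {"service activation": 1.0,
                                "fault management": 0.8, "reporting": 0.5},
                 "tco_5yr": 2_400_000},
    "Vendor B": {"compliance": {"service activation": 0.7,
                                "fault management": 1.0, "reporting": 1.0},
                 "tco_5yr": 1_800_000},
}

def weighted_fit(compliance: dict) -> float:
    """Compliance score per objective, weighted by objective importance."""
    total_weight = sum(objective_weights.values())
    return sum(w * compliance.get(obj, 0.0)
               for obj, w in objective_weights.items()) / total_weight

for vendor, data in sorted(long_list.items(),
                           key=lambda kv: -weighted_fit(kv[1]["compliance"])):
    fit = weighted_fit(data["compliance"])
    print(f"{vendor}: fit={fit:.2f}, indicative 5yr TCO=${data['tco_5yr']:,}")
# A clear short list usually emerges once fit and TCO are viewed together.
```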

Now we can get into detailed discussions with a very small number of best-fit suppliers without having wasted much time of buyer or seller. 

More on the detailed discussions tomorrow!

A lighter-touch OSS procurement approach (part 1)

You may have noticed that we’ve run a series of posts about OSS/BSS procurement, and about the RFP process by association.

One of the first steps in the traditional procurement process is preparing a strategy and detailed set of requirements.

As TM Forum’s, “Time to kill the RFP? Reinventing IT procurement for the 2020s,” report describes:
“Before an RFP can be issued, the CSP’s IT or network team must produce a document detailing the strategy for implementing a technology or delivering a service, which is a lengthy process because of the number of stakeholders involved and the need to describe requirements in a way that satisfies them all.”

The problem with most requirements documents, the ones I’ve seen at least, is that they tend to get down into a deep, deep level of detail. And when it’s down in that level of detail, contrasting opinions from different stakeholders can make it really difficult to reach agreement. Have you ever been in a room with many high-value (and high cost) stakeholders spending days debating the semantics (and wording) of requirements? Every stakeholder group needs a say and needs to be heard.

The theory is that you need a great level of detail to evaluate supplier offerings for best-fit. Well, maybe, but not in the initial stages.

First things first – I seek to find out what’s really important for the organisation. That rarely comes from a detailed requirements spreadsheet, but by determining the things that are done most often and/or add the most value to the buyer’s organisation. I use persona mapping, long-tail and perhaps whale-curve mapping approaches to determine this.

Persona mapping means identifying all the groups within the buyer’s organisation that need to interact with the OSS/BSS (current and proposed). Then sitting with each group to determine what they need to achieve, who they need to interact with and what their workflows look like. That also gives a chance for all groups to be heard.

From this, we can collaboratively determine some high-level evaluation criteria, maybe only 15-20 to start with. You’d be surprised at how quickly these 15-20 criteria can help with initial supplier filtering.
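As a hypothetical illustration of the long-tail / whale-curve mapping mentioned above, the sketch below ranks workflows by the value they contribute (volume × value per transaction) and shows how quickly cumulative value concentrates in the head of the curve. All workflow names and figures are invented:

```python
# Hypothetical sketch of long-tail / whale-curve mapping: rank workflows by
# contribution and watch cumulative value concentrate. Figures are invented.

workflows = {  # workflow -> (monthly volume, value per transaction in $)
    "activate broadband service": (12_000, 45),
    "resolve network fault":      (3_000, 80),
    "modify customer plan":       (5_000, 20),
    "decommission service":       (800, 15),
    "ad-hoc engineering report":  (50, 10),
}

ranked = sorted(workflows.items(),
                key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
total_value = sum(vol * val for vol, val in workflows.values())

cumulative = 0
for name, (vol, val) in ranked:
    cumulative += vol * val
    print(f"{name:30s} ${vol * val:>9,} "
          f"({cumulative / total_value:6.1%} cumulative)")
# The head of this curve suggests which workflows the initial 15-20
# evaluation criteria should be built around.
```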

Armed with the initial 15-20 evaluation criteria and the project we’re getting excited to launch on Monday, we can get to a relevant list of possible suppliers quite quickly. It allows us to do a broad market search to compile a list of suppliers, not just from the 5-10 suppliers the buyer already knows about, but from the 400+ suppliers/products available on the market. And we don’t even have to ask the suppliers to fill out any lengthy requirement response spreadsheets / forms yet.

We’ll continue the discussion over the next two days. We’ll also share our procurement methodology pack on Sunday.

Do I support the death penalty (of OSS RFPs)? Hmmm….

As per yesterday’s post, I’ll continue to reference a TM Forum report called, “Time to kill the RFP? Reinventing IT procurement for the 2020s” today. Mark Newman and the team have captured and discussed so many layers to the OSS/BSS procurement process.

There’s no doubt the current stereotypical RFP approach to procurement is broken. It needs to be done differently. That’s why we have been doing it differently with customers for years now (another hint regarding a project we’re getting excited to announce this Monday).

The TM Forum report is really powerful and well worth a read. There are a few additional (and somewhat random) thoughts that go through my head when considering the death of the RFP:

  1. The TM Forum report is primarily coming at the problem from the perspective of a carrier that is constantly steering the development of its own systems, as implied through this quote, “The fundamental problem with the RFP process is that in a fast-paced technology environment, where cloud and software are fast becoming preferred options, it is difficult for CSPs to describe in lengthy, written documents what they want and need. The processes are simply too complex and cumbersome to support modern, Agile methods of working.”
  2. That perspective is particularly applicable for some buyers, ones that have committed to having significant developer resources available to build exactly what they want. That could be in the form of in-house developers, contract developers, long-term panel arrangements with suppliers or similar
  3. Others, perhaps such as utilities, enterprise and some telcos want to focus on their core business and delegate OSS/BSS configuration and customisation to third-parties.
  4. Some of those rely on COTS (commercial off the shelf) software to leverage the benefits of innovation, cost and development time that have been spread across multiple customers. Their budgets simply don’t allow for custom-built solutions
  5. COTS products, be they on-prem through to cloud service models, are almost never going to be a perfect fit for a buyer’s needs. They’re designed to generically suit many buyers, so a certain amount of bloat becomes part of the trade-off
  6. In recent weeks, I’ve seen two entirely in-house developed OSS/BSS. They fit their organisations like a glove and there’s almost no bloat at all. In fact it would be almost impossible for a COTS solution to replace what they’ve built. In both cases it’s taken a decade of ongoing development to get to that position. Most buyers don’t have that amount of time to get it right though unfortunately
  7. Commercial realities imply a pragmatic approach is taken to procurement – which product/s provide default capability that best aligns with the buyer’s most important objectives.
  8. RFPs often get bogged down at the far right-hand side of the long-tail of requirements (where impact tends to be negligible), or in trying to completely re-sculpt the solution to be the perfect fit (that it’s unlikely to ever be)
  9. In my experience at least, the best-fit (not perfect fit) solution, or very short list of solutions, usually becomes apparent fairly quickly [we’ll share more about how we do that tomorrow]. It’s then just a case of testing objectives, assumptions and gaps (eg via a proof-of-concept) and getting to a mutually beneficial commercial agreement
  10. As one respondent in the TM Forum report put it, “The RFP glorifies the process, not the outcome.” A healthy dose of outcome-driven pragmatism helps to reduce glorification of the RFP process
  11. Also in my experience at least, scope of works quotes from vendors (which RFPs tend to lead to) tend to be written in a waterfall style that doesn’t fit into Agile frameworks very effectively. That can be partially overcome by slicing and dicing the SoW in ways that are more conducive to Agile delivery
  12. With so much fragmentation in the OSS/BSS market already (there are over 400 in our vendor directory), that means the talent pool of creators is thinly spread. Many of those 400 have duplicated functionality, which isn’t great for the industry’s overall progress. Custom development for each different buyer spreads the talent pool even further… unless buyers can get economies of development scale through shared platforms like ONAP

In summary, I love the concept of avoiding massive procurement events. I still can’t help but think the RFP still fits in there somewhere for many buyers… as long as we ensure we glorify the outcomes and de-emphasise the process. It’s just that we use RFPs like a primitive instrument and inflict blunt-force trauma, rather than using surgical precision.

Lobbying hard for the death penalty for OSS RFPs

Earlier this year, the TM Forum published a really insightful report called, “Time to kill the RFP? Reinventing IT procurement for the 2020s.” There are so many layers to the OSS/BSS procurement discussion and Mark Newman and team have done a fantastic job of capturing them. We’ll expand on a few of those layers in a series of posts this week.

For example, section 2 articulates the typical RFI / RFP / RFQ approach. It’s clear to see why the typical approach is flawed. Yesterday’s post pondered whether procurement events are flawed from the initial KPIs that are set by buyers. Today we’ll take a look at the process that follows.

Two quotes from the TM Forum report frame some of the challenges with RFPs from buyer and seller viewpoints respectively:
QUOTE 1 (Buyer-side) – “CSPs normally distribute RFPs to a group of three to eight suppliers. These are most likely existing suppliers, previous vendors or companies the CSP is aware of through its own technology scouting. Suppliers are likely to include systems integrators who rely on other vendors to fulfill elements of the contract, and CSPs tend to invite bidders offering a range of options.
For example, they may invite a supplier that is likely to offer a good price, one that is a ‘safe’, low-risk option, and the incumbent supplier, which in many cases the CSP is looking to replace.
The document itself is likely to be several hundred pages long, a large portion of it comprising details of technology requirements, with suppliers asked to specify whether they comply with each requirement.”
The question I’d ask about this process is how does the CSP choose 3-8 out of the 400+ vendors that supply the OSS/BSS market? Does their “own technology scouting” adequately discount the hundreds of others that could potentially be best-fit for their needs?

QUOTE 2 (Seller-side) – “We were holed up in our hotel for a month working feverishly on different aspects of the bid. We had 15 people there in total, and we were asked to come in for meetings with five different teams. The meetings go on and on, and you really have no idea when they’re going to finish.”
Let’s do the sums on this situation. 15 people x 25 days x $1500 per day (a round figure that includes accommodation, meals, etc) = $562,500. That’s over half a million dollars just for the seller-side of the post-RFP evaluation phase. Now let’s say there were 4 sellers going through this – that’s around $2.25 million of seller-side effort for a single evaluation. [Just a small aside here – reading between the lines, do you suspect the buyer was taking the seller on a journey into the minutiae or focusing on what will move the needle for them? Re-read that through the lens of yesterday’s contrasting KPI perspectives]

You can see exactly why Mark has proposed that it’s, “Time to kill the RFP,” at least in its traditional form. These two quotes lobby hard for the death penalty. More on that tomorrow!

Also note that another hint was contained above in the lead-up to a project launch on Monday that we’re really excited about.

OSS/BSS procurement is flawed from the outset

You may’ve noticed that things have been a little quiet on this blog in recent weeks. We’ve been working on a big new project that we’ll be launching here on PAOSS on Monday. We can’t reveal what this project is just yet, but we can let you in on a little hint. It aims to help overcome one of the biggest problem areas faced by those in the comms network space.

Further clues will be revealed in this week’s series of posts.

The industry we work in is worth tens of billions of dollars annually. We rely on that investment to fund the OSS/BSS projects (and ops/maintenance tasks) that keep many thousands of us busy. Obviously those funds get distributed by project sponsors in the buyers’ organisations. For many of the big projects, sponsors are obliged to involve the organisation’s procurement team.

That’s a fairly obvious path. But I often wonder whether the next step on that path is full of contradictions and flaws.

Do you agree with me that the 3 KPIs sponsors expect from their procurement teams are:

  1. Negotiate the lowest price
  2. Eliminate as many risks as possible
  3. Create a contract to manage the project by

If procurement achieves these 3 things, sponsors will generally be delighted. High-fives for the buyers that screw the vendor prices right down. Seems pretty obvious right? So where’s the contradiction? Well, let’s look at these same 3 KPIs from a different perspective – a more seller-centric perspective:

  1. I want to win the project, so I’ll set a really low price, perhaps even loss-leader. However, our company can’t survive if our projects lose money, so I’ll be actively generating variations throughout the project
  2. Every project of this complexity has inherent risks, so if my buyer is “eliminating” risks, they’re actually just pushing risks onto me. So I’ll use any mechanisms I can to push risks back on my buyer to even the balance again
  3. We all know that complex projects throw up unexpected situations that contracts can’t predict (except with catch-all statements that aim to push all risk onto sellers). We also both know that if we manage the project by contractual clauses and interpretations, then we’re already doomed to fail (or are already failing by the time we start to manage by contract clauses)

My 3 contrarian KPIs to request from procurement are:

  1. Build relationships / trust – build a framework and environment that facilitates a mutually beneficial, long-lasting buyer/seller relationship (ie procurement gets judged on partnership length ahead of cost reduction)
  2. Develop a team – build a framework and environment that allows the buyer-seller collective to overcome risks and issues (ie mutual risk mitigation rather than independent risk deflection)
  3. Establish clear and shared objectives – ensure both parties are completely clear on how the project will make the buyer’s organisation successful. Then both constantly evolve to deliver benefits that outweigh costs (ie focus on the objectives rather than clauses – don’t sweat the small stuff (or purely technical stuff))

Yes, I know they’re idealistic and probably unrealistic. Just saying that the current KPI model tends to introduce flaws from the outset.

OSS that make men feel more masculine and in command

“From watching ESPN, I’d learned about the power of information bombardment. ESPN strafes its viewers with an almost hysterical amount of data and details. Scrolling boxes. Panels. Bars. Graphics. Multi-angle camera perspectives. When exposed to a surfeit of data, men tend to feel more masculine and in command. Do most men bother to decipher these boxes, panels, bars and graphics? No – but that’s not really the point.”
Martin Lindstrom, in his book, “Small Data.”

I’ve just finished reading Small Data, a fascinating book that espouses forensic analysis of the lives of users (ie small data) rather than using big data methods to identify market opportunities. I like the idea of applying both approaches to our OSS products. After all, we need to make them more intuitive, endearing and ultimately, effective.

The quote above struck a chord in particular. Our OSS GUIs (user interfaces) can tend towards the ESPN model can’t they? The following paraphrasing doesn’t seem completely at odds with most of the OSS that we interact with – “[the OSS] strafes its viewers with an almost hysterical amount of data and details.”

And if what Lindstrom says is an accurate psychological analysis, does it mean:

  1. The OSS GUIs we’re designing help make their developers “feel more masculine and in command” or
  2. Our OSS operators “feel more masculine and in command” or
  3. Both

Intriguingly, does the feeling of being more masculine and in command actually help or hinder their effectiveness?

I find it fascinating that:

  1. Our OSS/BSS form a multi billion dollar industry
  2. Our OSS/BSS are the beating heart of the telecoms industry, being wholly responsible for operationalising the network assets that so much capital is invested in
  3. So little effort is invested in making the human-to-OSS interfaces far more effective than they are today
  4. I keep hearing operators bemoan the complexities and challenges of wrangling their OSS, yet only hear “more functionality” being mentioned by vendors, never “better usability”

Maybe the last point comes from me being something of a rarity. Almost every one of the thousands of people I know in OSS works for either the vendor/supplier or the customer/operator. Conversely, I’ve represented both sides of the fence and often even sit in the middle, acting as a conduit between buyers and sellers. Or am I just being a bit precious? Do you also spot the incongruence of point 4 on a regular basis?

Whether you’re buy-side or sell-side, would you love to make your OSS more effective? Let us know and we can discuss some of the optimisation techniques that might work for you.

Three OSS project responsibility sliders

Last week we shared an article that talked about the different expectations from suppliers and clients when undertaking an OSS implementation project.

The diagram below attempts to demonstrate the concept visually, in the form of three important sliders.

OSS Responsibility Sliders

When it comes to the technical delivery, it makes sense that most of the responsibility falls upon the supplier. They obviously have the greater know-how from building and implementing their own products. However, and despite what some clients expect, you’ll notice that the slider isn’t all the way to the left. The client can’t just “throw the hand grenade over the fence” and expect the supplier to build the solution in isolation. The client needs to be involved to ensure the solution is configured to their unique requirements. This covers factors such as network types, service types, process models, naming conventions, personas supported, integrations, approvals, etc.

Unfortunately, organisational change is an afterthought far too often on OSS projects. Not only that, but the client often expects the supplier to handle that too. They expect this slider to fall far to the left as well. In my opinion, this is completely unrealistic. In most cases, the supplier simply doesn’t have the knowledge of, or influence over, the individuals within the client’s organisation. That’s why the middle slider falls mostly towards the right-hand (client) side. Not all the way though, because the supplier will have suggestions / input / training based on learnings from past implementations. BTW. The link above also describes an important perspective shift to help the org change aspect of OSS transformation.

And lastly, the success of a project relies on strength of relationship throughout, but also far beyond, the initial implementation. You’d expect that most OSS implementations will have a useful life of many years. Due to the complexity of OSS transformations, clients want to stay with the same supplier for long periods because they don’t want to endure a change-out. Like any relationships, trust plays an important role. The relationship clearly has to be beneficial to both parties. Unfortunately, three factors often doom OSS relationships from the outset.

Firstly, the sliders above show my unbiased perspective of the weight of responsibility on a generic OSS project. If each party has a vastly different expectation of slider positioning, then the project can be off to a difficult (but all-too-common) start.

Secondly, the nature of vendor selection process can also gnaw away at trust quite quickly. The client wants an as-low-as-possible cost in the contract (obviously). The supplier wants to win the bid, so they keep costs as low as possible, often hoping to make up the difference through the inevitable variations that happen on these complex projects.

And thirdly, the complexity of these projects means challenges almost always arise, which can result in cynicism being hurled across the fence by both parties.

You may be wondering why the third slider isn’t perfectly centred between both. You may claim that significant responsibility for humility, fairness and forgiveness lies with each participant to ensure a long-lasting, trusted relationship. I’d agree with you on that, but I’d also argue that the supplier carries slightly more responsibility as they (usually) hold a slight balance in power. They know the client doesn’t want to endure another OSS change-out project any time soon, so the client generally has more to lose from a relationship breakdown. Unfortunately, I’ve seen this leveraged by vendors too many times.

Do you agree/disagree with these observations? I’d love to hear your thoughts.

Oh, and if you ever need an independent third party to help set the right balance of expectations across these sliders on your project, you’re welcome to call upon Passionate About OSS to assist.

OSS user heat-mapping

Over the many OSS implementation projects I’ve worked on, UI/UX (user interface / user experience) has been an afterthought (if even thought about at all). I know there are OSS UI/UX experts out there (I’ve met a handful), but none have ever been assigned to the projects I’ve worked on unfortunately. UI has always just been the domain of the developer. If the functionality worked (even if in a highly convoluted way), then the developer would move on to the next epic. The UI was almost never re-visited unless the functionality proved to be almost unusable.

So the question becomes, how do we observe, measure and trial UI/UX effectiveness?

Have you ever tried running a heat-mapping analysis over your OSS to show actual user behaviour?

Given that almost all OSS are now browser-based, there are plenty of heat-map tools available. They give results that might look something like this (but can also provide more granularity of analysis too):
Heat-map
Image source: https://www.tatvic.com/data-analytics-solutions/heat-map-integration/

Whereas these tools are generally used by web developers to improve retention and conversion rates on their pages (ie customers buying, clicking through on a banner ad, calls to action, etc), we’ll use them in a different way within our OSS. We’ll instead be looking for efficiency of movement, an indicator of whether the design of our page is intuitive or not. Are the operators of your OSS clicking in the right places (menus, titles, buttons, links, etc) to achieve certain outcomes?

I’d be particularly interested in comparing heat-maps of new operators (eg if you’ve installed a sand-pit environment at a client site for the first time and let the client’s operators loose) versus experienced teams. Depending on the OSS application you’re analysing, you may even be interested in observing different behaviours across different devices (eg desktops, phones, tablets).
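If you wanted to quantify that novice-versus-expert comparison rather than just eyeballing two heat-maps, a rough sketch might look like this. The click-log format and UI element names are invented for the example:

```python
# Rough sketch: comparing where novice vs experienced operators click.
# The click-log format and element names are invented for illustration.
from collections import Counter, defaultdict

click_log = [  # (cohort, ui_element) pairs exported from a heat-map tool
    ("novice", "main-menu"), ("novice", "help-link"), ("novice", "main-menu"),
    ("novice", "search-box"), ("expert", "search-box"),
    ("expert", "alarm-panel"), ("expert", "search-box"),
    ("expert", "topology-tab"),
]

clicks_by_cohort = defaultdict(Counter)
for cohort, element in click_log:
    clicks_by_cohort[cohort][element] += 1

for cohort, counts in clicks_by_cohort.items():
    total = sum(counts.values())
    print(cohort)
    for element, n in counts.most_common():
        print(f"  {element:14s} {n / total:5.1%}")
# Large divergence (eg novices living in "help-link" or "main-menu") hints
# that the intended workflow isn't intuitive for newcomers.
```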

There’s generally a LOT of functionality available within each OSS. Are we optimising the experience for the functionality that matters most? For web-page designers, that might mean ensuring all the most important information is “above-the-fold” (ie can be seen on the page without having to scroll down – noting that the “fold” will be different across different devices/resolutions). If they want a user to click on the “buy now” button, then they *may* want that to appear above the fold, ensuring the prospective buyer doesn’t have to go searching for it.

In the case of an OSS, you don’t want to hide the most important functionality under layers of menus. And don’t forget that different personas (eg designers, admins, execs, help-desk, NOC, etc) are likely to have different definitions of “important functionality.” You may want to optimise important UI elements for each different persona (if your OSS allows for that level of configurability).

I’m not endorsing Smartlook, but if you’d like to read more about heat-mapping techniques, you can start here.

This OSS is different to what I’m used to

OSS implementations / transformations are always challenging. Stakeholders seem to easily get their heads around the fact that there will be technical challenges (even if they / we can’t always get their heads around the actual changes initially).

When a supplier is charged with doing an OSS implementation, the client (perhaps rightly) expects the supplier to lead the technical implementation and guide the client through any challenges. It’s the, “Over to you!” client mentality at times.

However, it’s the change management challenges that are often overlooked and/or underestimated (by client and supplier alike). It’s far less realistic for a client to delegate these activities and challenges to the supplier. The supplier simply doesn’t have the reach or influence within the client’s organisation (unless they’re long-term trusted partners). Just doing a 2 week training course at the end of the implementation rarely works.

Now, if you do represent the client, change management starts all the way back at the start of the project – from the time we start to gather current and desired future state, including process and persona mappings.

At that time we can put ourselves in the shoes of each person impacted by OSS change and consider, “If your current normal is exactly what you need, then different isn’t worth exploring” (a Seth Godin quote).

How many times have you heard about operators bypassing their sophisticated new OSS and reverting back to their old spreadsheets (thus keeping an offline store of data that would be valuable to be stored in the OSS)?

Interestingly though, if you approach those same people before the OSS implementation and ask them whether their as-is spreadsheet model gives them exactly what they need, you will undoubtedly get some great insights (either yes it is and here’s why…, or no it’s not because…).

You have a stronger position of influence with these operators if you involve them and listen pre-implementation than if you enforce change afterwards.

To again quote Seth, they’re not always, “hesitant about this new idea because it’s a risky, problematic, defective idea… [but] because it’s simply different than [they’re] used to.”

A modern twist on OSS architecture

I was speaking with a friend today about an old OSS assurance product that is undergoing a refresh and investment after years of stagnation.

He indicated that it was to come with about 20 out of the box adaptors for data collection. I found that interesting because it was replacing a product that probably had in excess of 100 adaptors. Seemed like a major backward step… until my friend pointed out the types of adaptor in this new product iteration – Splunk, AWS, etc.

Of course!!

Our OSS no longer collect data directly from the network. We have web-scaled processes sucking everything out of the network / EMS, aggregating it and transforming / indexing / storing it (ETL – Extract, Transform, Load). Then, like any other IT application, our OSS just collect what we need from a data set that has already been consolidated and homogenised.
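Here's a simplified sketch of that pattern: collectors normalise events from many sources into one store, and the OSS (like any other consumer) just queries the store. The source names, record shapes and schema are all hypothetical:

```python
# Simplified sketch of the ETL-then-query pattern: many EMS / cloud sources
# are normalised into one store; the OSS queries the store, not the network.
# Source names, record shapes and schema are hypothetical.
import sqlite3

store = sqlite3.connect(":memory:")
store.execute("CREATE TABLE events (source TEXT, device TEXT, "
              "severity TEXT, message TEXT)")

def load(source: str, raw_events: list) -> None:
    """Transform each source's native shape into the common schema, then load."""
    rows = [(source,
             event.get("node") or event.get("device"),  # field-name drift
             event.get("sev", "info").lower(),
             event["msg"]) for event in raw_events]
    store.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", rows)

load("ems-vendor-x", [{"node": "PE-01", "sev": "MAJOR", "msg": "link down"}])
load("cloud-monitoring", [{"device": "vnf-fw-02", "msg": "cpu high"}])

# The OSS (or any BI/analytics consumer) queries one homogenised data set:
for row in store.execute(
        "SELECT device, message FROM events WHERE severity = 'major'"):
    print(row)  # ('PE-01', 'link down')
```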

I don’t know why I’d never thought about it like this before (ie building an architecture that doesn’t even consider connecting to the multitude of network / device / EMS types). In doing so, we lose the direct connection to the source, but we also reduce our integration tax load (directly to the OSS at least).

Another side benefit of this approach is that the data store then serves data to the various consumers, thus taking load off our OSS servers which have traditionally fed data to many consumers.

This is perfect for organisations that perform a lot of business intelligence (BI) and analytics. They can burn cycles on the data store rather than slowing down real-time operational transactions.

Making a basic assessment of OSS value

“…as technology gets more complicated, it becomes more difficult for buyers to acquire the skills needed to make even a basic assessment of value. Without such an assessment, it’s hard to get a project going, and in particular hard to get one going the right way.”
Tom Nolle.

Have you noticed that over the last few years, OSS choice has proliferated, making project assessment more challenging? Previously, the COTS (Commercial Off-the-Shelf) product solution dominated. That was already a challenge because there are hundreds to choose from (there are around 400 on our vendors page alone). But that’s just the tip of the iceberg.

We now also have choices to make across factors such as:

  • Building OSS tools with open-source projects
  • An increasing amount of in-house development (as opposed to COTS implementations by the product’s vendors)
  • Smaller niche products that need additional integration
  • An increase in the number of “standards” that are seeking to solve traditional OSS/BSS problems (eg ONAP, ETSI’s ZSM, TM Forum’s ODA, etc, etc)
  • Revolutions from the IT world such as cloud, containerisation, virtualisation, etc

As Tom indicates in the quote above, the diversity of skills required to make these decisions is broadening. Broadening to the point where you generally need a large team to have suitable skills coverage to make even a basic assessment of value.

At Passionate About OSS, we’re seeking to address this in the following ways:

  • We have two development projects underway (more news to come)
    • One to simplify the vendor / product selection process
    • One to assist with up-skilling on open-source and IT tools to build modern OSS
  • In addition to existing pages / blogs, we’re assembling more content about “standards” evolution, which should appear on this blog in coming days
  • Use our “Finding an Expert” tool to match experts to requirements
  • And of course there are the variety of consultancy services we offer ranging from strategy, roadmap, project business case and vendor selection through to resource identification and implementation. Leave us a message on our contact page if you’d like to discuss more

The OSS “out of control” conundrum

Over the years in OSS, I’ve spent a lot of my time helping companies create their OSS / BSS strategies and roadmaps. Sometimes clients come from the buy side (eg carriers, utilities, enterprise), other times clients come from the sell side (eg vendors, integrators). There’s one factor that seems to be most commonly raised by these clients, and it comes from both sides.

What is that one factor? Well, we’ll come back to what that factor is a little later, but let’s cover some background first.

OSS / BSS covers a fairly broad estate of functionality:
OSS and BSS overlaid onto the TAM

Even if only covering a simplified version of this map, very few suppliers can provide coverage of the entire estate. That implies two things:

  1. Integrations; and
  2. Relationships

If you’re from the buy-side, you need to manage both to build a full-function OSS/BSS suite. If you’re from the sell-side, you’re either forced into dealing with both (reactive) or sometimes you can choose to develop those to bring a more complete offering to market (proactive).

You will have noticed that both are double-ended. Integrations bring two applications / functions together. Relationships bring two organisations together.

This two-ended concept means there’s always a “far-side” that’s outside your control. It’s in our nature to worry about what’s outside our control. We tend to want to put controls around what we can’t control. Not only that, but it’s incumbent on us as organisation planners to put mitigation strategies in place.

Which brings us back to the one factor that is raised by clients on most occasions – substitution – how do we minimise our exposure to lock-in with an OSS product / service partner/s if our partnership deteriorates?

Well, here are some thoughts:

  1. Design your own architecture with product / partner substitution in mind, and regularly review your substitution plan because products are always evolving (see the sketch after this list)
  2. Develop multiple integrations so that you always have active equivalency. This is easier for sell-side “reactives” because their different customers will have different products to integrate to (eg an OSS vendor that is able to integrate with four different ITSM tools because they have different customers with each of those variants)
  3. Enhance your own offerings so that you no longer require the partnership, but can do it yourself
  4. Invest in your partnerships to ensure they don’t deteriorate. This is the OSS marriage analogy where ongoing mutual benefits encourage the relationship to continue.
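On point 1, one common way to keep substitution viable is to isolate each partner product behind an interface that you own, so swapping partners doesn’t ripple through your whole stack. A minimal, hypothetical sketch, with all class and method names invented for illustration:

```python
# Minimal, hypothetical sketch of designing for substitution: the OSS codes
# against an interface it owns; each partner product gets its own adapter.
# All class and method names are invented for illustration.
from abc import ABC, abstractmethod


class TicketingAdapter(ABC):
    """The contract our OSS owns, whichever ITSM partner sits behind it."""

    @abstractmethod
    def raise_incident(self, summary: str, severity: str) -> str:
        """Return the partner system's ticket reference."""


class VendorXTicketing(TicketingAdapter):
    def raise_incident(self, summary: str, severity: str) -> str:
        # A real adapter would call Vendor X's API here.
        return "VX-0001"


class VendorYTicketing(TicketingAdapter):
    def raise_incident(self, summary: str, severity: str) -> str:
        # A real adapter would call Vendor Y's API here.
        return "VY-0001"


def on_network_fault(ticketing: TicketingAdapter) -> None:
    """OSS workflow code depends only on the interface, not the partner."""
    ref = ticketing.raise_incident("link down on PE-01", severity="major")
    print(f"incident raised: {ref}")


on_network_fault(VendorXTicketing())  # substituting = swapping one adapter
```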