Channelling “Intel Inside” to improve the brand recognition of your OSS

Through its “Intel Inside” marketing, Intel taught consumers to look for the Intel Inside logo as an assurance of quality. Consumers eventually came to see “Intel Inside” as a standard and began asking: “Why doesn’t your product use Intel processors?” This standard became so important that today it is one of the world’s largest co-operative marketing programs, where hundreds of computer companies license the use of the Intel Inside® logos.
The ingredient must be highly differentiated in order to add value to the overall brand. This means the ingredient should have its own name, logo and purpose, because the added value comes from the extra identity. First and foremost, brands should not create confusion in the market. The main brand should be well established in the market before employing an ingredient-branding strategy, because the consumer needs to first understand the core brand and then find additional value in the ingredient.
From an article about Ingredient Branding on

When you think about the chips inside the multitude of electronic devices you own and use, how many brands could you identify? How many chipsets are differentiated enough to sway your decision on the device itself?

According to FastCompany, “If one thing distinguishes Intel’s innovative thinking, it is their 1990s strategy of branding a semiconductor chip as a valuable feature that consumers would look for when they purchased a computer. The campaign’s two decades of ubiquity make us forget this now, but at the time it was an incredibly novel approach to marketing. People bought computers because of the software, the specs, or a friend’s recommendation. Who cared about who made some tiny chip inside the box that you couldn’t even see?
But with the proliferation of PCs, and with consumers at a loss in trying to figure out what made one better than the other, Intel saw an opportunity, and so it took a major risk. Intel’s leadership was convinced this was the way to grow market share…”

The “Intel Inside” initiative has arguably been the most successful ingredient-marketing exercise in history. Intel made the invisible visible.

To many users/customers, the achievements of our OSS are invisible, with other components getting the credit for outcomes that are heavily underpinned by our OSS. The challenge for us, just like Intel, is to differentiate and improve the value of our brand. We get the flak when things go wrong (often through no fault of our own), but we rarely get the credit for the things that work – for the services activated, the problems resolved, the equipment ordered, the new services enabled, the insights gleaned, etc.

The question for us all is how to make those accomplishments, that differentiation, the brand value visible. [Note that I’m referring to the brands of internal OSS suites here, not just external vendor products]

If we think about making the invisible visible to key execs / sponsors / stakeholders, we need to think about how they tend to interact with OSS. I would surmise that in most cases they interact via reports. Assuming you’re confident that you’ve nailed the content of your reports, that they’re succinct and valuable to those stakeholders, is there any reason why we couldn’t put our equivalent of the “Intel Inside” badge on the reports?

But first, are you confident that the badge is a badge of honour? Will your brand instil in customers a sense of quality, assuredness and innovation?

Next step, man-machine partnerships

“Getting literate in the language of the future” posed the thought that data is the language of the future and that it is therefore incumbent on all OSS practitioners to ensure their literacy.

It also posed that data literacy provides a stepping stone to a future where machine learning is more prevalent. This got me thinking. We currently think about man-machine interfaces, where we design ways for humans and machines to get information into the system. But the next step, the future way of thinking, will be man-machine partnerships, where we design ways to leverage machines to get more out of the system.

Data literacy will be essential to making this transition from MM interface to MM partnership. Machines (eg IoT and OSS) will get data into the system. Through partnerships with humans, machines will also be instrumental in the actions / insights coming out of the system.

Inverting the iceberg to get more funding for your OSS

In the last couple of days, we’ve discussed frameworks that could allow CSPs to design disruptive business models around factors that our OSS / BSS can control. Not exactly riveting stuff for the tech-heads amongst the reader-base, but relevant for the amount of investment that gets directed into the tech projects that we all work on.

Let me get a little more specific today. We only get to work on cool OSS projects if funding is allocated to them. As we all know, OSS are the linchpin around which a service provider is built, but most executive sponsors don’t grasp the full extent of this dependence. There are a few reasons for this:

  1. OSS are perceived as not generating any revenue directly
  2. OSS projects continually disappoint (on time, cost and/or functionality)
  3. The full picture of OSS functionality is hidden (like an iceberg) and is boring to most. The service provider’s customers certainly have zero interest in the back-end systems / functionality in use
  4. Our business cases tend to be built around cost reduction, not business growth. Let’s be honest, OSS cost reductions go in the same excitement bucket as a reduction in paper clips, pens and printer paper
  5. The technologies used by OSS seem to be obsoleted even faster than the network technologies they manage
  6. They tend to deliver more negative messaging (eg network outages, fall-outs resulting in poor customer experiences, etc), whilst others get the credit for any positive messaging that OSS have facilitated
  7. Due to the costs of implementation and change, OSS are seen by many as giant cost-centre sinkholes
  8. The list goes on!!

Aaarghhh! Why do we bother working in OSS?

For all of us in the OSS industry, we understand the reasoning behind the derogatory perceptions above.

It all comes down to the messaging going to executive sponsors. Like Steve Jobs, I have just three things (to help build a compelling story for your executive sponsors):

  • Break “the sinkhole perspective” by generating revenues – via APIs, via platforms, via metrics that show OSS contribution to revenues, or anything else that shows real value (not just cost-out) to executive sponsors as well as the customers they’re trying to service
  • Invert the iceberg – show just how much capability is hidden under the surface by mapping how dependent the organisation’s positive outcomes are on its OSS (OSS as the puppet-master). All the sexy stuff (eg 5G, IoT, network virtualisation, etc) can only be monetised if our OSS pull all the strings to make them happen
  • Simplify – the massive (and increasing) complexity in our business is boring to non-OSS people and an impediment to our ability to deliver exciting outcomes for everyone else. Just take Google – it runs an incredibly complex tech-stack… but what does the customer interact with on its search page?

How to disrupt through your OSS – a base principles framework

We’ve all heard the stories about the communications services industry being ripe for disruption. In fact many over-the-top (OTT) players, like Skype and WhatsApp, have already proven this for basic communications services, let alone the value-add applications that leverage CSP connectivity.

As much as through the innovative technologies they’ve built, the OTT players have thrived via some really interesting business models that service gaps in the market. The oft-quoted example is platforms like Airbnb providing clients with access to accommodation without incurring the capital costs of building housing.

In thinking about the disruption of the CSP industry and how OSS can provide an innovative vehicle to meeting customer demands, the framework below builds upon the basic principles of supply and demand:

Supply and Demand opportunities in a service provider

The objective of the diagram is to identify areas where innovative OSS models could be applied, taking a different line of thinking than just fulfilment / assurance / inventory:

  • Supply – how can OSS influence supply factors such as:
    • Identifying latent supply (eg un-used capacity) and how incremental revenues could be generated from it, such as building dynamic offers
    • Unbundling supply by stripping multiple elements apart to supply smaller desirable components rather than all of the components of a cross-subsidised bundle. Or vice versa, taking smaller components and bundling them together to supply something new
    • Changing the costs structures of supply, which could include virtualisation, automation and many of the other big buzz-words swirling around our industry
  • Demand – how can OSS / BSS build upon demand factors such as:
    • Marketplace / Platforms – how can OSS / BSS better facilitate trade between a CSP‘s subscribers, who are both producer / suppliers and consumers. CSPs traditionally provide data connectivity, but the OTT players tend to provide the higher perceived value of trade-connectivity, including:
      • Bringing buyers and sellers together on a common platform
      • Providing end-to-end trade support from product development, sales / marketing, trading transactions, escrow / billing / clearing-house; and then
      • Analytics on gathered data to improve the whole cycle
    • Supply chain – how can OSS / BSS help customers to efficiently bring the many pieces of a supply chain together. This is sometimes overlooked but can be one of the hardest parts of a business model to replicate (and therefore build a sustainable competitive advantage from). For OSS / BSS, it includes factors such as:
      • As technologies get more complex but more modular, partnerships and their corresponding interfaces become more important, playing into microservices strategies
      • Identifying delivery inefficiencies, including customer impediment factors such as ordering, delivery and activations. Many CSPs have significant challenges in this area, so efficiency opportunities abound

These are just a few of the ideas coming out of the framework above. Can the questions it poses help you to frame your next OSS innovation roadmap (ie taking it beyond just the typical tech roadmap)?

The OSS / Singapore analogy

Singapore has made some really innovative decisions over the years. Recent ones include tokenisation of the Singapore Dollar on cyber-currencies, investing heavily in international startups based in Singapore and the streamlining of identity management (which will undoubtedly help to get around one of the biggest blockers to self-on-boarding new customers onto comms networks, particularly mobile).

Singapore perhaps lacks the natural commercial advantages (eg mining, manufacturing, agriculture, etc) that some other countries have. Despite that, it has made itself into a significant global player by becoming an economic and trading hub. It’s the focus on the abovementioned types of innovative thinking that has simplified doing business in / through Singapore and has been instrumental in the country developing a presence that outsizes its natural assets.

OSS falls into a similar category with regards to natural commercial advantages. Most people see OSS as cost centres, which implies having no ability to generate revenues directly (I don’t agree with this perspective, but that’s another story or two).

OSS can take a leaf out of Singapore’s book to out-size its presence – by acting as a highly efficient facilitator of business (and social) activities – but also by diligently identifying innovative ways to improve efficiency even further.

One way is for OSS to take more of a lead in the omni-channel experience as elegantly stated here:
“The customer travels across all their channels – online, mobile, IVR, live and more – as they interact with you. You need to travel with them. If your channels aren’t fully integrated, they become just one more source of customer frustration. When you aren’t fully aware of how your customers have engaged with you across all your channels, costs of sales and service rise, and customer satisfaction shrinks.”

Another is through acting as a conduit to trade by doing more to bring its subscribers, all of which are buyers and sellers at some level, together with platform / marketplace / API / service thinking.

Another is through using data to provide marketing / sales / product-dev insights – and, taking this further, delivering insightful information as mobile moments.

I’m sure you can think of more angles that take OSS beyond an inward-only facing operations toolset (ie cost centre). Do you have any great Singapore-thinking ideas for OSS to embrace?

OSS S-curves

“I should say… that in the real world exponential curves don’t continue for ever. We get S-curves which closely mimic exponential curves in the beginning, but then tail off after a while, often as new technologies hit physical limits which prevent further progress. What seems to happen in practice is that some new technology emerges on its own S-curve which allows overall progress to stay on something approximating an exponential curve.
Socio tech

The chart above shows interlocking S-curves for change in society over the last 6,000 years. That’s as macro as it gets, but if you break down each of those S-curves they will in turn be comprised of their own interlocking S-curves. The industrial age, for example, was kicked off by the spinning jenny and other simple machines to automate elements of the textile industry, but was then kicked on by canals, steam power, trains, the internal combustion engine, and electricity. Each of these had its own S-curve, starting slowly, accelerating fast and then slowing down again. And to the people at the time the change would have seemed as rapid as change seems to us now. It’s only from our perspective looking back that change seems to have been slower in the past. Once again, that’s only because we make the mistake of thinking in absolute rather than relative terms.”
Nic Brisbourne

I love that Nic has taken the time to visualise and articulate what many of us can perceive.

Bringing the exponential / S-curve concept into OSS, we’re at a stage in the development of OSS that seems faster than at any other time during my career. Technology changes in adjacent industries are flowing into OSS, dragging it (perhaps kicking and screaming) into a very different future. Technologies such as continuous integration, cloud-scaling, big-data / graph databases, network virtualisation, robotic process automation (RPA) and many others are making OSS look very different to what they did only five years ago. In fact, we probably need these technologies just to keep pace with the others. For example, the touchpoint explosion caused by network virtualisation and IoT means we need improved database technologies to cope. In turn this introduces a complexity and rate of change that is almost impossible for people to keep track of, driving the need for RPA… and so on.

But then, there are also things that aren’t changing.

Many of our OSS have been built through millions of developer days of effort. That forces a monumental decision for the owners of that OSS – to keep up with advances, you need to rapidly overhaul / re-write / supersede / obsolete all that effort and replace it with something that keeps track of the exponential curve. The monolithic OSS of the past simply won’t be able to keep pace, so highly modular solutions, drawing on external tools like cloud, development automation and the like are going to be the only way to track the curve.

All of these technologies rely on programmable interfaces (APIs) to interlock. There is one major component of a telco’s network that doesn’t have an API yet – the physical (passive) network. We don’t have real-time data feeds or programmable control mechanisms to update it and manage these typically unreliable data sources. They are the foundation that everything else is built upon though so for me, this is the biggest digitalisation challenge / road-block that we face. Collectively, we don’t seem to be tackling it with as much rigour as it probably deserves.

Customers buy the basic and learn to love the features

“Most customers buy the basic and learn to love the features, but the whole customer experience is based on trying to sell the features.”
Roger Gibson

This statement appears oh-so-true in the OSS sales pitches that I’ve observed.

In many cases the customer really only needs the basic, but when every vendor is selling the features, customers also tend to get caught up in the features. “The basic” effectively represents Pareto’s 20% of functionality that is required by 80% (or more) of customers. However, since every vendor has the basic, they don’t feel differentiated enough to sell that 20%. They sell the remaining 80%, “the features,” that give the perception of uniqueness compared with all the other vendors.

When I help customers through the vendor selection process, I take a different perspective. I work with the customer to understand what “the basic” of their business model is and where the OSS will channel the most impact. It’s not the hundreds of sexy features, the ones which will get used on rare occasions, but “the basic” that gets used hundreds (or thousands, or millions) of times every day. I then work with the customers to figure out a way of benchmarking which vendor solution delivers the greatest efficiency on their basic.

A couple of side benefits come from this strategy too:

  • The basic is usually the easiest part of an OSS to roll-out and get delivery momentum going (and momentum is such an important feature of delivering large, complex OSS projects)
  • Once delivery momentum is established and the customer’s operators are using the basic, there are still hundreds of features to play with and enhance over time. Hundreds of more little wins that enhance the customer experience, building the love

OSS survivorship bias

Take a look at the image below. If you were told that the image showed where planes had taken hits in WWII before returning to base and were asked to recommend locations on the plane where armour should be strengthened, where would you choose? Would you choose to strengthen behind the cockpit and on the wingtips?

Survivorship bias

“During World War II, the statistician Abraham Wald took survivorship bias into his calculations when considering how to minimize bomber losses to enemy fire. Researchers from the Center for Naval Analyses had conducted a study of the damage done to aircraft that had returned from missions, and had recommended that armor be added to the areas that showed the most damage. Wald noted that the study only considered the aircraft that had survived their missions—the bombers that had been shot down were not present for the damage assessment. The holes in the returning aircraft, then, represented areas where a bomber could take damage and still return home safely. Wald proposed that the Navy instead reinforce the areas where the returning aircraft were unscathed, since those were the areas that, if hit, would cause the plane to be lost.” Wikipedia.

Survivorship bias is prevalent in OSS too. We have complex processes (assurance, fulfilment and inventory) with many possible variants. Too many variants to test them all. Too many variants to catch them all. And if we don’t catch them, we can’t measure or process them – Survivorship bias.

Another example is in telco IVRs. Their designers are often asked to make traversing the decision tree so complicated that callers drop off the call before reaching an agent (because manning the call centres costs the telcos money). The sentiments of those dropping callers aren’t measured. Oftentimes, these callers are analogous to the downed planes of WWII (ie they’re the ones you actually want data on).
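To make the bias concrete, here’s a toy simulation of the IVR example (all numbers invented): if the callers who abandon the IVR are never surveyed, the satisfaction score we measure looks far rosier than the truth.

```python
import random

random.seed(42)

# Hypothetical illustration: 10,000 IVR callers. Callers who give up before
# reaching an agent are never surveyed -- they are the "downed planes".
callers = []
for _ in range(10_000):
    gave_up = random.random() < 0.4              # 40% abandon the IVR tree
    # Abandoning callers are, on average, far less satisfied (invented means)
    satisfaction = random.gauss(2, 1) if gave_up else random.gauss(7, 1)
    callers.append((gave_up, satisfaction))

surveyed = [s for gave_up, s in callers if not gave_up]  # what we can measure
everyone = [s for _, s in callers]                       # the ground truth

print(f"measured mean satisfaction: {sum(surveyed) / len(surveyed):.1f}")
print(f"actual mean satisfaction:   {sum(everyone) / len(everyone):.1f}")
```

The measured mean sits near the happy-path average, while the true mean is dragged well below it by the unmeasured abandoners – exactly the gap survivorship bias hides.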

In the current age of omnichannel communications there are more cracks for customers to fall into than ever before because there is generally no single tool measuring journeys through all channels and watching all hand-offs between systems.

Do you think survivorship bias might be prevalent in your OSS?

OSS data as a business asset

“To become a business asset, the role of data within an organization must move from being departmentally siloed to being centrally managed. Breaking down the siloes is not necessarily a technical challenge, but an organizational one. It requires a data strategy, the correct level of ownership and corporate governance. Data-as-a-business-asset means a single definition of the customer, ownership at an executive level and a well-managed change control process to ensure ongoing data quality and trust in the data across the organization.”
Steve Earl

As Eric Hoving said at TM Forum Live!, “For 20 years we tried to make sense out of data and failed, but Google did it. Stuff is not an asset if you don’t get something out of it. If you aren’t going to do something with it then stop doing it.”

This is a strong statement by Eric Hoving but an important one when building our OSS strategies. Do we see ourselves collecting data for operational purposes or are we really going to try to make sense of it and turn it into a significant business asset? [Wikipedia describes an asset as “anything tangible or intangible that can be owned or controlled to produce value and that is held by a company to produce positive economic value.”]

Should the data collection requirements of our OSS and BSS be defined by ops only or should they be defined by the Chief Data Office (ie centrally organised with a whole-of-business asset strategy in mind)?

As Steve Earl says, this “is not necessarily a technical challenge, but an organizational one.” Do we want to lead this change or follow?

Communications Support Systems (CSS)

Have you ever worked in a big telco (or any large organisation for that matter)? Have you ever noticed the disconnection in knowledge / information? On a recent project, I was brought in as an external consultant to find potential improvements to one business unit’s specific key performance indicator (KPI).

As a consultant, my role is a connector – a connector of people, ideas, projects, products, technologies, approaches, etc. On the abovementioned project, I posed questions within the business unit, but also reached out to adjacent units. There were no fewer than six business units investigating a particular leading-edge technology that could’ve improved the KPI. Unfortunately, they each only had budget for investigations, with one also having the budget to conduct a very small Proof-of-Concept (PoC). Collectively, they would’ve had the budget to do a fairly significant implementation for all to leverage but, separately, none were within 1-2 years of implementation.

I later found out that there was an additional, completely unrelated business unit that had already taken this technology into production with a small trial. It was for a completely different use case but had stood up a product that could’ve been leveraged by the rest.

That got me thinking – the challenge for us in technology is finding a way to remove this waste through “perfect” communication, without bogging ourselves down with information gathering and grooming. Could a combination of existing technologies be used, as follows?
Perfect Communication
We already have:

  • Knowledge management systems (KMS) to gather and groom knowledge
  • Chat-bots to collect verbal information and respond with recommendations (more on OSS chat bots tomorrow BTW)
  • Artificial Intelligence (or even just basic analytics tools) to scan data (from KMS, chat-bots, OSS, etc) and provide decision support prompts

We have all the technologies required to significantly augment our current poor communication methods. The question I have for you is do Communications Support Systems (CSS) become an adjunct to OSS or just another IT system that happens to consume OSS data as well as any other sources?
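As a toy sketch of the decision-support idea (all unit names and messages are invented), even basic keyword matching over captured conversations could have flagged the duplicated investigations described above:

```python
from collections import defaultdict

# Hypothetical sketch: scan captured notes/chats and flag business units that
# are independently investigating the same technology. A real system would use
# the KMS / chat-bot / analytics stack above; this just shows the principle.
notes = [
    ("Unit-A", "scoping a PoC for network telemetry streaming"),
    ("Unit-B", "investigating telemetry streaming vendors"),
    ("Unit-C", "budget request: capacity forecasting model"),
    ("Unit-D", "telemetry streaming looks promising for our KPI"),
]

topics = defaultdict(set)
for unit, text in notes:
    if "telemetry streaming" in text:          # naive keyword match
        topics["telemetry streaming"].add(unit)

for topic, units in topics.items():
    if len(units) > 1:
        print(f"{topic}: possible duplicated effort across {sorted(units)}")
```

In practice the matching would be far richer (entity extraction, clustering), but the decision-support prompt – “these units are working on the same thing” – is the output that removes the waste.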

Note: I’ve bracketed “Ambient Listening” in the chat-bot because the six different groups above were all working on research into the cutting-edge technology and talking about it, but there was little in the way of official documentation stored – mostly just emails, communicator conversations, verbal discussions, etc. I do acknowledge the invasion of privacy concerns with this thought experiment though!!  🙂

Out-running the bear

A bear jumps out of a bush and starts chasing two hikers. They both start running for their lives, but then one of them stops to put on his running shoes.
His friend says, “What are you doing? You can’t out-run a bear!”
His friend replies, “I don’t have to out-run the bear. I only have to out-run you!”

Let’s say the two friends represent competing telcos and the OTT (Over the Top) players represent the bear. The telcos have an average NPS (Net Promoter Score) of 6, whilst the OTT players peak at around 70 NPS, representing the bear’s speed advantage.

Does this analogy hold true? Do the telcos only need to out-run each other to succeed, or will the bear just gobble them up one by one because “you can’t out-run a bear”?

If we look at this in relation to the OSI stack: if the telcos provide L1-3 and the OTT players provide L4-7, does the bear only gobble them all up if it takes over L1-3 too (and hence is no longer just OTT)?

OSI stack
Courtesy of

To follow on from yesterday’s blog on the value of connections, we can consider two levels of connection – the telcos are typically selling connections at L3 (network layer) whilst the OTT players sell connections at L7 (application layer).

I equate this to layer 3 selling to the HOW whilst layer 7 (if done well) is selling to the WHY. Check out Simon Sinek’s famous “Start with why” talk to see why this allows the OTT players to generate out-sized revenues and NPS scores.

So if telcos and their OSS want to out-run the bear, they need to better understand and deliver to the WHY of customer connections, not just the HOW. In addition to the content and applications play, solutions like microservices, APIs, IoT platforms (not just carriage) and others could present this opportunity.

OSS are too important to be just cost centres

When we distill it down, what are telcos selling?

They’re selling connections.
Whether it’s connecting with information, a group, another person, even a virtual assistant or machine-to-machine, we tend to use communications services to connect.

Networks are an important component in establishing those connections. OSS go a step further. They help to establish a connection but also help to maintain the ability to connect – via change, incident, problem, capacity, customer and other service management tools.

That makes OSS pivotal to each telco’s revenue stream. And yet, OSS are most commonly seen as cost centres. Is that because we collectively don’t really tie our metrics to what is being sold? If anything, our OSS tend to be more closely aligned with metrics related to loss (eg network outages, faults, SLA degradation, etc).

We’re a step removed from all of the positive stuff. Examples include:

  • The selling process
  • The billing process
  • The revenue collection process
  • The product design process

But we enable all of those tasks, as well as underpinning the thing that is actually sold – connections.

So, how do we change those perceptions? Two possibilities spring to mind:

  1. We get better at understanding (and facilitating) why customers want to connect (and why they want to connect elsewhere); and
  2. We need to better demonstrate how influential we are at establishing and maintaining connections, and part of that is described in yesterday’s blog on omni-channel experience orchestration

Bending over backwards

Over the years, I’ve dealt with (and worked with) many vendors, as I’m sure you have too.

Some will bend over backwards to help their customers (or potential customers) out, finding workarounds to their own internal rules to help make something good happen.

Others will persuade their customers into signing a contract before bending their customers over backwards, finding internal rules to their own workarounds to make sure nothing good happens.

Which group do you think customers prefer dealing with? The answer’s obviously the first.
But having been the impartial advisor in vendor selection processes, the group the customer prefers dealing with doesn’t always equate to who they sign a contract with. That answer seems less obvious – to me at least (acknowledging, though, that there are many factors that go into an OSS purchasing decision).

An OSS data input quandary

“In writing or in life, people who are successful focus on the input (putting in). Those who aren’t successful focus on the output (taking out).”
Paraphrasing Tucker Max during a podcast with James Altucher.

In what follows, you’ll notice that I’m putting a completely different context on Tucker’s quote, which is a new spin on the old adage, “you get out what you put in.” This got me wondering whether it holds true in the world of OSS. I know from personal experience that the more effort you put into building an OSS, the more you get out of it. In my years of consulting, it’s also been obvious that the customers who put the most into an assignment get the best results (with the opposite generally being true too). But the part I was wondering about was actually the information you put into an OSS.

Is it true that if you focus on what data you’re putting in to your OSS, it just follows that you’ll also get great insights coming out? I’m not so sure on this one. You may’ve seen an earlier post on “Minimum Viable Data (MVD)” that discusses the merits of small data (as well as the possibilities of big data – if you know how to make use of it).

If we break the input down, we have two lines of thought – highly targeted, streamlined data versus gathering as much as you can and then figuring out what to do with it. With the first, the MVD approach, you almost need to design the input with the end in mind. With the second, the thinking isn’t so much on the input but on asking the right questions of the data to get great answers. The second is more flexible and, through statistical techniques, can possibly overcome poor-quality data.
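A minimal sketch of the first (MVD-style) line of thought, using entirely invented field names: gate-keep each record at ingest so garbage never enters the OSS in the first place.

```python
# Hypothetical MVD-style ingest gate. The required fields and statuses are
# invented for illustration; a real OSS would derive them from its data model.
REQUIRED = {"device_id", "site", "status"}
VALID_STATUS = {"planned", "in-service", "decommissioned"}


def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record is clean."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - record.keys())]
    if record.get("status") not in VALID_STATUS:
        problems.append(f"unknown status: {record.get('status')!r}")
    return problems


clean, quarantined = [], []
for rec in [
    {"device_id": "R1", "site": "SYD-01", "status": "in-service"},
    {"device_id": "R2", "status": "live"},   # missing site, unknown status
]:
    (clean if not validate(rec) else quarantined).append(rec)

print(len(clean), "clean,", len(quarantined), "quarantined")
```

The design choice here is that bad records are quarantined rather than silently dropped, so the input focus also produces a worklist for data-quality remediation.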

This leads to the question – since an OSS is only as good as its data and data usage, is it always true that if you put garbage in, you get garbage out (GIGO)? Is that where the “input” focus should always be?

When it comes to data, do you think Tucker Max is right that “the successful focus on the input whereas those who aren’t successful focus on the output (taking out)”? Have you developed great techniques for dealing with rubbish data? I’d love to hear your thoughts on this one.

That’s somebody else’s job

One of the advantages of being a consultant is that you get to assist big corporates without being wrapped up in some of the big corporates’ mindsets.

Of course this isn’t true of all employees at big corporates, but I’ve found the “that’s somebody else’s job” mindset to be more prevalent there. In some ways it makes sense, right? Having a large group of employees means that there is likely to be someone with the title / responsibility to tackle almost any given task.

But personally, I’ve loved working on large OSS projects with smaller project teams because there are always activities that slip through the cracks. I’ve found these types of projects give the biggest opportunities to learn.

For example, back in my early days in OSS, I was a network SME who was assisting the data migration team. Unfortunately (fortunately) there was a change in personnel and I ended up inheriting the whole network modelling and data migration function. This provided the opportunity to design and model network inventory that covered 3000+ network elements, 200 SDH rings, 10 DWDM rings, 70 ATM switches, 2000 multi-service access nodes, 16 PSTN switches, more than 250 IP switches/routers, RAS, LMDS, DCME, DACS, VoIP and network synchronisation devices. There were well over 100,000 services modelled, across products including voice/PABX/IN, ISDN, ATM, FR, Leased Line, ADSL and other IP related services. Having an intimate knowledge of the data meant that I was increasingly called upon to architect solutions and even design product enhancements.

Now nobody would ever hire me as a coder (I can’t program my way out of a wet paper bag, despite having a computer science degree), but to get the above-mentioned job done, I had to teach myself SQL to manipulate, correlate and import large data sets. It was somebody else’s job to write code, but they couldn’t keep up with demand and got moved to another project anyway.
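To give a flavour of the kind of correlation work described above, here’s a minimal sketch of the technique using Python’s built-in sqlite3 module. The table names, IDs and data are entirely hypothetical – the point is the LEFT JOIN pattern used to find records in one extract that have no match in another (a classic source of migration fall-outs):

```python
import sqlite3

# Hypothetical migration check: flag services whose port doesn't exist
# in the inventory extract (all names and data are illustrative only).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE services (service_id TEXT, port_id TEXT)")
cur.execute("CREATE TABLE ports (port_id TEXT, ne_name TEXT)")
cur.executemany("INSERT INTO services VALUES (?, ?)",
                [("SVC-001", "P-1"), ("SVC-002", "P-2"), ("SVC-003", "P-9")])
cur.executemany("INSERT INTO ports VALUES (?, ?)",
                [("P-1", "NE-A"), ("P-2", "NE-B")])

# A LEFT JOIN exposes orphaned services (no matching port record)
cur.execute("""
    SELECT s.service_id
    FROM services s
    LEFT JOIN ports p ON s.port_id = p.port_id
    WHERE p.port_id IS NULL
""")
orphans = [row[0] for row in cur.fetchall()]
print(orphans)  # ['SVC-003']
```

The same join pattern scales from toy tables like these up to the 100,000-service data sets mentioned above; only the extract sizes change, not the technique.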

That was a very formative project in terms of my passion for OSS. Having a mindset of “that’s somebody else’s job” would’ve reduced my workload on that project immeasurably, but also would’ve prevented me from accessing the amazing opportunities I’ve been given since. A “somebody else’s job” mindset would’ve meant the cool opportunities that followed would’ve also become somebody else’s jobs!

OSS fears trump OSS pains

“The truth is that whale-sized companies would love to do business with smaller suppliers. The relationship is more immediate in every way. Small suppliers can deliver goods more quickly than larger ones. It’s much easier to get the CEO of a small supplier on the phone for discussion of the transactions. Small suppliers are more flexible. So why are so many small companies unsuccessful in entering this larger arena? Because they do not fully understand the fears of whale-sized companies, fears that have arisen because of all the factors affecting business during the last few decades. It may be true that you have prepared yourself and your company to respond to the immediate demand for an excellent product. We call this demand the “whale’s pain.” But fear trumps pain every time. If you don’t know how to understand and allay the whale’s fear, you will not be able to make the sale.”
Barbara Weaver Smith and Tom Searcy, from their book, “Whale Hunting: How to Land Big Sales and Transform Your Company.”

When it comes to OSS customers, most are whales relative to their vendors (ie service providers tend to have larger market cap than the OSS vendors that they partner with).

Barbara and Tom make a fascinating point here – “If you don’t know how to understand and allay the whale’s fear, you will not be able to make the [OSS] sale.” And this is not just the literal sale. The statement rings true not just for vendor sales teams, but for internal project initiators and implementation teams who are making changes mid-flight.

I’ve seen scenarios where one vendor has clearly been better able to overcome a customer’s pain points, but hasn’t been able to overcome their fears – whether that’s been the perception of size, the perception of the ability to deliver, or other fears.

OSS projects tend to be complex and costly. There tend to be a lot of unknowns, and with that comes a lot of fear.

When helping customers with their product / vendor selections (or OSS consultancies in general), I try to overcome customer fears with a methodology that steps through the evaluation of pain points, the demonstration of capabilities and the opportunity for customer and vendor to build trust in each other through the interaction process. But a vendor selection can’t overcome all of the unknowns prior to entering into OSS implementation contracts (partnerships), so there are often still residual fears.

Do you agree with Barbara and Tom’s premise? Does fear trump pain every time? Do you have any techniques that you’ve found to be successful in allaying customer fears (and/or pain points) on OSS projects?

To leave you with one last thought – The confused mind says no. How do you remove the confusion from what seem like complex projects in the eyes of your customers / stakeholders / sponsors?

OSS – the meet-in-the-middle tool

Many telcos around the world have a sometimes subtle, sometimes not, turf-war going on between networks / operations and IT groups. Virtualisation of the network potentially amplifies this because it increases the scope of possible overlap.

As described in yesterday’s “Noah’s Ark of OSS success,” one of the ways of improving the success of an OSS is to make it relevant to a larger set of users. Theoretically, this includes representatives from networks, ops and IT, which isn’t always easy to achieve.

Is there anything you can do with your OSS to engineer a meet-in-the-middle collaboration space? Are there adjacencies where people from different business units need to collaborate but don’t always succeed in doing so? Are there areas of potential overlap where demarcation and validation can be defined?

Notwithstanding the technical and user interface considerations, there’s also the perspective of who actually owns the meet-in-the-middle tool. For example, if it’s an IT-owned tool, there are risks that networks / ops might not want to contribute to its success. Ditto if it’s owned by networks or ops.

As an integrator I’ve seen many examples of technically relevant solutions not succeeding because of people-related issues. Organisational change management is an often-underestimated tool that isn’t factored into OSS projects until too late, particularly within large, complex organisation structures.

Is the Magic Quadrant really a vendor selection strategy?

Gartner’s magic quadrant for OSS is often used by organisations as a proxy for determining a short-list from the hundreds (thousands?) of OSS products available on the market. I’ve even heard of executives from tier-one telcos issuing directives that their short-list should consist of the top right corner of Gartner’s OSS Magic Quadrant (the Leaders quadrant).

The Gartner analysis makes for great reading and its readers can be left with the impression that they’ve been given the silver bullet of vendor selection. Unfortunately there’s an inherent problem: it only covers a small portion of what it takes to find the right vendor, because it takes a generic viewpoint and doesn’t evaluate best-fit for any given customer’s requirements. That’s not a criticism – the report simply can’t cater for every customer variant – but I’d go as far as saying the Gartner rankings can actually be misleading for some customers if they use them as a vendor selection proxy.

For example, many years ago I was brought in to advise the board of a tier-1 telco that had recently selected an OSS vendor from the Leaders quadrant (there were also rumours of brown paper bags changing hands to influence the decision, but we won’t go there). The vendor brought many products to the deal, giving it completeness of vision, and it was very modular, increasing its ability to execute (thus placing it in the Leaders quadrant).

Unfortunately, the lack of integration between the modular products meant there was none of the richness of cross-domain data sharing that I’d come to expect from an OSS (or that the customer’s requirements indicated that they needed). The vendor could execute on basic functionality quickly but they couldn’t fulfil the important requirements, certainly not at the price-point the customer could afford.

In this case, there were other vendors / products that were definitely a better fit for this customer’s requirements and price-point that weren’t even on the Gartner matrix.

If you need any help designing the best vendor selection strategy for your business and requirements, I’d be delighted to hear from you at:

OSS vendor websites – are they helping their customers?

Amongst other consultancy activities, I help clients find the best OSS solution for their needs. This means I’m constantly analysing vendor offerings to cross-reference against client needs.

Based on this perspective, there’s a question I would like to pose to vendors – Why are potential customers coming to your site? [Note: for the purpose of this blog we’ll disregard existing customer visits]

Can we assume that it’s because customers either:
A) Have an existing OSS and are looking to replace it
B) Don’t have the functionality yet (or maybe they’re just winging it with spreadsheets or similar)
And they’ll want to do a review (ie an informal shortlisting) before contacting any vendors directly.

For both A and B above:
1) They’ll need to prepare a business case to justify that the new investment will deliver a positive return
2) They’ll need to justify that the new capability will improve their current situation
3) They’ll want to have the sense that they can trust the professionalism of the vendor

They’ll also need to do a comparison:
4A) If A) above, they’ll need to be able to compare your solution (financial, technical, risk, etc) with theirs.
4B) If B) above, they’ll need to be able to compare your solution (financial, technical, risk, etc) against others on the market

Vendor websites usually help with 2) and 3), though some make it much easier than others to find the information customers are after. As a customer, the dilemma is that most vendors have highly flexible pricing scales / models, so they’re hesitant to publish this info online, and vendors find it difficult to quantify the benefits they provide. This means they struggle to help customers resolve 1). And regarding 4), customers will usually have to kick off their own analysis (or engage PAOSS) because vendors rarely make it easy to generate competitor comparisons.

An important note in relation to 1): the customer potentially has little concept of what their new OSS will cost, and hence prefers to do some groundwork before negotiating with their financial boffins what their budget will be… and only then will they speak with vendors. There’s also the old concept that if an investment looks like delivering big, then budgets can (almost) always be increased commensurately. For example, would you spend your $1M OSS budget on Option 1 if you could only identify $500k in benefits, or would you spend $2M (ie a 100% increase on budget) on Option 2 if you could identify $5M in recurring benefits? Your organisation would invariably opt to increase the budget, wouldn’t they?
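The arithmetic behind that example is worth making explicit. Here’s a tiny sketch, using the illustrative figures from the paragraph above, showing why net benefit (not budget adherence) should drive the decision:

```python
# Illustrative figures only, taken from the example in the text.
options = {
    "Option 1": {"cost": 1_000_000, "benefit": 500_000},
    "Option 2": {"cost": 2_000_000, "benefit": 5_000_000},
}

# Net benefit per option: benefit delivered minus budget spent
net = {name: o["benefit"] - o["cost"] for name, o in options.items()}
best = max(net, key=net.get)

print(net)   # {'Option 1': -500000, 'Option 2': 3000000}
print(best)  # Option 2
```

Option 1 stays within budget but destroys $500k of value; Option 2 doubles the budget yet returns $3M net – and the benefits recur, compounding the gap year on year.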

The other challenge in relation to 1) is that the benefits of OSS are often quite intangible. That’s part of the reason why I wrote the OSS Business Case builder e-book.

If you’re a vendor and think any of that relatively generic info is worth diving further into, please contact me and we can create a more directly actionable plan for your site.

Calling something an experiment…

“Calling it an experiment gives you permission to fail.”
A.J. Jacobs

It’s usually really cheap to experiment with data (assuming that it’s data you’ve already collected and curated).

What if you combine that “license to fail” insight with yesterday’s start of the Minimum Viable Telco (and OSS) movement? Can we use experiments to trial increased minimalism and find a way to reduce the pyramid of OSS pain? OSS the world over are overwhelmed by the complexity of uncountable variants. These variants all need to be conceptualised, designed for, developed for, and tested for (not to mention the variants we haven’t identified, which lead to fall-outs). Think of the effort that requires!

I love the concept of A/B testing. It’s used to trial two variants and watch what happens. Can we use our OSS and decision support tools to selectively drop stuff off and see whether there is any loss of fidelity? Can we trigger a chain reaction of A/B (ie Darwinian) tests to weed out the variants that don’t really matter and determine what the MVP really is?
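As a minimal sketch of what such a reductionist experiment could look like, here’s a standard two-proportion z-test comparing outcomes with a variant retained (A) versus dropped (B). The success counts are hypothetical; the point is that the maths for “did dropping it hurt?” is only a few lines:

```python
from math import sqrt

# Two-proportion z-test: is there a significant difference in success rate
# between keeping a variant (A) and dropping it (B)?
def ab_z_score(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    return (p_a - p_b) / se

# Hypothetical trial: 480/1000 successful orders with the variant,
# 470/1000 without it
z = ab_z_score(480, 1000, 470, 1000)
print(round(z, 2))  # 0.45

# |z| < 1.96 means no significant loss of fidelity at the 95% level,
# so this variant is a candidate for removal.
```

Chain enough of these tests together and the Darwinian weeding-out described above becomes a routine, data-driven process rather than a marketing-department argument.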

Machines are really good at handling millions of variants. Us humans, not so much. Getting humans to accept reduction can be extremely painful (ever tried convincing a marketing department to drop one of its product lines?). Is it time for us to use exponential technology to conduct more reductionist experiments?