Stealing Fire for OSS (part 2)

Yesterday’s post talked about the difference between “flow state” and “office state” in relation to OSS delivery. It referenced a book I’m currently reading called Stealing Fire.

The post mainly focused on how the interruptions of “office state” actually inhibit our productivity, learning and ability to think laterally on our OSS. But that got me thinking that perhaps flow doesn’t just relate to OSS project delivery. It also relates to post-implementation use of the OSS we implement.

If we think about the various personas who use an OSS (such as NOC operators, designers, order entry operators, capacity planners, etc), do our user interfaces and workflows assist or inhibit them in getting into the zone? More importantly, if those personas need to work collaboratively with others, do we facilitate them in getting into "group flow"?

Stealing Fire suggests that it costs around $500k to train each Navy SEAL and around $4.25m to train each elite SEAL (DEVGRU). It also describes how this level of training allows DEVGRU units to quickly get into group flow and function together almost as if choreographed, even in high-pressure / high-noise environments.

Contrast this with collaborative activities within our OSS. We use tickets, emails, Slack notifications, work order activity lists, etc to collaborate. It seems to me that these are the precise instruments that prevent us from getting into flow individually. I assume it’s the same collectively. I can’t think back to any end-to-end OSS workflows that seem highly choreographed or seamlessly effective.

Think about it. If you experience significant rates of process fall-out / error, then it would seem to indicate an OSS that’s not conducive to group flow. Ditto for lengthy O2A (order to activate) or T2R (trouble to resolve) times. Ditto for bringing new products to market.

I’d love to hear your thoughts. Has any OSS environment you’ve worked in facilitated group flow? If so, was it the people and/or the tools? Alternatively, have the OSS you’ve used inhibited group flow?

PS. Stealing Fire details how organisations such as Google and DARPA are investing heavily in flow research. They can obviously see the pay-off from those investments (or potential pay-offs). We seem to barely even invest in UI/UX experts to assist with the designs of our OSS products and workflows.

What if most OSS/BSS are overkill? Planning a simpler version

You may recall a recent article that provided a discussion around the demarcation between OSS and BSS, which included the following graph:

Note that this mapping is just my demarc interpretation, but isn’t the definitive guide. It’s definitely open to differing opinions (ie religious wars).

Many of you will be familiar with the framework that the mapping is overlaid onto – TM Forum’s TAM (the Telecom Application Map), version R17.5.1 in this case. It is as close as we get to a standard mapping of OSS/BSS functionality modules. I find it to be a really useful guide, so today’s article is going to call on the TAM again.

As you would’ve noticed in the diagram above, there are many, many modules that make up the complete OSS/BSS estate. And you should note that the diagram above only includes Level 2 mapping. The TAM recommendation gets a lot more granular than this. This level of granularity can be really important for large, complex telcos.

For the OSS/BSS that support smaller telcos, network providers or utilities, this might be overkill. Similarly, there are OSS/BSS vendors that want to cover all or large parts of the entire estate for these types of customers. But as you’d expect, they don’t want to provide the same depth of functionality coverage that the big telcos might need.

As such, I thought I’d provide the cut-down TAM mapping below for those who want a less complex OSS/BSS suite.

It’s a really subjective mapping because each telco, provider or vendor will have their own perspective on mandatory features or modules. Hopefully it provides a useful starting point for planning a low complexity OSS/BSS.

Then what high-level functionality goes into these building blocks? That’s possibly even more subjective, but here are some hints:

OSS change…. but not too much… oh no…..

Let me start today with a question:
Does your future OSS/BSS need to be drastically different to what it is today?

Please leave me a comment below, answering yes or no.

I’m going to take a guess that most OSS/BSS experts will answer yes to this question, that our future OSS/BSS will change significantly. It’s the reason I wrote the OSS Call for Innovation manifesto some time back. As great as our OSS/BSS are, there’s still so much need for improvement.

But big improvement needs big change. And big change is scary, as Tom Nolle points out:
“IT vendors, like most vendors, recognize that too much revolution doesn’t sell. You have to creep up on change, get buyers disconnected from the comfortable past and then get them to face not the ultimate future but a future that’s not too frightening.”

Do you feel like we’re already in the midst of a revolution? Cloud computing, web-scaling and virtualisation (of IT and networks) have been partly responsible for it. Agile and continuous integration/delivery models too.

The following diagram shows a “from the moon” level view of how I approach (almost) any new project.

The key to Tom’s quote above is in step 2. Just how far, or how ambitious, into the future are you projecting your required change? Do you even know what that future will look like? After all, the environment we’re operating within is changing so fast. That’s why Tom is suggesting that for many of us, step 2 is just a “creep up on it change.” The gap is essentially small.

The “creep up on it change” means just adding a few new relatively meaningless features at the end of the long tail of functionality. That’s because we’ve already had the most meaningful functionality in our OSS/BSS for decades (eg customer management, product / catalog management, service management, service activation, network / service health management, inventory / resource management, partner management, workforce management, etc). We’ve had the functionality, but that doesn’t mean we’ve perfected the cost or process efficiency of using it.

So let’s say we look at step 2 with a slightly different mindset. Let’s say we don’t try to add any new functionality. We lock that down to what we already have. Instead we do re-factoring and try to pull the efficiency levers, which means changes to:

  1. Platforms (eg cloud computing, web-scaling and virtualisation as well as associated management applications)
  2. Methodologies (eg Agile, DevOps, CI/CD, noting of course that they’re more than just methodologies, but also come with tools, etc)
  3. Process (eg User Experience / User Interfaces [UX/UI], supply chain, business process re-invention, machine-led automations, etc)

It’s harder for most people to visualise what the Step 2 Future State looks like. And if it’s harder to envisage Step 2, how do we then move onto Steps 3 and 4 with confidence?

This is the challenge for OSS/BSS vendors, suppliers, integrators and implementers. How do we, “get buyers disconnected from the comfortable past and then get them to face not the ultimate future but a future that’s not too frightening?” And I should point out that it’s not just buyers we need to get disconnected from the comfortable past, but ourselves, myself definitely included.

Network slicing and a seismic shift in OSS responsibility

Network slicing allows operators to segment their network and configure each different slice to the specific needs of that customer (or group of customers). So rather than the network infrastructure being configured for the best compromise that suits all use-cases, instead each slice can be configured optimally for each use-case. That’s an exciting concept.

The big potential roadblock, however, falls almost entirely on our OSS/BSS. If our operational tools require significant manual intervention on just one network now, then what chance do operators have of efficiently looking after many networks (ie all the slices)?

This article describes the level of operational efficiency / automation required to make network slicing cost effective. It clearly shows that we’ll have to deliver massive sophistication in our OSS/BSS to handle automation, not to mention the huge number of variants we’d have to cope with across all the slices. If that’s the case, network slicing isn’t going to be viable any time soon.

But something just dawned on me today. I was assuming that the onus for managing each slice would fall on the network operator. What if we take the approach that telcos use with security on network pipes instead? That is, the telco shifts the onus of security onto their customer (in most cases). They provide a dumb pipe and ask the customer to manage their own security mechanisms (eg firewalls) at their end.

In the case of network slicing, operators just provide “dumb slices.” The operator assumes responsibility for providing the network resource pool (VNFs – Virtual Network Functions) and the automation of slice management including fulfilment (ie adds, modifies, deletes, holds, etc) and assurance. But the customers take responsibility for actually managing their network (slice) with their own OSS/BSS (which they probably already have a suite of anyway).

This approach doesn’t seem to require the same level of sophistication. The main impacts I see (and I’m probably overlooking plenty of others) are:

    1. There’s a new class of OSS/BSS required by the operators, that of automated slice management
    2. The customers already have their own OSS/BSS, but they currently tend to focus on monitoring, ticketing, escalations, etc. Their new customer OSS/BSS would need to take more responsibility for provisioning, including traffic engineering
    3. And I’d expect that to support customer-driven provisioning, the operators would probably need to provide ways for customers to programmatically interface with the network resources that make up their slice. That is, operators would need to offer network APIs or NaaS to their customers externally, not just for internal purposes
    4. Determining the optimal slice model. For example, does the carrier offer:
      1. A small number of slice types (eg video, IoT low latency, IoT low chat, etc), where each slice caters for a category of customers, but with many slice instances (one for each customer)
      2. A small number of slice instances, where all customers in that category share the single slice
      3. Customised slices for premium customers
      4. A mix of the above

In the meantime, changes could be made as they have been in the past, via customer portals, etc.
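To make point 3 above a little more concrete, here’s a minimal sketch of what a customer-side call to an operator’s slice API might look like. Everything in it is an assumption for illustration purposes: the endpoint, the field names and the PATCH-style “modify my slice” operation are hypothetical, not drawn from any published NaaS or 3GPP specification.

```python
import requests

# Hypothetical NaaS endpoint and payload. All names below are illustrative only,
# not taken from any specific TM Forum / 3GPP slice-management API.
NAAS_BASE_URL = "https://api.example-operator.com/naas/v1"
API_TOKEN = "replace-with-your-token"

def request_slice_modification(slice_id: str, downlink_mbps: int, max_latency_ms: int) -> dict:
    """Ask the operator to re-dimension a slice the customer already owns."""
    payload = {
        "requestedCharacteristics": {
            "downlinkThroughputMbps": downlink_mbps,
            "maxLatencyMs": max_latency_ms,
        },
    }
    response = requests.patch(
        f"{NAAS_BASE_URL}/slices/{slice_id}",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # eg an order / acknowledgement record from the operator

if __name__ == "__main__":
    ack = request_slice_modification("slice-cust-042", downlink_mbps=500, max_latency_ms=20)
    print(ack)
```

The point isn’t the specific fields; it’s that the customer’s own OSS/BSS (point 2) could drive this kind of call programmatically, rather than a human raising a request through a portal.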

Thoughts?

Two concepts to help ease long-standing OSS problems

There’s a famous Zig Ziglar quote that goes something like, “You can have everything in life you want, if you will just help enough other people get what they want.”

You could safely assume that this was written for the individual reader, but there is some truth in it within the OSS context too. For the OSS designer, builder, integrator, does the statement “You can have everything in your OSS you want, if you will just help enough other people get what they want,” apply?

We often just think about the O in OSS (operations people) when looking for who to help. But OSS/BSS has the ability to impact far wider than just the Ops team/s.

The halcyon days of OSS were probably in the 1990s to early 2000s, when the term OSS/BSS was at its most sexy and exciting. The big telcos were excitedly spending hundreds of millions of dollars. Those projects were huge… and hugely complex… and hugely fun!

With that level of investment, there was the expectation that the OSS/BSS would help many people. And they did. But the lustre has come off somewhat since then. We’ve helped sooooo many people, but perhaps didn’t help enough people enough. Just speak with anybody involved with an OSS/BSS stack and you’ll hear hints of a large gap that exists between their current state and a desired future state.

Do you mind if I ask two questions?

  1. When you reflect on your OSS activities, do you focus on the technology, the opportunities or the problems?
  2. Do you look at the local, day-to-day activities or the broader industry?

I tend to find myself focusing on the problems – how to solve them within the daily context of customer challenges, and on the broader industry problems when I take the time to reflect, such as when writing these blogs.

The part I find interesting is that we still face most of the same problems today that we did back in the 1990s-2000s. The same sources of risk. We’ve done a fantastic job of helping many people get what they want on their day-to-day activities (the incremental). We still haven’t cracked the big challenges though. That’s why I wrote the OSS Call for Innovation, to articulate what lies ahead of us.

It’s why I’m really excited about two of the concepts we’ve discussed this week:

Is your OSS squeaking like an un-oiled bearing?

Network operators spend huge amounts on building and maintaining their OSS/BSS every year. There are many reasons they invest so heavily, but in most cases it can be distilled back to one thing – improving operational efficiency.

And our OSS/BSS definitely do improve operational efficiency, but there are still so many sources of friction. They’re squeaking like un-oiled bearings. Here are just a few of the common sources:

  1. First-time Installation
  2. Identifying best-fit tools
  3. Procurement of new tools
  4. Update / release processes
  5. Continuous data quality / consistency improvement
  6. Navigating to all features through the user interface
  7. Non-intuitive functionality / processes
  8. So many variants / complexity that end-users take years to attain expert-level capability
  9. Integration / interconnect
  10. Getting new starters up to speed
  11. Getting proficient operators to expertise
  12. Unlocking actionable insights from huge data piles
  13. Resolving the root-cause of complex faults
  14. Onboarding new customers
  15. Productionising new functionality
  16. Exception and fallout handling
  17. Access to supplier expertise to resolve challenges

The list goes far deeper than that too. The challenge for many OSS product teams, for any number of reasons, is that their focus is on adding new features rather than reducing friction in what already exists.

The challenge for product teams is diagnosing where the friction and risks are for their customers / stakeholders. How do you get that feedback?

  • Every vendor has a product support team, so that’s a useful place to start, both in terms of what’s generating the most support calls and in terms of first-hand feedback from customers
  • Do you hold user forums on a regular basis, where you get many of your customers together to discuss their challenges, your future roadmap, new improvements / features?
  • Does your process “flow” data show where the sticking points are for operators?
  • Do you conduct gemba walks with your customers?
  • Do you have a program of ensuring all developers spend at least a few days a year interacting directly with customers on their site/s?
  • Do you observe areas of difficulty when delivering training?
  • Do you go out of your way to ask your customers / stakeholders questions that are framed around their pain-points, not just framed within the context of your existing OSS?
  • Do you conduct customer surveys? More importantly, do you conduct surveys through an independent third-party?

On the last dot-point, I’ve been surprised at some of the profound insights end-users have shared with me when I’ve been conducting these reviews as the independent interviewer. I’ve tended to find answers are more open / honest when being delivered to an independent third-party than if the supplier asks directly. If you’d like assistance running a third-party review, leave us a note on the contact page. We’d be delighted to assist.
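Coming back to the process “flow” data dot-point above: even a crude analysis of step-entry / step-exit timestamps can reveal where operators are getting stuck. Here’s a minimal sketch of that idea; the event structure, step names and numbers are invented for illustration, and real workflow engines will expose this data differently.

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical workflow events: (order_id, step_name, entered_at, exited_at).
# In practice these would be exported from your OSS's workflow / ticketing engine.
events = [
    ("ORD-1", "design",     "2024-01-02 09:00", "2024-01-02 11:30"),
    ("ORD-1", "activation", "2024-01-02 11:30", "2024-01-04 16:00"),
    ("ORD-2", "design",     "2024-01-03 10:00", "2024-01-03 12:00"),
    ("ORD-2", "activation", "2024-01-03 12:00", "2024-01-06 09:00"),
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

dwell_times = defaultdict(list)
for _, step, entered, exited in events:
    dwell_times[step].append(hours_between(entered, exited))

# Rank steps by median dwell time - the top of this list is the likely sticking point.
for step, times in sorted(dwell_times.items(), key=lambda kv: median(kv[1]), reverse=True):
    print(f"{step:<12} median {median(times):5.1f} h across {len(times)} orders")
```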

Would you hire a furniture maker as an OSS CEO?

Well, would you hire a furniture maker as CEO of an OSS vendor?

At face value, it would seem to be an odd selection right? There doesn’t seem to be much commonality between furniture and OSS does there? It seems as likely as hiring a furniture maker to be CEO of a car maker?

Oh wait. That did happen.

Ford Motor Company made just such a decision last year when appointing Jim Hackett, a furniture industry veteran, as its CEO. Whether the appointment proves successful or not, it’s interesting that Ford made the decision. But why? To focus on user experience and design as its next big differentiator. Clever line of thinking, Bill Ford!

I’ve prepared a slightly light-hearted table for comparison purposes between cars and OSS. Both are worth comparing as they’re both complex feats of human engineering:

| Idx | Comparison Criteria | Car | OSS |
|-----|---------------------|-----|-----|
| 1 | Primary objective | Transport passengers between destinations | Operationalise and monetise a comms network |
| 2 | Claimed “business” justification | Personal freedom | Reducing the cost of operations |
| 3 | Operation of common functionality without conscious thought (developed through years of operator practice) | Steering; changing gears; indicating | Hmmm??? Depends on which sales person or operator you speak with |
| 4 | Error detection and current-state monitoring | Warning lights and instrument cluster/s | Alarm lists, performance graphs |
| 5 | Key differentiator for customers (1970s) | Engine size | Database / CPU size |
| 6 | Key differentiator for customers (2000s) | Gadgets / functions / cup-holders | Functionality |
| 7 | Key differentiator for customers (2020+) | User experience; self-driving; connected car (car as an “experience platform”) | User experience?? Zero-touch assurance? Connected OSS (ie OSS as an experience platform)??? |

I’d like to focus on three key areas next:

  1. Item 3
  2. Item 4 and
  3. The transition between items 6 and 7

Item 3 – operating on auto-pilot

If we reference against item 1, the primary objective, experienced operators of cars can navigate from point A to point B with little conscious thought. Key activities such as steering, changing gears and indicating can be done almost as a background task by our brains whilst doing other mental processing (talking, thinking, listening to podcasts, etc).

Experienced operators of OSS can do primary objectives quickly, but probably not on auto-pilot. There are too many “levers” to pull, too many decisions to make, too many options to choose from, for operators to background-process key OSS activities. The question is, could we re-architect to achieve key objectives more as background processing tasks?

Item 4 – error detection and monitoring

In a car, error detection is also a background task; operators are only notified of critical alerts (eg engine light, fuel tank empty, etc). In an OSS, error detection is not a background task. We need full-time staff monitoring all the alarms and alerts popping up on our consoles! Sometimes they scroll off the page too fast for us to even contemplate.

In a car, monitoring is kept to the bare essentials (speedo, tacho, fuel gauge, etc). In an OSS, we tend to be great at information overload – we have a billion graphs and are never sure which ones, or which thresholds, actually allow us to operate our “vehicle” effectively. So we show them all.
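As a toy illustration of the “bare essentials” idea, here’s a minimal sketch of severity-based suppression, so that operators only see what genuinely needs attention (the car-dashboard equivalent of the engine light). The alarm fields and severity scale are assumptions; real alarm reduction (correlation, de-duplication, root-cause analysis) is of course far more involved.

```python
# Hypothetical alarm records and severity scale, for illustration only.
SEVERITY_RANK = {"critical": 4, "major": 3, "minor": 2, "warning": 1, "info": 0}

alarms = [
    {"resource": "router-01", "severity": "info",     "text": "config sync completed"},
    {"resource": "olt-07",    "severity": "critical", "text": "loss of signal on uplink"},
    {"resource": "core-fw",   "severity": "minor",    "text": "CPU above 70% for 5 minutes"},
]

def dashboard_view(alarm_list, min_severity="major"):
    """Return only the alarms at or above the chosen severity threshold."""
    threshold = SEVERITY_RANK[min_severity]
    return [a for a in alarm_list if SEVERITY_RANK[a["severity"]] >= threshold]

for alarm in dashboard_view(alarms):
    print(f"[{alarm['severity'].upper()}] {alarm['resource']}: {alarm['text']}")
```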

Transitioning from current to future-state differentiators

In cars, we’ve finally reached peak-cup-holders. Manufacturers know they can no longer differentiate from competitors just by having more cup-holders (at least, I think this claim is true). They’ve also realised that even entry-level cars have an astounding list of features that are only supplementary to the primary objective (see item 1). They now know it’s not the amount of functionality, but how seamlessly and intuitively the users interact with the vehicle on end-to-end tasks. The car is now seen as an extension of the user’s phone rather than vice versa, unlike the recent past.

In OSS, I’ve yet to see a single cup holder (apart from the old gag about CD trays). Vendors mark that down – cup holders could be a good differentiator. But seriously, I’m not sure if we realise the OSS arms race of features is no longer the differentiator. Intuitive end-to-end user experience can be a huge differentiator amongst the sea of complex designs, user interfaces and processes available currently. But nobody seems to be talking about this. Go to any OSS event and we only hear from engineers talking about features. Where are the UX experts talking about innovative new ways for users to interact with machines to achieve primary objectives (see item 1)?

But a functionality arms race isn’t a completely dead differentiator. In cars, there is a horizon of next-level features that can be true differentiators, like self-driving or hover-cars. Likewise in OSS, incremental functionality increases aren’t differentiators, but any vendor that can not just discuss, but actually produce, next-level capabilities like zero-touch assurance (ZTA) and automated O2A (Order to Activate) will definitely hold a competitive advantage.

Hat tip to Jerry Useem, whose article in The Atlantic provided the idea seed for this OSS post.

Do you wish more people fell in love with your OSS?

I’d hazard a guess that everyone reading this would admit to being a techie at some level. And being a techie, I’d also imagine that you have blatant tech-love for certain products – gadgets, apps, sites, whatever.

But, let me ask you, are there any OSS products on your love-interest list?

If yes, leave me a comment of “yes” and name of the product below.
If no, leave me a comment of “no” below.

I’m really interested and intrigued to see your answer.

There’s probably only one OSS that I’ve ever had a tech-crush on (but it’s no longer available on the market). It definitely wasn’t love at first sight. If I’m honest, it was probably the opposite. It was a love that took a long time to build. It had some cool modules, but generally it was a bit clunky. The real attraction was that the power and elegance of its data model allowed me to do almost anything with it. To build almost anything with it. To answer almost any business / network / operation question that I could dream up.

I wonder whether the same is true of your other tech-loves? Do they provide the platform for us to create/achieve things that we never dreamed we’d be able to?

If that’s true, I wonder then whether that’s one key to solving the header question?

I wonder whether the other key (the second authentication factor) is the speed with which a user can achieve the necessary level of expertise. Few users ever have the luxury that I had, spending every day for years, to establish the required expertise to make that OSS excel.

As Seth Godin says, “Make things better by making better things.”

PS. If you were kind enough to leave a Yes or No comment below, I’d also love to hear why in an additional comment.

OSS orgitecture

So far this week we’ve been focusing on ways to improve the OSS transformation process. Monday provided 7 models for achieving startup-like efficiency for larger OSS transformations. Tuesday provided suggestions for speeding up the transition from OSS PoC to getting the solution into production, specifically strategies for absorbing an OSS PoC into production.

Both of these posts talk about the speed of getting things done outside the bureaucracy of big operators, big networks and big OSS. Today, as the post title suggests, we’re going to look at orgitecture – how re-designing the structure and culture of an organisation can help streamline digital transformations.

Do you agree with the premise that smaller entities (eg Agile autonomous groups, partners, consultants, etc) can get OSS tasks done more efficiently when operating at arms-length of the larger entity (eg the carrier)? I believe that this is a first principle of physics at play.

If you’ve worked under this arms-length arrangement in the past, you’ll also know that at some point those delivery outcomes need to get integrated back into the big entity. It’s what we referred to yesterday as absorption, where the level of integration effort falls on a continuum between minimally absorbed to fully absorbed.

OSS orgitecture is the re-architecture of the people, processes, culture and org structure to better allow for the absorption process. In the past, all the safety-checks (eg security, approvals, ops handover, etc) were designed on the assumption that internal teams were doing the work. They’re not always a great fit, especially when it comes to documentation review and approval.

For example, I have a belief that the effectiveness of documentation review and approval is inversely proportional to the number of reviewers (in most, but not all cases). Unfortunately, when an external entity is delivering, there tends to be inherently less trust than if an internal entity was delivering. As such, the safety-checks increase.

Another example is when the large organisation uses Agile delivery models, but uses supply partners to deliver scopes of work. The partners are able to assign effort in a sequential / waterfall manner, but can be delayed by only getting timeslices of attention from the client’s staff (ie resources are available according to Agile sprint planning).

Security and cutover planning mechanisms such as Change Review Boards (CRB) have also been designed around old internal delivery models. They also need to be reconsidered to facilitate a pipeline of externally-implemented change.

Perhaps the biggest orgitecture factor is in getting multiple internal business units to work together effectively. In the old world we needed all the business units to reach consensus for a new product to come to market. Sales/Marketing/Products had to work with OSS/IT and Networks. Each of these units tends to have vastly different cultures and different cadences for getting their tasks done. Delivering a new product was as much an organisational challenge as it was a technical challenge and often took months. Those times-to-market are not feasible in a world of software where competitive advantages are fleeting. External entities can potentially help or hinder these timeframes. Careful design of small autonomous teams has the potential to improve abstraction at the interlocks, but culture remains the potential roadblock.

I’m excited by the opportunity for OSS delivery improvement coming from leveraging the gig economy. But if big OSS transformations are to make use of these efficiency gains, then we may also need to consider culture and process refinement as part of the change management.

Seven OSS transformation efficiency models

Do you work in a large organisation? Have you also worked in smaller organisations?
Where have you felt more efficient?

I’ve been lucky enough to work on some massive OSS transformations for large T1 telcos. But I’ve always noticed the inefficiency of working on these projects when embedded inside the bureaucracy of the beast. With all of the documentation, sign-offs, meetings, politics, consensus-gaining, budget allocations, etc, it can sometimes feel so inefficient. On some past projects, I’ve felt I could accomplish more in a day outside the beast than in a week or more inside it.

This makes sense when applying Newton’s second law, F = m × a, to OSS projects. In other words, the greater the mass (of the organisation), the more force must be applied to reach a given acceleration (ie to effect a change).

It’s one of the reasons I love working within a small entity (Passionate About OSS), but into big entities (the big telcos and utilities). It’s also why I strongly believe that the big entities need to better leverage smaller working groups to facilitate big OSS change. Not just OSS transformation, but any project where the size of the culture and technology stack are prohibitive.

Here are a few ways to bring a start-up’s efficiency to a big OSS transformation:

  1. Agile methodologies – If done well, Agile can be great at breaking transformations down into smaller, more manageable pieces. The art is in designing small autonomous teams / responsibilities and breakdown of work to minimise dependencies
  2. Partnerships – Using smaller, external partners to deliver outcomes (eg product builds or service offerings) that can be absorbed into the big organisation. There are varying levels of absorption here – from an external, “clip-the-ticket” offering to offerings that are fully absorbed into the large entity’s OSS/BSS stack
  3. Consultancies – Similar to partnerships, but using smaller teams to implement professional services
  4. Spin-out / spin-in teams – Separating small teams of experts out from the bureaucracy of the large organisation so that they can achieve rapid progress
  5. Smart contracts / RFPs – I love the potential for smart contracts to automate the offer of small chunks of work to trusted partners to bid upon and then deliver upon
  6. Externalised Proofs of Concept (PoC) – One of the big challenges in implementing for large organisations is all of the safety checks that slow progress. Many, such as security and privacy mechanisms, are completely justified for a production network. But when a concept needs to be proved, such as user journeys, product integrations, sand-pit environments, etc, then cloud-based PoCs can be brilliant
  7. Alternate brands – Have you also noticed that some of the tier-1 telcos have been spinning out low-cost and/or niche brands with much leaner OSS/BSS stacks, offerings and related culture lately? It’s a clever business model on many levels. Combined with the strangler fig transformation approach, this might just represent a pathway for the big brand to shed many of their OSS/BSS legacy constraints

Can you think of other models that I’ve missed?

The key to these strategies is not so much the carve-out, the process of getting small teams to do tasks efficiently, but the absorb-in process. For example, how to absorb a cloud-based PoC back into the PROD network, where all safety checks (eg security, privacy, operations acceptance, etc) still need to be performed. More on that in tomorrow’s post.

OSS Best Practices, cough, splutter

“Organizations that seek transformations frequently bring in an army of outside consultants [or implementers in the case of OSS] who tend to apply one-size-fits-all solutions in the name of “best practices.” Our approach to transforming our respective organizations is to rely instead on insiders — staff who have intimate knowledge about what works and what doesn’t in their daily operations.”
Behnam Tabrizi, Ed Lam, Kirk Gerard and Vernon Irvin here

I don’t know about you, but the term “best practices” causes me to make funny noises. A cross between a laugh, cough, derisive snicker and chortle. This noise isn’t always audible, but it definitely sounds inside my head any time someone mentions best practices in the field of OSS.

There are two reasons for my bemusement, no, actually there’s a third, which I’ll share as the story that follows. The first two reasons are:

  • That every OSS project is so different that chaos theory applies. I’m all for systematising aspects of OSS projects to create repeatable building blocks (like TM Forum does with tools such as eTOM). But as much as I build and use repeatable approaches, I know they always have to be customised for each client situation
  • Best practices becomes a mind-set that can prevent the outsiders / implementers from listening to insiders

Luckily, out of all the OSS projects I’ve worked on, there’s only been one where the entire implementation team has stuck with their “best practices” mantra throughout the project.

The team used this phrase as the intellectual high-ground over their OSS-novice clients. To paraphrase their words, “This is best practice. We’ve done it this way successfully for dozens of customers in the past, so you must do it our way.” Interestingly, this project was the most monumental failure of any OSS I’ve worked on.

The implementation team’s organisation lost out because the project was halted part-way through. The client lost out because they had almost no OSS functionality to show for their resource investment.

The project was canned largely because the implementation company wasn’t prepared to budge from their “best practices” thinking. To be honest, their best practices approaches were quite well formed. The only problem was that the changes they were insisting on (to accommodate their 10-person team of outsiders) would’ve caused major re-organisation of the client’s 100,000-person company of insiders. The outsiders / implementers either couldn’t see that or were so arrogant that they wanted the client to bend anyway.

That was a failure on their behalf no doubt, but not the monumental failure. I could see the massive cultural disconnect between client and implementer very early. I could even see the way to fix it (I believe). I was their executive advisor (the bridge between outsiders and insiders) so the monumental failure was mine. Not through lack of trying, I was unable to persuade either party to consider the other’s perspective.

Without compromise, the project became compromised.

All OSS products are excellent. So where’s the advantage?

“You don’t get differential advantage from your products, it’s from the way you speak to and relate to your customers. All products are excellent these days.”

The quote above paraphrases Malcolm McDonald from a podcast about his book, “Malcolm McDonald on Value Propositions: How to Develop Them, How to Quantify Them.”

This quote had nothing to do with OSS specifically, but consider for a moment how it relates to OSS.

Consider it also in relation to the diagram below.
Long-tail features

Let’s say the x-axis on this graph shows a list of features within any given OSS product. And the y-axis shows a KPI that measures the importance of each feature (eg number of uses, value added by using that feature, etc).

As Professor McDonald indicates, all OSS products are excellent these days. And all product vendors know what the most important features are. As a result, they all know they must offer the features that appear on the left-side of the image. Since all vendors do the left-side, it seems logical to differentiate by adding features to the far-right of the image, right?

Well actually, there’s almost no differential advantage at the far-right of the image.
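To put some (entirely invented) numbers behind that picture, here’s a minimal sketch that ranks features by a usage KPI and shows how quickly the cumulative share saturates. The feature names and counts are made up purely to illustrate the long-tail shape.

```python
# Hypothetical usage counts per feature - invented numbers for illustration.
feature_usage = {
    "alarm list": 5200, "service activation": 3100, "inventory search": 2400,
    "ticketing": 1900, "bulk import": 240, "custom report builder": 90,
    "legacy protocol adapter": 12, "experimental topology view": 3,
}

total = sum(feature_usage.values())
cumulative = 0.0
print(f"{'feature':<28} {'uses':>6} {'share':>7} {'cum.':>7}")
for feature, uses in sorted(feature_usage.items(), key=lambda kv: kv[1], reverse=True):
    share = uses / total
    cumulative += share
    print(f"{feature:<28} {uses:>6} {share:>7.1%} {cumulative:>7.1%}")
```

The first few features carry almost all of the usage; everything added at the far right barely moves the cumulative line.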

Now what if we consider the second part of Prof McDonald’s statement on differential advantage, “…it’s from the way you speak to and relate to your customers.”

To me it implies that the differential advantage in OSS is not in the products, but in the service wrapper that is placed around it. You might be saying, “but we’re an OSS product company. We don’t want to get into services.” As described in this earlier post, there are two layers of OSS services.

One of the layers mentioned is product-related services (eg product installation, product configuration, product release management, license management, product training, data migration, product efficiency / optimisation, etc). None of these items would appear as features on the long-tail diagram above. Perhaps as a result, it’s these items that are often less than excellent in vendor offerings. It’s often in these items where complexity, time, cost and risk are added to an OSS project, increasing stress for clients.

If Prof McDonald is correct and all OSS products are excellent, then perhaps it’s in the services wrapper where the true differential advantage is waiting to be unlocked. This will come from spending more time relating to customers than cutting more code.

What if we take it a step further? What if we seek to better understand our clients’ differential advantages in their markets? Perhaps this is where we will unlock an entirely different set of features that will introduce new bands on the left-side of the image. I still feel that amazing OSS/BSS can give carriers significant competitive advantage in their marketplace. And the converse can give significant competitive disadvantage!

Are you desperately seeking to increase your OSS‘s differential advantage? Contact us at Passionate About OSS to help map out a way.

Can OSS/BSS assist CX? We’re barely touching the surface

Have you ever experienced an epic customer experience (CX) fail when dealing with a network service operator, like the one I described yesterday?

In that example, the OSS/BSS, and possibly the associated people / process, had a direct impact on poor customer experience. Admittedly, that 7 truck-roll experience was a number of years ago now.

We have fewer excuses these days. Smart phones and network connected devices allow us to get OSS/BSS data into the field in ways we previously couldn’t. There’s no need for printed job lists, design packs and the like. Our OSS/BSS can leverage these connected devices to give far better decision intelligence in real time.

If we look to the logistics industry, we can see how parcel tracking technologies help to automatically provide status / progress to parcel recipients. We can see how recipients can also modify their availability, which automatically adjusts logistics delivery sequencing / scheduling.

This has multiple benefits for the logistics company:

  • It increases first time delivery rates
  • Improves the ability to automatically notify customers (eg email, SMS, chatbots)
  • Decreases customer enquiries / complaints
  • Decreases the amount of time the truck drivers need to spend communicating back to base and with clients
  • But most importantly, it improves the customer experience

Logistics is an interesting challenge for our OSS/BSS due to the sheer volume of customer interaction events handled each day.

But there’s another area that excites me even more, where CX is improved through improved data quality:

  • It’s the ability for field workers to interact with OSS/BSS data in real-time
  • To see the design packs
  • To compare with field situations
  • To update the data where there is inconsistency.

Even more excitingly, to introduce augmented reality to assist with decision intelligence for field work crews:

  • To provide an overlay of what fibres need to be spliced together
  • To show exactly which port a patch-lead needs to connect to
  • To show where an underground cable route goes
  • To show where a cable runs through trayway in a data centre
  • etc, etc

We’re barely touching the surface of how our OSS/BSS can assist with CX.

The OSS Tinder effect

On Friday, we provided a link to an inspiring video showing Rolls-Royce’s vision of an operations centre. That article is a follow-on from other recent posts about the pros and cons of using MVPs (Minimum Viable Products) as an OSS transformation approach.

I’ve been lucky to work on massive OSS projects. Projects that have taken months / years of hard implementation grind to deliver an OSS for clients. One was as close to perfect (technically) as I’ve been involved with. But, alas, it proved to be a failure.

How could that be you’re wondering? Well, it’s what I refer to as the Tinder Effect. On Tinder, first appearances matter. Liked or disliked at the swipe of a hand.

Many new OSS are delivered to users who are already familiar with one or more OSS. If they’re not as pretty or as functional or as intuitive as what the users are accustomed to, then your OSS gets a swipe to the left. As we found out on that project (a ‘we’ that included all the client’s stakeholders and sponsors), first impressions can doom an otherwise successful OSS implementation.

Since then, I’ve invested a lot more time into change management. Change management that starts long before delivery and handover. Long before designs are locked in. Change management that starts with hearts and minds. And starts by involving the end users early in the change process. Getting them involved in the vision, even if not quite as elaborate as Rolls-Royce’s.

An OSS theatre of combat

Have you sat on both sides of the OSS procurement process? That is, been an OSS buyer (eg writing an RFP) and an OSS seller (eg responded to an RFP) on separate projects?

Have you noticed the amount of brain-power allocated to transferral of risk from both angles?

If you’re the buyer, you seek to transfer risk to the seller through clever RFP clauses.
If you’re the seller, you seek to transfer risk to the buyer through exclusions, risk margins, etc in your RFP response.

We openly collaborate on features during the RFP, contract formation, design and implementation phases. We’re open to finding the optimal technical solution throughout those phases.

But when it comes to risk, it’s bordering on passive-aggressive behaviour when you think about it. We’re also not so transparent or collaborative about risk in the pre-implementation phases. That increases the likelihood of combative risk / issue management during the implementation phase.

The trusting long-term relationship that both parties wish to foster starts off with a negative undercurrent.

The reality is that OSS projects carry significant risk. Both sides carry a large risk burden. It seems like we could be as collaborative on risks as we are on requirements and features.

Thoughts?

OSS transformation is hard. What can we learn from open source?

Have you noticed an increasing presence of open-source tools in your OSS recently? Have you also noticed that open-source is helping to trigger transformation? Have you thought about why that might be?

Some might rightly argue that it is the cost factor. You could also claim that they tend to help resolve specific, but common, problems. They’re smaller and modular.

I’d argue that the reason relates to our most recent two blog posts. They’re fast to install (don’t need to get bogged down in procurement) and they’re easy to run in parallel for comparison purposes.

If you’re designing an OSS can you introduce the same concepts? Your OSS might be for internal purposes or to sell to market. Either way, if you make it fast to build and easy to use, you have a greater chance of triggering transformation.

If you have a behemoth OSS to “sell,” transformation persuasion is harder. The customer needs to rally more resources (funds, people, time) just to compare with what they already have. If you have a behemoth on your hands, you need to try even harder to be faster, easier and more modular.

I have the need for OSS speed

You already know that speed is important for OSS users. They / we don’t want to wait for minutes for the OSS to respond to a simple query. That’s obvious right? The bleeding obvious.

But that’s not what today’s post is about. So then, what is it about?

Actually, it follows on from yesterday’s post about re-framing an OSS transformation. If a parallel pilot OSS can be stood up in weeks, then it helps persuasion. If the OSS is also fast for operators to learn, then it helps persuasion. Why is that important? How can speed help with persuasion?

Put simply:

  • It takes x months of uncertainty out of the evaluators’ lives
  • It takes x months of parallel processing out of the evaluators’ lives
  • It also takes x months of task-switching out of the evaluators’ lives
  • Given x months of their lives back, customers will be more easily persuaded

It also helps with the parallel bake-off if your pilot OSS shows a speed improvement.

Whether we’re the buyer or seller in an OSS pilot, it’s incumbent upon us to increase speed.

You may ask how. Many ways, but I’d start with a mass-simplification exercise.

Re-framing an OSS replacement strategy

Friday’s post posed a re-framing exercise that asked you (whether customer, seller or integrator) to run a planning exercise as if you MUST offer a money-back guarantee on your OSS (whether internal or external). It’s designed to force a change in mindset from risk mitigation to risk removal.

We have another re-framing exercise for you today.

As we all know, incumbent OSS can be really difficult to replace / usurp. It becomes a massive exercise for a customer to change the status quo. And when you’re on the team that’s trying to instigate change (again whether you’re internal or external to the OSS customer organisation), you want to minimise the barriers to change.

The ideal replacement approach is to put a parallel pilot in place (which also bears some similarity with the strangler fig analogy). Unfortunately the pilot approach doesn’t get used as often as it could because pilot implementation projects tend to take months to stand up. This implies significant effort and cost, which in turn implies a major procurement event needs to occur.

If the parallel pilot could be stood-up in days or a couple of weeks, then it becomes a more useful replacement persuasion strategy.

So today’s re-framing exercise is to ask yourself what you could do to stand up a pilot version of your OSS in only days/weeks and at very little cost?

Let me add an extra twist to that exercise. When I say stand up the OSS in days/weeks, I also mean to hand over to the users, which means that it has to be intuitive enough for operators to begin using with almost no training. Don’t forget that the parallel solution is unlikely to have additional resources to operate it. It’s likely that the same workforce will need to operate incumbent and pilot, performing a comparison.

So, what could you do to stand up a pilot version of your OSS in only days/weeks, at very little cost and with almost immediate take-up by users?

What’s the one big factor holding back your OSS? And the exercise to reduce it

We’ve talked about some of the emotions we experience in the OSS industry earlier this week, the trauma of OSS and anxiety relating to OSS.

To avoid these types of miserable feelings, it’s human nature to seek to limit them. We over-analyse, we over-specify, we over-engineer, we over-document, we over-contract, we over-react, we over-estimate (nah, actually we almost never over-estimate do we?), we over-resource (well, actually, we don’t seem to do that very often either). Anyway, you get the “over” idea.

What is the one big factor that leads to all of these overs? What is the one big factor that makes our related costs and delivery times become overs too?

Have you guessed yet?

The answer is…… drum-roll please…… RISK.

Let’s face it. OSS projects are as full as a centipede’s sock drawer when it comes to risk. The customer carries risks, the supplier carries risk, the integrators carry risk, the sponsors carry risk, the end-users carry risk, the implementers carry risk. What a burden! And it is a burden that impacts in many ways, as indicated in the triple constraint of OSS projects.

Anyone who’s done more than a few OSS projects knows there are many risks and they tend to respond by going into over-mode (ie all the overs mentioned above). That’s a clever strategy. It’s called risk mitigation.

But today’s post isn’t about risk mitigation. It takes a contrarian approach. Let me explain.

Have you noticed how many companies build risk reduction techniques into their sales models? Phrases like “money-back guarantee” abound. This technique is designed to remove most of the risk for the customer and also remove the associated barrier to purchase. To be fair, it might not actually be a case of removing the risk, but directing all of the risk onto the seller. Marketers call it risk reversal.

I’m sure you’re thinking, “well that’s fine for high-volume, low-cost products like burgers or books, but not so easy for complex, customised solutions like OSS.” I hear you!

I’m not actually asking you to offer a money-back guarantee for your OSS, although Passionate About OSS does offer that all the way from our products through to our high-end consultancy services.

What I am asking you to do (whether customer, seller or integrator) is to run a planning exercise as if you MUST offer a money-back guarantee. What that forces is a change of mindset from risk mitigation to risk removal. It forces consideration of what are the myriad risks “in the system” (for customer, seller and integrator) and how can they be removed? Here are a few risk planning suggestions FWIW.

Set the following challenge for your analysts and engineers – Don’t come to me with a business case for the one-million-and-first feature to add, but prove your brilliance by showing me the business case for the risks you will remove. Risk reduction rather than feature-add or cost-out business cases.

Let me know what you discover and what your results are.

Identifying the fault-lines that trigger OSS churn

Most people slog through their days in a dark funk. They almost never get to do anything interesting or go to interesting places or meet interesting people. They are ignored by marketers who want them to buy their overpriced junk and be grateful for it. They feel disrespected, unappreciated and taken for granted. Nobody wants to take the time to listen to their fears, dreams, hopes and needs. And that’s your opening.
John Carlton

Whilst the quote above may relate to marketing, it also has parallels in the build and run phases of an OSS project. We talked about the trauma of OSS yesterday, where the OSS user feels so much trauma with their current OSS that they’re willing to go through the trauma of an OSS transformation. Clearly, a procurement event must be preceded by a significant trauma!

Sometimes that trauma has its roots in the technical, where the existing OSS just can’t do (or be made to do) the things that are most important to the OSS user. Or it can’t do them reliably, at scale, in time, cost-effectively, or without significant risk / change. That’s a big factor certainly.

However, the churn trigger appears to more often be a human one. The users feel disrespected, unappreciated and taken for granted. But here’s an interesting point that might surprise some users – the suppliers also often feel disrespected, unappreciated and taken for granted.

I have the privilege of working on both sides of the equation, often even as the intermediary between both sides. Where does the blame lie? Where do the fault-lines originate? The reasons are many and varied of course, but like a marriage breakup, it usually comes down to relationships.

Where the communication method is through hand-grenades being thrown over the fence (eg management by email and by contractual clauses), results are clearly going to follow a deteriorating arc. Yet many OSS relationships structurally start from a position of us and them – the fence is erected – from day one.

Coming from a technical background, it took me far too deep into my career to come to this significant realisation – the importance of relationships, not just the quest for technical perfection. The need to listen to both sides’ fears, dreams, hopes and needs.