Micro-strangulation vs COTS customisation

Over the last couple of posts, we’ve referred to the following diagram and the glass ceiling it shows forming over OSS feature releases:
The increasing percentage of tech debt

Yesterday’s post indicated that the current proliferation of microservices has the potential to amplify the strangulation.

So how does that compare with the previous approach that was built around COTS (Commercial off-the-shelf) OSS packages?

With COTS, the same time-series chart applies, except that the management of legacy, etc falls largely to the COTS vendor, freeing up the service provider… until the service provider starts building customisations and the overhead becomes shared.

With microservices, the rationalisation responsibility is shifted to the in-house (or insourced) microservice developers.

A third option: if the COTS is actually delivered via a cloud “OSS as a service” (OSSaaS) model, then there’s a greater incentive for the vendor to constantly re-factor and reduce clutter.

A fourth option, which I haven’t actually seen as a business model yet: once an accumulation of modular microservices begins to grow, vendors might start to offer those microservices as a COTS offering.

Is micro-strangulation underway within OSS?

Yesterday’s post spoke of how the accumulation of features was limiting us to small, incremental change.

The diagram below re-tells that story:
The increasing percentage of tech debt

You’ve probably noticed that microservices are the big buzz in our industry. They’re perceived as being the big white hope for our future. I have my reservations though.

If you’re at t0 in the chart above, microservices allow for rapid rollout of features – even of whole small-grid architectures (in a Lean / MVP world). My reservations stem from the propensity for rapid release of microservices to amplify the accumulation of tech debt (ie the escalation of maintenance and testing in the chart above). They have the potential to take organisations to t0+100 really quickly.

The upside though is that replacement or re-factoring of smaller modules (ie microservices) should be easier than the change-out of monolithic software suites. The one caveat… we have to commit to a culture of subtraction projects being as important as feature releases.

The strangulation of OSS feature releases

The diagram below provides a time-sequence view of how tech-debt accumulation eventually strangles new OSS feature releases unless drastic measures are taken.

The increasing percentage of tech debt

At start-up (t0), the system is brand new and has no legacy to maintain, so all effort can be dedicated to delivering new features (or products) as well as testing to ensure control of quality.

But over time (t0 + 10, where 10 is a nominal metric that could be days, years, release cycles, etc), effort is now required to maintain existing functionality / infrastructure. Not only that, but the test load increases: new features need to be tested, and regression testing needs to be done on the legacy because there are now more variants to consider. You’ll notice that less of the effort is now available for adding new features.

The rest of the chart is self-explanatory I hope. Over a longer period of time, so much effort is required just to maintain and assure the status quo that there is almost no time left to add new features. Any new features come with a significant testing and maintenance load.
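
To make the squeeze explicit, here’s a toy model of that chart (my own illustration; the capacity and per-feature overheads are invented assumptions, not measurements). Every feature shipped adds a small, permanent maintenance and regression-testing load, so the capacity left for new features shrinks towards a glass ceiling:

```python
# Toy model of the tech-debt chart. All constants are illustrative assumptions.
TOTAL_EFFORT = 100        # team capacity per release cycle (arbitrary units)
MAINT_PER_FEATURE = 0.4   # assumed ongoing maintenance load per accumulated feature
TEST_PER_FEATURE = 0.3    # assumed regression-test load per accumulated variant

legacy = 0.0
for t in range(0, 101, 10):                     # t0, t0+10, ..., t0+100
    overhead = legacy * (MAINT_PER_FEATURE + TEST_PER_FEATURE)
    new_features = max(TOTAL_EFFORT - overhead, 0)
    print(f"t0+{t:>3}: legacy={legacy:6.1f}  maintenance+testing={overhead:6.1f}  new features={new_features:6.1f}")
    legacy += new_features                      # everything shipped becomes tomorrow's legacy
```

Run it and the “new features” column collapses within a handful of cycles unless something pulls the legacy number back down – which is exactly the strangulation the chart depicts.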

Many traditional telcos (Mammoths) and their OSS suites have found themselves at t0+100. The legacy is so large and entwined that it’s a massive undertaking to make any pivotal change (the chess-board analogy).

This is where startups and the digital / cloud players have a significant disruptive advantage over the Mammoths. They’re at t0 to t0+10 (if the metric is in years) and can roll out proportionally more new features.

What the chart above doesn’t show is subtraction projects, the effort required to ensure the legacy maintenance load and number of variants (ie testing load) are hacked away at every opportunity. The digital players call this re-factoring and the telcos, well, they don’t really have a name for it because they rarely do it (do they?).

Telcos (and their OSS suites) are like hoarders, starting off with an empty house (t0) and progressively filling it with stuff until they can barely see any carpet for the clutter (t0+100). It generally takes the intervention of an outsider to force a de-cluttering because the hoarder can’t see the problem.

The risk with the Agile / DevOps / continuous-release movement that’s currently underway is that it rapidly speeds up the release cadence. We might be near t0 now, but we’ll reach t0+100 far faster than we did back when release cadences were far slower.

Can we all see that an additional colour MUST be added to the time-series chart above – the colour that represents reductionist effort? I’m so passionate about this that it’s a strong thread running through the arc of my next book (keep an eye out for upcoming posts as I’ll be seeking your help and insights on it in the lead-up to launch).

A career without OSS

Have you ever noticed that the biographies of almost every successful person contain the chapter(s) where everything goes disastrously? It seems inevitable that there are periods in our careers where things don’t go right, no matter how successful we are.

Interestingly my least successful project was also one that had only a very small OSS component to it. It was one of the triggers to starting PAOSS.com. PAOSS was a way to remain connected to OSS outside the demands of that day job.

That project may’ve been less successful, but it certainly wasn’t short on handing me lessons. It wasn’t the lack of OSS in that day job that made it less successful. I’ve done other telco projects that have given very different, valuable insights on OSS without being directly related to OSS.

I’ve recently had a number of job offers that have looked quite exciting. They’ve made me re-think whether I’d be better at my “art” (with PAOSS as the vehicle) if it wasn’t also my main career arc.

Derek Sivers has an interesting take on this here, “Do something for love, and something for money. Don’t try to make one thing satisfy your entire life. In practice, then, each half of your life becomes a remedy for the other. You get paid and get stability for part of your day, but then need creative time for expression.”

Contrary to Derek’s suggestion, do you combine your art with your job? If OSS is your job, what is your art?

Can the OSS mammoths survive extinction?

“Startups win with data. Mammoths go extinct with products.”
Jay Sharma.

Interesting phraseology. I love the play on words with the term mammoths. There are some telcos that are mammoth in size but are threatened with extinction through changes in their environment and new competitors appearing.

I tend to agree with the intent of the quote, but also have some reservations. For example, products are still a key part of the business model of digital phenoms like Google, Facebook, etc. It’s their compelling products that allow them to collect the all-important data. As consumers, we want the product; they get our data. We also want the products sold by the Mammoths, but perhaps they don’t leverage the data entwined in our usage (or, more importantly, the advertising revenue that gets attracted to all that usage) as well as the phenoms do.

Another interesting play on words exists here for the telcos – in the “winning with data.” Telcos are losing at data (their profitability per bit is rapidly declining to the point of commoditisation), so perhaps a mindset shift is required: moving from a business model built on the transport of data to one based on understanding of, and learning from, data. It’s certainly not a lack of data that’s holding them back. Our OSS / BSS collect and curate plenty. The difference is that Google’s and Facebook’s customers are advertisers, whilst the Mammoths’ customers are subscribers.

As OSS providers, the question remains for us to solve – how can we provide the products that allow the Mammoths to win with data?

PS. The other part of this equation is the rise of data privacy regulations such as GDPR (General Data Protection Regulation). Is it just me, or do the Mammoths seem to attract more attention in relation to privacy of our data than the OTT service providers?

Analytics and OSS seasonality

Seasonality is an important factor for network and service assurance. It’s also known as time-of-day/week/month/year specific activity.

For example, we often monitor network health through the analysis of performance metrics (eg CPU utilisation) and set up thresholds to alert us if those metrics go above (or below) certain levels. The most basic threshold is a fixed one (eg if a CPU goes above 95% utilisation, then raise an alert). However, this might just create unnecessary activity. Perhaps we run an extract at 2am every night, which causes CPU utilisation to sit at nearly 100% for long periods of time. We don’t want to receive an alert in the middle of the night for what might be expected behaviour.

Another example might be a higher network load for phone / SMS traffic on major holidays or during disaster events.

The great thing about modern analytics tools is that, as long as they have a long time series of data, they can spot patterns of expected behaviour at certain times/dates that humans might not be observing, and adjust alerting accordingly. This reduces the number of spurious notifications for network assurance operators to chase up.
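
As a minimal sketch of the idea (mine, not from any particular analytics product; the three-standard-deviation sensitivity is an assumption), a seasonality-aware threshold can be as simple as a per-hour-of-day baseline:

```python
# A seasonality-aware threshold sketch. Instead of a fixed "alert above 95%" rule,
# we build a per-hour-of-day baseline from historical CPU samples, so the nightly
# 2am extract doesn't page anyone.
from collections import defaultdict
from statistics import mean, stdev

def build_hourly_baseline(samples):
    """samples: list of (hour_of_day, cpu_percent) tuples from historical data."""
    by_hour = defaultdict(list)
    for hour, cpu in samples:
        by_hour[hour].append(cpu)
    # Threshold per hour = mean + 3 standard deviations (assumed sensitivity).
    return {
        hour: mean(values) + 3 * (stdev(values) if len(values) > 1 else 0)
        for hour, values in by_hour.items()
    }

def should_alert(baseline, hour, cpu_percent):
    """Alert only if this reading is unusual for this time of day."""
    return cpu_percent > baseline.get(hour, 95.0)  # fall back to a fixed threshold

# Usage: a 97% reading at 02:00 is normal (nightly extract); the same reading at 14:00 isn't.
history = [(2, 98), (2, 97), (2, 99), (14, 40), (14, 45), (14, 38)]
baseline = build_hourly_baseline(history)
print(should_alert(baseline, 2, 97))   # False - expected behaviour
print(should_alert(baseline, 14, 97))  # True - anomalous for mid-afternoon
```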

10 ways to #GetOutOfTheBuilding

Eric Ries’ “The Lean Startup” has a short chapter entitled “Get out of the Building.” It basically describes getting away from your screen – away from reading market research, white papers, your business plan, your code, etc – and out into customer-land. Out of your comfort zone and into a world of primary research that extends beyond talking to your uncle (see the video below for that reference!).

This concept applies equally well to OSS product developers as it does to start-up entrepreneurs. In fact, the concept is so important that the chapter name has inspired its own hashtag (#GetOutOfTheBuilding).

This YouTube video provides 10 tips for getting out of the building (I’ve started the clip at Tendai Charasika’s list of 10 ways but you may want to scroll back a bit for his more detailed descriptions).

But there’s one thing that’s even better than getting out of the building and asking questions of customers. After all, customers don’t always tell the complete truth (even when they have good intentions). No, the better research is to observe what they do, not what they say. #ObserveWhatTheyDoNotWhatTheySay

This could be by being out of the building and observing customer behaviour… or it could be through looking at customer usage statistics generated by your OSS. That data might just show what a customer is doing… or not doing (eg customers might do small volume transactions through the OSS user interface, but have a hack for bulk transactions because the UI isn’t efficient at scale).
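
As a hypothetical sketch of what that primary data might reveal (the field names, event sources and ratio threshold are all illustrative assumptions, not from any real product):

```python
# #ObserveWhatTheyDoNotWhatTheySay via OSS usage logs: flag users who push far more
# volume through the API / bulk-load path than the UI, suggesting the UI doesn't
# scale for their workflow.
from collections import Counter

def find_bulk_workarounds(ui_events, api_events, ratio_threshold=10):
    """Return users whose bulk/API volume dwarfs their UI volume."""
    ui_counts = Counter(e["user"] for e in ui_events)
    api_counts = Counter(e["user"] for e in api_events)
    return [
        user for user, api_count in api_counts.items()
        if api_count > ratio_threshold * max(ui_counts.get(user, 0), 1)
    ]

# Usage: these users are telling you (through behaviour, not words) where the UI falls short.
ui = [{"user": "ops_anna"}] * 5 + [{"user": "ops_raj"}] * 40
api = [{"user": "ops_anna"}] * 200 + [{"user": "ops_raj"}] * 10
print(find_bulk_workarounds(ui, api))  # ['ops_anna']
```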

Not sure if it’s indicative of the industry as a whole, but my experience working for / with vendors is that they don’t heavily subscribe to either of these hashtags when designing and refining their products.

Does your OSS collect primary data to #ObserveWhatTheyDoNotWhatTheySay? If it does, do you ever make use of it? Or do you prefer to talk with your uncle (does he know much about OSS BTW)?

Are your OSS better today than they were 5 years ago?

Are your OSS better today than they were 5 years ago?
(or 10, 15, 20 years depending on how long you’ve been in the industry) 

Your immediate reaction to this question is probably going to be, “Yes!” After all, you and your peers have put so much effort into your OSS in the last 5 years. They have to be better right?

On the basis of effort, our OSS are definitely more capable… but let me ask again, “Are they better?”

How do they stack up on key metrics such as:

  1. Do they need fewer staff to run / maintain?
  2. Do they allow products to be released to market more quickly?
  3. Do they allow customer services to be ready for service (RFS) faster?
  4. Are mean times to repair (MTTR) faster when there’s a problem in the network?
  5. Are bills more accurate (and do they need less intervention across all of the parties that contribute)?
  6. Are there fewer fall-outs (eg customer activations that get lost in the ether)?
  7. Are we better at delivering (or maintaining) OSS on budget?
  8. Are our CAPEX and OPEX budgets lower?
  9. Are our front-office staff (eg retail, contact centres, etc) able to deliver better outcomes for customers via our OSS/BSS?
  10. Are our average truck-rolls per activation lower?
  11. Are the insights we’re identifying generating longer-run competitive advantages?
  12. etc, etc

Maybe it’s the rose-coloured glasses, but my answer to the initial question when framed against these key metrics is, “Probably not,” but with a couple of caveats.

Our OSS are certainly far more complicated. The bubble in which we operate is far more complicated (ie network types, product offerings, technology options, contact channels, more touchpoints, etc). This means more variants for our OSS / BSS to handle. In addition, we’ve added a lot more functionality (ie complexity of our own).

Comparison of metrics will vary greatly across different OSS operators – some for the better, some worse. Maybe I’m just working on projects that are more challenging now than I was 5, 10, 15 years ago.

Do you have the data to confirm / deny that your OSS is better than in years past?

PS. Oh, and one last call-out. You’ll notice that the metrics above tend to be cross-silo. I have no doubt that individual OSS products have improved in terms of functionality, usability, processing speeds, etc. But what about our end-to-end workflows through our OSS/BSS suite of products?
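
If we wanted data rather than recollection, the comparison has to happen at that cross-silo, end-to-end level. Here’s a minimal sketch (the event sources and field names are assumptions) of one such metric – order capture to ready-for-service (RFS) lead time – stitched across the separate systems that each hold part of the timeline:

```python
# Cross-silo metric sketch: end-to-end lead time from order capture (BSS) to RFS (OSS).
from datetime import datetime
from statistics import median

def e2e_lead_times(order_events, activation_events):
    """Join order-entry timestamps with RFS timestamps per order; return hours per order."""
    ordered = {e["order_id"]: datetime.fromisoformat(e["ts"]) for e in order_events}
    rfs = {e["order_id"]: datetime.fromisoformat(e["ts"]) for e in activation_events}
    return {
        oid: (rfs[oid] - ordered[oid]).total_seconds() / 3600
        for oid in ordered if oid in rfs
    }

orders = [{"order_id": "A1", "ts": "2018-03-01T09:00:00"},
          {"order_id": "A2", "ts": "2018-03-01T10:00:00"}]
activations = [{"order_id": "A1", "ts": "2018-03-02T09:00:00"},
               {"order_id": "A2", "ts": "2018-03-05T10:00:00"}]
lead = e2e_lead_times(orders, activations)
print(median(lead.values()))  # median hours from order to RFS
```

Trend that median year on year and you have a defensible answer to metric 3 above, independent of how fast any individual tool has become.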

Watching customers under an omnichannel strobe light

Omnichannel will remain full of holes until we figure out a way of tracking user journeys rather than trying to prescribe (design, document, maintain) process flows.

As a customer jumps between the various channels, they move between systems. In doing so, we tend to lose the ability to watch the customer’s journey as a single continuous flow. It’s like trying to watch customer behaviour under a strobe light… except that the light only strobes on for a few seconds every minute.

Theoretically, omnichannel is a great concept for customers because it allows them to step through any channel at any time to suit their unique behavioural preferences. In practice, it can be a challenging experience for customers because of a lack of consistency and flow between channels.

It’s a massive challenge for providers to deliver consistency and flow because the disparate channels have vastly different user interfaces and experiences. IVR, digital, retail, etc all come from completely different design roots.

Vendors are selling the dream of cost reductions through improved efficiency within their channels. Unfortunately this is the wrong place for a service provider to look. It’s the easier place to look, but the wrong place nonetheless. Processes already tend to be relatively efficient within a channel and data tends to be tracked well within a channel.

The much harder, but better place to seek benefits is through the cross-channel user journeys, the hand-offs between channels. That’s where the real competitive advantage opportunities lie.
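
A hypothetical sketch of what “tracking the journey” could look like (the channel names, field names and sample events are assumptions): stitch each channel system’s events into one timeline per customer, then measure the hand-off gaps where customers currently fall between the cracks.

```python
# Stitch per-channel events into one journey per customer, then surface the hand-offs.
from itertools import groupby
from operator import itemgetter

def stitch_journeys(events):
    """events: dicts with customer_id, channel (ivr/digital/retail), ts (sortable)."""
    events = sorted(events, key=itemgetter("customer_id", "ts"))
    return {cust: list(group)
            for cust, group in groupby(events, key=itemgetter("customer_id"))}

def cross_channel_handoffs(journey):
    """Return (from_channel, to_channel, gap) for each channel switch in one journey."""
    hops = []
    for prev, curr in zip(journey, journey[1:]):
        if prev["channel"] != curr["channel"]:
            hops.append((prev["channel"], curr["channel"], curr["ts"] - prev["ts"]))
    return hops

events = [
    {"customer_id": "C1", "channel": "digital", "ts": 0},
    {"customer_id": "C1", "channel": "ivr", "ts": 180},      # abandoned the web form, called in
    {"customer_id": "C1", "channel": "retail", "ts": 7200},  # still unresolved, walked into a store
]
for cust, journey in stitch_journeys(events).items():
    print(cust, cross_channel_handoffs(journey))
```

The interesting numbers are the gaps and repeat contacts between channels, not the processing times within any one of them.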

The unfair OSS advantage

My wife and I attended a Christmas party over the weekend and on the trip home we discussed customer service. In particular we were discussing the customer service training she’d had, as well as the culture of customer service reinforcement she’d experienced via leaders and peers in her industry. She doesn’t work in ICT or OSS (obviously?).

In our industry, we talk the customer experience talk via metrics like NPS (Net Promoter Score). However, I don’t recall ever working with a company that provided customer service training or had a strong culture of reinforcing customer service behaviours. Some might claim that it’s just an unwritten rule / expectation.

Conversely, some players in our industry go the opposite way and appear to have the mentality of trying to screw over their customers. Their customers know it and don’t like it but are locked in for any number of reasons.

As OSS implementers, the more consistent trend seems to be a culture of technical perfection. I know I’ve dropped the ball on customer service in the past by putting the technical solution ahead of the customer. I feel bad about that on reflection.

Perhaps what we don’t realise is that we’re missing out on an unfair advantage.

As Seth Godin states in this blog, “Here’s a sign I’ve never seen hanging in a corporate office, a mechanic’s garage or a politician’s headquarters:
WE HAVE AN UNFAIR ADVANTAGE:
We care more.

It’s easy to promise and difficult to do. But if you did it, it would work. More than any other skill or attitude, this is what keeps me (and people like me) coming back.”

Could it be a real differentiator in our fragmented market?

Do you want dirty or clean automation?

Earlier in the week, we spoke about the differences between dirty and clean consulting, as posed by Dr Richard Claydon, and how it impacted the use of consultants on OSS projects.

The same clean / dirty construct applies to automation projects / tools such as RPA (Robotic Process Automation).

Clean Automation = simply building robotic automations (ie fixed algorithms) that manage existing process designs
Dirty Automation = understanding the process deeply first, optimising it for automation, then creating the automation.

The first is cheap(er) and easy(er)… in the short-term at least.
The second requires getting hands dirty, analysing flows, analysing work practices, analysing data / logs, understanding operator psychology, identifying inefficiencies, refining processes to make them better suited to automation, etc.

Dirty automation requires analysis not just of the SOP (Standard Operating Procedure), but of the actual state-changes that occur from start to end of each iteration of the process.
This also makes a better launching-off point for machine learning (ie cognitive automation), as opposed to purely algorithmic or robotic automation.
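
As a minimal sketch of that dirty first step (field names and the sample log are illustrative assumptions, not from a real RPA tool), mining the state-change log shows how the process actually runs, variant by variant, before any automation is designed:

```python
# Mine observed process variants from state-change logs before automating anything.
from collections import Counter

def mine_variants(events):
    """events: dicts with case_id, state, ts. Returns each observed path and its frequency."""
    cases = {}
    for e in sorted(events, key=lambda e: (e["case_id"], e["ts"])):
        cases.setdefault(e["case_id"], []).append(e["state"])
    return Counter(tuple(path) for path in cases.values())

log = [
    {"case_id": 1, "state": "raised", "ts": 1}, {"case_id": 1, "state": "assigned", "ts": 2},
    {"case_id": 1, "state": "resolved", "ts": 3},
    {"case_id": 2, "state": "raised", "ts": 1}, {"case_id": 2, "state": "assigned", "ts": 2},
    {"case_id": 2, "state": "reassigned", "ts": 3}, {"case_id": 2, "state": "resolved", "ts": 4},
]
for variant, count in mine_variants(log).most_common():
    print(count, " -> ".join(variant))
# The SOP describes one path; the log shows the detours worth removing before automating.
```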

What in OSS does nobody agree with you on?

Peter Thiel (co-founder of PayPal, Founders Fund and many other snippets in an impressive highlights reel) asks prospective entrepreneurs to tell him something they believe is true that nobody agrees with them about.

Today I’m asking you the same question and would love to hear your answers:

What do you believe to be true in OSS that nobody else seems to agree with you on?

The exciting thing about OSS is that it has so much potential, so many opportunities to do things better. And that means so many opportunities to do things differently, to come at things from a different angle to everyone else.

After all, success comes from doing things differently.

5 principles for your OSS Innovation Lab

“Corporate innovation is far more dependent on external collaboration and customer insight than having a ‘lab’.”
Andy Howard, in a fabulous LinkedIn post.

Like so many other industries, OSS is ripe for disruption through innovation. Andy Howard’s post provides a number of sobering statistics for any large OSS vendors thinking of embarking on an Innovation Lab journey as a way of triggering innovation. Andy quotes the New York Times as follows, “The last three years have seen Nordstrom, Microsoft, Disney, Target, Coca-Cola, British Airways and The New York Times either close or dramatically downsize their innovation labs. 90% of innovation labs are failing.”

He also proposes five principles for corporate innovation success (Andy’s comments lead each numbered point; mine follow beneath):

  1. People. Will taking people out of the business and placing them into a new department change their thinking? No way. Those successful in corporate innovation are more entrepreneurial and more customer-centered, and usually come from outside of the organisation.
    Are you identifying (and then leveraging) those with an entrepreneurial bent in your organisation?
  2. Commercial intent. Every innovation project requires a commercial forecast. To progress, a venture must demonstrate how it could ultimately generate at least €100 million in annual revenue from a market worth at least €1 billion, and promise higher profit margins than usual.
    The numbers quoted above come from Daimler’s (wildly successful) Innovation Lab. Have you noticed that they’ve set the bar high for their innovation teams? They’re seeking the moonshots, not the incremental change.
  3. Organisational architecture. Whether it’s an innovation lab or simply an innovation department, separating the innovation team from the rest of the business is important. While the team may be bound by the same organisational policies, separation has cultural benefits. The most critical separation is not in terms of physical space, but in the team’s roles and responsibilities. Having employees attempt to function in both an ‘innovation’ role and ‘business as usual’ role is counterproductive and confusing. Innovation is an exclusive job.
    I’m 50/50 on this one. Having a gemba / coal-face / BAU role provides a much better understanding of real customer challenges. However, having BAU responsibilities can detract from a focus on innovation. The question is how to find a balance that works.
  4. External collaboration. Working with consultants and customers from outside of the organisation has long been a contributor to corporate innovation success. Companies attempting a Silicon Valley-style ‘lone genius’ breakthrough are headed towards failure. P&G’s ‘Connect and Develop’ innovation model, designed to bring outside thinking together with P&G’s own teams, is attributed with helping to double the P&G share price within five years.
    Where do you source your external collaboration on OSS innovation? Dirty or clean consultants? Contractors? Training of staff? Delegating to vendors?
  5. Customer insight. Innovations solve real customer problems. Staying close to customers and getting out of the building is how customer problems are discovered.
    As indicated under point 3 above, how do you ensure your innovators are also deeply connected with the customer psyche? Getting the team out of the ivory tower and onto the customer site is key here.

Bill Gates’ two rules of OSS technology (plus one)

“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”
Bill Gates.

The prevailing OSS business case paradigm is to seek cost-out by introducing automation that reduces head-count – Do more with less.

But it seems that’s the antithesis of how to look for cost reduction. It’s adding more complexity into a given system. Fundamentally, more complexity cannot be the best approach to a cost-reduction strategy, right?

The cost-out paradigm should be built on reducing, not adding complexity – Let’s stop doing more that delivers less.

To add to Bill Gates’ two rules of technology, my third rule is that if you’re going to add technology (ie complexity), it should attempt to create growth opportunities, not seek to reduce costs.

Do you want dirty or clean OSS consulting?

“The original management consultant was Frederick Taylor, who prided himself in having discovered the “one best way” which would be delivered by “first-class men”. These assumptions, made in 1911, are still dominant today. Best practice is today’s “one best way” and recruiters, HR and hiring managers spend months and months searching for today’s “first-class men”.

I call this type of consulting clean because the assumptions allow the consultant to avoid dirty work or negative feedback. The model is “proven” best practice. Thus, if the model fails, it is not the consultants’ fault – rather it’s that the organisation doesn’t have the “first-class employees” who can deliver the expected outcome. You just have to find those that can. Then everything will be hunky dory.

All responsibility and accountability are abdicated downwards to HR and hiring managers. A very clean solution for everybody but them.

It’s also clean because it can be presented in a shiny manner – lots of colourful slide-decks promising a beautiful outcome – rational, logical, predictable, ordered, manageable. Clean. In today’s world of digital work, the best practice model is a new platform transforming everything you do into a shiny, pixelated reality. Cleaner than ever.

The images drawn by clean consultants are compelling. The client gets a clearly defined vision of a future state backed up by evidence of its efficacy.

But it’s far too often a dud. Things are ignored. The complex differences between the client and the other companies the model has been used on. The differences in size, in market, in demographic, in industry. None matter – because the one best way model is just that – one best way. It will work everywhere for everyone. As long as they keep doing it right and can find the right people to do it.

The dirty consultant has a problem that the clean consultant doesn’t have. It’s a big problem. He doesn’t have an immediate answer for the complex problem vexing the client. He has no flashy best practice model he strongly believes in. No shiny slide deck that outlines a defined future state.

It’s a difficult sell.

What he does have is a research process. A way of finding out what is actually causing the organisational problems. Why and how the espoused culture is different from organisational reality. Why and how the supposed best practice solution is producing stressed out anxiety or cynical apathy.

This process is underpinned by a fundamentally different perspective on the world of work. Context is everything. There is no solution that can fit every company all of the time. But there’s always a solution for the problem. It just has to be discovered.

The dirty consultant enters an organisation ready and willing to uncover the dirty reasons for the organisation not performing. This involved two processes – (1) working out where the inefficiencies and absurdities are, and (2) finding out who knows how to solve them.”

The text above all comes from this LinkedIn post by Dr Richard Claydon. It’s also the longest quote I’ve used in nearly 2000 posts here on PAOSS. I’ve copied such a great swathe of it because it articulates a message that is important for OSS.

There is no “best practice.” There is no single way. There are no cookie-cutter consulting solutions. There are too many variants at play. Every OSS has a massive local context – one far bigger than any consultant can bring to bear.

They all need dirty consulting – assignments where the consultant doesn’t go into the job knowing the answers, acknowledging that they don’t have the same local, highly important context of those who are at gemba every day, at the coal-face every day.

There is no magic-square best-fit OSS solution for a given customer. There should be no domino-effect selection of OSS (ie the big-dog service provider in the region has chosen product X after a long product evaluation so therefore all the others should choose X too). There is no perfect, clean answer to all OSS problems.

Having said that, we should definitely seek elements of repeatability – using repeatable decision frameworks to guide the dirty consulting process, to find solutions that really do fit, to find where repeatable processes will actually make a difference for a given customer.

So if the local context is so important, why even use a consultant?

It’s a consultant’s role to be a connector – to connect people, ideas, technologies, concepts, organisations – to help a customer make valuable connections they would otherwise not be able to make.

These connections often come from the ability to combine the big-picture concepts of clean consulting with the contextual methods of dirty consulting. There’s a place for both, but it’s the dirty consulting that provides the all-important connection to gemba. If an OSS consultant doesn’t have a dirty-consulting background, an ability to frame from a knowledge of gemba, I wonder whether the big-picture concepts can ever be workable?

What are your experiences working with clean consultants (vs dirty consultants) in OSS?

The biggest moonshot facing OSS today

“Moonshot thinking is about making something 10x better. This forces you to throw away the existing assumptions and create something bold and new. Reality will eat into your 10x. At the end of the process it may only be 2x, but that’s still amazing.”
Brian Jansen’s book summary of “Bold: How To Go Big, Create Wealth, and Impact the World,” by Peter Diamandis & Steven Kotler.

I think the biggest moonshot facing OSS today is the design and implementation of an architecture that allows other moonshots to happen.

Take a moment to reflect on that…

As of today, our OSS tend to be complex, entangled beasts, governed by the chess-board analogy. The entanglement is so profound that we tend to only make small, incremental changes. Moving a single piece on the chess-board takes soooo much planning to avoid negative consequences. It’s the reason that some of our high-profile OSS probably still contain chunks of code that were written in the 1990s or 2000s.

In the world of OSS, the 10x moonshot comes with a risk of delivering -5x not just the 2x mentioned in the quote above.

Having said that, I’m all for a good moonshot project. It might take just one disentanglement moonshot to allow 1000 subsequent moonshots to fire! A disentanglement moonshot like the small-grid approach described here.

OSS expendables

When looking at a telco org chart, where does the highest staff turnover tend to occur? Contact centres? Network Operations?

The fact that these two groups tend to have the highest turnover indicates that their employers see them as expendable resources. They’ll never come out and say it directly, but actions speak louder than words. If these resources were valued more highly, more effort would be put into retaining them.

Now, what do you notice on the diagram below?
The pyramid of pain

The diagram is taken from an earlier post entitled “The pyramid of OSS pain.” It’s an over-simplification of where OSS complexity (ie pain) tends to originate, but who bears the brunt of all the upstream complexity generated within a service provider? Yes, the contact centres and network operations centres.

This can’t be a coincidence can it? The teams bearing the brunt of complexity have the highest turnover.

But how can this be allowed? If those are the roles dealing with most complexity, why do we tend to have our least experienced operators there? And why are we allowing their accrued knowledge for handling that complexity to walk out the door as expendable resources?

Avoiding the OSS honey trap

Regardless of whose estimates you read, OSS is a multi-billion-dollar industry. However, based on the relatively infrequent signing of new vendor deals, it’s safe to say that only a very small percentage of those billions is ever “in play.”

In other words, OSS tend to be very sticky, in part because they’re so difficult to forklift out and replace. Some vendors play this situation extremely well, with low install costs but with strategies such as “land and expand,” “so sue us” and “that will be a variation.” These honey pots hide the real cost of ownership.

Cloud IT architectures such as containerisation and microservices can provide a level of modularity and instant replaceability between products (ie competition). When combined with a Minimum Viable Product mindset rather than complex, entwining customisations, you can seek to engineer a lower lock-in solution.

The aim is to ensure that products (and vendors) stay in-situ for long periods based on merit (ie partnership strength, functionality, valuable outcomes, mutual benefit, etc) rather than lock-in.

Are we measuring OSS at the wrong end?

I have a really simple philosophical question to pose of you today – Are we measuring our OSS at the wrong end?
It seems that the vast majority of our OSS measurement happens at the input end of a process rather than at the output end.

Just a few examples:

  • Financial predictions in a business case vs Return on Invested Capital (ROIC) of that project
  • Implementation costs vs lifetime cost of ownership
  • Revenues vs profitability (of products, services, workflows, activities, etc)
  • OSS costs vs enablement of service and/or monetisation of assets (ie operationalising assets such as network equipment via service activation)
  • OSS incidents raised (or even resolved) vs insurance on brand value (ie prevention of negative word-of-mouth caused by network / service outages)

In each of these cases, it’s much easier to measure the inputs. However, the output measurements portray a far more powerful message, don’t you think?
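
Taking the first bullet as an example, the output-end measure isn’t hard to compute once we commit to collecting it; the hard part is the discipline. A tiny, entirely hypothetical illustration (all numbers invented):

```python
# Compare the input-end measure (the business-case forecast) with the output-end
# measure (realised ROIC). Figures are made up for illustration only.
def roic(nopat, invested_capital):
    """Return on Invested Capital = net operating profit after tax / invested capital."""
    return nopat / invested_capital

forecast_roic = roic(nopat=4.0, invested_capital=20.0)   # what the business case promised: 20%
realised_roic = roic(nopat=1.5, invested_capital=26.0)   # what the project actually delivered: ~5.8%
print(f"forecast {forecast_roic:.1%} vs realised {realised_roic:.1%}")
# Measuring only the input end (budget spent, milestones hit) would never surface this gap.
```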