New Report – Request for OSS/BSS Innovations


Over the next couple of months we’re planning to create a very visual report that highlights the 25 Most Exciting Innovations in OSS/BSS for 2022, ready for release in early Jan.

It will include a few succinct paragraphs about each innovation and the problem/s it solves.

Would you like to nominate any of your own innovations or the innovations of others that we simply must consider, either:

  • Officially (ie provide customised text & imagery) or
  • Unofficially (ie point us to relevant online material)?

We’re looking forward to hearing whether this is of interest to you and more specifically, hearing about the exciting things you’ve been working on recently.

A couple of other important notes:

  1. When we talk about innovations in OSS/BSS, we’re actually talking about anything that helps to operationalise a network and/or get customers onto it. That opens up the discussion to technologies like AR/VR, AI and more
  2. There’s no fee for nomination or entry. It’s just something we’re doing for the industry

Please leave us a message via the form or email address below.

How to Approach OSS Vendor Selection Differently than Most

Selecting a new OSS / BSS product, vendor or integrator for your transformation project can be an arduous assignment.

Every network operator and every project has a unique set of needs. Counter to that, there are literally hundreds of vendors creating an even larger number of products to service those widely varied sets of needs.

If you’re a typical buyer, how many of those products are you already familiar with? Five? Ten? Fifty? How do you know whether the best-fit product or supplier is within the list you already know? Perhaps the best-fit is actually amongst the hundreds of other products and suppliers you’re not familiar with yet. How much time do you have to research each one and distill down to a short-list of possible candidates to service your specific needs? Where do you start? Lots of web searches?

Then how do you go about doing a deeper analysis to find the one that’s best fit for you out of those known products? The typical approach might follow a journey similar to the following:

The first step alone can take days, if not weeks, but it also chews up valuable resources because many key stakeholders will be engaged in the requirement-gathering process. The other downside of the requirements-first approach is that it becomes a wish-list that doesn’t always specify the level of importance (eg “nice to have” versus “absolutely mandatory”).

Then, there’s no guarantee that any vendor will support every single one of the mandatory requirements. There’s always a level of compromise and haggling between stakeholders.

Next comes the RFP process, which can be even more arduous.

There has to be an easier way!

We think there is.

Our approach starts with the entire 400+ vendors in our OSS/BSS Vendor Directory. Then we apply two rounds of filters:

  1. Long-list Filter – Query by high-level product capability, as per the diagram below. For example, if you want outside plant management, we filter by 9b, which alone returns a list of over 60 candidate vendors. We can narrow that down further by also applying filters 10 and 14 if you need those functionalities
  2. Short-list Filter – We then sit with your team to prepare a list of approx. 20 high-level questions (eg regions the vendor works in, what support levels they provide, high level functional questions, etc). We send this to the long-list of vendors for their clarification. This usually then yields a short-list of 3-10 best-fit candidates that you/we can then do a deeper evaluation on (how deep you dive depends on how thorough you want the review to be, which could include PoCs and other steps).
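The two-round filter can be sketched as a simple query over a tagged vendor directory. All vendor names, capability codes and clarification questions below are hypothetical illustrations, not entries from the actual directory:

```python
# Sketch of the two-round filter over a tagged vendor directory.
# Vendor names, capability codes and questions are hypothetical examples.

directory = [
    {"name": "VendorA", "capabilities": {"9b", "10", "14"}},
    {"name": "VendorB", "capabilities": {"9b", "10"}},
    {"name": "VendorC", "capabilities": {"3", "7"}},
]

def long_list(vendors, required_capabilities):
    """Round 1: keep vendors offering every required capability."""
    return [v for v in vendors if required_capabilities <= v["capabilities"]]

def short_list(vendors, answers, required_questions):
    """Round 2: keep vendors whose clarification answers meet all musts."""
    return [v for v in vendors
            if all(answers.get((v["name"], q)) for q in required_questions)]

candidates = long_list(directory, {"9b", "10"})
# Hypothetical clarification responses from the long-listed vendors
answers = {("VendorA", "supports_region"): True,
           ("VendorA", "24x7_support"): True,
           ("VendorB", "supports_region"): False,
           ("VendorB", "24x7_support"): True}
finalists = short_list(candidates, answers, ["supports_region", "24x7_support"])
print([v["name"] for v in finalists])  # ['VendorA']
```

The point of the sketch is that round 1 is a cheap mechanical query over the full 400+ directory, so no candidate is excluded simply because the buyer hadn’t heard of them.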

The 2-step filter approach is arguably even quicker to prepare and more likely to identify the best-fit short-list solutions because it starts by assessing 400+ vendors, not just the small number that most clients are aware of.

The next step (step 5 in the diagram above) also uses a contrarian approach. Rather than evaluating via an RFP that centres around a set of requirements, we instead identify:

  • The personas (people or groups) that need to engage with the OSS/BSS
  • The highest-priority activities each persona needs to perform with the OSS/BSS
  • End-to-end workflows that blend these activities into a prioritised list of demonstration scenarios

These steps quickly prioritise what’s most important for the to-be solution to perform. We describe the demonstration scenarios to the short-listed vendors and ask them to demonstrate how their solutions solve for those scenarios (as best they can). The benefit of this approach is that the client can review each vendor demonstration through their own context (ie the E2E workflows / scenarios they’ve helped to design).
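As a minimal sketch (with entirely hypothetical personas, activities and priority scores), the persona inputs can be flattened into the prioritised list of demonstration scenarios:

```python
# Sketch: turning persona activities into a prioritised list of
# demonstration scenarios. Personas and priorities are hypothetical.

personas = {
    "Field technician": [("Close a fault ticket", 1), ("Update as-built inventory", 3)],
    "NOC operator":     [("Triage an alarm storm", 1), ("Run an SIA report", 2)],
}

# Flatten to (priority, persona, activity) tuples and sort so the
# most important demonstration scenarios come first.
scenarios = sorted(
    (prio, persona, activity)
    for persona, activities in personas.items()
    for activity, prio in activities
)

for prio, persona, activity in scenarios:
    print(f"P{prio}: {persona}: {activity}")
```

In practice the priority scores come from workshops with the client’s stakeholders, which is exactly what keeps the vendor demonstrations anchored to the client’s own context.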

This approach does not provide an itemised list of requirement compliance like the typical approach. However, we’d argue that even the requirement-led approach will (almost?) never identify a product of perfect fit. Compliance matrices tend to be filled with “Will Comply” responses for functionality that actually requires specific customisation.

Our “filtering” approach will uncover the solution of closest fit (in out-of-the-box state) in a much more efficient way.

We should also highlight that the two chevron diagrams above are just sample vendor-selection flows. We actually customise them to each client’s specific requirements. For example, some clients require much more thorough analysis than others. Others have mandatory RFP and/or business case templates that need to be followed.

If you need help, either with:

  • Preparing a short-list of vendors for further evaluation, down from our known list of 400+; or
  • Performing a more thorough analysis to identify the best-fit solution/s

then we’d be delighted to assist. Please leave us a note in the contact form below.

New report – Inventory of the Future

I’ve recently been co-opted to lead the development of an “Inventory of the Future” transformation guide on behalf of TM Forum and thought you, the readers, might have some interesting feedback to contribute.

I had previously prepared an “Inventory of the Future” discussion paper prior to being invited into the TMF discussions about a month ago. However, my original pack had too much emphasis on a possible target state (so those slides have been omitted from the attachment below).

Click to access Inventory_of_the_Future_v9a.pdf

Instead, before attempting to define a target state under the guise of TM Forum, we first need to understand:

  1. Where carriers are currently at with their inventory solutions
  2. What they want to achieve next with their inventory and
  3. What the future environment that inventory will exist within (in, say, 5 years from now) will look like. Eg:
    1. Will it still integrate with orchestration (resource allocation), fault management (enrichment, SIA, RCA, etc), etc?
    2. Will it manage a vastly different type / style of network (such as massive networks of sensors / IoT that are managed as a cohort rather than individually)?
    3. etc

This, dear reader, is where your opinions will be so valuable.

Do the objectives (slide 3) and problem statements (slide 4) resonate with you? Or do you have an entirely different perspective?

We’d love to hear your thoughts in the comment section below! Alternatively, if you’d prefer to share your ideas directly, or would like to see the rest of the report (as it currently stands), please leave me a personal message.

Do we need more dummies working on our OSS?

I was reading an article by Chris Campbell that states, “In his book How to Rule the World, Brian J. Ford reveals the relationship between sophistication and complexity; what he calls ‘obscurantism.’ In short, the more sophisticated the problem-solver, the more complex (and costly) the solution.”

Does that resonate with you in the world of OSS/BSS? Is it because we have so many unbelievably clever, sophisticated people working on them that they are so complex (and costly)??

Do we actually need more stupid, unsophisticated people (like me) to be asking the stupid questions and coming up with the simple solutions to problems?? Do we need to intentionally apply the dummy-lens??

Sadly, with so much mental horsepower spread across the industry, the dummy-lens is probably shouted down in most instances (in favour of the sophisticated / clever solutions… that turn out to be complex and costly). Check out this article about what impact complexity has on the triple constraint of project management.

I’d love to get your thoughts.

Even better than that, I’d love to hear any “dummy lens” ideas you have that must be considered right away!!

Drop us a note in the comment section below.

Time to Kill the Typical OSS Partnership Model?

A couple of years ago Mark Newman and the content team at TM Forum created a seminal article, “Time to Kill the RFP? Reinventing IT Procurement for the 2020s.” There were so many great ideas within the article. We shared a number of concordant as well as divergent ideas (see references #1, #2, #3, #4, #5, #6, and others).

As Mark’s article described, the traditional OSS/BSS vendor selection process is deeply flawed for both buyer and seller. It’s time-consuming and costly. But worst of all, it tends to set the project on a trajectory towards conflict and disillusionment. That’s the worst possible start for a relationship that will ideally last for a decade or more (OSS and BSS projects are “sticky” because they’re difficult to transform / replace once in-situ).

Partnership is the key word in this discussion – as reiterated in Mark’s report and our articles back then as well.

Clearly this message of long-held partnerships is starting to resonate, as we see via the close and extensive partnerships that some of the big service providers have formed with third-parties for integration and other services. 

That’s great…. but…… in many cases it introduces its own problem for the implementation of OSS and BSS projects. A situation that is also deeply flawed.

Many partnerships are based around a time and materials (T&M) model. In other words, the carrier pays the third-party a set amount per day for the number of days each third-party-provided resource works. A third-party supplies solution architects at ($x per day), business analysts at ($y per day), developers at ($z per day), project managers at… you get the picture. That sounds simple for all parties to wrap their head around and come to mutually agreeable terms on. It’s so simple to comprehend that most carriers almost default to asking external contractors for their daily charge-out rates.

This approach is deeply flawed – ethically conflicted even. You may ask why…. Well, Alan Weiss articulates it best as follows:

When you charge by the hour, you’re in ethical conflict with the client. You only receive larger pay for the longer you’re there, but the client is better served by how quickly you can meet objectives and leave.

Complex IT projects like OSS and BSS implementations are the perfect example of this. If your partners are paid on a day rate, they’re financially incentivised to let delays, bureaucracy, endless meetings and general inefficiency prosper. In big organisations, these things tend to thrive even without any incentivisation!

Assuming a given project continues at a steady-state of resources, if a project goes twice as long as projected, then it also goes 100% over the client’s budget. By direct contrast, the third-party doubles their revenue on the project.

T&M partnership models disincentivise efficiency, yet efficiency is one of the primary reasons for the existence of OSS and BSS. They also disincentivise reusability. Why would a day-rater spend the extra time (in their own time) to systematise what they’ve learnt on a project when they know they will be paid by the hour to re-invent that same wheel on the next project?

Can you see why PAOSS only provides scope of work proposals (ie defined outcomes / deliverables / timelines and, most importantly, defined value) rather than day-rates (other than in exceptional circumstances)??

Let me cite just one example to illustrate the point (albeit a severe example of the point).

I was once assisting an OEM vendor to implement an OSS at a tier-1 carrier. This vendor also provided ongoing professional services support for tasks such as customisation. However, the vendor’s day-rates were slightly higher than the carrier was paying for similar roles (eg architects, developers, etc). The carrier invited a third-party to perform much of the customisation work because their day-rates were lower than the OEM.

Later on, I was tasked with reviewing a customisation written by the third-party because it wasn’t functioning as expected. On closer inspection, it had layers of nested function calls and lookups to custom tables in the OEM’s database (containing fixed values). It comprised around 1,500 lines of code. It must’ve taken weeks of effort to write, test and release into production via the change process that was in place. The sheer entanglement of the code took me hours to decipher. Once I finally grasped why it was failing and then interpreted the intent of what it should do, I took it back to a developer at the OEM. His response?

Oh, you’ve gotta be F#$%ing kidding me!

He then proceeded to replace the entire 1,500 lines and spurious lookup tables with half a line of code.

Let’s put that into an equation containing hypothetical numbers:

  • For the sake of the process, let’s assume test and release costs are equivalent for both parties
  • OEM charges $1,000 per day for a dev
  • Third-party charges $900 per day for a dev
  • OEM developer (who knows how the OEM software works) takes 15 seconds to write the code  = $0.52
  • Third-party dev takes (conservatively) 5 days to write the equivalent code (which didn’t work properly) = $4,500
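The arithmetic above can be checked directly (assuming an 8-hour working day, which is the assumption behind the $0.52 figure):

```python
# Verify the hypothetical numbers above, assuming an 8-hour working day.

SECONDS_PER_DAY = 8 * 3600           # 28,800 working seconds per day

oem_rate = 1000                      # $ per day for an OEM dev
third_party_rate = 900               # $ per day for a third-party dev

oem_cost = oem_rate / SECONDS_PER_DAY * 15   # 15 seconds of OEM dev time
third_party_cost = third_party_rate * 5      # 5 days of third-party dev time

print(f"OEM cost:         ${oem_cost:.2f}")        # $0.52
print(f"Third-party cost: ${third_party_cost:,}")  # $4,500
print(f"Difference:       ${third_party_cost - oem_cost:,.2f}")  # $4,499.48
```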

In the grand scheme of this multi-million dollar project, the additional $4,499.48 was almost meaningless, but it introduced weeks of delays (diagnosis, re-dev, re-test, re-release, etc).

Now, let’s say the new functionality offered by this code was worth $50,000 to the carrier in efficiency gains. Who deserves to be rewarded with, say, a $5,000 (10%) share of the value delivered?

  • The OEM who completed the task and got it working in seconds (and was compensated $0.52); or
  • The Third-party who never got it to work despite a week of trying (and was compensated $4,500)

The hard part about scope of works projects is that someone has to scope them and define the value delivered by them. That’s a whole lot harder and provides less flexibility than just assigning a day rate. But perhaps that in itself provides value. If we need to consider the value of what we’re producing, we might just find that some of the tasks in our agile backlog aren’t really worth doing.

If you’d like to share your thoughts on this, please leave a comment below.




You’ve Heard of SIA. Do You Know Anything About PIA?

If you’ve worked in operations, you’ve probably heard of the term SIA, or Service Impact Analysis. It’s an important capability of our OSS that allows you to identify which services, and which customers, are impacted by any given outage or deterioration of the network.

Your OSS might even initiate notifications to customers that are impacted. The more sophisticated OSS might even initiate prioritised rehabilitation activities depending on the criticality of the service/s or customer/s impacted.

But, have you ever heard of PIA?
You probably haven’t because it’s a term that I’m just coining for the first time here in this article (AFAIK anyway).

PIA stands for Power Impact Analysis. The idea is to link your comms and power assets to identify how power supply impacts the health of the network; more specifically, which devices (and in turn which services / customers) are impacted when localised power supply is lost. Loss of power can be localised or can impact large sections of network. Adjacent network nodes could be impacted because they’re supplied from the same source of power, or they could each be powered from different sections of the grid.
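A minimal sketch of what PIA needs from inventory data: a mapping from power feeds to the comms devices they supply, and from devices to the services they carry. All feed, device and service names below are hypothetical:

```python
# Sketch of Power Impact Analysis over linked power and comms inventory.
# Feed, device and service names are hypothetical.

feeds_to_devices = {
    "GRID-FEEDER-01": ["ROUTER-A", "SWITCH-B"],
    "GRID-FEEDER-02": ["ROUTER-C"],
}
devices_to_services = {
    "ROUTER-A": ["CUST-1001-VPN", "CUST-1002-INTERNET"],
    "SWITCH-B": ["CUST-1003-ETHERNET"],
    "ROUTER-C": ["CUST-1004-VPN"],
}

def power_impact(feed):
    """Return the services impacted when a given power feed is lost."""
    impacted = set()
    for device in feeds_to_devices.get(feed, []):
        impacted.update(devices_to_services.get(device, []))
    return sorted(impacted)

# A planned outage on GRID-FEEDER-01 impacts three customer services:
print(power_impact("GRID-FEEDER-01"))
# ['CUST-1001-VPN', 'CUST-1002-INTERNET', 'CUST-1003-ETHERNET']
```

The same feed-to-device links are what would let the OSS correlate a utility’s planned outage notification against comms incidents automatically.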

One client estimated that between 50% and 60% of their outages were power-related. Apparently most of these were caused not by power failures, but by the hundreds of planned outages that were underway, intentionally initiated by power utilities, across their network map at any point in time. But they didn’t store details about the power assets / network that shadowed and sustained their comms assets / network. They weren’t automatically correlating power outages with incidents in their comms network. They didn’t have the data in their OSS that would allow them to.

Does your inventory solution store the data that can give you PIA visibility?

You’ve probably noticed that RCA (Root Cause Analysis) is often mentioned in the same breath because of the similar associations that are formed using OSS / inventory data. Ever seen the term “RCA / SIA” used?

I’d like to start seeing PIA / RCA / SIA being used collectively. Power Impact Analysis triggers Root Cause Analysis, which then triggers Service Impact Analysis and proactive notifications to operators and customers alike.


What Pottery and SpaceX Rockets Have in Common with OSS will Surprise You

In our previous article, we described five new OSS/BSS automation design rules applied by Elon Musk. Today, we’ll continue on into the second part of the Starbase video tour and lock onto another brilliant insight from Elon.

From 5:10 to 11:00 in the video below, Elon and Tim (of Everyday Astronaut) discuss why failure, in certain scenarios, is a viable option even in the world of rocket development.

An edited excerpt from the video is provided below.

[Elon] We have just a fundamentally different optimization for Starship versus say, like the polar extreme would be Dragon. Dragon, there can be no failures ever. Everything’s gotta be tested six ways to Sunday. There has to be tons of margin. There can never be a failure ever for any reason whatsoever.
Then Falcon is a little less conservative. It is possible for us to have, say, a failure of the booster on landing. That’s not the end of the world.
And then for Starship, it’s like the polar opposite of Dragon: we’re iterating rapidly in order to create the first ever fully reusable rocket, orbital rocket.
Anyway, it’s hard to iterate, though, when people are on every mission. You can’t just be blowing stuff up ’cause you’re gonna kill people. Starship does not have anyone on board so we can blow things up.

[Tim] Are you just hoping that by the time you put people on it, you’ve flown it say 100, 200 times, and you’re familiar with all the failure modes, and you’ve mitigated it to a high degree of confidence.

[Elon] Yeah.

Interesting…. Very interesting…

How does that relate to OSS? Well, first I’d like to share with you another story, this time about pottery, that I also found fascinating.

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A,” forty pounds a “B,” and so on. Those being graded on “quality,” however, needed to produce only one pot—albeit a perfect one—to get an “A.” Well, come grading time and a curious fact emerged: the works of the highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work—and learning from their mistakes—the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.
David Bayles and Ted Orland in their book, “Art and Fear: Observations on the Perils (And Rewards) of Art making,” which I discussed previously in this post years ago.

These two stories are all about being given the licence to iterate…. within an environment where there was also a licence to fail.

The consequences of failure for our OSS/BSS usually fall somewhere between these two examples. Not quite as catastrophic as crashing a rocket, but more impactful than creating some bad pottery.

But even within each of these, there are different scales of consequence. Elon rightly identifies that it’s less catastrophic to crash an unmanned rocket than it is to blow up a manned rocket, killing the occupants. That’s why he has different platforms. He can use his unmanned Starship platform to rapidly iterate, flying a lot more often than the manned Dragon platform by reducing compliance checks, redundancy and safety margins.

So, let me ask you, what are our equivalents of Starship and Dragon?
Answer: Non-Prod and Prod environments!

We can set up non-production environments where it’s safe to crash the OSS/BSS rocket without killing anyone. We can reduce the compliance, run lots of tests / iterations and rapidly learn, identifying the risks and unknowns.

With OSS/BSS being software, we’re probably already doing this. Nothing particularly new in the paragraph above. But I’m less interested in ensuring the reliability of our OSS/BSS (although that’s important of course). I’m actually more interested in ensuring the reliability of the networks that our OSS/BSS manage.

What if we instead changed the lens to using the OSS/BSS to intentionally test / crash the (non-production or lab) network (and EMS, NMS, OSS/BSS too perhaps)? I previously discussed the concept of CT/IR – Continual Test / Incremental Resilience here, which is analogous to CI/CD (Continuous Integration / Continuous Delivery) in software. CT/IR is a method to automatically, systematically and programmatically test the resilience of the network, then using the learnings to harden the network and ensure resilience is continually improving. 
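A CT/IR harness could look something like the loop below. This is a sketch only: the failure actions and the injection stub are hypothetical placeholders for whatever destructive tests your non-prod network and EMS/NMS tooling can actually support:

```python
# Sketch of a CT/IR (Continual Test / Incremental Resilience) loop.
# The injection function and failure actions are hypothetical stubs.

import random

FAILURE_ACTIONS = ["kill_link", "drop_power_feed", "restart_nms_process"]
TARGETS = ["ROUTER-A", "SWITCH-B", "OLT-C"]   # hypothetical non-prod nodes

def inject(action, target):
    """Stub: apply a destructive action to the non-prod network and
    report whether the network stayed healthy (randomised here)."""
    print(f"Injecting {action} on {target}")
    return random.random() > 0.2

def ctir_loop(targets, iterations=10):
    """Continually inject failures; record the ones that broke the network
    so they can be fed back into hardening work."""
    findings = []
    for _ in range(iterations):
        action = random.choice(FAILURE_ACTIONS)
        target = random.choice(targets)
        if not inject(action, target):
            findings.append((action, target))
    return findings

weaknesses = ctir_loop(TARGETS)
print(f"{len(weaknesses)} failure modes found for hardening")
```

Because the loop runs unattended against non-prod, it can pump out test volumes day and night that a change-window-constrained production environment could never tolerate.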

Like the SpaceX scenario, we can use the automated destructive testing approach to pump out high testing volumes and variants that could not occur if operating in risk-averse environments like Dragon / Production. Where planned, thoroughly tested changes to production may only be allowed during defined and pre-approved change windows, intentionally destructive tests can be run day and night on non-prod environments.

We could even pre-seed AIOps data sets with all the boundary failure cases ready for introduction into production environments without them having ever even been seen in the prod environment.


Five new OSS/BSS automation design rules from Elon Musk

Automation has become a big buzz-word in OSS architecture. It’s all about the automation. At face value that’s great: it implies tangible business value for our OSS…. Except sometimes it’s not.

Our industry consists of so many incredibly clever engineers who love solving problems, which is great. The problem with automations is that our engineers can leap straight to the automation as the challenging problem to solve, completely forgetting that the real problem to be solved is the business value.

Don’t just take my word for this. Check out the 3-minute snippet of Elon Musk below talking about OSS/BSS design… or it could be about rocket design or Tesla design… I’m not sure. You be the judge. BTW, you can stop watching after 4:27 if you like, but there are a few other interesting ideas along the way from Martin (the video’s creator), such as his quote, “I underestimated the problem and overestimated myself.” I’ve ticked that same box more times than I’d like to admit!!

Now, if you look a little closer at the video, you’ll notice that it’s actually an excerpt prepared by Martin, the inventor of the Marble Machine X, a machine that’s probably aligned with some OSS/BSS designs: it’s beautifully engineered, an absolute piece of precision art. However, it arguably fails step 1 in Elon’s 5-step process – Make your requirements less dumb! It arguably also fails step 3 – The most common error of a smart engineer is to optimise the thing that should not exist!

Anyway, let’s do a recap of those five new rules of OSS/BSS design directly from the mouth of Elon Musk:

  1. Make your requirements less dumb – “Your requirements are definitely dumb. It does not matter who gave them to you. It’s particularly dangerous if a smart person gave you the requirements, because you might not question them enough. Everyone’s wrong, no matter who you are.”
  2. Delete the part or process – “If you’re not occasionally adding things back in, you’re not deleting enough.”
  3. Simplify or optimise – “Possibly the most common error of a smart engineer is to optimise the thing that should not exist. Everyone has been trained in high school and college that you’ve gotta answer the question. Convergent logic. So you can’t tell a professor [ed. or in our case a client] your question is dumb. You will get a bad grade. So everyone is basically, without knowing it, wearing a mental straight-jacket. They’ll work on optimising the thing that should simply not exist.”
  4. Accelerate cycle time – “You’re moving too slowly. You have to go faster. But don’t go faster until you have worked on the other three things first. If you’re digging your grave, don’t dig it faster. Stop digging your grave.”
  5. Automate – “And then the final step is automate. Now I have personally made the mistake of going backwards on all five steps, multiple times. I automated, accelerated, simplified and then deleted.”

I couldn’t agree more. So many of our OSS/BSS designs are based on dumb requirements that keep perpetuating across the industry. So many of our cleverest people are jumping straight to step 5 to solve a problem. When it comes to OSS/BSS, it pays to walk before you run and question everything you see along the way before trying to automate it. There are so many examples where it just doesn’t make sense to automate. With OSS being software, we can do (almost) anything. But my motto – Just because we can, doesn’t mean we should!

To close out with a final caption from Martin……

BTW, if you’d like to watch the entire 53 minute source video of Elon, you can find it here but be warned that there’s a lot of rocket-scientist geekery going on throughout. Great if you like that kind of thing!!

Zooming in and out of your OSS

Our previous post talked about using the following frame of reference to think bigger about our OSS/BSS projects and their possibilities.

It’s the multitudes of experts at Level 1 that get projects done and products released. All hail the doers!!

As Rory Sutherland indicated in his book, Alchemy – The surprising power of ideas that don’t make sense, “No one complained that Darwin was being trivial in comparing the beaks of finches from one island to another because his ultimate inferences were so interesting.”

In the world of OSS/BSS, we do need people who are comparing the beaks of finches. We need to zoom down into the details, to understand the data. 

But if you’re planning an OSS/BSS project or product; leading a team; consulting; or marketing / selling an OSS/BSS product or service, you also need to zoom out. You need to be like Darwin: across the details, but able to comprehend their broader ramifications and draw the interesting inferences.

This is why I use WBS to break down almost every OSS/BSS project I work on. I start with the problem statements at levels 2-5 of the reference framework above (depending on the project) and try to take in the broadest view of the project. I then start breaking down the work at the coarsest level of granularity. From there, we can zoom in as far into the details as we need to. It provides a plan on a page that all stakeholders can readily zoom out of and back into, seeing where they fit in the overall scheme of the project.
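The “plan on a page” idea can be sketched as a simple WBS tree that stakeholders zoom in and out of by choosing a depth. The work items below are hypothetical examples:

```python
# Sketch of a WBS as a nested tree; print_wbs "zooms" to a chosen depth.
# Work items are hypothetical.

wbs = {
    "OSS Transformation": {
        "Discovery & Problem Statements": {},
        "Vendor Selection": {
            "Long-list Filter": {},
            "Short-list Filter": {},
            "Demonstration Scenarios": {},
        },
        "Implementation": {
            "Data Migration": {},
            "Integration": {},
        },
    }
}

def print_wbs(node, max_depth, depth=0):
    """Print the tree down to max_depth: the current zoom level."""
    if depth > max_depth:
        return
    for name, children in node.items():
        print("  " * depth + name)
        print_wbs(children, max_depth, depth + 1)

print_wbs(wbs, max_depth=1)   # the zoomed-out "plan on a page"
print_wbs(wbs, max_depth=2)   # zoomed in one level further
```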

With this same perspective of zooming in and out, I often refer to the solution architecture analogy. SAs tend to create designs for an end-state – the ideal solution that will appear at the end of a project or product. Having implemented many OSS/BSS, I’ve also seen examples of where the end-state simply isn’t achievable because of the complexity of all the moving parts (The Chessboard Analogy). The SAs haven’t considered all the intermediate states that the delivery team needs to step through or the constraints that pop up during OSS/BSS transformations. Their designs haven’t considered the detailed challenges along the way.

Interestingly, those challenges often have little to do with the OSS/BSS you’re implementing. It could be things like:

  • A lack of a suitable environment for dev, test, staging, etc purposes
  • A lack of non-PROD infrastructure to test with, leading to the challenging carve-out and protection of PROD whilst still conducting integration testing
  • A new OSS/BSS introduces changes in security or shared services (eg DNS, UAM, LDAP, etc) models that need to be adapted on the fly before the OSS/BSS can function
  • Carefully disentangling parts of an existing OSS/BSS before stitching in elements of your new functionality (The Strangler Fig transformation model)
  • In-flight project changes that are moving the end-state or need to be progressively implemented in lock-step with your OSS/BSS phased release, which is especially common across integrations and infrastructure
  • Changes in underlying platforms or libraries that your code-base depends on
  • Refactoring of other products like microservices 
  • The complex nuances of organisational change management (since our OSS/BSS often trigger significant change events)
  • Changes in market landscape
  • The many other dependencies that may or may not be foreseeable at the start of the journey

You need to be able to zoom out to consider and include these types of adjacency when planning an OSS/BSS.

I’ve only seen one person successfully manage an OSS/BSS project using bottom-up work breakdown (although he was an absolute genius and it did take him 14 hour days x 7 day weeks throughout the project to stay on top of his 10,000+ row Gantt chart and all of the moving dependencies within it). I’ve also seen other bottom-up thinkers nearing mental breakdown trying to keep their highly complex OSS/BSS projects under control.

Being able to zoom up and down may be your only hope for maintaining sanity in this OSS/BSS world (although it might already be too late for me…. and you???)

If you need guidance with the breakdown of work on your OSS/BSS project or need to reconsider the approach to a project that’s veering off target, give us a call. We’d be happy to discuss.

How to take in OSS/BSS’s Bigger Picture

We had a planned power outage at our place yesterday, so I thought I’d use that as an opportunity to get out of the details of projects and do a day of longer-term strategy and planning.

In the days prior, I decided to investigate frameworks that might help to get me into a bigger-picture and/or lateral-thinking mindset. I’d already set aside three books from my bookshelf and thought that Google might have some other great ideas. Turns out that 8 out of the top 10 results that Google returned were of no help at all. The other 2 gave a couple of insights but still weren’t going to help with any Elon Musk level big-picture thinking.

Somewhat disappointed, I thought I’d have a crack at creating my own. Or more to the point, just articulating the approach that I’ve just intuitively used over the years of OSS/BSS implementing and consulting.

One of the things I’ve noticed is that most OSS/BSS practitioners are unbelievably clever. However, I’ve also noticed that most (randomly plucks 95% out of the air) are focused on doing an awesome job of getting their next project, product, feature, phase or deliverable done so they can move on quickly to the next set of challenges. They’re the 95% at step 1 in the Think Bigger Pyramid below that bring their brainpower to solving the myriad of detailed challenges that it takes to make OSS/BSS projects fly.

1. The Think-Bigger Pyramid (Frames of Reference)

That’s our first frame of reference: the details of product and project (and BAU operations). The questions asked here relate to solving the fundamental, and often intensely challenging, implementation problems that await every OSS/BSS.

But over the years, I’ve observed that there’s a special cohort that has a bigger impact on their projects and their teams. They’re what I refer to as Valuable Tripods. These are the people who see the technical and the detail, but also understand the business imperative. They know that the OSS/BSS is just the means to an end. And that’s where the higher frames of reference come in.

The tripods understand that they need to put themselves in the shoes of the business unit and/or project sponsors. They understand that these people have a different set of objectives and challenges. They ask a different set of questions (more on that later), taking the perspective of how the OSS/BSS will add value to their business unit and sponsors.

Next up the stack are those who consider how the OSS/BSS will add value to their company / organisation, knowing that a well-implemented OSS/BSS stack can contribute significant competitive advantage to their organisation (the opposite can just as drastically deliver competitive DISadvantage). This takes an understanding of what represents a competitive advantage to each organisation. This invariably relates to time-to-market, operational efficiency, customer experience, product desirability and more. You can already see there’s a mindset shift between layer 1 and midway here at layer 3. But even here at layer 3, we’re still inward facing. We’re considering the internal needs / objectives of the organisation we represent and have the blinkers on that relate to our little part of the world.

Layer 4 is the first to take the blinkers off and consider the broader ramifications of OSS/BSS, beyond the causes we’re directly contributing to. This begins to take the mindset of the more generic needs / benefits / objectives of the industry at large. As a collective of immense brainpower, initiative and effort, the OSS/BSS industry has achieved so much for the people and customers we serve. But I’m a firm believer that there’s still an enormous amount to achieve, and to do better, as discussed in our Call for Innovation. The global network service provider industry has proven to be so essential to our modern way of life, but it’s currently battling a structural decline in profitability. This reference layer still really sits within the telco vertical.

And finally we get to the consumers of communications services. You could consider this to be the users of your communications services, but I’m thinking more of the global users of all communications services and what their needs / objectives / challenges are. This goes beyond the network service operator domain and also encapsulates communications services from over-the-top (OTT) delivery models. But this is also the first layer that goes beyond the telco vertical because it now starts to ask questions about why all comms service users need the services that OSS/BSS help to deliver. It delves into personal comms, corporate comms, machine-to-machine comms and across every single industry vertical (Is there a vertical that doesn’t leverage communications services in some way??).

I should also mention that I feel the best-of-the-best OSS/BSS practitioners have a rare ability to zoom in and out of these frames of reference. They can dive down into the details and comprehend them, but then zoom back out and understand how all the pieces fit together. They need to know the big picture and all the gnarly details if they are to de-risk a project and make it slot in amongst other existing or in-flight projects. They need to understand the resources required across each frame of reference to make things happen. They can also communicate with others across each layer of reference, invariably communicating in different ways to different audiences, communicating the big picture to the upper layers of the pyramid and the specifics at the lower layers.

Now, that covers the frames of reference. But there’s more required.

2. Walking in their shoes (Understanding Personas and their Problems)

We now need to take ourselves out of our own world and into the minds of the stakeholders that represent the five frames of reference. For example, if we look at the third layer of reference – Company – then the stakeholders might be the CEO, the CTO, the CMO, etc. Have you ever stopped to think about how our OSS/BSS might add significant value to a CMO (Chief Marketing Officer – or perhaps the collective term of the chief marketing office and all the marketing team within it)? We can either hypothesise about what their needs are, or we can ask them, or we can research their problems, objectives, etc, understanding the exact phraseology of what each stakeholder talks about. This is the process of understanding the personas that represent each frame of reference and the specifics of what’s important to them. [BTW. Send us a message if you’d like to find out about our persona mapping methodology that we use to map personas to key workflows and benchmarks to help evaluate best-fit OSS/BSS solutions for your needs].

3. Idea Generation Questions

Next, ask yourself some questions about those personas and the challenges they face. Questions such as:

  • Can I solve their problem more naively (ie take the perspective of a 7-year-old, removing all the constraints an expert would know, and solve the problem in a naive and less complex way than the status-quo)?
  • How can the problem be decoupled from dependency problems or constraints (eg remove dependencies that contribute to the problem)?
  • How would a council of heroes (eg Einstein, Roosevelt, etc) tackle this problem?
  • Is there a different path that bypasses the problem entirely?
  • Have I removed all assumptions?
  • What unique perspectives / skills / traits can I bring to solving the problem that others just won’t have had?
  • What other hat could I wear (ie take an artist’s perspective to solving the problem if you normally apply an engineer’s thinking)?
  • What would a complete weirdo do (taking out any risk of ridicule or loss of face from your thinking)?
  • Is there a latent opportunity that could be captured by doing this in a whole different way?
  • The industry has been doing it this way for twenty years. When it was originally designed, we were constrained. If we were to design it from scratch using modern technology / frameworks / principles, would it look different?
  • Is there a Venn diagram I can apply to this?
  • What does the world / company / business-unit need?
  • I’m sure you can think of so many more

4. Next Steps

By now, you’ve hopefully generated some great idea seeds. The next step is to identify goals (you may like to try Google’s OKR model), actions and projects (or products / services). Your challenge (and mine) is to turn the seeds into something much bigger. From little things, big things grow (hopefully). I have some additional product and project frameworks that I’m now going to apply the idea seeds to.

Good luck!

Oh, and BTW, do you have a go-to technique that you use to stimulate big-picture thinking? I’m completely open-minded to trying something better. Leave us a comment below.




How we fall in love with our OSS ideas

Do you know what it is that people like / love about your OSS/BSS?
Why they buy it? Why they use it? What problem it solves? How it makes their life easier?

If you do, how do you know?
Do you speak with its users? Do you survey them? Do you spend time with them observing how they use your product? Is it just the “vibe” you get?

Do you actually have a way of measuring what the masses actually use? Actually quantifying, not just guessing. 

Did you notice how I switched tack there, from like/love in the first paragraph to use in the preceding paragraph? We’ll get back to that too, but let’s first look at quantifying use / volume.

The long-tail graph below shows a product’s functional features on the x-axis and the number of times it’s used during the log-period. Are you able to log all customer use of your product and create a graph like this?
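If your OSS/BSS writes an audit or event log entry for each user action, a long-tail chart like this can be generated with only a few lines of code. Here’s a minimal sketch in Python (the feature names and the 80% head/tail cut-off are illustrative assumptions, not from any particular product):

```python
from collections import Counter

# Hypothetical usage log: one entry per feature invocation,
# e.g. parsed from your OSS/BSS audit/event logs.
usage_log = [
    "create_order", "create_order", "view_alarm", "create_order",
    "view_alarm", "bulk_migrate", "create_order", "view_alarm",
]

# Count invocations per feature and sort into long-tail order
counts = Counter(usage_log)
long_tail = counts.most_common()  # [(feature, uses), ...] descending

# Flag the "head" features that make up 80% of all usage
total = sum(counts.values())
cumulative, head = 0, []
for feature, uses in long_tail:
    cumulative += uses
    head.append(feature)
    if cumulative / total >= 0.8:
        break

print(long_tail)  # features sorted from most-used to least-used
print(head)       # the big-impact features on the left of the graph
```

The same counting approach scales to millions of log rows; the interesting work is deciding where the head/tail cut-off sits for your product.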

The big-impact demands are the features that get used. They’re also probably the features that were in your original MVP (Minimum Viable Product).

I once worked with a vendor alongside someone I now call a friend. We were trying to deliver an OSS/BSS project with many modules. One of the modules was proving to be quite problematic. Our customers just didn’t like it. It had lots of features that had been progressively built into the product over a number of years. The only problem was that the developers had never really spent much (any) time working in operations… or even speaking with anyone who did.

My friend was getting fed up with this particular module. He did have ops experience. He was with me on the customer site and dealing with people who also did (ie customers). He intuitively knew what the product module needed. So he took it upon himself to write a replacement and he finished it in a single weekend. A single weekend!! He showed it to key customer stakeholders on the Monday morning and they loved it. They pushed for it to become their product because it delivered what they needed. It was an MVP that needed further enhancements and functionality, but in a single weekend it did all the big-impact functions (ie the left side of the graph above).

It soon became the product that other customers used too, replacing the original product that had years of development effort invested into it. The new product became a selling point during product demos, whereas the original product had been a liability.

This reminds me of the quote below from Andrew Mason (the founder of Groupon).

“The biggest mistake we made was being completely encumbered by this vision of what I wanted it to be… instead of focusing on the one piece of the product that people actually liked. You’re way too dumb to figure out if your idea is good. It’s up to the masses.”

There are some products out there on the market that have been around for decades. The frameworks that underpin them are out of date and they’re in desperate need of an overhaul. Unfortunately, their vendors are understandably hesitant to throw away all the work that has gone into them to create a much-needed replacement.

However, if they’re able to log their use like the long-tail diagram above, they might be surprised to find that a new product with only MVP functionality might replace most of what customers actually want and need. [I do caveat the “might” in the sentence above because some OSS/BSS products do require a lot of complex capabilities]

But let’s come back to the earlier statement about love/like vs use. Not all of the functionality that customers love/like is actually used a lot. There might be functionality like bulk migration or automation of rare but complex tasks that customers only use rarely but adds a lot of value. That’s why you still need to consider whether there are some functions that appear in the right-hand block of the long-tail that need to be implemented, not just summarily excluded from your product roadmap. That’s why the long-tail diagram has red/green colour-coding to identify what’s actually needed.

A couple of final notes.

It can be harder to evaluate what people like in OSS/BSS products because they’re high-cost / low-volume (of buyers), especially compared to mass-market products like Groupon. It’s harder to evaluate directional sentiment for OSS/BSS products because the user group is compelled to use the products whether they like them or not.

Similarly, do you have a way of measuring the efficiency of what customers use via activity duration analysis? Similar to the long-tail diagram above, we have another approach for measuring efficiency of use.

If you would like any guidance on your product roadmap, especially quantifying and prioritising what functionality to add / modify / remove, we’d be delighted to assist.

The confused mind says no – the psychology of OSS purchasing

When it comes to OSS/BSS vendor selections, a typical customer might say they’re evaluating vendors on criteria that are a mix of technical and commercial (and other) factors. However, there’s more likely a much bigger and often hidden factor that drives a purchasing event. We’ll explore what that is shortly.

I’m currently reading the book, Alchemy: The Surprising Power of Ideas That Don’t Make Sense. It discusses a number of lateral, counter-intuitive approaches that have actually been successful and the psychological effects behind them.

The author, Rory Sutherland, proffers the idea that, “we make decisions… not only for the expected average outcome (but) also seek to minimise the possible variance, which makes sense in an uncertain world.” Also that, “A 1 percent chance of a nightmarish experience dwarfs a 99 percent chance of a 5 percent gain.”

Does that make you wonder about what the OSS/BSS buying experience feels like?

Are OSS/BSS vendors so busy promoting the 5% gain and the marginal differentiating features that they’re overlooking the white-knuckled fear of the nightmarish experience for OSS buyers?

OSS/BSS transformation projects tend to be large, complex and risky. There are a lot of unknowns, especially for organisations that tackle these types of projects rarely. Every project is different. The stakeholders signing off these projects are making massive investment decisions (relative to their organisation’s respective size) in the allocation of  resources (in financial, human and time allocation). The ramifications of these buying decisions will last for years and can often be career-defining (in the positive or the negative depending on the success of the transformation).

As someone who assists organisations with their buying decisions, I can concur with the old saying that, “the confused mind says no.” I’d also suggest that the scared mind says F#$@ no! If the vendor bamboozles the buyer with jargon and features and data, it only amplifies the fears that they might be walking into a nightmarish experience.

Fear and confusion are the reason customers often seek out the vendors who are in the top-right corner of the Gartner quadrant, even when they’re just not the best-fit solution. It’s the reason for the old saying that nobody got fired for hiring IBM. It’s the reason OSS/BSS procurement events can be so arduous (9, 12, 18 months are the norm).

The counter-intuitive approach for vendors is to spend more time overcoming the fear and confusion rather than on technical demonstrations:

  • Simplify the messaging
  • Simplify the user experience (refer to the OSS intuition age)
  • Simplify the transformation
  • Provide work breakdowns and phasing to deliver early proof of value rather than a big-bang delivery way off into the future
  • Take time to learn and communicate in the customer’s voice and terminology rather than language that’s more comfortable to you
  • Provide working proofs-of-concept / sandpits of your solutions as early as possible for the customer to interact with
  • Allow customers to use these sandpit environments and self-help with extensive support collateral (eg videos, how-to’s) enabling the customer to build trust in you at their own pace
  • Show evidence of doing the important things really well and efficiently rather than a long-tail of marginal features
  • Show evidence of striving to ensure every customer gets a positive outcome. This includes up-front transparency of the challenges faced (and still being faced) on similar projects. Not just words, but evidence of your company’s actions on behalf of customers. This might include testimonials and referrals for every single customer
  • Show evidence of no prior litigations or rampant variations or cost escalations on past projects
  • Trust is required to reduce fear and confusion (refer to “the relationship slider” in the three project responsibility sliders)
  • Provide examples of past challenges and risk mitigations. Even school the client on what they need to do to de-risk the project prior to commencement*

Can you think of other techniques / proofs that almost guarantee to the customer that they aren’t entering into a nightmarish situation?

* Note: I wrote Mastering your OSS with this exact concept in mind – to get customers ready for the transformation project they’re about to embark on and the techniques they can use to de-risk the project.


OSS Sandpit – Radio Planning Exercise

Wireless or radio technologies are making waves (sorry for the awful pun) at the moment. The telco world is abuzz with the changes being brought about by 5G, IoT, LEO satellite and other related tech. They all rely on radio frequency (RF) to carry their all-important communications information.

This article provides an example of preparing RF coverage maps by using the inventory module of our Personal OSS Sandpit Project as well as a freely available trial of Twinkler. We’ve already shown some RF propagation in previous articles in this series.

But in today’s article, we’ll show radio coverage planning across the three sectors of a cellular network site.

This RF planning capability is becoming more widely useful with the advent of these technologies and the new business models / use-cases they support. Whereas RF planning used to be the domain of mobile network operators, we’re now seeing more widespread use including:

  • Neutral-host infrastructure owners
  • Wireless ISPs
  • IoT networks
  • Private mobile networks, especially for in-building or in-campus in-fill coverage (BTW, the Twinkler app allows both indoor and outdoor RF planning use-cases)
  • Government radio networks (eg emergency services)
  • Utilities
  • Enterprise (eg mining, transport / logistics, agriculture, broadcast radio / TV, etc)
  • Consulting services
  • Managed service offerings

In this sand-pit demo, we’ll reverse-engineer an operator’s existing tower / assets, but the same approach can be used to design and configure new assets (eg adding antennas for millimetre-wave 5G). Here in Australia, ACMA provides a public record of all licensed comms transmitter / receiver devices. This link is an example of one site recorded in the ACMA database that we’ll use for this demo (Warrimoo Tower).

You may recall that we’d recently demonstrated building a 3D model of Warrimoo Tower. This was done by stitching together photos taken from a drone. Looking at the animated GIF below, you’ll notice that we’ve not just built a model of the tower, but also tagged the assets (antenna, combiners, radio units) attached to the tower in 3D space. If you look closely, you’ll notice the labels that appear at the end of the visual loop, which identify the names of each asset.

Whilst Warrimoo tower holds assets of many owners (different colours on the animation), we’ll focus specifically on one 3-sector cluster. We’ll specifically focus on 3 antennae owned by Optus transmitting in the 700MHz band (763 MHz centre frequency). These are highlighted in green in the 3D model above.

The steps we use for RF planning are as follows:

  1. Extract ACMA data into Kuwaiba, our inventory database
  2. Push data from Kuwaiba to Twinkler, our RF modelling tool.
  3. Visualise the radio coverage map

Step 1 – Tower / antenna data in inventory:

The following diagram shows the inventory of the tower in topological format. The 3-sector cluster we’re modelling has been circled in yellow. [Click on the image for a closer look]

You’ll also notice that we’ve specifically highlighted one antenna (in blue, which has the name “204667-ANT-81193-MG1-01 (Optus)” according to our naming convention). There’s a corresponding list of attributes in the right-hand pane relating to this antenna. Most of these attributes have been taken from the ACMA database, but could equally come from your OSS, NMS, asset management system or other network data sets.

Some of the most important attributes (for RF calculation purposes anyway) are:

  • Device make/model (as this defines the radiation profile)
  • Height (either above the base of the tower or sea-level, but above the base in our case)
  • Azimuth (direction the antenna is pointing)
  • Emission centre frequency (ie the frequency of transmission)
  • Transmission power
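To make the attribute list above concrete, here’s a hypothetical sketch of how those attributes might be structured before being pushed to an RF modelling tool. The field names, the make/model, height and power values, and the idea of a JSON-ready payload are illustrative assumptions (only the antenna name, azimuth and centre frequency come from this article); this is not Kuwaiba’s or Twinkler’s actual schema:

```python
from dataclasses import dataclass, asdict

# Illustrative container for the RF-relevant attributes listed above.
@dataclass
class AntennaRecord:
    name: str
    make_model: str         # defines the radiation profile
    height_m: float         # above the base of the tower (in our case)
    azimuth_deg: float      # direction the antenna is pointing
    centre_freq_mhz: float  # emission centre frequency
    tx_power_dbm: float     # transmission power

antenna = AntennaRecord(
    name="204667-ANT-81193-MG1-01 (Optus)",
    make_model="Example-Panel-700",  # assumed value
    height_m=25.0,                   # assumed value
    azimuth_deg=230.0,
    centre_freq_mhz=763.0,
    tx_power_dbm=43.0,               # assumed value
)

# asdict() gives a dictionary that can be serialised to JSON and
# pushed to whichever RF modelling API you use
payload = asdict(antenna)
print(payload)
```

Whatever the source system (OSS, NMS, asset register), normalising into a record like this makes step 2 of the pipeline a straightforward data push.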

Step 2 – Twinkler takes these attributes and builds an RF model/s

You can either use the Twinkler application (sign up for a free account first) or its API. The application can visualise coverage maps, but these can also be gathered via the Twinkler API if you wish to add them as an overlay in your inventory, GIS or similar tools.

Step 3 – Visualise radio coverage diagrams

As you can see from the attributes in the inventory diagram above, we have an azimuth of 230 degrees for the first antenna in the 3-sector group. The azimuths of the other two antennae are 15 and 140 degrees respectively.

These give us the following radiation patterns (each is a separate map, but I’ve combined to make an animated GIF for easier viewing):

You’ll notice that the combined spread in the diagram is slightly larger because the combined coverage is set to a -130dBm signal level whereas the per-sector coverages are set to -120dBm.

Note: You may have spotted that the mapping algorithm considers the terrain. From this zoomed-in view you’ll see the coverage black-spot in the gully in the top-left corner more easily.
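As the note above says, the real mapping algorithm factors in terrain, but a simple free-space calculation gives a feel for why the -130dBm contour spreads further than the -120dBm one. Here’s a back-of-the-envelope sketch (the EIRP value is an assumed figure purely for illustration; only the 763 MHz frequency comes from this article):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (32.44 + 20log10(d_km) + 20log10(f_MHz))."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

def max_range_km(eirp_dbm: float, threshold_dbm: float, freq_mhz: float) -> float:
    """Distance at which the received level drops to the threshold,
    in free space (no terrain, clutter or antenna pattern)."""
    loss_budget = eirp_dbm - threshold_dbm
    return 10 ** ((loss_budget - 32.44 - 20 * math.log10(freq_mhz)) / 20)

eirp = 60.0  # dBm, assumed for illustration
f = 763.0    # MHz, per the example in this article

r_120 = max_range_km(eirp, -120.0, f)
r_130 = max_range_km(eirp, -130.0, f)
print(round(r_130 / r_120, 2))  # the -130dBm contour reaches ~3.16x further
```

In other words, every extra 10dB of link budget extends free-space range by a factor of 10^(10/20) ≈ 3.16, which is why lowering the plotted threshold noticeably widens the contour. Real coverage falls off far faster, which is exactly why terrain-aware tools are needed.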


I hope you enjoyed this brief introduction into how we’ve created a radio coverage map of an existing cellular network using the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.


Are you kidding? We’ll never use open-source OSS/BSS!

Back in the days when I first started using OSS/BSS software tools, there was no way any respectable telco was going to use open-source software (the other oss, for which I’ll use lower-case in this article) in their OSS/BSS stacks. The arguments were plenty, and if we’re being honest, probably had a strong element of truth in many cases back then.

These arguments included:

  • Security – This is the most commonly cited aversion I’ve heard to open-source. Our OSS/BSS control our network, so they absolutely have to be secure. Secure across all aspects of the stack from network / infrastructure to data (at rest and in motion) to account access to applications / code, etc. The argument against open-source is that the code is open to anyone to view, so vulnerabilities can be identified by hackers. Another argument is that community contributors could intentionally inject vulnerabilities that aren’t spotted by the rest of the community
  • Quality – There is a perception that open-source projects are more hobby projects than professional ones. Related to that, hobbyists can’t expend enough effort to make the solution as feature-rich and/or user-friendly as commercial software
  • Flexibility – Large telcos tend to want to steer the products to their own unique needs via a lot of customisations. OSS/BSS transformation projects tend to be large enough to encourage proprietary software vendors to be paid to make the requested changes. Choosing open-source implies accepting the product (and its roadmap) is defined by its developer community unless you wish to develop your own updates
  • Support – Telcos run 24x7x365, so they often expect their OSS/BSS vendors to provide round-the-clock support as well. There’s a belief that open-source comes with a best-effort support model with no contracted service obligations. And if something does go drastically wrong, that open-source licences disclaim all responsibility and liability
  • Continuity – Telcos not only run 24x7x365, but also expect to maintain this cadence for decades to come. They need to know that they can rely on their OSS/BSS today but also expect a roadmap of updates into the future. They can’t become dependent upon a hobbyist or community that decides they don’t want to develop their open-source project anymore

Luckily, these perceptions around open-source have changed in telco circles in recent years. The success of open-source organisations like Red Hat (acquired by IBM for $34 billion on annual revenues of $3.4 billion) has shown that valuable business models can be underpinned by open-source. There are many examples of open-source OSS/BSS projects driving valuable business models and associated professionalism. The change in perception has possibly also been driven by shifts in application architectures, from monolithic OSS/BSS to more modular ones. Having smaller modules has opened the door to utilisation of building-block solutions like the Apache projects.

So let’s look at the same five factors above again, but through the lens of the pros rather than the cons.

  • Security – There’s no doubt that security is always a challenge, regardless of being open-source or proprietary software, especially for an industry like OSS/BSS where all organisations are still investing more heavily in innovation (new features / capabilities) than in security optimisations. Clearly the openness of code means vulnerabilities are more easily spotted in open-source than in “walled-garden” proprietary solutions. Not just by nefarious actors, but by its development community as well. Linus’ Law suggests that “given enough eyeballs, all bugs (and security flaws) are shallow.” The question for open-source OSS/BSS is whether there are actually many eyeballs. All commercially successful open-source OSS/BSS vendors that I’m aware of have their own teams of professional developers who control every change to the code base, even on the rare occasions when there are community contributions. Similarly, many modern open-source OSS/BSS leverage other open-source modules that do have many eyes (eg Linux, SNMP libraries, Apache projects, etc). Another common perception is security through obscurity, where there are almost no external “eyeballs.” The fragmented nature of the OSS/BSS industry means that some proprietary tools have a tiny install base. This can lull some into a false sense of security. Alternatively, open-source OSS/BSS manufacturers know there’s a customer focus on security and have to mitigate this concern. The other interesting perspective of openness is that open-source projects can quite quickly be scrutinised for security code quality. An auditor has free rein to identify whether the code is professional and secure. With proprietary software, the customer’s auditor isn’t afforded the same luxury unless special access is granted to the code. With no code access, the auditor has to reverse-engineer for vulnerabilities rather than foresee them in the code.
  • Quality – There’s no doubt that many open-source OSS/BSS have matured and found valuable business models to sustain them. With the profitable business model has come increased resources, professionalism and quality. With the increased modularity of modern architectures, open-source OSS/BSS projects are able to perform very specific niche functionalities. Contrast this with the monolithic proprietary solutions that have needed to spread their resources thinner across a much wider functional estate. Also, successful open-source OSS/BSS organisations tend to focus on product development and product-related services (eg support), whereas the largest OSS/BSS firms tend to derive a much larger percentage of revenues from value-added services (eg transformations, customisations, consultancy, managed services, etc). The latter are more services-oriented companies than product companies. As inferred in the security point above, open-source also provides transparency relating to code quality. A code auditor will quickly identify whether open-source code is of good quality, whereas proprietary software quality is hidden inside the black-box
  • Flexibility – There has been a significant shift in telco mindsets in recent years, from an off-the-shelf to a build-your-own OSS/BSS stack. Telcos like AT&T have seen the achievements of the hyperscalers, observed the increased virtualisation of networks and realised they needed to have more in-house software development skills. Having in-house developers and access to the code-base of open-source means that telcos have (almost) complete control over their OSS/BSS destinies. They don’t need to wait for proprietary vendors to acknowledge, quote, develop and release new feature requests. They no longer rely on the vendor’s roadmap. They can just slip the required changes into their CI/CD pipeline and prioritise according to resource availability. Or if you don’t want to build a team of developers specifically skilled with your OSS/BSS, you can pick and choose – what functionality to develop in-house, versus what functionality you want to sponsor the open-source vendor to build
  • Support – Remember when I mentioned above that OSS/BSS organisations have found ways to build profitable business models around open-source software? In most cases, their revenues are derived from annual support contracts. The quality and coverage of their support (and the products that back it up) is directly tied to their income stream, so there’s commensurate professionalism assigned to support. As mentioned earlier, almost all the open-source OSS/BSS I’m aware of are developed by an organisation that controls all code change, not community consensus projects. This is a good thing when it comes to support, as the support partner is clear, unlike community-led open-source projects. Another support-related perspective is in the number of non-production environments that can be used for support, test, training, security analysis, etc. The cost of proprietary software means that it can be prohibitive to stand up additional environments for the many support use-cases. Apart from the underlying hardware costs and deployment effort, standing up additional open-source environments tends to be at no additional cost. Open-source also gives you greater choice in deciding whether to self-support your OSS/BSS in future (if you feel that your internal team knows the solution so intimately that they can fix any problem or functionality requirement that arises) rather than paying ongoing support contracts. Can the same be said for proprietary product support?
  • Continuity – This is perhaps the most interesting one for me. There is the assumption that big, commercial software vendors are more reliable than open-source vendors. This may (or may not) be the case. Plenty of commercial vendors have gone out of business, just as plenty of open-source projects have burned out or dwindled away. To counter the first risk, telcos pay to enter into software escrow agreements with proprietary vendors to ensure critical fixes and roadmap can continue even in the event that a vendor ceases to operate. But the escrow contract may not cover when a commercial vendor chooses to obsolete a product line of software or just fails to invest in new features or patches. This is a common event from even the world’s largest OSS/BSS providers. Under escrow arrangements, customers are effectively paying an insurance fee to have access to the code for organisational continuity purposes, not product continuity. Escrow may not cover that, but open-source is available under any scenario. The more important continuity consideration is the data, and data is the reason OSS/BSS exist. When choosing a commercial provider, especially a cloud software / service provider, the data goes into a black box. What happens to the data inside the black box is proprietary and often what comes out of it is too. Telcos will tend to have far more control of their data destinies for operational continuity if using open-source solutions. Speaking of cloud and SaaS-model and/or subscription-model OSS/BSS, customers are at the whim of the vendor’s product direction. Products, features and modules can be easily changed or deprecated by these vendors, with little recourse for the buyers. This can still happen in open-source and become an impediment for buyers too, but at least open-source provides buyers with the opportunity to control their own OSS/BSS destinies.

Now, I’m not advocating one or the other for your particular situation. As cited above, there are clearly pros and cons to each approach, as well as different best-fit products for different operators. However, open-source can no longer be dismissed as summarily as it was when I first started on my OSS/BSS journey. There are many fine OSS and BSS products and vendors in our Blue Book OSS/BSS Vendor Directory that are worthy of your consideration too when looking into your next product or transformation.

Edit: Thanks to Guy B. who rightly suggested that scalability was another argument against open-source in the past. Ironically, open-source has been a significant contributor to the almost unlimited scalability that many of our solutions enjoy today.

How to calculate the right jeopardy metrics in your end-to-end workflows

Last week we created an article that described how to use your OSS/BSS log data to generate reliable / quantifiable process flow diagrams.

We’ve expanded upon this research to identify a reliable calculation of jeopardy metrics. Jeopardy management is the method of notifying operators when an in-flight workflow (eg a customer order) is likely to breach targets such as the RFS date (ie when the customer’s service will be available for use) and/or SLAs (service level agreements).

Jeopardy management techniques aim to predict a breach before it has actually occurred. For example, if an Order to Activate workflow for a particular product type consists of 10 steps and only the first 2 steps have been completed within 29 days of a 30-day RFS target, then we could expect that the RFS date is likely to be missed and the customer should be alerted. If the right trackers were built, this order should’ve triggered a jeopardy notification long before 29 days had elapsed.

In the past, jeopardy indicators have tended to be estimated thresholds. Operators have tended to set notifications based on gut-feel (eg step 2 must be completed by day 5). But through the use of log data, we can now provide a more specific jeopardy indicator for every step in the process.

The chart above shows every activity within a workflow across the horizontal axis. The vertical axis shows the number of days elapsed since the start of the workflow.

By looking at all past instances of this workflow, we can show the jeopardy indicator as a series of yellow dots. In other words, if any activity has ever been finished later than its corresponding yellow dot, then the E2E workflow it was part of has breached its SLA. 

To use a more familiar analogy, it’s the latest possible date that you could start studying for exams and still be able to pass the subject, based on time-stamped historical data. Not that I ever left it late to study for exams back in uni days!!  🙂

And yet if you look closely, you’ll notice that some blue dots (average elapsed time for this activity) in this example are higher than the jeopardy indicator. You’ll also notice that the orange dots (the longest elapsed time to complete this task across all instances of this workflow according to log data) are almost all above the jeopardy indicator. Those examples highlight significant RFS / SLA breaches in this data set (over 10% are in breach).
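To make the calculation concrete, here’s a minimal sketch of how a per-activity jeopardy indicator could be derived from log data. The record layout (workflow_id, activity, elapsed_days) and the function name are hypothetical, but the logic follows the yellow-dot definition above: for each activity, take the maximum elapsed time observed in any workflow that still met its end-to-end SLA.

```python
from collections import defaultdict

def jeopardy_indicators(events, sla_days):
    """events: iterable of (workflow_id, activity, elapsed_days) records,
    where elapsed_days is time since the start of that workflow."""
    # The elapsed time of a workflow's last recorded activity tells us
    # whether the end-to-end flow met its SLA.
    end_time = defaultdict(float)
    for wf, _, t in events:
        end_time[wf] = max(end_time[wf], t)
    met_sla = {wf for wf, t in end_time.items() if t <= sla_days}

    # Jeopardy indicator per activity: the latest this activity has ever
    # finished in a workflow that still met its SLA (the "yellow dot").
    jeopardy = defaultdict(float)
    for wf, activity, t in events:
        if wf in met_sla:
            jeopardy[activity] = max(jeopardy[activity], t)
    return dict(jeopardy)
```

During an in-flight workflow, any activity finishing later than its indicator would then trigger a jeopardy notification to operators.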

Leave us a note below if you’d like us to assist with capturing your jeopardy indicators and identifying whether process interventions are required across your OSS/BSS.


How to Document, Benchmark and Optimise Operational Processes

Have you been tasked with:

  1. Capturing as-is process flows (eg swim-lane charts or BPMN [Business Process Model and Notation] diagrams)
  2. Starting a new project where understanding the current state is important
  3. Finding ways to optimise day-to-day activities performed by your team
  4. Creating a baseline process to identify automation opportunities
  5. Comparing your current processes with recommendations such as eTOM or ITIL
  6. Identifying which tasks are leading to SLA / OLA breaches

As you may’ve experienced during project kick-off phases, many customers’ as-is processes are not well defined, captured or adequately quantified (eg transaction volumes, duration times, fall-outs, etc).

If process diagrams have been captured, they’re often theoretical workflow maps developed by Business Analysts and Subject Matter Experts to the best of their knowledge. As such, they don’t always reflect real and/or complete flows. They may have no awareness of the rare flows / tasks / situations that can often trip our OSS/BSS tools and operators up. The rarer the sub-flows, the less likely they are to be documented.

Even if the flows have been fully documented, real metrics / benchmarks are rarely recorded. Metrics such as end-to-end completion times and times taken between each activity within the flow can be really challenging to capture and visualise, especially when you have large numbers of flows underway at any point in time.

Do you struggle to know where the real bottlenecks are in your process flows? Which tasks cause fall-outs? Which team members need advanced training? Which process steps have the largest differences in max / min / average durations? Which steps justify building automations? As the old saying goes, if you can’t measure it, you can’t manage it.

You need a quantitative, not qualitative, understanding of your workflows

As a result, we’ve developed a technique to reverse-engineer the log data that our OSS/BSS routinely collect and automatically time-stamp, using it to map and quantify processes. With time-stamped logs, we can trace every step, every flow variant, every sequence in the chain and every duration between them. This technique can be used on fulfilment, assurance and other flows. The sample below shows transaction volumes / sequences, but can also show durations within the flows:

Note that this and subsequent diagrams have been intentionally left in low-res format here on this page.

Better than just volumes, we can compare the max / mean / min processing times to identify the duration of activities and show bottlenecks (thicker red lines in the diagram below) as well as identifying hold-ups and inconsistencies in processing times:

By combining insights from flow volumes and timings, we can also recommend the processes and/or steps for which optimisation / automation is most justified.
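As an illustration of the underlying idea, here’s a hedged sketch of deriving transition volumes and min / mean / max durations from time-stamped logs. The (case_id, activity, timestamp) schema is an assumption for illustration, not the actual log format of any particular OSS/BSS:

```python
from collections import defaultdict
from statistics import mean

def flow_stats(log):
    """log: iterable of (case_id, activity, timestamp) records;
    timestamps are floats (eg epoch hours). Hypothetical schema."""
    # Group events by workflow instance (case)
    cases = defaultdict(list)
    for case, activity, ts in log:
        cases[case].append((ts, activity))

    # Collect the elapsed time of every activity-to-activity transition
    edges = defaultdict(list)  # (from_activity, to_activity) -> [durations]
    for steps in cases.values():
        steps.sort()  # order each case's events by timestamp
        for (t0, a), (t1, b) in zip(steps, steps[1:]):
            edges[(a, b)].append(t1 - t0)

    # Volume plus min / mean / max duration per transition
    return {edge: {"count": len(d), "min": min(d),
                   "mean": mean(d), "max": max(d)}
            for edge, d in edges.items()}
```

Transitions with high counts and high mean durations are the candidates worth investigating first, whether the intervention is training, re-engineering or automation.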

We can also use monitoring of the flows to identify failure situations that have occurred with a given process, such as the examples highlighted in red below. 

We can also use various visualisation techniques to identify changes / trends in processing over time. These techniques can even assist in identifying whether interventions (eg process improvements or automations) are having the intended impacts.

The following chart can be used to identify which tasks are consistently leading to SLA (Service Level Agreement) breaches.

The yellow dots indicate the maximum elapsed time (from the start of a given workflow) that has not resulted in an SLA breach. In other words, if this activity has ever been finished later than the yellow dot, then the E2E workflow it was part of has breached its SLA. These numbers can be used during an in-flight workflow to predict the likelihood that it will breach its SLA. They can also be used to set jeopardy values that notify operators of workflow slippages.

There are a few other points of interest in this chart:

  • Orange dots indicate the longest elapsed time for this activity seen within all flows in the log data
  • Grey dots indicate the shortest elapsed time from the beginning of a workflow
  • Blue dots indicate the average elapsed time
  • Yellow dots are the Jeopardy indicator, meaning that if the elapsed time of this activity has ever exceeded this value then it has gone on to breach SLA
  • The red line is SLA threshold for this particular workflow type
  • You’ll notice that many average values (blue dots) are above jeopardy, which indicates this activity is regularly appearing in flows that go on to breach SLA levels
  • Almost all max values are above jeopardy (most are so high that they’re off the top of the scale) so most activities have been part of an E2E flow that has breached SLA
  • The shaded blue box shows tasks that have never been in an E2E flow that has met SLA!!
  • Needless to say, there were some interventions required with this example!

Operational Process Summary

As described above, using log data that you probably already have ready access to in your OSS/BSS, we can assist you to develop quantifiable process flow information. Having quantifiable data in turn can lead to greater confidence in initiating process interventions, whether they are people (eg advanced training), process (eg business process re-engineering) or technology (eg automations).

This technique works equally well for:

  • Understanding the current situation before commencing an OSS/BSS transformation project
  • Benchmarking and refining processes on an OSS/BSS stack that is running in business-as-usual mode
  • Highlighting the impact a process intervention has had (ie comparing before and after)

Would you like to book a free consultation to discuss the challenges you face with your (or a customer’s) as-is process situation? Please leave your details and list of challenges in the contact form below.

019 – Modern OSS/BSS Transformation Techniques that start with the Customer Journey with Martin Pittard

Digital transformation is a term that’s entered the modern vernacular, but here in the world of OSS/BSS it’s just what we’ve been doing for decades. Whether aimed at delivering digital services, collecting data from all points of an organisation’s compass, increasing the internal efficiencies of operational teams or improving user experiences externally, this is just what our OSS/BSS tools and projects do.

Our guest on today’s episode, Martin Pittard, has been leading digital transformations since long before the digital transformation term existed. As Principal IT Architect at Vocus, Martin is in the midst of leading his most recent digital transformation (ie an OSS/BSS transformation project). On this latest transformation, Martin is using a number of new techniques plus long-held architectural principles, including the use of dynamic / Open APIs (a TM Forum initiative) and being catalog-driven, standards-based and model-based, with an intense focus on separation of concerns. Of perhaps even greater focus is the drive to improve customer journeys, as well as ensuring solution flexibility to support customer interactions across future business and service models.

It was a recent talk at a TM Forum event in Sydney that reinforced our interest in having Martin on as a guest. During this presentation, Martin shared some fantastic ideas on how Vocus is tackling the specific challenges and techniques of its OSS/BSS transformation. So good was it that we turned it into an article on our blog. A video of Martin’s in-depth presentation plus a summary of key points can be found here:

In addition to the Vocus transformation, Martin also shares stories and insights from past transformations at organisations like Rockwell (building combat systems for submarines), Fujitsu, the structural separation of British Telecom to form Openreach, Alcatel-Lucent (transforming the Telstra network and OSS/BSS) and then nbn. On the latter, Martin spent 8+ years leading the build of mission-critical systems across industry integrations (ie customer-facing systems) and network assurance for nbn. During that time, Martin led a large team through the transition to Agile delivery and recounts some of the challenges, benefits and insights from embarking on that journey.

For any further questions you may have, Martin can be found at:

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.

How to improve user experience with a headless OSS

The first OSS/BSS I used, back in 2000, was built around an Oracle relational database and the GUI was built using Oracle Forms (as well as some C++ apps). The developers had implemented a concept they referred to as a boilerplate. It basically allowed the customers to modify any label they wished on the forms.

When I say labels, I mean any of the default text fields, such as the following in the diagram below:

  • Connecting Links
  • Connect Selected Endpoints
  • Disconnect Both Sides
  • Disconnect A Side
  • Disconnect B Side

By default, the forms in my first OSS were all written in English. But the boilerplate functionality offered the customer flexibility. Instead of “connecting links,” they may’ve preferred to call it “cross-connects” or “enlaces de conexión.” It was perfect for supporting different languages without changing the code. At the time, I thought this was a really neat feature. It was the first sign I’d seen of codeless flexibility in the UI of an OSS.
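A boilerplate of this kind can be sketched as a simple per-customer override table consulted at render time. The table contents and helper name below are hypothetical, not the original Oracle Forms implementation:

```python
# Per-customer label overrides, keyed by the default English label.
# Overrides can swap terminology or language without any code change.
BOILERPLATE = {
    "Connecting Links": "Cross-Connects",                                # terminology preference
    "Connect Selected Endpoints": "Conectar los extremos seleccionados", # language swap
}

def label(default_text, overrides=BOILERPLATE):
    # Fall back to the default label when no override exists
    return overrides.get(default_text, default_text)
```

The key design point is that the UI never hard-codes display text; it always resolves labels through the table, so each customer can re-skin the vocabulary of the entire application from data alone.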

These days, most OSS/BSS need drastic improvements in their UI. As we described previously, they tend to be non-intuitive and take too long to master. We need to go further than boilerplate functionality. This is where headless or decoupled architectures appear to hold some promise.  But we’ll loop back to the how in a moment. First, we’ll take a look at the why.

Apple has shown the tech world the importance of customer experience (CX) and elegant UI / industrial design. Much like the MP3 players that preceded the iPod, our OSS/BSS are anonymous, poorly made objects. We have some catching up to do.

But, let’s first start by asking who is the customer we’re designing a Customer Experience for? Well, there are two distinct categories of customer that we have to design our OSS/BSS and workflows for, as shown in the sample Order to Cash (O2C) flow infographic below.

  • We have the first level of customers, Customer 2, the operators in the diagram below, who use our OSS/BSS directly, but often behind the scenes
  • Then there’s the second level of customers, Customer 1, the end users, who also interact with our OSS/BSS, but often indirectly

The end users need to have a CX that appears highly integrated, smooth and seamless. It has to appear consistent, even though there are multiple channels that often aren’t linked (or even linkable – eg a customer might interact with an IVR or chatbot without revealing personal identifiers that can be linked with a customer ID in the OSS/BSS).

The end user follows the journey through the left-hand column of the infographic from start to finish. However, to deliver upon the flow on the right-side of the infographic, the CSP side, it’s likely that dozens of operators using many completely unrelated applications / UIs will perform disparate activities. They’ll perform a small subset of activities, but for many different end-users within a given day. It’s highly unlikely that there will be a single person (right side) mirroring the end-user journey (left side), so CSPs have to hope their workflows, data flows and operators all work in unison, without any fall-outs along the way.

Along the right-hand path, the operators tend to have a plethora of different back-end tools (as implied in the comments in the O2C flow above). They might be integrated (via API), but the tools often come from different vendors, so there is no consistency in UI.

The “headless” approach (see article for further info) allows the user interface to be decoupled from the application logic and data (as per the diagram below). If all the OSS/BSS tools along the right-hand path of the O2C infographic were headless, it might allow an integrator to design a smooth, consistent and robust end-to-end customer experience across many different vendor applications.
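As a toy illustration of the decoupling (not any particular vendor’s architecture), the logic layer returns plain, UI-agnostic data and each “head” renders it in its own way. All function names below are hypothetical:

```python
# Headless sketch: application logic and data are exposed as plain
# structures; any number of interchangeable "heads" present them.

def order_status(order_id):
    # Application logic + data layer (stubbed here for illustration).
    # In a real stack this would sit behind an API.
    return {"order": order_id, "stage": "Activation", "eta_days": 3}

def web_head(state):
    # One possible presentation layer: an HTML snippet for a portal
    return (f"<p>Order {state['order']}: {state['stage']} "
            f"(ETA {state['eta_days']} days)</p>")

def chat_head(state):
    # A completely different head over the same logic and data
    return (f"Your order {state['order']} is in {state['stage']}, "
            f"about {state['eta_days']} days to go.")
```

Because the heads share one logic layer, a UX team could trial or replace either presentation without touching the engineering underneath, which is exactly the division of labour argued for below.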

A few other thoughts too:

  • Engineers / Developers can create the application logic and data manipulation. They only need to solve this problem once and then move onto solving for the next use-case. It can then be handed to UX / UI experts to “skin” the products, potentially trialing many different UI variants until they find an optimal solution. UX experts tell me trial and error is the real science behind great user interface design. This is also reflected in great industrial design by organisations like Apple that create hundreds of prototypes before going to market. Our engineers / devs are too precious a resource to have them trialing many different UI variants
  • Most OSS/BSS have a lot of functionality baked into the product and UI. Unfortunately, most operators won’t need to use a lot of the functionality. Therefore it’s just cluttering up the UI and their daily activities. The better approach is to only show the functionality that the operators actually need
  • This “just show the operators what they need” concept can also allow for separation by skill. For example, a lot of workflows (eg an O2C or T2R) will have high-volume, highly repeatable, fast turnover variants that can be easily taught to new operators. However, the same O2C workflow might have less common or obscure variants, such as fall-outs that require handling by exception by more highly experienced / skilled / trained operators. You might choose to have a fast-lane (high-volume) UX versus a high-control UX for these scenarios 
  • A consistent UI could be underpinned by any number of applications. Theoretically, one OSS/BSS app could be swapped out for another without any change in workflow for the end user or even the CSP’s operator
  • Consistent UIs that are matched to context-sensitive workflows could also streamline the automation and/or RPA of each
  • Consistent UIs allow for easier benchmarking for speed and effectiveness, which in turn could help with feedback loops, such as autonomous network concepts
  • UI / UX experts can design style guides that ensure consistency across all applications in the OSS/BSS stack
  • With the iPod, Apple arrived at the insight that it made more sense to take much of the logic “off-device”

Hat tip to George for the headless UI concept as a means of improving UX!

PS. I should also point out that another parallel universe I’m diving into is Augmented Reality based OSS/BSS and it provides a slightly different context when mentioning headless UX. In AR, headless implies having no screen to interact with, so the UX comes from audio and visual presentation of data that diverges significantly from the OSS/BSS of today. Regardless, the separation of logic / data from the front-end that we described earlier also seems like a step towards future AR/VR-based OSS/BSS.

018 – How a NaaS Transformation can Revolutionise your OSS/BSS Stack with Johanne Mayer

OSS/BSS stacks can be incredibly complex and cumbersome beasts, especially in large carriers with many different product, process and network variants. We don’t make that task any easier by creating many unique product offerings to take to market. Time to market can be a significant competitive advantage, or a serious impediment to it. NaaS, or Network as a Service, is a novel approach to increasing flexibility in our OSS/BSS stacks, inserting an API layer that provides a separation of concerns.

Our guest on today’s episode, Johanne Mayer, is so passionate about the benefits of NaaS-based transformations that she’s formed a company named NaaS Compass to assist others with their transformations. She brings hard-won experience from being involved with NaaS transformations at organisations like Telstra.

Prior to embarking on this latest venture, Johanne has also worked with many of the most iconic organisations in the telco and OSS/BSS industries. These include Nortel, Alcatel-Lucent (now part of Nokia), Ericsson, Oracle, Ciena Blue Planet and Analysys Mason. Johanne takes us on a journey through a career that has seen her work on exciting projects, from the days of NMS and X.25 networks to more recent projects leading collaborative transformation with standards organisations like TM Forum (where she is a Distinguished Fellow), MEF and ETSI.

For any further questions you may have, Johanne can be found at:

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.