Zooming in and out of your OSS

Our previous post talked about using the following frame of reference to think bigger about our OSS/BSS projects and their possibilities.

It’s the multitudes of experts at Level 1 that get projects done and products released. All hail the doers!!

As Rory Sutherland indicated in his book, Alchemy – The surprising power of ideas that don’t make sense, “No one complained that Darwin was being trivial in comparing the beaks of finches from one island to another because his ultimate inferences were so interesting.”

In the world of OSS/BSS, we do need people who are comparing the beaks of finches. We need to zoom down into the details, to understand the data. 

But if you’re planning an OSS/BSS project or product; leading a team; consulting; or marketing / selling an OSS/BSS product or service, you also need to zoom out. You need to be like Darwin to look for, and comprehend the ramifications of, the details.

This is why I use a WBS (Work Breakdown Structure) to break down almost every OSS/BSS project I work on. I start with the problem statements at levels 2-5 of the reference framework above (depending on the project) and try to take in the broadest view of the project. I then start breaking the work down at the highest level first (ie the coarsest granularity). From there, we can zoom as far into the details as we need to. It provides a plan on a page that all stakeholders can readily zoom out of and back into, seeing where they fit in the overall scheme of the project.

With this same perspective of zooming in and out, I often refer to the solution architecture analogy. SAs tend to create designs for an end-state – the ideal solution that will appear at the end of a project or product. Having implemented many OSS/BSS, I’ve also seen examples where the end-state simply isn’t achievable because of the complexity of all the moving parts (The Chessboard Analogy). The SAs haven’t considered all the intermediate states that the delivery team needs to step through or the constraints that pop up during OSS/BSS transformations. Their designs haven’t considered the detailed challenges along the way.

Interestingly, those challenges often have little to do with the OSS/BSS you’re implementing. It could be things like:

  • A lack of a suitable environment for dev, test, staging, etc purposes
  • A lack of non-PROD infrastructure to test with, leading to the challenging carve-out and protection of PROD whilst still conducting integration testing
  • A new OSS/BSS introduces changes in security or shared services (eg DNS, UAM, LDAP, etc) models that need to be adapted on the fly before the OSS/BSS can function
  • Carefully disentangling parts of an existing OSS/BSS before stitching in elements of your new functionality (The Strangler Fig transformation model)
  • In-flight project changes that are moving the end-state or need to be progressively implemented in lock-step with your OSS/BSS phased release, which is especially common across integrations and infrastructure
  • Changes in underlying platforms or libraries that your code-base depends on
  • Refactoring of other products like microservices 
  • The complex nuances of organisational change management (since our OSS/BSS often trigger significant change events)
  • Changes in market landscape
  • The many other dependencies that may or may not be foreseeable at the start of the journey

You need to be able to zoom out to consider and include these types of adjacencies when planning an OSS/BSS.

I’ve only seen one person successfully manage an OSS/BSS project using bottom-up work breakdown (although he was an absolute genius and it did take him 14 hour days x 7 day weeks throughout the project to stay on top of his 10,000+ row Gantt chart and all of the moving dependencies within it). I’ve also seen other bottom-up thinkers nearing mental breakdown trying to keep their highly complex OSS/BSS projects under control.

Being able to zoom up and down may be your only hope for maintaining sanity in this OSS/BSS world (although it might already be too late for me…. and you???)

If you need guidance with the breakdown of work on your OSS/BSS project or need to reconsider the approach to a project that’s veering off target, give us a call. We’d be happy to discuss.

How to take in OSS/BSS’s Bigger Picture

We had a planned power outage at our place yesterday, so I thought I’d use that as an opportunity to get out of the details of projects and do a day of longer-term strategy and planning.

In the days prior, I decided to investigate frameworks that might help to get me into a bigger-picture and/or lateral-thinking mindset. I’d already set aside three books from my bookshelf and thought that Google might have some other great ideas. Turns out that 8 out of the top 10 results that Google returned were of no help at all. The other 2 gave a couple of insights but still weren’t going to help with any Elon Musk level big-picture thinking.

Somewhat disappointed, I thought I’d have a crack at creating my own. Or more to the point, articulating the approach that I’ve intuitively used over years of OSS/BSS implementation and consulting.

One of the things I’ve noticed is that most OSS/BSS practitioners are unbelievably clever. However, I’ve also noticed that most (randomly plucks 95% out of the air) are focused on doing an awesome job of getting their next project or product or feature or project phase or deliverable done so they can move on quickly to the next set of challenges. They’re the 95% at step 1 in the Think Bigger Pyramid below that bring their brainpower to solving the myriad of detailed challenges that it takes to make OSS/BSS projects fly.

1. The Think-Bigger Pyramid (Frames of Reference)

That’s our first frame of reference: the details of product and project (and BAU operations). The questions asked here relate to solving the fundamental, and often intensely challenging, implementation problems that await every OSS/BSS.

But over the years, I’ve observed that there’s a special cohort that has a bigger impact on their projects and their teams. They’re what I refer to as Valuable Tripods. These are the people who see the technology and the detail, but also understand the business imperative. They know that the OSS/BSS is just the means to an end. And that’s where the higher frames of reference come in.

The tripods understand that they need to put themselves in the shoes of the business unit and/or project sponsors. They understand that these people have a different set of objectives and challenges. They have a different set of questions that they ask (more on that later), and they take the perspective of how the OSS/BSS will add value to their business unit and sponsors.

Next up the stack are those who consider how the OSS/BSS will add value to their company / organisation, knowing that a well-implemented OSS/BSS stack can contribute significant competitive advantage to their organisation (the opposite can just as drastically deliver competitive DISadvantage). This takes an understanding of what represents a competitive advantage to each organisation. It invariably relates to time-to-market, operational efficiency, customer experience, product desirability and more. You can already see there’s a mindset shift between layer 1 and midway here at layer 3. But even here at layer 3, we’re still inward facing. We’re considering the internal needs / objectives of the organisation we represent and still have the blinkers on that relate to our little part of the world.

Layer 4 is the first to take the blinkers off and consider the broader ramifications of OSS/BSS, beyond the cause we’re directly contributing to. This begins to take the mindset of the more generic needs / benefits / objectives of the industry at large. As a collective of immense brainpower, initiative and effort, the OSS/BSS industry has achieved so much for the people and customers we serve. But I’m a firm believer that there’s still an enormous amount to achieve, to do better, as discussed in our Call for Innovation. The global network service provider industry has proven to be so essential to our modern way of life, but its operators are currently battling a structural decline in profitability. This reference layer still really sits within the telco vertical.

And finally we get to the consumers of communications services. You could consider this to be the users of your communications services, but I’m thinking more of the global users of all communications services and what their needs / objectives / challenges are. This goes beyond the network service operator domain and also encapsulates communications services from over-the-top (OTT) delivery models. But this is also the first layer that goes beyond the telco vertical because it now starts to ask questions about why all comms service users need the services that OSS/BSS help to deliver. It delves into personal comms, corporate comms, machine-to-machine comms and across every single industry vertical (Is there a vertical that doesn’t leverage communications services in some way??).

I should also mention that I feel the best-of-the-best OSS/BSS practitioners have a rare ability to zoom in and out of these frames of reference. They can dive down into the details and comprehend them, but then zoom back out and understand how all the pieces fit together. They need to know the big picture and all the gnarly details if they are to de-risk a project and make it slot in amongst other existing or in-flight projects. They need to understand the resources required across each frame of reference to make things happen. They can also communicate with others across each layer of reference, invariably communicating in different ways to different audiences, communicating the big picture to the upper layers of the pyramid and the specifics at the lower layers.

Now, that covers the frames of reference. But there’s more required.

2. Walking in their shoes (Understanding Personas and their Problems)

We now need to take ourselves out of our own world and into the minds of the stakeholders that represent the five frames of reference. For example, if we look at the third layer of reference – Company – then the stakeholders might be the CEO, the CTO, the CMO, etc. Have you ever stopped to think about how our OSS/BSS might add significant value to a CMO (Chief Marketing Officer – or perhaps the collective term of the chief marketing office and all the marketing team within it)? We can either hypothesise about what their needs are, or we can ask them, or we can research their problems, objectives, etc, understanding the exact phraseology of what each stakeholder talks about. This is the process of understanding the personas that represent each frame of reference and the specifics of what’s important to them. [BTW. Send us a message if you’d like to find out about our persona mapping methodology that we use to map personas to key workflows and benchmarks to help evaluate best-fit OSS/BSS solutions for your needs].

3. Idea Generation Questions

Next, ask yourself some questions about those personas and the challenges they face. Questions such as:

  • Can I solve their problem more naively (ie take the perspective of a 7 year old, removing all the constraints an expert would know, and solve the problem in a naive and less complex way than the status-quo)
  • How can the problem be decoupled from dependency problems or constraints (eg remove dependencies that contribute to the problem)
  • How would a council of heroes (eg Einstein, Roosevelt, etc) tackle this problem
  • Is there a different path that bypasses the problem entirely
  • Have I removed all assumptions
  • What unique perspectives / skills / traits can you bring to solving the problem that others just won’t have had
  • What other hat could I wear (ie take an artist’s perspective to solving the problem if you normally apply an engineer’s thinking)
  • What would a complete weirdo do (taking out any risk of ridicule or loss of face from your thinking)
  • Is there a latent opportunity that could be captured by doing this in a whole different way
  • The industry has been doing it this way for twenty years. When it was originally designed, we were constrained. If we were to design it from scratch using modern technology / frameworks / principles, would it look different
  • Is there a Venn Diagram I can apply to this
  • What does the world / company / business-unit need
  • I’m sure you can think of so many more

4. Next Steps

By now, you’ve hopefully generated some great idea seeds. The next step is to identify goals (you may like to try Google’s OKR model), actions and projects (or products / services). Your challenge (and mine) is to turn the seeds into something much bigger. From little things, big things grow (hopefully). I have some additional product and project frameworks that I’m now going to apply the idea seeds to.

Good luck!

Oh, and BTW, do you have a go-to technique that you use to stimulate big-picture thinking? I’m completely open-minded to trying something better. Leave us a comment below.


How we fall in love with our OSS ideas

Do you know what it is that people like / love about your OSS/BSS?
Why they buy it? Why they use it? What problem it solves? How it makes their life easier?

If you do, how do you know?
Do you speak with its users? Do you survey them? Do you spend time with them observing how they use your product? Is it just the “vibe” you get?

Do you have a way of measuring what the masses actually use? Quantifying, not just guessing.

Did you notice how I switched tack there, from like/love in the first paragraph to use in the preceding paragraph? We’ll get back to that too, but let’s first look at quantifying use / volume.

The long-tail graph below shows a product’s functional features on the x-axis and the number of times it’s used during the log-period. Are you able to log all customer use of your product and create a graph like this?

The big-impact demands are the features that get used. They’re also probably the features that were in your original MVP (Minimum Viable Product).
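
If you already capture usage or audit logs, building this kind of long-tail view doesn’t need specialist tooling. Below is a minimal sketch, assuming a hypothetical log export with one row per feature invocation (the file name and column names are purely illustrative, not from any particular product):

```python
# Minimal long-tail usage sketch (illustrative only).
# Assumes a hypothetical usage log with one row per feature invocation,
# e.g. columns: timestamp, user_id, feature
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("usage_log.csv")  # hypothetical export from your OSS/BSS audit log

# Count how often each functional feature was used during the log period
usage = log["feature"].value_counts()

# Plot the long tail: features on the x-axis, usage counts on the y-axis
usage.plot(kind="bar", figsize=(12, 4))
plt.xlabel("Product feature")
plt.ylabel("Times used during log period")
plt.title("Long-tail of feature usage")
plt.tight_layout()
plt.show()

# The handful of features on the left are the big-impact (likely MVP) functions;
# the long tail on the right is where roadmap scrutiny is warranted.
```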

I once worked with a vendor alongside someone I now call a friend. We were trying to deliver an OSS/BSS project with many modules. One of the modules was proving to be quite problematic. Our customers just didn’t like it. It had lots of features that had been progressively built into the product over a number of years. The only problem was that the developers had never really spent much (any) time working in operations… or even speaking with anyone who did.

My friend was getting fed up with this particular module. He did have ops experience. He was with me on the customer site and dealing with people who also did (ie customers). He intuitively knew what the product module needed. So he took it upon himself to write a replacement and he finished it in a single weekend. A single weekend!! He showed it to key customer stakeholders on the Monday morning and they loved it. They pushed for it to become their product because it delivered what they needed. It was an MVP that needed further enhancements and functionality, but in a single weekend it did all the big-impact functions (ie the left side of the graph above).

It soon became the product that other customers used too, replacing the original product that had years of development effort invested into it. The new product became a selling point during product demos, whereas the original product had been a liability.

This reminds me of the quote below from Andrew Mason (the founder of Groupon).

“The biggest mistake we made was being completely encumbered by this vision of what I wanted it to be… instead of focusing on the one piece of the product that people actually liked. You’re way too dumb to figure out if your idea is good. It’s up to the masses.”

There are some products out there on the market that have been around for decades. The frameworks that underpin them are out of date and they’re in desperate need of an overhaul. Unfortunately, their vendors are rightfully hesitant to throw away all the work that has gone into them to create a much-needed replacement.

However, if they’re able to log their use like the long-tail diagram above, they might be surprised to find that a new product with only MVP functionality might replace most of what customers actually want and need. [I do caveat the “might” in the sentence above because some OSS/BSS products do require a lot of complex capabilities]

But let’s come back to the earlier statement about love/like vs use. Not all of the functionality that customers love/like is actually used a lot. There might be functionality like bulk migration or automation of rare but complex tasks that customers only use rarely but adds a lot of value. That’s why you still need to consider whether there are some functions that appear in the right-hand block of the long-tail that need to be implemented, not just summarily excluded from your product roadmap. That’s why the long-tail diagram has red/green colour-coding to identify what’s actually needed.

A couple of final notes.

It can be harder to evaluate what people like in OSS/BSS products because they’re high-cost / low-volume (of buyers), especially compared to mass-market products like Groupon. It’s harder to evaluate directional sentiment for OSS/BSS products because the user group is compelled to use the products whether they like them or not.

Similarly, do you have a way of measuring the efficiency of what customers use via activity duration analysis? Similar to the long-tail diagram above, we have another approach for measuring efficiency of use.

If you would like any guidance on your product roadmap, especially quantifying and prioritising what functionality to add / modify / remove, we’d be delighted to assist.

The confused mind says no – the psychology of OSS purchasing

When it comes to OSS/BSS vendor selections, a typical customer might say they’re evaluating vendors on criteria that are a mix of technical and commercial (and other) factors. However, there’s more likely a much bigger and often hidden factor that drives a purchasing event. We’ll explore what that is shortly.

I’m currently reading the book, Alchemy: The Surprising Power of Ideas That Don’t Make Sense. It discusses a number of lateral, counter-intuitive approaches that have actually been successful and the psychological effects behind them.

The author, Rory Sutherland, proffers the idea that, “we make decisions… not only for the expected average outcome (but) also seek to minimise the possible variance, which makes sense in an uncertain world.” Also that, “A 1 percent chance of a nightmarish experience dwarfs a 99 percent chance of a 5 percent gain.”

Does that make you wonder about what the OSS/BSS buying experience feels like?

Are OSS/BSS vendors so busy promoting the 5% gain and the marginal differentiating features that they’re overlooking the white-knuckled fear of the nightmarish experience for OSS buyers?

OSS/BSS transformation projects tend to be large, complex and risky. There are a lot of unknowns, especially for organisations that tackle these types of projects rarely. Every project is different. The stakeholders signing off these projects are making massive investment decisions (relative to their organisation’s size) in the allocation of resources (financial, human and time). The ramifications of these buying decisions will last for years and can often be career-defining (in the positive or the negative depending on the success of the transformation).

As someone who assists organisations with their buying decisions, I can concur with the old saying that, “the confused mind says no.” I’d also suggest that the scared mind says F#$@ no! If the vendor bamboozles the buyer with jargon and features and data, it only amplifies the fears that they might be walking into a nightmarish experience.

Fear and confusion are the reason customers often seek out the vendors who are in the top-right corner of the Gartner quadrant, even when they’re just not the best-fit solution. It’s the reason for the old saying that nobody got fired for hiring IBM. It’s the reason OSS/BSS procurement events can be so arduous (9, 12, 18 months are the norm).

The counter-intuitive approach for vendors is to spend more time overcoming the fear and confusion than on technical demonstrations:

  • Simplify the messaging
  • Simplify the user experience (refer to the OSS intuition age)
  • Simplify the transformation
  • Provide work breakdowns and phasing to deliver early proof of value rather than a big-bang delivery way off into the future
  • Take time to learn and communicate in the customer’s voice and terminology rather than language that’s more comfortable to you
  • Provide working proofs-of-concept / sandpits of your solutions as early as possible for the customer to interact with
  • Allow customers to use these sandpit environments and self-help with extensive support collateral (eg videos, how-to’s) enabling the customer to build trust in you at their own pace
  • Show evidence of doing the important things really well and efficiently rather than a long-tail of marginal features
  • Show evidence of striving to ensure every customer gets a positive outcome. This includes up-front transparency of the challenges faced (and still being faced) on similar projects. Not just words, but evidence of your company’s actions on behalf of customers. This might include testimonials and referrals for every single customer
  • Show evidence of no prior litigations or rampant variations or cost escalations on past projects
  • Build the trust required to reduce fear and confusion (refer to “the relationship slider” in the three project responsibility sliders)
  • Provide examples of past challenges and risk mitigations. Even school the client on what they need to do to de-risk the project prior to commencement*

Can you think of other techniques / proofs that almost guarantee to the customer that they aren’t entering into a nightmarish situation?

* Note: I wrote Mastering your OSS with this exact concept in mind – to get customers ready for the transformation project they’re about to embark on and the techniques they can use to de-risk the project.


OSS Sandpit – Radio Planning Exercise

Wireless or radio technologies are making waves (sorry for the awful pun) at the moment. The telco world is abuzz with the changes being brought about by 5G, IoT, LEO satellite and other related tech. They all rely on radio frequency (RF) to carry their all-important communications information.

This article provides an example of preparing RF coverage maps using the inventory module of our Personal OSS Sandpit Project as well as a freely available trial of Twinkler (www.twinkler.app). We’ve already shown some RF propagation examples in previous articles in this series.

But in today’s article, we’ll show radio coverage planning across the three sectors of a cellular network site.

This RF planning capability is becoming more widely useful with the advent of these technologies and the new business models / use-cases they support. Whereas RF planning used to be the domain of mobile network operators, we’re now seeing more widespread use including:

  • Neutral-host infrastructure owners
  • Wireless ISPs
  • IoT networks
  • Private mobile networks, especially for in-building or in-campus in-fill coverage (BTW, the Twinkler app allows both indoor and outdoor RF planning use-cases)
  • Government radio networks (eg emergency services)
  • Utilities
  • Enterprise (eg mining, transport / logistics, agriculture, broadcast radio / TV, etc)
  • Consulting services
  • Managed service offerings

In this sand-pit demo, we’ll reverse-engineer an operator’s existing tower / assets, but the same approach can be used to design and configure new assets (eg adding antennae for millimetre-wave 5G). Here in Australia, ACMA provides a public record of all licenced comms transmitter / receiver devices. This link is an example of one site recorded in the ACMA database that we’ll use for this demo – https://web.acma.gov.au/rrl/site_search.site_lookup?pSITE_ID=204667 (Warrimoo Tower).

You may recall that we’d recently demonstrated building a 3D model of Warrimoo Tower. This was done by stitching together photos taken from a drone. Looking at the animated GIF below, you’ll notice that we’ve not just built a model of the tower, but also tagged the assets (antenna, combiners, radio units) attached to the tower in 3D space. If you look closely, you’ll notice the labels that appear at the end of the visual loop, which identify the names of each asset.

Whilst Warrimoo tower holds assets of many owners (different colours on the animation), we’ll focus specifically on one 3-sector cluster: 3 antennae owned by Optus transmitting in the 700MHz band (763 MHz centre frequency). These are highlighted in green in the 3D model above.

The steps we use for RF planning are as follows:

  1. Extract ACMA data into Kuwaiba, our inventory database
  2. Push data from Kuwaiba to Twinkler, our RF modelling tool.
  3. Visualise the radio coverage map

Step 1 – Tower / antenna data in inventory:

The following diagram shows the inventory of the tower in topological format. The 3-sector cluster we’re modelling has been circled in yellow. [Click on the image for a closer look]

You’ll also notice that we’ve specifically highlighted one antenna (in blue, which has the name “204667-ANT-81193-MG1-01 (Optus)” according to our naming convention). There’s a corresponding list of attributes in the right-hand pane relating to this antenna. Most of these attributes have been taken from the ACMA database, but could equally come from your OSS, NMS, asset management system or other network data sets.

Some of the most important attributes (for RF calculation purposes anyway) are:

  • Device make/model (as this defines the radiation profile)
  • Height (either above the base of the tower or sea-level, but above the base in our case)
  • Azimuth (direction the antenna is pointing)
  • Emission centre frequency (ie the frequency of transmission)
  • Transmission power

Step 2 – Twinkler takes these attributes and builds the RF model(s)

You can use the Twinkler application directly (sign up for a free account here – https://twinkler.io) to visualise coverage maps, or the coverage data can be gathered via the Twinkler API if you wish to add it as an overlay in your inventory, GIS or similar tools (API details can be found here: https://twinkler.io/TwinklerAPITestClient.html).
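
To give a feel for what the hand-off from inventory to the RF modelling tool can look like, here’s a hedged sketch of pushing one antenna’s attributes to a coverage endpoint. The endpoint URL, payload field names and response shape below are assumptions for illustration only – refer to the Twinkler API documentation linked above for the real interface:

```python
# Illustrative sketch only: the endpoint URL and field names are assumptions,
# not the actual Twinkler API contract (see the API docs linked above).
import requests

antenna = {
    "name": "204667-ANT-81193-MG1-01 (Optus)",  # from our inventory naming convention
    "make_model": "example-antenna-model",      # placeholder; defines the radiation profile
    "height_m": 30.0,                           # height above the base of the tower (placeholder)
    "azimuth_deg": 230,                         # direction the antenna is pointing
    "centre_frequency_mhz": 763,                # 700MHz band centre frequency
    "tx_power_dbm": 43,                         # placeholder transmission power
    "latitude": 0.0,                            # placeholder site coordinates
    "longitude": 0.0,
}

API_BASE = "https://twinkler.example/api"       # placeholder base URL

response = requests.post(f"{API_BASE}/coverage", json=antenna, timeout=30)
response.raise_for_status()

# The returned coverage data could then be overlaid in your inventory or GIS tool,
# or simply visualised in the Twinkler application itself.
coverage = response.json()
print(coverage)
```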

Step 3 – Visualise radio coverage diagrams

As you can see from the attributes in the inventory diagram above, we have an azimuth of 230 degrees for the first antenna in the 3-sector group. The azimuths of the other two antennae are 15 and 140 degrees respectively.

These give us the following radiation patterns (each is a separate map, but I’ve combined to make an animated GIF for easier viewing):

You’ll notice that the combined spread in the diagram is slightly larger because the combined coverage is set to -130dBm signal level whereas the per-sector coverages are to -120dBm.

Note: You may have spotted that the mapping algorithm considers the terrain. From this zoomed-in view you’ll see the coverage black-spot in the gully in the top-left corner more easily.

Summary

I hope you enjoyed this brief introduction into how we’ve created a radio coverage map of an existing cellular network using the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.


Are you kidding? We’ll never use open-source OSS/BSS!

Back in the days when I first started using OSS/BSS software tools, there was no way any respectable telco was going to use open-source software (the other oss, for which I’ll use lower-case in this article) in their OSS/BSS stacks. The arguments were plenty, and if we’re being honest, probably had a strong element of truth in many cases back then.

These arguments included:

  • Security – This is the most commonly cited aversion I’ve heard to open-source. Our OSS/BSS control our network, so they absolutely have to be secure. Secure across all aspects of the stack from network / infrastructure to data (at rest and in motion) to account access to applications / code, etc. The argument against open-source is that the code is open to anyone to view, so vulnerabilities can be identified by hackers. Another argument is that community contributors could intentionally inject vulnerabilities that aren’t spotted by the rest of the community
  • Quality – There is a perception that open-source projects are more hobby projects than professional ones. Related to that, hobbyists can’t expend enough effort to make the solution as feature-rich and/or user-friendly as commercial software
  • Flexibility – Large telcos tend to want to steer the products to their own unique needs via a lot of customisations. OSS/BSS transformation projects tend to be large enough to encourage proprietary software vendors to be paid to make the requested changes. Choosing open-source implies accepting the product (and its roadmap) is defined by its developer community unless you wish to develop your own updates
  • Support – Telcos run 24x7x365, so they often expect their OSS/BSS vendors to provide round-the-clock support as well. There’s a belief that open-source comes with a best-effort support model with no contracted service obligations. And if something does go drastically wrong, that open-source licences disclaim all responsibility and liability
  • Continuity – Telcos not only run 24x7x365, but also expect to maintain this cadence for decades to come. They need to know that they can rely on their OSS/BSS today but also expect a roadmap of updates into the future. They can’t become dependent upon a hobbyist or community that decides they don’t want to develop their open-source project anymore

Luckily, these perceptions around open-source have changed in telco circles in recent years. The success of open-source organisations like Red Hat (acquired by IBM for $34 billion on annual revenues of $3.4 billion) has shown that valuable business models can be underpinned by open-source. There are many examples of open-source OSS/BSS projects driving valuable business models and associated professionalism. The change in perception has possibly also been driven by shifts in application architectures, from monolithic OSS/BSS to more modular ones. Having smaller modules has opened the door to utilisation of building-block solutions like the Apache projects.

So let’s look at the same five factors above again, but through the lens of the pros rather than the cons.

  • Security – There’s no doubt that security is always a challenge, regardless of being open-source or proprietary software, especially for an industry like OSS/BSS where all organisations are still investing more heavily in innovation (new features / capabilities) than in security optimisations. Clearly the openness of code means vulnerabilities are more easily spotted in open-source than in “walled-garden” proprietary solutions. Not just by nefarious actors, but by its development community as well. Linus’ Law suggests that “given enough eyeballs, all bugs (and security flaws) are shallow.” The question for open-source OSS/BSS is whether there are actually many eyeballs. All commercially successful open-source OSS/BSS vendors that I’m aware of have their own teams of professional developers who control every change to the code base, even on the rare occasions when there are community contributions. Similarly, many modern open-source OSS/BSS leverage other open-source modules that do have many eyes (eg Linux, SNMP libraries, Apache projects, etc). Another common perception is security through obscurity, ie that there are almost no external “eyeballs” on proprietary code. The fragmented nature of the OSS/BSS industry means that some proprietary tools have a tiny install base, which can lull some into a false sense of security. Open-source OSS/BSS vendors, by contrast, know there’s a customer focus on security and have to mitigate this concern. The other interesting perspective of openness is that open-source projects can quite quickly be scrutinised for security-related code quality. An auditor has free rein to identify whether the code is professional and secure. With proprietary software, the customer’s auditor isn’t afforded the same luxury unless special access is granted to the code. With no code access, the auditor has to reverse-engineer for vulnerabilities rather than foresee them in the code.
  • Quality – There’s no doubt that many open-source OSS/BSS have matured and found valuable business models to sustain them. With the profitable business model has come increased resources, professionalism and quality. With the increased modularity of modern architectures, open-source OSS/BSS projects are able to perform very specific niche functionalities. Contrast this with the monolithic proprietary solutions that have needed to spread their resources thinner across a much wider functional estate. Also successful open-source OSS/BSS organisations tend to focus on product development and product-related services (eg support), whereas the largest OSS/BSS firms tend to derive a much larger percentage of revenues from value-added services (eg transformations, customisations, consultancy, managed services, etc). The latter are more services-oriented companies than product companies. As inferred in the security point above, open-source also provides transparency relating to code-quality. A code auditor will quickly identify whether open-source code is of good quality, whereas proprietary software quality is hidden inside the black-box 
  • Flexibility – There has been a significant shift in telco mindsets in recent years, from an off-the-shelf to a build-your-own OSS/BSS stack. Telcos like AT&T have seen the achievements of the hyperscalers, observed the increased virtualisation of networks and realised they needed to have more in-house software development skills. Having in-house developers and access to the code-base of open-source means that telcos have (almost) complete control over their OSS/BSS destinies. They don’t need to wait for proprietary vendors to acknowledge, quote, develop and release new feature requests. They no longer rely on the vendor’s roadmap. They can just slip the required changes into their CI/CD pipeline and prioritise according to resource availability. Or if you don’t want to build a team of developers specifically skilled with your OSS/BSS, you can pick and choose – what functionality to develop in-house, versus what functionality you want to sponsor the open-source vendor to build
  • Support – Remember when I mentioned above that OSS/BSS organisations have found ways to build profitable business models around open-source software? In most cases, their revenues are derived from annual support contracts. The quality and coverage of their support (and the products that back it up) is directly tied to their income stream, so there’s commensurate professionalism assigned to support. As mentioned earlier, almost all the open-source OSS/BSS I’m aware of are developed by an organisation that controls all code change, not community consensus projects. This is a good thing when it comes to support, as the support partner is clear, unlike community-led open-source projects. Another support-related perspective is in the number of non-production environments that can be used for support, test, training, security analysis, etc. The cost of proprietary software means that it can be prohibitive to stand up additional environments for the many support use-cases. Apart from the underlying hardware costs and deployment effort, standing up additional open-source environments tends to be at no additional cost. Open-source also gives you greater choice in deciding whether to self-support your OSS/BSS in future (if you feel that your internal team knows the solution so intimately that they can fix any problem or functionality requirement that arises) rather than paying ongoing support contracts. Can the same be said for proprietary product support?
  • Continuity – This is perhaps the most interesting one for me. There is the assumption that big, commercial software vendors are more reliable than open-source vendors. This may (or might not) be the case. Plenty of commercial vendors have gone out of business, just as plenty of open-source projects have burned out or dwindled away. To counter the first risk, telcos pay to enter into software escrow agreements with proprietary vendors to ensure critical fixes and the roadmap can continue even in the event that a vendor ceases to operate. But the escrow contract may not cover the case where a commercial vendor chooses to obsolete a product line or simply fails to invest in new features or patches. This is a common occurrence at even the world’s largest OSS/BSS providers. Under escrow arrangements, customers are effectively paying an insurance fee to have access to the code for organisational continuity purposes, not product continuity. Escrow may not cover that, but open-source is available under any scenario. The more important continuity consideration is the data, and data is the reason OSS/BSS exist. When choosing a commercial provider, especially a cloud software / service provider, the data goes into a black box. What happens to the data inside the black box is proprietary, and often what comes out of it is too. Telcos will tend to have far more control of their data destinies for operational continuity if using open-source solutions. Speaking of cloud, SaaS-model and/or subscription-model OSS/BSS, customers are at the whim of the vendor’s product direction. Products, features and modules can be easily changed or deprecated by these vendors, with little recourse for the buyers. This can still happen in open-source and become an impediment for buyers too, but at least open-source provides buyers with the opportunity to control their own OSS/BSS destinies.

Now, I’m not advocating one or the other for your particular situation. As cited above, there are clearly pros and cons for each approach as well as different products of best-fit for different operators. However, open-source can no longer be as summarily dismissed as it was when I first started on my OSS/BSS journey. There are many fine OSS and BSS products and vendors in our Blue Book OSS/BSS Vendor Directory that are worthy of your consideration too when looking into your next product or transformation.

Edit: Thanks to Guy B. who rightly suggested that Scalability was another argument against open-source in the past. Ironically, open-source has been a significant influencer in the almost unlimited scalability that many of our solutions enjoy today.

How to calculate the right jeopardy metrics in your end-to-end workflows

Last week we created an article that described how to use your OSS/BSS log data to generate reliable / quantifiable process flow diagrams.

We’ve expanded upon this research to identify a reliable calculation of jeopardy metrics. Jeopardy management is the method for notifying operators when an in-flight workflow (eg a customer order) is likely to breach targets such as the RFS date (ie when the customer’s service will be available for use) and/or SLAs (service level agreements).

Jeopardy management techniques are used to predict a breach, hopefully before it has occurred. For example, if an Order to Activate workflow for a particular product type consists of 10 steps and only the first 2 steps have been completed after 29 days of a 30-day RFS target, then we could expect that the RFS date is likely to be missed. The customer should be alerted. If the right trackers were built, this order would have had a jeopardy notification long before 29 days had elapsed.

In the past, jeopardy indicators have tended to be estimated thresholds. Operators have tended to set notifications based on gut-feel (eg step 2 must be completed by day 5).  But through the use of log data, we can now provide a more specific jeopardy indicator for every step in the process.

The chart above shows every activity within a workflow across the horizontal axis. The vertical axis shows the number of days elapsed since the start of the workflow.

By looking at all past instances of this workflow, we can show the jeopardy indicator as a series of yellow dots. In other words, if any activity has ever finished later than its corresponding yellow dot, then the E2E workflow it was part of has breached its SLA.
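
For those who’d like to reproduce this, here’s a minimal sketch of how the yellow-dot jeopardy indicator can be derived from historical logs. It assumes a hypothetical activity log with a workflow ID, activity name, elapsed days since workflow start and a flag showing whether the parent workflow met its SLA (the file and column names are illustrative only):

```python
# Sketch only: assumes a log with columns
#   workflow_id, activity, elapsed_days (since workflow start), met_sla (True/False)
import pandas as pd

events = pd.read_csv("workflow_activity_log.csv")  # hypothetical export

# Jeopardy indicator (the "yellow dots"): for each activity, the latest elapsed
# time ever observed in a workflow that still went on to meet its SLA.
# If an in-flight activity finishes later than this, every historical precedent breached.
jeopardy = (
    events[events["met_sla"]]
    .groupby("activity")["elapsed_days"]
    .max()
    .rename("jeopardy_days")
)

# Comparison stats (the blue / orange dots in the chart above)
stats = events.groupby("activity")["elapsed_days"].agg(["mean", "max"])
report = stats.join(jeopardy)

print(report.sort_values("jeopardy_days"))
```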

To use a more familiar analogy it’s the latest possible date that you can study for exams and still be able to pass the subject, using time-stamped historical data. Not that I ever left it late to study for exams back in uni days!!  🙂

And yet if you look closely, you’ll notice that some blue dots (average elapsed time for this activity) in this example are higher than the jeopardy indicator. You’ll also notice that the orange dots (the longest elapsed time to complete this task across all instances of this workflow according to log data) are almost all above the jeopardy indicator. Those examples highlight significant RFS / SLA breaches in this data set (over 10% are in breach).

Leave us a note below if you’d like us to assist with capturing your jeopardy indicators and identifying whether process interventions are required across your OSS/BSS.


How to Document, Benchmark and Optimise Operational Processes

Have you been tasked with:

  1. Capturing as-is process flows (eg swim-lane charts or BPMN [Business Process Model and Notation] diagrams)
  2. Starting a new project where understanding the current state is important
  3. Finding ways to optimise day-to-day activities performed by your team
  4. Creating a baseline process to identify automation opportunities
  5. Comparing your current processes with recommendations such as eTOM or ITIL
  6. Identifying which tasks are leading to SLA / OLA breaches

As you may’ve experienced during project kick-off phases, many customers’ as-is processes are not well defined, captured or adequately quantified (eg transaction volumes, duration times, fall-outs, etc).

If process diagrams have been captured, they’re often theoretical workflow maps developed by Business Analysts and Subject Matter Experts to the best of their knowledge. As such, they don’t always reflect real and/or complete flows. They may have no awareness of the rare flows / tasks / situations that can often trip our OSS/BSS tools and operators up. The rarer the sub-flows, the less likely they are to be documented.

Even if the flows have been fully documented, real metrics / benchmarks are rarely recorded. Metrics such as end-to-end completion times and times taken between each activity within the flow can be really challenging to capture and visualise, especially when you have large numbers of flows underway at any point in time.

Do you struggle to know where the real bottlenecks are in your process flows? Which tasks cause fall-outs? Which team members need advanced training? Which process steps have the largest differences in max / min / average durations? Which steps are justified to build automations for? As the old saying goes, if you can’t measure it, you can’t manage it.

You need a quantitative, not qualitative, understanding of your workflows

As a result, we’ve developed a technique to reverse-engineer the log data that our OSS/BSS routinely collect and automatically time-stamp, in order to map and quantify processes. By using time-stamped logs, we can trace every step, every flow variant, every sequence in the chain and every duration between them. This technique can be used on fulfilment, assurance and other flows. The sample below shows transaction volumes / sequences, but can also show durations within the flows:

Note that this and subsequent diagrams have been intentionally left in low-res format here on this page.

Better than just volumes, we can compare the max / mean / min processing times to identify the duration of activities and show bottlenecks (thicker red lines in the diagram below) as well as identifying hold-ups and inconsistencies in processing times:

By combining insights from flow volumes and timings, we can also recommend the processes and/or steps that optimisation / automations are most justified for.
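
As a simplified illustration of the underlying technique, the sketch below derives both transition volumes and min / mean / max durations from a time-stamped event log. The column names are assumptions; real OSS/BSS logs would need mapping into this shape first:

```python
# Sketch only: assumes an event log with columns workflow_id, activity, timestamp
import pandas as pd

log = pd.read_csv("oss_event_log.csv", parse_dates=["timestamp"])
log = log.sort_values(["workflow_id", "timestamp"])

# Pair each activity with the one that directly follows it in the same workflow
log["next_activity"] = log.groupby("workflow_id")["activity"].shift(-1)
log["next_timestamp"] = log.groupby("workflow_id")["timestamp"].shift(-1)
log["step_hours"] = (log["next_timestamp"] - log["timestamp"]).dt.total_seconds() / 3600

transitions = log.dropna(subset=["next_activity"])

# Volumes per transition (the arrows, and their thickness, in the process map)
volumes = transitions.groupby(["activity", "next_activity"]).size().rename("count")

# Min / mean / max durations per transition highlight bottlenecks and inconsistency
durations = transitions.groupby(["activity", "next_activity"])["step_hours"].agg(
    ["min", "mean", "max"]
)

print(volumes.sort_values(ascending=False).head(10))
print(durations.sort_values("max", ascending=False).head(10))
```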

We can also use monitoring of the flows to identify failure situations that have occurred with a given process, such as the examples highlighted in red below. 

We can also use various visualisation techniques to identify changes / trends in processing over time. These techniques can even assist in identifying whether interventions (eg process improvements or automations) are having the intended impacts.

The following chart (which is clickable) can be used to identify which tasks are consistently leading to SLA (Service Level Agreement) breaches.

The yellow dots indicate the maximum elapsed time (from the start of a given workflow) that has not resulted in an SLA breach. In other words, if this activity has ever finished later than the yellow dot, then the E2E workflow it was part of has breached its SLA. These numbers can be used mid-workflow to predict the likelihood that it will breach its SLA. They can also be used to set jeopardy values that notify operators of workflow slippages.
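
Once those jeopardy values have been derived (similar to the calculation sketched in the jeopardy article earlier on this page), checking an in-flight workflow against them is straightforward. A small, hypothetical example:

```python
# Sketch only: flags an in-flight workflow whose latest completed activity has
# exceeded its historical jeopardy value. Names and values are illustrative.
def breach_likely(activity: str, elapsed_days: float, jeopardy_days: dict) -> bool:
    """Return True if this activity has only ever finished this late in
    workflows that went on to breach SLA."""
    threshold = jeopardy_days.get(activity)
    if threshold is None:
        return False  # no history for this activity; cannot judge
    return elapsed_days > threshold

# Hypothetical jeopardy values per activity (in elapsed days from workflow start)
jeopardy_days = {"Design Approved": 9.0, "Service Activated": 27.5}

if breach_likely("Design Approved", 12.5, jeopardy_days):
    print("Raise jeopardy notification: historical precedent points to an SLA breach")
```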

There are a few other points of interest in this chart:

  • Orange dots indicate the longest elapsed time for this activity seen within all flows in the log data
  • Grey dots indicate the shortest elapsed time from the beginning of a workflow
  • Blue dots indicate the average elapsed time
  • Yellow dots are the Jeopardy indicator, meaning that if the elapsed time of this activity has ever exceeded this value then it has gone on to breach SLA
  • The red line is SLA threshold for this particular workflow type
  • You’ll notice that many average values (blue dots) are above jeopardy, which indicates this activity is regularly appearing in flows that go on to breach SLA levels
  • Almost all max values are above jeopardy (most are so high that they’re off the top of the scale) so most activities have been part of an E2E flow that has breached SLA
  • The shaded blue box shows tasks that have never been in an E2E flow that has met SLA!!
  • Needless to say, there were some interventions required with this example!

Operational Process Summary

As described above, using log data that you probably already have ready access to in your OSS/BSS, we can assist you to develop quantifiable process flow information. Having quantifiable data in turn can lead to greater confidence in initiating process interventions, whether they are people (eg advanced training), process (eg business process re-engineering) or technology (eg automations).

This technique works equally well for:

  • Understanding the current situation before commencing an OSS/BSS transformation project
  • Benchmarking and refining processes on an OSS/BSS stack that is running in business-as-usual mode
  • Highlighting the impact a process intervention has had (ie comparing before and after)

Would you like to book a free consultation to discuss the challenges you face with your (or a customer’s) as-is process situation? Please leave your details and list of challenges in the contact form below.

019 – Modern OSS/BSS Transformation Techniques that start with the Customer Journey with Martin Pittard

Digital transformation is a term that’s entered the modern vernacular, but here in the world of OSS/BSS it’s just what we’ve been doing for decades. Whether aimed at delivering digital services, collecting data from all points of an organisation’s compass, increasing the internal efficiencies of operational teams or improving user experiences externally, this is just what our OSS/BSS tools and projects do.

Our guest on today’s episode, Martin Pittard, has been leading digital transformations since long before the digital transformation term existed. As Principal IT Architect at Vocus (www.vocus.com.au), Martin is in the midst of leading his most recent digital transformation (ie an OSS/BSS transformation project). On this latest transformation, Martin is using a number of new techniques plus long-held architectural principles including the use of dynamic / Open APIs (a TM Forum initiative), being catalog-driven, standards-based, model-based and having an intense focus on separation of concerns. Of perhaps even greater focus is the drive to improve customer journeys as well as ensuring solution flexibility to support customer interactions across future business and service models.

It was a recent talk at a TM Forum event in Sydney that reinforced our interest in having Martin on as a guest. During this presentation, Martin shared some fantastic ideas on how Vocus is tackling the specific challenges and techniques of its OSS/BSS transformation. So good was it that we turned it into an article on our blog. A video of Martin’s in-depth presentation plus a summary of key points can be found here: https://passionateaboutoss.com/how-to-transform-your-oss-bss-with-open-apis

In addition to the Vocus transformation, Martin also shares stories and insights from past transformations at organisations like Rockwell (building combat systems for submarines), Fujitsu, the structural separation of British Telecom to form Openreach, Alcatel-Lucent (transforming the Telstra network and OSS/BSS) and then nbn. On the latter, Martin spent 8+ years leading the build of mission-critical systems across industry integrations (ie customer-facing systems) and network assurance for nbn. During that time, Martin led a large team through the transition to Agile delivery and recounts some of the challenges, benefits and insights from embarking on that journey.

For any further questions you may have, Martin can be found at: www.linkedin.com/in/martinpittard

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.

How to improve user experience with a headless OSS

The first OSS/BSS I used, back in 2000, was built around an Oracle relational database and the GUI was built using Oracle Forms (as well as some C++ apps). The developers had implemented a concept they referred to as a boilerplate. It basically allowed the customers to modify any label they wished on the forms.

When I say labels, I mean any of the default text fields, such as the following in the diagram below:

  • Connecting Links
  • Connect Selected Endpoints
  • Disconnect Both Sides
  • Disconnect A Side
  • Disconnect B Side

By default, the forms in my first OSS were all written in English. But the boilerplate functionality offered the customer flexibility. Instead of “connecting links,” they may’ve preferred to call it “cross-connects” or “enlaces de conexión.” It was perfect for supporting different languages without changing the code. At the time, I thought this was a really neat feature. It was the first sign I’d seen of codeless flexibility in the UI of an OSS.
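
To picture that boilerplate idea in modern terms: the UI never hard-codes its labels, it resolves them from a customer-editable mapping with sensible defaults. The sketch below is illustrative only (the original was Oracle Forms configuration, not Python):

```python
# Illustrative only: the original boilerplate was Oracle Forms configuration.
# The idea is simply that labels come from data, not code.
DEFAULT_LABELS = {
    "connecting_links": "Connecting Links",
    "connect_selected": "Connect Selected Endpoints",
    "disconnect_both": "Disconnect Both Sides",
    "disconnect_a": "Disconnect A Side",
    "disconnect_b": "Disconnect B Side",
}

# Customer-supplied overrides, eg different terminology or another language
CUSTOMER_OVERRIDES = {
    "connecting_links": "Cross-connects",  # or "Enlaces de conexión"
}

def label(key: str) -> str:
    """Resolve a UI label, preferring the customer's boilerplate override."""
    return CUSTOMER_OVERRIDES.get(key, DEFAULT_LABELS[key])

print(label("connecting_links"))  # -> "Cross-connects"
print(label("disconnect_a"))      # -> "Disconnect A Side"
```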

These days, most OSS/BSS need drastic improvements in their UI. As we described previously, they tend to be non-intuitive and take too long to master. We need to go further than boilerplate functionality. This is where headless or decoupled architectures appear to hold some promise.  But we’ll loop back to the how in a moment. First, we’ll take a look at the why.

Apple has shown the tech world the importance of customer experience (CX) and elegant UI / industrial design. Much like the contrast between the iPod and previous MP3 players, our OSS/BSS are anonymous, poorly made objects. We have some catching up to do.

But, let’s first start by asking who is the customer we’re designing a Customer Experience for? Well, there are two distinct categories of customer that we have to design our OSS/BSS and workflows for, as shown in the sample Order to Cash (O2C) flow infographic below.

  • We have the first level of customers, Customer 2, the operators in the diagram below. These use our OSS/BSS directly but often behind the scenes.
  • Then there’s the second level of customers, Customer 1, the end users who also interact with our OSS/BSS, but often indirectly

The end users need to have a CX that appears highly integrated, smooth and seamless. It has to appear consistent, even though there are multiple channels that often aren’t linked (or even linkable – eg a customer might interact with an IVR or chatbot without revealing personal identifiers that can be linked with a customer ID in the OSS/BSS).

The end user follows the journey through the left-hand column of the infographic from start to finish. However, to deliver upon the flow on the right-side of the infographic, the CSP side, it’s likely that dozens of operators using many completely unrelated applications / UIs will perform disparate activities. They’ll perform a small subset of activities, but for many different end-users within a given day. It’s highly unlikely that there will be a single person (right side) mirroring the end-user journey (left side), so CSPs have to hope their workflows, data flows and operators all work in unison, without any fall-outs along the way.

Along the right-hand path, the operators tend to have a plethora of different back-end tools (as implied in the comments in the O2C flow above). They might be integrated (via API), but the tools often come from different vendors, so there is no consistency in UI.

The “headless” approach (see article from pantheon.io for further info), allows the user interface to be decoupled from the application logic and data (as per the diagram below). If all the OSS/BSS tools along the right-hand path of the O2C infographic were headless, it might allow an integrator to design a smooth, consistent and robust end-to-end customer experience across many different vendor applications.
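
To make that concrete, here’s a minimal, hypothetical sketch of a headless service: all logic and data are exposed as JSON, and every presentation decision is left to whichever front-end consumes it. The endpoint, order IDs and field names are invented for illustration, not any particular vendor’s API:

```python
# Minimal headless sketch: application logic and data exposed as JSON only.
# Any UI (web portal, operator console, chatbot, AR client) can "skin" this.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the real application / data layer of an OSS/BSS
ORDERS = {
    "ORD-1001": {"status": "In Design", "product": "Business Fibre 1G"},
    "ORD-1002": {"status": "Activation Scheduled", "product": "SD-WAN Small Site"},
}

@app.route("/api/orders/<order_id>", methods=["GET"])
def get_order(order_id: str):
    """Return order state as plain data; no presentation concerns here."""
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "order not found"}), 404
    return jsonify({"order_id": order_id, **order})

if __name__ == "__main__":
    app.run(port=8080)
```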

A few other thoughts too:

  • Engineers / Developers can create the application logic and data manipulation. They only need to solve this problem once and then move onto solving for the next use-case. It can then be handed to UX / UI experts to “skin” the products, potentially trialing many different UI variants until they find an optimal solution. UX experts tell me trial and error is the real science behind great user interface design. This is also reflected in great industrial design by organisations like Apple that create hundreds of prototypes before going to market. Our engineers / devs are too precious a resource to have them trialing many different UI variants
  • Most OSS/BSS have a lot of functionality baked into the product and UI. Unfortunately, most operators won’t need to use a lot of the functionality. Therefore it’s just cluttering up the UI and their daily activities. The better approach is to only show the functionality that the operators actually need
  • This “just show the operators what they need” concept can also allow for separation by skill. For example, a lot of workflows (eg an O2C or T2R) will have high-volume, highly repeatable, fast turnover variants that can be easily taught to new operators. However, the same O2C workflow might have less common or obscure variants, such as fall-outs that require handling by exception by more highly experienced / skilled / trained operators. You might choose to have a fast-lane (high-volume) UX versus a high-control UX for these scenarios 
  • A consistent UI could be underpinned by any number of applications. Theoretically, one OSS/BSS app could be swapped out for another without any change in workflow for the end user or even the CSP‘s operator
  • Consistent UIs that are matched to context-sensitive workflows could also streamline the automation and/or RPA of each
  • Consistent UIs allow for easier benchmarking for speed and effectiveness, which in turn could help with feedback loops, such as autonomous network concepts
  • UI / UX experts can design style guides that ensure consistency across all applications in the OSS/BSS stack
  • With the iPod, Apple arrived at the insight that it made more sense to take much of the logic “off-device”

Hat tip to George for the headless UI concept as a means of improving UX!

PS. I should also point out that another parallel universe I’m diving into is Augmented Reality-based OSS/BSS, which provides a slightly different context when mentioning headless UX. In AR, headless implies having no screen to interact with, so the UX comes from audio and visual presentation of data that diverges significantly from the OSS/BSS of today. Regardless, the separation of logic / data from the front-end that we described earlier also seems like a step towards future AR/VR-based OSS/BSS.

018 – How a NaaS Transformation can Revolutionise your OSS/BSS Stack with Johanne Mayer

OSS/BSS stacks can be incredibly complex and cumbersome beasts, especially in large carriers with many different product, process and network variants. We don’t make that task any easier by creating many unique product offerings to take to market. And time to market can be a significant source of competitive advantage, or a serious impediment to it. NaaS, or Network as a Service, is a novel approach to increasing flexibility in our OSS/BSS stacks, inserting an API layer that provides a separation of concerns.

Our guest on today’s episode, Johanne Mayer, is so passionate about the benefits of NaaS-based transformations that she’s formed a company named NaaS Compass (www.naascompass.com) to assist others with their transformations. She brings hard-won experience from being involved with NaaS transformations at organisations like Telstra.

Prior to embarking on this latest venture, Johanne worked with many of the most iconic organisations in the telco / OSS/BSS industries. These include Nortel, Alcatel-Lucent (now part of Nokia), Ericsson, Oracle, Ciena Blue Planet and Analysys Mason. Johanne takes us on a journey through a career that has seen her work on exciting projects from the days of NMS and X.25 networks to more recent projects leading collaborative transformation with standards organisations like TM Forum (where she is a Distinguished Fellow), MEF and ETSI.

For any further questions you may have, Johanne can be found at: www.linkedin.com/in/johannemayer

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.

Just Launched: Do you Want to Learn more about OSS/BSS?

The world of OSS/BSS is changing rapidly. There’s so much to learn. New forms of information, approaches and technologies are proliferating. Are you wrestling with the challenge of trying to determine what OSS/BSS training you and/or your team requires? We’ve just revised and re-launched our core OSS/BSS training offerings.

Click on the image below to open the OSS/BSS Training Plan, which includes details about each course we offer.

The courses described in the OSS/BSS Training Plan include:

  1. An Introduction to OSS/BSS (PAOSS-INT-01)
  2. Strategic Analysis of Your OSS/BSS (PAOSS-INT-02)
  3. Creating an OSS/BSS Transformation Plan (PAOSS-PRE-01)
  4. OSS/BSS Persona and Workflow Mapping (PAOSS-PRE-02)
  5. Gathering Your Specific OSS/BSS Requirements (PAOSS-PRE-03)
  6. Choosing the Right OSS/BSS Products for Your Needs (PAOSS-PRE-04)
  7. Preparing Your OSS/BSS Business Case (PAOSS-PRE-05)
  8. OSS/BSS Project Planning (PAOSS-PRE-06)
  9. Developing your OSS/BSS Roadmap (PAOSS-PRE-07)
  10. Identifying and Mitigating Your Biggest OSS/BSS Risks (PAOSS-EE-01)
  11. Developing a Data Integration Plan (PAOSS-EE-02)
  12. Integrating with other Systems (PAOSS-EE-03)
  13. Defining OSS Naming Conventions (PAOSS-EE-04)

The attached Training Plan also describes how we can assist you to develop long-term training and mentoring programs to supplement the day-to-day tasks performed by your trainees in their OSS/BSS-related roles.

Are there other subjects you’re interested in that aren’t outlined above? Are you interested in making a group booking? We’d be delighted to work with you to develop customised training for your organisation’s specific needs. Feel free to leave us a note via the contact form below.

017 – Leading Global OSS/BSS Transformation through Collaboration with George Glass

Our OSS and BSS are highly complex by nature. However, we seem to do a great job of making them more complex, more challenging, less repeatable and hence, more difficult to change. Perhaps that caters to our deeper desires – so many of us in this industry love to prove our worth by solving complex problems.

Our guest on this episode, George Glass, has spent a career looking for ways to remove complexity and increase re-use in our OSS/BSS stacks. First during 31 years (to the day) working as a developer, architect and executive at BT. Now at TM Forum, where he’s CTO and continuing to carry the flame of next generation architectural concepts like ODA and the Open API initiative that started when George was still at BT.

George walks us through a career that started with cutting code on BT’s NMS solutions and the charging systems that allowed BT to drive (significant) revenue. He talks us through the very early separation of charging, taking it away from mainframes and onto Unix server farms. He also discusses how he was instrumental in the development of BT’s SOA (Service Oriented Architecture) circa 2008-9, which generated over £300M in cost-benefit for BT and remains in use (in a modified form) to this day. He also discusses how BT’s structural separation to form Openreach had architectural ramifications and learnings that propagated to other carrier environments around the world.

George then goes on to talk about the origins of TM Forum’s modern flagship architectural models and how they’re assisting their members with digital transformations globally. Not just telcos and their supporting vendors / partners / integrators, but also across other industries (including George’s favourite, the automotive industry).

For any further questions you may have, George can be found at: www.linkedin.com/in/george-glass-887ba61

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.

How Network Operations Centre (NOC) Efficiency is Powered by Your OSS/BSS

We just launched a new video series describing the Fundamentals of OSS/BSS yesterday. One of the videos in the series describes Network Operations Centres (or NOCs) for telcos or network operators. It also provides examples of the OSS/BSS tools and data sets that help to power them. 

Whilst creating this video, I realised that we’ve done over 2,500 posts, but none specifically about Network Operations Centres (aka Network Management Centres). A significant oversight that must be addressed!! NOCs are the telco’s nerve centre through which the network is monitored and maintained. They are the ultimate insurance policy for any carrier. They also act as the first line of defence against cyber-security attacks.

The video above provides a picture of Telstra’s GOC (or Global Operations Centre, which is just a glorified name for a NOC). The one below shows AT&T’s NOC (image courtesy of softwarestudies.com).  [More about AT&T’s GNOC in a video later in this article]

In the middle band of the picture above, you’ll notice an impressive video wall with data presented from a number of OSS/BSS. These tend to show rolled-up information that gives a perspective on the current topology, traffic patterns and health of the network. There’s not a lot of red showing, so I’d assume the network was in a fairly healthy state at the time this photo was taken… Either that, or the “green screener” was activated to ensure that any visitors or VIPs weren’t scared off by the number of catastrophes that were being handled / remediated by the network operators. 🙂

Speaking of operators, you’ll notice all the operator pods in the foreground. You can see the workstations, where each operator is enveloped by multiple screens. Like the video wall, each of these operator screens will typically have multiple OSS/BSS applications open at any given point in time. Generally speaking, the operators will be dealing with more granular data-sets via OSS/BSS views on their workstations compared with those shown on the video wall. This is because they’ll be performing more specific tasks such as dealing with a specific device outage and will need to drill down to more detailed data.

Each operator has this wealth of visual real-estate for a reason. Our OSS/BSS gather, generate and process huge amounts of data with updated information arriving all the time. Operators need to pick through all these different data points and derive insights that allow them to perform BAU (Business as Usual) activities. These activities generally focus on assuring the health of the network and the customer services that are carried over that network, but can cover a broader scope too. When things get out of control, they become crisis management centres.

Information can be presented to these operators in a range of different ways depending on the task at hand, whether trying to identify a root-cause of an event / situation through to coordinating routine / preventative maintenance or remedial actions. Note that the second part of the video above gives some examples of OSS/BSS data visualisation techniques to perform these different functions.

Speaking of routine maintenance, many types of routine maintenance activities are coordinated through the NOC. This ensures the many changes happening during change windows are managed in a coordinated way. This includes maintenance of our OSS/BSS tools, which can require regular updates and patching as well as routine administrative activities.

Our OSS and BSS assist NOC operators mostly across the middle bands of TM Forum’s TAM (represented by blue clusters 7 to 11, plus 12 to 14, in the TAM diagram below), but potentially many other areas too. Operators may also interact closely with live network devices to retrieve or update device configurations. This might be achieved via command line interfaces (CLI) on the devices or via EMS (Element Management System) and NMS (Network Management System) tools, which supplement our OSS/BSS.
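As a simple illustration of that device-level interaction, here’s a minimal sketch that pulls a running configuration over CLI, assuming a Cisco IOS device and the netmiko library. The host, credentials and choice of library are assumptions for illustration only; in practice, an EMS/NMS or the OSS itself would typically mediate (and audit) this access.

```python
# Illustrative CLI interaction with a network device, assuming the netmiko
# library and a Cisco IOS platform. Host and credentials are placeholders.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",     # netmiko platform identifier
    "host": "192.0.2.10",           # placeholder address (documentation range)
    "username": "noc_operator",     # placeholder credentials
    "password": "change_me",
}

conn = ConnectHandler(**device)
running_config = conn.send_command("show running-config")
print(running_config)
conn.disconnect()
```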

Due to the complexity, variability and sheer volume of events that the NOC has to handle (and therefore the costs), our OSS/BSS can become an important efficiency engine. OSS/BSS business cases can often be built around the automation of processes, data processing and IT transactions because of the cost-benefit possibilities. The related benefits tend to be driven by reductions in human effort, but also by faster fault resolution times.
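To show the shape of such a business case, here’s a deliberately simple, purely illustrative calculation. Every figure below is an assumption (not a benchmark) that you’d replace with your own NOC’s volumes and rates.

```python
# Purely illustrative NOC automation business-case sketch.
# Every figure is an assumption, not a benchmark or real carrier data.
events_per_month = 20_000          # assumed alarms/tickets handled by the NOC
manual_minutes_per_event = 12      # assumed average manual handling time
automatable_fraction = 0.40        # assumed share of events suited to automation
loaded_cost_per_hour = 75.0        # assumed fully loaded operator cost per hour

hours_saved = (events_per_month * manual_minutes_per_event / 60) * automatable_fraction
monthly_saving = hours_saved * loaded_cost_per_hour

print(f"Operator hours saved per month: {hours_saved:,.0f}")    # 1,600 hours
print(f"Indicative monthly saving: ${monthly_saving:,.0f}")     # $120,000
```

Faster fault resolution (and any related SLA penalties avoided) would typically be layered on top of a calculation like this.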

This video below, courtesy of AT&T shows some of their OSS/BSS in action on the giant video wall within their NOC:

If you’re wondering about the role of NOC operators and shift managers, a great article can be found describing a day (night) in the life of Paul Harrison, Telstra’s National Emergency Response Manager, who works from the Telstra GOC. As his article highlights, most NOCs are operational 24x7x365, which requires multiple shifts (eg 3 x 8-hour shifts) to ensure 24-hour coverage. That is, they need at least 3 teams of operators to make sure the network and services are being monitored around the clock.

Depending on the functional coverage required at any given organisation, NOCs might also be considered SOCs (Service Operations Centres), SOCs (Security Operations Centres) or other names too. FWIW, in this earlier article, I pose the slightly novel concept of a DOC as well.

Also, if you’re interested, you might like to check out this video of an XR simulation of a NOC that was inspired by AT&T’s control centre. More about this project here. I’d love to get your thoughts on use-cases relating to how this could be applied. Leave us a comment below.


And speaking of mixed reality, I’m excited about how these technologies can be used to improve the command and control functionality of NOCs. Whether that’s in the form of:

  • Visual Collaboration – allowing operators in various locations to “see” the same thing (eg a person in the NOC, a worker in the field and an equipment vendor SME all viewing what the field worker can see on-site and discussing the best way to fix the on-site problem)
  • Decision Support – providing operators, especially field workers, with information provided by the NOC and/or our OSS/BSS that helps the operator perform their tasks effectively
  • Optimised UX – providing operators with OSS/BSS user interfaces (UIs) that are more intuitive and efficient for performing tasks. Given the massive amount of information at NOC operators’ fingertips, from which they have to derive actions, it seems we need to provide far better UIs for them. Heads-up Displays (HUDs) seem like the natural progression for NOC UIs, so it’s something we’re already investing effort into. Watch this space closely in coming years

Just Launched: Fundamentals of OSS/BSS Video

We’ve just launched a multi-part video series to provide an introduction to OSS and BSS. It answers these fundamental questions and more:

  • Part 1 – What is an OSS? What is a BSS? Why are OSS and BSS even a thing?
  • Part 2 – Who uses an OSS and/or BSS?
  • Part 3 – What Functions do OSS and BSS Perform?
  • Part 4 – What Business Benefits do OSS/BSS Generate?
  • Part 5 – What’s the difference between an OSS and BSS?
  • Part 6 – How do OSS & BSS interact with a Comms Network?
  • Part 7 – What does an OSS/BSS look like?
  • Part 8 – Where can I Find Out More About OSS and BSS?

Check it out. And give us a Like if you think it’s useful.

016 – Leading the Network Strategy and Operations at a Tier 1 Carrier with Carolyn Phiddian

If the network is ultimately the product for any network operator, then OSS/BSS are the great connectors, connecting customers to that product, both for initial activation and for ongoing utilisation of network resources. Whilst everyone has a different perspective on the relevance / importance of OSS/BSS, there tend to be even broader divergences of opinion across networks, operations and executive teams.

Our guest on this episode, Carolyn Phiddian, has formed a career around network strategy and is now a gun-for-hire broadband industry strategist. Networks are her main specialty, but she’s also held executive roles, including leading a team of nearly 500 people in network operations for a Tier 1 carrier. Carolyn has held roles with iconic telco organisations such as Telstra, British Telecom, Cable & Wireless, Alcatel-Lucent and more recently with nbn. Having experienced roles across network, operations and the C-suite within these organisations gives Carolyn a somewhat unique perspective on OSS/BSS.

Carolyn walks us through her career highlights to date but also shares some really important insights and experiences along the way. Some of these include: that when Architects / Engineers are detail-focused rather than big-picture people, that detail tends to become hard-wired into their OSS/BSS, which has both benefits and ramifications; that one of the biggest challenges for executives and sponsors of OSS/BSS projects is the mismatch between what people want and what they get, often because stakeholders don’t know what they want or can’t adequately articulate it; and that diversity of thought combined with inquisitiveness are powerful traits for OSS/BSS implementation and operations teams.

For any further questions you may have, Carolyn can be found at: www.linkedin.com/in/carolynphiddian.

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.

How to Transform your OSS/BSS with Open APIs

The video below, starring Martin Pittard, the Principal IT Architect at Vocus Group, provides a number of important OSS/BSS Transformation call-outs that we’ll dive into in this article.

Vocus has embarked on a journey to significantly overhaul its OSS/BSS stack and has heavily leveraged TM Forum’s Open API suite to do so. One of the primary objectives of the overhaul was to provide Vocus with business agility via its IT and digital assets.

As Martin indicates, all transformation journeys start with the customer, but there are a few other really interesting call-outs to make in relation to this slide below:

  • The journey started with 6 legacy networks and 8 legacy BSS stacks. This is a common situation facing many telcos. Legacy / clutter has developed over the years. It results in what I refer to as The Chessboard Analogy, which precludes business agility. Without a significant transformation, this clutter constrains you to incremental modifications, which leads to a Strangulation of Feature Releases.
  • Over 60,000 SKUs (Stock-Keeping Units, or distinct product offerings) and 100+ order characteristics is also sadly not unusual. The complexity ramifications of having this many SKUs are significant. I refer to it as The Colour Palette Analogy. The complexity is not just in IT systems, but also for customers and internal teams (eg sales, contact centre, operations, etc)
  • No end-to-end view of inventory. In other words, information of all sorts, not just inventory, is spread across many silos. The ramifications of this are also many, but ultimately it impacts speed of insight (and possibly has related SLA implications), whether via swivel-chairing between products, data scientists having to determine ways to join disparate (and possibly incompatible) data sets, data quality / consistency implications, manual workarounds, etc
  • Manual Sales Processes and “Price on Availability” tend to also imply a lack of repeatability / consistency across sale (and other) processes. More repeatability means more precision, more room for continual improvement and more opportunity to automate and apply algorithmic actions (eg AI/ML-led self-healing). It’s the Mona Lisa of OSS. Or to quote Karl Popper, “Non-reproducible single occurrences are of no significance to science”… or OSS/BSS.

The Vocus transformation was built upon the following Architectural Principles:

Interestingly, Martin explains that Vocus has blended TM Forum SID and MEF data models in preparing its common data model, using whichever standard is the best fit for each area.

Now, this is the diagram that I most want to bring to your attention.

It shows how Vocus has leveraged TM Forum’s Open APIs across the OSS/BSS functionalities / workflows in its stack, including actual Open API numbers (eg TMF620). You can find a table of all available Open APIs here, including specification, Swagger and Postman resources.
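To give a feel for what consuming one of these Open APIs looks like, here’s a minimal sketch of a TMF620 (Product Catalog Management) query. The base URL, bearer token and exact path/version are placeholders to illustrate the REST pattern, not anything Vocus-specific; check the published TMF620 specification for the definitive resource paths and payloads.

```python
# A minimal sketch of querying product offerings via a TM Forum Open API
# (TMF620, Product Catalog Management). Base URL, token and exact path/version
# are placeholders; refer to the published TMF620 spec for the definitive API.
import requests

BASE_URL = "https://api.example-csp.com/tmf-api/productCatalogManagement/v4"
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/json"}

resp = requests.get(
    f"{BASE_URL}/productOffering",
    headers=HEADERS,
    params={"fields": "id,name,lifecycleStatus", "limit": 10},
    timeout=10,
)
resp.raise_for_status()

# List endpoints return a JSON array of offering records
for offering in resp.json():
    print(offering.get("id"), offering.get("name"), offering.get("lifecycleStatus"))
```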

A key to the business agility objective is making product offerings catalog-driven. Vocus’ broad list of product offerings is described as:

And underpinning the offerings is a hierarchical catalog of building blocks:

Martin also raises the importance of tying operational processes / data-flows like telemetry, alarms, fault-fix, etc to the CFS entities, referring to it as, “Having a small set of tightly bounded reusable components.”
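As a rough, hypothetical illustration of that layering (and not Vocus’ actual model), the sketch below represents a catalog hierarchy from a product offering, through CFS and RFS building blocks, down to resources; it’s the CFS layer that operational flows like telemetry, alarms and fault-fix would hook onto.

```python
# A hypothetical, SID-flavoured catalog hierarchy: offering -> CFS -> RFS -> resource.
# Names and attributes are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResourceSpec:
    name: str                                   # eg an access port or a VLAN

@dataclass
class RFSSpec:                                  # Resource-Facing Service spec
    name: str
    resources: List[ResourceSpec] = field(default_factory=list)

@dataclass
class CFSSpec:                                  # Customer-Facing Service spec;
    name: str                                   # telemetry/alarms/fault-fix tie here
    rfs: List[RFSSpec] = field(default_factory=list)

@dataclass
class ProductOffering:
    name: str
    cfs: List[CFSSpec] = field(default_factory=list)

offering = ProductOffering(
    name="Business Internet 1G",
    cfs=[CFSSpec(
        name="Internet Access CFS",
        rfs=[RFSSpec(
            name="Ethernet Access RFS",
            resources=[ResourceSpec("NNI Port"), ResourceSpec("VLAN")],
        )],
    )],
)
print(offering)
```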

Martin then provides a helpful example to show flows between actual suppliers within their transformed stack, including the Open API reference number. In this case, he shows the CPQ (Configure, Price, Quote) process as part of the L2Q (Lead to Quote) process:

Note that the further decomposition of actions is also described in the video, but not shown in this article.

And finally, Martin outlines the key learnings during the Vocus OSS/BSS transformation:

Final call-outs from Martin include:

  • Following standards as much as possible means less unique tech-debt to maintain
  • The dynamic architecture was the key to this transformation
  • The Open APIs have varying levels of maturity and vendor support. These Open APIs are definitely a work in progress, with evolving contributions from TM Forum members
  • Partnering with organisations that have experience with these technologies is important, but even more important is retaining design and architecture accountability within Vocus and similar carriers
  • Careful consideration is needed of whether data should be retained or orphaned when bringing it across to the new EIM (Enterprise Information Model)
  • Swivel-chair processes remain on some of the smaller-volume flows that didn’t warrant an API
  • Federated customer, network and services inventory information will continue to inter-work with legacy networks

Martin has also been kind enough to be a guest on The Passionate About OSS Podcast (Episode 019), where he provides additional layers of discussion to this article.

And a final call-out from me. This is a really detailed and generous sharing of information by Martin and Vocus about their recent OSS/BSS transformation. It’s well worth a closer look and listen for anyone embarking on an OSS/BSS transformation of their own.

015 – Using Modern OSS/BSS Architectures to get Offerings to Market Fast with Greg Tilton

When it comes to OSS/BSS implementations (and products), Time to Market (TTM) is one of our most important metrics. Not just for the network operator to deliver new offerings to market, but also in getting solutions up and running quickly. Faster TTM provides the benefits of cost reduction and faster turn-on of revenue, and potentially allows the operator to beat competitors to the acquisition of new customers.

Our guest on this episode, Greg Tilton, has spent many years building OSS and BSS with this key metric in mind. Initially with carriers / ISPs such as Telstra, Request, AAPT and nbn, but more recently with DGIT Systems (www.dgitsystems.com). Greg is a founder and CEO of DGIT, a company that has been creating BSS products since 2011 across order management, CPQ (Configure, Price, Quote), product catalog and billing (through its acquisition of Inomial in 2018).

Greg provides us with a range of helpful hints for improving TTM across a number of facets. These include project implementations, architecture and product design, as well as highlighting the importance of standardisation. On the latter, Greg and DGIT have long held a mutually beneficial relationship with TM Forum (https://www.tmforum.org), being both a consumer of and contributor to many of the standards that are widely used by the OSS/BSS / telco industries (and beyond).

For any further questions you may have, Greg can be found at: www.linkedin.com/in/greg-tilton-a460207 and via www.dgitsystems.com.

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.

014 – The challenges and pitfalls awaiting OSS implementation teams with Michael De Boer

There are three distinct categories of organisations that interact with OSS/BSS – those who create them, those who use them and those who implement them. But no matter how good the first two are (ie the products / creators and the users), if the implementation isn’t done well, then the OSS/BSS is almost pre-destined to fail. There are many, many challenges and pitfalls that await implementation teams. There’s a reason why the Passionate About OSS logo is an octopus (the OctopOSS). Just when you think you have all the implementation tentacles under control, another comes and whacks you.

Our guest on this episode, Michael De Boer, has spent many years wrangling the OctopOSS. He’s had implementation roles on the buyer / user side with companies like NextGen, but also on the implementer / integrator side with Pitney Bowes. He’s also had ultimate accountability for OSS/BSS delivery as Managing Director of Dynamic Design Australia, where he had to quote and sell but also get hands-on with implementations. Now as Director of GQI (https://gqi.com.au), Michael leads integrations and consultancies that extend beyond OSS/BSS and into other areas of ICT.

Michael describes some of his important learnings on how to ensure your OSS/BSS implementation runs smoothly. He pays particular attention to the people management and data management aspects of any implementation, noting that these are vital components of any build.

For any further questions you may have, Michael can be found at: https://www.linkedin.com/in/michael-de-boer-abb60b2 and via https://gqi.com.au.

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.

013 – Using a Commercial and Open Source approach to Tackle Network Assurance with Keith Sinclair

Have you noticed the rise in trust, and also in sophistication, of open-source OSS/BSS in recent years? There are many open-source OSS/BSS tools out there. Some have been built as side-projects by communities that have day jobs, whilst others have many employed developers / contributors. Generally speaking, the latter are able to employ developers because they have a reliable revenue stream to support the wages.

Our guest on this episode, Keith Sinclair, has made the leap from side-project to thriving OSS/BSS vendor whilst retaining an open-source model. His product, NMIS, has been around since the 1990s, building on the legendary work of other open-source developers like Tobias Oetiker. NMIS has since become one of the flagship products for his company, Opmantek (https://opmantek.com). Keith and the team have succeeded in creating a commercial construct around their open-source roots, offering product support and value-add products.

Keith retraces those steps, from the initial discussion that triggered the creation of NMIS, its evolution whilst he simultaneously worked at organisations like Cisco, Macquarie Bank and Anixter, through to the IP buy-out and formation of Opmantek, where he’s been CTO for over 10 years. He also describes some of the core beliefs that have guided this journey, from open-source itself, to the importance of automation, scalability and refactoring. The whole conversation is underpinned by a clear passion for helping SysAdmins and Network Admins tackle network assurance challenges at service providers and enterprises alike. Having done these roles himself, he has a powerful empathy for what these people face each day and how tools can help improve their consistency and effectiveness.

For any further questions you may have, Keith can be found at: https://www.linkedin.com/in/kcsinclair

Disclaimer. All the views and opinions shared in this podcast, and others in the series, are solely those of our guest and do not reflect the opinions or beliefs of the organisations discussed.