New OSS functionality or speed and scale?

We all know that revenue per bit (of data transferred across comms networks) is trending lower. How could we not? It’s posited as one of the reasons for declining profitability of the industry. The challenge for telcos is how to engineer an environment where revenue per bit is low but the business remains cost-viable.

I’m sure there are differentiated comms products out there in the global market. However, for the many products that aren’t differentiated, there’s a risk of commoditisation. Customers of our OSS are increasingly moving into a paradigm of commoditisation, which in turn shapes the form our OSS must mould themselves to.

The OSS we deliver can either be the bane or the saviour. They can be a differentiator where otherwise there is none. For example, getting each customer’s order ready for service (RFS) faster than competitors. Or by processing orders at scale, yet at a lower cost-base through efficiencies / repeatability such as streamlined products, processes and automations.

OSS exist to improve efficiency at scale of course, but I wonder whether we lose sight of that sometimes? I’ve noticed that we have a tendency to focus on functionality (ie delivering new features) rather than scale.

This isn’t just the OSS vendors or implementation teams either by the way. It’s often apparent in customer requirements too. If you’ve been lucky enough to be involved with any OSS procurement processes, which side of the continuum was the focus – on introducing a raft of features, or narrowing the field of view down to doing the few really important things at scale and speed?

Just in time design

It’s interesting how we tend to go in cycles. Back in the early days of OSS, the network operators tended to build their OSS from the ground up. Then we went through a phase of using Commercial off-the-shelf (COTS) OSS software developed by third-party vendors. We now seem to be cycling back towards in-house development, but with collaboration that includes vendors and external assistance through open-source projects like ONAP. Interesting too how Agile fits in with these cycles.

Regardless of where we are in the cycle for our OSS, as implementers we’re always challenged with finding the Goldilocks amount of documentation – not too heavy, not too light, but just right.

The Agile Manifesto espouses, “working software over comprehensive documentation.” Sounds good to me! It perplexes me that some OSS implementations are bogged down by lengthy up-front documentation phases, especially if we’re basing the solution on COTS offerings. These can really stall the momentum of a project.

Once a solution has been selected (which often does require significant analysis and documentation), I’m more of a proponent of getting the COTS software stood up, even if only in a sandpit environment. This is where just-in-time (JIT) documentation comes into play. Rather than having every aspect of the solution documented (eg process flows, data models, high availability models, physical connectivity, logical connectivity, databases, etc, etc), we only need enough documentation for collaborative stakeholders to do their parts (eg IT to set up hardware / hosting, networks to set up physical connectivity, vendor to provide software, integrator to perform build, etc) to stand up a vanilla solution.

Then it’s time to start building trial scenarios through the solution. There’s usually quite a bit of trial and error in this stage, as we seek to optimise the scenarios for the intended users. Then we add a few more scenarios.

There’s little point trying to document the solution in detail before a scenario is trialled, but some documentation can be really helpful. For example, if the scenario is to build a small sub-section of a network, then draw up some diagrams of that sub-network that include the intended naming conventions for each object (eg device, physical connectivity, addresses, logical connectivity, etc). That allows you to determine whether there are unexpected challenges with naming conventions, data modelling, process design, etc. There are always unexpected challenges that arise!
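
To make that a little more concrete, below is a minimal sketch (in Python) of the kind of naming-convention check you might trial against such a sub-network diagram. The site codes, device roles and pattern are hypothetical placeholders, not a recommended standard.

```python
import re

# Hypothetical naming convention for the trial sub-network:
# <site>-<role>-<sequence>, eg "MEL01-PE-001" (site code, device role, index).
DEVICE_NAME_PATTERN = re.compile(r"^[A-Z]{3}\d{2}-(PE|P|CE|SW)-\d{3}$")

trial_devices = ["MEL01-PE-001", "MEL01-SW-002", "Melbourne_PE_3"]  # sample data only

for name in trial_devices:
    status = "OK" if DEVICE_NAME_PATTERN.match(name) else "breaks convention"
    print(f"{name}: {status}")
```

Even a throwaway check like this tends to surface the naming / data-modelling surprises early, while they’re still cheap to fix.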

I figure you’re better off documenting the real challenges than theorising on the “what if?” challenges, which is what often happens with up-front documentation exercises. There are always brilliant stakeholders who can imagine millions of possible challenges, but these often bog the design phase down.

With JIT design, as the solution evolves, the documentation can evolve too… if there is an ongoing reason for its existence (eg as a user guide, for a test plan, as a training cheat-sheet, a record of configuration for fault-finding purposes, etc).

Interestingly, the first value in the Agile Manifesto is, “individuals and interactions over processes and tools.” This is where the COTS vs in-house-dev comes back into play. When using COTS software, individuals, interactions and processes are partly driven by what the tools support. COTS functionality constrains us but we can still use Agile configuration and customisation to optimise our solution for our customers’ needs (where cost-benefit permits).

Having a working set of vanilla tools allows our customers to get a much better feel for what needs to be done rather than trying to understand the intent of up-front design documentation. And that’s the key to great customer outcomes – having the customers knowledgeable enough about the real solution (not hypothetical solutions) to make the most informed decisions possible.

Of course there are always challenges with this JIT design model too, especially when third-party contracts are involved!

Using risk reversal to design OSS

There’s a concept in sales called “risk reversal” that takes all of the customers’ likely issues with a product and provides answers to alleviate customer concerns. I believe we can apply the same concept to OSS, not just to sell them, but to design them.

To borrow from a risk register page here on PAOSS, the major categories of risk that appear on almost all OSS projects are:

  • Organisational change management – the OSS will touch almost all parts of a business and a large number of people within the organisation. If all parts of the business are not conditioned to the change then the implementation will not be successful, even if the technical deliverables are faultless. Change management has many, many layers but one way to minimise change management is to make the products and processes highly intuitive. I feel that intuitive OSS will come from a focus on design and simplification rather than our current focus on constantly adding more features. The aim should be to create OSS that are as easy for operators to start using as office tools like spreadsheets, word processors, presentation applications, etc
  • Data integrity – the OSS is only as good as the data that is being fed to it. If the quality of data in the OSS database is poor then operational staff will quickly lose faith in the tools. The product-based techniques that can be used to overcome this risk (see the sketch after this list) include:
    • Design tools / data model to cope with poor data quality, but also flag it as low confidence for future repair
    • Reduction in data relationships / dependencies (ie referential integrity) to ensure that quality problems don’t have a domino effect on OSS usability
    • Building checks and balances that ensure the data can be reconciled and quality remains high
    • Incorporate closed-loop processes to ensure data quality is continually improved, rather than the open-loop processes that tend to lead to data quality degradation
  • Application functionality mapping to real business needs – OSS have been around long enough to have all but run out of features for vendors to differentiate against. The truly useful functionality has arisen from real business needs. “Wish-list” functionality that adds little tangible business benefit or requires significant effort is just adding product and project risk
  • Northbound Interface / Integration – Costs and risks of integrations are significant on each OSS project. There are many techniques that can be used to reduce risk, such as Minimum Viable Data (ie fewer data types to collect across an interface), standardised destination mapping models, etc, but the industry desperately needs major innovation here
  • Implementation – there are so many sources of risk within this category, as is to be expected on any large, complex project. Taking the PMP approach to risk reduction, we can apply the Triple Constraint model
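
To illustrate the reconciliation / closed-loop idea from the data integrity item above, here’s a minimal sketch. The device names, attributes and data are hypothetical; the point is that mismatches are flagged as low-confidence for repair rather than silently accepted or rejected.

```python
# Compare what the OSS inventory believes exists against what network
# discovery actually returned, and flag (rather than reject) suspect records.
inventory = {
    "MEL01-PE-001": {"vendor": "VendorA", "ports": 48},
    "MEL01-PE-002": {"vendor": "VendorA", "ports": 24},
    "MEL01-SW-003": {"vendor": "VendorB", "ports": 24},
}

discovered = {
    "MEL01-PE-001": {"vendor": "VendorA", "ports": 48},
    "MEL01-PE-002": {"vendor": "VendorA", "ports": 48},  # port count mismatch
    # MEL01-SW-003 not seen by discovery at all
}

low_confidence = []
for name, record in inventory.items():
    seen = discovered.get(name)
    if seen is None:
        low_confidence.append((name, "not discovered"))
    elif seen != record:
        low_confidence.append((name, "attribute mismatch"))

# Flagged records stay usable but are queued for repair, closing the loop
# instead of letting data quality silently degrade.
for name, reason in low_confidence:
    print(f"flag {name}: {reason}")
```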

Aggregated OSS buying models

Last week we discussed a sell-side co-op business model. Today we’ll look at buy-side co-op models.

In other industries, we hear of buying groups getting great deals through aggregated buying volumes. This is a little harder to achieve with products that are as uniquely customised as OSS. It’s possible that OSS buy-side aggregation could occur for operators that are similar in nature but don’t compete (eg regional operators). Having said that, I’ve yet to see any co-ops formed to gain OSS group-purchase benefits. If you have, I’d love to hear about it.

In OSS, there are three approaches that aren’t exactly co-op buying models but do aggregate the evaluation and buying decision.

The most obvious is for corporations that run multiple carriers under one umbrella such as Telefonica (see Telefonica’s various OSS / BSS contract notifications here), SingTel (group contracts here), etisalat, etc. There would appear to be benefits in standardising OSS platforms across each of the group companies.

A far less formal co-op buying model I’ve noticed is the social-proof approach. This is where one, typically large, network operator in a region goes through an extensive OSS / BSS evaluation and chooses a vendor. Then there’s a domino effect where other, typically smaller, network operators also buy from the same vendor.

Even less formal again is the use of third-party organisations like Passionate About OSS to assist with a standard vendor selection methodology. The vendors selected aren’t standardised because each operator’s needs are different, but the product / vendor selection methodology builds on the learnings of past selection processes across multiple operators. The benefit comes in the evaluation and decision frameworks.

An OSS data creation brain-fade

Many years ago, I made a data migration blunder that slowed a production OSS down to a crawl. Actually, less than a crawl. It almost became unusable.

I was tasked with creating a production database of a carrier’s entire network inventory, including data migration for a bunch of Nortel Passport ATM switches (yes, it was that long ago).

  • There were around 70 of these devices in the network
  • 14 usable slots in each device (ie slots not reserved for processing, resilience, etc)
  • Depending on the card type there were different port densities, but let’s say there were 4 physical ports per slot
  • Up to 2,000 VPIs per port
  • Up to 65,000 VCIs per VPI
  • The customer was running SPVC

To make it easier for the operator to create a new customer service, I thought I should script-create every VPI/VCI on every port on every device. That would allow the operator to just select any available VPI/VCI from within the OSS when provisioning (or later, auto-provisioning) a service.

There was just one problem with this brainwave. For this particular OSS, each VPI/VCI represented a logical port that became an entry alongside physical ports in the OSS’s ports table… You can see what’s about to happen can’t you? If only I could’ve….

My script auto-created nearly 510 billion VCI logical ports; over 525 billion records in the ports table if you also include VPIs and physical ports…. in a production database. And that was just the ATM switches!
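
For what it’s worth, the back-of-the-envelope arithmetic from the nominal figures listed above looks something like the sketch below. The real port densities varied by card type, so the production totals differed, but the order of magnitude is the point.

```python
devices = 70           # Nortel Passport ATM switches
slots_per_device = 14  # usable slots
ports_per_slot = 4     # nominal port density ("let's say 4")
vpis_per_port = 2_000
vcis_per_vpi = 65_000

physical_ports = devices * slots_per_device * ports_per_slot
vpis = physical_ports * vpis_per_port
vcis = vpis * vcis_per_vpi

print(f"physical ports: {physical_ports:,}")            # 3,920
print(f"VPIs:           {vpis:,}")                       # 7,840,000
print(f"VCIs:           {vcis:,}")                       # 509,600,000,000
print(f"ports table:    {physical_ports + vpis + vcis:,}")
```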

So instead of making life easier for the operators, it actually brought the OSS’s database to a near stand-still. Brilliant!!

Luckily for me, it was a greenfields OSS build and the production database was still being built up in readiness for operational users to take the reins. I was able to strip all the ports out and try again with a less idiotic data creation plan.

The reality was that there’s no way the customer could’ve ever used the 2,000 x 65,000 VPI/VCI groupings I’d created on every single physical port. Put it this way, there were far fewer than 130 million services across all service types across all carriers across that whole country!

Instead, we just changed the service activation process to manually add new VPI/VCIs into the database on demand as one of the pre-cursor activities when creating each new customer service.

Ever since that experience, I’ve stuck to the Minimum Viable Data (MVD) mantra.

Network slicing, another OSS activity

One business customer, for example, may require ultra-reliable services, whereas other business customers may need ultra-high-bandwidth communication or extremely low latency. The 5G network needs to be designed to be able to offer a different mix of capabilities to meet all these diverse requirements at the same time.
From a functional point of view, the most logical approach is to build a set of dedicated networks each adapted to serve one type of business customer. These dedicated networks would permit the implementation of tailor-made functionality and network operation specific to the needs of each business customer, rather than a one-size-fits-all approach as witnessed in the current and previous mobile generations which would not be economically viable.
A much more efficient approach is to operate multiple dedicated networks on a common platform: this is effectively what “network slicing” allows. Network slicing is the embodiment of the concept of running multiple logical networks as virtually independent business operations on a common physical infrastructure in an efficient and economical way.”
GSMA’s Introduction to Network Slicing.

Engineering a network is an exercise in compromises. There are many different optimisation levers to pull to engineer a set of network characteristics. In the traditional network, it was a case of pulling all the levers to find a middle-ground set of characteristics that supported all of the operator’s service offerings.

QoS striping of traffic allowed for a level of differentiation of traffic handling, but the underlying network was still a balancing act of settings. Network virtualisation offers new opportunities. It allows unique segmentation via virtual networks, where each can be optimised for the specific use-cases of that network slice.

For years, I’ve been positing the concept of telco offerings being like electricity networks – that we don’t need so many service variants. I should note that this analogy is not quite right. We do have a few different types of “electricity” such as highly available (health monitoring), high-bandwidth (content streaming), extremely low latency (rapid reaction scenarios such as real-time sensor networks), etc.

Now what do we need to implement and manage all these network slices?? Oh that’s right, OSS! It’s our OSS that will help to efficiently coordinate all the slicing and dicing that’s coming our way… to optimise all the levers across all the different network slices!
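
To make the “levers” idea concrete, here’s a minimal sketch of the kind of slice profiles an OSS might hold as intent, each pulling the levers differently on the same infrastructure. The attribute names and values are illustrative only, not a GSMA or 3GPP template.

```python
# Illustrative slice profiles: each slice is optimised for a different use-case.
slice_profiles = {
    "health-monitoring": {"availability": "99.999%", "latency_ms": 50,  "bandwidth_mbps": 5},
    "content-streaming": {"availability": "99.9%",   "latency_ms": 100, "bandwidth_mbps": 500},
    "realtime-sensors":  {"availability": "99.99%",  "latency_ms": 5,   "bandwidth_mbps": 10},
}

def pick_slice(requirements, profiles):
    """Return the first slice whose profile meets the customer's stated needs."""
    for name, profile in profiles.items():
        if (profile["latency_ms"] <= requirements["max_latency_ms"]
                and profile["bandwidth_mbps"] >= requirements["min_bandwidth_mbps"]):
            return name
    return None

# eg a real-time sensor customer needing <= 10 ms latency and >= 1 Mbps
print(pick_slice({"max_latency_ms": 10, "min_bandwidth_mbps": 1}, slice_profiles))
```

The hard part, of course, is that the OSS has to hold, assure and re-optimise these profiles across shared physical resources, not just select them.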

The OSS co-op business model

A co-operative is a member-owned business structure with at least five members, all of whom have equal voting rights regardless of their level of involvement or investment. All members are expected to help run the cooperative.”
Small Business WA.

The co-op business model has fascinated me since doing some tech projects in the dairy industry in the deep distant past. The dairy co-ops empower collaboration of dairy farmers where the might of the collective outweighs that of each individually. As the collective, they’ve been able to establish massive processing plants, distribution lines, bargaining power, etc. The dairy co-ops are a sell-side collaboration.

By contrast open source projects like ONAP represent an interesting hybrid – part buy-side collaboration (ie the service providers acquiring software to run their organisations) and part sell-side (ie the vendors contributing code to the project alongside the service providers).

I’ve long been intrigued by the potential for a pure sell-side co-operative in OSS.

As we all know, the OSS market is highly fragmented (just look at the number of vendors / products on this page), which means inefficiency because of the duplicated effort across vendors. A level of market efficiency comes from mergers and acquisitions. In addition, some comes from vendors forming partnerships to offer more complete solutions to a given customer requirement list.

But the key to a true sell-side OSS co-operative would be in the definition above – “at least five members.” Perhaps it’s an open-source project that brings them together. Perhaps it’s an extended partnership.

As Tom Nolle stated in an article that prompted the writing of today’s post, “On the vendor side, commoditization tends to force consolidation. A vendor who doesn’t have a nice market share has little to hope for but slow decline. A couple such vendors (like Infinera and Coriant, recently) can combine with the hope that the combination will be more survivable than the individual companies were likely to be. Consolidation weeds out industry inefficiencies like parallel costly operations structures, and so makes the remaining players stronger.”

Imagine for a moment if, instead of having developers spread across 100 alarm management tools, that same developer pool could take a consolidated 5 alarm management products forward? Do you think we’d get better, more innovative, more complete products faster?

Having said that, co-ops have their weaknesses too.

What do you think? Could such a model work? Would it be a disaster?

OSS, with drama, without drama. Your choice

A recent blog from Seth Godin brought back some memories from a past project.

Two ways to solve a problem and provide a service.
With drama. Make sure the customer knows just how hard you’re working, what extent you’re going to in order to serve. Make a big deal out of the special order, the additional cost, the sweat and the tears.
Without drama. Make it look effortless.
Either can work. Depends on the customer and the situation.
Seth Godin here.

Over the course of the long-running and challenging project, I worked under a number of different Program Directors. The second last (chronologically) took the team barrel-chested down the “With Drama” path whilst the last took the “Without Drama” approach.

The “With Drama” approach was very melodramatic and political, but to be honest, was also really draining. It was draining because of the high levels of contact (eg meetings, reports, etc), reducing the amount of productive delivery time.

The “Without Drama” approach did make it look effortless, because by comparison it was effortless. The Program Director took responsibility for peer-level contact and cleared the way for the delivery team to focus on delivering. The team was still working well over 60 hour weeks, but it was now more clearly focused on delivery tasks. Interestingly, this approach brought a seemingly endless project to a systematic and clean conclusion (ie delivery) within about three months.

Now I’m not sure about your experiences or preferences, but I’d go with the “Without Drama” OSS delivery approach every time. The emotional intensity required of the “With Drama” approach just isn’t sustainable over long-running projects like our OSS projects tend to be.

What are your thoughts / experiences?

How an OSS is like an F1 car

A recent post discussed the challenge of getting a timeslice of operations people to help build the OSS. That post surmised, “as the old saying goes, you get back what you put in. In the case of OSS I’ve seen it time and again that operations need to contribute significantly to the implementation to ensure they get a solution that fits their needs.”

I have a new saying for you today, this time from T.D. Jakes, “You can’t be committed to the dream. You have to be committed to the process.”

If you’re representing an organisation that is buying an OSS solution from a vendor / integrator, please consider these two adages above. Sometimes we’re good at forming the dream (eg business requirements, business case, etc) and expecting the vendor to conduct almost all of the process. While our network operations teams are hired for the process of managing the network, we also need their significant input on the process of building / configuring an OSS. The vendor / integrator can’t just develop it in isolation and then hand it over to ops with a few days of training at the end.

The process of bringing a new OSS into an organisation is not like buying a road car. With an OSS, you can’t just place an order with some optional features like paint and trim specified, then expect to start driving it as soon as it leaves the vendor’s assembly line. It’s more like an F1 car where the driver is in constant communications with the pit-crew, changing and tweaking and refining to optimise the car to the driver’s unique needs (and in turn to hopefully optimise the results).

At least, that’s what current-state OSS are like. Perhaps in the future… we’ll strive to refine our OSS to be more like a road-car – standardised and intuitive enough for operators to drive straight off the assembly line.

Orchestration looks a bit like provisioning

The following is the result of a survey question posed by TM Forum:
[Chart: Number 1 Driver for Orchestration – TM Forum survey results]

I’m not sure how the numbers tally, but conceptually the graph above paints an interesting perspective of why orchestration is important. The graph indicates the why.

But in this case, for me, the why is the by-product of the how. The main attraction of orchestration models is in how we can achieve modularity. All of the business outcomes mentioned in the graph above will only be achievable as a result of modularity.

Put another way, rather than having the integration spaghetti of an “old-school” OSS / BSS stack, orchestration (and orchestration plans) potentially provides clearer demarcation and abstraction all the way from product design down into the transactions that hit the network… not to mention the meet-in-the-middle points between business units.

Demarcation points support catalog items (perhaps as APIs / microservices with published contracts), allowing building-block design of products rather than involvement of (and disputes between) business units all down the line of product design. This facilitates the speed (34%) and services on demand (28%) objectives stated in the graph.
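
As a rough illustration of that building-block idea, the sketch below treats each catalog item as a published contract and an orchestration plan as a simple sequence of those items. The item names, contracts and order data are entirely hypothetical.

```python
# Hypothetical catalog items, each with a published "contract" (the parameters
# it needs) and a callable that would front a microservice / API in real life.
def assign_access_port(params):
    return f"port assigned at {params['site']}"

def configure_vpn(params):
    return f"VPN {params['vpn_id']} configured"

catalog = {
    "ASSIGN_ACCESS_PORT": {"contract": ["site"], "execute": assign_access_port},
    "CONFIGURE_VPN":      {"contract": ["vpn_id"], "execute": configure_vpn},
}

# A product is designed as a sequence of catalog items (an orchestration plan),
# rather than as a bespoke end-to-end integration.
business_vpn_plan = ["ASSIGN_ACCESS_PORT", "CONFIGURE_VPN"]

order = {"site": "MEL01", "vpn_id": "VPN-1234"}
for step in business_vpn_plan:
    item = catalog[step]
    missing = [p for p in item["contract"] if p not in order]
    if missing:
        raise ValueError(f"{step} missing parameters: {missing}")
    print(item["execute"](order))
```

Product designers then compose from the catalog rather than negotiating every hand-off line-by-line with each business unit.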

But I used the term “old-school” with intent above. The modularity mentioned above was already achieved in some older OSS too. The ability to carve up, sequence, prioritise and re-construct a stream of service orders was already achievable by some provisioning + workflow engines of the past.

The business outcomes remain the same now as they were then, but perhaps orchestration takes it to the next level.

There is no differentiation left in out-bundling competitors

In 1998 Berkshire Hathaway acquired a reinsurance company called General Re. “The only significant staff change that followed the merger was the elimination of General Re’s investment unit. Some 150 people had been in charge of deciding where to invest the company’s funds; they were replaced with just one individual – Warren Buffett.
Robert G. Hagstrom
in, “The Warren Buffett Way.”

Buffett was able to replace 150 people, and significantly outperform them, because they were conducting (relatively) small value, high volume transactions and he did the exact opposite.

Compare this with Gemini Waghmare’s thoughts on BSS, “It used to be that operators differentiated by pricing. Complex bundles, friends and family plans, rollover minutes and megabytes were used as ways to win over consumers. This drove significant investment into charging platforms and product catalogs. The internet economy runs on one-click purchases and a recurring flat rate. Roaming and overages are going away and transactional VOD (video on-demand) makes way for subscription VOD.
It’s not uncommon for operators to have 10,000 price plans while Netflix has three. Facebook and Google make billions of dollars without charging a cent.
Operators would do well to deprecate the value of their charging systems and invest instead in cloud and flat-rate billing with added focus on collecting, normalizing and monetizing user data. By simplifying subscription models with lightweight billing platforms, the scale and cost of BSS will drop dramatically. After all, there is no differentiation left in out-bundling competitors
,” quoted here on Inform. There are some brilliant insights in this link, so I recommend taking a closer look BTW.

10,000+ pricing plans definitely sounds like the equivalent of General Re before Buffett arrived. Having only 3 pricing plans would be more like the Buffett approach, changing the dynamic of BSS tools and the size of the teams that use them! Having only 3 pricing plans would certainly change the dynamic for OSS too. The number of variants we’d be asked to handle would diminish, making it much easier to build and operate our OSS. Due to all the down-stream inefficiencies, you could actually argue that there is only negative-differentiation left in out-bundling competitors.

As an aside… Interesting comment that, “Facebook and Google make billions of dollars without charging a cent.” I’d beg to differ. Whilst consumers of the service aren’t billed, advertisers certainly are, which I assume still needs a billing engine… one that probably has quite a bit of algorithmic complexity.

Are OSS business tools or technical tools?

I’d like to get your opinion on this question – are OSS business tools or technical tools?

We can say that BSS are as the name implies – business support systems.
We can say that NMS / EMS / NEMS are network management tools – technical tools.

The OSS layer fits between those two layers. It’s where the business and technology worlds combine (collide??).
[Diagram: BSS / OSS / NMS / EMS / NE layer stack]

If we use the word Operations / Operational to represent the “O” in OSS, it might imply that they exist to help operate technology. Many people in the industry undoubtedly see OSS as technical, operational tools. If I look back to when I first started on OSS, I probably had this same perception – I primarily faced the OSS / NMS interface in the early days.

But change the “O” to operationalisation and it changes the perspective slightly. It encourages you to see that the technology / network is the means via which business models can be implemented. It’s our OSS that allow operationalisation to happen.

So, let me re-ask the question – are OSS business tools or technical tools?

They’re both right? And therefore as OSS operators / developers / implementers, we need to expand our vision of what OSS do and who they service… which helps us get to Simon Sinek’s Why for OSS.

OSS of the past probably tended to be the point of collision and friction between business and tech groups within an organisation. Some modern OSS architectures give me the impression of being meet-in-the-middle tools, which will hopefully bring more collaboration between fiefdoms. Time will tell.

OSS compromise, no, prioritise

On Friday, we talked about how making compromises on OSS can actually be a method for reducing risk. We used the OSS vendor selection process to discuss the point, where many stakeholders contribute to the list of requirements that help to select the best-fit product for the organisation.

To continue with this same theme, I’d like to introduce you to a way of prioritising requirements that borrows from the risk / likelihood matrix commonly used in project management.

The diagram below shows the matrix as it applies to OSS.
[Diagram: OSS automation grid – frequency of use vs time / cost savings]

The y-axis shows the frequency of use (of a particular feature / requirement). The x-axis shows the time / cost savings that will result from having this functionality or automation.

If you add two extra columns to your requirements list, the frequency of use and the resultant savings, you’ll quickly identify which requirements are highest priority (green) based on business benefit. Naturally there are other factors to consider, such as cost-benefit, but it should quickly narrow down to your 80/20 that will allow your OSS to make the most difference.
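
Here’s a minimal sketch of how those two extra columns might be scored; the requirements, figures and “green” threshold are purely illustrative.

```python
# Score each requirement by frequency of use x resulting saving per use.
requirements = [
    {"id": 1, "desc": "Auto-create standard orders", "uses_per_month": 400, "saving_hours_per_use": 0.5},
    {"id": 2, "desc": "Bulk device import",           "uses_per_month": 2,   "saving_hours_per_use": 8.0},
    {"id": 3, "desc": "Custom report styling",        "uses_per_month": 1,   "saving_hours_per_use": 0.2},
]

for req in requirements:
    req["monthly_saving_hours"] = req["uses_per_month"] * req["saving_hours_per_use"]
    req["priority"] = "green" if req["monthly_saving_hours"] >= 40 else "lower"

for req in sorted(requirements, key=lambda r: r["monthly_saving_hours"], reverse=True):
    print(f"{req['desc']}: {req['monthly_saving_hours']:.1f} h/month -> {req['priority']}")
```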

The same model can be used to sub-prioritise too. For example, you might have a requirement to activate orders – but some orders will occur very frequently, whilst other order types occur rarely. In this case, when configuring the order activation functionality, it might make sense to prioritise on the green order types first.

Are we making our OSS lives easier?

As an implementer of OSS, what’s the single factor that makes it challenging for us to deliver on any of the three constraints of project delivery? Complexity. Or put another way, variants. The more variants, the less chance we have of delivering on time, cost or functionality.

So let me ask you, is our next evolution simpler? No, actually. At least, it doesn’t seem so to me.

For all their many benefits, are virtualised networks simpler? We can apply abstractions to give a simpler view to higher layers in the stack, but we’ve actually only introduced more layers. Virtualisation will also bring an even higher volume of devices, transactions, etc to monitor, so we’re going to have to develop complex ways of managing these factors in cohorts.

We’re big on automations to simplify the roles of operators. But automations don’t make the task simpler for OSS implementers. Once we build a whole bunch of complex automations it might give the appearance of being simpler. But under the hood, it’s not. There are actually more moving parts.

Are we making it simpler through repetition across the industry? Nope, with the proliferation of options we’re getting more diverse. For example, back in the day, we only had a small number of database options to store our OSS data in (I won’t mention the names, I’m sure you know them!). But what about today? We have relational databases of course, but also have so many more options. What about virtualisation options? Mediation / messaging options? Programming languages? Presentation / reporting options? The list goes on. Each different OSS uses a different suite of tools, meaning less standardisation.

Our OSS lives seem to be getting harder by the day!

From PoC to OSS sandpit

You all know I’m a fan of training operators in OSS sandpits (and as apprenticeships during the build phase) rather than a week or two of classroom training at the end of a project.

To reduce the re-work in building a sandpit environment, which will probably be a dev/test environment rather than a production environment, I like to go all the way back to the vendor selection process.
[Diagram: From PoC to OSS sandpit]

Running a Proof of Concept (PoC) is a key element of vendor selection in my opinion. The PoC should only include a small short-list of pre-selected solutions so as to not waste time of operator or vendor / integrator. But once short-listed, the PoC should be a cut-down reflection of the customer’s context. Where feasible, it should connect to some real devices / apps (maybe lab devices / apps, possibly via a common/simple interface like SNMP). This takes some time on both sides to set up, but it shows how easily (or not) the solution can integrate with the customer’s active network, BSS, etc. It should be specifically set up to show the device types, alarm types, naming conventions, workflows, etc that fit into the customer’s specific context. That allows the customer to understand the new OSS in terms they’re familiar with.
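
As a tiny example of the “connect to some real lab devices” step, a pre-PoC smoke test can be as simple as the sketch below, which shells out to net-snmp’s snmpget to confirm each lab device answers. The host addresses and community string are placeholders, and it assumes the net-snmp command-line tools are installed.

```python
import subprocess

# Placeholder lab devices and SNMP community; replace with real PoC lab values.
lab_devices = ["192.0.2.10", "192.0.2.11"]
community = "public"

for host in lab_devices:
    # snmpget comes from the net-snmp tools; SNMPv2-MIB::sysName.0 is a
    # standard object that most SNMP-capable devices will answer.
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host, "SNMPv2-MIB::sysName.0"],
        capture_output=True, text=True, timeout=10,
    )
    status = "reachable" if result.returncode == 0 else "no SNMP response"
    print(f"{host}: {status} {result.stdout.strip()}")
```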

And since the effort has been made to set up the PoC, doesn’t it make sense to make further use of it and not just throw it away? If the winning bidder then leaves the PoC environment in the hands of the customer, it becomes the sandpit to play in. The big benefit for the winning bidder is that hopefully the customer will have less “what if?” questions that distract the project team during the implementation phase. Questions can be demonstrated, even if only partially, using the sandpit environment rather than empty words.

Post Implementation Review (PIR)

Have you noticed that OSS projects need to go through extensive review to get funding of business cases? That makes sense. They tend to be a big investment after all. Many OSS projects fail, so we want to make sure this one doesn’t and we perform thorough planning / due-diligence.

But I do find it interesting that we spend less time and effort on Post Implementation Reviews (PIRs). We might do the review of the project, but do we compare with the Cost Benefit Analysis (CBA) that undoubtedly goes into each business case?

[Diagram: OSS project analysis scales]

Even more interesting is that we spend even less time and effort performing ongoing analysis of an implemented OSS against the CBA.

Why interesting? Well, if we took the time to figure out what has really worked, we might have better (and more persuasive) data to improve our future business cases. Not only that, but we’d have more chance of reducing the effort on the business case side of the scale compared with current approaches (as per the diagram above).

What do you think?

OSS – just in time rather than just in case

We all know that once installed, OSS tend to stay in place for many years. Too much effort to air-lift in. Too much effort to air-lift back out, especially if tightly integrated over time.

The monolithic COTS (off-the-shelf) tools of the past would generally be commissioned and customised during the initial implementation project, with occasional integrations thereafter. That meant we needed to plan out what functionality might be required in future years and ask for it to be implemented, just in case. Between the baked-in functionality that is never needed and the just-in-case functionality that is possibly never used, we ended up with a lot of bloat in our OSS.

With the current approach of implementing core OSS building blocks, then utilising rapid release and microservice techniques, we have an ongoing enhancement train. This provides us with an opportunity to build just in time, to build only functionality that we know to be essential.

This has pluses and minuses. On the plus side, we have more opportunity to restrict delivery to only what’s needed. On the minus side, a just in time mindset can build a stop-gap culture rather than strategic, long-term thinking. It’s always good to have long-term thinkers / planners on the team to steer the rapid release implementations (and reductions / refactoring) and avoid a new cause of bloat.

An OSS doomsday scenario

If I start talking about doomsday scenarios where the global OSS job industry is decimated, most people will immediately jump to the conclusion that I’m predicting an artificial intelligence (AI) takeover. AI could have a role to play, but is not a key facet of the scenario I’m most worried about.
[Image: OSS doomsday scenario]

You’d think that OSS would be quite a niche industry, but there must be thousands of OSS practitioners in my home town of Melbourne alone. That’s partly due to large projects currently being run in Australia by major telcos such as nbn, Telstra, SingTel-Optus and Vodafone, not to mention all the smaller operators. Some of these projects are likely to scale back in coming months / years, meaning fewer seats in a game of OSS musical chairs. But this isn’t the doomsday scenario I’m hinting at in the title either. There will still be many roles at the telcos and the vendors / integrators that support them.

There are hundreds of OSS vendors in the market now, with no single dominant player. It’s a really fragmented market that would appear to be ripe for M&A (mergers and acquisitions). Ripe for consolidation, but massive consolidation is still not the doomsday scenario because there would still be many OSS roles in that situation.

The doomsday scenario I’m talking about is one where only one OSS gains domination globally. But how?

Most traditional telcos have a local geographic footprint with partners/subsidiaries in other parts of the world, but are constrained by the costs and regulations of a wired or cellular footprint to be able to reach all corners of the globe. All that uniqueness currently leads to the diversity of OSS offerings we see today. The doomsday scenario arises if one single network operator usurps all the traditional telcos and their legacy network / OSS / BSS stacks in one technological fell swoop.

How could a disruption of that magnitude happen? I’m not going to predict, but a satellite constellation such as the one proposed by Starlink has some of the hallmarks of such a scenario. By using low-earth orbit (LEO) satellites (ie lower latency than geostationary satellite solutions), point-to-point laser interconnects between them and peering / caching of data in the sky, it could fundamentally change the world of communications and OSS.

It has global reach, no need for carrier interconnect (hence no complex contract negotiations or OSS/BSS integration for that matter), no complicated lead-in negotiations or reinstatements, no long-haul terrestrial or submarine cable systems. None of the traditional factors that cost so much time and money to get customers connected and keep them connected (only the complication of getting and keeping the constellation of birds in the sky – but we’ll put that to the side for now). It would be hard for traditional telcos to compete.

I’m not suggesting that Starlink can or will be THE ubiquitous global communications network. What if Google, AWS or Microsoft added this sort of capability to their strengths in hosting / data? Such a model introduces a new, consistent network stack without the telcos’ tech debt burdens discussed here. The streamlined network model means the variant tree is millions of times simpler. And if the variant tree is that much simpler, so is the operations model and so is the OSS… with one distinct contradiction. It would need to scale to billions of customers rather than millions, and to trillions of events.

You might be wondering about all the enterprise OSS. Won’t they survive? Probably not. Comms networks are generally just an important means-to-an-end for enterprises. If the one global network provider were to service every organisation with local or global WANs, as well as all the hosting they would need, and hosted zero-touch network operations like Google is already pre-empting, would organisations have a need to build or own an on-premises OSS?

One ubiquitous global network, with a single pared back but hyperscaled OSS, most likely purpose-built with self-healing and/or AI as core constructs (not afterthoughts / retrofits like for existing OSS). How many OSS roles would survive that doomsday scenario?

Do you have an alternative OSS doomsday scenario that you’d like to share?

Hat tip again to Jay Fenton for pointing out what Starlink has been up to.

The OSS MoSCoW requirement prioritisation technique

Since the soccer World Cup is currently taking place in Russia, I thought I’d include reference to the MoSCoW technique in today’s blog. It could be used as part of your vendor selection processes for the purpose of OSS requirement prioritisation.

The term MoSCoW itself is an acronym derived from the first letter of each of four prioritization categories (Must have, Should have, Could have, and Won’t have), with the interstitial Os added to make the word pronounceable.”
Wikipedia.

It can be used to rank the importance an OSS operator gives to each of their requirements, such as the sample below:

Index | Description | MoSCoW
1 | Requirement #1 | Must have (mandatory)
2 | Requirement #2 | Should have (desired)
3 | Requirement #3 | Could have (optional)
4 | Requirement #4 | Won’t have (not required)

But that’s only part of the story in a vendor selection process – the operator’s wish-list. This needs to be cross-referenced with a vendor or solution integrator’s ability to deliver to those wishes. This is where the following compliance codes come in:

  • FC – Fully Compliant – This functionality is available in the baseline installation of the proposed OSS
  • NC – Non-compliant – This functionality is not able to be supported by the proposed OSS
  • PC – Partially Compliant – This functionality is partially supported by the proposed OSS (a vendor description of what is / isn’t supported is required)
  • WC – Will Comply – This functionality is not available in the baseline installation of the proposed software, but will be provided via customisation of the proposed OSS
  • CC – Can Comply – This functionality is possible, but not part of the offer and can be provided at additional cost

So a quick example might look like the following:

Index | Description | MoSCoW | Compliance | Comment
1 | Requirement #1 | Must have (mandatory) | FC |
2 | Requirement #2 | Should have (desired) | PC | Can only deliver the functionality of 2a, not 2b in this solution

Yellow columns are created by the operator / customer, blue cells are populated by the vendor / SI. I usually also add blue columns to indicate which product module delivers the compliance and room to pose questions / assumptions back to the customer.
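
If you want to turn the operator’s MoSCoW rankings and the vendor’s compliance codes into a rough comparative score across vendors, a sketch like the one below can help. The weightings are arbitrary examples rather than a recommended scheme.

```python
# Arbitrary example weightings: how much a requirement matters (MoSCoW) and
# how well a vendor meets it (compliance code). Tune to suit your evaluation.
moscow_weight = {"Must": 10, "Should": 5, "Could": 2, "Won't": 0}
compliance_score = {"FC": 1.0, "WC": 0.7, "CC": 0.5, "PC": 0.4, "NC": 0.0}

vendor_responses = [
    {"req": "Requirement #1", "moscow": "Must",   "compliance": "FC"},
    {"req": "Requirement #2", "moscow": "Should", "compliance": "PC"},
    {"req": "Requirement #3", "moscow": "Could",  "compliance": "NC"},
]

total = sum(moscow_weight[r["moscow"]] * compliance_score[r["compliance"]]
            for r in vendor_responses)
maximum = sum(moscow_weight[r["moscow"]] for r in vendor_responses)
print(f"weighted compliance: {total:.1f} / {maximum} ({100 * total / maximum:.0f}%)")
```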

BTW. I’ve only heard of the MoSCoW acronym recently but have been using a similar technique for years. My prioritisation approach is a little simpler. I just use Mandatory, Preferred, Optional.

The OSS dart-board analogy

The dartboard, by contrast, is not remotely logical, but is somehow brilliant. The 20 sector sits between the dismal scores of five and one. Most players aim for the triple-20, because that’s what professionals do. However, for all but the best darts players, this is a mistake. If you are not very good at darts, your best opening approach is not to aim at triple-20 at all. Instead, aim at the south-west quadrant of the board, towards 19 and 16. You won’t get 180 that way, but nor will you score three. It’s a common mistake in darts to assume you should simply aim for the highest possible score. You should also consider the consequences if you miss.”
Rory Sutherland
on Wired.

When aggressive corporate goals and metrics are combined with brilliant solution architects, we tend to aim for triple-20 with our OSS solutions don’t we? The problem is, when it comes to delivery, we don’t tend to have the laser-sharp precision of a professional darts player do we? No matter how experienced we are, there tends to be hidden surprises – some technical, some personal (or should I say inter-personal?), some contractual, etc – that deflect our aim.

The OSS dart-board analogy asks the question about whether we should set the lofty goals of a triple-20 [yellow circle below], with high risk of dismal results if we miss (think too about the OSS stretch-goal rule); or whether we’re better to target the 19/16 corner of the board [blue circle below] that has scaled back objectives, but a corresponding reduction in risk.

[Diagram: OSS dart-board analogy]

Roland Leners posed the following brilliant question, “What if we built OSS and IT systems around people’s willingness to change instead of against corporate goals and metrics? Would the corporation be worse off at the end?” in response to a recent post called, “Did we forget the OSS operating model?”

There are too many facets to count on Roland’s question but I suspect that in many cases the corporate goals / metrics are akin to the triple-20 focus, whilst the team’s willingness to change aligns to the 19/16 corner. And that is bound to reduce delivery risk.

I’d love to hear your thoughts!!