Google’s Circular Economy in OSS

OSS wear many hats and help many different functions within an organisation. One function that OSS assist might surprise some people – the CFO / Accounting function.

The traditional service provider business model tends to be CAPEX-heavy, with significant investment required on physical infrastructure. Since assets need to be depreciated and life-cycle managed, Accountants have an interest in the infrastructure that our OSS manage via Inventory Management (IM) tools.
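To make the accountants’ interest concrete, here’s a minimal sketch of the kind of “book value” / “useful life remaining” calculation that an IM tool’s asset records could feed. The straight-line method, field names and figures are illustrative assumptions only, not a reference to any particular product:

```python
from datetime import date

def book_value(cost, install_date, useful_life_years, as_at):
    """Straight-line depreciation: value declines linearly to zero
    over the asset's useful life."""
    age_years = (as_at - install_date).days / 365.25
    remaining = max(useful_life_years - age_years, 0.0)
    return cost * remaining / useful_life_years

# A hypothetical router commissioned in 2014 with a 7-year life, valued in 2018:
value = book_value(50_000, date(2014, 1, 1), 7, date(2018, 1, 1))
```

An inventory system that records install dates and equipment costs can derive this for every asset in the network, which is exactly why the CFO cares about data quality in our OSS.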

I’ve been lucky enough to work with many network operators and see vastly different asset management approaches used by CFOs. These strategies have ranged from fastidious replacement of equipment as soon as depreciation cycles have expired through to building networks using refurbished equipment that has already passed manufacturer End-of-Life dates. These strategies fundamentally affect the business models of these operators.

Given that telecommunications operator revenues are trending lower globally, I feel it’s incumbent on us to use our OSS to deliver positive outcomes for operators’ business models.

With this in mind, I found this article entitled, “Circular Economy at Work in Google Data Centers,” to be quite interesting. It cites, “Google’s circular approach to optimizing end of life of servers based on Total Cost of Ownership (TCO) principles have resulted in hundreds of millions per year in cost avoidance.”

Google Asset Lifecycle

Asset lifecycle management is not a typical focus area for OSS experts, but it’s an area where we can add significant value for our customers!

Some operators use dedicated asset management tools such as SAP. Others use OSS IM tools. Others reconcile between both. There’s no single right answer.

For a deeper dive into ideas where our OSS can help in asset lifecycle (which Google describes as its Circular Economy and seems to manage using its ReSOLVE tool), I really recommend reviewing the article link above.

If you need to develop such a tool using machine learning models, reach out to us and we’ll point you towards some tools equivalent to ReSOLVE to augment your OSS.

New OSS product – Restoration Manager

At Passionate About OSS, we’re lucky enough to count the utilities market as an important part of our client base. This probably puts us among a very small percentage of OSS exponents that work across both telco and utility OSS.

Utilities have a number of interesting and unique nuances compared with other OSS markets. Starting at the top, the network is core business for a telco, whereas the comms network only supports the core business of other utilities.

Despite their vastly different functions, the operational support tools at telcos and other utilities still share many similarities, including:

  • Network inventory is made up of nodes and arcs (nodes are routers vs pumps vs sub-stations; arcs are comms cables vs power cables vs pipes)
  • All are CAPEX-heavy industries, so asset management is important from a financial (ie depreciation and “useful life remaining” modelling) as well as physical perspective
  • Assets need to be systematically life-cycle managed (ie commissioned, repaired, replaced, modified, maintained, decommissioned, etc)
  • A field workforce needs to be coordinated to keep the network in a healthy operational state
  • The network either provides (or supports) essential services so rapid remediation of failed / degraded services is expected by customers
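The first similarity above (nodes and arcs) can be sketched as a tiny graph model that works equally for routers and fibre, sub-stations and power cables, or pumps and pipes. This is an illustrative abstraction only; real inventory schemas are far richer:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str  # e.g. "router", "pump", "sub-station"

@dataclass
class Arc:
    a_end: str
    z_end: str
    kind: str  # e.g. "fibre", "power cable", "pipe"

@dataclass
class Inventory:
    nodes: dict = field(default_factory=dict)
    arcs: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def connect(self, a, z, kind):
        self.arcs.append(Arc(a, z, kind))

    def neighbours(self, node_id):
        # Any node sharing an arc with node_id, regardless of direction
        return [arc.z_end if arc.a_end == node_id else arc.a_end
                for arc in self.arcs
                if node_id in (arc.a_end, arc.z_end)]

inv = Inventory()
inv.add_node(Node("R1", "router"))
inv.add_node(Node("R2", "router"))
inv.connect("R1", "R2", "fibre")
```

The node/arc abstraction is what lets topology traversals (eg impact analysis, route tracing) be reused across industries, even when the physical assets differ completely.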

Anyway, enough of the preamble. I find it interesting to observe the tools used by the different utilities to prompt alternate ways of thinking about our OSS.

Last week I observed a tool called a Restoration Manager. It is used widely in the power industry to handle fault restoration on comms networks. It has little direct equivalent in comms network management.

Some ticket managers allow task templates to be defined. Similarly, the Restoration Manager also retains restoration plans, which are sequences of responses, but it goes further by:

  • Coordinating implementation of restoration plans in real-time 
  • Looking ahead of each step in the restoration plan to determine whether it is still useful or potentially harmful
  • Providing an indicator of whether the current network state is suited to being handled by each of the stored restoration plans
  • Coordinating restoration of planned or unplanned outages and even degradation events
  • Facilitating the use of AI and past restorations to create an optimal restoration plan
  • Documenting a proposed plan of action/s that can be audited by internal groups (eg engineering, QC, etc) or external groups (eg regulators)
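As a hypothetical sketch (not the real product’s design), a restoration plan with the look-ahead behaviour above might be an ordered list of steps, each carrying a precondition that is re-checked against live network state immediately before execution. All names here are invented for illustration:

```python
def run_plan(steps, network_state):
    """Execute steps in order; abort as soon as a step's precondition
    no longer holds against the current network state (the look-ahead)."""
    log = []
    for step in steps:
        if not step["precondition"](network_state):
            log.append(("aborted", step["name"]))
            break
        step["action"](network_state)
        log.append(("done", step["name"]))
    return log

# Illustrative power-comms scenario: fail over from feeder A to feeder B
state = {"feeder_A": "failed", "feeder_B": "ok"}
plan = [
    {"name": "isolate feeder A",
     "precondition": lambda s: s["feeder_A"] == "failed",
     "action": lambda s: s.update(feeder_A="isolated")},
    {"name": "switch load to feeder B",
     "precondition": lambda s: s["feeder_B"] == "ok",
     "action": lambda s: s.update(load="feeder_B")},
]
log = run_plan(plan, state)
```

The same precondition functions could also be evaluated without running any actions, giving the “is this stored plan still applicable?” indicator mentioned above.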

The restoration plans could be made to tie in with DRP (Disaster Recovery Plans), CRB (Change Request Boards), outage window sequencing/management and even security incident response.

What are your thoughts? Would a Restoration Manager be useful for our OSS stack (ie solves an existing, unsolved problem) or do we already have suitable ways of solving / avoiding the outage problem?

Do I support the death penalty (of OSS RFPs)? Hmmm….

As per yesterday’s post, I’ll continue to reference a TM Forum report called, “Time to kill the RFP? Reinventing IT procurement for the 2020s” today. Mark Newman and the team have captured and discussed so many layers to the OSS/BSS procurement process.

There’s no doubt the current stereotypical RFP approach to procurement is broken. It needs to be done differently. That’s why we have been doing it differently with customers for years now (another hint regarding a project we’re getting excited to announce this Monday).

The TM Forum report is really powerful and well worth a read. There are a few additional (and somewhat random) thoughts that go through my head when considering the death of the RFP:

  1. The TM Forum report is primarily coming at the problem from the perspective of a carrier that is constantly steering the development of its own systems, as implied through this quote, “The fundamental problem with the RFP process is that in a fast-paced technology environment, where cloud and software are fast becoming preferred options, it is difficult for CSPs to describe in lengthy, written documents what they want and need. The processes are simply too complex and cumbersome to support modern, Agile methods of working.”
  2. That perspective is particularly applicable for some buyers, ones that have committed to having significant developer resources available to build exactly what they want. That could be in the form of in-house developers, contract developers, long-term panel arrangements with suppliers or similar
  3. Others, perhaps such as utilities, enterprise and some telcos want to focus on their core business and delegate OSS/BSS configuration and customisation to third-parties.
  4. Some of those rely on COTS (commercial off the shelf) software to leverage the benefits of innovation, cost and development time that have been spread across multiple customers. Their budgets simply don’t allow for custom-built solutions
  5. COTS, be it on-prem through to cloud service models, are almost never going to be a perfect fit for a buyer’s needs. They’re designed to generically suit many buyers, so a certain amount of bloat becomes part of the trade-off
  6. In recent weeks, I’ve seen two entirely in-house developed OSS/BSS. They fit their organisations like a glove and there’s almost no bloat at all. In fact it would be almost impossible for a COTS solution to replace what they’ve built. In both cases it’s taken a decade of ongoing development to get to that position. Unfortunately, most buyers don’t have that amount of time to get it right
  7. Commercial realities imply a pragmatic approach is taken to procurement – which product/s provide default capability that best aligns with the buyer’s most important objectives.
  8. RFPs often get bogged down at the far right-hand side of the long-tail of requirements (where impact tends to be negligible), or in trying to completely re-sculpt the solution to be the perfect fit (that it’s unlikely to ever be)
  9. In my experience at least, the best-fit (not perfect fit) solution, or very short list of solutions, usually becomes apparent fairly quickly [we’ll share more about how we do that tomorrow]. It’s then just a case of testing objectives, assumptions and gaps (eg via a proof-of-concept) and getting to a mutually beneficial commercial agreement
  10. As one respondent in the TM Forum report put it, “The RFP glorifies the process, not the outcome.” A healthy dose of outcome-driven pragmatism helps to reduce glorification of the RFP process
  11. Also in my experience at least, scope of works quotes from vendors (which RFPs tend to lead to) tend to be written in a waterfall style that don’t fit into Agile frameworks very effectively. That can be partially overcome by slicing and dicing the SoW in ways that are more conducive to Agile delivery
  12. With so much fragmentation in the OSS/BSS market already (there are over 400 in our vendor directory), that means the talent pool of creators is thinly spread. Many of those 400 have duplicated functionality, which isn’t great for the industry’s overall progress. Custom development for each different buyer spreads the talent pool even further… unless buyers can get economies of development scale through shared platforms like ONAP

In summary, I love the concept of avoiding massive procurement events. I still can’t help but think the RFP still fits in there somewhere for many buyers… as long as we ensure we glorify the outcomes and de-emphasise the process. It’s just that we use RFPs like a primitive instrument and inflict blunt-force trauma, rather than using surgical precision.

What if most OSS/BSS are overkill? Planning a simpler version

You may recall a recent article that provided a discussion around the demarcation between OSS and BSS, which included the following graph:

Note that this mapping is just my demarc interpretation, but isn’t the definitive guide. It’s definitely open to differing opinions (ie religious wars).

Many of you will be familiar with the framework that the mapping is overlaid onto – TM Forum’s TAM (The Application Map). Version R17.5.1 in this case. It is as close as we get to a standard mapping of OSS/BSS functionality modules. I find it to be a really useful guide, so today’s article is going to call on the TAM again.

As you would’ve noticed in the diagram above, there are many, many modules that make up the complete OSS/BSS estate. And you should note that the diagram above only includes Level 2 mapping. The TAM recommendation gets a lot more granular than this. This level of granularity can be really important for large, complex telcos.

For the OSS/BSS that support smaller telcos, network providers or utilities, this might be overkill. Similarly, there are OSS/BSS vendors that want to cover all or large parts of the entire estate for these types of customers. But as you’d expect, they don’t want to provide the same depth of functionality coverage that the big telcos might need.

As such, I thought I’d provide the cut-down TAM mapping below for those who want a less complex OSS/BSS suite.

It’s a really subjective mapping because each telco, provider or vendor will have their own perspective on mandatory features or modules. Hopefully it provides a useful starting point for planning a low complexity OSS/BSS.

Then what high-level functionality goes into these building blocks? That’s possibly even more subjective, but here are some hints:

The use of drones by OSS

The last few days have been all about organisational structuring to support OSS and digital transformations. Today we take a different tack – a more technical diversion – onto how drones might be relevant to the field of OSS.

A friend recently asked for help to look into the use of drones in his archaeological business. This got me to thinking about how they might apply in cross-over with OSS.

I know they’re already used to perform really accurate 3D cable route / corridor surveying. Much cooler than the surveyor diagrams on A1 sheets of old. Apparently experts in the field can even tell if there’s rock in the surveyed area by looking at the vegetation patterns, heat and LIDAR scans.

But my main area of interest is in the physical inventory. With accurate geo-tagging available on drones and the ability to GPS correct the data, it seems like a really useful technique for getting outside plant (OSP) data into OSS inventory systems.

Or geo-correcting data for brownfields assets (it’s not uncommon for cable routes to be drawn using road centre-lines when the actual easement to the side of the road isn’t known – ie it’s offset from the real cable route). I expect that the high resolution of drone imagery will allow for identification of poles, roads and pits (if not overgrown). Perhaps even aerial lead-in cables and attachment points on buildings?
Drone-based cable corridor surveys
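A hypothetical sketch of that geo-correction: shifting a cable route that was digitised along a road centre-line by the delta between one drone-surveyed control point and its as-drawn position. A real correction would use projected coordinates and per-segment offsets rather than a single global lat/long shift, so treat this as illustrative only:

```python
def correct_route(route, surveyed, drawn):
    """Apply the (lat, lon) delta between a surveyed control point and
    its as-drawn position to every vertex of the route."""
    dlat = surveyed[0] - drawn[0]
    dlon = surveyed[1] - drawn[1]
    return [(lat + dlat, lon + dlon) for lat, lon in route]

# Route drawn on the road centre-line; drone survey shows the pit is
# actually offset into the easement
route = [(-33.8600, 151.2000), (-33.8610, 151.2010)]
fixed = correct_route(route,
                      surveyed=(-33.8601, 151.2002),
                      drawn=(-33.8600, 151.2000))
```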
Have you heard of drone-based OSP asset identification and mapping data being fed into inventory systems yet? I haven’t, but it seems like the logical next step. Do you know anyone who has started to dabble in this type of work? If you do, please send me a note as I’d love to be introduced.

Once loaded into the inventory system, with 3D geo-location, we then have the ability to visualise the OSP data with augmented reality solutions.

And other applications for drone technology?

OSS come in all shapes and sizes

As the OSS vendors / suppliers page here on PAOSS shows, there are a LOT of different OSS options, making it an extremely fragmented market. But there’s something of a reason for that fragmentation – customer requirements for OSS come in all shapes and sizes. Here are four of the major categories that I’ve been lucky enough to work on.

Tier 1 telcos – the OSS of these organisations tend to be best classified as having to cope with scale. Scale comes in multiple dimensions. The number of network devices under management tends to be large, as does the number of device types. The number of subscribers and customer services tends to be large, not to mention the large amounts of change occurring on a daily basis. The number of process variants and system integrations also tends to be large. And being at scale means that they’re more likely to be able to justify the cost of customisations and automations – either to off-the-shelf products or via purpose-built tools. Budgets, both CAPEX and OPEX, also tend to be large. Except where niche slices of the total OSS suite are being delivered, the vendors that service this market are also large in terms of revenues, but also in their number of services staff available to service the customer’s unique needs. In the case of the telco, the business (and revenue model) is built around the network so it gets the clear attention of the organisation’s executives.

Enterprise customers – these OSS tend to be at the other end of the spectrum, even when the enterprise is large (eg banks). Networks tend to be more homogeneous, being IT/IP-centric. Services tend to be less customer-specific (ie for journaling costs at a business unit level rather than individual subscribers) but follow ITSM process models, so the service management daily delta is not at the same scale as the Tier 1 telco. For enterprise customers, the network is rarely core business, even if it is mission-critical to the business. As such, attention and budgets tend to be much smaller. In turn, this means that the smaller, self-service or open-source OSS products / suppliers tend to be present in this segment.

Then there are two categories of organisation that fit between the two previous ends of the spectrum:

Tier 2/3 telcos, MVNOs and data centres – Similar to the Tier-1 telco, just not at the same scale, which has implications on the nature of their OSS. They generally need all the same types of OSS tools as the T1s, just not catering for the same number of variants. Due to cost constraints, there may be one or a few significant OSS building blocks such as inventory, assurance or orchestration, but often there will also be enterprise-grade and/or open-source products in their OSS stack. CAPEX and OPEX budgets are smaller, so clever jack-of-all-trades OSS experts are often on the operational teams delivering sophisticated solutions on shoe-string budgets. Some of the best OSS experts I’ve come across can trace their roots back to these origins.

Utilities – the OSS of these organisations are a fascinating mix of the first two categories above because enterprise-grade OSS often aren’t really fit-for-purpose and carrier-grade OSS doesn’t quite suit either. Except in the case of multi-utilities (eg power + telco), these organisations tend to have very little service management change, mainly because they tend to have few to no external customers. This makes them similar to enterprise OSS. But like telcos, they often have networks that are more varied than your typical IT/IP-centric networks under management in enterprise-land. They often have less common network topologies and protocols, including older and even proprietary models that enterprise-grade OSS rarely support without expensive mediation. Just like the enterprise, the telco network (and hence the OSS) of a utility is not core business and can’t be justified through driving incremental revenues. However, it is generally mission-critical to the core business (eg tele-protection circuits are in place to ensure resilience of the electricity supply across the power network). As such, network health / reliability and asset management tend to be the main focus of these OSS. And whereas telcos can delegate some responsibility for network security to their customers (using the dumb-pipe excuse), utilities bear full responsibility for the security of their telco networks and the critical infrastructure that these networks and OSS tools support.

These are only broadly general categories and there are more than 50 shades of grey in between. Are there any other broad categories that you feel I’m missing?

Automated Network Operations as a Service (ANOaaS)

Google has started applying its artificial intelligence (AI) expertise to network operations and expects to make its tools available to companies building virtual networks on its global cloud platform.
That could be a troubling sign for network technology vendors such as Ericsson AB (Nasdaq: ERIC), Huawei Technologies Co. Ltd. and Nokia Corp. (NYSE: NOK), which now see AI in the network environment as a potential differentiator and growth opportunity…
Google already uses software-defined network (SDN) technology as the bedrock of this infrastructure and last week revealed details of an in-development “Google Assistant for Networking” tool, designed to further minimize human intervention in network processes.
That tool would feature various data models to handle tasks related to network topology, configuration, telemetry and policy.
Iain Morris
here on Light Reading.

This is an interesting, but predictable, turn of events isn’t it? If (when?) automated network operations as a service (ANOaaS) is perfected, it has the ability to significantly change the OSS space doesn’t it?

Let’s have a look at this from a few scenarios (and I’m considering ANOaaS from the perspective of any of the massive cloud providers who are also already developing significant AI/ML resource pools, not just Google).

Large Enterprise, Utilities, etc with small networks (by comparison to telco networks), where the network and network operations are simply a cost of doing business rather than core business. Virtual networks and ANOaaS seem like an attractive model for these types of customer (ignoring data sovereignty concerns and the myriad other local contexts for now). Outsourcing this responsibility significantly reduces CAPEX and head-count to run what’s effectively non-core business. This appears to represent a big disruptive risk for the many OSS vendors who service the Enterprise / Utilities market (eg Solarwinds, CA, etc).

T2/3 Telcos with relatively small networks that tend to run lean operations. In this scenario, the network is core business but having a team of ML/AI experts is hard to justify. Automations are much easier to build for homogeneous (consistent) infrastructure platforms (like those of the cloud providers) than for those carrying different technologies (like T2/T3 telcos perhaps?). Combine complexity, lack of scale and lack of large ML/AI resource pools and it becomes hard for T2/T3 telcos to deliver cost-effective ANOaaS either internally or externally to their customer base. Perhaps outsourcing the network (ie VNO) and ANOaaS allows these operators to focus more on sales?

T1 Telcos have large networks, heterogeneous platforms and large workforces where the network is core business. The question becomes whether they can build network cloud at the scale and price-point of Amazon, Microsoft, Google, etc. This is partly dependent upon internal processes, but also on what vendors like Ericsson, Huawei and Nokia can deliver, as quoted as a risk above.

As you probably noticed, I just made up ANOaaS. Does a term already exist for this? How do you think it’s going to change the OSS and telco markets?

Telcos still innovate… but more by proxy now

CSPs globally are trying to be innovative and, having been heavily involved in tech since their earliest days, perceive themselves to be innovative to their core (yes, bad pun).

There’s no doubt that there is a lot of innovation happening in large CSPs, but I wonder how much of it is really attributable to the CSP? Much of the change swirling through our organisations (or customer organisations) has been designed elsewhere. SDx (Software Defined Everything) was invented elsewhere (as were IP networks prior), IoT, CI / CD, Agile, Design Thinking, User Experience (UI/UX), AR/VR, AI/ML, open-source and all the other modern buzz-words.

Telcos do come from innovative roots when you look back at what Bell Labs, BT Research, Telstra Research Labs, and most other major telcos were doing globally in the early-to-mid 1900s. They not only defined telco but also delivered ground-breaking primary research into transistors, microwaves, satellites, lasers, communications theory, etc. Since this golden era, they’ve increasingly delegated the innovation process to suppliers. The same is true in our OSS. OSS were originally invented by telcos, but suppliers largely frame the story now.

In many recent blogs, including yesterday’s on network vs digital variants, we’ve discussed ways to invent new, more profitable revenue streams… revenue streams that will in turn fund exciting new OSS projects.

But I also wonder whether the reverse is true? If a telco was to discard the addiction to innovation and take on the utility mindset – providing ubiquitous, bullet-proof, regulated supply of long-haul and wide-spread access networks – would that be a better fit for these large, highly regulated enterprises?

They tend to have a natural moat in their networks – the cables and spectrum that require too much capital for others to easily replicate and compete with, but whose physical assets have long useful lives. If they remove the need to constantly invest in change then their cost structures will arguably reduce more quickly than their revenues are dipping (yes, that means profitability increases). That gives the opportunity to focus on doing things simpler and better rather than doing things newer.

Have many CSPs long since lost the innovation war? Is their innovation primarily by proxy (through partners and suppliers), acquired rather than earned? Many factors work against them ever recapturing the innovative lead that they once held. So why not just admit it and let the innovators innovate, incubating partnerships rather than competing?
The answer is because that’s anathema to these organisations (and the people who drive them), which still perceive themselves to be innovators, perhaps akin to the aging Lothario with the comb-over and expanding waistline dreaming of his younger days.

Would the OSS market look much different if CSPs were to all “age gracefully” with respect to innovation and take a different approach to the inventiveness that still surrounds them?

The three big lies of the telecoms industry

What are the three big lies of the telecoms industry?
The first lie is that data monetisation is coming. Well we are still waiting.
The second is that we have billions of customers. Well are they really our customers or are they people who just tolerate us and are really customers of someone else?
The third lie is that we are utility so we have stable returns, and because we have spectrum allocations that is our safety net. EBITDA multiples for the telco industry were at around 6, while utilities were at multiples of 12 to 16, so it could not be said that telcos – which often had up to five competitors in markets – were utilities
Alexey Reznikovich

Three valid points. OSS / BSS could be highly influential in whether these are actually lies or not.

If we think of data as carriage, that’s clearly not monetising for most telcos currently. Well, it does bring in revenues but at a diminishing rate per bit. If we think of data in terms of content (eg voice, video, text, etc), then there are some monetisation wins (eg Game of Thrones) but more content is coming online so there is more supply, making the profitability tail longer. If we talk about data as insight (or supporting the generation of insights if selling analytics or API offerings) then this will never go out of fashion (although with more “insights” being generated, the bar will be raised on what is truly insightful).

Alexey’s note here that our customers just tolerate us and are really customers of someone else possibly has some merit. It’s the reason why the OTT play has been so successful. I still have the sense that there is an implicit trust in our service providers, due to the long subscription/billing history, media / shareholder attention on them and regulation that governments place on them (even if not an implicit trust in their customer service). Not all OTT players have the same track record of regulatory governance. Some OTT providers are invisible, hidden somewhere out on the cloud. I feel that this could represent a strong opportunity in a world where crypto-currencies carry vastly more value than they do today.

This comes down to the business model of any given Telco and / or the regulatory frameworks they operate within. They could opt to go down the path of ubiquitous data (like ubiquitous voice of the distant past), or like most, they can go down the path of being a digital service provider. To be honest, there are quite a few incumbent providers that are probably more closely aligned to utility than DSP in the collective way of thinking within their workforce. That often transfers to their mindset in building OSS.

Treating telco like electricity

Whenever I look at telco provisioning projects, I can’t help but think about the complexity involved. Processes are lengthy, with multiple manual steps, mappings, data gathering, sequencing activities, approvals, settings and options. It’s no wonder that OSS evolutions and transformations are a nightmare for operators from the perspectives of effort, risk, cost, etc.

If we look at residential customer services, the 80% in Pareto’s 80:20 rule just want a connection to the Internet at the fastest speeds possible and possibly some additional over-the-top services like email. The base service sounds like electricity supply – a standardised service with almost no service choices and a consumption-based pricing model (perhaps with a fixed annual service and/or connection fee).
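In code, that electricity-style tariff really is trivial compared with a typical telco rating engine – a fixed connection fee plus a consumption rate, with no per-service options. The figures here are purely illustrative:

```python
def monthly_bill(gb_used, fixed_fee=20.0, rate_per_gb=0.05):
    """Electricity-style billing: fixed service fee plus consumption."""
    return fixed_fee + gb_used * rate_per_gb

bill = monthly_bill(400)  # fixed fee plus 400 GB of usage
```

Compare the handful of lines above with the product catalogues, discount rules and plan variants that a conventional BSS has to maintain, and the OSS/BSS cost argument for the standardised service becomes obvious.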

I can understand the thought processes. 1) From the product-owner perspective, a standardised service tends to commoditise. 2) From a network-operator perspective, the configurability of current networks make service complexity a necessity.

What if we were to counter those arguments in an effort to get to the electricity model? 1) Profit is the difference between income and cost. A commoditised service generates lower income, but cost and risk (particularly within OSS) reduce massively if the proposed model could be implemented. 2) If the network can’t be simplified because of the vendor offerings currently on the market then virtualised networks represent an opportunity to change this. To change entire protocols even. Even still, most network complexity is introduced because of the long-tail of “what-if” scenarios – “what if” a customer asks for feature X? But if feature X is introduced, does the revenue generated outweigh the total lifecycle cost of introducing it? Instead, can network features be pared back?

I’d love to hear your thoughts about why there is an opportunity for the SouthWest Airlines version of a telco, or why I’m in dreamland and it can never happen.

Why is mass customisation so important for the future of OSS?

McDonald’s hit a peak moment of productivity by getting to a mythical scale, with a limited menu and little in the way of customization. They could deliver a burger for a fraction of what it might take a diner to do it on demand.
McDonald’s now challenges the idea that custom has to cost more, because they’ve invested in mass customization.
Things that are made on demand by algorithmic systems and robots cost more to set up, but once they do, the magic is that the incremental cost of one more unit is really low. If you’re organized to be in the mass customization business, then the wind of custom everything is at your back.
The future clearly belongs to these mass customization opportunities, situations where there is little cost associated with stop and start, little risk of not meeting expectations, where a robot and software are happily shifting gears all day long
Seth Godin
in “On demand vs. in stock”

We’ve all experienced the modern phenomenon of “the market of one.” We all want a solution to our own specific needs, whilst wanting to pay for it at an economy of scale. For example, to continue the burger theme, I rarely order a burger without having to request a change from the menu item (they all seem to put onions and tomatoes on, which I don’t like).

One of the challenges of the OSS market segments I tend to do most work in (the tier-one telcos and utilities) is that they’ve always needed a market of one approach to fit their needs (ie heavy customisation, with few projects being even similar to any previous ones). This approach comes with an associated cost of customisation (during the commissioning and forever after) as well as the challenge in finding the right people to drive these customisations (yes, you may’ve noticed a shortage too!).

If we can overcome this challenge with a model of repeatability overlaid onto customised requirements (ie mass customisation) then we’re going to reduce costs, reduce risks, reduce our reliance on a limited supply of resources and improve quality / reliability.

But OSS is a bit more complex than the burger business (I imagine, having never learnt much about the science of making and delivering burgers to order). So where do we start on our repeatability mantra? Here are a few ideas but I’m sure you can think of many more:

  1. Systematising the OSS product ordering process, whether you make the process self-serve (ie customers can log on to build a shopping cart) or more likely, you streamline the order collection for your sales agents to build a shopping cart with customers
  2. Providing decision support for the install process, guiding the person doing the install in real-time rather than giving them an admin guide. The process of setting up databases, high-availability, applications, schema, etc will invariably be a little different for each customer and can often take days for top-end installs
  3. Reducing core functionality down to the features that virtually every customer will use, working hard to make those features highly modularised. They become the building blocks that customisations can be built around
  4. Building a platform that is designed to easily plug in additional functionality to create bespoke solutions. This specifically includes clever user experience design to help users find the right plug-ins for their requirements rather than confusing them with a vast array of choice
  5. Wherever possible, give the flexibility in data rather than in applications / code
  6. Modularisation of products and processes as well as functionality
  7. Build models of intent that are abstracted from the underlying technology layers
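To make idea 5 above concrete, here's a minimal sketch of pushing customer variation into data rather than code: every customer runs the same engine, and only their configuration record differs. All names and the provisioning flow are invented for illustration, not from any real OSS product.

```python
# Idea 5 sketch: customer variation lives in data (which could sit in a
# database or YAML file), not in per-customer code branches.

DEFAULT_STEPS = ["validate_order", "allocate_resources", "activate", "notify"]

CUSTOMER_CONFIG = {
    "acme_telco": {"steps": DEFAULT_STEPS, "notify_channel": "email"},
    "utility_co": {
        # Same engine, different data: an extra approval step, no code change.
        "steps": ["validate_order", "manual_approval", "allocate_resources",
                  "activate", "notify"],
        "notify_channel": "sms",
    },
}

def run_provisioning(customer):
    """Execute the step list defined in data for this customer."""
    config = CUSTOMER_CONFIG.get(
        customer, {"steps": DEFAULT_STEPS, "notify_channel": "email"})
    executed = []
    for step in config["steps"]:
        executed.append(f"{step} (via {config['notify_channel']})"
                        if step == "notify" else step)
    return executed

print(run_provisioning("utility_co"))
```

The point of the sketch is that adding "utility_co" required a new config record only; the one main-line codebase serves both customers.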

The transient demands facilitated by a future of virtualised networks make this modularity and repeatability more important than it's ever been.

Where does trial and error belong in OSS?

I hold a somewhat philosophical view of where OSS (and IT in general) fits within its overall timeline. It’s all pretty nascent in the grand scheme of things.

Whilst communications technology is the common thread, I’ve worked in many industries including construction, mining, engineering, government, utilities, emergency services, healthcare, farming and more. Most of these industries have been around for far longer than OSS. As the outsider looking in on those industries, it seems that the basic techniques they use have existed for decades and have been refined to the point of relative maturity and consistency*. Think of the technique for preparing a suture on a wound, of building a timber frame for a house, of milking a cow, etc. There’s not much error because the trialling has already been refined out of the process.

But OSS is much younger. The technologies they’re built upon are still in a state of massive upheaval. We aren’t even close to reaching an asymptote of refinement yet. Within this maelstrom of change, there is still a lot of trialling underway. And when there’s trial, there’s bound to be errors. They happen whether you like it or not. In the case of OSS, a LOT of errors. Clearly, errors are not in short supply.

Trial, on the other hand, can be far more scarce. The fear of making mistakes on these large, complex projects often holds us back from performing the trials that could help us contribute to the Global OSS Body of Knowledge (GOSSBOK). We mistakenly believe that to avoid error, we have to avoid trial. We actually need more trial. **

Having said that, these projects consume too many resources to be an out-of-control learning experiment. The key call-out here is that our OSS already provide the tools to conduct many controlled micro-experiments. Our databases of large, relational information are perfect for conducting rapid prototyping or rapid insight checking.
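As an illustration of what such a controlled micro-experiment might look like, here's a hedged sketch (with invented records and field names) that tests a hypothesis like "vendor A's equipment raises proportionally more critical alarms" against data the OSS already holds, in a few lines rather than a full study:

```python
# Micro-experiment sketch: quick hypothesis check over alarm records.
# The records and field names below are purely illustrative.
from collections import Counter

alarms = [
    {"vendor": "VendorA", "severity": "critical"},
    {"vendor": "VendorA", "severity": "minor"},
    {"vendor": "VendorB", "severity": "critical"},
    {"vendor": "VendorA", "severity": "critical"},
    {"vendor": "VendorB", "severity": "minor"},
]

def critical_rate_by_vendor(records):
    """Fraction of each vendor's alarms that are critical."""
    totals, criticals = Counter(), Counter()
    for rec in records:
        totals[rec["vendor"]] += 1
        if rec["severity"] == "critical":
            criticals[rec["vendor"]] += 1
    return {vendor: criticals[vendor] / totals[vendor] for vendor in totals}

print(critical_rate_by_vendor(alarms))
```

If the quick check suggests an interesting signal, it can justify the effort of a more rigorous study; if not, the experiment cost minutes, not weeks.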

* Note that I’m not trying to denigrate the innovation occurring in these industries, as I’m sure they are all highly innovative.
** I’ve shamelessly borrowed from the words and concepts in a Seth Godin blog

Have you noticed the different races being run in OSS?

Yesterday’s blog discussed innovation at the speed of data being even faster than innovation at the speed of software. But not all aspects of OSS need to evolve at the same speed.

In the Olympics, sprinters need fast-twitch muscles and training honed for speed, whilst marathon runners need slow-twitch muscles and appropriate training for endurance. The same appears to be true in OSS, with the two business model extremes – OTT / DSP (Over-the-Top / Digital Service Provider) versus REIT / TaaU (Telco as a Utility) – and anywhere in between (or a combination of the two extremes – the decathlete OSS??).

The OTT / DSP model requires transient networks and services, with virtualised infrastructure being spun up and torn down to cope with demand. They’re provisioned for burst capacity and a customer expectation of speedy outcomes. In this model there’s arguably just as much happening within the data centre* as there is out in the field, if not more. This model requires the sprinter OSS and a corresponding mindset of innovation (fast-twitch innovation).

Conversely, the REIT / TaaU model doesn’t change as much, although there is always maintenance and build-out going on. Customers know the physical nature of this network build (ie planning, approvals, truck-rolls, etc) is more methodical and time-consuming. At this juncture in history, not much is changing in the physical network, as we’re still using optical fibres, copper, radio and coax (not taking into account changes in the active equipment that plugs into these networks, like G.fast, or topology innovations like FTTdp). There are improvements to be made in the user interfaces of the OSS behind this type of business model – for designers, the field workforce, a contractor-based workforce, etc – but generally the marathoner OSS needs more of an efficiency-improvement mindset (slow-twitch innovation). This is a model that’s built around physical assets / inventory.

* as an aside, you may have been noticing that the traditional CSPs have been increasingly outsourcing their data centre capacity requirements, which appears to be a clever ploy on so many levels. Since the assets being used are often not directly managed, the concept of owning and managing the assets / inventory becomes more abstracted, meaning the OTT / DSP model of OSS becomes less dependent on inventory and more reliant on its ability to orchestrate services through any number of cloud providers.

48% drop in store visits in three years

“There were 34 billion visits to US stores in 2010. By 2013, that number had plummeted 48% to 17.6 billion, according to Elite Wealth Management. As consumers make more of their purchases online, the challenge of engaging consumers in store is accelerating the rise of ‘experiential shopping’.”
David Kelnar
in a fascinating trend analysis on Medium.

Are you surprised by the headline percentage? Clearly online purchases are rising, but 48% is a massive drop in physical store visits, with massive implications if it continues apace.

It talks to the efficiencies of digitisation and the changes in business models that go with it. Since the digitisation of business relies on communications technologies, it also highlights the increased dependence of modern businesses on their networks as an essential delivery channel.

The modern service provider has noticed this shift and is transitioning from a communications service provider (CSP – the provider of telephony and data services that act as business enablers) to a digital service provider (DSP – the provider of digital services that enable digital business customers). The shift from CSP to DSP is causing a shift in which assets are most important to the modern service provider (as dictated by what is most important to the customer).

Phone attendants are a more expensive medium for a business to offer because they require people to be available to take calls. Digital channels like digital chat will become progressively cheaper as automated chat bot technologies get better at handling customer requests.

The OSS of today service the legacy business model of CSPs. They’re evolving to meet the infrastructure needs of the DSP model through managing and maintaining virtualised network assets and associated hyperscaling technologies. However, are you seeing a corresponding evolution to handle the other side of the DSP model, the apps and content that their customers see as the true enablers of their digital businesses? The DSP needs to find solutions that help their customers to thrive as digital businesses and those solutions will need operational support tools.

This could still be classed as a niche of operational support systems, but not as we know traditional OSS. It’s the managing of software and contracts and content as services rather than cables and devices and channels and circuits. It’s a more IT-style of service operation than a traditional telco, so can we expect more IT-style thinking to pervade the OSS of the DSPs of the future?

Open source OSS

“Last week, two new open source groups focusing on management and orchestration (MANO) of network functions virtualization (NFV) announced their existence: the Open Source MANO (OSM) group hosted by ETSI, and Open-O hosted by the Linux Foundation.
At the press conference announcing Open-O, Yang Zhiqiang, deputy general manager of the China Mobile Research Institute, said the operation support system (OSS) will lead to open source software (OSS).”
Linda Hardesty

Open source software (the other OSS) conjures up vastly different views in our industry, doesn’t it?

Some believe that open source will take over OSS. Others believe that their mission-critical networks can’t possibly have open source tools running them. There is a perception that the risk, especially security risk, is too high.

My perspective lies somewhere in the middle.

With so much brainpower being directed to open-source collaboration projects like OpenStack it is inevitable that many of the advanced, mission-critical networks of the future will have open-source in their toolboxes. Even the ever-cautious utilities companies are likely to start considering open-source in their OSS if there is the right support wrapper around it.

Not quite so inevitable is the replacement of the entire OSS stack with open-source. Whilst there are open-source projects in ticketing, workforce management, GIS, inventory management, etc the challenge will be building enough sophistication to usurp the customised solutions that customers already have in place.

The interesting one will be the VNFs (Virtual Network Functions) that sit on virtualised networks to deliver switching, routing, network security, etc. Will open-source VNFs ever go mainstream? Will open-source VNFs reach the level of capability of proprietary vendors? The vendors have a big head-start due to having an existing code-base that was tied to their proprietary hardware.

I think that it’s also inevitable that open source VNFs will find their way into mission-critical operational networks of utilities, emergency services, etc eventually. Will they also be managed by open-source OSS? Yes…with the right support models.

Managing property with OSS?

There’s a slight problem with being passionate about OSS – you see everything in relation to OSS problems, solutions, analogies, etc.

I was talking recently with Simon, a great friend of mine, about a new role that he’s taking on. He will be responsible for technology in the facilities used by the large bank that he works for. Those facilities include branches, offices, data centres and more. The conversation started out on the challenges facing facilities managers, including energy efficiency, occupation rates, meeting room utilisation rates, cost per desk, workforce efficiency, utility allocation / billing and many other KPIs.

The tech he has been considering in this space was wide and varied but primarily came down to additional types of sensors that will ultimately reduce costs for his employer. Many of these sensors sound very cool (no pun intended re. HVAC sensors). As you can imagine, the executives at the bank don’t fund cool, they fund cost-out projects.

The same is true for OSS but that’s not the overlap I was thinking about on this occasion. It was how sensor networks in buildings collect vast amounts of data, aggregate it, process it, analyse it against a particular metric or theme and then provide insights based on that benchmark. We then got onto the topic of circulating (another HVAC pun) the benchmark results for the purpose of gamification and competition between different parts of the organisation.
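The aggregate-benchmark-rank pattern described above can be sketched in a few lines. The zones, readings and metric below are purely illustrative:

```python
# Benchmark-and-rank sketch: aggregate raw sensor readings to a per-zone
# metric, then rank zones so results can be circulated for gamification.
# (zone, energy-consumption-in-kWh) samples — invented data.
readings = [
    ("floor_1", 12.0), ("floor_1", 14.5), ("floor_1", 13.1),
    ("floor_2", 9.2),  ("floor_2", 8.8),
    ("floor_3", 17.4), ("floor_3", 16.9),
]

def rank_by_average(samples):
    """Return (zone, average) pairs sorted best-first (lowest consumption)."""
    sums, counts = {}, {}
    for zone, kwh in samples:
        sums[zone] = sums.get(zone, 0.0) + kwh
        counts[zone] = counts.get(zone, 0) + 1
    averages = {zone: sums[zone] / counts[zone] for zone in sums}
    # Lowest average consumption tops the leaderboard.
    return sorted(averages.items(), key=lambda item: item[1])

leaderboard = rank_by_average(readings)
print(leaderboard)
```

Swap "energy per floor" for "alarms per domain" or "truck-rolls per region" and the same pattern applies directly to OSS data.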

You can see why I consider existing OSS capabilities to be well placed for servicing the IoT space, can’t you?

Then we got onto the topic of blockchain and smart contracts so my OSS-coloured glasses kicked in again but that’s a story for another day.

PS. Yes, in the pre-IoT days, we managed buildings through software like BMS (Building Management Systems), PAGA (Public Address General Alarm), physical security (eg ACS – Access Control Systems) and other tools, not to mention environmental monitoring, but the IoT buzzword is taking it to another level.


Another 10 ideas

In yesterday’s post, “Just 10 ideas”, I talked about James Altucher’s “Idea Machine” concept of coming up with 10 ideas every day, regardless of whether they’re good or not. I took a slightly different twist on the concept and posed a series of 10 questions, each of which will probably have at least 10 idea responses. That’s 100 OSS ideas.

Today’s post lists another 10 questions to jump-start your (and my) idea machine, as follows:

  1. What are the 10 reasons why the term OSS will be irrelevant / redundant within 5 years?
  2. What are the 10 reasons why OSS will fundamentally change the world of communications and digital industry within the next 5 years?
  3. In what 10 ways can OSS better utilise spatially-relevant data?
  4. What 10 unexpected data sources could provide fundamentally different insight generation from your existing OSS?
  5. Communications and Digital Service Provider (CSP / DSP) business models are evolving almost faster than we can keep up. In what 10 ways can you foresee OSS facilitating these changes?
  6. In what 10 ways can you foresee OSS hindering these changes?
  7. What are the 10 most frequent reasons why customers contact CSP / DSP call centres? Hence, what are the 10 most important functions / insights / capabilities that front-line staff (eg call centre operators) need at their fingertips from OSS to be able to respond to customer requests?
  8. What 10 ways can you interact with your OSS that don’t include keyboard, mouse or touch-pad?
  9. What are the 10 most ridiculous things you’ve seen whilst working on OSS projects? But more importantly, what did you learn from them?
  10. What should be the first 10 events scheduled at an OSS Olympics?

Tomorrow’s blog will have a closer look at the last one – OSS Olympics.

Just 10 ideas

“You can’t trust the old style of thinking anymore. You have to come up with a new way of thinking. A new way of having ideas. A new way of interacting with the outside universe.”
James Altucher.

I’m currently reading James Altucher’s latest book. It has nothing to do with OSS but it has a myriad of ideas to ponder for OSS exponents.

He speaks about ideas being currency in the modern world and hence the need to be an idea machine.

I love the concept. It’s part of the reason why I founded PAOSS – to force me to generate more ideas about OSS and share them with the world (or at least the sub-set of the global population that reads PAOSS).

There’s just one problem. James suggests coming up with 10 ideas every day. Sounds like a challenge worth taking on though.

I’d love to get your assistance on this one, to help provide a little push to get the PAOSS idea machine moving faster.

  1. What are the 10 biggest problems you face or our industry faces?
  2. What are the 10 things that you’d like to get better at?
  3. Our thinking is often constrained by the complexity of the challenges, so which 10 complexities would you like to snap your fingers and remove?
  4. Which 10 nascent technologies will impact OSS in the future?
  5. Which 10 technologies will be dead in 5 years or less?
  6. Which 10 technologies do we need that haven’t been invented yet?
  7. Name 10 ways in which machine learning will impact our industry
  8. Which 10 ways can we use to overcome the OSS skills shortage?
  9. Which 10 laws of nature don’t apply to OSS?
  10. Which 10 analogies from other industries can you apply to OSS?

Perhaps you’d like to throw some other lists of 10 at the PAOSS community and me to philosophise over?

Hmmm. I already have another 10 in a list so let’s continue with this tomorrow.

Last mover monopolies

“The PayPal network, as it’s been called, is a set of friendships built over the course of a decade. It has become a sort of franchise. But this isn’t unique; that kind of dynamic arguably characterizes all great tech companies, i.e. last mover monopolies. Last movers build non-commoditized businesses. They are relationship-driven. They create value. They last. And they make money.”
Peter Thiel
(actually a notes essay from Peter Thiel’s CS183: Startup – Class 3 lecture).

The OSS market is highly fragmented. The Leaders (the top right corner of the Gartner OSS Magic Quadrant) tend to service the Tier-1 CSPs exclusively and from my experience running vendor selections, of these vendors only one entertains bidding for customers from outside the top tier. To borrow from the long-tail diagram below, the Gartner Leaders primarily service only the yellow band.

The hundreds of other OSS vendors supply to a combination of yellow and green bands, either servicing functional niches (eg root cause analytics) or perhaps customer-type niches (eg power companies).

Given this fragmentation, OSS has yet to be dominated by an organisation that fits Peter Thiel’s last mover monopoly classification. In OSS we don’t have an equivalent of what Google is to search or Amazon is to online retail.

Can you think of an organisation that fits all of the following criteria based on their OSS solutions alone?
“They are relationship-driven. They create value. They last. And they make money.”

Tomorrow we look at some further insights of Peter Thiel and see what factors might allow a last mover monopoly to emerge in the OSS industry.

Vertical or horizontal OSS branching?

“If you embrace special orders, you’re doing something difficult, scarce and worth seeking out.”
Seth Godin

A recent post, “Long Tail Dynamics in OSS,” highlighted the different bands of customers that a CSP services and the different OSS that they use to service each band. The head (the top-X customers by revenue) tends to get customised OSS solutions, whilst the long tail (the other customers, many of whom generate very little individual revenue or profitability for the CSP) tends to get a one-size-fits-all multi-tenant OSS.

OSS developers, like any software developers, are often faced with the dilemma of which type of code tree strategy to follow. Do you follow the “Main Line” approach or the “Special Orders” approach as highlighted in the diagram below?
Different Trees

The main line approach sees all code developed towards a common objective, with no custom branches for any customers. Every customer gets the same functionality and there is only ever one current version of code being used by every customer. This approach is attractive because there is only one version of code to enhance and maintain, which allows all coder hours to focus on roadmap functionality (represented by increasing the height of the tree as fast as possible).

The special orders approach sees every different customer wanting specific modifications to suit their needs. If the developer wants to add new functionality, significant effort must go into evaluating which customers want it, whether it can be easily added to each customer branch, whether changes to any of the branches cause problems to third-party applications / processes / data / integrations and corresponding release management and supportability challenges.

In my experience, every developer wants to stay on the main line with no branching, but at some point becomes swayed by the special order requested by key customers.

In my opinion, one of the following three strategies should be chosen and held to steadfastly, and you have to know whether your target customer base fits into the head or the long-tail band:

  • STRATEGY 1 – If your customer base consists of tier-1 CSPs / utilities, with an appetite and budget for specifically customised solutions, choose the “special orders” approach, knowing what the ongoing ramifications will be. You will need to embrace the variations
  • STRATEGY 2 – If your customer base consists of customers that have smaller budgets and are willing to accept flexibility through configuration rather than code change, then choose the “main line” approach and build repeatable delivery models around it
  • STRATEGY 3 – If you service customers that want some variation but you don’t have the resources to support the ramifications, then consider a hybrid approach. This approach sees the developer producing a product along the main-line approach, but with a strategy to provide impeccable frameworks to support third-party developers to fill the custom-requirements gaps. By “impeccable frameworks” I mean highly open interfaces / APIs, 24×7 developer support mechanisms, high-quality documentation, developer forums, etc. The developer will rely on the third-parties to contribute towards the success of their brand, so they have to ensure the success of the third-parties in turn
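A minimal sketch of the Strategy 3 hybrid might look like the following: the main-line product commits to a small, stable extension point, and third-party developers register plug-ins against it to fill the custom-requirement gaps. The interface and names below are hypothetical, not any vendor's actual API.

```python
# Strategy 3 sketch: main-line core plus an open extension point that
# third parties build against, so custom requirements never fork the core.

class PluginRegistry:
    """The stable, documented extension point the core product commits to."""

    def __init__(self):
        self._hooks = {}

    def register(self, hook_name, func):
        """Third parties attach their plug-ins to named hooks."""
        self._hooks.setdefault(hook_name, []).append(func)

    def run(self, hook_name, payload):
        # Core behaviour is unchanged when no plug-in is registered.
        for func in self._hooks.get(hook_name, []):
            payload = func(payload)
        return payload

registry = PluginRegistry()

# A third-party developer ships a custom enrichment without touching core code.
def add_regulatory_tag(order):
    order["tags"] = order.get("tags", []) + ["regulated-market"]
    return order

registry.register("pre_provision", add_regulatory_tag)

order = registry.run("pre_provision", {"id": 42})
print(order)
```

The "impeccable frameworks" described above are everything wrapped around a hook mechanism like this: the API contract, documentation, developer support and forums that make third parties successful.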