Telcos still innovate… but more by proxy now

CSPs globally are trying to be innovative and, having been heavily involved in tech since their earliest days, tend to perceive themselves as innovative to their core (yes, bad pun).

There’s no doubt that there is a lot of innovation happening in large CSPs, but I wonder how much of it is really attributable to the CSP? Much of the change swirling through our organisations (or customer organisations) has been designed elsewhere. SDx (Software Defined Everything) was invented elsewhere (as were IP networks before it), as were IoT, CI / CD, Agile, Design Thinking, User Experience (UI/UX), AR/VR, AI/ML, open source and all the other modern buzzwords.

Telcos do come from innovative roots when you look back at what Bell Labs, BT Research, Telstra Research Labs and most other major telcos were doing globally in the early-to-mid 1900s. They not only defined telco but also produced ground-breaking primary research into transistors, microwaves, satellites, lasers, communications theory and more. Since this golden era, they’ve increasingly delegated the innovation process to suppliers. The same is true in our OSS. OSS were originally invented by telcos, but suppliers largely frame the story now.

In many recent blogs, including yesterday’s on network vs digital variants, we’ve discussed ways to invent new, more profitable revenue streams… revenue streams that will in turn fund exciting new OSS projects.

But I also wonder whether the reverse is true. If a telco were to discard its addiction to innovation and take on the utility mindset – providing ubiquitous, bullet-proof, regulated supply of long-haul and widespread access networks – would that be a better fit for these large, highly regulated enterprises?

They tend to have a natural moat in their networks – the cables and spectrum that require too much capital for others to easily replicate and compete with, but whose physical assets have long useful lives. If they remove the need to constantly invest in change then their cost structures will arguably reduce more quickly than their revenues dip (yes, that means profitability increases). That gives them the opportunity to focus on doing things simpler and better rather than doing things newer.

Have many CSPs long since lost the innovation war? Is their innovation primarily by proxy (through partners and suppliers), acquired rather than earned? Many factors work against them ever recapturing the innovative lead that they once held. So why not just admit it and let the innovators innovate, incubating partnerships rather than competing?
The answer is that it’s anathema to these organisations (and the people who drive them), which still perceive themselves to be innovators, perhaps akin to the aging Lothario with the comb-over and expanding waistline dreaming of his younger days.

Would the OSS market look much different if CSPs were to all “age gracefully” with respect to innovation and take a different approach to the inventiveness that still surrounds them?

The three big lies of the telecoms industry

What are the three big lies of the telecoms industry?
The first lie is that data monetisation is coming. Well we are still waiting.
The second is that we have billions of customers. Well are they really our customers or are they people who just tolerate us and are really customers of someone else?
The third lie is that we are utility so we have stable returns, and because we have spectrum allocations that is our safety net. EBITDA multiples for the telco industry were at around 6, while utilities were at multiples of 12 to 16, so it could not be said that telcos – which often had up to five competitors in markets – were utilities.”
Alexey Reznikovich

Three valid points. OSS / BSS could be highly influential in whether these are actually lies or not.

Data:
If we think of data as carriage, that’s clearly not monetising for most telcos currently. Well, it does bring in revenues, but at a diminishing rate per bit. If we think of data in terms of content (eg voice, video, text, etc), then there are some monetisation wins (eg Game of Thrones) but more content is coming online, so there is more supply, making the profitability tail longer. If we talk about data as insight (or supporting the generation of insights if selling analytics or API offerings) then this will never go out of fashion (although with more “insights” being generated, the bar will be raised on what is truly insightful).

Customers:
Alexey’s note here that our customers just tolerate us and are really customers of someone else possibly has some merit. It’s the reason why the OTT play has been so successful. I still have the sense that there is an implicit trust in our service providers, due to the long subscription/billing history, media / shareholder attention on them and regulation that governments place on them (even if not an implicit trust in their customer service). Not all OTT players have the same track record of regulatory governance. Some OTT providers are invisible, hidden somewhere out on the cloud. I feel that this could represent a strong opportunity in a world where crypto-currencies carry vastly more value than they do today.

Utilities:
This comes down to the business model of any given telco and / or the regulatory frameworks they operate within. They could opt to go down the path of ubiquitous data (like ubiquitous voice of the distant past), or like most, they can go down the path of being a digital service provider. To be honest, there are quite a few incumbent providers that are probably more closely aligned to utility than DSP in the collective thinking of their workforce. That often transfers to their mindset in building OSS.

Treating telco like electricity

Whenever I look at telco provisioning projects, I can’t help but think about the complexity involved. Processes are lengthy, with multiple manual steps, mappings, data gathering, sequencing activities, approvals, settings and options. It’s no wonder that OSS evolutions and transformations are a nightmare for operators from the perspectives of effort, risk, cost, etc.

If we look at residential customer services, the 80% in Pareto’s 80:20 rule just want a connection to the Internet at the fastest speeds possible and possibly some additional over-the-top services like email. The base service sounds like electricity supply – a standardised service with almost no service choices and a consumption-based pricing model (perhaps with a fixed annual service and/or connection fee).
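
To make the comparison concrete, below is a minimal sketch (with purely hypothetical rates) of how simple the rating logic for an electricity-style data service could be: one standard product, a fixed connection fee plus consumption, and nothing else to configure.

```python
# A minimal sketch of electricity-style billing for a standardised data service.
# The fee and rate below are assumptions for illustration only.
ANNUAL_CONNECTION_FEE = 120.00  # currency units per year (hypothetical)
PRICE_PER_GB = 0.05             # currency units per gigabyte (hypothetical)

def monthly_bill(gb_consumed: float) -> float:
    """One product, no plans, no add-ons: fixed fee plus consumption."""
    return ANNUAL_CONNECTION_FEE / 12 + gb_consumed * PRICE_PER_GB

if __name__ == "__main__":
    for usage_gb in (50, 200, 1000):
        print(f"{usage_gb:>5} GB -> {monthly_bill(usage_gb):.2f}")
```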

I can understand the thought processes. 1) From the product-owner perspective, a standardised service tends to commoditise. 2) From a network-operator perspective, the configurability of current networks makes service complexity a necessity.

What if we were to counter those arguments in an effort to get to the electricity model? 1) Profit is the difference between income and cost. A commoditised service generates lower income, but cost and risk (particularly within OSS) reduce massively if the proposed model could be implemented. 2) If the network can’t be simplified because of the vendor offerings currently on the market, then virtualised networks represent an opportunity to change this. To change entire protocols even. Even so, most network complexity is introduced because of the long tail of “what-if” scenarios – “what if” a customer asks for feature X? But if feature X is introduced, does the revenue generated outweigh the total lifecycle cost of introducing it? Instead, can network features be pared back?

I’d love to hear your thoughts about why there is an opportunity for the Southwest Airlines version of a telco, or why I’m in dreamland and it can never happen.

Why is mass customisation so important for the future of OSS?

McDonald’s hit a peak moment of productivity by getting to a mythical scale, with a limited menu and little in the way of customization. They could deliver a burger for a fraction of what it might take a diner to do it on demand.
McDonald’s now challenges the idea that custom has to cost more, because they’ve invested in mass customization.
Things that are made on demand by algorithmic systems and robots cost more to set up, but once they do, the magic is that the incremental cost of one more unit is really low. If you’re organized to be in the mass customization business, then the wind of custom everything is at your back.
The future clearly belongs to these mass customization opportunities, situations where there is little cost associated with stop and start, little risk of not meeting expectations, where a robot and software are happily shifting gears all day long.”
Seth Godin
in “On demand vs. in stock.”

We’ve all experienced the modern phenomenon of “the market of one.” We all want a solution to our own specific needs, whilst wanting to pay for it at an economy of scale. For example, to continue the burger theme, I rarely order a burger without having to request a change from the menu item (they all seem to put onions and tomatoes on, which I don’t like).

One of the challenges of the OSS market segments I tend to do most work in (the tier-one telcos and utilities) is that they’ve always needed a market of one approach to fit their needs (ie heavy customisation, with few projects being even similar to any previous ones). This approach comes with an associated cost of customisation (during the commissioning and forever after) as well as the challenge in finding the right people to drive these customisations (yes, you may’ve noticed a shortage too!).

If we can overcome this challenge with a model of repeatability overlaid onto customised requirements (ie mass customisation) then we’re going to reduce costs, reduce risks, reduce our reliance on a limited supply of resources and improve quality / reliability.

But OSS is a bit more complex than the burger business (I imagine, having never learnt much about the science of making and delivering burgers to order). So where do we start on our repeatability mantra? Here are a few ideas but I’m sure you can think of many more:

  1. Systematising the OSS product ordering process, whether you make the process self-serve (ie customers can log on to build a shopping cart) or more likely, you streamline the order collection for your sales agents to build a shopping cart with customers
  2. Providing decision support for the install process, guiding the person doing the install in real-time rather than giving them an admin guide. The process of setting up databases, high-availability, applications, schema, etc will invariably be a little different for each customer and can often take days for top-end installs
  3. Reducing core functionality down to the features that virtually every customer will use, working hard to make those features highly modularised. They become the building blocks that customisations can be built around
  4. Building a platform that is designed to easily plug in additional functionality to create bespoke solutions. This specifically includes clever user experience design to help them find the right plug-ins for their requirements rather than confusing them with a vast array of choice
  5. Wherever possible, give the flexibility in data rather than in applications / code (see the sketch after this list)
  6. Modularisation of products and processes as well as functionality
  7. Build models of intent that are abstracted from the underlying technology layers
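
As a rough illustration of ideas 3 to 5 above, the sketch below keeps the code path identical for every customer and pushes the “market of one” variation into data that could just as easily live in a configuration file. All product and attribute names are invented for illustration.

```python
# A sketch of "flexibility in data rather than code": one generic code path,
# with per-customer variation held in data. All names are illustrative only.
from dataclasses import dataclass, field

CORE_FEATURES = ["connectivity", "fault_monitoring", "usage_billing"]  # modular core (idea 3)

@dataclass
class ServiceOffer:
    customer: str
    features: list = field(default_factory=lambda: list(CORE_FEATURES))
    options: dict = field(default_factory=dict)  # variation as data, not code (idea 5)

def build_offer(customer: str, overrides: dict) -> ServiceOffer:
    """Compose a bespoke offer from standard building blocks plus data-driven options."""
    return ServiceOffer(customer=customer, options=overrides)

# Two "market of one" customers served by the same main-line code:
print(build_offer("utility-co", {"sla": "99.99%", "redundancy": "dual-path"}))
print(build_offer("smb-retailer", {"sla": "99.5%"}))
```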

The transient demands facilitated by a future of virtualised networks make this modularity and repeatability more important than ever for OSS.

Where does trial and error belong in OSS?

I hold a somewhat philosophical view of where OSS (and IT in general) fits within its overall timeline. It’s all pretty nascent in the grand scheme of things.

Whilst communications technology is the common thread, I’ve worked in many industries including construction, mining, engineering, government, utilities, emergency services, healthcare, farming and more. Most of these industries have been around for far longer than OSS. As the outsider looking in on those industries, it seems that the basic techniques they use have existed for decades and have been refined to the point of relative maturity and consistency*. Think of the technique for preparing a suture on a wound, of building a timber frame for a house, of milking a cow, etc. There’s not much error because the trialling has already been refined out of the process.

But OSS is much younger. The technologies they’re built upon are still in a state of massive upheaval. We aren’t even close to reaching an asymptote of refinement yet. Within this maelstrom of change, there is still a lot of trialling underway. And when there’s trial, there’s bound to be errors. They happen whether you like it or not. In the case of OSS, a LOT of errors. Clearly, errors are not in short supply.

Trial, on the other hand, can be far more scarce. The fear of making mistakes on these large, complex projects often holds us back from performing the trials that could help us contribute to the Global OSS Body of Knowledge (GOSSBOK). We mistakenly believe that to avoid error, we have to avoid trial. We actually need more trial. **

Having said that, these projects consume too many resources to be an out-of-control learning experiment. The key call-out here is that our OSS already provide the tools to conduct many controlled micro-experiments. Our databases of large, relational information are perfect for conducting rapid prototyping or rapid insight checking.
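
As one example of what such a controlled micro-experiment might look like, the sketch below compares mean time to restore between two device vendors using nothing more than an existing alarm table. The database, table and column names are entirely hypothetical; the point is that the trial is cheap, reversible and produces a testable insight before any bigger commitment is made.

```python
# A sketch of a controlled micro-experiment against OSS alarm data.
# Database, table and column names are hypothetical.
import sqlite3

def mean_time_to_restore(conn: sqlite3.Connection, vendor: str) -> float:
    """Average seconds between an alarm being raised and cleared for one vendor."""
    row = conn.execute(
        """
        SELECT AVG(strftime('%s', cleared_at) - strftime('%s', raised_at))
        FROM alarms
        WHERE vendor = ? AND cleared_at IS NOT NULL
        """,
        (vendor,),
    ).fetchone()
    return row[0] or 0.0

# Hypothesis check: is vendor A's gear restored faster than vendor B's?
# conn = sqlite3.connect("oss_extract.db")
# print(mean_time_to_restore(conn, "vendor_a"), mean_time_to_restore(conn, "vendor_b"))
```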

* Note that I’m not trying to denigrate the innovation occurring in these industries, as I’m sure they are all highly innovative.
** I’ve shamelessly borrowed from the words and concepts in a Seth Godin blog

Have you noticed the different races being run in OSS?

Yesterday’s blog discussed innovation at the speed of data being even faster than innovation at the speed of software. But not all aspects of OSS need to evolve at the same speed.

In the Olympics, sprinters need fast-twitch muscles and training honed for speed, whilst marathon runners need slow-twitch muscles and appropriate training for endurance. The same appears to be true in OSS, with two business-model extremes – OTT / DSP (Over The Top / Digital Service Provider) versus REIT / TaaU (Telco as a Utility) – and anywhere in between (or a combination of the two extremes – the decathlete OSS??).

The OTT / DSP model requires transient networks and services, with virtualised infrastructure being spun up and torn down to cope with demand. They’re provisioned for burst capacity and a customer expectation of speedy outcomes. In this model there’s arguably just as much happening within the data centre* as there is out in the field, if not more. This model requires the sprinter OSS and a corresponding mindset of innovation (fast-twitch innovation).

Conversely, the REIT / TaaU model doesn’t change as much, although there is always maintenance and build-out going on. Customers know the physical nature of this network build (ie planning, approvals, truck-rolls, etc) is more methodical and time-consuming. At this juncture in history, not much is changing in the physical network, as we’re still using optical fibre, copper, radio and coax (not taking into account changes in the active equipment that plugs into these networks, like G.fast, or topology innovations like FTTdp). There are improvements to be made in the user interfaces of the OSS that support this type of business model – for designers, the field workforce, a contractor-based workforce, etc – but generally the marathoner OSS needs more of an efficiency-improvement mindset (slow-twitch innovation). This is a model that’s built around physical assets / inventory.

* As an aside, you may have noticed that the traditional CSPs have been increasingly outsourcing their data centre capacity requirements, which appears to be a clever ploy on so many levels. Since the assets being used are often not directly managed, the concept of owning and managing the assets / inventory becomes more abstracted, meaning the OTT / DSP model of OSS becomes less dependent on inventory and more reliant on its ability to orchestrate services through any number of cloud providers.

48% drop in store visits in three years

There were 34 billion visits to US stores in 2010. By 2013, that number had plummeted 48% to 17.6 billion, according to Elite Wealth Management. As consumers make more of their purchases online, the challenge of engaging consumers in store is accelerating the rise of ‘experiential shopping’.”
David Kelnar
in a fascinating trend analysis on Medium.

Are you surprised by the headline percentage? Clearly online purchases are rising, but 48% is a massive drop in physical store visits, with massive implications if it continues apace.

It speaks to the efficiencies of digitisation and the changes in business models that go with it. Since the digitisation of business relies on communications technologies, it also highlights the increased dependence of modern businesses on their networks as an essential delivery channel.

The modern service provider has noticed this shift and is transitioning from a communications service provider (CSP – the provider of telephony and data services to act as business enablers) to a digital service provider (DSP – the provider of digital services to enable digital business customers). The shift from CSP to DSP is causing a shift in which assets are most important to the modern service provider (as dictated by what is more important for the customer).

Phone attendants are a comparatively expensive medium for a business to offer because they require people to be available to take calls. Digital channels like chat will become progressively cheaper as automated chat-bot technologies get better at handling customer requests.

The OSS of today service the legacy business model of CSPs. They’re evolving to meet the infrastructure needs of the DSP model through managing and maintaining virtualised network assets and associated hyperscaling technologies. However, are you seeing a corresponding evolution to handle the other side of the DSP model, the apps and content that their customers see as the true enablers of their digital businesses? The DSP needs to find solutions that help their customers to thrive as digital businesses and those solutions will need operational support tools.

This could still be classed as a niche of operational support systems, but not as we know traditional OSS. It’s the managing of software and contracts and content as services rather than cables and devices and channels and circuits. It’s a more IT-style of service operation than a traditional telco’s, so can we expect more IT-style thinking to pervade the OSS of the DSPs of the future?

Open source OSS

Last week, two new open source groups focusing on management and orchestration (MANO) of network functions virtualization (NFV) announced their existence: the Open Source Management (OSM) group hosted by ETSI, and Open-O hosted by the Linux Foundation.
At the press conference announcing Open-O, Yang Zhiqiang, deputy general manager of the China Mobile Research Institute, said the operation support system (OSS) will lead to open source software (OSS).”
Linda Hardesty
here.

Open source software (the other OSS) conjures up vastly different views in our industry, doesn’t it?

Some believe that open source will take over OSS. Others believe that their mission-critical networks can’t possibly have open source tools running them. There is a perception that the risk, especially security risk, is too high.

My perspective lies somewhere in the middle.

With so much brainpower being directed to open-source collaboration projects like OpenStack, it is inevitable that many of the advanced, mission-critical networks of the future will have open source in their toolboxes. Even the ever-cautious utility companies are likely to start considering open source in their OSS if there is the right support wrapper around it.

Not quite so inevitable is the replacement of the entire OSS stack with open-source. Whilst there are open-source projects in ticketing, workforce management, GIS, inventory management, etc the challenge will be building enough sophistication to usurp the customised solutions that customers already have in place.

The interesting one will be the VNFs (Virtual Network Functions) that sit on virtualised networks to deliver switching, routing, network security, etc. Will open-source VNFs ever go mainstream? Will open-source VNFs reach the level of capability of proprietary vendors? The vendors have a big head-start due to having an existing code-base that was tied to their proprietary hardware.

I think that it’s also inevitable that open source VNFs will find their way into mission-critical operational networks of utilities, emergency services, etc eventually. Will they also be managed by open-source OSS? Yes…with the right support models.

Managing property with OSS?

There’s a slight problem with being passionate about OSS – you see everything in relation to OSS problems, solutions, analogies, etc.

I was talking recently with Simon, a great friend of mine, about a new role that he’s taking on. He will be responsible for technology in the facilities used by the large bank that he works for. Those facilities include branches, offices, data centres and more. The conversation started out on the challenges facing facilities managers, including energy efficiency, occupation rates, meeting room utilisation rates, cost per desk, workforce efficiency, utility allocation / billing and many other KPIs.

The tech he has been considering in this space is wide and varied but primarily comes down to additional types of sensors that will ultimately reduce costs for his employer. Many of these sensors sound very cool (no pun intended re. HVAC sensors). As you can imagine, the executives at the bank don’t fund cool; they fund cost-out projects.

The same is true for OSS but that’s not the overlap I was thinking about on this occasion. It was how sensor networks in buildings collect vast amounts of data, aggregate it, process it, analyse it against a particular metric or theme and then provide insights based on that benchmark. We then got onto the topic of circulating (another HVAC pun) the benchmark results for the purpose of gamification and competition between different parts of the organisation.

You can see why I consider existing OSS capabilities to be well placed for servicing the IoT space, can’t you?

Then we got onto the topic of blockchain and smart contracts so my OSS-coloured glasses kicked in again but that’s a story for another day.

PS. Yes, in the pre-IoT days we managed buildings through software like BMS (Building Management Systems), PAGA (Public Address General Alarm), physical security (eg ACS – Access Control Systems) and other tools, not to mention environmentals, but the IoT buzzword is taking it to another level.

 

Another 10 ideas

In yesterday’s post, “Just 10 ideas“, I talked about James Altucher’s “Idea Machine” practice of coming up with 10 ideas every day, regardless of whether they’re good or not. I took a slightly different twist on the concept and posed a series of 10 questions, each of which will probably prompt at least 10 idea responses. That’s 100 OSS ideas.

Today’s post lists another 10 questions to jump-start your (and my) idea machine, as follows:

  1. What are the 10 reasons why the term OSS will be irrelevant / redundant within 5 years?
  2. What are the 10 reasons why OSS will fundamentally change the world of communications and digital industry within the next 5 years?
  3. In what 10 ways can OSS better utilise spatially-relevant data?
  4. What 10 unexpected data sources could provide fundamentally different insight generation from your existing OSS?
  5. Communications and Digital Service Provider (CSP / DSP) business models are evolving almost faster than we can keep up. In what 10 ways can you foresee OSS facilitating these changes?
  6. In what 10 ways can you foresee OSS hindering these changes?
  7. What are the 10 most frequent reasons why customers contact CSP / DSP call centres? Hence, what are the 10 most important functions / insights / capabilities that front-line staff (eg call centre operators) need at their fingertips from OSS to be able to respond to customer requests?
  8. What 10 ways can you interact with your OSS that don’t include keyboard, mouse or touch-pad?
  9. What are the 10 most ridiculous things you’ve seen whilst working on OSS projects? But more importantly, what did you learn from them?
  10. What should be the first 10 events scheduled at an OSS Olympics?

Tomorrow’s blog will have a closer look at the last one – OSS Olympics.

Just 10 ideas

You can’t trust the old style of thinking anymore. You have to come up with a new way of thinking. A new way of having ideas. A new way of interacting with the outside universe.”
James Altucher.

I’m currently reading James Altucher’s latest book. It has nothing to do with OSS but it has a myriad of ideas to ponder for OSS exponents.

He speaks about ideas being currency in the modern world and hence the need to be an idea machine.

I love the concept. It’s part of the reason why I founded PAOSS – to force me to generate more ideas about OSS and share them with the world (or at least the sub-set of the global population that reads PAOSS).

There’s just one problem. James suggests coming up with 10 ideas every day. Sounds like a challenge worth taking on though.

I’d love to get your assistance on this one, to help provide a little push to get the PAOSS idea machine moving faster.

  1. What are the 10 biggest problems you face or our industry faces?
  2. What are the 10 things that you’d like to get better at?
  3. Our thinking is often constrained by the complexity of the challenges, so which 10 complexities would you like to snap your fingers and remove?
  4. Which 10 nascent technologies will impact OSS in the future?
  5. Which 10 technologies will be dead in 5 years or less?
  6. Which 10 technologies do we need that haven’t been invented yet?
  7. Name 10 ways in which machine learning will impact our industry
  8. Which 10 ways can we use to overcome the OSS skills shortage?
  9. Which 10 laws of nature don’t apply to OSS?
  10. Which 10 analogies from other industries can you apply to OSS?

Perhaps you’d like to throw some other lists of 10 at the PAOSS community and me for philosophising over?

Hmmm. I already have another 10 in a list so let’s continue with this tomorrow.

Last mover monopolies

The PayPal network, as it’s been called, is a set of friendships built over the course of a decade. It has become a sort of franchise. But this isn’t unique; that kind of dynamic arguably characterizes all great tech companies, i.e. last mover monopolies. Last movers build non-commoditized businesses. They are relationship-driven. They create value. They last. And they make money.”
Peter Thiel
(actually a notes essay from Peter Thiel’s CS183: Startup – Class 3 lecture).

The OSS market is highly fragmented. The Leaders (the top right corner of the Gartner OSS Magic Quadrant) tend to service the Tier-1 CSPs exclusively and, from my experience running vendor selections, only one of these vendors entertains bidding for customers from outside the top tier. To borrow from the long-tail diagram below, the Gartner Leaders primarily service only the yellow band.

The hundreds of other OSS vendors supply to a combination of yellow and green bands, either servicing functional niches (eg root cause analytics) or perhaps customer-type niches (eg power companies).

Given this fragmentation, OSS has yet to be dominated by an organisation that fits Peter Thiel’s last mover monopoly classification. In OSS we don’t have what Google is to search, Amazon is to online retail, etc.

Can you think of an organisation that fits all of the following criteria based on their OSS solutions alone?
They are relationship-driven. They create value. They last. And they make money.”

Tomorrow we look at some further insights of Peter Thiel and see what factors might allow a last mover monopoly to emerge in the OSS industry.

Vertical or horizontal OSS branching?

If you embrace special orders, you’re doing something difficult, scarce and worth seeking out.”
Seth Godin
here.

A recent post, “Long Tail Dynamics in OSS,” highlighted the different bands of customers that a CSP services and the different OSS that they use to service each band. The head (the top-X customers by revenue) tends to get customised OSS solutions, whilst the long tail (the other customers, many of whom generate very little individual revenue or profitability for the CSP) tends to get a one-size-fits-all multi-tenant OSS.

OSS developers, like any software developers, are often faced with the dilemma of which type of code tree strategy to follow. Do you follow the “Main Line” approach or the “Special Orders” approach as highlighted in the diagram below?
[Diagram: Different Trees – main line vs special orders code trees]

The main line approach sees all code developed towards a common objective, with no custom branches for any customers. Every customer gets the same functionality and there is only ever one current version of code being used by every customer. This approach is attractive because there is only one version of code to enhance and maintain, which allows all coder hours to focus on roadmap functionality (represented by increasing the height of the tree as fast as possible).

The special orders approach sees every different customer wanting specific modifications to suit their needs. If the developer wants to add new functionality, significant effort must go into evaluating which customers want it, whether it can be easily added to each customer branch, whether changes to any of the branches cause problems to third-party applications / processes / data / integrations and corresponding release management and supportability challenges.

In my experience, every developer wants to stay on the main line with no branching, but at some point becomes swayed by the special orders requested by key customers.

In my opinion, there are three strategies, and whichever one you choose should be held onto steadfastly. You have to know whether your target customer base fits into the head or the long-tail band:

  • STRATEGY 1 – If your customer base consists of tier-1 CSPs / utilities, with an appetite and budget for specifically customised solutions, choose the “special orders” approach, knowing what the ongoing ramifications will be. You will need to embrace the variations
  • STRATEGY 2 – If your customer base consists of customers that have smaller budgets and are willing to put up with flexibility through configuration rather than code change, then choose the “main line” approach and build delivery models
  • STRATEGY 3 – If you service customers that want some variation but you don’t have the resources to support the ramifications, then consider a hybrid approach. This approach sees the developer producing a product along the main-line approach, but with a strategy to provide impeccable frameworks to support third-party developers to fill the custom-requirements gaps. By “impeccable frameworks” I mean highly open interfaces / APIs, 24×7 developer support mechanisms, high-quality documentation, developer forums, etc. The developer will rely on the third-parties to contribute towards the success of their brand, so they have to ensure the success of the third-parties in turn
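
To illustrate the “impeccable frameworks” idea in Strategy 3, here is a minimal sketch (not based on any vendor’s actual API) of a main-line core that exposes documented extension points for third parties to register against, rather than the developer carrying customer-specific code branches.

```python
# A sketch of a plug-in framework: main-line code stays common, while
# "special orders" are delivered as registered extensions. All names are illustrative.
from typing import Callable, Dict, List

Handler = Callable[[dict], dict]

class PluginRegistry:
    """The product exposes named extension points; custom logic hooks onto them."""

    def __init__(self) -> None:
        self._hooks: Dict[str, List[Handler]] = {}

    def register(self, extension_point: str, handler: Handler) -> None:
        self._hooks.setdefault(extension_point, []).append(handler)

    def run(self, extension_point: str, payload: dict) -> dict:
        for handler in self._hooks.get(extension_point, []):
            payload = handler(payload)
        return payload

registry = PluginRegistry()

# A third-party "special order" shipped as a plug-in rather than a code branch:
def add_utility_sla_fields(order: dict) -> dict:
    order["sla_class"] = "utility-critical"
    return order

registry.register("order.enrich", add_utility_sla_fields)
print(registry.run("order.enrich", {"service": "dark-fibre", "customer": "power-co"}))
```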

Self-serve building blocks

It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.”
Isaac Asimov

Last week I brought some of Tom Nolle’s ideas to your attention in a blog about building blocks.

The benefits of this ambitious, but I believe inevitable, approach are multi-dimensional. It reduces complexity and promotes increased repeatability at many levels of a CSP‘s org chart. Improved repeatability and a reduced number of variants (in processes and services) mean that automation becomes easier to implement and manage through its life-cycle.

And automation leads towards one of the OSS industry’s holy grails – customer self-service or flow-through provisioning. Even more exciting (for me at least) is that graphical customer self-serve network / service design tools become more viable. This is something I’ve envisaged and even prototyped since the early 2000s. The concept is described in more detail in an earlier blog entitled “The end of network engineers.” NFV and intent networking are mechanisms that should also facilitate the adoption of the building block approach to networking and OSS.
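
As a very rough sketch of where the building block and intent ideas converge (none of the block, service or attribute names below come from any standard’s intent model), a self-serve design tool captures what the customer wants and the OSS translates that into orderable building blocks.

```python
# A sketch of intent-style self-serve design: the customer states *what* is
# wanted; the catalogue below maps it to reusable building blocks.
# All service, site and block names are hypothetical.
service_intent = {
    "service": "site-to-site-connectivity",
    "sites": ["warehouse-melbourne", "office-sydney"],
    "bandwidth_mbps": 500,
    "availability": "99.95",
}

BUILDING_BLOCK_CATALOGUE = {
    "site-to-site-connectivity": ["access-tail", "core-transport", "access-tail"],
    "99.95": ["dual-homing"],
}

def decompose(intent: dict) -> list:
    """Translate an abstract intent into the building blocks needed to fulfil it."""
    blocks = list(BUILDING_BLOCK_CATALOGUE.get(intent["service"], []))
    blocks += BUILDING_BLOCK_CATALOGUE.get(intent.get("availability", ""), [])
    return blocks

print(decompose(service_intent))
# ['access-tail', 'core-transport', 'access-tail', 'dual-homing']
```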

The challenge for CSPs is that this type of service, taken to its fullest extent, disrupts and commoditises the CSP industry so that data streams are like any other utility. Just like streams of electricity or water, the developed world just expects a standardised supply irrespective of the supplier and a monthly bill that consists of connection and consumption charges.

Maybe the CSP industry doesn’t want this end-game supplied (in part) by its OSS, but the march of progress will make it happen regardless. Hence the importance for CSPs to evolve into DSPs (digital service providers)… unless they’re happy with a utility-style business model of course, which many once were as telephony service providers.

Free eBook from multiple OSS experts

I’m delighted to announce the arrival of PAOSS’s latest publication, an eBook entitled, “OSS Masterclasses – Insights from Thought-Leaders in Operational Support Systems (OSS).” You can grab it via the link above or by clicking on the image below-right.

[Image: OSS Masterclasses eBook cover]

SYNOPSIS:
In creating content here on PassionateAboutOSS.com, I spend a great deal of time writing about all things OSS – technology, process, projects and beyond. It has recently dawned on me that I should spend more time writing about, and shining a light on, the people in OSS who make it such a great industry.

It starts off with a short story on how to get started on an OSS project, where knowing exactly where to begin is always a challenge due to their size and complexity.

Then Ashton Wood advises us how to design an OSS platform that delivers from the variety of options that will be available to you. John Reilly then takes the baton to describe how to transform Customer Engagement to Party Engagement Management, a guide to the next generation of collaborative OSS.

Evan Linwood then shows us how to avoid B/OSS scope creep, an ever-present danger for OSS projects. Taking this further, Adrian Mah describes how to gain OSS transformational benefits without doing a transformation by using innovative thinking and the Benefits Realisation Method / Management (BRM) approach.

Doug Duke takes us on the next leg of the journey with ideas for how to build a strong data foundation for an OSS solution. Sound reasoning on a subject that has brought about the downfall of many an OSS. David Hendy then articulates what many are unable or unwilling to do – he describes how OSS platforms deliver business benefits.

Next up, Simon Osborne looks at the past to look into the future with a career perspective of how to know what is needed for the OSS systems of the future. On a similar theme, we then investigate the mega-themes that are likely to impact (and be impacted by) OSS in the future with a story on how to predict the next phase of the OSS explosion.

OSS Masterclasses is a compilation of the remarkable short stories, in each author’s own words, from the people who inspire me. I’m confident that their stories will enlighten and inspire you too.

How can I add value to you?

Today’s blog is all about you (actually they all have you in mind collectively, but this one encourages you to voice your specific obstacles to progress).

In the world of OSS (or perhaps even beyond), what value can I add that goes beyond the generalities of a blog and will help you get what you want? Is there any specific challenge that is preventing you from getting to where you want (or need) to go?

My role as an ICT and OSS consultant can be broken down into two key attributes:

  1. Being a connector (of people, ideas, technologies, products, methodologies, companies, customers, etc) and coach
  2. Producing an outcome that is not possible to do in-house given the constraints that exist in time, resources, skill-sets, priorities, etc

Who do you need to connect with who would be able to help you resolve a problem or jump to the next level? What technology do you want to know more about, such as whether it fits your business objectives for the future?

What else is vexing you?

Leave me a message below and I’ll hopefully be able to produce an outcome or connection that can assist you in some meaningful way.

The law of cascading problems (part 2)

For every complex problem there is an answer that is clear, simple, and wrong.”
H. L. Mencken

In yesterday’s blog, we discussed the law of cascading problems where if we have data accuracy levels as follows:

  • Locations are 90%
  • Pits are 95%
  • Ducts are 90%
  • Cables are 90%
  • Joints are 85%
  • Patch panels are 85%
  • Active devices are 95%
  • Cards are 95%
  • Ports are 90%
  • Bearer circuits are 85%

then the success rate of end-to-end circuits through all of these objects is only around 35% (ie multiply all these percentages together).

One of the simplest ways to build a higher level of resiliency via imperfect data is to reduce the amount of cascading (ie simplify the data by reducing the number of relationships).

In the example above, it’s fantastic to know all the outside plant relationships (ie pits, ducts, cables, joints, patch-panels) for fault-finding purposes but outside plant data is notoriously difficult to maintain (it generally has no programmatic interface to facilitate electronic audits so manual audits are required).
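
The sketch below runs the arithmetic for both scenarios using the accuracy figures listed above: compounding every object gives roughly 35% end-to-end success, while treating the outside-plant objects as a dotted-line lifts it to roughly 62%.

```python
# The cascading-accuracy arithmetic, using the figures listed above, with and
# without the outside-plant objects in the mandatory cascade.
from math import prod

accuracy = {
    "locations": 0.90, "pits": 0.95, "ducts": 0.90, "cables": 0.90,
    "joints": 0.85, "patch_panels": 0.85, "active_devices": 0.95,
    "cards": 0.95, "ports": 0.90, "bearer_circuits": 0.85,
}
outside_plant = {"pits", "ducts", "cables", "joints", "patch_panels"}

full_chain = prod(accuracy.values())
dotted_line = prod(v for k, v in accuracy.items() if k not in outside_plant)

print(f"E2E success through every object:    {full_chain:.0%}")   # ~35%
print(f"E2E success with OSP as dotted-line: {dotted_line:.0%}")  # ~62%
```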

So the challenge for the OSS experts dealing with this scenario is:

  1. Do I model outside plant as a dotted-line to increase E2E circuit success rates and reduce the effort of manicuring the data; or
  2. Do the fault-identification benefits outweigh the effort of maintaining the data and coping with the E2E circuit fall-outs?

Furthermore, the larger the data volumes, the bigger the remediation task. For example, it might be easy to choose option 2 if you have a local network and infrequent customer network change but rely heavily on your outside plant (eg utilities). It becomes much harder to accommodate option 2 if you have global infrastructure and large amounts of change in your customer data sets (ie E2E circuits) (eg tier-1 CSPs).

IoT equals lower ARPU?

Surprisingly though, wireless providers themselves are one of the largest hurdles for advancing IoT, with many resisting adding connectivity to new devices due to fears of lower ARPU metrics. CSPs must band together and embrace future innovation before competition edges them out. Perhaps creating an IoT MVNO is the transition necessary for addressing lower ARPU, while still fostering the innovation necessary to drive these new revenues.”
Humera Malik
in this article.

Seriously??

Are CSPs really resisting IoT due to fears of lowering ARPU (Average Revenue Per User) figures? If so (and I have no reason to disbelieve Humera’s claims BTW), then aren’t the CSPs taking too narrow a perspective on what IoT represents? Are they really only looking at providing wireless data (and hence the low ARPU due to IoT generally being a low data volume telemetry-style service)?

Now I’m only a humble OSS consultant / engineer / architect / promoter, but why would the telcos not be seeking to build* the whole IoT ecosystem play, as discussed here, not just the data plumbing?

Would ARPUs from the whole IoT ecosystem (network, apps, content, sensor management, etc) be less than ARPUs of a traditional mobile user? Given that the ecosystem model is designed around the value fabric and revenues are shared amongst the players that are adding value, does ARPU remain the relevant metric for IoT for CSPs?

I’d love to get your opinions on this. Are telcos going to just provide the wireless data streams or should they (are they) aiming higher?

* When I say build, this could be in the value-fabric / partnership model of service offering.

OSS flaws

People are too eager to say “This legendary person had flaws!” instead of, “Wow, this flawed human being managed to do something legendary.”
Mishell Baker.

Interesting perspective from Mishell above, one which I concur with.

As usual, I have a take on this in relation to OSS. I think we often spend so much time on making the little things perfect that we lose sight of the big things that will make our OSS legendary. But this is something of a contrarian view.

Your customers and your stakeholders expect your product offerings to be perfect, for no fall-outs to occur, for all events to be processed, for no exceptions to arise, etc. I agree that this is a great objective, although I’d also note that this level of perfection requires an exceptional amount of your thought, design, development, process, data cleansing and testing.

But I wonder whether all that extra effort could be redirected to identifying and creating products that can make a significant difference to the organisation, acknowledging that there will be slight imperfections that are captured and processed manually.

This perspective is particularly important for customers / stakeholders that don’t have large budgets to allocate to their OSS. They need to get the best organisational bang for their buck, rather than a smaller but highly refined product.

But the tough part of this is convincing stakeholders that they won’t be getting perfection (noting that there is no such thing in OSS anyway). How many customers / stakeholders have you ever been able to persuade to accept flaws in return for something legendary?

Chief Simplification Officer

What simple action could you take today to produce a new momentum toward success in your life?
Tony Robbins

Complexity is the single biggest challenge that stands in the way of us delivering OSS masterpieces. As described in, “The triple constraint of complexity,” the reduction of any complexity should have a multiplier effect towards the success of the project. This principle doesn’t just apply to OSS, but also to the CSPs that utilise them.

So for today’s blog, I came up with this catchy concept of the Chief Simplification Officer. How very unique… The only problem is that when I searched for the term online, I found around 400,000 existing references, so the concept isn’t remotely unique, unfortunately. Anyway…

Whilst we all have the responsibility of being CSO on our projects, I wonder whether every significant OSS project or OSS operational team should have a CSO or a Project Simplification Officer whose entire purpose is to identify ways of making the OSS simpler.

Do you think that such a role is justified? If so, what do you think are the essential traits that this person would need?