DTA is all wrapped up for another year

We’ve just finished the third and final day at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia ). Wow, talk about a lot happening!!

After spending the previous two days focusing on the lecture series, it would’ve been remiss of me not to catch up with the vendors and Catalyst presentations that had been on display for all three days. So that was my main focus for day 3. Unfortunately, I probably missed some really interesting presentations, although I did catch the tail-end of the panel discussion, “Zero-touch – Identifying the First Steps Toward Fully Automated NFV/SDN,” which was ably hosted by George Glass (along with NFV/SDN experts Tomohiro Otani and Ir. Rizaludin Kaspin). From the small amount I did see, it left me wishing I could’ve experienced the entire discussion.

But on with the Catalysts, which are one of the most exciting assets in TM Forum’s arsenal IMHO. They connect carriers (as project champions) with implementers to deliver rapid prototypes on some of the carriers’ most pressing needs. They deliver some seriously impressive results in a short time, often with implementers only being able to devote part of their working hours (or after-hours) to the Catalyst.

As reported here, the winning Catalysts are:

1. Outstanding Catalyst for Business Impact
Telco Cloud Orchestration Plus, Using Open APIs on IoT
Champion: China Mobile
Participants: BOCO Inter-Telecom, Huawei, Hewlett Packard Enterprise, Nokia

2. Outstanding Catalyst for Innovation
5G Pâtisserie
Champions: Globe Telecom, KDDI Research, Singtel
Participants: Neural Technologies, Infosys, Ericsson

3. Outstanding New Catalyst
Artificial Intelligence for IT Operations (AIOps)
Champions: China Telecom, China Unicom, China Mobile
Participants: BOCO Inter-Telecom, Huawei, Si-Tech

These were all undoubtedly worthy winners, rewarded for the significant effort that has already gone into them. Three other Catalysts that I particularly liked are:

  • Transcend Boundaries – which demonstrates the use of Augmented Reality for the field workforce in particular, as championed by Globe. Collectively we haven’t even scratched the surface of what’s possible in this space, so it was exciting to see the concept presented by this Catalyst
  • NaaS in Action – which is building upon Telstra’s exciting Network as a Service (NaaS) initiative; and
  • Telco Big Data Security and Privacy Management Framework – the China Mobile-led Catalyst that is impressive for the number of customers already signed up and generating revenues for CT.

BTW. The full list of live Catalysts can be found here.

For those who missed this year’s event, I can only suggest that you mark it in your diaries for next year. The TM Forum team is already starting to plan out next year’s event, one that will surely be even bigger and better than the one I’ve been privileged to have attended this week.

Is OSS the future of OSS?

Don’t worry. The title of this post isn’t a typo, but I’ll get to that shortly.

I’ve just had an interesting day 2 at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia ). The quality of presentations was again quite high with further thought-provoking ideas!!

My favorite session for the day was a panel discussion entitled, “Is open-source the future of OSS/BSS?” Hence the title of today’s blog. Is OSS (open source software) the future of OSS?

Trevor Cheung of OpenROADS Community spoke about their framework for delivering transformation. One point he emphasised was that we’re so wrapped up in Customer Experience (CX), we often forget about Employee Experience. Put simply, if we don’t win the hearts and minds of the implementers, there’s never going to be a transformed experience for the customers to have.

Jurgen Hase of unlimit gave a number of really interesting perspectives, but the best is paraphrased as follows, “The S in IoT stands for security… Wait, what? There is no S in IoT??”

Next was Angelia Ooi of TIME. Angelia provided 8 really useful tips on digital transformation via a presentation pack that is easily the most succinct and polished of all those I’ve seen at DTA so far.

Joddy Hernady of Telkom Indonesia provided some of the economics of becoming a digital telco, which provided an interesting perspective on the benefits of achieving digital transformation.

But finally, it was the last presentation of the day that was most thought-provoking. Is open source the future of OSS/BSS?
Unfortunately I missed almost all of Catherine Michel’s opening gambit, but I believe the CTO of Sigma Systems made the key point that open source projects such as MongoDB should really only be considered once they’ve reached a level of maturity, ongoing development and support that approaches that of the large ISVs (Independent Software Vendors) such as Sigma Systems. She also highlighted the multi-layered challenges around licensing / rights.
Gnanapriya Chidambaranathan of Infosys contended that there is a wealth of open source projects that can be leveraged, curated and supported by integrators such as Infosys. She posed that open source adoption is a key to innovation.
Venura Mendis of Apigate provided the perspective of an open source software provider. He highlighted the challenge he faces in dealing with traditional carrier procurement teams, particularly in their ambition of reaching comparative TCO (Total Cost of Ownership) models.
Guy Lupo of Telstra provided a number of different and interesting perspectives, as he regularly does, this time on a carrier deciding between ISV, open source products and going down the path (rabbit hole?) of open sourcing their own developments. Guy’s perspectives were really pertinent as he’s currently utilising all of these options in his NaaS (Network as a Service) program at Telstra.

Finally, a few thoughts from me on the topic of OSS as the future of OSS.

1. One of the biggest challenges facing the future of OSS is fragmentation. The PAOSS vendor list has over 200 records (and I’ll be doing a major update again shortly that will add hundreds of additional vendors). This means the available skills pool is diluted, with a lot of duplicated functionality. It also becomes really challenging for customers to choose the right product for their needs (although we could claim that this is a good thing for PAOSS, as we often assist customers with this challenge). The proliferation of open source projects that deliver OSS/BSS functionality further fragments and dilutes the market.

2. We’re seeing a trend away from the behemoth software stacks of the past for a variety of reasons, which could be summed up as the laws of physics preventing large-scale OSS pivots. More modular OSS appear to be more nimble, which plays into the hands of niche open source offerings. That runs contra to the massive-scale open source effort of ONAP and, interestingly, the above-mentioned panelists also held doubts over ONAP’s ability to succeed. I should note that they, like me, were also enthusiastic about facets of ONAP such as the collaboration and initiative taken.

3. I still believe there is the potential to build an open-source OSS core that then allows collaboration and plug-ins to be developed, thus better leveraging the long tail of innovation from the available skills pool. Today’s panelists did throw something of a spanner in these works though by pointing out the layered licensing challenge with open source. It’s quite common for open source projects to leverage open source projects, which in turn leverage open source projects. Guy in particular highlighted just how big a problem it has been for Telstra’s procurement team to trace out all the open source threads.
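
To make the layered licensing challenge concrete, here’s a minimal sketch of tracing licences through transitive dependencies. The package names and licence metadata below are entirely hypothetical – a real exercise would interrogate actual package manifests:

```python
# Minimal sketch: recursively collect the licences of a project's
# transitive open source dependencies. DEPENDENCIES is hypothetical
# metadata, standing in for real package manifests.
DEPENDENCIES = {
    "oss-inventory": {"license": "Apache-2.0", "requires": ["orm-lib", "geo-lib"]},
    "orm-lib": {"license": "MIT", "requires": ["db-driver"]},
    "geo-lib": {"license": "GPL-3.0", "requires": ["db-driver"]},
    "db-driver": {"license": "BSD-3-Clause", "requires": []},
}

def collect_licenses(project, seen=None):
    """Walk the dependency tree, returning {package: licence}."""
    seen = {} if seen is None else seen
    if project in seen:
        return seen  # already traced; also guards against cycles
    meta = DEPENDENCIES[project]
    seen[project] = meta["license"]
    for dep in meta["requires"]:
        collect_licenses(dep, seen)
    return seen

print(collect_licenses("oss-inventory"))
# {'oss-inventory': 'Apache-2.0', 'orm-lib': 'MIT',
#  'db-driver': 'BSD-3-Clause', 'geo-lib': 'GPL-3.0'}
```

Even in this toy tree, a copyleft licence surfaces two layers down – exactly the kind of thread procurement teams end up having to trace.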

OSS that capture value, not just create it

I’ve just had a really interesting first day at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia ). The quality of presentations was quite high. Some great thought-provoking ideas!!

Nik Willetts kicked off his keynote with the following quote, which I’m paraphrasing, “Telcos need to start capturing value, not just creating it as they have for the last decade.”

For me, this is THE key takeaway for this event, above any of the other interesting technical discussions from day 1 (and undoubtedly on the agenda for the next 2 days too).

The telecommunications industry has made a massive contribution to the digital lifestyle that we now enjoy. It has been instrumental in adding enormous value to our lives and our economy. But all the while, telecommunications providers globally have been experiencing diminishing profitability and share-of-wallet (as described in this earlier post). Clearly the industry has created enormous value, but hasn’t captured as much as it would’ve liked.

The question to ask is how our thinking and our OSS/BSS stacks will contribute to capturing more value for our customers. As described in the share-of-wallet post above, the premium end of the value chain has always been in the content (think in terms of phone conversations in days gone by, or the myriad of comms techniques today such as email, live chat, blogs, etc). That’s what the customer pays for – the experience – not the networks or systems that facilitate it.

Nik’s comments made me think of Andrew Carnegie. Monopolies such as the telecommunications organisations of the past and Andrew Carnegie’s steel business owned vast swathes of the value chain (Carnegie Steel Company owned the mines which extracted the raw materials needed to make steel, controlled the transportation used to deliver the materials and the product, and ran the mills used for steel production). Buyers didn’t care for the mines or mills or transportation. Customers were paying for the end product as it is what helped them achieve their goals, whether that was the railway tracks needed by the railroads or the beams needed by construction companies.

The Internet has allowed enormous proliferation of the premium-end of the telecommunications value chain. It’s too late to stuff that genie back into the bottle. But to Nik’s further comment, we can help customers achieve their goals by becoming their “do-it-yourself” digital partners.

Our customers now look to platforms like Facebook, Instagram, Google, WordPress, Amazon, etc to build their marketing, order capture, product / content delivery, commercial transactions, etc. I really enjoyed Monty Hong’s presentation, which showed how Telkomsel’s OSS/BSS is helping to embed Telkomsel into customers’ digital lifestyles / value-chains. It’s a perfect example of the “biggest OSS loser” concept discussed in yesterday’s post.

The biggest OSS loser

“You are so much more likely to put effort into something when you know whether it will pay off and what the gains will be. Not knowing how things will turn out undermines your motivation and makes you delay taking action.”
Dr Theo Tsaousides
in his book, Brainblocks.

Have you seen the reality TV show, “The Biggest Loser?” I rarely watch TV, but have noticed that it’s been a runaway hit in the ratings here in Australia (and overseas apparently). Why has it been so successful and what does it have to do with OSS?

Well, according to Dr Tsaousides, the success of the show comes down to the obvious body-shape / fitness transformations each of the contestants makes over each season of the show. But more specifically, “You need to watch only one season from beginning to end and you will start craving to be a contestant on the show, regardless of your current weight… Seeing the people’s amazing transformation over a few months is a much more convincing way to start working out and eating well than being told by your doctor that you need to lose weight and about the cardiovascular advantages of exercise. Forecasting a positive outcome, especially when dealing with something new and unfamiliar, leads to action.”

Can you see how this might be a useful technique when planning an OSS transformation?

Change management is always a challenging task on any large OSS transformation. It’s always best to have the entire OSS user population involved in the change, but that’s not always feasible for large groups of users.

It’s one of the reasons I’m always a big advocate for getting a baseline, sandpit version of off-the-shelf OSS stood up and available for the user population to start interacting with. This is particularly helpful if the sandpit is perceptibly better than the current one.

To paraphrase, “Forecasting a positive outcome (via the OSS sandpit), especially when dealing with something new and unfamiliar (the future state after OSS transformation), leads to action (more excitement, engagement and less pushback from the user population during the course of the transformation).”

Do you think the biggest loser technique could work on your next OSS transformation?

Is your data getting too heavy for your OSS to lift?

“Data mass is beginning to exhibit gravitational properties – it’s getting heavy – and eventually it will be too big to move.”
Guy Lupo
in this article on TM Forum’s Inform that also includes contributions from George Glass and Dawn Bushaus.

Really interesting concept, and article, linked above.

The touchpoint explosion is helping to make our data sets ever bigger… and heavier.

In my earlier days in OSS, I was tasked with leading the migration of large data sets into relational databases for use by OSS tools. I was lucky enough to spend years working on a full-scope OSS (ie its central database housed data for inventory management, alarm management, performance management, service order management, provisioning, etc).

Having all those data sets in one database made it incredibly powerful as an insight generation tool. With a few SQL joins, you could correlate almost any data sets imaginable. But it was also a double-edged sword. Firstly, ensuring that all of the sets would have linking keys (and with high data quality / reliability) was a data migrator’s nightmare. Secondly, all those joins being done by the OSS made it computationally heavy. It wasn’t uncommon for a device list query to take the OSS 10 minutes to provide a response in the PROD environment.

There’s one concept that makes GIS tools more inherently capable of lifting heavier data sets than OSS – they generally load data in layers (that can be turned on and off in the visual pane) and unlike OSS, don’t attempt to stitch the different sets together. The correlation between data sets is achieved through geographical proximity scans, either algorithmically, or just by the human eye of the operator.

If we now consider real-time data (eg alarms/events, performance counters, etc), we can take a leaf out of Einstein’s book and correlate by space and time (ie by geographical and/or time-series proximity between otherwise unrelated data sets). Just wondering – how many OSS tools have you seen that use these proximity techniques? Very few, in my experience.
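
As a thought experiment, here’s a minimal sketch of space/time proximity correlation, assuming each event record carries a timestamp and lat/long coordinates (all field names and thresholds are illustrative):

```python
import math
from datetime import datetime, timedelta

# Minimal sketch of space/time proximity correlation between otherwise
# unrelated event sets (no linking keys). Field names are illustrative.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def correlate(events_a, events_b, km=5.0, window=timedelta(minutes=10)):
    """Pair events that are close in both space and time."""
    return [
        (a, b)
        for a in events_a
        for b in events_b
        if abs(a["time"] - b["time"]) <= window
        and haversine_km(a["lat"], a["lon"], b["lat"], b["lon"]) <= km
    ]

alarms = [{"id": "ALM-1", "time": datetime(2018, 11, 1, 9, 0), "lat": -33.86, "lon": 151.21}]
counters = [{"id": "PM-7", "time": datetime(2018, 11, 1, 9, 4), "lat": -33.87, "lon": 151.20}]
print(correlate(alarms, counters))  # paired: within 10 minutes and ~1.5 km
```

The nested loop is the naive O(n×m) version; at telco event volumes you’d want temporal and spatial indexing, but the principle is the same.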

BTW. I’m the first to acknowledge that a stitched data set (ie via linking keys such as device ID between data sets) is definitely going to be richer than uncorrelated data sets. Nonetheless, this might be a useful technique if your data is getting too heavy for your OSS to lift (eg simple queries are causing minutes of downtime / delay for operators).

Are telco services and SLAs no longer relevant?

I wonder if we’re reaching the point where “telecommunication services” is no longer a relevant term? By association, SLAs are also a bust. But what are they replaced by?

A telecommunication service used to effectively be the allocation of a carrier’s resources for use by a specific customer. Now? Well, less so:

  1. Service consumption channel alternatives are increasing, from TV and radio, to PC, to mobile, to tablet, to YouTube, to Insta, to Facebook, to a million others. Consumption sources are even more prolific.
  2. Customer contact channel alternatives are also increasing, from contact centres, to IVR, to online, to mobile apps, to Twitter, etc.
  3. A service bundle often utilises third-party components, some of which are “off-net.”
  4. Virtualisation is increasingly abstracting services from specific resources. They’re now loosely coupled with resource pools and rely on high availability / elasticity to ensure customer service continuity. Not only that, but those resource pools might extend beyond the carrier’s direct control and out to cloud provider infrastructure.

The growing variant-tree takes the concept beyond the reach of “customer services” and evolves it into “customer experiences.”

The elements that made up a customer service in the past tended to fall within the locus of control of a telco and its OSS. The modern customer experience extends far beyond the control of any one company or its OSS. An SLA – Service Level Agreement – only pertains to the sub-set of an experience that can be measured by the OSS. We can only aspire to offer an ELA – Experience Level Agreement – because we don’t yet have the mechanisms by which to measure or manage the entire experience.

The metrics that matter most for telcos today tend to revolve around customer experience (eg NPS). But aside from customer surveys, ratings and derived / contrived metrics, we don’t have electronic customer experience measurements.

Customer services are dead; long live the king, customer experience… if only we can invent a way to measure the whole scope of what makes up a customer experience.

Are you heading to Digital Transformation Asia (the new name for TM Forum Live Asia!)?

DTA – TM Forum’s Digital Transformation Asia event (https://dta.tmforum.org/) is almost upon us already. Held in Kuala Lumpur from 13-15 November, there are some really interesting looking talks on the agenda (https://dta.tmforum.org/agenda). I’m looking forward to being overwhelmed by the collective genius that is sure to be in attendance.

Will you be making an appearance?

Does Malcolm Gladwell’s 10,000 hours apply to OSS?

You’ve probably all heard of Malcolm Gladwell’s 10,000 hour rule from his book, Outliers? In it he suggests that roughly 10,000 hours of deliberate practice makes an individual world-class in their field. But is 10,000 hours enough in the field of OSS?

I look back to the year 2000, when I first started on OSS projects. Over the following 5 years or so, I averaged an 85 hour week (whilst being paid for a 40 hour week, but I just loved what I was doing!!). If we multiply 85 by 48 by 5, we end up with 20,400 hours. That’s double the Gladwell rule. And I was lucky to have been handed assignments across the whole gamut of OSS activities, not just monotonously repeating the same tasks over and over. But those first 5 years / 20,000+ hours were barely the tip of the iceberg in terms of OSS expertise.

Whilst 10,000 hours might work for repetitive activities such as golf, tennis, chess, music, etc, it’s probably less applicable to a multi-faceted field like OSS.

So, what does it take to make an OSS expert? Narrow focus on just one of the facets? Broad view across many facets? Experience just using, building, designing, optimising OSS, or all of those things? Study, practice or a combination of both?

If you’re an OSS expert, or you know any who stand head and shoulders above all others, how would you describe the path that got you / them there?
Or if I were to ask another way, how would you get an OSS newbie up to speed if you were asked to mentor them from your lofty position of expertise?

Presence vs omni-presence and the green button of OSS design

In OSS there are some tasks that require availability (the green button on communicator). The Network Operations Centre (NOC) is one. But does it require on-site presence in the NOC?

An earlier post showed how wrong I was about collaboration rooms. It seems that ticket flicking (and perhaps communication tools like Slack) is the preferred model. If this is the preferred model, then perhaps there is no need for a NOC… perhaps only a DR NOC (Disaster Recovery NOC).

“Truth is, there are hardly any good reasons to know if someone’s available or away at any given moment. If you truly need something from someone, ask them. If they respond, then you have what you needed. If they don’t, it’s not because they’re ignoring you – it’s because they’re busy. Respect that! Assume people are focused on their own work.
Are there exceptions? Of course. It might be good to know who’s around in a true emergency, but 1% occasions like that shouldn’t drive policy 99% of the time.”
Jason Fried on Signal v Noise

Customer service needs availability. But with a multitude of channels (for customers) and collaboration tools (for staff*), it decreasingly needs presence (except in retail outlets perhaps). You could easily argue that contact centres, online chat operators, etc don’t require presence, just availability.

The one area where I’m considering the paradox of presence is in OSS design / architecture. There are often many facets of a design that require multiple SMEs – OSS application, security, database, workflow, user-experience design, operations, IT, cloud, etc.

When we get many clever SMEs in the one room, they often have so many ideas and so much expertise that the design process resembles an endless loop. Presence seems to inspire omnipresence (the need to show expertise across all facets of the design). Sometimes we achieve a lot in these design workshops. Sometimes we go around in circles almost entirely because of the cleverness of our experts. They come up with so many good ideas we end up in paralysis by analysis.

The idea I’m toying with is how to use the divide and conquer theory – being able to carve up areas of responsibility and demarcation points to ensure each expert focuses on their area of responsibility. Having one expert come up with their best model within their black box of responsibility and connecting their black box with adjacent demarcation points. The benefits are also the detriments. The true double-edged sword. The benefits are having one true expert work through the options within the black box. The detriments are having only one expert work through the options within the black box.

In hindsight, there are some past projects where I wish I’d tried to inspire the divide and conquer approach. In others, the collaboration model has worked extremely well.

But to get back to presence, I wonder whether thrashing out the black boxes and demarcation points up front then allows the experts to do their thing remotely, less inclined to analyse and opine on everyone else’s areas of expertise.

* I use the term staff to represent anyone representing the organisation (staff, contractor, consultant, freelancer, etc)

Intent to simplify our OSS

The left-hand panel of the triptych below shows the current state of interactions with most OSS. There are hundreds of variants inbound via external sources (ie multi-channel) and even internal sources (eg different service types). Similarly, there are dozens of networks (and downstream systems), each with different interface models. Each needs different formatting, so integration costs escalate.
Intent model OSS

The intent model of network provisioning standardises the network interface, drastically simplifying the task of the OSS and the number of variants it must handle. This becomes particularly relevant in a world of NFVs: any vendor’s device of a given type (a router, say) can be handled via a single intent command rather than a separate interface to each vendor’s device / EMS northbound interface. The unique aspects of each vendor’s implementation are abstracted from the OSS.
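
As a toy illustration of that abstraction (the driver classes and command syntax below are invented for the sketch, not any real vendor’s API):

```python
# Minimal sketch of the intent idea: the OSS issues one declarative
# request; thin per-vendor drivers translate it. Drivers and commands
# here are hypothetical, not real vendor interfaces.
from abc import ABC, abstractmethod

class RouterDriver(ABC):
    @abstractmethod
    def apply(self, intent: dict) -> str: ...

class VendorADriver(RouterDriver):
    def apply(self, intent):
        return f"set interface {intent['port']} vlan {intent['vlan']}"

class VendorBDriver(RouterDriver):
    def apply(self, intent):
        return f"configure port {intent['port']} add vlan-id {intent['vlan']}"

def provision(intent, driver: RouterDriver):
    """The OSS only ever makes this one call, whatever the vendor."""
    return driver.apply(intent)

intent = {"service": "attach-vlan", "port": "ge-0/0/1", "vlan": 200}
for drv in (VendorADriver(), VendorBDriver()):
    print(provision(intent, drv))
```

The OSS expresses *what* it wants exactly once; only the thin driver layer knows *how* each vendor says it.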

The next step would be in standardising the interface / data model upstream of the OSS. That’s a more challenging task!!

Telco services that are bigger, faster, better and the OSS that supports that

We all know of the tectonic shifts in the world of telco services, profitability and business models.

One common trend is for telcos to offer pipes that are bigger and faster. Seems like a commoditising business model to me, but our OSS still need to support that. How? Through enabling efficiency at scale. Building tools, GUIs, workflows, integrations, sales pipelines, etc that enable telcos to march seamlessly towards offering ever bigger / faster pipes. An OSS/BSS stack that supports this could represent one of the few remaining sustainable competitive advantages, so any such OSS/BSS could be highly valuable to its owner.

But if the bigger/faster pipe model is commoditising and there’s little differentiation between competing telcos’ OSS/BSS on service activation, then what is the alternative? Services that are better? But what is “better”? More to the point, what is sustainably better (ie can’t be easily copied by competitors)? Services that are “better” are likely to come in many different forms, but they’re unlikely to be related to the pipe (except maybe reliability / SLA / QoS). They’re more likely to be in the “bundling,” which may include premium content, apps, customer support, third-party products, etc. An OSS/BSS that is highly flexible in supporting any mix of bundling becomes important. Product / service catalogs are one of many possible examples.
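
To make the catalog point a little more concrete, here’s a minimal sketch in which a bundle is simply a composition over offers, internal or third-party (all names and prices invented):

```python
# Minimal sketch of a composable product catalog: a bundle is just a
# recipe over offers, some internal, some third-party. All illustrative.
from dataclasses import dataclass, field

@dataclass
class Offer:
    name: str
    monthly: float
    provider: str = "telco"

@dataclass
class Bundle:
    name: str
    components: list = field(default_factory=list)

    @property
    def monthly(self):
        """A bundle's price is derived from whatever it composes."""
        return sum(o.monthly for o in self.components)

bundle = Bundle("Home Max", [
    Offer("1 Gbps broadband", 89.0),
    Offer("Streaming TV pack", 15.0, provider="content-partner"),
    Offer("Premium support", 10.0),
])
print(bundle.name, bundle.monthly)  # Home Max 114.0
```

The design point is that adding or swapping a third-party component changes data, not code – the kind of flexibility a catalog-driven OSS/BSS needs for arbitrary bundling.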

An even bigger differentiator is not bigger / faster / better, but different (if perceived by the market as being invaluably different). The challenge with being different is that “different” tends to be fleeting. It tends to only last for a short period of time before competitors catch up. Since many of the differences available to telco services are defined in software, the window of opportunity is getting increasingly short… except when it comes to the OSS/BSS being able to operationalise that differentiator. It’s not uncommon for a new feature to take 9+ months to get to market, with changes to the OSS/BSS taking up a significant chunk of the project’s critical path. Having an OSS/BSS stack that can repeatedly get a product / feature to market much faster than competing telcos provides greater opportunity to capture the market during the window of difference.

Who are more valuable, OSS hoarders or teachers?

Any long-term readers of this blog will have heard me talk about tripods, and how valuable they are to any OSS team. They’re the people who know about IT, operations/networks and the business context within which they’re working. They’re the most valuable people I’ve worked with in the world of OSS because they’re the ones who can connect the many disparate parts that make up an OSS.

But on reflection, there’s one other trait shared by almost all of the tripods I’ve worked with – they’re natural teachers. They want to impart any and all of the wisdom they’ve accumulated.

I once worked in an Asian country where teaching was valued incredibly highly. Teachers were put on the highest pedestal of the social hierarchy. Yet almost every single person in the organisation, all the way from the board that I was advising through to the workers in the field, hoarded their knowledge. Knowledge is power and it was definitely treated that way by the people in this organisation. Knowledge was treated like a finite resource.

It was a fascinating paradox. They valued teachers, they valued the fact that I was open with sharing everything I could with them, but guarded their own knowledge from anyone else in their team.

I could see their rationale, sort of. Their unique knowledge made them important and impossible to replace, giving them job stability. But I could also not see their rationale at all. Let me summarise that thought in a single question – who have you found to be more valuable (and needing to be retained in their role), the genius hoarder of knowledge who can perform individual miracles or the connector who can lift and coordinate the contributions of the whole team to get things done?

I’d love to get your thoughts and experiences working with hoarders and teachers.

Introducing our OSS expert registry, for making connections in the OSS industry

Here at Passionate About OSS, we’re passionate about making OSS happen. We have an extensive network of contacts. We just naturally tend to find ourselves making connections between the many experts in our network. Connecting those who are hoping to find an OSS expert with an OSS expert hoping to be found.

We’ve just introduced a new free-of-charge OSS expert registry to help people find OSS experts when they need to. This registry is intended to cover the buy-side and sell-side of the OSS market. Click on the link above to check it out.

Facebook’s algorithmic feed for OSS

“This is the logic that led Facebook inexorably to the ‘algorithmic feed’, which is really just tech jargon for saying that instead of this random (i.e. ‘time-based’) sample of what’s been posted, the platform tries to work out which people you would most like to see things from, and what kinds of things you would most like to see. It ought to be able to work out who your close friends are, and what kinds of things you normally click on, surely? The logic seems (or at any rate seemed) unavoidable. So, instead of a purely random sample, you get a sample based on what you might actually want to see. Unavoidable as it seems, though, this approach has two problems. First, getting that sample ‘right’ is very hard, and beset by all sorts of conceptual challenges. But second, even if it’s a successful sample, it’s still a sample… Facebook has to make subjective judgements about what it seems that people want, and about what metrics seem to capture that, and none of this is static or even in principle perfectible. Facebook surfs user behaviour…”
Ben Evans
here.

Most of the OSS I’ve seen tend to be akin to Facebook’s old ‘chronological feed’ (where users need to sift through thousands of posts to find what’s most interesting to them).

The typical OSS GUI has thousands of functions (usually displayed on a screen all at once – via charts, menus, buttons, pull-downs, etc). But of all of those available functions, any given user probably only interacts with a handful.
Current-style OSS interface

Most OSS give their users the opportunity to customise their menus, colour schemes, even filters. For some roles such as network ops, designers, order entry operators, there are activity lists, often with sophisticated prioritisation and skills-based routing, which starts to become a little more like the ‘algorithmic feed.’

However, unlike the random nature of information hitting the Facebook feed, there is a more explicit set of things that an OSS user is tasked to achieve. It is a little more directed, like a Google search.

That’s why I feel the future OSS GUI will be more like a simple search bar (like Google) that will provide a direction of intent as well as some recent / regular activity icons. Far less clutter than the typical OSS. The graphs and activity lists that we know and love would still be available to users, but the way of interacting with the OSS to find the most important stuff quickly needs to get more intuitive. In future it may even get predictive in knowing what information will be of interest to you.
OSS interface of the future
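
As a hedged sketch of how such a search-led, predictive interface might rank results, assuming the OSS logs each user’s function usage (the function names and weightings below are illustrative):

```python
# Minimal sketch: rank OSS functions by keyword match, with each user's
# own usage history as a predictive boost. All data is illustrative.
from collections import Counter

FUNCTIONS = ["create service order", "view alarm list", "run device audit",
             "design fibre route", "generate sla report"]

history = Counter({"view alarm list": 40, "generate sla report": 12})

def suggest(query, top=3):
    """Score = keyword overlap, plus a boost for habitual functions."""
    q = set(query.lower().split())
    scored = [
        (len(q & set(f.split())) + 0.1 * history[f], f)
        for f in FUNCTIONS
    ]
    return [f for score, f in sorted(scored, reverse=True)[:top] if score > 0]

print(suggest("alarm"))  # 'view alarm list' first: it matches and is habitual
print(suggest(""))       # empty query falls back to the user's regular activities
```

A real implementation would obviously need richer intent parsing, but even this toy shows how history turns a chronological-feed GUI into something closer to an algorithmic one.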

OSS collaboration rooms. Getting to the coal-face

A number of years ago I heard about an OSS product that introduced collaborative rooms for network operators to collectively solve challenging network health events. It was in line with some of my own thinking about the use of collaboration techniques to solve cross-domain or complex events. But the concept hasn’t caught on in the way that I expected. I was curious why, so I asked around some friends and colleagues who are hands-on managing networks every day.

The answer showed that I hadn’t got close enough to understanding the psyche at the coal-face. It seems that operators have a preference for the current approach, the tick and flick of trouble tickets until the solution forms and the problem is solved.

This shows the psyche of collaboration at a micro scale. I wonder if it holds true at a macro scale too?

No CSP has an everywhere footprint (admittedly cloud providers are close to everywhere though, in part through global presence, in part through coverage of the access domain via their own networks and/or OTT connectivity). For customers that need to cross geo-footprints, carriers take a tick and flick approach in terms of OSS. The OSS of one carrier passes orders to the other carrier’s OSS. Each OSS stays within the bounds of its organisation’s locus of control (see this blog for further context).

To me, there seems to be an opportunity for carriers to get out of their silo. To leverage collaboration for speed, coverage, etc by designing offerings in OSS design rooms rather than standards workshops. A global product catalog sandpit, as it were, for carriers to design offerings in, where every carrier’s service offering / API / contract resides for other carriers to interact with.

But once again, I may not be close enough to understanding the psyche at the coal-face. If you work at this coal-face, I’d love to get your opinions on why this would or would not work.

Are we better off waiting for OSS technology to catch up?

Yesterday’s post discussed Dave Duggal’s concept of 20th century OSS being all about centralizing command and control to gain efficiency through vertical integration and mass standardization, whilst 21st century OSS are about decentralization – gaining efficiency through horizontal integration of partner ecosystems and mass customization.

We talked about transitioning from a telco market driven by economies of scale (the 20th century benchmark) to a “market of one” (21st century target state), where fully personalised experience exists and is seamless across all channels.

Dave wrote the original article back in 2016. Two years on and some of the technology in our OSS is just starting to catch up to Dave’s concepts. To be completely honest, we still haven’t architected or built the decentralised OSS that truly offer wide-scale partner ecosystems or customer personalisation, particularly at a scale that is cost-viable.

So I’m going to ask a really pointed question. If our OSS are still better suited to 20th century markets and can’t handle the incalculable number of variants that come with a fully personalised customer experience, are we better off waiting for the technology to catch up before trying to build business models that cater to the “market of one?”

Why? Well, as Gadi Solotorevsky, Chief Technology Officer, cVidya in this post on TM Forum’s Inform says, “…digital customers aren’t known for their patience and/or tolerance for errors (I should know – I’m one of them). And any serious glitch, e.g. an error in charging, will not only push them towards a competitor – did I mention how easy it is to change digital service providers? It will probably also find its way to social media, causing a ripple effect. The same goes for the partners who are enabling operators to offer cool digital services in the first place.”

Better to have a business model that is simpler and repeatable / reliable at massive scale than attempt a 21st century model where it’s the fall-outs that are scaling.

I’d love to hear your thoughts.

BTW. Kudos to those organisations investing in the bleeding edge tech that are attempting to solve what Dave refers to as “the challenge of our times.” I’m certainly not going to criticise their bold efforts. Just highlighting the point that many operators have 21st century ambitions of their OSS whilst only having 20th century capabilities currently.

Extending the OSS beyond a customer’s locus of control

“While the 20th century was all about centralizing command and control to gain efficiency through vertical integration and mass standardization, 21st century automation is about decentralization – gaining efficiency through horizontal integration of partner ecosystems and mass customization, as in the context-aware cloud where personalized experience across channels is dynamically orchestrated.
The operational challenge of our time is to coordinate these moving parts into coherent and manageable value chains. Instead of building yet another siloed and brittle application stack, the age of distributed computing requires that we re-think business architecture to avoid becoming hopelessly entangled in a ‘big ball of CRUD’.”
Dave Duggal
here on TM Forum’s Inform back in May 2016.

We’ve quickly transitioned from a telco services market driven by economies of scale (Dave’s 20th century comparison) to a “market of one” (21st century), where the market wants a personalised experience that seamlessly crosses all channels.

By and large, the OSS world is stuck between the two centuries. Our tools are largely suited to the 20th century model (in some cases, today’s OSS originated in the 20th century after all), but we know we need to get to personalisation at scale and have tried to retrofit them. We haven’t quite made the jump to the model Dave describes yet, although there are positive signs.

It’s interesting. Telcos have the partner ecosystems, but the challenge is that the entire ecosystem still tends to be centrally controlled by the telco. This is the so-called best-of-breed model.

In the truly distributed model Dave talks about, the telcos would get the long tail of innovation / opportunity by extending their value chain beyond their own OSS stack. They could build an ecosystem that includes partners outside their locus of control. Outside their CAPEX budget too, which is the big attraction. The telcos get to own their customers, build products that are attractive to those customers, gain revenues from those products / customers, but not incur the big capital investment of building the entire OSS stack. Their partners build (and share profits from) the external components.

It sounds attractive right? As-a-service models are proliferating and some are steadily gaining take-up, but why is it still not happening much yet, relatively speaking? I believe it comes down to control.

Put simply, the telcos don’t yet have the right business architectures to coordinate all the moving parts. From my customer observation at least, there are too many fall-outs as customer journeys hand off between components within the internally controlled partner ecosystem. This is especially true when we talk omni-channel. A fully personalised solution leaves too many integration / data variants to provide complete test coverage. For example, just at the highest level of an architecture, I’ve yet to see a solution that tracks end-to-end customer journeys across all components of the OSS/BSS as well as channels such as online, IVR, apps, etc.
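
For illustration only, here’s a minimal sketch of the kind of journey-level tracking I have in mind, assuming every component and channel stamped its events with a shared correlation id (all events invented):

```python
# Minimal sketch of end-to-end journey stitching: events from different
# channels/components share a correlation id, so stalls (fall-outs)
# become visible. All events are illustrative.
from collections import defaultdict

events = [
    {"journey": "J-1", "source": "online", "step": "order_submitted"},
    {"journey": "J-1", "source": "BSS", "step": "credit_checked"},
    {"journey": "J-1", "source": "OSS", "step": "service_activated"},
    {"journey": "J-2", "source": "app", "step": "order_submitted"},
    {"journey": "J-2", "source": "BSS", "step": "credit_checked"},
]

journeys = defaultdict(list)
for e in events:
    journeys[e["journey"]].append(e["step"])

COMPLETE = "service_activated"
for jid, steps in journeys.items():
    status = "complete" if COMPLETE in steps else f"fall-out after '{steps[-1]}'"
    print(jid, "->", status)
# J-1 -> complete
# J-2 -> fall-out after 'credit_checked'
```

The hard part isn’t this aggregation, of course – it’s getting every component and channel to stamp and propagate the id in the first place.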

Dave rightly points out that this is the challenge of our times. If we can coherently and confidently manage moving parts across the entire value chain, we have more chance of extending the partner ecosystem beyond the telco’s locus of control.

OSS feature parity. A functionality arms race

OSS Vendor 1. “I have 1 million features.” (Dr Evil puts finger in mouth)
OSS Vendor 2. “Yeah, well I have 1,000,001 features in my OSS.”

This is the arms-race that we see in OSS, just like almost any other tech product. I imagine that vendors get into this arms-race because they wish to differentiate. Better to differentiate on functionality than price. If there’s feature parity, then the only differentiator is price. We all know that doesn’t end well!

But I often ask myself a few related questions:

  • Of those million features, how many are actually used regularly?
  • As a vendor, do you have logging that actually tells you which features are being used? (A sketch of this follows the list.)
  • Taking the Whale Curve perspective, even if they’re being used, how many of those features actually contribute to the objectives of the vendor?
    • Do they clearly contribute towards making sales?
    • Do customers delight in using them?
    • Would customers be irate if you removed them?
    • etc
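
On the logging question above, here’s a minimal sketch of what feature-usage telemetry might look like – a decorator counting invocations per feature (in a real product you’d emit to a logging / analytics pipeline rather than an in-memory counter):

```python
# Minimal sketch of feature-usage telemetry: a decorator that counts
# invocations per feature so a vendor can see what's actually used.
import json
from collections import Counter
from functools import wraps

usage = Counter()

def tracked(feature_name):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            usage[feature_name] += 1  # in practice: emit to an analytics pipeline
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@tracked("alarm_list")
def show_alarm_list(): ...

@tracked("topology_map")
def show_topology_map(): ...

for _ in range(42):
    show_alarm_list()
show_topology_map()

print(json.dumps(usage))  # {"alarm_list": 42, "topology_map": 1}
```

With data like this, the Whale Curve question stops being rhetorical: you can rank the million features by actual use before deciding what to build next.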

Earlier this week, I spoke about a friend who created an alarm management tool by himself over a weekend. It didn’t have a million features, but it did have all of what I’d consider to be the most important ones. It did look like a lot of other alarm managers that are now on the market. The GUI based on alarm lists still pervades.

If they all look alike, and all have feature parity, how do you differentiate? If you try to add more features, is it safe to assume that those features will deliver diminishing returns?

But is an alarm list and the flicking of tickets the best way to manage network health?

What if, instead of seeking incremental improvement, someone went back to the most important requirements and considered whether the current approach is meeting those customer needs? I have a strong suspicion that customer feedback will indicate that there are definitely flaws to overcome, especially on high event volume networks.

Clever use of large data volumes provides a level of pre-cognition and automation that wasn’t available when simple alarm lists were first invented. This in turn potentially changes the way that operators can engage with network monitoring and management.

What if someone could identify a whole new user interface / approach that overcame the current flaws and exceeded the key requirements? Would that be more of a differentiator than adding a 1,000,002nd feature?

If you’re looking for a comparison, there were plenty of MP3 players on the market with a heap of features, many more than the iPod. We all know how that one played out!

Pitching an OSS? Don’t call it OSS.

“‘If you asked me how to sell cybersecurity, I wouldn’t call it cybersecurity.’ The raw truth of the statement hit me like a lightning bolt between the eyes. Cybersecurity might loosely describe what we do, and we tell people it’s what we’re selling, but it’s not what people buy.
Safety. Assurance. Peace of mind. Confidence. These are the kinds of things that people buy, concepts which ordinary people can understand and relate to because they are feelings which they have experienced themselves. Cybersecurity is not a next gen firewall, or multi-layered endpoint protection with machine learning and threat sandbox technology. Cybersecurity is not risk management or ISO27001 policies. Cybersecurity is being able to use the Internet in any way I can imagine without having to worry I might lose my family photos, get robbed, or get in trouble with my boss. If you could (honestly) sell me ‘worry free Internet’, I’d buy it in a heartbeat, and so would everyone you know.”
Corch X, here.

Sound familiar?
If you asked me how to sell OSS, I wouldn’t call it OSS. Doh! Now you enlighten me… after I’ve already chosen the domain name, PassionateAboutOSS.com. After I’ve already written over 2,000 posts on topics like orchestration, microservices, cloud-native, DevOps, and every other technical buzzword. Time to start again from scratch.

One thing in my favour is that you, the audience I’m interacting with, also speak in the same jargon. These are the terms we use to communicate with each other. To get things started. To get things done. To get things delivered.

That’s all fine if we’re only interacting with like-minded OSS experts. However, of the thousands of people who interact with our OSS / BSS, only a small percentage are OSS experts. A majority of people use the tools rather than designing, building or commissioning them.

The people who use the tools have a huge range of job roles and reasons for needing to use our OSS / BSS. Just like with cybersecurity, the core reasons could be Safety. Assurance. Peace of mind. Confidence. But they might also include Speed. Efficiency. Reliability. Repeatability. Simplicity. Monetisation. Insightful. And more.

The challenge we have is that so much of the benefit that our OSS and BSS deliver is intangible. We might talk about orchestration delivering speed, simplicity, reliability, etc. But how do we establish a more tangible link?

How do we achieve the equivalent of what the “Intel Inside” marketing ploy delivered, which made people associate an otherwise obscure integrated circuit with a premium feature to consider when buying their next computing device? How do we ensure that people know that our OSS / BSS is the master of puppets that makes our networks dance? It’s our OSS / BSS that are pulling all the strings of operationalisation, connecting customers with networks.

What if the OSS solution lies in its connections?

Imagine for a moment that you’re sitting in front of a pristine chess board, awaiting the opportunity to make your first move. All of the pieces have been exquisitely carved from stone, polished to a sheen. The rules of the game have been established for centuries, so you know exactly which piece is able to move in which sequences. Time to make the opening move.

You’ve studied the games of the masters who have preceded you and have planned your opening gambit, the procession of moves that will hopefully take you into a match-winning position. Due to your skills with modern automations, you’ve connected some of the chess pieces with delicate strings to implement your opening gambit with precision.

Unfortunately, after the first few moves, your strings are starting to pull the pieces out of position. Your opponent has countered well and you’re having to modify your initial plans. You introduce some additional pulleys and springs to help retain the rightful position of your pieces on the board and cope with unexpected changes in strategy. The automations are becoming ever more complex, taking more time to plan and implement than the actual next move.

The board is starting to devolve into unmanageable chaos.

Does this sound like the analogy of a modern OSS? It’s what I refer to as the chessboard analogy.

We’ve been at this OSS game for long enough to already have an understanding of all of the main pieces. TM Forum’s TAM provides this definition as a useful guide. The pieces are modular, elegant and quite well understood by the game’s many players. The rules of the game haven’t really changed much. The main use cases of an OSS from decades ago (ie assure, fulfil, plan, build, etc) probably don’t differ significantly from those of today. This “should” set the foundations for interchangeability of applications.

We see programs of work like ONAP, where millions of lines of code are being developed to re-write the rules of the game. I’m a big advocate of many of the principles of ONAP, but I’m still not sure that such a massive re-write is what’s needed.

It’s not so much in the components of our OSS as in the connections between them where things tend to go awry.

“The foundation of all brilliance is seeing connections when no one else does.”
Richard Parkinson.

This article distills ONAP from its answers back to the core questions. What if instead of seeking an entirely-new architectural stack, we focused on solving the core questions and the chessboard problem – the problem of connections?

Perhaps the answer to the connection problem lies in the interchangeable small-grid OSS model discussed in yesterday’s article on planned OSS obsolescence. It probably also incorporates what ONAP calls “real-time, policy-driven orchestration and automation” to replace pre-defined processes. I wonder whether state-based transitions, guided by intent / policy rules and feedback loops (ie learning systems), might hold the key: an evolving and learning solution that shares similarities with the electrical pathways in our brain, which strengthen the more they’re used and diminish if no longer used.
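
To close with a hedged sketch of that idea – a toy state-transition model in which policy weights strengthen with successful use and decay otherwise (the states, weights and rates are all invented, loosely Hebbian):

```python
import random

# Toy state-transition model: pathway weights strengthen when a chosen
# transition succeeds and slowly decay otherwise. All values invented.
transitions = {  # state -> {next_state: weight}
    "alarm_raised": {"auto_diagnose": 1.0, "raise_ticket": 1.0},
    "auto_diagnose": {"auto_heal": 1.0, "raise_ticket": 1.0},
}

def next_state(state):
    """Choose the next state with probability proportional to learned weight."""
    options = transitions[state]
    return random.choices(list(options), weights=list(options.values()))[0]

def reinforce(state, chosen, succeeded, rate=0.2):
    """Strengthen the pathway that worked; let the others fade."""
    for nxt in transitions[state]:
        if nxt == chosen and succeeded:
            transitions[state][nxt] *= 1 + rate
        else:
            transitions[state][nxt] *= 1 - rate / 4

state = "alarm_raised"
chosen = next_state(state)
reinforce(state, chosen, succeeded=True)
print(chosen, transitions[state])  # the successful pathway now carries more weight
```

Run this in a loop against real outcomes and the policy drifts towards the transitions that actually resolve events – pathways strengthening with use, diminishing without it.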