The 7 truck-roll fail

In yesterday’s post we talked about the cost of quality, citing examples of primary, secondary and tertiary costs of bad data quality (DQ). We also highlighted that the tertiary costs, including damage to brand reputation, can be among the biggest factors.

I often cite an example where it took 7 truck rolls to connect a service to my house a few years ago. The provider was unable to estimate when their field staff would arrive each day, so I needed to take a full day off work on each of those 7 occasions.

The primary cost factors are fairly obvious, for me, for the provider and for my employer at the time. On the direct costs alone, it would’ve taken many months, if not years, for the provider to recoup their install costs. Most of that cost was attributable to the OSS/BSS and associated processes.

Many of those 7 truck rolls were a direct result of having bad or incomplete data:

  • They didn’t record that it was a two-storey house (and therefore needed a crew with “working at heights” certification and gear)
  • They didn’t record that the install was in a back room of the house (and therefore needed a higher-skilled crew to perform the work)
  • The existing service was installed underground, but they had no records of the route (they went back to the designs and installed a completely different access technology because replicating the existing service was just too complex)

Customer Experience (CX), or rather the brand damage caused by poor CX, is the greatest of all cost-of-quality factors when you consider studies such as the one quoted below.

“A dissatisfied customer will tell 9-15 people about their experience. Around 13% of dissatisfied customers tell more than 20 people.”
White House Office of Consumer Affairs (as cited by customerthink.com).

Through this page alone, I’ve told a lot more than 20 (although I haven’t mentioned the provider’s name, so perhaps it doesn’t count! 🙂  ).

But the point is that my 7 truck-roll example above could’ve been avoided if the provider’s OSS/BSS had given better information to their field workers (or had enforced that field workers populate useful data).

We’ll talk a little more tomorrow about modern Field Services tools and how our OSS/BSS can impact CX in a much more positive way.

Calculating the cost of quality

This week of posts has followed the theme of the cost of quality. Data quality that is.

But how do you calculate the cost of bad data quality?

Yesterday’s post mentioned starting with PNI (Physical Network Inventory). PNI is the cables, splices / joints, patch panels, ducts, pits, etc. This data doesn’t tend to have a programmable interface to electronically reconcile with. This makes it prone to errors of many types – mistakes in manual entry, reconfigurations that are never documented, assets that are lost or stolen, assets that are damaged or degraded, etc.

Some costs resulting from poor PNI data quality (DQ) can be considered primary costs. This includes SLA breaches caused by an inability to identify a fault within an SLA window due to incorrect / incomplete / indecipherable design data. These costs are the most obvious and easy to calculate because they result in SLA penalties. If a network operator misses a few of these with tier 1 clients then this is the disaster referred to yesterday.

But the true cost of quality is in the ripple-out effects. The secondary costs. These include the many factors that result in unnecessary truck rolls. With truck rolls come extra costs including contractor costs, delayed revenues, design rework costs, etc.

Other secondary effects include:

  • Downstream data maintenance in systems that rely on PNI data
  • Code in downstream systems that caters for poor data quality, which in turn increases the costs of complexity such as:
    • Additional testing
    • Additional fixing
    • Additional curation
  • Delays in the ability to get new products to market
  • Reduced ability to price products accurately (due to variation in real costs caused by extra complexity)
  • Reduced impact of automations (due to increased variants)
  • Potential to impact Machine Learning / Artificial Intelligence engines, which rely on reliable and consistent data at scale
  • etc

There are probably more sophisticated ways to calculate the cost of quality across all these factors and more, but in most cases I just use a simple multiplier (sketched in code below):

  • The number of instances of DQ events (eg the number of additional truck rolls); multiplied by
  • A rule-of-thumb cost impact of each event (eg the cost of each additional truck roll)
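
As a minimal illustration of that multiplier in code, here’s a sketch. Every event count and unit cost below is a hypothetical placeholder, not a benchmark from any real operator:

```python
# Minimal sketch of the rule-of-thumb cost-of-quality multiplier.
# Every figure below is a hypothetical placeholder.

dq_events = {
    # event type: (instances per year, rule-of-thumb cost per instance in $)
    "additional truck roll": (400, 600),
    "SLA breach due to bad design data": (12, 25_000),
    "design rework": (150, 900),
}

total = 0
for event, (count, unit_cost) in dq_events.items():
    cost = count * unit_cost
    total += cost
    print(f"{event}: {count} x ${unit_cost:,} = ${cost:,}")

print(f"Estimated annual cost of poor data quality: ${total:,}")
```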

Sometimes the rules-of-thumb are challenging to estimate, so I tend to err on the side of conservatism. I figure that even if the rules-of-thumb aren’t perfectly accurate, at least they produce a real cost estimate rather than just anecdotal evidence.

And more important still are the tertiary and less tangible costs of brand damage (also known as Customer Experience (CX) or reputation damage). We’ll talk a little more about that tomorrow.

 

The OSS Tinder effect

On Friday, we provided a link to an inspiring video showing Rolls-Royce’s vision of an operations centre. That article is a follow-on from other recent posts about the pros and cons of using MVPs (Minimum Viable Products) as an OSS transformation approach.

I’ve been lucky to work on massive OSS projects. Projects that have taken months / years of hard implementation grind to deliver an OSS for clients. One was as close to perfect (technically) as I’ve been involved with. But, alas, it proved to be a failure.

How could that be, you’re wondering? Well, it’s what I refer to as the Tinder Effect. On Tinder, first appearances matter. Liked or disliked at the swipe of a hand.

Many new OSS are delivered to users who are already familiar with one or more OSS. If the new OSS isn’t as pretty or as functional or as intuitive as what the users are accustomed to, then it gets a swipe to the left. As we found out on that project (a ‘we’ that included all the client’s stakeholders and sponsors), first impressions can doom an otherwise successful OSS implementation.

Since then, I’ve invested a lot more time into change management. Change management that starts long before delivery and handover. Long before designs are locked in. Change management that starts with hearts and minds. And starts by involving the end users early in the change process. Getting them involved in the vision, even if not quite as elaborate as Rolls-Royce’s.

The Rolls-Royce vision of OSS

Yesterday’s post mentioned the importance of setting a future vision as part of your MVP delivery strategy.

As Steve Blank said here, “Founders act like the “minimum” part is the goal. Or worse, that every potential customer should want it. In the real world not every customer is going to get overly excited about your minimum feature set… You’re selling the vision and delivering the minimum feature set to visionaries not everyone.”

Yesterday’s post promised to give you an example of an exciting vision. Not just any vision, the Rolls-Royce version of a vision.

We’ve all seen examples of customers wanting a Rolls-Royce OSS solution. Here’s a video that’s as close as possible to Rolls-Royce’s own vision of an OSS solution.

The OSS Minimum Feature Set is Not The Goal

“This minimum feature set (sometimes called the “minimum viable product”) causes lots of confusion. Founders act like the “minimum” part is the goal. Or worse, that every potential customer should want it. In the real world not every customer is going to get overly excited about your minimum feature set. Only a special subset of customers will and what gets them breathing heavy is the long-term vision for your product.

The reality is that the minimum feature set is 1) a tactic to reduce wasted engineering hours (code left on the floor) and 2) to get the product in the hands of early visionary customers as soon as possible.

You’re selling the vision and delivering the minimum feature set to visionaries not everyone.”
Steve Blank here.

A recent blog series discussed the use of pilots as an OSS transformation and augmentation change agent:

  • I have the need for OSS speed
  • Re-framing an OSS replacement strategy
  • OSS transformation is hard. What can we learn from open source?

Note that you can replace the term pilot in these posts with MVP – Minimum Viable Product.

The attraction in getting an MVP / pilot version of your OSS into the hands of users is familiarity and momentum. The solution becomes more tangible and therefore needs less documentation (eg architecture, designs, requirement gathering, etc) to describe foreign concepts to customers. The downside of the MVP / pilot is that not every customer will “get overly excited about your minimum feature set.”

As Steve says, “Only a special subset of customers will and what gets them breathing heavy is the long-term vision for your product.” The challenge for all of us in OSS is articulating the long-term vision and making it compelling… and not just leaving the product in its pilot state (we’ve all seen this happen, haven’t we?)

We’ll provide an example of a long-term vision tomorrow.

PS. I should also highlight that the maximum feature set isn’t the goal either.

OSS transformation is hard. What can we learn from open source?

Have you noticed an increasing presence of open-source tools in your OSS recently? Have you also noticed that open-source is helping to trigger transformation? Have you thought about why that might be?

Some might rightly argue that it is the cost factor. You could also claim that they tend to help resolve specific, but common, problems. They’re smaller and modular.

I’d argue that the reason relates to our most recent two blog posts. They’re fast to install (don’t need to get bogged down in procurement) and they’re easy to run in parallel for comparison purposes.

If you’re designing an OSS can you introduce the same concepts? Your OSS might be for internal purposes or to sell to market. Either way, if you make it fast to build and easy to use, you have a greater chance of triggering transformation.

If you have a behemoth OSS to “sell,” transformation persuasion is harder. The customer needs to rally more resources (funds, people, time) just to compare with what they already have. If you have a behemoth on your hands, you need to try even harder to be faster, easier and more modular.

I have the need for OSS speed

You already know that speed is important for OSS users. They / we don’t want to wait minutes for the OSS to respond to a simple query. That’s obvious, right? The bleeding obvious.

But that’s not what today’s post is about. So then, what is it about?

Actually, it follows on from yesterday’s post about re-framing an OSS transformation. If a parallel pilot OSS can be stood up in weeks, then it helps persuasion. If the OSS is also fast for operators to learn, then it helps persuasion. Why is that important? How can speed help with persuasion?

Put simply:

  • It takes x months of uncertainty out of the evaluators’ lives
  • It takes x months of parallel processing out of the evaluators’ lives
  • It also takes x months of task-switching out of the evaluators’ lives
  • Given x months of their lives back, customers will be more easily persuaded

It also helps with the parallel bake-off if your pilot OSS shows a speed improvement.

Whether we’re the buyer or seller in an OSS pilot, it’s incumbent upon us to increase speed.

You may ask how. Many ways, but I’d start with a mass-simplification exercise.

Addressing the trauma of OSS

“You also have to understand their level of trauma. Your product, service or information is selling a solution to someone who is in trauma. There are different levels, from someone who needs a nail to finish the swing set in their backyard to someone who just found out they have a life-threatening disease. All of your customers had something happen in their life, where the problem got to an unmanageable point that caused them to actively search for your solution. A buying decision is an emotional decision.”
John Carlton.

My clients tend to fall into three (fairly logical) categories:

  1. They’re looking to buy an OSS
  2. They’re looking to sell an OSS
  3. They’re in the process of implementing an OSS

Category 3 clients tend to bring a very technical perspective to the task. Lists of requirements, architectures, designs, processes, training, etc.

Category 2 clients tend to also bring a technical perspective to the task. Lists of features, processes, standards, workflows, etc.

Category 1 clients also tend to break down the buying decision in a technical manner. List of requirements, evaluation criteria, ranking/weighting models, etc.

But what’s interesting about this is that category 1 is actually a very human initiative. It precedes the other two categories (ie it is the lead action). And category 1 clients tend to only reach this state of needing help due to a level of trauma. The buying decision is an emotional decision.

Nobody wants to go through an OSS replacement or the procurement event that precedes it. It’s also a traumatic experience for the many people involved. As much as I love being involved in these projects, I wouldn’t wish them on anyone.

I wonder whether taking the human perspective, actively putting ourselves in the position of understanding the trauma the buyer is experiencing, might change the way we approach all three categories above?

That is, taking less of a technical approach (although that’s still important of course), but more on addressing the trauma. As the first step, do you step back to understand what is the root-cause of your customer’s unique trauma?

Zero Touch Assurance – ZTA (part 3)

This is the third in a series on ZTA, following on from yesterday’s post that suggested intentionally triggering events to allow the accumulation of a much larger library of historical network data.

Today we’ll look at the impact of data collection on our ability to achieve ZTA, referring back to the three stages introduced in part 1 of the series:

  1. Monitoring – observing the events that happen in the network and responding manually
  2. Post-cognition – observing the events that happen in the network, comparing them to past events and actions (using analytics to identify repeating patterns), and using the past to recommend (or automate) a response
  3. Pre-cognition – identifying events that have never happened in the network before, yet still being able to provide a recommended / automated response

In my early days of OSS projects, it was common for network performance data to be collected at 15-minute intervals at best. Sometimes even less frequently if polling put too much load on the processor of a given network device. That was useful for long and medium term trend analysis, but averaging across the 15-minute period meant that significant performance events could be missed. Back in those days it was mostly Stage 1 – Monitoring. Stage 2, Post-cognition, was unsophisticated at a system level (eg manually adjusting threshold-crossing event levels), so post-cognition relied on talented operators who could remember similar events in the past.

If we want to reach the goal of ZTA, we have to drastically reduce measurement / polling / notification intervals. Ideally, we want near-real-time turnaround across each of the following steps:

  • To extract (from the device/EMS)
  • To transform / normalise the data (different devices may use different counter models for example)
  • To load
  • To identify patterns (15 minute poll cycles disguise too many events)
  • To compare with past patterns
  • To compare with past responses / results
  • To recommend or automate a response

I’m sure you can see the challenge here. The faster the poll cycle, the more data that needs to be processed. It can be a significant feat of engineering to process large data volumes at near-real-time speeds (streaming analytics) on large networks.
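
To make those steps a little more tangible, here’s a heavily simplified sketch of a collect / normalise / compare loop. Everything in it (device names, counter model, the pattern check) is hypothetical; a production implementation would more likely sit on a streaming platform (eg Kafka plus a stream processor) than a single polling loop.

```python
import random
import time
from collections import deque

POLL_INTERVAL_SECONDS = 10      # far shorter than the legacy 15-minute cycle
HISTORY_WINDOW = 360            # roughly the last hour of samples per counter

history = {}                    # counter key -> recent normalised samples

def extract(device):
    """Stand-in for an SNMP / EMS / streaming-telemetry fetch (hypothetical)."""
    return [{"device": device, "counter": "if_util", "value": random.uniform(0, 100)}]

def normalise(sample):
    """Map device-specific counter models onto a common schema (hypothetical)."""
    return {"key": f"{sample['device']}:{sample['counter']}", "value": sample["value"]}

def matches_known_pattern(window):
    """Toy check: sustained high utilisation. A real system would compare the
    window against a library of past event signatures and past responses."""
    return len(window) >= 3 and all(v > 90 for v in list(window)[-3:])

def recommend_response(key, window):
    """Look up past responses / results and propose (or trigger) an action."""
    print(f"Pattern detected on {key}: recommend congestion-relief workflow")

if __name__ == "__main__":
    for _ in range(5):                                   # a few demo poll cycles
        for device in ("device-a", "device-b"):          # hypothetical inventory
            for raw in extract(device):
                sample = normalise(raw)
                window = history.setdefault(sample["key"], deque(maxlen=HISTORY_WINDOW))
                window.append(sample["value"])
                if matches_known_pattern(window):
                    recommend_response(sample["key"], window)
        time.sleep(POLL_INTERVAL_SECONDS)
```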

Do you have a nagging OSS problem you cannot solve?

On Friday, we published a post entitled, “Think for a moment…” which posed the question of whether we might be better served looking back at our most important existing features and streamlining them, rather than inventing new features that have little impact.

Over the weekend, a promotional email landed in my inbox from Nightingale Conant. It is completely unrelated to OSS, yet the steps outlined below seem to be a fairly good guide for identifying what to reinvent within our existing OSS.

Go out and talk to as many people [customers or potential] as you can, and ask them the following questions:
1. Do you have a nagging problem you cannot solve?
2. What is that problem?
3. Why is solving that problem important to you?
4. How would solving that problem change the quality of your life?
5. How much would you be willing to pay to solve that problem?

Note: Before you ask these questions, make sure you let the people know that you’re not going to sell them anything. This way you’ll get quality answers.
After you’ve talked with numerous people, you’ll begin to see a pattern. Look for the common problems that you believe you could solve. You’ll also know how much people are willing to pay to have their problem solved and why.

I’d be curious to hear back from you. Do those first 4 questions identify a pattern that relates to features you’ve never heard of before or features that your OSS already performs (albeit perhaps not as optimally as it could)?

TM Forum’s Open API links

Those of you who follow TM Forum are already quite familiar with the Frameworx enterprise architecture model. It’s as close as we get to a standard used across the OSS industry.

Frameworx consists of four main modules, with eTOM, TAM and SID being the most widely referred to.

But there’s a newer weapon in the TM Forum arsenal that appears to be gaining widespread use. It’s the TM Forum Open API suite, which has over 50 REST-based APIs, with many more under development. This link provides the full list of APIs, including specifications, swagger files and postman collections.

The following diagram comes from the Open API Map (GB992) (accurate as of 9 Jan 2018).
[Figure: GB992 Open API Map R17.0.1]

It’s well worth reviewing as I think you’ll be hearing about TMF642 (Alarm Management API) and all its sister products as commonly as you hear eTOM or SID mentioned today.
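
To give a feel for how these Open APIs are consumed, here’s a minimal client sketch against a TMF642-style alarm endpoint. The base URL, token, and even the exact resource path and field names are assumptions for illustration only; check them against the TMF642 specification and your provider’s implementation before relying on them.

```python
import requests

# Hypothetical endpoint details; substitute your provider's base URL, API
# version and credentials, and confirm paths / fields against the TMF642 spec.
BASE_URL = "https://api.example-operator.com/tmf-api/alarmManagement/v4"
TOKEN = "REPLACE_ME"

def list_critical_alarms():
    """Fetch alarms filtered by severity. Query-parameter filtering is the
    usual TMF Open API convention, but verify against the specification."""
    response = requests.get(
        f"{BASE_URL}/alarm",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"perceivedSeverity": "critical"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for alarm in list_critical_alarms():
        print(alarm.get("id"), alarm.get("alarmType"), alarm.get("perceivedSeverity"))
```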

My favourite OSS saying

My favourite OSS saying – “Just because you can, doesn’t mean you should.”

OSS are amazing things. They’re designed to gather, process and compile all sorts of information from all sorts of sources. I like to claim that OSS/BSS are the puppet masters of any significant network operator because they assist in every corner of the business. They assist with the processes carried out by almost every business unit.

They can be (and have been) adapted to fulfill all sorts of weird and wonderful requirements. That’s the great thing about software. It can be *easily* modified to do almost anything you want. But just because you can, doesn’t mean you should.

In many cases, we have looked at a problem from a technical perspective and determined that our OSS can (and did) solve it. But if the same problem were also looked at from business and/or operational perspectives, would it make sense for our OSS to solve it?

Some time back, I was involved in a micro-project that added 1 new field to an existing report. Sounds simple. Unfortunately, by the time all the rigorous deploy and transition processes had been followed to get the update into PROD, the support bill from our team alone ran into tens of thousands of dollars. Months later, I found out that the business unit that had requested the additional field had a bug in their code and wasn’t even picking up the extra field. Nobody had even noticed until a secondary bug prompted another developer to ask how the original code was functioning.

It wasn’t deemed important enough to fix. Many tens of thousands of dollars were wasted because we didn’t think to ask up the design tree why the functionality was (or wasn’t) important to the business.

Other examples arise when we use the OSS to solve a problem through expensive customisation / integration when manual processes could do the job more cost-efficiently.

Another example was a client that had developed hundreds of customisations to automate annoying / cumbersome, but incredibly rare, tasks. The efficiency of removing those tasks didn’t come close to compensating for the expense of building the automations / tools. Just one sample of those tools delivered a $1,000 efficiency improvement for a ~$200,000 project cost… on a task that had only been run twice in the preceding 5 years.

 

How to build a personal, cloud-native OSS sandpit

As a project for 2019, we’re considering the development of a how-to training course that provides a step-by-step guide to build your own OSS sandpit to play with. It will be built around cloud-native and open-source components. It will be cutting-edge and micro-scaled (but highly scalable in case you want to grow it).

Does this sound like something you’d be interested in hearing more about?

Like or comment if you’d like us to keep you across this project in 2019.

I’d love to hear your thoughts on what the sandpit should contain. What would be important to you? We already have a set of key features identified, but will refine it based on community feedback.

How to kill the OSS RFP (part 3)

As the title suggests, this is the third in a series of articles spawned by TM Forum’s initiative to investigate better procurement practices than using RFI / RFP processes.

There’s no doubt the RFI / RFP / contract model can be costly and time-consuming. To be honest, I feel the RFI / RFP process can be a reasonably good way of evaluating and identifying a new supplier / partner. I say “can be” because I’ve seen some really inefficient ones too. I’ve definitely refined and improved my vendor procurement methodology significantly over the years.

I feel it’s not so much the RFI / RFP that needs killing (significant disruption maybe), but rather its natural extension, the contract development and closure phase, that can be significantly improved.

As mentioned in the previous two parts of this series (part 1 and part 2), the main stumbling block is human nature, specifically trust.

Have you ever been involved in the contract phase of a large OSS procurement event? How many pages did the contract end up being? Well over a hundred? How long did it take to reach agreement on all the requirements and clauses in that document?

I’d like to introduce the concept of a Minimum Viable Contract (MVC) here. An MVC doesn’t need most of the content that appears in a typical contract. It doesn’t attempt to predict every possible eventuality during the many years the OSS will survive for. Instead it focuses on intent and the formation of a trusting partnership.

I once led a large, multi-organisation bid response. Our response had dozens of contributors, consumed many person-months of effort and included hundreds of pages of methodology and other content. It conformed with the RFP conditions. That effort seemed justified on a bid that exceeded $250M. We came second on that bid.

The winning bidder responded with a single page that included a statement of intent and a fixed price amount. Their bid didn’t conform to the RFP requests. Whereas we’d sought to engender trust through content, they’d engendered trust through relationships (in a part of the world where we couldn’t match the winning bidder’s relationships). The winning bidder’s response was far easier for the customer to evaluate than ours. Undoubtedly their MVC was easier and faster to gain agreement on.

An MVC is definitely a more risky approach for a customer to initiate when entering into a strategically significant partnership. But just like the sports-star transfer comparison in part 2, it starts from a position of trust and seeks to build a trusted partnership in return.

This is a highly contrarian view. What are your thoughts? Would you ever consider entering into an MVC on a big OSS procurement event?

Thump thump clap

I recently watched the film Bohemian Rhapsody about Freddie Mercury and the band Queen.

The title of this blog refers to the sounds made by the band at the start of their song, “We Will Rock You.”

There was a scene in the movie showing the origins of Thump Thump Clap, with the band adding it into the song purely to engage their fans more in their concert performances. It was to be the first of many engagement triggers that Queen used during their concerts. The premeditated thinking behind that simple act blew me away.

Audience engagement. It’s as important to a band as it is to an OSS transformation.

Thump Thump Clap. Simple. Brilliant. You could say it’s become more than just engaging. It’s become transcendent.

It got me thinking: what could the equivalent in OSS transformations be? How do we get the audience (stakeholders) participating to make the outcomes bigger and better than if only the project team were involved? Too momentous an experience for anyone to quibble about a technical off-note here or there.

There needs to be a performance involved. That implies a sandpit environment. There needs to be excitement around what the audience is seeing / hearing. There needs to be crowd involvement (or does there?).

There needs to be less dead-pan presentation than most of the OSS show-cases I’ve seen (and delivered if I’m being completely honest) 🙂

What are your Thump Thump Clap suggestions?

The OSS proof-of-worth dilemma

Earlier this week we posted an article describing Sutton’s Law of OSS, which effectively tells us to go where the money is. The article suggested that in OSS we instead tend towards the exact opposite – the inverse-Sutton – we go to where the money isn’t. Instead of robbing banks like Willie Sutton, we break into a cemetery and aimlessly look for the cash register.

A good friend responded with the following, “Re: The money trail in OSS … I have yet to meet a senior exec. / decision maker in any telco who believes that any OSS component / solution / process … could provide benefit or return on any investment made. In telco, OSS = cost. I’ve tried very hard and worked with many other clever people also trying hard to find a way to pitch OSS which overcomes this preconception. BSS is often a little easier … in many cases it’s clear that “real money” flows through BSS and needs to be well cared for.”

He has a strong argument. The cost-out mentality is definitely ingrained in our industry.

We are saddled with the burden of proof. We need to prove, often to multiple layers of stakeholders, the true value of the (often intangible) benefits that our OSS deliver.

The same friend also posited, “The consequence is the necessity to establish beneficial working relationships with all key stakeholders – those who specify needs, those who design and implement solutions and those, especially, who hold or control the purse strings. [To an outsider] It’s never immediately obvious who these people are, nor what are their key drivers. Sometimes it’s ambition to climb the ladder, sometimes political need to “wedge” peers to build empires, sometimes it’s necessity to please external stakeholders – sometimes these stakeholders are vendors or government / international agencies. It’s complex and requires true consultancy – technical, business, political … at all levels – to determine needs and steer interactions.”

Again, so true. It takes more than just technical arguments.

I’m big on feedback loops, but also surprised at how little they’re used in telco – at all levels.

  • We spend inordinate amounts of time building and justifying business cases, but relatively little measuring the actual benefits produced after we’ve finished our projects (or gaining the learnings to improve the next round of business cases)
  • We collect data in our databases, obliviously let it age, realise at some point in the future that we have a data quality issue and perform remedial actions (eg audits, fixes, etc) instead of designing closed-loop improvement cycles that ensure DQ isn’t constantly deteriorating
  • We’re great at spending huge effort in gathering / arguing / prioritising requirements, but don’t always run requirements traceability all the way into testing and operational rollout.
  • etc

Which leads us back to the burden of proof. Our OSS have all the data in the world, but how often do we use it to justify and persuade – to prove?

Our OSS products have so many modules and so much technical functionality (much of it effectively duplicated from vendor to vendor). But I’ve yet to see any vendor product that allows their customers, the OSS operators, to automatically gather proof-of-worth stats (ie executive-ready reports). Nor have I seen any integrator build proof-of-worth consultancy into their offer, whereby they work closely with their customer to define and collect the metrics that matter. BTW. If this sounds hard, I’d be happy to discuss how I approach this task.
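
To make the idea slightly more concrete, here’s a minimal sketch of the kind of proof-of-worth aggregation I mean, using an entirely hypothetical sample of closed assurance tickets. The metrics that matter will differ for every operator.

```python
from statistics import mean

# Hypothetical sample of closed assurance tickets exported from the OSS.
tickets = [
    {"id": "T1", "auto_resolved": True,  "resolution_minutes": 4},
    {"id": "T2", "auto_resolved": False, "resolution_minutes": 95},
    {"id": "T3", "auto_resolved": True,  "resolution_minutes": 6},
    {"id": "T4", "auto_resolved": False, "resolution_minutes": 180},
]

auto = [t for t in tickets if t["auto_resolved"]]
manual = [t for t in tickets if not t["auto_resolved"]]

# Executive-ready headline numbers: automation rate and restore-time comparison.
print(f"Tickets closed: {len(tickets)}, auto-resolved: {len(auto) / len(tickets):.0%}")
print(f"Mean time to restore (auto):   {mean(t['resolution_minutes'] for t in auto):.1f} min")
print(f"Mean time to restore (manual): {mean(t['resolution_minutes'] for t in manual):.1f} min")
```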

So let me leave you with three important questions today:

  1. Have you also experienced the overwhelming burden of the “OSS = cost” mentality?
  2. If so, do you have any suggestions / experiences on how you’ve overcome it?
  3. Does the proof-of-worth product functionality sound like it could be useful (noting that it doesn’t even have to be a product feature, but could be a custom report / portal using data that’s constantly coursing through our OSS databases)?

OSS’ Rosetta Stone

When working on OSS projects, I find that linking or reference keys are so valuable at so many levels. Not just data management within the OSS database, but project management, design activities, task assignment / delivery, etc.

People might call things all sorts of different names, which leads to confusion.

Let me cite an example. When a large organisation has lots of projects underway and many people are maintaining project lists, but each with a slightly (or massively!!) different variant of the project name, there can be confusion, and possibly duplication. But introduce a project number and it becomes a lot easier to compare project lists (if everyone cites the project number in their documentation).

Trying to pattern match text / language can be really difficult. But if data sets have linking keys, we can easily use Excel’s vlookup function (or the human brain, or any other equivalent tool of choice) to make comparison a whole lot easier.

Linking keys could be activity codes in a WBS, a device name in a log file, a file number, a process number, part numbers in a designer’s pick-lists, device configuration code, etc.

Correlation and reconciliation tasks are really important in OSS. An OSS does a lot of data set joins at application / database level. We can take a lead from code-level use of linking keys.
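
As a trivial illustration of that vlookup-style join in code (the data sets and column names below are made up for the example):

```python
# Hypothetical extracts from two systems that both carry a device linking key.
inventory = [
    {"device_id": "DEV-001", "site": "Exchange North", "model": "ACME-7200"},
    {"device_id": "DEV-002", "site": "Exchange South", "model": "ACME-7200"},
]
alarms = [
    {"device_id": "DEV-002", "severity": "critical", "text": "Card failure"},
    {"device_id": "DEV-999", "severity": "minor", "text": "Fan speed high"},
]

# Index one data set by its linking key, then join the other against it
# (the programmatic equivalent of Excel's VLOOKUP).
by_key = {row["device_id"]: row for row in inventory}

for alarm in alarms:
    match = by_key.get(alarm["device_id"])
    site = match["site"] if match else "UNKNOWN (reconciliation candidate)"
    print(f"{alarm['device_id']}: {alarm['severity']} at {site}")
```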

Just like the Rosetta stone proved to be the key to unlocking Egyptian hieroglyphs, introducing linking keys can be useful for translating different “languages” at many levels in OSS projects.

The Jeff Bezos prediction for OSS

“If we start to focus on ourselves, instead of focusing on our customers, that will be the beginning of the end … We have to try and delay that day for as long as possible.”
Jeff Bezos.

Jeff Bezos recently predicted that Amazon is likely to fail and/or go bankrupt at some point in time, as history has eventually proven for most high-flying companies. The quote above was part of that discussion.

I’ve worked with quite a few organisations that have been in the midst of some sort of organisational re-structure – upsizing, downsizing, right-scaling, transforming – whatever the words in effect might be. Whilst it would be safe to say that all of those companies were espousing a focus on their customers, the organisational re-structures always seem to cause inward-facing behaviour. It’s human nature that change triggers fear, concerns about job security, opportunities to expand empires, etc.

And in these types of inward-facing environments in particular, I’ve seen some really interesting decisions made around OSS projects. When making these decisions, customer experience has clearly been a long way down the list of priorities!! And in the current environment of significant structural change in the telco industry, these stimulants of internal-facing behaviour appear to be growing.

Whilst many people want to see OSS projects as technical delivery solutions and focus on the technology, the people and culture aspects of the project can often be even more challenging. They can also be completely underestimated.

What have your experiences been? Do you agree that customer-facing vision, change management and stakeholder management can be just as vital as technical brilliance on an OSS implementation team?

The culture required to support Telkomsel’s OSS/BSS transformation

Yesterday’s post described the ways in which Telkomsel has strategically changed their value-chain to attract revenues with greater premiums than the traditional model of a telco. They’ve used a new digital core and an API framework to help facilitate their business model transformation. As promised yesterday, we’ll take a slightly closer look at the culture of Telkomsel’s transformation today.

Monty Hong of Telkomsel presented the following slides during a presentation at TM Forum’s DTA (Digital Transformation Asia) last week.

The diagram below shows the patience and ongoing commitment needed for major structural transformations like the one Telkomsel underwent.

[Figure: Telkomsel's commitment to transformation]

The curve above tends to represent the momentum and morale I’ve felt on most large OSS projects. Unfortunately, I’ve also been involved in projects where project sponsors haven’t stayed the journey beyond the dip (Q4/5 in the graph above) and so haven’t experienced the benefits of the proposed project. This graph articulates well the message that change management and stakeholder / sponsor champions are an important, but often overlooked, component of an OSS transformation.

The diagram below helps to articulate the benefits of an open API model being made accessible to external market-places. We’re entering an exciting time for OSS, with previously hidden, back-end telco functionality now being increasingly presented to the market (even if only as APIs into the black box).

[Figure: Telkomsel's internal/external API influences]

Amongst many other benefits, it helps to bring the customer closer to implementers of these back-end systems.

DTA is all wrapped up for another year

We’ve just finished the third and final day at TM Forum’s Digital Transformation Asia (https://dta.tmforum.org and #tmfdigitalasia ). Wow, talk about a lot happening!!

After spending the previous two days focusing on the lecture series, it would’ve been remiss of me not to catch up with the vendors and Catalyst presentations that had been on display for all three days. So that was my main focus for day 3. Unfortunately, I probably missed seeing some really interesting presentations, although I did catch the tail-end of the panel discussion, “Zero-touch – Identifying the First Steps Toward Fully Automated NFV/SDN,” which was ably hosted by George Glass (along with NFV/SDN experts Tomohiro Otani and Ir. Rizaludin Kaspin). From the small amount I did see, it left me wishing that I could’ve experienced the entire discussion.

But on with the Catalysts, which are one of the most exciting assets in TM Forum’s arsenal IMHO. They connect carriers (as project champions) with implementers to deliver rapid prototypes on some of the carriers’ most pressing needs. They deliver some seriously impressive results in a short time, often with implementers only being able to devote part of their working hours (or after-hours) to the Catalyst.

As reported here, the winning Catalysts are:

1. Outstanding Catalyst for Business Impact
Telco Cloud Orchestration Plus, Using Open APIs on IoT
Champion: China Mobile
Participants: BOCO Inter-Telecom, Huawei, Hewlett Packard Enterprise, Nokia

2. Outstanding Catalyst for Innovation
5G Pâtisserie
Champions: Globe Telecom, KDDI Research, Singtel
Participants: Neural Technologies, Infosys, Ericsson

3. Outstanding New Catalyst
Artificial Intelligence for IT Operations (AIOps)
Champions: China Telecom, China Unicom, China Mobile
Participants: BOCO Inter-Telecom, Huawei, Si-Tech

These were all undoubtedly worthy winners, reward for the significant effort that has already gone into them. Three other Catalysts that I particularly liked are:

  • Transcend Boundaries – which demonstrates the use of Augmented Reality for the field workforce in particular, as championed by Globe. Collectively we haven’t even scratched the surface of what’s possible in this space, so it was exciting to see the concept presented by this Catalyst
  • NaaS in Action – which is building upon Telstra’s exciting Network as a Service (NaaS) initiative; and
  • Telco Big Data Security and Privacy Management Framework – the China Mobile led Catalyst that is impressive for the number of customers that are already signed up and generating revenues for CT.

BTW. The full list of live Catalysts can be found here.

For those who missed this year’s event, I can only suggest that you mark it in your diaries for next year. The TM Forum team is already starting to plan out next year’s event, one that will surely be even bigger and better than the one I’ve been privileged to have attended this week.