How “what if?” scenarios can halt a project

Let’s admit it: we’ve all worked on an OSS project that has gone into a period of extended stagnation because of a fear of the unknown. I call them “What if?” scenarios. They’re the scenarios where someone asks, “What if x happens?” and the team gets side-tracked whilst finding an answer or resolution. The problem with “What if?” scenarios is that many of them will never happen, or will happen so rarely that the impact is negligible. They’re the opposite end of the Pareto Principle – the 20% of scenarios that take up 80% of the effort / budget / time. They need to be minimised and/or mitigated.

In some cases, the “what if?” questions come from a lack of understanding about the situation, the product suite and / or the future solution. That’s completely understandable because we can never predict all of the eventualities of an OSS project at the outset. That’s the OctopOSS at work – you think you have all of the tentacles under control, but another one always comes and whacks you on the back of the head.

The best way to stop the “what if?” questions from getting out of control is to give stakeholders a sandpit / MVP / rapid-prototype / PoC environment to interact with.

The benefit of the prototype environment is that it delivers something tangible, something that stakeholders far and wide can interact with to test assumptions, usefulness, usability, boundary cases, scalability, etc. Stakeholders get to understand the context of the product and get a better feeling for what the end solution is going to look like. That way, many of the speculative “what ifs?” are bypassed and you get into the more productive dialogue earlier. The alternative, the creation of a document or discussion, can devolve into an almost endless set of “what-if” scenarios and opinions, especially when there are large groups of (sometimes militant) stakeholders.

The more dangerous “what if?” questions come from the experts. They’re the ones who demonstrate their intellectual prowess by finding scenario after scenario that nobody else is considering. I have huge admiration for those who can uncover potential edge cases, race conditions, loopholes in code, etc. The challenge is that these scenarios can be extremely hard to document, test for and circumvent. They’re also often very difficult to quantify or assign a likelihood of occurrence to, so they consume significant resources.

Rather than divert resources to resolving all these “what if?” questions one-by-one, I try to seek a higher-order “safety-net” solution. This might be in the form of exception handling, try-catch blocks, fall-out analysis reports, etc. Or, it might mean assigning a watching brief on the problem and handling it only if it arises in future.
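
To make the safety-net idea concrete, here’s a minimal sketch in Python. The process_work_item wrapper and the “fallout” logger are hypothetical names, invented purely for illustration:

    import logging

    # Hypothetical fall-out channel: anything the happy path can't handle lands
    # here for later (human or automated) analysis, rather than being pre-solved.
    fallout_log = logging.getLogger("fallout")

    def process_work_item(item, handler):
        """Run the happy-path handler; divert any unforeseen scenario to fall-out."""
        try:
            return handler(item)
        except Exception as exc:
            # The safety net: record the "what if" that actually happened, with
            # enough context to triage it if it ever recurs.
            fallout_log.error("Unhandled scenario for item %s: %s", item.get("id"), exc)
            return None

The fall-out report then tells you which “what ifs” actually occur, and how often, so you only invest effort in the ones that earn it.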

The developer development analogy (to OSS investment)

In a post last week, we quoted Jim Rohn who said, “You can have more – if you become more.” Jim was surely speaking about personal growth, but we equated it to OSS needing to become more too, especially by looking beyond the walls of operations.

Your first thoughts may be, “Ohh, good idea, I’d love to get my hands on some of the CMO’s budget to invest in my OSS because I’ve already allocated all of the OSS budget (to operational imperatives no doubt).”

We all talk about getting more budget to do bigger and better things with our OSS. But apart from small windows during capital allocation cycles, “more budget” is rarely an option. Sooooo, I’d like to run a thought experiment past you today.

Rather than thinking of budget as CAPEX, what if we think of the existing (OPEX) budget as a “draw-down of utilisation” bucket? The question we have to ask is whether we are drawing down on the right stuff. If we’re drawing down to deliver on more operational initiatives, are we effectively pushing towards an asymptote? If we were to draw down to deliver something outside the (operations) box, are we increasing our chances of “becoming more?”

I equate it to “the developer development analogy.” Let’s say a developer is already proficient in 10 programming languages. If they allocate their yearly development budget to learning another programming language (number 11), are they really going to become a much better coder? What if, instead, they chose to invest in an adjacency like user experience design, leadership or entrepreneurship? Is that more likely to trigger a leap-frogging S-curve, rather than an asymptotic result, from their investment?

And, if we become more (ie our OSS is delivering more value outside the ops box), we can have more (ie investment coming in from benefiting business units). It’s tied to the law of reciprocity (which hopefully exists in your organisation rather than the law of scavenging other people’s cash).

This is clearly a contrarian and idealistic concept, so I’d love to hear whether you think it could be workable in your organisation.

The evolving complexity of RCA

Root cause analysis (RCA) is one of the great challenges of OSS. As you know, it aims to identify the probable cause of an alarm-storm, where all alarms are actually related to a single fault.

In the past, my go-to approach was to start with a circuit hierarchy-based algorithm. If you had an awareness of the hierarchy of circuits, usually via inventory, then when a lower-order fault occurred (eg Loss of Signal on a transmission link caused by a cable break), you could suppress all of the higher-order alarms (ie from the bearers or tributaries that were dependent upon the L1 link). That approach worked well in the fixed networks of the distant past (think SDH / PDH), partly because it was repeatable between different customer environments.
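
To illustrate the mechanics, here’s a minimal sketch of that suppression logic in Python (the inventory mapping and circuit names are invented for the example):

    # child -> the lower-order circuit that carries it, as known from inventory
    carried_by = {
        "trib-1": "bearer-A",    # tributaries ride on bearer-A
        "trib-2": "bearer-A",
        "bearer-A": "link-L1",   # the bearer rides on the L1 transmission link
    }

    def probable_root_causes(alarmed_circuits):
        """Suppress any alarm whose supporting (lower-order) circuit is also alarmed."""
        alarmed = set(alarmed_circuits)
        roots = set()
        for circuit in alarmed:
            parent = carried_by.get(circuit)
            # Walk down the hierarchy; an alarmed ancestor makes this a symptom.
            while parent is not None and parent not in alarmed:
                parent = carried_by.get(parent)
            if parent is None:
                roots.add(circuit)   # no alarmed ancestor: candidate root cause
        return roots

    # A cable break on link-L1 raises alarms all the way up the stack...
    print(probable_root_causes(["trib-1", "trib-2", "bearer-A", "link-L1"]))
    # -> {'link-L1'}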

Packet-switching data networks changed that to an extent, because a data service could traverse any number of links, on-net or off-net (ie leased links). The circuit hierarchy approach was still applicable, but needed to be supplemented with other rules.

Now virtualised networking is changing it again. RCA loses a little relevance in the virtualised layer. Workloads and resource allocations are dynamic and transient, making them less suited to fixed algorithms. The objective now becomes self-healing – if a failure is identified, failed resources are spun down and new ones spun up to take the load. The circuit hierarchy approach loses relevance, but perhaps infrastructure hierarchy still remains useful. Cable breaks, server melt-downs and hanging controller applications are all examples of root causes that will cause problems in higher layers.

Rather than fixed rules, machine-based pattern-matching is the next big hope for coping with these dynamically changing networks.
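
Purely as an illustration, one simple flavour of pattern-matching is to cluster alarms into time-window bursts and count which alarm types recur together, letting correlations emerge from the data rather than from a fixed topology rule. The window size and alarm names below are arbitrary assumptions:

    from collections import defaultdict
    from itertools import combinations

    def group_by_burst(alarms, window_secs=30):
        """alarms: (timestamp, alarm_id) tuples, sorted by timestamp."""
        bursts, current = [], []
        for ts, alarm_id in alarms:
            # A gap longer than the window closes off the current burst.
            if current and ts - current[-1][0] > window_secs:
                bursts.append(current)
                current = []
            current.append((ts, alarm_id))
        if current:
            bursts.append(current)
        return bursts

    # Count how often alarm types co-occur within a burst; pairs that keep
    # recurring hint at causal patterns worth learning.
    co_occurrence = defaultdict(int)
    for burst in group_by_burst([(0, "LOS"), (2, "AIS"), (3, "BER"), (120, "LOS")]):
        for pair in combinations(sorted({a for _, a in burst}), 2):
            co_occurrence[pair] += 1

Real implementations use far richer features (topology, alarm attributes, learned models), but the principle is the same: discover recurring patterns rather than hand-coding rules.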

The number of layers and complexity of the network seems to be ever increasing, and with it RCA becomes more sophisticated…. If only we could evolve to simpler networks rather than more complex ones. Wishful thinking?

What is your OSS answer : question ratio?

Experts know a lot…. obviously.
They have lots of answers… obviously.

There are lots of OSS experts. Combined, they know A LOT!!

Powerful indeed, but I’m not sure that’s what we need right now. I feel like we’re in a bit of an OSS innovation funk. The biggest improvements in OSS are coming from outside OSS – extrinsic improvement.

Where’s the intrinsic improvement coming from? Do we need someone to shake it up (do we need everyone to shake it up?)? Do we need new thinking to identify and create new patterns? To re-organise and revolutionise what the experts already know. Or do we need to ask the massive questions that re-frame the situation for the experts?

So, considering this funky moment in time, is the real expert the one who knows lots of answers… or the person who can catalyse change by asking the best mind-shift questions?

May I ask you – As an OSS expert, are you prouder of your answers…. or your questions?

To tackle that from a different angle – What is your answer : question ratio? Are you such an important expert that your day is so full of giving brilliant answers that you have no time left to ruminate and develop brilliant questions?

If so, can we take some of your answer time back and re-prioritise it please?

In the words of Socrates, “I cannot teach anybody anything, I can only make them think.”

Customers don’t invest in OSS. What do they invest in?

“An organisation buys an OSS, not because it wants an Operational Support System, but because it wants Operational Support.”

So if our customers are not investing in our OSS, what are they actually investing in? Easy! They’re investing in the ability to solve their own problems and opportunities in future.

If we don’t actually understand operations, what chance do we have of delivering operational support? We keep hearing the terms “customer experience this” and “CX that,” so it must be important, right? Operational support staff might be a few steps removed from us (intentionally or unintentionally) but they are our “real” customers, and the only way we can develop a solution that empathises with them is by spending time with them and listening (not always easy for us know-it-all OSS builder-types).

And just because we have a history in ops doesn’t mean we can assume we know it this time around. Operations are different at each organisation.

So, are we sure we understand the nature, extent and context of the unique problem/s that this customer needs to solve (not wants to solve)?

The exposure effect can work for or against OSS projects

The exposure effect (no, not the one circulating through Hollywood) has a few interesting implications for OSS.

“The mere-exposure effect is a psychological phenomenon by which people tend to develop a preference for things merely because they are familiar with them.”
Wikipedia

In effect, it’s the repetition that drills familiarity, comfort, but also bias, into our sub-conscious. Repetition doesn’t make a piece of information true, but it can make us believe it’s true.

Many OSS experts are exposed to particular vendors / products for a number of years during their careers, and in doing so, the exposure effect can build. It can introduce a subtle bias into vendor selection, whereby the evaluators choose the solution/s they know ahead of the best-fit solution for their organisation. Perhaps having independent vendor selection facilitators who are familiar with many products can help to reduce this bias?

The exposure effect can also appear through sales and marketing efforts. When a vendor regularly contacts customers and repetitively promotes their wares, the customer builds a familiarity with the product. In theory it works for OSS products as it does for beer commercials. This can work for or against, depending on the situation.

In the case for, it can help to build a guiding coalition to get a complex, internal OSS project approved and supported through the challenging times that await every OSS project. I’d even go so far as to say, “you should use it to help build a guiding coalition,” rather than, “you can use it to help build a guiding coalition.” Never underestimate the importance of organisational change management on an OSS project.

In the case against, it can again develop a bias towards vendors / products that aren’t best-fit for the organisation. Similarly, if a “best-fit” product doesn’t take the time to develop repetition, they may never even get considered in a selection process, as highlighted in the diagram below.

7 touches of sales
Courtesy of the OnlineMarketingInstitute.

If the customer thinks they have a problem, they do have a problem

Omni-channel is an interesting concept because it generates two distinctly different views:

  • The customer will use whichever channel (eg digital, apps, contact-centre, IVR, etc) they want to use.
  • The service provider will try to push the customer onto whichever channel suits the service provider best.

The customer will often want to use digital or apps, back-ended by OSS – whether that’s to place an order, make configuration changes, etc. The service provider is happy for the customer to use these low-cost, self-service channels.

But when the customer has a problem, they’ll often try to self-diagnose, then prefer to speak with a person who has the skills to trouble-shoot and work with the back-end systems and processes. Unfortunately, the service provider still tries to push the customer into low-cost, self-service channels. Ooops!

If the customer thinks they have a problem, they do have a problem (even if technically, they don’t).
Omni-channel means giving customers the channels that they want to work via, not the channels the service provider wants them to work via.
Call Volume Reduction (CVR) projects (which can overlap into our OSS) sometimes lose sight of this fact just because the service provider has their heart set on reducing costs.

Funding beyond the walls of operations

“You can have more – if you become more.”
Jim Rohn.

I believe that this is as true of our OSS as it is of ourselves.

Many people use the name Operational Support Systems to put an electric fence around our OSS, limiting their use to just operational activities. However, the reach, awareness and power of what they (we) offer goes far beyond that.

We have powerful insights to deliver to almost every department in an organisation – beyond just operations and IT. But first we need to understand the challenges and opportunities faced by those departments so that we can give them something useful.

That doesn’t necessarily mean expensive integrations, but rather unlocking the knowledge that resides in our data.

Looking for additional funding for your OSS? Start by seeking ways to make it more valuable to more people… or even one step further back – seeking to understand the challenges beyond the walls of operations.

When low OSS performance is actually high performance

“It’s not unusual for something to be positioned as the high performance alternative. The car that can go 0 to 60 in three seconds, the corkscrew that’s five times faster, the punch press that’s incredibly efficient…
The thing is, though, that the high performance vs. low performance debate misses something. High at what?
That corkscrew that’s optimized for speed is more expensive, more difficult to operate and requires more maintenance.
That car that goes so fast is also more difficult to drive, harder to park and generally a pain in the neck to live with.
You may find that a low-performance alternative is exactly what you need to actually get your work done. Which is the highest performance you can hope for.”
Seth Godin, in this article, What sort of performance?

Whether selecting a vendor / product, designing requirements or building an OSS solution, we can sometimes lose track of what level of performance is actually required to get the work done, can’t we?

How many times have you seen a requirement sheet that specifies a Ferrari, but you know the customer lives in a location with potholed and cobblestoned roads? Is it right to spec them – sell them – build them – charge them for a Ferrari?

I have to admit to being guilty of this one too. I have gotten carried away with what the OSS could do, nearer the higher-performance end of the spectrum, rather than taking the more pragmatic view of what the customer really needed.

Automations, custom reports and integrations are the perfect OSS examples of low performance actually being high performance. We spend a truckload of money on these types of features to avoid manual tasks (curse having to do those manual tasks)… when a simple cost-benefit analysis would reveal that staying manual makes a lot more sense in many cases.
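
As a back-of-the-envelope example (all figures below are invented, not benchmarks), the cost-benefit arithmetic is rarely more complicated than this:

    build_cost = 40_000           # once-off cost to build / integrate the automation
    upkeep_per_year = 5_000       # ongoing maintenance of the automation
    manual_cost_per_run = 25      # staff time to perform the task by hand
    runs_per_year = 300

    manual_per_year = manual_cost_per_run * runs_per_year     # 7,500 per year
    net_saving_per_year = manual_per_year - upkeep_per_year   # 2,500 per year
    print(f"Payback period: {build_cost / net_saving_per_year:.0f} years")   # 16 years!

Sixteen years to pay back an automation that will probably be re-platformed long before then. Staying manual wins.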

The 10 minute / 1 minute / 10 second OSS challenge

Check out the video below, which gives an example of the 10 minute / 1 minute / 10 second challenge (you can check out more of them here).

When given 10 minutes to sketch Spiderman, the result is far richer than when the artist is given only 10 seconds… well obviously!!

But let me pose a question. If Sketch B were compiled from 60 sequential 10-second updates (ie Sketch B would also take 10 minutes of total sketching time), do you think the final sketch would look as impressive as the single 10-minute sketch (Sketch A)? The total sketching time is the same, but will the results be similar?

From the 10s sketch above, you can see that the composition is not as precise. Subsequent updates would have to work around the initial structural flaws.

Do you wonder whether this is somewhat analogous to creating OSS using continuous development frameworks like Agile or DevOps? By having tightly compressed (eg weekly) release cycles, are we compromising the structure from the start?

I’m a big believer in rapid prototyping with subsequent incremental improvements instead of the old big-bang OSS delivery model. I’m also impressed with automated dev / test / release frameworks. However, I’m concerned that rapid release cycles can enforce unnecessary deadlines and introduce structural compromises that are difficult to fix mid-flight.

The double-edged sword of OSS/BSS integrations

“…good argument for a merged OSS/BSS, wouldn’t you say?”
John Malecki

The question above was posed in relation to Friday’s post about the currency and relevance of OSS compared with research reports, analyses and strategic plans as well as how to extend OSS longevity.

This is a brilliant, multi-faceted question from John. My belief is that it is a double-edged sword.

Of the many OSS I’ve worked with, one product stands out above all the others. It’s an integrated suite of Fault Management, Performance Management, Customer Management, Product / Service Management, Configuration / orchestration / auto-provisioning, Outside Plant Management / GIS, Traffic Engineering, Trouble Ticketing, Ticket of Work Management, and much more, all tied together with the most elegant inventory data model I’ve seen.

Being a single vendor solution built on a relational database, the cross-pollination (enrichment) of data between all these different modules made it the most powerful insight engine I’ve worked with. With some SQL skills and an understanding of the data model, you could ask it complex cross-domain questions quite easily because all the data was stored in a single database. That edge of the sword made a powerful argument for a merged OSS/BSS.
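
As a sketch of the kind of cross-domain question that becomes easy, consider: “Which customers with open trouble tickets are served by circuits that currently have active alarms?” The schema and table names below are invented for illustration, not the actual product’s data model:

    import sqlite3

    QUERY = """
    SELECT DISTINCT c.customer_name, t.ticket_id, a.alarm_id
    FROM customers c
    JOIN services  s ON s.customer_id = c.customer_id
    JOIN circuits  x ON x.service_id  = s.service_id
    JOIN alarms    a ON a.circuit_id  = x.circuit_id  AND a.status = 'ACTIVE'
    JOIN tickets   t ON t.customer_id = c.customer_id AND t.state  = 'OPEN'
    """

    conn = sqlite3.connect("oss.db")   # placeholder path to the single database
    for customer, ticket, alarm in conn.execute(QUERY):
        print(customer, ticket, alarm)

One schema, one query, four domains (customer, service, network, assurance) joined without an integration layer in between.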

Unfortunately, the level of cross-referencing that made it so powerful also made it really challenging to build an initial data set to facilitate all modules being inter-operable. By contrast, an independent inventory management solution could just pull data out of each NMS / EMS under management, massage the data for ingestion and then you’d have an operational system. The abovementioned solution also worked this way for inventory, but to get the other modules cross-referenced with the inventory required engineering rules, hand-stitched spreadsheets, rules of thumb, etc. Maintaining and upgrading also became challenges after the initial data had been created. In many cases, the clients didn’t have all of the data that was needed, so a data creation exercise needed to be undertaken.

If I had the choice, I would’ve done more of the cross-referencing at data level (eg via queries / reports) rather than entwining the modules together so tightly at application level… except in the most compelling cases. It’s an example of the chess-board analogy.

If given the option between merged (tightly coupled) and loosely coupled, which would you choose? Do you have any insights or experiences to share on how you’ve struck the best balance?

Keeping the OSS executioner away

“With the increasing pace of change, the moment a research report, competitive analysis, or strategic plan is delivered to a client, its currency and relevance rapidly diminishes as new trends, issues, and unforeseen disrupters arise.”
Soren Kaplan

By the same token as the quote above, does it follow that the currency and relevance of an OSS rapidly diminishes as soon as it is delivered to a client?

In the case of research reports, analyses and strategic plans, currency diminishes because the static data sets upon which they’re built are also losing currency. That’s not the case for an OSS – they are data collection and processing engines for streaming (ie constantly refreshing) data. [As an aside here – relevance can still decrease if data quality is steadily deteriorating, irrespective of its currency. Meanwhile currency can decrease if the ever-expanding pool of OSS data becomes so large as to be unmanageable, or if responsiveness is usurped by newer data processing technologies.]

However, as with research reports, analyses and strategic plans, the value of an OSS is not so much related to the data collected, but the questions asked of, and answers / insights derived from, that data.

Apart from the asides mentioned above, the currency and relevance of OSS only diminish as a result of new trends, issues and disrupters if new questions cannot be, or are not being, asked with them.

You’ll recall from yesterday’s post that, “An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data,” is as true of OSS tools as it is of OSS consultants. I’m constantly surprised that so few OSS are designed with intuitive, flexible data interrogation tools built in. It seems that product teams are happy to delegate that responsibility to off-the-shelf reporting tools or leave it up to the client to build their own.

The future of telco / service provider consulting

“Change happens when YOU and I DO things. Not when we argue.”
James Altucher

We recently discussed how ego can cause stagnation in OSS delivery. The same post also indicated how smart contracts potentially streamline OSS delivery and change management.

Along similar analytical lines, there’s a structural shift underway in traditional business consulting, as described in a recent post contrasting “clean” and “dirty” consulting. There’s an increasing skepticism about traditional “gut-feel” or “set-and-forget” (aka clean) consulting and a greater client trust in hard data / analytics and end-to-end implementation (dirty consulting).

Clients have less need for consultants who just turn the ignition and lay out sketchy directions, but increasingly need ones who can help drive the car all the way to their desired destination.

Consultants capable of meeting these needs for the telco / service provider industries have:

  • Extensive coal-face (delivery) experience, seeing and learning from real success and failure situations / scenarios
  • An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data
  • An ability to build repeatable frameworks (including the development of smart contracts)
  • A mix of business, IT and network / tech expertise, like all valuable tripods

Have you noticed that the four key features above are perfectly aligned with having worked in OSS? OSS/BSS data stores contain information that’s relevant to all parts of a telco / service provider business. That makes us perfectly suited to being the high-value consultants of the future, not just contractors into operations business units.

Few consultancy tasks are productisable today, but as technology continues to advance, traditional consulting roles will increasingly be replaced by IP (Intellectual Property) frameworks, data analytics, automations and tools… as long as the technology provides real business benefit.

I found a way to save ten million dollars

Yesterday’s post about egos in OSS contained the following Dilbert cartoon:
Dilbert - I found a way to save a million dollars.
It reminded me of a story from many years ago.

I was working in a developing country, advising the board of a tier-one telco on the implementation of their first-ever OSS (they’d only ever operated their networks at NMS level previously). During the analysis phase I came across some data that showed an interesting opportunity for an innovation relating to their voice Points of Interconnect (PoI).

From a back-of-the-napkin analysis it seemed that a ~$50-100k investment could’ve improved the company’s profit by at least $10M. I ran the concept, and the numbers, past their head of switching. His response was, “I think you’re right…. but I’m not going to recommend it.”

You could say that I was a little bewildered.

He then followed with, “You have to see this from my perspective. If I recommend this project and it succeeds, I receive no benefit. I’m not due for promotion for another two years at the earliest. I will barely receive any recognition at all, certainly no financial reward. The company receives all the benefits. But if the project fails, I will be put aside, passed over for any future promotions. It would be a career killer.”

He was right. I hadn’t seen it from his perspective… still not sure that I do, but as a consultant, I was only ever passing through their corporate culture rather than having a 4-5 decade career embedded within it.

It wasn’t within my OSS scope, but I quietly mentioned it to the board. They delegated the decision back to the head of switching. The project was not recommended to proceed, not even for further analysis.

It’s interesting, the human factors that come into play when project investment is under evaluation, isn’t it? What human factors have you seen influence purchasing decisions?

Bad OSS ego decisions

“A long, long time ago Dennis Haslinger told me that most of the most serious mistakes I would make in life would be bad ego decisions. I have found that to be true.”
Gary Halbert

OSS is an industry filled with highly intelligent people. In every country I’ve visited to work on OSS assignments, perhaps excluding Vietnam, my colleagues have been predominantly male. Dare I say it, do those two preceding facts imply that a significant ego level exists on many (most?) OSS projects (or is that just a coincidence of my own experiences)?

Given that OSS projects tend to cross business units, inter-departmental power plays like the one described in the Dilbert comic below can become just another potential pitfall.
Dilbert - I found a way to save a million dollars

To be honest, I can’t recall any examples where ego (mine or others’) has led to serious mistakes as such, but I’ve seen many cases where it’s led to serious stagnation and delays in project delivery that have been extremely costly.

One example is cited in this post, where the intellectual brilliance of one person caused a document to blow out from 30 pages to 150+, causing a 3+ month delay and more than $100k additional cost.

Stakeholder management and change management are highly underestimated factors in the success of OSS projects.

PS. The “intellectual brilliance” link above also talks about the possible benefits of smart contracts in OSS delivery. I wonder whether smart contracts will reduce some of the ego-related stagnation on OSS projects, or simply shift it from the delivery phase to the up-front smart contract agreement phase, thus introducing more “what if scenario” stagnation?

A deeper level of OSS connection

Yesterday we talked about the cuckoo-bird analogy and how it is preventing telcos from building more valuable platforms on top of their capital-intensive network platforms. Thanks to Dean Bubley, the post gave examples of how the most successful platform plays were platforms on platforms (eg Microsoft Office on Windows, iTunes on iOS, phones on physical networks, etc).

The telcos have found it difficult to build the second layer of platform on their data networks during the Internet age to keep the cuckoo chicks out of the nest.

Telcos are great at helping customers to make connections. OSS are great at establishing and maintaining those connections. But there’s a deeper level of connection waiting for us to support – helping the telcos’ customers to make valuable connections that they wouldn’t otherwise be able to make by themselves.

In the past, telcos provided yellow pages directories to help along these lines. The internet and social media have marginalised the value of this telco-owned asset in recent years.

But the telcos still own massive subscriber bases (within our OSS / BSS suites). How can our OSS / BSS facilitate a deeper level of connection, providing the telcos’ customers with valuable connections that they would not have otherwise made?

OSS that keep the cuckoos out of the nest

“The cuckoo bird is infamous for laying its eggs in other birds’ nests. The young cuckoos grow much faster than the rightful occupants, forcing the other chicks out – if they haven’t already physically knocked the other eggs overboard. (See “brood parasitism”, here).
Analogies exist quite widely in technology – a faster-growing “tenant” sometimes pushes out the offspring of the host. Arguably Microsoft’s original Windows OS was an early “cuckoo platform” on top of IBM’s PC, removing much of IBM’s opportunity for selling additional software. 

In many ways, Internet access itself has outgrown its own host: telco-provided connectivity. Originally, fixed broadband (and the first iterations of 3G mobile broadband) were supposed to support a wide variety of telco-supplied services. Various “service delivery platforms” were conceived, including IMS, yet apart from ordinary operator telephony/VoIP and some IPTV, very little emerged as saleable services.

Instead, Internet access – which started using dial-up modems and normal phone lines before ADSL and cable and 3G/4G were deployed – has been the interloping bird which has thrived in the broadband nest instead of telcos’ own services. It’s interesting to go back and look at the 2000-era projections for walled-garden, non-Internet services.

The problem is that everyone wants to be a platform player. And when you’re building and scaling a new potential platform, it’s really hard to turn down a large and influential “anchor tenant”, even if you worry it might ultimately turn out to be a Trojan Horse (apologies for the mixed metaphor). You need the scale, the validation, and the draw for other developers and partners.

This is why the most successful platforms are always the one which have one of their own products as the key user. It reduces the cannibalisation risk. Office is the anchor tenant on Windows. iTunes, iMessage and the camera app are anchors on iOS. Amazon.com is the anchor tenant for AWS.

Unfortunately, the telecoms industry looks like it will have to learn a(nother) tough lesson or two about “cuckoo platforms”.”
Dean Bubley from Disruptive Wireless.

The link above provides some really interesting perspectives from Dean in relation to OTT business models and the challenges that telcos have faced in trying to build valuable platforms to sit on top of their capital-intensive network platforms. I really recommend having a read of the full article by clicking on the link.

I loosely equate this to the OSI stack, where telcos own the L1 to L2 (L3 in many cases) platform, but haven’t been so successful at creating dominant platforms in the layers above that. That’s also why there are two distinct business model categories: the traditional CSP (Communications Service Provider) that services L1 to L2/L3 and acts like a utility or REIT, and the more competitive DSP (Digital Service Provider). One telco group can have both by leveraging their trillion-dollar treasure chest.

Traditional OSS service the CSP (as well as some of the aspects of the DSP model) but we probably need to create some innovative new concepts if we’re going to assist our telco customers to build DSP platforms and / or to keep the cuckoos out of the nest.

To reduce OSS dark data (or not)?

Dark data is the name for data that is collected but never used.
It’s said that 96-98% of all data is dark data (not that I can confirm or deny those claims).

Dark data forms the bottom layer in the DIKW hierarchy below (image sourced from here).
DIKW hierarchy

What would the dark data percentage be within OSS do you think? Or more specifically, your OSS?

If you’re not going to use it, then why collect it?

I have two conflicting trains of thought here:

  • The Minimum Viable Data perspective; and
  • It’s relatively cheap and easy to collect / store raw data if an interface is already built, so hoard it all just in case your data scientists (or automated data algorithms) ever need it (a rough way of measuring your own dark-data ratio is sketched below)
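
If you wanted to put a number on your own dark data, a crude sketch is to compare what’s collected against what has actually been read recently. The datasets catalogue below is entirely hypothetical:

    from datetime import datetime, timedelta

    # name -> when the dataset was last queried (None = collected, never read)
    datasets = {
        "alarm_history": datetime(2017, 11, 2),
        "port_utilisation_raw": None,
        "config_backups": datetime(2016, 1, 15),
    }

    cutoff = datetime.now() - timedelta(days=365)
    dark = [name for name, last in datasets.items() if last is None or last < cutoff]
    print(f"Dark data: {len(dark)} of {len(datasets)} datasets "
          f"({100 * len(dark) / len(datasets):.0f}%)")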

Where do you sit on the data collection spectrum?

Raising the OSS horizon

With the holiday period looming for many of us, we will have the head-space to reflect – on the year(s) gone and to ponder the one(s) upcoming. I’d like to pose the rhetorical question, “What do you expect to reflect on?”

It’s probably safe to say that a majority of OSS experts are engaged in delivery roles. Delivery roles tend to require great problem-solving skills. That’s one of the exciting aspects of being an OSS expert after all.

There’s one slight problem though. Delivery roles tend to have a focus on the immediacy of delivery, a short-term problem-solving horizon. This generates incremental improvements like new dashboards within an existing dashboard framework, refining processes, next release software upgrades, releasing new stuff that adds to the accumulation of tech-debt, etc, etc.

That’s great, highly talented, admirable work, often exactly what our customers are requesting, but not necessarily what our industry needs most.

We need the revolutionary, not the evolutionary. And that means raising our horizons – to identify and comprehend the bigger challenges and then solving those. That is the intent of the OSS Call for Innovation – to lift our vision to a more distant horizon.

When you reflect during this holiday period, how distant will your horizon be?

PS. Upon your own reflection, are there additional big challenges or exponential opportunities that should be captured in the OSS Call for Innovation?

Can you re-skill fast enough to justify microservices?

“There’s some things that I’ve challenged my team to do. We have to be faster than the web scale players and that sounds audacious. I tell them you can’t go to the bus station and catch a bus that’s already left the station by getting on a bus. We have to be faster than the people that we want to get to. And that sounds like an insane goal but that’s one of the goals we have. We have to speed up to catch the web scale players.”
John Donovan, AT&T, at this link.

Last week saw a series of articles appear here on the PAOSS blog around the accumulation of tech-debt and how microservices / Agile had the potential to accelerate that accumulation.

The part that I find most interesting about this new approach to telco (or more to the point, to the Digital Service Provider (DSP) model) is that it speaks of a shift to being software companies like the OTT players. Most telcos are definitely “digital” companies, but very few could be called “software” companies.

All telcos have developers on their payroll but how many would have software roles filling more than 5% of their workforce? How many would list their developer pools amongst a handful of core strengths? I’d hazard a guess that the roots of most telcos’ core strengths would’ve been formed decades ago.

Software-centric networks are on the rise. Rapid implementation models like DevOps and Agile are on the rise. API / Microservice interfaces to network domains (irrespective of being VNF, PNF, etc) are on the rise. Software, software, software.

In response, telcos are talking software. Talking, but how many are doing?

Organic transition of the workforce (ie boomers out, millennials in) isn’t going to refresh fast enough. Are telcos actively re-inventing their resource pool? Are they re-skilling on a grand scale, often tens of thousands of people, to cater for a future mode of operation where software is a core capability like it is at the OTT players? Re-skilling at a speed that’s faster than the web-scale bus?

If they can’t, or don’t, then perhaps software is not really the focus. Software isn’t their differentiator… they do have many other strengths to work with after all.

If so, then OSS, microservices, SDN / NFV, DevOps, etc are key operational requirements without being core differentiators. Should they therefore all be outsourced to trusted partners / vendors / integrators (rather than following the current insourcing trend), thus delegating the responsibility for curating the tech-debt we spoke about last week?

I’m biased. I see OSS as a core differentiator (if done well), but few agree with me.