Are telco services and SLAs no longer relevant?

I wonder if we’re reaching the point where “telecommunication services” is no longer a relevant term? By association, SLAs are also a bust. But what are they replaced by?

A telecommunication service used to effectively be the allocation of a carrier’s resources for use by a specific customer. Now? Well, less so.

  1. Service consumption channel alternatives are increasing, from TV and radio to PC, mobile, tablet, YouTube, Insta, Facebook and a million others.
    Consumption sources are even more prolific.
  2. Customer contact channel alternatives are also increasing, from contact centres to IVR, online, mobile apps, Twitter, etc.
  3. A service bundle often utilises third-party components, some of which are “off-net”
  4. Virtualisation is increasingly abstracting services from specific resources. They’re now loosely coupled with resource pools and rely on high availability / elasticity to ensure customer service continuity. Not only that, but those resource pools might extend beyond the carrier’s direct control and out to cloud provider infrastructure

The growing variant-tree takes the concept beyond the reach of “customer services” and evolves it into “customer experiences.”

The elements that made up a customer service in the past tended to fall within the locus of control of a telco and its OSS. The modern customer experience extends far beyond the control of any one company or its OSS. An SLA – Service Level Agreement – only pertains to the sub-set of an experience that can be measured by the OSS. We can only aspire to offer an ELA – Experience Level Agreement – because we don’t yet have the mechanisms by which to measure or manage the entire experience.

The metrics that matter most for telcos today tend to revolve around customer experience (eg NPS). But aside from customer surveys, ratings and derived / contrived metrics, we don’t have electronic customer experience measurements.

Customer services are dead; long live customer experiences… if only we can invent a way to measure the whole scope of what makes up a customer experience.

Does Malcolm Gladwell’s 10,000 hours apply to OSS?

You’ve probably all heard of Malcolm Gladwell’s 10,000 hour rule from his book, Outliers? In it he suggests that roughly 10,000 hours of deliberate practice makes an individual world-class in their field. But is 10,000 hours enough in the field of OSS?

I look back to the year 2000, when I first started on OSS projects. Over the following 5 years or so, I averaged an 85 hour week (whilst being paid for a 40 hour week, but I just loved what I was doing!!). If we multiply 85 by 48 by 5, we end up with 20,400 hours. That’s double the Gladwell rule. And I was lucky to have been handed assignments across the whole gamut of OSS activities, not just monotonously repeating the same tasks over and over. But those first 5 years / 20,000+ hours were barely the tip of the iceberg in terms of OSS expertise.

Whilst 10,000 hours might work for repetitive activities such as golf, tennis, chess, music, etc, it’s probably less applicable to a multi-faceted field like OSS.

So, what does it take to make an OSS expert? Narrow focus on just one of the facets? Broad view across many facets? Experience just using, building, designing, optimising OSS, or all of those things? Study, practice or a combination of both?

If you’re an OSS expert, or you know any who stand head and shoulders above all others, how would you describe the path that got you / them there?
Or if I were to ask another way, how would you get an OSS newbie up to speed if you were asked to mentor them from your lofty position of expertise?

Who are more valuable, OSS hoarders or teachers?

Any long-term readers of this blog will have heard me talk about tripods, and how valuable they are to any OSS team. They’re the people who know about IT, operations/networks and the business context within which they’re working. They’re the most valuable people I’ve worked with in the world of OSS because they’re the ones who can connect the many disparate parts that make up an OSS.

But on reflection, there’s one other factor shared by almost all of the tripods I’ve worked with – they’re natural teachers. They want to impart any and all of the wisdom they’ve accumulated.

I once worked in an Asian country where teaching was valued incredibly highly. Teachers were put on the highest pedestal of the social hierarchy. Yet almost every single person in the organisation, all the way from the board that I was advising through to the workers in the field, hoarded their knowledge. Knowledge is power and it was definitely treated that way by the people in this organisation. Knowledge was treated like a finite resource.

It was a fascinating paradox. They valued teachers, and they valued the fact that I was open in sharing everything I could with them, yet they guarded their own knowledge from everyone else in their team.

I could see their rationale, sort of. Their unique knowledge made them important and impossible to replace, giving them job stability. But I could also not see their rationale at all. Let me summarise that thought in a single question – who have you found to be more valuable (and needing to be retained in their role), the genius hoarder of knowledge who can perform individual miracles or the connector who can lift and coordinate the contributions of the whole team to get things done?

I’d love to get your thoughts and experiences working with hoarders and teachers.

Build an OSS and they will come… or sometimes not

Build it and they will come.

This is not always true for OSS. Let me recount a few examples.

The project team is disconnected from the users – The team that’s building the OSS in parallel to existing operations doesn’t (or isn’t able to) engage with the end users of the OSS. Once it comes time for cut-over, the end users want to stick with what they know and don’t use the shiny new OSS. From painful experience I can attest that stakeholder management is under-utilised on large OSS projects.

Turf wars – Different groups within a customer are unable to gain consensus on the solution. For example, the operational design team gains the budget to build an OSS but the network assurance team doesn’t endorse this decision. The assurance team then decides not to endorse or support the OSS that is designed and built by the design team. I’ve seen an OSS worth tens of millions of dollars turned off less than 2 years after handover because of turf wars. Stakeholder management again, although this could be easier said than done in this situation.

It sounded like a good idea at the time – The very clever OSS solution team keeps coming up with great enhancements that don’t get used, for whatever reason (eg non fit-for-purpose, lack of awareness of its existence by users, lack of training, etc). I’ve seen a customer that introduced over 500 customisations to an off-the-shelf solution, yet hundreds of those customisations hadn’t been touched by users within a full year prior to doing a utilisation analysis. That’s right, not even used once in the preceding 12 months. Some made sense because they were once-off tools (eg custom migration activities), but many didn’t.

The new OSS is a scary beast – The new solution might be perfect for what the customer has requested in terms of functionality. But if the solution differs greatly from what the operators are used to, it can be too intimidating to use. A two-week classroom-based training course at the end of an OSS build doesn’t provide sufficient learning for operators to pick up all the nuances of the new system in the way they had with the old solution. Each significant new OSS needs an apprenticeship, not just a short-course.

It’s obsolete before it’s finished – OSS work in an environment of rapid change – networks, IT infrastructure, organisation models, processes, product offerings, regulatory shifts, disruptive innovation, etc. The longer an OSS takes to implement, the greater the likelihood of obsolescence. All the more reason for designing for incremental delivery of business value rather than big-bang delivery.

What other examples have you experienced where an OSS has been built, but the users haven’t come?

OSS data Ponzi scheme

“The more data you have, the more data you need to understand the data you have. You are engaged in a data Ponzi scheme… Could it be in service assurance and IT ops that more data equals less understanding?”
Phil Tee
in the opening address at the AIOps Symposium.

Interesting viewpoint right?

Given that our OSS hold shed-loads of data, Phil is saying we need lots of data to understand that data. Well, yes… and possibly no.

I have a theory that data alone doesn’t talk, but it’s great at answering questions. You could say that you need lots of data, although I’d argue – semantics, perhaps – that you actually need lots of knowledge / awareness to ask great questions. Perhaps that knowledge / awareness comes from seeding machine-led analysis tools (or our data scientists’ brains) with lots of data.

The more data you have, the more noise you need to find the signal amongst. That means you have to ask more questions of your data if you want to drive a return that justifies the cost of collecting and curating it all. Machine-led analytics certainly assist us in handling the volume and velocity of data our OSS create / collect, but that’s just asking the same question/s over and over. There’s almost no end to the questions that can be asked of our data, just a limit on the time in which we can ask them.

Does that make data a Ponzi scheme? A Ponzi scheme pays profits to earlier investors using funds obtained from newer investors. Eventually it must collapse, when the scheme runs out of new investors to fund the profits. In a data Ponzi scheme, insights from earlier (seed) data are paid for by obtaining new (streaming) data. But the stream of data reaching an OSS never runs out. So if we invest heavily in data (eg AI / ML, etc), at what point in the investment lifecycle will we stop creating new insights?

The OSS self-driving vehicle

I was lucky enough to get some time with a friend recently, a friend who’s running a machine-learning network assurance proof-of-concept (PoC).

He’s been really impressed with the results coming out of the PoC. However, one of the really interesting factors he’s been finding is how frequently BAU (business as usual) changes in the OSS data (eg changes in naming conventions, topologies, etc) would impact results. Little changes made by upstream systems effectively invalidated the baselines that the machine-learning engines had keyed in on. Those little changes meant the engine had to re-baseline / re-learn to build back up to previous insight levels. The alternative – to avoid invalidating the baseline – would be to re-normalise all of the data as soon as BAU changes are identified.
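To illustrate that re-normalisation idea, here’s a minimal sketch. The naming conventions below are hypothetical examples of drift, not anything from the actual PoC: raw device names get canonicalised before they reach the learning engine, so cosmetic renames by upstream systems don’t invalidate the learned baselines.

```python
import re

def canonical_key(device_name: str) -> str:
    """Map raw device names to a stable key so cosmetic renames don't
    invalidate learned baselines. The patterns below are hypothetical
    examples of naming-convention drift, not any real operator's rules."""
    name = device_name.strip().lower()
    name = name.replace("_", "-")                    # separator drift
    name = re.sub(r"(?<=[a-z])0+(?=\d)", "", name)   # e.g. syd01 -> syd1
    name = re.sub(r"-(?:router|rtr)-?", "-", name)   # role-name synonyms
    return name

# Two spellings of the same (hypothetical) device map to one stable key
assert canonical_key("SYD01-PE-ROUTER-3") == canonical_key("syd1_pe_rtr3")
```

The catch, as the PoC found, is that every new flavour of BAU change needs a new rule here – the normalisation layer itself becomes something to maintain.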

That got me wondering whether DevOps (or any other high-change environment) might actually hinder our attempts to get machine-led assurance optimisation. But more to the point, does constant change (at all levels of a telco business) hold us back from reaching our aim of closed-loop / zero-touch assurance?

Just like the proverbial self-driving car, will we always need someone at the wheel of our OSS just in case a situation arises that the machines haven’t seen before and/or can’t handle? How far into the future will it be before we have enough trust to take our hands off the OSS wheel and let the machines drive closed-loop processes without observation by us?

Optimisation Support Systems

We’ve heard of OSS being an acronym for operational support systems, operations support systems, even open source software. I have a new one for you today – Optimisation Support Systems – that exists for no purpose other than to drive a mindset shift.

“I think we have to transition from “expectations” in a hype sense to “expectations” in a goal sense. NFV is like any technology; it depends on a business case for what it proposes to do. There’s a lot wrong with living up to hype (like, it’s impossible), but living up to the goals set for a technology is never unrealistic. Much of the hype surrounding NFV was never linked to any real business case, any specific goal of the NFV ISG.”
Tom Nolle
in his blog here.

This is a really profound observation (and entire blog) from Tom. Our technology, OSS included, tends to be surrounded by “hyped” expectations – partly from our own optimistic desires, partly from vendor sales pitches. It’s far easier to build our expectations from hype than to actually understand and specify the goals that really matter. Goals that are end-to-end in manner and preferably quantifiable.

When embarking on a technology-led transformation, our aim is to “make things better,” obviously. A list of hundreds of functional requirements might help. However, having an up-front, clear understanding of the small number of use cases you’re optimising for tends to define much clearer goal-driven expectations.

Expanding your bag of OSS tricks

Let me ask you a question – when you’ve expanded your bag of tricks that help you to manage your OSS, where have they typically originated?

By reading? By doing? By asking? Through mentoring? Via training courses?
Relating to technical? People? Process? Product?
Operations? Network? Hardware? Software?
Design? Procure? Implement / delivery? Test? Deploy?
By retrospective thinking? Creative thinking? Refinement thinking?

If you were to highlight the questions above that are most relevant to the development of your bag of tricks, how much coverage does your pattern show?

There are so many facets to our OSS (ie tentacles on the OctopOSS), aren’t there? We have to have a large bag of tricks. Not only that, we need to be constantly adding new tricks too, right?

I tend to find that our typical approaches to OSS knowledge transfer cover only a small subset (think of the discussion topics at OSS conferences, which tend to focus on just the technical / architectural)… and don’t align with how we (or maybe just I) have developed capabilities in the past.

The question then becomes: how do we facilitate the broader learnings required to make our OSS great? How do we introduce learning opportunities for ourselves and our teams across vaguely related fields such as project management, change management, user interface design, process / workflows, creative thinking, etc?

Just in time design

It’s interesting how we tend to go in cycles. Back in the early days of OSS, the network operators tended to build their OSS from the ground up. Then we went through a phase of using Commercial off-the-shelf (COTS) OSS software developed by third-party vendors. We now seem to be cycling back towards in-house development, but with collaboration that includes vendors and external assistance through open-source projects like ONAP. Interesting too how Agile fits in with these cycles.

Regardless of where we are in the cycle for our OSS, as implementers we’re always challenged with finding the Goldilocks amount of documentation – not too heavy, not too light, but just right.

The Agile Manifesto espouses, “working software over comprehensive documentation.” Sounds good to me! It perplexes me that some OSS implementations are bogged down by lengthy up-front documentation phases, especially if we’re basing the solution on COTS offerings. These can really stall the momentum of a project.

Once a solution has been selected (which often does require significant analysis and documentation), I’m more of a proponent of getting the COTS software stood up, even if only in a sandpit environment. This is where just-in-time (JIT) documentation comes into play. Rather than having every aspect of the solution documented (eg process flows, data models, high availability models, physical connectivity, logical connectivity, databases, etc, etc), we only need enough documentation for collaborative stakeholders to do their parts (eg IT to set up hardware / hosting, networks to set up physical connectivity, vendor to provide software, integrator to perform build, etc) to stand up a vanilla solution.

Then it’s time to start building trial scenarios through the solution. There’s usually quite a bit of trial and error in this stage, as we seek to optimise the scenarios for the intended users. Then we add a few more scenarios.

There’s little point trying to document the solution in detail before a scenario is trialled, but some documentation can be really helpful. For example, if the scenario is to build a small sub-section of a network, then draw up some diagrams of that sub-network that include the intended naming conventions for each object (eg device, physical connectivity, addresses, logical connectivity, etc). That allows you to determine whether there are unexpected challenges with naming conventions, data modelling, process design, etc. There are always unexpected challenges that arise!

I figure you’re better off documenting the real challenges than theorising on the “what if?” challenges, which is what often happens with up-front documentation exercises. There are always brilliant stakeholders who can imagine millions of possible challenges, but these often bog the design phase down.

With JIT design, once the solution evolves, the documentation can evolve too… if there is an ongoing reason for its existence (eg as a user guide, for a test plan, as a training cheat-sheet, a record of configuration for fault-finding purposes, etc).

Interestingly, the first value in the Agile Manifesto is, “individuals and interactions over processes and tools.” This is where the COTS vs in-house-dev comes back into play. When using COTS software, individuals, interactions and processes are partly driven by what the tools support. COTS functionality constrains us but we can still use Agile configuration and customisation to optimise our solution for our customers’ needs (where cost-benefit permits).

Having a working set of vanilla tools allows our customers to get a much better feel for what needs to be done rather than trying to understand the intent of up-front design documentation. And that’s the key to great customer outcomes – having the customers knowledgeable enough about the real solution (not hypothetical solutions) to make the most informed decisions possible.

Of course there are always challenges with this JIT design model too, especially when third-party contracts are involved!

An OSS data creation brain-fade

Many years ago, I made a data migration blunder that slowed a production OSS down to a crawl. Actually, less than a crawl. It almost became unusable.

I was tasked with creating a production database of a carrier’s entire network inventory, including data migration for a bunch of Nortel Passport ATM switches (yes, it was that long ago).

  • There were around 70 of these devices in the network
  • 14 usable slots in each device (ie slots not reserved for processing, resilience, etc)
  • Depending on the card type there were different port densities, but let’s say there were 4 physical ports per slot
  • Up to 2,000 VPIs per port
  • Up to 65,000 VCIs per VPI
  • The customer was running SPVC

To make it easier for the operator to create a new customer service, I thought I should script-create every VPI/VCI on every port on every device. That would allow the operator to just select any available VPI/VCI from within the OSS when provisioning (or later, auto-provisioning) a service.

There was just one problem with this brainwave. For this particular OSS, each VPI/VCI represented a logical port that became an entry alongside physical ports in the OSS‘s ports table… You can see what’s about to happen can’t you? If only I could’ve….

My script auto-created nearly 510 billion VCI logical ports – plus almost 8 million VPIs and a few thousand physical ports on top – in the ports table of a production database. And that was just the ATM switches!
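The scale of the blunder is easy to reconstruct from the rough figures listed above. A quick back-of-the-envelope script like this (all values approximate, per the list) would have flagged the problem before the migration ever ran:

```python
# Back-of-the-envelope sizing of the ports table, using the rough
# capacity figures quoted above (all values approximate).
devices = 70
usable_slots = 14          # slots not reserved for processing/resilience
ports_per_slot = 4
vpis_per_port = 2_000
vcis_per_vpi = 65_000

physical_ports = devices * usable_slots * ports_per_slot
vpi_records = physical_ports * vpis_per_port
vci_records = vpi_records * vcis_per_vpi
total_rows = physical_ports + vpi_records + vci_records

print(f"{physical_ports:,} physical ports")        # 3,920
print(f"{vpi_records:,} VPI logical ports")        # 7,840,000
print(f"{vci_records:,} VCI logical ports")        # 509,600,000,000
print(f"{total_rows:,} rows in the ports table")   # 509,607,843,920
```

Half a trillion rows in a table that the OSS scanned on every port lookup – no wonder it slowed to less than a crawl.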

So instead of making life easier for the operators, it actually brought the OSS‘s database to a near stand-still. Brilliant!!

Luckily for me, it was a greenfields OSS build and the production database was still being built up in readiness for operational users to take the reins. I was able to strip all the ports out and try again with a less idiotic data creation plan.

The reality was that there’s no way the customer could’ve ever used the 2,000 x 65,000 VPI/VCI groupings I’d created on every single physical port. Put it this way: there were far fewer than 130 million services across all service types, across all carriers, across that whole country!

Instead, we just changed the service activation process to manually add new VPI/VCIs into the database on demand as one of the pre-cursor activities when creating each new customer service.

From that experience onwards, I have stuck to the Minimum Viable Data (MVD) mantra.

Are OSS business tools or technical tools?

I’d like to get your opinion on this question – are OSS business tools or technical tools?

We can say that BSS are as the name implies – business support systems.
We can say that NMS / EMS / NEMS are network management tools – technical tools.

The OSS layer fits between those two layers. It’s where the business and technology worlds combine (collide??).

If we use the word Operations / Operational to represent the “O” in OSS, it might imply that they exist to help operate technology. Many people in the industry undoubtedly see OSS as technical, operational tools. If I look back to when I first started on OSS, I probably had this same perception – I primarily faced the OSS / NMS interface in the early days.

But change the “O” to operationalisation and it changes the perspective slightly. It encourages you to see that the technology / network is the means via which business models can be implemented. It’s our OSS that allow operationalisation to happen.

So, let me re-ask the question – are OSS business tools or technical tools?

They’re both right? And therefore as OSS operators / developers / implementers, we need to expand our vision of what OSS do and who they service… which helps us get to Simon Sinek’s Why for OSS.

OSS of the past probably tended to be the point of collision and friction between business and tech groups within an organisation. Some modern OSS architectures give me the impression of being meet-in-the-middle tools, which will hopefully bring more collaboration between fiefdoms. Time will tell.

Unexpected OSS indicators

Yesterday’s post talked about using customer contacts as a real-time proxy metric for friction in the business, which could also be a directional indicator for customer experience.

That got me wondering what other proxy metrics might be used to provide predictive indicators of what’s happening in your network, OSS and/or BSS. Apparently, “Colt aims to enhance its service assurance capabilities by taking non-traditional data (signal strength, power, temperature, etc.) from network elements (cards, links, etc.) to predict potential faults,” according to James Crawshaw here on Light Reading.

What about environmental metrics like humidity, temperature, movement, power stability/disturbance?
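A proxy metric doesn’t need to be sophisticated to be useful either. As a purely hypothetical sketch (not Colt’s actual method – the window and threshold values are illustrative only), even a simple drift check over per-card temperature readings could nominate candidates for pre-emptive investigation:

```python
def drifting(samples, window=5, threshold=2.0):
    """Flag an upward drift: the mean of the last `window` samples
    exceeds the mean of all earlier samples by more than `threshold`.
    Window and threshold values here are illustrative only."""
    if len(samples) < 2 * window:
        return False  # not enough history to compare against
    recent = samples[-window:]
    baseline = samples[:-window]
    return sum(recent) / len(recent) - sum(baseline) / len(baseline) > threshold

# A card that ran steady at 40 units then crept up to 45 gets flagged
assert drifting([40] * 10 + [45] * 5)
assert not drifting([40] * 15)
```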

I’d love to hear about what proxies you use or what unexpected metrics you’ve found to have shone the spotlight on friction in your organisation.

Automated testing and new starters

Can you guess what automated OSS testing and OSS new starters have in common?

Both are best front-loaded.

As a consultant, I’ve been a new starter on many occasions, as well as being assigned new starters on probably even more occasions. From both sides of that fence, it’s far more effective to front-load the new starter with training / knowledge to bring them up to speed / effectiveness, as soon as possible. Far more useful from the perspective of quality, re-work, self-sufficiency, etc.

Unfortunately, this is easier said than done. For a start, new starters are generally only required because the existing team is completely busy. So busy that it’s really hard to drop everything to make time to deliver up-front training. It reminds me of this diagram.
We're too busy

Front-loading of automated testing is similar… it takes lots of time to get it operational, but once in place it allows the team to focus on business outcomes faster.

In both cases, front-loading leads to a decrease in efficiency over the first few days, but tends to justify the effort soon thereafter. What other examples of front-loading can you think of in OSS?

The OSS dart-board analogy

“The dartboard, by contrast, is not remotely logical, but is somehow brilliant. The 20 sector sits between the dismal scores of five and one. Most players aim for the triple-20, because that’s what professionals do. However, for all but the best darts players, this is a mistake. If you are not very good at darts, your best opening approach is not to aim at triple-20 at all. Instead, aim at the south-west quadrant of the board, towards 19 and 16. You won’t get 180 that way, but nor will you score three. It’s a common mistake in darts to assume you should simply aim for the highest possible score. You should also consider the consequences if you miss.”
Rory Sutherland
on Wired.

When aggressive corporate goals and metrics are combined with brilliant solution architects, we tend to aim for triple-20 with our OSS solutions, don’t we? The problem is, when it comes to delivery, we don’t tend to have the laser-sharp precision of a professional darts player, do we? No matter how experienced we are, there tend to be hidden surprises – some technical, some personal (or should I say inter-personal?), some contractual, etc – that deflect our aim.

The OSS dart-board analogy asks whether we should set the lofty goal of a triple-20 [yellow circle below], with high risk of dismal results if we miss (think too about the OSS stretch-goal rule); or whether we’re better to target the 19/16 corner of the board [blue circle below], which has scaled-back objectives but a corresponding reduction in risk.

OSS Dart-board Analogy

Roland Leners posed the following brilliant question, “What if we built OSS and IT systems around people’s willingness to change instead of against corporate goals and metrics? Would the corporation be worse off at the end?”, in response to a recent post called, “Did we forget the OSS operating model?”

There are too many facets to Roland’s question to count, but I suspect that in many cases the corporate goals / metrics are akin to the triple-20 focus, whilst the team’s willingness to change aligns with the 19/16 corner. And that is bound to reduce delivery risk.

I’d love to hear your thoughts!!

Dematerialisation of OSS

In 1972, the Club of Rome in its report The Limits to Growth predicted a steadily increasing demand for material as both economies and populations grew. The report predicted that continually increasing resource demand would eventually lead to an abrupt economic collapse. Studies on material use and economic growth show instead that society is gaining the same economic growth with much less physical material required. Between 1977 and 2001, the amount of material required to meet all needs of Americans fell from 1.18 trillion pounds to 1.08 trillion pounds, even though the country’s population increased by 55 million people. Al Gore similarly noted in 1999 that since 1949, while the economy tripled, the weight of goods produced did not change.
Wikipedia on the topic of Dematerialisation.

The weight of OSS transaction volumes appears to be increasing year on year as we add more stuff to our OSS. The touchpoint explosion is amplifying this further. Luckily, our platforms / middleware, compute, networks and storage have all been scaling as well so the increased weight has not been as noticeable as it might have been (even though we’ve all worked on OSS that have been buckling under the weight of transaction volumes right?).

Does it also make sense that when there is an incremental cost per transaction (eg via the increasingly prevalent cloud or “as a service” offerings), we pay closer attention to transaction volumes because there is a greater perception of cost to us? But not for “internal” transactions, where there is little perceived incremental cost?

But it’s not so much the transaction processing volumes that are the problem directly. It’s more by implication. For each additional transaction there’s the risk of a hand-off being missed or mis-mapped or slowing down overall activity processing times. For each additional transaction type, there’s additional mapping, testing and regression testing effort as well as an increased risk of things going wrong.
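As a toy sketch of what such measurement might look like (the transaction types and hop labels below are entirely made up), tallying hand-offs per transaction type from integration logs quickly shows where the “weight” accumulates:

```python
from collections import Counter

# Hypothetical hand-off log entries: (transaction type, hand-off point).
# In practice these would be harvested from middleware / integration logs.
log = [
    ("activate_service", "CRM->orchestrator"),
    ("activate_service", "orchestrator->inventory"),
    ("activate_service", "inventory->activation"),
    ("fault_ticket", "alarm_mgr->ticketing"),
    ("fault_ticket", "alarm_mgr->ticketing"),
]

volumes = Counter(txn_type for txn_type, _ in log)

# Transaction types touching many distinct hand-off points are the
# dematerialisation candidates: each hop adds mapping, testing and risk.
for txn_type, count in volumes.most_common():
    hops = len({h for t, h in log if t == txn_type})
    print(f"{txn_type}: {count} hand-offs across {hops} integration points")
```

Even this crude view separates high-volume flows from high-complexity flows – and it’s the latter that tend to justify re-factoring investment.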

Do you measure transaction flow volumes across your entire OSS suite? Does it provide an indication of where efficiency optimisation (ie dematerialisation) could occur and guide your re-factoring investments? Does it guide you on process optimisation efforts?

Getting lost in the flow of OSS

“The myth is that people play games because they want to avoid challenging work. The reality is, people play games to engage in well-designed, challenging work. The only thing they are avoiding is poorly designed work. In essence, we are replacing poorly designed work with work that provides a more meaningful challenge and offers a richer sense of progress.
And we should note at this point that just because something is a game, it doesn’t mean it’s good. As we’ll soon see, it can be argued that everything is a game. The difference is in the design.
Really good games have been ruthlessly play-tested and calibrated to the point where achieving a state of flow is almost guaranteed for many. Play-testing is just another word for iterative development, which is essentially the conducting of progressive experiments.”
Dr Jason Fox
in his book, “The Game Changer.”

Reflect with me for a moment – when it comes to your OSS activities, in which situations do you consistently get into a state of flow?

For me, it’s in quite a few different scenarios, but one in particular stands out – building up a network model in an inventory management tool. This activity starts with building models / patterns of devices, services, connections, etc, then using the models to build a replica of the network, either manually or via data migration, within the inventory tool(s). I can lose complete track of time when doing this task. In fact I have almost every single time I’ve performed this task.

Whilst not being much of a gamer, I suspect it’s no coincidence that by far my favourite video game genre is empire-building strategy games like the Civilization series. Back in the old days, I could easily get lost in them for hours too. Could we draw a comparison from getting that same sense of achievement, seeing a network (of devices in OSS, of cities in the empire strategy games) grow rapidly as a result of your actions?

What about fans of first-person shooter games? I wonder whether they get into a state of flow on assurance activities, where they get to hunt down and annihilate every fault in their terrain?

What about fans of horse grooming and riding games? Well…. let’s not go there. 🙂

Anyway, enough of all these reflections and musings. I would like to share three concepts with you that relate to Dr Fox’s quote above:

  1. Gamification – I feel that there is MASSIVE scope for gamification of our OSS, but I’ve yet to hear of any OSS developers using game design principles
  2. Play-testing – How many OSS are you aware of that have been, “ruthlessly play-tested and calibrated?” In almost every OSS situation I’ve seen, as soon as functionality meets requirements, we stop and move on to the next feature. We don’t pause and try a few more variants to see which is most likely to result in a great design, refining the solution, “to the point where achieving a state of flow is almost guaranteed for many.”
  3. Richer Progress – How many of our end-to-end workflows are designed with “a richer sense of progress” in mind? Feedback tends to come through retrospective reporting (if at all), rarely through the OSS game-play itself. Chances are that our end-to-end processes actually flow through multiple unrelated applications, so it comes back to clever integration design to deliver more compelling feedback. We simply don’t use enough specialist creative designers in OSS

Training network engineers to code, not vice versa

Did any of you read the Light Reading link in yesterday’s post about Google creating automated network operations services? If you haven’t, it’s well worth a read.

If you did, then you may’ve also noticed a reference to Finland’s Elisa selling its automation smarts to other telcos. This is another interesting business model disruption for the OSS market, although I’ll reserve judgement on how disruptive it will be until Elisa sells to a few more operators.

What did catch my eye in the Elisa article (again by Light Reading’s Iain Morris), is this paragraph:
Automation has not been hassle-free for Elisa. Instilling a software culture throughout the organization has been a challenge, acknowledges [Kirsi] Valtari. Rather than recruiting software expertise, Elisa concentrated on retraining the people it already had. During internal training courses, network engineers have been taught to code in Python, a popular programming language, and to write algorithms for a self-optimizing network (or SON). “The idea was to get engineers who were previously doing manual optimization to think about automating it,” says Valtari. “These people understand network problems and so it is a win-win outcome to go down this route.”

It provides a really interesting perspective on the diagram below (from a 2014 post about the ideal skill-set for the future of networking).

There is an undoubted increase in the level of network / IT overlap (eg SDN). Most operators appear to be taking the path of hiring for IT and hoping they’ll grow to understand networks. Elisa is going the opposite way and training their network engineers to code.
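As a toy illustration of the shift Valtari describes – from manual optimisation to scripted checks – here’s the kind of rule a retrained network engineer might automate in Python. The KPI data, thresholds and function name are all invented for illustration; real SON logic works off live performance-management feeds:

```python
# Hypothetical per-cell KPIs; in practice these would come from
# the network's performance-management system.
cells = {
    "cell-A": {"utilisation": 0.92, "neighbour": "cell-B"},
    "cell-B": {"utilisation": 0.35, "neighbour": "cell-A"},
}

def rebalance_candidates(cells, high=0.85, low=0.50):
    """Flag congested cells that have an under-used neighbour to offload to."""
    candidates = []
    for name, kpi in cells.items():
        neighbour = cells.get(kpi["neighbour"], {})
        if kpi["utilisation"] > high and neighbour.get("utilisation", 1.0) < low:
            candidates.append((name, kpi["neighbour"]))
    return candidates

# The check an engineer once ran by eye, now repeatable on every KPI cycle.
print(rebalance_candidates(cells))
```

The value isn’t in the twenty lines of code; it’s that the person writing them already understands why the manual check mattered.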

With either path, if they then train their multi-talented engineers to understand the business (the red intersect), then they’ll have OSS experts on their hands, right folks?? 😉

A purple cow in our OSS paddock

A few years ago, I read a book that had a big impact on the way I thought about OSS and OSS product development. Funnily enough, the book had nothing to do with OSS or product development. It was a book about marketing – a subject that I wasn’t very familiar with at the time, but am now fascinated with.

And the book? Purple Cow by Seth Godin.

The premise behind the book is that when we go on a trip into the countryside, we notice the first brown or black cows, but after a while we don’t pay attention to them anymore. The novelty has worn off and we filter them out. But if there was a purple cow, that would be remarkable. It would definitely stand out from all the other cows and be talked about. Seth promoted the concept of building something into your products that make them remarkable, worth talking about.

I recently heard an interview with Seth. Despite the book being launched in 2003, apparently he’s still asked on a regular basis whether idea X is a purple cow. His answer is always the same – “I don’t decide whether your idea is a purple cow. The market does.”

That one comment brought a whole new perspective to me. As hard as we might try to build something into our OSS products that create a word-of-mouth buzz, ultimately we don’t decide if it’s a purple cow concept. The market does.

So let me ask you a question. You’ve probably seen plenty of different OSS products over the years (I know I have). How many of them are so remarkable that you want to talk about them with your OSS colleagues, or even have a single feature that’s remarkable enough to discuss?

There are a lot of quite brilliant OSS products out there, but I would still classify almost all of them as brown cows. Brilliant in their own right, but unremarkable for their relative sameness to lots of others.

The two stand-out purple cows for me in recent times have been CROSS’ built-in data quality ranking and Moogsoft’s Incident Room model. But it’s not for me to decide. The market will ultimately decide whether these features are actual purple cows.

I’d love to hear about your most memorable OSS purple cows.

You may also be wondering how to go about developing your own purple OSS cow. Well, I start by asking, “What are people complaining about?” or “What are our biggest issues?” That’s where the opportunities lie. Having discovered those issues, the challenge is to solve them in an entirely different, but better, way. I figure that if people care enough to complain about those issues, then they’re sure to talk about any product that solves the problem for them.

An embarrassing experience on an overseas OSS project

The video below has been doing the rounds on LinkedIn lately. What is your key takeaway from the video?

Most would say the perfection of the goal, but for me, that perfection was a result of the hand-offs, which were almost immediate and precise, putting team-mates into better positions. The final shot didn’t need the brilliance of a Messi or Ronaldo to put the ball into the net.

Whilst based overseas on an OSS project I played in an expat Australian Rules Football team. Aussie Rules, by the way, is completely different from the game of soccer played in the video above (check out this link to spot the differences and to see why I think it’s the greatest game in the world). Whilst training one afternoon, we were on an adjacent pitch to one of that country’s regional soccer teams.

Watching from the sidelines, we could see that each of the regional team’s players had a dazzling array of foot skills. When we challenged them to a game of soccer we were wondering if this was going to be embarrassing. None of us had anywhere near their talent, stamina, knowledge of the game of soccer, etc.

As it turns out, it WAS a bit embarrassing. We won 5-1, having led 5-0 up until the final few minutes. We didn’t deserve to beat such a talented team, so you’re probably wondering how (and how it relates to OSS).

Well, whenever one of their players got the ball, they’d show off their sublime skills by running rings around our players, but ultimately going around in circles and becoming corralled. They’d rarely dish off a pass when a teammate was in space like the team in the video above.

By contrast, our team was too clumsy to control the ball and had to pass it off quickly to teammates in space. It helped us bring our teammates into the game and keep moving forward. Clumsy passing and equally clumsy goals.

The analogy for OSS is that our solutions can be so complex that we get caught up in the details and go around in circles (sometimes through trying to demonstrate our intellectual skills) rather than just finding ways to reduce complexity and keep momentum heading towards the goals. In some cases, the best way-forward solution might not even use the OSS to solve certain problems.

Oh, and by the way, the regional team did score that final goal… by finally realising that they should use more passing to bring their team-mates into the game. It probably looked a little like the goal in the video above.

Assuming the other person can’t come up with the answer

Just a quick word of warning. This blog starts off away from OSS, but please persevere. It ends up back with a couple of key OSS learnings.

Long ago in the technology consulting game, I came to an important realisation. When arriving on a fresh new client site, chances are that many of the “easy technical solutions” that pop into my head to solve the client’s situation have already been tried by the client. After all, the client is almost always staffed with clever people, but they also know the local context far better than me.

Alan Weiss captures the sentiment brilliantly in the quote below.
I’ve found that in many instances a client will solve his or her own problem by talking it through while I simply listen. I may be asked to reaffirm or validate the wisdom of the solution, but the other person has engaged in some nifty self-therapy in the meantime.
I’m often told that I’m an excellent problem solver in these discussions! But all I’ve really done is listen without interrupting or even trying to interpret.
Here are the keys:
• Never feel that you’re not valuable if you’re not actively contributing.
• Practice “active listening”.
• Never cut-off or interrupt the other person.
• Stop trying to prove how smart you are.
• Stop assuming the other person can’t come up with the answer.

I’m male and an engineer, so some might say I’m predisposed to jumping straight into problem-solving mode before fully understanding a situation… I have to admit that I do have to fight really hard to resist this urge (and sometimes don’t succeed). But enough about stereotypes.

One of the techniques that I’ve found more successful is to pose investigative questions rather than “brilliant” answers. If any gaps appear, then provide bridging connections (ie through broader industry trends, ideas, people, process, technology, contracts, etc) that supplement the answers the client already has. These bridges might be built in the form of statements, but often it’s just through leading questions that allow the client to resolve / affirm for themselves.

But as promised earlier, this is more an OSS blog than a consulting one, so there is an OSS call-out.

You’ll notice in the first paragraph that I wrote “easy technical solutions,” rather than “easy solutions.” In almost all cases, the client representatives have great coverage of the technical side of the problems. They know their technology well, they’ve already tried (or thought about) many of the technology alternatives.

However, the gaps I’ve found to be surprisingly common aren’t related to technology at all. A Toyota five-whys analysis shows they’re factors like organisational change management, executive buy-in, change controls, availability of skilled resources, requirement / objective mismatches, stakeholder management, etc, as described in this recent post.

It’s no coincidence, then, that the blog roll here on PAOSS often looks beyond the technology of OSS.

If you’re an OSS problem solver, three messages:
1) Stop assuming the other person (client, colleague, etc) can’t come up with the answer
2) Broaden your vision to see beyond the technology solution
3) Get great at asking questions (if you aren’t already of course)

Does this align or conflict with your experiences?