An embarrassing experience on an overseas OSS project

The video below has been doing the rounds on LinkedIn lately. What is your key takeaway from the video?

Most would say the perfection, but for me, that perfection was a result of the hand-offs, which were almost immediate and precise, putting team-mates into better positions. The final shot didn’t need the brilliance of a Messi or Ronaldo to put the ball into the net.

Whilst based overseas on an OSS project, I played in an expat Australian Rules Football team. Aussie Rules, by the way, is completely different from the game of soccer played in the video above (check out this link to spot the differences and to see why I think it’s the greatest game in the world). One afternoon at training, we were on a pitch adjacent to one of that country’s regional soccer teams.

Watching from the sidelines, we could see that each of the regional team’s players had a dazzling array of foot skills. When we challenged them to a game of soccer, we wondered if this was going to be embarrassing. None of us had anywhere near their talent, stamina, knowledge of the game of soccer, etc.

As it turns out, it WAS a bit embarrassing. We won 5-1, having led 5-0 up until the final few minutes. We didn’t deserve to beat such a talented team, so you’re probably wondering how (and how it relates to OSS).

Well, whenever one of their players got the ball, they’d show off their sublime skills by running rings around our players, but they’d ultimately go around in circles and become corralled. They’d rarely dish off a pass to a teammate in space, unlike the team in the video above.

By contrast, our team was too clumsy to control the ball and had to pass it off quickly to players in space. That helped us bring our teammates into the game and keep moving forward. Clumsy passing and equally clumsy goals.

The analogy for OSS is that our solutions can be so complex that we get caught up in the details and go around in circles (sometimes through trying to demonstrate our intellectual skills) rather than just finding ways to reduce complexity and keep momentum heading towards the goals. In some cases, the best way forward might not even use the OSS to solve certain problems.

Oh, and by the way, the regional team did score that final goal… by finally realising that they should use more passing to bring their team-mates into the game. It probably looked a little like the goal in the video above.

Designing an OSS from NFRs backwards

When we’re preparing a design (or capturing requirements) for a new or updated OSS, I suspect most of us design with functional requirements (FRs) in mind. That is, our first line of thinking is on the shiny new features or system behaviours we have to implement.

But what if we were to flip this completely? What if we were to design against Non-Functional Requirements (NFRs) instead? [In case you’re not familiar with NFRs, they’re the requirements that measure the function or performance of a solution rather than features / behaviours]

What if we already have all the really important functionality in our OSS (the 80/20 rule suggests you will), but those functions are just really inefficient to use? What if we can meet the FR of searching a database for a piece of inventory… but our loaded system takes 5 minutes to return the results of the query? It doesn’t sound like much, but if it’s an important task that you’re doing dozens of times a day, then you’re wasting hours each day. Worse still, what if it’s a system task that needs to run hundreds of times a day…

I personally find NFRs really hard to design for because we usually won’t know response times until we’ve actually built the functionality and tried different load / fail-over / usage-pattern (eg different query types) models on the available infrastructure. Yes, we can benchmark, but that tends to be a bit speculative.

Unfortunately, if we’ve built a solution that works, but end up with queries that take minutes when our SLAs might only allow 5-15 minutes for the whole task, then we’ve possibly failed in our design role.

We can claim that it’s not our fault. We only have finite infrastructure (eg compute, storage, network), each with inherent performance constraints. It is what it is, right?… Maybe.

What if we took the perspective of determining our most important features (the 80/20 rule again), setting NFR benchmarks for each, and then designing the solution back from there? That is, putting effort into making our most important features super-efficient rather than adding nice-to-have features (features that will increase load, thus making the NFRs even harder to hit!)?
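
One practical trick for holding features to their targets is to codify each NFR benchmark as an automated regression test that runs with every release. Here’s a minimal sketch in Python; the inventory_search function and the 2-second p95 target are hypothetical placeholders:

```python
import statistics
import time

P95_TARGET_SECONDS = 2.0  # hypothetical NFR benchmark for the top feature

def inventory_search(item_id):
    """Placeholder standing in for the real (possibly slow) inventory query."""
    time.sleep(0.01)
    return {"id": item_id}

def p95_latency(query_fn, samples=50):
    """Time repeated calls and return the ~95th-percentile latency in seconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        query_fn()
        timings.append(time.perf_counter() - start)
    return statistics.quantiles(timings, n=20)[-1]  # last cut point ~ p95

def test_inventory_search_meets_nfr():
    p95 = p95_latency(lambda: inventory_search("PORT-1234"))
    assert p95 <= P95_TARGET_SECONDS, f"NFR breached: p95 = {p95:.2f}s"
```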

In this new world of open source, we have more “product control” than we’ve probably had before. This gives us more of a chance to start with the non-functionals and work back towards a product. An example might be redesigning our inventory to work with graph database technology rather than the existing relational databases.

How feasible is this NFR concept? Do you know anyone in OSS who does it this way? Do you have any clever tricks for ensuring your developed features stay within NFR targets?

Assuming the other person can’t come up with the answer

Just a quick word of warning. This blog starts off away from OSS, but please persevere. It ends up back with a couple of key OSS learnings.

Long ago in the technology consulting game, I came to an important realisation. When arriving on a fresh new client site, chances are that many of the “easy technical solutions” that pop into my head to solve the client’s situation have already been tried by the client. After all, the client is almost always staffed with clever people who know the local context far better than I do.

Alan Weiss captures the sentiment brilliantly in the quote below.
I’ve found that in many instances a client will solve his or her own problem by talking it through while I simply listen. I may be asked to reaffirm or validate the wisdom of the solution, but the other person has engaged in some nifty self-therapy in the meantime.
I’m often told that I’m an excellent problem solver in these discussions! But all I’ve really done is listen without interrupting or even trying to interpret.
Here are the keys:
• Never feel that you’re not valuable if you’re not actively contributing.
• Practice “active listening”.
• Never cut-off or interrupt the other person.
• Stop trying to prove how smart you are.
• Stop assuming the other person can’t come up with the answer.”

I’m male and an engineer, so some might say I’m predisposed to immediately jumping into problem-solving mode before fully understanding a situation… I have to admit that I do have to fight really hard to resist this urge (and sometimes don’t succeed). But enough about stereotypes.

One of the techniques that I’ve found to be more successful is to pose investigative questions rather than posing “brilliant” answers. Where gaps appear, I provide bridging connections (ie to broader industry trends, ideas, people, process, technology, contracts, etc) that supplement the answers the client already has. These bridges might be built in the form of statements, but often it’s just through leading questions that allow the client to resolve / affirm the way forward for themselves.

But as promised earlier, this is more an OSS blog than a consulting one, so there is an OSS call-out.

You’ll notice in the first paragraph that I wrote “easy technical solutions” rather than “easy solutions.” In almost all cases, the client representatives have great coverage of the technical side of their problems. They know their technology well and have already tried (or thought about) many of the technology alternatives.

However, the gaps I’ve found to be surprisingly common aren’t related to technology at all. A Toyota five-why analysis shows they tend to be factors like organisational change management, executive buy-in, change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc, as described in this recent post.

It’s no coincidence, then, that the blog roll here on PAOSS often looks beyond the technology of OSS.

If you’re an OSS problem solver, three messages:
1) Stop assuming the other person (client, colleague, etc) can’t come up with the answer
2) Broaden your vision to see beyond the technology solution
3) Get great at asking questions (if you aren’t already of course)

Does this align or conflict with your experiences?

Blown away by one innovation. Now to extend on it

Our most recent two posts, from yesterday and Friday, have talked about one stunningly simple idea that helps to overcome one of OSS’ biggest challenges – data quality. Those posts have stimulated quite a bit of dialogue and it seems there is some consensus about the cleverness of the idea.

I don’t know if the idea will change the OSS landscape (hopefully it will), or just continue to be a strong selling point for CROSS Network Intelligence, but it has prompted me to think a little longer about innovating around OSS’ biggest challenges.

Our standard approach of just adding more coats of process around our problems, or building up layers of incremental improvements, isn’t going to solve them any time soon (as indicated in our OSS Call for Innovation). So how do we solve them?

Firstly, we have to be able to articulate the problems! If we know what they are, perhaps we can take inspiration from the CROSS innovation to spur us into new ways of thinking.

Our biggest problem is complexity. That has infiltrated almost every aspect of our OSS. There are so many posts about identifying and resolving complexity here on PAOSS that we might skip over that one in this post.

I decided to go back to a very old post that used the Toyota 5-whys approach to identify the real cause of the problems we face in OSS [I probably should update that analysis because I have a whole bunch of additional ideas now, as I’m sure you do too… suggested improvements welcomed BTW].

What do you notice about the root-causes in that 5-whys analysis? Most of the biggest causes aren’t related to system design at all (although there are plenty of problems to fix in that space too!). CROSS has tackled the data quality root-cause, but almost all of the others are human-centric factors – change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc. Yet, we always seem to see OSS as a technical problem.

How do you fix those people challenges? Ken Segall puts it this way: “When process is king, ideas will never be. It takes only common sense to recognize that the more layers you add to a process, the more watered down the final work will become.” Easier said than done, but a worthy objective!

I’ve just been blown away by the most elegant OSS innovation I’ve seen in decades

Looking back, I now consider myself extremely lucky to have worked with an amazing product on my first OSS project (all the way back in 2000). And I say amazing because its underlying data models and core product architecture are still better than any other I’ve worked with in the two decades since. The core is the most elegant, simple and powerful I’ve seen to date. Most importantly, the models were designed to cope with any technology, product or service variant that could be modelled as a hierarchy, whether physical or virtual / logical. I never found a technology that couldn’t be modelled into the core product, and it required no special overlays to implement a new network model. Sadly, the company no longer exists and the product is languishing on the books of the company that bought out the assets but isn’t leveraging them.

Having been so spoilt on the first assignment, I’ve been slightly underwhelmed by the level of elegant innovation I’ve observed in OSS since. That’s possibly part of the reason for the OSS Call for Innovation published late last year. There have been many exciting innovations introduced since, but many modern tools are still more complex and complicated than they should be, for implementers and operators alike.

But during a product demo last week, I was blown away by an innovation that was so simple in concept, yet so powerful that it is probably the single most impressive innovation I’ve seen since that first OSS. Like any new elegant solution, it left me wondering why it hasn’t been thought of previously. You’re probably wondering what it is. Well first let me start by explaining the problem that it seeks to overcome.

Many inventory-based OSS rely on highly structured and hierarchical data. This is a double-edged sword. Significant inter-relationship of data increases the insight generation opportunities, but the downside is that it can be immensely challenging to get the data right (and to maintain a high-quality data state). Limited data inter-relationships make the project easier to implement, but tend to allow less rich data analyses. In particular, connectivity data (eg circuits, cables, bearers, VPNs, etc) can be a massive challenge because it requires the linking of separate silos of data, often with no linking key. In fact, the data quality problem was probably one of the most significant root-causes of the demise of my first OSS client.

Now getting back to the present. The product feature that blew me away was the first I’ve seen that allows significant inter-relationship of data (yet in a simple data model), but still copes with poor data quality. Let’s say your OSS has a hierarchical data model that comprises Location, Rack, Equipment, Card, Port (or similar) and you have to make a connection from one device’s port to another’s. In most cases, you have to build up the whole pyramid of data perfectly for each device before you can create a customer connection between them. Let’s also say that for one device you have a full pyramid of perfect data, but for the other end, you only know the location.

The simple feature is to connect a port to a location now (or any other point-to-point on the hierarchy) and clean up the far-end data later if you wish. It also allows the intermediate hops on the route to be connected at any point in the hierarchy. That sounds too simple, right? Yet most inventory tools don’t allow connections to be made between different levels of their hierarchies. For implementers, data migration / creation / cleansing gets a whole lot simpler with this approach. But what’s even more impressive is that the solution then assigns a data-quality ranking to the data that’s just been created. The quality ranking is subsequently considered by tools such as circuit design / routing, impact analysis, etc. However, you’ll have noted that the data quality issue still hasn’t been fixed. That’s correct, so the product then provides tools that show where quality rankings are lower, allowing remediation activities to be prioritised.
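
To make the concept a little more tangible, here’s a toy sketch (my own illustration, not CROSS’s actual implementation) of a connection object that accepts endpoints at any level of the hierarchy and derives a data-quality ranking from how precisely each end is resolved:

```python
from dataclasses import dataclass

# The hierarchy levels, from least to most specific.
HIERARCHY = ["location", "rack", "equipment", "card", "port"]

@dataclass
class Node:
    level: str  # one of HIERARCHY
    name: str

@dataclass
class Connection:
    a_end: Node
    z_end: Node

    def quality_score(self):
        """0.0-1.0: how far down the hierarchy each endpoint is resolved."""
        def depth(node):
            return (HIERARCHY.index(node.level) + 1) / len(HIERARCHY)
        return (depth(self.a_end) + depth(self.z_end)) / 2

# One end fully modelled down to the port; the other only known to a location.
good_end = Node("port", "SYD01-R12-EQ3-C2-P7")
vague_end = Node("location", "Regional exchange 42")
link = Connection(good_end, vague_end)
print(f"quality = {link.quality_score():.2f}")  # 0.60 -> flag for remediation
```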

If you have an inventory data quality challenge and / or are wondering what this product is, it’s CROSS, from the team at CROSS Network Intelligence (www.cross-ni.com).

Is your data AI-ready (part 2)

Further to yesterday’s post, which posed the question of whether your data is AI-ready for virtualised network assurance use cases, I thought I’d raise a few more notes.

The two reasons posed were:

  1. Our data sets haven’t had time to collect much elastic / dynamic network data yet
  2. Our data is riddled with error-prone, human-generated records

On the latter point in particular, I sense that we’re going to have to completely re-architect the way we collect and store assurance data. We’re almost certainly going to have to think in terms of automated assurance actions and related logging to avoid the errors of human data creation / logging. The question becomes whether it’s worthwhile trying to wrangle all of our old data into formats that the AI engines can cope with, or do we just start afresh with new models? (This brings to mind the recent “perfect data” discussion).
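
As a thought experiment on what that re-architecture might produce, here’s a hypothetical sketch of an automated assurance action emitting a structured, machine-verified record instead of a free-text operator note (all field names are illustrative):

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AssuranceAction:
    event_id: str
    resource: str   # eg a (possibly transient) VNF instance
    trigger: str    # the alarm / threshold that fired
    action: str     # what the automation did
    outcome: str    # machine-verified result, not a human impression
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def emit(self):
        # In practice this record would ship to the assurance data store.
        print(json.dumps(asdict(self)))

AssuranceAction(
    event_id="evt-0001",
    resource="vnf-firewall-37",
    trigger="cpu_util > 90% for 5m",
    action="scaled out to 2 instances",
    outcome="cpu_util normalised",
).emit()
```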

It will be one thing to identify patterns, but another thing entirely to identify optimum response activities and to automate those.

If we get these steps right, does it become logical that the NOC (network) and SOC (security operations centre) become conjoined… at least much more so than they tend to be today? In other words, does incident management merge network incidents and security incidents onto common analysis and response platforms? If so, does that imply another complete re-architecture? It certainly changes the operations model.

I’d love to hear your thoughts and predictions.

Are your existing data sets actually suited to seeding an AI engine?

In the virtualization domain, the old root cause technology is becoming obsolete because resources and workloads move around dynamically – we no longer have fixed network and compute resources. Existing service assurance systems in the telecommunication network were designed to manage a fixed set of resources and these assurance systems fall short in monitoring dynamic virtualized networks. Code was written using a rule based approach on known problems. Some advances have been made to develop signature patterns to determine the root cause of a problem, but this approach will also fall short in a dynamic virtualized network where autonomous changes will occur continuously.”
Patrick Kelly,
in his article here.

This quote is taken from a really interesting article by Patrick Kelly (see link above).

The old models of determining service impact and root-cause certainly struggle to hold up in the transient world of virtualised networks. Artificial Intelligence, Machine Learning, machine-led pattern identification – whatever the technologies end up being called by their developers – have a really important part to play in networks that are not just dynamic, but undergoing a touchpoint explosion.

The fascinating part of this story is that these clever new models will rely on data. Lots of data. We already have lots of data to feed into the new models. Buuuuut… I’ve long held the reservation that there might be one slight problem: does all of our existing data actually suit the “AI” models available today?

Firstly, our existing data doesn’t include much history of dynamically transient networks. But the more important factor is that our networks have been managed by humans – operators who have a tendency to record the quickest, dirtiest (and not necessarily correct or complete) set of data that allows them to restore service quickly.

Following a recent discussion with someone who’s running an AI assurance PoC for a big telco, it seems this reservation is turning out to be true. Their existing data sets just aren’t suited to the AI models. They’re having to reconsider their whole approach to their data model and how to collect / store it. They’re now starting to get positive results from the custom-built data sets.

It’s coming back to the same story as a post from last week – having connectors that can translate the different languages of ops, data, AI, etc and building a people / process / technology solution that the AI models can cope with.

You might not be ready to start an AI experiment yet, but you may like to start the journey by understanding whether your existing data is suited to AI modelling. If not, you get the chance to change it and have a great repository of data to seed an AI engine when you are ready in future. The first step on an exponential OSS journey.

Drinking from the OSS firehose

Most people know what they want, but don’t know how to get it. When you don’t know the next step, you procrastinate or feel lost. But a little research can turn a vague desire into specific actions.
For example: When musicians say, “I need a booking agent”, I ask, “Which one? What’s their name?”
You can’t act on a vague desire. But with an hour of research you could find the names of ten booking agents that work with ten artists you admire. Then you’ve got a list of the next ten people you need to contact.
A life coach told me that most of his job is just helping people get specific. Once they turn a vague goal into a list of specific steps, it’s easy to take action.”
Derek Sivers
in his blog, “Get Specific!”

In a post last week, I spoke about feeling, like never before, that I’m at an OSS crossroad, looking towards a set of paths. The paths all contribute heavily to the next generations of OSS, but there’s a feeling of dread that no one person will have the ability to step out each path. The paths I’m talking about include network virtualisation, data science / artificial intelligence / machine learning, open-source deployments like ONAP, cloud infrastructure and delivery models, and so many more. Each represents a life’s work to become a fully-fledged expert.

In the past, a single OSS polymath could potentially scramble along a majority of the paths and understand the terrain within their local OSS environment. But that’s becoming increasingly less likely as we become ever more dependent upon the interconnection of disparate expertise.

This represents a growing risk. If nobody understands the whole terrain, how do we map out Derek’s “list of specific steps” on our complex OSS projects? If we can’t adequately break down the work, we’re at risk of running projects as a set of vague, disjointed activities. So I imagine you’re wondering how we do “Get Specific!”?

Most technology experts appear to me to have a predilection to plan projects from the bottom up (ie building up a solution from their detailed understanding of some parts of the project). However, on projects as complex as OSS, I’ve never seen a bottom-up plan come together efficiently. Nobody knows enough of the details to build up the entire plan.

Instead, I prefer the top-down approach of building a WBS (work breakdown structure), progressively diving deeper into the details and turning the vague goal into a tree of ever more specific steps. I consider the ability to break down complex projects into manageable chunks of work as my only real super-power, but in reality it largely just comes from using the WBS approach.
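
As a simple illustration of the approach (with hypothetical task names), a WBS is just a tree that you keep splitting until every leaf is specific enough to act on:

```python
# A minimal sketch of top-down work breakdown: keep splitting any chunk of
# work that's still too vague to estimate until every leaf is a specific step.
class WbsNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def leaves(self, path=""):
        """Walk the tree and yield the specific, actionable steps."""
        full = f"{path} > {self.name}" if path else self.name
        if not self.children:
            yield full
        for child in self.children:
            yield from child.leaves(full)

plan = WbsNode("OSS transformation", [
    WbsNode("Inventory migration", [
        WbsNode("Extract legacy circuit data"),
        WbsNode("Cleanse connectivity records"),
    ]),
    WbsNode("Assurance integration", [
        WbsNode("Map alarm catalogue"),
    ]),
])

for step in plan.leaves():
    print(step)  # each line is one of Derek's "specific steps"
```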

Okay, it might sound a bit like a waterfall model (depends on how you design the tree really), but it beats the “trying to drink from a firehose” alternative model.

Which approach works best for you?

After the boys of OSS have gone

Something has always bothered me about the medical profession. Whenever you visit a GP (General Practitioner), unless you need to come back for test results or ongoing treatment, the doctor never finds out whether their diagnoses / prescriptions have been effective. In my experience at least, they don’t call to check for complications, allergic reactions to treatments, improvement in condition, etc; they only find out if you make a follow-up appointment. As a result, they never close the feedback loop or gather a potentially rich source of data on their own efficacy.

I sometimes wonder whether this is true of OSS implementers too. There can be a tendency to move from one implementation project to the next, from one customer to the next, without having time to circle back to previous clients. Any unrealised but ongoing problems are handed over to operations and/or product support teams, so the implementers may never see them. Alternatively, the team might be consistently missing out on identifying opportunities to add value on their projects.

If you’re an implementer (as I often am), how do you close the loop to find out what you could be doing better? Do you retain dialogue with customers after handover? Do you question your support teams about which client problems / enquiries are landing on their desks? Do you book follow-up sessions with client staff at scheduled intervals after handover? Are you always engaged for an operational handover period where you have the chance to see post-handover challenges first-hand?

Just like a doctor, you’re bound to hear of any major or catastrophic outcomes after a “patient’s” initial visit. But what about the niggling ailments your clients have that could be easily rectified for all future clients… if only you knew of them?

I’d love to hear the thoughts from implementers on how they’re continually upping their game. Similarly, if you’re in ops / support, what experiences (ie messes) are consistently landing with you to clean up after the implementers have moved on? Do you have any suggestions for how they (we) could close the loop better with you?

Note: For all the highly talented women out there in OSS-land, please note that I’m not overlooking you. The title of my post is just a play on Don Henley’s famous song.

Trickle-down impact planning

We introduced the concept of The Trickle-down Effect last year, an effect that sees the most minor changes trickling down through an OSS stack, with much bigger consequences than expected.

The trickle-down effect can be insidious, turning a nice open COTS solution into a beast that needs constant attention to cope with the most minor of operational changes. The more customisations made, the more gnarly the beast tends to be.”

Here’s an example I saw recently. An internal business unit wanted to introduce a new card type into the chassis set they managed. Speaking with the physical inventory team, it seemed the change was quite small and a budget was developed for the works… but the budget (dollars / time / risk) was about to blow out in a big way.

The new card wasn’t being picked up in their fault management or performance management engines. It wasn’t picked up in key reports, nor was it being identified in the configuration management database or logical inventory. Every one of these systems needed interface changes. No massive changes individually, obviously, but collectively the budget blew out by 10x, and the expedited changes pushed out work previously planned by each of the interface development and testing teams.
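
One way to avoid this kind of surprise is to make the trickle-down explicit before the budget is set. Here’s a hypothetical sketch of a dependency map that surfaces every downstream system a proposed change will touch, seeded with the systems from the example above:

```python
# Which systems consume data from which? The map below is illustrative only.
IMPACTS = {
    "physical inventory": ["fault management", "performance management",
                           "logical inventory", "CMDB", "key reports"],
    "logical inventory": ["key reports"],
}

def trickle_down(changed_system, seen=None):
    """Recursively collect every system touched by a change."""
    seen = seen if seen is not None else set()
    for downstream in IMPACTS.get(changed_system, []):
        if downstream not in seen:
            seen.add(downstream)
            trickle_down(downstream, seen)
    return seen

# The full set of interfaces to cost BEFORE the budget is approved.
print(sorted(trickle_down("physical inventory")))
```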

These trickle-down impacts were known… by some people… but weren’t communicated to the business unit responsible for managing the new card type. There’s a possibility that they may not have even added the new card type had they realised the full OSS cost consequences.

Are these trickle-down impacts known and readily communicated within your OSS change processes?

One sentence to make most OSS experts cringe

Let me warn you. The following sentence is going to make many OSS experts cringe, maybe even feel slightly disgusted, but take the time to read the remainder of the post and ponder how it fits within your specific OSS context/s.

“Our OSS need to help people spend money!”

Notice the word is “help” and not “coerce”? This is not a post about turning our OSS into sales tools – well, not directly anyway.

May I ask you a question: do you ever spend time thinking about how your OSS is helping your customer’s customer (which I’ll refer to as the end-customer) to spend their money? And I mean making it easier for them to buy the stuff they want to buy in return for some form of value / utility, not tricking or coercing them into buying stuff they don’t want.

Let me step you through the layers of thinking here.

The first layer for most OSS experts is their direct customer, which is usually the service provider or enterprise that buys and operates the OSS. We might think they are buying an OSS, but we’re wrong. An organisation buys an OSS, not because it wants an Operational Support System, but because it wants Operational Support.

The second layer is a distinct mindset change for most OSS experts. Following on from the first layer, OSS has the potential to be far more than just operational support. Operational support conjures up the image of being a cost-centre, or something that is a necessary evil of doing business (ie in support of other revenue-raising activities). To remain relevant and justify OSS project budgets, we have to flip the cost-centre mentality and demonstrate a clear connection with revenue chains. The more obvious the connection, the better. Are you wondering how?

That’s where the third layer comes in. We have to think hard about the end-customer and empathise with their experiences. These experiences might be as a consumer of a service provider’s (your direct customer’s) product offerings. It might even be a buying cycle that the service provider’s products facilitate. Either way, we need to simplify their ability to buy.

So let’s work back up through those layers again:
Layer 3 – If end-customers find it easier to buy stuff, then your customer wins more revenue (and brand value)
Layer 2 – If your customer sees that its OSS / BSS has unquestionably influenced revenue increase, then more is invested on OSS projects
Layer 1 – If your customer recognises that your OSS / BSS has undeniably influenced the increased OSS project budget, you too get entrusted with a greater budget to attempt to repeat the increased end-customer buy cycle… but only if you continue to come up with ideas that make it easier for people (end-customers) to spend their money.

At what layer does your thinking stop?

How smart contracts might reduce risk and enhance trust on OSS projects

Last Friday, we spoke about all wanting to develop trusted OSS supplier / customer relationships but rarely finding them, and about a contrarian factor for why trust is so hard to achieve in OSS – complexity.

Trust is the glue that allows OSS projects to happen. Not only that, it forms a catch-22 with complexity. If OSS partners don’t trust each other, requirements, contracts, etc become more complex as a self-protection barrier. But with every increase in complexity comes an increasing challenge to deliver, and hence a risk of further reductions in trust.

On a smaller scale, you’ve seen it on all projects – if a project starts to falter, increased monitoring attention is placed on it, which puts increased administrative load on the project team and reduces the time they have to deliver the intended outcomes. Sometimes the increased admin / reporting gains the attention of sponsors and access to additional resources, but usually it just detracts from the available delivery capability.

Vish Nandlall also associates trust and complexity in organisational models in a recent LinkedIn post.

This is one of the reasons I’m excited about what smart contracts can do for the organisations and OSS projects of the future. Just as “Likes” and “Supplier Rankings” have facilitated online trust models, smart-contract success rankings have the ability to do the same for OSS suppliers, large and small. For example, rather than needing to engage “Big Vendor A” to build your entire, monolithic OSS stack, if an operator develops simpler, more modular work breakdowns (eg microservices), then it can engage “Freelancer B” and “Small Vendor C” to make valuable contributions in smaller risk increments. Being lower in complexity and risk means B and C have a greater chance of engendering trust, while their historical contract success ranking pushes them to treat trust as a key metric.
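
As a toy illustration of how such a ranking might work (the scoring rule and supplier names are purely hypothetical), a trust score could simply be a supplier’s verified track record across small contract increments:

```python
# Each completed work increment records an on-time / on-scope outcome; a
# supplier's trust score is just their weighted track record.
from dataclasses import dataclass

@dataclass
class ContractOutcome:
    supplier: str
    delivered_on_time: bool
    met_acceptance_criteria: bool

def trust_ranking(outcomes, supplier):
    """Share of a supplier's increments that fully met their contract terms."""
    records = [o for o in outcomes if o.supplier == supplier]
    if not records:
        return None  # no history yet - trust must be earned incrementally
    successes = sum(
        o.delivered_on_time and o.met_acceptance_criteria for o in records
    )
    return successes / len(records)

history = [
    ContractOutcome("Freelancer B", True, True),
    ContractOutcome("Freelancer B", True, False),
    ContractOutcome("Small Vendor C", True, True),
]
print(trust_ranking(history, "Freelancer B"))  # 0.5
```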

We all want to develop trusted OSS partnerships, so why does so much scepticism exist?

Every OSS supplier wants to achieve “trusted” status with their customers. Each supplier wants to be the source trusted to provide the best vision of the future for each customer.

I’m an independent consultant, so I have been lucky enough to represent many organisations on both sides of that equation. And in that position, I’ve been able to get a first-hand view of the perception of trust between OSS vendors / integrators (suppliers) and operators (customers). Let’s just say that in general, we’re working in an industry with more scepticism than trust.

So if trust is so important and such a desired status, where is it breaking down?

Whilst I’d like to assume that most people in our industry go into OSS projects with the very best of intentions, there are definitely some suppliers that try to trick and entrap their customers. For the rest of this post though, I’m going to assume the best – that we all have great intentions. We can then look at why the trust relationships might be breaking down and at some of the ways we can do better.

Jon Gordon provides a great list of 11 ways to build trust. Check out his link for a more detailed view, but the 11 factors are as follows:

  1. Say what you are going to do and then do what you say!
  2. Communicate, communicate, communicate
  3. Trust is built one day, one interaction at a time, and yet it can be lost in a moment because of one poor decision
  4. Value long term relationships more than short term success
  5. Sell without selling out. Focus more on your core principles and customer loyalty than short term commissions and profits.
  6. Trust generates commitment; commitment fosters teamwork; and teamwork delivers results.
  7. Be honest!
  8. Become a coach. Coach your customers. Coach your team at work
  9. Show people you care about them
  10. Always do the right thing. We trust those who live, walk and work with integrity.
  11. When you don’t do the right thing, admit it. Be transparent, authentic and willing to share your mistakes and faults

They all sound quite obvious, don’t they? Do you also notice that many of the 11 (eg communication, transparency, admitting failure, doing what you say) can be really easy to say, but much harder to do flawlessly under the pressure of complex OSS delivery projects (and ongoing operations)?

I know I certainly can’t claim a perfect track record on all of these items. Numbers 1 and 2 can be particularly difficult when under extreme delivery pressure, especially when things just aren’t going to plan technically and you’re focussing attention on regaining control of the situation. In those situations, communication and transparency are what the customer needs to maintain confidence, but the customer relationship takes time – time that also needs to be allocated to overcoming the technical challenges. It becomes a balancing act.

So, how do we position ourselves to make it easier to keep to these 11 best intentions? Simple: by making a concerted effort to reduce complexity… actually, not as simple as it sounds, but rewarding if you can achieve it. The less complex your delivery projects (or operational models), the more repeatable and reliable a supplier’s OSS delivery becomes. The more reliable, the less friction and the lower the chance of fracturing relationships. Subsequently, the greater the chance of building and retaining trust.

Hat-tip to Robert Curran of Aria Networks for spawning a discussion about trust.

Fast / Slow OSS processes

Yesterday’s post discussed using smart contracts and Network as a Service (NaaS) to give a network the properties that will allow it to self-heal.

It mentioned a couple of key challenges, one being that there will always be physical activities such as cable-cut repairs, faulty equipment replacement, and physical equipment expansion / contraction / lifecycle management.

In a TM Forum presentation last week, Sylvain Denis of Orange proposed the theory of fast and slow OSS processes. Fast – soft factories (software and logical resources) within the operations stack are inherently automatable (notwithstanding the complexities and the cost-benefit dilemma of actually building the automations). Slow – physical factories are slow processes, as they usually rely on human tasks and/or have location constraints.

Orchestration relies on programmatic interfaces to both. Not all physical factories have programmatic interfaces in all OSS / BSS stacks yet. Being able to handle dual-speed processes / factories will remain a key requirement for the foreseeable future.
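
Here’s a minimal sketch of what dual-speed handling might look like at the orchestration layer, with hypothetical task names and stubbed factory interfaces:

```python
# "Fast" tasks call a programmatic interface directly; "slow" physical tasks
# fall back to a human work-order queue. All names are illustrative.
FAST_TASKS = {"spin_up_vnf", "update_routing_policy"}     # soft factory
SLOW_TASKS = {"replace_faulty_card", "repair_cable_cut"}  # physical factory

def call_soft_factory_api(task, params):
    print(f"[fast] executing {task} with {params}")
    return "completed"

def raise_work_order(task, params, sla_hours):
    print(f"[slow] work order raised for {task}, SLA {sla_hours}h")
    return "pending-field-work"

def dispatch(task, params):
    if task in FAST_TASKS:
        # Automatable end-to-end: invoke the API and await completion.
        return call_soft_factory_api(task, params)
    if task in SLOW_TASKS:
        # Needs people on site: raise a work order and track it asynchronously.
        return raise_work_order(task, params, sla_hours=24)
    raise ValueError(f"Unknown task type: {task}")

print(dispatch("spin_up_vnf", {"site": "MEL02"}))
print(dispatch("repair_cable_cut", {"route": "MEL02-SYD01"}))
```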

Potential OSS failures aren’t always technical

I recently attended an event where a brainstorming question was posed about how a particular next-gen OSS concept might fail. Interesting exercise!

There were a lot of super-clever technical people in the room, and the brainstorming was fascinating. We dived deeply into the experiences of the technical people present and all the potential technical reasons for failure.

But I was left with an overwhelming feeling that:

    1. Most, if not all, of those technical hurdles could be overcome if given enough resources
    2. None of the more likely causes of failure were brought up, including:
      • People-related factors (or organisational change factors) such as resistance to change, a shortage of skills in a nascent area, stakeholder management, lack of “champion” support if momentum slows, inability to reach consensus on scope / design, etc
      • Financial viability factors such as inability to deliver on time/cost/scope, parallel operations and maintenance of legacy, lower additional benefit than predicted in the business case

That’s where I’ve noticed a greater proportion of OSS project failures anyway. Does this align with your experiences?

Compiling “The Zen of OSS” perhaps?

A recent presentation just reminded me of “The Zen of Python.” It’s a collection of 20 (19?) software principles, particularly as they relate to the Python programming language.

Since OSS is software-defined, (almost) all of the principles (not sure about the “Dutch” one) relate to OSS in a programming sense, but perhaps in a broader sense as well. I’d like to share two pairings:

Errors should never pass silently.
Unless explicitly silenced.

Unfortunately, too many errors do pass silently, particularly across “best-of-breed” OSS stacks.
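
In code terms, the difference looks something like this hypothetical alarm-forwarding adapter:

```python
# The anti-pattern, and a Zen-compliant alternative.
def forward_alarm_bad(alarm, northbound):
    try:
        northbound.send(alarm)
    except Exception:
        pass  # the error passes silently - the alarm is simply lost

def forward_alarm_good(alarm, northbound, log):
    try:
        northbound.send(alarm)
    except ConnectionError as exc:
        log.warning("Northbound unreachable, alarm %s not delivered: %s",
                    alarm, exc)
        raise  # surface the failure so the wider stack can react to it
```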

And

Now is better than never.
Although never is often better than right now.

An especially good hint if you’re working within an Agile model!

So that got me thinking (yes, scary, I know!). What would a Zen of OSS look like? I’d be delighted to accept your suggestions. Does one already exist (and no, I’m not referring to the vendor, Zenoss)?

In the meantime, I’ll have to prepare a list I think. However, you can be almost assured that the first principle on the Zen of OSS will be:

Just because you can, doesn’t mean you should.

Finding the most important problems to solve

The problem with OSS is that there are too many problems. We don’t have to look too hard to find a problem that needs solving.

An inter-related issue is that we’re (almost always) constrained by resources and aren’t able to solve every problem we find. I have a theory: as much as you are skilled at solving OSS problems, it’s actually your skill at deciding which problems to solve that’s more important.

With continuous release methodologies gaining favour, it’s easy to prioritise on the most urgent or easiest problems to solve. But what if we were to apply the Warren Buffett 20 punch-card approach to tackling OSS problems?

I could improve your ultimate financial welfare by giving you a ticket with only twenty slots in it so that you had twenty punches – representing all the investments that you got to make in a lifetime. And once you’d punched through the card, you couldn’t make any more investments at all. Under those rules, you’d really think carefully about what you did, and you’d be forced to load up on what you’d really thought about. So you’d do so much better.”
Warren Buffett

I’m going through this exact dilemma at the moment – am I so busy giving attention to the obvious problems that I’m not allowing enough time to discover the most important ones? I figure that anyone can see and get caught up in the noise of the obvious problems, but only a rare few can listen through it…

50 exercises to ignite your OSS innovation sessions

Every project starts with an idea… an idea that someone is excited enough to sponsor.

  1. But where are your ideas being generated from?
  2. How do they get cultivated and given time to grow?
  3. How do they get pitched, and how do they get heard?
  4. How are sponsors persuaded?
  5. How do they then get implemented?
  6. How do we amplify this cycle of innovation and implementation?

I’m fascinated by these questions in OSS for the reasons outlined in The OSS Call for Innovation.

If we look at the levels of innovation (to be honest, it’s probably more a continuum than bands / levels):

  1. Process Improvement
  2. Incremental Improvement (new integrations, feature enhancement, etc)
  3. Derivative Ideas (iPhone = internet + phone + music player)
  4. Quantum Innovation (Tablet computing, network virtualisation, cloud delivery models)
  5. Radical Innovations (transistors, cellular wireless networks, Claude Shannon’s Information Theory)

We have so many immensely clever people working in our industry and we’re collectively really good at the first two levels. Our typical mode of working – which could generally be considered fire-fighting (or dare I say it, Agile) – doesn’t provide the time and headspace to work on anything with the longer life-cycles of levels 3-5. Those levels can be more impactful, but they’re also where we need to carve out time specifically for innovation planning.

If you’re ever planning to conduct innovation fire-starter sessions, I really recommend reading Richard Brynteson’s “50 Activities for Building Innovation.” As the title implies, it provides 50 (simple but powerful) exercises to help groups generate ideas.

Please contact us if you’d like PAOSS to help facilitate your OSS idea firestarter or road-mapping sessions.

A summary of RPA uses in an OSS suite

This is the sixth and final post in a series about the four styles of RPA (Robotic Process Automation) in OSS.

Over the last few days, we’ve looked into the following styles of RPA used in OSS, their implementation approaches, pros / cons and the types of automation they’re best suited to:

  1. Automating repeatable tasks – using an algorithmic approach to completing regular, mundane tasks
  2. Streamlining processes / tasks – using an algorithmic approach to assist an operator during a process or as an alternate integration technique
  3. Predefined decision support – guiding operators through a complex decision process
  4. As part of a closed-loop system – that provides a learning, improving solution

RPA tools can significantly improve the usability of an OSS suite, especially for end-to-end processes that jump between different applications (in the many ways mentioned in the above links).

However, there can be a tendency to use the power of RPA to “solve all problems” (see this article about automating bad processes). That can introduce a life-cycle of pain for operators and RPA admins alike. Like any OSS integration, we should look to keep the design as simple and streamlined as possible before embarking on implementation (subtraction projects).

The OSS / RPA parrot on the shoulder analogy

This is the fourth in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The third style is Decision Support. I refer to this style as the parrot on the shoulder because the parrot (the RPA) guides the operator through their daily activities. It isn’t true automation, but it can provide one of the best cost-benefit ratios of the different RPA styles. It can be a great blend of human-computer decision making.

OSS processes tend to have complex decision trees and need different actions performed depending on the information being presented. An example might be customer on-boarding, which includes credit and identity check sub-processes, followed by customer service order entry.

The RPA can guide the operator to perform each of the steps along the process including the mandatory fields to populate for regulatory purposes. It can also recommend the correct pull-down options to select so that the operator traverses the correct branch of the decision tree of each sub-process.
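
That kind of guidance can be sketched as a simple guided decision tree. This hypothetical Python example mirrors the on-boarding flow described above (the steps and options are illustrative only):

```python
# Each node prompts the operator and only offers the valid next branches,
# so even an inexperienced operator traverses the correct path.
DECISION_TREE = {
    "start": {
        "prompt": "Credit check result?",
        "options": {"pass": "identity_check", "fail": "reject"},
    },
    "identity_check": {
        "prompt": "Identity verified?",
        "options": {"yes": "order_entry", "no": "reject"},
    },
    "order_entry": {
        "prompt": "Mandatory regulatory fields complete?",
        "options": {"yes": "done", "no": "order_entry"},
    },
}

def guide_operator(node="start"):
    while node not in ("done", "reject"):
        step = DECISION_TREE[node]
        choice = input(f"{step['prompt']} {list(step['options'])}: ")
        node = step["options"].get(choice, node)  # re-prompt on invalid input
    print("On-boarding complete" if node == "done" else "Application rejected")

# guide_operator()  # interactive: walks the operator down the correct branch
```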

This functionality can allow organisations to deliver less training than they otherwise would. It can be highly cost-effective in situations where:

  • There are many inexperienced operators, especially if there is high staff turnover such as in NOCs, contact centres, etc
  • It is essential to have high process / data quality
  • The solution isn’t intuitive and it is easy to miss steps, such as a process that requires an operator to swivel-chair between multiple applications
  • There are many branches on the decision tree, especially when some of the branches are rarely traversed, even by experienced operators

In these situations the cost of training can far outweigh the cost of building an OSS (RPA) parrot on each operator’s shoulder.