Who can make your OSS dance?

OSS tend to be powerful software suites that can do millions of things. Experts at the vendors / integrators know how to pull the puppet’s strings and make it dance. As a reader of PAOSS, chances are that you are one of those experts. I’ve sat through countless vendor demonstrations, but I’m sure you’ll still be able to wow me with a demo of what your OSS can do.

Unfortunately, most OSS users don’t have that level of expertise, nor the experience or training, to pull all of your OSS’s strings. Most only use the tiniest sub-set of functionality.

If we look at the millions of features of your OSS in a decision tree format, how easy will it be for the regular user to find a single leaf on your million-leaf tree? To increase complexity further, OSS workflows actually require the user group to hop from one leaf, to another, to another. Perhaps it’s not even as conceptually simple as a tree structure, but a complex inter-meshing of leaves. That’s a lot of puppet-strings to know and control.

A question for you – You can make your OSS dance, but can your customers / users?

What can you do to assist users to navigate the decision tree? A few thoughts below:

  1. Prune the decision tree – chances are that many of the branches of your OSS are never / rarely used, so why are they there?
  2. Natural language search – a UI that allows users to just ask questions. The tool interprets those questions and navigates the tree by itself (ie it abstracts the decision tree from the user, so they never need to learn how to navigate it)
  3. Use decision support – machine assistance to guide users in navigating efficiently through the decision tree
  4. Restrict access to essential branches – design the GUI to ensure a given persona can only see the clusters of options they will use (eg via the use of role-based functionality filtering)
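
To make that last option a little more concrete, below is a minimal sketch of how role-based functionality filtering might work. Everything in it is hypothetical – the personas, feature names and structure are purely illustrative, not drawn from any real OSS product.

```python
# Minimal sketch of role-based functionality filtering. The personas and feature
# names are hypothetical - they aren't drawn from any real OSS product.

# The full catalogue of GUI features (the "million-leaf tree", heavily abridged)
FEATURE_CATALOGUE = {
    "alarm_list", "alarm_suppression_rules", "service_order_entry",
    "inventory_search", "circuit_design", "bulk_device_import",
    "sla_reports", "workforce_dispatch",
}

# Each persona only ever sees the cluster of options it actually uses
PERSONA_FEATURES = {
    "noc_operator": {"alarm_list", "alarm_suppression_rules"},
    "order_handler": {"service_order_entry", "inventory_search"},
    "network_designer": {"inventory_search", "circuit_design", "bulk_device_import"},
}


def visible_features(persona: str) -> set[str]:
    """Return only the branches of the decision tree this persona should see."""
    return PERSONA_FEATURES.get(persona, set()) & FEATURE_CATALOGUE


print(sorted(visible_features("noc_operator")))
# ['alarm_list', 'alarm_suppression_rules']
```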

I’d love to hear your additional thoughts on how to make it easier for users to make your (their) OSS dance.

Do we actually need fewer intellectual giants?

Have you ever noticed that almost every person who works in OSS is extremely clever?
No?

They may not know the stuff that you know or even talk in the same terminologies that you and your peers use, but chances are they also know lots of stuff that you don’t.

OSS sets a very high bar. I’ve been lucky enough to cross into many different industries as a consultant. I’d have to say that there are more geniuses per capita in OSS than in any other industry / sector I’ve worked in.

So why then are so many of our OSS a shambles?

Is it groupthink? Do we need more diversity of thinking? Do we actually need fewer intellectual giants to create pragmatic, mere-mortal solutions?

Our current approach appears to be flawed. Perhaps Project Platypus gives us an alternative framework?

Actually, I don’t think we need fewer intellectual giants. But I do think we need our intellectual giants to have a greater diversity of experiences.

The augmented analytics journey

“Smart Data Discovery goes beyond data monitoring to help business users discover subtle and important factors and identify issues and patterns within the data so the organization can identify challenges and capitalize on opportunities. These tools allow business users to leverage sophisticated analytical techniques without the assistance of technical professionals or analysts. Users can perform advanced analytics in an easy-to-use, drag and drop interface without knowledge of statistical analysis or algorithms. Smart Data Discovery tools should enable gathering, preparation, integration and analysis of data and allow users to share findings and apply strategic, operational and tactical activities and will suggest relationships, identifies patterns, suggests visualization techniques and formats, highlights trends and patterns and helps to forecast and predict results for planning activities.

Augmented Data Preparation empowers business users with access to meaningful data to test theories and hypotheses without the assistance of data scientists or IT staff. It allows users access to crucial data and Information and allows them to connect to various data sources (personal, external, cloud, and IT provisioned). Users can mash-up and integrate data in a single, uniform, interactive view and leverage auto-suggested relationships, JOINs, type casts, hierarchies and clean, reduce and clarify data so that it is easier to use and interpret, using integrated statistical algorithms like binning, clustering and regression for noise reduction and identification of trends and patterns. The ideal solution should balance agility with data governance to provide data quality and clear watermarks to identify the source of data.

Augmented Analytics automates data insight by utilizing machine learning and natural language to automate data preparation and enable data sharing. This advanced use, manipulation and presentation of data simplifies data to present clear results and provides access to sophisticated tools so business users can make day-to-day decisions with confidence. Users can go beyond opinion and bias to get real insight and act on data quickly and accurately.”
The definitions above come from a post by Kartik Patel entitled, “What is Augmented Analytics and Why Does it Matter?”
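
As a rough, hand-rolled illustration of the kind of step these tools would hide behind a drag-and-drop interface, the sketch below bins a noisy (made-up) alarm-count series for noise reduction and then surfaces its trend with a simple regression. It’s purely indicative of the concept, not of any vendor’s implementation.

```python
# Rough, hand-rolled illustration only - binning a noisy (made-up) series for
# noise reduction, then a simple regression to surface the trend.
# Standard library only; statistics.linear_regression needs Python 3.10+.
from statistics import linear_regression, mean

daily_alarm_counts = [120, 132, 118, 141, 150, 149, 163, 158, 171, 180, 168, 190]

# "Binning" for noise reduction: average every 3 days
bin_size = 3
binned = [mean(daily_alarm_counts[i:i + bin_size])
          for i in range(0, len(daily_alarm_counts), bin_size)]

# Regression to highlight the underlying trend
slope, intercept = linear_regression(list(range(len(binned))), binned)

print(f"binned series: {binned}")
print(f"trend: alarm volume moving by ~{slope:.1f} per bin")
```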

Over the years I’ve loved playing with data and learnt so much from it – about networks, about services, about opportunities, about failures, about gaps, etc. However, modern statistical analysis techniques fall into one of the categories described in “You have to love being incompetent”, where I’m yet to develop the skills to a comfortable level. Revisiting my fifth year uni mathematics content is more nightmare than dream, so if augmented analytics tools can bypass the stats, I can’t wait to try them out.

The concepts described by Kartik above would take those data learning opportunities out of the data science labs and into the hands of the masses. Having worked with data science labs in the past, I’ve found the value of the information to be mixed, all dependent upon which data scientist I dealt with. Some were great and had their fingers on the pulse of what data could resolve the questions asked. Others, not so much.

I’m excited about augmented analytics, but I’m even more excited about the layer that sits on top of it – the layer that manages, shares and socialises the aggregation of questions (and their answers). Data in itself doesn’t provide any great insight. It only responds when clever questions are asked of it.

OSS data has an immeasurable number of profound insights just waiting to be unlocked, so I can’t wait to see where this relatively nascent field of augmented analytics takes us.

My least successful project

Many years ago I worked on a three-way project with 1) a customer, 2) a well-known equipment vendor and 3) a service provider (my client). Time-frames were particularly tight, not so much because of the technical challenge, but because of the bureaucratic processes of the customer and the service provider. The project was worth well in excess of $100M, so it was a decent-sized project as part of a $1B+ program.

The customer had handed the responsibility of building a project schedule to the equipment vendor and me, which we duly performed. The Gantt chart was quite comprehensive, running into thousands of lines of activities and had many dependencies where actions by the customer were essential. These were standard dependencies such as access to their data centres, uplift to infrastructure, firewall burns, design approvals, and the list goes on. The customer had also just embarked on a whole-of-company switch of project management frameworks, so it wasn’t hard to see that related delays were likely.

The vendor and I met with the customer to walk through the project plan. About half-way in, the customer asked the vendor whether they were confident that timelines could be met. The vendor was happy to say yes. I was asked the same question. My response was that I was comfortable with the vendor’s part, I was comfortable with our part (ie the service provider’s), but that the customer’s dependencies were a risk because we’d had push-back from their Project Manager and each of the internal business units that we knew were impacted (not to mention the other ones that were likely to be impacted but we had no visibility of yet).

That didn’t go down well. I copped by far the biggest smashing of my career to date. The customer didn’t want to acknowledge that they had any involvement in the project – despite the fact that they were to approve it, house it, host it, use it and maintain aspects of it. It seemed like common sense that they would need to get involved.

Over the last couple of decades of delivery projects, one trend has been particularly clear – the customer gets back what they put in. That project had at least twelve PMs on the customer side over the 18 month duration of the project. It moved forward during stints under the PMs who got involved in internal solutioning, but stagnated during periods under PMs that just blame-stormed. Despite this, we ended up delivering, but the user outcomes weren’t great.

As my least successful project to date (hopefully ever), it was also one of my biggest “learnings” projects. For a start, it emphasised that I needed to get better at hearts and minds change management. There were many areas where better persuasion was required – from the timelines / dependencies to the compromised architecture / hardware that was thrust upon us by the customer’s architects. What seemed obvious to me was clearly not so obvious to the customer stakeholders I was trying to persuade.

You have to love being incompetent

“You have to love being incompetent in order to be competent.”
James Altucher.

Not sure that anyone loves feeling incompetent, but James’ quote is particularly relevant in the world of OSS. There are always so many changes underway that you’re constantly taken out of your comfort zone. But the question becomes how do you overcome those phases / areas of incompetence?

Earlier in my career, I had more of an opportunity to embed myself into any area of incompetence, usually spawned by a technical challenge being faced, and pick it up through a combination of practical and theoretical research. That’s a little harder these days with less hands-on and more management responsibilities, not to mention more demands on time outside hours.

In a way, it’s a bit like stepping up the layers of the TMN management pyramid.
[Image: TMN management pyramid, courtesy of www.researchgate.net]

With each step up, the context gets broader (eg more domains under management), but more abstracted from what’s happening in the network. Each subsequent step northbound does the same thing:

  • It abstracts – it only performs a sub-set of the lower layer’s functionality
  • It connects – it performs the task of connecting and managing a larger number of network elements than the lower layer

Conversely, each step down the management stack should produce a narrower (ie not so many device interconnections), but deeper field of view (ie a deeper level of information about the fewer devices).
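
Here’s a toy sketch of that “abstract and connect” / “narrow but deep” idea in code. The class names and attributes are mine, purely for illustration – they don’t reflect any standard’s information model or any specific EMS / NMS product.

```python
# Toy illustration of the "abstract and connect" idea. Class names and
# attributes are purely illustrative, not any standard's information model.
from dataclasses import dataclass, field


@dataclass
class NetworkElement:
    name: str
    ports: int = 0
    alarms: list[str] = field(default_factory=list)  # deep, device-level detail


@dataclass
class ElementManager:
    """Narrow field of view (few devices), deep information about each."""
    elements: list[NetworkElement]

    def worst_alarm_count(self) -> int:
        return max((len(ne.alarms) for ne in self.elements), default=0)


@dataclass
class NetworkManager:
    """Broad field of view (many EMSs), abstracted information about each."""
    ems_layer: list[ElementManager]

    def domain_health(self) -> dict[str, int]:
        # Only a sub-set of the lower layer's detail survives the trip northbound
        return {f"ems-{i}": ems.worst_alarm_count() for i, ems in enumerate(self.ems_layer)}


ems_a = ElementManager([NetworkElement("rtr-01", 48, ["LOS"]), NetworkElement("rtr-02", 24)])
ems_b = ElementManager([NetworkElement("mux-07", 16, ["AIS", "LOF"])])
print(NetworkManager([ems_a, ems_b]).domain_health())  # {'ems-0': 1, 'ems-1': 2}
```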

The challenge of OSS is in choosing where to focus curiosity and improvements – diving down the stack into new tech or looking up and sidewards?

Deciding whether to PoC or to doc

As recently discussed with two friends and colleagues, Raman and Darko, Proofs of Concept (PoC) or Minimum Viable Product (MVP) implementations can be a double-edged sword.

By building something without fully knowing the end-game, you are potentially building tech-debt that may be very difficult to work around without massive (or complete) overhaul of what you’ve built.

The alternative is to go through a process of discovery to build a detailed document showing what you think the end product might look like.

I’m all for leaving important documentation behind for those who come after us, for those who maintain the solutions we create or for those who build upon our solutions. But you’ll notice the past-tense in the sentence above.

There are pros and cons with each approach, but I tend to believe in documentation in the “as-built” sense. However, there is a definite need for some up-front diagrams/docs too (eg inspiring vision statements, use cases, architecture diagrams, GUI/UX designs, etc).

The two biggest reasons I find for conducting PoCs are:

  • Your PoC delivers something tangible, something that stakeholders far and wide can interact with to test assumptions, usefulness, usability, boundary cases, etc. The creation of a doc can devolve into an almost endless set of “what-if” scenarios and opinions, especially when there are large groups of (sometimes militant) stakeholders
  • You’ve already built something – your PoC establishes the momentum that is oh-so-vital on OSS projects. Even if you incur tech-debt, or completely overhaul what you’ve worked on, you’re still further into the delivery cycle than if you spend months documenting. Often OSS change management can be a bigger obstacle than the technical challenge and momentum is one of change management’s strongest tools

I’m all for deep, reflective thinking but that can happen during the PoC process too. To paraphrase John Kennedy (the AFL coaching great), “Don’t think, don’t hope, (don’t document), DO!” 🙂

This is the best OSS book I’ve ever read

This post is about the most inspiring OSS book I’ve ever read, and yet it doesn’t contain a single word that is directly about OSS (so clearly I’m not spruiking my own OSS-centric book here 😉 ).
It’s a book that outlines the resolutions to so many of the challenges being faced by traditional communications service providers (CSPs) as well as the challenges faced by their OSS.

It resonates strongly with me because it reflects so many of my beliefs, but articulates them brilliantly through experiences from some of the most iconic organisations of our times – through their successes and failures.

And the title?

Insanely Simple: The Obsession That Drives Apple’s Success, by Ken Segall.

OSS is downstream of so many Complexity choices that this book needs to be read far beyond the boundaries of OSS. Having said that, we’re incredibly good at adding so many of our own layers of complexity.

Upcoming blogs here on PAOSS will surely share some of its words of wisdom.

If you can’t repeat it, you can’t improve it

“The cloud model (ie hosted by a trusted partner) becomes attractive from the perspective of repeatability, from the efficiency of doing the same thing repeatedly at scale.”
From, “I want a business outcome, not a deployment challenge.”

OSS struggles when it comes to repeatability. Often within an organisation, but almost always when comparing between organisations. That’s why there’s so much fragmentation, which in turn is holding the industry back because there is so much duplicated effort and brain-power spread across all the multitude of vendors in the market.

I’ve worked on many OSS projects, but none have even closely resembled each other, even back in the days when I regularly helped the same vendors deliver to different clients. That works well for my desire to have constant mental stimulation, but doesn’t build a very efficient business model for the industry.

Closed loop architectures are the way of the future for OSS, but only if we can make our solutions repeatable, measurable / comparable and hence, refinable (ie improvable). If we can’t then we may as well forget about AI. After all, AI requires lots of comparable data.

I’ve worked with service providers that have prided themselves on building bespoke solutions for every customer. I’m all for making every customer feel unique and having their exact needs met, but this can still be accommodated through repeatable building blocks with custom tweaks around the edges. Then there are the providers that have so many variants that you might as well be designing / building / testing an OSS for completely bespoke solutions.
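
As a hypothetical sketch of what “repeatable building blocks with custom tweaks around the edges” might look like, the workflow below keeps an identical, measurable core for every customer and confines the per-customer variation to optional hooks. The step and hook names are invented for illustration only.

```python
# Hypothetical sketch: an identical, repeatable core workflow for every customer,
# with per-customer variation confined to optional hooks at the edges.
from typing import Callable


def standard_activation_workflow(
        order: dict,
        pre_design_hook: Callable[[dict], dict] = lambda o: o,
        post_activate_hook: Callable[[dict], None] = lambda o: None) -> dict:
    """The repeatable (and therefore measurable / comparable) core never changes."""
    order = pre_design_hook(order)      # customer-specific tweak (optional)
    order["design"] = f"standard-design-for-{order['product']}"
    order["status"] = "activated"
    post_activate_hook(order)           # customer-specific tweak (optional)
    return order


# Customer A: the vanilla building block
print(standard_activation_workflow({"product": "enterprise-internet"}))

# Customer B: same core, one tweak around the edge
print(standard_activation_workflow(
    {"product": "enterprise-internet"},
    post_activate_hook=lambda o: print(f"notify customer B's portal: {o['status']}")))
```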

You could even look at it this way – If you can’t implement a repeatable process / solution, then measure it, then compare it and then refine it, then you can’t create a customer offering that is improving.

What’s the next tool in your toolbelt?

As OSS exponents, I’m sure you’ll agree that there are many OSS tools / skills that we use and develop (to differing degrees) over the years.

In fact, there are so many to choose from that we often have to make a conscious decision which ones to master and which ones to leave for others to master. Other times we have the decision thrust upon us.

When you have the chance to decide, do you choose from a perspective of craft or value?

Choosing by craft is analogous to already having a toolbelt with a hammer, a screwdriver and a glue gun, then choosing a nail-gun as the next tool to learn. They all fall into the same category of fasteners. They make you more proficient at choosing the right fastener for the job, but adding more to the list is unlikely to significantly increase the hourly rate a customer or employer will pay for your services. Whilst you’re more skilled, there are still a lot of others out there who can use fasteners. In fact, it can arguably be said that even I can use a few of those tools. They’re not really differentiators for you or your customer / employer.

Choosing by value, to extend on the analogy, is to add expertise as a builder, surveyor, draftsman, architect, etc in addition to the fastener skills you already have. They might be harder to attain, but that’s what increases differentiation. They’re perceived to be more valuable because they are perceived to play a more exclusive part in the final product that’s being offered to customers.

In OSS, if you can already program in five languages, does taking the time to learn a sixth significantly add value (to you or your value chain)? Sometimes perhaps. But if you were to spend the same amount of time to become more proficient at infrastructure or networks or team leadership, etc I suspect your contribution to the value fabric (ie customers, your team, etc) would increase far more… even if it didn’t immediately translate to a higher hourly rate.

The most invaluable people I’ve worked alongside in OSS, the valuable tripods, are proficient across many of OSS‘s domains and can link silos of expertise together. But that’s certainly not to devalue the importance of the craftspeople, as it’s their continued search for excellence that strengthens the silos, the foundations of OSS.

When you next have the chance to decide, will you choose from a perspective of craft or value?

A new, more sophisticated closed-loop OSS model

Back in early 2014, PAOSS posted an article about the importance of closed loop designs in OSS, which included the picture below:

[Image: OSS / DSS feedback loop]

It generated quite a bit of discussion at the time and led me to being introduced to two companies that were separately doing some interesting aspects of this theoretical closed loop system. [Interestingly, whilst being global companies, they both had strong roots tying back to my home town of Melbourne, Australia.]

More recently, Brian Levy of TM Forum has published a more sophisticated closed-loop system, in the form of a Knowledge Defined Network (KDN), as seen in the diagram below:
[Image: Brian Levy’s closed-loop OSS / KDN diagram]
I like that this control-loop utilises relatively nascent technologies like intent networking and the constantly improving machine-learning capabilities (as well as analytics for delta detection) to form a future OSS / KDN model.

The one thing I’d add is the concept of inputs (in the form of use cases such as service orders or new product types) as well as outputs / outcomes such as service activations for customers and not just the steady-state operations of a self-regulating network. Brian Levy’s loop is arguably more dependent on the availability and accuracy of data, so it needs to be initially seeded with inputs (and processing of workflows).

Current-day OSS are too complex and variable (ie un-repeatable), so perhaps this represents an architectural path towards a simpler future OSS – in terms of human interaction at least – although the technology required to underpin it will be very sophisticated. The sophistication will be palatable if we can deliver the all-important repeatability described in, “I want a business outcome, not a deployment challenge.” BTW. This refers to repeatability / reusability across organisations, not just being able to repeatedly run workflows within organisations.
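
To make that loop (and its seeding inputs) a little more tangible, here’s a deliberately simplistic sketch. It is not Brian Levy’s KDN – just a caricature of intent seeded by service orders, deltas detected between intended and observed state, and remediation actions generated to close the gap.

```python
# A deliberately simplistic caricature of a closed loop: inputs (service orders)
# seed the intended state, analytics detect deltas against observed state, and
# actions are generated to close the gap. Illustrative only - not the KDN itself.

intended_state = {}   # seeded by inputs (eg service orders, new product types)
observed_state = {}   # fed by assurance / telemetry


def submit_service_order(service_id: str, bandwidth_mbps: int) -> None:
    intended_state[service_id] = {"bandwidth_mbps": bandwidth_mbps}


def observe(service_id: str, bandwidth_mbps: int) -> None:
    observed_state[service_id] = {"bandwidth_mbps": bandwidth_mbps}


def control_loop_iteration() -> list[str]:
    actions = []
    for service_id, intent in intended_state.items():
        actual = observed_state.get(service_id)
        if actual is None:
            actions.append(f"activate {service_id} at {intent['bandwidth_mbps']} Mbps")
        elif actual != intent:
            actions.append(f"remediate {service_id}: {actual} -> {intent}")
    return actions


submit_service_order("svc-001", 100)   # an input / use case, not just steady-state
print(control_loop_iteration())        # ['activate svc-001 at 100 Mbps']
observe("svc-001", 50)                 # telemetry shows a degraded service
print(control_loop_iteration())        # ['remediate svc-001: ... -> ...']
```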

Can you define why OSS projects are so challenging?

Answer. Change creates incompetence.

Whether on the vendor/integrator side, the customer side or along the strategic advisor line, OSS projects bring about constant change; massive change.

I’ve often wondered why I can feel so incompetent in the world of OSS, even though I know more about it than (probably) any other topic.

The answer has just dawned on me, as per above – change creates incompetence. There is so much change happening on so many levels within OSS that every one of us is a freshly-minted incompetent, no matter how much prior competence we may’ve built up.

And this defines the double-edged sword of OSS. The pain of feeling incompetent drives the thirst for learning / evolution.

Instead of the easy metrics…

“What is it that you hope to accomplish? Not what you hope to measure as a result of this social media strategy/launch, but to actually change, create or build?

An easy but inaccurate measurement will only distract you. It might be easy to calibrate, arbitrary and do-able, but is that the purpose of your work?

I know that there’s a long history of a certain metric being a stand-in for what you really want, but perhaps that metric, even though it’s tried, might not be true. Perhaps those clicks, views, likes and grps are only there because they’re easy, not relevant.

If you and your team can agree on the goal, the real goal, they might be able to help you with the journey…

System innovations almost always involve rejecting the standard metrics as a first step in making a difference. When you measure the same metrics, you’re likely to create the same outcomes. But if you can see past the metrics to the results, it’s possible to change the status quo.”
Seth Godin on his blog here.

There are a lot of standard metrics in OSS and comms networks. In the context of Seth’s post, I have two layers of metrics for you to think about. One layer is the traditional role of OSS – to provide statistics on the operation of the comms network / services / etc. The second layer is in the statistics of the OSS itself.

Layer 1 – Our OSS tend to just provide a semi-standard set of metrics because service providers tend to use similar metrics. We even had standards bodies helping providers get consistent in their metrics. But are those metrics still working for a modern carrier?
Can we disrupt the standard set of metrics by asking what a service provider is really wanting to achieve in our changing environment?

Layer 2 – What do we know about our own OSS? How are they used? How does that usage differ across clients? How does it differ between other products? What are the metrics that help sell an OSS (either internally or externally)?
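
As a tiny, hypothetical example of that second layer, the sketch below counts feature-usage events within the OSS itself – the kind of data that could start to answer “how is our own OSS actually used?”. The event structure and names are invented for illustration.

```python
# Hypothetical "Layer 2" instrumentation: events about the OSS itself, not the network.
from collections import Counter

usage_events = [
    {"user": "alice", "persona": "noc_operator", "feature": "alarm_list"},
    {"user": "alice", "persona": "noc_operator", "feature": "alarm_list"},
    {"user": "bob", "persona": "order_handler", "feature": "service_order_entry"},
    {"user": "bob", "persona": "order_handler", "feature": "alarm_list"},
]

feature_usage = Counter(event["feature"] for event in usage_events)
print(feature_usage.most_common())
# [('alarm_list', 3), ('service_order_entry', 1)] -> which branches of the tree earn their keep?
```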

OSS billionaires with perfect abs

“If [more] information was the answer, then we’d all be billionaires with perfect abs.”
Derek Sivers.

The sharing economy has made a deluge of information available to us at negligible cost. We have more information available at our fingertips than we can ever consume and process. So why don’t we all have massive bank balances and perfect abs? (although I’m sure most PAOSS readers do of course)

The answer is that information is only as good as the decisions we are able to make with it. More specifically, the ability to distill the information down to the insights that compel great decisions to be made.

As OSS implementers, we can easily bombard our users with enough information calories to make perfect abs an impossible dream. Too easily in fact.

It’s much harder to consistently produce insights of great value. Perhaps it even needs the unique billionaire’s lens to spot the insights hidden in the information. But that’s what makes billionaires so rare.

Herein lies the message I want to leave you with today – how do we OSS engineers train ourselves to see information more through a billionaire / value lens rather than our more typical technical lenses? I’m not a billionaire so I (we?) spend too much time thinking about technically correct solutions rather than thinking about valuable solutions. Do we have the right type of training / thinking to actually know what a valuable solution looks like?

AIOps (Algorithmic IT Operations)

“AIOps stands for Algorithmic IT Operations and is a new category as defined by Gartner research that is an evolution of what the industry previously referred to as ITOA (IT Operations and Analytics). We have reached a point where data science and algorithms are being successfully applied to automate traditionally manual tasks and processes in IT Operations. Now, algorithmics are being incorporated into tools that allow organizations to streamline operations even further by liberating humans from time-consuming and error prone processes, such as defining and managing an endless sprawl of rules and filters in legacy IT Management systems.

Algorithmic IT operations platforms offer increasingly wide and valuable sets of advanced analytical techniques. Although initially targeted at IT operations management use cases and data, they can also be applied by infrastructure and operations leaders to broader data sets to yield unique insights.

A goal of AIOps solutions is to make life better for us, but the line gets a bit blurry when humans interact with AIOps. The more advanced AIOps solutions will have neural-network technology built in that will learn from its operators, adapt and attempt to eliminate repetitive and tedious tasks.”
Sai Krishna here.

Yesterday’s post talked about how to reduce a 150-person OSS implementation team down to just 1. The concept of AIOps, if taken to its proposed conclusion, could lead to large reductions in OSS support teams too. In theory, you put the system/s in place, seed them and then let them learn for themselves rather than having a team implement lots of logic rules.
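
To illustrate that shift from hand-crafted rules to learned baselines (in a massively simplified way – real AIOps platforms use far richer models), here’s a small sketch that flags a metric value as anomalous when it falls outside a baseline derived from its own history, rather than outside a manually defined threshold.

```python
# Massively simplified sketch: a value is anomalous if it falls outside a baseline
# learned from history, rather than outside a hand-written threshold rule.
from statistics import mean, stdev


def is_anomalous(history: list[float], new_value: float, sigmas: float = 3.0) -> bool:
    """Flag values outside the baseline 'learned' from the metric's own history."""
    if len(history) < 2:
        return False  # not enough data to have learned anything yet
    baseline, spread = mean(history), stdev(history)
    return abs(new_value - baseline) > sigmas * max(spread, 1e-9)


cpu_history = [41.0, 43.5, 40.2, 42.8, 44.1, 39.9, 42.3]
print(is_anomalous(cpu_history, 43.0))   # False - within the learned baseline
print(is_anomalous(cpu_history, 78.0))   # True  - previously needed a manual rule
```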

“In Gartner’s report, they detail the monitoring capabilities of the top AIOps tools, dividing comprehensive monitoring into 11 categories that include historical data management, streaming data management, log data ingestion, wire data ingestion, document text ingestion, automated pattern discovery and prediction, anomaly detection, root cause determination, on-premise delivery, and software as a service.”
This article from Loom Systems provides some really interesting perspectives on AIOps and the corresponding Gartner report.

For full disclosure, I have no financial interests in Loom Systems or Gartner, nor have I used the Loom Systems tools to be able to promote or deride their market offerings.

Short-sighted / long-sighted OSS

“When I hear that the average tenure in tech is just two years, I wonder how anyone gets anything done. When I hear such job hopping justified by the fact that changing companies is the only way to get a raise, I just shake my head at the short-sightedness of such companies.”
David Heinemeier Hansson, here on Signal v Noise.

Have you noticed how your most valuable colleagues also tend to have had lengthy tenures in their workplace*? No matter how experienced you are at OSS, it takes a six month (plus) apprenticeship before you start adding real value in a new OSS role. The apprenticeship is usually twelve months or longer for those who are new to the industry. Unfortunately, it takes that long to develop the tribal knowledge of the tools, the processes, the people, the variants and the way things get done (including knowing how to circumvent rules).

To be honest, like DHH above, I shake my head when employers treat their OSS talent as expendable and don’t actively seek to quell high turnover in their ranks. An average tenure of two years equates to massive inefficiency. That’s my perspective on internal resources, the resources that run an OSS. The problem with internal roles though is that they can be so all-encompassing that resources become myopic, focused only on the internal challenges / possibilities.

The question then becomes how you can open up a wider field of view. The perfect example in our current environment is in the increasing use of CI/CD / DevOps / Agile methods to manage OSS delivery. I hear of a new tool almost every day (think Ansible, Kubernetes, Jenkins, Docker, Cucumber, etc, etc). It bewilders me how people keep up to know which are the best options, yet this is only one dimension of the change that is occurring in the OSS landscape. In these situations, high turnover actually helps with the cross-fertilisation of ideas / tools / practices. Similarly, external consultants can also assist with insights garnered from multiple environments.

There is a place for both on OSS projects, but I strongly subscribe to DHH’s views above. It’s the age-old question – how to attract and retain great talent, but do we give this question enough consideration?

* BTW. I’m certainly not implying that by corollary all long-tenured resources are the most valuable.

The interview question tech recruiters will never ask, but should

Over the last few days, this blog has been diving into the career steps that OSS (and tech) specialists can make in readiness for the inevitable changes in employment dynamics that machine learning and AI will bring about.

You’ve heard all the stories about robots and AI taking all of our jobs. Any job that is repeatable and / or easy to learn will be automated. We’ve also seen that products are commoditising and many services are too, possibly because automation and globalisation are increasing the supply of both. Commoditisation means less profitability per unit to drive projects and salaries. We’ve already seen how this is impacting the traditional investors in OSS (eg CSPs).

Oh doom and gloom. So what do we do next?

Art!

That leads to my contrarian interview question. “So, tell me about your art.”

More importantly, using the comments section below, tell me about YOUR art. I’d love to hear your thoughts.

FWIW, here’s Seth Godin’s definition of art, which resonates with me, “Art isn’t only a painting. Art is anything that’s creative, passionate, and personal. And great art resonates with the viewer, not only with the creator.
What makes someone an artist? I don’t think it has anything to do with a paintbrush. There are painters who follow the numbers, or paint billboards, or work in a small village in China, painting reproductions. These folks, while swell people, aren’t artists. On the other hand, Charlie Chaplin was an artist, beyond a doubt. So is Jonathan Ive, who designed the iPod. You can be an artist who works with oil paints or marble, sure. But there are artists who work with numbers, business models, and customer conversations. Art is about intent and communication, not substances.”

If we paint our OSS by numbers (not with numbers), we’re more easily replaced. If we inspire with solutions that are unique, creative, passionate and personal, that’s harder for machines to replace.

Next step, man-machine partnerships

Getting literate in the language of the future posed the thought that data is the language of the future and therefore it is incumbent on all OSS practitioners to ensure their literacy.

It also posed that data literacy provides a stepping stone to a future where machine learning is more prevalent. This got me thinking. We currently think about man-machine interfaces where we design ways for humans and machines to get info into the system. But the next step, the future way of thinking will be man-machine partnerships, where we design ways to leverage machines to get more out of the system.

Data literacy will be essential to making this transition from MM interface to MM partnership. Machines (eg IoT and OSS) will get data into the system. Through partnerships with humans, machines will also be instrumental in the actions / insights coming out of the system.

Getting literate in the language of the future

As we all know, digitalisation of everything is decreasing barriers to entry and increasing the speed of change in almost every perceivable industry. Unfortunately, this probably also means the half-life of opportunity exploitation is also shrinking. The organisations that seem to be best at leveraging opportunities in the market are the ones that are able to identify and act the quickest.

They identify and act quickly… based on data. That’s why we hear about data-driven organisations and data-driven decision support. OSS collect enormous amounts of data every year. But it’s only those who can turn that information into action who are able to turn opportunities into outcomes.

Data is the language of the future (well today too of course), so literacy in that language will become increasingly important. I’m not expecting to become a highly competent data scientist any time soon, but I’m certainly not expecting to delegate completely to mathletes either.

The language of data is not just in the data sets, but also in the data processing techniques that will become increasingly important – regression, clustering, statistics, augmentation, pattern-matching, joining, etc. If you can’t speak the language, you can’t drive the change or ask the right questions to unearth the gems you’re seeking. Speaking the language allows you to take the tentative first steps towards machine learning and AI.
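
As a small taste of those techniques, the sketch below joins two made-up OSS data sets (inventory and weekly fault counts) and then runs a simple regression over the joined result to ask whether faults are trending upward at a site. Standard library only (Python 3.10+), and the data is entirely fictional.

```python
# Fictional data and a deliberately small example: a "join" between two OSS data
# sets, then a regression over the joined result. Python 3.10+ for linear_regression.
from statistics import linear_regression

inventory = {"rtr-01": "Melbourne", "rtr-02": "Sydney", "rtr-03": "Melbourne"}
weekly_faults = [
    ("rtr-01", 1, 3), ("rtr-01", 2, 4), ("rtr-01", 3, 6),
    ("rtr-03", 1, 2), ("rtr-03", 2, 5), ("rtr-03", 3, 7),
]  # (device, week, fault count)

# "Joining": enrich fault records with site data from the inventory
melbourne_faults = [(week, count) for device, week, count in weekly_faults
                    if inventory.get(device) == "Melbourne"]

# "Regression": are faults trending upward at that site?
weeks = [w for w, _ in melbourne_faults]
counts = [c for _, c in melbourne_faults]
slope, _ = linear_regression(weeks, counts)
print(f"Melbourne faults trending at {slope:+.2f} per week")
```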

As a consultant, I see myself as a connector – of ideas, people, concepts, solutions – and I see OSS largely falling into the same category. But the consultancy, and/or OSS, skills of today will undoubtedly need to be augmented with connection of data too – connection of data sets, data analysis techniques, data models – to be able to prove consultancy hypotheses with real data. That’s where consultancy and OSS is going, so that’s where I need to go too.

Burning out

“Information overload is happening at all career levels. Companies restructure, and current staff absorb additional responsibilities, requiring new skills to be learned. Technology changes, and new systems, tools and processes need to be mastered. Markets change, and new strategies or client prospects or industry sectors need to be researched. To be successful in today’s times you need to master not only the skills relevant to your own job, but also the skills of learning, adapting to change quickly, and sustainably dealing with change without burning out.”
Caroline Ceniza-Levine, here.

Sounds like a day in the life of OSS doesn’t it?

I’m a huge fan of AFL. The game, like OSS, is getting increasingly sophisticated, faster and more strategic. Despite this, hardly a week goes by without one of the top-line coaches stating that it’s really a simple game. The truth of the matter is that it is not a simple game, but the best coaches have a way of simplifying the objectives so that their players can function well within the chaos.

They train and strategise for the complexity but simplify the game-day messaging to players down to only 2-3 key indicators and themes [at least that’s what the coaches say in interviews, but I wouldn’t know for sure having not been a professional footballer].

On any given day on most of the OSS projects I’ve worked on, there is more chaos than clarity. A chaos of technologies, meetings, deadlines, deliverables, processes, designs, checklists, etc.

How often do we stop to run a pre-mortem before death by complexity has occurred?

Crossing the OSS tech chasm

When discussing yesterday’s post about increasing feedback loops in OSS, the technology gap on exponential technologies such as IoT, network virtualisation and machine learning reminded me of Geoffrey Moore’s “Crossing the Chasm” as shown in the graph below.

[Image: Crossing the Chasm technology adoption curve]

In the context of the abovementioned technologies, the chasm isn’t represented by the adoption of a product (as per Moore’s graph) but in the level of sophistication required to move to future iterations. [Noting that the sophistication chasm may also affect the take-up rate, as the OSS vendors that can cross the chasm to utilising more advanced machine learning, robotics and automation will have a distinct competitive advantage].

This gets back to the argument for developing these exponential technologies, even if only by investing in the measure and feedback steps in the control loop initially.
[Image: Singularity Hub’s power law diagram, courtesy of Singularity Hub]