How to bring your art and your science to your OSS

In the last two posts, we’ve discussed repeatability within the field of OSS implementation – paint-by-numbers vs artisans and then resilience vs precision in delivery practices.

Now I’d like you to have a think about how those posts overlay onto this quote by Karl Popper:
“Non-reproducible single occurrences are of no significance to science.”

Every OSS implementation is different. That means that every one is a non-reproducible single occurrence. But if we bring this mindset into our OSS implementations, it means we’re bringing an artisanal rather than scientific method to the project.

I’m all for bringing more art, more creativity, more resilience into our OSS projects.

But I’m also advocating more science. More repeatability. More precision. Whilst every OSS project may be different at a macro level, there are a lot of similarities in the micro-elements. If you pay close attention to the rhythms of your projects, you’ll tend to notice recurring sequences of activities. Perhaps our products can use techniques to spot and leverage those similarities too.

In other words, bring your art and your science to your OSS. Please leave a comment below. I’d love to hear the techniques you use to achieve this.

How did your first OSS project make you feel?

Featured

Can you remember how you felt during the initial weeks of your first OSS project?

I can vividly recall how out of my depth I felt on my first OSS project. I was in my 20s and had relocated to a foreign country for the first time. I had a million questions (probably more actually). The challenges seemed immense (they were). I was working with talented and seasoned telco veterans who had led as many as 500 people, but had little OSS experience either. None of us had worked together before. We were all struggling. We were all about to embark on an incredibly steep learning curve, by far the steepest of my career to date.

Now through PAOSS I’m looking to create a new series of free e-books to help those starting out on their own OSS journey.

But this isn’t about my experiences. That would be a perspective limited to just one unique journey in OSS. No, this survey is all about you. I’d love to capture your feelings, experiences, insights and opinions to add much greater depth to the material.

Are you up for it? Are you ready to answer just five one-line questions to help the next generation of OSS experts?

We’ve created a survey below that closes on 23 March 2019. The best 5 responses will win a free copy of my physical book, “Mastering your OSS,” (valued at US$49.97+P/H).

Click on the link below to start the 5 question survey:

https://passionateaboutoss.com/how-did-your-first-oss-make-you-feel

Where an absence of OSS data can still provide insights

The diagram below has some parallels with OSS. The story however is a little long before it gets to the OSS part, so please bear with me.


The diagram shows an analysis the US Navy performed during WWII of where returning planes had been hit. The theory was that they should be reinforcing the areas that received the most hits (ie the wing-tips, the central part of the body, etc, as per the diagram).

Abraham Wald, a statistician, had a completely different perspective. His rationale was that the hits would be more uniform and the blank sections on the diagram above represent the areas that needed reinforcement. Why? Well, the planes that were hit there never made it home for analysis.

In OSS, this is akin to the device or EMS that has failed and is unable to send any telemetry data. No alarms are appearing in our alarm lists for those devices / EMS because they’re no longer capable of sending any.

That’s why we use heartbeat mechanisms to confirm a system is alive (and capable of sending alarms). These come in the form of pollers, pingers or just watching for other signs of life such as network traffic of any form.
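As a rough illustration, a heartbeat poller needs only a few lines. This is a minimal sketch in which a TCP connection attempt stands in for a real liveness test (ICMP ping, SNMP get, or watching for traffic); the device list and alarm output are hypothetical:

```python
import socket
import time

def heartbeat(host, port, timeout=2.0):
    """Return True if the device answers a TCP connection attempt.

    A stand-in for a real poller (ICMP ping, SNMP get, etc)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch(devices, interval=60):
    """Raise a synthetic 'loss of heartbeat' alarm for silent devices."""
    while True:
        for name, (host, port) in devices.items():
            if not heartbeat(host, port):
                # The device can't tell us it has failed, so we say it for it
                print(f"ALARM: loss of heartbeat from {name}")
        time.sleep(interval)
```

The key point is the inversion: the poller generates the alarm precisely because the device cannot.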

In what other areas of OSS can an absence of data demonstrate an insight?

Of OSS bosses and teachers

“We need more teachers and less bosses.”
Lee Cockerell

The quote from Lee could be bordering on condescending to some people. Hmmm…. Sorry about that.

But let me ask you a couple of questions:

  1. Of all your years in OSS (or in industry in general), who are the 5 people who have been the most valuable to you (and your customers)?
  2. Of those 5 people, which of the following three tags would you assign to them: a) teacher, b) boss, c) implementer?

From my experiences and my list of top five, I would only classify one as a boss. The other four are dual-tagged as teachers and implementers. That makes me wonder whether the two go hand-in-glove: is the ability to implement as vital to being a good teacher as being a good communicator / teacher is to being a good implementer?

Based on my list (a tiny sample size I admit), it seems we need four times as many OSS teacher/implementers as we need bosses. 🙂

Another question. Do you aspire to be (or already are) a boss, a teacher or an implementer?

Identifying the fault-lines that trigger OSS churn

“Most people slog through their days in a dark funk. They almost never get to do anything interesting or go to interesting places or meet interesting people. They are ignored by marketers who want them to buy their overpriced junk and be grateful for it. They feel disrespected, unappreciated and taken for granted. Nobody wants to take the time to listen to their fears, dreams, hopes and needs. And that’s your opening.”
John Carlton

Whilst the quote above may relate to marketing, it also has parallels in the build and run phases of an OSS project. We talked about the trauma of OSS yesterday, where the OSS user feels so much trauma with their current OSS that they’re willing to go through the trauma of an OSS transformation. Clearly, a procurement event must be preceded by a significant trauma!

Sometimes that trauma has its roots in the technical, where the existing OSS just can’t do (or be made to do) the things that are most important to the OSS user. Or it can’t do them reliably, at scale, in time, cost effectively, or without significant risk / change. That’s certainly a big factor.

However, the churn trigger appears more often to be a human one. The users feel disrespected, unappreciated and taken for granted. But here’s an interesting point that might surprise some users – the suppliers also often feel disrespected, unappreciated and taken for granted.

I have the privilege of working on both sides of the equation, often even as the intermediary between both sides. Where does the blame lie? Where do the fault-lines originate? The reasons are many and varied of course, but like a marriage breakup, it usually comes down to relationships.

Where the communication method is through hand-grenades being thrown over the fence (eg management by email and by contractual clauses), results are clearly going to follow a deteriorating arc. Yet many OSS relationships structurally start from a position of us and them – the fence is erected – from day one.

Coming from a technical background, it took me far too deep into my career to come to this significant realisation – the importance of relationships, not just the quest for technical perfection. The need to listen to both sides’ fears, dreams, hopes and needs.

Zero Touch Assurance – ZTA (part 1)

A couple of years ago, we published a series on pre-cognitive OSS based on the following quote by Ben Evans about three classes of search/discovery:

  1. There is giving you what you already know you want (Amazon, Google)
  2. There is working out what you want (Amazon and Google’s aspiration)
  3. And then there is suggesting what you might want (Heywood Hill).

Today, I look to apply a similar model towards the holy grail of OSS – Zero Touch Assurance (ZTA).

  1. Monitoring – There is monitoring the events that happen in the network and responding manually
  2. Post-cognition – There is monitoring events that happen in the network, comparing them to past events and actions (using analytics to identify repeating patterns), using the past to recommend (or automate) a response
  3. Pre-cognition – There is identifying events that have never happened in the network before, yet still being able to provide a recommended / automated response
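As a toy sketch of step 2 (post-cognition): recommend the action most often taken for matching past events. The event signatures, action names and the shape of the history are hypothetical, purely to illustrate the idea:

```python
from collections import Counter

def recommend(event_signature, history):
    """Recommend the action most often taken for matching past events.

    history is a list of (event_signature, action) pairs from past
    incidents. Returns (action, confidence), or (None, 0.0) for an
    event never seen before - the point where post-cognition stops
    and pre-cognition would need to take over."""
    actions = [act for sig, act in history if sig == event_signature]
    if not actions:
        return None, 0.0
    action, count = Counter(actions).most_common(1)[0]
    return action, count / len(actions)
```

A real analytics engine would of course match on fuzzy similarity rather than exact signatures, but the shape of the problem is the same.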

The third step, pre-cognition, is where the supposed holy grail lies. It’s where everyone talks about the prospect of AI solving all of our problems. It seems we’re still a way off this happening.

But I wonder whether the actual ZTA solution might be more of a brute-force version of step 2 – post-cognition?

More on that tomorrow.

Do you have a nagging OSS problem you cannot solve?

On Friday, we published a post entitled, “Think for a moment…”, which posed the question of whether we might be better served looking back at our most important existing features and streamlining them, rather than inventing new features that have little impact.

Over the weekend, a promotional email landed in my inbox from Nightingale Conant. It is completely unrelated to OSS, yet the steps outlined below seem to be a fairly good guide for identifying what to reinvent within our existing OSS.

Go out and talk to as many people [customers or potential] as you can, and ask them the following questions:
1. Do you have a nagging problem you cannot solve?
2. What is that problem?
3. Why is solving that problem important to you?
4. How would solving that problem change the quality of your life?
5. How much would you be willing to pay to solve that problem?

Note: Before you ask these questions, make sure you let the people know that you’re not going to sell them anything. This way you’ll get quality answers.
After you’ve talked with numerous people, you’ll begin to see a pattern. Look for the common problems that you believe you could solve. You’ll also know how much people are willing to pay to have their problem solved and why.
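That “look for a pattern” step lends itself to a trivial sketch: tally the problems named across interviews. The response structure (a "problem" field) is a hypothetical stand-in for however you capture the answers:

```python
from collections import Counter

def common_problems(responses, top_n=3):
    """Tally the answers to question 2 ('What is that problem?')
    across all interviews and return the most commonly named
    problems - the pattern the advice above says will emerge."""
    return Counter(r["problem"] for r in responses).most_common(top_n)
```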

I’d be curious to hear back from you. Do those first 4 questions identify a pattern that relates to features you’ve never heard of before or features that your OSS already performs (albeit perhaps not as optimally as it could)?

Nobody dabbles at dentistry

There are some jobs that are only done by accredited professionals.
And then there are most jobs, jobs that some people do for fun, now and then, perhaps in front of the bathroom mirror.
It’s difficult to find your footing when you’re a logo designer, a comedian or a project manager. Because these are gigs that many people think they can do, at least a little bit.”
Seth Godin here.

I’d love to plagiarise the entire post from Seth above, but instead suggest you have a look at the other pearls of wisdom he shares in the link above.

So where does OSS fit in Seth’s thought model? Well, you don’t need an accreditation like a dentist does. Most of the best I’ve met haven’t had any OSS certifications to speak of.

Does the layperson think they can do an OSS role? Most people have never heard of OSS, so I doubt they would believe they could do a role as readily as they could see themselves being logo designers. But the best I’ve met have often come from fields other than telco / IT / network-ops.

One of my earliest OSS projects was for a new carrier in a country that had just deregulated. They were the second fixed-line carrier in this country and tried to poach staff from the incumbent. Few people crossed over. To overcome this lack of experience, the carrier built an OSS team that consisted of a mathematician, an architect, an automotive engineer, a really canny project manager and an assortment of other non-telco professionals.

The executives of that company clearly felt they could develop a team of experts (or just had no choice but to try). The strategy didn’t work out very well for them. It didn’t work out very well for us either. We were constantly trying to bring their team up to speed on the fundamentals in order to use the tools we were developing / delivering (remembering that as one of my first OSS projects, I was still desperately trying to bring myself up to speed – still am for that matter).

As Seth also states, “If you’re doing one of these non-dentist jobs, the best approach is to be extraordinarily good at it. So much better than an amateur that there’s really no room for discussion.” That needs hunger (a hungriness to learn without an accredited syllabus).

It also needs humility though. Even the most extraordinarily good OSS proponents barely scratch the surface of all there is to learn about OSS. It’s the breadth of sub-expertise that probably explains why there is no single accreditation that covers all of OSS.

How to build a personal, cloud-native OSS sandpit

As a project for 2019, we’re considering the development of a how-to training course that provides a step-by-step guide to build your own OSS sandpit to play with. It will be built around cloud-native and open-source components. It will be cutting-edge and micro-scaled (but highly scalable in case you want to grow it).

Does this sound like something you’d be interested in hearing more about?

Like or comment if you’d like us to keep you across this project in 2019.

I’d love to hear your thoughts on what the sandpit should contain. What would be important to you? We already have a set of key features identified, but will refine it based on community feedback.

OSS answers that are simple but wrong vs complex but right

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills…”
John F Kennedy

Let’s face it. The business of running a telco is complex. The business of implementing an OSS is complex. The excitement of working in our industry probably stems from the challenges we face, and the impact we can make if/when we overcome them.

The cartoon below tells a story about telco and OSS consulting (I’m ignoring the “Science vs everything else” box for the purpose of this post, focusing only on the simple vs complex sign-post).

Simple vs Complex

I was recently handed a brochure from a consulting firm that outlined a step-by-step transformation approach for comms service providers of different categories. It described quarter-by-quarter steps to transform across OSS, BSS, networks, etc. Simple!

The problem with their prescriptive model was that they’d developed a stereotype for each of the defined carrier categories. By stepping through the model and comparing against some of my real clients, it was clear that their transformation approaches weren’t close to aligning to any of those clients’ real situations.

Every single assignment and customer has its own unique characteristics, its own nuances across many layers. Nuances that in some cases are never even visible to an outsider / consultant. Trying to prepare generic but prescriptive transformation models like this would seem to be a futile exercise.

I’m all for trying to bring repeatable methodologies into consulting assignments, but they can only act as general guidelines that need to be moulded to local situations. I’m all for bringing simplification approaches to consultancies too, as reflected by the number of posts that are categorised as “Simplification” here on PAOSS. We sometimes make things too complex, so we can simplify, but this definitely doesn’t imply that OSS or telco transformations are simple. There is no one-size-fits-all approach.

Back to the image above, there’s probably another missing arrow – Complex but wrong! And perhaps another answer with no specific path – Simple, but helpful in guiding us towards the summit / goal.

I can understand why telcos get annoyed with us consultants telling them how they should run their business, especially consultants who show no empathy for the challenges faced.

But more on that tomorrow!

The Theory of Evolution, OSS evolution

Evolution says that biological change is a property of populations — that every individual is a trial run of an experimental combination of traits, and that at the end of the trial, you are done and discarded, and the only thing that matters is what aggregate collection of traits end up in the next generation. The individual is not the focus, the population is. And that’s hard for many people to accept, because their entire perception is centered on self and the individual.”
FreeThoughtBlog.

Have we almost reached the point where the same can be said for OSS workflows? In the past (and the present?) we had pre-defined process flows. There may be an occasional if/else decision gate, but we could capture most variants on a process diagram. These pre-defined processes were / are akin to a production line.

Process diagrams are becoming harder to lock down as our decision trees get more complicated. Technologies proliferate, legacy product lines don’t get obsoleted, the number of customer contact channels increases. Not only that, but we’re now marketing to a segment of one, treating every one of our customers as unique, whilst trying not to break our OSS / BSS.

Do we have the technology yet that allows each transaction / workflow instance to be treated as an experimental combination of attributes / tasks? More importantly, do we have the ability to identify any successful mutations that allow the population (ie the combination of all transactions) to get progressively better, faster, stronger?

It seems that to get to CX nirvana, being able to treat every customer completely uniquely, we need to first master an understanding of the population at scale. Conversely, to achieve the benefits of scale, we need to understand and learn from every customer interaction uniquely.

That’s evolution. The benchmark sets the pattern for future workflows until a variant / mutation identifies a better benchmark to establish the new pattern for future workflows, which continues.

The production line workflow model of the past won’t get us there. We need an evolution workflow model that is designed to accommodate infinite optionality and continually learn from it.

Does such a workflow tool exist yet? Actually, it’s more than a workflow tool. It’s a continually improving loop workflow.
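As a thought experiment, that continually improving loop can be sketched as a simple hill-climb: keep the benchmark workflow until a mutated variant scores better. This is a minimal sketch, not a product design; `score` and `mutate` are hypothetical stand-ins for whatever fitness measure and variation mechanism a real workflow engine might use:

```python
def evolve_workflow(benchmark, score, mutate, generations=100):
    """Keep the current benchmark workflow until a mutated variant
    scores better, at which point the variant becomes the new
    benchmark - the evolution model described above."""
    best, best_score = benchmark, score(benchmark)
    for _ in range(generations):
        trial = mutate(best)          # each transaction is a trial run
        trial_score = score(trial)
        if trial_score > best_score:  # a successful mutation is kept
            best, best_score = trial, trial_score
    return best
```

The population-level learning is in the loop, not in any individual trial, which mirrors the evolution quote: the individual is not the focus, the population is.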

The culture required to support Telkomsel’s OSS/BSS transformation

Yesterday’s post described the ways in which Telkomsel has strategically changed their value-chain to attract revenues with greater premiums than the traditional model of a telco. They’ve used a new digital core and an API framework to help facilitate their business model transformation. As promised yesterday, we’ll take a slightly closer look at the culture of Telkomsel’s transformation today.

Monty Hong of Telkomsel presented the following slides during a presentation at TM Forum’s DTA (Digital Transformation Asia) last week.

The graph below shows the need for patience and ongoing commitment to major structural transformations like the one Telkomsel underwent.

Telkomsel's commitment to transformation

The curve above tends to represent the momentum and morale I’ve felt on most large OSS projects. Unfortunately, I’ve also been involved in projects where project sponsors haven’t stayed the journey beyond the dip (Q4/5 in the graph above) and so haven’t experienced the benefits of the proposed project. The graph articulates well the message that change management and stakeholder / sponsor champions are an important, but often overlooked, component of an OSS transformation.

The diagram below helps to articulate the benefits of an open API model being made accessible to external market-places. We’re entering an exciting time for OSS, with previously hidden, back-end telco functionality now being increasingly presented to the market (if even only as APIs into the black-box).

Telkomsel's internal/external API influences

Amongst many other benefits, it helps to bring the customer closer to implementers of these back-end systems.

Are telco services and SLAs no longer relevant?

I wonder if we’re reaching the point where “telecommunication services” is no longer a relevant term? By association, SLAs are also a bust. But what are they replaced by?

A telecommunication service used to effectively be the allocation of a carrier’s resources for use by a specific customer. Now? Well, less so:

  1. Service consumption channel alternatives are increasing, from TV and radio; to PC, to mobile, to tablet, to YouTube, to Insta, to Facebook, to a million others. Consumption sources are even more prolific.
  2. Customer contact channel alternatives are also increasing, from contact centres; to IVR, to online, to mobile apps, to Twitter, etc.
  3. A service bundle often utilises third-party components, some of which are “off-net”
  4. Virtualisation is increasingly abstracting services from specific resources. They’re now loosely coupled with resource pools and rely on high availability / elasticity to ensure customer service continuity. Not only that, but those resource pools might extend beyond the carrier’s direct control and out to cloud provider infrastructure

The growing variant-tree is taking the concept beyond the reach of “customer services” and evolves to become “customer experiences.”

The elements that made up a customer service in the past tended to fall within the locus of control of a telco and its OSS. The modern customer experience extends far beyond the control of any one company or its OSS. An SLA – Service Level Agreement – only pertains to the sub-set of an experience that can be measured by the OSS. We can only aspire to offer an ELA – Experience Level Agreement – for now, because we don’t yet have the mechanisms by which to measure or manage the entire experience.

The metrics that matter most for telcos today tend to revolve around customer experience (eg NPS). But aside from customer surveys, ratings and derived / contrived metrics, we don’t have electronic customer experience measurements.

Customer services are dead; Long live the customer experiences king… if only we can invent a way to measure the whole scope of what makes up customer experiences.

Does Malcolm Gladwell’s 10,000 hours apply to OSS?

You’ve probably all heard of Malcolm Gladwell’s 10,000 hour rule from his book, Outliers? In it he suggests that roughly 10,000 hours of deliberate practice makes an individual world-class in their field. But is 10,000 hours enough in the field of OSS?

I look back to the year 2000, when I first started on OSS projects. Over the following 5 years or so, I averaged an 85 hour week (whilst being paid for a 40 hour week, but I just loved what I was doing!!). If we multiply 85 by 48 by 5, we end up with 20,400 hours. That’s double the Gladwell rule. And I was lucky to have been handed assignments across the whole gamut of OSS activities, not just monotonously repeating the same tasks over and over. But those first 5 years / 20,000+ hours were barely the tip of the iceberg in terms of OSS expertise.

Whilst 10,000 hours might work for repetitive activities such as golf, tennis, chess, music, etc it’s probably less impactful for such multi-faceted fields as OSS.

So, what does it take to make an OSS expert? Narrow focus on just one of the facets? Broad view across many facets? Experience just using, building, designing, optimising OSS, or all of those things? Study, practice or a combination of both?

If you’re an OSS expert, or you know any who stand head and shoulders above all others, how would you describe the path that got you / them there?
Or if I were to ask another way, how would you get an OSS newbie up to speed if you were asked to mentor them from your lofty position of expertise?

Who are more valuable, OSS hoarders or teachers?

Any long-term readers of this blog will have heard me talk about tripods, and how valuable they are to any OSS team. They’re the people who know about IT, operations/networks and the business context within which they’re working. They’re the most valuable people I’ve worked with in the world of OSS because they’re the ones who can connect the many disparate parts that make up an OSS.

But on reflection, there’s one other factor shared by almost all of the tripods I’ve worked with – they’re natural teachers. They want to impart any and all of the wisdom they hold.

I once worked in an Asian country where teaching was valued incredibly highly. Teachers were put on the highest pedestal of the social hierarchy. Yet almost every single person in the organisation, all the way from the board that I was advising through to the workers in the field, hoarded their knowledge. Knowledge is power and it was definitely treated that way by the people in this organisation. Knowledge was treated like a finite resource.

It was a fascinating paradox. They valued teachers, they valued the fact that I was open with sharing everything I could with them, but guarded their own knowledge from anyone else in their team.

I could see their rationale, sort of. Their unique knowledge made them important and impossible to replace, giving them job stability. But I could also not see their rationale at all. Let me summarise that thought in a single question – who have you found to be more valuable (and needing to be retained in their role), the genius hoarder of knowledge who can perform individual miracles or the connector who can lift and coordinate the contributions of the whole team to get things done?

I’d love to get your thoughts and experiences working with hoarders and teachers.

Build an OSS and they will come… or sometimes not

Build it and they will come.

This is not always true for OSS. Let me recount a few examples.

The project team is disconnected from the users – The team that’s building the OSS in parallel to existing operations doesn’t (or isn’t able to) engage with the end users of the OSS. Once it comes time for cut-over, the end users want to stick with what they know and don’t use the shiny new OSS. From painful experience I can attest that stakeholder management is under-utilised on large OSS projects.

Turf wars – Different groups within a customer are unable to gain consensus on the solution. For example, the operational design team gains the budget to build an OSS but the network assurance team doesn’t endorse this decision. The assurance team then decides not to endorse or support the OSS that is designed and built by the design team. I’ve seen an OSS worth tens of millions of dollars turned off less than 2 years after handover because of turf wars. Stakeholder management again, although this could be easier said than done in this situation.

It sounded like a good idea at the time – The very clever OSS solution team keeps coming up with great enhancements that don’t get used, for whatever reason (eg non fit-for-purpose, lack of awareness of its existence by users, lack of training, etc). I’ve seen a customer that introduced over 500 customisations to an off-the-shelf solution, yet hundreds of those customisations hadn’t been touched by users within a full year prior to doing a utilisation analysis. That’s right, not even used once in the preceding 12 months. Some made sense because they were once-off tools (eg custom migration activities), but many didn’t.

The new OSS is a scary beast – The new solution might be perfect for what the customer has requested in terms of functionality. But if the solution differs greatly from what the operators are used to, it can be too intimidating to be used. A two-week classroom-based training course at the end of an OSS build doesn’t provide sufficient learning to take up all the nuances of the new system like the operators have developed with the old solution. Each significant new OSS needs an apprenticeship, not just a short-course.

It’s obsolete before it’s finished – OSS work in an environment of rapid change: networks, IT infrastructure, organisation models, processes, product offerings, regulatory shifts, disruptive innovation, etc. The longer an OSS takes to implement, the greater the likelihood of obsolescence. All the more reason to design for incremental delivery of business value rather than big-bang delivery.

What other examples have you experienced where an OSS has been built, but the users haven’t come?

OSS data Ponzi scheme

“The more data you have, the more data you need to understand the data you have. You are engaged in a data ponzi scheme… Could it be in service assurance and IT ops that more data equals less understanding?”
Phil Tee, in the opening address at the AIOps Symposium.

Interesting viewpoint right?

Given that our OSS hold shed-loads of data, Phil is saying we need lots of data to understand that data. Well, yes… and possibly no.

I have a theory that data alone doesn’t talk, but it’s great at answering questions. You could say that you need lots of data, although I’d argue, as a matter of semantics, that you actually need lots of knowledge / awareness to ask great questions. Perhaps that knowledge / awareness comes from seeding machine-led analysis tools (or our data scientists’ brains) with lots of data.

The more data you have, the more noise you have to find the signal amongst. That means you have to ask more questions of your data if you want to drive a return that justifies the cost of collecting and curating it all. Machine-led analytics certainly assist us in handling the volume and velocity of data our OSS create / collect, but that’s just asking the same question/s over and over. There’s almost no end to the questions that can be asked of our data, just a limit on the time in which we can ask them.

Does that make data a Ponzi scheme? A Ponzi scheme pays profits to earlier investors using funds obtained from newer investors. Eventually it must collapse, when the scheme runs out of new investors to fund the profits. A data Ponzi scheme pays out insights from earlier (seed) data and funds them by obtaining new (streaming) data. But the stream of data reaching an OSS never runs out. So if we invest heavily in data (eg AI / ML, etc), at what point in the investment lifecycle will we stop creating new insights?

The OSS self-driving vehicle

I was lucky enough to get some time of a friend recently, a friend who’s running a machine-learning network assurance proof-of-concept (PoC).

He’s been really impressed with the results coming out of the PoC. However, one of the really interesting factors he’s found is how frequently BAU (business as usual) changes in the OSS data (eg changes in naming conventions, topologies, etc) impact results. Little changes made by upstream systems effectively invalidated the baselines the machine-learning engines had keyed in on. Those little changes meant the engine had to re-baseline / re-learn to build back up to previous insight levels. The alternative, to avoid invalidating the baseline, was to re-normalise all of the data whenever a BAU change was identified.
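As a sketch of that re-normalisation idea: map raw device names onto a stable canonical form, so a baseline keyed on names can survive an upstream renaming exercise. The rename mapping and name formats here are hypothetical:

```python
import re

# Hypothetical mapping of known naming-convention changes, maintained
# as upstream systems rename things (the BAU churn described above).
RENAMES = {"SYD-CORE-01": "syd-cr-01"}

def normalise(device_name):
    """Map a raw device name to a stable canonical form so that a
    learned baseline keyed on names survives a renaming exercise."""
    name = RENAMES.get(device_name, device_name)
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def rekey_baseline(baseline):
    """Re-normalise an existing baseline (eg event counts per device)
    after a naming change, merging counts that now collapse onto the
    same canonical key rather than re-learning from scratch."""
    merged = {}
    for key, value in baseline.items():
        canon = normalise(key)
        merged[canon] = merged.get(canon, 0) + value
    return merged
```

Keeping the mapping current is, of course, the hard operational part, which is exactly the fragility the PoC exposed.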

That got me wondering whether DevOps (or any other high-change environment) might actually hinder our attempts to get machine-led assurance optimisation. But more to the point, does constant change (at all levels of a telco business) hold us back from reaching our aim of closed-loop / zero-touch assurance?

Just like the proverbial self-driving car, will we always need someone at the wheel of our OSS just in case a situation arises that the machines haven’t seen before and/or can’t handle? How far into the future will it be before we have enough trust to take our hands off the OSS wheel and let the machines drive closed-loop processes without observation by us?

Optimisation Support Systems

We’ve heard of OSS being an acronym for operational support systems, operations support systems, even open source software. I have a new one for you today – Optimisation Support Systems – that exists for no purpose other than to drive a mindset shift.

“I think we have to transition from “expectations” in a hype sense to “expectations” in a goal sense. NFV is like any technology; it depends on a business case for what it proposes to do. There’s a lot wrong with living up to hype (like, it’s impossible), but living up to the goals set for a technology is never unrealistic. Much of the hype surrounding NFV was never linked to any real business case, any specific goal of the NFV ISG.”
Tom Nolle, in his blog here.

This is a really profound observation (and entire blog) from Tom. Our technology, OSS included, tends to be surrounded by “hyped” expectations – partly from our own optimistic desires, partly from vendor sales pitches. It’s far easier to build our expectations from hype than to actually understand and specify the goals that really matter. Goals that are end-to-end in manner and preferably quantifiable.

When embarking on a technology-led transformation, our aim is to “make things better,” obviously. A list of hundreds of functional requirements might help. However, having an up-front, clear understanding of the small number of use cases you’re optimising for tends to define much clearer goal-driven expectations.

Expanding your bag of OSS tricks

Let me ask you a question – when you’ve expanded your bag of tricks that help you to manage your OSS, where have they typically originated?

By reading? By doing? By asking? Through mentoring? Via training courses?
Relating to technical? People? Process? Product?
Operations? Network? Hardware? Software?
Design? Procure? Implement / delivery? Test? Deploy?
By retrospective thinking? Creative thinking? Refinement thinking?
Other?

If you were to highlight the questions above that are most relevant to the development of your bag of tricks, how much coverage does your pattern show?

There are so many facets to our OSS (ie tentacles on the OctopOSS), aren’t there? We have to have a large bag of tricks. Not only that, we need to be constantly adding new tricks too, right?

I tend to find that our typical approaches to OSS knowledge transfer cover only a small subset (think about discussion topics at OSS conferences that tend to just focus on the technical / architectural)… yet don’t align with how we (or maybe just I) have developed capabilities in the past.

The question then becomes, how do we facilitate the broader learnings required to make our OSS great? To introduce learning opportunities for ourselves and our teams across vaguely related fields such as project management, change management, user interface design, process / workflows, creative thinking, etc, etc.