OSS – just in time rather than just in case

We all know that once installed, OSS tend to stay in place for many years. Too much effort to air-lift in. Too much effort to air-lift back out, especially if tightly integrated over time.

The monolithic COTS (commercial off-the-shelf) tools of the past would generally be commissioned and customised during the initial implementation project, with occasional integrations thereafter. That meant we needed to plan out what functionality might be required in future years and ask for it to be implemented, just in case. Between the baked-in functionality that was never needed and the just-in-case features that were possibly never used, we ended up with a lot of bloat in our OSS.

With the current approach of implementing core OSS building blocks, then utilising rapid release and microservice techniques, we have an ongoing enhancement train. This provides us with an opportunity to build just in time, to build only functionality that we know to be essential.

This has pluses and minuses. On the plus side, we have more opportunity to restrict delivery to only what’s needed. On the minus side, a just in time mindset can build a stop-gap culture rather than strategic, long-term thinking. It’s always good to have long-term thinkers / planners on the team to steer the rapid release implementations (and reductions / refactoring) and avoid a new cause of bloat.

The OSS dart-board analogy

"The dartboard, by contrast, is not remotely logical, but is somehow brilliant. The 20 sector sits between the dismal scores of five and one. Most players aim for the triple-20, because that's what professionals do. However, for all but the best darts players, this is a mistake. If you are not very good at darts, your best opening approach is not to aim at triple-20 at all. Instead, aim at the south-west quadrant of the board, towards 19 and 16. You won't get 180 that way, but nor will you score three. It's a common mistake in darts to assume you should simply aim for the highest possible score. You should also consider the consequences if you miss."
Rory Sutherland, on Wired.

When aggressive corporate goals and metrics are combined with brilliant solution architects, we tend to aim for triple-20 with our OSS solutions, don't we? The problem is, when it comes to delivery, we don't tend to have the laser-sharp precision of a professional darts player, do we? No matter how experienced we are, there tend to be hidden surprises – some technical, some personal (or should I say inter-personal?), some contractual, etc – that deflect our aim.

The OSS dart-board analogy asks whether we should set the lofty goal of a triple-20 [yellow circle below], with a high risk of dismal results if we miss (think too about the OSS stretch-goal rule); or whether we're better off targeting the 19/16 corner of the board [blue circle below], with scaled-back objectives but a corresponding reduction in risk.

OSS Dart-board Analogy

In response to a recent post called, "Did we forget the OSS operating model?", Roland Leners posed the following brilliant question: "What if we built OSS and IT systems around people's willingness to change instead of against corporate goals and metrics? Would the corporation be worse off at the end?"

Roland's question has too many facets to cover them all here, but I suspect that in many cases the corporate goals / metrics are akin to the triple-20 focus, whilst the team's willingness to change aligns with the 19/16 corner. And the latter is bound to reduce delivery risk.

I’d love to hear your thoughts!!

The OSS farm equipment analogy

OSS End of Financial Year
It's an interesting season as we come up to the EOFY (end of financial year – on 30 June). Budget cycles are coming to an end. At organisations that don't carry unspent budgets into the next financial year, the looming EOFY triggers a use-it-or-lose-it mindset.

In some cases, organisations are almost forced to allocate funds to OSS investments even if they haven't had the time to identify requirements and / or model detailed return projections. That's normally anathema to me because an OSS's reputation is determined by the demonstrable value it creates for years to come. However, I can completely understand a client's short-term objectives. The challenge we face is to minimise any risk of short-term spend conflicting with long-term objectives.

I take the perspective of allocating funds to build the most generally useful asset. (BTW, I like Robert Kiyosaki's simple definition of an asset: "in reality, an asset is only something that puts money in your pocket.") In the case of OSS, putting money in one's pocket needs to consider earnings [or cost reductions] that exceed outgoings such as maintenance, licensing and operations, as well as the cost of capital. Not a trivial task!

So this is where the farm equipment analogy comes in.

If we haven’t had the chance to conduct demand estimation (eg does the telco’s market want the equivalent of wheat, rice, stone fruit, etc) or product mix modelling (ie which mix of those products will bear optimal returns) then it becomes hard to predict what type of machinery is best fit for our future crops. If we haven’t confirmed that we’ll focus efforts on wheat, then it could be a gamble to invest big in a combine harvester (yet). We probably also don’t want to invest capital and ongoing maintenance on a fruit tree shaker if our trees won’t begin bearing fruit for another few years.

Therefore, a safer investment recommendation would be on a general-purpose machine that is most likely to be useful for any type of crop (eg a tractor).

In OSS terminology, if you're not sure whether your product mix will provision 100 customers a day or 100,000, then it could be a little risky to invest in an off-the-shelf orchestration / provisioning engine. Still potentially risky, but less so, would be to invest in a resource and service inventory solution (if you have a lot of network assets), alarm management tools (if you process a lot of alarms), service order entry, workforce management, etc.

Having said that, a lot of operators already have a strong gut-feel for where they intend to get returns on their investment. They may not have done the numbers extensively, but they know their market roadmap. If wheat is your specialty, go ahead and get the combine harvester.

I’d love to get your take on this analogy. How do you invest capital in your OSS without being sure of the projections (given that we’re never sure on projections becoming reality)?

Did we forget the OSS operating model?

When we have a big OSS transformation to undertake, we tend to start with the use cases / requirements, work our way through the technical solution and build up an implementation plan before delivering it (yes, I’ve heavily reduced the real number of steps there!).

However, we sometimes overlook the organisational change management part. That’s the process of getting the customer’s organisation aligned to assist with the transformation, not to mention being fully skilled up to accept handover into operations. I’ve seen OSS projects that were nearly perfect technically, but ultimately doomed because the customer wasn’t ready to accept handover. Seasoned OSS veterans probably already have plans in place for handling organisational change through stakeholder management, training, testing, thorough handover-to-ops processes, etc. You can find some hints on the Free Stuff pages here on PAOSS.

In addition, long-time readers here on PAOSS have probably already seen a few posts about organisational management, but there's a new gotcha that I'd like to add to the mix today – the changing operating model. This one is often overlooked. The changes made in a next-gen OSS can have a profound impact on the to-be organisation chart. Roles and responsibilities that used to be clearly defined can become blurred, or even obsoleted, by the new solution.

This is particularly true for modern delivery models where cloud, virtualisation, as-a-service, etc change the dynamic. Demarcation points between IT, operations, networks, marketing, products, third-party suppliers, etc may need complete reconsideration. The most challenging part about understanding the re-mapping of operating models is that we often can't even predict what they will be until we start using the new solution and refining our processes in-flight. We can start with a RACI and a bunch of "what if?" assumptions / scenarios to capture new operational mappings, but you can almost bet that it will need ongoing refinement.

A new phenomenon for IT

"In the past, business-oriented groups have had ideas about what they want to do and then they come to us… Now, they want to know what technology can bring to the table and then they'll work on the business plan.
So there's a big gap here. It's a phenomenon that's been happening in the last year and it's an uncomfortable place for IT. We're not used to having to lead in that way. We have been more in the order-taker business."

Veenod Kurup, Group CIO, Liberty Global.

That's a really thought-provoking insight from Veenod, isn't it? Technology driving the business rather than business driving the technology. Technology as the business advantage.

I'll be honest here – I never thought I'd see the day, although I guess as e-business increases, the dependency on tech increases in lockstep. I'm passionate about tech, but also of the opinion that the tech is only a means to an end.

So if what Veenod says is reflective of a macro-trend across the whole industry, then he's right in saying that we're going to have some very uncomfortable situations for some tech experts. Many will have to widen their field of view from a tech-only vision to a business vision.

Maybe instead the business-oriented groups could just come to the OSS / BSS department and speak with our valuable tripods. After all, we own and run the information and systems where business (BSS) meets technology (OSS), right? Or, as previously noted on this blog, we have the opportunity to take the initiative and demonstrate the business value within our OSS / BSS.

PS. Hat-tip to Dawn Bushaus for unearthing the quote from Veenod in her article on Inform.

Vulnerability in OSS

"All over the world – from America's National Football League (NFL) to the National Basketball Association (NBA), from our own AFL to NRL – athletes and coaches are cultivating club cultures in which tales of personal hardship and woe are welcome, even desirable. All are clamouring to embrace the biggest buzzword in professional sport: vulnerability.
The most publicised incarnation of this shift was the "Triple H" sessions used at AFL winners Richmond last year, where once a fortnight a player stood and shared three personal stories about a Hero, Hardship and Highlight from their life."
Konrad Marshall, GoodWeekend.

I've just done a quick rummage through the OSS and/or tech-related books on my bookshelves. Would you like to know what I noticed? None mention teams, teamwork or culture, let alone the V-word quoted above – vulnerability. Funny that.

I say funny, because each of the highest-performing teams I’ve worked with have also had great team culture. Conversely, the worst-performing teams I’ve worked with have seen ego overpower vulnerability and empathy. Does that resonate with your experiences too?

Professor Amy Edmondson of Harvard Business School refers to psychological safety as, “a team climate characterized by interpersonal trust and mutual respect in which people are comfortable being themselves.” Graeme Cowan states, “We become more open-minded, resilient, motivated, and persistent when we feel safe. Humor increases, as does solution-finding and divergent thinking — the cognitive process underlying creativity.”

Yet why is it that we don't seem to rate team factors when it comes to OSS delivery? "Team bonding," in its stylish inverted commas, can come across as a bit ridiculous, but informal culture building seems to be more valuable than any of the technical alignment workshops we tend to build into our project plans.

OSS are built by teams, for teams, clearly. They’re often built in politically charged situations. They’re also usually built in highly complex environments, where complexities abound in technology and process, but even more so within the people involved. Not only that, but they’re regularly built across divisional lines of business units or organisations over which (hopefully metaphorical) hand grenades can be easily thrown.

On OSS projects, underestimate psychological safety and vulnerability across the entire stakeholder group at your peril. We could benefit from looking outside the walls of OSS, to models used by sporting teams in particular, where team culture is invested in far more heavily because of the proven performance benefits it delivers.

OSS, the great multipliers

"Skills multiply labors by two, five, 10, 50, 100 times. You can chop a tree down with a hammer, but it takes about 30 days. That's called labor. But if you trade the hammer in for an ax, you can chop the tree down in about 30 minutes. What's the difference in 30 days and 30 minutes? Skills—skills make the difference."
Jim Rohn, here.

OSS can be great labour multipliers. They can deliver baked-in “skills” that multiply labors by two, five, 10, 50, 100 times. They can be not just a hammer, not just an axe, but a turbo-charged chainsaw.

The more pertinent question to understand with our OSS though is, “Why are we chopping this tree down?” Is it the right tree to achieve a useful outcome? Do the benefits outweigh the cost of the hammer / axe / chainsaw?

What if our really clever OSS engineers come up with a brilliant design that can reduce a task from 30 days to 30 minutes…. but it takes us 35 days to design/build/test/deploy the customisation for a once-off use? We’re clearly cutting down the wrong tree.

What if instead we could reduce this same task from 30 days to 1 day with just a quick analysis and process change? It's nowhere near as sexy or challenging for us OSS engineers though. The very clever 30-minute solution is another case of "just because we can, doesn't mean we should."

The strangler fig transformation analogy

You’re probably familiar with strangler figs, which grow on a host tree, often resulting in the death of the host. You’re probably less familiar with the strangler fig analogy as an OSS transformation or cutover model.

The concept is that there is a “host tree” (ie legacy system) that needs to be obsoleted and replaced, but it’s so dominant and integral (eg because of complex and/or meshed integrations) that a big-bang replacement is unviable (eg due to risk, costs, etc).

The strangler fig (ie new solution) is developed in parallel to the host tree and is progressively grown over time. Generally, it grows through step-wise enhancement / replacement. This approach is best suited to scenarios where there are lots of transaction types, fault types, process types, use-cases, etc that can be systematically switched from host to strangler.

This approach can also be used for product consolidation (ie multiple products consolidated into one).

Clever use of automated regression testing can help with this evolving cutover approach.
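
To make the cutover mechanics a little more concrete, here's a minimal Python sketch of a strangler-style routing facade. It assumes hypothetical LegacyOss and NewOss clients sharing a common handler interface, and it's illustrative only – a real cutover also needs data synchronisation, rollback paths and the regression testing mentioned above.

```python
# A minimal sketch of a strangler-style cutover facade (illustrative only).
# LegacyOss and NewOss are hypothetical clients sharing a handle() interface.

class StranglerFacade:
    """Routes each request type to either the legacy or the new system.

    Request types (transactions, fault types, process types, use-cases)
    are migrated one at a time by updating the routing set, so the legacy
    "host tree" is strangled progressively rather than in a big bang.
    """

    def __init__(self, legacy, replacement):
        self.legacy = legacy
        self.replacement = replacement
        self.migrated = set()  # request types already cut over

    def cut_over(self, request_type: str) -> None:
        self.migrated.add(request_type)

    def handle(self, request_type: str, payload: dict):
        target = self.replacement if request_type in self.migrated else self.legacy
        return target.handle(request_type, payload)


# Usage: migrate fault management first, leave provisioning on the legacy system.
# facade = StranglerFacade(LegacyOss(), NewOss())
# facade.cut_over("fault_management")
# facade.handle("fault_management", {...})  # served by the new system
# facade.handle("provisioning", {...})      # still served by legacy
```

The appeal of the approach is that each cut-over step is small, reversible and independently testable – exactly the step-wise replacement described above.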

Getting lost in the flow of OSS

"The myth is that people play games because they want to avoid challenging work. The reality is, people play games to engage in well-designed, challenging work. The only thing they are avoiding is poorly designed work. In essence, we are replacing poorly designed work with work that provides a more meaningful challenge and offers a richer sense of progress.
And we should note at this point that just because something is a game, it doesn't mean it's good. As we'll soon see, it can be argued that everything is a game. The difference is in the design.
Really good games have been ruthlessly play-tested and calibrated to the point where achieving a state of flow is almost guaranteed for many. Play-testing is just another word for iterative development, which is essentially the conducting of progressive experiments."
Dr Jason Fox, in his book, "The Game Changer."

Reflect with me for a moment – when it comes to your OSS activities, in which situations do you consistently get into a state of flow?

For me, it's in quite a few different scenarios, but one in particular stands out – building up a network model in an inventory management tool. This activity starts with building models / patterns of devices, services, connections, etc, then using the models to build a replica of the network, either manually or via data migration, within the inventory tool(s). I can lose complete track of time when doing this task. In fact, I have almost every single time I've performed it.

Whilst I'm not much of a gamer, I suspect it's no coincidence that by far my favourite video game genre is empire-building strategy games like the Civilization series. Back in the old days, I could easily get lost in them for hours too. Could we draw a comparison with that same sense of achievement – seeing a network (of devices in OSS, of cities in the empire strategy games) grow rapidly as a result of your actions?

What about fans of first-person shooter games? I wonder whether they get into a state of flow on assurance activities, where they get to hunt down and annihilate every fault in their terrain?

What about fans of horse grooming and riding games? Well…. let’s not go there. 🙂

Anyway, enough of all these reflections and musings. I would like to share three concepts with you that relate to Dr Fox’s quote above:

  1. Gamification – I feel that there is MASSIVE scope for gamification of our OSS, but I’ve yet to hear of any OSS developers using game design principles
  2. Play-testing – How many OSS are you aware of that have been, "ruthlessly play-tested and calibrated?" In almost every OSS situation I've seen, as soon as functionality meets requirements, we stop and move on to the next feature. We don't pause and try a few more variants to see which is most likely to result in a great design, refining the solution, "to the point where achieving a state of flow is almost guaranteed for many."
  3. Richer Progress – How many of our end-to-end workflows are designed with, "a richer sense of progress," in mind? Feedback tends to come through retrospective reporting (if at all), rarely through the OSS game-play itself. Chances are that our end-to-end processes actually flow through multiple unrelated applications, so it comes back to clever integration design to deliver more compelling feedback. We simply don't use enough specialist creative designers in OSS

Being an OSS map-maker

"Each problem that I solved became a rule, which served afterwards to solve other problems."
Rene Descartes.

On a recent project, I spent quite a lot of time thinking in terms of problem statements, then mapping them into solutions that could be broken down for assignment to lots of delivery teams – feeding their Agile backlogs.

On that assignment, as on the multitude of OSS projects before it, there was very little repetition in the solutions. The people, org structure, platforms, timelines, objectives, etc all make for a highly unique solution each time. And high uniqueness doesn't easily equate to repeatability. If there's no repeatability, there's no point building repeatable tools to improve efficiency. Yet repeatability is highly desirable for the purposes of reliability, continual improvement, economies of scale, etc.

However, if we look a step above the solution, above the use cases, above the challenges, we have key problem statements and they do tend to be more consistent (albeit still nuanced for each OSS). These problem statements might look something like:

  • We need to find a new vendor / solution to do X (where X is the real problem statement)
  • Realised risks have impacted us badly on past projects (so we need to minimise risk on our upcoming transformation)
  • We don’t get new products out to market fast enough to keep up with competitor Y and are losing market share to them
  • Our inability to resolve network faults quickly is causing customers to lose confidence in us
  • etc

It’s at this level that we begin to have more repeatability, so it’s at this level that it makes sense to create rules, frameworks, etc that are re-usable and refinable. You’ll find some of the frameworks I use under the Free Stuff menu above.

It seems that I'm an OSS map-maker by nature, wanting to take the journey but also to map it out for re-use and refinement.

I’d love to hear whether it’s a common trait and inherent in many of you too. Similarly, I’d love to hear about how you seek out and create repeatability.

Operator involvement on OSS projects

"You cannot simply have your end users give some specifications then leave while you attempt to build your new system. They need to be involved throughout the process. Ultimately, it is their tool to use."
José Manuel De Arce, here.

As an OSS consultant and implementer, I couldn't agree more with José's quote above. José, by the way, is an OSS Manager at Telefónica, so he sits on the operator's side of the implementation equation. I'm glad he takes the perspective he does.

Unfortunately, many OSS operators are so busy with operations, they don’t get the time to help with defining and tuning the solutions that are being built for them. It’s understandable. They are measured by their ability to keep the network (and services) running and in a healthy state.

From the implementation side, it reminds me of this old comic:
Too busy

The comic reminds me of OSS implementations for two reasons:

  1. Without ongoing input from operators, you can only guess at how the new tools could improve their efficacy and mitigate their challenges
  2. Without ongoing involvement from operators, they don’t learn the nuances of how the new tool works or the scenarios it’s designed to resolve… what I refer to as an OSS apprenticeship

I’ve seen it time after time on OSS implementations (and other projects for that matter) – [As a customer] you get back what you put in.

The Goldilocks OSS story

We all know the story of Goldilocks and the Three Bears where Goldilocks chooses the option that’s not too heavy, not too light, but just right.

The same model applies to OSS – finding / building a solution that's not too heavy, not too light, but just right. To be honest, we probably tend to veer towards the too heavy, especially over time. We put more complexity into our architectures, integrations and customisations… because we can… and it ends up burdening us and our solutions.

A perfect example is AT&T offering its ECOMP project (now part of the even bigger Linux Foundation Network Fund) up for open source in the hope that others would contribute and help mature it. In fairytale terms, it's an admission that it's too heavy even for one of the global heavyweights to handle by itself.

The ONAP Charter has some great plans including, “…real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.”

These are fantastic ambitions to strive for, especially at the Papa Bear end of the market. I have huge admiration for those who are creating and chasing bold OSS plans. But what about the large majority of customers that fall into the Goldilocks category? Is our field of vision so heavy (ie so grand and so far into the future) that we're missing the opportunity to solve the business problems of our customers and make a difference for them with lighter solutions today?

TM Forum’s Digital Transformation World is due to start in just over two weeks. It will be fascinating to see how many of the presentations and booths consider the Goldilocks requirements. There probably won’t be many because it’s just not as sexy a story as one that mentions heavy solutions like policy-driven orchestration, zero-touch automation, AI / ML / analytics, self-scaling / self-healing networks, etc.

[I should also note that I fall into the category of loving to listen to the heavy solutions too!! ]

Are today’s platform benefits tomorrow’s constraints?

One of the challenges facing OSS / BSS product designers is which platform/s to tie the roadmap to.

Let’s use a couple of examples.
In the past, most outside plant (OSP) designs were done in AutoCAD, so it made sense to build OSP design tools around AutoCAD. However in making that choice, the OSP developer becomes locked into whatever product directions AutoCAD chooses in future. And if AutoCAD falls out of favour as an OSP design format, the earlier decision to align with it becomes an even bigger challenge to work around.

A newer example is for supply chain related tools to leverage Salesforce. The tools and marketplace benefits make great sense as a platform to leverage today. But it also represents a roadmap lock-in that developers hope will remain beneficial in the years / decades to come.

I haven’t seen an OSS / BSS product built around Facebook, but there are plenty of other tools that have been. Given Facebook’s recent travails, I wonder how many product developers are feeling at risk due to their reliance on the Facebook platform?

The same concept extends to the other platforms we choose – operating systems, programming languages, messaging models, data storage models, continuous development tools, etc, etc.

Anecdotally, it seems that new platforms are coming online at an ever greater rate, so the chance / risk of platform churn is surely much higher than when AutoCAD was chosen back in the 1980s – 1990s.

The question becomes how to leverage today's platform benefits whilst minimising dependency and allowing easy swap-out. There's too much nuance to answer this definitively, but one key strategy is to build key logic models outside the dependent platform (ie ensuring core logic isn't built into AutoCAD or similar).
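
As a rough illustration of that strategy, here's a minimal Python sketch in which the OSP design rules live in platform-agnostic code and the platform (AutoCAD today, something else tomorrow) is reached only through a thin adapter. All class and method names are hypothetical – this isn't AutoCAD's actual API.

```python
# A rough sketch of isolating core logic from the platform (illustrative only).
# The OSP design rules live in plain Python; the drawing platform is reached
# only through a thin, swappable adapter. All names here are hypothetical.

from abc import ABC, abstractmethod


class DrawingAdapter(ABC):
    """The only layer allowed to know platform specifics."""

    @abstractmethod
    def draw_cable(self, from_pit: str, to_pit: str, length_m: float) -> None:
        ...


class AutoCadAdapter(DrawingAdapter):
    """Would wrap AutoCAD-specific calls; stubbed here with a print."""

    def draw_cable(self, from_pit: str, to_pit: str, length_m: float) -> None:
        print(f"[AutoCAD] cable {from_pit} -> {to_pit} ({length_m} m)")


class OspDesigner:
    """Core design logic: platform-agnostic, so it survives a platform swap."""

    MAX_SEGMENT_M = 500.0  # hypothetical design rule

    def __init__(self, adapter: DrawingAdapter) -> None:
        self.adapter = adapter

    def design_route(self, pits: list[str], segment_lengths_m: list[float]) -> None:
        for (a, b), length in zip(zip(pits, pits[1:]), segment_lengths_m):
            if length > self.MAX_SEGMENT_M:
                raise ValueError(f"Segment {a}->{b} exceeds {self.MAX_SEGMENT_M} m")
            self.adapter.draw_cable(a, b, length)


# Swapping platforms means writing one new adapter, not rewriting the rules:
OspDesigner(AutoCadAdapter()).design_route(["P1", "P2", "P3"], [120.0, 340.0])
```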

Powerful ranking systems with hidden variables

"There are ratings and rankings that ostensibly exist to give us information (and we are supposed to use that information to change our behavior).
But if we don't know what variables matter, how is it supposed to be useful?
Just because it can be easily measured with two digits doesn't mean that it's accurate, important or useful.
[Marketers learned a long time ago that people love rankings and daily specials. The best way to boost sales is to put something in a little box on the menu, and, when in doubt, rank things. And sometimes people even make up the rankings.]"

Seth Godin, here.

Are there any rankings that are made up in OSS? Our OSS collect an amazing amount of data so there’s rarely a need to make up the data we present.

Are they based on hidden variables? Generally, we use raw counters and / or well-known metrics, so we're usually quite transparent about what our OSS present.

What about when we're trying to select the right vendor to fulfil the OSS needs of our organisation? As Seth states, "Just because it can be easily measured with two digits* doesn't mean that it's accurate, important or useful." [* In this case, I'm thinking of a 2 x 2 matrix].

The interesting thing about OSS ranking systems is that there is so much nuance in the variables that matter. There are potentially hundreds of evaluation criteria and even vast contrasts in how to interpret a given criterion.

For example, a criterion might be "time to activate a service" (TTAS). A vendor might have a really efficient workflow for activating single services manually but no bulk load or automation interface. For one operator (which does single activations manually), the TTAS metric for that product would be great, but for another operator (which does thousands of activations a day and tries to automate), the TTAS metric for the same product would be awful.

As much as we love ranking systems… there are hundreds of products on the market (in some cases, hundreds of products in a single operator’s OSS stack), each fitting unique operator needs differently… so a 2 x 2 matrix is never going to cut it as a vendor selection tool… not even as a short-listing tool.

Better to build yourself a vendor selection framework. You can find a few OSS product / vendor selection hints here based on the numerous vendor / product selections I’ve helped customers with in the past.

Networks lead. OSS are an afterthought. Time for a change?

In a recent post, we described how changing conditions in networks (eg topologies, technologies, etc) cause us to reconsider our OSS.

Networks always lead and OSS (or any form of network management including EMS/NMS) is always an afterthought. Often a distant afterthought.

But what if we spun this around? What if OSS initiated change in our networks / services? After all, OSS is the platform that operationalises the network. So instead of attempting to cope with a huge variety of network options (which introduces a massive number of variants and in turn, massive complexity, which we’re always struggling with in our OSS), what if we were to define the ways that networks are operationalised?

Let’s assume we want to lead. What has to happen first?

Network vendors tend to lead currently because they're offering innovation in their networks, but more importantly in the associated services supported over the network. They're prepared to take the innovation risk, knowing that operators are looking to invest in solutions they can offer to their customers (as products / services) for a profit. The modus operandi is for operators to look to network vendors, not OSS vendors / integrators, to help generate new revenues. It would take a significant perception shift for operators to break this nexus and seek out OSS vendors before network vendors. For a start, OSS vendors would have to create a revenue-generation story rather than the current tendency towards a cost-out business case.

ONAP provides an interesting new line of thinking though. As you know, it’s an open-source project that represents multiple large network operators banding together to build an innovative new approach to OSS (even if it is being driven by network change – the network virtualisation paradigm shift in particular). With a white-label, software-defined network as a target, we have a small opening. But to turn this into an opportunity, our OSS need to provide innovation in the services being pushed onto the SDN. That innovation has to be in the form of services/products that are readily monetisable by the operators.

Who’s up for this challenge?

As an aside:
If we did take the lead, would our OSS look vastly different to what’s available today? Would they unequivocally need to use the abstract model to cope with the multitude of scenarios?

A purple cow in our OSS paddock

A few years ago, I read a book that had a big impact on the way I thought about OSS and OSS product development. Funnily enough, the book had nothing to do with OSS or product development. It was a book about marketing – a subject that I wasn’t very familiar with at the time, but am now fascinated with.

And the book? Purple Cow by Seth Godin.

The premise behind the book is that when we go on a trip into the countryside, we notice the first brown or black cows, but after a while we don't pay attention to them anymore. The novelty has worn off and we filter them out. But if there were a purple cow, that would be remarkable. It would definitely stand out from all the other cows and be talked about. Seth promoted the concept of building something into your products that makes them remarkable, worth talking about.

I recently heard an interview with Seth. Despite the book being launched in 2003, apparently he’s still asked on a regular basis whether idea X is a purple cow. His answer is always the same – “I don’t decide whether your idea is a purple cow. The market does.”

That one comment brought a whole new perspective to me. As hard as we might try to build something into our OSS products that create a word-of-mouth buzz, ultimately we don’t decide if it’s a purple cow concept. The market does.

So let me ask you a question. You’ve probably seen plenty of different OSS products over the years (I know I have). How many of them are so remarkable that you want to talk about them with your OSS colleagues, or even have a single feature that’s remarkable enough to discuss?

There are a lot of quite brilliant OSS products out there, but I would still classify almost all of them as brown cows. Brilliant in their own right, but unremarkable for their relative sameness to lots of others.

The two stand-out purple cows for me in recent times have been CROSS’ built-in data quality ranking and Moogsoft’s Incident Room model. But it’s not for me to decide. The market will ultimately decide whether these features are actual purple cows.

I’d love to hear about your most memorable OSS purple cows.

You may also be wondering how to go about developing your own purple OSS cow. Well, I start by asking, "What are people complaining about?" or "What are our biggest issues?" That's where the opportunities lie. Having discovered those issues, the challenge is to solve the problem/s in an entirely different, but better, way. I figure that if people care enough to complain about those issues, then they're sure to talk about any product that solves the problem for them.

Designing OSS to cope with greater transience

"There are three broad models of networking in use today. The first is the adaptive model where devices exchange peer information to discover routes and destinations. This is how IP networks, including the Internet, work. The second is the static model where destinations and pathways (routes) are explicitly defined in a tabular way, and the final is the central model where destinations and routes are centrally controlled but dynamically set based on policies and conditions."
Tom Nolle, here.

OSS of decades past worked best with static networks. Services / circuits that were predominantly “nailed up” and (relatively) rarely changed after activation. This took the real-time aspect out of play and justified the significant manual effort required to establish a new service / circuit.

However, adaptive and centrally managed networks have now come to dominate the landscape. In fact, I'm currently working on an assignment where DWDM, a technology that was once largely static, is being augmented with an SDN controller and dynamic routing (at the optical layer, no less!).

This paradigm shift changes the fundamentals of how OSS operate. Apart from the physical layer, network connectivity is now far more transient, so our tools must be able to cope with that. Not only that, but the changes are too frequent to justify the manual effort of the past.

To tie in with yesterday’s post, we are again faced with the option of abstract / generic modelling or specific modelling.

Put another way, we have to either come up with adaptive / algorithmic mechanisms to deal with that transience (the specific model), or need to mimic “nailed-up” concepts (the abstract model).

More on the implications of this tomorrow.

Which OSS tool model do you prefer – Abstract or Specific?

There’s something I’ve noticed about OSS products – they are either designed to be abstract / flexible or they are designed to cater for specific technologies / topologies.

When designed from the abstract perspective, the tools are built around generic core data models. For example, whether a virtual / logical device, a physical device, a customer premises device, a core network device, an access network device, a transmission device or a security device, they would all be lumped into a single object called a device (but with the flexibility to cater for the many differences).

When designed from the specific perspective, the tools are built with exact network and/or service models in mind. That could be 5G, DWDM, physical plant, etc and when any new technology / topology comes along, a new plug-in has to be built to augment the tools.

The big advantage of the abstract approach is obvious (if the tools truly do have a flexible core design) – you don't have to do any product upgrades to support a new technology / topology. You just configure the existing system to mimic the new technology / topology.
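
For illustration, here's a minimal Python sketch of what such a generic core model might look like. All field names are hypothetical – the point is simply that technology-specific detail lives in data, not in a class per technology.

```python
# A minimal sketch of an abstract "device" model (illustrative only; field
# names are hypothetical). Technology-specific detail lives in a flexible
# attribute map rather than in a dedicated class per technology.

from dataclasses import dataclass, field


@dataclass
class Device:
    name: str
    role: str                                       # eg "core", "access", "cpe"
    attributes: dict = field(default_factory=dict)  # technology-specific detail
    ports: list = field(default_factory=list)


# A new technology becomes configuration, not a product upgrade:
olt = Device(
    name="OLT-MEL-001",
    role="access",
    attributes={"technology": "GPON", "split_ratio": "1:64"},
)
pe_router = Device(
    name="PE-SYD-001",
    role="core",
    attributes={"technology": "IP/MPLS", "asn": 64512},
)
```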

The first OSS product I worked with had a brilliant core design and was able to cope with any technology / topology that we were faced with. It often just required some lateral thinking to make the new stuff fit into the abstract data objects.

What I also noticed was that operators always wanted to customise the solution so that it became more specific. They were effectively trying to steer the product from an open / abstract / flexible tool-set to the other end of the scale. They generally paid significant amounts to achieve specificity – to mould the product to exactly what their needs were at that precise moment in time.

However, even as an OSS newbie, I found that to be really short-term thinking. A network (and the services it carries) is constantly evolving. Equipment goes end-of-life / end-of-support every few years. Useful life models are usually approx 5-7 years and capital refresh projects tend to be ongoing. Then, of course, vendors are constantly coming up with new products, features, topologies, practices, etc.

Given this constant change, I'd much rather have a set of tools that are abstract and flexible rather than specific but less malleable. More importantly, I'll always try to ensure that any customisations retain the integrity of the abstract and flexible core rather than steering the product towards specificity.

How about you? What are your thoughts?

An OSS conundrum with many perspectives

"Even aside from the OSS impact, it illustrates the contrast between "bottom-up" planning of networks (new card X is cheaper/has more ports) and "top down" (what do we need to change to reduce our costs/increase capacity)."
Robert Curran.

Robert’s quote above is in response to a post called “Trickle-down impact planning.”

Robert makes a really interesting point. Adding a new card type is a relatively common event for a big network operator. It’s a relatively minor challenge for the networks team – a BAU (Business as Usual) activity in fact. But if you follow the breadcrumbs, the impact to other parts of the business can be quite significant.

Your position in your organisation possibly dictates your perspective on the alternative approaches Robert discusses above. Networks, IT, planning, operations, sales, marketing, projects/delivery, executive – all will have different impacts and a different field of view on the conundrum. This makes it an interesting problem to solve – which viewpoint is the “right” one to tackle the challenge from?

My “solutioning” background tends to align with the top down viewpoint, but today we’ll take a look at this from the perspective of how OSS can assist from either direction.

Bottom Up: In an ideal world, our OSS and associated processes would be able to identify a new card (or similar) and just ripple the changes out without interface changes. The first OSS I worked on did this really well. However, it was a "single-vendor" solution, so the ripples were self-contained (mostly). This is harder to control in the more typical "best-of-breed" OSS stacks of today. There are architectural mechanisms for controlling the ripples, but it's still a significant challenge to solve. I'd love to hear from you if you're aware of any vendors or techniques that do this really well.

Top Down: This is where things get interesting. Should top-down impact analysis even be the task of an OSS/BSS? Since it's a common / BAU operational task, you could argue it is. If so, how do we create OSS tools* that help with organisational impact / change / options analysis, and not just network impact analysis? How do we build tools* that can do the following (a sketch of step 1 follows the list):

  1. Predict the rippling impacts
  2. Allow us to estimate the impact of each
  3. Present options (if relevant) and
  4. Provide a cost-benefit comparison to determine whether any of the options are viable for development

* When I say “tools,” this might be a product, but it could just mean a process, data extract, etc.
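
As a thought experiment, here's a minimal Python sketch of step 1, treating ripple prediction as a breadth-first traversal of a dependency graph. The entities and edges below are entirely hypothetical – a real OSS/BSS would have to derive them from inventory, integration and process data.

```python
# A thought-experiment sketch of step 1 (predicting rippling impacts) as a
# breadth-first walk of a dependency graph. All entities and edges here are
# hypothetical examples.

from collections import deque

# Who depends on whom: "X": [things that depend on X, ...]
DEPENDENTS = {
    "card:NEW-10x100G": ["ems:vendor-A", "inventory:port-model"],
    "ems:vendor-A": ["fault-mgmt:trap-mapping"],
    "inventory:port-model": ["provisioning:activation-flow", "capacity:reports"],
    "provisioning:activation-flow": ["bss:product-catalog"],
}


def ripple(start: str) -> list[str]:
    """Return everything impacted, directly or indirectly, by a change to `start`."""
    impacted, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()
        for dep in DEPENDENTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                impacted.append(dep)
                queue.append(dep)
    return impacted


# Introducing a new card type ripples well beyond the networks team:
print(ripple("card:NEW-10x100G"))
# ['ems:vendor-A', 'inventory:port-model', 'fault-mgmt:trap-mapping',
#  'provisioning:activation-flow', 'capacity:reports', 'bss:product-catalog']
```

Steps 2 to 4 would then hang estimates, options and cost-benefit figures off each impacted node – which is exactly where the "should we build this into our OSS?" question below gets interesting.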

I have the sense that this type of functionality falls into the category of, “just because you can, doesn’t mean you should… build it into your OSS.” Have you seen an OSS/BSS with this type of impact analysis functionality built-in?

When your ideas get stolen

A few meditations from Seth Godin:
“Good for you. Isn’t it better that your ideas are worth stealing? What would happen if you worked all that time, created that book or that movie or that concept and no one wanted to riff on it, expand it or run with it? Would that be better?
You're not going to run out of ideas. In fact, the more people grab your ideas and make magic with them, the more of a vacuum is sitting in your outbox, which means you will be prompted to come up with even more ideas, right?
Ideas that spread win. They enrich our culture, create connection and improve our lives. Isn’t that why you created your idea in the first place?
The goal isn’t credit. The goal is change.”

A friend of mine has lots of great ideas. Enough to write a really valuable blog. Unfortunately he’s terrified that someone else will steal those ideas. In the meantime, he’s missing out on building a really important personal brand for himself. Do you know anyone like him?

The great thing about writing a daily blog is that it forces you to generate lots of ideas. It forces you to be constantly thinking about your subject matter and how it relates to the world. You put the ideas out there in the hope that others want to run with them, in the hope that they spread, in the hope that others will expand upon them and make them more powerful, teaching you along the way. At over 2,000 posts now, it's been an immensely enriching experience for me anyway. As Seth states, the goal is definitely change, and we can all agree that OSS is in desperate need of change.

It is incumbent on all of us in the OSS industry to come up with a constant stream of ideas – big and small. That's what we tend to do on a daily basis, right? Do yours tend towards the smaller end of the scale, resolving daily delivery tasks, or the larger end, solving the industry's biggest problems?

Of your biggest ideas, how do you get them out into the world for others to riff on? How many of your ideas have been stolen and made a real difference?

If someone rips off your ideas, it's a badge of honour and you know that you'll always generate more… unless you don't let your idea machine run.