The OSS breathing coach analogy

“To paraphrase the great Chinese philosopher Lao Tzu, resisting change is like trying to hold your breath – even if you’re successful, it won’t end well.”
Michael McQueen, here.

OSS is an interesting dichotomy. At one end of the scale, you have the breath holders – those who want the status quo to remain so that they can bring their OSS (and/or network) under control. At the other end, you have the hyperventilators – those who want to force a constant stream of change to overcome any perceived shortfalls in the current solution.

The more desirable state is probably a balance between breath holding and hyperventilation:

  • If you’re an OSS implementer (eg vendor, integrator, internal project delivery team), then you rely on change – as long as it’s enough to deliver an income, but not so much as to overwhelm you.
  • If you’re an OSS operator, then you long for an OSS that does its role perfectly and evolves at a manageable speed, allowing you stability.

The art of change management in OSS is to act as a breathing coach – to find the collective balance of respiration that suits the organisation whilst considering the natural tendencies of all of the different contributors to the project.

Just like breathing, change might seem simple, and is often completely underestimated as a result. But spend some time with any breathing coach or OSS change manager and you’ll find that there are many techniques they call upon to find optimal balance.

OSS stepping stone or wet cement

“Very often, what is meant to be a stepping stone turns out to be a slab of wet cement that will harden around your foot if you do not take the next step soon enough.”
Richelle E. Goodrich.

Not sure about your part of the world, but I’ve noticed the terms “tactical” (ie stepping stone solution) and “strategic” (ie long-term solution) entering the architectural vernacular here in Australia.

OSS seem to be full of tactical solutions. We’re always on a journey to somewhere else. I love that mindset – getting moving now, but still keeping the future in mind. There’s just one slight problem… how many times have we seen a tactical solution that was built years before? Perhaps it’s not actually a problem at all in some cases – the short-term fix is obviously “good enough” to have survived.

As a colleague insightfully pointed out last week, “if you create a tactical solution without also preparing a strategic solution, you don’t have a tactical solution, you have a solution.”

When architecting your OSS solutions, do you find yourself more easily focussing on the tactical, the strategic, or is keeping an eye on both an essential part of your approach?

OSS compromise, not compromised

“When you’ve got multiple powerful parties involved in a decision, compromise is unavoidable. The point is not that compromise is a necessary evil. Rather, compromise can be valuable in itself, because it demonstrates that you’ve made use of diverse opinions, which is a way of limiting risk.”
Chip and Dan Heath, in their book, Decisive.

This risk perspective on compromise (ie diversity of thought) is a fascinating one in the context of OSS.

Let’s just look at Vendor Selection as one example scenario. In the lead-up to buying a new OSS, there are always lots of different requirements that are thrown into the hat. These requirements are likely to come from more than one business unit, and from a diverse set of actors / contributors. This process, the OSS Thrashing process, tends to lead to some very robust discussions. Even in the highly unlikely event of every requirement being met by a single OSS solution, there are still compromises to be made in terms of prioritisation on which features are introduced first. Or which functionality is dropped / delayed if funding doesn’t permit.

The more likely situation is that each of the product options will have different strengths and weaknesses, each aligning better or worse to the needs of the various requirement contributors. When the final decision is made, some requirements will be included, others precluded. Compromise isn’t an option, it’s a reality. The question posed by the Heath brothers is whether all requirement contributors enter the OSS vendor selection process prepared for compromise (thus diversity of thought), or whether one actor / business-unit seeks to steamroll the process (thus introducing greater risk).

The OSS transformation dilemma

There’s a particular carrier that I know quite well that appears to despise a particular OSS vendor… but keeps coming back to them… and keeps getting let down by them… but keeps coming back to them. And I’m not just talking about support of their existing OSS, but whole new tools.

It never made sense to me… until reading Seth Godin’s blog today. In it, he states, “…this market segment knows that things that are too good to be true can’t possibly work, and that’s fine with them, because they don’t actually want to change–they simply want to be able to tell themselves that they tried. That the organization they paid their money to failed, of course it wasn’t their failure. Once you see that this short-cut market segment exists, you can choose to serve them or to ignore them. And you can be among them or refuse to buy in.”

It starts to make sense. The same carrier has a tendency to spend big money on the big-4 consultants whenever an important decision needs to be made. If the big, ambitious project then fails, the carrier’s project sponsors can say that the big-4 organisation they paid their money to failed.

Does that ring true of any telco you’ve worked with? That they don’t actually want to change–they simply want to be able to tell themselves that they tried (or be seen to have tried) with their OSS transformation?

Are we actually stuck in one big dilemma? Are our OSS transformations actually so hard that they’re destined to fail, even though our OSS are already failing so badly that we desperately need to transform them? If so, then Seth’s insightful observation gives the appearance of progress AND protection from the pain of failure.

Not sure about you, but I’ll take Seth’s “refuse to buy in” option and try to incite change.

Post Implementation Review (PIR)

Have you noticed that OSS projects need to go through extensive review to get their business cases funded? That makes sense. They tend to be a big investment after all. Many OSS projects fail, so we want to make sure this one doesn’t, and we perform thorough planning / due-diligence.

But I do find it interesting that we spend far less time and effort on Post Implementation Reviews (PIRs). We might review the project itself, but do we compare its outcomes against the Cost Benefit Analysis (CBA) that undoubtedly went into its business case?

OSS Project Analysis Scales

Even more interesting is that we spend even less time and effort performing ongoing analysis of an implemented OSS against the CBA.

Why interesting? Well, if we took the time to figure out what has really worked, we might have better (and more persuasive) data to improve our future business cases. Not only that, but we’d have more chance to reduce the effort on the business-case side of the scale compared with current approaches (as per the diagrams above).
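To make the comparison concrete, even a trivial forecast-versus-actuals tally would be a start. Here’s a minimal sketch in Python (the benefit names and figures are invented purely for illustration):

```python
# Hypothetical sketch: compare the benefits forecast in the business case
# (CBA) against post-implementation actuals, so future business cases can
# cite evidence rather than estimates. All figures are invented.
cba_forecast = {"truck rolls avoided per year": 1200, "MTTR reduction (hours)": 2.5}
pir_actuals = {"truck rolls avoided per year": 950, "MTTR reduction (hours)": 1.1}

for benefit, forecast in cba_forecast.items():
    actual = pir_actuals.get(benefit)
    if actual is None:
        print(f"{benefit}: no PIR measurement captured")
        continue
    variance = (actual - forecast) / forecast * 100
    print(f"{benefit}: forecast {forecast}, actual {actual} ({variance:+.0f}%)")
```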

What do you think?

OSS – just in time rather than just in case

We all know that once installed, OSS tend to stay in place for many years. Too much effort to air-lift in. Too much effort to air-lift back out, especially if tightly integrated over time.

The monolithic COTS (off-the-shelf) tools of the past would generally be commissioned and customised during the initial implementation project, with occasional integrations thereafter. That meant we needed to plan out what functionality might be required in future years and ask for it to be implemented, just in case. Between the baked-in functionality that is never needed and the just-in-case functionality that is possibly never used, we ended up with a lot of bloat in our OSS.

With the current approach of implementing core OSS building blocks, then utilising rapid release and microservice techniques, we have an ongoing enhancement train. This provides us with an opportunity to build just in time, to build only functionality that we know to be essential.

This has pluses and minuses. On the plus side, we have more opportunity to restrict delivery to only what’s needed. On the minus side, a just in time mindset can build a stop-gap culture rather than strategic, long-term thinking. It’s always good to have long-term thinkers / planners on the team to steer the rapid release implementations (and reductions / refactoring) and avoid a new cause of bloat.

Would you ever alarm your lab equipment?

Something curious dawned on me the other day – I wondered how many people / organisations actively manage alarms / alerts being generated by their lab equipment?

At first glance, this would seem silly. Lab environments are in constant flux, in all sorts of semi-configured situations, and therefore likely to be alarming their heads off at different times.

As such, it would seem even sillier to send alarms, syslogs, performance counters, etc from your lab boxes through to your production alarm management platform. Events on your lab equipment simply shouldn’t be distracting your NOC teams from handling events on your active network, right?

But here’s why I’m curious, in a word: DevOps. Does the DevOps model now mean that some lab equipment stays in a relatively stable state and performs near-mission-critical activities like automated regression testing? Therefore, does it make sense to selectively monitor / manage some lab equipment?
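If so, a simple filter at the mediation layer might be all that’s needed to let a trusted subset of lab events through to the NOC. Here’s a minimal sketch (the device names, event structure and forwarding function are all hypothetical assumptions, not any specific product’s API):

```python
# Hypothetical sketch: forward production events as normal, but only pass
# lab events from an explicit allow-list of "stable" lab devices (eg those
# hosting automated regression testing). Everything else is dropped.
MONITORED_LAB_DEVICES = {"lab-rtr-01", "lab-olt-03"}  # hypothetical names

def should_forward(event: dict) -> bool:
    """Production events always forward; lab events only if allow-listed."""
    if event.get("environment") == "production":
        return True
    return event.get("source") in MONITORED_LAB_DEVICES

def forward_to_alarm_manager(event: dict) -> None:
    # Stand-in for the real northbound interface to the alarm manager
    print(f"NOC <- {event['source']}: {event['summary']}")

def handle_event(event: dict) -> None:
    if should_forward(event):
        forward_to_alarm_manager(event)
    # else: silently drop, so lab noise doesn't distract the NOC

# The regression-testing lab router gets through; the half-configured box doesn't.
handle_event({"environment": "lab", "source": "lab-rtr-01", "summary": "LINK DOWN"})
handle_event({"environment": "lab", "source": "lab-box-99", "summary": "LINK DOWN"})
```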

In what other scenarios might we wish to send lab alarms to the NOC (and I’m not talking about system integration testing type scenarios, but ongoing operational scenarios)?

The OSS dart-board analogy

“The dartboard, by contrast, is not remotely logical, but is somehow brilliant. The 20 sector sits between the dismal scores of five and one. Most players aim for the triple-20, because that’s what professionals do. However, for all but the best darts players, this is a mistake. If you are not very good at darts, your best opening approach is not to aim at triple-20 at all. Instead, aim at the south-west quadrant of the board, towards 19 and 16. You won’t get 180 that way, but nor will you score three. It’s a common mistake in darts to assume you should simply aim for the highest possible score. You should also consider the consequences if you miss.”
Rory Sutherland, on Wired.

When aggressive corporate goals and metrics are combined with brilliant solution architects, we tend to aim for triple-20 with our OSS solutions, don’t we? The problem is, when it comes to delivery, we don’t tend to have the laser-sharp precision of a professional darts player, do we? No matter how experienced we are, there tend to be hidden surprises – some technical, some personal (or should I say inter-personal?), some contractual, etc – that deflect our aim.

The OSS dart-board analogy asks the question about whether we should set the lofty goals of a triple-20 [yellow circle below], with high risk of dismal results if we miss (think too about the OSS stretch-goal rule); or whether we’re better to target the 19/16 corner of the board [blue circle below] that has scaled back objectives, but a corresponding reduction in risk.

OSS Dart-board Analogy

Roland Leners posed the following brilliant question, “What if we built OSS and IT systems around people’s willingness to change instead of against corporate goals and metrics? Would the corporation be worse off at the end?” in response to a recent post called, “Did we forget the OSS operating model?”

There are too many facets of Roland’s question to count, but I suspect that in many cases the corporate goals / metrics are akin to the triple-20 focus, whilst the team’s willingness to change aligns to the 19/16 corner. And that is bound to reduce delivery risk.

I’d love to hear your thoughts!!

The paint the fence automation analogy

There are so many actions that could be automated by / with / in our OSS. It can be hard to know where to start, can’t it? One approach is to look at where the largest amount of manual effort is being expended by operators. Another is to employ the “paint the fence” analogy.

When envisaging fulfilment workflows, it’s easiest to picture actions that start with a customer and wipe down through the OSS / BSS stack.

When envisaging assurance workflows, it’s easiest to picture actions that start in the network and wipe up through the OSS / BSS stack.
Paint the fence OSS analogy

Of course there are exceptions to these rules, but to go a step further, wipe down = revenue, wipe up = costs. We want to optimise both through automation of course.

Like ensuring paint coverage when painting a fence, OSS automation has the potential to best improve Customer Experience coverage when we use brushstrokes down and up.

On the downstroke, it’s through faster service activations, quotes, response times, etc. On the upstroke, it’s through network reliability (downtime reduction), preventative maintenance, expedited notifications, etc.

You’ll notice that these are indicators that are favourable to the customers. I’m sure it won’t take much sleuthing to see the association to trailing metrics that are favourable to the network operators though, right?

Dematerialisation of OSS

“In 1972, the Club of Rome in its report The Limits to Growth predicted a steadily increasing demand for material as both economies and populations grew. The report predicted that continually increasing resource demand would eventually lead to an abrupt economic collapse. Studies on material use and economic growth show instead that society is gaining the same economic growth with much less physical material required. Between 1977 and 2001, the amount of material required to meet all needs of Americans fell from 1.18 trillion pounds to 1.08 trillion pounds, even though the country’s population increased by 55 million people. Al Gore similarly noted in 1999 that since 1949, while the economy tripled, the weight of goods produced did not change.”
Wikipedia, on the topic of Dematerialisation.

The weight of OSS transaction volumes appears to be increasing year on year as we add more stuff to our OSS. The touchpoint explosion is amplifying this further. Luckily, our platforms / middleware, compute, networks and storage have all been scaling as well, so the increased weight has not been as noticeable as it might have been (even though we’ve all worked on OSS that have been buckling under the weight of transaction volumes, right?).

Does it also make sense that when there is an incremental cost per transaction (eg via the increasingly prevalent cloud or “as a service” offerings) we pay closer attention to transaction volumes, because there is a greater perception of cost to us? But not for “internal” transactions, where there is little perceived incremental cost?

But it’s not so much the transaction processing volumes that are the problem directly. It’s more by implication. For each additional transaction there’s the risk of a hand-off being missed or mis-mapped or slowing down overall activity processing times. For each additional transaction type, there’s additional mapping, testing and regression testing effort as well as an increased risk of things going wrong.

Do you measure transaction flow volumes across your entire OSS suite? Does it provide an indication of where efficiency optimisation (ie dematerialisation) could occur and guide your re-factoring investments? Does it guide you on process optimisation efforts?
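Even a crude tally of hand-offs can be a useful first measurement. Here’s a minimal sketch (the system names and event structure are hypothetical):

```python
from collections import Counter

# Hypothetical hand-off records gathered from across the OSS suite: each
# entry notes which system passed a transaction to which other system.
handoffs = [
    {"from": "order-mgmt", "to": "inventory", "type": "service-qualification"},
    {"from": "inventory", "to": "activation", "type": "assign-resources"},
    {"from": "order-mgmt", "to": "inventory", "type": "service-qualification"},
    {"from": "fault-mgmt", "to": "ticketing", "type": "raise-ticket"},
]

# Tally volumes per hand-off so the heaviest interfaces stand out as
# candidates for dematerialisation / re-factoring investment
volumes = Counter((h["from"], h["to"], h["type"]) for h in handoffs)

for (src, dst, txn_type), count in volumes.most_common():
    print(f"{src} -> {dst} [{txn_type}]: {count}")
```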

Vulnerability in OSS

“All over the world – from America’s National Football League (NFL) to the National Basketball Association (NBA), from our own AFL to NRL – athletes and coaches are cultivating club cultures in which tales of personal hardship and woe are welcome, even desirable. All are clamouring to embrace the biggest buzzword in professional sport: vulnerability.
The most publicised incarnation of this shift was the “Triple H” sessions used at AFL winners Richmond last year, where once a fortnight a player stood and shared three personal stories about a Hero, Hardship and Highlight from their life.”
Konrad Marshall, GoodWeekend.

I’ve just done a quick rummage through the OSS and/or tech-related books in my bookshelves. Would you like to know what I noticed? None mention teams, teamwork or culture, let alone the V-word quoted above – vulnerability. Funny that.

I say funny, because each of the highest-performing teams I’ve worked with have also had great team culture. Conversely, the worst-performing teams I’ve worked with have seen ego overpower vulnerability and empathy. Does that resonate with your experiences too?

Professor Amy Edmondson of Harvard Business School refers to psychological safety as, “a team climate characterized by interpersonal trust and mutual respect in which people are comfortable being themselves.” Graeme Cowan states, “We become more open-minded, resilient, motivated, and persistent when we feel safe. Humor increases, as does solution-finding and divergent thinking — the cognitive process underlying creativity.”

Yet why is it that we don’t seem to rate team factors when it comes to OSS delivery? “Team bonding,” in stylishly inverted commas, can come across as a bit ridiculous, but informal culture building seems to be more valuable than any of the technical alignment workshops we tend to build into our project plans.

OSS are built by teams, for teams, clearly. They’re often built in politically charged situations. They’re also usually built in highly complex environments, where complexities abound in technology and process, but even more so within the people involved. Not only that, but they’re regularly built across divisional lines of business units or organisations over which (hopefully metaphorical) hand grenades can be easily thrown.

Underestimate psychological safety and vulnerability across the entire stakeholder group at your peril on OSS projects. We could benefit from looking outside the walls of OSS to models used by sporting teams in particular, where team culture is invested in far more heavily because of the proven performance benefits it delivers.

OSS, the great multipliers

“Skills multiply labors by two, five, 10, 50, 100 times. You can chop a tree down with a hammer, but it takes about 30 days. That’s called labor. But if you trade the hammer in for an ax, you can chop the tree down in about 30 minutes. What’s the difference in 30 days and 30 minutes? Skills—skills make the difference.”
Jim Rohn, here.

OSS can be great labour multipliers. They can deliver baked-in “skills” that multiply labors by two, five, 10, 50, 100 times. They can be not just a hammer, not just an axe, but a turbo-charged chainsaw.

The more pertinent question to ask of our OSS though is, “Why are we chopping this tree down?” Is it the right tree to achieve a useful outcome? Do the benefits outweigh the cost of the hammer / axe / chainsaw?

What if our really clever OSS engineers come up with a brilliant design that can reduce a task from 30 days to 30 minutes…. but it takes us 35 days to design/build/test/deploy the customisation for a once-off use? We’re clearly cutting down the wrong tree.

What if instead we could reduce this same task from 30 days to 1 day with just a quick analysis and process change? It’s nowhere near as sexy or challenging for us OSS engineers though. The very clever 30 minute solution is another case of “just because we can, doesn’t mean we should.”

The strangler fig transformation analogy

You’re probably familiar with strangler figs, which grow on a host tree, often resulting in the death of the host. You’re probably less familiar with the strangler fig analogy as an OSS transformation or cutover model.

The concept is that there is a “host tree” (ie legacy system) that needs to be obsoleted and replaced, but it’s so dominant and integral (eg because of complex and/or meshed integrations) that a big-bang replacement is unviable (eg due to risk, costs, etc).

The strangler fig (ie new solution) is developed in parallel to the host tree and is progressively grown over time. Generally, it grows through step-wise enhancement / replacement. This approach is best suited to scenarios where there are lots of transaction types, fault types, process types, use-cases, etc that can be systematically switched from host to strangler.
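In software terms, this is often implemented as a routing facade placed in front of both systems, switching traffic across one transaction type at a time. A minimal sketch of the idea (the transaction types and handlers are hypothetical):

```python
# Hypothetical strangler-fig facade: requests are routed per transaction
# type, so traffic can be switched from the legacy OSS ("host tree") to
# the new solution ("strangler fig") step by step.

def legacy_handler(txn: dict) -> str:
    return f"legacy OSS handled {txn['type']}"

def new_handler(txn: dict) -> str:
    return f"new solution handled {txn['type']}"

# Transaction types already migrated to the new solution (grows each release)
MIGRATED_TYPES = {"port-fault", "ethernet-activation"}

def route(txn: dict) -> str:
    """Send migrated transaction types to the new system; all else to legacy."""
    handler = new_handler if txn["type"] in MIGRATED_TYPES else legacy_handler
    return handler(txn)

print(route({"type": "ethernet-activation"}))  # handled by the strangler fig
print(route({"type": "copper-activation"}))    # still handled by the host tree
```

Each release promotes more transaction types into the migrated set until the host tree can be safely decommissioned.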

This approach can also be used for product consolidation (ie multiple products consolidated into one).

Clever use of automated regression testing can help with this evolving cutover approach.

What if every OSS project was a stretch goal?

What if the objectives of every large OSS project were actually perceived as a stretch goal by internal and external stakeholders of the project?

Sim Sitkin, et al describe a stretch goal as, “We’re not talking about merely challenging goals. We’re talking about management moon shots—goals that appear unattainable given current practices, skills, and knowledge.”

Dymphna Boholt describes it thus:
“The reality is that if everything lands in a project – if every element does what it needs to do when it was supposed to do it, and everything went off without a hitch, that’s actually a stunning success. It’s also incredibly rare.

But we set that impossibly high standard as our benchmark, and then think that everything short of that is a failure.

And in that way, the grading scale should almost be reversed, so that if you put them side by side, it’d look like:

Success = Stunning Success
Partial failure = Total Success
Total Failure = Success
Miserable Failure = Useful learning experience
Total Trainwreck = Failure.”

You know, I’d never, ever looked at OSS projects from this perspective before. I’d always taken the view that if a project I’d worked on didn’t deliver on all expectations (the triple constraint of cost, time, scope/functionality), then it had failed to meet expectations, no matter how big the achievements of the project team may have been.

Dymphna has a really interesting point though. The chances of everything going to plan on a large, complex OSS project are almost zero. There are always challenges to overcome, no matter how skillful the team, how great the planning. It’s why I call it the OctopOSS (just when you think you have all the tentacles tied down, another comes and whacks you on the back of the head).

What if we instead treated the definition of “success” on our OSS projects as an implausibly unlikely stretch goal, to act as a guiding vision for our team? What if we also re-set the expectation benchmark of stakeholders to Dymphna’s right-hand column, not their incumbent expectation at the left?

Would that be letting ourselves off the hook for “failing,” for not meeting our promises, for under-delivering? Or is it fair to resign ourselves to reality rather than delusional benchmarks? Is Dymphna’s right-hand scale effectively like passing an exam only because the bell-curve is used and your classmates have overwhelmingly scored even lower than you?

I’m asking myself, why is it that some of the projects I’m most proud of (of the achievements of my colleagues and I) are also perhaps the biggest failures (on time, cost, scope or customer usability)?

Are you as challenged by Dymphna’s perspective as I am? I’d love to hear your thoughts.

Reducing the lumps with OSS services

As promised in yesterday’s post about lumpy revenues for OSS product companies, today we’ll discuss OSS professional services revenues and the contrasting mindset compared with products.

Professional services revenues are a great way of smoothing out the lumpy revenue streams of traditional OSS product companies. There’s just one problem though. Of all the vendors I’ve worked with, I’ve found that they always have a predilection – they either have a product mindset or a services mindset and struggle to do both well because the mindsets are quite different.

Not only that but we can break professional services into two categories:

  1. Product-related services – the installation and commissioning of products; and
  2. Consultancy-based services – the value-add services that drive business value from the OSS / BSS.

Product companies provide product-related services, naturally. I can’t help but think that if we as an industry provided more of the consultancy-based services, we’d have more justification for greater spend on OSS / BSS (and smoother revenue streams in the process).

Having said that, PAOSS specialises in consultancy-based services (as well as install / commission / delivery services), so we’re always happy to help organisations that need assistance in this space!!

Being an OSS map-maker

“Each problem that I solved became a rule, which served afterwards to solve other problems.”
René Descartes.

On a recent project, I spent quite a lot of time thinking in terms of problem statements, then mapping them into solutions that could be broken down for assignment to lots of delivery teams – feeding their Agile backlogs.

On that assignment, like the multitude of OSS projects before it, there has been very little repetition in the solutions. The people, org structure, platforms, timelines, objectives, etc all make for a highly unique solution each time. And high uniqueness doesn’t easily equate to repeatability. If there’s no repeatability, there’s no point building repeatable tools to improve efficiency. Yet repeatability is highly desirable for the purposes of reliability, continual improvement, economies of scale, etc.

However, if we look a step above the solution, above the use cases, above the challenges, we have key problem statements and they do tend to be more consistent (albeit still nuanced for each OSS). These problem statements might look something like:

  • We need to find a new vendor / solution to do X (where X is the real problem statement)
  • Realised risks have impacted us badly on past projects (so we need to minimise risk on our upcoming transformation)
  • We don’t get new products out to market fast enough to keep up with competitor Y and are losing market share to them
  • Our inability to resolve network faults quickly is causing customers to lose confidence in us
  • etc

It’s at this level that we begin to have more repeatability, so it’s at this level that it makes sense to create rules, frameworks, etc that are re-usable and refinable. You’ll find some of the frameworks I use under the Free Stuff menu above.

It seems that I’m an OSS map-maker by nature, wanting to take the journey but also to map it out for re-use and refinement.

I’d love to hear whether it’s a common trait and inherent in many of you too. Similarly, I’d love to hear about how you seek out and create repeatability.

How to run an OSS PoC

This is the third in a series describing the process of finding the right OSS solution for your specific needs and getting estimated pricing to help you build a business case.

The first post described the overall OSS selection process we use. The second described the way we poll the market and prepare a short-list of OSS products / vendors based on current capabilities.

Once you’ve prepared the short-list it’s time to get into specifics. We generally do this via a PoC (Proof of Concept) phase with the short-listed suppliers. We have a few very specific principles when designing the PoC:

  • We want it to reflect the operator’s context so that they can grasp what’s being presented (which can be a challenge when a vendor runs their own generic demos). This “context” is usually in the form of using the operator’s device types, naming conventions, service types, etc. It also means setting up a network scenario that is representative of the operator’s, which could be a hypothetical model, a small segment of a real network, lab model or similar
  • PoC collateral must clearly describe the PoC and related context. It should clearly identify the important scenarios and selection criteria. Ideally it should logically complement the collateral provided in the previous step (ie the requirement gathering)
  • We want it to focus on the most important conditions. If we take the 80/20 rule as a guide, we’ll quickly identify the most common service types, devices, configurations, functions, reports, etc that we want to model
  • Identify efficacy across those most important conditions. Don’t just look for the functionality that implements those conditions, but also the speed at which they can be done at a scale required by the operator. This could include bulk load or processing capabilities and may require simulators (or real integrations – see below) to generate volume
  • We want it to be as simple as is feasible so that it minimises the effort required of suppliers and operators alike
  • Consider a light-weight integration if possible. One of the biggest challenges with an OSS is getting data in and out. If you can get a rapid integration with a real network (eg a microservice, SNMP traps, syslog events or similar) then it will give an indication of the integration challenges ahead (see the sketch after this list). However, note the previous point, as it might be quite time-consuming for both operator and supplier to set up a real-time integration
  • Take note of the level of resourcing required by each supplier to run the PoC (eg how many supplier staff, server scaling, etc.). This will give an indication of the level of resourcing the operator will need to allocate for the actual implementation, including organisational change management factors
  • Attempt to offer PoC platform consistency so that all suppliers are on a level playing field, which might be through designing the PoC on common devices or topologies with common interfaces. You may even look to go the opposite way if you think the rarity of your conditions could be a deal-breaker
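As an indication of how light such an integration can be, the sketch below receives syslog events over UDP and simply prints them; a real PoC would normalise them and push them into the candidate OSS. The port number is an assumption (the standard syslog port 514 requires elevated privileges):

```python
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    """Minimal receiver for syslog events sent over UDP."""
    def handle(self):
        # For UDP servers, self.request is a (data, socket) pair
        data = self.request[0].strip().decode("utf-8", errors="replace")
        source = self.client_address[0]
        # In a real PoC this event would be normalised and pushed to the
        # candidate OSS; here we simply print it.
        print(f"{source}: {data}")

if __name__ == "__main__":
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
        server.serve_forever()
```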

Note that we tend to scale the size / complexity / reality of the PoC to the scale of the project budget, out of consideration for vendor and operator alike. If it’s a small project / budget, then we do a light PoC. If it’s a massive transformation, then the PoC definitely has to go deeper (ie more integrations, more scenarios, more data migration and integrity challenges, etc)… although ultimately our customers decide how deep they’re comfortable going.

Best of luck and feel free to contact us if we can assist with the running of your OSS PoC.

How to identify a short-list of best-fit OSS suppliers for you

In yesterday’s post, we talked about how to estimate OSS pricing. One of the key pillars of the approach was to first identify a short-list of vendors / integrators best-suited to implementing your specific OSS, then working closely with them to construct a pricing model.

Finding the right vendor / integrator can be a complex challenge. There are dozens, if not hundreds of OSS / BSS solutions to choose from and there are rarely like-for-like comparators. There are some generic comparison tools such as Gartner’s Magic Quadrant, but there’s no way that they can cater for the nuanced requirements of each OSS operator.

Okay, so you don’t want to hear about problems. You want solutions. Well today’s post provides a description of the approach we’ve used and refined across the many product / vendor selection processes we’ve conducted with OSS operators.

We start with a short-listing exercise. You won’t want to deal with dozens of possible suppliers. You’ll want to quickly and efficiently identify a small number of candidates that have capabilities that best match your needs. Then you can invest a majority of your precious vendor selection time in the short-list. But how do you know the up-to-date capabilities of each supplier? We’ll get to that shortly.

For the short-listing process, I use a requirement gathering and evaluation template. You can find a PDF version of the template here. Note that the content within it is out-dated and I now tend to use a more benefit-centric classification rather than feature-centric classification, but the template itself is still applicable.

STEP ONE – Requirement Gathering
The first step is to prepare a list of requirements (as per page 3 of the PDF):
Requirement Capture.
The left-most three columns in the diagram above (in white) are filled out by the operator, classifying the list of requirements and how important each is (ie mandatory, etc). The depth of requirements (column 2) is up to you and can range from specific technical details to high-level objectives. They could even take the form of user stories or intended benefits.

STEP TWO – Issue your requirement template to a list of possible vendors
Once you’ve identified the list of requirements, you want to identify a list of possible vendors/integrators that might be able to deliver on those requirements. The PAOSS vendor/product list might help you to identify possible candidates. We then send the requirement matrix to the vendors. Note that we also send an introduction pack that provides the context of the solution the OSS operator needs.

STEP THREE – Vendor Self-analysis
The right-most three columns in the diagram above (in aqua) are designed to be filled out by the vendor/integrator. The suppliers are best suited to fill out these columns because they best understand their own current offerings and capabilities.
Note that the status column is a pick-list of compliance level, where FC = Fully Compliant. See page 2 of the template for other definitions. Given that it is a self-assessment, you may choose to change the Status (vendor self-rankings) if you know better and/or ask more questions to validate the assessments.
The “Module” column identifies which of the vendor’s many products would be required to deliver on the requirement. This column becomes important later on as it will indicate which product modules are most important for the overall solution you want. It may allow you to de-prioritise some modules (and requirements) if price becomes an issue.

STEP FOUR – Compare Responses
Once all the suppliers have returned their matrix of responses, you can compare them at a high level based on the summary matrix (on page 1 of the template):
OSS Requirement Summary
For each of the main categories, you’ll be able to quickly see which vendors are the most FC (Fully Compliant) or NC (Non-Compliant) on the mandatory requirements.
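If the returned matrices are machine-readable (eg spreadsheet exports), this summary can be tallied automatically. A rough sketch (the categories, statuses and response structure are a hypothetical simplification of the template):

```python
from collections import defaultdict

# Hypothetical vendor self-assessments: requirement category, priority and
# compliance status (FC = Fully Compliant, PC = Partially Compliant,
# NC = Non-Compliant), simplified from the requirement template.
responses = {
    "Vendor A": [
        ("Fault Management", "Mandatory", "FC"),
        ("Inventory", "Mandatory", "PC"),
        ("Reporting", "Desirable", "FC"),
    ],
    "Vendor B": [
        ("Fault Management", "Mandatory", "NC"),
        ("Inventory", "Mandatory", "FC"),
        ("Reporting", "Desirable", "FC"),
    ],
}

# Count compliance statuses per vendor on mandatory requirements only
summary = defaultdict(lambda: defaultdict(int))
for vendor, rows in responses.items():
    for category, priority, status in rows:
        if priority == "Mandatory":
            summary[vendor][status] += 1

for vendor, counts in summary.items():
    print(vendor, dict(counts))
```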

Of course you’ll need to analyse more deeply than just the Summary Matrix, but across all the vendor selection processes we’ve been involved with, there has always been a clear identification of the suppliers of best fit.

Hopefully the process above is fairly clear. If not, contact us and we’d be happy to guide you through the process.

Getting a price estimate for your OSS

“Sometimes a simple question deserves a simple answer: ‘A piece of string is twice as long as half its length.’ This is a brilliant answer… if you have its length… Without a strategy, how do you know if it is successful? It might be prettier, but is it solving a defined business problem, saving or making money, or fulfilling any measurable goals? In other words: can you measure the string?”
Carmine Porco, here.

I was recently asked by a University student how to obtain OSS pricing for a paper-based assignment. To make things harder, the target client was to be a tier-2 telco with a small SDN / NFV network.

As you probably know already, very few OSS providers make their list prices known. The few vendors that do tend to focus on the high volume, self-serve end of the market, which I’ll refer to as “Enterprise Grade.” I haven’t heard of any “Telco Grade” OSS suppliers making their list prices available to the public.

There are so many variables when finding the right OSS for a customer’s needs and the vendors have so much pricing flexibility that there is no single definitive number. There are also rarely like-for-like alternatives when selecting an OSS vendor / product. Just like the fabled piece of string, the best way is to define the business problem and get help to measure it. In the case of OSS pricing, it’s to design a set of requirements and then go to market to request quotes.

Now, I can’t imagine many vendors being prepared to invest their valuable time in developing pricing based on paper studies, but I have found them to be extremely helpful when there’s a real buyer. I’ll caveat that by saying that if the customer (eg service provider) you’re working with is prepared to invest the time to help put a list of requirements together then you have a starting point to approach the market for customised pricing.

We’ve run quite a few of these vendor selections and have refined the process along the way to streamline for vendors and customers alike. Here’s a template we’ve used as a starting point for discussions with customers:

OSS vendor selection process

Note that each customer will end up with a different mapping of the diagram above to suit their specific needs. We also have existing templates (eg Questionnaire, Requirement Matrix, etc) to support the selection process where needed.

If you’re interested in reading more about the process of finding the right OSS vendor and pricing for you, click here and here.

Of course, we’d also be delighted to help if you need assistance to develop an OSS solution, get OSS pricing estimates, develop a workable business case and/or find the right OSS vendor/products for you.

Operator involvement on OSS projects

“You cannot simply have your end users give some specifications then leave while you attempt to build your new system. They need to be involved throughout the process. Ultimately, it is their tool to use.”
José Manuel De Arce, here.

As an OSS consultant and implementer, I couldn’t agree more with José’s quote above. José, by the way, is an OSS Manager at Telefónica, so he sits on the operator’s side of the implementation equation. I’m glad he takes the perspective he does.

Unfortunately, many OSS operators are so busy with operations, they don’t get the time to help with defining and tuning the solutions that are being built for them. It’s understandable. They are measured by their ability to keep the network (and services) running and in a healthy state.

From the implementation side, it reminds me of this old comic:
Too busy

The comic reminds me of OSS implementations for two reasons:

  1. Without ongoing input from operators, you can only guess at how the new tools could improve their efficacy and mitigate their challenges
  2. Without ongoing involvement from operators, they don’t learn the nuances of how the new tool works or the scenarios it’s designed to resolve… what I refer to as an OSS apprenticeship

I’ve seen it time after time on OSS implementations (and other projects for that matter) – [As a customer] you get back what you put in.