Persona mapping for OSS PoCs

When selecting a new application for an OSS, or augmenting an existing OSS, it always makes sense to me to run a Proof of Concept (PoC). But what do we want to demonstrate in that PoC? For me, it's the factors (eg features, use-cases, processes, etc) that justify the investment.

A simple exercise you can use is to identify the personas / roles that interact with the OSS. This could include personas such as NOC operator, strategic planner, network engineer, order entry, field ops, data / analytics, application administrator, etc. The actual personas will of course differ from organisation to organisation.

For each of those personas, we can identify and interview an individual who represents that persona.

Interview questions include:

  1. What are the key responsibilities of your role?
  2. What is the most important goal / KPI for your role?
  3. How does this OSS (or proposed OSS) support you in meeting this goal?
  4. Describe the single most important process / function that you perform using the OSS
  5. Why is it so important?
  6. How often do you perform this process / function?
  7. Please provide a short list of other important processes / functions you perform with this OSS

We can then build the responses into a matrix and seek to prioritise them into a set of use-cases. Based on time and cost constraints, we can then build the top-n of those use-cases into implementation scenarios for the PoC.
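As a rough sketch of what that matrix might look like in practice (the personas, use-cases and scores below are invented for illustration, not a recommended weighting scheme):

```python
# Hypothetical persona-vs-use-case scoring matrix for PoC prioritisation.
# Personas, use-cases and scores below are placeholders only.

interview_scores = {
    # persona: {use-case: importance (1-5) x frequency (1-5)}
    "NOC operator":     {"alarm triage": 25, "ticket creation": 15, "capacity report": 4},
    "network engineer": {"alarm triage": 10, "service design": 20, "capacity report": 12},
    "order entry":      {"ticket creation": 8, "service design": 6, "order capture": 25},
}

def prioritise(scores, top_n=3):
    """Aggregate each use-case's score across personas and return the top-n."""
    totals = {}
    for persona_scores in scores.values():
        for use_case, score in persona_scores.items():
            totals[use_case] = totals.get(use_case, 0) + score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(prioritise(interview_scores))
# [('alarm triage', 35), ('service design', 26), ('order capture', 25)]
```

The importance and frequency weightings can come straight out of the interview answers (questions 2, 5 and 6 above).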

OSS operationalisation at scale

We had a highly flexible network design team at a previous company. Not necessarily because we wanted to be, but because the client's allocation of work forced us to be.

Our team was largely made up of casual workers because there was little way of predicting whether we needed 2 designers or 50 in any given week. The workload being assigned by the client was incredibly lumpy.

But we were lucky. We only had design work. The lumpiness in design effort flowed down through the work stack into construction, test and deployment teams. The constructors had millions of dollars of equipment that they needed to mobilise and demobilise as the work ebbed and flowed. Unfortunately for the constructors, they’d prepared their rate cards on the assumption of a fairly consistent level of work coming through (it was a very big project).

This lumpiness didn’t work out for anyone in the delivery pipeline, the client included. It was actually quite instrumental in a few of the constructors going into liquidation. The client struggled to meet roll-out targets.

The allocation of work was being made via the client’s B/OSS stack. The B/OSS teams were blissfully unaware of the downstream impact of their sporadic allocation of designs. Towards the end of the project, they were starting to get more consistent and delivery teams started to get into more of a rhythm… just as the network was coming to the end of build.

As OSS builders, we sometimes get so wrapped up in delivering functionality that we can forget that one of the key requirements of an OSS is to operationalise at scale. In addition to UI / CX design, this might be something as simple as smoothing the effort allocation for work under our OSS's management.
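To make that smoothing idea a touch more concrete, here's a deliberately simplistic sketch (the intake numbers and weekly cap are invented) of how a work-allocation function might level a lumpy intake of designs before releasing them downstream:

```python
# Illustrative only: level a lumpy weekly intake of design packs into a
# steadier downstream release rate, using a simple backlog + weekly cap.

weekly_intake = [50, 2, 0, 45, 5, 60, 3, 0]  # hypothetical design packs received per week
weekly_cap = 25                              # assumed downstream construction capacity

def smooth_releases(intake, cap):
    backlog, releases = 0, []
    for received in intake:
        backlog += received
        released = min(backlog, cap)
        backlog -= released
        releases.append(released)
    return releases, backlog

released, remaining = smooth_releases(weekly_intake, weekly_cap)
print(released)   # [25, 25, 2, 25, 25, 25, 25, 13] - far less lumpy downstream
print(remaining)  # 0 - nothing left queued at the end of this window
```

A real work-management module would obviously consider priorities, due dates and crew skills, but even a crude cap like this would have saved our constructors a lot of mobilise / demobilise pain.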

Shifting a problem to the left using OSS

The table below is interesting in relation to the customer satisfaction and cost of delivering various styles of customer assurance activity:

                        Proactive Fix   Self-help   L1/2/3 Assurance   Field Operations
Customer Satisfaction   Very High       High        Medium             Low
Expense                 Low             Medium      High               Very High

The ambition for any organisation is to perform a shift to the left on this table. In other words, to introduce assurance mechanisms that increase the likelihood of an event being captured towards the left of the table (ie before becoming a field operations issue to solve). In theory, every shift left results in greater customer satisfaction and reduced cost to the operator.

Of course it's a generic table (eg some proactive assurance programs can cost more than a "low" classification suggests), but it does tell a story.

Our OSS cover the full scope of this table. They don't perform L1/2/3 assurance or field operations themselves, but they certainly help to coordinate and manage those activities.

If you were to use this table to classify your operational costs, what does the cost profile look like? Is it heavily weighted to the right side of the table? Does your operational cost profile justify further investment in your OSS to shift some of those costs to the left?
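If you wanted to put rough numbers against those questions, a first-pass sketch might look like the following, where the unit costs and event volumes are invented purely to show the weighting exercise:

```python
# Hypothetical cost-profile check: how heavily weighted is assurance spend
# towards the right-hand (expensive) side of the table?

unit_cost = {            # assumed relative cost per resolved event
    "proactive_fix": 1,
    "self_help": 3,
    "l123_assurance": 10,
    "field_operations": 40,
}
events_per_month = {     # invented volumes for illustration
    "proactive_fix": 500,
    "self_help": 800,
    "l123_assurance": 1200,
    "field_operations": 300,
}

total = sum(unit_cost[k] * events_per_month[k] for k in unit_cost)
for k in unit_cost:
    share = unit_cost[k] * events_per_month[k] / total
    print(f"{k:>16}: {share:.0%} of assurance cost")
# If most of the cost sits in the right-hand columns, that's the start of a
# business case for using the OSS to shift events left.
```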

This post from sysaid has some further shift-left concepts as they relate to service management within IT.

Ciena to acquire DonRiver

Ciena announces intent to acquire DonRiver.

Ciena Corporation has entered into a definitive agreement to acquire privately-held DonRiver, a global software and services company specializing in federated network and service inventory management solutions within the service provider Operational Support Systems (OSS) environment.

DonRiver will bring new capabilities to Ciena’s Blue Planet software and services portfolio that significantly enhance the company’s ability to deliver on its Adaptive Network vision through intelligent, closed-loop automation. Specifically, with the addition of DonRiver’s federated network and service inventory management solutions, Ciena’s Blue Planet capabilities will extend beyond network orchestration and control to also provide a unified inventory view of all elements across a provider’s network. Additionally, the DonRiver team of specialized OSS software, integration and consulting experts will complement and scale the Blue Planet organization to form a truly unique and specialized services group that is able to manage modernization projects across both IT and network operations.

“The combination of Blue Planet and DonRiver will enhance our ability to deliver closed loop automation of network services and the underlying operational processes across IT/operations and the network,” said Rick Hamilton, senior vice president of Global Software and Services at Ciena. “With this new set of technology and expertise, we can help customers realize the full benefits of network automation by helping them move away from highly complex and fragmented OSS environments to those that accurately reflect the real-time state and utilization of network resources.”

The transaction is expected to close during Ciena’s fiscal fourth quarter 2018 and is subject to customary closing conditions.

OSS data Ponzi scheme

"The more data you have, the more data you need to understand the data you have. You are engaged in a data Ponzi scheme… Could it be in service assurance and IT ops that more data equals less understanding?"
Phil Tee
in the opening address at the AIOps Symposium.

Interesting viewpoint, right?

Given that our OSS hold shed-loads of data, Phil is saying we need lots of data to understand that data. Well, yes… and possibly no.

I have a theory that data alone doesn't talk, but it's great at answering questions. You could say that you need lots of data, although (arguing semantics) I'd say you actually need lots of knowledge / awareness to ask great questions. Perhaps that knowledge / awareness comes from seeding machine-led analysis tools (or our data scientists' brains) with lots of data.

The more data you have, the more noise you have to find the signal amongst. That means you have to ask more questions of your data if you want to drive a return that justifies the cost of collecting and curating it all. Machine-led analytics certainly assist us in handling the volume and velocity of data our OSS create / collect, but that's just asking the same question/s over and over. There's almost no end to the questions that can be asked of our data, just a limit on the time in which we can ask them.

Does that make data a Ponzi scheme? A Ponzi scheme pays profits to earlier investors using funds obtained from newer investors. Eventually it must collapse, because the scheme runs out of new investors to fund the profits. A data Ponzi scheme pays out insights on earlier (seed) data by obtaining new (streaming) data. But the stream of data reaching an OSS never runs out. So if we need to invest heavily in data (eg AI / ML, etc), at what point in the investment lifecycle will we stop creating new insights?

LG U+ (Korea) signs with Comarch

Comarch to Help Korean LG U+ to Launch One of the First 5G Networks Worldwide.

Comarch and LG U+ have signed a major contract in a bid to completely revamp the Korean operator’s network resource and service management, covering Operations Support & Readiness, Fulfilment and Assurance domains, in preparation for the operator’s planned big scale 5G rollout.

The upcoming implementation will allow LG U+ to migrate from its old in-house solution to a modern and comprehensive telco ecosystem.

Comarch will oversee the implementation of a complete stack of solutions consolidating the existing tools landscape into one unified, scalable platform in the areas of mobile and fixed networks. LG U+'s goals in the project are to optimize internal company processes and to improve the overall end-user experience.

The planned OSS stack overhaul will also be instrumental in the Korean operator’s plans to launch one of the first commercial 5G networks. While supporting the operator in pursuing the latest network technology, the Comarch system will also serve 3G, 4G and fixed network domains.

The solution delivered by Comarch will help LG U+ break IT architecture silos, prepare for efficient fulfilment of modern, 5G-based services, increase network management effectiveness and cut its costs through automation. It will also provide the tools to create logical connectivity layers in a unified format, support network virtualization, and handle the monitoring of network, service and customer layers. Comarch OSS will also empower LG U+ Intelligent Assurance & Analytics including an embedded machine learning engine.

The LG U+ contract is an important milestone for Comarch. Supporting one of the first deployments of a commercial 5G network, puts our company at the true forefront of innovation. The delivery of our OSS platform, which comprises close to 20 modules, will bring our customer a world-class, integrated solution enabling the efficient management of services delivered via mobile and fixed networks. Additionally, a major implementation for a key Korean mobile carrier will definitely help us expand our presence on the Asian markets – noted Jacek Lonc, EVP Sales Telco Division at Comarch.

At LG U+ we currently use an in-house developed OSS stack. As the current IT architecture is silo-based, we experience a number of challenges regarding the introduction of new technologies such as 5G and network virtualization. The successful implementation of Comarch’s comprehensive platform will enable us to achieve a competitive advantage and increase business process efficiency – noted Hokyung Kwon, NMS Development Team Leader at LGU+.

Ericsson to acquire CENX

Ericsson to acquire CENX to boost network automation capability.

Ericsson has agreed to acquire 100 percent of the shares in CENX, boosting Ericsson’s Operations Support Systems (OSS) portfolio with vendor-agnostic service assurance and closed-loop automation capability. Ericsson has held a minority stake in CENX since 2012.

Ericsson has a market leading position in NFV and orchestration. This capability will be further enhanced with CENX’s closed-loop automation and service assurance capabilities. To unleash the potential of 5G, telecom operators need to leverage network virtualization and orchestrate and automate network slices to serve the needs of enterprise customers towards their digital transformation – all while reducing operational costs.

Mats Karlsson, Head of Solution Area OSS, Ericsson, says: “Dynamic orchestration is crucial in 5G-ready virtualized networks. By bringing CENX into Ericsson, we can continue to build upon the strong competitive advantage we have started as partners. I look forward to meeting and welcoming our new colleagues into Ericsson.”

Closed-loop automation ensures Ericsson can offer its service provider customers an orchestration solution that is optimised for 5G use cases like network slicing, taking full advantage of Ericsson’s distributed cloud offering. Ericsson’s global sales and delivery presence – along with its strong R&D – will also create economies of scale in the CENX portfolio and help Ericsson to offer in-house solutions for OSS automation and assurance.

Ed Kennedy CEO, CENX says: “Ericsson has been a great partner – and for us to take the step to fully join Ericsson gives us the best possible worldwide platform to realize CENX’s ultimate goal – autonomous networking for all. Our closed-loop service assurance automation capability complements Ericsson’s existing portfolio very well. We look forward to seeing our joint capability add great value to the transformation of both Ericsson and its customers.”

CENX, founded in 2009, is headquartered in Jersey City, New Jersey. The company achieved significant year-over-year revenue growth in the fiscal year that ended December 31, 2017. CENX employs 185 people.

The transaction is subject to customary regulatory approvals.

Help needed: IoT / OSS cross-over use cases

Hi PAOSS community.

I’d like to call in a favour today if I may. I’m on the hunt for any existing use-cases and / or project sites that have integrated a significant sensor network into their OSS and existing operational processes.

That includes a strategy for handling IoT-scale integration of data collection, event / alarm processing, device management, data contextualization, data analytics, end-to-end security and applications management / enablement within existing OSS tools.

I'm looking for examples where an OSS had previously managed thousands of (network) devices and is now managing hundreds of thousands of (IoT) devices. Not necessarily customers' IoT devices delivered as services, but devices within an operator's own network.

Obviously that's an unprecedented change in scale in traditional OSS terms, but it will be commonplace if our OSS are to play a part in the management of large sensor networks in the future.

There's an element of mutual exclusivity between what an IoT management platform and an OSS need to do, but there are also some similarities. I'd love to speak with anyone who has actually bridged the gap.

If your partners don’t have to talk to you then you win

"If your partners don't have to talk to you then you win."
Guy Lupo.

Put another way, the best form of customer service is no customer service (ie your customers and/or partners are so delighted with your automated offerings that they have no reason to contact you). They don’t want to contact you anyway (generally speaking). They just want to consume a perfectly functional and reliable solution.

In the deep, distant past, our comms networks required operators. But then we developed automated dialling / switching. In theory, the network looked after itself and people made billions of calls per year unassisted.

Something happened in the meantime though. Telco operators the world over started receiving lots of calls about their platform and products. You could say that they’re unwanted calls. The telcos even have an acronym called CVR – Call Volume Reduction – that describes their ambitions to reduce the number of customer calls that reach contact centre agents. Tools such as chatbots and IVR have sprung up to reduce the number of calls that an operator fields.

Network as a Service (NaaS), the context within Guy’s comment above, represents the next new tool that will aim to drive CVR (amongst a raft of other benefits). NaaS theoretically allows customers to interact with network operators via impersonal contracts (in the form of APIs). The challenge will be in the reliability – ensuring that nothing falls between the cracks in any of the layers / platforms that combine to form the NaaS.
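To make the "impersonal contract" idea slightly more tangible, here's a purely illustrative sketch of the kind of machine-readable request / response a NaaS consumer might exchange with an operator. None of these field names come from a real NaaS specification:

```python
# Illustrative only: the "contract" a NaaS consumer might exchange with an
# operator via API, instead of a phone call. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class NaasServiceRequest:
    service_type: str        # e.g. "ethernet-p2p"
    a_end: str               # service end-points
    z_end: str
    bandwidth_mbps: int
    latency_ms_max: float    # SLA terms expressed as data, not conversation
    availability_pct: float

@dataclass
class NaasServiceResponse:
    order_id: str
    status: str              # e.g. "feasible", "in-progress", "active"
    committed_rfs_date: str  # ready-for-service date the consumer can plan around

request = NaasServiceRequest("ethernet-p2p", "SYD-DC1", "MEL-DC3", 1000, 10.0, 99.95)
# A well-designed NaaS layer validates, prices and fulfils this request
# end-to-end; the CVR benefit comes from no human needing to be in the loop.
```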

In the world of NaaS creation, Guy is exactly right – “If your partners [and customers] don’t have to talk to you then you win.” As always, it’s complexity that leads to gaps. The more complex the NaaS stack, the less likely you are to achieve CVR.

What OSS environments do you need?

When we're planning a new OSS, we tend to be focused on the production (PROD) environment. After all, that's where its primary purpose is served – to operationalise a network asset. That is where the majority of an OSS's value gets created.

But we also need some (roughly) equivalent environments for separate purposes. We’ll describe some of those environments below.

By default, vendors will tend to only offer licensing for a small number of database instances – usually just PROD and a development / test environment (DEV/TEST). You may not envisage that you will need more than this, but you might want to negotiate multiple / unlimited instances just in case. If nothing else, it's worth bringing to the negotiation table even if it gets shot down because budgets are tight and / or vendor pricing for extra environments is inflexible.

Examples where multiple instances may be required include:

  1. Production (PROD) – as indicated above, that’s where the live network gets managed. User access and controls need to be tight here to prevent catastrophic events from happening to the OSS and/or network
  2. Disaster Recovery (DR) – depending on your high-availability (HA) model (eg cold standby, primary / redundant, active / active), you may require a DR or backup environment
  3. Sandpit (DEV / TEST) – these environments are essential for OSS operators to be able to prototype and learn freely without the risk of causing damage to production environments. There may need to be multiple versions of this environment depending on how reflective of PROD they need to be and how viable it is to take refresh / updates from PROD (aka PROD cuts). Sometimes also known as non-PROD (NP)
  4. Regression testing (REG TEST) – regression testing requires a baseline data set to continually test and compare against, flagging any variations / problems that have arisen from any change within the OSS or networks (eg new releases). This implies a need for data and applications to be shielded from the constant change occurring on other types of environments (eg DEV / TEST). In situations where testing transforms data (eg activation processes), REG TEST needs to have the ability to roll back to the previous baseline state
  5. Training (TRAIN) – your training environments may need to be established with a repeatable set of training scenarios that also need to be re-set after each training session. This should also be separated from the constant change occurring on dev/test environments. However, due to a shortage of environments, and the relative rarity of training needed at some customers, TRAIN often ends up as another DEV or TEST environment
  6. Production Support (PROD-SUP) – this type of environment is used to prototype patches, releases or defect fixes (for defects on the PROD environment) prior to release into PROD. PROD-SUP might also be used for stress and volume testing, or SVT may require its own environment
  7. Data Migration (DATA MIG) – at times, data creation and loading needs to be prototyped in a non-PROD environment. Sometimes this can be done in PROD-SUP or even a DEV / TEST environment. On other occasions it needs its own dedicated environment so as not to interrupt BAU (business as usual) activities on those other environments
  8. System Integration Testing (SIT) – OSS integrate with many other systems and often require dedicated integration testing environments
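One simple way to keep the licensing negotiation (and environment sprawl) honest is to treat the environment set as data. The entries below are examples rather than a recommendation:

```python
# Example environment register (entries are illustrative, not prescriptive).
# Useful when negotiating instance licensing and planning refresh cycles.

environments = [
    {"name": "PROD",     "purpose": "live network management",    "refresh_from": None},
    {"name": "DR",       "purpose": "disaster recovery",          "refresh_from": "PROD"},
    {"name": "DEV",      "purpose": "prototyping / learning",     "refresh_from": "PROD"},
    {"name": "REG-TEST", "purpose": "regression baseline",        "refresh_from": None},
    {"name": "TRAIN",    "purpose": "training scenarios",         "refresh_from": "baseline"},
    {"name": "PROD-SUP", "purpose": "patch / defect prototyping", "refresh_from": "PROD"},
    {"name": "DATA-MIG", "purpose": "migration dry-runs",         "refresh_from": "PROD"},
    {"name": "SIT",      "purpose": "integration testing",        "refresh_from": "PROD"},
]

licensed_instances = 2  # what a default vendor offer might cover
shortfall = len(environments) - licensed_instances
print(f"Environments wanted: {len(environments)}, licensed: {licensed_instances}, "
      f"shortfall to negotiate: {shortfall}")
```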

Am I forgetting any? What other environments do you find to be essential on your OSS?

The OSS self-driving vehicle

I was lucky enough to get some time with a friend recently, a friend who's running a machine-learning network assurance proof-of-concept (PoC).

He's been really impressed with the results coming out of the PoC. However, one of the really interesting factors he's been finding is how frequently BAU (business as usual) changes in the OSS data (eg changes in naming conventions, topologies, etc) would impact results. Little changes made by upstream systems effectively invalidated the baselines that the machine-learning engines had keyed in on. Those little changes meant the engine had to re-baseline / re-learn to build back up to previous insight levels. Or, to avoid invalidating the baseline, it would require re-normalising all of the data whenever a BAU change was identified.
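As a crude illustration of that re-normalising idea (the naming rules below are invented), canonicalising identifiers before they reach the learning engine means a BAU naming-convention change doesn't look like thousands of brand-new devices:

```python
import re

# Illustrative only: canonicalise device names before they reach the
# learning engine, so a BAU naming-convention change upstream doesn't
# force a full re-baseline of everything learnt against the old names.

RENAME_MAP = {  # hypothetical old-prefix -> new-prefix convention change
    "MEL-CORE-": "AU-MEL-C-",
}

def canonical_device(name: str) -> str:
    for old, new in RENAME_MAP.items():
        if name.startswith(new):
            name = old + name[len(new):]   # map new convention back to the baseline form
    return re.sub(r"\s+", "", name).upper()

# Both of these now resolve to the same key the baseline was learnt against:
print(canonical_device("MEL-CORE-R01"))    # MEL-CORE-R01
print(canonical_device("AU-MEL-C-R01"))    # MEL-CORE-R01
```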

That got me wondering whether DevOps (or any other high-change environment) might actually hinder our attempts to get machine-led assurance optimisation. But more to the point, does constant change (at all levels of a telco business) hold us back from reaching our aim of closed-loop / zero-touch assurance?

Just like the proverbial self-driving car, will we always need someone at the wheel of our OSS just in case a situation arises that the machines haven't seen before and/or can't handle? How far into the future will it be before we have enough trust to take our hands off the OSS wheel and let the machines drive closed-loop processes without observation by us?

Your OSS Justice League

Is it just me or has there been a proliferation of superhero movies coming out at cinemas lately? Not only that, but movies where teams of superheroes link up to defeat the baddies (eg Deadpool 2, Justice League, etc)?

The thing that strikes me as interesting is that there’s rarely an overlap of super-powers within the team. They all have their different strengths and points of difference. The sum of the parts… blah, blah, blah.

Anyway, I’m curious whether you’ve noticed the same thing as me on OSS projects, that when there are multiple team members with significant skill / experience overlap, the project can bog down in indecision? I’ve noticed this particularly when there are many architects, often super-talented ones, on a project. Instead of getting the benefit of collaboration of great minds, we can end up with too many possibilities (and possibly egos) to work through and the project stagnates.

If you were to hand-pick your all-star cast for your OSS Justice League, just like in the movies, you’d look for a team of differentiated, but hopefully complementary, super-heroes I assume. But I’m diverting away from my main point here.

Each project, just like each formidable foe in the movies, is slightly different and needs slightly different super-powers to tackle it. When selecting a cast for a movie, directors have a global pool to choose from. When selecting a cast for an OSS project, directors have traditionally chosen from their own organisation, possibly with some outside hires to fill the long-term gaps.

With the increasing availability of freelance resources (ie people who aren’t intrinsically tied to carriers or vendors), the proposition of selecting a purpose-built project team of OSS super-heroes is actually beginning to become more possible. I’m wondering how much the gig economy will change the traditional OSS project team model in coming years.

I’d love to hear your thoughts and experiences on this.

Designing an Operational Domain Manager (ODM)

A couple of weeks ago, Telstra and the TM Forum held an event in Melbourne on OSS for next gen architectures.

The diagram below comes from a presentation by Corey Clinger. It describes Telstra’s Operational Domain Manager (ODM) model that is a key component of their Network as a Service (NaaS) framework. Notice the API stubs across the top of the ODM? Corey went on to describe the TM Forum Open API model that Telstra is building upon.
Operational Domain Manager (ODM)

In a following session, Raman Balla shared a perspective that differs from that of many existing OSS. The service owner (and service consumer) must know all aspects of a given service (including all dimensions, lifecycle, etc) via a common repository / catalog, and it needs to be attribute-based. Raman also indicated that his aim in architecting NaaS is to standardise not only the service, but the entire experience around the service.

In the world of NaaS, operators can no longer just focus separately on assurance or fulfillment or inventory / capacity, etc. As per DevOps, operators are accountable for everything.
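As a hedged sketch of what those API stubs can look like to a consumer – the host, path version and payload below are assumptions for illustration rather than Telstra's actual interface – a service order against a TM Forum-style Service Ordering endpoint might be as simple as:

```python
import requests

# Illustrative only: a TM Forum Open API-style service order request.
# The base URL, version and payload fields are assumptions; check the
# relevant TMF specification and the operator's conformance profile.

BASE_URL = "https://naas.example.com/tmf-api/serviceOrdering/v4"  # hypothetical host

order = {
    "externalId": "PO-12345",
    "orderItem": [{
        "action": "add",
        "service": {
            "serviceSpecification": {"id": "evpn-l2-p2p"},  # attribute-based spec from the catalog
            "serviceCharacteristic": [
                {"name": "bandwidth", "value": "1000"},
                {"name": "aEnd", "value": "SYD-DC1"},
            ],
        },
    }],
}

resp = requests.post(f"{BASE_URL}/serviceOrder", json=order, timeout=30)
resp.raise_for_status()
print(resp.json().get("id"), resp.json().get("state"))
```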

An alternate way of slicing OSS (part 2)

Last week we talked about an alternate way of slicing OSS projects. Today, we’ll look a little deeper and include some diagrams.

The traditional (aka waterfall) approach to delivering an OSS project sees one big-bang delivery of business value at the end of the implementation.
OSS project delivery via waterfall

The yellow arrows indicate the sequential nature of this style of delivery. The implications include:

  1. If the project runs out of funds before the project finishes, no (or negligible) value is delivered
  2. If there’s no modularity of delivery then the project team must stay the course of the original project plan. There’s no room for prioritising or dropping or including delivery modules. Project plans are rarely perfect at first after all
  3. Any changes in project plan tend to have knock-on effects into the rest of the delivery
  4. There is only one true delivery of value, even though interim milestones are relied upon to demonstrate momentum for the project… a key to change management and team morale
  5. Large deliverables represent the proverbial "pig in the python" – they overload one segment of the project delivery team and under-utilise the rest at each stage.  This isn't great for project flow or team utilisation

The alternate approach seeks to deliver in multiple phases by business value, not artefacts, as shown in the sample model below:
OSS project delivery via Agile

Phased enhancements following a base platform build (eg Sandpit and/or Single-site above) could include the following, where each provides a tangible outcome / benefit for the business, thus maintaining a perception of momentum (assurance use-cases cited):

  • Additional event collection (ie additional collectors / probes / mediation-devices can be added or configured)
  • Additional filters / sorting of events
  • Event prioritisation mapping / presentation
  • Event correlation
  • Fault suppression
  • Fault escalation
  • Alarm augmentation
  • Alarm thresholding
  • Root-cause analysis (intra, then inter-domain)
  • Other configurations such as latching, auto-acknowledgement, visualisation parameters, etc
  • Heart-beat function (ie flagging devices that have been unreachable for a user-defined period) – see the sketch after this list
  • Knowledge base (ie developing a database of activities to respond to certain events)
  • Interfacing with other systems (eg trouble-ticket, work-force management, inventory, etc)
  • Setup of roles/groups
  • Setup of skills-based routing
  • Setup of reporting
  • Setup of notifications (eg email, SMS, etc)
  • Naming convention refinements
  • etc, etc
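To make one of those phased items concrete, here's a minimal sketch of the heart-beat function mentioned above (the timeout value and device timestamps are invented):

```python
import time

# Minimal heart-beat sketch: flag devices that haven't been heard from within
# a user-defined period. The timeout and last_seen data are illustrative.

HEARTBEAT_TIMEOUT_SECS = 300   # user-defined "unreachable" threshold

last_seen = {                  # device -> last event/poll timestamp (epoch seconds)
    "MEL-CORE-R01": time.time() - 60,
    "SYD-EDGE-S07": time.time() - 900,
}

def unreachable_devices(last_seen, timeout=HEARTBEAT_TIMEOUT_SECS, now=None):
    now = now or time.time()
    return [dev for dev, ts in last_seen.items() if now - ts > timeout]

print(unreachable_devices(last_seen))   # ['SYD-EDGE-S07']
```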

The latter is a more Agile-style breakdown of work, but doesn’t need to be delivered using Agile methodology.

Of course there are pros and cons of each approach. I’d love to hear your thoughts and experiences with different OSS delivery approaches.

The OSS Ferrari analogy

A friend and colleague has recently been talking about a Ferrari analogy on a security project we’ve been contributing to.

The end customers have decided they want a Ferrari solution, a shiny, super-specified new toy (or in this case toys!). There's just one problem though. The customer has a general understanding of what it is to drive, but doesn't have driving experience or a driver's license yet (ie they have a general understanding of what they want but haven't described what they plan to do with the shiny toys operationally once the keys are handed over).

To take a step further back, since the project hasn’t articulated exactly where the customers want to go with the solution, we’re asking whether a Ferrari is even the right type of vehicle to take them there. As amazing as Ferraris are, might it actually make more sense to buy a 4WD vehicle?

As indicated in yesterday’s post, sometimes the requirements gathering process identifies the goal-based expectations (ie the business requirements – where the customer wants to go), but can often just identify a set of product features (ie the functional requirements such as a turbo-charged V8 engine, mid-mount engine, flappy-paddle gear change, etc, etc). The latter leads to buying a Ferrari. The former is more likely to lead to buying the vehicle best-suited to getting to the desired destination.

The OSS Ferrari sounds nice, but…

Optimisation Support Systems

We’ve heard of OSS being an acronym for operational support systems, operations support systems, even open source software. I have a new one for you today – Optimisation Support Systems – that exists for no purpose other than to drive a mindset shift.

"I think we have to transition from "expectations" in a hype sense to "expectations" in a goal sense. NFV is like any technology; it depends on a business case for what it proposes to do. There's a lot wrong with living up to hype (like, it's impossible), but living up to the goals set for a technology is never unrealistic. Much of the hype surrounding NFV was never linked to any real business case, any specific goal of the NFV ISG."
Tom Nolle
in his blog here.

This is a really profound observation (and an entire blog) from Tom. Our technology, OSS included, tends to be surrounded by "hyped" expectations – partly from our own optimistic desires, partly from vendor sales pitches. It's far easier to build our expectations from hype than to actually understand and specify the goals that really matter. Goals that are end-to-end in manner and preferably quantifiable.

When embarking on a technology-led transformation, our aim is to “make things better,” obviously. A list of hundreds of functional requirements might help. However, having an up-front, clear understanding of the small number of use cases you’re optimising for tends to define much clearer goal-driven expectations.

Security and privacy as an OSS afterthought?

I often talk about OSS being an afterthought for network teams. I find that they'll often design the network before thinking about how they'll operationalise it with an OSS solution. That applies both to network products (eg developing a new device and only thinking about building the EMS later) and to building the networks themselves.

It can be a bit frustrating because we feel we can give better solutions if we’re in the discussion from the outset. As OSS people, I’m sure you’ll back me up on this one. But we can’t go getting all high and mighty just yet. We might just be doing the same thing… but to security, privacy and analytics teams.

In terms of security, we'll always consider security-based requirements (usually around application security, access management, etc) in our vendor / product selections. We'll also include Data Control Network (DCN) designs and security appliance (eg firewalls, IPS, etc) effort in our implementation plans. Maybe we'll even prescribe security zone plans for our OSS. But security is more than that (check out this post for example). We often overlook the end-to-end aspects such as central authentication, API hardening, server / device patching, data sovereignty, etc, and they then get picked up by the relevant experts well into the project implementation.

Another one is privacy. Regulations like GDPR and the Facebook trials show us the growing importance of data privacy. I have to admit that historically, I’ve been guilty on this one, figuring that the more data sets I could stitch together, the greater the potential for unlocking amazing insights. Just one problem with that model – the more data sets that are stitched together, the more likely that privacy issues arise.

We increasingly have to figure out ways to weave security, privacy and analytics into our OSS planning up-front and not just think of them as overlays that can be developed after all of our key decisions have been made.

New OSS functionality or speed and scale?

We all know that revenue per bit (of data transferred across comms networks) is trending lower. How could we not? It's posited as one of the reasons for declining profitability of the industry. The challenge for telcos is how to remain cost-viable in an environment of low revenue per bit.

I’m sure there are differentiated comms products out there in the global market. However, for the many products that aren’t differentiated, there’s a risk of commoditisation. Customers of our OSS are increasingly moving into a paradigm of commoditisation, which in turn impacts the form our OSS must mould themselves to.

The OSS we deliver can either be the bane or the saviour. They can be a differentiator where otherwise there is none. For example, getting each customer’s order ready for service (RFS) faster than competitors. Or by processing orders at scale, yet at a lower cost-base through efficiencies / repeatability such as streamlined products, processes and automations.

OSS exist to improve efficiency at scale of course, but I wonder whether we lose sight of that sometimes? I’ve noticed that we have a tendency to focus on functionality (ie delivering new features) rather than scale.

This isn’t just the OSS vendors or implementation teams either by the way. It’s often apparent in customer requirements too. If you’ve been lucky enough to be involved with any OSS procurement processes, which side of the continuum was the focus – on introducing a raft of features, or narrowing the field of view down to doing the few really important things at scale and speed?

Expanding your bag of OSS tricks

Let me ask you a question – when you’ve expanded your bag of tricks that help you to manage your OSS, where have they typically originated?

By reading? By doing? By asking? Through mentoring? Via training courses?
Relating to technical? People? Process? Product?
Operations? Network? Hardware? Software?
Design? Procure? Implement / delivery? Test? Deploy?
By retrospective thinking? Creative thinking? Refinement thinking?
Other?

If you were to highlight the questions above that are most relevant to the development of your bag of tricks, how much coverage does your pattern show?

There are so many facets to our OSS (ie. tentacles on the OctopOSS) aren’t there? We have to have a large bag of tricks. Not only that, we need to be constantly adding new tricks too right?

I tend to find that our typical approaches to OSS knowledge transfer cover only a small subset (think about discussion topics at OSS conferences that tend to just focus on the technical / architectural)… yet don’t align with how we (or maybe just I) have developed capabilities in the past.

The question then becomes, how do we facilitate the broader learnings required to make our OSS great? To introduce learning opportunities for ourselves and our teams across vaguely related fields such as project management, change management, user interface design, process / workflows, creative thinking, etc, etc.

An alternate way of slicing OSS projects

One of the biggest challenges of big bang OSS project implementations is that all of the business value (ie the OSS and its data, workflows, integrations, etc) gets delivered at once, normally at the end of a lengthy exercise.

Ok, ok, so the delivery of value is not a challenge, it’s the implications of a big delivery of value that’s the challenge – implications that include:

  1. If the project runs out of funds before the project finishes, no value is delivered
  2. If there’s no modularity of delivery then the project team must stay the course of the original project plan. There’s no room for prioritising or dropping or including delivery modules. Project plans are rarely perfect at first after all
  3. Any changes in project plan tend to have knock-on effects into the rest of the delivery due to the sequential nature of typical project plans
  4. Any delivery of value represents a milestone, which in turn demonstrates momentum for the project… a key change management and team morale strategy
  5. Large deliverables represent the proverbial “pig in the python” – only one segment of the python (ie segment of the project delivery team) is engaged (hyper-engaged) whilst the other segments remain under-utilised.  This isn’t great for project flow or utilisation

When tasked with designing a project schedule, I’ve noticed that many vendors tend to follow the typical waterfall delivery and corresponding payment milestones (eg. design, then build, then test, then deploy, then hand over). The downside of this approach is that the business value (for the customer) is delivered at the end of the handover (ie big bang). There’s no business value in delivering design artefacts for example – the customer can’t use them to perform operational tasks.

The model I prefer sees incremental business value being delivered such as:

  • Proof of Concept (PoC) build
  • Sandpit build
  • Out of the box (OOTB) production build (ie. no customisations)
  • End-to-end use case #1 delivery (ie. design, build*, test, deploy, handover)
  • E2E use case #2 delivery
  • E2E use case #n delivery

where build* includes incremental configuration, customisation, integration, data migration, etc.