“Life is a series of steps. Things are done gradually. Once in a while there is a giant step, but most of the time we are taking small, seemingly insignificant steps.”
So, what is it going to be?
Is your OctopOSS going to be introduced in phases or is it going to be introduced in one big bang?
The phased approach is the most common for a large number of reasons, not least because the only way to eat a giant OctopOSS is by taking many small bites. Other less clichéd reasons are:
- An OSS causes major upheaval within an organisation, so structured change management is one of the most pivotal elements of an OctopOSS implementation. Building momentum by delivering multiple successful phases is one way of limiting an organisation’s resistance to change (inertia)
- A step-by-step roll-out allows learnings to be built into each subsequent phase
- A phased approach over time allows the organisation to refine a solution that meets current requirements rather than the requirements that were defined at the start of the project (often many months prior). It should be noted that these in-flight refinements should be carefully managed so that the project doesn’t get bogged down by time and budget slippages. Markets, technologies and people are changing ever-faster, so the OSS will need to adapt in flight on any major OctopOSS project
- Smaller time increments between releases leave less room for change in requirements, personnel, opinions, etc.
- There are often organisational imperatives that mean phased roll-outs are essential. These could include budget allocations, alignment with other projects such as network initiatives, organisational re-structures, etc.
The big bang approach can work in some instances, however:
- If you have the resources to build a system in parallel to your existing operational systems, it might make sense to build, test and train with the new system in a development environment. Chances are that you’ll still have to cut over from old to new in a phased approach (e.g. creating the dev environment, migration of data, integrating with directory services, network interfaces, pointing services to new addresses, etc)
- You’re only making a like-for-like replacement of a tool-set rather than the whole suite
- The tool-set is a new one that requires minimal configuration, integration or customisation
Even then, ongoing management of the OctopOSS via new releases, technologies, etc means that it will be a phased evolution, even if certain sub-projects might take on the appearance of a big-bang implementation.
The sequence of phases is often driven by the organisation’s current imperatives. Notwithstanding these constraints, when tackling a new OctopOSS, I try to establish phases that have the most visible impact with the least initial effort, gaining momentum for the project. This usually means introducing basic Alarm/Fault Management and then Performance Management functionality, because the flow of information is usually only required from the network up and can be initiated without the need for complicated data mappings (although those often come later to enable enhanced functionality such as alarm correlation, suppression, etc).
Another early win could be the establishment of dashboard functionality that draws upon existing data feeds.
The more complicated phases would include inventory, auto-discovery, flow-through provisioning, etc because there can be a two-way flow of information (i.e. interfaces are more complicated), data sets and mappings are more complex, processes are more complicated and naming conventions need to have been developed and refined.
Many of the statements above also hold true for data migration for your OctopOSS. Are you going to introduce data in phases or as a big-bang? On each OSS that I’ve worked on, data is referential and has to be loaded in a specific sequence to maintain referential integrity within the database.
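That load sequence can be derived mechanically: treat each table’s foreign-key references as dependencies, then load parent tables before the tables that reference them. A minimal sketch of this idea (the table names and dependencies here are hypothetical, not from any particular OSS product):

```python
# Derive a referentially-safe load order via topological sort.
# Table names and foreign-key relationships are illustrative only.
from graphlib import TopologicalSorter

# Map each table to the set of tables it references via foreign keys;
# a referenced (parent) table must be loaded before its children.
dependencies = {
    "locations": set(),
    "equipment": {"locations"},
    "ports": {"equipment"},
    "circuits": {"ports", "locations"},
    "services": {"circuits"},
}

# static_order() yields each table only after all of its parents,
# and raises CycleError if the foreign keys form a loop.
load_order = list(TopologicalSorter(dependencies).static_order())
print(load_order)
```

Sequencing the migration this way also makes the phased question above concrete: each phase’s data set can be loaded independently, as long as its parent tables arrived in an earlier phase.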