My “best” OSS build was my first. Unfortunately the client shut it down after just over 2 years. Why? We’ll get to that. What did I learn? We’ll get to that too.
I shouldn’t say “my” OSS or that “I built” it, because it was a massive effort from a very special team and I’m not trying to take too much credit for it. But it was a pretty revolutionary system for its time and a massive influence on my career thereafter. For a start, I probably wouldn’t have gone on to found Passionate About OSS!!
But first we have to go a long way back in time. We’re talking about the era between 2000 and 2002. We’re talking about Taiwan’s first fixed-line carrier after deregulation (at a time when almost all of the country’s telco talent stayed with the incumbent). We’re talking about the times when OSS/BSS were hitting their precocious “teenage” years.
I had a slight chuckle during a recent vendor evaluation. One of the big-name vendors said, “We’ve just finished developing a solution for cross-domain network discovery and we think we might be the first to have done this. This is a really big deal and none of our competitors can really achieve this.”
I chuckled because it certainly wasn’t solved for the first time circa 2022. All the way back in 2001 we did a few interesting things on that OSS:
- We developed a discovery engine that could discover and reconcile “nodal” objects (ie equipment, cards, ports, etc)
- We added workflows that were able to incorporate human acknowledgement of discovered objects to determine what was populated into the OSS (eg to avoid port-flaps or other anomalies)
- We then added automation rules to determine what could be auto-accepted and what had to be human-approved (there’s a simplified sketch of this just after the list)
- We then extended it to include “connectivity” objects (ie links between equipment) within a bunch of separate domains (eg SDH, DSL, DWDM, LMDS, etc)
- Finally, and this was the really tricky nut to crack, we managed to stitch multiple domains together
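To make that accept-vs-approve idea a little more concrete, here’s a minimal sketch (in Python, which is definitely not what we used back then) of the kind of per-object decision the discovery/reconciliation engine had to make. All of the names, thresholds and rules below are hypothetical illustrations for this post, not the actual logic:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    AUTO_ACCEPT = auto()      # safe to write straight into inventory
    NEEDS_APPROVAL = auto()   # queue for a human acknowledgement task
    IGNORE = auto()           # known noise (e.g. a flapping port)


@dataclass
class DiscoveredObject:
    object_type: str           # "equipment", "card", "port", "link", ...
    key: str                   # unique key, e.g. "NE-01/slot-3/port-7"
    attributes: dict
    change_count_24h: int = 0  # how often discovery has seen this object change


def reconcile(discovered: DiscoveredObject, inventory: dict) -> Action:
    """Very simplified accept/approve decision for one discovered object."""
    existing = inventory.get(discovered.key)

    # A port that keeps appearing/disappearing is probably flapping - ignore it
    if discovered.object_type == "port" and discovered.change_count_24h > 5:
        return Action.IGNORE

    # Brand-new nodal objects (equipment/cards/ports) are low risk to auto-add
    if existing is None and discovered.object_type in {"equipment", "card", "port"}:
        return Action.AUTO_ACCEPT

    # Anything touching connectivity, or conflicting with existing records,
    # goes to a human acknowledgement workflow before the OSS is updated
    if discovered.object_type == "link" or (
        existing is not None and existing != discovered.attributes
    ):
        return Action.NEEDS_APPROVAL

    return Action.AUTO_ACCEPT
```

The real rules were far richer than this, of course, but the principle of routing each discovered change to “auto-accept”, “needs a human” or “ignore as noise” is the same.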
We also developed comprehensive, automated flow-through provisioning: Service Order Management -> Product Catalogue -> Orchestration (with decision-tree based orchestration plans) -> Mediation -> pushing commands into a variety of different network devices.
This wasn’t just for simple orders either. The orchestration plans performed automated and manual tasks that stitched together:
- Resource checks and reservations
- Customer device configuration
- Physical network build (including field workforce coordination)
- Access network configuration
- DSLAM provisioning
- Voice switch provisioning
- ATM switch provisioning
- Broadband gateway provisioning
- IP network configuration
- Additional infrastructure build if the bearer circuits weren’t already in place or didn’t have enough spare capacity
- As well as all the other stuff like updating service orders, coordinating / scheduling / communicating RFS dates, triggering billing, etc, etc
Operators at our carrier client simply had to accept a customer order and push a button and our OSS coordinated the rest (with the help of field workers of course).
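For anyone who hasn’t worked with decision-tree style orchestration plans, here’s a heavily simplified sketch of the concept. It’s purely illustrative – the task names, the condition mechanism and the Python representation are inventions for this post, not the actual system – but it shows how a single plan can mix automated and manual tasks and branch on the results of earlier steps (eg only building bearer infrastructure when the capacity check fails):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    name: str
    automated: bool                   # False => dispatched to a human / field team
    execute: Callable[[dict], dict]   # takes and returns the order context
    condition: Callable[[dict], bool] = lambda ctx: True  # decision-tree branch guard


def run_plan(tasks: list[Task], ctx: dict) -> dict:
    """Walk the plan in order, skipping branches whose condition isn't met."""
    for task in tasks:
        if not task.condition(ctx):
            continue
        if not task.automated:
            # in a real stack this would raise a manual/workforce activity and wait for it
            print(f"Dispatching manual task: {task.name}")
        ctx = task.execute(ctx)
    return ctx


# A toy DSL-order plan: the capacity check decides whether the extra build branch runs
plan = [
    Task("Check & reserve resources", True,
         lambda c: {**c, "capacity_ok": c["spare_ports"] > 0}),
    Task("Build bearer infrastructure", False,
         lambda c: {**c, "capacity_ok": True},
         condition=lambda c: not c["capacity_ok"]),
    Task("Provision DSLAM port", True, lambda c: {**c, "dslam_done": True}),
    Task("Configure ATM PVC", True, lambda c: {**c, "pvc_done": True}),
    Task("Activate billing", True, lambda c: {**c, "billed": True}),
]

print(run_plan(plan, {"spare_ports": 0}))
```

The real plans were far richer (rollback, jeopardy management, parallel tasks and so on), but the branching idea is the same: one button-push, and the plan decides which automated and manual activities actually need to run for that order.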
[As an aside, shortly after we proved out this whole auto-provisioning capability, the client’s network guys decided to make changes in the network without telling us (I think it was a move from using VPIs to VCIs in the ATM network) and the whole thing broke. We had to rewrite a lot of the logic because we needed to re-model the foundational data in the network inventory, which meant auto-provisioning was out of action for a couple of weeks. That wasn’t fun!!
However, it was a great learning exercise. When you have a complex automation, the more specificity you design into the data model and the more rigid your automation rules, the more likely it is to break when even the smallest change happens anywhere in the overall environment. Better to keep things as simple and flexible as possible]
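To illustrate that principle with a contrived example (not the real code – the functions, field names and command strings below are hypothetical), compare a provisioning step that hard-codes the addressing convention against one that pulls the circuit-identifier scheme from data:

```python
# Rigid: the provisioning logic "knows" the ATM addressing scheme, so a network
# change (e.g. switching from VPI-based to VCI-based assignment) breaks the code
# and the data model at the same time.
def build_atm_command_rigid(order: dict) -> str:
    vpi = order["vpi"]                      # assumes VPI is always the key field
    return f"create pvc vpi {vpi} vci 32"   # hard-coded VCI convention


# Flexible: the identifier scheme lives in data, not in code, so a change in the
# network becomes a configuration/data update rather than a logic rewrite.
def build_atm_command_flexible(order: dict, template: str) -> str:
    return template.format(**order["circuit_id_fields"])


# The same order, expressed either way:
order = {"vpi": 8, "circuit_id_fields": {"vpi": 8, "vci": 32}}
print(build_atm_command_rigid(order))
print(build_atm_command_flexible(order, "create pvc vpi {vpi} vci {vci}"))
```

The rigid version works perfectly right up until the network team changes its conventions; the flexible version absorbs the same change as a data update.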
But we covered so much more ground than that. We did inventory, we did fulfilment, we did real-time assurance. We also did some really cool feedback loop stuff with the tools at hand, including using performance threshold breaches to drive self-correcting traffic engineering (a rudimentary multi-domain SON). It was such a fantastic learning opportunity about every facet of an OSS/BSS stack.
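As a rough illustration of what I mean by a feedback loop (again, not the actual implementation – the function names, threshold and polling interval are invented for this sketch), the pattern was essentially: poll performance data, detect a threshold breach, trigger a corrective traffic-engineering workflow:

```python
import random
import time

UTILISATION_THRESHOLD = 0.85   # breach level that triggers re-engineering


def read_link_utilisation(link: str) -> float:
    """Stand-in for a performance-management feed; returns current utilisation 0..1."""
    return random.uniform(0.5, 1.0)


def reroute_traffic(link: str) -> None:
    """Stand-in for the corrective action the OSS orchestrates (e.g. shifting
    circuits onto an alternate bearer and updating inventory to match)."""
    print(f"Threshold breach on {link}: triggering traffic-engineering workflow")


def assurance_loop(links: list[str], cycles: int = 3) -> None:
    for _ in range(cycles):
        for link in links:
            if read_link_utilisation(link) > UTILISATION_THRESHOLD:
                reroute_traffic(link)
        time.sleep(1)   # the real loop polled on much longer intervals


assurance_loop(["SDH-ring-1", "DWDM-span-7"])
```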
I’m not claiming that any of this was world-first, but it was such an enormous step up in sophistication from the EMS and network test kit that I’d played with prior to this first OSS project. No wonder I was totally hooked! Hooked on the systems, but also on the rich multi-disciplinary data sets that I could play with and interrogate! Heady stuff for someone as curious as me!
We finished the implementation phase of the project in 2002 and handed over to the client’s operations team. I then moved on to the next OSS build project in the next country, then the next, then the next. But much later I heard the sad news that, as early as 2004/05, the client had turned off the original OSS we’d only just handed over in 2002. I never heard the full story about why the client decided to stop using it. However, it was another big learning opportunity – even though it was a technological marvel, something I’m still immensely proud of to this day, it clearly didn’t meet all the necessary layers of human-factor requirements. IMHO, it was a technical masterpiece (especially for its time), but it proved to be unusable for the client organisation.
I spend a lot more time thinking about organisational change management these days and how to ensure the entire client organisation is ready for when they’re handed the keys to their shiny new OSS by the project implementation team. It’s certainly not just a 2-week training course prior to UAT and/or handover. There’s so much more to it than that, starting right from the earliest conceptualisation phases of a transformation project.
But enough about me. I’d love to hear stories about your OSS journey – your successes, your failures, your learnings, or even where personal successes proved to be corporate failures like the example I described above.