OSS re-write – Skills shortage

This is the sixth in the Complete OSS re-write series of posts and relates to "We don’t have enough skilled people". This series is designed to pose ideas on how the OSS industry could take a Control-Alt-Delete approach to all aspects of delivering operational support, coinciding with the inflection point underway in our industry, driven by technologies such as network virtualisation (eg SDN/NFV) and sensor networks (eg the Internet of Things). This series also draws inspiration from the re-write approaches that are disrupting industries such as taxis, music, hotels and many others.

We all know the challenges of finding quality resources for our OSS, and acknowledge that there is always a significant ramp-up period when bringing new resources on board.

So how can we overcome this problem by taking a new look at it?

There are two parts to look at:

  1. The implementation resources (ie the vendors or integrators)
  2. The operators (ie the CSPs)

The operators need to become familiar with the new tools that the project delivers and then be able to step their way through all of the variants in a CSP’s workflows using those tools.
Rather than just sending new hires on a short OSS product training course, we need to equip them better by giving them in-flight decision-guidance tools. These tools already exist but, unfortunately, are not yet in widespread use. I believe they should be a support tool on most OSS implementations.

The implementers have a much harder task. It takes time, usually many years, to develop the linchpins / tripods that add the most value during any implementation.

Certainly a reduction in product and process complexity will help to get new starters up to speed faster, but there are still so many other aspects of the industry to learn. The increased standardisation and reduced customisation that we’ve spoken of earlier in the series would also help, but is still not the whole answer. Decision Support Systems are probably of less assistance to implementers than they are to operators, because there tends to be less repeatability in implementer tasks.

Do you have any revolutionary ideas to help get your new implementers fast-tracked to efficiency?

If this article was helpful, subscribe to the Passionate About OSS Blog to get each new post sent directly to your inbox. 100% free of charge and free of spam.


2 Responses

  1. In my previous OSS job we tried to deskill a very complex task – that of analyzing data flows across networks under various conditions and preparing optimized or more reliable routing by, for example, changing OSPF link costs and deploying MPLS-TE.

    If we assume all OSS does something useful, faster and more automated-er, (as the RoI for OSS is usually around automating highly skilled, slow tasks) then there should only be a skills gap in deployment and configuration of the OSS.

    In my case, we used a lot of A.I. which not only did the work better/faster but was also designed to be self-training as much as possible. A.I. combined with baked-in protocol rules meant there was little or no need to write code to model the network/business rules, compared with traditional inventory-centric solutions.

    Where further custom automation was needed we used JavaScript to orchestrate the OSS. JS was chosen on the basis that it is fairly widely known and much simpler to use than the full-fat Java and J2EE common to many 1990s/2000s-era OSS.

    We used light-weight embedded databases instead of the more common Oracle or MySQL – they required zero maintenance.

    Did that all help? A bit, but it doesn’t radically change the cost and effort of deploying and maintaining an OSS. You still need an uncommon set of skills, including network knowledge, data modelling (c.f. integration tax) and coding (even if JS is a lower barrier to entry than J2EE).

  2. That’s a great success story James! Sounds like a lot of effort was put in by the implementers to make it easier for the operators.

    The one thing I’ve found in those situations is that it’s great until something changes (eg network model, protocols used, workflow refinements, etc) and then it has to be heavily re-engineered.

    Did the AI and other clever techniques overcome that problem on your solution or do you think it had the potential to be a problem in the future?
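To make the OSPF link-cost tuning mentioned in the first comment concrete, here is a minimal sketch in JavaScript (the commenter’s orchestration language of choice). The topology, node names and costs are invented for illustration; OSPF simply selects lowest-total-cost paths (shortest path first, ie Dijkstra), so raising one link’s cost shifts traffic onto another path:

```javascript
// Plain Dijkstra over bidirectional weighted links, standing in for
// OSPF's shortest-path-first calculation. Topology is hypothetical.
function shortestPath(links, src, dst) {
  // links: { "A-B": cost, ... }, each key treated as a bidirectional link
  const adj = {};
  for (const [key, cost] of Object.entries(links)) {
    const [a, b] = key.split("-");
    (adj[a] = adj[a] || []).push([b, cost]);
    (adj[b] = adj[b] || []).push([a, cost]);
  }
  const dist = { [src]: 0 };
  const prev = {};
  const visited = new Set();
  for (;;) {
    // Pick the unvisited node with the smallest known distance.
    let u = null;
    for (const n of Object.keys(dist)) {
      if (!visited.has(n) && (u === null || dist[n] < dist[u])) u = n;
    }
    if (u === null) break;
    visited.add(u);
    for (const [v, c] of adj[u] || []) {
      if (!(v in dist) || dist[u] + c < dist[v]) {
        dist[v] = dist[u] + c;
        prev[v] = u;
      }
    }
  }
  const path = [dst];
  while (path[0] !== src) path.unshift(prev[path[0]]);
  return { path, cost: dist[dst] };
}

// Baseline: A-B-D is cheapest (total cost 20).
const links = { "A-B": 10, "B-D": 10, "A-C": 15, "C-D": 15 };
console.log(shortestPath(links, "A", "D").path.join("-")); // A-B-D

// Raise the A-B cost (eg to drain that link): traffic shifts to A-C-D.
links["A-B"] = 50;
console.log(shortestPath(links, "A", "D").path.join("-")); // A-C-D
```

The point of the comment stands out here: the routing maths is the easy part; knowing *which* cost to change, and what the side effects are across the rest of the network, is where the scarce skill lies.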
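The JS orchestration the first comment describes might look something like the sketch below: a generic step runner that executes a sequence of OSS provisioning actions and undoes completed steps if a later one fails. The step names and actions are hypothetical, not any real product’s API:

```javascript
// Hedged sketch of JS-based OSS orchestration: run a sequence of steps,
// rolling back completed steps (in reverse order) on failure.
async function runWorkflow(steps) {
  const done = [];
  for (const step of steps) {
    try {
      await step.run();
      done.push(step);
    } catch (err) {
      // Undo whatever completed, newest first, then surface the error.
      for (const s of done.reverse()) {
        if (s.undo) await s.undo();
      }
      throw err;
    }
  }
  return done.map((s) => s.name);
}

// Example: the second (invented) step fails, so the first step's undo runs.
const log = [];
runWorkflow([
  { name: "reserve-port",
    run: async () => log.push("reserved"),
    undo: async () => log.push("released") },
  { name: "activate-service",
    run: async () => { throw new Error("NE unreachable"); } },
]).catch(() => console.log(log.join(","))); // reserved,released
```

A runner like this is exactly the kind of glue a new implementer can pick up quickly in JS, which supports the comment’s point about lowering the barrier to entry relative to J2EE.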
