There’s an OSS Security Elephant in the Room!

The pandemic has been beneficial for the telco world in one way: it has shown anyone who wasn’t already aware just how important telecommunications is to our modern way of life. Not just our ability to communicate with others, but our economy, the services we use, the products we buy and, even more fundamentally, our safety.

Working in the telco industry, as I’m sure you do, you’ll also be well aware of all the rhetoric and politics around Chinese-manufactured equipment (eg Huawei) being used in the networks of global telco providers. The theory is that having telecommunications infrastructure supplied by a third party, particularly one aligned with nations that aren’t Western allies, puts national security interests at risk.

In his article, “5G: The outsourced elephant in the room,” Bert Hubert provides a brilliant look into the realities of telco network security that go far beyond just equipment supply. He breaks the national security threat into three key elements:

  • Spying (using compromised telco infrastructure to conduct espionage)
  • Availability (compromising and/or manipulating telco infrastructure so that it’s unable to work reliably)
  • Autonomy (being unable to operate a network or to recover from outages or compromises)

The first two are well understood and often discussed. The third is the real elephant in the room – the elephant that OSS/BSS potentially have a huge influence over. But we’ll get to that shortly.

Before we do, let’s summarise Bert’s analysis of security models. For 5G, he states that there’s an assumption that employees at national carriers design networks, buy equipment, install it, commission it and then hand it over to other employees to monitor and manage it. Oh, and to provide other specialised activities like lawful intercept, where a local legal system provides warrants to monitor the digital communications of (potentially) nefarious actors. Government bodies and taxpayers all assume the telcos have experienced staff with the expertise to provide all these services.

However, the reality is far different. Service providers have been outsourcing many of these functions for decades. New equipment is designed, deployed, configured, maintained and sometimes even financed by vendors for many global telcos. As Bert reinforces, “Just to let that sink in, Huawei (and their close partners) already run and directly operate the mobile telecommunication infrastructure for over 100 million European subscribers.”

But let’s be very clear here. It’s not just Huawei and it’s not just Chinese manufacturers. Nor is it just mobile infrastructure. It’s also cloud providers and fixed-line networks. It’s also American manufacturers. It’s also the integrators that pull these networks and systems together. 

Bert also points out that CDRs (Call Detail Records) have been outsourced for decades. There’s a strong trend for billing providers to supply their tools via SaaS delivery models. And what are CDRs? Only metadata. Metadata that describes a subscriber’s activities and whereabouts. Data that’s powerful enough to be used to assist with criminal investigations (via lawful intercept). But where has CDR / bill processing been outsourced to? China and Israel mostly.

Now, let’s take a closer look at the autonomy factor, the real elephant in the room. Many design and operations activities have been offshored to jurisdictions where staff are more affordable. The telcos usually put clean-room facilities in place to ensure a level of security is applied to any data handled off-shore. They also put in place contractual protection mechanisms.

Those are valid mitigations, but still not the key point here. As Bert brilliantly summarises, “any worries about [offshore actors] being able to disrupt our communications through backdoors ignore the fact that all they’d need to do to disrupt our communications… is to stop maintaining our networks for us!”

There might be an implicit trust in “Western” manufacturers or integrators (eg Ericsson, Nokia, IBM) designing, building and maintaining networks. However, these organisations also outsource / insource labour to international destinations where labour costs are cheaper.

If the R&D, design, configuration and operations roles are all outsourced, where do the telcos find the local resources with requisite skills to keep the network up in times when force majeure (eg war, epidemic, crime, strikes, etc) interrupts a remote workforce? How do local resources develop the required skills if the roles don’t exist locally?

Bert proposes that automation is an important part of the solution. He has a point. Many of the outsourcing arrangements are time-and-materials contracts, so it’s in the suppliers’ best interests for activities to remain slow and manual. He counters by showing how the hyperscalers (eg Google) have found ways of building automations so that their networks and infrastructure need minimal support crews.

Their support systems, unlike the legacy thinking of telco systems, have been designed with zero-touch / low-touch in mind.

If we do care about the stability, resiliency and privacy of our national networks, then something has to be done differently, vastly differently! Having highly autonomous networks, OSS, BSS and related systems is a start. Having a highly skilled pool of local resources that can research, design, build, commission, operate and improve these systems would also seem important. If the business models of these telcos can’t support the higher costs of these local resources, then perhaps national security interests might have to subsidise these skills?

I wonder if the national carriers and/or local OSS/BSS / automation suppliers are lobbying this point? I know a few governments have introduced security regulations that telcos must adhere to, ensuring they have suitable cyber-security mechanisms in place. They also have lawful intercept provisions. But do any have local operational autonomy provisions? None that I’m aware of, but feel free to leave us a comment about any you’re aware of.

PS. Hat tip to Jay for the link to Bert’s post.

How to make your OSS a Purple Cow

With well over 400 product suppliers in the OSS/BSS market, it can be really difficult to stand out from the other products. Part of the reason we compiled The Blue Book OSS/BSS Vendor Directory was to allow us to quickly recall one product from another. With so much overlapping functionality and similarity in their names, some vendors can “blend” into each other when we try to recall them.

And we spend a lot of our week working with and analysing the market, the products and the customers who use them. Imagine how difficult that recall would be for someone whose primary job is to operate a network (which is what most OSS/BSS customers do for a living).

How then can a vendor make their offerings stand out amongst this highly fragmented product market?

Seth Godin is a legendary marketer and product maker (not in OSS or BSS products though). We refer to him often here on the Passionate About OSS blog because of his brilliant and revolutionary ideas. One of those ideas turned into a product manifesto entitled Purple Cow. This book made it into our list of best books for OSS/BSS practitioners.

“Purple Cow describes something phenomenal, something counterintuitive and exciting and flat-out unbelievable… Seth urges you to put a Purple Cow into everything you build, and everything you do, to create something truly noticeable.”

When you’re on a long trip in the countryside, seeing lots of brown or black cows soon gets boring, but if you saw a purple cow, you’d immediately take notice. This book provides the impetus to make your products stand out and drive word of mouth rather than having to differentiate via your marketing.

I’ve noticed the same effect when we pitch our Managed OSS/BSS Data Service to prospects. It’s the data collection and collation tools that drive most of the real business value (in our humble opinion), but it’s the visualisation tools that drive the wow factor amongst our prospects / customers.

It’s our ability to show 3D models of assets (eg towers, as per the animation below), to overlay real-time data onto those 3D models (ie digital twins), or to mash up many disparate data sources for presentation by powerful and intuitive visualisation engines.

This might seem counter-intuitive, but set aside your products’ technical functionality for a moment. Now ask yourself what is the biggest wow factor that stays with your customers and gets them talking with others in the industry? What’s phenomenal, counterintuitive, exciting and flat-out unbelievable? What’s your Purple Cow?

Having reviewed many OSS/BSS products, I can assure you there aren’t many Purple Cows in the OSS/BSS industry. If you’re responsible for products or marketing at an OSS/BSS vendor, that gives you a distinct opportunity.

Improvements in technologies such as 3D asset modelling, AI image recognition, advances in UX/CX, data collection / visualisation and many more give you the tools to be creative. The tools to be memorable. The tools to build a Purple Cow.

It’s difficult to stand out based on functional specifications or non-functionals (unless you leave others in your dust, such as being able to ingest data at 10-20x the rate of your nearest competitor). Those features might be where your biggest business value lies. In fact, that’s probably where it does lie.

However, it seems that the Purple Cows in OSS/BSS appear in the unexpected and/or surprising visual experiences.

Can you imagine building an OSS/BSS Purple Cow? If interested, we’d be delighted to assist.

 

Uses of OSS Augmented Reality in the Data Centre

I was doing some research on Ubiquiti’s NMS/OSS tools yesterday and stumbled upon what appears to be a brilliant Augmented Reality (AR) capability.

I’ve converted it to the video shown below (apologies for the low-res – you can try clicking here for Ubiquiti’s full-res view).

I especially love how it uses the OLED on the left side of the chassis almost like a QR code to uniquely identify the chassis and then cross-link with current status information pulled (presumably) from Ubiquiti’s UNMS.

As mentioned in this post from all the way back in 2014, the potential for this type of AR functionality is huge if / when linked to OSS / BSS data. Some of the potential use-cases for inside the data centre as cited in the 2014 article were:

  1. Tracing a cable (or patchlead) route through the complex cabling looms in a DC
  2. Showing an overlay of key information above each rack (and then each device inside the rack):
    1. Highest alarm severity within the rack (eg a flashing red beacon above each rack that has a critical alarm on any device within it, then a red beacon on any device inside the rack that has critical alarms; see the sketch after this list)
    2. Operational state of each device / card / port within the rack
    3. Active alarms on each device
    4. Current port-state of each port
    5. Performance metrics relating to each device / port (either in current metric value or as a graph/s)
    6. Configuration parameters relating to each device / port (eg associated customer, service, service type or circuit name)
  3. Showing design / topology configurations for a piece of equipment
  4. Showing routing or far-end connectivity coming from a physical port (ie where the cable ultimately terminates, which could be in the same DC or in another state / country)
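To make the first overlay idea above a touch more concrete, here’s a minimal sketch of the data roll-up that might drive a per-rack beacon. It’s purely illustrative – the alarm fields, severity scale and device-to-rack mapping are my assumptions, not Ubiquiti’s (or anyone else’s) actual API:

```python
# Hypothetical sketch: deriving the per-rack "beacon" colour (use-case 2.1)
# by rolling device alarms up to the worst severity per rack.
from collections import defaultdict

SEVERITY_RANK = {"critical": 4, "major": 3, "minor": 2, "warning": 1, "cleared": 0}
BEACON_COLOUR = {4: "red", 3: "orange", 2: "yellow", 1: "blue", 0: "green"}

def rack_beacons(active_alarms, device_to_rack):
    """Return the worst-severity beacon colour for each rack."""
    worst = defaultdict(int)  # rack_id -> highest severity rank seen so far
    for alarm in active_alarms:
        rack = device_to_rack.get(alarm["device"])
        if rack is not None:
            worst[rack] = max(worst[rack], SEVERITY_RANK.get(alarm["severity"], 0))
    return {rack: BEACON_COLOUR[rank] for rack, rank in worst.items()}

# Two alarms on devices in rack R01 roll up to a single red beacon
alarms = [{"device": "sw-01", "severity": "critical"},
          {"device": "sw-02", "severity": "minor"}]
print(rack_beacons(alarms, {"sw-01": "R01", "sw-02": "R01"}))  # {'R01': 'red'}
```

The AR client would then simply render the returned colour above each rack it recognises.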

I believe that some of these features have since been implemented in working solutions or proofs-of-concept but I haven’t seen any out in the wild. Have you?

I’d love to hear from you if you’ve already used these Ubiquiti tools and/or have seen AR / OSS solutions actually being used in the field. What are your thoughts on their practicality?

What other use-cases can you think of? Note that the same 2014 article also discusses some AR use-cases that extend beyond the DC.

How To Optimise A Network Assurance GUI To Get Results

In the old-school world of network assurance, we just polled our network devices and aggregated all the events into an event list. But then our networks got bigger and too many events were landing in the list for our assurance teams to process.

The next fix was to apply filters. For example, that meant dropping the Info and Warning messages because they weren’t all that important anyway…. were they?

But still, the event list just kept scrolling off the bottom of the page. Ouch. So then we looked to apply correlation and suppression rules. That is, to apply a set of correlations so that some of the alarms could be bundled together into a single event, allowing the “child” events to be suppressed.
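To illustrate what a basic correlation / suppression rule actually does, here’s a toy sketch. It bundles alarms that share an upstream parent device within a short time window – real fault managers key on much richer topology, probable-cause codes and sliding windows, and the field names here are my assumptions:

```python
# Toy correlation/suppression sketch: alarms raised within a short window on
# devices that share a parent get bundled under one event; the "children"
# are suppressed from the operator's event list.
from datetime import datetime, timedelta

def correlate(alarms, topology, window=timedelta(seconds=30)):
    """alarms: list of {"device": str, "time": datetime};
    topology: dict mapping each device to its upstream parent."""
    events, suppressed, open_events = [], [], {}
    for alarm in sorted(alarms, key=lambda a: a["time"]):
        parent = topology.get(alarm["device"])
        event = open_events.get(parent)
        if parent and event and alarm["time"] - event["time"] <= window:
            event["children"].append(alarm)  # bundled under the open event
            suppressed.append(alarm)         # never reaches the event list
        else:
            event = {"device": alarm["device"], "time": alarm["time"], "children": []}
            if parent:
                open_events[parent] = event
            events.append(event)             # lands on the operator's list
    return events, suppressed

t0 = datetime(2021, 2, 1, 9, 0, 0)
alarms = [{"device": "port-1", "time": t0},
          {"device": "port-2", "time": t0 + timedelta(seconds=5)},
          {"device": "port-3", "time": t0 + timedelta(seconds=9)}]
topology = {"port-1": "card-A", "port-2": "card-A", "port-3": "card-A"}
events, suppressed = correlate(alarms, topology)
print(len(events), len(suppressed))  # 1 event shown, 2 suppressed
```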

Then we got a bit more advanced with our rules and performed root-cause analysis (RCA). Now, we’re moving to identify patterns using learning algorithms… to reduce the volume of the event list. But with virtualised networks, higher-speed telemetry and increased network complexity, the list keeps growing and the rules have to get more “dynamic.”

Each of these approaches designs a more highly filtered lens through which a human operator can view the health of the network. The filters and rules are effectively dumbing down the information that lands with the operator to solve. The objective appears to be to develop a suitably dumbed-down solution that allows us to throw lots of minimally-trained (and cheaper) human operators at the (still) high transaction count problem. That means the GUI is designed to filter out and dumb down too.

But here’s the thing. The alarm list harks back decades to when telcos were happy having a team of Engineers running the NOC, resolving lots of transactions. Fast forward to today and the telcos aspire to zero-touch assurance. That implies a solution that’s designed with machines in mind rather than humans. What we really want is for the network to self-heal based on all the telemetry it’s seeing.

Unfortunately, rare events can still happen. We still need human operators in the captain’s seat ready to respond when self-healing mechanisms are no longer able to self-heal.

So instead of dumbing-down and filtering out for a large number of dumbed-down and filtered out operators, perhaps we could consider doing the opposite.

Let’s continue to build networks and automations that take responsibility for the details of every transaction (even warning / info events). But let’s instead design a GUI that is used by a small number of highly trained operators, allowing them to see the overall network health posture and respond with dynamic tools and interactions. Preferably before the event using predictive techniques (that might just learn from all those warning / info events that we would’ve otherwise discarded).
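As a toy illustration of that last idea – learning from the warning / info events rather than discarding them – here’s a sketch that tracks each device’s event rate against an exponentially weighted baseline and flags sharp deviations before anything has actually failed. The maths and thresholds are illustrative assumptions only, not a production anomaly detector:

```python
# Toy early-warning sketch: track the rate of info/warning events per device
# against an exponentially weighted baseline; flag sharp deviations as a
# degrading "health posture" before a hard failure occurs.
def make_detector(alpha=0.1, threshold=3.0):
    state = {}  # device -> (ewma of rate, ewm variance)
    def observe(device, events_per_minute):
        mean, var = state.get(device, (events_per_minute, 1.0))
        deviation = events_per_minute - mean
        score = abs(deviation) / (var ** 0.5)  # rough z-score vs baseline
        mean += alpha * deviation              # update the baseline
        var = (1 - alpha) * (var + alpha * deviation ** 2)
        state[device] = (mean, var)
        return score > threshold               # True = posture warning
    return observe

detect = make_detector()
for rate in [10, 11, 9, 10, 48]:  # a sudden burst of warning events
    print(detect("PE1", rate))    # only the final burst flags True
```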

Hat tip to Jay for some of the contrarian thoughts that seeded this post.

 

009 – Managing OSS/BSS Transformation at a Mid-Tier Telco with Steven Cocchiarella

Much of the focus within OSS/BSS centres around the big-budget projects being done by the Tier-1 telcos. They get attention because there are lots of people involved, lots of OSS horsepower and big, ambitious goals. But there’s another part of the industry that doesn’t tend to get as much public recognition – the mid-market telcos and utilities. Their OSS/BSS tend to cover just as much scope; they just don’t have the same level of resources.

Our guest on this episode, Steven Cocchiarella, knows the challenges of providing a full-stack OSS/BSS for the mid-tier market. He was Director of Information Services and Business Analytics for Smithville (http://www.smithville.com), an independent telco provider in Indiana, USA. He describes the challenges he faced in this role and some of the techniques he used to overcome them. Techniques such as the “cut it off and kill it” approach.

Steven also outlines the opportunities awaiting the many OSS/BSS vendors supplying mid-market telcos like Smithville. He describes the gaps in current offerings and how they can be improved to provide better outcomes for the mid-market. He also provides examples of how he used Salesforce to provide a wrapper around the monolithic OSS/BSS he inherited, allowing him freer control over processes and essential data sets. Steven enjoyed these projects so much that he’s now moved on to become a Salesforce Consultant at Growth Heroes (https://growthheroes.com).

For any further questions you may have, Steven can be found at: https://www.linkedin.com/in/steven-cocchiarella-3756b811b/

OSS Functionality – Is Your Focus In Anonymous Places?

Yesterday’s article asked whether OSS tend to be anonymous and poorly designed, and then considered how Jony Ive (who led the design of the iPad, iPod and iPhone at Apple) might look at OSS design. Jony has described “going deep” – being big on focus, care and detail when designing products. The article looked at 8 care factors, some of which OSS vendors do very well and others, well, perhaps less so.

Today we’ll unpack this in more detail, using a long-tail diagram to help articulate some of the thoughts. [Yes, I love to use long-tail diagrams to help prioritise many facets of OSS]

The diagram below comes from an actual client’s functionality usage profile.
Long tail of OSS

The x-axis shows a list of functionalities / use-cases. The y-axis shows the number of uses (it could equally represent usefulness or value or other scaling factor of your choice).

The colour coding is:

  • Green – This functionality exists in the product today
  • Red – This functionality doesn’t exist
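If you’d like to build a profile like this for your own product, a rough sketch follows. It assumes you can reduce a usage / audit log to per-feature counts (the feature names and log format are hypothetical; demand for “red” features might come from feature requests rather than usage logs):

```python
# Hypothetical sketch: building a functionality long-tail from usage counts.
from collections import Counter

usage_log = [  # one row per user action, eg parsed from an OSS audit trail
    "view_alarm_list", "view_alarm_list", "view_alarm_list",
    "create_service_order", "bulk_import_inventory", "generate_sla_report",
]
exists_in_product = {"view_alarm_list", "create_service_order",
                     "bulk_import_inventory"}  # green = in the product today

for feature, uses in Counter(usage_log).most_common():  # head of tail first
    colour = "green" if feature in exists_in_product else "red"
    print(f"{feature:25s} {uses:3d}  {colour}")
```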

The key questions to ask of this long-tail graph are:

  1. What functionality is most important? What functionality “moves the needle” for the customers? But perhaps more importantly, what functionality is NOT important?
  2. What new functionality should be developed?
  3. What old functionality should be re-developed?

Let’s dive deeper.

#1 – What functionality is most important
In most cases, the big-impact demands on the left side of the graph are going to be important. They’re the things that customers need most and use most. These are the functions that were included in the MVP (minimum viable product) when it was first released. They’ve been around for years. All the competitors’ products also have these features because they’re so fundamental to customers. But, because they already exist, many vendors rarely come back to re-factor them.

There are other functions that are also important to customers, but might be used infrequently (eg data importers or bulk processing tools). These also “move the needle” for the customer.

#2 – What functionality should be developed (and what should not)
What I find interesting is that many vendors just add more and more functionality out at the far right side of the graph, adding to the hundreds of existing functions. They can then market all those extra features, rightly saying that their competitors don’t have these abilities…. But functionality at the far right rarely moves the needle, as described in more detail in this earlier post!

Figuring out what should be green (included) and what should be red (excluded) on this graph appears to be something Apple has done quite well with its products. OSS vendors… perhaps less so!! By the way, the less green, the less complexity (for users, developers, testers, integrators, etc), which is always an important factor for OSS implementations.

Yesterday’s post mentioned eight “care factors” to evaluate your OSS products / implementations by. But filtering out the right side of the long-tail (ie marking up more red boxes) might also help to articulate an important “don’t care factor.”

#3 – What functionality should be re-developed
Now this, for me, is the important question to ask. If the green boxes are most important, especially the ones at the far left of the graph, should these also be the ones that we keep coming back to, looking to improve them? Where usability, reliability, efficiency, de-cluttering, etc are most important?

I suspect that Apple develop hundreds of prototypes that focus on and care about the left side of the graph in incredible detail, whilst looking to reduce the green bars in the right side of the graph. My guess is that subsequent updates to their products also seek improvements to the left side…. whilst also adding some new features, turning some of the red boxes green, but rarely all the way out to the right edge of the graph. 

Summary Questions

If you are in any way responsible for OSS product development, where is your “heat map” of attention on this long tail? Trending towards the left, middle or right?

But, another question vexes me. Do current functionality-based selection / procurement practices in OSS perpetuate the need to tick hundreds of boxes (ie the right side of the long tail), even though many of those functions don’t have a material impact? There’s a reason I’ve moved to a more “prioritised” approach to vendor selection in recent years, but I suspect the functionality check-boxes of the past are still a driving force for many.

OSS – Are they anonymous, poorly made objects?

“We’re surrounded by anonymous, poorly made objects. It’s tempting to think it’s because the people who use them don’t care – just like the people who make them. But what [Apple has] shown is that people do care. It’s not just about aesthetics. They care about things that are thoughtfully conceived and well made.”
Jony Ive (referenced here).

As you undoubtedly know, Jony Ive is the industrial design genius behind many of Apple’s ground-breaking products like the iPod, iPad, etc. You could say he knows a bit about designing products.

I’d love to see what he would make of our OSS. I suspect that he’d claim they (and I’m using the term “they” very generically here) are poorly made objects. But I’d also love to be a fly on the wall to watch how he’d go about re-designing them.

It would be an extreme disservice to say that all OSS are poorly designed. From a functionality perspective, they are works of art and incredible technical achievements. They are this MP3 player:

Brilliantly engineered to provide the user with a million features.

But when compared with Apple’s iPods????

The iPods actually have less functionality than the MP3 player above. But that’s part of what makes them incredibly intuitive and efficient to use.

Looking at the quote above from Ive, there’s one word that stands out – CARE. As product developers, owners, suppliers, integrators, do we care sufficiently about our products? This question could be a little incendiary to OSS product people, but there are different levels of care that I’ve seen taken in the build of OSS products:

  • Care to ensure the product can meet the requirements specified? Tick, yes absolutely in almost every case. Often requiring technical brilliance to meet the requirement/s
  • Care to ensure the code is optimised? In almost all cases, yes, tick. Developers tend to pay passionate attention to detail. Their code must not just work, but work with the most efficient algorithm possible. Check out this story about a dilemma of OSS optimisation
  • Care to ensure the user experience is optimised? Well, to be blunt, no. Many of our OSS are not intuitive enough. Even if they were intuitive once, they’ve had so much additional functionality bolted on that they’ve taken on a Frankenstein effect. Our products are designed and tested by people who are intimately familiar with our products’ workings. How often have you heard of products being tested by an external person, a layperson, or even a child to see how understandable they are? How often are product teams allowed the time to prototype many different UI design variants before releasing new functionality? In most cases, it’s the first and only design that passed functional testing that’s shipped to customers. By contrast, do you think Apple allowed their first prototype of the iPod to be released to customers?
  • Care to ensure that bulk processing is optimised? This is often also a fail. OSS exist to streamline operations, to support high-volume transactions (eg fulfilment and assurance use-cases). But how many times have you seen user interfaces and test cases that are designed for a single transaction, rather than to benchmark the efficiency of transactions at massive scale? (See the sketch after this list.)
  • Care to ensure the product can pass testing? Tick, yes, we submit our OSS to a barrage of tests, not to mention the creation of modern test automation suites
  • Care to ensure the product is subjected to context-sensitive testing? Not always. Check out this amusing story of a failure of testing.
  • Care to ensure that installation, integration and commissioning are simple and repeatable? This is an interesting one. Some OSS vendors know they’re providing tools to a self-service market and do a great job of ensuring their customers can become operational quickly. Others require an expert build / release team to be on site for weeks to commission a solution. Contrast this again with the iPad. It’s quick and easy to get the base solution (including Operating System and core apps) operational. It’s then equally easy to download additional functionality via the App Store. Admittedly, the iPad doesn’t need to integrate with legacy “apps” and interfaces in the same way that telco software needs to!! Eeeek!
  • Care about the customer? This can also be sporadic. Some product people / companies pay fastidious attention to the customers, the way they use the products / processes / data and the objectives they need to meet using the OSS. Others are happy to sit in their ivory towers, meet functional testing and throw the solution over the fence for the customers to deal with.
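On the bulk-processing point above, even a crude throughput harness reveals more than a single-transaction test case ever will. Here’s a minimal sketch, where create_order is a hypothetical stand-in for whatever bulk entry point your product exposes:

```python
# Crude bulk-throughput benchmark: measure transactions per second at scale,
# not just whether one transaction passes.
import time

def create_order(i):
    return {"order_id": i, "status": "ACCEPTED"}  # stand-in for a real call

def benchmark(fn, n=100_000):
    start = time.perf_counter()
    for i in range(n):
        fn(i)
    elapsed = time.perf_counter() - start
    print(f"{n} transactions in {elapsed:.2f}s ({n / elapsed:,.0f} tx/s)")

benchmark(create_order)
```

Run it against realistic volumes (and realistic data) and watch how quickly the tx/s figure diverges from single-transaction expectations.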

What other areas have I missed? In what other ways do we (or not) take the level of care and focus that Apple does? Leave us a comment below.

008 – Making OSS Mega Projects Happen with Ashley Neale

While it might sometimes feel like OSS mega projects just happen, there’s usually a lot that must first play out up-stream, long before us technologists get the chance to design and build. First someone must spawn the idea, then be able to persuade a bunch of other people, exciting them with the possibilities of the idea. In many cases, this happens on the customer-side with OSS projects. The internal team sees the need and collects the support (and sponsors) to bring an OSS project (and product?) to inception. The customer-side team then typically asks OSS vendors / integrators to compete for the project implementation [The Red Ocean for OSS vendors].

However, our guest in this episode, Ashley Neale, takes us into his world where he’s constantly seeking ways to do things differently, to do things better for the customer [The Blue Ocean]. Ashley talks us through his experiences of conceptualising, pitching, persuading and closing a mega project worth well over $100m. Even better than that, he then describes the challenges, pitfalls and scars that he overcame whilst leading the delivery of that project, building an OSS/BSS almost from scratch.

Ashley then takes us on his career journey from an IT course at uni, to helping Engineers to sell, through to his current role developing strategic markets for Vocus. He also shares his big-picture perspective on what we can do better and where the big opportunities still lie in the OSS industry.

For any further questions you may have, Ashley can be found at: https://www.linkedin.com/in/ashleyneale/

Ashley promised to provide references for further reading in this podcast. Here is the list in all its glory. Some brilliant titles amongst them:

Sales

The Challenger Sale – Matthew Dixon

Pitch Anything – Oren Klaff

Psychology of Selling – Brian Tracy

SPIN Selling – Neil Rackham

Lead Generation for the Complex Sale – Brian Carroll

Cracking the Sales Management Code – Jason Jordan & Michelle Vazzana

Understanding Business

Good to Great – Jim Collins

Great by Choice – Jim Collins

Built to Last – Jim Collins

The Hard Thing About Hard Things – Ben Horowitz

How to Design a Great Customer Experience – Fred Wiersema

What’s your customer’s problem? – Fred Wiersema

Lean Startup – Eric Ries

Understanding People

Grit: The Power of Passion and Perseverance – Angela Duckworth

Tipping Point – Malcolm Gladwell

Blink – Malcolm Gladwell

Outliers – Malcolm Gladwell

David and Goliath – Malcolm Gladwell

Think Like a Freak – Steven Levitt & Stephen Dubner

Superfreakonomics – Steven Levitt & Stephen Dubner

Sapiens: A Brief History of Humankind – Yuval Noah Harari

Extreme Ownership – Jocko Willink & Leif Babin

12 Rules for Life – Jordan Peterson

Treating OSS Products as Your Own

I was listening to a podcast this morning and the host mentioned a concept that he calls “treating products as your own.”

In other words, there are certain products that he’s an evangelist for – he actively spruiks them as if he had shares in the company or had developed the products himself. They’re products he loves so much that he’s willing to tell everyone about them, even though he derives no financial benefit (but wishes he owned them and could!).

So that got me thinking… what are the OSS products that you, the reader, treat as your own? What products are so outstanding that you want to tell everyone about them, even though you have no involvement in them?

Please leave us a comment below telling us about the product/s and why they’re so great!!

How to Design an OSS / Telco Data Governance Framework

“Data governance constructs a strategy and a framework through which an organization (company as well as a state…) can recognize the value of its data assets (towards its own or global goals), implement a data management system that can best leverage it whilst putting in place controls, policies and standards that simultaneously protect data (regulation & laws), ensure its quality and consistency, and make it readily available to those who need it.”
TM Forum Data Governance Team.

I just noticed an article on TM Forum’s Inform Platform today entitled, “Telecoms needs a standardized, agile data governance framework,” by Dawn Bushaus. 

The Forum will publish a white paper on data governance later this month. It has been authored with participation from a huge number of companies including Antel Uruguay, ATC IP, BolgiaTen, Deloitte, Ernst & Young, Etisalat UAE, Fujitsu, Globe Telecom, Huawei Technologies, International Free and Open Source Solutions Foundation, International Software Techniques, KCOM Group, Liquid Telecom, Netcracker Technology, Nokia, Oracle, Orange, Orange Espagne, PT XL Axiata, Rogers Communications, stc, Tech Mahindra, Tecnotree, Telefonica Moviles, Telkom and Viettel. Wow! What a list of luminaries!! Can’t wait to read what’s in it. I’m sure I’ll need to re-visit this article after taking a look at the white paper.

It reminded me that I’ve been intending to write an article about data governance for way too long! We have many data quality improvement articles, but we haven’t outlined the steps to build a data governance policy.

One of my earliest forays into OSS was for a brand new carrier with a brand new network. No brownfields challenges (but plenty of greenfields challenges!!). I started as a network SME, but was later handed responsibility for the data migration project. Would’ve been really handy to have had a TM Forum data governance guide back then! But on the plus side, I had the chance to try, fail and refine, learning so much along the way.

Not least of those learnings was that every single other member on our team was dependent on the data I was injecting into various databases (data-mig, pre-prod, prod). From trainers, to testers, to business analysts, to developers and SMEs. Every person was being held up waiting for me to model and load data from a raft of different network types and topologies, some of which were still evolving as we were doing the migration. Data was the glue that held all the other disciplines together.

We were working with a tool that was very hierarchical in its data model. That meant that our data governance and migration plan was also quite hierarchical. But that suited the database (a relational DB from Oracle) and the network models (SDH, ATM, etc) available at that time, which were also quite hierarchical in nature.

When I mentioned “try, fail and refine” above, boy did I follow that sequence… a lot!! Like the time when I was modelling ATM switches that were capable of a VPI range of 0 to 255 and a VCI range of 0 to 65,535. I created a template that saw every physical port have 255 VPIs and each VPI have 65,535 VCIs. That’s 255 × 65,535 – more than 16.7 million VC records per physical port! By the time I template-loaded this port-tree for each device in the network overnight, I’d jammed a gazillion unnecessary records into the ports table. Needless to say, any query on the ports table wasn’t overly performant after that data load. The table had to be truncated and re-built more sensibly!!

But I digress. This is a how-to, not a how-not-to. Here are a few hints for building a data governance strategy:

  1. Start with a WBS or mind-map to begin formalising what your data project needs to achieve and for whom. This WBS will also help form the basis of your data migration strategy
  2. Agile wasn’t in widespread use back when I first started (by that I mean that I wasn’t aware of it in 2000). However, the Agile project methodology is brilliantly suited to data migration projects. It’s also well suited to aligning with WBS in that both methods break down large, complex projects into a hierarchy of bite-sized chunks
  3. I take an MVD (Minimum Viable Data) approach wherever possible, not necessarily because it’s expensive to store data these days, but because the life-cycle management of the data can be. And yet the extra data points are just a distraction if they’re never being used
  4. Data Governance Frameworks should cover:
    1. Data Strategy (objectives, org structure / sponsors / owners / stewardship, knowledge transfer, metrics, standards / compliance, policies, etc)
    2. Regulatory Regime (eg privacy, sovereignty, security, etc) in the jurisdiction/s you’re operating in or even just internal expectation benchmarks
    3. Data Quality Improvement Mechanisms (ensuring consistency, synchronised, availability, accuracy, usability, security)
    4. Data Retention (may overlap with regulatory requirements as well as internal policies)
    5. Data Models (aka Master Data Management – particularly if consolidating and unifying data sources)
    6. Data Migration (where “migration” incorporates collection, creation, testing, ingestion, ingestion / reconciliation / discovery pipelines, etc)
    7. Test Data (to ensure suitable test data can underpin testing, especially if automated testing is being used, such as to support CI/CD)
    8. Data Operations (ongoing life-cycle management of the data)
    9. Data Infrastructure (eg storage, collection networks, access mechanisms)
  5. Seek to “discover” data from the network where possible, but note there will be some instances where the network is master (eg current alarm state), yet other instances where the network is updated from an external system (eg network design being created in design tools and network configs are then pushed into the network) 
  6. There tend to be vastly different data flows and therefore data strategies for the different workflow types (ie assurance, fulfilment / charging / billing, inventory / resources) so consider your desired process flows
  7. Break down migration / integration into chunks, such as by domain, service type, device types, etc to suit regular small iterations to the data rather than big-bang releases
  8. I’ve always found that you build up your data in much the same way as you build up your network:
    1. Planning Phase:
      1. You start by figuring out what services you’ll be offering, which gives an initial idea about your service model, and the customers you’ll be offering them to
      2. That helps to define the type of network, equipment and topologies that will carry those services
      3. That also helps guide you on the naming conventions you’ll need to create for all the physical, logical and virtual components that will make up your network. There are many different approaches to naming conventions, but I always tend to start with ITU as a naming convention guide (click here for a link to our naming convention tool)
      4. But these are all just initial concepts for now. The next step, just like for the network engineers, is to build a small Proof of Concept (ie a small sub-set of the network / services / customers) and start trialling possible data models and namings
      5. Migration Strategy (eg list of environments, data model definition, data sources / flows, create / convert / cleanse / migration, load sequences with particular attention to cutover windows, test / verification of data sets, risks, dependencies, etc)
    2. Implementation Phase
      1. Reference data (eg service types, equipment types, speeds, connector types, device templates, etc, etc)
      2. Countries / Sites / Buildings / Rooms / Racks
      3. Equipment (depending on the granularity of your data, this could be at system, device, card/port, or serial number level of detail). This could also include logical / virtual resources (eg VNFs, apps, logical ports, VRFs, etc)
      4. Containment (ie easements, ducts, trays, catenary wires, towers, poles, etc that “contain” physical connections like cables)
      5. Physical Connectivity (cables, joints, patch-leads, radio links, etc – ideally port-to-port connectivity, but depends on the granularity of the equipment data you have)
      6. Map / geo-location of physical infrastructure
      7. Logical Connectivity (eg trails, VPNs, IP address assignments, etc)
      8. Customer Data
      9. Service Data (and SLA data)
      10. Power Feeds (noting that I’ve seen power networks cause over 50% of failures in some networks)
      11. Telemetry (ie the networks that help collect network health data for use by OSS)
      12. Other data source collection such as security, environmentals, etc
      13. Supplementary Info (eg attachments such as photos, user-guides, knowledge-bases, etc, hyperlinks to/from other sources, etc)
      14. Build Integrations / Configurations / Enrichments in OSS/BSS tools and or ingestion pipelines
      15. Implement and refine data aging / archiving automations (in-line with retention policies mentioned above)
      16. Establish data ownership rules (eg user/group policies)
      17. Implement and refine data privacy / masking automations (in-line with privacy policies mentioned above)
    3. Operations Phase
      1. Ongoing ingestion / discovery (of assurance, fulfilment, inventory / resource data sets)
      2. Continual improvement processes to avoid a data quality death spiral, especially for objects that don’t have a programmatic interface (eg passive assets like cables, pits, poles, etc). See big loop, little loop and synchronicity approaches, plus the reconciliation sketch after this list. There are also many other data quality posts on our blog.
      3. Build and refine your rules engines (see post, “Step-by-step guide to build a systematic root-cause analysis (RCA) pipeline“)
      4. Build and refine your decision / insights engines and associated rules (eg dashboards / scorecards, analytics tools, notifications, business intelligence reports, scheduled and ad-hoc reporting for indicators such as churn prediction, revenue leakage, customer experience, operational efficiencies, etc)
      5. In addition to using reconciliation techniques to continually improve data quality, also continually verify compliance for regulatory regimes such as GDPR
      6. Ongoing refinement of change management practices

I look forward to seeing the TM Forum report when it’s released later this month, but I also look forward to hearing suggested adds / moves / changes to this OSS / BSS data governance article from you in the meantime.

If you need any help generating your own data governance framework, please contact us.

 

007 – Building Products to Solve Fundamental OSS Problems with Jay Fenton

Jay Fenton is the Founder and CEO of Savvi, makers of innovative and highly performant OSS/BSS components that redefine the state of the art. Savvi’s portfolio notably includes SNMP, Streaming Telemetry, NetFlow and IoT collectors, each of which operates at tens of millions of events per second on a single server, solving collection problems for massive networks. We also mention Luna – a unique visualisation platform for situational awareness, customer experience management and AI/ML overwatch. Luna fluidly combines multi-layer spatial, logical and time-series data at massive scale.

Jay is a serial entrepreneur and has built a number of products from the ground up. When we say the ground up, we mean it, as he’s re-written low-level data handling libraries to be able to process data volumes far larger than previous industry standards. As well as developing OSS tools, Jay is also currently working on side projects in Virtual Reality and SocialAudio.

This is a much longer show than normal, but we pack a lot into this episode. Jay delves into his history of phone phreaking as a teenager and his early days of administering carrier voice & IP networks, all the way through to the many product opportunities that he sees today. We cover a range of ideas, old and new, including: cloud OSS, telcos as meta companies, the disruption of satellite internet, the potential for an entire OSS on Salesforce, the complexities of network virtualisation, ONAP, microservices, 5G plus mobile-edge compute (MEC), Rube Goldberg machines, open source in telco, risk/reward and the recent changes in the Elastic license model, and finally mixed reality (AR / VR / XR).

For further questions you may have for our guest, Jay can be found “on the Internet” (his words). We found him at https://www.linkedin.com/in/jfenton/

006 – A Career of Innovation in OSS with Francis Haysom

Francis Haysom is a Principal Analyst for Appledore Research Group, a global research and consulting firm specialising in the telecommunication and software markets with a particular focus on OSS/BSS.

In this episode, Francis takes us on a ride through his career leading OSS/BSS innovation across the last three decades with iconic companies such as Convergys, Cramer, Amdocs, Telcordia and Ericsson. He then takes us into his current research at Appledore, looking over the horizon into what comes next in the OSS/BSS world. You could say his forecast is a little “cloud-y” and “edge-y,” so strap yourself in for a story that provides a fantastic view of the history of our industry, as well as some brilliant observations and insights into where it will go next.

005 – Making Standardised Open-Source a Strategy for OSS with Vance Shipley

Vance Shipley is the CEO and Founder of SigScale. SigScale is a provider of standardised, open-source, cloud-native OSS/BSS tools, including flagship product, OCS (Online Charging System).

In this episode, Vance takes us on a fascinating journey through a career that started three decades ago with his first role as a high-tech lumberjack through to his current owner / developer role with SigScale. His journey follows an entrepreneurial pathway, developing clever solutions to the many problems that arose during the early days of deregulation of the telco industry. He also outlines the types of opportunities that are presenting themselves in the modern landscape of 5G and network virtualisation.

Vance also highlights how he has become a Venn diagram, with a unique intersection of skills that include developing solutions that incorporate low-level protocol stacks, specialising in cellular networks (and beyond) and being a key connector to bring network management standards together across 3GPP, TM Forum and LFN.

Launch of The Passionate About OSS Podcast

We’re excited to announce the Launch of The Passionate About OSS Podcast.

The first batch of five episodes can be found here, with new episodes to be released here on a weekly basis:

The Passionate About OSS Podcast

The aim of the show is to shine a light on the many brilliant people who work in the OSS industry.

We’ll interview experts in the field of OSS/BSS and telecommunications software. Guests represent the many facets of OSS including: founders, architects, business analysts, designers, developers, rainmakers, implementers, operators and much more, giving a 360 degree perspective of the industry.

We’ll delve into the pathways they’ve taken in achieving their god-like statuses, but also unlock the tips, tactics, methodologies and strategies they employ. Their successes and failures, challenges and achievements. We’ll look into the past, present and even seek to peer into what the future holds for the telco and OSS industries.

004 – An Analyst’s Perspective on the OSS Industry with James Crawshaw

James Crawshaw is a Principal Analyst with Omdia, one of the world’s top-three technology analysis agencies. James specialises in analysis of OSS, telecommunications and IT industries, which means he spends more time researching OSS technologies and firms than almost anyone on the planet.

In this episode, James provides insights into what’s currently trending in OSS, but he also describes the techniques he uses to peer over the horizon to spot nascent developments.

003 – Hints on Finding a Role in the OSS/BSS Industry with Michael Jones

Michael Jones is a Practice Principal at Analytica Resources, a recruitment role that sees him connecting employers and employees. Analytica is a niche agency that specialises in BSS and OSS placements. This provides Michael with an insider’s perspective on the successful techniques that applicants use to find their first, or next, job in the OSS / BSS industry.

Michael describes the processes that he uses to identify best-fit candidates as well as the techniques he recommends for differentiating yourself from others. He also provides tips on how employers can win the war for talent, finding ideal candidates, some of whom might not even be actively seeking roles.

002 – The Importance of Physical Network Inventory (PNI) in an Increasingly Virtualised World with Peter Dart

Peter Dart describes his journey from working with some of the world’s earliest geospatial software to his current role as Chief Architect with Synchronoss. In this role, he helps to guide the roadmap of Synchronoss’ Spatial Suite, a set of tools that assist leading network operators to manage and maintain their Physical Network Inventory (PNI).

In a world where network virtualisation technologies are grabbing the attention of many, Peter articulates why, perhaps more than ever, PNI tools remain relevant. He describes the many use cases where network designers, field workers and capacity planners interact with PNI tools / data, but also delves into some of the less well known opportunities to leverage this information.

001 – From OSS Startup to IPO with Tony Kalcina

Tony Kalcina recounts the story of taking Clarity International from its founding team and customer to IPO (Initial Public Offering) and beyond. He also describes an extensive career starting with Telstra OTC through to his current role as the APAC CTO for Tech Mahindra and Ambassador for TM Forum. Tony provides insights into the formation and leadership of OSS teams and the opportunities that still exist for teams small and large in today’s OSS environment.

000 – Introduction to The Passionate About OSS Podcast

Welcome to the Passionate About OSS podcast, the show for people who are just that – Passionate About OSS – where the OSS stands for Operational Support Systems.

In other words, the software solutions that help manage and operate the complex telecommunications networks of today.

Thanks for joining me on the episode that isn’t quite an episode – it’s more of an introduction message that describes what this show is all about.

I’ve been running the Passionate About OSS brand for a number of years now. But I’ve been passionate about OSS even longer than that, having caught the bug on my first OSS project way back in the year 2000. It’s such a fascinating and diverse subject that I love sharing the passion with others. This show means not just having conversations, but sharing them with you, the audience.

This show shines a light on some of the best and brightest in this industry, and when I say brightest, this means some unbelievably clever, genius-level, OSS experts. Being surrounded by such luminaries can be a humbling experience, especially when you’re less Nostradamus and more NostraDumbAss, like me. As humbling as it might be, even the best of the best don’t come close to knowing everything there is to know about our ever-changing industry.

That’s why we’ll look at it from the multitude of facets that OSS consists of, through the eyes of experts who represent personas such as:

· Architects

· Business Analysts

· The C-suite (CEO, CMO, CFO, etc)

· Consultants

· Data Scientists, AI / ML

· Designers

· Developers

· Founders

· Operators

· Project Implementers

· Rainmakers

· Testers

· And so much more

We’ll delve into the pathways they’ve taken in achieving their god-like statuses, but also unlock the tips, tactics, methodologies and strategies they employ. Their successes and failures, challenges and achievements. We’ll look into the past, present and even seek to peer into what the future holds for the telco and OSS industries. The telcos we assist have never been more important to our society, yet they’ve never been more imperiled by market disruption. We’ll seek out ways to add even greater value to network operators and the global customer-base they support.

I expect that our guests’ stories and insights will help you along your own journey, regardless of whether you’re just starting out or have already spent decades wrestling the beast that is OSS.

Like so many of your counterparts, I’m sure you’re inquisitive enough to have loads of additional questions for our guests and me. If so, head over to PassionateAboutOSS.com and leave us a message with:

1) the questions you have,

2) your stories and journey, and

3) perhaps even suggestions about the guests you’d like to hear from in future.

We’ll talk soon… but in the meantime, be sure to spread the gospel, the passion that is OSS.

Canary Releases

Do you remember back in the old days of OSS/BSS releases into production? They were a little bit stressful…. especially for the Release Managers.

I remember one Release Manager who used to set up the packages, type out the commands, then his hands would be poised above the keyboard, literally shaking. He would then position his finger above the Enter key and look away from the screen whilst pressing it!

He then couldn’t look back at the screen for at least a couple of minutes… maybe taking a sneaky peek from time to time.

It was pretty hilarious to watch! 

It was like watching a nervous soccer fan at a big game being decided by penalties. Look away! Can’t watch!

Roll-backs upon fail were equally amusing to observe (for me at least).

But luckily, we can avoid these big-bang cutovers in many cases with load-balanced application architectures. This article by Danilo Sato describes Canary / Blue-Green Deployments of new / updated software.

The steps are summarised in Danilo’s diagrams below:

PRE-CUTOVER

CANARY RELEASE

POST CUTOVER

The canary release model provides a technique of migrating a few selected users (eg internal / project users) to the new solution. If something happens to the canary release and rollback is required, it’s simply a case of routing all users back to the old version.
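A minimal sketch of that routing decision is below. It’s illustrative only – in practice this logic usually lives in the load balancer or service mesh configuration rather than application code, and the cohort names and percentage are my assumptions:

```python
# Minimal canary-routing sketch: a fixed cohort plus a small, stable slice of
# general traffic goes to the new version; everyone else stays on the old one.
import hashlib

CANARY_USERS = {"internal-tester-1", "project-team-2"}  # always on the canary
CANARY_PERCENT = 5                                      # plus 5% of all users

def route(user_id: str) -> str:
    if user_id in CANARY_USERS:
        return "v2-canary"
    # Stable hash so a given user consistently lands on the same version
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < CANARY_PERCENT else "v1-stable"

print(route("internal-tester-1"))   # -> v2-canary
print(route("random-customer-42"))  # -> v1-stable for ~95% of users
```

Rolling back is then simply a case of setting CANARY_PERCENT to zero (and emptying the cohort), which routes all users back to the old version.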

Danilo also states, “Canary releases can be used as a way to implement A/B testing due to similarities in the technical implementation. However, it is preferable to avoid conflating these two concerns: while canary releases are a good way to detect problems and regressions, A/B testing is a way to test a hypothesis using variant implementations. If you monitor business metrics to detect regressions with a canary [2], also using it for A/B testing could interfere with the results.”