Network data models

Over the years of working with different OSS products, I’ve noticed two distinct approaches to building the data models that underpin OSS functionality. The two options are:

  1. Network specific data model
  2. Generic data model

The first approach sees the vendor implement a different data structure for each network type (e.g. Ethernet, transmission, etc). The second takes a network-agnostic approach that, by default, is also agnostic of equipment manufacturer. It is built on the assumption that any network type can be broken down into a concept of nodes (routers, switches, muxes, modems, etc) and arcs (cables, circuits, virtual circuits, eLANs, etc).
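To make the nodes-and-arcs idea concrete, here is a minimal sketch of what a generic model might look like. All names and fields here are illustrative assumptions, not drawn from any particular OSS product; type-specific detail is pushed into attribute maps rather than dedicated tables.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str                  # e.g. "router", "switch", "mux", "modem"
    attributes: dict = field(default_factory=dict)  # type-specific detail

@dataclass
class Arc:
    arc_id: str
    a_end: str                      # node_id of one termination
    z_end: str                      # node_id of the other termination
    arc_type: str                   # e.g. "cable", "circuit", "virtual_circuit"
    attributes: dict = field(default_factory=dict)

# The same two structures model an Ethernet link and an SDH circuit alike;
# only the attribute maps differ:
eth = Arc("arc-1", "sw-1", "sw-2", "cable", {"medium": "fibre", "speed": "10G"})
sdh = Arc("arc-2", "mux-1", "mux-2", "circuit", {"rate": "STM-1"})
```

The flexibility comes at a cost: nothing in this schema stops you writing an attribute that makes no sense for the arc's type, which is exactly the consistency trade-off discussed below.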

Each approach has pros and cons.

The network-specific approach supports the ability to create very specific fields to suit any requirement of that network type. It also allows very network-specific business rules and connectivity rules to be built into it. The two limitations I’ve noticed are that there is much more complexity to maintain and, more importantly, that if you want to create generic visualisation or reporting tools, you have to query many more tables and find a way to marry all the business and connectivity rules together.

I personally prefer the generic data model for its elegance (potentially) and flexibility (in theory you can build any network type into the application without having to change the code). There are three main downsides that I’ve noticed.

Firstly, the vendor’s product team must work hard to discover the essence of communications networks, building a data model that supports any possible type of current or future network. Secondly, the team configuring a generic OSS model must be able to think laterally to map any network situation to the generic data model. Lastly, the one-size-fits-all approach doesn’t always cater for everything a new network can throw at it, requiring changes to a data model that must also retrofit into the way previous network types were configured.



4 Responses

  1. One of the hardest OSS design decisions. You missed the 3rd option – TMForum-generic – where they give you a broad data model of nodes and links then you insert all other meaningful data in name-value pairs.

    There are so many more trade-offs…

    Database performance – Generally speaking, the more specific you can make the database schema, the better it will perform. There are whole books about why (read Tom Kyte – great Oracle author) but basically, if you constrain the database, queries are more precise and predictable.
    Consistency – If your data model is specific then all systems accessing that data will be just as specific. If your model is generic, you are dependent on your own client software and other systems accessing the data to write data that is valid, and interpret generic data correctly. This is a common argument put forward by database architects for doing as much modelling in the schema as possible, rather than assuming applications can be trusted to own the job of understanding data structure.
    Data integrity – A specific data model cannot ‘contain’ data that is an invalid model of the network. Cramer sold this as a benefit – Yes, the model is very specific and painfully rigid, but it means that an item in the SDH-specific Circuit table is always a valid definition of E1 or STM-1 or whatever.
    Uncertainty – Data quality is an issue. Sometimes you want a valid database model that contains partial data – deliberately representing an incomplete or invalid object, like an E1 circuit that doesn’t terminate on a port. A more generic model better supports partial and dirty data sets.
    Data Repository vs. Process – If you want a data repository that holds browsable data, or data destined for other OSS systems, go generic because there is little benefit to being specific. If you are processing network data with network-centric rules, go specific because you need meaningful semantics and rules to permit navigation of the data.

    It’s really tricky. And no-one has really nailed it, at least in my experience, in the last 18 years I’ve been doing OSS. Every design is a compromise.

  2. Hi James,
    Awesome response thanks!
    Some seriously useful supplementary info in your comment.

  3. Network specific data models are typical for equipment vendor tools. Equipment vendors are focused on their equipment and networks. As a result they model their data accordingly.

    Independent OSS vendors need to provide generic ‘flavored’ data models since this is the most efficient way to model operator’s networks.
    I say ‘generic flavored’ since the term ‘generic’ can be understood in many different ways. The optimal way, in my opinion, is to be generic enough to model all current and (anticipated) future technologies, while assuring data consistency and integrity.

    The actual task for the tool vendors later is to model new technologies on the basis of their generic data model and to recognise at what point it makes sense to extend their data model.

  4. Great points Michael.
    The discussion about how equipment manufacturer OSS versus independent OSS can heavily influence data models is very pertinent, especially since many of the independent OSS vendors have been absorbed into equipment manufacturers via acquisition.
