OSS Sandpit – Telco Cloud / DC Inventory Prototype

This article provides a tutorial for building Telco Cloud / Data Centre components into the inventory module of our Personal OSS Sandpit Project.

This prototype has been a bit of a beast to build and includes components such as:

  • Hosting Services including:
    • IaaS (VMs, storage, network – FlexPod)
    • PaaS (ONTAP-AI – a hosted AI solution, hosted voice)
    • SaaS (email, secure keys)
    • Internet ecosystems
    • Cloud provider ecosystems
    • CoLocation / Rack Management services
  • Leased Lines including:
    • International Submarine cable routes
    • Core / DC Interconnect
    • Local customer links
    • ISP links
  • Carrier MPLS network modelling, including:
    • IPAM (IP Address Management)
    • VPN / VLAN management
    • VRF and AS management
  • Virtualisation and application management
  • Equipment Layout art for:
    • Routers
    • Switches
    • Cisco FlexPod Chassis
    • NVIDIA ONTAP AI Chassis

This Telco Cloud prototype can be summarised as follows (noting that this is an invented network section):

You’ll notice:

  • A Telco Cloud network (light blue) with DCs in Melbourne, Sydney, Singapore, New York and London
  • These DCs are interconnected using MPLS over leased lines (note that each of the dotted lines actually has a resilient path, which we’ll describe a little later)
  • Each DC has a number of services / platforms at its disposal (as shown by the coloured dots)
  • A single customer with three sites (dark blue in Geelong, Queens and Kuala Lumpur)
  • The Customer Edge (CE) routers at these three sites are connected by local leased lines (yellow clouds) to the Provider Edge (PE) of the Telco Cloud
  • Ports, IP addresses / subnets and BGP AS numbers are identified

What you don’t see in this diagram are the submarine landing stations and submarine + terrestrial leased lines that are also modelled, but we’ll get to that later too.

In addition, the diagram below represents rack layouts at ML1 (Melbourne) DC.

Each of the other core sites is the same, except for the ONTAP-AI (rack 05), which is housed only in the Melbourne DC.

Device Instances

First, we build the locations and the hierarchy of devices within them in Kuwaiba. The diagram below shows the new devices we’ve built to support this Telco Cloud model:

The tree is only partially expanded and only shows ML1. You’ll notice how these assets align with the conceptual rack layout view above.

This translates to the following FlexPod Chassis Rack View in Kuwaiba, where you’ll notice I’ve created Equipment Layout artwork for UCS, switches and storage:

While fiddling with face layouts, I accidentally stumbled on a cool little feature in Kuwaiba.

If you populate the “Model” field, you get the physical / connectivity mapping (of the UCS / Compute shelf in this case):

But if you leave the model field blank then you get the Logical / Virtualisation / Application view:

If you look closely at this view above (double-click if needed), you’ll see where the various hosted customer services (IaaS, SaaS, PaaS) are stored.

The same can be seen for AI as a Service (AIaaS). First, we see the physical layout of the NVIDIA DGX-1:

And then we can toggle to see the hosted customer AI services running on the DGX-1:

 

CoLocation (CoLo) Services

Speaking of customer services, we can also model CoLo services by allocating rackspace in the COLO rack, as follows:

Perhaps more importantly, we’ve also modelled the attributes of those services, including factors such as Power Feed, Space Type, Number of RUs, Access Type, Bandwidth, etc:

Connectivity

Next, we have to model the connectivity. 

Inter-Rack Connectivity

We’ll start with the patching within the racks, where we follow these wiring design guides from Cisco (FlexPod), which align with the FlexPod Chassis Rack View diagram above…

…and NVIDIA (ONTAP-AI) respectively

 

Inter-DC Connectivity

Then we establish the Inter-DC connectivity, starting with the Submarine and major terrestrial links between cities (double-click for a closer look):

Being an Australia-based Telco Cloud provider means there are extensive Leased Line links between Melbourne and Sydney:

And also terrestrial leased-lines across to the submarine landing station in Perth to support the links to Singapore:

Submarine Landing Station to DC Connectivity

However, we also have local leased lines that allow us to link the DCs to the Submarine landing stations:

From diverse landing stations at Beaconsfield (SY2) and Paddington (SY3) to the Sydney DC (SY1):

From diverse landing stations at Manasquan (US3) and Brookhaven (US4) to the New York DC (NY1). Note that I haven’t shown the US West-Coast landing stations at Morro Bay (US1) and Hillsboro (US2) but you can see the orange terrestrial links going off to them from US3 and US4 below.

I’ve only shown a single landing station in Singapore, which lands the SeaMeWe-3 and Australia-Singapore submarine cables. However, I’ve then shown diverse routes to the Singapore DC (SG1):

End-to-End DC Connectivity

All of the leased lines are in place, but we now need to establish end-to-end routes through all these leases… connecting the dots, as it were. Here are:

Diverse Routes showing all hops from ML1 to NY1…

…and Diverse Routes from ML1 to SG1
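This diverse-path checking lends itself to simple automation too. Below is a toy sketch (using Python and networkx, with an invented topology – the landing-station names come from this article, but the segment pairings are assumptions) that verifies two fully node-disjoint routes exist between ML1 and NY1:

```python
# Toy sketch: confirm route diversity between DCs across leased-line
# segments. The topology below is invented for illustration only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ML1", "SY2"),   # Melbourne DC to Beaconsfield landing station
    ("ML1", "SY3"),   # Melbourne DC to Paddington landing station (diverse lease)
    ("SY2", "US3"),   # submarine segments to Manasquan landing station
    ("SY3", "US4"),   # submarine segments to Brookhaven landing station
    ("US3", "NY1"),   # local leases to the New York DC
    ("US4", "NY1"),
])

paths = list(nx.node_disjoint_paths(G, "ML1", "NY1"))
for p in paths:
    print(" -> ".join(p))
assert len(paths) >= 2  # no single point of failure between the DCs
```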

 

Customer Leased-line Connectivity

Then we build the leased lines to customer sites at Geelong (from ML1 DC), Queens (from NY1 DC) and Kuala Lumpur (from SG1 DC):

Internet Leased Lines

And finally the Leased Lines from the POI (Point of Interconnect) rack in the DC to local ISPs in each city. These are shown symbolically just to track their Leased Line identifiers rather than actual routes:

 

Mapping the MPLS Network

Now that all the physical connectivity is in place, we can record all the attributes of the MPLS network. You’ll remember that the first diagram above showed all the ports, IP addresses / subnets and BGP AS numbers.

Firstly, the IP / subnet allocations:

The right-hand pane shows the four major subnets (MPLS core, customer leases, loopbacks and customer LAN ranges respectively).

Just as a single example, the left-hand pane shows that 10.1.1.1 has been assigned to port Gi0/0/0/0 on the PE router at ML1 from the 10.1.1.0/28 subnet.
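Incidentally, this kind of IPAM bookkeeping is easy to sanity-check outside the tool. Here’s a minimal Python sketch using the standard ipaddress module (the router name is a hypothetical label, not the one used in Kuwaiba):

```python
# Sanity-check an IPAM assignment with the standard library: is 10.1.1.1
# a usable host address in 10.1.1.0/28?
import ipaddress

subnet = ipaddress.ip_network("10.1.1.0/28")
candidate = ipaddress.ip_address("10.1.1.1")

assert candidate in subnet.hosts()  # usable range is 10.1.1.1 - 10.1.1.14
assignment = {str(candidate): ("ML1-PE-01", "Gi0/0/0/0")}  # hypothetical router name
print(f"{candidate} assigned from {subnet} "
      f"({subnet.num_addresses - 2} usable host addresses)")
```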

The diagram below then shows all of the MPLS Links in Kuwaiba. Note that I’ve manually overlaid the blue cloud to make the core DC Interconnect network easier to see, with P routers (in the POI rack of each DC) on its perimeter. PE routers (in the PE rack of each DC) are shown connecting to the P routers, but also connecting to the Customer Edge (CE) routers at customer sites.

Then finally we show the MPLS attribute mappings.

The upper pane shows pools of VRFs (VPN00001 for the demonstrated customer’s network and another for the backbone network), as well as AS allocations.

The lower pane shows an example VRF and how it’s associated with:

  • Customer LAN IP subnet (172.16.1.0/26)
  • VLANs (VLAN10 and VLAN20 in this case)
  • PE routers on which the VRF resides
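To make that association concrete, here’s a minimal sketch of the same VRF record as a plain Python data structure. The field values mirror the example above; the class shape is purely illustrative (not Kuwaiba’s internal model) and the PE router names are invented:

```python
# The VRF association record, sketched as a plain dataclass.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vrf:
    name: str
    customer_lan_subnet: str                              # LAN range routed in this VRF
    vlans: List[int] = field(default_factory=list)
    pe_routers: List[str] = field(default_factory=list)   # PEs where the VRF resides

vpn00001 = Vrf(
    name="VPN00001",
    customer_lan_subnet="172.16.1.0/26",
    vlans=[10, 20],
    pe_routers=["ML1-PE-01", "NY1-PE-01", "SG1-PE-01"],   # invented names
)
print(vpn00001)
```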

Customer Service Mappings

The following shows Service Mappings for Customer 0001. They’ve racked up a lot of services here! This service inventory can be used to assist with billing the customer each month.

 

Summary

I hope you enjoyed this introduction into how we’ve modelled a sample Carrier Cloud / DC into the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.

More to come on SDN and SD-WAN in future articles.

If you think there are better ways of modelling this network, if I’ve missed some of the nuances or practicalities, I’d love to hear your feedback. Leave us a note in the contact form below.

OSS Sandpit – GPON Network Inventory Prototype

This article provides a tutorial for building GPON (Gigabit Passive Optical Network) components into the inventory module of our Personal OSS Sandpit Project. It also serves as a model for FTTH / FTTP (Fibre to the Home / Fibre to the Premises).

This prototype build includes components such as:

  • Passive Optical Network (cables, patch panels, splices, splitters, containment, multiports, other splice joints)
  • Active GPON equipment (OLT, ONT)

This GPON prototype can be summarised as follows (this invented network section has been visualised using Google Earth). Note that you should start from the Central Office / OLT site and trace outwards towards the ONT:

This corresponds to the physical representation of a typical GPON model such as the following:

Device Instances

The diagram below shows the new devices we’ve built to support this GPON network model:

This includes:

  • An OLT (Nokia ISAM FD 7302) at the Central Office, with:
    • 18 available card slots
      • Containing GPON cards (Nokia FGLT-8 cards)
        • 8 GPON ports
  • An FDH (Fibre Distribution Hub) that contains patch arrays and splitters
  • Manholes / Handholes containing:
    • Splice Joints (eg AJL, LJL types)
    • Fibre Loops
    • Multiports (MPT)
  • Cables (mains, distribution and lead-ins)
  • ONTs (Optical Network Terminal devices) on the customer premises

Physical Connections

There is quite a lot of patching and port mirroring required to create the physical connectivity between the customer sites and central office.

This includes a small section of local fibre network (LFN) drawn in Google Earth:

You’ll notice the 4 x Lead-in cables emanating from handhole HH-D-005. The green squares indicate handholes. The blue diamonds represent multiports and the blue arrowheads represent AJL / LJL splice joints.

These are shown as follows in the Kuwaiba OSP view:

The diagram below shows the conceptual view of the GPON network in the upper section, with corresponding physical path trace shown in the lower section (Points given for anyone who can spot the error in the text I’ve overlaid on the trace).

Click on the image above to show the more detailed trace from the OLT through the splitter at the FDH to the four homes. Note that the next version of Kuwaiba is expected to show a more fanned-out visual presentation of the trace data.

The same data can be shown in the physical tree view below. You’ll notice that four separate branches emanate from the Splitter inside the FDH, indicating a 1:4 split (although only three of the branches are visible below).

You’ll notice that in this instance, we have to perform a TraceDown (ie from OLT to premises) using the Physical Tree trace functionality. TraceUp (ie from premises to OLT) provides slightly spurious data.
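Conceptually, a TraceDown is just a depth-first walk from an upstream port towards the ONTs. Here’s a simplified sketch over a toy 1:4 connection map (invented names, not the actual Kuwaiba data):

```python
# Toy connection map: each port links downstream to the next element(s).
connections = {
    "OLT-1/1/1":       ["FDH-SPLITTER-IN"],
    "FDH-SPLITTER-IN": ["SPL-OUT-1", "SPL-OUT-2", "SPL-OUT-3", "SPL-OUT-4"],
    "SPL-OUT-1":       ["ONT-HOME-A"],
    "SPL-OUT-2":       ["ONT-HOME-B"],
    "SPL-OUT-3":       ["ONT-HOME-C"],
    "SPL-OUT-4":       ["ONT-HOME-D"],
}

def trace_down(port, depth=0):
    """Depth-first walk from an upstream port towards the customer ONTs."""
    print("  " * depth + port)
    for next_port in connections.get(port, []):
        trace_down(next_port, depth + 1)

trace_down("OLT-1/1/1")
```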

We’ve excluded details about creating fibre joints (eg AJLs, LJLs, etc) and splicing to simplify the scenario, but more details and screen-caps of cable management can be found in this earlier inventory article.

Service Modelling

The diagram below shows that we’ve modelled two service types for each customer here:

  • One for GPON line rental
  • The other for a retail service – connection to the Internet via a Carrier / ISP

Note that the right-hand pane shows customer details for this service.

The diagram below shows more Service Impact details (ie the resources that the GPON service is utilising):

Note that it displays these associations in alphabetical order, not in a traceUp or traceDown sequence.

Summary

I hope you enjoyed this introduction into how we’ve modelled a sample GPON network into the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.

If you think there are better ways of modelling this network, if I’ve missed some of the nuances or practicalities, I’d love to hear your feedback. Leave us a note in the contact form below.

OSS Sandpit – Fixed Wireless Network Inventory Prototype

This article provides a tutorial for building Fixed Wireless (FW) Network components into the inventory module of our Personal OSS Sandpit Project.

This prototype build includes components such as:

  • A fixed wireless core network
  • Radio Links across licensed and unlicensed bands (5GHz and 24GHz)
  • Line of Sight analysis of each Radio Link and Viewshed
  • Fibre links (including cable management)
  • Tower Management
  • Routing and Switching
  • Layer 2 and Layer 3 service modelling (eg VLANs, VPLS tunnels, etc)

This Fixed Wireless prototype can be summarised as follows:

The primary link is at the bottom of the image (ie 101C – RICH – SWIN – BOXH). The fibre leased-line (101C – BOXH) is for resilience only.

The main intent of this network is to carry L2 services (between Customer Sites 90001 and 90003) and L3 services (between Customer Site 90001 and DXM1). This is as depicted in the service model diagram below:

We’ll revisit the modelling of these services later.

In this post we’ll describe the following use-cases:

  • Building Reference Data like data hierarchies, device types, etc
  • Performing Site Qualification and Line of Sight Analysis between locations
  • Creating Device Instances including buildings, towers, radios, etc
  • Creating Physical Connections between devices (eg radio links, fibre links)
  • Modelling Customer Services

 

Reference Data

Starting off with the data hierarchy, we had to develop some new building blocks (data classes) to support fixed wireless assets, plus new link types that make it quick to identify the differences between links on high-level diagrams.

We’ve developed a custom data hierarchy as follows:

  • Country
    • Radio_Infra (to separate FW core network assets from other network assets)
      • Site (core network sites – 101C, RICH, SWIN, BOXH, SURH)
        • Buildings / Comms Rooms
          • Rack
            • Equipment
      • Tower
        • Appurtenances (ie attachments to the tower)
          • Mount Groups (ie the frames / mounts that connect attachments to the towers / poles)
            • Equipment (including radio units)
    • City
      • Customer Sites
        • Devices (ODU / antenna, IDU, routers)

This required a few new templates, including Customer Sites and FW Core Sites.

Site Qualification and Line of Sight Analysis

Being a Fixed Wireless network, effective communications links rely on line of sight between radio units.

We used Google Earth for Site Qualification and Line of Sight. Here is the plan view of our small section of network:

But we need to ensure each of these legs has clear line of sight:

Here’s a view of the 101C – RICH link:

Here’s another view of the same link showing clearance above the MCG light towers:

Here’s the link from RICH – SWIN:

Here’s SWIN – BOXH:

And finally SWIN – SURH:

All clear on the link analyses above.

For comparison, the following diagram shows the corresponding core network, including fibre links, in Kuwaiba’s Outside Plant Module:

The colour-coding of the links in the diagram above is as follows:

  • Light Blue = 5GHz Radio Links (unlicensed, approx 150Mbps)
  • Green = 24GHz Radio Links (unlicensed, approx 1.4Gbps)
  • Orange = Licensed Radio Links
  • Royal Blue = Fibre Links

The fibre links required fibre management using Kuwaiba as shown below:

Note: The viewshed functionality in Google Earth provides a useful approximation of line of sight from a given point. The green shading in the example below approximates the areas where coverage can be achieved from BOX-MNT-01 (the first mount on tower 1 at the Box Hill core site).
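As a supplementary note, microwave link qualification usually checks Fresnel zone clearance as well as raw line of sight. Here’s a small sketch of that calculation (the link length is illustrative only), using the common rule of thumb that at least 60% of the first Fresnel zone should be obstruction-free:

```python
# First Fresnel zone radius at a point d1 / d2 metres from each end of
# the link. Rule of thumb: keep >= 60% of it obstruction-free.
import math

C = 299_792_458  # speed of light, m/s

def fresnel_radius_m(freq_hz: float, d1_m: float, d2_m: float) -> float:
    wavelength = C / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

link_length = 5_000.0  # an illustrative 5km leg
r = fresnel_radius_m(24e9, link_length / 2, link_length / 2)  # 24GHz link
print(f"Midpoint F1 radius: {r:.2f} m, 60% clearance: {0.6 * r:.2f} m")
```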

Device Instances

We then create the devices in Kuwaiba to build the prototype network model shown in the first diagram above. Not all devices are shown.

Here’s a partial rack view of the first rack at 101C:

We can also drill down into patch management within the rack as follows:

Tower Management

The following diagram shows a simulation of the tower at 101 Collins St (not actual), specifically showing the mount groups and other key attributes we’ve modelled in Kuwaiba:

Attributes such as elevation, azimuth and horizontal offset have all been identified from the Line of Sight Analysis done earlier in Google Earth.
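For anyone replicating this, the azimuth of each mount can be derived from the site coordinates using the standard initial-bearing (forward azimuth) formula. A minimal sketch follows – the coordinates below are rough placeholders, not surveyed values:

```python
# Initial great-circle bearing (forward azimuth) from site 1 to site 2,
# in degrees clockwise from true north.
import math

def azimuth_deg(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# eg 101 Collins St towards Richmond -- prints roughly 99 degrees (just
# south of due east), which would become the boresight azimuth of the mount
print(f"101C -> RICH azimuth: {azimuth_deg(-37.8136, 144.9631, -37.8183, 145.0010):.1f}")
```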

Note that the attributes of 101C-MNT-01 are shown in the right pane below:

Physical and Logical Connections

There is quite a lot of patching and port mirroring required to create the connectivity between the core sites, customer sites and data centre.

The following diagrams show the three leased fibre lines (we’ve excluded fibre joints and splicing to simplify the scenario, but more details and screen-caps of cable management can be found in this earlier inventory article):

Fibre cable from 101C to BOXH:

Fibre cable from 101C to customer site 90003 (including identification of cable name in left pane):

A full trace all the way from Customer Site 90001 to the Data Centre (L3 service chain) is shown in the diagram below. Click to view in full size:

 

Service Modelling

Re-showing the second diagram above, you’ll notice that there are a number of important service points relating to the L2 (the red, upper path) and L3 (the blue, lower path) services offered over this FW network:

If you look closely at the diagram below, you’ll notice that all of these service points have been modelled into Kuwaiba for Customer 90001:

You’ll also notice the Layer 2 services have been expanded to also show service impact (ie which devices / circuits / cables each link in the service chain relies on). The full service impact is modelled for L3 services as well in this demo, but not expanded in the screen-cap.

We’ve also modelled the Fibre Links as Leased Line Services, including service impacts shown below:

Summary

I hope you enjoyed this introduction into how we’ve modelled a sample Fixed Wireless network into the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.

If you think there are better ways of modelling this network, if I’ve missed some of the nuances or practicalities, I’d love to hear your feedback. Leave us a note in the contact form below.

OSS Sandpit – Smart City / IoT Network Inventory Prototype

This article provides an example of building Smart City and IoT (Internet of Things) Network components into the inventory module of our Personal OSS Sandpit Project.

This prototype build includes components such as:

  • A Command and Control Centre (CCC)
  • Satellite Earth Stations
  • Smart Buildings including:
    • Compute / Hosting (VxBlocks)
    • Comms (incl Unified Comms and In-Building Coverage)
    • Security (eg CCTV, Access Control)
    • Building Management Systems (BMS)
    • Public Address / Audio-Visual
    • HVAC 
  • An optical fibre ring network and direct fibre backhaul links to radio towers
  • Towers / masts that are affixed with 5G and LoRa antenna and radio heads, as well as point-to-point microwave antenna
  • 5G infrastructure (see this earlier post for more details on 5G setup)
  • LoRaWAN infrastructure (including LoRa antenna / gateway, LoRa network server,  app servers and the join server)
  • IoT Sensors including:
    • Power Management Systems
    • Smart Meters
    • Parking Sensors
    • Traffic Control Systems (TCS)
    • Variable Messaging Systems (VMS)
    • Tollway Systems
    • Rail Control Systems (RCS)
    • Vehicle Detection Systems (VDS)
    • IoT Asset / Logistics Management

This smart-city / IoT prototype can be summarised as follows:

This diagram replicates a smart city I helped to design a few years ago for Ha’il in Saudi Arabia. The city was intended to house around 500,000 people and align with an existing university. A Dry Dock, Business Park and Airport were also features of the design, which we prepared in conjunction with KEO Architects and Ernst & Young. It was a really interesting exercise in design and commercial modelling.

This smart city hasn’t been built yet, so the network you see modelled in the Inventory tool below is purely hypothetical.

Note that this network was built around a GPON network model, especially for the residential areas, but we’ll be covering that in a later prototype article.

In this post we’ll describe the following use-cases:

  • Building Reference Data like data hierarchies, device types, etc
  • Creating Device Instances including rack views and the virtualised layers within them
  • Creating Physical Connections between devices (eg fibre ring, radio network backhaul)
  • Creating Logical Connections between devices (eg LoRaWAN service mappings and LoRaWAN network layout)

 

Reference Data

Starting off with the data hierarchy, we had to develop some new building blocks (data classes) to support a more granular tower asset model and IoT sensors (as per yellow highlights below):

We’ve developed a custom data hierarchy as follows:

  • Country
    • Site (for key sites – Command & Control Centre, University, Airport, Business Centre and the Dry Dock)
      • Comms Room (Room)
        • Rack
          • Equipment (including VxBlocks, routers, ODFs, etc)
            • NFVI
              • NFVs (including 5G Core, Firewalls, LoRaWAN)
                • Apps (eg LoRa Network Server, Application Servers, etc)
    • Mobile Base Station (BTS) Sites
      • Comms Huts (Building)
        • Rack
          • Equipment
      • Tower
        • Appurtenances (ie attachments to the tower)
          • Mount Groups (ie the frames / mounts that connect attachments to the towers / poles)
            • Equipment (including remote units, LoRa gateways and antenna)
    • City
      • IoT Devices

This required a few new templates, including these BTS / LoRa sites:

More importantly, we needed to include additional classes and attributes to correctly model the towers. First we needed to add Mount Groups. These mounts hold the antenna and remote units that provide 5G coverage across the estate:

You’ll notice in the left-hand pane there are three sector mounts [MNT-01 to 03] (all at 30m elevation above ground level) that provide 360 degree 5G coverage (ie 3 x 120 degree sector cells). You’ll notice above that MNT-01 has an azimuth of 0 degrees, MNT-02 has an azimuth of 120 degrees and MNT-03 has an azimuth of 240 degrees (not shown).

MNT-04 holds the LoRa Gateway (which is an omni-cell – providing 360 degree coverage at an elevation of 25m). Meanwhile, MNT-05 holds a point-to-point microwave radio antenna (at an elevation of 29m).

The right-hand pane shows the additional attributes required to model the tower mounts.
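As a quick illustration of how those azimuths translate into coverage, here’s a hedged little sketch that picks the serving sector mount for a given bearing from the tower (it assumes ideal, non-overlapping 120 degree sectors; the mount names follow the description above):

```python
# Map a bearing from the tower to its serving sector mount.
SECTORS = {"MNT-01": 0, "MNT-02": 120, "MNT-03": 240}  # azimuths in degrees

def serving_sector(bearing_deg: float, beamwidth: float = 120.0) -> str:
    """Return the mount whose sector covers the given bearing."""
    for mount, azimuth in SECTORS.items():
        # angular distance from the sector boresight, wrapped to [-180, 180)
        offset = (bearing_deg - azimuth + 180) % 360 - 180
        if abs(offset) <= beamwidth / 2:
            return mount
    raise ValueError("no sector covers this bearing")  # unreachable at 3 x 120

print(serving_sector(95))    # MNT-02 (covers 60 to 180 degrees)
print(serving_sector(350))   # MNT-01 (covers 300 through 0 to 60 degrees)
```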

This tower configuration is reflected in the diagram below:

Device Instances

We then create the devices to build the prototype network model shown in the first diagram above.

And a more detailed view of the IoT Management:

You’ll also see these VNFs and Apps reflected in the Rack Layout of the VxBlock below:

Physical and Logical Connections

There is quite a lot of patching and port mirroring required to create the connectivity between the CCC, key sites, Base Stations, Earth Station, Satellite and IoT Sensor Sites, as follows:

Ha’il City Overview

Note that this mirrors the first image above, with the exception of the Airport and Satellite which are off the top of the page.

You’ll also notice that there is a fibre ring (red lines) between key sites as well as point-to-point fibre backhaul to the BTS site. IoT sensors are also shown (LoRa radio connectivity not shown though).

Fibre link between CCC and BTS001

The 5G antenna and remote unit are at left, connecting to the 5G Core at right.

One leg of the Fibre Ring

You’ll notice that this is the link between a router in the CCC (Data Centre – Comms Room 1) to the Comms Room at Ha’il University.

LoRaWAN Logical Connectivity

There are likely to be additional App Servers required, but two have been included for demonstration purposes.

 

Satellite Modelling

Refer to this earlier post for how to model Satellite networks and services.

 

IoT Service Modelling

The diagram below shows the Traffic Light service (MIoT) as well as the infrastructure it uses (ie the IoT Traffic Light device, the nearest LoRa Gateway to these fixed IoT devices and the LoRa network server in the CCC).

We could model connectivity of all the IoT sensors back through the LoRa Gateway with physical links, but instead we’ve simplified, showing only the service utilisation. 

Service Impact Analysis (SIA)

We can also use the service relationships to determine which customer services would be affected if the LoRa Gateway (HA001-LORA-01) failed. In the image above, there would be three traffic light services affected (see under “Uses” in the bottom pane).

Similar analysis could be done using the getAffectedServices API that we demonstrated in the OSS Sandpit Inventory Intro post.

Summary

I hope you enjoyed this introduction into how we’ve modelled a sample Smart City and IoT network into the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.

If you think there are better ways of modelling this Smart City network, if I’ve missed some of the nuances or practicalities, I’d love to hear your feedback. Leave us a note in the contact form below.

OSS Sandpit – Satellite Network Inventory Prototype

This article provides an example of building Satellite Network components into the inventory module of our Personal OSS Sandpit Project.

This prototype build includes components such as:

  • A Satellite
  • Earth Stations
  • Satellite Aggregation Site
  • Beams (including Beam to Earth Station mappings)
  • Customer services
  • Satellite Dishes
  • Satellite Receivers (ODU / IDU)
  • Satellite Modems
  • Leased Lines (backhaul)

Our prototype is summarised in the diagram below:

We describe this via the following use-cases:

  • Building Reference Data like data hierarchies, device types, connectivity types, containment, device layouts, templates, flexible data models, etc
  • Creating Device Instances including rack views and the virtualised layers within them
  • Creating Physical Connections between devices
  • Creating Logical Connections between devices

Reference Data

Starting off with the data hierarchy, we had to develop some new building blocks (data classes) to support the devices and multiplexing used in satellite networks (including those highlighted in yellow below):

In our prototype, we’ve developed a custom containment model as follows:

  • Country
    • Site (for head-end equipment)
      • System
        • Rack
          • Equipment
    • City (for customer sites)
      • CustomerSites
        • Equipment
    • Satellite_Infra
      • Satellite Earth Stations
        • Rack
          • Equipment
      • Aggregation Site
        • Transmission System
          • Rack
            • Equipment
      • Satellite
        • Beam (downlinks)
        • Uplinks (as VirtualPorts)

In a real situation, you probably wouldn’t bother to model to this level of detail as it just means more data to maintain. We’ve just included this detail to show some of the attributes of our sample satellite network.

The satellite network also required some new templates, especially for the Earth Stations and Customer sites that have many devices and ports that you wouldn’t want to re-create each time.

Device Instances

We then create the devices to build the prototype network model shown in the first diagram above. This includes:

  • Satellite
  • Earth Stations
  • Satellite Aggregation Site
  • Satellite Dishes
  • Satellite Receivers (ODU / IDU)
  • Satellite Modems
  • Switches
  • Routers

The diagram below shows a small snapshot of the Geraldton Earth Station. The templates we created earlier helped to avoid re-creating these hierarchies for each Earth Station and Customer Site:

Earth Stations (including Infrastructure within Geraldton):

SkyMuster II Satellite (showing beams and transceivers / uplinks):

SkyMuster II Satellite Attributes:

Satellite Customer Sites:

Customer Site Details:

Physical and Logical Connections

There is quite a lot of patching and port mirroring required to create the connectivity between the Customer Sites, Aggregation Site, Earth Station and Satellite, as follows:

Head-end (Customer core site, Aggregation Site, Earth Station):

Earth Station to Satellite:

Satellite to Customer Sites (modelled as MPLS Links to handle many:one links to the satellite):

Customer Site (including identification of which beam it is mapped to):

Note: Whilst there would be significant outside plant (OSP) such as cables, joints and splices, especially between the Earth Station (in Geraldton, in Western Australia) and the Aggregation Site (in Eastern Creek, in New South Wales), we’ve only shown a point-to-point leased fibre link. To see how fibre cables, splice joints and ODFs are modelled, refer to the introduction to the OSS Sandpit inventory module.

Service Impact Analysis (SIA)

We can also use the service relationships to determine which customer services would be affected if the Geraldton router (GER-RTR-01) failed. In the example below, there would be two services affected (see under “Uses” in the bottom pane).

The upper pane shows all of the devices and links that Customer Service C-42-00001 relies upon for service.

Similar analysis could be done using the getAffectedServices API that we demonstrated in the OSS Sandpit Inventory Intro post.

Summary

I hope you enjoyed this brief introduction into how we’ve modelled a sample Satellite network into the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.

If you think there are better ways of modelling this Satellite network, if I’ve missed some of the nuances or practicalities, I’d love to hear your feedback. Leave us a note in the contact form below.

OSS Information Overload, Underload

Our OSS/BSS collect a lot of information. But how much of it is used and in what ways? How do the users find the information they need to make decisions?

In some cases, our OSS completely overload the user with information. An example might be in performance metrics. Our network might have hundreds of nodes and each node is collecting dozens/hundreds of performance data points every few minutes. If we just present this as hundreds of adjacent time-series graphs then we’re making the task difficult for our users. Finding the few actionable data points is like trying to find a paragraph of text that exists in print somewhere in the room below. Our users might never find it.

Image from mastertechmold.com 

In other cases, we underwhelm our users, not giving them all the data they need. How often do you hear of network operations staff receiving an alarm and then connecting to the device via CLI (Command Line Interface) to perform additional diagnostics? Or a capacity planner jumping between inventory, utilisation graphs, calculators, maps, etc.

For the overwhelming cases, look to deliver roll-ups / drill-downs of information. For example, show a rolled-up heat-map of all metrics across the whole of the network (eg green, orange, red). But then let users progressively drill down (eg by region, by colour / severity, by domain, etc) to find what they’re seeking.
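Here’s a toy sketch of that roll-up idea – collapsing per-device metrics into a per-region worst-case colour, so the user sees one heat-map cell per region and drills down only where needed (thresholds and device names are invented):

```python
# Roll up per-device utilisation into a worst-case colour per region.
SEVERITY = {"green": 0, "orange": 1, "red": 2}

def colour(utilisation_pct: float) -> str:
    if utilisation_pct >= 90: return "red"
    if utilisation_pct >= 70: return "orange"
    return "green"

metrics = {  # region -> {device: utilisation %}
    "VIC": {"MEL-RTR-01": 45.0, "MEL-RTR-02": 72.5},
    "NSW": {"SYD-RTR-01": 93.1, "SYD-RTR-02": 51.0},
}

rollup = {
    region: max((colour(v) for v in devices.values()), key=SEVERITY.get)
    for region, devices in metrics.items()
}
print(rollup)  # {'VIC': 'orange', 'NSW': 'red'} -- drill into NSW first
```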

For the underwhelming cases, walk the journey with the users – map the user personas, their most important activities, the data points they need to perform those activities (or generate via those activities) and determine whether supplementary data is required.

Personally, I’ve tended to find that cross-linked data has given me great insights. I like being able to query data and mash it up to test ad-hoc hypotheses. We don’t necessarily need all the cross-linked data baked into our OSS/BSS tools, but we do need great data query and visualisation tools.

That’s one of the reasons the data visualisation block features prominently in my OSS Sandpit architecture. 

When considering which tool to use, I looked beyond the norm as the typical telco data management tools tend to have a few limitations.

I decided to look into the types of tools used heavily in the finance industry. In part because I wanted to test Candlestick / Bollinger Band functionality, but also because finance handles massive data sets (often time-series data) and needs to present data in a way that actionable insights can be derived quickly and easily.
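For the record, the Bollinger Band construct itself is simple – a moving average plus / minus a multiple of the rolling standard deviation. Here’s a minimal sketch applied to a synthetic latency series (purely illustrative data):

```python
# Minimal Bollinger Band sketch: 20-sample moving average +/- 2 standard
# deviations, applied to a synthetic latency series.
from statistics import mean, stdev

def bollinger(series, window=20, k=2.0):
    """Yield (mid, upper, lower) bands for each full window in the series."""
    for i in range(window, len(series) + 1):
        w = series[i - window:i]
        mid, sd = mean(w), stdev(w)
        yield mid, mid + k * sd, mid - k * sd

latency_ms = [20 + (i % 7) * 0.5 for i in range(40)]   # synthetic data
mid, hi, lo = list(bollinger(latency_ms))[-1]
latest = latency_ms[-1]
status = "BREAKOUT" if not (lo <= latest <= hi) else "normal"
print(f"latest={latest:.1f}ms band=({lo:.1f}, {hi:.1f}) {status}")
```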

I’m currently envisaging different data scenarios in which to test graphing techniques like the following samples provided by Kx Dashboards using Vega:

… including:

  • Sankey diagrams – to show relative activity flow volumes
  • Candlestick / Bollinger – to show trends and exceptions (see BBs on ChartIQ, which is also integrated into Kx Dashboards)
  • Radar views – to map complex comparisons such as performance or utilisation of assets over multiple months
  • Word clouds – to show most common text / phrases in log files
  • Geo – overlay data such as customer counts, device counts, utilisation, percentage connections, network health, etc, etc onto map views
  • Radial convergence – to show the volume of interconnections between devices / ports
  • Hierarchical edge bundles – to show network hierarchies for root-cause-trace (RCT). This might include tying together the virtualisation stacks for networks like 5G, where it’s proving to be a challenge to identify root-cause.
  • Area grouping – to show predominant network connectivity between areas
  • Circular ties – to show graph data relationships
  • Not to mention all your typical graphing models such as
    • Bubble charts – to show relative volumes
    • Scatter-plots – to show network performance vs line length
    • Histograms
    • Bar charts
    • Pie charts
    • etc

OSS Sandpit – 5G Network Inventory Prototype

5G networks seem to be the big investment trend in telco at the moment. It comes with a lot of tech innovation such as network slicing and an increased use of virtualised network functions (VNFs). This article provides an example of building 5G Network components into the inventory module of our Personal OSS Sandpit Project.

This prototype build includes components such as:

  • Hosting infrastructure
  • NFVI / VIM (NFV Infrastructure and Virtualised Infrastructure Management)
  • A 5GCN (5G Core Network)
  • An IMS (IP Multimedia Subsystem)
  • An RIC (RAN Intelligent Controller)
  • Virtualised Network Functions (AUSF, AMF, NRF, CU, DU, etc – a more extensive list of examples is provided later in this article)
  • Mobile Edge Compute (MEC)
  • MEC Applications like gaming servers, CDN (Content Delivery Networks)
  • Radio Access Network (RAN) and Remote Radio Units (RRU)
  • Outside Plant for fibre fronthaul and backhaul
  • Patching between physical infrastructure
  • End to end circuits between DN (Data Network), IMS, 5GCN, gNodeB, RRU
  • Logical Modelling of 5G Reference Points

Our prototype (a Standalone 5G model) is summarised in the diagram below:

We describe this via the following use-cases:

  • Building Reference Data like data hierarchies, device types, connectivity types, containment, device layouts, templates, flexible data models, etc
  • Creating Device Instances including rack views and the virtualised layers within them
  • Creating Physical Connections between devices
  • Creating Logical Connections between devices
  • Creating Network Slices in the form of services
  • Performing Service Impact Analysis (SIA)

Reference Data

Starting off with the data hierarchy, we had to develop some new building blocks (data classes) to support the virtualisation used in 5G networks. This included some new network slice types, virtualisation concepts and various other things:

In our prototype, we’ve developed a custom containment model as follows:

  • Country
    • Site
      • System (Network Domain)
        • Rack
          • Hosting
            • NFVI / VIM
              • VNF-Groupings (eg CU, DU, MEC, IMS, etc)
                • VNF
                  • Apps (like Gaming Servers)

In a real situation, you probably wouldn’t bother to model to this level of detail as it just means more data to maintain. We’ve just included this detail to show some of the attributes of our sample 5G network.

5G also required some new templates, especially for core infrastructure that can house dozens of VNFs, 5G reference points and apps (eg games servers, CDN, etc) that you don’t want to recreate each time.

The 5G System architecture includes the following network functions (VNFs), among others:

  • Authentication Server Function (AUSF).
  • Access and Mobility Management Function (AMF).
  • Data Network (DN), e.g. operator services, Internet access or 3rd party services.
  • Unstructured Data Storage Function (UDSF).
  • Network Exposure Function (NEF).
  • Network Repository Function (NRF).
  • Network Slice Specific Authentication and Authorization Function (NSSAAF).
  • Network Slice Selection Function (NSSF).
  • Policy Control Function (PCF).
  • Session Management Function (SMF).
  • Unified Data Management (UDM).
  • Unified Data Repository (UDR).
  • User Plane Function (UPF).
  • UE radio Capability Management Function (UCMF).
  • Application Function (AF).
  • User Equipment (UE).
  • (Radio) Access Network ((R)AN).
  • 5G-Equipment Identity Register (5G-EIR).
  • Network Data Analytics Function (NWDAF).
  • CHarging Function (CHF).

Device Instances

We then create the devices to build the prototype network model shown in the first diagram above. This includes:

  • Hosting infrastructure
  • NFVI / VIM (NFV Infrastructure and Virtualised Infrastructure Management)
  • A 5GCN (5G Core Network)
  • An IMS (IP Multimedia Subsystem)
  • An RIC (RAN Intelligent Controller)
  • Virtualised Network Functions (AUSF, AMF, NRF, CU, DU, etc)
  • Mobile Edge Compute (MEC)
  • MEC Applications like gaming servers, CDN (Content Delivery Networks)
  • Radio Access Network (RAN) and Remote Radio Units (RRU)

The diagram below shows a small snapshot of the 5G Core. The templates we created earlier sure came in handy to avoid re-creating these hierarchies for each device type:

Note that the VirtualPorts are used for 5G reference points to support logical links, which we’ll cover later.

The diagrams below show the rack-layout views of core and edge hosting respectively. You’ll notice the hierarchy of device, NFVI, VNF-group, VNFs and applications are shown:

Physical Connections

To create the physical connectivity between core, edge and RRU, we’ve re-used the fibre cables, splice joints and ODFs that we demonstrated in the introduction to the OSS Sandpit inventory module.

In this case, we’ve just used fibres that were spare from last time and patched onto the 5G network’s physical infrastructure. The diagram below shows the physical path all the way from the Data Network (DN – aka a core router) to the transmitting antenna at site 2040.

This diagram includes router, core hosting, ODFs (optical patch panels), cables, splice joints, edge hosting, Radio Units and antenna, as well as fibre front and backhaul circuits.

Logical Connections

We also decided to create the various logical connections – for the most part these are interfaces between VNFs – via the standardised 5G Reference Points.

You can also find a reference to the various logical interfaces / reference-points in the top-right corner of the prototype diagram (first diagram above).

You can also see the full list of reference points from any given VNF, as shown in the example of the AMF below. You’ll notice that these have already been set up as logical links to other components, as shown under “mplsLink” in the bottom pane (ie the top pane shows the “ports” on the AMF, while the bottom pane shows the logical links to other VNFs).

The upper pane shows the instance of AMF (on the core) and its various interface points (the A-end of each interface, as VirtualPorts). The lower pane shows the relationships to Z-end components via logical circuits (note that I had to model them as MPLS links, which is not quite right, but it’s the workaround needed in the tool).

You’ll also notice that the AMF is used by a number of network slices (under “uses” in the bottom pane), but we’ll get to that next.

Network Slices

Whilst not really technically correct, we’ve simulated some network slices in the form of “internal” services. To simplify, for each network slice type we’ve created a separate service terminating at each RRU. So, we’ve associated each RRU, Mobile Edge Infra (RAN), AMF (the Access and Mobility Management Function within the core) and the NSSF (the Network Slice Selection Function within the core) to these network slice “services.”
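Here’s a simplified sketch of that slice-as-service association – each slice “service” records the resources it relies on, and inverting the association gives the impact view we’ll use below. All resource names are invented placeholders:

```python
# Each slice "service" records the resources it relies on. Inverting the
# association answers "which slices does this resource support?"
slices = {
    "SLICE-eMBB-2040":  ["RRU-2040", "MEC-2040", "AMF-CORE-01", "NSSF-CORE-01"],
    "SLICE-URLLC-2040": ["RRU-2040", "MEC-2040", "AMF-CORE-01", "NSSF-CORE-01"],
    "SLICE-MIoT-2052":  ["RRU-2052", "MEC-2052", "AMF-CORE-01", "NSSF-CORE-01"],
}

def affected_slices(failed_resource: str):
    return [s for s, resources in slices.items() if failed_resource in resources]

print(affected_slices("AMF-CORE-01"))  # all three toy slices (cf. the seven below)
print(affected_slices("RRU-2052"))     # only the slice served via site 2052
```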

Some samples are shown below.

BTW 3GPP has defined the following Slice Types:

  • MIoT – Massive Internet of Things – to support huge device counts with enhanced coverage and low power usage
  • URLLC – Ultra-Reliable Low-Latency Communications – to support low-latency, mission-critical applications
  • eMBB – Enhanced Mobile Broadband – to provide high speed data for application use (eg video conferencing, etc) and
  • V2X – Vehicle to Everything

Service Impact Analysis (SIA)

We can also use the service relationships to determine which Network Slices would be affected if the AMF failed. In the example below, there would be seven slices affected (see under “Uses” in the bottom pane), including all slices supported via sites 2040 and 2052.

Similar analysis could be done using the getAffectedServices API that we demonstrated in the OSS Sandpit Inventory Intro post.

SigScale RIM

Over the last few weeks, I’ve also been using another open-source inventory management tool from SigScale called RIM (a Resource Inventory Manager designed to support service assurance use cases). It shines a light on mobile networks in particular.

The project creators authored the TM Forum best practice document IG1217 Resource Inventory of 3GPP NRM for Service Assurance which details the rationale for, and process of, mapping 3GPP information models to TM Forum’s TMF634 (Resource Catalog Mgmt) and TMF639 (Resource Inventory Mgmt) standards.

I plan to also use RIM’s REST interface (based on TM Forum’s OpenAPIs) to share data both ways with the Kuwaiba inventory module in the future. 

Summary

I hope you enjoyed this brief introduction into how we’ve modelled a sample 5G network into the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.

If you think there are better ways of modelling the 5G network, if I’ve missed some of the nuances or practicalities, I’d love to hear your feedback. Leave us a note in the contact form below.

OSS Sandpit – Resource / Inventory Module

This article provides a description of the inventory baseline, one module of our Personal OSS Sandpit Project.

As outlined in the diagram below, this incorporates the Inventory solution (by Kuwaiba), the graph database that underpins it as well as its APIs and data query tools. The greyed out sections are to be described in separate articles.

OSS Sandpit Inventory Baseline

We’ve tackled inventory first as this provides the base data set about resources in the network that other tools rely upon.

As the baseline introduction to the inventory module, we’ll provide a quick introduction to the following use-cases:

  • Building Reference Data like data hierarchies, device types, connectivity types, containment, device layouts, templates, flexible data models, etc
  • Creating Device Instances including rack views
  • Creating Physical Connections between devices
  • Creating Logical Connections between devices
  • Creating Services and their relationships with resources / inventory
  • Creating Outside Plant Views on geo-maps that include buildings, pits, splice cases, cable management, splicing, towers, antenna, end-to-end L1 circuits
  • Assigning IP Addresses and subnets with an IPAM tool
  • Creating an MPLS network
  • Creating an SDH network
  • Data import / export / updates via APIs including Service Impact Analysis (SIA)
  • Data import / export / updates via a Graph Database Query Language

Reference Data

Kuwaiba has a highly flexible and extensible data model. We’ve added many custom data classes (eg device categories like routers, switches, etc) such as those shown below:

And selectively added custom attributes to each of the classes (such as the Router class below):

Once the classes are created, we then create the Containment model (ie hierarchy of data objects). In our prototype, we’ve developed a custom containment model as follows:

  • Country
    • Site
      • System (Network Domain)
        • Rack
          • Equipment and so on.

We’ve also created a series of data templates to simplify data entry, such as the Cisco ASR 9001 and Generic Router examples below:

But we can also create templates for other objects, such as cables. The following sample shows a 24 fibre cable with two loose-tubes, each containing 12 fibre strands. (Note that colour-coding on tubes and strands is important for splicing technicians and designers)

Site and Device Instances

Next, we created some sites and devices within the sites, as shown below:

You’ll notice that some devices are placed inside a rack whilst others aren’t. You’ll also notice the naming convention for all devices (eg site – system – type – index, where site = 2052, system = DIS (distribution), type = CD (CD player for messaging) and index = 01 (the first instance of CD player at this site)).

It even allows us to show rack layouts (of equipment positions inside a rack):

And even patching-level details inside the rack (pink and blue lines represent patch-leads connecting to ports on the Cisco ASR 9001 router in rack position 2):

Physical Connections

Physical connections can take the form of patch-leads or via strands / conductors inside cables.

The diagram below represents a stylised optical fibre connection that we’ve created between a CODEC at site 2000 and another CODEC at site 2052. As you’ll also notice, it traverses two patch panels (ODFs – optical distribution frames), two splice joints and three optical fibre cables.

In our inventory tool, the stylised connection above presents as follows, where A and B have been added to indicate the patch-leads from the CODECs to patch-panels (ODFs):

Logical Connections

We can also represent logical and virtual connections. In the case below, we show a logical connection from the Waveguide of an antenna, to the broadcast of that signal to a neighbouring site, which then picks up the signal at the UAST (receiver).

Outside Plant Views

Outside plant (OSP) comprises the cables, joints, manholes, etc that help connect sites and equipment together. In the example below, we see the OSP view of the fibre circuits we described above in “Physical Connections.” If you look closely at the GIS (map overlay) below, you’ll spot sites 2000 and 2052, as well as the cables and splice joints. The lines show the physical route that the cables follow.

You may also have noticed that the green line is showing a radio broadcast link, which is point-to-point radio and therefore follows a straight line path from antenna to antenna.

Cable Management

Cable management and splicing / connections are supported, with tubes / strands being selected and then terminated at each end of the cable (in this case CABLE1 and its strands connect the splice case in the left pane with the ODF in the right pane). These can be managed on a strand-by-strand basis via the central pane. From the diagram, we can see that fibre 001 in CABLE1 is connected to F1-001 in the splice case and 001-back on the ODF, per the A-end and B-end details in the bottom left corner.

From the naming convention, you’ll notice that there are two sets of cable “ports” in the splice case, as indicated by fibre numbers starting with F1 and F2 respectively.

Topology Views

The diagram below shows a topological view of the devices within a site, helping operators to visualise connectivity relationships.

Services

One of the most important roles that inventory solutions play is as a repository of equipment and capacity. They also assist in allocating available resources to customer services. In the example below, service number “2052-ABC_LR_97.3FM-BSO” has a dependency on a tower, antenna, antenna switch frame and many more devices. If any of these devices fails, it will impact this customer service, as we’ll describe in more detail below.

 

IP Address Management (IPAM) and IP Assignment

We can manage IP address ranges / subnets, such as the examples below:

And then allocate individual IP addresses to devices, such as assigning IP address 222.22.22.1 to the CODEC, as shown on the “Physical Connection” diagram above.

MPLS Network

The following provides a simple MPLS network cloud for a customer:

APIs (including Service Impact Analysis Query)

The solution has hundreds of in-built APIs that facilitate queries, adds, modifies and deletes of data. 

The example shown below is getAffectedServices, which performs a service impact analysis. In this case, if we know that the device TEST-CD-02 fails, it will affect service number “2052-ABC_LR_97.3FM-BSO.” We can also look up the attributes of that service, which could include customer and customer contact details so that we can inform them their service is degraded and that repair processes have been initiated.

Note that the left-side pane is the Request and the right-side pane is the Response across the getAffectedServices API.
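Conceptually, the query is just a walk along “uses” relationships from the failed device to services and then to customer contacts. Here’s an illustrative sketch (the device and service names echo the example above; the customer contact details are invented, and the data structures are not Kuwaiba’s API):

```python
# Walk "uses" relationships: failed device -> services -> customer contact.
uses = {  # device -> services that rely on it
    "TEST-CD-02": ["2052-ABC_LR_97.3FM-BSO"],
}
service_owner = {  # service -> (customer, contact) -- hypothetical details
    "2052-ABC_LR_97.3FM-BSO": ("ABC Local Radio", "noc@example.com"),
}

def notify_affected(failed_device: str):
    for service in uses.get(failed_device, []):
        customer, contact = service_owner[service]
        print(f"Notify {customer} ({contact}): service {service} degraded "
              f"by failure of {failed_device}")

notify_affected("TEST-CD-02")
```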

Data management via queries of the Graph Database

This inventory tool uses a Neo4j graph database. Using the Neo4j browser, we can connect to the database and issue Cypher queries (Cypher is to graph databases roughly what SQL is to relational databases, allowing you to read / write data from / to the database).

The screenshot below shows the constellation of linked data returned after issuing the cypher query (MATCH (n:InventoryObjects…. etc)). The data can also be exported in other formats, not just the graphical form shown here. 
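The same database can also be queried programmatically. Here’s a hedged sketch using the official neo4j Python driver – the connection details are placeholders, and the query is illustrative since the actual node labels depend on Kuwaiba’s schema:

```python
# Issue a Cypher query against the Kuwaiba Neo4j database from Python.
# Credentials, URI and the query itself are placeholders / assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    result = session.run(
        "MATCH (n:InventoryObjects)-[r]->(m) RETURN n, type(r), m LIMIT 25"
    )
    for record in result:
        print(record)

driver.close()
```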

I hope you enjoyed the brief introduction to the Inventory module of our Personal OSS Sandpit Project. Click on the link to step back to the parent page and see what other modules and/or use-cases are available for review.

Building a Personal OSS Sandpit

Being Passionate About OSS, I’ve used this blog / site to share this passion with the OSS community. The aim is to evangelise and make operational support tools even more impactful than they already are.

But there’s a big stumbling block – the barrier to entry into the OSS industry is huge. The barrier manifests in the following ways:

  • Opportunities – due to the breadth of knowledge required to be proficient, there aren’t many entry-level, OSS-related roles
  • Knowledge / Information – such is the diversity of knowledge, no single person has expertise across all facets of OSS – facets such as software, large-scale networks, IT infrastructure, business processes, project implementation, etc, etc. Even within OSS, there’s too huge a range of functional capability for anyone to know it all. Similarly, there’s no single repository of information, although organisations like TM Forum do a great job at sharing knowledge. On top of all that, the tech-centric worlds that OSS operate within are constantly evolving and proliferating
  • Access to Tools – OSS / BSS tools tend to be highly flexible, covering all aspects of a telco’s business operations (from sales to design to operations to build). OSS tend to cost a lot and take a long time to build / configure. That means that unless you already work for a telco or an OSS product vendor, you may struggle to get hands-on experience using the tools

It’s long been an ambition to help reduce the third barrier to entry by making personal OSS sandpit environments accessible to anyone with the time and interest.

The plan is to build a step-by-step guide that allows anyone to build their own small-scale OSS sandpit and try out realistic use-cases. The aim is to keep costs to almost nothing to ensure nobody is limited from tackling the project/s.

The building blocks of the sandpit are to be open-source and ideally reflect cutting-edge technologies / architectures. The main building blocks are:

  1. Network – a simulated, multi-domain network that can be configured and tested
  2. Fulfilment / BSS – the ability to create product offerings, take customer orders for those products, then implement as services into a network
  3. Assurance / Real-time – to perform (near) real-time monitoring of the network and services using alarms / performance / telemetry / logging
  4. Resource / Inventory – to design and store records of multi-domain networks that span PNI (Physical Network Inventory), LNI (Logical Network Inventory), OSP (Outside Plant), ISP (Inside Plant) and more
  5. Data Visualisation & Management – being able to interact with data generated via the abovementioned building blocks. Interact via search / queries, reports, dashboards, analytics, APIs and other forms of data import / export

OSS Sandpit Concept Diagram

Until recently, this sandpit has only been an ambition. But I’m pleased to say that some of these building blocks are starting to take shape. I’ll share more details in coming days and update this page.

This includes:

  • Introduction to The Resource / Inventory Module with the following use-cases:
    • Building Reference Data like location hierarchies, device types, connectivity types, containment, device layouts, templates, flexible data models, etc
    • Creating Device Instances including rack views
    • Creating Physical Connections between devices
    • Creating Logical Connections between devices
    • Creating Services and their relationships with resources / inventory
    • Creating Outside Plant Views on geo-maps that include buildings, pits, splice cases, cable management, splicing, towers, antenna, end-to-end L1 circuits
    • Assigning IP Addresses and subnets with an IPAM tool
    • Creating an MPLS network
    • Creating an SDH network
    • Data import / export / updates via APIs including Service Impact Analysis (SIA)
    • Data import / export / updates via a Graph Database Query Language
  • Designing the Inventory / Resources of a 5G Network including:
    • Hosting infrastructure
    • NFVI / VIM
    • A 5GCN (5G Core Network)
    • An IMS (IP Multimedia Subsystem)
    • An RIC (RAN Intelligent Controller)
    • Virtualised Network Functions (AUSF, AMF, NRF, CU, DU, etc)
    • Mobile Edge Compute (MEC)
    • MEC Applications like gaming servers, CDN (Content Delivery Networks)
    • Radio Access Network (RAN) and Remote Radio Units (RRU)
    • Outside Plant for fibre fronthaul and backhaul
    • Patching between physical infrastructure
    • End to end circuits between DN (Data Network), IMS, 5GCN, gNodeB, RRU
    • Logical Modelling of 5G Reference Points
  • Designing the Inventory / Resources of a Satellite Network including:
    • A Satellite
    • Earth Stations
    • Satellite Aggregation Site
    • Beams (including Beam to Earth Station mappings)
    • Customer services
    • Satellite Dishes
    • Satellite Receivers (ODU / IDU)
    • Satellite Modems
    • Leased Lines (backhaul)
  • Designing the Inventory / Resources of a Smart City / IoT (Internet of Things) Network including:
    • A Command and Control Centre (CCC)
    • Satellite Earth Stations
    • Smart Buildings including:
      • Compute / Hosting (VxBlocks)
      • Comms (incl Unified Comms and In-Building Coverage)
      • Security (eg CCTV, Access Control)
      • Building Management Systems (BMS)
      • Public Address / Audio-Visual
      • HVAC 
    • An optical fibre ring network and direct fibre backhaul links to radio towers
    • Towers / masts that are affixed with 5G and LoRa antenna and radio heads, as well as point-to-point microwave antenna
    • 5G infrastructure
    • LoRaWAN infrastructure (including LoRa antenna / gateway, LoRa network server,  app servers and the join server)
    • IoT Sensors including:
      • Power Management Systems
      • Smart Meters
      • Parking Sensors
      • Traffic Control Systems (TCS)
      • Variable Messaging Systems (VMS)
      • Tollway Systems
      • Rail Control Systems (RCS)
      • Vehicle Detection Systems (VDS)
      • IoT Asset / Logistics Management
  • Designing the Inventory / Resources of a Fixed Wireless (FW) Network, including:
    • A fixed wireless core network
    • Radio Links across licensed and unlicensed bands (5GHz and 24GHz)
    • Line of Sight analysis of each Radio Link and Viewshed
    • Fibre links (including cable management)
    • Tower Management
    • Routing and Switching
    • Layer 2 and Layer 3 service modelling (eg VLANs, VPLS tunnels, etc)
  • Designing the Inventory / Resources of a GPON (Gigabit Passive Optical Network), including:
    • Passive Optical Network (cables, patch panels, splices, splitters, containment, multiports, other splice joints)
    • Active GPON equipment (OLT, ONT)
  • Designing the Inventory / Resources of a Telco Cloud / Data Centre, including:
    • Hosting Services including:
      • IaaS (VMs, storage, network – FlexPod)
      • PaaS (ONTAP-AI – a hosted AI solution, hosted voice)
      • SaaS (email, secure keys)
      • Internet ecosystems
      • Cloud provider ecosystems
      • CoLocation / Rack Management services
    • Leased Lines including:
      • International Submarine cable routes
      • Core / DC Interconnect
      • Local customer links
      • ISP links
    • Carrier MPLS network modelling, including:
      • IPAM (IP Address Management)
      • VPN / VLAN management
      • VRF and AS management
    • Virtualisation and application management (including service management and billing)
    • Equipment Layout art for:
      • Routers
      • Switches
      • Cisco FlexPod Chassis
      • NVIDIA ONTAP AI Chassis
  • Gathering, presenting and visualising data including:
    • Many unique example graph types

More details and use-cases to come, including:

  1. Inventory modelling of:
    1. HFC / CableCo
    2. A hybrid power / telco network
    3. SDH Transmission
    4. More network virtualisation (SDN), in addition to the virtualisation scenarios covered in the 5G prototype (see above)
    5. Are there any other scenarios you’d like to see???

 

If you’d like to know more about our Personal OSS Sandpit Project, fill in the contact form below.