How is OSS/BSS service and resource availability supposed to work?

The brilliant question above was asked by a contact who is about to embark on a large OSS/BSS transformation.  That’s certainly a challenging question to start the new year with!

The following was provided for a little more context:

  • We have a manually maintained table for each address where we can store which services are available (eg DSL up to 5 Mbps or Fiber Data 300 Mbps)

  • This manual information has no data-level connection to the actual plant used to serve the address 

  • In a “perfect world”, how does this work?

  • Where is the data stored? Ex: Does a geospatial network inventory store this data, then the BSS requests it as needed?

  • How does a typical OSS tie together physical network and equipment to products and offerings?

  • How is it typically stored? How is it accessed?

  • Sort of related to the address, we have “Facility” records that include things like the Electronics (Card, slot, port, shelf, etc) and some important “hops” along the way 

  • Right now if a tech makes changes to physical plant, we have to manually update our mapping (if the path changes), spreadsheets (if fiber assignment changes) or paper records (if copper pair assignments change). Additionally, we might need to update the Facilities database

  • It doesn’t “use” its “awareness” of our plant or network equipment to do anything except during service orders, where certain products are tied to provisioning features (eg a CallerID billing code on an order causes a command to be issued to the switch to add that feature)

  • There is no visibility into network status. How does this normally work?

  • I feel like I’m missing a fundamental reference point because I’ve never seen an actual working example of “Orders down, faults up”, just manually maintained records that sort of single-directionally “integrate” to network devices but only in the context of what was ordered, not in the context of what is available and what the real-time status is.

Wow! Where do we start? Certainly not an easy or obvious question by any means. In fact it’s one of the trickier of all OSS/BSS challenges.

In the old days of OSS/BSS, services tended to be circuit-oriented and there was a direct allocation of logical / physical resources to each customer service. You received an order, you created a “customer circuit” for the order, you reserved suitable / available resources in your inventory to assign to the circuit, then issued work order activities to implement the circuit. When the work order activities were complete, the circuit was ready for service.

The utilised resources in your inventory system/s were tagged with the circuit ID or service ID and were therefore not available to other services. This association also allowed Service Impact Analysis (SIA) to be performed. In the background, you had to reconcile the real resources available in the network with what was being shown in your inventory solution. Relationships were traceable down through all layers of the TMN stack (as below). Status of the resources (eg a failed Network Element) could also be associated with the inventory solution because alarms / events had linking keys to all the devices, cards, ports, logical ports, etc in inventory.
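The circuit-era model above lends itself to a simple sketch: resources in inventory are tagged with the circuit/service ID they carry, which is exactly what makes both availability checks and Service Impact Analysis possible. A minimal illustration in Python (all class and field names are mine, not from any real inventory product):

```python
# Toy circuit-era inventory: resources tagged with the service they carry.
# All names here are illustrative, not drawn from a real inventory product.

class Resource:
    def __init__(self, resource_id):
        self.resource_id = resource_id
        self.assigned_to = None  # circuit/service ID, or None if available

class Inventory:
    def __init__(self, resources):
        self.resources = {r.resource_id: r for r in resources}

    def reserve(self, circuit_id, count):
        """Reserve `count` free resources for a circuit, tagging each one."""
        free = [r for r in self.resources.values() if r.assigned_to is None]
        if len(free) < count:
            raise RuntimeError("insufficient free resources")
        chosen = free[:count]
        for r in chosen:
            r.assigned_to = circuit_id
        return [r.resource_id for r in chosen]

    def service_impact(self, failed_resource_id):
        """Service Impact Analysis: which circuit rides a failed resource?"""
        return self.resources[failed_resource_id].assigned_to

inv = Inventory([Resource(f"port-{i}") for i in range(4)])
inv.reserve("CIRCUIT-001", 2)          # tag two ports with the circuit ID
print(inv.service_impact("port-0"))    # -> CIRCUIT-001
print(inv.service_impact("port-3"))    # -> None (still available)
```

The one-to-one tagging is the whole trick: the same association answers “what’s free?” at order time and “who’s impacted?” at fault time. It’s also why reconciliation matters so much; if the tags drift from the real network, both answers become wrong.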

To an extent, it’s still possible to do this for the access/edge of the network. For example, from the Customer Premises Equipment (CPE) / Network Termination Device (NTD) to the telco’s access device (eg DSLAM or similar). But from that point deeper into the core of the telco network, it’s usually a dynamic allocation of resources (eg packet-switched, routed signal paths).

With modern virtualised and packet-switched networks, dynamic allocation makes it harder to directly associate actual resources with customer services at any point in time. See this earlier post for more detail on the diagram below.

Instead, we now just ask the OSS to push orders into the virtualisation cloud and expect the virtualisation managers to ensure reliable availability of resources. We’ve lost visibility inside the cloud.

So this poses the question about whether we even need visibility now. There are three main states to consider:

  1. At Service Initiation – What resources are available to assign to a service? As long as capacity planning is doing its job and keeping the available resource pool full, we just assume there will be sufficient resources and let the virtualisation manager do its thing
  2. After Service is Commissioned – What resources are assigned to the service at the current point in time? If the virtualisation manager and network are doing their highly available, highly resilient thing, then do we want to know?
  3. During an Outage – What services are impacted by resources that are degraded or not available? As operators, we definitely want to know what needs to be fixed and which customers need to be alerted.
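State 3 is the one where visibility still clearly matters. Even with dynamic allocation, the pattern is the same: at fault time you ask the domain/virtualisation manager which services were riding the failed resource, then fan out to the affected customers. A hedged sketch, where the `current_assignments` mapping stands in for whatever query API your particular stack exposes:

```python
# Outage-handling sketch: correlate a failed resource to impacted customers.
# `current_assignments` stands in for a query to the virtualisation/domain
# manager at fault time -- in a dynamic network this mapping is only valid
# "now", so it must be fetched per incident rather than cached in inventory.

def impacted_customers(failed_resource, current_assignments, service_to_customer):
    """Return the customers whose services ride the failed resource."""
    services = current_assignments.get(failed_resource, [])
    return sorted({service_to_customer[s] for s in services})

# Illustrative data only.
assignments = {"vnf-host-7": ["SVC-100", "SVC-101"], "olt-3-port-2": ["SVC-102"]}
owners = {"SVC-100": "Acme Pty", "SVC-101": "Beta Ltd", "SVC-102": "Acme Pty"}

print(impacted_customers("vnf-host-7", assignments, owners))
# -> ['Acme Pty', 'Beta Ltd']
```

The design point hiding in here: because the resource-to-service mapping is dynamic, the correlation has to be a live query southbound, not a lookup against a statically maintained inventory table.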

So, let’s now get into a more “modern orchestration and abstraction” approach to associating customer services with resources. I’ve seen it done many different ways but let’s use the diagram below as a reference point (you might have to view in expanded form):

[Diagram: CFS / RFS orchestration]

Here are a few thoughts that might help:

  • As mentioned by the contact, “orders down, faults up,” is a mindset I tend to start with too. Unfortunately, data flows often have to be custom-designed as they’re constrained by the available systems, organisation structures, data quality improvement models, preferred orchestration model, etc
  • You may have heard of CFS (Customer Facing Service) and RFS (Resource Facing Service) constructs? They’re building blocks that are often used by operators to design product offerings for customers (and then design the IT systems that support them). They’re shown as blue ovals as they’re defined in the Service Catalog (CFS shown as north-facing and RFS as south-facing)
  • CFS are services tied to a product/offering. RFS are services linked with resources
  • To simplify, I think of CFS like a customer order form (ie what fields and options are available for the customer) and RFS being the technical interfaces to the network (eg APIs into the Domain Managers and possibly NMS/EMS/VIM)
  • Examples of CFS might be VPN, Internet Access, Transport, Security, Mobility, Video, etc
  • Examples of RFS might be DSL, DOCSIS, BGP (Border Gateway Protocol), DNS, etc (see also Oracle’s conceptual model of CFS / RFS)
  • Now, let’s think of how to create this model in two halves:
      • One is design-time – that’s where you design the CFS and/or RFS service definitions, as well as the orchestration plan (OP, aka provisioning plan). The OP is the workflow of activities required to activate a CFS type. This could be as simple as one CFS consuming an RFS stub with a few basic parameters mapped (eg CallerID). Others can be very complex flows if there are multiple service variants and additional data that needs to be gathered from East-West systems (eg a request for the next available patch-port from physical network inventory [PNI]). Some of the orchestration steps might be automated / system-driven, whilst others might be manual work order activities that need to be done by the field workforce.
        Note that the “Logging and Test” box at the left is just to test your design-time configurations prior to processing a full run-time order
      • The other is run-time – that’s where the Orchestrator runs the OP to drive instance-by-instance implementation of a service (including consumption of actual resources). That is, an instantiation of one customer order through the orchestration workflow you created during design-time 
  • A CFS can map parameters from one or more RFS (there can even be hierarchical consumption of multiple RFS and CFS in some situations, but that will just confuse the situation)
  • You can also loosely think of CFS as being part of the BSS and RFS as being part of the OSS, with the service orchestration usually being a grey area in the middle
  • Now to the question about where is the data stored:
    • Design-time – CFS building block constructs are generally stored in a BSS or service catalog. Orchestration plans are often also part of modern catalogs, but could also fall within your BSS or OSS depending on your specific stack
    • Run-time (ie for each individual order) – The customer order details (eg speeds, configurations, etc) are generally stored in “the BSS.” The orchestration plan for each order then drives data flows. This is where things get very specific to individual stacks. The OP can request resource availability via east-west systems (eg inventory [LNI or PNI], DNS, address databases, WFM, billing code database, etc) and/or via southbound interfaces (eg NMS/EMS/Infrastructure-Manager APIs) to gather whatever information is required
    • Distributed or Centralised data – There’s no specific place where all data is collected. Some of the systems (eg PNI/LNI) above will have their own data repositories, whilst others will pull from a centralised data store or the network or other infrastructure via NMS/EMS/VIM
    • Data master – In theory the network (eg NMS/EMS/NE) should be the most accurate store of information, hence the best place to get data from (and your best visibility of the current state of the network). Unfortunately, the NMS/EMS/NE often won’t have all the info you need to drive the orchestration plan. For example, if you don’t already have a cable to the requesting customer’s address, then the orchestration plan will have to include actions for a designer to use PNI/geospatial data to find the nearest infrastructure (eg joint/pedestal) to run a new cable from, then go through all the physical build activities, before sending the required data back to the orchestration plan. Since the physical network (eg cables, joints, etc) almost never has a programmatic interface, this will require manual effort and manual data entry. Similarly, the NMS/EMS/VIM might not be able to tell us exactly what resource the service is consuming at any point in time
    • Specific product offerings – There are so many different possibilities here that it’s hard to cover all the possible data flows / models. The orchestration plan within the Business Orchestration (aka Cross-domain Orchestration) layer is responsible for driving flows. It may have to perform service provisioning, network orchestration and infrastructure control.
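The catalog and orchestration ideas in the list above can be condensed into one small sketch: design-time defines CFS/RFS specifications plus an orchestration plan (an ordered list of steps, some automated, some manual work orders); run-time walks that plan for one customer order. All class/field names and the CallerID example are illustrative assumptions on my part; real TMF-style catalogs are far richer than this:

```python
# Design-time vs run-time sketch for CFS/RFS orchestration.
# All names are illustrative only -- real catalogs are far richer.

# --- Design-time: service specifications held in the Service Catalog ---
RFS_SPECS = {
    "DSL-RFS": {"params": ["port", "profile"], "interface": "access-domain-api"},
    "VOICE-FEATURE-RFS": {"params": ["caller_id"], "interface": "switch-api"},
}

CFS_SPECS = {
    "INTERNET-ACCESS-CFS": {
        "order_fields": ["address", "speed", "caller_id"],   # the "order form"
        "consumes": ["DSL-RFS", "VOICE-FEATURE-RFS"],
    },
}

# Orchestration plan (OP): ordered steps to activate one CFS type.
# Each step is (mode, activity, consumed RFS or None for a manual work order).
ORCH_PLAN = {
    "INTERNET-ACCESS-CFS": [
        ("auto",   "reserve-port",       "DSL-RFS"),           # east-west PNI/LNI lookup
        ("auto",   "push-speed-profile", "DSL-RFS"),
        ("auto",   "add-caller-id",      "VOICE-FEATURE-RFS"), # eg a switch command
        ("manual", "field-install",      None),                # work order for a tech
    ],
}

# --- Run-time: instantiate the plan for one customer order ---
def run_order(cfs_type, order):
    """Walk the design-time plan for one order; return the activation log."""
    log = []
    for mode, step, rfs in ORCH_PLAN[cfs_type]:
        target = RFS_SPECS[rfs]["interface"] if rfs else "workforce-management"
        detail = f" ({order['caller_id']})" if step == "add-caller-id" else ""
        log.append(f"{mode}:{step}{detail} -> {target}")
    return log

print(run_order("INTERNET-ACCESS-CFS",
                {"address": "1 Example St", "speed": "300M", "caller_id": "on"}))
```

Note how the split mirrors the text: the CFS spec is the customer-facing “order form”, the RFS specs are the technical interfaces southbound, and the OP is the glue that maps one to the other, mixing automated steps with manual work order activities.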

This is far less concise than I hoped. 

If you have a simpler way of answering the question and/or can point us to a better description, we’d love to hear from you!


4 thoughts on “How is OSS/BSS service and resource availability supposed to work?”

  1. Great post, Ryan – very comprehensive. The answer to your contact’s transformation questions will also depend on what’s most important *to them*. Automated provisioning is a fair goal, but if it’s one new service a week, maybe not so critical. Equally, if network reliability is an issue due to geography, service assurance might be more of a priority. It’s important that projects to transform OSS identify the particular outcomes that are most valuable to an individual operator, and use that to drive the priority, sequence and even architecture of a transformed OSS. Lots of options to choose from – it’s clear your experience should be of value to this contact!

  2. Thanks Robert,
    To be honest, I’m still thinking hard about how to explain this more elegantly than I have here!
    You’ve hit the nail on the head about figuring out what’s most important to solve and whether customer service volumes warrant a full-blown orchestration solution! Those considerations are definitely in the mix!
    I really appreciate your constructive feedback Robert!

  3. Excellent post Ryan. I work with Catalog (Product, Service/Technical and Resource) transformation projects. I use TMF’s model for Products – Offers, Products, CFSS (Specifications), RFSS (Specifications), Resources and Elements. When I started in telecommunications, “orders down” and “faults up” was used to explain the BSS / OSS (or BOSS) stack.

    A very good summary. Thanks for posting.

    VBR/ Wallis Dudhnath

  4. Thanks Wallis.
    And thanks very much for elaborating around the TMF models. Very helpful!
