The overlaps of DCIM with inventory, asset and config management

A regular reader of the PAOSS blog recently wrote, “I follow with passion your blog, the latest posts about Inventory are great [Ed. the reader is talking about this post about LNI and PNI and this one about Inventory vs Asset vs CMDB Management]. I ask you if possible to have a post on Inside Plant vs Outside Plant vs Virtual network creation… we usually use CAD based tools for Inside Plant design, both for TLC equipment, cabling, cross connection, Distribution Frame, rooms, virtual rooms, row structures, etc, but also for power, conditioning, lighting, etc. We also use Network Inventory for Datacenter and server farm modelling. Outside Plant typically deals with GIS tools for cabling infrastructure. And now virtualisation of the Network is also coming with NFV and SDN. What do you think about this?”

Great question.

In the post about Inventory vs Asset vs CMDB, we used the following Venn Diagram:

Unfortunately, there’s another circle that’s not shown on this diagram, but should be – the DCIM (Data Centre Infrastructure Management) circle. The overlaps between OSS and DCIM partially answer the questions above. We wrote a 5 part series on DCIM back in 2014 (part one, two, three, four, five), so perhaps it’s time for a re-visit.

The last of those five posts even included another Venn Diagram, as follows:

OSS, DCIM, ITSM Venn Diagram

Data Centre Infrastructure Management (DCIM) shares much of its DNA with OSS, but also has a number of unique differences.

Similarities:

  • IT and network device / inventory management
  • CSPs and Data Centres tend to have many Enterprise customers, and therefore a need to align with their IT service and life-cycle management (ITIL / ITSM) methodologies
  • Electronic data collection and storage to support fulfillment and assurance workflows
  • Analytics and operational decision support
  • Planning and design tools
  • Predictive modelling
  • Process and change management
  • Capacity planning, resource allocation and provisioning

Differences (ie what Data Centres have that traditional CSP networks don’t):

  • Facilities / Building Management Systems (FMS/BMS)
  • Energy / Power management
  • Environment and heat management (HVAC) including management of hot/cold zones
  • Data Centres tend to have less outside plant or inter-site connectivity* (ie most power and network connectivity tends to reside within the Data Centres)
  • However, Data Centre cable management has some slight differences. Network links are more likely to be managed within 3D spatial systems (x, y and height), if at all, rather than the 2D (x and y coordinates) typically plotted by most OSS inventory via GIS (Geographical Information Systems) or CAD (Computer Aided Design) drawings. Data Centre cables tend to be run in spatially-dense above-rack or below-floor trayways. By comparison, cables between sites tend to be less dense and at a fairly consistent height (eg a standard depth underground or a standard height when mounted on towers/poles aboveground)
  • Alternatively, DCs may manage spatial infrastructure through naming conventions such as rooms, rack-rows, racks, rack-position rather than 3D spatial systems
  • Data Centres have traditionally had a higher proportion of virtualised assets than traditional CSPs, although that is now changing as operator networks embrace network virtualisation

 

So let’s now look at how it “might” all hang together (noting that each company is likely to be different depending on their systems and processes):

  • DCIM manages facilities, building, power / PLCs and heating/cooling/HVAC
  • PNI manages physical connectivity (between sites and within the DC) as it can generally manage connectivity to physical ports on patch-panels / frames and physical devices (eg switches and routers) inside the DC. PNI also handles splicing and patching. PNI tools can generally also manage power cabling, although not everyone uses PNI for this
  • LNI (in conjunction with EMS [Element Management Systems] and virtual resource managers) will tend to manage the virtual / logical networks including resource management and orchestration
  • LNI will also tend to provide topological views of the network (often point-to-point links between physical/logical ports rather than the cable routes shown in PNI). LNI may also potentially include rack layouts and other forms of network visualisation. However, LNI tends to only partially show spatial presentation of the data (eg physical locations of “circuit” end-points rather than spatial location of all racks and equipment in 3D)
  • Related compute / storage infrastructure could be managed by DCIM, LNI, VIM, etc
  • And any of this could be cross-referenced as assets in the Asset Management System and/or Configuration Management Database (CMDB)

I can see that CAD might still be required for trayway, HVAC ducting, etc because PNI isn’t really designed with that sort of 3D infrastructure in mind.

Having said that, I’d probably still attempt to get all connectivity and support infrastructure designed into a spatial visualisation tool like PNI rather than CAD. After all, connectivity of any type can be modelled as nodes and arcs (same as PNI). It’s just that ducting tends to have a greater 3D heft than the single line / arc of a typical comms cable.
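
To illustrate the nodes-and-arcs point (this is just a sketch with invented names, not any particular PNI’s data model), trayway / ducting and cables can share the same connectivity model, with the support infrastructure simply carrying some extra 3D / cross-section attributes:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A point in the network: pit, joint, rack, wall penetration, etc."""
    node_id: str
    x: float          # easting (m)
    y: float          # northing (m)
    z: float = 0.0    # height / depth (m) - the third dimension

@dataclass
class Arc:
    """A connection between two nodes: cable, duct, trayway segment, etc."""
    arc_id: str
    a_end: Node
    b_end: Node
    arc_type: str = "cable"           # eg cable | duct | trayway | HVAC-duct
    cross_section_mm: tuple = (0, 0)  # the "3D heft" of trays/ducts; ~0 for a cable

# A trayway segment and the cable it carries share the same node/arc model
rack_a = Node("RACK-A01", 10.0, 5.0, 2.4)   # above-rack tray height
rack_b = Node("RACK-B07", 22.0, 5.0, 2.4)
tray   = Arc("TRAY-001", rack_a, rack_b, arc_type="trayway",
             cross_section_mm=(300, 100))
cable  = Arc("FIB-0001", rack_a, rack_b, arc_type="cable")

print(tray, cable, sep="\n")
```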

Why is it important to have this data in a single spatial system rather than CAD? Well, I figure it should help future augmented reality (AR) use-cases like the ones described in the link.

So here’s the updated diagram:

* There are of course multi-site DC organisations that have links between their sites, but they tend to outsource their long-haul network links to traditional carriers.

Softwarisation of 5G

As you have undoubtedly noticed, 5G is generating quite a bit of buzz in telco and OSS circles.

For many it’s just an n+1 generation of mobile standards, where n is currently 4 (well, the number of recent introductions into the market means n is probably now getting closer to 5  🙂  ).

But 5G introduces some fairly big changes from an OSS perspective. As usual with network transformations / innovations, OSS/BSS are key to operationalising (ie monetising) the tech. This report from TM Forum suggests that more than 60% of revenues from 5G use-cases will be dependent on OSS/BSS transformation.

And this great image from the 5G PPP Architecture Working Group shows how the 5G architecture becomes a lot more software-driven than previous architectures. Interesting how all 5 “software dimensions” are the domain of our OSS/BSS isn’t it? We could replace “5G architecture” with “OSS/BSS” in the diagram below and it wouldn’t feel out of place at all.

So, you may be wondering in what ways 5G will impact our OSS/BSS:

  • Network slicing – being able to carve up the network virtually, to generate network slices that are completely different functionally, means operators will be able to offer tailored, premium service offerings to different clients. This differs from the one-size-fits-all approach used previously. However, it also means the OSS/BSS complexity increases. It’s almost like you need an OSS/BSS stack for each network slice. Unless we can create massive operational efficiencies through automation, the cost to run the network will increase significantly. Definitely a no-no for the execs!!
  • Fibre deeper – since 5G will introduce increased cell density in many locations, and offer high throughput services, we’ll need to push fibre deeper into the network to support all those nano-cells, pico-cells, etc. That means an increased reliance on good outside plant (PNI – Physical Network Inventory) and workforce management (WFM) tools
  • Software defined networks, virtualisation and virtual infrastructure management (VIM) – since the networks become a lot more software-centric, that means there are more layers (and complexity) to manage.
  • Mobile Edge Compute (MEC) and virtualisation – 5G will help to serve use-cases that may need more compute at the edge of the radio network (ie base stations and cell sites). This means more cross-domain orchestration for our OSS/BSS to coordinate
  • And other use-cases where OSS/BSS will contribute including:
    • Multi-tenancy to support new business models
    • Programmability of disparate networks to create a homogenised solution (access, aggregation, core, mobile edge, satellite, IoT, cloud, etc)
    • Self-healing automations
    • Energy efficiency optimisation
    • Monitoring end-user experience
    • Zero-touch administration aspirations
    • Drone survey and augmented reality asset management
    • etc, etc

Fun times ahead for OSS transformations! I just hope we can keep up and allow the operator market to get everything it wants / needs from the possibilities of 5G.

An Asset Management / Inventory trick

Last week we discussed the nuances between Inventory, Asset and Config Management within an OSS stack. Each of these tools is designed to support functionality for different users / persona-groups. However, they also tend to have significant functional overlap. Chances are your organisation doesn’t have separate dedicated tools for each.

So today I’m going to share a trick I’ve used in the past when I’ve only had a PNI (Physical Network Inventory) system to work with, but needed to perform asset management style functionality.

Most inventory tools are great at storing the current state of a device that exists in a network. However, they don’t tend to be so great at an asset manager’s primary function – tracking the entire life-cycle of an asset from procurement to decommissioning and sparing / maintenance along the way.

Normally the PNI just records the locations of all the active network equipment – in buildings, exchanges, comms-huts, cabinets, etc. The trick I use is to create one or more additional locations to represent warehouses. These may (or may not) correspond to the physical locations of your real warehouse/s.

In almost all PNI systems, you have control over the status of the device (eg IN-SERVICE, etc). You can use this functionality to include status of SPARE, UNDER REPAIR, etc and switch a device between active network locations and the warehouse.

These status-change records give you the ability to pin-point the location of a given asset at any point in time. It also gives you trending stats, either as an individual device or as a cohort of devices (eg by make/model).

You can even build processes around it for check-in / check-out of the warehouse and maintenance scheduling.

I should point out that this works if your PNI allows you to uniquely identify a device (eg by make/model + serial number or perhaps a unique naming convention instance). If your PNI device records only show the current function of a device (eg a naming convention like SiteA-Router-0001), then you might lose sight of the device’s trail when it moves through life-cycle states (eg to the warehouse).
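
To make the trick a little more concrete, here’s a simplified sketch (Python, with hypothetical field names) of the status-change records described above, keyed on serial number so the asset’s trail survives its moves between active sites and the warehouse:

```python
from datetime import datetime

# Each record is a status change for a uniquely identified device (serial no.)
movements = [
    {"serial": "SN12345", "status": "IN-SERVICE",   "location": "EXCHANGE-A",
     "timestamp": datetime(2018, 3, 1)},
    {"serial": "SN12345", "status": "UNDER REPAIR", "location": "WAREHOUSE-1",
     "timestamp": datetime(2019, 6, 12)},
    {"serial": "SN12345", "status": "SPARE",        "location": "WAREHOUSE-1",
     "timestamp": datetime(2019, 7, 2)},
]

def location_at(serial, as_at):
    """Pin-point where a given asset was (and its status) at any point in time."""
    history = [m for m in movements
               if m["serial"] == serial and m["timestamp"] <= as_at]
    return max(history, key=lambda m: m["timestamp"]) if history else None

print(location_at("SN12345", datetime(2019, 6, 30)))
# -> the UNDER REPAIR / WAREHOUSE-1 record
```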

OSS discovers a network

Following yesterday’s post about OSS Inventory, I received another great follow-up question from another avid reader of the PAOSS blog:

Interesting thoughts Ryan! In addition to ‘faults up’, perhaps there is a case also (obvious?) for ‘discovery up’ to capture ongoing non-planned changes? Wondering have you come across any sort of reconciliation / adaptive inventory patterns like this? Workflow based? Autonomous? (Going too far into chaos theory territory?)

Yes, we did exactly that with the same tool discussed yesterday that I used back in 2000. In fact, a very clever dev and I got that company’s first-ever auto-discovery tool working on site (using a product supplied by head-office). Discovering the nodal elements (ie equipment, cards, ports) was fairly easy. Discovering the connectivity within a domain (we started with SDH) was tricky, but achievable. Auto-discovering cross-domain connectivity (ie DSL circuits through physical, SDH transit, ATM and logical connectivity onto the IP cloud) was much trickier as we needed to find/make linking keys across different data sources.

It was definitely workflow based with a routine-driven back-end. We didn’t just want anything that was discovered to be automatically stuffed into (or removed from) the database (think flapping ports or equipment going down temporarily). It could’ve been autonomous, but we introduced a manual step to approve any of the discoveries made by each automated discovery iteration.
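
For anyone who hasn’t built one of these, here’s a minimal sketch (Python, purely illustrative rather than how that product actually worked) of the general pattern: diff each discovery iteration against the inventory baseline, then only commit the candidate changes that an operator approves:

```python
def diff_discovery(inventory: dict, discovered: dict):
    """Compare discovered state against inventory; return candidate changes."""
    adds    = [k for k in discovered if k not in inventory]
    removes = [k for k in inventory if k not in discovered]
    changes = [k for k in discovered
               if k in inventory and discovered[k] != inventory[k]]
    return {"add": adds, "remove": removes, "change": changes}

def apply_approved(inventory, discovered, candidates, approved_keys):
    """Only approved candidates get written to the inventory database."""
    for key in candidates["add"] + candidates["change"]:
        if key in approved_keys:
            inventory[key] = discovered[key]
    for key in candidates["remove"]:
        if key in approved_keys:      # eg don't auto-remove a flapping port
            inventory.pop(key, None)
    return inventory

inventory  = {"NE1/card1/port1": "UP", "NE1/card1/port2": "UP"}
discovered = {"NE1/card1/port1": "UP", "NE1/card1/port3": "UP"}  # port2 flapped?
candidates = diff_discovery(inventory, discovered)
inventory  = apply_approved(inventory, discovered, candidates,
                            approved_keys={"NE1/card1/port3"})
print(inventory)  # port2 retained pending review, port3 added
```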

As you know, modern networks / EMS / VIM (resource managers) are much more discoverable. They need to be for modern orchestration and resilience techniques. I don’t think it would be quite so tricky to stitch circuits together as we’re no longer so circuit-oriented as back in 2000.

However, I’d be fascinated to hear from other readers how much of a problem they have trying to marry up different data sources for discovery purposes. I’d also love to hear whether they’re using fully autonomous discovery or using the manual intervention step for the same reason we were. I imagine most are automating, because orchestration plans just need to make use of whatever resources are being presented by the underlying resource managers in near-real-time.

PS. For those wondering what “discovery” is, it’s shown as the lower grey arrow in this diagram from “Orders down, Faults up”.

Discovery is the process that allows data to be passed from NMS/EMS/NEs (ie the network or resource managers) directly into the inventory management database. It should be a more reliable and expedient way of synchronising the inventory with the live network.

The reason for the upper grey arrow is that not all networks have APIs that can be “discovered.” Passive equipment like cable joints and patch-panels doesn’t have programmatic interfaces. Therefore we need to find other ways to get that data into the Inventory Manager.

Various forms of OSS Inventory

After reading other recent posts such as “Orders Down, Faults Up” and “How is OSS/BSS service and resource availability supposed to work?” an avid reader of the PAOSS blog posed the following brilliant question:

Do you have any thoughts on geospatial vs non geospatial network inventory systems? How often do you see physical plant mapping in a separate system from network inventory, with linkages or integrations between them, vs how often do you see physical and logical inventory being captured primarily in a geospatially oriented system?

Boy do I ever have some thoughts on this topic!! I’m sure you do too, so I’d love to hear what you think in the comments section below.

I was lucky. The first OSS/BSS that I worked on (all the way back in 2000), had both geo and non-geo (topology) views. It also had a brilliantly flexible data model that accommodated physical and logical inventory. All tightly integrated into one package. There aren’t many tools that can do all of that even today. Like I said, I was lucky to have this as a starting point!!

Like all things OSS/BSS, it starts with the personas and the key tasks they need to perform. Or from the supplier’s perspective, which customer personas they’re most actively targeting.

For example, if you have a significant Outside Plant (OSP) Network, then geo-positioning is vital. The exchanges and comms huts are easy enough to find, but pits, cable routes, easements, etc are often harder to find. It’s not uncommon for a field tech to waste time searching for a pit that’s covered in dirt, grass or snow. And knowing the exact cable route in geo view is helpful for sending field techs to the exact location of a fault (ie helping them to pinpoint the location of the bright yellow excavator that has just sliced through your inter-capital link). Geo-view is also important for OSP designers and the field workforce that builds the OSP network.

But other personas don’t care about seeing the detailed cable route. They just want to see a point-to-point topological link to represent physical connections between the ports on adjacent devices. This helps them to quickly understand the network or circuit / service view. They may also like to see an alarm overlay on the topology to quickly determine which parts of the network aren’t performing as expected. For these personas, seeing all the geo-detail just acts as visual noise that they need to subconsciously filter out to understand the topology view.

These personas also tend to want topological views of the network, not just the physical but the logical and virtual network / service overlays too.

In most cases that I can think of, the physical / OSP inventory tools show the physical devices (ports even) that the OSP network connects into. Their main focus is on the cables, joints, pits, pipes, catenaries, poles, lead-ins, patch-panels, patch-leads, splitters, etc. But showing the termination of cables onto active equipment (Inside Plant or ISP) is an important linking key between the physical and logical views.

The physical port (on the physical device) becomes the key demarcation between physical and logical worlds. The physical port connects physical cables / leads, but it also acts as the anchor point from which to create logical ports to which logical connections are made. As a result, the physical device and port tend to be shown in both physical (geo) and logical inventory tools. They also tend to be shown in both physical and logical network topology views.

In the case of the original OSS/BSS I worked on, it had separate visualisation tools for geo, network and circuit/service, but all underpinned by a common data model.

What’s the best way? Different personas will have different perspectives of course. I prefer for physical and logical inventories to be integrated out of the box (to allow simple cross-ref visually and in queries)…. but I also prefer for them to have different views (eg geo, topology, network, circuit/service) to suit different situations.

I also find it helpful if each of those views allow the ability to drill down deeper into specific sections of the graph if necessary. I’d prefer not to have all of those different views overlaid onto a geo visualisation. Too much visual clutter IMHO, but others may love it that way.

Oh, and having separate LNI (Logical Network Inventory) and PNI (Physical Network Inventory) can be a tricky thing to reconcile. The LNI will almost always have programmatic interfaces (APIs) to collect data from, but will generally have to amalgamate many different sources. Meanwhile, the PNI consists of mostly passive equipment and therefore has no API to collect latest info from. I tend to use strategies at the above-mentioned demarcation point (ie physical ports) to help establish linking keys between LNI and PNI.
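
As an example of the kind of linking-key strategy I mean (the names and formats below are hypothetical), a normalised site / device / slot / port string can be composed from both the LNI and PNI records and then matched:

```python
def port_key(site, device, slot, port):
    """Normalise a physical-port reference into a common linking key."""
    return f"{site.upper()}/{device.upper()}/{slot}/{port}"

# Hypothetical extracts from each repository
lni_ports = {port_key("SYD01", "rtr-01", 1, 3): "LOG-PORT-889"}
pni_ports = {port_key("syd01", "RTR-01", 1, 3): "CABLE-FIB-0456"}

# Matching keys stitch the logical record to the physical record
for key in lni_ports.keys() & pni_ports.keys():
    print(f"{key}: {pni_ports[key]} terminates where {lni_ports[key]} is anchored")
```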

BTW. There’s one aspect of the question, “How often do you see physical plant mapping in a separate system from network inventory” that I haven’t fully answered. I’ll cover the question of asset management vs inventory management vs CMDB (Configuration Management Database) in more detail in an upcoming post. [Ed. See link here]

How is OSS/BSS service and resource availability supposed to work?

The brilliant question above was asked by a contact who is about to embark on a large OSS/BSS transformation.  That’s certainly a challenging question to start the new year with!!

The following was provided for a little more context:

  • We have a manually maintained table for each address where we can store which services are available—ie. DSL up to 5 Mbps or Fiber Data 300 Mbps

  • This manual information has no data-level connection to the actual plant used to serve the address 

  • In a “perfect world”, how does this work?

  • Where is the data stored? Ex: Does a geospatial network inventory store this data, then the BSS requests it as needed?

  • How does a typical OSS tie together physical network and equipment to products and offerings?

  • How is it typically stored? How is it accessed?

  • Sort of related to the address, we have “Facility” records that include things like the Electronics (Card, slot, port, shelf, etc) and some important “hops” along the way 

  • Right now if a tech makes changes to physical plant, we have to manually update our mapping (if the path changes), spreadsheets (if fiber assignment changes) or paper records (if copper pair assignments change). Additionally, we might need to update the Facilities database

  • It doesn’t “use” its “awareness” of our plant or network equipment to do anything except during service orders where certain products are tied to provisioning features—ie. callerID billing code being on an order causes a command to be issued to the switch to add that feature.

  • There is no visibility into network status… how does this normally work?

  • I feel like I’m missing a fundamental reference point because I’ve never seen an actual working example of “Orders down, faults up”, just manually maintained records that sort of single-directionally “integrate” to network devices but only in the context of what was ordered, not in the context of what is available and what the real-time status is.

Wow! Where do we start? Certainly not an easy or obvious question by any means. In fact it’s one of the trickier of all OSS/BSS challenges.

In the old days of OSS/BSS, services tended to be circuit-oriented and there was a direct allocation of logical / physical resources to each customer service. You received an order, you created a “customer circuit” for the order, you reserved suitable / available resources in your inventory to assign to the circuit, then issued work order activities to implement the circuit. When the work order activities were complete, the circuit was ready for service.

The utilised resources in your inventory system/s were tagged with the circuit ID or service ID and were therefore not available to other services. This association also allowed Service Impact Analysis (SIA) to be performed. In the background, you had to reconcile the real resources available in the network with what was being shown in your inventory solution. Relationships were traceable down through all layers of the TMN stack (as below). Status of the resources (eg a Network Element had failed) could also be associated with the inventory solution because alarms / events had linking keys to all the devices, cards, ports, logical ports, etc in inventory.
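
Purely as an illustration of that circuit-oriented association (the data below is invented), tagging each utilised resource with its circuit ID is what made Service Impact Analysis comparatively simple:

```python
# Inventory resources tagged with the circuit that consumes them
resources = [
    {"resource": "NE-A/card2/port5", "circuit_id": "CCT-1001"},
    {"resource": "NE-B/card1/port1", "circuit_id": "CCT-1001"},
    {"resource": "NE-B/card1/port2", "circuit_id": "CCT-1002"},
    {"resource": "NE-C/card3/port7", "circuit_id": None},  # still available
]

def service_impact(failed_resource):
    """Simple SIA: which circuits ride the failed device / card / port?"""
    return {r["circuit_id"] for r in resources
            if r["resource"].startswith(failed_resource) and r["circuit_id"]}

print(service_impact("NE-B"))              # both circuits impacted
print(service_impact("NE-B/card1/port2"))  # only CCT-1002 impacted
```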

To an extent, it’s still possible to do this for the access/edge of the network. For example, from the Customer Premises Equipment (CPE) / Network Termination Device (NTD) to the telco’s access device (eg DSLAM or similar). But from that point deeper into the core of the telco network, it’s usually a dynamic allocation of resources (eg packet-switched, routed signal paths).

With modern virtualised and packet-switched networks, dynamic allocation makes it harder to directly associate actual resources with customer services at any point in time. See this earlier post for more detail on the diagram below.

Instead, we now just ask the OSS to push orders into the virtualisation cloud and expect the virtualisation managers to ensure reliable availability of resources. We’ve lost visibility inside the cloud.

So this poses the question about whether we even need visibility now. There are three main states to consider:

  1. At Service Initiation – What resources are available to assign to a service? As long as capacity planning is doing its job and keeping the available resource pool full, we just assume there will be sufficient resource and let the virtualisation manager do its thing
  2. After Service is Commissioned – What resources are assigned to the service at the current point in time? If the virtualisation manager and network are doing their highly available, highly resilient thing, then do we want to know?
  3. During an Outage – What services are impacted by resources that are degraded or not available? As operators, we definitely want to know what needs to be fixed and which customers need to be alerted.

So, let’s now get into a more “modern orchestration and abstraction” approach to associating customer services with resources. I’ve seen it done many different ways but let’s use the diagram below as a reference point (you might have to view in expanded form):

CFS RFS orchestration

Here are a few thoughts that might help:

  • As mentioned by the contact, “orders down, faults up,” is a mindset I tend to start with too. Unfortunately, data flows often have to be custom-designed as they’re constrained by the available systems, organisation structures, data quality improvement models, preferred orchestration model, etc
  • You may have heard of CFS (Customer Facing Service) and RFS (Resource Facing Service) constructs? They’re building blocks that are often used by operators to design product offerings for customers (and then design the IT systems that support them). They’re shown as blue ovals as they’re defined in the Service Catalog (CFS shown as north-facing and RFS as south-facing)
  • CFS are services tied to a product/offering. RFS are services linked with resources
  • To simplify, I think of a CFS as being like a customer order form (ie what fields and options are available for the customer) and an RFS as being the technical interface to the network (eg APIs into the Domain Managers and possibly NMS/EMS/VIM) – see the simplified sketch after this list
  • Examples of CFS might be VPN, Internet Access, Transport, Security, Mobility, Video, etc
  • Examples of RFS might be DSL, DOCSIS, BGP (border gateway protocol), DNS, etc
  • See the conceptual model from Oracle here:
  • Now, let’s think of how to create this model in two halves:
      • One is design-time – that’s where you design the CFS and/or RFS service definitions, as well as designing the orchestration plan (OP) (aka provisioning plan). The OP is the workflow of activities required to activate a CFS type. This could be as simple as one CFS consuming an RFS stub with a few basic parameters mapped (eg CallingID). Others can be very complex flows if there are multiple service variants and additional data that needs to be gathered from East-West systems (eg request for next available patch-port from physical network inventory [PNI]). Some of the orchestration steps might be automated / system-driven, whilst others might be manual work order activities that need to be done by field workforce.
        Note that the “Logging and Test” box at the left is just to test your design-time configurations prior to processing a full run-time order
      • The other is run-time – that’s where the Orchestrator runs the OP to drive instance-by-instance implementation of a service (including consumption of actual resources). That is, an instantiation of one customer order through the orchestration workflow you created during design-time 
  • A CFS can map parameters from one or more RFS (there can even be hierarchical consumption of multiple RFS and CFS in some situations, but that will just confuse the situation)
  • You can also loosely think of CFS as being part of the BSS and RFS as being part of the OSS, with the service orchestration usually being a grey area in the middle
  • Now to the question about where is the data stored:
    • Design-time – CFS building block constructs are generally stored in a BSS or service catalog. Orchestration plans are often also part of modern catalogs, but could also fall within your BSS or OSS depending on your specific stack
    • Run-time (ie for each individual order) – The customer order details (eg speeds, configurations, etc) are generally stored in “the BSS.” The orchestration plan for each order then drives data flows. This is where things get very specific to individual stacks. The OP can request resource availability via east-west systems (eg inventory [LNI or PNI], DNS, address databases, WFM, billing code database, etc, etc, etc) and/or to southbound interfaces (eg NMS/EMS/Infrastructure-Manager APIs) to gather whatever information is required
    • Distributed or Centralised data – There’s no specific place where all data is collected. Some of the systems (eg PNI/LNI) above will have their own data repositories, whilst others will pull from a centralised data store or the network or other infrastructure via NMS/EMS/VIM
    • Data master – in theory the network (eg NMS/EMS/NE) should be the most accurate store of information, hence the best place to get data from (and your best visibility of current state of the network). Unfortunately, the NMS/EMS/NE often won’t have all the info you need to drive the orchestration plan. For example, if you don’t already have a cable to the requesting customer’s address, then the orchestration plan will have to include an action/s for a designer to use PNI/geospatial data to find the nearest infrastructure (eg joint/pedestal) to run a new cable from, then go through all the physical build activities, before sending the required data back to the orchestration plan. Since the physical network (eg cables, joints, etc) almost never has a programmatic interface, it will require manual effort and manual data entry. Alternatively, the NMS/EMS/VIM might not be able to tell us exactly what resource the service is consuming at any point in time
    • For Specific product offerings – There are so many different possibilities here that it’s hard to answer all the possible data flows / models. The orchestration plan within the Business Orchestration (aka Cross-domain Orchestration) layer is responsible for driving flows. It may have to perform service provisioning, network orchestration and infrastructure control. 
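
To pull a few of those threads together, here’s a deliberately simplified, hypothetical sketch of a catalog entry and its orchestration plan. It isn’t any particular vendor’s model, just an illustration of a CFS consuming an RFS via a mix of automated and manual steps at design-time and run-time:

```python
# Design-time: catalog definitions (hypothetical)
catalog = {
    "CFS": {
        "InternetAccess": {
            "params": ["address", "speed_mbps"],
            "consumes_rfs": ["DSL"],
        }
    },
    "RFS": {
        "DSL": {"southbound_api": "dslam-ems/provision"},
    },
}

# Design-time: orchestration plan (OP) for this CFS type
orchestration_plan = [
    {"step": "service_qualification", "type": "automated"},
    {"step": "reserve_patch_port",    "type": "automated", "system": "PNI"},
    {"step": "run_lead_in_cable",     "type": "manual",    "system": "WFM"},
    {"step": "activate_rfs",          "type": "automated", "system": "EMS"},
]

# Run-time: one customer order instantiates the plan
def run_order(cfs_name, order_params):
    cfs = catalog["CFS"][cfs_name]
    print(f"Order for {cfs_name} ({order_params}), consuming {cfs['consumes_rfs']}")
    for activity in orchestration_plan:
        print(f"  -> {activity['type']:9s} {activity['step']}")

run_order("InternetAccess", {"address": "1 Example St", "speed_mbps": 50})
```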

This is far less concise than I hoped. 

If you have a simpler way of answering the question and/or can point us to a better description, we’d love to hear from you!

Inventory Management re-states its case

In a post last week we posed the question of whether Inventory Management still retains relevance. There are certainly use cases where it remains unquestionably needed. But perhaps there are others where it’s no longer required, a relic of old-school processes and data flows.
 
If you have an extensive OSP (Outside Plant) network, you have almost no option but to store all this passive infrastructure in an Inventory Management solution. You don’t have the option of having an EMS (Element Management System) console / API to tell you the current design/location/status of the network. 
 
In the modern world of ubiquitous connection and overlay / virtual networks, Inventory Management might be less essential than it once was. For service qualification, provisioning and perhaps even capacity planning, everything you need to know is available on demand from the EMS/s. The network is a more correct version of the network inventory than an external repository (ie Inventory Management) can hope to be, even if you have great success with synchronisation.
 
But I have a couple of other new-age use-cases to share with you where Inventory Management still retains relevance.
 
One is for connectivity (okay, so this isn’t exactly a new-age use-case, but the scenario I’m about to describe is). If we have a modern overlay / virtual network, anything that stays within a domain is likely to be better served by its EMS equivalent. Especially since connectivity is no longer as simple as physical connections or nearest neighbours with advanced routing protocols. But anything that goes cross-domain and/or off-net needs a mechanism to correlate, coordinate and connect. That’s the role the Inventory Manager is able to play (conceptually).
 
The other is for digital twinning. OSS (including Inventory Management) was the “original twin.” It was an offline mimic of the production network. But I cite Inventory Management as having a new-age requirement for the digital twin. I increasingly foresee the need for predictive scenarios to be modelled outside the production network (ie in the twin!). We want to try failure / degradation scenarios. We want to optimise our allocation of capital. We want to simulate and optimise customer experience under different network states and loads. We’re beginning to see the compute power that’s able to drive these scenarios (and more) at scale.
 
Is it possible to handle these without an Inventory Manager (or equivalent)?

Completing an OSS design, going inside, going outside, going Navy SEAL

Our most recent post last week discussed the research into group flow that organisations like DARPA (Defense Advanced Research Projects Agency) and Google are investing in for the purpose of group effectiveness. It cites the cost of training ($4.25m) each elite Navy SEAL and their ability to operate as if choreographed in high pressure / noise environments.

We contrasted this with the mechanisms used in most OSS that actually prevent flow-state from occurring. Today I’m going to dive into the work that goes into creating a new design (to activate a customer), and how our current OSS designs / processes inhibit flow.

Completely independently of our post, BBC released an article last week discussing how deep focus needs to become a central pillar of our future workplace culture.

To quote,

“Being switched on at all times and expected to pick things up immediately makes us miserable,” says [Cal] Newport. “It mismatches with the social circuits in our brain. It makes us feel bad that someone is waiting for us to reply to them. It makes us anxious.”

Because it is so easy to dash off a quick reply on email, Slack or other messaging apps, we feel guilty for not doing so, and there is an expectation that we will do it. This, says Newport, has greatly increased the number of things on people’s plates. “The average knowledge worker is responsible for more things than they were before email. This makes us frenetic. We should be thinking about how to remove the things on their plate, not giving people more to do…

Going cold turkey on email or Slack will only work if there is an alternative in place. Newport suggests, as many others now do, that physical communication is more effective. But the important thing is to encourage a culture where clear communication is the norm.

Newport is advocating for a more linear approach to workflows. People need to completely stop one task in order to fully transition their thought processes to the next one. However, this is hard when we are constantly seeing emails or being reminded about previous tasks. Some of our thoughts are still on the previous work – an effect called attention residue.”

That resonates completely with me. So let’s consider that and look into the collaboration process of a stylised order activation:

  1. Customer places an order via an order-entry portal
  2. Perform SQ (Service Qualification) and Credit Checks, automated processes
  3. Order is broken into work order activities (automated process)
  4. Designer1 picks up design work order activity from activity list and commences outside plant design (cables, pits, pipes). Her design pack includes:
    1. Updating AutoCAD / GIS drawings to show outside plant (new cable in existing pit/pipe, plus lead-in cable)
    2. Updating OSS to show splicing / patching changes
    3. Creates project BoQ (bill of quantities) in a spreadsheet
  5. Designer2 picks up next work order activity from activity list and commences active network design. His design pack includes:
    1. Allocation of CPE (Customer Premises Equipment) from warehouse
    2. Allocation of IP address from ranges available in IPAM (IP address manager)
    3. Configuration plan for CPE and network edge devices
  6. FieldWorkTeamLeader reviews inside plant and outside plant designs and allocates to FieldWorker1. FieldWorker1 is also issued with a printed design pack and the required materials
  7. FieldWorker1 commences build activities and finds out there’s a problem with the design. It indicates splicing the customer lead-in to fibres 1/2, but they appear to already be in use

So, what does FieldWorker1 do next?

The activity list / queue process has worked reasonably well up until this step. It allowed each person to work autonomously, stay in deep focus and in the sequence of their own choosing. But now, FieldWorker1 needs her issue resolved within only a few minutes or must move on to her next job (and next site). That would mean an additional truck-roll, and would also annoy the customer, who now has to re-schedule and take an additional day off work to open their house for the installer.

FieldWorker1 now needs to collaborate quickly with Designer1, Designer2 and FieldWorkTeamLeader. But most OSS simply don’t provide the tools to do so. The go-forward decision in our example draws upon information from multiple sources (ie AutoCAD drawing, GIS, spreadsheet, design document, IPAM and the OSS). Not only that, but the print-outs given to the field worker don’t reflect real-time changes in data. Nor do they give any up-stream context that might help her resolve this issue.

So FieldWorker1 contacts the designers directly (and separately) via phone.

Designer1 and Designer2 have to leave deep-think mode to respond urgently to the notification from FieldWorker1 and then take minutes to pull up the data. Designer1 and Designer2 have to contact each other about conflicting data sets. Too much time passes. FieldWorker1 moves to her next job.

Our challenge as OSS designers is to create a collaborative workspace that has real-time access to all data (not just the local context as the issue probably lies in data that’s upstream of what’s shown in the design pack). Our workspace must also provide all participants with the tools to engage visually and aurally – to choreograph head-office and on-site resources into “group flow” to resolve the issue.

Even if such tools existed today, the question I still have is how we ensure our designers aren’t interrupted from their all-important deep-think mode. How do we prevent them from having to drop everything multiple times a day/hour? Perhaps the answer is in an organisational structure – where all designers have to cycle through the Design Support function (eg 1 day in a fortnight), to take support calls from field workers and help them resolve issues. It will give designers a greater appreciation for problems occurring in the field and also help them avoid responding to emails, slack messages, etc when in design mode.

 

The use of drones by OSS

The last few days have been all about organisational structuring to support OSS and digital transformations. Today we take a different tack – a more technical diversion – onto how drones might be relevant to the field of OSS.

A friend recently asked for help to look into the use of drones in his archaeological business. This got me to thinking about how they might apply in cross-over with OSS.

I know they’re already used to perform really accurate 3D cable route / corridor surveying. Much cooler than the old surveyor diagrams on A1 sheets from the old days. Apparently experts in the field can even tell if there’s rock in the surveyed area by looking at the vegetation patterns, heat and LIDAR scans.

But my main area of interest is in the physical inventory. With accurate geo-tagging available on drones and the ability to GPS correct the data, it seems like a really useful technique for getting outside plant (OSP) data into OSS inventory systems.

Or geo-correcting data for brownfields assets (it’s not uncommon for cable routes to be drawn using road centre-lines when the actual easement to the side of the road isn’t known – ie it’s offset from the real cable route). I expect that the high resolution of drone imagery will allow for identification of poles, roads and pits (if not overgrown). Perhaps even aerial lead-in cables and attachment points on buildings?
Drone-based cable corridor surveys
Have you heard of drone-based OSP asset identification and mapping data being fed into inventory systems yet? I haven’t, but it seems like the logical next step. Do you know anyone who has started to dabble in this type of work? If you do, please send me a note as I’d love to be introduced.

Once loaded into the inventory system, with 3D geo-location, we then have the ability to visualise the OSP data with augmented reality solutions.

And other applications for drone technology?

Using my graphene analogy to help fix OSS data

By now I’m sure you’ve heard about graph databases. You may’ve even read my earlier article about the benefits graph databases offer when modelling network inventory when compared with relational databases. But have you heard the Graphene Database Analogy?

I equate OSS data migration and data quality improvement with graphene, which is made up of single layers of carbon atoms in hexagonal lattices (planes).

The graphene data model

There are four concepts of interest with the graphene model:

  1. Data Planes – Preparing and ingesting data from siloes (eg devices, cards, ports) is relatively easy. ie building planes of data (black carbon atoms and bonds above)
  2. Bonds between planes – It’s the interconnections between siloes (eg circuits, network links, patch-leads, joints in pits, etc) that are usually trickier. So I envisage alignment of nodes (on the data plane or graph, not necessarily network nodes) as equivalent to bonds between carbon atoms on separate planes (red/blue/aqua lines above).
    Alignment comes in many forms:

    1. Through spatial alignment (eg a joint and pit have the same geospatial position, so the joint is probably inside the pit)
    2. Through naming conventions (eg same circuit name associated with two equipment ports)
    3. Various other linking-key strategies
    4. Nodes on each data plane can potentially be snapped together (either by an operator or an algorithm) if you find consistent ways of aligning nodes that are adjacent across planes
  3. Confidence – I like to think about data quality in terms of confidence-levels. Some data is highly reliable, other data sets less so. For example, if you have two equipment ports with a circuit name identifier, then your confidence level might be 4 out of 4* because you know the exact termination points of that circuit. Conversely, let’s say you just have a circuit with a name that follows a convention of “LocA-LocB-speed-index-type” but has no associated port data. In that case you only know that the circuit terminates at LocationA and LocationB, but not which building, rack, device, card or port, so your confidence level might only be 2 out of 4 (see the sketch below the notes for a simple example of alignment and confidence scoring).
  4. Visualisation – Having these connected panes of data allows you to visualise heat-map confidence levels (and potentially gaps in the graph) on your OSS data, thus identifying where data-fix (eg physical audits) is required

* the example of a circuit with two related ports above might not always achieve 4 out of 4 if other checks are applied (eg if there are actually 3 ports with that associated circuit name in the data but we know it should represent a two-ended patch-lead).

Note: The diagram above (from graphene-info.com) shows red/blue/aqua links between graphene layers as capturing hydrogen, but is useful for approximating the concept of aligning nodes between planes
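
To make the alignment and confidence concepts slightly more tangible, here’s a small illustrative sketch (invented data) that bonds nodes on two planes together via a shared circuit name and records a confidence score against each bond:

```python
# Two "planes" of siloed data (invented examples)
port_plane = [
    {"port": "SYD01/RTR-01/1/3", "circuit_name": "SYD-MEL-10G-001-ETH"},
    {"port": "MEL01/RTR-07/2/1", "circuit_name": "SYD-MEL-10G-001-ETH"},
]
circuit_plane = [
    {"circuit_name": "SYD-MEL-10G-001-ETH"},
    {"circuit_name": "SYD-BNE-1G-014-ETH"},   # no associated port data found
]

bonds = []
for circuit in circuit_plane:
    ends = [p["port"] for p in port_plane
            if p["circuit_name"] == circuit["circuit_name"]]
    confidence = 4 if len(ends) == 2 else 2   # both ports known vs name-only
    bonds.append({"circuit": circuit["circuit_name"],
                  "ends": ends, "confidence": confidence})

for bond in bonds:
    print(bond)   # low-confidence bonds become heat-map / audit candidates
```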

Can OSS/BSS assist CX? We’re barely touching the surface

Have you ever experienced an epic customer experience (CX) fail when dealing with a network service operator, like the one I described yesterday?

In that example, the OSS/BSS, and possibly the associated people / process, had a direct impact on poor customer experience. Admittedly, that 7 truck-roll experience was a number of years ago now.

We have fewer excuses these days. Smart phones and network connected devices allow us to get OSS/BSS data into the field in ways we previously couldn’t. There’s no need for printed job lists, design packs and the like. Our OSS/BSS can leverage these connected devices to give far better decision intelligence in real time.

If we look to the logistics industry, we can see how parcel tracking technologies help to automatically provide status / progress to parcel recipients. We can see how recipients can also modify their availability, which automatically adjusts logistics delivery sequencing / scheduling.

This has multiple benefits for the logistics company:

  • It increases first time delivery rates
  • Improves the ability to automatically notify customers (eg email, SMS, chatbots)
  • Decreases customer enquiries / complaints
  • Decreases the amount of time the truck drivers need to spend communicating back to base and with clients
  • But most importantly, it improves the customer experience

Logistics is an interesting challenge for our OSS/BSS due to the sheer volume of customer interaction events handled each day.

But there’s another area that excites me even more, where CX is improved through better data quality:

  • It’s the ability for field workers to interact with OSS/BSS data in real-time
  • To see the design packs
  • To compare with field situations
  • To update the data where there is inconsistency.

Even more excitingly, to introduce augmented reality to assist with decision intelligence for field work crews:

  • To provide an overlay of what fibres need to be spliced together
  • To show exactly which port a patch-lead needs to connect to
  • To show where an underground cable route goes
  • To show where a cable runs through trayway in a data centre
  • etc, etc

We’re barely touching the surface of how our OSS/BSS can assist with CX.

The TMN model suffers from modern network anxiety

As the TMN diagram below describes, each layer up in the network management stack abstracts but connects (as described in more detail in “What an OSS shouldn’t do”). That is, each higher layer reduces the amount of information/control within the domain that it’s responsible for, but it assumes a broader responsibility for connecting multiple domains together.
OSS abstract and connect

There’s just one problem with the diagram. It’s a little dated when we take modern virtualised infrastructure into account.

In the old days, despite what the layers may imply, it was common for an OSS to actually touch every layer of the pyramid to resolve faults. That is, OSS regularly connected to NMS, EMS and even devices (NE) to gather network health data. The services defined at the top of the stack (BSS) could be traced to the exact devices (NE / NEL) via the circuits that traversed them, regardless of the layers of abstraction. It helped for root-cause analysis (RCA) and service impact analysis (SIA).

But with modern networks, the infrastructure is virtualised, load-balanced and since they’re packet-switched, they’re completely circuitless (I’m excluding virtual circuits here by the way). The bottom three layers of the diagram could effectively be replaced with a cloud icon, a cloud that the OSS has little chance of peering into (see yellow cloud in the diagram later in this post).

The concept of virtualisation adds many sub-layers of complexity too by the way, as highlighted in the diagram below.

ONAP triptych

So now the customer services at the top of the pyramid (BSS / BML) are quite separated from the resources at the bottom, other than to say the services consume from a known pool of resources. Fault resolution becomes more abstracted as a result.

But what’s interesting is that there’s another layer that’s not shown on the typical TMN model above. That is the physical network inventory (PNI) layer. The cables, splices, joints, patch panels, equipment cards, etc that underpin every network. Yes, even virtual networks.

In the old networks the OSS touched every layer, including the missing layer. That functionality was provided by PNI management. Fault resolution also occurred at this layer through tickets of work conducted by the field workforce (Workforce Management – WFM).

In new networks, OSS/BSS tie services to resource pools (the top two layers). They also still manage PNI / WFM (the bottom, physical layer). But then there’s potentially an invisible cloud in the middle. Three distinctly different pieces, probably each managed by a different business unit or operational group.
BSS OSS cloud abstract

Just wondering – has your OSS/BSS developed control anxiety issues from losing some of the control that it once had?

Is your data getting too heavy for your OSS to lift?

“Data mass is beginning to exhibit gravitational properties – it’s getting heavy – and eventually it will be too big to move.”
Guy Lupo
in this article on TM Forum’s Inform that also includes contributions from George Glass and Dawn Bushaus.

Really interesting concept, and article, linked above.

The touchpoint explosion is helping to make our data sets ever bigger… and heavier.

In my earlier days in OSS, I was tasked with leading the migration of large sets of data into relational databases for use by OSS tools. I was lucky enough to spend years working on a full-scope OSS (ie its central database housed data for inventory management, alarm management, performance management, service order management, provisioning, etc, etc).

Having all those data sets in one database made it incredibly powerful as an insight generation tool. With a few SQL joins, you could correlate almost any data sets imaginable. But it was also a double-edged sword. Firstly, ensuring that all of the sets would have linking keys (and with high data quality / reliability) was a data migrator’s nightmare. Secondly, all those joins being done by the OSS made it computationally heavy. It wasn’t uncommon for a device list query to take the OSS 10 minutes to provide a response in the PROD environment.

There’s one concept that makes GIS tools more inherently capable of lifting heavier data sets than OSS – they generally load data in layers (that can be turned on and off in the visual pane) and unlike OSS, don’t attempt to stitch the different sets together. The correlation between data sets is achieved through geographical proximity scans, either algorithmically, or just by the human eye of the operator.

If we now consider real-time data (eg alarms/events, performance counters, etc), we can take a leaf out of Einstein’s book and correlate by space and time (ie by geographical and/or time-series proximity between otherwise unrelated data sets). Just wondering – How many OSS tools have you seen that use these proximity techniques? Very few in my experience.
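
As a bare-bones sketch of that space-and-time proximity idea (the thresholds and event data below are invented), otherwise unrelated events can be paired without any shared linking keys:

```python
from datetime import datetime, timedelta
from math import hypot

events = [
    {"source": "alarm",      "x": 1000.0, "y": 2000.0,
     "time": datetime(2019, 1, 1, 10, 0, 0)},
    {"source": "field_job",  "x": 1003.0, "y": 2002.0,
     "time": datetime(2019, 1, 1, 10, 4, 0)},
    {"source": "perf_alert", "x": 8000.0, "y": 9000.0,
     "time": datetime(2019, 1, 1, 10, 1, 0)},
]

def correlate(events, max_dist_m=50.0, max_gap=timedelta(minutes=10)):
    """Pair events by geographic and time-series proximity (no linking keys)."""
    pairs = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            close_in_space = hypot(a["x"] - b["x"], a["y"] - b["y"]) <= max_dist_m
            close_in_time  = abs(a["time"] - b["time"]) <= max_gap
            if close_in_space and close_in_time:
                pairs.append((a["source"], b["source"]))
    return pairs

print(correlate(events))   # [('alarm', 'field_job')]
```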

BTW. I’m the first to acknowledge that a stitched data set (ie via linking keys such as device ID between data sets) is definitely going to be richer than uncorrelated data sets. Nonetheless, this might be a useful technique if your data is getting too heavy for your OSS to lift (eg simple queries are causing minutes of downtime / delay for operators).

A defacto spatial manager

Many years ago, I was lucky enough to lead a team responsible for designing a complex inside and outside plant network in a massive oil and gas precinct. It had over 120 buildings and more than 30 networked systems.

We were tasked with using CAD (Computer Aided Design) and Office tools to design the comms and security solution for the precinct. And when I say security, not just network security, but building access control, number plate recognition, coast guard and even advanced RADAR amongst other things.

One of the cool aspects of the project was that it was more three-dimensional than a typical telco design. A telco cable network is usually planned on x and y coordinates because the height (z) coordinate usually sits on just one or two planes (eg all ducts are at say 0.6m below ground level or all catenary wires between poles are at say 5m above ground). However, on this site, cable trays ran at all sorts of levels to run around critical gas processing infrastructure.

We actually proposed to implement a light-weight OSS for management of the network, including outside plant assets, due to its easier maintainability compared with CAD files. The customer’s existing CAD files may have been perfect when initially built / handed over, but were nearly useless to us because of all the undocumented changes that had happened in the ensuing period. However, the customer was used to CAD files and wanted to stay with CAD files.

This led to another cool aspect of the project – we had to build out defacto OSS data models to capture and maintain the designs.

We modelled:

  • The support plane (trayway, ducts, sub-ducts, trenches, lead-ins, etc)
  • The physical connectivity plane (cables, splices, patch-panels, network termination points, physical ports, devices, etc)
  • The logical connectivity plane (circuits, system connectivity, asset utilisation, available capacity, etc)
  • Interconnection between these planes (sketched in the simplified example below)
  • Life-cycle change management
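
A stripped-down, purely illustrative version (invented identifiers) of those planes and the interconnections between them might look something like this:

```python
# Support plane: containment hierarchy (trays, ducts, sub-ducts, trenches, etc)
support = {"TRAY-100": {"type": "trayway", "carries": ["CAB-FIB-001"]}}

# Physical connectivity plane: cables terminating on ports / patch-panels
physical = {"CAB-FIB-001": {"a_end": "BLDG-12/PP-01/port-03",
                            "b_end": "BLDG-40/PP-07/port-11"}}

# Logical connectivity plane: circuits riding the physical plane
logical = {"CCT-SEC-CCTV-0042": {"system": "CCTV", "bearer": "CAB-FIB-001"}}

def trace(circuit_id):
    """Walk from a circuit down through the physical and support planes."""
    cable_id = logical[circuit_id]["bearer"]
    cable = physical[cable_id]
    trays = [t for t, rec in support.items() if cable_id in rec["carries"]]
    return {"circuit": circuit_id, "cable": cable_id,
            "ends": (cable["a_end"], cable["b_end"]), "support": trays}

print(trace("CCT-SEC-CCTV-0042"))
```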

This definitely gave a better appreciation for the type of rules, variants and required data sets that reside under the hood of a typical OSS.

Have you ever had a non-OSS project that gave you a better appreciation / understanding of OSS?

I’m also curious. Have any of you designed your physical network plane in three dimensions? With a custom or out-of-the-box tool?

The OSS Matrix – the blue or the red pill?

OSS Matrix
OSS tend to be very good at presenting a current moment in time – the current configuration of the network, the health of the network, the activities underway.

Some (but not all) tend to struggle to cope with other moments in time – past and future.

Most have tools that project into the future for the purpose of capacity planning, such as link saturation estimation (based on projecting forward from historical trend-lines). Predictive analytics is a current buzz-word as research attempts to predict future events and mitigate for them now.

Most also have the ability to look into the past – to look at historical logs to give an indication of what happened previously. However, historical logs can be painful and tend towards forensic analysis. We can generally see who (or what) performed an action at a precise timestamp, but it’s not so easy to correlate the surrounding context in which that action occurred. They rarely present a fully-stitched view in the OSS GUI that shows the state of everything else around it at that snapshot in time past. At least, not to the same extent that the OSS GUI can stitch and present current state together.

But the scenario that I find most interesting is for the purpose of network build / maintenance planning. Sometimes these changes occur as isolated events, but are more commonly run as projects, often with phases or milestone states. For network designers, it’s important to differentiate between assets (eg cables, trenches, joints, equipment, ports, etc) that are already in production versus assets that are proposed for installation in the future.

And naturally those states cross over at cut-in points. The proposed new branch of the network needs to connect to the existing network at some time in the future. Designers need to see available capacity now (eg spare ports), but be able to predict with confidence that capacity will still be available for them in the future. That’s where the “reserved” status comes into play, which tends to work for physical assets (eg physical ports) but can be more challenging for logical concepts like link utilisation.

In large organisations, it can be even more challenging because there’s not just one augmentation project underway, but many. In some cases, there can be dependencies where one project relies on capacity that is being stood up by other future projects.

Not all of these projects / plans will make it into production (eg funding is cut or a more optimal design option is chosen), so there is also the challenge of deprecating planned projects. Capability is required to find whether any other future projects are dependent on this deprecated future project.
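
As a rough sketch of the problem (all names invented), each asset can carry both a status and the project that introduces or consumes it, which at least lets you ask the dependency question when a planned project is deprecated:

```python
assets = [
    {"id": "PORT-A-01", "state": "IN-SERVICE", "project": None},
    {"id": "PORT-A-02", "state": "RESERVED",   "project": "PROJ-NORTH-RING"},
    {"id": "CABLE-077", "state": "PLANNED",    "project": "PROJ-NORTH-RING"},
    {"id": "CABLE-078", "state": "PLANNED",    "project": "PROJ-EAST-SPUR"},
]

# Which future projects rely on capacity delivered by another future project?
dependencies = {"PROJ-EAST-SPUR": ["PROJ-NORTH-RING"]}

def impact_of_deprecating(project):
    """Find planned/reserved assets and downstream projects affected."""
    stranded   = [a["id"] for a in assets if a["project"] == project]
    dependants = [p for p, deps in dependencies.items() if project in deps]
    return {"stranded_assets": stranded, "dependent_projects": dependants}

print(impact_of_deprecating("PROJ-NORTH-RING"))
```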

It can get incredibly challenging to develop this time/space matrix in OSS. If you’re a developer of OSS, the question becomes whether you want to take the blue or red pill.

Blown away by one innovation – a follow-up concept

Last Friday’s blog discussed how I’ve just been blown away by the most elegant OSS innovation I’ve seen in decades.

You can read more detail via the link, but the three major factors in this simple, elegant solution to data quality problems (probably OSS‘ biggest kryptonite) are:

  1. Being able to make connections that break standard object hierarchy rules; but
  2. Having the ability to mark that standard rules haven’t been followed; and
  3. Being able to use those markers to prioritise the fixing of data at a more convenient time

It’s effectively point 2 that has me most excited. So novel, yet so obvious in hindsight. When doing data migrations in the past, I’ve used confidence flags to indicate what I can rely on and what needs further audit / remediation / cleansing. But the recent demo I saw of the CROSS product is the first time I’ve seen it built into the user interface of an OSS.

This one factor, if it spreads, has the ability to change OSS data quality in the same way that Likes (or equivalent) have changed social media by acting as markers of confidence / quality.

Think about this for a moment – what if everyone who interacts with an OSS GUI had the ability to rank their confidence in any element of data they’re touching, with a mechanism as simple as clicking a like/dislike button (or similar)?

It's an imperfect example, but let's say field techs are given a design pack and, upon arriving at site, find that the design doesn't match in-situ conditions (eg the fibre pairs they're expecting to splice a customer lead-in cable to are already carrying live traffic, which they diagnose is due to data problems in an upstream distribution joint). Rather than jeopardising the customer activation window by spending hours/days fixing all the trickle-down effects of the distribution joint data, they just mark confidence levels in the vicinity and get the customer connected.

The aggregate of that confidence information is then used to show data quality heat maps and help remediation teams prioritise the areas that they need to work on next. It helps to identify data and process improvements using big circle and/or little circle remediation techniques.
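
One simple way that aggregation might work – assuming hypothetical distribution-area codes and +1/-1 confidence votes captured from the GUI or field apps – is sketched below:

```python
from collections import defaultdict

# Hypothetical confidence votes: (network_area, vote), where the vote is
# +1 (data looks right) or -1 (data looks wrong)
votes = [
    ("DA-NORTH-04", -1), ("DA-NORTH-04", -1), ("DA-NORTH-04", +1),
    ("DA-SOUTH-11", +1), ("DA-SOUTH-11", +1),
    ("DA-WEST-02", -1),
]

def data_quality_heat(votes):
    """Aggregate like/dislike votes into a per-area score; the lowest-confidence
    areas float to the top of the remediation queue."""
    totals = defaultdict(int)
    for area, vote in votes:
        totals[area] += vote
    return sorted(totals.items(), key=lambda item: item[1])

print(data_quality_heat(votes))
# [('DA-NORTH-04', -1), ('DA-WEST-02', -1), ('DA-SOUTH-11', 2)]
```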

Possibly the most important implication of the in-built ranking system is that everyone in the end-to-end flow, from order takers to designers through to coal-face operators, can better predict whether they need to cater for potential data problems.

Your thoughts?? In what scenarios do you think it could work best, or alternatively, not work?

The double-edged sword of OSS/BSS integrations

…good argument for a merged OSS/BSS, wouldn’t you say?
John Malecki

The question above was posed in relation to Friday's post about the currency and relevance of OSS compared with research reports, analyses and strategic plans, as well as how to extend OSS longevity.

This is a brilliant, multi-faceted question from John. My belief is that it is a double-edged sword.

Out of my experiences with many OSS, one product stands out above all the others I’ve worked with. It’s an integrated suite of Fault Management, Performance Management, Customer Management, Product / Service Management, Configuration / orchestration / auto-provisioning, Outside Plant Management / GIS, Traffic Engineering, Trouble Ticketing, Ticket of Work Management, and much more, all tied together with the most elegant inventory data model I’ve seen.

Because it was a single-vendor solution built on a relational database, the cross-pollination (enrichment) of data between all these different modules made it the most powerful insight engine I've worked with. With some SQL skills and an understanding of the data model, you could ask it complex cross-domain questions quite easily because all the data was stored in a single database. That edge of the sword made a powerful argument for a merged OSS/BSS.
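
To illustrate with a deliberately tiny, hypothetical schema (nothing like the vendor's actual data model), this is the style of cross-domain question – "which customer services are riding on devices currently in critical alarm?" – that collapses into a single query when everything lives in one relational database:

```python
import sqlite3

# Build a toy, in-memory slice of a single-database OSS model
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE services  (service_id TEXT PRIMARY KEY, customer_id INTEGER, port_id TEXT);
CREATE TABLE ports     (port_id TEXT PRIMARY KEY, device_id TEXT);
CREATE TABLE alarms    (alarm_id INTEGER PRIMARY KEY, device_id TEXT, severity TEXT);

INSERT INTO customers VALUES (1, 'Acme Corp');
INSERT INTO services  VALUES ('SVC-9001', 1, 'DEV-7:1/1/3');
INSERT INTO ports     VALUES ('DEV-7:1/1/3', 'DEV-7');
INSERT INTO alarms    VALUES (1, 'DEV-7', 'CRITICAL');
""")

# One query hops from assurance (alarms) to inventory (ports) to the customer domain
rows = conn.execute("""
    SELECT s.service_id, c.name, a.severity
    FROM services  s
    JOIN customers c ON c.customer_id = s.customer_id
    JOIN ports     p ON p.port_id     = s.port_id
    JOIN alarms    a ON a.device_id   = p.device_id
    WHERE a.severity = 'CRITICAL';
""").fetchall()

print(rows)  # [('SVC-9001', 'Acme Corp', 'CRITICAL')]
```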

Unfortunately, the level of cross-referencing that made it so powerful also made it really challenging to build an initial data set to facilitate all modules being inter-operable. By contrast, an independent inventory management solution could just pull data out of each NMS / EMS under management, massage the data for ingestion and then you’d have an operational system. The abovementioned solution also worked this way for inventory, but to get the other modules cross-referenced with the inventory required engineering rules, hand-stitched spreadsheets, rules of thumb, etc. Maintaining and upgrading also became challenges after the initial data had been created. In many cases, the clients didn’t have all of the data that was needed, so a data creation exercise needed to be undertaken.

If I had the choice, I would’ve done more of the cross-referencing at data level (eg via queries / reports) rather than entwining the modules together so tightly at application level… except in the most compelling cases. It’s an example of the chess-board analogy.

If given the option between merged (tightly coupled) and loosely coupled, which would you choose? Do you have any insights or experiences to share on how you’ve struck the best balance?

Big circle. Little circle. Crossing the red line

Data quality is the bane of many a telco. If the data quality is rubbish then the OSS tools effectively become rubbish too.

Feedback loops are one of the most underutilised tools in a data fix arsenal. However, few people realise that there are two levels of feedback loops. There’s what I refer to as big circle and little circle feedback loops.

The little circle uses feedback within the data alone – using one data set to compare against and reconcile another. That can produce good results, but it's only part of the story; many data challenges extend further than that if you're seeking a resolution. Most operators focus on this small loop, but this approach maxes out at well below 100% data accuracy.
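
A little-circle loop might look something like the sketch below – reconciling hypothetical inventory port states against what discovery from the NMS/EMS reports:

```python
# Hypothetical port states: what inventory believes vs what network discovery reports
inventory = {
    "DEV-7:1/1/1": "in_service",
    "DEV-7:1/1/2": "spare",
    "DEV-7:1/1/3": "in_service",
}
discovered = {
    "DEV-7:1/1/1": "in_service",
    "DEV-7:1/1/2": "in_service",   # inventory thinks this one is spare
    "DEV-7:1/1/4": "in_service",   # not in inventory at all
}

def reconcile(inventory, discovered):
    """Flag every mismatch between inventory and the discovered network state."""
    discrepancies = []
    for port, state in discovered.items():
        if port not in inventory:
            discrepancies.append((port, "missing_from_inventory", state))
        elif inventory[port] != state:
            discrepancies.append((port, f"inventory={inventory[port]}", f"network={state}"))
    for port in inventory.keys() - discovered.keys():
        discrepancies.append((port, "not_discovered", inventory[port]))
    return discrepancies

for discrepancy in reconcile(inventory, discovered):
    print(discrepancy)
```

Note that this loop can only tell us that the two sources disagree; it can't tell us which one reflects what's actually in the field, which is part of why the small loop tops out below 100%.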

The big circle is designing feedback loops that incorporate data quality into end-to-end processes, which includes the field-work part of the process. Whilst this approach may also never deliver data perfection, it’s much more likely to than the small loop approach.

Redline markups have been the traditional mechanism for getting feedback from the field back into improving OSS data. For example, if designers issue a design pack to field techs that proves to be incorrect, the techs return the design with redline markups to show what they've implemented in the field instead.

With mobile technology and the right software tools, field workers could directly update data. Unfortunately this model doesn’t seem to fit into practices that have been around for decades.

There remain great opportunities to improve the efficiency of big circle feedback loops. They probably need a new way of thinking, but still need to fit into the existing context of field worker activities.

The challenge with the big circle approach is that it tends to be more costly. It’s much cheaper to write data-fix algorithms than send workers into the field to resolve data issues.

There are a few techniques that could leverage existing field-worker movements, rather than sending workers out specifically to resolve data issues:

  1. Have "while you're there" functionality that allows a field worker to perform data-fix (or other) related jobs while they're already at a location with known problems (a rough sketch of this matching follows the list)
  2. In some cases, field worker schedules don't allow enough time to fix a problem that they find (or that was flagged "while you're there"). Instead, they can be incentivised to capture information about the problem (eg the data fix required) for future rectification
  3. Have hot-spot analysis tools that target the areas of the network where field-worker intervention is most likely to improve data quality (ie the areas where the data is most problematic)
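
For the "while you're there" idea mentioned in the list, here's a rough sketch (made-up coordinates, a simple haversine distance check, and nothing resembling a real workforce-management tool) of bundling known data issues onto jobs that field workers are already scheduled to attend:

```python
import math

# Hypothetical (lat, lon) coordinates of known data issues and of today's scheduled jobs
data_issues = {
    "ISSUE-31": (-33.868, 151.209),
    "ISSUE-47": (-33.892, 151.155),
    "ISSUE-58": (-34.401, 150.878),
}
scheduled_jobs = {
    "JOB-1001": (-33.870, 151.210),
    "JOB-1002": (-33.750, 151.290),
}

def km_between(a, b):
    """Approximate great-circle distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def while_youre_there(jobs, issues, radius_km=1.0):
    """Suggest data-fix issues within easy reach of jobs already scheduled."""
    suggestions = {}
    for job, job_loc in jobs.items():
        nearby = [issue for issue, loc in issues.items()
                  if km_between(job_loc, loc) <= radius_km]
        if nearby:
            suggestions[job] = nearby
    return suggestions

print(while_youre_there(scheduled_jobs, data_issues))
# {'JOB-1001': ['ISSUE-31']} - worth bundling while the tech is already on site
```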

Digital twins

Well-designed digital twins based on business priorities have the potential to significantly improve enterprise decision making. Enterprise architecture and technology innovation leaders must factor digital twins into their Internet of Things architecture and strategy.
Gartner’s Top 10 Technology Trends.

Digital twinning has generated some buzz lately, particularly in IoT circles. Digital twins are basically digital representations of physical assets, including their status, characteristics, performance and behaviours.

But it's not really all that new, is it? OSS have been doing this for years. When was the first time you can recall seeing an inventory tool that showed:

  • Digital representations of devices that were physically located thousands of kilometres away (or in the room next door for that matter)
  • Visual representations of those devices (eg front-face, back-plate, rack-unit sizing, geo-spatial positioning, etc)
  • Current operational state (in-service, out-of-service, in alarm, under test, real-time performance against key metrics, etc)
  • Installed components (eg cards, ports, software, etc)
  • Customer services being carried
  • Current device configuration details
  • Nearest connected neighbours
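
For illustration only, here's a heavily simplified sketch of the kind of device record inventory tools have maintained for years – effectively a digital twin under a less glamorous name. All names and attributes below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PortTwin:
    port_id: str
    operational_state: str                      # in-service, out-of-service, under test...
    carried_services: List[str] = field(default_factory=list)
    connected_neighbour: str = ""               # nearest connected neighbour port

@dataclass
class DeviceTwin:
    device_id: str
    location: str                               # geo-spatial / rack position reference
    rack_units: int
    operational_state: str
    config_version: str
    cards: Dict[str, List[PortTwin]] = field(default_factory=dict)

core_router = DeviceTwin(
    device_id="DEV-7",
    location="SYD-DC1 / Row 3 / Rack 12 / RU 20-21",
    rack_units=2,
    operational_state="in-service",
    config_version="r7.3-2017-02-14",
    cards={
        "SLOT-1/CARD-A": [
            PortTwin("1/1/3", "in-service", ["SVC-9001"], "DEV-9:2/1/1"),
        ],
    },
)
print(core_router.device_id, core_router.operational_state)
```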

Digital twinning, this amazing new concept, has actually been around for almost as long as OSS have. We just call it inventory management though. It doesn’t sound quite so sexy when we say it.

But how can we extend what we already do into digital twinning in other domains (eg manufacturing, etc)?

The end of cloud computing

…. but we’ve only just started and we haven’t even got close to figuring out how to manage it yet (from an aggregated view I mean, not just within a single vendor platform)!!

This article from Peter Levine of Andreessen Horowitz predicts "The end of cloud computing."

Now I’m not so sure that this headline is going to play out in the near future, but Peter Levine does make a really interesting point in his article (and its embedded 25 min video). There are a number of nascent technologies, such as autonomous vehicles, that will need their edge devices to process immense amounts of data locally without having to backhaul it to centralised cloud servers for processing.

Autonomous vehicles will need to consume data in real-time from a multitude of in-car sensors, but only a small percentage of that data will need to be transmitted back over networks to a centralised cloud base. But that backhauled data will be important for the purpose of aggregated learning, analytics, etc, the findings of which will be shipped back to the edge devices.

Edge (or fog) compute is just one more platform type that our OSS will need to stay abreast of into the future.