Inventory Management re-states its case

In a post last week we posed the question of whether Inventory Management still retains relevance. There are certainly use cases where it remains unquestionably needed. But there are perhaps others where it's no longer required, a relic of old-school processes and data flows.
 
If you have an extensive OSP (Outside Plant) network, you have almost no option but to store all this passive infrastructure in an Inventory Management solution. You don’t have the option of having an EMS (Element Management System) console / API to tell you the current design/location/status of the network. 
 
In the modern world of ubiquitous connection and overlay / virtual networks, Inventory Management might be less essential than it once was. For service qualification, provisioning and perhaps even capacity planning, everything you need to know is available on demand from the EMS/s. The network is a more correct version of the network inventory than any external repository (ie Inventory Management) can ever hope to be, even if you have great success with synchronisation.
 
But I have a couple of other new-age use-cases to share with you where Inventory Management still retains relevance.
 
One is for connectivity (okay, so this isn't exactly a new-age use-case, but the scenario I'm about to describe is). If we have a modern overlay / virtual network, anything that stays within a domain is likely to be better served by its EMS equivalent. Especially since, with advanced routing protocols, connectivity is no longer as simple as physical connections or nearest neighbours. But anything that goes cross-domain and/or off-net needs a mechanism to correlate, coordinate and connect. That's the role the Inventory Manager is able to play (conceptually).
 
The other is for digital twinning. OSS (including Inventory Management) was the “original twin.” It was an offline mimic of the production network. But I cite Inventory Management as having a new-age requirement for the digital twin. I increasingly foresee the need for predictive scenarios to be modelled outside the production network (ie in the twin!). We want to try failure / degradation scenarios. We want to optimise our allocation of capital. We want to simulate and optimise customer experience under different network states and loads. We’re beginning to see the compute power that’s able to drive these scenarios (and more) at scale.
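As a toy illustration of that predictive use case (the topology, node names and single-link failure below are entirely invented), a twin could clone the inventory graph, fail a link and test whether a customer site still has a path to the core, all without touching production:

```python
from collections import deque

# Hypothetical network topology held in the twin (adjacency list).
# Node and link names are invented for illustration only.
network = {
    "CORE-1": {"AGG-1", "AGG-2"},
    "AGG-1": {"CORE-1", "ACC-1"},
    "AGG-2": {"CORE-1", "ACC-1"},
    "ACC-1": {"AGG-1", "AGG-2", "CUST-A"},
    "CUST-A": {"ACC-1"},
}

def reachable(graph, src, dst):
    """Breadth-first search: is dst reachable from src?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nbr in graph[node] - seen:
            seen.add(nbr)
            queue.append(nbr)
    return False

def simulate_link_failure(graph, a, b):
    """Return a copy of the graph with the a<->b link removed."""
    twin = {n: set(nbrs) for n, nbrs in graph.items()}
    twin[a].discard(b)
    twin[b].discard(a)
    return twin

# What-if: does CUST-A survive the loss of the AGG-1 to ACC-1 link?
degraded = simulate_link_failure(network, "AGG-1", "ACC-1")
print(reachable(degraded, "CUST-A", "CORE-1"))  # True - the AGG-2 path remains
```

Scale that idea up to thousands of scenarios, overlaid with traffic, cost and customer-experience models, and you have the kind of simulations described above.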
 
Is it possible to handle these without an Inventory Manager (or equivalent)?

Completing an OSS design, going inside, going outside, going Navy SEAL

Our most recent post last week discussed the research into group flow that organisations like DARPA (Defense Advanced Research Projects Agency) and Google are investing in, for the purpose of group effectiveness. It cites the cost of training each elite Navy SEAL ($4.25m) and their ability to operate as if choreographed in high-pressure / high-noise environments.

We contrasted this with the mechanisms used in most OSS that actually prevent flow-state from occurring. Today I’m going to dive into the work that goes into creating a new design (to activate a customer), and how our current OSS designs / processes inhibit flow.

Completely independently of our post, the BBC released an article last week discussing how deep focus needs to become a central pillar of our future workplace culture.

To quote,

“Being switched on at all times and expected to pick things up immediately makes us miserable,” says [Cal] Newport. “It mismatches with the social circuits in our brain. It makes us feel bad that someone is waiting for us to reply to them. It makes us anxious.”

Because it is so easy to dash off a quick reply on email, Slack or other messaging apps, we feel guilty for not doing so, and there is an expectation that we will do it. This, says Newport, has greatly increased the number of things on people’s plates. “The average knowledge worker is responsible for more things than they were before email. This makes us frenetic. We should be thinking about how to remove the things on their plate, not giving people more to do…

Going cold turkey on email or Slack will only work if there is an alternative in place. Newport suggests, as many others now do, that physical communication is more effective. But the important thing is to encourage a culture where clear communication is the norm.

Newport is advocating for a more linear approach to workflows. People need to completely stop one task in order to fully transition their thought processes to the next one. However, this is hard when we are constantly seeing emails or being reminded about previous tasks. Some of our thoughts are still on the previous work – an effect called attention residue.”

That resonates completely with me. So let’s consider that and look into the collaboration process of a stylised order activation:

  1. Customer places an order via an order-entry portal
  2. Perform SQ (Service Qualification) and Credit Checks, automated processes
  3. Order is broken into work order activities (automated process)
  4. Designer1 picks up design work order activity from activity list and commences outside plant design (cables, pits, pipes). Her design pack includes:
    1. Updating AutoCAD / GIS drawings to show outside plant (new cable in existing pit/pipe, plus lead-in cable)
    2. Updating OSS to show splicing / patching changes
    3. Creates project BoQ (bill of quantities) in a spreadsheet
  5. Designer2 picks up next work order activity from activity list and commences active network design. His design pack includes:
    1. Allocation of CPE (Customer Premises Equipment) from warehouse
    2. Allocation of IP address from ranges available in IPAM (IP address manager)
    3. Configuration plan for CPE and network edge devices
  6. FieldWorkTeamLeader reviews inside plant and outside plant designs and allocates to FieldWorker1. FieldWorker1 is also issued with a printed design pack and the required materials
  7. FieldWorker1 commences build activities and finds out there’s a problem with the design. It indicates splicing the customer lead-in to fibres 1/2, but they appear to already be in use

So, what does FieldWorker1 do next?

The activity list / queue process has worked reasonably well up until this step in the process. It allowed each person to work autonomously, stay in deep focus and work in the sequence of their own choosing. But now, FieldWorker1 needs her issue resolved within a few minutes or she must move on to her next job (and next site). That would mean an additional truck-roll, but it would also annoy the customer, who now has to re-schedule and take an additional day off work to open their house for the installer.

FieldWorker1 now needs to collaborate quickly with Designer1, Designer2 and FieldWorkTeamLeader. But most OSS simply don’t provide the tools to do so. The go-forward decision in our example draws upon information from multiple sources (ie AutoCAD drawing, GIS, spreadsheet, design document, IPAM and the OSS). Not only that, but the print-outs given to the field worker don’t reflect real-time changes in data. Nor do they give any up-stream context that might help her resolve this issue.

So FieldWorker1 contacts the designers directly (and separately) via phone.

Designer1 and Designer2 have to leave deep-think mode to respond urgently to the notification from FieldWorker1 and then take minutes to pull up the data. Designer1 and Designer2 have to contact each other about conflicting data sets. Too much time passes. FieldWorker1 moves to her next job.

Our challenge as OSS designers is to create a collaborative workspace that has real-time access to all data (not just the local context as the issue probably lies in data that’s upstream of what’s shown in the design pack). Our workspace must also provide all participants with the tools to engage visually and aurally – to choreograph head-office and on-site resources into “group flow” to resolve the issue.
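As a thought experiment, here's a minimal sketch of one check such a workspace might run the moment FieldWorker1 flags the problem: compare the splice allocations in the issued design against the live inventory/EMS view, then surface both the conflict and any genuinely spare fibres. Everything below (the joint, fibre and service identifiers, and the idea that both views can be fetched on demand) is invented for illustration.

```python
# Hypothetical records for one work order. In a real workspace these would be
# pulled live from the design tool and the inventory/EMS at the moment the
# field worker raises the issue, not frozen into a printed design pack.
design_pack_splices = {
    ("JOINT-07", 1): "CUST-1234-LEADIN",
    ("JOINT-07", 2): "CUST-1234-LEADIN",
}
live_inventory_splices = {
    ("JOINT-07", 1): "CUST-0991-LEADIN",   # already carrying live traffic
    ("JOINT-07", 2): "CUST-0991-LEADIN",
    ("JOINT-07", 3): None,                 # spare
    ("JOINT-07", 4): None,                 # spare
}

def find_conflicts(designed, live):
    """Fibres the design wants to use that the live view says are already taken."""
    return {fibre: live[fibre] for fibre in designed
            if live.get(fibre) not in (None, designed[fibre])}

def suggest_spares(live):
    """Fibres the live view says are currently unallocated."""
    return [fibre for fibre, service in live.items() if service is None]

print(find_conflicts(design_pack_splices, live_inventory_splices))
print(suggest_spares(live_inventory_splices))
```

The value isn't in the dozen lines of code; it's that the comparison happens against live data from each source system, visible to every participant, rather than against a static print-out.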

Even if such tools existed today, the question I still have is how we ensure our designers aren't interrupted from their all-important deep-think mode. How do we prevent them from having to drop everything multiple times a day/hour? Perhaps the answer lies in organisational structure, where all designers have to cycle through a Design Support function (eg 1 day in a fortnight) to take support calls from field workers and help them resolve issues. It would give designers a greater appreciation for problems occurring in the field, and also help them avoid responding to emails, Slack messages, etc when in design mode.

 

The use of drones by OSS

The last few days have been all about organisational structuring to support OSS and digital transformations. Today we take a different tack – a more technical diversion – into how drones might be relevant to the field of OSS.

A friend recently asked for help to look into the use of drones in his archaeological business. This got me to thinking about how they might apply in cross-over with OSS.

I know they're already used to perform really accurate 3D cable route / corridor surveying. Much cooler than the surveyor diagrams on A1 sheets of old. Apparently experts in the field can even tell whether there's rock in the surveyed area by looking at the vegetation patterns, heat and LIDAR scans.

But my main area of interest is in the physical inventory. With accurate geo-tagging available on drones and the ability to GPS correct the data, it seems like a really useful technique for getting outside plant (OSP) data into OSS inventory systems.

Or geo-correcting data for brownfields assets (it's not uncommon for cable routes to be drawn using road centre-lines when the actual easement to the side of the road isn't known – ie the recorded route is offset from the real cable route). I expect that the high resolution of drone imagery will allow for identification of poles, roads and pits (if not overgrown). Perhaps even aerial lead-in cables and attachment points on buildings?
Drone-based cable corridor surveys
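As a rough sketch of the geo-correction idea (invented pit identifiers, and local metre-based easting/northing coordinates standing in for GPS-corrected survey data), the matching step might look something like this:

```python
import math

# Hypothetical local coordinates (metres). Real inputs would be GPS-corrected
# drone survey points versus the as-recorded inventory positions.
drone_pits = {"P1": (100.0, 200.0), "P2": (350.0, 455.0)}
inventory_pits = {"PIT-0001": (104.5, 201.0),   # drawn off the road centre-line
                  "PIT-0002": (350.2, 455.1)}

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geo_correction_candidates(surveyed, recorded, tolerance_m=2.0):
    """Pair each surveyed pit with its nearest inventory record and flag
    any whose recorded position is off by more than the tolerance."""
    candidates = []
    for pit_id, pos in surveyed.items():
        nearest = min(recorded, key=lambda rid: distance(pos, recorded[rid]))
        offset = distance(pos, recorded[nearest])
        if offset > tolerance_m:
            candidates.append((nearest, pit_id, round(offset, 1)))
    return candidates

print(geo_correction_candidates(drone_pits, inventory_pits))
# [('PIT-0001', 'P1', 4.6)]
```

Anything flagged becomes a candidate for geo-correction in the inventory, rather than an automatic update.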
Have you heard of drone-based OSP asset identification and mapping data being fed into inventory systems yet? I haven’t, but it seems like the logical next step. Do you know anyone who has started to dabble in this type of work? If you do, please send me a note as I’d love to be introduced.

Once loaded into the inventory system, with 3D geo-location, we then have the ability to visualise the OSP data with augmented reality solutions.

And other applications for drone technology?

Using my graphene analogy to help fix OSS data

By now I'm sure you've heard about graph databases. You may've even read my earlier article about the benefits graph databases offer over relational databases when modelling network inventory. But have you heard the Graphene Database Analogy?

I equate OSS data migration and data quality improvement with graphene, which is made up of single layers of carbon atoms in hexagonal lattices (planes).

The graphene data model

There are four concepts of interest with the graphene model:

  1. Data Planes – Preparing and ingesting data from siloes (eg devices, cards, ports) is relatively easy. ie building planes of data (black carbon atoms and bonds above)
  2. Bonds between planes – It's the interconnections between siloes (eg circuits, network links, patch-leads, joints in pits, etc) that are usually trickier. So I envisage alignment of nodes (on the data plane or graph, not necessarily network nodes) as equivalent to bonds between carbon atoms on separate planes (red/blue/aqua lines above).
    Alignment comes in many forms:

    1. Through spatial alignment (eg a joint and pit have the same geospatial position, so the joint is probably inside the pit)
    2. Through naming conventions (eg same circuit name associated with two equipment ports)
    3. Various other linking-key strategies
    4. Nodes on each data plane can potentially be snapped together (either by an operator or an algorithm) if you find consistent ways of aligning nodes that are adjacent across planes
  3. Confidence – I like to think about data quality in terms of confidence levels. Some data is highly reliable, other data sets less so. For example, if you have two equipment ports with a circuit name identifier, then your confidence level might be 4 out of 4* because you know the exact termination points of that circuit. Conversely, let's say you just have a circuit with a name that follows a convention of “LocA-LocB-speed-index-type” but has no associated port data. In that case you only know that the circuit terminates at LocationA and LocationB, but not which building, rack, device, card or port, so your confidence level might only be 2 out of 4.
  4. Visualisation – Having these connected planes of data allows you to visualise heat-map confidence levels (and potentially gaps in the graph) across your OSS data, thus identifying where data-fix (eg physical audits) is required

* the example of a circuit with two related ports above might not always achieve 4 out of 4 if other checks are applied (eg if there are actually 3 ports with that associated circuit name in the data but we know it should represent a two-ended patch-lead).

Note: The diagram above (from graphene-info.com) shows red/blue/aqua links between graphene layers as capturing hydrogen, but is useful for approximating the concept of aligning nodes between planes
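To make the planes / bonds / confidence idea a little more concrete, here's a minimal sketch. Everything in it is invented for illustration (the names, the naming convention and the scoring thresholds), and as the footnote above suggests, a real implementation would apply extra checks before awarding 4 out of 4:

```python
# Two "planes" of data, ingested separately from different silos.
# All names and structures here are invented for illustration.
port_plane = [
    {"device": "MEL-R01", "port": "1/1/1", "circuit": "MEL-SYD-10G-001"},
    {"device": "SYD-R01", "port": "2/1/4", "circuit": "MEL-SYD-10G-001"},
]
circuit_plane = [
    {"circuit": "MEL-SYD-10G-001"},
    {"circuit": "MEL-BNE-10G-007"},   # follows the LocA-LocB-speed-index convention
]

def bond_and_score(circuits, ports):
    """'Bond' the circuit plane to the port plane via the circuit-name
    linking key, and assign an illustrative confidence score out of 4."""
    results = []
    for c in circuits:
        ends = [p for p in ports if p["circuit"] == c["circuit"]]
        if len(ends) == 2:
            confidence = 4          # both termination ports known
        elif len(ends) == 1:
            confidence = 3          # only one end known
        else:
            confidence = 2          # endpoints inferred from the name only
        results.append((c["circuit"], len(ends), confidence))
    return results

for circuit, port_count, confidence in bond_and_score(circuit_plane, port_plane):
    print(f"{circuit}: {port_count} bonded ports, confidence {confidence}/4")
# MEL-SYD-10G-001: 2 bonded ports, confidence 4/4
# MEL-BNE-10G-007: 0 bonded ports, confidence 2/4
```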

Can OSS/BSS assist CX? We’re barely touching the surface

Have you ever experienced an epic customer experience (CX) fail when dealing with a network service operator, like the one I described yesterday?

In that example, the OSS/BSS, and possibly the associated people / process, had a direct impact on poor customer experience. Admittedly, that 7 truck-roll experience was a number of years ago now.

We have fewer excuses these days. Smart phones and network connected devices allow us to get OSS/BSS data into the field in ways we previously couldn’t. There’s no need for printed job lists, design packs and the like. Our OSS/BSS can leverage these connected devices to give far better decision intelligence in real time.

If we look to the logistics industry, we can see how parcel tracking technologies help to automatically provide status / progress to parcel recipients. We can see how recipients can also modify their availability, which automatically adjusts logistics delivery sequencing / scheduling.

This has multiple benefits for the logistics company:

  • It increases first-time delivery rates
  • It improves the ability to automatically notify customers (eg email, SMS, chatbots)
  • It decreases customer enquiries / complaints
  • It decreases the amount of time that truck drivers need to spend communicating back to base and with clients
  • But most importantly, it improves the customer experience

Logistics is an interesting challenge for our OSS/BSS due to the sheer volume of customer interaction events handled each day.

But there's another area that excites me even more, where CX is improved through better data quality:

  • It’s the ability for field workers to interact with OSS/BSS data in real-time
  • To see the design packs
  • To compare with field situations
  • To update the data where there is inconsistency.

Even more excitingly, to introduce augmented reality to assist with decision intelligence for field work crews:

  • To provide an overlay of what fibres need to be spliced together
  • To show exactly which port a patch-lead needs to connect to
  • To show where an underground cable route goes
  • To show where a cable runs through trayway in a data centre
  • etc, etc

We’re barely touching the surface of how our OSS/BSS can assist with CX.

The TMN model suffers from modern network anxiety

As the TMN diagram below describes, each layer up in the network management stack abstracts but connects (as described in more detail in “What an OSS shouldn't do”). That is, each higher layer reduces the amount of information/control within a domain that it's responsible for, but it assumes a broader responsibility for connecting multiple domains together.
OSS abstract and connect

There’s just one problem with the diagram. It’s a little dated when we take modern virtualised infrastructure into account.

In the old days, despite what the layers may imply, it was common for an OSS to actually touch every layer of the pyramid to resolve faults. That is, OSS regularly connected to NMS, EMS and even devices (NE) to gather network health data. The services defined at the top of the stack (BSS) could be traced to the exact devices (NE / NEL) via the circuits that traversed them, regardless of the layers of abstraction. It helped for root-cause analysis (RCA) and service impact analysis (SIA).

But with modern networks, the infrastructure is virtualised, load-balanced and since they’re packet-switched, they’re completely circuitless (I’m excluding virtual circuits here by the way). The bottom three layers of the diagram could effectively be replaced with a cloud icon, a cloud that the OSS has little chance of peering into (see yellow cloud in the diagram later in this post).

The concept of virtualisation adds many sub-layers of complexity too by the way, as highlighted in the diagram below.

ONAP triptych

So now the customer services at the top of the pyramid (BSS / BML) are quite separated from the resources at the bottom, other than to say the services consume from a known pool of resources. Fault resolution becomes more abstracted as a result.

But what’s interesting is that there’s another layer that’s not shown on the typical TMN model above. That is the physical network inventory (PNI) layer. The cables, splices, joints, patch panels, equipment cards, etc that underpin every network. Yes, even virtual networks.

In the old networks the OSS touched every layer, including the missing layer. That functionality was provided by PNI management. Fault resolution also occurred at this layer through tickets of work conducted by the field workforce (Workforce Management – WFM).

In new networks, OSS/BSS tie services to resource pools (the top two layers). They also still manage PNI / WFM (the bottom, physical layer). But then there’s potentially an invisible cloud in the middle. Three distinctly different pieces, probably each managed by a different business unit or operational group.
BSS OSS cloud abstract

Just wondering – has your OSS/BSS developed control anxiety issues from losing some of the control that it once had?

Is your data getting too heavy for your OSS to lift?

“Data mass is beginning to exhibit gravitational properties – it's getting heavy – and eventually it will be too big to move.”
Guy Lupo
in this article on TM Forum’s Inform that also includes contributions from George Glass and Dawn Bushaus.

Really interesting concept, and article, linked above.

The touchpoint explosion is helping to make our data sets ever bigger… and heavier.

In my earlier days in OSS, I was tasked with leading the migration of large sets of data into relational databases for use by OSS tools. I was lucky enough to spend years working on a full-scope OSS (ie its central database housed data for inventory management, alarm management, performance management, service order management, provisioning, etc, etc).

Having all those data sets in one database made it incredibly powerful as an insight generation tool. With a few SQL joins, you could correlate almost any data sets imaginable. But it was also a double-edged sword. Firstly, ensuring that all of the sets would have linking keys (and with high data quality / reliability) was a data migrator’s nightmare. Secondly, all those joins being done by the OSS made it computationally heavy. It wasn’t uncommon for a device list query to take the OSS 10 minutes to provide a response in the PROD environment.

There’s one concept that makes GIS tools more inherently capable of lifting heavier data sets than OSS – they generally load data in layers (that can be turned on and off in the visual pane) and unlike OSS, don’t attempt to stitch the different sets together. The correlation between data sets is achieved through geographical proximity scans, either algorithmically, or just by the human eye of the operator.

If we now consider real-time data (eg alarms/events, performance counters, etc), we can take a leaf out of Einstein’s book and correlate by space and time (ie by geographical and/or time-series proximity between otherwise unrelated data sets). Just wondering – How many OSS tools have you seen that use these proximity techniques? Very few in my experience.
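As a hedged, minimal example of that space/time idea (invented events, a crude 30-second window, and raw degrees of latitude/longitude standing in for a proper geospatial distance calculation):

```python
from datetime import datetime, timedelta

# Hypothetical, otherwise-unlinked data sets: alarms and performance-threshold
# events that share no common key, only timestamps and coordinates.
alarms = [
    {"id": "ALM-1", "time": datetime(2018, 5, 1, 10, 0, 5), "pos": (145.10, -37.80)},
]
perf_events = [
    {"id": "PRF-7", "time": datetime(2018, 5, 1, 10, 0, 9), "pos": (145.11, -37.80)},
    {"id": "PRF-8", "time": datetime(2018, 5, 1, 14, 30, 0), "pos": (145.90, -37.20)},
]

def correlate(a_events, b_events, window=timedelta(seconds=30), max_deg=0.05):
    """Pair events that are close in both time and space - no linking keys needed."""
    pairs = []
    for a in a_events:
        for b in b_events:
            close_in_time = abs(a["time"] - b["time"]) <= window
            close_in_space = (abs(a["pos"][0] - b["pos"][0]) <= max_deg and
                              abs(a["pos"][1] - b["pos"][1]) <= max_deg)
            if close_in_time and close_in_space:
                pairs.append((a["id"], b["id"]))
    return pairs

print(correlate(alarms, perf_events))   # [('ALM-1', 'PRF-7')]
```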

BTW. I’m the first to acknowledge that a stitched data set (ie via linking keys such as device ID between data sets) is definitely going to be richer than uncorrelated data sets. Nonetheless, this might be a useful technique if your data is getting too heavy for your OSS to lift (eg simple queries are causing minutes of downtime / delay for operators).

A defacto spatial manager

Many years ago, I was lucky enough to lead a team responsible for designing a complex inside and outside plant network in a massive oil and gas precinct. It had over 120 buildings and more than 30 networked systems.

We were tasked with using CAD (Computer Aided Design) and Office tools to design the comms and security solution for the precinct. And when I say security, not just network security, but building access control, number plate recognition, coast guard and even advanced RADAR amongst other things.

One of the cool aspects of the project was that it was more three-dimensional than a typical telco design. A telco cable network is usually planned on x and y coordinates because the z coordinate usually sits on just one or two planes (eg all ducts are at say 0.6m below ground level, or all catenary wires between poles are at say 5m above ground). However, on this site, cable trays ran at all sorts of levels to run around critical gas processing infrastructure.

We actually proposed to implement a light-weight OSS for management of the network, including outside plant assets, due to the easy maintainability compared with CAD files. The customer's existing CAD files may have been perfect when initially built / handed over, but were nearly useless to us because of all the undocumented changes that had happened in the ensuing period. However, the customer was used to CAD files and wanted to stay with CAD files.

This led to another cool aspect of the project – we had to build out defacto OSS data models to capture and maintain the designs.

We modelled:

  • The support plane (trayway, ducts, sub-ducts, trenches, lead-ins, etc)
  • The physical connectivity plane (cables, splices, patch-panels, network termination points, physical ports, devices, etc)
  • The logical connectivity plane (circuits, system connectivity, asset utilisation, available capacity, etc)
  • Interconnection between these planes
  • Life-cycle change management
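For a flavour of what that looked like, here's a heavily simplified sketch of the three planes and one of the cross-plane questions the model had to answer. All object names and fields are invented; the real model carried far more attributes and life-cycle states.

```python
# A stripped-down version of the kind of de facto data model described above.
support_plane = {
    "TRAY-L3-007": {"type": "cable tray",
                    "route": [(0, 0, 4.5), (12, 0, 4.5), (12, 8, 6.0)]},  # x, y, z in metres
}
physical_plane = {
    "CBL-0042": {"type": "12-core fibre", "carried_by": ["TRAY-L3-007"],
                 "a_end": "BLDG-07 patch panel 1", "b_end": "BLDG-11 patch panel 3"},
}
logical_plane = {
    "CCT-RADAR-01": {"rides_on": "CBL-0042", "cores_used": [1, 2]},
}

def impact_of_support_failure(support_id):
    """Walk the interconnections between planes: which circuits are affected
    if a given tray/duct section is damaged?"""
    cables = [c for c, rec in physical_plane.items()
              if support_id in rec["carried_by"]]
    return [cct for cct, rec in logical_plane.items()
            if rec["rides_on"] in cables]

print(impact_of_support_failure("TRAY-L3-007"))   # ['CCT-RADAR-01']
```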

This definitely gave us a better appreciation for the types of rules, variants and required data sets that reside under the hood of a typical OSS.

Have you ever had a non-OSS project that gave you a better appreciation / understanding of OSS?

I'm also curious. Have any of you designed your physical network plane in three dimensions? With a custom or out-of-the-box tool?

The OSS Matrix – the blue or the red pill?

OSS Matrix
OSS tend to be very good at presenting a current moment in time – the current configuration of the network, the health of the network, the activities underway.

Some (but not all) tend to struggle to cope with other moments in time – past and future.

Most have tools that project into the future for the purpose of capacity planning, such as link saturation estimation (based on projecting forward from historical trend-lines). Predictive analytics is a current buzz-word as research attempts to predict future events and mitigate against them now.

Most also have the ability to look into the past – to look at historical logs to give an indication of what happened previously. However, historical logs can be painful and tend towards forensic analysis. We can generally see who (or what) performed an action at a precise timestamp, but it’s not so easy to correlate the surrounding context in which that action occurred. They rarely present a fully-stitched view in the OSS GUI that shows the state of everything else around it at that snapshot in time past. At least, not to the same extent that the OSS GUI can stitch and present current state together.

But the scenario that I find most interesting is for the purpose of network build / maintenance planning. Sometimes these changes occur as isolated events, but are more commonly run as projects, often with phases or milestone states. For network designers, it’s important to differentiate between assets (eg cables, trenches, joints, equipment, ports, etc) that are already in production versus assets that are proposed for installation in the future.

And naturally those states cross over at cut-in points. The proposed new branch of the network needs to connect to the existing network at some time in the future. Designers need to see available capacity now (eg spare ports), but be able to predict with confidence that capacity will still be available for them in the future. That’s where the “reserved” status comes into play, which tends to work for physical assets (eg physical ports) but can be more challenging for logical concepts like link utilisation.

In large organisations, it can be even more challenging because there’s not just one augmentation project underway, but many. In some cases, there can be dependencies where one project relies on capacity that is being stood up by other future projects.

Not all of these projects / plans will make it into production (eg funding is cut or a more optimal design option is chosen), so there is also the challenge of deprecating planned projects. Capability is required to identify whether any other future projects are dependent on the project being deprecated.
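To illustrate with a deliberately simplistic, invented state model, these are the two questions the planning view constantly has to answer: what can a designer safely allocate today, and what breaks if a planned project is deprecated?

```python
# Illustrative only: a toy lifecycle-state model for ports, plus project
# dependencies. All identifiers and states are invented for this sketch.
ports = {
    "ODF-1 port 5": {"state": "in-service"},
    "ODF-1 port 6": {"state": "reserved", "project": "PRJ-A"},
    "ODF-1 port 7": {"state": "planned",  "project": "PRJ-B"},
    "ODF-1 port 8": {"state": "spare"},
}
# PRJ-C plans to cut in to capacity that PRJ-B is standing up.
project_dependencies = {"PRJ-C": ["PRJ-B"]}

def available_now(ports):
    """Ports a designer could allocate today without clashing with
    production or with someone else's future plans."""
    return [p for p, rec in ports.items() if rec["state"] == "spare"]

def impacted_by_deprecating(project):
    """If a planned project is cancelled, which other projects depended on it?"""
    return [p for p, deps in project_dependencies.items() if project in deps]

print(available_now(ports))               # ['ODF-1 port 8']
print(impacted_by_deprecating("PRJ-B"))   # ['PRJ-C']
```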

It can get incredibly challenging to develop this time/space matrix in OSS. If you’re a developer of OSS, the question becomes whether you want to take the blue or red pill.

Blown away by one innovation – a follow-up concept

Last Friday’s blog discussed how I’ve just been blown away by the most elegant OSS innovation I’ve seen in decades.

You can read more detail via the link, but the three major factors in this simple, elegant solution to data quality problems (probably OSS‘ biggest kryptonite) are:

  1. Being able to make connections that break standard object hierarchy rules; but
  2. Having the ability to mark that standard rules haven’t been followed; and
  3. Being able to use the markers to prioritise the fixing of data at a more convenient time

It’s effectively point 2 that has me most excited. So novel, yet so obvious in hindsight. When doing data migrations in the past, I’ve used confidence flags to indicate what I can rely on and what needs further audit / remediation / cleansing. But the recent demo I saw of the CROSS product is the first time I’ve seen it built into the user interface of an OSS.

This one factor, if it spreads, has the ability to change OSS data quality in the same way that Likes (or equivalent) have changed social media by acting as markers of confidence / quality.

Think about this for a moment – what if everyone who interacts with an OSS GUI had the ability to rank their confidence in any element of data they’re touching, with a mechanism as simple as clicking a like/dislike button (or similar)?

It's a rough example, but let's say field techs are given a design pack and, upon arriving at site, find that the design doesn't match in-situ conditions (eg the fibre pairs they're expecting to splice a customer lead-in cable to are already carrying live traffic, which they diagnose is due to data problems in an upstream distribution joint). Rather than jeopardising the customer activation window by having to spend hours/days fixing all the trickle-down effects of the distribution joint data, they just mark confidence levels in the vicinity and get the customer connected.

The aggregate of that confidence information is then used to show data quality heat maps and help remediation teams prioritise the areas that they need to work on next. It helps to identify data and process improvements using big circle and/or little circle remediation techniques.
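As a purely hypothetical sketch of how those markers might roll up (invented object and area names, and a deliberately crude up/down scoring scheme):

```python
from collections import defaultdict

# Hypothetical confidence "votes" left by anyone touching the data:
# (object_id, area, +1 looks right / -1 looks wrong)
votes = [
    ("JOINT-07", "EXCHANGE-EAST", -1),   # field tech: splice records wrong
    ("JOINT-07", "EXCHANGE-EAST", -1),
    ("PIT-0332", "EXCHANGE-EAST", -1),
    ("ODF-1",    "EXCHANGE-WEST", +1),
]

def heat_map(votes):
    """Aggregate votes per area so remediation teams can prioritise."""
    score = defaultdict(lambda: {"up": 0, "down": 0})
    for _, area, vote in votes:
        score[area]["up" if vote > 0 else "down"] += 1
    # Rank areas by how much negative feedback they have attracted.
    return sorted(score.items(), key=lambda kv: -kv[1]["down"])

for area, tally in heat_map(votes):
    print(area, tally)
# EXCHANGE-EAST {'up': 0, 'down': 3}
# EXCHANGE-WEST {'up': 1, 'down': 0}
```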

Possibly the most important implication of the in-built ranking system is that everyone in the end-to-end flow, from order takers to designers through to coal-face operators, can better predict whether they need to cater for potential data problems.

Your thoughts?? In what scenarios do you think it could work best, or alternatively, not work?

The double-edged sword of OSS/BSS integrations

…good argument for a merged OSS/BSS, wouldn’t you say?
John Malecki.

The question above was posed in relation to Friday’s post about the currency and relevance of OSS compared with research reports, analyses and strategic plans as well as how to extend OSS longevity.

This is a brilliant, multi-faceted question from John. My belief is that it is a double-edged sword.

Out of my experiences with many OSS, one product stands out above all the others I’ve worked with. It’s an integrated suite of Fault Management, Performance Management, Customer Management, Product / Service Management, Configuration / orchestration / auto-provisioning, Outside Plant Management / GIS, Traffic Engineering, Trouble Ticketing, Ticket of Work Management, and much more, all tied together with the most elegant inventory data model I’ve seen.

Being a single vendor solution built on a relational database, the cross-pollination (enrichment) of data between all these different modules made it the most powerful insight engine I’ve worked with. With some SQL skills and an understanding of the data model, you could ask it complex cross-domain questions quite easily because all the data was stored in a single database. That edge of the sword made a powerful argument for a merged OSS/BSS.
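To illustrate the point, here's a toy, in-memory example (not the product's actual schema, just hypothetical device / service / alarm tables) of the kind of cross-domain question that becomes a simple join when everything lives in one relational database:

```python
import sqlite3

# Which customers are affected by devices currently in alarm?
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE device  (id TEXT PRIMARY KEY, site TEXT);
    CREATE TABLE service (id TEXT PRIMARY KEY, customer TEXT, device_id TEXT);
    CREATE TABLE alarm   (device_id TEXT, severity TEXT);

    INSERT INTO device  VALUES ('NE-01', 'MEL'), ('NE-02', 'SYD');
    INSERT INTO service VALUES ('SVC-1', 'ACME Pty Ltd', 'NE-01'),
                               ('SVC-2', 'Widget Co',    'NE-02');
    INSERT INTO alarm   VALUES ('NE-01', 'CRITICAL');
""")

affected = db.execute("""
    SELECT service.customer, device.site, alarm.severity
    FROM alarm
    JOIN device  ON device.id = alarm.device_id
    JOIN service ON service.device_id = device.id
""").fetchall()

print(affected)   # [('ACME Pty Ltd', 'MEL', 'CRITICAL')]
```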

Unfortunately, the level of cross-referencing that made it so powerful also made it really challenging to build an initial data set to facilitate all modules being inter-operable. By contrast, an independent inventory management solution could just pull data out of each NMS / EMS under management, massage the data for ingestion and then you’d have an operational system. The abovementioned solution also worked this way for inventory, but to get the other modules cross-referenced with the inventory required engineering rules, hand-stitched spreadsheets, rules of thumb, etc. Maintaining and upgrading also became challenges after the initial data had been created. In many cases, the clients didn’t have all of the data that was needed, so a data creation exercise needed to be undertaken.

If I had the choice, I would’ve done more of the cross-referencing at data level (eg via queries / reports) rather than entwining the modules together so tightly at application level… except in the most compelling cases. It’s an example of the chess-board analogy.

If given the option between merged (tightly coupled) and loosely coupled, which would you choose? Do you have any insights or experiences to share on how you’ve struck the best balance?

Big circle. Little circle. Crossing the red line

Data quality is the bane of many a telco. If the data quality is rubbish then the OSS tools effectively become rubbish too.

Feedback loops are one of the most underutilised tools in a data-fix arsenal. However, few people realise that there are two levels of feedback loop: what I refer to as big circle and little circle feedback loops.

The little circle uses feedback within the data alone, using one data set to compare against and reconcile another. That can produce good results, but it's only part of the story. Many data challenges extend further than that if you're seeking a resolution. Most operators focus on this small loop, but the approach maxes out at well below 100% data accuracy.

The big circle is designing feedback loops that incorporate data quality into end-to-end processes, which includes the field-work part of the process. Whilst this approach may also never deliver data perfection, it’s much more likely to than the small loop approach.

Redline markups have been the traditional mechanism for getting feedback from the field back into improving OSS data. For example, if designers issue a design pack out to field techs that proves to be incorrect, then the techs return the design with redline markups to show what they've implemented in the field instead.

With mobile technology and the right software tools, field workers could directly update data. Unfortunately this model doesn’t seem to fit into practices that have been around for decades.

There remain great opportunities to improve the efficiency of big circle feedback loops. They probably need a new way of thinking, but still need to fit into the existing context of field worker activities.

The challenge with the big circle approach is that it tends to be more costly. It’s much cheaper to write data-fix algorithms than send workers into the field to resolve data issues.

There are a few techniques that could leverage existing field-worker movements, rather than sending workers out specifically to resolve data issues (a rough sketch combining the first and third follows the list):

  1. Have “while you’re there” functionality, that allows a field worker to perform data-fix (or other) related jobs while they’re at a location with known problems
  2. In some cases, field worker schedules don't allow enough time to fix a problem that they find (or that has been flagged "while you're there"). So instead, they can be incentivised to capture information about the problem (eg the data fix required) for future rectification
  3. Have hot-spot analysis tools that target the areas of the network where field-worker intervention is most likely to improve data quality (ie the areas where data is most problematic).
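Here's the promised sketch, combining the first and third techniques: rank areas by known data problems, then highlight where field crews are already scheduled to be, so data-fix work can piggy-back on existing truck-rolls. Both input lists are invented for illustration.

```python
from collections import Counter

# Hypothetical inputs: known data problems (by area) and field jobs already
# scheduled this week (by area).
data_problems = ["AREA-12", "AREA-12", "AREA-07", "AREA-12", "AREA-33"]
scheduled_jobs = ["AREA-12", "AREA-07", "AREA-07"]

def while_you_are_there(problems, jobs):
    """Rank areas where scheduled field visits overlap known data problems,
    so data-fix tasks can piggy-back on existing truck-rolls."""
    problem_count = Counter(problems)
    visit_count = Counter(jobs)
    overlap = {area: (problem_count[area], visit_count[area])
               for area in problem_count if area in visit_count}
    return sorted(overlap.items(), key=lambda kv: (-kv[1][0], -kv[1][1]))

print(while_you_are_there(data_problems, scheduled_jobs))
# [('AREA-12', (3, 1)), ('AREA-07', (1, 2))]
```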

Digital twins

“Well-designed digital twins based on business priorities have the potential to significantly improve enterprise decision making. Enterprise architecture and technology innovation leaders must factor digital twins into their Internet of Things architecture and strategy.”
Gartner’s Top 10 Technology Trends.

Digital twinning has generated some buzz lately, particularly in IoT circles. Digital twins are basically digital representations of physical assets, including their status, characteristics, performance and behaviours.

But it’s not really all that new is it? OSS has been doing this for years. When was the first time you can recall seeing an inventory tool that showed:

  • Digital representations of devices that were physically located thousands of kilometres away (or in the room next door for that matter)
  • Visual representations of those devices (eg front-face, back-plate, rack-unit sizing, geo-spatial positioning, etc)
  • Current operational state (in-service, out-of-service, in alarm, under test, real-time performance against key metrics, etc)
  • Installed components (eg cards, ports, software, etc)
  • Customer services being carried
  • Current device configuration details
  • Nearest connected neighbours

Digital twinning, this amazing new concept, has actually been around for almost as long as OSS have. We just call it inventory management though. It doesn’t sound quite so sexy when we say it.

But how can we extend what we already do into digital twinning in other domains (eg manufacturing, etc)?

The end of cloud computing

…. but we’ve only just started and we haven’t even got close to figuring out how to manage it yet (from an aggregated view I mean, not just within a single vendor platform)!!

This article from Peter Levine of Andreessen Horowitz predicts “The end of cloud computing.”

Now I’m not so sure that this headline is going to play out in the near future, but Peter Levine does make a really interesting point in his article (and its embedded 25 min video). There are a number of nascent technologies, such as autonomous vehicles, that will need their edge devices to process immense amounts of data locally without having to backhaul it to centralised cloud servers for processing.

Autonomous vehicles will need to consume data in real-time from a multitude of in-car sensors, but only a small percentage of that data will need to be transmitted back over networks to a centralised cloud base. But that backhauled data will be important for the purpose of aggregated learning, analytics, etc, the findings of which will be shipped back to the edge devices.

Edge or fog compute is just one more platform type for our OSS to stay abreast of into the future.

OSS at the centre of the universe

Historically, the center of the Universe had been believed to be a number of locations. Many mythological cosmologies included an axis mundi, the central axis of a flat Earth that connects the Earth, heavens, and other realms together. In the 4th century BC Greece, the geocentric model was developed based on astronomical observation, proposing that the center of the Universe lies at the center of a spherical, stationary Earth, around which the sun, moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the sun was believed to be the center of the Universe, with the planets (including Earth) and stars orbiting it.
In the early 20th century, the discovery of other galaxies and the development of the Big Bang theory led to the development of cosmological models of a homogeneous, isotropic Universe (which lacks a central point) that is expanding at all points.”
Wikipedia.

Perhaps I fall into a line of thinking as outdated as the axis mundi, but I passionately believe that the OSS is the centre of the universe around which all other digital technologies revolve. Even the sexy new “saviour” technologies like Internet of Things (IoT), network virtualisation, etc can only reach their promised potential if there are operational tools sitting in the background managing them and their life-cycle of processes efficiently. And the other “hero” technologies such as analytics, machine learning, APIs, etc aren’t able to do much without the data collected by operational tools.

No matter how far and wide I range in the consulting world of communications technologies and the multitude of industries they impact, I still see them coming back to what OSS can do to improve what they do.

Many people say that OSS is no longer relevant. Has the ICT world moved on to the geocentric, heliocentric or even Big Bang model? If so, what is at their centre?

Am I just blinded by what Sir Ken Robinson describes as, “When people are in their Element, they connect with something fundamental to their sense of identity, purpose, and well-being. Being there provides a sense of self-revelation, of defining who they really are and what they’re really meant to be doing with their lives.” Am I struggling to see the world from a perspective other than my own?

Can you imagine how you’ll interact with your OSS in 10 years?

Here's a slightly mind-blowing fact for you: a child born when the iPhone was announced will be 10 years old in 2 months (a piece of trivia courtesy of Ben Evans).

That’s nearly 10 years of digitally native workers coming into the telco workforce and 10 years of not-so-digitally native workers exiting it. We marvelled that there was a generation that had joined the workforce that had never experienced life without the Internet. The generation that has never experienced life without mobile Internet, apps, etc is now on the march.

The smart-phone revolution spawned by the iPhone has changed, and will continue to change, the way we interact with information. By contrast, there hasn't really been much change in the way that we interact with our OSS, has there? Sure, there are a few mobility apps that help the field workforce, sales agents, etc, and we're now (mostly) using browsers as our clients, but the majority of OSS users still interact with OSS servers via networked PCs fitted with a keyboard and mouse. Not much friction has been removed.

The question remains about how other burgeoning technologies such as augmented reality and gesture-based computing will impact how we interact with our OSS in the coming decade. Are they also destined to only supplement the tasks of operators that have a mobile / spatial component to their tasks, like the field workforce?

Machine learning and Artificially Intelligent assistants represent the greater opportunity to change how we interact with our OSS, but only if we radically change our user interfaces to facilitate their strengths. The overcrowded nature of our current OSS doesn't readily accommodate small form-factor displays or speech / gesture interactions. An OSS GUI built around a search / predictive / precognitive interaction model is the more likely stepping stone to drastically different OSS interactions in the next ten years. A far more frictionless OSS future.

Using deduction

Eric Raymond proposed that a computer should ‘never ask the user for any information that it can autodetect, copy, or deduce’; computer vision changes what the computer has to ask. So it’s not, really, a camera, taking photos – it’s more like an eye, that can see.
Ben Evans
here.

There’s a big buzzword going around our industry at the moment called “omnichannel.” Consider it an interaction pathway, where a user can choose to interact with any number of channels – phone, email, online, USSD, retail store, IVR, app, etc. Not only that, but smartphones have made it possible to flip backwards and forwards between channels. This can be done either as dictated by the workflow (eg using an app which launches a USSD hash-code to return a URL to current offers, etc) or by the customer choosing the channel they’re most comfortable with.

In the past, process designs have tended to be done within the silo of just one channel. One of the challenges for modern process designers is to design user journeys and state transitions that jump channels and have a multi-channel decision tree built-in. Exacerbating this challenge is transitioning data between channels so that the journey is seamless for customers – each channel is likely to have its own back-end OSS/BSS system/s after all and data handoff must happen before a transaction is completed (ie intermediate storing and transferring of records). Eric Raymond’s quote above holds true for ensuring a great customer experience in an omnichannel environment.
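As a toy illustration of that intermediate store-and-transfer idea (the session store, channel names and field names are all invented, and a real implementation would sit behind the omnichannel orchestration layer), the handoff might conceptually look like this:

```python
import json, time

# A minimal context store so a customer can start a journey in one channel
# and resume it in another without repeating themselves.
session_store = {}

def save_context(customer_id, channel, state):
    """Called by whichever channel the customer is currently using."""
    session_store[customer_id] = {
        "last_channel": channel,
        "state": state,
        "updated": time.time(),
    }

def resume_context(customer_id, new_channel):
    """Called by the next channel the customer switches to."""
    ctx = session_store.get(customer_id, {})
    return {**ctx, "resumed_on": new_channel}

save_context("CUST-1234", "app", {"journey": "modem-troubleshooting", "step": 3})
print(json.dumps(resume_context("CUST-1234", "call-centre"), indent=2))
```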

I’m fascinated by Ben Evans’ take on Eric’s quote and how that relates to omnichannel user journeys for telcos (see the link above for a fascinating broader context around Ben’s prediction of computer vision). When the computer (ie smartphone) begins to gain more situational awareness via its camera, an additional and potentially very powerful interaction channel presents itself.

We’ve all heard of image recognition already being available in the purchasing process within retail. Ben’s concept takes that awareness to a higher level. I haven’t heard of image recognition being used within telco yet, but I am looking forward to when Augmented Reality combines with this situational awareness (and the data made available by OSS) in our industry. Not just for customers, but for telco operators too. The design packs that a field tech uses today are going to look very different in the near future.

When phones swallowed physical objects

“…after a decade in which phones swallowed physical objects, with cameras, radios, music players and so on turned into apps, AR might turn those apps back into physical objects – virtual ones, of course. On one hand cameras digitise everything, and on the other AR puts things back into the world.”
Ben Evans
here.

Similarly, for years OSS have been like a black hole – sucking in data from every physical (and logical) source they can get their hands on. Now AR (Augmented Reality) provides the mechanism for OSS to put things back into the world – as visual overlays, not just reports.

It starts with visualising underground or in-wall assets like cables, but the use cases are extraordinary in their possibilities. The days of printed design packs being handed to field techs are surely numbered. They’re already being replaced with apps but interactive visual aids will take it to a new level of sophistication.

The small-grid OSS platform

Perhaps the most egregious platform failure is to simply not see the platform play at all. It is also one of the hardest for traditional firms to avoid. Firms guilty of this oversight never get past the idea that they sell products when they could be building ecosystems. Sony, Hewlett Packard (HP), and Garmin all made the mistake of emphasizing products over platforms. Before the iPhone launched in 2007, HP dominated the handheld calculator space for science and finance. Yet today, consumers can purchase near perfect calculator apps on iTunes or on Google Play and at a fraction of the cost of a physical calculator. Apple and Google did not create these emulators; they merely enabled them by providing the platform that connects app producers and consumers who need calculators.
Sony has sold some of the best electronic products ever made: It once dominated the personal portable music space with the Walkman. It had the world’s first and best compact disc players. By 2011, its PlayStation had become the best-selling game console of all time. Yet, for all its technological prowess Sony focused too much on products and not enough on creating platforms. (What became of Sony’s players? A platform – iOS – ate them for lunch.) Garmin, as a tailored mapping device, suffered a similar fate. As of 2012, Garmin had sold 100 million units after 23 years in the market. By contrast, iPhone sold 700 million units after just eight years in the market. More people get directions from an iPhone than from a Garmin, not only because of Apple maps but also because of Google Maps and Waze. As platforms, iOS and Android have ecosystems of producers, consumers, and others that have helped them triumph over such products as the Cisco Flip camera, the Sony PSP, the Flickr photo service, the Olympus voice recorder, the Microsoft Zune, the Magnus flashlight, and the Fitbit fitness tracker.
When a platform enters the market, product managers who focus on features are not just measuring the wrong things, they’re thinking the wrong thoughts.”
Co-authored with Marshall Van Alstyne and Geoffrey Parker here.

Recent posts have discussed the small-grid OSS concept. In effect, it’s an OSS platform that brings OSS developers and OSS users together into a single platform / marketplace / ecosystem.

As the link above shows (it’s a really interesting read in full BTW), there are many potential pitfalls in taking the platform approach. However, perhaps the most egregious platform failure is to simply not see the platform play at all.

The OSS industry has barely tapped into the platform play yet, but like other industries (think Uber versus taxis), OSS is primed for platform disruption.

So far we have some service / NFV catalogues, mediation device and developer forums, as well as ecosystems like Esri, Salesforce, etc but I can’t think of any across the broader scope of OSS. Are you aware of any?

Virtual satellites

Last Friday's post, “Managing satellites,” discussed how the satellite OSS concept leaves the core OSS to work on telco networks / eTOM models, whereas ITIL / ITSM has become increasingly prevalent in the tools that help service managed-service contracts.

Does this concept resonate with you regarding the management of virtualised networks, with satellites for the virtualised components and the core for the more physical activity management? Service catalogues, service establishment, service reporting, SLA management, contracts, service chaining and automations are natural candidates for "satellite" management as they are tied to logical/virtual resources.

There have been many discussions about OSS being superseded. This potential for disruption is particularly true in the “satellite” space.

However, the "satellite" service management approach doesn't tend to do the physical work done by the core. This includes CAD / GIS designs of physical networks, planning (augmentation, infill), tickets of work, field work management, physical resource management, DCIM and more.

The fully virtualised management tools may work for OTT (over the top) providers, but not so well for providers that have significant physical assets to manage.

This could be true for BSS too. Satellites do service-level billing and aggregated reporting for managed-service customers, but bill runs, clearing house and other business / billing functions are done by the core.

You can read more about this principle in the Aircraft carrier analogy.

The satellite model also bears comparison to the split of telcos into REIT and DSP business models, with satellite and core being more easily separated along those lines.

Can you see other benefits (or disadvantages) of the satellite vs core OSS model?