Is your data getting too heavy for your OSS to lift?

“Data mass is beginning to exhibit gravitational properties – it’s getting heavy – and eventually it will be too big to move.”
Guy Lupo
in this article on TM Forum’s Inform that also includes contributions from George Glass and Dawn Bushaus.

Really interesting concept, and article, linked above.

The touchpoint explosion is helping to make our data sets ever bigger… and heavier.

In my earlier days in OSS, I was tasked with leading the migration of large sets of data into relational databases for use by OSS tools. I was lucky enough to spend years working on a full-scope OSS (ie its central database housed data for inventory management, alarm management, performance management, service order management, provisioning, etc, etc).

Having all those data sets in one database made it incredibly powerful as an insight generation tool. With a few SQL joins, you could correlate almost any data sets imaginable. But it was also a double-edged sword. Firstly, ensuring that all of the sets would have linking keys (and with high data quality / reliability) was a data migrator’s nightmare. Secondly, all those joins being done by the OSS made it computationally heavy. It wasn’t uncommon for a device list query to take the OSS 10 minutes to provide a response in the PROD environment.

There’s one concept that makes GIS tools more inherently capable of lifting heavier data sets than OSS – they generally load data in layers (that can be turned on and off in the visual pane) and unlike OSS, don’t attempt to stitch the different sets together. The correlation between data sets is achieved through geographical proximity scans, either algorithmically, or just by the human eye of the operator.

If we now consider real-time data (eg alarms/events, performance counters, etc), we can take a leaf out of Einstein’s book and correlate by space and time (ie by geographical and/or time-series proximity between otherwise unrelated data sets). Just wondering – How many OSS tools have you seen that use these proximity techniques? Very few in my experience.
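As a rough sketch of what that could look like (all event names, coordinates and thresholds below are invented for illustration), correlating otherwise-unrelated data sets purely by spatial and temporal proximity might be as simple as:

```python
from math import hypot

# Hypothetical event records: (name, x_km, y_km, epoch_seconds).
# Nothing links them by key -- we correlate purely by proximity.
events = [
    ("port-flap",   12.0, 40.1, 1000),
    ("power-alarm", 12.3, 40.0, 1030),
    ("cpu-spike",   95.0,  2.0, 1010),
]

def correlated(e1, e2, max_km=1.0, max_secs=60):
    """Treat two events as related if they are close in space AND time."""
    _, x1, y1, t1 = e1
    _, x2, y2, t2 = e2
    return hypot(x1 - x2, y1 - y2) <= max_km and abs(t1 - t2) <= max_secs

pairs = [(a[0], b[0]) for i, a in enumerate(events)
         for b in events[i + 1:] if correlated(a, b)]
print(pairs)  # the port-flap and power-alarm cluster together
```

The same two-threshold test scales up to real alarm and performance feeds; only the distance metric and windows need tuning.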

BTW. I’m the first to acknowledge that a stitched data set (ie via linking keys such as device ID between data sets) is definitely going to be richer than uncorrelated data sets. Nonetheless, this might be a useful technique if your data is getting too heavy for your OSS to lift (eg simple queries are causing minutes of downtime / delay for operators).

A defacto spatial manager

Many years ago, I was lucky enough to lead a team responsible for designing a complex inside and outside plant network in a massive oil and gas precinct. It had over 120 buildings and more than 30 networked systems.

We were tasked with using CAD (Computer Aided Design) and Office tools to design the comms and security solution for the precinct. And when I say security, not just network security, but building access control, number plate recognition, coast guard and even advanced RADAR amongst other things.

One of the cool aspects of the project was that it was more three-dimensional than a typical telco design. A telco cable network is usually planned on x and y coordinates because the z coordinate usually sits on one or two planes (eg all ducts are at say 0.6m below ground level, or all catenary wires between poles are at say 5m above ground). However, on this site, cable trays ran at all sorts of levels to route around critical gas processing infrastructure.

We actually proposed to implement a light-weight OSS for management of the network, including outside plant assets, due to the easy maintainability compared with CAD files. The customer’s existing CAD files may have been perfect when initially built / handed over, but were nearly useless to us because of all the undocumented changes that had happened in the ensuing period. However, the customer was used to CAD files and wanted to stay with CAD files.

This led to another cool aspect of the project – we had to build out defacto OSS data models to capture and maintain the designs.

We modelled:

  • The support plane (trayway, ducts, sub-ducts, trenches, lead-ins, etc)
  • The physical connectivity plane (cables, splices, patch-panels, network termination points, physical ports, devices, etc)
  • The logical connectivity plane (circuits, system connectivity, asset utilisation, available capacity, etc)
  • Interconnection between these planes
  • Life-cycle change management
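A toy object model (class and field names are my own invention, not the project’s actual schema) hints at how these planes and their interconnections could hang together:

```python
from dataclasses import dataclass, field

@dataclass
class SupportAsset:           # support plane: trayway, duct, trench, lead-in...
    asset_id: str
    kind: str

@dataclass
class PhysicalAsset:          # physical plane: cable, splice, port, device...
    asset_id: str
    kind: str
    supported_by: list = field(default_factory=list)  # SupportAsset ids

@dataclass
class LogicalAsset:           # logical plane: circuit, system connectivity...
    asset_id: str
    rides_on: list = field(default_factory=list)      # PhysicalAsset ids
    capacity: int = 0
    used: int = 0

    @property
    def available_capacity(self):
        return self.capacity - self.used

duct = SupportAsset("D1", "duct")
cable = PhysicalAsset("C1", "fibre-cable", supported_by=["D1"])
circuit = LogicalAsset("CCT1", rides_on=["C1"], capacity=24, used=20)
print(circuit.available_capacity)  # 4
```

The inter-plane links (`supported_by`, `rides_on`) are what make life-cycle change management tractable: a change to a duct can be traced up to the circuits it ultimately carries.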

This definitely gave me a better appreciation for the type of rules, variants and required data sets that reside under the hood of a typical OSS.

Have you ever had a non-OSS project that gave you a better appreciation / understanding of OSS?

I’m also curious. Have any of you designed your physical network plane in three dimensions? With a custom or out-of-the-box tool?

The OSS Matrix – the blue or the red pill?

OSS tend to be very good at presenting a current moment in time – the current configuration of the network, the health of the network, the activities underway.

Some (but not all) tend to struggle to cope with other moments in time – past and future.

Most have tools that project into the future for the purpose of capacity planning, such as link saturation estimation (based on projecting forward from historical trend-lines). Predictive analytics is a current buzzword as researchers attempt to predict future events and mitigate them now.

Most also have the ability to look into the past – to look at historical logs to give an indication of what happened previously. However, historical logs can be painful and tend towards forensic analysis. We can generally see who (or what) performed an action at a precise timestamp, but it’s not so easy to correlate the surrounding context in which that action occurred. They rarely present a fully-stitched view in the OSS GUI that shows the state of everything else around it at that snapshot in time past. At least, not to the same extent that the OSS GUI can stitch and present current state together.

But the scenario that I find most interesting is for the purpose of network build / maintenance planning. Sometimes these changes occur as isolated events, but are more commonly run as projects, often with phases or milestone states. For network designers, it’s important to differentiate between assets (eg cables, trenches, joints, equipment, ports, etc) that are already in production versus assets that are proposed for installation in the future.

And naturally those states cross over at cut-in points. The proposed new branch of the network needs to connect to the existing network at some time in the future. Designers need to see available capacity now (eg spare ports), but be able to predict with confidence that capacity will still be available for them in the future. That’s where the “reserved” status comes into play, which tends to work for physical assets (eg physical ports) but can be more challenging for logical concepts like link utilisation.

In large organisations, it can be even more challenging because there’s not just one augmentation project underway, but many. In some cases, there can be dependencies where one project relies on capacity that is being stood up by other future projects.

Not all of these projects / plans will make it into production (eg funding is cut or a more optimal design option is chosen), so there is also the challenge of deprecating planned projects. Capability is required to find whether any other future projects are dependent on this deprecated future project.
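One hedged sketch of that deprecation check: if each planned project records the future projects whose capacity it depends on (project names below are invented), finding stranded dependents is a simple reverse lookup:

```python
# Each planned project lists the future projects whose new capacity it
# relies on. Before deprecating a project, check nothing still needs it.
dependencies = {
    "PRJ-A": [],          # stands up new capacity on its own
    "PRJ-B": ["PRJ-A"],   # needs ports that PRJ-A will build
    "PRJ-C": ["PRJ-B"],
}

def dependents_of(project):
    """Return planned projects that would be stranded if `project` is cut."""
    return [p for p, deps in dependencies.items() if project in deps]

print(dependents_of("PRJ-A"))  # ['PRJ-B'] -- can't deprecate PRJ-A safely
print(dependents_of("PRJ-C"))  # [] -- safe to deprecate
```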

It can get incredibly challenging to develop this time/space matrix in OSS. If you’re a developer of OSS, the question becomes whether you want to take the blue or red pill.

Blown away by one innovation – a follow-up concept

Last Friday’s blog discussed how I’ve just been blown away by the most elegant OSS innovation I’ve seen in decades.

You can read more detail via the link, but the three major factors in this simple, elegant solution to data quality problems (probably OSS‘ biggest kryptonite) are:

  1. Being able to make connections that break standard object hierarchy rules; but
  2. Having the ability to mark that standard rules haven’t been followed; and
  3. Being able to use the markers to prioritise the fixing of data at a more convenient time

It’s effectively point 2 that has me most excited. So novel, yet so obvious in hindsight. When doing data migrations in the past, I’ve used confidence flags to indicate what I can rely on and what needs further audit / remediation / cleansing. But the recent demo I saw of the CROSS product is the first time I’ve seen it built into the user interface of an OSS.

This one factor, if it spreads, has the ability to change OSS data quality in the same way that Likes (or equivalent) have changed social media by acting as markers of confidence / quality.

Think about this for a moment – what if everyone who interacts with an OSS GUI had the ability to rank their confidence in any element of data they’re touching, with a mechanism as simple as clicking a like/dislike button (or similar)?

It’s an imperfect example, but let’s say field techs are given a design pack, and upon arriving at site, find that the design doesn’t match in-situ conditions (eg the fibre pairs they’re expecting to splice a customer lead-in cable to are already carrying live traffic, which they diagnose is due to data problems in an upstream distribution joint). Rather than jeopardising the customer activation window by having to spend hours/days fixing all the trickle-down effects of the distribution joint data, they just mark confidence levels in the vicinity and get the customer connected.

The aggregate of that confidence information is then used to show data quality heat maps and help remediation teams prioritise the areas that they need to work on next. It helps to identify data and process improvements using big circle and/or little circle remediation techniques.
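A minimal sketch of that aggregation step (sample votes and region IDs are invented): sum the like/dislike votes per region and work the lowest-confidence regions first:

```python
from collections import defaultdict

# Each vote is (region, +1 like / -1 dislike), left by anyone who
# touches the data in that part of the network.
votes = [("JOINT-12", -1), ("JOINT-12", -1), ("JOINT-12", 1),
         ("PIT-07", 1), ("PIT-07", 1)]

score = defaultdict(int)
for region, vote in votes:
    score[region] += vote

# Remediation teams work the most-disliked regions first.
worst_first = sorted(score, key=score.get)
print(worst_first)  # ['JOINT-12', 'PIT-07']
```

Plotted geographically, those scores become exactly the data quality heat map described above.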

Possibly the most important implication of the in-built ranking system is that everyone in the end-to-end flow, from order takers to designers through to coal-face operators, can better predict whether they need to cater for potential data problems.

Your thoughts?? In what scenarios do you think it could work best, or alternatively, not work?

The double-edged sword of OSS/BSS integrations

…good argument for a merged OSS/BSS, wouldn’t you say?
John Malecki

The question above was posed in relation to Friday’s post about the currency and relevance of OSS compared with research reports, analyses and strategic plans as well as how to extend OSS longevity.

This is a brilliant, multi-faceted question from John. My belief is that it is a double-edged sword.

Out of my experiences with many OSS, one product stands out above all the others I’ve worked with. It’s an integrated suite of Fault Management, Performance Management, Customer Management, Product / Service Management, Configuration / orchestration / auto-provisioning, Outside Plant Management / GIS, Traffic Engineering, Trouble Ticketing, Ticket of Work Management, and much more, all tied together with the most elegant inventory data model I’ve seen.

Being a single vendor solution built on a relational database, the cross-pollination (enrichment) of data between all these different modules made it the most powerful insight engine I’ve worked with. With some SQL skills and an understanding of the data model, you could ask it complex cross-domain questions quite easily because all the data was stored in a single database. That edge of the sword made a powerful argument for a merged OSS/BSS.
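To illustrate the idea (not the actual product’s schema – table and column names here are invented), a single-database design lets one short join answer a cross-domain question such as “which customers ride devices currently in alarm?”:

```python
import sqlite3

# Toy single-database schema spanning inventory, fault and service domains.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE device  (id TEXT PRIMARY KEY, site TEXT);
    CREATE TABLE alarm   (device_id TEXT, severity TEXT);
    CREATE TABLE service (customer TEXT, device_id TEXT);
    INSERT INTO device  VALUES ('NE-1', 'SYD'), ('NE-2', 'MEL');
    INSERT INTO alarm   VALUES ('NE-1', 'critical');
    INSERT INTO service VALUES ('Acme', 'NE-1'), ('Beta', 'NE-2');
""")
rows = db.execute("""
    SELECT s.customer, d.site, a.severity
    FROM service s
    JOIN device d ON d.id = s.device_id
    JOIN alarm  a ON a.device_id = d.id
""").fetchall()
print(rows)  # [('Acme', 'SYD', 'critical')]
```

In a federated, multi-vendor stack the same question would mean correlating exports from three separate systems.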

Unfortunately, the level of cross-referencing that made it so powerful also made it really challenging to build an initial data set to facilitate all modules being inter-operable. By contrast, an independent inventory management solution could just pull data out of each NMS / EMS under management, massage the data for ingestion and then you’d have an operational system. The abovementioned solution also worked this way for inventory, but to get the other modules cross-referenced with the inventory required engineering rules, hand-stitched spreadsheets, rules of thumb, etc. Maintaining and upgrading also became challenges after the initial data had been created. In many cases, the clients didn’t have all of the data that was needed, so a data creation exercise needed to be undertaken.

If I had the choice, I would’ve done more of the cross-referencing at data level (eg via queries / reports) rather than entwining the modules together so tightly at application level… except in the most compelling cases. It’s an example of the chess-board analogy.

If given the option between merged (tightly coupled) and loosely coupled, which would you choose? Do you have any insights or experiences to share on how you’ve struck the best balance?

Big circle. Little circle. Crossing the red line

Data quality is the bane of many a telco. If the data quality is rubbish then the OSS tools effectively become rubbish too.

Feedback loops are one of the most underutilised tools in a data fix arsenal. However, few people realise that there are what I call big circle feedback loops as well as little circles.

The little circle is using feedback in data alone, using data to compare and reconcile other data. That can produce good results, but it’s only part of the story. Many data challenges extend further than that if you’re seeking a resolution.

The big circle is designing feedback loops that incorporate data quality into end-to-end processes, which includes the field-work part of the process.

Redline markups have been the traditional mechanism to get feedback from the field back into improving OSS data. For example, if designers issue a design pack out to field techs that prove to be incorrect, then techs return the design with redline markups to show what they’ve implemented in the field instead.

With mobile technology and the right software tools, field workers could directly update data. Unfortunately this model doesn’t seem to fit into practices that have been around for decades.

There remain great opportunities to improve the efficiency of big circle feedback loops. They probably need a new way of thinking, but still need to fit into the existing context of field workers.

Digital twins

Well-designed digital twins based on business priorities have the potential to significantly improve enterprise decision making. Enterprise architecture and technology innovation leaders must factor digital twins into their Internet of Things architecture and strategy.
Gartner’s Top 10 Technology Trends.

Digital twinning has established some buzz lately, particularly in IoT circles. Digital twins are basically digital representations of physical assets, including their status, characteristics, performance and behaviours.

But it’s not really all that new is it? OSS has been doing this for years. When was the first time you can recall seeing an inventory tool that showed:

  • Digital representations of devices that were physically located thousands of kilometres away (or in the room next door for that matter)
  • Visual representations of those devices (eg front-face, back-plate, rack-unit sizing, geo-spatial positioning, etc)
  • Current operational state (in-service, out-of-service, in alarm, under test, real-time performance against key metrics, etc)
  • Installed components (eg cards, ports, software, etc)
  • Customer services being carried
  • Current device configuration details
  • Nearest connected neighbours
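A minimal sketch of such an inventory record, showing why it already qualifies as a digital twin (all names and fields are illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class DeviceTwin:
    device_id: str
    location: tuple            # geo-spatial position (lat, lon)
    state: str = "in-service"  # current operational state
    cards: list = field(default_factory=list)       # installed components
    services: list = field(default_factory=list)    # customer services carried
    neighbours: list = field(default_factory=list)  # nearest connected devices

ne = DeviceTwin("NE-1", (-33.87, 151.21),
                cards=["16xGE"], services=["Acme-VPN"], neighbours=["NE-2"])
ne.state = "in alarm"          # mirror the physical asset's live status
print(ne.state, ne.neighbours)
```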

Digital twinning, this amazing new concept, has actually been around for almost as long as OSS have. We just call it inventory management though. It doesn’t sound quite so sexy when we say it.

But how can we extend what we already do into digital twinning in other domains (eg manufacturing, etc)?

The end of cloud computing

…. but we’ve only just started and we haven’t even got close to figuring out how to manage it yet (from an aggregated view I mean, not just within a single vendor platform)!!

This article from Peter Levine of Andreessen Horowitz predicts “The end of cloud computing.”

Now I’m not so sure that this headline is going to play out in the near future, but Peter Levine does make a really interesting point in his article (and its embedded 25 min video). There are a number of nascent technologies, such as autonomous vehicles, that will need their edge devices to process immense amounts of data locally without having to backhaul it to centralised cloud servers for processing.

Autonomous vehicles will need to consume data in real-time from a multitude of in-car sensors, but only a small percentage of that data will need to be transmitted back over networks to a centralised cloud base. But that backhauled data will be important for the purpose of aggregated learning, analytics, etc, the findings of which will be shipped back to the edge devices.

Edge or fog compute is just one more platform type for our OSS to stay abreast of into the future.

OSS at the centre of the universe

Historically, the center of the Universe had been believed to be a number of locations. Many mythological cosmologies included an axis mundi, the central axis of a flat Earth that connects the Earth, heavens, and other realms together. In the 4th century BC Greece, the geocentric model was developed based on astronomical observation, proposing that the center of the Universe lies at the center of a spherical, stationary Earth, around which the sun, moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the sun was believed to be the center of the Universe, with the planets (including Earth) and stars orbiting it.
In the early 20th century, the discovery of other galaxies and the development of the Big Bang theory led to the development of cosmological models of a homogeneous, isotropic Universe (which lacks a central point) that is expanding at all points.

Perhaps I fall into a line of thinking as outdated as the axis mundi, but I passionately believe that the OSS is the centre of the universe around which all other digital technologies revolve. Even the sexy new “saviour” technologies like Internet of Things (IoT), network virtualisation, etc can only reach their promised potential if there are operational tools sitting in the background managing them and their life-cycle of processes efficiently. And the other “hero” technologies such as analytics, machine learning, APIs, etc aren’t able to do much without the data collected by operational tools.

No matter how far and wide I range in the consulting world of communications technologies and the multitude of industries they impact, I still see them coming back to what OSS can do to improve what they do.

Many people say that OSS is no longer relevant. Has the ICT world moved on to the geocentric, heliocentric or even Big Bang model? If so, what is at their centre?

Am I just blinded by what Sir Ken Robinson describes as, “When people are in their Element, they connect with something fundamental to their sense of identity, purpose, and well-being. Being there provides a sense of self-revelation, of defining who they really are and what they’re really meant to be doing with their lives.” Am I struggling to see the world from a perspective other than my own?

Can you imagine how you’ll interact with your OSS in 10 years?

Here’s a slightly mind-blowing fact for you – a child born when the iPhone was announced will be 10 years old in 2 months (a piece of trivia courtesy of Ben Evans).

That’s nearly 10 years of digitally native workers coming into the telco workforce and 10 years of not-so-digitally native workers exiting it. We marvelled that there was a generation that had joined the workforce that had never experienced life without the Internet. The generation that has never experienced life without mobile Internet, apps, etc is now on the march.

The smart-phone revolution spawned by the iPhone has changed, and will continue to change, the way we interact with information. By contrast, there hasn’t really been much change in the way that we interact with our OSS has there? Sure, there are a few mobility apps that help the field workforce, sales agents, etc and we’re now using browsers as our clients (mostly) but a majority of OSS users still interact with OSS servers via networked PCs that are fitted with a keyboard and mouse. Not much friction has been removed.

The question remains about how other burgeoning technologies such as augmented reality and gesture-based computing will impact how we interact with our OSS in the coming decade. Are they also destined to only supplement the tasks of operators that have a mobile / spatial component to their tasks, like the field workforce?

Machine learning and Artificially Intelligent assistants represent the greater opportunity to change how we interact with our OSS, but only if we radically change our user interfaces to facilitate their strengths. The overcrowded nature of our current OSS doesn’t readily accommodate small form-factor displays or speech / gesture interactions. An OSS GUI built around a search / predictive / precognitive interaction model is the more likely stepping stone to drastically different OSS interactions in the next ten years. A far more frictionless OSS future.

Using deduction

Eric Raymond proposed that a computer should ‘never ask the user for any information that it can autodetect, copy, or deduce’; computer vision changes what the computer has to ask. So it’s not, really, a camera, taking photos – it’s more like an eye, that can see.
Ben Evans

There’s a big buzzword going around our industry at the moment called “omnichannel.” Consider it an interaction pathway, where a user can choose to interact with any number of channels – phone, email, online, USSD, retail store, IVR, app, etc. Not only that, but smartphones have made it possible to flip backwards and forwards between channels. This can be done either as dictated by the workflow (eg using an app which launches a USSD hash-code to return a URL to current offers, etc) or by the customer choosing the channel they’re most comfortable with.

In the past, process designs have tended to be done within the silo of just one channel. One of the challenges for modern process designers is to design user journeys and state transitions that jump channels and have a multi-channel decision tree built-in. Exacerbating this challenge is transitioning data between channels so that the journey is seamless for customers – each channel is likely to have its own back-end OSS/BSS system/s after all and data handoff must happen before a transaction is completed (ie intermediate storing and transferring of records). Eric Raymond’s quote above holds true for ensuring a great customer experience in an omnichannel environment.
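As a rough illustration of that intermediate store-and-transfer idea (the structure is invented, not any particular BSS’s API), a channel-neutral session record lets any channel resume the journey where the last one left off:

```python
# Channel-neutral session store: any channel's back-end can persist
# journey state and any other channel can pick it up mid-flow.
sessions = {}

def save_progress(session_id, channel, state):
    sessions[session_id] = {"last_channel": channel, "state": state}

def resume(session_id, new_channel):
    record = sessions[session_id]
    record["last_channel"] = new_channel   # journey continues seamlessly
    return record["state"]

save_progress("S-1", "app", {"step": "plan-selected", "plan": "50GB"})
state = resume("S-1", "retail-store")
print(state["plan"])  # 50GB
```

The hard part in practice isn’t the store itself; it’s mapping each channel’s back-end OSS/BSS data into the shared record before the customer jumps.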

I’m fascinated by Ben Evans’ take on Eric’s quote and how that relates to omnichannel user journeys for telcos (see the link above for a fascinating broader context around Ben’s prediction of computer vision). When the computer (ie smartphone) begins to gain more situational awareness via its camera, an additional and potentially very powerful interaction channel presents itself.

We’ve all heard of image recognition already being available in the purchasing process within retail. Ben’s concept takes that awareness to a higher level. I haven’t heard of image recognition being used within telco yet, but I am looking forward to when Augmented Reality combines with this situational awareness (and the data made available by OSS) in our industry. Not just for customers, but for telco operators too. The design packs that a field tech uses today are going to look very different in the near future.

When phones swallowed physical objects

…after a decade in which phones swallowed physical objects, with cameras, radios, music players and so on turned into apps, AR might turn those apps back into physical objects – virtual ones, of course. On one hand cameras digitise everything, and on the other AR puts things back into the world.”
Ben Evans

Similarly, for years OSS have been like a black hole – sucking in data from every physical (and logical) source they can get their hands on. Now AR (Augmented Reality) provides the mechanism for OSS to put things back into the world – as visual overlays, not just reports.

It starts with visualising underground or in-wall assets like cables, but the use cases are extraordinary in their possibilities. The days of printed design packs being handed to field techs are surely numbered. They’re already being replaced with apps but interactive visual aids will take it to a new level of sophistication.

The small-grid OSS platform

Perhaps the most egregious platform failure is to simply not see the platform play at all. It is also one of the hardest for traditional firms to avoid. Firms guilty of this oversight never get past the idea that they sell products when they could be building ecosystems. Sony, Hewlett Packard (HP), and Garmin all made the mistake of emphasizing products over platforms. Before the iPhone launched in 2007, HP dominated the handheld calculator space for science and finance. Yet today, consumers can purchase near perfect calculator apps on iTunes or on Google Play and at a fraction of the cost of a physical calculator. Apple and Google did not create these emulators; they merely enabled them by providing the platform that connects app producers and consumers who need calculators.
Sony has sold some of the best electronic products ever made: It once dominated the personal portable music space with the Walkman. It had the world’s first and best compact disc players. By 2011, its PlayStation had become the best-selling game console of all time. Yet, for all its technological prowess Sony focused too much on products and not enough on creating platforms. (What became of Sony’s players? A platform – iOS – ate them for lunch.) Garmin, as a tailored mapping device, suffered a similar fate. As of 2012, Garmin had sold 100 million units after 23 years in the market. By contrast, iPhone sold 700 million units after just eight years in the market. More people get directions from an iPhone than from a Garmin, not only because of Apple maps but also because of Google Maps and Waze. As platforms, iOS and Android have ecosystems of producers, consumers, and others that have helped them triumph over such products as the Cisco Flip camera, the Sony PSP, the Flickr photo service, the Olympus voice recorder, the Microsoft Zune, the Magnus flashlight, and the Fitbit fitness tracker.
When a platform enters the market, product managers who focus on features are not just measuring the wrong things, they’re thinking the wrong thoughts
Co-authored with Marshall Van Alstyne and Geoffrey Parker here.

Recent posts have discussed the small-grid OSS concept. In effect, it’s an OSS platform that brings OSS developers and OSS users together into a single platform / marketplace / ecosystem.

As the link above shows (it’s a really interesting read in full BTW), there are many potential pitfalls in taking the platform approach. However, perhaps the most egregious platform failure is to simply not see the platform play at all.

The OSS industry has barely tapped into the platform play yet but, like other industries (think Uber versus taxis), OSS is primed for a platform disruption.

So far we have some service / NFV catalogues, mediation device and developer forums, as well as ecosystems like Esri, Salesforce, etc but I can’t think of any across the broader scope of OSS. Are you aware of any?

Virtual satellites

Last Friday’s post, “Managing satellites,” discussed how the satellite OSS concept leaves the core OSS to work on telco networks / eTOM models, whereas ITIL / ITSM has become increasingly prevalent in the tools that support managed service contracts.

Does this concept resonate with you regarding management of virtualised networks – satellites for the virtualised components and the core for the more physical activity management? Service catalogues, service establishment, service reporting, SLA management, contracts, service chaining and automations are natural candidates for “satellite” management as they are tied to logical / virtual resources.

There have been many discussions about OSS being superseded. This potential for disruption is particularly true in the “satellite” space.

However, the “satellite” service management approach doesn’t tend to cover the physical work done by the core. This includes CAD / GIS designs of physical networks, planning (augmentation, infill), ticket of work, field work management, physical resource management, DCIM and more.

The fully virtualised management tools may work for OTT (over the top) providers, but not so well for providers that have significant physical assets to manage.

This could be true for BSS too. Satellites do service level billing and aggregated reporting for managed service customers but bill runs, clearing house and other business / billing functions are done by the core.

You can read more about this principle in the Aircraft carrier analogy.

The satellite model also bears comparison to the splitting of telcos into REIT and DSP business models, with satellite and core then being more easily separated.

Can you see other benefits (or disadvantages) of the satellite vs core OSS model?

Incident play forward

Earlier this week you may have read, “Incident playback,” a post about storing the context around incidents and being able to learn and refine responses using that context.

Today I’d like to take the concept a little further.

When talking about context, I wasn’t just referring to other live alarms, but also having data feeds such as performance metrics (eg CPU utilisation), change windows (with linkages to change management tickets / documents) and any other useful data streams that could be presented on a timeline.

Taking the playback concept and then projecting into the future, the tool could become invaluable as a scenario and network planning tool. It could use data such as capacity threshold analysis on a given area of network and predict capacity exhaustion and identify network design weaknesses in certain scenarios.

I’d see this tool presented across at least three panes:

  • A geospatial pane (or logical network map) showing capacities and alarms;
  • A data log pane, similar to an alarm list showing details relating to the selected device or devices; but most importantly
  • A timeline control pane, allowing the operator to play forward, pause, back, etc just like a movie player
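The play-forward portion could start as simply as projecting a historical utilisation trend to its saturation point (a naive linear sketch with invented numbers – a real tool would use far richer models):

```python
# Historical samples of (day, % utilisation) for one link.
history = [(0, 40.0), (30, 46.0), (60, 52.0)]

# Simple linear trend from the first and last samples.
(d0, u0), (d1, u1) = history[0], history[-1]
slope = (u1 - u0) / (d1 - d0)                  # % growth per day

def utilisation_at(day):
    """Projected utilisation on a future day, on the current trend."""
    return u0 + slope * (day - d0)

days_to_full = (100.0 - u1) / slope + d1
print(round(days_to_full))  # day the link hits 100% on this trend
```

The timeline control pane is then just a slider over `utilisation_at(day)` for every link, with threshold breaches raised as future "alarms".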

Sounds like a big challenge to create. Do you think the benefits would likely outweigh the effort? Would such a tool be useful in your environment/s?

The problem with virtual reality

In the past I’ve written about a cool tool called AugView that lets you augment what you see in front of you with a spatial representation of physical assets, such as conduits, cables, etc that are actually hidden underground.

At the moment there are two limitations that restrict the usefulness of tools like AugView. The first is finding geo-positioning hardware (eg GPS units) that are accurate enough. That hurdle will be overcome soon enough.

The bigger challenge will be the accuracy of the data. Many cable and duct records go back decades and lead-in cables in particular have not always been recorded with great spatial accuracy. This could prove to be an almost insurmountable challenge for some operators seeking to equip their field workers with AR tools.

I wonder whether the seismic processing technology used in mining exploration could be adapted to finding underground infrastructure. The benefits don’t seem to outweigh what would undoubtedly be an expensive audit process though do they?

Can you envisage a scenario where they might? Can you think of other tech that might do it more cost effectively? Ground penetrating radar? Other?

With knowledge comes power

“You’ve got Amazon knowing everything about purchasing, Google knowing everything about what people do on the Internet, and Salesforce knowing everything about the revenue side of a business.”
Scott Raney

The big question for CSPs as they make the transition to the more modern Digital Service Provider (DSP) business model is, “what big thing can they know everything about and use to help their customers be more efficient and profitable?”

Telcos were most profitable when their services supported business. The telephone was used to create leads, close deals, control supply chains, etc. With e-business models, that has shifted to online marketing, analytics on marketing (amongst other things), digital supply chains, etc.

Telcos are no longer providing the tools that organisations value most (except perhaps mobility access to all the more important content and apps).

OSS have the potential to become enormous customer insight engines on behalf of CSPs, but only if they’re seen as more than just operational tools.

Photos in the field

With the ubiquity of smart phones, field workers on site often photograph assets to show the state of the network before and/or after their site visit.

Many organisations with OSS also have Digital Asset Management (DAM) tools that allow them to store digital assets (images, video, audio, etc) in a central repository where they can be searched, managed, retrieved and shared. Perhaps not even a DAM as such, but centrally accessible file storage.

Today’s concept is to create an app that stores the photo from the phone in the DAM and then easily links it to the asset in the OSS inventory manager (eg a URL from the DAM being loaded into an applicable field in the OSS), while also allowing notes on that asset to be appended in the OSS.

In theory, such an app could be configured to interface into any inventory manager, making it saleable to any customer, vendor or OSS integrator.

It’s common for an image to be stored somewhere other than in the OSS, with only a hyperlink stored in the OSS. It’s just painful to store hyperlinks against specific inventory items, especially in large volumes. This app would remove that pain point by acting as the glue between phone, DAM and OSS.
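As a rough sketch of that glue logic in Python, with the DAM and OSS clients abstracted as plain callables because the real APIs vary by product. Field names like photo_url and notes are placeholders, not any inventory manager's actual attributes:

```python
import json

def build_oss_patch(photo_url, notes=None):
    """Build the JSON body that would be written onto the inventory record.
    The field names ('photo_url', 'notes') are placeholders; the real
    attribute names depend on the inventory manager's data model."""
    body = {"photo_url": photo_url}
    if notes:
        body["notes"] = notes
    return json.dumps(body)

def link_photo_to_asset(upload, patch, photo_bytes, asset_id, notes=None):
    """Glue logic: push the photo to the DAM via `upload`, then store the
    returned URL (plus any field notes) against the asset via `patch`.
    `upload` and `patch` stand in for the DAM/OSS HTTP clients."""
    photo_url = upload(photo_bytes)          # DAM returns a shareable URL
    patch(asset_id, build_oss_patch(photo_url, notes))
    return photo_url
```

Keeping the two clients injectable is what would make the app configurable against "any inventory manager", per the saleability point above.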

A couple of other thoughts:

  • Using geo-tagging within images to help correlate with fixed assets in the OSS
  • Using asset identification tools (eg RFID, NFC, QR codes, etc ) to determine the asset
  • Aligning DAM / inventory content with virtual reality tools
  • Enforcement of image naming conventions on the DAM
  • Generating design packs and subsequent as-builts with mark-ups on top of photos from in the field
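To illustrate the geo-tagging bullet above, a hedged Python sketch of matching a photo's geotag to the nearest fixed asset, using a plain haversine distance and a trust radius. The asset record format and the 25 m default threshold are assumptions for the example:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearest_asset(photo_lat, photo_lon, assets, max_m=25):
    """Given a photo's EXIF geotag, pick the closest known asset, but only
    within max_m metres; return None if nothing is close enough to trust."""
    best = min(assets, key=lambda a: haversine_m(photo_lat, photo_lon, a["lat"], a["lon"]))
    dist = haversine_m(photo_lat, photo_lon, best["lat"], best["lon"])
    return best if dist <= max_m else None
```

Returning None for out-of-range matches matters here, given how variable consumer GPS accuracy still is; the app could fall back to RFID/NFC/QR identification in that case.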

Have you come across anything similar during your travels in OSS? What other important features would such an app need?

Managing property with OSS?

There’s a slight problem with being passionate about OSS – you see everything in relation to OSS problems, solutions, analogies, etc.

I was talking recently with Simon, a great friend of mine, about a new role that he’s taking on. He will be responsible for technology in the facilities used by the large bank that he works for. Those facilities include branches, offices, data centres and more. The conversation started out on the challenges facing facilities managers, including energy efficiency, occupation rates, meeting room utilisation rates, cost per desk, workforce efficiency, utility allocation / billing and many other KPIs.

The tech he has been considering in this space is wide and varied but primarily comes down to additional types of sensors that will ultimately reduce costs for his employer. Many of these sensors sound very cool (no pun intended re. HVAC sensors). As you can imagine though, the executives at the bank don’t fund cool, they fund cost-out projects.

The same is true for OSS but that’s not the overlap I was thinking about on this occasion. It was how sensor networks in buildings collect vast amounts of data, aggregate it, process it, analyse it against a particular metric or theme and then provide insights based on that benchmark. We then got onto the topic of circulating (another HVAC pun) the benchmark results for the purpose of gamification and competition between different parts of the organisation.
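The aggregate-then-benchmark loop described above can be sketched in a few lines of Python. The (site, value) reading format and the "lower score is better" convention are assumptions for illustration, not any BMS's actual scheme:

```python
from collections import defaultdict

def benchmark(readings, target):
    """Aggregate raw sensor readings per site, score each site's average
    against a shared target (eg kWh per desk), and rank the results into
    the kind of league table that drives gamification between sites."""
    totals, counts = defaultdict(float), defaultdict(int)
    for site, value in readings:
        totals[site] += value
        counts[site] += 1
    # Score = average reading relative to the target; lower is better
    scores = {site: (totals[site] / counts[site]) / target for site in totals}
    return sorted(scores.items(), key=lambda kv: kv[1])
```

Swap the aggregation metric and the ranked entity (cell site, service, NOC team) and the same loop describes plenty of OSS benchmarking use cases.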

You can see why I consider existing OSS capabilities to be well placed for servicing the IoT space, can’t you?

Then we got onto the topic of blockchain and smart contracts so my OSS-coloured glasses kicked in again but that’s a story for another day.

PS. Yes, in the pre-IoT days, we managed buildings through software like BMS (Building Management Systems), PAGA (Public Address General Alarm), physical security (eg ACS – Access Control Systems) and other tools not to mention environmentals but the IoT buzzword is taking it to another level.


10 ideas – nascent technologies

“Which 10 nascent technologies will impact and be impacted by OSS in the future?”

  1. Cloud delivery and related services models that allow carriers to reduce their OSS CAPEX load and outsource aspects of operations. The other important aspect of cloud delivery is web-scaling of infrastructure for efficient, but resilient OSS platform architectures
  2. Network virtualization for on-demand resource allocation and associated reductions in power use by CSPs, an area where OSS has barely scratched the surface
  3. Network security has the potential to share information on a much larger scale than currently. Fear is a driving force in budget allocations and network security threats are on the rise
  4. Big Data and analytics are already widely used by CSPs but flexible, data driven application models appear to be the only way to keep pace with the rapid change thrust upon our industry
  5. Machine Learning and Predictive Analytics will be the only way that operations teams will be able to keep up as we see a touchpoint explosion. The associated rise in actionable events won’t be able to be handled without machine assisted decision support
  6. Service chaining, orchestration and automation are already important but will become increasingly important due to the touchpoint explosion and the increased complexity of virtualised networks
  7. Wireless sensor networks / IoT aren’t exactly nascent despite recent hype. Telemetry networks have been around for decades. The change will come from a rapid increase in sensors for the consumer / retail market (rather than the business market) and the need for multi-tenanted operational solutions to monitor and manage them, not to mention supporting the building of third-party apps on them
  8. Self-organizing Networks (SON) have primarily been focussed on mobile networks but the concept extends to all network types if the OSS will support it
  9. Blockchain will potentially represent the solution to a number of current OSS problems, including data integrity and automated contract handling (eg SLAs, QoS, delivery times, etc). More on this in future blogs
  10. Telco-led ecosystems in IoT, health-care and more will leverage the power of the value fabric and the power of innovation that come with it. Essential for delivering value to the varied needs of the long tail of customers (ie all customers other than the top-100)