The answer is soooo obvious…. or is it?

There’s a crowded room of OSS experts, a room filled with serious intellectual horsepower. You might be a virtu-OSS-o, but you surely know that there’s still so much to be learnt from those around you. You have the chance to unlock the experiences and insights of your esteemed colleagues. But how? The answer might seem to be obvious. You do so by asking questions. Lots of questions.

But that obvious answer might have just one little unexpected twist.

Do you ask:

  1. Ego questions – questions that demonstrate how clever you are (and thus prove to the other experts that you too are an expert); OR
  2. Embarrassing questions – questions that could potentially embarrass you (and demonstrate major deficiencies in your knowledge, perhaps suggesting that you’re not as much of an expert as everyone else)?

I’ve been in those rooms and heard the questions, as you have too no doubt. What do you think the ratio of ego to embarrassing would typically be? 10 to 1? 20 to 1?

The problem with the ego questions is that they can be so specific to the context of a few that they end up steering the conversations to the depths of technology hell (of course they can also end up inspiring / enlightening too, so I’m generalising here).

But have you observed that the very best in our industry happen to ask a lot of embarrassing questions?

A quote by Ramit Sethi splices in brilliantly here: “The very best ask lots of questions. 3 questions I almost never hear: (1) ‘Just a second. If you don’t mind me asking, how did you get to that?’ (2) ‘I’m not sure I understand the conclusion — can you walk me through that?’ (3) ‘How did you see that answer?’ Ask these questions and stop worrying about being embarrassed. How else are you going to learn?”

Just for laughs, next time you’re at one of these events (and I notice that TM Forum Live is coming up in May), try to guess what the ego to embarrassing ratio might be there and which set of questions are spawning the more interesting / insightful / helpful conversations.

An OSS niche market opportunity?

“The survey found that 82 percent of service providers conduct less than half of customer transactions digitally, despite the fact that nearly 80 percent of respondents said they are moving forward with business-wide digital transformation programs of varying size and scale. This underscores a large perception gap in understanding, completing and benefiting from digitalization programs.

The study revealed that more than one-third of service providers have completed some aspect of digital transformation, but challenges persist; nearly three-quarters of service providers identify legacy systems and processes, challenges relating to staff and skillsets and business risk as the greatest obstacles to transforming digital services delivery.

Driving a successful digital transformation requires companies to transform myriad business and operational domains, including customer journeys, digital product catalogs, partner management platforms and networks via software-defined networking (SDN) and network functions virtualization (NFV).”
Survey from Netcracker and ICT Intuition.

Interesting study from Netcracker and ICT Intuition. To reiterate with some key numbers and takeaways:

  1. 82% of responding service providers can increase digital transactions by at least 50% (in theory). Digital transactions tend to be significantly cheaper for service providers than manual transactions. However, some customers will work the omni-channel experience to find the channel that they’re most comfortable dealing with. In many cases, this means attempting to avoid digital experiences. As a side note, any attempt to become 100% digital is likely to require social / behavioural engineering of customers and/or an associated churn rate.
  2. Nearly 75% of responding service providers identify legacy systems / processes, skillsets and business risk as their biggest challenges. This reads as putting a digital interface onto back-end systems like BSS / OSS tools. This is less of a challenge for newer operators that have been designed with digitalised customer interactions in mind. The other challenge for operators is that digital front-ends are rarely designed to bolt onto the operators’ existing legacy back-end systems, so they need significant integration.
  3. If an operator wants to build a digital transaction regime, they should expect an OSS / BSS transformation too.

To overcome these challenges, I’ve noticed that some operators have been building up separate (often low-cost) brands with digital-native front ends, back ends, processes and skills bases. These brands tend to target the ever-expanding digitally native generations and be seen as the stepping stone to obsoleting legacy solutions (and perhaps even legacy business models?).

I wonder whether this is a market niche for smaller OSS players to target and grow into whilst the big OSS brands chase the bigger-brother operator brands?

The challenges in transforming network assurance to network healing

A couple of interesting concepts have the ability to fundamentally change the way networks and services are maintained. If they can be harnessed, we could replace the term “network assurance” with “network healing.”

The first concept is SON, which has been formulated specifically with mobile radio networks in mind, but has the potential to extend into all network types.

“A Self-Organizing Network (SON) is an automation technology designed to make the planning, configuration, management, optimization and healing of mobile radio access networks simpler and faster.”

One of the challenges of creating self-organising, self-optimising, self-healing networks is that every network has physical points of failure – cable cuts, equipment failure, etc. These can’t be fixed with software alone. That’s where the second concept comes in.

The second concept is smart-contract technology (possibly facilitated by Blockchain), which provides the potential for a more automated way of engaging a mini procurement / delivery / test / payment process to fix physical problems (or logical for that matter). Whilst the work might be done in the physical world, it could be done by third-parties, initiated by the OSS via microservice. Network Fix as a Service (NFaaS), with implementation, test, acceptance and payment all done in software as far as the OSS sees it.

To an extent this already happens via the issuance of ToW (Tickets of Work) to third-party fault-fix teams, but it’s currently a largely manual process.
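To make the NFaaS idea more tangible, here’s a minimal sketch of how an OSS might drive that procurement / delivery / test / payment loop in software. It’s purely illustrative – the class names, states and evidence strings are all hypothetical, and a real implementation would anchor each transition to a smart-contract ledger.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class FixState(Enum):
    TENDERED = auto()     # fault published to third-party fix teams
    ACCEPTED = auto()     # a team has won the mini-procurement
    IMPLEMENTED = auto()  # physical work completed in the field
    TESTED = auto()       # OSS re-tests the service path
    PAID = auto()         # payment auto-released on test success

@dataclass
class NetworkFixContract:
    """Hypothetical smart contract covering a single physical fix."""
    fault_id: str
    price: float
    state: FixState = FixState.TENDERED
    history: list = field(default_factory=list)

    def transition(self, new_state: FixState, evidence: str) -> None:
        # Each transition would be recorded immutably (eg on a blockchain)
        self.history.append((self.state, new_state, evidence))
        self.state = new_state

def heal_physical_fault(fault_id: str) -> NetworkFixContract:
    """The whole loop, as the OSS sees it: software in, software out."""
    contract = NetworkFixContract(fault_id=fault_id, price=1200.0)
    contract.transition(FixState.ACCEPTED, "field team won tender")
    contract.transition(FixState.IMPLEMENTED, "splice completed, photos attached")
    contract.transition(FixState.TESTED, "loopback test passed")
    contract.transition(FixState.PAID, "payment released via smart contract")
    return contract
```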

However, the bigger challenge of transforming network assurance to network healing is to find a way to self-heal services that span multiple network domains. Those domains could include physical network functions (PNFs), virtual network functions (VNFs) and the myriad topologies, technologies and protocols that interconnect them.

I can’t help but think that to simplify one thing (self-healing) we first have to simplify another (the network itself, via variant minimisation).

If we can drastically reduce the number of variants, we have a better chance of building self-heal automations… and don’t just tell me that AI engines will solve all these problems! Maybe one day, but perhaps we can start with baby steps first.

Bringing Eminem’s blank canvas to OSS

“When you start out in your career, you have a blank canvas, so you can paint anywhere that you want because the shit ain’t been painted on yet. And then your second album comes out, and you paint a little more and you paint a little more. By the time you get to your seventh and eighth album you’ve already painted all over it. There’s nowhere else to paint.”
Eminem. (on Rick Rubin and Malcolm Gladwell’s Broken Record podcast)

To each their own. Personally, Eminem’s music has never done it for me, whether his first or eighth album, but the quote above did strike a chord (awful pun).

It takes many, many hours to paint in the detail of an OSS painting. By the time a product has been going for a few years, there’s not much room left on the canvas and the detail of the existing parts of the work is so nuanced that it’s hard to contemplate painting over.

But this doesn’t consider that over the years, OSS have been painted on many different canvases. First there were mainframes, then client-server, relational databases, XaaS, virtualisation (of servers and networks), and a whole continuum in between… not to mention the future possibilities of blockchain, AI, IoT, etc. And that’s not even considering the changes in programming languages along the way. In fact, new canvases are now presenting themselves at a rate that’s hard to keep up with.

The good thing about this is that we have the chance to start over with a blank canvas each time, to create something uniquely suited to that canvas. However, we invariably attempt to bring as much of the old thinking across as possible, immediately leaving little space left to paint something new. Constraints that existed on the old canvas don’t always apply to each new canvas, but we still have a habit of bringing them across anyway.

We don’t always ask enough questions like:

  • Does this existing process still suit the new canvas?
  • Can we skip steps?
  • Can we obsolete any of the old / unused functionality?
  • Are old and new architectures (at all levels) easily transmutable?
  • Does the user interface need to be ported or replaced?
  • Do we even need a user interface (assuming the rise of machine-to-machine with IoT, etc)?
  • Does the old data model have any relevance to the new canvas?
  • Do the assurance rules of fixed-network services still apply to virtualised networks?
  • Do the fulfillment rules of fixed-network services still apply to virtualised networks?
  • Are there too many devices to individually manage, or can they be managed as a cohort?
  • Does the new model give us access to new data and/or techniques that will allow us to make decisions (or derive insights) differently?
  • Does the old billing or revenue model still apply to the new platform?
  • Can we increase modularity and abstraction between modules?

“The real reason “blockchain” or “AI” may actually change businesses now or in the future, isn’t that the technology can do remarkable things that can’t be done today, it’s that it provides a reason for companies to look at new ways of working, new systems and finally get excited about what can be done when you build around technology.”
Tom Goodwin

50 exercises to ignite your OSS innovation sessions

Every project starts with an idea… an idea that someone is excited enough to sponsor.

  1. But where are your ideas being generated from?
  2. How do they get cultivated and given time to grow?
  3. How do they get pitched? And how do they get heard?
  4. How are sponsors persuaded?
  5. How do they then get implemented?
  6. How do we amplify this cycle of innovation and implementation?

I’m fascinated by these questions in OSS for the reasons outlined in The OSS Call for Innovation.

If we look at the levels of innovation (to be honest, it’s probably more a continuum than bands / levels):

  1. Process Improvement
  2. Incremental Improvement (new integrations, feature enhancement, etc)
  3. Derivative Ideas (iPhone = internet + phone + music player)
  4. Quantum Innovation (Tablet computing, network virtualisation, cloud delivery models)
  5. Radical Innovations (transistors, cellular wireless networks, Claude Shannon’s Information Theory)

We have so many immensely clever people working in our industry and we’re collectively really good at the first two levels. Our typical mode of working – which could generally be considered fire-fighting (or dare I say it, Agile) – doesn’t provide the time and headspace to work on anything in the longer life-cycles of levels 3-5. These are the levels that can be more impactful, but it’s these levels where we need to carve out time specifically for innovation planning.

If you’re ever planning to conduct innovation fire-starter sessions, I really recommend reading Richard Brynteson’s “50 Activities for Building Innovation.” As the title implies, it provides 50 (simple but powerful) exercises to help groups to generate ideas.

Please contact us if you’d like PAOSS to help facilitate your OSS idea firestarter or road-mapping sessions.

Posing a Network Data Synchronisation Protocol (NDSP) concept

Data quality is one of the biggest challenges we face in OSS. A product could be technically perfect, but if the data being pumped into it is poor, then the user experience of the product will be awful – the OSS becomes unusable, and that in itself generates a data quality death spiral.

This becomes even more important for the autonomous, self-healing, programmable, cooperative networks being developed (think IoT, virtualised networks, Self-Organizing Networks). If we look at IoT networks for example, they’ll be expected to operate unattended for long periods, but with code and data auto-propagating between nodes to ensure a level of self-optimisation.

So today I’d like to pose a question. What if we could develop the equivalent of Network Time Protocol (NTP) for data? Just as NTP synchronises clocking across networks, Network Data Synchronisation Protocol (NDSP) would synchronise data across our networks through a feedback-loop / synchronisation algorithm.

Of course there are differences from NTP. NTP only tries to coordinate one data field (time) along a common scale (time as measured along a 64+64 bits continuum). The only parallel for network data is in life-cycle state changes (eg in-service, port up/down, etc).

For NTP, the stratum of each clock is defined (Wikipedia has a helpful diagram of the clock-stratum hierarchy).

This has analogies with data, where some data sources can be seen to be more reliable than others (ie primary sources rather than secondary or tertiary sources). However, there are scenarios where stratum 2 sources (eg OSS) might push state changes down through stratum 1 (eg NMS) and into stratum 0 (the network devices). An example might be renaming of a hostname or pushing a new service into the network.
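To make that bidirectional flow concrete, here’s a minimal sketch of a stratum-based reconciliation rule, assuming (as NTP does) that lower stratum numbers are more authoritative, except for designated fields of intent that are mastered higher up and pushed down. All names here are hypothetical.

```python
# A minimal sketch of stratum-based reconciliation for the hypothetical NDSP.
# Lower stratum numbers are more authoritative (as in NTP), except for
# designated "intent" fields that are mastered higher up and pushed down.

DOWNWARD_FIELDS = {"hostname"}  # fields the OSS (stratum 2) masters

def reconcile(field_name, records):
    """Pick the winning value for one field from competing sources.

    records: list of (stratum, value) tuples, eg
             [(0, "gsw01-old"), (1, "gsw01-old"), (2, "gsw01-new")]
    """
    if field_name in DOWNWARD_FIELDS:
        # Intent flows down: the highest stratum (eg OSS) is the master
        return max(records, key=lambda r: r[0])[1]
    # State flows up: the lowest stratum (the device itself) is the master
    return min(records, key=lambda r: r[0])[1]

# The OSS's new hostname wins; the device's actual port state wins.
assert reconcile("hostname", [(0, "gsw01-old"), (2, "gsw01-new")]) == "gsw01-new"
assert reconcile("port_status", [(0, "down"), (2, "up")]) == "down"
```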

One challenge would be the vastly different data sets and how to disseminate / reconcile them across the network without overloading it with management / communications packets. Another would be format consistency. I once had a device type that had four different port naming conventions, and that was just within its own NMS! Imagine how many port name variations (and translations) might exist across the multiple inventories in our networks. The good thing about the NDSP concept is that it might force greater consistency across different vendor platforms.
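As an illustration of the format-consistency problem (and the kind of translation layer NDSP might render unnecessary), here’s a small sketch that maps several invented port-naming variants onto one canonical form.

```python
import re

# Four naming variants observed for the "same" port (all invented here)
VARIANTS = [
    r"^GigabitEthernet(\d+)/(\d+)$",
    r"^Gi(\d+)/(\d+)$",
    r"^GE-(\d+)-(\d+)$",
    r"^port (\d+)/(\d+)$",
]

def canonical_port(name: str) -> str:
    """Map any known variant to a single canonical form."""
    for pattern in VARIANTS:
        match = re.match(pattern, name, flags=re.IGNORECASE)
        if match:
            return "GigE {}/{}".format(*match.groups())
    raise ValueError(f"Unrecognised port naming convention: {name}")

# Three spellings of one physical port resolve to one inventory record
assert canonical_port("Gi0/1") == canonical_port("GE-0-1") == "GigE 0/1"
```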

Another challenge is security: NDSP would become a huge target, both because it would have the power to change configurations and because of its reach throughout the network.

So what do you think? Has the NDSP concept already been developed? Have you implemented something similar in your OSS? What are the scenarios in which it could succeed? Or fail?

A summary of RPA uses in an OSS suite

This is the sixth and final post in a series about the four styles of RPA (Robotic Process Automation) in OSS.

Over the last few days, we’ve looked into the following styles of RPA used in OSS, their implementation approaches, pros / cons and the types of automation they’re best suited to:

  1. Automating repeatable tasks – using an algorithmic approach to completing regular, mundane tasks
  2. Streamlining processes / tasks – using an algorithmic approach to assist an operator during a process or as an alternate integration technique
  3. Predefined decision support – guiding operators through a complex decision process
  4. As part of a closed-loop system – that provides a learning, improving solution

RPA tools can significantly improve the usability of an OSS suite, especially for end-to-end processes that jump between different applications (in the many ways mentioned in the above links).

However, there can be a tendency to use the power of RPAs to “solve all problems” (see this article about automating bad processes). That can introduce a life-cycle of pain for operators and RPA admins alike. Like any OSS integration, we should look to keep the design as simple and streamlined as possible before embarking on implementation (subtraction projects).

RPA in OSS feedback loops

This is the fifth in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The fourth of those styles is as part of a closed-loop system such as the one described here (that post includes a diagram of the OSS / DSS feedback loop).

This is the most valuable style of RPA because it represents a learning and improving system.

Note though that RPA tools only represent the DSS (Decision Support System) component of the closed-loop so they need to be supplemented with the other components. Also note that an RPA tool can only perform the DSS role in this loop if it can accept feedback (eg via an API) and modify its output in response. The RPA tool could then perform fully automated tasks (ie machine-to-machine) or semi-automated (Decision support for humans).
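Here’s a minimal sketch of what that DSS role could look like in code – a recommender that accepts outcome feedback via an API call and shifts its future output accordingly. The action names and the simple multiplicative update rule are purely illustrative.

```python
import random

class ClosedLoopDSS:
    """Sketch of an RPA tool in the DSS role: it recommends an action,
    receives feedback on the outcome, and adjusts future recommendations."""

    def __init__(self, actions):
        # Start with no preference between candidate actions
        self.weights = {action: 1.0 for action in actions}

    def recommend(self):
        actions, weights = zip(*self.weights.items())
        return random.choices(actions, weights=weights)[0]

    def feedback(self, action, success: bool):
        # Simple multiplicative update; a production loop would be richer
        self.weights[action] *= 1.1 if success else 0.9

dss = ClosedLoopDSS(["restart_card", "reroute_traffic", "dispatch_tech"])
action = dss.recommend()            # machine-to-machine, or operator-assist
dss.feedback(action, success=True)  # outcome fed back closes the loop
```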

Setting up this type of solution can be far more challenging than the earlier styles of RPA use, but the results are potentially the most powerful too.

Almost any OSS process could be enhanced by this closed-loop model. It’s just a case of whether the benefits justify the effort. Broad examples include assurance (network health / performance), fulfilment / activations, operations, strategy, etc.

The OSS / RPA parrot on the shoulder analogy

This is the fourth in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The third style is Decision Support. I refer to this style as the parrot on the shoulder because the parrot (RPA) guides the operator through their daily activities. It isn’t true automation but it can provide one of the best cost-benefit ratios of the different RPA styles. It can be a great blend of human-computer decision making.

OSS processes tend to have complex decision trees and need different actions performed depending on the information being presented. An example might be customer on-boarding, which includes credit and identity check sub-processes, followed by customer service order entry.

The RPA can guide the operator to perform each of the steps along the process including the mandatory fields to populate for regulatory purposes. It can also recommend the correct pull-down options to select so that the operator traverses the correct branch of the decision tree of each sub-process.
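A hedged sketch of that “parrot” is below – a tiny decision-tree walker that prompts the operator at each step and follows the correct branch. The nodes, prompts and next-step names are invented for illustration.

```python
# Decision-support sketch: the tree guides the operator, one prompt at a
# time, down the correct branch of each sub-process.
DECISION_TREE = {
    "credit_check": {
        "prompt": "Run credit check. Did the customer pass?",
        "yes": "identity_check",
        "no": "refer_to_credit_team",
    },
    "identity_check": {
        "prompt": "Verify 100 points of ID (mandatory for regulatory purposes). Verified?",
        "yes": "service_order_entry",
        "no": "request_more_documents",
    },
}

def guide(node="credit_check"):
    """Walk the operator down the tree until a leaf action is reached."""
    while node in DECISION_TREE:
        step = DECISION_TREE[node]
        answer = input(step["prompt"] + " [yes/no] ").strip().lower()
        node = step.get(answer, node)  # re-prompt on unrecognised input
    print(f"Next action: {node}")
```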

This functionality can allow organisations to deliver less training than they would without decision support. It can be highly cost-effective in situations where:

  • There are many inexperienced operators, especially if there is high staff turnover such as in NOCs, contact centres, etc
  • It is essential to have high process / data quality
  • The solution isn’t intuitive and it is easy to miss steps, such as a process that requires an operator to swivel-chair between multiple applications
  • There are many branches on the decision tree, especially when some of the branches are rarely traversed, even by experienced operators

In these situations the cost of training can far outweigh the cost of building an OSS (RPA) parrot on each operator’s shoulder.

Using RPA as an alternate OSS integration

This is the third in a series about the four styles of RPA (Robotic Process Automation) in OSS.

The second of those styles is Streamlining processes / tasks by following an algorithmic approach to simplify processes for operators.

These can be particularly helpful during swivel-chair processes where multiple disparate systems are partially integrated but each needs the same data (ie reducing the amount of duplicated data entry between systems). As well as streamlining the process it also improves data consistency rates.

The most valuable aspect of this style of RPA is that it can minimise the amount of integration between systems, thus potentially reducing solution maintenance into the future. The RPA can even act as the integration technique where an API isn’t available or documentation isn’t available (think legacy systems here).
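As a sketch of this style, the snippet below uses the pyautogui library to re-key data into a legacy GUI that has no API. The window coordinates and field names are hypothetical – in practice they’d be recorded against the real legacy screens.

```python
# Sketch of RPA as the integration layer for a legacy app with no API:
# the robot replays the operator's keystrokes, removing the swivel-chair
# double entry (and its typo rate).
import pyautogui

def rekey_service_order(order):
    """Re-enter data captured in the CRM into the legacy provisioning GUI."""
    field_locations = {"customer_id": (220, 140), "service_id": (220, 180)}
    for field, (x, y) in field_locations.items():
        pyautogui.click(x, y)                  # focus the legacy form field
        pyautogui.typewrite(str(order[field])) # type the value from the CRM
    pyautogui.press("enter")                   # submit the legacy form

rekey_service_order({"customer_id": "C-1001", "service_id": "S-555"})
```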

Onboarding outsiders as a new OSS business model

“The majority of these new services [such as healthcare, content and media, autonomous vehicles, smart homes etc.] require partnerships and will be based on a platform business model where the customer is not aware of who is providing which part of the service and, to be quite frankly honest, won’t care. All they will care about is the customer experience and the end-to-end delivery of their service that they have paid for. This is where the opportunity for the telco comes and we need to think beyond data!”
Aaron Boasman-Patel, here on TM Forum Inform.

Are your OSS tools already integrating with third-party services?

Do your catalog / orchestration engines already call upon microservices from outside your organisation? Perhaps it’s something as simple as providing a content service bundled with a service provider’s standard bitpipe service. Perhaps it’s also bundled with an internal-facing analytics service or an outward-facing shopping cart service.

A telco isn’t going to want to (or be able to) provide all of these services but can use partnerships and catalog items to allow each unique customer to build the bundled offer they want.
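As a sketch of that partnership model, here’s how a catalog item might orchestrate a bundle stitched together from internal and third-party microservices. Every endpoint below is a hypothetical placeholder.

```python
# Sketch of a catalog item composed of internal and partner microservices.
# The customer sees one service; the catalog hides who provides each part.
import requests

BUNDLE = {
    "bitpipe":  {"owner": "internal",  "endpoint": "https://api.telco.example/v1/bitpipe"},
    "content":  {"owner": "partner-a", "endpoint": "https://api.partner-a.example/v2/content"},
    "checkout": {"owner": "partner-b", "endpoint": "https://api.partner-b.example/v1/cart"},
}

def orchestrate(customer_id: str, selections: list):
    """Activate each component of the customer's chosen bundle in turn."""
    for item in selections:
        component = BUNDLE[item]
        response = requests.post(component["endpoint"],
                                 json={"customer": customer_id}, timeout=10)
        response.raise_for_status()  # fall out to manual handling on failure

orchestrate("CUST-042", ["bitpipe", "content"])
```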

This is where catalogs and microservices potentially represent a type of small-grid model. There are already many APIs from third-party providers, and the catalog / orchestration tools already exist to support the model. For many telcos, it will take a slight mindset shift – to embrace partnerships (ie to discard the “not invented here” thinking); to allow their many existing bit-pipe subscribers to sell and bill through the telco platform (ie embrace sell-through); and to build platforms and processes that allow for simple certification and onboarding of third parties.

If your current OSS isn’t already integrating with third-party services, is it on your roadmap? Then again, does it suit your proposed future business models?

Will it take open source to unlock OSS potential?

I have this sense that the other OSS, open source software, holds the key to the next wave of OSS (Operational Support Systems) innovation.

Why? Well, as yesterday’s post indicated (through Nic Brisbourne), “it’s hard to do big things in a small way.” I’d like to put a slight twist on that concept by saying, “it’s hard to do big things in a fragmented way.” [OSS is short for fragmented after all]

The skilled resources in OSS are so widely spread across many organisations (doing a lot of duplicated work) that we can’t reach a critical mass of innovation. Open source projects like ONAP represent a possible path to critical mass through sharing and augmenting code. They provide the foundation upon which bigger things can be built. If we don’t uplift the foundations across the whole industry quickly, we risk losing relevance (just ask our customers for their gripes list!).

BTW. Did you notice the news that six Linux Foundation open source networking projects have just merged into one? The six initial projects are FD.io, ONAP, OpenDaylight, OPNFV, PNDA and SNAS. The new project is called the LF Networking Fund (LFN).

But you may ask how organisations can protect their trade secrets whilst embracing open source innovation. Derek Sivers provides a fascinating story and line of thinking in “Why my code and ideas are public.” I really recommend having a read about Valerie.

Alternatively, if you’re equating open source with free / unprofitable, this link provides a list of highly successful organisations with significant open source contributions. There are plenty of creative ways to be rewarded for open source effort.

Comment below if you agree or disagree about whether we need OSS (open source software) to unlock the potential of OSS (Operational Support Systems) innovation.

It’s hard to do big things in a small way

“it’s hard to do big things in a small way, so I suspect incumbents have more of an advantage than they do in most industries.”
Nic Brisbourne

The quote above came from a piece about the rise of ConstructTech (ie building houses via means such as 3D printing). However, it is equally true of the OSS industry.

Our OSS tend to be behemoths, or at least the ones I work on seem to be. They’ve been developed over many years and have millions of sunk person-hours invested in them. And they’ve been customised to each client’s business like vines wrapped around a pillar. This gives enormous incumbency power and acts as a barrier to smaller innovators having a big impact in the world of OSS.

Want an example of it being hard to do big things in a small way? Ever heard of ONAP? AT&T is a massive telco with revenues to match; it’s committed to a more software-centric future and has developed millions of lines of code, yet it still needs the broader industry to help flesh out its vision for ONAP.

There are occasionally niche products developed, but it’s definitely hard to do big things in a small way. The small-grid analogy proposed earlier gives more room for the long tail of innovation, allowing smaller innovators to impact the larger ecosystem.

Write a comment below if you’d like to point out an outlier to this trend.

How “what if?” scenarios can halt a project

Let’s admit it; we’ve all worked on an OSS project that has gone into a period of extended stagnation because of a fear of the unknown. I call them “What if?” scenarios. They’re the scenarios where someone asks, “What if x happens?” and then the team gets side-tracked whilst finding an answer / resolution. The problem with “What if?” scenarios is that many of them will never happen, or will happen on such rare occasions that the impact will be negligible. They’re the opposite end of the Pareto Principle – they’re the 20% that take up the 80% of effort / budget / time. They need to be minimised and/or mitigated.

In some cases, the “what if?” questions come from a lack of understanding about the situation, the product suite and / or the future solution. That’s completely understandable because we can never predict all of the eventualities of an OSS project at the outset. That’s the OctopOSS at work – you think you have all of the tentacles under control, but another one always comes and whacks you on the back of the head.

The best way to reduce the “what if?” questions from getting out of control is to give stakeholders a sandpit / MVP / rapid-prototype / PoC environment to interact with.

The benefit of the prototype environment is that it delivers something tangible, something that stakeholders far and wide can interact with and test assumptions, usefulness, usability, boundary cases, scalability, etc. Stakeholders get to understand the context of the product and get a better feeling for what the end solution is going to look like. That way, many of the speculative “what ifs?” are bypassed and you start getting into the more productive dialogue earlier. The alternative, the creation of a document or discussion, can devolve into an almost endless set of “what-if” scenarios and opinions, especially when there are large groups of (sometimes militant) stakeholders.

The more dangerous “what if?” questions come from the experts. They’re the ones who demonstrate their intellectual prowess by finding scenario after scenario that nobody else is considering. I have huge admiration for those who can uncover potential edge cases, race conditions, loopholes in code, etc. The challenge is that they can be extremely hard to document, test for and circumvent. They’re also often very difficult to quantify or prove a likelihood of occurrence, thus consuming significant resources.

Rather than divert resources to resolving all these “what if?” questions one-by-one, I try to seek a higher-order “safety-net” solution. This might be in the form of exception handling, try-catch blocks, fall-out analysis reports, etc. Or, it might mean assigning a watching brief on the problem and handling it only if it arises in future.
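To illustrate the safety-net idea, here’s a minimal sketch in which the happy path is handled explicitly and every unforeseen “what if?” falls through to a catch-all that feeds a fall-out queue. The step functions are placeholders.

```python
import logging

logger = logging.getLogger("fallout")

def validate(order):
    # Placeholder for the real validation step
    if "id" not in order:
        raise ValueError("order missing id")

def activate(order):
    pass  # placeholder for the real activation step

def process_order(order, fallout_queue):
    """Handle the happy path explicitly; sweep unforeseen 'what ifs' into
    a fall-out queue for later analysis, rather than pre-building handlers
    for every conceivable edge case."""
    try:
        validate(order)
        activate(order)
    except Exception:  # the higher-order safety net
        logger.exception("Order %r fell out", order.get("id"))
        fallout_queue.append(order)  # surfaced via fall-out analysis reports
```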

The developer development analogy (to OSS investment)

In a post last week, we quoted Jim Rohn who said, “You can have more – if you become more.” Jim was surely speaking about personal growth, but we equated it to OSS needing to become more too, especially by looking beyond the walls of operations.

Your first thoughts may be, “Ohh, good idea, I’d love to get my hands on some of the CMO’s budget to invest in my OSS because I’ve already allocated all of the OSS budget (to operational imperatives no doubt).”

We all talk about getting more budget to do bigger and better things with our OSS. But apart from small windows during capital allocation cycles, “more budget” is rarely an option. Sooooo, I’d like to run a thought experiment past you today.

Rather than thinking of budget as CAPEX, what if we think of the existing (OPEX) budget as a “draw-down of utilisation” bucket? The question we have to ask is whether we are drawing down on the right stuff. If we’re drawing down to deliver on more operational initiatives, are we effectively pushing towards an asymptote? If we were to draw down to deliver something outside the (operations) box, are we increasing our chances of “becoming more?”

I equate it to “the developer development analogy.” Let’s say a developer is already proficient at 10 programming languages. If they allocate their yearly development budget to learning another programming language (number 11), are they really going to become a much better coder? What if instead, they chose to invest in an adjacency like user experience design or leadership or entrepreneurship, etc? Is that more likely to trigger a leap-frogging S-curve, rather than an asymptotic result, from their investment?

And, if we become more (ie our OSS is delivering more value outside the ops box), we can have more (ie investment coming in from benefiting business units). It’s tied to the law of reciprocity (which hopefully exists in your organisation rather than the law of scavenging other people’s cash).

This is clearly a contrarian and idealistic concept, so I’d love to hear whether you think it could be workable in your organisation.

The evolving complexity of RCA

Root cause analysis (RCA) is one of the great challenges of OSS. As you know, it aims to identify the probable cause of an alarm-storm, where all alarms are actually related to a single fault.

In the past, my go-to approach was to start with a circuit hierarchy-based algorithm. If you had an awareness of the hierarchy of circuits (usually via inventory) and a lower-order fault occurred (eg Loss of Signal on a transmission link caused by a cable break), then you could suppress all higher-order alarms (ie from bearers or tributaries that were dependent upon the L1 link). That worked well in the fixed networks of the distant past (think SDH / PDH), and the approach was repeatable between different customer environments.
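Here’s a minimal sketch of that circuit-hierarchy style of suppression, with an invented hierarchy of the kind that would be pulled from inventory.

```python
# Sketch of circuit-hierarchy RCA: suppress alarms on circuits that ride
# over a failed lower-order link. The hierarchy would come from inventory.
PARENT = {            # child circuit -> the circuit it rides over
    "trib-001": "bearer-01",
    "trib-002": "bearer-01",
    "bearer-01": "fibre-07",
}

def ancestors(circuit):
    while circuit in PARENT:
        circuit = PARENT[circuit]
        yield circuit

def correlate(alarms):
    """Split alarms into probable root causes and suppressed symptoms."""
    alarmed = set(alarms)
    roots, suppressed = [], []
    for alarm in alarms:
        if any(parent in alarmed for parent in ancestors(alarm)):
            suppressed.append(alarm)  # a lower-order circuit explains it
        else:
            roots.append(alarm)
    return roots, suppressed

# A cable break on fibre-07 storms alarms up the hierarchy:
print(correlate(["trib-001", "trib-002", "bearer-01", "fibre-07"]))
# -> (['fibre-07'], ['trib-001', 'trib-002', 'bearer-01'])
```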

Packet-switching data networks changed that to an extent, because a data service could traverse any number of links, on-net or off-net (ie leased links). The circuit hierarchy approach was still applicable, but needed to be supplemented with other rules.

Now virtualised networking is changing it again. RCA loses a little relevance in the virtualised layer. Workloads and resource allocations are dynamic and transient, making them less suited to fixed algorithms. The objective now becomes self-healing – if a failure is identified, failed resources are spun down and new ones spun up to take the load. The circuit hierarchy approach loses relevance, but perhaps infrastructure hierarchy still remains useful. Cable breaks, server melt-downs and hanging controller applications are all examples of root causes that will cause problems in higher layers.

Rather than fixed rules, machine-based pattern-matching is the next big hope for coping with dynamically changing networks.

The number of layers and complexity of the network seems to be ever increasing, and with it RCA becomes more sophisticated…. If only we could evolve to simpler networks rather than more complex ones. Wishful thinking?

Funding beyond the walls of operations

“You can have more – if you become more.”
Jim Rohn.

I believe that this is as true of our OSS as it is of ourselves.

Many people use the name Operational Support Systems to put an electric fence around our OSS, limiting their use to just operational activities. However, the reach, awareness and power of what they (we) offer goes far beyond that.

We have powerful insights to deliver to almost every department in an organisation – beyond just operations and IT. But first we need to understand the challenges and opportunities faced by those departments so that we can give them something useful.

That doesn’t necessarily mean expensive integrations but unlocking the knowledge that resides in our data.

Looking for additional funding for your OSS? Start by seeking ways to make it more valuable to more people… or even one step further back – seeking to understand the challenges beyond the walls of operations.

The double-edged sword of OSS/BSS integrations

“…good argument for a merged OSS/BSS, wouldn’t you say?”
John Malecki

The question above was posed in relation to Friday’s post about the currency and relevance of OSS compared with research reports, analyses and strategic plans as well as how to extend OSS longevity.

This is a brilliant, multi-faceted question from John. My belief is that it is a double-edged sword.

Out of my experiences with many OSS, one product stands out above all the others I’ve worked with. It’s an integrated suite of Fault Management, Performance Management, Customer Management, Product / Service Management, Configuration / orchestration / auto-provisioning, Outside Plant Management / GIS, Traffic Engineering, Trouble Ticketing, Ticket of Work Management, and much more, all tied together with the most elegant inventory data model I’ve seen.

Being a single vendor solution built on a relational database, the cross-pollination (enrichment) of data between all these different modules made it the most powerful insight engine I’ve worked with. With some SQL skills and an understanding of the data model, you could ask it complex cross-domain questions quite easily because all the data was stored in a single database. That edge of the sword made a powerful argument for a merged OSS/BSS.
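To show the style of cross-domain question a single shared database makes easy, here’s a toy example. The schema is invented purely for illustration – the real product’s data model was far richer.

```python
# Illustrative only: an invented two-table schema showing how fault and
# inventory/customer data in one database makes cross-domain queries easy.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE circuits (id TEXT, customer TEXT);
    CREATE TABLE tickets  (id TEXT, circuit_id TEXT, opened DATE);
    INSERT INTO circuits VALUES ('CCT-1', 'Acme'), ('CCT-2', 'Beta Co');
    INSERT INTO tickets  VALUES ('T-1', 'CCT-1', '2018-01-03'),
                                ('T-2', 'CCT-1', '2018-01-20');
""")

# "Which customers raised the most trouble tickets?" becomes a simple join
# when fault, inventory and customer data share one model.
for row in conn.execute("""
        SELECT c.customer, COUNT(t.id) AS tickets
        FROM tickets t JOIN circuits c ON c.id = t.circuit_id
        GROUP BY c.customer ORDER BY tickets DESC"""):
    print(row)  # -> ('Acme', 2)
```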

Unfortunately, the level of cross-referencing that made it so powerful also made it really challenging to build an initial data set to facilitate all modules being inter-operable. By contrast, an independent inventory management solution could just pull data out of each NMS / EMS under management, massage the data for ingestion and then you’d have an operational system. The abovementioned solution also worked this way for inventory, but to get the other modules cross-referenced with the inventory required engineering rules, hand-stitched spreadsheets, rules of thumb, etc. Maintaining and upgrading also became challenges after the initial data had been created. In many cases, the clients didn’t have all of the data that was needed, so a data creation exercise needed to be undertaken.

If I had the choice, I would’ve done more of the cross-referencing at data level (eg via queries / reports) rather than entwining the modules together so tightly at application level… except in the most compelling cases. It’s an example of the chess-board analogy.

If given the option between merged (tightly coupled) and loosely coupled, which would you choose? Do you have any insights or experiences to share on how you’ve struck the best balance?

Keeping the OSS executioner away

“With the increasing pace of change, the moment a research report, competitive analysis, or strategic plan is delivered to a client, its currency and relevance rapidly diminishes as new trends, issues, and unforeseen disrupters arise.”
Soren Kaplan

By the same token as the quote above, does it follow that the currency and relevance of an OSS rapidly diminishes as soon as it is delivered to a client?

In the case of research reports, analyses and strategic plans, currency diminishes because the static data sets upon which they’re built are also losing currency. That’s not the case for an OSS – they are data collection and processing engines for streaming (ie constantly refreshing) data. [As an aside, relevance can still decrease if data quality is steadily deteriorating, irrespective of its currency. Meanwhile, currency can decrease if the ever-expanding pool of OSS data becomes so large as to be unmanageable, or if responsiveness is usurped by newer data processing technologies.]

However, as with research reports, analyses and strategic plans, the value of an OSS is not so much related to the data collected, but the questions asked of, and answers / insights derived from, that data.

Apart from the asides mentioned above, the currency and relevance of OSS only diminish as a result of new trends, issues and disrupters if new questions cannot be, or are not being, asked of them.

You’ll recall from yesterday’s post that, “An ability to use technology to manage, interpret and visualise real data in a client’s data stores, not just industry trend data,” is as true of OSS tools as it is of OSS consultants. I’m constantly surprised that so few OSS are designed with intuitive, flexible data interrogation tools built in. It seems that product teams are happy to delegate that responsibility to off-the-shelf reporting tools or leave it up to the client to build their own.

A deeper level of OSS connection

Yesterday we talked about the cuckoo-bird analogy and how it was preventing telcos from building more valuable platforms on top of their capital-intensive network platforms. Thanks to Dean Bubley, that post gave examples of how the most successful platform plays were platforms on platforms (eg Microsoft Office on Windows, iTunes on iOS, phones on physical networks, etc).

During the Internet age, the telcos have found it difficult to build that second layer of platform on top of their data networks – the layer that would keep the cuckoo chicks out of the nest.

Telcos are great at helping customers to make connections. OSS are great at establishing and maintaining those connections. But there’s a deeper level of connection waiting for us to support – helping the telcos’ customers to make valuable connections that they wouldn’t otherwise be able to make by themselves.

In the past, telcos provided yellow pages directories to help along these lines. The internet and social media have marginalised the value of this telco-owned asset in recent years.

But the telcos still own massive subscriber bases (within our OSS / BSS suites). How can our OSS / BSS facilitate a deeper level of connection, providing the telcos’ customers with valuable connections that they would not have otherwise made?