OSS data Ponzi scheme

The more data you have, the more data you need to understand the data you have. You are engaged in a data Ponzi scheme… Could it be in service assurance and IT ops that more data equals less understanding?
Phil Tee
in the opening address at the AIOps Symposium.

Interesting viewpoint, right?

Given that our OSS hold shed-loads of data, Phil is saying we need lots of data to understand that data. Well, yes… and possibly no.

I have a theory that data alone doesn’t talk, but it’s great at answering questions. You could say that you need lots of data, although, arguing semantics, I’d say you actually need lots of knowledge / awareness to ask great questions. Perhaps that knowledge / awareness comes from seeding machine-led analysis tools (or our data scientists’ brains) with lots of data.

The more data you have, the more noise you need to find the signal amongst. That means you have to ask more questions of your data if you want to drive a return that justifies the cost of collecting and curating it all. Machine-led analytics certainly assist us in handling the volume and velocity of data our OSS create / collect, but that’s just asking the same question/s over and over. There’s almost no end to the questions that can be asked of our data, just a limit on the time in which we can ask them.

Does that make data a Ponzi scheme? A Ponzi scheme pays profits to earlier investors using funds obtained from newer investors. Eventually it must collapse because the scheme runs out of new investors to fund those profits. A data Ponzi scheme pays out insights on earlier (seed) data by obtaining new (streaming) data – and the stream of data reaching an OSS never runs out. But if we need to invest heavily in our data (eg AI / ML, etc), at what point in the investment lifecycle will we stop creating new insights?

The OSS self-driving vehicle

I was lucky enough to get some time with a friend recently, a friend who’s running a machine-learning network assurance proof-of-concept (PoC).

He’s been really impressed with the results coming out of the PoC. However, one of the really interesting factors he’s been finding is how frequently BAU (business as usual) changes in the OSS data (eg changes in naming conventions, topologies, etc) impact the results. Little changes made by upstream systems effectively invalidated the baselines that the machine-learning engines had identified to key in on. Those little changes meant the engine had to re-baseline / re-learn to build back up to previous insight levels. Alternatively, to avoid invalidating the baseline, all of the data collected prior to each BAU change would have to be re-normalised.
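
To make that re-baselining problem a little more concrete, here’s a toy Python sketch – my own illustration, not from my friend’s PoC, and the naming rules and metrics are invented. If the learned baseline is keyed on raw device names, a BAU renaming exercise orphans the history; normalising identifiers to a stable key softens the blow.

```python
# Toy illustration: a baseline keyed on raw device names is orphaned by a
# BAU rename. Normalising to a stable key preserves the learned history.
import re

baseline = {}  # learned per-device baseline, keyed on a normalised identifier

def normalise(device_name: str) -> str:
    """Map vendor / ops naming variants to a stable key (hypothetical rules)."""
    name = device_name.lower().strip()
    name = re.sub(r"[\s_]+", "-", name)             # unify separators
    name = re.sub(r"^(syd|sydney)-", "syd-", name)  # unify a site prefix
    return name

def learn(device_name: str, metric: float) -> None:
    key = normalise(device_name)
    count, mean = baseline.get(key, (0, 0.0))
    baseline[key] = (count + 1, mean + (metric - mean) / (count + 1))

def is_anomalous(device_name: str, metric: float, tolerance: float = 0.2) -> bool:
    key = normalise(device_name)
    if key not in baseline:
        return True  # unknown device: forced back into re-learning
    _, mean = baseline[key]
    return abs(metric - mean) > tolerance * max(mean, 1e-9)

learn("SYDNEY_CORE01", 42.0)
# After a BAU renaming exercise the same device reports as "syd core01".
print(is_anomalous("syd core01", 43.0))   # False - history survives the rename
print(is_anomalous("MEL_CORE07", 43.0))   # True - genuinely new device
```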

That got me wondering whether DevOps (or any other high-change environment) might actually hinder our attempts to get machine-led assurance optimisation. But more to the point, does constant change (at all levels of a telco business) hold us back from reaching our aim of closed-loop / zero-touch assurance?

Just like the proverbial self-driving car, will we always need someone at the wheel of our OSS just in case a situation arises that the machines haven’t seen before and/or can’t handle? How far into the future will it be before we have enough trust to take our hands off the OSS wheel and let the machines drive closed-loop processes without observation by us?

Optimisation Support Systems

We’ve heard of OSS being an acronym for operational support systems, operations support systems, even open source software. I have a new one for you today – Optimisation Support Systems – one that exists for no purpose other than to drive a mindset shift.

I think we have to transition from “expectations” in a hype sense to “expectations” in a goal sense. NFV is like any technology; it depends on a business case for what it proposes to do. There’s a lot wrong with living up to hype (like, it’s impossible), but living up to the goals set for a technology is never unrealistic. Much of the hype surrounding NFV was never linked to any real business case, any specific goal of the NFV ISG.”
Tom Nolle
in his blog here.

This is a really profound observation (and entire blog) from Tom. Our technology, OSS included, tends to be surrounded by “hyped” expectations – partly from our own optimistic desires, partly from vendor sales pitches. It’s far easier to build our expectations from hype than to actually understand and specify the goals that really matter. Goals that are end-to-end in manner and preferably quantifiable.

When embarking on a technology-led transformation, our aim is to “make things better,” obviously. A list of hundreds of functional requirements might help. However, having an up-front, clear understanding of the small number of use cases you’re optimising for tends to define much clearer goal-driven expectations.

Expanding your bag of OSS tricks

Let me ask you a question – when you’ve expanded your bag of tricks that help you to manage your OSS, where have they typically originated?

By reading? By doing? By asking? Through mentoring? Via training courses?
Relating to technical? People? Process? Product?
Operations? Network? Hardware? Software?
Design? Procure? Implement / delivery? Test? Deploy?
By retrospective thinking? Creative thinking? Refinement thinking?
Other?

If you were to highlight the questions above that are most relevant to the development of your bag of tricks, how much coverage does your pattern show?

There are so many facets to our OSS (ie tentacles on the OctopOSS), aren’t there? We have to have a large bag of tricks. Not only that, we need to be constantly adding new tricks too, right?

I tend to find that our typical approaches to OSS knowledge transfer cover only a small subset (think about discussion topics at OSS conferences that tend to just focus on the technical / architectural)… yet don’t align with how we (or maybe just I) have developed capabilities in the past.

The question then becomes, how do we facilitate the broader learnings required to make our OSS great? To introduce learning opportunities for ourselves and our teams across vaguely related fields such as project management, change management, user interface design, process / workflows, creative thinking, etc, etc.

Just in time design

It’s interesting how we tend to go in cycles. Back in the early days of OSS, the network operators tended to build their OSS from the ground up. Then we went through a phase of using Commercial off-the-shelf (COTS) OSS software developed by third-party vendors. We now seem to be cycling back towards in-house development, but with collaboration that includes vendors and external assistance through open-source projects like ONAP. Interesting too how Agile fits in with these cycles.

Regardless of where we are in the cycle for our OSS, as implementers we’re always challenged with finding the Goldilocks amount of documentation – not too heavy, not too light, but just right.

The Agile Manifesto espouses, “working software over comprehensive documentation.” Sounds good to me! It perplexes me that some OSS implementations are bogged down by lengthy up-front documentation phases, especially if we’re basing the solution on COTS offerings. These can really stall the momentum of a project.

Once a solution has been selected (which often does require significant analysis and documentation), I’m more of a proponent of getting the COTS software stood up, even if only in a sandpit environment. This is where just-in-time (JIT) documentation comes into play. Rather than having every aspect of the solution documented (eg process flows, data models, high availability models, physical connectivity, logical connectivity, databases, etc, etc), we only need enough documentation for collaborative stakeholders to do their parts (eg IT to set up hardware / hosting, networks to set up physical connectivity, vendor to provide software, integrator to perform build, etc) to stand up a vanilla solution.

Then it’s time to start building trial scenarios through the solution. There’s usually quite a bit of trial and error in this stage, as we seek to optimise the scenarios for the intended users. Then we add a few more scenarios.

There’s little point trying to document the solution in detail before a scenario is trialled, but some documentation can be really helpful. For example, if the scenario is to build a small sub-section of a network, then draw up some diagrams of that sub-network that include the intended naming conventions for each object (eg device, physical connectivity, addresses, logical connectivity, etc). That allows you to determine whether there are unexpected challenges with naming conventions, data modelling, process design, etc. There are always unexpected challenges that arise!
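
As a tiny illustration of what I mean, here’s the sort of throwaway check I might script during a trial scenario – validating the intended names from the sub-network diagrams against the proposed conventions before any data hits the OSS. The patterns and object names below are purely hypothetical.

```python
# A minimal sketch: validate the intended naming conventions for a trial
# sub-network against simple patterns before loading anything into the OSS.
import re

NAMING_RULES = {
    "device":     r"^[A-Z]{3}-(CORE|EDGE|AGG)-\d{2}$",        # eg SYD-CORE-01
    "port":       r"^[A-Z]{3}-(CORE|EDGE|AGG)-\d{2}:\d+/\d+$",
    "connection": r"^[A-Z]{3}-\d{2}--[A-Z]{3}-\d{2}-F\d{3}$",
}

trial_objects = [
    ("device", "SYD-CORE-01"),
    ("device", "SYD_CORE_02"),        # wrong separator - will be flagged
    ("port", "SYD-CORE-01:1/4"),
    ("connection", "SYD-01--MEL-01-F001"),
]

for obj_type, name in trial_objects:
    if not re.match(NAMING_RULES[obj_type], name):
        print(f"Naming convention clash: {obj_type} '{name}'")
```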

I figure you’re better off documenting the real challenges than theorising on the “what if?” challenges, which is what often happens with up-front documentation exercises. There are always brilliant stakeholders who can imagine millions of possible challenges, but these often bog the design phase down.

With JIT design, as the solution evolves, the documentation can evolve too… if there is an ongoing reason for its existence (eg as a user guide, for a test plan, as a training cheat-sheet, a record of configuration for fault-finding purposes, etc).

Interestingly, the first value in the Agile Manifesto is, “individuals and interactions over processes and tools.” This is where the COTS vs in-house-dev question comes back into play. When using COTS software, individuals, interactions and processes are partly driven by what the tools support. COTS functionality constrains us, but we can still use Agile configuration and customisation to optimise our solution for our customers’ needs (where cost-benefit permits).

Having a working set of vanilla tools allows our customers to get a much better feel for what needs to be done rather than trying to understand the intent of up-front design documentation. And that’s the key to great customer outcomes – having the customers knowledgeable enough about the real solution (not hypothetical solutions) to make the most informed decisions possible.

Of course there are always challenges with this JIT design model too, especially when third-party contracts are involved!

An OSS data creation brain-fade

Many years ago, I made a data migration blunder that slowed a production OSS down to a crawl. Actually, less than a crawl. It almost became unusable.

I was tasked with creating a production database of a carrier’s entire network inventory, including data migration for a bunch of Nortel Passport ATM switches (yes, it was that long ago).

  • There were around 70 of these devices in the network
  • 14 usable slots in each device (ie slots not reserved for processing, resilience, etc)
  • Depending on the card type there were different port densities, but let’s say there were 4 physical ports per slot
  • Up to 2,000 VPIs per port
  • Up to 65,000 VCIs per VPI
  • The customer was running SPVC

To make it easier for the operator to create a new customer service, I thought I should script-create every VPI/VCI on every port on every device. That would allow the operator to just select any available VPI/VCI from within the OSS when provisioning (or later, auto-provisioning) a service.

There was just one problem with this brainwave. For this particular OSS, each VPI/VCI represented a logical port that became an entry alongside physical ports in the OSS’s ports table… You can see what’s about to happen, can’t you? If only I could’ve….

My script auto-created nearly 510 billion VCI logical ports – over half a trillion records in the ports table once you also include the VPIs and physical ports… in a production database. And that was just the ATM switches!
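
For the record, the back-of-envelope arithmetic from the dot points above (using the assumed four ports per slot) looks like this:

```python
# Working through the (approximate) arithmetic from the figures above.
devices = 70
usable_slots = 14
ports_per_slot = 4            # assumed average card density
vpis_per_port = 2_000
vcis_per_vpi = 65_000

physical_ports = devices * usable_slots * ports_per_slot   # 3,920
vpis = physical_ports * vpis_per_port                       # 7,840,000
vcis = vpis * vcis_per_vpi                                   # 509,600,000,000

print(f"{vcis:,} VCI logical ports")                         # ~510 billion rows
print(f"{physical_ports + vpis + vcis:,} total ports-table records")
```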

So instead of making life easier for the operators, it actually brought the OSS’s database to a near stand-still. Brilliant!!

Luckily for me, it was a greenfields OSS build and the production database was still being built up in readiness for operational users to take the reins. I was able to strip all the ports out and try again with a less idiotic data creation plan.

The reality was that there’s no way the customer could ever have used the 2,000 x 65,000 VPI/VCI groupings I’d created on every single physical port. Put it this way, there were far fewer than 130 million services across all service types across all carriers across that whole country!

Instead, we just changed the service activation process to manually add new VPI/VCIs into the database on demand, as one of the precursor activities when creating each new customer service.

From that experience, I reverted to the Minimum Viable Data (MVD) mantra, and I’ve stuck with it ever since.

Are OSS business tools or technical tools?

I’d like to get your opinion on this question – are OSS business tools or technical tools?

We can say that BSS are as the name implies – business support systems.
We can say that NMS / EMS / NEMS are network management tools – technical tools.

The OSS layer fits between those two layers. It’s where the business and technology worlds combine (collide??).
[Image: BSS / OSS / NMS / EMS / NE layers]

If we use the word Operations / Operational to represent the "O" in OSS, it might imply that they exist to help operate technology. Many people in the industry undoubtedly see OSS as technical, operational tools. If I look back to when I first started on OSS, I probably had this same perception – I primarily worked at the OSS / NMS interface in the early days.

But change the “O” to operationalisation and it changes the perspective slightly. It encourages you to see that the technology / network is the means via which business models can be implemented. It’s our OSS that allow operationalisation to happen.

So, let me re-ask the question – are OSS business tools or technical tools?

They’re both right? And therefore as OSS operators / developers / implementers, we need to expand our vision of what OSS do and who they service… which helps us get to Simon Sinek’s Why for OSS.

OSS of the past probably tended to be the point of collision and friction between business and tech groups within an organisation. Some modern OSS architectures give me the impression of being meet-in-the-middle tools, which will hopefully bring more collaboration between fiefdoms. Time will tell.

Unexpected OSS indicators

Yesterday’s post talked about using customer contacts as a real-time proxy metric for friction in the business, which could also be a directional indicator for customer experience.

That got me wondering what other proxy metrics might be used to provide predictive indicators of what’s happening in your network, OSS and/or BSS. Apparently, "Colt aims to enhance its service assurance capabilities by taking non-traditional data (signal strength, power, temperature, etc.) from network elements (cards, links, etc.) to predict potential faults," according to James Crawshaw here on LightReading.

What about environmental metrics like humidity, temperature, movement, power stability/disturbance?
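
To sketch the idea (with entirely made-up readings and thresholds), even a crude drift check on non-traditional telemetry like card temperature or receive power could raise an early warning long before a traditional alarm fires:

```python
# A hedged sketch of the "non-traditional data" idea: watch proxy telemetry
# for drift away from its own history and raise an early-warning flag.
from statistics import mean, stdev

def drift_score(history: list, latest: float) -> float:
    """How many standard deviations the latest reading sits from its history."""
    if len(history) < 2 or stdev(history) == 0:
        return 0.0
    return abs(latest - mean(history)) / stdev(history)

card_temp_history = [41.2, 41.5, 40.9, 41.1, 41.4]   # degrees C (invented)
rx_power_history = [-8.1, -8.0, -8.2, -8.1, -8.0]    # dBm (invented)

temp_score = drift_score(card_temp_history, 46.8)
power_score = drift_score(rx_power_history, -9.4)

if temp_score > 3 or power_score > 3:
    print(f"Early-warning: temp drift {temp_score:.1f} sigma, "
          f"power drift {power_score:.1f} sigma")
```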

I’d love to hear about what proxies you use or what unexpected metrics you’ve found to have shone the spotlight on friction in your organisation.

Automated testing and new starters

Can you guess what automated OSS testing and OSS new starters have in common?

Both are best front-loaded.

As a consultant, I’ve been a new starter on many occasions, as well as being assigned new starters on probably even more occasions. From both sides of that fence, it’s far more effective to front-load the new starter with training / knowledge to bring them up to speed / effectiveness, as soon as possible. Far more useful from the perspective of quality, re-work, self-sufficiency, etc.

Unfortunately, this is easier said than done. For a start, new starters are generally only required because the existing team is completely busy. So busy that it’s really hard to drop everything to make time to deliver up-front training. It reminds me of this diagram.
[Image: "We're too busy" cartoon]

Front-loading of automated testing is similar… it takes lots of time to get it operational, but once in place it allows the team to focus on business outcomes faster.

In both cases, front-loading leads to a decrease in efficiency over the first few days, but tends to justify the effort soon thereafter. What other examples of front-loading can you think of in OSS?

The OSS dart-board analogy

The dartboard, by contrast, is not remotely logical, but is somehow brilliant. The 20 sector sits between the dismal scores of five and one. Most players aim for the triple-20, because that’s what professionals do. However, for all but the best darts players, this is a mistake. If you are not very good at darts, your best opening approach is not to aim at triple-20 at all. Instead, aim at the south-west quadrant of the board, towards 19 and 16. You won’t get 180 that way, but nor will you score three. It’s a common mistake in darts to assume you should simply aim for the highest possible score. You should also consider the consequences if you miss.”
Rory Sutherland
on Wired.

When aggressive corporate goals and metrics are combined with brilliant solution architects, we tend to aim for triple-20 with our OSS solutions, don’t we? The problem is, when it comes to delivery, we don’t tend to have the laser-sharp precision of a professional darts player, do we? No matter how experienced we are, there tend to be hidden surprises – some technical, some personal (or should I say inter-personal?), some contractual, etc – that deflect our aim.

The OSS dart-board analogy asks the question about whether we should set the lofty goals of a triple-20 [yellow circle below], with high risk of dismal results if we miss (think too about the OSS stretch-goal rule); or whether we’re better to target the 19/16 corner of the board [blue circle below] that has scaled back objectives, but a corresponding reduction in risk.

[Image: OSS dart-board analogy – the triple-20 target (yellow circle) vs the 19/16 corner (blue circle)]
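
If you want to test Rory’s claim for yourself, here’s a toy Monte Carlo – simplified board geometry and an assumed aiming scatter (sigma, in mm), so nothing authoritative – that compares the expected score per dart when aiming at the triple-20 versus the safer south-west quadrant:

```python
# Toy Monte Carlo: expected score per dart for two aiming strategies,
# given a Gaussian aiming error. Board geometry is approximate.
import math
import random

SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def score(x: float, y: float) -> int:
    """Score of a dart landing at (x, y) mm from the board centre."""
    r = math.hypot(x, y)
    if r <= 6.35:
        return 50                      # inner bull
    if r <= 15.9:
        return 25                      # outer bull
    if r > 170:
        return 0                       # off the board
    angle = math.degrees(math.atan2(x, y)) % 360   # 0 degrees = straight up (the 20)
    sector = SECTORS[int((angle + 9) // 18) % 20]
    if 99 <= r <= 107:
        return 3 * sector              # triple ring
    if 162 <= r <= 170:
        return 2 * sector              # double ring
    return sector

def expected_score(aim_x: float, aim_y: float, sigma: float, throws: int = 100_000) -> float:
    total = 0
    for _ in range(throws):
        total += score(random.gauss(aim_x, sigma), random.gauss(aim_y, sigma))
    return total / throws

random.seed(42)
sigma = 30  # an amateur's scatter in mm (assumed)
triple_20 = expected_score(0, 103, sigma)       # straight up, mid triple ring
sw_corner = expected_score(-32, -98, sigma)     # the treble-19, south-west quadrant
print(f"Aim triple-20: {triple_20:.1f} per dart, aim 19/16 corner: {sw_corner:.1f}")
```

With a big enough scatter, the safer corner tends to come out ahead; with a professional’s precision, the triple-20 wins – which is exactly the trade-off we face when scoping an OSS delivery.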

Roland Leners posed the following brilliant question, "What if we built OSS and IT systems around people’s willingness to change instead of against corporate goals and metrics? Would the corporation be worse off at the end?" in response to a recent post called, "Did we forget the OSS operating model?"

There are too many facets to Roland’s question to count, but I suspect that in many cases the corporate goals / metrics are akin to the triple-20 focus, whilst the team’s willingness to change aligns with the 19/16 corner. And that is bound to reduce delivery risk.

I’d love to hear your thoughts!!

Dematerialisation of OSS

In 1972, the Club of Rome in its report The Limits to Growth predicted a steadily increasing demand for material as both economies and populations grew. The report predicted that continually increasing resource demand would eventually lead to an abrupt economic collapse. Studies on material use and economic growth show instead that society is gaining the same economic growth with much less physical material required. Between 1977 and 2001, the amount of material required to meet all needs of Americans fell from 1.18 trillion pounds to 1.08 trillion pounds, even though the country’s population increased by 55 million people. Al Gore similarly noted in 1999 that since 1949, while the economy tripled, the weight of goods produced did not change.
Wikipedia on the topic of Dematerialisation.

The weight of OSS transaction volumes appears to be increasing year on year as we add more stuff to our OSS. The touchpoint explosion is amplifying this further. Luckily, our platforms / middleware, compute, networks and storage have all been scaling as well so the increased weight has not been as noticeable as it might have been (even though we’ve all worked on OSS that have been buckling under the weight of transaction volumes right?).

Does it also make sense that when there is an incremental cost per transaction (eg via the increasingly prevalent cloud or "as a service" offerings), we pay closer attention to transaction volumes because there is a greater perception of cost to us? But not for "internal" transactions, where there is little perceived incremental cost?

But it’s not so much the transaction processing volumes that are the problem directly. It’s more by implication. For each additional transaction there’s the risk of a hand-off being missed or mis-mapped or slowing down overall activity processing times. For each additional transaction type, there’s additional mapping, testing and regression testing effort as well as an increased risk of things going wrong.

Do you measure transaction flow volumes across your entire OSS suite? Does it provide an indication of where efficiency optimisation (ie dematerialisation) could occur and guide your re-factoring investments? Does it guide you on process optimisation efforts?
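
If you don’t measure them today, the starting point needn’t be elaborate. Here’s a minimal sketch (with invented system and workflow names) that simply counts hand-off volumes from an exported activity log – the busiest hand-offs being the first candidates for dematerialisation:

```python
# A minimal sketch of measuring transaction flow volumes across hand-offs,
# assuming an activity log of (workflow, from_system, to_system) tuples.
from collections import Counter

activity_log = [
    ("connect_service", "CRM", "Order Mgmt"),
    ("connect_service", "Order Mgmt", "Inventory"),
    ("connect_service", "Inventory", "Activation"),
    ("connect_service", "Order Mgmt", "Inventory"),   # a re-issued hand-off
    ("modify_service", "CRM", "Order Mgmt"),
    ("modify_service", "Order Mgmt", "Activation"),
]

handoff_volumes = Counter((src, dst) for _, src, dst in activity_log)

# The busiest hand-offs carry the most mapping, testing and fall-out risk.
for (src, dst), count in handoff_volumes.most_common():
    print(f"{src} -> {dst}: {count}")
```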

Getting lost in the flow of OSS

The myth is that people play games because they want to avoid challenging work. The reality is, people play games to engage in well-designed, challenging work. The only thing they are avoiding is poorly designed work. In essence, we are replacing poorly designed work with work that provides a more meaningful challenge and offers a richer sense of progress.
And we should note at this point that just because something is a game, it doesn’t mean it’s good. As we’ll soon see, it can be argued that everything is a game. The difference is in the design.
Really good games have been ruthlessly play-tested and calibrated to the point where achieving a state of flow is almost guaranteed for many. Play-testing is just another word for iterative development, which is essentially the conducting of progressive experiments.”
Dr Jason Fox
in his book, “The Game Changer.”

Reflect with me for a moment – when it comes to your OSS activities, in which situations do you consistently get into a state of flow?

For me, it’s in quite a few different scenarios, but one in particular stands out – building up a network model in an inventory management tool. This activity starts with building models / patterns of devices, services, connections, etc, then using the models to build a replica of the network, either manually or via data migration, within the inventory tool(s). I can lose complete track of time when doing this task. In fact I have almost every single time I’ve performed this task.

Whilst not being much of a gamer, I suspect it’s no coincidence that by far my favourite video game genre is empire-building strategy games like the Civilization series. Back in the old days, I could easily get lost in them for hours too. Could we draw a comparison from getting that same sense of achievement, seeing a network (of devices in OSS, of cities in the empire strategy games) grow rapidly as a result of your actions?

What about fans of first-person shooter games? I wonder whether they get into a state of flow on assurance activities, where they get to hunt down and annihilate every fault in their terrain?

What about fans of horse grooming and riding games? Well…. let’s not go there. 🙂

Anyway, enough of all these reflections and musings. I would like to share three concepts with you that relate to Dr Fox’s quote above:

  1. Gamification – I feel that there is MASSIVE scope for gamification of our OSS, but I’ve yet to hear of any OSS developers using game design principles
  2. Play-testing – How many OSS are you aware of that have been, "ruthlessly play-tested and calibrated?" In almost every OSS situation I’ve seen, as soon as functionality meets requirements, we stop and move on to the next feature. We don’t pause and try a few more variants to see which is most likely to result in a great design, refining the solution, "to the point where achieving a state of flow is almost guaranteed for many."
  3. Richer Progress – How many of our end-to-end workflows are designed with, “a richer sense of progress” in mind? Feedback tends to come through retrospective reporting (if at all), rarely through the OSS game-play itself. Chances are that our end-to-end processes actually flow through multiple un-related applications, so it comes back to clever integration design to deliver more compelling feedback. We simply don’t use enough specialist creative designers in OSS

Training network engineers to code, not vice versa

Did any of you read the Light Reading link in yesterday’s post about Google creating automated network operations services? If you haven’t, it’s well worth a read.

If you did, then you may’ve also noticed a reference to Finland’s Elisa selling its automation smarts to other telcos. This is another interesting business model disruption for the OSS market, although I’ll reserve judgement on how disruptive it will be until Elisa sells to a few more operators.

What did catch my eye in the Elisa article (again by Light Reading’s Iain Morris) is this paragraph:
Automation has not been hassle-free for Elisa. Instilling a software culture throughout the organization has been a challenge, acknowledges [Kirsi] Valtari. Rather than recruiting software expertise, Elisa concentrated on retraining the people it already had. During internal training courses, network engineers have been taught to code in Python, a popular programming language, and to write algorithms for a self-optimizing network (or SON). “The idea was to get engineers who were previously doing manual optimization to think about automating it,” says Valtari. “These people understand network problems and so it is a win-win outcome to go down this route.”

It provides a really interesting perspective on this diagram below (from a 2014 post about the ideal skill-set for the future of networking).

There is an undoubted increase in the level of network / IT overlap (eg SDN). Most operators appear to be taking the path of hiring for IT and hoping they’ll grow to understand networks. Elisa is going the opposite way and training their network engineers to code.

With either path, if they then train their multi-talented engineers to understand the business (the red intersect), then they’ll have OSS experts on their hands right folks?? 😉
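
To give a flavour of what Elisa’s approach might produce, here’s a hypothetical scrap of Python of the kind a retrained network engineer might write – automating the manual load-balancing tweak they once did by hand. The cell names, KPIs and thresholds are all invented:

```python
# Hypothetical SON-style sketch: nudge handover offsets so congested cells
# shed traffic to quiet neighbours, instead of an engineer doing it by hand.
cells = {
    "CELL-A": {"prb_utilisation": 0.91, "handover_offset_db": 0.0, "neighbour": "CELL-B"},
    "CELL-B": {"prb_utilisation": 0.38, "handover_offset_db": 0.0, "neighbour": "CELL-A"},
}

def rebalance(cells: dict, high: float = 0.85, low: float = 0.60, step: float = 1.0) -> None:
    """Adjust offsets where a congested cell has an under-utilised neighbour."""
    for name, cell in cells.items():
        neighbour = cells[cell["neighbour"]]
        if cell["prb_utilisation"] > high and neighbour["prb_utilisation"] < low:
            cell["handover_offset_db"] += step     # push edge users to the neighbour
            print(f"{name}: utilisation {cell['prb_utilisation']:.0%}, "
                  f"offset now {cell['handover_offset_db']} dB towards {cell['neighbour']}")

rebalance(cells)
```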

A purple cow in our OSS paddock

A few years ago, I read a book that had a big impact on the way I thought about OSS and OSS product development. Funnily enough, the book had nothing to do with OSS or product development. It was a book about marketing – a subject that I wasn’t very familiar with at the time, but am now fascinated with.

And the book? Purple Cow by Seth Godin.

The premise behind the book is that when we go on a trip into the countryside, we notice the first brown or black cows, but after a while we don’t pay attention to them anymore. The novelty has worn off and we filter them out. But if there was a purple cow, that would be remarkable. It would definitely stand out from all the other cows and be talked about. Seth promoted the concept of building something into your products that makes them remarkable, worth talking about.

I recently heard an interview with Seth. Despite the book being launched in 2003, apparently he’s still asked on a regular basis whether idea X is a purple cow. His answer is always the same – “I don’t decide whether your idea is a purple cow. The market does.”

That one comment brought a whole new perspective to me. As hard as we might try to build something into our OSS products that create a word-of-mouth buzz, ultimately we don’t decide if it’s a purple cow concept. The market does.

So let me ask you a question. You’ve probably seen plenty of different OSS products over the years (I know I have). How many of them are so remarkable that you want to talk about them with your OSS colleagues, or even have a single feature that’s remarkable enough to discuss?

There are a lot of quite brilliant OSS products out there, but I would still classify almost all of them as brown cows. Brilliant in their own right, but unremarkable for their relative sameness to lots of others.

The two stand-out purple cows for me in recent times have been CROSS’ built-in data quality ranking and Moogsoft’s Incident Room model. But it’s not for me to decide. The market will ultimately decide whether these features are actual purple cows.

I’d love to hear about your most memorable OSS purple cows.

You may also be wondering how to go about developing your own purple OSS cow. Well, I start by asking, "What are people complaining about?" or "What are our biggest issues?" That’s where the opportunities lie. Once you’ve discovered those issues, the challenge is solving the problem/s in an entirely different, but better, way. I figure that if people care enough to complain about those issues, then they’re sure to talk about any product that solves the problem for them.

An embarrassing experience on an overseas OSS project

The video below has been doing the rounds on LinkedIn lately. What is your key takeaway from the video?

Most would say the perfection, but for me, the perfection was a result of the hand-offs, which were almost immediate and precise, putting team-mates into a better position. The final shot didn’t need the brilliance of a Messi or Ronaldo to put the ball into the net.

Whilst based overseas on an OSS project I played in an expat Australian Rules Football team. Aussie Rules, by the way, is completely different from the game of soccer played in the video above (check out this link to spot the differences and to see why I think it’s the greatest game in the world). Whilst training one afternoon, we were on an adjacent pitch to one of that country’s regional soccer teams.

Watching from the sidelines, we could see that each of the regional team’s players had a dazzling array of foot skills. When we challenged them to a game of soccer we were wondering if this was going to be embarrassing. None of us had anywhere near their talent, stamina, knowledge of the game of soccer, etc.

As it turns out, it WAS a bit embarrassing. We won 5-1, having led 5-0 up until the final few minutes. We didn’t deserve to beat such a talented team, so you’re probably wondering how (and how it relates to OSS).

Well, whenever one of their players got the ball, they’d show off their sublime skills by running rings around our players, but ultimately going around in circles and becoming corralled. They’d rarely dish off a pass when a teammate was in space like the team in the video above.

By contrast, our team was too clumsy to control the ball and had to pass it off quickly to teammates in space. It helped us bring our teammates into the game and keep moving forward. Clumsy passing and equally clumsy goals.

The analogy for OSS is that our solutions can be so complex that we get caught up in the details and go around in circles (sometimes through trying to demonstrate our intellectual skills) rather than just finding ways to reduce complexity and keep momentum heading towards the goals. In some cases, the best way-forward solution might not even use the OSS to solve certain problems.

Oh, and by the way, the regional team did score that final goal… by finally realising that they should use more passing to bring their team-mates into the game. It probably looked a little like the goal in the video above.

Assuming the other person can’t come up with the answer

Just a quick word of warning. This blog starts off away from OSS, but please persevere. It ends up back with a couple of key OSS learnings.

Long ago in the technology consulting game, I came to an important realisation. When arriving on a fresh new client site, chances are that many of the “easy technical solutions” that pop into my head to solve the client’s situation have already been tried by the client. After all, the client is almost always staffed with clever people, but they also know the local context far better than me.

Alan Weiss captures the sentiment brilliantly in the quote below.
I’ve found that in many instances a client will solve his or her own problem by talking it through while I simply listen. I may be asked to reaffirm or validate the wisdom of the solution, but the other person has engaged in some nifty self-therapy in the meantime.
I’m often told that I’m an excellent problem solver in these discussions! But all I’ve really done is listen without interrupting or even trying to interpret.
Here are the keys:
• Never feel that you’re not valuable if you’re not actively contributing.
• Practice “active listening”.
• Never cut-off or interrupt the other person.
• Stop trying to prove how smart you are.
• Stop assuming the other person can’t come up with the answer.”

I’m male and an Engineer, so some might say I’m predisposed to immediately jumping into problem solving mode before fully understanding a situation… I have to admit that I do have to fight really hard to resist this urge (and sometimes don’t succeed). But enough about stereotypes.

One of the techniques that I’ve found to be more successful is to pose investigative questions rather than posing “brilliant” answers. If any gaps are appearing, then provide bridging connections (ie through broader industry trends, ideas, people, process, technology, contract, etc) that supplement the answers the client already has. These bridges might be built in the form of statements, but often it’s just through leading questions that allow the client to resolve / affirm for themselves.

But as promised earlier, this is more an OSS blog than a consulting one, so there is an OSS call-out.

You’ll notice in the first paragraph that I wrote “easy technical solutions,” rather than “easy solutions.” In almost all cases, the client representatives have great coverage of the technical side of the problems. They know their technology well, they’ve already tried (or thought about) many of the technology alternatives.

However, the gaps I’ve found to be surprisingly common aren’t related to technology at all. A Toyota five-why analysis shows they’re factors like organisational change management, executive buy-in, change controls, availability of skilled resources, requirement / objective mis-matches, stakeholder management, etc, as described in this recent post.

It’s not coincidence then that the blog roll here on PAOSS often looks beyond the technology of OSS.

If you’re an OSS problem solver, three messages:
1) Stop assuming the other person (client, colleague, etc) can’t come up with the answer
2) Broaden your vision to see beyond the technology solution
3) Get great at asking questions (if you aren’t already of course)

Does this align or conflict with your experiences?

I will never understand…

I will never understand why Advertising is an investment and customer service is a cost.
Let’s spend millions trying to reach people, but if they try to reach us, make our contact details impossible to find, incentivise call center workers to hang up as fast as possible or ideally outsource it to a bot. It’s absolute lunacy and it absolutely matters.”
Tom Goodwin
here.

Couldn’t agree more, Tom. In fact, we’ve spoken about this exact irony here on PAOSS a few times before (eg here, here and here).

Telcos call it CVR – Call Volume Reduction (ie reduction in the number of customers’ calls that are responded to by a real person who represents the telco). But what CVR really translates to is, “we’re happy for you to reach us on our terms (ie if you want to buy something from us), but not on your terms (ie you have a problem that needs to be resolved).” Unfortunately, customer service is the exact opposite – it has to be on the customer’s terms, not yours.

Even more unfortunately, many of the problems that need to be resolved are being caused in our OSS / BSS (not always “by” our OSS / BSS, but that’s another story). Worse still, the contact centre has no chance of determining where to start understanding the problem due to the complexity of fall-out management and the complicated data flows through our OSS / BSS.

Bill Gates said, “Your most unhappy customers are your greatest source of learning.”

Let me ask you a question – Do you have a direct line of learning from your unhappy customers to your backlog of OSS / BSS enhancements? Or even an indirect line of learning? Nope?? If so, you’re not alone.

Let me ask you another question – You’re an OSS expert. Do you have any idea what problems your customers are raising with your contact centre staff? Or perhaps that should be problems they’re not getting to raise with contact centre staff due to the “success” of CVR measures? Nope?? If so, you’re not alone here either.

Can you think of a really simple and obvious way to start fixing this?
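
Here’s one simple starting point, sketched with invented reason codes and backlog items – tag every contact with a reason code, map the codes that trace back to OSS / BSS behaviour, and rank them as a feed into the enhancement backlog:

```python
# A sketch of a "line of learning" from contact-centre reasons to an
# OSS / BSS enhancement backlog. Categories and candidates are made up.
from collections import Counter

contact_reasons = [
    "order_status_unknown", "bill_shock", "appointment_missed",
    "order_status_unknown", "service_qualification_wrong",
    "order_status_unknown", "bill_shock",
]

oss_bss_related = {
    "order_status_unknown": "Expose order milestones from the OSS to the portal",
    "service_qualification_wrong": "Fix SQ inventory data quality",
    "bill_shock": "Reconcile usage mediation vs rating",
}

for reason, count in Counter(contact_reasons).most_common():
    if reason in oss_bss_related:
        print(f"{count} contacts -> backlog candidate: {oss_bss_related[reason]}")
```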

When your ideas get stolen

When your ideas get stolen.
A few meditations from Seth Godin:
“Good for you. Isn’t it better that your ideas are worth stealing? What would happen if you worked all that time, created that book or that movie or that concept and no one wanted to riff on it, expand it or run with it? Would that be better?
You’re not going to run out of ideas. In fact, the more people grab your ideas and make magic with them, the more of a vacuum is sitting in your outbox, which means you will be prompted to come up with even more ideas, right?
Ideas that spread win. They enrich our culture, create connection and improve our lives. Isn’t that why you created your idea in the first place?
The goal isn’t credit. The goal is change.”

A friend of mine has lots of great ideas. Enough to write a really valuable blog. Unfortunately he’s terrified that someone else will steal those ideas. In the meantime, he’s missing out on building a really important personal brand for himself. Do you know anyone like him?

The great thing about writing a daily blog is that it forces you to generate lots of ideas. It forces you to be constantly thinking about your subject matter and how it relates to the world. Putting them out there in the hope that others want to run with them, in the hope that they spread. In the hope that others will expand upon them and make them more powerful, teaching you along the way. At over 2,000 posts now, it’s been an immensely enriching experience for me anyway. As Seth states, the goal is definitely change, and we can all agree that OSS is in desperate need of change.

It is incumbent on all of us in the OSS industry to come up with a constant stream of ideas – big and small. That’s what we tend to do on a daily basis right? Do yours tend towards the smaller end of the scale, to resolve daily delivery tasks or the larger end of the scale, to solve the industry’s biggest problems?

Of your biggest ideas, how do you get them out into the world for others to riff on? How many of your ideas have been stolen and made a real difference?

If someone rips off your ideas, it’s a badge of honour and you know that you’ll always generate more…unless you don’t let your idea machine run.

Is your data AI-ready (part 2)

Further to yesterday’s post that posed the question about whether your data was AI-ready for virtualised network assurance use cases, I thought I’d raise a few more notes.

The two reasons posed were:

  1. Our data sets haven’t had time to collect much elastic / dynamic network data yet
  2. Our data is riddled with human-generated data that is error-prone

On the latter point in particular, I sense that we’re going to have to completely re-architect the way we collect and store assurance data. We’re almost certainly going to have to think in terms of automated assurance actions and related logging, to avoid the errors of human data creation / logging. The question becomes whether it’s worthwhile trying to wrangle all of our old data into formats that the AI engines can cope with, or whether we just start afresh with new models? (This brings to mind the recent "perfect data" discussion).
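
To illustrate what I mean by automated assurance actions and related logging, here’s a sketch of a machine-written, structured record – the field names are illustrative only, not a proposed standard – the kind of data an AI engine can actually learn from:

```python
# A sketch of a structured, machine-generated assurance action record,
# as an alternative to free-text operator notes. Fields are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AssuranceAction:
    event_id: str
    detected_at: str          # ISO-8601, machine-stamped
    resource: str             # the (possibly virtual) resource acted upon
    symptom: str              # normalised symptom code, not free text
    action: str               # normalised action code
    initiated_by: str         # "auto" or an operator id
    outcome: str              # normalised result code

record = AssuranceAction(
    event_id="evt-000123",
    detected_at=datetime.now(timezone.utc).isoformat(),
    resource="vnf:firewall-07",
    symptom="PACKET_LOSS_THRESHOLD",
    action="RESTART_VNF_INSTANCE",
    initiated_by="auto",
    outcome="RESOLVED",
)
print(json.dumps(asdict(record), indent=2))
```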

It will be one thing to identify patterns, but another thing entirely to identify optimum response activities and to automate those.

If we get these steps right, does it become logical that the NOC (network operations centre) and SOC (security operations centre) become conjoined… at least much more so than they tend to be today? In other words, does incident management merge network incidents and security incidents onto common analysis and response platforms? If so, does that imply another complete re-architecture? It certainly changes the operations model.

I’d love to hear your thoughts and predictions.

Are your existing data sets actually suited to seeding an AI engine?

In the virtualization domain, the old root cause technology is becoming obsolete because resources and workloads move around dynamically – we no longer have fixed network and compute resources. Existing service assurance systems in the telecommunication network were designed to manage a fixed set of resources and these assurance systems fall short in monitoring dynamic virtualized networks. Code was written using a rule based approach on known problems. Some advances have been made to develop signature patterns to determine the root cause of a problem, but this approach will also fall short in a dynamic virtualized network where autonomous changes will occur continuously.”
Patrick Kelly
here.

This quote is taken from a really interesting article by Patrick Kelly (see link above).

The old models of determining service impact and root-cause certainly struggle to hold up in the transient world of virtualised networks. Artificial Intelligence or Machine Learning or machine-led pattern identification, or whatever the technologies will be called by their developers, have a really important part to play in networks that are not just dynamic, but undergoing a touchpoint explosion.

The fascinating part of this story is that these clever new models will rely on data. Lots of data. We already have lots of data to feed into the new models. Buuuuut…. I’ve long held the reservation that there might be one slight problem… does all of our existing data actually suit the “AI” models available today?

Firstly, our existing data doesn’t include much history on dynamically transient networks. But the more important factor is that our networks have been managed by humans – operators who have a tendency to record the quickest, dirtiest (and not necessarily correct or complete) set of data that allows them to restore service quickly.

Following a recent discussion with someone who’s running an AI assurance PoC for a big telco, it seems this reservation is turning out to be true. Their existing data sets just aren’t suited to the AI models. They’re having to reconsider their whole approach to their data model and how to collect / store it. They’re now starting to get positive results from the custom-built data sets.

It’s coming back to the same story as a post from last week – having connectors that can translate the different languages of ops, data, AI, etc and building a people / process / technology solution that the AI models can cope with.

You might not be ready to start an AI experiment yet, but you may like to start the journey by understanding whether your existing data is suited to AI modelling. If not, you get the chance to change it and have a great repository of data to seed an AI engine when you are ready in future. The first step on an exponential OSS journey.
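
If you want somewhere to start, even a simple profile of your historical tickets – sketched below with made-up field names and records – will tell you how much of the data is blank, junk or free-text, and therefore how AI-ready it really is:

```python
# A sketch of profiling historical assurance data for the gaps humans
# tend to leave behind. Field names and sample tickets are invented.
tickets = [
    {"root_cause": "fibre cut", "resolution_code": "RC-12", "resource_id": "LNK-0042"},
    {"root_cause": "", "resolution_code": "other", "resource_id": "LNK-0042"},
    {"root_cause": "see notes", "resolution_code": None, "resource_id": ""},
]

def profile(records: list, fields: list) -> dict:
    """Share of records where each field is missing, blank or a junk value."""
    junk = {"", None, "other", "see notes", "n/a", "tbc"}
    return {
        field: sum(1 for r in records if r.get(field) in junk) / len(records)
        for field in fields
    }

for field, unusable in profile(tickets, ["root_cause", "resolution_code", "resource_id"]).items():
    print(f"{field}: {unusable:.0%} unusable")
```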