If you work in an OSS/BSS product company, are you able to guess what percentage of your team spends time with clients (customer-facing)? And of that percentage, how many observe the end-user interacting with your products (user-facing)?
There's a really important distinction between these two groups, not to mention the implications it has for product innovation, but I'll get to that shortly.
We all know that great products serve the needs of their users, solving their most important problems. We also know that when there's an air-gap between OEM vendors and clients, products tend to struggle to meet end-user needs.
Conversely, we don't really want our devs sitting alongside the customer for long (even on Agile / insourcing projects) because development can tend to veer away from the big-picture roadmap. Products can become client-specific rather than client-generic in those scenarios, which isn't great if you're seeking repeatable / reusable solutions.
Let’s revisit those two opening questions.
What types of roles are customer-facing?
- Sales
- Implementation teams
- Project Management / Coordination roles
- Architects
- Business Analysts
- Testers
- Trainers
- Developers (sometimes)
- Support teams
  - Post-handover support
  - Ongoing product support
That probably means more than half of your roles are customer-facing. That's great. The architects, BAs and testers in particular act as a conduit between the client and your developers, forming an important innovation loop. They deal with the client's project team almost every day for long periods of time. If they ask the right questions of the right people, then translate the answers into a message that resonates and that the devs can build against, all is fine. That should mean the air-gap is minimised.
I've probably spent close to 10 years in these sorts of client-facing roles on OSS/BSS implementation projects. That alone should give me a pretty good understanding of client needs, right?
Unfortunately, there's a serious flaw in this thinking! Take another look at the second question in the opening paragraph and see whether you can spot it.
When you're on an implementation team, you're most often dealing with the client's project team, and the client's project team is simply telling you what they think they need. For large periods of the project the solution isn't yet available for use, so you can only talk about it, not actually use it. Not only that, but the client's project team may not include many end-users. They help build OSS/BSS solutions, but might never actually use them. The OEM's implementation team does most of their work prior to go-live of the solution, and almost everything prior to go-live deals with hypothetical situations. Even UAT (User Acceptance Testing) tends to only cover hypothetical scenarios.
The only true customer experience is what happens after go-live, using production solutions. Unfortunately, most implementation teams don't stick around long after go-live. They're normally off to the next implementation project. At least that's what happened most often in my case.
So let’s come back to the second question – how many (of your OEM team) observe the end-user interacting with your products?
From the list above, it's really only your support staff: post-handover support and ongoing product support (and even they tend to interact with end-users remotely, often only via email / tickets). You might also throw in trainers if you're training client staff on production-like systems / scenarios (although most vendors just give the same generic training to all clients).
There's a well-held belief that asking customers what they want will only deliver incremental improvement, as per Henry Ford's oft-quoted line about customers asking for faster horses. Radical innovation is more likely to come from deep customer engagement: observing what customers do with your solutions and understanding what they're trying to achieve, plus their pain-points.
Therefore, let me ask you – are we putting our best observers / translators / innovation-detectives in these few real end-user-facing roles? Are we tasking them with finding the biggest problems worthy of innovation or are they just fault-fixing? Are we asking them to collect data that will guide our next-generation product developments? Would our developers and architects even listen if our observers did come up with amazing insights?
I suspect we’re leaving our best-fit roles out of our innovation super-loops! Are we air-gapping our innovation opportunities?
Would it actually be in our best interests to stay with the end-users for longer after go-live, partly to give them greater confidence in handling the many scenarios that pop up, but also to gain a deeper understanding (and metrics) of our customers in the wild?
Please share your thoughts below!