We speak with buyers and sellers every week, and one word keeps repeating in both sets of conversations.
Trust.
It’s more than a word though. It’s costing the telco industry millions.
When trust is missing, 18-24 month buying cycles become the default. This is the Buyer-Seller Chasm that we talk about a lot.

You’re probably already well aware of the pain-points around lengthy procurement events. But there’s something you might not have picked up on yet: almost all procurement approaches aren’t just a test of capability. They’re a test of trust.
Think about all the hoops we go through when assessing vendors. Functionality / capability might be the stated assessment criteria, but when you think about the approaches used, we’re actually assessing whether we can trust the vendor to deliver.
Procurement is the mechanism buyers use to reduce perceived delivery risk, commercial risk, and political risk. And when trust stays intangible, buying cycles drag on and on. Not because key decision-makers love meetings (well, some probably do). It’s mostly because they can’t get to a mental position where they feel the decision is safe yet.
This article discusses ways to get to trust a lot faster than the default, but we have to make it an intentional objective.
We’ll do that by sharing some of the approaches we’ve learnt along the way that build trust faster.
.
Lesson 1: Vulnerability can unlock trust – invite vendors to admit weakness
We’ve sat through hundreds (or maybe even thousands) of vendor demos over the years. Two unexpected trust-enhancement patterns have shown up during those demos – so unexpected that I’m not even sure they were intended sales techniques (I think they were just inherent in those salespeople’s personalities).
The salespeople who openly describe their solution’s weaknesses or lack of fit often build trust faster than those who claim complete excellence.
When every answer from a vendor is “yes, we’re the market leaders at that”, Buyers don’t feel reassured. Instead, they immediately feel like the Sellers are being dishonest.
That’s probably not the case most of the time though.
Each salesperson probably feels like they’re being truthful. It brings to mind studies such as this, suggesting that up to 93% of drivers rate themselves as more skilful than the average driver (the “better-than-average” effect – clearly, not everyone can be above average). I suspect almost every OSS vendor thinks they’re better than average.
So, as a Buyer, we need to help elicit honesty from Sellers.
How? We can ask questions that make it safer to be honest, for example:
- Where are you weak for our use case?
- What would make you a bad fit for us?
- What should we be cautious about in year 2 and beyond?
- What do your customers most commonly complain about after go-live?
Then listen for specificity. Genuine, trustworthy answers usually include limitations, constraints, dependencies, failure modes and mitigations or workarounds. Low-trust answers tend to be vague, evasive, or framed as “we’ve never seen that before” or “we don’t have any faults” or “we try to help out our customers too much.”
This one move can change the tone of the procurement from only testing capability to also testing trust and partnership. Using this technique, you’re testing whether this vendor can tell the truth when they have something to lose.
.
Lesson 2: Treat trust as a risk register – and share your risks once you’ve shortlisted
If trust is a major element of the chasm (it is), the fastest bridge over it is a shared map of uncertainty and methodical risk reduction.
Whether they know it or not, each Buyer decision-maker is building a trust register – some in their heads, others more systematically.
The mistake many Buyers make is keeping this list internal.
Instead, once you’re down to your selected shortlist, it makes sense to start sharing your specific reservations, concerns and risks with each vendor.
Be open with them about what you perceive as risks, and how. That might include:
- Unknowns (eg your data quality, volumes, migration complexity, required capabilities, budget allocation, etc)
- Internal upskilling needs (new operating model, new tooling, new roles)
- Integration uncertainty (interfaces, patterns, reconciliation, etc)
- Gaps in stakeholder alignment (change management, Ops Model, security, ops, finance, architecture)
- Factors that will impact costs (more on that later)
- Decisions you haven’t made yet that will shape scope
- How to unravel the complexity and remove the transformation fog that prevents the Buyer from seeing the path to their destination
This transparency doesn’t weaken your negotiating position. It strengthens your ability to get real evidence quickly and it reduces the risk of relationship breakdown later.
Our first book, “Mastering your OSS,” was written because there was always a disconnect between what the buyer thought they’d bought and what the seller thought they’d sold. Prime examples of disconnects were around data migration, change management, systems integrations, upskilling the team, pricing models and much more.
Retiring these risks also helps with a silent killer of buying cycles: hidden vetoes due to unstated risks.
There’s an old saying, “the confused (or fearful) mind says no.”
Stakeholders often don’t block a deal loudly or visibly. Instead, they delay it quietly or seek further rounds of information because they don’t trust what they’re seeing. A shared risk register makes it easier to surface and retire those veto points early. (Note that some risks will be shared across all vendors, whilst others will be vendor-specific.)
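To make the idea tangible, here’s a minimal sketch of what a shared risk register entry might look like (the structure and field names are purely illustrative assumptions, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One entry in a shared Buyer-Seller risk register (illustrative only)."""
    risk_id: str
    description: str        # e.g. "Inventory data quality is unknown"
    owner: str              # who is accountable for retiring the risk
    applies_to: str         # "all vendors" or a specific vendor
    evidence_needed: str    # what would retire (or confirm) the risk
    status: str = "open"    # open / mitigated / retired

# A couple of hypothetical entries, shared once the shortlist is set
register = [
    RiskEntry(
        risk_id="R-01",
        description="Volume and quality of legacy inventory data is unknown",
        owner="Buyer - Data team",
        applies_to="all vendors",
        evidence_needed="Profiling of a sample extract during the POC/POV",
    ),
    RiskEntry(
        risk_id="R-02",
        description="Integration pattern to the billing stack is unproven",
        owner="Seller - Integration lead",
        applies_to="Vendor A",
        evidence_needed="Working interface demonstrated in the lab environment",
    ),
]

# Surface the open items before they become hidden vetoes
for risk in register:
    if risk.status == "open":
        print(f"{risk.risk_id} [{risk.applies_to}] {risk.description}")
```

The exact format matters far less than the act of sharing it – a spreadsheet works just as well.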
Buyers, I’d recommend that you initiate this step.
.
Lesson 3: Clearly define what evidence counts
Many procurement processes treat demos as not much more than ad-hoc theatre.
The fix is to define what evidence counts before the first demo – for internal and external stakeholders. To be totally clear, all demos must be in a language familiar to Buyers (since it’s the Buyer’s internal stakeholders who decide whether the project proceeds).
Evidence usually needs to cover:
- Desired outcomes driven by end-to-end workflows (not just screens)
- The Buyer’s data sets (your network makes/models, your topologies, your services, your perceived edge cases)
- Integration patterns and interfaces
- Non-functional requirements (performance, resilience, latency, scalability)
- Security and compliance evidence
- Operational model (monitoring, support, incident handling)
- Commercial drivers and variation triggers
At PAOSS, we operationalise this using scenario packs. For each type of OSS functionality (eg Network Inventory), we have a baseline set of use-cases / scenarios that we then refine in collaboration with each customer.
These scenario pack templates combine:
- Functionality and workflow steps (as a structured set of scenarios)
- The buyer’s own datasets/network/topologies/etc (where possible)
- Acceptance criteria (what “good” looks like)
- Evidence capture (what gets recorded, saved, and reused)
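To make that concrete, here’s a minimal sketch of how a single scenario within a pack could be represented (the structure and names below are illustrative assumptions, not our actual template format):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scenario:
    """One scenario in a pack, with acceptance criteria and evidence capture."""
    name: str
    workflow_steps: List[str]        # end-to-end steps, not just screens
    dataset: str                     # ideally the Buyer's own data
    acceptance_criteria: List[str]   # what "good" looks like (pass/fail)
    evidence: List[str] = field(default_factory=list)  # what gets recorded and reused

# A hypothetical Network Inventory scenario
scenario = Scenario(
    name="Design and reserve a new fibre service",
    workflow_steps=[
        "Select A-end and B-end sites from the Buyer's topology",
        "Auto-route across existing fibre and duct inventory",
        "Reserve ports/splices and generate a work order",
    ],
    dataset="Buyer-supplied extract of fibre/duct inventory",
    acceptance_criteria=[
        "Route honours the Buyer's design rules",
        "Reservation is visible to downstream assurance tooling",
    ],
    evidence=["Screen recording", "Exported design artefact", "API payloads"],
)

print(f"{scenario.name}: {len(scenario.workflow_steps)} steps, "
      f"{len(scenario.acceptance_criteria)} pass/fail gates")
```

Whether it lives in code, a spreadsheet or a document, the point is that every vendor demonstrates against the same steps, data and pass/fail gates.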
Use scenario packs in two stages:
- Initial demos (by the short-listed bidders): we share the high-level scenario list with vendors so that they can provide their initial demo against a consistent set of objectives and scenarios (but we give the Sellers freedom to demonstrate using their own preferred sequence and data set to minimise their effort)
- Proof of Concept / Proof of Value (POC/POV) (by the preferred bidder, or sometimes 2-3 finalists depending on the buyer’s procurement rules): demonstration of the same scenarios but with the buyer’s specific data, edge cases, operational tasks and with outcomes specifically measured (including ROI metrics). This takes more effort to set up, but there’s also a higher likelihood of the Seller winning the deal when they’re a preferred bidder or finalist
When you do this well, the demo stops being a performance and becomes an evidence-gathering exercise. And trust rises because you’re no longer relying on belief. You’re building proof and trust together. You’re also sharing operational knowledge with each other.
.
Lesson 4: Combine vendor-agnostic guidance with an independent scorecard
Trust is harder to establish when buyers can’t compare vendors easily or consistently. Seller solution collateral is generally designed to differentiate them from their competitors. That intentional avoidance of apples-to-apples comparisons creates noise and complexity for evaluators. Noise creates delays. Delays create doubt. Doubts widen the chasm.
Combine two things:
- A single scorecard used across all vendors
- Vendor-agnostic framing to keep the comparison honest (it’s common for Buyers to ask an incumbent or preferred vendor to write their RFP spec, which naturally isn’t vendor agnostic!)
A simple scorecard might include:
- Capability fit (against your scenarios)
- Non-functionals (support models, complexity, regional / regulatory adherence)
- Delivery credibility (plan, roles, dependencies)
- Commercial clarity (pricing drivers, scope boundaries), especially in the early stages when the Buyer doesn’t know what to expect in terms of price-bracketing, or budget to allocate, etc
- Residual risk (what is still unknown)
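As a simple illustration of how a single scorecard keeps comparisons consistent (the weights, categories and scores below are hypothetical), each vendor’s category scores roll up into one comparable number:

```python
# Illustrative weighted scorecard - weights and scores are hypothetical
WEIGHTS = {
    "capability_fit": 0.30,
    "non_functionals": 0.20,
    "delivery_credibility": 0.20,
    "commercial_clarity": 0.15,
    "residual_risk": 0.15,   # scored high when little remains unknown
}

vendors = {
    "Vendor A": {"capability_fit": 8, "non_functionals": 7, "delivery_credibility": 6,
                 "commercial_clarity": 7, "residual_risk": 5},
    "Vendor B": {"capability_fit": 7, "non_functionals": 8, "delivery_credibility": 8,
                 "commercial_clarity": 6, "residual_risk": 7},
}

def weighted_score(scores: dict) -> float:
    """Combine 0-10 category scores into a single comparable number."""
    return sum(WEIGHTS[category] * score for category, score in scores.items())

for name, scores in sorted(vendors.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f} / 10")
```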
Vendor-agnostic guidance for Buyers helps by providing neutral structure such as:
- Standardised capability models, questions and checklists
- Market benchmarking (what is typical vs unusual)
- Reference architectures and patterns to sanity-check claims
- Lab environments or validation approaches that reduce guesswork
We’re strong believers in the importance of this lesson, but we’re also biased. This is one of PAOSS’ most important offers to assist OSS/BSS Buyers. We use our Blue Book OSS/BSS Vendor Directory to categorise over 750 listings.
We also use the Inverted Pyramid selection approach to eliminate what we refer to as the Three Forevers of a typical RFP process. Together, the Vendor Directory and Inverted Pyramid approach aim to rapidly increase trust between buyers and best-fit sellers whilst helping to shrink the Buyer-Seller Chasm.
However, there’s no reason why you can’t establish your own scorecard and evaluation techniques similar to ours.
.
Lesson 5: Transparency goes both ways
Everyone knows that pricing surprises are trust killers. Scope games are too.
But this still happens. Regularly!
Typical Buyer approaches aim to push all cost and delivery risk onto Suppliers. Suppliers often accept this… until the contracts are signed… and then find ways to push back, such as using variations as a balancing tool.
This typical approach trashes trust and partnership from the outset, as described in this article.
Buyers often push for vendor transparency (“show me the numbers”), but without clarity around scope. Therefore, the deeper need is to establish clarity on:
- What drives price (users, transactions, environments, integration volume)
- What triggers variation (scope ambiguity, customisation boundaries, migration assumptions)
- What changes the price (specific scenarios, not vague clauses)
- What risks or issues can be closed or mitigated (based on the supplier’s past experiences) (as per Lesson #2)
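Here’s a toy illustration of why naming the drivers matters (every driver name and rate below is hypothetical) – once price is an explicit function of its drivers, a variation becomes a visible change to a driver rather than a surprise clause:

```python
# Toy pricing model - every driver and rate is hypothetical.
# The point: price changes become explicit functions of named drivers,
# rather than appearing later as vague variation clauses.
RATES = {
    "named_users": 1_200,     # per user per year
    "environments": 45_000,   # per non-prod environment
    "integrations": 80_000,   # per interface built
}

baseline = {"named_users": 150, "environments": 3, "integrations": 8}

def annual_price(drivers: dict) -> int:
    return sum(RATES[name] * qty for name, qty in drivers.items())

print(f"Baseline: ${annual_price(baseline):,}")

# A variation trigger is then just a change to a driver, costed the same way
varied = dict(baseline, integrations=12)
print(f"With 4 extra integrations: ${annual_price(varied):,}")
```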
.
Let me share a cautionary tale as an example.
A buyer (a T1 telco) started out with a long list of capabilities needed for their first major digital transformation in years. It was to be a pivotal investment in future operations.
Based on a back-of-the-napkin calculation (it wasn’t a project I was directly involved with, but I knew many people who were), I estimated the contract value would be around $50m.
Over the next 2-3 years of ongoing POCs, the Buyer whittled down scope to fit within aggressive budget expectations. They eventually landed with Seller contracts that were less than $10m.
Unfortunately, the minimised scope left a significant gap from what the business actually needed. Buyer blamed Seller. Seller blamed Buyer. All were at loggerheads.
But the business needs gap still had to be resolved. The Buyer had to request additional scope projects. I’ve heard from insiders that there have since been around $200m in variations. That’s over 20x the original contracted value!
That’s not just a commercial blowout. It’s a trust failure too. And most of it started with procurement trying to negotiate the lowest price.
Greater transparency is required. Buyers may choose to ask vendors to disclose:
- Their risks and unknowns
- Where they think your scope isn’t clear
- Where they expect variation pressure to land later
- What assumptions they are making (explicitly)
- What is out of the box, what needs development, what roadmap items are already funded
Transparency isn’t only a seller / vendor responsibility. Buyers need to reciprocate transparency around:
- Where scope is still forming
- Highest priority problems / objectives to resolve
- Where internal alignment is incomplete (especially between a Buyer’s Delivery, Operations, Business, Commercial / Legal and Procurement teams)
- What decisions are pending that will change direction
- What other in-flight activities and other challenges may impact this project
There’s also a bigger structural issue here, as described in the link. Traditional procurement is often built around cost reduction and risk hand-off. Incentives for the procurement team involve pushing as much liability as possible onto vendors.
It can look like diligent “risk management” and cost reduction, but it often reduces trust and harms partnership from the outset. If the relationship begins as an adversarial contract exercise, it shouldn’t be surprising when trust gets buried and risk comes back later.
.
Lesson 6: Make reference calls and let “out of the box” be the baseline
Testimonials and reference checks can be an important part of building trust in a Supplier’s ability to deliver. However, there can be a few provisos:
- A Seller will naturally provide contact details to their happiest customers, so you might wish to independently reach out to other customer contacts (PAOSS might be able to help you with that)
- Each previous implementation will differ from the Buyer’s unique needs, so compare against out-of-the-box capabilities where possible rather than customisations and/or non-aligned products
- Not all Sellers will be a perfect fit for all Buyers, and every product / project will have problems and pain-points. These are highly complex and unique projects, so there are always problems. Yours will have them too. The most important consideration is how Buyer and Seller work together to resolve them
Run reference checks as an exercise in assessing the Seller’s ability to build strong partnerships as much as the strength of their solutions. Questions might include:
- What product modules and implementation features are the same/different from what we’re proposing?
- How much of the functionality delivered is available via configuration (ie data change) versus customisation (ie code change)?
- How mutually flexible have you been to resolve unexpected changes? Are changes managed by contract, by relationship, etc?
- What broke in the first 90 days after go-live and how was it handled?
- What changed in the following year (scope, cost, team, governance) and how was it handled?
- Are contract T&Cs fairly balanced or more heavily weighted towards protecting the Seller?
- How smoothly did change control and variations work in practice?
- What support model did you sign up to? Are you happy with that support? What does support look like if something breaks at 2am?
- What do you wish you’d clarified before signing?
- Would you sign with this Supplier again today?
And be ruthless with one phrase: “out of the box”.
Too many RFPs are written against a wish-list of requirements that don’t really move the needle, but need customised development by Suppliers. Customisation means increased cost (up-front and TCO), harder future upgrades, non-standard support, more time, etc.
Request a written definition of what is:
- Configuration
- Customisation
- Future roadmap (and is it funded or speculative roadmap functionality?)
- Not supported
.
Lesson 7: Run a short Proof of Value – not an endless pilot
Pre-sales purgatory happens when the POC / pilot isn’t designed to create a decision.
The cautionary tale described earlier is the perfect example here too.
There were endless rounds of POCs conducted over a period of years, at huge cost to Buyer and Seller alike. The Buyer had seen certain functionality demonstrated and assumed it was in the contracted deliverables, but because the Seller had dropped modules to meet budgets, that functionality had been excluded (and added back in by variation later).
A Proof of Concept (POC) is a demonstration of capability. A POC might be paid or unpaid (as mutually agreed by Buyer and Seller), and isn’t guaranteed to progress any further; a subsequent decision is required on whether to proceed.
A Proof of Value (POV) is similar to a POC, but acts as a go / no-go decision gate within an already approved contract. In other words, if the Buyer accepts that all essential criteria are met during the POV, then the long-term contract commences.
A POC/POV should be:
- Timeboxed to weeks (not quarters)
- Built around decision questions (what are we trying to prove?)
- Run using refined scenario packs with a suite of pre-defined workflows, data and acceptance criteria (pass/fail gates)
- Measured with outcomes, including ROI and risk reduction metrics
- With a clear next-step decision to move toward contract negotiations
One goal is to “see it working” (and have a variety of end-users see it working, not just the delivery team). But more importantly, it’s to produce evidence that increases trust, reduces uncertainty, expedites stakeholder alignment and provides a stepping stone towards a future production build.
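As a minimal sketch (gate names, thresholds and results are hypothetical), the go / no-go decision at the end of a POV can be as simple as checking that every essential pass/fail gate has been met:

```python
# Illustrative POV decision gate - gate names and results are hypothetical.
# Essential gates are pass/fail; the long-term contract only commences
# if every essential gate passes.
gates = [
    {"name": "End-to-end fibre design scenario passes on Buyer data", "essential": True,  "passed": True},
    {"name": "Reconciliation of 1M inventory records within 4 hours", "essential": True,  "passed": True},
    {"name": "Self-service report builder usable by Ops team",        "essential": False, "passed": False},
]

essential_failures = [g["name"] for g in gates if g["essential"] and not g["passed"]]

if essential_failures:
    print("NO-GO - essential gates failed:")
    for name in essential_failures:
        print(f"  - {name}")
else:
    print("GO - all essential gates passed; long-term contract commences")
```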
.
Lesson 8: Detailed Implementation Plans are a significant trust signal
A surprising amount of trust is earned by delivery realism, not sales polish.
Buyers can ask for an implementation plan that includes:
- All major activities on the project / programme
- Named roles (not unnamed “resources”)
- Dependencies and assumptions
- Governance and escalation paths
- Testing approach and environments
- Cutover approach
- How change control will work
When vendors can show project-level planning artefacts, it reveals whether they understand the size and scale of the programme and how complexity will be managed. It also gives your internal stakeholders something concrete to align resources around (see RACI / WBS view below), which reduces confusion and hidden vetoes.
Whilst every project is different, most task categories remain the same / similar from project to project.
The more reusable a Supplier’s delivery plan, the more reliably repeatable their delivery should be.
At PAOSS, we use WBS (Work Breakdown Structures) like the example shown in the diagram below. We have delivery planning templates for almost all of our main offers, including vendor selection projects. Each provides detailed examples of what “real” looks like, so buyers can separate refined, battle-hardened, credible plans from optimistic narratives. These templates need to be tweaked for each unique customer, but a majority of tasks will remain unchanged.
Notes:
- The colour-coding effectively acts as a RACI chart, with the main box being Responsible/Accountable and the small boxes at the bottom right of each task show the stakeholders that are Consulted/Informed/Supporting
- These WBS charts also identify which activities have corresponding milestones, deliverables, out-of-scope items, etc.
- Each activity has a variety of other attributes (not shown on chart) that allow a variety of project artefacts to be built automatically / semi-automatically (including Agile backlogs and quotes).
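As a rough sketch of the underlying idea (the attribute names are our illustrative assumptions, not the actual DivvAI schema), each WBS activity carries enough attributes to generate RACI views, milestones and other artefacts automatically:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WbsTask:
    """One activity in a delivery WBS, carrying RACI and planning attributes."""
    task_id: str
    name: str
    responsible: str                                       # main box (Responsible/Accountable)
    consulted: List[str] = field(default_factory=list)     # small boxes, bottom right
    informed: List[str] = field(default_factory=list)
    milestone: bool = False
    deliverables: List[str] = field(default_factory=list)
    out_of_scope: List[str] = field(default_factory=list)

task = WbsTask(
    task_id="3.2",
    name="Migrate and reconcile physical inventory data",
    responsible="Seller - Data migration team",
    consulted=["Buyer - Network records team"],
    informed=["Programme governance board"],
    milestone=True,
    deliverables=["Reconciliation report", "Exception list"],
)

# A RACI-style view can then be generated from the same attributes
print(f"[{task.task_id}] {task.name}")
print(f"  R/A: {task.responsible}")
print(f"  C:   {', '.join(task.consulted) or '-'}")
print(f"  I:   {', '.join(task.informed) or '-'}")
```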

You’re welcome to click on this link to freely navigate around the sample WBS above (you will be asked to create a free login on the DivvAI platform though).
.
Lesson 9: Use lab environments to bridge from vendor selection straight into implementation
One of the biggest hidden delays in procurement is the pause and reset that happens between POC/POV and implementation. Different teams, different artefacts, different understanding.
Trust and confidence leak in the handover from Planning (Phase 1) to Implementation (Phase 2) in the diagram below.
The POC/POV/Pilot/Sandpit step is an important transition step.

Lab environments can close that gap by turning POC/POV into a runway for delivery.
If you keep a lab environment alive between evaluation (Phase 1) and production (Phase 2), you can use it for:
- Additional testing (integrations, performance, resilience, edge cases, migration, reconciliation, etc)
- Training and internal upskilling
- Documentation foundations (runbooks, configuration notes, operational procedures, user guides, custom training packs, etc)
- API / integration acceleration (contracts, reconciliation patterns, interface specifications, data warehouse modelling, etc)
- Business process work anchored to real workflows (there’s generally a business process reengineering [BPR] task to be performed when transforming platforms)
- Playing in a safe (ie non-prod) environment where the ramifications of mistakes are significantly reduced
Working with real systems tends to be far more intuitive for new users than reading architecture documents or user manuals.
.
Closing thoughts
Most people will agree that trust is an important factor in getting a transformation project funded.
But most don’t realise just how important it is because it’s largely unsaid.
If you’re a Buyer and want to shrink an 18-24 month sales cycle, you’re probably not even thinking about trust explicitly. Most see it as an intangible that increases over time whilst getting familiar with the potential Sellers. Similarly, it’s easy for Buyers to push the onus onto Sellers to “just be more trustworthy.”
We need to be more scientific than that!
As you see from the lessons above, as Buyers, there are quite a few things we can do to actually fast-track trust. We have to start by treating trust as an objective, as an output by design, not a by-product.
If you’d like to discuss these and other techniques for getting across the buyer-seller chasm faster, please contact us.






