◆   Field Dispatch Series — Revenue Operations   ◆   Downloaded, Not Hired   ◆

Win/Loss Analysis: Run One That Actually Changes Behaviour

Most win/loss programmes produce a slide deck. The slide deck gets presented at the QBR. Everyone nods. Nobody changes anything. Three months later, the same loss reasons are appearing in the CRM fields, and the same deals are being lost to the same competitors. The programme has been running for a year and the win rate has not moved.

This is not a data problem. The data exists. It is a design problem. Win/loss analysis done correctly is one of the highest-leverage inputs a revenue organisation has — it exposes what is actually happening in competitive situations from the buyer's perspective, not the rep's rationalisation. But the majority of programmes are designed in a way that guarantees the output will be interesting and useless. Here is why, and how to fix it.

Why Most Win/Loss Programmes Fail

CRM Data Only

The most common version of win/loss analysis is a CRM report. Someone runs a query on closed-won and closed-lost opportunities, looks at the "reason for loss" field, and produces a bar chart. This approach has a fundamental flaw: the data was entered by the person who lost the deal and has a strong incentive to attribute the loss to factors outside their control. Price, product gaps, and "went with incumbent" dominate loss reason distributions not because they are the most common causes of loss, but because they are the most face-saving explanations available. The rep who lost a deal because they could not build internal consensus, could not get to the economic buyer, or ran a poor discovery process is not going to select "poor sales execution" from a dropdown.

CRM data tells you what reps believe or are willing to admit. It does not tell you what buyers decided or why. These are different datasets and they produce different conclusions.

No Buyer Interviews

The only way to get accurate win/loss data is to talk to the people who made the decision. Buyer interviews are the mechanism. They are also the part of the programme most organisations skip, citing difficulty in getting time with prospects post-decision, concern about re-opening closed deals, or resource constraints. These objections are real but usually overstated. Buyers who chose you are generally happy to talk. Buyers who chose someone else are often willing to explain why, especially if the conversation is positioned as feedback rather than a sales re-engagement. The difficulty is genuine, but the yield is high enough to justify the investment.

Findings That Never Become Action

Even programmes that do conduct buyer interviews often fail at the final step: making the findings operational. The analysis gets packaged into a deck, the deck gets presented, and then it lives in a shared drive. Nobody has been assigned to act on specific findings. Nobody has changed the sales playbook based on the competitive intelligence. Nobody has updated the ICP based on the segments where win rates are structurally better or worse. The insight sits inert because there is no process for converting it into behaviour change.

How to Design a Win/Loss Programme That Works

A functional win/loss programme has three components: structured data collection, thematic analysis, and operational integration. Most programmes have the first, approximate the second, and skip the third entirely.

Structured Buyer Interviews

The interview is the core of the programme. It should cover six areas: the buying process and who was involved, the problem they were trying to solve and how urgent it was, how your solution was evaluated against alternatives, what the deciding factors were, what concerns or objections arose, and — for losses — what would have changed the outcome. The interviewer should not be the rep who ran the deal. It should be a neutral party, either from marketing, product, or a third-party research firm. Buyers say materially different things when they are not talking to the person whose commission depended on the outcome.

The questions that get honest answers are the ones that give the buyer credit for a rational decision. "You chose [competitor]. What was the primary reason that felt like the right call?" is better than "Why did we lose?" The former positions the buyer as a thoughtful decision-maker. The latter positions them as a judge. Tone matters.

Aim to interview both sides of every major deal: won and lost. A programme that only interviews losses tells you why you lose. A programme that interviews wins too tells you what you look like when you win — which is often different from what sales leadership assumes. Understanding the win profile is as commercially valuable as understanding the loss profile, especially when you are trying to refine the ICP. The relationship between win/loss findings and ICP is explored further at How to Build an ICP That Sales Will Actually Use.

Thematic Analysis

Individual interviews are anecdotes. Thematic analysis is intelligence. After every cohort of interviews, the findings need to be coded against a consistent framework so patterns emerge over time. What competitive situations are appearing most frequently? What product gaps are cited by multiple buyers? What sales process failures show up across multiple lost deals? What is consistent across won deals that does not appear in the lost deals?

The themes that matter are the ones that appear in more than 20% of interviews and that have a clear owner in the organisation who can act on them. A theme that appears once is a data point. A theme that appears in a third of your interviews and maps to a specific gap in your sales motion or product is a strategic finding. Those are the ones worth building action plans around.
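As a minimal sketch of that threshold test, here is illustrative Python over a hypothetical five-interview cohort. The theme names, outcomes, and the 20% cutoff are invented for the example, not drawn from any real programme:

```python
from collections import Counter

# Hypothetical coded interviews: each record carries the themes that
# surfaced in it. All names and values here are illustrative.
interviews = [
    {"id": 1, "outcome": "lost", "themes": {"no_exec_buyer", "pricing"}},
    {"id": 2, "outcome": "lost", "themes": {"no_exec_buyer", "product_gap_sso"}},
    {"id": 3, "outcome": "won",  "themes": {"strong_champion"}},
    {"id": 4, "outcome": "lost", "themes": {"no_exec_buyer", "late_positioning"}},
    {"id": 5, "outcome": "won",  "themes": {"strong_champion", "pricing"}},
]

THRESHOLD = 0.20  # a theme matters once it appears in more than 20% of interviews

counts = Counter(theme for i in interviews for theme in i["themes"])
n = len(interviews)
strategic = {t: c / n for t, c in counts.items() if c / n > THRESHOLD}

for theme, share in sorted(strategic.items(), key=lambda kv: -kv[1]):
    print(f"{theme}: {share:.0%} of interviews")
```

The point of the sketch is the gate, not the data structure: a theme cited once (20% here) stays a data point, while anything clearing the threshold gets promoted to a strategic finding with an owner.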

THE FRAMEWORK

The full interrogation framework is Dispatch #007 — SDR Qualification Framework. 38 questions across four sections that expose whether your pipeline reflects real buying intent or wishful thinking. $97. Instant download.

See the full framework →

Making Findings Operational

This is where most programmes die. The findings are real, the deck is compelling, and then nothing changes. Operationalising win/loss findings requires four things: assigned owners, defined actions, a timeline, and a feedback loop.

Every major finding should have a named owner. Not "the sales team" or "product." A specific person who is accountable for determining whether and how the organisation responds to this finding. That person needs to define a concrete action — updating the battle card, changing the discovery question sequence, adding a feature to the product roadmap, adjusting the ICP definition — and a deadline for when it will be done.
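One lightweight way to make the owner, action, and deadline explicit is a record per finding. The sketch below is illustrative Python with assumed field names and made-up examples, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    theme: str
    owner: str            # a named person, never "the sales team"
    action: str           # the concrete change the owner commits to
    deadline: date
    status: str = "open"  # open -> done; reviewed at the next cycle

    def is_overdue(self, today: date) -> bool:
        # An open action past its deadline surfaces at the next review.
        return self.status == "open" and today > self.deadline

# Invented examples for illustration only.
findings = [
    Finding("No access to economic buyer", "J. Smith",
            "Add exec-access checkpoint to stage 2 exit criteria",
            date(2024, 9, 30)),
    Finding("SSO gap cited in enterprise losses", "A. Patel",
            "Scope SSO for the next roadmap review",
            date(2024, 10, 15)),
]

overdue = [f.theme for f in findings if f.is_overdue(date(2024, 10, 5))]
print(overdue)
```

Whatever the tooling, the design choice that matters is that every finding carries a single accountable name and a date, so the next review can check completion rather than re-litigate the analysis.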

The feedback loop closes the cycle. At the next win/loss review, the programme should include a check on whether the actions from the previous cycle were completed and whether they appear to have had any effect on win rates in the relevant segments. Without this, the programme is a quarterly ritual with no accountability. With it, it is a continuous improvement engine for the sales motion. The connection to conversion rates is direct — for a view on how win/loss findings translate to MQL-to-SQL performance, see MQL to SQL: Why Your Conversion Rate Is Lying to You.

Connecting Findings to ICP, Sales Process, and Product Roadmap

Win/loss analysis is most valuable when it is integrated with adjacent decisions rather than treated as a standalone programme. The three highest-leverage connections are ICP definition, sales process design, and product roadmap prioritisation.

ICP Refinement

Win/loss data is one of the best inputs for ICP refinement because it is based on actual buying decisions rather than theoretical fit criteria. If your programme reveals that you win consistently in companies with a specific revenue range, team structure, or technology stack, and lose consistently outside those parameters, that is empirical evidence for ICP revision. Most ICP definitions are built from assumptions. Win/loss data replaces assumptions with evidence. For more on building an ICP that reflects real buying behaviour, see How to Build an ICP That Sales Will Actually Use.

Sales Process Design

Process failures that appear consistently in loss interviews — failure to access the economic buyer, loss of momentum after the demo, inability to articulate ROI, late competitive positioning — should drive changes to the sales methodology, not just coaching conversations. If multiple reps are making the same mistake at the same stage, it is a process problem, not a rep problem. The fix is a process change, not a performance review. The data from win/loss interviews is the diagnostic that makes this distinction possible. SDR qualification is often an upstream version of the same problem: the deals that enter the pipeline and are lost late were often not properly qualified early, which the SDR productivity metrics will reflect if you look at the right numbers.

Product Roadmap

Product gaps cited in loss interviews are some of the most credible inputs a product team can receive. They are not requests from existing customers who want more features. They are requirements that caused a prospective customer to choose a competitor. The commercial weight of that signal is different. A product gap that appears in 30% of loss interviews in a specific segment represents a quantifiable revenue opportunity, not just a user story. Win/loss findings presented with that framing get prioritised differently than feature requests. The data quality that supports this analysis depends heavily on clean CRM records — the underlying challenge is detailed at CRM Data Quality: Why Your Forecast Is Always Wrong.

Programme Governance

A win/loss programme needs governance to stay alive and useful. That means a defined cadence — typically quarterly analysis cycles with monthly interview collection. It means a minimum sample size — at least 10-15 interviews per cohort before thematic conclusions are drawn. It means an owner — one person who is accountable for the programme running, the interviews being conducted, the analysis being done, and the findings being presented and acted on. And it means executive sponsorship — someone at the CRO or VP level who cares enough about the findings to hold the organisation accountable for acting on them.

Programmes without governance decay into the slide deck problem described at the start. The quarterly meeting becomes a ritual. The interviews stop getting scheduled. The findings stop being acted on. And eventually someone asks why the win rate has not improved and nobody connects it to the programme that stopped running 18 months ago. The metrics that tell you whether the programme is working belong in the same review as your broader commercial health indicators — see RevOps Metrics: The 12 Numbers That Actually Matter for the full picture.

The buyer who chose your competitor is the most valuable market research you never paid for. Most companies never ask.

Win/loss analysis is not difficult in principle. Reach out to buyers after decisions are made. Listen without defensiveness. Code the patterns. Give the findings to people with the authority and accountability to act on them. Check whether the actions worked. Repeat. The organisations that do this consistently — not perfectly, but consistently — develop a feedback loop that compounds over time. Their win rates improve. Their ICP sharpens. Their sales process closes the gaps that competitors are currently exploiting. The ones that produce slide decks keep losing to the same competitors for the same reasons and calling it a pricing problem.
