◆   Field Dispatch #003 — Free Sample   ◆   Section 01 of 04   ◆
Dispatch #003  —  Professor Pipeline

Assumption Exposure —
Section 01 of 04

The full dispatch contains 38 questions across four sections. What follows is Section 1 in its entirety — 10 questions, each with its mechanism, what it reveals, and the red flags that mean the quota was built on fiction before the first rep received it.

Read it. If you recognise the problems, the remaining 28 questions are in the full dispatch.

Free sample — no email required

Assumption Exposure

Run it before every quota-setting conversation. Run it when the model has been presented as the answer. Run it when nobody in the room can explain where the number came from. Ten questions. The model is not the evidence.

Q01 / Assumption Exposure
What historical data was used to construct this quota — and has anyone verified it?
Why it works

Quota models are routinely built on unverified historical data. Last year's actuals get adjusted for "growth expectations" without checking whether the underlying source data was accurate. "The model says so" is not a data source. The model is downstream of an assumption someone made before the first cell was populated.

What the answer reveals

If the answer is "the model" or "last year's number plus growth," that is not historical data. That is assumption stacked on assumption. Historical data means actuals by rep, by segment, by motion — verified against what actually closed, not what was forecast. If nobody can point to the primary source, the quota has no foundation.

Red flags
  • "The model was built by finance" (nobody in the room has seen the inputs)
  • "We used last year as the baseline" (without verifying last year's actuals)
  • "That's how we've always done it"
  • The spreadsheet cannot be traced to a primary data source

Q02 / Assumption Exposure
What is the assumed win rate this quota is built on — and where did that number come from?
Why it works

Win rates are the most commonly fabricated number in a quota model. They are also the most consequential. A 5-point shift in win rate can change the implied pipeline requirement by 20% or more. When win rates are assumed rather than derived, every downstream number in the model is wrong.

What the answer reveals

If nobody can point to a specific dataset and time period from which the win rate was derived, the quota is built on a guess dressed up as a percentage. Win rates need to be segmented: new business vs expansion, enterprise vs mid-market, inbound vs outbound. A blended number is an average of assumptions.

Red flags
  • "We use 25% — that's industry standard"
  • Win rate sourced from a period when the product, market, or team was materially different
  • "It hasn't changed much year on year" (without checking)
  • Win rate not segmented by motion or segment
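The sensitivity claimed above is simple arithmetic: required pipeline is quota divided by win rate. A minimal sketch, using a hypothetical $1m quota and illustrative win rates — none of these figures are benchmarks:

```python
def required_pipeline(quota: float, win_rate: float) -> float:
    """Pipeline needed to cover a quota at a given win rate."""
    if not 0 < win_rate <= 1:
        raise ValueError("win_rate must be in (0, 1]")
    return quota / win_rate

quota = 1_000_000  # hypothetical annual quota

at_25 = required_pipeline(quota, 0.25)  # 4,000,000 in pipeline
at_20 = required_pipeline(quota, 0.20)  # 5,000,000 in pipeline

# A 5-point drop in win rate (25% -> 20%) raises the pipeline
# requirement by a quarter.
increase = (at_20 - at_25) / at_25
print(f"{increase:.0%}")  # → 25%
```

Run the same calculation per segment — a blended win rate hides exactly the shifts this question is trying to surface.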

Q03 / Assumption Exposure
What is the assumed average deal size — and how does it compare to last year's actual?
Why it works

Deal size assumptions drift upward in quota models because optimism is easier than evidence. Every product launch, every new segment, every sales hire with a bigger Rolodex creates a reason to assume deal sizes will grow. The question is whether the evidence supports the assumption, or whether the assumption was made first and the evidence was retrofitted.

What the answer reveals

If the assumed average deal size is more than 10% above last year's actual, someone needs to explain what changed in the market, the product, or the sales motion to justify it. If no structural change can be named, the deal size assumption is aspirational. A quota built on aspiration is not a plan.

Red flags
  • "Deal sizes are trending up" (anecdotally, not in the data)
  • Average deal size sourced from a small number of outlier wins
  • "The enterprise motion will lift the average" (with no enterprise pipeline to support it)
  • No comparison to prior year actuals in the model

Q04 / Assumption Exposure
How many quota-carrying reps is this number built on — and are all those roles currently filled?
Why it works

Quota models are frequently built against a planned headcount rather than an actual headcount. A plan that requires 24 quota-carrying reps when 19 are in seat is mathematically broken before the quarter starts. The gap between planned and actual headcount is the invisible load that the existing team will be asked to carry — without being told that is what they are being asked to do.

What the answer reveals

If all roles are not currently filled, the quota number must be adjusted to reflect actual capacity, or the unfilled headcount must be shown explicitly as a gap with a realistic hire date. A quota that depends on headcount that doesn't exist yet is not a plan. It is an HR dependency with a revenue number attached to it.

Red flags
  • Model built to plan headcount, not current headcount
  • "We're hiring — those roles will be filled by Q2" (with no signed offers)
  • No adjustment for ramp time on open roles
  • Existing reps asked to "cover" unfilled territories without quota relief

Q05 / Assumption Exposure
What new logo assumptions are baked in — and what is last year's actual new logo count?
Why it works

New logo acquisition is the hardest motion to accelerate and the most optimistically modelled component of any quota. New logo counts require pipeline that doesn't exist yet, sales cycles that have to be run from cold, and win rates that are typically lower than expansion. When the new logo assumption is wrong, there is no easy substitute.

What the answer reveals

If the number requires 40% more new logos than last year without a corresponding increase in pipeline capacity, headcount, or market opportunity, the assumption is fiction. New logo targets need to be traceable back to prospecting capacity, outbound volume, and inbound lead flow — not to a percentage growth assumption applied to last year's count.

Red flags
  • New logo target higher than last year with no change in SDR headcount or outbound motion
  • "New product will open up new markets" (with no ICP validation)
  • New logo and expansion combined in a single revenue line
  • Nobody has compared the new logo target to the current pipeline generation rate
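Tracing a new logo target back to prospecting capacity, as the paragraph above demands, is a short calculation. Every rate below is an illustrative assumption — the point is the structure, not the values:

```python
# Illustrative prospecting-capacity inputs (all assumed, not benchmarks)
sdrs = 4
opps_per_sdr_per_month = 8    # assumed qualified opps sourced per SDR
inbound_opps_per_month = 10   # assumed inbound lead flow
new_logo_win_rate = 0.20      # assumed, new business only

monthly_opps = sdrs * opps_per_sdr_per_month + inbound_opps_per_month
supportable_logos = monthly_opps * 12 * new_logo_win_rate

# Roughly 100 supportable new logos per year on these assumptions.
# If the quota assumes 140, the gap is a capacity problem, not a
# motivation problem.
print(round(supportable_logos))
```

If the target exceeds what this traces to, something upstream — SDR headcount, outbound volume, inbound flow, or win rate — has to change, and the model should name which.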

Q06 / Assumption Exposure
What is the assumed ramp time for new hires — and is that reflected in the capacity model?
Why it works

Ramp time is almost never correctly modelled in quota construction. Organisations consistently underestimate how long it takes a new rep to reach full productivity — and overestimate what a ramping rep produces along the way. A rep hired in January in a typical B2B sales environment rarely produces at full quota capacity before Q3. If the quota model treats new hires as fully productive from day one, every week of ramp is invisible debt.

What the answer reveals

If ramp time is not explicitly modelled — with actual hire dates, actual ramp periods based on historical data, and actual expected productivity during ramp — the capacity number is wrong. And the quota built on that capacity number is wrong by the same margin, distributed across the org in ways that nobody will be able to trace back to the original assumption.

Red flags
  • New hires modelled at 100% quota capacity from month one
  • "Ramp is three months" (with no historical data to support it)
  • No distinction between ramping and tenured rep productivity in the model
  • Ramp assumptions more optimistic than last year's actual new hire performance
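Explicit ramp modelling looks like this in miniature. The ramp curve, hire month, and quota figure below are all illustrative assumptions — a real model would derive the curve from last year's new hire actuals:

```python
def monthly_capacity(full_quota_per_month: float,
                     hire_month: int,
                     month: int,
                     ramp_curve: list) -> float:
    """Expected production in a given month (1-12) for a rep hired in hire_month."""
    tenure = month - hire_month  # 0 = first month in seat
    if tenure < 0:
        return 0.0  # not yet hired
    if tenure < len(ramp_curve):
        return full_quota_per_month * ramp_curve[tenure]
    return full_quota_per_month  # fully ramped

# Assumed six-month ramp: 0%, 10%, 25%, 50%, 75%, 90%, then 100%.
RAMP = [0.0, 0.10, 0.25, 0.50, 0.75, 0.90]

full = 100_000 / 12  # hypothetical $100k annual quota, per month

# A rep hired in January (month 1), across the full year:
year = [monthly_capacity(full, 1, m, RAMP) for m in range(1, 13)]
annual = sum(year)

# Modelled at 100% from month one, the rep carries 12 months of quota.
# Ramp-adjusted, they deliver roughly 71% of it.
print(f"{annual / (full * 12):.0%}")
```

The gap between 100% and the ramp-adjusted figure is the invisible debt the paragraph above describes — and it compounds across every open role in the plan.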

Q07 / Assumption Exposure
How much of the number is dependent on renewals and expansion — and is that separated from new business?
Why it works

Conflating new business and expansion into a single revenue number is one of the most dangerous things a revenue organisation can do during quota construction. The two motions have different capacity requirements, different sales cycles, different win rates, and different risk profiles. When they are blended, neither can be properly managed, and when the plan misses, nobody can identify where it went wrong.

What the answer reveals

If the renewal and expansion contribution has not been separated from new business in the model, the quota cannot be stress-tested properly. It also means the org does not know how much of its growth plan depends on customers it already has — and therefore has no way of knowing how much of the plan is at risk if churn rates move against them.

Red flags
  • New ARR and expansion ARR reported as a single number
  • Renewal rate assumed at last year's level without verification
  • Expansion assumptions based on "land and expand" strategy with no historical expansion data
  • No gross retention figure in the model

Q08 / Assumption Exposure
What market assumptions support the growth rate — and who challenged them?
Why it works

Growth rate assumptions are the most politically loaded number in a quota model. They are typically set by the people whose compensation depends on the number being large, reviewed by a board who wants the number to be ambitious, and challenged by nobody, because the person most qualified to challenge them is the one being evaluated against hitting them.

What the answer reveals

A 25% growth target requires a market that will absorb 25% more of your product than last year. That means either the total addressable market is expanding at a comparable rate, the organisation is taking share from competitors at a measurable rate, or new products are opening previously inaccessible segments. If none of these can be evidenced, the growth rate is aspiration, not analysis.

Red flags
  • "The market is growing" (no source, no rate, no TAM analysis)
  • Growth rate derived from investor expectations rather than market data
  • No named person who stress-tested the assumption before the model was finalised
  • "We've always grown at this rate" (as justification for continuing to do so)

Q09 / Assumption Exposure
Has anyone built the quota bottom-up from rep-level capacity and compared it to the top-down number?
Why it works

The gap between a bottom-up and top-down quota is the organisation's optimism tax — the amount by which ambition exceeds what the current org is actually capable of producing. This gap is not a problem to be solved by motivation or management pressure. It is a structural reality that needs to be either resourced, accepted, or reduced. The only way to see it is to run both models.

What the answer reveals

If nobody has run the bottom-up model, the quota has not been constructed. It has been asserted. A bottom-up build starts with each rep's territory, applies their realistic win rate and deal size, accounts for ramp where applicable, and produces a number that the current organisation can credibly deliver. When that number is lower than the top-down target, the difference requires an explicit plan — not hope.

Red flags
  • No bottom-up model exists
  • Bottom-up model was built after the top-down number was set (retrofitted)
  • The two numbers were reconciled by adjusting the bottom-up assumptions to match the top-down target
  • Nobody in the room has seen both models side by side
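The bottom-up build described above is mechanically simple: per rep, credible pipeline times a segmented win rate times a ramp factor, summed and compared to the top-down target. Every figure below is a hypothetical placeholder:

```python
# (rep, credible territory pipeline, segment win rate, ramp factor)
# All values are illustrative assumptions.
reps = [
    ("rep_a", 2_000_000, 0.22, 1.0),
    ("rep_b", 1_500_000, 0.25, 1.0),
    ("rep_c", 1_800_000, 0.18, 0.6),  # still ramping
]

bottom_up = sum(pipe * wr * ramp for _, pipe, wr, ramp in reps)
top_down = 1_200_000  # hypothetical board-set target

# The difference is the optimism tax: it must be resourced,
# accepted, or reduced -- not motivated away.
gap = top_down - bottom_up
print(f"bottom-up: {bottom_up:,.0f}  gap: {gap:,.0f}")
```

The order matters: build this before the top-down number is circulated. A bottom-up model retrofitted to match the target measures nothing.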

Q10 / Assumption Exposure
If you hit 80% of this quota, what happens to the business — and was that scenario modelled?
Why it works

Quota models are almost always built to the upside. The question of what happens if the number is not hit is rarely asked with the same rigour as the question of how to hit it. This is not an accident. Modelling the downside requires acknowledging that the plan might fail — and organisations that punish failure rarely create conditions where the downside can be examined honestly.

What the answer reveals

A quota that cannot survive 80% attainment without triggering a cash crisis, a layoff, or a covenant breach is not a plan. It is a bet. If the downside scenario was never modelled, the number was set by people who were not prepared to be honest about the risk they were creating for the organisation below them. That is not quota construction. That is hope with a spreadsheet attached.

Red flags
  • No downside scenario in the model
  • "We need to hit the number" (as a reason not to model the miss)
  • Business plan is structurally dependent on 100%+ quota attainment
  • The financial model and the revenue model were built by different teams that never reconciled
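Modelling the miss is a one-function exercise. The cost structure and cash figures below are illustrative assumptions, not a template for any particular business:

```python
def scenario(plan_revenue: float, attainment: float,
             fixed_costs: float, variable_cost_rate: float,
             opening_cash: float) -> dict:
    """Cash position after one year at a given quota attainment."""
    revenue = plan_revenue * attainment
    costs = fixed_costs + revenue * variable_cost_rate
    return {"revenue": revenue,
            "net": revenue - costs,
            "closing_cash": opening_cash + revenue - costs}

# Hypothetical inputs: $10m plan, $7m fixed costs, 25% variable
# cost rate, $2m opening cash.
plan = scenario(10_000_000, 1.00, 7_000_000, 0.25, 2_000_000)
miss = scenario(10_000_000, 0.80, 7_000_000, 0.25, 2_000_000)

# At 100% attainment: net +500k. At 80%: net -1.0m, which consumes
# half the opening cash. Whether that is survivable is exactly the
# question the model should answer before the quota is set.
```

If the 80% row triggers a cash crisis, a layoff, or a covenant breach, the number is a bet, not a plan — and everyone carrying it deserves to know that.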

End of free sample — Section 01 of 04

The remaining 28 questions are in the full dispatch.

Three more sections. A bottom-up construction audit. A capacity and coverage check. An SKO sanity protocol. A scoring rubric. Four printable worksheets.

02 Bottom-Up Construction Audit
03 Capacity & Coverage Check
04 SKO Sanity Protocol
Get the Full Dispatch — $97

Instant download. No subscription. No upsell.