- Only 24% of sales organisations achieve forecasts within 5% of actual close. 43% miss by 10% or more — every quarter.
- The costs extend beyond the revenue shortfall: hiring plans, capital allocation, investor confidence, and CAC calculations are all built on the forecast number.
- Most forecast inaccuracy is not a market signal. It is an execution signal — deals that were in the pipeline and should have closed but didn't because the right action wasn't taken.
- Forecast accuracy improves most reliably when inputs change from rep-reported confidence to activity-signal-derived probability.
The quarter closes short. Your CRO says it was "timing": three significant deals are expected to close in the first two weeks of next quarter. The board accepts the explanation. The forecast for next quarter is submitted, and it starts within 10% of where last quarter's forecast did, before the same pattern played out.
This is the most expensive recurring pattern in B2B business. Not because any single forecast miss is catastrophic — most are survivable, and most leadership teams manage through them. The cost is structural: every decision made using the forecast as an input inherits its inaccuracy, and those decisions compound for months or quarters after the number was wrong.
The Direct Cost: The Revenue That Wasn't There
The most visible cost of a forecast miss is the direct revenue shortfall. If the forecast was $4.2M and the result was $2.9M, the business had a $1.3M gap. That gap affects cash position, period-end financial statements, and metrics that investors or lenders track. In a VC-backed environment, it affects the narrative going into the next raise. In a bootstrapped environment, it affects the investment decisions the business can make in the following quarter.
This is the cost that gets measured. It is also, counterintuitively, one of the smaller costs of the pattern — because it is known, bounded, and recoverable in the following quarter if the conditions are right.
The Hiring Cost: Recruiting Against a Plan That Doesn't Match Reality
The headcount plan for most B2B organisations is built on revenue projections. At $X revenue, you need Y sales reps, Z SDRs, one additional customer success manager, and a part-time RevOps resource. The plan is built using the forecast as the baseline, which means that a consistently overestimated forecast produces a consistently over-ambitious headcount plan.
The cost of hiring against an optimistic forecast shows up in two ways. First, if you hire to a plan and the revenue doesn't arrive, you are paying for capacity the business cannot afford. Second, if you delay hiring because the revenue didn't arrive in Q1 as expected, you miss the compound value of having that capacity in place during Q2 and Q3. Neither outcome appears in the forecast miss post-mortem. Both are directly caused by it.
The more subtle version of this cost: when hiring decisions are made in anticipation of revenue that has been "timing-shifted" from last quarter, the organisation builds structure around pipeline that has been counted twice. The deals from last quarter that slipped, plus the new deals this quarter, plus the organic pipeline growth — the headcount plan is sizing for all three simultaneously, when in reality some of the slipped deals will be lost rather than recovered.
The Capital Allocation Cost: Investment Sized to Wrong Revenue
Marketing budgets, product investment, and infrastructure spend are typically sized as a proportion of revenue — either current revenue or forward-looking projections. When the projection is consistently inflated by pipeline that will leak before it closes, the investment decisions built on top of it are also inflated.
A marketing team sized to generate pipeline for $5M in quarterly revenue is carrying costs that the business cannot afford when it is consistently closing $3.2M. The budget was approved using the forecast number. Nobody explicitly decided to over-invest in marketing — the number they were given was wrong, and they spent accordingly.
This is one of the most damaging aspects of forecast inaccuracy, because the downstream investments are made in good faith. The CFO approved a marketing budget that made sense relative to the revenue projection. The fact that the revenue projection included 30% execution leakage that was invisible in the data was not in anyone's analysis, because the tools to see it did not exist.
The Board Credibility Cost: A Harder-to-Quantify But Real Expense
For growth-stage and scaling companies, board confidence in management's ability to forecast and execute is a real asset. A board that believes their leadership team can predict outcomes within 5% is more willing to approve investment, more supportive of ambitious plans, and less inclined to push for process changes that create friction. A board that has heard "timing issues" and "deals that slipped" four quarters in a row is making a different set of decisions.
The credibility cost is harder to model than hiring or capital allocation costs, but it is not hypothetical. Companies that demonstrate consistent forecast accuracy raise capital on better terms, retain board confidence during difficult periods, and have more latitude to pursue strategic moves that require board approval. The inverse is also true.
The CAC Distortion Cost: Marketing Investment Against a False Baseline
Customer acquisition cost is calculated as marketing spend divided by customers acquired. But the denominator — customers acquired — is directly affected by sales execution quality. Every lead that marketing generates and that sales follows up on once before abandoning is a lead that increased the numerator of the CAC equation without contributing to the denominator.
When a forecast miss is driven by execution leakage — deals that should have closed but didn't because no action was taken — the CAC calculation for that period is artificially inflated. The marketing team spent what they were supposed to spend. The conversion rate dropped because execution quality dropped. Unless you can distinguish execution-driven conversion failure from genuine lead quality failure, the CAC calculation will attribute the problem to the wrong function and produce the wrong prescription.
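The distortion described above can be made concrete with a small sketch. All figures here are hypothetical, chosen only to show how execution leakage moves the CAC number while spend and lead quality stay constant.

```python
# Hypothetical quarter: fixed marketing spend and lead volume.
spend = 300_000              # marketing spend for the period, in dollars
leads = 1_000                # qualified leads generated
baseline_conversion = 0.06   # conversion rate when leads are worked properly
execution_leakage = 0.30     # share of winnable leads abandoned after one touch

customers_expected = leads * baseline_conversion
customers_actual = leads * baseline_conversion * (1 - execution_leakage)

cac_expected = spend / customers_expected
cac_actual = spend / customers_actual

# Same numerator, same lead quality: the entire CAC increase is
# execution noise, not a marketing efficiency problem.
print(f"Expected CAC: ${cac_expected:,.0f}")
print(f"Actual CAC:   ${cac_actual:,.0f}")
```

Unless the leakage term is measured separately, the higher actual CAC gets attributed to marketing, which is exactly the wrong-prescription failure the paragraph describes.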
To understand the full cost of a recurring forecast miss pattern, work through each layer:
- Direct revenue gap: Forecast minus actual close, for each of the last four quarters. Is the miss consistent in direction (over-forecast) and magnitude? Consistent over-forecasting points to a structural input problem, not a market problem.
- Hiring cost: Were any roles filled in the quarter against pipeline that subsequently slipped? What is the cost of carrying that capacity while the pipeline recovers? Were any hires delayed because of the revenue miss, and what is the opportunity cost of that delay?
- Investment sizing gap: What budget was approved for the quarter that was sized against the forecast number rather than actual close rates? What is the delta between that investment and what would have been approved at the actual close rate?
- CAC attribution: What proportion of the conversion rate drop in the period is attributable to execution failure versus lead quality? If you cannot answer this question with data, your CAC calculation is carrying unattributed execution noise.
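The first layer of the audit, the direct revenue gap and its consistency check, is simple enough to run on four numbers. The quarterly figures below are hypothetical, used only to illustrate the shape of the check.

```python
# Hypothetical last four quarters: (quarter, forecast $M, actual close $M).
quarters = [
    ("Q1", 4.2, 2.9),
    ("Q2", 4.5, 3.3),
    ("Q3", 4.1, 3.0),
    ("Q4", 4.8, 3.4),
]

gaps = []
for name, forecast, actual in quarters:
    gap = forecast - actual
    pct = gap / forecast * 100
    gaps.append(gap)
    print(f"{name}: gap ${gap:.1f}M ({pct:.0f}% over-forecast)")

# The diagnostic: if every gap has the same sign and similar magnitude,
# the miss is structural (an input problem), not quarter-specific noise.
consistent_over_forecast = all(g > 0 for g in gaps)
print("Consistent over-forecast:", consistent_over_forecast)
```

The hiring, investment sizing, and CAC layers follow the same pattern: compare what was committed against the forecast with what would have been committed against the actual close rate.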
What Changes When the Forecast Is Built on Activity Signals
Most sales forecasts are assembled from two inputs: CRM stage values (which reflect what reps last updated) and verbal confidence assessments from reps and managers. Both of these inputs are subject to the same systematic bias: they reflect what people believe or hope will happen, not what the activity data shows is actually happening in each deal.
When a deal has been in "Proposal Sent" for 28 days with no two-way email contact, no follow-up meeting scheduled, and a rep who hasn't updated the stage because they still believe it will close — the CRM shows it as a live, high-probability deal. The forecast includes it. The planning is built on it. But the activity signals have been telling a different story for almost a month.
Forecast accuracy improves most reliably when the inputs change. Reading email activity, calendar data, and meeting patterns directly produces a probability assessment that is grounded in what is happening, not what was last recorded. When deals that look active in the CRM are shown to have no activity signal to support their stage, they can be flagged, investigated, and either re-engaged or removed from the committed pipeline — before they become a quarter-end miss and a board explanation.
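A minimal sketch of the stale-deal check described above, assuming simple in-memory deal records. The field names, the 21-day threshold, and the deal data are all illustrative; in practice these signals would come from CRM, email, and calendar integrations.

```python
from datetime import date

STALE_DAYS = 21  # illustrative threshold for "no activity" in a late stage

# Hypothetical deal records with activity signals attached.
deals = [
    {"name": "Acme", "stage": "Proposal Sent",
     "last_two_way_email": date(2024, 5, 1), "next_meeting": None},
    {"name": "Globex", "stage": "Proposal Sent",
     "last_two_way_email": date(2024, 5, 27), "next_meeting": date(2024, 6, 3)},
]

def is_stale(deal, today):
    """Flag deals whose CRM stage is not backed by recent activity signals."""
    days_quiet = (today - deal["last_two_way_email"]).days
    return days_quiet > STALE_DAYS and deal["next_meeting"] is None

today = date(2024, 5, 30)
flagged = [d["name"] for d in deals if is_stale(d, today)]
# Acme is flagged: 29 days without two-way contact and no meeting booked,
# despite a live-looking stage. It should be re-engaged or pulled from commit.
print("Flag for review:", flagged)
```

The point of the sketch is the input change: the probability assessment is driven by observed activity (email recency, booked meetings), not by the stage value a rep last recorded.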
The forecast miss is treated as a sales problem because it surfaces in the sales number. The costs of getting it wrong — in hiring, investment, credibility, and CAC — are distributed across the entire business and extend well beyond the quarter in which the miss occurred.