How do post-close analytics identify which forecast assumptions were right and which were wrong?
Post-Quarter Forecast Audit
Direct: Within 5 days of quarter-end, compare actual closes against forecast bands by deal stage, buyer profile, and rep to isolate which assumptions broke. Rebuild the model quarterly based on variance root-cause analysis.
Operator Detail
Post-quarter analysis is where forecasting gets honest. Don't just resolve to predict better next quarter; analyze where this quarter went wrong.
The five-part audit:
1. Stage-by-stage win rate validation:
- Compare forecasted close rates by stage vs. actual
- Example: You predicted Proposal 60%, actual was 52%
- Action: Adjust Proposal multiplier to 0.87x (52÷60) next quarter
- Variance: Track by stage; highest variance stages get weighting review
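The multiplier arithmetic in step 1 can be sketched in a few lines of Python. The stage names and rates below are illustrative, not prescribed:

```python
# Derive per-stage calibration multipliers (actual rate / forecast rate).
# A Proposal stage forecast at 60% that closed at 52% yields 52/60 = 0.87x.
forecast_rates = {"Discovery": 0.20, "Proposal": 0.60, "Negotiation": 0.80}
actual_rates = {"Discovery": 0.18, "Proposal": 0.52, "Negotiation": 0.81}

def stage_multipliers(forecast, actual):
    """Ratio to apply to next quarter's stage weightings."""
    return {stage: round(actual[stage] / rate, 2)
            for stage, rate in forecast.items()}

print(stage_multipliers(forecast_rates, actual_rates)["Proposal"])  # 0.87
```

Stages whose multiplier drifts furthest from 1.0 are the ones flagged for weighting review.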
2. Deal-size bucket analysis:
- Segment actuals by deal size: <$25K, $25-50K, $50-100K, >$100K
- Compare forecast accuracy in each bucket
- Insight: Larger deals often slip more; smaller deals close faster
- Action: Apply size-based multipliers if pattern holds across 2 quarters
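A minimal sketch of the bucket comparison, assuming hypothetical deal records with a forecast flag and an outcome (the records are invented):

```python
from collections import defaultdict

# Bucket edges mirror the bands above.
def size_bucket(amount):
    if amount < 25_000:
        return "<$25K"
    if amount < 50_000:
        return "$25-50K"
    if amount < 100_000:
        return "$50-100K"
    return ">$100K"

deals = [
    {"amount": 18_000, "forecast_won": True, "won": True},
    {"amount": 42_000, "forecast_won": True, "won": True},
    {"amount": 130_000, "forecast_won": True, "won": False},  # large deal slipped
    {"amount": 150_000, "forecast_won": True, "won": False},
]

tally = defaultdict(lambda: [0, 0])  # bucket -> [correct calls, total]
for d in deals:
    bucket = tally[size_bucket(d["amount"])]
    bucket[0] += d["forecast_won"] == d["won"]
    bucket[1] += 1

accuracy = {b: correct / total for b, (correct, total) in tally.items()}
# In this toy data, both >$100K deals were called wrong -> accuracy 0.0
```

If the large-deal bucket shows materially worse accuracy two quarters running, that's the signal to add size-based multipliers.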
3. Buyer profile deep-dive:
- Compare close rates by buyer title (C-suite, VP, Manager)
- Compare by industry (SaaS, Financial Services, Healthcare, etc.)
- Compare by deal type (new business, expansion, renewal)
- Insight: Some buyer profiles close 20% faster/slower
- Action: Re-weight forecasts by buyer profile
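The re-weighting in step 3 reduces to a ratio against the blended close rate. The profile counts below are invented for illustration:

```python
# (wins, total deals) by buyer title; numbers are illustrative.
by_profile = {"C-suite": (12, 30), "VP": (20, 40), "Manager": (9, 30)}

overall = (sum(w for w, _ in by_profile.values())
           / sum(n for _, n in by_profile.values()))

# Re-weight factor: profile close rate relative to the blended rate.
reweight = {p: round((w / n) / overall, 2) for p, (w, n) in by_profile.items()}
# Here VP deals close ~22% above the blend; Manager deals ~27% below
```

The same ratio works for industry and deal-type cuts; just swap the grouping key.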
4. Rep-level forecast accuracy:
- Which reps' estimates were closest to actuals?
- Which reps consistently over/under estimate?
- Action: Coach over-estimators; increase their weighting model discount
- Insight: Forecast quality varies by rep; use individual calibration factors
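One way to express the individual calibration factor: divide each rep's realized win rate by their average stated close probability. The rep data here is made up:

```python
reps = {
    "Rep A": {"avg_estimate": 0.75, "actual": 0.68},  # over-estimator
    "Rep B": {"avg_estimate": 0.55, "actual": 0.60},  # under-estimator
}

def calibration_factor(rep):
    """Multiply the rep's raw close estimates by this to correct bias."""
    return round(rep["actual"] / rep["avg_estimate"], 2)

factors = {name: calibration_factor(r) for name, r in reps.items()}
# Rep A's estimates get discounted (~0.91x); Rep B's get boosted (~1.09x)
```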
5. Slipped-deal postmortem:
- Of the deals forecast to close that didn't, why?
- Competitive loss (% of slips)
- Budget delayed (% of slips)
- Legal hold (% of slips)
- Buyer disengagement (% of slips)
- Discovery mismatch (% of slips)
- Action: Identify these patterns early; add red-flag triggers to catch them mid-quarter
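Tallying the slip categories is a one-liner with `Counter`; the reason codes below are invented examples:

```python
from collections import Counter

# Reason codes logged during the slipped-deal postmortem (invented data).
slipped = ["Budget delayed", "Competitive loss", "Budget delayed",
           "Legal hold", "Budget delayed"]

shares = {reason: count / len(slipped)
          for reason, count in Counter(slipped).items()}
# Budget delays account for 60% of slips -> candidate for a red-flag trigger
```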
Post-Close Analytics Template
| Category | Forecast Assumption | Actual | Variance (relative) | Root Cause | Q+1 Action |
|---|---|---|---|---|---|
| Proposal win rate | 60% | 52% | -13% | Pricing escalation | Reduce Proposal to 52% baseline |
| Enterprise deals ($100K+) | 75% | 58% | -23% | Buyer committee too large | Add committee-size multiplier |
| Tech buyer deals | 65% | 71% | +9% | Strong demand signal | Increase to 73% |
| Rep A estimates | Avg 75% | Actual 68% | -9% bias | Rep over-estimates | Cap A's closing ests at 65% |
Why This Matters
Pavilion research: companies that run post-quarter audits and adjust their models improve forecast accuracy by 18-26% in subsequent quarters, and the gains compound quarter over quarter.
Bridge Group data: 60%+ of forecast variance comes from 3-4 systematic misassumptions (wrong stage rates, buyer-profile bias, seasonal under-adjustment). The audit surfaces them.
The Quarterly Rhythm
- Days 1-5 after Q-end: Close all deals, finalize actuals
- Days 6-8: Run audit (stage rates, deal sizes, buyer profiles, rep accuracy)
- Days 9-10: Present findings to sales leadership
- Day 11: Update weighting model for Q+1 forecast
- Day 15: Deploy new model; communicate changes to reps
TAGS: forecast-audit,post-quarter-analysis,assumption-validation,forecast-calibration,win-rate-tracking,model-improvement