
What attribution model works for a multi-touch enterprise sales motion?

4/29/2024

For enterprise multi-touch motions, run three attribution models in parallel rather than picking one: first-touch for lead-gen credit, last-touch for sales-motion credit, and W-shaped multi-touch (30/30/30/10) for pipeline diagnosis. Single-touch attribution lies systematically when the median enterprise deal sees 5-9 touches over 6+ months and involves 6.8 buying-group members on average per Gartner's B2B Buying research (https://www.gartner.com/en/sales/insights/b2b-buying-journey). The right answer isn't a model; it's a model triangulation discipline -- AND a willingness to test it adversarially with incrementality holdouts.

Why single-touch fails (the real mechanics)

A $500K deal touches marketing 7 times (content -> webinar -> whitepaper -> 3 nurture emails -> retargeting), then sales 3 times (cold call -> demo -> proposal). Last-touch credits the proposal and tells marketing they did nothing. First-touch credits the blog post and tells sales they did nothing. Both are fiction. Forrester's B2B buyer research (https://www.forrester.com/blogs/category/b2b-buying/) finds the average enterprise buying group spans 6 to 10 stakeholders and produces roughly 27 distinct buying-group interactions across the cycle -- meaning even "first touch" is a compression of 6+ first touches per account.
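The divergence is easy to see in miniature. A sketch of the 10-touch, $500K deal above scored by first-touch and last-touch (touch names are illustrative): each model hands 100% of the credit to a single touch and zeroes out the other nine.

```python
# Illustrative only: the 7 marketing + 3 sales touches from the example deal.
touches = (
    ["Content", "Webinar", "Whitepaper", "Nurture 1", "Nurture 2",
     "Nurture 3", "Retargeting"]            # 7 marketing touches
    + ["Cold Call", "Demo", "Proposal"]     # 3 sales touches
)
amount = 500_000

# First-touch: everything to the opening blog/content touch.
first_touch_credit = {touches[0]: amount}   # {"Content": 500000}

# Last-touch: everything to the closing proposal.
last_touch_credit = {touches[-1]: amount}   # {"Proposal": 500000}
```

Ten touches, and each single-touch model reports a story in which nine of them contributed nothing.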

Salesforce's State of Sales 6th Edition (https://www.salesforce.com/resources/research-reports/state-of-sales/) found 32% of revenue leaders explicitly distrust their attribution data, and 67% of reps say enterprise wins require coordinated multi-channel outreach -- not a single decisive touch.

The three models, with real formulas

1. First-touch (lead-gen accountability): 100% of deal credit to the first recorded touch.

2. Last-touch (closing-motion accountability): 100% of deal credit to the last touch before Closed-Won.

3. W-shaped multi-touch (journey truth): 30% to the first touch, 30% to lead conversion, 30% to opportunity creation, 10% to the closing touch.
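The three rules can be sketched as credit-allocation functions over an ordered list of touches. This is an illustrative implementation, not any vendor's API; the milestone indices (lead conversion, opportunity creation) are assumed inputs you would derive from your CRM.

```python
def first_touch(touches, amount):
    """100% of deal credit to the first recorded touch."""
    return {touches[0]: amount}

def last_touch(touches, amount):
    """100% of deal credit to the last recorded touch."""
    return {touches[-1]: amount}

def w_shaped(touches, lead_conv_idx, opp_create_idx, amount):
    """30/30/30/10 across first touch, lead conversion, opp creation, last touch."""
    credit = {}
    for touch, weight in [
        (touches[0], 0.30),
        (touches[lead_conv_idx], 0.30),
        (touches[opp_create_idx], 0.30),
        (touches[-1], 0.10),
    ]:
        # Accumulate in case one touch plays two roles (e.g. first touch
        # is also the lead-conversion touch).
        credit[touch] = credit.get(touch, 0.0) + weight * amount
    return credit

# The $300K deal from the table below: ~$90K each to Content/Webinar/Demo,
# ~$30K to the Proposal.
deal = w_shaped(["Content", "Webinar", "Demo", "Proposal"],
                lead_conv_idx=1, opp_create_idx=2, amount=300_000)
```

Note that the W-shape function must accumulate rather than assign, because in short journeys one touch can carry two milestone roles.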

Realistic example ($20M ARR, 60% enterprise mix)

| Deal | First Touch | Mid Touches | Last Touch | W-Shape Credit |
|---|---|---|---|---|
| $300K | Content (30%) | Webinar lead-conv (30%), Demo opp-create (30%) | Proposal (10%) | Content $90K, Webinar $90K, Demo $90K, Proposal $30K |
| $100K | Paid Ad (30%) | SDR call lead-conv (30%), Whitepaper opp-create (30%) | Negotiation (10%) | Paid $30K, SDR $30K, WP $30K, Neg $10K |

Operational implementation (90-day rollout)

  1. Days 1-30: Force every touch into Salesforce Activity + Campaign Member (no Activity = didn't happen). Without Activity hygiene, every model is garbage-in. Target >=95% Activity-on-Opportunity coverage before any model output is shared.
  2. Days 31-60: Stand up three dashboards -- one per model -- using the same Opportunity universe (same date filter, same stage cutoff).
  3. Days 61-90: Quarterly review with CFO + RevOps as neutral arbiter. If first/last/multi tell different stories, the gap IS the insight.
  4. Year 1: Lock the model. Don't re-weight mid-year; you'll be optimizing to noise.
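The Day 1-30 hygiene gate can be expressed as a simple coverage check. A minimal sketch, assuming opportunity records have already been exported from the CRM; the field names (`opp_id`, `activity_count`) are placeholders, not Salesforce API names.

```python
def activity_coverage(opps):
    """Fraction of opportunities with at least one logged Activity."""
    if not opps:
        return 0.0
    covered = sum(1 for o in opps if o["activity_count"] > 0)
    return covered / len(opps)

# Made-up export: one of four opportunities has zero logged Activities.
opps = [
    {"opp_id": "006A", "activity_count": 7},
    {"opp_id": "006B", "activity_count": 0},
    {"opp_id": "006C", "activity_count": 3},
    {"opp_id": "006D", "activity_count": 5},
]

coverage = activity_coverage(opps)          # 0.75
gate_passed = coverage >= 0.95              # hold model output until this is True
```

At 75% coverage the gate fails, which is the point: no dashboard ships until Activity hygiene clears the 95% bar.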

Bear Case (adversarial -- the case against attribution as you've been sold it)

Attribution is largely theater for boards. The honest critique:

  1. None of these models prove causation. First/last/multi-touch are correlation dressed up as math. The only causal evidence comes from randomized incrementality tests -- geo holdouts (Google Geo Experiments, Meta Conversion Lift, Nielsen MMM) -- and most CMOs refuse to run them because they expose dead spend. Salesforce's 32% distrust rate is the polite version of this.
  2. The deal-flipping touch is usually invisible. In a 6-month, 6.8-stakeholder enterprise cycle, the marginal interaction that flipped the deal is almost never the first or last in your CRM. It's a back-channel reference call, a Slack DM between champion and CFO, a hallway conversation at a customer event. If your CRM doesn't see it, no model can credit it.
  3. Multi-touch weights (40/40/20, W-shape 30/30/30/10) are arbitrary. Vendors picked them because they look balanced and pass executive smell-tests, not because randomized data justified them. Try re-running the same opportunities with three different weight schemes -- the channel rankings will move.
  4. Vendor-defined attribution is captured. When the platform that runs your campaigns also reports your campaign ROI, ask whether you trust the umpire to call balls and strikes against itself. Independent measurement (Bizible bought by Adobe; Dreamdata; data-warehouse-native models in dbt + Looker) reduces but does not eliminate this conflict.
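The weight-sensitivity critique above is easy to test yourself: score the same deals under several weight schemes and watch the channel rankings move. A toy sketch with made-up deals, applying weights positionally across four-touch journeys (a real run would map weights to milestone touches, as in the W-shape definition above):

```python
SCHEMES = {
    "w_shape": [0.30, 0.30, 0.30, 0.10],   # first / lead-conv / opp-create / last
    "u_shape": [0.40, 0.10, 0.10, 0.40],   # first and last heavy
    "linear":  [0.25, 0.25, 0.25, 0.25],   # equal credit
}

# Hypothetical closed-won deals: (ordered touches, deal amount).
DEALS = [
    (["Content", "Webinar", "Demo", "Proposal"], 300_000),
    (["Paid Ad", "SDR Call", "Whitepaper", "Negotiation"], 100_000),
]

def rank_channels(weights):
    """Total credit per channel under one weight scheme, best first."""
    credit = {}
    for touches, amount in DEALS:
        for touch, w in zip(touches, weights):
            credit[touch] = credit.get(touch, 0.0) + w * amount
    return sorted(credit, key=credit.get, reverse=True)

rankings = {name: rank_channels(w) for name, w in SCHEMES.items()}
```

Same deals, same touches, different weights: the ordering of channels shifts between schemes, which is exactly the arbitrariness the bear case warns about.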

Counter-prescription: If you must pick one model and you sell to enterprise, pick W-shaped multi-touch AND pair it with quarterly geo-level or audience-level incrementality holdouts on the top two paid channels. Anyone selling "unified attribution" without holdouts is selling vibes, not measurement.

Action: Pick W-shaped as primary. Implement for 12 months. Don't change weights. Pair with quarterly incrementality holdouts. Compare to last-touch monthly to detect drift.
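The quarterly holdout pairs the correlational model with a causal estimate. A minimal sketch of the lift read-out, with invented numbers; a real geo test also needs randomized geo assignment and a significance check before you act on the result.

```python
def incremental_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Relative lift of exposed geos/audiences over the held-out group."""
    exposed_rate = exposed_conv / exposed_n
    holdout_rate = holdout_conv / holdout_n
    if holdout_rate == 0:
        return float("inf")  # no baseline conversions; lift is undefined/large
    return (exposed_rate - holdout_rate) / holdout_rate

# Hypothetical: 120 conversions from 10,000 exposed accounts vs
# 90 conversions from 10,000 held-out accounts.
lift = incremental_lift(120, 10_000, 90, 10_000)   # ~0.33, i.e. ~33% incremental
```

If the lift is near zero while the attribution model credits the channel heavily, the channel is harvesting conversions it did not cause: that gap is the dead spend the bear case predicts.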


```mermaid
flowchart LR
    A[Account] --> B[First Touch<br/>Content/Ad]
    B --> C[Mid Touches<br/>Email/Webinar/Demo]
    C --> D[Last Touch<br/>Proposal/Negotiation]
    D --> E[Closed-Won]
    F[First-Touch<br/>100% to B] --> G[Channel Budget]
    H[W-Shape<br/>30/30/30/10] --> I[Journey Truth]
    J[Last-Touch<br/>100% to D] --> K[Sales Motion]
    L[Incrementality<br/>Holdout] --> M[Causal Truth]
```

TAGS: attribution-model, multi-touch, enterprise-sales, pipeline-analysis, marketing-sales-alignment, w-shaped, incrementality
