THE LEAD

Attribution models are confident. They're also wrong.

A B2B SaaS company ran a test last year. Their attribution model said paid social was driving 31% of pipeline, so they turned the spend off for 6 weeks. Pipeline dropped 4%.

That's a 27-point gap between what the model said and what actually happened. And this wasn't a small company running off gut feel. They had a multi-touch attribution platform, a dedicated RevOps team, and dashboards that would make your head spin.

Here's the problem: attribution models need clean inputs to produce clean outputs. They don't get them. The average B2B buyer interacts with 28 touchpoints before converting. Your model captures maybe 8 of those (and misclassifies 3).

So what's the model actually doing? It's distributing credit across the touchpoints it can see, which makes whatever you're spending the most on look like it's working the hardest. That's circular logic dressed up in a pie chart.

72% of marketing leaders say they trust their attribution data. But when Forrester tested that confidence against holdout experiments, the models were off by an average of 37%. Your attribution model isn't lying to you on purpose. It just can't see enough of the picture to tell the truth.

THE FRAMEWORK: The Attribution Reality Check (3 steps)

Before you rip out your attribution stack (don't), run these 3 checks. They'll tell you how much of your model's output you can actually trust.

1. The Holdout Test

Pick your "top performing" channel according to attribution. Pause it for 2 weeks in one geo while keeping it live in a comparable geo. Compare pipeline generation between the two. If the paused geo barely flinches, your model is overcrediting that channel. This is the most honest test in marketing, which is probably why so few teams run it.
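
If you want a quick way to read the results, here's a minimal sketch in Python. The geo names and pipeline numbers below are hypothetical; plug in your own weekly pipeline for the paused geo and the control geo, then put the observed drop next to what the model claims.

# Minimal sketch of a geo holdout readout (hypothetical numbers).
# test_geo had the channel paused; control_geo kept it live.
pre = {"test_geo": 120, "control_geo": 115}     # weekly pipeline ($K) before the pause
during = {"test_geo": 112, "control_geo": 118}  # weekly pipeline ($K) during the pause

# How each geo moved relative to its own baseline
test_change = during["test_geo"] / pre["test_geo"] - 1           # about -6.7%
control_change = during["control_geo"] / pre["control_geo"] - 1  # about +2.6%

# Difference-in-differences: the drop you can actually pin on the paused channel
incremental_impact = test_change - control_change

attributed_share = 0.31  # what the attribution model claims the channel drives

print(f"Observed incremental impact: {incremental_impact:.1%}")
print(f"Model's attributed share:    {attributed_share:.1%}")
print(f"Gap: {attributed_share - abs(incremental_impact):.1%}")

The difference-in-differences line is the one that matters: it strips out whatever moved both geos at the same time (seasonality, a launch, a holiday) and leaves only the drop you can actually pin on the paused channel.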

2. The Self-Report Audit

Add "How did you hear about us?" to your demo request form. Open text field, not a dropdown. Compare what buyers say to what your model says. You'll find gaps. A SaaS company ran this and found that 41% of their "direct traffic" conversions mentioned a podcast the attribution model couldn't track at all.

3. The Spend Correlation Check

Pull your channel spend by month next to your attributed conversions by month. If attribution credit closely mirrors spend levels (more spend = more credit, proportionally), that's a red flag. A good model should show diminishing returns at higher spend levels. If it doesn't, it's probably just distributing credit by volume.
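
Here's a rough version of that check in Python, assuming you've pulled monthly spend and attributed conversions into two arrays. The numbers are invented; what you're looking at is the correlation and whether conversions-per-dollar falls off as spend climbs.

# Minimal sketch: does attributed credit just mirror spend? (hypothetical monthly data)
import numpy as np

spend = np.array([40, 55, 70, 85, 100, 120])                      # monthly spend ($K)
attributed_conversions = np.array([82, 110, 139, 171, 198, 242])  # model-attributed conversions

# Pearson correlation between spend and attributed credit
r = np.corrcoef(spend, attributed_conversions)[0, 1]
print(f"Spend vs. attributed credit correlation: {r:.2f}")

# Crude check for diminishing returns: conversions per $K should fall as spend rises
efficiency = attributed_conversions / spend
print("Conversions per $K at each spend level:", np.round(efficiency, 2))

if r > 0.95 and efficiency[-1] >= efficiency[0] * 0.95:
    print("Red flag: credit scales almost perfectly with spend, no diminishing returns.")

If the model's credit tracks spend almost perfectly and efficiency never drops, the model is describing your budget, not your buyers.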

Run all 3. You'll know within a week how much of your attribution story is signal and how much is noise.

THIS WEEK ON PROFESSOR LEADS

Last week's lead scoring Shorts hit a nerve. "The 34% Overlap" and "The Junk Leads" both pulled strong view counts, and the comments confirmed what I suspected: a lot of teams are quietly wondering if their scoring model is working against them.

This week's Shorts shift to attribution and lead gen systems. Same format: number first, one insight, 30 seconds. New clips dropping daily on YouTube and LinkedIn: youtube.com/@ProfessorLeads

WORTH YOUR TIME

SparkToro's dark social research (sparktoro.com): Here's the stat that should keep your analytics team up at night: 100% of visits from TikTok, Slack, Discord, Mastodon, and WhatsApp show up as "direct" in your analytics. No referral data at all. Your attribution model isn't just missing a few touchpoints. It's blind to entire channels where your buyers actually spend time.

Paul Newnes on the attribution-incrementality-MMM stack (deducive.com): The clearest practitioner breakdown I've seen of what replaces attribution when it breaks. Newnes makes the case that incrementality testing (asking "what sales would I lose if I turned this off?") paired with marketing mix modeling gives you the causal picture that attribution can't. Worth 15 minutes if you're trying to figure out what to actually trust.

Tom Leonard on measurement paralysis (martech.org): The contrarian angle most teams avoid: when your attribution, incrementality tests, and MMM all disagree, the worst thing you can do is freeze. Leonard argues that stacking small gains over time and validating through year-over-year business results beats waiting for perfect measurement data. If your team has ever delayed a budget decision because "the data isn't clean enough," read this.

ONE THING TO TRY THIS WEEK

This week, pull your top 10 closed-won deals from last quarter. For each one, look at the first attributed touchpoint and the last. Then call or message the champion and ask: "How did you first hear about us?" Compare their answer to what your model says. I'd bet 6 out of 10 don't match.

William DeCourcy | Performance Marketing + Lead Generation

New videos and Shorts weekly. Subscribe so you don't miss one.
