THE LEAD
Attribution models do something dangerous. They give you precise numbers for things they can't actually measure.
Your dashboard says paid search drove 43% of pipeline. Display drove 18%. LinkedIn drove 12%.
Those numbers add up to something that feels like a fact. But the model that produced them is making assumptions about human behavior that would embarrass a first-year psych student.
Here's what happened to one B2B team that pressure-tested their attribution.
They were running 5 channels. Their multi-touch model said Facebook was driving 31% of conversions at a $38 CPL. So they kept scaling it.
Then they paused Facebook for 6 weeks. Conversions dropped 4%.
Not 31%. Four percent. The model was off by a factor of nearly 8x.
The other 27% of "Facebook conversions" were people who would've converted anyway through organic search, direct traffic, or brand recognition built by LinkedIn and events. Facebook was getting credit for standing near the finish line.
The measurement architecture is the problem. Every last-touch and multi-touch model shares the same flaw: they measure correlation between ad exposure and conversion, then present it as causation. The gap between those two things is where your budget disappears.
Incrementality testing is the fix. Instead of asking "which channel touched the customer?", it asks "what would've happened if we'd done nothing?"
You run a holdout. You measure the difference. You get an answer that's harder to game.
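
If you want to see what the holdout math looks like, here's a minimal sketch in Python. It assumes you can split your audience into a test group that keeps seeing the ads and a holdout that doesn't; every number in it is made up for illustration.

# Minimal holdout math: how many conversions did the channel actually cause?
# All numbers are illustrative placeholders.

test_users = 50_000          # kept seeing the ads
test_conversions = 600
holdout_users = 50_000       # ads withheld
holdout_conversions = 560

test_rate = test_conversions / test_users            # 1.20%
baseline_rate = holdout_conversions / holdout_users  # 1.12%

# Conversions beyond what would've happened with no ads at all
incremental_conversions = (test_rate - baseline_rate) * test_users
incremental_lift = (test_rate - baseline_rate) / baseline_rate

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Incremental lift over baseline: {incremental_lift:.1%}")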
Until recently, running a real incrementality test required serious spend and a data science team. That just changed (more on this in Worth Your Time).
THE FRAMEWORK
Here's a 4-step attribution audit you can run this week without special tools or a data team.
Step 1: Pull your attribution report by channel. Whatever model you're using (last-touch, linear, time-decay), export the data for last quarter. Write down each channel's attributed conversion percentage.
Step 2: Compare to self-reported attribution. Add a "how did you hear about us?" field to your highest-traffic form. Open text, not a dropdown.
Run it for 30 days. The gap between what your model says and what buyers say is your measurement error margin.
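
To make the Step 1 vs. Step 2 comparison concrete, a rough sketch like this is enough. The channel names and shares below are placeholders; swap in your own exports.

# Compare model-attributed share vs. self-reported share, per channel.
# All shares are placeholders.

model_share = {"paid_search": 0.43, "display": 0.18, "linkedin": 0.12, "direct": 0.27}
self_reported_share = {"paid_search": 0.25, "display": 0.05, "linkedin": 0.35, "direct": 0.35}

for channel, modeled in model_share.items():
    reported = self_reported_share.get(channel, 0.0)
    gap = modeled - reported
    print(f"{channel:12s} model {modeled:4.0%}  self-reported {reported:4.0%}  gap {gap:+5.0%}")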
Step 3: Run a poor man's incrementality test. Pick your #2 or #3 channel by spend (not your biggest; you're not trying to tank pipeline). Pause it for 2 weeks.
Track what happens to total conversions, not just that channel's attributed conversions. If total conversions barely move, the model was inflating that channel's contribution.
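
Here's one way to read out the pause, sketched with placeholder numbers that roughly mirror the Facebook story from the lead. Compare the drop the model predicted with the drop you actually saw, and keep seasonality in mind over a 2-week window.

# Step 3 readout: did total conversions fall as much as the model predicted?
# Placeholder numbers; watch for seasonality over a short pause window.

weekly_total_before_pause = 500    # average weekly conversions, all channels
weekly_total_during_pause = 480    # average weekly conversions while the channel is off
attributed_share = 0.31            # what the model credits to the paused channel

observed_drop = (weekly_total_before_pause - weekly_total_during_pause) / weekly_total_before_pause
predicted_drop = attributed_share

print(f"Model predicted a {predicted_drop:.0%} drop; you observed {observed_drop:.0%}.")
print(f"Roughly {1 - observed_drop / predicted_drop:.0%} of the attributed conversions "
      f"would likely have happened anyway.")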
Step 4: Build a channel confidence score. For each channel, rate your confidence in the attribution data from 1 to 5. A 5 means you have incrementality data or strong self-reported confirmation. A 1 means you're trusting the model blindly.
Any channel you're scaling at a confidence score of 1 or 2 is a budget risk.
The insight: most teams have 60-70% of their spend allocated based on confidence scores of 1 or 2. That's not a data-driven budget. That's a guess with a dashboard on top.
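
If you want to see where that number sits for your own budget, the confidence score table is a few lines of work. Everything below (channels, spend, scores) is hypothetical.

# Step 4: confidence score per channel, plus the share of spend sitting on shaky attribution.
# Channel names, spend, and scores are hypothetical.

channels = {
    # channel: (monthly_spend, confidence_score 1-5)
    "paid_search": (40_000, 4),
    "facebook":    (30_000, 1),
    "display":     (15_000, 2),
    "linkedin":    (10_000, 3),
}

total_spend = sum(spend for spend, _ in channels.values())
at_risk_spend = sum(spend for spend, score in channels.values() if score <= 2)

print(f"Spend allocated at confidence 1-2: {at_risk_spend / total_spend:.0%} of budget")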
THIS WEEK ON PROFESSOR LEADS
Two new videos dropping this week:
"The Attribution Report Is Lying to You" (Tuesday) walks through the exact audit process above, with real examples of what teams found when they tested their models against incrementality data. This is the video for anyone who's ever looked at their attribution dashboard and thought "something feels off." Watch it: https://youtu.be/HNg1Mz-LI90
"I Turned Off Our Best Campaign. Revenue Went Up." (Thursday) tells the full story of a team that paused their top-performing channel for 90 days and discovered 60% of the attributed conversions would've happened without it. The second half covers what they did with the freed-up budget. Watch it: https://youtu.be/DkXolUss--A
If you caught last week's videos on CPL vs. CPR, these two pick up right where those left off. CPR tells you which channels actually produce revenue. Attribution auditing tells you which channels are getting credit they don't deserve.
WORTH YOUR TIME
Jon Loomer on Meta's attribution overhaul. Meta just split attribution into "click-through" (link clicks only) and a new category called "engage-through" (likes, saves, comments). Rolling out this month.
If you're running lead gen on Meta, your reported conversions are about to shift. Loomer's take is the sharpest breakdown I've seen, especially his point that keeping engage-through on for lead gen is probably inflating your numbers. If someone liked your ad but never clicked to grab the lead magnet, calling that a conversion is generous. Read it: https://www.jonloomer.com/meta-ads-attribution-2026/
Maurice Rahmey on Google's $5K incrementality test. Google just dropped their minimum spend for incrementality testing from roughly $100K to $5,000. Uses Bayesian methodology, runs in 7 days, and delivers 50% more conclusive results than the previous version.
A year ago, real incrementality testing was a big-company luxury. That barrier just evaporated. Maurice's LinkedIn post breaks down what changed and why it matters for mid-market teams. Read it: https://www.linkedin.com/posts/mrahmey_google-just-made-incrementality-testing-far-activity-7396188201620623360-0Ucc
How Uber saved $35M by turning off Meta ads. Uber paused all Meta performance ads in the U.S. and Canada for 3 months. No measurable decline in rider acquisition or revenue. They reallocated $35M annually.
The trigger: their analytics team noticed 20% week-over-week CPA swings that correlated with seasonality, not ad spend. This is what an incrementality test looks like at scale, and the punchline landed exactly where you'd expect. Read it: https://www.marketingtodaypodcast.com/194-historic-ad-fraud-at-uber-with-kevin-frisch/
IAB State of Data 2026 and the AI measurement problem. The IAB found that 75% of marketers say their attribution, incrementality tests, and MMM underperform on rigor, timeliness, and trust. Their response is Project Eidos, an industry-wide push to build interoperable measurement standards.
But here's the part worth paying attention to: half of the marketers now using AI for measurement say it lacks transparency and governance. AI is making measurement faster. The question worth asking: is it making measurement more accurate, or just producing confident wrong answers at higher speed? Read it: https://www.iab.com/insights/2026-state-of-data-report/
ONE THING TO TRY THIS WEEK
Add one open-text field to your highest-volume lead form: "How did you hear about us?"
Not a dropdown. Open text. Let people write whatever they want.
Run it for 30 days alongside your attribution model. Then compare.
I've seen teams discover that 40% of their "direct traffic" leads write things like "saw your LinkedIn post" or "my colleague forwarded your email." That's pipeline your attribution model is giving zero credit to. You can't optimize what you can't see, and your model has blind spots it won't tell you about.
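
If you want to compare the free-text answers against your model at the end of the 30 days, a crude keyword pass is enough to start. The keywords and sample responses below are made up; tune them to your own channels.

# Bucket "How did you hear about us?" answers into channels with simple keyword matching.
# Keywords and sample responses are made up; expect some ambiguity and an "unmatched" bucket.

keyword_to_channel = {
    "linkedin": "linkedin",
    "google": "search",
    "forwarded": "email",
    "newsletter": "email",
    "colleague": "word_of_mouth",
    "podcast": "podcast",
}

def bucket(response):
    text = response.lower()
    for keyword, channel in keyword_to_channel.items():
        if keyword in text:
            return channel
    return "unmatched"

samples = ["Saw your LinkedIn post", "My colleague forwarded your email", "Googled lead gen attribution"]
for s in samples:
    print(f"{s!r} -> {bucket(s)}")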
Takes 5 minutes to set up. The data you get back in 30 days will be worth more than your last attribution platform renewal.
That's Issue #2. See you next Tuesday.
William DeCourcy
Professor Leads
