THE LEAD
Your lead scoring model is probably the most trusted liar in your tech stack.
It assigns points with surgical precision. Job title, company size, industry, content downloads. Every lead gets a number. The number feels objective. Sales trusts it. Leadership reports on it. Budget decisions flow from it.
Here’s the problem: the number is wrong more often than it’s right.
A fintech company ran a painful exercise last year. They pulled their 200 highest-scored leads from Q3 and compared them to their actual closed deals in the same period. The overlap was 34%. Two out of three deals came from leads the model had scored as B or C tier.
The model wasn’t miscalibrated. It was measuring the wrong things. It scored demographic attributes (VP title, 500+ employee company, fintech vertical) while the actual buyers were directors at mid-market firms who had visited the pricing page 4+ times. The model couldn’t see behavior. It could only see biography.
This pattern shows up everywhere. A SaaS company generated 1,400 MQLs in Q3. Sales accepted 340 of them. Twenty-eight closed. That’s a 2% MQL-to-close rate. The scoring model counted white paper downloads and webinar registrations as intent signals. Those are learning signals. The leads that actually closed had opened support documentation and revisited the integrations page. The model didn’t track either behavior.
The deeper issue is structural. Most scoring models get built once, validated against a small sample, and then run untouched for 12-18 months. In that window, your ICP shifts. Your product changes. Your best-performing channels evolve. The model keeps scoring against a buyer profile that no longer exists.
One company discovered their model was 18 months old while their ICP had shifted twice. The model still scored enterprise marketing directors as A-tier. The actual buyers had migrated to ops managers at 100-person companies. Every “top lead” the model surfaced was a ghost of last year’s pipeline.
The final insult: score inflation. When MQL targets get set annually and the scoring model controls who qualifies, there’s a gravitational pull to lower thresholds. A team celebrated hitting their MQL number 3 months early while sales pipeline was down 15% in the same window. The model had been tuned to produce the metric leadership wanted, not the leads sales needed.
Lead scoring isn’t broken because the math is bad. It’s broken because the inputs are wrong, the model doesn’t learn, and the incentives point at the dashboard instead of the pipeline.
THE FRAMEWORK: The 3-Layer Scoring Audit
If your lead scoring model hasn’t been validated in the last 90 days, run this audit before you trust another number it produces.
Layer 1: The Overlap Test (30 minutes)
Pull your last 50 closed-won deals. Export the lead score each one held at the point it first entered the funnel. Now sort by score: what percentage landed in your top tier? If the overlap is below 60%, your model is guessing. Below 40%, it’s actively misrouting your best leads.
The fix isn’t recalibrating the model. It’s identifying which attributes the closed deals actually share that the model doesn’t track.
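If the export lands in a CSV, the overlap math is a few lines of Python. A minimal sketch, assuming a file of your last 50 closed-won deals with an original_tier column holding the tier each lead carried when it entered the funnel (the filename and column names are placeholders for whatever your CRM actually exports):

import pandas as pd

# Last 50 closed-won deals, exported with the score tier each lead
# held at the point it first entered the funnel.
deals = pd.read_csv("closed_won_last_50.csv")  # hypothetical export

top_tier_count = (deals["original_tier"] == "A").sum()
overlap_pct = top_tier_count / len(deals) * 100

print(f"{overlap_pct:.0f}% of closed-won deals were scored top tier")
if overlap_pct < 40:
    print("Below 40%: the model is actively misrouting your best leads.")
elif overlap_pct < 60:
    print("Below 60%: the model is guessing.")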
Layer 2: The Behavioral Gap (1 hour)
List every criterion in your current scoring model. Put them in two columns: demographic (who they are) and behavioral (what they do). Count the ratio.
If your model is more than 60% demographic, it’s a profiling tool, not a scoring tool. The behavioral signals that correlate with closed deals are almost always: pricing page visits (frequency and recency), integration/technical documentation views, email engagement velocity (not just opens, but reply rate and click depth), and return visit patterns.
Add the 3 strongest behavioral signals you’re currently not tracking. Weight them at 2x the demographic scores. Run the model for 30 days and compare.
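To make the ratio and the re-weighting concrete, here’s a rough sketch of the same logic. The criteria names and point values are illustrative stand-ins, not a recommended model; the parts that matter are the demographic-share calculation and the 2x behavioral multiplier:

# Current criteria split into the two audit columns. Points are illustrative.
demographic = {"vp_title": 20, "employees_500_plus": 15, "target_vertical": 10}
behavioral = {"pricing_page_visits": 10, "integration_docs_views": 10, "return_visits_30d": 5}

total_possible = sum(demographic.values()) + sum(behavioral.values())
demo_share = sum(demographic.values()) / total_possible * 100
print(f"Demographic share of total possible score: {demo_share:.0f}%")  # >60% = profiling tool

def score(lead):
    """Re-weighted score: behavioral signals count double."""
    pts = sum(v for k, v in demographic.items() if lead.get(k))
    pts += sum(v * 2 for k, v in behavioral.items() if lead.get(k))
    return pts

# Example: mid-market director with heavy pricing-page and docs activity.
print(score({"pricing_page_visits": True, "integration_docs_views": True}))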
Layer 3: The Decay Check (15 minutes)
Answer three questions: (1) When was the model last rebuilt from scratch? If the answer is more than 12 months ago, it’s stale. (2) Does the model have score decay? Leads that don’t engage for 30+ days should lose points automatically. If they don’t, you’re recycling dead leads into the MQL count. (3) Has your ICP definition changed since the model was built? If yes, the model is scoring against a buyer who no longer exists.
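If your platform doesn’t handle decay natively, question (2) is easy to approximate as a scheduled job against an export. A minimal sketch, assuming a CSV with lead_id, score, and last_engagement_date columns (rename to match your own export); the 30-day window and the per-week decay rate are assumptions to tune against your own data:

import pandas as pd

DECAY_AFTER_DAYS = 30      # silence beyond this window starts costing points
DECAY_POINTS_PER_WEEK = 5  # assumed rate; tune against your conversion data

leads = pd.read_csv("active_leads.csv", parse_dates=["last_engagement_date"])  # hypothetical export

days_idle = (pd.Timestamp.now() - leads["last_engagement_date"]).dt.days
weeks_stale = (days_idle - DECAY_AFTER_DAYS).clip(lower=0) // 7
leads["decayed_score"] = (leads["score"] - weeks_stale * DECAY_POINTS_PER_WEEK).clip(lower=0)

# Leads that decay below the MQL threshold come out of the MQL count.
print(leads[["lead_id", "score", "decayed_score"]].sort_values("decayed_score").head())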
The output of this audit isn’t a new model. It’s a clear diagnosis of where the current model is lying to you, and which leads in your current pipeline deserve a second look.
THIS WEEK ON PROFESSOR LEADS
This week is all Shorts. No long-form companion video. Just 7 standalone clips, each one a different angle on why lead scoring models break.
A few to watch for: “The 34% Overlap” walks through a fintech company’s gap between top-scored leads and actual closed deals. “The Junk Leads” flips the script on A-tier vs. C-tier close rates. “The AI Fix That Wasn’t” covers a $40K platform that didn’t move the needle because the inputs were wrong.
The clips are dropping daily this week on YouTube and LinkedIn: https://youtube.com/@ProfessorLeads
Still getting traction from last week: “I Turned Off Our Best Campaign. Revenue Went Up.” walks through what happens when you kill a high-volume campaign that’s destroying your close rate. Watch it here: https://youtu.be/DkXolUss--A
WORTH YOUR TIME
Jon Miller on killing the MQL. The Marketo co-founder published a framework last week that splits qualification into 3 tiers: hand-raisers (buyer already qualified themselves), intent signals (pricing visits, competitive comparisons, multi-threaded account activity), and interest signals (content consumption, webinar attendance).
His sharpest point: marketing teams gamed MQL scoring to hit volume targets, sales lost trust, and now the metric is radioactive. The three-tier model gives both sides a shared vocabulary. Read it: jonmiller.com
Jeff Ignacio on evidence-based scoring. Ignacio runs RevOps Impact and nails what he calls “the lead scoring decay problem.” Someone built a model 18 months ago on intuition and sales feedback, then moved on. Products change, ICPs shift, and the model keeps scoring against a buyer who no longer exists.
His fix is refreshingly low-tech: export your CRM data, calculate actual conversion rates by attribute, rebuild weights from evidence. No ML required. Read it: revengine.substack.com
Dev Das on selection bias in predictive models. Worth the 5 minutes. Das walks through how scoring models learn the wrong patterns: your model trains on leads your sales team chose to pursue, not leads that were inherently valuable. Those are two different populations, and they diverge fast.
He cites research showing 94% of organizations suspect their customer data is inaccurate. Your model might be hiding your best growth segments while looking mathematically sound. Read it: newsletter.hackrlife.com
HubSpot on detecting AI bias in prospecting. Practical audit framework for anyone running AI-assisted lead scoring. The core test: compare your AI-generated lead list with your actual closed-won deals from last quarter. If your best deals aren’t showing up in the AI feed, you’ve got a bias problem baked into the training data.
Short read with a clear action step, which is more than most vendor blogs deliver. Read it: blog.hubspot.com
ONE THING TO TRY THIS WEEK
Pull your last 20 closed-won deals and check their original lead scores. Calculate the overlap percentage with your top-tier scored leads. If it’s below 50%, your model needs a behavioral layer. Takes 30 minutes with a CRM export and a spreadsheet. The number you get back will tell you whether your scoring model is a decision tool or a decoration.
William DeCourcy
Professor Leads
