THE LEAD

DoubleVerify measured it: 10 hours 12 minutes per week. That's how much time the average marketer spends on manual campaign tasks. Pulling reports. Reformatting dashboards. Scheduling posts. Reallocating budgets.

10 hours. Every week. On tasks that follow the same process every time.

An agentic AI system handles those tasks continuously. You set the rules once. It runs. It reports back. You review the output instead of producing it.

Here's the catch: Gartner predicts 40% of agentic AI projects will be canceled by the end of 2027. The tech works. The implementations don't. Teams buy platforms, throw tasks at them, and three months later the agent is running 8 workflows with inconsistent data and conflicting rules.

The failure pattern is almost always the same. These teams automated the wrong tasks first. They picked the one the VP was loudest about, or the one that sounded most impressive in the board deck. They didn't score their workflows on what actually makes a good automation candidate.

The teams that win start with a different question: which tasks consume the most time while requiring the least judgment? That intersection is where agents deliver immediately.

US enterprises already running agents on those tasks are reporting 192% average ROI. McKinsey benchmarks show a 60 to 80% reduction in manual workflow time for the first wave of automations. One B2B team deployed 4 agents and freed up 10 hours per week per marketer. Nobody was replaced. The team was reallocated to strategy and testing. Campaign performance improved. Costs stayed flat.
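
If you want to sanity-check numbers like that against your own team, the math fits in a few lines. Here's a back-of-the-envelope sketch: only the 10-hours-per-week figure comes from the data above; team size, loaded cost, weeks, and platform cost are illustrative assumptions you should swap for your own.

```python
# Back-of-the-envelope ROI math for a first-wave agent deployment.
# Only HOURS_SAVED_PER_WEEK comes from the piece above; every other
# input is an illustrative assumption.

TEAM_SIZE = 5                  # marketers on the team (assumption)
HOURS_SAVED_PER_WEEK = 10      # per marketer, per the DoubleVerify figure
LOADED_HOURLY_COST = 75        # fully loaded $/hour (assumption)
WEEKS_PER_YEAR = 48            # working weeks (assumption)
ANNUAL_PLATFORM_COST = 60_000  # platform + build/maintenance (assumption)

annual_value = TEAM_SIZE * HOURS_SAVED_PER_WEEK * LOADED_HOURLY_COST * WEEKS_PER_YEAR
roi = (annual_value - ANNUAL_PLATFORM_COST) / ANNUAL_PLATFORM_COST

print(f"Annual time value reclaimed: ${annual_value:,}")  # $180,000
print(f"ROI on platform spend: {roi:.0%}")                # 200%
```

With these made-up inputs the return lands in the same neighborhood as the 192% enterprise figure. The takeaway: the payback is driven almost entirely by hours reclaimed, which is why the audit below scores time first.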

That's the math. And it starts with knowing what to automate first.

THE FRAMEWORK: The Agent Audit (5-Step Scoring Method)

Before you buy a platform or build an agent, run this audit. It takes 30 minutes. Bring your team leads into a room with a whiteboard (or a shared doc).

Step 1: List every repetitive task (10 min). Go department by department. Marketing ops, demand gen, content, analytics, sales enablement. Write down every recurring task. Don't filter. Just list. You'll end up with 15 to 30 tasks.

Step 2: Score time consumed (5 min). For each task, estimate weekly hours. Include the prep, the follow-up, the "one more report" time. Score as Low (under 1 hour), Medium (1 to 4 hours), or High (4+ hours).

Step 3: Score decision complexity (5 min). For each task, ask: "Could I write a rulebook for this that someone with zero context could follow?" If yes, that's Low. If mostly yes with some exceptions, that's Medium. If the answer is "it depends," that's High.

Step 4: Plot the grid (5 min). Two axes. Time on Y (high at top), complexity on X (high at right). Drop every task into its cell.

Step 5: Pick your first 2-3 candidates (5 min). Top-left corner: high time, low complexity. Those are your golden candidates. Pure time savings, minimal risk. Second wave: top-middle (high time, medium complexity). Ignore the right column entirely for now.
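
If your team prefers a script to a whiteboard, the whole grid fits in a few lines of code. This is a minimal sketch of Steps 2 through 5; the task names and scores are placeholders for whatever your own audit surfaces.

```python
# Minimal sketch of the Agent Audit scoring (Steps 2-5).
# Task names and scores below are placeholders, not recommendations.

from dataclasses import dataclass

LOW, MEDIUM, HIGH = 0, 1, 2  # shared scale for both axes

@dataclass
class Task:
    name: str
    time: int        # weekly hours: LOW < 1h, MEDIUM 1-4h, HIGH 4h+
    complexity: int  # LOW = full rulebook, MEDIUM = some exceptions, HIGH = "it depends"

tasks = [
    Task("Weekly analytics report", time=HIGH, complexity=LOW),
    Task("Dashboard refresh", time=MEDIUM, complexity=LOW),
    Task("Content scheduling", time=HIGH, complexity=LOW),
    Task("Lead routing", time=HIGH, complexity=LOW),
    Task("Budget reallocation", time=HIGH, complexity=MEDIUM),
    Task("Campaign strategy review", time=MEDIUM, complexity=HIGH),
]

# Step 5: golden candidates sit in the top-left cell (high time, low complexity).
golden = [t.name for t in tasks if t.time == HIGH and t.complexity == LOW]
second_wave = [t.name for t in tasks if t.time == HIGH and t.complexity == MEDIUM]

print("Automate first:", golden)
print("Second wave:   ", second_wave)
```

Run it against your real list and the top-left candidates fall out automatically. Anything scored HIGH on complexity stays off the board for now, no matter how many hours it eats.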

The golden candidates that show up in almost every audit: analytics reporting, dashboard updates, content scheduling, lead routing, and alert generation. If those aren't in your top-left corner, double-check your scoring.

One team found 12 hours per week of high-time, low-complexity work sitting in their analytics reporting, social scheduling, and lead assignment workflows alone. Three agents. Deployed in 3 weeks. The analytics agent paid for itself in the first reporting cycle.

THIS WEEK ON THE BLOG

This week I published the full Agent Audit framework as a deep-dive article on the blog. The post walks through the 5-step process in detail and includes a comparison table of golden candidates vs. tasks to avoid first, a real-world example of what one B2B team found when it ran the audit, and a timeline for when to expect results at each phase. It also covers the 3 mistakes that kill most agent deployments (and they're not what you'd expect).

THIS WEEK ON PROFESSOR LEADS

All Shorts this week. Five clips, one theme: where agentic AI actually delivers and where it falls apart.

"The 10-Hour Drain" covers the DoubleVerify finding on time wasted. "The 40% Wipeout" breaks down why Gartner's prediction matters more than it sounds. "The 60% Bottleneck" looks at where marketing analysts actually spend their days (spoiler: it's data prep, not analysis). "The 192% Return" walks through the ROI math from US enterprise deployments. "The One-Year Jump" shows what happens when early adopters compound their head start.

Dropping daily this week: youtube.com/@ProfessorLeads

WORTH YOUR TIME

Anthropic on building effective AI agents. This guide from the team behind Claude breaks down agent architectures into 2 categories: workflows (predetermined paths) and agents (model-directed paths). The sharpest insight is that most "agent" implementations should actually be workflows. Agents add complexity. Only use them when the task genuinely requires flexible decision-making. If you can draw the logic as a flowchart, you don't need an agent. You need a workflow with LLM nodes. Read it: anthropic.com/engineering/building-effective-agents
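
To see that distinction in code, here's a toy contrast between the two architectures. This is a sketch of the general pattern, not Anthropic's implementation; call_llm and the tool-selection format are placeholders you'd swap for your own stack.

```python
# Toy contrast: workflow (predetermined path) vs. agent (model-directed path).
# `call_llm` is a placeholder, not a real API; wire it to your own provider.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in your model provider here")

# Workflow: every step is fixed. You could draw this as a flowchart,
# which is exactly why it doesn't need to be an agent.
def summarize_campaign_report(raw_report: str) -> str:
    metrics = call_llm(f"Extract the key metrics from this report:\n{raw_report}")
    return call_llm(f"Write a 3-bullet summary of these metrics:\n{metrics}")

# Agent: the model chooses its own next step and loops until it decides
# it's done. More flexible, but harder to predict and harder to debug.
def run_agent(goal: str, tools: dict, max_steps: int = 10) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        decision = call_llm(
            "Pick the next tool to call (format: 'TOOL_NAME input'), "
            "or reply DONE with your answer:\n" + "\n".join(history)
        )
        if decision.startswith("DONE"):
            return decision
        tool_name, _, tool_input = decision.partition(" ")
        history.append(f"{tool_name} -> {tools[tool_name](tool_input)}")
    return history[-1]
```

If the first function covers your task, stop there. That's the article's core point: most teams reach for the second pattern long before they need it.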

McKinsey on where agents actually deliver value. McKinsey's research across early implementations found that the biggest ROI comes from multi-step workflows where agents can execute autonomously within defined guardrails. Their benchmark: 60 to 80% reduction in manual time for first-wave tasks. The piece also makes a compelling argument that the real value isn't in individual agents but in agent-to-agent handoffs. Worth 10 minutes. Read it: mckinsey.com

Lenny Rachitsky on AI tools that actually work for PMs. Lenny surveyed his audience on which AI tools product and marketing teams are actually using (not just piloting). The results are interesting: the tools winning aren't the flashy "autonomous AI" platforms. They're targeted agents that do one thing well. Cursor for code. Granola for meeting notes. The pattern: specificity beats ambition. Read it: lennysnewsletter.com

Jason Lemkin on why most AI startups will fail at sales. Lemkin's take: AI tools sell on demos but churn on integration. His argument is that the agent companies that survive will be the ones that solve the "last mile" problem of actually connecting to existing workflows, not the ones with the best models. Sharp and contrarian. Worth the 5 minutes. Read it: saastr.com

ONE THING TO TRY THIS WEEK

Run Step 1 and Step 2 of the Agent Audit. Just those two steps. List your team's recurring tasks and score them on time consumed. Takes 15 minutes. You'll have a clear picture of where the hours are going before you ever evaluate a single tool. Most teams that do this find at least 8 hours per week of automatable work they didn't realize was there.

William DeCourcy

Professor Leads