We Automated Our SmartLead Workflow. Reply Rates Doubled.

Ibby Syed, Founder, Cotera
9 min read · March 7, 2026

Marcus ran our outbound operation for eleven months before he finally said what everyone was thinking: "I spend more time managing SmartLead than I spend talking to prospects."

He wasn't exaggerating. At peak, we had 17 active campaigns across three sending accounts. Each campaign had its own lead list, its own sequence, its own sending schedule. Every morning Marcus would open SmartLead, check reply rates, check bounce rates, check warmup scores, look for campaigns that needed pausing, look for campaigns that needed fresh leads, and then manually load CSVs he'd pulled from Apollo the night before. By the time he finished his daily SmartLead maintenance, it was 11 AM.

I sat with him one Tuesday and timed the whole thing. Two hours and fourteen minutes. He checked 17 campaign dashboards. He paused two campaigns with bounce rates above 4%. He uploaded leads to three campaigns that were running dry. He rewrote a subject line on one campaign that had a 9% open rate. He moved warm replies from SmartLead into our CRM by hand. Two hours and fourteen minutes of clicking, copying, and context-switching.

Across a five-day week, that's over 11 hours. Eleven hours of campaign babysitting that produced zero conversations.

The Problem With Manual Campaign Management

SmartLead is good software. I want to be clear about that. The warmup infrastructure is solid. The sending rotation across mailboxes works well. The deliverability controls are better than most competitors. We were not looking to replace SmartLead.

But SmartLead, like every cold email platform, assumes a human is making the decisions. It sends the emails. A human decides which emails to send, to whom, when to pause, when to scale up, when to rewrite. The platform handles execution. The human handles everything else.

When you're running two or three campaigns, that's fine. You can keep it all in your head. When you're running 17, you cannot. Marcus had a spreadsheet tracking campaign-level metrics that he updated daily. He had a Slack reminder to check bounce rates at 9 AM. He had a recurring task to pull Apollo leads every Monday and Wednesday. He had a mental model of which campaigns were testing new sequences versus scaling proven ones.

All of this worked until it didn't. In October, Marcus went on vacation for a week. Priya covered for him. On day two, she accidentally loaded a lead list into the wrong campaign, sending a fintech-focused sequence to healthcare CFOs. On day three, she didn't notice a campaign's bounce rate spike to 7%, which started affecting the warmup score on the sending account. On day four, she paused all campaigns because she wasn't sure which ones were healthy and which ones weren't.

When Marcus came back, it took him three days to untangle everything. He said, very calmly: "We need to stop doing this by hand."

What We Actually Automated

We didn't automate SmartLead itself. SmartLead already does its job. We automated the decision-making layer above SmartLead.

The first piece was lead loading. We connected a SmartLead Apollo campaign builder agent that pulls leads from Apollo based on our ICP criteria, deduplicates them against our existing campaigns and CRM, and loads them into the right SmartLead campaign with the right tags. What used to be Marcus's Monday/Wednesday ritual of pulling CSVs, deduplicating in Google Sheets, and uploading to SmartLead now runs automatically. The agent loaded 2,340 leads in January without Marcus touching a spreadsheet.
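The deduplication step is the part most worth getting right, since a duplicate lead means a prospect hearing from two campaigns at once. A minimal sketch of that step is below; the lead dicts, field names, and function name are illustrative assumptions, not SmartLead's or Apollo's actual schema.

```python
# Sketch of the dedupe step: drop any pulled lead whose email already
# exists in a SmartLead campaign or the CRM. Field names are illustrative.

def dedupe_leads(new_leads, existing_emails):
    """Return only leads whose email is not already known."""
    seen = {e.strip().lower() for e in existing_emails}
    fresh = []
    for lead in new_leads:
        email = lead.get("email", "").strip().lower()
        if email and email not in seen:
            seen.add(email)  # also catches duplicates within the same pull
            fresh.append(lead)
    return fresh
```

Normalizing emails to lowercase before comparing matters here: Apollo exports and CRM records often disagree on casing, and a case-sensitive check would let duplicates through.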

The second piece was campaign monitoring. Instead of Marcus checking 17 dashboards every morning, the agent checks all of them continuously. When a campaign's bounce rate crosses 3%, the agent pauses it and sends Marcus a Slack message explaining why. When open rates drop below 25%, it flags the campaign for sequence review. When reply rates exceed 5%, it tags the campaign as a scaling candidate.
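The monitoring rules above amount to a small decision table. Here is a sketch using the thresholds from the article; the stats dict and action names are assumptions for illustration, not the agent's actual interface.

```python
# Triage one campaign's stats against the thresholds described above.
# Checks run in priority order: deliverability damage outranks everything.

def triage_campaign(stats):
    """stats: {"bounce_rate": pct, "open_rate": pct, "reply_rate": pct}"""
    if stats["bounce_rate"] > 3.0:
        return "pause_and_alert"       # protect the sending account first
    if stats["open_rate"] < 25.0:
        return "flag_sequence_review"  # likely copy or inbox-placement problem
    if stats["reply_rate"] > 5.0:
        return "tag_for_scaling"       # proven campaign, candidate for volume
    return "healthy"
```

The ordering is the design choice: a campaign with a 7% bounce rate gets paused even if its reply rate looks great, because continuing to send degrades the mailbox for every other campaign sharing it.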

The third piece was the part Marcus didn't even realize he was doing manually: cross-campaign analytics. Before automation, Marcus could tell you how each individual campaign was performing. He could not easily tell you how all campaigns targeting VP-level prospects were performing collectively, or whether the four-step sequence was outperforming the three-step sequence across campaigns, or which Apollo filters produced leads that actually replied. The agent aggregates data across campaigns and produces a weekly report that answers these questions.
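The cross-campaign questions above are all the same operation: group campaigns by a shared attribute and compute a sent-weighted reply rate for each group. A sketch, with illustrative field names:

```python
# Roll up per-campaign stats by any shared attribute (persona, sequence
# length, Apollo filter). Reply rate is weighted by emails sent, so a
# 100-email test campaign doesn't distort a 5,000-email group average.
from collections import defaultdict

def rollup_reply_rate(campaigns, key):
    """Return {group: reply rate %} weighted by send volume."""
    sent = defaultdict(int)
    replies = defaultdict(int)
    for c in campaigns:
        sent[c[key]] += c["sent"]
        replies[c[key]] += c["replies"]
    return {g: round(100 * replies[g] / sent[g], 1) for g in sent if sent[g]}
```

Weighting by volume is the detail that makes the report trustworthy: a naive average of per-campaign percentages would let one tiny campaign with a lucky 12% reply rate dominate the "VP-level" number.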

Before and After

I'm going to share real numbers because vague claims about "improved efficiency" don't mean anything.

Before automation (September):

  • 17 active campaigns
  • Average reply rate: 2.1%
  • Average bounce rate: 3.8%
  • Marcus's weekly SmartLead time: 11+ hours
  • Lead loading frequency: twice per week (manual)
  • Campaigns damaged by deliverability issues that went unnoticed for days: 3 that month

After automation (February):

  • 22 active campaigns
  • Average reply rate: 4.4%
  • Average bounce rate: 1.6%
  • Marcus's weekly SmartLead time: about 2 hours (review and strategy)
  • Lead loading frequency: daily (automated)
  • Campaigns damaged by deliverability issues that went unnoticed for days: zero

The reply rate doubling surprised us. We expected the time savings. We didn't expect the performance improvement. But it makes sense when you think about it. When a campaign's bounce rate spikes on a Tuesday morning and nobody catches it until Friday, you've spent four days sending from a degraded account. When an agent catches it within an hour and pauses the campaign, the damage is contained. Better deliverability means more emails hit the inbox. More inbox placement means higher open rates. Higher open rates mean more replies.

The reply rate improvement wasn't because the agent wrote better emails. Marcus still writes the sequences. The improvement came from better campaign hygiene: cleaner lead lists, faster reaction to deliverability problems, and more frequent lead loading so campaigns always had fresh contacts.

What Marcus Does Now

Marcus didn't become redundant. His job changed. He used to be an operator. Now he's a strategist.

He spends his time on things the agent can't do: writing new sequences, analyzing which messaging angles land with different personas, talking to the sales team about what prospects are saying on calls, and designing new campaign architectures. He built a new campaign structure in January that segments by company stage (seed, Series A, Series B+) with different sequences for each. That kind of thinking requires understanding the market, our product, and our buyers. An agent cannot do that.

He also reviews the agent's weekly analytics report every Monday. It takes about 30 minutes. He looks for patterns: which industries are responding, which job titles are cold, whether subject line A/B tests produced a clear winner. Then he makes strategic decisions based on those patterns. The agent surfaces the data. Marcus decides what to do with it.

Anya, our head of sales, noticed the shift. "Marcus used to be the person who kept the lights on. Now he's the person who decides where to point the lights. That's a much better use of a $95K salary."

The Parts That Are Still Manual

I don't want to oversell this. Some parts of the workflow are still manual and probably should stay that way.

Marcus writes all email sequences himself. We tried having AI generate sequences and the results were generic. Cold email copy needs to sound like a specific person with a specific opinion, not a language model with access to a company's "About" page. Marcus's best-performing sequence opens with a direct reference to a specific problem the prospect's company type faces. That level of specificity requires human judgment.

Reply handling is also still manual. When someone responds to a cold email, a human needs to read the response, understand the intent, and decide the next step. Is it a "tell me more" or a "take me off your list" or a "not now but maybe in Q3"? Each of those requires a different response, and the nuance matters too much to automate.

Campaign strategy is manual. Which companies to target, which personas to prioritize, how to position against competitors. These decisions come from market knowledge, sales call feedback, and business strategy. The agent executes the strategy. It doesn't create it.

Getting Started Is Easier Than You Think

Our full setup took about a day and a half. The Apollo integration was the longest part because we had to define our ICP criteria precisely enough for the agent to pull the right leads without human review. Once we got the filters right, everything else connected quickly.
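"Precisely enough" in practice means the criteria have to survive with no human sanity-checking each pull, so exclusions matter as much as inclusions. An illustrative example of what that definition can look like; these field names loosely mimic Apollo-style filters but are assumptions, not Apollo's actual API schema:

```python
# Illustrative ICP definition for unattended lead pulls. The exclusion
# list exists because those segments looked right on paper but never
# replied; an agent can't make that call, so it has to be encoded here.
ICP_FILTERS = {
    "titles": ["VP Finance", "CFO", "Head of Finance"],
    "industries": ["fintech"],
    "company_headcount": {"min": 50, "max": 500},
    "locations": ["United States"],
    "exclude_keywords": ["agency", "consulting"],  # frequent false positives
}
```

The hard part is not writing this dict; it is arguing over each line until the whole team agrees a lead matching it should always be contacted.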

If you're running more than five SmartLead campaigns and a human is manually monitoring all of them, you're in the same position we were in September. The sending platform works fine. The human layer above it doesn't scale.

