
HubSpot Email Automation Is Stuck in 2019 — AI Makes It Actually Read the Room

Ibby Syed, Founder, Cotera
9 min read · March 6, 2026

Sonia pulled me into a Slack huddle last November, audibly frustrated. She'd just had a prospect reply to a nurture email with "Actually, we just signed with [competitor] last week. Thanks for keeping us in the loop though." Standard enough. The problem? HubSpot's automated sequence sent the prospect another follow-up email two days later — the one about scheduling a demo — because the sequence didn't know the deal was dead. The prospect's reply didn't trigger any workflow. It just sat in the contact's activity timeline while the automation cheerfully kept going.

"It makes us look like idiots," Sonia said. She wasn't wrong.

I checked the contact record. Between the prospect's "we signed with someone else" reply and the next automated email, there were 47 hours. Almost two full business days where our system knew — in the sense that the information existed inside HubSpot — that this prospect had chosen a competitor. But because that information lived in an email body, not in a structured property, no workflow could see it. The sequence ran on schedule. The second email went out. The prospect didn't reply, because why would he?

This is the fundamental limitation of HubSpot email automation as it exists today. It's time-based, not context-based. Sequences fire on schedules. Workflows trigger on property changes. Neither one can read an email and understand what it says. And in a world where the most important information about a prospect lives in the text of their replies — not in structured CRM fields — that's a crippling blind spot.

How Drip Campaigns Actually Work (And Why They're Dying)

Let me trace the lineage, because understanding where drip campaigns came from explains why they're stuck.

Email drip campaigns were invented in a world where the sender had a structural information advantage. The marketer knew things the prospect didn't: product capabilities, pricing models, case studies, industry benchmarks. The drip campaign's job was to transfer that knowledge over time, building interest gradually until the prospect was ready to engage. Email one introduced the problem. Email two described the solution. Email three offered social proof. Email four asked for the meeting.

This worked when prospects were information-starved. In 2019, if you wanted to understand a SaaS product's capabilities, you often had to either talk to a salesperson or consume their content marketing. The drip campaign was a reasonable middle ground: let me educate you at your own pace until you're ready.

By 2026, prospects have all the information they need before they ever enter your CRM. They've read your G2 reviews. They've watched your YouTube demos. They've asked their network on LinkedIn. They've probably seen your competitor's comparison page that positions you as an inferior alternative. The information asymmetry that made drip campaigns effective has collapsed. Your five-email nurture sequence is telling them things they already know, at a pace they didn't choose, in a format they're increasingly ignoring.

Our own data confirms this. Open rates on our nurture sequences dropped from 28% in 2023 to 14% in 2025. Click rates went from 4.2% to 1.1%. Reply rates — the only metric that actually correlates with pipeline — went from 2.8% to 0.9%. We're sending more emails to more contacts and getting fewer meaningful responses. The trendline is unmistakable.

What "Reading the Room" Means in Practice

When I say AI can read email context, I don't mean it uses sentiment analysis to tag an email as "positive" or "negative" (which HubSpot can already do in a rudimentary way). I mean it can understand what a prospect is actually communicating and decide what should happen next.

We started using a notes intelligence agent that processes every incoming email reply, logged call note, and meeting summary in HubSpot. The agent doesn't just read individual messages — it reads the full conversation thread and the deal context, and produces an assessment.

Here's a real example from last month. A prospect named Claudia at a logistics company had been in our pipeline for six weeks. She'd received four emails from our nurture sequence and hadn't replied to any of them. In the old world, that's a cold lead. Maybe re-engage in three months.

But Claudia had replied to a separate thread — a personal email from Derek, one of our AEs, who'd met her at a trade show. In that reply, she'd written: "Timing isn't right for us this quarter. We're mid-migration on our warehouse management system and adding another tool would overwhelm the team. Can we reconnect after the migration lands? Probably late Q2."

The notes intelligence agent caught this. It identified that Claudia had provided a specific timeline (late Q2), a specific blocker (WMS migration), and a specific intent signal (she said "reconnect," not "not interested"). The agent's recommendation: pause all nurture, set a follow-up task for May 15, and when Derek reaches out, reference the WMS migration and ask how it went.

In the old system, the nurture sequence would have kept firing. Claudia would have received two more automated emails during a period where she'd explicitly asked to be left alone. And when Derek did reach out in Q2, he might not have remembered the WMS migration detail — it was buried in an email thread from months ago.

That's what reading the room means. Not sentiment scoring. Contextual understanding that drives appropriate action.

Where Sequences Still Make Sense (I'm Not Anti-Sequence)

I want to be fair. I've been critical of drip campaigns, but sequences aren't universally bad. They have a legitimate use case: initial outbound touchpoints where you have no prior relationship and no context to read.

If you're doing cold outbound — emailing a list of prospects who've never interacted with your company — a well-crafted three-email sequence is still the most efficient way to make initial contact. The emails need to be genuinely personalized (not merge-field personalized, actually personalized), but the sequencing — send email one on Monday, follow up Wednesday if no reply, final touch the following Monday — that part works fine as automation.

Where sequences break down is in every scenario where context exists. The moment a prospect replies, visits your site, talks to your team, or does anything that generates contextual information, the sequence model falls apart because it can't process that context. It just keeps running on its schedule, blind to everything that happened between its pre-programmed emails.

Our approach now is to use sequences for the first three cold touches only. After that — whether the prospect replies or not — the AI layer takes over. If they reply, the agent reads the reply and recommends next steps. If they don't reply but show engagement signals (visited the site, downloaded content, engaged on LinkedIn), the agent identifies those signals through our contact engagement analyzer and recommends a contextually relevant follow-up. If there's truly zero signal, the agent marks them for a different cadence — maybe a longer-term nurture with quarterly check-ins rather than weekly emails.
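
To make that concrete, here is a minimal sketch of the routing decision in Python. None of this is our production code, and none of these names are HubSpot APIs; it just shows the three branches: reply, signal, silence.

```python
from dataclasses import dataclass, field

@dataclass
class Contact:
    email: str
    latest_reply: str | None = None                    # most recent inbound reply, if any
    signals: list[str] = field(default_factory=list)   # e.g. ["site_visit", "content_download"]

def route_after_cold_sequence(contact: Contact) -> str:
    """Decide what happens once the three-touch cold sequence ends."""
    if contact.latest_reply:
        # A reply exists: the notes intelligence agent reads it
        # and recommends next steps for the rep.
        return "agent_reads_reply"
    if contact.signals:
        # No reply, but behavioral signals: recommend a contextually
        # relevant follow-up based on what they engaged with.
        return "contextual_followup"
    # Truly zero signal: quarterly check-ins instead of weekly emails.
    return "long_term_nurture"

print(route_after_cold_sequence(Contact("claudia@example.com", signals=["site_visit"])))
# -> contextual_followup
```

The point of the sketch is the ordering: context wins over cadence at every branch, and the schedule only survives where no context exists.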

This hybrid approach has pushed our reply rates from 0.9% on pure sequence-driven email to 4.3% on the AI-guided follow-ups. Not because the emails are dramatically better written. Because they arrive at the right time, reference the right context, and demonstrate that someone on our end is actually paying attention.

The Architecture of Contextual Email

Let me describe the system we've built, because the concept of "AI reads your emails and decides what to do" can sound either magical or terrifying depending on how much detail you have.

Every email that enters HubSpot — inbound replies, outbound sent emails, forwarded threads — goes through the notes intelligence agent. The agent doesn't just read the individual email. It reads the full thread, the contact's activity history, the deal status, and the enrichment data we've collected. It produces what we call a "conversation state" — a running assessment of where this relationship stands.

The conversation state is structured. It includes: current prospect sentiment (with evidence), identified blockers, timeline indicators, decision-making authority signals, competitive mentions, and recommended next action with rationale. This gets written to a custom text property on the deal record and also posted as a note with a specific format that reps can scan quickly.
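
If it helps to picture the shape of that property, here is a hedged sketch of a conversation state as a Python dataclass, filled in with the Claudia example from earlier. The field names are illustrative, not our actual schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ConversationState:
    sentiment: str                                       # assessment label
    sentiment_evidence: str                              # quote or paraphrase from the thread
    blockers: list[str] = field(default_factory=list)
    timeline_indicators: list[str] = field(default_factory=list)
    authority_signals: list[str] = field(default_factory=list)
    competitive_mentions: list[str] = field(default_factory=list)
    next_action: str = ""                                # recommended action
    rationale: str = ""                                  # why the agent recommends it

state = ConversationState(
    sentiment="warm, deferred",
    sentiment_evidence='"Can we reconnect after the migration lands? Probably late Q2."',
    blockers=["warehouse management system migration in flight"],
    timeline_indicators=["late Q2"],
    next_action="Pause nurture; set follow-up task for May 15; reference the WMS migration",
    rationale="Explicit reconnect request with a concrete timeline and a named blocker",
)

# Serialized into the custom text property on the deal record.
print(json.dumps(asdict(state), indent=2))
```

The evidence fields matter as much as the labels. A sentiment tag without a quote behind it is just sentiment scoring again.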

Reps see this in their daily workflow. When Ben opens his task queue in the morning, each follow-up task has the conversation state attached. He's not starting from "I need to re-read six emails to remember what's happening with this deal." He's starting from "this prospect mentioned budget approval happening in March, referenced a conversation with their CTO about security requirements, and asked a question about our SOC 2 compliance that hasn't been answered yet."

The recommended actions are suggestions, not automations. The agent might recommend: "Send SOC 2 documentation and reference the CTO's security concerns from the Dec 14 email." Ben decides whether to follow the recommendation, modify it, or ignore it entirely. About 70% of the time, he follows the recommendation closely. About 20% of the time, he modifies it based on something he knows that the agent doesn't. About 10% of the time, he ignores it because his instinct says otherwise. That 70/20/10 split is healthy. If reps followed the AI 100% of the time, I'd worry we were just building a fancier sequence engine.

What We Got Wrong Along the Way

We didn't get this right on the first attempt. Or the second.

Our first version of the notes intelligence agent was too aggressive about pausing nurture sequences. It would read any email reply as a "context signal" and pull the contact out of automated nurture. This included one-word replies like "Thanks" and auto-replies like "I'm out of office until Monday." In the first two weeks, 40% of our active nurture contacts were pulled into the AI-driven track prematurely, and about half of them didn't warrant individual attention. Vivek, who was consulting on the implementation, had to build a filtering layer that distinguished between substantive replies (actual sentences with content) and non-substantive ones (auto-replies, one-word acknowledgments, unsubscribe requests).
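
The filter itself does not need to be sophisticated. Here is a simplified sketch of the kind of heuristic such a layer can apply (an illustration, not Vivek's actual code):

```python
import re

AUTO_REPLY_MARKERS = ("out of office", "automatic reply", "auto-reply")
BRUSH_OFFS = {"thanks", "thank you", "ok", "okay", "got it", "received"}

def is_substantive(reply: str) -> bool:
    """Heuristic: only substantive replies should leave automated nurture."""
    text = reply.strip().lower()
    if any(marker in text for marker in AUTO_REPLY_MARKERS):
        return False            # auto-replies keep the contact in nurture
    if "unsubscribe" in text:
        return False            # routed to compliance handling, not the agent
    if text.rstrip(".!") in BRUSH_OFFS:
        return False            # one-word acknowledgments
    # Require at least a few real words before treating it as a context signal.
    return len(re.findall(r"[a-z']+", text)) >= 5

print(is_substantive("Thanks!"))                                    # False
print(is_substantive("I'm out of office until Monday."))            # False
print(is_substantive("Timing isn't right; reconnect in late Q2."))  # True
```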

The second mistake was trying to have the AI draft actual reply emails. We set it up, tested it for two weeks, and killed it. The drafts were competent but lifeless. They sounded like a capable stranger — technically correct, referencing the right details, but missing the relational warmth that makes B2B email work. Claudia at the logistics company would have noticed. She'd had two personal conversations with Derek. Getting a reply that sounded like a well-informed chatbot would have damaged the relationship. Yara, one of our advisors, put it simply: "People buy from people. AI should power the person, not replace the person." We took her advice.

The third mistake was volume-related. We were running the notes intelligence agent on every contact record in HubSpot — about 14,000 contacts. The agent was churning through thousands of records daily, most of which had no recent activity and didn't need analysis. The cost spiked to $1,100 in the first month. Elena flagged it and we scoped the agent down to only process contacts with activity in the last 30 days, which cut the volume to about 800 contacts and the cost to roughly $220 per month.
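
If you want to do the same scoping, HubSpot's CRM search endpoint can handle the filtering before any AI runs. A minimal sketch, assuming a private app token in the environment; it filters on lastmodifieddate as a recency proxy, though the property that best approximates recent activity may differ in your portal.

```python
import os
import time

import requests

# Scope the agent: only contacts with activity in the last 30 days.
# HUBSPOT_TOKEN is assumed to be a private app token; "lastmodifieddate"
# is a stand-in recency proxy, not necessarily the right property for you.
cutoff_ms = int((time.time() - 30 * 24 * 3600) * 1000)

resp = requests.post(
    "https://api.hubapi.com/crm/v3/objects/contacts/search",
    headers={"Authorization": f"Bearer {os.environ['HUBSPOT_TOKEN']}"},
    json={
        "filterGroups": [{"filters": [{
            "propertyName": "lastmodifieddate",
            "operator": "GTE",
            "value": str(cutoff_ms),
        }]}],
        "properties": ["email"],
        "limit": 100,  # page with the returned "after" cursor in production
    },
    timeout=30,
)
resp.raise_for_status()
recent = resp.json()["results"]
print(f"{len(recent)} contacts touched in the last 30 days")
```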

The Metrics Nobody Asks About

Everybody wants to know about reply rates and conversion rates. Those improved — I mentioned the 0.9% to 4.3% reply rate jump. But the metrics I care about more are the ones that indicate relationship quality, not just response volume.

Meeting-to-opportunity conversion. When a prospect books a meeting, what percentage of those meetings result in a qualified opportunity? Before the AI layer, 41%. After: 63%. The improvement comes from better pre-meeting context — reps show up knowing what the prospect cares about, so the conversations start at a higher level.

Sales cycle length for engaged prospects. For prospects who reply to at least one email, the average sales cycle dropped from 67 days to 48 days. The AI isn't making people buy faster. It's eliminating the dead time between touches — the days where a rep was planning to follow up but hadn't, or the delays caused by needing to re-research a deal that had gone cold and warmed back up.

Prospect satisfaction signal. This is qualitative, but telling. We ask every closed-won customer: "How was the sales experience?" Before the switch, common feedback included "I had to repeat myself a few times" and "felt like your team didn't always remember our previous conversations." After the switch, that feedback disappeared. Multiple customers have said something like: "Your team was really on top of things. It felt like everyone knew what was going on." That's the notes intelligence agent at work. The rep didn't remember every detail. The AI did, and surfaced the right details at the right time.

What got worse. Sequence-enrolled contacts per month dropped by about 60%, which looks bad on a marketing dashboard but reflects the intended change — fewer people getting generic drip emails, more people getting contextual outreach. Our CMO, Marcus, had to adjust his board reporting to explain why "contacts in nurture" was declining while pipeline was growing. Some board members interpret "fewer automated emails" as "less marketing activity" rather than "more focused marketing activity." If you're going to make this shift, prepare the narrative for stakeholders who measure activity volume.

Practical Advice for Making the Switch

If you're running HubSpot email automation today and want to add contextual intelligence, here's the order I'd recommend.

First, don't touch your existing sequences yet. Just add the notes intelligence layer on top. Let it run for four weeks, reading emails and producing conversation states, without changing any automation. Use this period to validate that the AI's assessments match what your reps would say about each deal. If the AI says a deal is at risk but the rep says it's healthy, investigate who's right. Usually the AI has spotted something the rep is too optimistic to see, but not always.

Second, start pulling high-value deals out of generic sequences and into AI-guided follow-up. Any deal above your average contract value should get contextual treatment. Leave smaller deals in sequences for now. This limits your risk while you build confidence in the new approach.

Third, modify your sequences to be shorter. Instead of five-to-seven emails over three weeks, send two-to-three emails over one week. After that, the AI layer takes over regardless of whether the prospect replied. This reduces the window where a tone-deaf automated email can go out after a context-changing event.

Fourth, train your reps to read the conversation states and act on them. This sounds obvious but isn't. Some reps will ignore the AI recommendations and keep doing what they've always done. Ben adopted it immediately. Rafael took about three weeks. Sonia resisted for a month before a specific incident — the agent caught a competitive mention she'd missed — converted her. Change management matters as much as the technology.

And finally, don't stop writing good emails. The AI decides who to email, when, and with what context. The actual email still needs to sound like a human who gives a damn. Our best-performing emails are short, specific, and reference something only a person who'd been paying attention would know. The AI provides the "paying attention" part. The rep provides the human part. Neither works without the other.

HubSpot email automation was designed for a world where marketers knew more than their prospects and the challenge was distributing information at scale. That world is over. The challenge now is understanding context at scale — reading thousands of email replies, meeting notes, and engagement signals and producing the right response at the right time. That's not a sequence problem. It's an intelligence problem. And intelligence is where AI actually earns its keep.

The inbound deal creator handles the intake. The notes intelligence agent handles the ongoing conversation. The contact engagement analyzer watches the behavioral signals. Together, they turn HubSpot's email automation from a broadcasting tool into a listening tool. That difference changes everything downstream.


Try These Agents

  • Notes Intelligence -- Read HubSpot emails, notes, and call summaries to produce contextual conversation intelligence
  • Contact Engagement Analyzer -- Identify genuine engagement patterns and buying signals across all HubSpot contact activity
  • Inbound Deal Creator -- Create qualified deals from inbound HubSpot contacts with enrichment-based fit assessment
