HubSpot Marketing Automation: We Threw Out Our Sequences and Built Something That Actually Listens
Marcus, our head of demand gen, came to me on a Thursday afternoon last October with a spreadsheet that made my stomach drop. He'd pulled every marketing-sourced deal from the previous quarter and traced each one back through HubSpot to the exact sequence that had touched them. Out of 340 deals marketing claimed credit for, 22 had actually been influenced by an automated workflow. Twenty-two. The rest had come through channels our automation didn't even touch — referrals, organic search clicks that bypassed nurture entirely, cold outbound from our SDR team. Our marketing automation, the thing we'd spent nine months building and $48,000 in HubSpot licensing to run, was responsible for 6.5% of our pipeline.
I didn't believe him at first. I made him pull it again. Same number.
The problem wasn't HubSpot itself. The problem was how we'd set it up. We had 47 active workflows, 12 nurture sequences, and a lead scoring model that assigned points for everything from email opens to page views to webinar registrations. On paper it looked sophisticated. In practice it was a Rube Goldberg machine that confused activity with interest and treated every contact like a widget on a conveyor belt.
So we tore it down. Not all at once — we're not masochists — but methodically, over about three months. What we rebuilt in its place looks nothing like the traditional HubSpot marketing automation playbook, and the results have been different enough that I think it's worth walking through what we changed and why.
The Core Problem With Lists and Sequences
Here's what most HubSpot marketing automation actually does. A contact fills out a form. They get added to a list based on some property — their industry, their company size, what they downloaded. That list triggers a sequence. The sequence sends three to seven emails over two to four weeks, each one slightly more aggressive about asking for a meeting. If the contact opens enough emails or clicks enough links, their lead score goes up, and eventually they become an MQL that gets tossed to sales.
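Stripped to its logic, that model is just a tally. Here's a minimal sketch of what the counting approach boils down to — the point values and threshold are invented for illustration, not HubSpot's actual scoring engine:

```python
# Hypothetical point values -- every activity adds score, regardless of context.
ACTIVITY_POINTS = {
    "email_open": 1,
    "email_click": 3,
    "page_view": 2,
    "webinar_registration": 10,
    "form_fill": 15,
}

MQL_THRESHOLD = 50  # hypothetical cutoff

def lead_score(activities):
    """Sum points for every logged activity -- volume is the only signal."""
    return sum(ACTIVITY_POINTS.get(a, 0) for a in activities)

def is_mql(activities):
    return lead_score(activities) >= MQL_THRESHOLD

# Thirty email opens and ten page views crosses the bar
# without a single purchase-relevant action:
noise = ["email_open"] * 30 + ["page_view"] * 10
lead_score(noise)  # 50 -- this contact becomes an MQL on volume alone
```

The failure mode is visible in the last two lines: nothing in the function can distinguish idle browsing from evaluation, because the inputs carry no context.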
This model assumes something fundamental: that a contact's readiness to buy is a function of how much marketing content they've consumed. More opens, more clicks, more page views equals more interest equals closer to purchasing. And that assumption is just... wrong. It's wrong so reliably that I'm surprised more teams haven't abandoned it.
Priya, one of our SDRs, put it to me bluntly during a pipeline review. "You send me these MQLs and I call them and half of them don't even remember filling out the form. The other half downloaded a PDF because their boss told them to research something for an internal project. I've had two real conversations out of the last thirty MQLs." Two out of thirty. A 6.7% hit rate on our most "qualified" leads.
The sequences weren't nurturing anyone. They were just adding noise to already-crowded inboxes and training our best prospects to ignore us.
What We Tried First (And Why It Failed)
Before we went the AI route, we tried fixing things within HubSpot's native toolset. We rebuilt our lead scoring model three times. We added negative scoring — deduct points if someone unsubscribes from a list, deduct points if they haven't visited the site in 30 days. We created "suppression workflows" that would pull contacts out of sequences if certain conditions were met. We built branching logic into our workflows so contacts would get different emails based on their industry and role.
All of this made the machine more complex without making it smarter. The fundamental problem remained: we were still measuring clicks and opens and treating them as proxies for buying intent. A more elaborate counting system is still a counting system.
Tomás, our RevOps lead, spent two full weeks building what he called the "intent waterfall" — a workflow that used seventeen different enrollment triggers and branched into nine different paths based on behavior combinations. It was genuinely impressive engineering. And the conversion rate from MQL to SQL went from 8% to 9%. One percentage point, after two weeks of work and a workflow so complex nobody else on the team could troubleshoot it.
That's when we started looking outside the native toolset.
The Shift: From Counting to Understanding
The thing that actually moved the needle was conceptually simple but operationally different from everything we'd been doing. Instead of counting what contacts did and scoring them on volume of activity, we started analyzing the context of their engagement.
What does that mean in practice? When a contact visits our pricing page, the old system gave them +10 points. The new approach asks: which pricing tier did they spend time on? Did they visit the comparison page afterward? Had they looked at any case studies in the same vertical? Did their company recently hire for a role our product supports? The same action — visiting a pricing page — can mean wildly different things depending on what surrounds it.
We started using a contact engagement analyzer that pulls the full picture on each contact instead of just tallying their clicks. It cross-references their HubSpot activity against what's actually happening at their company — hiring patterns, LinkedIn posts from their leadership team, recent news, tech stack indicators. The difference between "this person visited the pricing page" and "this VP of Marketing at a Series B company that's hiring three demand gen roles visited the enterprise pricing page after reading two case studies about companies their size" is the difference between a cold lead score and an actual buying signal.
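The shift from counting to context can be sketched as a rule layer over the same events. Everything below — field names, signal rules, the three labels — is illustrative, not our production logic:

```python
def interpret_pricing_visit(contact, events):
    """Classify a pricing-page visit by its surrounding context, not by points.
    `contact` and `events` use hypothetical field names for illustration."""
    signals = []
    if "pricing_enterprise" in events:
        signals.append("looked at enterprise tier")
    if "comparison_page" in events:
        signals.append("compared plans")
    industry = contact.get("industry")
    if industry and any(e.startswith("case_study_") and industry in e
                        for e in events):
        signals.append("read case studies in their vertical")
    if contact.get("relevant_open_roles", 0) > 0:
        signals.append("company hiring for roles the product supports")

    # The same page view means research, evaluation, or a buying signal
    # depending on how many corroborating signals surround it.
    if len(signals) >= 3:
        return "buying_signal", signals
    if signals:
        return "evaluation", signals
    return "research", signals

vp = {"industry": "saas", "relevant_open_roles": 3}
visit = ["pricing_enterprise", "comparison_page", "case_study_saas_series_b"]
label, why = interpret_pricing_visit(vp, visit)  # "buying_signal"
```

The returned `signals` list matters as much as the label: it's what becomes the human-readable brief that gets handed to sales.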
This was the unlock. Not better workflows. Not smarter sequences. Better understanding of what engagement actually means in context.
What the New Stack Looks Like
I want to be specific about what we're running now, because I've read too many marketing automation articles that stay at the "think about it differently" level without telling you what to actually do.
We still use HubSpot. We still have workflows. But the workflows serve a different purpose. Instead of being the engine that drives nurture, they're the connective tissue that moves data around. The thinking — the actual decision-making about what to do with each contact — happens in the AI layer on top.
Here's the flow. A new contact enters HubSpot through any channel: form fill, chatbot conversation, webinar registration, whatever. A workflow immediately triggers an enrichment pass. Our contact enrichment agent pulls in everything we can find about that person and their company. Job title and seniority, company size and growth rate, industry, tech stack, recent funding, hiring activity, LinkedIn profile context. This takes about ninety seconds per contact. The old way — having an SDR manually Google each lead — took fifteen to twenty minutes per contact, which meant it only happened for leads that were already flagged as high-priority. Now every contact gets the full treatment.
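Conceptually, the enrichment pass is a merge of several lookups into one profile. The helpers below are stand-ins for real providers (a firmographic API, hiring-signal sources, the CRM itself) returning canned data so the sketch is runnable — none of these are real endpoints:

```python
def lookup_firmographics(domain):
    # Stand-in for a firmographic/enrichment API call (hypothetical data).
    return {"company_size": 300, "industry": "saas", "recent_funding": "Series B"}

def lookup_hiring_activity(domain):
    # Stand-in for job-board or hiring-signal lookups (hypothetical data).
    return {"open_demand_gen_roles": 3}

def enrich_contact(contact):
    """Merge everything we can find into one profile the analyzer can use."""
    domain = contact["email"].rsplit("@", 1)[-1]
    profile = dict(contact)  # keep the original CRM fields
    profile.update(lookup_firmographics(domain))
    profile.update(lookup_hiring_activity(domain))
    return profile

enriched = enrich_contact({"email": "vp@example.com", "job_title": "VP Marketing"})
```

In production each lookup is an API call with its own latency and cost, which is why the pre-qualification gate described later matters so much.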
Once enrichment is done, the engagement analyzer kicks in. It doesn't just look at what this contact did on our site. It looks at the enriched profile and maps their behavior to a contextual intent signal. A marketing coordinator at a 15-person agency downloading a whitepaper is research. A VP of Revenue at a 300-person SaaS company downloading the same whitepaper after visiting the pricing page is evaluation. Same action, entirely different meaning.
Based on that contextual analysis, the contact gets routed. High-intent contacts with strong fit go directly to sales with a full brief — not just a name and lead score, but a paragraph explaining why this person is worth calling right now. Medium-intent contacts get enrolled in targeted nurture that's specific to their situation, not a generic drip sequence. Low-fit contacts don't get nurture at all — they just sit in the database until something changes. We stopped wasting email sends on people who were never going to buy, and our deliverability improved as a side effect.
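The routing step reduces to a small decision table over fit and intent. The fit rules and tier names here are invented for illustration — the real evaluation is richer — but the shape is accurate:

```python
def route_contact(profile, intent):
    """Route on contextual intent plus fit, not a point total.
    `intent` is the analyzer's label; the fit rule here is hypothetical."""
    good_fit = (profile.get("company_size", 0) >= 50
                and profile.get("seniority") in {"vp", "director", "c_level"})

    if not good_fit:
        # Low-fit contacts get no nurture at all -- they sit in the
        # database until something about them changes.
        return "hold", None
    if intent == "buying_signal":
        # Full brief to sales: who they are and why now, not just a score.
        return "sales", f"{profile['job_title']} showing active evaluation"
    if intent == "evaluation":
        return "targeted_nurture", None
    return "hold", None  # good fit, no intent yet: wait for a signal

destination, brief = route_contact(
    {"company_size": 300, "seniority": "vp", "job_title": "VP of Revenue"},
    "buying_signal",
)  # destination == "sales"
```

Checking fit first is the design choice that improved deliverability: a strong intent signal from a poor-fit contact still never triggers an email send.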
The Numbers (Honest Version)
I'm going to share our results, but I want to be upfront about what's directly attributable to the AI layer versus what's a combination of factors.
MQL-to-SQL conversion went from 8% to 23%. That's the headline number and it's real, but part of it is because we're generating fewer MQLs — the bar is higher. When you stop calling every webinar attendee an MQL, the conversion rate naturally goes up. Still, the absolute number of SQLs also went up by 40%, so it's not just a denominator trick.
Open rates across our nurture sequences climbed from 12% to 31%. This one I'm more cautious about attributing solely to AI. We also rewrote most of our email copy during this period, and Elena on our content team deserves credit for subject lines that actually sound like a human wrote them. The AI layer's contribution was ensuring the right emails went to the right people — but the copy improvement was just as important.
Sales feedback changed the most dramatically, and this is the metric I actually care about. Before the switch, our SDR team rated marketing-sourced leads at a 3.2 out of 10 on average (we run a monthly survey). Three months after the new system was running, that number was 7.1. The SDRs trust the leads now. They call them quickly instead of letting them sit. And that speed-to-contact improvement probably accounts for some of the conversion lift too — it's hard to isolate one variable.
What didn't improve: total pipeline volume from marketing didn't change much for the first two months. It was actually slightly down in month one because we were sending to fewer contacts. By month three it caught up, and by month five it was about 15% ahead of where it had been. If your leadership team judges marketing on monthly MQL volume, this transition is going to be a tough conversation for the first sixty days.
What Didn't Work
I don't want to paint this as some clean success story where everything went right. Plenty didn't.
Our first attempt at automated enrichment was way too aggressive. We were pulling data on literally every form fill, including the thousands of people downloading free templates who had absolutely no commercial intent. The enrichment agent was burning through API credits analyzing college students and freelancers who would never be customers. Anya, who managed the budget, flagged that we'd spent $3,200 on enrichment in the first month for contacts that should have been filtered out. We added a pre-qualification step — basic checks on email domain and company size before triggering enrichment — and the cost dropped to about $800 a month, which is sustainable.
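The pre-qualification gate is just a cheap check in front of the expensive call. The domain list and size floor below are examples, not our actual rules:

```python
# Example free-mail domains to screen out before paying for enrichment.
FREE_MAIL = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}
MIN_COMPANY_SIZE = 10  # hypothetical floor

def should_enrich(contact):
    """Cheap gate: only spend enrichment credits on plausibly commercial contacts."""
    domain = contact["email"].rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL:
        return False
    # Unknown company size passes through; a known tiny company does not.
    size = contact.get("company_size")
    if size is not None and size < MIN_COMPANY_SIZE:
        return False
    return True

should_enrich({"email": "student@gmail.com"})                 # False
should_enrich({"email": "vp@acme.com", "company_size": 300})  # True
```

The asymmetry is deliberate: a missing company size isn't a reason to skip enrichment, because enrichment is exactly what would fill it in.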
We also made the mistake of trying to automate the actual email content early on. AI-generated nurture emails sounded fine on a first read, but our audience could tell. Our reply rate on AI-written emails was 60% lower than on emails written by Elena and Derek on our content team. We went back to human-written templates with AI handling the targeting and timing only. The machines are great at deciding who to email and when. They're not great at sounding like someone you'd actually want to talk to.
The third thing that didn't work was trying to run the old system and new system simultaneously for comparison. We ended up with contacts getting duplicate emails because they were enrolled in both the legacy workflows and the new AI-driven flows. Kenji on our ops team spent an entire weekend untangling enrollment conflicts. If you're making this switch, just commit. Shut down the old workflows. Run both systems in parallel on paper — analyze what each would have done — but don't actually send from both.
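"Parallel on paper" just means shadow mode: the new logic runs and logs its decision, but only the legacy system actually sends. A minimal sketch, with a hypothetical stand-in for the new system's decision:

```python
shadow_log = []

def new_system_decision(contact):
    # Hypothetical stand-in for the AI-driven routing decision.
    return "sales" if contact.get("intent") == "buying_signal" else "hold"

def process_contact(contact, legacy_action):
    """Execute only the legacy action; record what the new system *would* do."""
    shadow_log.append({
        "email": contact["email"],
        "legacy": legacy_action,
        "new_would_do": new_system_decision(contact),
    })
    return legacy_action  # only one system ever touches the contact

process_contact({"email": "a@acme.com", "intent": "buying_signal"}, "sequence_3")

# Review where the two systems disagree -- without ever double-sending.
disagreements = [r for r in shadow_log if r["legacy"] != r["new_would_do"]]
```

Because the new system's output only ever lands in a log, enrollment conflicts like the one that cost Kenji his weekend can't happen.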
What I'd Tell You If You're Starting Fresh
If I were setting up HubSpot marketing automation from scratch today — no legacy workflows, no existing sequences to migrate — I'd build the whole thing around enrichment and contextual analysis from day one. I wouldn't start with "what emails should we send?" I'd start with "how will we understand each contact well enough to know whether they should get an email at all?"
The native HubSpot workflow builder is excellent infrastructure. Use it for data routing, property updates, task creation, and Slack notifications. Don't use it as your decision engine. The if/then branching in HubSpot workflows caps out at a complexity level that's useful for routing but inadequate for nuanced lead evaluation.
And invest in your content independently of your automation. The best targeting in the world doesn't help if the emails it delivers are boring. Automation decides who and when. Humans decide what and how. Keep those responsibilities separate and you'll avoid the trap of thinking better automation alone will fix a content problem.
The old way of doing HubSpot marketing automation — lists, sequences, lead scores, drip campaigns — isn't wrong so much as it's incomplete. It's the 2019 playbook running in a 2026 inbox. Adding an AI layer that actually understands context rather than just counting clicks is the difference between marketing automation that generates MQLs and marketing automation that generates pipeline.
And those are very different things, as Marcus and his spreadsheet taught me last October.
Try These Agents
- Contact Engagement Analyzer -- Analyze HubSpot contact engagement patterns to identify genuine buying signals vs. noise
- Contact Enrichment -- Enrich HubSpot contacts with company data, hiring signals, and tech stack intelligence
- Inbound Deal Creator -- Automatically create and qualify deals from inbound HubSpot contacts based on enriched profiles