
AI Meeting Summaries That Actually Capture What Matters

Ibby Syed, Founder, Cotera
6 min read · March 6, 2026


[Image: a structured meeting summary displayed alongside a conversation transcript]

The first AI meeting summary I ever read was from a product planning session in April 2024. It said something like: "The team discussed the upcoming product launch and various timeline considerations. Several action items were identified and assigned to team members." I sat there staring at it. That could describe literally any meeting in the history of corporate America. I'd just spent 50 minutes in that room debating whether to delay the launch by three weeks because our checkout flow had a critical bug, and the AI summary made it sound like we'd chatted about the weather.

My colleague Nadia had the same reaction. She was the one who'd advocated loudest for the delay — she'd pulled up error logs during the meeting showing 340 failed checkouts in the previous week. The summary mentioned none of this. Just "various timeline considerations." Nadia's exact response: "If this is what AI meeting notes look like, I'm going back to writing my own."

She was right to be frustrated. But the problem wasn't AI. The problem was that nobody told the AI what to actually produce.

Generic Prompts Make Generic Summaries

I spent the next few months figuring out why some meeting summaries were useless and others were genuinely helpful. The answer was embarrassingly simple. When you ask an AI to "summarize this meeting," you get a book-report summary. Topics covered, general outcomes, vague action items. It's technically accurate the way a weather forecast of "some weather will occur tomorrow" is technically accurate.

The unlock was specificity. Instead of "summarize," we started asking for five specific things: every decision made with the exact details, every action item with the person who committed and when, every point of disagreement and how it was resolved, every open question with who needs to answer it, and the key constraints or assumptions mentioned.
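
If you're wiring this up yourself, that list is essentially the whole prompt. Here's a minimal sketch in Python, assuming the OpenAI SDK; the model name and the summarize wrapper are placeholders, and the instruction block itself should paste into any tool that lets you customize summary prompts.

```python
# A minimal sketch of the five-part extraction prompt.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = """Summarize this meeting transcript. Extract:
1. Every decision made, with the exact details.
2. Every action item, with the person who committed and the deadline.
3. Every point of disagreement and how it was resolved.
4. Every open question and who needs to answer it.
5. Key constraints or assumptions mentioned.

Do not generalize. Use the names, dates, and numbers from the transcript."""

def summarize(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable model works
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

The closing "do not generalize" line is there to push back against the book-report drift described above.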

Same meeting, same transcript, wildly different output. Here's a real before and after from one of our weekly planning sessions (details changed slightly):

Before (generic prompt): "The team discussed Q3 roadmap priorities. Key topics included resource allocation, the analytics dashboard project, and customer feedback integration. Several decisions were made about timelines and responsibilities."

Zero actionable information. I could have written that sentence without attending the meeting.

After (specific prompt): "Decision: Analytics dashboard approved for Q3, starting August 1. Marcus and Priya assigned as engineering leads. Customer feedback integration pushed to Q4 — blocked by third-party API not ready until September (not a prioritization choice, a hard dependency). Sprint cadence moving from 2-week to 3-week cycles. Debate context: Marcus argued 30% of engineering time lost to sprint ceremonies; Nadia concerned about reduced ability to course-correct. Compromise: trial 3-week sprints for Q3 and reassess. Action: Marcus to draft technical spec by March 15. Sarah to update roadmap doc by March 12. Open question: does the analytics dashboard need a dedicated PM, or can Sarah cover it? Decision deferred to VP Product 1:1."

Same meeting. Night and day. The first version proves a meeting happened. The second version makes the meeting useful to someone who wasn't there.

The Three Layers Nobody Thinks About

After processing a few hundred meetings through our optimized setup, I've landed on a framework that makes summaries actually valuable. Most tools deliver layer one and stop.

Layer one is structure. Topics covered, decisions made, action items assigned. This is the skeleton. Necessary, not sufficient. Our meeting action tracker handles this layer automatically — it extracts every decision, action item with owner and deadline, and unresolved question. This makes summaries scannable. Fifteen seconds to determine if a meeting is relevant to you.

Layer two is context. This is where most tools fail completely. Structure tells you what was decided. Context tells you why. When a summary says "Decision: Move the launch date to April 15," I need to know whether that was a rubber-stamp schedule adjustment or a tense 20-minute debate where engineering was pushing back and product overruled them. Those two scenarios require completely different follow-up.

At one of our retrospectives, a team lead named Raj read through a summary that said "decided to deprecate the v1 API." He assumed it was straightforward. It wasn't. The full context was that three engineers had argued against deprecation because two enterprise customers still depended on it, and the decision only passed because the VP guaranteed those customers would be migrated first. Raj didn't know any of that. He started the deprecation. One of those enterprise customers called our CEO the next day. A summary with context would have prevented that.

Layer three is attribution. Who said what. Not for blame — for knowing who has the knowledge. If someone mentions a technical constraint about database migration timing, I need to know which engineer raised it so I can follow up with the right person. "A concern was raised about migration timing" versus "Marcus raised a concern about migration timing, estimating 3 weeks instead of the planned 2" — the second version is ten times more useful because I know exactly whose desk to walk to.
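
One way to make all three layers show up consistently is to pin them down as a schema the extraction has to fill in. Here's a rough sketch in Python; the field names are illustrative, not a standard.

```python
# A rough schema for the three layers. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Decision:
    what: str        # layer 1 (structure): what was decided
    rationale: str   # layer 2 (context): why, and how contested it was
    decided_by: str  # layer 3 (attribution): who made the call
    assumptions: list[str] = field(default_factory=list)

@dataclass
class ActionItem:
    task: str
    owner: str       # attribution: whose desk to walk to
    due: str

@dataclass
class MeetingSummary:
    decisions: list[Decision]
    action_items: list[ActionItem]
    open_questions: list[str]         # each should name who answers it
    key_discussion_points: list[str]  # disagreements and their resolutions
```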

How We Set This Up

We use a meeting summary digest that Fireflies feeds into. It produces summaries with all three layers. The output goes to a shared Slack channel, threaded by meeting, so people can correct anything that's off.
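
The Slack half of that pipeline is only a few lines. Here's a hedged sketch using the official slack_sdk client; the channel name and message shapes are conventions from our setup, not requirements.

```python
# Post a structured summary to a shared channel, threaded by meeting.
# Assumes slack_sdk is installed and a bot token lives in SLACK_BOT_TOKEN.
import os
from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def post_summary(meeting_title: str, summary_text: str) -> None:
    # The parent message anchors one thread per meeting.
    parent = client.chat_postMessage(
        channel="#meeting-summaries",  # our channel name; use your own
        text=meeting_title,
    )
    # The full summary goes in the thread, where corrections land as replies.
    client.chat_postMessage(
        channel="#meeting-summaries",
        thread_ts=parent["ts"],
        text=summary_text,
    )
```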

The adoption curve was interesting. In the first month with our old generic summaries, team members clicked into each summary an average of 0.3 times. I tracked this. Basically nobody read them. After we optimized the format with specific extraction prompts, clicks jumped to 2.7 per summary. People started reading summaries because summaries started containing information worth reading.

Nadia — the same person who wanted to go back to handwritten notes — became the biggest advocate. She told me in January that she references meeting summaries probably four times a day. "I used to spend 20 minutes before every follow-up call trying to remember what we discussed. Now I search the summary, get the context in 30 seconds, and actually sound prepared."

The Debate Problem

Here's something I didn't expect to matter so much. The most valuable part of a meeting summary isn't what was decided. It's what was argued about.

Decisions in summaries look like facts. Clean. Final. "We're launching April 15." But that date might have been chosen after a 20-minute back-and-forth where three alternatives were considered. Someone might have strongly objected. The decision might rest on an assumption — say, that the API partner delivers on time — that could change.

Without the debate context, a future reader sees a clean decision and takes it as settled. They don't know it was contested. They don't know the assumptions. They can't tell whether new information should trigger a revisit.

Our summary format has a "Key Discussion Points" section specifically for this. It captures disagreements and their resolutions. This section alone has prevented at least a dozen "wait, why did we decide that?" follow-up meetings — the kind of meetings that exist only because the original decision context got lost.

I counted recently. In Q4 2025, we had zero "re-litigate a past decision" meetings. In Q4 2024, before we captured debate context, we had eight. Each one ran 30-45 minutes. That's roughly four to six hours of meeting time eliminated by one section in our summary format.

Meeting Summaries as Institutional Memory

Here's what I didn't predict. After 18 months of structured summaries, the archive became something more than a note repository. It became the best onboarding tool we have.

A new hire named Sanjay joined in November. Instead of the usual "read the wiki and ask questions" onboarding, his manager had him search meeting summaries for the past quarter. He could trace how a product decision evolved through three meetings. He could see who the experts were on different topics. He told me after his first week that he felt like he'd been on the team for months. Our Notion docs never achieved that — they're static snapshots. Meeting summaries capture the living decision-making process, disagreements and all.

We also feed structured summaries into team meeting intelligence for pattern analysis. Which topics keep coming up without resolution? Which meetings consistently produce action items and which are just recurring chat sessions? Where are decisions being made without the right people present? That meta-analysis only works when summaries are structured consistently.
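
The first of those questions barely needs AI at all. Here's a sketch of the recurring-unresolved-topics check, assuming summaries have been parsed into something like the schema sketched earlier; the threshold is arbitrary.

```python
# Flag open questions that keep appearing across meetings unresolved.
# Assumes each summary exposes an open_questions list of strings.
from collections import Counter

def recurring_open_questions(summaries, threshold: int = 3) -> list[str]:
    counts = Counter(
        question.strip().lower()
        for summary in summaries
        for question in summary.open_questions
    )
    return [q for q, n in counts.items() if n >= threshold]
```

Exact string matching is crude; in practice you'd want fuzzy or embedding-based grouping, but even the crude version surfaces the worst offenders.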

Where to Start If Your Summaries Are Useless

Don't overhaul everything. Change one thing: specify what you want in the output. Instead of "summarize this meeting," ask for decisions with details, action items with owners, points of disagreement with resolutions, open questions with who owns them, and key assumptions or constraints that were mentioned.

That single change transforms summaries from something people ignore to something they rely on. Once you see the difference, you'll want to automate the format — that's where the action tracker and digest come in. But even before automation, just being specific about what you ask the AI to extract changes everything.

The gap between a useless summary and a valuable one isn't the AI model. It's never been the model. It's the instruction.


Try These Agents

  • Meeting action tracker — Extracts decisions, action items, and open questions from meetings with full attribution
  • Meeting summary digest — Produces structured daily digests of meeting summaries across your team
  • Team meeting intelligence — Analyzes meeting patterns to surface recurring topics, decision quality, and efficiency trends
  • Meeting recap slides — Converts structured meeting summaries into slide decks for stakeholder presentations

For people who think busywork is boring

Build your first agent in minutes with no complex engineering, just typing out instructions.