
Our Linear-Slack Integration Was a Notification Firehose. We Fixed It.

Ibby Syed, Founder, Cotera
7 min read · March 8, 2026

The day we connected Linear to Slack, 43 notifications hit the #engineering channel before lunch. Issue created. Issue assigned. Issue moved to In Progress. Issue priority changed. Comment added. Issue moved to In Review. PR linked. Issue moved to Done. Multiply that by a team of 18 engineers working on maybe eight issues each per cycle, and you get a channel that scrolls faster than anyone can read.

By the end of the first week, Kenji had muted #engineering. By the end of the second week, 14 of 18 engineers had muted it. By the end of the first month, Priya -- who was the one who'd requested the integration so product could track engineering progress -- admitted she hadn't opened the channel in two weeks. The integration was technically working. It was sending every update, faithfully, in real time. And nobody was reading any of it.

The Notification Taxonomy

Not all notifications are equal, but Linear's Slack integration treats them like they are. Here's what was hitting our channel on a typical day:

State changes: about 60% of all notifications. "Kenji moved ISS-412 from Todo to In Progress." Useful if you're watching that specific issue. Noise if you're not. And since most people are not watching most issues, most state change notifications are noise for most people.

Comments: about 15%. These are actually useful -- someone asking a question or sharing context. But they were buried in the state change flood. Anya said trying to find a comment notification in the channel was "like finding a receipt in a dumpster."

Assignments: about 10%. "ISS-415 assigned to Rafael." Useful for Rafael. Irrelevant to everyone else. Rafael gets a direct notification in Linear anyway, so the Slack notification is redundant for the one person who cares.

New issues: about 10%. Sometimes interesting, especially if a customer-facing bug just got filed. But "ISS-420 created" without context about severity or impact reads the same as every other notification in the stream.

Priority changes: about 5%. These should be the most attention-worthy, but they blend into the wall of text like everything else.

The fundamental problem isn't that Linear sends too many notifications. It's that the integration has one volume knob and it's set to maximum. You can choose which teams and projects feed into which Slack channels, and you can pick which event types to include. But you can't say "only notify me about high-priority state changes" or "batch these updates into a daily summary" or "skip In Progress notifications but alert on Blocked." The granularity stops at event type, and that's one level too coarse.

What We Tried First

We went through the predictable troubleshooting arc.

First, we split notifications across channels. #eng-frontend, #eng-backend, #eng-platform, #eng-mobile. This reduced per-channel volume by about 75%, which helped, but each channel still had the same signal-to-noise problem at a smaller scale. Engineers on the frontend team still saw every state change for every frontend issue, and they still muted the channel within a few weeks.

Next, we restricted event types. We turned off assignment notifications and state changes, keeping only comments and new issues. This cut the volume dramatically but killed the visibility that Priya wanted. She couldn't track engineering progress from Slack anymore, which was the whole reason we'd set up the integration.

Then we tried a webhook-to-Slack middleware. Tomás built a small service that received Linear webhooks, applied filters (only notify on Urgent or High priority issues, only notify on Blocked state), and forwarded the filtered events to Slack. This worked better. The channel went from 43 notifications per day to about 8. But maintenance was a pain. Every time we changed our Linear workflow -- added a state, renamed a label, reorganized teams -- Tomás had to update the filter rules. After two months, the filters had drifted enough that they were missing events they should have caught and catching events nobody cared about.
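The filtering logic itself was never the hard part. A minimal sketch of what a filter like Tomás's might look like, assuming a Linear-style webhook payload (field names like `data.priority` and `data.state.name` are illustrative, not an exact spec; in Linear's priority scheme 1 is Urgent and 2 is High):

```python
# Sketch of a webhook filter like the middleware described above.
# Payload field names are assumptions based on Linear's webhook shape;
# check the actual payload before relying on them.

FORWARD_PRIORITIES = {1, 2}       # 1 = Urgent, 2 = High in Linear's scheme
FORWARD_STATES = {"Blocked"}

def should_forward(event: dict) -> bool:
    """Decide whether a Linear webhook event is worth a Slack message."""
    data = event.get("data", {})
    if data.get("priority") in FORWARD_PRIORITIES:
        return True
    if data.get("state", {}).get("name") in FORWARD_STATES:
        return True
    return False
```

The function is trivial. The maintenance burden is everything around it: keeping `FORWARD_STATES` and the rest of the rules in sync with a workflow that keeps changing.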

What Actually Worked

We replaced all of it with a standup generator agent that posts to Slack on a schedule instead of reacting to every individual event.

Every morning at 9:15, the agent reads the current state of Linear -- what's in progress for each person, what moved since yesterday, what's blocked, what's new in triage -- and posts a single structured summary to #engineering. One message. Covers everything. Priya reads it in about 90 seconds and knows exactly where engineering stands without opening Linear.

The morning summary looks something like this: Kenji is working on the notification service refactor (two issues in progress, one moved to review yesterday). Rafael has three issues in progress across the payments project. Elena's mobile team has a blocked issue -- waiting on API endpoint from backend. Two new bugs came in overnight, both triaged as medium priority. The frontend cycle is 72% complete with four days remaining.

That's one Slack message doing the work of 40 individual notifications, and it's actually readable.
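At its core, the generator is a reduction: take every issue snapshot and fold it into one message. A simplified sketch, with an invented snapshot shape (`assignee`, `state`, `id`, `title`) standing in for whatever your Linear client actually returns:

```python
from collections import defaultdict

def build_standup(issues: list[dict]) -> str:
    """Fold a list of issue snapshots into one standup message.

    The snapshot shape (assignee/state/id/title) is illustrative,
    not Linear's actual API response.
    """
    by_person = defaultdict(list)
    blocked = []
    for issue in issues:
        by_person[issue["assignee"]].append(issue)
        if issue["state"] == "Blocked":
            blocked.append(issue)

    lines = ["*Engineering standup*"]
    for person, theirs in sorted(by_person.items()):
        in_progress = sum(1 for i in theirs if i["state"] == "In Progress")
        lines.append(f"- {person}: {in_progress} in progress, {len(theirs)} total")
    for issue in blocked:
        lines.append(f"- Blocked: {issue['id']} {issue['title']}")
    return "\n".join(lines)
```

The real agent adds the triage and cycle-progress sections, but the shape is the same: one pass over current state, one message out.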

We added a second post at 3pm for blocked issues only. If anything is in a Blocked state or has a "blocked" label, the agent posts it with context: what's blocked, who owns it, and how long it's been stuck. This replaced the real-time Blocked notifications we'd been trying to filter through Tomás's middleware, and it works better because the afternoon check catches issues that were blocked earlier in the day but nobody noticed in the morning summary.
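The afternoon check amounts to one query against Linear's GraphQL API plus a formatter that adds the "how long has this been stuck" context. A sketch; the filter syntax follows Linear's documented pattern but should be verified against the current schema, and using `updatedAt` as the stuck-since timestamp is an approximation:

```python
from datetime import datetime, timezone

# GraphQL query for issues in a Blocked state. Filter syntax follows
# Linear's documented filtering pattern; verify against the live schema.
BLOCKED_QUERY = """
query {
  issues(filter: { state: { name: { eq: "Blocked" } } }) {
    nodes { identifier title updatedAt assignee { name } }
  }
}
"""

def format_blocked(node: dict, now: datetime) -> str:
    """Render one blocked issue with owner and rough stuck duration.

    Uses updatedAt as a proxy for when the issue became blocked, which
    undercounts if the issue was touched after it got blocked.
    """
    updated = datetime.fromisoformat(node["updatedAt"].replace("Z", "+00:00"))
    days = (now - updated).days
    return (f"{node['identifier']} {node['title']} -- "
            f"owner: {node['assignee']['name']}, stuck {days}d")
```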

The third addition was a Friday afternoon cycle report. Percentage complete, completed issues, carryover risks, and a comparison to the previous cycle. Rafael used to compile this manually. Now it shows up in Slack at 4pm and he reviews it in five minutes.

The Notification Channel Isn't Dead

We didn't turn off the native Linear-Slack integration entirely. We kept it for one specific case: comments. When someone comments on a Linear issue and @-mentions a Slack user or links to a Slack thread, that notification still goes to the relevant channel. Comments are the one event type where real-time notification actually matches how people work -- someone asked a question and needs an answer, or shared information that's time-sensitive.

Everything else -- state changes, assignments, priority updates, new issues -- gets batched into the scheduled summaries. The real-time firehose is gone. The daily rhythm of morning summary, afternoon blockers, Friday report replaced it.

Nobody has muted #engineering since we made the switch. That's the metric that matters more than any configuration detail. A channel that people actually read is infinitely more valuable than a channel that faithfully logs every event into the void.

Patterns for Other Teams

If you're setting up a Linear-Slack integration right now, here's what I'd suggest based on our experience.

Don't start with real-time notifications for everything. Start with nothing. Seriously. Connect Linear to Slack, turn off all event types, and set up a scheduled summary instead. Let the team live with summaries for two weeks and then ask what's missing. You'll find that almost nothing is missing, because the summary covers 95% of what people actually need to know.

If something does need real-time notification, add it narrowly. We found exactly one case: @-mention comments. Your team might have a second case -- maybe urgent bugs need an immediate Slack ping. Fine. Add that one event type to one channel with specific filters. Don't open the floodgates.

Keep the native integration for thread-based conversations. When someone references a Linear issue in Slack (using the Linear bot's unfurling), the thread that forms around it is genuinely useful. People discuss the issue in Slack, and the context stays linked. That's a good use of the integration. It's the broadcast-style notifications that cause problems, not the conversational ones.

Schedule your summaries around your team's rhythm. Our 9:15am summary lands right before standup. The 3pm blocker check gives people time to unblock something before end of day. The Friday report aligns with our cycle review cadence. Pick times that match when your team actually makes decisions about the information.
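In cron terms, our schedule looks roughly like this (times local, weekday-only; the script names are placeholders for whatever invokes your agent):

```shell
# 9:15am summary, Mon-Fri -- lands just before standup
15 9 * * 1-5  /usr/local/bin/post-standup-summary
# 3pm blocked-issue check, Mon-Fri
0 15 * * 1-5  /usr/local/bin/post-blocked-check
# 4pm Friday cycle report
0 16 * * 5    /usr/local/bin/post-cycle-report
```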

Marcus made an observation that stuck with me: "The old integration told us everything that happened. The new one tells us everything that matters." The difference is curation. A list of every event is data. A structured summary that highlights blockers, progress, and risks is information. Linear's native integration gives you data. An agent that reads Linear and posts to Slack on a schedule gives you information.

We went from a channel that 14 out of 18 engineers had muted to a channel that people check voluntarily every morning. Same tools. Same team. Different layer between them.

