Linear's Built-In Automations Aren't Enough. Here's What We Added.

Linear's automations are elegant. That's the right word for them. You set a rule -- when an issue moves to "In Progress," assign it to the person who moved it -- and it just works. No YAML. No scripting. No third-party plugin that breaks every time the API changes. Click, configure, done. I genuinely appreciate how clean the system is.
The problem is that clean and limited are close neighbors. After six months of running a 22-person engineering team on Linear, we had maxed out what the built-in automations could do by about week three. The remaining five months were spent building the automation layer that Linear's native tools couldn't reach.
What Linear's Automations Actually Do
Linear ships with workflow automations that trigger on state changes. When an issue enters a specific state, you can auto-assign it, add a label, set the priority, or move it to a project. When a pull request merges, the linked issue can automatically transition to "Done." When someone creates an issue with a specific label, it can auto-assign to a team member.
These rules are per-team and per-workflow. You set them once and they run reliably. Anya configured our core automations in about an hour:
When an issue moves to "In Review," it auto-assigns to our designated reviewer for that team. When a PR merges and the linked issue is in "In Review," the issue moves to "Done." When someone creates a bug (using the bug template), it auto-labels with the "bug" label and sets priority to "Urgent" if the title contains the word "production."
That third rule is where we started bumping into walls. "If the title contains a word" is about as sophisticated as Linear's conditional logic gets. There's no "if this issue mentions a customer name that's in our top-10 accounts list, escalate to P1." There's no "if more than three bugs have been filed against this project in the last 48 hours, alert the team lead." There's no "if this issue has been in Triage for more than two business days, ping the assignee."
Linear's automation rules are single-trigger, single-action, same-issue scope. They can't look across issues. They can't reason about patterns. They can't do conditional branching beyond basic field matching. For a tool that prides itself on speed and simplicity, the automation constraints match the philosophy. But philosophy doesn't triage 43 issues a week.
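The rules we actually wanted look more like this. A minimal Python sketch -- the `TOP_ACCOUNTS` list, the field names, and the plain-dict issue records are hypothetical stand-ins for Linear's data model, not anything Linear exposes:

```python
from datetime import datetime, timedelta

# Hypothetical top-accounts list; built-in rules can't check list membership.
TOP_ACCOUNTS = {"acme corp", "globex", "initech"}

def escalate_priority(issue: dict) -> str:
    """Return 'Urgent' when a top-10 account is mentioned, else keep priority."""
    text = (issue["title"] + " " + issue.get("description", "")).lower()
    if any(account in text for account in TOP_ACCOUNTS):
        return "Urgent"
    return issue.get("priority", "No priority")

def bug_spike(issues: list[dict], project: str, window_hours: int = 48) -> bool:
    """True when more than three bugs hit one project inside the window."""
    cutoff = datetime.now() - timedelta(hours=window_hours)
    recent = [
        i for i in issues
        if i["project"] == project
        and "bug" in i.get("labels", [])
        and i["created_at"] >= cutoff
    ]
    return len(recent) > 3
```

Neither rule fits Linear's model: the first needs a lookup against an external list, the second needs to count across issues over a time window.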
The Triage Problem
Forty-three new issues per week. That was our average over the last quarter. They come from everywhere -- bug reports through Intercom, feature requests from the product team, tech debt items from engineering, incidents from PagerDuty. Each one lands in the Triage state and sits there until a human reads it, understands it, and makes four decisions: priority, team, labels, and cycle.
Tomás owned triage. Every morning, he'd open the Triage view, read through the new issues, and start sorting. A bug report about slow page loads on the dashboard? Priority: High. Team: Frontend. Labels: performance, dashboard. Cycle: current, because customers are complaining. A feature request for CSV export? Priority: Low. Team: Backend. Labels: feature-request, data-export. Cycle: next, because it's not blocking anyone.
Each issue took Tomás about three minutes. At 43 issues per week, that's over two hours, every week, spent on sorting. And Tomás is a senior engineer. Those two hours have a real cost beyond the time itself; they pull one of our most experienced people away from architecture work and code review.
Linear's automations can't help here. Auto-assign based on label only works if the label is already set, and setting the label is half the triage work. Auto-prioritize based on title keywords catches maybe 15% of cases. The rest require reading the description, understanding the context, and making a judgment call.
We deployed an issue triage agent that reads each new issue and suggests all four fields -- priority, team, labels, and cycle. It uses the issue title and description, plus historical patterns from how similar issues were triaged in the past. The suggestions appear as a comment on the issue within about 30 seconds of creation.
Tomás still reviews every triage decision. But instead of reading the issue cold and making four choices from scratch, he reads the agent's suggestion and either approves it (one click to apply the suggested fields) or adjusts. His triage time dropped from three minutes per issue to about 45 seconds. That's 30 minutes a week instead of two hours.
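The heart of the suggestion step can be sketched as a nearest-neighbor lookup over past triage decisions. This is an illustration, not our production model -- the field names and the word-overlap similarity are assumptions:

```python
def tokens(text: str) -> set[str]:
    """Lowercased word set, a crude proxy for semantic similarity."""
    return set(text.lower().split())

def suggest_triage(new_issue: dict, history: list[dict]) -> dict:
    """Copy priority, team, labels, and cycle from the most similar past issue."""
    new_toks = tokens(new_issue["title"] + " " + new_issue.get("description", ""))

    def score(old: dict) -> float:
        old_toks = tokens(old["title"] + " " + old.get("description", ""))
        union = new_toks | old_toks
        return len(new_toks & old_toks) / len(union) if union else 0.0

    best = max(history, key=score)
    return {k: best[k] for k in ("priority", "team", "labels", "cycle")}
```

The important design point survives the simplification: the agent only proposes the four fields, and a human applies or overrides them.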
The Cross-Project Blind Spot
Linear organizes work into teams, projects, and cycles. Automations operate within a team's scope. This is fine when your work stays inside team boundaries, and it falls apart the moment it doesn't.
We have four engineering teams: Frontend, Backend, Platform, and Mobile. A feature like "add real-time notifications" touches all four. Frontend builds the UI. Backend builds the WebSocket service. Platform provisions the infrastructure. Mobile implements push notifications. Each team tracks its issues in its own Linear team, in its own cycles, with its own workflow states.
Nobody has a view that says "here's the status of real-time notifications across all four teams." Priya, who manages the project, has to open four team views, find the relevant issues in each, and mentally stitch together the status. She does this every Tuesday and Thursday, and it takes about 20 minutes each time.
Linear's automations can't aggregate across teams. There's no rule that says "when all issues linked to Project X across all teams reach Done, mark the project milestone as complete." There's no automation that detects "three of four teams have finished their portion, but the fourth hasn't started -- flag this." The project-level view in Linear shows issues from all teams, but it's a flat list, not a synthesized status.
We built an agent that runs every morning and generates a cross-project status summary. For each active project that spans multiple teams, it counts completed vs. total issues per team, identifies blockers (issues in a "Blocked" state or with a "blocked" label), and flags velocity mismatches -- one team burning through their issues while another hasn't started. The summary posts to a dedicated Slack channel. Priya reads it over coffee instead of clicking through four team views.
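The aggregation itself is simple once the issues are in hand. A minimal sketch, assuming issue dicts with `team`, `state`, `labels`, and `title` fields (fetching from Linear and posting to Slack are omitted):

```python
from collections import defaultdict

def cross_project_summary(issues: list[dict]) -> list[str]:
    """One status line per team: done/total, blockers, and a not-started flag."""
    per_team: dict = defaultdict(lambda: {"done": 0, "total": 0, "blocked": []})
    for issue in issues:
        stats = per_team[issue["team"]]
        stats["total"] += 1
        if issue["state"] == "Done":
            stats["done"] += 1
        if issue["state"] == "Blocked" or "blocked" in issue.get("labels", []):
            stats["blocked"].append(issue["title"])

    lines = []
    for team, stats in sorted(per_team.items()):
        line = f"{team}: {stats['done']}/{stats['total']} done"
        if stats["blocked"]:
            line += " (blocked: " + ", ".join(stats["blocked"]) + ")"
        if stats["done"] == 0:  # crude velocity-mismatch flag
            line += " [not started]"
        lines.append(line)
    return lines
```

The point is that this is a cross-team read followed by synthesis -- exactly the shape of work Linear's per-team rules can't express.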
The Recurring Pattern Problem
Some automation needs aren't about single issues. They're about patterns across issues over time.
Elena noticed that our API team was filing roughly the same bug every two weeks: timeout errors in the payment processing service under load. Each instance was a separate issue in Linear. Each one got triaged, investigated, and patched individually. Nobody connected them because Linear doesn't surface "this looks like the same bug you fixed 13 days ago."
Elena spotted the payment timeout pattern after the fifth occurrence. That's five rounds of triage, investigation, and patching before anyone realized it was a systemic issue.
We added an agent that scans new issues against recently closed ones, looking for similar titles, descriptions, and affected components. When it finds a match, it comments with a link to the previous issue. Maybe 70% precision, but the 70% it catches saves entire investigation cycles. That payment timeout bug would have been flagged on the second occurrence instead of the fifth.
What We Didn't Automate
There's a temptation to automate everything once you start. We resisted it in a few places.
Sprint planning stays manual. An agent could propose which issues go into the next cycle based on priority and capacity, but Rafael's team gets real value from the planning conversation. The discussion about what to build next surfaces context that doesn't live in any ticket -- customer conversations, technical risks, team morale. Automating the output would skip the process that makes it useful.
Estimation stays manual too. We experimented with an agent that suggested story points based on historical similar issues. The suggestions were reasonable about half the time, wildly off the other half, and the team stopped trusting them after two sprints. Estimation is one of those activities where being wrong is more expensive than being slow, because a bad estimate warps the entire cycle's commitments.
Retrospectives stay manual. Obviously. If you're automating your retros, you've missed the point of retros.
The Automation Stack We Landed On
Six months in, here's what we run on top of Linear's native automations:
Linear's built-in rules handle the mechanical stuff: auto-assign on state change, auto-close on PR merge, auto-label from templates. These are reliable and we don't think about them.
The triage agent handles incoming issue classification. It suggests but doesn't decide. Tomás approves or overrides. This saved about 90 minutes per week.
The cross-project status agent generates morning summaries for multi-team projects. This replaced Priya's manual status checks and saved about 40 minutes per week.
The duplicate detector flags potential recurring issues. This doesn't save time directly but prevents wasted investigation cycles. Elena estimates it's avoided at least three full duplicate investigations per month.
A Friday reporting agent pulls cycle data and generates a narrative sprint summary for leadership. This replaced Rafael's manual report writing and saves about 25 minutes per week.
Total time saved: roughly two and a half hours per week across the team, before counting the investigations the duplicate detector prevents. The bigger win is qualitative. Tomás is back to doing senior engineering work. Priya has cross-project visibility without clicking through four views. Elena catches systemic bugs before they become traditions.
Linear's automations are the foundation. They handle what they were designed to handle, and they do it well. Everything above that foundation -- the judgment calls, the pattern recognition, the cross-boundary synthesis -- needs something that can read the full picture and reason about it. That's where the agents live.
Try These Agents
- Linear Issue Triage Agent -- Auto-suggest priority, team, labels, and cycle for incoming issues
- Linear Sprint Status Reporter -- Generate narrative sprint summaries from Linear cycle data
- Linear Backlog Grooming Agent -- Surface stale issues, duplicates, and mis-prioritized backlog items