8 Asana Workflows We Use Daily (and 2 That Needed an Agent)

I've been building Asana Rules for about two years. The first year, I treated the Rules engine like it could handle anything. The second year, I learned where it stops. Both lessons were useful.
What I've landed on is a clean split. Eight of our daily workflows run on native Asana Rules, unchanged for months, requiring zero maintenance. Two workflows couldn't be expressed as Rules at all. They required reasoning across multiple tasks, projects, and time horizons. We handed those to agents.
Here are all ten, in the order we built them.
The Eight That Run on Rules
Section-based assignment in our intake project. New requests land in an "Incoming" section. We have three sections below it: Engineering, Design, and Marketing. When someone triages a request by dragging it into one of those sections, a Rule fires: Engineering tasks get assigned to Kenji, Design tasks to Marcus, Marketing tasks to Elena. Each Rule is a simple trigger (task added to section) and a single action (set assignee). Three Rules, ten minutes to build, running for fourteen months without a single edit.
Priya asked me once why we don't auto-triage instead of manually dragging. The answer is that manual triage takes about two seconds per task and gets it right every time. The section drag is the decision. The Rule handles the assignment. Automating the decision would be more complex than the decision itself.
Due date on status change. When a task moves to "In Review," it automatically gets a due date three business days out. Reviewers know they have three days. If the review takes longer, they adjust the date manually. But the default prevents tasks from sitting in review indefinitely with no deadline. Diana suggested this one after a design review sat untouched for eleven days because it had no due date and nobody felt urgency. Since we added the Rule, our average review cycle dropped from 4.2 days to 2.8 days.
Completion notification to project lead. When any task in our product launch project is marked complete, the project lead (Anya) gets a notification. She doesn't need to watch the board. She doesn't need to filter by completion date. Finished items come to her. For a launch with 36 tasks across four teams, this means Anya always knows the current completion count without checking. She told me she used to check the board six or seven times per day. Now she checks it twice, in the morning and after lunch, and relies on the completion pings in between.
Urgent priority escalation. When anyone sets the Priority custom field to "Urgent" on a task, the Rule moves the task to the top of the "This Sprint" section and posts a notification to the #engineering-urgent Slack channel via Asana's Slack integration. This fires maybe twice a month. When it fires, it matters. Kenji says the Slack ping is the difference between noticing an urgent task within minutes versus discovering it the next time someone opens the board.
Auto-close subtasks on parent completion. When a parent task is marked complete, all its subtasks get marked complete too. We debated this one. Sometimes subtasks represent independent work items that happen to be grouped under a parent. But in our usage, subtasks are steps toward completing the parent. If the parent is done, the steps are done. Tomás pushed for this Rule after he kept finding "orphan" subtasks that showed as incomplete in his workload view even though the parent task had been finished and shipped weeks earlier.
New task comment for context requests. When a task is created in our support project without a description, a Rule adds a comment: "This task has no description. Please add context so it can be triaged." Gentle nudge, automatic, and it reduced our "what is this task even about?" Slack messages by roughly half. Rafael, who handles support triage, used to spend twenty minutes a day asking people to clarify bare task titles. The Rule doesn't eliminate vague tasks, but it prompts people to add detail before Rafael has to chase them.
Weekly recap notification. Every Friday at 3 PM, a scheduled Rule sends Priya a summary of all tasks completed that week in our operations project. It's a simple count with task names. She copies it into her weekly update to leadership. The information is basic, but the automation saves her from manually filtering the project by completion date every Friday.
Assignee clear on archive. When a task moves to our "Archived" section, the assignee is cleared. This keeps archived tasks from showing up in people's My Tasks views. Small thing. Before we added this, Marcus had 40 archived tasks cluttering his My Tasks because nobody removed the assignment when the work was done and filed away.
What All Eight Share
Every one of these Rules has the same shape: one trigger, one or two actions, no branching, no external data lookups, no multi-task logic. They fire on a single event and do a single thing in response. They're reflex actions, not decisions.
That constraint is also what makes them reliable. None of these Rules have broken in the time we've run them. They don't conflict with each other because they operate on different triggers in different projects. They don't depend on external systems beyond Slack (and even then, if Slack is down, the worst that happens is a missed notification). They're the definition of set-and-forget automation.
The lesson from building all eight: if a workflow can be described as "when X happens, do Y," it belongs in Rules. Don't overthink it. Don't chain Rules together. Don't try to build conditional logic by layering Rules on top of each other. Build the simple Rule and move on.
The Two That Needed Something Else
Cross-project dependency tracking. We run product launches that span Engineering, Design, Marketing, and Sales Enablement. A feature needs to be built before the sales deck can be updated. The sales deck needs to be done before the training session is scheduled. The training session needs to happen before the launch date.
Asana has task dependencies within a project. Task B depends on Task A. That works when both tasks are in the same project. But our dependencies cross project boundaries. The engineering task is in the Engineering Sprint project. The sales deck task is in the Sales Enablement project. Asana can link them with a dependency, but there's no Rule that fires when a dependency in another project slips.
What we needed was something that would scan all projects involved in a launch, identify tasks where upstream dependencies are behind schedule, and alert the downstream task owners before they discover the delay themselves. That's a multi-project, multi-task read operation followed by a judgment call about which delays are meaningful and which are minor.
We set up a cross-project dependency tracker agent that runs every morning. It reads all tasks with dependencies, checks whether upstream tasks are on track, and posts alerts when a downstream task is at risk because its dependency is overdue or has no recent activity. On our last product launch, the agent flagged that the API documentation (Engineering project) was three days behind, which would delay the SDK guide (Developer Relations project), which would delay the launch webinar (Marketing project). Diana got the alert on a Tuesday and adjusted the webinar date before the marketing team had sunk any prep work into the original schedule.
A Rule couldn't have done this. The Rule engine can't read across projects, can't evaluate whether a delay is propagating through a dependency chain, and can't compose a notification that explains the downstream impact. It required reading 140 tasks across four projects, identifying the dependency graph, checking dates, and making judgment calls about which delays mattered.
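The core of that scan fits in a few lines once the task data is in hand. Here's a minimal Python sketch, assuming tasks from all launch projects have already been fetched into one dict keyed by task gid (e.g., via Asana's GET /projects/{project_gid}/tasks endpoint with opt_fields covering name, completed, due_on, and dependencies). The field names mirror Asana's API, but this is an illustrative sketch, not our production agent.

```python
from datetime import date

def at_risk_tasks(tasks_by_gid, today):
    """Return (downstream, upstream) name pairs where an incomplete
    upstream dependency is past its due date, putting the downstream
    task at risk. tasks_by_gid spans every project in the launch, so
    cross-project dependencies are resolved the same as local ones."""
    flagged = []
    for task in tasks_by_gid.values():
        if task["completed"]:
            continue  # a finished task can't be blocked
        for dep_gid in task.get("dependencies", []):
            dep = tasks_by_gid.get(dep_gid)
            if dep is None or dep["completed"]:
                continue  # dependency done (or outside our scan) - no risk
            due = dep.get("due_on")
            if due and date.fromisoformat(due) < today:
                flagged.append((task["name"], dep["name"]))
    return flagged
```

The judgment layer (deciding which of these flags is worth an alert, and wording the downstream-impact message) sits on top of this; the mechanical part a Rule can't do is simply the cross-project read above.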
Weekly sprint summaries. Every Friday, Priya needs more than the basic "tasks completed this week" count that our Rule provides. She needs a narrative summary. Which tasks were planned for this sprint? How many were completed? How many carried over? What's the velocity trend compared to the last three sprints? Are there any tasks that have been in progress for more than five days, suggesting they're stuck?
The native Rule gives her a list of completed task names. That's the raw material, not the analysis. Building the analysis as a Rule would require formulas, cross-record calculations, historical comparison, and text generation. None of that exists in the Rules engine.
An agent reads the sprint project, pulls completion data, compares it to the previous three sprints' completion data, calculates velocity, identifies stuck tasks, and writes a paragraph-level summary that Priya forwards to leadership. The summary includes specific numbers: "Sprint 14: 23 of 28 planned tasks completed (82%). Velocity is up from 76% in Sprint 13. 5 tasks carrying over, 2 of which have been in progress for 7+ days (database migration, API rate limiting). Both are assigned to Kenji and blocked on infrastructure provisioning."
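The velocity and stuck-task math behind that summary is straightforward once the sprint's tasks are pulled. A minimal sketch of the computation, assuming each task dict carries name and completed (as in Asana's API) plus an in_progress_since date, which is an assumed field: we derive it from when the task entered the In Progress section, it's not a native Asana property.

```python
from datetime import date

def sprint_summary(tasks, prev_rate, today, stuck_after_days=5):
    """Summarize one sprint: completion rate, trend vs. the previous
    sprint's rate, carryover count, and tasks stuck in progress.
    'in_progress_since' is an assumed ISO date field, not native Asana."""
    done = [t for t in tasks if t["completed"]]
    carry = [t for t in tasks if not t["completed"]]
    stuck = [
        t["name"] for t in carry
        if t.get("in_progress_since")
        and (today - date.fromisoformat(t["in_progress_since"])).days >= stuck_after_days
    ]
    rate = round(100 * len(done) / len(tasks))
    trend = ("up from" if rate > prev_rate
             else "down from" if rate < prev_rate
             else "flat versus")
    return (f"{len(done)} of {len(tasks)} planned tasks completed ({rate}%). "
            f"Velocity is {trend} {prev_rate}%. "
            f"Carryover: {len(carry)}; stuck {stuck_after_days}+ days: "
            f"{', '.join(stuck) or 'none'}.")
```

The agent wraps numbers like these in the narrative paragraph; the arithmetic itself is trivial, which is exactly why the 45 minutes Priya spent on it were mostly clicking and counting, not thinking.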
Priya used to write this summary manually every Friday. It took her about 45 minutes because she had to click through the sprint board, count tasks by status, look up the previous sprints' numbers in a spreadsheet she maintained, and write the narrative. The agent does it in about two minutes. Priya reviews the output, occasionally edits a sentence, and sends it. Her Friday afternoon improved noticeably.
Where the Line Is
After building all ten of these workflows, I find the boundary between Rules and agents obvious.
Rules handle events. Something happens, something else happens in response. The logic is "when-then" with no ambiguity. The Rule doesn't need to look around, consider context, or weigh options. It fires on a trigger and executes an action.
Agents handle situations. Multiple things are happening across multiple projects. The right response depends on context that can't be captured in a trigger condition. The output isn't a single action but a composed analysis or a coordinated set of updates across multiple tasks.
The eight Rules we run are all event handlers. The two agents we run are both situation assessors. There's no overlap between them, and we haven't found a case where we tried to build an agent for something a Rule could have handled. The cost difference makes that an easy discipline to maintain: a Rule is free, an agent costs compute time. You don't reach for the expensive tool when the cheap one works.
If you're starting from scratch with Asana automation, build the simple Rules first. You'll get immediate value from assignment automation, notification routing, and status-triggered actions. Run them for a month. The workflows that you wish you could automate but can't express as Rules are your candidates for agents. You'll know them when you see them, because they'll be the ones where you keep thinking "if only the Rule could look at other tasks and decide."
It can't. That's what agents are for.
Try These Agents
- Asana Cross-Project Dependency Tracker -- Scan dependency chains across projects and flag downstream risks before they cause delays
- Asana Sprint Status Reporter -- Friday sprint summaries with velocity trends, carryover tracking, and stuck task identification
- Asana Standup Report Generator -- Daily activity digests from your Asana projects without manual status collection