
Our Sprint Reports Used to Take 45 Minutes. Now an Agent Writes Them.

Ibby Syed, Founder, Cotera
7 min read · March 8, 2026


Every other Thursday at 3:15pm, Kenji would close Slack, open Jira, and begin the ritual. He'd been the scrum master for two backend teams -- 16 engineers total -- for about a year, and the sprint report was the part of the job he dreaded most. Not because it was hard. Because it was tedious in a way that made him feel like a human copy-paste machine.

He'd open the sprint board for Team Alpha. Click into the completed issues. Count them. Note the story points. Open each one that had comments and skim for anything worth mentioning -- did we hit a blocker? Did scope change? Did something take three times longer than estimated? Then he'd check the carryover -- issues that didn't get done. For each one, he'd figure out why. Was it descoped intentionally? Blocked on another team? Just not finished? The distinction mattered to leadership but wasn't captured anywhere in Jira's data.

Then he'd do the same thing for Team Beta. Then he'd open a Google Doc and start writing.

The report always followed the same structure: what we planned, what we shipped, what carried over, what's at risk, and how velocity is trending. Leadership wanted the "so what" -- not a list of ticket numbers, but a narrative about whether the teams were healthy, on track, and properly staffed. Kenji had to synthesize that narrative from raw Jira data every two weeks.

Forty-five minutes. Sometimes an hour if there was a lot of carryover to explain. He timed himself once out of morbid curiosity. Twenty minutes pulling data, twenty minutes writing, five minutes formatting and sending. Every two weeks, for a year. That's roughly 20 hours per year of a senior person's time spent generating a document that's read in three minutes and forgotten by Monday.

What a Manual Sprint Report Actually Involves

I want to be specific about this because I think a lot of people underestimate the work. "Just pull the sprint data from Jira" sounds simple until you try it.

Jira's built-in sprint report gives you a burndown chart, a list of completed issues, and a list of issues not completed. That's the starting point, not the finished product. Here's what it doesn't give you:

  • Context for carryover. An issue that wasn't completed could mean the team got pulled into an incident, or the task was bigger than estimated, or a dependency was blocked, or someone was out sick. Jira shows you that the issue didn't finish. The scrum master has to figure out why.
  • Scope change accounting. Issues added mid-sprint don't show up differently from issues planned at the start. Kenji had to compare the sprint scope at kickoff to the sprint scope at close to see how much was added. Jira tracks this -- you can check the sprint scope change in the report -- but it's a data point, not an explanation. Leadership wants to know why scope changed and whether it was the right call.
  • Cross-sprint trends. Was this sprint's velocity higher or lower than the last three? Is there a trend? Jira has velocity charts, but they show the data without analysis. "Velocity dropped from 42 to 31" is what the chart says. "Velocity dropped because we lost two engineers to the platform migration and took on an unplanned security audit" is what leadership needs to hear.
  • Team health signals. Are the same people carrying the load every sprint? Is one engineer consistently picking up more points than anyone else? Is someone's work being blocked more often than others? The data is in Jira, scattered across assignee fields and status histories and comment threads. Assembling it into a coherent team health picture is manual work.
  • Risk assessment. What's coming next sprint that might be a problem? An epic that's 30% complete with a hard deadline in three weeks? A dependency on another team that hasn't started their part yet? Jira doesn't predict. It reports. The scrum master provides the forecast.
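Most of the metrics above reduce to a handful of aggregations once you have the issue data in hand. Here's a minimal sketch of that arithmetic, using a simplified issue record (the field names are illustrative, not Jira's actual schema):

```python
from statistics import mean

def sprint_summary(issues, past_velocities):
    """Summarize one sprint from simplified issue records.

    Each issue is a dict with 'key', 'points', 'done' (bool), and
    'added_mid_sprint' (bool) -- an illustrative shape, not Jira's schema.
    'past_velocities' is a list of prior sprints' completed point totals.
    """
    completed = [i for i in issues if i["done"]]
    carryover = [i for i in issues if not i["done"]]
    velocity = sum(i["points"] for i in completed)
    return {
        "completed": len(completed),
        "planned": len(issues),
        "velocity": velocity,
        "carryover_keys": [i["key"] for i in carryover],
        "scope_added": sum(1 for i in issues if i["added_mid_sprint"]),
        # Positive means this sprint beat the recent average
        "vs_average": velocity - mean(past_velocities),
    }
```

Note that everything this function produces is still just "what the chart says." The why behind each carryover key and each scope addition is exactly the part the numbers can't supply.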

Kenji's 45 minutes weren't about pulling numbers. They were about reading the numbers and figuring out the story they told. That's the part that's hard to automate with Jira's built-in tools -- and exactly the part an agent is good at.

What Changed

Kenji set up a sprint status reporter agent and ran it alongside his manual process for two sprints to compare. The agent connects to Jira, pulls the sprint data for both teams, and generates a narrative report. Here's what the output looks like (paraphrased from one of Kenji's actual reports):

Team Alpha completed 14 of 17 planned issues (38 story points of 45 planned). Three items carried over: AUTH-234 (OAuth token refresh) is blocked on a third-party API change expected next week, AUTH-251 (session timeout handling) was descoped after the security audit took priority, and AUTH-267 (rate limiting) is 80% complete and expected to close in the first two days of the next sprint.

Velocity is 38 points, up from 34 last sprint and consistent with the four-sprint average of 37. Scope increased by two issues mid-sprint, both related to the unplanned security audit.

Team Beta completed 11 of 14 planned issues (29 story points of 36 planned). Three items carried over: PAY-189 (refund processing) is blocked on the payments vendor credential rotation, PAY-201 (invoice generation) was reassigned mid-sprint from Yuki to Suki due to Yuki's PTO, and PAY-210 (webhook retry logic) needs additional testing. Velocity is 29, down from 33 last sprint. The decline is attributable to the mid-sprint reassignment and one fewer working day due to the team offsite.

That's not a list of ticket numbers. It's the same narrative Kenji would have written, but generated in about 90 seconds instead of 45 minutes. The agent reads the same data Kenji reads -- completed issues, carryover, velocity, scope changes, comments, status histories -- and produces the same kind of output.
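The jump from ticket data to that narrative is mostly a matter of attaching a reason to each carryover item and rendering it as prose. A hypothetical sketch of that rendering step, assuming the agent has already classified each item (the reason codes and record shape here are my assumptions, not the agent's actual internals):

```python
# Map an assumed carryover reason code to a sentence fragment, in the
# style of the report excerpt above.
REASON_TEMPLATES = {
    "blocked": "{key} ({title}) is blocked on {detail}",
    "descoped": "{key} ({title}) was descoped after {detail}",
    "in_progress": "{key} ({title}) is {detail} and expected to close early next sprint",
}

def carryover_sentence(items):
    """Render classified carryover records as one narrative sentence."""
    clauses = [REASON_TEMPLATES[i["reason"]].format(**i) for i in items]
    return f"{len(items)} items carried over: " + ", ".join(clauses) + "."
```

The hard part, of course, is not the template; it's getting the `reason` field right. That classification is where the agent reads comments and status histories, and where Kenji's review catches the occasional misattribution.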

It's not identical to what Kenji would write. The first time he ran it, he changed about 15% of the text. He softened the language on the velocity decline ("attributable to" became "partly due to"), added a sentence about a morale win the agent didn't capture (the team shipped a feature they'd been working on for three months), and corrected one factual error (the agent attributed a carryover to a dependency block, but it was actually a deliberate descope). By the fourth sprint, his edits were down to about 5% -- a couple of word choices and one added sentence.

Why This Matters More Than It Sounds

Forty-five minutes every two weeks sounds trivial. It's not.

First, it's 45 minutes of a senior person's time. Kenji is one of the best scrum masters I've worked with. His value is in running effective ceremonies, unblocking engineers, and spotting process problems early. Sprint report assembly is administrative work that doesn't use any of those skills. Every minute he spends pulling data from Jira is a minute he's not spending on the work that actually makes the team better.

Second, the 45 minutes is the visible cost. The invisible cost is the context-switching. Kenji has to stop whatever he's doing at 3:15 on report day, shift into data-gathering mode, hold the entire sprint's narrative in his head while he writes, and then shift back to whatever he was doing before. The cognitive overhead extends well past the 45 minutes on the clock.

Third, consistency. Kenji's reports are good. But when he's on vacation, someone else writes the report, and the format changes, the depth varies, and the analysis is thinner. When the agent writes the base report, the quality is consistent regardless of who's reviewing it. The review is the human value-add. The assembly is the machine's job.

Fourth, speed. Kenji's report comes out Thursday afternoon at the earliest. The agent can generate it Thursday morning, or Wednesday night, or within minutes of the sprint closing. Leadership gets the information faster. Decisions that depend on sprint health data happen sooner.

What the Agent Can't Do

I want to be honest about the limits, because there are real ones.

The agent doesn't attend standup. It doesn't hear the tone of someone's voice when they say "yeah, it's fine" about a blocked task. Kenji picks up on signals that aren't in the data -- frustration, burnout, political tension between teams. The sprint report sometimes needs to reflect those signals, and the agent can't capture them.

The agent doesn't know what leadership cares about this week. If the CEO is fixated on the payments integration because a board member asked about it, Kenji knows to put extra detail on payments-related items. The agent gives every epic equal weight unless you tell it otherwise.

The agent occasionally misattributes causation. It'll see that an issue carried over and that a team member was on PTO the same week, and connect the two even when they're unrelated. It's making a plausible inference, not stating a fact. Kenji catches these because he knows what actually happened. The review step is non-negotiable.

These limits are real, and they're why the agent writes the report and Kenji reviews it, not the other way around. The agent handles the data extraction and the first draft. Kenji handles the judgment and the politics. It's a division of labor that plays to each side's strengths.

The Setup

The practical details, for anyone who wants to replicate this. The agent connects to Jira's API, reads sprint data for the configured boards, and generates the report. You configure which boards to pull from, what the reporting cadence is, and where the output goes (Slack, email, Confluence, or all three).

The first run takes about five minutes to configure. Kenji pointed it at his two team boards, set it to run when sprints close, and told it to post to the engineering-leads Slack channel. The agent pulled historical sprint data to establish velocity baselines automatically. No manual setup of comparison benchmarks.
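The configuration Kenji describes boils down to a few settings. A hypothetical sketch of what that might look like (the keys and values here are illustrative, not the agent's actual config format):

```python
# Illustrative configuration for the reporter run described above.
# Keys, values, and channel syntax are assumptions, not a real schema.
REPORT_CONFIG = {
    "boards": ["Team Alpha", "Team Beta"],    # Jira boards to pull from
    "trigger": "on_sprint_close",             # run when each sprint closes
    "history_sprints": 4,                     # baseline window for velocity
    "outputs": ["slack:#engineering-leads"],  # where the draft gets posted
}
```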

It now runs every other Thursday morning. Kenji gets a notification, reads the report in Slack, makes his edits in about five minutes, and posts the final version. The retro starts at 10am. By 10:05, everyone has read the report. By 10:10, they're talking about what to do differently next sprint instead of spending the first 15 minutes reviewing what happened.

Kenji told me last month that the sprint report used to be the part of the job he dreaded. Now it's the part he barely thinks about. "I spend five minutes on it and move on. That's how it should have always been."

