PostHog Session Replay: How We Use It to Fix UX Problems Before Users Complain

Diana, a UX researcher I worked with at a B2B SaaS company, once told me something that stuck: "The best user research is watching someone use your product when they don't know you're watching." She didn't mean it in a creepy way. She meant that support tickets are written after someone has already given up. Session replays catch the frustration while it's happening.
PostHog session replay has become our go-to for this. Not because it's the fanciest tool out there. But because it sits right next to your event data, which means you can do things like "show me every session where someone rage-clicked on the checkout button and then left." Try doing that when your analytics and your replays live in separate products.
Session Replay Without Event Context Is Just Surveillance
Here's a hot take: watching random session recordings is a waste of time. I mean it. If you're sitting there watching replay after replay hoping to stumble into an insight, you're doing research the way someone pans for gold -- occasionally rewarding, mostly just staring at dirt.
Diana figured this out fast. When she first got access to PostHog's session replay feature, she spent a week watching recordings. She found a few interesting moments. A user who couldn't find the settings page. Another one who tried to drag and drop something that wasn't draggable. Good observations, but they came from hours of watching.
Then she connected the replays to PostHog's event tracking data and everything changed.
Instead of watching random sessions, she started filtering. Show me sessions where:
- A user triggered the `checkout_started` event but never triggered `checkout_completed`
- A user visited the pricing page more than three times in a single session
- A user clicked the same button more than five times in under ten seconds (rage clicking)
- A user was on a paid plan and hit an error event
Suddenly she went from panning for gold to using a metal detector. The sessions she watched were short, targeted, and almost always revealed something actionable.
This is the real power of PostHog's approach. Session replay and event analytics share the same data layer. You don't have to export user IDs from one tool and search for them in another. You filter events, and the matching sessions are right there.
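To make the filtering idea concrete, here's a minimal sketch in Python. The event records and names are invented for illustration; in practice you'd run the equivalent filter in PostHog's UI or against its query API.

```python
# Each captured event carries a session ID, an event name, and a timestamp.
events = [
    {"session_id": "s1", "event": "checkout_started",   "ts": 100},
    {"session_id": "s1", "event": "checkout_completed", "ts": 160},
    {"session_id": "s2", "event": "checkout_started",   "ts": 210},
    {"session_id": "s3", "event": "pageview",           "ts": 300},
]

def abandoned_sessions(events, start_event, finish_event):
    """Return session IDs that fired start_event but never finish_event."""
    started, finished = set(), set()
    for e in events:
        if e["event"] == start_event:
            started.add(e["session_id"])
        elif e["event"] == finish_event:
            finished.add(e["session_id"])
    return started - finished

# Only s2 started checkout without completing it.
print(abandoned_sessions(events, "checkout_started", "checkout_completed"))  # {'s2'}
```

Every session ID this returns has a replay attached -- that's the shortlist you actually watch.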
Frustration Signals: What to Actually Filter For
Not all bad experiences look the same on a recording. Some are obvious -- a user staring at a spinner for 30 seconds. Others are subtle: someone quietly abandoning a flow that looked perfectly functional from your end.
Here are the frustration signals Diana taught me to filter for, and the ones we've had the most luck with:
Rage clicks. When someone clicks the same element five or more times in quick succession, they expected something to happen and it didn't. This catches unresponsive buttons, broken links, elements that look clickable but aren't, and loading states that don't provide feedback. PostHog lets you filter session recordings by rage click events directly.
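PostHog detects rage clicks for you, but the heuristic is simple enough to sketch yourself. This is an assumption-laden toy version -- five or more clicks on the same element within ten seconds, over invented click records -- not PostHog's actual implementation.

```python
from collections import defaultdict

def rage_clicked_elements(clicks, min_clicks=5, window_secs=10):
    """Return element selectors that received min_clicks clicks
    from a single session within any window_secs span."""
    by_key = defaultdict(list)
    for c in clicks:
        by_key[(c["session_id"], c["selector"])].append(c["ts"])
    flagged = set()
    for (session, selector), times in by_key.items():
        times.sort()
        # Slide a window: if click i and click i+min_clicks-1 fall
        # within window_secs of each other, that's a rage click.
        for i in range(len(times) - min_clicks + 1):
            if times[i + min_clicks - 1] - times[i] <= window_secs:
                flagged.add(selector)
                break
    return flagged

clicks = [{"session_id": "s1", "selector": "#buy-button", "ts": t}
          for t in (0, 1, 2, 3, 4)]          # five clicks in five seconds
clicks.append({"session_id": "s1", "selector": "#logo", "ts": 0})

print(rage_clicked_elements(clicks))  # {'#buy-button'}
```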
Dead clicks. A single click on a non-interactive element. Less urgent than rage clicks, but in aggregate they reveal confusing UI patterns. If 200 users clicked on a piece of text that looks like a link but isn't, your design is lying to people.
Abandoned multi-step flows. Set up events at each step of a flow (signup, onboarding, checkout, whatever has sequential steps). Then filter for sessions that completed step 2 but never hit step 3. Watch those recordings. The reason for the drop-off is almost always visible: a confusing form field, a required field that shouldn't be required, a page that loaded too slowly.
Error events followed by exit. If a user hits an error and leaves within 60 seconds, that error wasn't recoverable. Filter for these sessions and you'll find the errors that your error handling isn't actually handling.
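The error-then-exit filter is just a time comparison per session. A sketch, using PostHog's `$exception` event name but with invented session data, and treating a session's last event as its end:

```python
from collections import defaultdict

def unrecoverable_errors(events, exit_within=60):
    """Sessions where an error was followed by session end
    within exit_within seconds -- errors users didn't recover from."""
    by_session = defaultdict(list)
    for e in events:
        by_session[e["session_id"]].append(e)
    flagged = []
    for session, evs in by_session.items():
        evs.sort(key=lambda e: e["ts"])
        last_ts = evs[-1]["ts"]  # approximate the exit as the final event
        for e in evs:
            if e["event"] == "$exception" and last_ts - e["ts"] <= exit_within:
                flagged.append(session)
                break
    return flagged

events = [
    {"session_id": "s1", "event": "pageview",   "ts": 0},
    {"session_id": "s1", "event": "$exception", "ts": 30},
    {"session_id": "s1", "event": "pageview",   "ts": 50},   # gone 20s after the error
    {"session_id": "s2", "event": "$exception", "ts": 10},
    {"session_id": "s2", "event": "pageview",   "ts": 400},  # recovered, kept going
]
print(unrecoverable_errors(events))  # ['s1']
```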
Repeat visits to help or documentation pages. If a user visits your docs three times during a single session, they're struggling with something and your in-product experience isn't enough. Watch the session to see what they were trying to do before they went looking for help.
Pairing Replays With Your Tracking Plan
Session replay gets ten times more useful when your event tracking is already solid. If you've instrumented your product well with PostHog's capture API, every session recording has a timeline of events running alongside it. You can click on any event in the timeline and jump to that exact moment in the replay.
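That timeline view is easy to picture as data: the events for one session, sorted by timestamp. A toy sketch (event names invented), just to show what the replay sidebar is built from:

```python
def session_timeline(events, session_id):
    """Chronological event timeline for one session, like the
    one PostHog shows alongside a replay."""
    mine = [e for e in events if e["session_id"] == session_id]
    return [(e["ts"], e["event"]) for e in sorted(mine, key=lambda e: e["ts"])]

events = [
    {"session_id": "s1", "event": "checkout_started", "ts": 42},
    {"session_id": "s1", "event": "pageview",         "ts": 3},
    {"session_id": "s2", "event": "pageview",         "ts": 7},
]
print(session_timeline(events, "s1"))  # [(3, 'pageview'), (42, 'checkout_started')]
```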
Diana built what she called a "replay trigger list" -- a set of event combinations that automatically warranted watching the replay. Here's a simplified version:
For onboarding drop-offs: filter for sessions where onboarding_step_1 fired but onboarding_completed didn't. Watch 10 of these per week. You'll find the same two or three friction points in over half of them.
For upgrade hesitation: filter for sessions where a user on a free plan visited the pricing page, then visited it again within 48 hours, but never triggered subscription_upgraded. These people wanted to upgrade and something stopped them.
For feature confusion: pick your newest feature. Filter for sessions where the user opened the feature (triggered the initial event) but never completed the core action. If people are opening the feature and leaving, they either don't understand it or it doesn't do what they expected.
For support ticket context: when a user submits a support ticket, look up their most recent session replay before responding. Nine times out of ten, watching the two minutes before they submitted the ticket tells you more than reading the ticket itself.
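A trigger list like Diana's maps naturally onto a dictionary of named filters over the same event data. A simplified sketch -- the event names are assumptions, and the upgrade-hesitation filter here omits the repeat-visit timing check for brevity:

```python
def sessions_with(events, event_name):
    return {e["session_id"] for e in events if e["event"] == event_name}

def build_trigger_list(events):
    """Named filters; each yields session IDs whose replays are worth watching."""
    return {
        "onboarding_dropoff": sessions_with(events, "onboarding_step_1")
                              - sessions_with(events, "onboarding_completed"),
        "upgrade_hesitation": sessions_with(events, "pricing_page_viewed")
                              - sessions_with(events, "subscription_upgraded"),
    }

events = [
    {"session_id": "u1", "event": "onboarding_step_1"},
    {"session_id": "u1", "event": "onboarding_completed"},
    {"session_id": "u2", "event": "onboarding_step_1"},
    {"session_id": "u3", "event": "pricing_page_viewed"},
]
triggers = build_trigger_list(events)
print(triggers["onboarding_dropoff"])   # {'u2'}
print(triggers["upgrade_hesitation"])   # {'u3'}
```

The point of naming the filters is that the list becomes a standing checklist: run the same triggers every week instead of rediscovering them.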
Automating Bug Reports From Replay Insights
Here's where most teams stop: a UX researcher watches a recording, notices a problem, writes it up in a Jira ticket with a screenshot, and moves on. It works, but it depends entirely on someone watching recordings. And that someone has a hundred other things to do.
What Diana started doing -- and what we've since automated with AI agents -- is turning frustration signal detection into an automated pipeline.
The setup works like this. An AI agent monitors PostHog event data for the frustration patterns I mentioned earlier: rage clicks, abandoned flows, error-then-exit sequences. When the agent detects a pattern that crosses a threshold (say, more than 15 users rage-clicked on the same element this week), it does a few things automatically.
First, it pulls the relevant session replay links from PostHog so someone can watch the specific recordings. Second, it identifies which page and element are involved, using the event properties. Third, it drafts a bug report with the affected element, the number of impacted users, the replay links, and a suggested severity level. Fourth, it posts that report to the engineering team's Slack channel or creates a ticket directly.
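The thresholding-and-drafting step reduces to something like the following sketch. The report fields mirror what the agent posts; the element names, counts, severity cutoffs, and replay URLs are invented for illustration.

```python
def draft_bug_report(signal, threshold=15):
    """Turn an aggregated frustration signal into a bug report draft,
    or None if it's below the alerting threshold."""
    if signal["affected_users"] < threshold:
        return None
    # Hypothetical severity rule: 50+ affected users escalates to high.
    severity = "high" if signal["affected_users"] >= 50 else "medium"
    return {
        "title": f"Rage clicks on {signal['selector']} ({signal['page']})",
        "affected_users": signal["affected_users"],
        "severity": severity,
        "replay_links": signal["replay_links"][:3],  # three representative replays
    }

signal = {
    "selector": "#checkout-button",
    "page": "/checkout",
    "affected_users": 23,
    "replay_links": ["https://example.com/replay/1", "https://example.com/replay/2"],
}
report = draft_bug_report(signal)
print(report["severity"])  # medium
```

From here, posting to Slack or a ticket tracker is a webhook call with this dictionary as the payload.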
Diana went from spending 5 hours a week watching random replays to spending 1 hour a week reviewing pre-filtered, pre-documented issues that an agent had already identified and written up. She reviews the agent's work, adds her own interpretation where needed, and moves on. Her coverage went up while her time commitment went down.
The event tracking setup agent is what we use to wire up the event capture layer that feeds this entire replay filtering workflow. Without clean event data, replay filtering is just guessing.
Session Replay Privacy: What You Need to Think About
I'd be lying if I said session replay didn't make some people on our team nervous. You're recording what users do inside your product. That's powerful, and it comes with responsibility.
PostHog handles the basics well. You can mask sensitive inputs by default, block recording on specific pages, and respect Do Not Track headers. The self-hosted option means the recordings never leave your infrastructure, which matters if you're in a regulated industry.
But the tool-level controls are only half the story. Diana established a few team-level rules that I think are worth stealing:
- Never share a session recording outside the product and engineering teams without removing identifying information. Replays are for fixing the product, not for judging individual users.
- Never use session replays in a customer conversation ("I noticed you were on the pricing page six times..."). That's a fast way to erode trust.
- Set a retention period and stick to it. PostHog lets you configure how long recordings are stored. Diana set ours to 30 days. If a problem hasn't been noticed in 30 days, the recording isn't the way you'll find it.
- Review your recording settings quarterly. Features change, pages change, and what was safe to record six months ago might now include sensitive data.
What Session Replays Won't Tell You
Session replay shows you what happened. It doesn't tell you why. Diana was always clear about this with the team. "I can see that 40% of users stopped at step 3 of onboarding. I can see them pause, scroll around, and leave. But I don't know if they were confused by the copy, annoyed by the form length, or if their kid just started screaming and they closed the laptop."
Replays give you the observable behavior. You still need qualitative research -- user interviews, surveys, usability tests -- to understand the intent behind the behavior. The combination is powerful. The replay alone is suggestive, not conclusive.
Where an AI agent adds value here is in the aggregation layer. A single replay is anecdotal. An agent that says "47 users exhibited this exact same behavior pattern this week, here are the three most representative recordings" turns anecdote into pattern. That's a different conversation in a product review.
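That aggregation step is, at its core, grouping sessions by behavior signature and counting. A toy sketch of the idea, with invented session data:

```python
def summarize_patterns(sessions, samples=3):
    """Group sessions by their event-sequence signature and report, per
    pattern, how many users exhibited it plus a few sample sessions to watch."""
    by_pattern = {}
    for s in sessions:
        sig = tuple(s["events"])
        by_pattern.setdefault(sig, []).append(s["session_id"])
    # Most common pattern first.
    return [
        {"pattern": sig, "count": len(ids), "sample_sessions": ids[:samples]}
        for sig, ids in sorted(by_pattern.items(), key=lambda kv: -len(kv[1]))
    ]

sessions = [
    {"session_id": f"s{i}", "events": ["pricing", "pricing", "exit"]}
    for i in range(4)
] + [{"session_id": "s9", "events": ["docs", "exit"]}]

top = summarize_patterns(sessions)[0]
print(top["count"], top["sample_sessions"])  # 4 ['s0', 's1', 's2']
```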
Start With Your Worst Funnel
If you're setting up PostHog session replay for the first time, don't try to boil the ocean. Pick the funnel with the worst conversion rate in your product. Instrument events at each step if you haven't already. Turn on session recording. Filter replays by users who dropped off at the worst step. Watch 20 recordings.
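Finding the worst step is a one-liner's worth of arithmetic once your steps are instrumented. A sketch over assumed per-step session counts:

```python
def worst_step(step_counts):
    """Given ordered (step_name, sessions_reaching_step) pairs, return the
    step transition with the lowest conversion rate."""
    worst, worst_rate = None, 1.0
    for (a, n_a), (b, n_b) in zip(step_counts, step_counts[1:]):
        rate = n_b / n_a if n_a else 0.0
        if rate < worst_rate:
            worst, worst_rate = (a, b), rate
    return worst, worst_rate

# Hypothetical funnel: most sessions die between verify_email and create_project.
funnel = [("signup", 1000), ("verify_email", 700), ("create_project", 210)]
step, rate = worst_step(funnel)
print(step, round(rate, 2))  # ('verify_email', 'create_project') 0.3
```

The transition this flags is the one whose drop-off replays you watch first.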
I promise you'll find something actionable within the first five. And once you've seen a real user struggle with something you built, you'll never go back to guessing.
Try These Agents
- PostHog Event Tracking Setup -- Set up the event capture layer that powers replay filtering and frustration signal detection
- PostHog User Identification Agent -- Identify users and link sessions so replays connect to real accounts
- PostHog Product Usage Tracker -- Track feature usage patterns to know which replays to prioritize