When a session ends in Lookback, the most important work begins: sense-making.
Lookback is designed so researchers can start analysis immediately, while sessions are fresh and stakeholders are engaged. You should never need to rewatch entire sessions to extract value.
This article explains the recommended post-session workflow.
Start by scanning Headlines, not transcripts
After a session, Lookback’s AI generates Headlines - short, scannable summaries of key moments in the session timeline.
Use Headlines to:
quickly understand what happened
orient yourself before watching anything
jump directly to relevant video moments
Headlines are designed to reduce cognitive load.
They help you decide where to look - not what to conclude.
Scan Headlines → jump to video → decide what matters.
Watch selectively - never rewatch the full session
You do not need to replay sessions from start to finish.
Instead:
use Headlines, notes, and live markers as anchors
watch only the moments that appear meaningful
focus on behavior, language, and context
Seeing is still believing - but only where it matters.
Create Findings as soon as something matters
Findings are the core unit of evidence in Lookback.
A Finding is:
a short, timestamped video clip
tied to a specific observed moment
grounded in what the participant said or did
Best practices:
create Findings early, while context is fresh
keep them descriptive, not interpretive
create multiple Findings per session when needed
Findings are not conclusions - they are evidence.
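To make that shape concrete, here is a minimal sketch of a Finding modeled as a data structure. The field names are illustrative assumptions, not Lookback's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Illustrative model of a Finding: a short, timestamped clip plus a
    descriptive note. Field names are hypothetical, not Lookback's schema."""
    session_id: str
    start_seconds: float  # where the clip starts in the session recording
    end_seconds: float    # where the clip ends
    description: str      # what was observed, not what it means

# Descriptive, not interpretive: record the behavior itself.
evidence = Finding(
    session_id="session-042",
    start_seconds=312.0,
    end_seconds=338.5,
    description="Participant scrolled past the Save button twice before tapping it.",
)
```

Note that the description records observable behavior; the interpretation ("the Save button is hard to find") belongs later, when patterns emerge.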
Use Stakeholder Goals to surface important moments automatically
If your Project has Stakeholder Goals defined, Lookback’s AI (via Eureka) can automatically surface moments in sessions that are relevant to those goals.
When something happens in a session that aligns with a stakeholder goal:
Eureka can generate suggested Findings
these appear in the Findings view and directly in the session feed
each suggestion links back to the exact video moment
This gives you another powerful starting point for analysis: Stakeholder Goals → AI-surfaced Findings → video verification.
These AI-generated Findings are suggestions, not conclusions.
Researchers remain responsible for reviewing, validating, and keeping or discarding them.
Used well, this helps you:
ensure stakeholder interests are represented
avoid missing important moments
prioritize review when time is limited
connect evidence to impact earlier in the process
This works best when:
stakeholder goals are defined before sessions begin
goals are written in clear, outcome-oriented language
researchers still review the underlying video
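As a mental model for that review step, here is a minimal triage sketch. The suggestion fields and the keep/discard flag are hypothetical, not Lookback's data model; the point is that nothing becomes evidence until a researcher has watched the linked moment.

```python
# Hypothetical triage of AI-suggested Findings. Each suggestion links a
# stakeholder goal to a video moment; a researcher keeps or discards it
# only after verifying it against the recording.
suggestions = [
    {"goal": "Reduce checkout drop-off", "timestamp": 412.0, "keep": True},
    {"goal": "Reduce checkout drop-off", "timestamp": 655.0, "keep": False},
    {"goal": "Improve search relevance", "timestamp": 98.0, "keep": True},
]

kept = [s for s in suggestions if s["keep"]]
print(f"Kept {len(kept)} of {len(suggestions)} suggested Findings")
```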
Use notes to capture thinking, not decisions
Notes can be added:
during the session
immediately after
later during synthesis
Use notes to capture:
questions
hypotheses
stakeholder reactions
emerging patterns
Notes represent thinking in progress, not final insight.
Use transcription to find moments, not replace video
Lookback transcribes sessions automatically, and the text is searchable across sessions, so you can locate specific moments quickly.
Transcripts are most useful for:
finding specific language
comparing phrasing across sessions
supporting synthesis
They should not be treated as a substitute for watching key moments.
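For example, if you export transcripts as plain-text files, one per session (the directory layout and filenames below are assumptions, not a Lookback export format), a few lines of Python can locate specific language across sessions:

```python
from pathlib import Path

def find_phrase(transcript_dir: str, phrase: str) -> None:
    """Print every transcript line containing the phrase, with its location.

    Assumes a directory of plain-text transcripts, one file per session;
    adapt the glob pattern to however you actually export transcripts."""
    needle = phrase.lower()
    for path in sorted(Path(transcript_dir).glob("*.txt")):
        for line_no, line in enumerate(path.read_text().splitlines(), start=1):
            if needle in line.lower():
                print(f"{path.name}:{line_no}: {line.strip()}")

find_phrase("transcripts/", "couldn't find")
```

Hits like these tell you where to jump in the video; they are pointers to evidence, not the evidence itself.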
Redact sensitive content before sharing
If sensitive information appears in a session:
redact it before sharing recordings or Findings
redactions permanently remove content from playback
Common reasons to redact:
personal data
credentials
confidential systems
participant mistakes
Redaction protects both participants and stakeholders.
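Redaction of recordings happens inside Lookback, but if you handle exported transcript text yourself, a simple pattern pass can flag obvious personal data before sharing. The patterns below are illustrative and deliberately incomplete; treat this as a first pass, not a guarantee.

```python
import re

# Illustrative patterns only; real personal data takes many more forms.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_text(text: str) -> str:
    """Replace each match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact_text("Reach me at jane@example.com or +1 555 010 1234."))
# -> Reach me at [REDACTED EMAIL] or [REDACTED PHONE].
```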
Share evidence early with stakeholders
Evidence can be shared as:
full sessions
individual Findings
Reels (collections of Findings)
You can also collaborate in context through tagging in the session feed and threaded discussions on Findings and notes.
Stakeholders can:
watch asynchronously
comment directly on evidence in context
participate in sense-making early
trust that nothing is lost in translation
Early sharing increases trust and reduces misinterpretation later.
Group Findings into Themes when patterns emerge
Themes group related Findings across sessions.
Create Themes only when:
patterns start to repeat
evidence supports grouping
Do not rush Themes - they should emerge from evidence, not precede it.
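Conceptually, a Theme is just a label that multiple Findings across sessions support. A minimal sketch of that idea (the labels and the repetition threshold are illustrative):

```python
from collections import defaultdict

# (session_id, short descriptive label) pairs; all values are illustrative.
findings = [
    ("session-01", "missed the Save button"),
    ("session-03", "missed the Save button"),
    ("session-02", "confused by the filter icon"),
    ("session-04", "missed the Save button"),
]

themes = defaultdict(list)
for session_id, label in findings:
    themes[label].append(session_id)

# Promote a label to a Theme only once the evidence repeats across sessions.
for label, sessions in themes.items():
    if len(sessions) >= 2:
        print(f"Theme: {label} ({len(sessions)} sessions: {', '.join(sessions)})")
```

The threshold captures the rule above: one striking moment is a Finding; a pattern across sessions is a Theme.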
Decide what to do next while evidence is fresh
After early review, you may decide to:
adjust tasks or instructions
refine your discussion guide
run follow-up sessions
stop data collection if saturation is reached (new sessions no longer surface new insights)
Lookback supports iterative research, not fixed pipelines.
What not to do after a session
Avoid these common traps:
jumping straight to conclusions
summarizing without linking to evidence
waiting days or weeks before reviewing
treating Findings as presentation assets only
The value of Lookback comes from staying close to the raw material.
