
Running Unmoderated Studies: Quality Checklist

A practical checklist to help you design, run, and review unmoderated studies in Lookback with high data quality and minimal participant error.

Written by Henrik Mattsson
Updated today

Unmoderated studies succeed or fail before the first participant starts.


This checklist helps you catch the most common issues that reduce data quality: unclear intent, misunderstood tasks, silence, or partial answers.

Use it as a pre-flight and post-flight checklist for Tasks and SelfTest studies.


Before you invite participants

Study intent

  • Clear research goal defined (what you want to learn, not just what to test)

  • Tasks or instructions map directly to that goal

  • Each task focuses on one primary question

  • Success criteria are implicit (no “right answers” communicated)


Instructions & prompts

  • Instructions are written in plain language

  • No internal jargon or product shorthand

  • Explicit instruction to think out loud

  • Reminder that confusion is useful, not a failure

  • Instructions tested on someone unfamiliar with the study

Example reminder:

“Please think out loud as you complete the tasks. Say what you’re looking for, what you expect to happen, and what feels confusing.”


Task design (Tasks mode)

  • Tasks are ordered intentionally

  • Tasks don’t assume prior task success unless required

  • Randomization used only where order does not matter

  • Follow-up questions are short and specific

  • AI moderation enabled where clarification is important


Technical setup

  • Correct mode selected (Tasks vs SelfTest)

  • Landing page / prototype loads reliably

  • Mobile participants informed that the Participate app is required

  • Browser and device requirements confirmed

  • Preview Session completed end-to-end


During live unmoderated sessions

Even without a live moderator, you can still monitor quality.

  • Sessions streaming live to the dashboard

  • Early sessions reviewed for misunderstandings

  • Notes added when patterns or confusion emerge

  • Tasks adjusted or duplicated early if a major flaw appears

If the first few participants misunderstand the task, stop and fix it; don't wait. Edits to the round apply as soon as you save, so there is no need to send new links.


After sessions complete

Session review

  • Participants spoke out loud consistently

  • Tasks were completed as intended

  • Follow-up answers were substantive

  • Sessions with technical issues flagged and excluded if needed


Evidence creation

  • Key moments turned into Findings

  • Findings reflect observed behavior, not assumptions

  • Multiple Findings used to support emerging patterns

  • Themes created as patterns emerge


Common unmoderated failure modes (and how to avoid them)

Participants misunderstand the task
→ Rewrite instructions, add context, or enable AI follow-ups

Participants don’t speak
→ Add repeated think-aloud reminders and enable AI moderation

Answers are partial or shallow
→ Break tasks into smaller steps, add clarifying questions

You realize too late the task was wrong
→ Review early sessions live and adjust immediately


When to switch approaches

If you notice:

  • repeated misunderstanding

  • strong need for probing

  • high variance in interpretation

Consider:

  • switching to moderated research, or

  • running a short moderated pilot before scaling unmoderated


Why this checklist exists

Unmoderated research scales participation, but quality still depends on design.

This checklist helps you:

  • protect qualitative depth

  • reduce wasted sessions

  • spot issues early

  • stay close to real evidence
