Duration Recording

Measure the total time behaviors occur and analyze episode length using start/stop toggles.


Duration recording measures how long each behavioral episode lasts. You press a key to start timing when the behavior begins, and press it again to stop. Each episode is recorded individually.

Sightline calculates total cumulative duration, percentage of observation time, number of episodes, and mean episode length — the metrics needed to show both overall impact and the shape of the behavior.

Recording duration episodes

During recording, each behavior appears as an on-screen toggle button showing the current state (off or timing).

To record an episode:

  • Press the number key (1, 2, 3…) or click the button to start timing
  • The button changes appearance (usually brightens or shows a “running” indicator) and the timer begins
  • Press the same key again to stop timing and mark the episode complete
  • The elapsed time for that episode is recorded immediately
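The start/stop toggle described above can be sketched as a small state machine: a behavior is either "off" or currently timing, and the same key press flips between the two states. This is a minimal illustrative sketch, not Sightline's actual implementation; the `DurationRecorder` class and its names are hypothetical.

```python
import time

class DurationRecorder:
    """Hypothetical sketch of start/stop toggle timing for one behavior."""

    def __init__(self):
        self.episodes = []       # completed episodes as (start, duration) in seconds
        self._started_at = None  # None while the behavior is "off"

    def toggle(self, now=None):
        """Press once to start timing; press again to stop and record the episode."""
        now = time.monotonic() if now is None else now
        if self._started_at is None:
            self._started_at = now                       # behavior turned "on"
        else:
            start = self._started_at
            self.episodes.append((start, now - start))   # elapsed time recorded
            self._started_at = None                      # behavior back to "off"
```

Because each behavior gets its own recorder, several behaviors can be timing at once without interfering with each other.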

You can time multiple behaviors simultaneously (for example, recording both “out-of-seat” and “off-task” during the same observation), though you will typically track one per session.

Reviewing episodes during the session

On the recording screen, a running display shows:

  • Active timers — visual indicator of which behaviors are currently “on”
  • Running totals — cumulative duration per behavior and percentage of observation time consumed
  • Episode count — number of distinct episodes recorded so far

If you start a timer by mistake, press Cmd+Z to undo the last episode without losing your progress.

After recording

Sightline calculates and displays:

  • Total cumulative duration — sum of all episode durations
  • Percentage of observation time — (total duration ÷ session duration) × 100
  • Number of episodes — count of distinct episodes
  • Mean duration per episode — total duration ÷ number of episodes
  • Range — shortest to longest episode duration
  • Session duration — total time observed
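The metrics above are all simple functions of the per-episode durations and the session length. As an illustrative sketch (the `duration_summary` function is hypothetical, not part of Sightline):

```python
def duration_summary(durations_s, session_s):
    """Compute the duration metrics listed above from per-episode durations (seconds)."""
    total = sum(durations_s)
    n = len(durations_s)
    return {
        "total_s": total,                                   # total cumulative duration
        "pct_of_session": 100.0 * total / session_s,        # (total ÷ session) × 100
        "episodes": n,                                      # number of episodes
        "mean_s": total / n if n else 0.0,                  # mean duration per episode
        "range_s": (min(durations_s), max(durations_s)) if n else (0.0, 0.0),
        "session_s": session_s,                             # total time observed
    }
```

For example, episodes of 45, 120, and 30 seconds in a 20-minute (1200 s) session give a total of 195 s, 16.25% of the session, and a mean episode of 65 s.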

Results appear as:

  • Summary statistics — all metrics above, in text
  • Episode timeline — chronological list of each episode with start time and duration
  • Duration trend chart — how episode length changes across the session (are episodes getting longer or shorter?)
  • Percentage breakdown — visual representation of how much of the session the behavior consumed

Interpreting duration data

Always report three numbers:

  1. Total cumulative duration
  2. Percentage of observation time
  3. Mean duration per episode with range

These three metrics together paint a complete picture:

  • Total duration and percentage show overall impact on instructional or engagement time
  • Episode count and average length show whether the behavior occurs as many brief interruptions or fewer prolonged episodes

A student off-task for 18 minutes total could represent:

  • 18 episodes of 1 minute each (frequent, brief off-task bursts)
  • 2 episodes of 9 minutes each (infrequent but sustained disengagement)

These require very different interventions, so reporting both frequency and episode length is critical.
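A quick check of the two profiles above shows why total duration alone is not enough: the totals are identical, but episode count and mean length differ by an order of magnitude (illustrative arithmetic only):

```python
# Two off-task profiles with the same 18-minute (1080 s) cumulative total
frequent_brief = [60] * 18   # 18 one-minute episodes
sustained = [540] * 2        # 2 nine-minute episodes

def shape(episodes):
    """Return (total seconds, episode count, mean episode length in seconds)."""
    return sum(episodes), len(episodes), sum(episodes) / len(episodes)
```

`shape(frequent_brief)` gives (1080, 18, 60) while `shape(sustained)` gives (1080, 2, 540): same impact on instructional time, very different behavior shape.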

Duration paired with frequency recording

A complete behavior picture often uses both methods:

  • Frequency: How many episodes occurred?
  • Duration: How long was each episode?

You can record both in separate sessions with the same observation configuration, then compare:

  • If a behavior decreases in frequency but increases in episode duration, the intervention may be suppressing initiation but not sustaining engagement once the episode begins
  • If both frequency and duration decrease, the intervention is addressing the behavior comprehensively
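The comparison logic above can be sketched as a simple check over two sessions' metrics. This is a hypothetical helper for illustration, assuming each session is summarized as an episode count and a cumulative duration:

```python
def compare_sessions(before, after):
    """Flag the patterns described above; each session is a dict with
    'episodes' (count) and 'total_s' (cumulative duration in seconds)."""
    mean_before = before["total_s"] / before["episodes"]
    mean_after = after["total_s"] / after["episodes"]
    fewer = after["episodes"] < before["episodes"]
    longer = mean_after > mean_before
    if fewer and longer:
        return "fewer but longer episodes: initiation suppressed, not duration"
    if fewer and not longer:
        return "both frequency and episode length decreased"
    return "mixed pattern: inspect the episode timeline"
```

For example, going from 10 episodes totaling 600 s to 4 episodes totaling 480 s means mean episode length rose from 60 s to 120 s, flagging the first pattern.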

Common mistakes to avoid

  1. Using duration for discrete behaviors — don’t use duration recording for single-event behaviors like hand raises or call-outs. Use Frequency Recording instead. Duration recording works for behaviors that have extended “on” periods.

  2. Vague start/stop criteria — your operational definition must specify exactly what counts as “on” and “off.” Example: “on-task means eyes and pencil engaged with materials; off-task means eyes diverted or pencil idle for more than 3 seconds.” This removes observer subjectivity.

  3. Forgetting to stop the timer — if you start a timer and forget to stop it before the session ends, you’ll inflate the final episode’s duration. Check your episode list after recording and correct if needed (edit the final episode directly).

  4. Not accounting for session pauses — if you pause the session timer, active behavior timers keep running. When you resume, those timers continue. This is usually the correct behavior (you want to know how long the student was off-task during the time the observation was active, including pauses). But if you pause to briefly interrupt the student, note this in your report.

Duration data and instructional impact

Duration data is especially useful for communicating impact to teachers and families. Saying “the student was off-task for 35% of the observation” is more meaningful than raw counts and directly connects to lost instructional opportunity.

When writing a report, frame duration data in context:

  • “During the 20-minute independent work period, Marcus was off-task for a cumulative 14 minutes (70%), distributed across 6 episodes. His longest off-task episode lasted 5 minutes and 40 seconds. In contrast, a randomly selected peer was off-task for 2 minutes total (10%) across 3 brief episodes during the same period.”

This gives readers the full picture: frequency, total impact, and episode length pattern.

Tips for accurate duration recording

  1. Practice before live use — time yourself reading a paragraph or doing a task to get a feel for the method. It’s more cognitively demanding than frequency recording because you’re juggling multiple episodes at once.

  2. Use double-observer IOA — duration data can vary based on how strictly you apply start/stop criteria. Conduct inter-observer agreement checks with a peer observer to verify consistency.

  3. Review your episode list after recording — check that episodes are in chronological order and durations look reasonable. Correct obvious errors (like an episode that lasted 25 minutes in a 20-minute session).

  4. Compare to peer data — if possible, observe a same-age peer using duration recording and identical start/stop criteria. The comparison contextualizes whether the target student’s behavior is notably different.

  5. Consider baseline variability — record 2–3 baseline sessions before implementing an intervention. This shows you whether the behavior naturally fluctuates day-to-day, making it easier to detect real intervention effects.
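The IOA check in tip 2 is often computed as total-duration agreement: the shorter of the two observers' cumulative durations divided by the longer, times 100. A minimal sketch (the function name is hypothetical):

```python
def total_duration_ioa(observer_a_s, observer_b_s):
    """Total-duration IOA: shorter cumulative duration ÷ longer, × 100."""
    shorter, longer = sorted([observer_a_s, observer_b_s])
    return 100.0 * shorter / longer
```

For example, if one observer records 300 s of off-task behavior and the other records 330 s, agreement is about 90.9%. Note that total-duration IOA is a coarse check; two observers can agree on totals while disagreeing on individual episodes.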

Exporting duration data

Duration data export includes:

  • PDF report — summary statistics (total, percentage, mean, range), episode timeline, and trend chart
  • CSV data file — per-episode start time, duration, and elapsed time for external analysis

See Exporting for detailed export options.