Latency Recording

Measure the time between a prompt and the student's response to assess response speed and compliance.

Latency recording measures the time elapsed between a prompt, instruction, or cue and the student’s response. Each trial is recorded separately, and sight·line calculates mean latency, range, and cumulative delay.

Use it to quantify compliance speed, task initiation, response-to-name, or instruction-following — the clinical questions that a yes/no compliance percentage can’t answer.

Understanding latency in clinical context

Latency data distinguish between two different concerns:

  1. Can the student perform the behavior? (Accuracy question) — answered by whether the behavior occurs eventually
  2. How quickly do they perform it? (Speed question) — answered by latency data

A student might comply with every instruction (100% compliance accuracy) but take an average of 2 minutes to start each task while peers start within 10 seconds. Latency data capture this distinction and reveal that the clinical issue is response speed, not response ability.

Latency is especially useful in:

  • Task initiation — time from direction to begin work until pencil touches paper
  • Compliance — time from instruction to initiation of compliance behavior
  • Transition speed — time from dismissal until student begins the next activity
  • Attention responsiveness — time from “look at me” or name-call until student orients to the speaker
  • Response-to-name — time from hearing name until head turn or eye contact

Recording latency trials

During recording, each trial appears as a timed episode. sight·line measures the time between when you start the timer and when you stop it.

To record a trial:

  1. Wait for the prompt to occur (teacher gives direction, says student’s name, etc.)
  2. Press the number key (1, 2, 3…) to start the latency timer for that behavior
  3. Watch the student’s response
  4. Press the same key again when the response occurs
  5. The elapsed time (latency) records automatically

sight·line captures both the raw latency in seconds and the context (elapsed time within the session).
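The press-to-start / press-to-stop workflow above amounts to a toggle timer. This is a hypothetical sketch of that logic (class and field names are illustrative, not sight·line's actual internals), using a monotonic clock so latencies aren't affected by system clock changes:

```python
import time

class LatencyRecorder:
    """Hypothetical sketch of press-to-start / press-to-stop latency timing."""

    def __init__(self):
        self._start = None                    # monotonic timestamp of the open trial, if any
        self._session_start = time.monotonic()
        self.trials = []                      # (trial_number, latency_sec, session_elapsed_sec)

    def press(self):
        """First press starts the trial timer; second press records the latency."""
        now = time.monotonic()
        if self._start is None:
            self._start = now                          # prompt delivered: start timing
        else:
            latency = now - self._start                # response occurred: stop timing
            elapsed = now - self._session_start        # context: where in the session
            self.trials.append((len(self.trials) + 1, latency, elapsed))
            self._start = None
```

Pressing once marks the prompt, pressing again marks the response; each completed pair yields one trial with its raw latency and its position in the session.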

Reviewing trials during the session

On the recording screen, a list displays:

  • Trial number — sequential count of trials recorded
  • Latency — elapsed time for each trial (in seconds)
  • Elapsed time — where in the session the trial occurred
  • Running statistics — current mean, range, and cumulative delay

After recording

sight·line automatically calculates and displays:

  • Mean latency — average time across all trials
  • Range — shortest and longest latencies observed
  • Cumulative delay — total time lost across all trials (sum of all latencies)
  • Trial count — number of prompts delivered and measured
  • Session duration — total observation time
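These summary statistics reduce to simple arithmetic over the per-trial latencies. A minimal sketch, using made-up trial values for illustration:

```python
# Per-trial latencies in seconds (illustrative values, not real data)
latencies = [22, 45, 60, 74, 80, 61, 60, 190]

mean_latency = sum(latencies) / len(latencies)       # average time across all trials
shortest, longest = min(latencies), max(latencies)   # range: shortest and longest observed
cumulative_delay = sum(latencies)                    # total time lost across all trials
trial_count = len(latencies)

print(f"mean {mean_latency:.0f}s, range {shortest}-{longest}s, "
      f"cumulative {cumulative_delay // 60}m {cumulative_delay % 60}s "
      f"over {trial_count} trials")
# → mean 74s, range 22-190s, cumulative 9m 52s over 8 trials
```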

Results appear as:

  • Summary statistics — mean, range, and cumulative delay in clear text
  • Trial timeline — chronological list of each trial with latency
  • Trend chart — whether latencies are decreasing, increasing, or stable across the session
  • Comparison to expectation or peer — a side-by-side comparison against classroom norms or peer data, when you have them

Interpreting latency data

Always report three numbers:

  1. Mean latency
  2. Range (shortest to longest)
  3. Number of opportunities measured

These three pieces provide context:

  • Mean latency — typical response speed
  • Range — consistency (narrow range = consistent; wide range = variable)
  • Opportunity count — stability (mean based on 3 opportunities is less stable than mean based on 15)

Cumulative delay is the most clinically useful metric for communicating impact to teachers and families:

“During the 20-minute observation, the teacher issued 8 task initiation prompts. The student’s mean latency to task initiation was 74 seconds, with a range of 22 seconds to 3 minutes and 10 seconds. Cumulatively, the student’s delays consumed 9 minutes and 52 seconds of instructional time — nearly half the observation period — time the student could have spent practicing the assigned skill.”
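The "nearly half the observation period" framing in a statement like that is just cumulative delay divided by observation time; for instance:

```python
# Figures from the example report above
cumulative_delay_sec = 9 * 60 + 52   # 9 min 52 s of summed latencies
session_sec = 20 * 60                # 20-minute observation

share = cumulative_delay_sec / session_sec
print(f"{share:.0%} of instructional time lost to initiation delays")
# → 49% of instructional time lost to initiation delays
```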

Latency paired with compliance data

Latency is most meaningful when paired with compliance data:

  • Compliance percentage — what proportion of prompts did the student ultimately follow? (accuracy)
  • Latency — how quickly did they respond? (speed)

A student might show 95% compliance (follows almost all instructions) but with mean latency of 90 seconds, suggesting that the issue is not refusal but slow initiation.

Conversely, a student with 85% compliance and a mean latency of 8 seconds responds quickly when they do respond; the clinical concern is refusal of some instructions (compliance), not slow initiation (latency).
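One hypothetical way to derive both measures from the same trial log (the record shape is illustrative):

```python
# Each trial: (complied, latency_sec); latency is None when the student never responded.
trials = [
    (True, 12), (True, 95), (False, None), (True, 60),
    (True, 88), (True, 110), (False, None), (True, 75),
]

complied = [t for t in trials if t[0]]
compliance_pct = 100 * len(complied) / len(trials)           # accuracy: did they follow?
mean_latency = sum(t[1] for t in complied) / len(complied)   # speed: how fast, when they did

print(f"compliance {compliance_pct:.0f}%, mean latency {mean_latency:.0f}s")
# → compliance 75%, mean latency 73s
```

Reporting the two numbers together makes clear whether the intervention target should be refusal, initiation speed, or both.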

Establishing baseline expectations

Latency data are meaningless without a benchmark. Establish one by:

  1. Observing a same-age peer using identical prompt-response definitions — compare target latencies to peer latencies
  2. Consulting classroom norms — ask the teacher “How long should it typically take a student to start work after you say ‘open your book’?”
  3. Using published guidelines — some interventions specify latency expectations (example: 5 seconds for name-response in young children is typical)

“Noah’s mean latency to task initiation was 74 seconds, compared to a mean of 6 seconds for a comparison peer observed during the same lesson. This 12:1 discrepancy suggests that Noah has a notable difficulty with response initiation speed, even though he ultimately complies with the instructions.”

Common mistakes to avoid

  1. Measuring the wrong thing — clearly define what counts as the start of the response interval. Example: does task initiation mean pencil touches paper, or does it mean pencil touches paper and student begins writing? Pin this down in setup.

  2. Inconsistent trial identification — if you’re inconsistent about when you start the timer (sometimes at the instruction, sometimes after the student has already begun), your data will be noisy. Use a clear, repeatable cue.

  3. Missing early trials — if the student has already initiated a response before you start timing (because you weren’t ready), don’t record a trial. Report only the trials you actually measured.

  4. Not accounting for environmental delays — if a teacher pauses before repeating an instruction because a student is processing, don’t count that pause as latency. Latency should measure student response time, not teacher wait time.

  5. Insufficient sample — latencies vary day-to-day. Collect at least 5–8 opportunities in a session, and ideally 2–3 sessions, before concluding that latency is or isn’t a problem.

Tips for accurate latency recording

  1. Define “response” operationally — “student opens book to page X” is clear. “Student begins task” is vague. The clearer your definition, the more consistent your timing.

  2. Use double-observer IOA — latency timing depends on a judgment call about when the response has occurred. Run dual-observer sessions and check interobserver agreement (IOA) to verify consistency.

  3. Collect multiple trials per session — a mean based on 3 trials is unstable. Aim for 8–10+ trials per session if possible.

  4. Record in realistic contexts — don’t use artificial prompts invented for the observation. Use naturally occurring classroom instructions and transitions.

  5. Pair with descriptive notes — latency data alone don’t explain why the student is slow. Describe what the student is doing during the latency period (organizing materials, looking around, appearing confused, etc.).

  6. Consider medication or fatigue effects — latencies can vary based on time of day, whether the student is medicated, or other biological factors. Record at a consistent time if possible, or note when variations occur.

Exporting latency data

Latency data export includes:

  • PDF report — summary statistics (mean, range, cumulative delay), trial timeline, trend chart, and any comparison data
  • CSV data file — per-trial latencies with elapsed session time for external analysis

See Exporting for detailed export options.

Example: using latency for intervention planning

A student exhibits the following pattern across 3 baseline sessions:

| Session | Mean Latency | Range | Peer Mean | Trials |
|---------|--------------|-------|-----------|--------|
| 1 | 58 sec | 12–142 sec | 7 sec | 8 |
| 2 | 72 sec | 18–210 sec | 6 sec | 8 |
| 3 | 64 sec | 14–156 sec | 8 sec | 9 |

Baseline average: 65 seconds vs. peer 7 seconds (9:1 discrepancy)

The intervention implements a visual timer, verbal countdown, and immediate praise for initiating within 15 seconds:

| Session | Mean Latency | Range | Improvement | Trials |
|---------|--------------|-------|-------------|--------|
| 4 | 48 sec | 8–95 sec | -26% | 10 |
| 5 | 31 sec | 5–68 sec | -52% | 10 |
| 6 | 19 sec | 4–42 sec | -71% | 10 |
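The improvement percentages are each intervention session's mean latency relative to the baseline mean of about 65 seconds:

```python
baseline_mean = 65                      # seconds, averaged across the three baseline sessions
session_means = {4: 48, 5: 31, 6: 19}   # intervention sessions (seconds)

for session, mean in session_means.items():
    improvement = (baseline_mean - mean) / baseline_mean
    print(f"session {session}: {-improvement:.0%}")
# → session 4: -26%
# → session 5: -52%
# → session 6: -71%
```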

Latency is decreasing consistently toward the peer baseline, suggesting the intervention is effective. This data-driven feedback guides the teacher’s intervention refinement (continue, intensify, or modify as needed).

Latency data make this impact visible and measurable in a way that general observation or teacher report cannot.