Interval Recording

Score behaviors at fixed intervals using whole interval, partial interval, or momentary time sampling methods.


Interval recording divides a session into fixed time blocks and scores whether a behavior occurs during each interval (whole and partial interval) or at the moment each interval ends (momentary time sampling). The three methods below differ in scoring rules and measurement bias; pick based on what you want the data to show.

The three interval methods

sight·line supports three distinct interval recording approaches. Choose one based on your assessment question:

Whole Interval Recording

Rule: Score the behavior only if it occurs throughout the entire interval without stopping.

Interpretation: Conservative estimate. Any brief interruption cancels the score.

When to use: When you want to demonstrate that a behavior is pervasive and continuous — best for sustained behaviors like on-task engagement or positive peer interaction you hope to increase.

Key property — underestimates: A student on-task for 90% of an interval but briefly looking away will score 0 for that interval. Whole interval data tends to underestimate the true rate of the behavior.

Example: Observing on-task behavior with 15-second intervals over 20 minutes yields 80 intervals total. If the student scores 28 intervals, the result is 35% of intervals on-task — a conservative lower-bound estimate. Pair this with the caveat that actual on-task time was likely somewhat higher.
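The arithmetic in this example can be sketched in a few lines. These helper functions are hypothetical, for illustration only, not sight·line's own code:

```python
# Hypothetical helpers mirroring the whole interval example:
# 15-second intervals over a 20-minute session, 28 intervals scored.

def interval_count(session_minutes: float, interval_seconds: int) -> int:
    """Number of fixed intervals that fit in the session."""
    return int(session_minutes * 60) // interval_seconds

def percent_scored(scored: int, total: int) -> float:
    """(intervals scored / total intervals) * 100."""
    return 100.0 * scored / total

total = interval_count(20, 15)    # 80 intervals
pct = percent_scored(28, total)   # 35.0% of intervals on-task
print(total, pct)
```

The same arithmetic applies to partial interval and MTS results; only the scoring rule that produces the raw count differs.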

Partial Interval Recording

Rule: Score the behavior if it occurs at any point during the interval, regardless of how briefly.

Interpretation: Liberal estimate. One brief instance scores the entire interval.

When to use: When you want to capture low-rate or short-duration behaviors that might be missed by whole interval recording — useful for off-task behavior, stereotypy, or verbal outbursts.

Key property — overestimates: A single two-second outburst in a 30-second interval scores the same as 30 seconds of continuous outburst. Partial interval data tends to overestimate the true rate of the behavior.

Example: Observing off-task behavior with 15-second intervals. If the student scores 52 of 80 intervals, the result is 65% of intervals off-task. This is an upper-bound estimate; actual off-task time was likely somewhat lower. Always note this bias when reporting.

Momentary Time Sampling (MTS)

Rule: At the end of each interval, observe the student’s state at that exact moment and score accordingly. The timer pauses briefly so you can score before the next interval begins.

Interpretation: Unbiased snapshot. You capture a point-in-time sample of behavior.

When to use: The most practical method for long sessions or when observing multiple students in rotation. Provides the least biased prevalence estimate among discontinuous recording methods.

Key property — point-in-time: MTS does not track what happened during the interval, only the behavior present at the exact boundary. It can miss brief behaviors that occur and end between observation points. However, for behaviors that are relatively stable over time (engagement, on-task state, activity), MTS estimates are generally more accurate than partial interval.

Example: Observing academic engagement with 15-second intervals. At the end of each 15-second block, the timer pauses momentarily. You observe whether the student is engaged right now. If engaged at 42 of 80 observation points, the result is 52.5% of moments engaged, a reasonable estimate of the true proportion of the session during which the student was engaged.
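The relative biases of the three methods can be demonstrated with a small simulation. This is an illustrative sketch (assuming bout-like behavior), not sight·line's code: whole interval scores only intervals the behavior filled completely, partial interval scores any occurrence, and MTS samples the state at each boundary.

```python
# Simulate second-by-second on/off behavior, then score the same data
# three ways to show: whole underestimates, partial overestimates,
# and MTS lands near the true prevalence.
import random

random.seed(1)
SESSION_S, INTERVAL_S = 1200, 15  # 20 minutes, 15-second intervals

# Behavior with persistence, so it occurs in bouts rather than
# flipping state every second.
state, states = True, []
for _ in range(SESSION_S):
    if random.random() < 0.1:  # ~10% chance of switching each second
        state = not state
    states.append(state)

true_prevalence = 100.0 * sum(states) / len(states)

whole = partial = mts = 0
n_intervals = SESSION_S // INTERVAL_S
for i in range(n_intervals):
    chunk = states[i * INTERVAL_S:(i + 1) * INTERVAL_S]
    whole += all(chunk)    # scored only if continuous throughout
    partial += any(chunk)  # scored if it occurs at any point
    mts += chunk[-1]       # state at the interval boundary

print(f"true: {true_prevalence:.1f}%")
print(f"whole: {100.0 * whole / n_intervals:.1f}%")
print(f"partial: {100.0 * partial / n_intervals:.1f}%")
print(f"MTS: {100.0 * mts / n_intervals:.1f}%")
```

Because every interval scored by whole interval is also scored by MTS, and every interval scored by MTS is also scored by partial interval, the three estimates are always ordered whole ≤ MTS ≤ partial for the same data.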

Recording an observation

Once you press Start, a behavior grid appears on screen with columns for each behavior and rows for each interval.

To score an interval:

  • Click the cell at the interval you want to score
  • Or press the number key (1, 2, 3…) corresponding to the behavior
  • The cell fills with color, indicating the behavior occurred

For MTS only: The timer pauses at each interval boundary and displays a prompt (“Score now”). Observe the student’s state at that exact moment, score, and the timer resumes.

Interval cues

sight·line plays an audible tone and/or provides a visual flash at each interval boundary. Both can be configured independently in Settings:

  • Audio cues — useful for classroom observations where you need audible alerts
  • Visual cues — useful in quiet environments where audio would be disruptive
  • Off — if you prefer to track time yourself

The tone and flash help you stay synchronized without constantly watching a timer.

Reviewing during the session

On the recording screen, a grid displays:

  • Rows — each interval in chronological order
  • Columns — each behavior you defined
  • Cells — colored when behavior is scored, empty when not

You can scroll back to view past intervals or correct errors. Press Escape to return to the current interval.

After recording

sight·line automatically calculates and displays:

  • Percentage per behavior — (intervals scored ÷ total intervals) × 100
  • Total intervals — actual count of intervals completed
  • Interval-by-interval breakdown — grid visualization showing when behavior occurred
  • Trend — whether the behavior’s percentage is increasing, decreasing, or stable across the session
  • Activity comparison — if activity context was enabled, separate percentage for each activity type
  • Peer comparison — if a peer was observed, target vs. peer percentage (discrepancy ratio)

Results appear as:

  • Summary statistics — percentages, interval count, session duration
  • Interval grid — visual color-coded grid showing when each behavior occurred
  • Trend chart — running percentage across the session, useful for detecting drift or phase effects
  • Discrepancy graph — target vs. peer percentage, if comparison data available
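The summary statistics can be sketched as follows. The trend rule here (comparing first-half vs. second-half percentages with a 5-point threshold) is an illustrative assumption, not necessarily the rule sight·line itself applies:

```python
# Sketch of post-session summary computations for one behavior's
# per-interval scores (True = scored, False = not scored).

def summarize(scores: list[bool]) -> dict:
    n = len(scores)
    pct = 100.0 * sum(scores) / n
    # Assumed trend rule: compare first half vs. second half.
    half = n // 2
    first = 100.0 * sum(scores[:half]) / half
    second = 100.0 * sum(scores[half:]) / (n - half)
    if second - first > 5:
        trend = "increasing"
    elif first - second > 5:
        trend = "decreasing"
    else:
        trend = "stable"
    return {"percent": pct, "total_intervals": n, "trend": trend}

# 10 intervals of off-task data, mostly scored early in the session:
print(summarize([True] * 4 + [False] + [True] + [False] * 4))
```

A session where the behavior concentrates in the first half, as above, reports a decreasing trend even though the overall percentage is 50%.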

Interpreting interval data

Report three things:

  1. Method used (whole, partial, or MTS)
  2. Interval length
  3. Percentage data

Always acknowledge the measurement properties of your chosen method:

  • Whole interval: “On-task behavior was scored in 35% of intervals. Whole interval recording tends to underestimate actual engagement; the student’s true on-task time was likely somewhat higher.”
  • Partial interval: “Off-task behavior was scored in 65% of intervals. Partial interval recording tends to overestimate low-rate behaviors; actual off-task time was likely somewhat lower.”
  • MTS: “Academic engagement was observed at 42% of momentary observation points. Momentary time sampling provides an unbiased estimate of the proportion of time engaged.”

Comparing across sessions: Because interval data are percentages, sessions of different lengths are directly comparable. A 20-minute session and a 30-minute session both yield a standardized percentage of intervals, so results can be charted and compared on the same scale.

Peer comparison: The discrepancy ratio (peer % ÷ target %) contextualizes your data. If the target student is on-task in 35% of intervals and a comparison peer in 80%, the ratio is 80 ÷ 35 ≈ 2.3:1, telling the IEP team that the target student's engagement is less than half the peer's.
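As a quick check of the arithmetic, the discrepancy ratio puts the larger (peer) percentage on top, which is how the 2.3:1 figure above is obtained:

```python
# The discrepancy ratio from the example: 80% peer vs. 35% target.

def discrepancy_ratio(peer_pct: float, target_pct: float) -> float:
    """Peer percentage divided by target percentage."""
    return peer_pct / target_pct

print(f"{discrepancy_ratio(80, 35):.1f}:1")  # 2.3:1
```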

Common mistakes to avoid

  1. Choosing the wrong method — match the method to your question: whole interval for behaviors you want to increase, partial interval for low-rate behaviors you want to decrease, MTS for stable states. Mixing methods within a study produces incomparable data.

  2. Inconsistent interval length — if your first session uses 15-second intervals and your second uses 30-second intervals, the data are less comparable. Standardize on an interval length before you start baseline.

  3. Vague behavior definitions — operational definitions must specify what counts as “on” and “off.” Example: “on-task: eyes and materials oriented; off-task: eyes diverted for >3 seconds.” Without this clarity, scoring becomes subjective.

  4. Not using peer comparison — interval data alone don’t contextualize severity. A 50% off-task rate could be typical or concerning depending on classroom norms. Always observe a comparison peer if possible.

  5. Ignoring the measurement bias — whole interval underestimates, partial overestimates. If you don’t note these biases in your report, readers will misinterpret the data.

MTS special consideration: timer pausing

For MTS only, sight·line pauses the session timer at each interval boundary while you score. This differs from other methods, where scoring doesn’t interrupt the timer.

Why: MTS explicitly requires observation of the behavior state at a specific moment. Pausing gives you that moment to score without rushing. Once you score, the timer resumes.

This is the correct MTS procedure. If you don’t score within a few seconds, the timer may auto-resume to keep the observation on schedule.

Activity context during interval recording

Each interval is automatically tagged with the current Activity Context if tracking is enabled. In results, you can see behavior rates broken down by activity type:

  • Large group instruction → 40% off-task
  • Independent work → 65% off-task
  • Transition → 80% off-task

This reveals whether behavior is activity-specific and helps identify which settings need targeted intervention.
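The per-activity breakdown amounts to grouping interval scores by their activity tag and computing a percentage per group. The field names below are illustrative, not sight·line's actual data schema:

```python
# Group interval scores by activity tag and compute % per activity.
from collections import defaultdict

intervals = [
    {"activity": "large_group", "off_task": True},
    {"activity": "large_group", "off_task": False},
    {"activity": "independent", "off_task": True},
    {"activity": "independent", "off_task": True},
    {"activity": "transition",  "off_task": True},
]

scored = defaultdict(int)
total = defaultdict(int)
for iv in intervals:
    total[iv["activity"]] += 1
    scored[iv["activity"]] += iv["off_task"]

for act in total:
    print(f"{act}: {100.0 * scored[act] / total[act]:.0f}% off-task")
```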

Exporting interval data

Interval data export includes:

  • PDF report — summary statistics, interval grid, trend chart, peer comparison, and activity breakdown
  • CSV data file — per-interval scores for each behavior, with elapsed time and activity context, for external analysis

See Exporting for detailed export options.
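For external analysis, a per-interval CSV along the lines described might look like this. The column names are assumptions for illustration, not the documented export schema:

```python
# Sketch of a per-interval CSV with elapsed time, activity context,
# and one column per behavior (1 = scored, 0 = not scored).
import csv, io

rows = [
    {"interval": 1, "elapsed_s": 15, "activity": "large_group",
     "off_task": 1, "vocal": 0},
    {"interval": 2, "elapsed_s": 30, "activity": "large_group",
     "off_task": 0, "vocal": 1},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

One row per interval makes it straightforward to recompute percentages, filter by activity, or join with other session data in a spreadsheet or pandas.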

Tips for effective interval recording

  1. Start with MTS for long sessions — if you’re recording for 30+ minutes, MTS reduces cognitive load compared to continuous whole or partial interval observation.

  2. Use whole interval when you want a conservative estimate — appropriate for behaviors like on-task or compliance that you hope to increase.

  3. Use partial interval for low-rate behaviors — appropriate for stereotypy or outbursts that might be missed by whole interval.

  4. Conduct inter-observer agreement — interval scoring can vary based on judgment calls. Run a dual-observer session to verify consistency.

  5. Pair with frequency baseline — before starting interval recording, collect 1–2 frequency sessions to establish raw rate. This gives context to your interval percentages.

See Observation Methods for detailed clinical guidance on choosing between methods and interpreting results.