Selecting, Defining, and Measuring Behavior Week 2: Seeing is Believing
Behavioral Assessment
A comprehensive, multi-method process of data collection used to identify and define behavioral targets for change.
Purposes of Behavioral Assessment
- Screening
- Defining problems and/or desired achievement criteria
- Pinpointing the target behavior to be treated
- Monitoring progress
- Follow-up
Consider the Social Significance of the Behavior
- Habilitation? Maximize reinforcement and minimize punishment
- Is it a prerequisite to learning a new behavior?
- Does it increase access to learning new behavior?
- Does it facilitate social interaction? (e.g., getting rid of aggression, increasing compliments)
Social Significance Considerations
- Behavioral cusp? (e.g., reading)
- Pivotal behavior?
- Response class
- Age appropriate? (e.g., playing with a doll)
- Is this a real behavior of interest? (e.g., “on task,” “losing weight”)
- Is a replacement behavior needed?
Prioritizing Behaviors
- Danger to self or others?
- How “severe” is it?
- How long has the problem been occurring?
- Will change produce higher rates of R+?
- Relative importance?
- Will it reduce negative social interaction?
- Does the behavior produce R+ for others?
- Likelihood of change? (literature, experience, environment, resources)
- How much will it cost?
SEE Figure 3.5 on prioritizing potential target behaviors
Methods of Assessment (RIOT)
- Record review
- Interviews/checklists
- Observations
- Tests
Record Review
- School records
- Permanent products
- Report cards
- Work samples
Determining Whether a Permanent Product Is Appropriate
- Is real-time measurement needed?
- Can the behavior be measured by a permanent product?
- Will obtaining a contrived permanent product (i.e., one planned for measurement, such as a tape recording) affect the behavior?
- How much will it cost?
Interviews
- Reliability is a concern
- Can be used to generate hypotheses
- Get direct data (i.e., independent observation) to corroborate
Interviews (continued)
Ask what, when, where, and how questions; do not ask “why.”
- What does the behavior look like?
- When does it occur?
- What happens before the behavior?
- What happens after the behavior?
- Where does the behavior occur?
- Who is around?
- What gets the behavior to stop?
- When is the behavior likely NOT to occur?
- How long does the behavior occur?
- How frequently does the behavior occur?
Rating Scales & Checklists
- More reliable than verbal report
- Used only as a “screener”: DO NOT USE ALONE FOR INTERVENTION OR DIAGNOSIS!
- Examples: Behavioral Assessment System for Children, Diagnostic Inventory System for Children, RCMAS, CDI
Observations
This is not an anecdotal report of what someone observed for a class period.
General guidelines for observations:
- Don’t be intrusive.
- Agree upon a clearly defined and observable behavior first.
- Observe across days/times/settings to increase reliability.
- Use with other forms of assessment to increase validity.
- Carefully consider the goal of the observation before selecting an observation tool.
- Always note the environmental context of the behavior.
- Observe students in their natural environments.
- Always observe peers for a comparison.
- TRY TO REDUCE REACTIVITY!
Observation “Systems”
- Save your money; these are very limited
- Use direct, systematic behavioral observation methods instead
Direct Behavioral Observations
- ABC logs
- Frequency tabulation logs
- Systematic interval recording
Examples of Direct Observations: ABC Recording
- Antecedents: what occurs right before the behavior
- Behavior: the problem behavior (observable and defined)
- Consequences: what happens right after the behavior
Advantages of an ABC Log
- Frees up the practitioner
- Allows for measurement that is otherwise inconvenient or inaccessible
- May be more accurate/complete
This is an example of an ABC log that was documented each time the target behavior occurred. It is important in this situation to clearly define the target behavior. Ask the audience what patterns they see.
Examples of Direct Observations: Frequency Count (a RATE measure!)
- A measure of how often a clearly defined behavior occurs within a given period of time.
- Examine the frequency of the behavior by tallying or counting the behavior as it occurs.
- Use this when the behavior is discrete (has an obvious beginning and ending) and does not occur at very high rates.
- This information is helpful at ALL steps of the problem-solving process.
- ALWAYS MEASURE AS RATE WHEN POSSIBLE! (See the sketch below for converting a count into a rate.)
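Since the slide stresses reporting frequency as rate, here is a minimal sketch of the conversion; the function name and the example numbers (12 call-outs in 30 minutes) are hypothetical.

```python
def rate_per_minute(count: int, observation_minutes: float) -> float:
    """Convert a frequency count into a rate (responses per minute)."""
    return count / observation_minutes

# Hypothetical example: 12 call-outs tallied during a 30-minute observation.
print(rate_per_minute(12, 30))  # 0.4 responses per minute
```

Reporting rate rather than the raw count keeps observation sessions of different lengths comparable.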
This is an example of another chart. The X’s mark times when data collection didn’t occur; this is important to note, otherwise we would assume that the student’s behavior was perfect. What benefits does this one have over the other? Again, the challenge is figuring out clever ways to collect the information (talk about various ways of tallying behaviors, such as flipping the pages of a book or using a golf counter). Ask the audience for clever ways they have kept track of behavior, and what patterns they see.
Examples of Direct Observations: Systematic Data Recording
Examine the percentage of intervals with the target behavior by:
- Recording whether the selected student is engaging in the target behavior during 10-second intervals for 15 minutes.
- Observing peers in the same way as a comparison.
Requires more training than the other observation tools. This information is helpful at all steps of the problem-solving process.
Systematic Direct Behavioral Observations: Interval Recording
- Partial interval recording: score if the behavior occurs at any time within the interval
- Whole interval recording: score only if the behavior occurs throughout the entire interval
- Momentary time sampling: score if the behavior is occurring at the moment the interval ends (observed within 3 seconds)
- Planned activity check: a frequency count of how many students are engaged at that moment
- Duration recording: how long the behavior occurs
(A sketch contrasting the first three scoring rules follows below.)
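A rough sketch of how the same observation stream can be scored under partial-interval, whole-interval, and momentary time sampling rules. It assumes 1-second resolution and 10-second intervals; the 30-second session data are invented for illustration.

```python
def score_intervals(behavior_by_second: list[bool], interval_len: int = 10):
    """Score each interval three ways; returns (partial, whole, mts) lists of booleans."""
    partial, whole, mts = [], [], []
    for start in range(0, len(behavior_by_second), interval_len):
        chunk = behavior_by_second[start:start + interval_len]
        partial.append(any(chunk))   # partial interval: occurs at any time in the interval
        whole.append(all(chunk))     # whole interval: occurs throughout the entire interval
        mts.append(chunk[-1])        # momentary time sampling: occurring as the interval ends
    return partial, whole, mts

# Hypothetical 30-second observation: behavior present during seconds 5-14 and 28-29.
session = [5 <= s <= 14 or 28 <= s <= 29 for s in range(30)]
partial, whole, mts = score_intervals(session)
print(partial)  # [True, True, True]
print(whole)    # [False, False, False]
print(mts)      # [True, False, True]
```

Note how the same behavior stream yields different pictures under each rule: partial-interval tends to overestimate, whole-interval tends to underestimate, and momentary time sampling depends on what is happening at the interval boundaries.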
[Sample interval recording form: 20 intervals scored for a target child and a composite (peer comparison) child, using the behavior codes A, X, TO, and OT.]
Measurement Issues
- Frequency/count: report as rate when possible; during acquisition, record both accuracy and error rates
- Duration: total duration and duration per occurrence
- Latency: time from stimulus to response (S to R)
- Interresponse time: time from one response to the next (R to R)
- Intensity
(A latency/interresponse time sketch follows below.)
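A small sketch of latency and interresponse time from timestamped data; the function names and the timestamps (in seconds) are made up for illustration, and IRT is simplified here as the time between successive response onsets.

```python
def latency(stimulus_time: float, response_time: float) -> float:
    """Latency: elapsed time from the stimulus (S) to the start of the response (R)."""
    return response_time - stimulus_time

def interresponse_times(response_onsets: list[float]) -> list[float]:
    """IRT: elapsed time between successive responses (R to R)."""
    return [later - earlier for earlier, later in zip(response_onsets, response_onsets[1:])]

# Hypothetical data: instruction given at t = 0 s, compliance begins at t = 4.5 s.
print(latency(stimulus_time=0.0, response_time=4.5))  # 4.5 s
# Hypothetical response onsets for a repeated behavior, in seconds.
print(interresponse_times([10.0, 25.0, 27.0, 60.0]))  # [15.0, 2.0, 33.0]
```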
Derived & Definitional Measures
- Percentages (e.g., behavior observed in 40% of intervals)
- Trials to criterion (e.g., number of trials until 10 consecutive correct trials; see the sketch below)
- Topography: form/shape of the response
- Magnitude: force/intensity of the response
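A hedged sketch of counting trials to criterion under the “10 consecutive correct trials” criterion mentioned above; the function name and the session data are hypothetical.

```python
def trials_to_criterion(trial_results: list[bool], consecutive_needed: int = 10) -> int | None:
    """Return the trial number on which the run of consecutive correct trials is completed."""
    streak = 0
    for trial_number, correct in enumerate(trial_results, start=1):
        streak = streak + 1 if correct else 0
        if streak == consecutive_needed:
            return trial_number
    return None  # criterion not yet met

# Hypothetical session: 4 correct, 1 error, then 12 correct in a row.
results = [True] * 4 + [False] + [True] * 12
print(trials_to_criterion(results))  # 15 (the 10-trial run is completed on trial 15)
```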
Psychometrics of Behavioral Measurement
- Validity
- Reliability
- Accuracy
How to Maximize Valid and Accurate Data Collection
- Measure behavior continuously
- Measure behavior at the same time/place across observations
- Measure with a solid “system”
- Train observers, then train them again later
- Minimize reactivity
- Assess accuracy of measurement against an answer key
- Assess reliability: compute interobserver agreement (IOA) on 25-33% of sessions, aiming for 80-90% agreement
IOA: Event Recording
- Total count IOA = (smaller count / larger count) x 100%
- Mean count-per-interval IOA = (sum of the per-interval IOA values) / (number of intervals), where each interval’s IOA = (smaller count / larger count) x 100%
- Exact count-per-interval IOA = (number of intervals with 100% agreement / number of intervals) x 100%
- Trial-by-trial IOA = (number of trials with agreement / total number of trials) x 100%
(A worked sketch follows below.)
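A minimal sketch of the event-recording IOA formulas above; the observer counts are invented, and treating an interval where both observers record zero as 100% agreement is an assumption of this sketch.

```python
def total_count_ioa(count_a: int, count_b: int) -> float:
    """Total count IOA: (smaller count / larger count) x 100%."""
    return min(count_a, count_b) / max(count_a, count_b) * 100

def mean_count_per_interval_ioa(a: list[int], b: list[int]) -> float:
    """Average of the per-interval (smaller / larger) x 100% values."""
    per_interval = [100.0 if x == y == 0 else min(x, y) / max(x, y) * 100
                    for x, y in zip(a, b)]  # both-zero intervals counted as perfect agreement
    return sum(per_interval) / len(per_interval)

def exact_count_per_interval_ioa(a: list[int], b: list[int]) -> float:
    """(# intervals with identical counts / # intervals) x 100%."""
    return sum(x == y for x, y in zip(a, b)) / len(a) * 100

def trial_by_trial_ioa(a: list[bool], b: list[bool]) -> float:
    """(# trials scored the same by both observers / total trials) x 100%."""
    return sum(x == y for x, y in zip(a, b)) / len(a) * 100

obs_a, obs_b = [2, 0, 3, 1], [2, 1, 2, 1]            # per-interval counts from two observers
print(total_count_ioa(sum(obs_a), sum(obs_b)))        # 100.0 (6 vs. 6 total)
print(mean_count_per_interval_ioa(obs_a, obs_b))      # ~66.7
print(exact_count_per_interval_ioa(obs_a, obs_b))     # 50.0
print(trial_by_trial_ioa([True, False, True], [True, True, True]))  # ~66.7
```

Notice how the same data can yield very different IOA values depending on the formula: total count IOA is the most forgiving, and exact count-per-interval IOA the most stringent.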
IOA: Timing
- Total duration IOA = (shorter total duration / longer total duration) x 100%
- Mean duration-per-occurrence IOA = (sum of the per-response duration IOA values) / (number of responses), where each response’s duration IOA = (shorter duration / longer duration) x 100%
(See the sketch below.)
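A rough sketch of the two timing IOA formulas; the durations (seconds recorded by two observers for the same three occurrences) are made up.

```python
def total_duration_ioa(durations_a: list[float], durations_b: list[float]) -> float:
    """Total duration IOA: (shorter total duration / longer total duration) x 100%."""
    total_a, total_b = sum(durations_a), sum(durations_b)
    return min(total_a, total_b) / max(total_a, total_b) * 100

def mean_duration_per_occurrence_ioa(durations_a: list[float],
                                     durations_b: list[float]) -> float:
    """Average the (shorter / longer) x 100% value computed for each timed response."""
    per_response = [min(x, y) / max(x, y) * 100 for x, y in zip(durations_a, durations_b)]
    return sum(per_response) / len(per_response)

obs_a = [30.0, 12.0, 45.0]  # observer A's duration (s) for each of three occurrences
obs_b = [25.0, 12.0, 48.0]  # observer B's durations for the same three occurrences
print(total_duration_ioa(obs_a, obs_b))                # ~97.7
print(mean_duration_per_occurrence_ioa(obs_a, obs_b))  # ~92.4
```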
IOA: Interval Recording
- Interval-by-interval IOA = (agreements / (agreements + disagreements)) x 100%
- Scored-interval IOA: same formula, calculated only over intervals in which at least one of the two observers scored an occurrence
- Unscored-interval IOA: same formula, calculated only over intervals in which at least one of the two observers scored that the behavior did not occur
(A sketch of all three variants follows below.)
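A brief sketch of the three interval-recording IOA variants; the two observers’ interval records are hypothetical, and the `subset` argument is an invention of this example.

```python
def interval_ioa(a: list[bool], b: list[bool], subset: str = "all") -> float:
    """Agreements / (agreements + disagreements) x 100%, over the chosen intervals.

    subset="scored":   only intervals where at least one observer scored an occurrence
    subset="unscored": only intervals where at least one observer scored a nonoccurrence
    """
    pairs = list(zip(a, b))
    if subset == "scored":
        pairs = [(x, y) for x, y in pairs if x or y]
    elif subset == "unscored":
        pairs = [(x, y) for x, y in pairs if not x or not y]
    agreements = sum(x == y for x, y in pairs)
    return agreements / len(pairs) * 100

obs_a = [True, False, False, True, False, False]
obs_b = [True, False, True,  True, False, False]
print(interval_ioa(obs_a, obs_b))              # ~83.3 (5 of 6 intervals agree)
print(interval_ioa(obs_a, obs_b, "scored"))    # ~66.7 (3 intervals had a scored occurrence)
print(interval_ioa(obs_a, obs_b, "unscored"))  # 75.0 (4 intervals had a scored nonoccurrence)
```

Scored-interval IOA is the more conservative choice for low-rate behavior, and unscored-interval IOA for high-rate behavior.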