1
Single-Case Research: Standards for Design and Analysis
Thomas R. Kratochwill
University of Wisconsin-Madison
2
The SCD Standards Panel
Tom Kratochwill, Chair
John Hitchcock
Rob Horner
Sam Odom
David Rindskopf
Will Shadish
Joel Levin, Consultant
3
Three Defining Features of a SCD
An individual "case" is the unit of intervention administration and data analysis. A case may be a single participant or a cluster of participants (e.g., a classroom or community).
Within the design, the case provides its own control for purposes of comparison. For example, the case's series of outcome measurements prior to the intervention is compared with the series of outcome measurements during (and after) the intervention.
The outcome variable is measured repeatedly within and across different conditions or levels of the independent variable. These different conditions are referred to as "phases" (e.g., baseline phase, intervention phase).
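To make the phase structure concrete, here is a minimal sketch in Python (hypothetical case and data, invented for this illustration rather than taken from the standards) of how one case's repeatedly measured outcome might be organized by phase so the baseline and intervention series can be compared within the same case.

```python
# A minimal sketch (hypothetical data): one case's outcome, measured
# repeatedly, organized by phase so that baseline and intervention
# series can be compared within the same case.
case = {
    "case_id": "Student_01",           # a single participant or a cluster
    "outcome": "problem behaviors per session",
    "phases": [
        {"label": "A1 (baseline)",     "data": [8, 9, 7, 8, 9]},
        {"label": "B1 (intervention)", "data": [5, 4, 3, 3, 2]},
        {"label": "A2 (baseline)",     "data": [7, 8, 8, 9, 8]},
        {"label": "B2 (intervention)", "data": [3, 2, 2, 1, 2]},
    ],
}

for phase in case["phases"]:
    mean = sum(phase["data"]) / len(phase["data"])
    print(f'{phase["label"]}: n={len(phase["data"])}, mean={mean:.1f}')
```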
4
ABAB Design
5
Alternating Intervention Design
7
SCD Standards are designed to address threats to Internal Validity:
Ambiguous Temporal Precedence
Selection
History
Maturation
Testing
Instrumentation
Additive and Interactive Effects of Threats
8
Types of questions a SCD might answer:
Overarching: Which intervention is effective for this case?
Is this intervention more effective than the current "baseline" or "treatment as usual" condition? (e.g., does Intervention A reduce problem behavior for this case?)
Does adding B to Intervention A further reduce problem behavior for this case?
Is Intervention B or Intervention C more effective in reducing problem behavior for this case?
9
Design and Evidence Standards Structure
Evaluate the Design:
  Meets Evidence Standards
  Meets Evidence Standards with Reservations
  Does Not Meet Evidence Standards
Conduct Visual Analysis for Each Outcome Variable:
  Strong Evidence
  Moderate Evidence
  No Evidence
Effect-Size Estimation
10
Criteria for Single-Case Designs that Meet Evidence Standards
The independent variable must be systematically manipulated
The outcome variable must be measured systematically
The study must include at least three attempts to demonstrate an intervention effect (replication)
Each phase should typically include a minimum of five data points
11
Independent Variable Must Be Systematically Manipulated
The researcher determines when and how the independent variable conditions change.
12
The Outcome Variable Must Be Measured Systematically
Measurement occurs over time
Inter-observer agreement is reported
Inter-observer agreement must be assessed on each outcome variable in every phase, and agreement should be measured for at least 20% of the sessions, distributed across all conditions of the study
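As an illustration of this requirement, the following sketch (hypothetical session records; the session numbers and interval data are invented for the example) computes simple point-by-point percent agreement between two observers and checks whether agreement data were collected in at least 20% of sessions.

```python
# Sketch (hypothetical data): point-by-point percent agreement between two
# observers, plus a check that IOA was collected in at least 20% of sessions.

def percent_agreement(obs1, obs2):
    """Percent of intervals on which the two observers agree."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreements / len(obs1)

# Interval-by-interval records for sessions in which a second observer was present.
ioa_sessions = {
    3:  ([1, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 1]),
    7:  ([0, 0, 1, 1, 1, 0], [0, 1, 1, 1, 1, 0]),
    12: ([1, 1, 1, 0, 0, 0], [1, 1, 1, 0, 0, 0]),
}
total_sessions = 14

for session, (primary, secondary) in ioa_sessions.items():
    print(f"Session {session}: {percent_agreement(primary, secondary):.0f}% agreement")

coverage = 100.0 * len(ioa_sessions) / total_sessions
print(f"IOA collected in {coverage:.0f}% of sessions "
      f"({'meets' if coverage >= 20 else 'does not meet'} the 20% guideline)")
```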
13
The Study Must Include at Least Three Attempts to Demonstrate an Intervention Effect
Designs that generally meet this standard include:
ABAB Design
Multiple Baseline Design
Alternating Intervention Design
Designs not meeting this standard include:
AB Design
ABA Design
BAB Design
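One way to see why ABAB qualifies while AB, ABA, and BAB do not is to count changes in condition, since each planned phase change is one attempt to demonstrate an effect. The sketch below is a simplification limited to reversal-type designs; multiple baseline designs instead meet the standard through staggered replication across cases, behaviors, or settings.

```python
# Sketch: count opportunities to demonstrate an effect in a reversal-type
# design by counting changes in condition between adjacent phases.
# (Simplification: multiple baseline designs meet the standard through
# replication across cases/behaviors/settings rather than phase reversals.)

def effect_demonstrations(phase_sequence):
    return sum(1 for prev, nxt in zip(phase_sequence, phase_sequence[1:]) if prev != nxt)

for design in ["AB", "ABA", "BAB", "ABAB"]:
    n = effect_demonstrations(list(design))
    verdict = "meets the three-demonstration criterion" if n >= 3 else "does not meet it"
    print(f"{design}: {n} phase change(s) -> {verdict}")
```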
14
Importance of Replication
[Figure: ABAB Design example. Source: Horner & Spaulding, in press]
15
Multiple Baseline Design
[Figure. Source: Horner & Spaulding, in press]
16
Each Phase Should Typically Include a Minimum of Five Data Points
Exception: if an ABAB or Multiple Baseline Design study has only three or four data points in any one phase used to demonstrate an effect, the study may Meet Evidence Standards with Reservations.
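The sketch below translates the phase-length criterion and the exception above into a simple classification based on the study's shortest phase. It is an approximation of this one criterion only, and it deliberately ignores the further exceptions listed on the next slide.

```python
# Sketch (approximation of the phase-length criterion only): classify a study
# by the number of data points in its shortest phase. Does not cover the
# exceptions for alternating treatment, randomized, or brief functional
# assessment designs, or the other evidence criteria.

def phase_length_rating(phase_lengths):
    shortest = min(phase_lengths)
    if shortest >= 5:
        return "Meets Evidence Standards (phase-length criterion)"
    if shortest >= 3:
        return "Meets Evidence Standards with Reservations"
    return "Does Not Meet Evidence Standards"

print(phase_length_rating([5, 6, 5, 7]))   # ABAB with >= 5 points per phase
print(phase_length_rating([5, 4, 5, 5]))   # one phase with only 4 points
print(phase_length_rating([2, 6, 5, 6]))   # a phase with fewer than 3 points
```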
17
Further Exceptions to the Five-Data-Point Criterion
Alternating Treatment Design
Randomized Designs
Brief Functional Assessment
18
Visual Analysis of Single-Case Designs
Evidence Standards Met Through Visual Analysis of Single-Case Research Data Displays
WWC Reviewers Trained in Visual Analysis of Data in Single-Case Design
19
Single-Case Design Visual Analysis: Training Goals
Define the Six Variables used in visual analysis, and build fluency in applying those variables with attention to both main and interaction effects
Provide a Four-Step Framework for analysis of single-case designs
  Visual analysis
  Statistical analysis
Apply visual analysis to ABAB, Multiple Baseline, and Alternating Treatment designs
20
Documenting Experimental Control
◦ Three demonstrations of an "effect" at three different points in time. A "basic effect" is a change in the dependent variable when the independent variable is actively manipulated.
◦ To assess an "effect," visual analysis includes simultaneous assessment of: Level, Trend, Variability, Immediacy of Effect, Overlap across Adjacent Phases, and Consistency of Data Pattern in Similar Phases (Parsonson & Baer, 1978, 1992; Kratochwill & Levin, 1992).
21
Interpreting experimental control always involves assessment of data from the whole study (all phases), not just assessment of two adjacent phases.
◦ Assessment of a "basic effect" is done with adjacent phases.
◦ Assessment of experimental control, however, requires evaluation of all data in all phases.
22
Four Steps in Analysis
Do baseline data document a predictable pattern?
Do data within each phase allow documentation of a predictable pattern?
Do data between phases document basic effects?
Do data across phases document experimental control?

Six Variables for Consideration
Level
Trend
Variability
Overlap
Immediacy of effect
Consistency across similar phases
23
Level
Trend
Variability
Overlap
Immediacy of Effect
Consistency across similar phases
Stability in non-intervened series when effect demonstrated in one series
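For illustration, the sketch below (hypothetical data; simple summary statistics standing in for the judgments a trained visual analyst makes) computes rough numerical counterparts to level, trend, variability, overlap, and immediacy for a pair of adjacent phases. Consistency across similar phases and stability in non-intervened series require comparing additional phases or series and are not computed here.

```python
# Sketch (hypothetical data): rough numerical counterparts to five of the
# visual-analysis features for a pair of adjacent phases. These summaries
# support, but do not replace, trained visual analysis.
import statistics

def trend_slope(y):
    """Ordinary least-squares slope of the series against session number."""
    x = range(len(y))
    x_bar, y_bar = statistics.mean(x), statistics.mean(y)
    num = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    den = sum((xi - x_bar) ** 2 for xi in x)
    return num / den

def phase_summary(y):
    return {"level (mean)": statistics.mean(y),
            "trend (slope)": trend_slope(y),
            "variability (SD)": statistics.stdev(y)}

baseline     = [8, 9, 7, 8, 9, 8]
intervention = [5, 4, 3, 3, 2, 2]

print("Baseline:    ", phase_summary(baseline))
print("Intervention:", phase_summary(intervention))

# Overlap: share of intervention points falling inside the baseline range.
lo, hi = min(baseline), max(baseline)
overlap = sum(lo <= y <= hi for y in intervention) / len(intervention)
print(f"Overlap with baseline range: {overlap:.0%}")

# Immediacy: compare the last three baseline points with the first three
# intervention points.
immediacy = statistics.mean(baseline[-3:]) - statistics.mean(intervention[:3])
print(f"Immediacy (mean of last 3 baseline - first 3 intervention): {immediacy:.1f}")
```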
24
Magnitude of separation: the greater the difference between two conditions, the larger the demonstration of a functional relation.
Consistency of separation: the greater the consistency of separation between two conditions (no overlap), the larger the demonstration of a functional relation.
Number of data points used to establish separation: the more points documenting separation, the larger the demonstration of a functional relation.
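One simple way to quantify separation between two conditions (a sketch, not a statistic prescribed by the standards) is the proportion of all between-condition pairs of data points in which the condition of interest falls on the therapeutic side of the comparison condition; 100% corresponds to complete separation with no overlap.

```python
# Sketch (hypothetical data): quantify separation between two alternating
# conditions as the proportion of all between-condition pairs in which the
# value under Condition B is lower than under Condition A (for a behavior
# we want to reduce). 100% means complete separation (no overlap).
from itertools import product

condition_a = [9, 8, 9, 7, 8]   # e.g., baseline or comparison intervention
condition_b = [4, 5, 3, 4, 6]   # e.g., the intervention of interest

pairs = list(product(condition_a, condition_b))
separation = sum(b < a for a, b in pairs) / len(pairs)
print(f"B lower than A in {separation:.0%} of {len(pairs)} pairwise comparisons")
```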
25
The Tradition of Applied Behavior Analysis
Lack of Consensus Surrounding the Statistical Analysis of Single-Case Research Design
Use of Visual Analysis in Single-Case Design in Practice Settings
26
Structured Training in Visual Analysis (e.g., compare visual analysis of novices to experts)
Use a Visual Analysis Protocol that Includes a Component Analysis and Judgmental Aids (e.g., Tawney & Gast, 1984)
Use Visual Analysis Criteria (e.g., Dual Criterion Method and Conservative Dual Criterion Method; Fisher, Kelley, & Lomas, 2003; Swoboda, Kratochwill, & Levin, 2009)
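To show the flavor of the dual-criterion approach, the sketch below (hypothetical data; a simplified rendering, not the full published procedure) fits the baseline mean line and baseline trend line, projects both into the intervention phase, and counts intervention points falling on the therapeutic side of both lines. The published methods compare that count against a binomial criterion, and the conservative variant first shifts both lines by 0.25 baseline standard deviations.

```python
# Sketch (hypothetical data): a simplified rendering of the dual-criterion
# idea. Fit the baseline mean line and baseline trend line, project both into
# the intervention phase, and count intervention points below both lines
# (below, because the goal here is behavior reduction). The published methods
# compare this count against a binomial criterion; the conservative variant
# first shifts both lines by 0.25 baseline standard deviations.
import statistics

baseline     = [8, 9, 7, 8, 9, 8]
intervention = [5, 4, 3, 4, 2, 3]

# Baseline mean line.
mean_line = statistics.mean(baseline)

# Baseline trend line via ordinary least squares, projected into intervention sessions.
x = list(range(len(baseline)))
x_bar, y_bar = statistics.mean(x), statistics.mean(baseline)
slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, baseline))
         / sum((xi - x_bar) ** 2 for xi in x))
intercept = y_bar - slope * x_bar

count = 0
for i, y in enumerate(intervention, start=len(baseline)):
    trend_value = intercept + slope * i
    if y < mean_line and y < trend_value:   # therapeutic side of both lines
        count += 1

print(f"{count} of {len(intervention)} intervention points fall below both criterion lines")
```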
27
Use of Randomization in Design [Response-Guided versus Non-Response-Guided Experimentation (e.g., Ferron & Jones, 2006; Todman & Dugard, 1999, 2001)]
Blind Visual Analysis Procedures from a "Data Analyst" (Ferron & Jones, 2006)
Use Both Visual and Statistical Analysis (e.g., Borckardt, Nash, Murphy, Moore, Shaw, & O'Neil, 2008; Brossart, Parker, Olson, & Mahadevan, 2006; Ferron & Jones, 2006; among others)
28
Randomization Applied to Single-Case Design Structure
Statistical Analysis of Single-Case Design to Determine Statistical Significance (e.g., randomization tests, time-series analysis, HLM)
Single-Case Design Effect-Size Determination
Meta-Analysis (Single-Case Design Studies or Combined with Group Design Research)
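As one concrete example of the statistical options above, the sketch below (hypothetical data and start points) runs a simple randomization test for an AB series in which the intervention start point was randomly selected from a set of permissible start points: the observed baseline-minus-intervention mean difference is compared with the differences produced by every permissible start point.

```python
# Sketch (hypothetical data): a simple randomization test for an AB series in
# which the intervention start point was randomly selected from a set of
# permissible start points. The test statistic is the baseline-minus-
# intervention mean difference; the p-value is the proportion of permissible
# start points whose statistic is at least as extreme as the observed one.
import statistics

series = [8, 9, 7, 8, 9, 8, 5, 4, 3, 4, 2, 3]   # full outcome series
actual_start = 6                                # intervention actually began at session index 6
permissible_starts = range(4, 9)                # start points allowed by the design

def mean_difference(start):
    baseline, intervention = series[:start], series[start:]
    return statistics.mean(baseline) - statistics.mean(intervention)

observed = mean_difference(actual_start)
all_stats = [mean_difference(s) for s in permissible_starts]
p_value = sum(stat >= observed for stat in all_stats) / len(all_stats)

print(f"Observed mean difference: {observed:.2f}")
print(f"Randomization-test p-value: {p_value:.2f}")
```

Note that with only a handful of permissible start points the smallest attainable p-value is limited, which is one reason randomization schemes and design structure are considered together.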
29
Special appreciation to Rob Horner for his contributions to the visual analysis training slides.
30
Thomas R. Kratochwill, PhD
Educational and Psychological Training Center
1025 West Johnson Street
University of Wisconsin-Madison
Madison, Wisconsin 53706
E-Mail: tomkat@education.wisc.edu