
DAY 1 Appraisal Guidelines/Standards for Single-Case Design Research Tom Kratochwill

Motivation/Rationale for "Standards" for Single-Case Intervention Research: Professional Agreement on the Criteria for Design and Analysis of Single-Case Research: Publication criteria for peer reviewed journals. Design, Analysis, Interpretation of research findings. Grant review criteria (e.g., IES, NSF, NIMH/NIH). RFP stipulations, grant reviewer criteria;

Motivation/Standards (Continued)
Conduct of literature reviews (e.g., Kiuhara et al., in press):
- Review existing studies to draw conclusions about intervention research;
- Draw conclusions about shortcomings of studies on methodological and statistical grounds and offer recommendations for improved research;
- Make recommendations about what type of research needs to be conducted in a particular area.

Motivation/Standards (Continued)
Design studies that meet various appraisal guidelines:
- Address the gold standard of methodology as recommended in the appraisal guideline;
- Address the gold standard of data analysis as recommended in the appraisal guideline;
- Address limitations of prior research methodology;
- Plan for practical and logistical features of conducting the research (e.g., how many replications, participants, settings).

Motivation/Standards (Continued)
Better standards (materials) for training in single-case methods:
- Visual analysis
- Statistical analysis
Development of effect-size and meta-analysis technology:
- Meta-analysis procedures that will allow single-case research findings to reach broader audiences
Consensus on what is required to identify "evidence-based practices":
- Professional agreement on what works and what does not work

Brief Overview of Appraisal Guidelines

Single-case researchers have a number of conceptual and methodological standards to guide their synthesis work. These standards, alternatively referred to as "guidelines," have been developed by professional organizations and authors interested primarily in providing guidance for reviewing the literature in a particular content domain. The development of these standards has also given researchers designing their own intervention studies a protocol capable of meeting or exceeding the proposed criteria.

Examples of Professional Groups with SCD Standards or Guidelines:
- National Reading Panel
- American Psychological Association (APA) Division 12/53 (Clinical/Clinical Child)
- American Psychological Association (APA) Division 16 (School)
- Horner et al. (2005), Exceptional Children
- What Works Clearinghouse (WWC)
- Consolidated Standards of Reporting Trials (CONSORT) Guidelines for N-of-1 Trials (the CONSORT Extension for N-of-1 Trials [CENT])
- Single-Case Reporting Guideline in Behavioral Interventions (SCRIBE)

Reviews of Appraisal Guidelines
Wendt and Miller (2012) identified seven "quality appraisal tools" and compared these standards to the single-case research criteria advanced by Horner et al. (2005).
Wendt, O., & Miller, B. (2012). Quality appraisal of single-subject experimental designs: An overview and comparison of different appraisal tools. Education and Treatment of Children, 35, 235–268.

Reviews of Appraisal Guidelines
Smith (2012) reviewed research design and various methodological characteristics of single-case designs in peer-reviewed journals, primarily from the psychological literature (over the years 2000-2010). Based on his review, six standards for appraisal of the literature were identified (some of which overlap with the Wendt and Miller review).
Smith, J. D. (2012). Single-case experimental designs: A systematic review of published research and recommendations for researchers and reviewers. Psychological Methods, 17, 510–550.

Reviews of Appraisal Guidelines
Maggin, Briesch, Chafouleas, Ferguson, and Clark (2014) reviewed "rubrics" for identifying empirically supported practices with single-case research, including the WWC Pilot Standards.*
Maggin, D. M., Briesch, A. M., Chafouleas, S. M., Ferguson, T. D., & Clark, C. (2014). A comparison of rubrics for identifying empirically supported practices with single-case research. Journal of Behavioral Education, 23, 287–311.
*Note: see a response to the Maggin et al. (2014) review by Hitchcock, Kratochwill, and Chezan (2015) in the Journal of Behavioral Education.

Brief History of the WWC Pilot Standards
- Initial developments occurred with the formation of a WWC Single-Case Standards Project in 2005.
- The WWC Single-Case Design Panel formed in 2008 and produced a White Paper on the Pilot Standards for Single-Case Intervention Research Design in 2010.
- The Panel produced an article on the WWC Pilot Standards in Remedial and Special Education in 2013.

Context: WWC White Paper
Single-Case Intervention Research Design Standards Panel:
- Thomas R. Kratochwill (Chair), University of Wisconsin-Madison
- John H. Hitchcock, Ohio University
- Robert H. Horner, University of Oregon
- Joel R. Levin, University of Arizona
- Samuel M. Odom, University of North Carolina at Chapel Hill
- David M. Rindskopf, City University of New York
- William R. Shadish, University of California, Merced

Single-Case Research Applications and the WWC Pilot Standards
What Works Clearinghouse Pilot Standards:
- Design Standards
- Evidence Criteria
- Social Validity

Proposing New Standards Raises Some Issues in the Single-Case Design Literature
- Are the Standards too stringent for reviews of the single-case design research literature?
- Are there single-case design features missing from the Standards that should be part of the review process?
- How do the Standards "stack up" against standards for other design classes (e.g., regression discontinuity, randomized controlled trials)?

Research Currently Meeting WWC Design Standards
Sullivan and Shadish (2011) assessed the WWC Pilot Standards related to implementation of the intervention, acceptable levels of observer agreement/reliability, opportunities to demonstrate a treatment effect, and acceptable numbers of data points in a phase. Among studies published in 21 journals in 2008, nearly 45% met the strictest WWC design standards and another 30% met them with some reservations. From this sample, then, roughly 75% of the single-case intervention research published in major journals during that year would meet (or meet with reservations) the WWC design standards.

Things that Could be Added to the Standards:
- Development of standards for complex single-case designs, including randomized designs
- Clarification of ratings for complex single-case designs
- Clarification of ratings for integrity of interventions
- Addition of validity issues for single-case designs that involve clusters
- Expansion of social validity criteria

Things that Could be Added to the Standards (Continued):
- Addition of meta-analysis criteria for single-case designs (effect-size measures)
- Additional criteria for visual analysis, including training in visual analysis
- Criteria for various methods of statistical analysis of data

DAY 1 Characteristics of Scientifically Credible Single-Case Intervention Studies Based on the WWC Pilot Standards Tom Kratochwill

Context
- Single-case research methods were developed and used within Applied Behavior Analysis.
- Traditionally, the RCT has been featured as the "gold standard" for intervention research.
- Considerable investment by the Institute of Education Sciences (IES):
  - Funding of grants focused on single-case methods
  - Formal policy that single-case studies are able to document experimental control
  - Inclusion of single-case options in IES RFPs
  - What Works Clearinghouse Pilot Standards White Paper
  - Training of IES/WWC reviewers
  - Single-Case Design Institutes to educate researchers

Context and Other Developments
- Other federal agencies, such as the National Science Foundation, have considered proposals that involve single-case research design.
- Standards with an international focus on single-case design research have been developed:
  - Consolidated Standards of Reporting Trials (CONSORT) Guidelines for N-of-1 Trials (the CONSORT Extension for N-of-1 Trials [CENT])
  - Single-Case Reporting Guideline in Behavioral Interventions (SCRIBE)

Some Defining Features of Single-Case Intervention Research (the top 10)
- Experimental control: the design allows documentation of causal (e.g., functional) relations between independent and dependent variables.
- Individual as the unit of analysis: the individual provides his or her own control; a "group" or cluster can also be treated as a participant, with focus on the group as a single unit.
- Independent variable is actively manipulated.
- Repeated measurement of the dependent variable: measurement at multiple points in time.
- Inter-observer agreement to assess the "reliability" of the dependent variable.
- Baseline: to document the social problem and to control for confounding variables.

Defining Features of Single-Case Research
- Design controls for threats to internal validity: opportunity for replication of the basic effect at three different points in time.
- Visual analysis: documents the basic effect at three different points in time.
- Statistical analysis: options are emerging and are presented during the Institute.
- Replication: within a study to document experimental control; across studies to document external validity; across studies, researchers, contexts, and participants to document evidence-based practices.
- Experimental flexibility: designs may be modified or changed within a study (sometimes called response-guided research).

Basic Design Examples
- Reversal/Withdrawal Designs
- Multiple Baseline Designs
- Alternating Treatment Designs

Establishing “Design Standards” as Applied to Basic Single-Case Designs: A Brief Overview ABAB Designs Multiple Baseline Designs Alternating Treatment Designs

ABAB Design Description
Simple phase change designs [e.g., ABAB; BCBC]. In the literature, ABAB designs are sometimes referred to as withdrawal designs, intrasubject replication designs, within-series designs, or reversal designs.

ABAB Reversal/Withdrawal Designs
In these designs, estimates of level, trend, and variability within a data series are first assessed under similar conditions; the manipulated variable is then introduced, and concomitant changes in the outcome measure(s) are assessed in the level, trend, and variability between phases of the series, with special attention to the degree of overlap, immediacy of effect, and similarity of data patterns across similar phases (e.g., all baseline phases).

ABAB Reversal/Withdrawal Designs: Some Design Considerations
- Behavior must be reversible in the ABAB... series (e.g., return to baseline).
- There may be ethical issues involved in reversing behavior back to baseline (A2).
- The study may become complex when multiple conditions need to be compared (e.g., ABABACACAC).
- There may be order effects in the design.

Multiple Baseline Design Description
The design can be applied across units (participants), across behaviors, or across situations.

Multiple Baseline Designs
In these designs, multiple AB data series are compared, and introduction of the intervention is staggered across time. Comparisons are made both between and within a data series. Repetitions of a single simple phase change are scheduled, each in a new series, with both the length and timing of the phase change differing across replications.
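The "staggered across time" requirement can be stated as a simple check: no two series should begin the intervention in the same session. A minimal sketch, with a hypothetical function name not drawn from the source:

```python
def is_staggered(start_sessions):
    """start_sessions: session at which the intervention begins in each series."""
    return len(set(start_sessions)) == len(start_sessions)

print(is_staggered([6, 9, 12]))  # True  -> introduction staggered across series
print(is_staggered([6, 6, 6]))   # False -> simultaneous introduction
```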

Multiple Baseline Design: Some Design Considerations
- The design is generally limited to demonstrating the effect of one independent variable on some outcome.
- The design depends on the "independence" of the multiple baselines (across units, settings, and behaviors).
- There can be practical as well as ethical issues in keeping individuals on baseline for long periods of time (as in the last series).

Alternating Treatment Designs
In the behavior analysis literature, alternating treatment designs are sometimes referred to as part of a class of multi-element designs.

Alternating Treatment Design Description
In these designs, estimates of level, trend, and variability in a data series are assessed on measures within specific conditions and across time. Changes/differences in the outcome measure(s) are assessed by comparing the series associated with different conditions.

Alternating Treatment Design: Some Design Considerations
- Behavior must reverse during alternation of the interventions.
- There is the possibility of interaction/carryover effects as conditions are alternated.
- Comparing more than three treatments may be very challenging.

WWC Design Standards: Evaluating the Quality of Single-Case Designs

[Flowchart: WWC review process]
Evaluate the Design: Meets Design Standards / Meets Design Standards with Reservations / Does Not Meet Design Standards
Evaluate the Evidence: Strong Evidence / Moderate Evidence / No Evidence
Then: Effect-Size Estimation and Social Validity Assessment

WWC Single-Case Pilot Design Standards
Four standards for design evaluation:
1. Systematic manipulation of the independent variable
2. Inter-assessor agreement
3. Three attempts to demonstrate an effect at three different points in time
4. Minimum number of phases and data points per phase, for phases used to demonstrate an effect
Standard 3 differs by design type:
- Reversal/Withdrawal designs (ABAB and variations)
- Alternating Treatment designs
- Multiple Baseline designs

Standard 1: Systematic Manipulation of the Independent Variable
The researcher must determine when and how the independent variable conditions change. If this standard is not met, the study Does Not Meet Design Standards.

Examples of Manipulation that Is Not Systematic
- A teacher begins to implement an intervention prematurely because of parent pressure.
- A researcher looks retrospectively at data collected during an intervention program.

Standard 2: Inter-Assessor Agreement
Each outcome variable for each case must be measured systematically by more than one assessor. The researcher needs to collect inter-assessor agreement (IOA):
- in each phase, and
- on at least 20% of the data points in each condition (i.e., baseline, intervention).
The rate of agreement must meet minimum thresholds (e.g., 80% agreement or Cohen's kappa of 0.60). If no outcomes meet these criteria, the study Does Not Meet Design Standards.
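Both agreement metrics are easy to compute once two observers' records are lined up session by session. Below is a minimal sketch, assuming interval recording with categorical codes; the function names are hypothetical and this is not an official WWC tool:

```python
from collections import Counter

def percent_agreement(obs_a, obs_b):
    """Proportion of intervals on which the two observers agree."""
    return sum(a == b for a, b in zip(obs_a, obs_b)) / len(obs_a)

def cohens_kappa(obs_a, obs_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(obs_a)
    p_o = percent_agreement(obs_a, obs_b)
    counts_a, counts_b = Counter(obs_a), Counter(obs_b)
    # Expected chance agreement from each observer's marginal rates.
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
    return (p_o - p_e) / (1 - p_e)

# Two observers scoring occurrence (1) / nonoccurrence (0) in 10 intervals:
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
print(percent_agreement(a, b))  # 0.8   -> reaches the 80% threshold
print(cohens_kappa(a, b))       # ~0.58 -> just below the 0.60 kappa threshold
```

The example also shows why both thresholds matter: 80% raw agreement can still fall short of kappa 0.60 when chance agreement is high.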

In Current WWC Reviews
Author queries occur when a study provides insufficient IOA information, and whether the standard is met is determined from the response:
- If the response to the query indicates that the study does not meet standards, it is treated as such.
- If there is no response, the standard is assumed to be met if:
  - the minimum level of agreement is reached;
  - the study assesses IOA at least once in each phase; and
  - the study assesses IOA on at least 20% of all sessions.
- A footnote is added to the WWC product indicating that IOA was not fully determined.

Standard 3: Three Attempts to Demonstrate an Intervention Effect at Three Different Points in Time
"Attempts" are about phase transitions (see the sketch after this slide). Designs that could meet this standard include:
- ABAB design
- Multiple baseline design with three baseline phases and staggered introduction of the intervention
- Alternating treatment design
(Other designs will be discussed during the Institute.)
Designs not meeting this standard include:
- AB design
- ABA design
- Multiple baseline design with three baseline phases and the intervention introduced at the same time for each case
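A minimal sketch of the transition-counting logic for a single within-series design such as ABAB (in a multiple baseline design, the three attempts come from the staggered series instead). The function name is hypothetical, not part of the WWC Standards:

```python
def count_effect_attempts(phase_sequence):
    """Each change between adjacent, different phases is one attempt to
    demonstrate an effect; Standard 3 requires at least three."""
    return sum(prev != cur for prev, cur in zip(phase_sequence, phase_sequence[1:]))

print(count_effect_attempts("ABAB"))  # 3 -> can meet Standard 3
print(count_effect_attempts("ABA"))   # 2 -> cannot
print(count_effect_attempts("AB"))    # 1 -> cannot
```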

Standard 4: Minimum Number of Phases and Data Points per Phase (for Phases in Standard 3)
Meets Standards:
- Reversal/Withdrawal design: at least 4 phases, with at least 5 data points per phase
- Multiple Baseline design: at least 6 phases, with at least 5 data points per phase
- Alternating Treatment design: at most 2 data points per phase, with at least 5 data points per condition
Meets Standards with Reservations:
- Reversal/Withdrawal design: at least 4 phases, with at least 3 data points per phase
- Multiple Baseline design: at least 6 phases, with at least 3 data points per phase
- Alternating Treatment design: at least 4 data points per condition

Some Examples that "Meet", "Meet with Reservations," and "Does Not Meet Design Standards"

Design Evaluation
Meets Design Standards:
- IV manipulated directly
- IOA documented on at least 20% of data points in each phase
- Design allows the opportunity to assess the basic effect at three different points in time
- Five data points per phase (ATD: at most two per phase)
Meets Design Standards with Reservations:
- All of the above, except at least three data points per phase
Does Not Meet Design Standards:
- Designs falling short of the criteria above (see the sketch after this slide)
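The same thresholds can be written as a small decision rule. A hypothetical sketch for reversal/withdrawal designs only, using the Standard 4 numbers above (illustrative, not an official WWC tool):

```python
def rate_reversal_design(phase_lengths):
    """phase_lengths: data points in each phase, in order (A, B, A, B, ...)."""
    if len(phase_lengths) < 4:
        return "Does Not Meet Design Standards"
    if min(phase_lengths) >= 5:
        return "Meets Design Standards"
    if min(phase_lengths) >= 3:
        return "Meets Design Standards with Reservations"
    return "Does Not Meet Design Standards"

print(rate_reversal_design([5, 5, 5, 5]))  # Meets Design Standards
print(rate_reversal_design([5, 3, 5, 4]))  # Meets ... with Reservations
print(rate_reversal_design([5, 2, 5, 5]))  # Does Not Meet Design Standards
```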

Basic Effect versus Experimental Control
- Basic effect (comparing any two adjacent phases/conditions): a change in the pattern of responding after manipulation of the independent variable (level, trend, variability, overlap, immediacy of effect).
- Experimental control (all phases/conditions of a study): at least three demonstrations of the basic effect, each at a different point in time (adding an assessment of the similarity in the pattern of data in similar phases/conditions).
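Visual analysts judge a basic effect by inspecting level, trend, variability, overlap, and immediacy rather than by computing statistics, but simple numerical summaries can make those features concrete. An illustrative sketch with hypothetical names; the nonoverlap function implements the familiar percentage-of-nonoverlapping-data (PND) idea, assuming higher scores indicate improvement:

```python
def level(phase):
    """Mean of the data points in a phase."""
    return sum(phase) / len(phase)

def trend(phase):
    """Least-squares slope across sessions within a phase."""
    n = len(phase)
    xbar, ybar = (n - 1) / 2, level(phase)
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(phase))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def nonoverlap(baseline, intervention):
    """Share of intervention points above the highest baseline point (PND)."""
    ceiling = max(baseline)
    return sum(y > ceiling for y in intervention) / len(intervention)

A = [2, 3, 2, 4, 3]   # baseline phase
B = [6, 7, 8, 8, 9]   # intervention phase
print(level(B) - level(A))   # 4.8 -> change in level
print(trend(A), trend(B))    # 0.3, 0.7 -> within-phase trends
print(nonoverlap(A, B))      # 1.0 -> no overlap between phases
```

For behaviors targeted for reduction, the overlap comparison flips to points below the lowest baseline value.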

When Assessing the Design Standard
Does the design allow for the opportunity to assess experimental control?
1. Baseline
2. At least five data points per phase (3 with reservations)
3. Opportunity to document at least 3 basic effects, each at a different point in time

[Figure: ABAB graph with two Intervention X phases, annotated with the first, second, and third demonstrations of the basic effect]
1. Baseline
2. Each phase has at least 5 data points (3 with reservations)
3. The design allows for assessment of the "basic effect" at three different points in time

[Figure: graph with a single Intervention X phase. Does Not Meet Standard]

[Figure: graph with an Intervention X phase followed by an Intervention Y phase. Does Not Meet Standard]

[Figure: graph with two Intervention X phases. Does Not Meet Standard]

[Figure: graph with two Intervention X phases. Meets with Reservations]

[Figure: graph annotated with the first, second, and third demonstrations of the basic effect]

[Figure: Does Not Meet Standard]

[Figure: Meets Standard]

Alternating Treatment (Multi-element) Designs
Research question: Is there a DIFFERENCE between the effects of two or more treatment conditions on the dependent variable?
Methodological issue: How many data points are needed to show a functional relation? (See the sketch after this slide.)
- Five data points per condition (meets)
- Four data points per condition (meets with reservations)
- The lower the separation (i.e., the higher the overlap) between conditions, the more data points are needed to document experimental control.
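A hypothetical checker for the alternating-treatment thresholds listed above: no more than two consecutive data points per condition, at least five points per condition to meet the standard, and four to meet with reservations. This is an illustrative sketch, not an official WWC rule set:

```python
from collections import Counter

def rate_atd(sessions):
    """sessions: condition label per session, e.g. "XYXYXYXYXY"."""
    longest_run, run = 1, 1
    for prev, cur in zip(sessions, sessions[1:]):
        run = run + 1 if cur == prev else 1
        longest_run = max(longest_run, run)
    if longest_run > 2:  # more than two points per phase before alternating
        return "Does Not Meet Design Standards"
    fewest = min(Counter(sessions).values())
    if fewest >= 5:
        return "Meets Design Standards"
    if fewest >= 4:
        return "Meets Design Standards with Reservations"
    return "Does Not Meet Design Standards"

print(rate_atd("XYXYXYXYXY"))  # 5 per condition -> Meets Design Standards
print(rate_atd("XYXYXYXY"))    # 4 per condition -> Meets with Reservations
```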

[Figure: alternating treatment data for Escape, Attention, Play, and Food conditions]

[Figure: alternating treatment data for Tangible, Escape, Control, and Attention conditions]

[Figure: alternating treatment data for Escape, Attention, Play, and Food conditions. Meets Standard with Reservations]

[Figure: alternating treatment data for Escape, Attention, Play, and Food conditions. Does Not Meet Standard]

Questions and Discussion