
1 DAY 1 Appraisal Guidelines/Standards for Single-Case Design Research Tom Kratochwill

2 Motivation/Rationale for "Standards" for Single-Case Intervention Research:
Professional Agreement on the Criteria for Design and Analysis of Single-Case Research:
- Publication criteria for peer-reviewed journals;
- Design, analysis, and interpretation of research findings;
- Grant review criteria (e.g., IES, NSF, NIMH/NIH): RFP stipulations, grant-reviewer criteria.

3 Motivation/Standards (Continued):
Conduct of Literature Reviews (e.g., Kiuhara et al., in press):
- Review existing studies to draw conclusions about intervention research;
- Draw conclusions about shortcomings of studies on methodological and statistical grounds and offer recommendations for improved research;
- Make recommendations about what type of research needs to be conducted in a particular area.

4 Motivation/Standards (Continued):
Design Studies that Meet Various Appraisal Guidelines:
- Address the gold standard of methodology as recommended in the appraisal guideline;
- Address the gold standard of data analysis as recommended in the appraisal guideline;
- Address limitations of prior research methodology;
- Plan for practical and logistical features of conducting the research (e.g., how many replications, participants, settings).

5 Motivation/Standards (Continued):
- Better standards (materials) for training in single-case methods: visual analysis; statistical analysis.
- Development of effect-size and meta-analysis technology: meta-analysis procedures that will allow single-case research findings to reach broader audiences.
- Consensus on what is required to identify “evidence-based practices”: professional agreement on what works and what does not work.

6 Brief Overview of Appraisal Guidelines

7 Single-case researchers have a number of conceptual and methodological standards to guide their synthesis work. These standards, alternatively referred to as “guidelines,” have been developed by a number of professional organizations and authors interested primarily in providing guidance for reviewing the literature in a particular content domain. The development of these standards has also provided researchers who are designing their own intervention studies with a protocol that is capable of meeting or exceeding the proposed standards.

8 Examples of Professional Groups with SCD Standards or Guidelines:
- National Reading Panel
- American Psychological Association (APA) Division 12/53 (Clinical/Clinical Child)
- American Psychological Association (APA) Division 16 (School)
- Horner et al. (2005), Exceptional Children
- What Works Clearinghouse (WWC)
- Consolidated Standards of Reporting Trials (CONSORT) Guidelines for N-of-1 Trials (the CONSORT Extension for N-of-1 Trials [CENT])
- Single-Case Reporting Guideline in Behavioral Interventions (SCRIBE)

9 Reviews of Appraisal Guidelines
Wendt and Miller (2012) identified seven “quality appraisal tools” and compared these standards to the single-case research criteria advanced by Horner et al. (2005). Wendt, O., & Miller, B. (2012). Quality appraisal of single-subject experimental designs: An overview and comparison of different appraisal tools. Education and Treatment of Children, 35, 235–268.

10 Reviews of Appraisal Guidelines
Smith (2012) reviewed research design and various methodological characteristics of single-case designs in peer-reviewed journals, primarily from the psychological literature. Based on his review, six standards for appraisal of the literature were identified (some of which overlap with the Wendt and Miller review). Smith, J. D. (2012). Single-case experimental designs: A systematic review of published research and recommendations for researchers and reviewers. Psychological Methods, 17.

11 Reviews of Appraisal Guidelines
Maggin, Briesch, Chafouleas, Ferguson, and Clark (2013) reviewed “rubrics” for identifying empirically supported practices with single-case research, including the WWC Pilot Standards.* Maggin, D. M., Briesch, A. M., Chafouleas, S. M., Ferguson, T. D., & Clark, C. (2014). A comparison of rubrics for identifying empirically supported practices with single-case research. Journal of Behavioral Education, 23. *(Note: see a response to the Maggin et al. review by Hitchcock, Kratochwill, and Chasen (2015) in the Journal of Behavioral Education.)

12 Brief History of the WWC Pilot Standards
Some initial developments occurred with the formation of a WWC Single-Case Standards Project in 2005. The WWC Single-Case Design Panel formed in 2008 and produced a White Paper on the Pilot Standards for Single-Case Intervention Research Design in 2010. The Panel produced an article on the WWC Pilot Standards in Remedial and Special Education in 2013.

13 Context: WWC White Paper
Single-Case Intervention Research Design Standards Panel:
- Thomas R. Kratochwill (Chair), University of Wisconsin-Madison
- John H. Hitchcock, Ohio University
- Robert H. Horner, University of Oregon
- Joel R. Levin, University of Arizona
- Samuel M. Odom, University of North Carolina at Chapel Hill
- David M. Rindskopf, City University of New York
- William R. Shadish, University of California, Merced

14 Single-Case Research Applications and the WWC Pilot Standards
What Works Clearinghouse Pilot Standards:
- Design Standards
- Evidence Criteria
- Social Validity

15 Proposing New Standards Raises Some Issues in the Single-Case Design Literature
- Are the Standards too stringent for reviews of the single-case design research literature?
- Are there missing single-case design features in the Standards that should be part of the review process?
- How do the Standards “stack up” against standards for other design classes (e.g., regression discontinuity, randomized controlled trials)?

16 Research Currently Meeting WWC Design Standards
Sullivan and Shadish (2011) assessed the WWC Pilot Standards criteria related to implementation of the intervention, acceptable levels of observer agreement/reliability, opportunities to demonstrate a treatment effect, and acceptable numbers of data points in a phase. Among studies published in 21 journals in 2008, nearly 45% of the research met the strictest WWC design standards and another 30% met them with some reservations. From this sample, then, around 75% of the single-case intervention research published that year in major journals would meet (or meet with reservations) the WWC design standards.

17 Things that Could be Added to the Standards:
- Development of standards for complex single-case designs, including randomized designs
- Clarification of ratings for complex single-case designs
- Clarification of ratings for integrity of interventions
- Addition of validity issues for single-case designs that involve clusters
- Expansion of social validity criteria

18 Things that Could be Added to the Standards (Continued):
- Addition of meta-analysis criteria for single-case design (effect-size measures)
- Additional criteria for visual analysis, including training in visual analysis
- Criteria for various methods of statistical analysis of data

19 DAY 1 Characteristics of Scientifically Credible Single-Case Intervention Studies Based on the WWC Pilot Standards Tom Kratochwill

20 Context
- Single-case research methods developed and used within Applied Behavior Analysis.
- Traditionally, the RCT has been featured as the “gold standard” for intervention research.
- Considerable investment by the Institute of Education Sciences (IES):
  - Funding of grants focused on single-case methods
  - Formal policy that single-case studies are able to document experimental control
  - Inclusion of single-case options in IES RFPs
  - What Works Clearinghouse Pilot Standards White Paper
  - Training of IES/WWC reviewers
  - Single-Case Design Institutes to educate researchers

21 Context and Other Developments
Other federal agencies, such as the National Science Foundation, have considered proposals that involve single-case research design. Standards with an international focus on single-case design research have also been developed:
- Consolidated Standards of Reporting Trials (CONSORT) Guidelines for N-of-1 Trials (the CONSORT Extension for N-of-1 Trials [CENT])
- Single-Case Reporting Guideline in Behavioral Interventions (SCRIBE)

22 Some Defining Features of Single-Case Intervention Research (the top 10)
- Experimental control: the design allows documentation of causal (e.g., functional) relations between independent and dependent variables.
- Individual as unit of analysis: the individual provides his or her own control; a “group” or cluster can also be treated as a participant, with focus on the group as a single unit.
- Independent variable is actively manipulated.
- Repeated measurement of the dependent variable: measurement at multiple points in time.
- Inter-observer agreement to assess “reliability” of the dependent variable.
- Baseline: to document the social problem and control for confounding variables.

23 Defining Features of Single-Case Research
- Design controls for threats to internal validity: opportunity for replication of the basic effect at three different points in time.
- Visual analysis: documents the basic effect at three different points in time.
- Statistical analysis: options are emerging and are presented during the Institute.
- Replication: within a study to document experimental control; across studies to document external validity; across studies, researchers, contexts, and participants to document evidence-based practices.
- Experimental flexibility: designs may be modified or changed within a study (sometimes called response-guided research).

24 Basic Design Examples
- Reversal/Withdrawal Designs
- Multiple Baseline Designs
- Alternating Treatment Designs

25 Establishing “Design Standards” as Applied to Basic Single-Case Designs: A Brief Overview
- ABAB Designs
- Multiple Baseline Designs
- Alternating Treatment Designs

26 ABAB Design Description
Simple phase change designs [e.g., ABAB; BCBC design]. (In the literature, ABAB designs are sometimes referred to as withdrawal designs, intrasubject replication designs, within-series designs, or reversal designs)

27 ABAB Reversal/Withdrawal Designs
In these designs, estimates of level, trend, and variability within a data series are assessed under similar conditions; the manipulated variable is introduced and concomitant changes in the outcome measure(s) are assessed in the level, trend, and variability between phases of the series, with special attention to the degree of overlap, immediacy of effect, and similarity of data patterns across similar phases (e.g., all baseline phases).

28 [Figure: ABAB reversal/withdrawal design example]

29 ABAB Reversal/Withdrawal Designs
Some Design Considerations:
- Behavior must be reversible in the ABAB… series (e.g., return to baseline).
- There may be ethical issues involved in reversing behavior back to baseline (A2).
- The study may become complex when multiple conditions need to be compared (e.g., ABABACACAC).
- There may be order effects in the design.

30 Multiple Baseline Design Description
Multiple baseline design: the design can be applied across units (participants), across behaviors, or across situations.

31 Multiple Baseline Designs
In these designs, multiple AB data series are compared, and introduction of the intervention is staggered across time. Comparisons are made both between and within a data series. Repetitions of a single simple phase change are scheduled, each with a new series, such that both the length and the timing of the phase change differ across replications.

32 [Figure: multiple baseline design example]

33 Multiple Baseline Design
Some Design Considerations:
- The design is generally limited to demonstrating the effect of one independent variable on some outcome.
- The design depends on the “independence” of the multiple baselines (across units, settings, and behaviors).
- There can be practical as well as ethical issues in keeping individuals on baseline for long periods of time (as in the last series).

34 Alternating Treatment Designs
Alternating treatments (in the behavior analysis literature, alternating treatment designs are sometimes referred to as part of a class of multi-element designs)

35 Alternating Treatment Design Description
In these designs, estimates of level, trend, and variability in a data series are assessed on measures within specific conditions and across time. Changes/differences in the outcome measure(s) are assessed by comparing the series associated with different conditions.

36 [Figure: alternating treatment design example]

37 Alternating Treatment Design
Some Design Considerations:
- Behavior must be reversible during alternation of the intervention.
- There is the possibility of interaction/carryover effects as conditions are alternated.
- Comparing more than three treatments may be very challenging.

38 WWC Design Standards Evaluating the Quality of Single-Case Designs

39
- Evaluate the Design: Meets Design Standards / Meets with Reservations / Does Not Meet Design Standards
- Evaluate the Evidence: Strong Evidence / Moderate Evidence / No Evidence
- Effect-Size Estimation
- Social Validity Assessment

40 WWC Single-Case Pilot Design Standards
Four Standards for Design Evaluation:
1. Systematic manipulation of the independent variable
2. Inter-assessor agreement
3. Three attempts to demonstrate an effect at three different points in time
4. Minimum number of phases and data points per phase, for phases used to demonstrate an effect
Standard 3 differs by design type:
- Reversal/Withdrawal Designs (ABAB and variations)
- Alternating Treatments Designs
- Multiple Baseline Designs

41 Standard 1: Systematic Manipulation of the Independent Variable
Researcher Must Determine When and How the Independent Variable Conditions Change. If Standard Is Not Met, Study Does Not Meet Design Standards.

42 Example of Manipulation that is Not Systematic
- A teacher begins to implement an intervention prematurely because of parent pressure.
- A researcher looks retrospectively at data collected during an intervention program.

43 Standard 2: Inter-Assessor Agreement
Each outcome variable for each case must be measured systematically by more than one assessor. The researcher needs to collect inter-assessor agreement:
- in each phase, and
- on at least 20% of the data points in each condition (i.e., baseline, intervention).
The rate of agreement must meet minimum thresholds (e.g., 80% agreement or Cohen’s kappa of 0.60). If no outcomes meet these criteria, the study Does Not Meet Design Standards.
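To make the two thresholds concrete, the sketch below computes point-by-point percent agreement and Cohen’s kappa for two hypothetical observers’ interval records. This is an illustrative sketch, not part of the WWC materials; the function names and data are invented for the example.

```python
# Illustrative IOA computation for two observers' interval-by-interval
# records (1 = behavior occurred, 0 = did not occur).
from collections import Counter

def percent_agreement(obs1, obs2):
    """Proportion of intervals on which the two observers agree."""
    if len(obs1) != len(obs2):
        raise ValueError("records must be the same length")
    return sum(a == b for a, b in zip(obs1, obs2)) / len(obs1)

def cohens_kappa(obs1, obs2):
    """Chance-corrected agreement for two categorical records."""
    n = len(obs1)
    p_observed = percent_agreement(obs1, obs2)
    counts1, counts2 = Counter(obs1), Counter(obs2)
    # Expected agreement if observers coded independently at their base rates
    p_expected = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical records: the observers disagree on one of ten intervals
obs_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
obs_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]

pa = percent_agreement(obs_a, obs_b)   # 0.9
kappa = cohens_kappa(obs_a, obs_b)     # about 0.78
meets = pa >= 0.80 or kappa >= 0.60    # True: thresholds from the slide
```

In practice a reviewer would apply this in each phase and on at least 20% of the data points per condition, as the standard requires.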

44 In Current WWC Reviews: Author Queries Occur When Study Provides Insufficient IOA Information
Determine if the standard is met based on the response:
- If the result of the query indicates that the study does not meet standards, treat it as such.
- If there is no response, assume the standard is met if:
  - the minimum level of agreement is reached;
  - the study assesses IOA at least once in each phase;
  - the study assesses IOA on at least 20% of all sessions.
A footnote is added to the WWC product indicating that IOA was not fully determined.

45 Standard 3: Three Attempts to Demonstrate an Intervention Effect at Three Different Points in Time
“Attempts” are about phase transitions.
Designs that could meet this standard include:
- ABAB design
- Multiple baseline design with three baseline phases and staggered introduction of the intervention
- Alternating treatment design
(other designs to be discussed during the Institute)
Designs not meeting this standard include:
- AB design
- ABA design
- Multiple baseline with three baseline phases and the intervention introduced at the same time for each case

46 Standard 4: Minimum Number of Phases and Data Points per Phase (for Phases in Standard 3)
                                    Reversal Design   MB Design    AT Design
Meets Standards
  Number of phases                  At least 4        At least 6   n/a
  Data points per phase             At least 5        At least 5   At most 2 per phase;
                                                                   at least 5 per condition
Meets Standards with Reservations
  Data points per phase             At least 3        At least 3   At least 4 per condition
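The Standard 4 decision rule summarized above can be written out as a small decision function. This is my own illustrative reading of the rule, not official WWC review software; an actual review also applies Standards 1-3 before any rating is assigned.

```python
# Illustrative Standard 4 check: phase counts and data points per phase.
MIN_PHASES = {"reversal": 4, "multiple_baseline": 6}

def standard4_rating(design, phase_lengths):
    """Rate a reversal or multiple-baseline design from its phase lengths."""
    if len(phase_lengths) < MIN_PHASES[design]:
        return "Does Not Meet Design Standards"
    fewest = min(phase_lengths)          # shortest phase drives the rating
    if fewest >= 5:
        return "Meets Design Standards"
    if fewest >= 3:
        return "Meets Design Standards with Reservations"
    return "Does Not Meet Design Standards"

def standard4_rating_atd(points_per_condition, max_points_per_phase):
    """Rate an alternating treatment design: five points per condition
    (at most two per phase) meets; four per condition meets with
    reservations. Applying the two-per-phase cap to both ratings is my
    reading of the table."""
    if max_points_per_phase > 2:
        return "Does Not Meet Design Standards"
    fewest = min(points_per_condition)
    if fewest >= 5:
        return "Meets Design Standards"
    if fewest >= 4:
        return "Meets Design Standards with Reservations"
    return "Does Not Meet Design Standards"
```

For example, an ABAB study with phase lengths [5, 3, 5, 5] would rate "Meets Design Standards with Reservations", while a multiple baseline with only five phases falls short of the six-phase minimum regardless of phase length.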

47 Some Examples that "Meet", "Meet with Reservations," and "Does Not Meet Design Standards"

48 Design Evaluation
Meets Design Standards:
- IV manipulated directly
- IOA documented on 20% of data points in each phase
- Design allows the opportunity to assess the basic effect at three different points in time
- Five data points per phase (ATD: at most two per phase)
Meets Design Standards with Reservations:
- All of the above, except at least three data points per phase
Does Not Meet Design Standards:
- Fails one or more of the above

49 Basic effect versus Experimental Control
Basic Effect (compare any two adjacent phases/conditions): a change in the pattern of responding after manipulation of the independent variable (level, trend, variability, overlap, immediacy of effect). Experimental Control (all phases/conditions of a study): at least three demonstrations of the basic effect, each at a different point in time (plus assessment of the similarity of data patterns in similar phases/conditions).

50 When Assessing Design Standards
Does the design allow for the opportunity to assess experimental control?
1. Baseline
2. At least five data points per phase (three with reservations)
3. Opportunity to document at least three basic effects, each at a different point in time

51 [Graph: ABAB example annotated with the first, second, and third demonstrations of the basic effect]
1. Baseline
2. Each phase has at least 5 data points (3 w/reservation)
3. Design allows for assessment of the “basic effect” at three different points in time

52 Intervention X Does Not Meet Standard

53 Intervention X Intervention Y Does Not Meet Standard

54 Intervention X Intervention X Does Not Meet Standard

55 Meets with Reservation
Intervention X Intervention X Meets with Reservation

56 First Demonstration of Basic Effect
Second Demonstration of Basic Effect Third Demonstration of Basic Effect

57 Does Not Meet Standard

58 Meets Standard

59 [Figure: graphed example]

60 Alternating Treatment (Multi-element) Designs
Research Question: Is there a DIFFERENCE between the effects of two or more treatment conditions on the dependent variable?
Methodological Issues: How many data points are needed to show a functional relation?
- Five data points per condition (meets)
- Four (meets with reservations)
The lower the separation, or the higher the overlap, the more data points are needed to document experimental control.

61 [Graph; conditions: Escape, Attn, Play, Food]

62 [Graph; conditions: Tangible, Escape, Control, Attention]

63 Meets Standard with Reservation [Graph; conditions: Escape, Attn, Play, Food]

64 Does Not Meet Standard [Graph; conditions: Escape, Attn, Play, Food]

65 Questions and Discussion

