DAY 2: Visual Analysis of Single-Case Intervention Data
Tom Kratochwill


WWC Standards for Evaluating Single-Case Design Outcomes With Visual Analysis: Evidence Criteria

[The original slide shows this review sequence as a flowchart:]
1. Evaluate the Design: Meets Design Standards / Meets Design Standards with Reservations / Does Not Meet Design Standards
2. Evaluate the Evidence: Strong Evidence / Moderate Evidence / No Evidence
3. Effect-Size Estimation
4. Social Validity Assessment
A sketch of this gating logic as code follows.
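Read as a pipeline, the flowchart gates each study twice before effect-size and social validity work begins. The following minimal sketch encodes that flow; the class and function names are invented for illustration and are not WWC tooling.

```python
from enum import Enum

class DesignRating(Enum):
    MEETS = "Meets Design Standards"
    WITH_RESERVATIONS = "Meets Design Standards with Reservations"
    DOES_NOT_MEET = "Does Not Meet Design Standards"

class EvidenceRating(Enum):
    STRONG = "Strong Evidence"
    MODERATE = "Moderate Evidence"
    NONE = "No Evidence"

def review_study(design, evidence=None):
    """Hypothetical encoding of the two-gate review flow: designs that
    do not meet standards are screened out; surviving studies have
    their evidence rated by visual analysis; effect-size estimation
    and social validity assessment follow for studies with evidence."""
    if design is DesignRating.DOES_NOT_MEET:
        return "Stop: design does not meet standards"
    if evidence is EvidenceRating.NONE:
        return "Stop: no evidence of an effect"
    return f"{evidence.value}: proceed to effect-size estimation and social validity assessment"

print(review_study(DesignRating.MEETS, EvidenceRating.STRONG))
```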

Visual Analysis of Single-Case Evidence
Traditional method of data evaluation for SCDs:
- Determine whether evidence of a causal relation exists
- Characterize the strength or magnitude of that relation
- Currently the sole approach used by the WWC and by other appraisal guidelines for rating SCD evidence
Methods for effect-size estimation:
- Several methods have been proposed
- Some SCD WWC panel members are among those who developed these methods, but the methods are still being tested and some are not comparable to effect sizes from group-comparison studies
- WWC standards for effect sizes are being developed as the field reaches greater consensus on appropriate statistical approaches
- Some options will be presented at the Institute (one nonoverlap-based option is sketched below)
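Among the nonoverlap family of proposed SCD effect sizes, Nonoverlap of All Pairs (NAP; Parker & Vannest, 2009) is one frequently discussed option. The following is a minimal sketch, not an official WWC metric; the function name and example data are invented for illustration.

```python
def nap(baseline, intervention, larger_is_better=True):
    """Nonoverlap of All Pairs (NAP): the proportion of all
    (baseline, intervention) point pairs in which the intervention
    point shows improvement, with ties counted as half. 0.5 means
    complete overlap (chance level); 1.0 means complete nonoverlap."""
    wins = ties = 0
    for b in baseline:
        for t in intervention:
            if t == b:
                ties += 1
            elif (t > b) == larger_is_better:
                wins += 1
    return (wins + 0.5 * ties) / (len(baseline) * len(intervention))

# A problem behavior we hope to reduce: every intervention point is
# below every baseline point, so NAP = 1.0.
print(nap([8, 9, 7, 8], [6, 4, 3, 2], larger_is_better=False))
```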

Goal, Rationale, Advantages, and Limitations of Visual Analysis
Goal is to identify intervention effects:
- A basic effect is a change in the dependent variable in response to researcher manipulation of the independent variable.
- The determination of evidence is "subjective," but practice and a common framework for applying visual analysis can improve agreement.
- Evidence criteria are met by examining effects that are replicated at different points in time.
Encourages a focus on interventions with strong effects:
- Strong effects are generally what applied researchers and clinicians want.
- Weak results are filtered out because effects should be clear from inspecting the data; many view this as an advantage, some as a disadvantage.
- Statistical evaluation can be more sensitive than visual analysis in detecting intervention effects.

Goal, Rationale, Advantages, and Limitations (cont'd)
Statistical evaluation and visual analysis have some conceptual similarities (Kazdin, 2011). Both attempt to avoid Type I and Type II errors:
- Type I: concluding the intervention produced an effect when it did not
- Type II: concluding the intervention did not produce an effect when it did
Possible limitations of visual analysis:
- Lack of concrete decision-making rules (e.g., in contrast to the p < .05 criterion used in statistical analysis)
- Multiple influences need to be analyzed simultaneously
The simulation sketched below illustrates the Type I error idea for one naive decision rule.
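To make the Type I error idea concrete, here is a small simulation (an illustrative sketch, not from the original slides): a hypothetical "complete nonoverlap" decision rule is applied to many AB data sets in which no true effect exists, and the false-alarm rate is estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_rule(baseline, intervention):
    # Hypothetical decision rule for a target behavior we want to
    # increase: claim an effect only if every intervention point
    # exceeds every baseline point (complete nonoverlap).
    return intervention.min() > baseline.max()

# Type I error: how often the rule claims an effect when the
# baseline and intervention data come from the same process.
trials, false_alarms = 10_000, 0
for _ in range(trials):
    a = rng.normal(10, 2, size=5)  # baseline, no true change
    b = rng.normal(10, 2, size=5)  # "intervention", same distribution
    false_alarms += naive_rule(a, b)
print(f"Type I error rate of the naive rule: {false_alarms / trials:.3f}")
```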

Multiple Influences Need to Be Considered in Applying Visual Analysis
- Level: mean of the data series within a phase
- Trend: slope of the best-fit line within a phase
- Variability: deviation of the data around the best-fit line
- Percentage of overlap: percentage of data from an intervention phase that falls within the range of data from the previous phase
- Immediacy: magnitude of change between the last 3 data points in one phase and the first 3 data points in the next phase
- Consistency: extent to which data patterns are similar in similar phases
The sketch after this list shows one way these features can be summarized numerically.
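For illustration, the first five features can be given rough numerical summaries. The sketch below (with an invented function name and example data) mirrors the definitions above; it is not a substitute for structured visual analysis, and consistency is omitted because it requires multiple similar phases.

```python
import numpy as np

def phase_comparison(baseline, intervention):
    """Numerical summaries of visual-analysis features for one AB
    phase comparison. Illustrative only: visual analysis is a
    structured judgment process, not a set of formulas."""
    b, t = np.asarray(baseline, float), np.asarray(intervention, float)

    def trend_and_residuals(y):
        x = np.arange(len(y))
        slope, intercept = np.polyfit(x, y, 1)  # best-fit line
        return slope, y - (slope * x + intercept)

    b_slope, b_resid = trend_and_residuals(b)
    t_slope, t_resid = trend_and_residuals(t)

    return {
        # Level: mean of the data series within each phase
        "level_change": t.mean() - b.mean(),
        # Trend: slope of the best-fit line within each phase
        "trend": (b_slope, t_slope),
        # Variability: deviation of the data around the best-fit line
        "variability": (b_resid.std(), t_resid.std()),
        # Overlap: % of intervention data inside the baseline range
        "pct_overlap": 100 * np.mean((t >= b.min()) & (t <= b.max())),
        # Immediacy: last 3 baseline points vs. first 3 intervention points
        "immediacy": t[:3].mean() - b[-3:].mean(),
    }

print(phase_comparison([8, 9, 7, 8, 9], [6, 4, 3, 2, 2]))
```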

Research on Visual Analysis
- Applied outcome criteria and visual analysis
- Decision criteria in visual analysis
- Standards for visual analysis

Research on Visual Analysis
Research on visual analysis raises a number of methodological considerations. Brossart et al. (2006, p. 536) recognized some of these limitations and offered the following recommendations for improving visual-analysis research:
1. Graphs should be fully contextualized, describing a particular client, target behavior(s), time frame, and data collection instrument.
2. Judges should not be asked to predict the degree of statistical significance (i.e., a p-value) of a particular statistic; rather, they should be asked to judge graphs according to their own criteria of practical importance, effect, or impact.

Research on Visual Analysis (cont'd)
3. Judges should not be asked to make dichotomous yes/no decisions, but rather to judge the extent or amount of intervention effectiveness.
4. No single statistical test should be selected as "the valid criterion"; rather, several optional statistical tests should be tentatively compared to the visual analyst's judgments.
5. Only graphs of complete SCD studies should be examined (e.g., ABAB, alternating treatment, and multiple-baseline designs).

Some Research Findings
Lieberman, R. G., Yoder, P. J., Reichow, B., & Wolery, M. (2010). Visual analysis of multiple baseline across participants graphs when change is delayed. School Psychology Quarterly, 25, 28-44.
Kahng, S. W., Chung, K.-M., Gutshall, K., Pitts, S. C., Kao, J., & Girolami, K. (2010). Consistent visual analysis of intrasubject data. Journal of Applied Behavior Analysis, 43, 35-45.

Some Research Findings (cont'd)
Wolfe, K., & Slocum, T. A. (2015). A comparison of two approaches to training visual analysis of AB graphs. Journal of Applied Behavior Analysis, 48(2), 472-477.
Wolfe, K., Seaman, M. A., & Drasgow, E. (2016). Interrater agreement on the visual analysis of individual tiers and functional relations in multiple baseline designs. Behavior Modification. Advance online publication. doi:10.1177/0145445516644699

Lieberman, Yoder, Reichow, and Wolery (2010) tested various characteristics of multiple-baseline designs to determine whether data features affected the judgments of visual-analysis experts (N = 36 editorial board members of journals that publish SCDs) regarding the presence of a functional relation and agreement on the outcomes. Graphs with steep slopes (versus shallow slopes) at the point the intervention was introduced were more often judged as showing a functional relation. Nevertheless, there was still some disagreement about whether a functional relation had been established. Lieberman et al. (2010) noted that training visual judges to address conditions in which change occurs long after the intervention, and in which the latency of change is inconsistent across units, may improve reviewers' agreement about a functional relation.

Kahng, Chung, Gutshall, Pitts, Kao, and Girolami (2010) replicated and extended earlier research on visual analysis by recruiting editorial board members of the Journal of Applied Behavior Analysis as participants. Board members rated the degree of experimental control in 36 ABAB design graphs on a 100-point scale. The authors reported high levels of agreement among judges, noting that the reliability of visual analysis has improved over the years, due in part to better training in visual-analysis methods.

Wolfe and Slocum (2015) compared two approaches to training visual analysis. The study evaluated systematic instruction, delivered either as a computer-based intervention or as a recorded lecture, on identifying changes in slope and level in AB graphs. Both approaches were significantly more effective than a no-treatment control condition but did not differ from each other. The authors discussed the implications of these results for training and directions for future research.

Wolfe et al. (2016) evaluated whether visual-analysis agreement would be higher than previously reported. They investigated agreement at the tier level (i.e., the AB comparison) and at the functional-relation level in multiple baseline designs, and they examined the relationship between raters' decisions at the two levels. Experts (N = 52) made judgments about changes in the dependent variable in individual tiers and about the presence of an overall functional relation in 31 multiple baseline graphs. Interrater agreement was at or just below minimally adequate levels for both types of decisions, and agreement at the individual tier level often carried over to agreement about the overall functional relation. A simple way to quantify such interrater agreement is sketched below.
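As a rough illustration of the kind of agreement statistic such studies report, the sketch below computes mean pairwise percent agreement among raters on dichotomous functional-relation judgments. The function name and data are hypothetical, and the actual indices used by Wolfe et al. (2016) may differ.

```python
from itertools import combinations

def mean_pairwise_agreement(ratings):
    """Average percent agreement across all pairs of raters.
    `ratings` has one entry per rater; each entry is a list of
    dichotomous judgments (True = functional relation present),
    one per graph."""
    per_pair = [
        100 * sum(a == b for a, b in zip(r1, r2)) / len(r1)
        for r1, r2 in combinations(ratings, 2)
    ]
    return sum(per_pair) / len(per_pair)

# Hypothetical data: three raters judging five graphs.
print(mean_pairwise_agreement([
    [True, True, False, True, False],
    [True, False, False, True, False],
    [True, True, False, False, False],
]))  # 73.3
```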

Visual Analysis Training Options
https://foxylearning.com/tutorials/va
This training provides an introduction to the visual analysis of AB graphs, with extensive discrimination training and immediate feedback. Instructors have incorporated it into single-case research design classes to build mastery of slope and level-change discriminations in AB graphs, which students can then build on when discussing replication of effects within full experimental designs. It has also been used in special education classes to teach future educators how to analyze individual student data.
Wolfe, K., & Slocum, T. A. (2015). A comparison of two approaches to training visual analysis of AB graphs. Journal of Applied Behavior Analysis, 48(2), 472-477.

Visual Analysis Training Options (cont'd)
www.singlecase.org
The purpose of singlecase.org is to give researchers a tool for assessing and improving their skills at visual analysis of single-case research designs. Three sets of graphs are provided (53 ABAB, 47 multiple baseline, 36 alternating treatments). Each graph is rated in terms of (a) whether it demonstrates a functional relation and (b) whether it demonstrates a clinical effect, and ratings are compared with those of national experts.
Content for the site was developed by Swoboda, C., Kratochwill, T., Horner, R., Levin, J., & Albin, R. (2012). Visual analysis training protocol: Applications with the alternating treatment, multiple baseline, and ABAB designs (available from the authors).

Visual Analysis Coding Options
A coding manual has been developed to assist with implementing the design and evidence standards developed by the What Works Clearinghouse (WWC) panel on evaluating single-case research. The unit of analysis for the design standards is the case level.
Maggin, D. M., Briesch, A. M., & Chafouleas, S. M. (2013). An application of the What Works Clearinghouse standards for evaluating single-subject research: Self-management interventions. Remedial and Special Education, 34, 44-58.

Questions and Discussion