1
DAY 2
Visual Analysis of Single-Case Intervention Data
Tom Kratochwill
2
WWC Standards Evaluating Single-Case Design Outcomes With Visual Analysis: Evidence Criteria
3
Evaluate the Design: Meets Design Standards, Meets with Reservations, or Does Not Meet Design Standards
Evaluate the Evidence: Strong Evidence, Moderate Evidence, or No Evidence
Effect-Size Estimation
Social Validity Assessment
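For readers who want to see the review sequence expressed programmatically, the sketch below encodes the two-step flow above (design first, then evidence) as a small decision function. This is only an illustrative sketch in Python: the category labels come from the slide, while the function name and return format are hypothetical.

# Illustrative sketch of the two-step WWC review sequence (hypothetical
# function name and return format; category labels follow the slide).

def evaluate_study(design_rating: str, evidence_rating: str) -> dict:
    """Design standards are evaluated first; only studies that meet the
    standards (with or without reservations) are rated for evidence."""
    passes_design = design_rating in (
        "Meets Design Standards",
        "Meets with Reservations",
    )
    if not passes_design:
        return {"design": design_rating, "evidence": "Not evaluated"}
    # Effect-size estimation and social validity assessment would follow
    # for studies rated here (not modeled in this sketch).
    return {"design": design_rating, "evidence": evidence_rating}

print(evaluate_study("Meets with Reservations", "Strong Evidence"))
print(evaluate_study("Does Not Meet Design Standards", "Strong Evidence"))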
4
Visual Analysis of Single-Case Evidence
Traditional Method of Data Evaluation for SCDs
Determine whether evidence of a causal relation exists
Characterize the strength or magnitude of that relation
Singular approach currently used by the WWC and in other appraisal guidelines for rating SCD evidence
Methods for Effect-Size Estimation
Several methods have been proposed
Some SCD WWC panel members are among those who have developed these methods, but the methods are still being tested and some are not comparable with group-comparison studies
WWC standards for effect-size estimation are being developed as the field reaches greater consensus on appropriate statistical approaches
Some options will be presented at the Institute
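To make "effect-size estimation" concrete, the sketch below computes one commonly proposed nonoverlap index, the Nonoverlap of All Pairs (NAP), on hypothetical data. It is offered only as an example of the kind of method under discussion, not as the estimator the WWC standards will ultimately adopt; the data and function name are invented for illustration.

from itertools import product

def nap(baseline, intervention, increase_is_improvement=True):
    """Nonoverlap of All Pairs: proportion of (baseline, intervention)
    pairs in which the intervention point shows improvement; ties = 0.5."""
    pairs = list(product(baseline, intervention))
    score = 0.0
    for b, t in pairs:
        if t == b:
            score += 0.5
        elif (t > b) == increase_is_improvement:
            score += 1.0
    return score / len(pairs)

# Hypothetical sessions of on-task behavior (higher = better)
baseline = [20, 25, 22, 18, 24]
intervention = [35, 40, 38, 42, 37]
print(f"NAP = {nap(baseline, intervention):.2f}")  # 1.00 = no overlap at all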
5
Goal, Rationale, Advantages, and Limitations of Visual Analysis
Goal Is to Identify Intervention Effects
A basic effect is a change in the dependent variable in response to researcher manipulation of the independent variable.
Determination of evidence is "subjective," but practice and a common framework for applying visual analysis can help improve agreement.
Evidence criteria are met by examining effects that are replicated at different points.
Encourages Focus on Interventions with Strong Effects
Strong effects are generally desired by applied researchers and clinicians.
Weak results are filtered out because effects should be clear from looking at the data, which is viewed as an advantage by many and a disadvantage by some.
Statistical evaluation can be more sensitive than visual analysis in detecting intervention effects.
6
Goal, Rationale, Advantages, Limitations (cont’d)
Statistical Evaluation and Visual Analysis Have Some Conceptual Similarities (Kazdin, 2011)
Both attempt to avoid Type I and Type II errors:
Type I: concluding the intervention produced an effect when it did not
Type II: concluding the intervention did not produce an effect when it did
Possible Limitations of Visual Analysis
Lack of concrete decision-making rules (e.g., in contrast to the p < 0.05 criterion used in statistical analysis)
Multiple influences need to be analyzed simultaneously
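To illustrate what a "concrete decision rule" looks like on the statistical side, here is a minimal permutation-style test of the phase mean difference for an AB comparison, using entirely hypothetical data. It shuffles phase labels and therefore ignores serial dependence and the design's randomization scheme, so it is a teaching sketch rather than a recommended analysis.

import random

def permutation_p_value(baseline, intervention, n_perm=10_000, seed=0):
    """p-value for the baseline-vs-intervention mean difference, obtained
    by shuffling phase labels (ignores autocorrelation; illustration only)."""
    rng = random.Random(seed)
    observed = (sum(intervention) / len(intervention)
                - sum(baseline) / len(baseline))
    combined = list(baseline) + list(intervention)
    n_b = len(baseline)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(combined)
        diff = (sum(combined[n_b:]) / len(intervention)
                - sum(combined[:n_b]) / n_b)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_perm

baseline = [20, 25, 22, 18, 24]            # hypothetical data
intervention = [35, 40, 38, 42, 37]
print(f"p = {permutation_p_value(baseline, intervention):.4f}")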
7
Multiple Influences Need to be Considered in Applying Visual Analysis
Level: mean of the data series within a phase
Trend: slope of the best-fit line within a phase
Variability: deviation of data around the best-fit line
Percentage of Overlap: percentage of data from an intervention phase that enters the range of data from the previous phase
Immediacy: magnitude of change between the last 3 data points in one phase and the first 3 data points in the next phase
Consistency: extent to which data patterns are similar in similar phases
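For readers who want to see these features operationalized, the sketch below computes simple versions of level, trend, variability, percentage of overlap, and immediacy for two hypothetical phases. The particular formulas (ordinary least squares for the trend line, standard deviation of residuals for variability) are one reasonable operationalization rather than the only one, and the data are invented.

import statistics

def fit_line(y):
    """Least-squares slope and intercept of y against session index."""
    x = list(range(len(y)))
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return slope, my - slope * mx

def within_phase_features(y):
    slope, intercept = fit_line(y)
    residuals = [yi - (intercept + slope * i) for i, yi in enumerate(y)]
    return {
        "level": statistics.mean(y),                  # phase mean
        "trend": slope,                               # slope of best-fit line
        "variability": statistics.pstdev(residuals),  # spread around the line
    }

def between_phase_features(prev_phase, next_phase):
    overlap = (sum(min(prev_phase) <= v <= max(prev_phase) for v in next_phase)
               / len(next_phase) * 100)
    immediacy = statistics.mean(next_phase[:3]) - statistics.mean(prev_phase[-3:])
    return {"percent_overlap": overlap, "immediacy": immediacy}

baseline = [20, 25, 22, 18, 24]            # hypothetical data
intervention = [35, 40, 38, 42, 37]
print(within_phase_features(baseline))
print(within_phase_features(intervention))
print(between_phase_features(baseline, intervention))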
8
Research on Visual Analysis
Applied Outcome Criteria and Visual Analysis
Decision Criteria in Visual Analysis
Standards for Visual Analysis
9
Research on Visual Analysis
Research on visual analysis raises a number of methodological considerations. Some limitations were recognized by Brossart et al. (2006, p. 536), who offered the following recommendations for improving visual-analysis research:
Graphs should be fully contextualized, describing a particular client, target behavior(s), time frame, and data collection instrument.
Judges should not be asked to predict the degree of statistical significance (i.e., a significance probability, or p-value) of a particular statistic, but rather should be asked to judge graphs according to their own criteria of practical importance, effect, or impact.
10
Research on Visual Analysis (Contd.)
Judges should not be asked to make dichotomous yes/no decisions, but rather to judge the extent or amount of intervention effectiveness.
No single statistical test should be selected as "the valid criterion"; rather, several optional statistical tests should be tentatively compared to the visual analyst's judgments.
Only graphs of complete SCD studies should be examined (e.g., ABAB, alternating treatment, and multiple-baseline designs).
11
Some Research Findings
Lieberman, R. G., Yoder, P. J., Reichow, B., & Wolery, M. (2010). Visual analysis of multiple baseline across participants graphs when change is delayed. School Psychology Quarterly, 25,
Kahng, S. W., Chung, K.-M., Gutshall, K., Pitts, S. C., Kao, J., & Girolami, K. (2010). Consistent visual analysis of intrasubject data. Journal of Applied Behavior Analysis, 43,
12
Some Research Findings
Wolfe, K., & Slocum, T. A. (2015). A comparison of two approaches to training visual analysis of AB graphs. Journal of Applied Behavior Analysis, 48, 1-6.
Wolfe, K., Seaman, M. A., & Drasgow, E. (2016). Interrater agreement on the visual analysis of individual tiers and functional relations in multiple baseline designs. Behavior Modification. Advance online publication.
13
Lieberman, Yoder, Reichow, and Wolery (2010) tested various characteristics of multiple-baseline designs to determine whether those data features affected the judgments of visual-analysis experts (N = 36 editorial board members of journals that publish SCDs) regarding the presence of a functional relation and agreement on the outcomes. Graphs with steep slopes (versus shallow slopes) when the intervention was introduced were more often judged as showing a functional relation; nevertheless, there was still some disagreement about whether a functional relation had been established. Lieberman et al. (2010) noted that training visual judges to address conditions in which change occurs long after the intervention, or in which the latency of change is inconsistent across units, may help improve reviewers' agreement about a functional relation.
14
Kahng, Chung, Gutshall, Pitts, Kao, and Girolami (2010) replicated and extended earlier research on visual analysis by recruiting editorial board members of the Journal of Applied Behavior Analysis as participants. Board members rated the degree of experimental control shown in 36 ABAB design graphs on a 100-point scale. The authors reported high levels of agreement among judges, noting that the reliability of visual analysis has improved over the years, due in part to better training in visual-analysis methods.
15
Wolfe and Slocum (2015) compared two approaches to training visual analysis. The purpose of the study was to evaluate systematic instruction, delivered either as a computer-based program or as a recorded lecture, in identifying changes in slope and level in AB graphs. Results indicated that both approaches were significantly more effective than a no-treatment control condition but did not differ from each other. The authors discussed the implications of these results for training and directions for future research.
16
Wolfe et al. (2016) evaluated whether agreement in visual analysis would be higher than previously reported. They investigated agreement at the level of individual tiers (i.e., the AB comparison) and at the level of the overall functional relation in multiple baseline designs, and they examined the relationship between raters' decisions at these two levels. Experts (N = 52) were asked to make judgments about changes in the dependent variable in individual tiers and about the presence of an overall functional relation in 31 multiple baseline graphs. Results indicated that interrater agreement was at or just below minimally adequate levels for both types of decisions and that agreement at the individual tier level often resulted in agreement about the overall functional relation.
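Because findings like these hinge on how interrater agreement is quantified, the sketch below shows two common ways of computing it for dichotomous functional-relation judgments: simple percent agreement and Cohen's kappa. The rater data are invented, and this is not necessarily the specific agreement metric used by Wolfe et al. (2016).

def percent_agreement(r1, r2):
    """Share of graphs on which two raters make the same judgment."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters making yes/no (1/0) judgments;
    corrects observed agreement for agreement expected by chance."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    p_yes = (sum(r1) / n) * (sum(r2) / n)
    p_no = ((n - sum(r1)) / n) * ((n - sum(r2)) / n)
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# Hypothetical yes/no (1/0) judgments about a functional relation on 10 graphs
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(percent_agreement(rater_a, rater_b))        # 0.8
print(round(cohens_kappa(rater_a, rater_b), 2))   # 0.58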
17
Visual Analysis Training Options
The training provides an introduction to the visual analysis of AB graphs, with extensive discrimination training and immediate feedback. Instructors have incorporated the training into single-case research design classes to build students' mastery of discriminating slope and level changes in AB graphs, a foundation they can build on when discussing replication of effects within full experimental designs. It has also been used in special education classes to teach future educators how to analyze individual student data. Wolfe, K., & Slocum, T. A. (2015). A comparison of two approaches to training visual analysis of AB graphs. Journal of Applied Behavior Analysis, 48(2),
18
Visual Analysis Training Options
The purpose of Singlecase.org is to provide researchers with a tool for assessing and improving their skills in the visual analysis of single-case research designs. Three sets of graphs are provided (53 ABAB, 47 multiple baseline, 36 alternating treatments). Each graph is rated in terms of (a) whether it demonstrates a functional relation and (b) whether it demonstrates a clinical effect, and ratings are compared with those of national experts. Content for the site was developed by Swoboda, C., Kratochwill, T., Horner, R., Levin, J., & Albin, R. (2012). Visual Analysis Training Protocol: Applications with the Alternating Treatment, Multiple Baseline, and ABAB Designs (available from the authors).
19
Visual Analysis Coding Options
A coding manual has been developed to assist with implementing the design and evidence standards developed by the What Works Clearinghouse (WWC) Panel on evaluating single-case research. The unit of analysis for the design standards is at the case level. Maggin, D. M., Briesch, A. M., & Chafouleas, S. M. (2013). An application of the What Works Clearinghouse Standards for Evaluating Single-Subject Research: Self-management interventions. Remedial and Special Education, 34,
20
Questions and Discussion