Effect size measures for single-case designs: General considerations

1 Effect size measures for single-case designs: General considerations
James E. Pustejovsky
Single-Case Intervention Research Training Institute, Madison, WI, June 2018

2 Effect size
Broadly: a quantitative index of relations among variables (Hedges, 2008, p. 167).
In the context of SCDs: a quantitative index describing the direction and magnitude of a functional relationship (i.e., the effect of an intervention on an outcome) in a way that allows for comparison across cases and studies (Pustejovsky & Ferron, 2017).
Direction and magnitude matter so that we can tell the difference between a strong positive effect (good), a null effect (inconsequential), and a strong negative effect (harmful!).

3 Why do we need effect sizes?
To characterize the main findings of a study in a common, widely understood way.
As a basis for research synthesis.
“Reporting and interpreting effect sizes in the context of previously reported effects is essential to good research. It enables readers to evaluate the stability of results across samples, designs, and analyses. Reporting effect sizes also informs power analyses and meta-analyses needed in future research.” (Wilkinson & APA Task Force on Statistical Inference, 1999)
Effect sizes characterize the main findings of a study in a common, widely understood way and put them in the context of previous research. The current APA publication manual says that reporting effect sizes is “almost always necessary.”

4 Characteristics of a good effect size (Lipsey & Wilson, 2001)
Readily interpretable.
Comparable across studies (and cases, for SCDs) that use different operational procedures.
Accompanied by a measure of sampling variability (i.e., a standard error or confidence interval).
Calculable from available data.
Lipsey and Wilson discussed effect sizes in the context of their use in meta-analysis, but these are really general-purpose criteria. Interpretability matters because effect sizes are a tool for scientific communication; it is really about interpretability and meaningfulness in the context of a research field. Standard errors or other measures of uncertainty are needed, as in any statistical analysis, because effect sizes are only estimates, and we need to know how precise those estimates are. Standard errors also play a big role in meta-analysis, but that is not our focus today; this criterion is a particular challenge for single-case effect sizes. Calculability from available data matters for meta-analysis of between-groups studies, because typically only summary information is available to the secondary analyst. With single-case research, the raw data are usually available in the form of a published graph, so this is less of a constraint.

5 Comparability across studies (between-groups experiment example)
Study 1: Effect of peer vs. adult tutoring; outcomes measured with the Woodcock-Johnson III comprehension test. (Treatment- and control-group data were displayed graphically on the slide.) p = .310
Study 2: Effect of peer vs. adult tutoring; outcomes measured with the Terra Nova comprehension test. p = .024
Which study produced a bigger effect?
Study 1: SMD = 0.325, 95% CI: [-0.299, 0.950]
Study 2: SMD = 0.337, 95% CI: [0.046, 0.628]
p-values are not good measures of effect magnitude. The standardized mean difference (SMD) is a better measure because its magnitude is stable across studies that differ in size and use different scales. Use separate numbers to describe an effect's magnitude and how precisely it is estimated.
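The SMD in this example is the usual between-groups standardized mean difference: the difference in group means divided by the pooled standard deviation, with a large-sample standard error used to form the confidence interval. A minimal sketch of that calculation (the function name and example numbers are illustrative, not the data behind the slide):

```python
import math

def smd_with_ci(m_t, sd_t, n_t, m_c, sd_c, n_c, z=1.96):
    """Standardized mean difference (Cohen's d) with an approximate 95% CI.

    m_*, sd_*, n_* are the mean, SD, and sample size of the
    treatment (t) and control (c) groups.
    """
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    d = (m_t - m_c) / s_pooled
    # Large-sample standard error of d
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return d, d - z * se, d + z * se

# Hypothetical groups of 20 with means 10 vs. 9 and common SD 2:
d, lo, hi = smd_with_ci(10, 2, 20, 9, 2, 20)
print(f"SMD = {d:.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Note how the CI, not the p-value, carries the information about estimation precision: a wide interval (as in Study 1) signals an imprecise estimate even when the point estimates of two studies are nearly identical.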

6 Comparability across studies and cases (single-case designs)
Imagine several single-case design studies investigating the same intervention, with similar participants and a similar outcome construct. What procedural factors might differ across these studies? Ideally, effect size indices should not be strongly affected by such factors.
Study design (e.g., ABAB, multiple baseline, multiple probe)
Number of measurements per phase
Session length
With behavioral outcome measures like on-task behavior, problem behavior, or self-injury: the observation recording system (continuous recording, momentary time sampling, partial interval recording)
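To see why the recording system is not a benign choice, consider a toy simulation (not from the presentation; the behavior model and parameters are invented for illustration). It scores the same simulated behavior stream two ways: continuous recording reports the true proportion of session time with behavior, while partial interval recording scores an interval as positive if behavior occurs at any point within it, which systematically inflates the summary measure.

```python
import random

def simulate_session(session_len=600, interval=10, p_state=0.2, seed=0):
    """Score one simulated behavior stream under two recording systems.

    Behavior is modeled (simplistically) as an independent on/off state
    in each 1-second tick, 'on' with probability p_state.
    Returns (continuous_recording, partial_interval_recording) proportions.
    """
    rng = random.Random(seed)
    stream = [rng.random() < p_state for _ in range(session_len)]
    # Continuous recording: proportion of session time with behavior
    cr = sum(stream) / session_len
    # Partial interval recording: proportion of intervals in which
    # behavior occurred at ANY point
    n_int = session_len // interval
    pir = sum(
        any(stream[i * interval:(i + 1) * interval]) for i in range(n_int)
    ) / n_int
    return cr, pir

cr, pir = simulate_session()
print(f"continuous recording: {cr:.2f}, partial interval: {pir:.2f}")
```

Under these assumptions the partial-interval score lands far above the true prevalence, so two studies of the same intervention that use different recording systems would report very different outcome levels, and an effect size computed from the raw scores would not be comparable across them.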

7 References
Hedges, L. V. (2008). What are effect sizes and why do we need them? Child Development Perspectives, 2(3), 167–171.
Lipsey, M. W., & Wilson, D. B. (2001). Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications.
Pustejovsky, J. E., & Ferron, J. M. (2017). Research synthesis and meta-analysis of single-case designs. In Handbook of Special Education (2nd ed.).
Wilkinson, L., & the APA Task Force on Statistical Inference (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54(8), 594–604.

