


2 9.0 A Taste of the Importance of Effect Size

3 The Basics of Effect Size Extraction and Statistical Applications for Meta-Analysis
Robert M. Bernard and Philip C. Abrami, Concordia University

4 What Is an Effect Size?
- A descriptive metric that characterizes the standardized difference (in SD units) between the mean of a control group and the mean of a treatment group (e.g., an educational intervention)
- Can also be calculated from correlational data derived from pre-experimental designs or from repeated-measures designs

5 Characteristics of Effect Sizes
- Can be positive or negative
- Interpreted as a z-score, in SD units, although individual effect sizes are not part of a z-score distribution
- Can be aggregated with other effect sizes and subjected to other statistical procedures such as ANOVA and multiple regression
- Magnitude interpretation: ≤ 0.20 is a small effect size, 0.50 is a moderate effect size, and ≥ 0.80 is a large effect size (Cohen, 1992)

6 Zero Effect Size (ES = 0.00)
[Figure: overlapping distributions of the control group and the intervention group]

7 Moderate Effect Size (ES = 0.40)
[Figure: distributions of the control group and the treatment group]

8 [Figure: distributions of the control condition and the intervention condition, ES = 0.85]

9 Large Effect Size (ES = 0.85)
[Figure: distributions of the control group and the intervention condition]

10 Percentage Interpretation of Effect Sizes
- ES = 0.00 means that the average treatment participant outperformed 50% of the control participants
- ES = 0.40 means that the average treatment participant outperformed 65% of the control participants (from the unit normal distribution)
- ES = 0.85 means that the average treatment participant outperformed 80% of the control participants (see the check below)
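These percentages are cumulative probabilities from the unit normal distribution. A quick check using only the Python standard library (not part of the presentation):

```python
from math import erf, sqrt

def percent_outperformed(es):
    """Percentage of control participants scoring below the average treatment
    participant, assuming normal distributions (unit-normal CDF at es)."""
    return 100 * 0.5 * (1 + erf(es / sqrt(2)))

for es in (0.00, 0.40, 0.85):
    print(es, round(percent_outperformed(es), 1))
# 0.0  -> 50.0
# 0.4  -> 65.5
# 0.85 -> 80.2   (the slide's 50%, 65%, and 80%)
```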

11 Independence of Effect Sizes
- Ideally, multiple effect sizes extracted from the same study should be independent of one another
- This means that the same participants should not appear in more than one effect size
- In studies with one control condition and multiple treatments, the treatments can be averaged, or one may be selected at random (see the sketch below)
- Using effect sizes derived from different measures on the same participants is legitimate
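One way to implement the "averaged treatments" option is to collapse the treatment arms into a single group before computing the effect size. A sketch under that assumption, with a hypothetical helper name (combine_arms) and made-up numbers, not code from the presentation:

```python
import math

def combine_arms(n1, m1, sd1, n2, m2, sd2):
    """Merge two treatment arms into one group: combined n, mean, and SD."""
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    # Pooled within-arm variance plus the spread between the two arm means
    var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2
           + (n1 * n2 / n) * (m1 - m2)**2) / (n - 1)
    return n, m, math.sqrt(var)

# Example: two treatment arms collapsed before computing one ES against the control
print(combine_arms(20, 62.5, 7.0, 22, 59.3, 5.6))
```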

12 Independence: Treatments & Measures
One control group and several treatment groups, pooled into a single outcome:
  R        O1
  R  X1    O1
  R  X2    O1
  R  X3    O1
becomes
  R            O1
  R  Xpooled   O1
(one outcome)
Two measures on the same groups:
  R        O1  O2
  R  X1    O1  O2
(two outcomes, one for O1 and one for O2)

13 Effect Size Extraction
- Effect size extraction is the process of identifying relevant statistical data in a study and calculating an effect size based on those data
- All effect sizes should be extracted by two coders, working independently
- Coders' results should be compared, and a measure of inter-coder agreement calculated and recorded
- In cases of disagreement, coders should resolve the discrepancy in collaboration

14 ES Calculation: Descriptive Statistics
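The slide's formulas were shown as images and did not survive the transcript. The standard definitions consistent with the table on the next slide (Glass's Δ based on the control-group SD, Cohen's d based on the pooled SD, and Hedges's bias-corrected g) are presumably what was shown:

```latex
\Delta_G = \frac{M_E - M_C}{SD_C}
\qquad
d_C = \frac{M_E - M_C}{SD_P},\quad
SD_P = \sqrt{\frac{(n_E - 1)\,SD_E^2 + (n_C - 1)\,SD_C^2}{n_E + n_C - 2}}
\qquad
g_H = d_C\left(1 - \frac{3}{4(n_E + n_C) - 9}\right)
```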

15 Examples from Three Studies
- Study 1: equal ns and roughly equal standard deviations
- Study 3: roughly equal ns and different standard deviations
- Study 2: different ns and roughly equal standard deviations

Study  n_E  n_C  M_E   M_C   SD_E  SD_C  SD_P  Δ_G    d_C    g_H
S-1    41   41   62.5  59.3   7.0   5.6   6.3   0.57   0.51   0.50
S-3    19   22   62.5  48.6  14.1   5.6  12.2   2.48   1.14   1.11
S-2    38   14   70.4  80.5  10.8  10.1  10.5  –1.00  –0.96  –0.95

(Δ_G = Glass's Δ, d_C = Cohen's d, g_H = Hedges's g)
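As a check on the definitions above, a short script (not from the presentation) reproduces the S-1 row:

```python
import math

# Descriptives for S-1 (from the table above): equal ns, roughly equal SDs
n_e, n_c = 41, 41
m_e, m_c = 62.5, 59.3
sd_e, sd_c = 7.0, 5.6

# Pooled standard deviation
sd_p = math.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2) / (n_e + n_c - 2))

delta_g = (m_e - m_c) / sd_c                    # Glass's delta (control SD)
d_c = (m_e - m_c) / sd_p                        # Cohen's d (pooled SD)
g_h = d_c * (1 - 3 / (4 * (n_e + n_c) - 9))     # Hedges's small-sample correction

print(round(sd_p, 1), round(delta_g, 2), round(d_c, 2), round(g_h, 2))
# 6.3 0.57 0.5 0.5 -- agrees with the S-1 row to within rounding
```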

16 Extracting Effect Sizes in the Absence of Descriptive Statistics
- From inferential statistics (t-test, ANOVA, ANCOVA, etc.) when the exact statistics are provided (see the conversion sketch below)
- From reported levels of significance, such as p < .05, when the exact statistics are not given; t can then be set at the conservative value of 1.96 (Glass, McGaw & Smith, 1981; Hedges, Shymansky & Woodworth, 1989)
- Studies not reporting sample sizes for the control and experimental groups should be considered for exclusion
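A minimal sketch of the usual conversions from inferential statistics to d, with hypothetical helper names (d_from_t, d_from_f); this is not code from the presentation:

```python
import math

def d_from_t(t, n_e, n_c):
    """Standardized mean difference from an independent-samples t statistic."""
    return t * math.sqrt(1 / n_e + 1 / n_c)

def d_from_f(f, n_e, n_c):
    """d from a one-df, two-group F ratio (F = t**2; sign must come from the means)."""
    return d_from_t(math.sqrt(f), n_e, n_c)

# When only "p < .05" is reported, fall back on the conservative t = 1.96
print(round(d_from_t(1.96, 30, 30), 2))   # 0.51 for two groups of 30
```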

17 Other Codable Data Regarding Effect Size
- Type of statistical data used to extract the effect size (e.g., descriptives, t-value)
- Type of effect size, such as posttest only, adjusted in ANCOVA, etc.
- Direction of the statistical test
- Reliability of the dependent measure
- In a pretest/posttest design, the correlation between pretest and posttest

18 Examples from CT Meta-Analysis
- Study 1: pretest/posttest, one-group design, all descriptives present
- Study 2: posttest only, two-group design, all descriptives present
- Study 3: pretest/posttest, two-group design, all descriptives present
Coding sheet for the 3 studies

19 Mean and Variability of ES+
[Figure/table: results from Bernard, Abrami, Lou, et al. (2004), RER]

20 Variability of Effect Size
- The standard error of each effect size is estimated using the equation shown on the slide (see the reconstruction below)
- The average effect size (d+) is tested using the slide's second equation, with N – 2 degrees of freedom (Hedges & Olkin, 1985)
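The two equations were not captured in the transcript. The standard Hedges & Olkin (1985) forms, which match the variances used in the worked example on slide 23, are presumably:

```latex
SE(d_i) = \sqrt{\frac{n_E + n_C}{n_E\, n_C} + \frac{d_i^{2}}{2\,(n_E + n_C)}}
\qquad
w_i = \frac{1}{SE(d_i)^{2}},\quad
d_{+} = \frac{\sum_i w_i d_i}{\sum_i w_i},\quad
z = \frac{d_{+}}{\sqrt{1/\sum_i w_i}}
```

In the large-sample form the test statistic is referred to the standard normal; the slide evidently evaluated it against a t distribution with N – 2 degrees of freedom instead.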

21 Testing Homogeneity of Effect Size
Note the similarity to a t-ratio. Q is tested against the sampling distribution of χ² with k – 1 degrees of freedom, where k is the number of effect sizes (Hedges & Olkin, 1985); see the sketch of Q below.
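The Q formula itself is missing from the transcript; the standard Hedges & Olkin (1985) statistic that the text describes is:

```latex
Q = \sum_{i=1}^{k} w_i \,(d_i - d_{+})^{2}
  = \sum_{i=1}^{k} \frac{(d_i - d_{+})^{2}}{SE(d_i)^{2}},
\qquad Q \sim \chi^{2}_{k-1} \ \text{under homogeneity}
```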

22 Homogeneity vs. Heterogeneity of Effect Size
- If homogeneity of effect size is established, then the studies in the meta-analysis can be thought of as sharing the same effect size (i.e., the mean)
- If homogeneity of effect size is violated (heterogeneity of effect size), then no single effect size is representative of the collection of studies (i.e., the "true" average effect size remains unknown)

23 Example with Fictitious Data

Study    n_E  n_C  M_E   M_C   M_E–M_C  SD_P   d      SE²(d)  Q_i
Study 1  19   22   62.5  48.6   13.9    12.2   1.14   0.11     7.85
Study 2  12   15   18.7  16.9    1.8     4.3   0.42   0.15     0.33
Study 3  32   22   79.6  82.2   –2.6    18.9  –0.14   0.08     1.45
Study 4  41   41   62.5  59.3    3.2     6.3   0.51   0.05     1.98
Study 5  38   24   70.4  80.5  –10.1    10.5  –0.96   0.08    17.66
Totals   142  124                              d+ = 0.135*    ∑Q = 29.28**

*d+ is not significant, p > .05; **χ² is significant, p < .05
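A minimal sketch (not the authors' code) that recomputes the fixed-effect summary from the study ns and ds above; small discrepancies from the slide's per-study values reflect rounding:

```python
import math

# (n_E, n_C, d) for the five fictitious studies above
studies = [(19, 22, 1.14), (12, 15, 0.42), (32, 22, -0.14), (41, 41, 0.51), (38, 24, -0.96)]

# Sampling variance of each d (Hedges & Olkin, 1985)
v = [(ne + nc) / (ne * nc) + d**2 / (2 * (ne + nc)) for ne, nc, d in studies]
w = [1 / vi for vi in v]

d_plus = sum(wi * d for wi, (_, _, d) in zip(w, studies)) / sum(w)
se_d_plus = math.sqrt(1 / sum(w))
z = d_plus / se_d_plus
q = sum(wi * (d - d_plus)**2 for wi, (_, _, d) in zip(w, studies))

print(round(d_plus, 3), round(z, 2), round(q, 1))
# ~0.135  ~1.06  ~29.1 -- d+ is not significant (z < 1.96), while Q exceeds the
# chi-square critical value of 9.49 (df = 4), so homogeneity is rejected, as on the slide
```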

24 Graphing the Distribution of Effect Sizes: Forest Plot
[Figure: forest plot of the effect sizes in units of SD; left of zero favors the control, right of zero favors the treatment]
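A hypothetical plotting sketch (matplotlib, not part of the presentation) that draws a forest plot for the fictitious studies of slide 23, showing each d with a 95% confidence interval:

```python
import math
import matplotlib.pyplot as plt

# Fictitious studies from slide 23: (label, n_E, n_C, d)
studies = [("Study 1", 19, 22, 1.14), ("Study 2", 12, 15, 0.42),
           ("Study 3", 32, 22, -0.14), ("Study 4", 41, 41, 0.51),
           ("Study 5", 38, 24, -0.96)]

labels = [s[0] for s in studies]
d = [s[3] for s in studies]
se = [math.sqrt((ne + nc) / (ne * nc) + di**2 / (2 * (ne + nc)))
      for _, ne, nc, di in studies]
ci = [1.96 * s for s in se]                       # 95% CI half-widths

y = range(len(studies))
plt.errorbar(d, y, xerr=ci, fmt="s", capsize=3)   # point estimates with CI bars
plt.axvline(0, linestyle="--")                    # line of no effect
plt.yticks(y, labels)
plt.xlabel("Effect size (units of SD): left favors control, right favors treatment")
plt.tight_layout()
plt.show()
```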

25 Statistics in Comprehensive Meta-Analysis™
[Screenshot: output from Comprehensive Meta-Analysis 1.0, a trademark of BioStat®]
Note: Results from Bernard, Abrami, Lou, et al. (2004), RER

26 Examining Study Features
- Purpose: to attempt to explain variability in effect size
- Any nominally coded study feature can be investigated
- In addition to mean effect size, variability should be investigated
- Study features with small ks may be unstable

27 Examining the Study Feature Gender
Overall effect: d+ = +0.14, k = 60
Males: d+ = –0.14, k = 18
Females: d+ = +0.24, k = 32

28 ANOVA on Levels of Study Features
[Table: results from Bernard, Abrami, Lou, et al. (2004), RER]
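The output table was not captured. As an illustration only, with hypothetical data and a hypothetical helper name (q_partition), the ANOVA-style split of total heterogeneity into between-level and within-level Q can be sketched as:

```python
from collections import defaultdict

def q_partition(d, v, level):
    """Partition total Q into between-level and within-level parts (fixed-effect ANOVA analogue)."""
    w = [1 / vi for vi in v]
    d_all = sum(wi * di for wi, di in zip(w, d)) / sum(w)

    groups = defaultdict(list)
    for di, wi, g in zip(d, w, level):
        groups[g].append((di, wi))

    q_between = q_within = 0.0
    for members in groups.values():
        wg = sum(wi for _, wi in members)
        dg = sum(wi * di for di, wi in members) / wg
        q_between += wg * (dg - d_all) ** 2
        q_within += sum(wi * (di - dg) ** 2 for di, wi in members)
    return q_between, q_within   # each tested against chi-square (df: groups - 1 and k - groups)

# Hypothetical effect sizes, variances, and a coded study feature
d = [1.14, 0.42, -0.14, 0.51, -0.96]
v = [0.11, 0.15, 0.08, 0.05, 0.08]
level = ["male", "female", "male", "female", "male"]
print(q_partition(d, v, level))
```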

29 Sensitivity Analysis
- Tests the robustness of the findings
- Asks the question: Will these results stand up when potentially distorting or deceptive elements, such as outliers, are removed?
- Particularly important for examining the robustness of the effect sizes of study features, as these are usually based on smaller numbers of outcomes (a leave-one-out sketch follows)
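A minimal leave-one-out sketch of one common form of sensitivity check, reusing the fictitious effect sizes and variances from slide 23; this is not code from the presentation:

```python
def fixed_effect_mean(d, v):
    """Inverse-variance weighted average effect size."""
    w = [1 / vi for vi in v]
    return sum(wi * di for wi, di in zip(w, d)) / sum(w)

# Fictitious studies from slide 23: effect sizes and their variances
d = [1.14, 0.42, -0.14, 0.51, -0.96]
v = [0.11, 0.15, 0.08, 0.05, 0.08]

# Leave-one-out: recompute d+ with each study removed in turn
for i in range(len(d)):
    d_rest = d[:i] + d[i + 1:]
    v_rest = v[:i] + v[i + 1:]
    print(f"without study {i + 1}: d+ = {fixed_effect_mean(d_rest, v_rest):.3f}")
```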

30 Meta-Regression
- An adaptation of multiple linear regression
- Effect sizes are weighted in the regression, typically by the inverse of their sampling variance (see the sketch below)
- Used to model study features and blocks of study features with the intention of explaining variation in effect size
- Standard errors, test statistics (z) and confidence intervals for individual predictors must be adjusted (Hedges & Olkin, 1985)
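A minimal weighted-regression sketch under the assumption of inverse-variance weights, with hypothetical data and feature coding (not the authors' model); as the slide notes, the naive standard errors from such a fit would still need adjustment:

```python
import numpy as np

# Hypothetical example: effect sizes, their variances, and one coded study feature
d = np.array([1.14, 0.42, -0.14, 0.51, -0.96])
v = np.array([0.11, 0.15, 0.08, 0.05, 0.08])
feature = np.array([1, 1, 0, 1, 0])          # e.g., 1 = feature present, 0 = absent

w = 1.0 / v                                   # inverse-variance weights
X = np.column_stack([np.ones_like(d), feature])

# Weighted least squares via the sqrt-weight transform
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(X * sw[:, None], d * sw, rcond=None)
print(coef)   # intercept and slope for the study feature
```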

31 Selected References
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., Wallet, P. A., Fiset, M., & Huang, B. (2004). How does distance education compare to classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.
Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.
Hedges, L. V., Shymansky, J. A., & Woodworth, G. (1989). A practical guide to modern methods of meta-analysis. [ERIC Document Reproduction Service No. ED 309 952].

