
1 Effect size calculation in educational and behavioral research
Wim Van den Noortgate
‘Power training’, Faculty of Psychology and Educational Sciences, K.U.Leuven
Leuven, October 10, 2003
Questions and comments: Wim.VandenNoortgate@ped.kuleuven.ac.be

2
1. Applications
2. A measure for each situation
3. Some specific topics

3 Applications
1. Expressing size of association
2. Comparing size of association
3. Determining power

4 Application 1: Expressing size of association
Example: μ_M = 8; μ_F = 8.5; σ_M = σ_F = 1.5 ⇒ δ = (μ_F − μ_M)/σ = 0.33

5 Application 1: Expressing size of association
Example: μ_M = 8; μ_F = 8.5; σ_M = σ_F = 1.5 ⇒ δ = 0.33
One simulated study: x̄_C = 8.10, x̄_E = 9.34, s_C = s_E = 1.55, p (two-sided) = 0.015 (*), g = 0.80

6 Application 1: Expressing size of association
Example: μ_M = 8; μ_F = 8.5; σ_M = σ_F = 1.5 ⇒ δ = 0.33
Ten simulated studies:
x̄_C: 8.10, 7.60, 7.96, 7.70, 8.17, 7.86, 8.19, 8.11, 7.86, 8.34
x̄_E: 9.34, 7.59, 8.81, 8.25, 8.81, 7.93, 8.15, 7.94, 8.53
s_C: 1.55, 1.23, 1.38, 1.49, 1.76, 1.24, 1.79, 1.76, 1.89, 1.39
s_E: 1.55, 1.47, 1.59, 1.65, 1.33, 1.58, 1.78, 1.97, 1.64, 1.79
p (two-sided): 0.015 (*), 0.98, 0.078, 0.28, 0.87, 0.040 (*), 0.65, 0.95, 0.89, 0.71
g: 0.80, -0.0069, 0.57, 0.35, 0.053, 0.67, -0.14, 0.020, 0.042, 0.12
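As a side note, a minimal sketch of how a standardized mean difference such as the g values in this table can be computed from group summary statistics. The function name and the group size of 20 per group are my own assumptions; the slides do not give the group sizes.

```python
import math

def smd(mean_e, mean_c, sd_e, sd_c, n_e, n_c):
    """Standardized mean difference: (mean_E - mean_C) / pooled within-group SD."""
    s_pooled = math.sqrt(((n_e - 1) * sd_e ** 2 + (n_c - 1) * sd_c ** 2)
                         / (n_e + n_c - 2))
    return (mean_e - mean_c) / s_pooled

# First simulated study from the table above (n = 20 per group is only an
# illustrative assumption; with equal SDs the result does not depend on n)
g1 = smd(9.34, 8.10, 1.55, 1.55, 20, 20)
print(round(g1, 2))  # 0.8
```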

7 [Figure: δ and g]

8 [Figure: g scale with 0 and 0.33 marked]
The ten simulated studies, with a 95% confidence interval for each g:
g: 0.80, -0.0069, 0.57, 0.35, 0.053, 0.67, -0.14, 0.020, 0.042, 0.12
95% CI: [0.17; 1.43], [-0.63; 0.62], [-0.06; 1.20], [-0.28; 0.98], [-0.57; 0.68], [0.04; 1.30], [-0.77; 0.49], [-0.61; 0.65], [-0.59; 0.67], [-0.51; 0.75]
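A sketch of one common large-sample approximation for such an interval. The variance formula and the assumed group sizes of 20 per group are standard choices of mine, not taken from the slides, so the result will differ slightly from the intervals above.

```python
import math

def ci_for_g(g, n_e, n_c, z=1.96):
    """Approximate 95% CI for a standardized mean difference,
    using the usual large-sample variance of g."""
    var_g = (n_e + n_c) / (n_e * n_c) + g ** 2 / (2 * (n_e + n_c))
    half = z * math.sqrt(var_g)
    return g - half, g + half

lo, hi = ci_for_g(0.80, 20, 20)
print(f"[{lo:.2f}; {hi:.2f}]")  # roughly [0.16; 1.44]
```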

9 Suppose the simulated data are the data of 10 studies that are replications of each other:

10 Comparing individual study results and combined study results
Individual studies:
1. Observed effect sizes may be negative, small, moderate, or large.
2. Confidence intervals are relatively large.
3. 0 is often included in the confidence intervals.
Combined results:
4. The combined effect size is close to the population effect size.
5. The confidence interval is relatively small.
6. 0 is not included in the confidence interval.
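A minimal sketch of how individual estimates can be combined. It uses the standard fixed-effect, inverse-variance weighted mean, which is one common choice and not necessarily the exact procedure behind the slide; the numbers are made up for illustration.

```python
import math

def fixed_effect_combine(effects, ses):
    """Inverse-variance weighted mean of effect sizes, with its standard error."""
    weights = [1 / se ** 2 for se in ses]
    combined = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    se_combined = math.sqrt(1 / sum(weights))
    return combined, se_combined

# Hypothetical example: three studies with their effect sizes and standard errors
g_combined, se = fixed_effect_combine([0.80, 0.35, 0.12], [0.32, 0.31, 0.33])
print(f"combined g = {g_combined:.2f}, 95% CI = "
      f"[{g_combined - 1.96 * se:.2f}; {g_combined + 1.96 * se:.2f}]")
```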

11 Meta-analysis: Gene Glass (Educational Researcher, 1976, p.3): “Meta-analysis refers to the analysis of analyses”

12 Application 2: Comparing the size of association
Example: Raudenbush & Bryk (2002), 19 studies:
Rosenthal et al. (1974), Conn et al. (1968), Jose & Cody (1971), Pellegrini & Hicks (1972), Evans & Rosenthal (1969), Fielder et al. (1971), Claiborn (1969), Kester & Letchworth (1972), Maxwell (1970), Carter (1970), Flowers (1966), Keshock (1970), Henrickson (1970), Fine (1972), Greiger (1970), Rosenthal & Jacobson (1968), Fleming & Anttonen (1971), Ginsburg (1970)
Weeks of previous contact: 2, 3, 3, 0, 0, 3, 3, 3, 0, 1, 0, 0, 1, 2, 3, 3, 1, 2, 3
g: 0.03, 0.12, -0.14, 1.18, 0.26, -0.06, -0.02, -0.32, 0.27, 0.80, 0.54, 0.18, -0.02, 0.23, -0.18, -0.06, 0.30, 0.07, -0.07
SE: 0.13, 0.15, 0.17, 0.37, 0.10, 0.22, 0.16, 0.25, 0.30, 0.22, 0.29, 0.16, 0.17, 0.14, 0.09, 0.17

13 Results of the meta-analysis:
1. The variation between the observed effect sizes is larger than could be expected on the basis of sampling variance alone: the population effect size is probably not the same for all studies.
2. The effect depends on the amount of previous contact.
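A sketch of one standard way to check whether observed effect sizes vary more than sampling variance alone would predict, using Cochran's Q. This is a common approach and my own illustration, not necessarily the exact analysis behind the slide; it uses the first four studies of the Raudenbush & Bryk table above.

```python
from scipy.stats import chi2

def cochran_q(effects, ses):
    """Cochran's Q: does the variation between effect sizes exceed
    what sampling variance alone would produce?"""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    q = sum(w * (g - pooled) ** 2 for w, g in zip(weights, effects))
    df = len(effects) - 1
    return q, df, chi2.sf(q, df)  # sf gives the upper-tail p-value

# First four studies from the table on slide 12
q, df, p = cochran_q([0.03, 0.12, -0.14, 1.18], [0.13, 0.15, 0.17, 0.37])
print(f"Q = {q:.1f}, df = {df}, p = {p:.3f}")
```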

14 Application 3: Power calculations
Power = the probability of rejecting H0
Power depends on:
- δ
- α
- N

15 ‘Powerful’ questions:
1. Suppose the population effect size is small (δ = 0.20): how large should my sample size (N) be to have a high probability (say, .80) of concluding that there is an effect (power), when testing at an α-level of .05?
2. I did not find an effect, but maybe the chance of finding an effect (power) with such a small sample was small anyway? (take N and α from the study and assume, for instance, that δ = g)
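A sketch of how the first question can be answered with a normal approximation for a two-group comparison. The function is my own illustration (a power routine from a statistics package would do the same job), not code from the presentation.

```python
import math
from scipy.stats import norm

def n_per_group(delta, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-group test
    of a standardized mean difference delta (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_power) ** 2 / delta ** 2)

# Question 1 from the slide: small effect, alpha = .05, desired power = .80
print(n_per_group(0.20))  # roughly 393 participants per group
```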

16 A measure for each situation

17

18 Dichotomous independent - dichotomous dependent variable
                         Final exam
                         1      0     Total
Predictive test    1    130     20     150
                   0     30     20      50
Total                   160     40     200

19 Dichotomous independent - dichotomous dependent variable
                         Final exam
                         1             0            Total
Predictive test    1    130 (87 %)    20 (13 %)     150 (100 %)
                   0     30 (60 %)    20 (40 %)      50 (100 %)
Total                   160           40             200
1. Risk difference: .87 - .60 = .27
2. Relative risk: .87 / .60 = 1.45
3. Phi: (130 x 20 - 20 x 30) / sqrt(150 x 50 x 160 x 40) = 0.29
4. Odds ratio: (130 x 20) / (20 x 30) = 4.33
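A small sketch that reproduces the four measures from this 2x2 table; the function and variable names are mine.

```python
import math

def two_by_two_measures(a, b, c, d):
    """Effect sizes for a 2x2 table laid out as
        a b   (row 1: predictive test = 1)
        c d   (row 2: predictive test = 0)
    """
    risk1, risk2 = a / (a + b), c / (c + d)
    risk_difference = risk1 - risk2
    relative_risk = risk1 / risk2
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    odds_ratio = (a * d) / (b * c)
    return risk_difference, relative_risk, phi, odds_ratio

print(two_by_two_measures(130, 20, 30, 20))
# roughly (0.27, 1.44, 0.29, 4.33)
```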

20 A measure for each situation

21 Dichotomous independent - continuous dependent variable
1. Independent groups, homogeneous variance
2. Independent groups, heterogeneous variance
3. Repeated measures (one group)
4. Repeated measures (independent groups)
5. Nonparametric measures
6. r_pb

22 A measure for each situation

23 Nominal independent - nominal dependent variable
1. Contingency measures, e.g.:
   a. Pearson's contingency coefficient
   b. Cramér's V
   c. Phi coefficient
2. Goodman-Kruskal tau
3. Uncertainty coefficient
4. Cohen's Kappa
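A sketch of one of these measures, Cramér's V, computed from the chi-square statistic of a contingency table. The function is my own illustration; it is applied to the 'Illness' table shown on the next slide.

```python
import math
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for an r x c contingency table."""
    table = np.asarray(table)
    chi2, p, dof, expected = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1  # the smaller of (rows - 1, columns - 1)
    return math.sqrt(chi2 / (n * k))

# Experimental vs. Control by Better / Same / Worse (slide 24)
print(round(cramers_v([[10, 5, 2], [4, 7, 3]]), 2))
```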

24 Illness
               Better   Same   Worse
Experimental     10       5      2
Control           4       7      3

Illness
               Better   Same   Worse
Control          10       5      2
Experimental      4       7      3

Illness
               Same   Better   Worse
Experimental    10       5       2
Control          4       7       3

25 A measure for each situation

26 Nominal independent - continuous dependent variable
1. ANOVA: multiple g's
2. η²
3. ICC
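A minimal sketch of η² as the proportion of variance accounted for, computed from ANOVA sums of squares; the function is my own illustration, using two values from the factorial ANOVA table shown later on slide 32.

```python
def eta_squared(ss_effect, ss_total):
    """Eta squared: proportion of the total variance accounted for by the effect."""
    return ss_effect / ss_total

# SS for Dose (420) and total SS (4120) from the ANOVA table on slide 32
print(round(eta_squared(420, 4120), 2))  # about 0.10
```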

27 A measure for each situation

28 Continuous independent - continuous dependent variable
1. r
2. Non-normal data: Spearman's ρ
3. Ordinal data: Kendall's τ, Somers' D, gamma coefficient
4. Weighted Kappa
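A small sketch showing how the first three of these measures can be obtained with scipy; the paired scores are invented for illustration only.

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

# Hypothetical paired scores
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 9]

r, _ = pearsonr(x, y)       # Pearson correlation
rho, _ = spearmanr(x, y)    # Spearman rank correlation
tau, _ = kendalltau(x, y)   # Kendall's tau
print(f"r = {r:.2f}, Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```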

29 More complex situations
1. Two or more independent variables
   a) Regression models

30
1. Y continuous: Y_i = a + bX_i + e_i
   1. X continuous: b is estimated by the ordinary least-squares slope
   2. X dichotomous (1 = experimental, 0 = control): b is estimated by the difference between the two group means
2. Y dichotomous: Logit(P(Y_i = 1)) = a + bX_i
   If X is dichotomous, b is estimated by the log odds ratio
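A small sketch (with made-up data) illustrating the point for a dichotomous X: the ordinary least-squares slope b then equals the difference between the two group means.

```python
import numpy as np

# Hypothetical outcome scores; X = 1 for experimental, 0 for control
y = np.array([9.3, 8.8, 9.9, 8.5, 7.9, 8.1, 7.6, 8.4])
x = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# OLS fit of Y = a + bX
design = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(design, y, rcond=None)

print(round(b, 3))                                     # estimated slope
print(round(y[x == 1].mean() - y[x == 0].mean(), 3))   # same value: mean difference
```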

31 More complex situations
1. Two or more independent variables
   a) Regression models
   b) Stratification
   c) Contrast analyses in factorial designs (Rosenthal, Rosnow & Rubin, 2000)

32 Cell means:
                          Number of treatments weekly
                            0      1      2      3     Mean
Dose of       100 mg        3     10      9     12      8.5
medication     50 mg        1      4      8      9      5.5
                0 mg        1      4      6      5      4.0
Mean                      1.67   6.00   7.67   8.67     6.0

Source            SS     Df      MS       F        p
Between          1420    11    129.09    5.16   .000002
  Treatments      860     3    286.67   11.47   .000002
  Dose            420     2    210.00    8.40   .0004
  Treat. x dose   140     6     23.33    0.93   .47
Within           2700   108     25.00
Total            4120   119

Note: N = 120 (12 x 10)

33
                          Number of treatments weekly
                            0     1     2     3
Dose of       100 mg       -3    -1    +1    +3
medication     50 mg       -3    -1    +1    +3
                0 mg       -3    -1    +1    +3

                          Number of treatments weekly
                            0     1     2     3    Mean
Dose of       100 mg       -1    +1    +3    +5     +2
medication     50 mg       -3    -1    +1    +3      0
                0 mg       -5    -3    -1    +1     -2
Mean                       -3    -1    +1    +3      0
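A sketch of how a contrast value, its F test, and a contrast effect size can be computed from the cell means of slide 32 with the weights of the second table above. It uses the usual contrast formulas (including r_contrast = sqrt(F / (F + df_within))); the code itself is my illustration, not taken from the presentation.

```python
import math

# Cell means from slide 32 (n = 10 observations per cell, MS_within = 25)
means   = [[3, 10, 9, 12],      # 100 mg
           [1,  4, 8,  9],      #  50 mg
           [1,  4, 6,  5]]      #   0 mg
weights = [[-1,  1, 3,  5],     # combined contrast weights (second table above)
           [-3, -1, 1,  3],
           [-5, -3, -1, 1]]
n_per_cell, ms_within, df_within = 10, 25.0, 108

L = sum(w * m for row_w, row_m in zip(weights, means)
              for w, m in zip(row_w, row_m))                    # contrast value
ss_contrast = n_per_cell * L ** 2 / sum(w ** 2 for row in weights for w in row)
f_contrast = ss_contrast / ms_within                            # F with 1 and 108 df
r_contrast = math.sqrt(f_contrast / (f_contrast + df_within))   # contrast effect size

print(L, round(f_contrast, 1), round(r_contrast, 2))  # 104 47.0 0.55
```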

34 More complex situations
1. Two or more independent variables
   a) Regression models
   b) Stratification
   c) Contrast analyses in factorial designs
2. Multilevel models
3. Two or more dependent variables
4. Single-case studies

35

36 Y_i = b_0 + b_1 phase_i + e_i
Y_i = b_0 + b_1 time_i + b_2 phase_i + b_3 (time_i x phase_i) + e_i
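A sketch of how the second model (a change in level and in trend between the baseline and treatment phases) can be fitted to data from one case; the single-case AB data below are invented for illustration.

```python
import numpy as np

# Hypothetical single-case AB data: 6 baseline and 6 treatment observations
time  = np.arange(12, dtype=float)
phase = np.array([0] * 6 + [1] * 6, dtype=float)
y     = np.array([3, 4, 3, 5, 4, 5, 7, 8, 8, 9, 10, 10], dtype=float)

# Y_i = b0 + b1*time_i + b2*phase_i + b3*(time_i x phase_i) + e_i
design = np.column_stack([np.ones_like(time), time, phase, time * phase])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
b0, b1, b2, b3 = coefs
print(f"level change b2 = {b2:.2f}, trend change b3 = {b3:.2f}")
```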

37 Specific topics

38 Comparability of effect sizes
Example: g_IG (independent groups) vs. g_gain (gain scores):
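The formulas of this comparison are not legible in the transcript; one standard way to write the relation, assuming the same pre-post correlation ρ in both groups, is:

```latex
\delta_{IG} = \frac{\mu_E - \mu_C}{\sigma},
\qquad
\delta_{gain} = \frac{\mu_{E,\mathrm{gain}} - \mu_{C,\mathrm{gain}}}{\sigma_{\mathrm{gain}}},
\qquad
\sigma_{\mathrm{gain}} = \sigma\sqrt{2(1-\rho)}
```

The two indices thus standardize by different denominators, and so estimate different population parameters unless ρ = .5.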

39 Comparability of effect sizes
1. Estimating different population parameters, e.g., g_IG vs. g_gain
2. Estimating with different precision, e.g., g vs. Glass's Δ

40 Choosing a measure
1. Design and measurement level
2. Assumptions
3. Popularity
4. Simplicity of the sampling distribution, e.g.:
   Fisher's Z = 0.5 ln[(1 + r) / (1 - r)]
   Log odds ratio
   ln(RR)
5. Directional effect size
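A small sketch of why the Fisher Z transformation simplifies things: Z is approximately normal with standard error 1/sqrt(n - 3), so a confidence interval can be built on the Z scale and transformed back to the r scale. The function and the numbers are my own illustration.

```python
import math

def fisher_ci_for_r(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation via the Fisher Z transformation."""
    z = 0.5 * math.log((1 + r) / (1 - r))    # Fisher's Z
    se = 1 / math.sqrt(n - 3)
    lo, hi = z - z_crit * se, z + z_crit * se
    # transform the limits back to the r scale (inverse of the Z transformation)
    back = lambda v: (math.exp(2 * v) - 1) / (math.exp(2 * v) + 1)
    return back(lo), back(hi)

print(tuple(round(v, 2) for v in fisher_ci_for_r(0.30, 50)))  # roughly (0.02, 0.53)
```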

41 Threats to effect sizes
1. ‘Bad data’
2. Measurement error
3. Artificial dichotomization
4. Imperfect construct validity
5. Range restriction

42

43

44 Threats to effect sizes
1. ‘Bad data’
2. Measurement error
3. Artificial dichotomization
4. Imperfect construct validity
5. Range restriction
6. Bias

