




1 LECTURE 4 EPSY 652 FALL 2009

2 Computing Effect Sizes – Mean Difference Effects
Glass: e = (Mean Experimental – Mean Control)/SD
o SD = square root of the average of the two variances, for randomized designs
o SD = control-group standard deviation when treatment might affect variation (causes statistical problems in estimation)
Hedges: correct for sampling bias: g = e[ 1 – 3/(4N – 9) ], where N = total number in the experimental and control groups
Standard error of g: Sg = [ (NE + NC)/(NE·NC) + g²/(2(NE + NC)) ]½
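As a sketch, the Glass/Hedges formulas above can be coded directly; the function names and the per-group split are my own illustration, not the lecture's excel workbook:

```python
import math

def hedges_g(e, n_total):
    """Hedges' small-sample bias correction: g = e[1 - 3/(4N - 9)]."""
    return e * (1 - 3 / (4 * n_total - 9))

def se_g(g, n_e, n_c):
    """SE of g per the slide: [(Ne+Nc)/(Ne*Nc) + g^2/(2(Ne+Nc))]^0.5."""
    return math.sqrt((n_e + n_c) / (n_e * n_c) + g ** 2 / (2 * (n_e + n_c)))

# Spencer example from the next slide, assuming 55 per group (N = 110 total)
g = hedges_g(0.8817, 110)   # ~0.876
se = se_g(g, 55, 55)        # ~0.200
```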

3 Computing Effect Sizes – Mean Difference Effects
Example from the Spencer ADHD adult study
Glass: e = (Mean Experimental – Mean Control)/SD = (82 – 101)/21.55 = .8817 (in magnitude)
Hedges: correct for sampling bias: g = e[ 1 – 3/(4N – 9) ] = .8817(1 – 3/(4·110 – 9)) = .8762
Note: SD was computed from the t-statistic of 4.2 given in the article: e = t·(1/NE + 1/NC)½

4 [figure]

5 Computing Mean Difference Effect Sizes from Summary Statistics
t-statistic: e = t·(1/NE + 1/NC)½
F(1, dferror): e = F½·(1/NE + 1/NC)½
Point-biserial correlation: e = r·(dfe/(1 – r²))½·(1/NE + 1/NC)½
Chi-square (Pearson association): φ² = χ²/(χ² + N); e = φ·(N/(1 – φ²))½·(1/NE + 1/NC)½
ANOVA results: compute R² = SSTreatment/SSTotal and treat R as a point-biserial correlation
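A hedged sketch of these conversions (function names mine; each assumes a two-group comparison with sizes NE and NC, and the F conversion assumes 1 numerator df):

```python
import math

def e_from_t(t, n_e, n_c):
    return t * math.sqrt(1 / n_e + 1 / n_c)

def e_from_f(f, n_e, n_c):
    # only for F(1, df_error); an F with 1 numerator df is t squared
    return math.sqrt(f) * math.sqrt(1 / n_e + 1 / n_c)

def e_from_r(r, df_error, n_e, n_c):
    # point-biserial r; r*sqrt(df/(1-r^2)) recovers the t-statistic
    return r * math.sqrt(df_error / (1 - r ** 2)) * math.sqrt(1 / n_e + 1 / n_c)

def e_from_chi2(chi2, n, n_e, n_c):
    phi2 = chi2 / (chi2 + n)   # phi^2 = chi^2/(chi^2 + N)
    return math.sqrt(phi2) * math.sqrt(n / (1 - phi2)) * math.sqrt(1 / n_e + 1 / n_c)
```

All four routes agree when fed the same underlying statistic; for example, e_from_f(t**2, ...) equals e_from_t(t, ...).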

6 Excel workbook for Mean difference computation

7 Story Book Reading References
1 Wasik & Bond (2001). Beyond the Pages of a Book: Interactive Book Reading and Language Development in Preschool Classrooms. Journal of Educational Psychology.
2 Justice & Ezell (2002). Use of Storybook Reading to Increase Print Awareness in At-Risk Children. American Journal of Speech-Language Pathology.
3 Coyne, Simmons, Kame’enui, & Stoolmiller (2004). Teaching Vocabulary During Shared Storybook Readings: An Examination of Differential Effects. Exceptionality.
4 Fielding-Barnsley & Purdie (2003). Early Intervention in the Home for Children at Risk of Reading Failure. Support for Learning.

8 Coding the Outcome
1 Open the Wasik & Bond pdf
2 Open the excel file “computing mean effects example”
3 In Wasik, find NE and NC
4 Decide on the effect(s) to be used. Three outcomes are reported (PPVT, receptive, and expressive vocabulary) at both classroom and student level: what is the unit to be focused on? This is a multilevel issue of students within classrooms; there are too few classrooms for reasonable MLM estimation, and the classroom level is too small for good power, so use the student data

9 Coding the Outcome
5 Determine which reported data are usable: here the AM and PM data are not usable because we don’t have the breakdowns by teacher-classroom, so only the summary tests can be used
6 Data for the PPVT were analyzed as a pre-post treatment design, approximating a covariance analysis; thus the interaction is the only usable summary statistic, since it is the differential effect of treatment vs. control adjusting for pretest differences with a regression weight of 1 (ANCOVA with a restricted covariance weight):
Interaction ij = Grand Mean – treatment effect – pretest effect = Y… – a i.. – b .j.
Graphically, this is the difference between the gain in Treatment (post – pre) and the gain in Control (post – pre)
The F for the interaction was F(1, 120) = 13.69, p < .001. Convert this to an effect size using the excel file Outcomes Computation. What do you get? (.6527)
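Step 6 can be sketched numerically; the equal split of 61 per group is my assumption from df error = 120, and the slide's .6527 evidently comes from the article's actual group sizes, which differ a bit:

```python
import math

f_interaction = 13.69   # F(1, 120) for the interaction in Wasik & Bond
n_e = n_c = 61          # assumed equal groups given df_error = 120
e = math.sqrt(f_interaction) * math.sqrt(1 / n_e + 1 / n_c)
# e is about 0.67 under this assumption; the Outcomes Computation file gives .6527
```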

10 Coding the Outcome
[figure: pre-post gain diagram, Y on the vertical axis, pre and post on the horizontal axis, with Control and Treatment gain lines and the Treatment gain not “predicted” from the Control gain marked]

11 Coding the Outcome
7 For Expressive and Receptive Vocabulary, only the F-tests for the Treatment-Control posttest results are given:
Receptive: F(1, 120) = 76.61, p < .001
Expressive: F(1, 120) = 128.43, p < .001
What are the effect sizes? Use Outcomes Computation. (1.544, 1.999)

12 Getting a Study Effect
Should we average the outcomes to get a single study effect, or
keep the effects separate as different constructs to evaluate later (Expressive, Receptive), or
average the PPVT and receptive outcome as a total receptive vocabulary effect?
Comment: since each effect is based on the same sample size, the effects here can simply be averaged. If missing data had been involved, then we would need to use the weighted effect size equation, weighting the effects by their respective sample sizes within the study

13 Getting a Study Effect
For this example, let’s average the three effects to put into the Computing mean effects example excel file. Note that since we do not have means and SDs, we can set MeanC = 0 and MeanE to the effect size we calculated, set the SDs to 1, and enter the correct sample sizes to get the Hedges g, etc.
(.6567 + 1.553 + 2.01)/3 = 1.4036
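A minimal sketch of the averaging rule from the last two slides (simple mean when sample sizes match, n-weighted mean otherwise; the function name is mine):

```python
def study_effect(effects, ns=None):
    """Simple average of within-study effects; weight by sample size if ns differ."""
    if ns is None:
        return sum(effects) / len(effects)
    return sum(n * g for n, g in zip(ns, effects)) / sum(ns)

avg = study_effect([0.6567, 1.553, 2.01])
# about 1.4066 from these rounded inputs; the slide reports 1.4036,
# presumably computed from less-rounded effect values
```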

14
2 Justice & Ezell: Receptive = 0.403; Expressive = 0.8606; Average = 0.6303
3 Coyne et al.: Taught Vocab = 0.9385; Untaught Vocab = 0.3262; Average = 0.6323
4 Fielding-Barnsley & Purdie: PPVT = –0.0764

15 Computing mean effect size
Use e:\\Computing mean effects1.xls

16 Computing Correlation Effect Sizes
Reported Pearson correlation: use it directly
Regression b-weight: use the reported t-statistic, e = t·(1/NE + 1/NC)½
t-statistic: r = [ t²/(t² + dferror) ]½
Sums of squares from ANOVA or ANCOVA: r = (R²partial)½, where R²partial = SSTreatment/SSTotal
Note: partial ANOVA or ANCOVA results should be noted as such and compared with unadjusted effects

17 Computing Correlation Effect Sizes
To compute correlation-based effects, you can use the excel program “Outcomes Computation correlations”. The next slide gives an example.
Emphasis is on disaggregating the effects of unreliability and sample-based attenuation, and on correcting sample-specific bias in correlation estimation.
For more information, see Hunter and Schmidt (2004), Methods of Meta-Analysis. Sage.
Correlational meta-analyses have focused more on validity issues for particular tests than on treatment or status effects using means.

18 Computing Correlation Effects Example

19 EFFECT SIZE DISTRIBUTION
Hypothesis: all effects come from the same distribution.
What does this look like for studies with different sample sizes?
Funnel plot: originally used to detect bias, it can show what the confidence interval around a given mean effect size looks like.
Note: the plot is NOT smooth, since the CI depends on both the sample sizes AND the effect size magnitude.

20 EFFECT SIZE DISTRIBUTION
Each effect’s SE can be computed from SE = 1/√w
For our 4 effects:
1: 0.200525
2: 0.373633
3: 0.256502
4: 0.286355
These are used to construct a 95% confidence interval around each effect

21 EFFECT SIZE DISTRIBUTION – SE of Overall Mean
The overall mean effect SE can be computed from SE = 1/√(Σw)
For our effect mean of 0.8054, SE = 0.1297
Thus, a 95% CI is approximately (.54, 1.07)
The funnel plot can be constructed by computing an SE for each sample-size pair around the overall mean; this is how the figure on the next slide was constructed in SPSS, along with each article’s effect mean and its CI
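The two SE formulas and the CI can be sketched as follows (pass a single effect's weight for its SE, or the sum of weights for the SE of the overall mean; names mine):

```python
import math

def se_from_w(w):
    """SE = 1/sqrt(w); w may be one weight or the sum of all weights."""
    return 1 / math.sqrt(w)

def ci95(g, se):
    return (g - 1.96 * se, g + 1.96 * se)

lo, hi = ci95(0.8054, 0.1297)   # mean effect and SE from this slide
# (lo, hi) is about (0.551, 1.060), close to the slide's approximate (.54, 1.07)
```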

22 [figure: funnel plot with effect means and confidence intervals, constructed in SPSS]

23 EFFECT SIZE DISTRIBUTION – Statistical test
Hypothesis: all effects come from the same distribution: the Q-test
Q is a chi-square statistic based on the variation of the effects around the mean effect:
Q = Σ wi(gi – gmean)²
Q ~ χ²(k – 1), where k = number of effects

24 Example Computing Q (excel file)

     effect (d)     w       Qi          prob(Qi)      sig?
1:    0.58          5.43    0.7151598   0.397736175   no
2:   –0.05         10.24    0.7326248   0.392033721   no
3:    0.52          4.35    0.3957949   0.52926895    no
4:    0.02          9.69    0.366319    0.545017585   no
5:   –0.30         40.65   10.697349    0.001072891   yes
6:    0.14         29.94    0.1686616   0.681304025   no
7:    0.68         54.85   11.727452    0.000615849   yes
8:   –0.02          4.00    0.2125622   0.644766516   no

Weighted mean effect = 0.2154
Q = 25.015924, df = 7, prob(Q) = 0.0007539
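The Q computation on this slide can be replicated with a short function; the rounded effects and weights from the table give values slightly off the excel file's, which used unrounded inputs:

```python
def q_statistic(effects, weights):
    """Q = sum of w_i * (g_i - weighted mean g)^2, with df = k - 1."""
    g_bar = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    q = sum(w * (g - g_bar) ** 2 for w, g in zip(weights, effects))
    return q, g_bar

effects = [0.58, -0.05, 0.52, 0.02, -0.30, 0.14, 0.68, -0.02]
weights = [5.43, 10.24, 4.35, 9.69, 40.65, 29.94, 54.85, 4.00]
q, g_bar = q_statistic(effects, weights)
# g_bar is about 0.2156 and q about 25.25 from these rounded values,
# vs. the excel file's 0.2154 and Q = 25.0159
```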

25 Computational Excel file
Open the excel file: Computing Q
Enter the effects for the 4 studies and w for each study (you can delete the extra lines or add new ones by inserting as needed) from the Computing mean effect excel file
What Q do you get? Q = 39.57, df = 3, p < .001

26 Interpreting Q
A nonsignificant Q means all effects could have come from the same distribution with a common mean
A significant Q means one or more effects, or a linear combination of effects, came from two (or more) different distributions
The effect-component Q-statistic gives evidence for variation from the mean hypothesized effect

27 Interpreting Q – nonsignificant
Some theorists state that you should stop here; that is incorrect. Homogeneity of the overall distribution does not imply homogeneity with respect to hypotheses regarding mediators or moderators
Example: homogeneous means might correlate perfectly with year of publication (i.e., r = 1.0, p < .001)

28 Interpreting Q – significant
Significance means there may be relationships with hypothesized mediators or moderators
The funnel plot and effect Q-statistics can give evidence for nonconforming effects that may or may not have characteristics you selected and coded for

