Presentation transcript:

1 Estimating Common or Average Effects
One goal in most meta-analyses is to examine overall, or typical, effects. We might wish to estimate and test the value of an overall (general) parameter:
H0: θ = 0.
Here θ could represent any of the kinds of outcomes we listed earlier in class.

2 Estimating Common or Average Effects
Under the random-effects model, we can test H0: μθ = 0, that is, that the average of the θi values is zero; for example, H0 could state that the mean of the population correlations is zero. Formulas for estimates of these parameters will follow.

3 Estimating Common or Average Effects
We saw above the value of the fixed-effects mean for the teacher expectancy data. Our SPSS output also showed a random-effects mean for the TE data. Why are they different?

4 [Plot of the study effects with the RE and FE means marked.]
The RE mean is just a bit higher. The FE mean is pulled toward the large studies with low effects, such as studies 6, 7, and 18.

5 Estimating Common Effects
For the fixed-effects model, and to get Q, we compute an effect that we will assume is common to all studies. Our book calls this M, but I prefer to call it T. (read "T-dot"). We use a weighted mean, weighting each data point by the inverse of its variance (i.e., w_i = 1/V(T_i) = 1/V_i):
T. = Σ w_i T_i / Σ w_i
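A minimal sketch of this computation in Python with numpy, using made-up effect sizes and variances (an illustration only, not the course's SPSS or R programs):

```python
# Fixed-effects weighted mean of hypothetical effect sizes.
import numpy as np

T = np.array([0.03, 0.12, -0.14, 1.18, 0.26, -0.06])      # hypothetical effects T_i
V = np.array([0.016, 0.022, 0.028, 0.139, 0.136, 0.011])  # hypothetical variances V_i

w = 1.0 / V                          # fixed-effects weights w_i = 1/V_i
T_dot = np.sum(w * T) / np.sum(w)    # T. = sum(w_i T_i) / sum(w_i)
print(f"Fixed-effects mean T. = {T_dot:.4f}")
```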

6 Estimating Common Effects
Since we need the FE mean for the test of homogeneity, we can get it from SPSS code, from pull-down analyses, or from R code. However, we must decide whether or not to report it.

7 Estimating Average Effects
For the random-effects model, we compute an average of the effects that we believe truly vary across studies. The model for the effect size was T_i = θ_i + e_i, with variance V(T_i) = V_i* = σ²θ + V_i. We use an estimate of σ²θ to get new V_i* variances and weights that incorporate both parts of the variation in the effects T_i.

8 [Plot of the study confidence intervals under the RE and FE models.]
The RE CIs are more equal in width, and the studies have very similar influence on the mean. These RE CIs are made using the new RE variances V_i*.

9 Estimating Average Effects
For RE we weight each data point by the new RE weight, the inverse of its random-effects variance: w_i* = 1/[V_i + σ²θ], with σ²θ replaced by its estimate. Thus, to get the RE mean, we compute
T.* = Σ w_i* T_i / Σ w_i*
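A matching sketch for the RE mean in Python with hypothetical data; the between-studies variance estimate (tau2_hat below) is simply assumed to be available, e.g., from the SVAR or QVAR estimators described later:

```python
# Random-effects weighted mean of hypothetical effect sizes.
import numpy as np

T = np.array([0.03, 0.12, -0.14, 1.18, 0.26, -0.06])      # hypothetical effects T_i
V = np.array([0.016, 0.022, 0.028, 0.139, 0.136, 0.011])  # hypothetical variances V_i
tau2_hat = 0.08                                            # assumed estimate of sigma^2_theta

w_star = 1.0 / (V + tau2_hat)                    # RE weights w_i* = 1/(V_i + sigma^2_theta)
T_dot_star = np.sum(w_star * T) / np.sum(w_star)
print(f"Random-effects mean T.* = {T_dot_star:.4f}")
```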

10 The FE variances (V) range from about .01 to .14. This is fairly typical; large n goes with small V. We added .08 (our estimate of σ²θ) to each V value to get the RE variance V*.

11 The ratio of the largest weight to the smallest one is 16:1 for the FE weights (w), but it is only 2.5:1 for the RE weights (w*, "wstar" in the output). Large studies do not have as much relative influence under the RE model.

12 Estimating Common Effects
We also need a variance (or standard error) for the mean. To compute the SE for the fixed-effects case we use the inverse variances of the individual effects, or equivalently the weights. The variance of the fixed-effects mean is
V(T.) = 1 / Σ w_i
and the SE is the square root of the variance.
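Continuing the hypothetical Python sketch, the variance of the FE mean is just the reciprocal of the summed weights:

```python
# Variance, SE, and 95% CI of the fixed-effects mean (hypothetical data).
import numpy as np

T = np.array([0.03, 0.12, -0.14, 1.18, 0.26, -0.06])
V = np.array([0.016, 0.022, 0.028, 0.139, 0.136, 0.011])

w = 1.0 / V
T_dot = np.sum(w * T) / np.sum(w)
var_T_dot = 1.0 / np.sum(w)                   # V(T.) = 1 / sum(w_i)
se_T_dot = np.sqrt(var_T_dot)
print(f"T. = {T_dot:.4f}, SE = {se_T_dot:.4f}, "
      f"95% CI = ({T_dot - 1.96*se_T_dot:.3f}, {T_dot + 1.96*se_T_dot:.3f})")
```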

13 Estimating Common Effects
The variance (or SE) for the random-effects mean is very similar to the FE variance:
V(T.*) = 1 / Σ w_i*
and the SE is the square root of the variance. This variance will be larger than the FE variance because of the addition of the σ²θ estimate to each V_i.
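The same computation with the RE weights shows why the RE standard error is larger (hypothetical inputs, with tau2_hat assumed known):

```python
# Compare FE and RE standard errors of the mean (hypothetical data).
import numpy as np

V = np.array([0.016, 0.022, 0.028, 0.139, 0.136, 0.011])
tau2_hat = 0.08                                      # assumed estimate of sigma^2_theta

se_fe = np.sqrt(1.0 / np.sum(1.0 / V))               # SE of the FE mean
se_re = np.sqrt(1.0 / np.sum(1.0 / (V + tau2_hat)))  # SE of the RE mean
print(f"SE(T.) = {se_fe:.4f}  vs  SE(T.*) = {se_re:.4f}")
```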

14 Random-effects variances
However, we have not yet seen how to estimate σ²θ. We will consider two method-of-moments estimators; I call them SVAR and QVAR in our programs. SVAR is based on a "typical" sample variance of the T's (like you learned in intro stats class), and QVAR is computed using Q, so it is weighted. These are not the "best" variance estimates, but they are easily obtained using SPSS.

15 Random-effects variances: SVAR
SVAR uses the sample variance of the T's. In the random-effects model, T_i = θ_i + e_i. Therefore V(T_i) = V(θ_i) + V(e_i), so V(T_i) - V(e_i) = V(θ_i). Taking the expected value of each part,
E[V(T_i)] - E[V(e_i)] = E[V(θ_i)] = σ²θ.
So our estimator is
SVAR = S²_T - v̄,
where S²_T is the sample variance of the T_i and v̄ ("v-bar") is the mean of the sampling variances V_i.
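A sketch of the SVAR idea with the same hypothetical data; negative estimates are truncated at zero, a common convention the slide does not spell out:

```python
# SVAR: sample variance of the effects minus the mean sampling variance.
import numpy as np

T = np.array([0.03, 0.12, -0.14, 1.18, 0.26, -0.06])
V = np.array([0.016, 0.022, 0.028, 0.139, 0.136, 0.011])

S2_T = np.var(T, ddof=1)          # ordinary sample variance of the T_i
v_bar = np.mean(V)                # v-bar, the average sampling variance
svar = max(S2_T - v_bar, 0.0)     # SVAR = S^2_T - v-bar, truncated at zero
print(f"SVAR = {svar:.4f}")
```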

16 Random-effects variances: QVAR
QVAR is computed using Q and some sums of the weights w_i and squared weights w_i². The amount by which Q exceeds its df (for the total Q, df = k-1) is the part due to "true" differences, not sampling error. There is no good non-technical explanation of how the weights work here, but the formula is
QVAR = [ Q - (k-1) ] / [ Σ w_i - ( Σ w_i² / Σ w_i ) ]
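A sketch of QVAR with the same hypothetical data (this moment-based formula is often attributed to DerSimonian and Laird; again, negative values are truncated at zero):

```python
# QVAR: method-of-moments estimate based on the homogeneity statistic Q.
import numpy as np

T = np.array([0.03, 0.12, -0.14, 1.18, 0.26, -0.06])
V = np.array([0.016, 0.022, 0.028, 0.139, 0.136, 0.011])

w = 1.0 / V
T_dot = np.sum(w * T) / np.sum(w)
Q = np.sum(w * (T - T_dot) ** 2)                 # fixed-effects homogeneity statistic
k = len(T)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)       # sum(w_i) - sum(w_i^2)/sum(w_i)
qvar = max((Q - (k - 1)) / c, 0.0)               # QVAR, truncated at zero
print(f"Q = {Q:.3f}, QVAR = {qvar:.4f}")
```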

17 The SPSS output also gives the two values of QVAR and SVAR.

Fixed-effects Homogeneity Test (Q)
P-value for Homogeneity test (P)                                        .0074
Birge's ratio, ratio of Q/(k-1)
I-squared, ratio 100[Q-(k-1)]/Q                                         .4976
Variance Component based on Homogeneity Test (QVAR)                     .0259
Variance Component based on S² and v-bar (SVAR)                         .0804
RE Lower Conf Limit for T_Dot (L_T_DOT)
Weighted random-effects average of effect size based on SVAR (T_DOT)    .1143
RE Upper Conf Limit for T_Dot (U_T_DOT)                                 .2696

18 The new RE mean is larger, and its SE is larger. The new SE is over twice the size of the FE standard error, so the random-effects CI is also more than twice as wide. Finally, we return to slides 9 and 13 and add the σ²θ estimate to the V_i values to get the RE mean and its SE.

Model      Mean      SE of Mean      CI width
Fixed
Random

19 In this case we decide, based on the CI, that the mean effect size is not different from zero. We can also test H0: μθ = 0 using the RE mean and SE: Z = T.*/SE(T.*) = 0.11/0.079 ≈ 1.39. We compare this sample Z to a critical Z value such as Z_C = 1.96 for a two-tailed test at α = .05. This Z is not large enough to reject H0. On average the teacher expectancy effect is about a tenth of a standard deviation in the sample, but there is essentially no evidence of a true overall difference.
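The Z test on this slide can be reproduced directly from the reported mean and SE (scipy is assumed only for the p-value):

```python
# Z test of H0: mu_theta = 0 using the RE mean and SE quoted on the slide.
from scipy.stats import norm

re_mean, re_se = 0.11, 0.079
z = re_mean / re_se                        # Z = T.* / SE(T.*)
p = 2 * (1 - norm.cdf(abs(z)))             # two-tailed p-value
print(f"Z = {z:.2f}, p = {p:.3f}, reject at alpha = .05? {abs(z) > 1.96}")
```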

20 However, some studies show much larger effects. Now we can separately interpret the estimate of σ²θ. Our best estimate of the mean of the population effects is 0.11, and their estimated variance is 0.08 (SD = .28). This is NOT the same as the SE of the mean effect, which was 0.079 on the previous slides. The values of SVAR and QVAR tell us how spread out the population of effects seems to be.

21 We can use the estimate of σ²θ to draw a picture of the population of effects. The estimated variance of the population effects is 0.08 (SD = .28). We assume a normal shape for the population and center it on the RE mean. If the effects were normally distributed, the distribution of TRUE effects would look like the normal curve shown on the slide. About 95% of the θi's would lie between roughly -0.44 and 0.66.
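The 95% range of the true effects quoted above follows directly from the normality assumption; a small sketch using the slide's mean and variance:

```python
# Approximate 95% range of the TRUE effects, assuming theta_i ~ Normal(0.11, 0.08).
import numpy as np

mu = 0.11                    # RE mean from the slides
tau = np.sqrt(0.08)          # between-studies SD, about 0.28
lower, upper = mu - 1.96 * tau, mu + 1.96 * tau
print(f"About 95% of the true effects lie in ({lower:.2f}, {upper:.2f})")
# Prints roughly (-0.44, 0.66), matching the slide.
```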

22 The previous slide contains an embedded Excel chart that allows you to make a graph of your own data. Double-click the plot and you will see a spreadsheet. Go to the tab labeled "normal". Enter a new mean for the population (where you see 0.11) and a new SD (replace .283), then go back to the Chart tab.

23 Mixed Effects Models
If each population effect differs, and T_i = θ_i + e_i, we may want to model or explain the variation in the population effects. For example, we may have
θ_i = β_0 + β_1 X_1i + u'_i.
Substituting, we have
T_i = β_0 + β_1 X_1i + u'_i + e_i,
that is, the observed effect equals the effect predicted from X, plus unexplained between-studies variation, plus sampling error.
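A rough sketch of fitting such a model by weighted least squares in Python; the moderator X and the residual between-studies variance are made up, and a full mixed-effects analysis would re-estimate that residual variance rather than plugging in a fixed value:

```python
# Weighted least-squares fit of T_i = beta_0 + beta_1 * X_1i + error,
# using random-effects weights with an assumed residual variance.
import numpy as np

T = np.array([0.03, 0.12, -0.14, 1.18, 0.26, -0.06])      # effect sizes
V = np.array([0.016, 0.022, 0.028, 0.139, 0.136, 0.011])  # sampling variances
X = np.array([1.0, 2.0, 3.0, 0.5, 1.5, 2.5])              # hypothetical moderator X_1i
tau2_resid = 0.05                                          # assumed residual sigma^2_theta

w = 1.0 / (V + tau2_resid)                 # random-effects weights
W = np.diag(w)
D = np.column_stack([np.ones_like(X), X])  # design matrix for [beta_0, beta_1]
beta = np.linalg.solve(D.T @ W @ D, D.T @ W @ T)
print(f"beta_0 = {beta[0]:.3f}, beta_1 = {beta[1]:.3f}")
```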