Funded through the ESRC's Researcher Development Initiative. Prof. Herb Marsh, Ms. Alison O'Mara, Dr. Lars-Erik Malmberg. Department of Education, University of Oxford. Session 1.3 – Equations

- Establish research question
- Define relevant studies
- Develop code materials
- Locate and collate studies
- Pilot coding; coding
- Data entry and effect size calculation
- Main analyses
- Supplementary analyses


Formula for the observed effect size in a fixed effects model:

$d_j = \delta + e_j$

where
- $d_j$ is the observed effect size in study j
- $\delta$ is the 'true' population effect
- $e_j$ is the residual due to sampling variance in study j

In this and the following formulae, we use the symbols d and δ to refer to any measure of the observed and the true effect size, not necessarily the standardized mean difference.

To calculate the overall mean observed effect size (the estimate of δ in the fixed effects equation):

$\bar{d} = \dfrac{\sum w_i d_i}{\sum w_i}$

where $w_i$ = weight for the individual effect size, and $d_i$ = the individual effect size.
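A minimal Python sketch of this calculation (illustrative only; the effect sizes and variances are made-up values, and the weights are assumed to be the inverse-variance weights defined on the next slide):

```python
import numpy as np

# Hypothetical observed effect sizes d_i and sampling variances v_i for k studies
d = np.array([0.30, 0.10, 0.45, 0.25])
v = np.array([0.02, 0.05, 0.03, 0.04])

w = 1.0 / v                           # inverse-variance weights (fixed effects)
d_bar = np.sum(w * d) / np.sum(w)     # overall weighted mean effect size
se_d_bar = np.sqrt(1.0 / np.sum(w))   # standard error of the mean effect size

print(f"Mean effect size: {d_bar:.3f} (SE = {se_d_bar:.3f})")
```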

- The effect sizes are weighted by the inverse of the variance ($w_i = 1 / v_i$) to give more weight to effects based on large sample sizes.
- The standard error of each effect size is given by the square root of the sampling variance: $SE_i = \sqrt{v_i}$.
- The variances are calculated differently for each type of effect size.

- Variance for a standardised mean difference effect size is calculated as

  $v_i = \dfrac{n_1 + n_2}{n_1 n_2} + \dfrac{d_i^2}{2(n_1 + n_2)}$

  where $n_1$ = sample size of group 1, $n_2$ = sample size of group 2, and $d_i$ = the effect size for study i.
- Variance for a correlation effect size (analysed as a Fisher z-transformed correlation) is calculated as

  $v_i = \dfrac{1}{n_i - 3}$

  where $n_i$ is the total sample size of the study.
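A short Python sketch of these variance formulas (a sketch only; the function names are mine, the example values are hypothetical, and the correlation branch assumes the Fisher z transformation noted above):

```python
import numpy as np

def variance_smd(d, n1, n2):
    """Sampling variance of a standardized mean difference effect size."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def variance_fisher_z(n):
    """Sampling variance of a Fisher z-transformed correlation."""
    return 1.0 / (n - 3)

# Hypothetical study: d = 0.40 with 25 participants per group
v = variance_smd(0.40, 25, 25)
se = np.sqrt(v)        # standard error = square root of the variance
w = 1.0 / v            # fixed effects (inverse-variance) weight
print(f"v = {v:.4f}, SE = {se:.4f}, w = {w:.1f}")
```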

Expand the general model to include predictors:

$d_j = \delta + \sum_s \beta_s X_{sj} + e_j$

where
- $\beta_s$ is the regression coefficient (regression slope) for explanatory variable s
- $X_{sj}$ is study characteristic s of study j

Example: Gender as a predictor of achievement

Formula for the observed effect size in a random effects model:

$d_j = \delta + u_j + e_j$

where
- $d_j$ is the observed effect size in study j
- $\delta$ is the mean 'true' population effect size
- $u_j$ is the deviation of the true study effect size from the mean true effect size
- $e_j$ is the residual due to sampling variance in study j

To calculate the overall mean observed effect size (the estimate of δ in the random effects equation):

$\bar{d} = \dfrac{\sum w_i d_i}{\sum w_i}$

where $w_i$ = weight for the individual effect size, and $d_i$ = the individual effect size.

- Random effects differs from fixed effects in the calculation of the weighting ($w_i$).
- Recall the weighting formula for the fixed effects model: $w_i = 1 / v_i$.
- The random effects weight includes 2 variance components: within-study variance ($v_i$) and between-study variance ($v_\theta$).
- The new weighting for the random effects model ($w_{i\,RE}$) is given by the formula

  $w_{i\,RE} = \dfrac{1}{v_i + v_\theta}$

- $v_i$ is calculated the same way as in the fixed effects model.

$v_\theta$ is calculated using the following formula:

$v_\theta = \dfrac{Q - (k - 1)}{\sum w_i - \dfrac{\sum w_i^2}{\sum w_i}}$

where
- Q = the Q-statistic (a measure of whether the effect sizes all come from the same population)
- k = number of studies included in the sample
- $w_i$ = effect size weight, calculated as in the fixed effects model
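A minimal Python sketch of this method-of-moments calculation and the resulting random effects weights (the function name and data are mine; negative estimates of $v_\theta$ are truncated at zero, a common convention):

```python
import numpy as np

def between_study_variance(d, v):
    """Method-of-moments estimate of v_theta using the formula above."""
    w = 1.0 / v                            # fixed effects weights
    d_bar = np.sum(w * d) / np.sum(w)      # fixed effects mean
    q = np.sum(w * (d - d_bar) ** 2)       # Q-statistic
    k = len(d)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / c)

# Hypothetical effect sizes and within-study variances
d = np.array([0.30, 0.10, 0.45, 0.25, -0.05])
v = np.array([0.02, 0.05, 0.03, 0.04, 0.06])

v_theta = between_study_variance(d, v)
w_re = 1.0 / (v + v_theta)                  # random effects weights
d_bar_re = np.sum(w_re * d) / np.sum(w_re)  # random effects mean effect size
print(f"v_theta = {v_theta:.4f}, RE mean = {d_bar_re:.3f}")
```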

- Thus, larger studies receive proportionally less weight in the RE model than in the FE model.
- This is because a constant is added to the denominator, so the relative effect of sample size is smaller in the RE model.

- If the homogeneity test is rejected (it almost always will be), it suggests that there are larger differences than can be explained by chance variation (at the individual participant level). There is more than one "population" in the set of different studies.
- The random effects model determines how much of this between-study variation can be explained by study characteristics that we have coded.

Expand the general model to include predictors:

$d_j = \delta + \sum_s \beta_s X_{sj} + u_j + e_j$

where
- $\beta_s$ is the regression coefficient (regression slope) for explanatory variable s
- $X_{sj}$ is study characteristic s of study j

Example: Gender as a predictor of achievement

Formula for the observed effect size in a multilevel model:

$d_j = \gamma_0 + u_j + e_j$

where
- $d_j$ is the observed effect size in study j
- $\gamma_0$ is the mean 'true' population effect size
- $u_j$ is the deviation of the true study effect size from the mean true effect size
- $e_j$ is the residual due to sampling variance in study j

Note: This model treats the moderator effects as fixed and the $u_j$s as random effects.

In this equation, predictors are included in the model:

$d_j = \gamma_0 + \sum_s \gamma_s X_{sj} + u_j + e_j$

where
- $\gamma_s$ is the regression coefficient (regression slope) for the explanatory variable (equivalent to β in multiple regression)
- $X_{sj}$ is study characteristic s of study j

Example: Gender as a predictor of achievement

- If the between-study variance = 0, the multilevel model simplifies to the fixed effects regression model.
- If no predictors are included, the model simplifies to the random effects model.
- If the level 2 variance = 0, the model simplifies to the fixed effects model.

Many meta-analysts use an adaptive (or "conditional") approach, sketched in code below:
- IF between-study variance is found in the homogeneity test
- THEN use a random effects model
- OTHERWISE use a fixed effects model
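A toy Python sketch of that decision rule, reusing the quantities defined in the earlier slides (the function name, alpha threshold, and use of a chi-square test with k − 1 degrees of freedom are my assumptions about a typical implementation):

```python
import numpy as np
from scipy import stats

def conditional_mean_effect(d, v, alpha=0.05):
    """Fixed effects mean unless the homogeneity (Q) test is rejected."""
    w = 1.0 / v
    d_bar_fe = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_bar_fe) ** 2)
    k = len(d)
    if stats.chi2.sf(q, df=k - 1) < alpha:       # significant heterogeneity
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        v_theta = max(0.0, (q - (k - 1)) / c)    # between-study variance
        w_re = 1.0 / (v + v_theta)
        return np.sum(w_re * d) / np.sum(w_re), "random effects"
    return d_bar_fe, "fixed effects"
```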

- Fixed effects models are very common, even though the assumption of homogeneity is "implausible" (Noortgate & Onghena, 2003).
- There is a considerable lag in the uptake of new methods by applied meta-analysts.
- Meta-analysts need to stay on top of these developments by:
  - attending courses
  - reading widely across disciplines


- Usually start with a Q-test to determine the overall mean effect size and the homogeneity of the effect sizes (MeanES.sps macro).
- If there is significant heterogeneity, then:
  - 1) a random effects analysis should probably be conducted instead
  - 2) moderators of the effect sizes should be modelled (to determine the source/s of variance)

The homogeneity (Q) test asks whether the different effect sizes are likely to have all come from the same population (an assumption of the fixed effects model). Are the differences among the effect sizes no bigger than might be expected by chance?

$Q = \sum_{i=1}^{k} w_i (d_i - \bar{d})^2$

where
- $d_i$ = effect size for each study (i = 1 to k)
- $\bar{d}$ = mean effect size
- $w_i$ = a weight for each study based on the sample size

However, this (chi-square) test is heavily dependent on sample size. It is almost always significant unless the numbers (studies and people in each study) are VERY small. This means that the fixed effects model will almost always be rejected in favour of a random effects model.
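For concreteness, a minimal sketch of the Q computation and its chi-square test in Python (the data are hypothetical values, not from the workshop example):

```python
import numpy as np
from scipy import stats

d = np.array([0.30, 0.10, 0.45, 0.25, -0.05])   # hypothetical effect sizes
v = np.array([0.02, 0.05, 0.03, 0.04, 0.06])    # hypothetical sampling variances

w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_bar) ** 2)
df = len(d) - 1
p = stats.chi2.sf(Q, df)     # Q is compared to a chi-square with k - 1 degrees of freedom
print(f"Q = {Q:.2f}, df = {df}, p = {p:.3f}")
```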

Significant heterogeneity in the effect sizes, therefore random effects is more appropriate and/or moderators need to be modelled.

- The analogue to the ANOVA homogeneity analysis is appropriate for categorical variables.
- Looks for systematic differences between groups of responses within a variable.
- Easy to implement using the MetaF.sps macro:

  MetaF ES = d /W = Weight /GROUP = TXTYPE /MODEL = FE.
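A rough Python sketch of what this analysis computes, partitioning the total Q into between-group and within-group parts (the group labels and values are hypothetical; this mimics the fixed effects version of the analysis, not the macro itself):

```python
import numpy as np
from scipy import stats

d = np.array([0.30, 0.10, 0.45, 0.25, -0.05, 0.50])
v = np.array([0.02, 0.05, 0.03, 0.04, 0.06, 0.03])
group = np.array(["A", "A", "B", "B", "A", "B"])      # e.g. treatment type

w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)
q_total = np.sum(w * (d - d_bar) ** 2)

q_within = 0.0
for g in np.unique(group):
    m = group == g
    d_bar_g = np.sum(w[m] * d[m]) / np.sum(w[m])      # weighted mean within group g
    q_within += np.sum(w[m] * (d[m] - d_bar_g) ** 2)

q_between = q_total - q_within
df_between = len(np.unique(group)) - 1
p_between = stats.chi2.sf(q_between, df_between)      # do the group means differ?
print(f"Q_between = {q_between:.2f}, p = {p_between:.3f}")
```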

- Multiple regression homogeneity analysis is more appropriate for continuous variables and/or when there are multiple variables to be analysed.
- Tests the ability of groups within each variable to predict the effect size.
- Categorical variables can be included in the multiple regression as dummy variables.
- Easy to implement using the MetaReg.sps macro:

  MetaReg ES = d /W = Weight /IVS = IV1 IV2 /MODEL = FE.
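A bare-bones Python sketch of a fixed effects meta-regression of this kind, fitted by weighted least squares (the moderator, variable names, and data are made up; the macro itself also reports homogeneity statistics not shown here):

```python
import numpy as np

d = np.array([0.30, 0.10, 0.45, 0.25, -0.05, 0.50])    # effect sizes
v = np.array([0.02, 0.05, 0.03, 0.04, 0.06, 0.03])     # sampling variances
year = np.array([1990, 1995, 2000, 2005, 2010, 2015])  # hypothetical moderator

W = np.diag(1.0 / v)                                    # inverse-variance weight matrix
X = np.column_stack([np.ones_like(d), year - year.mean()])  # intercept + centred moderator

# Weighted least squares: beta = (X'WX)^-1 X'W d
XtWX_inv = np.linalg.inv(X.T @ W @ X)
beta = XtWX_inv @ X.T @ W @ d
se = np.sqrt(np.diag(XtWX_inv))                         # standard errors of the coefficients
print("coefficients:", beta, "SEs:", se)
```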

- Like the FE model, RE uses ANOVA and multiple regression to model potential moderators/predictors of the effect sizes, if the Q-test reveals significant heterogeneity.
- Easy to implement using the MetaF.sps macro (ANOVA) or MetaReg.sps (multiple regression):

  MetaF ES = d /W = Weight /GROUP = TXTYPE /MODEL = ML.

  MetaReg ES = d /W = Weight /IVS = IV1 IV2 /MODEL = ML.

Significant heterogeneity in the effect sizes, therefore moderators need to be modelled.

- Similar to multiple regression, but corrects the standard errors for the nesting of the data.
- Start with an intercept-only (no predictors) model, which incorporates both the outcome-level and the study-level components.
  - This tells us the overall mean effect size.
  - It is similar to a random effects model.
- Then expand the model to include predictor variables, to explain systematic variance between the study effect sizes (see the sketch below).
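The two-step workflow can be illustrated in Python without MLwiN. The sketch below is only an approximation of the multilevel approach: it estimates the between-study variance by a method-of-moments step rather than the maximum likelihood estimation used in MLwiN, but it follows the same logic of fitting an intercept-only model first and then adding a study-level predictor (all names and data are hypothetical):

```python
import numpy as np

def wls(X, d, w):
    """Weighted least squares coefficients and their standard errors."""
    W = np.diag(w)
    cov = np.linalg.inv(X.T @ W @ X)
    beta = cov @ X.T @ W @ d
    return beta, np.sqrt(np.diag(cov))

d = np.array([0.30, 0.10, 0.45, 0.25, -0.05, 0.50])   # effect sizes
v = np.array([0.02, 0.05, 0.03, 0.04, 0.06, 0.03])    # within-study variances
x = np.array([0, 0, 1, 1, 0, 1])                      # hypothetical study characteristic

# Step 1: intercept-only model -> overall mean effect size (like a random effects model)
w_fe = 1.0 / v
d_bar = np.sum(w_fe * d) / np.sum(w_fe)
q = np.sum(w_fe * (d - d_bar) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
v_theta = max(0.0, (q - (len(d) - 1)) / c)            # between-study variance estimate
w_re = 1.0 / (v + v_theta)
beta0, se0 = wls(np.ones((len(d), 1)), d, w_re)
print(f"overall mean effect size: {beta0[0]:.3f} (SE = {se0[0]:.3f})")

# Step 2: add a study-level predictor to explain between-study variance
X = np.column_stack([np.ones_like(d), x])
beta, se = wls(X, d, w_re)
print(f"intercept = {beta[0]:.3f}, moderator slope = {beta[1]:.3f} (SE = {se[1]:.3f})")
```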

(MLwiN screenshot)

Using the same simulated data set with n = 15

- The random effects approach is better than the fixed effects approach in almost all conceivable cases.
- "The results of the simulation study suggest that the maximum likelihood multilevel approach is in general superior to the fixed-effects approaches, unless only a small number of studies is available. For models without moderators, the results of the multilevel approach, however, are not substantially different from the results of the traditional random-effects approaches" (Van den Noortgate & Onghena, 2003, p. 765).

Multilevel models:
- build on the fixed and random effects models
- account for between-study variance (like random effects)
- are similar to multiple regression, but correct the standard errors for the nesting of the data; improved modelling of the nesting of levels within studies increases the accuracy of the estimation of standard errors on parameter estimates and the assessment of the significance of explanatory variables (Bateman and Jones, 2003)
- are more precise when there is greater between-study heterogeneity
- allow flexibility in modelling the data when one has multiple moderator variables (Raudenbush & Bryk, 2002)

- Multilevel modelling has the promise of being able to include multivariate data – still being developed.
- Easy to implement in MLwiN (once you know how!).
- See worked examples for HLM, MLwiN, SAS, & Stata at

- Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage Publications.
- Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models (2nd ed.). Thousand Oaks, CA: Sage Publications.
- Van den Noortgate, W., & Onghena, P. (2003). Multilevel meta-analysis: A comparison with traditional meta-analytical procedures. Educational and Psychological Measurement, 63.
- Wilson's "meta-analysis stuff" website: