Heterogeneity in Hedges. Fixed Effects Borenstein et al., 2009, pp. 64-65.


Heterogeneity in Hedges

Fixed Effects Borenstein et al., 2009, pp. 64-65.

Random Effects Borenstein et al., 2009, p. 72.

Typical Model In most applications of meta-analysis, we want to draw inferences about a class of studies, only some of which we are able to observe, so the inference we typically want is random-effects. In most meta-analyses, there is a good deal of variability left over after accounting for sampling error, so the model we typically want is a varying-coefficients model (one that estimates the random-effects variance component, or REVC). There are always exceptions to rules, but this is the default position.

Homogeneity Test When the null hypothesis (a homogeneous rho, i.e., a single common effect) is true, Q is distributed as chi-square with (k - 1) df, where k is the number of studies. This is a test of whether the Random Effects Variance Component is zero.
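The arithmetic behind Q can be sketched in a few lines. This is a hand-rolled Python illustration, not metafor's implementation; the effect sizes yi and sampling variances vi in the example call are made up:

```python
def q_statistic(yi, vi):
    """Cochran's Q: weighted sum of squared deviations of the study
    effects from the fixed-effects mean, using inverse-variance weights."""
    w = [1.0 / v for v in vi]                           # fixed-effects weights
    m = sum(wi * y for wi, y in zip(w, yi)) / sum(w)    # weighted mean effect
    return sum(wi * (y - m) ** 2 for wi, y in zip(w, yi))

# Under the null, Q ~ chi-square with k - 1 df (k = number of studies).
print(q_statistic([0.2, 0.3, 0.4], [0.01, 0.01, 0.01]))  # ~ 2.0 for these toy numbers
```

With k = 3 here, the reference distribution is chi-square with 2 df, so a Q near 2 gives no evidence of heterogeneity.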

Q Recall that a chi-square is the sum of squared z-scores (deviates from the unit normal, squared).

Estimating the REVC If the REVC estimate is less than zero, set it to zero.
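A minimal sketch of the standard method-of-moments (DerSimonian-Laird-style) estimator with that truncation rule, in plain Python with made-up inputs (again, not metafor's own code):

```python
def dl_revc(yi, vi):
    """Method-of-moments REVC estimate: (Q - df) / C, truncated at zero."""
    w = [1.0 / v for v in vi]
    m = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - m) ** 2 for wi, y in zip(w, yi))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)     # scaling constant
    return max(0.0, (q - (len(yi) - 1)) / c)           # negative estimates -> 0
```

With homogeneous toy data the raw moment estimate comes out at or below zero and is reported as 0.0, which is exactly the truncation the slide describes.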

Random-Effects Weights Inverse variance weights give weight to each study depending on the uncertainty about the true value of that study. For fixed-effects, there is only sampling error. For random-effects, there is also uncertainty about where in the distribution the study came from, so there are two sources of error. The inverse-variance weight is therefore 1/(vi + REVC): the reciprocal of the study's sampling variance plus the random-effects variance component.
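As a sketch (hypothetical Python, not metafor), the random-effects weight just adds the REVC to each study's sampling variance before inverting, and the summary effect is the weighted mean under those weights:

```python
def re_weights(vi, tau2):
    """Random-effects inverse-variance weights: 1 / (vi + REVC)."""
    return [1.0 / (v + tau2) for v in vi]

def re_mean(yi, vi, tau2):
    """Random-effects summary effect: weighted mean using the weights above."""
    w = re_weights(vi, tau2)
    return sum(wi * y for wi, y in zip(w, yi)) / sum(w)
```

Setting tau2 = 0 recovers the fixed-effects weights, which makes the "two sources of error" point concrete: the REVC only ever shrinks the weights.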

I-squared Conceptually, I-squared is the proportion of total variation that is due to 'true' differences between studies, that is, the proportion of total variance due to random effects.
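In terms of Q, a common way to compute it is the excess of Q over its degrees of freedom, relative to Q (an illustrative Python sketch, with the truncation at zero matching the REVC rule above):

```python
def i_squared(q, k):
    """I^2 as a percentage: max(0, (Q - df) / Q) * 100, with df = k - 1."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100.0
```

For example, a Q of 18 on 2 studies gives an I-squared of about 94 percent, while a Q at or below its df gives 0 percent.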

Comparison

Statistic      Depends on k   Depends on scale
Q                   X
p                   X
tau-squared                         X
tau                                 X
I-squared

I-squared does, however, depend on the sample size of the included studies: the larger the sample sizes, the smaller the sampling variances, and thus the larger I-squared. The random-effects variance has some size that is indexed in units of the observed effect size (e.g., r). To me, the prediction interval is the most interpretable.

Confidence intervals for tau and tau-squared Insert ugly formulas here - or not. Suffice it to say that confidence intervals can be computed for the random-effects variance estimates. In metafor, compute the results of the meta-analysis using rma(), then ask for the confidence interval of the results with confint().

Prediction or Credibility Intervals - Higgins These make sense for random-effects models. M is the random-effects mean (summary effect). The value of t comes from the t table, with your alpha and df equal to (k - 2), where k is the number of independent effect sizes (studies). The variance is the REVC plus the squared standard error of the RE summary effect. This is the prediction interval given in Borenstein et al. (2009).
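Putting those pieces together in a hedged Python sketch; the critical value t_crit must be supplied from a t table with (k - 2) df, since the standard library has no t quantile function:

```python
import math

def higgins_pi(m, tau2, se_m, t_crit):
    """Higgins prediction interval: M +/- t * sqrt(REVC + SE(M)^2)."""
    half = t_crit * math.sqrt(tau2 + se_m ** 2)
    return (m - half, m + half)
```

For example, with M = 0.3, tau-squared = 0.04, SE(M) = 0.1, and t_crit = 2.0 (illustrative numbers only), the interval is roughly (-0.15, 0.75): much wider than the confidence interval for M alone, because the REVC enters the half-width.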

Prediction or Credibility Intervals - Metafor Metafor uses z rather than t. To get the prediction interval for the mean in metafor, call predict() on the results. If you want the Higgins version (which you probably do unless you have k > 100 studies), you will need to use Excel or some other calculator to replace the critical value of z with t.

Two Examples Example in d. Example in r (transformed to z). Then the class exercise.

Example 1 (Borenstein) Data from Borenstein et al., 2009, p. 88.

Run metafor Note tau and the standard error of the mean. You will need these for computing the Higgins prediction interval.

Find Confidence Intervals

Find the metafor Pred. Int. Note the difference between the confidence interval and the prediction interval (credibility interval): a large REVC makes a big difference. I prefer the prediction interval for interpreting the magnitude of variability because it is expressed as the range of outcomes you expect to see, in the metric of the observed effect sizes. In this case, it would be reasonable, based on the evidence, to see true standardized mean differences from about … to .79. This is more interpretable than I-squared = 54 percent. However, note that the CI for tau-squared is large, so take this interval with a grain of salt.

Compute Higgins Pred. Int. Note that metafor uses z, not t, in computing the prediction interval. If you want Higgins' method, compute it in Excel from the estimates provided by metafor. See my spreadsheet.

Example 2 (McLeod) Data from McLeod (2007): the correlation between parenting style and child depression.

Run metafor Note I could have input z and v from the Excel file…

Prediction Interval Because we analyzed in z, we need to translate back to r. Also, we don't yet have Higgins' prediction interval, so let's compute it.

Translate back from z to r
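The back-transform is the inverse Fisher transform, which is just the hyperbolic tangent (a Python sketch; apply it to both endpoints of the interval computed in z):

```python
import math

def z_to_r(z):
    """Inverse Fisher z transform: r = tanh(z) = (e^(2z) - 1) / (e^(2z) + 1)."""
    return math.tanh(z)

def r_to_z(r):
    """Fisher z transform, for reference: z = atanh(r)."""
    return math.atanh(r)
```

Because tanh is monotone, transforming the z-scale endpoints gives the correct r-scale interval directly.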

Class Exercise Use the McDaniel data (dat.mcdaniel1994) to compute I-squared and the prediction interval (both z- and t-based) for these data. Compute using ZCOR and translate the prediction intervals back into r. If the effect sizes are correlations between job interview scores and job performance criteria, what is your interpretation of the result? Interpretation of the overall mean. Interpretation of the amount of variability. Interpretation of the prediction interval.