Combining Effect Sizes

Combining Effect Sizes: Taking the Average

How to Combine (1)

Take the simple mean: add up all the effect sizes and divide by the number of effect sizes.

Study   ES
1       1.0
2        .5
3        .3

M = (1 + .5 + .3) / 3 = 1.8 / 3 = .6

The simple (unit-weight) mean is an unbiased and consistent estimator, but it is not efficient. See Bonett, though, for an argument in favor of using unit weights.
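A minimal sketch of the unit-weight average in plain Python (no meta-analysis package assumed; the effect sizes are the three values from the table above):

```python
# Unweighted (unit-weight) mean of the three effect sizes from the slide.
es = [1.0, 0.5, 0.3]

simple_mean = sum(es) / len(es)
print(simple_mean)  # 0.6
```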

How to Combine (2)

Take a weighted average: multiply each effect size by its weight, sum, and divide by the sum of the weights.

Study   ES   W (weight)   W(ES)
1       1.0      1          1.0
2        .5      2          1.0
3        .3      3           .9

M = (1 + 1 + .9) / (1 + 2 + 3) = 2.9 / 6 = .48 (compare .6 with unit weights)

Unit weights are the special case in which every w = 1.
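The same three studies with the weighted average, again as a plain-Python sketch (the weights 1, 2, and 3 are the hypothetical values from the table above):

```python
# Weighted average of the same effect sizes, using the weights from the slide.
es = [1.0, 0.5, 0.3]
w = [1.0, 2.0, 3.0]

weighted_mean = sum(wi * e for wi, e in zip(w, es)) / sum(w)
print(round(weighted_mean, 2))  # 0.48, versus 0.6 with unit weights
```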

How to Combine (3)

Choice of weights. All of the choices below are consistent: they give good estimates as the number of studies and the sample sizes of the studies increase. (A small simulation of the efficiency difference follows this list.)

- Unit weights: unbiased, but inefficient.
- Sample-size weights: unbiased (arguably), and efficient relative to unit weights.
- Inverse-variance weights, the reciprocal of the sampling variance (or of Ve + REVC, the sampling variance plus the random-effects variance component): biased if the parameter figures in the sampling variance, but the most efficient.
- Other weights: special weights that depend on the model, e.g., adjusting for reliability (Schmidt & Hunter).
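Not from the slides: a toy simulation sketch of the efficiency point, assuming (for illustration only) that each study's observed effect equals the true effect plus normal error with sampling variance 1/n. It compares how much the unit-weight and sample-size-weight estimates vary across repeated meta-analyses of the same hypothetical studies:

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.5
n = np.array([20, 50, 500])   # hypothetical study sample sizes
reps = 10_000

unit_means, size_weighted_means = [], []
for _ in range(reps):
    # One simulated meta-analysis: each study's observed effect is the true
    # effect plus normal error with variance 1/n (a stand-in sampling variance).
    obs = rng.normal(true_effect, np.sqrt(1.0 / n))
    unit_means.append(obs.mean())                            # unit weights
    size_weighted_means.append(np.average(obs, weights=n))   # sample-size weights

print("mean (unit weights):       ", round(np.mean(unit_means), 3))
print("mean (sample-size weights):", round(np.mean(size_weighted_means), 3))
print("SD   (unit weights):       ", round(np.std(unit_means), 3))
print("SD   (sample-size weights):", round(np.std(size_weighted_means), 3))
# Both averages sit near the true effect, but the sample-size-weighted estimate
# varies less across replications, i.e., it is the more efficient of the two.
```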

How to Combine (4)

Inverse-variance weights (fixed effects) are a function of the sample size, and sometimes also of a parameter.

For the mean: v = s^2 / n, so w = n / s^2.
For r: v = (1 - r^2)^2 / (n - 1), so w = (n - 1) / (1 - r^2)^2.
For r transformed to Fisher's z: v = 1 / (n - 3), so w = n - 3.

Note that for two of these (the mean and z), the effect-size parameter is not part of the weight. But for r (not the z transform), the observed r appears in its own weight, so larger observed values get more weight and the weighted mean of r can be biased.
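A sketch of these weights written as small functions; the sample sizes, standard deviations, and correlations below are made-up values chosen only to show why the untransformed r can be troublesome:

```python
def w_mean(n, s):
    # Weight for a mean: reciprocal of s**2 / n; the effect (the mean) is absent.
    return n / s**2

def w_r(n, r):
    # Weight for a correlation: reciprocal of (1 - r**2)**2 / (n - 1);
    # the observed r appears in its own weight.
    return (n - 1) / (1 - r**2)**2

def w_z(n):
    # Weight for Fisher's z: reciprocal of 1 / (n - 3); depends on n only.
    return n - 3

print(w_mean(50, 2.0), w_mean(200, 2.0))  # 12.5 vs 50.0: only n and s matter
print(w_r(100, 0.10), w_r(100, 0.60))     # ~101 vs ~242: larger |r| gets more weight
print(w_z(100), w_z(100))                 # 97 vs 97: equal weight after the z transform
```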