Meta-analysis: Conceptual and Methodological Introduction

1 Meta-analysis: Conceptual and Methodological Introduction
PART TWO: Basic Meta-Analysis Summary Methods (or Getting Down to Business)
Judith A. Hall and Jin X. Goh, UMass meta-analysis workshop, Dec 2, 2016

2 Making a Codesheet
An iterative process: it changes as you read more studies
As the codesheet evolves, so do your inclusion criteria
The study attributes you code serve two purposes (either or both):
Describing the database (how many studies of various designs, etc.)
Moderator variables

3 Making a Codesheet Make your coding rules explicit and keep good records of your decisions and justifications

4 Coding Moderator Variables
Low-inference coding: simple, objective (sample size, nationality, sex ratio)
Medium-inference coding: judgment is required (Ex: is what they call “task avoidance” really just an anxiety scale?)
High-inference coding: subjective ratings (Ex: in a gender-aggression meta-analysis, coders rated how dangerous participants would think the situation is and how much they would fear retaliation)
Intercoder reliability is very important

6 Essential Steps for Describing Central Tendency
Calculate indices of outcome:
1. Effect size (ES)
2. Z (the standard normal deviate; e.g., Z = 1.96 corresponds to p = .05, two-tailed)
Keep signs going in the correct direction!
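As a sketch of the Z index, assuming Python (the workshop does not prescribe software; the function name is mine), a reported two-tailed p-value with a known direction can be converted back to a signed Z:

```python
from statistics import NormalDist

def p_to_z(p_two_tail, sign=1):
    """Convert a reported two-tailed p-value into a standard normal
    deviate Z; sign (+1 or -1) preserves the effect's direction."""
    return sign * NormalDist().inv_cdf(1 - p_two_tail / 2)

print(round(p_to_z(0.05), 2))  # 1.96, matching the example above
```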

7 Effect Size (ES): r family
Pearson r: between two continuous variables
Point-biserial r: a Pearson r between a dichotomous variable and a continuous variable
Phi: a Pearson r between two dichotomous variables
Never use a squared index for an ES: it has no sign, it could come from an omnibus analysis, and it gives a very false impression of the magnitude of effect. Squared indices are not, to my knowledge, ever used in meta-analysis.

8 Effect Size (ES): r family
All rs must be converted to Fisher’s z for calculations (normalizing transformation), then converted back to r metric for final presentation. Use a “Fisher Z calculator” that you can find on the web, or a table in a stats book.
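The round trip can also be sketched in Python rather than a web calculator (the function names are mine): Fisher’s z is the inverse hyperbolic tangent of r, and tanh converts back.

```python
import math

def fisher_z(r):
    # Fisher's z = arctanh(r), the normalizing transformation
    return math.atanh(r)

def z_to_r(z):
    # back-transform to the r metric for final presentation
    return math.tanh(z)

# do calculations in the z metric, then report in the r metric
rs = [0.10, 0.30, 0.50]
mean_z = sum(fisher_z(r) for r in rs) / len(rs)
result = z_to_r(mean_z)
```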

9 Effect Size (ES): d family
“d family”: a two-group comparison expressed in SD units
Cohen’s d = (M1 − M2) / pooled within-group SD (where each group’s SD is calculated using n in the denominator)
There are other ES metrics for two groups, but they are less often used and not very different in practice.
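A sketch of Cohen’s d as defined above, assuming Python; note the n (not n − 1) denominator in each group’s SD:

```python
import math

def cohens_d(group1, group2):
    """(M1 - M2) / pooled within-group SD, with each group's SD
    computed using n in the denominator, per the definition above."""
    def sd_pop(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    pooled = math.sqrt((n1 * sd_pop(group1) ** 2 + n2 * sd_pop(group2) ** 2) / (n1 + n2))
    return (m1 - m2) / pooled
```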

10 Extracting Results
Conversion formulas are essential for getting your ES
Ex: What is Z if you know only r and N? Z = r × √N
Ex: What is r if you know only a two-group F and its df? r = √(F / (F + error df))
Ex: What is d if you know only r? d = 2r / √(1 − r²)
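The three conversion formulas above can be written directly, assuming Python (the function names are mine):

```python
import math

def z_from_r(r, n):
    # Z = r * sqrt(N)
    return r * math.sqrt(n)

def r_from_f(f, error_df):
    # r = sqrt(F / (F + error df)), for a two-group F
    return math.sqrt(f / (f + error_df))

def d_from_r(r):
    # d = 2r / sqrt(1 - r^2)
    return 2 * r / math.sqrt(1 - r ** 2)
```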

11 Common Problems and Questions
Study quality: include only “good” studies, or include all studies but code quality as a moderator?
Either way, you need firm criteria for what constitutes “quality,” such as random assignment or acceptable scale reliability
“Quality” must be coded independently of (blind to) the results

12 Common Problems and Questions
Multiple ESs in a given study
Early meta-analyses did not maintain independence between ESs; almost all meta-analyses now do
How to maintain it: omit some ESs, or average the ESs within the study (the most common approach)
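The averaging option can be sketched, assuming Python and r-family ESs; consistent with slide 8, the averaging happens in the Fisher z metric (the function name is mine):

```python
import math

def average_study_es(rs):
    """Collapse several rs from one study into a single independent ES:
    transform to Fisher z, average, transform back to r."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))
```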

13 Common Problems and Questions
Multiple ESs in a given study BUT, it is generally permissible to have the same study contribute multiple ESs to different analyses Ex: Meta-analysis on relation of mood state to accurate emotion recognition. If a given study measured two mood states (e.g., anger and happiness), that study could contribute one ES to the anger-accuracy analysis and another ES to the happiness-accuracy analysis. You would explain this in your Method section.

14 Fixed vs. Random Effects
Fixed: assumes one population ES. Results generalize to the same studies with a different (random) selection of participants. Weights each ES by sample size. More statistical power, less generalizability (more power means smaller p-values).
Random: does not assume there is one population ES. Results generalize to new studies that may have different designs. Lower statistical power, better generalization (lower power means less impressive p-values).
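To make “weights ES by sample size” concrete, here is a sketch (assuming Python and r-family ESs; the studies are hypothetical) of a fixed-effect mean that weights each Fisher z by n − 3, the inverse of its sampling variance:

```python
import math

# hypothetical (r, N) pairs, one per study
studies = [(0.20, 50), (0.35, 120), (0.10, 80)]

zs = [math.atanh(r) for r, n in studies]
ws = [n - 3 for _, n in studies]            # inverse-variance weights for Fisher z
mean_z = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
fixed_effect_r = math.tanh(mean_z)          # back to the r metric
```

Larger studies pull the estimate toward their ESs, which is the source of the power/generalizability trade-off described above.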

15 Fixed vs. Random Effects
Authors are more attuned to the issues around fixed vs. random effects than they once were
Authors still sometimes mix up random and fixed, or fail to specify which they used, but less often than was once the case

16 Random Effects: TWO MODELS
“Fully” random effects: apply ordinary statistical procedures
--Median, mean (unweighted), variance
--One-sample t-test of the mean ES against zero; 95% CI
--Test effects of study characteristics (moderators) on ES using correlation, independent-groups t-test, ANOVA, regression, etc.
“N” for this approach is the number of studies
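The “fully” random approach really is just ordinary statistics with studies as cases; a sketch in Python, using hypothetical study-level ESs:

```python
import math
from statistics import mean, stdev

es = [0.10, 0.25, 0.30, 0.05, 0.40, 0.20]  # hypothetical study-level ESs

k = len(es)                    # "N" is the number of studies
m = mean(es)                   # unweighted mean ES
se = stdev(es) / math.sqrt(k)  # standard error of the mean
t = m / se                     # one-sample t against zero, df = k - 1
```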

17 Random Effects: TWO MODELS
“Hybrid” random effects: weights each ES by sample size AND adds room for between-studies variation
This is what is most often referred to as “random effects,” which it is; however, it is not fully random effects, which makes one wonder what the generalization is to.
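One common way to “add room for between-studies variation” is the DerSimonian-Laird estimator of the between-studies variance; the slide does not name a specific method, so this Python sketch is one illustrative choice, with hypothetical Fisher-z ESs:

```python
# hypothetical (fisher_z, N) pairs, one per study
studies = [(0.20, 50), (0.35, 120), (0.10, 80)]

ys = [z for z, n in studies]
vs = [1 / (n - 3) for _, n in studies]  # within-study variance of Fisher z
ws = [1 / v for v in vs]                # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-studies variance tau^2
ybar = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
q = sum(w * (y - ybar) ** 2 for w, y in zip(ws, ys))
c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# hybrid random-effects weights add tau^2 to each study's variance,
# flattening the weights relative to the fixed-effect analysis
ws_re = [1 / (v + tau2) for v in vs]
mean_re = sum(w * y for w, y in zip(ws_re, ys)) / sum(ws_re)
```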

18 Fixed vs. Random Effects
Use the ESs weighted (by sample size) or unweighted?
Weighting makes sense if one assumes all studies are estimating the same population ES; in that case, weighting gives a better (more believable, more accurate) estimate of the population ES
This makes sense only if the studies have rather similar designs; if not, weighting ES by sample size may actually be weighting by certain study characteristics

19 More Problems and Questions
What if an ES can’t be calculated?
Leave those studies out (i.e., use known effects only), or
Estimate ES = 0 for those studies (thereby using all studies), or
Do both and present results both ways

20 More Problems and Questions
Units of analysis: pick one
Individuals? Students aggregated within classrooms? Classrooms aggregated within schools?
The more aggregated the units of analysis, the bigger the ES will (typically) be
DON’T MIX LEVELS OF ANALYSIS

21 More Problems and Questions
When studies are both between- and within-participants
A problem, because the within-participants ESs will likely be bigger
Solutions: keep these separate, or code between/within as a moderator and use it in the analysis, or “undo” the within analysis to make it like a between design even though it is not

22 Setting Up Your Database
Create a spreadsheet (Excel or SPSS)
Ingredients for each effect:
Study ID
r
Fisher z-transformed r
Z (corresponding to the effect’s p-value)
Effect size N
Study attributes (potential moderators)
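A sketch of one such row layout in Python (the values and field names are hypothetical; in practice this lives in Excel or SPSS as described above). Here Z is filled in with the slide-10 conversion Z = r × √N, and Fisher z with the slide-8 transformation:

```python
import math

# one row per effect, mirroring the ingredient list above
rows = [
    {"study_id": 1, "r": 0.20, "n": 50, "design": "between"},  # "design" = example moderator
    {"study_id": 2, "r": 0.35, "n": 120, "design": "within"},
]
for row in rows:
    row["fisher_z"] = math.atanh(row["r"])     # normalized metric for calculations
    row["Z"] = row["r"] * math.sqrt(row["n"])  # standard normal deviate for the effect
```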

23 Setting up Your Database
Why not enter data directly into CMA (the Comprehensive Meta-Analysis program)?
CMA is not versatile for data manipulation (it doesn’t have functions)
CMA can’t produce basic descriptives for the study attributes
CMA has only certain analytic options, especially for random effects
Use SPSS for all of these reasons, then copy the data into CMA

