Wim Van den Noortgate, Katholieke Universiteit Leuven, Belgium. Belgian Campbell Group Workshop on systematic reviews, Leuven, June 4-6.

1. Modelling heterogeneity
2. Publication bias


Growing popularity of evidence-based thinking: decisions in practice and policy should be based on scientific research about the effects of these decisions and interventions. But studies often yield conflicting results (failures to replicate), especially in the social sciences!

Why do study results differ?
1. The role of chance:
- in measuring variables
- in sampling study participants
2. Study results may be systematically biased due to:
- the way variables are measured
- the way the study is set up
3. Studies differ from each other (e.g., in the kind of treatment, the duration of treatment, the dependent variable, the characteristics of the investigated population, ...)

Fixed effects model (FEM): the population effect sizes are all equal; differences between observed effect sizes are due to chance only.
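In formula form (a standard notation, not taken from the slides; $d_j$ is the observed effect size in study $j$ and $\sigma_j^2$ its known sampling variance):

$d_j = \delta + e_j$, with $e_j \sim N(0, \sigma_j^2)$

The FEM estimate of the common effect $\delta$ is the precision-weighted mean, $\hat{\delta} = \sum_j w_j d_j / \sum_j w_j$, with weights $w_j = 1/\sigma_j^2$.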


I² = the percentage of variability in effect estimates that is due to heterogeneity rather than chance.
Rough guidelines:
- 0% to 40%: might not be important
- 30% to 60%: may represent moderate heterogeneity
- 50% to 90%: may represent substantial heterogeneity
- 75% to 100%: considerable heterogeneity
Base the interpretation on both I² and the heterogeneity test!
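I² can be computed directly from the homogeneity statistic Q and its degrees of freedom (Higgins & Thompson's formula, a standard result):

$I^2 = \max\left(0, \frac{Q - df}{Q}\right) \times 100\%$

With the values from the example below, $(35.83 - 18)/35.83 \approx 50\%$.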

[Table: weeks of prior teacher-pupil contact and standardized effect size g_j for each of the 18 experiments: Rosenthal et al. (1974), Conn et al. (1968), Jose & Cody (1971), Pellegrini & Hicks (1972), Evans & Rosenthal (1969), Fielder et al. (1971), Claiborn (1969), Kester & Letchworth (1972), Maxwell (1970), Carter (1970), Flowers (1966), Keshock (1970), Henrickson (1970), Fine (1972), Greiger (1970), Rosenthal & Jacobson (1968), Fleming & Anttonen (1971), Ginsburg (1970).]
(Raudenbush, S. W. (1984). Magnitude of teacher expectancy effects on pupil IQ as a function of the credibility of expectancy induction: A synthesis of findings from 18 experiments. Journal of Educational Psychology, 76.)

Q = 35.83, df = 18, I² = 50%, p < .01
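A minimal sketch of how these quantities are computed (Python with NumPy/SciPy; the effect sizes and variances below are placeholders, not the Raudenbush data):

```python
import numpy as np
from scipy.stats import chi2

# Placeholder effect sizes d_j and sampling variances (NOT the Raudenbush data)
d = np.array([0.03, 0.12, -0.14, 1.18, 0.26, -0.06])
v = np.array([0.016, 0.022, 0.028, 0.139, 0.136, 0.011])

w = 1 / v                              # fixed effects weights w_j = 1/sigma_j^2
d_bar = np.sum(w * d) / np.sum(w)      # precision-weighted mean effect size
Q = np.sum(w * (d - d_bar) ** 2)       # homogeneity statistic
df = len(d) - 1
I2 = max(0.0, (Q - df) / Q) * 100      # percentage of 'true' heterogeneity
p = chi2.sf(Q, df)                     # upper-tail chi-square p-value
print(f"Q = {Q:.2f}, df = {df}, I2 = {I2:.0f}%, p = {p:.4f}")
```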

Combining 'apples and oranges'?
- Not always wise: make the set of studies more homogeneous!
- But it can help to say something about 'fruit'
- And it can help to make detailed conclusions: does the effect depend on the kind of fruit?


FEM with a categorical moderator: the population effect size possibly depends on the study category; differences between observed effect sizes within the same category are due to chance only.

(Teacher expectancy example, Raudenbush (1984); see the table above.)


Total variability in observed ES's = variability between groups + variability within groups:

$Q_T = Q_B + Q_W$

Under the respective null hypotheses (k studies in J groups): $Q_T \sim \chi^2_{k-1}$, $Q_B \sim \chi^2_{J-1}$, $Q_W \sim \chi^2_{k-J}$.

- $Q_T$: homogeneity test
- $Q_B$: moderator test
- $Q_W$: test of within-group homogeneity
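Written out with $w_i = 1/\sigma_i^2$ (standard definitions, not spelled out on the slide):

$Q_W = \sum_{j=1}^{J} \sum_{i \in j} w_i (d_i - \bar{d}_j)^2$ and $Q_B = Q_T - Q_W$,

where $\bar{d}_j$ is the precision-weighted mean effect size of group $j$, and $Q_T = \sum_i w_i (d_i - \bar{d})^2$ uses the overall weighted mean $\bar{d}$.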

For the teacher expectancy example: Q_total = Q_between + Q_within, with df = 18, 3, and 15, respectively.

[Figure: mean effect size (mean ES), REM.]

FEM with a continuous moderator: the population effect size possibly depends on a continuous study characteristic (e.g., weeks of prior contact). After taking this study characteristic into account, differences between observed effect sizes are due to chance only.
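As an equation (standard fixed effects meta-regression notation; $x_j$ is the value of the study characteristic in study $j$):

$d_j = \beta_0 + \beta_1 x_j + e_j$, with $e_j \sim N(0, \sigma_j^2)$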

The initial effect is moderate (0.41, p < .001), but decreases with increasing prior contact (by 0.16 per week, p < .001).

Random effects model (REM): the population effect size possibly varies randomly over studies; differences between observed effect sizes are due to:
- chance
- 'true' differences
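In formula form (standard notation): $d_j = \delta + u_j + e_j$, with $u_j \sim N(0, \tau^2)$ the 'true' between-study deviations and $e_j \sim N(0, \sigma_j^2)$ the sampling error; $\tau^2$ is the between-study variance reported in the tables below.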


REM with a categorical moderator: the population effect size possibly depends on the study category; differences between observed effect sizes within the same category are due to:
- chance
- 'true' differences

REM with a continuous moderator: the population effect size possibly depends on a continuous study characteristic (e.g., weeks of prior contact). After taking this study characteristic into account, differences between observed effect sizes are due to:
- chance
- 'true' differences

Random effects model with moderators:
- The least restrictive model: allows moderator variables & random variation
- Also called a 'mixed effects model' (MEM)
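As an equation (standard notation), the mixed effects model simply combines the two extensions: $d_j = \beta_0 + \beta_1 x_j + u_j + e_j$, where $\tau^2 = \mathrm{Var}(u_j)$ is now the between-study variance that remains after accounting for the moderator.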

Overview: six models, crossing FEM vs. REM with: no moderator, a categorical moderator, or a continuous moderator.

1. Is there an overall effect?
2. How large is this effect?
3. Is the effect the same in all studies?
4. How large is the variation over studies?
5. Is this variation related to study characteristics?
6. Is there variation that remains unexplained?
7. What is the effect in the specific studies?

(Teacher expectancy example, Raudenbush (1984), once more; see the table above.)

Parameter                 REM
Fixed intercept           0.084 (0.052)
Between-study variance    0.019 (0.023)

(standard errors in parentheses)

Parameter                 REM              MEM
Fixed intercept           0.084 (0.052)    0.41 (0.087)
Weeks                     -                -0.16 (0.036)
Between-study variance    0.019 (0.023)    0.00 (-)

(standard errors in parentheses)

1. Models can include multiple moderators.
2. The REM assumes randomly sampled studies.
3. The REM requires enough studies.
4. Association (over studies) ≠ causation! Be aware of potential confounding moderators (studies are not 'RCT participants'!)

Dependencies between studies:
- e.g., same research group, country, ...
Multiple effect sizes per study:
- several samples
- the same sample but, e.g., several indicator variables

Ignoring dependence? NO!
Avoiding dependence:
- (randomly choosing one ES for each study)
- averaging ES's within a study
- performing separate meta-analyses for each kind of treatment or indicator
Modelling dependence:
- performing a multivariate meta-analysis, accounting for sampling covariance
- performing a three-level analysis (see the model sketched below)
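A three-level model can be written as follows (standard notation, not from the slides; effect size $i$ within study $j$):

$d_{ij} = \delta + v_j + u_{ij} + e_{ij}$, with $v_j \sim N(0, \tau^2_{between})$, $u_{ij} \sim N(0, \tau^2_{within})$, and $e_{ij} \sim N(0, \sigma_{ij}^2)$,

so that between-study and within-study heterogeneity are estimated separately.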

Publication bias

(Egger, M., & Davey Smith, G. (1998). Meta-analysis: Bias in location and selection of studies. British Medical Journal, 316.)

Proportion published within 5 years after conference presentation:
- 81% (of 233 trials) for significant results
- 68% (of 287 trials) for nonsignificant results
(Krzyzanowska, M. K., Pintilie, M., & Tannock, I. F. (2003). Factors associated with failure to publish large randomized trials presented at an oncology meeting. Journal of the American Medical Association, 290.)
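Funnel-plot asymmetry is a common way to probe for publication bias; below is a minimal sketch of Egger's regression test (not from the slides; the effect sizes and standard errors are placeholders, and intercept_stderr requires SciPy >= 1.6):

```python
import numpy as np
from scipy import stats

# Placeholder effect sizes and standard errors (illustration only)
d = np.array([0.10, 0.25, 0.31, 0.45, 0.62, 0.05, 0.38])
se = np.array([0.05, 0.12, 0.15, 0.20, 0.28, 0.06, 0.18])

# Egger's test: regress the standardized effect (d/se) on precision (1/se);
# an intercept far from zero suggests funnel-plot asymmetry.
res = stats.linregress(1 / se, d / se)
t = res.intercept / res.intercept_stderr      # t statistic for the intercept
p = 2 * stats.t.sf(abs(t), df=len(d) - 2)     # two-sided p-value
print(f"Egger intercept = {res.intercept:.2f}, p = {p:.3f}")
```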


Thorough search for all relevant published and unpublished study results:
a) articles
b) books
c) conference papers
d) dissertations
e) (un)finished research reports
f) ...

Sensitivity analyses:
- outliers: detection using graphs (or tests); conduct the analysis with and without outliers (see the sketch below)
- effect size calculation: several analyses
- publication bias: analysis with and without unpublished results
- design & quality: compare results from studies with a strong design or of good quality with those of all studies
- researcher: literature search, effect size calculation, quality coding, ..., done by two researchers
- ...
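One common way to check the influence of individual studies or outliers is a leave-one-out analysis; a minimal sketch, assuming a fixed effects summary and placeholder data:

```python
import numpy as np

# Placeholder effect sizes and sampling variances
d = np.array([0.12, 0.30, -0.05, 0.95, 0.22])
v = np.array([0.02, 0.03, 0.04, 0.05, 0.02])

def fe_mean(d, v):
    """Precision-weighted (fixed effects) mean effect size."""
    w = 1 / v
    return np.sum(w * d) / np.sum(w)

print(f"all studies: {fe_mean(d, v):.3f}")
for j in range(len(d)):
    mask = np.arange(len(d)) != j     # leave study j out
    print(f"without study {j}: {fe_mean(d[mask], v[mask]):.3f}")
```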


- Spreadsheets (e.g., MS Excel, ...)
- Some general statistical software (note: often not possible to fix the sampling variance): SAS Proc Mixed, S-Plus, R metafor package, ...
- Software for meta-analysis (note: often no MEM; often only one moderator!): CMA, RevMan, ...
- Software for multilevel/mixed models: HLM, MLwiN, ...
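The core random effects computation is small enough to write directly; a sketch of the DerSimonian-Laird estimator of $\tau^2$ and the resulting weighted mean (a standard method, though not necessarily the one used in these packages; the data are placeholders):

```python
import numpy as np

# Placeholder effect sizes and sampling variances
d = np.array([0.12, 0.30, -0.05, 0.95, 0.22, 0.40])
v = np.array([0.02, 0.03, 0.04, 0.05, 0.02, 0.03])

# Step 1: fixed effects quantities
w = 1 / v
d_fe = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fe) ** 2)
df = len(d) - 1

# Step 2: DerSimonian-Laird estimate of the between-study variance tau^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Step 3: random effects mean with weights 1 / (v_j + tau^2)
w_re = 1 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"tau^2 = {tau2:.3f}, RE mean = {d_re:.3f} (SE {se_re:.3f})")
```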


- Cooper, H., Hedges, L. V., & Valentine, J. C. (Eds.) (2009). The handbook of research synthesis and meta-analysis. New York: The Russell Sage Foundation.
- Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
- Van den Noortgate, W., & Onghena, P. (2005). Meta-analysis. In B. S. Everitt & D. C. Howell (Eds.), Encyclopedia of Statistics in Behavioral Science (Vol. 3). Chichester, UK: John Wiley & Sons.

- Site of David Wilson
- Site of William Shadish: faculty.ucmerced.edu/wshadish/