Meta-Analytic Thinking
Daniel Lakens
Eindhoven University of Technology
Learning goals
1) The scientific literature does not reflect reality.
2) We have tools to detect bias, but we can’t fix it.
3) An unbiased literature looks less convincing to an untrained eye, but more convincing if you understand probabilities.
4) You will observe, and should publish, research lines with mixed results.
5) If publication bias disappears before we die, we will probably have made the biggest contribution to science that we can.
Problems with a focus on p < 0.05
One of the biggest problems with the widespread focus on p-values is their use as a selection criterion for which findings provide ‘support’ for a hypothesis and which do not. Due to publication bias, tests with p-values below 0.05 are much more likely to be published than tests with p-values above 0.05.
Problems with a focus on p < 0.05
[Figure from https://normaldeviate.wordpress.com/2012/08/16/p-values-gone-wild-and-multiscale-madness/, based on Masicampo & Lalande]
Effect Size
Effect sizes are the most important outcome of an experiment. Effect sizes allow researchers to draw meta-analytic conclusions by comparing standardized effect sizes across studies.
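To make this concrete, here is a minimal sketch of computing a standardized effect size (Cohen’s d) for two independent groups; all summary statistics are made up for illustration.

```r
# Minimal sketch: Cohen's d for two independent groups.
# All summary statistics below are made up for illustration.
m1 <- 5.2; sd1 <- 1.1; n1 <- 50   # group 1
m2 <- 4.8; sd2 <- 1.3; n2 <- 50   # group 2

# Pooled standard deviation
sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))

# Cohen's d: the mean difference in standard deviation units,
# which makes effects comparable across studies and measures
d <- (m1 - m2) / sd_pooled
d
```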
Don’t focus on single p-values
Don’t care too much about every individual study having a p-value < .05. Perform close replications and extensions, report all the data, and perform a small-scale meta-analysis when publishing.
Zhang, Lakens, & IJsselsteijn, 2015
3 almost identical studies, study 3 pre-registered, only 1 of 3 with p < .05.
Overall: Cohen’s d = 0.37, 95% CI [0.12, 0.62], t = 2.89, p = .004.
Goals of Meta-Analysis
- Quantify effect sizes and their uncertainty
- Increase power
- Increase precision
- Explore variation across studies (heterogeneity)
- Generate new hypotheses
Small-scale meta-analysis
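Below is a minimal sketch of what such a small-scale meta-analysis could look like with the metafor package. The summary statistics of the three studies are invented for illustration; they are not the data from Zhang, Lakens, & IJsselsteijn (2015).

```r
library(metafor)

# Made-up summary statistics for three hypothetical studies
dat <- data.frame(
  m1i  = c(5.1, 5.3, 5.0),   # treatment group means
  sd1i = c(1.2, 1.1, 1.3),
  n1i  = c(40, 45, 90),
  m2i  = c(4.7, 4.9, 4.8),   # control group means
  sd2i = c(1.2, 1.2, 1.3),
  n2i  = c(40, 45, 90)
)

# Compute standardized mean differences (bias-corrected) and their variances
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

# Pool the three estimates (use method = "REML" for a random-effects model)
res <- rma(yi, vi, data = dat, method = "FE")
summary(res)
forest(res)   # forest plot of the studies and the meta-analytic estimate
```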
Publication bias
Publication bias occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies (Rothstein, Sutton, & Borenstein, 2005). One dominant type of publication bias emerges when researchers submit statistically significant findings for publication, but do not share non-significant results.
Effects are smaller than you think
This matters for power analyses. When you base a power analysis on a published effect size, you are already working with a biased estimate (after all, you would not have performed a power analysis if the original effect size had been so small that it looked like there was no effect).
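A quick simulation illustrates why: assume a small true effect (d = 0.2, an arbitrary choice) and selective publication of significant results only, and the ‘published’ effect sizes come out substantially inflated.

```r
# Simulate many two-group studies of a small true effect, 'publish' only
# the significant ones, and compare the published average with the truth.
set.seed(123)
true_d <- 0.2     # assumed true effect
n      <- 50      # per-group sample size
nsim   <- 10000

d_obs <- p_obs <- numeric(nsim)
for (i in 1:nsim) {
  x <- rnorm(n, mean = true_d)   # treatment group (SD = 1, so means are in d units)
  y <- rnorm(n, mean = 0)        # control group
  p_obs[i] <- t.test(x, y, var.equal = TRUE)$p.value
  d_obs[i] <- (mean(x) - mean(y)) / sqrt((var(x) + var(y)) / 2)
}

mean(d_obs)                # all studies: close to the true d of 0.2
mean(d_obs[p_obs < .05])   # 'published' studies only: clearly inflated
```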
http://improvingpsych.org/
https://www.coursera.org/learn/statistical-inferences
Let’s get started!
Open 9.1 LikelihoodResultsApp.R. I’ll guide you through the app, but 9.1 Meta-Analytic Thinking contains what I’ll say.
Discussion Point
Some ‘prestigious’ journals (which, when examined on markers of scientific quality such as reproducibility, reporting standards, and policies on data and material sharing, are quite low quality despite their prestige) only publish manuscripts with a large number of studies, all of which should be statistically significant. If we assume an average power in psychology of 50%, only 0.5^5 = 3.125% of five-study articles should contain exclusively significant results. If you pick up a random issue of such a prestigious journal and see 10 articles, each reporting 5 studies, all with exclusively significant results, would you trust the reported findings more, or less, than if all these articles had reported mixed results? Why?
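For reference, the 3.125% follows directly from the binomial distribution:

```r
# With 50% power, the probability that all 5 studies in an article
# are statistically significant:
0.5^5                             # 0.03125, i.e., 3.125%
dbinom(5, size = 5, prob = 0.5)   # same result via the binomial distribution
```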
Discussion Point
Unless you power all your studies at 99.99% for the rest of your career (which would be rather inefficient, but great if you dislike uncertainty), you will observe mixed results in your lines of research. How do you plan to deal with mixed results in lines of research?
Let’s get started! (Part 2)
Open the file 9.2 SingleStudyMetaAnalysis.R. I’ll guide you through the code, but 9.2 Introduction to Meta-Analysis contains what I’ll say.
Let’s get started! (Part 2)
Open the file 9.2 SimulatingMetaAnalysesSMD.R to simulate meta-analyses.
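As an indication of what such a simulation involves (this is a sketch under assumed values, not the actual contents of 9.2 SimulatingMetaAnalysesSMD.R):

```r
library(metafor)
set.seed(42)

k      <- 8     # number of simulated studies (assumed)
true_d <- 0.4   # assumed true standardized mean difference
yi <- vi <- numeric(k)

for (i in 1:k) {
  n <- sample(30:100, 1)                  # per-group sample size varies per study
  x <- rnorm(n, mean = true_d)            # treatment group
  y <- rnorm(n, mean = 0)                 # control group
  sd_p  <- sqrt((var(x) + var(y)) / 2)    # pooled SD (equal group sizes)
  yi[i] <- (mean(x) - mean(y)) / sd_p     # observed Cohen's d
  vi[i] <- (n + n) / (n * n) + yi[i]^2 / (2 * (n + n))  # approximate variance of d
}

res <- rma(yi, vi, method = "REML")       # random-effects meta-analysis
summary(res)                              # the pooled estimate should be near 0.4
```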
Discussion Point
What if you had started at study 2? Or study 4? How can you prevent dichotomous thinking when analyzing data?
Discussion Point
One way is to perform an equivalence test: check whether the effect is not just p < 0.05, but also statistically smaller than anything you’d care about. E.g., are effects smaller than d = 0.4? (See the sketch below.)
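As a rough base-R sketch of the TOST (two one-sided tests) logic behind equivalence testing, with simulated data and assumed bounds of d = ±0.4 (the TOSTER package offers dedicated functions for this):

```r
set.seed(1)
x <- rnorm(100, mean = 0.05)   # simulated treatment group (tiny true effect)
y <- rnorm(100, mean = 0)      # simulated control group
# Because the simulated SD is 1, a raw bound of 0.4 corresponds to d = 0.4.

# Test 1: is the effect significantly smaller than the upper bound (+0.4)?
p_upper <- t.test(x, y, mu =  0.4, alternative = "less")$p.value
# Test 2: is the effect significantly larger than the lower bound (-0.4)?
p_lower <- t.test(x, y, mu = -0.4, alternative = "greater")$p.value

# Equivalence is declared when both one-sided tests are significant;
# the TOST p-value is the larger of the two.
max(p_lower, p_upper)
```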
Let’s get started! (Part 2)
Open the file Heterogeneity.R to simulate heterogeneous meta-analyses. Take note: a good meta-analysis is more about explaining heterogeneity than it is about concluding there is or isn’t an effect.
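A minimal sketch of what simulating heterogeneity involves (assumed values, not the actual contents of Heterogeneity.R): each study draws its own true effect from a distribution, and the random-effects model’s tau² and I² then describe the between-study variation.

```r
library(metafor)
set.seed(7)

k   <- 20    # number of studies (assumed)
mu  <- 0.4   # average true effect (assumed)
tau <- 0.3   # between-study SD of true effects (assumed)
n   <- 50    # per-group sample size

delta <- rnorm(k, mean = mu, sd = tau)             # study-specific true effects
yi    <- rnorm(k, mean = delta, sd = sqrt(2 / n))  # observed d per study
vi    <- rep(2 / n, k)                             # approximate sampling variance of d

res <- rma(yi, vi, method = "REML")
summary(res)   # inspect tau^2 and I^2: how much variance lies between studies?
```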
Make your meta-analysis reproducible!
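A few illustrative habits (the file name and values below are just examples): fix seeds, share the effect sizes that enter the analysis, and record your software versions.

```r
set.seed(2024)   # fix random seeds so simulations reproduce exactly

# Share the effect sizes and variances that enter the meta-analysis
dat <- data.frame(study = 1:2, yi = c(0.30, 0.45), vi = c(0.04, 0.05))  # made-up values
write.csv(dat, "meta_analysis_data.csv", row.names = FALSE)

sessionInfo()    # record the R and package versions used for the analysis
```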
Let’s get started! (Part 3)
Open the file 9.3 DetectingPublicationBias.R. I’ll guide you through the code, but 9.3 Detecting Publication Bias contains what I’ll say.
Thanks for your attention. Let’s practice! @Lakens