Meta-Analytic Thinking


Meta-Analytic Thinking Daniel Lakens Eindhoven University of Technology

Learning goals
1) The scientific literature does not reflect reality.
2) We have tools to detect bias, but we can't fix it.
3) An unbiased literature looks less convincing to an untrained eye, but more convincing if you understand probabilities.
4) You will observe, and should publish, research lines with mixed results.
5) If, before we die, there is no more publication bias, we have probably made the biggest contribution to science we can.

Problems with a focus on p < 0.05
One of the biggest problems with the widespread focus on p-values is their use as a criterion for selecting which findings provide 'support' for a hypothesis and which do not. Due to publication bias, tests with p-values below 0.05 are much more likely to be published than tests with p-values above 0.05.

Problems with a focus on p < 0.05
From https://normaldeviate.wordpress.com/2012/08/16/p-values-gone-wild-and-multiscale-madness/, based on Masicampo & Lalande.

Effect Size
Effect sizes are the most important outcome of an experiment. They allow researchers to draw meta-analytic conclusions by comparing standardized effect sizes across studies.
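A minimal sketch of how a standardized effect size (Cohen's d) can be computed from raw data in base R; the two simulated groups and their means and SDs are hypothetical values for illustration only:

# Simulate two independent groups (hypothetical data)
set.seed(42)
control   <- rnorm(50, mean = 100, sd = 15)
treatment <- rnorm(50, mean = 106, sd = 15)

# Cohen's d: mean difference divided by the pooled standard deviation
n1 <- length(control); n2 <- length(treatment)
sd_pooled <- sqrt(((n1 - 1) * var(control) + (n2 - 1) * var(treatment)) / (n1 + n2 - 2))
d <- (mean(treatment) - mean(control)) / sd_pooled
d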

Don't focus on single p-values
Don't care too much about whether every individual study has a p-value < .05. Perform close replications and extensions, report all the data, and perform a small-scale meta-analysis when publishing.

Zhang, Lakens, & IJsselsteijn (2015)
Three almost identical studies, Study 3 pre-registered, 1 of 3 with p < .05.
Overall Cohen's d = 0.37, 95% CI [0.12, 0.62], t = 2.89, p = .004.
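A sketch of how three such studies can be combined in a small-scale meta-analysis with the metafor package. The per-study means, SDs, and sample sizes below are hypothetical placeholders, not the actual Zhang et al. data:

library(metafor)  # install.packages("metafor") if needed

# Hypothetical summary statistics for three near-identical two-group studies
dat <- data.frame(
  study = c("Study 1", "Study 2", "Study 3"),
  m1i = c(5.2, 5.0, 5.4), sd1i = c(1.1, 1.2, 1.0), n1i = c(40, 45, 90),
  m2i = c(4.8, 4.7, 4.9), sd2i = c(1.1, 1.1, 1.0), n2i = c(40, 45, 90)
)

# Compute standardized mean differences (Hedges' g) and their sampling variances
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

# Fixed-effect meta-analysis across the three studies
res <- rma(yi, vi, data = dat, method = "FE")
summary(res)
forest(res)  # forest plot of the individual and combined effects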

Goals of Meta-Analysis
- Quantify effect sizes and their uncertainty
- Increase power
- Increase precision
- Explore variation across studies (heterogeneity)
- Generate new hypotheses

Small scale meta-analysis

Publication bias
Publication bias is the term for what occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies (Rothstein, Sutton, & Borenstein, 2005). One dominant type of publication bias emerges when researchers submit statistically significant findings for publication, but do not share non-significant results.
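A small simulation sketch in base R of how selective publication of significant results inflates the effect size visible in the literature; the true effect size, sample size, and number of simulations are assumed values:

set.seed(123)
true_d <- 0.3   # assumed small true effect
n      <- 20    # assumed small per-group sample size
sims   <- 10000

one_study <- function() {
  x <- rnorm(n, 0, 1)
  y <- rnorm(n, true_d, 1)
  d <- (mean(y) - mean(x)) / sqrt((var(x) + var(y)) / 2)
  p <- t.test(y, x)$p.value
  c(d = d, p = p)
}

results <- t(replicate(sims, one_study()))

mean(results[, "d"])                        # all studies: close to the true effect
mean(results[results[, "p"] < .05, "d"])    # only the 'published' (significant) studies: inflated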

Publication bias

Effects are smaller than you think
This matters for power analyses. When you perform a power analysis based on a published effect size, you are already biased, because you would not have performed the power analysis if the original effect size had been so small that it looked like there was no effect.
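A minimal sketch of the consequence, using base R's power.t.test; the effect sizes d = 0.5 (inflated published estimate) and d = 0.3 (more realistic true effect) are illustrative assumptions:

# Required per-group n for 80% power, two-sided alpha = .05, two-sample t-test
power.t.test(delta = 0.5, sd = 1, power = 0.80)$n   # published (inflated) d = 0.5: ~64 per group
power.t.test(delta = 0.3, sd = 1, power = 0.80)$n   # more realistic d = 0.3: ~175 per group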

Effects are smaller than you think

http://improvingpsych.org/

https://www.coursera.org/learn/statistical-inferences

Let's get started! Open 9.1 LikelihoodResultsApp.R. I'll guide you through the app, but 9.1 Meta-Analytic Thinking contains what I'll say.

Discussion Point
Some 'prestigious' journals (which, when examined on markers of scientific quality such as reproducibility, reporting standards, and policies concerning data and material sharing, are of quite low quality despite their prestige) only publish manuscripts with a large number of studies, all of which should be statistically significant. If we assume an average power in psychology of 50%, only 3.125% of five-study articles should contain exclusively significant results. If you pick up a random issue of such a prestigious journal and see 10 articles, each reporting 5 studies, and all manuscripts contain exclusively significant results, would you trust the reported findings more, or less, than if all these articles had reported mixed results? Why?
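The expected rate is a one-line calculation, assuming five independent studies that each have 50% power:

# Probability that all 5 studies in an article are significant when each has 50% power
0.5^5                              # 0.03125, i.e. 3.125%

# Distribution of the number of significant results out of 5 studies at 50% power
dbinom(0:5, size = 5, prob = 0.5)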

Discussion Point
Unless you power all your studies at 99.99% for the rest of your career (which would be rather inefficient, but great if you don't like uncertainty), you will observe mixed results in lines of research. How do you plan to deal with them?

Let's get started! (Part 2) Open the file 9.2 SingleStudyMetaAnalysis.R. I'll guide you through the code, but 9.2 Introduction to Meta-Analysis contains what I'll say.

Let’s get started! (Part 2) Open the file 9.2 SimulatingMetaAnalysesSMD.R to simulate meta-analyses.

Discussion Point What if you had started at Study 2? Or Study 4? How can you prevent dichotomous thinking when analyzing data?

Discussion Point One way is to perform equivalence testing: don't just check whether the effect differs from zero (p < 0.05), but also test whether it is statistically smaller than anything you'd care about. For example: is the effect smaller than d = 0.4?
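A minimal sketch of such an equivalence test using two one-sided tests (TOST) in base R, with simulated data and an assumed equivalence bound of d = 0.4 (Lakens' TOSTER package provides ready-made functions for this kind of test):

# Simulated data: a small (possibly negligible) true effect
set.seed(1)
x <- rnorm(100, 0, 1)          # control group
y <- rnorm(100, 0.05, 1)       # treatment group

sd_pooled <- sqrt((var(x) + var(y)) / 2)
bound <- 0.4 * sd_pooled       # equivalence bound d = 0.4, expressed in raw units

# Two one-sided tests: difference significantly above -bound AND significantly below +bound
p_lower <- t.test(y, x, mu = -bound, alternative = "greater")$p.value
p_upper <- t.test(y, x, mu = +bound, alternative = "less")$p.value
max(p_lower, p_upper)          # if < .05, the effect is statistically smaller (in absolute value) than d = 0.4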

Let's get started! (Part 2) Open the file Heterogeneity.R to simulate heterogeneous meta-analyses. Take note: a good meta-analysis is more about explaining heterogeneity than about concluding there is or isn't an effect.
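A sketch of how heterogeneity can be quantified and partly explained with metafor; the effect sizes, variances, and the moderator below are hypothetical placeholders:

library(metafor)

# Hypothetical data: effect sizes (yi), sampling variances (vi), and a study-level moderator
dat <- data.frame(
  yi       = c(0.10, 0.45, 0.05, 0.60, 0.20, 0.55),
  vi       = c(0.04, 0.05, 0.03, 0.06, 0.04, 0.05),
  lab_type = c("online", "lab", "online", "lab", "online", "lab")
)

# Random-effects model: average effect plus between-study variance (tau^2, I^2, Q-test)
res <- rma(yi, vi, data = dat)
summary(res)

# Meta-regression: does the moderator explain part of the heterogeneity?
res_mod <- rma(yi, vi, mods = ~ lab_type, data = dat)
summary(res_mod)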

Make your meta-analysis reproducible!

Let's get started! (Part 3) Open the file 9.3 DetectingPublicationBias.R. I'll guide you through the code, but 9.3 Detecting Publication Bias contains what I'll say.
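A sketch of standard bias-detection tools in metafor; the effect sizes and variances are hypothetical, and these diagnostics indicate (rather than prove) small-study effects or publication bias:

library(metafor)

# Hypothetical set of published effect sizes and their sampling variances
dat <- data.frame(
  yi = c(0.60, 0.45, 0.55, 0.40, 0.70, 0.35, 0.50, 0.65),
  vi = c(0.09, 0.06, 0.08, 0.04, 0.10, 0.03, 0.07, 0.09)
)

res <- rma(yi, vi, data = dat)

funnel(res)      # funnel plot: asymmetry can indicate small-study effects / publication bias
regtest(res)     # Egger-type regression test for funnel plot asymmetry
trimfill(res)    # trim-and-fill estimate of the number of 'missing' studies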

Thanks for your attention. Let's practice! @Lakens