Conducting Meta-Analysis in R: A Practical Introduction

Presentation transcript:

Conducting Meta-Analysis in R: A Practical Introduction
Emily Kothe & Mathew Ling, Data Sciences Unit, School of Psychology, Deakin University
While you wait, please open RStudio and enter: install.packages("beepr"); beepr::beep()

Presenters
Mathew Ling (@lingtax) and Emily Kothe (@emilyandthelime)
Data Sciences Unit, School of Psychology, Deakin University

Background
Meta-analysis:
Provides a quantitative summary of the results of included studies
Provides a mechanism for statistically examining variation in effect sizes across studies
This workshop is intended to provide a practical introduction to conducting meta-analyses in R, principally using pre-templated markdown scripts.

Meta-analysis overview
Meta-analysis is about synthesis of primary findings (garbage in, garbage out)
Weights studies by the precision (inverse variance) of the effect size estimate: less variance = more weight
Necessarily concerned with effect sizes, not p-values
E.g. mean differences, odds ratios, correlations
library(metafor)
example(forest.rma)
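The inverse-variance weighting idea above can be sketched in a few lines of base R. The effect sizes and variances here are made-up toy numbers, purely for illustration:

```r
# Toy illustration of inverse-variance weighting (hypothetical values)
yi <- c(0.30, 0.50, 0.10)  # effect size estimates from three studies
vi <- c(0.04, 0.01, 0.09)  # sampling variances: less variance = more weight
wi <- 1 / vi               # inverse-variance weights
pooled <- sum(wi * yi) / sum(wi)
round(pooled, 3)
```

Note how the second study, with the smallest variance, pulls the pooled estimate towards its own value.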

Meta-analysis overview
Two general types: fixed effects and random effects
Fixed-effects meta-analysis
Looking for the common effect
Differences across studies are caused by random variation
Random-effects meta-analysis
Looking for the average effect
True differences in effects plus random variation
library(metafor)
example(forest.rma)
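The two model types correspond to different `method` arguments in metafor's `rma()`. A sketch using the BCG vaccine dataset that ships with metafor:

```r
# Sketch: the same data under both model types (uses metafor's bundled dat.bcg)
library(metafor)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
rma(yi, vi, data = dat, method = "FE")    # fixed effects: one common true effect
rma(yi, vi, data = dat, method = "REML")  # random effects: a distribution of true effects
```

The random-effects output additionally reports tau-squared, the estimated variance of true effects across studies.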

Background
This workshop focuses on the use of metafor (Viechtbauer, 2010); however, Data Sciences Unit members can also advise on:
A number of other R meta-analysis packages: psychometric, robumeta
Meta-analysis using other software: Stata and CMA

Overview
R setup and package installation
Run a meta-analysis in R
RMarkdown templates
Preparing for a meta-analysis
Data extraction
Effect size calculation
Generalising to new projects

R Setup
Creating a Project
Identifying setup issues

Live coding a meta-analysis
Learning the RStudio interface by doing. We'll live code a meta-analysis first:
library(metafor)
dat <- read.csv("data_withES.csv")
head(dat)
output <- rma(yi, vi, data = dat)
output
forest(output)

Now it’s your turn
Conduct a meta-analysis using https://github.com/ekothe/meta/blob/master/Basic_SMD_Meta-Analysis.R and the data.csv file
Then conduct a meta-analysis using the same script and the gibson2002.csv file (note: this requires changing the column names in lines 6-12 of the script)

Introduction to rMarkdown A Tale of Two Scripts

Reporting errors are common
Nuijten et al. (2015): an audit of over 250,000 published NHST results from 1985-2013
"...half of all published psychology papers that use NHST contained at least one p-value that was inconsistent with its test statistic and degrees of freedom..."
"...One in eight papers contained a grossly inconsistent p-value that may have affected the statistical conclusion."

.R vs .Rmd
.R files: person-to-computer communication
Nothing but the code
Useful for communicating tools and plans
Limited for communicating knowledge
.Rmd files: person-to-person communication
Text AND code (in 'chunks')
Eliminates the analysis-reporting gap
Allows reproducible reports and templates, so you can focus just on content
Inefficient for passing around code-heavy helpers (just write packages!)
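To make the text-plus-chunks idea concrete, a minimal .Rmd file looks something like this (the title and file name are illustrative):

````markdown
---
title: "My Meta-Analysis"
output: html_document
---

The pooled effect is estimated below.

```{r}
library(metafor)
dat <- read.csv("data_withES.csv")
rma(yi, vi, data = dat)
```
````

Knitting this file runs the chunk and weaves its output into the rendered report, so the reported numbers always come from the code.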

Why markdown templates for meta-analysis particularly?
Most meta-analyses have a common structure
Inputs: effect size and SE per study
Outputs: meta-analytic estimate, forest plot, funnel plot
So scripting each one from scratch is reinventing the wheel
Templates also:
Provide guidance for interpretation and reporting
Reduce barriers to entry for inexperienced R users

Let’s run a meta-analysis using rMarkdown
Open RStudio
Open the file "Basic_SMD_Meta-Analysis.Rmd"
Click "Knit"

Popping the hood
This analysis script completes several steps in order to generate the report:
Initiating all the R packages we need for our analysis
Reading in the most current version of a spreadsheet containing extracted data
Calculating effect sizes
Running a random-effects meta-analysis
Reporting the model statistics (effect size and heterogeneity tests)
Generating a forest plot
Generating a funnel plot
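In script form, those steps amount to something like the following sketch using metafor (the file name and column names m1, sd1, n1, etc. are illustrative, not the template's actual names):

```r
library(metafor)                  # 1. load the packages we need
dat <- read.csv("data.csv")       # 2. read the current extraction spreadsheet
dat <- escalc(measure = "SMD",    # 3. compute standardised mean differences
              m1i = m1, sd1i = sd1, n1i = n1,
              m2i = m2, sd2i = sd2, n2i = n2, data = dat)
res <- rma(yi, vi, data = dat)    # 4. random-effects meta-analysis (REML default)
summary(res)                      # 5. effect size and heterogeneity tests (Q, I^2)
forest(res)                       # 6. forest plot
funnel(res)                       # 7. funnel plot
```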

Making data.csv This will introduce the data extraction and data cleaning process we’re using

Let’s practice extracting some data
We are investigating the impact of exercise on anxiety among individuals with cancer, relative to a no-intervention or waitlist control. Our effect size will be the standardised mean difference (SMD).
Your task (if you choose to accept it) is to extract post-intervention anxiety outcomes from these studies.

Data extraction
Generally you'd extract into a spreadsheet (templates here: https://osf.io/6bk7b/)
Because we all want to extract to the same sheet, we've put ours on Google Docs and provided a form to give you access. You should:
Go to our data extraction form [https://goo.gl/e4fsKG]
Enter the study ID as surname of first author, underscore, and year, e.g. Smith_2015
Extract mean, SD, and N for each group (intervention & control)

Checking our extraction

Best practice data extraction This will introduce the data extraction and data cleaning process we’re using

Best practice extraction in a perfect world...
Two reviewers extract effect size data from each paper; extraction is compared and disagreements are resolved through consensus
Each reviewer records:
Data required to calculate the effect size
Test statistics (e.g. p values, t values) relevant to the effect size
Assumptions made when extracting data

Minimum best practice data extraction for standardised mean difference
Study ID
Mean group 1
N group 1
SD group 1
Mean group 2
N group 2
SD group 2
Exact p value
Location in text
Notes

Minimum best practice data extraction for correlations
Study ID
r
N
Exact p value
Location in text
Notes

Minimum best practice data extraction for odds ratio
Study ID
N events group 1
N non-events group 1
N events group 2
N non-events group 2
Exact p value
Location in text
Notes
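For each of these extraction layouts, metafor's `escalc()` can compute the effect size (yi) and its sampling variance (vi) directly. The column names below are illustrative placeholders for whatever your extraction sheet `dat` uses:

```r
library(metafor)
# Standardised mean difference from means, SDs, and group sizes
smd <- escalc(measure = "SMD", m1i = m1, sd1i = sd1, n1i = n1,
              m2i = m2, sd2i = sd2, n2i = n2, data = dat)
# Correlations: Fisher's z (ZCOR) is the usual scale for meta-analysis
corz <- escalc(measure = "ZCOR", ri = r, ni = n, data = dat)
# (Log) odds ratio from counts of events and non-events in each group
lor <- escalc(measure = "OR", ai = ev1, bi = nonev1,
              ci = ev2, di = nonev2, data = dat)
```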

Best practice extraction in an (im)perfect world...
In a perfect world, all studies are well described
Sadly, we don't live in a perfect world
Studies commonly do not report information in the format that you would find most helpful
You may need to use a number of different formulas to calculate effect sizes based on the information provided
Even after trying to extract maximal data from all studies, you'll often need to contact authors

Data availability: an example
Of 474 categorical effects identified in a recent analysis, only 2.74% reported data in the format we wanted
12% of studies did not report the required data at all
We need systematic methods for identifying and then converting effect sizes
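As one example of such a conversion, when a study reports only an independent-samples t value and the group sizes, a standard approximation recovers the standardised mean difference and its sampling variance (the inputs below are hypothetical toy numbers):

```r
# Convert an independent-samples t statistic to Cohen's d (hypothetical inputs)
t  <- 2.5; n1 <- 40; n2 <- 38
d  <- t * sqrt(1 / n1 + 1 / n2)   # standardised mean difference
vd <- (n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))  # approx. sampling variance
round(c(d = d, vd = vd), 3)
```

Collecting conversions like this into a reusable script is exactly what the CompileR.Rmd exercise below is about.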

REMEMBER
It is especially important to follow best practice in reporting how you extracted effect sizes when studies are not perfectly reported.
You will (often) need to make assumptions when extracting data, and it must be clear to readers (and future you) what those assumptions were, so that your meta-analysis can be replicated.

Calculating effect sizes
This will introduce how we calculate effect sizes from the extracted data

Let’s practice compiling data from an incomplete dataset We are going to run “CompileR.Rmd” to look at how we can compile effect sizes when we have different statistics available Open the file “CompileR.Rmd” Click “Knit”

Conducting (random effects) meta-analysis
This will introduce how we run a random-effects meta-analysis on the compiled data

Let’s practice a meta-analysis We’re going to step through re-calibrating the template together. Pay attention, because you’re going to do it yourselves next.

Verbose Meta-Analysis This is where markdown gets really cool

Sidenotes...
If you want to extract the R code from any of the Rmd files we've discussed today, you can use the following code to do so:
library(knitr)
purl("CompileR.Rmd")

Sidenotes... All code and slides underlying this workshop are available at GitHub (https://github.com/ekothe/meta) and OSF (https://osf.io/6bk7b/). Those resources get updated when we add new features.