1 Data Sciences Unit, School of Psychology, Deakin University
Systematic Reviews and Meta-Analysis in Behavioural Medicine: A Practical Introduction to Best Practices Meta-analysis slides Dr Emily Kothe Data Sciences Unit, School of Psychology, Deakin University

2 PRISMA flow diagram
Identification: Records identified through database searching (n = 6355); additional records identified through other sources (n = 0); records after duplicates removed (n = 5724)
Screening: Records screened (n = 5724); records excluded (n = 5703)
Eligibility: Full-text articles assessed for eligibility (n = 21); full-text articles excluded, with reasons (n = 9): not industry appropriate (n = 5), passive intervention (n = 2), did not measure injury rates/safety behaviour (n = 1)
Included: Studies included in qualitative synthesis (n = 15); studies included in quantitative synthesis (n = ?)

4 Meta-analysis
Provides a quantitative summary of the results of included studies. Provides a mechanism for statistically examining variation in effect sizes across studies.

5 Decide on the appropriate effect size
When developing the protocol you should identify the effect size(s) that are most appropriate to the research question. Within behavioural medicine, commonly used effect sizes include:
- Odds ratios
- Correlations
- Standardised mean differences

6 Data for odds ratios
- 2 x 2 frequency table
- Odds ratio and confidence intervals
- Risk difference
- Risk ratio
- And much more…
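As a minimal sketch of the first format above (written in Python for illustration, rather than the R used elsewhere in these slides; the function name is ours), an odds ratio and its 95% confidence interval can be computed from a 2 x 2 frequency table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and confidence interval from a 2 x 2 frequency table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR): sqrt of the sum of reciprocal cell counts
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    log_or = math.log(or_)
    lower = math.exp(log_or - z * se_log_or)
    upper = math.exp(log_or + z * se_log_or)
    return or_, lower, upper

or_, lower, upper = odds_ratio_ci(10, 90, 5, 95)
```

Note that this simple formula breaks down when any cell is zero; software such as metafor applies continuity corrections in that case.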

7 Data for correlations
- r and N
- r and SE
- r and variance
- Fisher’s Z and N
- Fisher’s Z and SE
- r and t value
- t value and sample size (for correlation)
- p value and sample size (for correlation)
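Correlations are usually converted to Fisher's Z before pooling, because the variance of Z depends only on N. A minimal sketch (in Python for illustration; function names are ours):

```python
import math

def r_to_fisher_z(r, n):
    """Convert a correlation r to Fisher's Z; var(Z) = 1 / (n - 3)."""
    z = 0.5 * math.log((1 + r) / (1 - r))
    variance = 1.0 / (n - 3)
    return z, variance

def fisher_z_to_r(z):
    """Back-transform a Fisher's Z value to a correlation."""
    return math.tanh(z)
```

Pooling is done on the Z scale, and the summary estimate is back-transformed to r for reporting.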

8 Data for standardised mean differences
- Mean, SD and N for each group
- d and confidence interval
- d and variance
- p value and n
- And much more…
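From the first format above (mean, SD and N for each group), the standardised mean difference and its variance follow directly. A minimal sketch in Python (illustrative; not the metafor implementation):

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with pooled SD, plus the usual large-sample variance of d."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    # Variance of d combines sampling error of the mean difference
    # and of the standardiser
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var_d
```

For small samples, d is usually multiplied by a correction factor to give Hedges' g; metafor does this when you request measure "SMD".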

9 Compiling effect sizes - A perfect world
In a perfect world, where all studies are well described:
- Two reviewers extract effect size data from each paper; extraction is compared and disagreements are resolved through consensus
- Each reviewer records:
  - Data required to calculate the effect size
  - Test statistics (e.g. p values, t values) relevant to the effect size
  - Assumptions made when extracting data

10 Best practice recommendations
“Facilitate cumulative science by future-proofing meta-analyses: Disclose all meta-analytic data (effect sizes, sample sizes for each condition, test statistics and degrees of freedom, means, standard deviations, and correlations between dependent observations) for each data point. Quote relevant text from studies that describe the meta-analytic data to prevent confusion, such as when one effect size is selected from a large number of tests reported in a study. When analyzing subgroups, include quotes from the original study that underlie this classification, and specify any subjective decisions.” Lakens, Hilgard and Staaks (2016)

11 Best practice recommendations
“Facilitate quality control: Specify which effect size calculations are used and which assumptions are made for missing data (e.g., assuming equal sample sizes in each condition, imputed values for unreported effect sizes), if necessary for each effect size extracted from the literature. Specify who extracted and coded the data, knowing it is preferable that two researchers independently extract effect sizes from the literature” Lakens, Hilgard and Staaks (2016)

12 Minimum best practice data extraction for correlations
Study ID | r | N | Exact p value | Location in text | Notes

13 Minimum best practice data extraction for standardised mean differences
Study ID | Mean group 1 | N group 1 | SD group 1 | Mean group 2 | N group 2 | SD group 2 | Exact p value | Location in text | Notes

14 Minimum best practice data extraction for odds ratios
Study ID | N events group 1 | N non-events group 1 | N events group 2 | N non-events group 2 | Exact p value | Location in text | Notes

15 Compiling effect sizes – An imperfect world
In a perfect world, all studies are well described. Sadly, we don’t live in a perfect world:
- Studies commonly do not report information in the format that you would find most helpful
- You may need to use a number of different formulas to calculate effect sizes based on the information provided
- Even after trying to extract maximal data from all studies, you’ll often need to contact authors

16 Data availability: An example
Data type | N | %
Events and sample size in each group | 150 | 31.65%
Odds ratio and confidence limits | 137 | 28.90%
Event rate and sample size in each group | 84 | 17.72%
This data is not available | 57 | 12.03%
Risk ratio and confidence limits | 21 | 4.43%
Events and non-events in each group | 13 | 2.74%
Chi-squared and total sample size | 10 | 2.11%
Log odds ratio and variance | 2 | 0.42%
Risk difference and standard error | 0 | 0.00%
Risk difference and variance | 0 | 0.00%
Non-events and sample size in each group | 0 | 0.00%
Risk difference and confidence limits | 0 | 0.00%
Log risk ratio and variance | 0 | 0.00%
Log odds ratio and standard error | 0 | 0.00%
Log risk ratio and standard error | 0 | 0.00%
Peto's O-E and V | 0 | 0.00%
Total | 474 | 100.00%
Of 474 categorical effects identified in a recent analysis, only 2.74% reported data in the format we wanted. 12% of studies did not report the required data. We need systematic methods for identifying and then converting effect sizes.
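For example, the most common available format in this example (odds ratio and confidence limits) can be converted to the log odds ratio and standard error needed for pooling, because the confidence interval is symmetric on the log scale. A minimal sketch in Python (function name is ours, for illustration):

```python
import math

def or_ci_to_log_or(or_, lower, upper, z=1.96):
    """Recover log(OR) and its SE from a reported odds ratio and CI.
    On the log scale the CI spans 2 * z standard errors."""
    log_or = math.log(or_)
    se = (math.log(upper) - math.log(lower)) / (2 * z)
    return log_or, se
```

Analogous back-calculations exist for most of the formats listed above; metafor's conversion helpers and online calculators implement many of them.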

17 Compiling effect sizes – Your options
Depending on the scope of your analysis and the software you’re using, you have a number of options:
- CMA (Comprehensive Meta-Analysis) allows direct entry of 100 different data formats
- Online calculators can be used to hand calculate effect sizes using different data formats
- You can use R packages to compile effect sizes to prepare for analysis

18 Compiling effect sizes – Comparison of options

CMA
- Advantages: Easy to compile effect sizes in different formats
- Disadvantages: Expensive; meta-analysis is relatively “black box”

Online calculators
- Advantages: Free; allows for very specialised effect size calculations that aren’t built into any software
- Disadvantages: Very time consuming if you have a lot to calculate; can be difficult to reproduce your calculations

R packages
- Advantages: Reproducible
- Disadvantages: Intimidating if you’re not familiar with R

19 Compiling effect sizes – metafor example
See “Compile dataset.rmd” for a worked example. This requires that the metafor package is installed.

20 Compiling effect sizes – An imperfect world
REMEMBER: It is especially important to use best practice when reporting how you extracted effect sizes from studies that are not perfectly reported. You will (often) need to make assumptions when extracting data, and it must be clear to readers (and future you) what those assumptions were, so that your meta-analysis can be replicated.

21 Choosing your model
Most meta-analytic software packages incorporate two statistical models for meta-analysis: the fixed-effect model and the random-effects model. You should choose which one you will use at the protocol stage.

22 Fixed effect vs. Random effects meta-analysis
Fixed effect meta-analysis assumes that the underlying true effect does not vary between studies and that variation in observed effects is due to sampling error. Random effects meta-analysis assumes that the true effect does vary between the studies in the meta-analysis and that variation in observed effects is due both to sampling error and to moderators of the effect.
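The two models differ only in their study weights: the fixed effect model weights each study by the inverse of its sampling variance, while the random effects model adds an estimate of the between-study variance (tau-squared) to each study's variance first. A minimal sketch in Python of inverse-variance pooling with the DerSimonian-Laird estimator (illustrative, not the metafor implementation):

```python
def pool(effects, variances, random=True):
    """Inverse-variance pooled effect estimate.
    Fixed effect weights are 1/v_i; the random-effects model
    adds the DerSimonian-Laird tau^2 to each v_i before weighting."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    if not random:
        return fixed
    k = len(effects)
    # Cochran's Q: weighted squared deviations from the fixed effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # truncated at zero
    w_re = [1 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
```

Because tau-squared is added to every study's variance, the random effects weights are more equal across studies, so small studies get relatively more influence than under the fixed effect model.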

23 Fixed effect vs. Random effects meta-analysis
Increasingly, meta-analysts recommend that random effects meta-analysis be the default analysis, because:
- The assumptions underlying the fixed effect model are almost never supported
- Using a random effects model when the fixed effect model is true gives the same results as running the fixed effect model, but running a fixed effect model when the random effects model is true gives misleading results

24 Running your meta-analysis
Meta-analysis requires attention to detail and is very time consuming, but once the data are compiled in the format you need, it is not very difficult.

25 Running your meta-analysis
Work through “SMD meta analysis template.rmd” and “Correlation meta analysis template.rmd” for examples of how to conduct meta-analysis of two common types of data.

26 Useful R packages

metagear
The abstract screener function and plot_PRISMA are helpful. Note that this package has a lot of dependencies for functions you might not care about; more advanced R users might want to just download the scripts for the functions they need (they’re on GitHub).

metafor
Good all-round package for running meta-analysis and meta-regression.

compute.es
As the name implies, this package has a range of functions for computation of effect sizes.

27 “Systematic Reviews and Meta-Analysis in Behavioural Medicine: A Practical Introduction to Best Practices” by Emily Kothe is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. These slides were created in December 2016 for presentation at the International Congress of Behavioral Medicine. These slides and up to date versions of associated files are available at: /OSF.IO/6BK7B

