QUOROM checklist: are meta-analyses in good hands? Introdução à Medicina (Introduction to Medicine), October 2007 class.

15 April 2008

A meta-analysis is a review in which bias has been reduced by the systematic identification, appraisal, synthesis and statistical aggregation of all relevant studies on a specific topic, according to a predetermined and explicit method.

In 1987, a survey showed that only 24 out of 86 English-language meta-analyses reported all six areas considered important in a meta-analysis: study design; combinability; control of bias; statistical analysis; sensitivity analysis; and application of results.

In 1992 this survey was updated with 78 meta-analyses, and the researchers noted that methodology had definitely improved since their first survey. However, there was still a need for better literature searches, better quality evaluation of trials, and better synthesis of the results.

So, in 1999, several researchers created the Quality of Reporting of Meta-analyses (QUOROM) Statement to improve and standardise reporting. The QUOROM Statement, which includes a checklist and a trial flow diagram, describes the preferred way to present each section of the report of a meta-analysis; it is organized into 21 headings and subheadings.

The number of published meta-analyses has increased over time. According to one study, after the QUOROM Statement the estimated mean quality score of the reports increased from 2.8 (95% CI 2.3–3.2) to 3.7 (95% CI 3.3–4.1), an estimated improvement of 0.96 (95% CI 0.4–1.6; two-sided t-test). However, the QUOROM group itself admits that the checklist requires continuous research in order to improve the quality of meta-analyses.

But what is reproducibility, and why is it so important? Reproducibility is one of the main principles of the scientific method: the ability of a test or experiment to be accurately reproduced by someone else working independently.

The lack of reproducibility can have major consequences: a failure of reproducibility will most probably result in heterogeneity of results; at the clinical level, if a diagnostic test is not reproducible there is a risk of a patient being wrongly diagnosed; and non-reproducible items of a checklist can decrease its credibility and, consequently, that of the meta-analyses that used it as a model.

The question we want to answer is whether the QUOROM checklist is a reproducible method for the evaluation of meta-analyses. Primary aim: evaluate the degree of reproducibility of the QUOROM checklist.

Secondary aims: specify which items of the QUOROM checklist are less reproducible; and verify whether reproducibility differs between the evaluation of meta-analyses from low-impact-factor journals and from high-impact-factor ones.

Our target population was meta-analyses. We needed a sizeable sample, so we decided to select a total of 52. Our inclusion criteria were: the article is published in a medical journal; the journal has an impact factor ≤ 2 or ≥ 8; the article reports a meta-analysis; the article was published in the last three years ( ); and the full text is available online.

First, we selected 40 journals using a stratified sampling method. From the ISI Web of Knowledge journals that fit our criteria, we drew 20 journals from each stratum: low-IF journals (0 < IF ≤ 2; 1,234 journals) and high-IF journals (IF ≥ 8; 82 journals). (IF = impact factor.)

Next, we selected the meta-analyses using a multi-stage sampling method. All of the journals' articles meeting the inclusion criteria described above were drawn from each stratum: the low-IF journals yielded 48 meta-analyses and the high-IF journals 219. We repeated the whole selection process until we had enough meta-analyses: 26 for Pool 1 (low-IF meta-analyses) and 26 for Pool 2 (high-IF meta-analyses).
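The two-stage draw described above can be sketched as follows. This is a minimal illustration, not the actual procedure: the journal lists, the `meta_analyses` field and the fixed seed are hypothetical stand-ins for the ISI Web of Knowledge data.

```python
import random

def select_sample(low_if_journals, high_if_journals, per_stratum=20, per_pool=26):
    """Stratified sampling of journals, then multi-stage selection of articles."""
    random.seed(42)  # fixed seed so the illustrative draw is repeatable
    # Stage 1: draw 20 journals from each impact-factor stratum.
    low_sel = random.sample(low_if_journals, per_stratum)
    high_sel = random.sample(high_if_journals, per_stratum)
    # Stage 2: pool every eligible meta-analysis from the selected journals,
    # then draw 26 per stratum.
    low_articles = [a for j in low_sel for a in j["meta_analyses"]]
    high_articles = [a for j in high_sel for a in j["meta_analyses"]]
    pool1 = random.sample(low_articles, per_pool)   # Pool 1: low IF
    pool2 = random.sample(high_articles, per_pool)  # Pool 2: high IF
    # Final pool: mix the strata so raters cannot tell where each article came from.
    pool3 = pool1 + pool2
    random.shuffle(pool3)
    return pool3
```

Shuffling the combined pool is what conceals the stratum of origin during rating; the stratum labels are only re-attached at the analysis stage.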

For each meta-analysis in Pool 1 (low IF) and Pool 2 (high IF), we recorded in a database the impact factor of the journal it came from, the name of the journal, the authors and the year of publication. This database was kept secret until the evaluation with the checklist was concluded; it was used only at the end, to find out whether reproducibility and impact factor were related.

Finally, we mixed all the articles into a single pool of 52 meta-analyses (Pool 3), concealing the stratum from which each one came.

Before the analysis we established some rules to help us interpret each item of the checklist: if a certain item was present in the meta-analysis, but not in the place the checklist determines, we did not consider the item present; when an item had more than one point, we considered it present only if the meta-analysis answered more than half of the points;

for item (e), we gave more weight to the point that ensures the replication of the methods; and for item (o), the meta-analysis had to include a diagram describing trial flow for the item to be considered present.

Each student/investigator analysed a group of 4 articles and scored them against the QUOROM checklist. After this first round, the articles were mixed again and each student analysed another 4 articles, randomly selected from the 48 previously analysed by the rest of the group; this way, each student/investigator analysed a different set of articles. While analysing a meta-analysis, each student entered the data into SPSS: for each item, 1 was recorded if the item was covered in the meta-analysis and 0 if it was not.

Thus, our study can be classified as an observational, cross-sectional study, with methods characteristic of a survey study, whose purpose is to study reproducibility.

Our variables are: the current impact factor of the journals from which we randomly selected the articles; the year of publication of the articles; the impact factor of those journals in the year of publication; and the classification of each item of the checklist — thirty-six categorical variables, each coded 1 or 0. These item classifications are our expected research outcome.

From the classification of the items we derived further variables: the summation of the present items by observer 1; the summation of the present items by observer 2; the average of the two summations; the difference between the summations; and the number of concordances between the two observers per article.
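Given the two observers' 0/1 item scores for one article, the derived variables above can be computed as in this sketch (the example scores are illustrative, not data from the study):

```python
def derived_variables(obs1, obs2):
    """obs1, obs2: lists of 0/1 item scores from the two observers for one article."""
    s1, s2 = sum(obs1), sum(obs2)
    return {
        "sum_obs1": s1,
        "sum_obs2": s2,
        "average": (s1 + s2) / 2,      # mean of the two summations
        "difference": s1 - s2,         # used later for the limits-of-agreement plot
        # items on which the two observers gave the same score
        "concordances": sum(a == b for a, b in zip(obs1, obs2)),
    }
```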

Concordance on each item of the checklist (reproducibility of each item): we built eighteen concordance tables to calculate the proportion of concordance and its 95% confidence interval*; the positive proportion of concordance; the negative proportion of concordance; and the kappa coefficient. (* We used a normal distribution, except where the limit of the confidence interval exceeded one, in which case we used a binomial distribution.)
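For a single item, each concordance table is a 2×2 cross-tabulation of the two observers' scores. The statistics listed above can be sketched from its four cells as follows (the cell counts in the usage example are hypothetical, not results from the study):

```python
import math

def concordance_stats(a, b, c, d):
    """2x2 agreement table for one checklist item:
    a = both observers scored 'present', d = both scored 'absent',
    b and c = the two kinds of disagreement."""
    n = a + b + c + d
    po = (a + d) / n                      # observed proportion of concordance
    p_pos = 2 * a / (2 * a + b + c)       # positive proportion of concordance
    p_neg = 2 * d / (2 * d + b + c)       # negative proportion of concordance
    # Chance-expected agreement from the marginals, then Cohen's kappa.
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    # Normal-approximation 95% CI for po (an exact binomial CI is the
    # fallback when this approximation pushes the limit past 1).
    se = math.sqrt(po * (1 - po) / n)
    return {"po": po, "p_pos": p_pos, "p_neg": p_neg,
            "kappa": kappa, "ci95": (po - 1.96 * se, po + 1.96 * se)}
```

For example, `concordance_stats(20, 5, 5, 22)` (52 articles, 42 agreements) gives a proportion of concordance of about 0.81 and a kappa of about 0.61.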

Global reproducibility: the summations of the two observers were compared using the intraclass correlation coefficient (ICC). We then plotted the limits of agreement of the "difference between the summations" in a scatterplot. For that, we first had to check that this variable followed a normal distribution and, if so, calculate its mean and standard deviation, using a histogram.
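SPSS offers several ICC forms; the presentation does not say which was used, so as one plausible reading, a one-way random-effects ICC(1,1) for two ratings per article can be sketched as:

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for two ratings per article.
    pairs: list of (sum_obs1, sum_obs2) tuples, one per article."""
    n, k = len(pairs), 2
    grand = sum(x + y for x, y in pairs) / (n * k)
    # Between-article mean square: variance of the article means, times k.
    msb = k * sum(((x + y) / k - grand) ** 2 for x, y in pairs) / (n - 1)
    # Within-article mean square: disagreement between the two observers.
    msw = sum((x - (x + y) / 2) ** 2 + (y - (x + y) / 2) ** 2
              for x, y in pairs) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

With this form, an ICC of 0.729 means that 72.9% of the total variance is between-article variance rather than observer disagreement.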

ICC = 0.729: the ICC revealed that 72.9% of the total variance is explained by the variance between the articles.

Histogram of the differences between the summations: under a normal distribution the mean would be expected to be 0, so a systematic error may have occurred in the study. The limits of agreement were [−4.934, 4.434], meaning that 95% of the differences between the summations fall within this interval.
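Limits of agreement of this kind are the Bland–Altman limits: the mean difference plus or minus 1.96 standard deviations. A minimal sketch (the example differences are illustrative, not the study's data):

```python
import statistics

def limits_of_agreement(differences):
    """Bland-Altman 95% limits: mean difference +/- 1.96 x SD of the differences.
    A mean far from 0 suggests a systematic difference between observers."""
    mean_d = statistics.mean(differences)
    sd_d = statistics.stdev(differences)  # sample SD (n - 1 denominator)
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
```

The reported interval [−4.934, 4.434] is not centred on 0 (its midpoint is −0.25), which is what motivates the remark about a possible systematic error.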

Relation between impact factor and reproducibility: for this analysis we did not use the current impact factor, but the impact factor in the year of publication of each article*. We made two scatterplots, to see whether there was a correlation between: the "difference between the summations" and the impact factor; and the "number of concordances between the two observers per article" and the impact factor. (* As the ISI Web of Knowledge database had not yet been updated with the 2007 impact factors, for articles published in that year we used the 2006 impact factor.)

No correlation was found between the impact factor and the "difference between the summations", nor between the impact factor and the "number of concordances between the two observers per article": in both scatterplots [figures 4, 5] the points showed no preferential orientation.
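A visual judgement of "no preferential orientation" can be complemented numerically with a Pearson correlation coefficient, which is near 0 when the scatterplot shows no linear trend. A minimal sketch (the presentation itself only reports the visual inspection):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient; values near 0 indicate no linear association."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```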