The CONSORT * Statement for Reporting Randomized Trials Al M. Best, PhD Department of Biostatistics *Consolidated Standards of Reporting Trials

Acknowledgements This presentation is about the work of others. See: Moher D, Schulz KF, Altman DG, for the CONSORT Group. "The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials." JAMA 2001;285:1987-1991. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gøtzsche PC, Lang T, for the CONSORT Group. "The revised CONSORT statement for reporting randomized trials: explanation and elaboration." Annals of Internal Medicine 2001;134(8):663-694.

Remember? Stampfer MJ, et al. Post-menopausal estrogen therapy and cardiovascular disease: ten-year follow-up from the Nurses' Health Study. N Engl J Med. 1991;325. From the abstract: The effect of postmenopausal estrogen therapy on the risk of cardiovascular disease remains controversial. … METHODS. We followed 48,470 postmenopausal women, 30 to 63 years old, who were participants in the Nurses' Health Study... During up to 10 years of follow-up…. RESULTS. After adjustment for age and other risk factors, the overall relative risk of major coronary disease in women currently taking estrogen was 0.56 (95 percent confidence interval, 0.40 to 0.80); … CONCLUSIONS. Current estrogen use is associated with a reduction in the incidence of coronary heart disease as well as in mortality from cardiovascular disease, …

Later Contradicted Findings The Women's Health Initiative, a large randomized trial, found that estrogen and progestin significantly increased the relative risk of coronary events by 29% among postmenopausal women. Contradictory results were also seen in another large randomized trial, the Heart and Estrogen/progestin Replacement Study (HERS). Many reviews have documented deficiencies in reports of clinical trials.

From the JAMA abstract To comprehend the results of a randomized controlled trial (RCT), readers must understand its design, conduct, analysis, and interpretation. That goal can be achieved only through complete transparency from authors. Despite several decades of educational efforts, the reporting of RCTs needs improvement. Investigators and editors developed the original CONSORT (Consolidated Standards of Reporting Trials) statement to help authors improve reporting by using a checklist and flow diagram.

JAMA abstract continued The revised checklist includes 22 items selected because empirical evidence indicates that not reporting the information is associated with biased estimates of treatment effect or because the information is essential to judge the reliability or relevance of the findings. … The revised flow diagram depicts information from 4 stages of a trial (enrollment, intervention allocation, follow-up, and analysis). The diagram explicitly includes the number of participants, according to each intervention group, included in the primary data analysis.

JAMA abstract continued In sum, the CONSORT statement is intended to improve the reporting of an RCT, enabling readers to understand a trial’s conduct and to assess the validity of its results

Annals Abstract begins Overwhelming evidence now indicates that the quality of reporting of randomized, controlled trials (RCTs) is less than optimal. Recent methodologic analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which boast the elimination of systematic error as their primary hallmark. Systematic error in RCTs reflects poor science, and poor science threatens proper ethical standards.

Other Study Designs Endorsement: the International Committee of Medical Journal Editors, BMJ, Lancet, JAMA, Annals of Internal Medicine, and others. But not every trial is a parallel-group, randomized, double-blind, placebo-controlled clinical trial. Still, the standards trickle down to proposed standards for other designs: meta-analyses and observational studies.

Title and Abstract #1 How participants were allocated to interventions Random assignment is the preferred method [to assign treatment or other interventions to trial participants]; … three major advantages. First, it eliminates bias in the assignment of treatments. Without randomization, treatment comparisons may be prejudiced, whether consciously or not, by selection of participants of a particular kind to receive a particular treatment. Second, random allocation facilitates blinding the identity of treatments to the investigators, participants, and evaluators, possibly by use of a placebo, which reduces bias after assignment of treatments. Third, random assignment permits the use of probability theory to express the likelihood that any difference in outcome between intervention groups merely reflects chance. Preventing selection and confounding biases is the most important advantage of randomization.
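To make the third advantage concrete, here is a minimal sketch of a randomization (permutation) test; it is not from the original slides, and the outcome values are hypothetical. Because treatment labels were assigned at random, a null distribution for the group difference can be generated simply by re-shuffling those labels; the p-value is the proportion of re-shufflings that give a difference at least as extreme as the one observed.

    # Minimal sketch (hypothetical data): a randomization (permutation) test.
    # Random assignment justifies comparing the observed group difference with
    # the differences obtained by repeatedly re-shuffling the treatment labels.
    import random

    control = [5.1, 6.3, 5.8, 7.0, 6.1]   # hypothetical outcomes, control group
    treated = [4.2, 5.0, 4.8, 5.5, 4.9]   # hypothetical outcomes, treated group

    def mean(xs):
        return sum(xs) / len(xs)

    observed = mean(treated) - mean(control)

    pooled = control + treated
    n_treated = len(treated)
    n_perm = 10_000
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = mean(pooled[:n_treated]) - mean(pooled[n_treated:])
        if abs(diff) >= abs(observed):
            count += 1

    p_value = count / n_perm
    print(f"observed difference = {observed:.2f}, permutation p = {p_value:.3f}")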

Introduction #2 Scientific background and explanation of rationale The Helsinki Declaration states that biomedical research involving people should be based on a thorough knowledge of the scientific literature. That is, it is unethical to expose human subjects unnecessarily to the risks of research.

Methods #3 Eligibility criteria for participants … usually restrict this population by using eligibility criteria and by performing the trial in one or a few centers. Careful descriptions of the trial participants and the setting in which they were studied are needed so that readers may assess the external validity (generalizability) of the trial results (see item 21). Of particular importance is the method of recruitment, such as by referral or self-selection (for example, through advertisements).

#3b The settings and locations where the data were collected … institutions vary greatly in their organization, experience, and resources, and in the baseline risk for the medical condition under investigation. Climate and other physical factors, economics, geography, and the social and cultural milieu can all affect a study's external validity. #14 Dates defining the periods of recruitment and follow-up Knowing when a study took place and over what period participants were recruited places the study in historical context.

#4 Precise details of the interventions intended for each group and how and when they were actually administered What is meant by "control group"? How is the "placebo" made to be indistinguishable from the test intervention? What is "usual care"?

#5 Specific objectives and hypotheses Objectives are the questions that the trial was designed to answer. Hypotheses are the prespecified questions being tested. Most papers provide adequate information about these.

#6 Clearly defined primary and secondary outcome measures … methods used to enhance the quality of measurements The primary outcome … is the prespecified outcome of greatest importance and is usually the one used in the sample size calculation. … Having more than one or two primary outcomes … is not recommended. All outcome measures should be completely defined. … Indicate the prespecified time point … specify who assessed outcomes … and how many assessors. … For instruments, … use previously developed and validated scales. Give full details of how outcomes were measured and … steps taken to increase the reliability of the measurements.

#7 How sample size was determined and, when applicable, explanation of any interim analyses and stopping rules For scientific and ethical reasons, the sample size for a trial needs to be planned carefully, with a balance between clinical and statistical considerations. Ideally, a study should be large enough to have a high probability (power) of detecting as statistically significant a clinically important difference of a given size if such a difference exists. Authors should report whether they took multiple "looks" at the data and, if so, how many there were and the statistical methods used …
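As an illustration only (none of these numbers come from the slides), the sketch below shows the kind of calculation item 7 expects to be reported: an approximate per-group sample size for comparing two means with a two-sided test, using the usual normal-approximation formula. The significance level, power, clinically important difference, and standard deviation are all assumed placeholders.

    # Minimal sketch (hypothetical inputs): per-group sample size for comparing
    # two means, normal approximation:
    #   n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2
    from statistics import NormalDist

    alpha = 0.05   # two-sided significance level (assumption)
    power = 0.90   # desired power (assumption)
    delta = 5.0    # clinically important difference in means (assumption)
    sd = 12.0      # common standard deviation (assumption)

    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)

    n_per_group = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
    print(f"approximately {n_per_group:.0f} participants per group")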

Assessing the likelihood of bias in group assignment #8 Method used to generate the allocation sequence Blocking, Stratification, Minimization of Imbalance #9 Method used to implement the random allocation sequence, clarifying whether the sequence was concealed until interventions were assigned Without adequate allocation concealment, even random sequences can be subverted. #10 Who generated the allocation sequence, who enrolled participants, and who assigned participants to their groups.
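The following is a minimal sketch, not taken from the slides, of one common way to generate an allocation sequence with randomly permuted blocks (the block size and arm labels are assumptions). In an actual trial the sequence would be produced, and kept concealed, by someone independent of enrollment, which is exactly what items 9 and 10 ask authors to report.

    # Minimal sketch: allocation sequence using randomly permuted blocks of 4,
    # which keeps group sizes balanced throughout recruitment. In a real trial
    # the sequence is generated and concealed by a party not enrolling patients.
    import random

    def permuted_block_sequence(n_participants, block_size=4, arms=("A", "B")):
        assert block_size % len(arms) == 0
        sequence = []
        while len(sequence) < n_participants:
            block = list(arms) * (block_size // len(arms))
            random.shuffle(block)        # each block is a random permutation
            sequence.extend(block)
        return sequence[:n_participants]

    print(permuted_block_sequence(12))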

#11a Whether or not participants, those administering the interventions, and those assessing the outcomes were blinded to group assignment Patient blinding: may influence treatment effect … placebo effect … compliance bias. Staff blinding: performance bias … withdrawal bias. Outcome-assessor blinding: observer bias. Analyst blinding: may influence the choice of analytic strategies. #11b If done, how the success of blinding was evaluated

#12a Statistical methods used to compare groups for primary outcome(s) … additional analyses, such as subgroup analyses and adjusted analyses It is essential to specify which statistical procedure was used for each analysis. … more than one observation per patient. Because of the high risk for spurious findings, subgroup analyses are often discouraged. Post hoc subgroup comparisons are especially likely not to be confirmed by further studies. Such analyses do not have great credibility. … imbalances in characteristics are adjusted for by using some form of multiple regression analysis. … adjusted analyses should be specified in the study protocol.
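As a hedged illustration of a prespecified adjusted analysis (the data are simulated, the variable names are hypothetical, and numpy, pandas, and statsmodels are assumed to be available; none of this comes from the slides), the sketch below estimates the treatment effect on a binary outcome while adjusting for a baseline covariate using logistic regression.

    # Minimal sketch (simulated data): an adjusted analysis in which the
    # treatment comparison for a binary outcome is adjusted for a baseline
    # covariate (age) with logistic regression.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "treatment": rng.integers(0, 2, n),   # 0 = control, 1 = intervention
        "age": rng.normal(50, 10, n),         # baseline covariate
    })
    logit_p = -2 + 0.8 * df["treatment"] + 0.03 * (df["age"] - 50)
    df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    model = smf.logit("outcome ~ treatment + age", data=df).fit(disp=False)
    print(model.summary())
    print("adjusted odds ratio for treatment:", np.exp(model.params["treatment"]))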

#13 Flow of participants through each stage (a diagram is strongly recommended). Specifically, for each group report the numbers of participants randomly assigned, receiving intended treatment, completing the study protocol, and analyzed for the primary outcome. Example: a trial of chiropractic manipulation of the cervical spine for treatment of episodic tension-type headache.

#15 Baseline demographic and clinical characteristics of each group … it is important to know the characteristics of the participants who were actually recruited. This information allows readers, especially clinicians, to judge how relevant the results of a trial might be to a particular patient.

#16 Number of participants (denominator) in each group included in each analysis and whether the analysis was by "intention to treat." State the results in absolute numbers when feasible (e.g., 10 of 20, not 50%). … Results should not be presented solely as summary measures, such as relative risks. Intention-to-treat analysis is generally favored because it avoids bias associated with nonrandom loss of participants. … Make clear which participants are included in each analysis. Intention-to-treat analysis is not appropriate for examining adverse effects.

#17 For each primary and secondary outcome, a summary of results for each group and the estimated effect size and its precision Effect Size: the contrast between the groups. For binary outcomes, … the risk ratio (relative risk), odds ratio, or risk difference; for survival time data, … the hazard ratio or difference in median survival time; for continuous data, … the difference in means. Confidence intervals should be presented for the contrast between groups. A common error is the presentation of separate confidence intervals for the outcome in each group rather than for the treatment effect.
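A minimal sketch of the kind of effect estimate item 17 asks for, using hypothetical counts (not from the slides): a risk ratio with a 95% confidence interval for the contrast between groups, computed on the log scale with the standard large-sample formula.

    # Minimal sketch (hypothetical counts): risk ratio and 95% CI for the
    # contrast between groups, computed on the log scale.
    import math

    events_treated, n_treated = 15, 100   # hypothetical
    events_control, n_control = 30, 100   # hypothetical

    risk_t = events_treated / n_treated
    risk_c = events_control / n_control
    rr = risk_t / risk_c

    # Standard error of log(RR), large-sample formula
    se_log_rr = math.sqrt(1 / events_treated - 1 / n_treated +
                          1 / events_control - 1 / n_control)
    z = 1.96   # ~95% two-sided
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    print(f"risk ratio = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")

Note that the interval is for the between-group contrast itself, which is what CONSORT recommends, rather than separate intervals for the outcome in each group.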

#18 Address multiplicity by reporting any other analyses performed, including subgroup analyses and adjusted analyses, indicating those prespecified and those exploratory Multiple analyses of the same data create a considerable risk for false-positive findings. Authors should especially resist the temptation to perform many subgroup analyses. Analyses that were prespecified in the trial protocol are much more reliable than those suggested by the data.

#19 All important adverse events or side effects in each intervention group Most interventions have unintended and often undesirable effects in addition to intended effects. Many reports of RCTs provide inadequate information on adverse events. In 192 reports of drug trials, only 39% had adequate reporting of clinical adverse events and 29% had adequate reporting of laboratory-determined toxicity. Furthermore, in one volume of a prominent general medical journal in 1998, 58% (30 of 52) of reports (mostly RCTs) did not provide any details on harmful consequences of the interventions.

Discussion #20 Interpretation of the results, taking into account study hypotheses, sources of potential bias or imprecision, and the dangers associated with multiplicity of analyses and outcomes.

Annals recommendations Structure the discussion section by presenting 1) a brief synopsis of the key findings; 2) consideration of possible mechanisms and explanations; 3) comparison with relevant findings from other published studies …; 4) limitations of the present study (and methods used to minimize and compensate for those limitations); 5) a brief section that summarizes the clinical and research implications of the work, as appropriate.

#21 Generalizability (external validity) of the trial findings Shadish, Cook and Campbell: “Most experiments are highly local but have general aspirations”. That is, there can be a conflict between the localized nature of the information in a paper and the desire of the authors to claim that their findings generalize to other people, in other settings, with comparable interventions, and other outcomes. … Cronbach: a question of external validity. That is, do “instances on which data are collected … generalize to the units, treatments, variables and settings not directly observed”?

#22 General interpretation of the results in the context of current evidence The result of an RCT is important regardless of which treatment appears better, magnitude of effect, or precision. Readers will want to know how the present trial’s results relate to those of other published RCTs. This discussion should be as systematic as possible and not limited to studies that support the results of the current trial.

Summary Most journal articles could be improved. CONSORT appropriately ‘raises the bar’ on how trials should be reported. Transparency in methods should result in an improved ability of the reader to judge the believability of a paper’s findings. See: