Complementing the Randomized Controlled Trial Evidence Base: Evolution Not Revolution

David Price 1,2, Eric D. Bateman 2,3, Alison Chisholm 2, Nikolaos G. Papadopoulos 2,4, Sinthia Bosnic-Anticevich 2,5, Emilio Pizzichini 2,6, Elizabeth V. Hillyer 2, and A. Sonia Buist 2,7

1. Academic Primary Care, Division of Applied Health Sciences, University of Aberdeen, Aberdeen, UK
2. Respiratory Effectiveness Group, Cambridge, United Kingdom
3. Division of Pulmonology, Department of Medicine, University of Cape Town and University of Cape Town Lung Institute, Cape Town, South Africa
4. Department of Allergy, 2nd Pediatric Clinic, University of Athens, Athens, Greece
5. Faculty of Pharmacy, University of Sydney, Camperdown, New South Wales, Australia
6. Federal University of Santa Catarina, Department of Medicine, Florianópolis, Brazil
7. Oregon Health and Science University, Portland, Oregon

Key messages
Observational studies and pragmatic trials can complement classical randomized controlled trials (RCTs):
o Provide data more relevant to the circumstances under which medicine is routinely practiced
Evidence should be integrated from a variety of different study designs and methodologies.
Real-life studies—observational studies and pragmatic trials—have utility in:
o Addressing clinical questions that are unanswered by RCTs;
o Testing new hypotheses and possible license extensions;
o Helping to differentiate between available therapies for a given indication;
o Fitting within a conceptual framework of evidence relevant to clinical practice.

Efficacy vs Effectiveness
Efficacy trials: optimize all conditions, using highly selected patient populations and close clinical monitoring to maximize internal validity and assess cause and effect between an intervention and an outcome.
Effectiveness studies: evaluate how interventions work in the diversity of patients treated in routine care, managed in clinical scenarios that differ widely within and between countries and that seldom (if ever) reflect the highly interventional nature of efficacy trials: “letting the rats out of the cage and seeing what happens in real life.”

The Pros & Cons
Different study designs

RCTs: “the good”
The RCT is designed to answer precise questions about the efficacy of various types of medical interventions and to gather useful information about treatment-related adverse events.
RCTs minimize potential confounders and optimize internal validity by selecting an idealized, “pure” patient population and by applying close patient monitoring consistently across all trial subjects.
They provide a confident answer to the question: “Does intervention X work in an ideal (and specific subgroup of) patients receiving best standards of care?”

RCTs: “the bad”
RCTs lack external validity: they exclude patients with characteristics that could affect the efficacy signal of an intervention, and they manage and monitor patients far more intensively than would be feasible in clinical practice.
RCT findings therefore lack generalizability, particularly in long-term chronic conditions that affect broad and heterogeneous patient populations, e.g.:
– Asthma
– Chronic obstructive pulmonary disease (COPD)

Pragmatic trials: “the good”
Compare interventions under more usual clinical conditions than RCTs, improving the applicability of findings to real-life issues and everyday clinical decision making.
Designed to assess outcomes of healthcare interventions in the context of real-life clinical practice:
o Include heterogeneous populations
o Incorporate relevant levels of clinical care to help answer practical clinical questions for healthcare providers, patients, and policymakers

Pragmatic trials: “the bad”
Pragmatic trials can face challenges in:
o Maintaining adequate patient follow-up
o Detecting a small treatment effect in usual care settings, which requires:
– A large study population (see the sample-size sketch below), and/or
– A validated instrument sensitive to the treatment effect
Although less interventional than RCTs, any level of monitoring or patient engagement can alter behavior and potentially eliminate differences between two trial interventions.
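To make the sample-size point concrete, the sketch below estimates how many patients per arm a two-arm comparison would need to detect a small standardized effect. It uses the standard normal-approximation formula rather than anything from the presentation itself, and the effect sizes, alpha, and power values are illustrative assumptions.

```python
# Minimal sketch: per-arm sample size for a two-arm comparison using the
# normal approximation n = 2 * ((z_{alpha/2} + z_{beta}) / d)^2.
# Effect sizes, alpha, and power below are illustrative assumptions.
import math
from scipy.stats import norm

def per_arm_sample_size(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients per arm needed to detect a standardized effect (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # quantile corresponding to the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "small" effect (d = 0.2) needs roughly 16x more patients per arm
# than a "large" effect (d = 0.8): about 393 vs 25.
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: ~{per_arm_sample_size(d)} patients per arm")
```

The steep growth in required numbers as the effect shrinks is why pragmatic trials in usual care settings need either very large populations or outcome instruments sensitive enough to register modest differences.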

Observational studies: “the good”
Offer high external validity
Routinely collected datasets are much greater in extent than predefined RCT datasets and can often be obtained more quickly and at lower cost
Provide valuable data on:
o How management approaches are used
o Results associated with interventions in the real world
Can detect strong associations between test interventions and predefined outcomes that are generalizable to a broad patient population

Observational studies: “the bad”
Lack internal validity: limited in the extent to which they can demonstrate an unequivocal cause-and-effect relationship
The absence of randomization and blinding results in concerns about confounding by:
o Indication
o Severity
Missing data can limit the interpretation of findings

Observational studies: improvements
The validity of observational studies can be strengthened by:
o Identifying (and preregistering) the:
– Eligible population
– Design
– Outcomes
– Potential confounders
o Applying rigorous analytic methods (see the sketch below) to:
– Reduce bias
– Minimize confounding
– Adjust for residual confounding factors
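The sketch below illustrates one such analytic method: inverse probability of treatment weighting based on a propensity score, applied to simulated observational data in which sicker patients are more likely to be treated. The data, variable names, and effect sizes are assumptions made for this example; they are not drawn from the presentation or from any REG analysis.

```python
# Illustrative sketch: inverse probability of treatment weighting (IPTW) to
# adjust for confounding by severity in simulated observational data.
# All data, variable names, and effect sizes here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Simulated confounder: severity drives both treatment choice and outcome.
severity = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.8 * severity))  # sicker patients more likely to be treated
treated = rng.binomial(1, p_treat)
outcome = 2.0 * severity - 1.0 * treated + rng.normal(size=n)  # true treatment effect = -1.0

# Naive comparison is biased because treated patients are sicker at baseline.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Propensity score: modeled probability of treatment given the measured confounder.
ps = LogisticRegression().fit(severity.reshape(-1, 1), treated).predict_proba(
    severity.reshape(-1, 1))[:, 1]

# IPTW weights create a pseudo-population in which treatment is independent
# of the measured confounder.
weights = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
iptw = (np.average(outcome[treated == 1], weights=weights[treated == 1])
        - np.average(outcome[treated == 0], weights=weights[treated == 0]))

print(f"Naive difference: {naive:+.2f} (confounded)")
print(f"IPTW estimate:    {iptw:+.2f} (closer to the true effect of -1.00)")
```

Only the measured confounder is adjusted for here; unmeasured confounding would still bias the estimate, which is one reason the slide stresses preregistering potential confounders and applying further sensitivity analyses.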

Complementing RCTs with observational studies
Plugging evidence gaps

RCT: important limitations
The use of non-inferiority designs
Few comparative efficacy trials
Consistent application of historical (licensing) precedent, so biases can be echoed across generations of trials
Do not address use of therapies in:
o Diverse populations
o Routine care settings
Expense limits their size and duration
Lack power for subgroup analyses and rare treatment-related events
Provide limited data on long-term outcomes and safety issues
It can be unethical to take patients off “optimized therapy” to test alternatives

RCT limitations: how to plug the gaps
Do not try to design RCTs to answer every question about an intervention—they intrinsically cannot.
Understand the essence of the question being asked and select the appropriate study design(s) to answer the question at hand.
By drawing on a diversity of study designs and analytical approaches, a fuller picture of the utility of an intervention can be established.

Plugging the gaps: example
Patient behavior and preference:
o Observational studies and pragmatic trials reflect (or can be designed to reflect) the level of physician/clinician interaction typical of routine care. They:
– Capture patient activity (e.g., consultation patterns, medication adherence) and patterns of care (see the adherence sketch below)
– Highlight differences between routine practice and guideline recommendations
– Provide important insights into patients’ experiences of their disease and its management, and into their preferences, e.g.:
• Once- vs twice-daily therapy
• Oral vs inhaled therapy
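As an example of the kind of patient-activity measure that routinely collected data can support, the sketch below computes medication adherence as the proportion of days covered (PDC) from prescription refill records. The table layout and column names (patient_id, fill_date, days_supply) are hypothetical and chosen only to illustrate the idea.

```python
# Illustrative sketch: proportion of days covered (PDC) from refill records.
# The dataframe layout, column names, and dates are hypothetical.
import pandas as pd

refills = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "fill_date": pd.to_datetime(
        ["2023-01-01", "2023-02-05", "2023-03-20", "2023-01-10", "2023-04-01"]),
    "days_supply": [30, 30, 30, 30, 30],
})

def pdc(group: pd.DataFrame, window_days: int = 180) -> float:
    """Fraction of days in the observation window covered by dispensed supply."""
    start = group["fill_date"].min()
    covered = set()
    for fill, supply in zip(group["fill_date"], group["days_supply"]):
        offset = (fill - start).days
        covered.update(range(offset, min(offset + supply, window_days)))
    return len(covered) / window_days

adherence = refills.groupby("patient_id").apply(pdc)
print(adherence)
```

A common convention classes a PDC below 0.8 as suboptimal adherence; measures like this can be derived at scale from routine prescribing data without the intensive monitoring of an efficacy trial.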

Filling the gaps: more examples
Ethics:
o Observational studies offer a way to address interesting and important clinical questions that would be unethical or challenging to evaluate in an RCT setting, e.g.:
– It is unethical to take patients off RCT-defined gold-standard therapy to evaluate the real-life comparative effectiveness of different treatment options
Subgroups of patients:
o Patients with characteristics that may inhibit treatment response and who are excluded from RCTs can be included in pragmatic trials and observational studies, e.g.:
– Patients with rhinitis
– Smokers
Where RCT blinding is infeasible, e.g.:
o Comparative effectiveness studies of inhaler devices

Creating a fuller picture of the evidence
Integrating different study designs

Integrating evidence: why?
One of the major drivers of real-life research is the undue weight and supremacy that has long been attributed to RCT evidence.
There is a need for evolution – a need:
o To integrate evidence from all sources to arrive at treatment recommendations
o To recognize the value of evidence from a diversity of complementary approaches that:
– Compensate for each other’s methodological deficiencies
– Accommodate the diverse needs and circumstances under which medicine is practiced
– Provide guidance when n = 1 (the individual patient)

Integrating evidence: why?
Dichotomies and hierarchies within the evidence base are not helpful.
Using the term “real-life” to refer to observational studies and pragmatic trials implies that RCTs are not real life.
All in vivo studies deal with real people – different study designs simply provide different pieces of the evidence jigsaw.

REG’s integrated evidence framework
The Respiratory Effectiveness Group (REG) has proposed a new framework for classifying clinical research studies in terms of their general design.
The framework is intended to complement the previously proposed PRECIS wheel (see later slide).
It relies on two axes:
o One describing the type of studied population in relation to the broadest target population
o The other describing the “ecology of care” (or management approach) in relation to the usual standard of care in the community
1. Roche N, et al. Lancet Respir Med 2013;1:e29–e30.

REG’s integrated evidence framework
The position of a study within the framework serves as a description of the study, not as a representation of the quality of evidence it provides.
The framework is a tool for describing the basic characteristics of a study’s design and population.
Multiple studies can be placed relative to each other with respect to their relevance to the general target population, and for each study the appropriate quality-assessment tools can be identified (see the sketch below).
1. Roche N, et al. Lancet Respir Med 2013;1:e29–e30.
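To illustrate how a two-axis description of studies might be recorded, the sketch below uses a small data structure holding, for each study, a population-representativeness score and an ecology-of-care score relative to routine community care. The 0 to 1 scale, the study names, and the scores are hypothetical illustrations and are not part of the published REG framework.

```python
# Illustrative sketch: describing studies on the two axes discussed above.
# The 0-1 scales, study names, and scores are hypothetical examples only.
from dataclasses import dataclass

@dataclass
class StudyPosition:
    name: str
    population_representativeness: float  # 0 = highly selected, 1 = broad target population
    ecology_of_care: float                # 0 = tightly protocolized, 1 = usual community care

studies = [
    StudyPosition("Classical efficacy RCT", 0.15, 0.10),
    StudyPosition("Pragmatic trial", 0.70, 0.60),
    StudyPosition("Observational database study", 0.95, 0.95),
]

# The position describes the study; it says nothing about the quality of its evidence.
for s in sorted(studies, key=lambda s: s.population_representativeness):
    print(f"{s.name:30s}  population={s.population_representativeness:.2f}  "
          f"ecology_of_care={s.ecology_of_care:.2f}")
```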

REG’s integrated evidence framework
A means of positioning individual studies with respect to their relevance to the general target population.
1. Roche N, et al. Lancet Respir Med 2013;1:e29–e30.

Conclusions: I
Classical RCTs:
o Form the backbone of drug licensing and registration
o Answer critical questions, but NOT ALL questions
Pragmatic trials and observational studies lack the internal validity of a registration RCT, but:
o Shine light on important aspects of patient care that are not addressed by RCTs
o Benefit from being less costly, allowing them to address longer-term aspects of care and to test hypotheses

RCTs traditionally occupy the preregistration space, and pragmatic trials and observational studies that of post-registration, BUT observational studies and pragmatic trials can (more affordably) test a variety of hypotheses to inform the direction of future RCT expenditure.

Conclusions: Redefining the hierarchy
Different study designs should:
o No longer be ranked in pyramids or pitted against each other at opposing ends of the quality spectrum
o Be called on—as appropriate—to answer clinical questions
Devising frameworks that unite different streams of research, and establishing standards to appraise the quality of research, are important steps toward achieving a more integrated approach to evidence reviews.