The potential role of mixed treatment comparisons
Deborah Caldwell, Tony Ades
MRC HSRC, University of Bristol


Outline of presentation Indirect comparisons and mixed treatment comparisons (MTC). Potential concerns regarding use of indirect comparisons/MTC. Hypothetical ‘simulation’ example of an MTC evidence structure. MTC of the NICE appraisal for thrombolysis. Addressing potential concerns. Should MTC be routine, and future areas of research.

Background For any given condition there is an array of possible interventions/treatments. Treatment recommendations and decisions should be evidence based. Principal sources are systematic reviews of randomised controlled trials. Systematic reviews focus on pairwise, direct comparisons of treatments.

Indirect comparisons In the absence of trials comparing treatments A versus B, an indirect estimate of the (log) odds ratio d_AB is obtained from RCTs comparing A vs C and B vs C: d_AB = d_AC – d_BC
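This is the adjusted indirect comparison of Bucher et al.; a minimal sketch on the log odds ratio scale, with made-up input numbers for illustration:

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison: d_AB = d_AC - d_BC on the log odds
    ratio scale. The A vs C and B vs C trial sets are independent, so
    the variances of the two estimates add."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Illustrative (hypothetical) pairwise meta-analysis summaries
# sharing comparator C: log odds ratios and their standard errors.
d_ab, se_ab, ci = bucher_indirect(-0.30, 0.10, -0.10, 0.10)
print(f"d_AB = {d_ab:.3f}, SE = {se_ab:.3f}, "
      f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note that the indirect SE is necessarily larger than either input SE, which is the imprecision concern raised below.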

Mixed treatment comparisons Where 3 or more treatments are compared using direct and indirect evidence from several RCTs, we have a mixed (multiple) treatment comparison (MTC). MTC evidence structures are pervasive in Health Technology Assessments (HTA) – decisions between >5 treatments are commonplace. A unified, coherent analysis of multiple treatments can only be achieved by including the entire evidence structure of relevant RCTs.

Potential concerns about MTC Indirect comparisons produce relatively imprecise estimates of treatment effect. They are not randomised comparisons, and so suffer the biases of observational studies (level 3 of EBM evidence hierarchies?). Direct and indirect evidence should be considered separately, and direct evidence should take precedence.

Contradictions in the ‘received wisdom’ Why is lower-level evidence used when level-one evidence is unavailable, yet deemed irrelevant when it is available? What do we do when direct evidence is inconclusive but, in combination with indirect evidence, becomes conclusive? If 5 treatments are all compared with each other, does it make sense to separate the 10 direct pairwise comparisons from the 70 indirect ones?

Hypothetical evidence structure

Objectives ‘Simulation’ exercise to explore the benefit of increasing levels of complexity in MTC evidence structures. – To examine the additional benefit of including evidence routinely excluded from systematic reviews. – To examine the extent to which different MTC evidence structures give increasing levels of precision. – To address some of the concerns outlined.

Method
Contrast estimates of posterior precision of log odds ratios (LOR) from:
i. a standard pairwise meta-analysis
ii. a mixed treatment comparison analysis
Compare estimates of posterior precision for:
i. the LOR of treatment A vs. treatment B
ii. ‘average’ precision – across all 15 possible treatment comparisons
Assumption: equal amounts of information on each treatment comparison.

Simulation results: precision of d_AB Precision of pairwise d_AB = 1. Precision of MTC d_AB = 1.01.

Hypothetical evidence structure

Simulation results: precision of d_AB Additional data on a single indirect comparison increases precision by 0.51.

Simulation results: precision of d_AB Each additional indirect treatment comparison increases the precision of d_AB by 0.5.

Simulation results: precision of d_AB Is there value in ‘linking’ indirect comparisons? The ‘linking’ comparison is treatment C vs D.

Simulation results: precision of d_AB Adding ‘linking’ comparisons doesn’t increase the precision of the d_AB estimate – a property of this particular evidence structure.

Summary of simulation results: precision of d_AB
If all you believe is ‘direct’ data:
– precision of d_AB stays = 1
Mixed treatment comparison analysis:
– adding data on a single indirect comparison increases precision by 0.51
– adding multiple indirect comparisons further increases precision
– equivalent to 2 extra trials on the d_AB comparison
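The arithmetic behind these figures can be sketched by pooling on the precision scale – a simple frequentist analogue of the slides’ Bayesian posterior precision (the simulated 0.51 corresponds to the analytic value of 0.5):

```python
def pooled_precision(direct_precision, indirect_paths):
    """Combine a direct estimate of d_AB with independent indirect
    estimates. Each indirect path A-C-B contributes precision
    1 / (var_AC + var_BC); independent evidence sources add on the
    precision scale."""
    total = direct_precision
    for var_ac, var_bc in indirect_paths:
        total += 1.0 / (var_ac + var_bc)
    return total

# Direct evidence alone (unit precision, as in the simulation):
print(pooled_precision(1.0, []))                        # 1.0
# One indirect comparison via C (each leg with unit variance):
print(pooled_precision(1.0, [(1.0, 1.0)]))              # 1.5
# Each further independent indirect path adds another 0.5:
print(pooled_precision(1.0, [(1.0, 1.0), (1.0, 1.0)]))  # 2.0
```

With unit variances throughout, four extra indirect paths would double the precision – the “2 extra trials” equivalence quoted above.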

Simulation results: ‘average’ precision

Simulation results: ‘average’ precision Maximum ‘average’ pairwise precision = 5 pieces of data / 15 possible treatment comparisons = 0.33.

Simulation results: ‘average’ precision MTC allows us to say something about all 15 pairwise comparisons.

Simulation results: ‘average’ precision Equivalent number of trials: 0.67 × 15 ≈ 10 trials’ worth of data.

Early thrombolysis for acute myocardial infarction (AMI).

Technology appraisal for the National Institute for Clinical Excellence (Boland et al, 2003). AMI affects 274,000 people each year; 50% die within 30 days of AMI. The National Service Framework for heart disease states that thrombolysis should be given within 60 minutes. Thrombolysis = pharmaceutical agents to dissolve blood clots.

Thrombolytic treatments Four treatments assessed: streptokinase (SK), tissue-plasminogen activator (t-PA), tenecteplase (TNK) and reteplase (r-PA). A distinction is made between accelerated t-PA and standard t-PA. SK + t-PA was used in two trials.

Thrombolysis conclusions (Boland et al, 2003) “Definitive (sic) conclusions on efficacy are that streptokinase is as effective as non-accelerated alteplase, that tenecteplase is as effective as accelerated alteplase, and that reteplase is at least as effective as streptokinase. “Some conclusions require interpretation of data, i.e. whether streptokinase is as effective as, or inferior to, accelerated alteplase; and whether reteplase is as effective as accelerated alteplase or not. “Depending on these, two further questions on indirect comparisons arise: whether tenecteplase is superior to streptokinase or not, and whether reteplase is as effective as tenecteplase or not.”

What is needed? A single statistical analysis providing estimates for all 15 pairwise comparisons between the 6 treatments – using classical or Bayesian statistical methods – and an assessment of which of these treatments is most likely to be best. Method: Bayesian Markov chain Monte Carlo (MCMC).
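The “most likely to be best” step can be sketched by ranking posterior draws. The draws below are simulated from made-up normal summaries purely for illustration – they are not the appraisal’s actual posteriors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of mortality log odds ratios vs a
# common reference treatment (SK); means and SEs are invented.
treatments = ["SK", "t-PA", "At-PA", "TNK", "r-PA", "SK+t-PA"]
means = np.array([0.0, -0.02, -0.15, -0.14, -0.11, -0.12])
ses = np.array([0.0, 0.05, 0.04, 0.06, 0.05, 0.09])
draws = rng.normal(means, ses, size=(10_000, len(treatments)))

# For mortality, 'best' = lowest log odds ratio in each draw.
best = draws.argmin(axis=1)
p_best = np.bincount(best, minlength=len(treatments)) / draws.shape[0]
for t, p in zip(treatments, p_best):
    print(f"P({t} is best) = {p:.3f}")
```

In a real MCMC analysis the draws would come from the fitted model rather than independent normals, but the ranking computation is the same.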

Were all relevant treatments included? Primary percutaneous transluminal coronary angioplasty (PCTA). The Keeley et al meta-analysis of PCTA vs thrombolysis (22 RCTs) found PCTA better than thrombolysis (OR 0.70 [0.58 – 0.85]). But surely the relevant comparison is the ‘best’ thrombolytic, NOT the ‘average’ one? With 7 treatments there are 21 possible pairwise comparisons.

Extended evidence structure

Consistency of odds ratios and CIs: fixed effect analysis

Lumping vs splitting: Fixed effect analysis of At-PA versus PCTA

Breaking randomisation? NO!
– There are statistically invalid methods of indirect comparison.
– Our MTC analyses are based only on randomised comparisons.
– They make no assumptions about baseline risks across studies.
– The result is a weighted combination of valid estimates of treatment effect.

Generalisability The key assumption in MTC is that the relative treatment effect of one treatment vs another is the same across the entire set of trials, irrespective of which treatments were actually evaluated in each trial. – The true A vs B odds ratio in the A vs B trials is exactly the same as the A vs B odds ratio implied by the A vs C and B vs C trials (fixed effect). – The common distribution of treatment effects is the same across all sets of trials (random effects).
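Where both direct and indirect evidence exist for the same contrast, this consistency assumption can be probed with a simple z-test on the difference between the two estimates (a Bucher-style check; the numbers below are illustrative):

```python
import math

def inconsistency_z(d_direct, se_direct, d_indirect, se_indirect):
    """z-statistic for the difference between direct and indirect
    estimates of the same log odds ratio. The two estimates come from
    disjoint sets of trials, so their variances add."""
    diff = d_direct - d_indirect
    se_diff = math.sqrt(se_direct ** 2 + se_indirect ** 2)
    return diff / se_diff

z = inconsistency_z(-0.25, 0.08, -0.10, 0.12)
print(f"z = {z:.2f}")  # |z| > 1.96 would flag direct/indirect disagreement
```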

Generalisability It is helpful to consider which target population we are making a treatment recommendation for: the type of patients in the previous A vs B trials, OR the kind of patients in ALL the MTC trials? Clinical and epidemiological judgement is necessary; poor judgements may introduce heterogeneity.

Should MTC be routine? A more appropriate question is: can MTC analyses be avoided? There is no real alternative in multi-treatment decision making. Transparency: no need to lump treatments, and no ‘under the table’ indirect comparisons. MTC makes the same assumptions as standard meta-analysis.

Future areas of research What is the extra literature-searching burden of MTC – how far should searches go? Should we include discontinued treatments in the evidence base? Placebo-controlled trials? Greater awareness of MTC is needed among commissioners of research when ‘scoping’ HTAs (e.g. the NICE obesity appraisals; thrombolysis & PCTA).