INCREASING THE TRANSPARENCY OF CEA MODELING ASSUMPTIONS: A SENSITIVITY ANALYSIS BASED ON STRENGTH OF EVIDENCE
RS Braithwaite, MS Roberts, AC Justice

Introduction: Tragicomic anecdote

Introduction
- Policy makers and clinicians are reluctant to use CEA because its assumptions are difficult to understand (Neumann PJ, Using Cost-Effectiveness Analysis to Improve Health Care: Opportunities and Barriers, 2005; CMS presentation, 26th National Meeting of SMDM, 2004).
- CEA modelers may base parameter estimates on studies that have limited strength of evidence.
- Modelers may not consider all studies with comparable evidence and applicability.

Objective: To develop a method that clarifies the trade-off between strength of evidence and the precision of CEA results.

Methods
- Proof of concept based on hypothetical data and a simplified model of HIV natural history.
- Question: What is the cost-effectiveness of directly observed therapy (DOT) for HIV patients?

Methods: Basic idea
- When data sources have insufficient strength of evidence, we should no longer use them to estimate model parameters.
- Instead, we should assume that little is known and specify those parameters using wide probability distributions with the fewest embedded assumptions: the uniform distribution.
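
To make the substitution concrete, here is a minimal NumPy sketch (all values are hypothetical illustrations, not figures from the study) contrasting an evidence-based parameter distribution with the wide uniform fallback:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical parameter: annual probability of disease progression.
# Evidence-based estimate: an illustrative Beta distribution,
# tightly concentrated around 0.10.
evidence_based = rng.beta(20, 180, size=n)

# Fallback when evidence is judged insufficient: a wide uniform over a
# plausible range, embedding no assumption beyond the bounds themselves.
lo, hi = 0.01, 0.30
uniform_fallback = rng.uniform(lo, hi, size=n)

print(f"evidence-based SD:   {evidence_based.std():.4f}")
print(f"uniform fallback SD: {uniform_fallback.std():.4f}")  # ~ (hi - lo) / sqrt(12)
```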

Methods
Assess strength of evidence based on USPSTF guidelines, which specify three valuation domains:
- Study design: extent to which the design differs from a controlled experiment. Level 1 = best (RCT); Level 3 = worst (expert opinion, anecdotal evidence).
- Internal validity: extent to which results represent the truth in the study population. Good = best (little loss to follow-up, objective assessment); Poor = worst (large or diverging loss to follow-up, subjective assessment).
- External validity: extent to which results represent the truth in the target population. High = best (similar patient characteristics and care settings); Low = worst (dissimilar patient characteristics and care settings).
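
One hypothetical way to encode this grading in code (the field names and ordinal scales below are ours, chosen so that lower values denote stronger evidence):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceSource:
    name: str
    design: int             # 1 = RCT (best) ... 3 = expert opinion (worst)
    internal_validity: int  # 1 = good ... 3 = poor
    external_validity: int  # 1 = high ... 3 = low
    mean: float             # point estimate of the model parameter
    variance: float         # sampling variance of that estimate

def meets_criteria(src: EvidenceSource, max_design: int = 3,
                   max_internal: int = 3, max_external: int = 3) -> bool:
    """True if the source meets or exceeds every evidence criterion.
    The defaults of 3 are fully inclusive, i.e., no criteria applied."""
    return (src.design <= max_design
            and src.internal_validity <= max_internal
            and src.external_validity <= max_external)
```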

Methods
- Vary the evidence criteria in the 3 domains from most to least inclusive, individually and in aggregate.
- If evidence meets or exceeds the criteria, use it to estimate the parameter's input distribution.
- If evidence does not meet the criteria, do not use it; instead, use a uniform distribution over a plausible range sufficiently wide to be acceptable to all CEA users (see the sketch below).
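
Continuing the hypothetical sketch above (reusing EvidenceSource and meets_criteria), the selection rule for a single parameter might look like the following; representing the chosen source as a normal distribution is our simplification, not a detail from the study:

```python
def parameter_sampler(sources, criteria, plausible_range):
    """Return a sampler for one model parameter under given evidence criteria."""
    eligible = [s for s in sources if meets_criteria(s, **criteria)]
    if not eligible:
        # No source qualifies: fall back to the wide uniform.
        lo, hi = plausible_range
        return lambda rng, n: rng.uniform(lo, hi, size=n)
    # Among qualifying sources, use the most statistically precise one
    # (smallest variance), per the methods.
    best = min(eligible, key=lambda s: s.variance)
    return lambda rng, n: rng.normal(best.mean, best.variance ** 0.5, size=n)
```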

Methods
- For natural history parameters that can only be observed rather than determined experimentally (e.g., overall mortality rate due to age-, sex-, and race-related causes), observational studies were eligible for Level 1 design.
- When more than one source of evidence met the criteria, we used the source with the greatest statistical precision. Alternative: pool the sources, weighting by the inverse of the variance (sketched below).
- When substituting the uniform distribution, make sure that the direction of the aggregate effect is neutral; this maximizes the conservatism of the approach.
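
The pooling alternative mentioned above is standard fixed-effect inverse-variance weighting; a sketch:

```python
def pool_inverse_variance(estimates):
    """Fixed-effect pooling of (mean, variance) pairs, weighted by 1/variance."""
    weights = [1.0 / var for _, var in estimates]
    pooled_mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled_mean, pooled_var

# Example: two eligible sources, the second twice as precise as the first.
print(pool_inverse_variance([(0.10, 0.0004), (0.14, 0.0002)]))  # (~0.1267, ~0.000133)
```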

Methods
- Model: an extremely simple 10-parameter probabilistic simulation of DOT in HIV.
- 17 data sources considered.
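
Schematically, the probabilistic simulation draws all model parameters from their assigned distributions on each Monte Carlo iteration and records the resulting incremental cost-effectiveness ratio. The model function below is a stand-in, not the authors' HIV model:

```python
import numpy as np

def run_psa(samplers, model, n_iter=10_000, seed=0):
    """Probabilistic sensitivity analysis.

    samplers: {parameter name: sampler(rng, n)} callables, as built above
    model:    function(params: dict) -> (incremental cost, incremental QALYs)
    Returns the Monte Carlo distribution of ICERs ($/QALY).
    """
    rng = np.random.default_rng(seed)
    draws = {name: sample(rng, n_iter) for name, sample in samplers.items()}
    icers = np.empty(n_iter)
    for i in range(n_iter):
        params = {name: values[i] for name, values in draws.items()}
        d_cost, d_qaly = model(params)
        icers[i] = d_cost / d_qaly
    return icers
```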

Results: Source eligibility
- Base case (no evidence criteria): all 17 data sources eligible for parameter estimation.
- Study Design = 1: 13 of 17 sources eligible.
- Internal Validity = Good: 9 of 17 sources eligible.
- External Validity = High: 5 of 17 sources eligible.
- All three criteria: only 3 of 17 sources eligible.

Results: All Evidence (figure)

Results: Design = 1 (figure)

Results: Internal Validity = Good (figure)

Results: External Validity = High (figure)

Results: Overall
- No evidence criteria: $78,000/QALY
- Study Design = 1: $227,000/QALY
- Internal Validity = Good: $158,000/QALY
- External Validity = High: >$6,000,000/QALY
- All three criteria: >$6,000,000/QALY

Limitations
- Incorporates a simple model of HIV constructed solely to illustrate proof of concept.
- The method is likely to need further refinement before it could be used on more complex and realistic simulations.
- The method addresses only parameter uncertainty, leaving other determinants of modeling uncertainty unexplored.

Conclusions
- Strength of evidence may have a profound impact on both the precision and the point estimates of CEAs.
- When all evidence was permitted, results were similar to a previously published DOT CEA (Goldie 2003): $40,000 to $75,000/QALY, with little uncertainty.
- With stricter evidence criteria, our results differed markedly: >$150,000/QALY, with great uncertainty.

Implications
- Sensitivity analysis by strength of evidence can be linked to any desired ranking method for strength of evidence, and can therefore be customized to facilitate its use by expert panels and organizations.
- The advance of this work does not lie in its specification of a particular hierarchy of strength of evidence; it lies in showing how any hierarchy can be implemented within a CEA model.

Implications
- Users who think "any data is better than no data" will likely base inferences on model results that incorporate all data sources, regardless of strength of evidence.
- Users who think "my judgment supersedes all but the best data" will likely base inferences only on model results that reflect the highest grades of evidence.
- Many models may fail to provide conclusive results when validity criteria are stringent. Nonetheless, in the long run this may help CEA become a more essential decision-making tool.