Value of Information for Complex Economic Models
Jeremy Oakley, Department of Probability and Statistics, University of Sheffield. Paper available from.

Outline 1.Motivation 2.Expected value of perfect information (EVPI) 3.Emulators and Gaussian processes 4.Illustration: GERD model

1) Introduction An economic model is to be used to predict the cost-effectiveness of one or more treatments. The economic model requires the specification of various input parameters, and the values of some or all of these are uncertain. This implies that the output of the model, the cost-effectiveness of the treatment, is also uncertain.

Introduction We wish to identify which input parameters are the most influential in driving this output uncertainty. Should we learn more about these parameters before making a decision?

Introduction Measures of importance for input variables have been proposed, based on the expected value of perfect information (EVPI) (Felli and Hazen, 1998; Claxton, 1999). Computing these measures is conventionally done using Monte Carlo techniques, which invariably require a very large number of runs of the economic model.

Introduction For computationally expensive models, this can be completely impractical. We present an alternative to Monte Carlo that is far more efficient in terms of the number of model runs required.

2) EVPI We work with net benefit: the monetary value, or utility, of a treatment is K × efficacy − cost, with K the monetary value of a unit increase in efficacy. The net benefit of any treatment option is a function of the parameters in the economic model.

EVPI Denote the net benefit of treatment option t given model parameters X by NB(t, X). Given X, the economic model returns NB(t, X) for each t. The 'true' values of the model parameters X are uncertain.

EVPI The baseline decision is to choose the t with the largest expected net benefit: NB* = max_t E_X{NB(t, X)}. The decision maker achieves utility NB* by choosing the best treatment now, with no additional information.

EVPI Now suppose the decision-maker chooses to learn the values of all the uncertain input variables X before choosing a treatment. They would then choose the treatment with the highest net benefit conditional on X, i.e., they would consider max_t {NB(t, X)}.

EVPI Before actually observing X, they expect to achieve a net benefit of E_X[max_t {NB(t, X)}]. The expected value of this course of action is the expected gain in net benefit over the baseline decision: E_X[max_t {NB(t, X)}] − NB*. This is the (global) EVPI.
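As a rough illustration (not the paper's code), the global EVPI can be estimated by plain Monte Carlo. Here the net benefit functions and input distributions are invented for the sketch; they are not the GERD model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the economic model: net benefit of two treatments as
# a function of uncertain inputs X (columns x1, x2 ~ N(0,1)). The
# functional forms are invented purely for illustration.
def net_benefit(t, X):
    if t == 0:
        return 10.0 + 2.0 * X[:, 0]
    return 11.0 + 0.5 * X[:, 0] - X[:, 1] ** 2

X = rng.normal(size=(100_000, 2))                    # draws from p(X)
nb = np.stack([net_benefit(t, X) for t in (0, 1)])   # shape (2, N)

nb_star = nb.mean(axis=1).max()         # NB* = max_t E_X[NB(t, X)]
evpi = nb.max(axis=0).mean() - nb_star  # E_X[max_t NB(t, X)] - NB*
print(f"NB* = {nb_star:.3f}, EVPI = {evpi:.3f}")
```

Note the order of operations: NB* maximises over t after averaging over X, while the first EVPI term averages the pointwise maximum, so the EVPI is always non-negative.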

Partial EVPI Now suppose the decision-maker chooses to learn the value of a single uncertain input variable Y, an element of X, before making a decision. They would then choose the treatment with the highest net benefit conditional on Y, i.e., they would consider max_t E_{X|Y}{NB(t, X)}.

Partial EVPI The expected value of learning Y, before Y is actually observed, is then E_Y[max_t E_{X|Y}{NB(t, X)}] − NB*. This is the partial expected value of perfect information (partial EVPI) for Y. The partial EVPI is zero if the decision-maker would choose the same treatment for any (plausible) value of Y.
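Continuing the toy example above (invented net benefit functions, not the GERD model), the partial EVPI can be estimated by a two-level (nested) Monte Carlo scheme, here simplified by assuming the inputs are independent so that sampling X given Y is trivial:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-treatment net benefit; inputs x1, x2 independent N(0, 1).
def net_benefit(t, x1, x2):
    if t == 0:
        return 10.0 + 2.0 * x1
    return 11.0 + 0.5 * x1 - x2 ** 2

# Baseline: NB* = max_t E_X[NB(t, X)]
x1, x2 = rng.normal(size=(2, 1_000_000))
nb_star = max(np.mean(net_benefit(t, x1, x2)) for t in (0, 1))

# Outer loop over draws of Y = x1; inner expectation over X | Y
# (x2 is independent of Y here, so we simply redraw x2).
inner_max = []
for y in rng.normal(size=500):
    x2_inner = rng.normal(size=2_000)
    inner_max.append(max(np.mean(net_benefit(t, y, x2_inner))
                         for t in (0, 1)))

pevpi = np.mean(inner_max) - nb_star
print(f"partial EVPI for x1 = {pevpi:.3f}")
```

The nesting is the source of the expense: every outer draw of Y requires a fresh inner Monte Carlo sample, i.e. many further model runs.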

Computing partial EVPIs We need to evaluate E_Y[max_t E_{X|Y}{NB(t, X)}] for each element Y of X. The outer expectation E_Y is a one-dimensional integral and can be evaluated using numerical integration. The term max_t E_{X|Y} is the maximum of several higher-dimensional integrals, each of which requires a large Monte Carlo sample to evaluate.

Patient Simulation Models Computing partial EVPIs for computationally cheap models, while not trivial, is relatively straightforward. However, for one class of models, patient simulation models, a sensitivity analysis using Monte Carlo methods will be out of reach for the model user.

Patient Simulation Models An example is given in Kanis et al. (2002) for modelling osteoporosis: for an osteoporosis patient, a bone fracture significantly increases the risk of a subsequent fracture, and the residential status of each patient needs to be tracked so that costs are not double-counted.

Patient Simulation Models Progress is to be modelled over a 10-year period. Including the appropriate features in the model necessitates a patient simulation approach. The net benefit for a given set of input parameters is obtained by sampling events for a large number of patients. The model takes over an hour for a single run at one set of input parameters.

Patient Simulation Models For a model with 20 uncertain input variables, computing the partial EVPI reliably using Monte Carlo for each input variable could require a minimum of 500,000 model runs. At one hour per run, this would take 57 years! Something more efficient is needed…

3) Emulators For each treatment option t, and given values of the input parameters X = x, the economic model returns NB(t, x). We think of the model as a collection of functions NB(t, x) = f_t(x). Partial EVPIs can be computed more efficiently by exploiting the 'smoothness' of each f_t(x).

Emulators We can compute partial EVPIs more efficiently through the use of an emulator: a statistical model of the original economic model, which can then be used as a fast approximation to the model itself. This approach was used by Sacks et al. (1989) for dealing with computationally expensive computer models.

Gaussian processes Any regression technique could be used; we employ a nonparametric technique based on Gaussian processes (O'Hagan, 1978). The Gaussian process model for the function f_t(x) is nonparametric: the only assumption made about f_t(x) is that it is a continuous function.

Gaussian processes In the Gaussian process model, f_t(x) is treated as an unknown function, and uncertainty about f_t(x) is described by a normal distribution. The correlation between f_t(x_1) and f_t(x_2) is modelled parametrically as a function of ||x_1 − x_2||.
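A minimal sketch of the emulator idea, with a toy one-dimensional "model" and an assumed squared-exponential correlation and lengthscale (illustrative choices, not the paper's implementation):

```python
import numpy as np

# Squared-exponential correlation: a function of |x1 - x2|, as above.
def corr(a, b, length=0.5):
    return np.exp(-0.5 * (np.subtract.outer(a, b) / length) ** 2)

f = lambda x: np.sin(3 * x) + x           # toy "expensive" model f_t
X_train = np.linspace(0.0, 2.0, 8)        # a small number of model runs
y_train = f(X_train)

# Zero-mean GP conditioned on the training runs (small jitter for
# numerical stability).
K = corr(X_train, X_train) + 1e-6 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def emulate(x_new):
    """Posterior mean of the GP: a fast surrogate for f."""
    return corr(x_new, X_train) @ alpha

x_test = np.array([0.5, 1.5])
print(emulate(x_test), f(x_test))
```

The emulator reproduces the model exactly at the training runs and interpolates smoothly between them, which is what makes it usable as a cheap stand-in for f_t in the EVPI integrals.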

Gaussian processes The partial EVPI for input variable Y is given by E_Y[max_t E_{X|Y}{NB(t, X)}] − NB*. We need to evaluate E_{X|Y}{NB(t, X)} for each t at various values of Y. Denote by G(X|Y) the distribution of X given Y. Then E_{X|Y}{NB(t, X)} = ∫ f_t(x) dG(x|y).

Gaussian processes We can use Bayesian quadrature (O'Hagan, 1993) to speed up the computation dramatically: under the Gaussian process model for f_t(x), ∫ f_t(x) dG(x|y) has a normal distribution, and its mean can be evaluated (almost) instantaneously. This reduces the number of model runs required from 100,000s to 100s.
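A 1-D sketch of the Bayesian quadrature idea (toy integrand, assumed kernel and lengthscale; not the paper's code): for a Gaussian G = N(m, s²) and a squared-exponential kernel, the kernel integrals z_i = ∫ k(x, x_i) dG(x) have a closed form, and the posterior mean of ∫ f(x) dG(x) is then just z K⁻¹ y.

```python
import numpy as np

l = 0.8                          # kernel lengthscale (assumed)
m, s = 0.0, 1.0                  # G = N(m, s^2)

f = lambda x: np.exp(-x ** 2)    # toy integrand (a cheap "model run")
X = np.linspace(-3.0, 3.0, 12)   # design points
y = f(X)

# Squared-exponential kernel matrix, with jitter for stability.
K = np.exp(-0.5 * np.subtract.outer(X, X) ** 2 / l ** 2)
K += 1e-6 * np.eye(len(X))

# Closed-form kernel integrals against the Gaussian measure:
# z_i = ∫ exp(-(x - X_i)^2 / (2 l^2)) N(x; m, s^2) dx
z = l / np.sqrt(l**2 + s**2) * np.exp(-0.5 * (X - m)**2 / (l**2 + s**2))

bq_estimate = z @ np.linalg.solve(K, y)

# Exact value for comparison: ∫ exp(-x^2) N(x; 0, 1) dx = 1/sqrt(3)
print(bq_estimate, 1 / np.sqrt(3))
```

Once the GP has been fitted to a few hundred model runs, each conditional integral ∫ f_t(x) dG(x|y) costs only a linear solve, not a fresh Monte Carlo sample of model runs.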

4) Example: GERD model The GERD model, presented in O'Brien et al. (1999), predicts the cost-effectiveness of a range of treatment strategies for gastroesophageal reflux disease. The various uncertain inputs in the model relate to treatment efficacies and resource use by patients. The model outputs the mean number of weeks free of GERD symptoms and the mean cost of treatment for a particular strategy.

Example: GERD model We consider a choice between three treatment strategies:
›Acute treatment with proton pump inhibitors (PPIs) for 8 weeks, then continuous maintenance treatment with PPIs at the same dose.
›Acute treatment with PPIs for 8 weeks, then continuous maintenance treatment with histamine H2-receptor antagonists (H2RAs).
›Acute treatment with PPIs for 8 weeks, then continuous maintenance treatment with PPIs at a lower dose.

Example: GERD model There are 23 uncertain input variables; distributions for the uncertain inputs are detailed in Briggs et al. (2002). We estimate the partial EVPI for each input variable based on 600 runs of the GERD model. We assume a value of $250 for each week free of GERD symptoms. (It is straightforward to repeat the analysis for alternative values.)

Example: GERD model

Conclusions The use of the Gaussian process emulator allows partial EVPIs to be computed considerably more efficiently, making sensitivity analysis feasible for computationally expensive models. The approach can also be extended to value of sample information calculations.