Some methodological issues in value of information analysis: an application of partial EVPI and EVSI to an economic model of Zanamivir Karl Claxton and Tony Ades

Partial EVPIs: light at the end of the tunnel… …maybe it's a train

A simple model of Zanamivir

Distribution of incremental net benefit (inb): approximately Normal, mean = −£0.51, standard deviation = £12.52.

EVPI for the decision
EVPI = EV(perfect information) − EV(current information)
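
A minimal Monte Carlo sketch of this difference, assuming a hypothetical two-strategy stand-in for the Zanamivir model (the net_benefit function, its payoffs and the Beta/Normal priors below are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)

def net_benefit(pip, upd):
    """Hypothetical stand-in for the Zanamivir model: returns net benefit per
    strategy (columns) for each simulated input set (rows)."""
    nb_care = np.zeros_like(pip)                # usual care (baseline)
    nb_zan = 1000.0 * pip * upd - 240.0         # illustrative Zanamivir payoff
    return np.column_stack([nb_care, nb_zan])

K = 100_000
pip = rng.beta(2, 8, K)          # probability of influenza, assumed prior
upd = rng.normal(0.3, 0.1, K)    # treatment effect input, assumed prior

nb = net_benefit(pip, upd)                 # K x D matrix of net benefits
ev_current = nb.mean(axis=0).max()         # best strategy on average (current information)
ev_perfect = nb.max(axis=1).mean()         # best strategy per draw (perfect information)
print(f"EVPI ≈ {ev_perfect - ev_current:.2f}")
```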

Partial EVPI
EVPI_pip = EV(perfect information about pip) − EV(current information)
= expectation, over all resolutions of pip, of [EV(optimal decision for a particular resolution of pip) − EV(prior decision for the same resolution of pip)]
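
A sketch of this quantity by two-level (nested) Monte Carlo: the outer loop draws resolutions of pip, the inner loop takes the expectation over the remaining inputs before choosing the best strategy. The net_benefit stand-in and priors are the same hypothetical ones as above, repeated so the snippet runs on its own:

```python
import numpy as np

rng = np.random.default_rng(2)

def net_benefit(pip, upd):
    nb_care = np.zeros_like(pip)
    nb_zan = 1000.0 * pip * upd - 240.0
    return np.column_stack([nb_care, nb_zan])

K_outer, K_inner = 1_000, 1_000

# EV(current information): best strategy averaged over all uncertain inputs
pip_all = rng.beta(2, 8, 100_000)
upd_all = rng.normal(0.3, 0.1, 100_000)
ev_current = net_benefit(pip_all, upd_all).mean(axis=0).max()

# Outer loop: resolutions of pip; inner loop: expectation over the other inputs
ev_perfect_pip = 0.0
for pip_i in rng.beta(2, 8, K_outer):
    upd_inner = rng.normal(0.3, 0.1, K_inner)
    nb_inner = net_benefit(np.full(K_inner, pip_i), upd_inner)
    ev_perfect_pip += nb_inner.mean(axis=0).max()   # optimal decision given this pip
ev_perfect_pip /= K_outer

print(f"partial EVPI for pip ≈ {ev_perfect_pip - ev_current:.2f}")
```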

Partial EVPI
Some implications:
- information about an input is only valuable if it changes our decision
- information is only valuable if pip does not resolve at its expected value
General solution:
- linear and non-linear models
- inputs can be (spuriously) correlated

Felli and Hazen (98) "short cut"
EVPI_pip = EVPI when we resolve all other inputs at their expected values
Appears counter-intuitive:
- we resolve all other uncertainties, then ask what is the value of pip, i.e. a "residual" EVPI_pip?
But:
- resolving at the expected value does not give us any information
Correct if:
- linear relationship between inputs and net benefit
- inputs are not correlated
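
A sketch of the short cut under the same hypothetical stand-in model: all other inputs are fixed at their expected values rather than integrated over. Note the stand-in model is multiplicative rather than linear in its inputs, so this short cut need not reproduce the nested-loop partial EVPI above:

```python
import numpy as np

rng = np.random.default_rng(3)

def net_benefit(pip, upd):
    nb_care = np.zeros_like(pip)
    nb_zan = 1000.0 * pip * upd - 240.0
    return np.column_stack([nb_care, nb_zan])

upd_mean = 0.3                                    # E(upd): other input held at expected value
pip_draws = rng.beta(2, 8, 100_000)               # resolutions of pip

nb_fixed = net_benefit(pip_draws, np.full_like(pip_draws, upd_mean))
ev_perfect = nb_fixed.max(axis=1).mean()          # choose per resolution of pip
ev_current = nb_fixed.mean(axis=0).max()          # prior decision, others at E(.)
print(f"short-cut EVPI_pip ≈ {ev_perfect - ev_current:.2f}")
```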

So why different values?
- the model is linear
- the inputs are independent?

"Residual" EVPI
- the wrong current-information position for partial EVPI
- what is the value of resolving pip when we already have perfect information about all other inputs?
- expect residual EVPI_pip < partial EVPI_pip
EVPI when we resolve all other inputs at each realisation?

Thompson and Evans (96) and Thompson and Graham (96)
inb simplifies to a linear expression and can be rearranged as a function of each input in turn: pip, pcz, phz, rsd, upd, phs, pcs.
- Felli and Hazen (98) used a similar approach
- Thompson and Evans (96) is a linear model
- emphasis on EVPI when the others are set to their joint expected value
- requires payoffs as a function of the input of interest

Reduction in cost of uncertainty
- intuitive appeal
- consistent with conditional probabilistic analysis
RCU_E(pip) = EVPI − EVPI(pip resolved at its expected value)
But:
- pip may not resolve at E(pip), and prior decisions may change
- this is the value of perfect information if forced to stick to the prior decision, i.e. the value of a reduction in variance
- expect RCU_E(pip) < partial EVPI_pip
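
A sketch of RCU_E(pip) under the same hypothetical model: the overall EVPI is recomputed with pip held at its prior mean, and the reduction is taken as the value of resolving pip at its expected value:

```python
import numpy as np

rng = np.random.default_rng(4)

def net_benefit(pip, upd):
    nb_care = np.zeros_like(pip)
    nb_zan = 1000.0 * pip * upd - 240.0
    return np.column_stack([nb_care, nb_zan])

def evpi(pip, upd):
    nb = net_benefit(pip, upd)
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()

K = 100_000
pip = rng.beta(2, 8, K)
upd = rng.normal(0.3, 0.1, K)

pip_mean = 2 / (2 + 8)                            # E(pip) under the assumed Beta(2, 8) prior
rcu = evpi(pip, upd) - evpi(np.full(K, pip_mean), upd)
print(f"RCU at E(pip) ≈ {rcu:.2f}")
```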

Reduction in cost of uncertainty: spurious correlation again?
RCU_pip = E_pip[EVPI − EVPI(given realisation of pip)] = partial EVPI_pip
RCU_pip = EVPI − E_pip[EVPI(given realisation of pip)]
= [EV(perfect information) − EV(current information)] − E_pip[EV(perfect information, pip resolved) − EV(current information, pip resolved)]

EVPI for strategies
Value of including a strategy?
- EVPI with and without the strategy included
- demonstrates bias
- difference = EVPI associated with the strategy?
- EV(perfect information, all included) − EV(perfect information, excluded)
E_all inputs[Max_d(NB_d | all inputs)] − E_all inputs[Max_d−1(NB_d−1 | all inputs)]
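
A sketch of this comparison, with a purely hypothetical third strategy added to the stand-in model: EV(perfect information) is computed over the full strategy set and again with that strategy excluded:

```python
import numpy as np

rng = np.random.default_rng(5)

K = 100_000
pip = rng.beta(2, 8, K)
upd = rng.normal(0.3, 0.1, K)

nb_care = np.zeros(K)
nb_zan = 1000.0 * pip * upd - 240.0
nb_new = 1400.0 * pip * upd - 420.0               # hypothetical extra strategy
nb_all = np.column_stack([nb_care, nb_zan, nb_new])

ev_pi_all = nb_all.max(axis=1).mean()             # E[Max_d NB_d], all strategies included
ev_pi_excl = nb_all[:, :2].max(axis=1).mean()     # E[Max_d-1 NB_d-1], strategy excluded
print(f"EV(PI, all) - EV(PI, excluded) ≈ {ev_pi_all - ev_pi_excl:.2f}")
```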

Conclusions on partials
Life is beautiful…
Hegel was right… progress is a dialectic
Maths don't lie…
…but brute force empiricism can mislead

EVSI…
…it may well be a train
Hegel's right again! …contradiction follows synthesis

EVSI for model inputs
- generate a predictive distribution for a sample of n
- sample from the predictive and prior distributions to form a preposterior
- propagate the preposterior through the model
- value of information for a sample of n
- find the n* that maximises EVSI minus the cost of sampling
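
A sketch of this recipe for a study informing pip, using the Beta-Binomial preposterior described on the next slide; the model, priors and study are all hypothetical stand-ins. The stand-in model is linear in each input, so inner expectations can be taken by plugging in the preposterior mean of pip and E(upd); a non-linear model would need an inner simulation loop:

```python
import numpy as np

rng = np.random.default_rng(6)

def net_benefit(pip, upd):
    nb_care = np.zeros_like(pip)
    nb_zan = 1000.0 * pip * upd - 240.0
    return np.column_stack([nb_care, nb_zan])

def evsi_pip(n, a=2.0, b=8.0, K=50_000):
    """EVSI of an n-patient study informing pip (hypothetical model and priors)."""
    # 1-2. predictive distribution for a sample of n, combined with the prior
    #      draws to form a preposterior for pip
    pip = rng.beta(a, b, K)
    rip = rng.binomial(n, pip)
    pip_post = (pip * (a + b) + rip) / (a + b + n)     # preposterior means
    # 3. propagate through the model, other input at its expected value
    upd_mean = np.full(K, 0.3)
    # 4. value of information for a sample of n
    ev_sample = net_benefit(pip_post, upd_mean).max(axis=1).mean()   # decide after the study
    ev_current = net_benefit(pip, upd_mean).mean(axis=0).max()       # decide now
    return ev_sample - ev_current

print(f"EVSI(n=50) ≈ {evsi_pip(50):.2f}")
```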

EVSI for pip: an epidemiological study of size n
- prior: pip ~ Beta(α, β)
- predictive: rip ~ Bin(n, pip)
- preposterior: pip' = (pip(α + β) + rip) / (α + β + n)
- as n increases, var(rip/n) falls towards var(pip)
- var(pip') < var(pip) and falls with n
- the pip' are the possible posterior means
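
A small numerical check of this preposterior construction, with assumed prior parameters: pip is drawn from the prior, rip from the binomial predictive, and pip' formed by the update above; the preposterior means have the same expectation as the prior, consistent with E(pip') = E(pip) used on the next slide:

```python
import numpy as np

rng = np.random.default_rng(7)

a, b, n, K = 2.0, 8.0, 50, 200_000                  # assumed Beta(a, b) prior, study size n

pip = rng.beta(a, b, K)                              # prior draws of pip
rip = rng.binomial(n, pip)                           # predictive draws: events in n patients
pip_post = (pip * (a + b) + rip) / (a + b + n)       # preposterior (possible posterior means)

print(f"E(pip)  ≈ {pip.mean():.4f}")
print(f"E(pip') ≈ {pip_post.mean():.4f}")            # matches the prior mean
```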

EVSI_pip = reduction in the cost of uncertainty due to n observations on pip
= difference in partials (EVPI_pip − EVPI_pip')
= E_pip[E_other[Max_d(NB_d | other, pip)] − Max_d E_other(NB_d | other, pip)]
− E_pip'[E_other[Max_d(NB_d | other, pip')] − Max_d E_other(NB_d | other, pip')]
pip' has a smaller variance, so any realisation is less likely to change the decision:
E_pip[E_other[Max_d(NB_d | other, pip)]] > E_pip'[E_other[Max_d(NB_d | other, pip')]]
E(pip') = E(pip), so:
E_pip[Max_d E_other(NB_d | other, pip)] = E_pip'[Max_d E_other(NB_d | other, pip')]

EVSI_pip
Why not the difference in prior and preposterior EVPI?
- effect of pip' only through var(NB)
- the decision can change for the realisation of pip' once the study is completed
- the difference in prior and preposterior EVPI will underestimate EVSI_pip

Implications
- EVSI for any input that is conjugate
- generate the preposterior for the log odds ratio of complication, hospitalisation, etc.
- trial design for an individual endpoint (rsd)
- trial designs with a number of endpoints (pcz, phz, upd, rsd)
- n for an endpoint will be uncertain (n_pcz = n*pip, etc.)
- consider optimal n and allocation (search for n*)
- combine different designs, e.g.:
- obs study (pip) and trial (upd, rsd), or obs study (pip, upd) and trial (rsd), etc.
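
A sketch of the search for n*: the EVSI function from the earlier sketch is repeated (hypothetical model, priors, beneficiary population and sampling costs) and the candidate n that maximises EVSI minus the cost of sampling is selected:

```python
import numpy as np

rng = np.random.default_rng(8)

def net_benefit(pip, upd):
    """Hypothetical stand-in model (same form as the earlier sketches)."""
    return np.column_stack([np.zeros_like(pip), 1000.0 * pip * upd - 240.0])

def evsi_pip(n, a=2.0, b=8.0, K=50_000):
    """Per-patient EVSI of an n-patient study on pip (hypothetical priors)."""
    pip = rng.beta(a, b, K)
    rip = rng.binomial(n, pip)
    pip_post = (pip * (a + b) + rip) / (a + b + n)   # preposterior means
    upd_mean = np.full(K, 0.3)                       # model linear in upd, so plug in E(upd)
    ev_sample = net_benefit(pip_post, upd_mean).max(axis=1).mean()
    ev_current = net_benefit(pip, upd_mean).mean(axis=0).max()
    return ev_sample - ev_current

# Hypothetical population and study costs; n* maximises population EVSI minus sampling cost
fixed_cost, cost_per_patient, population = 5_000.0, 200.0, 10_000
net = {n: population * evsi_pip(n) - (fixed_cost + cost_per_patient * n)
       for n in range(10, 501, 10)}
n_star = max(net, key=net.get)
print(f"n* = {n_star}, EVSI minus cost of sampling ≈ {net[n_star]:.0f}")
```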