USE OF LAPLACE APPROXIMATIONS TO SIGNIFICANTLY IMPROVE THE EFFICIENCY OF EXPECTED VALUE OF SAMPLE INFORMATION COMPUTATIONS

Alan Brennan, Samer Kharroubi. CHEBS, University of Sheffield, England. a.brennan@sheffield.ac.uk

Purpose: To describe a novel process for transforming the efficiency of partial EVSI computations in health economic decision models.

Background: Brennan et al. (1,2) and Claxton et al. (3) have promoted EVSI as a measure of the societal value of research designs and as a means of identifying optimal sample sizes for primary data collection.

Current Mathematical Formulation and Two-Level Algorithm: Partial EVSI for Parameters
Current methods involve a two-level Monte Carlo simulation algorithm and a large number of calculations. Notation: \theta denotes the model parameters (currently uncertain), d a possible decision or strategy, and \mathrm{NB}(d,\theta) the net benefit of decision d given parameters \theta.

Step 0: Analysis Based on Current Information
Set up the decision model and characterise the uncertainty in each parameter with a prior probability distribution. Calculate the baseline decision and its expected net benefit: given current information, choose the decision with the maximum expected net benefit,

(1)   \max_{d} \mathrm{E}_{\theta}\,\mathrm{NB}(d,\theta).

Step 1: Define a Data Collection Exercise, Simulate the Variety of Possible Results
Let \theta_i denote the parameters of interest (those we propose to collect data on) and \theta_{-i} the other parameters (those not of interest, i.e. the remaining uncertainty). Decide on the research design, i.e. which parameters to collect data on (\theta_i), the sample size, and so on. [Start loop] Sample the data collection: (a) sample a true underlying value of the parameters of interest \theta_i from their prior uncertainty; (b) sample simulated data X_i given that sampled true value of \theta_i. Synthesise the existing evidence with the simulated data; the result is a simulated posterior probability distribution for the parameters of interest. Evaluate the net benefit of each strategy given the new data and make a "revised decision" (re-run the probabilistic analysis of the decision model using Monte Carlo simulation). The net benefit of the revised decision given the simulated data is

(2)   \max_{d} \mathrm{E}_{\theta \mid X_i}\,\mathrm{NB}(d,\theta).

[Loop back]

Step 2: Evaluate the Expected Value of the Proposed Research
Performing the loop a large number of times by Monte Carlo simulation gives the expected net benefit with the proposed data,

(3)   \mathrm{E}_{X_i}\!\left[\max_{d} \mathrm{E}_{\theta \mid X_i}\,\mathrm{NB}(d,\theta)\right],

and the partial expected value of the proposed sample information is (3) minus (1):

(4)   \mathrm{EVSI} = \mathrm{E}_{X_i}\!\left[\max_{d} \mathrm{E}_{\theta \mid X_i}\,\mathrm{NB}(d,\theta)\right] - \max_{d} \mathrm{E}_{\theta}\,\mathrm{NB}(d,\theta).

This is a two-level simulation because of the two nested expectations (e.g. 1000 × 1000 model runs).

Application of the Laplace Approximation to the EVSI Formula
The first term of the EVSI formula is an outer expectation over X_i of an inner expectation of net benefit over \theta \mid X_i. We use the Laplace approximation (5) (see Methodology below) to evaluate the inner expectation, which gives the first-order approximate EVSI

(6)   \mathrm{EVSI} \approx \mathrm{E}_{X_i}\!\left[\max_{d} \mathrm{NB}\bigl(d,\hat{\theta}(X_i)\bigr)\right] - \max_{d} \mathrm{E}_{\theta}\,\mathrm{NB}(d,\theta),

which involves only one expectation. The posterior mode \hat{\theta} is recalculated for each simulated dataset X_i.

Illustrative Model with Normally Distributed Uncertain Parameters
The illustrative model compares two treatments, T1 and T0, with normally distributed cost and benefit parameters, so Bayesian updating follows the conjugate normal case. EVSI was computed with both the two-level algorithm and the first-order Laplace approximation.

Results
Computation times: two-level algorithm, 1000 × 1000 iterations, approximately 15 minutes; first-order Laplace approximation, 1000 iterations, approximately 18 seconds. Comparison: the results are of a very similar order of magnitude; the first-order Laplace estimate is marginally below the two-level estimate, and the two-level algorithm with 1000 × 1000 iterations is itself a slight over-estimate.
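To make the comparison concrete, below is a minimal sketch in Python of both calculations for a hypothetical two-strategy model with a single normally distributed parameter of interest. The model structure, the linear net-benefit function, all parameter values, and the function names (evsi_two_level, evsi_laplace_first_order) are illustrative assumptions and are not taken from the poster's actual model.

```python
# Minimal sketch (not the authors' code) comparing the two-level Monte Carlo EVSI
# with the first-order Laplace (posterior-mode plug-in) approximation.
# Model, parameter values and net-benefit function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

mu0, sigma0 = 0.5, 1.0   # prior for the parameter of interest: theta_i ~ N(mu0, sigma0^2)
n, sigma_x = 50, 2.0     # proposed study: n observations with sampling SD sigma_x

def net_benefit(d, theta_i):
    """Net benefit of strategy d; linear in theta_i for simplicity (hypothetical)."""
    theta_i = np.asarray(theta_i, dtype=float)
    return 1000.0 * theta_i if d == 1 else np.full_like(theta_i, 500.0)

def posterior_params(xbar):
    """Conjugate normal update with known sampling variance."""
    precision = 1.0 / sigma0**2 + n / sigma_x**2
    mu1 = (mu0 / sigma0**2 + n * xbar / sigma_x**2) / precision
    return mu1, np.sqrt(1.0 / precision)

# Step 0 / eq. (1): expected net benefit under current information.
theta_prior = rng.normal(mu0, sigma0, 100_000)
enb_current = max(net_benefit(d, theta_prior).mean() for d in (0, 1))

def evsi_two_level(n_outer=1000, n_inner=1000):
    """Eqs. (2)-(4): outer loop over simulated datasets, inner loop over the posterior."""
    outer = np.empty(n_outer)
    for k in range(n_outer):
        theta_true = rng.normal(mu0, sigma0)                 # sample a 'true' theta_i
        xbar = rng.normal(theta_true, sigma_x / np.sqrt(n))  # simulated data summary
        mu1, sd1 = posterior_params(xbar)
        theta_post = rng.normal(mu1, sd1, n_inner)           # simulated posterior sample
        outer[k] = max(net_benefit(d, theta_post).mean() for d in (0, 1))
    return outer.mean() - enb_current

def evsi_laplace_first_order(n_outer=1000):
    """Eq. (6): the inner expectation is replaced by NB at the posterior mode."""
    outer = np.empty(n_outer)
    for k in range(n_outer):
        theta_true = rng.normal(mu0, sigma0)
        xbar = rng.normal(theta_true, sigma_x / np.sqrt(n))
        mu1, _ = posterior_params(xbar)                      # posterior mode = mean here
        outer[k] = max(float(net_benefit(d, mu1)) for d in (0, 1))
    return outer.mean() - enb_current

print("Two-level Monte Carlo EVSI:", evsi_two_level())
print("First-order Laplace EVSI  :", evsi_laplace_first_order())
```

Because the hypothetical net benefit is linear in the parameter and the posterior is normal, the two estimates should agree closely, mirroring the kind of agreement reported in the Results.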
Conclusions
(1) This novel application of Laplace approximations short-cuts the calculation of EVSI.
(2) A simple illustrative model shows that EVSI calculations using the new approach are in line with those produced by the longer two-level Monte Carlo sampling method.
(3) The reduction in computation time depends on the number of Monte Carlo samples used to evaluate EVSI; computations can be up to 100 times faster with the approximation method.
(4) Application to more complex models, and assessment of the value of the second-order term, are needed.

Methodology: Laplace Approximation
Sweeting and Kharroubi (4) have developed a second-order approximation to the expectation of a function v(\theta) given available data X,

(5)   \mathrm{E}[\,v(\theta) \mid X\,] \approx v(\hat{\theta}) + \text{second-order correction term},

where v(\hat{\theta}) is the first-order term and the correction built from \alpha^{+}, \alpha^{-}, \theta^{+} and \theta^{-} is the second-order term. Here \hat{\theta} is the posterior mode of the probability distribution of the uncertain parameters \theta; \theta^{+} and \theta^{-} are the solutions of a series of non-linear equations incorporating the posterior mode \hat{\theta} and the observed information J = -l''(\hat{\theta}), i.e. minus the second derivative of the log of (prior × likelihood) evaluated at \hat{\theta}, so that J^{-1} is the posterior variance-covariance matrix; and \alpha^{+} and \alpha^{-} are analytic expressions in terms of the prior, the first derivative of the likelihood, and the function v(\theta) itself. In the normal conjugate case of the illustrative model, the posterior mode coincides with the posterior mean \mu_1 (a short worked sketch of this update appears at the end of this document).

References
1. Brennan A, Chilcott JB, Kharroubi S, O'Hagan A. Calculating Expected Value of Perfect Information: Resolution of Conflicting Methods via a Two-Level Monte Carlo Approach. Presented at the 24th Annual Meeting of SMDM, October 23rd, 2002, Washington; submitted to Medical Decision Making, 2002.
2. Brennan A, Chilcott JB, Kharroubi S, O'Hagan A. A Two-Level Monte Carlo Approach to Calculating Expected Value of Sample Information: How to Value a Research Design. Presented at the 24th Annual Meeting of SMDM, October 23rd, 2002, Washington, 2002.
3. Claxton K, Ades T. Efficient Research Design: An Application of Value of Information Analysis to an Economic Model of Zanamivir. Presented at the 24th Annual Meeting of the Society for Medical Decision Making, October 21st, 2002, Washington, 2002.
4. Sweeting TJ, Kharroubi SA. Some New Formulae for Posterior Expectations and Bartlett Corrections. Test (Sociedad de Estadística e Investigación Operativa), accepted, 2003.

Acknowledgements: Particular thanks to Professor Tony O'Hagan for encouraging our ongoing work.
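For reference, the conjugate normal update mentioned in the Methodology section can be written out as follows. This is the standard known-variance normal result with notation assumed here (prior mean \mu_0, prior variance \sigma_0^2, sampling variance \sigma_x^2, sample mean \bar{X}_i from n observations); it is a sketch of the update, not reproduced from the poster.

```latex
% Standard conjugate normal update (known sampling variance), assumed notation
\[
\theta_i \sim N(\mu_0,\ \sigma_0^2), \qquad
\bar{X}_i \mid \theta_i \sim N\!\left(\theta_i,\ \sigma_x^2 / n\right)
\;\Longrightarrow\;
\theta_i \mid \bar{X}_i \sim N(\mu_1,\ \sigma_1^2),
\]
\[
\sigma_1^2 = \left(\frac{1}{\sigma_0^2} + \frac{n}{\sigma_x^2}\right)^{-1},
\qquad
\mu_1 = \sigma_1^2 \left(\frac{\mu_0}{\sigma_0^2} + \frac{n\,\bar{X}_i}{\sigma_x^2}\right).
\]
% The normal posterior is symmetric, so its mode equals the posterior mean mu_1.
% First-order plug-in used in equation (6): the inner expectation is replaced by
% the net benefit evaluated at the posterior mode.
\[
\mathrm{E}_{\theta \mid X_i}\,\mathrm{NB}(d,\theta) \;\approx\; \mathrm{NB}(d,\hat{\theta}),
\qquad \hat{\theta} = \mu_1 \ \text{(for the parameter of interest).}
\]
```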