1 Transforming the efficiency of Partial EVSI computation Alan Brennan Health Economics and Decision Science (HEDS) Samer Kharroubi Centre for Bayesian Statistics in Health Economics (CHEBS) University of Sheffield, England IHEA July 2005
2 Expected Value of Sample Information (EVSI)
EVSI works out the expected impact on decision making if we collect more data. We:
1. Simulate a collected sample dataset
2. Update uncertainty in the parameters given the data
3. Ask whether we would choose a different decision option given the data
4. Quantify the increase in benefit over the baseline decision
5. Repeat for many sample datasets
6. Calculate the expected increase in benefit
3 EVSI: The Computational Problem
EVSI works out the expected impact on decision making if we collect more data. Conventional computations required:
1. "Outer" Monte Carlo sample
2. Bayesian update – analytic or MCMC
3. "Inner" Monte Carlo sample, e.g. 10,000 times
4. Evaluate each net benefit function each time
5. Repeat for many sample datasets, e.g. 10,000 times
6. Total: e.g. 100,000,000 evaluations of net benefit
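To make this nested computation concrete, here is a minimal Python/NumPy sketch of the conventional two-level Monte Carlo EVSI calculation described on slides 2 and 3. The helper names (sample_prior, sample_dataset, posterior_sample, net_benefit) are illustrative placeholders assumed for this sketch, not code from the presentation.

```python
# Minimal sketch of the conventional two-level Monte Carlo EVSI computation.
# All helper functions are assumptions supplied by the user of this sketch.
import numpy as np

def evsi_two_level(sample_prior, sample_dataset, posterior_sample,
                   net_benefit, decisions, n_outer=10_000, n_inner=10_000):
    # Baseline: best expected net benefit under current information only
    prior_draws = [sample_prior() for _ in range(n_inner)]
    baseline = max(np.mean([net_benefit(d, th) for th in prior_draws])
                   for d in decisions)

    payoffs = []
    for _ in range(n_outer):                   # outer loop over simulated datasets
        data = sample_dataset(sample_prior())  # 1. simulate a collected sample dataset
        post = [posterior_sample(data)         # 2.-3. Bayesian update + inner Monte Carlo sample
                for _ in range(n_inner)]
        payoffs.append(max(                    # 4. payoff of the best decision given the data
            np.mean([net_benefit(d, th) for th in post])
            for d in decisions))
    return np.mean(payoffs) - baseline         # 5.-6. expected increase in benefit
```

With n_outer = n_inner = 10,000 this is the 100,000,000 net benefit evaluations quoted above.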
4 Mathematical Notation
θ = uncertain model parameters
t = set of possible treatments (decision options)
NB(d, θ) = net benefit (λ*QALY – Cost) for decision d and parameters θ
θ_i = parameters of interest – possible data collection
X_i = data collected on the parameters of interest θ_i
EVSI = E_{X_i} [ max_d E_{θ|X_i} NB(d, θ) ] – max_d E_θ [ NB(d, θ) ]
i.e. the expectation over sampled datasets of the expected payoff of the best decision given the particular new data X_i, minus the expected payoff given only current information
5 Laplace approximation
Sweeting and Kharroubi (2003) developed a 2nd order approximation to evaluate the posterior expectation of any real valued smooth function v(θ) of a vector of d uncertain parameters, given newly available data X.
(the slide shows the approximation formula with its 1st order term and 2nd order term labelled)
6 Eureka
For EVSI, the first term in the formula is the expectation over datasets of the maximised inner expectation E_{θ|X_i} [ NB(d, θ) ]. We can adapt the Laplace approximation to evaluate this EVSI inner expectation!
(the slide shows the adapted formula with its 1st order and 2nd order terms labelled)
Only requires 1 + 3d evaluations of net benefit (Kharroubi and Brennan 2005)
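As a quick illustration using the deck's own numbers (d = 19 parameters, as in the case studies, and an inner Monte Carlo sample of 10,000): 1 + 3d = 1 + 3 × 19 = 58 net benefit evaluations per simulated dataset, versus 10,000 for the inner loop – roughly a 170-fold reduction in that step.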
7 Univariate Explanation: θ+ and θ− are 1 standard deviation away from the posterior mode θ̂
8 Univariate Explanation: α+ and α− are weights, functions of the ratio of the slopes of the log density function at θ+ and θ−. If the distribution is symmetric then α+ = α− = ½
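Putting these two slides together, the univariate first-order approximation is a two-point weighted average (stated here as a paraphrase of the slides, assuming, consistently with the symmetric case α+ = α− = ½, that the weights sum to one):
E[ v(θ) | X ] ≈ α+ v(θ+) + α− v(θ−), with α+ + α− = 1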
9 Multivariate: requires matrix algebra for each dataset X_i
θ_i+ and θ_i− are vectors, each the i-th row of a matrix θ+ or θ−:
›The first i−1 components are the posterior modes θ̂_1 ... θ̂_{i−1}
›The i-th is θ̂_i ± (k_i)^{−1/2}, where k_i is 1 / the first entry of {J^(i)}^{−1}
›The remaining i+1 to d components are chosen to maximise the posterior density given the first i components
α_i+ and α_i− are vectors of weights, calculated from partial derivatives of the log posterior density function at θ_i+ and θ_i−
Requires numerical optimisation (see the sketch below)
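The numerical optimisation step can be sketched as follows. This is an illustrative Python/SciPy reconstruction of the construction just described, not the authors' code: the posterior mode theta_hat, the log posterior function and each k_i are taken as inputs, and the remaining components are found with a generic optimiser.

```python
# Illustrative sketch (not the authors' code) of building the i-th rows of the
# theta+ / theta- matrices used by the multivariate Laplace approximation.
import numpy as np
from scipy.optimize import minimize

def conditional_mode(log_posterior, fixed_head, theta_hat):
    """Maximise the log posterior over the remaining components,
    holding the first len(fixed_head) components fixed."""
    i, d = len(fixed_head), len(theta_hat)
    if i == d:
        return np.asarray(fixed_head, dtype=float)

    def neg_log_post(tail):
        return -log_posterior(np.concatenate([fixed_head, tail]))

    # start the search at the corresponding components of the posterior mode
    res = minimize(neg_log_post, x0=np.asarray(theta_hat[i:], dtype=float),
                   method="Nelder-Mead")
    return np.concatenate([fixed_head, res.x])

def evaluation_point(log_posterior, theta_hat, i, k_i, sign):
    """i-th row of theta+ (sign=+1) or theta- (sign=-1), with i counted from 1:
    first i-1 components at the posterior mode, i-th displaced by k_i**-0.5,
    remaining components at the conditional posterior mode."""
    head = list(theta_hat[:i - 1]) + [theta_hat[i - 1] + sign * k_i ** -0.5]
    return conditional_mode(log_posterior, head, theta_hat)
```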
10 Case Studies
Case Study 1
›2 treatments – T1 versus T0
›Uncertainty in 19 independent parameters
›Univariate Normal prior and data
›Net benefit function is sum-product form
›NB1 = (θ5 θ6 θ7 + θ8 θ9 θ10) – (θ1 + θ2 θ3 θ4)
Case Study 2
›Uncertainty in 19 correlated parameters
›Multivariate Normal prior and data
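For concreteness, the case study 1 net benefit can be transcribed directly from the sum-product expression above (a minimal sketch; parameters indexed 1–19 as on the slide, with any λ scaling assumed to be already folded into the parameter values):

```python
# Net benefit of T1 in case study 1, transcribed from the slide's sum-product form.
def nb1(theta):
    # theta: sequence of the 19 parameter values; build a 1-based lookup for clarity
    t = {i + 1: v for i, v in enumerate(theta)}
    return (t[5] * t[6] * t[7] + t[8] * t[9] * t[10]) - (t[1] + t[2] * t[3] * t[4])
```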
11 Illustrative Model
12 Case Study 1 Results (5 sets): 1st order Laplace is accurate
13 Case Study 2: 1st order is wrong, 2nd order is accurate
14 Accuracy of inner integral approximation (parameters 6 and 15, sample size n = 50): out of 1,000 datasets the resulting decision between the 2 treatments differed in 7, i.e. a 0.7% error rate. (the slide plots the Monte Carlo estimates against the Laplace estimates)
15 Trade-off in Computation Time
16 Computation Time: What-If Analyses. The efficiency gain due to the Laplace approximation increases rapidly as the model run time for one evaluation of net benefit increases.
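Purely as an illustration with the numbers used earlier in the deck (10,000 outer datasets, 10,000 inner samples, d = 19 parameters):
conventional: 10,000 × 10,000 = 100,000,000 net benefit evaluations
Laplace: 10,000 × (1 + 3 × 19) = 580,000 net benefit evaluations
If a single net benefit evaluation (e.g. one run of an individual level simulation) took 1 second, that would be roughly 3 years versus under a week of computation, which is why the gain grows with model run time.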
17 Limitations
Any type of net benefit function can be used:
›an analytic function of the model parameters
›the result of a probabilistic model, e.g. an individual level simulation
Characterisation of uncertainty is the constraint:
›need a functional form for the probability density function
›it must be smooth and differentiable, i.e. not just a histogram to sample from
›you must be able to write down the equations for the posterior density function and its derivative mathematically
18 Conclusions
EVSI calculations using the Laplace approximation are in line with those using the 2-level Monte Carlo method in the case studies so far.
The method is very generalisable once you understand the mathematics and the algorithm.
Computation time reductions depend on the time taken to compute the net benefit functions.
19 Thank you
'Wisest are they who know they do not know'
'Especially if they can calculate whether it's worth finding out'
20 References
Brennan, A. B., Chilcott, J. B., Kharroubi, S. A. and O'Hagan, A. A Two Level Monte Carlo Approach to Calculating Expected Value of Sample Information: How To Value a Research Design. Presented at the 24th Annual Meeting of SMDM, October 23rd, 2002, Washington.
Ades, A. E., Lu, G. and Claxton, K. (2004). Expected value of sample information calculations in medical decision modelling. Medical Decision Making, 24(2).
Sweeting, T. J. and Kharroubi, S. A. (2003). Some new formulae for posterior expectations and Bartlett corrections. Test, 12(2).
Kharroubi, S. A. and Brennan, A. (2005). A Novel Formulation for Approximate Bayesian Computation Based on Signed Roots of Log-Density Ratios. Research Report No. 553/05, Department of Probability and Statistics, University of Sheffield. Submitted to Applied Statistics.