Value of Information: Some introductory remarks by Tony O’Hagan
Welcome! Welcome to the second CHEBS dissemination workshop. This forms part of our Focus Fortnight on “Value of Information”. Our format allows plenty of time for discussion of the issues raised in each talk, so please feel free to join in!
Uncertainty in models An economic model tells us the mean cost and effectiveness of each treatment, and so gives their mean net benefits. We can thereby identify the most cost-effective treatment. However, there are invariably many unknown parameters in the model. Uncertainty in these leads to uncertainty in the net benefits, and hence in the choice of treatment.
Responses to uncertainty We need to recognise this uncertainty when expressing conclusions from the model: ›Variances or intervals around estimates of net benefits ›Cost-effectiveness acceptability curves for the choice of treatment. We can identify those parameters whose uncertainty has most influence on the conclusions.
Sensitivity and VoI One way to identify the most influential parameters is via sensitivity analysis. This is primarily useful for seeing where research will be most effective in reducing uncertainty. A more direct approach is to quantify the cost to us of uncertainty, and thereby calculate the value of reducing it. Then we should engage in further research wherever it would cost less than the value of the information it will yield.
Notation To define value of information, we need some notation: ›X denotes the uncertain inputs ›E_X denotes expectation with respect to the random quantity X ›t denotes the treatment number ›max_t denotes taking a maximum over all the possible treatments ›U(t, X) denotes the net benefit of treatment t when the uncertain inputs take the values X
Baseline If we cannot get more information, we have to take a decision now, based on the present uncertainty in X. This gives us the baseline expected net benefit max_t E_X U(t, X). The baseline decision is to use the treatment that achieves this maximal expected net benefit.
Perfect information Suppose now that we could gain perfect information about all the unknown inputs X. We would then simply maximise U(t, X) across the various treatments, using the true value of X. However, at the present time we do not know X, and so the expected achieved net benefit is E_X max_t U(t, X).
EVPI The gain in expected net benefit from learning the true value of X is the difference between these two formulae: E_X max_t U(t, X) − max_t E_X U(t, X). We call this the Expected Value of Perfect Information, EVPI.
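To make these two formulae concrete, here is a minimal Monte Carlo sketch of the EVPI calculation. The two-treatment net_benefit function and the normal distribution used for X are purely illustrative assumptions, not part of the model discussed in the talk.

    import numpy as np

    rng = np.random.default_rng(1)

    def net_benefit(t, x):
        # Illustrative net benefit U(t, X) for treatments t = 0, 1 (made-up numbers).
        if t == 0:
            return 10.0 + 0.5 * x
        return 11.0 + 2.0 * x

    # Simple Monte Carlo sample from the (assumed) distribution of the inputs X
    x_samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

    # Net benefits for every treatment and every sampled value of X: shape (2, N)
    nb = np.array([net_benefit(t, x_samples) for t in (0, 1)])

    baseline = nb.mean(axis=1).max()   # max_t E_X U(t, X): decide now
    perfect = nb.max(axis=0).mean()    # E_X max_t U(t, X): decide knowing X
    evpi = perfect - baseline          # per-patient EVPI

    print(f"baseline = {baseline:.3f}, perfect = {perfect:.3f}, EVPI = {evpi:.3f}")

The same set of samples gives both the baseline and the perfect-information expectation; only the order of the mean and the maximum differs.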
Partial information Now suppose we can find out the true value of Y, comprising one or more of the parameters in X (but not all of them). Then we will get expected net benefit max_t E_{X|Y} U(t, X), where now we need to take the expectation over the remaining uncertainty in X after learning Y, which is denoted by E_{X|Y}.
But because at the present time we do not yet know Y, our present expectation of this future expected net benefit is the relevant measure: E_Y max_t E_{X|Y} U(t, X). To calculate this, we need to carry out two separate expectations. To get the value of the partial information, we again subtract the baseline expected net benefit.
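A minimal sketch of this two-level calculation follows, assuming (for illustration only) that X splits into a part Y that the research would reveal and a part Z that remains uncertain, both given made-up normal distributions.

    import numpy as np

    rng = np.random.default_rng(2)

    def net_benefit(t, y, z):
        # Illustrative U(t, X) where the uncertain inputs are X = (Y, Z); made-up numbers.
        if t == 0:
            return 10.0 + 0.5 * y + 0.3 * z
        return 11.0 + 2.0 * y - 0.5 * z

    # Baseline: expectation over both Y and Z, then maximise over treatments
    y_all = rng.normal(size=200_000)
    z_all = rng.normal(size=200_000)
    baseline = max(net_benefit(t, y_all, z_all).mean() for t in (0, 1))

    # Outer expectation over Y of the post-study optimum max_t E_{X|Y} U(t, X)
    n_outer, n_inner = 2_000, 2_000
    post_study = 0.0
    for y in rng.normal(size=n_outer):
        z_inner = rng.normal(size=n_inner)    # remaining uncertainty given Y = y
        post_study += max(net_benefit(t, y, z_inner).mean() for t in (0, 1))
    post_study /= n_outer

    evppi = post_study - baseline             # per-patient value of learning Y
    print(f"value of learning Y = {evppi:.3f}")

The inner loop re-evaluates the expectation over Z for every sampled value of Y, which is why partial-information calculations need two levels of sampling and are much more expensive than EVPI.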
Sample information In practice, we will never be able to learn the values of any of the parameters exactly. What we can hope for is to do some research that will yield more information relevant to some or all of X. Let this information be denoted by Y. Then the previous formula still holds. ›Sample information is a kind of partial information
Scaling up All of these formulae have been expressed at a per-patient level. To get the real value of information, we need to multiply by the number of patients to whom the choice of treatment will apply. ›When comparing with the cost of an experiment to get more information now, the relevant number of patients should be discounted over time
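As a rough sketch of this scaling step: the per-patient value is multiplied by a discounted effective population. The per-patient EVPI, annual incidence, decision horizon and discount rate below are illustrative numbers only, not figures from the talk.

    # Illustrative scaling of a per-patient EVPI to a population value.
    per_patient_evpi = 120.0   # per-patient value, e.g. from an earlier calculation (assumed)
    incidence = 10_000         # patients affected per year (assumed)
    horizon_years = 10         # years the decision is expected to remain relevant (assumed)
    discount_rate = 0.035      # annual discount rate (assumed)

    effective_population = sum(
        incidence / (1 + discount_rate) ** year for year in range(horizon_years)
    )
    population_evpi = per_patient_evpi * effective_population

    print(f"effective population = {effective_population:,.0f}")
    print(f"population EVPI      = {population_evpi:,.0f}")

It is this population-level value that can be compared with the cost of the proposed research.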
Computation Computing EVPI is quite straightforward ›Simple Monte Carlo (MC) sampling from the distribution of X can evaluate the baseline as well as the perfect-information expected net benefit. Computing the value of partial or sample information is more complex ›Two levels of sampling are needed for MC computation. More sophisticated Bayesian methods are available when the model is too complex for MC.