Civil Systems Planning Benefit/Cost Analysis
Scott Matthews
Announcements
Recitation Friday. HW 3 due today (now).
Risk Profiles ("pmf")
A risk profile shows the distribution of possible payoffs associated with a particular strategy: the chances associated with the possible consequences. A strategy is what you plan to do going into the decision. It holds your plans constant and allows chances to occur. Only eliminate branches YOU wouldn't choose, not branches "they" might not choose (you can't control them). Risk profiles are centered around the decision (not chance) nodes in the tree.
Risk Profiles (cont.)
There are only 3 "decision strategies" in the base Texaco case:
- Accept the $2 billion offer (topmost branch of 1st decision node)
- Counteroffer $5 billion, but plan to refuse the counteroffer (lower branch of 1st node, upper branch of 2nd)
- Counteroffer $5 billion, but plan to accept the counteroffer (lower branch of both decision nodes)
Risk Profiles (cont.)
Key concept: you do not have complete control over the outcome of "the game" or "the lottery" represented by the tree. BUT considering the risk profile for each strategy cuts out part of the original tree. You can plan a strategy (i.e., which branches to choose), but the other side may make choices such that you do not end up exactly where you intended in the tree. The risk profile for "Accept $2 Billion" is obvious: you get $2B with 100% chance.
Profile for "Counteroffer $5B, refuse counteroffer"
Below is just the part of the original tree to consider when calculating the risk profile:
Solving the Risk Profile
Solve for the discrete probabilities of the outcomes, then make the risk profile. For example: a 25% chance of $0.
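The "25% chance of $0" can be reproduced by collapsing the chance nodes of the pruned tree. A minimal sketch, assuming the standard Texaco-Pennzoil branch probabilities from Clemen (Texaco accepts the $5B counteroffer with p = 0.17; otherwise the case ends up in court, where the award is $10.3B / $5B / $0 with p = 0.2 / 0.5 / 0.3):

```python
# Risk profile (pmf of payoffs) for "Counteroffer $5B, refuse counteroffer".
# Probabilities are assumed from Clemen's Texaco-Pennzoil case.
court_award = [(10.3, 0.2), (5.0, 0.5), (0.0, 0.3)]  # ($ billion, probability)
p_accepts_5b = 0.17            # Texaco accepts our $5B counteroffer
p_to_court = 1 - p_accepts_5b  # refuses (0.50) or counteroffers $3B (0.33)

risk_profile = {5.0: p_accepts_5b}
for payoff, p in court_award:
    # paths through court: multiply along the branches, add across outcomes
    risk_profile[payoff] = risk_profile.get(payoff, 0.0) + p_to_court * p

print(risk_profile)  # P($0) = 0.83 * 0.3 = 0.249, the ~25% chance of $0
```

Note the $5B outcome appears twice in the tree (Texaco accepts, or the court awards $5B), so its probabilities are summed into a single entry.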
Cumulative Risk Profiles
Percent chance that "payoff is less than x." The RP and CRP for "Accept $2B" are below (easy): the CRP goes from 0 to 1 at $2B (0% chance the payoff is below $2B, 100% chance it is below anything > $2B).
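Constructing a CRP is just a running sum over the risk profile, sorted by payoff. A small sketch (the helper name crp is mine):

```python
# Turn a risk profile (payoff -> probability) into a cumulative risk
# profile: the step function F(x) = P(payoff <= x), rising from 0 to 1.
def crp(risk_profile):
    """Return [(payoff, cumulative probability)] sorted by payoff."""
    total, steps = 0.0, []
    for payoff in sorted(risk_profile):
        total += risk_profile[payoff]
        steps.append((payoff, total))
    return steps

# "Accept $2B": the CRP jumps from 0 to 1 at $2B.
print(crp({2.0: 1.0}))  # [(2.0, 1.0)]
```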
CRPs for the Other 2 Strategies
Dominance
To pick between strategies, it is useful to have rules by which to eliminate options. Let's construct an example: assume the minimum expected "court award" is $2.5B (instead of $0). Now there are no "zero endpoints" in the decision tree.
Stochastic Dominance: Example #1
The CRPs below for the 2 strategies show that "Accept $2 Billion" is dominated by the other.
Stochastic Dominance "Defined"
A is better than B if:
Pr(Profit > $z | A) ≥ Pr(Profit > $z | B), for all possible values of $z.
Or (by complementarity):
Pr(Profit ≤ $z | A) ≤ Pr(Profit ≤ $z | B), for all possible values of $z.
A first-order stochastically dominates (FOSD) B iff F_A(z) ≤ F_B(z) for all z.
Example
L1 = (0, 1/6; 1, 1/3; 2, 1/2)
L2 = (0, 1/3; 1, 1/3; 2, 1/3)
Given these 2 lotteries, does one first-order stochastically dominate the other?
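The question can be answered mechanically by comparing the two CDFs at every payoff level; a short sketch:

```python
# Check first-order stochastic dominance: A FOSD B iff F_A(z) <= F_B(z)
# for all z (strictly less somewhere). Lotteries are (payoff, prob) lists.
def cdf(lottery, z):
    return sum(p for x, p in lottery if x <= z)

def fosd(a, b):
    """True if lottery a first-order stochastically dominates lottery b."""
    zs = sorted({x for x, _ in a} | {x for x, _ in b})
    weakly_below = all(cdf(a, z) <= cdf(b, z) + 1e-12 for z in zs)
    strictly_below = any(cdf(a, z) < cdf(b, z) - 1e-12 for z in zs)
    return weakly_below and strictly_below

L1 = [(0, 1/6), (1, 1/3), (2, 1/2)]
L2 = [(0, 1/3), (1, 1/3), (2, 1/3)]
print(fosd(L1, L2), fosd(L2, L1))  # True False
```

Since F_L1(z) ≤ F_L2(z) everywhere (1/6 vs 1/3 at z = 0, 1/2 vs 2/3 at z = 1, equal at z = 2), L1 first-order stochastically dominates L2.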
Value of Information
We have been doing decision analysis with best guesses of the probabilities: building trees with chance and decision nodes and finding expected values. It is relevant and interesting to determine how important information might be in our decision problems. It could come in the form of paying an expert, a fortune teller, etc. The goal is to reduce or eliminate uncertainty in the decision problem.
Willingness to Pay = EVPI
We're interested in knowing our WTP for (perfect) information about our decision. The book shows this with Bayesian probabilities, but think of it this way: we consider the advice of "an expert who is always right." If they say it will happen, it will. If they say it will not happen, it will not. They are never wrong. Bottom line: receiving their advice means we have eliminated the uncertainty about the event.
Notes on EVPI
The key is understanding what the relevant information is and how it affects the tree. Quotes from pp. 501, 509 of Clemen:
"Redraw the tree so that the uncertainty nodes for which perfect information is (now) available come before the decision node(s)."
(When multiple uncertain nodes exist:) "Move those chance nodes for which information is to be obtained so that they (all) precede the decision node."
Note: by "before" or "precede" we mean in the tree, from left to right (as opposed to in the tree-solving process).
Discussion
The difference between the 2 trees (decision scenarios) is the EVPI: $1,000 - $580 = $420. That is the amount up to which you would be willing to pay for advice on how to invest. If you pay less than $420, you would expect to come out ahead, net of the cost of the information. If you pay $425 for the info, you would expect to lose $5 overall! Finding EVPI is really simple to do with the PrecisionTree plug-in.
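The two-tree comparison can be done in a few lines. A sketch, assuming the payoffs of Clemen's investment example (high-risk stock pays $1,500 / $100 / -$1,000 and low-risk stock $1,000 / $200 / -$100 for market up/flat/down with priors 0.5 / 0.3 / 0.2; savings pays $500 regardless):

```python
# EVPI = EV(decide after perfect information) - EV(best strategy without it).
# Payoffs and priors are assumed from Clemen's investment example.
priors = {"up": 0.5, "flat": 0.3, "down": 0.2}
payoff = {
    "high-risk stock": {"up": 1500, "flat": 100, "down": -1000},
    "low-risk stock":  {"up": 1000, "flat": 200, "down": -100},
    "savings":         {"up": 500,  "flat": 500, "down": 500},
}

# Without information: pick the alternative with the best EMV.
ev_no_info = max(sum(priors[s] * pay[s] for s in priors)
                 for pay in payoff.values())
# With perfect information: in each state, pick the best alternative,
# then weight the states by their prior probabilities.
ev_perfect = sum(priors[s] * max(pay[s] for pay in payoff.values())
                 for s in priors)
print(round(ev_no_info), round(ev_perfect), round(ev_perfect - ev_no_info))
```

With these numbers the best no-information EMV is $580 (high-risk stock), the clairvoyant tree is worth $1,000, and the EVPI is $420.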
Is EVPI Additive? (Pair group exercise)
Let's look at the handout for a simple "2-part uncertainty problem" considering the choice of where to go for a date, and the utility associated with whether it is fun or not and whether the weather is good or not.
- What is the expected value in this case?
- What is the EVPI for "fun"? The EVPI for "weather"?
- What do the revised decision trees look like?
- What is the EVPI for "fun and weather"?
- Is EVPI_fun + EVPI_weather = EVPI_fun+weather?
Additivity, cont.
Now look at the p, q labels on the handout for the decision problem (top values in the tree). Is it additive if instead p = 0.3 and q = 0.8? What if p = 0.2 and q = 0.2? This should make us think about sensitivity analysis, i.e., how much do answers/outcomes change if we change inputs?
EVPI - Why Care?
For information to "have value," it has to affect our decision. Just as tornado diagrams showed us which variables were most sensitive, EVPI analysis shows us which of our uncertainties is the most important, and thus which to focus further effort on. If we can spend some time/money to further understand or reduce an uncertainty, it is worth it when its EVPI is relatively high.
Final Thoughts on Plug-ins
You can combine the decision tree and sensitivity plug-ins: do "sensitivity of expected values" by varying the probabilities (see the end of Chap. 5). You can also compute EVPI. You don't need to do everything by hand! But it helps to be able to.
Visualizing Decision Tree Results
EMV_Outdoors = 100p + 70q
EMV_Indoors = 40p + 50q + 60(1 - p - q)
EMV_Outdoors > EMV_Indoors when p > -(2/3)q + 1/2
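The boundary line can be checked numerically from the two EMV expressions alone (the meaning of p and q comes from the class handout and is not needed for the check):

```python
# Numeric check of the slide's EMV comparison: the two strategies have
# equal EMV exactly on the line p = -(2/3)q + 1/2, and "Outdoors" wins
# for p above that line.
def emv_outdoors(p, q):
    return 100 * p + 70 * q

def emv_indoors(p, q):
    return 40 * p + 50 * q + 60 * (1 - p - q)

for q in (0.0, 0.15, 0.3):
    p = -(2 / 3) * q + 0.5          # a point on the boundary line
    assert abs(emv_outdoors(p, q) - emv_indoors(p, q)) < 1e-9
    assert emv_outdoors(p + 0.05, q) > emv_indoors(p + 0.05, q)
print("boundary p = -(2/3)q + 1/2 verified")
```

Algebraically: 100p + 70q > 40p + 50q + 60(1 - p - q) simplifies to 120p + 80q > 60, i.e., p > -(2/3)q + 1/2.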
Similar: EVII
Imperfect, rather than perfect, information (because information is rarely perfect). Example: our expert acknowledges she is not always right, so we use conditional probabilities (rather than assuming she is 100% correct all the time) to solve the trees. Ideally, she is "almost always right" and "almost never wrong," e.g., P(Up Predicted | Up) is less than but close to 1, and P(Up Predicted | Down) is greater than but close to 0.
Assessing the Expert
Expert Side of the EVII Tree
This is more complicated than EVPI because we do not know whether the expert is right or not. We have to decide whether to believe her.
Use Bayes' Theorem
"Flip" the probabilities: we know P("Up" | Up), but instead need P(Up | "Up").
P(Up | "Up") = P("Up" | Up) P(Up) / [P("Up" | Up) P(Up) + P("Up" | Flat) P(Flat) + P("Up" | Down) P(Down)]
= (0.8)(0.5) / [(0.8)(0.5) + (0.15)(0.3) + (0.2)(0.2)] = 0.40 / 0.485 = 0.8247
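The flip generalizes to any prediction. A sketch, assuming the priors and likelihoods of Clemen's stock-market example (P(Up) = 0.5, P(Flat) = 0.3, P(Down) = 0.2; P("Up" | Up) = 0.8, P("Up" | Flat) = 0.15, P("Up" | Down) = 0.2), which reproduce the slide's 0.8247:

```python
# Bayes' theorem: flip P(prediction | state) into P(state | prediction).
# Numbers assumed from Clemen's stock-market example; the expert says "Up".
priors = {"up": 0.5, "flat": 0.3, "down": 0.2}       # P(state)
p_says_up = {"up": 0.8, "flat": 0.15, "down": 0.2}   # P("Up" | state)

# Denominator: total probability the expert says "Up", P("Up") = 0.485.
p_up_predicted = sum(p_says_up[s] * priors[s] for s in priors)
# Posterior over states given the "Up" prediction.
posterior = {s: p_says_up[s] * priors[s] / p_up_predicted for s in priors}
print(round(posterior["up"], 4))  # 0.8247
```

The same code, fed P("Down" | state) likelihoods, gives the posterior for a "Down" prediction, which is what the chance nodes on the expert side of the EVII tree need.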
EVII Tree Excerpt
Rolling Back to the Top
Transition
Speaking of information: facility case study for Monday.