Presentation transcript: Lesson 9, Decision Analysis

Slide 1: Lesson 9, Decision Analysis (including Decision Trees in Excel)
- Problem Formulation
- Decision Making without Probabilities
- Decision Making with Probabilities
- Risk Analysis and Sensitivity Analysis
- Decision Analysis with Sample Information
- Computing Branch Probabilities

Slide 2: Problem Formulation
- A decision problem is characterized by decision alternatives, states of nature, and resulting payoffs.
- The decision alternatives are the different possible strategies the decision maker can employ.
- The states of nature refer to future events that may occur and are not under the control of the decision maker. States of nature should be defined so that they are mutually exclusive and collectively exhaustive.

Slide 3: Influence Diagrams
- An influence diagram is a graphical device showing the relationships among the decisions, the chance events, and the consequences.
- Squares or rectangles depict decision nodes.
- Circles or ovals depict chance nodes.
- Diamonds depict consequence nodes.
- Lines or arcs connecting the nodes show the direction of influence.

Slide 4: Payoff Tables
- The consequence resulting from a specific combination of a decision alternative and a state of nature is a payoff.
- A table showing payoffs for all combinations of decision alternatives and states of nature is a payoff table.
- Payoffs can be expressed in terms of profit, cost, time, distance, or any other appropriate measure.

Slide 5: Decision Trees
- A decision tree is a chronological representation of the decision problem.
- Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives.
- The branches leaving each round node represent the different states of nature, while the branches leaving each square node represent the different decision alternatives.
- At the end of each limb of a tree are the payoffs attained from the series of branches making up that limb.

Slide 6: Decision Making without Probabilities
- Three commonly used criteria for decision making when probability information regarding the likelihood of the states of nature is unavailable are:
  - the optimistic approach
  - the conservative approach
  - the minimax regret approach

Slide 7: Optimistic Approach
- The optimistic approach would be used by an optimistic decision maker.
- The decision with the largest possible payoff is chosen.
- If the payoff table were in terms of costs, the decision with the lowest cost would be chosen.

Slide 8: Conservative Approach
- The conservative approach would be used by a conservative decision maker.
- For each decision, the minimum payoff is listed, and then the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.)
- If the payoffs were in terms of costs, the maximum cost would be determined for each decision, and then the decision corresponding to the minimum of these maximum costs would be selected. (Hence, the maximum possible cost is minimized.)

Slide 9: Minimax Regret Approach
- The minimax regret approach requires the construction of a regret table, or opportunity loss table.
- This is done by calculating, for each state of nature, the difference between each payoff and the largest payoff for that state of nature.
- Then, using this regret table, the maximum regret for each possible decision is listed.
- The decision chosen is the one corresponding to the minimum of these maximum regrets.

Slide 10: Example
Consider the following problem with three decision alternatives and three states of nature, with the following payoff table representing profits:

                     States of Nature
                     s1    s2    s3
  Decisions   d1      4     4    -2
              d2      0     3    -1
              d3      1     5    -3

Slide 11: Example: Optimistic Approach
An optimistic decision maker would use the optimistic (maximax) approach: choose the decision that has the largest single value in the payoff table.

  Decision   Maximum Payoff
  d1               4
  d2               3
  d3               5

The maximax decision is d3, with a maximax payoff of 5.
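
As an illustration only (not part of the original slides), the maximax rule can be reproduced in a few lines of Python. The payoff matrix is the one from Slide 10; the variable names are my own.

```python
# Maximax (optimistic) criterion: pick the decision whose best-case payoff is largest.
payoffs = {
    "d1": [4, 4, -2],   # payoffs under states s1, s2, s3
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

best_case = {d: max(row) for d, row in payoffs.items()}
maximax_decision = max(best_case, key=best_case.get)

print(best_case)         # {'d1': 4, 'd2': 3, 'd3': 5}
print(maximax_decision)  # 'd3' -> maximax payoff of 5
```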

Slide 12: Example: Optimistic Approach (formula spreadsheet)

Slide 13: Example: Optimistic Approach (solution spreadsheet)

Slide 14: Example: Conservative Approach
A conservative decision maker would use the conservative (maximin) approach: list the minimum payoff for each decision and choose the decision with the maximum of these minimum payoffs.

  Decision   Minimum Payoff
  d1              -2
  d2              -1
  d3              -3

The maximin decision is d2, with a maximin payoff of -1.
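
A matching illustrative sketch for the maximin rule, with the same assumed names and the same example payoffs; the only change is taking each decision's worst case instead of its best case.

```python
# Maximin (conservative) criterion: pick the decision whose worst-case payoff is largest.
payoffs = {
    "d1": [4, 4, -2],   # payoffs under states s1, s2, s3
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

worst_case = {d: min(row) for d, row in payoffs.items()}
maximin_decision = max(worst_case, key=worst_case.get)

print(worst_case)        # {'d1': -2, 'd2': -1, 'd3': -3}
print(maximin_decision)  # 'd2' -> maximin payoff of -1
```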

Slide 15: Example: Conservative Approach (formula spreadsheet)

Slide 16: Example: Conservative Approach (solution spreadsheet)

Slide 17: Example: Minimax Regret Approach
For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. In this example, in the first column subtract 4, 0, and 1 from 4, and so on. The resulting regret table is:

         s1    s2    s3
  d1      0     1     1
  d2      4     2     0
  d3      3     0     2

Slide 18: Example: Minimax Regret Approach
For each decision, list the maximum regret. Choose the decision with the minimum of these values.

  Decision   Maximum Regret
  d1               1
  d2               4
  d3               3

The minimax regret decision is d1, with a maximum regret of 1.
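
The regret calculation can be sketched the same way. Again, this is illustrative Python with assumed names, reusing the Slide 10 payoffs; it reproduces the regret table from Slide 17 and the choice above.

```python
# Minimax regret: build the regret (opportunity loss) table, then pick the
# decision whose largest regret is smallest.
payoffs = {
    "d1": [4, 4, -2],
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

n_states = 3
col_best = [max(row[j] for row in payoffs.values()) for j in range(n_states)]
regret = {d: [col_best[j] - row[j] for j in range(n_states)] for d, row in payoffs.items()}
max_regret = {d: max(r) for d, r in regret.items()}
minimax_decision = min(max_regret, key=max_regret.get)

print(regret)            # {'d1': [0, 1, 1], 'd2': [4, 2, 0], 'd3': [3, 0, 2]}
print(max_regret)        # {'d1': 1, 'd2': 4, 'd3': 3}
print(minimax_decision)  # 'd1' -> maximum regret of 1
```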

Slide 19: Example: Minimax Regret Approach (formula spreadsheet)

Slide 20: Example: Minimax Regret Approach (solution spreadsheet)

Slide 21: Decision Making with Probabilities
- Expected Value Approach
  - If probabilistic information regarding the states of nature is available, one may use the expected value (EV) approach.
  - Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of that state of nature occurring.
  - The decision yielding the best expected return is chosen.

Slide 22: Expected Value of a Decision Alternative
- The expected value of a decision alternative is the sum of the weighted payoffs for that alternative.
- The expected value (EV) of decision alternative d_i is defined as:

  EV(d_i) = Σ_{j=1}^{N} P(s_j) V_{ij}

  where:
  N = the number of states of nature
  P(s_j) = the probability of state of nature s_j
  V_{ij} = the payoff corresponding to decision alternative d_i and state of nature s_j

Slide 23: Example: Burger Prince
Burger Prince Restaurant is considering opening a new restaurant on Main Street. It has three different models, each with a different seating capacity. Burger Prince estimates that the average number of customers per hour will be 80, 100, or 120. The payoff table for the three models is on the next slide.

Slide 24: Payoff Table

              Average Number of Customers Per Hour
              s1 = 80     s2 = 100    s3 = 120
  Model A     $10,000     $15,000     $14,000
  Model B     $ 8,000     $18,000     $12,000
  Model C     $ 6,000     $16,000     $21,000

Slide 25: Expected Value Approach
Calculate the expected value for each decision. The decision tree on the next slide can assist in this calculation. Here d1, d2, and d3 represent the decision alternatives of Models A, B, and C, and s1, s2, and s3 represent the states of nature of 80, 100, and 120 customers per hour.

Slide 26: Decision Tree
(Diagram: decision node 1 branches into d1, d2, and d3, leading to chance nodes 2, 3, and 4. Each chance node branches into s1 (probability .4), s2 (.2), and s3 (.4), ending in the payoffs from the payoff table: 10,000 / 15,000 / 14,000 for d1; 8,000 / 18,000 / 12,000 for d2; 6,000 / 16,000 / 21,000 for d3.)

Slide 27: Expected Value for Each Decision
  Model A (d1):  EMV = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
  Model B (d2):  EMV = .4(8,000) + .2(18,000) + .4(12,000) = $11,600
  Model C (d3):  EMV = .4(6,000) + .2(16,000) + .4(21,000) = $14,000
Choose the model with the largest EV: Model C.
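
As a rough check on the decision-tree calculation, here is an illustrative Python sketch (not from the slides; the dictionary keys and variable names are assumptions) that computes the same three expected values.

```python
# Expected value (EMV) criterion for the Burger Prince example.
probs = [0.4, 0.2, 0.4]                      # P(s1 = 80), P(s2 = 100), P(s3 = 120)
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

ev = {d: sum(p * v for p, v in zip(probs, row)) for d, row in payoffs.items()}
best = max(ev, key=ev.get)

print(ev)    # {'Model A': 12600.0, 'Model B': 11600.0, 'Model C': 14000.0}
print(best)  # 'Model C'
```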

Slide 28: Expected Value Approach (formula spreadsheet)

Slide 29: Expected Value Approach (solution spreadsheet)

Slide 30: Expected Value of Perfect Information
- Frequently, information is available that can improve the probability estimates for the states of nature.
- The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur.
- The EVPI provides an upper bound on the expected value of any sample or survey information.

Slide 31: Expected Value of Perfect Information
- EVPI Calculation
  - Step 1: Determine the optimal return corresponding to each state of nature.
  - Step 2: Compute the expected value of these optimal returns.
  - Step 3: Subtract the EV of the optimal decision from the amount determined in Step 2.

Slide 32: Expected Value of Perfect Information
Calculate the expected value of the optimum payoff for each state of nature and subtract the EV of the optimal decision:
  EVPI = .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000
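
The three EVPI steps from Slide 31 can be sketched in Python as follows; this is illustrative code rather than part of the lesson, with assumed variable names.

```python
# EVPI = (expected value with perfect information) - (EV of the best decision without it).
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

# EV of the best decision made without perfect information (Model C, $14,000).
best_ev = max(sum(p * v for p, v in zip(probs, row)) for row in payoffs.values())

# Step 1: optimal payoff under each state; Step 2: its expected value; Step 3: subtract best_ev.
best_per_state = [max(row[j] for row in payoffs.values()) for j in range(len(probs))]
ev_with_pi = sum(p * v for p, v in zip(probs, best_per_state))   # .4(10,000) + .2(18,000) + .4(21,000)

evpi = ev_with_pi - best_ev
print(evpi)   # 2000.0
```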

Slide 33: Expected Value of Perfect Information (spreadsheet)

Slide 34: Risk Analysis
- Risk analysis helps the decision maker recognize the difference between:
  - the expected value of a decision alternative, and
  - the payoff that might actually occur.
- The risk profile for a decision alternative shows the possible payoffs for the decision alternative along with their associated probabilities.

Slide 35: Risk Profile
(Chart: risk profile for the Model C decision alternative, plotting probability against profit in $thousands; the possible payoffs $6,000, $16,000, and $21,000 occur with probabilities .4, .2, and .4.)

Slide 36: Sensitivity Analysis
- Sensitivity analysis can be used to determine how changes to the following inputs affect the recommended decision alternative:
  - probabilities for the states of nature
  - values of the payoffs
- If a small change in the value of one of the inputs causes a change in the recommended decision alternative, extra effort and care should be taken in estimating the input value.

Slide 37: Bayes' Theorem and Posterior Probabilities
- Knowledge of sample (survey) information can be used to revise the probability estimates for the states of nature.
- Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities.
- With knowledge of conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' theorem.
- The outcomes of this analysis are called posterior probabilities, or branch probabilities for decision trees.

Slide 38: Computing Branch Probabilities
- Branch (Posterior) Probabilities Calculation
  - Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator. This gives the joint probabilities for the states and the indicator.

Slide 39: Computing Branch Probabilities
- Branch (Posterior) Probabilities Calculation (continued)
  - Step 2: Sum these joint probabilities over all states. This gives the marginal probability for the indicator.
  - Step 3: For each state, divide its joint probability by the marginal probability for the indicator. This gives the posterior probability distribution.

Slide 40: Expected Value of Sample Information
- The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.

Slide 41: Expected Value of Sample Information
- EVSI Calculation
  - Step 1: Determine the optimal decision and its expected return for the possible outcomes of the sample, using the posterior probabilities for the states of nature.
  - Step 2: Compute the expected value of these optimal returns.
  - Step 3: Subtract the EV of the optimal decision obtained without using the sample information from the amount determined in Step 2.

Slide 42: Efficiency of Sample Information
- Efficiency of sample information is the ratio of EVSI to EVPI.
- As the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.

Slide 43: Sample Information
Burger Prince must decide whether or not to purchase a marketing survey from Stanton Marketing for $1,000. The results of the survey are "favorable" or "unfavorable". The conditional probabilities are:
  P(favorable | 80 customers per hour) = .2
  P(favorable | 100 customers per hour) = .5
  P(favorable | 120 customers per hour) = .9
Should Burger Prince have the survey performed by Stanton Marketing?

Slide 44: Influence Diagram
(Diagram: decision nodes Market Survey and Restaurant Size, chance nodes Market Survey Results and Avg. Number of Customers Per Hour, and consequence node Profit, connected by arcs showing the direction of influence.)

Slide 45: Posterior Probabilities
Favorable survey result:

  State    Prior   Conditional   Joint   Posterior
   80       .4         .2         .08      .148
  100       .2         .5         .10      .185
  120       .4         .9         .36      .667
                          Total:  .54     1.000

  P(favorable) = .54

Slide 46: Posterior Probabilities
Unfavorable survey result:

  State    Prior   Conditional   Joint   Posterior
   80       .4         .8         .32      .696
  100       .2         .5         .10      .217
  120       .4         .1         .04      .087
                          Total:  .46     1.000

  P(unfavorable) = .46
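
The three-step procedure from Slides 38 and 39 can be verified with a short Python sketch (illustrative only; the function and variable names are assumptions). It should reproduce the joint, marginal, and posterior values in the two tables above, up to rounding.

```python
# Bayes revision of the prior probabilities, following Steps 1-3 of Slides 38-39.
priors = {80: 0.4, 100: 0.2, 120: 0.4}
p_fav_given_state = {80: 0.2, 100: 0.5, 120: 0.9}   # conditionals from Slide 43

def revise(priors, conditionals):
    joint = {s: priors[s] * conditionals[s] for s in priors}   # Step 1: joint probabilities
    marginal = sum(joint.values())                              # Step 2: P(indicator)
    posterior = {s: joint[s] / marginal for s in joint}         # Step 3: posterior distribution
    return marginal, posterior

p_fav, post_fav = revise(priors, p_fav_given_state)
p_unf, post_unf = revise(priors, {s: 1 - c for s, c in p_fav_given_state.items()})

print(round(p_fav, 2), {s: round(p, 3) for s, p in post_fav.items()})
# 0.54 {80: 0.148, 100: 0.185, 120: 0.667}
print(round(p_unf, 2), {s: round(p, 3) for s, p in post_unf.items()})
# 0.46 {80: 0.696, 100: 0.217, 120: 0.087}
```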

Slide 47: Posterior Probabilities (formula spreadsheet)

Slide 48: Posterior Probabilities (solution spreadsheet)

Slide 49: Decision Tree (Top Half)
(Diagram: the favorable-survey branch I1, with P(I1) = .54. Decision node 2 branches into d1, d2, and d3, leading to chance nodes 4, 5, and 6; each chance node branches into s1 (.148), s2 (.185), and s3 (.667), ending in the payoffs from the payoff table.)

Slide 50: Decision Tree (Bottom Half)
(Diagram: the unfavorable-survey branch I2, with P(I2) = .46. Decision node 3 branches into d1, d2, and d3, leading to chance nodes 7, 8, and 9; each chance node branches into s1 (.696), s2 (.217), and s3 (.087), ending in the payoffs from the payoff table.)

Slide 51: Decision Tree
Favorable survey result, I1 (P = .54):
  d1: EMV = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
  d2: EMV = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
  d3: EMV = .148(6,000) + .185(16,000) + .667(21,000) = $17,855
Unfavorable survey result, I2 (P = .46):
  d1: EMV = .696(10,000) + .217(15,000) + .087(14,000) = $11,433
  d2: EMV = .696(8,000) + .217(18,000) + .087(12,000) = $10,554
  d3: EMV = .696(6,000) + .217(16,000) + .087(21,000) = $9,475
The best expected values are $17,855 (d3) for a favorable result and $11,433 (d1) for an unfavorable result.

Slide 52: Expected Value of Sample Information
If the outcome of the survey is "favorable", choose Model C. If it is "unfavorable", choose Model A.
  EVSI = .54($17,855) + .46($11,433) - $14,000 = $900.88
Since this is less than the $1,000 cost of the survey, the survey should not be purchased.
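
Finally, an illustrative Python sketch (assumed names again) that ties the last few slides together: it rolls the rounded posterior probabilities back through the payoff table to obtain the EVSI and efficiency figures quoted on Slides 52 and 53.

```python
# EVSI and efficiency for the Burger Prince survey, using the rounded posteriors above.
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}
post_fav = [0.148, 0.185, 0.667]    # posteriors given a favorable result,    P(I1) = .54
post_unf = [0.696, 0.217, 0.087]    # posteriors given an unfavorable result, P(I2) = .46

def best_ev(probs):
    """Expected value of the best decision for a given probability distribution."""
    return max(sum(p * v for p, v in zip(probs, row)) for row in payoffs.values())

ev_with_sample = 0.54 * best_ev(post_fav) + 0.46 * best_ev(post_unf)
evsi = ev_with_sample - 14_000      # $14,000 = EV of the best decision without the survey
evpi = 2_000

print(round(evsi, 2))               # 900.88 -> less than the $1,000 survey cost
print(round(evsi / evpi, 4))        # 0.4504 -> efficiency of the sample information
```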

Slide 53: Efficiency of Sample Information
The efficiency of the survey: EVSI/EVPI = $900.88 / $2,000 = .4504

Slide 54: End of Lesson 9

