IndE 311 Stochastic Models and Decision Analysis


1 IndE 311 Stochastic Models and Decision Analysis
UW Industrial Engineering
Instructor: Prof. Zelda Zabinsky

2 Operations Research “The Science of Better”

3 Operations Research Modeling Toolset
The toolset, covered across IndE 310, 311, and 312: Linear Programming, Integer Programming, Nonlinear Programming, Network Programming, Dynamic Programming, Stochastic Programming, Markov Chains, Markov Decision Processes, Queueing Theory, Inventory Theory, Decision Analysis, Simulation, PERT/CPM, Forecasting, Game Theory.

4 IndE 311
Decision analysis
- Decision making without experimentation
- Decision making with experimentation
- Decision trees
- Utility theory
Markov chains
- Modeling
- Chapman-Kolmogorov equations
- Classification of states
- Long-run properties
- First passage times
- Absorbing states
Queueing theory
- Basic structure and modeling
- Exponential distribution
- Birth-and-death processes
- Models based on birth-and-death
- Models with non-exponential distributions
- Applications of queueing theory
  - Waiting cost functions
  - Decision models

5 Decision Analysis Chapter 15

6 Decision Analysis
- Decision making without experimentation
- Decision making criteria
- Decision making with experimentation
- Expected value of experimentation
- Decision trees
- Utility theory

7 Decision Making without Experimentation

8 Goferbroke Example
The Goferbroke Company owns a tract of land that may contain oil. A consulting geologist estimates "1 chance in 4 of oil." Another company has offered $90k to purchase the land. Goferbroke can instead hold the land and drill for oil at a cost of $100k; if there is oil, expected revenue is $800k, and if not, nothing.

Payoff ($k):
Alternative      Oil (1 in 4)   Dry (3 in 4)
Drill for oil    700            -100
Sell the land     90              90

9 Notation and Terminology
Actions {a1, a2, …}: the set of actions the decision maker must choose from. Example: {drill for oil, sell the land}.
States of nature {θ1, θ2, …}: the possible outcomes of the uncertain event. Example: {oil, dry}.

10 Notation and Terminology
Payoff/loss function L(ai, θk): the payoff/loss incurred by taking action ai when state θk occurs. Example: L(drill, oil) = 700, L(drill, dry) = -100, L(sell, oil) = L(sell, dry) = 90.
Prior distribution: the distribution representing the relative likelihood of the possible states of nature.
Prior probabilities P(Θ = θk): the probabilities (given by the prior distribution) of the various states of nature. Example: P(oil) = 0.25, P(dry) = 0.75.

11 Decision Making Criteria
We can "optimize" the decision with respect to several criteria:
- Maximin payoff
- Minimax regret
- Maximum likelihood
- Bayes' decision rule (expected value)

12 Maximin Payoff Criterion
For each action, find the minimum payoff over all states of nature; then choose the action with the maximum of these minimum payoffs.

                 State of Nature
Action           Oil    Dry    Min Payoff
Drill for oil    700   -100    -100
Sell the land     90     90      90  <- maximin

The maximin criterion chooses to sell the land.

13 Minimax Regret Criterion
For each action, find the maximum regret over all states of nature; then choose the action with the minimum of these maximum regrets. Regret = best payoff achievable in that state minus the payoff received.

Payoffs:
Action           Oil    Dry
Drill for oil    700   -100
Sell the land     90     90

Regrets:
Action           Oil    Dry    Max Regret
Drill for oil      0    190    190  <- minimax regret
Sell the land    610      0    610

The minimax regret criterion chooses to drill.
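A minimal sketch of the two criteria in Python, using the payoff table from the slides (variable names are illustrative, not course notation):

```python
# Payoffs in $k for the Goferbroke example.
payoff = {
    "drill": {"oil": 700, "dry": -100},
    "sell":  {"oil": 90,  "dry": 90},
}
states = ["oil", "dry"]

# Maximin: pick the action whose worst-case payoff is largest.
maximin = max(payoff, key=lambda a: min(payoff[a][s] for s in states))

# Regret of (a, s) = best payoff in state s minus payoff of a in s.
best_in_state = {s: max(payoff[a][s] for a in payoff) for s in states}
regret = {a: {s: best_in_state[s] - payoff[a][s] for s in states} for a in payoff}

# Minimax regret: pick the action whose worst-case regret is smallest.
minimax_regret = min(regret, key=lambda a: max(regret[a][s] for s in states))

print(maximin)         # sell  (worst cases: drill -100, sell 90)
print(minimax_regret)  # drill (max regrets: drill 190, sell 610)
```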

14 Maximum Likelihood Criterion
Identify the most likely state of nature; then choose the action with the maximum payoff under that state.

Action           Oil    Dry
Drill for oil    700   -100
Sell the land     90     90
Prior prob.     0.25   0.75

The most likely state is Dry (probability 0.75); under Dry the best action is to sell the land (payoff 90), so this criterion chooses to sell.

15 Bayes’ Decision Rule (Expected Value Criterion)
For each action, compute the expected payoff over all states of nature; then choose the action with the maximum of these expected payoffs.

Action           Oil    Dry    Expected Payoff
Drill for oil    700   -100    0.25(700) + 0.75(-100) = 100
Sell the land     90     90    90
Prior prob.     0.25   0.75

Bayes' decision rule chooses to drill.

16 Sensitivity Analysis with Bayes’ Decision Rule
What is the minimum probability of oil such that we choose to drill under Bayes' decision rule?

Action           Oil    Dry    Expected Payoff
Drill for oil    700   -100    700p - 100(1 - p) = 800p - 100
Sell the land     90     90    90
Prior prob.        p    1-p

Drilling is optimal when 800p - 100 >= 90, i.e., when p >= 190/800 = 0.2375.
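A short sketch of both computations (the `expected_payoffs` helper is illustrative):

```python
# Bayes' decision rule and the sensitivity break-even point for p = P(oil).
payoff = {"drill": {"oil": 700, "dry": -100}, "sell": {"oil": 90, "dry": 90}}

def expected_payoffs(p_oil):
    prior = {"oil": p_oil, "dry": 1 - p_oil}
    return {a: sum(prior[s] * payoff[a][s] for s in prior) for a in payoff}

print(expected_payoffs(0.25))      # {'drill': 100.0, 'sell': 90.0} -> drill

# Break-even: 700p - 100(1 - p) = 90  =>  800p = 190  =>  p = 0.2375
p_breakeven = (90 + 100) / (700 + 100)
print(p_breakeven)                 # 0.2375; drill is optimal for p >= 0.2375
```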

17 Decision Making with Experimentation

18 Goferbroke Example (cont’d)
Action           Oil    Dry
Drill for oil    700   -100
Sell the land     90     90
Prior prob.     0.25   0.75

An option is available to conduct a detailed seismic survey, at a cost of $30k, to obtain a better estimate of the probability of oil. Possible findings:
- Unfavorable seismic soundings (USS): oil is fairly unlikely
- Favorable seismic soundings (FSS): oil is fairly likely

19 Posterior Probabilities
Do experiments to get better information and improve the estimates of the probabilities of the states of nature. These improved estimates are called posterior probabilities.
Experimental outcomes {x1, x2, …}. Example: {USS, FSS}.
Cost of experiment: the amount paid to observe the outcome (here, $30k for the seismic survey).
Posterior distribution: P(Θ = θk | X = xj).

20 Goferbroke Example (cont’d)
Based on past experience:
If there is oil, then
- P(USS | oil) = 0.4
- P(FSS | oil) = 0.6
If there is no oil, then
- P(USS | dry) = 0.8
- P(FSS | dry) = 0.2

21 Bayes’ Theorem Calculate posterior probabilities using Bayes’ theorem:
Given the likelihoods P(X = xj | Θ = θk) and the priors P(Θ = θk), find P(Θ = θk | X = xj):

P(Θ = θk | X = xj) = P(X = xj | Θ = θk) P(Θ = θk) / Σi P(X = xj | Θ = θi) P(Θ = θi)

22 Goferbroke Example (cont’d)
We have
P(USS | oil) = 0.4   P(FSS | oil) = 0.6   P(oil) = 0.25
P(USS | dry) = 0.8   P(FSS | dry) = 0.2   P(dry) = 0.75

so P(USS) = 0.4(0.25) + 0.8(0.75) = 0.7 and P(FSS) = 0.3, giving

P(oil | USS) = 0.1/0.7 = 1/7 ≈ 0.143    P(oil | FSS) = 0.15/0.3 = 0.5
P(dry | USS) = 0.6/0.7 = 6/7 ≈ 0.857    P(dry | FSS) = 0.15/0.3 = 0.5
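The same posterior computation as a small Python sketch (the `posterior` helper is illustrative):

```python
# Bayes' theorem for the survey findings.
prior = {"oil": 0.25, "dry": 0.75}
likelihood = {                     # P(finding | state)
    "oil": {"USS": 0.4, "FSS": 0.6},
    "dry": {"USS": 0.8, "FSS": 0.2},
}

def posterior(finding):
    evidence = sum(prior[s] * likelihood[s][finding] for s in prior)  # P(finding)
    return {s: prior[s] * likelihood[s][finding] / evidence for s in prior}

print(posterior("USS"))  # {'oil': 0.1428..., 'dry': 0.8571...} (1/7 and 6/7)
print(posterior("FSS"))  # {'oil': 0.5, 'dry': 0.5}
```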

23 Goferbroke Example (cont’d)
Optimal policies (expected payoffs below exclude the $30k survey cost):

If the finding is USS (posterior: oil 1/7, dry 6/7):
Action           Oil    Dry    Expected Payoff
Drill for oil    700   -100    (1/7)(700) + (6/7)(-100) ≈ 14.3
Sell the land     90     90    90  <- optimal

If the finding is FSS (posterior: oil 0.5, dry 0.5):
Action           Oil    Dry    Expected Payoff
Drill for oil    700   -100    0.5(700) + 0.5(-100) = 300  <- optimal
Sell the land     90     90    90

24 The Value of Experimentation
Do we need to perform the experiment? As the likelihoods P(finding | state) show, the experimental outcome is not always "correct": we sometimes have imperfect information. There are two ways to assess the value of information:
- Expected value of perfect information (EVPI): what is the value of a crystal ball that can identify the true state of nature?
- Expected value of experimentation (EVE): is the experiment worth its cost?

25 Expected Value of Perfect Information
Suppose we knew the true state of nature; then we would pick the optimal action for that state: drill if Oil (700), sell if Dry (90).

Action           Oil    Dry
Drill for oil    700   -100
Sell the land     90     90
Prior prob.     0.25   0.75

E[PI] = expected payoff with perfect information = 0.25(700) + 0.75(90) = 242.5

26 Expected Value of Perfect Information
EVPI = E[PI] - E[OI], where E[OI] is the expected payoff with the original information (i.e., without experimentation).
For the Goferbroke problem, E[OI] = 100 (drill, by Bayes' decision rule), so EVPI = 242.5 - 100 = 142.5. Since the survey costs only $30k, which is less than EVPI, experimentation is potentially worthwhile.

27 Expected Value of Experimentation
We are interested in the value of the experiment: if its value exceeds its cost, the experiment is worthwhile. Expected value of experimentation: EVE = E[EI] - E[OI], where E[EI] is the expected payoff with experimental information.

28 Goferbroke Example (cont’d)
Expected Value of Experimentation: EVE = E[EI] - E[OI]
E[EI] = P(USS)(best EV | USS) + P(FSS)(best EV | FSS) = 0.7(90) + 0.3(300) = 153
EVE = 153 - 100 = 53. Since 53 > 30 (the cost of the survey), the seismic survey is worthwhile.
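A sketch collecting the EVPI and EVE computations (payoffs in $k; the finding probabilities and posteriors come from the earlier slides; helper names are illustrative):

```python
payoff = {"drill": {"oil": 700, "dry": -100}, "sell": {"oil": 90, "dry": 90}}
prior = {"oil": 0.25, "dry": 0.75}

def best_ev(prob):  # best expected payoff under a given state distribution
    return max(sum(prob[s] * payoff[a][s] for s in prob) for a in payoff)

e_oi = best_ev(prior)                                     # 100 (drill)
e_pi = sum(prior[s] * max(payoff[a][s] for a in payoff) for s in prior)
evpi = e_pi - e_oi                                        # 242.5 - 100 = 142.5

# E[EI]: average the best posterior expected payoff over the findings.
p_finding = {"USS": 0.7, "FSS": 0.3}
post = {"USS": {"oil": 1/7, "dry": 6/7}, "FSS": {"oil": 0.5, "dry": 0.5}}
e_ei = sum(p_finding[f] * best_ev(post[f]) for f in p_finding)  # 0.7*90 + 0.3*300
eve = e_ei - e_oi                                         # 153 - 100 = 53 > 30
print(evpi, eve)
```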

29 Decision Trees

30 Decision Tree
A tool to display a decision problem and the relevant computations.
- Nodes on a decision tree are called forks.
- Arcs on a decision tree are called branches.
- A decision fork is represented by a square.
- A chance fork is represented by a circle.
- The outcome is determined by both the decisions made and the random events that occur.
- Outcomes are noted at the end of each path; payoff information can also be included on a branch.

31 Goferbroke Example (cont’d) Decision Tree

32 Analysis Using Decision Trees
1. Start at the right side of the tree and move left one column at a time. For each column: if it contains chance forks, go to step 2; if decision forks, go to step 3.
2. At each chance fork, calculate its expected value and record it in bold next to the fork. This value is also the expected value for the branch leading into that fork.
3. At each decision fork, compare the expected values of its branches and choose the alternative with the best value. Record the choice by putting slash marks through each rejected branch.
Comments: this is a backward-induction procedure; for any decision tree, it always leads to an optimal solution.
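A minimal backward-induction sketch in Python; the tree encoding (dicts with a "kind" field) is an assumption for illustration, not the course's notation:

```python
def rollback(node):
    """Roll back a decision tree: average at chance forks, maximize at decision forks."""
    if node["kind"] == "payoff":
        return node["value"]
    if node["kind"] == "chance":
        return sum(p * rollback(child) for p, child in node["branches"])
    # decision fork: pick the branch with the largest rolled-back value
    values = {name: rollback(child) for name, child in node["branches"].items()}
    best = max(values, key=values.get)
    print("choose:", best)                  # record the optimal choice here
    return values[best]

# Goferbroke without experimentation, payoffs in $k.
tree = {"kind": "decision", "branches": {
    "drill": {"kind": "chance", "branches": [
        (0.25, {"kind": "payoff", "value": 700}),
        (0.75, {"kind": "payoff", "value": -100})]},
    "sell": {"kind": "payoff", "value": 90},
}}
print(rollback(tree))  # choose: drill, then 100.0
```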

33 Goferbroke Example (cont’d) Decision Tree Analysis

34 Painting Problem
A painting at an art gallery is worth $12,000 to you. The dealer asks $10,000 if you buy today (Wednesday). You can buy now or wait until tomorrow; if the painting has not been sold by then, it can be yours for $8,000. On Thursday you can again buy or wait until the next day; if still unsold, it can be yours for $7,000. On any given day, the probability that the painting is sold to someone else is 50%. What is the optimal policy?
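One possible backward-induction sketch for this exercise (the loop encoding of the three days is an assumption; profit if you buy at price q is 12,000 - q, and 0 if someone else buys it):

```python
worth, prices, p_survive = 12_000, [10_000, 8_000, 7_000], 0.5  # Wed, Thu, Fri

value = 0.0                       # after Friday there are no more chances
for price in reversed(prices):    # roll back Friday -> Thursday -> Wednesday
    buy_now = worth - price
    wait = p_survive * value      # painting survives the night w.p. 0.5
    print(price, buy_now, wait)
    value = max(buy_now, wait)
# Friday: buy (5000 > 0). Thursday: buy (4000 > 2500).
# Wednesday: buy_now = 2000 = wait, so you are exactly indifferent
# between buying today and waiting to buy Thursday if it is still there.
```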

35 Drawer Problem
There are two drawers: one contains three gold coins, the other contains one gold and two silver coins, and you do not know which is which. You choose one drawer and are paid $500 for each gold coin and $100 for each silver coin in that drawer. Before choosing, you may pay me $200, and I will draw a randomly selected coin and tell you whether it is gold or silver and which drawer it came from (e.g., "gold coin from drawer 1"). What is the optimal decision policy? What are EVPI and EVE? Should you pay the $200?
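One way to sketch the computation, under the assumption that the revealed coin is drawn uniformly at random (equivalently, a random drawer and then a random coin from it):

```python
from fractions import Fraction as F

v3g, v1g2s = 3 * 500, 500 + 2 * 100            # 1500 and 700
e_oi = F(1, 2) * v3g + F(1, 2) * v1g2s         # 1100: either pick is a coin flip

e_pi = v3g                                     # perfect info: open the 3-gold drawer
evpi = e_pi - e_oi                             # 400

# Experiment: a random coin is revealed together with its drawer.
# Silver => that drawer is 1G+2S, so take the other one (1500).
# Gold   => P(that drawer is 3G | gold) = (1/2) / (1/2 + 1/2 * 1/3) = 3/4.
p_gold = F(1, 2) * 1 + F(1, 2) * F(1, 3)       # 2/3
ev_gold = F(3, 4) * v3g + F(1, 4) * v1g2s      # 1300: open the revealed drawer
e_ei = p_gold * ev_gold + (1 - p_gold) * v3g   # 2/3*1300 + 1/3*1500 = 4100/3
eve = e_ei - e_oi                              # 800/3 ~ 266.7 > 200: pay the $200
print(evpi, eve, float(eve))
```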

36 Utility Theory

37 Validity of Monetary Value Assumption
Thus far, when applying Bayes' decision rule, we have assumed that expected monetary value is the appropriate measure. In many situations and applications, this assumption is inappropriate.

38 Choosing between ‘Lotteries’
Assume you were given the option to choose between two lotteries:
- Lottery 1: a 50:50 chance of winning $1,000 or $0
- Lottery 2: receive $50 for certain
Which one would you pick?

39 Choosing between ‘lotteries’
How about between these two?
- Lottery 1: a 50:50 chance of winning $1,000 or $0
- Lottery 2: receive $400 for certain
Or these two?
- Lottery 1: a 50:50 chance of winning $1,000 or $0
- Lottery 2: receive $700 for certain

40 Utility
Think of a capital investment firm deciding whether to invest in a company developing a technology that is unproven but has high potential impact. Or consider how many people buy insurance: is that monetarily sound according to Bayes' rule? So is Bayes' rule invalidated? No, because we can apply it to the utility of money when choosing between decisions. We will focus on the utility of money, but in general it could be the utility of anything (e.g., the consequences of a doctor's actions).

41 A Typical Utility Function for Money
[Figure: a concave utility curve u(M) through the points u($100) = 1, u($250) = 2, u($500) = 3, u($1,000) = 4. What does this mean? Each additional dollar adds less and less utility.]

42 Decision Maker’s Preferences
- Risk-averse: avoids risk; decreasing marginal utility for money (concave u(M))
- Risk-neutral: monetary value = utility; linear utility for money
- Risk-seeking (or risk-prone): seeks risk; increasing marginal utility for money (convex u(M))
- Or a combination of these

43 Constructing Utility Functions
When utility theory is incorporated into a real decision analysis problem, a utility function must be constructed to fit the preferences and the values of the decision maker(s) involved Fundamental property: The decision maker is indifferent between two alternative courses of action that have the same utility

44 Indifference in Utility
Consider two lotteries: Lottery 1 pays $1,000 with probability p and $0 with probability 1-p; Lottery 2 pays $X for certain. The example decision maker we discussed earlier (with the utility curve above) would be indifferent between the two lotteries if
- p is 0.25 and X is $100 (both have utility 1)
- p is 0.50 and X is $250 (both have utility 2)
- p is 0.75 and X is $500 (both have utility 3)

45 Goferbroke Example (with Utility)
We need the utility values u(M) for the following possible monetary payoffs ($k): M = -130, -100, 60, 90, 670, 700. [Figure: u(M) plotted against M with a 45° reference line, the risk-neutral case u(M) = M.]

46 Constructing Utility Functions Goferbroke Example
u(0) is usually set to 0, so u(0) = 0. We ask the decision maker what value of p makes him/her indifferent between Lottery 1 (700 with probability p, -130 with probability 1-p) and Lottery 2 (0 for certain). The decision maker's response is p = 0.2. So 0.2 u(700) + 0.8 u(-130) = u(0) = 0; fixing the scale, e.g. u(700) = 600, gives u(-130) = -150.

47 Constructing Utility Functions Goferbroke Example
We now ask the decision maker what value of p makes him/her indifferent between Lottery 1 (700 with probability p, 0 with probability 1-p) and Lottery 2 (90 for certain). The decision maker's response is p = 0.15. So u(90) = 0.15 u(700) = 0.15(600) = 90.

48 Constructing Utility Functions Goferbroke Example
We now ask the decision maker what value of p makes him/her indifferent between Lottery 1 (700 with probability p, 0 with probability 1-p) and Lottery 2 (60 for certain). The decision maker's response is p = 0.1. So u(60) = 0.1 u(700) = 0.1(600) = 60.
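A small sketch turning the elicited indifference probabilities into utilities; the scale u(700) = 600 is an assumption chosen so the results land in the slide's payoff units:

```python
# Recover utilities from the elicited indifference probabilities.
# Scale assumption: u(0) = 0 and u(700) = 600 (any positive scale would do).
u700 = 600.0

# Slide 46: 0 certain ~ (p=0.2: 700, 0.8: -130)  =>  0 = 0.2*u(700) + 0.8*u(-130)
u_minus130 = -0.2 * u700 / 0.8        # -150.0

# Slides 47-48: M certain ~ (p: 700, 1-p: 0)  =>  u(M) = p * u(700)
u90 = 0.15 * u700                     # 90.0
u60 = 0.10 * u700                     # 60.0
print(u_minus130, u90, u60)
```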

49 Goferbroke Example (with Utility) Decision Tree

50 Exponential Utility Functions
One of several mathematically prescribed, closed-form families of utility functions:
u(M) = 1 - e^(-M/R)
It applies to risk-averse decision makers only. It can be used when it is not feasible or desirable for the decision maker to answer lottery questions for all possible outcomes. The single parameter R (the risk tolerance) is the value such that the decision maker is (approximately) indifferent between a 50-50 lottery of winning R or losing R/2, and receiving 0 for certain.
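A sketch of the exponential utility family and a numerical check of the defining indifference; the value R = 1000 is only an example:

```python
# u(M) = 1 - exp(-M/R); a 50-50 gamble between winning R and losing R/2
# has expected utility approximately equal to u(0) = 0.
import math

def u(m, risk_tolerance):
    return 1.0 - math.exp(-m / risk_tolerance)

R = 1000.0                                    # example value, not from the slides
gamble = 0.5 * u(R, R) + 0.5 * u(-R / 2, R)
print(round(gamble, 3), u(0, R))              # -0.008 vs 0.0: nearly indifferent
```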

