Decision Making
ECE457 Applied Artificial Intelligence, Spring 2008, Lecture #10
Outline
Maximum Expected Utility (MEU)
Decision networks
Making decisions
Reference: Russell & Norvig, chapter 16
Acting Under Uncertainty
With no uncertainty, the rational decision is to pick the action with the "best" outcome. Given two actions, where #1 leads to a great outcome and #2 leads to a good outcome, it is only rational to pick #1. This assumes the outcome is 100% certain.
With uncertainty, it's a little harder. If #1 has a 1% probability of leading to the great outcome and #2 has a 90% probability of leading to the good outcome, what is the rational decision?
Acting Under Uncertainty
Maximum Expected Utility (MEU): pick the action that leads to the best outcome averaged over all possible outcomes of the action.
How do we compute the MEU? It is easy once we know the probability of each outcome and its utility.
Utility
Value of a state or outcome, computed by a utility function:
U(S) = utility of state S
U(S) ∈ [0,1] if normalized
Expected Utility
Sum of the utility of each possible outcome times the probability of that outcome, given known evidence E about the world.
Action A has i possible outcomes, with probability P(Result_i(A) | Do(A), E).
The utility of each outcome is U(Result_i(A)), an evaluation of the state of the world given Result_i(A).
EU(A|E) = Σ_i P(Result_i(A) | Do(A), E) * U(Result_i(A))
Maximum Expected Utility
List all possible actions A_j.
For each action, list all possible outcomes Result_i(A_j).
Compute EU(A_j|E).
Pick the action that maximises EU (see the sketch below).
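A minimal sketch of the MEU procedure above, assuming each action is described by a list of (probability, utility) pairs for its outcomes; the action names and numbers are made up for illustration, not taken from the slides.

```python
# Minimal MEU sketch: pick the action with the highest expected utility.
# Outcome probabilities and utilities here are illustrative only.
actions = {
    "action1": [(0.01, 1.0), (0.99, 0.2)],   # (P(Result_i(A) | Do(A), E), U(Result_i(A)))
    "action2": [(0.90, 0.8), (0.10, 0.2)],
}

def expected_utility(outcomes):
    """EU(A|E) = sum_i P(Result_i(A) | Do(A), E) * U(Result_i(A))."""
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action, expected_utility(actions[best_action]))
```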
Utility of Money
Can we use money as a measure of utility?
Example:
A1 = 100% chance of $1M
A2 = 50% chance of $3M or nothing
EU(A2) = $1.5M > $1M = EU(A1)
Is that rational?
Utility of Money
The utility/money relationship is logarithmic, not linear.
Example: EU(A2) = 0.45 < 0.46 = EU(A1)
Insurance:
EU(paying) = –U(value of premium)
EU(not paying) = U(value of premium) – U(value of house) * P(losing house)
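The slide's 0.45 and 0.46 come from a utility curve that is not reproduced here; the sketch below only illustrates the qualitative point with an assumed logarithmic utility U($x) = ln(1 + x), under which the sure $1M is preferred to the 50% gamble on $3M even though the gamble has higher expected money.

```python
import math

def u(dollars):
    """Assumed concave (logarithmic) utility of money, for illustration only."""
    return math.log(1 + dollars)

eu_a1 = 1.0 * u(1_000_000)                 # A1: 100% chance of $1M
eu_a2 = 0.5 * u(3_000_000) + 0.5 * u(0)    # A2: 50% chance of $3M, else nothing
print(eu_a1 > eu_a2)   # True: with a concave utility, A1 beats A2
```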
Axioms
Given three states A, B, C:
A ≻ B: the agent prefers A to B
A ~ B: the agent is indifferent between A and B
A ⪰ B: the agent prefers A to B or is indifferent between A and B
[p1, A; p2, B; p3, C]: a lottery in which A can occur with probability p1, B with probability p2, and C with probability p3
Axioms
Orderability: (A ≻ B) ∨ (B ≻ A) ∨ (A ~ B)
Transitivity: (A ≻ B) ∧ (B ≻ C) ⇒ (A ≻ C)
Continuity: A ≻ B ≻ C ⇒ ∃p [p, A; 1-p, C] ~ B
Substitutability: A ~ B ⇒ [p, A; 1-p, C] ~ [p, B; 1-p, C]
Monotonicity: A ≻ B ⇒ ( p ≥ q ⇔ [p, A; 1-p, B] ⪰ [q, A; 1-q, B] )
Decomposability: [p, A; 1-p, [q, B; 1-q, C]] ~ [p, A; (1-p)q, B; (1-p)(1-q), C]
Axioms
Utility principle:
U(A) > U(B) ⇔ A ≻ B
U(A) = U(B) ⇔ A ~ B
Maximum expected utility principle:
U([p1, A1; … ; pn, An]) = Σ_i p_i U(A_i)
Given these axioms, MEU is rational!
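As a small numeric illustration of the lottery-utility formula and the decomposability axiom, the sketch below assigns arbitrary utilities to outcomes A, B, C and checks that evaluating a nested lottery directly, and after flattening it with decomposability, gives the same value. The utilities and probabilities are made up for illustration.

```python
# Arbitrary illustrative utilities of three outcomes.
U = {"A": 1.0, "B": 0.6, "C": 0.1}
p, q = 0.3, 0.5

def lottery_utility(branches):
    """U([p1, A1; ...; pn, An]) = sum_i p_i * U(A_i), applied recursively to nested lotteries."""
    return sum(pi * (lottery_utility(ai) if isinstance(ai, list) else U[ai])
               for pi, ai in branches)

nested = [(p, "A"), (1 - p, [(q, "B"), (1 - q, "C")])]
flat = [(p, "A"), ((1 - p) * q, "B"), ((1 - p) * (1 - q), "C")]
print(abs(lottery_utility(nested) - lottery_utility(flat)) < 1e-12)   # True (decomposability)
```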
Decision Network
Our agent makes decisions given evidence: observed variables and the conditional probability tables of hidden variables.
This is similar to conditional probability (the probability of variables given other variables), whose relationships we represented graphically in a Bayesian network.
Could we make a similar graph here?
Decision Network
Sometimes called an influence diagram.
Like a Bayesian network, but for decision making:
Start with the variables of the problem
Add decision variables that the agent controls
Add a utility variable that specifies how good each state is
Decision Network
Chance node (oval): an uncertain variable, like in a Bayesian network.
Decision node (rectangle): a choice of action. Parents: variables affecting the decision (evidence). Children: variables affected by the decision.
Utility node (diamond): the utility function. Parents: variables affecting utility. Typically only one per network.
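A minimal sketch of how these three node types might be represented in code; the class and field names are hypothetical, not from the lecture.

```python
from dataclasses import dataclass, field

@dataclass
class ChanceNode:        # oval: uncertain variable with a CPT over its parents
    name: str
    parents: list = field(default_factory=list)
    cpt: dict = field(default_factory=dict)     # parent-value tuple -> P(value | parents)

@dataclass
class DecisionNode:      # rectangle: choice of action; parents = evidence available
    name: str
    parents: list = field(default_factory=list)
    domain: tuple = (True, False)

@dataclass
class UtilityNode:       # diamond: utility function over its parents
    name: str
    parents: list = field(default_factory=list)
    table: dict = field(default_factory=dict)   # parent-value tuple -> utility
```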
Decision Network Example
[Network diagram: nodes Lucky, Win, Study, PassExam, and Happiness. P(L) = 0.75; P(W | L) = 0.4, P(W | ¬L) = 0.01; conditional probability tables for PassExam and Happiness given their parents.]
Decision Network Example
[Diagram: chance/evidence nodes Sunny, Run into friends, and Have $; decision nodes Bomber Patio and Join your friends; utility node U.]
Making a Rational Decision
At a decision node, given a combination of values of the evidence variables:
Compute the EU of each action you can decide to do given this evidence
Decide to do the action with the maximum EU
Policy: a choice of action (not necessarily the best one) for each possible combination of values of the evidence variables
Policy
A decision node D_i can take values in a domain dom(D_i) and has a set of parents P_i that take values in a domain dom(P_i).
A policy δ is a set of mappings δ_i from dom(P_i) to dom(D_i):
δ_i associates a decision with each state the parents of D_i can be in
δ associates a series of decisions with each state the network can be in
Policy
Example: a policy on going to the Bomber patio, as a function of Have $ and Sunny:
δ_bp($, S) = BP
δ_bp(¬$, S) = BP
δ_bp($, ¬S) = ¬BP
δ_bp(¬$, ¬S) = ¬BP
Value of a Policy
Expected utility if decisions are taken according to the policy:
EU(δ) = Σ_x P(x) U(x, δ(x))
EU(δ_bp) = Σ_{$,S} P($, S) U($, S, δ_bp($, S))
The optimal policy δ* is the one with the highest expected utility:
EU(δ*) ≥ EU(δ) for all policies δ
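A minimal sketch of evaluating EU(δ) for the Bomber-patio policy above; the joint distribution P($, S) and the utility table below are made-up numbers purely for illustration.

```python
from itertools import product

# Hypothetical joint distribution over (Have $, Sunny) and utility table; illustrative only.
P = {(True, True): 0.3, (True, False): 0.2, (False, True): 0.3, (False, False): 0.2}
U = {(h, s, bp): (8 if bp and s else 5 if not bp else 2)   # utility of (Have $, Sunny, go to patio)
     for h, s, bp in product([True, False], repeat=3)}

# The policy from the slide, roughly: go to the Bomber patio exactly when it is sunny.
policy = {(h, s): s for h, s in product([True, False], repeat=2)}

def policy_value(policy):
    """EU(delta) = sum_x P(x) * U(x, delta(x))."""
    return sum(p * U[(h, s, policy[(h, s)])] for (h, s), p in P.items())

print(policy_value(policy))
```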
Computing the Optimal Policy
Start from the last decision node before the utility node.
For each combination of values of the node's parents:
Compute the expected utility of each decision
Set the policy to the decision that maximises utility
Work backward to the first decision in the network.
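A minimal sketch of the inner step (optimising one decision node): for each assignment of the node's parents, pick the decision with the highest expected utility. The expected-utility table here is hypothetical, for illustration only.

```python
from itertools import product

# Hypothetical EU(decision | parent assignment) values; illustrative only.
EU = {
    ((True, True), "go"): 7.0,   ((True, True), "stay"): 4.0,
    ((True, False), "go"): 2.0,  ((True, False), "stay"): 5.0,
    ((False, True), "go"): 6.0,  ((False, True), "stay"): 4.5,
    ((False, False), "go"): 1.0, ((False, False), "stay"): 5.0,
}
decisions = ("go", "stay")

# Optimal policy for this node: argmax over decisions for each parent assignment.
optimal_policy = {
    parents: max(decisions, key=lambda d: EU[(parents, d)])
    for parents in product([True, False], repeat=2)
}
print(optimal_policy)   # e.g. {(True, True): 'go', (True, False): 'stay', ...}
```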
Computing the Optimal Policy
[Same diagram as before: Sunny, Run into friends, Have $, Bomber Patio, Join your friends, U.]
Compute the optimal policy for JF (Join your friends):
For each combination of BP, RF and $, make a decision JF and compute U(JF, $)
Set the policy to the maximum-utility decision for each combination of BP and RF
Computing the Optimal Policy
[Same diagram as before.]
Compute the optimal policy for BP given δ*_JF(BP, RF):
For each combination of S and $, make a decision BP, which will affect RF and JF
JF is decided by its optimal policy, so we can compute U(JF, $)
Decision Network Example
Bob wants to buy a used car. Unfortunately, the car he's considering has a 50% chance of being a lemon. Before buying, he can decide to take the car to a mechanic to have it inspected. The mechanic will report whether the car is good or bad, but he can make mistakes, and the inspection is expensive. Bob prefers owning a good car to not owning a car, and prefers that to owning a lemon. Should Bob have the car inspected first or not?
Decision Network Example
[Diagram: chance nodes Lemon and Report; decision nodes Inspect and Buy; utility node U.]
Decision Network Example
P(L) = 0.5
P(Report | Lemon, Inspect), where N means no report (no inspection):
  Inspect  Lemon   P(G)   P(¬G)   P(N)
  F        any     0      0       1
  T        F       0.9    0.1     0
  T        T       0.2    0.8     0
Utility U(Buy, Lemon, Inspect), with a utility cost of inspection of -50:
  Buy  Lemon   U (¬Inspect)   U (Inspect)
  F    any     -300           -350
  T    F       1000           950
  T    T       -600           -650
Decision Network Example
Compute the EU of Buy and Not Buy given every combination of evidence, and select the action with the MEU in each case.
Then compute the EU of Inspect and Not Inspect, given that the Buy/Not Buy action will be selected as above, and decide on Inspect or Not Inspect depending on the MEU.
Decision Network Example
Compute the expected utility of buying and not buying the car given the evidence. The evidence is whether or not Bob got the car inspected, and what the result of the inspection is.
EU(b|i,r) = Σ_l P(l|b,i,r) U(b,i,l)
Decision Network Example
EU(B|¬I,N) = Σ_l P(l|B,¬I,N) U(B,¬I,l)
EU(B|¬I,N) = P(L)U(B,L,¬I) + P(¬L)U(B,¬L,¬I)
EU(B|¬I,N) = 0.5 * (-600) + 0.5 * 1000
EU(B|¬I,N) = 200
EU(¬B|¬I,N) = Σ_l P(l|¬B,¬I,N) U(¬B,¬I,l)
EU(¬B|¬I,N) = P(L)U(¬B,L,¬I) + P(¬L)U(¬B,¬L,¬I)
EU(¬B|¬I,N) = 0.5 * (-300) + 0.5 * (-300)
EU(¬B|¬I,N) = -300
The rational decision, if Bob doesn't get the car inspected, is to buy it.
Decision Network Example
EU(B|I,G) = Σ_l P(l|B,I,G) U(B,I,l)
We're missing some information! From the network, we know P(L) and P(G|L), but not P(L|G) nor P(G).
Compute P(G) using marginalization:
P(G) = P(G|L)P(L) + P(G|¬L)P(¬L) = 0.2 * 0.5 + 0.9 * 0.5 = 0.55
Compute P(L|G) using Bayes' Theorem:
P(L|G) = P(G|L)P(L)/P(G) = 0.18
P(¬L|G) = P(G|¬L)P(¬L)/P(G) = 0.82
Decision Network Example
EU(B|I,G) = Σ_l P(l|B,I,G) U(B,I,l)
EU(B|I,G) = P(L|G)U(B,L,I) + P(¬L|G)U(B,¬L,I)
EU(B|I,G) = 0.18 * (-650) + 0.82 * 950
EU(B|I,G) = 662
EU(¬B|I,G) = Σ_l P(l|¬B,I,G) U(¬B,I,l)
EU(¬B|I,G) = P(L|G)U(¬B,L,I) + P(¬L|G)U(¬B,¬L,I)
EU(¬B|I,G) = 0.18 * (-350) + 0.82 * (-350)
EU(¬B|I,G) = -350
The rational decision, if Bob gets the car inspected and the report says it's good, is to buy it.
Decision Network Example
EU(B|I,¬G) = Σ_l P(l|B,I,¬G) U(B,I,l)
EU(B|I,¬G) = P(L|¬G)U(B,L,I) + P(¬L|¬G)U(B,¬L,I)
EU(B|I,¬G) = 0.89 * (-650) + 0.11 * 950
EU(B|I,¬G) = -474
EU(¬B|I,¬G) = Σ_l P(l|¬B,I,¬G) U(¬B,I,l)
EU(¬B|I,¬G) = P(L|¬G)U(¬B,L,I) + P(¬L|¬G)U(¬B,¬L,I)
EU(¬B|I,¬G) = 0.89 * (-350) + 0.11 * (-350)
EU(¬B|I,¬G) = -350
The rational decision, if Bob gets the car inspected and the report says it's not good, is to not buy it.
Decision Network Example
Optimal Buy policy δ*_B(I, R):
  I    R    δ*_B(I,R)   EU
  ¬I   N    B           200
  I    G    B           662
  I    ¬G   ¬B          -350
Decision Network Example
Should Bob get the car inspected?
EU(i) = Σ_{l,r} P(l,r|i) U(l,i,b)
P(l,r|i) = P(r|l,i)P(l|i) = P(r|l,i)P(l)
EU(i) = Σ_{l,r} P(r|l,i) P(l) U(l,i,b)
Decision Network Example
EU(i) = Σ_{l,r} P(r|l,i) P(l) U(l,i,b)
EU(¬I) = P(N|L,¬I)P(L)U(L,¬I,B) + P(N|¬L,¬I)P(¬L)U(¬L,¬I,B)
EU(¬I) = 1 * 0.5 * (-600) + 1 * 0.5 * 1000
EU(¬I) = 200
Decision Network Example
EU(i) = Σ_{l,r} P(r|l,i) P(l) U(l,i,b)
EU(I) = P(G|L,I)P(L)U(L,I,B) + P(G|¬L,I)P(¬L)U(¬L,I,B) + P(¬G|L,I)P(L)U(L,I,¬B) + P(¬G|¬L,I)P(¬L)U(¬L,I,¬B)
EU(I) = 0.2 * 0.5 * (-650) + 0.9 * 0.5 * 950 + 0.8 * 0.5 * (-350) + 0.1 * 0.5 * (-350)
EU(I) = 205
Decision Network Example
EU(I) = 205 > EU(¬I) = 200
Therefore, Bob should get the car inspected.
δ* = { δ*_B(I,R), δ*_I }
δ*_I = I, with EU = 205
δ*_B(I,R), as before:
  I    R    δ*_B(I,R)   EU
  ¬I   N    B           200
  I    G    B           662
  I    ¬G   ¬B          -350
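A sketch that re-derives the example's numbers end to end from the CPT and utility tables above. The function and variable names are my own; note that the slides' 662 and -474 come from rounding P(L|G) and P(L|¬G) to 0.18 and 0.89, so the exact values printed here differ by a few units, while 200, -300, -350 and 205 match exactly.

```python
P_LEMON = 0.5
# P(Report | Lemon, Inspect); reports: "G" (good), "B" (bad), "N" (no report)
P_REPORT = {
    (True,  True):  {"G": 0.2, "B": 0.8, "N": 0.0},
    (False, True):  {"G": 0.9, "B": 0.1, "N": 0.0},
    (True,  False): {"G": 0.0, "B": 0.0, "N": 1.0},
    (False, False): {"G": 0.0, "B": 0.0, "N": 1.0},
}

def utility(buy, lemon, inspect):
    """Utility table from the slides; inspecting costs 50."""
    base = (-600 if lemon else 1000) if buy else -300
    return base - 50 if inspect else base

def p_lemon_given(inspect, report):
    """P(Lemon | Inspect, Report) via marginalization and Bayes' theorem."""
    p_r = sum(P_REPORT[(l, inspect)][report] * (P_LEMON if l else 1 - P_LEMON)
              for l in (True, False))
    return P_REPORT[(True, inspect)][report] * P_LEMON / p_r

def eu_buy(buy, inspect, report):
    """EU of the Buy decision given the Inspect decision and the report."""
    p_l = p_lemon_given(inspect, report)
    return p_l * utility(buy, True, inspect) + (1 - p_l) * utility(buy, False, inspect)

def eu_inspect(inspect):
    """EU of the Inspect decision, with Buy following its optimal policy."""
    total = 0.0
    for lemon in (True, False):
        p_l = P_LEMON if lemon else 1 - P_LEMON
        for report, p_r in P_REPORT[(lemon, inspect)].items():
            if p_r == 0.0:
                continue
            best_buy = max((True, False), key=lambda b: eu_buy(b, inspect, report))
            total += p_r * p_l * utility(best_buy, lemon, inspect)
    return total

print(eu_buy(True, False, "N"), eu_buy(False, False, "N"))  # 200.0, -300.0 -> buy
print(eu_buy(True, True, "G"), eu_buy(False, True, "G"))    # ~659 (662 on the slides), -350 -> buy
print(eu_buy(True, True, "B"), eu_buy(False, True, "B"))    # ~-472 (-474 on the slides), -350 -> don't buy
print(eu_inspect(False), eu_inspect(True))                  # 200.0, 205.0 -> inspect
```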
Value of Information
The utility of the decision without the inspection is 200.
The utility of the decision with the inspection is 205: the utility of the decision (255) minus the utility cost of the inspection (50).
At what point is the utility cost of the inspection too high? When 255 – utility cost < 200.
So the value of the information gained from the inspection is 55.
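The same arithmetic as a tiny sketch, using the numbers above.

```python
eu_without_inspection = 200   # best decision with no report
eu_with_free_report = 255     # best decisions given the report, before paying for it (205 + 50)
value_of_information = eu_with_free_report - eu_without_inspection
print(value_of_information)   # 55: inspecting is worthwhile only if it costs less than this
```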
Value of Information
Information has value if:
It causes a change in the decision
The new decision has higher utility than the old one
The value is:
Non-negative
Zero for irrelevant facts
Zero for information already known
Categories of AI
          Humanly    Rationally
  Think              Logic and probabilistic reasoning
  Act                Tree searching, iterative improvement
Exercise
[Diagram: decision nodes Wear Protection (Pr) and Which Way (W); chance node Accident (A), which depends on W; utility node U, which depends on Pr, W and A.]
P(A | W):
  W   P(A)
  S   0.6
  L   0.3
Utility U(Pr, W, A):
  Pr  W  A  U
  T   S  T  0.4
  T   S  F  0.7
  T   L  T  0.3
  T   L  F  0.6
  F   S  T  0.1
  F   S  F  1
  F   L  T  0
  F   L  F  0.9
Exercise
Which way to go if you wear protection?
EU( S | Pr ) = P(A|S) * U(Pr,S,A) + P(~A|S) * U(Pr,S,~A)
EU( S | Pr ) = 0.6 * 0.4 + 0.4 * 0.7
EU( S | Pr ) = 0.52
EU( L | Pr ) = P(A|L) * U(Pr,L,A) + P(~A|L) * U(Pr,L,~A)
EU( L | Pr ) = 0.3 * 0.3 + 0.7 * 0.6
EU( L | Pr ) = 0.51
The rational decision, if you wear protection, is to go the short way.
Exercise
Which way to go if you don't wear protection?
EU( S | ~Pr ) = P(A|S) * U(~Pr,S,A) + P(~A|S) * U(~Pr,S,~A)
EU( S | ~Pr ) = 0.6 * 0.1 + 0.4 * 1
EU( S | ~Pr ) = 0.46
EU( L | ~Pr ) = P(A|L) * U(~Pr,L,A) + P(~A|L) * U(~Pr,L,~A)
EU( L | ~Pr ) = 0.3 * 0 + 0.7 * 0.9
EU( L | ~Pr ) = 0.63
The rational decision, if you don't wear protection, is to go the long way.
Exercise
Given the decision on which way to go, should we wear protection?
EU( Pr ) = P(A|S) * U(Pr,S,A) + P(~A|S) * U(Pr,S,~A)
EU( Pr ) = 0.6 * 0.4 + 0.4 * 0.7
EU( Pr ) = 0.52
EU( ~Pr ) = P(A|L) * U(~Pr,L,A) + P(~A|L) * U(~Pr,L,~A)
EU( ~Pr ) = 0.3 * 0 + 0.7 * 0.9
EU( ~Pr ) = 0.63
The rational decision is to not wear protection, and therefore to go the long way.
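A small sketch that re-checks the exercise numbers from the tables above; the variable names are my own.

```python
P_ACCIDENT = {"S": 0.6, "L": 0.3}   # P(Accident | Which Way)
UTILITY = {                         # (Wear Protection, Which Way, Accident) -> utility
    (True,  "S", True): 0.4, (True,  "S", False): 0.7,
    (True,  "L", True): 0.3, (True,  "L", False): 0.6,
    (False, "S", True): 0.1, (False, "S", False): 1.0,
    (False, "L", True): 0.0, (False, "L", False): 0.9,
}

def eu(way, protect):
    """EU(way | protect) = P(A|way)*U(protect,way,A) + P(~A|way)*U(protect,way,~A)."""
    p = P_ACCIDENT[way]
    return p * UTILITY[(protect, way, True)] + (1 - p) * UTILITY[(protect, way, False)]

for protect in (True, False):
    best_way = max(("S", "L"), key=lambda w: eu(w, protect))
    print(protect, best_way, round(eu(best_way, protect), 2))
# With protection: short way, 0.52; without protection: long way, 0.63.
# Overall rational decision: don't wear protection and go the long way (EU 0.63).
```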