
1 CS 188: Artificial Intelligence Spring 2009 Lecture 20: Decision Networks 4/2/2009 John DeNero – UC Berkeley Slides adapted from Dan Klein

2 Announcements
- Written 3 released tonight, due April 14

3 Decision Networks
- MEU: choose the action which maximizes the expected utility given the evidence
- Can directly operationalize this with decision networks
  - Bayes nets with nodes for utility and actions
  - Lets us calculate the expected utility for each action
- New node types:
  - Chance nodes (just like in Bayes nets)
  - Action nodes (rectangles, must be parents, act as observed evidence)
  - Utility node (diamond, depends on action and chance nodes)
[Diagram: decision network with chance nodes Weather and Forecast, action node Umbrella, and utility node U]

4 Decision Networks
- Action selection:
  - Instantiate all evidence
  - Set the action node(s) each possible way
  - Calculate the posterior for all parents of the utility node, given the evidence
  - Calculate the expected utility for each action
  - Choose the maximizing action (sketched in code below)
[Diagram: the same Weather / Forecast / Umbrella / U network]
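As a concrete illustration of this procedure, here is a minimal Python sketch of MEU action selection. The functions posterior(w, evidence) and utility(a, w) are hypothetical placeholders for inference over the network and the utility table; they are not defined on the slides.

# Minimal sketch of decision-network action selection (illustrative only).
# `posterior(w, evidence)` should return P(w | evidence) for an assignment w
# to the utility node's chance parents; `utility(a, w)` is the table U(a, w).
def best_action(actions, outcomes, posterior, utility, evidence):
    best, best_eu = None, float("-inf")
    for a in actions:                                    # set the action node each way
        eu = sum(posterior(w, evidence) * utility(a, w)  # expected utility of action a
                 for w in outcomes)
        if eu > best_eu:
            best, best_eu = a, eu
    return best, best_eu                                 # maximizing action and its MEU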

5 Example: Decision Networks

[Diagram: decision network with chance node Weather, action node Umbrella, and utility node U]

W     P(W)
sun   0.7
rain  0.3

A      W     U(A,W)
leave  sun   100
leave  rain  0
take   sun   20
take   rain  70

EU(Umbrella = leave) = 0.7 · 100 + 0.3 · 0 = 70
EU(Umbrella = take)  = 0.7 · 20  + 0.3 · 70 = 35
Optimal decision = leave (MEU = 70)
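The two expected utilities can be checked directly from the tables above; a quick sketch using the slide's numbers:

# Expected utility of each action under the prior P(W).
P_W = {"sun": 0.7, "rain": 0.3}
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}
for a in ("leave", "take"):
    print(a, sum(P_W[w] * U[(a, w)] for w in P_W))   # leave 70.0, take 35.0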

6 Evidence in Decision Networks
- Find P(W | F = bad)
- Select for evidence
- First we join P(W) and P(F = bad | W), then we normalize (sketched in code below)

CPTs:
W     P(W)        F     P(F|sun)      F     P(F|rain)
sun   0.7         good  0.8           good  0.1
rain  0.3         bad   0.2           bad   0.9

Join P(W) with P(F = bad | W):
W     P(W)   P(F=bad|W)   P(W, F=bad)
sun   0.7    0.2          0.14
rain  0.3    0.9          0.27

Normalize by P(F = bad) = 0.14 + 0.27 = 0.41:
W     P(W | F=bad)
sun   0.34
rain  0.66

[Diagram: the Weather / Forecast / Umbrella / U network]
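The join-and-normalize step on this slide translates directly into code; a small sketch using the CPTs above:

# Compute P(W | F=bad) by joining P(W) with P(F=bad | W) and normalizing.
P_W = {"sun": 0.7, "rain": 0.3}
P_bad_given_W = {"sun": 0.2, "rain": 0.9}
joint = {w: P_W[w] * P_bad_given_W[w] for w in P_W}   # P(W, F=bad): sun 0.14, rain 0.27
z = sum(joint.values())                               # P(F=bad) = 0.41
posterior = {w: joint[w] / z for w in joint}          # sun ~0.34, rain ~0.66
print(posterior)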

7 Example: Decision Networks

[Diagram: same network, with evidence Forecast = bad]

W     P(W | F=bad)
sun   0.34
rain  0.66

A      W     U(A,W)
leave  sun   100
leave  rain  0
take   sun   20
take   rain  70

EU(Umbrella = leave | F = bad) = 0.34 · 100 + 0.66 · 0 = 34
EU(Umbrella = take | F = bad)  = 0.34 · 20  + 0.66 · 70 = 53
Optimal decision = take (MEU = 53)
[Demo]
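Combining the posterior from the previous slide with the utility table shows why the decision flips from leave to take; a brief check (posterior values rounded as on the slide):

# Expected utilities given the bad forecast, using the rounded posterior above.
P_W_bad = {"sun": 0.34, "rain": 0.66}
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}
for a in ("leave", "take"):
    print(a, sum(P_W_bad[w] * U[(a, w)] for w in P_W_bad))   # leave 34.0, take ~53.0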

8 Conditioning on Action Nodes
- An action node can be a parent of a chance node
- The chance node then conditions on the outcome of the action
- Action nodes are like observed variables in a Bayes' net, except we max over their values
[Diagram: nodes S and A feed into S' and U, with transition model T(s, a, s') and reward R(s, a, s')]
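A sketch of what this structure means computationally: the expected utility of an action now sums over the outcome of the chance node it influences. The tables T and R below are hypothetical stand-ins for the transition model T(s, a, s') and utility R(s, a, s') in the diagram; they are not given on the slides.

# EU(a | s) = sum over s' of T(s, a, s') * R(s, a, s'),
# where the distribution of S' conditions on the chosen action a.
def expected_utility(s, a, next_states, T, R):
    return sum(T[(s, a, s2)] * R[(s, a, s2)] for s2 in next_states)

def best_action_in_state(s, actions, next_states, T, R):
    return max(actions, key=lambda a: expected_utility(s, a, next_states, T, R))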

9 Value of Information
- Idea: compute the value of acquiring each possible piece of evidence
  - Can be done directly from the decision network
- Example: buying oil drilling rights
  - Two blocks A and B, exactly one has oil, worth k
  - Prior probabilities 0.5 each, mutually exclusive
  - Drilling in either A or B has MEU = k/2
  - Fair price of the drilling rights: k/2
- Question: what's the value of information?
  - Value of knowing which of A or B has oil
  - Value is the expected gain in MEU from the new information
  - The survey may say "oil in A" or "oil in B", with probability 0.5 each
  - If we know OilLoc, MEU is k (either way)
  - Gain in MEU from knowing OilLoc: VPI(OilLoc) = k - k/2 = k/2
  - Fair price of the information: k/2 (worked in code below)

D  O  U(D,O)        O  P(O)
a  a  k             a  1/2
a  b  0             b  1/2
b  a  0
b  b  k

[Diagram: decision network with chance node OilLoc, action node DrillLoc, and utility node U]
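The k/2 answer can be reproduced mechanically from the two tables; a small sketch with k left as a parameter, as on the slide:

# VPI of learning OilLoc in the drilling-rights example.
def oil_vpi(k):
    P_oil = {"a": 0.5, "b": 0.5}
    U = {("a", "a"): k, ("a", "b"): 0, ("b", "a"): 0, ("b", "b"): k}   # U(DrillLoc, OilLoc)
    meu_prior = max(sum(P_oil[o] * U[(d, o)] for o in P_oil)
                    for d in ("a", "b"))                               # = k/2
    # If OilLoc is revealed, we drill there and win k whichever block it is.
    meu_informed = sum(P_oil[o] * max(U[(d, o)] for d in ("a", "b"))
                       for o in P_oil)                                 # = k
    return meu_informed - meu_prior                                    # = k/2

print(oil_vpi(1000.0))   # 500.0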

10 Value of Perfect Information
- Current evidence E = e; utility depends on S = s:
  MEU(e) = max_a Σ_s P(s | e) U(s, a)
- Potential new evidence E': suppose we knew E' = e':
  MEU(e, e') = max_a Σ_s P(s | e, e') U(s, a)
- BUT E' is a random variable whose value is currently unknown, so we must compute the expected gain over all possible values:
  VPI(E' | e) = ( Σ_{e'} P(e' | e) MEU(e, e') ) - MEU(e)
- (VPI = value of perfect information)
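The formula translates directly into code; a generic sketch where meu(evidence) is a hypothetical function returning the maximum expected utility under a given evidence assignment (for example, built on top of the action-selection sketch earlier):

# VPI(E' | e) = sum over e' of P(e' | e) * MEU(e, e')  -  MEU(e)
def vpi(new_var, values, P_new_given_e, meu, evidence):
    expected_meu = sum(P_new_given_e[v] * meu({**evidence, new_var: v})
                       for v in values)
    return expected_meu - meu(evidence)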

11 VPI Example: Weather

[Diagram: the Weather / Forecast / Umbrella / U network]

A      W     U(A,W)
leave  sun   100
leave  rain  0
take   sun   20
take   rain  70

Forecast distribution:
F     P(F)
good  0.59
bad   0.41

MEU with no evidence: 70
MEU if forecast is bad: 53
MEU if forecast is good: 95
VPI(Forecast) = 0.59 · 95 + 0.41 · 53 - 70 ≈ 7.8
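Putting the pieces together for the forecast example; this sketch rounds the posteriors to two decimals, matching the earlier table, which appears to be how the slide arrives at 7.8 (with unrounded posteriors the value comes out near 7.7):

# VPI(Forecast) for the umbrella network, from the CPTs on the earlier slides.
P_W = {"sun": 0.7, "rain": 0.3}
P_F_given_W = {("good", "sun"): 0.8, ("bad", "sun"): 0.2,
               ("good", "rain"): 0.1, ("bad", "rain"): 0.9}
U = {("leave", "sun"): 100, ("leave", "rain"): 0,
     ("take", "sun"): 20, ("take", "rain"): 70}

def meu(P):   # maximum expected utility under a distribution P over weather
    return max(sum(P[w] * U[(a, w)] for w in P) for a in ("leave", "take"))

P_F = {f: sum(P_F_given_W[(f, w)] * P_W[w] for w in P_W)
       for f in ("good", "bad")}                                        # 0.59, 0.41
post = {f: {w: round(P_F_given_W[(f, w)] * P_W[w] / P_F[f], 2) for w in P_W}
        for f in P_F}
vpi_forecast = sum(P_F[f] * meu(post[f]) for f in P_F) - meu(P_W)       # 0.59*95 + 0.41*53 - 70
print(round(vpi_forecast, 1))                                           # 7.8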

12 VPI Example: Ghostbusters
- Reminder: the ghost is hidden, sensors are noisy
- T: top square is red; B: bottom square is red; G: ghost is in the top
- Sensor model: P(t | g) = 0.8, P(t | ¬g) = 0.4, P(b | g) = 0.4, P(b | ¬g) = 0.8

Joint distribution:
T   B   G    P(T,B,G)
t   b   g    0.16
t   b   ¬g   0.16
t   ¬b  g    0.24
t   ¬b  ¬g   0.04
¬t  b   g    0.04
¬t  b   ¬g   0.24
¬t  ¬b  g    0.06
¬t  ¬b  ¬g   0.06

[Demo]

13 VPI Example: Ghostbusters
- Joint distribution: as on the previous slide
- Utility of a bust is 2, of no bust is 0
- Q1: What's the value of knowing T if I know nothing?
  Q1': VPI(T) = Σ_t P(t) MEU(t) - MEU(∅)
- Q2: What's the value of knowing B if I already know that T is true (red)?
  Q2': VPI(B | t) = Σ_b P(b | t) MEU(t, b) - MEU(t)
- How low can the value of information ever be? (see the sketch below)
[Demo]
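A sketch that evaluates Q1' and Q2' from the joint distribution, assuming (this is not spelled out on the slide) that the decision is which square to bust, top or bottom, with utility 2 for catching the ghost and 0 otherwise:

# Evaluate Q1' and Q2' from the joint P(T, B, G) above.
# Assumption: the action is to bust either the top or the bottom square.
joint = {   # (T, B, G) -> probability
    (True, True, True): 0.16,  (True, True, False): 0.16,
    (True, False, True): 0.24, (True, False, False): 0.04,
    (False, True, True): 0.04, (False, True, False): 0.24,
    (False, False, True): 0.06, (False, False, False): 0.06,
}
IDX = {"T": 0, "B": 1, "G": 2}

def meu(**evidence):
    """Max expected utility of busting top vs. bottom, given the evidence."""
    rows = {k: p for k, p in joint.items()
            if all(k[IDX[var]] == val for var, val in evidence.items())}
    z = sum(rows.values())
    p_g = sum(p for k, p in rows.items() if k[IDX["G"]]) / z   # P(ghost in top | evidence)
    return 2 * max(p_g, 1 - p_g)                               # bust the likelier square

p_t = sum(p for k, p in joint.items() if k[IDX["T"]])                    # P(t) = 0.6
print(p_t * meu(T=True) + (1 - p_t) * meu(T=False) - meu())              # Q1: ~0.4
p_b_t = sum(p for k, p in joint.items() if k[IDX["T"]] and k[IDX["B"]]) / p_t
print(p_b_t * meu(T=True, B=True)
      + (1 - p_b_t) * meu(T=True, B=False) - meu(T=True))                # Q2: ~0.0

Under this assumption Q1 comes out to about 0.4 and Q2 to 0, which is consistent with the nonnegativity property on the next slide.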

14 VPI Properties
- Nonnegative in expectation: VPI(E' | e) ≥ 0 for all E' and e
- Nonadditive: consider, e.g., obtaining E_j twice;
  in general VPI(E_j, E_k | e) ≠ VPI(E_j | e) + VPI(E_k | e)
- Order-independent:
  VPI(E_j, E_k | e) = VPI(E_j | e) + VPI(E_k | e, e_j) = VPI(E_k | e) + VPI(E_j | e, e_k)

15 Quick VPI Questions
- The soup of the day is either clam chowder or split pea, but you wouldn't order either one. What's the value of knowing which it is?
- If you have $10 to bet and the odds are 3 to 1 that Berkeley will beat Stanford, what's the value of knowing the outcome in advance, assuming you can make a fair bet for either Cal or Stanford?
- What if you are morally obligated not to bet against Cal, but you can refrain from betting?

