Decision Theory: Single & Sequential Decisions. VE for Decision Networks.
CPSC 322 – Decision Theory 2. Textbook §9.2. April 1, 2011.
Course Overview
[Course module map: environments are deterministic or stochastic; problems are static or sequential; representation and reasoning techniques include search, arc consistency, STRIPS planning, variable elimination for Bayesian networks, decision networks, Markov processes, and value iteration.]
Current module: decision theory, i.e., acting under uncertainty.
Lecture Overview
– Recap: Utility and Expected Utility
– Single-Stage Decision Problems: single-stage decision networks; variable elimination (VE) for computing the optimal decision
– Sequential Decision Problems: general decision networks; time permitting, policies
– Next lecture: variable elimination for finding the optimal policy in general decision networks
Utility
Utility is a measure of the desirability of possible worlds to an agent: let U be a real-valued function such that U(w) represents the agent's degree of preference for world w.
Simple goals can still be specified, e.g.:
– Worlds that satisfy the goal have utility 100
– Other worlds have utility 0
Utilities can be more complicated. For example, in the robot delivery domain they could involve:
– Amount of damage
– Whether the target room was reached
– Energy left
– Time taken
A small sketch of both styles follows.
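As an illustrative sketch (the encoding, attribute names, and weights here are assumptions for illustration, not from the slides), a utility function is just a real-valued function of possible worlds:

```python
# Hypothetical encoding: a possible world is a dict of variable values.

# Goal-based utility: worlds satisfying the goal get 100, all others 0.
def goal_utility(world):
    return 100 if world["reached_target_room"] else 0

# A more complicated utility combining several attributes of a world.
def delivery_utility(world):
    return (-2.0 * world["damage"]          # penalize damage
            + 0.5 * world["energy_left"]    # reward remaining energy
            - 1.0 * world["time_taken"]     # penalize slowness
            + (50 if world["reached_target_room"] else 0))
```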
Delivery Robot Example
Decision variable 1: the robot can choose whether to wear pads.
– Yes: protection against accidents, but extra weight
– No: fast, but no protection
Decision variable 2: the robot can choose which way to go.
– Short way: quick, but higher chance of accident
– Long way: safe, but slow
Random variable: is there an accident? The agent decides the decision variables; chance decides the random variable.
Utility of each possible world:
– Wear pads, short way: 35 if accident, 95 if no accident
– Wear pads, long way: 30 if accident, 75 if no accident
– No pads, short way: 3 if accident, 100 if no accident
– No pads, long way: 0 if accident, 80 if no accident
Possible worlds and decision variables
A possible world specifies a value for each random variable and each decision variable.
For each assignment of values to all decision variables, the probabilities of the worlds satisfying that assignment sum to 1.
Here P(Accident = true | WhichWay = long) = 0.01 and P(Accident = true | WhichWay = short) = 0.2, so for each choice of (WearPads, WhichWay) the two matching worlds have probabilities summing to 1 (0.01 + 0.99, or 0.2 + 0.8).
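A quick sanity check of this property, as a Python sketch (the dict encoding is an assumption for illustration):

```python
from itertools import product

# P(Accident = true | WhichWay), from the example.
p_accident = {"long": 0.01, "short": 0.2}

# For each assignment to the decision variables (WearPads, WhichWay),
# the probabilities of the worlds satisfying it must sum to 1.
for pads, way in product([True, False], ["long", "short"]):
    total = sum(p_accident[way] if accident else 1 - p_accident[way]
                for accident in [True, False])
    assert abs(total - 1.0) < 1e-9
```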
Expected utility of a decision
The expected utility of a decision D = d is the probability-weighted average of the utilities of the worlds satisfying it: E[U | D = d] = Σ_w P(w | d) * U(w). For the delivery robot:
– E[U | (pads, short)] = 0.2*35 + 0.8*95 = 83
– E[U | (pads, long)] = 0.01*30 + 0.99*75 = 74.55
– E[U | (no pads, short)] = 0.2*3 + 0.8*100 = 80.6
– E[U | (no pads, long)] = 0.01*0 + 0.99*80 = 79.2
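These numbers can be reproduced directly from the definition; a minimal Python sketch (the function and variable names are illustrative assumptions; the tables are from the example):

```python
# Utility U(WhichWay, Accident, WearPads), from the example's table.
utility = {
    ("short", True, True): 35,  ("short", False, True): 95,
    ("long",  True, True): 30,  ("long",  False, True): 75,
    ("short", True, False): 3,  ("short", False, False): 100,
    ("long",  True, False): 0,  ("long",  False, False): 80,
}
p_accident = {"long": 0.01, "short": 0.2}  # P(Accident = true | WhichWay)

def expected_utility(way, pads):
    """E[U | D=d]: sum over the worlds w consistent with d of P(w|d) * U(w)."""
    p = p_accident[way]
    return p * utility[(way, True, pads)] + (1 - p) * utility[(way, False, pads)]

for way in ("long", "short"):
    for pads in (True, False):
        print((way, pads), expected_utility(way, pads))
# Prints (up to float rounding): (long, True) 74.55, (long, False) 79.2,
# (short, True) 83.0, (short, False) 80.6
```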
Lecture Overview
– Recap: Utility and Expected Utility
– Single-Stage Decision Problems: single-stage decision networks; variable elimination (VE) for computing the optimal decision
– Sequential Decision Problems: general decision networks; time permitting, policies
– Next lecture: variable elimination for finding the optimal policy in general decision networks
Single Action vs. Sequence of Actions
Single action (aka one-off decision):
– One or more primitive decisions that can be treated as a single macro decision to be made before acting
– E.g., "WearPads" and "WhichWay" can be combined into the macro decision (WearPads, WhichWay) with domain {yes, no} × {long, short}
Sequence of actions (sequential decisions):
– Repeat: make observations, decide on an action, carry out the action
– The agent has to take actions without knowing what the future brings
– This is fundamentally different from everything we have seen so far: planning was sequential, but we could still think first and then act
Optimal single-stage decision
Given a single (macro) decision variable D, the agent can choose D = di for any value di ∈ dom(D). The optimal single-stage decision is the di that maximizes the expected utility E[U | D = di].
What is the optimal decision in the example?
Recall the expected utilities: E[U | (pads, long)] = 74.55, E[U | (pads, short)] = 83, E[U | (no pads, long)] = 79.2, E[U | (no pads, short)] = 80.6.
Options: (wear pads, long way), (wear pads, short way), (no pads, short way), (no pads, long way)
Optimal decision in robot delivery example
The maximum expected utility is E[U | (pads, short)] = 83, so the best decision is (wear pads, short way).
Single-stage decision networks
Extend belief networks with:
– Decision nodes, whose values the agent chooses. Parents: only other decision nodes allowed. The domain is the set of possible actions. Drawn as a rectangle.
– Exactly one utility node. Parents: all random and decision variables on which the utility depends. It does not have a domain. Drawn as a diamond.
The network explicitly shows dependencies, e.g., which variables affect the probability of an accident.
Types of nodes in decision networks
A random variable is drawn as an ellipse.
– Arcs into the node represent probabilistic dependence
– As in Bayesian networks: a random variable is conditionally independent of its non-descendants given its parents
A decision variable is drawn as a rectangle.
– Arcs into the node represent information available when the decision is made
A utility node is drawn as a diamond.
– Arcs into the node represent the variables that the utility depends on
– It specifies a utility for each instantiation of its parents
Example Decision Network
Decision nodes do not have an associated table. The utility node does not have a domain.

Which way W   Accident A   Wear Pads P   Utility
long          true         true          30
long          true         false         0
long          false        true          75
long          false        false         80
short         true         true          35
short         true         false         3
short         false        true          95
short         false        false         100

Which way W   Accident A   P(A|W)
long          true         0.01
long          false        0.99
short         true         0.2
short         false        0.8
Computing the optimal decision: we can use VE
Denote the random variables as X1, …, Xn, the decision variables as D, and the parents of node N as pa(N).
To find the optimal decision we can use VE:
1. Create a factor for each conditional probability and for the utility
2. Sum out all random variables, one at a time; this creates a factor on D that gives the expected utility for each di
3. Choose the di with the maximum value in the factor
VE Example: Step 1, create initial factors
Abbreviations: W = Which Way, P = Wear Pads, A = Accident.
Factor f1(A,W), from P(A|W):

Which way W   Accident A   f1(A,W)
long          true         0.01
long          false        0.99
short         true         0.2
short         false        0.8

Factor f2(A,W,P), from the utility table:

Which way W   Accident A   Pads P   f2(A,W,P)
long          true         true     30
long          true         false    0
long          false        true     75
long          false        false    80
short         true         true     35
short         true         false    3
short         false        true     95
short         false        false    100
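For the steps that follow, these factors can be encoded as Python dicts mapping assignments to numbers (a toy encoding chosen for illustration, not the course's tools):

```python
# Factors as dicts from assignment tuples (over the factor's scope) to numbers.
# f1 has scope (A, W); f2 has scope (A, W, P).
f1 = {  # P(A | W)
    (True, "long"): 0.01,  (False, "long"): 0.99,
    (True, "short"): 0.2,  (False, "short"): 0.8,
}
f2 = {  # utility U(A, W, P)
    (True, "long", True): 30,   (True, "long", False): 0,
    (False, "long", True): 75,  (False, "long", False): 80,
    (True, "short", True): 35,  (True, "short", False): 3,
    (False, "short", True): 95, (False, "short", False): 100,
}
```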
VE example: step 2, sum out A
Step 2a: compute the product f1(A,W) × f2(A,W,P).
What is the right form for the product f1(A,W) × f2(A,W,P)?
Options: f(A,P), f(A,W), f(A), f(A,P,W)
VE example: step 2, sum out A
What is the right form for the product f1(A,W) × f2(A,W,P)? It is f(A,P,W): the domain of the product is the union of the multiplicands' domains.
f(A,P,W) = f1(A,W) × f2(A,W,P), i.e., f(A=a, P=p, W=w) = f1(A=a, W=w) × f2(A=a, W=w, P=p).
Step 2a: compute the product f(A,W,P) = f1(A,W) × f2(A,W,P).
VE example: step 2, sum out A
Step 2a: compute the product f(A,W,P) = f1(A,W) × f2(A,W,P), entry by entry via f(A=a, P=p, W=w) = f1(A=a, W=w) × f2(A=a, W=w, P=p). For example, the first entry is f(long, true, true) = 0.01 * 30. Which products fill the remaining entries, e.g. for (long, false, false): 0.01 * 80, 0.99 * 30, 0.99 * 80, or 0.8 * 30?
VE example: step 2, sum out A
Step 2a: compute the product f(A,W,P) = f1(A,W) × f2(A,W,P), with f(A=a, P=p, W=w) = f1(A=a, W=w) × f2(A=a, W=w, P=p):

Which way W   Accident A   Pads P   f(A,W,P)
long          true         true     0.01 * 30
long          true         false    0.01 * 0
long          false        true     0.99 * 75
long          false        false    0.99 * 80
short         true         true     0.2 * 35
short         true         false    0.2 * 3
short         false        true     0.8 * 95
short         false        false    0.8 * 100
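A generic product of two dict-encoded factors, continuing the toy encoding above (the helper and its signature are assumptions for illustration):

```python
from itertools import product as assignments

def multiply(factor_a, scope_a, factor_b, scope_b, domains):
    """Pointwise product; the resulting scope is the union of the two scopes."""
    scope = list(dict.fromkeys(scope_a + scope_b))  # union, order-preserving
    result = {}
    for values in assignments(*(domains[v] for v in scope)):
        asg = dict(zip(scope, values))
        result[values] = (factor_a[tuple(asg[v] for v in scope_a)]
                          * factor_b[tuple(asg[v] for v in scope_b)])
    return result, scope

domains = {"A": [True, False], "W": ["long", "short"], "P": [True, False]}
f, f_scope = multiply(f1, ["A", "W"], f2, ["A", "W", "P"], domains)
# f_scope == ["A", "W", "P"]; e.g. f[(True, "long", True)] == 0.01 * 30
```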
VE example: step 2, sum out A
Step 2b: sum A out of the product f(A,W,P). For example, f3(long, true) = 0.01*30 + 0.99*75 = 74.55. What is f3(short, true)?
Options: 0.99*80 + 0.8*95, 0.2*35 + 0.2*3, 0.2*35 + 0.8*95, 0.8*95 + 0.8*100
VE example: step 2, sum out A
Step 2b: sum A out of the product f(A,W,P):

Which way W   Pads P   f3(W,P)
long          true     0.01*30 + 0.99*75 = 74.55
long          false    0.01*0 + 0.99*80 = 79.2
short         true     0.2*35 + 0.8*95 = 83
short         false    0.2*3 + 0.8*100 = 80.6
VE example: step 3, choose the decision with maximum expected utility
The final factor f3(W,P) encodes the expected utility of each decision. Its maximum entry is f3(short, true) = 83, so taking the short way while wearing pads is the best choice, with an expected utility of 83.
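Continuing the toy encoding, summing out A and taking the maximizing entry reproduces this result (again, the helper names are illustrative assumptions):

```python
def sum_out(factor, scope, var):
    """Sum a variable out of a factor, removing it from the scope."""
    idx = scope.index(var)
    new_scope = scope[:idx] + scope[idx + 1:]
    result = {}
    for values, number in factor.items():
        key = values[:idx] + values[idx + 1:]
        result[key] = result.get(key, 0.0) + number
    return result, new_scope

f3, f3_scope = sum_out(f, f_scope, "A")
# f3 over (W, P), up to float rounding:
# {("long", True): 74.55, ("long", False): 79.2,
#  ("short", True): 83.0, ("short", False): 80.6}
best = max(f3, key=f3.get)  # ("short", True): short way, wearing pads
```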
Variable Elimination for Single-Stage Decision Networks: Summary
1. Create a factor for each conditional probability and for the utility
2. Sum out all random variables, one at a time; this creates a factor on D that gives the expected utility for each di
3. Choose the di with the maximum value in the factor
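Put together, the three steps fit in one short routine over the dict-encoded factors sketched above (a toy version; practical implementations choose elimination orderings and interleave products and sums for efficiency):

```python
def optimal_single_stage_decision(factors, random_vars, domains):
    """factors: list of (table, scope) pairs.
    Returns the best assignment to the remaining (decision) variables
    and its expected utility."""
    table, scope = factors[0]
    for other_table, other_scope in factors[1:]:   # step 1: multiply factors
        table, scope = multiply(table, scope, other_table, other_scope, domains)
    for var in random_vars:                        # step 2: sum out random vars
        table, scope = sum_out(table, scope, var)
    best = max(table, key=table.get)               # step 3: maximize
    return dict(zip(scope, best)), table[best]

decision, eu = optimal_single_stage_decision(
    [(f1, ["A", "W"]), (f2, ["A", "W", "P"])], ["A"], domains)
# decision == {"W": "short", "P": True}, eu == 83.0 (up to float rounding)
```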
Lecture Overview
– Recap: Utility and Expected Utility
– Single-Stage Decision Problems: single-stage decision networks; variable elimination (VE) for computing the optimal decision
– Sequential Decision Problems: general decision networks and policies
– Next lecture: variable elimination for finding the optimal policy in general decision networks
Sequential Decision Problems
An intelligent agent doesn't make a multi-step decision and carry it out blindly; it takes the new observations it makes into account.
A more typical scenario:
– The agent observes, acts, observes, acts, …
– Subsequent actions can depend on what is observed
– What is observed often depends on previous actions
– Often the sole reason for carrying out an action is to provide information for future actions (for example, diagnostic tests or spying)
General decision networks are just like single-stage decision networks, with one exception: the parents of decision nodes can include random variables.
Sequential Decision Problems: Example
Example of a sequential decision problem: Treatment depends on TestResult (among other variables).
Each decision Di has an information set of variables pa(Di), whose values will be known at the time decision Di is made:
– pa(Test) = {Symptoms}
– pa(Treatment) = {Test, Symptoms, TestResult}
(Decision node: agent decides. Chance node: chance decides.)
Sequential Decision Problems: Example
Another example of a sequential decision problem: Call depends on Report and SeeSmoke (and on CheckSmoke).
(Decision node: agent decides. Chance node: chance decides.)
Sequential Decision Problems
What should an agent do?
– What an agent should do depends on what it will do in the future. E.g., the agent only needs to check for smoke if that will affect whether it calls.
– What an agent does in the future depends on what it did before. E.g., when making the Call decision, it needs to know whether it checked for smoke.
– We get around this circularity as follows: the agent has a conditional plan of what it will do in the future. We will formalize this conditional plan as a policy.
Policies for Sequential Decision Problems
Definition (Policy): a policy is a sequence δ1, …, δn of decision functions δi : dom(pa(Di)) → dom(Di).
A policy means that when the agent has observed o ∈ dom(pa(Di)), it will do δi(o). A decision function needs to specify a value for each instantiation of its decision's parents.
CheckSmoke has the single binary parent Report, so a decision function δcs must specify a value for each of its 2 instantiations; there are 2^2 = 4 possible decision functions for CheckSmoke:

Report   δcs1   δcs2   δcs3   δcs4
true     T      T      F      F
false    T      F      T      F
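These decision functions can be enumerated mechanically; a small Python sketch (the helper name and encoding are assumptions):

```python
from itertools import product

def decision_functions(parent_domains, decision_domain):
    """All mappings from parent instantiations to decision values."""
    instantiations = list(product(*parent_domains))
    for outputs in product(decision_domain, repeat=len(instantiations)):
        yield dict(zip(instantiations, outputs))

# CheckSmoke: one binary parent (Report) gives 2 instantiations,
# hence 2^2 = 4 decision functions.
assert len(list(decision_functions([[True, False]], [True, False]))) == 4

# Call: three binary parents (Report, CheckSmoke, SeeSmoke) give
# 2^3 = 8 instantiations, hence 2^8 = 256 decision functions.
assert len(list(decision_functions([[True, False]] * 3, [True, False]))) == 256
```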
Policies for Sequential Decision Problems
Definition (Policy): a policy is a sequence δ1, …, δn of decision functions δi : dom(pa(Di)) → dom(Di).
Call has three binary parents (Report, CheckSmoke, SeeSmoke), so a decision function for Call must specify a value for each of the 2^3 = 8 instantiations of its parents. There are 2^8 = 256 possible decision functions δcall for Call:

Report   CheckS   SeeS    δcall1   …   δcall256
true     true     true    true     …   false
true     true     false   true     …   false
…        …        …       …        …   …
false    false    false   true     …   false
How many policies are there?
If a decision D has k binary parents, how many assignments of values to the parents are there?
Options: k^2, 2^k, k^2 + k, 2k
How many policies are there?
If a decision D has k binary parents, there are 2^k assignments of values to the parents.
If there are b possible values for a decision variable, how many different decision functions are there for it if it has k binary parents?
Options: b^(2^k), 2^(k·b), b · 2^k, (2^k)^b
How many policies are there?
If a decision D has k binary parents, there are 2^k assignments of values to the parents.
If there are b possible values for a decision variable with k binary parents, there are b^(2^k) different decision functions for it: each decision function independently picks one of the b values for each of the 2^k parent instantiations.
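In code the count is a one-liner, which matches both examples above (a trivial sketch; the policy-count comment reflects the definition of a policy as one decision function per decision node):

```python
def num_decision_functions(b, k):
    """Decision variable with b possible values and k binary parents."""
    return b ** (2 ** k)

assert num_decision_functions(2, 1) == 4    # CheckSmoke: parent Report
assert num_decision_functions(2, 3) == 256  # Call: Report, CheckSmoke, SeeSmoke

# A policy picks one decision function per decision node, so the number of
# policies is the product of these counts over all decision nodes.
```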
Learning Goals For Today's Class
– Compare and contrast stochastic single-stage (one-off) decisions vs. multi-stage decisions
– Define a utility function on possible worlds
– Define and compute optimal one-off decisions
– Represent one-off decisions as single-stage decision networks
– Compute optimal decisions by variable elimination
Next time: variable elimination for finding optimal policies
Announcements
Assignment 4 is due on Monday.
The final exam is on Monday, April 11. The list of short questions is online; please use it!
Office hours next week:
– Simona: Tuesday, 10-12 (no office hours on Monday!)
– Mike: Wednesday 1-2pm, Friday 10-12
– Vasanth: Thursday, 3-5pm
– Frank: X530, Tue 5-6pm and Thu 11-12; DMP 110, 1 hour after each lecture
Optional Rainbow Robot tournament: Friday, April 8
– Hopefully in the normal classroom (DMP 110)
– Vasanth will run the tournament; I'll do office hours in the same room (this is 3 days before the final)