Deciding Under Probabilistic Uncertainty


1 Deciding Under Probabilistic Uncertainty
Russell and Norvig: Sect. …, Chap. 17. CS121 – Winter 2003

2 Non-deterministic vs. Probabilistic Uncertainty
Non-deterministic model: an action's outcome is one of {a, b, c}; the best decision is the one that is best for the worst case (~ adversarial search).
Probabilistic model: an action's outcome is one of {a (p_a), b (p_b), c (p_c)}, with known probabilities; the best decision is the one that maximizes the expected utility value.

3 One State/One Action Example
(One action from S0 leads to s1, s2, s3 with probabilities 0.2, 0.7, 0.1 and utilities 100, 50, 70.)
U(S0) = 100 × 0.2 + 50 × 0.7 + 70 × 0.1 = 20 + 35 + 7 = 62  (utility of state S0)

4 One State/Two Actions Example
(From s0, action A1 leads to s1, s2, s3 with probabilities 0.2, 0.7, 0.1; action A2 leads to two successors with probabilities 0.2, 0.8; the state utilities are 100, 50, 70, 80.)
U1(S0) = 62
U2(S0) = 74
U(S0) = max{U1(S0), U2(S0)} = 74

5 Introducing Action Costs
(Same diagram, with cost −5 on A1 and −25 on A2.)
U1(S0) = 62 − 5 = 57
U2(S0) = 74 − 25 = 49
U(S0) = max{U1(S0), U2(S0)} = 57
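Below is a minimal Python sketch of the computations on slides 3–5. The probabilities, utilities, and action costs come from the slides; the pairing of A2's probabilities with particular utilities is inferred from the slide's totals, and all names are illustrative.

```python
# A minimal sketch of the expected-utility computations on slides 3-5.
# Probabilities, utilities, and costs come from the slides; the pairing of
# A2's probabilities with utilities is inferred from the slide's totals.

def expected_utility(outcomes, cost=0.0):
    """Sum of P(s) * U(s) over (probability, utility) pairs, minus the cost."""
    return sum(p * u for p, u in outcomes) - cost

a1 = [(0.2, 100), (0.7, 50), (0.1, 70)]   # slide 3: U1(S0) = 62
a2 = [(0.2, 50), (0.8, 80)]               # slide 4: U2(S0) = 74
print(expected_utility(a1))               # 62.0
print(max(expected_utility(a1, cost=5),   # 57.0: with costs, A1 wins
          expected_utility(a2, cost=25)))
```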

6 Example: Finding Juliet
A robot, Romeo, is in Charles' office and must deliver a letter to Juliet.
Juliet is either in her office or in the conference room; without other prior knowledge, each possibility has probability 0.5.
The robot's goal is to minimize the time spent in transit.
(Map: Charles' office, Juliet's office, and the conference room, with travel times of 10 min and 5 min.)

7 Example: Finding Juliet
States:
S0: Romeo in Charles' office
S1: Romeo in Juliet's office and Juliet here
S2: Romeo in Juliet's office and Juliet not here
S3: Romeo in conference room and Juliet here
S4: Romeo in conference room and Juliet not here
Actions:
GJO (go to Juliet's office)
GCR (go to conference room)

8 Utility Computation
(Decision tree from Charles' office: actions GJO and GCR lead to states 1–4; the 10 min and 5 min travel times appear as costs, giving leaf utilities −10, −10, −10, −15.)

9 n-Step Decision Process
There is a single initial state.
States reached after i steps are all different from those reached after j ≠ i steps.
Each state i has its reward R(i).
Each state reached after n steps is terminal.
The goal is to maximize the sum of rewards.

10 n-Step Decision Process
Utility of state i: U(i) = R(i) + max_a Σ_k P(k | a.i) U(k)
Example with two possible actions from i, a1 and a2:
a1 leads to k11 with probability P11 = P(k11 | a1.i), or to k12 with probability P12 = P(k12 | a1.i)
a2 leads to k21 with probability P21, or to k22 with probability P22
U(i) = R(i) + max{P11 U(k11) + P12 U(k12), P21 U(k21) + P22 U(k22)}

11 n-Step Decision Process
Utility of state i: U(i) = R(i) + max_a Σ_k P(k | a.i) U(k)
Best choice of action at state i: Π*(i) = arg max_a Σ_k P(k | a.i) U(k)
Optimal policy:
For j = n−1, n−2, …, 0 do:
For every state s_i attained after step j:
Compute the utility of s_i
Label that state with the corresponding best action
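This loop is ordinary backward induction. Here is a hedged Python sketch of it; the slides fix only the recurrence and the loop order, so the data layout (state layers, reward and transition tables) is an illustrative assumption.

```python
# A sketch of the backward-induction loop above. The data layout is assumed;
# the recurrence and loop order are from slide 11.

def backward_induction(layers, R, P, actions):
    """
    layers[j]  : states reachable after step j (layers[n] are terminal)
    R[s]       : reward of state s
    P[(s, a)]  : list of (probability, next_state) pairs
    actions[s] : actions available in non-terminal state s
    Returns the utilities U and the optimal policy pi.
    """
    n = len(layers) - 1
    U, pi = {}, {}
    for s in layers[n]:                  # terminal states: U = reward
        U[s] = R[s]
    for j in range(n - 1, -1, -1):       # j = n-1, n-2, ..., 0
        for s in layers[j]:
            # U(i) = R(i) + max_a sum_k P(k | a.i) U(k); label s with best a
            best = max(actions[s],
                       key=lambda a: sum(p * U[k] for p, k in P[(s, a)]))
            U[s] = R[s] + sum(p * U[k] for p, k in P[(s, best)])
            pi[s] = best
    return U, pi
```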

12 n-Step Decision Process …
… with costs on actions instead of rewards on states:
Utility of state i: U(i) = max_a (Σ_k P(k | a.i) U(k) − C_a)
Best choice of action at state i: Π*(i) = arg max_a (Σ_k P(k | a.i) U(k) − C_a)

13 Deciding Under Probabilistic Uncertainty
(Figure: the Finding Juliet decision tree; from state i, actions GJO and GCR lead to states 1–4.)

14 Deciding Under Probabilistic Uncertainty
(Figure: the same tree with each state labeled by its best action, GJO or GCR.)

15 Target Tracking
The robot must keep the target in view.
The target's trajectory is not known in advance.
The environment may or may not be known.
(Figure: robot and target.)


17 States Are Indexed by Time
State = (robot position, target position, time), e.g. ([i,j], [u,v], t).
Actions = {stop, up, down, right, left}.
Example: action right takes the robot to [i+1,j], and the target moves to one of [u,v], [u−1,v], [u+1,v], [u,v−1], [u,v+1], each with probability 0.2.
Outcome of an action = 5 possible states, each with probability 0.2; with 5 actions, each state has 25 successors.
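A small illustrative enumeration of those successors (the move set and the 0.2 outcome probability are from the slide; grid bounds are ignored for brevity):

```python
# Illustrative enumeration of the 25 successors of a tracking state
# ((robot, target, time), slide 17). Grid bounds are ignored for brevity.

MOVES = {"stop": (0, 0), "up": (0, 1), "down": (0, -1),
         "right": (1, 0), "left": (-1, 0)}

def successors(robot, target, t):
    """Yield (action, (successor_state, probability)) for all 25 outcomes."""
    for action, (dx, dy) in MOVES.items():            # 5 robot actions
        r = (robot[0] + dx, robot[1] + dy)
        for tdx, tdy in MOVES.values():               # 5 target moves, p = 0.2 each
            u = (target[0] + tdx, target[1] + tdy)
            yield action, ((r, u, t + 1), 0.2)

print(sum(1 for _ in successors((3, 2), (5, 5), 0)))  # 25
```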

18 h-Step Planning Process
Planning horizon h.
“Terminal” states: states where the target is not visible, and states at depth h.
Discounting: R(state) = a^t, where 0 < a < 1.
Reward function: target visible → +1, target not visible → 0.
Maximizing the sum of rewards ~ maximizing escape time.

19 h-Step Planning Process
Planning horizon h. The planner computes the optimal policy over this tree of states, but only the first step of the policy is executed; then everything is repeated from the new state (sliding horizon). See the sketch below.
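A minimal sketch of that sliding-horizon loop, assuming hypothetical plan_over_tree, execute, and is_terminal hooks; plan_over_tree(state, h) could run the backward induction of slide 11 over the depth-h tree and return the optimal first action.

```python
# A minimal sketch of the sliding-horizon loop; all hooks are assumptions.

def track(state, h, plan_over_tree, execute, is_terminal):
    while not is_terminal(state):
        a = plan_over_tree(state, h)  # optimal policy over the h-step tree...
        state = execute(a)            # ...but only its first step is executed,
                                      # then we replan from the new state
    return state
```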

20 h-Step Planning Process
Planning horizon h

21 h-Step Planning Process
Planning horizon h: h is chosen so that the optimal policy over the tree can be computed within one time increment.

22 Example With No Planner

23 Example With Planner

24 Other Example

25 h-Step Planning Process
Planning horizon h. The optimal policy over this tree is not the optimal policy that would have been computed if a prior model of the environment had been available, along with an arbitrarily fast computer.

26 Deciding Under Probabilistic Uncertainty
(Recap figure: the Finding Juliet decision tree, with actions GJO and GCR from state i.)

27 Simple Robot Navigation Problem
In each state, the possible actions are U, D, R, and L

28 Probabilistic Transition Model
In each state, the possible actions are U, D, R, and L. The effect of U is as follows (transition model):
With probability 0.8 the robot moves up one square (if the robot is already in the top row, then it does not move).

29 Probabilistic Transition Model
In each state, the possible actions are U, D, R, and L. The effect of U is as follows (transition model):
With probability 0.8 the robot moves up one square (if the robot is already in the top row, then it does not move).
With probability 0.1 the robot moves right one square (if the robot is already in the rightmost column, then it does not move).

30 Probabilistic Transition Model
In each state, the possible actions are U, D, R, and L. The effect of U is as follows (transition model):
With probability 0.8 the robot moves up one square (if the robot is already in the top row, then it does not move).
With probability 0.1 the robot moves right one square (if the robot is already in the rightmost column, then it does not move).
With probability 0.1 the robot moves left one square (if the robot is already in the leftmost column, then it does not move).
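A sketch of this transition model in Python. The 0.8/0.1/0.1 split and the staying-put behavior at walls are from the slides; the 4×3 grid size and the 1-based (column, row) coordinate convention are assumptions, and obstacles are ignored here.

```python
# A sketch of the action-U transition model of slides 28-30. Grid size and
# coordinate convention are assumptions; obstacles are ignored.

def move(state, dx, dy, cols=4, rows=3):
    """Move one square, staying put if the move would leave the grid."""
    x, y = state
    nx, ny = x + dx, y + dy
    return (nx, ny) if 1 <= nx <= cols and 1 <= ny <= rows else (x, y)

def transition_U(state):
    """Successor distribution of action U: up 0.8, right 0.1, left 0.1."""
    dist = {}
    for prob, (dx, dy) in [(0.8, (0, 1)), (0.1, (1, 0)), (0.1, (-1, 0))]:
        nxt = move(state, dx, dy)
        dist[nxt] = dist.get(nxt, 0.0) + prob
    return dist

print(transition_U((3, 2)))  # {(3, 3): 0.8, (4, 2): 0.1, (2, 2): 0.1}
```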

31 Markov Property
The transition properties depend only on the current state, not on the previous history (how that state was reached).

32 Sequence of Actions
Planned sequence of actions: (U, R). U is executed.


34 Sequence of Actions
Planned sequence of actions: (U, R). U is executed, then R is executed.


36 Sequence of Actions
The robot starts at [3,2]. Planned sequence of actions: (U, R).

37 Sequence of Actions
Planned sequence of actions: (U, R). After U is executed, the robot may be at [3,2], [4,2], or [3,3].

38 Histories
U has been executed; the robot is at [3,2], [4,2], or [3,3]. Now R is executed, leading to one of [3,1], [3,2], [3,3], [4,1], [4,2], [4,3]. There are 9 possible sequences of states, called histories, and 6 possible final states for the robot!

39 Probability of Reaching the Goal
Note the importance of the Markov property in this derivation:
P([4,3] | (U,R).[3,2]) = P([4,3] | R.[3,3]) × P([3,3] | U.[3,2]) + P([4,3] | R.[4,2]) × P([4,2] | U.[3,2])
P([4,3] | R.[3,3]) = 0.8, P([4,3] | R.[4,2]) = 0.1
P([3,3] | U.[3,2]) = 0.8, P([4,2] | U.[3,2]) = 0.1
P([4,3] | (U,R).[3,2]) = 0.8 × 0.8 + 0.1 × 0.1 = 0.65

40 Utility Function
(Grid: +1 at [4,3], −1 at [4,2].)
The robot needs to recharge its batteries.
[4,3] provides a power supply; [4,2] is a sand area from which the robot cannot escape.
[4,3] and [4,2] are terminal states.
Reward of a terminal state: +1 or −1. Reward of a non-terminal state: −1/25.
Utility of a history: sum of rewards of traversed states.
Goal: maximize the utility of the history.

41 Deciding Under Probabilistic Uncertainty
Histories are potentially unbounded, and the same state can be reached many times.

42 Utility of an Action Sequence
Consider the action sequence (U, R) from [3,2].

43 Utility of an Action Sequence
Consider the action sequence (U, R) from [3,2]. A run produces one among 7 possible histories, each with some probability.

44 Utility of an Action Sequence
Consider the action sequence (U, R) from [3,2]. A run produces one among 7 possible histories, each with some probability. The utility of the sequence is the expected utility of the histories: U = Σ_h U_h P(h).
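The expectation U = Σ_h U_h P(h) can be computed without enumerating histories explicitly, by recursing one step at a time. A sketch, with transition, reward, and is_terminal as assumed hooks matching the grid of slide 40:

```python
# A sketch of U = sum_h U_h P(h), computed recursively rather than by
# enumerating histories. transition(s, a) -> {s2: p}; other hooks assumed.

def sequence_utility(state, actions, transition, reward, is_terminal):
    """Expected utility of executing the action list open-loop from `state`."""
    if is_terminal(state) or not actions:
        return reward(state)              # the history ends here
    first, rest = actions[0], actions[1:]
    return reward(state) + sum(
        p * sequence_utility(s2, rest, transition, reward, is_terminal)
        for s2, p in transition(state, first).items()
    )
```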

45 Optimal Action Sequence
Consider the action sequence (U, R) from [3,2]. A run produces one among 7 possible histories, each with some probability. The utility of the sequence is the expected utility of the histories. The optimal sequence is the one with maximal utility.

46 Optimal Action Sequence
Consider the action sequence (U, R) from [3,2]. A run produces one among 7 possible histories, each with some probability; the utility of the sequence is the expected utility of the histories, and the optimal sequence is the one with maximal utility. But is the optimal action sequence what we want to compute? No, except if the sequence is executed blindly (an open-loop strategy).

47 Reactive Agent Algorithm
Observable state.
Repeat:
s ← sensed state
If s is terminal then exit
a ← choose action (given s)
Perform a

48 Deciding Under Probabilistic Uncertainty
(Recap) Utility of state i: U(i) = max_a (Σ_k P(k | a.i) U(k) − C_a)
Best choice of action at state i: Π*(i) = arg max_a (Σ_k P(k | a.i) U(k) − C_a)

49 Policy (Reactive/Closed-Loop Strategy)
A policy Π is a complete mapping from states to actions. (Grid figure: an action arrow in each state; +1 and −1 terminals.)

50 Reactive Agent Algorithm
Repeat:
s ← sensed state
If s is terminal then exit
a ← Π(s)
Perform a
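A direct Python rendering of this loop, with sense, perform, and is_terminal as assumed environment hooks:

```python
# The reactive agent loop of slide 50; the environment hooks are assumptions.

def run_policy(policy, sense, perform, is_terminal):
    while True:
        s = sense()              # s <- sensed state
        if is_terminal(s):
            return s             # terminal state reached
        perform(policy[s])       # a <- Pi(s); perform a
```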

51 Optimal Policy
A policy Π is a complete mapping from states to actions.
The optimal policy Π* is the one that always yields a history (ending at a terminal state) with maximal expected utility.
This makes sense because of the Markov property.
Note that [3,2] is a “dangerous” state that the optimal policy tries to avoid.

52 Optimal Policy
A policy Π is a complete mapping from states to actions.
The optimal policy Π* is the one that always yields a history with maximal expected utility.
How to compute Π*? This problem is called a Markov Decision Problem (MDP).

53 Deciding Under Probabilistic Uncertainty
(Recap) Utility of state i: U(i) = max_a (Σ_k P(k | a.i) U(k) − C_a)
Best choice of action at state i: Π*(i) = arg max_a (Σ_k P(k | a.i) U(k) − C_a)

54 Deciding Under Probabilistic Uncertainty
The trick used in target tracking (indexing states by time) could be applied… but it would yield a large tree and a sub-optimal policy.

55 First-Step Analysis
Simulate one step:
U(i) = R(i) + max_a Σ_k P(k | a.i) U(k)
Π*(i) = arg max_a Σ_k P(k | a.i) U(k)  (principle of Maximum Expected Utility)

56 What is the Difference?
U(i) = R(i) + max_a Σ_k P(k | a.i) U(k)
Π*(i) = arg max_a Σ_k P(k | a.i) U(k)

57 Value Iteration
Initialize the utility of each non-terminal state s_i to U_0(i) = 0.
For t = 0, 1, 2, …, do:
U_{t+1}(i) ← R(i) + max_a Σ_k P(k | a.i) U_t(k)
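A sketch of value iteration in Python. The update rule is the slide's; the fixed sweep count and the model representation (callables plus a terminal set) are illustrative choices.

```python
# A sketch of value iteration (slide 57); data layout and sweep count assumed.

def value_iteration(states, actions, transition, reward, terminals, sweeps=50):
    """transition(s, a) -> {s2: p}. Returns utilities U after `sweeps` updates."""
    # U_0(i) = 0 for non-terminal states; terminal states keep their reward.
    U = {s: (reward(s) if s in terminals else 0.0) for s in states}
    for _ in range(sweeps):
        new_U = dict(U)
        for s in states:
            if s in terminals:
                continue
            # U_{t+1}(i) <- R(i) + max_a sum_k P(k | a.i) U_t(k)
            new_U[s] = reward(s) + max(
                sum(p * U[s2] for s2, p in transition(s, a).items())
                for a in actions
            )
        U = new_U
    return U
```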

58 Value Iteration
Note the importance of terminal states and the connectivity of the state-transition graph.
Initialize the utility of each non-terminal state s_i to U_0(i) = 0.
For t = 0, 1, 2, …, do: U_{t+1}(i) ← R(i) + max_a Σ_k P(k | a.i) U_t(k)
(Plot: U_t([3,1]) climbs toward 0.611 as t goes from 0 to 30.)
Converged utilities on the grid:
Row 3: 0.812  0.868  0.918  +1
Row 2: 0.762  –      0.660  −1
Row 1: 0.705  0.655  0.611  0.388
Not very different from indexing states by time.

59 Policy Iteration
Pick a policy Π at random.

60 Policy Iteration
Pick a policy Π at random.
Repeat:
Compute the utility of each state for Π: U(i) = R(i) + Σ_k P(k | Π(i).i) U(k)
This is a set of linear equations (often a sparse system).

61 Policy Iteration
Pick a policy Π at random.
Repeat:
Compute the utility of each state for Π: U(i) = R(i) + Σ_k P(k | Π(i).i) U(k)
Compute the policy Π' given these utilities: Π'(i) = arg max_a Σ_k P(k | a.i) U(k)
If Π' = Π then return Π, else Π ← Π'
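A sketch of policy iteration in Python, with policy evaluation done by solving the linear system directly (the slides note it is a set of linear equations, often sparse; a dense numpy solve is used here for simplicity, and the data layout is an assumption):

```python
# A sketch of policy iteration (slides 59-61); data layout assumed.
import numpy as np

def policy_iteration(states, actions, transition, reward, terminals):
    """transition(s, a) -> {s2: p}. Returns an optimal policy dict."""
    idx = {s: i for i, s in enumerate(states)}
    pi = {s: actions[0] for s in states if s not in terminals}  # arbitrary start
    while True:
        # Policy evaluation: solve (I - P_pi) U = R, terminals fixed at R.
        A = np.eye(len(states))
        b = np.array([float(reward(s)) for s in states])
        for s in pi:
            for s2, p in transition(s, pi[s]).items():
                A[idx[s], idx[s2]] -= p
        U = np.linalg.solve(A, b)
        # Policy improvement: greedy one-step lookahead in the utilities.
        def q(s, a):
            return sum(p * U[idx[s2]] for s2, p in transition(s, a).items())
        new_pi = {s: max(actions, key=lambda a, s=s: q(s, a)) for s in pi}
        if new_pi == pi:
            return pi                # Pi' = Pi: the policy is optimal
        pi = new_pi
```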

62 Application of First-Step Analysis
Computing the probability of folding, pfold, of a protein (example: HIV integrase).
(Figure: a conformation reaches the folded state with probability pfold and the unfolded state with probability 1 − pfold.)

63 Computation Through Simulation
Ensemble property = a global, many-paths property. Estimating pfold by simulation takes 10K to 30K independent runs.

64 Deciding Under Probabilistic Uncertainty
Capture the stochastic nature of molecular motion by a probabilistic roadmap. (Figure: roadmap nodes v_i and v_j with edge probability P_ij.)

65 Deciding Under Probabilistic Uncertainty
Edge probabilities P_ij follow the Metropolis criterion. Each node v_i also has a self-transition probability P_ii.

66 First-Step Analysis
U: unfolded set; F: folded set.
Let f_i = pfold(i). After one step: f_i = P_ii f_i + P_ij f_j + P_ik f_k + P_il f_l + P_im f_m  (with P_ii + P_ij + P_ik + P_il + P_im = 1)
One linear equation per node; the solution gives pfold for all nodes.
No explicit simulation run; all pathways are taken into account; sparse linear system.
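A sketch of this computation on a toy roadmap: one equation per node, with boundary conditions f = 1 on the folded set F and f = 0 on the unfolded set U, solved as a linear system. The 4-node chain is made up for illustration.

```python
# A sketch of the pfold linear system of slide 66 on a made-up roadmap.
import numpy as np

def pfold(P, folded, unfolded):
    """P: (n, n) row-stochastic transition matrix. Returns f, f[i] = pfold(i)."""
    n = P.shape[0]
    A, b = np.eye(n) - P, np.zeros(n)
    for i in folded | unfolded:
        A[i] = 0.0
        A[i, i] = 1.0
        b[i] = 1.0 if i in folded else 0.0   # boundary conditions
    return np.linalg.solve(A, b)

# Toy chain 0-1-2-3: node 0 unfolded, node 3 folded.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
print(pfold(P, folded={3}, unfolded={0}))    # [0.  0.333  0.667  1.]
```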

67 Partially Observable Markov Decision Problem
Uncertainty in sensing: A sensing operation returns multiple states, with a probability distribution

68 Example: Target Tracking
There is uncertainty in the robot's and target's positions, and this uncertainty grows with further motion. There is a risk that the target escapes behind the corner, requiring the robot to move appropriately. But there is a positioning landmark nearby. Should the robot try to reduce its position uncertainty?

69 Summary
Probabilistic uncertainty
Utility function
Optimal policy
Maximal expected utility
Value iteration
Policy iteration

