Decision Making Under Uncertainty CMSC 471 – Fall 2011 Class #23-24 – Thursday, November 17 / Tuesday, November 22 R&N, Chapters 15.1-15.2, 16.1-16.3, material from Lise Getoor, Jean-Claude Latombe, and Daphne Koller 1

MODELING UNCERTAINTY OVER TIME 2

Temporal Probabilistic Agent [Diagram: an agent connected to its environment through sensors and actuators, acting over time steps t_1, t_2, t_3, …] 3

Time and Uncertainty
The world changes; we need to track and predict it
Examples: diabetes management, traffic monitoring
Basic idea: copy state and evidence variables for each time step, and model uncertainty in change over time, incorporating new observations as they arrive
Reminiscent of situation calculus, but for stochastic domains
X_t – set of unobservable state variables at time t, e.g., BloodSugar_t, StomachContents_t
E_t – set of evidence variables at time t, e.g., MeasuredBloodSugar_t, PulseRate_t, FoodEaten_t
Assumes discrete time steps 4

States and Observations
The process of change is viewed as a series of snapshots, each describing the state of the world at a particular time
Each time slice is represented by a set of random variables indexed by t:
1. the set of unobservable state variables X_t
2. the set of observable evidence variables E_t
The observation at time t is E_t = e_t for some set of values e_t
The notation X_{a:b} denotes the set of variables from X_a to X_b 5

Stationary Process/Markov Assumption
Markov Assumption: X_t depends on only a bounded subset of the previous states X_{0:t-1}
First-order Markov process: P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})
kth-order: X_t depends on the previous k time steps
Sensor Markov assumption: P(E_t | X_{0:t}, E_{0:t-1}) = P(E_t | X_t)
That is: the agent’s observations depend only on the actual current state of the world
Assume a stationary process: the transition model P(X_t | X_{t-1}) and sensor model P(E_t | X_t) are time-invariant (i.e., they are the same for all t)
That is, the changes in the world state are governed by laws that do not themselves change over time 6

Complete Joint Distribution
Given:
Transition model: P(X_t | X_{t-1})
Sensor model: P(E_t | X_t)
Prior probability: P(X_0)
Then we can specify the complete joint distribution of a sequence of states:
P(X_0, X_1, …, X_t, E_1, …, E_t) = P(X_0) ∏_{i=1..t} P(X_i | X_{i-1}) P(E_i | X_i) 7
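To make the factorization concrete, here is a minimal Python sketch; the two-state model and its numeric tables are made-up placeholders, not values from the slides:

```python
# Joint probability of a state/evidence sequence under a first-order model:
# P(x_0..t, e_1..t) = P(x_0) * prod_i P(x_i | x_{i-1}) * P(e_i | x_i)

def joint_prob(states, evidence, prior, transition, sensor):
    """states: [x_0, ..., x_t]; evidence: [e_1, ..., e_t] (one per step after x_0)."""
    p = prior[states[0]]
    for i in range(1, len(states)):
        p *= transition[states[i - 1]][states[i]]  # P(x_i | x_{i-1})
        p *= sensor[states[i]][evidence[i - 1]]    # P(e_i | x_i)
    return p

# Hypothetical two-state model (True = raining):
prior = {True: 0.5, False: 0.5}
transition = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
sensor = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

print(joint_prob([False, True, True], [True, True], prior, transition, sensor))
```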

Example
[Figure: two-slice dynamic Bayesian network for the umbrella world — Rain_{t-1} → Rain_t → Rain_{t+1}, with each Umbrella_t observed from Rain_t; conditional probability tables P(R_t | R_{t-1}) and P(U_t | R_t), indexed by true/false]
This should look a lot like a finite state automaton (since it is one...) 8

Inference Tasks
Filtering or monitoring: P(X_t | e_1,…,e_t) — compute the current belief state, given all evidence to date
Prediction: P(X_{t+k} | e_1,…,e_t) — compute the probability of a future state
Smoothing: P(X_k | e_1,…,e_t) for k < t — compute the probability of a past state (hindsight)
Most likely explanation: argmax_{x_1,…,x_t} P(x_1,…,x_t | e_1,…,e_t) — given a sequence of observations, find the sequence of states that is most likely to have generated those observations 9

Examples Filtering: What is the probability that it is raining today, given all of the umbrella observations up through today? Prediction: What is the probability that it will rain the day after tomorrow, given all of the umbrella observations up through today? Smoothing: What is the probability that it rained yesterday, given all of the umbrella observations through today? Most likely explanation: If the umbrella appeared the first three days but not on the fourth, what is the most likely weather sequence to produce these umbrella sightings? 10

Filtering
We use recursive estimation to compute P(X_{t+1} | e_{1:t+1}) as a function of e_{t+1} and P(X_t | e_{1:t})
We can write this as follows:
P(X_{t+1} | e_{1:t+1}) = α P(e_{t+1} | X_{t+1}) Σ_{x_t} P(X_{t+1} | x_t) P(x_t | e_{1:t})
This leads to a recursive definition: f_{1:t+1} = α FORWARD(f_{1:t}, e_{t+1})
QUIZLET: What is α? 11
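A minimal sketch of one FORWARD step in Python, assuming a discrete state variable; the structure follows the equation above, and the comments answer the quizlet:

```python
# One step of recursive filtering (the FORWARD operation):
# f_{1:t+1} = alpha * P(e_{t+1} | X_{t+1}) * sum_{x_t} P(X_{t+1} | x_t) * f_{1:t}(x_t)

def forward(f, evidence, transition, sensor):
    """f: current belief {state: prob}; returns the normalized updated belief."""
    unnormalized = {}
    for x_next in f:
        prior = sum(transition[x][x_next] * f[x] for x in f)      # prediction step
        unnormalized[x_next] = sensor[x_next][evidence] * prior   # weight by evidence
    # alpha is the normalization constant that makes the belief sum to 1
    alpha = 1.0 / sum(unnormalized.values())
    return {x: alpha * p for x, p in unnormalized.items()}
```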

Filtering Example
[Figure: the umbrella DBN again, with its transition table P(R_t | R_{t-1}) and sensor table P(U_t | R_t)]
What is the probability of rain on Day 2, given a uniform prior of rain on Day 0, U_1 = true, and U_2 = true? 12
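Running two FORWARD steps answers the question. The numbers below are the standard umbrella-world values from R&N Chapter 15 (assumed here, since the slide's probability tables did not survive): P(R_t | R_{t-1}=T) = 0.7, P(R_t | R_{t-1}=F) = 0.3, P(U | R=T) = 0.9, P(U | R=F) = 0.2.

```python
# Umbrella example; model numbers assumed from R&N Sect. 15.2.
transition = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
sensor = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

belief = {True: 0.5, False: 0.5}  # uniform prior on rain at Day 0
for u in [True, True]:            # U_1 = true, U_2 = true
    pred = {r: sum(transition[p][r] * belief[p] for p in belief) for r in belief}
    unnorm = {r: sensor[r][u] * pred[r] for r in belief}
    z = sum(unnorm.values())
    belief = {r: unnorm[r] / z for r in belief}

print(belief[True])  # ~0.883: probability of rain on Day 2
```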

DECISION MAKING UNDER UNCERTAINTY

Decision Making Under Uncertainty Many environments have multiple possible outcomes Some of these outcomes may be good; others may be bad Some may be very likely; others unlikely What’s a poor agent to do?? 14

Non-Deterministic vs. Probabilistic Uncertainty
Non-deterministic model: an action’s possible outcomes are a set {a, b, c} → choose the decision that is best for the worst case (~ adversarial search)
Probabilistic model: outcomes have probabilities {a(p_a), b(p_b), c(p_c)} → choose the decision that maximizes expected utility 15

Expected Utility
Random variable X with n values x_1,…,x_n and distribution (p_1,…,p_n)
E.g.: X is the state reached after doing an action A under uncertainty
Function U of X — e.g., U is the utility of a state
The expected utility of A is EU[A] = Σ_{i=1,…,n} P(x_i | A) U(x_i) 16
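A small sketch of the computation; the outcome probabilities and utilities below are hypothetical stand-ins chosen to match the EU values (62 and 74) used on the next slides:

```python
# MEU choice among actions: pick argmax_a sum_i P(x_i | a) * U(x_i)
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "A1": [(0.2, 100), (0.7, 50), (0.1, 70)],  # hypothetical; gives EU = 62
    "A2": [(0.5, 100), (0.5, 48)],             # hypothetical; gives EU = 74
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # A2 74.0
```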

One State/One Action Example
[Figure: state s0 with action A1 reaching states s1, s2, s3]
U(A1, S0) = 100 × 0.2 + 50 × 0.7 + 70 × 0.1 = 20 + 35 + 7 = 62 17

One State/Two Actions Example
[Figure: state s0 with two actions, A1 (reaching s1, s2, s3) and A2 (reaching a set of states including s4)]
U(A1, S0) = 62
U(A2, S0) = 74
U(S0) = max_a {U(a, S0)} = 74 18

Introducing Action Costs
[Figure: the two-action diagram, now with action costs: 5 for A1, 25 for A2]
U(A1, S0) = 62 – 5 = 57
U(A2, S0) = 74 – 25 = 49
U(S0) = max_a {U(a, S0)} = 57 19

MEU Principle A rational agent should choose the action that maximizes the agent’s expected utility This is the basis of the field of decision theory The MEU principle provides a normative criterion for rational choice of action 20

Not quite… Must have a complete model of: Actions Utilities States Even if you have a complete model, decision making is computationally intractable In fact, a truly rational agent takes into account the utility of reasoning as well (bounded rationality) Nevertheless, great progress has been made in this area recently, and we are able to solve much more complex decision-theoretic problems than ever before 21

Axioms of Utility Theory
Orderability: (A > B) ∨ (A < B) ∨ (A ~ B)
Transitivity: (A > B) ∧ (B > C) ⇒ (A > C)
Continuity: A > B > C ⇒ ∃p [p, A; 1-p, C] ~ B
Substitutability: A ~ B ⇒ [p, A; 1-p, C] ~ [p, B; 1-p, C]
Monotonicity: A > B ⇒ (p ≥ q ⇔ [p, A; 1-p, B] ≿ [q, A; 1-q, B])
Decomposability: [p, A; 1-p, [q, B; 1-q, C]] ~ [p, A; (1-p)q, B; (1-p)(1-q), C] 22

Money Versus Utility
Money ≠ Utility
More money is better, but not always in a linear relationship to the amount of money
Expected Monetary Value: EMV(L), the expected payoff of a lottery L
Risk-averse: U(L) < U(S_{EMV(L)})
Risk-seeking: U(L) > U(S_{EMV(L)})
Risk-neutral: U(L) = U(S_{EMV(L)}) 23
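A quick numeric illustration of risk attitudes, assuming a concave square-root utility of money; the lottery and utility function are illustrative choices, not from the slides:

```python
import math

# 50/50 lottery between $0 and $100; EMV = $50.
outcomes = [(0.5, 0.0), (0.5, 100.0)]
emv = sum(p * m for p, m in outcomes)

u = math.sqrt  # concave utility => risk-averse agent
u_lottery = sum(p * u(m) for p, m in outcomes)  # U(L) = E[u(money)]
u_sure = u(emv)                                 # U(S_EMV(L))

print(u_lottery, u_sure)  # 5.0 < 7.07...: U(L) < U(S_EMV(L)), i.e., risk-averse
```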

Value Function Provides a ranking of alternatives, but not a meaningful metric scale Also known as an “ordinal utility function” Sometimes, only relative judgments (value functions) are necessary At other times, absolute judgments (utility functions) are required 24

Multiattribute Utility Theory A given state may have multiple utilities...because of multiple evaluation criteria...because of multiple agents (interested parties) with different utility functions We will talk about this more later in the semester, when we discuss multi-agent systems and game theory 25

SEQUENTIAL DECISION MAKING UNDER UNCERTAINTY

Sequential Decision Making Finite Horizon Infinite Horizon 27

Simple Robot Navigation Problem In each state, the possible actions are U, D, R, and L 28

Probabilistic Transition Model In each state, the possible actions are U, D, R, and L The effect of U is as follows (transition model): With probability 0.8, the robot moves up one square (if the robot is already in the top row, then it does not move) 29

Probabilistic Transition Model In each state, the possible actions are U, D, R, and L The effect of U is as follows (transition model): With probability 0.8 the robot moves up one square (if the robot is already in the top row, then it does not move) With probability 0.1 the robot moves right one square (if the robot is already in the rightmost column, then it does not move) 30

Probabilistic Transition Model In each state, the possible actions are U, D, R, and L The effect of U is as follows (transition model): With probability 0.8 the robot moves up one square (if the robot is already in the top row, then it does not move) With probability 0.1 the robot moves right one square (if the robot is already in the rightmost column, then it does not move) With probability 0.1 the robot moves left one square (if the robot is already in the leftmost column, then it does not move) 31
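A sketch of this transition model in Python; the 4×3 grid and the blocked square at [2,2] are assumptions inferred from the coordinates used in later slides:

```python
# Transition model for action U: 0.8 up, 0.1 right, 0.1 left; bumping into a
# wall or the blocked square leaves the robot where it is.
WIDTH, HEIGHT = 4, 3
BLOCKED = {(2, 2)}  # assumed obstacle, consistent with the [3,2] examples below

def move(state, dx, dy):
    x, y = state
    nx, ny = x + dx, y + dy
    if 1 <= nx <= WIDTH and 1 <= ny <= HEIGHT and (nx, ny) not in BLOCKED:
        return (nx, ny)
    return state  # blocked: stay put

def transition_U(state):
    """Returns {successor: probability} for action U in `state`."""
    dist = {}
    for prob, (dx, dy) in [(0.8, (0, 1)), (0.1, (1, 0)), (0.1, (-1, 0))]:
        s2 = move(state, dx, dy)
        dist[s2] = dist.get(s2, 0.0) + prob
    return dist

print(transition_U((3, 2)))  # {(3,3): 0.8, (4,2): 0.1, (3,2): 0.1}
```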

Markov Property The transition properties depend only on the current state, not on the previous history (how that state was reached) 32

Sequence of Actions Planned sequence of actions: (U, R), starting from [3,2] 33

Sequence of Actions
Planned sequence of actions: (U, R)
U is executed: from [3,2], the robot ends up in [3,3] (p = 0.8), [4,2] (p = 0.1), or stays in [3,2] (p = 0.1, since the move left is blocked) 34

Histories
Planned sequence of actions: (U, R)
U has been executed; now R is executed from [3,3], [4,2], or [3,2]
There are 9 possible sequences of states – called histories – and 6 possible final states for the robot: [3,1], [3,2], [3,3], [4,1], [4,2], [4,3]! 35

Probability of Reaching the Goal
P([4,3] | (U,R).[3,2]) = P([4,3] | R.[3,3]) × P([3,3] | U.[3,2]) + P([4,3] | R.[4,2]) × P([4,2] | U.[3,2])
Note the importance of the Markov property in this derivation
P([3,3] | U.[3,2]) = 0.8; P([4,2] | U.[3,2]) = 0.1
P([4,3] | R.[3,3]) = 0.8; P([4,3] | R.[4,2]) = 0.1
P([4,3] | (U,R).[3,2]) = 0.8 × 0.8 + 0.1 × 0.1 = 0.65 36
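The same number can be checked mechanically (a sketch reusing only the probabilities listed on the slide; R from [3,2] cannot reach [4,3], hence the zero entry):

```python
# P(goal after (U,R) from [3,2]) = sum over intermediate states s of
# P(s | U, [3,2]) * P([4,3] | R, s)
p_after_U = {"[3,3]": 0.8, "[4,2]": 0.1, "[3,2]": 0.1}
p_goal_after_R = {"[3,3]": 0.8, "[4,2]": 0.1, "[3,2]": 0.0}
print(sum(p_after_U[s] * p_goal_after_R[s] for s in p_after_U))  # 0.65
```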

Utility Function [4,3] provides power supply [4,2] is a sand area from which the robot cannot escape

Utility Function [4,3] provides power supply [4,2] is a sand area from which the robot cannot escape The robot needs to recharge its batteries

Utility Function [4,3] provides power supply [4,2] is a sand area from which the robot cannot escape The robot needs to recharge its batteries [4,3] and [4,2] are terminal states

Utility of a History [4,3] provides power supply [4,2] is a sand area from which the robot cannot escape The robot needs to recharge its batteries [4,3] and [4,2] are terminal states The utility of a history is defined by the utility of the last state (+1 or –1) minus n/25, where n is the number of moves

Utility of an Action Sequence
Consider the action sequence (U,R) from [3,2]

Utility of an Action Sequence
Consider the action sequence (U,R) from [3,2]
A run produces one of 7 possible histories, each with some probability 42

Utility of an Action Sequence
Consider the action sequence (U,R) from [3,2]
A run produces one among 7 possible histories, each with some probability
The utility of the sequence is the expected utility of the histories: U = Σ_h P(h) U_h 43
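A sketch that enumerates these histories and computes U = Σ_h P(h) U_h, reusing the grid assumptions from the transition-model sketch above and treating a non-terminal final state as contributing reward 0 (an assumption; the slides only define the +1/–1 terminal case):

```python
WIDTH, HEIGHT = 4, 3
BLOCKED = {(2, 2)}
TERMINAL = {(4, 3): +1.0, (4, 2): -1.0}
MOVES = {"U": (0, 1), "D": (0, -1), "R": (1, 0), "L": (-1, 0)}

def move(s, d):
    x, y = s; dx, dy = MOVES[d]
    n = (x + dx, y + dy)
    ok = 1 <= n[0] <= WIDTH and 1 <= n[1] <= HEIGHT and n not in BLOCKED
    return n if ok else s

def outcomes(s, a):
    """Intended direction w.p. 0.8, each perpendicular direction w.p. 0.1."""
    side = {"U": ("R", "L"), "D": ("R", "L"), "R": ("U", "D"), "L": ("U", "D")}
    dist = {}
    for p, d in [(0.8, a), (0.1, side[a][0]), (0.1, side[a][1])]:
        n = move(s, d)
        dist[n] = dist.get(n, 0.0) + p
    return dist

def sequence_utility(state, plan, prob=1.0, moves=0):
    if state in TERMINAL or not plan:
        # history utility = last-state reward (0 if non-terminal) minus moves/25
        return prob * (TERMINAL.get(state, 0.0) - moves / 25.0)
    return sum(sequence_utility(s2, plan[1:], prob * p, moves + 1)
               for s2, p in outcomes(state, plan[0]).items())

print(sequence_utility((3, 2), ["U", "R"]))  # expected utility of (U, R)
```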

Optimal Action Sequence
Consider the action sequence (U,R) from [3,2]
A run produces one among 7 possible histories, each with some probability
The utility of the sequence is the expected utility of the histories
The optimal sequence is the one with maximal utility 44

Optimal Action Sequence
Consider the action sequence (U,R) from [3,2]
A run produces one among 7 possible histories, each with some probability
The utility of the sequence is the expected utility of the histories
The optimal sequence is the one with maximal utility
But is the optimal action sequence what we want to compute? Only if the sequence is executed blindly! 45

Reactive Agent Algorithm (accessible or observable state)
Repeat:
  s ← sensed state
  If s is terminal then exit
  a ← choose action (given s)
  Perform a 46

Policy (Reactive/Closed-Loop Strategy) A policy π is a complete mapping from states to actions

Reactive Agent Algorithm
Repeat:
  s ← sensed state
  If s is terminal then exit
  a ← π(s)
  Perform a 48

Optimal Policy
A policy π is a complete mapping from states to actions
The optimal policy π* is the one that always yields a history (ending at a terminal state) with maximal expected utility
Makes sense because of the Markov property
Note that [3,2] is a “dangerous” state that the optimal policy tries to avoid 49

Optimal Policy
A policy π is a complete mapping from states to actions
The optimal policy π* is the one that always yields a history with maximal expected utility
This problem is called a Markov Decision Problem (MDP)
How to compute π*? 50

Additive Utility
History H = (s_0, s_1, …, s_n)
The utility of H is additive iff: U(s_0, s_1, …, s_n) = R(0) + U(s_1, …, s_n) = Σ_i R(i)
where R(i) is the reward received in state s_i 51

Additive Utility
History H = (s_0, s_1, …, s_n)
The utility of H is additive iff: U(s_0, s_1, …, s_n) = R(0) + U(s_1, …, s_n) = Σ_i R(i)
Robot navigation example: R(n) = +1 if s_n = [4,3]; R(n) = –1 if s_n = [4,2]; R(i) = –1/25 for i = 0, …, n–1 52

Principle of Max Expected Utility
History H = (s_0, s_1, …, s_n)
Utility of H: U(s_0, s_1, …, s_n) = Σ_i R(i)
First-step analysis:
U(i) = R(i) + max_a Σ_k P(k | a.i) U(k)
π*(i) = argmax_a Σ_k P(k | a.i) U(k) 53

Defining State Utility Problem: When making a decision, we only know the reward so far, and the possible actions We’ve defined utility retroactively (i.e., the utility of a history is obvious once we finish it) What is the utility of a particular state in the middle of decision making? Need to compute expected utility of possible future histories 54

Value Iteration
Initialize the utility of each non-terminal state s_i to U_0(i) = 0
For t = 0, 1, 2, …, do: U_{t+1}(i) ← R(i) + max_a Σ_k P(k | a.i) U_t(k)

Value Iteration
Initialize the utility of each non-terminal state s_i to U_0(i) = 0
For t = 0, 1, 2, …, do: U_{t+1}(i) ← R(i) + max_a Σ_k P(k | a.i) U_t(k)
[Plot: U_t([3,1]) as a function of t, converging as t grows]
Note the importance of terminal states and connectivity of the state-transition graph 56
EXERCISE: Verify the value of U([3,3]) in the limit
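A compact value-iteration sketch for this grid world, under the same assumed layout (4×3 grid, obstacle at [2,2], terminals +1/–1, step reward –1/25); the convergence threshold is an arbitrary choice:

```python
WIDTH, HEIGHT = 4, 3
BLOCKED = {(2, 2)}
TERMINAL = {(4, 3): +1.0, (4, 2): -1.0}
STEP_REWARD = -1.0 / 25.0
MOVES = {"U": (0, 1), "D": (0, -1), "R": (1, 0), "L": (-1, 0)}
STATES = [(x, y) for x in range(1, WIDTH + 1) for y in range(1, HEIGHT + 1)
          if (x, y) not in BLOCKED]

def move(s, d):
    x, y = s; dx, dy = MOVES[d]
    n = (x + dx, y + dy)
    return n if n in STATES else s

def outcomes(s, a):
    side = {"U": ("R", "L"), "D": ("R", "L"), "R": ("U", "D"), "L": ("U", "D")}
    dist = {}
    for p, d in [(0.8, a), (0.1, side[a][0]), (0.1, side[a][1])]:
        n = move(s, d)
        dist[n] = dist.get(n, 0.0) + p
    return dist

# U_{t+1}(i) <- R(i) + max_a sum_k P(k | a.i) U_t(k), iterated to convergence
U = {s: TERMINAL.get(s, 0.0) for s in STATES}
while True:
    U2, delta = dict(U), 0.0
    for s in STATES:
        if s in TERMINAL:
            continue
        U2[s] = STEP_REWARD + max(
            sum(p * U[n] for n, p in outcomes(s, a).items()) for a in MOVES)
        delta = max(delta, abs(U2[s] - U[s]))
    U = U2
    if delta < 1e-6:
        break

print(U[(3, 3)])  # the limiting value asked about in the exercise above
```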

Policy Iteration Pick a policy π at random 57

Policy Iteration
Pick a policy π at random
Repeat:
Compute the utility of each state for π: U_{t+1}(i) ← R(i) + Σ_k P(k | π(i).i) U_t(k) 58

Policy Iteration
Pick a policy π at random
Repeat:
Compute the utility of each state for π: U_{t+1}(i) ← R(i) + Σ_k P(k | π(i).i) U_t(k)
Compute the policy π' given these utilities: π'(i) = argmax_a Σ_k P(k | a.i) U(k) 59

Policy Iteration
Pick a policy π at random
Repeat:
Compute the utility of each state for π: U_{t+1}(i) ← R(i) + Σ_k P(k | π(i).i) U_t(k)
Compute the policy π' given these utilities: π'(i) = argmax_a Σ_k P(k | a.i) U(k)
If π' = π then return π
Or solve the set of linear equations: U(i) = R(i) + Σ_k P(k | π(i).i) U(k) (often a sparse system) 60
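A policy-iteration sketch under the same grid-world assumptions, evaluating each policy by simple iteration rather than the linear solve mentioned above:

```python
import random

WIDTH, HEIGHT = 4, 3
BLOCKED = {(2, 2)}
TERMINAL = {(4, 3): +1.0, (4, 2): -1.0}
STEP_REWARD = -1.0 / 25.0
MOVES = {"U": (0, 1), "D": (0, -1), "R": (1, 0), "L": (-1, 0)}
STATES = [(x, y) for x in range(1, WIDTH + 1) for y in range(1, HEIGHT + 1)
          if (x, y) not in BLOCKED]

def move(s, d):
    x, y = s; dx, dy = MOVES[d]
    n = (x + dx, y + dy)
    return n if n in STATES else s

def outcomes(s, a):
    side = {"U": ("R", "L"), "D": ("R", "L"), "R": ("U", "D"), "L": ("U", "D")}
    dist = {}
    for p, d in [(0.8, a), (0.1, side[a][0]), (0.1, side[a][1])]:
        n = move(s, d)
        dist[n] = dist.get(n, 0.0) + p
    return dist

def q(s, a, U):
    return STEP_REWARD + sum(p * U[n] for n, p in outcomes(s, a).items())

pi = {s: random.choice(list(MOVES)) for s in STATES if s not in TERMINAL}
while True:
    # Policy evaluation: iterate U(i) = R(i) + sum_k P(k | pi(i).i) U(k)
    U = {s: TERMINAL.get(s, 0.0) for s in STATES}
    for _ in range(100):
        U = {s: U[s] if s in TERMINAL else q(s, pi[s], U) for s in STATES}
    # Policy improvement: pi'(i) = argmax_a sum_k P(k | a.i) U(k)
    pi2 = {s: max(MOVES, key=lambda a: q(s, a, U)) for s in pi}
    if pi2 == pi:
        break
    pi = pi2

print(pi)  # action chosen for each non-terminal state
```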

Infinite Horizon
In many problems, e.g., the robot navigation example, histories are potentially unbounded and the same state can be reached many times
What if the robot lives forever?
One trick: use discounting to make an infinite-horizon problem mathematically tractable 61
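Why discounting works (the standard bound, stated here for completeness): with discount factor 0 ≤ γ < 1 and rewards bounded by |R(s)| ≤ R_max, the utility of any infinite history is finite:

```latex
U(s_0, s_1, \ldots) \;=\; \sum_{t=0}^{\infty} \gamma^{t} R(s_t)
\;\le\; \sum_{t=0}^{\infty} \gamma^{t} R_{\max}
\;=\; \frac{R_{\max}}{1-\gamma}
```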