CSE-573 Artificial Intelligence: Partially-Observable MDPs (POMDPs)

Todo: key slides don't have the Y axis labeled – it is NOT value.

Classical Planning ("What action next?"): the agent acts on the environment through actions and receives percepts. Environment assumptions: static, fully observable, perfect percepts, predictable, discrete, deterministic.

Stochastic Planning (MDPs, Reinforcement Learning) ("What action next?"): same agent–environment loop. Environment assumptions: static, fully observable, perfect percepts, stochastic (unpredictable outcomes), discrete.

Partially-Observable MDPs ("What action next?"): same agent–environment loop. Environment assumptions: static, partially observable, noisy percepts, stochastic (unpredictable outcomes), discrete.

Classical Planning (heaven/hell example): the world is deterministic and the state is observable, so a sequential plan suffices to reach the reward.

MDP-Style Planning (heaven/hell example): the world is stochastic but the state is observable, so a policy (a mapping from states to actions) is needed.

Stochastic, Partially Observable: now the agent does not know which side is heaven and which is hell (hell? heaven?); there is a sign in the world that could disambiguate.

Markov Decision Process (MDP)
- S: set of states
- A: set of actions
- Pr(s'|s,a): transition model
- R(s,a,s'): reward model
- γ: discount factor
- s0: start state

Partially-Observable MDP
- S: set of states
- A: set of actions
- Pr(s'|s,a): transition model
- R(s,a,s'): reward model
- γ: discount factor
- s0: start state
- E: set of possible evidence (observations)
- Pr(e|s): observation model

Belief State: in the sign example the agent's belief is 50% / 50% over the two possible worlds. The belief state is the state of the agent's mind, not just of the world. Note: in a POMDP the belief probabilities sum to 1.

Planning in Belief Space: moving N, W, or E from the 50% / 50% belief leads to new 50% / 50% beliefs, each with expected reward 0. For now, assume movement is deterministic.

Partially-Observable MDP (repeated): the same tuple S, A, Pr(s'|s,a), R(s,a,s'), γ, s0, extended with E, the set of possible evidence (observations), and the observation model Pr(e|s).

Evidence Model
- S = {s_wb, s_eb, s_wm, s_em, s_wul, s_eul, s_wur, s_eur}
- E = {heat}
- Pr(e|s): Pr(heat | s_eb) = 1.0, Pr(heat | s_wb) = 0.2, Pr(heat | s_other) = 0.0
[figure: grid world with the sign and the states s_eb, s_wb]

Planning in Belief Space: starting from the 50% / 50% belief and moving S, the observation updates the belief. With Pr(heat | s_eb) = 1.0 and Pr(heat | s_wb) = 0.2, observing ¬heat yields the belief (100%, 0%) over (s_wb, s_eb), while observing heat yields (17%, 83%).
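The updated beliefs on this slide follow from a Bayes-rule update of the 50% / 50% prior with the evidence model above. A minimal sketch, assuming the state names from the Evidence Model slide; the helper function and its name are illustrative, not from the lecture:

```python
# Bayes-rule belief update behind the slide's numbers.

def update_belief(belief, likelihood, observed):
    """Posterior over states after seeing (observed=True) or not seeing 'heat'."""
    posterior = {}
    for s, p in belief.items():
        p_e = likelihood.get(s, 0.0)
        posterior[s] = p * (p_e if observed else (1.0 - p_e))
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

prior = {"s_wb": 0.5, "s_eb": 0.5}           # 50% / 50% belief
p_heat = {"s_eb": 1.0, "s_wb": 0.2}          # Pr(heat | s) from the Evidence Model

print(update_belief(prior, p_heat, observed=True))   # ~{'s_wb': 0.17, 's_eb': 0.83}
print(update_belief(prior, p_heat, observed=False))  # {'s_wb': 1.0, 's_eb': 0.0}
```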

Objective of a Fully Observable MDP: find a policy π: S → A which maximizes expected discounted reward over an infinite horizon, assuming full observability.
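For the fully observable case this objective can be computed by standard value iteration. Below is a minimal sketch on a made-up two-state MDP; the MDP and all names are illustrative, not from the lecture:

```python
# Minimal value iteration: returns V and a greedy policy pi: S -> A.

def value_iteration(S, A, P, R, gamma=0.9, eps=1e-6):
    V = {s: 0.0 for s in S}
    while True:
        Q = {s: {a: sum(P[s][a][s2] * (R[s][a][s2] + gamma * V[s2]) for s2 in S)
                 for a in A} for s in S}
        V_new = {s: max(Q[s].values()) for s in S}
        if max(abs(V_new[s] - V[s]) for s in S) < eps:
            pi = {s: max(Q[s], key=Q[s].get) for s in S}
            return V_new, pi
        V = V_new

# Made-up two-state, two-action MDP for illustration only.
S = ["s0", "s1"]
A = ["stay", "go"]
P = {"s0": {"stay": {"s0": 1.0, "s1": 0.0}, "go": {"s0": 0.2, "s1": 0.8}},
     "s1": {"stay": {"s0": 0.0, "s1": 1.0}, "go": {"s0": 0.8, "s1": 0.2}}}
R = {"s0": {"stay": {"s0": 0.0, "s1": 0.0}, "go": {"s0": 0.0, "s1": 1.0}},
     "s1": {"stay": {"s0": 0.0, "s1": 0.5}, "go": {"s0": 0.0, "s1": 0.0}}}
V, pi = value_iteration(S, A, P, R)
```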

Objective of a POMDP: find a policy π: BeliefStates(S) → A, where a belief state is a probability distribution over states, which maximizes expected discounted reward over an infinite horizon, assuming partial and noisy observability.

Planning in HW 4: take the MAP estimate, now "know" the state, and solve the resulting MDP.

What is the best plan to eat the final food?

Problem with Planning from the MAP Estimate: the best action for a belief state over k worlds may not be the best action in any one of those worlds (e.g., with beliefs of 49% / 51% or 10% / 90% over two worlds).
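A made-up two-world example (all payoffs are hypothetical; only the 49% / 51% belief is taken from the slide) showing that the action maximizing expected payoff under the belief can be best in neither individual world:

```python
# Hypothetical payoffs: the belief-optimal action C is best in neither world.
belief = {"world1": 0.49, "world2": 0.51}

payoff = {                                   # payoff[action][world]
    "A": {"world1": 100, "world2": -100},    # best if we are surely in world1
    "B": {"world1": -100, "world2": 100},    # best if we are surely in world2
    "C": {"world1": 30, "world2": 30},       # mediocre everywhere, best in expectation
}

expected = {a: sum(belief[w] * payoff[a][w] for w in belief) for a in payoff}
print(expected)                          # A: -2, B: 2, C: 30
print(max(expected, key=expected.get))   # 'C', even though the MAP world prefers B
```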

POMDPs: In POMDPs we apply the very same idea as in MDPs. Since the state is not observable, the agent has to make its decisions based on the belief state, which is a posterior distribution over states. Let b be the agent's belief about the current state. POMDPs compute a value function over belief space:
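The equation image from the original slide is not in the transcript. A standard form of the belief-space value recursion, consistent with the surrounding slides (a reconstruction, not a verbatim copy of the slide), is:

```latex
V_T(b) \;=\; \max_{u}\Big[\, r(b,u) \;+\; \gamma \int V_{T-1}(b')\, p(b' \mid u, b)\, \mathrm{d}b' \,\Big]
```

where r(b,u) is the expected payoff under belief b, defined a few slides below.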

Problems: Each belief is a probability distribution; thus, each value in a POMDP is a function of an entire probability distribution. This is problematic, since probability distributions are continuous. How many belief states are there? For finite worlds with finite state, action, and measurement spaces and finite horizons, however, we can effectively represent the value functions by piecewise linear functions.

An Illustrative Example: [diagram of a two-state POMDP with states x1 and x2, terminal actions u1 and u2 with associated payoffs, a sensing action u3, and measurements]

What is Belief Space? [the same two-state diagram, plus a plot with P(state=x1) on the horizontal axis and Value on the vertical axis: a belief over two states is summarized by the single number p1 = P(state=x1)]

The Parameters of the Example: The actions u1 and u2 are terminal actions. The action u3 is a sensing action that potentially leads to a state transition. The horizon is finite and γ = 1.

Payoff in POMDPs: In MDPs, the payoff (or return) depends on the state of the system. In POMDPs, however, the true state is not exactly known. Therefore, we compute the expected payoff by integrating over all states: r(b,u) = E_x[r(x,u)] = ∫ r(x,u) b(x) dx.

Payoffs in Our Example: If we are totally certain that we are in state x1 and execute action u1, we receive a reward of -100. If, on the other hand, we definitely know that we are in x2 and execute u1, the reward is +100. In between, the payoff is the linear combination of the extreme values weighted by the probabilities: r(b, u1) = -100 p1 + 100 (1 - p1) = 100 - 200 p1. The same reasoning for u2 gives r(b, u2) = 150 p1 - 50.

Payoffs in Our Example (2): [plot of the payoff lines r(b,u1) = 100 - 200 p1 and r(b,u2) = 150 p1 - 50 over the belief p1]

The Resulting Policy for T=1: Given a finite POMDP with time horizon T = 1, use V1(b) to determine the optimal policy: pick u1 or u2, whichever has the higher expected payoff at the current belief. The corresponding value is V1(b) = max( 100 - 200 p1, 150 p1 - 50 ).
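Using only the two payoff lines from the previous slide, the T=1 value function and policy can be sketched as follows (function names are illustrative); the policy switches where the two lines cross, at p1 = 3/7:

```python
# Sketch of V1(b) and the T=1 policy from the payoff lines on the slides:
# r(b,u1) = 100 - 200*p1 and r(b,u2) = 150*p1 - 50, where p1 = b(x1).

def r(p1, u):
    return 100 - 200 * p1 if u == "u1" else 150 * p1 - 50

def V1(p1):
    return max(r(p1, "u1"), r(p1, "u2"))

def policy_T1(p1):
    """Optimal terminal action for T=1 at belief p1 (the sensing action is dominated)."""
    return "u1" if r(p1, "u1") >= r(p1, "u2") else "u2"

print(policy_T1(0.2), policy_T1(0.8))   # u1 when x2 is likely, u2 when x1 is likely
print(V1(3 / 7))                        # ~14.3 at the crossover 100 - 200*p1 = 150*p1 - 50
```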

Piecewise Linearity, Convexity: The resulting value function V1(b) is the maximum of the three payoff functions at each point. It is piecewise linear and convex.

Pruning: Looking at V1(b), note that only the first two components contribute. The third component can be safely pruned away.

Increasing the Time Horizon: Assume the robot can make an observation before deciding on an action. [plot of V1(b)]

Increasing the Time Horizon: What if the robot can observe before acting? Suppose it perceives z1, with p(z1 | x1) = 0.7 and p(z1 | x2) = 0.3. Given the observation z1, we update the belief using Bayes rule: p1' = 0.7 p1 / p(z1), where p(z1) = 0.7 p1 + 0.3 (1 - p1). V1(b | z1) is then V1 evaluated at this updated belief.

Value Function: [plot showing V1(b), the belief transformation b'(b|z1), and the resulting V1(b|z1)]

Expected Value after Measuring: But we do not know in advance what the next measurement will be, so we must take the expectation over measurements: V̄1(b) = E_z[ V1(b | z) ] = Σ_z p(z) V1(b | z).
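A sketch of this expectation, using the measurement model from the earlier slide (p(z1|x1) = 0.7, p(z1|x2) = 0.3) and V1 from the payoff lines; it ignores the payoff of the sensing action itself, which the later slides add, and the function names are illustrative:

```python
# Expected T=1 value after one measurement: sum_z p(z) * V1(b|z).

def V1(p1):
    return max(100 - 200 * p1, 150 * p1 - 50)

def p_z1(p1):
    """Probability of observing z1 under belief p1 = b(x1)."""
    return 0.7 * p1 + 0.3 * (1 - p1)

def belief_after_z1(p1):
    """Bayes update of p1 after observing z1."""
    return 0.7 * p1 / p_z1(p1)

def belief_after_z2(p1):
    return 0.3 * p1 / (1 - p_z1(p1))

def V1_bar(p1):
    return p_z1(p1) * V1(belief_after_z1(p1)) + (1 - p_z1(p1)) * V1(belief_after_z2(p1))

print(V1(0.5), V1_bar(0.5))   # 25.0 vs 47.5: observing first raises the expected value
```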

Resulting Value Function: The four possible combinations (a choice of u1 or u2 for each of the two measurements) yield the following function, which can then be simplified and pruned.
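The equation on the original slide is not in the transcript; the following reconstruction, using the T=1 value function above, spells out where the four combinations come from (the normalizer p(z) cancels in each term):

```latex
\bar{V}_1(b) \;=\; \sum_{z} p(z)\, V_1(b \mid z)
            \;=\; \sum_{z \in \{z_1, z_2\}} \; \max_{u \in \{u_1, u_2\}} \; \sum_{x} r(x,u)\, p(z \mid x)\, b(x)
```

Written as a single maximum, this is the maximum over the four linear functions obtained by choosing u1 or u2 for each of z1 and z2.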

Value Function: [plot showing the belief transformation b'(b|z1), the weighted pieces p(z1) V1(b|z1) and p(z2) V1(b|z2), and the expected value V̄1(b)]

State Transitions (Prediction): When the agent selects u3, its state may change. When computing the value function, we have to take these potential state changes into account.

Resulting Value Function after Executing u3: Taking the state transitions into account, we finally obtain the value function V̄1(b | u3).

Value Function after Executing u3: [plot comparing V̄1(b) and V̄1(b|u3)]

Value Function for T=2: Taking into account that the agent can either directly perform u1 or u2, or first u3 and then u1 or u2, we obtain V2(b) (after pruning).

Graphical Representation of V2(b): [plot with a region where u1 is optimal, a region where u2 is optimal, and an unclear region in between where the outcome of measuring is important]

Deep Horizons: We have now completed a full backup in belief space. This process can be applied recursively. [plots of the value functions for T=10 and T=20]

Deep Horizons and Pruning: [figure]

Why Pruning is Essential: Each update introduces additional linear components to V, and each measurement squares the number of linear components. Thus, an unpruned value function for T=20 includes more than 10^547,864 linear functions, and at T=30 we have 10^561,012,337 linear functions. The pruned value function at T=20, in comparison, contains only 12 linear components. This combinatorial explosion of linear components in the value function is the major reason why exact solution of POMDPs is usually impractical.

POMDP Approximations:
- Point-based value iteration (PBVI)
- QMDPs
- AMDPs (augmented MDPs)

Point-based Value Iteration: maintains a set of example beliefs and only considers constraints (linear components) that maximize the value function for at least one of the examples.
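A minimal sketch of the idea just stated: keep only the linear components that achieve the maximum at some example belief. Exact pruning uses linear programs; this point-based check, the uniform grid of beliefs, and the third (dominated) component below are illustrative assumptions, not the lecture's implementation:

```python
# Keep only the linear components (lines a*p1 + b over the belief p1) that are
# maximal at at least one example belief point; the rest never influence the
# point-based value function and can be dropped.

def prune(components, beliefs):
    """components: list of (a, b) pairs for the line a*p1 + b; beliefs: p1 samples."""
    kept = set()
    for p1 in beliefs:
        values = [a * p1 + b for (a, b) in components]
        kept.add(values.index(max(values)))
    return [components[i] for i in sorted(kept)]

example_beliefs = [i / 100 for i in range(101)]   # illustrative belief set
lines = [(-200, 100), (150, -50), (0, -1)]        # last line: an assumed dominated component
print(prune(lines, example_beliefs))              # the (0, -1) component is dropped
```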

Point-based Value Iteration: [plots comparing the exact value function and the PBVI approximation for T=30]

QMDPs: QMDPs only consider state uncertainty in the first step; after that, the world is assumed to become fully observable.
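A minimal sketch of the standard QMDP heuristic this slide describes: weight the Q-values of the underlying fully observable MDP by the current belief and act greedily. The Q-values below are made-up numbers for illustration; in practice they come from solving the underlying MDP:

```python
# QMDP heuristic: treat uncertainty only in the current step by weighting the
# underlying MDP's Q-values with the belief, then pick the best action.

def qmdp_action(belief, Q, actions):
    """belief: dict state -> probability; Q: dict (state, action) -> value."""
    def expected_q(a):
        return sum(p * Q[(s, a)] for s, p in belief.items())
    return max(actions, key=expected_q)

# Example with made-up Q-values for two states and two actions:
Q = {("x1", "u1"): -100, ("x2", "u1"): 100,
     ("x1", "u2"): 100,  ("x2", "u2"): -50}
print(qmdp_action({"x1": 0.4, "x2": 0.6}, Q, ["u1", "u2"]))   # 'u1'
```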

Augmented MDPs:
- Augmentation adds an uncertainty component to the state space (e.g., pairing the most likely state with a measure of uncertainty such as the belief's entropy; see the sketch below).
- Planning is performed by an MDP in the augmented state space.
- Transition, observation, and payoff models have to be learned.
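A sketch of one common choice of augmentation, assumed here for illustration: pair the most likely state with the entropy of the belief, so that the MDP planner in the augmented space can distinguish confident from uncertain situations.

```python
# Augmented state = (most likely state, belief entropy); the entropy is the
# added uncertainty component. This particular choice is an assumption
# consistent with the slide's "e.g.", not necessarily the lecture's.
import math

def augmented_state(belief):
    """belief: dict state -> probability. Returns (most likely state, entropy)."""
    most_likely = max(belief, key=belief.get)
    entropy = -sum(p * math.log(p) for p in belief.values() if p > 0)
    return most_likely, entropy

print(augmented_state({"x1": 0.9, "x2": 0.1}))   # ('x1', ~0.33) -- low uncertainty
print(augmented_state({"x1": 0.5, "x2": 0.5}))   # ('x1', ~0.69) -- high uncertainty
```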

POMDP Summary:
- POMDPs compute the optimal action in partially observable, stochastic domains.
- For finite-horizon problems, the resulting value functions are piecewise linear and convex.
- In each iteration the number of linear constraints grows exponentially.
- Until recently, POMDP solvers only applied to very small state spaces with small numbers of possible observations and actions.
- But with PBVI, problems with |S| in the millions can be handled.