1
Markov Decision Processes Chapter 17 Mausam
2
Planning Agent: What action next? Percepts flow from the environment to the agent, and actions flow back. Environment dimensions: Static vs. Dynamic; Fully vs. Partially Observable; Perfect vs. Noisy; Deterministic vs. Stochastic; Instantaneous vs. Durative.
3
Classical Planning: What action next? Percepts and actions connect the agent to an environment that is Static, Fully Observable, Perfect, Deterministic, and Instantaneous.
4
Stochastic Planning: MDPs. What action next? Percepts and actions connect the agent to an environment that is Static, Fully Observable, Perfect, Stochastic, and Instantaneous.
5
MDP vs. Decision Theory: decision theory deals with episodic (one-shot) decisions; an MDP models sequential decision making.
6
Markov Decision Process (MDP):
S: a set of states (possibly factored, giving a Factored MDP)
A: a set of actions
Pr(s'|s,a): transition model
C(s,a,s'): cost model
R(s,a,s'): reward model
G: a set of goals (absorbing or non-absorbing)
s0: start state
γ: discount factor
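Purely as an illustration (not from the slides), the components above might be bundled in code like this; all names here (MDP, transitions, and so on) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MDP:
    """A finite MDP with the components listed above; names are illustrative."""
    states: list                     # S
    actions: dict                    # Ap(s): applicable actions per state
    transitions: dict                # (s, a) -> {s': Pr(s'|s, a)}
    rewards: dict                    # (s, a, s') -> R(s, a, s')
    goals: set = field(default_factory=set)  # G (absorbing goal states)
    start: object = None             # s0
    gamma: float = 0.95              # discount factor
```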
7
Objective of an MDP: find a policy π: S → A which optimizes the objective (minimizes expected cost to reach a goal, maximizes expected reward, or maximizes expected reward minus cost), given a ____ horizon (finite, infinite, or indefinite), assuming full observability, discounted or undiscounted.
8
Role of Discount Factor (γ): keeps the total reward/total cost finite; useful for infinite-horizon problems. Intuition (economics): money today is worth more than money tomorrow. Total reward: r_1 + γ r_2 + γ² r_3 + … Total cost: c_1 + γ c_2 + γ² c_3 + …
9
Examples of MDPs. Goal-directed, Indefinite Horizon, Cost Minimization MDP: most often studied in the planning and graph theory communities. Infinite Horizon, Discounted Reward Maximization MDP: most often studied in the machine learning, economics, and operations research communities; the most popular model. Oversubscription Planning (non-absorbing goals, Reward Maximization MDP): a relatively recent model.
10
AND/OR Acyclic Graphs vs. MDPs. (The slide shows two graphs over states P, Q, R, S, T and goal G, with actions a, b, c, transition probabilities 0.6, 0.4, 0.5, and costs C(a) = 5, C(b) = 10, C(c) = 1.) On the acyclic graph, expectimin works: V(Q) = V(R) = V(S) = V(T) = 1 and V(P) = 6, achieved by action a. On the cyclic (MDP) graph, expectimin does not work because of the infinite loop: V(R) = V(S) = V(T) = 1 and Q(P,b) = 11, but Q(P,a) = ???? Suppose I decide to take a in P; then Q(P,a) = 5 + 0.4·1 + 0.6·Q(P,a), which solves to Q(P,a) = 13.5.
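A tiny sketch (illustrative, not from the slides) that solves the self-referential Q(P,a) equation above by fixed-point iteration; the loop converges to 13.5.

```python
# Fixed-point iteration for Q(P,a) = 5 + 0.4*1 + 0.6*Q(P,a).
# Action a costs 5; with prob. 0.4 we reach a state of value 1,
# and with prob. 0.6 we loop back to P (hence the self-reference).
q = 0.0
for _ in range(100):
    q = 5 + 0.4 * 1 + 0.6 * q
print(round(q, 4))  # -> 13.5, matching the closed form 5.4 / 0.4
```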
11
Bellman Equations for MDP_1: define J*(s) (optimal cost) as the minimum expected cost to reach a goal from this state. J* should satisfy the following equation: J*(s) = 0 if s ∈ G, and otherwise J*(s) = min_{a ∈ Ap(s)} Σ_{s'} Pr(s'|s,a) [ C(s,a,s') + J*(s') ].
12
Bellman Equations for MDP_2: define V*(s) (optimal value) as the maximum expected discounted reward from this state. V* should satisfy the following equation: V*(s) = max_{a ∈ Ap(s)} Σ_{s'} Pr(s'|s,a) [ R(s,a,s') + γ V*(s') ].
13
Bellman Backup (MDP_2): given an estimate of the V* function (say V_n), backing up V_n at state s calculates a new estimate V_{n+1}. Q_{n+1}(s,a) is the value of the strategy: execute action a in s, then execute π_n subsequently, where π_n(s) = argmax_{a ∈ Ap(s)} Q_n(s,a). Concretely, Q_{n+1}(s,a) = Σ_{s'} Pr(s'|s,a) [ R(s,a,s') + γ V_n(s') ] and V_{n+1}(s) = max_{a ∈ Ap(s)} Q_{n+1}(s,a).
14
Bellman Backup example (from the slide's figure): state s0 has three actions, a1 with immediate reward 2 leading to s1, a2 with immediate reward 5 leading to s2 with probability 0.9 and s3 with probability 0.1, and a3 with immediate reward 4.5 leading to s3. With initial estimates V_0(s1) = 0, V_0(s2) = 1, V_0(s3) = 2: Q_1(s0,a1) = 2 + 0 = 2; Q_1(s0,a2) = 5 + 0.9 × 1 + 0.1 × 2 = 6.1; Q_1(s0,a3) = 4.5 + 2 = 6.5. Taking the max, V_1(s0) = 6.5 and a_greedy = a3.
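A minimal sketch of a single Bellman backup at s0, assuming the transition structure just described (function and variable names are my own); it reproduces V_1(s0) = 6.5 and greedy action a3.

```python
def bellman_backup(state, V, actions, gamma=1.0):
    """Return (new value, greedy action) for one state.

    actions: {a: (reward, {s': prob})}; V: {s: current estimate}.
    Undiscounted (gamma = 1.0) to match the numbers on the slide.
    """
    q = {a: r + gamma * sum(p * V[s2] for s2, p in dist.items())
         for a, (r, dist) in actions.items()}
    best = max(q, key=q.get)
    return q[best], best

V0 = {"s1": 0.0, "s2": 1.0, "s3": 2.0}
acts_s0 = {
    "a1": (2.0, {"s1": 1.0}),
    "a2": (5.0, {"s2": 0.9, "s3": 0.1}),
    "a3": (4.5, {"s3": 1.0}),
}
print(bellman_backup("s0", V0, acts_s0))  # -> (6.5, 'a3')
```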
15
Value Iteration [Bellman '57]: assign an arbitrary value V_0 to each state; repeat: for all states s, compute V_{n+1}(s) by a Bellman backup at s; until max_s |V_{n+1}(s) − V_n(s)| < ε. The quantity |V_{n+1}(s) − V_n(s)| is the residual of s in iteration n+1, and the stopping rule gives ε-convergence.
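A compact value iteration sketch for the reward-maximization setting (MDP_2). This is illustrative code using the per-action reward layout from the earlier snippets, not the slides' own implementation.

```python
def value_iteration(states, actions, gamma=0.9, eps=1e-6):
    """actions: {s: {a: (reward, {s': prob})}}; returns (V, greedy policy)."""
    V = {s: 0.0 for s in states}          # arbitrary initial assignment V_0
    while True:
        V_new, residual = {}, 0.0
        for s in states:
            q = {a: r + gamma * sum(p * V[s2] for s2, p in dist.items())
                 for a, (r, dist) in actions[s].items()}
            V_new[s] = max(q.values()) if q else 0.0
            residual = max(residual, abs(V_new[s] - V[s]))
        V = V_new
        if residual < eps:                # epsilon-convergence
            break
    policy = {s: max(actions[s], key=lambda a: actions[s][a][0] + gamma *
                     sum(p * V[s2] for s2, p in actions[s][a][1].items()))
              for s in states if actions[s]}
    return V, policy
```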
16
Comments: this is a decision-theoretic algorithm, a dynamic programming / fixed-point computation, and a probabilistic version of the Bellman-Ford algorithm for shortest-path computation (MDP_1 is the Stochastic Shortest Path problem). Time complexity: one iteration is O(|S|² |A|); the number of iterations is poly(|S|, |A|, 1/(1−γ)). Space complexity: O(|S|). For Factored MDPs (planning under uncertainty) this means exponential space and exponential time.
17
Convergence Properties: V_n → V* in the limit as n → ∞. ε-convergence: the V_n function is within ε of V*. Optimality: the current greedy policy is within 2ε of optimal. Monotonicity: V_0 ≤_p V* ⇒ V_n ≤_p V* (V_n is monotonic from below); V_0 ≥_p V* ⇒ V_n ≥_p V* (V_n is monotonic from above); otherwise V_n is non-monotonic.
18
Policy Computation: the optimal policy is stationary and time-independent for infinite/indefinite horizon problems: π*(s) = argmax_{a ∈ Ap(s)} Σ_{s'} Pr(s'|s,a) [ R(s,a,s') + γ V*(s') ]. Policy Evaluation: the value of a fixed policy π satisfies V^π(s) = Σ_{s'} Pr(s'|s,π(s)) [ R(s,π(s),s') + γ V^π(s') ], a system of linear equations in |S| variables.
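A sketch of exact policy evaluation by solving the |S|-by-|S| linear system (I − γ P^π) V = R^π. This is illustrative only: it assumes expected per-state rewards under the fixed policy, and the names are my own.

```python
import numpy as np

def evaluate_policy(P_pi, R_pi, gamma=0.9):
    """Solve V = R_pi + gamma * P_pi @ V exactly.

    P_pi: (|S|, |S|) transition matrix under the fixed policy.
    R_pi: (|S|,) expected immediate reward in each state under the policy.
    """
    n = len(R_pi)
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)

# Tiny made-up two-state example: each state mostly stays put.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
R = np.array([1.0, 0.0])
print(evaluate_policy(P, R))
```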
19
Changing the Search Space. Value Iteration: search in value space, then compute the resulting policy. Policy Iteration: search in policy space, then compute the resulting value.
20
Policy Iteration [Howard '60]: assign an arbitrary policy π_0 to each state; repeat: Policy Evaluation: compute V_{n+1}, the evaluation of π_n; Policy Improvement: for all states s, compute π_{n+1}(s) = argmax_{a ∈ Ap(s)} Q_{n+1}(s,a); until π_{n+1} = π_n. Advantage: searching in a finite (policy) space as opposed to an uncountably infinite (value) space, so convergence is faster; all other properties follow. Exact policy evaluation is costly, O(|S|³); approximating it by value iteration under the fixed policy gives Modified Policy Iteration. A code sketch of basic policy iteration follows below.
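A minimal policy iteration sketch (illustrative; the tabular tensor layout and names are my own, and evaluation uses the same linear solve shown earlier).

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """P: (|S|, |A|, |S|) transition tensor; R: (|S|, |A|) expected rewards."""
    n_s, n_a, _ = P.shape
    pi = np.zeros(n_s, dtype=int)                   # arbitrary pi_0
    while True:
        # Policy evaluation: solve the linear system for the current policy.
        P_pi = P[np.arange(n_s), pi]                # (|S|, |S|)
        R_pi = R[np.arange(n_s), pi]                # (|S|,)
        V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to V.
        Q = R + gamma * P @ V                       # (|S|, |A|)
        pi_new = Q.argmax(axis=1)
        if np.array_equal(pi_new, pi):              # until pi_{n+1} = pi_n
            return pi, V
        pi = pi_new
```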
21
Modified Policy Iteration: assign an arbitrary policy π_0 to each state; repeat: Policy Evaluation: compute V_{n+1}, an approximate evaluation of π_n; Policy Improvement: for all states s, compute π_{n+1}(s) = argmax_{a ∈ Ap(s)} Q_{n+1}(s,a); until π_{n+1} = π_n. Advantage: probably the most competitive synchronous dynamic programming algorithm.
22
Applications: stochastic games; robotics (navigation, helicopter maneuvers, …); finance (options, investments); communication networks; medicine (radiation planning for cancer); controlling workflows; optimizing bidding decisions in auctions; traffic flow optimization; aircraft queueing for landing; airline meal provisioning; optimizing software on mobiles; forest firefighting; …
23
Extensions: Heuristic Search + Dynamic Programming (AO*, LAO*, RTDP, …); Factored MDPs (add planning-graph-style heuristics, use goal regression to generalize better); Hierarchical MDPs (a hierarchy of sub-tasks and actions to scale better); Reinforcement Learning (learning the probabilities and rewards, acting while learning, connections to psychology); Partially Observable Markov Decision Processes (noisy sensors, partially observable environment, popular in robotics).
24
Summary: Decision Making. Classical planning: sequential decision making in a deterministic world; domain-independent heuristic generation. Decision theory: one-step decision making under uncertainty; value of information; utility theory. Markov Decision Process: sequential decision making under uncertainty.
25
Asynchronous Value Iteration: states may be backed up in any order, instead of iteration by iteration. As long as all states are backed up infinitely often, asynchronous value iteration converges to the optimal value function.
26
Asynchronous VI: Prioritized Sweeping. Why back up a state if the values of its successors have not changed? Prefer backing up a state whose successors changed the most. Keep a priority queue of (state, expected change in value) pairs, back up states in priority order, and after backing up a state, update the priorities of all its predecessors.
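A rough sketch of the prioritized-sweeping loop just described (illustrative only; backup and predecessors are assumed helpers, and the priority here is simply the size of the last value change rather than a transition-weighted estimate).

```python
import heapq
from itertools import count

def prioritized_sweeping(states, backup, predecessors, n_backups=10_000, theta=1e-4):
    """backup(s) -> change in V(s); predecessors[s] -> states that can reach s."""
    tie = count()                               # tie-breaker so heapq never compares states
    pq = [(-1.0, next(tie), s) for s in states] # seed: every state eligible once
    heapq.heapify(pq)
    while pq and n_backups > 0:
        _, _, s = heapq.heappop(pq)             # most negative key = highest priority
        change = abs(backup(s))                 # Bellman backup at s, returns the value change
        n_backups -= 1
        if change > theta:                      # predecessors may now be stale
            for p in predecessors.get(s, ()):
                heapq.heappush(pq, (-change, next(tie), p))
```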
27
Asynchronous VI: Real-Time Dynamic Programming [Barto, Bradtke, Singh '95]. Trial: simulate the greedy policy starting from the start state, performing a Bellman backup on each visited state. RTDP: repeat trials until the value function converges.
28
RTDP Trial (figure): starting at s0 with current estimates V_n at the successors, compute Q_{n+1}(s0,a) for actions a1, a2, a3, take the min to get V_{n+1}(s0), follow a_greedy = a2, sample a successor, and continue until the Goal is reached.
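An illustrative RTDP trial loop for the cost-minimization (MDP_1) setting. The helpers applicable, transition, and cost are assumed inputs, and termination details (proper policies, convergence tests) are omitted; this is a sketch, not the authors' implementation.

```python
import random

def rtdp_trial(s0, goals, applicable, transition, cost, V):
    """One trial: follow the greedy policy from s0, backing up visited states.

    applicable(s) -> iterable of actions; transition(s, a) -> {s': prob};
    cost(s, a, s') -> immediate cost; V -> dict of current value estimates.
    """
    s = s0
    while s not in goals:
        # Bellman backup (cost-minimization form) at the current state.
        q = {a: sum(p * (cost(s, a, s2) + V.get(s2, 0.0))
                    for s2, p in transition(s, a).items())
             for a in applicable(s)}
        a_greedy = min(q, key=q.get)
        V[s] = q[a_greedy]
        # Sample a successor of the greedy action and continue the trial.
        succ = transition(s, a_greedy)
        s = random.choices(list(succ), weights=list(succ.values()))[0]

def rtdp(s0, goals, applicable, transition, cost, n_trials=1000):
    """Repeat trials; here a fixed trial budget stands in for a convergence test."""
    V = {}
    for _ in range(n_trials):
        rtdp_trial(s0, goals, applicable, transition, cost, V)
    return V
```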
29
Comments. Properties: if all states are visited infinitely often, then V_n → V*. Advantages: anytime behavior; the more probable states are explored quickly. Disadvantages: complete convergence can be slow.
30
Reinforcement Learning
31
We still have an MDP and are still looking for a policy. New twist: we don't know Pr and/or R, i.e., we don't know which states are good and what the actions do. We must actually try out actions to learn.
32
Model-based methods: visit different states and perform different actions; estimate Pr and R; once the model is built, plan using value iteration or other methods. Con: requires huge amounts of data.
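A sketch of the counting estimator implied above: maximum-likelihood estimates of Pr and R from observed (s, a, r, s') transitions. The class and method names are my own.

```python
from collections import defaultdict

class ModelEstimator:
    """Estimate Pr(s'|s,a) and R(s,a,s') from experienced transitions."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # (s,a) -> {s': n}
        self.reward_sum = defaultdict(float)                 # (s,a,s') -> total r

    def observe(self, s, a, r, s2):
        self.counts[(s, a)][s2] += 1
        self.reward_sum[(s, a, s2)] += r

    def transition(self, s, a):
        c = self.counts[(s, a)]
        total = sum(c.values())
        return {s2: n / total for s2, n in c.items()} if total else {}

    def reward(self, s, a, s2):
        n = self.counts[(s, a)].get(s2, 0)
        return self.reward_sum[(s, a, s2)] / n if n else 0.0
```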
33
Model-free methods: directly learn Q*(s,a) values. From an experienced transition (s, a, r, s'), form sample = R(s,a,s') + γ max_{a'} Q_n(s',a') and nudge the old estimate towards the new sample: Q_{n+1}(s,a) ← (1 − α) Q_n(s,a) + α · sample.
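A minimal tabular Q-learning update implementing the rule above; the learning rate alpha and the dict layout are illustrative choices.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s2, actions_s2, alpha=0.1, gamma=0.9):
    """One model-free update: nudge Q(s,a) towards the sampled target."""
    sample = r + gamma * max((Q[(s2, a2)] for a2 in actions_s2), default=0.0)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

Q = defaultdict(float)
q_update(Q, s="s0", a="a1", r=5.0, s2="s1", actions_s2=["a1", "a2"])
print(Q[("s0", "a1")])  # -> 0.5 after a single update from zero
```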
34
Properties: Q-learning converges to the optimal Q* if you explore enough and if you make the learning rate (α) small enough, but do not decrease it too quickly: Σ_i α(s,a,i) = ∞ and Σ_i α(s,a,i)² < ∞, where i is the number of visits to (s,a).
35
Model-based vs. Model-free RL. Model-based: estimates O(|S|² |A|) parameters; requires relatively more data for learning; can easily make use of background knowledge. Model-free: estimates O(|S| |A|) parameters; requires relatively less data for learning.
36
Exploration vs. Exploitation. Exploration: choose actions that visit new states in order to obtain more data for better learning. Exploitation: choose actions that maximize the reward under the currently learnt model. ε-greedy: at each time step, flip a coin; with probability ε take a random action, with probability 1 − ε take the current greedy action. Lower ε over time to increase exploitation as more learning has happened.
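A small ε-greedy action selector matching the description above (illustrative; assumes a tabular Q as in the earlier snippet).

```python
import random

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    """With prob. epsilon act randomly (explore), else act greedily (exploit)."""
    if random.random() < epsilon:
        return random.choice(list(actions))
    return max(actions, key=lambda a: Q[(s, a)])
```

A common way to lower ε over time, as the slide suggests, is a simple decay such as epsilon = 1 / (1 + number_of_visits), shifting gradually from exploration to exploitation.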
37
Q-learning problems: too many states to visit during learning; Q(s,a) is still a big table; we want to generalize from a small set of training examples. Techniques: value function approximators, policy approximators, hierarchical reinforcement learning.
38
Task Hierarchy: MAXQ Decomposition [Dietterich '00]. (The slide shows a task tree with Root at the top, sub-tasks Take, Give, Navigate(loc), Deliver, Fetch, Extend-arm, Grab, and Release, and primitive actions Move e, Move w, Move s, Move n at the leaves.) Children of a task are unordered.
39
Partially Observable Markov Decision Processes
40
Partially Observable MDPs: What action next? Percepts and actions connect the agent to an environment that is Static, Partially Observable, Noisy, Stochastic, Instantaneous, and Unpredictable.
41
Stochastic, Fully Observable
42
Stochastic, Partially Observable
43
POMDPs: in POMDPs we apply the very same idea as in MDPs. Since the state is not observable, the agent has to make its decisions based on the belief state, which is a posterior distribution over states. Let b be the agent's belief about the current state. POMDPs compute a value function over belief space: V(b) = max_a [ r(b,a) + γ Σ_o Pr(o|b,a) V(b') ], where b' is the belief obtained from b after taking action a and observing o.
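An illustrative belief update (Bayes filter) sketch showing how the posterior b' over states is computed after taking action a and receiving observation o; the transition and observation models are assumed inputs, not APIs from the slides.

```python
def belief_update(b, a, o, transition, observation):
    """b: {s: prob}; transition(s, a) -> {s': prob}; observation(o, s', a) -> prob.

    Returns the posterior belief b'(s') proportional to
    O(o|s',a) * sum_s Pr(s'|s,a) * b(s).
    """
    b_new = {}
    for s, p_s in b.items():
        for s2, p_t in transition(s, a).items():
            b_new[s2] = b_new.get(s2, 0.0) + p_s * p_t   # prediction step
    for s2 in b_new:
        b_new[s2] *= observation(o, s2, a)               # correction step
    z = sum(b_new.values())                              # normalizer Pr(o|b,a)
    return {s2: p / z for s2, p in b_new.items()} if z else b_new
```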
44
POMDPs: each belief is a probability distribution, so the value function is a function of an entire probability distribution. This is problematic, since probability distributions are continuous, and we also have to deal with the huge complexity of belief spaces. For finite worlds with finite state, action, and observation spaces and finite horizons, however, we can represent the value functions by piecewise linear functions.
45
Applications: robotic control (helicopter maneuvering, autonomous vehicles); Mars rover (path planning, oversubscription planning); elevator planning; game playing (backgammon, Tetris, checkers); neuroscience; computational finance and sequential auctions; assisting the elderly in simple tasks; spoken dialog management; communication networks (switching, routing, flow control); war planning; evacuation planning.