1 Markov Decision Processes: Basic Concepts
Alan Fern
2 Some AI Planning Problems
- Fire & Rescue Response Planning
- Solitaire
- Real-Time Strategy Games
- Helicopter Control
- Legged Robot Control
- Network Security/Control
3 Some AI Planning Problems
- Health Care: personalized treatment planning, hospital logistics/scheduling
- Transportation: autonomous vehicles, supply chain logistics, air traffic control
- Assistive Technologies: dialog management, automated assistants for the elderly/disabled, household robots
- Sustainability: smart grid, forest fire management
- ...
4 Common Elements
- We have a system that changes state over time.
- We can (partially) control the system's state transitions by taking actions.
- The problem gives an objective that specifies which states (or state sequences) are more/less preferred.
Problem: at each moment we must select an action so as to optimize the overall (long-term) objective, i.e., produce the most preferred state sequences.
5 Observe-Act Loop of an AI Planning Agent
[Figure: the agent repeatedly observes the state of the world/system and sends actions back to it.]
Goal: maximize expected reward over the agent's lifetime.
6 Stochastic/Probabilistic Planning: the Markov Decision Process (MDP) Model
[Figure: the agent observes the world state and chooses an action from a finite set; the world changes state in response.]
Goal: maximize expected reward over the agent's lifetime.
7 Example MDP: a dice game
- State: describes all visible information about the game.
- Actions: the different choices of dice to roll (or selecting a category to score).
Goal: maximize the score at the end of the game.
8 Markov Decision Processes
An MDP has four components, S, A, R, T:
- finite state set S
- finite action set A
- transition function T(s,a,s') = Pr(s' | s,a): the probability of going to state s' after taking action a in state s
- bounded, real-valued reward function R(s,a): the immediate reward we get for being in state s and taking action a
Roughly speaking, the objective is to select actions so as to maximize total reward over time.
For example, in a goal-based domain, R(s,a) may equal 1 for goal states and 0 for all others (or -1 for non-goal states).
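For concreteness, here is a minimal sketch of these four components as explicit Python tables; the states, actions, and probabilities are made up for illustration and are not from the slides.

```python
import random

# A tiny, made-up MDP written as explicit tables (hypothetical states and actions).
states = ["s0", "s1", "goal"]
actions = ["left", "right"]

# T[s][a] maps each successor state s' to Pr(s' | s, a).
T = {
    "s0":   {"left": {"s0": 0.8, "s1": 0.2}, "right": {"s0": 0.1, "s1": 0.9}},
    "s1":   {"left": {"s0": 1.0},            "right": {"s1": 0.3, "goal": 0.7}},
    "goal": {"left": {"goal": 1.0},          "right": {"goal": 1.0}},
}

def R(s, a):
    # Goal-based reward, as in the slide's example: 1 in goal states, 0 elsewhere.
    return 1.0 if s == "goal" else 0.0

def sample_next_state(s, a):
    # Draw s' ~ Pr(. | s, a) from the transition table.
    successors, probs = zip(*T[s][a].items())
    return random.choices(successors, weights=probs)[0]
```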
9 [Figure: a dice-game state and its available actions, e.g., Roll(die1), Roll(die1,die2), Roll(die1,die2,die3), ...]
10 [Figure: the action Roll(die1,die2) induces a probabilistic state transition to the outcomes (1,1), (1,2), ..., (6,6).]
Reward: we only get reward for "category selection" actions, equal to the points gained.
11 What is a solution to an MDP?
MDP Planning Problem: Input: an MDP (S,A,R,T). Output: ????
Should the solution to an MDP be just a sequence of actions such as (a1, a2, a3, ...)? Consider a single-player card game like Blackjack/Solitaire.
No! In general an action sequence is not sufficient:
- Actions have stochastic effects, so the state we end up in is uncertain.
- This means we might end up in states where the remainder of the action sequence doesn't apply or is a bad choice.
A solution should tell us the best action for any possible situation/state that might arise.
12 Policies ("plans" for MDPs)
For this class we will assume that we are given a finite planning horizon H, i.e., we are told how many actions we will be allowed to take.
A solution to an MDP is a policy that says what to do at any moment.
Policies are functions from states and times to actions: π : S × T → A, where T here denotes the non-negative integers (not the transition function).
π(s,t) tells us what action to take in state s when there are t stages-to-go.
A policy that does not depend on t is called stationary; otherwise it is called non-stationary.
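A minimal sketch of a policy as a function, reusing the made-up MDP above; the decision rules are arbitrary and only illustrate the (state, stages-to-go) signature.

```python
def pi_nonstationary(s, t):
    """pi(s, t): action to take in state s with t stages-to-go (arbitrary illustrative rule)."""
    return "left" if t <= 1 else "right"

def pi_stationary(s):
    """A stationary policy ignores the stages-to-go argument."""
    return "right"
```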
13 What is a solution to an MDP?
MDP Planning Problem: Input: an MDP (S,A,R,T). Output: a policy such that ????
We don't want to output just any policy; we want to output a "good" policy, one that accumulates lots of reward.
14 Value of a Policy
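A standard finite-horizon definition of a policy's value, consistent with the horizon-H setup above and with the summary at the end of the deck (expected total reward over H steps); the notation here is assumed rather than taken from the slide:

```latex
V^{\pi}_{H}(s) \;=\;
\mathbb{E}\!\left[\,\sum_{t=0}^{H-1} R(s_t, a_t)
\;\middle|\; s_0 = s,\;\; a_t = \pi(s_t,\, H-t)\,\right]
```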
16 What is a solution to an MDP?
17 Computational Problems
18 Computational Problems
Dynamic programming techniques can be used for both policy evaluation and optimization, in time polynomial in the number of states and actions (http://web.engr.oregonstate.edu/~afern/classes/cs533/).
Is polynomial time in the number of states and actions good? Not when these numbers are enormous, as is the case for most realistic applications. Consider Klondike Solitaire, computer network control, etc.
Enter Monte-Carlo planning.
19 Approaches for Large Worlds: Monte-Carlo Planning
Often a simulator of a planning domain is available or can be learned from data.
[Figures: Klondike Solitaire; Fire & Emergency Response.]
20 Large Worlds: Monte-Carlo Approach
Often a simulator of a planning domain is available or can be learned from data.
Monte-Carlo Planning: compute a good policy for an MDP by interacting with an MDP simulator.
[Figure: the planner exchanges actions and state + reward with a world simulator that stands in for the real world.]
21 Example Domains with Simulators
- Traffic simulators
- Robotics simulators
- Military campaign simulators
- Computer network simulators
- Emergency planning simulators (large-scale disaster and municipal)
- Sports domains
- Board games / video games (Go / RTS)
In many cases Monte-Carlo techniques yield state-of-the-art performance.
22 MDP: Simulation-Based Representation
A simulation-based representation gives S, A, R, T, I:
- finite state set S (|S| is generally very large)
- finite action set A (|A| = m, assumed to be of reasonable size)
- stochastic, real-valued, bounded reward function R(s,a) = r: stochastically returns a reward r given inputs s and a
- stochastic transition function T(s,a) = s' (i.e., a simulator): stochastically returns a state s' given inputs s and a; the probability of returning s' is dictated by Pr(s' | s,a) of the MDP
- stochastic initial-state function I: stochastically returns a state according to an initial state distribution
These stochastic functions can be implemented in any programming language!
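A sketch of what such a simulation-based representation might look like in code; the dice-game dynamics and action encoding below are made up for illustration and are not taken from the slides.

```python
import random

class DiceGameSimulator:
    """Illustrative simulation-based MDP; the dynamics are made up.

    Actions are sets of die indices to re-roll; the empty set means "score".
    """

    def I(self):
        # Stochastic initial-state function: three freshly rolled dice.
        return tuple(random.randint(1, 6) for _ in range(3))

    def T(self, s, a):
        # Stochastic transition function (the simulator): re-roll the chosen dice.
        return tuple(random.randint(1, 6) if i in a else d for i, d in enumerate(s))

    def R(self, s, a):
        # Bounded reward function: points are gained only by the "score" (empty) action.
        return float(sum(s)) if len(a) == 0 else 0.0
```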
23 Computational Problems
24 Trajectories
We can use the simulator to observe trajectories of any policy π from any state s.
Let Traj(s, π, h) be a stochastic function that returns a length-h trajectory of π starting at s:
    Traj(s, π, h):
        s_0 = s
        For i = 1 to h-1: s_i = T(s_{i-1}, π(s_{i-1}))
        Return s_0, s_1, ..., s_{h-1}
The total reward of a trajectory is the sum of the immediate rewards collected along it, R(Traj(s, π, h)) = R(s_0, π(s_0)) + ... + R(s_{h-1}, π(s_{h-1})).
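A direct Python rendering of this procedure, assuming the simulator sketch above and a stationary policy (as in the pseudocode); traj and total_reward are illustrative names.

```python
def traj(sim, s, pi, h):
    """Return a length-h trajectory of policy pi starting at state s, using simulator sim."""
    states = [s]
    for _ in range(h - 1):
        s = sim.T(s, pi(s))
        states.append(s)
    return states

def total_reward(sim, states, pi):
    """Sum the immediate rewards R(s_i, pi(s_i)) along a trajectory."""
    return sum(sim.R(s, pi(s)) for s in states)
```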
25 Policy Evaluation
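The Monte-Carlo policy-evaluation estimate that the next slides refer to (their r_i) averages the total rewards of w simulated trajectories; a sketch reusing the helpers above, with w as the sample count:

```python
def evaluate_policy(sim, s, pi, h, w):
    """Monte-Carlo policy evaluation: average total reward of w length-h trajectories from s."""
    returns = [total_reward(sim, traj(sim, s, pi, h), pi) for _ in range(w)]
    return sum(returns) / w
```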
26 Sampling-Error Bound
The Monte-Carlo estimate of a policy's value introduces an approximation error due to sampling.
Note that the r_i are samples of the random variable R(Traj(s, π, h)).
We can apply the additive Chernoff bound, which bounds the difference between an expectation and an empirical average.
27 Aside: Additive Chernoff Bound
Let X be a random variable with maximum absolute value Z, and let x_i, i = 1,...,w, be i.i.d. samples of X.
The Chernoff bound bounds the probability that the average of the x_i is far from E[X]: with probability at least 1 - δ, the empirical average deviates from E[X] by an amount that shrinks as w grows (one standard form is given below).
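One standard (Hoeffding-style) form of this statement, assuming |X| ≤ Z; the exact constants in the slide's version may differ:

```latex
\Pr\!\left(\left|\frac{1}{w}\sum_{i=1}^{w} x_i - \mathbb{E}[X]\right| \ge \epsilon\right)
  \;\le\; 2\exp\!\left(-\frac{w\,\epsilon^2}{2Z^2}\right),
\qquad\text{equivalently, with probability at least } 1-\delta,\qquad
\left|\frac{1}{w}\sum_{i=1}^{w} x_i - \mathbb{E}[X]\right|
  \;\le\; Z\sqrt{\frac{2\ln(2/\delta)}{w}} .
```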
28 Aside: Coin Flip Example
Suppose we have a coin with probability of heads equal to p.
Let X be a random variable with X = 1 if the coin flip gives heads and 0 otherwise (so the Z from the bound is 1).
E[X] = 1·p + 0·(1-p) = p.
After flipping the coin w times we can estimate the heads probability by the average of the x_i.
The Chernoff bound tells us that the probability of a large deviation of this estimate from the true mean (coin bias) p decays exponentially fast in w.
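A small illustrative simulation of this example, comparing the empirical estimate of p against the 95% deviation allowed by the bound above; the function and parameter names are made up:

```python
import math
import random

def estimate_heads_prob(p, w, delta=0.05, seed=0):
    """Estimate a coin's bias from w flips and report the Hoeffding-style deviation bound."""
    rng = random.Random(seed)
    flips = [1 if rng.random() < p else 0 for _ in range(w)]
    estimate = sum(flips) / w
    # With |X| <= Z = 1, the deviation bound is Z * sqrt(2 ln(2/delta) / w).
    bound = math.sqrt(2 * math.log(2 / delta) / w)
    return estimate, bound

# Example: the estimate tightens as w grows.
for w in (100, 1000, 10000):
    est, eps = estimate_heads_prob(p=0.7, w=w)
    print(f"w={w:6d}  estimate={est:.3f}  bound (95%)={eps:.3f}")
```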
29 Sampling Error Bound
Applying the Chernoff bound to the approximation error due to sampling, we get that, with probability at least 1 - δ, the Monte-Carlo estimate is close to the policy's true value (see below).
We can increase w to get an arbitrarily good approximation.
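A standard form of the resulting guarantee, assuming each sampled return r_i is bounded in absolute value by some V_max (a symbol introduced here for illustration); the slide's exact constants and notation may differ:

```latex
\text{With probability at least } 1-\delta:\qquad
\left|\;\frac{1}{w}\sum_{i=1}^{w} r_i \;-\; \mathbb{E}\!\left[R(\mathrm{Traj}(s,\pi,h))\right]\;\right|
\;\le\; V_{\max}\sqrt{\frac{2\ln(2/\delta)}{w}} .
```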
30 Two-Player MDPs (aka Markov Games)
So far we have only discussed single-player MDPs/games.
Your labs and competition will be 2-player zero-sum games (zero-sum means the sum of the players' rewards is zero).
We assume players take turns (non-simultaneous moves).
[Figure: Player 1 and Player 2 alternately send actions to the Markov game and each receives state/reward.]
31 Simulators for 2-Player Games
32 Finite Horizon Value of Game
33 Summary
- Markov Decision Processes (MDPs) are common models for sequential planning problems.
- The solution to an MDP is a policy.
- The goodness of a policy is measured by its value function: the expected total reward over H steps.
- Monte Carlo Planning (MCP) is used for enormous MDPs for which we have a simulator.
- Evaluating a policy via MCP is very easy.