Presentation transcript:

Computational Modeling Lab. Wednesday 18 June 2003. Reinforcement Learning: an introduction, part 3. Ann Nowé. Based on Sutton and Barto.

Computational Modeling Lab Policy and Value Iteration. Evaluate the policy and locally improve it; the state values V(s) and the action values Q(s,a) estimate the quality of a state and of an action taken in a state. If the model is known, use PI of DP; if the model is not known, use PI of RL. Likewise, if the model is known, use VI of DP; if the model is not known, use VI of RL. (Diagram: states s1, s2 with their state values and the action values Q(s1,a1), Q(s1,a2), Q(s2,a1), Q(s2,a2).)

Computational Modeling Lab Value Functions. The value of a state is the expected return starting from that state; it depends on the agent’s policy. The value of taking an action in a state under policy π is the expected return starting from that state, taking that action, and thereafter following π.
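
In Sutton and Barto's notation, the two value functions referred to here are defined as:

```latex
\[ V^{\pi}(s) \;=\; E_{\pi}\{ R_t \mid s_t = s \}
             \;=\; E_{\pi}\Big\{ \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1} \;\Big|\; s_t = s \Big\} \]
\[ Q^{\pi}(s,a) \;=\; E_{\pi}\{ R_t \mid s_t = s,\, a_t = a \}
              \;=\; E_{\pi}\Big\{ \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1} \;\Big|\; s_t = s,\, a_t = a \Big\} \]
```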

Computational Modeling Lab Bellman Equation for a Policy π. The basic idea: the return is the immediate reward plus the discounted return from the next time step, so the value of a state can be written in terms of the expected value of its successor states; writing it out without the expectation operator gives a sum over actions and successor states.
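
Written out in the same notation, the chain of steps the slide refers to is:

```latex
\[ R_t \;=\; r_{t+1} + \gamma r_{t+2} + \gamma^{2} r_{t+3} + \cdots \;=\; r_{t+1} + \gamma R_{t+1} \]
\[ V^{\pi}(s) \;=\; E_{\pi}\{ r_{t+1} + \gamma V^{\pi}(s_{t+1}) \mid s_t = s \} \]
\[ V^{\pi}(s) \;=\; \sum_{a} \pi(s,a) \sum_{s'} \mathcal{P}^{a}_{ss'}
     \big[ \mathcal{R}^{a}_{ss'} + \gamma V^{\pi}(s') \big] \]
```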

Computational Modeling Lab Gridworld, example. Actions: north, south, east, west; deterministic. If an action would take the agent off the grid: no move, but reward = –1. Other actions produce reward = 0, except actions that move the agent out of the special states A and B as shown. State-value function for the equiprobable random policy; γ = 0.9.

Computational Modeling Lab Optimal Value Functions. For finite MDPs, policies can be partially ordered. There are always one or more policies that are better than or equal to all the others; these are the optimal policies, and we denote them all π*. Optimal policies share the same optimal state-value function V*. Optimal policies also share the same optimal action-value function Q*; this is the expected return for taking action a in state s and thereafter following an optimal policy.
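
In symbols (Sutton and Barto, Chapter 3):

```latex
\[ \pi \ge \pi' \;\Longleftrightarrow\; V^{\pi}(s) \ge V^{\pi'}(s) \ \text{for all } s \in S \]
\[ V^{*}(s) \;=\; \max_{\pi} V^{\pi}(s) \qquad
   Q^{*}(s,a) \;=\; \max_{\pi} Q^{\pi}(s,a)
            \;=\; E\{ r_{t+1} + \gamma V^{*}(s_{t+1}) \mid s_t = s,\, a_t = a \} \]
```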

Computational Modeling Lab Bellman Optimality Equation for V*. The value of a state under an optimal policy must equal the expected return for the best action from that state. The relevant backup diagram takes a max over the available actions. V* is the unique solution of this system of nonlinear equations.
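
The equation itself:

```latex
\[ V^{*}(s) \;=\; \max_{a} E\{ r_{t+1} + \gamma V^{*}(s_{t+1}) \mid s_t = s,\, a_t = a \}
            \;=\; \max_{a} \sum_{s'} \mathcal{P}^{a}_{ss'}
                  \big[ \mathcal{R}^{a}_{ss'} + \gamma V^{*}(s') \big] \]
```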

Computational Modeling Lab Bellman Optimality Equation for Q*. The relevant backup diagram takes the max over the actions available in the successor state. Q* is the unique solution of this system of nonlinear equations.
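
The corresponding equation for action values:

```latex
\[ Q^{*}(s,a) \;=\; E\Big\{ r_{t+1} + \gamma \max_{a'} Q^{*}(s_{t+1}, a') \;\Big|\; s_t = s,\, a_t = a \Big\}
             \;=\; \sum_{s'} \mathcal{P}^{a}_{ss'}
                   \big[ \mathcal{R}^{a}_{ss'} + \gamma \max_{a'} Q^{*}(s', a') \big] \]
```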

Computational Modeling Lab Why Optimal State-Value Functions are Useful. Any policy that is greedy with respect to V* is an optimal policy. Therefore, given V*, a one-step-ahead search produces the long-term optimal actions. Given Q*, the agent does not even have to do a one-step-ahead search.
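
The one-step-ahead (greedy) action selection from V* that this slide describes is:

```latex
\[ \pi^{*}(s) \;=\; \arg\max_{a} \sum_{s'} \mathcal{P}^{a}_{ss'}
      \big[ \mathcal{R}^{a}_{ss'} + \gamma V^{*}(s') \big] \]
```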

Computational Modeling Lab Gridworld example. (Figure: the optimal state-value function V* and the optimal policy π* for the gridworld.)

Computational Modeling Lab What About Optimal Action-Value Functions? Given Q*, the agent does not even have to do a one-step-ahead search; it simply takes the action with the largest action value in the current state.
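
With Q* in hand, the optimal action is just the maximizing action value:

```latex
\[ \pi^{*}(s) \;=\; \arg\max_{a} Q^{*}(s,a) \]
```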

Computational Modeling Lab Policy Iteration in Dynamic Programming. Policy Evaluation: for a given policy π, compute the state-value function V^π. Recall the Bellman equation for V^π given earlier.

Computational Modeling Lab Policy Iteration in Dynamic Programming. Policy Evaluation: for a given policy π, compute the state-value function V^π. How to solve this system of equations? Turn the Bellman equation into a dynamic programming operator and compute V^π iteratively.
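
The dynamic programming operator applied to every state at each iteration is the Bellman equation turned into an update rule; the iterates V_k converge to V^π:

```latex
\[ V_{k+1}(s) \;\leftarrow\; \sum_{a} \pi(s,a) \sum_{s'} \mathcal{P}^{a}_{ss'}
      \big[ \mathcal{R}^{a}_{ss'} + \gamma V_{k}(s') \big]
      \qquad \text{for all } s \in S \]
```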

Computational Modeling Lab Iterative Policy Evaluation
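
A minimal Python sketch of the iterative policy evaluation loop this slide refers to; the nested-dictionary encoding of the model (`P`, `R`), the stochastic `policy` table, and the stopping threshold `theta` are illustrative assumptions, not part of the original slides:

```python
def iterative_policy_evaluation(policy, P, R, gamma=0.9, theta=1e-6):
    """Evaluate `policy` by sweeping the Bellman update until values stabilize.

    policy[s][a]  -- probability of taking action a in state s
    P[s][a][s2]   -- transition probability to s2 given (s, a)
    R[s][a][s2]   -- expected immediate reward for that transition
    (These data structures are an assumed encoding of the MDP model.)
    """
    V = {s: 0.0 for s in P}                       # arbitrary initial values
    while True:
        delta = 0.0
        for s in P:                               # one sweep over the state set
            v_new = sum(policy[s][a] *
                        sum(P[s][a][s2] * (R[s][a][s2] + gamma * V[s2])
                            for s2 in P[s][a])
                        for a in P[s])
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new                          # in-place update
        if delta < theta:                         # stop once a full sweep changes little
            return V
```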

Computational Modeling Lab Policy Improvement. Suppose we have computed V^π for a deterministic policy π. For a given state s, would it be better to do an action a ≠ π(s)?

Computational Modeling Lab Policy Improvement Cont.
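
The standard continuation (Sutton and Barto, Chapter 4) is to build the greedy policy and invoke the policy improvement theorem:

```latex
\[ \pi'(s) \;=\; \arg\max_{a} Q^{\pi}(s,a)
            \;=\; \arg\max_{a} \sum_{s'} \mathcal{P}^{a}_{ss'}
                  \big[ \mathcal{R}^{a}_{ss'} + \gamma V^{\pi}(s') \big] \]
\[ \text{Then } V^{\pi'}(s) \ge V^{\pi}(s) \ \text{for all } s \in S. \]
```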

Computational Modeling Lab Policy Iteration in DP. Alternate policy evaluation and policy improvement ("greedification") until the policy no longer changes: π0 → V^π0 → π1 → V^π1 → … → π* → V*.
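
A compact Python sketch of the full policy iteration loop (evaluation alternating with greedification); the model encoding `P[s][a][s2]`, `R[s][a][s2]` and the threshold `theta` are assumptions made for illustration:

```python
def policy_iteration(states, actions, P, R, gamma=0.9, theta=1e-6):
    """Alternate evaluation and greedy improvement until the policy is stable."""
    policy = {s: actions[0] for s in states}          # arbitrary deterministic start
    V = {s: 0.0 for s in states}

    def q(s, a):                                      # one-step lookahead action value
        return sum(P[s][a][s2] * (R[s][a][s2] + gamma * V[s2]) for s2 in P[s][a])

    while True:
        # policy evaluation: sweep until the values stop changing
        while True:
            delta = 0.0
            for s in states:
                v_new = q(s, policy[s])
                delta = max(delta, abs(v_new - V[s]))
                V[s] = v_new
            if delta < theta:
                break
        # policy improvement: greedify with respect to the current V
        stable = True
        for s in states:
            best = max(actions, key=lambda a: q(s, a))
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:
            return policy, V
```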

Computational Modeling Lab A Small Gridworld example. An undiscounted episodic task. Nonterminal states: 1, 2, ..., 14; one terminal state (shown twice, as shaded squares). Actions that would take the agent off the grid leave the state unchanged. Reward is –1 until the terminal state is reached. γ is set to 1.

Computational Modeling Lab Iterative Policy Evaluation for the Small Gridworld

Computational Modeling Lab Iterative Policy Evaluation for the Small Gridworld, cont. (Figure: the value function after convergence, k = ∞, and the corresponding improved greedy policy.)

Computational Modeling Lab Jack’s Car Rental, an example. $10 for each car rented (the car must be available when the request is recorded). Two locations, maximum of 20 cars at each. Cars are returned and requested randomly, following a Poisson distribution: n returns/requests with probability (λ^n / n!) e^{−λ}. 1st location: average requests = 3, average returns = 3. 2nd location: average requests = 4, average returns = 2. Up to 5 cars can be moved between the locations overnight. What are the states, actions, and rewards? Transition probabilities?

Computational Modeling Lab Jack’s Car Rental, an example (cont.)

Computational Modeling Lab Value Iteration in DP. Recall the full policy-evaluation backup; the full value-iteration backup is the same update with the expectation over the policy replaced by a max over actions.
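
Side by side, in the same notation:

```latex
\[ \text{policy evaluation:}\quad
   V_{k+1}(s) \;\leftarrow\; \sum_{a} \pi(s,a) \sum_{s'} \mathcal{P}^{a}_{ss'}
        \big[ \mathcal{R}^{a}_{ss'} + \gamma V_{k}(s') \big] \]
\[ \text{value iteration:}\quad
   V_{k+1}(s) \;\leftarrow\; \max_{a} \sum_{s'} \mathcal{P}^{a}_{ss'}
        \big[ \mathcal{R}^{a}_{ss'} + \gamma V_{k}(s') \big] \]
```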

Computational Modeling Lab Value Iteration Cont. The one-step lookahead inside the max is exactly the action value Q(s,a) computed from the current value estimates.
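
A minimal Python sketch of value iteration, with greedy policy extraction at the end; as before, the `P`/`R` dictionaries and `theta` are assumed for illustration and are not from the original slides:

```python
def value_iteration(states, actions, P, R, gamma=0.9, theta=1e-6):
    """Value iteration: fold evaluation and improvement into one max-backup per state."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q_values = [sum(P[s][a][s2] * (R[s][a][s2] + gamma * V[s2])
                            for s2 in P[s][a])
                        for a in actions]
            v_new = max(q_values)                 # backup with a max instead of an average
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            break
    # extract a greedy (near-optimal) deterministic policy from the converged V
    policy = {s: max(actions,
                     key=lambda a: sum(P[s][a][s2] * (R[s][a][s2] + gamma * V[s2])
                                       for s2 in P[s][a]))
              for s in states}
    return policy, V
```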

Computational Modeling Lab Asynchronous DP. All the DP methods described so far require exhaustive sweeps of the entire state set. Asynchronous DP does not use sweeps. Instead it works like this: repeat, until a convergence criterion is met, pick a state at random and apply the appropriate backup. This still needs lots of computation, but it does not get locked into hopelessly long sweeps. Can you select states to back up intelligently? Yes: an agent's experience can act as a guide.

Computational Modeling Lab Generalized Policy Iteration. Generalized Policy Iteration (GPI): any interaction of policy evaluation and policy improvement, independent of their granularity. A geometric metaphor for convergence of GPI: evaluation pulls the value estimate toward V = V^π, improvement pulls the policy toward greedy(V), and the two processes meet only at V* and π*.

Computational Modeling Lab Efficiency of DP. Finding an optimal policy is polynomial in the number of states... BUT the number of states is often astronomical, e.g., often growing exponentially with the number of state variables (what Bellman called "the curse of dimensionality"). In practice, classical DP can be applied to problems with a few million states. Asynchronous DP can be applied to larger problems and is well suited to parallel computation. It is surprisingly easy to come up with MDPs for which DP methods are not practical.

Computational Modeling Lab Summary. Policy evaluation: backups without a max. Policy improvement: form a greedy policy, if only locally. Policy iteration: alternate the above two processes. Value iteration: backups with a max. Generalized Policy Iteration (GPI). Asynchronous DP: a way to avoid exhaustive sweeps.