From Sutton & Barto, Reinforcement Learning: An Introduction

Presentation transcript:

Slide 1: From Sutton & Barto, Reinforcement Learning: An Introduction

Slide 2: DP Value Iteration
- Recall the full policy-evaluation backup (first equation below).
- Here is the full value-iteration backup (second equation below).
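
For reference, the two backups in the book's notation (V_k is the current value estimate, π the policy being evaluated, P^a_{ss'} and R^a_{ss'} the transition probabilities and expected rewards, γ the discount rate):

$$V_{k+1}(s) = \sum_a \pi(s,a) \sum_{s'} \mathcal{P}^a_{ss'} \left[ \mathcal{R}^a_{ss'} + \gamma V_k(s') \right]$$

The value-iteration backup is the same update with a max over actions in place of the expectation under π:

$$V_{k+1}(s) = \max_a \sum_{s'} \mathcal{P}^a_{ss'} \left[ \mathcal{R}^a_{ss'} + \gamma V_k(s') \right]$$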

Slide 3: Asynchronous DP
- All the DP methods described so far require exhaustive sweeps of the entire state set.
- Asynchronous DP does not use sweeps. Instead, it works like this: repeat until a convergence criterion is met, picking a state at random and applying the appropriate backup.
- Still needs lots of computation, but it does not get locked into hopelessly long sweeps.
- Can you select the states to back up intelligently? YES: an agent's experience can act as a guide.
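
A minimal sketch of asynchronous value iteration, assuming a small tabular MDP given as NumPy arrays (the array layout and function name here are illustrative, not from the book):

```python
import numpy as np

def async_value_iteration(P, R, gamma=0.9, n_backups=100_000, rng=None):
    """Asynchronous DP: back up one randomly chosen state at a time, in place.

    P[a, s, s2] -- probability of moving from s to s2 under action a
    R[a, s, s2] -- expected reward for that transition
    (illustrative array layout)
    """
    rng = rng or np.random.default_rng(0)
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(n_backups):
        s = rng.integers(n_states)          # pick a state at random
        # full value-iteration backup applied to that single state
        V[s] = max(P[a, s] @ (R[a, s] + gamma * V) for a in range(n_actions))
    return V
```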

Slide 4: Generalized Policy Iteration
- Generalized Policy Iteration (GPI): any interaction of policy evaluation and policy improvement, independent of their granularity.
- A geometric metaphor for the convergence of GPI (shown as a figure on the slide).

Slide 5: Efficiency of DP
- Finding an optimal policy is polynomial in the number of states…
- BUT the number of states is often astronomical, e.g., often growing exponentially with the number of state variables (what Bellman called "the curse of dimensionality").
- In practice, classical DP can be applied to problems with a few million states.
- Asynchronous DP can be applied to larger problems, and it is well suited to parallel computation.
- It is surprisingly easy to come up with MDPs for which DP methods are not practical.

Slide 6: DP - Summary
- Policy evaluation: backups without a max
- Policy improvement: form a greedy policy, if only locally
- Policy iteration: alternate the above two processes
- Value iteration: backups with a max
- Full backups (to be contrasted later with sample backups)
- Generalized Policy Iteration (GPI)
- Asynchronous DP: a way to avoid exhaustive sweeps
- Bootstrapping: updating estimates based on other estimates

Slide 7: Chapter 5: Monte Carlo Methods
- Monte Carlo methods learn from complete sample returns
  - Only defined for episodic tasks
- Monte Carlo methods learn directly from experience
  - On-line: no model necessary, and still attains optimality
  - Simulated: no need for a full model

Slide 8: Monte Carlo Policy Evaluation
- Goal: learn V^π(s)
- Given: some number of episodes under π which contain s
- Idea: average the returns observed after visits to s
- Every-visit MC: average the returns for every time s is visited in an episode
- First-visit MC: average the returns only for the first time s is visited in an episode
- Both converge asymptotically

Slide 9: First-visit Monte Carlo policy evaluation
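
The slide reproduces the book's boxed pseudocode; here is a minimal Python sketch of the same idea, assuming an illustrative generate_episode(policy) helper that returns one episode as a list of (S_t, R_{t+1}) pairs:

```python
from collections import defaultdict

def first_visit_mc_prediction(generate_episode, policy, gamma=1.0, n_episodes=10_000):
    """First-visit Monte Carlo policy evaluation (a sketch, not the book's exact box)."""
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    V = defaultdict(float)
    for _ in range(n_episodes):
        episode = generate_episode(policy)          # [(S_t, R_{t+1}), ...]
        states = [s for s, _ in episode]
        G = 0.0
        # walk the episode backwards, accumulating the return from each step
        for t in range(len(episode) - 1, -1, -1):
            s, r = episode[t]
            G = gamma * G + r
            if s not in states[:t]:                 # first visit to s in this episode
                returns_sum[s] += G
                returns_count[s] += 1
                V[s] = returns_sum[s] / returns_count[s]
    return V
```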

Slide 10: Blackjack example
- Objective: have your card sum be greater than the dealer's without exceeding 21.
- States (200 of them): current sum (12-21); dealer's showing card (ace-10); do I have a usable ace?
- Reward: +1 for winning, 0 for a draw, -1 for losing
- Actions: stick (stop receiving cards), hit (receive another card)
- Policy: stick if my sum is 20 or 21, else hit
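
A small sketch of how this slide's state and fixed policy might be encoded (the type and function names are illustrative):

```python
from typing import NamedTuple

class BlackjackState(NamedTuple):
    player_sum: int        # 12-21
    dealer_showing: int    # 1 (ace) through 10
    usable_ace: bool

HIT, STICK = 0, 1

def threshold_policy(state: BlackjackState) -> int:
    """The slide's fixed policy: stick on 20 or 21, otherwise hit."""
    return STICK if state.player_sum >= 20 else HIT
```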

Slide 11: Blackjack value functions

Slide 12: Backup diagram for Monte Carlo
- The entire episode is included
- Only one choice at each state (unlike DP)
- MC does not bootstrap
- The time required to estimate one state does not depend on the total number of states

Slide 13: Monte Carlo Estimation of Action Values (Q)
- Monte Carlo is most useful when a model is not available
  - We want to learn Q*
- Q^π(s,a): the average return starting from state s, taking action a, and thereafter following π
- Also converges asymptotically if every state-action pair is visited
- Exploring starts: every state-action pair has a non-zero probability of being the starting pair

Slide 14: Monte Carlo Control
- MC policy iteration: policy evaluation using MC methods, followed by policy improvement
- Policy improvement step: greedify with respect to the value (or action-value) function

Slide 15: Convergence of MC Control
- The greedified policy meets the conditions for policy improvement (the inequality is written out below)
- And thus must be ≥ π_k by the policy improvement theorem
- This assumes exploring starts and an infinite number of episodes for MC policy evaluation
- To solve the latter:
  - update only to a given level of performance
  - alternate between evaluation and improvement per episode
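
The condition the slide refers to, written in the book's notation (π_{k+1} is the greedy policy with respect to Q^{π_k}), is the standard policy-improvement inequality:

$$Q^{\pi_k}\big(s, \pi_{k+1}(s)\big) = Q^{\pi_k}\big(s, \arg\max_a Q^{\pi_k}(s,a)\big) = \max_a Q^{\pi_k}(s,a) \;\ge\; Q^{\pi_k}\big(s, \pi_k(s)\big) = V^{\pi_k}(s).$$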

Slide 16: Monte Carlo Exploring Starts
- Fixed point is the optimal policy π*
- Now proven (almost)
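
A sketch of Monte Carlo control with exploring starts, assuming an illustrative environment interface (env.actions, env.random_start, env.step are not the book's; they just make the loop concrete):

```python
import random
from collections import defaultdict

def monte_carlo_es(env, n_episodes=50_000, gamma=1.0):
    """Monte Carlo control with exploring starts (a sketch, not the book's boxed algorithm).

    Assumed, illustrative interface:
      env.actions(s)      -> list of actions available in s
      env.random_start()  -> a uniformly random (state, action) starting pair
      env.step(s, a)      -> (next_state, reward, done)
    """
    Q = defaultdict(float)
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    policy = {}                                  # state -> greedy action

    for _ in range(n_episodes):
        s, a = env.random_start()                # exploring start
        episode = []
        done = False
        while not done:
            s_next, r, done = env.step(s, a)
            episode.append((s, a, r))
            if done:
                break
            s = s_next
            a = policy[s] if s in policy else random.choice(env.actions(s))

        # index of the first occurrence of each (state, action) pair in the episode
        first_visit = {}
        for t, (s_t, a_t, _) in enumerate(episode):
            first_visit.setdefault((s_t, a_t), t)

        G = 0.0
        for t in range(len(episode) - 1, -1, -1):
            s_t, a_t, r_t = episode[t]
            G = gamma * G + r_t
            if first_visit[(s_t, a_t)] == t:     # first-visit update
                returns_sum[(s_t, a_t)] += G
                returns_count[(s_t, a_t)] += 1
                Q[(s_t, a_t)] = returns_sum[(s_t, a_t)] / returns_count[(s_t, a_t)]
                # policy improvement: greedify at s_t
                policy[s_t] = max(env.actions(s_t), key=lambda act: Q[(s_t, act)])
    return policy, Q
```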

Slide 17: Blackjack example continued
- Exploring starts
- Initial policy as described before

Slide 18: On-policy Monte Carlo Control
- On-policy: learn about the policy currently being executed
- How do we get rid of exploring starts? We need soft policies: π(s,a) > 0 for all s and a
  - e.g., an ε-soft policy (written out below)
- Similar to GPI: move the policy toward the greedy policy (i.e., ε-soft)
- Converges to the best ε-soft policy
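
In the book's notation, the ε-greedy form of an ε-soft policy assigns

$$\pi(s,a) = \begin{cases} 1 - \varepsilon + \dfrac{\varepsilon}{|\mathcal{A}(s)|} & \text{if } a = \arg\max_{a'} Q(s,a'), \\ \dfrac{\varepsilon}{|\mathcal{A}(s)|} & \text{otherwise,} \end{cases}$$

so every action keeps probability at least ε/|A(s)| (the soft-policy requirement) while most of the probability mass goes to the greedy action.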

Slide 19: On-policy MC Control
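
The slide shows the book's boxed algorithm; the key difference from Monte Carlo ES is the improvement step, sketched here as an ε-greedy update (illustrative helpers, assuming Q maps (state, action) pairs to values):

```python
import random

def epsilon_greedy_probs(Q, state, actions, epsilon=0.1):
    """Return {action: probability} for an epsilon-greedy policy at one state."""
    greedy = max(actions, key=lambda a: Q[(state, a)])
    base = epsilon / len(actions)
    return {a: base + (1.0 - epsilon if a == greedy else 0.0) for a in actions}

def sample_action(probs):
    """Sample an action from an {action: probability} dict."""
    return random.choices(list(probs), weights=list(probs.values()))[0]
```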

Slide 20: Off-policy Monte Carlo control
- The behavior policy generates behavior in the environment
- The estimation policy is the policy being learned about
- Average the returns from the behavior policy, weighting each by the relative probability of its trajectory under the estimation and behavior policies

Slide 21: Learning about π while following π′
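
The weighting referred to above is, in the first edition's notation (π the estimation policy, π′ the behavior policy that generates the episodes, t the time of the first visit to s in the i-th episode, T_i that episode's termination time), the likelihood ratio of the trajectory that follows the visit:

$$\frac{p_i(s)}{p'_i(s)} = \prod_{k=t}^{T_i-1} \frac{\pi(s_k, a_k)}{\pi'(s_k, a_k)},$$

which gives the weighted-average estimate

$$V(s) = \frac{\sum_{i=1}^{n_s} \big(p_i(s)/p'_i(s)\big)\, R_i(s)}{\sum_{i=1}^{n_s} p_i(s)/p'_i(s)}.$$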

Slide 22: Off-policy MC control

Slide 23: Incremental Implementation
- MC can be implemented incrementally, which saves memory
- Compute the weighted average of each return incrementally (the slide contrasts the non-incremental weighted average with its incremental equivalent, written out below)
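
With returns G_1, …, G_n and corresponding weights W_1, …, W_n (e.g., the importance-sampling ratios from the previous slides), the non-incremental weighted average is

$$V_n = \frac{\sum_{k=1}^{n} W_k G_k}{\sum_{k=1}^{n} W_k},$$

and its incremental equivalent, which needs only V_n and the cumulative weight C_n in memory, is

$$V_{n+1} = V_n + \frac{W_{n+1}}{C_{n+1}}\big(G_{n+1} - V_n\big), \qquad C_{n+1} = C_n + W_{n+1}, \quad C_0 = 0.$$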

Slide 24: MC - Summary
- MC has several advantages over DP:
  - can learn directly from interaction with the environment
  - no need for full models
  - no need to learn about ALL states
  - less harm from violations of the Markov property (later in the book)
- MC methods provide an alternate policy evaluation process
- One issue to watch for: maintaining sufficient exploration (exploring starts, soft policies)
- Introduced the distinction between on-policy and off-policy methods
- No bootstrapping (as opposed to DP)

Slide 25: Monte Carlo is important in practice
- Absolutely
- When there are just a few possibilities to value, out of a large state space, Monte Carlo is a big win
- Backgammon, Go, …