Adapted from R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction.

Chapter 7: Eligibility Traces

N-step TD Prediction
- Idea: look farther into the future when you do a TD backup (1, 2, 3, ..., n steps)

Mathematics of N-step TD Prediction
- Monte Carlo: $G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots + \gamma^{T-t-1} R_T$
- TD: $G_t^{(1)} = R_{t+1} + \gamma V_t(S_{t+1})$ (use V to estimate the remaining return)
- n-step TD:
  - 2-step return: $G_t^{(2)} = R_{t+1} + \gamma R_{t+2} + \gamma^2 V_t(S_{t+2})$
  - n-step return: $G_t^{(n)} = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{n-1} R_{t+n} + \gamma^n V_t(S_{t+n})$
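As a concrete illustration of the n-step return, here is a minimal Python sketch (not from the slides; the function and argument names are my own) that computes $G_t^{(n)}$ from the next n rewards and a bootstrap value:

```python
def n_step_return(rewards, bootstrap_value, gamma):
    """Compute G_t^(n) = R_{t+1} + gamma*R_{t+2} + ... + gamma^(n-1)*R_{t+n}
                         + gamma^n * V(S_{t+n}).

    rewards: the n rewards R_{t+1}, ..., R_{t+n}
    bootstrap_value: the current estimate V(S_{t+n})
    gamma: discount factor
    """
    g = 0.0
    for k, r in enumerate(rewards):              # k = 0, ..., n-1
        g += (gamma ** k) * r                    # gamma^k * R_{t+k+1}
    g += (gamma ** len(rewards)) * bootstrap_value
    return g


# Example: 2-step return with gamma = 0.9, rewards 0 and 1, and V(S_{t+2}) = 0.5
print(n_step_return([0.0, 1.0], 0.5, 0.9))       # 0 + 0.9*1 + 0.81*0.5 = 1.305
```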

Learning with N-step Backups
- Backup (on-line or off-line): $\Delta V_t(S_t) = \alpha \left[ G_t^{(n)} - V_t(S_t) \right]$
- Error reduction property of n-step returns: $\max_s \left| E_\pi\{ G_t^{(n)} \mid S_t = s \} - V^\pi(s) \right| \le \gamma^n \max_s \left| V_t(s) - V^\pi(s) \right|$ (the maximum error using the n-step return is at most $\gamma^n$ times the maximum error using V)
- Using this, you can show that n-step methods converge

Random Walk Examples
- How does 2-step TD work here?
- How about 3-step TD?

A Larger Example
- Task: 19-state random walk
- Do you think there is an optimal n? For everything?

Averaging N-step Returns
- n-step methods were introduced to help with understanding TD(λ)
- Idea: back up an average of several returns, e.g. half of the 2-step return and half of the 4-step return: $G_t^{avg} = \tfrac{1}{2} G_t^{(2)} + \tfrac{1}{2} G_t^{(4)}$
- Called a complex backup: draw each component and label it with the weight for that component; together they form one backup

Forward View of TD(λ)
- TD(λ) is a method for averaging all n-step backups, weighting the n-step return by $\lambda^{n-1}$ (time since visitation)
- λ-return: $G_t^\lambda = (1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1} G_t^{(n)}$
- Backup using the λ-return: $\Delta V_t(S_t) = \alpha \left[ G_t^\lambda - V_t(S_t) \right]$

λ-return Weighting Function
- Until termination, the weight given to the n-step return is $(1-\lambda)\lambda^{n-1}$
- After termination, all remaining weight, $\lambda^{T-t-1}$, goes to the complete (actual) return $G_t$

Relation to TD(0) and MC
- The λ-return can be rewritten as $G_t^\lambda = (1-\lambda) \sum_{n=1}^{T-t-1} \lambda^{n-1} G_t^{(n)} + \lambda^{T-t-1} G_t$ (the sum covers the returns until termination; the last term is the complete return after termination)
- If λ = 1, you get MC: $G_t^\lambda = G_t$
- If λ = 0, you get TD(0): $G_t^\lambda = G_t^{(1)} = R_{t+1} + \gamma V_t(S_{t+1})$
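To make the λ-return concrete, here is a small Python sketch (my own illustration, assuming a finished episode of known length; names are not from the slides) that averages the truncated n-step returns with weights $(1-\lambda)\lambda^{n-1}$ and puts the remaining weight on the complete return. Setting lam to 0 or 1 reproduces the TD(0) and MC limits above:

```python
def lambda_return(rewards, state_values, lam, gamma):
    """Compute G_t^lambda for t = 0 of a finished episode.

    rewards: [R_1, ..., R_T] for the whole episode
    state_values: [V(S_1), ..., V(S_T)] estimates for the states reached after
                  each step (the terminal value V(S_T) should be 0)
    """
    T = len(rewards)
    g_lambda = 0.0
    # Truncated n-step returns, n = 1, ..., T-1, each weighted (1 - lam) * lam^(n-1)
    for n in range(1, T):
        g_n = sum(gamma ** k * rewards[k] for k in range(n)) \
              + gamma ** n * state_values[n - 1]
        g_lambda += (1 - lam) * lam ** (n - 1) * g_n
    # All remaining weight, lam^(T-1), goes on the complete return
    g_full = sum(gamma ** k * rewards[k] for k in range(T))
    g_lambda += lam ** (T - 1) * g_full
    return g_lambda


# lam = 0 gives the one-step TD target; lam = 1 gives the Monte Carlo return
rs, vs = [0.0, 0.0, 1.0], [0.2, 0.6, 0.0]
print(lambda_return(rs, vs, 0.0, 0.9))   # R_1 + 0.9 * V(S_1) = 0.18
print(lambda_return(rs, vs, 1.0, 0.9))   # complete return = 0.81
```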

Forward View of TD(λ) II
- Look forward from each state to determine its update from future states and rewards

λ-return on the Random Walk
- Same 19-state random walk as before
- Why do you think intermediate values of λ are best?

Backward View
- Shout the TD error $\delta_t = R_{t+1} + \gamma V_t(S_{t+1}) - V_t(S_t)$ backwards over time
- The strength of your voice decreases with temporal distance by $\gamma\lambda$

Backward View of TD(λ)
- The forward view was for theory; the backward view is for mechanism
- New variable called the eligibility trace, $e_t(s)$
- On each step, decay all traces by $\gamma\lambda$ and increment the trace for the current state by 1 (accumulating trace): $e_t(s) = \gamma\lambda e_{t-1}(s) + 1$ if $s = S_t$, and $e_t(s) = \gamma\lambda e_{t-1}(s)$ otherwise

On-line Tabular TD(λ)
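The algorithm box from the book is not reproduced in this transcript. The following is a minimal Python sketch of on-line tabular TD(λ) with accumulating traces, following the updates above ($\delta_t$, trace increment, proportional backup, trace decay). The Gym-style environment interface, the policy function, and all names are assumptions for illustration, not the book's code:

```python
import numpy as np

def tabular_td_lambda(env, policy, n_states, n_episodes,
                      alpha=0.1, gamma=0.99, lam=0.9):
    """On-line tabular TD(lambda) prediction with accumulating traces.

    Assumes: env.reset() -> state (int), env.step(action) -> (next_state, reward, done),
    and policy(state) -> action.
    """
    V = np.zeros(n_states)
    for _ in range(n_episodes):
        e = np.zeros(n_states)            # eligibility traces, reset each episode
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            target = r if done else r + gamma * V[s_next]
            delta = target - V[s]         # TD error delta_t
            e[s] += 1.0                   # accumulating trace for the current state
            V += alpha * delta * e        # update all states in proportion to their traces
            e *= gamma * lam              # decay all traces by gamma * lambda
            s = s_next
    return V
```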

Relation of the Backward View to MC & TD(0)
- Using the update rule $\Delta V_t(s) = \alpha \, \delta_t \, e_t(s)$:
- As before, if you set λ to 0, you get TD(0)
- If you set λ to 1, you get MC, but in a better way: you can apply TD(1) to continuing tasks, and it works incrementally and on-line (instead of waiting until the end of the episode)

Forward View = Backward View
- The forward (theoretical) view of TD(λ) is equivalent to the backward (mechanistic) view for off-line updating
- The book shows that the sum of the backward updates over an episode equals the sum of the forward updates (algebra shown in the book)
- On-line updating with small α is similar

n-step TD vs TD(λ)
- Same 19-state random walk
- TD(λ) performs a bit better

Control: Sarsa(λ)
- Save eligibility traces for state-action pairs, $e_t(s,a)$, instead of just states

Sarsa(λ) Algorithm
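The Sarsa(λ) algorithm box is likewise an image in the original slides. The sketch below (same assumed environment interface as above, with an ε-greedy policy I've added for completeness) shows the idea: a trace per state-action pair, and every eligible pair backed up toward the one-step Sarsa target:

```python
import numpy as np

def sarsa_lambda(env, n_states, n_actions, n_episodes,
                 alpha=0.1, gamma=0.99, lam=0.9, epsilon=0.05, seed=0):
    """Tabular Sarsa(lambda) with accumulating traces (an illustrative sketch)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[s]))

    for _ in range(n_episodes):
        e = np.zeros_like(Q)              # one trace per state-action pair
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)
            target = r if done else r + gamma * Q[s_next, a_next]
            delta = target - Q[s, a]
            e[s, a] += 1.0                # accumulating trace
            Q += alpha * delta * e        # back up every eligible pair
            e *= gamma * lam
            s, a = s_next, a_next
    return Q
```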

Sarsa(λ) Gridworld Example
- With one trial, the agent has much more information about how to get to the goal (though not necessarily the best way)
- Traces can considerably accelerate learning

Three Approaches to Q(λ)
- How can we extend this to Q-learning?
- If you mark every state-action pair as eligible, you back up over the non-greedy policy
- Watkins: zero out the eligibility traces after a non-greedy action; do the max when backing up at the first non-greedy choice

Watkins's Q(λ)
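The algorithm itself is also an image in the transcript. The sketch below (same assumed interface; tie-breaking and other details simplified) highlights the distinguishing step of Watkins's Q(λ): the backup target always uses the max action, and all traces are cut to zero whenever the action actually taken is non-greedy:

```python
import numpy as np

def watkins_q_lambda(env, n_states, n_actions, n_episodes,
                     alpha=0.1, gamma=0.99, lam=0.9, epsilon=0.05, seed=0):
    """Tabular Watkins's Q(lambda) with accumulating traces (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[s]))

    for _ in range(n_episodes):
        e = np.zeros_like(Q)
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)              # action the behaviour policy will take
            a_star = int(np.argmax(Q[s_next]))       # greedy action used in the backup
            target = r if done else r + gamma * Q[s_next, a_star]
            delta = target - Q[s, a]
            e[s, a] += 1.0
            Q += alpha * delta * e
            if (not done) and a_next == a_star:
                e *= gamma * lam                     # next action is greedy: keep traces
            else:
                e[:] = 0.0                           # exploratory action: cut all traces
            s, a = s_next, a_next
    return Q
```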

Peng's Q(λ)
- Disadvantage of Watkins's method: early in learning, the eligibility traces will be "cut" (zeroed out) frequently, resulting in little advantage from traces
- Peng: back up the max action except at the end; never cut traces
- Disadvantage: complicated to implement

Naïve Q(λ)
- Idea: is it really a problem to back up exploratory actions?
- Never zero traces; always back up the max at the current action (unlike Peng's or Watkins's)
- Is this truly naïve?
- Works well in preliminary empirical studies
- What is the backup diagram?

Comparison Task
From McGovern and Sutton (1997), "Towards a Better Q(λ)"
- Compared Watkins's, Peng's, and Naïve (called McGovern's here) Q(λ) on several tasks; see McGovern and Sutton (1997) for other tasks and results (stochastic tasks, continuing tasks, etc.)
- Deterministic gridworld with obstacles: 10x10 gridworld, 25 randomly generated obstacles, 30 runs
- α = 0.05, γ = 0.9, λ = 0.9, ε = 0.05, accumulating traces

Comparison Results
- From McGovern and Sutton (1997), "Towards a Better Q(λ)"

Convergence of the Q(λ)'s
- None of the methods is proven to converge (much extra credit if you can prove any of them)
- Watkins's is thought to converge to $Q^*$
- Peng's is thought to converge to a mixture of $Q^\pi$ and $Q^*$
- Naïve: $Q^*$?

Eligibility Traces for Actor-Critic Methods
- Critic: on-policy learning of $V^\pi$; use TD(λ) as described before
- Actor: needs an eligibility trace for each state-action pair
- We change the actor's update equation to use the trace: $p_{t+1}(s,a) = p_t(s,a) + \beta \, \delta_t \, e_t(s,a)$ for all s, a
- Can change the other actor-critic update (the one with the $1 - \pi_t(S_t, A_t)$ factor) in the same way, where that factor is folded into the trace increment for the current state-action pair
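To show how the two sets of traces fit together, here is an illustrative Python sketch (my own, not the book's algorithm box): the critic runs TD(λ) on V with state traces, and the actor keeps a separate trace per state-action pair and nudges its preferences p(s, a) by its step size times δ_t times the trace. The softmax (Gibbs) policy over preferences and the specific step sizes are assumptions:

```python
import numpy as np

def actor_critic_traces(env, n_states, n_actions, n_episodes,
                        alpha_v=0.1, alpha_p=0.1, gamma=0.99, lam=0.9, seed=0):
    """Actor-critic with eligibility traces (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    V = np.zeros(n_states)
    p = np.zeros((n_states, n_actions))           # actor preferences

    def softmax_policy(s):
        prefs = p[s] - np.max(p[s])               # shift for numerical stability
        probs = np.exp(prefs) / np.sum(np.exp(prefs))
        return int(rng.choice(n_actions, p=probs))

    for _ in range(n_episodes):
        e_v = np.zeros(n_states)                  # critic traces (states)
        e_p = np.zeros((n_states, n_actions))     # actor traces (state-action pairs)
        s = env.reset()
        done = False
        while not done:
            a = softmax_policy(s)
            s_next, r, done = env.step(a)
            delta = (r if done else r + gamma * V[s_next]) - V[s]
            # Critic: TD(lambda) as before
            e_v[s] += 1.0
            V += alpha_v * delta * e_v
            e_v *= gamma * lam
            # Actor: credit recent state-action pairs for the TD error
            e_p[s, a] += 1.0
            p += alpha_p * delta * e_p
            e_p *= gamma * lam
            s = s_next
    return V, p
```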

Replacing Traces
- Using accumulating traces, frequently visited states can have eligibilities greater than 1; this can be a problem for convergence
- Replacing traces: instead of adding 1 when you visit a state, set that state's trace to 1
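The difference from accumulating traces is a single line in any of the sketches above. A tiny self-contained illustration (array sizes and values are my own):

```python
import numpy as np

e_acc = np.zeros(5)               # accumulating traces
e_rep = np.zeros(5)               # replacing traces
gamma, lam = 0.99, 0.9

for s in [2, 2, 2]:               # state 2 visited three steps in a row
    e_acc *= gamma * lam
    e_rep *= gamma * lam
    e_acc[s] += 1.0               # accumulating: trace can grow past 1
    e_rep[s] = 1.0                # replacing: trace is reset to 1
print(e_acc[2], e_rep[2])         # ~2.69 vs exactly 1.0
```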

Replacing Traces Example
- Same 19-state random walk task as before
- Replacing traces perform better than accumulating traces over more values of λ

Why Replacing Traces?
- Replacing traces can significantly speed learning
- They can make the system perform well for a broader set of parameters
- Accumulating traces can do poorly on certain types of tasks (why is this task particularly onerous for accumulating traces?)

More Replacing Traces
- Off-line replacing-trace TD(1) is identical to first-visit MC
- Extension to action values: when you revisit a state, what should you do with the traces for the other actions?
- Singh and Sutton say to set them to zero: $e_t(s,a) = 1$ if $s = S_t$ and $a = A_t$; $e_t(s,a) = 0$ if $s = S_t$ and $a \ne A_t$; $e_t(s,a) = \gamma\lambda e_{t-1}(s,a)$ otherwise
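A tiny illustration of that rule (the array shapes and helper function are my own, not from the slides):

```python
import numpy as np

e = np.zeros((5, 3))      # traces for 5 states x 3 actions
gamma, lam = 0.99, 0.9

def visit(s, a):
    """Replacing trace for action values (Singh & Sutton variant):
    on visiting state s, zero the traces of s's other actions."""
    e[:] *= gamma * lam
    e[s, :] = 0.0
    e[s, a] = 1.0

visit(2, 0)
visit(2, 1)               # the trace for (2, 0) is wiped, not just decayed
print(e[2])               # [0., 1., 0.]
```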

Implementation Issues with Traces
- Could require much more computation, but most eligibility traces are VERY close to zero
- If you implement it in Matlab, the backup is only one line of code and is very fast (Matlab is optimized for matrices)
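One common way to exploit the fact that most traces are near zero (my own illustration, not from the slides) is to restrict the backup to states whose traces are still non-negligible; in NumPy the vectorized backup is already one line either way:

```python
import numpy as np

# Toy setup: a handful of recently visited states carry almost all the trace mass.
alpha, gamma, lam, delta = 0.1, 0.99, 0.9, 0.5
V = np.zeros(1000)
e = np.zeros(1000)
e[:5] = (gamma * lam) ** np.arange(40, 45)    # recently visited states
e[5:] = (gamma * lam) ** 500                  # everything else: vanishingly small

active = e > 1e-8                             # skip negligible traces in the backup
V[active] += alpha * delta * e[active]
e *= gamma * lam
print(int(active.sum()), "of", V.size, "states actually updated")
```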

Variable λ
- Can generalize to a variable λ, where $\lambda_t$ is a function of time
- Traces then decay by $\gamma\lambda_t$: $e_t(s) = \gamma\lambda_t e_{t-1}(s)$ (plus the usual increment for the current state)
- Could define, for example, $\lambda_t = \lambda(S_t)$, a function of the state

Conclusions
- Eligibility traces provide an efficient, incremental way to combine MC and TD
  - They include the advantages of MC (can deal with lack of the Markov property)
  - They include the advantages of TD (use the TD error, bootstrapping)
- Can significantly speed learning
- Do have a cost in computation

The Two Views