Slide 1: Reinforcement Learning: An Introduction
From Sutton & Barto, Reinforcement Learning: An Introduction. Slides adapted from R. S. Sutton and A. G. Barto.
Slide 2: Chapter 7: Eligibility Traces
Slide 3: N-step TD Prediction
- Idea: look farther into the future when you do the TD backup (1, 2, 3, …, n steps).
Slide 4: Mathematics of N-step TD Prediction
- Monte Carlo: use the complete sample return.
- TD: use V to estimate the remaining return.
- n-step TD: 2-step return, n-step return (formulas sketched below).
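A sketch of the returns these labels refer to, in the notation of Sutton & Barto (1st edition); the formula images on the original slide are not recoverable, so treat this as a reconstruction:

```latex
% Complete (Monte Carlo) return from time t, episode terminating at time T:
R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots + \gamma^{T-t-1} r_T
% n-step returns, using V_t to estimate the remaining return:
R_t^{(1)} = r_{t+1} + \gamma V_t(s_{t+1})                     % 1-step (TD)
R_t^{(2)} = r_{t+1} + \gamma r_{t+2} + \gamma^2 V_t(s_{t+2})  % 2-step
R_t^{(n)} = r_{t+1} + \gamma r_{t+2} + \cdots + \gamma^{n-1} r_{t+n} + \gamma^n V_t(s_{t+n})
```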
Slide 5: Learning with N-step Backups
- Backup (on-line or off-line) toward the n-step return.
- Error reduction property of n-step returns: the maximum error using the n-step return is at most γ^n times the maximum error using V (see the sketch below).
- Using this, you can show that n-step methods converge.
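A sketch of the backup and the error reduction property referred to above, again in the book's notation:

```latex
% Increment toward the n-step return (applied on-line or accumulated off-line):
\Delta V_t(s_t) = \alpha \left[ R_t^{(n)} - V_t(s_t) \right]
% Error reduction property: the worst-case error of the n-step return
% shrinks by a factor of gamma^n relative to the worst-case error of V
\max_s \left| E_\pi\{ R_t^{(n)} \mid s_t = s \} - V^\pi(s) \right|
  \le \gamma^n \max_s \left| V(s) - V^\pi(s) \right|
```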
Slide 6: Random Walk Examples
- How does 2-step TD work here?
- How about 3-step TD?
Slide 7: A Larger Example
- Task: 19-state random walk.
- Do you think there is an optimal n? For everything?
Slide 8: Averaging N-step Returns
- n-step methods were introduced to help with understanding TD(λ).
- Idea: back up an average of several returns, e.g. half of the 2-step return and half of the 4-step return (see the sketch below).
- This is called a complex backup:
  - Draw each component.
  - Label it with the weight for that component.
  - It is still one backup.
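For example, the half 2-step / half 4-step complex backup mentioned above averages the corresponding returns:

```latex
% One complex backup: a weighted average of component returns (weights sum to 1)
R_t^{\mathrm{avg}} = \tfrac{1}{2} R_t^{(2)} + \tfrac{1}{2} R_t^{(4)}
```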
Slide 9: Forward View of TD(λ)
- TD(λ) is a method for averaging all n-step backups:
  - Weight each by λ^(n-1) (time since visitation).
  - This average is called the λ-return.
- Backup using the λ-return (see the sketch below).
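A sketch of the λ-return and the corresponding backup, following the book's definition:

```latex
% lambda-return: a (1 - lambda)-normalized, geometrically weighted average
% of all n-step returns
R_t^{\lambda} = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} R_t^{(n)}
% Backup using the lambda-return:
\Delta V_t(s_t) = \alpha \left[ R_t^{\lambda} - V_t(s_t) \right]
```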
Slide 10: λ-return Weighting Function
- Until termination: each n-step return receives weight (1 - λ)λ^(n-1).
- After termination: all remaining weight goes to the complete (Monte Carlo) return.
Slide 11: Relation to TD(0) and MC
- The λ-return can be rewritten to separate the returns before termination from the complete return after termination (see the sketch below).
- If λ = 1, you get MC.
- If λ = 0, you get TD(0).
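The rewritten λ-return and the two limiting cases, sketched in the book's notation:

```latex
% lambda-return split at termination (episode ends at time T):
R_t^{\lambda} = (1 - \lambda) \sum_{n=1}^{T-t-1} \lambda^{n-1} R_t^{(n)}
                + \lambda^{T-t-1} R_t
% lambda = 1: all weight falls on the complete return, i.e. Monte Carlo
R_t^{\lambda} = R_t
% lambda = 0: only the 1-step return keeps weight, i.e. the TD(0) target
R_t^{\lambda} = R_t^{(1)} = r_{t+1} + \gamma V_t(s_{t+1})
```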
Slide 12: Forward View of TD(λ) II
- Look forward from each state to determine its update from future states and rewards.
Slide 13: λ-return on the Random Walk
- Same 19-state random walk as before.
- Why do you think intermediate values of λ are best?
Slide 14: Backward View
- Shout the TD error δ_t backwards over time.
- The strength of your voice decreases with temporal distance by γλ.
Slide 15: Backward View of TD(λ)
- The forward view was for theory.
- The backward view is for mechanism.
- New variable called the eligibility trace:
  - On each step, decay all traces by γλ and increment the trace for the current state by 1.
  - This is the accumulating trace (see the sketch below).
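A sketch of the accumulating trace update described above:

```latex
% Accumulating eligibility trace:
e_t(s) =
  \begin{cases}
    \gamma \lambda \, e_{t-1}(s)     & \text{if } s \ne s_t \\
    \gamma \lambda \, e_{t-1}(s) + 1 & \text{if } s = s_t
  \end{cases}
```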
Slide 16: On-line Tabular TD(λ)
(algorithm box on the original slide; a sketch follows below)
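A minimal Python sketch of on-line tabular TD(λ) with accumulating traces. The `env.reset()`/`env.step()` interface and the `policy` callable are assumptions for illustration, not part of the original slide:

```python
import numpy as np

def tabular_td_lambda(env, policy, n_states, n_episodes,
                      alpha=0.1, gamma=1.0, lam=0.9):
    """On-line tabular TD(lambda): prediction of V under a fixed policy."""
    V = np.zeros(n_states)
    for _ in range(n_episodes):
        e = np.zeros(n_states)            # eligibility traces, reset each episode
        s = env.reset()
        done = False
        while not done:
            s_next, r, done = env.step(policy(s))
            # TD error (terminal states are given value 0)
            delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
            e[s] += 1.0                    # accumulating trace for the visited state
            V += alpha * delta * e         # update every state in proportion to its trace
            e *= gamma * lam               # decay all traces
            s = s_next
    return V
```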
Slide 17: Relation of Backward View to MC & TD(0)
- Using the update rule sketched below:
  - As before, if you set λ to 0, you get TD(0).
  - If you set λ to 1, you get MC, but in a better way:
    - Can apply TD(1) to continuing tasks.
    - Works incrementally and on-line (instead of waiting until the end of the episode).
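The backward-view update rule the bullets above refer to, sketched in the book's notation:

```latex
% TD error at time t:
\delta_t = r_{t+1} + \gamma V_t(s_{t+1}) - V_t(s_t)
% Every state is updated by the current TD error, scaled by its trace:
\Delta V_t(s) = \alpha \, \delta_t \, e_t(s) \quad \text{for all } s
```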
Slide 18: Forward View = Backward View
- The forward (theoretical) view of TD(λ) is equivalent to the backward (mechanistic) view for off-line updating.
- The book shows that the summed backward updates equal the summed forward updates (algebra in the book; one statement of the result is sketched below).
- On-line updating with small α is similar.
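One way to state the off-line equivalence (a sketch; the indicator notation is shorthand for "only the time steps that visit s contribute on the forward side"):

```latex
% Summed backward-view updates = summed forward-view (lambda-return) updates:
\sum_{t=0}^{T-1} \alpha \, \delta_t \, e_t(s)
  = \sum_{t=0}^{T-1} \alpha \left[ R_t^{\lambda} - V_t(s_t) \right]
      \mathbf{1}\{ s = s_t \}
  \qquad \text{for all } s
```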
Slide 19: n-step TD vs TD(λ)
- Same 19-state random walk.
- TD(λ) performs a bit better.
Slide 20: Control: Sarsa(λ)
- Save an eligibility trace for each state-action pair instead of just for each state (see the sketch below).
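A sketch of the Sarsa(λ) trace and update over state-action pairs:

```latex
% Accumulating traces over state-action pairs:
e_t(s,a) =
  \begin{cases}
    \gamma \lambda \, e_{t-1}(s,a) + 1 & \text{if } s = s_t \text{ and } a = a_t \\
    \gamma \lambda \, e_{t-1}(s,a)     & \text{otherwise}
  \end{cases}
% Sarsa TD error and update:
\delta_t = r_{t+1} + \gamma Q_t(s_{t+1}, a_{t+1}) - Q_t(s_t, a_t)
Q_{t+1}(s,a) = Q_t(s,a) + \alpha \, \delta_t \, e_t(s,a) \quad \text{for all } s, a
```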
Slide 21: Sarsa(λ) Algorithm
(algorithm box on the original slide; a sketch follows below)
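A minimal Python sketch of tabular Sarsa(λ) with accumulating traces and ε-greedy action selection. The environment interface is assumed, as in the TD(λ) sketch above:

```python
import numpy as np

def sarsa_lambda(env, n_states, n_actions, n_episodes,
                 alpha=0.05, gamma=0.9, lam=0.9, epsilon=0.05):
    """Tabular Sarsa(lambda) with accumulating traces."""
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(n_episodes):
        e = np.zeros_like(Q)              # traces over state-action pairs
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)
            target = r if done else r + gamma * Q[s_next, a_next]
            delta = target - Q[s, a]
            e[s, a] += 1.0                # accumulating trace
            Q += alpha * delta * e        # update all pairs in proportion to their traces
            e *= gamma * lam              # decay all traces
            s, a = s_next, a_next
    return Q
```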
Slide 22: Sarsa(λ) Gridworld Example
- After one trial, the agent has much more information about how to get to the goal (though not necessarily the best way).
- Traces can considerably accelerate learning.
Slide 23: Three Approaches to Q(λ)
- How can we extend this to Q-learning?
- If you mark every state-action pair as eligible, you back up over the non-greedy policy.
- Watkins: zero out the eligibility traces after a non-greedy action; take the max when backing up at the first non-greedy choice.
Slide 24: Watkins's Q(λ)
(algorithm box on the original slide; a sketch follows below)
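A minimal Python sketch of Watkins's Q(λ): traces decay after greedy actions and are cut to zero after exploratory ones. The environment interface is assumed, as before:

```python
import numpy as np

def watkins_q_lambda(env, n_states, n_actions, n_episodes,
                     alpha=0.05, gamma=0.9, lam=0.9, epsilon=0.05):
    """Tabular Watkins's Q(lambda) with accumulating traces."""
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(n_episodes):
        e = np.zeros_like(Q)
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)
            a_star = int(np.argmax(Q[s_next]))            # greedy action at s'
            target = r if done else r + gamma * Q[s_next, a_star]
            delta = target - Q[s, a]
            e[s, a] += 1.0
            Q += alpha * delta * e
            if Q[s_next, a_next] == Q[s_next, a_star]:    # a' is (tied-)greedy
                e *= gamma * lam                          # traces persist
            else:
                e[:] = 0.0                                # exploratory action: cut all traces
            s, a = s_next, a_next
    return Q
```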
Slide 25: Peng's Q(λ)
- Disadvantage of Watkins's method: early in learning, the eligibility traces will be "cut" (zeroed out) frequently, giving little advantage from traces.
- Peng: back up the max action except at the end; never cut traces.
- Disadvantage: complicated to implement.
Slide 26: Naïve Q(λ)
- Idea: is it really a problem to back up exploratory actions?
  - Never zero the traces.
  - Always back up the max at the current action (unlike Peng's or Watkins's).
- Is this truly naïve?
- Works well in preliminary empirical studies.
- What is the backup diagram?
Slide 27: Comparison Task
- From McGovern and Sutton (1997), "Towards a Better Q(λ)": compared Watkins's, Peng's, and naïve (called McGovern's here) Q(λ) on several tasks.
- See McGovern and Sutton (1997) for other tasks and results (stochastic tasks, continuing tasks, etc.).
- Deterministic gridworld with obstacles:
  - 10x10 gridworld, 25 randomly generated obstacles, 30 runs.
  - α = 0.05, γ = 0.9, λ = 0.9, ε = 0.05, accumulating traces.
Slide 28: Comparison Results
- From McGovern and Sutton (1997), "Towards a Better Q(λ)".
Slide 29: Convergence of the Q(λ)'s
- None of the methods is proven to converge (much extra credit if you can prove any of them).
- Watkins's is thought to converge to Q*.
- Peng's is thought to converge to a mixture of Q^π and Q*.
- Naïve: Q*?
Slide 30: Eligibility Traces for Actor-Critic Methods
- Critic: on-policy learning of V^π; use TD(λ) as described before.
- Actor: needs an eligibility trace for each state-action pair.
- We change the update equation, and we can change the other actor-critic update as well (both traced forms are sketched below).
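A sketch of the traced actor-critic updates, following the way the book presents them; the formula images on the slide are not recoverable, so treat the exact indexing as an assumption:

```latex
% Traced actor update on the action preferences p(s,a):
p_{t+1}(s,a) = p_t(s,a) + \alpha \, \delta_t \, e_t(s,a)
% First variant: accumulating trace on the visited state-action pair
e_t(s,a) = \gamma \lambda \, e_{t-1}(s,a) + \mathbf{1}\{ s = s_t, a = a_t \}
% Second variant: the increment is reduced by the action's current probability
e_t(s,a) = \gamma \lambda \, e_{t-1}(s,a)
           + \mathbf{1}\{ s = s_t, a = a_t \}\,\bigl(1 - \pi_t(s_t, a_t)\bigr)
```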
Slide 31: Replacing Traces
- With accumulating traces, frequently visited states can have eligibilities greater than 1; this can be a problem for convergence.
- Replacing traces: instead of adding 1 when you visit a state, set that state's trace to 1 (see the sketch below).
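A sketch of the replacing trace update:

```latex
% Replacing eligibility trace:
e_t(s) =
  \begin{cases}
    \gamma \lambda \, e_{t-1}(s) & \text{if } s \ne s_t \\
    1                            & \text{if } s = s_t
  \end{cases}
```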
Slide 32: Replacing Traces Example
- Same 19-state random walk task as before.
- Replacing traces perform better than accumulating traces over more values of λ.
Slide 33: Why Replacing Traces?
- Replacing traces can significantly speed learning.
- They can make the system perform well over a broader set of parameters.
- Accumulating traces can do poorly on certain types of tasks.
- Why is this task particularly onerous for accumulating traces?
Slide 34: More Replacing Traces
- Off-line replacing-trace TD(1) is identical to first-visit MC.
- Extension to action values: when you revisit a state, what should you do with the traces for the other actions?
- Singh and Sutton say to set them to zero (see the sketch below).
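A sketch of the Singh-and-Sutton extension of replacing traces to state-action pairs:

```latex
% Replacing traces for action values: zero the other actions in a revisited state
e_t(s,a) =
  \begin{cases}
    1                              & \text{if } s = s_t \text{ and } a = a_t \\
    0                              & \text{if } s = s_t \text{ and } a \ne a_t \\
    \gamma \lambda \, e_{t-1}(s,a) & \text{if } s \ne s_t
  \end{cases}
```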
Slide 35: Implementation Issues with Traces
- Could require much more computation, but most eligibility traces are VERY close to zero.
- If you implement it in Matlab, the backup is only one line of code and is very fast (Matlab is optimized for matrix operations); a vectorized sketch follows below.
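The same point in NumPy rather than Matlab (a self-contained toy sketch; the values are made up just to show that the whole backward-view backup is one vectorized statement plus a trace decay):

```python
import numpy as np

# Toy setup: 19 states as in the random walk, one state just visited.
alpha, gamma, lam, delta = 0.1, 1.0, 0.9, 0.5
V = np.zeros(19)
e = np.zeros(19)
e[9] += 1.0                    # accumulating trace for the current state

V += alpha * delta * e         # the entire backup, vectorized over all states
e *= gamma * lam               # decay every trace
```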
Slide 36: Variable λ
- Can generalize to a variable λ.
- Here λ is a function of time: λ_t.
- Could define, for example, λ_t = λ(s_t) (see the sketch below).
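A sketch of how the trace decay changes with a time-varying λ (treat this as one reasonable reading of the missing formula):

```latex
% Accumulating trace with a time-dependent lambda:
e_t(s) =
  \begin{cases}
    \gamma \lambda_t \, e_{t-1}(s)     & \text{if } s \ne s_t \\
    \gamma \lambda_t \, e_{t-1}(s) + 1 & \text{if } s = s_t
  \end{cases}
% For example, lambda could depend on the current state: \lambda_t = \lambda(s_t)
```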
Slide 37: Conclusions
- Eligibility traces provide an efficient, incremental way to combine MC and TD:
  - They include advantages of MC (can deal with a lack of the Markov property).
  - They include advantages of TD (use of the TD error, bootstrapping).
- Can significantly speed learning.
- Do have a cost in computation.
Slide 38: The Two Views