
1 The Right Way to do Reinforcement Learning with Function Approximation
Rich Sutton, AT&T Labs
With thanks to Satinder Singh, David McAllester, and Mike Kearns

2 The Prize
To find the "Right Way" to do RL with FA:
– sound (stable, non-divergent)
– ends up with a good policy
– gets there quickly, efficiently
– applicable to any (discrete-time, finite-state) MDP
– compatible with (at least) linear FA
– online and incremental
To prove that it is so (critical to the viability of RL!)
Tensions:
– Proof and practice often pull in different directions
– Lack of knowledge ≠ negative knowledge
– We have not handled this well as a field

3 Outline
Questions
History: from policy to value and back to policy
Problem definition
– Why function approximation changes everything
REINFORCE
Policy gradient theory
Do we need values? Do we need TD?
– Return baselines: using values without bias
– TD/bootstrapping/truncation may not be possible without bias, but seems essential for reducing variance

4 Questions
Is RL theory fundamentally different/harder with FA? Yes.
Are value methods unsound with FA? Absolutely not.
Should we prefer policy methods for other reasons? Probably.
Is it sufficient to learn just a policy, not a value function? Apparently not.
Didn't we already do all this policy stuff in the 1980s? Only some of it.
Can values be used without introducing bias? Yes.
Can TD (bootstrapping) be done without bias? I wish.
Is TD much more efficient than Monte Carlo? Apparently.
Is it TD that makes FA hard? Yes and no, but mostly no.
So are we stuck with dual, "actor-critic" methods? Maybe so.
Are we talking about genetic algorithms? No!
What about learning "heuristic" or "relative" values: are these policy methods or value methods? Policy.

5 The Swing towards Value Functions
Early RL methods all used parameterized policies, but adding value functions seemed key to efficiency.
Why not just learn action-value functions and compute policies from them? (Q-learning; Watkins, 1989)
– A prediction problem, almost supervised
– Fewer parameters
– Cleaner, simpler, easier to use
– Enabled the first proofs of convergence to the optimal policy
– Impressive applications using FA
So successful that the early policy work was bypassed.

6 The Swing away from Value Functions
Theory hit a brick wall for RL with FA:
– Q-learning shown to diverge with linear FA
– Many counterexamples to convergence
– Widespread scepticism about any argmax-VF solution, that is, about any way to get conventional convergence
But is this really a problem? In practice, on-policy methods perform well. Is this only a problem for our theory? (Why? See the diagram two slides ahead.)
With Gordon's latest result, these concerns seem to have been hasty and are now invalid.

7 Why FA makes RL hard
All the states interact and must be balanced, traded off.
Which states are visited is affected by the policy.
A small change (or error) in the VF estimate → can cause a large, discontinuous change in the policy → which can cause a large change in the VF estimate.

8 Diagram of What Happens in Value Function Space
[Diagram: value-function space, showing the inadmissible value functions, the value functions consistent with the parameterization, the true V*, and the region of π*, the best admissible policy.]
– Original naïve hope: guaranteed convergence to a good policy
– Residual gradient et al.: guaranteed convergence to a less desirable policy
– Sarsa, TD(λ) and other on-policy methods: chattering, without divergence or guaranteed convergence
– Q-learning, DP and other off-policy methods: divergence possible

9 …and towards Policy Parameterization
A parameterized policy (PP) can be changed continuously.
A PP can find stochastic policies; the optimal policy will often be stochastic with FA.
A PP can omit the argmax (the action-space search), which is necessary for large/continuous action spaces.
A PP can be more direct, simpler.
Prior knowledge is often better expressed as a PP.
A PP method can be proven convergent to a local optimum for general differentiable FA (REINFORCE; Williams, 1988).

10 Defining the Problem (RL with FA), Part I: Parameterized Policies
Finite state and action sets: $|\mathcal{S}|, |\mathcal{A}| \in \mathbb{N}$; discrete time $t = 0, 1, 2, 3, \ldots$
Transition probabilities: $p^a_{ss'} = \Pr\{s_{t+1} = s' \mid s_t = s, a_t = a\}$
Expected rewards: $r^a_s = E\{r_{t+1} \mid s_t = s, a_t = a\}$
Stochastic policy (w.l.o.g.): $\pi(s,a) = \Pr\{a_t = a \mid s_t = s\}$, possibly parameterized: $\pi = \pi_\theta$, $\theta \in \Re^n$, $n \in \mathbb{N}$, e.g., as in the parameterizations on the next slides.
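For concreteness, here is a minimal sketch (not part of the slides) of the finite-MDP ingredients just defined: transition probabilities, expected rewards, and a one-step sampler. The class name, interface, and reward-noise choice are illustrative assumptions.

```python
import numpy as np

class MDP:
    """Finite MDP: P[s, a, s'] = transition probability, R[s, a] = expected reward."""
    def __init__(self, P, R, rng=None):
        self.P, self.R = P, R
        self.n_states, self.n_actions, _ = P.shape
        self.rng = rng or np.random.default_rng(0)

    def sample_step(self, s, a):
        """Sample s_{t+1} and r_{t+1} given s_t = s, a_t = a."""
        s_next = self.rng.choice(self.n_states, p=self.P[s, a])
        r = self.R[s, a] + 0.1 * self.rng.standard_normal()  # mean reward plus assumed Gaussian noise
        return s_next, r
```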

11 Examples of Policy Parameterizations
Ranking numbers, one per action, computed from features of $s_t$ (with $\theta$ = weights), then Gibbs or sum-to-one normalization into action probabilities $\pi_\theta(s_t, a)$.
Or: a single ranking number computed from features of $s_t$ and $a$, repeated for each $a \in \mathcal{A}$, then Gibbs or sum-to-one normalization into action probabilities $\pi_\theta(s_t, a)$.
Ranking numbers are mechanistically like action values, but do not have value semantics.
Many "heuristic" or "relative" values are better viewed as ranking numbers.
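A minimal sketch (not from the slides) of the second pipeline: a linear ranking number over state-action features followed by Gibbs (softmax) normalization. The feature function `phi` is an assumed input, and `grad_log_pi` is included because the policy-gradient methods later in the talk need it.

```python
import numpy as np

def gibbs_policy(theta, phi, s, actions):
    """pi_theta(s, a): Gibbs (softmax) over linear ranking numbers rho(s, a) = theta . phi(s, a)."""
    rho = np.array([theta @ phi(s, a) for a in actions])
    rho -= rho.max()                       # subtract max for numerical stability
    probs = np.exp(rho)
    return probs / probs.sum()

def grad_log_pi(theta, phi, s, a, actions):
    """Score vector: grad_theta log pi_theta(s, a) = phi(s, a) - sum_b pi(s, b) phi(s, b)."""
    probs = gibbs_policy(theta, phi, s, actions)
    expected_phi = sum(p * phi(s, b) for p, b in zip(probs, actions))
    return phi(s, a) - expected_phi
```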

12 More Policy Parameterizations
Continuous actions work too, e.g., a Gaussian sampler: features of $s_t$ and the weights $\theta$ determine the mean and standard deviation of a Gaussian from which $a_t$ is sampled, implicitly determining the continuous distribution $\pi_\theta(s_t, a)$.
Much stranger parameterizations are possible, e.g., cascades of interacting stochastic processes, such as in a communications network or factory.
We require only that our policy process produces $\pi_\theta(s,a)$ and its gradient with respect to $\theta$.
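A minimal sketch (not from the slides) of the Gaussian parameterization: a linear mean and a softplus-positive standard deviation computed from state features, plus the log-probability gradients a policy-gradient method would need. The state-feature function `phi_s` and the softplus choice are assumptions.

```python
import numpy as np

def gaussian_policy_sample(theta_mu, theta_sigma, phi_s, s, rng):
    """Sample a continuous action a_t ~ N(mu(s), sigma(s)^2)."""
    x = phi_s(s)
    mu = theta_mu @ x
    sigma = np.log1p(np.exp(theta_sigma @ x))    # softplus keeps sigma > 0
    return rng.normal(mu, sigma), mu, sigma

def gaussian_grad_log_pi(theta_mu, theta_sigma, phi_s, s, a):
    """Gradients of log pi_theta(s, a) w.r.t. the mean weights and the std.-dev. weights."""
    x = phi_s(s)
    mu = theta_mu @ x
    z = theta_sigma @ x
    sigma = np.log1p(np.exp(z))
    dlog_dmu = (a - mu) / sigma**2
    dlog_dsigma = ((a - mu)**2 - sigma**2) / sigma**3
    dsigma_dz = 1.0 / (1.0 + np.exp(-z))         # derivative of softplus is the sigmoid
    return dlog_dmu * x, dlog_dsigma * dsigma_dz * x
```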

13 Defining the Problem (RL with FA), Part II
Choose $\pi$ to maximize a measure of total future reward $R_t$, called the return.
Values are expected returns:
$V^\pi(s) = E\{R_t \mid s_t = s, \pi\}$,  $Q^\pi(s,a) = E\{R_t \mid s_t = s, a_t = a, \pi\}$
Optimal policies:
$\pi^* = \arg\max_\pi V^\pi(s) \;\; \forall s \in \mathcal{S}$,  $\pi^*(s) = \arg\max_a Q^{\pi^*}(s,a)$
Value methods maintain a parameterized approximation to a value function ($V^\pi$, $Q^\pi$, $V^{\pi^*}$, or $Q^{\pi^*}$), and then compute their policy from it, e.g., $\pi(s) = \arg\max_a \hat{Q}(s,a)$.
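A minimal sketch (not from the slides) of the value-method route just described: a linear approximation $\hat{Q}(s,a) = w^\top \phi(s,a)$ and the greedy policy computed from it. The feature function `phi` and the linear form are illustrative assumptions.

```python
import numpy as np

def q_hat(w, phi, s, a):
    """Linear action-value approximation: Q_hat(s, a) = w . phi(s, a)."""
    return w @ phi(s, a)

def greedy_policy(w, phi, s, actions):
    """pi(s) = argmax_a Q_hat(s, a)."""
    values = [q_hat(w, phi, s, a) for a in actions]
    return actions[int(np.argmax(values))]
```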

14 FA Breaks the Standard Problem Definition!
Discounted case: one infinite, ergodic episode $s_0, a_0, r_1, s_1, a_1, r_2, s_2, \ldots$, with return $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$.
Let $\Pi$ be the space of all policies, and $\Pi_\theta$ be all policies consistent with the parameterization.
Problem: the best policy depends on $s$! No one policy in $\Pi_\theta$ is best for all states; states compete for control of $\theta$.
We need an overall (not per-state) measure of policy quality, e.g., weighting states by $d^\pi(s)$, the asymptotic fraction of time spent in $s$ under $\pi$.
But! Thm: the resulting $J(\pi)$ is independent of $\gamma$; it is just the average reward per step.

15 RL Cases Consistent with FA
Average-reward case: one infinite, ergodic episode $s_0, a_0, r_1, s_1, a_1, r_2, s_2, \ldots$
Episodic case: many episodes, all starting from a designated start state $s_0$.

16 Outline
Questions
History: from policy to value and back to policy
Problem definition
– Why function approximation changes everything
REINFORCE
Policy gradient theory
Do we need values? Do we need TD?
– Return baselines: using values without bias
– TD/bootstrapping/truncation may not be possible without bias, but seems essential for reducing variance

17 Do we need Values at all?
Extended REINFORCE (Williams, 1988): offline (per-episode) updating, episodic case.
Thm: converges to a local optimum of J for general differentiable FA!
There is also an online, incremental implementation using eligibility traces.
Simple, clean, a single parameter... Why didn't we love this algorithm in 1988??
No TD/bootstrapping (it is a Monte Carlo method), so it was thought to be inefficient.
Extended to the average-reward case by Baxter and Bartlett (1999).
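A minimal per-episode sketch of extended REINFORCE (an illustration under stated assumptions, not the slides' exact pseudocode): generate one episode under the parameterized policy, then apply the accumulated update, with a $\gamma^t$ weighting for the discounted case (set `gamma=1.0` for the plain episodic case). It reuses the hypothetical `gibbs_policy` and `grad_log_pi` helpers from the earlier parameterization sketch, and `env_step(s, a) -> (s_next, r, done)` is an assumed environment interface.

```python
import numpy as np

def reinforce_episode(theta, phi, actions, env_step, s0, alpha=0.01, gamma=1.0,
                      rng=None, max_steps=1000):
    """Generate one episode under pi_theta, then apply the per-episode REINFORCE update."""
    rng = rng or np.random.default_rng(0)
    trajectory, s, done, t = [], s0, False, 0
    while not done and t < max_steps:
        probs = gibbs_policy(theta, phi, s, actions)
        a = actions[rng.choice(len(actions), p=probs)]
        s_next, r, done = env_step(s, a)           # assumed environment interface
        trajectory.append((s, a, r))
        s, t = s_next, t + 1

    # Compute the return G_t following each step (backward pass).
    returns, G = [], 0.0
    for _, _, r in reversed(trajectory):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()

    # Accumulate the update offline: alpha * gamma^t * G_t * grad log pi(s_t, a_t).
    delta = np.zeros_like(theta)
    for t, ((s, a, _), G) in enumerate(zip(trajectory, returns)):
        delta += alpha * (gamma ** t) * G * grad_log_pi(theta, phi, s, a, actions)
    return theta + delta
```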

18 Policy Gradient Theorem
Thm: $\displaystyle \frac{\partial J}{\partial \theta} = \sum_s d^\pi(s) \sum_a \frac{\partial \pi(s,a)}{\partial \theta}\, Q^\pi(s,a)$
where $d^\pi(s)$ is how often $s$ occurs under $\pi$; note that the gradient does not involve $\frac{\partial d^\pi(s)}{\partial \theta}$.
(Williams '88; Jaakkola, Singh & Jordan '95; Cao & Chen '97; Marbach & Tsitsiklis '98; Konda & Tsitsiklis '99; Sutton, McAllester, Singh & Mansour '99)

19 Policy Gradient Theory
Thm: $\displaystyle \frac{\partial J}{\partial \theta} = \sum_s d^\pi(s) \sum_a \frac{\partial \pi(s,a)}{\partial \theta}\, Q^\pi(s,a) = \sum_s d^\pi(s) \sum_a \pi(s,a)\, \frac{\partial \log\pi(s,a)}{\partial \theta}\, Q^\pi(s,a)$
where $d^\pi(s)$ is how often $s$ occurs under $\pi$, and $d^\pi(s)\,\pi(s,a)$ is how often the pair $(s,a)$ occurs under $\pi$, so the gradient can be estimated by sampling $\gamma^t\, \frac{\partial \log\pi(s_t,a_t)}{\partial \theta}\, Q^\pi(s_t,a_t)$ along on-policy trajectories (discounted, start-state case).

20 REINFORCE
Replacing $Q^\pi(s_t,a_t)$ in the sampled policy-gradient expression with the observed return $R_t$ gives the REINFORCE update:
$\Delta\theta_t = \alpha\, \gamma^t\, R_t\, \frac{\partial \log\pi(s_t,a_t)}{\partial \theta}$
an unbiased estimate of the gradient, since $E\{R_t \mid s_t, a_t\} = Q^\pi(s_t,a_t)$.

21 REINFORCE, OR Actor-Critic, OR a General Form
General form (includes all of the above):
$\Delta\theta_t = \alpha\, \gamma^t\, \big(q_t - b(s_t)\big)\, \frac{\partial \log\pi(s_t,a_t)}{\partial \theta}$
where $q_t$ is the observed return $R_t$ (REINFORCE) OR a learned $\hat{Q}(s_t,a_t)$ (actor-critic, with possible TD/bootstrapping), and $b(s_t)$ is a baseline (what is the ideal baseline?).

22 Conjecture: the ideal baseline is $b(s) = V^\pi(s)$, in which case our error term $Q^\pi(s,a) - V^\pi(s)$ is an advantage (Baird '93).
No bias is introduced by an approximation here: the baseline can be replaced by a learned $\hat{V}(s)$ without biasing the gradient estimate.
How important is a baseline to the efficiency of REINFORCE? Apparently very important, but previous tests were flawed.
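A minimal sketch (not the slides' exact procedure) of the corresponding per-episode update with a learned linear baseline $\hat{V}(s) = w^\top \phi_s(s)$: the policy parameters move with the baseline-corrected return $G_t - \hat{V}(s_t)$, and the baseline weights do gradient descent on the squared error. The feature functions `phi` (state-action) and `phi_s` (state) and the step sizes are assumptions; `grad_log_pi` is the earlier hypothetical helper.

```python
import numpy as np

def reinforce_with_baseline(theta, w, phi, phi_s, actions, episode,
                            alpha_theta=0.01, alpha_w=0.1):
    """Per-episode REINFORCE update with a linear baseline V_hat(s) = w . phi_s(s).

    `episode` is a list of (s, a, r) tuples from one trajectory generated under
    pi_theta (undiscounted episodic case, as in the testbed on the next slide).
    """
    G = 0.0
    d_theta, d_w = np.zeros_like(theta), np.zeros_like(w)
    for s, a, r in reversed(episode):
        G = r + G                                    # return following this step
        delta = G - w @ phi_s(s)                     # baseline-corrected return
        d_theta += alpha_theta * delta * grad_log_pi(theta, phi, s, a, actions)
        d_w += alpha_w * delta * phi_s(s)            # gradient descent on delta^2 / 2
    return theta + d_theta, w + d_w
```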

23 Random MDP Testbed
50 randomly constructed episodic MDPs:
– 50 states, uniform starting distribution
– 2 actions per state
– 2 possible next states per action
– expected rewards drawn from N(1,1); actual rewards are the expected reward plus N(0,0.1) noise
– 0.1 probability of termination on each step
State-aggregation FA: 5 groups of 10 states each
Gibbs action selection
Baseline learned by gradient descent
Parameters initially …
Step-size parameters …
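A sketch of how such a random MDP and the state-aggregation features might be constructed (an illustration under the stated assumptions; details not recoverable from the transcript, such as the initial parameter values, are left to the caller).

```python
import numpy as np

def make_random_mdp(n_states=50, n_actions=2, branching=2, p_term=0.1,
                    rng=np.random.default_rng(0)):
    """Random episodic MDP in the style of the testbed described above."""
    P = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            next_states = rng.choice(n_states, size=branching, replace=False)
            P[s, a, next_states] = rng.dirichlet(np.ones(branching))
    R = rng.normal(1.0, 1.0, size=(n_states, n_actions))  # expected rewards, assumed N(1,1)
    return P, R, p_term   # terminate with probability p_term on every step

def aggregate_features(n_states=50, n_groups=5):
    """State-aggregation features: one-hot group membership, 10 states per group."""
    group_size = n_states // n_groups
    def phi_s(s):
        x = np.zeros(n_groups)
        x[s // group_size] = 1.0
        return x
    return phi_s
```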

24 Effect of Learned Baseline
[Plot: J(π) after 50 episodes for REINFORCE with per-episode updating, comparing no baseline with a learned baseline, across a range of step sizes.]
Much better to learn a baseline approximating $V^\pi$.

25 Can We TD without Introducing Bias?
Thm: an approximation $\hat{Q}$ can replace $Q^\pi$ without bias if it is of the compatible form $\hat{Q}(s,a) = w^\top \frac{\partial \log\pi(s,a)}{\partial \theta}$ and has converged to a local optimum (Sutton et al. '99; Konda & Tsitsiklis '99).
However! Thm: under batch updating, such a $\hat{Q}$ results in exactly the same updates as REINFORCE. There is no useful bootstrapping (Singh, McAllester & Sutton, unpublished).
Empirically, there is also no win with per-episode updating.
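A minimal sketch of the compatible form stated in the theorem above: the approximation's features are the score vector $\frac{\partial \log\pi(s,a)}{\partial \theta}$, so the earlier hypothetical `grad_log_pi` helper doubles as the feature function. The update toward an observed return is one plausible way `w` might be trained, an assumption rather than the slides' procedure.

```python
import numpy as np

def compatible_q(w, theta, phi, s, a, actions):
    """Compatible approximation: Q_hat(s, a) = w . grad_log_pi(s, a)."""
    return w @ grad_log_pi(theta, phi, s, a, actions)

def compatible_q_update(w, theta, phi, s, a, actions, target, alpha_w=0.1):
    """One gradient step of w toward a target (e.g., an observed return G_t)."""
    features = grad_log_pi(theta, phi, s, a, actions)
    return w + alpha_w * (target - w @ features) * features
```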

26 Effect of Unbiased Linear Q̂
[Plot: J(π) after 50 episodes for REINFORCE versus the unbiased linear Q̂, across a range of step sizes; per-episode updating.]
The unbiased Q̂ is at best no better than REINFORCE.

27 TD Creates Bias; Must We TD? Is TD really more efficient than Monte Carlo? Apparently “Yes”, but this question deserves a better answer

28 Is it TD that makes FA hard?
Yes: TD prediction with FA is trickier than Monte Carlo; even the linear case converges only to near an optimum, and nonlinear cases can even diverge.
No: TD is not the reason the control case is hard. That problem is intrinsic to control + FA; it happens even with Monte Carlo methods:
a small change in value → a discontinuous change in policy → a large change in the state distribution → a large change in value, and around again.

29 Small-Sample Importance Sampling: A Superior Eligibility Term?
Thm: as in the policy gradient theorem (with $d^\pi(s)$ = how often $s$ occurs under $\pi$, and $d^\pi(s)\,\pi(s,a)$ = how often $(s,a)$ occurs under $\pi$), but with the eligibility term replaced by a small-sample (size $n$?) importance-sampling estimate.

30 Questions
Is RL theory fundamentally different/harder with FA? Yes.
Are value methods unsound with FA? Absolutely not.
Should we prefer policy methods for other reasons? Probably.
Is it sufficient to learn just a policy, not a value function? Apparently not.
Can values be used without introducing bias? Yes.
Can TD (bootstrapping) be done without bias? I wish.
Is TD much more efficient than Monte Carlo? Apparently.
Is it TD that makes FA hard? Yes and no, but mostly no.
So are we stuck with dual, "actor-critic" methods? Maybe so.

