An Introduction to Reinforcement Learning (Part 1)
Jeremy Wyatt, Intelligent Robotics Lab, School of Computer Science, University of Birmingham

2 What is Reinforcement Learning (RL)?
Learning from punishments and rewards
Agent moves through world, observing states and rewards
Adapts its behaviour to maximise some function of reward
[Figure: an agent's trajectory of states s_1, s_2, …, actions a_1, a_2, …, and rewards r_1, …, r_9]

3 Return: a long-term measure of performance
Let's assume our agent acts according to some rules, called a policy, π
The return R_t is a measure of the long-term reward collected after time t:
R_t = r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + … = Σ_{k≥0} γ^k r_{t+k+1}, with discount factor 0 ≤ γ < 1
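As an illustrative worked example (not from the original slides): with γ = 0.5 and rewards r_{t+1} = 1, r_{t+2} = 0, r_{t+3} = 2 and zero reward thereafter, the return is R_t = 1 + 0.5·0 + 0.25·2 = 1.5.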

4 Value = Utility = Expected Return
R_t is a random variable
So it has an expected value in a state under a given policy: V^π(s) = E_π[ R_t | s_t = s ]
The RL problem is to find an optimal policy π* that maximises the expected value in every state

5 Markov Decision Processes (MDPs)
The transitions between states are uncertain
The probabilities depend only on the current state (the Markov property): P^a_{ss'} = Pr(s_{t+1} = s' | s_t = s, a_t = a)
Transition matrix P and reward function R
[Figure: a small example MDP with two actions a_1, a_2 and rewards r on its transitions]

6 Summary of the story so far…
Some key elements of RL problems:
– A class of sequential decision making problems
– We want π*
– Performance metric: short term vs. long term
[Figure: trajectory s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, r_{t+2}, …]

7 Summary of the story so far…
Some common elements of RL solutions:
– Exploit conditional independence
– Randomised interaction
[Figure: trajectory s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, r_{t+2}, …]

8 Bellman equations
Conditional independence allows us to define the expected return in terms of a recurrence relation:
V^π(s) = Σ_{s'} P^{π(s)}_{ss'} [ R^{π(s)}_{ss'} + γ V^π(s') ]
where P^a_{ss'} = Pr(s_{t+1} = s' | s_t = s, a_t = a)
and R^a_{ss'} = E[ r_{t+1} | s_t = s, a_t = a, s_{t+1} = s' ]

9 Two types of bootstrapping
We can bootstrap using explicit knowledge of P and R (Dynamic Programming)
Or we can bootstrap using samples from P and R (Temporal Difference learning)
[Figure: a full DP backup over all successors of s under π(s) vs. a single sampled transition (s_t, a_t = π(s_t), r_{t+1}, s_{t+1})]
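To make the first kind of bootstrapping concrete, here is a minimal Python sketch (not from the original slides) of iterative policy evaluation on a hypothetical two-state, two-action MDP; the arrays P and R, the policy and the discount factor are made-up illustrative values:

import numpy as np

# Hypothetical 2-state, 2-action MDP (made-up numbers, purely for illustration).
# P[a, s, s2] = probability of moving from state s to s2 when taking action a
P = np.array([[[0.8, 0.2],
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.3, 0.7]]])
# R[a, s, s2] = expected immediate reward on that transition
R = np.array([[[1.0, 0.0],
               [0.0, 0.0]],
              [[0.0, 2.0],
               [0.0, 1.0]]])
policy = [0, 1]        # deterministic policy: the action chosen in each state
gamma = 0.9

# Iterative policy evaluation: repeatedly apply the Bellman backup for V^pi
V = np.zeros(2)
for _ in range(1000):
    V_new = np.array([np.sum(P[policy[s], s] * (R[policy[s], s] + gamma * V))
                      for s in range(2)])
    if np.max(np.abs(V_new - V)) < 1e-8:   # stop once the backup is (numerically) a fixed point
        V = V_new
        break
    V = V_new
print(V)   # approximate state values under the given policy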

10 TD(0) learning
t := 0
π is the policy to be evaluated
Initialise V(s) arbitrarily for all s
Repeat
  select an action a_t from π(s_t)
  observe the transition (s_t, a_t, r_{t+1}, s_{t+1})
  update V(s_t) according to V(s_t) ← V(s_t) + α [ r_{t+1} + γ V(s_{t+1}) − V(s_t) ]
  t := t + 1
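A minimal tabular Python sketch of the TD(0) loop above (not from the slides). It assumes a Gym-style environment env with reset() and step(a) returning (next_state, reward, done), plus a policy(s) function; all of these names are illustrative assumptions:

import numpy as np

def td0_evaluate(env, policy, n_states, episodes=500, alpha=0.1, gamma=0.9):
    # Tabular TD(0) policy evaluation (illustrative sketch)
    V = np.zeros(n_states)                       # initialise V arbitrarily (here: zeros)
    for _ in range(episodes):
        s = env.reset()                          # assumed to return an integer state index
        done = False
        while not done:
            a = policy(s)                        # follow the policy being evaluated
            s_next, r, done = env.step(a)        # assumed to return (next_state, reward, done)
            target = r + (0.0 if done else gamma * V[s_next])
            V[s] += alpha * (target - V[s])      # move V(s) towards the bootstrapped target
            s = s_next
    return V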

11 On- and off-policy learning
On-policy: evaluate the policy you are following, e.g. TD learning
Off-policy: evaluate one policy while following another, e.g. one-step Q-learning:
Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t) ]

12 Off-policy learning of control
Q-learning is powerful because
– it allows us to evaluate π*
– while taking non-greedy actions (explore)
ε-greedy is a simple and popular exploration rule: take a greedy action with probability ε and a random action with probability 1 − ε
Q-learning is guaranteed to converge for MDPs (with the right exploration policy)
Is there a way of finding π* with an on-policy learner?
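Below is a minimal tabular Q-learning sketch with a simple exploration rule of the kind described above (not from the slides); env, its Gym-like reset()/step() interface and the parameter explore_prob are illustrative assumptions:

import numpy as np

def q_learning(env, n_states, n_actions, episodes=1000,
               alpha=0.1, gamma=0.9, explore_prob=0.1):
    # One-step Q-learning with simple random exploration (illustrative sketch)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Behaviour policy: random action with probability explore_prob, else greedy
            if np.random.rand() < explore_prob:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # Off-policy target: bootstrap from the greedy value of the next state
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q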

13 On-policy learning of control: Sarsa
Discard the max operator:
Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ]
Learn about the policy you are following
Change the policy gradually
Guaranteed to converge for Greedy in the Limit with Infinite Exploration (GLIE) policies

14 On-policy learning of control: Sarsa
t := 0
Initialise Q(s, a) arbitrarily for all s, a
select an action a_t from explore(s_t)
Repeat
  observe the transition (s_t, a_t, r_{t+1}, s_{t+1})
  select an action a_{t+1} from explore(s_{t+1})
  update Q(s_t, a_t) according to Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ]
  t := t + 1
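A minimal tabular Sarsa sketch of the loop above (not from the slides), again assuming a Gym-like env; the explore() helper is an illustrative stand-in for the exploration policy on the slide:

import numpy as np

def sarsa(env, n_states, n_actions, episodes=1000,
          alpha=0.1, gamma=0.9, explore_prob=0.1):
    # On-policy Sarsa control (illustrative sketch)
    Q = np.zeros((n_states, n_actions))

    def explore(s):
        # A simple exploration rule standing in for explore(s_t) in the slide
        if np.random.rand() < explore_prob:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(episodes):
        s = env.reset()
        a = explore(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = explore(s_next)
            # On-policy target: bootstrap from the action actually selected next (no max)
            target = r + (0.0 if done else gamma * Q[s_next, a_next])
            Q[s, a] += alpha * (target - Q[s, a])
            s, a = s_next, a_next
    return Q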

15 Summary: TD, Q-learning, Sarsa
TD learning: V(s_t) ← V(s_t) + α [ r_{t+1} + γ V(s_{t+1}) − V(s_t) ]
One-step Q-learning: Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t) ]
Sarsa learning: Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ]

16 Speeding up learning: eligibility traces, TD(λ)
TD learning only passes the TD error to one state
We add an eligibility e_t(s) for each state:
  e_t(s) = γλ e_{t−1}(s) + 1 if s = s_t, and e_t(s) = γλ e_{t−1}(s) otherwise
where 0 ≤ λ ≤ 1
Update every state in proportion to its eligibility:
  V(s) ← V(s) + α δ_t e_t(s), where δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t)
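A minimal sketch of tabular TD(λ) (not from the slides); the accumulating-trace variant, the env interface and the parameter values are assumptions made for illustration:

import numpy as np

def td_lambda(env, policy, n_states, episodes=500,
              alpha=0.1, gamma=0.9, lam=0.8):
    # Tabular TD(lambda) with accumulating eligibility traces (illustrative sketch)
    V = np.zeros(n_states)
    for _ in range(episodes):
        e = np.zeros(n_states)                   # eligibility trace, reset at the episode start
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            delta = r + (0.0 if done else gamma * V[s_next]) - V[s]   # TD error
            e *= gamma * lam                     # decay every state's eligibility
            e[s] += 1.0                          # accumulate eligibility for the visited state
            V += alpha * delta * e               # update every state in proportion to its trace
            s = s_next
    return V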

17 Eligibility traces for learning control: Q(λ)
There are various eligibility trace methods for Q-learning
Update for every (s, a) pair
Pass information backwards through a non-greedy action
– Lose the convergence guarantee of one-step Q-learning
Watkins's solution: zero all eligibilities after a non-greedy action
Problem: you lose most of the benefit of eligibility traces

18 Eligibility traces for learning control: Sarsa(λ)
Solution: use Sarsa, since it's on-policy
Update for every (s, a) pair:
  e_t(s, a) = γλ e_{t−1}(s, a) + 1 if (s, a) = (s_t, a_t), and e_t(s, a) = γλ e_{t−1}(s, a) otherwise
  Q(s, a) ← Q(s, a) + α δ_t e_t(s, a), where δ_t = r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t)
Keeps convergence guarantees
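A minimal Sarsa(λ) sketch combining the Sarsa update with state-action traces (not from the slides; the trace variant, exploration rule and env interface are illustrative assumptions):

import numpy as np

def sarsa_lambda(env, n_states, n_actions, episodes=1000,
                 alpha=0.1, gamma=0.9, lam=0.8, explore_prob=0.1):
    # Sarsa(lambda) with accumulating state-action traces (illustrative sketch)
    Q = np.zeros((n_states, n_actions))

    def explore(s):
        if np.random.rand() < explore_prob:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(episodes):
        e = np.zeros_like(Q)                     # one eligibility per (s, a) pair
        s = env.reset()
        a = explore(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = explore(s_next)
            delta = r + (0.0 if done else gamma * Q[s_next, a_next]) - Q[s, a]
            e *= gamma * lam                     # decay all traces
            e[s, a] += 1.0                       # bump the trace for the visited pair
            Q += alpha * delta * e               # update every (s, a) in proportion to its trace
            s, a = s_next, a_next
    return Q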

19 Approximate Reinforcement Learning
Why?
– To learn in reasonable time and space (avoid Bellman's curse of dimensionality)
– To generalise to new situations
Solutions
– Approximate the value function
– Search in the policy space
– Approximate a model (and plan)

20 Linear Value Function Approximation
Simplest useful class of problems
Some convergence results
We'll focus on linear TD(λ)
Weight vector at time t: θ_t = (θ_t(1), …, θ_t(n))ᵀ
Feature vector for state s: φ_s = (φ_s(1), …, φ_s(n))ᵀ
Our value estimate: V_t(s) = θ_tᵀ φ_s
Our objective is to minimise the mean squared error Σ_s P(s) [ V^π(s) − V_t(s) ]², weighted by the state distribution P(s)

21 Value Function Approximation: features
There are numerous schemes; CMACs (tile coding) and RBFs are popular
CMAC: n tiles in the space (aggregate over all tilings)
Features: φ_s(i) = 1 if state s falls in tile i, and 0 otherwise
Properties
– Coarse coding
– Regular tiling → efficient access
– Use random hashing to reduce memory
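A small sketch of a one-dimensional tile-coding (CMAC-style) feature map of the kind described above (not from the slides; the tiling layout, offsets and parameter values are made up for illustration):

import numpy as np

def tile_features(x, low=0.0, high=1.0, n_tiles=8, n_tilings=4):
    # Binary CMAC-style features for a scalar state x (illustrative sketch).
    # Each of n_tilings tilings partitions [low, high] into n_tiles tiles,
    # with each tiling offset slightly; exactly one tile per tiling is active.
    phi = np.zeros(n_tiles * n_tilings)
    width = (high - low) / n_tiles
    for k in range(n_tilings):
        offset = k * width / n_tilings           # shift each tiling by a fraction of a tile
        idx = int((x - low + offset) / width)
        idx = min(max(idx, 0), n_tiles - 1)      # clip to the valid tile range
        phi[k * n_tiles + idx] = 1.0
    return phi

print(tile_features(0.3))   # a sparse binary feature vector with n_tilings ones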

22 Linear Value Function Approximation
We perform gradient descent using ∇_θ V_t(s) = φ_s
The update equation for TD(λ) becomes θ_{t+1} = θ_t + α δ_t e_t, with TD error δ_t = r_{t+1} + γ V_t(s_{t+1}) − V_t(s_t)
where the eligibility trace is an n-dimensional vector updated using e_t = γλ e_{t−1} + φ_{s_t}
If the states are presented with the frequency they would be seen under the policy π you are evaluating, TD(λ) converges close to the best linear approximation θ*
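A minimal sketch of linear (semi-gradient) TD(λ) implementing the update above (not from the slides); the features(s) function, the env interface and the parameter values are illustrative assumptions:

import numpy as np

def linear_td_lambda(env, policy, features, n_features, episodes=500,
                     alpha=0.01, gamma=0.9, lam=0.8):
    # Linear TD(lambda) with accumulating traces (illustrative sketch).
    # features(s) is assumed to return an n_features-dimensional vector phi_s.
    theta = np.zeros(n_features)                 # weight vector
    for _ in range(episodes):
        e = np.zeros(n_features)                 # eligibility trace over weights
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            phi, phi_next = features(s), features(s_next)
            v = theta @ phi                      # current value estimate V_t(s)
            v_next = 0.0 if done else theta @ phi_next
            delta = r + gamma * v_next - v       # TD error
            e = gamma * lam * e + phi            # trace update: decay, then add the gradient phi
            theta += alpha * delta * e
            s = s_next
    return theta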

23 Value Function Approximation (VFA): convergence results
Linear TD(λ) converges if we visit states using the on-policy distribution
Off-policy linear TD(λ) and linear Q-learning are known to diverge in some cases
Q-learning and value iteration used with some averagers (including k-nearest neighbour and decision trees) have almost sure convergence if particular exploration policies are used
A special case of policy iteration with Sarsa-style updates and linear function approximation converges
Residual algorithms are guaranteed to converge, but only very slowly

24 Value Function Approximation (VFA): TD-Gammon
TD(λ) learning and a backprop net with one hidden layer
1,500,000 training games (self-play)
Equivalent in skill to the top dozen human players
Backgammon has ~10^20 states, so it can't be solved using DP

25 Model-based RL: structured models
Transition model P is represented compactly using a Dynamic Bayes Net (or factored MDP)
V is represented as a tree
Backups look like goal regression operators
Converging with the AI planning community

26 Reinforcement Learning with Hidden State
Learning in a POMDP, or k-Markov environment
Planning in POMDPs is intractable
Factored POMDPs look promising
Policy search can work well
[Figure: POMDP trajectory with hidden states s_t, observations o_t, actions a_t and rewards r_{t+1}]

27 Policy Search
Why not search directly for a policy?
Policy gradient methods and evolutionary methods
Particularly good for problems with hidden state
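As one concrete illustration of a policy gradient method, here is a minimal REINFORCE-style Monte Carlo sketch with a tabular softmax policy (not from the slides; the env interface and all names are illustrative assumptions):

import numpy as np

def reinforce(env, n_states, n_actions, episodes=2000, alpha=0.01, gamma=0.99):
    # Monte Carlo policy gradient for a tabular softmax policy (illustrative sketch)
    theta = np.zeros((n_states, n_actions))      # policy parameters

    def softmax(v):
        z = np.exp(v - np.max(v))
        return z / z.sum()

    for _ in range(episodes):
        # Roll out one episode under the current stochastic policy
        states, actions, rewards = [], [], []
        s = env.reset()
        done = False
        while not done:
            p = softmax(theta[s])
            a = np.random.choice(n_actions, p=p)
            s_next, r, done = env.step(a)
            states.append(s)
            actions.append(a)
            rewards.append(r)
            s = s_next
        # Compute returns and take a gradient step for every visited (s, a)
        G = 0.0
        for t in reversed(range(len(states))):
            G = rewards[t] + gamma * G
            s, a = states[t], actions[t]
            p = softmax(theta[s])
            grad_log = -p
            grad_log[a] += 1.0                   # gradient of log pi(a|s) for a softmax policy
            theta[s] += alpha * (gamma ** t) * G * grad_log
    return theta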

28 Other RL applications
Elevator control (Barto & Crites)
Space shuttle job scheduling (Zhang & Dietterich)
Dynamic channel allocation in cellphone networks (Singh & Bertsekas)

29 Hot Topics in Reinforcement Learning
Efficient exploration and optimal learning
Learning with structured models (e.g. Bayes nets)
Learning with relational models
Learning in continuous state and action spaces
Hierarchical reinforcement learning
Learning in processes with hidden state (e.g. POMDPs)
Policy search methods

30 Reinforcement Learning: key papers
Overviews
R. Sutton and A. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.
J. Wyatt. Reinforcement Learning: A Brief Overview. In Perspectives on Adaptivity and Learning. Springer Verlag.
L. Kaelbling, M. Littman and A. Moore. Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.
Value Function Approximation
D. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
Eligibility Traces
S. Singh and R. Sutton. Reinforcement learning with replacing eligibility traces. Machine Learning, 22:123-158, 1996.

31 Reinforcement Learning: key papers
Structured Models and Planning
C. Boutilier, T. Dean and S. Hanks. Decision-Theoretic Planning: Structural Assumptions and Computational Leverage. Journal of Artificial Intelligence Research, 11:1-94, 1999.
R. Dearden, C. Boutilier and M. Goldszmidt. Stochastic dynamic programming with factored representations. Artificial Intelligence, 121(1-2):49-107, 2000.
B. Sallans. Reinforcement Learning for Factored Markov Decision Processes. Ph.D. Thesis, Dept. of Computer Science, University of Toronto, 2002.
K. Murphy. Dynamic Bayesian Networks: Representation, Inference and Learning. Ph.D. Thesis, University of California, Berkeley, 2002.

32 Reinforcement Learning: key papers
Policy Search
R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.
R. Sutton, D. McAllester, S. Singh and Y. Mansour. Policy Gradient Methods for Reinforcement Learning with Function Approximation. NIPS 12, 2000.
Hierarchical Reinforcement Learning
R. Sutton, D. Precup and S. Singh. Between MDPs and Semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181-211, 1999.
R. Parr. Hierarchical Control and Learning for Markov Decision Processes. Ph.D. Thesis, University of California, Berkeley, 1998.
A. Barto and S. Mahadevan. Recent Advances in Hierarchical Reinforcement Learning. Discrete Event Dynamic Systems, 13:41-77, 2003.

33 Reinforcement Learning: key papers
Exploration
N. Meuleau and P. Bourgine. Exploration of multi-state environments: Local Measures and back-propagation of uncertainty. Machine Learning, 35, 1999.
J. Wyatt. Exploration control in reinforcement learning using optimistic model selection. In Proceedings of the 18th International Conference on Machine Learning, 2001.
POMDPs
L. Kaelbling, M. Littman and A. Cassandra. Planning and Acting in Partially Observable Stochastic Domains. Artificial Intelligence, 101:99-134, 1998.