
Learning First Order Markov Models for Control. Pieter Abbeel and Andrew Y. Ng, Poster 48 Tuesday. Consider modeling an autonomous RC-car's dynamics from a sequence of states and actions collected at 100Hz. We have training data: (s_1, a_1, s_2, a_2, …). We'd like to build a model of the MDP's transition probabilities P(s_{t+1} | s_t, a_t). Slide #1

Learning First Order Markov Models for Control. Pieter Abbeel and Andrew Y. Ng, Poster 48 Tuesday. If we use maximum likelihood (ML) to fit the parameters of the MDP, then we are constrained to fit only the 1-step transitions: max_θ ∏_t P_θ(s_{t+1} | s_t, a_t). But in RL, our goal is to maximize the long-term rewards, so we aren't really interested only in the 1/100th-second dynamics. The dynamics on longer time-scales are often only poorly approximated (assuming the system isn't really first-order). This poster: algorithms for building models that better capture dynamics on longer time-scales, and experiments on autonomous RC-car driving. Slide #2

Learning First Order Markov Models for Control Pieter Abbeel and Andrew Y. Ng Stanford University

Autonomous RC Car

Motivation. Consider modeling an RC-car's dynamics from a sequence of states and actions collected at 100Hz. Maximum likelihood fitting of a first-order Markov model constrains the model to fit only the 1-step transitions. However, for control applications we do not care only about the dynamics on the time-scale of 1/100 of a second, but also about longer time-scales.

Motivation. If we use maximum likelihood (ML) to fit the parameters of a first-order Markov model, then we are constrained to fit only the 1-step transitions. The dynamics on longer time-scales are often only poorly approximated [unless the system dynamics are really first-order]. However, for control we are interested in maximizing the long-term expected rewards.

Random Walk Example. Consider a random walk S_T = ε_1 + ε_2 + … + ε_T, and consider two cases: (i) the increments ε_i are perfectly correlated, so Var(S_T) = T²; (ii) the increments ε_i are independent, so Var(S_T) = T. The two cases have identical 1-step transition statistics, so regardless of which is the true model, ML returns the same first-order model.
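
For concreteness, here is a small numerical sketch (not from the poster) of the random-walk example, assuming ±1 increments: both noise processes have identical 1-step statistics, so ML fits the same first-order model to either, yet the variance of S_T differs by a factor of T.

```python
# Minimal sketch: Var(S_T) grows like T for independent increments and like
# T^2 for perfectly correlated increments, even though the 1-step statistics
# of the two processes are identical.
import numpy as np

rng = np.random.default_rng(0)
T, n_runs = 100, 10000

# Independent +/-1 increments.
eps_indep = rng.choice([-1, 1], size=(n_runs, T))
S_T_indep = eps_indep.sum(axis=1)

# Perfectly correlated increments: one +/-1 draw repeated T times.
eps_corr = np.repeat(rng.choice([-1, 1], size=(n_runs, 1)), T, axis=1)
S_T_corr = eps_corr.sum(axis=1)

print("Var(S_T), independent increments:", S_T_indep.var())  # ~T   = 100
print("Var(S_T), correlated increments:", S_T_corr.var())    # ~T^2 = 10000
```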

Examples of physical systems. Influence of wind disturbances on a helicopter: very small over one time step, but strong correlations lead to a substantial effect over time. A first-order ML model may therefore overestimate our ability to control the helicopter or the car [thinking the variance is O(T) rather than O(T²)]. This leads to the danger of, e.g., flying too close to a building, or driving on too narrow a road. Systematic model errors can also show up as correlated noise, e.g., oversteering or understeering of the car.

Problem statement. The learning problem: Given: state/action sequence data from a system. Goal: model the system for purposes of control (such as to use with an RL algorithm). Even when the dynamics are not governed by an MDP, we often would still like to model them as an MDP (rather than as a POMDP), since MDPs are much easier to solve. How do we learn an accurate first-order Markov model from data for control? [Our ideas are also applicable to higher-order and/or more structured models such as dynamic Bayesian networks and mixed-memory Markov models.]

Preliminaries and Notation. Finite-state decision process (DP): S: set of states; A: set of actions; P: set of state transition probabilities [not Markov!]; γ: discount factor; D: initial state distribution; R: reward function, with R(s) ≤ R_max for all s. We will fit a model, with estimates P̂ of the transition probabilities. Value of state s_0 in a model under policy π: V^π(s_0) = E[ Σ_{t=0}^∞ γ^t R(s_t) | s_0, π ].
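
As a brief aside, a minimal sketch of how such a value can be evaluated in a fitted finite model with a fixed policy, using standard policy evaluation (solving V = R + γ P_π V as a linear system); the array layout and names are illustrative, not taken from the poster.

```python
# Minimal policy-evaluation sketch (standard MDP machinery, not the poster's
# code). P[a] is the |S| x |S| transition matrix for action a in the fitted
# model, R[s] is the reward, pi[s] is the action the policy takes in state s.
import numpy as np

def policy_value(P, R, pi, gamma):
    n = len(R)
    # Row s of P_pi is the next-state distribution under the policy's action.
    P_pi = np.array([P[pi[s]][s] for s in range(n)])
    # Solve (I - gamma * P_pi) V = R, i.e. V = R + gamma * P_pi V.
    return np.linalg.solve(np.eye(n) - gamma * P_pi, np.asarray(R, dtype=float))
```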

Parameter estimation when no actions. Consider choosing the estimated model so that its predicted distributions over future states are close to the true ones in variational distance, where d_var denotes the variational distance between two distributions. d_var is hard to optimize from samples, but it can be upper-bounded by a function of the KL-divergence. Minimizing KL-divergence is, in turn, identical to minimizing log-loss.

d_var → KL → log-likelihood. [The last step reflects that we are equally interested in every state as a possible starting state s_0.]

The resulting lagged objective. Given a training sequence s_{0:T}, we propose to use the lagged objective

max_{P̂}  Σ_{t=0}^{T-1}  Σ_{k=1}^{T-t}  log P̂(s_{t+k} | s_t),

where P̂(s_{t+k} | s_t) is the k-step transition probability under the model, obtained by summing over the unobserved intermediate states. Compare this to the maximum likelihood objective

max_{P̂}  Σ_{t=0}^{T-1}  log P̂(s_{t+1} | s_t).
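
A minimal sketch of evaluating the two objectives for a finite-state chain, assuming the estimated model is a transition matrix P_hat with strictly positive entries and that the k-step model probability is the corresponding entry of the k-th matrix power; the function names and the explicit horizon argument H are illustrative additions.

```python
# Minimal sketch: ML vs. lagged objective for a finite-state Markov chain with
# estimated transition matrix P_hat (rows sum to 1, strictly positive entries)
# and an observed state sequence s[0..T] of integer state indices.
import numpy as np

def ml_objective(P_hat, s):
    """Standard ML objective: sum_t log P_hat(s[t+1] | s[t])."""
    return sum(np.log(P_hat[s[t], s[t + 1]]) for t in range(len(s) - 1))

def lagged_objective(P_hat, s, H):
    """Lagged objective: sum_t sum_{k=1..H} log of the k-step model probability."""
    # Precompute k-step transition matrices P_hat^k for k = 1..H.
    powers = [np.linalg.matrix_power(P_hat, k) for k in range(1, H + 1)]
    total = 0.0
    for t in range(len(s) - 1):
        for k in range(1, min(H, len(s) - 1 - t) + 1):
            total += np.log(powers[k - 1][s[t], s[t + k]])
    return total
```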

Lagged objective vs. ML. Consider a length-four training sequence s_0, s_1, s_2, s_3, which could have various dependencies. ML takes into account only the 1-step transitions s_0 → s_1, s_1 → s_2, s_2 → s_3. Our lagged objective also takes into account the longer transitions s_0 → s_2, s_1 → s_3, and s_0 → s_3, in which the intermediate states are treated as unobserved. [In the figure, yellow nodes are observed, white nodes are unobserved.]

EM-algorithm to optimize the lagged objective. E-step: compute expected transition counts and store them in stats(i, j); i.e., for every t, every lag k, every position within the k-step bridge from s_t to s_{t+k}, and all states i, j, accumulate the expected number of times the transition i → j is used, summing over the unobserved intermediate states. M-step: update P̂ such that P̂(j | i) ∝ stats(i, j), i.e., re-normalize the expected counts into transition probabilities.
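
A minimal, unoptimized sketch of one EM update for the lagged objective on a finite-state chain without actions. It uses matrix powers in place of explicit forward/backward message passing and omits the computational savings described on the next slide; the function name, the horizon argument H, and the assumption that P_hat has full support are illustrative.

```python
# Minimal EM-update sketch for the lagged objective (no actions). For each
# observed pair (s[t], s[t+k]) the intermediate states of the k-step "bridge"
# are marginalized under the current model P_hat.
import numpy as np

def em_step(P_hat, s, H):
    n = P_hat.shape[0]
    T = len(s) - 1
    powers = [np.linalg.matrix_power(P_hat, k) for k in range(H + 1)]  # P^0..P^H
    stats = np.zeros((n, n))
    # E-step: expected transition counts.
    for t in range(T):
        for k in range(1, min(H, T - t) + 1):
            a, b = s[t], s[t + k]
            z = powers[k][a, b]                 # P_hat(S_{t+k}=b | S_t=a), assumed > 0
            for m in range(k):                  # offset of the transition inside the bridge
                alpha = powers[m][a, :]         # P_hat(S_{t+m}=i | S_t=a)
                beta = powers[k - 1 - m][:, b]  # P_hat(S_{t+k}=b | S_{t+m+1}=j)
                # Expected count of transition i -> j at offset m of this bridge.
                stats += np.outer(alpha, beta) * P_hat / z
    # M-step: re-normalize expected counts into a stochastic matrix.
    return stats / stats.sum(axis=1, keepdims=True)
```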

Computational Savings for E-step. Inference for the E-step can be done using standard forward and backward message passing. For every pair (t, t+k), the forward messages at position t+i depend on t only, not on k, so the computation of different terms in the inner summation can share messages; similarly for the backward messages. This reduces the number of message computations by a factor of T. Often we are only interested in some maximum horizon H, i.e., in the inner summation of the objective we only consider k = 1, …, H. This gives a reduction from O(T³) to O(T H²). More substantial savings: (S_t = i, S_{t+k} = j) and (S_{t'} = i, S_{t'+k} = j) contribute the same amount to stats(·,·), so we compute the stats(·,·) contribution for all such pairs only once. This gives a further reduction to O(|S|² H²).

Incorporating actions. If actions are incorporated, our objective becomes

Σ_t Σ_k log P̂(s_{t+k} | s_t, a_{t:t+k-1}).

The EM-algorithm is trivially extended by conditioning on the actions during the E-step. Forward messages need to be computed only once for every t, and backward messages only once for every t+k [as before]. The number of possibilities for a_{t:t+k-1} is O(|A|^k), so we use only a few deterministic exploration policies; we can then still obtain the same computational savings as before.

Experiment 1: shortest vs. safest path. Actions are the 4 compass directions. The agent moves in the intended direction with probability 0.7, and in a random direction with probability 0.3. The directions of the "random transitions" are not independent: they are correlated over time. A parameter q controls the correlation between the directions of the random transitions on different time steps (uncorrelated if q = 0, perfectly correlated if q = 1). We fit a first-order Markov model to these dynamics (with each grid position being a state). [Details: the noise process is governed by a Markov process (not directly observable by the agent) with each of the 4 directions as states, and with Prob(staying in the same state) = q.]

Experiment 1: shortest vs. safest path. [Details: learning was done using a 200,000-step state-action sequence. Reported results are averages over 5 independent trials. The exploration policy used independent random actions at each time step.] If the noise is strongly correlated across time (large q), our model estimates the dynamics to have a higher "effective noise level." As a consequence, the more cautious policy (path B) is used.

Experiment 2: Queue. Customers arrive over time to be served; at every time step, the arrival probability equals p. The service rate is the probability that the customer at the front of the queue gets serviced successfully in the current time step. Actions: 3 service rates, with faster service rates being more expensive: q_0 = 0 (reward 0), q_1 = p (reward −1), q_2 = 0.75 (reward −10). Queue buffer length = 20; buffer overflow incurs an additional negative reward.

Experiment 2: Queue. The underlying (unobserved!) arrival process has 2 different modes: fast arrivals and slow arrivals, with P(arrival | slow mode) = 0.01 and P(arrival | fast mode) = 0.99. Steady state: P(slow mode) = 0.8, P(fast mode) = 0.2. An additional parameter determines how rapidly the system switches between the fast and slow modes (fast switching vs. slow switching between modes).
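
A small simulation sketch of this two-mode arrival process. The mode-switching parameterization below (switch probabilities 0.2·r and 0.8·r) is an assumption chosen so that the stationary distribution is (0.8, 0.2) for any switching rate r; it is not necessarily the parameterization used in the experiments.

```python
# Minimal sketch of the hidden two-mode arrival process: the marginal arrival
# rate is the same for any switching rate r, but slow switching (small r)
# produces strongly time-correlated arrivals, which a first-order model of the
# queue size alone cannot capture.
import numpy as np

def simulate_arrivals(T, r, rng):
    p_arrival = {"slow": 0.01, "fast": 0.99}
    # Assumed parameterization: detailed balance gives P(slow)=0.8, P(fast)=0.2
    # for any r in (0, 1]; r only controls how quickly the modes switch.
    p_switch = {"slow": 0.2 * r, "fast": 0.8 * r}
    mode = "slow"
    arrivals = np.empty(T, dtype=int)
    for t in range(T):
        if rng.random() < p_switch[mode]:
            mode = "fast" if mode == "slow" else "slow"
        arrivals[t] = rng.random() < p_arrival[mode]
    return arrivals

rng = np.random.default_rng(0)
for r in (1.0, 0.01):  # fast vs. slow switching between modes
    a = simulate_arrivals(200_000, r, rng)
    print(f"r={r}: arrival rate={a.mean():.3f}, "
          f"lag-1 autocorrelation={np.corrcoef(a[:-1], a[1:])[0, 1]:.3f}")
```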

Experiment 2: Queue. We estimate/learn a first-order Markov model with: state = size of the queue, actions = the 3 service rates. Exploration policy: repeatedly use the same service rate for 25 time steps; we used 8000 such trials. The lag-learned model gives 15% better performance at high correlation levels, and the same performance at low correlation levels.

Experiment 3: RC-car. Consider the situation where the RC-car can choose between 2 paths: a curvy path with high reward if it successfully reaches the goal, and an easier path with lower reward if it successfully reaches the goal. We build a dynamics model of the car, and find a policy/controller in simulation for following each of the paths. The decision about which path to follow is then made based upon this simulation.

RC-car model. θ: angular direction the RC-car is headed; θ̇: angular velocity; V: velocity of the RC-car (kept constant); u_t: steering input to the car (in [−1, 1]); C_1, C_2, C_3: parameters of the model, estimated using linear regression; w_t: noise term, zero-mean Gaussian with variance σ². Using the lagged objective, we re-estimate the variance σ² and compare the resulting model to the one with the first-order (ML) estimate of σ².
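
A minimal sketch of the linear-regression step, assuming (purely for illustration) a linear form for the angular-velocity dynamics, θ̇_{t+1} = C_1·θ̇_t + C_2·u_t + C_3 + w_t; the slide does not spell out the exact functional form, and the re-estimation of σ² under the lagged objective is not shown here.

```python
# Minimal sketch of fitting the car model parameters by least squares, under
# the ASSUMED linear form  omega[t+1] = C1*omega[t] + C2*u[t] + C3 + w[t],
# where omega is the angular velocity and u the steering input.
import numpy as np

def fit_car_model(omega, u):
    """Least-squares estimates of C1, C2, C3 and the 1-step (ML) noise variance."""
    X = np.column_stack([omega[:-1], u[:-1], np.ones(len(u) - 1)])
    y = omega[1:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)  # [C1, C2, C3]
    residuals = y - X @ coeffs
    sigma2_ml = residuals.var()  # first-order noise estimate; the lagged
                                 # objective would re-estimate this variance
    return coeffs, sigma2_ml
```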

Controller. We use the following controller: desired steering angle = p_1·(y − y_des) + p_2·(θ − θ_des); u = f(desired steering angle). We optimize over the parameters p_1, p_2 to follow the straight line y = 0, for which we set y_des = 0, θ_des = 0. For the two specific trajectories, y_des(x) and θ_des(x) are optimized as a function of the current x position. For localization, we use an overhead camera.
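
A minimal sketch of this controller. The map f from desired steering angle to the input u ∈ [−1, 1] is not specified on the slide; a simple clip-and-scale map is assumed here, and the gains and max_angle below are placeholders, not tuned values.

```python
# Minimal controller sketch: linear feedback on lateral offset and heading
# error, then an assumed clip-and-scale map f into the input range [-1, 1].
import numpy as np

def steering_input(y, theta, y_des, theta_des, p1, p2, max_angle=0.5):
    desired_angle = p1 * (y - y_des) + p2 * (theta - theta_des)
    return float(np.clip(desired_angle / max_angle, -1.0, 1.0))  # u in [-1, 1]

# Line following (y = 0): set y_des = 0 and theta_des = 0.
u = steering_input(y=0.2, theta=0.1, y_des=0.0, theta_des=0.0, p1=-1.0, p2=-1.5)
```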

Simulated performance on curvy trajectory. One plot shows 100 sample runs in simulation under the ML-model: the ML-model predicts the RC-car can follow the curvy road >95% of the time. The other plot shows 10 sample runs in simulation under the lag-learned model: the lag-learned model predicts the RC-car can follow the curvy road <10% of the time. Green lines: simulated trajectories; black lines: road boundaries.

Simulated performance on easier trajectory. One plot shows 100 sample runs in simulation under the ML-model: the ML-model predicts the RC-car can follow the easier road >99% of the time. The other plot shows 100 sample runs in simulation under the lag-learned model: the lag-learned model predicts the RC-car can follow the easier road >70% of the time. Green lines: simulated trajectories; black lines: road boundaries. ⇒ ML would choose the curvy road whenever the reward along the curvy road is high.

Actual performance on easier trajectory [Movies available.] The real RC-car succeeded on the easier road 20/20 times. The real RC-car failed on the curvy road 19/20 times.

RC-car movie

Conclusions. Maximum likelihood with a first-order Markov model only tries to model the 1-step transition dynamics. For many control applications, we desire an accurate model of the dynamics on longer time-scales. We showed that using an objective that takes the longer time-scales into account yields, in many cases, a better dynamical model (and a better controller). Special thanks to Mark Woodward, Dave Dostal, Vikash Gilja and Sebastian Thrun.

Cut out slides follow

Lagged objective vs. ML Consider a length four training sequence, which could have various dependencies. ML takes into account only the following transitions. Our lagged objective also takes into account [Shaded nodes are observed, white nodes are unobserved.]

Experiment 2: Queue [use this one or the previous one?] Transition diagram: from queue size s(t) at time t, the queue size at time t+1 is s(t)+1, s(t), or s(t)−1, depending on whether there is an arrival or not and whether servicing is successful or unsuccessful. Choice of actions between 3 service rates: q_0 = 0 (reward 0), q_1 = p (reward −1), q_2 = 0.75 (reward −10). Buffer size = 20; buffer overflow results in a negative reward. Arrival probability = p.

Actual performance on curvy trajectory. [Movies available.] Green lines: simulated trajectories; black lines: road boundaries; real trajectories as obtained on the floor. The actual RC-car fell off the curvy trajectory 19/20 times.

Alternative title slides follow

Learning First Order Markov Models for Control Pieter Abbeel and Andrew Y. Ng Stanford University

Learning First Order Markov Models for Control

Pieter Abbeel and Andrew Y. Ng Stanford University
