1
Apprenticeship Learning for Robotics, with Application to Autonomous Helicopter Flight
Pieter Abbeel, Stanford University
Joint work with: Andrew Y. Ng, Adam Coates, J. Zico Kolter and Morgan Quigley
2
Outline
Preliminaries: reinforcement learning.
Apprenticeship learning algorithms.
Experimental results on various robotic platforms.
3
Reinforcement learning (RL)
The system evolves under dynamics P_sa: starting from state s_0, action a_0 leads to s_1, action a_1 leads to s_2, and so on up to s_T, collecting rewards R(s_0) + R(s_1) + ... + R(s_T) along the way.
Example reward function: R(s) = -||s - s*||.
Goal: pick actions over time so as to maximize the expected score E[R(s_0) + R(s_1) + ... + R(s_T)].
Solution: a policy which specifies an action for each possible state, for all times t = 0, 1, ..., T.
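To make the objective concrete, here is a small sketch (not from the talk) that rolls out a policy in a toy simulated system and scores it with the example reward R(s) = -||s - s*||; the dynamics, the proportional policy, and the horizon are all hypothetical stand-ins.

```python
import numpy as np

def rollout_return(policy, step, s0, s_star, T, rng):
    """Roll out a policy for T steps and sum the rewards
    R(s_t) = -||s_t - s_star|| along the visited states."""
    s, total = s0, 0.0
    for t in range(T + 1):
        total += -np.linalg.norm(s - s_star)   # R(s_t)
        if t < T:
            a = policy(s, t)                   # action chosen by the policy
            s = step(s, a, rng)                # sample s_{t+1} from the dynamics P_sa
    return total

# Hypothetical toy dynamics and policy, purely for illustration.
rng = np.random.default_rng(0)
s_star = np.zeros(2)                                               # target state s*
step = lambda s, a, rng: s + 0.1 * a + 0.01 * rng.normal(size=s.shape)
policy = lambda s, t: -(s - s_star)                                # simple proportional controller
print(rollout_return(policy, step, np.array([1.0, -2.0]), s_star, T=50, rng=rng))
```

Averaging such rollouts over many random seeds estimates the expected score that the RL problem asks us to maximize.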
4
Model-based reinforcement learning
Run the RL algorithm in a simulator (the dynamics model); the output is a control policy.
5
Apprenticeship learning algorithms use a demonstration to help us find a good reward function, a good dynamics model, and a good control policy.
[Diagram, "Reinforcement learning (RL)": the reward function R and the dynamics model P_sa feed into reinforcement learning, which outputs the control policy.]
6
Apprenticeship learning: the reward function
[Same diagram, with the focus on the reward function R.]
7
Many reward functions: a complex trade-off
The reward function trades off:
Height differential of the terrain.
Gradient of the terrain around each foot.
Height differential between the feet.
… (25 features total for our setup)
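As an illustration of such a trade-off (the feature names and weights below are invented for this sketch, not the actual 25 features), a reward can be written as a weighted combination of terrain and footstep features; apprenticeship learning then recovers the weights from the teacher's demonstrations.

```python
import numpy as np

# Hypothetical terrain/footstep features for one candidate foot placement;
# the real setup uses 25 such features.
def features(state):
    return np.array([
        state["terrain_height_diff"],     # height differential of the terrain
        state["terrain_gradient_norm"],   # gradient of the terrain around the foot
        state["inter_foot_height_diff"],  # height differential between the feet
    ])

# Linear reward R(s) = w . phi(s); inverse RL learns w so that the
# teacher's demonstrations score well under the trade-off.
w = np.array([-1.0, -0.5, -2.0])          # illustrative weights only

def reward(state, w=w):
    return float(w @ features(state))

print(reward({"terrain_height_diff": 0.03,
              "terrain_gradient_norm": 0.10,
              "inter_foot_height_diff": 0.02}))
```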
8
Example result [ICML 2004, NIPS 2008]
9
Reward function for aerobatics?
Compact description: the reward function is essentially a target trajectory (rather than a trade-off of features).
10
Reward: the intended trajectory
Perfect demonstrations are extremely hard to obtain.
Multiple trajectory demonstrations: every demonstration is a noisy instantiation of the intended trajectory.
The noise model captures (among others): position drift and time warping.
If different demonstrations are suboptimal in different ways, they can capture the "intended" trajectory implicitly. [Related work: Atkeson & Schaal, 1997.]
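A rough generative sketch of this idea, assuming a random-walk position drift and a smoothly varying playback-speed warp (neither is the paper's exact noise model): each demonstration below is a drifted, time-warped copy of the same intended trajectory.

```python
import numpy as np

def noisy_demo(intended, drift_scale=0.02, warp_scale=0.1, seed=0):
    """One demonstration as a noisy instantiation of the intended trajectory:
    random-walk position drift plus a smooth time warp (illustrative only)."""
    rng = np.random.default_rng(seed)
    T = len(intended)
    drift = np.cumsum(rng.normal(scale=drift_scale, size=intended.shape), axis=0)
    # Playback speed varying smoothly around 1.0 produces the time warp.
    speed = 1.0 + warp_scale * np.sin(np.linspace(0, 2 * np.pi, T) + rng.uniform(0, 2 * np.pi))
    warped_times = np.clip(np.cumsum(speed) - speed[0], 0, T - 1)
    warped = intended[np.round(warped_times).astype(int)]
    return warped + drift

# Five demonstrations, each suboptimal in a different way.
intended = np.stack([np.linspace(0, 1, 100), np.sin(np.linspace(0, 6, 100))], axis=1)
demos = [noisy_demo(intended, seed=k) for k in range(5)]
```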
11
Example: airshow demos
12
Probabilistic graphical model for multiple demonstrations
13
Learning algorithm
Step 1: find the time warping and the distributional parameters. We use EM, with dynamic time warping, to alternately optimize over the different parameters.
Step 2: find the intended trajectory.
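A minimal dynamic-time-warping sketch for the alignment part of Step 1 (the EM updates over the drift and noise parameters are omitted, and this is not the authors' exact implementation):

```python
import numpy as np

def dtw_align(demo, reference):
    """Classic O(T1*T2) dynamic time warping between a demonstration and a
    reference trajectory; returns the monotone alignment path as index pairs."""
    T1, T2 = len(demo), len(reference)
    D = np.full((T1 + 1, T2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            cost = np.linalg.norm(demo[i - 1] - reference[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack to recover the alignment path.
    path, i, j = [], T1, T2
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)], key=lambda ij: D[ij])
    return path[::-1]
```

Alternating this alignment with re-estimating the noise parameters, and then averaging the aligned demonstrations, yields the intended trajectory of Step 2.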
14
After time-alignment
15
Apprenticeship learning for the dynamics model
[Same diagram, with the focus on the dynamics model P_sa.]
16
Apprenticeship learning for the dynamics model [ICML 2005]
Algorithms such as E^3 (Kearns and Singh, 2002) learn the dynamics by using exploration policies, which are dangerous/impractical for many systems.
Our algorithm:
Initializes the model from a demonstration.
Repeatedly executes "exploitation policies" that try to maximize reward.
Provably achieves near-optimal performance (compared to the teacher).
Machine learning theory: the sample-generating process is complicated and non-IID, so standard learning-theory bounds do not apply; the proof uses a martingale construction over relative losses.
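A schematic of this loop, with `fit_model`, `optimize_policy`, `rollout`, and `good_enough` as hypothetical placeholders rather than the paper's actual interfaces:

```python
def apprenticeship_model_learning(demo_data, fit_model, optimize_policy, rollout,
                                  good_enough, n_iters=10):
    """Exploration-free model learning: initialize the dynamics model from the
    teacher's demonstration, then repeatedly execute exploitation policies
    (greedy w.r.t. the current model) and refit the model on the new data."""
    data = list(demo_data)                   # initialize from the demonstration
    model = fit_model(data)
    policy = optimize_policy(model)
    for _ in range(n_iters):
        trajectory = rollout(policy)         # execute on the real system
        if good_enough(trajectory):
            break
        data.extend(trajectory)              # add the newly collected data
        model = fit_model(data)              # refit the dynamics model
        policy = optimize_policy(model)      # next exploitation policy
    return policy, model
```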
17
Learning the dynamics model [NIPS 2005, 2006]
Details of the algorithm for learning the dynamics from data: exploiting structure from physics; a lagged learning criterion.
18
Related work
Bagnell & Schneider, 2001; LaCivita et al., 2006; Ng et al., 2004a; Roberts et al., 2003; Saripalli et al., 2003; Ng et al., 2004b; Gavrilets, Martinos, Mettler and Feron, 2002.
The maneuvers presented here are significantly more difficult than those flown by any other autonomous helicopter.
19
Autonomous nose-in funnel
20
Accuracy
21
Non-stationary maneuvers
Modeling is extremely complex. Our dynamics model's state: position, orientation, velocity, angular rate. The true state also includes the air (!), head speed, servos, deformation, etc.
Key observation: in the vicinity of a specific point along a specific trajectory, these unknown state variables tend to take on similar values.
22
Example: z-acceleration
23
Local model learning algorithm
1. Time-align the trajectories.
2. Learn locally weighted models in the vicinity of the trajectory, with weights W(t′) = exp(−(t − t′)² / σ²).
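A sketch of step 2, fitting one locally weighted linear model x_{t+1} ≈ A x_t + B u_t + c around a single time point of the aligned trajectory; the state/control layout and the bandwidth sigma are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def local_linear_model(t, times, X, U, X_next, sigma=5.0):
    """Weighted least-squares fit of x_{t+1} ~ A x_t + B u_t + c near time t,
    with Gaussian time weights W(t') = exp(-(t - t')^2 / sigma^2)."""
    w = np.exp(-((times - t) ** 2) / sigma ** 2)
    Z = np.hstack([X, U, np.ones((len(X), 1))])          # regressors [x, u, 1]
    sw = np.sqrt(w)[:, None]                             # weight both sides of the fit
    theta, *_ = np.linalg.lstsq(sw * Z, sw * X_next, rcond=None)
    n_x, n_u = X.shape[1], U.shape[1]
    A, B, c = theta[:n_x].T, theta[n_x:n_x + n_u].T, theta[-1]
    return A, B, c
```

Fitting one such model per time step along the trajectory gives the time-varying local dynamics used downstream by the controller.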
24
Autonomous flips
25
Apprenticeship learning: the RL algorithm
[Same diagram, annotated: a (crude) model; a (sloppy) demonstration or initial trial (none of the demos is exactly equal to the intended trajectory); a small number of real-life trials.]
26
Algorithm idea
Input to the algorithm: an approximate model. Start by computing the optimal policy according to the model.
[Figure: real-life trajectory versus target trajectory.]
The policy is optimal according to the model, so no improvement is possible based on the model alone.
27
Algorithm idea (2)
Update the model such that it becomes exact for the current policy.
29
Algorithm idea (2)
The updated model perfectly predicts the state sequence obtained under the current policy. We can use the updated model to find an improved policy.
30
Algorithm
1. Find the (locally) optimal policy for the model.
2. Execute the current policy and record the state trajectory.
3. Update the model such that the new model is exact for the current policy.
4. Use the new model to compute the policy gradient g and update the policy: θ := θ + α g.
5. Go back to Step 2.
Notes: The step-size parameter α is determined by a line search. Instead of the policy gradient, any algorithm that provides a local policy-improvement direction can be used; in our experiments we used differential dynamic programming.
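A sketch of this iteration, where `model_step`, `real_rollout`, and `improve_policy` are hypothetical placeholders for the (crude) simulator, the real system, and a local improvement routine such as line-searched policy gradient or DDP:

```python
def learn_with_inaccurate_model(policy, model_step, real_rollout, improve_policy,
                                x0, H, n_iters=10):
    """Steps 2-5 above: execute the current policy, add time-indexed corrections so
    the model reproduces the observed state sequence exactly, then take one local
    policy-improvement step against the corrected model."""
    for _ in range(n_iters):
        xs, us = real_rollout(policy, x0, H)                       # step 2: record the real trajectory
        bias = [xs[t + 1] - model_step(xs[t], us[t], t) for t in range(H)]   # step 3: model corrections
        corrected = lambda x, u, t, b=bias: model_step(x, u, t) + b[t]
        policy = improve_policy(policy, corrected, x0, H)          # step 4: local improvement
    return policy
```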
31
Performance guarantees
Let the local policy-improvement algorithm be policy gradient.
Notes: These assumptions are insufficient to give the same performance guarantees for model-based RL. The constant K depends only on the dimensionality of the state, action, and policy, the horizon H, and an upper bound on the first and second derivatives of the transition model, the policy, and the reward function.
32
Experimental setup
Our expert pilot provides 5-10 demonstrations.
Our algorithm aligns the trajectories, extracts the intended trajectory as the target, and learns local models.
We repeatedly run the controller and collect model errors until satisfactory performance is obtained.
We use receding-horizon differential dynamic programming (DDP) to find the controller.
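A schematic of the receding-horizon loop; `ddp_plan` and `real_step` are placeholders standing in for a short-horizon DDP solve and the real system, not the actual controller code.

```python
def receding_horizon_control(x0, target_traj, ddp_plan, real_step, lookahead=20):
    """Re-plan over a short window of the intended trajectory at every step
    and apply only the first control (schematic only)."""
    x, flown = x0, []
    for t in range(len(target_traj) - 1):
        window = target_traj[t:t + lookahead]   # local piece of the target trajectory
        u_seq = ddp_plan(x, window)             # DDP solve against the local models
        x = real_step(x, u_seq[0])              # apply only the first control
        flown.append(x)
    return flown
```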
33
Airshow [Switch to QuickTime for HD airshow.]
34
Airshow accuracy
35
Tic-toc
36
Chaos [Switch to QuickTime for HD chaos.]
37
Conclusion
Apprenticeship learning algorithms help us find better controllers by exploiting teacher demonstrations.
Algorithmic instantiations:
Inverse reinforcement learning: learn trade-offs in the reward; learn the "intended" trajectory.
Model learning: no explicit exploration; local models.
Control with a crude model plus a small number of trials.
38
Current and future work
Automate more general advice taking.
Guaranteed safe exploration: safely learning to outperform the teacher.
Autonomous helicopters: assist in wildland firefighting; auto-rotation landings.
Fixed-wing formation flight: potential savings of 20% for even a three-aircraft formation.
39
Apprenticeship Learning via Inverse Reinforcement Learning, Pieter Abbeel and Andrew Y. Ng. In Proc. ICML, 2004.
Learning First-Order Markov Models for Control, Pieter Abbeel and Andrew Y. Ng. In NIPS 17, 2005.
Exploration and Apprenticeship Learning in Reinforcement Learning, Pieter Abbeel and Andrew Y. Ng. In Proc. ICML, 2005.
Modeling Vehicular Dynamics, with Application to Modeling Helicopters, Pieter Abbeel, Varun Ganapathi and Andrew Y. Ng. In NIPS 18, 2006.
Using Inaccurate Models in Reinforcement Learning, Pieter Abbeel, Morgan Quigley and Andrew Y. Ng. In Proc. ICML, 2006.
An Application of Reinforcement Learning to Aerobatic Helicopter Flight, Pieter Abbeel, Adam Coates, Morgan Quigley and Andrew Y. Ng. In NIPS 19, 2007.
Hierarchical Apprenticeship Learning with Application to Quadruped Locomotion, J. Zico Kolter, Pieter Abbeel and Andrew Y. Ng. In NIPS 20, 2008.
40
Full multiple demonstration model