Standing Balance Control Using a Trajectory Library
Chenggang Liu and Chris Atkeson
Outline
- Introduction
- Robot Model
- Neighboring Optimal Control
- Balance Controller
- Simulation and Experiment Results
Introduction
Multiple Balance Strategies from One Optimization Criterion.
Robot Model
Trajectory Library Generation
A combined method:
- Direct minimization with SNOPT (Sequential Quadratic Programming)
- Differential Dynamic Programming
A library is built on a uniform grid of initial conditions (see the sketch below).
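A minimal Python sketch of building such a library, assuming a placeholder solve_trajectory routine (e.g., an SQP/DDP solve) that returns the optimized state trajectory, control trajectory, and feedback gains for one initial condition; the grid ranges shown are illustrative, not the values used on the robot:

import itertools
import numpy as np

def build_library(solve_trajectory, grids):
    """Build a trajectory library on a uniform grid of initial conditions.

    solve_trajectory : callable mapping an initial state x0 to
                       (x_star, u_star, K) for one optimized trajectory
    grids            : list of 1-D arrays, one per state dimension
    """
    library = {}
    for x0 in itertools.product(*grids):
        x0 = np.asarray(x0)
        # One trajectory optimization (e.g., SQP then DDP) per grid point.
        library[tuple(x0)] = solve_trajectory(x0)
    return library

# Illustrative grid over two state dimensions (ranges are assumptions):
# grids = [np.linspace(-0.2, 0.2, 5), np.linspace(-0.5, 0.5, 5)]
# library = build_library(solve_trajectory, grids)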
Neighboring Optimal Control
Given the discrete-time dynamics of the robot and the optimal value function, the neighboring optimal control is a time-varying linear feedback law about the optimal trajectory, with gain matrices computed from the value function (sketched below).
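In standard notation (the symbols here are assumed and may differ from the slide), the discrete-time dynamics and optimal value function are

x_{k+1} = f(x_k, u_k),
V_k(x_k) = \min_{u_k} [ L(x_k, u_k) + V_{k+1}(f(x_k, u_k)) ],

and the neighboring optimal control is the time-varying linear feedback

\delta u_k = -K_k \delta x_k,   K_k = Q_{uu,k}^{-1} Q_{ux,k},

where, with second-order dynamics terms omitted,

Q_{uu,k} = L_{uu} + f_u^T V_{xx,k+1} f_u,
Q_{ux,k} = L_{ux} + f_u^T V_{xx,k+1} f_x.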
Neighboring Optimal Control
Given the optimal state trajectory x*(t), the optimal control trajectory u*(t), and the gain matrices K(t), a local approximation to the optimal control is

u = u*(t_c) - K(t_c) (x - x*(t_c)),

where x*(t_c) is the state closest to the current state x on x*(t), and u*(t_c) and K(t_c) are the control and gain corresponding to x*(t_c) on u*(t) and K(t).
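A minimal Python sketch of this lookup-and-feedback step (array layouts and names are assumptions, not the authors' code):

import numpy as np

def neighboring_optimal_control(x, x_star, u_star, K):
    """Local approximation to the optimal control from one stored trajectory.

    x      : current state, shape (n,)
    x_star : optimal state trajectory, shape (T, n)
    u_star : optimal control trajectory, shape (T, m)
    K      : feedback gain matrices, shape (T, m, n)
    """
    # Index of the stored state closest to the current state.
    i = np.argmin(np.linalg.norm(x_star - x, axis=1))
    # u = u* - K (x - x*), evaluated at that closest point.
    return u_star[i] - K[i] @ (x - x_star[i])

In practice the distance metric may weight position and velocity components differently; a plain Euclidean norm is used here only for illustration.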
Controller Architecture
Trajectory Library Generation
The library is refined into a new library on an adaptive grid of initial conditions, chosen according to the proposed controller's performance (see the sketch below).
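One way such refinement could work, sketched in Python with hypothetical helpers evaluate_cost (cost of running the library controller from a given initial condition) and optimal_cost (cost of a freshly optimized trajectory); the midpoint criterion and tolerance are assumptions for illustration:

import numpy as np

def refine_library(library, solve_trajectory, evaluate_cost, optimal_cost, tol):
    """Add trajectories where the library controller performs poorly."""
    keys = [np.asarray(k) for k in library]
    for a in keys:
        for b in keys:
            mid = 0.5 * (a + b)  # candidate initial condition between grid points
            if tuple(mid) in library:
                continue
            # Refine where the controller's cost exceeds the optimal cost by tol.
            if evaluate_cost(mid, library) - optimal_cost(mid) > tol:
                library[tuple(mid)] = solve_trajectory(mid)
    return library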
State/Push Estimation
State to be estimated: the robot's state together with the external push force
Measurements:
State/Push Estimation
State transition and observation models (a generic form is sketched below).
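A generic form for these models, assuming the push force F is appended to the state and treated as a slowly varying disturbance (an assumption for illustration, not necessarily the exact model used):

x_{k+1} = f(x_k, u_k, F_k) + w_k,
F_{k+1} = F_k + w^F_k,
y_k = h(x_k) + v_k,

where w_k and w^F_k are process noise terms and v_k is measurement noise.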
State/Push Estimation
The filter predicts the next state before measurements are taken, then updates the state estimate after measurements are taken (see the sketch below).
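Assuming a standard extended Kalman filter (the Kalman gain is written G_k here to avoid confusion with the feedback gains K above), the two steps take the familiar form:

Predict:
\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_{k-1}),
P_{k|k-1} = A_{k-1} P_{k-1|k-1} A_{k-1}^T + Q.

Update:
G_k = P_{k|k-1} C_k^T (C_k P_{k|k-1} C_k^T + R)^{-1},
\hat{x}_{k|k} = \hat{x}_{k|k-1} + G_k (y_k - h(\hat{x}_{k|k-1})),
P_{k|k} = (I - G_k C_k) P_{k|k-1},

where A and C are the Jacobians of f and h, and Q and R are the process and measurement noise covariances.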
Simulation Results
- A constant push of 42 N at the head
- A short forward push of 50 N at the head, lasting 0.5 seconds
- A random push sequence
Simulation Results Comparison with the optimal controller.
Experiment Result: Push forward (plots of trajectory index and state index)
Experiment Result: Push backward (plots of trajectory index and state index)
Thanks! For more information: http://www.andrew.cmu.edu/user/cgliu/CurrentRD.html