CS 326A: Motion Planning
Dynamic Constraints and Optimal Planning
Nonholonomic vs. Dynamic Constraints
Nonholonomic constraint: q' = f(q, u), where u is the control input (a function of time).
Dynamic constraint: let s = (q, q') be the state of the system; then s' = f(s, u), where u is the control input.
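To make the contrast concrete, here is a minimal Python sketch, assuming a unicycle for the nonholonomic case and a planar point mass for the dynamic case (both models are illustrative choices, not taken from the slides):

import numpy as np

def unicycle_qdot(q, u):
    # Nonholonomic (first-order) constraint q' = f(q, u).
    # q = (x, y, theta); u = (v, omega): forward speed and turning rate.
    v, omega = u
    theta = q[2]
    return np.array([v * np.cos(theta), v * np.sin(theta), omega])

def point_mass_sdot(s, u, m=1.0):
    # Dynamic (second-order) constraint s' = f(s, u) with s = (q, q').
    # q = (x, y); u = (fx, fy) is the applied force.
    qdot = np.asarray(s[2:], dtype=float)
    qddot = np.asarray(u, dtype=float) / m      # Newton's second law
    return np.concatenate([qdot, qddot])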
Aerospace Robotics Lab Robot
[Figure: free-flying robot on an air bearing, with gas tank and air thrusters, shown among obstacles.]
Modeling of Robot
Configuration: q = (x, y); state: s = (q, q')
Control: u = (f, θ), the thrust magnitude and its direction
Dynamics: x'' = (f/m) cos θ, y'' = (f/m) sin θ, with |f| ≤ f_max
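A minimal Python sketch of this model, assuming unit mass, a unit thrust bound, and simple Euler integration of piecewise-constant controls (all illustrative choices, not from the slides):

import numpy as np

M_ROBOT = 1.0     # assumed mass m
F_MAX = 1.0       # assumed thrust bound f_max

def robot_sdot(s, u):
    # State derivative for the air-bearing robot, s = (x, y, x', y').
    # Control u = (f, theta): thrust magnitude and direction.
    f, theta = u
    f = np.clip(f, 0.0, F_MAX)                  # enforce the bound on f
    ax = (f / M_ROBOT) * np.cos(theta)          # x'' = (f/m) cos(theta)
    ay = (f / M_ROBOT) * np.sin(theta)          # y'' = (f/m) sin(theta)
    return np.array([s[2], s[3], ax, ay])

def simulate(s0, controls, dt=0.05):
    # Forward-integrate a piecewise-constant control sequence (Euler steps).
    s = np.asarray(s0, dtype=float)
    path = [s]
    for u in controls:
        s = s + dt * robot_sdot(s, u)
        path.append(s)
    return np.array(path)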
Example with Moving Obstacles
General Case
For an arbitrary mechanical linkage:
u = M(q) q'' + C(q, q') + G(q) + F(q, q')
where:
– M is the inertia matrix
– C is the vector of centrifugal and Coriolis terms
– G is the vector of gravity terms
– F is the vector of friction terms
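Planners and simulators typically need the forward dynamics, i.e., this equation solved for q''. Below is a minimal sketch for a single-link pendulum with assumed parameters (not from the slides); for an n-DOF linkage one would instead solve the linear system M(q) q'' = u - C(q, q') - G(q) - F(q, q'), e.g. with numpy.linalg.solve.

import numpy as np

# Hypothetical 1-DOF pendulum: mass, link length, gravity, viscous friction.
m, l, g, b = 1.0, 0.5, 9.81, 0.05

def forward_dynamics(q, qdot, u):
    # Invert u = M(q) q'' + C(q, q') + G(q) + F(q, q') to get q''.
    M = m * l**2                  # inertia "matrix" (a scalar for 1 DOF)
    C = 0.0                       # no centrifugal/Coriolis terms with 1 DOF
    G = m * g * l * np.sin(q)     # gravity term
    F = b * qdot                  # viscous friction term
    return (u - C - G - F) / M    # q'' = M^{-1} (u - C - G - F)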
Optimality of a Trajectory
Often one seeks a trajectory that optimizes a given criterion, e.g.:
– smallest number of backup maneuvers,
– minimal execution time,
– minimal energy consumption.
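As a rough illustration of how such criteria can be scored on a discretized trajectory (the quadratic-control energy surrogate below is an assumption, not part of the slide):

import numpy as np

def trajectory_cost(controls, dt, criterion="energy"):
    # controls: sequence of control vectors, each applied for dt seconds.
    U = np.atleast_2d(np.asarray(controls, dtype=float))
    if criterion == "time":
        return len(U) * dt                 # total execution time
    # Assumed energy surrogate: integral of ||u||^2 over the trajectory.
    return float(np.sum(U**2) * dt)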
Path Planning Approaches
Direct planning:
– Build a tree of milestones until a connection to the goal has been made (LaValle and Kuffner's and Donald et al.'s papers); see the sketch below.
Two-phase planning:
– Compute a collision-free path ignoring the constraints.
– Optimize this path into a trajectory satisfying the kinodynamic constraints (Bobrow's paper).
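A minimal Python sketch of direct planning, under several assumptions: milestones are expanded by picking an existing node uniformly at random (this is not LaValle and Kuffner's nearest-neighbor rule, nor Donald et al.'s discretization), and sdot, sample_control, and collision_free are user-supplied placeholders.

import numpy as np

def direct_plan(s_start, s_goal, sdot, sample_control, collision_free,
                dt=0.1, steps=5, max_iters=5000, goal_tol=0.2, seed=0):
    # Grow a tree of milestones in state space until one reaches the goal.
    rng = np.random.default_rng(seed)
    nodes = [np.asarray(s_start, dtype=float)]   # milestones (tree vertices)
    parents = [-1]                               # parent index of each node
    s_goal = np.asarray(s_goal, dtype=float)
    for _ in range(max_iters):
        i = int(rng.integers(len(nodes)))        # pick an existing milestone
        s, u = nodes[i], sample_control(rng)     # random admissible control
        for _ in range(steps):                   # integrate s' = f(s, u)
            s = s + dt * sdot(s, u)
        if not collision_free(s):
            continue
        nodes.append(s)
        parents.append(i)
        if np.linalg.norm(s - s_goal) < goal_tol:
            path, j = [], len(nodes) - 1         # goal reached: extract path
            while j != -1:
                path.append(nodes[j])
                j = parents[j]
            return path[::-1]
    return None                                  # no connection found

Here sdot could be robot_sdot from the modeling sketch above, sample_control would draw an admissible (f, θ), and collision_free would test a milestone against the obstacles.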
Path Optimization
Steepest-descent technique: parameterize the geometry of the trajectory, e.g., by defining control points through which cubic splines are fitted. Vary the parameters and, for the new values, re-compute the optimal control. If the value of the criterion improves, vary further; a sketch follows.
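A minimal sketch of this loop, assuming the trajectory geometry is a cubic spline through via points and that trajectory_criterion stands in for "re-compute the optimal control and evaluate the criterion", which the slide leaves unspecified:

import numpy as np
from scipy.interpolate import CubicSpline

def optimize_path(control_pts, trajectory_criterion, step=0.05, eps=1e-3,
                  iters=100):
    P = np.asarray(control_pts, dtype=float)     # shape (k, 2): via points
    t = np.linspace(0.0, 1.0, len(P))

    def cost(pts):
        path = CubicSpline(t, pts)               # geometry: spline through pts
        return trajectory_criterion(path)        # e.g. cost of optimal control

    for _ in range(iters):
        grad = np.zeros_like(P)
        for i in range(1, len(P) - 1):           # endpoints stay fixed
            for j in range(P.shape[1]):
                Pp = P.copy()
                Pp[i, j] += eps                  # finite-difference gradient
                grad[i, j] = (cost(Pp) - cost(P)) / eps
        P_new = P - step * grad                  # steepest-descent step
        if cost(P_new) >= cost(P):               # stop when no improvement
            break
        P = P_new
    return CubicSpline(t, P)

The endpoints are held fixed so the start and goal are preserved; the finite-difference gradient is the simplest stand-in for steepest descent over the parameters.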
Two-Phase Planning
Gives good results in practice.
But computationally expensive (is real-time planning possible?).
No performance guarantee regarding the optimality of the computed trajectory.
Direct Planning
Optimality guarantee in Donald et al.'s paper, but running time exponential in the number of degrees of freedom.
No optimality guarantee in LaValle and Kuffner's paper, but provably quick convergence of the algorithm, allowing real-time planning (and re-planning) among moving obstacles.