A Convergent Dynamic Window Approach to Obstacle Avoidance & Obstacle Avoidance in Formation
P. Ögren (KTH), N. Leonard (Princeton University)
Problem Formulation: Drive a robot from A to B through a partially unknown environment without collisions. Differential drive robots can be feedback linearized to this model.
Background: The Dynamic Unicycle (or a Tank?)
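A standard dynamic unicycle model, with state q = (x, y, θ, v, ω) (the exact form shown on the slide is assumed here):
\[
\dot{x} = v\cos\theta,\qquad \dot{y} = v\sin\theta,\qquad \dot{\theta} = \omega,\qquad \dot{v} = u_1,\qquad \dot{\omega} = u_2 .
\]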
Desirable Properties: No collisions. Convergence to goal position. Efficient, large inputs. 'Real time'. 'Reactive' to changes.
Background: Two main Obstacle Avoidance approaches. Reactive/Behavior Based: biologically motivated; fast, local rules; 'the world is the map'; no proofs; changing environments are not a problem. Deliberative/Sense-Plan-Act: trajectory planning/tracking; navigation function (Koditschek '92); provable features; changes are a problem. Combine the two?
Background: The Navigation Function (NF) tool. One local/global min at goal. Gradient gives direction to goal. Solves 'maze' problems. [Figure: obstacles and NF level curves, goal marked.]
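For reference, the Rimon-Koditschek definition in brief: a navigation function is a map
\[
\mathrm{NF}: \mathcal{F} \to [0,1]
\]
that is smooth, polar (unique minimum at the goal), admissible (uniformly maximal on obstacle boundaries), and Morse (nondegenerate critical points), so almost every gradient trajectory converges to the goal.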
Basic Idea: combine the two. The DWA (Fox et al., Brock et al.) is viewed as Model Predictive Control (MPC); Exact Navigation using Artificial Potential Functions (Koditschek '92) supplies a Control Lyapunov Function (CLF); the MPC/CLF framework (Primbs '99) ties them together into the Convergent DWA. From the DWA side it inherits 'real time' operation, efficient, large inputs, and 'reactive' behavior to changes; from exact navigation, a convergence proof and no collisions.
Background: Model Predictive Control (MPC) Idea: Given a good model, we can simulate the result of different control choices (over time T) and apply the best. Feedback: repeat simulation every t<T seconds. How is this used in the Dynamic Window Approach?
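Schematically (with a generic stage cost L, an assumption here), each receding-horizon step solves
\[
u^{*}(\cdot) = \arg\min_{u(\cdot)} \int_{t}^{t+T} L\big(x(\tau),u(\tau)\big)\, d\tau \quad \text{s.t. } \dot{x} = f(x,u),
\]
after which only the first t < T seconds of u^{*} are applied before the optimization is repeated from the newly measured state.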
Global Dynamic Window Approach (Brock and Khatib '99). Circular arc pseudo-trajectories. [Figure: robot and obstacles in the workspace; velocity space (Vx, Vy) with Vmax bound, the current velocity, and the dynamic window of control options.]
Global Dynamic Window Approach (continued). Check arcs for collision-free length. Choose the control by optimizing a heuristic utility function. Speeds up to 1 m/s indoors with the XR 4000 robot (Good!). No proofs (counterexample!). Idea: See it as Model Predictive Control (MPC) and use the navigation function as a CLF.
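As a reminder of the kind of utility function involved (an assumption, not the exact expression from the slide), the original DWA objective of Fox et al. has roughly the form
\[
G(v,\omega) = \alpha\,\mathrm{heading}(v,\omega) + \beta\,\mathrm{dist}(v,\omega) + \gamma\,\mathrm{vel}(v,\omega),
\]
maximized over the dynamic window; the global version of Brock and Khatib incorporates navigation-function terms into this objective.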
Background: Control Lyapunov Function (CLF). Idea: If the energy of a system decreases all the time, it will eventually "stop". A CLF, V, is an "energy-like" function that some admissible control can always make decrease.
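Formally (standard definition), for \dot{x} = f(x,u), a smooth, positive definite, radially unbounded V is a control Lyapunov function if
\[
\inf_{u}\ \nabla V(x)\cdot f(x,u) < 0 \qquad \text{for all } x \neq 0,
\]
i.e. at every state some admissible control makes the "energy" decrease.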
Exact Robot Navigation using Artificial Potential Functions (Rimon and Koditschek '92). A C^1 Navigation Function NF(p) is constructed, with NF(p) = NF_max at the obstacles of sphere and star worlds. Control: gradient-based feedback (sketched below). Features: a Lyapunov function => no collisions, bounded control, convergence proof. Drawbacks: hard to (re)calculate, inefficient. Idea: Use a C^0 Control Lyapunov Function instead.
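A damped-gradient controller of the kind used in that line of work (the exact gains are assumptions): for \ddot{p} = u,
\[
u = -\nabla \mathrm{NF}(p) - k\,\dot{p}, \qquad k > 0,
\]
which makes V(p,\dot{p}) = \mathrm{NF}(p) + \tfrac{1}{2}\lvert\dot{p}\rvert^{2} a Lyapunov function with \dot{V} = -k\,\lvert\dot{p}\rvert^{2} \le 0.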
Our Navigation Function (NF). One local/global min at goal. Calculate shortest paths on a discretization, then make a continuous surface by careful interpolation using triangles. Provable properties. [Figure: the discretization.]
MPC/CLF framework (Primbs '99): a receding-horizon optimization constrained to decrease a CLF; here the CLF is built from our navigation function (see the sketch below).
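Schematically (the precise constraints are an assumption here), the Primbs-style receding-horizon problem constrained by a CLF V reads
\[
\min_{u(\cdot)} \int_{t}^{t+T} L\big(x(\tau),u(\tau)\big)\, d\tau
\quad \text{s.t. } \dot{x} = f(x,u), \qquad \dot{V}\big(x(\tau),u(\tau)\big) \le 0,
\]
with V built from the navigation function NF in our setting.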
The resulting scheme: Lyapunov function and control. The Lyapunov function candidate gives a set of admissible controls, including large ones (a sketch follows). Compare: the acceleration of a downhill skier.
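One natural candidate, an assumption rather than necessarily the exact function on the slide, combines the NF with kinetic energy: for \ddot{p} = u,
\[
V(p,v) = \lambda\,\mathrm{NF}(p) + \tfrac{1}{2}\lvert v\rvert^{2}, \qquad \lambda > 0,
\]
giving the admissible control set
\[
\{\, u : \dot{V} = \lambda\,\nabla \mathrm{NF}(p)\cdot v + v\cdot u \le 0 \,\},
\]
which, like a skier pointing downhill, still permits large accelerations as long as the motion keeps descending the NF.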
Safety and Discretization. The CLF gives stability; what about safety? In the MPC, consider only controls that allow the robot to stop without collision: plan to first accelerate, then brake; apply the first part and replan. Compare: being able to stop within the visible part of the road => safety.
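A sketch of the stop-safely test under an acceleration bound \lvert u\rvert \le a_{\max} (the symbols d_{\mathrm{acc}}, v_{1}, d_{\mathrm{free}} are illustrative assumptions): if the acceleration phase covers distance d_{\mathrm{acc}} and ends at speed v_{1}, the plan is admissible only if
\[
d_{\mathrm{acc}} + \frac{v_{1}^{2}}{2\,a_{\max}} \le d_{\mathrm{free}},
\]
where d_{\mathrm{free}} is the known collision-free length of the arc.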
Evaluated MPC Trajectories
Simulation Trajectory
Single Vehicle Conclusions. Properties: no collisions (stop-safely option); convergence to goal position (CLF); efficient (MPC); reactive (MPC); real time (?) with a small discretized control set, formalizing the earlier approach. Can this scheme be extended to the multi-vehicle case?
Why Multi-Agent Robotics? Applications: search and rescue missions, carrying large/awkward objects, adaptive sensing, satellite imaging in formation. Motivations: flexibility, robustness, performance, price.
Obstacle Avoidance in Formation. How do we use single-vehicle Obstacle Avoidance?
Desirable Properties: No collisions. Convergence to goal position. Efficient, large inputs. 'Real time'. 'Reactive' to changes. & Distributed/local information.
A Leader-Follower Structure. Two cases: (1) no explicit information exchange => the leader acceleration u1 is a disturbance; (2) feedforward of u1 => time delays and calibration errors are disturbances. [Figure: information flow.] How large are the deviations that the disturbances cause?
Background: Input-to-State Stability (ISS). We will use ISS to calculate "uncertainty regions".
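In the standard formulation, the system \dot{x} = f(x,d) is ISS with respect to a disturbance d if
\[
\lvert x(t)\rvert \le \beta\big(\lvert x(0)\rvert, t\big) + \gamma\Big(\sup_{0\le s\le t}\lvert d(s)\rvert\Big), \qquad \beta \in \mathcal{KL},\ \gamma \in \mathcal{K},
\]
so a bound on the disturbance (leader acceleration, delays, calibration errors) translates, through \gamma, into a bound on the followers' deviation: the uncertainty region.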
ISS => Uncertainty Region
How do we calculate a map of "free" leader positions? Formation Leader Obstacles, an extension of Configuration Space Obstacles. [Figure: obstacle, "occupied" leader positions, "free" leader positions.]
Formation Leader Map. [Figure panels: uncertainty region and obstacles; formation obstacles.] Computable by conv2 (MATLAB). The leader does obstacle avoidance in the new map; the followers do formation keeping under disturbance.
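A minimal sketch of the convolution step, assuming occupancy grids stored as 0/1 arrays and a formation footprint given by the followers' uncertainty regions in the leader frame (all names here are illustrative); scipy.signal.convolve2d plays the role of MATLAB's conv2:

import numpy as np
from scipy.signal import convolve2d

def formation_leader_obstacles(obstacle_grid, footprint):
    """Grid of 'occupied' leader positions: a cell is occupied if the
    formation footprint placed there overlaps any obstacle cell, i.e. the
    obstacle grid dilated by the (flipped) footprint, which is exactly
    what a 2D convolution followed by thresholding computes."""
    overlap = convolve2d(obstacle_grid, footprint, mode="same")
    return (overlap > 0).astype(np.uint8)

# 'Free' leader positions are the complement of the occupied ones:
# occupied = formation_leader_obstacles(obstacles, footprint)
# free = 1 - occupied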
Simulation Trajectories
Final Conclusions. Obstacle avoidance extended to formations by assuming a leader-follower structure and ISS. Future directions: rotations, expansions, breaking formation => a >= 3-dimensional NF.
Comparison