Fast Iterative Alignment of Pose Graphs with Poor Initial Estimates
Edwin Olson, John Leonard, Seth Teller
Where are we going?
Problem summary ► Robot moves around the environment; poses are connected by constraints (odometry). ► Constraint = rigid-body transformation + covariance matrix. (Figure: features, poses, constraints)
Problem summary ► Re-observing features allows constraints between non-successive poses. (Figure: features, poses, constraints)
Problem summary ► Goal: find the arrangement of poses and features that best satisfies the constraints (e.g., maximizes the probability). (Figures: an improbable initial configuration with poorly satisfied constraints; the maximum likelihood configuration)
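To make "best satisfies the constraints" concrete, here is a minimal sketch of the chi-squared error of one relative-pose constraint; the maximum-likelihood configuration minimizes the sum of these errors over all constraints. The function name, the `[x, y, theta]` pose layout, and the simple planar model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def constraint_error(x_a, x_b, z_ab, info_ab):
    """Chi-squared error of one constraint between poses a and b.

    x_a, x_b  : poses [x, y, theta] (numpy arrays)
    z_ab      : measured rigid-body transform from a to b
    info_ab   : information matrix (inverse covariance, 3x3)

    Illustrative sketch only, assuming a planar robot.
    """
    # Predicted relative transform, expressed in pose a's frame
    dx, dy = x_b[:2] - x_a[:2]
    c, s = np.cos(x_a[2]), np.sin(x_a[2])
    pred = np.array([ c * dx + s * dy,
                     -s * dx + c * dy,
                      x_b[2] - x_a[2]])
    r = pred - z_ab
    r[2] = (r[2] + np.pi) % (2 * np.pi) - np.pi  # wrap angle residual
    return r @ info_ab @ r
```

A perfectly satisfied constraint contributes zero error; a poorly satisfied one is penalized quadratically, weighted by its information matrix.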
Why is this hard? ► Huge number of unknowns: |x| = 10³ is a small problem. ► Non-linear constraints (functions of robot orientation). ► Lousy initial estimates: odometry! ► (Today, we'll ignore SLAM's other difficult sub-problems: data association, exploration.) (Figure: cost surface from a laser-scan matching problem; many local maxima/minima)
Our Work ► We present an algorithm for optimizing pose graphs: very fast; estimates the non-linear maximum likelihood. ► Unlike the EKF, EIF, and friends, it works well even with poor initial estimates. ► Under 100 lines of code.
A peek at results ► Noisy (simulated) input: 3500 poses, 3499 temporal constraints, 2100 spatial constraints. ► Gauss-Seidel: 60 sec. ► Multi-Level Relaxation: 8.6 sec. ► Our method: 2.8 sec. (Figure: ground truth)
Our Method: Overview ► Marginalize out features. ► Use an "incremental" state space. ► Enforce constraints serially: linearize each constraint into a global incremental constraint, taking smaller steps as t → ∞.
Marginalizing Features ► Marginalize features, leaving only the trajectory. ► The problem now involves only inter-pose constraints. ► Then compute features afterwards (trivial, as in FastSLAM).
Marginalization: Cons ► Con: more edges in the graph: a feature with N observations leads to O(N²) edges. ► Slower/harder to solve. ► (Information matrix less sparse.)
Marginalization: Not so bad? ► Pro: pose-pose edges better model data inter-dependence: "this cloud of points matches this cloud of points." ► Individual point associations are not independent; it is more correct to use a single "lumped" constraint. ► Bonus: makes it easier to undo bad data associations later. (Figure: observations at t=5 and t=500; data association; lumped constraint)
Iterative Optimization ► Do forever: pick a constraint; descend in the direction of that constraint's gradient; scale the gradient magnitude by alpha/iteration; clamp the step size; iteration++. ► alpha/iteration → 0 as t → ∞. ► Robustness to local concavities: hop around the state space, "stick" in the best one. ► Good solution very fast; "perfect" solution only as t → ∞.
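The loop above can be sketched in a toy 1-D setting: scalar "poses", constraints on pairwise differences, and a learning rate that decays as alpha/iteration with a clamped step. Everything here (the function, the constraint tuples, cycling through constraints in order) is an illustrative assumption; the real algorithm optimizes 2-D poses in the incremental state space.

```python
def optimize_1d(x, constraints, iters=5000, alpha=0.5, max_step=1.0):
    """Toy 1-D sketch of the serial constraint-enforcement loop.

    x           : list of scalar 'poses'
    constraints : tuples (i, j, z, w) asking x[j] - x[i] = z, weight w
    """
    x = list(x)
    for it in range(1, iters + 1):
        i, j, z, w = constraints[(it - 1) % len(constraints)]  # pick a constraint
        r = (x[j] - x[i]) - z                       # constraint residual
        step = (alpha / it) * w * r                 # gradient step, decaying as 1/t
        step = max(-max_step, min(max_step, step))  # clamp step size
        x[j] -= step                                # move both endpoints toward
        x[i] += step                                # satisfying the constraint
    return x
```

Because the learning rate decays as 1/iteration, early iterations make large hops (escaping bad configurations) while later iterations settle the estimate.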
Importance of state space ► The choice of state representation affects the gradient. ► Pick a state space that: is computationally efficient; interacts well with the optimization algorithm. (Figure: constraint error)
Previous work: global state space ► Global state: x = [x₀ y₀ θ₀ x₁ y₁ θ₁ x₂ y₂ θ₂ …]ᵀ ► A constraint between poses a and b is f(x_a, y_a, θ_a, x_b, y_b, θ_b): the gradient is zero for all but poses a and b. ► Slow convergence. ► Error for some links can go up dramatically. (Figure: constraint error; step direction)
State Space: Relative/Incremental ► Robot motion is cumulative: adjustment of one pose affects its neighbors. ► Each constraint affects multiple nodes. ► Faster convergence. ► Error decreases more predictably. (Figure: step direction)
Relative versus Incremental ► Relative (prior work): state vector of rigid-body transformations, P₃ = P₀ T₁ T₂ T₃; more realistic; projective: small adjustments can have large effects; instability; O(N) time to compute an update. ► Incremental (contribution): state vector of global-relative motions, P₃ = P₀ + D₁ + D₂ + D₃; less realistic (inexact), but behaves linearly: small adjustments have small effects; good convergence properties; O(log N) to compute an update.
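The incremental parameterization above can be sketched in a few lines (positions only, orientations omitted; the function name is an assumption): each pose is the cumulative sum of global-frame increments, so adjusting one increment shifts every later pose by exactly that amount, the "small adjustments have small effects" property.

```python
import numpy as np

def poses_from_increments(p0, deltas):
    """Incremental state space sketch: each D_i is a motion expressed
    in the global frame, so P_n = P_0 + D_1 + ... + D_n.
    Returns the array of poses P_0 .. P_n (positions only)."""
    return np.cumsum(np.vstack([p0, deltas]), axis=0)
```

Compare with the relative parameterization, where increments compound through rigid-body products: perturbing an early transform there rotates and shifts the entire remaining trajectory, which is why small adjustments can have large (projective) effects.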
Algorithm Complexity ► For N poses, M constraints: ► Memory: O(N+M); 1M poses and 2M constraints → 176MB; EKF/EIF: O(N²). ► Time: imposing a constraint is O(log N) (using the incremental state space and a tree-based data structure); one full iteration is O(M log N). ► Convergence: hard to make guarantees (non-linear problem); Δx is bounded at each time step, but ∫Δx dt might not be. ► In practice, SLAM problems are fairly well behaved after a few iterations.
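Since each pose is a prefix sum of increments, the O(log N) constraint update needs a structure that supports O(log N) increment updates and O(log N) prefix-sum queries. A binary indexed (Fenwick) tree is one standard way to get this; the minimal 1-D sketch below is an illustrative assumption, and the paper's actual tree may differ.

```python
class Fenwick:
    """Binary indexed tree over per-pose increments: updating one
    increment, or querying a pose (the prefix sum of increments
    1..i), each costs O(log N)."""

    def __init__(self, n):
        self.n = n
        self.t = [0.0] * (n + 1)

    def add(self, i, delta):
        """Add delta to increment i (1-based)."""
        while i <= self.n:
            self.t[i] += delta
            i += i & -i

    def pose(self, i):
        """Pose i = sum of increments 1..i."""
        s = 0.0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s
```

Adjusting one increment implicitly moves every later pose, yet costs only O(log N), which is what makes one full iteration O(M log N).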
Gauss-Seidel Relaxation (exponential time scale)
Our algorithm (slowed down by ~4x)
Multi-Level Relaxation ► Optimization result after 8.6 seconds. ► Converges to a (very) good result in 30 minutes. ► Thanks to Udo Frese for running this data set.
Killian Court (Real Data) Laser-scan derived open-loop estimate
Continuing work… ► Several 3D implementations under way, both with and without full-rank constraints. ► Incremental (not batch) version. ► Adaptive learning rate. ► Reports of success with our algorithm from other universities…
Questions? ► Updated paper, movies, source code: