ROBOT MAPPING AND EKF SLAM
Slides for the book: Probabilistic Robotics. Authors: Sebastian Thrun, Wolfram Burgard, Dieter Fox. Publisher: MIT Press, 2005. Website for the book and more slides: http://www.probabilistic-robotics.org/
Landmark-based Localization
Extended Kalman Filters
Nonlinear Dynamic Systems
Non-Gaussian Distributions in Kalman filters?
Main observation from the last slide: we use a Taylor expansion to linearize the nonlinear models.
First Order Taylor Expansions
Reminder: Jacobian Matrix. For a nonlinear function of x = (x1, x2, …, xn), the tangent plane defined by the Jacobian represents the multi-dimensional derivative.
Reminder: Jacobian Matrix. The Jacobian matrix generalizes the gradient: for a vector-valued function, each row is the gradient of one scalar-valued component.
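To make the Jacobian concrete, here is a small sketch (my own illustrative example, not from the slides): an analytic Jacobian of a simple vector-valued function, cross-checked against a finite-difference approximation.

```python
import numpy as np

def g(x):
    # Hypothetical nonlinear function g: R^2 -> R^2 (polar-to-Cartesian style)
    return np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])

def jacobian_analytic(x):
    # Matrix of partial derivatives dg_i / dx_j: each row is the
    # gradient of one scalar component of g
    return np.array([
        [np.cos(x[1]), -x[0] * np.sin(x[1])],
        [np.sin(x[1]),  x[0] * np.cos(x[1])],
    ])

def jacobian_numeric(f, x, eps=1e-6):
    # Finite-difference approximation, built column by column
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - fx) / eps
    return J

x0 = np.array([2.0, 0.5])
assert np.allclose(jacobian_analytic(x0), jacobian_numeric(g, x0), atol=1e-4)
```

The finite-difference check is a handy way to validate hand-derived Jacobians before plugging them into an EKF.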
EKF linearization is done through a first-order Taylor expansion: one linearization for the prediction (motion model) and one for the correction (measurement model).
After linearization, the prediction is again a matrix-vector multiplication (matrix times vector yields a vector).
From the previous slide: when we insert the linearized (formerly nonlinear) models g and h into the Kalman filter equations, the updates become linear-algebraic again.
EKF Linearization: First Order Taylor Series Expansion — the scalar case vs. the matrix case.
EKF Algorithm compared with the Kalman Algorithm: the highlighted terms replace what was linear in the standard Kalman filter.
Extended Kalman Filter Algorithm
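As a minimal sketch of the EKF cycle just described (my own generic formulation, with function names g, G, h, H chosen to match the slide notation; R and Q denote motion and measurement noise):

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF cycle. g/h are the nonlinear motion and measurement
    models; G/H return their Jacobians evaluated at the estimate.
    R and Q are the motion and measurement noise covariances."""
    # Prediction: push the mean through g, the covariance through G
    mu_bar = g(mu, u)
    Gt = G(mu, u)
    Sigma_bar = Gt @ Sigma @ Gt.T + R
    # Correction: linearize h around the predicted mean
    Ht = H(mu_bar)
    S = Ht @ Sigma_bar @ Ht.T + Q          # innovation covariance
    K = Sigma_bar @ Ht.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new
```

With linear g and h (constant Jacobians), this reduces exactly to the standard Kalman filter, which is a useful sanity check.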
Linearity Assumption Revisited. The grey curve represents the true distribution p(y). A linear mapping y = ax + b carries the mean of p(x) exactly onto the mean of p(y): this is the ideal case of pushing a Gaussian through a linear system.
Non-linear Function. This is the realistic case of pushing a Gaussian through a non-linear function g(x): the grey curve is the true distribution p(y), and the Gaussian we fit to it is the blue approximation — here a poor one.
EKF Linearization (1): Taylor approximation and the EKF Gaussian. This example shows that the EKF Gaussian represents the estimated value better than the plain Gaussian fit of the last slide — its mean is closer to the grey shape.
EKF Linearization (2): the input Gaussian's peak height is now reduced to 2, i.e., more uncertainty; the EKF approximation is even better here.
EKF Linearization (3): with a still wider input (smaller maximum value), the EKF Gaussian differs even more from the true distribution, yet it still gives the better estimate.
Conclusion: for a Gaussian input with a high standard deviation, the nonlinear system does not produce a Gaussian output. The NON-GAUSSIAN output distribution requires the EKF (or something better still) — a plain Gaussian fit of the output estimates the mean worse than the EKF's linearized Gaussian.
EKF Linearization (5): for a narrow, short input Gaussian, the fitted Gaussian is similar to the EKF Gaussian — in this case the EKF works much like the plain KF.
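The effect in the slides above can be reproduced numerically (my own small demo, not from the book): push a Gaussian through a simple nonlinearity and compare the true output mean, estimated by Monte-Carlo sampling, with the EKF-style linearized mean g(mu).

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # A simple nonlinearity, chosen purely for illustration
    return np.sin(x)

mu, sigma = 0.8, 0.5          # a fairly wide input Gaussian
samples = rng.normal(mu, sigma, 200_000)

true_mean = g(samples).mean()  # Monte-Carlo estimate of E[g(x)]
linearized_mean = g(mu)        # what the EKF uses as the predicted mean
```

For sin and a Gaussian input, the true mean is sin(mu) * exp(-sigma^2 / 2), so a wide input Gaussian makes the linearized mean visibly biased — exactly the wide-vs-narrow behavior shown in the linearization slides.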
Summary on Extended Kalman Filters
Literature on Kalman Filter and EKF
EKF Localization Example
Localization of a two-dimensional robot. “Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities.” [Cox ’91]
Given: a map of the environment and a sequence of sensor measurements.
Wanted: an estimate of the robot’s position.
Problem classes: position tracking, global localization, the kidnapped-robot problem (recovery).
EKF_localization(μ_{t-1}, Σ_{t-1}, u_t, z_t, m): Prediction phase. The state is the pose (x, y, θ); the controls u_t are the translational velocity v and the rotational velocity ω. Compute the Jacobian G_t of g w.r.t. the location, the Jacobian V_t of g w.r.t. the control, the motion noise, the predicted mean, and the predicted covariance.
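The prediction phase can be sketched as follows (assuming the velocity motion model from the book; for simplicity the motion noise R is given directly in state space rather than mapped from control space via V_t M_t V_t^T):

```python
import numpy as np

def ekf_predict(mu, Sigma, u, dt, R):
    """EKF localization prediction with the velocity motion model.
    mu = (x, y, theta); u = (v, w): translational / rotational velocity.
    R: motion noise covariance in state space (simplifying assumption)."""
    x, y, theta = mu
    v, w = u
    if abs(w) < 1e-9:
        w = 1e-9  # avoid division by zero for straight-line motion
    # Predicted mean mu_bar = g(mu, u): motion along a circular arc
    mu_bar = np.array([
        x - v / w * np.sin(theta) + v / w * np.sin(theta + w * dt),
        y + v / w * np.cos(theta) - v / w * np.cos(theta + w * dt),
        theta + w * dt,
    ])
    # Jacobian G_t of g with respect to (x, y, theta)
    G = np.array([
        [1, 0, -v / w * np.cos(theta) + v / w * np.cos(theta + w * dt)],
        [0, 1, -v / w * np.sin(theta) + v / w * np.sin(theta + w * dt)],
        [0, 0, 1],
    ])
    Sigma_bar = G @ Sigma @ G.T + R
    return mu_bar, Sigma_bar
```

For example, a quarter-turn (v = 1, ω = π/2, Δt = 1) from the origin lands the robot at (2/π, 2/π) with heading π/2.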
EKF_localization(μ_{t-1}, Σ_{t-1}, u_t, z_t, m): Correction phase. The measurement is z_t = (r, φ) with range r and bearing angle φ. Compute the predicted measurement mean, the Jacobian H_t of h w.r.t. the location, the predicted measurement covariance, the Kalman gain, the updated mean, and the updated covariance.
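A sketch of this correction phase for a single range-bearing measurement of a known landmark (using the standard range-bearing model; the angle normalization step is my addition, but is needed in practice):

```python
import numpy as np

def ekf_correct(mu_bar, Sigma_bar, z, landmark, Q):
    """EKF localization correction for one range-bearing measurement
    z = (r, phi) of a landmark at known map position (mx, my).
    Q: measurement noise covariance."""
    mx, my = landmark
    dx, dy = mx - mu_bar[0], my - mu_bar[1]
    q = dx**2 + dy**2
    # Predicted measurement z_hat = h(mu_bar): range and bearing
    z_hat = np.array([np.sqrt(q),
                      np.arctan2(dy, dx) - mu_bar[2]])
    # Jacobian H_t of h with respect to (x, y, theta)
    H = np.array([
        [-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
        [dy / q,           -dx / q,         -1],
    ])
    S = H @ Sigma_bar @ H.T + Q              # predicted measurement covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)   # Kalman gain
    innov = z - z_hat
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap angle
    mu = mu_bar + K @ innov
    Sigma = (np.eye(3) - K @ H) @ Sigma_bar
    return mu, Sigma
```

If the measurement agrees exactly with the prediction, the mean is unchanged while the covariance still shrinks — the measurement confirmed the estimate.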
EKF Prediction Step for four cases: predicted mean and covariance vs. updated mean and covariance.
EKF Prediction Step for four cases, now for a different system: predicted mean and covariance vs. updated mean and covariance.
EKF Observation Prediction Step: the robot predicts the measurement z of a landmark, given the predicted robot covariance. The three other cases proceed similarly, but the sizes and shapes of the uncertainty ellipses differ.
EKF Correction Step
Estimation Sequence (1): trajectory based on measurements only vs. trajectory based on the Kalman filter, starting from the initial true robot pose, the landmarks, and the initial estimated robot pose.
Estimation Sequence (2): here we see a difference between the estimated and the real positions of the robot.
Comparison to Ground Truth
EKF Summary. Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2). Not optimal! Can diverge if the nonlinearities are large! Works surprisingly well even when all assumptions are violated!
SLAM Review
Noise in motion Noise in measurement
EKF SLAM
The state vector has size 3 + 2N: 3 coordinates for the robot's pose and 2N coordinates for the N features.
State Representation for EKF SLAM: the mean value vector and the covariance matrix.
State Representation for EKF SLAM: please observe how the mean vector and the covariance matrix are partitioned into submatrices (robot part and landmark part).
Just another notation for the same content as on the last slide.
One more commonly used notation for the same content as on the last slide.
Filter Cycle for the EKF SLAM
In green we show the part related to state prediction
Now in green we have new estimated state and new estimated covariance matrix
A concrete example of EKF SLAM
EKF-SLAM: a concrete example of a robot on a plane. The robot moves in the plane with a velocity-based motion model: v = translational speed, ω = rotational velocity. The robot observes point landmarks with a range-bearing sensor. The data association to the landmark points is known, and so is the number of landmarks.
State vector of mean estimates for N landmarks, and the covariance matrix; initially nothing is known about the landmarks.
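A sketch of this initial state (my own construction; the "nothing known" condition is encoded, as is common, by a very large variance on the landmark diagonal rather than a true infinity):

```python
import numpy as np

def init_ekf_slam(n_landmarks, large=1e6):
    """Initial EKF-SLAM state: the robot pose (x, y, theta) is known
    exactly (zero variance), the N landmark positions are unknown
    (huge variance on the diagonal). State size is 3 + 2N."""
    dim = 3 + 2 * n_landmarks
    mu = np.zeros(dim)                  # robot at origin, landmarks at 0
    Sigma = np.zeros((dim, dim))
    Sigma[3:, 3:] = np.eye(2 * n_landmarks) * large  # 'infinite' uncertainty
    return mu, Sigma
```

The first observation of each landmark then overwrites this placeholder uncertainty almost entirely, which is exactly the intended behavior.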
From the last slide: remember the notations F_x^T and g of our matrices (see next slide).
Now our task is to calculate the matrix G.
Note that we denote the Jacobian of the motion by G^x_t.
From the previous slides: this slide just shows how to calculate the Jacobian using the new notation.
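The F_x notation can be made concrete with a short sketch (my own formulation): F_x projects the full (3 + 2N)-dimensional state onto the 3-dimensional pose, so the full SLAM motion Jacobian is the identity everywhere except the pose block, where the low-dimensional pose Jacobian is inserted.

```python
import numpy as np

def slam_motion_jacobian(G_pose, n_landmarks):
    """Full EKF-SLAM motion Jacobian built from the 3x3 pose Jacobian
    G_pose via the projection F_x: G_t = I + F_x^T (G_pose - I_3) F_x.
    Landmarks are static, so their block stays the identity."""
    dim = 3 + 2 * n_landmarks
    Fx = np.zeros((3, dim))
    Fx[:, :3] = np.eye(3)          # F_x picks out the pose components
    return np.eye(dim) + Fx.T @ (G_pose - np.eye(3)) @ Fx
```

This is why the prediction step stays cheap: only the pose block and the pose-landmark cross-covariances are affected by the motion update.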
Noise in motion
Noise in measurement
We use the notation F from one of the previous slides.
Now we have to calculate the Jacobian H and the Kalman gain matrix K.
Implementation Notes
Loop Closing
EKF SLAM Properties
Practical Examples of applications