SEIF, EnKF, EKF SLAM, Fast SLAM, Graph SLAM


SEIF, EnKF, EKF SLAM, Fast SLAM, Graph SLAM. Pieter Abbeel, UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics.

Information Filter From an analytical point of view, identical to the Kalman filter; the difference is that it keeps track of the inverse covariance rather than the covariance matrix [a matter of some linear-algebra manipulation to get into this form]. Why interesting? An inverse covariance matrix equal to 0 is easier to work with than a covariance matrix equal to infinity (the case of complete uncertainty). The inverse covariance matrix is also often sparser than the covariance matrix --- for the "insiders": entry (i, j) of the inverse covariance matrix is 0 if x_i is conditionally independent of x_j given some set {x_k, x_l, ...}. Downside: when extended to the non-linear setting, one needs to solve a linear system to find the mean (around which one can then linearize). See Probabilistic Robotics pp. 78-79 for more in-depth pros/cons, and Chapter 12 for its relevance to SLAM (where it is often referred to as the "sparse extended information filter (SEIF)").
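As a minimal sketch of the information form (assuming a linear observation z = C x + noise with covariance Q; the function names are ours), the measurement update is purely additive in (Ω, ξ) = (Σ⁻¹, Σ⁻¹μ), which is where the sparsity comes from; recovering the mean is the linear solve mentioned above:

```python
import numpy as np

def info_measurement_update(Omega, xi, C, Q, z):
    """Measurement update on (Omega, xi) = (inverse covariance, information vector)."""
    Qinv = np.linalg.inv(Q)
    Omega_new = Omega + C.T @ Qinv @ C   # additive update -> often stays sparse
    xi_new = xi + C.T @ Qinv @ z
    return Omega_new, xi_new

def mean_from_info(Omega, xi):
    # The "downside": finding the mean requires solving a linear system.
    return np.linalg.solve(Omega, xi)
```

For a linear-Gaussian model this produces exactly the Kalman-filter posterior, just parameterized differently.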

Ensemble Kalman Filter (EnKF) Represent the Gaussian distribution by samples. Empirically, even ~40 samples can track the atmospheric state with high accuracy with the EnKF. Compare the UKF: 2n sigma points with n = 10^6, and it still forms covariance matrices for the updates. The intellectual innovation: transforming the Kalman filter updates into updates that can be computed from samples, and which produce samples, while never explicitly representing the covariance matrix.

KF vs. EnKF KF: keep track of μ, Σ; prediction and correction return μ_t, Σ_t. EnKF: keep track of an ensemble [x^1, ..., x^N]. Prediction: the ensemble can be updated by simply propagating each sample through the dynamics model and adding sampled noise. Correction: see the next slide.

EnKF correction step Current ensemble X = [x^1, ..., x^N]. Build the observations matrix Z = [z_t + v^1, ..., z_t + v^N], where the v^i are sampled according to the observation noise model. Then the columns of X + K_t(Z - C_t X) form a set of random samples from the posterior. Note: when computing K_t, keep Σ_t in the factored form Σ_t = [x^1 - μ_t, ..., x^N - μ_t][x^1 - μ_t, ..., x^N - μ_t]^T. C_t can also be computed from samples.
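The correction step above can be sketched as follows (a small-dimension illustration, assuming a linear observation model z = C x + v with v ~ N(0, Q); for clarity it builds Σ explicitly, whereas in high dimensions one would keep only the deviation factor as the slide notes):

```python
import numpy as np

def enkf_correct(X, C, Q, z, rng):
    """X: (n, N) ensemble matrix. Returns the corrected ensemble."""
    n, N = X.shape
    mu = X.mean(axis=1, keepdims=True)
    A = X - mu                          # deviations [x^i - mu]
    Sigma = A @ A.T / (N - 1)           # small-n only; keep factored when n is large
    S = C @ Sigma @ C.T + Q
    K = Sigma @ C.T @ np.linalg.inv(S)  # Kalman gain
    V = rng.multivariate_normal(np.zeros(len(z)), Q, size=N).T
    Z = z[:, None] + V                  # perturbed observations [z_t + v^i]
    return X + K @ (Z - C @ X)          # columns are posterior samples
```

The ensemble mean and spread after correction approximate the KF posterior mean and covariance.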

How about C? Indeed, it would be expensive to build up C explicitly. However, careful inspection shows that C only ever appears in products of the form C X and C Σ C^T = C X X^T C^T, so one can simply compute h(x) for all columns x of X and form the required empirical covariance matrices [details left as an exercise].

References for the EnKF Mandel, 2007, "A brief tutorial on the ensemble Kalman filter". Evensen, 2009, "The ensemble Kalman filter for combined state and parameter estimation".

KF Summary The Kalman filter is exact under linear-Gaussian assumptions. Extensions to the non-linear setting: extended Kalman filter, unscented Kalman filter. Extensions to extremely large-scale settings: ensemble Kalman filter, sparse information filter. Main limitation: restricted to unimodal / Gaussian-looking distributions. This can be alleviated by running multiple (E/U)KFs and keeping track of the likelihood of each, but that remains limited in representational power unless we allow a very large number of them.

Kalman filter property If the system is observable (the dual of controllable!), then the Kalman filter will converge to the true state. The system is observable iff O = [C; CA; CA^2; ...; CA^(n-1)] has full column rank. (1) Intuition: with no noise, we observe y_0, y_1, ..., and the unknown initial state x_0 satisfies: y_0 = C x_0, y_1 = C A x_0, ..., y_K = C A^K x_0. This system of equations has a unique solution x_0 iff the matrix [C; CA; ...; CA^K] has full column rank. Because any power of a matrix higher than n - 1 can be written in terms of lower powers of the same matrix (Cayley-Hamilton), condition (1) is sufficient to check (i.e., the column rank does not grow any further once K = n - 1).
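Condition (1) is easy to check numerically. A small sketch (the example system is ours, not from the slides): a constant-velocity model is observable when we measure position, but not when we measure only velocity, since velocity says nothing about the initial position:

```python
import numpy as np

def is_observable(A, C):
    """Check full column rank of O = [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

# Constant-velocity model: state = [position; velocity]
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C_pos = np.array([[1.0, 0.0]])  # measure position  -> observable
C_vel = np.array([[0.0, 1.0]])  # measure velocity  -> NOT observable
```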

Simple self-quiz The previous slide assumed zero control inputs at all times. Does the same result apply with non-zero control inputs? What changes in the derivation?

EKF/UKF SLAM State: (n_R, e_R, θ_R, n_A, e_A, n_B, e_B, n_C, e_C, n_D, e_D, n_E, e_E, n_F, e_F, n_G, e_G, n_H, e_H) -- the robot pose plus the (north, east) positions of landmarks A through H. Now the map = locations of landmarks (vs. grid maps). Transition model: robot motion model; landmarks stay in place. [Figure: robot R surrounded by landmarks A-H.]

Simultaneous Localization and Mapping (SLAM) In practice, the robot is not aware of all landmarks from the beginning. Moreover, there is no use in keeping track of landmarks about which the robot has not received any measurements. => Incrementally grow the state when new landmarks are encountered.

Simultaneous Localization and Mapping (SLAM) Landmark measurement model: the robot measures [x_k; y_k], the position of landmark k expressed in the coordinate frame attached to the robot: h(n_R, e_R, θ_R, n_k, e_k) = [x_k; y_k] = R(θ_R) ([n_k; e_k] - [n_R; e_R]). Often there are also odometry measurements, e.g. wheel encoders. As these measure the control input being applied, they are often incorporated directly as control inputs (why?).
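A sketch of this measurement model (the sign convention of the rotation, world frame into robot frame, is our assumption; the slide does not spell it out):

```python
import numpy as np

def h(n_R, e_R, theta_R, n_k, e_k):
    """Position of landmark k in the coordinate frame attached to the robot."""
    c, s = np.cos(theta_R), np.sin(theta_R)
    R = np.array([[c, s],
                  [-s, c]])             # rotates world-frame vectors into the robot frame
    return R @ np.array([n_k - n_R, e_k - e_R])
```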

Victoria Park Data Set [courtesy by E. Nebot]

Victoria Park Data Set Vehicle [courtesy by E. Nebot]

Data Acquisition [courtesy by E. Nebot]

Estimated Trajectory [courtesy by E. Nebot]

EKF SLAM Application [courtesy by J. Leonard]

EKF SLAM Application odometry estimated trajectory [courtesy by John Leonard]

Landmark-based Localization

EKF-SLAM: practical challenges Defining landmarks: laser range finder -- distinct geometric features (e.g. use RANSAC to find lines, then use corners as features); camera -- "interest point detectors", textures, color, ... Often need to track multiple hypotheses. Data association / correspondence problem: when seeing features that constitute a landmark --- which landmark is it? Closing-the-loop problem: how do you know you are closing a loop? One can split off multiple EKFs whenever there is ambiguity, keep track of the likelihood score of each EKF, and discard the ones with low likelihood scores. Computational complexity with large numbers of landmarks.
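One standard answer (not spelled out on the slide) to "which landmark is it?" is nearest-neighbour gating on the Mahalanobis distance of the innovation; a measurement that falls outside every landmark's gate starts a new landmark:

```python
import numpy as np

def associate(z, predictions, gate=9.21):
    """Nearest-neighbour data association with a chi-square gate
    (9.21 = 99% quantile for 2 degrees of freedom).
    predictions: list of (z_hat, S) per known landmark, where z_hat is the
    predicted measurement and S its innovation covariance.
    Returns the index of the best match, or None (=> treat as new landmark)."""
    best, best_d2 = None, gate
    for i, (z_hat, S) in enumerate(predictions):
        nu = z - z_hat                      # innovation
        d2 = nu @ np.linalg.solve(S, nu)    # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best
```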

KF over very large state spaces High-dimensional ocean and atmospheric circulation models (10^6-dimensional state space); SLAM with 10^6 landmarks. It becomes computationally very challenging to work with the 10^6 x 10^6 covariance matrix (terabytes!). In the SLAM community: the information filter, which keeps track of the inverse covariance matrix; this can often be well approximated by a sparse matrix. In the civil-engineering community: the ensemble Kalman filter, with applications often in tracking systems described by partial differential equations.

EKF_localization(μ_{t-1}, Σ_{t-1}, u_t, z_t, m): Correction: predicted measurement mean; Jacobian of h w.r.t. the robot location; predicted measurement covariance; Kalman gain; updated mean; updated covariance.
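The correction steps listed above (whose equations appeared as images on the slide) can be written out generically; the function names here are ours, and for a concrete robot one would plug in the actual measurement function h and its Jacobian:

```python
import numpy as np

def ekf_correct(mu_bar, Sigma_bar, z, h, H_of, Q):
    """Generic EKF correction step."""
    z_hat = h(mu_bar)                        # predicted measurement mean
    H = H_of(mu_bar)                         # Jacobian of h w.r.t. the state
    S = H @ Sigma_bar @ H.T + Q              # predicted measurement covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu = mu_bar + K @ (z - z_hat)            # updated mean
    Sigma = (np.eye(len(mu_bar)) - K @ H) @ Sigma_bar  # updated covariance
    return mu, Sigma
```

With a linear h this reduces exactly to the standard Kalman filter correction.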

EKF Prediction Step

EKF Observation Prediction Step

EKF Correction Step

Estimation Sequence (1)

Estimation Sequence (2)

Comparison to Ground Truth

Fast SLAM Rao-Blackwellized particle filter. Robot state = (x, y, θ) (just like gMapping). Map = landmark-based (vs. map = grid map for gMapping). Key observation (why Rao-Blackwellization is so useful): the location of landmark i is independent of the location of landmark j given the entire robot pose sequence. So instead of one joint Gaussian over the poses of all landmarks, we can keep track of a Gaussian for each landmark separately. Linear scaling with the number of landmarks (rather than quadratic).
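A sketch of the per-particle bookkeeping this independence buys (the class layout is our illustration, not the FastSLAM paper's): each particle carries its own pose plus one small independent (μ, Σ) Gaussian per landmark, rather than a joint covariance over all landmarks, so storage and per-measurement work scale linearly in the number of landmarks:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Particle:
    pose: np.ndarray                                   # (x, y, theta)
    landmarks: dict = field(default_factory=dict)      # id -> (mu 2-vec, Sigma 2x2)

    def update_landmark(self, lm_id, mu, Sigma):
        # One small 2x2 EKF per landmark: O(1) per update, O(M) total storage.
        self.landmarks[lm_id] = (mu, Sigma)
```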

SLAM thus far Landmark-based vs. occupancy grid. Probability distribution representation: EKF vs. particle filter vs. Rao-Blackwellized particle filter. EKF, SEIF, and FastSLAM are all "online". A currently popular fourth alternative: GraphSLAM.

Graph-based Formulation Use a graph to represent the problem. Every node in the graph corresponds to a pose of the robot during mapping. Every edge between two nodes corresponds to the spatial constraint between them. Goal: find a configuration of the nodes that minimizes the error introduced by the constraints.

Problem Formulation The problem can be described by a graph. Goal: find the assignment of poses to the nodes of the graph which minimizes the negative log likelihood of the observations, i.e. the sum of the (information-weighted, squared) errors between the predicted and observed relative poses along the edges.
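A minimal 1D illustration of this objective (our own toy example, deliberately much simpler than real 2D/3D pose graphs): poses x_i on a line, edges constraining x_j - x_i ≈ d_ij, solved as linear least squares with x_0 anchored to remove the gauge freedom:

```python
import numpy as np

def optimize_pose_graph(n, constraints):
    """constraints: list of (i, j, d_ij), meaning x_j - x_i should equal d_ij.
    Returns the least-squares pose assignment with x_0 anchored at 0."""
    A, b = [], []
    for i, j, d in constraints:
        row = np.zeros(n)
        row[j], row[i] = 1.0, -1.0      # residual: x_j - x_i - d
        A.append(row)
        b.append(d)
    anchor = np.zeros(n)
    anchor[0] = 1e6                     # heavily weighted x_0 = 0 fixes the gauge
    A.append(anchor)
    b.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x
```

With an inconsistent loop-closure edge (below, 0 -> 2 measured as 2.1 while the two steps sum to 2.0), the optimizer spreads the error over all poses.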

Approaches 2D approaches: Lu and Milios '97; Montemerlo et al. '03; Howard et al. '03; Dellaert et al. '03; Frese and Duckett '05; Olson et al. '06; Grisetti et al. '07; Tipaldi et al. '07. 3D approaches: Nuechter et al. '05; Dellaert et al. '05; Triebel et al. '06; Grisetti et al. '08/'09.

Graph-Based SLAM in a Nutshell Problem described as a graph Every node corresponds to a robot position and to a laser measurement An edge between two nodes represents a data-dependent spatial constraint between the nodes [KUKA Hall 22, courtesy P. Pfaff & G. Grisetti]


Graph-Based SLAM in a Nutshell Once we have the graph, we determine the most likely map by “moving” the nodes [KUKA Hall 22, courtesy P. Pfaff & G. Grisetti]


Graph-Based SLAM in a Nutshell Once we have the graph, we determine the most likely map by “moving” the nodes … like this. Then we render a map based on the known poses [KUKA Hall 22, courtesy P. Pfaff & G. Grisetti]

Graph-based Visual SLAM Visual odometry; loop closing. Real-world example: a flying robot in a corridor -- the map becomes inconsistent, so construct the graph and optimize. [courtesy B. Steder]

The KUKA Production Site

The KUKA Production Site

The KUKA Production Site: scans 59,668; total acquisition time 4,699.71 seconds; traveled distance 2,587.71 meters; total rotations 262.07 radians; size 180 x 110 meters; processing time < 30 minutes.