
EKF, UKF. Pieter Abbeel, UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics.

Kalman Filter. The Kalman filter is the special case of a Bayes filter in which the dynamics model and the sensor model are linear Gaussian. (Speaker note: adding an offset to the linear system makes it affine; "linearization" of nonlinear systems then maps directly onto such linear (affine) systems.)
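In the notation of Probabilistic Robotics (a standard form, not copied from the slide images), the linear-Gaussian model is:

    x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t,    \varepsilon_t \sim \mathcal{N}(0, R_t)
    z_t = C_t x_t + \delta_t,                       \delta_t \sim \mathcal{N}(0, Q_t)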

Kalman Filtering Algorithm At time 0: For t = 1, 2, … Dynamics update: Measurement update:
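As a rough sketch (not the slide's own pseudocode), the two updates in Python/NumPy, with A, B, C, R, Q the matrices of the linear-Gaussian model above:

    import numpy as np

    def kf_dynamics_update(mu, Sigma, u, A, B, R):
        # Propagate the belief through the linear dynamics model.
        mu_bar = A @ mu + B @ u
        Sigma_bar = A @ Sigma @ A.T + R
        return mu_bar, Sigma_bar

    def kf_measurement_update(mu_bar, Sigma_bar, z, C, Q):
        # Kalman gain, then correct the mean with the innovation z - C mu_bar.
        S = C @ Sigma_bar @ C.T + Q
        K = Sigma_bar @ C.T @ np.linalg.inv(S)
        mu = mu_bar + K @ (z - C @ mu_bar)
        Sigma = (np.eye(len(mu_bar)) - K @ C) @ Sigma_bar
        return mu, Sigma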

Nonlinear Dynamical Systems Most realistic robotic problems involve nonlinear functions: Versus linear setting:

Linearity Assumption Revisited

Non-linear Function. The "Gaussian of p(y)" has the mean and variance of y under p(y).

EKF Linearization (1)

EKF Linearization (2) p(x) has high variance relative to region in which linearization is accurate.

EKF Linearization (3) p(x) has small variance relative to region in which linearization is accurate.

EKF Linearization: First-Order Taylor Series Expansion. Dynamics model: for xt "close to" μt we have a first-order expansion around μt; measurement model: likewise for xt "close to" μt.
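Spelled out, these are the standard first-order linearizations (with F_t and H_t the Jacobians of f_t and h_t evaluated at the mean):

    f_t(x_t, u_t) \approx f_t(\mu_t, u_t) + F_t (x_t - \mu_t),    F_t = \frac{\partial f_t}{\partial x}\Big|_{\mu_t, u_t}
    h_t(x_t) \approx h_t(\mu_t) + H_t (x_t - \mu_t),              H_t = \frac{\partial h_t}{\partial x}\Big|_{\mu_t}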

EKF Linearization: Numerical. Numerically compute Ft column by column. Here ei is the basis vector with all entries equal to zero except for the i-th entry, which equals 1. To approximate Ft as closely as possible, ε is chosen to be a small number, but not so small as to cause numerical issues.
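A small sketch of this column-by-column computation using central differences (one common choice; the function signature and the default eps are illustrative, not from the slides):

    import numpy as np

    def numerical_jacobian(f, x, u, eps=1e-6):
        # F[:, i] ≈ (f(x + eps*e_i, u) - f(x - eps*e_i, u)) / (2*eps)
        n = len(x)
        f0 = np.asarray(f(x, u))
        F = np.zeros((f0.size, n))
        for i in range(n):
            e_i = np.zeros(n)
            e_i[i] = 1.0
            F[:, i] = (np.asarray(f(x + eps * e_i, u)) - np.asarray(f(x - eps * e_i, u))) / (2 * eps)
        return F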

Ordinary Least Squares. Given: samples {(x(1), y(1)), (x(2), y(2)), …, (x(m), y(m))}. Problem: find a function of the form f(x) = a0 + a1 x that fits the samples as well as possible in the least-squares sense, i.e. minimize the sum of squared residuals (y(j) - f(x(j)))^2 over a0 and a1.

Ordinary Least Squares. Recall our objective. Writing it in vector notation gives a quadratic in the parameter vector; set its gradient equal to zero to find the extremum. (See the Matrix Cookbook for matrix identities, including derivatives.)
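A quick numerical illustration with made-up sample points (not the slide's data); np.linalg.lstsq solves the same problem as the closed-form normal-equation solution a = (X^T X)^{-1} X^T y:

    import numpy as np

    # Illustrative samples (x(j), y(j)).
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([7.0, 8.5, 11.0, 12.5])

    # Design matrix: a column of ones for the offset a0, and x for the slope a1.
    X = np.column_stack([np.ones_like(x), x])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(a)  # a = [a0, a1]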

Ordinary Least Squares. For our example problem we obtain a = [4.75; 2.00], i.e. the fitted line f(x) = a0 + a1 x = 4.75 + 2.00 x.

Ordinary Least Squares. More generally: in vector notation this takes the same form; set the gradient equal to zero to find the extremum (exact same derivation as two slides back).

Vector-Valued Ordinary Least Squares Problems. So far we have considered approximating a scalar-valued function from samples {(x(1), y(1)), (x(2), y(2)), …, (x(m), y(m))}. A vector-valued function is just many scalar-valued functions, and we can approximate it the same way by solving an OLS problem multiple times. Concretely, in our vector notation this can be solved by solving a separate ordinary least squares problem to find each row of the coefficient matrix.

Vector-Valued Ordinary Least Squares Problems. Solving the OLS problem for each row gives us the full coefficient matrix; each OLS problem has the same structure.

Vector-Valued Ordinary Least Squares and EKF Linearization. Approximate xt+1 = ft(xt, ut) with an affine function a0 + Ft xt by running least squares on samples from the function: {(xt(1), y(1) = ft(xt(1), ut)), (xt(2), y(2) = ft(xt(2), ut)), …, (xt(m), y(m) = ft(xt(m), ut))}. Similarly for the measurement model zt = ht(xt).
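A sketch of this linearization by least squares (the helper name, and the assumption that the sample points xs are supplied by the caller, are mine):

    import numpy as np

    def ols_linearize(f, u, xs):
        # Fit an affine model f(x, u) ≈ a0 + F @ x to sample points xs (a list of state vectors).
        X = np.column_stack([np.ones(len(xs)), np.array(xs)])   # m x (n+1) design matrix
        Y = np.array([f(x, u) for x in xs])                     # m x d targets
        coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)          # (n+1) x d coefficients
        a0, F = coeffs[0], coeffs[1:].T                         # offset and d x n matrix
        return a0, F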

OLS and EKF Linearization: Sample Point Selection. OLS vs. traditional (tangent) linearization.

OLS Linearization: choosing sample points. Perhaps the most natural choice: a reasonable way of trying to cover the region with reasonably high probability mass.

Analytical vs. Numerical Linearization. Numerical linearization (based on least squares or finite differences) can give a more accurate "regional" approximation; the size of the region is determined by the evaluation points. Computational efficiency: analytical derivatives can be cheaper or more expensive than function evaluations. Development hint: numerical derivatives tend to be easier to implement; if you decide to use analytical derivatives, implementing finite-difference derivatives and comparing them with the analytical results can help debug the analytical derivatives.

EKF Algorithm At time 0: For t = 1, 2, … Dynamics update: Measurement update:

EKF Algorithm. Extended_Kalman_filter(μt-1, Σt-1, ut, zt): prediction step, then correction step; return μt, Σt.
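A compact sketch of this prediction/correction cycle (f and h are the nonlinear models; F_fn and H_fn return their Jacobians, analytic or numerical as above; the names are illustrative):

    import numpy as np

    def ekf_step(mu, Sigma, u, z, f, h, F_fn, H_fn, R, Q):
        # Prediction: propagate the mean through f, the covariance through the Jacobian F.
        F = F_fn(mu, u)
        mu_bar = f(mu, u)
        Sigma_bar = F @ Sigma @ F.T + R
        # Correction: linearize h around the predicted mean and apply the Kalman update.
        H = H_fn(mu_bar)
        S = H @ Sigma_bar @ H.T + Q
        K = Sigma_bar @ H.T @ np.linalg.inv(S)
        mu_new = mu_bar + K @ (z - h(mu_bar))
        Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
        return mu_new, Sigma_new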

Localization. "Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox '91]
Given: a map of the environment and a sequence of sensor measurements.
Wanted: an estimate of the robot's position.
Problem classes: position tracking, global localization, kidnapped robot problem (recovery).

Landmark-based Localization

EKF_localization(μt-1, Σt-1, ut, zt, m): Prediction: Jacobian of g w.r.t. location; Jacobian of g w.r.t. control; motion noise; predicted mean; predicted covariance.

EKF_localization(μt-1, Σt-1, ut, zt, m): Correction: predicted measurement mean; Jacobian of h w.r.t. location; predicted measurement covariance; Kalman gain; updated mean; updated covariance.
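For a range-bearing measurement of a known landmark from a pose (x, y, θ), the predicted measurement mean and the Jacobian of h w.r.t. the location take the standard form below (a sketch following Probabilistic Robotics, not the slide's exact formulas):

    import numpy as np

    def range_bearing(mu_bar, landmark):
        # Predicted measurement: range and bearing to the landmark from pose mu_bar = (x, y, theta).
        dx, dy = landmark[0] - mu_bar[0], landmark[1] - mu_bar[1]
        q = dx**2 + dy**2
        z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu_bar[2]])
        # Jacobian of h with respect to the pose (x, y, theta).
        H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                      [ dy / q,          -dx / q,          -1.0]])
        return z_hat, H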

EKF Prediction Step

EKF Observation Prediction Step

EKF Correction Step

Estimation Sequence (1)

Estimation Sequence (2)

Comparison to Ground Truth

EKF Summary. Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2). Not optimal! Can diverge if nonlinearities are large! Works surprisingly well even when all assumptions are violated!

Linearization via Unscented Transform EKF UKF

UKF Sigma-Point Estimate (2) EKF UKF

UKF Sigma-Point Estimate (3) EKF UKF

UKF Sigma-Point Estimate (4)

UKF intuition: why it can perform better [Julier and Uhlmann, 1997]. Assume we know the distribution over X and that it has mean \bar{x}, and let Y = f(X). The EKF approximates f to first order and ignores the higher-order terms; the UKF uses f exactly, but approximates p(x).

Self-quiz. When would the UKF significantly outperform the EKF? For the function sketched on the slide, analytical derivatives, finite-difference derivatives, and least squares would all end up with a horizontal linearization, so they would predict zero variance in Y = f(X). More generally, when f(μ) is significantly mismatched with E[f(X)] (e.g., f concave or convex), the EKF gives a systematic over- or underestimate.

A crude preliminary investigation of whether we can get the EKF to match the UKF by a particular choice of points used in the least-squares fitting. Beyond the scope of the course; included just for completeness.

Original unscented transform. Picks a minimal set of sample points that match the 1st, 2nd and 3rd moments of a Gaussian: \bar{x} = mean, P_{xx} = covariance, the subscript i denotes the i-th column, x \in \mathbb{R}^n. κ is an extra degree of freedom to fine-tune the higher-order moments of the approximation; when x is Gaussian, n + κ = 3 is a suggested heuristic. L = \sqrt{P_{xx}} can be chosen to be any matrix satisfying L L^T = P_{xx}. [Julier and Uhlmann, 1997]
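Written out, Julier and Uhlmann's construction of the 2n+1 sample points and weights is:

    \mathcal{X}_0 = \bar{x},                                                  W_0 = \kappa / (n + \kappa)
    \mathcal{X}_i = \bar{x} + \left(\sqrt{(n+\kappa) P_{xx}}\right)_i,        W_i = 1 / (2(n + \kappa)),      i = 1, \dots, n
    \mathcal{X}_{i+n} = \bar{x} - \left(\sqrt{(n+\kappa) P_{xx}}\right)_i,    W_{i+n} = 1 / (2(n + \kappa))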

Unscented Transform. Sigma points. Weights. Pass the sigma points through the nonlinear function. Recover mean and covariance.
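A compact sketch of these four steps, using the κ-parameterization and a Cholesky square root as on the previous slide (the function name and default κ are illustrative):

    import numpy as np

    def unscented_transform(mu, Sigma, f, kappa=1.0):
        # 1. Sigma points: mu plus/minus the columns of a square root of (n + kappa) * Sigma.
        n = len(mu)
        L = np.linalg.cholesky((n + kappa) * Sigma)
        points = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
        # 2. Weights.
        w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
        # 3. Pass the sigma points through the nonlinear function.
        ys = np.array([f(p) for p in points])
        # 4. Recover mean and covariance of the transformed points.
        mean = w @ ys
        cov = sum(wi * np.outer(y - mean, y - mean) for wi, y in zip(w, ys))
        return mean, cov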

Unscented Kalman filter. Dynamics update: simply use the unscented transform and estimate the mean and covariance at the next time step from the sample points. Observation update: use the sigma points from the unscented transform to compute the covariance matrix between xt and zt; then do the standard update.
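A sketch of that observation update built from the same sigma points; the cross-covariance C between x_t and z_t plays the role of Σ̄ H^T in the EKF (names and the κ default are illustrative):

    import numpy as np

    def ukf_measurement_update(mu_bar, Sigma_bar, z, h, Q, kappa=1.0):
        # Sigma points and weights from the predicted belief.
        n = len(mu_bar)
        L = np.linalg.cholesky((n + kappa) * Sigma_bar)
        X = [mu_bar] + [mu_bar + L[:, i] for i in range(n)] + [mu_bar - L[:, i] for i in range(n)]
        w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
        # Predicted measurement mean and covariance, plus the state-measurement cross-covariance.
        Z = np.array([h(x) for x in X])
        z_hat = w @ Z
        S = sum(wi * np.outer(zi - z_hat, zi - z_hat) for wi, zi in zip(w, Z)) + Q
        C = sum(wi * np.outer(xi - mu_bar, zi - z_hat) for wi, xi, zi in zip(w, X, Z))
        # Standard Kalman-style update with gain K = C S^{-1}.
        K = C @ np.linalg.inv(S)
        mu = mu_bar + K @ (z - z_hat)
        Sigma = Sigma_bar - K @ S @ K.T
        return mu, Sigma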

[Table 3.4 in Probabilistic Robotics]

UKF Summary. Highly efficient: same complexity as the EKF, up to a constant factor (slower in typical practical applications). Better linearization than the EKF: accurate in the first two terms of the Taylor expansion (the EKF only in the first term), and it captures more aspects of the higher-order terms. Derivative-free: no Jacobians needed. Still not optimal!

Unscented Transform. Sigma points. Weights. Pass the sigma points through the nonlinear function. Recover mean and covariance.

UKF_localization(μt-1, Σt-1, ut, zt, m): Prediction: motion noise; measurement noise; augmented state mean; augmented covariance; sigma points; prediction of sigma points; predicted mean; predicted covariance.

UKF_localization(μt-1, Σt-1, ut, zt, m): Correction: measurement sigma points; predicted measurement mean; predicted measurement covariance; cross-covariance; Kalman gain; updated mean; updated covariance.

EKF_localization(μt-1, Σt-1, ut, zt, m): Correction: predicted measurement mean; Jacobian of h w.r.t. location; predicted measurement covariance; Kalman gain; updated mean; updated covariance (speaker note: the two forms of the covariance update, \bar{Σ}_t - K_t H_t \bar{Σ}_t and \bar{Σ}_t - K_t S_t K_t^T, are equivalent).

UKF Prediction Step

UKF Observation Prediction Step

UKF Correction Step

EKF Correction Step

Estimation Sequence EKF PF UKF

Estimation Sequence EKF UKF

Prediction Quality EKF UKF

Kalman Filter-based System [Arras et al. 98]: Laser range-finder and vision High precision (<1cm accuracy) [Courtesy of Kai Arras]

Multi-Hypothesis Tracking

Localization With MHT. Belief is represented by multiple hypotheses; each hypothesis is tracked by a Kalman filter. Additional problems: data association (which observation corresponds to which hypothesis?) and hypothesis management (when to add or delete hypotheses?). There is a huge body of literature on target tracking, motion correspondence, etc.

MHT: Implemented System (1). Hypotheses are extracted from LRF scans. Each hypothesis has a probability of being the correct one; hypothesis probabilities are computed using Bayes' rule. Hypotheses with low probability are deleted, and new candidates are extracted from LRF scans. [Jensfelt et al. '00]

MHT: Implemented System (2) Courtesy of P. Jensfelt and S. Kristensen

MHT: Implemented System (3). Example run: map and trajectory; number of hypotheses vs. time; P(Hbest). Courtesy of P. Jensfelt and S. Kristensen