Lecture 11: Kalman Filters CS 344R: Robotics Benjamin Kuipers.

Up To Higher Dimensions Our previous Kalman filter discussion used a simple one-dimensional model. Now we go up to higher dimensions:
– State vector: x ∈ R^n
– Sense vector: z ∈ R^m
– Motor vector: u ∈ R^l
First, a little statistics.

Expectations Let x be a random variable. The expected value E[x] is the mean: the probability-weighted mean of all possible values, E[x] = Σ_i x_i p(x_i). The sample mean (1/N) Σ_i x_i approaches E[x] as N grows. The expected value of a vector x is taken component by component.
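As a quick numeric check (a NumPy sketch, not part of the original slides; the distributions and seed are my own choices), the sample mean of many draws approaches the expected value, component-wise for vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# The sample mean of many draws approaches the expected value E[x].
samples = rng.normal(loc=2.0, scale=1.0, size=100_000)
sample_mean = samples.mean()          # close to the true mean 2.0

# For a vector-valued x, the expectation is taken component-wise.
vec_samples = rng.normal(loc=[1.0, -3.0], scale=1.0, size=(100_000, 2))
vec_mean = vec_samples.mean(axis=0)   # close to [1.0, -3.0]
```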

Variance and Covariance The variance is Var(x) = E[(x − E[x])²]. The covariance matrix of a vector x is C = E[(x − E[x])(x − E[x])ᵀ]. Divide the sample sums by N − 1 to make the sample variance an unbiased estimator for the population variance.

Biased and Unbiased Estimators Strictly speaking, the sample variance (1/N) Σ_i (x_i − x̄)² is a biased estimate of the population variance. An unbiased estimator is s² = (1/(N − 1)) Σ_i (x_i − x̄)². But: “If the difference between N and N − 1 ever matters to you, then you are probably up to no good anyway …” [Press et al.]
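NumPy exposes exactly this N versus N − 1 choice through the `ddof` parameter of `var`; a small sketch (sample data and seed are my own):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=3.0, size=10)   # true variance is 9

# Biased (maximum-likelihood) estimate divides by N.
biased = x.var(ddof=0)
# Unbiased estimate divides by N - 1.
unbiased = x.var(ddof=1)

# The two differ by exactly the factor N / (N - 1).
```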

Covariance Matrix Along the diagonal, the C_ii are variances. The off-diagonal C_ij are covariances, which capture the correlations between components.

Independent Variation x and y are independent Gaussian random variables (N = 100 samples), generated with σ_x = 1 and σ_y = 3. The sample covariance matrix is approximately C ≈ [[1, 0], [0, 9]].

Dependent Variation c and d are random variables generated from the x and y above by c = x + y and d = x − y. The sample covariance matrix is approximately C ≈ [[10, −8], [−8, 10]], since Var(c) = Var(d) = σ_x² + σ_y² = 10 and Cov(c, d) = σ_x² − σ_y² = −8.
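These numbers can be checked empirically; a small NumPy sketch (the sample size and seed are my own choices, larger than the slides' N = 100 so the estimate is tight):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.normal(0.0, 1.0, n)   # sigma_x = 1
y = rng.normal(0.0, 3.0, n)   # sigma_y = 3

c = x + y
d = x - y

# Sample covariance matrix of (c, d); np.cov treats rows as variables.
C = np.cov(np.vstack([c, d]))
# Theory: Var(c) = Var(d) = 1 + 9 = 10, Cov(c, d) = 1 - 9 = -8.
```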

Discrete Kalman Filter Estimate the state x ∈ R^n of a linear stochastic difference equation
x_k = A x_{k−1} + B u_{k−1} + w_{k−1}
– process noise w is drawn from N(0, Q), with covariance matrix Q,
with a measurement z ∈ R^m
z_k = H x_k + v_k
– measurement noise v is drawn from N(0, R), with covariance matrix R.
A and Q are n×n; B is n×l; R is m×m; H is m×n.

Estimates and Errors x̂_k ∈ R^n is the estimated state at time step k; x̂_k⁻ is the estimate after prediction, before observation. Errors: e_k⁻ = x_k − x̂_k⁻ and e_k = x_k − x̂_k. Error covariance matrices: P_k⁻ = E[e_k⁻ (e_k⁻)ᵀ] and P_k = E[e_k e_kᵀ]. The Kalman filter's task is to update x̂_k and P_k.

Time Update (Predictor) Update the expected value of x: x̂_k⁻ = A x̂_{k−1} + B u_{k−1}. Update the error covariance matrix P: P_k⁻ = A P_{k−1} Aᵀ + Q. The previous one-dimensional statements were simplified versions of the same idea.
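The predictor step can be written directly from these equations; a minimal NumPy sketch (the function name and argument shapes are my own conventions):

```python
import numpy as np

def time_update(x_hat, P, A, B, u, Q):
    """Kalman predictor: propagate the state estimate and its covariance."""
    x_pred = A @ x_hat + B @ u     # x_k^- = A x_{k-1} + B u_{k-1}
    P_pred = A @ P @ A.T + Q       # P_k^- = A P_{k-1} A^T + Q
    return x_pred, P_pred
```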

Measurement Update (Corrector) Update the expected value: x̂_k = x̂_k⁻ + K_k (z_k − H x̂_k⁻), where the innovation is z_k − H x̂_k⁻. Update the error covariance matrix: P_k = (I − K_k H) P_k⁻. Compare with the previous one-dimensional form.

The Kalman Gain The optimal Kalman gain is K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹. Compare with the previous one-dimensional form.
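Putting the gain and the corrector step together, a minimal NumPy sketch of the measurement update (the function name is my own):

```python
import numpy as np

def measurement_update(x_pred, P_pred, z, H, R):
    """Kalman corrector: fold measurement z into the predicted estimate."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # K_k = P_k^- H^T (H P_k^- H^T + R)^-1
    innovation = z - H @ x_pred           # z_k - H x_k^-
    x_new = x_pred + K @ innovation       # x_k = x_k^- + K (z_k - H x_k^-)
    P_new = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```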

Extended Kalman Filter Suppose the state-evolution and measurement equations are non-linear:
x_k = f(x_{k−1}, u_{k−1}) + w_{k−1}
z_k = h(x_k) + v_k
– process noise w is drawn from N(0, Q), with covariance matrix Q.
– measurement noise v is drawn from N(0, R), with covariance matrix R.

The Jacobian Matrix For a scalar function y = f(x), Δy ≈ f′(x) Δx. For a vector function y = f(x), the Jacobian matrix J has entries J_ij = ∂f_i/∂x_j, and Δy ≈ J Δx.
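When f is available only as code, the Jacobian can be approximated by finite differences, which is handy for checking hand-derived A and H matrices; a sketch (not from the slides):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of a vector function f at point x."""
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - fx) / eps
    return J
```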

Linearize the Non-Linear Let A be the Jacobian of f with respect to x. Let H be the Jacobian of h with respect to x. Then the Kalman Filter equations are almost the same as before!

EKF Update Equations Predictor step: x̂_k⁻ = f(x̂_{k−1}, u_{k−1}), P_k⁻ = A P_{k−1} Aᵀ + Q. Kalman gain: K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹. Corrector step: x̂_k = x̂_k⁻ + K_k (z_k − h(x̂_k⁻)), P_k = (I − K_k H) P_k⁻.
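These equations assemble into one EKF cycle; a minimal sketch assuming additive noise and user-supplied Jacobian functions (all names here are my own, not from the slides):

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, jac_f, jac_h, Q, R):
    """One EKF cycle: predict through nonlinear f, correct through nonlinear h,
    with Jacobians A = df/dx and H = dh/dx evaluated at the current estimate."""
    # Predictor step
    x_pred = f(x_hat, u)
    A = jac_f(x_hat, u)
    P_pred = A @ P @ A.T + Q
    # Kalman gain
    H = jac_h(x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Corrector step
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```

With linear f and h, this reduces exactly to the ordinary Kalman filter equations above.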

“Catch The Ball” Assignment State evolution is linear (almost).
– What is A? (B = 0.)
The sensor equation is non-linear.
– What is z = h(x)?
– What is the Jacobian H(x) of h with respect to x?
Errors are treated as additive. Is this OK?
– What are the covariance matrices Q and R?
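For illustration only, one plausible ballistic state model, not necessarily the assignment's exact setup: state [px, py, vx, vy] with time step dt, where gravity enters as an additive constant, which is why the evolution is only "almost" linear:

```python
import numpy as np

# Hypothetical ballistic state [px, py, vx, vy]; dt and g are my own choices.
dt, g = 0.1, 9.8

A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
# Constant gravity term, outside the linear map A.
gravity = np.array([0.0, -0.5 * g * dt**2, 0.0, -g * dt])

x = np.array([0.0, 2.0, 3.0, 4.0])   # example initial state
x_next = A @ x + gravity             # one prediction step
```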

TTD: intuitive explanations for the A P Aᵀ and H P Hᵀ terms in the update equations. Briefly: if y = A x, then Cov(y) = A Cov(x) Aᵀ, so A P Aᵀ is the state covariance propagated through the dynamics, and H P Hᵀ is the state covariance projected into measurement space.