An Introduction to Kalman Filtering by Arthur Pece

Generative model for a generic signal

Basic concepts in tracking/filtering
- State variables x and observation y: both are vectors
- Discrete time: x(t), y(t), x(t+1), y(t+1)
- Probability P; pdf (density) p(v) of a vector variable v:
  p(v*) = lim_{dv -> 0} P(v* < v < v* + dv) / dv

Basic concepts: Gaussian pdf
A Gaussian pdf is completely characterized by 2 parameters:
- its mean vector
- its covariance matrix
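For reference (the slide does not write it out), the density of a Gaussian with mean vector m and covariance matrix A in d dimensions is:

```latex
p(v) = (2\pi)^{-d/2} \, |A|^{-1/2}
  \exp\!\left( -\tfrac{1}{2} (v - m)^{\mathsf{T}} A^{-1} (v - m) \right)
```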

Basic concepts: prior and likelihood
- Prior pdf of a variable v: in tracking, this is usually the pdf conditional on the previous estimate: p[ v(t) | v(t-1) ]
- Likelihood: pdf of the observation, given the state variables: p[ y(t) | x(t) ]

Basic concepts: Bayes' theorem
The posterior pdf is proportional to the prior pdf times the likelihood:
p[ x(t) | x(t-1), y(t) ] = p[ x(t) | x(t-1) ] p[ y(t) | x(t) ] / Z
where Z = p[ y(t) | x(t-1) ] is a normalizing constant

Basic concepts: recursive Bayesian estimation
Posterior pdf given the set y(1:t) of all observations up to time t:
p[ x(t) | y(1:t) ] = p[ y(t) | x(t) ] · p[ x(t) | x(t-1) ] · p[ x(t-1) | y(1:t-1) ] / Z_1

Basic concepts: recursive Bayesian estimation
p[ x(t) | y(1:t) ] = p[ y(t) | x(t) ] · p[ x(t) | x(t-1) ] · p[ y(t-1) | x(t-1) ] · p[ x(t-1) | x(t-2) ] · p[ x(t-2) | y(1:t-2) ] / Z_2

Basic concepts: recursive Bayesian estimation
p[ x(t) | y(1:t) ] = p[ y(t) | x(t) ] · p[ x(t) | x(t-1) ] · p[ y(t-1) | x(t-1) ] · p[ x(t-1) | x(t-2) ] · p[ y(t-2) | x(t-2) ] · p[ x(t-2) | x(t-3) ] · … / Z*
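For completeness: the slides condition each factor on a point estimate of the previous state; the full Bayesian recursion instead marginalizes the previous state out:

```latex
p[x(t) \mid y(1{:}t)] \;\propto\; p[y(t) \mid x(t)]
  \int p[x(t) \mid x(t-1)] \; p[x(t-1) \mid y(1{:}t-1)] \; dx(t-1)
```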

Kalman model

Kalman model in words
- Dynamical model: the current state x(t) is a linear (vector) function of the previous state x(t-1), plus additive Gaussian noise
- Observation model: the observation y(t) is a linear (vector) function of the state x(t), plus additive Gaussian noise
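A minimal sketch of this generative model in Python/NumPy, using the symbols the later slides introduce (D for the dynamical matrix, F for the observation matrix, N and R for the noise covariances); the 1-D constant-velocity model itself is an illustrative assumption, not part of the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D constant-velocity model: state x = [position, velocity]
D = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # dynamical matrix
F = np.array([[1.0, 0.0]])      # observation matrix: we only observe position
N = 0.01 * np.eye(2)            # process-noise covariance
R = np.array([[0.25]])          # observation-noise covariance

def simulate(T, x0):
    """Draw a state trajectory and observations from the Kalman model."""
    x, xs, ys = x0, [], []
    for _ in range(T):
        x = D @ x + rng.multivariate_normal(np.zeros(2), N)   # dynamics
        y = F @ x + rng.multivariate_normal(np.zeros(1), R)   # observation
        xs.append(x); ys.append(y)
    return np.array(xs), np.array(ys)

states, observations = simulate(50, np.zeros(2))
```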

Problems in visual tracking
- The dynamics are nonlinear and non-Gaussian
- Pose and shape are nonlinear, non-Gaussian functions of the system state
- Most important: what is observed is not image coordinates but pixel grey-level values, a nonlinear function of object shape and pose, with non-additive, non-Gaussian noise

More detailed model

Back to Kalman
- A Gaussian pdf, propagated through a linear system, remains Gaussian
- If Gaussian noise is added to a variable with a Gaussian pdf, the resulting pdf is still Gaussian (the covariances add)
---> The predicted state pdf is Gaussian if the previous state pdf was Gaussian
---> The observation pdf is Gaussian if the state pdf is Gaussian
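In equation form: if the previous state has a Gaussian pdf with mean m and covariance A, then after the linear dynamics D plus noise of covariance N,

```latex
x(t-1) \sim \mathcal{N}(m, A), \quad n \sim \mathcal{N}(0, N)
\;\Longrightarrow\;
D\,x(t-1) + n \sim \mathcal{N}\!\left(D m,\; D A D^{\mathsf{T}} + N\right)
```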

Kalman dynamics

Kalman observation

Kalman posterior pdf
The product of 2 Gaussian densities is still Gaussian (the inverse covariances add)
---> the posterior pdf of the state is Gaussian if the prior pdf and the likelihood are Gaussian
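The identity behind "the inverse covariances add" (a standard result, not written out on the slide): the product of Gaussians with means m_1, m_2 and covariances A_1, A_2 is proportional to a Gaussian with

```latex
A_{12} = \left(A_1^{-1} + A_2^{-1}\right)^{-1},
\qquad
m_{12} = A_{12}\left(A_1^{-1} m_1 + A_2^{-1} m_2\right)
```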

Kalman filter
The filter operates in two steps: prediction and update
- Prediction: propagate the mean and covariance of the state through the dynamical model
- Update: combine prediction and innovation (defined below) to obtain the state estimate with maximum posterior pdf

Note on the symbols
From now on, the symbol x no longer represents the "real" state (which we cannot know) but the mean of the state's Gaussian pdf, and the symbol A represents its covariance matrix.
After the prediction step, x and A denote the mean and covariance of the prior (predicted) pdf; after the update step, they denote the mean and covariance of the posterior pdf.

Kalman prediction
- Prior mean: the previous mean vector times the dynamical matrix: x(t) = D x(t-1)
- Prior covariance matrix: the previous covariance matrix pre- AND post-multiplied by the dynamical matrix, PLUS the noise covariance: A(t) = D A(t-1) D^T + N
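A sketch of the prediction step, continuing the NumPy setup above (D and N as defined earlier):

```python
def predict(x, A, D, N):
    """Kalman prediction: propagate mean and covariance through the dynamics."""
    x_prior = D @ x                    # x(t) = D x(t-1)
    A_prior = D @ A @ D.T + N          # A(t) = D A(t-1) D^T + N
    return x_prior, A_prior
```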

Kalman update
In the update step we must reason backwards, from effect (observation) to cause (state): we must "invert" the generative process. Hence the update is more complicated than the prediction.

Kalman update (continued)
Basic scheme:
1. Predict the observation from the current state estimate
2. Take the difference between the predicted and actual observation (the innovation)
3. Project the innovation back to update the state

Kalman innovation
Given the observation matrix F, the innovation v is:
v = y - F x
Given the observation-noise covariance R, the innovation has covariance W:
W = F A F^T + R
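The same two lines in NumPy (F and R as above; x and A are the predicted mean and covariance):

```python
def innovation(x, A, y, F, R):
    """Innovation and its covariance for a predicted state (x, A)."""
    v = y - F @ x                      # v = y - F x
    W = F @ A @ F.T + R                # W = F A F^T + R
    return v, W
```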

Kalman update: state mean vector
Posterior mean vector: add the weighted innovation to the predicted mean vector.
Weight the innovation by the relative covariances of state and innovation:
larger covariance of the innovation --> larger uncertainty of the innovation --> smaller weight of the innovation

Kalman gain
With predicted state covariance A, innovation covariance W, and observation matrix F, the Kalman gain is:
K = A F^T W^-1
Posterior state mean: x = x + K v

Kalman update: state covariance matrix
Posterior covariance matrix: subtract the weighted covariance of the innovation.
Weight the covariance of the innovation by the Kalman gain:
A = A - K W K^T
Why subtract? Look carefully at the equation: a larger innovation covariance --> a smaller Kalman gain K --> a smaller amount subtracted!
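Putting the update together (a sketch; reuses the innovation function above):

```python
def update(x, A, y, F, R):
    """Kalman update: fold an observation y into the predicted state (x, A)."""
    v, W = innovation(x, A, y, F, R)
    K = A @ F.T @ np.linalg.inv(W)     # Kalman gain: K = A F^T W^-1
    x_post = x + K @ v                 # posterior mean: x + K v
    A_post = A - K @ W @ K.T           # posterior covariance: A - K W K^T
    return x_post, A_post
```

In numerical practice one would use np.linalg.solve rather than forming W^-1 explicitly; the explicit inverse is kept here to mirror the slide's equation.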

Kalman update: state covariance matrix (continued)
Another, equivalent formulation requires matrix inversion (the inverse covariances add).
Advanced note: the equations given here are for the usual covariance form of the Kalman filter. It is possible to work with inverse covariance matrices all the time (in prediction and update): this is called the information form of the Kalman filter.

Summary of Kalman equations
Prediction:
  x(t) = D x(t-1)
  A(t) = D A(t-1) D^T + N
Update:
  innovation:      v = y - F x
  innovation cov.: W = F A F^T + R
  Kalman gain:     K = A F^T W^-1
  posterior mean:  x = x + K v
  posterior cov.:  A = A - K W K^T
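Tying the pieces together, a sketch of the full recursive loop, run on the observations drawn from the generative-model example above (predict and update as defined earlier; the initial mean and covariance are illustrative choices):

```python
def kalman_filter(ys, x0, A0, D, N, F, R):
    """Run predict/update over a sequence of observations; return the means."""
    x, A, means = x0, A0, []
    for y in ys:
        x, A = predict(x, A, D, N)     # prediction step
        x, A = update(x, A, y, F, R)   # update step
        means.append(x)
    return np.array(means)

estimates = kalman_filter(observations, np.zeros(2), np.eye(2), D, N, F, R)
```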

Kalman equations with control input u
Prediction:
  x(t) = D x(t-1) + C u(t-1)
  A(t) = D A(t-1) D^T + N
Update:
  innovation:      v = y - F x
  innovation cov.: W = F A F^T + R
  Kalman gain:     K = A F^T W^-1
  posterior mean:  x = x + K v
  posterior cov.:  A = A - K W K^T
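Only the prediction step changes; a sketch (C is the control matrix and u the control vector, both assumed given):

```python
def predict_with_control(x, A, D, N, C, u):
    """Kalman prediction with a known control input u."""
    x_prior = D @ x + C @ u            # x(t) = D x(t-1) + C u(t-1)
    A_prior = D @ A @ D.T + N          # covariance prediction is unchanged
    return x_prior, A_prior
```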