Observers and Kalman Filters

Presentation transcript:

Observers and Kalman Filters CS 393R: Autonomous Robots Slides Courtesy of Benjamin Kuipers

Good Afternoon Colleagues Are there any questions?

Stochastic Models of an Uncertain World Actions are uncertain. Observations are uncertain. The noise terms εi ~ N(0, σi²) are random variables.

Observers The state x is not directly observable. The sense vector y provides noisy information about x. An observer is a process that uses the sensory history to produce an estimate x̂ of x. A control law can then be written in terms of x̂.

Kalman Filter: Optimal Observer

Estimates and Uncertainty The estimate x̂(t) and its uncertainty are described by a conditional probability density function p(x(t) | z1, ..., zt), conditioned on the observations so far.

Gaussian (Normal) Distribution Completely described by N(μ, σ²): mean μ, standard deviation σ, variance σ². Its density is p(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)).

The Central Limit Theorem The sum of many independent random variables with the same mean, but with arbitrary individual density functions, converges to a Gaussian density function. If a model omits many small unmodeled effects, then the resulting error should converge to a Gaussian density function.

Illustrating the Central Limit Theorem Add 1, 2, 3, or 4 variables drawn from the same distribution: the density of the sum rapidly approaches a Gaussian.
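A quick numerical check of this idea (a sketch only: the uniform distribution, sample count, and seed are arbitrary choices, not from the slides):

```python
# Sums of k uniform random variables look increasingly Gaussian.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

for k in (1, 2, 3, 4):
    # Sum k independent Uniform(0, 1) variables.
    sums = rng.uniform(0.0, 1.0, size=(n_samples, k)).sum(axis=1)
    # Compare sample moments with what the CLT predicts:
    # mean k/2, variance k/12 for a sum of k Uniform(0, 1) variables.
    print(f"k={k}: sample mean={sums.mean():.3f} (CLT: {k/2:.3f}), "
          f"sample var={sums.var():.3f} (CLT: {k/12:.3f})")
```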

Detecting Modeling Error Every model is incomplete. If the omitted factors are all small, the resulting errors should add up to a Gaussian. If the error between a model and the data is not Gaussian, then some omitted factor is not small. One should find the dominant source of error and add it to the model.

Estimating a Value Suppose there is a constant value x: distance to a wall, angle to a wall, etc. At time t1, observe value z1 with variance σ1². The optimal estimate is x̂(t1) = z1, with variance σ²(t1) = σ1².

A Second Observation At time t2, observe value z2 with variance σ2².

Merged Evidence Combining the two observations yields an estimate between z1 and z2 that is more certain (lower variance) than either observation alone.

Update Mean and Variance Take a weighted average of the estimates, where the weights come from the variances (smaller variance = more certainty):
x̂(t2) = [σ2² / (σ1² + σ2²)] z1 + [σ1² / (σ1² + σ2²)] z2
1/σ²(t2) = 1/σ1² + 1/σ2²

From Weighted Average to Predictor-Corrector Rewriting the weighted average as x̂(t2) = x̂(t1) + K(t2) (z2 − x̂(t1)), with K(t2) = σ²(t1) / (σ²(t1) + σ2²), gives a version that can be applied “recursively”.

Predictor-Corrector Update the best estimate given new data: x̂(t2) = x̂(t1) + K(t2) (z2 − x̂(t1)). Update the variance: σ²(t2) = (1 − K(t2)) σ²(t1).
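The predictor-corrector form is easy to state in code. Below is a minimal sketch of the scalar update; the function name fuse and the example numbers are my choices, not from the slides:

```python
def fuse(x_hat, var, z, var_z):
    """Merge estimate (x_hat, var) with observation z of variance var_z."""
    K = var / (var + var_z)          # gain: how much to trust the observation
    x_new = x_hat + K * (z - x_hat)  # corrected estimate
    var_new = (1.0 - K) * var        # variance always shrinks
    return x_new, var_new

# Example: two noisy range measurements of a fixed wall distance.
x_hat, var = 5.2, 0.4                       # first observation: z1 = 5.2, var 0.4
x_hat, var = fuse(x_hat, var, 4.9, 0.4)     # second observation: z2 = 4.9
print(x_hat, var)                           # 5.05, 0.2: between z1 and z2, lower variance
```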

Static to Dynamic Now suppose x changes according to a dynamic model with process noise, e.g. dx/dt = u + w with known control input u and noise w ~ N(0, σw²).

Dynamic Prediction At t2 we know x̂(t2) and σ²(t2). At t3, after the change but before an observation, the prediction is x̂⁻(t3) = x̂(t2) + u Δt with variance σ²⁻(t3) = σ²(t2) + σw² Δt: prediction grows the uncertainty. Next, we correct this prediction with the observation at time t3.

Dynamic Correction At time t3 we observe z3 with variance σ3². Combine the prediction with the observation: x̂(t3) = x̂⁻(t3) + K(t3) (z3 − x̂⁻(t3)) and σ²(t3) = (1 − K(t3)) σ²⁻(t3), where K(t3) = σ²⁻(t3) / (σ²⁻(t3) + σ3²).

Qualitative Properties Suppose measurement noise σ3² is large. Then K(t3) approaches 0, and the measurement will be mostly ignored. Suppose prediction noise σ²⁻(t3) is large. Then K(t3) approaches 1, and the measurement will dominate the estimate.

Kalman Filter Takes a stream of observations and a dynamical model. Each step forms a weighted average between the prediction from the dynamical model and the correction from the observation. The Kalman gain K(t) is the weighting, based on the variances σ²(t) and σz². With time, K(t) and σ²(t) tend to stabilize.
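Putting prediction and correction together gives the one-dimensional filter loop. The sketch below assumes the constant-drift motion model used above; the noise levels and simulated data are illustrative assumptions:

```python
import numpy as np

def kalman_1d(zs, var_z, u, var_w, dt, x0, var0):
    """Track a scalar state given observations zs, each with variance var_z."""
    x_hat, var = x0, var0
    for z in zs:
        # Predict: state moves by u*dt, uncertainty grows by var_w*dt.
        x_hat += u * dt
        var += var_w * dt
        # Correct: blend prediction and observation via the Kalman gain.
        K = var / (var + var_z)
        x_hat += K * (z - x_hat)
        var *= (1.0 - K)
    return x_hat, var

rng = np.random.default_rng(1)
true_x = 2.0 + 0.5 * np.arange(1, 51) * 0.1        # state drifting at u = 0.5
zs = true_x + rng.normal(0.0, 0.3, size=50)        # noisy observations
print(kalman_1d(zs, var_z=0.09, u=0.5, var_w=0.01, dt=0.1, x0=0.0, var0=1.0))
```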

Simplifications We have only discussed a one-dimensional system. Most applications are higher dimensional. We have assumed the state variable is directly observable. In general, sense data give only indirect evidence. We will discuss the more complex case next.

Up To Higher Dimensions Our previous Kalman Filter discussion was of a simple one-dimensional model. Now we go up to higher dimensions: state vector x ∈ ℝⁿ, sense vector z ∈ ℝᵐ, motor vector u ∈ ℝˡ. First, a little statistics.

Expectations Let x be a random variable. The expected value E[x] is the mean: the probability-weighted mean of all possible values, E[x] = ∫ x p(x) dx. The sample mean (1/N) Σ xi approaches it as N grows. The expected value of a vector x is taken component by component.

Variance and Covariance The variance is E[(x − E[x])²]. The covariance matrix is E[(x − E[x])(x − E[x])^T]. Divide by N − 1 to make the sample variance an unbiased estimator for the population variance.

Covariance Matrix Along the diagonal, the Cii are variances. The off-diagonal Cij are covariances, which capture correlations between components.

Independent Variation x and y are independent Gaussian random variables (N = 100), generated with σx = 1 and σy = 3. Their covariance matrix is approximately diag(1, 9), with near-zero off-diagonal entries.

Dependent Variation c and d are random variables generated from the same samples: c = x + y and d = x − y. Their covariance matrix is approximately [[10, −8], [−8, 10]], since var(c) = var(d) = σx² + σy² = 10 and cov(c, d) = σx² − σy² = −8.
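Both examples are easy to reproduce with numpy's sample covariance (a sketch; the sample size and seed are arbitrary, so the numbers match only approximately):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
x = rng.normal(0.0, 1.0, N)   # sigma_x = 1
y = rng.normal(0.0, 3.0, N)   # sigma_y = 3

# Independent variation: covariance matrix close to diag(1, 9).
print(np.cov(x, y))

# Dependent variation: c and d share the same underlying x and y.
c, d = x + y, x - y
print(np.cov(c, d))           # roughly [[10, -8], [-8, 10]]
```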

Discrete Kalman Filter Estimate the state x ∈ ℝⁿ of a linear stochastic difference equation
x_k = A x_{k−1} + B u_{k−1} + w_{k−1}
where the process noise w is drawn from N(0, Q), with covariance matrix Q, given a measurement z ∈ ℝᵐ
z_k = H x_k + v_k
where the measurement noise v is drawn from N(0, R), with covariance matrix R. A and Q are n×n; B is n×l; R is m×m; H is m×n.

Estimates and Errors x̂_k ∈ ℝⁿ is the estimated state at time-step k; x̂⁻_k is the estimate after prediction, before observation. Errors: e⁻_k = x_k − x̂⁻_k and e_k = x_k − x̂_k. Error covariance matrices: P⁻_k = E[e⁻_k (e⁻_k)^T] and P_k = E[e_k (e_k)^T]. The Kalman Filter's task is to update x̂_k and P_k.

Time Update (Predictor) Update the expected value of x: x̂⁻_k = A x̂_{k−1} + B u_{k−1}. Update the error covariance matrix P: P⁻_k = A P_{k−1} A^T + Q. The previous one-dimensional statements were simplified versions of the same idea: x̂⁻(t3) = x̂(t2) + u Δt and σ²⁻(t3) = σ²(t2) + σw² Δt.

Measurement Update (Corrector) Update the expected value: x̂_k = x̂⁻_k + K_k (z_k − H x̂⁻_k), where the innovation is z_k − H x̂⁻_k. Update the error covariance matrix: P_k = (I − K_k H) P⁻_k. Compare with the previous form: x̂(t3) = x̂⁻(t3) + K(t3) (z3 − x̂⁻(t3)) and σ²(t3) = (1 − K(t3)) σ²⁻(t3).

The Kalman Gain The optimal Kalman gain K_k is K_k = P⁻_k H^T (H P⁻_k H^T + R)⁻¹. Compare with the previous form: K(t3) = σ²⁻(t3) / (σ²⁻(t3) + σ3²).
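The sketch below wires the time update, gain, and measurement update together in numpy, following the matrix names in the slides; the constant-velocity example at the bottom is an illustrative assumption, not from the slides:

```python
import numpy as np

def kf_step(x_hat, P, u, z, A, B, H, Q, R):
    """One predict + correct cycle of the discrete Kalman filter."""
    # Time update (predictor)
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    # Kalman gain
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Measurement update (corrector)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new

# Example: constant-velocity model (position, velocity), position measured.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.zeros((2, 1)); u = np.zeros(1)
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.09]])
x_hat, P = np.zeros(2), np.eye(2)
x_hat, P = kf_step(x_hat, P, u, np.array([0.7]), A, B, H, Q, R)
print(x_hat)
```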

Extended Kalman Filter Suppose the state-evolution and measurement equations are non-linear:
x_k = f(x_{k−1}, u_{k−1}) + w_{k−1}
z_k = h(x_k) + v_k
The process noise w is drawn from N(0, Q), with covariance matrix Q; the measurement noise v is drawn from N(0, R), with covariance matrix R.

The Jacobian Matrix For a scalar function y = f(x), a small change gives Δy ≈ f′(x) Δx. For a vector function y = f(x), the Jacobian J is the matrix of partial derivatives J_ij = ∂f_i/∂x_j, and Δy ≈ J Δx.
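When an analytic Jacobian is inconvenient, it can be approximated by finite differences. The helper below is an illustrative sketch (the function name and the step size eps are my choices):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate J[i, j] = d f_i / d x_j at the point x."""
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - fx) / eps
    return J

# Example: range-bearing measurement of a 2-D position.
h = lambda p: np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])
print(numerical_jacobian(h, np.array([3.0, 4.0])))
```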

Linearize the Non-Linear Let A be the Jacobian of f with respect to x. Let H be the Jacobian of h with respect to x. Then the Kalman Filter equations are almost the same as before!

EKF Update Equations Predictor step: x̂⁻_k = f(x̂_{k−1}, u_{k−1}) and P⁻_k = A P_{k−1} A^T + Q. Kalman gain: K_k = P⁻_k H^T (H P⁻_k H^T + R)⁻¹. Corrector step: x̂_k = x̂⁻_k + K_k (z_k − h(x̂⁻_k)) and P_k = (I − K_k H) P⁻_k. Intuitively, A P A^T propagates the state covariance through the linearized dynamics, and H P H^T projects the state covariance into measurement space, where it can be compared with the measurement noise R.
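As a closing sketch, the EKF step below follows the equations above, re-evaluating the Jacobians A and H at the current estimate each cycle (the function signatures are my choices, not from the slides):

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, jac_f, jac_h, Q, R):
    """One predict + correct cycle of the extended Kalman filter."""
    # Predictor: propagate the estimate through the non-linear model,
    # and the covariance through its linearization A.
    A = jac_f(x_hat, u)
    x_pred = f(x_hat, u)
    P_pred = A @ P @ A.T + Q
    # Gain and corrector, with h linearized as H at the prediction.
    H = jac_h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```

With f(x, u) = x + u, h(x) = x, and identity Jacobians, this reduces to the linear filter sketched earlier.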