Lecture 10: Observers and Kalman Filters


Lecture 10: Observers and Kalman Filters CS 344R: Robotics Benjamin Kuipers

Stochastic Models of an Uncertain World Actions are uncertain. Observations are uncertain. The noise terms εi ~ N(0, σi) are random variables.

Observers The state x is unobservable. The sense vector y provides noisy information about x. An observer is a process that uses the sensory history to estimate x. Then a control law can be written in terms of the estimate x̂ rather than the true state x.

Kalman Filter: Optimal Observer

Estimates and Uncertainty The estimate of x is described by a conditional probability density function, conditioned on the observations received so far.

Gaussian (Normal) Distribution Completely described by N(μ, σ): mean μ, standard deviation σ, variance σ².

The Central Limit Theorem The sum of many independent random variables with the same mean, but with otherwise arbitrary density functions, converges to a Gaussian density function. If a model omits many small unmodeled effects, then the resulting error should converge to a Gaussian density function.
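The convergence claim above is easy to check numerically. The sketch below (illustrative only, not part of the lecture) sums 12 independent Uniform(0, 1) draws per sample, so each sum should be approximately Gaussian with mean 6 and variance 1.

```python
import random
import statistics

# Each sample is the sum of 12 independent Uniform(0, 1) draws.
# Uniform(0, 1) has mean 1/2 and variance 1/12, so the sum has
# mean 6 and variance 1, and by the CLT is nearly Gaussian.
random.seed(0)
sums = [sum(random.random() for _ in range(12)) for _ in range(20000)]

mean = statistics.fmean(sums)
var = statistics.pvariance(sums)
print(round(mean, 2), round(var, 2))  # close to 6 and 1
```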

Estimating a Value Suppose there is a constant value x: distance to a wall, angle to a wall, etc. At time t1, observe value z1 with variance σ1². The optimal estimate is x̂(t1) = z1, with variance σ1².

A Second Observation At time t2, observe value z2 with variance σ2².

Merged Evidence

Update Mean and Variance Weighted average of the estimates, where the weights come from the variances (smaller variance = more certainty): x̂(t2) = [σ2²/(σ1² + σ2²)] z1 + [σ1²/(σ1² + σ2²)] z2, with 1/σ²(t2) = 1/σ1² + 1/σ2².
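The variance-weighted merge of two measurements can be sketched directly (the function name and arguments here are illustrative, not from the lecture):

```python
def fuse(z1, var1, z2, var2):
    """Variance-weighted average of two noisy measurements of the
    same constant x. The less noisy measurement gets more weight."""
    w1 = var2 / (var1 + var2)   # smaller var1 -> larger weight on z1
    w2 = var1 / (var1 + var2)
    x_hat = w1 * z1 + w2 * z2
    # The fused variance is smaller than either input variance.
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return x_hat, var

x_hat, var = fuse(10.0, 1.0, 12.0, 3.0)
print(x_hat, var)  # 10.5 0.75
```

Note that the first measurement (variance 1.0) pulls the merged estimate toward 10.0, and the fused variance 0.75 is below both inputs.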

From Weighted Average to Predictor-Corrector Rewriting the weighted average gives x̂(t2) = x̂(t1) + K(t2)[z2 − x̂(t1)], where K(t2) = σ1²/(σ1² + σ2²). This version can be applied “recursively”.

Predictor-Corrector Update the best estimate given new data: x̂(t2) = x̂(t1) + K(t2)[z2 − x̂(t1)]. Update the variance: σ²(t2) = σ²(t1) − K(t2)σ²(t1) = (1 − K(t2)) σ²(t1).
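The predictor-corrector form produces exactly the same answer as the weighted average, just organized around the innovation (z − x̂). A minimal sketch (names are illustrative):

```python
def correct(x_prev, var_prev, z, var_z):
    """Predictor-corrector form of the update: the gain K weighs
    the innovation (z - x_prev) against the prior estimate."""
    K = var_prev / (var_prev + var_z)
    x_new = x_prev + K * (z - x_prev)
    var_new = (1.0 - K) * var_prev   # certainty always increases
    return x_new, var_new

# Same numbers as the weighted-average example: identical result.
x_hat, var = correct(10.0, 1.0, 12.0, 3.0)
print(x_hat, var)  # 10.5 0.75
```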

Static to Dynamic Now suppose x changes according to a dynamical model dx/dt = u(t) + w(t), where u is the commanded motion and w ~ N(0, σw) is process noise.

Dynamic Prediction At t2 we know x̂(t2) and σ²(t2). At t3, after the change but before an observation: x̂(t3⁻) = x̂(t2) + u·[t3 − t2], with σ²(t3⁻) = σ²(t2) + σw²·[t3 − t2]. Next, we correct this prediction with the observation at time t3.

Dynamic Correction At time t3 we observe z3 with variance σ3². Combine the prediction with the observation: x̂(t3) = x̂(t3⁻) + K(t3)[z3 − x̂(t3⁻)], where K(t3) = σ²(t3⁻)/(σ²(t3⁻) + σ3²).
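The prediction and correction steps combine into one cycle. The sketch below assumes the 1-D model dx/dt = u + w from the slides; the function name, argument names, and sample numbers are illustrative:

```python
def kalman_step(x_hat, var, u, dt, var_w, z, var_z):
    """One predict-correct cycle of a 1-D Kalman filter for the
    model dx/dt = u + w, w ~ N(0, sqrt(var_w))."""
    # Predict: move the estimate with the model; uncertainty grows.
    x_pred = x_hat + u * dt
    var_pred = var + var_w * dt
    # Correct: blend the prediction with the observation z.
    K = var_pred / (var_pred + var_z)
    x_new = x_pred + K * (z - x_pred)
    var_new = (1.0 - K) * var_pred
    return x_new, var_new

# Starting from estimate 10.5 with variance 0.75, move at u = 1 for
# one time step, then observe z = 12 with variance 1.
x_hat, var = kalman_step(10.5, 0.75, u=1.0, dt=1.0, var_w=0.5, z=12.0, var_z=1.0)
print(x_hat, var)
```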

Qualitative Properties Suppose the measurement noise σ3² is large. Then K(t3) approaches 0, and the measurement is mostly ignored. Suppose the prediction noise σ²(t3⁻) is large. Then K(t3) approaches 1, and the measurement dominates the estimate.

Kalman Filter Takes a stream of observations and a dynamical model. At each step, it forms a weighted average between the prediction from the dynamical model and the correction from the observation. The Kalman gain K(t) is the weighting, based on the prediction variance and the measurement variance. With time, K(t) and the estimate variance σ²(t) tend to stabilize.
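The stabilization of K(t) is easy to see numerically: with constant process and measurement noise, repeated predict-correct cycles drive the gain to a fixed value regardless of the initial uncertainty. A small sketch (noise values are arbitrary illustrations):

```python
# Repeated predict-correct cycles with constant noise levels:
# the gain K settles to a fixed point.
var_w, var_z, dt = 0.5, 1.0, 1.0
var = 10.0                               # start very uncertain
gains = []
for _ in range(20):
    var_pred = var + var_w * dt          # predict: uncertainty grows
    K = var_pred / (var_pred + var_z)    # correct: gain from variances
    var = (1.0 - K) * var_pred           # posterior variance shrinks
    gains.append(K)
print(round(gains[0], 3), round(gains[-1], 3))
```

The first gain is near 1 (the filter trusts the measurement while uncertain); with these noise levels it converges to 0.5.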

Simplifications We have only discussed a one-dimensional system; most applications are higher-dimensional. We have assumed the state variable is directly observable; in general, sense data give only indirect evidence. We will discuss the more complex case next.