HCI/ComS 575X: Computational Perception Instructor: Alexander Stoytchev


The Kalman Filter (part 2) HCI/ComS 575X: Computational Perception Iowa State University, SPRING 2006 Copyright © 2006, Alexander Stoytchev February 15, 2006

Brown and Hwang (1992) “Introduction to Random Signals and Applied Kalman Filtering” Ch 5: The Discrete Kalman Filter

Maybeck, Peter S. (1979) Chapter 1 in "Stochastic Models, Estimation, and Control", Mathematics in Science and Engineering Series, Academic Press.

A Simple Recursive Example. Problem Statement: Given the measurement sequence z_1, z_2, …, z_n, find the mean. [Brown and Hwang (1992)]

First Approach. 1. Make the first measurement z_1. Store z_1 and estimate the mean as μ_1 = z_1. 2. Make the second measurement z_2. Store z_1 along with z_2 and estimate the mean as μ_2 = (z_1 + z_2)/2. [Brown and Hwang (1992)]

First Approach (cont'd). 3. Make the third measurement z_3. Store z_3 along with z_1 and z_2 and estimate the mean as μ_3 = (z_1 + z_2 + z_3)/3. [Brown and Hwang (1992)]

First Approach (cont'd). n. Make the n-th measurement z_n. Store z_n along with z_1, z_2, …, z_{n-1} and estimate the mean as μ_n = (z_1 + z_2 + … + z_n)/n. [Brown and Hwang (1992)]

Second Approach. 1. Make the first measurement z_1. Compute the mean estimate as μ_1 = z_1. Store μ_1 and discard z_1. [Brown and Hwang (1992)]

Second Approach (cont'd). 2. Make the second measurement z_2. Compute the estimate of the mean as a weighted sum of the previous estimate μ_1 and the current measurement z_2: μ_2 = (1/2)·μ_1 + (1/2)·z_2. Store μ_2 and discard z_2 and μ_1. [Brown and Hwang (1992)]

Second Approach (cont'd). 3. Make the third measurement z_3. Compute the estimate of the mean as a weighted sum of the previous estimate μ_2 and the current measurement z_3: μ_3 = (2/3)·μ_2 + (1/3)·z_3. Store μ_3 and discard z_3 and μ_2. [Brown and Hwang (1992)]

Second Approach (cont'd). n. Make the n-th measurement z_n. Compute the estimate of the mean as a weighted sum of the previous estimate μ_{n-1} and the current measurement z_n: μ_n = ((n-1)/n)·μ_{n-1} + (1/n)·z_n. Store μ_n and discard z_n and μ_{n-1}. [Brown and Hwang (1992)]

Analysis. The second procedure gives the same result as the first procedure. It uses the result from the previous step to help obtain an estimate at the current step. The difference is that it does not need to keep the entire measurement sequence in memory. [Brown and Hwang (1992)]
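
As a quick illustration (not part of the original slides), here is a short Python sketch showing that the two approaches produce the same mean; the function names are made up for this example:

    def batch_mean(measurements):
        # First approach: store all measurements and average them at the end.
        return sum(measurements) / len(measurements)

    def recursive_mean(measurements):
        # Second approach: keep only the previous estimate and the count.
        mu = 0.0
        for n, z in enumerate(measurements, start=1):
            # mu_n = ((n-1)/n) * mu_{n-1} + (1/n) * z_n
            mu = ((n - 1) / n) * mu + (1.0 / n) * z
        return mu

    z = [2.0, 4.0, 9.0, 5.0]
    print(batch_mean(z), recursive_mean(z))  # both print 5.0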

A simple example using diagrams

Conditional density of position based on measured value of z_1 [Maybeck (1979)] (figure labels: measured position, measurement uncertainty)

Conditional density of position based on measurement of z_2 alone [Maybeck (1979)] (figure labels: measured position 2, uncertainty 2)

Conditional density of position based on data z_1 and z_2 [Maybeck (1979)] (figure labels: position estimate, uncertainty estimate)

Propagation of the conditional density [Maybeck (1979)] (figure labels: movement vector, expected position just prior to taking measurement 3)

Propagation of the conditional density (figure labels: z_3, σ_x(t_3), measured position 3, uncertainty 3)

Updating the conditional density after the third measurement (figure labels: z_3, σ_x(t_3), position estimate x(t_3), position uncertainty)

Questions?

Now let's do the same thing… but this time we'll use math.

How should we combine the two measurements? [Maybeck (1979)] (figure: the two measurement Gaussians, with standard deviations σ_z1 and σ_z2)

Calculating the new mean: the slide shows the new mean as a weighted sum of the two measurements, with the two weights labeled Scaling Factor 1 and Scaling Factor 2. Why is this not z_1?
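
The equation itself does not survive in the transcript. For reference, the standard combination from Maybeck (1979) that these slides illustrate is μ = [σ_z2²/(σ_z1² + σ_z2²)]·z_1 + [σ_z1²/(σ_z1² + σ_z2²)]·z_2, so each measurement is weighted by the variance of the other one: the noisier measurement receives the smaller weight.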

Calculating the new variance [Maybeck (1979)] (figure: the same two Gaussians, with standard deviations σ_z1 and σ_z2)

Calculating the new variance: the slides work through the variance of the weighted sum, again with the two weights labeled Scaling Factor 1 and Scaling Factor 2.

Why is this result different from the one given in the paper?

Remember the Gaussian Properties?

If z_1 and z_2 are independent Gaussian random variables and the estimate is a weighted sum a·z_1 + b·z_2, then its variance is a²·σ_z1² + b²·σ_z2². Note that each coefficient enters as a², not a.

The scaling factors must be squared! (The slide shows the corrected variance expression with Scaling Factor 1 and Scaling Factor 2 squared.)

Therefore, the new variance is (equation on slide). Try to derive this on your own.
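
For reference (the equation is not preserved in the transcript), the standard result is 1/σ_new² = 1/σ_z1² + 1/σ_z2², i.e., σ_new² = (σ_z1²·σ_z2²)/(σ_z1² + σ_z2²), which is smaller than either individual variance.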

Another Way to Express The New Position [Maybeck (1979)]

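
The equations on this slide are missing from the transcript. The standard alternative form from Maybeck (1979) expresses the combined estimate as the first measurement plus a gain times the difference between the second measurement and the first: x(t_2) = z_1 + K·(z_2 - z_1), with K = σ_z1²/(σ_z1² + σ_z2²).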

The equation for the variance can also be rewritten as [Maybeck (1979)]
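
The rewritten form is not preserved in the transcript; in terms of the gain K above, it is presumably σ_x²(t_2) = σ_z1² - K·σ_z1² = (1 - K)·σ_z1², which is algebraically the same as the result on the previous slide.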

Adding Movement [Maybeck (1979)]

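
The equations for this step are also missing from the transcript. A hedged reconstruction of Maybeck's one-dimensional example: between measurements the estimate is propagated with the known motion, x(t_3)⁻ = x(t_2) + u·(t_3 - t_2), while the uncertainty grows, σ_x²(t_3)⁻ = σ_x²(t_2) + σ_w²·(t_3 - t_2), where u is the nominal velocity and σ_w² is the variance added per unit time by the uncertain dynamics.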

Properties of K: If the measurement noise is large, K is small; in the limit of very large measurement noise, K approaches 0 and the new measurement contributes almost nothing. [Maybeck (1979)]

Another Example

A Simple Example: Consider a ship sailing east with a perfect compass, trying to estimate its position. You estimate the position x from the stars as z_1 = 100, with a precision of σ_z1 = 4 miles.

A Simple Example (cont'd): Along comes a more experienced navigator, and she takes her own sighting z_2. She estimates the position as x = z_2 = 125, with a precision of σ_z2 = 3 miles. How do you merge her estimate with your own?

A Simple Example (cont'd): Combining the two sightings gives x_2 = 116.

A Simple Example (cont'd): With the distributions being Gaussian, the best estimate for the state is the mean of the distribution, so the combined estimate is a weighted sum of the two sightings; alternately, it can be written as the previous estimate plus a correction term, K_t·(z_2 - x_1), where K_t is referred to as the Kalman gain and must be computed at each time step.
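
A minimal Python sketch of this fusion step (not from the original slides; the numbers follow the transcript and the function name fuse is illustrative):

    def fuse(mean1, var1, mean2, var2):
        # Combine two independent Gaussian estimates of the same quantity.
        K = var1 / (var1 + var2)            # Kalman gain
        mean = mean1 + K * (mean2 - mean1)  # previous estimate plus correction term
        var = (1.0 - K) * var1              # fused variance is smaller than either input
        return mean, var

    # Your sighting: z1 = 100 with sigma = 4; the navigator's: z2 = 125 with sigma = 3
    mean2, var2 = fuse(100.0, 4.0**2, 125.0, 3.0**2)
    print(mean2, var2)  # about 116.0 and 5.76 (sigma of roughly 2.4 miles)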

A Simple Example (cont'd): OK, now you fall asleep on your watch. You wake up after 2 hours, and you now have to re-estimate your position. Let the velocity of the boat be nominally 20 miles/hour, but with a variance of σ_w² = 4 miles²/hour. What is the best estimate of your current position? (figure: x_2 = 116, x_3⁻ = ?)

A Simple Example (cont'd): The net effect is that the Gaussian is translated by a distance of 20 miles/hour × 2 hours = 40 miles, giving x_3⁻ = 116 + 40 = 156, and the variance of the distribution is increased to account for the uncertainty in the dynamics.

A Simple Example (cont'd): OK, this is not a very accurate estimate. So, since you've had your nap, you decide to take another measurement, and you get z_3 = 165 miles. Using the same update procedure as in the first update, we obtain the new estimate, and so on…
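
Continuing the sketch above, here is the predict-then-correct cycle for the nap and the new sighting. The transcript does not give the precision of z_3, so var_z3 below is an assumed value:

    velocity = 20.0   # miles/hour (nominal)
    var_w = 4.0       # miles^2/hour, variance added to the position per hour
    dt = 2.0          # hours spent asleep

    # Time update (prediction): translate the mean, inflate the variance
    mean_pred = 116.0 + velocity * dt      # 156.0
    var_pred = 5.76 + var_w * dt           # 13.76

    # Measurement update (correction) with the new sighting z3 = 165
    z3, var_z3 = 165.0, 4.0**2             # var_z3 is an assumption, not from the slides
    K = var_pred / (var_pred + var_z3)
    mean_new = mean_pred + K * (z3 - mean_pred)
    var_new = (1.0 - K) * var_pred
    print(mean_pred, var_pred, mean_new, var_new)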

The Predictor-Corrector Approach: In this example, the prediction came from using knowledge of the vehicle dynamics to estimate its change in position. An analogy with a robot would be integrating information from the robot kinematics (i.e., you give it desired [x, y, α] velocities for a time Δt) to estimate the change in position. The correction is accomplished by making exteroceptive observations and then fusing them with your current estimate. This is akin to updating position estimates using landmark information, etc. In practice, the prediction rate is typically much higher than the correction rate.
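
As a hypothetical illustration of that last point (none of these numbers come from the slides), a predictor-corrector loop might run the prediction every step and apply a correction only when a measurement arrives:

    import random

    def predict(mean, var, u, dt, var_w):
        # Time update: propagate the estimate with the motion model, inflate uncertainty
        return mean + u * dt, var + var_w * dt

    def correct(mean, var, z, var_z):
        # Measurement update: fuse the prediction with an exteroceptive observation
        K = var / (var + var_z)
        return mean + K * (z - mean), (1.0 - K) * var

    true_pos, mean, var = 0.0, 0.0, 1.0
    for step in range(100):
        true_pos += 1.0 * 0.1                         # simulated true motion
        mean, var = predict(mean, var, u=1.0, dt=0.1, var_w=0.01)
        if step % 10 == 0:                            # corrections arrive 10x less often
            z = true_pos + random.gauss(0.0, 0.5)     # simulated noisy landmark fix
            mean, var = correct(mean, var, z, var_z=0.25)
    print(mean, var)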

Kalman Filter Diagram [Brown and Hwang (1992)]

The process to be estimated
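
The equations are not preserved in the transcript; in a commonly used discrete-time notation (cf. Brown and Hwang, 1992), the process to be estimated is a linear stochastic difference equation x_k = A·x_{k-1} + B·u_{k-1} + w_{k-1}, observed through z_k = H·x_k + v_k, with process noise w ~ N(0, Q) and measurement noise v ~ N(0, R). Exact symbols on the original slide may differ.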

THE END