HCI/ComS 575X: Computational Perception Instructor: Alexander Stoytchev http://www.cs.iastate.edu/~alex/classes/2006_Spring_575X/
The Kalman Filter (part 3) HCI/ComS 575X: Computational Perception Iowa State University, SPRING 2006 Copyright © 2006, Alexander Stoytchev February 20, 2006
Brown and Hwang (1992) “Introduction to Random Signals and Applied Kalman Filtering” Ch 5: The Discrete Kalman Filter
Arthur Gelb, Joseph Kasper, Raymond Nash, Charles Price, Arthur Sutherland (1974) Applied Optimal Estimation. MIT Press.
Let’s Start With a Demo: a Matlab program written by John Burnett
A Simple Recursive Example. Problem statement: given the measurement sequence z_1, z_2, …, z_n, find the mean. [Brown and Hwang (1992)]
First Approach 1. Make the first measurement z_1. Store z_1 and estimate the mean as µ_1 = z_1. 2. Make the second measurement z_2. Store z_1 along with z_2 and estimate the mean as µ_2 = (z_1 + z_2)/2. [Brown and Hwang (1992)]
First Approach (cont’d) 3. Make the third measurement z_3. Store z_3 along with z_1 and z_2 and estimate the mean as µ_3 = (z_1 + z_2 + z_3)/3. [Brown and Hwang (1992)]
First Approach (cont’d) n. Make the n-th measurement z_n. Store z_n along with z_1, z_2, …, z_{n-1} and estimate the mean as µ_n = (z_1 + z_2 + … + z_n)/n. [Brown and Hwang (1992)]
Second Approach 1. Make the first measurement z_1. Compute the mean estimate as µ_1 = z_1. Store µ_1 and discard z_1. [Brown and Hwang (1992)]
Second Approach (cont’d) 2. Make the second measurement z_2. Compute the estimate of the mean as a weighted sum of the previous estimate µ_1 and the current measurement z_2: µ_2 = 1/2 µ_1 + 1/2 z_2. Store µ_2 and discard z_2 and µ_1. [Brown and Hwang (1992)]
Second Approach (cont’d) 3. Make the third measurement z_3. Compute the estimate of the mean as a weighted sum of the previous estimate µ_2 and the current measurement z_3: µ_3 = 2/3 µ_2 + 1/3 z_3. Store µ_3 and discard z_3 and µ_2. [Brown and Hwang (1992)]
Second Approach (cont’d) n. Make the n-th measurement z_n. Compute the estimate of the mean as a weighted sum of the previous estimate µ_{n-1} and the current measurement z_n: µ_n = (n-1)/n µ_{n-1} + 1/n z_n. Store µ_n and discard z_n and µ_{n-1}. [Brown and Hwang (1992)]
Comparison: Batch Method vs. Recursive Method
Analysis. The second procedure gives the same result as the first. It uses the result from the previous step to obtain the estimate at the current step. The difference is that it does not need to keep the entire measurement sequence in memory. [Brown and Hwang (1992)]
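The equivalence of the two approaches can be checked in a few lines of Python (a hypothetical sketch for illustration, not the Matlab demo mentioned earlier):

```python
import random

def batch_mean(zs):
    # First approach: keep every measurement and average them all.
    return sum(zs) / len(zs)

def recursive_mean(zs):
    # Second approach: keep only the previous estimate mu.
    mu = 0.0
    for n, z in enumerate(zs, start=1):
        mu = (n - 1) / n * mu + z / n
    return mu

random.seed(0)
zs = [random.gauss(5.0, 2.0) for _ in range(1000)]
# Same answer, but the recursive version stores a single number.
assert abs(batch_mean(zs) - recursive_mean(zs)) < 1e-9
```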
Second Approach (rewrite the general formula): µ_n = (n-1)/n µ_{n-1} + 1/n z_n
Second Approach (rewrite the general formula): µ_n = (n-1)/n µ_{n-1} + 1/n z_n, which rearranges to µ_n = µ_{n-1} + 1/n (z_n - µ_{n-1})
Second Approach (rewrite the general formula): µ_n = µ_{n-1} + 1/n (z_n - µ_{n-1}), i.e., new estimate = old estimate + gain factor × (difference between the new reading and the old estimate)
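The rewritten "old estimate plus gain times innovation" form computes exactly the same mean; a short Python sketch (a hypothetical illustration, with `innovation_mean` as an invented name):

```python
def innovation_mean(zs):
    # mu_n = mu_{n-1} + (1/n) * (z_n - mu_{n-1}): old estimate plus
    # gain factor times (new reading minus old estimate).
    mu = 0.0
    for n, z in enumerate(zs, start=1):
        mu = mu + (z - mu) / n
    return mu

zs = [3.0, 7.0, 5.0, 9.0]
# Matches the ordinary batch average of the same sequence.
assert abs(innovation_mean(zs) - sum(zs) / len(zs)) < 1e-12
```

This is the same structural pattern the Kalman filter update uses: correct the prediction by a gain times the measurement residual.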
How should we combine the two measurements z_1 and z_2, with standard deviations σ_z1 and σ_z2? [Maybeck (1979)]
Calculating the new mean: µ = S_1 z_1 + S_2 z_2, with Scaling Factor 1 S_1 = σ_z2² / (σ_z1² + σ_z2²) and Scaling Factor 2 S_2 = σ_z1² / (σ_z1² + σ_z2²)
Calculating the new mean (cont’d): note that Scaling Factor 1, which multiplies z_1, involves σ_z2². Why is the result not simply z_1?
Calculating the new variance of two measurements with standard deviations σ_z1 and σ_z2 [Maybeck (1979)]
Calculating the new variance, again using Scaling Factor 1 and Scaling Factor 2
The scaling factors must be squared! For independent measurements, Var(S_1 z_1 + S_2 z_2) = S_1² Var(z_1) + S_2² Var(z_2), so the new variance is σ² = S_1² σ_z1² + S_2² σ_z2²
The new variance is σ² = σ_z1² σ_z2² / (σ_z1² + σ_z2²), equivalently 1/σ² = 1/σ_z1² + 1/σ_z2². Note that σ² is smaller than both σ_z1² and σ_z2².
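Assuming the two variances are known, the fused mean and variance can be sketched in Python (`fuse` is a hypothetical helper name, not from the slides):

```python
def fuse(z1, var1, z2, var2):
    # Inverse-variance weighting of two independent measurements.
    s1 = var2 / (var1 + var2)   # Scaling Factor 1
    s2 = var1 / (var1 + var2)   # Scaling Factor 2
    mu = s1 * z1 + s2 * z2
    # The scaling factors must be squared when combining variances.
    var = s1**2 * var1 + s2**2 * var2
    return mu, var

mu, var = fuse(10.0, 4.0, 14.0, 1.0)
# The fused variance equals var1*var2/(var1+var2),
# smaller than either input variance.
assert abs(var - 4.0 * 1.0 / 5.0) < 1e-12
assert var < 1.0
```

Notice the fused mean lands closer to the less noisy measurement, which is exactly what the scaling factors are designed to do.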
What makes these scaling factors special? Are there other ways to combine the two measurements? These factors minimize the mean squared error between the estimate and the true value of x; they are optimal in the least-squares sense.
Minimize the error: choose the scaling factors so that the expected squared estimation error is as small as possible.
What is the minimum value?
Finding the Minimum Value: Y = 9x² - 50x + 50. Setting dY/dx = 18x - 50 = 0, the minimum is obtained at x_min = 50/18 ≈ 2.7778. The minimum value is Y(x_min) = 9(50/18)² - 50(50/18) + 50 = -175/9 ≈ -19.4444.
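The calculus result above can be double-checked numerically (a minimal Python sketch; the grid search is just an illustration, not part of the slides):

```python
# Y = 9x^2 - 50x + 50; setting dY/dx = 18x - 50 = 0 gives x = 50/18.
def Y(x):
    return 9 * x**2 - 50 * x + 50

x_min = 50 / 18
# A coarse grid search over [0, 6) finds no point below the critical point,
# confirming that x_min is the minimizer of this convex parabola.
grid = [x / 1000 for x in range(0, 6000)]
assert Y(x_min) <= min(Y(x) for x in grid)
assert abs(Y(x_min) - (-175 / 9)) < 1e-9
```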
Start with two measurements z_1 = x + v_1 and z_2 = x + v_2, where v_1 and v_2 represent zero-mean noise with variances σ_z1² and σ_z2²
Formula for the estimation error. The new estimate is x̂ = S_1 z_1 + S_2 z_2. The error is e = x̂ - x = S_1(x + v_1) + S_2(x + v_2) - x
Expected value of the error: E[e] = (S_1 + S_2 - 1)x, since E[v_1] = E[v_2] = 0. If the estimate is unbiased, E[e] = 0 should hold, which requires S_1 + S_2 = 1
Find the Mean Square Error: with S_1 + S_2 = 1 the error reduces to e = S_1 v_1 + S_2 v_2, so E[e²] = ?
Mean Square Error: E[e²] = S_1² E[v_1²] + 2 S_1 S_2 E[v_1 v_2] + S_2² E[v_2²] = S_1² σ_z1² + S_2² σ_z2², since the noise terms are independent (E[v_1 v_2] = 0)
Minimize the mean square error: substitute S_2 = 1 - S_1 and set the derivative with respect to S_1 to zero: d/dS_1 [S_1² σ_z1² + (1 - S_1)² σ_z2²] = 2 S_1 σ_z1² - 2(1 - S_1) σ_z2² = 0
Finding S_1: solving gives S_1 (σ_z1² + σ_z2²) = σ_z2². Therefore S_1 = σ_z2² / (σ_z1² + σ_z2²)
Finding S_2: from S_2 = 1 - S_1 we get S_2 = σ_z1² / (σ_z1² + σ_z2²)
Finally we get what we wanted: x̂ = [σ_z2² / (σ_z1² + σ_z2²)] z_1 + [σ_z1² / (σ_z1² + σ_z2²)] z_2
Finding the new variance: σ² = S_1² σ_z1² + S_2² σ_z2² = (σ_z2⁴ σ_z1² + σ_z1⁴ σ_z2²) / (σ_z1² + σ_z2²)²
Formula for the new variance: σ² = σ_z1² σ_z2² / (σ_z1² + σ_z2²), or equivalently 1/σ² = 1/σ_z1² + 1/σ_z2²
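The derivation above can be verified numerically: with S_2 = 1 - S_1 (unbiasedness), a brute-force search over S_1 should land on the closed-form optimum. A minimal Python sketch, using hypothetical example variances:

```python
var1, var2 = 4.0, 1.0  # example values for sigma_z1^2 and sigma_z2^2

def mse(s1):
    # Unbiasedness forces s2 = 1 - s1; independent zero-mean noises
    # give E[e^2] = s1^2 * var1 + s2^2 * var2.
    s2 = 1 - s1
    return s1**2 * var1 + s2**2 * var2

# Brute-force minimization over a fine grid of s1 values in [0, 1].
best = min((x / 10000 for x in range(10001)), key=mse)
s1_opt = var2 / (var1 + var2)  # closed-form optimum from the derivation
assert abs(best - s1_opt) < 1e-3
# The minimal MSE equals the fused-variance formula var1*var2/(var1+var2).
assert abs(mse(s1_opt) - var1 * var2 / (var1 + var2)) < 1e-12
```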
New Topic: Particle Filters
Michael Isard and Andrew Blake (1998) "CONDENSATION -- conditional density propagation for visual tracking", International Journal of Computer Vision, 29(1), 5-28.
Movies from the CONDENSATION Web Page
More about this next time
THE END