HCI / CprE / ComS 575: Computational Perception Instructor: Alexander Stoytchev

1 HCI / CprE / ComS 575: Computational Perception Instructor: Alexander Stoytchev http://www.ece.iastate.edu/~alexs/classes/2010_Spring_575/

2 The Kalman Filter (part 1) HCI/ComS 575X: Computational Perception Iowa State University Copyright © Alexander Stoytchev

3 Lecture Plan: Administrative Stuff; More details on Project Proposals; One last example with Color & Movement; Start with the Kalman Filter

4 Administrative Stuff: Wiki; HW2 due this Thursday; HW3 will go out this week as well

5 Readings for Today: Brown and Hwang (1992), "Introduction to Random Signals and Applied Kalman Filtering", Ch. 5: The Discrete Kalman Filter

6 Maybeck, Peter S. (1979) Chapter 1 in "Stochastic Models, Estimation, and Control", Mathematics in Science and Engineering Series, Academic Press.

7 Arthur Gelb, Joseph Kasper, Raymond Nash, Charles Price, Arthur Sutherland (1974) Applied Optimal Estimation, MIT Press.

8 Rudolf Emil Kalman [http://www.cs.unc.edu/~welch/kalman/kalmanBiblio.html]

9 Definition: A Kalman filter is simply an optimal recursive data processing algorithm. Under some assumptions the Kalman filter is optimal with respect to virtually any criterion that makes sense.

10 Definition “The Kalman filter incorporates all information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest.” [Maybeck (1979)]

11 Why do we need a filter? No mathematical model of a real system is perfect; there are real-world disturbances; sensors are imperfect.

12 Application: Radar Tracking

13 Application: Lunar Landing

14

15 Application: Missile Tracking

16 Application: Sailing

17 Application: Robot Navigation

18 Application: Other Tracking

19 Application: Head Tracking

20 Face & Hand Tracking

21 A Simple Recursive Example. Problem statement: given the measurement sequence z1, z2, …, zn, find the mean. [Brown and Hwang (1992)]

22 First Approach. 1. Make the first measurement z1. Store z1 and estimate the mean as µ1 = z1. 2. Make the second measurement z2. Store z1 along with z2 and estimate the mean as µ2 = (z1 + z2)/2. [Brown and Hwang (1992)]

23 First Approach (cont'd). 3. Make the third measurement z3. Store z3 along with z1 and z2 and estimate the mean as µ3 = (z1 + z2 + z3)/3. [Brown and Hwang (1992)]

24 First Approach (cont'd). n. Make the n-th measurement zn. Store zn along with z1, z2, …, zn-1 and estimate the mean as µn = (z1 + z2 + … + zn)/n. [Brown and Hwang (1992)]

25 Second Approach. 1. Make the first measurement z1. Compute the mean estimate as µ1 = z1. Store µ1 and discard z1. [Brown and Hwang (1992)]

26 Second Approach (cont'd). 2. Make the second measurement z2. Compute the estimate of the mean as a weighted sum of the previous estimate µ1 and the current measurement z2: µ2 = (1/2)µ1 + (1/2)z2. Store µ2 and discard z2 and µ1. [Brown and Hwang (1992)]

27 Second Approach (cont'd). 3. Make the third measurement z3. Compute the estimate of the mean as a weighted sum of the previous estimate µ2 and the current measurement z3: µ3 = (2/3)µ2 + (1/3)z3. Store µ3 and discard z3 and µ2. [Brown and Hwang (1992)]

28 Second Approach (cont'd). n. Make the n-th measurement zn. Compute the estimate of the mean as a weighted sum of the previous estimate µn-1 and the current measurement zn: µn = ((n-1)/n)µn-1 + (1/n)zn. Store µn and discard zn and µn-1. [Brown and Hwang (1992)]

29 Comparison: Batch Method vs. Recursive Method

30 Analysis The second procedure gives the same result as the first procedure. It uses the result for the previous step to help obtain an estimate at the current step. The difference is that it does not need to keep the sequence in memory. [Brown and Hwang (1992)]

31 Second Approach (rewrite the general formula): µn = ((n-1)/n)µn-1 + (1/n)zn = (n/n)µn-1 - (1/n)µn-1 + (1/n)zn

32 Second Approach (rewrite the general formula): µn = ((n-1)/n)µn-1 + (1/n)zn = (n/n)µn-1 - (1/n)µn-1 + (1/n)zn = µn-1 + (1/n)(zn - µn-1). Here µn-1 is the old estimate, (zn - µn-1) is the difference between the new reading and the old estimate, and 1/n is the gain factor.

33 Second Approach (rewrite the general formula): new estimate = old estimate + gain factor × (difference between new reading and old estimate).
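To make the batch-versus-recursive comparison concrete, here is a minimal Python sketch (added to the transcript, not from the slides; the function names are illustrative) showing that the running update µn = µn-1 + (1/n)(zn - µn-1) reproduces the batch mean without storing the whole sequence:

def batch_mean(measurements):
    # Batch method: keep every measurement, then average.
    return sum(measurements) / len(measurements)

def recursive_mean(measurements):
    # Recursive method: keep only the previous estimate and a counter.
    estimate = 0.0
    for n, z in enumerate(measurements, start=1):
        # new estimate = old estimate + gain * (new reading - old estimate)
        estimate = estimate + (1.0 / n) * (z - estimate)
    return estimate

z = [2.0, 4.0, 9.0, 5.0]
print(batch_mean(z), recursive_mean(z))  # both print 5.0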

34 Gaussian Properties

35 The Gaussian Function

36 Gaussian pdf

37 Properties If and Then

38 pdf for

39 Properties

40 Summation and Subtraction
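The equations on slides 34-40 are figures and did not survive the transcript. For reference, a sketch of the standard Gaussian properties the slides presumably summarize (my reconstruction, not the transcribed text): if x1 ~ N(µ1, σ1²) and x2 ~ N(µ2, σ2²) are independent, then x1 + x2 ~ N(µ1 + µ2, σ1² + σ2²) and x1 - x2 ~ N(µ1 - µ2, σ1² + σ2²); more generally, a1·x1 + a2·x2 ~ N(a1·µ1 + a2·µ2, a1²σ1² + a2²σ2²), so the coefficients enter the variance squared.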

41 A simple example using diagrams

42 Conditional density of position based on measured value of z1 [Maybeck (1979)]

43 Conditional density of position based on measured value of z1 [Maybeck (1979)] (figure annotations: measured position, uncertainty)

44 Conditional density of position based on measurement of z2 alone [Maybeck (1979)]

45 Conditional density of position based on measurement of z2 alone [Maybeck (1979)] (figure annotations: measured position 2, uncertainty 2)

46 Conditional density of position based on data z1 and z2 [Maybeck (1979)] (figure annotations: position estimate, uncertainty estimate)

47 Propagation of the conditional density [Maybeck (1979)]

48 Propagation of the conditional density [Maybeck (1979)] (figure annotations: movement vector, expected position just prior to taking measurement 3)

49 Propagation of the conditional density [Maybeck (1979)] (figure annotations: movement vector, expected position just prior to taking measurement 3)

50 Propagation of the conditional density (figure annotations: z3, σx(t3), measured position 3, uncertainty 3)

51 Updating the conditional density after the third measurement (figure annotations: z3, σx(t3), position uncertainty, position estimate x(t3))

52

53 Questions?

54 Now let's do the same thing… but this time we'll use math

55 How should we combine the two measurements? [Maybeck (1979)] (figure annotations: σz1, σz2)

56 Calculating the new mean Scaling Factor 1 Scaling Factor 2

57 Calculating the new mean Scaling Factor 1 Scaling Factor 2

58 Calculating the new mean. Scaling Factor 1, Scaling Factor 2. Why is this not z1?

59 Calculating the new variance [Maybeck (1979)] (figure annotations: σz1, σz2)

60 Calculating the new variance Scaling Factor 1 Scaling Factor 2

61 Calculating the new variance Scaling Factor 1 Scaling Factor 2

62 Calculating the new variance Scaling Factor 1 Scaling Factor 2

63 Calculating the new variance

64

65

66 Why is this result different from the one given in the paper?

67 Remember the Gaussian Properties?

68 If … and …, then … (equations not transcribed; the slide's annotation points out that the exponent here is a 2, i.e., the term is squared, not to the first power).

69 The scaling factors must be squared! Scaling Factor 1 Scaling Factor 2

70 The scaling factors must be squared! Scaling Factor 1 Scaling Factor 2

71 Therefore the new variance is … (equation not transcribed; a derivation sketch follows below). Try to derive this on your own.
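A derivation sketch (added for the transcript; the slide equation itself is a figure): with the squared scaling factors from the previous slide, σnew² = (σ2²/(σ1² + σ2²))²·σ1² + (σ1²/(σ1² + σ2²))²·σ2² = σ1²σ2²(σ2² + σ1²)/(σ1² + σ2²)² = σ1²σ2²/(σ1² + σ2²). Equivalently 1/σnew² = 1/σ1² + 1/σ2², so the combined uncertainty is always smaller than either individual uncertainty.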

72 Another Way to Express The New Position [Maybeck (1979)]

73 Another Way to Express The New Position [Maybeck (1979)]

74 Another Way to Express The New Position [Maybeck (1979)]

75 The equation for the variance can also be rewritten as [Maybeck (1979)]
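The rearranged equations on slides 72-75 are also figures; a sketch of the gain form they correspond to (reconstructed algebraically from the fused mean and variance above, consistent with Maybeck (1979), Ch. 1, rather than transcribed): the new estimate equals x1 + K(z2 - x1), where x1 is the estimate from the first measurement (here x1 = z1) and K = σ1²/(σ1² + σ2²); the new variance can then be rewritten as σnew² = σ1² - K·σ1² = (1 - K)σ1².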

76 Adding Movement [Maybeck (1979)]

77 Adding Movement [Maybeck (1979)]

78 Adding Movement [Maybeck (1979)]
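The "Adding Movement" equations are likewise figures. A sketch of the propagation step they describe, under the assumption (as in Maybeck's example) that the state moves with nominal velocity u and is driven by process noise of variance σw² per unit time: x⁻(t3) = x(t2) + u·(t3 - t2) and σ⁻²(t3) = σ²(t2) + σw²·(t3 - t2); that is, the mean is translated by the commanded motion and the variance grows while no new measurements arrive.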

79 Properties of K: if the measurement noise is large, K is small (K → 0), so the new measurement is given little weight. [Maybeck (1979)]

80 The Kalman Filter (part 2) HCI/ComS 575X: Computational Perception Iowa State University Copyright © Alexander Stoytchev

81 Let's Start With a Demo: a Matlab program written by John Burnett

82 Another Example

83 A Simple Example: Consider a ship sailing east with a perfect compass, trying to estimate its position. You estimate the position x from the stars as z1 = 100 with a precision of σx = 4 miles. [www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]

84 A Simple Example (cont'd): Along comes a more experienced navigator, and she takes her own sighting z2. She estimates the position as x = z2 = 125 with a precision of σx = 3 miles. How do you merge her estimate with your own? [www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]

85 A Simple Example (cont'd): the combined estimate is x2 = 116. [www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]

86 A Simple Example (cont'd): With the distributions being Gaussian, the best estimate for the state is the mean of the distribution, so… or, alternately, … where Kt is referred to as the Kalman gain and must be computed at each time step; the term it multiplies is the correction term (the equations are in the slide figure). [www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]

87 A Simple Example (cont'd): OK, now you fall asleep on your watch. You wake up after 2 hours, and you now have to re-estimate your position. Let the velocity of the boat be nominally 20 miles/hour, but with a variance of σw² = 4 miles²/hour. What is the best estimate of your current position? (Figure: x2 = 116, x3⁻ = ?) [www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]

88 A Simple Example (cont'd): The net effect is that the Gaussian is translated by a distance (20 miles/hour × 2 hours = 40 miles) and the variance of the distribution is increased to account for the uncertainty in dynamics. (Figure: x2 = 116, x3⁻ = 156.) [www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]

89 A Simple Example (cont'd): OK, this is not a very accurate estimate. So, since you've had your nap, you decide to take another measurement and you get z3 = 165 miles. Using the same update procedure as the first update, we obtain …, and so on (a numeric sketch of the whole example follows below). [www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]
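Here is a short Python walkthrough of the sailing example (added to the transcript; the numbers follow the slides, the function and variable names are mine, and the precision of the third sighting is an assumption, since the slide does not give it):

# Measurement update: fuse the current estimate with a new sighting
def update(x, var, z, var_z):
    K = var / (var + var_z)              # Kalman gain
    return x + K * (z - x), (1 - K) * var

# Prediction: translate the mean, grow the variance
def predict(x, var, velocity, dt, var_w):
    return x + velocity * dt, var + var_w * dt

x, var = 100.0, 4.0 ** 2                     # first sighting: z1 = 100, sigma = 4 miles
x, var = update(x, var, 125.0, 3.0 ** 2)     # second sighting: z2 = 125, sigma = 3 miles
print(x, var)                                # 116.0 miles, variance 5.76
x, var = predict(x, var, 20.0, 2.0, 4.0)     # asleep 2 h at ~20 mph, sigma_w^2 = 4
print(x, var)                                # 156.0 miles, variance 13.76
x, var = update(x, var, 165.0, 4.0 ** 2)     # third sighting: z3 = 165 (sigma assumed 4)
print(x, var)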

90 The Predictor-Corrector Approach: In this example, prediction came from using knowledge of the vehicle dynamics to estimate its change in position. An analogy with a robot would be integrating information from the robot kinematics (i.e., you give it desired [x, y, α] velocities for a time Δt) to estimate changes in position. The correction is accomplished by making exteroceptive observations and then fusing them with your current estimate. This is akin to updating position estimates using landmark information, etc. In practice, the prediction rate is typically much higher than the correction rate. [www.cse.lehigh.edu/~spletzer/cse398_Spring05/lec011_Localization2.ppt]

91

92 Calculating the new mean Scaling Factor 1 Scaling Factor 2

93 Calculating the new variance Scaling Factor 1 Scaling Factor 2

94 What makes these scaling factors special? Are there other ways to combine the two measurements? They minimize the error between the prediction and the true value of X. They are optimal in the least-squares sense.

95 Minimize the error

96 What is the minimum value? [http://home.ubalt.edu/ntsbarsh/Business-stat/otherapplets/function.gif]

97 What is the minimum value? [http://home.ubalt.edu/ntsbarsh/Business-stat/otherapplets/function.gif]

98 Finding the Minimum Value: Y = 9x² - 50x + 50; dY/dx = 18x - 50 = 0. The minimum is obtained when x = 50/18 = 2.7777(7). The minimum value is Y(xmin) = 9·(50/18)² - 50·(50/18) + 50 = -19.4444(4).

99 Start with two measurements; v1 and v2 represent zero-mean noise.

100 Formula for the estimation error: the new estimate is …; the error is … (equations in the slide figure).

101 Expected value of the error: if the estimate is unbiased, this should hold.

102

103 Find the Mean Square Error = ?

104

105 Mean Square Error

106 Minimize the mean square error

107 Finding S1. Therefore…

108 Finding S2

109 Finally we get what we wanted

110 Finding the new variance

111 Formula for the new variance
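Slides 99-111 carry the derivation mostly in the slide figures. As an independent check (not from the slides), a small Monte Carlo sketch in Python confirming that the weights S1 = σ2²/(σ1² + σ2²) and S2 = σ1²/(σ1² + σ2²) minimize the mean square error among weighted sums with S1 + S2 = 1:

import numpy as np

rng = np.random.default_rng(0)
sigma1, sigma2 = 4.0, 3.0
x = 100.0                                        # true (fixed) value
z1 = x + sigma1 * rng.standard_normal(100_000)   # z1 = x + v1
z2 = x + sigma2 * rng.standard_normal(100_000)   # z2 = x + v2

# grid search over S1 (with S2 = 1 - S1) for the smallest mean square error
best = min(
    (np.mean((s1 * z1 + (1 - s1) * z2 - x) ** 2), s1)
    for s1 in np.linspace(0.0, 1.0, 101)
)
print("empirical best S1:", best[1])                            # close to 0.36
print("theoretical S1:", sigma2**2 / (sigma1**2 + sigma2**2))   # 9/25 = 0.36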

112 Kalman Filter Diagram [Brown and Hwang (1992)]

113 The process to be estimated
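The model and equations for the process to be estimated are in the slide figures. Below is a minimal scalar sketch of the resulting predict-correct loop, following the standard discrete Kalman filter of Brown and Hwang (1992), Ch. 5; the toy random-walk model and all variable names are illustrative assumptions, not the lecture's code:

import numpy as np

# Model (scalar): x_k = a * x_{k-1} + w_k,   z_k = h * x_k + v_k
a, h = 1.0, 1.0          # random-walk state, direct measurement
Q, R = 0.01, 1.0         # process and measurement noise variances

rng = np.random.default_rng(1)
x_true, x_est, P = 0.0, 0.0, 1.0

for k in range(50):
    # simulate the true process and a noisy measurement of it
    x_true = a * x_true + rng.normal(0.0, np.sqrt(Q))
    z = h * x_true + rng.normal(0.0, np.sqrt(R))

    # predict (time update)
    x_pred = a * x_est
    P_pred = a * P * a + Q

    # correct (measurement update)
    K = P_pred * h / (h * P_pred * h + R)    # Kalman gain
    x_est = x_pred + K * (z - h * x_pred)
    P = (1 - K * h) * P_pred

print(x_true, x_est, P)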

114

115 THE END

