Introduction to Mobile Robotics: Bayes Filter Implementations (Gaussian Filters)
Bayes Filter Reminder

Prediction: $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$

Correction: $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$
Gaussians

Univariate, $x \sim N(\mu, \sigma^2)$: $p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$

Multivariate, $\mathbf{x} \sim N(\boldsymbol\mu, \Sigma)$: $p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol\mu)^T \Sigma^{-1} (\mathbf{x}-\boldsymbol\mu)}$
Properties of Gaussians

Linear transformation: $X \sim N(\mu, \sigma^2) \Rightarrow aX + b \sim N(a\mu + b,\; a^2\sigma^2)$

Product of two Gaussian densities (fusion): $N(\mu_1, \sigma_1^2) \cdot N(\mu_2, \sigma_2^2) \propto N\!\left(\frac{\sigma_2^2\,\mu_1 + \sigma_1^2\,\mu_2}{\sigma_1^2 + \sigma_2^2},\; \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right)$
Multivariate Gaussians

Linear transformation: $\mathbf{X} \sim N(\boldsymbol\mu, \Sigma) \Rightarrow A\mathbf{X} + B \sim N(A\boldsymbol\mu + B,\; A \Sigma A^T)$

We stay in the "Gaussian world" as long as we start with Gaussians and perform only linear transformations.
Discrete Kalman Filter

Estimates the state $x$ of a discrete-time controlled process that is governed by the linear stochastic difference equation

$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$

with a measurement

$z_t = C_t x_t + \delta_t$
Components of a Kalman Filter

$A_t$: matrix ($n \times n$) that describes how the state evolves from $t-1$ to $t$ without controls or noise.

$B_t$: matrix ($n \times l$) that describes how the control $u_t$ changes the state from $t-1$ to $t$.

$C_t$: matrix ($k \times n$) that describes how to map the state $x_t$ to an observation $z_t$.

$\varepsilon_t$, $\delta_t$: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariances $R_t$ and $Q_t$, respectively.
Kalman Filter Updates in 1D

Correction: $\mu_t = \bar\mu_t + K_t(z_t - \bar\mu_t)$ and $\sigma_t^2 = (1 - K_t)\,\bar\sigma_t^2$, with gain $K_t = \frac{\bar\sigma_t^2}{\bar\sigma_t^2 + \sigma_{obs,t}^2}$

Prediction: $\bar\mu_t = \mu_{t-1} + u_t$ and $\bar\sigma_t^2 = \sigma_{t-1}^2 + \sigma_{act,t}^2$
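As a minimal sketch of these 1D updates in Python (the function names and argument layout are mine, not from the slides), showing how the gain trades off prior and measurement confidence:

```python
# 1D Kalman filter sketch: prediction shifts the mean, correction fuses a measurement.
def predict_1d(mu, sigma2, u, sigma2_motion):
    # Means add; uncertainties add.
    return mu + u, sigma2 + sigma2_motion

def correct_1d(mu, sigma2, z, sigma2_meas):
    # Gain K in [0, 1]: near 1 when the measurement is trusted more than the prior.
    k = sigma2 / (sigma2 + sigma2_meas)
    return mu + k * (z - mu), (1.0 - k) * sigma2

mu, sigma2 = predict_1d(mu=0.0, sigma2=1.0, u=1.0, sigma2_motion=0.5)
mu, sigma2 = correct_1d(mu, sigma2, z=1.2, sigma2_meas=0.4)
print(mu, sigma2)  # posterior mean between prediction (1.0) and measurement (1.2)
```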
Kalman Filter Updates
Linear Gaussian Systems: Initialization

Initial belief is normally distributed:

$bel(x_0) = N(x_0;\, \mu_0,\, \Sigma_0)$
Linear Gaussian Systems: Dynamics

Dynamics are a linear function of state and control plus additive noise:

$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$

$p(x_t \mid u_t, x_{t-1}) = N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$
Linear Gaussian Systems: Observations

Observations are a linear function of state plus additive noise:

$z_t = C_t x_t + \delta_t$

$p(z_t \mid x_t) = N(z_t;\, C_t x_t,\, Q_t)$
Kalman Filter Algorithm

1. Algorithm Kalman_filter($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$):
2. Prediction:
3. $\bar\mu_t = A_t \mu_{t-1} + B_t u_t$
4. $\bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + R_t$
5. Correction:
6. $K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + Q_t)^{-1}$
7. $\mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t)$
8. $\Sigma_t = (I - K_t C_t)\, \bar\Sigma_t$
9. Return $\mu_t, \Sigma_t$
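Below is a minimal NumPy sketch of one predict/correct cycle implementing exactly these equations; the function signature and variable names are mine, not from the slides:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter cycle for x_t = A x_{t-1} + B u_t + eps, z_t = C x_t + delta."""
    # Prediction: propagate mean and covariance through the linear dynamics.
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction: compute the Kalman gain and fuse the measurement z.
    S = C @ Sigma_bar @ C.T + Q                # innovation covariance
    K = Sigma_bar @ C.T @ np.linalg.inv(S)     # Kalman gain
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```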
The Prediction-Correction-Cycle: Prediction
The Prediction-Correction-Cycle: Correction
The Prediction-Correction-Cycle: Correction and Prediction
Kalman Filter Summary

Highly efficient: polynomial in measurement dimensionality $k$ and state dimensionality $n$: $O(k^{2.376} + n^2)$.

Optimal for linear Gaussian systems!

Most robotics systems are nonlinear!
Nonlinear Dynamic Systems

Most realistic robotic problems involve nonlinear functions: $x_t = g(u_t, x_{t-1})$ and $z_t = h(x_t)$.
Linearity Assumption Revisited
Non-linear Function
EKF Linearization (1)
EKF Linearization (2)
EKF Linearization (3)
EKF Linearization: First-Order Taylor Series Expansion

Prediction: $g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}\,(x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1})$

Correction: $h(x_t) \approx h(\bar\mu_t) + \frac{\partial h(\bar\mu_t)}{\partial x_t}\,(x_t - \bar\mu_t) = h(\bar\mu_t) + H_t\,(x_t - \bar\mu_t)$
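Since the EKF needs the Jacobians $G_t$ and $H_t$, a finite-difference approximation is a handy way to sanity-check hand-derived expressions. A minimal sketch (not from the slides; the helper name is mine):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of f at x; compare against analytic G_t or H_t."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps  # perturb one state component at a time
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps
    return J
```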
EKF Algorithm

1. Algorithm Extended_Kalman_filter($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$):
2. Prediction:
3. $\bar\mu_t = g(u_t, \mu_{t-1})$
4. $\bar\Sigma_t = G_t \Sigma_{t-1} G_t^T + R_t$
5. Correction:
6. $K_t = \bar\Sigma_t H_t^T (H_t \bar\Sigma_t H_t^T + Q_t)^{-1}$
7. $\mu_t = \bar\mu_t + K_t (z_t - h(\bar\mu_t))$
8. $\Sigma_t = (I - K_t H_t)\, \bar\Sigma_t$
9. Return $\mu_t, \Sigma_t$

with $G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$ and $H_t = \frac{\partial h(\bar\mu_t)}{\partial x_t}$.
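A NumPy sketch of one EKF cycle mirroring the listing above; g and h are the nonlinear motion and measurement models, and G and H are callables returning their Jacobians (the signature is illustrative, not from the slides):

```python
import numpy as np

def extended_kalman_filter(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF cycle: nonlinear mean propagation, linearized covariance update."""
    # Prediction: push the mean through g, linearize for the covariance.
    mu_bar = g(u, mu)
    G_t = G(u, mu)                      # Jacobian of g at (u, mu)
    Sigma_bar = G_t @ Sigma @ G_t.T + R
    # Correction: linearize h around the predicted mean.
    H_t = H(mu_bar)                     # Jacobian of h at mu_bar
    S = H_t @ Sigma_bar @ H_t.T + Q     # innovation covariance
    K = Sigma_bar @ H_t.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu_new, Sigma_new
```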
Localization

"Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox '91]

Given: map of the environment; sequence of sensor measurements.

Wanted: estimate of the robot's position.

Problem classes:
- Position tracking
- Global localization
- Kidnapped robot problem (recovery)
Landmark-based Localization
1. EKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$): Prediction:

$G_t = \frac{\partial g(u_t, \mu_{t-1}, m)}{\partial \mu_{t-1}}$ (Jacobian of $g$ w.r.t. location)

$V_t = \frac{\partial g(u_t, \mu_{t-1}, m)}{\partial u_t}$ (Jacobian of $g$ w.r.t. control)

$M_t$ (motion noise covariance, in control space)

$\bar\mu_t = g(u_t, \mu_{t-1}, m)$ (predicted mean)

$\bar\Sigma_t = G_t \Sigma_{t-1} G_t^T + V_t M_t V_t^T$ (predicted covariance)
1. EKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$): Correction:

$\hat z_t = h(\bar\mu_t, m)$ (predicted measurement mean)

$H_t = \frac{\partial h(\bar\mu_t, m)}{\partial \bar\mu_t}$ (Jacobian of $h$ w.r.t. location)

$S_t = H_t \bar\Sigma_t H_t^T + Q_t$ (predicted measurement covariance)

$K_t = \bar\Sigma_t H_t^T S_t^{-1}$ (Kalman gain)

$\mu_t = \bar\mu_t + K_t (z_t - \hat z_t)$ (updated mean)

$\Sigma_t = (I - K_t H_t)\, \bar\Sigma_t$ (updated covariance)
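The slides leave $h$ and $H_t$ to the figures. For landmark-based localization a common choice is a range-bearing measurement model; here is a sketch under that assumption (function names and the pose layout (x, y, theta) are mine):

```python
import numpy as np

def h_range_bearing(mu_bar, landmark):
    """Predicted range and bearing to a known landmark m = (mx, my)."""
    x, y, theta = mu_bar
    dx, dy = landmark[0] - x, landmark[1] - y
    q = dx**2 + dy**2
    return np.array([np.sqrt(q), np.arctan2(dy, dx) - theta])

def H_range_bearing(mu_bar, landmark):
    """Jacobian of the range-bearing model w.r.t. the robot pose (x, y, theta)."""
    x, y, _ = mu_bar
    dx, dy = landmark[0] - x, landmark[1] - y
    q = dx**2 + dy**2
    sq = np.sqrt(q)
    return np.array([[-dx / sq, -dy / sq,  0.0],
                     [ dy / q,  -dx / q,  -1.0]])
```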
EKF Prediction Step
EKF Observation Prediction Step
EKF Correction Step
Estimation Sequence (1)
Estimation Sequence (2)
Comparison to Ground Truth
EKF Summary

Highly efficient: polynomial in measurement dimensionality $k$ and state dimensionality $n$: $O(k^{2.376} + n^2)$.

Not optimal! Can diverge if nonlinearities are large!

Works surprisingly well even when all assumptions are violated!
Linearization via Unscented Transform (EKF vs. UKF)
UKF Sigma-Point Estimate (2) (EKF vs. UKF)
UKF Sigma-Point Estimate (3) (EKF vs. UKF)
Unscented Transform Intuition

- Sample a set of 2N+1 points from an N-dimensional Gaussian distribution.
- For each sample, compute its mapping via $f(\cdot)$.
- Recover a Gaussian approximation from the samples.
Unscented Transform

Sigma points: $\chi^{[0]} = \mu$ and $\chi^{[i]} = \mu \pm \left(\sqrt{(n+\lambda)\Sigma}\right)_i$ for $i = 1, \dots, 2n$.

Weights: $w_m^{[0]} = \frac{\lambda}{n+\lambda}$, $w_c^{[0]} = w_m^{[0]} + (1 - \alpha^2 + \beta)$, and $w_m^{[i]} = w_c^{[i]} = \frac{1}{2(n+\lambda)}$.

Pass sigma points through the nonlinear function: $\mathcal{Y}^{[i]} = g(\chi^{[i]})$.

Recover mean and covariance: $\mu' = \sum_i w_m^{[i]} \mathcal{Y}^{[i]}$ and $\Sigma' = \sum_i w_c^{[i]} (\mathcal{Y}^{[i]} - \mu')(\mathcal{Y}^{[i]} - \mu')^T$.
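A compact NumPy sketch of this transform; the spread parameters alpha, beta, kappa follow the usual conventions, and the function name and defaults are mine:

```python
import numpy as np

def unscented_transform(mu, Sigma, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Pass sigma points of N(mu, Sigma) through f and refit a Gaussian."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    # 2n+1 sigma points: the mean plus symmetric offsets along sqrt((n+lam)*Sigma).
    L = np.linalg.cholesky((n + lam) * Sigma)
    chi = np.vstack([mu, mu + L.T, mu - L.T])          # shape (2n+1, n)
    # Weights for mean and covariance recovery.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # Map each sigma point through the nonlinear function f.
    Y = np.array([f(x) for x in chi])                  # shape (2n+1, m)
    mu_y = wm @ Y
    diff = Y - mu_y
    Sigma_y = diff.T @ (wc[:, None] * diff)
    return mu_y, Sigma_y
```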
Unscented Prediction, $g(x_{t-1}, u_{t-1})$

- Construct an N+M dimensional Gaussian from the previous state distribution ($x_{t-1}$) and the controls ($u_{t-1}$).
- For each of the 2(N+M)+1 samples $\chi^{[i]}$, compute its mapping via $g(x, u)$.
- Recover a Gaussian approximation from the samples.
UKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$): Prediction:

$M_t$ (motion noise covariance); $Q_t$ (measurement noise covariance)

Augmented state mean: $\mu_{t-1}^a = \left(\mu_{t-1}^T \;\; 0^T \;\; 0^T\right)^T$

Augmented covariance: $\Sigma_{t-1}^a = \mathrm{diag}\!\left(\Sigma_{t-1},\, M_t,\, Q_t\right)$

Sigma points: $\chi_{t-1}^a = \left(\mu_{t-1}^a \;\; \mu_{t-1}^a \pm \sqrt{(L+\lambda)\Sigma_{t-1}^a}\right)$, where $L$ is the augmented state dimension

Prediction of sigma points: $\bar\chi_t^x = g(u_t + \chi_t^u,\, \chi_{t-1}^x)$

Predicted mean: $\bar\mu_t = \sum_i w_m^{[i]}\, \bar\chi_t^{x,[i]}$

Predicted covariance: $\bar\Sigma_t = \sum_i w_c^{[i]}\, (\bar\chi_t^{x,[i]} - \bar\mu_t)(\bar\chi_t^{x,[i]} - \bar\mu_t)^T$
Unscented Correction, $h(x_t)$

- Sample from the predicted state and the observation noise to obtain the expected measurement.
- Compute the cross-correlation matrix of measurements and states, and perform a Kalman update.
UKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$): Correction:

Measurement sigma points: $\bar Z_t = h(\bar\chi_t^x) + \chi_t^z$

Predicted measurement mean: $\hat z_t = \sum_i w_m^{[i]}\, \bar Z_t^{[i]}$

Predicted measurement covariance: $S_t = \sum_i w_c^{[i]}\, (\bar Z_t^{[i]} - \hat z_t)(\bar Z_t^{[i]} - \hat z_t)^T$

Cross-covariance: $\Sigma_t^{x,z} = \sum_i w_c^{[i]}\, (\bar\chi_t^{x,[i]} - \bar\mu_t)(\bar Z_t^{[i]} - \hat z_t)^T$

Kalman gain: $K_t = \Sigma_t^{x,z}\, S_t^{-1}$

Updated mean: $\mu_t = \bar\mu_t + K_t (z_t - \hat z_t)$

Updated covariance: $\Sigma_t = \bar\Sigma_t - K_t S_t K_t^T$
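A NumPy sketch of this correction step, assuming the predicted-state sigma points chi_bar and their measurement images Z_bar were already produced by the prediction step; shapes and names are illustrative:

```python
import numpy as np

def ukf_correction(mu_bar, Sigma_bar, z, chi_bar, Z_bar, wm, wc, Q):
    """UKF correction from sigma points: chi_bar (2n+1, n), Z_bar (2n+1, m)."""
    z_hat = wm @ Z_bar                           # predicted measurement mean
    dZ = Z_bar - z_hat
    S = dZ.T @ (wc[:, None] * dZ) + Q            # predicted measurement covariance
    dX = chi_bar - mu_bar
    Sigma_xz = dX.T @ (wc[:, None] * dZ)         # state/measurement cross-covariance
    K = Sigma_xz @ np.linalg.inv(S)              # Kalman gain
    mu = mu_bar + K @ (z - z_hat)                # updated mean
    Sigma = Sigma_bar - K @ S @ K.T              # updated covariance
    return mu, Sigma
```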
EKF_localization correction (repeated from above for side-by-side comparison with the UKF correction).
UKF Prediction Step
UKF Observation Prediction Step
UKF Correction Step
EKF Correction Step
Estimation Sequence (EKF, PF, UKF)
Estimation Sequence (EKF, UKF)
Prediction Quality (EKF vs. UKF)
UKF Summary

Highly efficient: same asymptotic complexity as the EKF; a constant factor slower in typical practical applications.

Better linearization than the EKF: accurate in the first two terms of the Taylor expansion (EKF: only the first term).

Derivative-free: no Jacobians needed.

Still not optimal!
Kalman Filter-based System

[Arras et al. 98]: laser range-finder and vision; high precision (< 1 cm accuracy).

Courtesy of K. Arras
Multi-Hypothesis Tracking
Localization with MHT

- Belief is represented by multiple hypotheses.
- Each hypothesis is tracked by a Kalman filter.

Additional problems:
- Data association: which observation corresponds to which hypothesis?
- Hypothesis management: when to add / delete hypotheses?

There is a huge body of literature on target tracking, motion correspondence, etc.
MHT: Implemented System (1) [Jensfelt et al. '00]

- Hypotheses are extracted from laser range-finder (LRF) scans.
- Each hypothesis has a probability of being the correct one; hypothesis probabilities are computed using Bayes' rule.
- Hypotheses with low probability are deleted.
- New candidates are extracted from LRF scans.
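As a sketch of the Bayes-rule reweighting step only (hypothesis generation, data association, and pruning thresholds omitted; the function name is mine):

```python
def update_hypothesis_probs(priors, likelihoods):
    """Bayes-rule reweighting: P(H_i | z) proportional to P(z | H_i) * P(H_i)."""
    posterior = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(posterior)  # normalizer so the probabilities sum to 1
    return [p / total for p in posterior]

# Three hypotheses; the second explains the latest observation best.
print(update_hypothesis_probs([0.5, 0.3, 0.2], [0.1, 0.8, 0.3]))
```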
MHT: Implemented System (2) Courtesy of P. Jensfelt and S. Kristensen
MHT: Implemented System (3)

Example run: map and trajectory; number of hypotheses vs. time; $P(H_{best})$.

Courtesy of P. Jensfelt and S. Kristensen