Markov Localization & Bayes Filtering with Kalman Filters, Discrete Filters, and Particle Filters. Slides adapted from Thrun et al., Probabilistic Robotics.
Markov Localization The robot doesn’t know where it is. Thus, a reasonable initial belief of its position is a uniform distribution.
Markov Localization A sensor reading is made (USE SENSOR MODEL) indicating a door at certain locations (USE MAP). This sensor reading should be integrated with the prior belief to update our belief (USE BAYES).
Markov Localization The robot is moving (USE MOTION MODEL), which adds noise.
Markov Localization A new sensor reading (USE SENSOR MODEL) indicates a door at certain locations (USE MAP). This sensor reading should be integrated with the prior belief to update our belief (USE BAYES).
Markov Localization The robot is moving (USE MOTION MODEL), which adds noise. …
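A minimal sketch of this sense–move cycle for a 1-D corridor discretized into cells. The door positions, sensor probabilities, and motion noise below are illustrative values, not taken from the slides:

    import numpy as np

    n = 100                                   # 1-D corridor discretized into 100 cells
    belief = np.full(n, 1.0 / n)              # uniform initial belief
    doors = [20, 50, 80]                      # map: cells containing a door (illustrative)

    def sense_door(belief, p_hit=0.8, p_miss=0.1):
        """Bayes update after the sensor reports a door."""
        likelihood = np.full(n, p_miss)
        likelihood[doors] = p_hit
        posterior = likelihood * belief
        return posterior / posterior.sum()    # normalize

    def move(belief, delta=5, noise=np.array([0.1, 0.8, 0.1])):
        """Motion update: shift the belief by delta cells and blur with motion noise."""
        shifted = np.roll(belief, delta)
        return np.convolve(shifted, noise, mode='same') / noise.sum()

    belief = sense_door(belief)               # door observed
    belief = move(belief)                     # robot moves, noise is added
    belief = sense_door(belief)               # second door observation sharpens the belief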
Bayes Formula
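In standard notation:
\[ P(x \mid y) = \frac{P(y \mid x)\, P(x)}{P(y)} = \frac{\text{likelihood} \cdot \text{prior}}{\text{evidence}} \]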
Bayes Rule with Background Knowledge
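Conditioning every term on the background knowledge z gives:
\[ P(x \mid y, z) = \frac{P(y \mid x, z)\, P(x \mid z)}{P(y \mid z)} \]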
Normalization Algorithm:
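The evidence term need not be computed explicitly; it is recovered by normalization:
\[ \text{aux}_{x \mid y} = P(y \mid x)\, P(x), \qquad \eta = \Big(\sum_x \text{aux}_{x \mid y}\Big)^{-1}, \qquad P(x \mid y) = \eta\, \text{aux}_{x \mid y} \]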
Recursive Bayesian Updating Markov assumption: z_n is independent of z_1, ..., z_{n-1} if we know x.
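Under this assumption the posterior can be updated one observation at a time:
\[ P(x \mid z_1, \ldots, z_n) = \eta\, P(z_n \mid x)\, P(x \mid z_1, \ldots, z_{n-1}) \]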
Putting observations and actions together: Bayes Filters Given: Stream of observations z and action data u. Sensor model P(z|x). Action model P(x|u,x’). Prior probability of the system state P(x). Wanted: Estimate of the state X of a dynamical system. The posterior of the state is also called the Belief:
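In the notation used throughout these slides:
\[ Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t) \]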
Graphical Representation and Markov Assumption Underlying Assumptions Static world Independent noise Perfect model, no approximation errors
Bayes Filters z = observation, u = action, x = state. The derivation applies, in order: Bayes rule, the Markov assumption, the law of total probability, and the Markov assumption (twice).
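Written out, the derivation these labels refer to is:
\[
\begin{aligned}
Bel(x_t) &= P(x_t \mid u_1, z_1, \ldots, u_t, z_t) \\
&= \eta\, P(z_t \mid x_t, u_1, z_1, \ldots, u_t)\, P(x_t \mid u_1, z_1, \ldots, u_t) && \text{(Bayes)} \\
&= \eta\, P(z_t \mid x_t)\, P(x_t \mid u_1, z_1, \ldots, u_t) && \text{(Markov)} \\
&= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, u_t)\, dx_{t-1} && \text{(Total prob., Markov)} \\
&= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1} && \text{(Markov)}
\end{aligned}
\]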
Prediction Correction
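The two steps of the filter are:
\[ \overline{Bel}(x_t) = \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1} \qquad \text{(prediction)} \]
\[ Bel(x_t) = \eta\, P(z_t \mid x_t)\, \overline{Bel}(x_t) \qquad \text{(correction)} \]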
Bayes Filter Algorithm Algorithm Bayes_filter( Bel(x), d ): η = 0 If d is a perceptual data item z then For all x do Else if d is an action data item u then Return Bel’(x)
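A minimal sketch of this algorithm for a finite state space, assuming the sensor model p_z(z, x) and motion model p_x(x, u, x_prev) are supplied as functions (these names are illustrative):

    def bayes_filter(bel, d, states, p_z, p_x):
        """bel: dict state -> probability; d: ('z', z) or ('u', u)."""
        kind, value = d
        new_bel = {}
        if kind == 'z':                                   # perceptual data item z
            for x in states:
                new_bel[x] = p_z(value, x) * bel[x]
            eta = sum(new_bel.values())                   # normalizer
            for x in states:
                new_bel[x] /= eta
        else:                                             # action data item u
            for x in states:
                new_bel[x] = sum(p_x(x, value, xp) * bel[xp] for xp in states)
        return new_bel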
Bayes Filters are Familiar! Kalman filters Particle filters Hidden Markov models Dynamic Bayesian networks Partially Observable Markov Decision Processes (POMDPs)
Probabilistic Robotics Bayes Filter Implementations Gaussian filters
Linear transform of Gaussians (univariate case)
Multivariate Gaussians We stay in the “Gaussian world” as long as we start with Gaussians and perform only linear transformations.
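Concretely, for the multivariate case:
\[ X \sim \mathcal{N}(\mu, \Sigma), \quad Y = AX + B \;\Rightarrow\; Y \sim \mathcal{N}(A\mu + B,\; A \Sigma A^{T}) \]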
Discrete Kalman Filter Estimates the state x of a discrete-time controlled process that is governed by a linear stochastic difference equation, with a linear measurement model.
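In the standard notation of Probabilistic Robotics, these models take the form:
\[ x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t, \qquad z_t = C_t x_t + \delta_t \]
with process noise \( \varepsilon_t \sim \mathcal{N}(0, R_t) \) and measurement noise \( \delta_t \sim \mathcal{N}(0, Q_t) \).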
Linear Gaussian Systems: Initialization Initial belief is normally distributed:
Linear Gaussian Systems: Dynamics Dynamics are linear function of state and control plus additive noise:
Linear Gaussian Systems: Observations Observations are linear function of state plus additive noise:
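In probabilistic form, the three assumptions read:
\[ bel(x_0) = \mathcal{N}(x_0;\, \mu_0, \Sigma_0), \qquad p(x_t \mid u_t, x_{t-1}) = \mathcal{N}(x_t;\, A_t x_{t-1} + B_t u_t,\; R_t), \qquad p(z_t \mid x_t) = \mathcal{N}(z_t;\, C_t x_t,\; Q_t) \]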
Kalman Filter Algorithm Algorithm Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ): Prediction: Correction: Return μ_t, Σ_t
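A minimal numpy sketch of the two steps; the matrices A, B, C, R, Q follow the notation above and must be supplied by the application:

    import numpy as np

    def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
        # Prediction
        mu_bar = A @ mu + B @ u
        Sigma_bar = A @ Sigma @ A.T + R
        # Correction
        K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)   # Kalman gain
        mu_new = mu_bar + K @ (z - C @ mu_bar)
        Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
        return mu_new, Sigma_new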
Kalman Filter Summary Highly efficient: Polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2) Optimal for linear Gaussian systems! Most robotics systems are nonlinear!
Nonlinear Dynamic Systems Most realistic robotic problems involve nonlinear functions
Linearity Assumption Revisited
Non-linear Function
EKF Linearization (1)
EKF Linearization (2)
EKF Linearization (3)
EKF Linearization: First Order Taylor Series Expansion Prediction: Correction:
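The first-order expansions around the most recent estimates are:
\[ g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1}), \qquad h(x_t) \approx h(\bar{\mu}_t) + H_t\,(x_t - \bar{\mu}_t) \]
where \( G_t = \partial g / \partial x_{t-1} \big|_{\mu_{t-1}} \) and \( H_t = \partial h / \partial x_t \big|_{\bar{\mu}_t} \) are the Jacobians of the motion and measurement functions.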
EKF Algorithm Extended_Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ): Prediction: Correction: Return μ_t, Σ_t
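The resulting update equations mirror the linear case, with g and h replacing the linear maps and the Jacobians G_t and H_t replacing A_t and C_t:
\[ \bar{\mu}_t = g(u_t, \mu_{t-1}), \quad \bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^{T} + R_t, \quad K_t = \bar{\Sigma}_t H_t^{T} (H_t \bar{\Sigma}_t H_t^{T} + Q_t)^{-1}, \quad \mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t)), \quad \Sigma_t = (I - K_t H_t)\, \bar{\Sigma}_t \]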
Localization “Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities.” [Cox ’91] Given Map of the environment. Sequence of sensor measurements. Wanted Estimate of the robot’s position. Problem classes Position tracking Global localization Kidnapped robot problem (recovery)
Landmark-based Localization
EKF Summary Highly efficient: Polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2) Not optimal! Can diverge if nonlinearities are large! Works surprisingly well even when all assumptions are violated!
Kalman Filter-based System [Arras et al. 98]: Laser range-finder and vision High precision (<1cm accuracy) [Courtesy of Kai Arras]
Multi-hypothesis Tracking
Localization With MHT Belief is represented by multiple hypotheses Each hypothesis is tracked by a Kalman filter Additional problems: Data association: Which observation corresponds to which hypothesis? Hypothesis management: When to add / delete hypotheses? Huge body of literature on target tracking, motion correspondence etc.
MHT: Implemented System (2) Courtesy of P. Jensfelt and S. Kristensen
Probabilistic Robotics Bayes Filter Implementations Discrete filters
Piecewise Constant
Discrete Bayes Filter Algorithm Algorithm Discrete_Bayes_filter( Bel(x), d ): η = 0 If d is a perceptual data item z then For all x do Else if d is an action data item u then Return Bel’(x)
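For a piecewise-constant belief over a grid, the same algorithm reduces to array operations. A sketch for a 1-D grid, assuming likelihood(z) returns an array over cells and motion_kernel(u) returns a small convolution kernel (both names are illustrative):

    import numpy as np

    def grid_bayes_filter(bel, d, likelihood, motion_kernel):
        """bel: 1-D numpy array over grid cells; d: ('z', z) or ('u', u)."""
        kind, value = d
        if kind == 'z':                                  # measurement: pointwise product
            bel = likelihood(value) * bel
        else:                                            # motion: convolve with the motion model
            bel = np.convolve(bel, motion_kernel(value), mode='same')
        return bel / bel.sum()                           # normalize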
Grid-based Localization
Sonars and Occupancy Grid Map
Probabilistic Robotics Bayes Filter Implementations Particle filters
Sample-based Localization (sonar)
Particle Filters Represent belief by random samples Estimation of non-Gaussian, nonlinear processes Monte Carlo filter, Survival of the fittest, Condensation, Bootstrap filter, Particle filter Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa 96] Computer vision: [Isard and Blake 96, 98] Dynamic Bayesian Networks: [Kanazawa et al., 95]
Importance Sampling Weight samples: w = f / g
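That is, samples drawn from a proposal distribution g stand in for samples from the target f:
\[ E_f[h(x)] = \int h(x)\, \frac{f(x)}{g(x)}\, g(x)\, dx \;\approx\; \frac{1}{N} \sum_{i=1}^{N} w^{(i)}\, h(x^{(i)}), \qquad x^{(i)} \sim g, \quad w^{(i)} = \frac{f(x^{(i)})}{g(x^{(i)})} \]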
Importance Sampling with Resampling: Landmark Detection Example
Particle Filters
Sensor Information: Importance Sampling
Robot Motion
Sensor Information: Importance Sampling
Robot Motion
Particle Filter Algorithm Algorithm particle_filter( S_{t-1}, u_{t-1}, z_t ): For i = 1 … n: Generate new samples: Sample index j(i) from the discrete distribution given by w_{t-1}; Sample x_t^i from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}; Compute importance weight w_t^i; Update normalization factor; Insert the sample into S_t. For i = 1 … n: Normalize weights.
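A minimal sketch of one filter iteration, assuming sample_motion(x, u) draws from the motion model and measurement_likelihood(z, x) evaluates the sensor model (both names are placeholders):

    import random

    def particle_filter(particles, weights, u, z, sample_motion, measurement_likelihood):
        """particles: list of states; weights: list of floats summing to 1."""
        n = len(particles)
        new_particles, new_weights = [], []
        for _ in range(n):
            j = random.choices(range(n), weights=weights)[0]   # sample index j(i) ~ w_{t-1}
            x = sample_motion(particles[j], u)                  # sample from p(x_t | x_{t-1}, u)
            w = measurement_likelihood(z, x)                    # importance weight
            new_particles.append(x)
            new_weights.append(w)
        eta = sum(new_weights)                                  # normalization factor
        new_weights = [w / eta for w in new_weights]
        return new_particles, new_weights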
Particle Filter Algorithm Importance factor for x_t^i: draw x_t^i from p(x_t | x_{t-1}^i, u_{t-1}); draw x_{t-1}^i from Bel(x_{t-1}).
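The weight is the ratio of target and proposal distributions; with this proposal it reduces to the measurement likelihood:
\[ w_t^{i} = \frac{\text{target}}{\text{proposal}} = \frac{\eta\, p(z_t \mid x_t^{i})\, p(x_t^{i} \mid x_{t-1}^{i}, u_{t-1})\, Bel(x_{t-1}^{i})}{p(x_t^{i} \mid x_{t-1}^{i}, u_{t-1})\, Bel(x_{t-1}^{i})} \;\propto\; p(z_t \mid x_t^{i}) \]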
Motion Model Reminder Start
Proximity Sensor Model Reminder Sonar sensor Laser sensor
Initial Distribution
After Incorporating Ten Ultrasound Scans
After Incorporating 65 Ultrasound Scans
Estimated Path
Localization for AIBO robots
Limitations The approach described so far is able to track the pose of a mobile robot and to globally localize the robot. How can we deal with localization errors (i.e., the kidnapped robot problem)?
Approaches Randomly insert samples (the robot can be teleported at any point in time). Insert random samples in a number determined by the average likelihood of the particles (more samples are inserted when the likelihood of the observations drops, i.e., when the robot has probably been teleported).
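A sketch of the second idea, assuming a random_state() generator for uniformly drawn poses and a reference likelihood level; names and the injection rule are illustrative (a simplified variant of augmented MCL), not the slides' method:

    import random

    def inject_random_particles(particles, weights, avg_likelihood,
                                reference_likelihood, random_state):
        """Replace a fraction of particles with uniformly drawn poses when the
        average observation likelihood drops below a reference value."""
        n = len(particles)
        fraction = max(0.0, 1.0 - avg_likelihood / reference_likelihood)
        for i in random.sample(range(n), int(fraction * n)):
            particles[i] = random_state()        # "teleported" hypothesis: random pose
            weights[i] = 1.0 / n
        return particles, weights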
Global Localization
Kidnapping the Robot
Recovery from Failure
Summary Particle filters are an implementation of recursive Bayesian filtering. They represent the posterior by a set of weighted samples. In the context of localization, the particles are propagated according to the motion model. They are then weighted according to the likelihood of the observations. In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.