Particle Filtering

Sensors and Uncertainty
- Real-world sensors are noisy and suffer from missing data (e.g., occlusions, GPS blackouts).
- Use sensor models to estimate ground truth, infer unobserved variables, and make forecasts.

Hidden Markov Model
- Use observations to get a better idea of where the robot is at time t.
- (Figure: HMM graphical model with hidden state variables X_0, X_1, X_2, X_3 and observed variables z_1, z_2, z_3.)
- Predict, observe, predict, observe, ...

Last Class
- Kalman filtering and its extensions: exact Bayesian inference when the state distribution, process noise, and observation noise are all Gaussian.
- What about more general distributions?
- Key representational issue: how to represent and perform calculations on probability distributions?

Particle Filtering (aka Sequential Monte Carlo)
- Represent distributions as a set of particles.
- Applicable to non-Gaussian, high-dimensional distributions.
- Convenient implementations.
- Widely used in vision and robotics.

Simultaneous Localization and Mapping (SLAM)
- Mobile robots.
- Odometry: locally accurate, but drifts significantly over time.
- Vision/ladar/sonar: locally inaccurate, but provides a global reference frame.
- Combine the two.
- State: (robot pose, map); observations: (sensor input).

General problem
- x_t ~ Bel(x_t), an arbitrary p.d.f.
- x_{t+1} = f(x_t, u, ε_p), where ε_p is process noise drawn from an arbitrary p.d.f.
- z_{t+1} = g(x_{t+1}, ε_o), where ε_o is observation noise drawn from an arbitrary p.d.f.
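
To ground the notation, a minimal sketch of such a model in Python follows; the additive forms of f and g and the Gaussian noise are illustrative assumptions, since the slides leave every p.d.f. arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u, eps_p):
    """Process model x_{t+1} = f(x_t, u, eps_p); a simple additive form."""
    return x + u + eps_p

def g(x, eps_o):
    """Observation model z_{t+1} = g(x_{t+1}, eps_o); direct noisy reading."""
    return x + eps_o

x = 0.0
for t in range(3):
    eps_p = rng.normal(0.0, 0.1)   # process noise (assumed Gaussian here)
    eps_o = rng.normal(0.0, 0.5)   # observation noise (assumed Gaussian here)
    x = f(x, u=1.0, eps_p=eps_p)
    z = g(x, eps_o=eps_o)
    print(f"t={t+1}: state x={x:.3f}, observation z={z:.3f}")
```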

Particle Representation
- Bel(x_t) = {(w_k, x_k), k = 1, ..., n}
- The w_k are weights (summing to 1) and the x_k are state hypotheses.
- The particle set approximates the underlying distribution.

Monte Carlo Integration
- If P(x) ≈ Bel(x) = {(w_k, x_k), k = 1, ..., N}, then E_P[f(x)] = ∫ f(x) P(x) dx ≈ Σ_k w_k f(x_k).
- What might you want to compute?
  - Mean: set f(x) = x.
  - Variance: set f(x) = x², then recover Var(x) = E[x²] − E[x]².
  - P(y): set f(x) = P(y|x), because P(y) = ∫ P(y|x) P(x) dx.
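
As a hedged sketch, the weighted-sum estimates look like this in Python (the synthetic N(2, 1) particle set and uniform weights are assumptions for illustration):

```python
import numpy as np

# Monte Carlo integration over a weighted particle set {(w_k, x_k)}.
rng = np.random.default_rng(0)
N = 10_000
x = rng.normal(2.0, 1.0, size=N)   # state hypotheses x_k
w = np.full(N, 1.0 / N)            # weights w_k, summing to 1

mean = np.sum(w * x)               # f(x) = x    -> E[x]
ex2 = np.sum(w * x**2)             # f(x) = x^2  -> E[x^2]
var = ex2 - mean**2                # Var(x) = E[x^2] - E[x]^2
print(f"mean ≈ {mean:.3f}, variance ≈ {var:.3f}")   # ≈ 2 and ≈ 1
```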

Recovering the Distribution
- Kernel density estimation: P(x) = Σ_k w_k K(x, x_k), where K(x, x_k) is the kernel function.
- The approximation improves as the number of particles grows and the kernel sharpens.
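
A minimal sketch of weighted kernel density estimation with a Gaussian kernel; the bandwidth h = 0.3 and the synthetic particle set are assumptions:

```python
import numpy as np

def kde(query, x, w, h=0.3):
    """P(query) ≈ sum_k w_k K(query, x_k) with a Gaussian kernel of width h."""
    d = (np.asarray(query)[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d**2) / (h * np.sqrt(2.0 * np.pi))
    return K @ w                     # weighted sum over particles

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=2000)  # particle states x_k
w = np.full(x.size, 1.0 / x.size)    # uniform weights w_k
grid = np.linspace(-3.0, 3.0, 7)
print(kde(grid, x, w))               # ≈ standard normal density at grid points
```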

Filtering Steps
- Predict: compute Bel'(x_{t+1}), the distribution of x_{t+1} obtained from the dynamics model alone.
- Update: compute a representation of P(x_{t+1} | z_{t+1}) via likelihood weighting of each particle in Bel'(x_{t+1}).
- Resample to produce Bel(x_{t+1}) for the next step.

Predict Step
- Given input particles Bel(x_t).
- The distribution of x_{t+1} = f(x_t, u_t, ε) is approximated by sampling ε from its distribution and then propagating the individual particles.
- This gives Bel'(x_{t+1}).
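
A minimal sketch of the predict step, reusing the additive dynamics model assumed earlier (f, the control u, and the noise scale are all illustrative):

```python
import numpy as np

def predict(particles, u, rng, noise_std=0.1):
    """Propagate each particle through x_{t+1} = f(x_t, u_t, eps),
    sampling fresh process noise eps for every particle."""
    eps = rng.normal(0.0, noise_std, size=particles.shape)
    return particles + u + eps        # the same additive f as before

rng = np.random.default_rng(0)
bel_t = rng.normal(0.0, 1.0, size=1000)    # particles representing Bel(x_t)
bel_pred = predict(bel_t, u=0.5, rng=rng)  # particles for Bel'(x_{t+1})
```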

Particle Propagation

Update Step
- Goal: compute a representation of P(x_{t+1} | z_{t+1}) given Bel'(x_{t+1}) and z_{t+1}.
- P(x_{t+1} | z_{t+1}) ∝ P(z_{t+1} | x_{t+1}) P(x_{t+1}), where P(x_{t+1}) = Bel'(x_{t+1}) is given.
- Each state hypothesis x_k ∈ Bel'(x_{t+1}) is reweighted by P(z_{t+1} | x_{t+1}).
- Likelihood weighting: w_k ← w_k · P(z_{t+1} | x_{t+1} = x_k), then renormalize so the weights sum to 1.

Update Step (continued)
- w_k ← w_k' · P(z_{t+1} | x_{t+1} = x_k)
- 1-D example: g(x, ε_o) = h(x) + ε_o with ε_o ~ N(0, σ²), so P(z_{t+1} | x_{t+1} = x_k) = C · exp(−(h(x_k) − z_{t+1})² / (2σ²)).
- In general, the observation distribution can be calibrated using experimental data.
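
A hedged sketch of this 1-D update: reweight each particle by the Gaussian observation likelihood and renormalize. The choices h(x) = x and σ = 0.5 are assumptions; the constant C cancels in the normalization.

```python
import numpy as np

def update(particles, weights, z, sigma=0.5):
    """Reweight by P(z | x = x_k) and renormalize. Assumes h(x) = x."""
    lik = np.exp(-(particles - z) ** 2 / (2.0 * sigma ** 2))
    w = weights * lik                # w_k <- w_k * P(z | x_k), up to C
    return w / w.sum()               # renormalize so the weights sum to 1

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=1000)
weights = np.full(1000, 1.0 / 1000)
weights = update(particles, weights, z=0.8)
print(weights.sum(), particles[np.argmax(weights)])  # 1.0, particle nearest 0.8
```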

Resampling
- Likelihood-weighted particles may no longer represent the distribution efficiently.
- Importance resampling: sample new particles with probability proportional to their weights.
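
A minimal sketch of multinomial importance resampling (one common variant among several):

```python
import numpy as np

def resample(particles, weights, rng):
    """Draw n indices with probability proportional to weight,
    then reset the weights to uniform."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=1000)
weights = rng.random(1000)
weights /= weights.sum()             # any normalized weight vector
particles, weights = resample(particles, weights, rng)
```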

Sampling Importance Resampling (SIR)
- The SIR variant iterates the three steps: predict, update, resample (see the end-to-end sketch below).
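
A sketch of the full SIR loop on the 1-D model assumed earlier, tying the three steps together; all model parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, u, sigma_p, sigma_o = 1000, 1.0, 0.1, 0.5
particles = rng.normal(0.0, 1.0, size=n)
x_true = 0.0

for t in range(10):
    x_true = x_true + u + rng.normal(0.0, sigma_p)             # world moves
    z = x_true + rng.normal(0.0, sigma_o)                      # sensor reading
    particles = particles + u + rng.normal(0.0, sigma_p, n)    # predict
    w = np.exp(-(particles - z) ** 2 / (2.0 * sigma_o ** 2))   # update
    w /= w.sum()
    particles = particles[rng.choice(n, size=n, p=w)]          # resample
    print(f"t={t+1}: truth={x_true:.2f}, estimate={particles.mean():.2f}")
```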

Particle Filtering Issues
- Variance: the standard deviation of a quantity (e.g., the mean) computed from the particle representation scales as 1/√N.
- Loss of particle diversity: resampling will likely drop particles with low likelihood, yet these may turn out to be useful hypotheses in the future.

Other Resampling Variants
- Selective resampling: keep the weights, and only resample when the number of "effective particles" falls below a threshold (a sketch follows below).
- Stratified resampling: reduce variance using quasi-random sampling.
- Optimization: explicitly choose particles to minimize deviation from the posterior.
- ...
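
A sketch of selective resampling using the standard effective-sample-size statistic n_eff = 1 / Σ_k w_k²; the threshold of half the particle count is a common but assumed choice.

```python
import numpy as np

def effective_sample_size(weights):
    """n_eff = 1 / sum_k w_k^2: equals n for uniform weights,
    1 when a single particle carries all the weight."""
    return 1.0 / np.sum(weights ** 2)

def maybe_resample(particles, weights, rng, frac=0.5):
    """Only resample when n_eff < frac * n (frac = 0.5 assumed)."""
    n = len(particles)
    if effective_sample_size(weights) < frac * n:
        idx = rng.choice(n, size=n, p=weights)
        return particles[idx], np.full(n, 1.0 / n)
    return particles, weights        # keep weights, skip resampling

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=100)
weights = rng.random(100)
weights /= weights.sum()
particles, weights = maybe_resample(particles, weights, rng)
```

Resampling only when n_eff is low helps preserve particle diversity between observations, mitigating the diversity-loss issue above.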

Storing more information with the same # of particles
- Unscented particle filter: each particle represents a local Gaussian and maintains a local covariance matrix; a combination of the particle filter and the Kalman filter.
- Rao-Blackwellized particle filter: for a state (x_1, x_2), each particle contains a hypothesis for x_1 and an analytical distribution over x_2, which reduces variance.

Recap
- Bayesian mechanisms for state estimation are well understood; the challenge is representation.
- Kalman filters: a highly efficient closed-form solution for Gaussian distributions.
- Particle filters: approximate filtering for high-dimensional, non-Gaussian distributions.
- Implementation challenges differ across domains (localization, mapping, SLAM, tracking).