CARNEGIE MELLON UNIVERSITY. FASTSLAM 2.0: AN IMPROVED PARTICLE FILTERING ALGORITHM FOR SIMULTANEOUS LOCALIZATION AND MAPPING. MICHAEL MONTEMERLO AND SEBASTIAN THRUN, SCHOOL OF COMPUTER SCIENCE, CARNEGIE MELLON UNIVERSITY. DAPHNE KOLLER AND BEN WEGBREIT, COMPUTER SCIENCE DEPARTMENT, STANFORD UNIVERSITY.

Abstract. This paper describes a modified version of FastSLAM, an algorithm originally proposed by Montemerlo et al., that overcomes important deficiencies of the original algorithm. The paper also proves that the newly proposed algorithm converges for linear SLAM problems.

Related Works
EKF SLAM by Smith and Cheeseman
Prior work by Doucet and colleagues
MCMC techniques for neural networks
Parallel work by van der Merwe, using an unscented filtering step for generating the proposal distribution

Simultaneous localization and mapping. The SLAM posterior is given by
p(θ, s^t | z^t, u^t, n^t)
where
N landmark locations: θ = θ_1, θ_2, ..., θ_N
Path of the vehicle: s^t = s_1, s_2, ..., s_t
Measurements / observations: z^t = z_1, z_2, ..., z_t
Robot control inputs: u^t = u_1, u_2, ..., u_t
Data association variables: n^t = n_1, n_2, ..., n_t

FastSLAM. In FastSLAM the posterior can be factored as
p(θ, s^t | z^t, u^t, n^t) = p(s^t | z^t, u^t, n^t) ∏_n p(θ_n | s^t, z^t, u^t, n^t)
This factorization exploits the fact that, if the path of the vehicle were known, the landmark positions could be estimated independently of each other.

Equivalently, with m denoting the landmarks and x the robot path:
N landmark locations: m = m_1, m_2, ..., m_N
Path of the vehicle: x^t = x_1, x_2, ..., x_t
Measurements / observations: z^t = z_1, z_2, ..., z_t
Robot control inputs: u^t = u_1, u_2, ..., u_t

FastSLAM samples the path using a particle filter; each particle therefore carries its own map, consisting of N independent extended Kalman filters, one per landmark.
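To make the factored representation concrete, below is a minimal Python sketch of the per-particle data structure: a pose hypothesis plus one small EKF (mean and covariance) per landmark. The names Particle and LandmarkEKF are illustrative, not from the paper.

```python
# Minimal sketch of a Rao-Blackwellized FastSLAM particle (illustrative names).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class LandmarkEKF:
    mu: np.ndarray      # landmark mean, e.g. (x, y)
    sigma: np.ndarray   # landmark covariance, e.g. 2x2

@dataclass
class Particle:
    pose: np.ndarray                     # robot pose s_t = (x, y, heading)
    weight: float = 1.0                  # importance weight w_t^[m]
    landmarks: dict[int, LandmarkEKF] = field(default_factory=dict)  # one EKF per landmark id
```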

Each update in FastSLAM begins by sampling new poses based on the most recent motion command u_t:
s_t^[m] ~ p(s_t | s_{t-1}^[m], u_t)
The importance factor is the ratio of target and proposal distributions:
w_t^[m] = target(s^[m]) / proposal(s^[m])
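A hedged sketch of this pose-sampling step, assuming a simple velocity motion model with additive Gaussian noise (the motion model and the noise covariance P_t used here are assumptions, not specified by the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pose(prev_pose, u_t, P_t):
    """Draw s_t^[m] ~ p(s_t | s_{t-1}^[m], u_t) for one particle.

    prev_pose: (x, y, heading); u_t: (forward displacement, turn) per step.
    P_t: 3x3 covariance of additive Gaussian motion noise (an assumption here).
    """
    x, y, th = prev_pose
    v, w = u_t
    predicted = np.array([x + v * np.cos(th), y + v * np.sin(th), th + w])
    # Sampling from the motion model = predicted pose plus motion noise.
    return rng.multivariate_normal(predicted, P_t)
```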

Importance factor:
w_t^[m] = η ∫ p(z_t | θ_{n_t}, s_t^[m], n_t) p(θ_{n_t} | s^{t-1,[m]}, z^{t-1}, n^{t-1}) dθ_{n_t}
where the measurement model is Gaussian, p(z_t | θ_{n_t}, s_t^[m], n_t) ~ N(z_t; g(θ_{n_t}, s_t^[m]), R_t), and the landmark estimate carried by the particle is Gaussian, p(θ_{n_t} | s^{t-1,[m]}, z^{t-1}, n^{t-1}) ~ N(θ_{n_t}; μ_{n,t-1}^[m], Σ_{n,t-1}^[m]).
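Since both factors in the integrand are Gaussian, the integral has a closed form: the weight is the measurement likelihood evaluated under the innovation covariance G_θ Σ_{n,t-1}^[m] G_θᵀ + R_t. A minimal sketch, assuming that closed form, with Jacobian and covariances passed in as NumPy arrays:

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of a multivariate Gaussian N(x; mean, cov)."""
    d = x - mean
    k = len(x)
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))

def fastslam1_weight(z_t, z_hat, G_theta, sigma_landmark, R_t):
    """Closed form of the weight integral: the convolution of two Gaussians is
    again Gaussian, so w_t^[m] ∝ N(z_t; ẑ_t, G_θ Σ_{n,t-1} G_θᵀ + R_t)."""
    Q = G_theta @ sigma_landmark @ G_theta.T + R_t   # innovation covariance
    return gaussian_pdf(z_t, z_hat, Q)
```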

If the data association is unknown, each particle m in FastSLAM makes its own local data association decision n̂_t by maximizing the measurement likelihood:
n̂_t^[m] = argmax_{n_t} w_t^[m](n_t)
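A minimal sketch of this per-particle maximum-likelihood choice; the dictionary of candidate weights is an assumed input, and a new-landmark hypothesis could be included as one more candidate with a fixed default likelihood:

```python
def associate(weights_by_hypothesis):
    """Pick the data association n_t that maximizes the per-particle weight.

    weights_by_hypothesis: dict mapping a candidate landmark id n_t to the
    weight w_t^[m](n_t) computed as above.
    """
    return max(weights_by_hypothesis, key=weights_by_hypothesis.get)
```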

Drawbacks of FastSLAM 1.0. The inefficiency of the FastSLAM algorithm arises from its proposal distribution, which considers only the motion command and not the measurement. This is troublesome when the noise in the vehicle motion is large relative to the measurement noise, which is typically the case in real-world robot systems.

FastSLAM 2.0. "FastSLAM 2.0 implements a single new idea: poses are sampled under consideration of both the motion u_t and the measurement z_t":
s_t^[m] ~ p(s_t | s^{t-1,[m]}, u^t, z^t, n^t)

The proposal distribution can be reformulated (using Bayes rule and the particle's Gaussian landmark estimate) as:
p(s_t | s^{t-1,[m]}, u^t, z^t, n^t) ∝ p(s_t | s_{t-1}^[m], u_t) ∫ p(z_t | θ_{n_t}, s_t, n_t) p(θ_{n_t} | s^{t-1,[m]}, z^{t-1}, n^{t-1}) dθ_{n_t}

Approximating the measurement function g by a linear (first-order Taylor) expansion gives
g(θ_{n_t}, s_t) ≈ ẑ_t^[m] + G_θ (θ_{n_t} − θ̂_{n_t}^[m]) + G_s (s_t − ŝ_t^[m])
where ẑ_t^[m] = g(θ̂_{n_t}^[m], ŝ_t^[m]) denotes the predicted measurement, ŝ_t^[m] the predicted robot pose obtained from the motion model, and θ̂_{n_t}^[m] = μ_{n_t,t-1}^[m] the predicted landmark location.

Under this EKF-style approximation the proposal distribution is Gaussian,
s_t^[m] ~ N(μ_{s_t}^[m], Σ_{s_t}^[m])
where the matrix Q_t^[m] = R_t + G_θ Σ_{n_t,t-1}^[m] G_θᵀ and
Σ_{s_t}^[m] = [ G_sᵀ (Q_t^[m])^{-1} G_s + P_t^{-1} ]^{-1}
μ_{s_t}^[m] = Σ_{s_t}^[m] G_sᵀ (Q_t^[m])^{-1} (z_t − ẑ_t^[m]) + ŝ_t^[m]
and the Jacobians are given by
G_θ = ∇_{θ_{n_t}} g(θ_{n_t}, s_t) and G_s = ∇_{s_t} g(θ_{n_t}, s_t), both evaluated at (θ̂_{n_t}^[m], ŝ_t^[m]).
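A sketch of the proposal computation, directly transcribing the formulas above into NumPy (function and argument names are illustrative):

```python
import numpy as np

def fastslam2_proposal(s_hat, z_t, z_hat, G_s, G_theta, sigma_landmark, R_t, P_t):
    """Mean and covariance of the Gaussian pose proposal (EKF-style approximation).

    Q   = R_t + G_θ Σ_{n,t-1} G_θᵀ
    Σ_s = (G_sᵀ Q⁻¹ G_s + P_t⁻¹)⁻¹
    μ_s = Σ_s G_sᵀ Q⁻¹ (z_t − ẑ_t) + ŝ_t
    """
    Q = R_t + G_theta @ sigma_landmark @ G_theta.T
    Q_inv = np.linalg.inv(Q)
    sigma_s = np.linalg.inv(G_s.T @ Q_inv @ G_s + np.linalg.inv(P_t))
    mu_s = sigma_s @ G_s.T @ Q_inv @ (z_t - z_hat) + s_hat
    return mu_s, sigma_s

# Sampling the new pose for particle m:
# s_t = np.random.default_rng().multivariate_normal(mu_s, sigma_s)
```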

The observed landmark estimate is then updated using the standard EKF measurement update.
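A minimal sketch of that per-landmark EKF update, assuming a Gaussian measurement model with covariance R_t and the Jacobian G_θ evaluated at the newly sampled pose:

```python
import numpy as np

def ekf_landmark_update(mu, sigma, z_t, z_hat, G_theta, R_t):
    """Standard EKF measurement update for one landmark of one particle.

    mu, sigma: prior landmark mean/covariance (μ_{n,t-1}, Σ_{n,t-1}).
    z_hat, G_theta: predicted measurement and its Jacobian w.r.t. the landmark,
    both evaluated at the newly sampled pose s_t^[m].
    """
    Q = G_theta @ sigma @ G_theta.T + R_t                 # innovation covariance
    K = sigma @ G_theta.T @ np.linalg.inv(Q)              # Kalman gain
    mu_new = mu + K @ (z_t - z_hat)                       # corrected mean
    sigma_new = (np.eye(len(mu)) - K @ G_theta) @ sigma   # corrected covariance
    return mu_new, sigma_new
```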

The Importance Weights
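A hedged sketch of the FastSLAM 2.0 importance weight, under the assumption (following the paper's derivation) that it is the measurement likelihood with covariance L = G_s P_t G_sᵀ + G_θ Σ_{n,t-1}^[m] G_θᵀ + R_t, which folds together the motion, landmark, and sensor uncertainties:

```python
import numpy as np

def fastslam2_weight(z_t, z_hat, G_s, G_theta, sigma_landmark, R_t, P_t):
    """w_t^[m] ≈ N(z_t; ẑ_t, L), with pose, landmark, and sensor uncertainty
    all folded into L (an assumption following the paper's derivation)."""
    L = G_s @ P_t @ G_s.T + G_theta @ sigma_landmark @ G_theta.T + R_t
    d = z_t - z_hat
    k = len(z_t)
    return np.exp(-0.5 * d @ np.linalg.solve(L, d)) / np.sqrt((2 * np.pi) ** k * np.linalg.det(L))
```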

Unknown Data Associations. The data association n̂_t is computed per particle, similarly to FastSLAM 1.0, by choosing the n_t that maximizes the measurement likelihood.

Convergence. Theorem: For linear SLAM, FastSLAM with M = 1 particle converges in expectation to the correct map if all features are observed infinitely often and if the location of one feature is known in advance.

Experimental Results: the Victoria Park dataset, FastSLAM 1.0 vs. FastSLAM 2.0. Run times:
EKF: 7807 sec
Regular FastSLAM, M = 50 particles: 403 sec
FastSLAM 2.0, M = 1 particle: 140 sec

Discussion. The FastSLAM 2.0 algorithm uses a different proposal distribution, making it more efficient in situations where the motion noise is high relative to the measurement noise. The proof of convergence is the first convergence result for a constant-time SLAM algorithm. FastSLAM 2.0 outperforms the EKF SLAM approach by a large margin and successfully generates an accurate map using a single particle on the benchmark data set.