Marginal Particle and Multirobot SLAM
SLAM = "Simultaneous Localization and Mapping"
By Marc Sobel (includes references to Brian Clipp, Comp 790-072 Robotics)

The SLAM Problem
Given: robot controls, measurements of nearby features
Estimate: robot state (position, orientation), map of world features

SLAM Applications: indoors, space, undersea, underground (images from Probabilistic Robotics)

Outline
Sensors
SLAM: Full vs. Online SLAM
Marginal SLAM
Multirobot marginal SLAM
Example algorithms: Extended Kalman Filter (EKF) SLAM, FastSLAM (particle filter)

Types of Sensors Odometry Light Detection and Ranging (LIDAR) Acoustic (sonar, ultrasonic) Radar Vision (monocular, stereo, etc.) GPS Gyroscopes, Accelerometers (Inertial Navigation) Etc.

Sensor Characteristics
Noise
Dimensionality of output: LIDAR gives 3D points; vision gives bearing only (a 2D ray in space)
Range
Frame of reference: most sensors report in the robot frame (vision, LIDAR, etc.); GPS uses an Earth-centered coordinate frame; accelerometers/gyros report in an inertial coordinate frame

A Probabilistic Approach Notation: xt is the robot pose at time t, ut the control (odometry) at time t, zt the scan measurement at time t, and m the map features.

Full vs. Online (classical) SLAM Full SLAM calculates the robot pose over all times up to time t, given the scans and odometry; online SLAM calculates the robot pose only for the current time t.
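In the standard Probabilistic Robotics notation introduced above, the two posteriors (likely what the original slide displayed as images) are:

```latex
\text{Full SLAM:}\quad p\!\left(x_{1:t},\, m \mid z_{1:t},\, u_{1:t}\right)
\qquad
\text{Online SLAM:}\quad p\!\left(x_t,\, m \mid z_{1:t},\, u_{1:t}\right)
= \int \!\cdots\! \int p\!\left(x_{1:t},\, m \mid z_{1:t},\, u_{1:t}\right)\, dx_1 \cdots dx_{t-1}
```

Online SLAM is thus the full-SLAM posterior with all past poses integrated out.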


Classical FastSLAM and EKF SLAM Robot environment: (1) N distances mt = {d(xt, L1), …, d(xt, LN)}; m measures the distances from the landmarks at time t. (2) The robot pose at time t: xt. (3) The (scan) measurements at time t: zt. Goal: determine the poses x1:T given the scans z1:T, the odometry u1:T, and the map measurements m.

EKF SLAM (Extended Kalman Filter) As the state vector evolves, the robot pose moves according to the motion function g(ut, xt). This motion function can be linearized for use in a Kalman filter; the Jacobian J of g depends on the translational and rotational velocity. Linearization allows us to treat the motion, and hence the distances, as Gaussian, so we can calculate the mean μ and covariance matrix Σ for the pose xt at time t.
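A minimal sketch of the prediction half of this step, assuming a unicycle-style motion model g(ut, xt) with controls (v, w); the function and variable names are illustrative, not from the talk:

```python
import numpy as np

def ekf_predict(mu, Sigma, u, dt, R):
    """EKF prediction step for a unicycle motion model g(u, x).

    mu    : state mean [x, y, theta]
    Sigma : 3x3 state covariance
    u     : control (v, w) = (translational, rotational velocity)
    dt    : time step
    R     : 3x3 motion noise covariance
    """
    x, y, theta = mu
    v, w = u
    # Motion model g(u, x): move with constant v, w for dt seconds.
    mu_bar = np.array([x + v * dt * np.cos(theta),
                       y + v * dt * np.sin(theta),
                       theta + w * dt])
    # Jacobian J = dg/dx, evaluated at the current mean; note the
    # dependence on the translational velocity v.
    J = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    # Linearized (Gaussian) covariance propagation.
    Sigma_bar = J @ Sigma @ J.T + R
    return mu_bar, Sigma_bar
```

The measurement-update half (correcting with the landmark distances) follows the same linearize-then-Gaussian pattern.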

Outline of EKF SLAM By what precedes, we assume that the map vectors m (measuring distances from the landmarks) are independent multivariate normal; combining this with the Gaussian motion model yields the EKF SLAM posterior.

Conditional Independence For constructing the weights associated with the classical FastSLAM algorithm, under moderate assumptions, we get: We use the notation: And calculate that:

Problems With EKF SLAM Uses uni-modal Gaussians to model what may be non-Gaussian probability density functions

Particle Filters (without EKF) The use of the EKF depends on the odometry (u's) and motion-model (g's) assumptions in a very non-robust way, and it fails to allow for multimodality in the motion model. In place of this, we can use particle filters, which represent the state distribution by samples rather than by parametric (Gaussian) assumptions.

Particle Filters (an alternative) Represent probability distribution as a set of discrete particles which occupy the state space

Particle Filters For constructing the weights associated with the classical FastSLAM algorithm, under moderate assumptions, we get (for x's simulated by the proposal q):
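The weight formula itself did not survive extraction; for a filter whose particles are drawn from a proposal q, the standard importance weight (a reconstruction in the notation above) is:

```latex
w_t^{(i)} \;\propto\; w_{t-1}^{(i)}\,
\frac{p\!\left(z_t \mid x_t^{(i)}\right)\, p\!\left(x_t^{(i)} \mid x_{t-1}^{(i)},\, u_t\right)}
     {q\!\left(x_t^{(i)} \mid x_{1:t-1}^{(i)},\, z_{1:t}\right)}
```

Taking q equal to the motion model reduces this to the bootstrap weight p(zt | xt(i)).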

Resampling Assign each particle a weight depending on how well its estimate of the state agrees with the measurements Randomly draw particles from previous distribution based on weights creating a new distribution

Particle Filter Update Cycle
1. Generate the new particle distribution from the motion model.
2. For each particle, compare the particle's prediction of the measurements with the actual measurements; particles whose predictions match the measurements are given a high weight.
3. Resample particles based on weight.
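The cycle above can be sketched as a single function; this is a generic bootstrap-style filter, not the talk's specific implementation, and `motion` and `likelihood` are assumed callbacks supplied by the user:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_update(particles, weights, u, z, motion, likelihood):
    """One predict / weight / resample cycle of a basic particle filter.

    particles  : (N, d) array of state samples
    weights    : (N,) normalized weights
    motion     : samples x_t ~ p(x_t | x_{t-1}, u) for every particle
    likelihood : evaluates p(z_t | x_t) for every particle
    """
    # 1. Propagate each particle through the motion model.
    particles = motion(particles, u)
    # 2. Reweight by how well each particle predicts the measurement.
    weights = weights * likelihood(particles, z)
    weights = weights / weights.sum()
    # 3. Resample: draw N particles with probability proportional to weight,
    #    then reset to uniform weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

For example, a 1-D random-walk robot would use `motion = lambda p, u: p + u + rng.normal(0, 0.1, p.shape)` and a Gaussian `likelihood` centered on the range reading.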

Particle Filter Advantages Can represent multi-modal distributions

Problems with Particle Filters Degeneracy: as time evolves, the particle histories grow in dimensionality. Since there is error at each time point, this evolution typically leads to vanishingly small interparticle (relative to intraparticle) variation. Moreover, we frequently require estimates of the 'marginal' rather than the 'conditional' particle distribution, and particle filters do not by themselves provide good methods for estimating particle features.
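Degeneracy is commonly diagnosed with the effective sample size, which collapses toward 1 as the weight mass concentrates on a few particles (this diagnostic is standard practice, not something the talk itself introduces):

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights.

    Equals N for uniform weights and 1 when a single particle
    carries all the mass -- the degeneracy symptom.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)
```

A typical remedy is to resample only when N_eff drops below some threshold such as N/2.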

Marginal versus non-marginal Particle Filters Marginal particle filters attempt to update the x's using their marginal (posterior) distribution rather than their conditional (posterior) distribution. The update weights take the general form:

Marginal Particle update We want to update by using the old weights rather than conditioning on the old particles.

Marginal Particle Filters We specify the proposal distribution ‘q’ via:

Marginal Particle Algorithm (1) (2) Calculate the importance weights:
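The slide's equations were lost in extraction; as a reconstruction consistent with the marginal particle filter literature (Klaas, de Freitas, and Doucet), the mixture proposal and the marginal importance weights take the form:

```latex
q\!\left(x_t \mid z_{1:t}\right) \;=\; \sum_{j=1}^{N} w_{t-1}^{(j)}\, q\!\left(x_t \mid x_{t-1}^{(j)},\, z_t\right)

w_t^{(i)} \;\propto\;
\frac{p\!\left(z_t \mid x_t^{(i)}\right)\,
      \sum_{j=1}^{N} w_{t-1}^{(j)}\, p\!\left(x_t^{(i)} \mid x_{t-1}^{(j)}\right)}
     {q\!\left(x_t^{(i)} \mid z_{1:t}\right)}
```

Note that each weight now sums over the entire previous weighted particle set (the marginal) rather than conditioning on a single ancestor path, which is what avoids the degeneracy of the particle histories.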

Updating Map features in the marginal model Up to now, we haven't assumed any map features. Let θ = {θt} denote, e.g., the distances of the robot at time t from the given landmarks. We then write p(zt | x1:t, θ) for the probability associated with scan zt given the positions x1:t. We'd like to update θ. This update should be based not on the gradient of the conditional log-likelihood, ∇θ log p(zt | x1:t, θ), but rather on the gradient of the marginal log-likelihood, ∇θ log p(zt | z1:t−1, θ).

Taking the 'right' derivative The conditional gradient ∇θ log p(zt | x1:t, θ) is highly non-robust; we are essentially taking derivatives of noise. By contrast, the marginal gradient ∇θ log p(zt | z1:t−1, θ) is robust and represents the 'right' derivative.

Estimating the Gradient of a Map We have that,

Simplification We can then show for the first term that:

Simplification II For the second term, we convert into a discrete sum by defining ‘derivative weights’ And combining them with the standard weights.

Estimating the Gradient We can further write that:

Gradient (continued) We can therefore update the gradient weights via:

Parameter Updates We update θ by:
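The update equation itself did not survive extraction; a plausible reconstruction, consistent with stochastic-gradient (Robbins–Monro) parameter estimation using the estimated marginal score, is:

```latex
\theta_t \;=\; \theta_{t-1} \;+\; \gamma_t\,
\widehat{\nabla_{\theta} \log p}\!\left(z_t \mid z_{1:t-1},\, \theta_{t-1}\right),
\qquad \sum_t \gamma_t = \infty,\;\; \sum_t \gamma_t^2 < \infty
```

where the step sizes γt decay at the usual Robbins–Monro rate and the score estimate is built from the derivative weights β introduced above.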

Normalization The β’s are normalized differently from the w’s. In effect we put: And then compute that:

Weight Updates

The Bayesian Viewpoint Retain a posterior sample of θ at time t−1; call this θt−1(i) (i = 1, …, I). At time t, update this sample:

Multi-Robot Models Write x and z (now indexed by robot) for the poses and scan statistics of the r robots. At each time point, the needed weights carry r indices. We also need to update the derivative weights; the derivative is now a matrix derivative.

Multi-Robot SLAM The parameter is now a matrix (with time as the row index and robot index as the column index). Updates depend on derivatives with respect to each time point and with respect to each robot.