
1 Marginal Particle and Multirobot SLAM
SLAM = Simultaneous Localization and Mapping
By Marc Sobel (includes references to Brian Clipp, Comp Robotics)

2 The SLAM Problem
Given: robot controls, nearby measurements.
Estimate: robot state (position, orientation), map of world features.

3 SLAM Applications
Indoors, space, undersea, underground.
(Images: Probabilistic Robotics)

4 Outline
Sensors
SLAM
Full vs. online SLAM
Marginal SLAM
Multirobot marginal SLAM
Example algorithms: Extended Kalman Filter (EKF) SLAM; FastSLAM (particle filter)

5 Types of Sensors
Odometry
Laser ranging and detection (LIDAR)
Acoustic (sonar, ultrasonic)
Radar
Vision (monocular, stereo, etc.)
GPS
Gyroscopes, accelerometers (inertial navigation)
Etc.

6 Sensor Characteristics
Noise
Dimensionality of output: LIDAR gives a 3D point; vision gives bearing only (a 2D ray in space)
Range
Frame of reference: most sensors report in the robot frame (vision, LIDAR, etc.); GPS in an earth-centered coordinate frame; accelerometers/gyros in an inertial coordinate frame

7 A Probabilistic Approach
Notation: x_t = robot pose at time t; u_t = odometry/control input at time t; z_t = scan measurements at time t; m = map features (landmark distances).

8 Full vs. Online Classical SLAM
Full SLAM estimates the robot poses over all time up to time t given the scans and odometry. Online SLAM estimates only the robot pose at the current time t.
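In the usual Probabilistic Robotics notation (a standard rendering; the pose x_t, map m, scans z_{1:t}, and odometry u_{1:t} are as defined above), the two posteriors can be written as:

```latex
% Full SLAM: posterior over the entire trajectory and the map
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})

% Online SLAM: posterior over the current pose and the map only,
% obtained by integrating out the past poses
p(x_t, m \mid z_{1:t}, u_{1:t})
  = \int \cdots \int p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \, dx_1 \cdots dx_{t-1}
```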

9 Full vs. Online SLAM
(Figure comparing Full SLAM and Online SLAM)

10 Classical FastSLAM and EKF SLAM
Robot environment:
(1) N landmark distances: m_t = {d(x_t, L_1), ..., d(x_t, L_N)}; m_t measures the distances from the landmarks at time t.
(2) Robot pose at time t: x_t.
(3) (Scan) measurements at time t: z_t.
Goal: determine the poses x_{1:T} given the scans z_{1:T}, the odometry u_{1:T}, and the map measurements m.

11 EKF SLAM (Extended Kalman Filter)
As the state evolves, the robot pose moves according to the motion function g(u_t, x_t). This can be linearized for the Kalman filter; the Jacobian J depends on the translational and rotational velocity. Linearization lets us treat the motion, and hence the distances, as Gaussian, so we can calculate the mean μ and covariance matrix Σ of the pose x_t at time t.
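A minimal sketch of that linearization, assuming the textbook EKF convention in which g is expanded around the previous mean μ_{t-1} (the motion-noise covariance R_t is an added symbol, not from the slide):

```latex
% First-order (Taylor) linearization of the motion model around the previous mean
x_t = g(u_t, x_{t-1})
    \approx g(u_t, \mu_{t-1}) + J_t\,(x_{t-1} - \mu_{t-1}),
\qquad
J_t = \left.\frac{\partial g(u_t, x)}{\partial x}\right|_{x = \mu_{t-1}}

% With Gaussian motion noise R_t, the predicted belief stays Gaussian:
\bar{\mu}_t = g(u_t, \mu_{t-1}),
\qquad
\bar{\Sigma}_t = J_t \Sigma_{t-1} J_t^{\top} + R_t
```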

12 Outline of EKF SLAM
By the preceding linearization, we assume that the map vectors m (measuring distances from the landmarks) are independent multivariate normal; the EKF SLAM posterior then follows from this Gaussian assumption.

13 Conditional Independence
For constructing the weights associated with the classical FastSLAM algorithm we rely on conditional independence: given the robot path, the landmark (map) estimates decouple, and under moderate assumptions this yields the weight calculation.
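A hedged sketch of the factorization this conditional independence yields (the standard Rao-Blackwellized FastSLAM form, with per-landmark components m_n assumed):

```latex
% Conditioned on the robot path, the landmark estimates decouple:
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})
  = p(x_{1:t} \mid z_{1:t}, u_{1:t})
    \prod_{n=1}^{N} p(m_n \mid x_{1:t}, z_{1:t})
```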

14 Problems with EKF SLAM
Uses a uni-modal Gaussian to model what may be a non-Gaussian probability density function.

15 Particle Filters (without EKF)
The EKF depends on the odometry (u's) and motion-model (g's) assumptions in a very non-robust way, and it does not allow for multimodality in the motion model. Instead, we can use particle filters, representing the state by particles directly, without reference to the model parameters.

16 Particle Filters (an alternative)
Represent the probability distribution as a set of discrete particles occupying the state space.

17 Particle Filters
For constructing the weights associated with the classical FastSLAM algorithm, under moderate assumptions we get the importance weights below (for x's simulated from the proposal q).
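A standard sequential-importance-sampling form of these weights, assuming the proposal q and the motion/measurement models from the earlier slides:

```latex
w_t^{(i)} \;\propto\; w_{t-1}^{(i)}\,
  \frac{p\!\left(z_t \mid x_t^{(i)}\right)\,
        p\!\left(x_t^{(i)} \mid x_{t-1}^{(i)}, u_t\right)}
       {q\!\left(x_t^{(i)} \mid x_{t-1}^{(i)}, z_t, u_t\right)}
```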

18 Resampling
Assign each particle a weight depending on how well its estimate of the state agrees with the measurements.
Randomly draw particles from the previous distribution based on the weights, creating a new distribution.
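A minimal sketch of one common resampling scheme (systematic resampling); the function name and the NumPy interface are illustrative, not taken from the slides:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Return particle indices drawn in proportion to the given weights."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalize the weights
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n    # n evenly spaced pointers, one shared random offset
    cumulative = np.cumsum(w)
    cumulative[-1] = 1.0                             # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)    # map each pointer to the particle it lands on

# Usage: idx = systematic_resample(weights)
#        particles, weights = particles[idx], np.full(len(idx), 1.0 / len(idx))
```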

19 Particle Filter Update Cycle
Generate a new particle distribution.
For each particle: compare the particle's prediction of the measurements with the actual measurements; particles whose predictions match the measurements are given high weight.
Resample particles based on weight.
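A compact sketch of one such cycle; the motion model, measurement likelihood, and all function names are stand-in assumptions rather than the deck's actual models:

```python
import numpy as np

def pf_update(particles, weights, u, z, motion_model, likelihood, rng=None):
    """One predict-weight-resample cycle for a generic particle filter.

    particles: (N, d) array of states; weights: (N,) array summing to 1.
    motion_model(particles, u, rng) -> propagated particles (prediction step).
    likelihood(z, particles)        -> p(z | particle) for each particle.
    """
    rng = rng or np.random.default_rng()
    particles = motion_model(particles, u, rng)         # predict: move each particle
    weights = weights * likelihood(z, particles)        # weight: agreement with the measurement
    weights /= weights.sum()                            # normalize
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample by weight
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```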

20 Particle Filter Advantages
Can represent multi-modal distributions

21 Problems with Particle Filters
Degeneracy: as time evolves, the particles grow in dimensionality (each represents an ever-longer trajectory). Since there is error at each time point, this evolution typically leads to vanishingly small interparticle (relative to intraparticle) variation.
We frequently require estimates of the 'marginal' rather than the 'conditional' particle distribution, and particle filters do not provide good methods for estimating such particle features.

22 Marginal versus nonmarginal Particle Filters
Marginal particle filters attempt to update the x's using their marginal (posterior) distribution rather than their conditional (posterior) distribution. The update weights take the general form below.
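One common way to write this general form, assuming a proposal q over the current state only (a standard marginal-particle-filter expression, not copied from the slide):

```latex
% Marginal particle filter: weights target the marginal posterior p(x_t | z_{1:t})
w_t^{(i)} \;\propto\;
  \frac{p\!\left(z_t \mid x_t^{(i)}\right)\,
        \sum_{j} w_{t-1}^{(j)}\, p\!\left(x_t^{(i)} \mid x_{t-1}^{(j)}, u_t\right)}
       {q\!\left(x_t^{(i)} \mid z_t, u_t\right)}
```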

23 Marginal Particle Update
We want to update using the old weights rather than conditioning on the old particles.

24 Marginal Particle Filters
We specify the proposal distribution ‘q’ via:
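A typical choice is a mixture over the previous weighted particles (a standard marginal-particle-filter construction, assumed here):

```latex
q\!\left(x_t \mid z_t, u_t\right)
  = \sum_{j=1}^{N} w_{t-1}^{(j)}\,
    q\!\left(x_t \mid x_{t-1}^{(j)}, z_t, u_t\right)
```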

25 Marginal Particle Algorithm
(1) Draw new particles from the proposal q. (2) Calculate the importance weights.
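A sketch of the two steps in Python, under the same assumptions as the earlier particle-filter sketch; the callables sample_kernel, kernel_pdf, trans_pdf, and likelihood are illustrative placeholders:

```python
import numpy as np

def marginal_pf_update(particles, weights, u, z,
                       sample_kernel, kernel_pdf, trans_pdf, likelihood, rng=None):
    """One marginal-particle-filter step (a sketch; all callables are placeholders).

    Proposal: the mixture q(x_t | z_t) = sum_j w_{t-1}^j q(x_t | x_{t-1}^j, z_t, u_t).
    sample_kernel(x_old, u, z, rng)    -> one draw from q(. | x_old, z, u)
    kernel_pdf(x_new, particles, u, z) -> q(x_new | x_old, z, u) for every old particle
    trans_pdf(x_new, particles, u)     -> p(x_new | x_old, u) for every old particle
    likelihood(z, new_particles)       -> p(z | x) for every new particle
    """
    rng = rng or np.random.default_rng()
    n = len(particles)
    # (1) Sample from the mixture: pick a parent j with probability w_{t-1}^j, then draw from its kernel.
    parents = rng.choice(n, size=n, p=weights)
    new_particles = np.array([sample_kernel(particles[j], u, z, rng) for j in parents])
    # (2) Marginal importance weights: likelihood * predictive mixture / proposal mixture.
    predictive = np.array([np.sum(weights * trans_pdf(x, particles, u)) for x in new_particles])
    proposal   = np.array([np.sum(weights * kernel_pdf(x, particles, u, z)) for x in new_particles])
    w = likelihood(z, new_particles) * predictive / proposal
    return new_particles, w / w.sum()
```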

26 Updating Map Features in the Marginal Model
Up to now we haven't assumed any map features. Let θ = {θ_t} denote, e.g., the distances of the robot at time t from the given landmarks. We write p_θ(z_t | x_{1:t}) for the probability of scan z_t given the positions x_{1:t}. We'd like to update θ. This update should be based not on the gradient of the conditional likelihood (conditioned on the sampled positions), but rather on the gradient of the marginal likelihood of the observations.

27 Taking the ‘right’ derivative
The gradient of the conditional likelihood, taken along a single sampled path, is highly non-robust; we are essentially taking derivatives of noise. By contrast, the gradient of the marginal likelihood is robust and represents the 'right' derivative.
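In symbols, with θ the map parameters and p_θ the scan likelihood, the contrast is (a hedged rendering, using the notation introduced on the previous slide):

```latex
% Non-robust: gradient of the likelihood conditioned on one sampled path
\nabla_{\theta} \log p_{\theta}\!\left(z_t \mid x_{1:t}\right)

% Robust ('right') derivative: gradient of the marginal likelihood of the observations
\nabla_{\theta} \log p_{\theta}\!\left(z_{1:t}\right)
  = \nabla_{\theta} \log \int p_{\theta}\!\left(z_{1:t}, x_{1:t}\right)\, dx_{1:t}
```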

28 Estimating the Gradient of a Map
We have the following decomposition of the gradient into two terms.
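One standard way to expand this gradient so that it splits into the two terms treated on the next slides (the usual score-estimation identity, assumed rather than taken from the slide):

```latex
\nabla_{\theta} \log p_{\theta}\!\left(z_t \mid z_{1:t-1}\right)
  = \frac{\displaystyle\int \nabla_{\theta}\!\left[
        p_{\theta}\!\left(z_t \mid x_t\right)\,
        p_{\theta}\!\left(x_t \mid z_{1:t-1}\right)\right] dx_t}
       {p_{\theta}\!\left(z_t \mid z_{1:t-1}\right)}
% By the product rule, one term carries \nabla_\theta \log p_\theta(z_t | x_t);
% the other carries \nabla_\theta of the predictive density, which the
% 'derivative weights' approximate as a discrete sum over particles.
```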

29 Simplification We can then show for the first term that:

30 Simplification II
For the second term, we convert the integral into a discrete sum by defining 'derivative weights' and combining them with the standard weights.

31 Estimating the Gradient
We can further write that:

32 Gradient (continued) We can therefore update the gradient weights via:

33 Parameter Updates We update θ by:
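A typical gradient-ascent form of this update, assuming a step-size sequence λ_t and the particle estimate of the gradient from the preceding slides:

```latex
\theta_{t+1}
  = \theta_t + \lambda_t\,
    \widehat{\nabla_{\theta} \log p_{\theta}\!\left(z_t \mid z_{1:t-1}\right)}
    \Big|_{\theta = \theta_t}
```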

34 Normalization
The β's are normalized differently from the w's; in effect we define a separate normalization for the β's and then compute the normalized gradient estimate from them.

35 Weight Updates

36 The Bayesian Viewpoint
Retain a posterior sample of θ at time t-1; call it θ^(i), i = 1, ..., I. At time t, update this sample.

37 Multi-Robot Models
Write the poses and scan statistics with a robot index, for each of the r robots. At each time point the needed weights carry r indices. We also need to update the derivative weights; the derivative is now a matrix derivative.

38 Multi-Robot SLAM
The parameter θ is now a matrix (rows indexed by time, columns indexed by robot). Updates depend on derivatives with respect to each time point and with respect to each robot.

