Robotic Mapping
6.834 Student Lecture
Itamar Kahn, Thomas Lin, Yuval Mazor
Decoupled Stochastic Mapping
A Computationally Efficient Method for Large-Scale Concurrent Mapping and Localization (John J. Leonard and Hans Jacob S. Feder, MIT, 2000)
Robotic Mapping Problem
Acquire a spatial model of a robot's environment
Identify features in the environment (e.g., landmarks, distinctive objects or shapes in the environment, etc.)
Estimate the robot location in reference to the features
Correct for noise (error in estimation) contributed by the sensors and controls
What is DSM?
A feature-based approach to Concurrent Mapping and Localization (CML)
SM: use Extended Kalman Filtering (EKF) to build a map through the spatial relationships of features
PROBLEM: EKF-based solutions are O(n^2), where n is the number of features; this results from the number of correlations between the vehicle and the features
SOLUTION: break the map into submaps and apply SM only within each submap
What is a Feature?
A map is obtained by defining the dynamics and observation functions of visual features
Determine the relevant visual features
These may be specific to the environment being mapped (e.g., walls in a room, obstacles in an underwater environment, etc.)
F[4]: the whole wall is a single feature in the map
Overview
Kalman and Extended Kalman Filters
Conventional Stochastic Mapping
Decoupled Stochastic Mapping
Algorithm Testing
Kalman Filter Mini Tutorial
The mini tutorial is an adaptation of a tutorial presented at ACM SIGGRAPH 2001 by Greg Welch and Gary Bishop (UNC). The slides of the tutorial, along with more information (papers, software, links, etc.), are available online.
Kalman Filter
The KF operates by:
Predicting the new state and its uncertainty
Correcting with the new measurement
IN: noisy data --> OUT: less noisy estimates
Kalman Filter Example: 2D Position-Only (e.g., 2D Tablet)
Process model: x_k = A x_{k-1} + w_{k-1} (state transition matrix A, state x, process noise w)
Measurement model: z_k = H x_k + v_k (measurement matrix H, measurement z, measurement noise v)
Kalman Filter Example: Preparation and Initialization
State transition: A
Process noise covariance: Q
Measurement noise covariance: R
Initialization: state estimate \hat{x}_0 at t_0 and error covariance estimate P_0 at t_0
Kalman Filter Example: Predict and Correct
Predict:
\hat{x}_k^- = A \hat{x}_{k-1} (predict next state)
P_k^- = A P_{k-1} A^T + Q (predicted error covariance)
Correct:
K_k = P_k^- H^T (H P_k^- H^T + R)^{-1} (Kalman gain, which minimizes the a posteriori error covariance)
\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-) (correct the state for the discrepancy between predicted and actual measurement)
P_k = (I - K_k H) P_k^- (correct the error covariance)
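As a concrete illustration of the predict/correct equations above, here is a minimal NumPy sketch for the 2D position-only example (A = I, H = I). The noise covariances and the fake measurements are illustrative assumptions, not the tutorial's numbers.

```python
import numpy as np

def kf_predict(x, P, A, Q):
    """Time update: project the state and error covariance forward."""
    x_pred = A @ x                       # predict next state
    P_pred = A @ P @ A.T + Q             # predicted error covariance
    return x_pred, P_pred

def kf_correct(x_pred, P_pred, z, H, R):
    """Measurement update: blend the prediction with the new measurement."""
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain (minimizes a posteriori covariance)
    x = x_pred + K @ (z - H @ x_pred)    # correct for predicted-vs-actual discrepancy
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P

# 2D position-only example: state = [x, y], measurement = noisy [x, y]
A = np.eye(2)            # position assumed constant between steps
H = np.eye(2)            # position measured directly
Q = 1e-4 * np.eye(2)     # illustrative process noise covariance
R = 1e-2 * np.eye(2)     # illustrative measurement noise covariance
x, P = np.zeros(2), np.eye(2)

for z in [np.array([0.9, 1.1]), np.array([1.0, 0.95])]:  # two fake tablet readings
    x, P = kf_predict(x, P, A, Q)
    x, P = kf_correct(x, P, z, H, R)
```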
Kalman Filter
The filter cycles between the Time Update (Predict) and the Measurement Update (Correct)
Kalman Filter Example: Extend the example to 2D Position-Velocity
Process model: the state becomes [x, y, x_dot, y_dot]^T, and A propagates the position by the velocity over each time step
Measurement model: z_k = H x_k + v_k, where H selects only the position components
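For the position-velocity extension, only the shapes of A and H change relative to the sketch above; a plausible constant-velocity version (state [x, y, vx, vy], position-only measurements, assumed time step dt) looks like this:

```python
import numpy as np

dt = 0.1  # assumed sampling interval

# Constant-velocity process model: position is advanced by velocity each step.
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

# Only the position components are observed.
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
```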
Kalman Filter
But the Kalman filter is not enough!
Only matrix operations are allowed (it only works for linear systems):
The measurement must be a linear function of the state
The next state must be a linear function of the previous state
It can't estimate non-linear variables (e.g., gain, rotation, projection, etc.)
Extended Kalman Filter
Nonlinear process (model):
Process dynamics: A becomes a(x)
Measurement: H becomes h(x)
Filter reformulation:
Use functions instead of matrices
Use Jacobians to project forward and to relate the measurement to the state (first-order Taylor expansion)
Extended Kalman Filter
Predict:
\hat{x}_k^- = f(\hat{x}_{k-1}, u_{k-1}, 0)
P_k^- = A_k P_{k-1} A_k^T + W_k Q_{k-1} W_k^T
Correct:
K_k = P_k^- H_k^T (H_k P_k^- H_k^T + V_k R_k V_k^T)^{-1}
\hat{x}_k = \hat{x}_k^- + K_k (z_k - h(\hat{x}_k^-, 0))
P_k = (I - K_k H_k) P_k^-
A is the Jacobian matrix of partial derivatives of f with respect to x
W is the Jacobian matrix of partial derivatives of f with respect to w
H is the Jacobian matrix of partial derivatives of h with respect to x
V is the Jacobian matrix of partial derivatives of h with respect to v
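The same predict/correct cycle can be sketched for a nonlinear system: the nonlinear functions propagate the state, and their Jacobians propagate the covariance. The unicycle motion model and the range-bearing observation of a known landmark below are illustrative choices, not taken from the slides.

```python
import numpy as np

def ekf_step(x, P, u, z, Q, R, landmark):
    """One EKF predict/correct cycle for a unicycle robot observing one landmark."""
    v, w, dt = u                                  # forward speed, turn rate, time step

    # Predict: x_pred = f(x, u), P_pred = A P A^T + Q, with A = df/dx
    theta = x[2]
    x_pred = x + np.array([v*dt*np.cos(theta), v*dt*np.sin(theta), w*dt])
    A = np.array([[1, 0, -v*dt*np.sin(theta)],
                  [0, 1,  v*dt*np.cos(theta)],
                  [0, 0,  1]])
    P_pred = A @ P @ A.T + Q

    # Correct: h(x) = [range, bearing] to the landmark, H = dh/dx
    dx, dy = landmark[0] - x_pred[0], landmark[1] - x_pred[1]
    q = dx**2 + dy**2
    h = np.array([np.sqrt(q), np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx/np.sqrt(q), -dy/np.sqrt(q),  0],
                  [ dy/q,          -dx/q,          -1]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    innov = z - h
    innov[1] = (innov[1] + np.pi) % (2*np.pi) - np.pi   # wrap the bearing residual
    x_new = x_pred + K @ innov
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```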
Stochastic Mapping
Use SM to generate maps (solve CML)
A size-varying Kalman filter: the representation grows as features are added and updated
Build a map through the spatial relationships of features
Stochastic Mapping
The state vector stacks the estimated locations of the robot and of the features in the map: \hat{x} = [\hat{x}_r^T \hat{x}_1^T ... \hat{x}_n^T]^T
The estimated error covariance P holds the robot and feature covariances along with their cross-covariances
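A minimal sketch of this size-varying representation: the robot pose and all feature estimates live in one stacked state vector, and the covariance matrix gains a block row and column whenever a feature is added. Initializing the new feature as uncorrelated is a simplification for illustration, not Leonard and Feder's exact formulation.

```python
import numpy as np

class StochasticMap:
    """Stacked state [robot; feature_1; ...; feature_n] with full covariance."""

    def __init__(self, robot_pose, robot_cov):
        self.x = np.asarray(robot_pose, dtype=float)   # starts with the robot state only
        self.P = np.asarray(robot_cov, dtype=float)

    def add_feature(self, feature_xy, feature_cov):
        """Augment the state and covariance with a newly observed feature."""
        n = len(self.x)
        self.x = np.concatenate([self.x, feature_xy])
        P_new = np.zeros((n + 2, n + 2))
        P_new[:n, :n] = self.P            # existing robot/feature covariances
        P_new[n:, n:] = feature_cov       # simplified: new feature starts uncorrelated
        self.P = P_new                    # cross-covariances fill in via later EKF updates

m = StochasticMap(robot_pose=[0.0, 0.0, 0.0], robot_cov=0.01 * np.eye(3))
m.add_feature(np.array([2.0, 1.0]), 0.5 * np.eye(2))
print(m.x.shape, m.P.shape)   # (5,) (5, 5)
```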
Stochastic Mapping
The dynamic model of the robot is given by x(k+1) = f(x(k), u(k)) + w(k), where u(k) is the control input and w(k) is the process noise
The observation model for the system is given by z(k) = h(x(k)) + v(k), where v(k) is the measurement noise
Augmented Stochastic Mapping
Given these assumptions, an extended Kalman filter (EKF) is employed to estimate the state and covariance.
Decoupled Stochastic Mapping
Stochastic Mapping has complexity O(n^2)
Solution: DSM
Divide the environment into multiple submaps
Each submap has a vehicle position estimate and a set of feature estimates
Decoupled Stochastic Mapping
DSM: divide the map into smaller submaps
Figure: a map of landmarks and its inverse covariance matrix; dependencies are local, and the map is divided into 4 submaps
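A minimal sketch of the submap bookkeeping: each submap keeps its own vehicle estimate and its own small feature set, and the EKF update runs only on the submap the vehicle currently occupies, so the cost depends on the submap size rather than on the whole map. The bounding-box region test is an illustrative assumption.

```python
import numpy as np

class Submap:
    """One DSM submap: its own vehicle estimate plus a local feature map."""
    def __init__(self, bounds):
        self.bounds = bounds            # (xmin, xmax, ymin, ymax) of this region
        self.vehicle = None             # (state, covariance) while the vehicle is here
        self.features = {}              # feature id -> (estimate, covariance)

    def contains(self, xy):
        xmin, xmax, ymin, ymax = self.bounds
        return xmin <= xy[0] < xmax and ymin <= xy[1] < ymax

def active_submap(submaps, vehicle_xy):
    """The EKF update is run only on the submap the vehicle is currently in."""
    for sm in submaps:
        if sm.contains(vehicle_xy):
            return sm
    raise ValueError("vehicle left the mapped region")

# Environment split into 4 submaps, e.g. quadrants of a 20 m x 20 m area.
submaps = [Submap((0, 10, 0, 10)), Submap((10, 20, 0, 10)),
           Submap((0, 10, 10, 20)), Submap((10, 20, 10, 20))]
print(active_submap(submaps, np.array([12.0, 3.0])).bounds)
```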
How do we move from map to map?
Two mechanisms:
Cross-map relocation (the vehicle moves from submap A into submap B)
Cross-map updating (information is passed between submaps A and B)
Single-pass vs. Multi-pass DSM
Decoupled Stochastic Mapping
Vehicle travels to a previously visited area: Cross-map relocation
Decoupled Stochastic Mapping
Cross-map updating: facilitate spatial convergence by bringing more accurate vehicle estimates from lower-numbered to higher-numbered maps
Using an EKF, estimate the vehicle location in submap B: the vehicle state and covariance from submap A serve as the measurement/prediction for the vehicle state in submap B
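The slide's recipe ("use the state and covariance from A as the prediction for the state in B") can be sketched as a Kalman-style fusion of two estimates of the same vehicle pose. This is a simplified illustration of the information transfer, not Leonard and Feder's exact equations; the direct (identity) measurement model is an assumption.

```python
import numpy as np

def cross_map_update(x_b, P_b, x_a, P_a):
    """Update submap B's vehicle estimate using submap A's estimate of the same pose.

    Treats (x_a, P_a) as a direct observation of the vehicle state in B, i.e. an
    EKF correction step with an identity measurement model (illustrative sketch).
    """
    H = np.eye(len(x_b))
    S = H @ P_b @ H.T + P_a              # innovation covariance
    K = P_b @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_b + K @ (x_a - H @ x_b)
    P_new = (np.eye(len(x_b)) - K @ H) @ P_b
    return x_new, P_new

# Submap A has a tighter estimate of the vehicle than submap B.
x_a, P_a = np.array([5.0, 2.0, 0.1]), 0.01 * np.eye(3)
x_b, P_b = np.array([5.3, 1.8, 0.0]), 0.20 * np.eye(3)
print(cross_map_update(x_b, P_b, x_a, P_a)[0])   # pulled toward A's more accurate estimate
```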
Methods Comparison
Full covariance ASM
Single-pass DSM
Multi-pass DSM
Testing
Limitations
Sensor noise is modeled as a Gaussian process
Limited map dimensionality
Hybrid Approaches
A Real-Time Algorithm for Mobile Mapping with Applications to Multi-Robot and 3D Mapping
Sebastian Thrun, Carnegie Mellon University
Wolfram Burgard, University of Freiburg
Dieter Fox, Carnegie Mellon University
Overview
Concurrent mapping and localization using 2D laser range finders
Mapping: fast scan-matching
Localization: sample-based probabilities
Motivation: 3D maps and large cyclic environments
Benefits
Computation is all real-time
Builds 3D maps
Handles cycles in a map
Accurate map generation in the absence of odometric data
Mapping Basics
A map is a collection of sensor scans, o, and robot positions (poses), s
For every time t, a new scan and pose are added to the map:
M_t is the map at time t
o_t is the scan at time t
s_t is the pose at time t
d_t = (o_t, s_t) is the data at time t, and D_t = (d_1, ..., d_t) collects all data up to time t
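A minimal sketch of this bookkeeping in Python, with illustrative names: the map is simply a growing list of (scan, pose) pairs.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Pose = Tuple[float, float, float]          # (x, y, heading)

@dataclass
class ScanPose:
    scan: List[float]                      # one range scan o_t
    pose: Pose                             # the pose s_t at which it was taken

@dataclass
class Map:
    entries: List[ScanPose] = field(default_factory=list)

    def add(self, scan, pose):
        """M_t = M_{t-1} plus the new (scan, pose) pair d_t = (o_t, s_t)."""
        self.entries.append(ScanPose(scan, pose))

m = Map()
m.add(scan=[1.8, 2.0, 2.1], pose=(0.0, 0.0, 0.0))
```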
Map Likelihood
The most likely map is the one that maximizes the posterior probability of the map given all of the data: m* = argmax_m P(m | D_t)
Mapping
The posterior over the pose s after moving a distance a from s' is given by the motion model p(s | a, s')
This PDF has an elliptical/banana shape
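The banana shape arises from compounding rotational and translational odometry noise; sampling a simple odometry motion model makes it visible. The decomposition into (rot1, trans, rot2) and the noise levels below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def sample_motion(pose, a, n=1000, rot_noise=0.05, trans_noise=0.02, rng=None):
    """Sample poses s ~ p(s | a, s') for an odometry reading a = (rot1, trans, rot2)."""
    rng = rng or np.random.default_rng(0)
    x, y, th = pose
    rot1, trans, rot2 = a
    r1 = rot1 + rot_noise * rng.standard_normal(n)
    tr = trans + trans_noise * rng.standard_normal(n)
    r2 = rot2 + rot_noise * rng.standard_normal(n)
    xs = x + tr * np.cos(th + r1)
    ys = y + tr * np.sin(th + r1)
    ths = th + r1 + r2
    return np.stack([xs, ys, ths], axis=1)   # scatter-plot xs, ys to see the banana

samples = sample_motion(pose=(0, 0, 0), a=(0.3, 2.0, 0.1))
```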
PDF Intuition
If a scan shows free space, it is unlikely that future scans will show obstacles in that space
Darker regions indicate a lower probability of an obstacle
Maximizing Map Likelihood
Goal: find the most likely map given all the data the robot has seen
Infeasible to maximize in real time
Two possibilities:
Have the robot stop and calculate after every scan (not real-time)
Assume the map is correct and just add new data (large error growth)
Background Methods
Incremental Localization
Expectation Maximization
Incremental Localization (IL)
Assume the previous map and localizations are accurate
Append new sensor scans to the old map
Localize based on the updated map
Can be done in real time
Fails in cyclic environments, as the error grows unbounded
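A schematic version of the IL loop: each new scan is matched against the map built so far, starting from the odometry prediction, and is appended at the resulting pose. The scan_match routine is passed in as a placeholder, since its details are not part of this sketch.

```python
import math

def compose(pose, delta):
    """Apply a relative odometry motion (dx, dy, dth in the robot frame) to a pose."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def incremental_localization(scans, odometry, scan_match):
    """Incremental localization: localize each scan against the map so far.

    scan_match(scan, map_so_far, initial_guess) -> most likely pose (assumed helper).
    Past poses are never revised, so errors accumulate around cycles.
    """
    map_so_far = []                       # list of (scan, pose) pairs
    pose = (0.0, 0.0, 0.0)                # start at the origin
    for scan, odo in zip(scans, odometry):
        guess = compose(pose, odo)        # odometry-predicted pose
        pose = scan_match(scan, map_so_far, guess)
        map_so_far.append((scan, pose))   # append to the map; never change it later
    return map_so_far
```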
Incremental Localization
IL never corrects old errors based on new information
Errors can grow unbounded
While traversing a cycle in a map, error growth leads the robot to "get lost" and the map breaks down
Expectation Maximization (EM)
Store scans and pose data probabilistically
Search through all possible previous maps (from times 0 to t) and find the most likely maps
Recalculate after each scan, or after a set number of scans
Expectation Maximization
Can handle cyclic environments
Batch algorithms: not real-time
Goal
Combine IL and EM in a real-time algorithm that can handle maps with cycles
Use posterior estimation like in EM
Incremental map construction with maximum likelihood estimators as in IL
Conventional Incremental Map
Given a scan and odometry reading, determine the most likely pose. Use that pose to increment the map. Never go back to change it.
Conventional Incremental Map
This approach works in non-cyclic environments, but:
Pose errors necessarily grow
Past poses cannot be revised
Search algorithms cannot find solutions to close loops
Incremental Map Problem
Posterior Incremental Mapping
Basic premise: use Markov localization to compute the full posterior over robot poses
The probability distribution over poses given the sensor data: Bel(s_t) = p(s_t | D_t)
Posterior Incremental Mapping
The posterior is where the robot believes it is
It can be incrementally updated over time with the standard Bayes filter recursion:
Bel(s_t) = \eta \, p(o_t | s_t, m_{t-1}) \int p(s_t | a_{t-1}, s_{t-1}) \, Bel(s_{t-1}) \, ds_{t-1}
The map is then extended with the new scan at the most likely pose: m_t = m_{t-1} \cup \{(o_t, \hat{s}_t)\}
Posterior Incremental Mapping
Use the posterior belief to determine the most likely pose
Uncertainty grows during a loop
The robot has a larger window to search to close the loop
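A discrete sketch of the incremental posterior update above: the belief over a set of candidate poses is pushed through the motion model and then reweighted by the observation likelihood. The motion and perceptual models are passed in as functions (with the map implicit in p_obs), since their exact forms are not specified here; the discretization itself is an illustrative assumption.

```python
import numpy as np

def bayes_update(bel, poses, motion, o, p_motion, p_obs):
    """One Markov-localization step over a discrete set of candidate poses.

    bel:       belief over poses at time t-1 (array of probabilities)
    poses:     the candidate poses themselves
    motion:    the odometry reading a_{t-1}
    o:         the new observation o_t
    p_motion:  function p(pose_j | motion, pose_i)
    p_obs:     function p(o | pose_j), the perceptual model (map held implicitly)
    """
    n = len(poses)
    # Prediction: push the old belief through the motion model.
    predicted = np.zeros(n)
    for j in range(n):
        predicted[j] = sum(p_motion(poses[j], motion, poses[i]) * bel[i] for i in range(n))
    # Correction: reweight by the observation likelihood and normalize.
    new_bel = np.array([p_obs(o, poses[j]) for j in range(n)]) * predicted
    return new_bel / new_bel.sum()
```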
Implementation Details
Take samples of the posterior belief: saves computation and is easier to generalize
Use gradient descent on each sample to find the global maximum of the likelihood function
Using a single pose is like hill-climbing from one node; using posterior samples is like hill-climbing from many nodes, which is more likely to find the global maximum
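A sketch of the hill-climbing idea above: start a local gradient ascent of the pose likelihood from each posterior sample and keep the best result. The likelihood and gradient functions, step size, and iteration count are assumed placeholders.

```python
import numpy as np

def best_pose_from_samples(samples, log_likelihood, grad, step=0.05, iters=50):
    """Hill-climb the pose likelihood from each posterior sample; keep the best.

    Starting from many samples instead of a single pose estimate makes it more
    likely that one of the climbs reaches the global maximum.
    """
    best_pose, best_value = None, -np.inf
    for s in samples:
        pose = np.asarray(s, dtype=float)
        for _ in range(iters):
            pose = pose + step * grad(pose)      # simple fixed-step gradient ascent
        value = log_likelihood(pose)
        if value > best_value:
            best_pose, best_value = pose, value
    return best_pose
```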
Backwards Correction
When a loop closes successfully, we can go back and correct our pose estimates
Distribute the error Δs_t among all poses in the loop
Use gradient descent over all poses in the loop to maximize the likelihood
Distributing the error is not the exact maximum-likelihood solution, but it is a reasonable intermediate step
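One simple way to realize "distribute the error Δs_t among all poses in the loop" is to spread the correction linearly along the loop; the linear weighting below is an illustrative choice, and the per-pose gradient descent mentioned on the slide is omitted.

```python
import numpy as np

def distribute_loop_error(loop_poses, delta_s):
    """Spread the loop-closure correction delta_s over all poses in the loop.

    Pose k of n receives the fraction (k+1)/n of the total correction, so the
    first pose barely moves and the last pose absorbs the full error. This is a
    reasonable intermediate step before re-optimizing each pose's likelihood.
    """
    loop_poses = np.asarray(loop_poses, dtype=float)
    delta_s = np.asarray(delta_s, dtype=float)
    n = len(loop_poses)
    fractions = np.arange(1, n + 1) / n
    return loop_poses + fractions[:, None] * delta_s[None, :]

poses = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
corrected = distribute_loop_error(poses, delta_s=np.array([-0.4, 0.2, 0.0]))
```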
Handling a Cycle
Multi-Robot Extensions
Using posterior estimation extends naturally to environments with multiple robots
Each robot need not know any other robot's initial pose
BUT every robot must localize itself within the map of an initial Team Leader robot
Multi-Robot Extensions
Use Monte Carlo Localization
Initially, any location is likely
Posterior estimation localizes the robot in the Team Leader's map
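A minimal Monte Carlo Localization sketch for this setting, assuming the new robot knows the team leader's map bounds and has some observation-likelihood function for scans against that map (both are illustrative assumptions): particles start uniformly ("initially any location is likely") and are reweighted and resampled as scans arrive.

```python
import numpy as np

def initialize_particles(n, bounds, rng=None):
    """Uniform particles over the team leader's map: any location is equally likely."""
    rng = rng or np.random.default_rng(0)
    xmin, xmax, ymin, ymax = bounds
    xs = rng.uniform(xmin, xmax, n)
    ys = rng.uniform(ymin, ymax, n)
    ths = rng.uniform(-np.pi, np.pi, n)
    return np.stack([xs, ys, ths], axis=1)

def mcl_update(particles, scan, obs_likelihood, rng=None):
    """Reweight particles by the scan likelihood in the leader's map, then resample."""
    rng = rng or np.random.default_rng(1)
    weights = np.array([obs_likelihood(scan, p) for p in particles])
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```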
Results - Cycle Mapping
Ground rules:
Every scan is used for localization
Scans are appended to the map every two meters
Random odometric errors are injected (30° or 1 meter)
The injected error produces a large pose error during the cycle, but within an acceptable range of the "true" pose
Posterior estimation finds the true pose and corrects prior beliefs
Mapping without Odometry
Same as before, but with no odometric data
Traversing the cycles leads to very large error growth
Once again, on cycle completion the errors are found and fixed
The final map is virtually identical to the map generated with odometric data
Limitations
Non-optimal
Nested cycles
Dynamic environments: changing the map backwards in time can be dangerous
Pseudo-real time