Map Building without Localization by Dimensionality Reduction Techniques
Takehisa YAIRI, RCAST, University of Tokyo
Session: Vision, Graphics and Robotics
Outline
- Background: motivation, purpose, and the problem to consider
- Related Works: SLAM, and mapping with DR methods
- Proposed Framework (LFMDR): basic idea, assumptions, formalization
- Experiment: visibility-only and bearing-only mappings
- Conclusion
Motivation
Map building is an essential capability for intelligent agents. SLAM (Simultaneous Localization and Mapping) has been the mainstream approach for many years and is very successful both in theory and practice.
I like SLAM too, but something feels missing:
- Are mapping and localization really inseparable?
- Are the motion and measurement models necessary?
- What about the aspect of map building as an abstraction of the world?
- Is there another map-building framework?
Purpose
Reconsider robot map building from the viewpoint of dimensionality reduction and propose an alternative framework: Localization-Free Mapping by Dimensionality Reduction (LFMDR).
- No localization; no motion or measurement models
- Heuristic: closely located objects tend to share similar histories of being observed by the robot
[Figure: reduce dimensionality while preserving locality, over time steps t = 1, 2, ..., N]
Map Building Problem to Consider
- Feature-based map (i.e., not topological, not occupancy-grid): the map is represented by the 2-D coordinates of objects
- Motion (state-transition) and measurement (observation) models EXIST, but they are not necessarily known in advance
[Figure: the robot alternates between moving and observing; the map m holds the positions of objects; both models exist but may be unknown]
Related Works: SLAM [Thrun 02]
Problem: estimate the map m and the robot trajectory x1:t from the measurement data y1:t, given the motion model f and the measurement model g.
Solutions:
- Kalman filter with an extended state
- Incremental maximum likelihood [Thrun, et al. 98]
- Rao-Blackwellized particle filter [Montemerlo, et al. 02]
Limitations: the motion and measurement models must be given, and the estimates of the map and the robot position are coupled.
Related Works: Dimensionality Reduction and Mapping (1)
The idea of using DR for robot map building is itself not new:
- [Brunskill & Roy 05]: PPCA to extract low-dimensional geometric features (line segments) from range measurements
- [Pierce & Kuipers 97]: PCA to obtain low-level mappings between the robot's actions and perceptions (a sensorimotor mapping)
[Figure: point features (high-dimensional) -> DR -> line segments (low-dimensional)]
Related Works: Dimensionality Reduction and Mapping (2)
Another existing idea is to estimate the robot's states (locations, poses) from a sequence of high-dimensional observation data:
- Appearance manifolds [Ham, et al. 05]: LPP + Kalman filter
- Action respecting embedding [Bowling, et al. 05]: SDE
- WiFi-SLAM [Ferris, et al. 07]: GP-LVM
[Figure: observation space -> DR -> state space (coordinates x1, x2; pose q)]
Related Works: Dimensionality Reduction and Mapping (cont.)
These approaches treat the row vectors of the observation data matrix (rows = time 1 to N; columns = feature dimensions y(1), ..., y(j), ..., y(M)) as data points, and estimate the states x1:N and the measurement model g from y1:N, given the motion model f.
[Figure: observation space -> DR -> trajectory in the state space]
Proposed Framework: LFMDR (1) Assumptions
1. All objects are uniquely identifiable.
2. The measurement model is decomposable into homogeneous sub-models for individual objects, i.e., an observation about an object depends roughly only on that object's location, given the map and the robot's position.
3. The locations of at least 3 objects (the "anchor objects") are known in advance.
The second assumption may look too restrictive, but ...
Proposed Framework: LFMDR (2) Interpretation as a DR Problem
Imagine the mapping between an object's position (2-dimensional XY coordinates) and its history of observation (an N-dimensional column vector of the observation data matrix, indexed by time).
Key heuristic: "If two objects are closely located, their histories of observation are similar."
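The heuristic can be made concrete with a toy example (the data below is entirely hypothetical, not from the experiments): when two objects sit near each other, their columns of the observation data matrix are close in Euclidean distance.

```python
import numpy as np

# Toy visibility histories for three objects over N = 8 time steps.
# Objects 0 and 1 are imagined to be close together, so their columns
# (observation histories) nearly coincide; object 2 is imagined far away
# and tends to be visible at different times.
Y = np.array([
    # obj0 obj1 obj2
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 1],
], dtype=float)

d01 = np.linalg.norm(Y[:, 0] - Y[:, 1])  # close pair: small distance
d02 = np.linalg.norm(Y[:, 0] - Y[:, 2])  # distant pair: large distance
print(d01, d02)
assert d01 < d02  # similar histories <=> (heuristically) nearby objects
```

This is exactly the locality a DR method is asked to preserve when it compresses each N-dimensional column down to 2-D.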
Proposed Framework : LFMDR (3) Illustration
Proposed Framework: LFMDR (4) Procedure
1. Explore the environment and obtain the observation history data Y1:N.
2. Break Y1:N into a set of column vectors {y(j)1:N}, j = 1, ..., M.
3. Apply a DR method to the column vectors to obtain a set of 2-D vectors.
4. Perform the optimal affine transformation w.r.t. the anchor objects to obtain the final estimates.
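The four steps above can be sketched in a few lines of numpy. Everything here is illustrative: the observation histories are synthesized as an (unknown to the estimator) affine function of object position, so plain PCA suffices as the DR step and three anchors pin down the map exactly; real sensor data would generally call for one of the nonlinear DR methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: M objects in a 2.5 m square (unknown to the mapper).
M, N = 30, 100
X_true = rng.uniform(0.0, 2.5, size=(M, 2))

# Step 1: synthesize observation histories. Column j of Y (shape N x M) is
# the N-step history of object j; here it is an affine image of its position.
A = rng.normal(size=(N, 2))
b = rng.normal(size=N)
Y = A @ X_true.T + b[:, None]

# Step 2: treat each object's observation history (a column) as one data point.
P = Y.T                              # shape (M, N)

# Step 3: dimensionality reduction (PCA via SVD) down to 2-D.
Pc = P - P.mean(axis=0)
_, _, Vt = np.linalg.svd(Pc, full_matrices=False)
Z = Pc @ Vt[:2].T                    # 2-D embedding, arbitrary affine frame

# Step 4: optimal affine transform w.r.t. the anchor objects (3 known positions).
anchors = [0, 1, 2]
D = np.hstack([Z[anchors], np.ones((len(anchors), 1))])
W, *_ = np.linalg.lstsq(D, X_true[anchors], rcond=None)
X_est = np.hstack([Z, np.ones((M, 1))]) @ W   # final map estimate

mpe = np.linalg.norm(X_est - X_true, axis=1).mean()
print(f"mean position error: {mpe:.1e} m")
```

Because the synthetic histories are exactly affine in position, the recovered map matches the ground truth up to numerical precision; with real measurements the DR step only approximates this.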
Features of LFMDR (1) (Comparison with SLAM)
Common: both are based on a state-space model.
Advantages:
- No assumption that the motion and measurement models are known
- The map is estimated directly, without robot localization (localization-free mapping)
Disadvantages:
- Off-line procedure
- A larger amount of data is required
- Assumes no missing data
Features of LFMDR (2) (Comparison with Other DR-based Approaches)
- vs. [Brunskill & Roy 05]: LFMDR applies DR globally to build the whole map, rather than locally to extract geometric features
- vs. [Ham, et al. 05], [Bowling, et al. 05], [Ferris, et al. 07]: LFMDR applies DR to the column vectors of the observation matrix instead of the row vectors (i.e., it estimates object positions rather than robot positions)
Experiment
Applied to two different situations:
- [Case 1] Visibility-only mapping
- [Case 2] Bearing-only mapping
Common settings:
- 2.5 m x 2.5 m square region; 50 objects (incl. 4 anchors)
- Exploration with random direction changes and obstacle avoidance
- WEBOTS simulator
Evaluation (averaged over 25 runs):
- Mean Position Error (MPE)
- Mean Orientation Error (MOE): based on differences in triangle orientation (A-B-C vs. A-C-B) among object triples
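The two error measures can be sketched as follows. MPE is the mean Euclidean distance between estimated and true object positions. My reading of the triangle illustration (an assumption, not spelled out on the slide) is that MOE is the fraction of object triples A, B, C whose orientation (clockwise vs. counter-clockwise) differs between the estimated and the true map; such a measure catches reflections of the map that position error alone understates.

```python
import numpy as np
from itertools import combinations

def orientation(a, b, c):
    """Sign of the cross product (b-a) x (c-a):
    +1 = counter-clockwise (A-B-C), -1 = clockwise (A-C-B), 0 = collinear."""
    return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))

def mean_position_error(X_est, X_true):
    return np.linalg.norm(X_est - X_true, axis=1).mean()

def mean_orientation_error(X_est, X_true):
    triples = list(combinations(range(len(X_true)), 3))
    flipped = sum(orientation(*X_est[list(t)]) != orientation(*X_true[list(t)])
                  for t in triples)
    return flipped / len(triples)

# A reflected map: every triangle's orientation flips, so MOE hits 100%
# even though the positions are only moderately wrong.
X_true = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
X_mirror = X_true * np.array([-1., 1.]) + np.array([1., 0.])
print(mean_position_error(X_mirror, X_true))     # -> 1.0 m
print(mean_orientation_error(X_mirror, X_true))  # -> 1.0 (all triples flip)
```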
DR Methods
- Linear PCA
- SMACOF-MDS [De Leeuw 77]: (a) equal weights, (b) kNN-based weighting
- Kernel PCA [Scholkopf, et al. 98]: (a) Gaussian, (b) polynomial
- ISOMAP [Tenenbaum, et al. 00]
- LLE [Roweis & Saul 00]
- Laplacian Eigenmap [Belkin & Niyogi 02]
- Hessian LLE [Donoho & Grimes 03]
- SDE [Weinberger, et al. 05]
Parameters (k, s^2, d) were tuned manually.
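As a concrete reference point for the simplest entry in this list, classical MDS (the "LPCA (CMDS)" baseline in the result tables) fits in a few lines: double-center the squared distance matrix and take the top eigenvectors. This is a generic textbook sketch, not the authors' code.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed n points from an n x n Euclidean distance
    matrix D via double centering and eigendecomposition (equivalent to
    linear PCA when D holds exact Euclidean distances)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered points
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # take the top-`dim` components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Recover a 2-D configuration (up to rotation/reflection) from distances alone.
X = np.array([[0., 0.], [1., 0.], [0., 2.], [3., 1.]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Z = classical_mds(D)
D2 = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
print(np.allclose(D, D2))   # pairwise distances are preserved
```

The nonlinear methods in the list (Isomap, LLE, etc.) differ mainly in which distances or neighborhood relations they feed into a spectral step like this one.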
Case 1: Visibility-Only Mapping (Description)
Build a map using only visibility information, i.e., whether each object is visible (1) or not (0).
Assumption in this simulation: an object is visible if the horizontal visual angle of its non-occluded part is larger than 5 deg.
Case 1: Visibility-Only Mapping (Visibility Measurements)
The observation history vector of an object is its column of the binary visibility data matrix (rows = time, columns = object ID).
Each column is normalized by its Euclidean norm, to compensate for the varying frequencies with which objects are observed, before being mapped into the observation history space.
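The column normalization is a one-liner; the tiny matrix below is hypothetical and only illustrates the operation.

```python
import numpy as np

# Hypothetical binary visibility matrix: rows = time steps, columns = objects
# (1 = the object was visible at that step).
V = np.array([
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
], dtype=float)

# Normalize each column to unit Euclidean norm, so that a frequently observed
# object does not dominate distances in observation history space.
norms = np.linalg.norm(V, axis=0)
Y = V / np.where(norms > 0, norms, 1.0)   # guard against never-observed objects

print(np.linalg.norm(Y, axis=0))   # every observed column now has norm 1
```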
Case 1: Visibility-Only Mapping (Maps After 2000 Time Steps)
[Figure: estimated maps by LPCA; KPCA (Gaussian, s^2 = 0.5); SMACOF (k=5); Isomap (k=6); LLE (k=8); LEM (k=6); HLLE (k=8); SDE (k=7)]
Case 1 : Visibility-Only Mapping Mean Position Errors
Case 1: Visibility-Only Mapping (Final Map Errors)

DR method     Opt. param.   MPE [m]   Rank   MOE [%]   Rank
LPCA (CMDS)   -             1.055     10     18.19     8
SMA (UNWGT)   -             0.421     7      5.86      6
SMA (WGT)     k=5           0.206     4      4.83      4
KPCA (GAUS)   s^2=0.5       0.926     8      23.29     9
KPCA (POLY)   d=8           0.953     9      27.03     10
ISOMAP        k=6           0.177     2      4.11      2
LLE           k=8           0.241     5      5.40      5
LEM           k=6           0.352     6      8.17      7
HLLE          k=8           0.192     3      4.24      3
SDE           k=7           0.138     1      3.65      1
Case 2: Bearing-Only Mapping (Description)
Build a map using only bearing measurements (relative direction angles to objects), motivated by the recent popularity of bearing-only SLAM.
Assumes all objects are always visible (no missing observations).
Case 2: Bearing-Only Mapping (Bearing Measurements)
The raw bearing data matrix holds the angles θj,t ∈ (−π, π] (rows = time 1..N, columns = object ID 1..M). Raw angles suffer from a discontinuity at ±π, so each bearing angle is replaced by the unit directional vector (cos θj,t, sin θj,t); each object's observation history then becomes a 2N-dimensional vector in observation history space.
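The cos/sin encoding and the discontinuity it removes can be shown with two hypothetical bearing histories (the exact row layout of the 2N-dimensional vector is my choice here; the slide interleaves cos and sin per time step, which is equivalent up to a permutation).

```python
import numpy as np

# Hypothetical bearings: theta[t, j] = bearing of object j at time t, in (-pi, pi].
theta = np.array([
    [ 3.10, -0.50],
    [-3.10,  0.20],   # nearly the same direction as 3.10 rad, but far as a number
])

# Replace each angle by a unit direction vector; stacking cos-rows over sin-rows
# turns each object's N-step history into a 2N-dimensional column.
H = np.concatenate([np.cos(theta), np.sin(theta)], axis=0)  # shape (2N, M)

# The wrap-around discontinuity is gone: 3.10 and -3.10 rad are now close.
raw_gap = abs(theta[0, 0] - theta[1, 0])                 # ~6.2 as raw angles
vec_gap = np.linalg.norm(H[[0, 2], 0] - H[[1, 3], 0])    # small chord distance
print(raw_gap, vec_gap)
```

Without this encoding, a DR method would wrongly treat two objects seen near the ±π boundary as maximally dissimilar.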
Case 2: Bearing-Only Mapping (Maps After 2000 Time Steps)
[Figure: estimated maps by LPCA; SMACOF (k=8); Isomap (k=9); LLE (k=8); LEM (k=7); SDE (k=7)]
Case 2 : Bearing-Only Mapping Mean Position Errors
Case 2: Bearing-Only Mapping (Final Map Errors)

DR method     Opt. param.   MPE [m]   Rank   MOE [%]   Rank
LPCA (CMDS)   -             0.168     5      2.33      5
SMA (UNWGT)   -             0.101     4      1.38      3
SMA (WGT)     k=8           0.0609    1
KPCA (GAUS)   s^2=1.0       3.47      9      49.2      9
KPCA (POLY)   d=2           0.605     8      9.15      8
ISOMAP        k=9           0.0979    3      1.83      4
LLE           k=8           0.173     6      3.03      6
LEM           k=7           0.367     7      8.46      7
HLLE          -             NA
SDE           k=7           0.0741    2      1.36

(*) This might imply that the distribution approaches a linear one.
Conclusion
- Reconsidered robot map building from the viewpoint of dimensionality reduction
- Proposed a new framework named LFMDR: motion and measurement models are not required, and there is no need to estimate the robot's poses (localization-free); however, a larger amount of data is needed
- Tested it on two types of sensor measurements: visibility information and bearing angles
- Compared a variety of DR methods
Future Works
- Relaxation of restrictions: missing measurements, the data association problem
- Scalability: mapping a larger number of objects
- On-line algorithm: tracking of moving objects
- Multi-sensor fusion: e.g., mapping with both bearing and range measurements