1
Ross Creed Temple University ICAR Workshop June 20, 2011
2
Talk Outline
10 min – Background and Motivation
10 min – Discussion of the Temple Map Evaluation Toolkit
10 min – Demo and Questions
3
Project Overview
The project seeks to address the following questions:
– Is it possible to develop a world modeling framework that can enable autonomous navigation of AGVs, with particular emphasis on the requirements and constraints imposed by manufacturing environments such as factory floors, where humans and machines work side by side? Such a framework should not only provide an accurate representation of the environment but should also be smart and flexible with respect to task-dependent goals and sensor modalities. Furthermore, can the framework be made sufficiently generic to extend to other domains?
– How can we develop scientifically sound methodologies to evaluate world modeling schemes for use in manufacturing environments? It is our view that such methodologies are best developed by taking into account requirements imposed by end users and domain-specific constraints that are grounded in practicality.
– How can we design experiments and test methods for performance evaluation and benchmarking that characterize the constituent components of navigation and world modeling systems, yielding statistically significant results and quantifiable performance data?
4
Map Evaluation: Motivation & Background
(Quantitative) Map quality is a performance measure of how well a robot or team of robots can explore, understand, and interpret the operational domain; it is thus indicative of the utility of the robot-generated map.
Evaluation philosophy:
– To design and develop capable, dependable, and affordable robotic systems, their performance must be measurable (quantitative).
– Repeatable and reproducible test artifacts and measurement methodologies for capturing performance data focus research efforts, provide direction, and accelerate the advancement of mobile robot capabilities (objective).
– Only by involving users, developers, and integrators in a coupled fashion can meaningful solutions be produced that withstand the ever-varying requirements imposed by: 1) tasks that are application- or environment-dependent, 2) hardware and software advancements/restrictions that affect the development cycle, and 3) budgetary constraints that interrupt and hamper sustained progress.
5
Motivation & Background
Qualitative comparison of the resulting maps, e.g. by visual inspection, is commonly used to assess performance. It is common practice in the literature to compare newly developed mapping algorithms with earlier methods by presenting images of the generated maps. This is suboptimal, particularly when applied to large-scale maps: it is clearly a poor basis for evaluation, and it makes results hard to inter-compare. The problem is prevalent across multiple domains: rescue, manufacturing, military, service robotics, and more. No accepted standard or consensus on objective evaluation procedures exists today for quantitatively measuring the performance of robotic mapping systems against user-defined requirements.
6
Which is better? A. Kleiner (2009) or J. Elseberg (2010)?
7
Quantifying Robotic Mapping
Mapping, in general, is the spatial analysis of environmental features of interest. Inherent to this process is its task dependency; hence there is no 'optimal general mapping'. Mapping can be divided into two classes: topographic and topological. Topographic mapping is concerned with detailed, correct geometry; topological mapping only with correct spatial relations between features. These are often referred to as 'global correctness' vs. 'local accuracy' and can be related to grid-based and pose-based approaches in map evaluation.
8
A toolkit was developed for the 2008 RoboCup Rescue competition: the 'Jacobs Map Analysis Toolkit' (A. Birk, Jacobs University Bremen). It is purely grid-based (topographic evaluation). Depending on the application, this can introduce massive (fatal) problems. Example: A is the correct environment; B and C are different generated maps. [Figure: ground-truth map A and generated maps B and C.] In a rescue scenario, if B is favored, responders would try to approach the victim through the wrong (left) hallway.
9
In contrast, pose-based approaches would point out the global error as well as the local correctness; map C would be preferred. However, there are of course counter-examples showing the disadvantages of pose-based approaches.
10
Analytic research on such examples can lead to general approaches to evaluation. In the example case, a simple solution would be a hybrid approach: a combined pose/grid-based evaluation that retains the advantages of both approaches. [Figure: panels a–d.]
11
The Temple Hybrid Map Evaluation Toolkit
– Compares a created map against a ground-truth map
– Accounts for both pose-based and geometric differences between maps
– Portable (Java implementation) and flexible (supports segment-based and picture-based maps)
12
Running the Toolkit
Thanks to Java Web Start, the only download required is the ~1 KB JNLP file. After this file is launched, the required Java files are fetched and the program runs. No installation required, and it is OS-independent.
13
Map Import
Since map evaluation is not a standardized procedure, it follows that map formats are not standardized either. The program is set up to handle segment-based maps and picture-based maps (JPG, GIF, PNG, etc.). Picture-based maps are converted into segments using binarization (with a human in the loop) and a line-finding algorithm.
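The binarization step described above can be sketched as a simple threshold pass over a grayscale image, with the threshold chosen by the human in the loop. This is a minimal illustration only; the class and method names are assumptions, not the toolkit's actual API.

```java
// Illustrative sketch of image binarization: pixels darker than a
// user-chosen threshold become occupied cells (walls), the rest free space.
// Names here are hypothetical, not the toolkit's real API.
public class Binarizer {
    // gray: grayscale pixel values 0-255; threshold: chosen by the user
    public static boolean[][] binarize(int[][] gray, int threshold) {
        boolean[][] occupied = new boolean[gray.length][];
        for (int r = 0; r < gray.length; r++) {
            occupied[r] = new boolean[gray[r].length];
            for (int c = 0; c < gray[r].length; c++) {
                occupied[r][c] = gray[r][c] < threshold; // dark pixel -> wall
            }
        }
        return occupied;
    }

    public static void main(String[] args) {
        int[][] img = {{255, 10}, {200, 0}};
        boolean[][] occ = binarize(img, 128);
        System.out.println(occ[0][0] + " " + occ[0][1]); // free, occupied
    }
}
```

A line-finding pass (not shown) would then trace runs of occupied cells into the segment representation used by the rest of the pipeline.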
14
Map Chopping
For pose-based evaluation, the ground-truth map can be “chopped” into sections. These sections can then be rotated, translated, and scaled to better fit the target map.
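The per-chop adjustment above amounts to a similarity transform (rotation, translation, uniform scale) applied to a segment's endpoints. Below is a minimal sketch under that assumption; the class name and flat-array representation are illustrative, not taken from the toolkit.

```java
// Sketch of applying a rotation + translation + uniform scale to one
// "chopped" map section, stored as a flat array of endpoint coordinates.
// Representation and names are illustrative, not the toolkit's API.
public class SegmentTransform {
    // points: flat array [x0, y0, x1, y1, ...]; theta in radians
    public static double[] apply(double[] points, double theta,
                                 double tx, double ty, double scale) {
        double cos = Math.cos(theta), sin = Math.sin(theta);
        double[] out = new double[points.length];
        for (int i = 0; i < points.length; i += 2) {
            double x = points[i] * scale, y = points[i + 1] * scale;
            out[i] = cos * x - sin * y + tx;     // rotate, then translate
            out[i + 1] = sin * x + cos * y + ty;
        }
        return out;
    }

    public static void main(String[] args) {
        // rotate the unit segment (0,0)-(1,0) by 90 degrees
        double[] s = apply(new double[]{0, 0, 1, 0}, Math.PI / 2, 0, 0, 1);
        System.out.printf("(%.1f, %.1f)%n", s[2], s[3]); // approx (0.0, 1.0)
    }
}
```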
15
Map Alignment
In the final step before evaluation, the ground-truth map is fitted to the target map while keeping track of the transform applied to each map segment. These transformations can be performed by the user, by an algorithm, or by a combination of both.
16
Map Evaluation
The final quantitative value combines the transformation values from the Map Alignment step with the geometric measure given by the Jacobs Toolkit implementation:
M = αT + βA + γB
– T is the total translation of all the chops, and α is the translational weight factor
– A is the total absolute angular difference of all the chops, and β is the angular weight factor
– B is the geometric difference between the maps (derived from the measure in Birk et al.), and γ is the geometrical weight factor
The weights in this equation can be adjusted to give more emphasis to the pose-based component (increase α and β) or the geometric component (increase γ). For this measure, 0 means the maps are perfectly matched; as the measure increases, the maps are less similar.
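The score above can be sketched directly from its definition. This is a straightforward rendering of M = αT + βA + γB, assuming per-chop translation magnitudes and angular differences are already available; how the toolkit actually accumulates them internally is not specified in the slides.

```java
// Sketch of the hybrid score M = alpha*T + beta*A + gamma*B:
// T = summed translation of all chops, A = summed absolute angular
// difference, B = geometric difference between maps (Jacobs measure).
// 0 means a perfect match; larger values mean less similar maps.
// The per-chop bookkeeping here is an illustrative assumption.
public class HybridScore {
    public static double score(double[] translations, double[] angleDiffs,
                               double geomDiff,
                               double alpha, double beta, double gamma) {
        double t = 0, a = 0;
        for (double d : translations) t += d;          // total translation T
        for (double d : angleDiffs) a += Math.abs(d);  // total |angular diff| A
        return alpha * t + beta * a + gamma * geomDiff;
    }

    public static void main(String[] args) {
        double m = score(new double[]{1.0, 2.0}, new double[]{-0.5}, 4.0,
                         1.0, 2.0, 0.5);
        System.out.println(m); // 1*3 + 2*0.5 + 0.5*4 = 6.0
    }
}
```

Raising γ relative to α and β shifts the score toward grid-style geometric agreement; raising α and β emphasizes how far the chops had to be moved, i.e. the pose-based component.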
17
Demo of Toolkit
18
Questions?