ICRA 2002 Topological Mobile Robot Localization Using Fast Vision Techniques Paul Blaer and Peter Allen Dept. of Computer Science, Columbia University.

Presentation transcript:

Topological Mobile Robot Localization Using Fast Vision Techniques
Paul Blaer and Peter Allen
Dept. of Computer Science, Columbia University
ICRA 2002, May 13, 2002

Autonomous Vehicle for Exploration and Navigation in Urban Environments
The AVENUE Robot: Autonomous. Operates in outdoor urban environments. Builds accurate 3-D models.
Onboard hardware: GPS, DGPS, laser scanner, network camera, pan-tilt unit (PTU), compass, sonars, PC.
Current Localization Methods: Odometry. Differential GPS. Vision.

Range Scanning Outdoor Structures

Italian House: Textured 3-D Model

Second view

Flatiron Building: 3 Novel Views

Main Vision System (Georgiev and Allen ’02)

Topological Localization Odometry and GPS can fail, and fine vision techniques need an estimate of the robot’s current position. Omnidirectional Camera. Histogram Matching of Omnidirectional Images: Fast. Rotationally invariant. Finds the region of the robot. This region serves as a starting estimate for the main vision system.

Related Work
Omnidirectional Imaging for Navigation: Ulrich and Nourbakhsh ’00 (with histogramming); Cassinis et al. ’96; Winters et al. ’00.
Topological Navigation: Kuipers and Byun ’91; Radhakrishnan and Nourbakhsh ’99.
Other Approaches to Robot Localization: Castellanos et al. ’98; Durrant-Whyte et al. ’99; Leonard and Feder ’99; Thrun et al. ’98; Dellaert et al. ’99; Simmons and Koenig ’95.
AVENUE: Allen, Stamos, Georgiev, Gold, Blaer ’01.

Building the Database Divide the environment into a logical set of regions. Reference images must be taken in all of these regions. The images should be taken in a zig-zag pattern to cover as many different locations as possible. Each image is reduced to three 256-bucket histograms, for the red, green, and blue color bands.
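The reduction step described above can be sketched as follows (a minimal NumPy illustration; the function name and array conventions are mine, not from the paper):

```python
import numpy as np

def rgb_histograms(image):
    """Reduce an H x W x 3 RGB image (uint8) to three 256-bucket
    histograms, one for each of the red, green, and blue bands."""
    hists = []
    for band in range(3):
        counts, _ = np.histogram(image[..., band], bins=256, range=(0, 256))
        hists.append(counts)
    return np.stack(hists)  # shape (3, 256)
```

Each database entry is then just a region label plus one (3, 256) array, which is far cheaper to store and compare than the full omnidirectional image.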

An Image From the Omnidirectional Camera

Masking the Image The camera points up to get a clear picture of the buildings. The camera pointing down would give images of the ground’s brick pattern - not useful for histogramming. With the camera pointing up, the sun and sky enter the picture and cause major color variations. We mask out the center of the image to block out most of the sky. We also mask out the superstructure associated with the camera.
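The masking above amounts to keeping an annulus of the image: pixels inside an inner radius (sky) and outside an outer radius (camera superstructure and mirror edge) are excluded. A sketch, with hypothetical radii chosen by hand for a given camera rig:

```python
import numpy as np

def annulus_mask(image, r_inner, r_outer, center=None):
    """Return (masked_image, valid) where valid marks pixels inside the
    annulus r_inner <= r <= r_outer around the image center. The center
    of the image (sky) and everything beyond r_outer are zeroed out.
    Fixed superstructure would need an additional hand-drawn mask."""
    h, w = image.shape[:2]
    cy, cx = center if center is not None else (h // 2, w // 2)
    yy, xx = np.ogrid[:h, :w]
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    valid = (r2 >= r_inner ** 2) & (r2 <= r_outer ** 2)
    return image * valid[..., None].astype(image.dtype), valid
```

Only pixels where `valid` is true should be fed into the histograms, so the sky and rig never bias the color distribution.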

Environmental Effects Indoor environments: controlled lighting conditions, so no histogram adjustments are needed. Outdoor environments: large variations in lighting. We use a histogram normalization with the percentage of color at each pixel: R' = R/(R+G+B), G' = G/(R+G+B), B' = B/(R+G+B).
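One common reading of "percentage of color at each pixel" is the chromaticity normalization, where each band is divided by the pixel's total intensity before histogramming. A sketch under that assumption (function name mine):

```python
import numpy as np

def normalized_histograms(image, bins=256):
    """Histogram per-pixel color fractions, e.g. r = R/(R+G+B), instead
    of raw intensities, to reduce sensitivity to outdoor lighting."""
    img = image.astype(np.float64)
    total = img.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    frac = img / total               # each channel now lies in [0, 1]
    hists = [np.histogram(frac[..., b], bins=bins, range=(0.0, 1.0))[0]
             for b in range(3)]
    return np.stack(hists)
```

Because the fractions are invariant to a uniform brightness scaling, a scene photographed in sun and in shade produces much more similar histograms than with raw RGB values.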

Indoor Image Non-Normalized Histogram

Outdoor Image Normalized Histogram

Matching an Unknown Image with the Database Compare unknown image to each image in the database. Initially we treat each band (r, g, b) separately. The difference between two histograms is the sum of the absolute differences of each bucket. Better to sum the differences for all three bands into a single metric than to treat each band separately. The region of the database image with the smallest difference is the selected region for this unknown.
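The matching rule above (sum of absolute bucket differences over all three bands, smallest total wins) can be sketched as:

```python
import numpy as np

def histogram_distance(h1, h2):
    """Sum of absolute per-bucket differences, summed over all three bands."""
    return np.abs(h1.astype(np.int64) - h2.astype(np.int64)).sum()

def localize(unknown_hist, database):
    """database: list of (region_label, histogram) pairs.
    Returns the region of the database image with the smallest difference."""
    best_region, _ = min(database,
                         key=lambda entry: histogram_distance(unknown_hist, entry[1]))
    return best_region
```

Combining the three bands into a single metric, rather than voting per band, lets a strong match in one band compensate for noise in another.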

Image Matching to the Database

The Test Environments Indoor (Hallway View) Outdoor (Aerial View) Outdoor (Campus Map)

Indoor Results

Region | Images Tested | Non-Normalized % Correct | Normalized % Correct
1      | —             | —                        | 95%
2      | 12            | 83%                      | 92%
3      | 9             | 77%                      | 89%
4      | 5             | 20%                      | —
5      | 23            | 74%                      | 91%
6      | 9             | 89%                      | 78%
7      | 5             | 0%                       | 20%
8      | 5             | 100%                     | 40%
Total  | 89            | 78%                      | 80%

Ambiguous Regions South Hallway North Hallway

Outdoor Results

Region | Images Tested | Non-Normalized % Correct | Normalized % Correct
1      | 50            | 58%                      | 95%
2      | 50            | 11%                      | 39%
3      | 50            | 29%                      | 71%
4      | 50            | 25%                      | 62%
5      | 50            | 49%                      | 55%
6      | 50            | 30%                      | 57%
7      | 50            | 28%                      | 61%
8      | 50            | 41%                      | 78%
Total  | 400           | 34%                      | 65%

The Best Candidate Regions Correct Region, Lowest Difference Wrong Region, Second-Lowest Difference

Conclusions In 80% of the cases we were able to narrow down the robot’s location to only 2 or 3 possible regions without any prior knowledge of the robot’s position. Our goal was to reduce the number of possible models that the fine-grained visual localization method needed to examine. Our method effectively quartered the number of regions that the fine-grained method had to test.

Future Work What is needed is a fast secondary discriminator to distinguish between the 2 or 3 candidate regions. Histograms are inherently limited because they rely entirely on the color of the scene. To counter this we want to incorporate more geometric data into our database, such as edge images.