Vision for Mobile Robot Navigation

Contents
- Introduction
- Indoor Navigation
  - Map-Based Approaches
  - Map-Building
  - Mapless Navigation
- Outdoor Navigation
  - Outdoor Navigation in Structured Environments
  - Unstructured Outdoor Navigation
- Illumination and Outdoor Navigation
- The Future
- Conclusions

1. Introduction Two separate strands of work are covered:
- Vision-based navigation for indoor robots. Ex) FINALE, which uses a geometrical representation of space, and NEURO-NAV, which uses a topological representation that helps the robot figure out which neural networks to use in what part of the space.
- Vision-based navigation for outdoor robots. Ex) NAVLAB, which can analyze an intensity image to ascertain the position and boundaries of a roadway.

2. INDOOR NAVIGATION Some of the first vision systems developed for mobile robot navigation relied heavily on the geometry of space and other metrical information for driving the vision processes and performing self-localization. Three broad groups:
- Map-Based Navigation
- Map-Building-Based Navigation
- Mapless Navigation

2.1 Map-Based Approaches Map-based navigation consists of providing the robot with a model of the environment. These models may contain different degrees of detail, varying from a complete CAD model of the environment to a simple graph of interconnections or interrelationships between the elements in the environment.

2.1 Map-Based Approaches In some of the early vision systems, the knowledge of the environment took one of the following forms:
- Occupancy map: a grid representation in which each object in the environment is represented by a 2D projection of its volume onto the horizontal plane.
- Virtual Force Field (VFF): an occupancy map in which each occupied cell exerts a repulsive force on the robot, while the goal exerts an attractive force (see the sketch below).
- Squeezing 3D space into a 2D map (S-map): requires a spatial analysis of the three-dimensional landmarks and the projection of their surfaces into a 2D map. It incorporates uncertainty in an occupancy map to account for errors in the measurement of the position coordinates associated with the objects in space and in the quantization of those coordinates into "occupied/vacant" cells.
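
The VFF idea can be made concrete in a few lines. A minimal sketch, assuming a binary occupancy grid and inverse-square repulsion; the gains, falloff, and function names here are illustrative choices, not taken from the original VFF formulation:

```python
import numpy as np

def vff_force(grid, robot, goal, k_rep=1.0, k_att=1.0):
    """Virtual Force Field sketch: every occupied cell pushes the robot
    away (magnitude falling off with squared distance), while the goal
    pulls it in with a constant-gain attractive force."""
    robot = np.asarray(robot, dtype=float)
    force = k_att * (np.asarray(goal, dtype=float) - robot)   # attraction
    for i, j in zip(*np.nonzero(grid)):                       # occupied cells
        d = robot - np.array([i, j], dtype=float)
        dist = np.linalg.norm(d)
        if dist > 1e-6:
            force += k_rep * d / dist**3                      # repulsion
    return force

grid = np.zeros((10, 10))
grid[5, 5] = 1                                                # one obstacle
print(vff_force(grid, robot=(4.0, 4.0), goal=(9.0, 9.0)))
```

The resulting force vector can be fed directly to the steering control, which is what makes the VFF attractive for real-time obstacle avoidance.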

2.1 Map-Based Approaches Some of the techniques that have been proposed for dealing with uncertainties in occupancy maps:
- Histogram fields (Vector Field Histogram), where the detection of an object or the detection of a vacancy leads to the incrementing or decrementing, respectively, of the value of the corresponding cell in the grid.
- Fuzzy operators, where each cell holds a fuzzy value representing the occupancy or the vacancy of the corresponding space.

2.1 Map-Based Approaches The robot can use the provided map to estimate its position (self-localization) by matching the observation (image) against the expectation (landmark description in the database). The computation involved in vision-based localization can be divided into four steps:
1. Acquire sensory information.
2. Detect landmarks.
3. Establish matches between observation and expectation.
4. Calculate position (one concrete way to do this is sketched below).
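
Steps 1-3 depend on the sensor suite and the landmark database; step 4 admits a compact closed-form solution once correspondences exist. Below is a minimal sketch of one standard way to do it (a 2D rigid alignment via SVD); it assumes the matching step has already paired landmark coordinates in the robot frame with their world-frame counterparts, and it is not the method of any particular system surveyed here:

```python
import numpy as np

def solve_pose(obs_xy, world_xy):
    """Recover the robot's pose from matched landmarks: obs_xy are landmark
    coordinates in the robot frame, world_xy the same landmarks in the world
    frame. Returns rotation R and translation t (robot frame -> world)."""
    obs, wld = np.asarray(obs_xy, float), np.asarray(world_xy, float)
    oc, wc = obs.mean(axis=0), wld.mean(axis=0)     # centroids
    H = (obs - oc).T @ (wld - wc)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = wc - R @ oc                                 # robot origin in world
    return R, t
```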

2.1 Map-Based Approaches We will refer to such approaches as absolute or global localization. Absolute localization is to be contrasted with incremental localization, in which it is assumed that the location of the robot is known approximately at the beginning of a navigation session and that the goal of the vision system is to refine the location coordinates. An expectation view can be generated on the basis of the approximately known initial location of the robot; when this expectation view is reconciled with the camera perception, the result is a more precise fix on the location of the robot.

2.1.1 Absolute Localization Since in absolute localization the robot’s initial pose is unknown, the navigation system must construct a match between the observation and the expectations as derived from the entire database. On account of the uncertainties associated with the observations, it is possible for the same set of observations to match multiple expectations.

2.1.1 Absolute Localization Localization method advanced by Atiya and Hager: the basic idea of their approach is to recognize in the camera image those entities that stay invariant with respect to the position and orientation of the robot as it travels in its environment.

2.1.1 Absolute Localization Atiya and Hager have used a set-based approach for dealing with the above-mentioned uncertainties and have cast the ambiguity issue in the form of the labeling problem of computer vision. In the set-based approach, uncertainty is expressed in the form of intervals (sets). A landmark j observed in the image plane is represented by a pair of observation intervals. The observation interval consists of the coordinates of that landmark with respect to the camera coordinate frame, plus or minus a certain tolerance ε, which represents the error in sensor measurement.

2.1.1 Absolute Localization A landmark location interval p consists of the coordinates of the landmark j with respect to the world frame, plus or minus a tolerance. A matching between an observation interval and a landmark location interval is a tuple, and a labeling is a set of such matchings. In a set-based model of uncertainty, each new observation results in new intervals. The observations are then combined by taking the intersection of all such intervals; this intersection should get smaller, i.e., the uncertainty should shrink, as more observations are accumulated.
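
A minimal sketch of this set-based bookkeeping, using an illustrative 1D interval representation (the actual system works with 2D position intervals and stereo observations):

```python
def intersect(a, b):
    """Intersection of two intervals (lo, hi); None if they are disjoint,
    which would signal an inconsistent matching."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Each observation constrains the robot's x-coordinate to an interval;
# combining observations by intersection can only shrink the uncertainty.
estimate = (0.0, 10.0)
for obs in [(2.0, 7.5), (3.1, 9.0), (2.8, 6.2)]:
    estimate = intersect(estimate, obs)
print(estimate)   # (3.1, 6.2)
```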

2.1.1 Absolute Localization The localization problem:
- Problem 1. Given n point location intervals in a world coordinate frame that represent the landmarks, and m pairs (stereo) of observation intervals in the robot camera coordinate frame, determine a consistent labeling, i.e., a consistent set of matchings between observation intervals and location intervals.
- Problem 2. Given the above labeling, find the set of robot positions P consistent with that labeling.
The first problem is a qualitative determination of what is being viewed and is hereafter referred to as the labeling problem. The second problem is the quantitative determination of where the viewer is located and is hereafter referred to as the location estimation problem.

2.1.1 Absolute Localization The first problem, the labeling problem, consists of finding not one but all possible labelings between the observation intervals and the landmark location intervals, and then choosing the best labeling according to some criterion. The authors suggest using the labeling which contains the largest number of matches. Another suggested criterion is to use all labelings with more than four matches.

2.1.1 Absolute Localization Set-Based Solution:
- Computing Correspondences
- Imaging Model and Calibration
- Stereo-Based Position Determination
- Transformation to Triangles
- Searching for Matches
- Runtime
- Choosing a Labeling
- Determining Robot Position
- Analysis of Solution Complexity

2.1.2 Incremental Localization
- Using Geometrical Representation of Space
  - Propagation of Positional Uncertainty through Commanded Motions
  - Projecting the Robot's Positional Uncertainty into the Camera Image
  - Kalman Filtering
- Using Topological Representation of Space

2.1.2 Incremental Localization Using Geometrical Representation of Space In a large number of practical situations, the initial position of the robot is known at least approximately. The localization algorithm must simply keep track of the uncertainties in the robot's position as it executes motion commands and, when the uncertainties exceed a bound, use its sensors for a new fix on its position. By and large, probabilistic techniques have evolved as the preferred approach for representing and updating the positional uncertainties as the robot moves.

The uncertainty in the position of the robot is represented by a Gaussian distribution; the uncertainty at each location of the robot is thus characterized by a mean and a covariance.

2.1.2 Incremental Localization
1. Derivation of a constraint equation that, taking into account the camera calibration matrix, relates the landmarks in the environment to either the image-space parameters or the Hough-space parameters.
2. Linearization of the constraint equation for the derivation of the Kalman filter equations.
3. Projection of the robot's positional uncertainty into the camera image (and into the Hough space) to find the set of candidate image features for each landmark in the environment.
4. Use of the Kalman filter to update the mean and the covariance matrix of the positional parameters of the robot as a landmark is matched with an image feature.

Propagation of Positional Uncertainty through Commanded Motions. Let \(p\) be the position of the robot and \(p'\) the position of the robot after the commanded motion; the relationship between the two parameter vectors is \(p' = h(p)\). The positional uncertainty propagates through the Jacobian \(J_h\) of the transformation function \(h\): \(\Sigma' = J_h \Sigma J_h^\top\).
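
A minimal sketch of this propagation for a planar pose (x, y, θ), assuming a simple drive-then-turn motion model; the kinematics and the noise term Q are illustrative, not FINALE's empirically constructed uncertainty model:

```python
import numpy as np

def propagate(pose, cov, d, dtheta, Q):
    """Propagate the pose mean and covariance through a commanded motion:
    drive forward a distance d, then rotate by dtheta. J is the Jacobian
    of the transformation function h; Q models added motion noise."""
    x, y, th = pose
    new_pose = np.array([x + d * np.cos(th),
                         y + d * np.sin(th),
                         th + dtheta])
    J = np.array([[1.0, 0.0, -d * np.sin(th)],
                  [0.0, 1.0,  d * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return new_pose, J @ cov @ J.T + Q    # first-order covariance update
```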

Projecting the Robot's Positional Uncertainty into the Camera Image. In order to determine where in the camera image one should look for a given landmark in the environment, we need to be able to project the uncertainty in the position of the robot into the camera image. This can be done with the help of the camera calibration matrix. In the projection equation:
- W: perspective factor
- T: camera calibration matrix
- H: transformation matrix
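
A minimal sketch of the projection itself, assuming H is a 4x4 world-to-camera transform and T a 3x4 calibration matrix acting on homogeneous points (the matrix shapes are assumptions for illustration):

```python
import numpy as np

def project(T, H, X_world):
    """Project a homogeneous world point into the image: map it into the
    camera frame with H, apply the calibration matrix T, and divide by
    the perspective factor w."""
    u, v, w = T @ (H @ X_world)
    return np.array([u / w, v / w])
```

Pushing the robot's pose covariance through the Jacobian of this mapping yields the image-plane uncertainty regions within which each landmark is searched for.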

2.1.2 Incremental Localization Figure: uncertainties around point landmarks. The Mahalanobis distance d = 1 was used to construct the uncertainty ellipses.

2.1.2 Incremental Localization Kalman filtering consists of updating the statistical parameters of the position of the robot after a landmark is matched with an image feature. The discussion of FINALE shows the critical role played by an empirically constructed model of the changes in the positional uncertainty of a robot as it executes translational and rotational motions. In general, sensor uncertainties, and the trade-offs that exist between sensor uncertainties and decision making for navigation, will also be important.
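
The measurement-update half of this machinery is the standard (extended) Kalman filter update; the following is a generic sketch, not FINALE's exact formulation:

```python
import numpy as np

def kalman_update(x, P, z, h, Hj, R):
    """One Kalman measurement update. x, P: pose mean and covariance;
    z: image feature matched to a landmark; h(x): feature predicted from
    the pose; Hj: Jacobian of h at x; R: measurement noise covariance."""
    y = z - h(x)                              # innovation
    S = Hj @ P @ Hj.T + R                     # innovation covariance
    K = P @ Hj.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ Hj) @ P
```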

2.1.2 Incremental Localization Figure: (a) the camera image and the superimposed expectation map; (b) output of the model-guided edge detector; (c) uncertainty regions associated with the ends of the model line features; (d) matching after Kalman filtering.

2.1.2 Incremental Localization Using Topological Representation of Space An entirely different approach to incremental localization utilizes a mostly topological representation of the environment. Ex) the NEURO-NAV system, in which a graph topologically representing the layout of the hallways is used for driving the vision processes.

2.1.2 Incremental Localization The heart of NEURO-NAV consists of two modules, the Hallway Follower and the Landmark Detector, each implemented using an ensemble of neural networks. These modules allow the robot to execute motions while simultaneously performing self-localization.

2.1.2 Incremental Localization Ex) The robot is at the lower end of corridor C2. If the motion command is for the robot to follow that corridor towards junction J2, turn left, and move down corridor C3, the Hallway Follower module will be invoked with the task of traversing corridor C2 while keeping a path parallel to the walls. The Landmark Detector will perform a sequence of searches for the landmarks contained in the attributed node of the graph corresponding to corridor C2, namely door d176, door d175, etc., until it finds the junction (J2).
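
A hypothetical encoding of such an attributed graph, with node names taken from the example above (the field names are invented for illustration and are not NEURO-NAV's data structures):

```python
# Corridors and junctions are nodes; each corridor node carries the
# landmarks the Landmark Detector should search for along it.
topo_map = {
    "C2": {"type": "corridor", "landmarks": ["d176", "d175"], "ends_at": "J2"},
    "J2": {"type": "junction", "connects": ["C2", "C3"]},
    "C3": {"type": "corridor", "landmarks": [], "ends_at": None},
}

def expected_landmarks(node):
    """What the Landmark Detector should look for at this node."""
    return topo_map[node].get("landmarks", [])

print(expected_landmarks("C2"))   # ['d176', 'd175']
```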

2.1.2 Incremental Localization Figures: the hierarchy of neural networks used to perform corridor-following; the image-processing steps in NEURO-NAV.

2.1.2 Incremental Localization Figure: (a) a typical example of an image of the hallway as seen by the camera; (b) the output of the edge detector; (c) relevant floor edges; (d) Hough map.

2.1.2 Incremental Localization NEURO-NAV was able to generate correct steering commands 86 percent of the time. It generated incorrect commands 10 percent of the time and issued a “no decision” in the remaining 4 percent of the cases. However, even when NEURO-NAV generates an incorrect steering command in a specific instance, it can correct itself at a later instant.

2.1.2 Incremental Localization Figures: the software architecture of FUZZY-NAV; the linguistic variables and their associated fuzzy terms used in FUZZY-NAV.

2.1.3 Localization Derived from Landmark Tracking One can do localization by landmark tracking when both the approximate location of the robot and the identity of the landmarks seen in the camera image are known, so that the landmarks can be tracked. The landmarks used may be either artificial or natural ones. Figure: examples of artificial landmarks.

2.1.3 Localization Derived from Landmark Tracking In most cases, landmark tracking in these systems is carried out using simple template matching. Another system that carries out landmark tracking in a map-based approach is by Hashima et al. In this system, however, the landmarks are natural. The system uses the correlation technique to track consecutive landmarks and to detect obstacles. For both landmark tracking and obstacle detection, the system uses a dedicated piece of hardware, called TRV, that is capable of calculating correlations rapidly.

2.1.3 Localization Derived from Landmark Tracking Localization is achieved by a multiple-landmark detection algorithm. This algorithm searches for multiple landmarks, compares the landmarks with their prestored templates, tracks the landmarks as the robot moves, and selects new landmarks from the map to be tracked. As the landmarks are matched with the templates, their 3D pose is calculated using stereo vision. The position of the robot is estimated through a comparison between the calculated landmark position and the landmark position in the map.
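
The tracking step of such a system reduces to correlating prestored templates against the current frame. Hashima et al. did this on dedicated TRV hardware; a software analogue using OpenCV's normalized cross-correlation (the threshold value is an illustrative choice) might look like:

```python
import cv2

def track_landmarks(frame, templates, threshold=0.8):
    """Match each prestored landmark template against the current frame
    using normalized cross-correlation; returns {name: (x, y)} giving the
    top-left corner of each sufficiently strong match."""
    found = {}
    for name, tmpl in templates.items():
        scores = cv2.matchTemplate(frame, tmpl, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(scores)   # best score and its location
        if best >= threshold:
            found[name] = loc
    return found
```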

2.2 Map-Building Model descriptions are not always easy to generate, especially if one also has to provide metrical information. The Stanford Cart used a single camera to take nine images spaced along a 50 cm slider. An interest operator was applied to extract distinctive features in the images, and these features were then correlated to generate their 3D coordinates. The features were tracked at each iteration of the program and marked in the grid and in the image plane. Moravec and Elfes later proposed a data structure, known as the occupancy grid, that is used for accumulating data from ultrasonic sensors.

2.2 Map-Building In a more recent contribution, Thrun has proposed an integrated approach that seeks to combine the best of the occupancy-grid-based and the topology-based approaches. The system first learns a grid-based representation using neural networks and Bayesian integration; the grid-based representation is then transformed into a topological representation.
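
The Bayesian-integration half of such a pipeline is commonly implemented as a log-odds occupancy update, in which independent pieces of evidence simply add; a generic sketch with illustrative increment values, not the code of the cited system:

```python
import math

L_OCC, L_FREE = 0.85, -0.4      # log-odds increments (illustrative values)

def update_cell(logodds, hit):
    """Integrate one sensor reading into one grid cell (log-odds form)."""
    return logodds + (L_OCC if hit else L_FREE)

def occupancy_prob(logodds):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

cell = 0.0                      # prior: probability 0.5
for hit in [True, True, False, True]:
    cell = update_cell(cell, hit)
print(occupancy_prob(cell))     # ~0.90 after three hits and one miss
```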

2.3 Mapless Navigation This category includes all systems in which navigation is achieved without any prior description of the environment. No maps are ever created. The needed robot motions are determined by observing and extracting relevant information about the elements in the environment. Navigation can only be carried out with respect to these elements.

2.3.1 Navigation Using Optical Flow Santos-Victor et al. have developed an optical-flow-based system that mimics the visual behavior of bees. It is believed that the predominantly lateral position of the eyes in insects favors a navigation mechanism using motion-derived features rather than depth information. Features such as "time-to-crash" (which depend on the speed) are more relevant than distance when it is necessary to, say, jump over an obstacle.

2.3.1 Navigation Using Optical Flow In robee, a divergent stereo approach was employed to mimic the centering reflex of a bee. The basic idea is to measure the difference between the image velocities computed over lateral portions of the left and right images and use this difference to guide the robot. To keep the robot centered in a hallway, a PID controller is used.
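
A minimal sketch of that control loop; the gains and the flow-magnitude averaging are illustrative choices, not robee's actual parameters:

```python
class PID:
    """Textbook PID controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        deriv = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def steering_command(flow_left, flow_right, pid, dt=0.1):
    """Centering reflex: compare the mean image speed (flow magnitudes) in
    the two lateral views; an imbalance means one wall is closer, so the
    controller steers away from it."""
    error = sum(flow_right) / len(flow_right) - sum(flow_left) / len(flow_left)
    return pid.step(error, dt)

pid = PID(kp=0.8, ki=0.05, kd=0.1)
print(steering_command([1.2, 1.0, 1.1], [0.6, 0.7, 0.5], pid))
```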

2.3.1 Navigation Using Optical Flow Figure: examples with (center and right) and without (left) the sustained behavior.