Localization for Mobile Robot Using Monocular Vision
Hyunsik Ahn
Tongmyong University, Jan. 2006



1. Introduction (1)
Self-localization methods for mobile robots:
- Position tracking: encoders, ultrasonic sensors, and other local sensors
- Global localization: laser range scanners, vision-based methods
Vision-based methods for indoor applications:
- Stereo vision: directly recovers geometric information, but needs complicated hardware and much processing time
- Omni-directional view: uses a conic mirror, low resolution
- Mono view using landmarks: relies on artificial landmarks

1. Introduction (2)
Related work on monocular methods:
- Sugihara (1988) did pioneering work on localization using vertical edges.
- Atiya and Hager (1993) used geometric tolerance to describe observation error.
- Kosaka and Kak (1992) proposed a model-based monocular vision system with a 3D geometric model.
- Munoz and Gonzalez (1998) added an optimization procedure.
- Talluri and Aggarwal (1996) considered the correspondence problem between a stored 3D model and 2D images in an outdoor urban environment.
- Aider et al. (2005) proposed an incremental model-based localization method using view-invariant regions.
- Another approach adopts the SIFT (Scale-Invariant Feature Transform) algorithm to compute correspondences between stored SIFT features and the images acquired during navigation.

1. Introduction (3)
A self-localization method using vertical lines from a single (mono) view is proposed:
- Indoor environment; uses horizontal and vertical line features (doors, furniture)
- Find vertical lines and compute their pattern vectors
- Match the lines with the corners of the map
- Find the position (x, y, θ) from the matched information

2. Localization algorithm
Fig. 1 The flowchart of self-localization: after map-making and path planning, each cycle takes an input image and detects line segments; once at least 3 line segments are found, the lines are matched with the map and the localization (x, y, θ) is computed; if the uncertainty exceeds a threshold T the cycle repeats, otherwise the robot continues until the destination is reached. A sketch of this loop follows.
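A rough sketch of this loop in Python (the helper names are hypothetical stubs standing in for the detection, matching, and pose-solving modules of the following sections):

```python
# Control flow of Fig. 1; callers supply the concrete modules.
def localize_until_confident(get_image, detect_lines, match_to_map,
                             solve_pose, uncertainty, T=1.0):
    """Repeat the localization cycle until the pose uncertainty drops below T."""
    while True:
        image = get_image()
        lines = detect_lines(image)      # vertical line features (Sec. 2.1)
        if len(lines) < 3:               # need >= 3 lines to fix (x, y, theta)
            continue                     # take another image
        matches = match_to_map(lines)    # correspondences with map corners (Sec. 2.2)
        pose = solve_pose(matches)       # Newton's method (Sec. 2.3)
        if uncertainty(pose) <= T:
            return pose                  # confident enough to move on
```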

2.1 Line feature detection
- Vertical Sobel operation
- Vertically projected histogram
- One-dimensional averaging and thresholding
- Local maxima are indexed as feature points
Fig. 2 Projected histogram and a local maximum (histogram over image column u; a local maximum above the threshold value marks a feature point)
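A minimal sketch of these four steps, assuming a grayscale image as a NumPy array; the averaging window and threshold ratio are illustrative choices, not the paper's values:

```python
import numpy as np

def vertical_line_features(gray, avg_window=5, thresh_ratio=0.3):
    """Return column indices u of vertical-line feature points."""
    # Vertical Sobel operation (responds to vertical edges).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    edges = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            edges += kx[i, j] * padded[i:i + h, j:j + w]
    # Vertically projected histogram of the edge magnitude.
    hist = np.abs(edges).sum(axis=0)
    # One-dimensional averaging, then thresholding.
    smooth = np.convolve(hist, np.ones(avg_window) / avg_window, mode="same")
    threshold = thresh_ratio * smooth.max()
    # Local maxima above the threshold are indexed as feature points.
    return [u for u in range(1, w - 1)
            if smooth[u] > threshold
            and smooth[u] >= smooth[u - 1] and smooth[u] > smooth[u + 1]]
```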

2.2 Correspondence of feature vectors (1)
- Uses the geometrical information of the line features in the map
- Feature vectors are defined with hue (H) and saturation (S); feature vectors of the regions to the right and left of each line are defined (Eq. 1)
- Check whether a line meets the floor region: contacted line vs. non-contacted line
- For a contacted line, define the visibility of its regions: visible region vs. occluded region
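A minimal sketch of such a pattern vector, assuming an HSV image and a fixed region width (the width, the vector layout, and the function names are assumptions for illustration):

```python
import numpy as np

def region_feature(hsv, u_left, u_right):
    """Mean hue and saturation of the image region between two columns."""
    assert u_right > u_left, "empty region"
    region = hsv[:, u_left:u_right]
    return region[:, :, 0].mean(), region[:, :, 1].mean()  # (H, S)

def line_feature_vector(hsv, u, width=20):
    """Pattern vector of a vertical line at column u: (H, S) of its left and right regions."""
    h_l, s_l = region_feature(hsv, max(u - width, 0), u)
    h_r, s_r = region_feature(hsv, u, min(u + width, hsv.shape[1]))
    return np.array([h_l, s_l, h_r, s_r])
```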

2.2 Correspondence using feature vectors (2)
- The feature vectors of the lines are matched with the map, distinguishing lines with both regions visible, lines with one visible region, and non-contacted lines.
- The correspondences of neighboring lines are checked against the geometrical relationships between the lines.
Fig. 3 Floor-contacted lines and visible regions. Contacted lines: x1, x2, x3; non-contacted line: x4. Visible regions: l1, l2, r2, l3, r3, r4; occluded regions: r1, l4.
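As a sketch of the matching step (a plain greedy nearest-neighbour match on the pattern vectors; the paper additionally exploits visibility and neighbor geometry, which this illustration omits):

```python
import numpy as np

def match_lines(line_vectors, map_vectors, max_dist=30.0):
    """Greedily match each line pattern vector to the closest map corner vector."""
    matches = {}
    for i, v in enumerate(line_vectors):
        dists = [np.linalg.norm(v - m) for m in map_vectors]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:   # reject implausible correspondences
            matches[i] = j
    return matches                # {line index: map corner index}
```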

2.3 Self-localization using vertical lines (1)
The coordinates of the feature points are matched to the camera coordinates of the map.
Fig. 4 Global and camera coordinates

2.3 Self-localization using vertical lines (2)
Fig. 5 Perspective transformation of camera coordinates. Notation: (u, v) are image plane coordinates; (x_c, y_c, z_c) are camera coordinates; feature points are expressed in camera coordinates and projected to features on the image plane; f is the focal length of the camera.
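Under this notation the perspective transformation is the standard pinhole projection (the slide's own symbols are reconstructed here with the usual conventions):

```latex
u = f\,\frac{x_c}{z_c}, \qquad v = f\,\frac{y_c}{z_c}
```

For a vertical line, the horizontal image coordinate u is the one value per line that carries pose information.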

2.3 Self-localization using vertical lines (3)
Camera coordinates can be transformed to world coordinates by a rigid-body transformation T (Eq. 2). Since the camera and world coordinates are related by a translation and a rotation, T is determined by the robot pose (Eq. 3).
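For a robot moving in the ground plane with pose (x, y, θ), one plausible instance of this rigid transformation is the following (the axis and sign conventions are assumptions about the camera frame), where (X, Y) are the world coordinates of a feature point:

```latex
\begin{aligned}
z_c &= (X - x)\cos\theta + (Y - y)\sin\theta, \\
x_c &= -(X - x)\sin\theta + (Y - y)\cos\theta .
\end{aligned}
```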

2.3 Self-localization using vertical lines (4)
Global coordinates are mapped to camera coordinates by the rigid transformation (Eq. 4), and the camera coordinates are projected by the perspective transformation (Eq. 5). Combining the two induces a system of nonlinear equations in the pose (Eq. 6).
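Under the planar sketch above, one plausible concrete form of this system has one equation per matched vertical line i, with world corner (X_i, Y_i) and observed image column u_i:

```latex
F_i(x, y, \theta)
= u_i - f\,\frac{-(X_i - x)\sin\theta + (Y_i - y)\cos\theta}
               {(X_i - x)\cos\theta + (Y_i - y)\sin\theta}
= 0, \qquad i = 1, \dots, n .
```

With three unknowns (x, y, θ), at least three matched lines are needed, which matches the "line segments ≥ 3" test in Fig. 1.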

2.3 Self-localization using vertical lines (5)
The Jacobian matrix of the system is given in Eq. 7. Newton's method finds the solution of the nonlinear equations by iterating Eq. 8 from a given initial value.
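A minimal sketch of this iteration, under the planar formulation above (the names are illustrative, and a forward-difference Jacobian stands in for the paper's Eq. 7):

```python
import numpy as np

def residuals(pose, corners, u_obs, f):
    """F(x, y, theta): one equation per matched vertical line."""
    x, y, th = pose
    dX, dY = corners[:, 0] - x, corners[:, 1] - y
    z_c = dX * np.cos(th) + dY * np.sin(th)    # depth along the optical axis
    x_c = -dX * np.sin(th) + dY * np.cos(th)   # lateral offset
    return u_obs - f * x_c / z_c

def newton_pose(pose0, corners, u_obs, f, iters=20, eps=1e-6):
    """Newton iteration p <- p - J^+ F(p) with a numerical Jacobian."""
    p = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        F = residuals(p, corners, u_obs, f)
        J = np.empty((len(F), 3))
        for k in range(3):                     # one Jacobian column per unknown
            dp = np.zeros(3); dp[k] = eps
            J[:, k] = (residuals(p + dp, corners, u_obs, f) - F) / eps
        step, *_ = np.linalg.lstsq(J, F, rcond=None)  # least squares if n > 3
        p -= step
        if np.linalg.norm(step) < 1e-9:        # converged
            break
    return p  # (x, y, theta)
```

With exactly three matched lines the least-squares step reduces to the plain Newton step; extra lines over-determine the system and improve robustness.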

3. Experimental results (1)
Table 1 Real positions, measured positions, and their errors for each test point (No.; X, Y in mm; angle in °)

3. Experimental results (2)
Fig. 6 Mobile robot
Fig. 7 The procedures of detecting vertical lines: (a) original image, (b) vertical edges, (c) projected histogram, (d) vertical lines

3. Experimental results (3)
Fig. 8 Input images of each sequence

3. Experimental results (4)
Fig. 9 The result of localization in the given map
Fig. 10 Errors along the Y axis

4. Conclusions
- A self-localization method using vertical line segments from a single (mono) view was proposed.
- Line features are detected from the projected histogram of the edge image.
- Pattern vectors and their geometrical properties are used to match the lines with the points of the map.
- A system of nonlinear equations is induced from the perspective and rigid transformations of the matched points, and Newton's method is used to solve it.
- The proposed monocular algorithm is simple and applicable to indoor environments.