Obstacle detection using v-disparity image


Obstacle detection using v-disparity image

Based on:
- Global Correlation Based Ground Plane Estimation Using V-Disparity Image, J. Zhao, J. Katupitiya and J. Ward, 2007
- Obstacle Detection with Stereo Vision for Off-Road Vehicle Navigation, A. Broggi, C. Caraffi, R. I. Fedriga, P. Grisleri, 2005
- Real Time Obstacle Detection in Stereovision on Non Flat Road Geometry Through "V-disparity" Representation, R. Labayrade, D. Aubert, J. Tarel, 2002

Presented by: Ali Agha, March 25, 2009

Motivation Obstacle avoidance using stereo vision: detect points with Z > 0, i.e. points above the road surface. The pitch angle between the cameras and the road surface changes over time due to static and dynamic factors, so the disparity of ground pixels changes from frame to frame. Therefore the pitch angle and the disparity of ground pixels must be computed dynamically.

Related work
- Disparity map → 3D point cloud → ground plane extraction using plane fitting → obstacle detection: Yu et al. (2005)
- Disparity map → optical flow and ground information → obstacle detection: Mascarenhas (2008), Giachetti et al. (1998)
- Disparity map → V-disparity image and ground information → obstacle detection: Labayrade (2002), Broggi (2005), Zhao (2007)

V-Disparity image Given a computed disparity map IΔ, the v-disparity image IvΔ is built by accumulating, along each row, the pixels of IΔ that share the same disparity. Recently, the V-disparity image has become popular for ground plane estimation [1][9][10][11]. In this image, the abscissa axis (w) plots the disparity offset for which the correlation has been computed; the ordinate axis (v) plots the image row number; the intensity value is set proportional to the measured correlation, i.e. the number of pixels having the corresponding disparity (w) in a certain row (v). Each planar surface in the field of view is mapped to a segment in the V-disparity image [10]. Vertical surfaces in the 3D world map to vertical line segments, while the ground plane maps to a slanted line segment. This segment, called the ground correlation line in this study, carries the cameras' pitch angle at the time of acquisition (mixed with the terrain slope information).
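The accumulation described above can be sketched as follows; this is a minimal illustration of building IvΔ from IΔ (function and parameter names are illustrative, not from the papers):

```python
import numpy as np

def v_disparity(disp, d_max):
    """Build the v-disparity image from a disparity map:
    entry (v, d) counts how many pixels on image row v have disparity d.

    disp  : (H, W) integer disparity map; invalid pixels marked < 0
    d_max : number of disparity levels (columns of the result)
    """
    H = disp.shape[0]
    v_disp = np.zeros((H, d_max), dtype=np.int64)
    for v in range(H):
        row = disp[v]
        # keep only valid disparities on this row
        valid = row[(row >= 0) & (row < d_max)]
        # accumulate the pixels of equal disparity along this row
        v_disp[v] = np.bincount(valid, minlength=d_max)
    return v_disp
```

Each image row thus becomes one row of the v-disparity image, with intensity equal to the pixel count per disparity level.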

The image of a plane in the v-disparity image Under the pin-hole camera model, a planar surface projects to a line in the v-disparity image; for the ground plane, this line is the ground correlation line.
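The slide only states the result; as a hedged reconstruction in the standard stereo notation (camera height h above the road, baseline b, focal length f, pitch θ, principal-point row v0 — none of these symbols appear on the slide itself), the ground plane projects in the v-disparity image to

```latex
\Delta(v) \;=\; \frac{b}{h}\Bigl[(v - v_0)\cos\theta \;+\; f\sin\theta\Bigr]
```

which is linear in the row index v, hence a slanted segment, while a fronto-parallel obstacle at fixed depth Z gives the constant disparity Δ = b f / Z, hence a vertical segment.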

Ideal and real road and obstacle

Hough Transform Labayrade uses the Hough transform to extract the ground correlation line.
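A minimal sketch of this extraction step: a Hough transform votes over (slope, offset) candidates for the line d = g·v + c in the v-disparity image, and the strongest candidate is returned. The brute-force search below stands in for a full Hough accumulator; the function name and parameter grids are illustrative assumptions, not from the paper:

```python
import numpy as np

def hough_ground_line(v_disp, slopes, offsets):
    """Vote for the ground correlation line d = g*v + c: each candidate
    (slope, offset) accumulates the v-disparity intensity it passes
    through, and the strongest candidate is returned.

    v_disp  : (H, D) v-disparity image
    slopes  : iterable of candidate slopes g
    offsets : iterable of candidate intercepts c
    """
    H, D = v_disp.shape
    v = np.arange(H)
    best_score, best_params = -1.0, None
    for g in slopes:
        for c in offsets:
            d = np.round(g * v + c).astype(int)
            ok = (d >= 0) & (d < D)          # stay inside the image
            score = v_disp[v[ok], d[ok]].sum()
            if score > best_score:
                best_score, best_params = score, (g, c)
    return best_params
```

On a v-disparity image whose brightest slanted line comes from the road, the returned (g, c) parameterizes the ground correlation line.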

Free space on the road Once the longitudinal profile of the road has been extracted, the disparity value on the road surface is known for each scanning line: it is exactly the disparity value of the longitudinal profile curve at the scanning line in question. Let Δi denote this value. It is then straightforward to extract the obstacles from the already-computed disparity map IΔ: for a given scanning line, pixels whose disparity equals Δi lie on the road surface; the other pixels belong to potential obstacles. In Fig. 4, note that the vehicle located at almost 80 meters is correctly perceived as a potential obstacle. Once the obstacle areas have been computed, the free space on the road surface can be extracted (see also [6][7]) using a region-growing algorithm (see Fig. 6, bottom). The part of the image corresponding to the free space can be used, for example, to extract lane markings under more suitable conditions.
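The per-scanline classification described above can be sketched as follows, with a tolerance added for disparity noise (the tolerance and the function name are assumptions for illustration; the paper compares against Δi exactly):

```python
import numpy as np

def split_road_obstacles(disp, road_disp, tol=1):
    """Classify pixels using the extracted road profile: for each scan
    line v, road_disp[v] is the expected ground disparity Δi.  Pixels
    within `tol` of it are road; pixels with larger disparity (closer
    than the ground at that row) are potential obstacles.

    disp      : (H, W) disparity map; invalid pixels marked < 0
    road_disp : (H,) per-row ground disparity from the road profile
    Returns boolean masks (road, obstacles).
    """
    ref = road_disp[:, None]                 # broadcast Δi over columns
    valid = disp >= 0
    road = valid & (np.abs(disp - ref) <= tol)
    obstacles = valid & (disp - ref > tol)
    return road, obstacles
```

The obstacle mask can then seed the region-growing step that delimits the free space.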

Robust ground correlation extraction Broggi et al. found experimentally that during a pitch variation the ground correlation line oscillates parallel to itself, and that the variation of camera height due to the oscillation has a negligible effect. Zhao et al. investigated this characteristic mathematically and gave the condition under which it holds. By accumulating the V-disparity image values along each of the candidate lines, the ground correlation line can be estimated (choosing the line that accumulates the greatest value), and from it the pitch of the cameras at the time of acquisition.

Condition derived by Zhao et al. The condition ensures that the ground correlation lines at different pitch angles are parallel to each other. At the different pitch angles shown in Fig. 3, the slope of the ground plane in the V-disparity image differs. Suppose gs is the slope of ls computed from static calibration data when θ = θs, gmax is the slope of lmax when θ = θmax, and gmin is the slope of lmin when θ = θmin, as shown in Fig. 4. Thus, for a given Δw and a certain pitch oscillation, θs needs to be reduced. In [10] the tilt angle is 8.5°; in [1] the tilt angle θs is 7.73°.

Result of this condition If this condition is satisfied, the ground correlation lines at different pitch angles can be treated as parallel.

GLOBAL CORRELATION METHOD Exploiting this characteristic yields a fast and robust method, even in the absence of distinct features: accumulate the matching cost (the intensity of the V-disparity image) along each candidate line and choose the line with the greatest accumulated value as the ground correlation line.
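Because the candidate lines are parallel, the two-dimensional Hough search collapses to a one-dimensional search over offsets; a minimal sketch under that assumption (function and parameter names are illustrative):

```python
import numpy as np

def global_correlation_line(v_disp, g, offsets, v_horizon=0):
    """With the slope g fixed (from static calibration; the lines stay
    parallel under pitch change), accumulate v-disparity intensity along
    each candidate line d = g*v + c and return the offset c with the
    greatest accumulated value.

    v_disp    : (H, D) v-disparity image
    g         : fixed slope of the ground correlation line
    offsets   : iterable of candidate offsets c
    v_horizon : accumulate only from the horizon row to the image bottom
    """
    H, D = v_disp.shape
    v = np.arange(v_horizon, H)
    best_c, best_score = None, -1.0
    for c in offsets:
        d = np.round(g * v + c).astype(int)
        ok = (d >= 0) & (d < D)              # clip to valid disparities
        score = v_disp[v[ok], d[ok]].sum()
        if score > best_score:
            best_score, best_c = score, c
    return best_c
```

Replacing the (slope, offset) vote with an offset-only scan is what makes the method fast, and summing intensity along whole lines is what keeps it robust when no single row has distinct features.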

Verifying equations and assumptions (road with distinct features) Implementing Labayrade's method. [Figure: disparity map, V-disparity image, Hough transform result]

Verifying equations and assumptions (Road has distinct features) In the sequence of images the position of ground correlation lines is ranging from 25 to 28. The slop(g) of ground correlation lines ranges from 5.5 to 5.5652. After Hough transform, we can get the position(4w) and slope(g) of the ground correlation lines for these image pairs. The result is shown in Fig. 9. The lower plot of this figure shows the position(4w) of ground correlation lines for different image pairs, which is ranging from 25 to 28. This plot tells us there do exist pitch variation in different conditions. The upper plot shows the slop(g) of ground correlation lines, which ranges from 5.5 to 5.5652. From this Figure we can see that the bigger the slope(g) is, the greater 4w. This confirms the match between Eq. 4 and Fig. 4 in section II. Also, the upper plot shows that the change of slope is very small, the difference between the greatest slope and smallest slope is merely 0.0652, which is 1.17%. Thus the assumption that the ground correlation lines under different pitch angles are parallel, is valid.

Results Applying the method to the same images. The matching cost associated with each candidate ground correlation line is accumulated from the bottom of the image up to the level of the horizon, and the extracted ground correlation line (GCL) is the same as the one obtained with the Hough transform. [Figure: matching costs associated with the candidate lines]

Without distinct features [Figure: disparity map, V-disparity image, Hough transform result]

Without distinct features - Comparison of the Hough transform and the global correlation method. [Figure: lines extracted by the global correlation method (GCL) and by the Hough transform]

Obstacle detection

Conclusion The accuracy is dependent on the image quality and on whether the ground pixels dominate the relevant area of the image. It seems an appropriate method for detecting obstacles such as vehicles on the road at night using structured light (such as the headlights of an automobile).

Thank you Questions??