Improved Lane Detection for Unmanned Ground Vehicle Navigation

Seok Beom Kim, Joo Hyun Kim, Bumkyoo Choi, and Jungchul Lee
Department of Mechanical Engineering, Sogang University, South Korea

Abstract. Fast and efficient lane detection is a prerequisite for unmanned vehicle navigation. Previous work on lane detection has mostly relied on edge-detection algorithms. Our approach applies the Hough transform and the Inverse Perspective Transform (IPT) after morphological vision processing and filtering, and then extracts lanes to construct a local map. The proposed algorithm improves processing time by using partially extracted lanes and enhances detection efficiency by separating out noise caused by external light sources. In addition, camera images are reconstructed onto the local map using the IPT method to control unmanned vehicles.

Keywords: Hough Transform, Inverse Perspective Transform, Lane Detection, Morphology, Unmanned Ground Vehicle

1 Introduction

Smart technologies have recently drawn continuing attention thanks to advances in information technology. As these technologies are adapted to automobiles, active research on unmanned autonomous navigation is under way to aid and eventually replace human perception. For unmanned ground vehicle applications, the vision system is one of the key components. Since a vision system detects lanes by running a series of image frames through an algorithm, the recognition algorithm must be improved to handle lanes with more complicated morphologies. Various methods have been used for lane detection; the most frequently used are detecting edges between roads and lanes [1], perspective mapping [2], the Hough transform [3], and the Kalman filter [4]. In this paper, we construct an improved lane detection system that generates a localized map using the Hough transform and the Inverse Perspective Transform (IPT).
Depending on the morphology of the lanes, a variety of image filters and operators are employed in the Hue-Saturation-Value (HSV) color space, which is based on human color perception.

Session 2C 183
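The HSV decomposition used throughout the paper can be illustrated with Python's standard `colorsys` module. This is a minimal sketch of the per-pixel color-space conversion, not the authors' implementation (which operates on whole camera frames):

```python
import colorsys

# Convert a normalized RGB pixel to HSV (all components in [0, 1]).
# Pure red maps to hue 0.0, saturation 1.0, value 1.0.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# A washed-out (light-spread) pixel keeps a high value but loses
# saturation, which is why filtering in HSV separates bright glare
# from strongly colored lane markings.
h2, s2, v2 = colorsys.rgb_to_hsv(0.9, 0.85, 0.8)
```

In practice the same conversion is applied frame-wide (e.g. vectorized over a numpy array), but the channel semantics are exactly those shown here.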

2 Methodology

2.1 Camera Setup

To detect lanes using a charge-coupled device (CCD) camera, the installation position and angle are key parameters for generating a local map around the unmanned ground vehicle. Therefore, optimized setup parameters are required based on the region of interest (ROI). The optimized position and tilt angle of our CCD camera are 1.55 m above the ground and 12.5° downward (see Fig. 1). After installation, the camera's angle of view is ° x ° (vertical (V) x horizontal (H)) and the ROI is 20 m x 10 m (V x H). With a 0.25 m step over the predefined ROI, a total of 3200 bits (80 x 40) describe the localized map.

Fig. 1. (a) Camera setup parameters. (b) 2D Boolean array with 3200 bits (80 (V) x 40 (H)) in the ROI.

Fig. 2. (a) A road image showing the light spreading phenomenon after contrast image processing. (b) A processed image showing the difference between the background and the original image after correcting the light spreading phenomenon.

2.2 Pre-processing

During lane detection with a vision system, different local properties may appear within an identical image due to uncontrollable factors such as the intensity of external light sources and shadows cast by neighboring obstacles. As a result, noise is inevitably introduced and the quality of lane detection degrades. A pre-processing step is necessary to compensate for such environmental factors. For example, a camera image of lanes on a bright day contains more light spreading than one taken on a cloudy day (see Fig. 2(a)). Because the color level of the light spreading is similar to that of white lanes, it hinders lane detection and edge extraction. To compensate for local light spreading, parallel processing using morphology is applied, since morphology efficiently clarifies an object by separating
image components such as boundaries and structures within an image. The image is first eroded with a structuring element to extract the overall background, and then dilated with the same structuring element to reconstruct a background whose size is identical to the original image. Using the difference between an image with the Hue channel removed and the morphologically processed image, the background, lanes, and other neighboring objects are successfully separated (see Fig. 2(b)). In contrast to Fig. 2(a), the exceptionally bright areas are removed. To emphasize bright lanes, each pixel value is remapped over the range 1 to 255 (= 2^8 - 1) at high speed using a lookup table pre-stored in our algorithm. Once lanes are extracted, filtering is required to suppress noise in the image. To this end, a median filter is employed, and a highlight convolution is used to extract edges.

2.3 Lane Detection

Since real roads contain various types of lanes, points recognized as parts of lanes are extracted from the edges of the pre-processed image using the Hough transform in the form ρ = x cos θ + y sin θ, where ρ and θ are the distance and angle from the origin, respectively; this yields an abstract line fit for tracking purposes. Lanes found by the Hough transform lie in the camera coordinate system and retain the perspective of the camera image. Therefore, they are transformed into lanes on the road in a bird's-eye view using the IPT and the camera setup parameters. First, the Otsu method is employed to binarize the recognized data into computable pixel arrays in which lanes are clustered, and only occupied pixels are processed to reduce computation time. For the Inverse Perspective Transform, the geometrical parameters in Fig. 3 and Eqns. (1) and (2) are used to transform the camera image (u, v) into the real-world image (x, y, 0).
x(u, v) = h · cos[(2v/(y_resolution − 1) − 1) · α1] / tan[θ + (2u/(x_resolution − 1) − 1) · α2]    (1)

y(u, v) = h · sin[(2v/(y_resolution − 1) − 1) · α1] / tan[θ + (2u/(x_resolution − 1) − 1) · α2]    (2)

where h is the height of the camera from the ground, and x_resolution and y_resolution are the pixel resolutions in the x and y coordinates, respectively.

Fig. 3. Geometrical parameters for the Inverse Perspective Transform.
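The transform of Eqns. (1) and (2) can be sketched directly in code. The camera height (1.55 m) and tilt (12.5°) come from Section 2.1; the half-angles of view α1 and α2 and the pixel resolutions below are placeholder values chosen for illustration, since the paper's angle-of-view figures did not survive transcription:

```python
import math

H = 1.55                      # camera height above ground [m] (Sec. 2.1)
THETA = math.radians(12.5)    # downward tilt angle [rad] (Sec. 2.1)
ALPHA1 = math.radians(20.0)   # assumed half-angle of view along v
ALPHA2 = math.radians(15.0)   # assumed half-angle of view along u
X_RES, Y_RES = 240, 320       # assumed pixel resolutions along u and v

def ipt(u, v):
    """Map an image pixel (u, v) to ground coordinates (x, y) via Eqs. (1)-(2)."""
    lateral = (2.0 * v / (Y_RES - 1) - 1.0) * ALPHA1            # ray angle across the image
    vertical = THETA + (2.0 * u / (X_RES - 1) - 1.0) * ALPHA2   # ray angle below horizon
    x = H * math.cos(lateral) / math.tan(vertical)   # Eq. (1): forward distance
    y = H * math.sin(lateral) / math.tan(vertical)   # Eq. (2): lateral offset
    return x, y

# A pixel at the image center looks straight down the optical axis:
# lateral offset y is zero and forward distance is h / tan(theta).
x0, y0 = ipt((X_RES - 1) / 2, (Y_RES - 1) / 2)
```

A real deployment would substitute the calibrated angles of view and verify the sign conventions against Fig. 3; only the structure of the two equations is taken from the paper.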
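The Otsu binarization named in Section 2.3 is also compact enough to sketch; this is a self-contained numpy version of the standard between-class-variance criterion, with a synthetic bimodal image standing in for the real pre-processed frame:

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))       # cumulative mean up to level t
    mu_t = mu[-1]                            # global mean
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[np.isnan(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

# Bimodal toy image: dark road pixels around 30-50, bright lane pixels
# around 210-230; the threshold should land between the two modes.
rng = np.random.default_rng(0)
img = np.concatenate([
    rng.integers(30, 50, 900), rng.integers(210, 230, 100)
]).astype(np.uint8).reshape(100, 10)
t = otsu_threshold(img)
```

Pixels above `t` are then treated as lane candidates, matching the binarized result shown in Fig. 5(f).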

3 Experiment

Experiments were performed on campus roads (Sogang University, Seoul, South Korea). Of note, the maintenance condition of the campus roads and the luminance of the lanes are not as good as on regular traffic roads. While the camera installed on a 4-wheeled electric vehicle platform captures images (resolution: 320 x 240) at 30 frames per second (fps), the performance of the lane detection, including the error rate and the computing speed with the IPT employed, is evaluated. The flow chart of the algorithm is shown in Fig. 4.

Fig. 4. A flow chart of the lane detection algorithm.

Fig. 5. (a) Original image. (b) Background image processed with the morphological operators ("Erode" and "Dilate"). (c) Image with the background eliminated. (d) Image with the median filter applied. (e) Image with the highlight convolution. (f) Binarized image with the ROI extracted.

Fig. 5(a) shows an original image of a road on campus. All lanes are completely removed in the background image produced by the morphological operators, shown in Fig. 5(b). Fig. 5(c) is obtained by taking the absolute difference between Fig. 5(a) and Fig. 5(b). Neighboring objects that do not belong to the same background as the lanes are well recognized, without the light spreading caused by external light sources. In addition, Fig. 5(f) confirms that most data are well preserved when the filtered images are binarized.

Fig. 6. (a) Detected lanes from the Hough transform overlaid onto the original image. (b) Local map 2D Boolean array.

Computers, Networks, Systems, and Industrial Applications 186
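The ρ–θ voting behind the detected lines in Fig. 6(a) can be sketched in a few lines of numpy. This is a toy illustration on synthetic points, not the paper's implementation:

```python
import numpy as np

def hough_peak(points, n_theta=180, n_rho=201, rho_max=100.0):
    """Vote edge points into a (rho, theta) accumulator and return the peak."""
    thetas = np.deg2rad(np.arange(n_theta))            # theta = 0..179 degrees
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)   # rho = x cos(t) + y sin(t)
        idx = np.round((rhos + rho_max) * (n_rho - 1) / (2 * rho_max)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[idx[ok], np.arange(n_theta)[ok]] += 1        # one vote per (rho, theta)
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    rho = r * (2 * rho_max) / (n_rho - 1) - rho_max
    return rho, float(np.degrees(thetas[t]))

# Points on the diagonal y = x all satisfy rho = 0 at theta = 135 degrees,
# so the accumulator peak recovers that single line.
rho, theta_deg = hough_peak([(i, i) for i in range(30)])
```

Real frames would feed the binarized edge pixels of Fig. 5(f) into the same accumulator and keep every peak above a vote threshold rather than just the maximum.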

When our lane detection algorithm is applied to such pre-processed images, the results of the Hough transform agree well with the direction of the actual lanes, provided the ROI excludes the central blue guideline. Local mapping is performed to support a path planning algorithm for autonomous navigation, and the results are shown in Fig. 6. The lane width extracted through the IPT is 2.6 m, which agrees with the real width of 2.7 m to within 3.7% error. The computing speed of our real-time image processing is 15.4 fps. Since the maximum speed of our 4-wheeled electric vehicle platform is 5.3 m/s, our lane detection algorithm updates road information at least every 0.34 m.

4 Conclusion

In this paper, points estimated to be parts of lanes are extracted by image processing using various morphological operators and filters on HSV color channels for improved lane detection. The Hough transform and the IPT are employed to construct a local map for unmanned autonomous navigation. By introducing pre-processing, masking, and configuring the ROI, the computing speed of the lane detection algorithm is improved to 15.4 fps. However, depending on light reflection, objects with similar color or morphological characteristics, as well as noise, are sometimes recognized as lanes. The algorithm needs further improvement to address these issues.

Acknowledgements

This work was supported by the Sogang University Research Grant of 2012.

References

1. J.C. McCall, M.M. Trivedi: Video-Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation. In: IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 1 (2006)
2. S.G. Jeong, C.S. Kim, D.Y. Lee, S.K. Ha: Real-Time Lane Detection for Autonomous Vehicle. In: IEEE International Symposium on Industrial Electronics, vol. 3 (2001)
3. A.A.M. Assidiq, Khalifa, et al.: Real-Time Lane Detection for Autonomous Vehicles. In: ICCCE (2008)
4. S.H. Kim, G.Y. Kim, H.I. Choi: Efficient Lane Detection using a Simplified Kalman Filter Algorithm and Dynamic Thresholding Algorithm. In: Korean Institute of Information Scientists and Engineers, vol. 35, no. 2 (2008)