Curb Detector.


Automated Image Analysis for Robust Detection of Curbs
Project Leaders: Wende Zhang (GM R&D / ECS Lab), David Wettergreen (CMU)
Timing: Initial May 2014; Midterm May 2015; Final May 2016
Resources (2013 / 2014 / 2015): Material cost US$100k per year; Headcount 0.1 (GM) and 1 (CMU) per year
Approach: Detect and classify features using a learning-based method

Description
Curbs are important cues for identifying the boundary of a roadway. Drivers recognize an appropriate parking spot as defined by the curbs when reversing or parallel parking. Detecting curbs and providing that information to assist drivers is an important task for active safety, and curb location is also crucial to autonomous parking systems.

Visual indications of curbs vary widely in appearance. Under perspective imaging, the projection of three-dimensional curbs onto the two-dimensional image plane distorts most of a curb's geometric properties, such as angles, distances, and ratios of angles. Curbs also look different because of age, wear, damage, and lighting. Methods for detecting, localizing, and classifying curbs must address this diversity: there is no fixed template, or set of templates, that can reliably detect curbs in images. Nevertheless, visual appearance is how human drivers successfully detect curbs. Although the physical structure of a curb can be sensed with ranging sensors, it is neither distinctive (two offset planes) nor diagnostic of the roadway edge. We therefore choose to pursue visual appearance.
This new project will develop automated curb detection by:
Choosing appropriate features, learning to detect those features, and classifying the detected curbs
Using the calibrated camera to fuse in 3D geometry information
One-year development plan: detect, localize, and classify curbs using an in-vehicle, backward-looking vision sensor with a wide field of view

Motivation / Benefits
Identify the boundary of a roadway in urban driving
Recognize an appropriate parking spot as defined by the curbs when reversing or parallel parking

Deliverable / Technology Insertion into GM (What, When, Where)
Problem definition: survey of curbs
Data collection: database of definite and diverse curb images
Application: detect curb features in perspective imagery
Experimental validation and performance analysis
Annual report

GM Confidential

Use Case Slides

Assumptions
Color monocular camera
Known camera motion
Known intrinsic parameters
Maximum speed dependent upon frame rate
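The frame-rate assumption can be made concrete with a back-of-envelope bound (an illustrative model, not taken from the slides): if a curb must be observed in several consecutive frames while it lies inside the detection range, the vehicle may advance at most range / frames metres per frame.

```python
def max_speed_kmh(detect_range_m, frame_rate_hz, frames_needed=3):
    # Illustrative bound: the vehicle may travel at most
    # detect_range_m / frames_needed metres between frames if the curb
    # must stay in range for frames_needed consecutive frames.
    metres_per_second = detect_range_m * frame_rate_hz / frames_needed
    return metres_per_second * 3.6  # m/s -> km/h
```

For example, a 2.1 m detection range at 30 fps with 3 required frames gives a bound of about 75.6 km/h.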

Use Cases
Parking lots: backward parking, parallel parking
Driveways
Roadways: single lane, multi-lane

Parking lots
Scenario: curbs exist behind the vehicle; rear-view camera with a wide field of view
Success: detect and localize curbs in images; (optional) estimate the distance from the vehicle to the curbs

Parking lots
Scenario: parking curbs exist behind the vehicle; rear-view camera with a wide field of view
Success: detect and localize parking curbs in images; (optional) estimate the distance from the vehicle to the parking curbs

Driveways
Scenario: curbs exist at the sides of the driveway entrance; front-view camera with a wide field of view
Success: detect and localize curbs in images and indicate the driveway as a traversable path

Roadways
Scenario: curbs exist at the side of the road; wide-field-of-view camera
Success: detect and localize curbs in images and indicate the curbs as the non-traversable boundary of the road

Flow Chart
Detection (edges, segmentation, texture): localize relevant curbs in each image
Tracking: localize the detected curbs in the remaining images

Edge Detection
(Sample results: edges extracted from the distorted image, the undistorted image, and the bird's-eye view)

Segmentation

Texture Classification

Development Plan
Develop and test simple features
Train classifiers to detect and localize curbs
Evaluate classifier performance
Add complex features
Test and quantify detection and localization performance
Train color classifiers to identify appropriate parking spots
Use motion stereo to exploit 3D geometry

Scheme
Extract features → edge detection → classification → tracking
Filters: edges, intensity differences, gradients
Geometric considerations: horizontal, long, thin features
Other cues: color, texture, curvature
Tracking: appearance-based

Data Collection
180-degree field-of-view camera, installed underneath the side mirror and tilted 45 degrees down toward the ground
Sample images

Camera Calibration
The wider the field of view, the more distortion
Camera calibration is necessary to find geometric constraints (e.g., edges)
OCamCalib (Omnidirectional Camera Calibration Toolbox) is used to calibrate the camera
Sample undistorted images
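The undistortion step can be illustrated with a toy one-parameter radial model (an assumption for illustration; the OCamCalib polynomial model used in the project is richer, but the fixed-point inversion idea is the same):

```python
import numpy as np

def undistort_points(pts_d, k1, center):
    """Invert a one-parameter radial distortion model on pixel points.

    Assumed model (illustrative only): r_d = r_u * (1 + k1 * r_u**2),
    solved for the undistorted radius r_u by fixed-point iteration.
    """
    pts_d = np.asarray(pts_d, dtype=float)
    c = np.asarray(center, dtype=float)
    v = pts_d - c                                    # offsets from the distortion centre
    r_d = np.linalg.norm(v, axis=-1, keepdims=True)  # distorted radii
    r_u = r_d.copy()                                 # initial guess
    for _ in range(20):                              # fixed-point iteration
        r_u = r_d / (1.0 + k1 * r_u**2)
    scale = np.divide(r_u, r_d, out=np.ones_like(r_d), where=r_d > 0)
    return c + v * scale
```

The same inverse mapping, evaluated over a pixel grid, yields the undistorted sample images shown on this slide.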

Shape Information Edge detection HOG feature

Extract Dominant Edges
Input image at t → undistorted image → edge detection → sequential RANSAC

Extract Dominant Edges
Input image at t → undistorted image → edge detection → sequential RANSAC
Two or three parallel lines with small offsets are an important cue for curbs
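The sequential RANSAC step can be sketched as follows (a minimal illustration, not the project's implementation; iteration count, inlier tolerance, and minimum support are placeholder values):

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=2.0, rng=None):
    """Fit one 2-D line with RANSAC; returns (point, unit direction, inlier mask)."""
    rng = rng or np.random.default_rng(0)
    best_p, best_d = pts[0], np.array([1.0, 0.0])
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue                              # degenerate sample
        d = d / norm
        n = np.array([-d[1], d[0]])               # unit normal of the candidate line
        dist = np.abs((pts - pts[i]) @ n)         # point-to-line distances
        mask = dist < tol
        if mask.sum() > best_mask.sum():
            best_p, best_d, best_mask = pts[i], d, mask
    return best_p, best_d, best_mask

def sequential_ransac(pts, n_lines=3, min_inliers=20):
    """Extract dominant lines one at a time, peeling off each line's inliers."""
    lines, remaining = [], np.asarray(pts, dtype=float)
    for _ in range(n_lines):
        if len(remaining) < min_inliers:
            break
        p, d, mask = ransac_line(remaining)
        if mask.sum() < min_inliers:
            break
        lines.append((p, d))
        remaining = remaining[~mask]              # remove this line's support
    return lines
```

Running this on the edge pixels of an undistorted frame returns the few dominant lines among which the parallel, small-offset pairs are sought.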

HOG feature

HOG Feature
Pipeline: input image → HOG map → score map → output image
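The HOG maps above are built from per-cell orientation histograms; a minimal sketch of one cell's histogram (omitting the overlapping cells and block normalization of a full HOG descriptor):

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Magnitude-weighted orientation histogram for one HOG cell."""
    gy, gx = np.gradient(patch.astype(float))     # image gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())    # magnitude-weighted votes
    return hist
```

A horizontal step edge (vertical gradient) votes into the bin around 90 degrees, which is why near-horizontal curb edges produce a distinctive HOG signature.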

Prerequisite
The maximum distance at which curbs must be detected should be defined. Given the extrinsic parameters and this maximum distance, the following can be estimated:
HOG model sizes for short, medium, and long range
Region of interest

Geometry Calculation
(Diagram: maximum detection distance from the rear of the vehicle to the curb; local-area road width 2.7-3.6 m / 9-12 feet)

Examples
Cadillac SRX: http://www.cadillac.com/srx-luxury-crossover/features-specs/dimensions.html
Cadillac CTS: http://www.cadillac.com/cts-sport-sedan/features-specs/dimensions.html

Geometry Calculation
Maximum detection distance = 2.1 meters
(Diagram: vehicle with rear-view camera and curb)

Image Sample with Distance Measure
The center of the camera is 1.05 m above the ground; the camera is angled 45 degrees down from the horizontal.
The region of interest (red transparent rectangle) will be reduced, and will change based on the extrinsic parameters.
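Under a flat-ground model (an assumption consistent with the numbers above), the ground distance seen by a camera ray follows directly from the camera height and the ray's angle below the horizontal:

```python
import math

def ground_distance(cam_height_m, angle_below_horizontal_deg):
    # Flat-ground model: a camera ray leaving at the given angle below
    # the horizontal intersects the ground plane at h / tan(angle).
    return cam_height_m / math.tan(math.radians(angle_below_horizontal_deg))
```

With the 1.05 m camera height, the 45-degree optical axis meets the ground 1.05 m from the camera, and the 2.1 m maximum detection distance corresponds to a ray about 26.6 degrees below horizontal.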

Lane Markings
Since lane markings have strong edges, detector outputs caused by lane markings must be eliminated. Image regions containing lane markings can be removed by detecting white blobs.
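The white-blob idea can be sketched as a brightness threshold followed by a small dilation (the threshold and dilation radius are illustrative values, not the project's tuned parameters):

```python
import numpy as np

def white_marking_mask(gray, thresh=200, dilate=2):
    """Mask of bright, lane-marking-like pixels, grown by a few pixels so
    that edges along the blob boundary are suppressed as well."""
    mask = gray >= thresh
    for _ in range(dilate):            # cheap 4-neighbour dilation
        m = mask.copy()
        m[1:, :] |= mask[:-1, :]
        m[:-1, :] |= mask[1:, :]
        m[:, 1:] |= mask[:, :-1]
        m[:, :-1] |= mask[:, 1:]
        mask = m
    return mask
```

Edge pixels falling inside this mask are discarded before line extraction, so lane markings do not produce spurious curb candidates.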

Result Video

Performance Measure
300 test images were chosen. Positive samples are images containing a full-length curb; negative samples are images without curbs. A curb is considered detected when the summed horizontal length of the detected curb segments exceeds half the horizontal length of the image, i.e., more than 360 pixels for 480-by-720 images.

Performance Measure
Detection criterion: length of detected curb > 0.5 × length of image

                Groundtruth positive   Groundtruth negative
Detected        80                     13
Not detected    24                     183
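Reading the counts as TP = 80, FP = 13, FN = 24, TN = 183 (an assumption about the table layout), the standard detection metrics follow:

```python
def detection_metrics(tp, fp, fn, tn):
    # Standard detection metrics from confusion-matrix counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy
```

With these counts the detector reaches roughly 0.86 precision, 0.77 recall, and 0.88 accuracy on the 300 test images.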

Future Work
Curb detection features:
Use redundant information across multiple images; include a tracking system to recover false negatives (continuity)
Develop a likelihood function to recover false negatives and remove false positives (using curb height)
Front-view camera:
Mount a 180-degree field-of-view camera on the front bumper

Front-view Camera Configuration