Chauffeur
Shade Alabsa

Project Details
- Provide real-time lane tracking and alert the driver when the car drifts over the line
- Provide real-time object tracking to detect when the car is too close to an object in front of it
- In hindsight, a bit overzealous

Issues
- Emulator and real hardware performed differently: line detection worked on the emulator, but the phone gave me a blue screen
- Hard to test: tested by building the APK and then driving around
- Painfully slow: on the phone there was a 10-15 s delay between frames
- Horrible documentation

Lane Detection
- In theory very easy to do, but in practice not so much
- Speed is an issue
- Curves cause issues
- Other lines unrelated to the road cause issues

Lane Detection: Naïve Solution
- Separate the horizon from the road
- Separate the road from cars, like in HW 1
- Check for yellow/white lines within the ROI
- Doesn't really scale, and there is a better way...

Lane Detection
- Canny edge detection (also used in the naïve solution)
- Hough transform
- Probabilistic Hough transform
- Draw the lines on the image and display
- Much wow

Lane Detection
- But how does this work?
- We already know about Canny edge detection: two thresholds determine which points should belong to a contour
- I used the built-in Canny operator rather than writing my own: Imgproc.Canny(Mat image, Mat edges, double threshold1, double threshold2)
- edges is the matrix the detected edges are stored in (a minimal usage sketch follows)
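
A minimal sketch of that call, assuming OpenCV 3+ Java bindings; the frame argument and the threshold values are illustrative, not the project's actual code:

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class EdgeStep {
    // frame: a BGR camera frame. Returns the binary edge map.
    static Mat detectEdges(Mat frame) {
        Mat gray = new Mat();
        Mat edges = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY); // Canny expects one channel
        Imgproc.Canny(gray, edges, 50.0, 150.0); // points above 150 start a contour,
                                                 // points above 50 may extend one
        return edges;
    }
}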

Hough Transform
- New concept!

Hough Transform
- Lines are represented by ρ = x·cos θ + y·sin θ (taken from OpenCV 2 Computer Vision Application Programming Cookbook)
- ρ is the distance between the line and the image origin; its maximum is the image diagonal
- θ is the angle of the perpendicular to the line
- Visible lines have θ between 0 and π radians
- In the cookbook's figure (not reproduced here): a vertical line like line 1 has a θ value equal to zero, while a horizontal line (for example, line 5) has its θ value equal to π/2; line 3 has an angle θ equal to π/4, and line 4 is at approximately 0.7π
- To be able to represent all possible lines with θ in the interval [0, π], the radius value can be made negative; this is the case for line 2, which has a θ value equal to 0.8π with a negative value for ρ
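
A quick numeric check of this parameterization (a hypothetical example, not from the slides): the horizontal line y = 100 has θ = π/2, so every point on it should yield ρ = 100:

public class HoughParamCheck {
    public static void main(String[] args) {
        double theta = Math.PI / 2.0;  // horizontal line, per the slide
        int y = 100;                   // the line y = 100
        for (int x = 0; x <= 150; x += 50) {
            double rho = x * Math.cos(theta) + y * Math.sin(theta);
            System.out.printf("x=%d -> rho=%.1f%n", x, rho);  // prints 100.0 every time
        }
    }
}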

Hough Transform
- Basic implementation: Imgproc.HoughLines(Mat image, Mat lines, double rho, double theta, int threshold)
- There is another overload that adds the parameters double srn, double stn, double min_theta, double max_theta, but I didn't use it, so I don't know what the extra options stand for
- lines is the output Mat you'll use, which contains the detected (ρ, θ) values
- image is usually a binary image, typically an edge map like the one returned by the Canny operator
- rho and theta set the step sizes for the line search: search for lines of all possible radii in steps of rho and all possible angles in steps of π/180
- threshold is the number of votes a line must receive to be considered detected
- Can create false positives due to incidental pixel alignments
- The Hough transform uses a 2-dimensional accumulator to count how many times a given line is identified; the size of this accumulator is defined by the specified step sizes of the (ρ, θ) parameters of the adopted line representation
- To illustrate how the transform works, consider a 180-by-200 accumulator (corresponding to a step size of π/180 for θ and 1 for ρ), as in the sketch below
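
A sketch of that voting step (my illustration of the idea, not code from the project or from OpenCV): each edge pixel votes for every (ρ, θ) cell consistent with it, and cells whose counts exceed the threshold are reported as lines. The edgePoints input is hypothetical, and the accumulator is widened to make room for negative ρ:

public class HoughAccumulator {
    // edgePoints: (x, y) coordinates of edge pixels, e.g. taken from a Canny edge map.
    static int[][] vote(int[][] edgePoints) {
        int thetaBins = 180;  // step of pi/180 for theta
        int rhoMax = 200;     // step of 1 for rho; ~ the image diagonal here
        int[][] acc = new int[thetaBins][2 * rhoMax + 1]; // extra room for negative rho
        for (int[] p : edgePoints) {
            for (int t = 0; t < thetaBins; t++) {
                double theta = t * Math.PI / thetaBins;
                double rho = p[0] * Math.cos(theta) + p[1] * Math.sin(theta);
                acc[t][(int) Math.round(rho) + rhoMax]++;  // shift so negative rho fits
            }
        }
        return acc;  // cells above the vote threshold correspond to detected lines
    }
}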

Hough Transform
- Solve this by using the probabilistic Hough transform
- Can also detect line segments, which is perfect for detecting lanes!
- Implemented with the following calls (a usage sketch follows):
- Imgproc.HoughLinesP(Mat image, Mat lines, double rho, double theta, int threshold)
- Imgproc.HoughLinesP(Mat image, Mat lines, double rho, double theta, int threshold, double minLineLength, double maxLineGap)
- Hey, that first one looks familiar!
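
A minimal sketch of the segment-detection and drawing steps, assuming OpenCV 3+ Java bindings (where the drawing functions live in Imgproc); all parameter values are illustrative guesses, not the project's tuned settings:

import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class LaneSegments {
    // edges: binary edge map from Canny; frame: the image to draw the lanes on.
    static void drawLaneSegments(Mat edges, Mat frame) {
        Mat lines = new Mat();
        Imgproc.HoughLinesP(edges, lines,
                1.0,              // rho step: 1 pixel
                Math.PI / 180.0,  // theta step: 1 degree
                80,               // votes required for a detection
                50.0,             // minLineLength: discard shorter segments
                10.0);            // maxLineGap: bridge small breaks in a segment
        for (int i = 0; i < lines.rows(); i++) {
            double[] s = lines.get(i, 0);  // one segment: {x1, y1, x2, y2}
            Imgproc.line(frame, new Point(s[0], s[1]), new Point(s[2], s[3]),
                    new Scalar(0, 0, 255), 3);  // red in BGR, 3 px thick
        }
    }
}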

Hazard Detection
- Basic equation: objectSizeInImage / focalLength = objectSizeInReality / distance
- Assumes no lens distortion, a constant focal length, a rigid object, and that the camera always views the same side of the object
- Requires a measurement to calibrate
- Rearranged: distance = focalLength * objectSizeInReality / objectSizeInImage
- Assuming focalLength and objectSizeInReality remain constant: distance * objectSizeInImage = focalLength * objectSizeInReality
- Thus newDist * newObjectSizeInImage = oldDist * oldObjectSizeInImage
- So newDist = oldDist * oldObjectSizeInImage / newObjectSizeInImage
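
A small sketch of that arithmetic (the method names and the calibration numbers are hypothetical, chosen only to show the cancellation):

public class HazardDistance {
    // One-time calibration form: distance = focalLength * realSize / imageSize.
    // Units must be consistent, e.g. focal length and image size in pixels.
    static double distance(double focalLengthPx, double realSize, double imageSizePx) {
        return focalLengthPx * realSize / imageSizePx;
    }

    // Relative form from the slide: the constants cancel once one pair is measured.
    static double newDistance(double oldDist, double oldSizePx, double newSizePx) {
        return oldDist * oldSizePx / newSizePx;
    }

    public static void main(String[] args) {
        // Hypothetical calibration: a car 10 m away spans 120 px in the image.
        // If it later spans 240 px, it should now be 5 m away.
        System.out.println(newDistance(10.0, 120.0, 240.0));  // prints 5.0
    }
}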

Hazard Detection
- Not that great for our use case: too many variables
- Could switch to using two cameras for stereo vision, but we lack the extra camera
- Better yet, we could implement 3D tracking, but it's expensive!
- Various 3D feature tracking techniques entail estimating the rotation of the object as well as its distance and other coordinates; the types of features might include edges and texture details. For our application, differences between models of cars make it difficult to define one set of features that is suitable for 3D tracking. Moreover, 3D tracking is computationally expensive, especially by the standards of a low-powered computer such as the Raspberry Pi.

Resources
- OpenCV 2 Computer Vision Application Programming Cookbook
- http://www.transistor.io/revisiting-lane-detection-using-opencv.html
- https://github.com/jdorweiler/lane-detection
- https://github.com/wahibhaq/android-opencv-lanedetection