Chauffeur Shade Alabsa.


1 Chauffeur Shade Alabsa

2 Project Details
Provide real-time lane tracking and alert the driver when the car drifts over the line
Provide real-time object tracking to detect when the car is too close to an object ahead
A bit overzealous

3 Issues
Emulator and real hardware performed differently: line detection worked on the emulator, but the phone gave me a blue screen
Hard to test: testing meant building the APK and then driving around
Painfully slow: on the phone there was a 10-15 s delay between frames
Poor documentation

4 Lane Detection
In theory very easy to do, but in practice not so much
Speed is an issue
Curves cause issues
Lines unrelated to the road cause issues

5 Lane Detection: Naïve Solution
Separate the horizon from the road
Separate the road from cars, as in HW 1
Check for yellow/white lines within the ROI
Doesn’t really scale, and there is a better way…

6 Lane Detection
Canny edge detection (also used in the naïve solution)
Hough transform
Probabilistic Hough transform
Draw lines on the image and display
Much wow

7 Lane Detection
But how does this work? We already know about Canny edge detection: two thresholds determine which points belong to a contour. I used the built-in Canny operator rather than writing my own: Imgproc.Canny(Mat image, Mat edges, double threshold1, double threshold2), where edges is the matrix the detected edges are stored in.

8 Hough Transform: New concept!

9 Hough Transform
Lines are represented by ρ = x·cos θ + y·sin θ
Taken from OpenCV 2 Computer Vision Application Programming Cookbook
ρ is the distance between the line and the image origin; its maximum is the image diagonal
θ is the angle of the perpendicular to the line
Visible lines have θ between 0 and π radians
A vertical line like line 1 has a θ value equal to zero, while a horizontal line (for example, line 5) has its θ value equal to π/2. Line 3 has an angle θ equal to π/4, and line 4 is at approximately 0.7π. In order to represent all possible lines with θ in the interval [0, π], the radius value can be made negative. This is the case for line 2, which has a θ value equal to 0.8π with a negative value for ρ.
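The parameterization above can be checked with a few lines of plain Java (an illustrative sketch, not code from the project; the class and method names are made up for this example):

```java
public class HoughLineParam {
    // rho = x*cos(theta) + y*sin(theta): signed distance from the origin
    // to the line through (x, y) whose normal makes angle theta.
    static double rho(double x, double y, double theta) {
        return x * Math.cos(theta) + y * Math.sin(theta);
    }

    public static void main(String[] args) {
        // Point (5, 7) on the vertical line x = 5: theta = 0 gives rho = 5
        System.out.println(rho(5, 7, 0.0));
        // Point (9, 3) on the horizontal line y = 3: theta = pi/2 gives rho = 3
        System.out.println(rho(9, 3, Math.PI / 2));
    }
}
```

Every point on a given line yields the same (ρ, θ) pair, which is what lets the transform vote points into line bins.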

10 Hough Transform
Basic implementation: Imgproc.HoughLines(Mat image, Mat lines, double rho, double theta, int threshold)
Another overload adds parameters I didn’t use: double srn, double stn, double min_theta, double max_theta
lines is the output Mat containing the detected (ρ, θ) values
image is usually a binary image, typically an edge map like the one returned by the Canny operator
rho and theta are the step sizes for the line search: search all possible radii in steps of rho and all possible angles in steps of π/180
threshold is the number of votes that a line must receive to be considered detected
Can create false positives due to incidental pixel alignments
The Hough transform uses a 2-dimensional accumulator to count how many times a given line is identified. The size of this accumulator is defined by the specified step sizes of the (ρ, θ) parameters of the adopted line representation. For example, a step size of π/180 for θ and 1 for ρ gives a 180 by 200 accumulator for an image whose diagonal bounds ρ at 200.
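A toy version of the accumulator-voting idea can be written in plain Java (a sketch for intuition only; OpenCV's real implementation is far more optimized, and all names here are invented for the example):

```java
public class ToyHough {
    // Vote each point into a (theta, rho) accumulator and return the peak
    // as {thetaBinIndex, rho, votes}. Theta step is PI/thetaBins, rho step is 1.
    static int[] houghPeak(int[][] points, int thetaBins, int rhoMax) {
        int[][] acc = new int[thetaBins][2 * rhoMax + 1]; // rho can be negative
        for (int[] p : points) {
            for (int t = 0; t < thetaBins; t++) {
                double theta = Math.PI * t / thetaBins;
                int rho = (int) Math.round(p[0] * Math.cos(theta) + p[1] * Math.sin(theta));
                if (rho >= -rhoMax && rho <= rhoMax) acc[t][rho + rhoMax]++;
            }
        }
        int[] best = {0, 0, -1};
        for (int t = 0; t < thetaBins; t++)
            for (int r = 0; r <= 2 * rhoMax; r++)
                if (acc[t][r] > best[2]) best = new int[]{t, r - rhoMax, acc[t][r]};
        return best;
    }

    public static void main(String[] args) {
        // Three collinear points on the line y = x; its normal is at 135 degrees
        int[][] pts = {{10, 10}, {20, 20}, {30, 30}};
        int[] peak = houghPeak(pts, 180, 50);
        System.out.println("thetaBin=" + peak[0] + " rho=" + peak[1] + " votes=" + peak[2]);
    }
}
```

With 1-degree bins the three points all land in the bin (θ = 135°, ρ = 0), which collects 3 votes; a vote threshold then separates real lines from incidental alignments.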

11 Hough Transform
Solve this by using the probabilistic Hough transform
It can also detect line segments, which is perfect for detecting lanes!
Implemented with either of the following calls:
Imgproc.HoughLinesP(Mat image, Mat lines, double rho, double theta, int threshold)
Imgproc.HoughLinesP(Mat image, Mat lines, double rho, double theta, int threshold, double minLineLength, double maxLineGap)
Hey, that first one looks familiar!

12 Hazard Detection
Basic equation: objectSizeInImage / focalLength = objectSizeInReality / distance
Assumes no lens distortion, a constant focal length, a rigid object, and that the camera always views the same side of the object
Requires a measurement to calibrate
distance = focalLength * objectSizeInReality / objectSizeInImage
Assuming focalLength and objectSizeInReality remain constant:
distance * objectSizeInImage = focalLength * objectSizeInReality
Thus newDist * newObjectSizeInImage = oldDist * oldObjectSizeInImage
newDist = oldDist * oldObjectSizeInImage / newObjectSizeInImage
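The derivation above reduces to a one-line function; here is a minimal Java sketch (the class name and calibration numbers are invented for illustration):

```java
public class HazardDistance {
    // Pinhole model: distance * sizeInImage is constant for a rigid object,
    // so a new distance follows from one calibrated (distance, size) pair.
    static double newDistance(double oldDist, double oldSizePx, double newSizePx) {
        return oldDist * oldSizePx / newSizePx;
    }

    public static void main(String[] args) {
        // Calibration: suppose a car measured 100 px wide at 20 m.
        // If it now measures 200 px, it is half as far away.
        System.out.println(newDistance(20.0, 100.0, 200.0)); // prints 10.0
    }
}
```

Note that the calibration frame is the only measurement needed: focal length and real object size cancel out of the ratio.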

13 Hazard Detection
Not that great for our use case: too many variables
Switch to using two cameras for stereo vision? We lack the extra camera
Better yet, we could implement 3D tracking, but it’s expensive!
Various 3D feature tracking techniques entail estimating the rotation of the object as well as its distance and other coordinates. The types of features might include edges and texture details. For our application, differences between models of cars make it difficult to define one set of features that is suitable for 3D tracking. Moreover, 3D tracking is computationally expensive, especially by the standards of a low-powered computer such as a Raspberry Pi.

14 Resources
OpenCV 2 Computer Vision Application Programming Cookbook

