Introduction to Artificial Intelligence Lecture 22: Computer Vision II
Why Edge Detection? How can an algorithm extract relevant information from an image that enables it to recognize objects? The most important information for the interpretation of an image (for both technical and biological systems) is the contour of objects. Contours are indicated by abrupt changes in brightness. We can use edge detection filters to extract contour information from an image.
Types of Edges One-dimensional profiles of different edge types
Types of Edges One-dimensional profile of actual edges
Edge Detection First we need some definitions: An edge point is a point in an image with coordinates [i, j] at the location of a significant local intensity change. An edge fragment corresponds to the i and j coordinates of an edge together with the edge orientation θ, which may be the gradient angle. An edge detector is an algorithm that produces a set of edges (edge points or edge fragments) from an image.
Edge Detection A contour is a list of edges or the mathematical curve that models the list of edges. Edge linking is the process of forming an ordered list of edges from an unordered list. By convention, edges are ordered by traversal in a clockwise direction. Edge following is the process of searching the (filtered) image to determine contours.
Gradient In the one-dimensional case, a step edge corresponds to a local peak in the first derivative of the intensity function. In the two-dimensional case, we analyze the gradient instead of the first derivative. Just like the first derivative, the gradient measures the change in a function. For a two-dimensional function F it is defined as the vector of partial derivatives G = (Gi, Gj) = (∂F/∂i, ∂F/∂j).
Gradient In order to compute Gi and Gj in an image F at position [i, j], we need to consider the discrete case and get:
Gi = F[i+1, j] – F[i, j]
Gj = F[i, j+1] – F[i, j]
This can be done with small convolution filters: the 2×1 kernel (-1, 1) applied along the i (row) direction for Gi, and the 1×2 kernel (-1, 1) applied along the j (column) direction for Gj.
To be precise in the assignment of gradients to pixels and to reduce noise, we usually apply 3×3 filters instead (next slide).
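To make the discrete case concrete, here is a minimal sketch of these forward-difference filters, assuming the image is a grayscale NumPy array indexed as F[i, j]; the function name simple_gradient is only an illustration, not part of the lecture.

```python
import numpy as np

def simple_gradient(F):
    """Forward-difference gradient of a grayscale image F indexed as F[i, j]."""
    F = F.astype(float)
    Gi = np.zeros_like(F)
    Gj = np.zeros_like(F)
    # Gi[i, j] = F[i+1, j] - F[i, j]: intensity change along the row index i
    Gi[:-1, :] = F[1:, :] - F[:-1, :]
    # Gj[i, j] = F[i, j+1] - F[i, j]: intensity change along the column index j
    Gj[:, :-1] = F[:, 1:] - F[:, :-1]
    return Gi, Gj
```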
Sobel Filters Sobel filters are the most common variant of edge detection filters. Two small convolution filters are used successively:
Si:
-1 -2 -1
 0  0  0
 1  2  1
Sj:
-1  0  1
-2  0  2
-1  0  1
Sobel Filters Sobel filters yield two interesting pieces of information: The magnitude of the gradient (the local change in brightness): m = sqrt(si² + sj²), where si and sj are the responses of the filters Si and Sj. The angle of the gradient (tells us about the orientation of an edge): θ = arctan(sj / si).
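As a sketch of how these two quantities might be computed, assuming SciPy is available; the kernel orientation convention and the function name sobel_edges are illustrative assumptions, not taken from the lecture.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel kernels for the i (row) and j (column) directions.
S_I = np.array([[-1, -2, -1],
                [ 0,  0,  0],
                [ 1,  2,  1]], dtype=float)
S_J = S_I.T

def sobel_edges(F):
    """Return gradient magnitude and orientation (radians) for image F."""
    si = convolve(F.astype(float), S_I)  # response to the row-direction kernel
    sj = convolve(F.astype(float), S_J)  # response to the column-direction kernel
    magnitude = np.hypot(si, sj)         # sqrt(si**2 + sj**2)
    angle = np.arctan2(sj, si)           # gradient orientation
    return magnitude, angle
```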
Gradient vs. Edge Orientation
Note: Edge and gradient orientation are perpendicular to each other: Here, the gradient orientation is horizontal (pointing to the right) and the edge orientation is vertical.
Sobel Filters Calculating the magnitude of the brightness gradient with a Sobel filter. Left: original image; right: filtered image.
Sobel Filters and Thresholding
Hough Transform The Hough transform is a very general technique for feature detection. In the present context, we will use it for the detection of straight lines as contour descriptors in edge point arrays. We could use other variants of the Hough transform to detect circles and other shapes. We could even use it outside of computer vision, for example in data mining applications. So understanding the Hough transform may benefit you in many situations.
Hough Transform The Hough transform is a voting mechanism. In general, each point in the input space votes for several combinations of parameters in the output space. Those combinations of parameters that receive the most votes are declared the winners. We will use the Hough transform to fit a straight line to edge position data. To keep the description simple and consistent, let us assume that the input image is continuous and described by an x-y coordinate system.
Hough Transform A straight line can be described by the equation y = mx + c. The variables x and y are the parameters of our input space, and m and c are the parameters of the output space. For a given value (x, y) indicating the position of an edge in the input, we can determine the possible values of m and c by rewriting the above equation: c = -xm + y. You see that this represents a straight line in m-c space, which is our output space.
Hough Transform Example: Each of the three points A, B, and C on a straight line in input space is transformed into a straight line in output space. (Figure: x-y input space with the collinear points A, B, and C; m-c output space with the three corresponding lines crossing at the winner parameters.) The parameters of their crossing point (which would be the winners) are the parameters of the straight line in input space.
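As a purely illustrative numeric instance (the values are not taken from the figure): suppose A = (1, 3), B = (2, 5), and C = (3, 7), which all lie on the line y = 2x + 1. Point A votes for the output-space line c = -1·m + 3, B for c = -2·m + 5, and C for c = -3·m + 7. All three output-space lines pass through (m, c) = (2, 1), which are exactly the slope and intercept of the input-space line.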
Hough Transform Hough Transform Algorithm:
1. Quantize input and output spaces appropriately.
2. Assume that each cell in the parameter (output) space is an accumulator (counter). Initialize all cells to zero.
3. For each point (x, y) in the image (input) space, increment by one each of the accumulators that satisfy the equation.
4. Maxima in the accumulator array correspond to the parameters of model instances.
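A minimal sketch of this accumulator scheme in m-c space, assuming the output space is quantized with evenly spaced values (e.g., from np.linspace); the function name hough_lines_mc and the data layout are illustrative assumptions.

```python
import numpy as np

def hough_lines_mc(edge_points, m_values, c_values):
    """Vote in a quantized (m, c) accumulator for every edge point (x, y)."""
    acc = np.zeros((len(m_values), len(c_values)), dtype=int)
    c_min, c_max = c_values[0], c_values[-1]
    c_step = c_values[1] - c_values[0]
    for x, y in edge_points:
        for mi, m in enumerate(m_values):
            c = -x * m + y                             # c = -xm + y
            if c_min <= c <= c_max:
                ci = int(round((c - c_min) / c_step))  # nearest c cell
                acc[mi, ci] += 1
    return acc

# Usage sketch: the strongest line corresponds to the accumulator maximum, e.g.
# m_idx, c_idx = np.unravel_index(acc.argmax(), acc.shape)
```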
Hough Transform The Hough transform does not require preprocessing of edge information such as ordering, noise removal, or filling of gaps. It simply provides an estimate of how to best fit a straight line (or other curve model) to the available edge data. If there are multiple straight lines in the image, the Hough transform will result in multiple peaks. You can search for these peaks to find the parameters for all the corresponding straight lines.
Improved Hough Transform
Here is some practical advice for doing the Hough transform. The m-c space described on the previous slides is simple but not very practical. It cannot represent vertical lines, and the closer the orientation of a line gets to vertical, the larger the change in m that is required to change its orientation noticeably. We are going to discuss an alternative output space that requires a bit more computation but avoids the problems of the m-c space.
Improved Hough Transform
As we said before, it is problematic to use m (slope) and c (intercept) as an output space. Instead, it is a good idea to describe a straight line by the orientation θ and length d of its normal. The normal n of a straight line l is perpendicular to l and connects l with the origin of the coordinate system. The range of θ is from 0° to 360°, and the range of d is from 0 to the length of the image diagonal. Note that we can skip the interval from 180° to 270°, because it would require a negative d. Let us assume that the image is 450×450 units large, so the diagonal has length 450·√2 ≈ 636.
Improved Hough Transform
(Figure: input space shown as a 450×450 image with axes Row i and Column j, containing the line to be described and its normal of length d at orientation θ; output space with axis θ from 0 to 360 and axis d from 0 to 636, containing the representation of the same line.) The parameters θ and d form the output space for our Hough transform.
Improved Hough Transform
For any edge point (i0, j0) indicated by our Sobel edge detector, we have to find all parameters θ and d for those straight lines that pass through (i0, j0). We will then increase the counters in our output space located at every such (θ, d) by the edge strength, i.e., the magnitude provided by the Sobel detector. This way we will find out which parameters (θ, d) are most likely to indicate the clearest lines in the image. But first of all, we have to discuss how to find all the parameters (θ, d) for a given point (i0, j0).
Improved Hough Transform
By varying θ from 0° to 360° we can find all lines crossing (i0, j0). (Figure: three example lines through (i0, j0) with normals of lengths d1, d2, d3 at angles θ1, θ2, θ3.) But how can we compute the parameter d for each value of θ? Idea: Rotate (i0, j0) and the normal around the origin by -θ so that the normal lands on the i-axis. Then the i-coordinate of the rotated point is the value of d.
Improved Hough Transform
And how do we rotate a point in two-dimensional space? The simplest way is to multiply the point vector with a rotation matrix. We compute the rotated point (iR, jR), obtained by rotating the point (i0, j0) around the point (0, 0) by the angle θ, as follows:
iR = i0 cos θ - j0 sin θ
jR = i0 sin θ + j0 cos θ
Improved Hough Transform
We are only interested in the i-coordinate:
iR = i0 cos θ - j0 sin θ
In our case, we want to rotate by the angle -θ:
iR = i0 cos(-θ) - j0 sin(-θ)
iR = i0 cos θ + j0 sin θ
Now we can compute the parameter d as a function of i0, j0, and θ:
d(i0, j0; θ) = i0 cos θ + j0 sin θ
By varying θ we are now able to determine all parameters (θ, d) for a given point (i0, j0) and increase the counters in output space accordingly.
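Putting the pieces together, here is a sketch of the improved transform, assuming the accumulator is quantized to 1-degree and 1-pixel bins and votes are weighted by the Sobel magnitude; the threshold value and the function name hough_lines_theta_d are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def hough_lines_theta_d(edge_strength, strength_threshold=50.0):
    """Hough transform over (theta, d), voting with Sobel edge strength.

    edge_strength: 2D array of gradient magnitudes indexed as [i, j].
    Returns a (360, diagonal) accumulator whose maxima indicate lines."""
    rows, cols = edge_strength.shape
    diag = int(np.ceil(np.hypot(rows, cols)))   # maximum possible d
    thetas = np.deg2rad(np.arange(360))         # one accumulator bin per degree
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((360, diag), dtype=float)
    # Only strong edge points vote, each weighted by its edge strength.
    for i, j in zip(*np.nonzero(edge_strength > strength_threshold)):
        d = i * cos_t + j * sin_t               # d(i0, j0; theta) = i0*cos + j0*sin
        valid = (d >= 0) & (d < diag)           # skip angles that give negative d
        acc[valid, d[valid].astype(int)] += edge_strength[i, j]
    return acc
```

In practice one would typically smooth the accumulator or apply non-maximum suppression before reading off its peaks as detected lines.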