Introduction to Artificial Intelligence
Lecture 22: Computer Vision II
April 21, 2016


Slide 1: Canny Edge Detector

The Canny edge detector is a good approximation of the optimal operator, i.e., the one that maximizes the product of signal-to-noise ratio and localization. Let I[i, j] be our input image. First, we apply a Gaussian filter with standard deviation σ to this image to create a smoothed image S[i, j]. Then we compute the intensity gradient. For example, we could use the Sobel filter to obtain the vertical derivative P[i, j] and the horizontal derivative Q[i, j].
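As a rough sketch of these first two steps (not the slide's own code; the filter choices and the σ value are illustrative, using SciPy):

```python
from scipy import ndimage

def smooth_and_gradient(I, sigma=1.0):
    """Smooth image I with a Gaussian, then take Sobel derivatives of the result."""
    S = ndimage.gaussian_filter(I.astype(float), sigma)  # smoothed image S[i, j]
    P = ndimage.sobel(S, axis=0)   # derivative across rows ("vertical" derivative P[i, j])
    Q = ndimage.sobel(S, axis=1)   # derivative across columns ("horizontal" derivative Q[i, j])
    return S, P, Q
```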

Slide 2: Canny Edge Detector

Once we have computed P[i, j] and Q[i, j], we can also compute the magnitude m and orientation θ of the gradient vector at position [i, j]:

m[i, j] = sqrt(P[i, j]² + Q[i, j]²),   θ[i, j] = atan2(Q[i, j], P[i, j])

This is exactly the information that we want – it tells us where edges are, how significant they are, and what their orientation is. However, this information is still very noisy.
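A corresponding sketch for the magnitude and orientation arrays (again illustrative, not the slide's own code):

```python
import numpy as np

def gradient_magnitude_orientation(P, Q):
    """Per-pixel gradient magnitude m[i, j] and orientation theta[i, j]."""
    m = np.hypot(P, Q)         # sqrt(P^2 + Q^2)
    theta = np.arctan2(Q, P)   # orientation in radians, in (-pi, pi]
    return m, theta
```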

Slide 3: Canny Edge Detector

The first problem is that edges in images are not usually indicated by perfect step edges in the brightness function. Brightness transitions usually have a certain slope that extends across many pixels. As a consequence, edges typically lead to wide lines of high gradient magnitude. However, we would like to find thin lines that most precisely describe the boundary between two objects. This can be achieved with the nonmaxima suppression technique.

Slide 4: Canny Edge Detector

We first assign a sector ξ[i, j] to each pixel according to its associated gradient orientation θ[i, j]. There are four sectors with numbers 0, 1, 2, and 3: the full 360° range of orientations is divided into eight 45° wedges (at 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°), and each wedge, together with the wedge directly opposite it, forms one sector.
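One way to implement this sector assignment (the slide defines the wedge boundaries only pictorially, so centering sector 0 on the horizontal direction is an assumption here):

```python
import numpy as np

def orientation_to_sector(theta):
    """Quantize gradient orientations (radians) into the four sectors 0-3.

    Opposite directions are folded together, so each sector covers one
    45-degree wedge plus the wedge 180 degrees away from it."""
    deg = np.degrees(theta) % 180.0                        # fold opposite directions together
    return (np.floor((deg + 22.5) / 45.0).astype(int)) % 4
```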

Slide 5: Canny Edge Detector

Then we iterate through the array m[i, j] and compare each element to two of its 8-neighbors. Which two neighbors we choose depends on the value of ξ[i, j]. In the following diagram, the numbers in the neighboring squares of [i, j] indicate the value of ξ[i, j] for which these neighbors are chosen:

1   0   3
2 [i,j] 2
3   0   1

Slide 6: Canny Edge Detector

If the value of m[i, j] is less than either of the m-values in these two neighboring positions, set E[i, j] = 0. Otherwise, set E[i, j] = m[i, j]. The resulting array E[i, j] is of the same size as m[i, j] and contains values greater than zero only at local maxima of gradient magnitude (measured in local gradient direction). Notice that an edge in an image always induces an intensity gradient that is perpendicular to the orientation of the edge. Thus, this nonmaxima suppression technique thins the detected edges to a width of one pixel.
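A direct, unoptimized sketch of this rule; the pairing of sector numbers with neighbor offsets follows a reading of the 3×3 diagram on the previous slide and should be treated as an interpretation:

```python
import numpy as np

# Neighbor offsets compared against [i, j] for each sector value,
# following the 3x3 diagram on the previous slide.
NEIGHBORS = {
    0: ((-1, 0), (1, 0)),    # pixels above and below
    1: ((-1, -1), (1, 1)),   # upper-left and lower-right
    2: ((0, -1), (0, 1)),    # pixels left and right
    3: ((-1, 1), (1, -1)),   # upper-right and lower-left
}

def nonmaxima_suppression(m, sector):
    """E[i, j] = m[i, j] where m is a local maximum along the gradient direction, else 0."""
    E = np.zeros_like(m)
    rows, cols = m.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            (di1, dj1), (di2, dj2) = NEIGHBORS[int(sector[i, j])]
            if m[i, j] >= m[i + di1, j + dj1] and m[i, j] >= m[i + di2, j + dj2]:
                E[i, j] = m[i, j]
    return E
```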

Slide 7: Canny Edge Detector

However, usually there will still be noise in the array E[i, j], i.e., non-zero values that do not correspond to any edge in the original image. In most cases, these values will be smaller than those indicating actual edges. We could then simply use a threshold τ to eliminate the noise, as we did before. However, this could still leave some isolated edge outputs caused by strong noise and some gaps in detected contours. We can improve this process by using hysteresis thresholding.

Slide 8: Canny Edge Detector

Hysteresis thresholding uses two thresholds: a low threshold τ_L and a high threshold τ_H. Typically, τ_H is chosen so that 2τ_L ≤ τ_H ≤ 3τ_L. In the first stage, we label each pixel in E[i, j] as follows:

- If E[i, j] > τ_H, then the pixel is an edge.
- If E[i, j] < τ_L, then the pixel is not an edge (deleted).
- Otherwise, the pixel is an edge candidate.
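A minimal sketch of this first labeling stage (the threshold values are left as parameters; for example, tau_high = 3 * tau_low would satisfy the rule of thumb above):

```python
import numpy as np

def label_pixels(E, tau_low, tau_high):
    """First hysteresis stage: split E into definite edges and edge candidates."""
    edge = E > tau_high                 # definitely an edge
    candidate = (E >= tau_low) & ~edge  # kept for the second (linking) stage
    # Pixels below tau_low are implicitly discarded (neither edge nor candidate).
    return edge, candidate
```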

Slide 9: Canny Edge Detector

In the second stage, we check for each edge candidate c whether it is connected to an edge via other candidates (8-neighborhood). If so, we turn c into an edge. Otherwise, we delete c, i.e., it is not an edge. This method fills gaps in contours and discards isolated edges that are unlikely to be part of any contour.
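The second stage can be realized as a breadth-first search that starts from the definite edge pixels and grows into connected candidates; this is one possible implementation, not the slide's own:

```python
import numpy as np
from collections import deque

def link_edges(edge, candidate):
    """Second hysteresis stage: promote candidates 8-connected to an edge pixel."""
    edge = edge.copy()
    rows, cols = edge.shape
    queue = deque(zip(*np.nonzero(edge)))            # start from all definite edges
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < rows and 0 <= nj < cols
                        and candidate[ni, nj] and not edge[ni, nj]):
                    edge[ni, nj] = True              # connected candidate becomes an edge
                    queue.append((ni, nj))
    return edge
```

In practice, the whole pipeline is usually taken from a library; OpenCV, for instance, provides it as cv2.Canny(image, threshold1, threshold2).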

Slide 10: Canny Edge Detector - Example

Slide 11: Canny Edge Detector - Example

ξ[i, j] (remember the directions):

1   0   3
2 [i,j] 2
3   0   1

Slide 12: Canny Edge Detector - Example

Slide 14: Key Points

Often we are unable to detect the complete contour of an object or characterize the details of its shape. In such cases, we can still represent shape information in an implicit way. The idea is to find key points in the image of the object that can be used to identify the object and that do not change dramatically when the orientation or lighting of the object changes. A good choice for this purpose is corners in the image.

Slide 15: Corner Detection with FAST

A very efficient algorithm for corner detection is called FAST (Features from Accelerated Segment Test). For any given point c in the image, we can test whether it is a corner by:

- considering the 16 pixels at a radius of 3 pixels around c and
- finding the longest run (i.e., uninterrupted sequence) of pixels whose intensity is either greater than that of c plus a threshold or less than that of c minus the same threshold.

If the run is at least 12 pixels long, then c is a corner. A sketch of this segment test is shown below.
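This sketch is one possible realization of the segment test; the circle offsets and their ordering follow the usual FAST layout of 16 pixels at radius 3 (an assumption, since the slide shows them only as an image), and runs are allowed to wrap around the circle:

```python
# Offsets of the 16 pixels on a circle of radius 3 around the candidate point c.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, t, n=12):
    """Segment test: is there a run of at least n contiguous circle pixels that are
    all brighter than img[r, c] + t or all darker than img[r, c] - t?"""
    center = int(img[r, c])
    values = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (+1, -1):                       # +1 checks the bright run, -1 the dark run
        flags = [sign * (v - center) > t for v in values]
        flags = flags + flags                   # duplicate so runs can wrap around the circle
        run = best = 0
        for f in flags:
            run = run + 1 if f else 0
            best = max(best, min(run, 16))      # cap at a full circle of 16 pixels
        if best >= n:
            return True
    return False
```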

Slide 16: Corner Detection with FAST

Slide 17: Corner Detection with FAST

The FAST algorithm can be made even faster by first checking only pixels 1, 5, 9, and 13. If fewer than three of them fulfill the intensity condition, we can immediately rule out that the given point is a corner. In order to avoid detecting multiple corners near the same pixel, we can require a minimum distance between corners. If two corners are too close, we only keep the one with the higher corner score. Such a score can be computed as the sum of intensity differences between c and the pixels in the run.
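A sketch of the quick rejection test and of one possible corner score; taking positions 1, 5, 9, and 13 to be the four compass points of the circle (top, right, bottom, left) matches the usual FAST numbering but is an assumption here:

```python
def passes_quick_test(img, r, c, t):
    """Cheap rejection using the four compass pixels of the circle: at least three
    must be brighter than img[r, c] + t, or at least three darker than img[r, c] - t,
    otherwise (r, c) cannot pass the full segment test."""
    center = int(img[r, c])
    compass = [int(img[r - 3, c]), int(img[r, c + 3]),
               int(img[r + 3, c]), int(img[r, c - 3])]
    brighter = sum(v > center + t for v in compass)
    darker = sum(v < center - t for v in compass)
    return brighter >= 3 or darker >= 3

def corner_score(center, run_values):
    """Corner score as described on the slide: sum of intensity differences
    between the center pixel and the pixels in the run."""
    return sum(abs(int(v) - int(center)) for v in run_values)
```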

Slide 18: Key Point Description with BRIEF

Now that we have identified interesting points in the image, how can we describe them so that we can detect them in other images? A very efficient method is to use BRIEF (Binary Robust Independent Elementary Features) descriptors. They can be stored with minimal memory (32 bytes per key point), and comparing two of them only requires 256 binary operations.

Slide 19: Key Point Description with BRIEF

First, smooth the input image with a 9×9 Gaussian filter. Then choose 256 pairs of points within a 35×35 pixel area, following a Gaussian distribution with σ = 7 pixels. Center the resulting mask on a corner c. For every pair of points, if the intensity at the first point is greater than at the second one, add a 0 to the bit string; otherwise add a 1. The resulting bit string of length 256 is the descriptor for point c.
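A sketch of this descriptor construction: the sampling pattern is generated once with a fixed random seed so that every key point uses the same 256 pairs, and the smoothing σ passed to the Gaussian filter is an assumption (the slide only gives the 9×9 kernel size):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# 256 point pairs (dr1, dc1, dr2, dc2), drawn once from a Gaussian with sigma = 7
# and clipped so every sample stays inside the 35x35 window around the key point.
PAIRS = np.clip(np.rint(rng.normal(0.0, 7.0, size=(256, 4))).astype(int), -17, 17)

def brief_descriptor(img, r, c, smoothing_sigma=2.0):
    """256-bit BRIEF descriptor for the key point at (r, c), packed into 32 bytes."""
    S = ndimage.gaussian_filter(img.astype(float), smoothing_sigma)  # pre-smoothing
    bits = np.empty(256, dtype=np.uint8)
    for k, (dr1, dc1, dr2, dc2) in enumerate(PAIRS):
        first, second = S[r + dr1, c + dc1], S[r + dr2, c + dc2]
        bits[k] = 0 if first > second else 1     # comparison rule from the slide
    return np.packbits(bits)                     # 32 bytes per key point
```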

Slide 20: Key Point Description with BRIEF

Slide 21: Key Point Matching with BRIEF

In order to compute the matching distance between the descriptors of two different points, we can simply count the number of mismatching bits in their description (Hamming distance). For example, the bit strings 10110010 and 11111010 have a Hamming distance of 2, because they differ in their second and fifth bits. In order to find the match for point c in another image, we can find the pixel in that image whose descriptor has the smallest Hamming distance to the one for c.
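A sketch of matching by Hamming distance, assuming the descriptors are stored as packed byte arrays (e.g., as produced by the BRIEF sketch above):

```python
import numpy as np

def hamming_distance(d1, d2):
    """Number of differing bits between two packed 256-bit descriptors."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def best_match(descriptor, candidate_descriptors):
    """Index of the candidate whose descriptor is closest in Hamming distance."""
    distances = [hamming_distance(descriptor, d) for d in candidate_descriptors]
    return int(np.argmin(distances))
```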

Slide 22: Key Point Matching with BRIEF