Copyright © 2012 Elsevier Inc. All rights reserved.. Chapter 5 Edge Detection

Copyright © 2012 Elsevier Inc. All rights reserved.. FIGURE 5.1 Edge models: (a) sudden step edge; (b) slanted step edge; (c) smooth step edge; (d) planar edge; (e) roof edge; and (f) line edge. The effective profiles of edge models are nonzero only within the stated neighborhood. The slanted step and the smooth step are approximations to realistic edge profiles: the sudden step and the planar edge are extreme forms that are useful for comparisons (see text).

5.2 Basic Theory of Edge Detection Both differential gradient (DG) and template matching (TM) operators estimate local intensity gradients with the aid of suitable convolution masks. DG operators require two such masks for the x and y directions. TM operators usually employ up to 12 convolution masks capable of estimating local components of gradient in different directions. Copyright © 2012 Elsevier Inc. All rights reserved.. 3

In the TM approach, the local edge gradient magnitude g is approximated by taking the maximum of the responses gi of the individual component masks: g = max(g1, g2, …, gn), where n is usually 8 or 12 (Eq. 5.1). Copyright © 2012 Elsevier Inc. All rights reserved. 4
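To make the idea concrete, here is a minimal sketch (not taken from the book) of the TM approach: a Prewitt-style base mask is rotated in 45° steps to give n = 8 component masks, and the edge magnitude is the maximum response over the set, as in Eq. (5.1). The base mask, the rotation trick, and the use of SciPy for convolution are choices made for this example only.

```python
# Sketch (not the book's code): template-matching edge magnitude as the maximum
# response over eight compass masks obtained by rotating a base mask in
# 45-degree steps.
import numpy as np
from scipy import ndimage

base = np.array([[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]], dtype=float)      # Prewitt-style base mask (illustrative)

def rotate45(mask):
    """Rotate a 3x3 mask by 45 degrees by cycling its eight outer elements."""
    m = mask.copy()
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [mask[r, c] for r, c in ring]
    for (r, c), v in zip(ring, vals[-1:] + vals[:-1]):
        m[r, c] = v
    return m

masks = [base]
for _ in range(7):
    masks.append(rotate45(masks[-1]))           # n = 8 component masks in total

def tm_edge_magnitude(img):
    responses = [ndimage.convolve(img.astype(float), m) for m in masks]
    return np.max(responses, axis=0)            # g = max_i g_i  (Eq. 5.1)
```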

In the DG approach, the local edge magnitude may be computed from the local gradient components gx and gy using the nonlinear transformation g = (gx^2 + gy^2)^(1/2). To save computational effort, it is common practice to approximate this formula by one of the simpler forms g = |gx| + |gy| or g = max(|gx|, |gy|). Copyright © 2012 Elsevier Inc. All rights reserved. 5

In the TM approach, edge orientation is estimated simply as that of the mask giving rise to the largest response in Eq. (5.1). In the DG approach, it is estimated vectorially as θ = arctan(gy/gx). Copyright © 2012 Elsevier Inc. All rights reserved. 6
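A corresponding DG sketch (again illustrative rather than the book's code), using the Sobel masks for gx and gy and showing the exact magnitude, the two approximations, and the orientation estimate:

```python
# Sketch (not the book's code): differential-gradient edge magnitude and
# orientation using the Sobel masks.
import numpy as np
from scipy import ndimage

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def dg_edges(img):
    gx = ndimage.convolve(img.astype(float), sobel_x)
    gy = ndimage.convolve(img.astype(float), sobel_y)
    g_exact = np.hypot(gx, gy)                    # g = sqrt(gx^2 + gy^2)
    g_sum = np.abs(gx) + np.abs(gy)               # approximation 1
    g_max = np.maximum(np.abs(gx), np.abs(gy))    # approximation 2
    theta = np.arctan2(gy, gx)                    # edge orientation estimate
    return g_exact, g_sum, g_max, theta
```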

Copyright © 2012 Elsevier Inc. All rights reserved.. Table 5.1

Copyright © 2012 Elsevier Inc. All rights reserved.. Table 5.2

5.4 Theory of 3x3 Template Operators. Symmetry requirements lead to the following masks for the 0° and 45° directions: [[-A 0 A], [-B 0 B], [-A 0 A]] and [[0 C D], [-C 0 C], [-D -C 0]]. Suppose the pixel intensity values within a 3x3 neighborhood are labeled [[a b c], [d e f], [g h i]]. Copyright © 2012 Elsevier Inc. All rights reserved. 9

The above masks give the following estimates of the gradient in the 0°, 90°, and 45° directions: g0 = A(c + i − a − g) + B(f − d), g90 = A(a + c − g − i) + B(b − h), and g45 = C(b + f − d − h) + D(c − g). If vector addition is to be valid, then g45 = (g0 + g90)/√2. Copyright © 2012 Elsevier Inc. All rights reserved. 10

Equating coefficients of a, b, …, i leads to the self-consistent pair of conditions C = B/√2 and D = √2A. A further requirement is for the 0° and 45° masks to give equal responses at 22.5°; writing t = tan 22.5°, this leads to a condition that fixes the ratio B/A at a value slightly greater than the Sobel value of 2 (approximately 2.055). Copyright © 2012 Elsevier Inc. All rights reserved. 11
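These relations can be checked numerically. The sketch below is my own illustration: the mask forms and sign conventions follow the reconstruction above, and the B/A value is the one suggested by the 22.5° condition (the Sobel operator corresponds to B/A = 2).

```python
# Sketch (not from the book): numerical check that C = B/sqrt(2) and D = sqrt(2)*A
# make vector addition of the 0-, 90-, and 45-degree gradient estimates consistent.
import numpy as np

A, B = 1.0, 2.055                      # B/A ~ 2.055 from the 22.5-degree condition; Sobel uses 2
C, D = B / np.sqrt(2), np.sqrt(2) * A

m0 = np.array([[-A, 0, A],             # 0-degree mask
               [-B, 0, B],
               [-A, 0, A]])
m90 = np.array([[ A,  B,  A],          # 90-degree mask (rotated form of m0)
                [ 0,  0,  0],
                [-A, -B, -A]])
m45 = np.array([[ 0,  C,  D],          # 45-degree mask
                [-C,  0,  C],
                [-D, -C,  0]])

rng = np.random.default_rng(0)
nbhd = rng.random((3, 3))              # random 3x3 neighbourhood a..i
g0, g90, g45 = (np.sum(m * nbhd) for m in (m0, m90, m45))
# Vector addition is consistent: g45 equals (g0 + g90)/sqrt(2)
assert np.isclose(g45, (g0 + g90) / np.sqrt(2))
```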

The Design of Differential Gradient Operators. Copyright © 2012 Elsevier Inc. All rights reserved.

5.10 Hysteresis Thresholding. The concept of hysteresis thresholding is a general one and can be applied in a range of applications, including both image and signal processing. The basic rule is to threshold the edge at a high level, and then to allow extension of the edge down to a lower threshold level, but only adjacent to points that have already been assigned edge status. Isolated edge points within the object boundaries are ignored by hysteresis thresholding. Copyright © 2012 Elsevier Inc. All rights reserved. 13
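A minimal sketch of hysteresis thresholding (my own illustration, not the book's implementation), using SciPy's connected-component labelling to extend strong edges through adjacent weak ones:

```python
# Sketch: hysteresis thresholding of a gradient-magnitude image g using
# 8-connected components of weak-edge candidates seeded by strong edges.
import numpy as np
from scipy import ndimage

def hysteresis_threshold(g, t_low, t_high):
    strong = g >= t_high                              # reliably edge points
    weak = g >= t_low                                 # candidate edge points
    labels, n = ndimage.label(weak, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True            # regions containing a strong pixel
    keep[0] = False                                   # background label
    return keep[labels]
```

Pixels above t_high seed the result; pixels between t_low and t_high survive only if they belong to an 8-connected region that touches a seed.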


Copyright © 2012 Elsevier Inc. All rights reserved.. FIGURE 5.4 Effectiveness of hysteresis thresholding. This figure shows tests made on the edge gradient image of Fig. 7.4(b). (a) Effect of thresholding at the upper hysteresis level. (b) Effect of thresholding at the lower hysteresis level. (c) Effect of hysteresis thresholding. (d) Effect of thresholding at an intermediate level.

5.11 The Canny Operator. The Canny operator (Canny, 1986) has become one of the most widely used edge detection operators, as it seeks to get away from the traditional mask-based operators. It permits thin line structures to emerge and ensures that they are connected together as far as possible and are meaningful at the particular scale and bandwidth. Copyright © 2012 Elsevier Inc. All rights reserved. 16

The method involves a number of stages of processing: 1. Low-pass spatial frequency filtering (Gaussian). 2. Application of first-order differential masks (Sobel). 3. Nonmaximum suppression involving subpixel interpolation of pixel intensities. 4. Hysteresis thresholding. Copyright © 2012 Elsevier Inc. All rights reserved. 17
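For comparison, OpenCV's cv2.Canny bundles stages 2 to 4, so the whole pipeline can be sketched in a few lines (the file name and threshold values are arbitrary examples, not taken from the book):

```python
# Sketch: the four Canny stages expressed with OpenCV.
import cv2

img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
smoothed = cv2.GaussianBlur(img, (5, 5), 1.4)            # stage 1: Gaussian smoothing
edges = cv2.Canny(smoothed, 50, 100)                     # stages 2-4: Sobel gradients,
                                                         # nonmaximum suppression, and
                                                         # hysteresis thresholding
```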

Copyright © 2012 Elsevier Inc. All rights reserved. FIGURE 5.5 Exactness of fit of the well-known 3x3 smoothing kernel. This figure shows the Gaussian-based smoothing kernel (a) that is closest to the well-known 3x3 smoothing kernel (b) over the central 3x3 region. For clarity, neither is normalized by the factor 1/16. The larger Gaussian envelope does not vanish outside the region shown, so its integral differs from 16. Hence, the kernel in (b) can only be said to approximate a Gaussian, and its actual standard deviation differs somewhat from that of the fitted Gaussian.
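As a rough numerical illustration (my own sketch, with the fitting criterion chosen for simplicity rather than taken from the book), the following computes the standard deviation implied by the 3x3 kernel's weights and samples a Gaussian of that width on the same grid for comparison:

```python
# Sketch: standard deviation of the well-known 3x3 smoothing kernel, and a
# Gaussian sampled on the same grid for comparison.
import numpy as np

k = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0       # separable binomial kernel
x = np.array([-1.0, 0.0, 1.0])
xx, yy = np.meshgrid(x, x)
sigma = np.sqrt(np.sum(k * xx**2))                  # 1-D standard deviation of the kernel
print(sigma)                                        # ~0.707

g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))       # Gaussian sampled on the 3x3 grid
g /= g.sum()
print(np.round(g * 16, 2))                          # compare with k * 16 = [[1,2,1],[2,4,2],[1,2,1]]
```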

Stage 3 – Non-maximum suppression. Goal: thin the grayscale edges to unit width, since only one point along the edge normal direction should be a local maximum. Method: determine the local edge normal direction using Eq. (5.5), and move either way along the normal to determine whether the current location is a local maximum of gradient magnitude. If it is not, the edge point is suppressed, so that only the maximum along the normal is retained. When the normal direction does not pass through the centers of the adjacent pixels, linear interpolation of their gradient magnitudes is applied. Copyright © 2012 Elsevier Inc. All rights reserved. 19
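The following is a simplified sketch of nonmaximum suppression (my own code). Instead of the subpixel interpolation described above, it quantizes the gradient direction to the nearest of four cases, which is the cruder variant used in many implementations:

```python
# Simplified non-maximum suppression: keep a pixel only if its gradient
# magnitude is a local maximum along the (quantized) edge normal.
import numpy as np

def nonmax_suppress(mag, gx, gy):
    out = np.zeros_like(mag)
    angle = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0   # normal direction, 0-180 deg
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # roughly horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                  # roughly 45 degrees
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                 # roughly vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                           # roughly 135 degrees
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]       # keep only local maxima along the normal
    return out
```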


Copyright © 2012 Elsevier Inc. All rights reserved.. FIGURE 5.6

Stage 4: Hysteresis Thresholding. The upper threshold is selected to ensure that the edge points captured first are reliable; other points are then accepted as viable edge points if they exceed the lower threshold and are adjacent to points already known to be reliable edges. A simple rule for the choice of the lower threshold is that it should be about half the upper threshold. Copyright © 2012 Elsevier Inc. All rights reserved. 22

Copyright © 2012 Elsevier Inc. All rights reserved. FIGURE 5.7 Application of the Canny edge detector. (a) Original image. (b) Smoothed image. (c) Result of applying Sobel operator. (d) Result of nonmaximum suppression. (e) Result of hysteresis thresholding. (f) Result of thresholding only at the lower threshold level. (g) Result of thresholding at the upper threshold level. Note that there are fewer false or misleading outputs in (e) than would result from using a single threshold.


Copyright © 2012 Elsevier Inc. All rights reserved. FIGURE 5.8 Another application of the Canny edge detector. (a) Original image. (b) Smoothed image. (c) Result of applying Sobel operator. (d) Result of nonmaximum suppression. (e) Result of hysteresis thresholding. (f) Result of thresholding only at the lower threshold level. (g) Result of thresholding at the upper threshold level. Again there are fewer false or misleading outputs in (e) than would result from using a single threshold.


5.12 The Laplacian Operator. The Sobel operator is a first-derivative operator; the Laplacian is a second-derivative operator and is therefore sensitive only to changes in intensity gradient. Copyright © 2012 Elsevier Inc. All rights reserved. 31
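A common 3x3 discretization of the Laplacian (one of several possibilities; the book's exact mask may differ) and its application as a convolution:

```python
# Sketch: a standard 3x3 Laplacian mask applied by convolution.
import numpy as np
from scipy import ndimage

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_response(img):
    return ndimage.convolve(img.astype(float), laplacian, mode="nearest")
```

Because the Laplacian responds with opposite signs on either side of an edge, edge positions are found at the zero crossings of this response rather than at its maxima.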

Copyright © 2012 Elsevier Inc. All rights reserved. FIGURE 5.9 Comparison of Sobel and Laplacian outputs. (a) Pre-smoothed version of original image. (b) Result of applying Sobel operator. (c) Result of applying Laplacian operator. Because the Laplacian output can be positive or negative, the output in (c) is displayed relative to a medium (128) gray-level background. (d) Absolute magnitude of the Laplacian output. For clarity, (c) and (d) have been presented at increased contrast. Note that the Laplacian output in (d) gives double edges, one just inside and one just outside the edge position indicated by a Sobel or Canny operator. (To find edges using a Laplacian, zero crossings have to be located.) Both the Sobel and the Laplacian used here operate within a 3x3 window.

5.13 Active Contours. The basic concept of active contour models (also known as "deformable contours" or "snakes") is to obtain a complete and accurate outline of an object that may be ill-defined in places, whether through lack of contrast, noise, or fuzzy edges. A starting approximation is made, either by instituting a large contour that may be shrunk to size, or a small contour that may be expanded suitably, until its shape matches that of the object. Copyright © 2012 Elsevier Inc. All rights reserved. 33

Then its shape is made to evolve subject to an energy minimization process: minimizing the external energy corresponding to imperfections in the degree of fit, and minimizing the internal energy, so that the shape of the snake does not become unnecessarily intricate. Copyright © 2012 Elsevier Inc. All rights reserved.. 34

The energies are written in terms of small changes in position x(s) = (x(s), y(s)) of each point on the snake, where s is the arc-length distance along the snake boundary. Copyright © 2012 Elsevier Inc. All rights reserved. 35
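For reference, the classical snake formulation of Kass, Witkin, and Terzopoulos (1988), which active contour energies of this kind follow (the book's Eqs. (27) to (29) may differ in notation), can be written as:

```latex
E_{\mathrm{total}} = \int \left[ E_{\mathrm{int}}\!\left(\mathbf{x}(s)\right)
                   + E_{\mathrm{image}}\!\left(\mathbf{x}(s)\right)
                   + E_{\mathrm{ext}}\!\left(\mathbf{x}(s)\right) \right] \mathrm{d}s,
\qquad
E_{\mathrm{int}} = \tfrac{1}{2}\left( \alpha(s)\,\left|\mathbf{x}_s(s)\right|^{2}
                 + \beta(s)\,\left|\mathbf{x}_{ss}(s)\right|^{2} \right)
```

Here x_s and x_ss are the first and second derivatives of x with respect to s, α controls tension and β controls stiffness, and E_image is typically taken proportional to the negative of the squared intensity gradient so that strong edges lower the energy.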

Eqs. (27), (28), and (29) can be rewritten in a form that allows the snake to be evolved numerically. Copyright © 2012 Elsevier Inc. All rights reserved. 36
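In practice the minimization can be carried out in several ways. The sketch below (my own illustration, not the book's method) uses a simple greedy update: each control point moves to whichever nearby pixel minimizes a discrete tension plus stiffness plus image term. Here edge_mag would be a gradient-magnitude image such as that in Fig. 5.10(b), and alpha, beta, and the search radius are illustrative parameters.

```python
# Sketch: one greedy iteration of a discrete snake energy minimization.
import numpy as np

def greedy_snake_step(pts, edge_mag, alpha=0.1, beta=0.1, search=1):
    """pts: (n, 2) integer array of (row, col) control points on a closed contour."""
    n = len(pts)
    new_pts = pts.copy()
    for k in range(n):
        prev_pt, next_pt = new_pts[(k - 1) % n], pts[(k + 1) % n]
        best, best_e = pts[k], np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = pts[k] + np.array([dy, dx])
                y, x = int(cand[0]), int(cand[1])
                if not (0 <= y < edge_mag.shape[0] and 0 <= x < edge_mag.shape[1]):
                    continue
                tension = np.sum((cand - prev_pt) ** 2)                  # first-difference term
                stiffness = np.sum((prev_pt - 2 * cand + next_pt) ** 2)  # second-difference term
                image_e = -edge_mag[y, x]                                # strong edges lower the energy
                e = alpha * tension + beta * stiffness + image_e
                if e < best_e:
                    best, best_e = cand, e
        new_pts[k] = best
    return new_pts
```

Repeated calls evolve the contour toward the edge maxima; Fig. 5.10 shows the book's own algorithm after 30 and 60 iterations.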

Copyright © 2012 Elsevier Inc. All rights reserved. FIGURE 5.10 Generation of active contour models (snakes). (a) Original picture with snake initialization points near the image boundary; the final snake locations hug the outside of the object but hardly penetrate the large concavity at the bottom: they actually lie approximately along a weak shadow edge. (b) Result of smoothing and application of Sobel operator to (a); the snake algorithm used this image as its input. The snake output is superimposed in black on (b), so that the high degree of co-location with the edge maxima can readily be seen. (c) Intermediate result, after half (30) of the total number of iterations (60): this illustrates that after one edge point has been captured, it becomes much easier for other such points to be captured. (d) Result of using an increased number of initialization points and joining the final locations to give a connected boundary: some remanent deficiencies are evident.


5.18 More Recent Developments Shima et al., 2010 Ren et al., 2010 Cosio et al., 2010 Mishra et al., 2011 Papadakis and Bugeau, 2011 Copyright © 2012 Elsevier Inc. All rights reserved.. 39