1
Image Segmentation and Edge Detection
Digital Image Processing, Chapter 7
Instructor: Dr. Cheng-Chien Liu, Department of Earth Sciences, National Cheng Kung University
Last updated: 21 October 2003
2
Introduction: Image Segmentation and Edge Detection
- Purpose
  - Extract information (outlines) from the image
  - Divide the image into regions (by colour, brightness)
  - Basis of an automatic vision system
- The simplest method of division: histogramming and thresholding
  - One threshold → label (classified) image, e.g. Fig 7.1
  - Hysteresis thresholding: two thresholds, e.g. Fig 7.2
  - Principle: minimize the number of misclassified pixels
  - p-tile method
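The two thresholding schemes above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the book's code; the function names, the 4-neighbour growth rule and the test values are my assumptions.

```python
import numpy as np

def threshold(image, t):
    """Single-threshold labelling: 1 = object, 0 = background (cf. Fig 7.1)."""
    return (image > t).astype(np.uint8)

def hysteresis_threshold(image, t_low, t_high, n_iter=100):
    """Two-threshold (hysteresis) labelling (cf. Fig 7.2).

    Pixels above t_high are accepted as object; pixels between t_low and
    t_high are accepted only if they touch an already accepted pixel
    (4-neighbourhood), so weak responses grow out of strong ones.
    """
    strong = image > t_high
    weak = image > t_low
    labels = strong.copy()
    for _ in range(n_iter):                     # propagate until stable
        grown = labels.copy()
        grown[1:, :]  |= labels[:-1, :]
        grown[:-1, :] |= labels[1:, :]
        grown[:, 1:]  |= labels[:, :-1]
        grown[:, :-1] |= labels[:, 1:]
        grown &= weak                           # never grow past the low threshold
        if np.array_equal(grown, labels):
            break
        labels = grown
    return labels.astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    print(threshold(img, 128).sum(), hysteresis_threshold(img, 100, 180).sum())
```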
3
The minimum error threshold method
- Total error (Fig 7.3): E(t) = θ ∫_{−∞}^{t} p_o(x) dx + (1 − θ) ∫_{t}^{+∞} p_b(x) dx
  - θ: the fraction of the pixels that make up the object
  - 1 − θ: the fraction of the pixels that make up the background
- Minimize: ∂E/∂t = θ p_o(t) − (1 − θ) p_b(t) = 0
- Example 7.1: E(t) and ∂E/∂t
- B7.1: the Leibnitz rule
- Example 7.2: draw p_o(x) and p_b(x)
- Example 7.3: given p_o(x), p_b(x) and θ, find t
- Example 7.4: given p_o(x), p_b(x) and θ, find t and E(t)
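As a worked step (my own expansion, following the Leibniz rule referenced in B7.1, not quoted from the book), differentiating E(t) and setting the derivative to zero gives the optimality condition:

```latex
\frac{\partial E}{\partial t}
  = \theta \frac{\partial}{\partial t}\int_{-\infty}^{t} p_o(x)\,dx
    + (1-\theta) \frac{\partial}{\partial t}\int_{t}^{+\infty} p_b(x)\,dx
  = \theta\, p_o(t) - (1-\theta)\, p_b(t) = 0
  \quad\Longrightarrow\quad
  \theta\, p_o(t) = (1-\theta)\, p_b(t)
```

The optimal threshold is therefore the grey value at which the two weighted densities intersect.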
4
The minimum error threshold method (cont.)
- Drawbacks
  - Needs prior knowledge of p_o(x), p_b(x) and θ
  - Approximating p_o(x) and p_b(x) by normal distributions still requires estimating the parameters μ_o, σ_o, μ_b, σ_b
  - With Gaussian models there are two solutions t_1 and t_2, i.e. one class occupies t_1 < x < t_2 (Example 7.6)
- Example 7.5: the result of optimal thresholding is worse than that obtained by hysteresis thresholding with two heuristically chosen thresholds (Fig 7.4d)
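A minimal sketch of the Gaussian case: setting θ p_o(t) = (1 − θ) p_b(t) with normal densities and taking logarithms yields a quadratic in t, whose two roots are the t_1 and t_2 mentioned above. The function name and the parameter values below are made up for illustration.

```python
import numpy as np

def gaussian_threshold_roots(mu_o, s_o, mu_b, s_b, theta):
    """Thresholds t solving theta*N(t; mu_o, s_o) = (1-theta)*N(t; mu_b, s_b).

    Taking logarithms of the optimality condition gives A t^2 + B t + C = 0,
    so in general there are two roots (one root if s_o == s_b).
    Assumes the roots are real, i.e. the two densities actually cross.
    """
    A = s_o**2 - s_b**2
    B = -2.0 * (s_o**2 * mu_b - s_b**2 * mu_o)
    C = (s_o**2 * mu_b**2 - s_b**2 * mu_o**2
         - 2.0 * s_o**2 * s_b**2 * np.log((1.0 - theta) * s_o / (theta * s_b)))
    if np.isclose(A, 0.0):          # equal variances: the equation is linear
        return (-C / B,)
    return tuple(sorted(np.roots([A, B, C]).real))

# Illustrative (made-up) parameters: bright object on a darker background.
# Only the root that falls inside the valid grey-value range is usable.
print(gaussian_threshold_roots(mu_o=170.0, s_o=20.0, mu_b=80.0, s_b=25.0, theta=0.3))
```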
5
Otsu’s threshold method
- Derivation
  - Fraction of pixels
    - Background pixels: θ(t)
    - Object pixels: 1 − θ(t)
  - Mean grey value
    - The whole image: μ
    - Background: μ_b
    - Object: μ_o
  - Variance
    - The whole image: σ_T²
    - Background: σ_b²
    - Object: σ_o²
6
Otsu’s threshold method (cont.)
- Derivation (cont.)
  - σ_T² = σ_W² + σ_B²
  - The within-class variance: σ_W² = θ(t) σ_b² + (1 − θ(t)) σ_o²
  - The between-class variance: σ_B² = (μ_b − μ)² θ(t) + (μ_o − μ)² (1 − θ(t))
- Otsu’s thresholding: optimize t to maximize σ_B² and minimize σ_W²
  - Since σ_T² does not depend on t, it is enough to work with σ_B² (Example 7.7):
    σ_B²(t) = (μ_b(t) − μ_o(t))² θ(t) (1 − θ(t))
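A minimal NumPy sketch of Otsu's method as derived above, maximizing σ_B²(t) over all candidate thresholds. It uses the equivalent cumulative form σ_B²(t) = (μ θ(t) − μ_cum(t))² / (θ(t)(1 − θ(t))); the function name, the grey-value range [0, 255] and the test data are my assumptions.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Return the threshold t that maximizes the between-class variance.

    Pixels with value <= t are treated as background and the rest as object,
    matching the theta(t) / 1 - theta(t) split in the derivation above.
    """
    hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
    p = hist.astype(float) / hist.sum()          # normalized histogram
    levels = np.arange(n_bins)

    theta = np.cumsum(p)                          # theta(t): background fraction
    mu_cum = np.cumsum(p * levels)                # cumulative first moment
    mu = mu_cum[-1]                               # mean of the whole image

    # sigma_B^2(t) = (mu*theta(t) - mu_cum(t))^2 / (theta(t)*(1 - theta(t)))
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu * theta - mu_cum) ** 2 / (theta * (1.0 - theta))
    sigma_b2 = np.nan_to_num(sigma_b2)            # empty classes contribute nothing
    return int(np.argmax(sigma_b2))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Bimodal test data: dark background plus a brighter object.
    img = np.concatenate([rng.normal(80, 10, 4000), rng.normal(170, 15, 2000)])
    img = np.clip(img, 0, 255)
    print("Otsu threshold:", otsu_threshold(img))
```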
7
Otsu’s threshold method (cont.)
- Drawbacks
  - Assumes that the means and variances are sufficient to represent p_o(x) and p_b(x)
  - Breaks down when the two populations described by p_o(x) and p_b(x) are very unequal
  - Assumes the histogram of the image is bimodal
  - Dividing the image into only two classes is not valid under variable illumination
8
Variable illumination
- An image f(x, y) is a product of a reflectance function r(x, y) and an illumination function i(x, y): f(x, y) = r(x, y) i(x, y)
- Take logarithms: ln f(x, y) = ln r(x, y) + ln i(x, y), so the multiplicative effect becomes additive: f(x, y) = r(x, y) + i(x, y) in the log domain
- Let z be the value of the (log) image and P_z(u) = probability of z ≤ u:
  P(z ≤ u) = ∫∫_{r ≤ u − i} p_ri(r, i) dr di = ∫_{−∞}^{+∞} ∫_{−∞}^{u − i} p_ri(r, i) dr di
- p_z(u) = dP_z(u)/du = ∫ p_ri(u − i, i) di = ∫ p_r(u − i) p_i(i) di  (r and i independent)
- If i = const: p_i(i) = δ(i − i_0) → p_z(u) = p_r(u − i_0), i.e. just a shifted copy of the reflectance histogram
- If i ≠ const: the histogram of z is a smeared version of the reflectance histogram → the thresholding methods break down
9
Variable illumination (cont.)
- Solutions for non-uniform illumination
  - Divide the image into (more or less) uniformly illuminated patches and process each patch separately (Fig 7.8)
  - Correct the effect of illumination:
    - Obtain the pure illumination field i(x, y), e.g. as the image f(x, y) of a surface of uniform reflectance
    - Divide: f(x, y) / i(x, y); equivalently, subtract i(x, y) from z(x, y) in the log domain
    - Multiply f(x, y) / i(x, y) by a reference value, say i(0, 0), to bring the whole image under the same illumination
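A minimal sketch of the correction step, assuming a separately recorded image of a uniform-reflectance surface (`flat`) serves as the illumination field; the names and the synthetic test image are illustrative, not from the book.

```python
import numpy as np

def correct_illumination(image, flat, eps=1e-6):
    """Flat-field correction: divide out the illumination field and rescale.

    `flat` is the image of a uniform-reflectance surface, used as an estimate
    of the illumination field i(x, y); the result is rescaled by a reference
    value (here the top-left value, standing in for i(0, 0)) so the whole
    image appears under the same illumination.
    """
    flat = flat.astype(float)
    corrected = image.astype(float) / np.maximum(flat, eps)   # f / i
    return corrected * flat[0, 0]                              # times i(0, 0)

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    illum = 1.0 + 0.01 * x                    # illumination ramp across the image
    reflectance = np.where(x > 32, 0.8, 0.3)  # bright region on the right half
    img = reflectance * illum
    print(correct_illumination(img, illum).std(axis=0).max())  # ~0: columns uniform
```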
10
Shortcomings of the thresholding methods
- The spatial proximity of the pixels in the image is not considered at all (Fig 7.8, Fig 7.9)
- Solutions
  - Region growing method
    - Start from seed pixels and attach neighbouring pixels whose values fall inside a predefined range
    - Scan the image and assign every pixel to a region
  - Split and merge method
    - Test the original image: if its attribute does not satisfy LV < attribute < HV, split it into four quadrants
    - Test each quadrant in turn, splitting again where needed, ...
    - Merge adjacent regions with the same attribute (Fig 7.10)
    - Favoured when the image is square with N = 2^n
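A minimal sketch of region growing from a single seed. The homogeneity criterion (grey value within `tol` of the seed value) and the 4-neighbourhood are my assumptions standing in for the "predefined range" mentioned above.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed`, attaching 4-neighbours whose grey value
    lies within `tol` of the seed value."""
    rows, cols = image.shape
    seed_value = float(image[seed])
    region = np.zeros_like(image, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]
                    and abs(float(image[nr, nc]) - seed_value) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region

if __name__ == "__main__":
    img = np.zeros((32, 32))
    img[8:20, 8:20] = 100.0                        # a bright square "object"
    print(region_grow(img, seed=(10, 10)).sum())   # -> 144 pixels
```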
11
Pattern recognition
- Texture regions: regions that are not uniform in terms of their grey values but are nevertheless perceived as uniform
- For segmentation purposes, characterize a pixel by its grey level and by the variation of the grey levels in a small patch around it
  - Not just a scalar (the grey level), but a vector (a feature)
- Pattern recognition: multidimensional histograms, clustering → beyond the scope of this book
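As an illustration of the feature idea only (my own sketch, not the book's method), each pixel can be described by its grey level together with the local standard deviation in a small window; the window size and the choice of local standard deviation are assumptions.

```python
import numpy as np

def pixel_features(image, half=2):
    """Return an (H, W, 2) array: [grey level, local std in a (2*half+1)^2 patch]."""
    image = image.astype(float)
    padded = np.pad(image, half, mode="edge")
    h, w = image.shape
    local_std = np.empty_like(image)
    for r in range(h):
        for c in range(w):
            patch = padded[r:r + 2 * half + 1, c:c + 2 * half + 1]
            local_std[r, c] = patch.std()
    return np.stack([image, local_std], axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = np.where(rng.random((32, 32)) > 0.5, 120, 130)   # a "textured" patch
    feats = pixel_features(img)
    print(feats.shape, feats[..., 1].mean())
```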
12
Edge detection
- Measurement: convolve the image with a window
  - Slide a window → calculate the statistical properties → compare the differences → specify the boundary (e.g. the 8 × 8 image in Fig 7.11)
- The smallest window: two pixels → the first differences
  - Δf_x = f(i + 1, j) − f(i, j)
  - Δf_y = f(i, j + 1) − f(i, j)
  - These values are defined on the dual grid
- Non-maxima suppression
  - The process of identifying the local maxima as candidate edge pixels (edgels)
  - If there were no noise in the image, this would pick up the discontinuities in intensity
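A sketch of first differences followed by a simplified non-maxima suppression. The convention that i indexes rows and j indexes columns, the zero padding at the border, and the rule of comparing along the axis of the strongest component are my assumptions.

```python
import numpy as np

def first_differences(f):
    """Forward differences dfx = f(i+1, j) - f(i, j), dfy = f(i, j+1) - f(i, j),
    padded with a zero row/column to keep the image shape."""
    f = f.astype(float)
    dfx = np.zeros_like(f)
    dfy = np.zeros_like(f)
    dfx[:-1, :] = f[1:, :] - f[:-1, :]
    dfy[:, :-1] = f[:, 1:] - f[:, :-1]
    return dfx, dfy

def non_maxima_suppression(dfx, dfy):
    """Keep pixels whose gradient magnitude is a local maximum along the
    axis of the strongest gradient component (a simplified form of the idea)."""
    mag = np.hypot(dfx, dfy)
    keep = np.zeros(mag.shape, dtype=bool)
    for r in range(1, mag.shape[0] - 1):
        for c in range(1, mag.shape[1] - 1):
            if abs(dfx[r, c]) >= abs(dfy[r, c]):   # intensity changes fastest along i
                neighbours = (mag[r - 1, c], mag[r + 1, c])
            else:                                   # intensity changes fastest along j
                neighbours = (mag[r, c - 1], mag[r, c + 1])
            keep[r, c] = mag[r, c] > 0 and mag[r, c] >= max(neighbours)
    return keep

if __name__ == "__main__":
    img = np.zeros((8, 8)); img[:, 4:] = 100.0     # vertical step edge
    dfx, dfy = first_differences(img)
    print(non_maxima_suppression(dfx, dfy).sum())
```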
13
Edge detection (cont.)
- Noise: smooth the image with a lowpass filter before detecting the edges (Figs 7.12, 7.13)
- 1D case
  - Smooth: A_i = (I_{i-1} + I_i + I_{i+1}) / 3
  - Differentiate: F_i = [(A_{i+1} − A_i) + (A_i − A_{i-1})] / 2 = (A_{i+1} − A_{i-1}) / 2
  - Combined: F_i = (I_{i+2} + I_{i+1} − I_{i-1} − I_{i-2}) / 6
- The larger the mask used, the better the smoothing, but the more blurred the edge becomes and the less accurate its position will be (Fig 7.14)
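A quick numeric check of the combined 1D formula above, on a made-up random signal:

```python
import numpy as np

# Smoothing with a 3-sample mean and then taking the central difference equals
# the single mask (I[i+2] + I[i+1] - I[i-1] - I[i-2]) / 6.
rng = np.random.default_rng(3)
I = rng.random(20)

A = (I[:-2] + I[1:-1] + I[2:]) / 3.0        # A[k] is the smoothed value at i = k + 1
F_two_step = (A[2:] - A[:-2]) / 2.0         # (A[i+1] - A[i-1]) / 2
F_one_step = (I[4:] + I[3:-1] - I[1:-3] - I[:-4]) / 6.0

print(np.allclose(F_two_step, F_one_step))  # -> True
```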
14
Edge detection (cont.)
- 2D case (3 × 3 mask)
  - Consider Δf_y only (rotate by 90° to calculate Δf_x)
  - Constraints on the mask entries a_ij:
    - Left–right symmetry: a_i3 = a_i1
    - Local difference = front − behind: the bottom row is the negative of the top row, a_3j = −a_1j
    - Zero response for a smooth image: Σ a_ij = 0, which forces a_22 = −2 a_21
    - Differentiating in the direction of the columns must give 0 for each column of a smooth image: a_21 = 0
  - The mask after each constraint:
    (1) a_11 a_12 a_13    (2) a_11 a_12 a_11    (3)  a_11  a_12  a_11
        a_21 a_22 a_23        a_21 a_22 a_21         a_21  a_22  a_21
        a_31 a_32 a_33        a_31 a_32 a_31        −a_11 −a_12 −a_11

    (4)  a_11   a_12   a_11    (5)  a_11  a_12  a_11
         a_21 −2a_21   a_21          0     0     0
        −a_11  −a_12  −a_11        −a_11 −a_12 −a_11
15
Edge detection (cont.)
- 2D case (cont.)
  - Divide by a_11 → a one-parameter mask with K = a_12 / a_11:
     1   K   1
     0   0   0
    −1  −K  −1
16
Sobel mask
- Differentiates the image along two directions; choose K = 2 (B7.2)
- Edge strength: E(i, j) = (Δf_x² + Δf_y²)^{1/2}
- Edge orientation: a(i, j) = tan⁻¹(Δf_y / Δf_x)
- K is specified so that E and a respond as closely as possible to the true values of the underlying non-discretized image
- Example 7.9: expression of the Sobel mask at (i, j)
- Example 7.10: constructing a 9 × 9 matrix to calculate the i-gradient of a 3 × 3 matrix
- Example 7.11: implementation of Example 7.10
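A minimal NumPy sketch of the Sobel masks (the one-parameter mask with K = 2 and its 90° rotation) together with the strength and orientation formulas above. The assignment of x to columns and y to rows is an assumption, border pixels are simply left at zero, and arctan2 is used in place of tan⁻¹ of the ratio so that Δf_x = 0 is handled.

```python
import numpy as np

# Sobel masks: the 1 K 1 / 0 0 0 / -1 -K -1 mask with K = 2, and its transpose.
SOBEL_Y = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]], dtype=float)
SOBEL_X = SOBEL_Y.T

def sobel(image):
    """Return edge strength E = sqrt(dfx^2 + dfy^2) and orientation a.

    Plain double loop over the interior pixels; border pixels stay 0.
    arctan2 equals tan^-1(dfy/dfx) up to the choice of quadrant.
    """
    img = image.astype(float)
    dfx = np.zeros_like(img)
    dfy = np.zeros_like(img)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            dfx[r, c] = np.sum(SOBEL_X * patch)
            dfy[r, c] = np.sum(SOBEL_Y * patch)
    strength = np.hypot(dfx, dfy)
    orientation = np.arctan2(dfy, dfx)
    return strength, orientation

if __name__ == "__main__":
    img = np.zeros((8, 8)); img[:, 4:] = 100.0      # vertical step edge
    E, a = sobel(img)
    print(E.max(), np.degrees(a[4, 3]))
```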