Image Segmentation and Edge Detection. Digital Image Processing. Instructor: Dr. Cheng-Chien Liu, Department of Earth Sciences, National Cheng Kung University.


Image Segmentation and Edge Detection. Digital Image Processing. Instructor: Dr. Cheng-Chien Liu, Department of Earth Sciences, National Cheng Kung University. Last updated: 21 October 2003. Chapter 7.

Introduction
- Image Segmentation and Edge Detection
- Purpose: extract information (outlines); divide the image (by color, brightness); automatic vision systems
- The simplest method of division: histogramming and thresholding
  - One threshold gives a labeled (classified) image (e.g. Fig 7.1)
- Hysteresis thresholding: two thresholds (e.g. Fig 7.2)
- Principle: minimize the number of misclassified pixels; the p-tile method
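The hysteresis idea above (accept everything above a high threshold, then grow through pixels above a low threshold that touch an accepted pixel) can be sketched as follows. This is a minimal NumPy sketch, not the book's implementation; the test image, the two threshold values and the 4-neighbour growing rule are illustrative assumptions:

```python
import numpy as np

def hysteresis_threshold(img, t_low, t_high):
    """Accept pixels >= t_high, then grow the accepted set through
    pixels >= t_low that are 4-connected to an accepted pixel."""
    strong = img >= t_high          # certainly object
    weak = img >= t_low             # possibly object
    accepted = strong.copy()
    changed = True
    while changed:                  # propagate acceptance to 4-neighbours
        grown = accepted.copy()
        grown[1:, :] |= accepted[:-1, :]
        grown[:-1, :] |= accepted[1:, :]
        grown[:, 1:] |= accepted[:, :-1]
        grown[:, :-1] |= accepted[:, 1:]
        grown &= weak               # only weak pixels may join
        changed = not np.array_equal(grown, accepted)
        accepted = grown
    return accepted

img = np.array([[0, 0, 0, 0],
                [0, 5, 9, 0],
                [0, 4, 0, 0],
                [0, 0, 0, 0]], dtype=float)
mask = hysteresis_threshold(img, t_low=3, t_high=8)
```

With a single threshold of 8 only one pixel survives; hysteresis also recovers the two weaker pixels connected to it.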

The minimum error threshold method
- Total error (Fig 7.3):
  E(t) = θ ∫_{-∞}^{t} p_o(x) dx + (1 − θ) ∫_{t}^{+∞} p_b(x) dx
  - θ: the fraction of the pixels that make up the object
  - 1 − θ: the fraction of the pixels that make up the background
- Setting ∂E/∂t = θ p_o(t) − (1 − θ) p_b(t) = 0 gives the optimal threshold
- Example 7.1: E(t) → ∂E/∂t
- B7.1: the Leibnitz rule
- Example 7.2: draw p_o(x) and p_b(x)
- Example 7.3: given p_o(x), p_b(x) and θ → t
- Example 7.4: given p_o(x), p_b(x) and θ → t → E(t)
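A small numerical check of the minimization above, assuming (purely for illustration) two Gaussian class densities and θ = 0.4: the minimizer of E(t) found on a grid should satisfy the stationarity condition θ p_o(t) = (1 − θ) p_b(t).

```python
import numpy as np
from math import sqrt, pi

def gauss(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

theta = 0.4                         # assumed fraction of object pixels
ts = np.linspace(0.0, 8.0, 8001)
dx = ts[1] - ts[0]
p_o = gauss(ts, 6.0, 1.0)           # assumed object density (bright class)
p_b = gauss(ts, 2.0, 1.0)           # assumed background density (dark class)

# E(t): object pixels misclassified below t + background pixels above t
E = theta * np.cumsum(p_o) * dx + (1 - theta) * (p_b.sum() - np.cumsum(p_b)) * dx
t_min = ts[int(np.argmin(E))]

# stationarity: theta * p_o(t) - (1 - theta) * p_b(t) should vanish at t_min
resid = theta * gauss(t_min, 6.0, 1.0) - (1 - theta) * gauss(t_min, 2.0, 1.0)
```

For these parameters the analytic root of the stationarity condition is t ≈ 4.10, and the grid minimum of E(t) lands on the same value.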

The minimum error threshold method (cont.)
- Drawbacks:
  - Needs prior knowledge of p_o(x), p_b(x) and θ
  - Approximating p_o(x) and p_b(x) by normal distributions still requires estimating the parameters μ and σ
  - There may be two solutions t_1 and t_2, defining a band t_1 < x < t_2 (Example 7.6)
- Example 7.5: the result of optimal thresholding is worse than that obtained by hysteresis thresholding with two heuristically chosen thresholds (Fig 7.4d)

Otsu's threshold method
- Derivation
  - Fractions: background pixels θ(t); object pixels 1 − θ(t)
  - Mean gray value: whole image μ; background μ_b; object μ_o
  - Variance: whole image σ_T²; background σ_b²; object σ_o²

Otsu's threshold method (cont.)
- Derivation (cont.)
  - σ_T² = σ_W² + σ_B²
  - The within-class variance: σ_W² = θ(t) σ_b² + (1 − θ(t)) σ_o²
  - The between-class variance: σ_B² = (μ_b − μ)² θ(t) + (μ_o − μ)² (1 − θ(t))
- Otsu's thresholding:
  - Choose t to maximize σ_B² (equivalently, minimize σ_W², since their sum is fixed)
  - Working with σ_B² (Example 7.7): σ_B²(t) = [μ θ(t) − μ(t)]² / [θ(t)(1 − θ(t))], where μ(t) is the first moment of the histogram up to level t
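The maximization of the between-class variance can be done directly on a histogram with cumulative sums. A minimal sketch under the notation above (the 8-level bimodal test histogram is an illustrative assumption):

```python
import numpy as np

def otsu_threshold(hist):
    """Return the level t maximizing the between-class variance
    sigma_B^2(t) = (mu * theta(t) - mu(t))^2 / (theta(t) * (1 - theta(t)))."""
    p = hist / hist.sum()             # normalized histogram
    levels = np.arange(len(p))
    theta = np.cumsum(p)              # fraction of pixels up to each level
    mu_t = np.cumsum(levels * p)      # first moment up to each level
    mu = mu_t[-1]                     # global mean gray value
    denom = theta * (1 - theta)
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = np.where(denom > 0, (mu * theta - mu_t) ** 2 / denom, 0.0)
    return int(np.argmax(sigma_b2))

# bimodal test histogram: dark peak at level 2, bright peak at level 6
hist = np.array([1, 6, 10, 6, 1, 6, 10, 6], dtype=float)
t = otsu_threshold(hist)
```

The returned level sits in the valley between the two peaks, as the method intends.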

Otsu's threshold method (cont.)
- Drawbacks:
  - Assumes μ and σ are sufficient to represent p_o(x) and p_b(x)
  - Breaks down when the two populations p_o(x) and p_b(x) are very unequal
  - Assumes the histogram of the image is bimodal
  - Dividing the image into only two classes is not valid under variable illumination

Variable illumination
- An image f(x, y) is the product of a reflectance function r(x, y) and an illumination function i(x, y):
  f(x, y) = r(x, y) i(x, y)
- Taking logarithms turns the multiplicative relation into an additive one:
  ln f(x, y) = ln r(x, y) + ln i(x, y)
- In the log domain, writing z = ln f (and letting r and i now denote the log quantities): z(x, y) = r(x, y) + i(x, y)
- The distribution of z:
  P_z(u) = probability of z ≤ u = ∬_{r+i ≤ u} p_ri(r, i) dr di
  p_z(u) = dP_z(u)/du = ∫_{-∞}^{+∞} p_ri(u − i, i) di = ∫_{-∞}^{+∞} p_r(u − i) p_i(i) di (for independent r and i)
- If i = const = i_0: p_i(i) = δ(i − i_0) → p_z(u) = p_r(u − i_0), i.e. the reflectance histogram merely shifted
- If i ≠ const: the histogram of z is a smeared version of p_r, and the thresholding methods break down

Variable illumination (cont.)
- Solutions for non-uniform illumination:
  - Divide the image into (more or less) uniformly illuminated patches (Fig 7.8)
  - Correct the effect of illumination:
    - Obtain the pure illumination field i(x, y), e.g. as the image f(x, y) of a surface of uniform reflectance
    - Divide: f(x, y) / i(x, y); or, in the log domain, subtract i(x, y) from z(x, y)
    - Multiply f(x, y) / i(x, y) by a reference value, say i(0, 0), to bring the whole image under the same illumination
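The divide-and-rescale correction above can be demonstrated on a synthetic 1D scene. This is an illustrative sketch: the reflectance profile, the illumination ramp and the choice of the first pixel as the reference value are all assumptions made for the demo.

```python
import numpy as np

# synthetic 1D scene: a bright object on the left half, dark background right
x = np.linspace(0.0, 1.0, 64)
r = np.where(x < 0.5, 200.0, 50.0)      # reflectance r(x)
illum = 0.1 + 0.9 * x                   # slowly varying illumination i(x)
f = r * illum                           # observed image f = r * i

# under this illumination no single threshold separates object from
# background: the darkest object pixel is darker than the brightest
# background pixel; dividing by the (measured) illumination field and
# rescaling by a reference value restores a thresholdable image
corrected = f / illum * illum[0]
```

After correction the object and background gray-value ranges no longer overlap, so a single threshold works again.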

Shortcomings of the thresholding methods
- The spatial proximity of the pixels in the image is not considered at all (Fig 7.8, Fig 7.9)
- Solutions:
  - Region growing method: choose seed pixels → attach neighboring pixels whose values fall within a predefined range → scan until all pixels are assigned to a region
  - Split and merge method: test the original image → split into four quadrants if LV < attribute < HV → test each quadrant → split further → … → finally merge regions with the same attribute (Fig 7.10)
    - Favored when the image is square with N = 2^n
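The region growing step can be sketched as a breadth-first traversal from a seed. This is a minimal sketch, not the book's algorithm; the similarity criterion (gray value within ± tol of the seed's value), the 4-neighbourhood and the test image are illustrative assumptions:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from a seed pixel, attaching 4-neighbours whose
    gray value lies within +/- tol of the seed's value."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    ref = img[seed]
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx] \
                    and abs(img[ny, nx] - ref) <= tol:
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

img = np.array([[10, 11, 50, 52],
                [12, 10, 51, 50],
                [11, 12, 49, 51]], dtype=float)
region = region_grow(img, seed=(0, 0), tol=5)
```

Starting from the dark seed, the region covers exactly the six dark pixels and stops at the bright ones, because spatial connectivity is enforced along with similarity.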

Pattern recognition
- Texture region: a region that is not uniform in terms of its gray values but is nevertheless perceived as uniform
- For segmentation purposes, characterize a pixel by its gray level and the variation of gray level in a small patch around it: not just a scalar (the gray level), but a vector (a feature)
- Pattern recognition: multidimensional histograms → clustering (beyond the scope of this book)

Edge detection
- Measurement: convolve the image with a window
  - Slide a window → calculate the statistical properties → compare the differences → specify the boundary (e.g. the 8 × 8 image in Fig 7.11)
- The smallest window: two pixels → the first derivative
  - Δf_x = f(i+1, j) − f(i, j)
  - Δf_y = f(i, j+1) − f(i, j)
  - These differences live on the dual grid, halfway between the pixels being differenced
- Non-maxima suppression
  - The process of identifying the local maxima as candidate edge pixels (edgels)
  - If there were no noise in the image, this would pick up exactly the discontinuities in intensity
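The two-pixel derivative and the non-maxima suppression step can be sketched together on a tiny noise-free image. The step-edge test image and the 1D row-wise suppression rule are illustrative assumptions:

```python
import numpy as np

# a vertical step edge: two flat regions meeting between columns 1 and 2
img = np.array([[5, 5, 9, 9],
                [5, 5, 9, 9],
                [5, 5, 9, 9]], dtype=float)

# smallest-window (two-pixel) first derivatives; the values live on the
# dual grid, between the pixels being differenced
dfx = img[:, 1:] - img[:, :-1]    # horizontal differences
dfy = img[1:, :] - img[:-1, :]    # vertical differences

# non-maxima suppression along one row: keep only local maxima of |dfx|
row = np.abs(dfx[0])
edgels = [j for j in range(len(row))
          if row[j] > 0
          and row[j] >= (row[j - 1] if j > 0 else 0)
          and row[j] >= (row[j + 1] if j < len(row) - 1 else 0)]
```

In the absence of noise the single surviving edgel per row marks exactly the intensity discontinuity.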

Edge detection (cont.)
- Noise: smooth the image with a lowpass filter before detecting the edges (Fig 7.12, 7.13)
- 1D case:
  - A_i = (I_{i−1} + I_i + I_{i+1}) / 3
  - F_i = [(A_{i+1} − A_i) + (A_i − A_{i−1})] / 2 = (A_{i+1} − A_{i−1}) / 2
  - F_i = (I_{i+2} + I_{i+1} − I_{i−1} − I_{i−2}) / 6
- The larger the mask used, the better the smoothing, but the more blurred the edge and the more inaccurate its position will be (Fig 7.14)
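The identity above — averaging with a 3-point window and then taking the central difference equals a single 5-point mask on the raw signal — is easy to verify numerically (the random test signal is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.random(20)                           # arbitrary 1D test signal

# three-point moving average: A_i = (I_{i-1} + I_i + I_{i+1}) / 3
A = (I[:-2] + I[1:-1] + I[2:]) / 3           # A[k] corresponds to index k+1

# central difference of the smoothed signal: F_i = (A_{i+1} - A_{i-1}) / 2
F_from_A = (A[2:] - A[:-2]) / 2              # defined for i = 2 .. n-3

# the equivalent single mask applied to the raw signal:
# F_i = (I_{i+2} + I_{i+1} - I_{i-1} - I_{i-2}) / 6
F_direct = (I[4:] + I[3:-1] - I[1:-3] - I[:-4]) / 6
```

Both computations agree to machine precision, confirming that smoothing and differencing can be folded into one mask.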

Edge detection (cont.)
- 2D case (3 × 3 mask [a_ij]): consider Δf_y only (rotate by 90° to calculate Δf_x)
  - Symmetry left ↔ right: a_{i3} = a_{i1}
  - Local difference = front − behind: a_{3j} = −a_{1j}
  - Zero response for a smooth image: Σ a_ij = 0
  - Differentiating in the direction of the columns of a smooth image must give 0 for each column → a_{21} = a_{22} = a_{23} = 0
  - The mask therefore has the form
       a_11   a_12   a_11
        0      0      0
      −a_11  −a_12  −a_11

Edge detection (cont.)
- 2D case (cont.): divide by a_11 → a one-parameter mask with K = a_12 / a_11:
       1   K   1
       0   0   0
      −1  −K  −1

Sobel mask
- The Sobel mask differentiates an image along two directions
  - Choose K = 2 (B7.2)
  - Strength: E(i, j) = [Δf_x² + Δf_y²]^{1/2}
  - Orientation: a(i, j) = tan⁻¹[Δf_y / Δf_x]
  - K is specified so that E and a respond with the true values of the underlying non-discretized image
- Example 7.9: expression of the Sobel mask at (i, j)
- Example 7.10: constructing a 9 × 9 matrix to calculate the i-gradient of a 3 × 3 matrix
- Example 7.11: implementation of Example 7.10
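The Sobel masks (K = 2) and the strength/orientation formulas above can be sketched as follows. The 3×3 sliding-window application (implemented here as correlation over the image interior, with no padding) and the step-edge test image are illustrative assumptions:

```python
import numpy as np

# Sobel masks with K = 2: one per direction (the second is the transpose)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def correlate3(img, mask):
    """Apply a 3x3 mask to the image interior by correlation (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += mask[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

img = np.zeros((5, 5))
img[:, 3:] = 10.0                 # vertical step edge

gx = correlate3(img, sobel_x)     # df_x estimate
gy = correlate3(img, sobel_y)     # df_y estimate
strength = np.hypot(gx, gy)       # E(i, j) = sqrt(df_x^2 + df_y^2)
orientation = np.arctan2(gy, gx)  # a(i, j) = atan(df_y / df_x)
```

On the vertical step edge, the vertical derivative vanishes everywhere and the strength peaks at the columns straddling the step, with the expected value (1 + 2 + 1) × 10 = 40.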