Image Segmentation

Image segmentation is the operation of partitioning an image into a collection of connected sets of pixels:
1. into regions, which usually cover the image
2. into linear structures, such as line segments and curve segments
3. into 2D shapes, such as circles, ellipses, and ribbons (long, symmetric regions)

Goals of segmentation

Example 1: Regions

Example 2: Straight Lines

Example 3: Lines and Circular Arcs

Region Segmentation: Segmentation Criteria

From Pavlidis: a segmentation is a partition of an image I into a set of regions S satisfying:
1. ∪ Si = S (the partition covers the whole image)
2. Si ∩ Sj = ∅, i ≠ j (no two regions intersect)
3. ∀ Si, P(Si) = true (a homogeneity predicate is satisfied by each region)
4. P(Si ∪ Sj) = false for adjacent Si, Sj, i ≠ j (the union of adjacent regions does not satisfy it)

So all we have to do is define and implement the similarity predicate P. But what do we want to be similar in each region? Is there any property that will cause the regions to be meaningful objects?

Main Methods of Region Segmentation
1. Region Growing
2. Split and Merge
3. Clustering

Region Growing

Region growing techniques start with one pixel of a potential region and try to grow it by adding adjacent pixels until the pixels being compared are too dissimilar. The first pixel selected can be just the first unlabeled pixel in the image, or a set of seed pixels can be chosen from the image. Usually a statistical test is used to decide which pixels can be added to a region.

The RGGROW Algorithm

Let R be the N-pixel region so far and P be a neighboring pixel with gray tone y. Define the mean X̄ and scatter S² (sample variance) by

X̄ = (1/N) Σ_{(r,c) ∈ R} I(r,c)
S² = (1/N) Σ_{(r,c) ∈ R} (I(r,c) − X̄)²

The RGGROW Statistical Test

The T statistic is defined by

T = [ ((N − 1) N / (N + 1)) (y − X̄)² / S² ]^{1/2}

Decision and Update

For the T distribution, statistical tables give us the probability Pr(T ≤ t) for a given number of degrees of freedom and a confidence level. From this, pick a suitable threshold t. If the computed T < t for the desired confidence level, add y to region R and update X̄ and S². If T is too high, the value y is not likely to have arisen from the population of pixels in R: start a new region.
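
To make the test concrete, here is a minimal Python sketch of the accept/reject decision (the function name and the SciPy dependency are my choices, not from the slides):

```python
import numpy as np
from scipy.stats import t as t_dist

def rggrow_accepts(region_pixels, y, confidence=0.95):
    """Decide whether gray tone y should join the region, using
    T = sqrt((N-1)*N/(N+1) * (y - mean)^2 / S^2).
    Assumes the region already holds at least two pixels."""
    pixels = np.asarray(region_pixels, dtype=float)
    N = len(pixels)
    mean = pixels.mean()
    scatter_sq = ((pixels - mean) ** 2).mean()    # S^2, the scatter
    if scatter_sq == 0.0:                         # degenerate region: exact match only
        return y == mean
    T = np.sqrt((N - 1) * N / (N + 1) * (y - mean) ** 2 / scatter_sq)
    threshold = t_dist.ppf(confidence, df=N - 1)  # Pr(T <= t) from the T distribution
    return T < threshold
```

A full region grower would call this for each unlabeled neighbor of the region and update the mean and scatter after each acceptance.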

RGGROW Example (figure: an image and its RGGROW segmentation). Not so great, and it is order dependent.

Split and Merge

1. Start with the whole image.
2. If the variance is too high, break into quadrants.
3. Merge any adjacent regions that are similar enough.
4. Repeat steps 2 and 3 iteratively until no more splitting or merging occurs.

The idea is good; the results are blocky.
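
As an illustration, here is a sketch of the split phase in Python; it is a simplified version under my own assumptions (a plain variance test, recursion stopped at tiny blocks):

```python
import numpy as np

def split_phase(img, var_thresh, min_size=2):
    """Split phase of split-and-merge: recursively divide a gray-level
    image into quadrants while a block's variance exceeds var_thresh.
    Returns homogeneous blocks as (row, col, height, width) tuples."""
    blocks = []
    def recurse(r, c, h, w):
        block = img[r:r + h, c:c + w]
        if block.var() <= var_thresh or min(h, w) < min_size:
            blocks.append((r, c, h, w))
            return
        h2, w2 = h // 2, w // 2
        recurse(r, c, h2, w2)                    # top-left quadrant
        recurse(r, c + w2, h2, w - w2)           # top-right
        recurse(r + h2, c, h - h2, w2)           # bottom-left
        recurse(r + h2, c + w2, h - h2, w - w2)  # bottom-right
    recurse(0, 0, img.shape[0], img.shape[1])
    return blocks
```

A full implementation would follow this with a merge pass that joins adjacent blocks whose union still passes the homogeneity test; that adjacency bookkeeping is omitted here.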

Clustering

There are K clusters C1, …, CK with means m1, …, mK. The least-squares error is defined as

D = Σ_{k=1..K} Σ_{xi ∈ Ck} || xi − mk ||²

Out of all possible partitions into K clusters, choose the one that minimizes D. Why don't we just do this? If we could, would we get meaningful objects?

Some Clustering Methods
- K-means clustering and variants
- Histogram-based clustering and recursive variant
- Graph-theoretic clustering
- EM clustering

K-Means Clustering (review)

Form K-means clusters from a set of n-dimensional vectors:
1. Set ic (iteration count) to 1.
2. Choose randomly a set of K means m1(1), …, mK(1).
3. For each vector xi, compute D(xi, mk(ic)) for k = 1, …, K and assign xi to the cluster Cj with the nearest mean.
4. Increment ic by 1 and update the means to get m1(ic), …, mK(ic).
5. Repeat steps 3 and 4 until Ck(ic) = Ck(ic+1) for all k.
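
The five steps translate almost line for line into NumPy. In this sketch the initial means are sampled from the data, one common choice the slide leaves open:

```python
import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    """K-means per the slide: random means, nearest-mean assignment,
    mean update, stop when cluster memberships no longer change."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=K, replace=False)].astype(float)  # step 2
    labels = None
    for _ in range(max_iter):                                           # steps 3-5
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)              # assign to nearest mean
        if labels is not None and np.array_equal(new_labels, labels):
            break                                      # C_k(ic) == C_k(ic+1) for all k
        labels = new_labels
        for k in range(K):                             # update the means
            if np.any(labels == k):
                means[k] = X[labels == k].mean(axis=0)
    return labels, means
```

For image segmentation, X would hold the per-pixel feature vectors (e.g., RGB triples), and the returned labels are reshaped back onto the image grid.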

K-Means Example 1

K-Means Example 2

K-Means Variants
- Different ways to initialize the means
- Different stopping criteria
- Dynamic methods for determining the right number of clusters (K) for a given image
- Isodata: K-means with split and merge

ISODATA Clustering

Histogram Thresholding
- Seek the modes of a multimodal histogram
- Use knowledge-directed thresholding

Histogram-Based Clustering

Valley Seeking

Image Segmentation by Thresholding

Otsu's method assumes K = 2. It searches for the threshold t that minimizes the intra-class (within-class) variance, which is equivalent to maximizing the between-class variance.
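
A compact NumPy sketch of Otsu's search, assuming 8-bit gray levels (it scores every candidate threshold by the between-class variance and keeps the best):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for an 8-bit image: maximize the between-class
    variance sigma_b^2(t) = (mu_T*omega(t) - mu(t))^2 / (omega(t)*(1 - omega(t)))."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))      # NaNs mark empty classes
```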

Thresholding

Ohlander's Recursive Histogram-Based Clustering

Input: color images of real indoor and outdoor scenes.
- Starts with the whole image and finds its histogram.
- Selects the R, G, or B histogram with the largest peak and finds the connected regions from that peak.
- Converts these to regions on the image, creates masks for each region, and recomputes the histogram for each region.
- Pushes each mask onto a stack for further clustering.
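
A rough sketch of one pass of that loop, with simplifying assumptions of my own (a fixed band around the peak instead of valley detection, and SciPy's connected-component labeling):

```python
import numpy as np
from scipy import ndimage

def best_peak_masks(img, mask, width=16):
    """One Ohlander-style step: over the pixels selected by `mask`,
    pick the color channel whose histogram has the tallest peak, keep
    pixels within `width` gray levels of that peak, and return one
    mask per connected region (to be pushed on the stack)."""
    best = None
    for ch in range(img.shape[2]):
        hist, _ = np.histogram(img[..., ch][mask], bins=256, range=(0, 256))
        peak = hist.argmax()
        if best is None or hist[peak] > best[0]:
            best = (hist[peak], ch, peak)
    _, ch, peak = best
    band = mask & (np.abs(img[..., ch].astype(int) - peak) <= width)
    labels, n = ndimage.label(band)        # connected regions from the peak
    return [labels == i for i in range(1, n + 1)]
```

Each returned mask would be processed recursively until no histogram shows a significant peak.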

Ohlander's Method (figure: separate R, G, B histograms and the extracted regions, labeled sky, tree1, tree2). Ohta suggested using (R+G+B)/3, (R−B)/2, and (2G−R−B)/4 instead of (R, G, B).
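
Ohta's features are a one-line linear transform of RGB; a minimal sketch:

```python
import numpy as np

def ohta_features(rgb):
    """Ohta's approximately uncorrelated color features, as quoted on
    the slide, computed from an H x W x 3 RGB array."""
    R = rgb[..., 0].astype(float)
    G = rgb[..., 1].astype(float)
    B = rgb[..., 2].astype(float)
    I1 = (R + G + B) / 3.0
    I2 = (R - B) / 2.0
    I3 = (2.0 * G - R - B) / 4.0
    return np.stack([I1, I2, I3], axis=-1)
```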

Jianbo Shi's Graph Partitioning

An image is represented by a graph whose nodes are pixels or small groups of pixels. The goal is to partition the vertices into disjoint sets so that the similarity within each set is high and the similarity across different sets is low.

Minimal Cuts

Let G = (V, E) be a graph. Each edge (u, v) has a weight w(u, v) that represents the similarity between u and v. Graph G can be broken into two disjoint graphs with node sets A and B by removing edges that connect these sets. Let

cut(A, B) = Σ_{u ∈ A, v ∈ B} w(u, v)

One way to segment G is to find the minimal cut.

Cut(A, B) (figure: node sets A and B joined by crossing edges with weights w1, w2); cut(A, B) = Σ_{u ∈ A, v ∈ B} w(u, v).

Normalized Cut

Minimal cut favors cutting off small node groups, so Shi proposed the normalized cut:

Ncut(A, B) = cut(A, B)/asso(A, V) + cut(A, B)/asso(B, V)

where asso(A, V) = Σ_{u ∈ A, t ∈ V} w(u, t) is the association: how much A is connected to the graph V as a whole.

Example Normalized Cut (figure: an Ncut(A, B) computation on a small graph with sets A and B).

Shi turned graph cuts into an eigenvector/eigenvalue problem. Set up a weighted graph G = (V, E):
- V is the set of N pixels
- E is a set of weighted edges (weight wij gives the similarity between nodes i and j)

Define two matrices, D and W:
- Length-N vector d: di is the sum of the weights from node i to all other nodes
- N x N matrix D: a diagonal matrix with d on its diagonal
- N x N symmetric similarity matrix W: Wij = wij

Edge weights (figure)

Let x be a characteristic vector of a set A of nodes:
- xi = 1 if node i is in A
- xi = −1 otherwise
Let y be a continuous approximation to x.

Solve the system of equations (D − W) y = λ D y for the eigenvectors y and eigenvalues λ. Use the eigenvector y with the second-smallest eigenvalue to bipartition the graph (y ⇒ x ⇒ A). If further subdivision is merited, repeat recursively.

How Shi used the procedure

Shi defined the edge weights w(i, j) by

w(i, j) = exp(−||F(i) − F(j)||² / σI) · exp(−||X(i) − X(j)||² / σX)   if ||X(i) − X(j)|| < r
w(i, j) = 0   otherwise

where X(i) is the spatial location of node i and F(i) is the feature vector for node i, which can be intensity, color, texture, motion, etc. The formula is set up so that w(i, j) is 0 for nodes that are too far apart.
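
As a toy illustration, here is a dense-matrix sketch of the pipeline. The function names are mine, the sign threshold on y is one common splitting heuristic, and a real implementation would use sparse matrices and a sparse eigensolver, since W is N x N for N pixels:

```python
import numpy as np
from scipy.linalg import eigh

def affinity(F, X, sigma_I, sigma_X, r):
    """Shi's edge weights: feature-similarity term times spatial term,
    zeroed for pairs farther apart than radius r."""
    dF2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)   # ||F(i)-F(j)||^2
    dX2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # ||X(i)-X(j)||^2
    W = np.exp(-dF2 / sigma_I) * np.exp(-dX2 / sigma_X)
    W[dX2 >= r ** 2] = 0.0
    return W

def ncut_bipartition(W):
    """Solve (D - W) y = lambda D y and split on the sign of the
    eigenvector with the second-smallest eigenvalue. Assumes every
    node has positive degree, so D is positive definite."""
    d = W.sum(axis=1)
    vals, vecs = eigh(np.diag(d) - W, np.diag(d))  # generalized eigenproblem
    y = vecs[:, 1]                                 # second-smallest eigenvalue
    return y > 0                                   # True -> set A, False -> set B
```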

Examples of Shi Clustering: see Shi's web page.

Representation of regions

Overlay

Chain codes for boundaries

Quad trees divide into quadrants: M = mixed, E = empty, F = full.
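
As a sketch, an M/E/F encoder for a binary image might look like this (assuming a square bitmap whose side is a power of two, so quadrants always divide evenly):

```python
import numpy as np

def quadtree(bitmap):
    """Encode a binary image region as a quadtree: 'E' if all zeros,
    'F' if all ones, otherwise ('M', [four quadrant subtrees])."""
    if not bitmap.any():
        return "E"                       # empty block
    if bitmap.all():
        return "F"                       # full block
    h, w = bitmap.shape[0] // 2, bitmap.shape[1] // 2
    return ("M", [quadtree(bitmap[:h, :w]), quadtree(bitmap[:h, w:]),
                  quadtree(bitmap[h:, :w]), quadtree(bitmap[h:, w:])])
```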

Can segment 3D images also
- Oct trees subdivide into 8 octants
- Same coding (M, E, F) is used
- Software is available for doing 3D image processing and differential equations using the octree representation
- Can achieve a large compression factor

Segmentation with clustering: links for a mean shift description (hiftSeg.pdf), an Expectation Maximization demo (ml), and a tutorial (2.cs.cmu.edu/~awm/tutorials/gmm13.pdf).

Mean Shift (adapted from Yaron Ukrainitz & Bernard Sarel)

Intuitive Description

Think of a distribution of identical billiard balls. Place a region of interest, compute its center of mass, and take the mean shift vector from the window center to that center of mass. Moving the window along this vector and repeating drives it into progressively denser areas, until the center of mass coincides with the window center. Objective: find the densest region.

What is Mean Shift?

A tool for finding modes in a set of data samples, manifesting an underlying probability density function (PDF) in R^N.
- Non-parametric density estimation: data → discrete PDF representation
- Non-parametric density GRADIENT estimation (mean shift): PDF analysis
- The PDF lives in feature space: color space, scale space, or actually any feature space you can conceive of …

Non-Parametric Density Estimation

Assumption: the data points are sampled from an underlying PDF. (Figure: assumed underlying PDF vs. real data samples.) Data point density implies PDF value!

Parametric Density Estimation

Assumption: the data points are sampled from an underlying PDF of known parametric form; estimate its parameters from the samples. (Figure: assumed underlying PDF, its estimate, and real data samples.)
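
The "move to the center of mass" loop from the intuitive description reduces to a few lines. This sketch uses a Gaussian kernel with a user-chosen bandwidth; the slides do not fix a kernel, so both are assumptions:

```python
import numpy as np

def mean_shift_mode(X, start, bandwidth, tol=1e-5, max_iter=500):
    """Iterate 'move the window to its center of mass' with a Gaussian
    kernel until the shift is negligible; returns the located mode."""
    x = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        w = np.exp(-((X - x) ** 2).sum(axis=1) / (2.0 * bandwidth ** 2))
        x_new = (w[:, None] * X).sum(axis=0) / w.sum()  # weighted center of mass
        if np.linalg.norm(x_new - x) < tol:             # converged at a mode
            return x_new
        x = x_new
    return x
```

Running this from every pixel's feature vector and grouping the points whose trajectories converge to the same mode yields a mean-shift segmentation.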

EM Demo

EM Algorithm and its Applications
Prepared by Yi Li, Department of Computer Science and Engineering, University of Washington

From K-means to EM is from discrete to probabilistic.

K-means revisited. Form K-means clusters from a set of n-dimensional vectors:
1. Set ic (iteration count) to 1.
2. Choose randomly a set of K means m1(1), …, mK(1).
3. For each vector xi, compute D(xi, mk(ic)) for k = 1, …, K and assign xi to the cluster Cj with the nearest mean.
4. Increment ic by 1 and update the means to get m1(ic), …, mK(ic).
5. Repeat steps 3 and 4 until Ck(ic) = Ck(ic+1) for all k.

K-Means Classifier

Input: x1 = {r1, g1, b1}, x2 = {r2, g2, b2}, …, xi = {ri, gi, bi}, …
Classification results: x1 → C(x1), x2 → C(x2), …, xi → C(xi), …
Cluster parameters: m1 for C1, m2 for C2, …, mk for Ck

K-Means Classifier (cont.)

Input (known): the vectors x1 = {r1, g1, b1}, x2 = {r2, g2, b2}, …, xi = {ri, gi, bi}, …
Output (unknown): the classification results x1 → C(x1), x2 → C(x2), …, xi → C(xi), … and the cluster parameters m1 for C1, m2 for C2, …, mk for Ck

Starting from an initial guess of the cluster parameters m1, m2, …, mk, the two unknowns are estimated alternately: classification results (1) C(x1), C(x2), …, C(xi) give cluster parameters (1) m1, m2, …, mk, which give classification results (2), then cluster parameters (2), and so on up to classification results (ic) and cluster parameters (ic).

K-Means (cont.)

Boot step: initialize K clusters C1, …, CK; each cluster is represented by its mean mj.
Iteration step: estimate the cluster for each data point (xi → C(xi)), then re-estimate the cluster parameters.

K-Means Example

K-Means → EM (Expectation Maximization)

Boot step: initialize K clusters: C1, …, CK.
Iteration step:
- Expectation: estimate the cluster of each data point.
- Maximization: re-estimate the cluster parameters (μj, Σj) and P(Cj) for each cluster j.

EM Classifier

Input: x1 = {r1, g1, b1}, x2 = {r2, g2, b2}, …, xi = {ri, gi, bi}, …
Classification results: p(C1|x1), p(Cj|x2), …, p(Cj|xi), …
Cluster parameters: (μ1, Σ1), p(C1) for C1; (μ2, Σ2), p(C2) for C2; …; (μk, Σk), p(Ck) for Ck

EM Classifier (cont.)

Input (known): the vectors x1, x2, …, xi, …
Output (unknown): the classification results p(C1|x1), p(Cj|x2), …, p(Cj|xi), … and the cluster parameters (μ1, Σ1), p(C1) for C1; …; (μk, Σk), p(Ck) for Ck

Expectation Step

Input (known): the vectors x1, x2, …, xi, …
Input (estimation): the cluster parameters (μj, Σj), p(Cj) for each cluster j
Output: the classification results p(C1|x1), p(Cj|x2), …, p(Cj|xi), …

Maximization Step

Input (known): the vectors x1, x2, …, xi, …
Input (estimation): the classification results p(C1|x1), p(Cj|x2), …, p(Cj|xi), …
Output: the cluster parameters (μj, Σj), p(Cj) for each cluster j

EM Algorithm

Boot step: initialize K clusters: C1, …, CK.
Iteration step:
- Expectation step
- Maximization step: (μj, Σj) and P(Cj) for each cluster j

EM Demo: demo and tutorial links.

EM Applications
- Blobworld: image segmentation using Expectation-Maximization and its application to image querying
- Yi's generative/discriminative learning of object classes in color images

Image Segmentation with EM: Symbols

The feature vector for pixel i is called xi. There are going to be K segments; K is given. The j-th segment has a Gaussian distribution with parameters θj = (μj, Σj). The αj's are the weights (which sum to 1) of the Gaussians. Θ is the collection of parameters: Θ = (α1, …, αK, θ1, …, θK).

Initialization

Each of the K Gaussians will have parameters θj = (μj, Σj), where
- μj is the mean of the j-th Gaussian
- Σj is the covariance matrix of the j-th Gaussian
The covariance matrices are initialized to be the identity matrix. The means can be initialized by finding the average feature vector in each of K windows in the image; this is data-driven initialization.

E-Step: compute the responsibilities p(Cj | xi) = αj N(xi; μj, Σj) / Σl αl N(xi; μl, Σl) for every point xi and cluster j.

M-Step: re-estimate the parameters from the responsibilities: αj = (1/n) Σi p(Cj|xi), μj = Σi p(Cj|xi) xi / Σi p(Cj|xi), and Σj = Σi p(Cj|xi)(xi − μj)(xi − μj)ᵀ / Σi p(Cj|xi).
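
Combining the two steps, here is a sketch of the full loop for a Gaussian mixture in the slides' notation. The tiny ridge added to Σj and the data-sampled initial means are my choices to keep the sketch stable, not part of the slides:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=50, seed=0):
    """EM for a K-component Gaussian mixture: alternate the E-step
    (responsibilities p(C_j | x_i)) and the M-step (alpha_j, mu_j, Sigma_j)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(n, size=K, replace=False)].astype(float)  # data-driven means
    Sigma = np.stack([np.eye(d) for _ in range(K)])             # identity covariances
    alpha = np.full(K, 1.0 / K)                                 # uniform weights
    for _ in range(n_iter):
        # E-step: resp[i, j] = p(C_j | x_i)
        resp = np.stack([alpha[j] * multivariate_normal.pdf(X, mu[j], Sigma[j])
                         for j in range(K)], axis=1)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate alpha_j, mu_j, Sigma_j from the responsibilities
        Nj = resp.sum(axis=0)
        alpha = Nj / n
        mu = (resp.T @ X) / Nj[:, None]
        for j in range(K):
            diff = X - mu[j]
            Sigma[j] = (resp[:, j, None] * diff).T @ diff / Nj[j] + 1e-6 * np.eye(d)
    return alpha, mu, Sigma
```

For segmentation, X holds the per-pixel feature vectors, and each pixel is assigned to the cluster with the highest responsibility.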

Sample Results from Blobworld