Presenter: Kuang-Jui Hsu    Date: 2011/5/3 (Tues.)

Outline
Motivation
Computing the Optimal Partition
The Grouping Algorithm
Experiments

Motivation Generally, image segmentation means cutting an object out of an image. How can we do it? Many authors of earlier papers proposed methods based on graph cuts, partitioning by minimizing the cut value, i.e., the sum of the weights of the removed edges. But doing image segmentation with the minimum cut has a drawback.

Graph Partitioning Image segmentation can also be viewed as a graph-partitioning problem. Given a graph G = (V, E), where V is the set of vertices and E is the set of edges, graph partitioning means splitting the vertices into two disjoint sets A and B by removing the edges connecting them.

Graph Partitioning

The Definition of Cuts So we can define the cut in terms of the removed edges: cut(A, B) = Σ_{u ∈ A, v ∈ B} w(u, v). The optimal bipartitioning of a graph is the one that minimizes this cut value. But there is a problem here!
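As a minimal sketch of this definition (the toy 4-node weight matrix below is a made-up example, not from the paper), the cut value can be computed directly from a weight matrix and a partition indicator:

```python
import numpy as np

# Symmetric weight matrix of a tiny 4-node graph (hypothetical example values).
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])

def cut_value(W, in_A):
    """Sum of weights of the edges removed when separating A from B = V \\ A."""
    in_A = np.asarray(in_A, dtype=bool)
    return W[np.ix_(in_A, ~in_A)].sum()

print(cut_value(W, [True, True, False, False]))  # 0.1 + 0.0 + 0.2 + 0.1 = 0.4
```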

The Definition of Cuts (figure: a graph bipartitioned into sets A and B, with the cut edges crossing between them)

One Problem by Using the Cuts The minimum-cut criterion favors cutting small sets of isolated nodes in the graph. Assume the edge weights are inversely proportional to the distance between the two nodes. Then any cut that partitions out individual nodes on the right half will have a smaller cut value than the cut that partitions the nodes into the left and right halves.

Solve the Problem by the New Method The authors propose a new measure of disassociation between two groups in order to solve the problem. For any partition of the vertex set V into A and B, they rewrite the definition of the cut and call this disassociation measure the normalized cut (Ncut): Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V), where assoc(A, V) = Σ_{u ∈ A, t ∈ V} w(u, t) is the total connection from nodes in A to all nodes in the graph.

Solve the Problem by the New Method Look at the figure above again, and assume the edge weights are inversely proportional to the distance between the two nodes. We will never select cut 1 or cut 2 under the normalized cut: when A is a single isolated node, the term cut(A, B)/assoc(A, V) is already 100%, so the Ncut value is large. This method therefore solves the problem of the plain cut selecting isolated nodes.

Normalized Association We can use the same method to define the total normalized association within groups, and call it Nassoc: Nassoc(A, B) = assoc(A, A)/assoc(A, V) + assoc(B, B)/assoc(B, V), where assoc(A, A) and assoc(B, B) are the total weights of edges connecting nodes within A and within B, respectively.
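A small sketch of both measures (reusing the toy W from above; the example partition is illustrative), which also checks the identity stated on the next slide numerically:

```python
import numpy as np

def assoc(W, S, T):
    """Total weight of edges from nodes in S to nodes in T."""
    S = np.asarray(S, dtype=bool); T = np.asarray(T, dtype=bool)
    return W[np.ix_(S, T)].sum()

def ncut(W, in_A):
    in_A = np.asarray(in_A, dtype=bool); in_B = ~in_A
    V = np.ones(len(W), dtype=bool)
    c = assoc(W, in_A, in_B)                      # cut(A, B)
    return c / assoc(W, in_A, V) + c / assoc(W, in_B, V)

def nassoc(W, in_A):
    in_A = np.asarray(in_A, dtype=bool); in_B = ~in_A
    V = np.ones(len(W), dtype=bool)
    return (assoc(W, in_A, in_A) / assoc(W, in_A, V)
            + assoc(W, in_B, in_B) / assoc(W, in_B, V))

W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])
A = [True, True, False, False]
print(ncut(W, A) + nassoc(W, A))  # 2.0, matching Ncut + Nassoc = 2
```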

The Important Property Property: Ncut(A, B) + Nassoc(A, B) = 2. Proof: since assoc(A, V) = assoc(A, A) + cut(A, B) and assoc(B, V) = assoc(B, B) + cut(A, B), each pair of corresponding terms in Ncut and Nassoc sums to 1, so the total is 2. Hence minimizing the disassociation between the groups and maximizing the association within the groups are achieved simultaneously. Unfortunately, minimizing normalized cut exactly is NP-complete, even for the special case of graphs on grids.

Computing the Optimal Partition In this section, the authors use linear algebra and matrix techniques to simplify the Ncut.
Symbol definitions, given a partition of the nodes of a graph V into two sets A and B:
V : the set of nodes
N : the number of nodes, equal to |V|
x : an N-dimensional indicator vector, with x_i = 1 if node i is in A and x_i = -1 otherwise
d(i) = Σ_j w(i, j) : the total connection from node i to all other nodes

Computing the Optimal Partition Symbol definitions (continued):
D : an N x N diagonal matrix with d on its diagonal
W : an N x N symmetric matrix with W(i, j) = w_ij
k = (Σ_{x_i > 0} d_i) / (Σ_i d_i), with 1 - k the complementary fraction
1 : an N x 1 vector of all ones

Computing the Optimal Partition Using the fact that (1 + x)/2 and (1 - x)/2 are indicator vectors for x_i > 0 and x_i < 0, respectively, we can rewrite 4·Ncut(x) as:
4·Ncut(x) = (1 + x)^T (D - W) (1 + x) / (k · 1^T D 1) + (1 - x)^T (D - W) (1 - x) / ((1 - k) · 1^T D 1)
(put over a common denominator and collect like terms).
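As a quick sanity check of this matrix form (a sketch on the toy W from above, reusing ncut() from the earlier snippet; not code from the paper), one can compare it numerically against the set-based definition:

```python
import numpy as np

W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])
d = W.sum(axis=1)
D = np.diag(d)
x = np.array([1.0, 1.0, -1.0, -1.0])   # indicator: +1 in A, -1 in B
one = np.ones_like(x)

k = d[x > 0].sum() / d.sum()
lhs = 4 * ncut(W, x > 0)               # ncut() as defined in the earlier sketch
rhs = ((one + x) @ (D - W) @ (one + x) / (k * (one @ D @ one))
       + (one - x) @ (D - W) @ (one - x) / ((1 - k) * (one @ D @ one)))
print(np.isclose(lhs, rhs))            # True
```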

Computing the Optimal Partition Let α(x) = x^T (D - W) x, β(x) = 1^T (D - W) x, γ = 1^T (D - W) 1, and M = 1^T D 1. We can expand the above equation using these symbols:
4·Ncut(x) = ((α(x) + γ) + 2(1 - 2k)β(x)) / (k(1 - k)M)
(the last term of the expansion equals 0 and drops out).

Computing the Optimal Partition Dividing the numerator and denominator by the same factor, and noting that γ = 1^T (D - W) 1 = Σ_i d(i) - Σ_{i,j} w(i, j) = 0, we put the terms over a common denominator and collect like terms, so the expression can be written in terms of b = k / (1 - k).

Computing the Optimal Partition Using b = k / (1 - k) and setting y = (1 + x) - b(1 - x), the expression collects into
4·Ncut(x) = (y^T (D - W) y) / (b · 1^T D 1),
and one can check that y^T D y = 4b · 1^T D 1 and that y^T D 1 = 0 for this choice of y.

Computing the Optimal Partition Putting everything together, we have
min_x Ncut(x) = min_y (y^T (D - W) y) / (y^T D y),
with the conditions y = (1 + x) - b(1 - x), x_i ∈ {1, -1} (so y_i takes values in {1, -b}), and y^T D 1 = 0. If y is relaxed to take on real values, we can minimize the expression above by solving the generalized eigenvalue system (D - W) y = λ D y.

Find the Solution of the Eigenvalue System Transform the above system into a standard eigensystem:
D^{-1/2} (D - W) D^{-1/2} z = λ z, where z = D^{1/2} y.
We can easily verify that z_0 = D^{1/2} 1 is an eigenvector with eigenvalue 0.

Find the Solution of the Eigenvalue System D - W, known as the graph Laplacian, is positive semidefinite, so D^{-1/2} (D - W) D^{-1/2} is symmetric positive semidefinite. Hence z_0 is the smallest eigenvector, and all eigenvectors of the symmetric system are mutually perpendicular. The second smallest eigenvector z_1 is therefore the real-valued minimizer subject to z_1 ⊥ z_0 (equivalently, y^T D 1 = 0).
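To make these properties concrete, here is a small numeric check (a sketch on the toy W from earlier, using a dense eigh call for illustration, not the solver the paper uses):

```python
import numpy as np

W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt   # D^{-1/2}(D - W)D^{-1/2}

vals, vecs = np.linalg.eigh(L_sym)                   # ascending eigenvalues
print(np.all(vals >= -1e-12))                        # positive semidefinite
print(np.isclose(vals[0], 0.0))                      # smallest eigenvalue is 0

z0 = np.sqrt(d) / np.linalg.norm(np.sqrt(d))
print(np.allclose(np.abs(vecs[:, 0]), z0))           # z_0 = D^{1/2} 1, up to sign

y = D_inv_sqrt @ vecs[:, 1]                          # relaxed indicator y = D^{-1/2} z_1
print(np.isclose(y @ d, 0.0))                        # satisfies y^T D 1 = 0
print(np.sign(y))                                    # signs suggest the bipartition
```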

The Grouping Algorithm

Solving a standard eigensystem for all eigenvectors takes O(n^3) operations. Luckily, we have some properties that reduce the computation time: 1) the graphs are often only locally connected, so the resulting systems are very sparse; 2) only the top few eigenvectors are needed; and 3) the precision requirement for the eigenvectors is low. An eigensolver that exploits these properties is the Lanczos method.

The Grouping Algorithm The running time of the Lanczos algorithm is O(mn) + O(m·M(n)), where
m : the maximum number of matrix-vector computations required
M(n) : the cost of a matrix-vector computation Ax, where A = D^{-1/2} (D - W) D^{-1/2}
Note that the sparsity structure of A is identical to that of the weight matrix W. Since W is sparse, so is A, and the matrix-vector computation costs O(n); the constant factor is determined by the size of the spatial neighborhood of a node. In the experiments, m is typically less than O(n^{1/2}).
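In Python, scipy's eigsh wraps ARPACK, which implements a Lanczos-type iteration and can exploit exactly these properties; a sketch follows (the chain graph and all parameter values are hypothetical illustrations, not from the paper; the loose tol reflects the low precision requirement, and the small negative shift keeps the shift-invert factorization nonsingular):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def second_smallest_generalized(W):
    """2nd-smallest eigenvector of (D - W) y = lambda * D y via ARPACK's
    Lanczos-type iteration, asking for only two low-precision eigenpairs."""
    d = np.asarray(W.sum(axis=1)).ravel()
    D = sp.diags(d).tocsc()
    L = (D - W).tocsc()                              # sparse graph Laplacian
    vals, vecs = eigsh(L, k=2, M=D, sigma=-1e-6, which='LM', tol=1e-4)
    return vecs[:, np.argsort(vals)[1]]              # skip the trivial eigenvector

# Hypothetical example: a 100-node chain graph with random positive weights.
rng = np.random.default_rng(0)
n = 100
w = rng.uniform(0.5, 1.0, size=n - 1)
W = sp.diags([w, w], offsets=[-1, 1]).tocsc()
y = second_smallest_generalized(W)                   # relaxed Ncut indicator
```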

The Grouping Algorithm In the ideal case, the eigenvector should take on only two discrete values, and the signs of the values tell us exactly how to partition the graph. But the relaxed eigenvector may take continuous values, so we must decide where to split them. We can take 0 or the median as the splitting point; alternatively, we can try every possible splitting point and select the one whose partition gives the best Ncut value.
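A sketch of that sweep (reusing ncut() from the earlier snippet; trying each eigenvector entry as a candidate threshold is one simple way to implement the search):

```python
import numpy as np

def best_split(W, y):
    """Try every eigenvector entry as a splitting point and keep the
    partition with the smallest Ncut value."""
    best_t, best_val = None, np.inf
    for t in np.unique(y)[:-1]:        # exclude the max so neither side is empty
        in_A = y > t
        val = ncut(W, in_A)
        if val < best_val:
            best_t, best_val = t, val
    return best_t, best_val
```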

The Grouping Algorithm

Recursive Two-way Ncut

Simultaneous K-Way Cut with Multiple Eigenvectors One drawback of the recursive two-way cut is its treatment of oscillatory eigenvectors. The approach is also computationally wasteful: only the second eigenvector is used, whereas the next few small eigenvectors also contain useful partitioning information. Instead, use all of the top eigenvectors to simultaneously obtain a K-way partition.

Simultaneous K-Way Cut with Multiple Eigenvectors First step: a simple algorithm is used to obtain an oversegmentation of the image into many small groups. Second step: one can proceed in one of the following two ways, greedy pruning or a global recursive cut.

Greedy pruning

Global recursive cut (figure: a condensed graph over the oversegmented regions, with inter-segment edge weights such as W(1,2), W(1,3), W(1,4), W(2,3), W(3,4); the slide's equation image defines W(i, j) as the total connection weight between segments i and j)

Experiments Construct the graph by taking each pixel as a node. There are two ways to define the edge weight.
a) The product of a feature-similarity term and a spatial-proximity term:
w_ij = exp(-||F(i) - F(j)||^2 / σ_I^2) · exp(-||X(i) - X(j)||^2 / σ_X^2) if ||X(i) - X(j)|| < r, and w_ij = 0 otherwise,
where X(i) is the spatial location of node i and F(i) is a feature vector based on intensity, color, or texture information at node i.

Experiments Define the feature vectors as:
F(i) = 1, in the case of segmenting point sets
F(i) = I(i), the intensity value, for segmenting brightness images
F(i) = [v, v·s·sin(h), v·s·cos(h)](i), where h, s, v are the HSV values, for color segmentation
F(i) = [|I * f_1|, ..., |I * f_n|](i), where the f_i are DOOG filters at various scales and orientations, in the case of texture segmentation
Note that w_ij = 0 for any pair of nodes i and j that are more than r pixels apart. A sketch of this weight construction follows below.
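A minimal sketch of weight construction a) for a small grayscale image (the σ_I, σ_X, and r values are illustrative choices, not the paper's; the neighbor search is brute force for clarity):

```python
import numpy as np
import scipy.sparse as sp

def intensity_weights(img, sigma_I=0.1, sigma_X=4.0, r=5):
    """w_ij = exp(-||F_i - F_j||^2 / sigma_I^2) * exp(-||X_i - X_j||^2 / sigma_X^2)
    for pixels within distance r of each other, 0 otherwise (F = intensity here)."""
    h, w = img.shape
    ys, xs = np.mgrid[:h, :w]
    X = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    F = img.ravel().astype(float)
    rows, cols, vals = [], [], []
    for i in range(h * w):
        d2 = ((X - X[i]) ** 2).sum(axis=1)           # squared spatial distances
        nbrs = np.flatnonzero((d2 < r * r) & (np.arange(h * w) != i))
        wij = (np.exp(-(F[nbrs] - F[i]) ** 2 / sigma_I ** 2)
               * np.exp(-d2[nbrs] / sigma_X ** 2))
        rows.extend([i] * len(nbrs)); cols.extend(nbrs); vals.extend(wij)
    return sp.csr_matrix((vals, (rows, cols)), shape=(h * w, h * w))
```

The resulting sparse W can be passed straight to the eigensolver sketch shown earlier.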

Experiments b) Use motion: treat the image sequence as a spatiotemporal data set. Given an image sequence, a weighted graph is constructed by taking each pixel in the sequence as a node and connecting pixels that are in the spatiotemporal neighborhood of each other.

Experiments Define the weight as w_ij = exp(-d(i, j)^2 / σ^2) for pixels in each other's spatiotemporal neighborhood, where
d(i, j) : the motion distance between pixels i and j
X(i) : the spatiotemporal position of pixel i

Motion Distance To compute this "motion distance", use a motion feature called the motion profile: a measure of the probability distribution of image velocity at each pixel serves as the motion feature vector. There are many ways to compute similarity between two image patches; here, use a measure based on the sum of squared differences (SSD).

Motion Distance

The "motion distance" between two image pixels is defined as one minus the cross-correlation of their motion profiles: d(i, j) = 1 - Σ_dx P_i(dx) P_j(dx).
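A rough sketch of the motion-profile distance (the exponential SSD-to-probability normalization below is an assumed form, not necessarily the paper's; patch extraction assumes the pixel is far enough from the image border):

```python
import numpy as np

def motion_profile(patch_t, frame_t1, center, max_shift=3):
    """Probability distribution over displacements dx of the patch at time t
    reappearing around center + dx in frame t+1, scored by negated SSD."""
    k = patch_t.shape[0] // 2
    scores = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            y, x = center[0] + dy, center[1] + dx
            cand = frame_t1[y - k:y + k + 1, x - k:x + k + 1]
            ssd = ((patch_t - cand) ** 2).sum()
            scores[(dy, dx)] = np.exp(-ssd)   # low SSD -> high weight (assumed form)
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}

def motion_distance(profile_i, profile_j):
    """d(i, j) = 1 - cross-correlation of the two motion profiles."""
    return 1.0 - sum(profile_i[s] * profile_j.get(s, 0.0) for s in profile_i)
```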

Computation Time The running time of the normalized cut algorithm is O(mn), where n is the number of pixels and m is the number of steps Lanczos takes to converge. On the test images, the normalized cut algorithm takes about 2 minutes on an Intel Pentium 200 MHz machine. A multiresolution implementation can be used to reduce this running time further on larger images; with that implementation, the running time can be reduced to about 20 seconds on an Intel Pentium 300 MHz machine. The bottleneck of the computation, a sparse matrix-vector multiplication step, can easily be parallelized, taking advantage of future computer chip designs.

Results