
Grouping and Segmentation Previously: model-based grouping (e.g., straight line, figure 8…) Now: general "bottom-up" image organization (like detecting specific brightness patterns vs. "interesting" patterns/interest points/corners)

Grouping Grouping is the process of associating similar image features together

Grouping Grouping is the process of associating similar image features together The Gestalt School:
–Proximity: nearby image elements tend to be grouped.
–Similarity: similar image elements...
–Common fate: image elements with similar motion...
–Common region: image elements in same closed region...
–Parallelism: parallel curves or other parallel image elements...
–Closure: image elements forming closed curves...
–Symmetry: symmetrically positioned image elements...
–Good continuation: image elements that join up nicely...
–Familiar configuration: image elements giving a familiar object...

Good Continuation

Good continuation

Common Form: (includes color and texture)

Connectivity

Symmetry

Convexity (stronger than symmetry?)

Good continuation also stronger than symmetry?

Closure: closed curves are much easier to pick out than open ones

Grouping and Segmentation Segmentation is the process of dividing an image into regions of “related content” Courtesy Uni Bonn

Grouping and Segmentation Segmentation is the process of dividing an image into regions of “related content” Courtesy Uni Bonn

Grouping and Segmentation Both are ill-defined problems. "Related" or "similar" are "high-level" concepts, hard to apply directly to image data How are they used? Are they "early" processes that precede recognizing objects? A lot of research, some pretty good algorithms. No single best approach.

Boundaries

Problem: Find best path between two boundary points. User can help by clicking on endpoints

How do we decide how good a path is? Which of two paths is better?

Discrete Grid A curve is good if it follows the object boundary. So it should have:
–Strong gradients along the curve (on average)
–Curve directions roughly perpendicular to gradients (since this is usually true for boundaries)
–Smoothness, not jaggedness (object boundaries are typically smooth; curves that follow "noise edges" are typically jagged)
So a good curve has:
–Low curvature (on average) along the curve
–Slow change in gradient direction along the curve

Discrete Grid How to find the best curve? Approach - Define cost function on curves (Given any curve, rule for computing its cost) - Define so good curves have lower cost (non-wiggly curves passing mostly through edgels have low cost) - Search for best curve with lowest cost (minimize cost function over all curves)

Discrete Grid How to find the best curve? Approach - Define cost function on curves (Given any curve, rule for computing its cost) - Define so good curves have lower cost (non-wiggly curves passing mostly through edgels have low cost) - Search for best curve with lowest cost (minimize cost function over all curves) How to compute cost? - At every pixel in image, define costs for “curve fragments” connecting it to adjoining pixels - Compute curve cost by summing up costs for fragments.

Defining cost: smoothness Path: series of pixels p_1, p_2, …, p_n A good curve has small curvature at each pixel –Small direction changes –Direction change small on average (summing along the curve) A good curve has small changes in gradient direction One possible term in the cost definition: a cost for each fragment from p_i to p_i+1

Defining cost: smoothness Path: series of pixels p_1, p_2, …, p_n A good curve has small curvature at each pixel –Small direction changes –Direction change small on average (summing along the curve) A good curve has small changes in gradient direction One possible term in the cost definition: a fragment cost that is small when the gradients at both endpoints are nearly perpendicular to the fragment direction

Defining cost: smoothness Path: series of pixels p_1, p_2, …, p_n A good curve has small curvature at each pixel –Small direction changes –Direction change small on average (summing along the curve) A good curve has small changes in gradient direction One possible term in the cost definition: a fragment cost that is small when the gradients at both endpoints are nearly perpendicular to the fragment direction If this is small at both i and i+1, then the direction change at i is small. Low average value of this quantity → low average curvature (good curve)

Path Cost Function (better path has smaller cost) Path: p_1, p_2, …, p_n Total cost: sum the cost for each pixel over the whole path One possible cost definition:

Path Cost Function (better path has smaller cost) Path: p_1, p_2, …, p_n Total cost: sum the cost for each pixel over the whole path One possible cost definition: small for high gradient and small direction change
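
To make this concrete, here is a minimal numpy sketch of one possible per-fragment cost of this flavor; grad_mag and grad_dir are assumed to be precomputed gradient magnitude and direction images, and the weights alpha and beta are made up for illustration (the slides' actual formula is not reproduced here).

import numpy as np

def fragment_cost(grad_mag, grad_dir, p, q, alpha=1.0, beta=1.0):
    # Illustrative cost for the fragment from pixel p to pixel q:
    # small when the gradient is strong at both pixels (strong edge) and when
    # the gradient direction changes little between them (smooth boundary).
    strength = 0.5 * (grad_mag[p] + grad_mag[q])                    # average gradient magnitude
    turn = abs(np.angle(np.exp(1j * (grad_dir[p] - grad_dir[q]))))  # wrapped direction change, in [0, pi]
    return alpha / (1.0 + strength) + beta * turn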

How do we find the best Path? Computer Science… Remember: Curve is path through grid. Cost sums over each step of path. We want to minimize cost.

Map problem to Graph Edges connect pixels Edge weight = cost of fragment connecting the two pixels Each pixel is a node

Map problem to Graph Edge weight = cost of fragment connecting the two pixels (example costs shown in the figure)

Map problem to Graph Edge weight = cost of fragment connecting the two pixels (example costs shown in the figure) Note: this is just an example of a cost function. The cost function actually used differs

Algorithm: basic idea Two steps: 1) Compute least costs for best paths from start pixel to all other pixels. 2) Given a specific end pixel, use the cost computation to get best path between start and end pixels.

Dijkstra's shortest path algorithm Algorithm: compute least costs from start node to all other nodes Iterate outward from the start node, adding one node at a time. Cost estimates start high; nodes gradually "learn" their true cost Always choose the next node so that its correct cost can be calculated (figure: link costs on the grid)

Dijkstra's shortest path algorithm Algorithm: compute least costs from start node to all other nodes Iterate outward from the start node, adding one node at a time. Cost estimates start high, gradually improve Always choose the next node so that its correct cost can be calculated. Example: choose left node L with cost 1. The min cost to L equals 1, since any other path to L gets cost > 1 on its first step from the start node

Dijkstra's shortest path algorithm Computes minimum costs from seed to every pixel –Running time (N pixels): O(N log N) using an active priority queue (heap); a fraction of a second for a typical (640x480) image Then find the best path from the seed to any point in O(N). → Real time!
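
A minimal Python sketch of the two-step idea on a 4-connected pixel grid, assuming a user-supplied edge_cost(p, q) function for the fragment cost between neighboring pixels (the actual Intelligent Scissors cost differs):

import heapq

def dijkstra_grid(edge_cost, seed, shape):
    # Step 1: least path cost from the seed pixel to every other pixel.
    # edge_cost(p, q): cost of the fragment between 4-neighbors p and q (user-supplied).
    H, W = shape
    dist = {seed: 0.0}
    parent = {seed: None}
    heap = [(0.0, seed)]
    done = set()
    while heap:
        d, p = heapq.heappop(heap)
        if p in done:
            continue
        done.add(p)                                    # p's cost is now final
        y, x = p
        for q in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= q[0] < H and 0 <= q[1] < W and q not in done:
                nd = d + edge_cost(p, q)
                if nd < dist.get(q, float('inf')):
                    dist[q], parent[q] = nd, p
                    heapq.heappush(heap, (nd, q))
    return dist, parent

def best_path(parent, end):
    # Step 2: backtrack from the chosen end pixel to the seed.
    path = []
    while end is not None:
        path.append(end)
        end = parent[end]
    return path[::-1]

Calling dijkstra_grid once per seed click and best_path for each candidate end pixel matches the two steps above.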

Intelligent Scissors

Results

Voting again Recall Hough Transform –Idea: data votes for best model (e.g., best line passing through data points) –Implementation: Choose type of model in advance (e.g., straight lines) Each data point votes on parameters of model (e.g., location + orientation of line)

Another type of voting Popularity contest –Each data point votes for others that it “likes,” that it feels “most related to” –“Most popular” points form a distinct group –(Like identifying social network from cluster of web links)

B A B “feels related” to A and votes for it.

C A C also “feels related” to A and votes for it.

D A So do D…

A So do D and E E

“Popular” points form a group, with “friends” reinforcing each other

feels unconnected to, doesn’t vote for it (or votes against)

also feels unconnected and doesn’t vote for

Only B feels connected and votes for (but it’s not enough) B

is unpopular and left out of group

Note: even though can connect smoothly to, it doesn’t feel related since it’s too far away

Identifying groups by “popularity” In vision, technical name for “popularity” is saliency Procedure –Voting: Pixels accumulate votes from their neighbors –Grouping: Find smooth curves made up of the most salient pixels (the pixels with the most votes)

Identifying groups by “popularity” In vision, technical name for “popularity” is saliency Procedure –Voting: Pixels accumulate votes from their neighbors –Grouping: Find smooth curves made up of the most salient pixels (the pixels with the most votes) Grouping: Start at salient point

Identifying groups by “popularity” In vision, technical name for “popularity” is saliency Procedure –Voting: Pixels accumulate votes from their neighbors –Grouping: Find smooth curves made up of the most salient pixels (the pixels with the most votes) Grouping: Start at salient point Grow curve toward most related salient point

Identifying groups by “popularity” In vision, technical name for “popularity” is saliency Procedure –Voting: Pixels accumulate votes from their neighbors –Grouping: Find smooth curves made up of the most salient pixels (the pixels with the most votes) Grouping: Start at salient point Grow curve toward most related salient point

Identifying groups by “popularity” In vision, technical name for “popularity” is saliency Procedure –Voting: Pixels accumulate votes from their neighbors –Grouping: Find smooth curves made up of the most salient pixels (the pixels with the most votes) Grouping: Start at salient point Grow curve toward most related salient point

Identifying groups by “popularity” In vision, technical name for “popularity” is saliency Procedure –Voting: Pixels accumulate votes from their neighbors –Grouping: Find smooth curves made up of the most salient pixels (the pixels with the most votes) Grouping: Start at salient point Grow curve toward most related salient point

Affinity In vision, the amount of “relatedness” between two data points is called their affinity Related curve fragments: high affinity

Affinity In vision, the amount of “relatedness” between two data points is called their affinity Related curve fragments: high affinity (Since can be connected by smooth curve)

Affinity In vision, the amount of “relatedness” between two data points is called their affinity Related curve fragments: high affinity (Since can be connected by smooth curve) Unrelated curve fragments: low affinity

Affinity In vision, the amount of “relatedness” between two data points is called their affinity Related curve fragments: high affinity (Since can be connected by smooth curve) Unrelated curve fragments: low affinity (Can’t be connected by smooth curve)

Computing Affinity Co-circularity: a simple measure of affinity between line fragments –Two fragments are smoothly related if both lie on the same circle. –Two fragments on the same circle have the property that θ1 + θ2 = 2φ (the chord joins the line fragment centers)

Computing Affinity Co-circularity: a simple measure of affinity between line fragments –Two fragments are smoothly related if both lie on the same circle. –Two fragments on the same circle have the property that θ1 + θ2 = 2φ (the chord joins the line fragment centers) φ is the angle between the chord and the x-axis

Computing Affinity Co-circularity: a simple measure of affinity between line fragments –Two fragments are smoothly related if both lie on the same circle. –Two fragments on the same circle have the property that θ1 + θ2 = 2φ (the chord joins the line fragment centers) φ is the angle between the chord and the x-axis θ1 is the angle of the first fragment with the x-axis

Computing Affinity Co-circularity: a simple measure of affinity between line fragments –Two fragments are smoothly related if both lie on the same circle. –Two fragments on the same circle have the property that θ1 + θ2 = 2φ (the chord joins the line fragment centers) φ is the angle between the chord and the x-axis θ2 is the angle of the second fragment with the x-axis

Computing Affinity Co-circularity: a simple measure of affinity between line fragments –Two fragments are smoothly related if both lie on the same circle. –Two fragments on the same circle have the property that θ1 + θ2 = 2φ (the chord joins the line fragment centers) φ is the angle between the chord and the x-axis

Computing Affinity Co-circularity: a simple measure of affinity between line fragments –Two fragments are smoothly related if both lie on the same circle. –Two fragments on the same circle have the property that θ1 + θ2 = 2φ (the chord joins the line fragment centers) φ is the angle between the chord and the x-axis Co-circularity: the angle of the second fragment is determined by the angle of the first and the angle of the connecting line (θ2 = 2φ − θ1)

Computing Affinity 2φ − θ1 is the orientation that the second fragment should have if it connects smoothly to the first fragment. Measure affinity by the difference with the actual orientation θ2: Example (use Gaussians so affinity falls off smoothly with this difference) (figure: edgel locations)

Computing Affinity 2φ − θ1 is the orientation that the second fragment should have if it connects smoothly to the first fragment. Measure affinity by the difference with the actual orientation θ2: Example (use Gaussians so affinity falls off smoothly) Treat both edge fragments in the same way: sum the orientation discrepancy for both

Computing Affinity 2φ − θ1 is the orientation that the second fragment should have if it connects smoothly to the first fragment. Measure affinity by the difference with the actual orientation θ2: Example (use Gaussians so affinity falls off smoothly) More distant edges should have less affinity

Computing Affinity Why give smaller affinity to distant fragments? –Distant fragments likely to come from different objects –Many distant fragments, so some line up just by random chance

Computing Affinity For faster computation, fewer affinities: Set affinities to 0 beyond some distance T: (Often choose T small: e.g., less than 7 pixels, or just to include the nearest neighbors)
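
A sketch of one possible pairwise affinity along these lines, assuming edgels are given as (x, y) positions with orientations in radians; the Gaussian scales sigma_theta, sigma_d and the cutoff distance T are made-up parameters:

import numpy as np

def affinity(p1, theta1, p2, theta2, sigma_theta=0.3, sigma_d=3.0, T=7.0):
    # Co-circularity-style affinity between two edgels at positions p1, p2 (x, y)
    # with orientations theta1, theta2 (radians).
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = np.hypot(dx, dy)
    if d > T:                              # ignore edgels beyond the cutoff distance
        return 0.0
    phi = np.arctan2(dy, dx)               # angle of the chord joining the edgel centers
    expected = 2 * phi - theta1            # orientation the second edgel should have if co-circular
    diff = np.angle(np.exp(1j * (theta2 - expected)))   # wrapped orientation discrepancy
    return float(np.exp(-diff**2 / (2 * sigma_theta**2)) *
                 np.exp(-d**2 / (2 * sigma_d**2)))       # Gaussian falloff in angle and distance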

Eigenvector Grouping

Reminder: Grouping with Affinity Matrix Task: Partition image elements into groups G_1, ..., G_n of similar elements. Affinities should be: –high within a group (Aff(i,j) large when i and j are in the same group) –low between groups (Aff(i,j) small when i and j are in different groups)

Eigenvector grouping Eigenvector grouping is just improved popularity grouping! Idea: –To judge your own popularity, weight the votes from your friends according to their popularity.

Eigenvector grouping Eigenvector grouping is just improved popularity grouping! Idea: –To judge your own popularity, weight the votes from your friends according to their popularity. –Recompute using a better estimate of friends' popularity: (use weighted voting for them also)

Eigenvector grouping Eigenvector grouping is just improved popularity grouping! Idea: –To judge your own popularity, weight the votes from your friends according to their popularity. –Recompute using a better estimate of friends' popularity: (use weighted voting for them also) –Even better if you recompute using this improved estimate for friends

Eigenvector grouping Eigenvector grouping is just improved popularity grouping! Idea: –To judge your own popularity, weight the votes from your friends according to their popularity. –Recompute using a better estimate of friends' popularity: (use weighted voting for them also) –Even better if you recompute using this improved estimate for friends

Eigenvector grouping Eigenvector grouping is just improved popularity grouping! Idea: –To judge your own popularity, weight the votes from your friends according to their popularity. –Recompute using a better estimate of friends' popularity: (use weighted voting for them also) –Even better if you recompute using this improved estimate for friends –Iterating this, "weighted popularity" converges to an eigenvector!

Your popularity = sum of friend’s votes

Usually add normalization so popularities always add to 1

Your popularity = sum of friend’s votes But don’t care about normalization. Important thing is that P is proportional to this

Your popularity = sum of friend’s votes Now use new estimate of friend’s popularity to weight their vote

Your popularity = sum of friend’s votes Now use new estimate of friend’s popularity to weight their vote

Your popularity = sum of friend’s votes Again use new estimate…

Your popularity = sum of friend’s votes Keep going…

Your popularity = sum of friend's votes Keep going… An eigenvector of Aff!

Your popularity = sum of friend's votes Keep going… An eigenvector of Aff! Why? For very large N, P_(N+1) ≈ P_N (one more time makes little difference) →

Your popularity = sum of friend's votes Keep going… An eigenvector of Aff! Why? For very large N, P_(N+1) ≈ P_N (one more time makes little difference) → P is an eigenvector of Aff, by definition

Your popularity = sum of friend's votes Another view Multiplying by Aff many times, only the component E with largest |Aff E| remains from the starting vector (1, 1, …, 1)

Your popularity = sum of friend's votes Another view Multiplying by Aff many times, only the component E with largest |Aff E| remains from the starting vector (1, 1, …, 1) So this is the eigenvector of Aff with the biggest eigenvalue
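
A small numpy sketch of this "weighted popularity" iteration (power iteration); Aff is assumed to be the symmetric affinity matrix built earlier:

import numpy as np

def leading_eigenvector(Aff, n_iter=100):
    # "Weighted popularity": start with equal popularity, let neighbors vote
    # weighted by their current popularity, renormalize, and repeat.
    P = np.ones(Aff.shape[0])
    for _ in range(n_iter):
        P = Aff @ P                    # each element's popularity = sum of weighted votes
        P = P / np.linalg.norm(P)      # normalization (keeps P from growing or shrinking)
    return P                           # converges to the leading eigenvector of Aff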

Biggest Eigenvector: Math Eigenvectors of Aff (or any symmetric matrix) give a rotated coordinate system. So we can write any vector V in terms of the eigenvectors E_i

Biggest Eigenvector: Math Eigenvectors of Aff (or any symmetric matrix) give a rotated coordinate system. So we can write any vector V in terms of the eigenvectors E_i

Biggest Eigenvector: Math Eigenvectors of Aff (or any symmetric matrix) give a rotated coordinate system. So we can write V = a_1 E_1 + … + a_n E_n, with E_i an eigenvector and λ_i the eigenvalue for E_i

Biggest Eigenvector: Math Eigenvectors of Aff (or any symmetric matrix) give a rotated coordinate system. So we can write V = a_1 E_1 + … + a_n E_n, with λ_i the eigenvalue for E_i Biggest eigenvalue dominates: Aff^N V = a_1 λ_1^N E_1 + … + a_n λ_n^N E_n ≈ a_1 λ_1^N E_1 for large N

Eigenvector Grouping: Another View Toy problem –Single foreground group:

Eigenvector Grouping Toy problem –Single foreground group: …

Eigenvector Grouping Toy problem –Single foreground group: This is what Aff looks like if pixels in foreground group come before background pixels

Grouping Toy problem: single foreground group (affinity matrix as above) Algorithm to find group –Find the unit vector V that makes |Aff V| largest

Grouping Toy problem: single foreground group (affinity matrix as above) Algorithm to find group –Find the unit vector V that makes |Aff V| largest –Large entry in V indicates group member.

Grouping Toy problem: single foreground group (affinity matrix as above) Algorithm to find group –Find the unit vector V that makes |Aff V| largest –Answer: V ~ equal-sized entries at the foreground positions, zeros elsewhere (because every nonzero entry of V contributes as much as possible to |Aff V|)

Grouping Toy problem: single foreground group (affinity matrix as above) Algorithm to find group –Find the unit vector V that makes |Aff V| largest –To compute V: find the 'largest' eigenvector of Aff (the one with the largest eigenvalue)

Grouping Toy problem: single foreground group (affinity matrix as above) Algorithm to find group –Find the unit vector V that makes |Aff V| largest –To compute V: find the 'largest' eigenvector of Aff (this gives the largest value for |Aff V|)

Eigenvector Grouping Toy problem –Single foreground group: (Toy) Algorithm summary: –Compute largest eigenvector E of Aff –Assign element i to foreground if E_i is large –Assign to background if E_i ≈ 0

Eigenvector Grouping Why does largest eigenvector work? Leading eigenvector E of A has properties: A E = λ E (by definition)

Eigenvector Grouping Why does largest eigenvector work? Leading eigenvector E of A has properties: A E = λ E (by definition), λ the eigenvalue (a number); should be the largest eigenvalue

Eigenvector Grouping Why does largest eigenvector work? Leading eigenvector E of A has properties: A E = λ E (by definition) Fact: the max of |A V| over unit vectors V occurs at V = E (max value λ)

Eigenvector Grouping Why does largest eigenvector work? Leading eigenvector E of A has properties: A E = λ E (by definition) Fact: the max of |A V| over unit vectors V occurs at V = E (max value λ) Interpretation: E is the "central direction" of all rows of A (largest dot products with rows of A)

Aside: Math Fact: the max of |A V| over unit vectors V occurs at V = E (max value λ) (equivalent to maximizing the square |A V|² = V^T A^T A V)

Aside: Math Derive from calculus: the max is at a stationary point. Show: the max occurs for V an eigenvector of A

Aside: Math Derive from calculus: the max is at a stationary point. Show: the max occurs for V an eigenvector of A

Aside: Math Derive from calculus: the max is at a stationary point At the max, V is an eigenvector of A^T A

Aside: Math Derive from calculus: the max is at a stationary point At the max, V is an eigenvector of A^T A → V an eigenvector of A

Aside: Math Derive from calculus: the max is at a stationary point At the max, V is an eigenvector of A^T A → V an eigenvector of A Why? A symmetric → A has the same eigenvectors as A^T A = A², since A E = λ E implies A² E = λ² E
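
Written out, the aside's derivation is the standard Rayleigh-quotient argument (a sketch in my own notation, not the slides'):

\max_{\|V\|=1} \|A V\|^2 = \max_{\|V\|=1} V^\top A^\top A V

Introducing a Lagrange multiplier \mu for the constraint V^\top V = 1, a stationary point satisfies A^\top A V = \mu V, so V is an eigenvector of A^\top A. Since A is symmetric, A^\top A = A^2, and A E = \lambda E implies A^2 E = \lambda^2 E, so A^2 has the same eigenvectors as A; the maximizing V is therefore the eigenvector of A with the largest |\lambda|, and the maximum of \|A V\| is |\lambda|.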

Eigenvector Grouping More realistic problem –Single foreground group:

Eigenvector Grouping More realistic problem –Single foreground group: Eigenvector also has form

Eigenvector Grouping More realistic problem –Single foreground group: Eigenvector also has form (E is unit vector. To get largest product with Aff, more efficient to have big entries where they multiply big entries of Aff.)

Eigenvector Grouping More realistic problem –Single foreground group: Eigenvector also has form Can identify foreground items by their big entries of E

Eigenvector Grouping Real Algorithm summary –Compute largest eigenvector E of Aff –Assign element i to foreground if E_i > T –Assign to background if E_i ≤ T (T a threshold chosen by you)
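
A sketch of this summary in numpy; T is the user-chosen threshold from the slide, and the eigendecomposition stands in for the power iteration above:

import numpy as np

def eigenvector_group(Aff, T=0.1):
    # Foreground = elements whose entry in the leading eigenvector of Aff exceeds T.
    vals, vecs = np.linalg.eigh(Aff)   # Aff symmetric; eigenvalues in ascending order
    E = vecs[:, -1]                    # eigenvector with the largest eigenvalue
    if E.sum() < 0:                    # fix the arbitrary sign so group entries are positive
        E = -E
    return E > T                       # boolean foreground mask (T chosen by you)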

Eigenvector Grouping Even more realistic problem –Several groups: –Example: two groups (Group 1, Group 2)

Eigenvector Grouping Even more realistic problem –Several groups: –Example: two groups (Group 1, Group 2) Usually, the leading eigenvector picks out one of the groups (the one with biggest affinities). In this case, we again expect the big entries of E to mark the members of that one group

Technical Aside Why does eigenvector pick out just one group? Toy example: a 2 x 2 "affinity matrix" with diagonal entries a and b, where a and b are "big" values; affinities between "groups" (the off-diagonal entries) = 0

Technical Aside Why does eigenvector pick out just one group? Toy example: a 2 x 2 "affinity matrix" with diagonal entries a and b, where a and b are "big" values; affinities between "groups" (the off-diagonal entries) = 0 Leading eigenvector is (1, 0) with eigenvalue a (taking a > b) Intuition: E has to be a unit vector. To be most efficient in getting a large dot product, it puts all its large values into the group with the largest affinities

Eigenvector Grouping For several groups, the leading eigenvector picks out the "leading" (most self-similar) group. (Remember: calculate the eigenvector by svd on Aff) To identify other groups, we can remove the points in the leading group and repeat the leading eigenvector computation for the remaining points

Eigenvector Grouping For several groups, the leading eigenvector (or singular vector) picks out the "leading" (most self-similar) group. To identify other groups, we can remove the points in the leading group and repeat the leading eigenvector computation for the remaining points Alternative: just use the non-leading eigenvectors/eigenvalues to pick out the non-leading groups.

(Alternative) Algorithm Summary
–Choose affinity measures and create affinity matrix Aff
–Compute eigenvalues for Aff. Assign all elements to active list L
–Repeat for each eigenvalue, starting from the largest:
–Compute the corresponding eigenvector
–Choose a threshold, assign elements above it to a new group
–Remove the new group's elements from active list L
–Stop if L is empty or the new group is too small
–"Clean up" groups (optional)
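
A numpy sketch of this alternative loop, with a made-up min_size for the "too small" stopping test:

import numpy as np

def group_by_eigenvectors(Aff, T=0.1, min_size=5):
    # Peel off groups using eigenvectors, from the largest eigenvalue downward.
    vals, vecs = np.linalg.eigh(Aff)            # Aff symmetric; eigenvalues ascending
    order = np.argsort(vals)[::-1]              # eigenvalue indices, largest first
    active = np.ones(Aff.shape[0], dtype=bool)  # active list L: everything starts unassigned
    groups = []
    for k in order:
        E = np.abs(vecs[:, k])                  # non-leading eigenvectors may have negative entries
        members = np.where(active & (E > T))[0]
        if len(members) < min_size:             # stop if the new group is too small
            break
        groups.append(members)
        active[members] = False                 # remove the new group's elements from L
        if not active.any():                    # stop if L is empty
            break
    return groups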

Eigenvector Grouping Problems Method picks "strongest" group. When there are several groups of similar strength, it can get confused. The eigenvector algorithm described so far is not much used currently.

Eigenvectors + Graph Cut Methods

Graph Cut Methods Image → undirected graph G = (V,E) Each graph edge has weight value w(E) Example –Vertex nodes = edgels i –Graph edges go between edgels i, j –Graph edge has weight w(i,j) = Aff(i,j) Task: Partition V into V_1...V_n, s.t. similarity is high within groups and low between groups

Issues What is a good partition ? How can you compute such a partition efficiently ?

Graph Cut G=(V,E) Sets A and B are a disjoint partition of V

Graph Cut G=(V,E) Sets A and B are a disjoint partition of V Measure of dissimilarity between the two groups: cut(A,B) = sum of w(u,v) over edges with u in A and v in B

Graph Cut G=(V,E) Sets A and B are a disjoint partition of V Measure of dissimilarity between the two groups: cut(A,B) = sum of w(u,v) over edges with u in A and v in B

Graph Cut G=(V,E) Sets A and B are a disjoint partition of V Measure of dissimilarity between the two groups: cut(A,B) = sum of w(u,v) over edges with u in A and v in B (the "cut links" are the edges that get summed over)

The temptation Cut is a measure of disassociation Minimizing Cut gives partition with maximum disassociation. Efficient poly-time algorithms exist

The problem with MinCut It usually outputs segments that are too small!

The problem with MinCut It usually outputs segments that are too small! Can get small cut just by having fewer cut links

The Normalized Cut (Shi + Malik)

Given a partition (A,B) of the vertex set V, define Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V), where assoc(A,V) = sum of w(u,t) over u in A, t in V. Ncut(A,B) measures the difference between the two groups, normalized by how similar they are within themselves. If A is small, assoc(A,V) is small and Ncut is large (bad partition). Problem: Find the partition minimizing Ncut
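
A small numpy sketch of these quantities, using the standard cut/assoc definitions with W the weight matrix and A a boolean mask for one side of the partition:

import numpy as np

def ncut(W, A):
    # W: n x n symmetric weight matrix; A: boolean mask for one side of the partition.
    B = ~A
    cut = W[np.ix_(A, B)].sum()           # cut(A,B): weight of all edges between A and B
    assoc_A = W[A, :].sum()               # assoc(A,V): total connection from A to all nodes
    assoc_B = W[B, :].sum()
    return cut / assoc_A + cut / assoc_B  # Ncut(A,B)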

Matrix formulation Definitions: D is an n x n diagonal matrix with entries D(i,i) = sum over j of w(i,j) W is an n x n symmetric matrix with W(i,j) = w(i,j)

Normalized cuts Transform the problem to one which can be approximated by eigenvector methods. After some algebra, the Ncut problem becomes: minimize y^T (D − W) y / (y^T D y) subject to the constraints: y_i ∈ {1, −b} for some constant b, and y^T D 1 = 0

Normalized cuts Transform the problem to one which can be approximated by eigenvector methods. After some algebra, the Ncut problem becomes: minimize y^T (D − W) y / (y^T D y) subject to the constraints: y_i ∈ {1, −b} for some constant b, and y^T D 1 = 0 NP-Complete!

Normalized cuts Drop the discreteness constraints to make the problem easier, solvable by eigenvectors: minimize y^T (D − W) y / (y^T D y) over real-valued y Subject to constraint: y^T D 1 = 0

Normalized cuts Drop the discreteness constraints to make the problem easier, solvable by eigenvectors: minimize y^T (D − W) y / (y^T D y) over real-valued y Subject to constraint: y^T D 1 = 0 Solution y is the second least eigenvector of the matrix M = D^(−1/2) (D − W) D^(−1/2)

Normalized cuts Drop the discreteness constraints to make the problem easier, solvable by eigenvectors: minimize y^T (D − W) y / (y^T D y) over real-valued y Subject to constraint: y^T D 1 = 0 Solution y is the second least eigenvector of the matrix M = D^(−1/2) (D − W) D^(−1/2) (easy to form, since D is diagonal)

Normalized cuts Drop the discreteness constraints to make the problem easier, solvable by eigenvectors: minimize y^T (D − W) y / (y^T D y) over real-valued y Subject to constraint: y^T D 1 = 0 Solution y is the second least eigenvector of the matrix M = D^(−1/2) (D − W) D^(−1/2) Because of the constraint! ("second least" means the eigenvector for the second smallest eigenvalue.)

Normalized cuts Drop the discreteness constraints to make the problem easier, solvable by eigenvectors: minimize y^T (D − W) y / (y^T D y) over real-valued y Subject to constraint: y^T D 1 = 0 Solution y is the second least eigenvector of the matrix M = D^(−1/2) (D − W) D^(−1/2) MATLAB: [U,S,V] = svd(M); y= V(:,end-1);

Normalized Cuts: algorithm Define affinities Compute matrices: W, D, and M = D^(−1/2) (D − W) D^(−1/2) [U,S,V] = svd(M); y= V(:,end-1); Threshold y at its mean (could also use a different threshold besides the mean of y)
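
A numpy sketch of the whole pipeline; the slide's MATLAB svd call is replaced by a symmetric eigendecomposition, and M is taken to be D^(−1/2)(D − W)D^(−1/2) as above (an assumption, since the slide's matrix is not fully shown):

import numpy as np

def normalized_cut_partition(W):
    # W: n x n symmetric affinity matrix (assumes every node has some affinity).
    d = W.sum(axis=1)
    D = np.diag(d)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))          # easy, since D is diagonal
    M = D_inv_sqrt @ (D - W) @ D_inv_sqrt            # assumed form of the matrix M above
    vals, vecs = np.linalg.eigh(M)                   # eigenvalues in ascending order
    z = vecs[:, 1]                                   # eigenvector for the second smallest eigenvalue
    y = D_inv_sqrt @ z                               # undo the normalization to recover y
    return y > y.mean()                              # threshold y at its mean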

Normalized Cuts Very powerful algorithm, widely used Results shown later