1
Grouping and Segmentation Previously: model-based grouping (e.g., straight line, figure 8, …). Now: general "bottom-up" image organization (like detecting specific brightness patterns vs. "interesting" patterns / interest points / corners).
2
Grouping Grouping is the process of associating similar image features together
3
Grouping Grouping is the process of associating similar image features together. The Gestalt School: –Proximity: nearby image elements tend to be grouped. –Similarity: similar image elements... –Common fate: image elements with similar motion... –Common region: image elements in same closed region... –Parallelism: parallel curves or other parallel image elements... –Closure: image elements forming closed curves... –Symmetry: symmetrically positioned image elements... –Good continuation: image elements that join up nicely... –Familiar Configuration: image elements giving a familiar object...
8
Good Continuation
9
Good continuation
11
Common Form: (includes color and texture)
12
Connectivity
13
Symmetry
15
Convexity (stronger than symmetry?)
16
Good continuation also stronger than symmetry?
17
Closure Closed curves are much easier to pick out than open ones
18
Grouping and Segmentation Segmentation is the process of dividing an image into regions of “related content” Courtesy Uni Bonn
20
Grouping and Segmentation Both are ill-defined problems: "related" or "similar" are high-level concepts that are hard to apply directly to image data. How are they used? Are they "early" processes that precede recognizing objects? There has been a lot of research and there are some pretty good algorithms, but no single best approach.
21
Boundaries
22
Problem: Find best path between two boundary points. User can help by clicking on endpoints
23
How do we decide how good a path is? Which of two paths is better?
24
Discrete Grid A curve is good if it follows the object boundary, so it should have: strong gradients along the curve (on average), and curve directions roughly perpendicular to the gradients (since this is usually true for boundaries). The curve should also be smooth, not jagged, since object boundaries are typically smooth while curves that follow "noise edges" are typically jagged. So a good curve has: low curvature (on average) along the curve, and slow change in gradient direction along the curve.
26
Discrete Grid How to find the best curve? Approach: –Define a cost function on curves (given any curve, a rule for computing its cost). –Define it so that good curves have lower cost (non-wiggly curves passing mostly through edgels have low cost). –Search for the best curve, the one with the lowest cost (minimize the cost function over all curves). How to compute the cost? –At every pixel in the image, define costs for the "curve fragments" connecting it to adjoining pixels. –Compute the curve cost by summing up the costs of its fragments.
29
Defining cost: smoothness Path: a series of pixels p1, p2, …, pn. A good curve has small curvature at each pixel – small direction changes, small on average (summing along the curve) – and small changes in gradient direction along the curve. One possible term in the cost definition for the fragment joining pi and pi+1: a term that is small when the gradients at both pixels are nearly perpendicular to the fragment direction. If this term is small at fragments i and i+1, then the curvature at pi is small, so a low average value of this quantity along the path means low average curvature (a good curve).
31
Path Cost Function (a better path has a smaller cost) Path: p1, p2, …, pn. Total cost: sum the cost for each fragment over the whole path. One possible cost definition: a per-fragment cost that is small for high gradient magnitude and small change in gradient direction.
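For illustration only – not the exact cost used on these slides – here is a minimal Python sketch of such a fragment cost, assuming precomputed arrays grad_mag and grad_dir holding gradient magnitudes and directions:

import numpy as np

def fragment_cost(grad_mag, grad_dir, p, q):
    # Hypothetical fragment cost between neighboring pixels p and q (row, col):
    # low when the gradient at q is strong and when the fragment direction is
    # nearly perpendicular to the gradients at p and q.
    gm = grad_mag[q] / (grad_mag.max() + 1e-9)            # normalized edge strength
    frag = np.array([q[0] - p[0], q[1] - p[1]], float)
    frag /= np.linalg.norm(frag)
    cost = 1.0 - gm                                       # strong edge -> small cost
    for r in (p, q):
        g = np.array([np.cos(grad_dir[r]), np.sin(grad_dir[r])])
        cost += abs(g @ frag)                             # 0 when gradient is perpendicular to fragment
    return cost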
32
How do we find the best Path? Computer Science… Remember: Curve is path through grid. Cost sums over each step of path. We want to minimize cost.
33
Map problem to Graph Each pixel is a node. Edges connect adjoining pixels. Edge weight = cost of the fragment connecting the two pixels.
35
Map problem to Graph Example cost: edge weight = cost of the fragment connecting the two pixels. Note: this is just an example of a cost function; the cost function actually used differs.
36
Algorithm: basic idea Two steps: 1) Compute the least costs for the best paths from the start pixel to all other pixels. 2) Given a specific end pixel, use the cost computation to get the best path between the start and end pixels.
38
Dijkstra's shortest path algorithm Algorithm: compute the least costs from the start node to all other nodes. Iterate outward from the start node, adding one node at a time. Cost estimates start high and gradually improve as nodes "learn" their true cost. Always choose the next node so that its correct cost can be calculated. Example: choose the left node L with link cost 1. The minimum cost to L equals 1, since any other path to L incurs cost > 1 on its first step from the start node.
39
Dijkstra's shortest path algorithm Computes minimum costs from the seed to every pixel. –Running time (N pixels): O(N log N) using an active priority queue (heap) – a fraction of a second for a typical (640x480) image. Then find the best path from the seed to any point in O(N). Real time!
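A minimal Python sketch of this step (the names and the 4-connected neighborhood are my own choices, not the course code); it assumes a per-pixel step cost array derived from fragment costs like the one sketched earlier:

import heapq
import numpy as np

def dijkstra_costs(cost, seed):
    # Least-cost paths from seed (row, col) to every pixel on a 4-connected grid.
    # Returns the total cost map and a predecessor map for backtracking.
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    pred = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                          # stale queue entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + cost[nr, nc]         # cost of taking this step
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    pred[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist, pred

def best_path(pred, seed, end):
    # Backtrack the best path from a chosen end pixel to the seed.
    path = [end]
    while path[-1] != seed:
        path.append(pred[path[-1]])
    return path[::-1]

Given the predecessor map, best_path backtracks from any user-clicked end pixel, which is why the interactive tool can respond in real time.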
40
Intelligent Scissors
41
Results
42
Voting again Recall the Hough Transform –Idea: the data votes for the best model (e.g., the best line passing through the data points) –Implementation: choose the type of model in advance (e.g., straight lines); each data point votes on the parameters of the model (e.g., location + orientation of the line)
43
Another type of voting Popularity contest –Each data point votes for others that it “likes,” that it feels “most related to” –“Most popular” points form a distinct group –(Like identifying social network from cluster of web links)
44
B "feels related" to A and votes for it.
45
C also "feels related" to A and votes for it.
46
So does D…
47
So do D and E.
48
“Popular” points form a group, with “friends” reinforcing each other
49
Some points feel unconnected to a given point and don't vote for it (or vote against it).
50
Other points also feel unconnected and don't vote for it.
51
Only B feels connected and votes for it (but that's not enough).
52
That point is unpopular and is left out of the group.
53
Note: even though one point can connect smoothly to another, it doesn't feel related if it's too far away.
56
Identifying groups by "popularity" In vision, the technical name for "popularity" is saliency. Procedure –Voting: pixels accumulate votes from their neighbors. –Grouping: find smooth curves made up of the most salient pixels (the pixels with the most votes) – start at a salient point and grow the curve toward the most related salient point.
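In code, the voting step is just an accumulation over an affinity matrix; a hedged sketch, where Aff is the pairwise affinity defined on the following slides:

import numpy as np

def saliency_votes(Aff):
    # Aff[i, j] = affinity ("vote") between edgels i and j (defined on later slides).
    saliency = Aff.sum(axis=1)              # each edgel accumulates votes from its neighbors
    return np.argsort(-saliency)            # most salient (most popular) edgels first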
63
Affinity In vision, the amount of "relatedness" between two data points is called their affinity. Related curve fragments have high affinity (since they can be connected by a smooth curve); unrelated curve fragments have low affinity (they can't be connected by a smooth curve).
64
Computing Affinity Co-circularity: a simple measure of affinity between line fragments. –Two fragments are smoothly related if both lie on the same circle. –Two fragments on the same circle have the property that θ1 + θ2 = 2φ, where φ is the angle between the x-axis and the chord joining the fragment centers, θ1 is the angle of the first fragment with the x-axis, and θ2 is the angle of the second fragment with the x-axis. –Co-circularity: the second fragment's angle is determined by the angle of the first and the angle of the connecting chord, θ2 = 2φ − θ1.
70
Computing Affinity 2φ − θ1 is the orientation the second fragment should have if it connects smoothly to the first fragment. Measure affinity by the difference from the actual orientation θ2. Example (use Gaussians so the affinity falls off smoothly): treat both edge fragments in the same way by summing the orientation discrepancy for both edgel locations, and give more distant edges less affinity.
73
Computing Affinity Why give smaller affinity to distant fragments? –Distant fragments likely to come from different objects –Many distant fragments, so some line up just by random chance
74
Computing Affinity For faster computation, fewer affinities: Set affinities to 0 beyond some distance T: (Often choose T small: e.g., less than 7 pixels, or just to include the nearest neighbors)
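A hedged Python sketch of such an affinity, combining the co-circularity discrepancy with a Gaussian falloff in orientation and distance and the cutoff T; the functional form and the parameter values are illustrative assumptions, not the exact weights used in the slides:

import numpy as np

def affinity_matrix(pos, theta, sigma_ang=0.3, sigma_d=3.0, T=7.0):
    # pos: (n, 2) edgel locations; theta: (n,) edgel orientations in radians.
    n = len(pos)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.hypot(d[0], d[1])
            if dist > T:
                continue                              # ignore distant edgels entirely
            phi = np.arctan2(d[1], d[0])              # chord angle with the x-axis
            # co-circularity: ideally theta_i + theta_j = 2 * phi
            discrepancy = np.abs(np.angle(np.exp(1j * (theta[i] + theta[j] - 2 * phi))))
            A[i, j] = np.exp(-discrepancy**2 / (2 * sigma_ang**2)) * \
                      np.exp(-dist**2 / (2 * sigma_d**2))
    return A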
75
Eigenvector Grouping
76
Reminder: Grouping with Affinity Matrix Task: partition image elements into groups G1, …, Gn of similar elements. Affinities should be: –high within a group: Aff(i, j) large when i and j belong to the same group –low between groups: Aff(i, j) small when i and j belong to different groups
81
Eigenvector grouping Eigenvector grouping is just improved popularity grouping! Idea: –To judge your own popularity, weight the votes from your friends according to their popularity. –Recompute using the better estimate of your friends' popularity (use weighted voting for them also). –Even better: recompute again using this improved estimate for your friends. –Iterating this, "weighted popularity" converges to an eigenvector!
82
Your popularity = sum of friends' votes: P(i) = Σj Aff(i, j) P(j), starting from P = (1, 1, …, 1). Usually we add normalization so the popularities always add to 1, but the normalization doesn't matter – the important thing is that P is proportional to this sum. Now use the new estimate of each friend's popularity to weight their vote, i.e., replace P by Aff · P (normalized). Again use the new estimate, and keep going: after N rounds, P is proportional to Aff^N (1, 1, …, 1). The result is an eigenvector of Aff! Why? For very large N, one more multiplication makes little difference, so Aff · P ≈ λ P – an eigenvector of Aff, by definition. Another view: multiplying by Aff many times, only the component E of (1 1 1 1 … 1) with the largest |Aff E| remains, so this is the eigenvector of Aff with the biggest eigenvalue.
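The iteration just described is power iteration; a minimal NumPy sketch, assuming a symmetric, non-negative affinity matrix Aff:

import numpy as np

def power_iteration(Aff, iters=100):
    # Repeated "weighted popularity" voting: p <- Aff @ p, then renormalize.
    # Converges to the eigenvector of Aff with the largest eigenvalue.
    p = np.ones(Aff.shape[0])
    for _ in range(iters):
        p = Aff @ p
        p /= np.linalg.norm(p)          # normalization only fixes the scale
    return p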
94
Biggest Eigenvector: Math The eigenvectors of Aff (or of any symmetric matrix) give a rotated coordinate system, so we can write any vector as a combination of eigenvectors. Applying Aff multiplies each eigenvector component by its eigenvalue, and after many applications the biggest eigenvalue dominates.
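Written out in notation of my own choosing (eigenvectors E_k, eigenvalues λ_1 > |λ_k| for k > 1, and coefficients c_k of the all-ones start vector):

\mathbf{1} = \sum_k c_k E_k, \qquad \mathrm{Aff}\, E_k = \lambda_k E_k

\mathrm{Aff}^{N}\mathbf{1} = \sum_k c_k \lambda_k^{N} E_k = \lambda_1^{N}\Big(c_1 E_1 + \sum_{k>1} c_k (\lambda_k/\lambda_1)^{N} E_k\Big)

For large N the terms with |\lambda_k/\lambda_1| < 1 die away, so the iterate points along E_1, the eigenvector with the biggest eigenvalue.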
98
Eigenvector Grouping: Another View Toy problem –Single foreground group. If the pixels in the foreground group come before the background pixels, the affinity matrix Aff is a block of large entries in its upper-left corner with (near-)zero entries everywhere else. Algorithm to find the group: –Find the unit vector V that makes |Aff V| largest. –A large entry in V indicates a group member. –The answer is approximately V ∝ (1, …, 1, 0, …, 0), with ones on the foreground elements, because every nonzero entry of V then contributes as much as possible to |Aff V|. –To compute V: find the 'largest' eigenvector of Aff (the one with the largest eigenvalue), since this gives the largest value of |Aff V|.
106
Eigenvector Grouping Toy problem –Single foreground group. (Toy) Algorithm summary: –Compute the largest eigenvector E of Aff. –Assign element i to the foreground if its entry E(i) is large. –Assign it to the background if E(i) is near zero.
107
Eigenvector Grouping Why does the largest eigenvector work? The leading eigenvector E of A has these properties: A E = λ E (by definition), where the eigenvalue λ (a number) is the largest eigenvalue. Fact: the maximum of |A V| over unit vectors V occurs at V = E (with maximum value λ). Interpretation: E is the "central direction" of all the rows of A (it has the largest dot products with the rows of A).
111
Aside: Math Fact: the maximum of |A V| over unit vectors V occurs at V = E (with maximum value λ); this is equivalent to maximizing the square |A V|². Derive from calculus: the max is at a stationary point, which occurs for AᵀA V = μ V, so at the max V is an eigenvector of AᵀA. This makes V an eigenvector of A. Why? A symmetric A has the same eigenvectors as AᵀA = A², since A² E = A(λ E) = λ² E.
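A compact version of this calculus argument, in my own notation with Lagrange multiplier μ:

\max_{\|V\|=1} \|A V\|^2 = \max_{\|V\|=1} V^\top A^\top A\, V

\nabla_V \big( V^\top A^\top A\, V - \mu\,(V^\top V - 1) \big) = 0 \;\Rightarrow\; A^\top A\, V = \mu V

For symmetric A with A E = \lambda E we get A^\top A\, E = \lambda^2 E, so the maximizer is the eigenvector of A whose eigenvalue has the largest magnitude, and the maximum of \|A V\| is that |\lambda|.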
117
Eigenvector Grouping More realistic problem –Single foreground group, but now the affinities are no longer exactly ones and zeros. The eigenvector still has roughly the same form: big entries on the foreground elements and small entries elsewhere. (E is a unit vector; to get the largest product with Aff, it is more efficient to have its big entries where they multiply the big entries of Aff.) We can therefore identify foreground items by their big entries of E.
121
Eigenvector Grouping Real algorithm summary –Compute the largest eigenvector E of Aff. –Assign element i to the foreground if E(i) > T. –Assign it to the background if E(i) ≤ T. (T is a threshold chosen by you.)
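A minimal NumPy sketch of this recipe (the sign fix and the default threshold are my own choices, not values from the slides):

import numpy as np

def eigenvector_grouping(Aff, T=0.1):
    # Leading eigenvector of the symmetric affinity matrix, thresholded at T.
    vals, vecs = np.linalg.eigh(Aff)          # eigenvalues in ascending order
    E = vecs[:, -1]                           # eigenvector with the largest eigenvalue
    E = E if E[np.abs(E).argmax()] > 0 else -E   # fix the sign so group entries are positive
    return np.flatnonzero(E > T)              # indices assigned to the foreground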
123
Eigenvector Grouping Even more realistic problem –Several groups. –Example: two groups, Group 1 and Group 2. Usually the leading eigenvector picks out one of the groups (the one with the biggest affinities). In this case we again expect E to have big entries on that group's elements and small entries elsewhere.
125
Technical Aside Why does the eigenvector pick out just one group? Toy example: a 2 x 2 "affinity matrix" whose diagonal entries a > b are the "big" values and whose affinities between the "groups" are 0. The leading eigenvector is (1, 0), with eigenvalue a. Intuition: E has to be a unit vector; to be most efficient in getting a large dot product, it puts all of its large values into the group with the largest affinities.
126
Eigenvector Grouping For several groups, the leading eigenvector (or singular vector) picks out the "leading" (most self-similar) group. (Remember: the eigenvector can be calculated by SVD on Aff.) To identify other groups, remove the points in the leading group and repeat the leading-eigenvector computation for the remaining points. Alternative: just use the non-leading eigenvectors/eigenvalues to pick out the non-leading groups.
128
(Alternative) Algorithm Summary Choose affinity measures and create the affinity matrix Aff. Compute the eigenvalues of Aff. Assign all elements to an active list L. Repeat for each eigenvalue, starting from the largest: –Compute the corresponding eigenvector. –Choose a threshold; assign elements with large eigenvector entries (and still in L) to a new group. –Remove the new group's elements from the active list L. –Stop if L is empty or the new group is too small. "Clean up" the groups (optional).
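A hedged NumPy sketch of this alternative algorithm; the threshold T and the minimum group size are illustrative parameters, not values given in the slides:

import numpy as np

def multi_group(Aff, T=0.1, min_size=3):
    # Assign elements to groups using successive eigenvectors of Aff.
    vals, vecs = np.linalg.eigh(Aff)
    order = np.argsort(vals)[::-1]                 # eigenvalues from largest down
    active = set(range(Aff.shape[0]))
    groups = []
    for k in order:
        E = vecs[:, k]
        E = E if E[np.abs(E).argmax()] > 0 else -E     # make the dominant entries positive
        group = [i for i in np.flatnonzero(E > T) if i in active]
        if not active or len(group) < min_size:        # stop if L empty or group too small
            break
        groups.append(group)
        active -= set(group)                           # remove the new group from L
    return groups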
129
Eigenvector Grouping Problems The method picks the "strongest" group; when there are several groups of similar strength, it can get confused. The eigenvector algorithm described so far is not used much currently.
130
Eigenvectors + Graph Cut Methods
131
Graph Cut Methods Represent the image as an undirected graph G = (V, E); each graph edge has a weight value w(e). Example: –vertex nodes = edgels i –graph edges go between edgels i, j –a graph edge has weight w(i, j) = Aff(i, j) Task: partition V into V1, …, Vn such that similarity is high within groups and low between groups.
132
Issues What is a good partition? How can you compute such a partition efficiently?
133
Graph Cut G = (V, E); sets A and B are a disjoint partition of V. A measure of dissimilarity between the two groups is the cut: the sum of the weights of the links between A and B (the links that get cut).
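In symbols, using standard graph-partitioning notation (these are the usual definitions, reconstructed here rather than copied from the slides; assoc is used by the normalized cut below):

\mathrm{cut}(A,B) = \sum_{u \in A,\; v \in B} w(u,v), \qquad \mathrm{assoc}(A,V) = \sum_{u \in A,\; t \in V} w(u,t)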
137
The temptation Cut is a measure of disassociation, so minimizing the cut gives the partition with maximum disassociation. Efficient poly-time algorithms exist.
139
The problem with MinCut It usually outputs segments that are too small! You can get a small cut just by cutting fewer links.
140
The Normalized Cut (Shi + Malik)
141
Given a partition (A, B) of the vertex set V, Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V) measures the difference between the two groups, normalized by how similar they are within themselves. If A is small, assoc(A, V) is small and Ncut is large (a bad partition). Problem: find the partition minimizing Ncut.
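A small self-contained NumPy sketch of this quantity, with W the weight matrix and A, B index lists for the two sides of the partition (formula as in Shi & Malik):

import numpy as np

def ncut(W, A, B):
    # Normalized cut: the cut normalized by each side's total connection to all vertices.
    V = np.arange(W.shape[0])
    cut_AB = W[np.ix_(A, B)].sum()
    assoc_A = W[np.ix_(A, V)].sum()
    assoc_B = W[np.ix_(B, V)].sum()
    return cut_AB / assoc_A + cut_AB / assoc_B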
142
Matrix formulation Definitions: D is an n x n diagonal matrix with entries d(i) = Σj w(i, j) on its diagonal; W is the n x n symmetric matrix with W(i, j) = w(i, j).
143
Normalized cuts Transform the problem into one that can be approximated by eigenvector methods. After some algebra, the Ncut problem becomes: minimize yᵀ(D − W)y / (yᵀD y), subject to the constraints yᵀD 1 = 0 and y(i) ∈ {1, −b} for some constant b. With the discreteness constraint, this problem is NP-Complete!
145
Normalized cuts Drop the discreteness constraint to make the problem easier, solvable by eigenvectors: minimize yᵀ(D − W)y / (yᵀD y), subject to the constraint yᵀD 1 = 0. The solution y is the second least eigenvector (the eigenvector for the second smallest eigenvalue – because of the constraint!) of the generalized eigensystem (D − W) y = λ D y, which can be computed from the matrix M = D^(-1/2) (D − W) D^(-1/2) – easy to form, since D is diagonal. MATLAB: [U,S,V] = svd(M); y = V(:,end-1);
150
Normalized Cuts: algorithm Define the affinities W(i, j). Compute the matrices D and M = D^(-1/2) (D − W) D^(-1/2). [U,S,V] = svd(M); y = V(:,end-1); Threshold y at its mean to split the elements into two groups (you could also use a different threshold besides the mean of y).
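A NumPy version of the same recipe – a sketch, using eigh in place of the slides' MATLAB svd and thresholding y at its mean; a full Shi & Malik implementation would additionally rescale the eigenvector by D^(-1/2):

import numpy as np

def normalized_cut(W):
    # Two-way normalized-cut partition from affinity matrix W.
    d = W.sum(axis=1)
    D = np.diag(d)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    M = D_inv_sqrt @ (D - W) @ D_inv_sqrt      # symmetric matrix whose eigenvectors we need
    vals, vecs = np.linalg.eigh(M)             # eigenvalues in ascending order
    y = vecs[:, 1]                             # second smallest eigenvector
    t = y.mean()                               # threshold y at its mean
    A = np.flatnonzero(y > t)
    B = np.flatnonzero(y <= t)
    return A, B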
151
Normalized Cuts A very powerful algorithm, widely used. Results shown later.