1
Chapter 5 Segmentation 周立晴 d99922027
2
Movie Special effects
3
Segmentation (1/2) 5.1 Active Contours 5.2 Split and Merge 5.3 Mean Shift and Mode Finding 5.4 Normalized Cuts 5.5 Graph Cuts and Energy-based Methods
4
Segmentation (2/2)
5
5.1 Active Contours Snakes Scissors Level Sets
6
5.1.1 Snakes Snakes are a two-dimensional generalization of the 1D energy-minimizing splines. One may visualize a snake as a rubber band of arbitrary shape that deforms over time, trying to get as close as possible to the object contour.
7
Snakes
8
Snakes minimize an energy functional defined on a parametric curve; the minimizing curve satisfies a PDE (Euler-Lagrange) constraint.
9
Snakes Internal spline energy: E_int = ∫ α(s) ∥v_s(s)∥² + β(s) ∥v_ss(s)∥² ds o s: arc length o v_s, v_ss: first-order and second-order derivatives of the curve (the second-order term measures contour curvature) o α, β: first-order and second-order weighting functions Discretized form of internal spline energy: E_int = Σ_i α(i) ∥v_{i+1} − v_i∥²/h² + β(i) ∥v_{i+1} − 2v_i + v_{i−1}∥²/h⁴
10
Snakes External spline energy: E_ext = w_line E_line + w_edge E_edge + w_term E_term o Line term: attracted to dark ridges o Edge term: attracted to strong gradients o Termination term: attracted to line terminations In practice, most systems only use the edge term, which can be estimated directly from the image gradient (a minimal sketch follows).
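Below is a minimal sketch of estimating the edge-based external energy from image gradients, assuming a grayscale image stored as a NumPy array; the smoothing scale and the function name edge_energy are illustrative, not from the chapter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_energy(image, sigma=2.0):
    """Edge-based external energy: strong gradients give low (attractive) energy."""
    smoothed = gaussian_filter(image.astype(float), sigma)  # suppress noise before differencing
    gy, gx = np.gradient(smoothed)                          # finite-difference image gradients
    return -(gx ** 2 + gy ** 2)                             # E_edge = -||grad I||^2
```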
11
Snakes
12
Snakes Edge functional o The snake is attracted to contours with large image gradients.
13
Snakes Termination Function o Use the curvature of level lines in a slightly smoothed image.
14
Snakes User-placed constraints can also be added, e.g. spring forces E_spring = Σ_i k_i ∥f(i) − d(i)∥² o f: the snake points o d: anchor points
17
Snakes
18
Snakes The above Euler equations can be written in matrix form as a banded linear system, (A + γI) f_t = γ f_{t−1} − ∇E_ext(f_{t−1}), which can be solved efficiently at each time step.
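A minimal sketch of the resulting semi-implicit iteration for a closed snake with constant α and β; the function names and parameters are illustrative, and a real implementation would resample points and use a banded solver rather than a dense one.

```python
import numpy as np

def snake_step(pts, ext_grad, alpha=0.1, beta=0.1, gamma=1.0):
    """One semi-implicit snake update for a closed contour.

    pts:      (N, 2) array of current control points
    ext_grad: function mapping (N, 2) points -> (N, 2) gradient of the external
              energy at those points (e.g. sampled from -||grad I||^2)
    """
    n = len(pts)
    # Pentadiagonal (circulant) internal-energy matrix for constant alpha, beta.
    a = np.zeros(n)
    a[0] = 2 * alpha + 6 * beta
    a[1] = a[-1] = -alpha - 4 * beta
    a[2] = a[-2] = beta
    A = np.stack([np.roll(a, i) for i in range(n)])
    # Solve (A + gamma I) f_t = gamma f_{t-1} - grad E_ext(f_{t-1}).
    rhs = gamma * pts - ext_grad(pts)
    return np.linalg.solve(A + gamma * np.eye(n), rhs)
```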
19
Snakes
20
Snakes
21
Snakes Because regular snakes have a tendency to shrink, it is usually better to initialize them by drawing the snake outside the object of interest to be tracked.
22
B-spline Approximations Snakes sometimes exhibit too many degrees of freedom, making it more likely that they can get trapped in local minima during their evolution. Use B-spline approximations to control the snake with fewer degrees of freedom.
23
Shape Prior
24
5.1.2 Dynamic snakes and CONDENSATION In many applications of active contours, the object of interest is tracked from frame to frame as it deforms and evolves. In this case, it makes sense to use estimates from the previous frame to predict and constrain the new estimates.
25
Elastic Nets and Slippery Springs Application to the TSP (Traveling Salesman Problem):
26
Elastic Nets and Slippery Springs (cont’d) Probabilistic interpretation: o i: each snake node o j: each city o σ: standard deviation of the Gaussian o d_ij: Euclidean distance between a tour point f(i) and a city location d(j)
27
Elastic Nets and Slippery Springs (cont’d) The tour f(s) is initialized as a small circle around the mean of the city points and σ is progressively lowered. Slippery spring: this allows the association between constraints (cities) and curve (tour) points to evolve over time.
28
Snakes
29
5.1.3 Scissors
30
Scissors can draw a better curve (an optimal curve path) that clings to high-contrast edges as the user draws a rough outline. Semi-automatic segmentation. Algorithm: o Step 1: Associate low costs with edges that are likely to be boundary elements. o Step 2: Continuously recompute the lowest-cost path between the starting point and the current mouse location using Dijkstra's algorithm.
31
Scissors
32
Scissors
33
Scissors
34
Scissors
35
Scissors
36
Scissors Cut along the gradient direction -> minimize angle between cut direction and gradient direction
37
Scissors
38
Scissors Solve for the optimal path using Dijkstra's algorithm (a minimal sketch follows).
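A minimal sketch of Dijkstra's algorithm on a graph stored as adjacency lists of (neighbor, cost) pairs; the function name and graph representation are illustrative, not the data structures used in the intelligent scissors paper.

```python
import heapq

def dijkstra(graph, seed):
    """Shortest path costs from `seed` to every reachable node.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    Returns (cost, parent) dicts; following `parent` back from any node
    gives the minimum-cost path to the seed.
    """
    cost = {seed: 0}
    parent = {seed: None}
    heap = [(0, seed)]
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in graph.get(u, []):
            if c + w < cost.get(v, float("inf")):
                cost[v] = c + w
                parent[v] = u
                heapq.heappush(heap, (c + w, v))
    return cost, parent
```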
39
Scissors
40
Scissors Worked example of Dijkstra's algorithm (figure, steps (a)-(e)): starting from the seed s with cost 0 and every other node at ∞, the unvisited node with the lowest tentative cost is expanded at each step and its neighbors' costs are updated, until every node carries its minimum cost from s.
43
Scissors
44
5.1.4 Level Sets If active contours are represented by parametric curves of the form f(s), then when the shape changes dramatically (e.g., the topology changes), curve reparameterization may also be required.
45
5.1.4 Level Sets
47
Level Sets
52
An example is the geodesic active contour, which evolves the embedding function by ∂φ/∂t = ∥∇φ∥ div( g(I) ∇φ/∥∇φ∥ ) = g(I) ∥∇φ∥ div( ∇φ/∥∇φ∥ ) + ∇g(I)·∇φ o g(I): snake edge potential, a decreasing function of the (smoothed) image gradient magnitude o φ: signed distance function away from the curve o div: divergence operator
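A naive explicit update sketch for this evolution, assuming a precomputed edge-stopping function g(I) on the same grid as φ and ignoring reinitialization and time-step (stability) constraints; names are illustrative.

```python
import numpy as np

def geodesic_contour_step(phi, g, dt=0.1, eps=1e-8):
    """One explicit update of the geodesic active contour level set:
        d(phi)/dt = |grad(phi)| * div( g * grad(phi) / |grad(phi)| )
    phi: signed distance function (2D array); g: edge-stopping function g(I)."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps       # |grad(phi)|, avoid divide-by-zero
    # Flux g * grad(phi) / |grad(phi)| and its divergence.
    fy, fx = g * gy / norm, g * gx / norm
    div = np.gradient(fy, axis=0) + np.gradient(fx, axis=1)
    return phi + dt * norm * div
```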
53
Level Sets
55
Level Sets
57
According to g(I), the first term straightens the curve and the second term encourages the curve to migrate towards minima of g(I). Level sets are still susceptible to local minima. An alternative approach is to use energy measures computed inside and outside the segmented regions.
58
Level Sets
59
5.2 Split and Merge Watershed Region splitting and merging Graph-based Segmentation k-means clustering Mean Shift
60
5.2.1 Watershed An efficient way to compute watershed regions (catchment basins) is to start flooding the landscape at all of the local minima and to label ridges wherever differently evolving components meet. Watershed segmentation is often used with user-supplied markers corresponding to the centers of the different desired components (a simplified flooding sketch follows).
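A simplified marker-based flooding sketch using a priority queue (priority flood); it labels every pixel with the marker that reaches it first and does not explicitly mark ridge pixels, so it is an illustration rather than a full watershed implementation.

```python
import heapq
import numpy as np

def marker_watershed(image, markers):
    """Simplified marker-based watershed by priority flooding.

    image:   2D array (e.g. gradient magnitude); flooding proceeds from low values.
    markers: 2D int array, 0 = unlabeled, >0 = user-supplied component labels.
    Returns a label image covering every pixel.
    """
    labels = markers.copy()
    h, w = image.shape
    heap = []
    for y, x in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (image[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]        # flood into the unlabeled neighbor
                heapq.heappush(heap, (image[ny, nx], ny, nx))
    return labels
```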
61
Watershed
62
5.2.2 Region Splitting (Divisive Clustering) Step 1: Compute a histogram for the whole image. Step 2: Find a threshold that best separates the large peaks in the histogram. Step 3: Repeat within each resulting region until the regions are either fairly uniform or below a certain size.
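One common way to pick such a threshold is Otsu's method, which maximizes between-class variance; the chapter does not prescribe a specific rule, so this sketch is only one plausible choice.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Pick a threshold maximizing between-class variance (Otsu), one way of
    separating the two dominant peaks of a histogram."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0        # mean of the lower class
        m1 = (hist[i:] * centers[i:]).sum() / w1        # mean of the upper class
        var = w0 * w1 * (m0 - m1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t
```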
63
5.2.3 Region Merging (Agglomerative Clustering) Various criteria for merging regions: o Relative boundary lengths and the strength of the visible edges at these boundaries o Distance between closest points or farthest points o Average color difference, or merging regions that are too small
64
5.2.4 Graph-based Segmentation
65
Graph-based Segmentation This algorithm uses relative dissimilarities between regions to determine which ones should be merged. Internal difference of a region R: Int(R) = max_{e ∈ MST(R)} w(e) o MST(R): minimum spanning tree of R o w(e): intensity difference across an edge e in MST(R)
66
Graph-based Segmentation Difference between two adjacent regions: Dif(R1, R2) = min over edges (v_i, v_j) with v_i ∈ R1 and v_j ∈ R2 of w((v_i, v_j)) Minimum internal difference of these two regions: MInt(R1, R2) = min( Int(R1) + τ(R1), Int(R2) + τ(R2) ) o τ(R): heuristic region penalty, e.g. τ(R) = k/|R| for a constant k Two adjacent regions are merged when Dif(R1, R2) ≤ MInt(R1, R2); a sketch of the resulting merge loop follows.
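A minimal union-find sketch of the resulting greedy merge loop, in the spirit of Felzenszwalb and Huttenlocher's algorithm; the parameter k and the edge-list representation are illustrative.

```python
import numpy as np

def segment_graph(num_vertices, edges, k=300.0):
    """Greedy graph-based merging using the predicate above, with tau(R) = k / |R|.

    edges: list of (weight, u, v) tuples for the pixel graph.
    Returns an array mapping each vertex to its component representative.
    """
    parent = list(range(num_vertices))
    size = [1] * num_vertices
    internal = [0.0] * num_vertices          # Int(R): largest MST edge inside R

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    for w, u, v in sorted(edges):            # process edges by increasing weight
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        # MInt(R1, R2) = min(Int(R1) + tau(R1), Int(R2) + tau(R2))
        mint = min(internal[ru] + k / size[ru], internal[rv] + k / size[rv])
        if w <= mint:                        # merge when the joining edge is weak enough
            parent[rv] = ru
            size[ru] += size[rv]
            internal[ru] = max(internal[ru], internal[rv], w)
    return np.array([find(i) for i in range(num_vertices)])
```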
67
Graph-based Segmentation
70
5.2.5 Probabilistic Aggregation Gray level similarity:
71
Probabilistic Aggregation
72
Definition of strong coupling: a node i is strongly coupled to a subset C of V if Σ_{j ∈ C} a_ij / Σ_j a_ij > φ o C: a subset of V o a_ij: coupling (affinity) between nodes i and j o φ: usually set to 0.2
73
Probabilistic Aggregation
75
5.3 Mean Shift and Mode Finding
76
5.3.1 K-means K-means: o Step 1: Guess centers. Given the number of clusters k it is supposed to find, choose k samples as the initial cluster centers Y. o Step 2: Given the centers, find groups. With Y fixed, assign each pixel to its nearest center, giving the clustering U with the smallest squared error E_min. o Step 3: Given the groups, find new centers. With U fixed, recompute each center as the mean of its cluster and compute the new squared error E_min'. o Step 4: Repeat until the centers do not change. If E_min ≠ E_min', use U to find new cluster centers Y', go to Step 2 and find a new clustering U', iteratively; if E_min = E_min', stop and keep the final clusters.
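A minimal NumPy sketch of these four steps; the initialization and tie-breaking details are simplified, and the function name is illustrative.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: X is an (N, D) array of samples; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]      # Step 1: guess centers
    for _ in range(iters):
        # Step 2: given centers, assign each sample to the nearest one.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: given groups, recompute each center as its group mean.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):               # Step 4: stop when unchanged
            break
        centers = new_centers
    return centers, labels
```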
77
5.3.2 Mean Shift Mean shift segmentation can be seen as the inverse of the watershed algorithm: find the peaks (modes) of the density and then expand regions around them.
78
Mean Shift Step 1: Use kernel density estimation to estimate the density function from a sparse set of samples: f(x) = Σ_i k( ∥x − x_i∥²/h² ) o f(x): density function o x_i: input samples o k(r): kernel function or Parzen window o h: width of the kernel
79
Mean Shift
80
Step 2: The location of y_k at each iteration is given by the mean shift update y_{k+1} = Σ_i x_i g( ∥(y_k − x_i)/h∥² ) / Σ_i g( ∥(y_k − x_i)/h∥² ), with g(r) = −k′(r). Repeat Step 2 until convergence or after a finite number of steps. Step 3: The remaining points can then be classified based on the nearest evolution path.
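A minimal sketch of this mode-seeking loop with a Gaussian kernel, run from every sample; the bandwidth h and the function name are illustrative, and a practical implementation would prune redundant trajectories.

```python
import numpy as np

def mean_shift_modes(X, h=1.0, iters=50, tol=1e-3):
    """Move every sample uphill with the mean shift update
        y_{k+1} = sum_i x_i g(||(y_k - x_i)/h||^2) / sum_i g(...)
    using a Gaussian kernel, so g(r) is proportional to exp(-r / 2)."""
    Y = X.astype(float).copy()
    for _ in range(iters):
        shifted = np.empty_like(Y)
        for n, y in enumerate(Y):
            r = np.sum(((y - X) / h) ** 2, axis=1)
            w = np.exp(-0.5 * r)                      # Gaussian kernel weights
            shifted[n] = (w[:, None] * X).sum(axis=0) / w.sum()
        done = np.max(np.abs(shifted - Y)) < tol      # all points (nearly) converged
        Y = shifted
        if done:
            break
    return Y                                          # points collapse near the modes
```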
81
Mean Shift
82
Commonly used kernels: o Epanechnikov kernel (converges in a finite number of steps) o Gaussian (normal) kernel (slower, but gives better results)
83
Mean Shift Joint domain: use the spatial domain and the range domain together to segment color images. Kernel of the joint domain (five-dimensional): K(x) = C/(h_s² h_r³) k( ∥x_s/h_s∥² ) k( ∥x_r/h_r∥² ) o x_r: (L*, u*, v*) in the range domain o x_s: (x, y) in the spatial domain o h_r, h_s: color and spatial widths of the kernel
84
Mean Shift o M: minimum region size; regions containing fewer than M pixels are eliminated
85
Intuitive Description (animation spanning several slides): given a distribution of identical billiard balls, place a region of interest, compute its center of mass, and translate the region along the mean shift vector toward that center; repeating this moves the window uphill until it settles. Objective: find the densest region.
92
5.4 Normalized Cuts Normalized cuts examine the affinities between nearby pixels and try to separate groups that are connected by weak affinities. Unlike minimum cut, which tends to isolate single pixels, the cut cost is normalized by the total association of each group: Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V).
93
Normalized Cuts
94
5.4 Normalized Cuts Pixel-wise affinity weight for pixels within a radius ∥x_i − x_j∥ < r (and zero otherwise): w_ij = exp( −∥F_i − F_j∥²/σ_F² − ∥x_i − x_j∥²/σ_s² ) o F_i, F_j: feature vectors that consist of intensities, colors, or oriented filter histograms o x_i, x_j: pixel locations
95
Normalized Cuts
96
But computing the optimal normalized cut is NP-complete. The following is a faster, approximate method. Minimizing the normalized cut can be expressed as minimizing a Rayleigh quotient, min_y y^T (D − W) y / (y^T D y) subject to y^T D 1 = 0: o x is the indicator vector where x_i = +1 iff i ∈ A and x_i = −1 iff i ∈ B. o y = ((1 + x) − b(1 − x)) / 2 o The global minimum of a Rayleigh quotient can be found by solving an eigenvalue problem.
97
Normalized Cuts Worked example (figure): a small graph whose nodes 1-7 are split into groups A and B; the indicator vector x has x_i = +1 for nodes in A and x_i = −1 for nodes in B, and y = ((1 + x) − b(1 − x)) / 2 then takes the value 1 on the nodes of A and −b on the nodes of B.
98
Normalized Cuts o x is the indicator vector where x_i = +1 iff i ∈ A and x_i = −1 iff i ∈ B. o y = ((1 + x) − b(1 − x)) / 2 o W: weight matrix [w_ij] o D: diagonal matrix whose diagonal entries are the corresponding row sums of W It is equivalent to solving a regular eigenvalue problem: o N = D^{−1/2} W D^{−1/2}, called the normalized affinity matrix o z = D^{1/2} y A sketch of the resulting spectral bipartition follows.
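A minimal dense-matrix sketch of the spectral relaxation, assuming a small symmetric affinity matrix W; splitting the second eigenvector at zero is one simple choice (the threshold can also be searched to minimize the actual Ncut value).

```python
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    """Spectral relaxation of the normalized cut for a symmetric affinity matrix W:
    solve (D - W) y = lambda * D y and split on the eigenvector belonging to the
    second-smallest eigenvalue."""
    d = W.sum(axis=1)
    D = np.diag(d)
    vals, vecs = eigh(D - W, D)   # generalized symmetric eigenproblem, ascending eigenvalues
    y = vecs[:, 1]                # skip the trivial constant eigenvector (eigenvalue 0)
    return y > 0                  # sign split into groups A and B
```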
99
Normalized Cuts
101
5.5 Graph Cuts
102
Graph Cuts
107
Graph Cuts (min-cut): the minimum cut is the same as the maximum flow (max-flow/min-cut theorem). The worked example in the figures repeatedly pushes flow along augmenting s-t paths of a small capacitated graph until no augmenting path remains; the saturated edges then separate the source side from the sink side and form the minimum cut.
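A minimal Edmonds-Karp sketch of max-flow (and hence of the min-cut value); the edge-list representation is illustrative, and practical graph-cut segmentation uses specialized solvers such as Boykov-Kolmogorov.

```python
from collections import defaultdict, deque

def max_flow(edges, s, t):
    """Edmonds-Karp max-flow; by the max-flow/min-cut theorem the returned value
    equals the capacity of the minimum s-t cut.

    edges: iterable of (u, v, capacity) tuples for directed edges.
    """
    residual = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        residual[u][v] += c                      # reverse residual edges start at 0
    total = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total                         # no augmenting path left: done
        # Recover the path, find its bottleneck capacity, and push that much flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        total += bottleneck
```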
116
Review 5.1 Active Contours o Snakes o Scissors o Level sets 5.2 Split and Merge o Watershed o Region splitting (histogram) o Region merging o Graph-based o K-means 5.3 Mean Shift and Mode Finding 5.4 Normalized Cuts 5.5 Graph Cuts and Energy-based Methods
117
Representations Image o Surface function in 3D o Graph o Set of values (histogram) Segments o Line/path o Surface function in 3D o Groups of Nodes o Clusters
118
Segmentation Criteria Object edge o Gradient, Laplacian Location o Connected shape o Close proximity Shape of segment o Derivative of the line Similarity within segment o K-means, mean shift o Weights for normalized cuts, graph cuts Difference from the external area User-defined criteria
119
Optimization of Criteria PDE: gradient descent / calculus of variations Dijkstra's algorithm K-means Mean shift Rayleigh quotient Min-cut
120
Evaluation The Berkeley Segmentation Dataset
121
Medical Segmentation
122
END