CS 691 Computational Photography
Instructor: Gianfranco Doretto
Cutting Images
This Lecture: Finding Seams and Boundaries
Segmentation
This Lecture: Finding Seams and Boundaries
Retargeting
Demo: http://swieskowski.net/carve/
This Lecture: Finding Seams and Boundaries
Stitching
This Lecture: Finding Seams and Boundaries
Fundamental concept: the image as a graph
– Intelligent Scissors: good boundary = short path
– Graph cuts: good region has low cutting cost
Semi-automated segmentation
The user provides an imprecise and incomplete specification of the region; your algorithm has to read their mind.
Key problems:
1. What groups of pixels form cohesive regions?
2. What pixels are likely to be on the boundary of regions?
3. Which region is the user trying to select?
What makes a good region?
– Contains a small range of color/texture
– Looks different from the background
– Compact
What makes a good boundary?
– High gradient along the boundary
– Gradient in the right direction
– Smooth
The Image as a Graph
– Node: pixel
– Edge: cost of a path or cut between two pixels
Intelligent Scissors
Mortensen and Barrett (SIGGRAPH 1995)
Intelligent Scissors
Formulation: find a good boundary between seed points
Challenges:
– Minimize interaction time
– Define what makes a good boundary
– Efficiently find it
Intelligent Scissors
A good image boundary has a short path through the graph.
Mortensen and Barrett (SIGGRAPH 1995)
[Figure: weighted pixel graph with a start node, an end node, and per-edge costs; the boundary follows the cheapest path from start to end.]
Intelligent Scissors: method
1. Define boundary cost between neighboring pixels
2. User specifies a starting point (seed)
3. Compute lowest cost from seed to each other pixel
4. Get new seed, get path between seeds, repeat
Intelligent Scissors: method
1. Define boundary cost between neighboring pixels (a sketch of such a cost follows below)
   a) Lower if an edge is present (e.g., from edge(im, 'canny'))
   b) Lower if the gradient is strong
   c) Lower if the gradient is in the direction of the boundary
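As a rough illustration of step 1, here is a minimal sketch (not from the original slides) that builds a per-pixel boundary cost from the gradient magnitude, assuming a grayscale float image and SciPy's Sobel filters; criterion (c), the gradient-direction term, is omitted for brevity.

import numpy as np
from scipy import ndimage

def boundary_cost(gray):
    # Hypothetical cost map: strong gradients (likely edges) are cheap to
    # follow, so the shortest path is attracted to them (criteria a-b above).
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    grad_mag = np.hypot(gx, gy)
    grad_mag = grad_mag / (grad_mag.max() + 1e-8)   # normalize to [0, 1]
    return 1.0 - grad_mag                           # high gradient -> low cost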
Gradients, Edges, and Path Cost
[Figure: the gradient magnitude, the edge image, and the resulting path cost.]
Intelligent Scissors: method
1. Define boundary cost between neighboring pixels
2. User specifies a starting point (seed)
   – Snapping
Intelligent Scissors: method
1. Define boundary cost between neighboring pixels
2. User specifies a starting point (seed)
3. Compute lowest cost from seed to each other pixel
   – Dijkstra's shortest path algorithm
Dijkstra's shortest path algorithm
Initialize, given seed s:
  Compute cost2(q, r)   % cost for a boundary step from pixel q to neighboring pixel r
  cost(s) = 0           % total cost from the seed to this pixel
  A = {s}               % set of pixels to be expanded
  E = { }               % set of expanded pixels
  P(q)                  % pointer to the pixel that leads to q
Loop while A is not empty:
  1. q = pixel in A with lowest cost; move q from A to E
  2. for each pixel r in the neighborhood of q that is not in E
     a) cost_tmp = cost(q) + cost2(q, r)
     b) if (r is not in A) or (cost_tmp < cost(r))
        i.   cost(r) = cost_tmp
        ii.  P(r) = q
        iii. add r to A
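A minimal Python sketch of the loop above, under the simplifying assumption that the boundary cost of stepping from q to a 4-connected neighbor r depends only on r (cost2(q, r) is taken as cost_map[r]); the active set A is kept as a priority queue.

import heapq
import numpy as np

def dijkstra_from_seed(cost_map, seed):
    # seed is a (row, col) tuple; cost_map is a 2D float array.
    H, W = cost_map.shape
    dist = np.full((H, W), np.inf)       # cost(.) from the slide
    parent = {}                          # P(.): pointer to the pixel that leads here
    dist[seed] = 0.0
    heap = [(0.0, seed)]                 # the set A, ordered by current cost
    while heap:
        d, q = heapq.heappop(heap)
        if d > dist[q]:                  # stale entry: q was already expanded (in E)
            continue
        y, x = q
        for r in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= r[0] < H and 0 <= r[1] < W:
                nd = d + cost_map[r]     # cost(q) + cost2(q, r), simplified
                if nd < dist[r]:
                    dist[r] = nd
                    parent[r] = q
                    heapq.heappush(heap, (nd, r))
    return dist, parent

Tracing the parent pointers back from any pixel to the seed recovers the lowest-cost boundary, as used in step 4 of the method.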
Intelligent Scissors: method
1. Define boundary cost between neighboring pixels
2. User specifies a starting point (seed)
3. Compute lowest cost from seed to each other pixel
4. Get new seed, get path between seeds, repeat
Intelligent Scissors: improving interaction
1. Snap when placing the first seed
2. Automatically adjust as the user drags
3. Freeze stable boundary points to make new seeds
Where will intelligent scissors work well, or have problems?
Grab cuts and graph cuts
[Figure: user input and result for three interactive tools. Source: Rother]
– Magic Wand (198?): user input specifies regions
– Intelligent Scissors, Mortensen and Barrett (1995): user input specifies the boundary
– GrabCut: user input specifies regions & boundary
Segmentation with graph cuts
[Figure: pixel graph with a source node (label 0) and a sink node (label 1); terminal edges carry the cost of assigning a pixel to label 0 or label 1, and edges between pixels carry the cost of splitting neighboring nodes.]
Interactive Graph Cuts [Boykov, Jolly ICCV'01]
[Figure: image with foreground (source) and background (sink) constraints, and the resulting min cut.]
– Cut: a set of edges separating the source and the sink; its energy is the total cost of those edges
– Min cut: the globally minimal energy, computable in polynomial time
GrabCut Colour Model
Gaussian Mixture Model (typically 5-8 components)
[Figure: R-G scatter plots of foreground & background pixels; separate foreground and background colour models are refined by iterated graph cut. Source: Rother]
Graph cuts segmentation
1. Define the graph
   – usually 4-connected or 8-connected
2. Set weights to foreground/background
   – color histogram or mixture of Gaussians for the background and foreground
3. Set weights for edges between pixels
4. Apply a min-cut/max-flow algorithm
5. Return to 2, using the current labels to recompute the foreground and background models
(A sketch of steps 1-4 follows below.)
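Here is an illustrative (and deliberately slow) sketch of steps 1-4, assuming precomputed per-pixel foreground/background costs and a single pairwise weight per pixel; it uses networkx's minimum_cut rather than the specialized max-flow code used in practice.

import networkx as nx
import numpy as np

def segment_min_cut(cost_fg, cost_bg, pairwise):
    # cost_fg[i, j]: cost of labeling pixel (i, j) foreground
    # cost_bg[i, j]: cost of labeling it background
    # pairwise[i, j]: cost of cutting between (i, j) and its right/down neighbor
    H, W = cost_fg.shape
    G = nx.DiGraph()
    S, T = 's', 't'
    for i in range(H):
        for j in range(W):
            p = (i, j)
            # Terminal edges: if p ends up on the source (foreground) side, the
            # edge p -> T is cut and we pay the foreground cost, and vice versa.
            G.add_edge(S, p, capacity=float(cost_bg[i, j]))
            G.add_edge(p, T, capacity=float(cost_fg[i, j]))
            for q in ((i + 1, j), (i, j + 1)):          # 4-connected grid
                if q[0] < H and q[1] < W:
                    w = float(pairwise[i, j])
                    G.add_edge(p, q, capacity=w)
                    G.add_edge(q, p, capacity=w)
    cut_value, (source_side, sink_side) = nx.minimum_cut(G, S, T)
    labels = np.zeros((H, W), dtype=bool)
    for p in source_side:
        if p not in (S, T):
            labels[p] = True        # pixels on the source side -> foreground
    return labels

Step 5 would then refit the colour models from labels and repeat.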
What is easy or hard about these cases for graph-cut-based segmentation?
Easier examples
[Figure: example images and extracted foregrounds. Source: GrabCut – Interactive Foreground Extraction]
More difficult examples
[Figure: harder cases: camouflage & low contrast, and fine structure; panels show the initial rectangle and the initial result. Source: GrabCut – Interactive Foreground Extraction]
Lazy Snapping (Li et al., SIGGRAPH 2004)
Limitations of Graph Cuts
– Requires associative graphs: connected nodes should prefer to have the same label
– Optimal only for binary problems
Other applications: Seam Carving
Seam Carving – Avidan and Shamir (2007)
Demo: http://swieskowski.net/carve/
Other applications: Seam Carving
Find the shortest path from top to bottom (or left to right), where cost = gradient magnitude (a sketch follows below).
Seam Carving – Avidan and Shamir (2007)
Demo: http://swieskowski.net/carve/
Video: http://www.youtube.com/watch?v=6NcIJXTlugc
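A minimal dynamic-programming sketch of the vertical seam described above; energy is assumed to be a 2D array of gradient magnitudes.

import numpy as np

def vertical_seam(energy):
    # M[i, j] = minimum cost of any top-to-bottom path ending at pixel (i, j)
    energy = np.asarray(energy, dtype=float)
    H, W = energy.shape
    M = energy.copy()
    for i in range(1, H):
        left  = np.r_[np.inf, M[i - 1, :-1]]
        up    = M[i - 1]
        right = np.r_[M[i - 1, 1:], np.inf]
        M[i] += np.minimum(np.minimum(left, up), right)
    # Backtrack from the cheapest pixel in the bottom row.
    seam = [int(np.argmin(M[-1]))]
    for i in range(H - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 1, W - 1)
        seam.append(lo + int(np.argmin(M[i, lo:hi + 1])))
    return seam[::-1]        # seam[i] = column of the seam in row i

Removing that column from every row shrinks the image by one pixel in width; repeating the process retargets the image, as in the demo above.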
Dynamic Programming
Well-known algorithm design techniques:
– Divide-and-conquer algorithms
Another strategy for designing algorithms is dynamic programming.
– Used when the problem breaks down into recurring small subproblems
Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions; each solution has a value, and we wish to find a solution with the optimal value.
Dynamic Programming
Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms. By "inefficient", we mean that the same recursive call is made over and over.
If the same subproblem is solved several times, we can use a table to store the result of the subproblem the first time it is computed and never have to recompute it again.
Dynamic programming is applicable when the subproblems are dependent, that is, when subproblems share sub-subproblems. "Programming" refers to a tabular method.
Elements of Dynamic Programming (DP)
DP is used to solve problems with the following characteristics:
Simple subproblems
– We should be able to break the original problem into smaller subproblems that have the same structure
Optimal substructure
– The optimal solution to the problem contains within it optimal solutions to its subproblems
Overlapping subproblems
– There are places where we solve the same subproblem more than once
Steps to Designing a Dynamic Programming Algorithm
1. Characterize the optimal substructure
2. Recursively define the value of an optimal solution
3. Compute the value bottom-up
4. (If needed) Construct an optimal solution
Example: Matrix-chain Multiplication
Suppose we have a sequence (chain) A_1, A_2, …, A_n of n matrices to be multiplied
– That is, we want to compute the product A_1 A_2 … A_n
There are many possible ways (parenthesizations) to compute the product
Matrix-chain Multiplication (contd.)
Example: consider the chain A_1, A_2, A_3, A_4 of 4 matrices
– Let us compute the product A_1 A_2 A_3 A_4
There are 5 possible ways:
1. (A_1 (A_2 (A_3 A_4)))
2. (A_1 ((A_2 A_3) A_4))
3. ((A_1 A_2) (A_3 A_4))
4. ((A_1 (A_2 A_3)) A_4)
5. (((A_1 A_2) A_3) A_4)
Matrix-chain Multiplication (contd.)
To compute the number of scalar multiplications necessary, we must know:
– The algorithm to multiply two matrices
– The matrix dimensions
Can you write the algorithm to multiply two matrices?
Algorithm to Multiply 2 Matrices
Input: matrices A (p×q) and B (q×r)
Result: matrix C (p×r) resulting from the product A·B
MATRIX-MULTIPLY(A, B)
1. for i ← 1 to p
2.   for j ← 1 to r
3.     C[i, j] ← 0
4.     for k ← 1 to q
5.       C[i, j] ← C[i, j] + A[i, k] · B[k, j]
6. return C
The scalar multiplication in line 5 dominates the time to compute C. Number of scalar multiplications = pqr.
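A direct Python transcription of MATRIX-MULTIPLY (0-based indexing), with a counter to confirm the pqr scalar-multiplication count; matrices are plain lists of lists here.

def matrix_multiply(A, B):
    p, q = len(A), len(A[0])
    r = len(B[0])
    assert len(B) == q, "inner dimensions must agree"
    C = [[0] * r for _ in range(p)]
    muls = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]     # the dominant scalar multiplication
                muls += 1
    return C, muls                               # muls == p * q * r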
Matrix-chain Multiplication (contd.)
Example: consider three matrices A (10×100), B (100×5), and C (5×50)
There are 2 ways to parenthesize:
– ((AB)C) = D (10×5) · C (5×50)
  AB: 10 · 100 · 5 = 5,000 scalar multiplications
  DC: 10 · 5 · 50 = 2,500 scalar multiplications
  Total: 7,500
– (A(BC)) = A (10×100) · E (100×50)
  BC: 100 · 5 · 50 = 25,000 scalar multiplications
  AE: 10 · 100 · 50 = 50,000 scalar multiplications
  Total: 75,000
Matrix-chain Multiplication (contd.)
The matrix-chain multiplication problem:
– Given a chain A_1, A_2, …, A_n of n matrices, where for i = 1, 2, …, n, matrix A_i has dimension p_{i-1} × p_i
– Parenthesize the product A_1 A_2 … A_n such that the total number of scalar multiplications is minimized
A brute-force exhaustive search takes time exponential in n.
Dynamic Programming Approach
The structure of an optimal solution:
– Let us use the notation A_{i..j} for the matrix that results from the product A_i A_{i+1} … A_j
– An optimal parenthesization of the product A_1 A_2 … A_n splits the product between A_k and A_{k+1} for some integer k where 1 ≤ k < n
– First compute the matrices A_{1..k} and A_{k+1..n}; then multiply them to get the final matrix A_{1..n}
Dynamic Programming Approach (contd.)
– Key observation: the parenthesizations of the subchains A_1 A_2 … A_k and A_{k+1} A_{k+2} … A_n must also be optimal if the parenthesization of the chain A_1 A_2 … A_n is optimal (why?)
– That is, the optimal solution to the problem contains within it optimal solutions to subproblems
Dynamic Programming Approach (contd.)
Recursive definition of the value of an optimal solution:
– Let m[i, j] be the minimum number of scalar multiplications necessary to compute A_{i..j}
– The minimum cost to compute A_{1..n} is m[1, n]
– Suppose the optimal parenthesization of A_{i..j} splits the product between A_k and A_{k+1} for some integer k where i ≤ k < j
Dynamic Programming Approach (contd.)
– A_{i..j} = (A_i A_{i+1} … A_k) · (A_{k+1} A_{k+2} … A_j) = A_{i..k} · A_{k+1..j}
– Cost of computing A_{i..j} = cost of computing A_{i..k} + cost of computing A_{k+1..j} + cost of multiplying A_{i..k} and A_{k+1..j}
– Cost of multiplying A_{i..k} and A_{k+1..j} is p_{i-1} p_k p_j
– m[i, j] = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j for i ≤ k < j
– m[i, i] = 0 for i = 1, 2, …, n
Dynamic Programming Approach (contd.)
– But the optimal parenthesization occurs at one value of k among all possible i ≤ k < j
– Check all of them and select the best one:
m[i, j] = 0                                                        if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }   if i < j
Dynamic Programming Approach (contd.)
To keep track of how to construct an optimal solution, we use a table s:
– s[i, j] = the value of k at which A_i A_{i+1} … A_j is split in the optimal parenthesization
Algorithm (next slide):
– First computes costs for chains of length l = 1
– Then for chains of length l = 2, 3, … and so on
– Computes the optimal cost bottom-up
Algorithm to Compute Optimal Cost
Input: array p[0…n] containing the matrix dimensions, and n
Result: minimum-cost table m and split table s
MATRIX-CHAIN-ORDER(p, n)
  for i ← 1 to n
    m[i, i] ← 0
  for l ← 2 to n
    for i ← 1 to n-l+1
      j ← i+l-1
      m[i, j] ← ∞
      for k ← i to j-1
        q ← m[i, k] + m[k+1, j] + p[i-1]·p[k]·p[j]
        if q < m[i, j]
          m[i, j] ← q
          s[i, j] ← k
  return m and s
Takes O(n^3) time and requires O(n^2) space.
Constructing the Optimal Solution
Our algorithm computes the minimum-cost table m and the split table s.
The optimal solution can be constructed from the split table s:
– Each entry s[i, j] = k shows where to split the product A_i A_{i+1} … A_j for the minimum cost
Example
Show how to multiply this matrix chain optimally:
  Matrix   Dimension
  A_1      30×35
  A_2      35×15
  A_3      15×5
  A_4      5×10
  A_5      10×20
  A_6      20×25
Solution on the board:
– Minimum cost: 15,125
– Optimal parenthesization: ((A_1 (A_2 A_3)) ((A_4 A_5) A_6))
Minimal error boundary
[Figure: two overlapping blocks, their overlap error, and the minimal-error vertical boundary cut through the overlap region.]
Other applications: stitching Graphcut Textures – Kwatra et al. SIGGRAPH 2003
Other applications: stitching
[Figure: two overlapping images combined along a graph-cut seam.]
Graphcut Textures – Kwatra et al., SIGGRAPH 2003
An ideal boundary passes through pixels with:
1. Similar color in both images
2. High gradient in both images
(A sketch of such a cost follows below.)
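One way (my sketch, not the exact cost from Graphcut Textures) to turn the two criteria above into a per-pixel cut cost over the overlap region of two aligned grayscale images A and B: the cost is low where the images agree and where gradients are strong.

import numpy as np

def seam_cost(A, B, eps=1e-8):
    # Color difference between the two images, divided by the local gradient
    # strength, so that cuts prefer pixels that look the same in A and B and
    # sit on strong edges where a seam is hard to notice.
    diff = np.abs(A - B)
    grad_A = np.abs(np.gradient(A, axis=0)) + np.abs(np.gradient(A, axis=1))
    grad_B = np.abs(np.gradient(B, axis=0)) + np.abs(np.gradient(B, axis=1))
    return diff / (grad_A + grad_B + eps)

These per-pixel costs can then be plugged into the same min-cut construction used for segmentation above, with the two source images playing the roles of the two labels.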
Summary of big ideas
Treat the image as a graph:
– Pixels are nodes
– Between-pixel edge weights are based on gradients
– Sometimes per-pixel weights encode affinity to foreground/background
Good boundaries are a short path through the graph (Intelligent Scissors, Seam Carving).
Good regions are produced by a low-cost cut (GrabCut, graph cut stitching).
Slide Credits
This set of slides also contains contributions kindly made available by the following authors:
– Alexei Efros
– Carsten Rother
– Derek Hoiem