Published by Fay Cook. Modified over 9 years ago.
1
Pushmeet Kohli
3
E(X), with E: {0,1}^n → R; 0 → fg, 1 → bg; n = number of pixels. Image (D). [Boykov and Jolly '01] [Blake et al. '04] [Rother, Kolmogorov and Blake '04]
4
Unary cost (c_i): dark pixels negative, bright pixels positive. E(X) = ∑ c_i x_i (pixel colour).
5
E(X) = ∑ c_i x_i (pixel colour). x* = arg min E(x).
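With only unary terms, the minimization decouples per pixel: x_i = 1 exactly when its coefficient is negative. A minimal Python sketch (the costs are made-up illustrative values, not from the slides):

```python
def minimize_unary(costs):
    """argmin of E(x) = sum_i c_i * x_i over x in {0,1}^n:
    set x_i = 1 exactly when c_i is negative."""
    return [1 if c < 0 else 0 for c in costs]

costs = [-3.0, 2.0, -0.5, 4.0]   # hypothetical per-pixel colour costs c_i
print(minimize_unary(costs))     # [1, 0, 1, 0]
```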
6
Discontinuity cost (d_ij). E(X) = ∑ c_i x_i (pixel colour) + ∑ d_ij |x_i - x_j| (smoothness prior).
7
E(X) = ∑ c_i x_i (pixel colour) + ∑ [d_ij x_i (1 - x_j) + d_ij x_j (1 - x_i)] (smoothness prior).
8
Old solution: x* = arg min E(x).
9
x* = arg min E(x).
11
E(x) = ∑_i f_i(x_i) + ∑_ij g_ij(x_i, x_j) + ∑_c h_c(x_c): unary, pairwise, and higher-order terms. How to minimize E(x)? x takes values from a label set L = {l_1, l_2, ..., l_k}.
12
[Figure: the space of problems, arranged by number of variables n and by tractability properties: Segmentation Energy, CSP, MAXCUT; NP-hard in general.]
13
Structural tractability: e.g. tree-structured problems.
14
Language or form tractability: constraints on the terms g_ij(x_i, x_j) of your energy function, e.g. submodular functions (pairwise: O(n^3); general: O(n^6)).
16
A pseudo-boolean function f: {0,1}^n → R is submodular if f(A) + f(B) ≥ f(A ∨ B) + f(A ∧ B) for all A, B ∈ {0,1}^n (∨ = elementwise OR, ∧ = elementwise AND). Example: n = 2, A = [1,0], B = [0,1]: f([1,0]) + f([0,1]) ≥ f([1,1]) + f([0,0]). Property: a sum of submodular functions is submodular. The binary image segmentation energy E(x) = ∑ c_i x_i + ∑ d_ij |x_i - x_j| is submodular.
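For small n the definition can be checked by brute force over all pairs (A, B). A Python sketch with toy coefficients (not from the slides):

```python
from itertools import product

def is_submodular(f, n):
    """Check f(A) + f(B) >= f(A OR B) + f(A AND B) for all A, B in {0,1}^n."""
    for A in product((0, 1), repeat=n):
        for B in product((0, 1), repeat=n):
            union = tuple(a | b for a, b in zip(A, B))  # elementwise OR
            inter = tuple(a & b for a, b in zip(A, B))  # elementwise AND
            if f(A) + f(B) < f(union) + f(inter):
                return False
    return True

# the segmentation energy with arbitrary c_i and d_ij >= 0 (toy coefficients)
c, d = [1.0, -2.0], 3.0
E = lambda x: c[0]*x[0] + c[1]*x[1] + d*abs(x[0] - x[1])
print(is_submodular(E, 2))                            # True
print(is_submodular(lambda x: -abs(x[0] - x[1]), 2))  # False: negative d_ij
```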
17
Discrete Analogues of Concave Functions [Lovasz, ’83] Widely applied in Operations Research Applications in Machine Learning MAP Inference in Markov Random Fields Clustering [Narasimhan, Jojic, & Bilmes, NIPS 2005] Structure Learning [Narasimhan & Bilmes, NIPS 2006] Maximizing the spread of influence through a social network [Kempe, Kleinberg & Tardos, KDD 2003]
18
Polynomial-time algorithms: ellipsoid algorithm [Grotschel, Lovasz & Schrijver '81]; first strongly polynomial algorithms [Iwata et al. '00] [A. Schrijver '00]; current best O(n^5 Q + n^6), where Q is the function evaluation time [Orlin '07]. Symmetric functions E(x) = E(1 - x) can be minimized in O(n^3). Minimizing pairwise submodular functions E(x) = ∑_i f_i(x_i) + ∑_ij g_ij(x_i, x_j): can be transformed to st-mincut/max-flow [Hammer, 1965]; very low empirical running time, ~O(n).
19
Graph (V, E, C): vertices V = {v_1, v_2, ..., v_n}, edges E = {(v_1, v_2), ...}, costs C = {c_(1,2), ...}. Example: source, sink, v_1, v_2 with edge costs 2, 5, 9, 4, 1, 2.
20
What is an st-cut?
21
An st-cut (S, T) divides the nodes between source and sink. The cost of an st-cut is the sum of the costs of all edges going from S to T; for the cut shown, 5 + 1 + 9 = 15.
22
What is the st-mincut? The st-cut with the minimum cost; here 2 + 2 + 4 = 8.
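For a graph this small, every st-cut can be enumerated directly. A Python sketch assuming the figure's edge costs are source→v_1 = 2, v_1→sink = 5, source→v_2 = 9, v_2→sink = 4, v_1→v_2 = 1, v_2→v_1 = 2 (an assumed orientation, chosen so it reproduces both cut costs shown, 15 and 8):

```python
from itertools import product

# edges (u, v, cost); "s" is the source, "t" the sink
edges = [("s", "v1", 2), ("v1", "t", 5), ("s", "v2", 9),
         ("v2", "t", 4), ("v1", "v2", 1), ("v2", "v1", 2)]

def cut_cost(S):
    """Cost of an st-cut: sum of the costs of edges going from S to T."""
    return sum(c for u, v, c in edges if u in S and v not in S)

costs = {}
for side in product((0, 1), repeat=2):   # 0 = source side, 1 = sink side
    S = {"s"} | {v for v, b in zip(("v1", "v2"), side) if b == 0}
    costs[frozenset(S)] = cut_cost(S)

print(sorted(costs.values()))   # [8, 9, 11, 15]
print(min(costs.values()))      # 8, the st-mincut
```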
23
Construct a graph such that: 1. any st-cut corresponds to an assignment of x, and 2. the cost of the cut equals the energy of x, E(x). The st-mincut then yields the optimal solution. [Hammer, 1965] [Kolmogorov and Zabih, 2002]
24
E(x) = ∑_i θ_i(x_i) + ∑_ij θ_ij(x_i, x_j) with θ_ij(0,1) + θ_ij(1,0) ≥ θ_ij(0,0) + θ_ij(1,1) for all ij is equivalent (transformable) to E(x) = ∑_i c_i x_i + ∑_ij c_ij x_i (1 - x_j) with c_ij ≥ 0.
25
Source (0), Sink (1); graph nodes a_1, a_2 for E(a_1, a_2).
26
E(a_1, a_2) = 2a_1: edge of capacity 2.
27
E(a_1, a_2) = 2a_1 + 5ā_1: edge capacities 2, 5.
28
E(a_1, a_2) = 2a_1 + 5ā_1 + 9a_2 + 4ā_2: edge capacities 2, 5, 9, 4.
29
E(a_1, a_2) = 2a_1 + 5ā_1 + 9a_2 + 4ā_2 + 2a_1 ā_2: edge capacities 2, 5, 9, 4, 2.
30
E(a_1, a_2) = 2a_1 + 5ā_1 + 9a_2 + 4ā_2 + 2a_1 ā_2 + ā_1 a_2: edge capacities 2, 5, 9, 4, 2, 1.
32
The cut with a_1 = 1, a_2 = 1 has cost 11 = E(1,1).
33
The st-mincut takes a_1 = 1, a_2 = 0, with cost 8 = E(1,0).
34
Solve the dual maximum-flow problem: compute the maximum flow between source and sink subject to edges: flow ≤ capacity; nodes: flow in = flow out (assuming non-negative capacities). Min-cut/max-flow theorem: in every network, the maximum flow equals the cost of the st-mincut.
35
Augmenting-path based algorithms. Flow = 0.
36
1. Find a path from source to sink with positive capacity. Flow = 0.
37
2. Push the maximum possible flow through this path. Flow = 0 + 2.
38
Flow = 2; residual capacities 0, 3, 9, 4, 2, 1.
39
3. Repeat until no path can be found. Flow = 2.
41
Flow = 2 + 4; residual capacities 0, 3, 5, 0, 2, 1.
42
Flow = 6.
44
Flow = 6 + 2, pushed along the path through the v_2 → v_1 edge.
45
Flow = 8; residual capacities 0, 2, 4, 0, 3, 0. No augmenting path remains.
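The three steps above, with the shortest path found by BFS, are the Edmonds-Karp variant of the augmenting-path method. A self-contained Python sketch, run on the graph built from E(a_1, a_2) in these slides:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting path."""
    # residual capacities, with zero-capacity reverse edges added
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # step 1: BFS for a shortest path with positive residual capacity
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # step 3: no augmenting path remains
        # step 2: push the maximum possible flow through this path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][w] for u, w in path)
        for u, w in path:
            res[u][w] -= push
            res[w][u] += push
        flow += push

# graph built from E(a_1, a_2) in the slides; max flow should equal 8
cap = {"s": {"a1": 2, "a2": 9}, "a1": {"t": 5, "a2": 1},
       "a2": {"t": 4, "a1": 2}, "t": {}}
print(max_flow(cap, "s", "t"))  # 8
```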
47
E(a_1, a_2) = 2a_1 + 5ā_1 + 9a_2 + 4ā_2 + 2a_1 ā_2 + ā_1 a_2; edge capacities 2, 5, 9, 4, 2, 1. Source (0), Sink (1).
48
Reparameterization: 2a_1 + 5ā_1 = 2(a_1 + ā_1) + 3ā_1 = 2 + 3ā_1.
49
E(a_1, a_2) = 2 + 3ā_1 + 9a_2 + 4ā_2 + 2a_1 ā_2 + ā_1 a_2; edge capacities 0, 3, 9, 4, 2, 1.
50
Similarly: 9a_2 + 4ā_2 = 4(a_2 + ā_2) + 5a_2 = 4 + 5a_2.
51
E(a_1, a_2) = 2 + 3ā_1 + 4 + 5a_2 + 2a_1 ā_2 + ā_1 a_2; edge capacities 0, 3, 5, 0, 2, 1.
52
E(a_1, a_2) = 6 + 3ā_1 + 5a_2 + 2a_1 ā_2 + ā_1 a_2.
54
3ā_1 + 5a_2 + 2a_1 ā_2 = 2(ā_1 + a_2 + a_1 ā_2) + ā_1 + 3a_2 = 2(1 + ā_1 a_2) + ā_1 + 3a_2, since F1 = ā_1 + a_2 + a_1 ā_2 and F2 = 1 + ā_1 a_2 agree on every assignment: (a_1, a_2) = (0,0): F1 = F2 = 1; (0,1): F1 = F2 = 2; (1,0): F1 = F2 = 1; (1,1): F1 = F2 = 1.
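The reparameterization step above can be verified mechanically by evaluating both sides on all four assignments of (a_1, a_2):

```python
# check 3*ā1 + 5*a2 + 2*a1*ā2  ==  2*(1 + ā1*a2) + ā1 + 3*a2, with ā = 1 - a
for a1 in (0, 1):
    for a2 in (0, 1):
        lhs = 3*(1 - a1) + 5*a2 + 2*a1*(1 - a2)
        rhs = 2*(1 + (1 - a1)*a2) + (1 - a1) + 3*a2
        assert lhs == rhs
print("identity verified")
```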
55
E(a_1, a_2) = 8 + ā_1 + 3a_2 + 3ā_1 a_2; edge capacities 0, 1, 3, 0, 0, 3. Source (0), Sink (1).
56
No more augmenting paths are possible.
57
The total flow is a bound on the energy of the optimal solution; the residual graph (all positive coefficients) shows the bound is tight, so inference of the optimal solution becomes trivial.
58
a_1 = 1, a_2 = 0: E(1,0) = 8 = st-mincut cost = total flow.
59
Augmenting path and push-relabel complexities [slide credit: Andrew Goldberg]. n: # nodes, m: # edges, U: maximum edge weight. The algorithms assume non-negative edge weights.
61
Ford-Fulkerson: choose any augmenting path. Example: edges of capacity 1000 from the source and into the sink, joined by a middle edge of capacity 1 between a_1 and a_2.
62
Good augmenting paths: the two direct capacity-1000 paths.
63
Bad augmenting path: a path through the capacity-1 middle edge.
64
Augmenting along the bad path pushes only 1 unit of flow (residual capacities 999, 999, ...).
65
Alternating bad paths would require 2000 augmentations! Worst-case complexity: O(m × total_flow), a pseudo-polynomial bound that depends on the flow. (n: # nodes, m: # edges.)
66
Dinic: choose the shortest augmenting path. Worst-case complexity: O(m n^2).
67
Specialized algorithms for vision problems: grid graphs with low connectivity (m ~ O(n)). Dual-search-tree augmenting-path algorithm [Boykov and Kolmogorov PAMI 2004]: finds approximate shortest augmenting paths efficiently; high worst-case time complexity, but empirically outperforms other algorithms on vision problems. Efficient code available on the web: http://www.adastral.ucl.ac.uk/~vladkolm/software.html
68
How to minimize E(x)? E(x) = ∑ c_i x_i + ∑ d_ij |x_i - x_j|, with E: {0,1}^n → R; 0 → fg, 1 → bg; n = number of pixels. The global minimum is x* = arg min E(x).
69
Source (0), Sink (1). Graph construction:
Graph *g;
for all pixels p
    /* Add a node to the graph */
    nodeID(p) = g->add_node();
    /* Set cost of terminal edges */
    set_weights(nodeID(p), fgCost(p), bgCost(p));
end
for all adjacent pixels p,q
    add_weights(nodeID(p), nodeID(q), cost(p,q));
end
g->compute_maxflow();
label_p = g->is_connected_to_source(nodeID(p)); // the label of pixel p (0 or 1)
70
Terminal edges for nodes a_1, a_2: fgCost(a_1), fgCost(a_2) to one terminal, bgCost(a_1), bgCost(a_2) to the other.
71
Pairwise edges cost(p,q) between adjacent nodes.
72
After maxflow: a_1 = bg, a_2 = fg.
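Since the examples here are tiny, the effect of this construction can be reproduced with a brute-force minimizer standing in for g->compute_maxflow(); the costs below are hypothetical, with 0 = fg and 1 = bg as in the slides:

```python
from itertools import product

def segment(fg_cost, bg_cost, edges, pair_cost):
    """Brute-force stand-in for the max-flow solver in the pseudocode:
    minimize E(x) = sum_p cost(label_p) + sum_pq w_pq * [x_p != x_q]."""
    n = len(fg_cost)
    def energy(x):
        e = sum(fg_cost[p] if x[p] == 0 else bg_cost[p] for p in range(n))
        e += sum(w for (p, q), w in zip(edges, pair_cost) if x[p] != x[q])
        return e
    return min(product((0, 1), repeat=n), key=energy)

# two "pixels" with hypothetical terminal costs and one pairwise edge
x = segment([1.0, 9.0], [8.0, 2.0], [(0, 1)], [3.0])
print(x)  # (0, 1): pixel 0 -> fg, pixel 1 -> bg
```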
74
Mixed (Real-Integer) Problems. Multi-label Problems: ordered labels (stereo: depth labels); unordered labels (object segmentation: 'car', 'road', 'person'). Higher Order Energy Functions.
76
x: binary image segmentation (x_i ∈ {0,1}); ω: a non-local parameter living in some large set Ω, e.g. a template's position, scale, and orientation. E(x, ω) = C(ω) + ∑_i θ_i(ω, x_i) + ∑_ij θ_ij(ω, x_i, x_j): a constant, unary potentials, and pairwise potentials ≥ 0. We have seen several of them in the intro...
77
{x*, ω*} = arg min_{x,ω} E(x, ω): a standard "graph cut" energy if ω is fixed. [Kohli et al, 06, 08] [Lempitsky et al, 08]
78
Local method: gradient descent over ω. ω* = arg min_ω min_x E(x, ω); the inner minimization over x is submodular. [Kohli et al, 06, 08]
79
Dynamic graph cuts exploit that E(x, ω_1) and E(x, ω_2) are similar energy functions: a 15-20× speedup! [Kohli et al, 06, 08]
80
Global method: Branch and Mincut [Lempitsky et al, 08]. Produces the globally optimal solution, but exhaustively explores all ω in Ω (30,000,000 shapes) in the worst case.
81
Mixed (Real-Integer) Problems. Multi-label Problems: ordered labels (stereo: depth labels); unordered labels (object segmentation: 'car', 'road', 'person'). Higher Order Energy Functions.
82
Min_y E(y) = ∑_i f_i(y_i) + ∑_ij g_ij(y_i, y_j), with y taking labels from L = {l_1, l_2, ..., l_k}. Two routes: exact transformation to a QPBF, or move-making algorithms. [Roy and Cox '98] [Ishikawa '03] [Schlesinger & Flach '06] [Ramalingam, Alahari, Kohli, and Torr '08]
83
So what is the problem? E_m(y_1, ..., y_n) is the multi-label problem with y_i ∈ L = {l_1, ..., l_k}; E_b(x_1, ..., x_m) is a binary-label problem with x_i ∈ {0,1}. With Y and X the sets of feasible solutions, we need: 1. a one-one encoding function T: X → Y; 2. arg min E_m(y) = T(arg min E_b(x)).
84
Popular encoding scheme [Roy and Cox '98, Ishikawa '03, Schlesinger & Flach '06]: # nodes = n × k, # edges = m × k^2.
85
Ishikawa's result: E(y) = ∑_i θ_i(y_i) + ∑_ij θ_ij(y_i, y_j), y with labels L = {l_1, ..., l_k}, is exactly solvable when θ_ij(y_i, y_j) = g(|y_i - y_j|) for a convex function g.
86
Schlesinger & Flach '06 relax this to submodular pairwise terms: θ_ij(l_i+1, l_j) + θ_ij(l_i, l_j+1) ≥ θ_ij(l_i, l_j) + θ_ij(l_i+1, l_j+1).
87
Image → MAP solution via the scanline algorithm [Roy and Cox, 98].
88
Applicability: cannot handle truncated (non-robust) costs, i.e. the discontinuity-preserving potentials of Blake & Zisserman '83, '87, where θ_ij(y_i, y_j) = g(|y_i - y_j|) with g truncated. Computational cost: very high, since the problem size is |variables| × |labels|; gray-level denoising of a 1 Mpixel image needs ~2.5 × 10^8 graph nodes.
89
T(a, b) = complexity of maxflow with a nodes and b edges.
Ishikawa transformation [03]: arbitrary unary, convex and symmetric pairwise: T(nk, mk^2).
Schlesinger transformation [06]: arbitrary unary, submodular pairwise: T(nk, mk^2).
Hochbaum [01]: linear unary, convex and symmetric pairwise: T(n, m) + n log k.
Hochbaum [01]: convex unary, convex and symmetric pairwise: O(mn log n log nk).
Plus other "less known" algorithms.
90
Move-making algorithms for min_y E(y) = ∑_i f_i(y_i) + ∑_ij g_ij(y_i, y_j), y with labels L = {l_1, ..., l_k}. [Boykov, Veksler and Zabih 2001] [Woodford, Fitzgibbon, Reid, Torr, 2008] [Lempitsky, Rother, Blake, 2008] [Veksler, 2008] [Kohli, Ladicky, Torr 2008]
91
[Figure: energy plotted over the solution space.]
92
[Figure: the current solution, its search neighbourhood, and the optimal move within that neighbourhood.]
93
Key property of the move space: a bigger move space (current solution x_c, move variables t) gives better solutions, but finding the optimal move becomes hard.
94
Minimizing pairwise functions [Boykov, Veksler and Zabih, PAMI 2001]: a series of locally optimal moves, each reducing the energy, with the optimal move found by minimizing a submodular function. Space of solutions x: L^n; move space t: 2^n (n = number of variables, L = number of labels). Kohli et al. '07, '08, '09 extend this to minimize higher-order functions.
95
Minimize over move variables t: new solution x = t x_1 + (1 - t) x_2 (current solution x_1, second solution x_2). The move energy is E_m(t) = E(t x_1 + (1 - t) x_2); for certain x_1 and x_2 it is a submodular QPBF. [Boykov, Veksler and Zabih 2001]
96
Swap move: variables labeled α, β can swap their labels. [Boykov, Veksler and Zabih 2001]
97
Example: swap Sky and House (labels: Sky, House, Tree, Ground).
98
The move energy is submodular if the unary potentials are arbitrary and the pairwise potentials form a semi-metric: θ_ij(l_a, l_b) ≥ 0, with θ_ij(l_a, l_b) = 0 iff a = b. Examples: Potts model, truncated convex.
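The semi-metric condition is easy to test numerically. A Python sketch checking the two examples named above on a small label set:

```python
def is_semi_metric(theta, labels):
    """theta(a, b) >= 0, and theta(a, b) == 0 exactly when a == b."""
    labels = list(labels)
    return all(theta(a, b) >= 0 and (theta(a, b) == 0) == (a == b)
               for a in labels for b in labels)

potts = lambda a, b: 0 if a == b else 1             # Potts model
trunc_convex = lambda a, b: min((a - b) ** 2, 4)    # truncated convex
print(is_semi_metric(potts, range(4)))          # True
print(is_semi_metric(trunc_convex, range(4)))   # True
```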
99
Expansion move: variables either take label α or retain their current label. [Boykov, Veksler and Zabih 2001]
100
Example: initialize with Tree, then expand Ground, expand House, expand Sky.
101
The move energy is submodular if the unary potentials are arbitrary and the pairwise potentials form a metric: a semi-metric that also satisfies the triangle inequality θ_ij(l_a, l_b) + θ_ij(l_b, l_c) ≥ θ_ij(l_a, l_c). Examples: Potts model, truncated linear. Cannot solve the truncated quadratic.
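The metric condition can be tested the same way; the check below confirms that truncated linear qualifies while truncated quadratic violates the triangle inequality:

```python
def is_metric(theta, labels):
    """Semi-metric plus the triangle inequality required by expansion."""
    labels = list(labels)
    for a in labels:
        for b in labels:
            if theta(a, b) < 0 or (theta(a, b) == 0) != (a == b):
                return False  # not even a semi-metric
            for c in labels:
                if theta(a, b) + theta(b, c) < theta(a, c):
                    return False  # triangle inequality fails
    return True

trunc_linear = lambda a, b: min(abs(a - b), 2)    # truncated linear
trunc_quad = lambda a, b: min((a - b) ** 2, 4)    # truncated quadratic
print(is_metric(trunc_linear, range(5)))   # True
print(is_metric(trunc_quad, range(5)))     # False
```

For the truncated quadratic, e.g. theta(0,1) + theta(1,2) = 2 < 4 = theta(0,2), which is exactly why expansion cannot handle it.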
102
Expansion and swap can be derived as a primal-dual scheme: the dual solution is a lower bound on the energy, giving the weak guarantee E(x) < 2 (d_max / d_min) E(x*), where d_max and d_min are the largest and smallest nonzero values of the pairwise term θ_ij(l_i, l_j) = g(|l_i - l_j|). [Komodakis et al 05, 07]
103
Fusion move: minimize over move variables t, x = t x_1 + (1 - t) x_2, combining a first and a second solution into a new one. Expansion is the special case where the first solution is the old solution and the second is the all-α labeling (guarantee: metric); fusion allows any pair of solutions. Move functions can be non-submodular!
104
x = t x_1 + (1 - t) x_2, where x_1 and x_2 can be continuous. Optical flow example: fuse the solution from method 1 with the solution from method 2 to obtain the final solution. [Woodford, Fitzgibbon, Reid, Torr, 2008] [Lempitsky, Rother, Blake, 2008]
105
Range moves: the move variables can be multi-label, x = (t == 1) x_1 + (t == 2) x_2 + ... + (t == k) x_k, and the optimal move is found using the Ishikawa transform. Useful for minimizing energies with truncated convex pairwise potentials θ_ij(y_i, y_j) = min(|y_i - y_j|^2, T). [Kumar and Torr, 2008] [Veksler, 2008]
106
[Veksler, 2008] Denoising comparison: image, noisy image, range-moves result, expansion-move result.