1
Primal-dual Algorithm for Convex Markov Random Fields
Vladimir Kolmogorov, University College London
GDR (Optimisation Discrète, Graph Cuts et Analyse d'Images), Paris, 29 November 2005
Note: these slides contain animation
2
Convex MRF functions
Functions D_p(·), V_pq(·) are convex; x_p ∈ {0,…,K-1} (K is the number of labels).
Goal: compute the global minimum of E.
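The energy being minimised appears on the slide only as an image; judging from the pairwise terms used later in the talk (slides 4 and 36), it presumably has the standard form below, where the sums over nodes p and edges (p, q) and the difference-based pairwise terms are assumptions rather than a transcription:

    E(x) = Σ_p D_p(x_p) + Σ_(p,q) V_pq(x_q - x_p),   x_p ∈ {0,…,K-1},

with each D_p and V_pq convex.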
3
Example: Panoramic image stitching [Levin,Zomet,Peleg,Weiss04] Main idea: gradients of output image x should match gradients of input images
4
Example: Panoramic image stitching [Levin,Zomet,Peleg,Weiss04]
Energy function: x is the output image (e.g. x_p ∈ {0,…,255}); pairwise terms V_pq(x_q - x_p), and V_pq(·) is convex!
[Figure: plot of V_pq over x_q - x_p ∈ [-255, 255].]
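As an illustration only (the slide does not give the concrete terms), a gradient-matching pairwise term of this kind could look as follows, where I denotes the input image supplying the gradient at edge (p, q) and ρ is a convex penalty; all of these symbols are assumptions, not taken from the talk:

    V_pq(x_q - x_p) = ρ((x_q - x_p) - (I_q - I_p)),   e.g. ρ(t) = |t| or ρ(t) = t².

Either choice keeps V_pq convex, so the energy stays within the class the talk addresses.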
5
Algorithms for MRF minimisation
Arbitrary D_p(·), convex V_pq(·):
– Battleship construction [Ishikawa03], [Ahuja et al.04]: construct a graph with O(nK) nodes; a minimum cut gives the global minimum.
– Needs a lot of memory!
Convex D_p(·), convex V_pq(·):
– Dual algorithms (maintain dual variables, i.e. flow f) [Karzanov et al. 97], [Ahuja et al. 03]. Best complexity is O(nm log(n²/m) log(nK)).
– Primal algorithm (maintains primal variables, i.e. configuration x): iterative min cut [Bioucas-Dias & Valadão05]. Advantage: relies on a maxflow algorithm. Complexity?
6
Algorithms for MRF minimisation
New results (convex D_p, convex V_pq):
– Establishing the complexity of the primal algorithm: at most 2K steps.
– Extending the primal algorithm to a primal-dual algorithm: maintains both primal and dual variables; can be sped up using Dijkstra's shortest path procedure; experimentally much faster than the primal algorithm.
7
Overview of primal algorithm (iterative min cut)
8
Primal algorithm (iterative min cut)
Graph: [figure of the layered graph, with rows of nodes for labels 0, 1, …, K-1]
9
Primal algorithm (iterative min cut)
Start with an arbitrary configuration x. Procedure UP: [animation frame]
10
Procedure UP: [animation frame]
11
Primal algorithm (iterative min cut)
Procedure DOWN: [animation frame]
12
Primal algorithm (iterative min cut)
Procedure DOWN: [animation frame]
13
Primal algorithm (iterative min cut)
UP: [animation frame]
14
Primal algorithm (iterative min cut)
UP, then DOWN: [animation frame]
15
Primal algorithm (iterative min cut)
DOWN: [animation frame]
16
Primal algorithm (iterative min cut)
DOWN: Done! – UP and DOWN do not decrease the energy. [animation frame]
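To make the loop just animated concrete, here is a minimal sketch in Python, assuming a helper best_binary_move that solves the binary subproblem by min cut; the helper name and interface are hypothetical, not part of the talk:

    def primal_iterative_min_cut(E, x, best_binary_move):
        # E                : callable evaluating the MRF energy of a labelling x
        # x                : initial labelling, a list of labels in {0, ..., K-1}
        # best_binary_move : hypothetical oracle; for direction +1 it returns the
        #                    0/1 vector b minimising E(x + b) (solved via min cut),
        #                    for direction -1 the b minimising E(x - b)
        while True:
            improved = False
            for direction in (+1, -1):          # UP move, then DOWN move
                b = best_binary_move(E, x, direction)
                candidate = [xi + direction * bi for xi, bi in zip(x, b)]
                if E(candidate) < E(x):         # accept only strict improvements
                    x = candidate
                    improved = True
            if not improved:                    # neither UP nor DOWN helps: stop
                return x

For convex MRF energies, the talk's claim is that this loop stops after at most 2K UP/DOWN steps and returns a global minimum.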
17
Discussion
[Bioucas-Dias & Valadão05]:
– Procedure yields the global minimum! (No unary terms D_p; terms V_pq are convex.)
– Straightforward extension to convex MRF functions (convex D_p, convex V_pq).
– Non-polynomial bound on the number of steps.
[Murota 00,03] (steepest descent algorithm):
– Procedure yields the global minimum for L♮-convex functions (convex MRF functions are a special case of L♮-convex functions).
– O(nK) bound on the number of steps.
New result:
– Global minimum after at most 2K steps.
– Holds for L♮-convex functions (including convex MRF functions).
18
Contribution #1: Complexity of primal algorithm Background
19
Two classes of functions
Consider a function E(x) = E(x_1,…,x_n) with x_p ∈ {0,…,K-1}.
The algorithm can be applied to any such function:
UP: x ← x + b, where b minimises E(x + b) over b ∈ {0,1}^n.
DOWN: x ← x - b, where b minimises E(x - b) over b ∈ {0,1}^n.
Question #1: When can UP and DOWN be solved efficiently? Submodular functions.
Question #2: When does it yield the global minimum? L♮-convex functions.
20
Submodular functions
E is submodular if for all configurations x, y: E(x ∧ y) + E(x ∨ y) ≤ E(x) + E(y), where x ∧ y and x ∨ y are the component-wise minimum and maximum.
The definition is extended from binary variables (K=2) to multi-valued variables (K>2).
Can be minimised in time polynomial in K, n, m.
– Functions with unary, pairwise and ternary terms: reduction to min cut / max flow [Kovtun04]
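A small worked example of the lattice operations (not on the slide): for x = (2, 0, 1) and y = (1, 3, 1),

    x ∧ y = (1, 0, 1),   x ∨ y = (2, 3, 1),

so submodularity requires E(1,0,1) + E(2,3,1) ≤ E(2,0,1) + E(1,3,1).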
21
L♮-convex functions
E is L♮-convex if for all configurations x, y: E(⌊(x+y)/2⌋) + E(⌈(x+y)/2⌉) ≤ E(x) + E(y), where ⌊·⌋ and ⌈·⌉ are the component-wise round-down and round-up (floor and ceiling).
Note: in the continuous case, E is convex if for all x, y: 2·E((x+y)/2) ≤ E(x) + E(y).
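A small worked example of the discrete midpoints (not on the slide): for x = (2, 0, 1) and y = (1, 3, 2), (x+y)/2 = (1.5, 1.5, 1.5), so ⌊(x+y)/2⌋ = (1, 1, 1) and ⌈(x+y)/2⌉ = (2, 2, 2); L♮-convexity requires E(1,1,1) + E(2,2,2) ≤ E(2,0,1) + E(1,3,2).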
22
Submodularity and L♮-convexity
K=2: submodular functions = L♮-convex functions.
K>2: submodular functions ⊃ L♮-convex functions.
Example: D_p arbitrary, V_pq convex ⇒ E is submodular; D_p convex, V_pq convex ⇒ E is L♮-convex.
[Figures: plots of D_1 against x_1, arbitrary vs. convex.]
23
Contribution #1: Complexity of primal algorithm for L♮-convex functions. Proof
24
Overview
For a configuration x, define δ⁺(x), δ⁻(x) with 0 ≤ δ⁺(x), δ⁻(x) < K.
Prove that UP and DOWN do not increase δ⁺(x), δ⁻(x).
Prove that if δ⁺(x) > 0, then UP will decrease it.
– Similarly for δ⁻(x) and DOWN.
Prove that δ⁺(x) = δ⁻(x) = 0 implies that x is a global minimum.
25
Property of submodular functions
Let OPT(E) be the set of global minima of E.
There exist minimal and maximal optimal configurations: if x, y ∈ OPT(E), then E(x ∧ y) + E(x ∨ y) ≤ E(x) + E(y) forces x ∧ y, x ∨ y ∈ OPT(E), so OPT(E) has a component-wise smallest and a component-wise largest element.
In general, this is not true for non-submodular functions! (e.g. Potts interactions)
26
Defining δ⁺(x)
Step 1: Let E⁺ be the restriction of E to configurations y ≥ x.
Step 2: Let x⁺ be the minimal optimal configuration of E⁺.
Step 3: Define δ⁺(x) = ‖x⁺ - x‖ = max_p { x⁺_p - x_p }.
[Figure: x and x⁺.]
27
Defining δ⁻(x)
Step 1: Let Ē be the restriction of E to configurations y ≤ x.
Step 2: Let x̄ be the maximal optimal configuration of Ē.
Step 3: Define δ⁻(x) = ‖x - x̄‖ = max_p { x_p - x̄_p }.
[Figure: x and x̄.]
28
Algorithm's behaviour: δ⁺(x) = 2, δ⁻(x) = 3. [animation frame]
29
Algorithm's behaviour: δ⁺(x) = 1, δ⁻(x) = 3. [animation frame]
30
Algorithm's behaviour: δ⁺(x) = 0, δ⁻(x) = 3. [animation frame]
31
Algorithm's behaviour: δ⁺(x) = 0, δ⁻(x) = 2. [animation frame]
32
Algorithm's behaviour: δ⁺(x) = 0, δ⁻(x) = 1. [animation frame]
33
Algorithm's behaviour: δ⁺(x) = δ⁻(x) = 0 (global minimum). [animation frame]
34
Contribution #2: Primal-dual algorithm
35
Primal-dual algorithm
The primal algorithm maintains only primal variables (configuration x):
– Each maxflow problem is solved independently.
Motivation: reuse the flow from the previous computation.
New primal-dual algorithm:
– Applies to convex MRF functions.
– Maintains both primal variables (configuration x) and dual variables (flow f).
– Upon termination, optimal x and f.
– Can be sped up via Dijkstra's algorithm.
– Experimentally much faster than the primal algorithm.
36
Flow and reparameterisation
[Figure: plots of D_p(x_p), V_pq(x_q - x_p) and D_q(x_q), before and after pushing flow f along the edge (p, q).]
Pushing flow f along the edge (p, q) changes the terms to D_p(x_p) + f·x_p, V_pq(x_q - x_p) + f·(x_q - x_p) and D_q(x_q) - f·x_q.
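A one-line check, not spelled out on the slide, of why such a push leaves the energy unchanged: the added amounts cancel for every labelling,

    f·x_p + f·(x_q - x_p) - f·x_q = 0,

so the reparameterised energy equals the original energy at every x.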
37
Flow and reparameterisation
Flow: a vector f = { f_p, f_pq } satisfying antisymmetry and flow conservation constraints.
Any flow defines a reparameterisation: D_p(x_p) → D_p(x_p) + f_p·x_p, V_pq(x_q - x_p) → V_pq(x_q - x_p) + f_pq·(x_q - x_p).
38
Optimality conditions
[Figure: plots of the reparameterised terms D_p(x_p) + f_p·x_p against x_p, and V_pq(x_q - x_p) + f_pq·(x_q - x_p) against x_q - x_p.]
(x, f) is an optimal primal-dual pair iff the conditions illustrated in the figure hold.
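The conditions themselves were given graphically. Judging from the plots and from the way slides 39-40 speak of optimality conditions for nodes and for edges, they are presumably that every reparameterised term is minimised at the current labelling; this is an inference from the surrounding slides:

    x_p ∈ argmin_k [ D_p(k) + f_p·k ]            for every node p (node condition),
    x_q - x_p ∈ argmin_d [ V_pq(d) + f_pq·d ]    for every edge (p, q) (edge condition).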
39
Primal-dual algorithm
UP:
– MAXFLOW-UP: construct the graph for minimising E(x + b), b ∈ {0,1}^n; compute a maximum flow; update x and f accordingly.
– DIJKSTRA-UP: update x.
DOWN: similar.
[Figure: graph with nodes and edges.]
Algorithm's property:
– Maintains the optimality condition for edges (but not necessarily for nodes).
40
DIJKSTRA-UP
Increase x_p until D_p(x_p) starts increasing, maintaining the optimality condition for edges.
Compute the maximal such labelling x:
– Dijkstra's shortest path algorithm.
[Figure: graph with nodes and edges.]
Complexity is preserved (at most 2K steps).
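A minimal structural sketch, in Python, of the loop described on slides 39-40; maxflow_step and dijkstra_step are hypothetical helpers standing in for the MAXFLOW-UP/DOWN and DIJKSTRA-UP/DOWN subroutines named on the slides, whose internals are not reproduced here:

    def primal_dual(x, f, maxflow_step, dijkstra_step):
        # x             : current labelling (primal variables)
        # f             : current flow (dual variables), reused across iterations
        # maxflow_step  : hypothetical helper; for direction +1 it builds the graph
        #                 for minimising E(x + b), b in {0,1}^n, runs maxflow and
        #                 returns the updated (x, f)
        # dijkstra_step : hypothetical helper; moves labels as far as possible in
        #                 the given direction while keeping the edge optimality
        #                 condition, using Dijkstra's shortest path algorithm
        while True:
            x_old = list(x)
            for direction in (+1, -1):                 # UP phase, then DOWN phase
                x, f = maxflow_step(x, f, direction)   # MAXFLOW-UP / MAXFLOW-DOWN
                x = dijkstra_step(x, f, direction)     # DIJKSTRA-UP / DIJKSTRA-DOWN
            if x == x_old:                             # no further change: done
                return x, f

The point of the structure, per slide 35, is that the flow f persists across iterations instead of each maxflow problem being solved from scratch.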
41
Experimental results
[Images: input pair; maximal optimal configuration; minimal optimal configuration; average.]
42
Experimental results
[Images: input pair; maximal optimal configuration; minimal optimal configuration; average.]
43
Experimental results
[Images: input pair; maximal optimal configuration; minimal optimal configuration; average.]
44
Running times
[Chart comparing the primal and primal-dual algorithms.]
45
Running times
Initialisation is important:
– If x is already a global minimum, the algorithm terminates in 2 steps.
Two-stage process:
– Solve the problem in the overlap area (small graph).
– Use it as the initialisation for the full image.
Experimentally, the second stage takes 3 steps.
46
Running times
[Chart comparing MCNF (minimum cost network flow, [Goldberg97]), primal-dual with 1 stage, and primal-dual with 2 stages.]
47
Conclusions
Complexity of the primal algorithm for minimising L♮-convex functions:
– Tight bound on the number of steps.
– Improves the bounds of Murota and Bioucas-Dias et al.
New primal-dual algorithm:
– Applies to convex MRF functions.
– Experimentally much faster than the primal algorithm.
– With good initialisation, outperforms MCNF.