
1 Combinatorial Optimization and Computer Vision Philip Torr

2 Story How an attempt to solve one problem led into many different areas of computer vision and some interesting results.

3 Aim Given an image, to segment the object. The segmentation should (ideally) be shaped like the object (e.g. cow-like), obtained efficiently in an unsupervised manner, and able to handle self-occlusion. (Figure: object category model + cow image → segmented cow.)

4 Challenges Self-occlusion, shape variability, appearance variability.

5 Motivation Current methods require user intervention: object and background seed pixels (Boykov and Jolly, ICCV 01) or a bounding box of the object (Rother et al., SIGGRAPH 04). (Figure: cow image with object seed pixels.)

6 Motivation (Figure: cow image with both object and background seed pixels.)

7 Motivation (Figure: the resulting segmented image.)

10 Problem Manually intensive, and the segmentation is not guaranteed to be object-like. (Figure: a non-object-like segmentation.)

11 MRF for Image Segmentation Boykov and Jolly [ICCV 2001]. The MRF energy over the data D is a sum of unary likelihood terms and pair-wise terms (a Potts model plus a contrast term). The maximum-a-posteriori (MAP) solution is x* = arg min_x E(x).
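
To make the energy concrete, here is a minimal sketch (not the paper's code) of such an energy with unary terms and a Potts pairwise term; the toy arrays and the λ weight are illustrative assumptions.

```python
# Minimal sketch of a segmentation energy: unary likelihoods + Potts pairwise.
# unary[i, l] is assumed to hold -log P(pixel i | label l).
import numpy as np

def mrf_energy(labels, unary, edges, lam=1.0):
    """E(x) = sum_i unary(i, x_i) + lam * sum_(i,j) [x_i != x_j]."""
    e = sum(unary[i, labels[i]] for i in range(len(labels)))
    e += lam * sum(labels[i] != labels[j] for i, j in edges)
    return e

# Toy example: 3 pixels in a chain, binary labels (0 = background, 1 = object).
unary = np.array([[0.2, 1.5], [0.9, 0.4], [1.2, 0.3]])
edges = [(0, 1), (1, 2)]
print(mrf_energy([0, 1, 1], unary, edges, lam=0.5))  # 0.9 + one cut edge = 1.4
```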

12 GraphCut for Inference Cut: a collection of edges which separates the source from the sink. MinCut: the cut with minimum weight (sum of edge weights). Solution: the global optimum (MinCut) in polynomial time. (Figure: image with source = foreground, sink = background, and the cut.)

13 Energy Minimization using Graph Cuts Graph construction for Boolean random variables: minimize E_MRF(a_1, a_2) over a graph with nodes a_1, a_2, a Source (0) and a Sink (1).

14 Energy Minimization using Graph Cuts E_MRF(a_1, a_2) = 2a_1. The t-edges encode the unary terms (here an edge of weight 2).

15 Energy Minimization using Graph Cuts E_MRF(a_1, a_2) = 2a_1 + 5ā_1 (t-edges of weight 2 and 5).

16 Energy Minimization using Graph Cuts E_MRF(a_1, a_2) = 2a_1 + 5ā_1 + 9a_2 + 4ā_2 (t-edges of weight 2, 5, 9, 4).

17 Energy Minimization using Graph Cuts E_MRF(a_1, a_2) = 2a_1 + 5ā_1 + 9a_2 + 4ā_2 + 2a_1ā_2. The n-edges encode the pair-wise terms (here an edge of weight 2).

18 Energy Minimization using Graph Cuts E_MRF(a_1, a_2) = 2a_1 + 5ā_1 + 9a_2 + 4ā_2 + 2a_1ā_2 + ā_1a_2 (n-edges of weight 2 and 1).

20 Energy Minimization using Graph Cuts For a_1 = 1, a_2 = 1: E_MRF(1, 1) = 11 = cost of the st-cut.

21 Energy Minimization using Graph Cuts For a_1 = 1, a_2 = 0: E_MRF(1, 0) = 8 = cost of the st-cut.
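
As a check, here is a minimal sketch (assuming the networkx library; not the solver used in the papers) that builds this example graph and confirms that the st-mincut of value 8 corresponds to the labelling (a_1, a_2) = (1, 0).

```python
# E(a1, a2) = 2*a1 + 5*(1-a1) + 9*a2 + 4*(1-a2) + 2*a1*(1-a2) + (1-a1)*a2
import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "a1", capacity=2)   # coefficient of a1     (cost if a1 = 1)
G.add_edge("a1", "t", capacity=5)   # coefficient of (1-a1) (cost if a1 = 0)
G.add_edge("s", "a2", capacity=9)
G.add_edge("a2", "t", capacity=4)
G.add_edge("a2", "a1", capacity=2)  # pairwise term 2*a1*(1-a2)
G.add_edge("a1", "a2", capacity=1)  # pairwise term (1-a1)*a2

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
labels = {v: int(v in sink_side) for v in ("a1", "a2")}  # sink side = label 1
print(cut_value, labels)  # 8 and {'a1': 1, 'a2': 0}: matches E(1, 0) = 8
```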

22 The Max-flow Problem The st-mincut is computed via max-flow algorithms, subject to edge capacity and flow balance constraints. Notation: residual capacity = edge capacity − current flow. Simple augmenting-path based algorithms repeatedly find augmenting paths and push flow; the saturated edges constitute the st-mincut [Ford-Fulkerson theorem].

23 Minimum s-t Cut Algorithms Augmenting paths [Ford & Fulkerson, 1962]; push-relabel [Goldberg & Tarjan, 1986].

24 Augmenting Paths In a graph with two terminals, find a path from S (source) to T (sink) along non-saturated edges, and increase flow along this path until some edge saturates.

25 Augmenting Paths Find the next path and increase the flow again.

26 Augmenting Paths Iterate until all paths from S to T have at least one saturated edge: at that point MAX FLOW = MIN CUT.
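
Here is a minimal augmenting-path solver in that spirit (an Edmonds-Karp sketch using BFS; not the optimized tree-reusing solvers used on vision graphs).

```python
# Repeatedly find a shortest augmenting path and push its bottleneck flow.
from collections import deque

def max_flow(cap, s, t):
    """cap: dict {(u, v): capacity}. Returns the max-flow value."""
    for u, v in list(cap):              # ensure reverse residual edges exist
        cap.setdefault((v, u), 0)
    adj = {}
    for u, v in cap:
        adj.setdefault(u, set()).add(v)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:    # BFS for a non-saturated s-t path
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:             # every s-t path saturated: done
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)  # bottleneck residual capacity
        for u, v in path:
            cap[(u, v)] -= push
            cap[(v, u)] += push
        flow += push

caps = {("s", "a1"): 2, ("a1", "t"): 5, ("s", "a2"): 9, ("a2", "t"): 4,
        ("a2", "a1"): 2, ("a1", "a2"): 1}
print(max_flow(caps, "s", "t"))  # 8, matching the st-mincut above
```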

27 MRF, Graphical Model The probability of a labelling consists of a likelihood, a unary potential Φ_x(D|m_x) based on the colour of the pixel, and a prior Ψ_xy(m_x, m_y) which favours the same labels for neighbours (pairwise potentials). Here D denotes the pixels and m the labels over the image plane.
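
In symbols (my reconstruction from the slide's notation):

```latex
P(\mathbf{m} \mid D) \;\propto\; \prod_{x} \Phi_x(D \mid m_x) \prod_{(x,y)} \Psi_{xy}(m_x, m_y)
```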

28 Example (Figure: cow image with object and background seed pixels; the colour likelihood ratios Φ_x(D|obj) and Φ_x(D|bkg), and the prior Ψ_xy(m_x, m_y).)

29 Example (Figure: the colour likelihood ratio and the pair-wise terms on the cow image.)

30 Contrast-Dependent MRF The probability of a labelling in addition has a contrast term Φ(D|m_x, m_y) which favours boundaries that lie on image edges.
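
One common form of such a contrast weight is sketched below (the exact constants and normalization vary between papers; γ and β here are illustrative).

```python
# Pairwise cost that decays with squared intensity difference, so cutting
# across a strong image edge is cheap and cutting a flat region is expensive.
import numpy as np

def contrast_weight(I_x, I_y, gamma=10.0, beta=0.5):
    diff2 = float(np.sum((np.asarray(I_x) - np.asarray(I_y)) ** 2))
    return gamma * np.exp(-beta * diff2)

print(contrast_weight([0.9, 0.9, 0.9], [0.1, 0.2, 0.1]))  # small across an edge
print(contrast_weight([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]))  # large in flat regions
```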

31 Example (Figure: the colour likelihood ratio and the combined pair-wise term Ψ_xy(m_x, m_y) + Φ(D|m_x, m_y) on the cow image.)

32 Example (Figure: the colour likelihood ratio with the prior + contrast terms.)

33 Object Graphical Model (Object Category Specific MRF) The probability of a labelling in addition has a unary potential Φ_x(m_x|Θ) which depends on the distance from Θ, the shape parameter.

34 Example (Figure: prior + contrast, and distance from the shape prior Θ.)

35 Example (Figure: prior + contrast, and likelihood + distance from the shape prior Θ.)

37 Thought Rather than using user input to define the histograms, we can imagine using object detection instead.

38 Shape Model BMVC 2004

39 Pictorial Structure Fischler & Elschlager, 1973; Yuille, '91; Brunelli & Poggio, '93; Lades, v.d. Malsburg et al., '93; Cootes, Lanitis, Taylor et al., '95; Amit & Geman, '95, '99; Perona et al., '95, '96, '98, '00.

40 Layered Pictorial Structures (LPS) A generative model: a composition of parts plus their spatial layout (pairwise configuration). Parts in layer 2 can occlude parts in layer 1.

41 Layered Pictorial Structures (LPS) Transformations Θ_1, P(Θ_1) = 0.9: a likely cow instance.

42 Layered Pictorial Structures (LPS) Transformations Θ_2, P(Θ_2) = 0.8: another cow instance.

43 Layered Pictorial Structures (LPS) Transformations Θ_3, P(Θ_3) = 0.01: an unlikely instance.

44 How to Learn LPS From video via motion segmentation: see Kumar, Torr and Zisserman, ICCV 2005 (a graph-cut based method).

45 Examples

46 LPS for Detection Learning: learnt automatically using a set of examples. Detection: matches the LPS to the image using loopy belief propagation, localizing the object parts.

47 Detection Like a proposal process.

48 Pictorial Structures (PS) PS = 2D parts + configuration (Fischler and Elschlager, 1973). Aim: learn pictorial structures in an unsupervised manner: identify parts, learn the configuration, and learn the relative depth of parts. Parts + configuration + relative depth = Layered Pictorial Structures (LPS).

49 Motivation Matching pictorial structures (Felzenszwalb et al., 2001): an MRF over parts P_1, P_2, P_3, each posed at (x, y, …), combining a part likelihood (outline and texture) with a spatial prior.

50 Motivation Matching pictorial structures (Felzenszwalb et al., 2001): the unary potentials are negative log-likelihoods, and the pairwise term is a Potts model that scores whether a configuration is valid.

51 Motivation The same model evaluated on the image yields Pr(Cow) for the matched configuration.

52 Bayesian Formulation (MRF) D = image; D_i = pixels ∈ p_i, given l_i (PDF projection theorem); z = sufficient statistics. ψ(l_i, l_j) = const if the configuration is valid, 0 otherwise (Potts model).

53 Combinatorial Optimization SDP formulation (Torr 2001, AI Stats): best bound. SOCP formulation (Kumar, Torr & Zisserman, this conference): a good compromise of speed and accuracy. LBP (Huttenlocher and many others): worst bound.

54 Defining the likelihood We want a likelihood that can combine both the outline and the interior appearance of a part. Define features which will be sufficient statistics to discriminate foreground and background:

55 Features Outline: z_1, the chamfer distance. Interior: z_2, textons. Model the joint distribution of (z_1, z_2) as a 2D Gaussian.
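
A sketch of evaluating such a part likelihood (assuming scipy; the mean and covariance below are made-up placeholders, not learned values).

```python
import numpy as np
from scipy.stats import multivariate_normal

# Joint foreground model over (z1, z2) = (chamfer score, texton score).
fg_model = multivariate_normal(mean=[0.1, 0.2], cov=[[0.02, 0.0], [0.0, 0.05]])
z = np.array([0.12, 0.25])
print(fg_model.logpdf(z))   # log-likelihood used in the part's unary term
```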

56 Chamfer Match Score Outline (z_1): the minimum chamfer distance over multiple outline exemplars, d_cham = (1/n) Σ_i min{ min_j ||u_i − v_j||, τ }. (Figure: image → edge image → distance transform.)
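
A minimal sketch of this score via a distance transform (assuming scipy; the edge map and template points are toy data).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(edge_map, template_points, tau=10.0):
    # Distance from every pixel to the nearest edge pixel (edges must be 0s).
    dt = distance_transform_edt(~edge_map)
    d = np.array([dt[y, x] for (y, x) in template_points])
    return np.minimum(d, tau).mean()   # truncation by tau makes it robust

edge_map = np.zeros((8, 8), dtype=bool)
edge_map[4, :] = True                  # a horizontal image edge
template = [(4, 2), (5, 3), (3, 6)]    # template outline points (row, col)
print(chamfer_score(edge_map, template))
```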

57 Texton Match Score Texture (z_2): an MRF classifier (Varma and Zisserman, CVPR 03). Multiple texture exemplars x of class t; textons over a 3×3 square neighbourhood, vector-quantized in texton space; the descriptor is a histogram of the texton labelling, compared with the χ² distance.
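
The χ² histogram distance is simple to state; a sketch follows (assuming the texton labelling has already been vector-quantized into normalized histograms).

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

h_query = np.array([0.5, 0.3, 0.2])      # texton histogram of an image patch
h_exemplar = np.array([0.4, 0.4, 0.2])   # texton histogram of an exemplar
print(chi2_distance(h_query, h_exemplar))  # 0 iff the histograms are identical
```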

58 Bag of Words / Histogram of Textons Having slagged off BoW, I reveal we used it all along; no big deal. So this is like a spatially aware bag-of-words model, using a spatially flexible set of templates to work out our bag of words.

59 2. Fitting the Model Cascades of classifiers for efficient likelihood evaluation. Solving the MRF: LBP (use a fast algorithm); GBP if LBP doesn't converge; could use semi-definite programming (2003); recent work shows a second-order cone programming method is best (CVPR 2006).

60 Efficient Detection of Parts A cascade of classifiers: the top level uses chamfer matching and the distance transform for efficient pre-filtering; the lower levels use the full texture model for verification, with efficient nearest-neighbour speed-ups.

61 Cascade of Classifiers (for each part f) Y. Amit and D. Geman, '97?; S. Baker and S. Nayar, '95.

62 High Levels based on Outline (x,y)

63 Low Levels on Texture The top levels of the tree use the outline to eliminate patches of the image efficiently, using the chamfer distance and a precomputed distance map. The remaining candidates are evaluated using the full texture model.

64 Efficient Nearest Neighbour Goldstein, Platt and Burges (MSR Tech Report, 2003): conversion from fixed-distance to rectangle search. bitvector_ij(R_k) = 1 if exemplar R_k ∈ interval I_i in dimension j, and 0 otherwise. To find the nearest neighbour of x: find the intervals in all dimensions, AND the appropriate bitvectors, then run the nearest-neighbour search on the pruned exemplars.
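
A sketch of the pruning idea (my reading of the slide, not the paper's exact data structure): per dimension, precompute which exemplars fall in each interval; at query time AND the bitvectors of the intervals that overlap the search rectangle, then run exact search on the survivors.

```python
import numpy as np

def build_bitvectors(X, n_bins=4):
    lo, hi = X.min(0), X.max(0)
    edges = [np.linspace(lo[j], hi[j], n_bins + 1) for j in range(X.shape[1])]
    bits = {}
    for j in range(X.shape[1]):
        idx = np.clip(np.digitize(X[:, j], edges[j]) - 1, 0, n_bins - 1)
        for b in range(n_bins):
            bits[(j, b)] = idx == b          # boolean mask over exemplars
    return edges, bits

def query(x, r, X, edges, bits):
    mask = np.ones(len(X), dtype=bool)
    for j, e in enumerate(edges):
        bins = np.nonzero((e[1:] >= x[j] - r) & (e[:-1] <= x[j] + r))[0]
        mask &= np.any([bits[(j, b)] for b in bins], axis=0)  # OR bins, AND dims
    cand = np.nonzero(mask)[0]               # pruned exemplar set
    if len(cand) == 0:
        return None
    d = np.linalg.norm(X[cand] - x, axis=1)  # exact search on survivors only
    return cand[np.argmin(d)]

X = np.random.RandomState(0).rand(100, 2)    # toy exemplars
print(query(np.array([0.5, 0.5]), 0.2, X, *build_bitvectors(X)))
```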

65 Inspiration ICCV 2003, Stenger et al. System developed for tracking articulated objects such as hands or bodies, based on efficient detection.

66 Evaluation at Multiple Resolutions Tree: 9000 templates of a rigid pointing hand.

67 Templates at Level 1

68 Templates at Level 2

69 Templates at Level 3

70 Tracking Results

71 Marginalize out Pose Get an initial estimate of the pose distribution; use EM to marginalize out the pose.

72 Results Using LPS Model for Cow (Figure: image and segmentation.)

73 Results Using LPS Model for Cow In the absence of a clear boundary between object and background. (Figure: image and segmentation.)

74 Results Using LPS Model for Cow (Figure: image and segmentation.)

75 Results Using LPS Model for Cow (Figure: image and segmentation.)

76 Results Using LPS Model for Horse (Figure: image and segmentation.)

77 Results Using LPS Model for Horse (Figure: image and segmentation.)

78 Results (Figure: comparison of our method with Leibe and Schiele.)

79 Thoughts Object models can help segmentation, but good models are hard to obtain.

80 Do we really need accurate models? The segmentation boundary can be extracted from edges; a rough 3D shape prior is enough for region disambiguation.

81 Energy of the Pose-specific MRF The energy to be minimized consists of a unary term, a shape prior, and a pairwise potential (Potts model). But what should the value of θ be?

82 The Different Terms of the MRF (Figure: original image; likelihood of being foreground given a foreground histogram; Grimson-Stauffer segmentation; shape prior model; shape prior (distance transform); likelihood of being foreground given all the terms; resulting graph-cuts segmentation.)

83 Can segment multiple views simultaneously

84 Solve via gradient descent; comparable to level-set methods. Other approaches could be used (e.g. ObjCut). A graph cut is needed per function evaluation, as in the sketch below.
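
A minimal sketch of that loop follows; `min_energy` is a hypothetical stand-in (a quadratic, so the sketch runs on its own), whereas in the real system each call would build the pose-specific unaries and solve a graph cut.

```python
import numpy as np

def min_energy(theta):
    # Placeholder for: build pose-specific unaries, run st-mincut, return E(x*).
    return float((theta[0] - 1.0) ** 2 + 2.0 * (theta[1] + 0.5) ** 2)

def optimize_pose(theta, step=0.1, h=1e-4, iters=100):
    theta = np.asarray(theta, float)
    for _ in range(iters):
        grad = np.array([                  # finite-difference gradient:
            (min_energy(theta + h * e) - min_energy(theta - h * e)) / (2 * h)
            for e in np.eye(len(theta))])  # two energy evaluations per dimension
        theta -= step * grad
    return theta

print(optimize_pose([0.0, 0.0]))           # converges near (1.0, -0.5)
```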

85 Formulating the Pose Inference Problem

86 But… to compute the MAP of E(x) w.r.t. the pose, the unary terms change at EACH iteration and the max-flow must be recomputed! However, Kohli and Torr showed how dynamic graph cuts can be used to efficiently find MAP solutions for MRFs that change minimally from one time instant to the next: Dynamic Graph Cuts (ICCV 05).

87 Dynamic Graph Cuts Solve problem P_A to get solution S_A (a computationally expensive operation). For a similar problem P_B, use the differences between A and B to construct a simpler problem P_B*, from which the solution S_B is a cheaper operation.

88 Dynamic Image Segmentation (Figure: image, the flows in the n-edges, and the segmentation obtained.)

89 Reparametrization Key observation: adding a constant α to both the t-edges of a node (e.g. t-edge weights 9 + α and 4 + α on a_2) does not change the edges constituting the st-mincut. With E(a_1, a_2) = 2a_1 + 5ā_1 + 9a_2 + 4ā_2 + 2a_1ā_2 + ā_1a_2, we get E*(a_1, a_2) = E(a_1, a_2) + α(a_2 + ā_2) = E(a_1, a_2) + α, since a_2 + ā_2 = 1.
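
A brute-force check of this claim on the running example (a sketch; any α works):

```python
def E(a1, a2):
    return 2*a1 + 5*(1-a1) + 9*a2 + 4*(1-a2) + 2*a1*(1-a2) + (1-a1)*a2

alpha = 3
def E_star(a1, a2):                # both t-edges of a2 increased by alpha
    return E(a1, a2) + alpha*a2 + alpha*(1-a2)

for a1 in (0, 1):
    for a2 in (0, 1):
        assert E_star(a1, a2) == E(a1, a2) + alpha
print("E* = E + alpha for every labelling, so the argmin is unchanged")
```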

90 Reparametrization, Second Type Weights can also be moved between n-edges and t-edges (e.g. 5 + α, 2 + α and 1 − α on the surrounding edges): E*(a_1, a_2) = E(a_1, a_2) + αā_1 + αa_2 + αa_1ā_2 − αā_1a_2 = E(a_1, a_2) + α(ā_1 + a_2 + a_1(1 − a_2) − ā_1a_2) = E(a_1, a_2) + α. All reparametrizations of the graph are sums of these two types.

91 Reparametrization, Second Type Both types maintain the solution and merely add a constant α to the energy.

92 Reparametrization A nice result (easy to prove): all other reparametrizations can be viewed in terms of these two basic operations. The proof is in Hammer, and also in one of Vlad's recent papers.

93 Graph Re-parameterization (Figure: the original graph G from s to t over nodes x_i and x_j; edges are labelled flow/residual capacity, e.g. 0/9, 0/7, 0/5, 0/2, 0/4, 0/1.)

94 Graph Re-parameterization Computing the max-flow yields the residual graph G_r and the st-mincut. (Figure: the original graph and the residual graph with the cut edges marked; labels are flow/residual capacity.)

95 Update t-edge Capacities (Figure: the residual graph G_r over x_i and x_j.)

96 Update t-edge Capacities A t-edge capacity changes from 7 to 4.

97 Update t-edge Capacities With the capacity now 4 but a flow of 5 through the edge, the edge capacity constraint is violated (flow > capacity). The excess flow is e = flow − new capacity = 5 − 4 = 1.

98 Update t-edge Capacities To restore feasibility, add e to both t-edges connected to node i.

99 Update t-edge Capacities After adding e = 1 to both t-edges of node i, the capacity constraint is satisfied again.
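
A sketch of that rule, following the slides' example (not library code):

```python
def update_t_edge(flow_s_i, new_cap_s_i, cap_i_t):
    excess = flow_s_i - new_cap_s_i
    if excess > 0:   # flow > capacity: constraint violated
        # Add the excess to both t-edges of node i (a reparametrization,
        # so the minimizing cut is unchanged).
        return new_cap_s_i + excess, cap_i_t + excess
    return new_cap_s_i, cap_i_t

# Flow 5 on (s -> x_i), capacity drops from 7 to 4, other t-edge has
# capacity 2: the new capacities become 5 and 3.
print(update_t_edge(flow_s_i=5, new_cap_s_i=4, cap_i_t=2))  # (5, 3)
```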

100 Update n-edge Capacities An n-edge capacity changes from 5 to 2. (Figure: the residual graph G_r.)

101 Update n-edge Capacities With the capacity now 2 but a flow of 3 on the edge, the edge capacity constraint is violated.

102 Update n-edge Capacities Reduce the flow to satisfy the constraint.

103 Update n-edge Capacities Reducing the flow causes a flow imbalance: an excess at one endpoint and a deficiency at the other.

104 Update n-edge Capacities Push the excess flow to/from the terminals, creating capacity by adding α = excess to both t-edges.
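
A loose sketch of that bookkeeping (my reading of the slides, not the paper's exact procedure): withdraw the violating flow, then restore flow balance by adding the withdrawn amount to the endpoints' t-edge capacities.

```python
def update_n_edge(flow_ij, new_cap_ij, t_caps_i, t_caps_j):
    withdrawn = max(0, flow_ij - new_cap_ij)   # flow that must be removed
    flow_ij -= withdrawn
    # x_i gains an excess and x_j a matching deficiency; push both through
    # the terminals by adding `withdrawn` to the t-edge capacities.
    t_caps_i = (t_caps_i[0] + withdrawn, t_caps_i[1] + withdrawn)
    t_caps_j = (t_caps_j[0] + withdrawn, t_caps_j[1] + withdrawn)
    return flow_ij, t_caps_i, t_caps_j

# Toy numbers in the slides' spirit: flow 3 on (x_i -> x_j), capacity 5 -> 2.
print(update_n_edge(flow_ij=3, new_cap_ij=2, t_caps_i=(9, 4), t_caps_j=(2, 1)))
```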

105 Update n-edge Capacities (Figure: the updated residual graph after the excess has been pushed through the terminals.)

107 Our Algorithm (Figure: the first segmentation problem G_a is solved by maximum flow, giving the MAP solution and the residual graph G_r; the differences between G_a and G_b produce the updated residual graph G`, from which the second segmentation problem G_b is solved.)

108 Dynamic Graph Cuts vs Active Cuts Our method recycles flow; Active Cuts recycles the cut; both methods recycle the search trees.

109 Experimental Analysis An MRF consisting of 2×10^5 latent variables connected in a 4-neighbourhood; we measure the running time of the dynamic algorithm.

110 Experimental Analysis Image segmentation in videos (unary and pairwise terms) at resolution 720×576: static graph cuts take 220 msec, dynamic graph cuts (optimized) take 50 msec.

111 Segmentation Comparison (Figure: Grimson-Stauffer, Bathia04, and our method.)

112 Segmentation + Pose inference [Images courtesy: M. Black, L. Sigal]

113 Segmentation + Pose inference [Images courtesy: Vicon]

114 Max-Marginals for Parameter Learning Use max-marginals instead of the pseudo-marginals from LBP (from Sanjiv Kumar).

115 Volumetric Graph Cuts The same machinery can be applied in 3D. (Figure: source, sink and min cut through a volume.)

116 Results Model House

117 Results Stone carving

118 Results Haniwa

119 Conclusion Combining pose inference and segmentation is worth investigating, and there is lots more to do to extend MRF models. Combinatorial optimization is a very interesting and hot area in vision at the moment; algorithms are as important as models.

120 Demo Ask Pushmeet for the code.
