An Analysis of Convex Relaxations for MAP Estimation
M. Pawan Kumar, Vladimir Kolmogorov, Philip Torr

Aim: to analyze convex relaxations for MAP estimation. Random variables $V = \{V_1, \dots, V_4\}$, label set $L = \{0, 1\}$, labelling $m = \{1, 0, 0, 1\}$ (each variable takes label '0' or '1').

Aim: the cost of a labelling accumulates a unary term per variable and a pairwise term per edge. For the example labelling the build-up starts at Cost(m) = 2 and sums to Cost(m) = ... = 13. The minimum cost labelling is the MAP estimate, and $\Pr(m) \propto \exp(-\text{Cost}(m))$.

Aim: MAP estimation is an NP-hard problem, so which approximate algorithm is the best?

Objectives: compare existing convex relaxations (LP, QP and SOCP); develop new relaxations based on the comparison.

Outline: Integer Programming Formulation; Existing Relaxations; Comparison; Generalization of Results; Two New SOCP Relaxations.

Integer Programming Formulation: two variables $V_1, V_2$, labels '0' and '1', labelling $m = \{1, 0\}$. Unary cost vector $u = [5\ 2\ ;\ 2\ 4]^T$, where 5 is the cost of $V_1 = 0$ and 2 the cost of $V_1 = 1$.

Label vector $x = [-1\ 1\ ;\ 1\ -1]^T$, with $x_i = 1$ if the corresponding variable takes that label ($V_1 = 1$) and $x_i = -1$ otherwise ($V_1 \neq 0$). Recall that the aim is to find the optimal $x$.

Sum of unary costs $= \frac{1}{2}\sum_i u_i\,(1 + x_i)$.

Pairwise cost matrix $P$: entry $P_{ij}$ is the cost of assignments $i$ and $j$ holding together; entries for assignments of the same variable are 0, and the remaining entries hold the edge costs (e.g. 3 for one of the $(V_1, V_2)$ label pairs).

Sum of pairwise costs $= \frac{1}{4}\sum_{ij} P_{ij}\,(1 + x_i)(1 + x_j)$.

$\frac{1}{4}\sum_{ij} P_{ij}\,(1 + x_i + x_j + x_i x_j) = \frac{1}{4}\sum_{ij} P_{ij}\,(1 + x_i + x_j + X_{ij})$, where $X = x x^T$, i.e. $X_{ij} = x_i x_j$.

Constraints. Uniqueness constraint: $\sum_{i \in V_a} x_i = 2 - |L|$ for each variable $V_a$ (exactly one of its labels is active). Integer constraints: $x_i \in \{-1, 1\}$ and $X = x x^T$.

$x^* = \arg\min\ \frac{1}{2}\sum_i u_i\,(1 + x_i) + \frac{1}{4}\sum_{ij} P_{ij}\,(1 + x_i + x_j + X_{ij})$ subject to $\sum_{i \in V_a} x_i = 2 - |L|$ (convex), $x_i \in \{-1, 1\}$ and $X = x x^T$ (non-convex).
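To make the formulation concrete, here is a small numeric sketch of the objective. The unary costs follow the example above; the pairwise entries are made up for illustration, with each pairwise cost stored once in $P$.

```python
import numpy as np

# Evaluate the integer-program objective for a fixed labelling.
# u follows the slides' example; the entries of P are hypothetical.
u = np.array([5.0, 2.0, 2.0, 4.0])      # costs of V1=0, V1=1, V2=0, V2=1
x = np.array([-1.0, 1.0, 1.0, -1.0])    # label vector for m = {1, 0}
P = np.zeros((4, 4))
P[1, 2] = 2.0                           # hypothetical cost of (V1=1, V2=0)
P[0, 3] = 3.0                           # hypothetical cost of (V1=0, V2=1)

X = np.outer(x, x)                      # X = x x^T, so X_ij = x_i x_j
unary = 0.5 * np.sum(u * (1 + x))
pairwise = 0.25 * np.sum(P * (1 + x[:, None] + x[None, :] + X))
print(unary + pairwise)                 # 4 (unary) + 2 (pairwise) = 6
```

Note how $\frac{1}{4}(1 + x_i + x_j + X_{ij})$ equals 1 exactly when both assignments are active and 0 otherwise, so only the cost of the chosen label pair survives.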

Outline: Integer Programming Formulation; Existing Relaxations – Linear Programming (LP-S), Semidefinite Programming (SDP-L), Second Order Cone Programming (SOCP-MS); Comparison; Generalization of Results; Two New SOCP Relaxations.

LP-S (Schlesinger, 1976): retain the convex part, relax the non-convex constraints. The integer constraint $x_i \in \{-1, 1\}$ is relaxed to $x_i \in [-1, 1]$, and $X = x x^T$ is replaced by the linear constraints $X_{ij} \in [-1, 1]$, $1 + x_i + x_j + X_{ij} \ge 0$, and $\sum_{j \in V_b} X_{ij} = (2 - |L|)\,x_i$. Minimizing the same objective $\frac{1}{2}\sum_i u_i\,(1 + x_i) + \frac{1}{4}\sum_{ij} P_{ij}\,(1 + x_i + x_j + X_{ij})$ over these constraints gives the LP-S relaxation.
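A minimal sketch of LP-S on the toy problem above, assuming the cvxpy modelling package; the costs are the same made-up example, and with $|L| = 2$ the right-hand sides $2 - |L|$ reduce to 0.

```python
import cvxpy as cp
import numpy as np

# LP-S sketch on a two-variable binary MRF. Indices 0,1 = labels of V1;
# indices 2,3 = labels of V2. The pairwise costs are hypothetical.
u = np.array([5.0, 2.0, 2.0, 4.0])
P = np.zeros((4, 4)); P[1, 2] = 2.0; P[0, 3] = 3.0

x = cp.Variable(4)
X = cp.Variable((4, 4), symmetric=True)

# 1 + x_i + x_j + X_ij, built as a 4x4 affine expression
S = 1 + cp.reshape(x, (4, 1)) @ np.ones((1, 4)) \
      + np.ones((4, 1)) @ cp.reshape(x, (1, 4)) + X
obj = 0.5 * cp.sum(cp.multiply(u, 1 + x)) + 0.25 * cp.sum(cp.multiply(P, S))

cons = [x >= -1, x <= 1, X >= -1, X <= 1, S >= 0,
        x[0] + x[1] == 0, x[2] + x[3] == 0]     # uniqueness: sum = 2 - |L|
for i in (0, 1):                                # marginalisation constraints:
    cons.append(X[i, 2] + X[i, 3] == 0)         # sum_j X_ij = (2 - |L|) x_i
for j in (2, 3):
    cons.append(X[0, j] + X[1, j] == 0)

prob = cp.Problem(cp.Minimize(obj), cons)
prob.solve()
print(prob.value)
```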

Outline: Integer Programming Formulation; Existing Relaxations – Linear Programming (LP-S), Semidefinite Programming (SDP-L), Second Order Cone Programming (SOCP-MS); Comparison; Generalization of Results; Two New SOCP Relaxations; Experiments.

SDP-L (Lasserre, 2000): retain the convex part, relax the non-convex constraint. Again $x_i \in \{-1, 1\}$ is relaxed to $x_i \in [-1, 1]$. Arranging $x$ and $X$ in the block matrix $\begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix}$, the constraint $X = x x^T$ is equivalent to requiring rank 1 (non-convex) together with $X_{ii} = 1$ and positive semidefiniteness (convex); SDP-L keeps only the convex conditions.

Schur's Complement: $\begin{bmatrix} A & B \\ B^T & C \end{bmatrix} = \begin{bmatrix} I & 0 \\ B^T A^{-1} & I \end{bmatrix} \begin{bmatrix} A & 0 \\ 0 & C - B^T A^{-1} B \end{bmatrix} \begin{bmatrix} I & A^{-1} B \\ 0 & I \end{bmatrix}$, so for $A \succ 0$: $\begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \succeq 0 \iff C - B^T A^{-1} B \succeq 0$.

SDP-L: applying Schur's complement with $A = 1$, $B = x^T$, $C = X$ gives $X - x x^T \succeq 0 \iff \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} \succeq 0$.
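A quick numpy sanity check of this equivalence, purely illustrative:

```python
import numpy as np

# Check: [1 x^T; x X] is PSD exactly when X - x x^T is PSD.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=3)
X = np.outer(x, x) + 0.1 * np.eye(3)   # X - x x^T = 0.1 I, which is PSD

M = np.block([[np.ones((1, 1)), x[None, :]],
              [x[:, None], X]])
print(np.linalg.eigvalsh(M).min())                   # >= 0 up to rounding
print(np.linalg.eigvalsh(X - np.outer(x, x)).min())  # >= 0 as well
```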

SDP-L: $x^* = \arg\min\ \frac{1}{2}\sum_i u_i\,(1 + x_i) + \frac{1}{4}\sum_{ij} P_{ij}\,(1 + x_i + x_j + X_{ij})$ subject to $\sum_{i \in V_a} x_i = 2 - |L|$, $x_i \in [-1, 1]$, $X_{ii} = 1$ and $X - x x^T \succeq 0$. Accurate, but inefficient.

Outline: Integer Programming Formulation; Existing Relaxations – Linear Programming (LP-S), Semidefinite Programming (SDP-L), Second Order Cone Programming (SOCP-MS); Comparison; Generalization of Results; Two New SOCP Relaxations.

SOCP Relaxation: derive an SOCP relaxation from the SDP relaxation by further relaxing the constraint $X - x x^T \succeq 0$.

1-D Example: $X - x x^T \succeq 0$ becomes $X - x^2 \ge 0$, i.e. $x^2 \le X$, an SOC of the form $\|v\|^2 \le s\,t$ (here with $t = 1$). More generally, the Frobenius inner product of two semidefinite matrices is non-negative: $A \bullet (X - x x^T) \ge 0$ for any $A \succeq 0$.

2-D Example: $X = \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} = \begin{bmatrix} 1 & X_{12} \\ X_{12} & 1 \end{bmatrix}$ and $x x^T = \begin{bmatrix} x_1^2 & x_1 x_2 \\ x_1 x_2 & x_2^2 \end{bmatrix}$.

2-D Example: $X - x x^T = \begin{bmatrix} 1 - x_1^2 & X_{12} - x_1 x_2 \\ X_{12} - x_1 x_2 & 1 - x_2^2 \end{bmatrix}$. Taking Frobenius inner products with matrices $C \succeq 0$ yields SOC constraints: $C_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ gives $x_1^2 \le 1$, i.e. $-1 \le x_1 \le 1$; $C_2 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$ gives $x_2^2 \le 1$, i.e. $-1 \le x_2 \le 1$; $C_3 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ gives $(x_1 + x_2)^2 \le 2 + 2 X_{12}$; $C_4 = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$ gives $(x_1 - x_2)^2 \le 2 - 2 X_{12}$. Each is an SOC of the form $\|v\|^2 \le s\,t$.

SOCP Relaxation (Kim and Kojima, 2000): consider a matrix $C_1 = U U^T \succeq 0$. Then $C_1 \bullet (X - x x^T) \ge 0$ rearranges to $\|U^T x\|^2 \le X \bullet C_1$, an SOC of the form $\|v\|^2 \le s\,t$. Continue for $C_2, C_3, \dots, C_n$.
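A numeric illustration of this construction, as my own sketch:

```python
import numpy as np

# For any C = U U^T >= 0 and any X with X - x x^T >= 0, the inequality
# C . (X - x x^T) >= 0 rearranges to ||U^T x||^2 <= X . C.
rng = np.random.default_rng(1)
U = rng.normal(size=(3, 2))
C = U @ U.T                            # C >= 0 by construction
x = rng.uniform(-1, 1, size=3)
X = np.outer(x, x) + 0.5 * np.eye(3)   # X - x x^T >= 0

lhs = np.linalg.norm(U.T @ x) ** 2     # equals x^T C x
rhs = np.sum(X * C)                    # Frobenius inner product X . C
print(lhs <= rhs + 1e-9)               # True
```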

SOCP Relaxation: how many constraints are needed for SOCP = SDP? Infinitely many, one for every $C \succeq 0$. Instead, specify constraints similar to the 2-D example on each edge $(i, j)$: $(x_i + x_j)^2 \le 2 + 2 X_{ij}$ and $(x_i - x_j)^2 \le 2 - 2 X_{ij}$.

SOCP-MS (Muramatsu and Suzuki, 2003): $x^* = \arg\min\ \frac{1}{2}\sum_i u_i\,(1 + x_i) + \frac{1}{4}\sum_{ij} P_{ij}\,(1 + x_i + x_j + X_{ij})$ subject to $\sum_{i \in V_a} x_i = 2 - |L|$, $x_i \in [-1, 1]$, $(x_i + x_j)^2 \le 2 + 2 X_{ij}$ and $(x_i - x_j)^2 \le 2 - 2 X_{ij}$, specified only when $P_{ij} \neq 0$.
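A cvxpy sketch of SOCP-MS on the toy problem, again with made-up costs; one scalar $X_{ij}$ is kept per pair with nonzero $P_{ij}$.

```python
import cvxpy as cp
import numpy as np

# SOCP-MS sketch: the PSD constraint is replaced by two SOC constraints
# per pair with nonzero pairwise cost. Costs are hypothetical.
u = np.array([5.0, 2.0, 2.0, 4.0])
pairs = {(1, 2): 2.0, (0, 3): 3.0}           # nonzero P_ij

x = cp.Variable(4)
X = {e: cp.Variable() for e in pairs}        # one X_ij per specified pair

obj = 0.5 * cp.sum(cp.multiply(u, 1 + x))
cons = [x >= -1, x <= 1,
        x[0] + x[1] == 0, x[2] + x[3] == 0]  # uniqueness, 2 - |L| = 0
for (i, j), p in pairs.items():
    obj += 0.25 * p * (1 + x[i] + x[j] + X[(i, j)])
    cons += [cp.square(x[i] + x[j]) <= 2 + 2 * X[(i, j)],
             cp.square(x[i] - x[j]) <= 2 - 2 * X[(i, j)]]

prob = cp.Problem(cp.Minimize(obj), cons)
prob.solve()
print(prob.value)
```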

Outline: Integer Programming Formulation; Existing Relaxations; Comparison; Generalization of Results; Two New SOCP Relaxations.

Dominating Relaxation: relaxation A dominates relaxation B if, for every MAP estimation problem $(u, P)$, the optimal value of A is $\ge$ the optimal value of B. Since both are lower bounds on the true minimum, dominating relaxations are better (tighter).

Equivalent Relaxations: A and B are equivalent if A dominates B and B dominates A, i.e. their optimal values agree for every MAP estimation problem $(u, P)$.

Strictly Dominating Relaxation: A strictly dominates B if A dominates B and B does not dominate A, i.e. A's optimal value is strictly greater for at least one MAP estimation problem $(u, P)$.

SOCP-MS: at the optimum the SOC constraints are tight, so $X_{ij} = \frac{(x_i + x_j)^2}{2} - 1$ when $P_{ij} \ge 0$ and $X_{ij} = 1 - \frac{(x_i - x_j)^2}{2}$ when $P_{ij} < 0$. Substituting these makes SOCP-MS a QP, the same as the QP of Ravikumar and Lafferty, 2005: SOCP-MS ≡ QP-RL.

LP-S vs. SOCP-MS: the two differ in the way they relax $X = x x^T$. LP-S uses $X_{ij} \in [-1, 1]$, $1 + x_i + x_j + X_{ij} \ge 0$ and $\sum_{j \in V_b} X_{ij} = (2 - |L|)\,x_i$; SOCP-MS uses $(x_i + x_j)^2 \le 2 + 2 X_{ij}$ and $(x_i - x_j)^2 \le 2 - 2 X_{ij}$. Compare the feasibility regions $F(\text{LP-S})$ and $F(\text{SOCP-MS})$.

LP-S vs. SOCP-MS: LP-S strictly dominates SOCP-MS; LP-S strictly dominates QP-RL. Where have we gone wrong? A quick recap!

Recap of SOCP-MS: on each edge $(i, j)$, the matrix $C = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ gives $(x_i + x_j)^2 \le 2 + 2 X_{ij}$ and $C = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$ gives $(x_i - x_j)^2 \le 2 - 2 X_{ij}$. Can we use different $C$ matrices? Can we use a different subgraph?

Outline: Integer Programming Formulation; Existing Relaxations; Comparison; Generalization of Results – SOCP Relaxations on Trees, SOCP Relaxations on Cycles; Two New SOCP Relaxations.

SOCP Relaxations on Trees: choose any arbitrary tree.

SOCP Relaxations on Trees: choose any arbitrary $C \succeq 0$ and repeat over trees to get the relaxation SOCP-T. LP-S strictly dominates SOCP-T; LP-S strictly dominates QP-T.

Outline: Integer Programming Formulation; Existing Relaxations; Comparison; Generalization of Results – SOCP Relaxations on Trees, SOCP Relaxations on Cycles; Two New SOCP Relaxations.

SOCP Relaxations on Cycles: choose an arbitrary even cycle with $P_{ij} \ge 0$ for all edges, or $P_{ij} \le 0$ for all edges.

SOCP Relaxations on Cycles: choose any arbitrary $C \succeq 0$ and repeat over even cycles to get the relaxation SOCP-E. LP-S strictly dominates SOCP-E; LP-S strictly dominates QP-E.

SOCP Relaxations on Cycles: the result also holds for odd cycles with $P_{ij} \le 0$, for odd cycles with $P_{ij} \le 0$ on only one edge, for odd cycles with $P_{ij} \ge 0$ on only one edge, and for all combinations of the above cases.

Outline: Integer Programming Formulation; Existing Relaxations; Comparison; Generalization of Results; Two New SOCP Relaxations – The SOCP-C Relaxation, The SOCP-Q Relaxation.

The SOCP-C Relaxation: include all LP-S constraints plus true SOC constraints. Consider a cycle on variables $a$, $b$, $c$ in which edges $ab$ and $ca$ are submodular but $bc$ is non-submodular: a frustrated cycle.

The SOCP-C Relaxation: on the frustrated cycle, the LP-S solution attains objective function = 0.

The SOCP-C Relaxation: define an SOC constraint over the cycle using $C = \mathbf{1}$ (the all-ones matrix): $(x_i + x_j + x_k)^2 \le 3 + 2\,(X_{ij} + X_{jk} + X_{ki})$.

The SOCP-C Relaxation: the SOCP-C solution attains objective function = 0.75, so SOCP-C strictly dominates LP-S.

Outline: Integer Programming Formulation; Existing Relaxations; Comparison; Generalization of Results; Two New SOCP Relaxations – The SOCP-C Relaxation, The SOCP-Q Relaxation.

The SOCP-Q Relaxation: include all cycle inequalities plus true SOC constraints. For a clique of size $n$, define an SOC constraint using $C = \mathbf{1}$: $(\sum_i x_i)^2 \le n + \sum_{i \neq j} X_{ij}$. SOCP-Q strictly dominates LP-S; SOCP-Q strictly dominates SOCP-C.
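A cvxpy sketch of the extra constraint, assuming $x$ and $X$ range over the variables of one clique with $X_{ii} = 1$ on the diagonal; the variable names are my own.

```python
import cvxpy as cp

# SOCP-Q clique constraint with C = 1 (all ones), for a clique of size n:
# (sum_i x_i)^2 <= n + sum_{i != j} X_ij. The SOCP-C cycle constraint is
# the same construction applied over the variables of a cycle.
n = 4
x = cp.Variable(n)
X = cp.Variable((n, n), symmetric=True)

# sum(X) - trace(X) is the off-diagonal sum, i.e. sum_{i != j} X_ij
clique_constraint = cp.square(cp.sum(x)) <= n + (cp.sum(X) - cp.trace(X))
```

In a full relaxation this constraint is added on top of the LP-S constraints and the cycle inequalities.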

4-Neighbourhood MRF: test SOCP-C on 50 binary MRFs of size 30×30, with $u \sim \mathcal{N}(0, 1)$ and $P \sim \mathcal{N}(0, \sigma^2)$.
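A sketch of how such test instances could be generated, as I read the setup from the slide; the grid wiring and exact cost parameterisation beyond the slide are assumptions.

```python
import numpy as np

# 50 binary MRFs on a 30x30 4-connected grid: unary costs from N(0,1),
# pairwise costs from N(0, sigma^2).
rng = np.random.default_rng(0)
H = W = 30
sigma = 2.5

def random_mrf():
    u = rng.normal(0.0, 1.0, size=(H, W, 2))       # per-node, per-label costs
    edges = [((r, c), (r, c + 1)) for r in range(H) for c in range(W - 1)] \
          + [((r, c), (r + 1, c)) for r in range(H - 1) for c in range(W)]
    P = {e: rng.normal(0.0, sigma, size=(2, 2)) for e in edges}
    return u, P

instances = [random_mrf() for _ in range(50)]
```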

[Results plot: 4-neighbourhood MRF, σ = 2.5]

8-Neighbourhood MRF: test SOCP-Q on 50 binary MRFs of size 30×30, with $u \sim \mathcal{N}(0, 1)$ and $P \sim \mathcal{N}(0, \sigma^2)$.

[Results plot: 8-neighbourhood MRF, σ = 1.125]

Conclusions: a large class of SOCP and QP relaxations is dominated by LP-S; the two new SOCP relaxations dominate LP-S. More experimental results in the poster.

Future Work: comparison with cycle inequalities; determine the best SOC constraints; develop efficient algorithms for the new relaxations.

Questions?