Discrete Optimization Lecture 2 – Part I M. Pawan Kumar Slides available online

Recap

Figure: variables V_a, V_b, V_c with observed data d_a, d_b, d_c; labels l_0 and l_1.

D : Observed data (image)
V : Unobserved variables
L : Discrete, finite label set
Labeling f : V → L, i.e. V_a is assigned the label l_f(a)

Recap

Energy of a labeling:

Q(f; θ) = ∑_a θ_{a; f(a)} + ∑_{(a,b)} θ_{ab; f(a) f(b)}

with unary potentials θ_{a; f(a)} and pairwise potentials such as θ_{bc; f(b) f(c)}. The aim is the minimum-energy labeling:

f* = argmin_f Q(f; θ)

Outline Convex Optimization Integer Programming Formulation Convex Relaxations Comparison Generalization of Results

Mathematical Optimization

min g_0(x)
s.t. g_i(x) ≤ 0   (inequality constraints)
     h_i(x) = 0   (equality constraints)

g_0 is the objective function. x is a feasible point ⇔ g_i(x) ≤ 0 and h_i(x) = 0 for all i. x is a strictly feasible point ⇔ g_i(x) < 0 and h_i(x) = 0 for all i. The feasible region is the set of all feasible points.
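
The feasibility definitions above can be sketched in code; the constraint functions here are illustrative examples, not from the lecture.

```python
# A minimal sketch of feasibility vs. strict feasibility:
# g_i(x) <= 0 (or < 0 for strict) and h_i(x) = 0.

def is_feasible(x, ineqs, eqs, tol=1e-9, strict=False):
    """Check all inequality and equality constraints at the point x."""
    ok_ineq = all(g(x) < 0 if strict else g(x) <= tol for g in ineqs)
    ok_eq = all(abs(h(x)) <= tol for h in eqs)
    return ok_ineq and ok_eq

# Example feasible region {x : x >= 1}, written as g(x) = 1 - x <= 0.
ineqs = [lambda x: 1 - x]
print(is_feasible(2.0, ineqs, []))               # True
print(is_feasible(1.0, ineqs, [], strict=True))  # False: g(x) = 0, not < 0
```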

Convex Optimization

min g_0(x)
s.t. g_i(x) ≤ 0
     h_i(x) = 0

The objective function is convex and the feasible region is a convex set. Convex set? Convex function?

Convex Set

Line segment with endpoints x_1 and x_2: the points c x_1 + (1 - c) x_2, for c ∈ [0,1].

Convex Set

For all line segments with endpoints in the set, all points on the line segment lie within the set.

Non-Convex Set

Figure: a set containing points x_1 and x_2 whose connecting line segment leaves the set.

Examples of Convex Sets

Line segment

Line

Hyperplane: a^T x - b = 0

Halfspace: a^T x - b ≤ 0

Examples of Convex Sets

Second-order cone: {(x, t) : ||x|| ≤ t}

Examples of Convex Sets

Semidefinite cone: {X | X ⪰ 0}, i.e. a^T X a ≥ 0 for all a ∈ R^n; equivalently, all eigenvalues of X are non-negative. Convexity: a^T X_1 a ≥ 0 and a^T X_2 a ≥ 0 imply a^T (c X_1 + (1 - c) X_2) a ≥ 0.
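
The closure argument above can be checked numerically; the two PSD matrices below are arbitrary examples.

```python
# Check that a convex combination of two PSD matrices is PSD,
# testing positive semidefiniteness via eigenvalues.
import numpy as np

def is_psd(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X1, X2 = A @ A.T, B @ B.T        # PSD by construction
c = 0.3
print(is_psd(X1), is_psd(X2), is_psd(c * X1 + (1 - c) * X2))  # True True True
```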

Operations that Preserve Convexity

Intersection of convex sets (e.g. a polyhedron / polytope is an intersection of halfspaces).

Affine transformation: x → Ax + b.

Convex Function

Figure: graph of g(x); the chord between (x_1, g(x_1)) and (x_2, g(x_2)) (blue) always lies above the graph of g (red).

Convex Function

g(c x_1 + (1 - c) x_2) ≤ c g(x_1) + (1 - c) g(x_2), for c ∈ [0,1]

The domain of g(.) has to be convex. If g(.) is convex, then -g(.) is concave.

Convex Function

For once-differentiable functions, convexity is equivalent to

g(y) + ∇g(y)^T (x - y) ≤ g(x)   for all x, y

i.e. the tangent at (y, g(y)) underestimates g. For twice-differentiable functions:

∇²g(x) ⪰ 0
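
A quick numeric check of the first-order condition, using g(x) = x² as an illustrative convex function (so ∇g(x) = 2x):

```python
# Verify g(y) + g'(y) (x - y) <= g(x) on a grid of sample points
# for the convex function g(x) = x**2.
g = lambda x: x * x
dg = lambda x: 2 * x

pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
ok = all(g(y) + dg(y) * (x - y) <= g(x) + 1e-12 for x in pts for y in pts)
print(ok)  # True
```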

Convex Function and Convex Sets

The epigraph of a convex function is a convex set.

Examples of Convex Functions

Linear functions: a^T x
p-norm functions: (x_1^p + x_2^p + … + x_n^p)^{1/p}, p ≥ 1
Quadratic functions: x^T Q x, with Q ⪰ 0

Operations that Preserve Convexity

Non-negative weighted sum: w_1 g_1(x) + w_2 g_2(x) + …
Example: x^T Q x + a^T x + b is convex for Q ⪰ 0.

Operations that Preserve Convexity

Pointwise maximum: max(g_1(x), g_2(x)). Similarly, the pointwise minimum of concave functions is concave.

Convex Optimization

min g_0(x)
s.t. g_i(x) ≤ 0
     h_i(x) = 0

Objective function is convex ✓. Feasible region is convex ✓.

Linear Programming

min g_0^T x
s.t. g_i^T x ≤ 0
     h_i^T x = 0

Linear objective function, linear constraints.

Quadratic Programming

min x^T Q x + a^T x + b
s.t. g_i^T x ≤ 0
     h_i^T x = 0

Quadratic objective function (convex for Q ⪰ 0), linear constraints.

Second-Order Cone Programming

min g_0^T x
s.t. x^T Q_i x + a_i^T x + b_i ≤ 0
     h_i^T x = 0

Linear objective function; quadratic (second-order cone) constraints and linear constraints.

Semidefinite Programming

min Q • X
s.t. X ⪰ 0
     A_i • X = 0

Linear objective function; semidefinite constraint and linear constraints. (• denotes the Frobenius inner product, ∑_ij Q_ij X_ij.)

Outline Convex Optimization Integer Programming Formulation Convex Relaxations Comparison Generalization of Results

Integer Programming Formulation

Two variables V_1, V_2; labels '0' and '1'. Unary cost vector u = [5 2; 2 4]^T: 5 is the cost of V_1 = 0, 2 is the cost of V_1 = 1, and 2, 4 are the costs of V_2 = 0 and V_2 = 1. Example labeling: {1, 0}.

Label vector x = [-1 1; 1 -1]^T: x_i = 1 if the corresponding label is assigned (e.g. V_1 = 1) and x_i = -1 otherwise (e.g. V_1 ≠ 0). Recall that the aim is to find the optimal x.

Sum of unary costs = (1/2) ∑_i u_i (1 + x_i), since (1 + x_i)/2 is 1 for assigned labels and 0 otherwise.

V1V1 V2V2 Label ‘ 0 ’ Label ‘ 1 ’ Pairwise Cost 0 Cost of V 1 = 0 and V 1 = Cost of V 1 = 0 and V 2 = 0 3 Cost of V 1 = 0 and V 2 = Pairwise Cost Matrix P Integer Programming Formulation Labeling = {1, 0}

Integer Programming Formulation

Sum of pairwise costs = (1/4) ∑_ij P_ij (1 + x_i)(1 + x_j)

Expanding:

(1/4) ∑_ij P_ij (1 + x_i + x_j + x_i x_j) = (1/4) ∑_ij P_ij (1 + x_i + x_j + X_ij)

where X = x x^T, i.e. X_ij = x_i x_j.

Integer Programming Formulation

Constraints:
Uniqueness constraint: ∑_{i ∈ V_a} x_i = 2 - |L|   (exactly one label per variable)
Integer constraints: x_i ∈ {-1, 1}
X = x x^T

Integer Programming Formulation

x* = argmin (1/2) ∑ u_i (1 + x_i) + (1/4) ∑ P_ij (1 + x_i + x_j + X_ij)

s.t. ∑_{i ∈ V_a} x_i = 2 - |L|   (convex)
     x_i ∈ {-1, 1}               (non-convex)
     X = x x^T                   (non-convex)
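
The ±1 encoding can be sanity-checked on the two-variable example (u = [5 2; 2 4]^T, labeling {1, 0}). The pairwise matrix P below is an arbitrary illustration with zero blocks between labels of the same variable, since the slide's numbers are only partly recoverable.

```python
# Check that the unary and pairwise sums of the integer program
# reproduce the direct cost of the labeling.
import numpy as np

u = np.array([5.0, 2.0, 2.0, 4.0])     # costs of (V1=0, V1=1, V2=0, V2=1)
x = np.array([-1.0, 1.0, 1.0, -1.0])   # labeling {1, 0}
P = np.array([[0, 0, 1, 3],
              [0, 0, 2, 1],
              [1, 2, 0, 0],
              [3, 1, 0, 0]], dtype=float)

unary = 0.5 * np.sum(u * (1 + x))
X = np.outer(x, x)                     # X_ij = x_i x_j
pairwise = 0.25 * np.sum(P * (1 + x[:, None] + x[None, :] + X))

z = (1 + x) / 2                        # indicator: 1 iff label assigned
print(unary == u @ z, pairwise == z @ P @ z)  # True True
```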

Outline Convex Optimization Integer Programming Formulation Convex Relaxations –Linear Programming (LP-S) –Semidefinite Programming (SDP-L) –Second Order Cone Programming (SOCP-MS) Comparison Generalization of Results

LP-S (Schlesinger, 1976)

Retain the convex part of the integer program and relax the non-convex constraints:

x* = argmin (1/2) ∑ u_i (1 + x_i) + (1/4) ∑ P_ij (1 + x_i + x_j + X_ij)

s.t. ∑_{i ∈ V_a} x_i = 2 - |L|

The integer constraint x_i ∈ {-1, 1} is relaxed to x_i ∈ [-1, 1], and X = x x^T is replaced by the linear constraints

X_ij ∈ [-1, 1]
1 + x_i + x_j + X_ij ≥ 0
∑_{j ∈ V_b} X_ij = (2 - |L|) x_i

This gives the linear programming relaxation LP-S.
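
The LP-S constraints can be written as a small feasibility check. This sketch assumes binary labels (|L| = 2) and two variables V_1, V_2 with label indices (0, 1) and (2, 3); the index layout is my assumption, not the slides'.

```python
# Feasibility check for the LP-S constraints on a two-variable,
# binary-label problem: box constraints, uniqueness, the lower
# bound 1 + x_i + x_j + X_ij >= 0, and marginalization.
import numpy as np

def lps_feasible(x, X, tol=1e-9):
    groups = [[0, 1], [2, 3]]   # label indices of V_1 and V_2 (assumed layout)
    L = 2
    if np.any(np.abs(x) > 1 + tol) or np.any(np.abs(X) > 1 + tol):
        return False
    for g in groups:
        if abs(sum(x[i] for i in g) - (2 - L)) > tol:
            return False
    for i in groups[0]:
        for j in groups[1]:
            if 1 + x[i] + x[j] + X[i, j] < -tol:
                return False
        if abs(sum(X[i, j] for j in groups[1]) - (2 - L) * x[i]) > tol:
            return False
    return True

x = np.array([-1.0, 1.0, 1.0, -1.0])   # integral labeling {1, 0}
X = np.outer(x, x)                      # X = x x^T is always feasible for LP-S
print(lps_feasible(x, X))               # True
```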

Outline Convex Optimization Integer Programming Formulation Convex Relaxations –Linear Programming (LP-S) –Semidefinite Programming (SDP-L) –Second Order Cone Programming (SOCP-MS) Comparison Generalization of Results

SDP-L (Lasserre, 2000)

Retain the convex part of the integer program and relax the non-convex constraints: x_i ∈ {-1, 1} becomes x_i ∈ [-1, 1], and X = x x^T is relaxed as follows.

SDP-L

X = x x^T and x_i ∈ {-1, 1} hold if and only if the matrix

[ 1  x^T ]
[ x  X   ]

has rank 1, satisfies X_ii = 1, and is positive semidefinite. X_ii = 1 and positive semidefiniteness are convex constraints; rank = 1 is non-convex.

SDP-L

Dropping the non-convex rank constraint leaves the convex constraints: X_ii = 1, and

[ 1  x^T ]
[ x  X   ]  positive semidefinite.

Schur's Complement

[ A    B ]     [ I            0 ] [ A   0                 ] [ I   A^{-1} B ]
[ B^T  C ]  =  [ B^T A^{-1}   I ] [ 0   C - B^T A^{-1} B  ] [ 0   I        ]

Hence, for A ≻ 0:

[ A B; B^T C ] ⪰ 0   ⇔   C - B^T A^{-1} B ⪰ 0

SDP-L

Applying Schur's complement with A = 1, B = x^T, C = X:

[ 1  x^T ]
[ x  X   ]  ⪰ 0   ⇔   X - x x^T ⪰ 0
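
The equivalence above can be spot-checked numerically on random data (Schur's complement guarantees it whenever the top-left block is positive, here A = 1):

```python
# For random x and X, check: [1 x^T; x X] is PSD iff X - x x^T is PSD.
import numpy as np

def is_psd(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 3)
ok = True
for _ in range(100):
    S = rng.standard_normal((3, 3))
    X = np.outer(x, x) + rng.choice([1.0, -0.1]) * (S @ S.T)
    big = np.block([[np.ones((1, 1)), x[None, :]],
                    [x[:, None], X]])
    ok &= (is_psd(big) == is_psd(X - np.outer(x, x)))
print(ok)  # True
```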


SDP-L

x* = argmin (1/2) ∑ u_i (1 + x_i) + (1/4) ∑ P_ij (1 + x_i + x_j + X_ij)

s.t. ∑_{i ∈ V_a} x_i = 2 - |L|
     x_i ∈ [-1, 1]
     X_ii = 1
     X - x x^T ⪰ 0

SDP-L is accurate, but inefficient.

Outline Convex Optimization Integer Programming Formulation Convex Relaxations –Linear Programming (LP-S) –Semidefinite Programming (SDP-L) –Second Order Cone Programming (SOCP-MS) Comparison Generalization of Results

SOCP Relaxation

Derive an SOCP relaxation from the SDP relaxation (x_i ∈ [-1, 1], X_ii = 1, X - x x^T ⪰ 0) by relaxing it further.

1-D Example

In one dimension, X - x x^T ⪰ 0 reduces to X - x² ≥ 0. (For two semidefinite matrices, the Frobenius inner product is non-negative: A • C ≥ 0 whenever A, C ⪰ 0.) Thus x² ≤ X, an SOC constraint of the form ||v||² ≤ s t, with s t = X = 1 here (using X_11 = 1).

2-D Example

X = [ X_11  X_12 ]  =  [ 1     X_12 ]
    [ X_21  X_22 ]     [ X_12  1    ]

x x^T = [ x_1 x_1  x_1 x_2 ]  =  [ x_1²      x_1 x_2 ]
        [ x_2 x_1  x_2 x_2 ]     [ x_1 x_2   x_2²    ]

2-D Example

X - x x^T = [ 1 - x_1²          X_12 - x_1 x_2 ]
            [ X_12 - x_1 x_2    1 - x_2²       ]

C_1 = [1 0; 0 0] ⪰ 0:   (X - x x^T) • C_1 ≥ 0  gives  x_1² ≤ 1, i.e. -1 ≤ x_1 ≤ 1
C_2 = [0 0; 0 1] ⪰ 0:   (X - x x^T) • C_2 ≥ 0  gives  x_2² ≤ 1, i.e. -1 ≤ x_2 ≤ 1
C_3 = [1 1; 1 1] ⪰ 0:   (X - x x^T) • C_3 ≥ 0  gives  (x_1 + x_2)² ≤ 2 + 2 X_12
C_4 = [1 -1; -1 1] ⪰ 0: (X - x x^T) • C_4 ≥ 0  gives  (x_1 - x_2)² ≤ 2 - 2 X_12

The last two are SOC constraints of the form ||v||² ≤ s t.

SOCP Relaxation (Kim and Kojima, 2000)

In general, consider any matrix C = U U^T ⪰ 0. The constraint (X - x x^T) • C ≥ 0 becomes

||U^T x||² ≤ X • C

an SOC constraint of the form ||v||² ≤ s t. Continue for C_2, C_3, …, C_n.
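
The algebra behind this constraint is the identity (x x^T) • (U U^T) = ||U^T x||², which a quick numeric check confirms (random test data):

```python
# (x x^T) • C with C = U U^T equals x^T C x = ||U^T x||^2, so
# (X - x x^T) • C >= 0 is exactly ||U^T x||^2 <= X • C.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4)
U = rng.standard_normal((4, 2))
C = U @ U.T

lhs = np.sum(np.outer(x, x) * C)       # (x x^T) • C, Frobenius inner product
rhs = np.linalg.norm(U.T @ x) ** 2     # ||U^T x||^2
print(bool(np.isclose(lhs, rhs)))      # True
```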

SOCP Relaxation

How many constraints are needed for SOCP = SDP? Infinitely many: one for every C ⪰ 0. Instead, specify constraints similar to the 2-D example, for each edge (i, j):

(x_i + x_j)² ≤ 2 + 2 X_ij
(x_i - x_j)² ≤ 2 - 2 X_ij

SOCP-MS (Muramatsu and Suzuki, 2003)

x* = argmin (1/2) ∑ u_i (1 + x_i) + (1/4) ∑ P_ij (1 + x_i + x_j + X_ij)

s.t. ∑_{i ∈ V_a} x_i = 2 - |L|
     x_i ∈ [-1, 1]
     (x_i + x_j)² ≤ 2 + 2 X_ij
     (x_i - x_j)² ≤ 2 - 2 X_ij

The SOC constraints are specified only when P_ij ≠ 0.

Outline Convex Optimization Integer Programming Formulation Convex Relaxations Comparison Generalization of Results Kumar, Kolmogorov and Torr, JMLR 2010

Dominating Relaxation

Relaxation A dominates relaxation B if, for all MAP estimation problems (u, P), the optimal value of A ≥ the optimal value of B. Dominating relaxations give tighter lower bounds and are therefore better.

Equivalent Relaxations

A and B are equivalent if A dominates B and B dominates A, i.e. their optimal values are equal for all MAP estimation problems (u, P).

Strictly Dominating Relaxation

A strictly dominates B if A dominates B and B does not dominate A, i.e. the optimal value of A is strictly greater for at least one MAP estimation problem (u, P).

SOCP-MS

At the optimum, the SOC constraints are tight:

P_ij ≥ 0:  X_ij = (x_i + x_j)² / 2 - 1
P_ij < 0:  X_ij = 1 - (x_i - x_j)² / 2

Substituting these values makes SOCP-MS a QP, the same as the QP relaxation of Ravikumar and Lafferty, 2005: SOCP-MS ≡ QP-RL.
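
The closed form can be checked numerically: the two SOC constraints bound X_ij to an interval that is non-empty on [-1, 1]², and minimizing P_ij X_ij selects the corresponding endpoint. (A numeric sketch, not the authors' derivation.)

```python
# X_ij is constrained to [(x_i+x_j)^2/2 - 1, 1 - (x_i-x_j)^2/2];
# check the interval is non-empty and that each sign of P_ij picks
# the corresponding endpoint when minimizing P_ij * X_ij.
import numpy as np

rng = np.random.default_rng(3)
ok = True
for _ in range(1000):
    xi, xj = rng.uniform(-1, 1, 2)
    lb = (xi + xj) ** 2 / 2 - 1
    ub = 1 - (xi - xj) ** 2 / 2
    ok &= bool(lb <= ub + 1e-12)
    for P in (1.0, -1.0):
        best = lb if P >= 0 else ub
        ok &= bool(P * best <= min(P * lb, P * ub) + 1e-12)
print(ok)  # True
```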

LP-S vs. SOCP-MS

The two relaxations differ in the way they relax X = x x^T.

LP-S:
X_ij ∈ [-1, 1]
1 + x_i + x_j + X_ij ≥ 0
∑_{j ∈ V_b} X_ij = (2 - |L|) x_i

SOCP-MS:
(x_i + x_j)² ≤ 2 + 2 X_ij
(x_i - x_j)² ≤ 2 - 2 X_ij

F(LP-S) ⊆ F(SOCP-MS)

LP-S vs. SOCP-MS

LP-S strictly dominates SOCP-MS. LP-S strictly dominates QP-RL. Where have we gone wrong? A quick recap!

Recap of SOCP-MS

For an edge (i, j), the constraint (x_i + x_j)² ≤ 2 + 2 X_ij comes from C = [1 1; 1 1], and (x_i - x_j)² ≤ 2 - 2 X_ij from C = [1 -1; -1 1]. Can we use different C matrices? Can we use a different subgraph?

Outline Convex Optimization Integer Programming Formulation Convex Relaxations Comparison Generalization of Results –SOCP Relaxations on Trees –SOCP Relaxations on Cycles Kumar, Kolmogorov and Torr, JMLR 2010

SOCP Relaxations on Trees

Choose any arbitrary tree, and any arbitrary C ⪰ 0; repeating over trees gives the relaxation SOCP-T. Result: LP-S strictly dominates SOCP-T, and likewise QP-T.

Outline Convex Optimization Integer Programming Formulation Convex Relaxations Comparison Generalization of Results –SOCP Relaxations on Trees –SOCP Relaxations on Cycles Kumar, Kolmogorov and Torr, JMLR 2010

SOCP Relaxations on Cycles

Choose an arbitrary even cycle with P_ij ≥ 0 for all edges, or P_ij ≤ 0 for all edges, and any arbitrary C ⪰ 0; repeating over even cycles gives the relaxation SOCP-E. Result: LP-S strictly dominates SOCP-E, and likewise QP-E.

SOCP Relaxations on Cycles

The result also holds for: odd cycles with P_ij ≤ 0; odd cycles with P_ij ≤ 0 for only one edge; odd cycles with P_ij ≥ 0 for only one edge; and all combinations of the above cases.

The SOCP-C Relaxation

Include all LP-S constraints, plus true SOC constraints. Consider a cycle with edges ab, bc, ca, where ab and ca are submodular and bc is non-submodular: a frustrated cycle.

The SOCP-C Relaxation

On the frustrated cycle, the LP-S solution attains objective function value = 0.

The SOCP-C Relaxation

Define an SOC constraint over the cycle using C = 1 (the all-ones matrix):

(x_i + x_j + x_k)² ≤ 3 + 2 (X_ij + X_jk + X_ki)
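
For integral labelings the cycle constraint holds with equality, which pins down the constant: (x_i + x_j + x_k)² = 3 + 2(x_i x_j + x_j x_k + x_k x_i) when x ∈ {-1, 1}³.

```python
# Exhaustive check of the identity behind the cycle constraint.
from itertools import product

ok = True
for xi, xj, xk in product([-1, 1], repeat=3):
    lhs = (xi + xj + xk) ** 2
    rhs = 3 + 2 * (xi * xj + xj * xk + xk * xi)
    ok &= (lhs == rhs)
print(ok)  # True
```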

The SOCP-C Relaxation

With this constraint, the SOCP-C solution attains objective function value = 0.75. SOCP-C strictly dominates LP-S.

The SOCP-Q Relaxation

Include all cycle inequalities, plus true SOC constraints. For a clique of size n, define an SOC constraint using C = 1 (the all-ones matrix):

(∑ x_i)² ≤ n + ∑_{i ≠ j} X_ij

SOCP-Q strictly dominates LP-S, and SOCP-Q strictly dominates SOCP-C.

4-Neighbourhood MRF

Test of SOCP-C: 50 binary MRFs of size 30×30, with unary costs u ~ N(0, 1) and pairwise costs P ~ N(0, σ²).

4-Neighbourhood MRF

Figure: results for σ = 2.5.

8-Neighbourhood MRF

Test of SOCP-Q: 50 binary MRFs of size 30×30, with unary costs u ~ N(0, 1) and pairwise costs P ~ N(0, σ²).

8-Neighbourhood MRF

Figure: results for σ = 1.125.

Conclusions

A large class of SOCP and QP relaxations is dominated by LP-S. The new SOCP relaxations (SOCP-C, SOCP-Q) dominate LP-S. But better LP relaxations exist.