Rounding-based Moves for Metric Labeling
M. Pawan Kumar
Center for Visual Computing, Ecole Centrale Paris

Metric Labeling
Random variables V = {v_1, v_2, …, v_n}
Label set L = {l_1, l_2, …, l_h}
Labeling y ∈ L^n; labelings are quantitatively distinguished by the energy E(y)
Unary potential of variable v_a ∈ V: Σ_a θ_a(y_a)

Metric Labeling
Random variables V = {v_1, v_2, …, v_n}
Label set L = {l_1, l_2, …, l_h}
Labeling y ∈ L^n; labelings are quantitatively distinguished by the energy E(y)
Adding the pairwise potentials of variables (v_a, v_b) gives the problem
min_y Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d(y_a, y_b)
where w_ab is non-negative and d(·,·) is a metric distance function
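To make the objective concrete, here is a minimal sketch that evaluates this energy for a given labeling (the instance below is illustrative, not from the talk):

```python
import numpy as np

def energy(y, theta, edges, w, d):
    """Metric labeling energy: sum of unary terms theta[a][y[a]]
    plus weighted pairwise metric distances over the edge set."""
    unary = sum(theta[a][y[a]] for a in range(len(y)))
    pairwise = sum(w[(a, b)] * d(y[a], y[b]) for (a, b) in edges)
    return unary + pairwise

# Tiny example: 3 variables, 2 labels, truncated linear metric
theta = np.array([[0.0, 2.0], [1.0, 0.5], [3.0, 0.0]])
edges = [(0, 1), (1, 2)]
w = {(0, 1): 1.0, (1, 2): 2.0}
d = lambda i, k: min(abs(i - k), 1)  # truncated linear, a metric
print(energy([0, 1, 1], theta, edges, w, d))  # 0.0 + 0.5 + 0.0 + 1.0 + 0.0 = 1.5
```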

Outline
Existing Work
– Move-Making Algorithms (Efficient)
– Linear Programming Relaxation (Accurate)
Rounding-based Moves
– Equivalence
– Complete Rounding and Complete Move
– Interval Rounding and Interval Move
– Hierarchical Rounding and Hierarchical Move

Expansion Algorithm
[Figure: image labeling with classes Sky, House, Tree, Ground — initialize with Tree; expand Ground; expand House; expand Sky]
Variables take the label l_α or retain their current label.
Boykov, Veksler and Zabih, ICCV 1999

Move-Making Algorithms
Start with an initial labeling y^0.
At iteration t, define a search space S_t ⊆ L^n containing the current labeling y^t, and set
y^{t+1} = argmin_y Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d(y_a, y_b)  s.t. y ∈ S_t
The above problem is easier than the original problem; sometimes it can even be solved exactly.
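A structural sketch of this generic loop (illustrative; propose_space and solve_restricted are assumed helpers, e.g. an expansion space solved with a graph cut, and energy is the evaluation function sketched above):

```python
def move_making(y0, propose_space, solve_restricted, energy, max_iters=100):
    """Generic move-making loop: repeatedly minimize the energy over a
    restricted search space S_t that contains the current labeling."""
    y = list(y0)
    for t in range(max_iters):
        S_t = propose_space(y, t)          # e.g. expansion: {y_a} ∪ {l_alpha}
        y_next = solve_restricted(S_t, y)  # easier subproblem, e.g. graph cut
        if energy(y_next) >= energy(y):    # no improvement: local optimum
            break
        y = y_next
    return y
```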

Outline
Existing Work
– Move-Making Algorithms (Efficient)
– Linear Programming Relaxation (Accurate)
Rounding-based Moves
– Equivalence
– Complete Rounding and Complete Move
– Interval Rounding and Interval Move
– Hierarchical Rounding and Hierarchical Move

Linear Programming Relaxation
Chekuri, Khanna, Naor and Zosin, SODA 2001
Binary indicator x_a(i) ∈ {0,1}: if variable v_a takes the label l_i then x_a(i) = 1.
Σ_i x_a(i) = 1: each variable takes exactly one label.
Similarly, binary indicator x_ab(i,k) ∈ {0,1} for pairs of variables.

Linear Programming Relaxation
Minimize a linear function over feasible x.
Integer program: indicators x_a(i), x_ab(i,k) ∈ {0,1}.
Relaxation: x_a(i), x_ab(i,k) ∈ [0,1].
A rounding procedure converts the fractional solution back to a labeling.
Chekuri, Khanna, Naor and Zosin, SODA 2001
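Written out in full (the standard local-polytope form, consistent with the constraints spelled out in the simple example at the end of the talk):

```latex
\begin{aligned}
\min_{x \ge 0}\quad & \sum_{a}\sum_{i} \theta_a(i)\,x_a(i)
  \;+\; \sum_{(a,b)}\sum_{(i,k)} w_{ab}\,d(i,k)\,x_{ab}(i,k) \\
\text{s.t.}\quad & \sum_{i} x_a(i) = 1 \;\;\forall a, \qquad
  \sum_{k} x_{ab}(i,k) = x_a(i), \qquad
  \sum_{i} x_{ab}(i,k) = x_b(k).
\end{aligned}
```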

Outline
Existing Work
– Move-Making Algorithms (Efficient)
– Linear Programming Relaxation (Accurate)
Rounding-based Moves
– Equivalence
– Complete Rounding and Complete Move
– Interval Rounding and Interval Move
– Hierarchical Rounding and Hierarchical Move

Move-Making Bound
y*: optimal labeling; y: estimated labeling
Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d(y_a, y_b) ≥ Σ_a θ_a(y*_a) + Σ_{(a,b)} w_ab d(y*_a, y*_b)

Move-Making Bound
y*: optimal labeling; y: estimated labeling
Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d(y_a, y_b) ≤ B ( Σ_a θ_a(y*_a) + Σ_{(a,b)} w_ab d(y*_a, y*_b) )
for all possible values of θ_a(i) and w_ab

Rounding Approximation
x*: LP optimal solution; x: rounded solution
Σ_a Σ_i θ_a(i) x_a(i) + Σ_{(a,b)} Σ_{(i,k)} w_ab d(i,k) x_ab(i,k) ≥ Σ_a Σ_i θ_a(i) x*_a(i) + Σ_{(a,b)} Σ_{(i,k)} w_ab d(i,k) x*_ab(i,k)

Rounding Approximation
x*: LP optimal solution; x: rounded solution
Σ_a Σ_i θ_a(i) x_a(i) + Σ_{(a,b)} Σ_{(i,k)} w_ab d(i,k) x_ab(i,k) ≤ A ( Σ_a Σ_i θ_a(i) x*_a(i) + Σ_{(a,b)} Σ_{(i,k)} w_ab d(i,k) x*_ab(i,k) )
for all possible values of θ_a(i) and w_ab

Equivalence
For any known rounding procedure with approximation factor A, there exists a move-making algorithm whose move-making bound satisfies B = A.
We know how to design such an algorithm.

Outline
Existing Work
– Move-Making Algorithms (Efficient)
– Linear Programming Relaxation (Accurate)
Rounding-based Moves
– Equivalence
– Complete Rounding and Complete Move
– Interval Rounding and Interval Move
– Hierarchical Rounding and Hierarchical Move

Complete Rounding
Treat x*_a(i) ∈ [0,1] as the probability that y_a = l_i.
Cumulative probability: z_a(i) = Σ_{j≤i} x*_a(j), so z_a(h) = 1.
[Figure: the interval (0,1] partitioned at z_a(1), z_a(2), …, z_a(h)]
Generate a random number r ∈ (0,1] and assign the label next to r, i.e. the first l_i with z_a(i) ≥ r.
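A minimal sketch of the procedure (assuming x_star holds the fractional LP solution, one row per variable, each row summing to 1; labels are 0-indexed here):

```python
import numpy as np

def complete_rounding(x_star, rng=None):
    """Complete rounding: a single r shared by every variable; each
    variable takes the first label i whose cumulative mass z_a(i)
    reaches r, where z_a(i) = sum_{j<=i} x*_a(j)."""
    if rng is None:
        rng = np.random.default_rng()
    z = np.cumsum(x_star, axis=1)      # z[a, i]; last column equals 1
    r = rng.uniform(0.0, 1.0)          # same r for all variables
    return np.argmax(z >= r, axis=1)   # first index where z_a(i) >= r

x_star = np.array([[0.6, 0.4], [0.3, 0.7]])
print(complete_rounding(x_star))       # [0, 1] whenever 0.3 < r <= 0.6
```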

Complete Rounding - Example
[Figure: three number lines for variables a, b, c, each marked with z(1), …, z(4); the same random r is used on all three lines]

Equivalent Move: Complete Move!

Complete Move
Start with an initial labeling y^0.
At iteration t, define S_t ⊆ L^n and set
y^{t+1} = argmin_y Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d(y_a, y_b)  s.t. y ∈ S_t

Complete Move
Start with an initial labeling y^0.
At iteration t, define S_t = L^n and set
y^{t+1} = argmin_y Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d(y_a, y_b)  s.t. y ∈ S_t
The above problem is the same as the original problem. How do we solve it?

Complete Move
Define S_t = L^n and replace the metric d with a surrogate d':
y^{t+1} = argmin_y Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d'(y_a, y_b)  s.t. y ∈ S_t

Complete Move
Define S_t = L^n and set
y^{t+1} = argmin_y Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d'(y_a, y_b)  s.t. y ∈ S_t
where d' is a submodular overestimation of d, obtained by solving a small LP.

Submodular Overestimation
min_{d'} max_{i,k} d'(l_i, l_k) / d(l_i, l_k)
s.t. d'(l_i, l_k) ≥ d(l_i, l_k)
     d'(l_i, l_{k+1}) + d'(l_{i+1}, l_k) ≥ d'(l_i, l_k) + d'(l_{i+1}, l_{k+1})

Submodular Overestimation
min_{d', b} b
s.t. d'(l_i, l_k) ≥ d(l_i, l_k)
     d'(l_i, l_{k+1}) + d'(l_{i+1}, l_k) ≥ d'(l_i, l_k) + d'(l_{i+1}, l_{k+1})
     b · d(l_i, l_k) ≥ d'(l_i, l_k)
The dual provides a worst-case instance for complete rounding.

Outline
Existing Work
– Move-Making Algorithms (Efficient)
– Linear Programming Relaxation (Accurate)
Rounding-based Moves
– Equivalence
– Complete Rounding and Complete Move
– Interval Rounding and Interval Move
– Hierarchical Rounding and Hierarchical Move

Interval Rounding
Treat x*_a(i) ∈ [0,1] as the probability that y_a = l_i.
Cumulative probability: z_a(i) = Σ_{j≤i} x*_a(j), so z_a(h) = 1.
[Figure: the interval (0,1] partitioned at z_a(1), …, z_a(h)]
Choose an interval of labels of length h'.

Interval Rounding
Choose an interval of labels of length h'.
Generate a random number r ∈ (0,1] and assign the label next to r if it falls within the chosen interval's span (of mass z_a(k) − z_a(i)).
REPEAT until every variable is assigned.
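A minimal sketch of one way to implement this loop (the uniform choice of interval start is an assumption, and unassigned variables are revisited until every one is labeled; labels are 0-indexed):

```python
import numpy as np

def interval_rounding(x_star, h_prime, rng=None):
    """Interval rounding sketch: repeatedly pick an interval of h' labels
    and a random r; a still-unassigned variable takes the label whose
    cumulative span contains r, provided that label lies in the interval."""
    if rng is None:
        rng = np.random.default_rng()
    n, h = x_star.shape
    z = np.cumsum(x_star, axis=1)            # z[a, i] = sum_{j<=i} x*_a(j)
    y = np.full(n, -1)                       # -1 marks "unassigned"
    while (y == -1).any():
        s = int(rng.integers(0, h))          # interval start (an assumption)
        interval = range(s, min(s + h_prime, h))
        r = rng.uniform(0.0, 1.0)
        for a in np.flatnonzero(y == -1):
            i = int(np.argmax(z[a] >= r))    # label whose span contains r
            if i in interval:
                y[a] = i
    return y

x_star = np.array([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]])
print(interval_rounding(x_star, h_prime=2))
```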

Interval Rounding - Example
[Animated figure: number lines with cumulative values z(1), …, z(4) for variables a, b, c; an interval of labels is chosen and a shared random r assigns a and b, then the procedure repeats for c with a new interval and a fresh r on the line rescaled by −z_c(1), until all three variables are assigned]

Equivalent Move: Interval Move!

Interval Move
Start with an initial labeling y^0.
At iteration t, choose an interval of labels of length h'; y ∈ S_t iff y_a = y^t_a or y_a lies in the interval of labels.
y^{t+1} = argmin_y Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d(y_a, y_b)  s.t. y ∈ S_t
How do we solve this problem?

Interval Move
Start with an initial labeling y^0.
At iteration t, choose an interval of labels of length h'; y ∈ S_t iff y_a = y^t_a or y_a lies in the interval of labels.
y^{t+1} = argmin_y Σ_a θ_a(y_a) + Σ_{(a,b)} w_ab d'(y_a, y_b)  s.t. y ∈ S_t
where d' is a submodular overestimation of d.

Outline
Existing Work
– Move-Making Algorithms (Efficient)
– Linear Programming Relaxation (Accurate)
Rounding-based Moves
– Equivalence
– Complete Rounding and Complete Move
– Interval Rounding and Interval Move
– Hierarchical Rounding and Hierarchical Move

Hierarchical Rounding
[Figure: a label hierarchy with internal nodes L_1, L_2, L_3 and leaves l_1, …, l_9]
Hierarchical clustering of labels (e.g. r-HST metrics)

Hierarchical Rounding
[Figure: the same label hierarchy]
Assign variables to clusters L_1, L_2 or L_3, then move down the hierarchy until the leaf level: variables in L_1 are assigned to labels l_1, l_2 or l_3; those in L_2 to l_4, l_5 or l_6; those in L_3 to l_7, l_8 or l_9.
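A structural sketch of this top-down pass (illustrative; the Node type and round_among, a rounding step over the given candidate clusters, are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: int = None                 # set on leaves only
    children: list = field(default_factory=list)
    def is_leaf(self):
        return not self.children

def hierarchical_rounding(node, variables, round_among):
    """Top-down pass: at each internal node, round every variable to one of
    the node's child clusters, then recurse into each cluster with the
    variables assigned to it, until the leaves (single labels) are reached."""
    if node.is_leaf():
        return {v: node.label for v in variables}
    choice = round_among(node.children, variables)  # child index per variable
    labeling = {}
    for idx, child in enumerate(node.children):
        assigned = [v for v in variables if choice[v] == idx]
        labeling.update(hierarchical_rounding(child, assigned, round_among))
    return labeling
```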

Equivalent Move: Hierarchical Move!

Hierarchical Move
[Figure: the same label hierarchy with internal nodes L_1, L_2, L_3 and leaves l_1, …, l_9]
Hierarchical clustering of labels (e.g. r-HST metrics)

Hierarchical Move
[Figure: the same label hierarchy]
Obtain labeling y^1 restricted to labels {l_1, l_2, l_3}, labeling y^2 restricted to {l_4, l_5, l_6}, and labeling y^3 restricted to {l_7, l_8, l_9}.

Hierarchical Move
[Figure: for variables v_a and v_b, the candidate labels y^1(a), y^2(a), y^3(a) and y^1(b), y^2(b), y^3(b) from the three clusters]
Move up the hierarchy until we reach the root.
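A structural sketch of this bottom-up pass, reusing the Node sketch above (illustrative; solve_restricted and fuse are assumed helpers, the latter standing in for the move that picks one candidate label per variable when climbing toward the root):

```python
def hierarchical_move(node, solve_restricted, fuse):
    """Bottom-up pass: for a node whose children are labels, solve the
    problem restricted to those labels; higher up, fuse the children's
    labelings by choosing one candidate label per variable (itself an
    easier move-making subproblem)."""
    if all(child.is_leaf() for child in node.children):
        labels = [child.label for child in node.children]
        return solve_restricted(labels)     # e.g. y^1 over {l_1, l_2, l_3}
    candidates = [hierarchical_move(child, solve_restricted, fuse)
                  for child in node.children]
    return fuse(candidates)                 # per-variable choice among y^1, y^2, y^3
```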

Questions?

Simple Example - Rounding
min_{x≥0}  θ_a(1)x_a(1) + θ_a(2)x_a(2) + θ_b(1)x_b(1) + θ_b(2)x_b(2)
         + d(1,1)x_ab(1,1) + d(1,2)x_ab(1,2) + d(2,1)x_ab(2,1) + d(2,2)x_ab(2,2)
s.t.  x_a(1) + x_a(2) = 1
      x_b(1) + x_b(2) = 1
      x_ab(1,1) + x_ab(1,2) = x_a(1)
      x_ab(2,1) + x_ab(2,2) = x_a(2)
      x_ab(1,1) + x_ab(2,1) = x_b(1)
      x_ab(1,2) + x_ab(2,2) = x_b(2)
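A sketch that solves exactly this LP with scipy (the unary and distance values are placeholders, not from the talk):

```python
from scipy.optimize import linprog

# Variable order: [xa1, xa2, xb1, xb2, x11, x12, x21, x22]
theta = [0.0, 1.0, 1.0, 0.0]            # placeholder unary costs
d = [0.0, 1.0, 1.0, 0.0]                # d(1,1), d(1,2), d(2,1), d(2,2)
c = theta + d
A_eq = [
    [1, 1, 0, 0, 0, 0, 0, 0],           # x_a(1) + x_a(2) = 1
    [0, 0, 1, 1, 0, 0, 0, 0],           # x_b(1) + x_b(2) = 1
    [-1, 0, 0, 0, 1, 1, 0, 0],          # x_ab(1,1) + x_ab(1,2) = x_a(1)
    [0, -1, 0, 0, 0, 0, 1, 1],          # x_ab(2,1) + x_ab(2,2) = x_a(2)
    [0, 0, -1, 0, 1, 0, 1, 0],          # x_ab(1,1) + x_ab(2,1) = x_b(1)
    [0, 0, 0, -1, 0, 1, 0, 1],          # x_ab(1,2) + x_ab(2,2) = x_b(2)
]
b_eq = [1, 1, 0, 0, 0, 0]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
print(res.x, res.fun)  # a single-edge instance, so the relaxation is tight
```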

Simple Example - Rounding
[Figure: number lines for a and b split at x*_a(1) and x*_b(1), with x*_a(1) + x*_a(2) = 1 and x*_b(1) + x*_b(2) = 1; the same r on both lines]
Generate a uniform random number r ∈ (0,1] and assign the label next to r.
Probability that V_a is assigned label l_1? x*_a(1)
Probability that V_a is assigned label l_2? x*_a(2)

Simple Example - Rounding
Probability that V_a and V_b are assigned l_1 and l_1?
min{x*_a(1), x*_b(1)} = min{x*_ab(1,1) + x*_ab(1,2), x*_ab(1,1) + x*_ab(2,1)} = x*_ab(1,1) + min{x*_ab(1,2), x*_ab(2,1)}
Probability that V_a and V_b are assigned l_1 and l_2?
max{0, x*_a(1) − x*_b(1)} = x*_ab(1,2) − min{x*_ab(1,2), x*_ab(2,1)} = max{0, x*_ab(1,2) − x*_ab(2,1)}
Probability that V_a and V_b are assigned l_2 and l_1?
max{0, x*_b(1) − x*_a(1)} = x*_ab(2,1) − min{x*_ab(1,2), x*_ab(2,1)} = max{0, x*_ab(2,1) − x*_ab(1,2)}
Probability that V_a and V_b are assigned l_2 and l_2?
1 − max{x*_a(1), x*_b(1)} = min{x*_a(2), x*_b(2)} = min{x*_ab(2,2) + x*_ab(1,2), x*_ab(2,2) + x*_ab(2,1)} = x*_ab(2,2) + min{x*_ab(1,2), x*_ab(2,1)}
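A quick simulation sketch that checks these closed forms (the fractional values x*_a(1), x*_b(1) below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
xa1, xb1 = 0.6, 0.4               # placeholder values of x*_a(1), x*_b(1)
trials = 200_000
r = rng.uniform(0.0, 1.0, size=trials)
ya = np.where(r <= xa1, 1, 2)     # the same r rounds both variables
yb = np.where(r <= xb1, 1, 2)

print(np.mean((ya == 1) & (yb == 1)))  # ≈ min{0.6, 0.4} = 0.4
print(np.mean((ya == 1) & (yb == 2)))  # ≈ max{0, 0.6 − 0.4} = 0.2
print(np.mean((ya == 2) & (yb == 2)))  # ≈ min{0.4, 0.6} = 0.4
```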

Simple Example - Move
min_y θ_a(y_a) + θ_b(y_b) + d(y_a, y_b),  y_a, y_b ∈ {1, 2}
If d is submodular, solve using graph cuts. Otherwise…

Simple Example - Move
min_y θ_a(y_a) + θ_b(y_b) + d'(y_a, y_b),  y_a, y_b ∈ {1, 2}
If d is submodular, solve using graph cuts. Otherwise, use a submodular overestimation d', estimated by minimizing the distortion.

Simple Example - Move
LP in the variables d'(i,k) and b:
min_{d', b} b
s.t. d'(1,1) ≤ b·d(1,1),  d'(1,2) ≤ b·d(1,2),  d'(2,1) ≤ b·d(2,1),  d'(2,2) ≤ b·d(2,2)
     d(1,1) ≤ d'(1,1),  d(1,2) ≤ d'(1,2),  d(2,1) ≤ d'(2,1),  d(2,2) ≤ d'(2,2)
     d'(1,1) + d'(2,2) ≤ d'(1,2) + d'(2,1)
The dual LP provides a worst-case rounding example.
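A sketch that solves this small LP with scipy for a placeholder non-submodular distance (the values of d are illustrative, not from the talk):

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: [d'11, d'12, d'21, d'22, b]
d = np.array([0.0, 1.0, 1.0, 3.0])       # placeholder: d(1,1)+d(2,2) > d(1,2)+d(2,1)
c = [0, 0, 0, 0, 1]                      # minimize b

A_ub, b_ub = [], []
for i in range(4):                       # d'(i,k) - b*d(i,k) <= 0
    row = [0.0] * 5
    row[i], row[4] = 1.0, -d[i]
    A_ub.append(row); b_ub.append(0.0)
A_ub.append([1, -1, -1, 1, 0])           # d'(1,1) + d'(2,2) <= d'(1,2) + d'(2,1)
b_ub.append(0.0)

bounds = [(d[i], None) for i in range(4)] + [(0, None)]  # d(i,k) <= d'(i,k)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)   # expect b = 1.5 with d' = (0, 1.5, 1.5, 3) for this d
```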

Simple Example - Move
min_{α,β,γ≥0}  d(1,1)β(1,1) + d(1,2)β(1,2) + d(2,1)β(2,1) + d(2,2)β(2,2)
s.t.  d(1,1)α(1,1) + d(1,2)α(1,2) + d(2,1)α(2,1) + d(2,2)α(2,2) = 1
      β(1,1) = α(1,1) + γ
      β(1,2) = α(1,2) − γ
      β(2,1) = α(2,1) − γ
      β(2,2) = α(2,2) + γ
Set x*_ab(i,k) = α(i,k) and γ = min{x*_ab(1,2), x*_ab(2,1)}.