1 of 56 Linear Programming-Based Approximation Algorithms Shoshana Neuburger Graduate Center, CUNY May 13, 2009.


2 of 56 Linear Programming (LP) Given: m linear inequality constraints; n non-negative, real-valued variables; a linear objective function. Goal: Optimize the objective function subject to the constraints.

3 of 56 LP Any solution that satisfies all the constraints is a feasible solution. The feasible region forms a convex polyhedron in n-space. If an optimal solution exists, one occurs at a corner point (vertex) of the feasible region.

4 of 56 LP Example Objective: min 5x_1 + 4x_2 Subject to: 5x_1 + 5x_2 ≥ 15, x_1 + 3x_2 ≥ 5, 2x_1 + x_2 ≥ 5, x_1, x_2 ≥ 0
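The claim that an LP optimum occurs at a corner point can be checked by brute force on an example this small. The sketch below (with variable non-negativity assumed, as in the standard form on slide 2) enumerates intersections of the constraint boundary lines, keeps the feasible ones, and takes the best objective value among them.

```python
from itertools import combinations

# Boundary lines a1*x1 + a2*x2 = b for the example LP, axes included.
lines = [(5, 5, 15), (1, 3, 5), (2, 1, 5), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    """Solve the 2x2 system given by two boundary lines; None if parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    x1, x2 = p
    return (5*x1 + 5*x2 >= 15 - 1e-9 and x1 + 3*x2 >= 5 - 1e-9
            and 2*x1 + x2 >= 5 - 1e-9 and x1 >= -1e-9 and x2 >= -1e-9)

corners = [p for l1, l2 in combinations(lines, 2)
           if (p := intersect(l1, l2)) is not None and feasible(p)]
best = min(corners, key=lambda p: 5*p[0] + 4*p[1])
print(best, 5*best[0] + 4*best[1])  # optimum over corner points: (2, 1), value 14
```

All three inequality constraints happen to be tight at the optimal vertex (2, 1) here; in general only some subset of constraints is tight at a corner.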

5 of 56 Solving LP Problems Efficient algorithms to solve LPs: Simplex algorithm (Dantzig, 1947) – practical, widely used, exponential time in the worst case. Ellipsoid algorithm (Khachiyan, 1979) – impractical, polynomial time. Interior point algorithm (Karmarkar, 1984) – practical, polynomial time.

6 of 56 LP – Standard Form Minimize Σ_j c_j x_j Subject to Σ_j a_ij x_j ≥ b_i, i = 1,…, m and x_j ≥ 0, j = 1,…, n

7 of 56 LP – Standard Form In matrix form: min c^T x Subject to Ax ≥ b and x ≥ 0. All LP problems can be transformed to standard form in polynomial time.

8 of 56 LP Example Minimize 7x_1 + x_2 + 5x_3 Subject to x_1 − x_2 + 3x_3 ≥ 10, 5x_1 + 2x_2 − x_3 ≥ 6, x_1, x_2, x_3 ≥ 0. Feasible point | Objective function value: (10, 5, 2) | 85; (8, 1, 1) | 62.

9 of 56 LP Opt Solution Upper bound on OPT: evaluate the objective function at any feasible point. How can we obtain lower bounds? A nonnegative linear combination of the constraints that is dominated by the objective function yields one; its multipliers correspond to a dual LP solution.

10 of 56 LP Example Minimize 7x_1 + x_2 + 5x_3 Subject to x_1 − x_2 + 3x_3 ≥ 10, 5x_1 + 2x_2 − x_3 ≥ 6. Adding the two constraints gives 6x_1 + x_2 + 2x_3 ≥ 16, and since any feasible solution has a nonnegative setting for each x_i, 7x_1 + x_2 + 5x_3 ≥ 6x_1 + x_2 + 2x_3 ≥ 16, so 16 is a lower bound on OPT.

11 of 56 Dual Finding the best lower bound on OPT is itself an LP. We call this problem the dual program, and the original the primal program.

12 of 56 Primal (min): minimize 7x_1 + x_2 + 5x_3 subject to x_1 − x_2 + 3x_3 ≥ 10, 5x_1 + 2x_2 − x_3 ≥ 6, x ≥ 0. Dual (max): maximize 10y_1 + 6y_2 subject to y_1 + 5y_2 ≤ 7, −y_1 + 2y_2 ≤ 1, 3y_1 − y_2 ≤ 5, y ≥ 0.

13 of 56 LP Duality The dual of the dual is the primal program. Existence of feasible solutions for the primal and dual with matching objective function values implies that the solution is optimal for both. (Picture: on the real line from 0 to ∞, dual solutions lie to the left, primal solutions to the right, and they meet where dual opt = primal opt.)

14 of 56 Primal (min): minimize Σ_j c_j x_j subject to Σ_j a_ij x_j ≥ b_i (i = 1,…, m), x_j ≥ 0 (j = 1,…, n). Dual (max): maximize Σ_i b_i y_i subject to Σ_i a_ij y_i ≤ c_j (j = 1,…, n), y_i ≥ 0 (i = 1,…, m).

15 of 56 Weak Duality Thm Let x be a feasible solution for the primal program, and y a feasible solution for the dual program. Then Σ_j c_j x_j ≥ Σ_i b_i y_i.

16 of 56 Proof of Weak Duality Since y is dual feasible and each x_j ≥ 0, Σ_j c_j x_j ≥ Σ_j (Σ_i a_ij y_i) x_j. Since x is primal feasible and each y_i ≥ 0, Σ_i (Σ_j a_ij x_j) y_i ≥ Σ_i b_i y_i. Reversing the order of summation shows the two middle quantities are equal, i.e., Σ_j c_j x_j ≥ Σ_i Σ_j a_ij x_j y_i ≥ Σ_i b_i y_i, which completes the proof.
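Weak duality can be checked numerically on the deck's running example (min 7x_1 + x_2 + 5x_3 subject to x_1 − x_2 + 3x_3 ≥ 10, 5x_1 + 2x_2 − x_3 ≥ 6, the LP whose feasible points (10,5,2) and (8,1,1) appear on slide 8). A minimal sketch follows; the particular optimal pair x* = (7/4, 0, 11/4), y* = (2, 1) is an assumption taken from the LP-duality example in the cited Vazirani text, not derived on the slides.

```python
# Primal: min c.x s.t. Ax >= b, x >= 0; dual: max b.y s.t. (A^T)y <= c, y >= 0.
A = [[1, -1, 3], [5, 2, -1]]
b = [10, 6]
c = [7, 1, 5]

def primal_value(x):
    assert all(sum(a * xi for a, xi in zip(row, x)) >= bi - 1e-9
               for row, bi in zip(A, b)), "x not primal feasible"
    return sum(ci * xi for ci, xi in zip(c, x))

def dual_value(y):
    # Dual constraints: for each column j, sum_i a_ij * y_i <= c_j.
    assert all(sum(A[i][j] * y[i] for i in range(2)) <= c[j] + 1e-9
               for j in range(3)), "y not dual feasible"
    return sum(bi * yi for bi, yi in zip(b, y))

print(primal_value([10, 5, 2]))  # 85: an upper bound on OPT
print(dual_value([1, 1]))        # 16: the slide's lower bound, as a dual point
# Assumed optimal pair: the bounds meet at 26, certifying optimality.
print(primal_value([7 / 4, 0, 11 / 4]), dual_value([2, 1]))
```

Note that the multiplier vector y = (1, 1) is exactly "add the two constraints" from slide 10, so the dual solution formalizes that lower-bounding trick.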

17 of 56 Duality Thm If an LP has an optimal solution, so does its dual, and at optimality their costs are equal.

18 of 56 Duality Thm The primal program has a finite optimum iff its dual has a finite optimum. If the primal program has optimal solution x* and the dual program has optimal solution y*, then Σ_j c_j x*_j = Σ_i b_i y*_i.

19 of 56 Complementary Slackness Let x be a primal feasible solution and y a dual feasible solution. Then x and y are both optimal iff (primal conditions) for each j: x_j = 0 or Σ_i a_ij y_i = c_j, and (dual conditions) for each i: y_i = 0 or Σ_j a_ij x_j = b_i.

20 of 56 LP LP techniques can be used to find exact algorithmic solutions. Example: max flow in a network from source to sink – the max-flow min-cut algorithm, where the min cut arises from the dual LP.

21 of 56 Integer Programming Integer programming asks whether a system of linear inequalities with integer coefficients has an integer solution (NP-complete). Linear programming asks whether a system of linear inequalities with integer coefficients has a rational solution (P). Interesting to note: restricting the allowable problem instances can only make a problem easier or leave it unchanged, while restricting the allowable solutions may make a problem easier, as hard, or harder.

22 of 56 0-1 Integer Programming 0-1 Integer Programming is NP-complete (Karp, 1972). Input: integer matrix C and integer vector b. Property: there exists a 0-1 vector x such that Cx ≥ b. Reduction of SAT to 0-1 Integer Programming: C_ij = 1 if x_j ∈ clause i; C_ij = −1 if the complement of x_j ∈ clause i; C_ij = 0 otherwise; b_i = 1 − (# of complemented variables in clause i). Then clause i is satisfied iff (Cx)_i ≥ b_i.
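The reduction above is mechanical enough to sketch in code. This is a minimal illustration, assuming a common clause encoding (a clause is a list of nonzero integers, +j for variable x_j and −j for its complement); the function names are illustrative, not from any library.

```python
def sat_to_ip(clauses, n):
    """Build (C, b) so that a 0-1 vector x satisfies the CNF formula
    iff (Cx)_i >= b_i for every clause i."""
    C = [[0] * n for _ in clauses]
    b = []
    for i, clause in enumerate(clauses):
        negated = 0
        for lit in clause:
            j = abs(lit) - 1
            C[i][j] = 1 if lit > 0 else -1
            if lit < 0:
                negated += 1
        b.append(1 - negated)  # b_i = 1 - (# complemented variables)
    return C, b

def satisfies(C, b, x):
    return all(sum(cij * xj for cij, xj in zip(row, x)) >= bi
               for row, bi in zip(C, b))

# (x1 or not x2) and (x2 or x3), over 3 variables
C, b = sat_to_ip([[1, -2], [2, 3]], 3)
print(satisfies(C, b, [1, 0, 1]))  # True: both clauses satisfied
print(satisfies(C, b, [0, 1, 0]))  # False: first clause fails
```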

23 of 56 LP Relaxations Approximation Algorithms: 1. Formulate the problem as an IP. (Since 0-1 IP is NP-hard, any NPO problem can be reduced to a 0-1 IP problem.) 2. LP relaxation: let the variables in the IP take on real values. 3. Use the LP (and its solution) to get a solution to the IP, and hence to the original problem.


25 of 56 Integrality Gap OPT_f(I): cost of an optimal fractional solution to instance I. Integrality gap: the worst-case gap between the integer optimum and the fractional optimum, i.e. sup_I OPT(I)/OPT_f(I) for a minimization LP. Exact relaxation: integrality gap = 1. The best approximation factor we can hope to prove via the LP lower bound is the integrality gap of the relaxation.

26 of 56 Approximation via LP Find good formulations; prove constructive (algorithmic) bounds on the integrality gap; translate these bounds into effective algorithms.

27 of 56 Pros of LP approach Generic paradigm that applies to all NPO problems. The LP solution gives both a lower bound OPT_f(I) on OPT(I) (in the minimization case) and useful information for converting (rounding) the fractional solution into an integer solution. For many problems, the rounded solution is of better quality than the integrality gap guarantees. Often the LP can be solved faster than the original formulation, or its insight leads to a combinatorial algorithm that is much faster in practice.

28 of 56 Cons of LP approach LPs are not easy to solve quickly, although polynomial-time algorithms exist. Numerical issues (the algorithms are not strongly polynomial time). Typical formulations have large size, making solving the LP infeasible in some cases. The approach does not eliminate the search for a good formulation (algorithm).

29 of 56 Set Cover Given: universe U of n elements; S = {S_1, S_2, …, S_k}, a collection of subsets of U; cost function c: S → Q^+. Goal: Find a minimum-cost subcollection of S covering all of U.

30 of 56 Set Cover The frequency of an element is the number of sets it is in; let f be the frequency of the most frequent element. Set Cover is NP-hard. Approximation algorithms achieve factor O(log n) or f. Vertex Cover is Set Cover with f = 2.

31 of 56 Set Cover - Greedy Greedy Alg. (Vazirani, Chap 2): iteratively choose the most cost-effective set until all elements are covered. The most cost-effective set has the lowest ratio of cost to the number of new elements it covers. With dual fitting, one can prove an approximation guarantee of H_n, the nth harmonic number.
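The greedy rule above fits in a few lines. A minimal sketch, assuming the given sets do cover the universe (the instance and costs below are illustrative, not from the slides):

```python
def greedy_set_cover(universe, sets, cost):
    """sets: name -> frozenset of elements; cost: name -> positive cost.
    Repeatedly pick the most cost-effective set: lowest cost per newly
    covered element."""
    uncovered = set(universe)
    picked = []
    while uncovered:
        best = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: cost[s] / len(sets[s] & uncovered))
        picked.append(best)
        uncovered -= sets[best]
    return picked

U = {1, 2, 3, 4}
S = {"a": frozenset({1, 2}), "b": frozenset({3, 4}), "c": frozenset({1, 2, 3, 4})}
c = {"a": 1.0, "b": 1.0, "c": 3.0}
cover = greedy_set_cover(U, S, c)
print(cover)  # picks the two cheap sets rather than the expensive big one
```

Here greedy pays 2 while the single set "c" costs 3; in general greedy can be off by a factor of H_n, which is what the dual-fitting analysis bounds.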

32 of 56 Set Cover as IP Indicator variable x_i ∈ {0,1} for each set S_i in S. The constraint is that we choose at least one of the sets containing each element. Minimize Σ_i c(S_i) x_i Subject to Σ_{i: e ∈ S_i} x_i ≥ 1 for each e ∈ U, x_i ∈ {0,1}.

33 of 56 LP-relaxation of IP Replace the constraint x_i ∈ {0,1} with 0 ≤ x_i ≤ 1 for each set S_i in S. Minimize Σ_i c(S_i) x_i Subject to Σ_{i: e ∈ S_i} x_i ≥ 1 for each e ∈ U, 0 ≤ x_i ≤ 1.

34 of 56 Primal (covering): minimize Σ_S c(S) x_S subject to Σ_{S: e ∈ S} x_S ≥ 1 for each e ∈ U, x_S ≥ 0. Dual (packing): maximize Σ_{e ∈ U} y_e subject to Σ_{e ∈ S} y_e ≤ c(S) for each S, y_e ≥ 0. When the constraint matrix, objective function, and right-hand side are all ≥ 0, the min LP is a covering problem and the max LP is a packing problem.

35 of 56 LP Relaxation For a minimization LP problem: on the real line from 0 to ∞, every dual fractional solution ≤ OPT_f ≤ OPT ≤ every primal integral solution, while primal fractional solutions lie anywhere above OPT_f.

36 of 56 Method I: Dual Fitting Combinatorial algorithm (greedy, for set cover). Use the LP relaxation and its dual. Show: c(primal integral solution) ≤ c(dual), but the dual is infeasible. Divide the dual by a factor so that the shrunk dual is feasible and hence a lower bound on OPT. The factor is the approximation guarantee of the algorithm. With dual fitting, one can prove an approximation guarantee of H_n for greedy set cover (Chvátal, 1979).

37 of 56 Method II: Rounding Rounding Algorithm: Solve the LP to get an optimal solution x*. Include S_j in the integer solution I if x*_j ≥ 1/f. Rounding is an f-approximation (Hochbaum, 1982). Proof: each element e belongs to at most f sets, so among the sets containing e some x*_j ≥ 1/f; hence I covers U. Rounding scales each x*_j up by at most f, so c(I) ≤ f · OPT_f ≤ f · OPT.
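The rounding step itself is a one-line threshold test once the LP has been solved. A minimal sketch on Vertex Cover of a triangle (Set Cover with f = 2); the fractional values x* = 0.5 are assumed to be the LP optimum for this instance rather than computed here, since the slides treat the LP solver as a black box.

```python
def round_solution(x_frac, sets, universe):
    """Include set i whenever x*_i >= 1/f, f = max element frequency."""
    f = max(sum(1 for s in sets.values() if e in s) for e in universe)
    chosen = [i for i, v in x_frac.items() if v >= 1.0 / f - 1e-9]
    return chosen, f

# Triangle graph as Set Cover: vertices 1,2,3 cover their incident edges.
universe = {"e12", "e13", "e23"}
sets = {1: {"e12", "e13"}, 2: {"e12", "e23"}, 3: {"e13", "e23"}}
# Assumed fractional LP optimum: half of every vertex, total cost 1.5.
x_frac = {1: 0.5, 2: 0.5, 3: 0.5}
chosen, f = round_solution(x_frac, sets, universe)
print(f, chosen)  # f = 2; all three vertices chosen, cost 3 <= f * 1.5
```

The triangle also shows the factor f is tight for this method: rounding pays 3 against a fractional optimum of 1.5.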

38 of 56 Method III: Primal-Dual Yields combinatorial algorithms – no time is spent solving an LP. Used to find efficient, exact algorithms for problems in P: matching, network flow, shortest path. Property: their LP relaxations have integral solutions.

39 of 56 Method III: Primal-Dual Recall: optimal solutions to LPs satisfy the complementary slackness conditions. Iterative method: Begin with initial solutions to the primal and dual. Iteratively satisfy more complementary slackness conditions, modifying the primal integrally. Once all conditions are met, the solutions must be optimal. What if the optimal solution is not integral? Relax the conditions.

40 of 56 Duality Thm The primal program has a finite optimum iff its dual has a finite optimum. If the primal program has optimal solution x* and the dual program has optimal solution y*, then Σ_j c_j x*_j = Σ_i b_i y*_i.

41 of 56 Complementary Slackness Let x be a primal feasible solution and y a dual feasible solution. Then x and y are both optimal iff (primal conditions) for each j: x_j = 0 or Σ_i a_ij y_i = c_j, and (dual conditions) for each i: y_i = 0 or Σ_j a_ij x_j = b_i.

42 of 56 Relaxed Complementary Slackness Primal conditions (α ≥ 1): for each j, x_j = 0 or c_j/α ≤ Σ_i a_ij y_i ≤ c_j. Dual conditions (β ≥ 1): for each i, y_i = 0 or b_i ≤ Σ_j a_ij x_j ≤ β·b_i. If x and y are primal and dual feasible solutions satisfying these conditions, then Σ_j c_j x_j ≤ αβ · Σ_i b_i y_i.

43 of 56 Relaxed Complementary Slackness Proof: Σ_j c_j x_j ≤ α Σ_j (Σ_i a_ij y_i) x_j = α Σ_i (Σ_j a_ij x_j) y_i ≤ αβ Σ_i b_i y_i.

44 of 56 Method III: Primal-Dual Algorithm: Start with a primal infeasible solution and a dual feasible solution. Iteratively improve the feasibility of the primal solution and the optimality of the dual solution. End when all relaxed complementary slackness conditions are met with suitable α and β. The primal is always modified integrally, ensuring that the final solution is integral. The approximation guarantee is αβ.

45 of 56 Method III: Primal-Dual Primal-dual scheme to obtain an f-approximation of set cover: let α = 1, β = f. Complementary slackness conditions: Primal: x_S ≠ 0 ⇒ Σ_{e ∈ S} y_e = c(S), so increment the primal integrally by picking only tight sets. Dual: since x is a 0/1 solution, each element with nonzero dual value is covered at most f times, which satisfies the relaxed dual condition.

46 of 56 Primal-Dual Set Cover Algorithm Initialize: x = 0, y = 0. Do until all elements are covered: – Pick an uncovered element e and raise y_e until some set is tight. – Pick all tight sets in the cover and update x. – Declare all the elements occurring in these sets as covered. The approximation guarantee: αβ = f.
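The three steps above can be sketched directly. A minimal implementation, assuming the sets cover the universe (instance and costs are illustrative); note that no LP is ever solved, the dual variables y_e are just numbers the algorithm raises.

```python
def primal_dual_set_cover(universe, sets, cost):
    """Raise y_e for an uncovered element until some set goes tight,
    then pick every tight set. An f-approximation sketch."""
    y = {e: 0.0 for e in universe}
    picked, covered = set(), set()
    eps = 1e-9

    def slack(s):  # remaining room in the dual packing constraint for set s
        return cost[s] - sum(y[e] for e in sets[s])

    while covered != set(universe):
        e = next(el for el in universe if el not in covered)
        # Raise y_e just until the least-slack set containing e is tight.
        y[e] += min(slack(s) for s in sets if e in sets[s])
        for s in sets:
            if slack(s) <= eps:  # tight set: primal condition allows picking it
                picked.add(s)
                covered |= set(sets[s])
    return picked

U = {1, 2, 3, 4}
S = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}}
c = {"a": 1.0, "b": 4.0, "c": 1.0}
print(primal_dual_set_cover(U, S, c))  # avoids the expensive middle set
```

On this instance the algorithm raises y_1 to 1 (set "a" goes tight) and y_3 to 1 (set "c" goes tight), returning {"a", "c"} of cost 2, which here is optimal.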

47 of 56 Min Makespan Scheduling Given: set of jobs, J = {j_1, j_2, …, j_n}; set of machines, M = {m_1, m_2, …, m_m}; processing time p_ij of job j on machine i. Goal: Schedule the jobs so as to minimize the makespan, the maximum processing time of any machine.

48 of 56 Min Makespan Scheduling PTASs exist for: identical processing time on all machines; uniform parallel machines. We focus on the unrelated case, in which there is no relation between the processing times of a job on different machines. Here we can use parametric pruning in an LP setting.

49 of 56 IP for Scheduling on Unrelated Parallel Machines Indicator variable x_ij ∈ {0,1} for each job j and each machine i; makespan variable t. Minimize t Subject to Σ_i x_ij = 1 for each job j, Σ_j x_ij p_ij ≤ t for each machine i, x_ij ∈ {0,1}.

50 of 56 LP Relaxation with Parametric Pruning Parameter T: a guessed lower bound on the makespan. Pruning: eliminate assignments in which p_ij > T; let S_T = {(i, j) : p_ij ≤ T}. LP(T) (a feasibility LP): Σ_{i: (i,j) ∈ S_T} x_ij = 1 for each job j, Σ_{j: (i,j) ∈ S_T} x_ij p_ij ≤ T for each machine i, x_ij ≥ 0.

51 of 56 LP Relaxation with Parametric Pruning Family of linear programs, LP(T): one LP for each value of the parameter T. Observe: any extreme point solution to LP(T) has at most n + m nonzero variables, and hence must set at least n − m jobs integrally.

52 of 56 LP-Rounding Algorithm For an extreme point solution x to LP(T), let G = (J, M, E) be the bipartite graph with an edge between job j and machine i whenever x_ij ≠ 0. Let H = (J′, M′, E′) be the subgraph of G induced on the set of jobs that are fractionally set in x. A perfect matching in H matches every fractionally set job to a distinct machine. Fact: H has a perfect matching.

53 of 56 LP-Rounding Algorithm Compute a range in which to search for the right value of T. Greedy schedule: assign each job to the machine on which it is processed the fastest. Let α be the makespan of this schedule. The range we consider is [α/m, α].
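The greedy schedule that seeds the binary search is simple to state in code. A minimal sketch; the processing-time matrix is an assumed example, and the binary search plus LP(T) solve from the next slide are left out.

```python
def greedy_makespan(p):
    """p[i][j]: processing time of job j on machine i.
    Assign each job to the machine that processes it fastest and
    return the resulting makespan and per-machine loads."""
    m, n = len(p), len(p[0])
    load = [0.0] * m
    for j in range(n):
        fastest = min(range(m), key=lambda i: p[i][j])
        load[fastest] += p[fastest][j]
    return max(load), load

# 2 machines, 3 jobs (times assumed for illustration)
p = [[2, 9, 4],
     [3, 1, 8]]
alpha, load = greedy_makespan(p)
print(alpha, load)  # binary-search LP(T) over T in [alpha/m, alpha]
```

Each machine receives at most the sum of the fastest times, so this α is within a factor m of OPT, which is what makes [α/m, α] a valid search interval.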

54 of 56 LP-Rounding Algorithm 1. By binary search in the interval [α/m, α], find the smallest value of T for which LP(T) has a feasible solution; call it T*. 2. Find an extreme point solution x to LP(T*). 3. Assign all jobs that are integrally assigned in x to their machines. 4. Construct the graph H; find a perfect matching M in H. 5. Assign the fractionally set jobs to machines according to the perfect matching M.

55 of 56 LP-Rounding Algorithm Theorem: The algorithm achieves a 2-approximation for scheduling on unrelated parallel machines. Proof: T* ≤ OPT, since LP(OPT) has a feasible solution. x has fractional makespan ≤ T*, so the restriction of x to the integrally set jobs has makespan ≤ T*. Each edge (i, j) of H satisfies p_ij ≤ T*, and M schedules at most one extra job on each machine. Total makespan ≤ 2T* ≤ 2·OPT.

56 of 56 References V. V. Vazirani. Approximation Algorithms. Springer-Verlag. Christos H. Papadimitriou & Kenneth Steiglitz. Combinatorial Optimization. Prentice Hall. Dorit S. Hochbaum. Approximation Algorithms for NP-Hard Problems. PWS Publishing Company. David P. Williamson. Lecture Notes on Approximation Algorithms. t9and10.pdf