1 of 56 Linear Programming-Based Approximation Algorithms. Shoshana Neuburger, Graduate Center, CUNY. May 13, 2009

2 of 56 Linear Programming (LP) Given: m linear inequality constraints; n non-negative, real-valued variables; a linear objective function. Goal: Optimize the objective function subject to the constraints.

3 of 56 LP Any solution that satisfies all the constraints is a feasible solution. The feasible region forms a convex polytope in n-space. If there is an optimal solution, it occurs at a corner point (vertex) of the region.

4 of 56 LP Example Objective: min 5x_1 + 4x_2. Subject to: 5x_1 + 5x_2 ≥ 15, x_1 + 3x_2 ≥ 5, 2x_1 + x_2 ≥ 5, x_1, x_2 ≥ 0.
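For readers who want to experiment, here is a minimal sketch of solving this example with SciPy's linprog (the solver choice is an assumption; the deck does not prescribe one). linprog minimizes c·x subject to A_ub·x ≤ b_ub, so each ≥ constraint is negated.

from scipy.optimize import linprog

c = [5, 4]                             # objective: 5*x1 + 4*x2
A_ub = [[-5, -5], [-1, -3], [-2, -1]]  # ">=" constraints, negated for linprog
b_ub = [-15, -5, -5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)                  # expect roughly x = (2, 1), value 14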

5 of 56 Solving LP Problems Efficient algorithms to solve LPs: Simplex algorithm (Dantzig, 1947) – practical and widely used, but exponential time in the worst case. Ellipsoid algorithm (Khachiyan, 1979) – polynomial time, but impractical. Interior point algorithm (Karmarkar, 1984) – practical, polynomial time.

6 of 56 LP – Standard Form
Minimize Σ_j c_j x_j
Subject to Σ_j a_ij x_j ≥ b_i, i = 1,…, m
x_j ≥ 0, j = 1,…, n

7 of 56 LP – Standard Form In matrix form: min c·x Subject to Ax ≥ b and x ≥ 0. All LP problems can be transformed to the standard form in polynomial time.

8 of 56 LP Example
Minimize 7x_1 + x_2 + 5x_3
Subject to x_1 − x_2 + 3x_3 ≥ 10
5x_1 + 2x_2 − x_3 ≥ 6
x_1, x_2, x_3 ≥ 0

Feasible point | Objective function value
(10, 5, 2) | 85
(8, 1, 1) | 62

9 of 56 LP Opt Solution Upper bound on the OPT value: evaluate the objective function at any feasible point. How can we obtain lower bounds? A linear combination of the constraints whose left-hand side is dominated by the objective function gives a lower bound; such a combination corresponds to a dual LP solution.

10 of 56 LP Example
Minimize 7x_1 + x_2 + 5x_3
Subject to x_1 − x_2 + 3x_3 ≥ 10
5x_1 + 2x_2 − x_3 ≥ 6
Adding the two constraints gives (x_1 − x_2 + 3x_3) + (5x_1 + 2x_2 − x_3) = 6x_1 + x_2 + 2x_3 ≥ 16.
16 is a lower bound, since any feasible solution has a nonnegative setting for each x_i, so 7x_1 + x_2 + 5x_3 ≥ 6x_1 + x_2 + 2x_3 ≥ 16.

11 of 56 Dual Finding the best lower bound on OPT is itself an LP. We call this problem the dual program, and the original the primal program.

12 of 56 Primal / Dual
Primal: min 7x_1 + x_2 + 5x_3
subject to x_1 − x_2 + 3x_3 ≥ 10, 5x_1 + 2x_2 − x_3 ≥ 6, x_1, x_2, x_3 ≥ 0.
Dual: max 10y_1 + 6y_2
subject to y_1 + 5y_2 ≤ 7, −y_1 + 2y_2 ≤ 1, 3y_1 − y_2 ≤ 5, y_1, y_2 ≥ 0.

13 of 56 LP Duality The dual of the dual is the primal program. Existence of feasible solutions for primal and dual with matching objective function values implies that the solution is optimal to both. On the line from 0 to ∞, dual solution values approach the common optimum from below and primal solution values from above; in the example they meet at 26, where dual opt = primal opt.
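As a hedged check of the running example (reconstructed above from the listed feasible points), solving both programs numerically should give matching optima of 26; the encoding below is illustrative.

from scipy.optimize import linprog

# Primal: min 7x1 + x2 + 5x3 s.t. x1 - x2 + 3x3 >= 10, 5x1 + 2x2 - x3 >= 6
primal = linprog([7, 1, 5], A_ub=[[-1, 1, -3], [-5, -2, 1]], b_ub=[-10, -6])

# Dual: max 10y1 + 6y2 s.t. y1 + 5y2 <= 7, -y1 + 2y2 <= 1, 3y1 - y2 <= 5
# (linprog minimizes, so the dual objective is negated)
dual = linprog([-10, -6], A_ub=[[1, 5], [-1, 2], [3, -1]], b_ub=[7, 1, 5])

print(primal.fun, -dual.fun)           # both should print 26.0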

14 of 56 Primal / Dual
Primal: min Σ_j c_j x_j
subject to Σ_j a_ij x_j ≥ b_i (i = 1,…, m), x_j ≥ 0 (j = 1,…, n).
Dual: max Σ_i b_i y_i
subject to Σ_i a_ij y_i ≤ c_j (j = 1,…, n), y_i ≥ 0 (i = 1,…, m).

15 of 56 Weak Duality Thm Let x be a feasible solution for the primal program, and y a feasible solution for the dual program. Then Σ_j c_j x_j ≥ Σ_i b_i y_i.

16 of 56 Proof of Weak Duality Since y is dual feasible and x ≥ 0, we can see Σ_j c_j x_j ≥ Σ_j (Σ_i a_ij y_i) x_j = Σ_i (Σ_j a_ij x_j) y_i ≥ Σ_i b_i y_i, i.e., Σ_j c_j x_j ≥ Σ_i b_i y_i, which completes the proof.

17 of 56 Duality Thm If an LP has an optimal solution, so does its dual, and at optimality their costs are equal.

18 of 56 Duality Thm The primal program has finite optimum iff its dual has finite optimum. If the primal program has opt x* and the dual program has opt y*, then Σ_j c_j x*_j = Σ_i b_i y*_i.

19 of 56 Complementary Slackness Let x be a primal feasible solution and y a dual feasible solution. Then x and y are both optimal iff
(primal condition) for each 1 ≤ j ≤ n: either x_j = 0 or Σ_i a_ij y_i = c_j, and
(dual condition) for each 1 ≤ i ≤ m: either y_i = 0 or Σ_j a_ij x_j = b_i.

20 of 56 LP Can use LP techniques to find exact algorithmic solutions. Example: max flow in a network from source to sink – the max-flow min-cut algorithm.

21 of 56 Integer Programming Integer programming asks whether a system of linear inequalities with integer coefficients has an integer solution (NP-complete). Linear programming asks whether a system of linear inequalities with integer coefficients has a rational solution (in P). Interesting to note: adding restrictions on the allowable problem instances will not change the complexity here, while adding restrictions on the allowable solutions may make a problem easier, as hard, or harder.

22 of 56 0-1 Integer Programming Karp (1972): 0-1 Integer Programming is NP-complete.
Input: integer matrix C and integer vector b.
Property: there exists a 0-1 vector x such that Cx = b.
Reduction of SAT to 0-1 Integer Programming:
C_ij = 1 if x_j ∈ clause i; = −1 if the complement of x_j ∈ clause i; = 0 otherwise.
b_i = 1 − (# of complemented variables in clause i).
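A small sketch of this construction in Python may help; the clause encoding (signed integers, +j for x_j and -j for its complement) is an assumption for illustration, not from the deck. With this matrix, clause i is satisfied exactly when row i of Cx is at least b_i; the equality form adds bounded slack variables.

def sat_to_ip(clauses, n_vars):
    C = [[0] * n_vars for _ in clauses]
    b = []
    for i, clause in enumerate(clauses):
        complemented = 0
        for lit in clause:
            j = abs(lit) - 1
            if lit > 0:
                C[i][j] = 1         # x_j appears in clause i
            else:
                C[i][j] = -1        # complement of x_j appears in clause i
                complemented += 1
        b.append(1 - complemented)  # b_i = 1 - (# complemented variables)
    return C, b

# (x1 or not x2) and (x2 or x3)
C, b = sat_to_ip([[1, -2], [2, 3]], n_vars=3)
print(C, b)                         # [[1, -1, 0], [0, 1, 1]] [0, 1]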

23 of 56 LP Relaxations Approximation algorithms:
1. Formulate the problem as an IP. (Since 0-1 IP is NP-hard, any NPO problem can be reduced to a 0-1 IP problem.)
2. LP relaxation – let the variables in the IP problem take on real values.
3. Use the LP (and its solution) to get a solution to the IP, and hence to the original problem.

24 of 56 http://www.cs.uiuc.edu/homes/chekuri/teaching/fall2006/lect9and10.pdf

25 of 56 Integrality Gap OPT_f(I): cost of an optimal fractional solution to instance I. Integrality gap: the worst-case gap between the integer optimum and the fractional optimum, i.e., sup_I OPT(I) / OPT_f(I) for a minimization LP. Exact relaxation: integrality gap = 1. The best approximation factor we can hope to prove via the relaxation's bound is the integrality gap of the relaxation.

26 of 56 Approximation via LP Find good formulations. Prove constructive (algorithmic) bounds on the integrality gap. Translate these into effective algorithms.

27 of 56 Pros of LP approach Generic paradigm that applies to all NPO problems. The LP solution gives both a lower bound OPT_LP(I) on OPT(I) (in the case of minimization) and useful information for converting (rounding) the fractional solution into an integer solution. For many problems it yields solutions of better quality than guaranteed by the integrality gap. Often the LP can be solved faster than the original formulation, or the insight leads to a combinatorial algorithm that is much faster in practice.

28 of 56 Cons of LP approach LPs are not easy to solve quickly, although polynomial-time algorithms exist. Numerical issues (not strongly polynomial time). Typical formulations have large size; solving them is infeasible in some cases. Does not completely eliminate the search for a good formulation (algorithm).

29 of 56 Set Cover Given: a universe U of n elements; S = {S_1, S_2, …, S_k}, a collection of subsets of U; a cost function c: S → Q^+. Goal: find a minimum-cost subcollection of S covering all of U.

30 of 56 Set Cover The frequency of an element is the number of sets it is in. f = frequency of the most frequent element. Set Cover is an NP-hard problem. Approximation algorithms achieve O(log n) or f. Vertex Cover is Set Cover with f = 2.

31 of 56 Set Cover – Greedy Greedy algorithm (Vazirani, Chap. 2): iteratively choose the most cost-effective set until all elements are covered. The most cost-effective set has the lowest ratio of cost to the number of new elements it covers. With dual fitting, one can prove an approximation guarantee of H_n (a sketch of the greedy algorithm follows below).
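A minimal sketch of the greedy algorithm; the data layout (sets as frozensets keyed into a cost dictionary) is an assumption for illustration.

def greedy_set_cover(universe, sets, cost):
    covered = set()
    chosen = []
    while covered != universe:
        # most cost-effective set: lowest cost per newly covered element
        best = min((s for s in sets if s - covered),
                   key=lambda s: cost[s] / len(s - covered))
        chosen.append(best)
        covered |= best
    return chosen

U = {1, 2, 3, 4, 5}
S1, S2, S3 = frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5})
print(greedy_set_cover(U, [S1, S2, S3], {S1: 3, S2: 2, S3: 2}))
# covers U, e.g. with S1 and S3 at total cost 5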

32 of 56 Set Cover as IP Indicator variable x_i ∈ {0,1} for each set S_i in S. The constraint is that we choose at least one of the sets containing each element.
Minimize Σ_i c(S_i) · x_i
Subject to Σ_{i: e ∈ S_i} x_i ≥ 1 for each e ∈ U
x_i ∈ {0,1}

33 of 56 LP-relaxation of IP The indicator variable x_i ∈ {0,1} for each set S_i in S is relaxed to 0 ≤ x_i ≤ 1.
Minimize Σ_i c(S_i) · x_i
Subject to Σ_{i: e ∈ S_i} x_i ≥ 1 for each e ∈ U
0 ≤ x_i ≤ 1

34 of 56 Primal (covering) / Dual (packing)
Primal (covering): min Σ_j c_j x_j subject to Σ_j a_ij x_j ≥ b_i, x_j ≥ 0.
Dual (packing): max Σ_i b_i y_i subject to Σ_i a_ij y_i ≤ c_j, y_i ≥ 0.
When the constraint matrix, objective function, and right-hand side are all ≥ 0, the min LP is a covering LP and the max LP is a packing LP.

35 of 56 LP Relaxation For a minimization LP problem, on the line from 0 to ∞: dual fractional solutions ≤ OPT_f ≤ primal fractional solutions, and OPT_f ≤ OPT ≤ primal integral solutions.

36 of 56 Method I: Dual Fitting Combinatorial algorithm (greedy for set cover). Use the LP relaxation and its dual. Show: c(primal integral solution) ≤ c(dual), but the dual is infeasible. Divide the dual by a factor so that the shrunk dual is feasible and is a lower bound on OPT. The factor is the approximation guarantee of the algorithm. With dual fitting, one can prove an approximation guarantee of H_n for greedy set cover (Chvátal 1979).

37 of 56 Method II: Rounding Rounding algorithm: solve the LP to get an OPT solution, x*. Include S_j in the integer solution I if x*_j ≥ 1/f. Rounding is an f-approximation (Hochbaum 1982). Proof: each element e lies in at most f sets and Σ_{j: e ∈ S_j} x*_j ≥ 1, so some set containing e has x*_j ≥ 1/f and is included; each x*_j is inflated by at most a factor of f, so c(I) ≤ f · OPT_f ≤ f · OPT. (A sketch follows below.)
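A hedged sketch of this rounding with SciPy (instance encoding and solver choice are assumptions): solve the relaxation, then keep every set whose fractional value is at least 1/f.

from scipy.optimize import linprog

U = [1, 2, 3, 4]
sets = [{1, 2}, {2, 3}, {3, 4}]
cost = [1.0, 1.0, 1.0]

f = max(sum(1 for s in sets if e in s) for e in U)  # max frequency

# each element's cover constraint (>= 1), negated for linprog's <= form
A_ub = [[-1 if e in s else 0 for s in sets] for e in U]
b_ub = [-1] * len(U)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(sets))
cover = [i for i, xi in enumerate(res.x) if xi >= 1 / f - 1e-9]
print(cover)                         # chosen set indices; cost <= f * OPT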

38 of 56 Method III: Primal-Dual Yields combinatorial algorithms – no time spent solving an LP problem. Used to find efficient, exact algorithms for problems in P: matching, network flow, shortest path. Property: their LP relaxations have integral solutions.

39 of 56 Method III: Primal-Dual Recall: optimal solutions to LPs satisfy the complementary slackness conditions. Iterative method: begin with initial solutions to the primal and dual; iteratively start satisfying the complementary slackness conditions, modifying the primal integrally. Once all conditions are met, the solutions must be optimal. What if the optimal solution is not integral? Relax the conditions.

40 of 56 Duality Thm The primal program has finite optimum iff its dual has finite optimum. If the primal program has opt x* and the dual program has opt y*, then Σ_j c_j x*_j = Σ_i b_i y*_i.

41 of 56 Complementary Slackness Let x be a primal feasible solution and y a dual feasible solution. Then x and y are both optimal iff
(primal condition) for each 1 ≤ j ≤ n: either x_j = 0 or Σ_i a_ij y_i = c_j, and
(dual condition) for each 1 ≤ i ≤ m: either y_i = 0 or Σ_j a_ij x_j = b_i.

42 of 56 Relaxed Complementary Slackness
Primal condition: for each 1 ≤ j ≤ n: either x_j = 0 or c_j / α ≤ Σ_i a_ij y_i ≤ c_j.
Dual condition: for each 1 ≤ i ≤ m: either y_i = 0 or b_i ≤ Σ_j a_ij x_j ≤ β · b_i.
If x and y are primal and dual feasible solutions satisfying the above conditions, then Σ_j c_j x_j ≤ α · β · Σ_i b_i y_i.

43 of 56 Relaxed Complementary Slackness Proof: Σ_j c_j x_j ≤ α Σ_j (Σ_i a_ij y_i) x_j = α Σ_i (Σ_j a_ij x_j) y_i ≤ α β Σ_i b_i y_i. The first inequality uses the primal conditions, the middle step exchanges the order of summation, and the last inequality uses the dual conditions.

44 of 56 Method III: Primal-Dual Algorithm: Start with a primal infeasible solution and a dual feasible solution. Iteratively improve the feasibility of the primal solution and the optimality of the dual solution. End when all relaxed complementary slackness conditions are met with suitable α and β. The primal is always modified integrally, ensuring that the final solution is integral. The approximation guarantee is αβ.

45 of 56 Method III: Primal-Dual Primal-dual f-approximation of set cover: let α = 1, β = f. Complementary slackness conditions:
Primal: x_S ≠ 0 ⇒ Σ_{e ∈ S} y_e = c(S). Increment the primal integrally – pick only tight sets.
Dual: y_e ≠ 0 ⇒ Σ_{S: e ∈ S} x_S ≤ f. With a 0/1 solution for x, each element having nonzero dual value is covered at most f times.

46 of 56 Primal-Dual Set Cover Algorithm Initialize: x ← 0, y ← 0. Do until all elements are covered:
– Pick an uncovered element, e, and raise y_e until some set becomes tight.
– Pick all tight sets into the cover and update x.
– Declare all the elements occurring in these sets as covered.
The approximation guarantee: αβ = f. (A sketch follows below.)
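A minimal sketch of this algorithm; the instance encoding and the tolerance are assumptions for illustration.

def primal_dual_set_cover(universe, sets, cost):
    y = {e: 0.0 for e in universe}          # dual variables, start at 0
    x = [0] * len(sets)                     # primal (integral), start at 0
    covered = set()
    while covered != universe:
        e = next(iter(universe - covered))  # an uncovered element
        # raise y_e until some set containing e becomes tight
        y[e] += min(cost[i] - sum(y[u] for u in s)
                    for i, s in enumerate(sets) if e in s)
        # pick all tight sets and mark their elements covered
        for i, s in enumerate(sets):
            if x[i] == 0 and sum(y[u] for u in s) >= cost[i] - 1e-9:
                x[i] = 1
                covered |= s
    return [i for i, xi in enumerate(x) if xi]

print(primal_dual_set_cover({1, 2, 3}, [{1, 2}, {2, 3}], [1.0, 1.0]))
# picks both sets; every element with nonzero y_e is covered at most f times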

47 of 56 Min Makespan Scheduling Given:
– a set of jobs, J = {j_1, j_2, …, j_n};
– a set of machines, M = {m_1, m_2, …, m_m};
– the processing time p_ij of job j on machine i.
Goal: schedule the jobs so as to minimize the makespan, the maximum total processing time on any machine.

48 of 56 Min Makespan Scheduling A PTAS exists for: identical processing times on all machines; uniformly related parallel machines. We focus on the problem in which there is no relation between the processing times of a job on different machines (unrelated machines). Here we can use parametric pruning in the LP setting.

49 of 56 IP for Scheduling on Unrelated Parallel Machines Indicator variable x_ij ∈ {0,1} for each job j and each machine i.
Minimize t
Subject to Σ_i x_ij = 1 for each job j
Σ_j x_ij · p_ij ≤ t for each machine i
x_ij ∈ {0,1}

50 of 56 LP Relaxation with Parametric Pruning Parameter T: guess a lower bound for the makespan. Pruning: eliminate assignments in which p_ij > T; let S_T = {(i, j) : p_ij ≤ T}. LP(T) is the feasibility problem:
Σ_{i: (i,j) ∈ S_T} x_ij = 1 for each job j
Σ_{j: (i,j) ∈ S_T} x_ij · p_ij ≤ T for each machine i
x_ij ≥ 0

51 of 56 LP Relaxation with Parametric Pruning Family of linear programs, LP(T), one LP for each value of the parameter T. Observe:
– Any extreme point solution to LP(T) has at most n + m nonzero variables.
– Hence any extreme point solution to LP(T) must set at least n − m jobs integrally.

52 of 56 LP-Rounding Algorithm For an extreme point solution x to LP(T): G = (J, M, E) is the bipartite graph with an edge between job j and machine i whenever x_ij ≠ 0. H = (J', M', E') is the subgraph of G induced on the set of jobs that are fractionally set in x. A perfect matching in H matches every fractionally set job to a distinct machine. Fact: H has a perfect matching.

53 of 56 LP-Rounding Algorithm Compute a range in which to search for the right value of T. Greedy schedule: assign each job to the machine on which it is processed the fastest. Let α be the makespan of this schedule. The range we consider is [α/m, α].

54 of 56 LP-Rounding Algorithm
1. By a binary search in the interval [α/m, α], find the smallest value of T for which LP(T) has a feasible solution. Call it T*.
2. Find an extreme point solution to LP(T*), x.
3. Assign all jobs to machines that are integrally assigned in x.
4. Construct the graph H; find a perfect matching M in H.
5. Assign the fractionally set jobs to machines according to the perfect matching M.
(A sketch of steps 1–2 follows below.)
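A hedged sketch of steps 1–2: LP(T) feasibility via SciPy plus the binary search for T*. The instance, the integer processing times, and the helper names are assumptions for illustration, not from the deck.

from scipy.optimize import linprog

def lp_T_solution(p, T):
    m, n = len(p), len(p[0])
    # pruned variable set S_T: only pairs with p_ij <= T get a variable
    idx = {(i, j): k for k, (i, j) in enumerate(
        (i, j) for i in range(m) for j in range(n) if p[i][j] <= T)}
    if not idx:
        return None
    A_eq = [[0.0] * len(idx) for _ in range(n)]   # each job fully assigned
    A_ub = [[0.0] * len(idx) for _ in range(m)]   # machine loads <= T
    for (i, j), k in idx.items():
        A_eq[j][k] = 1.0
        A_ub[i][k] = p[i][j]
    res = linprog([0.0] * len(idx),               # feasibility: zero objective
                  A_ub=A_ub, b_ub=[T] * m, A_eq=A_eq, b_eq=[1.0] * n)
    return res.x if res.status == 0 else None

p = [[2, 7, 3], [4, 1, 6]]                        # p[i][j]: 2 machines, 3 jobs
loads = [0, 0]                                    # greedy: fastest machine
for j in range(3):
    i = min(range(2), key=lambda i: p[i][j])
    loads[i] += p[i][j]
alpha = max(loads)

lo, hi = alpha // 2, alpha                        # roughly [alpha/m, alpha]
while lo < hi:                                    # binary search for T*
    mid = (lo + hi) // 2
    if lp_T_solution(p, mid) is not None:
        hi = mid
    else:
        lo = mid + 1
print(lo)                                         # T* (here 4)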

55 of 56 LP-Rounding Algorithm Theorem: The algorithm achieves a 2-approximation for scheduling on unrelated parallel machines. Proof: T* ≤ OPT, since LP(OPT) has a feasible solution. x has a fractional makespan ≤ T*. Thus, the restriction of x to integrally set jobs has a makespan ≤ T*. Each edge of H satisfies p_ij ≤ T*, and the matching M schedules at most one extra job on each machine. The total makespan is ≤ 2T* ≤ 2·OPT.

56 of 56 References
V.V. Vazirani. Approximation Algorithms. Springer-Verlag, 2001.
Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization. Prentice Hall, 1982.
Dorit S. Hochbaum (ed.). Approximation Algorithms for NP-Hard Problems. PWS Publishing Company, 1997.
David P. Williamson. Lecture Notes on Approximation Algorithms. http://people.orie.cornell.edu/~dpw/cornell.ps
http://www.cs.uiuc.edu/homes/chekuri/teaching/fall2006/lect9and10.pdf

