Presentation on theme: "Approximation Algorithms for Stochastic Optimization Chaitanya Swamy Caltech and U. Waterloo Joint work with David Shmoys Cornell University."— Presentation transcript:

1 Approximation Algorithms for Stochastic Optimization Chaitanya Swamy Caltech and U. Waterloo Joint work with David Shmoys Cornell University

2 Stochastic Optimization A way of modeling uncertainty. Exact data is unavailable or expensive – data is uncertain, specified by a probability distribution. Want to make the best decisions given this uncertainty in the data. Applications in logistics, transportation models, financial instruments, network design, production planning, … Dates back to the 1950s and the work of Dantzig.

3 Stochastic Recourse Models Given: a probability distribution over inputs. Stage I: make some advance decisions – plan ahead or hedge against uncertainty. Uncertainty evolves through various stages; learn new information in each stage. Can take recourse actions in each stage – can augment the earlier solution, paying a recourse cost. Choose initial (stage I) decisions to minimize (stage I cost) + (expected recourse cost).

4 2-stage problem ≡ 2 decision points; k-stage problem ≡ k decision points. [Figure: scenario trees – a 2-stage tree branching from stage I into stage II scenarios (probabilities 0.2, 0.02, 0.3, 0.1), and a k-stage tree from stage I through stage II to the scenarios in stage k (probabilities 0.5, 0.2, 0.3, 0.4).]

5 2-stage problem ≡ 2 decision points; k-stage problem ≡ k decision points. Choose stage I decisions to minimize expected total cost = (stage I cost) + E_{all scenarios}[cost of stages 2 … k]. [Figure: the same 2-stage and k-stage scenario trees as on the previous slide.]

6 2-Stage Stochastic Facility Location A distribution over clients gives the set of clients to serve. Stage I: open some facilities in advance; pay cost f_i for facility i. Stage I cost = ∑_{i opened} f_i. [Figure: facilities and the client set D; some facilities are opened in stage I.]

7 2-Stage Stochastic Facility Location A distribution over clients gives the set of clients to serve. Stage I: open some facilities in advance; pay cost f_i for facility i. Stage I cost = ∑_{i opened} f_i. Then the actual scenario A = {clients to serve} materializes. Stage II: can open more facilities to serve the clients in A; pay cost f_i^A to open facility i. Assign the clients in A to facilities. Stage II cost = ∑_{i opened in scenario A} f_i^A + (cost of serving the clients in A).

8 Want to decide which facilities to open in stage I. Goal: minimize total cost = (stage I cost) + E_{A⊆D}[stage II cost for A]. How is the probability distribution specified? – A short (polynomial) list of possible scenarios. – Independent probabilities that each client exists. – A black box that can be sampled. (A sketch of these three access models follows.)
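
The following is a minimal sketch of the three access models as sampling interfaces; the class names and the data layouts are ours for illustration, not from the talk.

```python
# Three ways the scenario distribution can be specified (illustrative sketch).
import random

class PolynomialScenarios:
    """Explicit short list of (client_set, probability) pairs."""
    def __init__(self, scenarios):          # e.g. [({"c1", "c2"}, 0.3), ...]
        self.scenarios = scenarios
    def sample(self):
        r, acc = random.random(), 0.0
        for clients, p in self.scenarios:
            acc += p
            if r < acc:
                return clients
        return self.scenarios[-1][0]        # guard against rounding of probabilities

class IndependentActivation:
    """Each client j exists independently with probability p_j."""
    def __init__(self, p):                  # e.g. {"c1": 0.5, "c2": 0.1}
        self.p = p
    def sample(self):
        return {j for j, pj in self.p.items() if random.random() < pj}

class BlackBox:
    """Only a sampling oracle; no other access to the distribution."""
    def __init__(self, oracle):
        self.oracle = oracle
    def sample(self):
        return self.oracle()
```

Note how the models are strictly ordered by generality: both of the first two can be wrapped as a BlackBox, but not vice versa.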

9 Approximation Algorithm Hard to solve the problem exactly; even special cases are #P-hard. Settle for approximate solutions: give a polytime algorithm that always finds near-optimal solutions. A is a ρ-approximation algorithm if: A runs in polynomial time, and A(I) ≤ ρ·OPT(I) on all instances I. ρ is called the approximation ratio of A.

10 Overview of Previous Work Polynomial-scenario model: Dye, Stougie & Tomasgard; Ravi & Sinha; Immorlica, Karger, Minkoff & Mirrokni. Immorlica et al. also consider the independent-activation model. Proportional costs: (stage II cost) = λ·(stage I cost), e.g., f_i^A = λ·f_i for each facility i, in each scenario A. Gupta, Pál, Ravi & Sinha (GPRS04): black-box model, but also with proportional costs. Shmoys, S (SS04): black-box model with arbitrary costs – an approximation scheme for 2-stage LPs + a rounding procedure "reduces" stochastic problems to their deterministic versions; for some problems this improves upon previous results.

11 Boosted Sampling (GPRS04) Proportional costs: (stage II cost) = λ·(stage I cost). (Note: λ here plays the same role as the inflation factor in the previous talk.) – Sample λ times from the distribution. – Use a "suitable" algorithm to solve the deterministic instance consisting of the sampled scenarios (e.g., all sampled clients) – this determines the stage I decisions; see the sketch below. – The analysis relies on the existence of cost-shares that can be used to share the stage I cost among the sampled scenarios.
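
A minimal sketch of the boosted-sampling template, assuming the proportional-cost model; `solve_deterministic` is a placeholder for the "suitable" deterministic approximation algorithm, and `lam` is the inflation factor λ.

```python
# Boosted sampling (GPRS04), illustrative sketch under proportional costs.
import math

def boosted_sampling(sample_scenario, solve_deterministic, lam):
    # Draw ceil(lambda) independent scenarios from the black box.
    samples = [sample_scenario() for _ in range(math.ceil(lam))]
    # Stage I: serve the union of all sampled demands at stage I prices.
    demand = set().union(*samples) if samples else set()
    return solve_deterministic(demand)      # stage I decisions
```

The recourse step is then trivial: in the realized scenario A, buy whatever the deterministic algorithm would buy for the uncovered part of A at λ-inflated prices.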

12 Shmoys, S '04 vs. Boosted Sampling LP rounding (Shmoys, S '04): – Can handle arbitrary costs in the two stages. – Gives an algorithm to solve the stochastic LP, then rounds. – Needs many more samples to solve the stochastic LP. Boosted sampling (GPRS04): – Needs proportional costs: (stage II cost) = λ·(stage I cost); λ can depend on the scenario. – Primal-dual approach: cost-shares obtained by exploiting structure via the primal-dual schema. – Needs only λ samples. Both work in the black-box model, i.e., with arbitrary distributions.

13 Stochastic Set Cover (SSC) Universe U = {e_1, …, e_n}, subsets S_1, S_2, …, S_m ⊆ U; set S has weight w_S. Deterministic problem: pick a minimum-weight collection of sets that covers each element. Stochastic version: the set of elements to be covered is given by a probability distribution. – Choose some sets initially, paying w_S for set S. – A subset A ⊆ U to be covered is revealed. – Can pick additional sets, paying w_S^A for set S. Minimize (w-cost of sets picked in stage I) + E_{A⊆U}[w^A-cost of new sets picked for scenario A].

14 A Linear Program for SSC For simplicity, consider w_S^A = W_S for every scenario A. Notation: w_S = stage I weight of set S; p_A = probability of scenario A ⊆ U; x_S indicates if set S is picked in stage I; y_{A,S} indicates if set S is picked in scenario A.

Minimize ∑_S w_S x_S + ∑_{A⊆U} p_A ∑_S W_S y_{A,S}
subject to ∑_{S:e∈S} x_S + ∑_{S:e∈S} y_{A,S} ≥ 1 for each A ⊆ U, e ∈ A
x_S, y_{A,S} ≥ 0 for each S, A.

Exponential number of variables and exponential number of constraints.
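
For concreteness, here is a minimal sketch of this LP in the polynomial-scenario model, where scenarios can be enumerated explicitly (in the black-box model the LP cannot even be written down). The tiny instance data is hypothetical, and PuLP is just one convenient LP solver.

```python
# Stochastic set cover LP, polynomial-scenario model (illustrative sketch).
import pulp

sets = {"S1": {"a", "b"}, "S2": {"b", "c"}, "S3": {"c"}}
w = {"S1": 1.0, "S2": 1.0, "S3": 0.4}                      # stage I weights w_S
W = {"S1": 2.0, "S2": 2.0, "S3": 0.8}                      # stage II weights W_S
scenarios = {"A1": ({"a", "b"}, 0.5), "A2": ({"c"}, 0.5)}  # A -> (elements, p_A)

lp = pulp.LpProblem("stochastic_set_cover", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(sets), lowBound=0)
y = pulp.LpVariable.dicts("y", (list(scenarios), list(sets)), lowBound=0)

# Objective: sum_S w_S x_S + sum_A p_A sum_S W_S y_{A,S}
lp += (pulp.lpSum(w[S] * x[S] for S in sets)
       + pulp.lpSum(p * W[S] * y[A][S]
                    for A, (_, p) in scenarios.items() for S in sets))

# Coverage: sum_{S: e in S} x_S + sum_{S: e in S} y_{A,S} >= 1 for A, e in A.
for A, (elems, _) in scenarios.items():
    for e in elems:
        lp += pulp.lpSum(x[S] + y[A][S] for S in sets if e in sets[S]) >= 1

lp.solve(pulp.PULP_CBC_CMD(msg=0))
print({S: x[S].value() for S in sets})   # fractional stage I decisions
```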

15 A Rounding Theorem Assume the LP can be solved in polynomial time. Suppose for the deterministic problem we have an α-approximation algorithm w.r.t. the LP relaxation, i.e., an algorithm A such that A(I) ≤ α·(optimal LP solution value for I) for every instance I. E.g., the greedy algorithm for set cover is a (log n)-approximation algorithm w.r.t. the LP relaxation (a sketch follows). Theorem: Can use such an α-approximation algorithm to get a 2α-approximation algorithm for stochastic set cover.
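
A minimal sketch of the greedy set-cover algorithm referred to above; the toy instance at the bottom is hypothetical.

```python
# Greedy set cover: a (log n)-approximation w.r.t. the LP relaxation.
def greedy_set_cover(universe, sets, weight):
    """Repeatedly pick the set with the best weight per newly covered element."""
    uncovered, cover = set(universe), []
    while uncovered:
        best = min((S for S in sets if sets[S] & uncovered),
                   key=lambda S: weight[S] / len(sets[S] & uncovered))
        cover.append(best)
        uncovered -= sets[best]
    return cover

print(greedy_set_cover({"a", "b", "c"},
                       {"S1": {"a", "b"}, "S2": {"b", "c"}, "S3": {"c"}},
                       {"S1": 1.0, "S2": 1.0, "S3": 0.4}))
```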

16 Rounding the LP Assume the LP can be solved in polynomial time. Suppose we have an α-approximation algorithm w.r.t. the LP relaxation for the deterministic problem. Let (x,y) be an optimal LP solution with cost OPT. Since ∑_{S:e∈S} x_S + ∑_{S:e∈S} y_{A,S} ≥ 1 for each A ⊆ U and e ∈ A, for every element e either ∑_{S:e∈S} x_S ≥ ½, or in each scenario A with e ∈ A, ∑_{S:e∈S} y_{A,S} ≥ ½. Let E = {e : ∑_{S:e∈S} x_S ≥ ½}. Then (2x) is a fractional set cover for the set E, so we can round it to get an integer set cover S for E of cost ∑_{S∈S} w_S ≤ α·(∑_S 2 w_S x_S). S is the first-stage decision.

17 Rounding (contd.) [Figure: bipartite diagram of sets and elements, marking the sets in S and the elements in E.] Consider any scenario A. The elements in A ∩ E are already covered by S. For every e ∈ A\E, it must be that ∑_{S:e∈S} y_{A,S} ≥ ½. So (2y_A) is a fractional set cover for A\E, and we can round it to get a set cover of W-cost ≤ α·(∑_S 2 W_S y_{A,S}). Using this to augment S in scenario A, the expected cost is ≤ ∑_{S∈S} w_S + 2α·∑_{A⊆U} p_A (∑_S W_S y_{A,S}) ≤ 2α·OPT. (A code sketch of the full rounding follows.)
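
A minimal sketch of the two-step rounding above, reusing `greedy_set_cover` from the earlier sketch as the deterministic α-approximation. The data layout is ours: `x[S]` and `y[(A, S)]` are the fractional LP values for explicitly listed scenarios.

```python
# 2-alpha rounding for stochastic set cover (illustrative sketch).
def round_two_stage(universe, sets, w, W, x, y, scenarios):
    # Stage I: E = elements at least half-covered by x; cover E using (2x).
    E = {e for e in universe
         if sum(x[S] for S in sets if e in sets[S]) >= 0.5}
    stage1 = greedy_set_cover(
        E, {S: sets[S] & E for S in sets if sets[S] & E}, w)

    # Stage II, per scenario A: (2 y_A) fractionally covers A \ E,
    # so rounding it covers the leftover elements.
    recourse = {}
    for A, elems in scenarios.items():       # scenarios: A -> element set
        rest = elems - E
        live = {S: sets[S] & rest for S in sets
                if sets[S] & rest and y[(A, S)] > 0}
        recourse[A] = greedy_set_cover(rest, live, W) if rest else []
    return stage1, recourse
```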

18 Rounding (contd.) An α-approximation algorithm for the deterministic problem gives a 2α-approximation guarantee for the stochastic problem. In the polynomial-scenario model, this gives simple polytime approximation algorithms for covering problems: – (2 log n)-approximation for SSC. – 4-approximation for stochastic vertex cover. – 4-approximation for stochastic multicut on trees. Ravi & Sinha gave a (log n)-approximation algorithm for SSC and a 2-approximation algorithm for stochastic vertex cover in the polynomial-scenario model.


20 A Compact Convex Program Notation: p_A = probability of scenario A ⊆ U; x_S indicates if set S is picked in stage I.

Minimize h(x) = ∑_S w_S x_S + ∑_{A⊆U} p_A f_A(x) s.t. x_S ≥ 0 for each S, (SSC-P)
where f_A(x) = min ∑_S W_S y_{A,S}
s.t. ∑_{S:e∈S} y_{A,S} ≥ 1 − ∑_{S:e∈S} x_S for each e ∈ A
y_{A,S} ≥ 0 for each S.

Equivalent to the earlier LP. Each f_A(x) is convex, so h(x) is a convex function.
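
To see how h is accessed in the black-box model, here is a minimal sketch (hypothetical helper names and data layout, PuLP for the inner LP) of a Monte Carlo estimate of h(x): each sample requires solving only the small stage-II LP f_A(x). The algorithm in this talk in fact avoids even this evaluation, as the next slides explain.

```python
# Sampling-based estimate of h(x) = sum_S w_S x_S + E_A[f_A(x)] (sketch).
import pulp

def f_A(A, x, sets, W):
    """Stage-II LP value f_A(x) for a realized scenario A."""
    lp = pulp.LpProblem("stage2", pulp.LpMinimize)
    y = pulp.LpVariable.dicts("y", list(sets), lowBound=0)
    lp += pulp.lpSum(W[S] * y[S] for S in sets)
    for e in A:
        cov = sum(x[S] for S in sets if e in sets[S])
        lp += pulp.lpSum(y[S] for S in sets if e in sets[S]) >= 1 - cov
    lp.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(lp.objective) or 0.0

def estimate_h(x, sets, w, W, sample_scenario, N=1000):
    first = sum(w[S] * x[S] for S in sets)
    return first + sum(f_A(sample_scenario(), x, sets, W)
                       for _ in range(N)) / N
```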

21 The General Strategy 1. Get a (1+ε)-optimal fractional first-stage solution x by solving the convex program. 2. Convert the fractional solution x to an integer solution: – decouple the two stages near-optimally; – use the α-approximation algorithm for the deterministic problem to solve the subproblems. This yields a c·α-approximation algorithm for the stochastic integer problem. Many applications: set cover, vertex cover, facility location, multicut on trees, …

22 Solving the Convex Program Minimize h(x) subject to x ∈ P, with h(·) convex – via the ellipsoid method. Need a procedure that, at any point y, if y ∉ P, returns a violated inequality showing that y ∉ P. [Figure: ellipsoid enclosing the feasible region P, with query point y.]

23 Solving the Convex Program Minimize h(x) subject to x ∈ P, with h(·) convex – via the ellipsoid method. Need a procedure that, at any point y: – if y ∉ P, returns a violated inequality showing that y ∉ P; – if y ∈ P, computes a subgradient of h(·) at y. (d ∈ ℝ^m is a subgradient of h(·) at u if, for every v, h(v) − h(u) ≥ d·(v − u).) Given such a procedure, the ellipsoid method runs in polytime and returns points x_1, x_2, …, x_k ∈ P such that min_{i=1…k} h(x_i) is close to OPT. Trouble: evaluating h(·) is hard, and computing subgradients is hard. [Figure: the cut {x : h(x) ≤ h(y)} induced by the subgradient d at y.]

24 Solving the Convex Program Minimize h(x) subject to x ∈ P, with h(·) convex – via the ellipsoid method. Need a procedure that, at any point y: – if y ∉ P, returns a violated inequality showing that y ∉ P; – if y ∈ P, computes an approximate subgradient of h(·) at y. (d' ∈ ℝ^m is an ω-subgradient at u if, for every v ∈ P, h(v) − h(u) ≥ d'·(v − u) − ω·h(u).) Given such a procedure, can compute a point x ∈ P such that h(x) ≤ OPT/(1 − ω) + ε, without ever evaluating h(·)! Can compute ω-subgradients by sampling – see the sketch below. [Figure: the approximate cut induced by d' at y.]
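
A minimal sketch of how sampling can produce such a vector for the set-cover program, using the LP-duality fact that a subgradient of f_A at x is determined by an optimal stage-II dual solution; averaging these over sampled scenarios gives, with enough samples and high probability, an ω-subgradient of h. Helper names, the data layout, and the use of PuLP's constraint duals are our assumptions, not the talk's.

```python
# Sampled subgradient-style vector for h at x (illustrative sketch).
import pulp

def stage2_duals(A, x, sets, W):
    """Optimal duals z*_{A,e} of the stage-II LP for scenario A at point x."""
    lp = pulp.LpProblem("stage2", pulp.LpMinimize)
    y = pulp.LpVariable.dicts("y", list(sets), lowBound=0)
    lp += pulp.lpSum(W[S] * y[S] for S in sets)
    cons = {}
    for e in A:
        cov = sum(x[S] for S in sets if e in sets[S])
        lp += (pulp.lpSum(y[S] for S in sets if e in sets[S]) >= 1 - cov,
               f"cover_{e}")
        cons[e] = lp.constraints[f"cover_{e}"]
    lp.solve(pulp.PULP_CBC_CMD(msg=0))
    return {e: cons[e].pi for e in A}

def sampled_subgradient(x, sets, w, W, sample_scenario, N=4000):
    # Component for set S: w_S - E_A[ sum_{e in A cap S} z*_{A,e} ].
    d = {S: 0.0 for S in sets}
    for _ in range(N):
        z = stage2_duals(sample_scenario(), x, sets, W)
        for S in sets:
            d[S] -= sum(z.get(e, 0.0) for e in sets[S]) / N
    return {S: w[S] + d[S] for S in sets}
```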

25 Putting it all together Get a solution x with h(x) close to OPT. Sample initially to detect if OPT is large – this allows one to get a (1+ε)·OPT guarantee. Theorem: (SSC-P) can be solved to within a factor of (1+ε) in polynomial time, with high probability. Gives a (2 log n + ε)-approximation algorithm for the stochastic set cover problem.

26 A Solvable Class of Stochastic LPs

Minimize h(x) = w·x + ∑_{A⊆U} p_A f_A(x) s.t. x ∈ P ⊆ ℝ^m, x ≥ 0,
where f_A(x) = min w^A·y_A + c^A·r_A
s.t. D^A r_A + T^A y_A ≥ j^A − T^A x
y_A ∈ ℝ^m, r_A ∈ ℝ^n, y_A, r_A ≥ 0.

Theorem: Can get a (1+ε)-optimal solution for this class of stochastic programs in polynomial time. Includes covering problems (e.g., set cover, network design, multicut), facility location problems, and multicommodity flow.

27 Moral of the Story Even though the stochastic LP relaxation has exponentially many variables and constraints, we can still obtain near-optimal fractional first-stage decisions. Fractional first-stage decisions suffice to decouple the two stages near-optimally. Many applications: set cover, vertex cover, facility location, multicommodity flow, multicut on trees, … But we have to solve the convex program with many samples (not just λ)!

28 Sample Average Approximation Sample Average Approximation (SAA) method: – Sample N times from the scenario distribution. – Solve the 2-stage problem, estimating p_A by the frequency of occurrence of scenario A (see the sketch below). How large should N be to ensure that an optimal solution to the sampled problem is a (1+ε)-optimal solution to the original problem? Kleywegt, Shapiro & Homem-de-Mello (KSH01): bound N by the variance of a certain quantity – need not be polynomially bounded, even for our class of programs. S, Shmoys '05: show using ω-subgradients that for our class, N can be poly-bounded. Charikar, Chekuri & Pál '05: give another proof that for a class of 2-stage problems, N can be poly-bounded.
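
A minimal sketch of the SAA template: draw N black-box samples, weight each distinct sampled scenario by its empirical frequency, and solve the resulting small 2-stage problem. The `solve_two_stage` hook is hypothetical; any polynomial-scenario routine like the LP + rounding above could sit behind it.

```python
# Sample Average Approximation (illustrative sketch).
from collections import Counter

def saa(sample_scenario, solve_two_stage, N=10000):
    counts = Counter(frozenset(sample_scenario()) for _ in range(N))
    empirical = {A: n / N for A, n in counts.items()}   # estimates of p_A
    return solve_two_stage(empirical)                   # stage I decisions
```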

29 Multi-stage Problems Given: a distribution over inputs. Stage I: make some advance decisions – hedge against uncertainty. Uncertainty evolves in various stages; learn new information in each stage. Can take recourse actions in each stage – can augment the earlier solution, paying a recourse cost. k-stage problem ≡ k decision points. Choose stage I decisions to minimize expected total cost = (stage I cost) + E_{all scenarios}[cost of stages 2 … k]. [Figure: k-stage scenario tree (branch probabilities 0.5, 0.2, 0.3, 0.4) from stage I through the scenarios in stage k.]

30 Multi-stage Problems Fix k = the number of stages. LP rounding (S, Shmoys '05): – the ellipsoid-based algorithm extends; – the SAA method also works; – black-box model, arbitrary costs. The rounding procedure of SS04 can be easily adapted, losing an O(k)-factor over the deterministic guarantee – O(k)-approx. for k-stage vertex cover, facility location, multicut on trees; (k·log n)-approx. for k-stage set cover. Gupta, Pál, Ravi & Sinha '05: boosted sampling extends, but with outcome-dependent proportional costs – 2k-approx. for k-stage Steiner tree (also Hayrapetyan, S & Tardos); factors exponential in k for k-stage vertex cover and facility location. Computing ω-subgradients is significantly harder here and needs several new ideas.

31 Open Questions Combinatorial algorithms in the black-box model and with general costs. What about strongly polynomial algorithms? Incorporating "risk" into stochastic models. Obtaining approximation factors independent of k for k-stage problems – the integrality gap for covering problems does not increase, and Munagala has obtained a 2-approx. for k-stage VC. Is there a larger class of doubly exponential LPs that one can solve with (more general) techniques?

32 Thank You.

