Slide 1: On the Cost and Benefits of Procrastination: Approximation Algorithms for Stochastic Combinatorial Optimization Problems (SODA 2004)
Nicole Immorlica, David Karger, Maria Minkoff, Vahab S. Mirrokni
Jaehan Koh (jaehanko@cs.tamu.edu), Dept. of Computer Science, Texas A&M University

Slide 2: Outline
- Introduction
- Preplanning framework
- Examples
- Results
- Summary
- Homework

Slide 3: Introduction
Scenarios:
- Harry is designing a network for the Computer Science Department at Texas A&M University. In doing so, he must make his best guess about future demands on the network and purchase capacity accordingly.
- A mobile robot is navigating around a room. Since information about the environment is unknown, or becomes available too late to be useful to the robot, it may be impossible to modify a solution or improve its value once the actual inputs are revealed.

Slide 4: Planning under uncertainty
- Problem data is frequently subject to uncertainty: it may represent information about the future, and inputs may evolve over time.
- On-line model: assumes no knowledge of the future.
- Stochastic modeling of uncertainty:
  - Given: a probability distribution on potential outcomes
  - Goal: minimize expected cost over all potential outcomes

Slide 5: Approaches
- Plan ahead: the full solution must be specified before we learn the values of the unknown parameters; information becomes available too late to be useful.
- Wait-and-see: some decisions can be deferred until the exact inputs are known. Trade-off: decisions made late may be more expensive.

Slide 6: Approaches (Cont'd)
- Trade-off: make some purchase/allocation decisions early to reduce cost, while deferring others at greater expense to take advantage of additional information.
- Problems in which the problem instance is uncertain: min-cost flow, bin packing, vertex cover, shortest path, and Steiner tree.

Slide 7: Preplanning framework
Stochastic combinatorial optimization problem:
- A ground set of elements e ∈ E.
- A (randomly selected) problem instance I, which defines a set of feasible solutions F_I ⊆ 2^E.
- We can buy certain elements "in advance" at cost c_e, then sample a problem instance, then buy other elements at "last-minute" cost λ·c_e so as to produce a feasible solution S ∈ F_I for the problem instance.
- Goal: choose a subset of elements to buy in advance so as to minimize the expected total cost.
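For the bounded-support case, the objective just described can be evaluated directly. A minimal sketch (function and argument names are mine; it enumerates the feasible solutions of each scenario explicitly, which is only practical for tiny instances):

```python
def expected_total_cost(A, scenarios, cost, lam):
    """Cost of buying advance set A: c(A) plus the expected cheapest
    completion, paying lam * c_e for elements bought at the last minute.
    scenarios: list of (probability, list of feasible solution sets)."""
    advance = sum(cost[e] for e in A)
    expected = 0.0
    for p, feasible in scenarios:
        # cheapest feasible solution S, paying only for elements of S \ A
        best = min(sum(lam * cost[e] for e in S - A) for S in feasible)
        expected += p * best
    return advance + expected

# Toy check: one scenario that always needs element 'x' (unit costs, lam = 2):
cost = {'x': 1.0, 'y': 1.0}
scenarios = [(1.0, [{'x'}])]
print(expected_total_cost(set(), scenarios, cost, 2.0))   # → 2.0
print(expected_total_cost({'x'}, scenarios, cost, 2.0))   # → 1.0
```

Buying 'x' in advance is cheaper here because it is needed with probability 1 > 1/λ, matching the threshold property discussed later in the talk.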

Slide 8: Two types of instance probability distributions
- Bounded-support distribution: nonzero probability on only a polynomial number of distinct problem instances.
- Independent distribution: each element/constraint of the problem instance is active independently with some probability.

Slide 9: Versions
- Scenario-based: a bounded number of possible scenarios; an explicit probability distribution over problem instances.
- Independent-events model: the random instance is defined implicitly by an underlying probabilistic process; the number of possible scenarios can be exponential in the problem size.

Slide 10: Problems
- Min-Cost Flow: given a source, a sink, and a probability distribution on demand, buy some edges in advance and some after sampling (at greater cost) such that the given amount of demand can be routed from source to sink.
- Bin Packing: a collection of items is given, each of which will need to be packed into a bin with some probability. Bins can be purchased in advance at cost 1; after it is determined which items need to be packed, additional bins can be purchased at cost λ > 1. How many bins should be purchased in advance to minimize the expected total cost?
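The bin-packing question above is a newsvendor-style trade-off. A minimal sketch (my own illustration, simplified to one item per bin): with b bins bought in advance, the expected cost is b + λ·E[max(N − b, 0)], where N is the random number of active items, and the best b is the smallest one with Pr[N > b] ≤ 1/λ.

```python
def expected_cost(b, dist, lam):
    """dist maps possible item counts n to Pr[N = n]."""
    return b + lam * sum(p * max(n - b, 0) for n, p in dist.items())

def best_advance_bins(dist, lam):
    """Cheapest number of bins to buy in advance; by the threshold property
    this is the smallest b with Pr[N > b] <= 1/lam."""
    return min(range(max(dist) + 1), key=lambda b: expected_cost(b, dist, lam))

dist = {0: 0.25, 1: 0.25, 2: 0.5}   # a small example distribution
print(best_advance_bins(dist, 2.0))   # → 1
```

Here Pr[N > 0] = 0.75 > 1/2 but Pr[N > 1] = 0.5 ≤ 1/2, so one bin is prepurchased.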

Slide 11: Problems (Cont'd)
- Vertex Cover: a graph is given, along with a probability distribution over sets of edges that may need to be covered. Vertices can be purchased in advance at cost 1; after it is determined which edges need to be covered, additional vertices can be purchased at cost λ. Which vertices should be purchased in advance?

Slide 12: Problems (Cont'd)
- Cheap Path: given a graph and a randomly selected pair of vertices (or one fixed vertex and one random vertex), connect them by a path. We can purchase edge e at cost c_e before the pair is known, or at cost λ·c_e after. We wish to minimize the expected total edge cost.

Slide 13: Problems (Cont'd)
- Steiner Tree: a graph is given, along with a probability distribution over sets of terminals that need to be connected by a Steiner tree. Edge e can be purchased at cost c_e in advance, or at cost λ·c_e after the set of terminals is known.

Slide 14: Preplanning combinatorial optimization (PCO) problem
- A ground set of elements e ∈ E, a probability distribution on instances {I}, a cost function c: E → R, and a penalty factor λ ≥ 1.
- Each instance I has a corresponding set of feasible solutions F_I ⊆ 2^E associated with it.
- Suppose a set of elements A ⊆ E is purchased before sampling the probability distribution. The posterior cost function c_A is defined by
  c_A(e) = 0 if e ∈ A, and c_A(e) = λ·c_e otherwise.

Slide 15: Preplanning combinatorial optimization problem (Cont'd)
- The objective of a PCO problem: choose a subset of elements A to be purchased in advance so as to minimize the total expected cost of a feasible solution over a random choice of instance I:
  minimize c(A) + E_I [ min_{S ∈ F_I} c_A(S) ].

Slide 16: The Threshold Property
Theorem 1. An element should be purchased in advance if and only if the probability that it is used in the solution for a randomly chosen instance exceeds 1/λ.
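Theorem 1 is just a comparison of the two expected costs; a minimal sketch (the helper name is mine):

```python
def buy_in_advance(p_used, lam):
    """Advance cost c_e vs. expected last-minute cost lam * c_e * p_used:
    prepurchasing wins exactly when p_used exceeds 1/lam."""
    return p_used > 1.0 / lam

# With lam = 2 the threshold is 1/2:
print(buy_in_advance(0.6, 2.0))   # → True
print(buy_in_advance(0.3, 2.0))   # → False
```

Note that c_e cancels out, so the decision depends only on the usage probability and λ.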

Slide 17: Example: Min-Cost Flow
- We wish to provide capacity on a network sufficient to carry a random amount of flow demand D from a source s to a sink t.
- We have the option to pre-install some amount of capacity in advance at some cost per unit.
- We can rent additional capacity once the demands become known, but at a cost a factor of λ or larger per unit.
- The sum of the capacity installed in advance and the capacity rented must satisfy a given upper bound on total capacity for each edge.
- Goal: over a given probability distribution on demands, minimize the expected cost of installing sufficient capacity so that the network satisfies the demand.

Slide 18: Example: Min-Cost Flow (Cont'd)
- Suppose Cap(s-a) = Cap(a-t) = Cap(s-b) = Cap(a-b) = Cap(b-t) = 1, λ = 2, and Pr[D=0] = 1/4, Pr[D=1] = 1/4, Pr[D=2] = 1/2.
[Figure: example network on nodes s, a, b, t with per-unit edge costs shown on the slide.]

Slide 19: Example: Min-Cost Flow (Cont'd)
- Suppose Cap(s-a) = Cap(a-t) = Cap(s-b) = Cap(a-b) = Cap(b-t) = 1, λ = 2, and Pr[D=0] = 5/12, Pr[D=1] = 1/4, Pr[D=2] = 1/3.
[Figure: the same example network.]

Slide 20: Example: Vertex Cover
Classical vertex cover problem:
- Given: a graph G = (V, E)
- Output: a subset of vertices such that each edge has at least one endpoint in the set
- Goal: minimize the cardinality of the vertex subset
Stochastic vertex cover:
- A random subset of edges is present
- Vertices picked in advance cost 1; additional vertices can be purchased at cost λ > 1
- Goal: minimize the expected cost of a vertex cover
- A time-information trade-off

Slide 21: Our techniques
- Merger of linear programs: an easy way to handle scenario-based stochastic problems with LP formulations.
- Threshold property: identify elements likely to be needed and buy them in advance.
- Probability aggregation: cluster probability mass to get nearly deterministic subproblems; this justifies buying something in advance.

Slide 22: Vertex cover with preplanning
Given:
- A graph G = (V, E)
- A random instance: a subset of edges to be covered
  - Scenario-based: a polynomial number of possible edge sets
  - Independent version: each e ∈ E is present independently with probability p
- Vertices cost 1 in advance, λ after the edge set is sampled
Goal: select a subset A of vertices to buy in advance so as to minimize the expected cost of a vertex cover.

Slide 23: Idea 1: LP merger
- The deterministic problem has an LP formulation.
- Write a separate LP for each scenario of the stochastic problem.
- Take a probability-weighted linear combination of the objective functions.
Application to scenario-based vertex cover:
- VC has an LP relaxation
- Combine the LP relaxations of the scenarios
- Round in the standard way
- Result: a 4-approximation
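A sketch of the merged LP and its rounding for scenario-based vertex cover, assuming unit vertex costs (variable layout and function name are mine; requires scipy). Each covering constraint has four variables summing to at least 1, so one of them is at least 1/4, which is what makes rounding at 1/4 a 4-approximation:

```python
import numpy as np
from scipy.optimize import linprog

def stochastic_vertex_cover_lp(n, scenarios, lam):
    """scenarios: list of (probability, edge list) on vertices 0..n-1.
    Variables: x_v (buy v in advance), then y_{s,v} (buy v in scenario s).
    Returns (advance set rounded at 1/4, optimal LP value)."""
    S = len(scenarios)
    nvar = n + S * n
    c = np.zeros(nvar)
    c[:n] = 1.0                                    # advance cost
    for s, (p, _) in enumerate(scenarios):
        c[n + s * n : n + (s + 1) * n] = lam * p   # expected last-minute cost
    rows, rhs = [], []
    for s, (_, edges) in enumerate(scenarios):
        for u, v in edges:
            row = np.zeros(nvar)
            row[[u, v, n + s * n + u, n + s * n + v]] = -1.0
            rows.append(row)
            rhs.append(-1.0)                       # x_u + y_su + x_v + y_sv >= 1
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=(0, 1))
    advance = {v for v in range(n) if res.x[v] >= 0.25}
    return advance, res.fun
```

For example, a single edge that appears with probability 1 under λ = 2 gives LP value 1, and at least one endpoint is prepurchased; in each realized scenario the remaining uncovered edges are bought by the same 1/4-rounding of the y variables.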

Slide 24: Idea 2: Threshold property
- Buy element e in advance ⇔ Pr[need e] ≥ 1/λ (advance cost c_e vs. expected buy-later cost λ·c_e·Pr[need e]).
Application to independent-events vertex cover:
- Threshold degree k = 1/(λp): a vertex with degree ≥ k is adjacent to an active edge with probability ≥ 1/λ.
- It is not worth buying vertices with degree < k in advance.
- Idea: purchase a subset of high-degree vertices in advance.

Slide 25: k-matching
- A k-matching is a subset of edges that induces degree ≤ k on each vertex.
- A vertex v ∈ V is tight if its degree in the matching is exactly k.
- A maximal k-matching can be constructed greedily.

Slide 26: Algorithm for k-matching
- Assume λ ≥ 4; set k = 1/(λp).
- Construct some maximal k-matching M_k.
- Purchase in advance the set of tight vertices A_t.
Solution cost:
- Prepurchase cost: |A_t|
- "Wait-and-see" cost: λ × the expected size of a minimum vertex cover of the active edges not already covered by A_t
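The maximal k-matching step can be done with a single greedy pass (a minimal sketch; the edge order is arbitrary):

```python
from collections import defaultdict

def maximal_k_matching(edges, k):
    """Keep an edge iff both endpoints still have matching degree < k;
    returns the matching and the set of tight (degree exactly k) vertices."""
    deg = defaultdict(int)
    M = []
    for u, v in edges:
        if deg[u] < k and deg[v] < k:
            M.append((u, v))
            deg[u] += 1
            deg[v] += 1
    return M, {v for v, d in deg.items() if d == k}

# Star with center 0 and five leaves, k = 2: only two edges fit, so the
# center becomes tight and would be purchased in advance.
M, tight = maximal_k_matching([(0, i) for i in range(1, 6)], 2)
print(M, tight)   # → [(0, 1), (0, 2)] {0}
```

Maximality is immediate: any rejected edge already has an endpoint of degree k, i.e. a tight endpoint, which is exactly the fact used to bound the second-stage cost.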

Slide 27: Bounding the second-stage cost
- Claim: once all the tight vertices are purchased, it is optimal to buy nothing else in advance.
- Vertices in A_t cover all edges not in the k-matching: any such edge has at least one tight endpoint (otherwise the matching would not be maximal).
- Vertices not in A_t have degree < k.
- The expected cost of a vertex cover in the subgraph induced by V \ A_t is at most OPT.

Slide 28: Bounding the prepurchase cost
- Restrict attention to the instance induced by M_k; it costs no more than OPT.
- Intuition: a vertex of degree k = 1/(λp) is likely to be adjacent to an active edge, and active edges are not likely to be clustered together.
- One can show that a tight vertex is useful for the vertex cover with sufficiently high probability, so |A_t| = O(OPT).

Slide 29: Steiner network preplanning
Given:
- An undirected graph G = (V, E) with edge costs c_e ≥ 0
- A probability p_i of node i becoming active
- A penalty λ ≥ 1
Goal: buy a subset of edges in advance in order to minimize the expected cost of a Steiner tree over the active nodes.
Approach: probability aggregation
- Cluster vertices into groups of ~1/λ probability mass
- Purchase in advance the edges of an MST over the clusters

Slide 30: CLUSTER algorithm
- The Steiner tree for the ultrametric case: an assignment of edge weights satisfying ĉ_uv ≤ max(ĉ_u, ĉ_v).
- Basic idea: cluster nodes into components, each containing clients of total probability mass Ω(1/λ).
- Lemma: algorithm CLUSTER produces an MST with the specified properties.

Slide 31: CLUSTER algorithm (Cont'd)
[Figure: pseudocode/illustration of the CLUSTER algorithm.]

Slide 32: CLUSTER algorithm (Cont'd)
[Figure: a hub tree with hub h; edge weights < T and > T indicated.]

Slide 33: Probability Aggregation
[Figure: clients of probability p grouped into clusters of total probability mass ≥ 1/λ.]
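The aggregation idea can be sketched as a single greedy pass (my own simplification: clients are processed in a fixed order, e.g. along a traversal of the tree, and a cluster closes once its probability mass reaches 1/λ):

```python
def aggregate(probs, lam):
    """Group clients (given as a list of activation probabilities) into
    consecutive clusters of total probability mass >= 1/lam."""
    clusters, cur, mass = [], [], 0.0
    for v, p in enumerate(probs):
        cur.append(v)
        mass += p
        if mass >= 1.0 / lam:
            clusters.append(cur)
            cur, mass = [], 0.0
    if cur:            # trailing clients with mass < 1/lam stay unclustered
        clusters.append(cur)
    return clusters

# Eight clients of probability 0.1 each with lam = 4 (threshold 0.25):
print(aggregate([0.1] * 8, 4.0))   # → [[0, 1, 2], [3, 4, 5], [6, 7]]
```

Each closed cluster contains an active client with constant probability, so it is nearly deterministic, which justifies buying the MST edges connecting the clusters in advance.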

Slide 34: Results

  Problem             Stochastic elements        Approx. guarantee   Notes
  Min-cost s-t flow   Demand                     1                   Via LP
  Bin packing         Items                      Asymptotic FPAS     Using FPAS for deterministic version
  Vertex cover        Edges                      4 / 4.37            Scenario-based / Independent events
  Shortest path       Start node / Start & end   O(1)                ConnFacLoc / Rent-or-Buy
  Steiner tree        Terminals                  3 / O(log n)        Ultrametrics / General case

Slide 35: Summary
- Stochastic combinatorial optimization problems in a novel "preplanning" framework
- A study of the time-information trade-off in problems with uncertain inputs
- Algorithms to approximately optimize the choice of what to purchase in advance and what to defer
Open questions:
- Combinatorial algorithms for stochastic min-cost flow
- Metric Steiner tree preplanning
- Applying the preplanning scheme to other problems: scheduling, multicommodity network flow, network design

Slide 36: Homework
1. (40 pts) In the paper, the authors argue that we can reduce the preplanning version of an NP-hard problem to solving a preplanning instance of another optimization problem that has a polynomial-time algorithm. Explain in your own words why this is true and give at least one example.
2. (a) (30 pts) Formulate the Minimum Steiner Tree problem as an optimization problem. (b) (30 pts) Reformulate the Minimum Steiner Tree problem as a PCO (Preplanning Combinatorial Optimization) problem. Explain the difference between (a) and (b).

Slide 37: Thank you … Any questions?

