Graph Algorithms for Planning and Partitioning
Shuchi Chawla, Carnegie Mellon University
Thesis Oral, 6/22/06
Planning and Partitioning Algorithms — Shuchi Chawla

Planning & Partitioning problems in graphs
Find structure in a set of objects, given pairwise constraints or relations on the objects.
These problems are NP-hard; our objective is to study their approximability.
Path-Planning
A repair-bot problem
Robot receives requests for repair; requests come with a time-window for servicing.
Brownie points for each request serviced.
Cannot perform all of them:
–Takes time to service each request
–Takes time to move from one location to another
The problem:
–Which ones to accept? (Selection)
–How to schedule them? (Ordering)
–Goal: maximize the total brownie points
Informally… planning and ordering of tasks
Classic instance — Traveling Salesman Problem: find the shortest tour covering all given locations.
A natural extension — Orienteering: cover as many locations as possible by a given deadline.
Path-planning has many variants and applications:
–Delivery & distribution problems
–Production planning, supply chain management
–Robot navigation
Studied in Operations Research for 2–3 decades. Mostly NP-hard; we look for approximation algorithms.
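Not part of the original slides: Orienteering, as stated above, is a joint selection-and-ordering problem. A brute-force sketch on a hypothetical toy metric (all names and numbers illustrative) makes the definition concrete:

```python
from itertools import permutations

def orienteering(dist, reward, start, deadline):
    """Brute-force Orienteering: choose and order a subset of
    locations to maximize reward collected by the deadline."""
    nodes = [v for v in reward if v != start]
    best = 0
    for r in range(len(nodes) + 1):
        for subset in permutations(nodes, r):
            t, cur, gain = 0, start, reward.get(start, 0)
            for v in subset:
                t += dist[cur][v]
                cur = v
                if t > deadline:
                    break
                gain += reward[v]
            else:
                best = max(best, gain)
    return best

# toy instance: a path a-b-c with unit edges
dist = {"a": {"b": 1, "c": 2}, "b": {"a": 1, "c": 1}, "c": {"a": 2, "b": 1}}
reward = {"a": 0, "b": 1, "c": 1}
print(orienteering(dist, reward, "a", 2))  # both b and c reachable in time 2
```

Exponential time, of course — which is exactly why the talk looks for approximation algorithms.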
Approximation Results: a reward vs. time trade-off
A budget on time; maximize reward:
–Orienteering — single deadline on time
–Deadline-TSP — different deadlines on different locations
–TSP with Time-Windows — different time windows for different locations
A quota on reward; minimize time:
–TSP — visit all locations
–k-TSP — visit k locations
Optimize a combination of reward and time:
–Prize-Collecting TSP — minimize time plus reward foregone
–Discounted-Reward TSP — maximize reward; reward decreases with time
Approximation Results
A budget on time; maximize reward (use structural properties & dynamic programming):
–Orienteering: 3
–Deadline-TSP: 3 log n
–TSP with Time-Windows: 3 log² n
A quota on reward; minimize time:
–TSP: 1.5 [Christofides '76]
–k-TSP: 2+ε [BRV99] [Garg99] [AK00] [CGRT03] …
Optimize a combination of reward and time (use LP-rounding):
–Prize-Collecting TSP: 2 [Goemans Williamson '92]
–Discounted-Reward TSP: 6.75+ε
Joint work with Bansal, Blum, Karger, Meyerson, Minkoff & Lane
Why LPs don't work
Budget problems are ill-behaved w.r.t. small perturbations.
Algorithms for "quota" problems rely on the Goemans-Williamson primal-dual subroutine:
–Miss out on far-away reward; bad case: low slack
We can approximate the very-low-slack case:
–When the solution visits all nodes in order of increasing distance from the start
–Use a dynamic program
Approximation Results
A budget on time; maximize reward (use structural properties & dynamic programming):
–Orienteering: 3
–Deadline-TSP: 3 log n
–TSP with Time-Windows: 3 log² n; also O(c)-approx on reward while violating deadlines by a (1+2⁻ᶜ) factor
A quota on reward; minimize time:
–TSP: 1.5 [Christofides '76]
–k-TSP: 2+ε [BRV99] [Garg99] [AK00] [CGRT03] …
–Min-Excess: visit k locations, but minimize excess (length beyond the start-to-end distance): 2+ε
Optimize a combination of reward and time (use LP-rounding):
–Prize-Collecting TSP: 2 [Goemans Williamson '92]
–Discounted-Reward TSP: 6.75+ε
Joint work with Bansal, Blum, Karger, Meyerson, Minkoff & Lane
Stochastic planning
Robots face uncertainty:
–may run out of battery power
–may face an unforeseen obstacle causing delay
–may follow instructions imprecisely
Uncertainty arises from the robot's environment, as well as its own actions.
Goal: perform as many tasks as possible in expectation before failure occurs; perform all tasks as fast as possible in expectation.
A model for uncertainty: MDPs
Typically modeled as a Markov Decision Process:
–Current position of the robot summarized as a "state"
–Several actions available in each state
–Each action results in a new state with some probability
[Figure: states with probabilistic transitions, e.g. 0.2 / 0.5 / 0.3]
Measure reward/time in expectation.
e.g., Stochastic-TSP: minimize expected time to visit all locations.
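Not on the original slide: "expected time" in such a chain is the fixed point of h(v) = 1 + Σᵤ P(v,u)·h(u) with h(goal) = 0. A minimal sketch on a hypothetical 3-state chain, computed by value iteration:

```python
# Expected time-to-goal in a toy Markov chain, via value iteration on
# h(v) = 1 + sum_u P[v][u] * h(u), with h(goal) = 0.
P = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 2: 0.5}, 2: {2: 1.0}}
goal = 2
h = {v: 0.0 for v in P}
for _ in range(10_000):
    h = {v: 0.0 if v == goal else
         1.0 + sum(p * h[u] for u, p in P[v].items())
         for v in P}
print(round(h[0], 6), round(h[1], 6))  # converges to 6.0 and 4.0
```

The fixed point checks out by hand: h(1) = 1 + 0.5·h(0), h(0) = 1 + 0.5·h(0) + 0.5·h(1) gives h(0) = 6, h(1) = 4.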
Stochastic TSP
The best strategy may be dynamic:
–May depend on the set of states visited, number of steps taken so far, …
–May be exponentially long, and take exponential bits to describe
[Example: heads move forward, tails start again; need n heads in a row to reach the last node — 2ⁿ tries in expectation]
Is Stoch-TSP in NP? In PSPACE? Can we approximate it well in poly time?
Are we allowed to compute the strategy at run-time? Two approaches:
–Pre-compute in poly time and express in poly bits
–Iteratively compute every action at run-time in poly time
Stoch-TSP can be "iteratively" computed in PSPACE.
A simple solution concept: static policies
Dynamic strategy — "history-dependent": depends on previously visited states, current time, etc.; difficult to implement; poorly understood.
Static policy — "memoryless": depends only on the current state, not on history; very easy to implement, easier to analyze, better understood; may be far from optimal.
[Example: Dyn takes 2n steps, Stat takes O(n²)]
What is the worst gap between Dyn and Stat?
The adaptivity gap
What is the worst gap between Dyn and Stat?
First studied for the "stochastic knapsack" problem — known to be a constant [DGV'04, DGV'05].
At least Ω(n), even in "deterministic" graphs [example: Dyn takes 2n steps, Stat takes O(n²)].
At most O(n log n) in deterministic graphs.
At most O(n³ log n) in general.
The adaptivity gap
At most O(n³ log n) in general.
The policy: visit every action with the "probability" that OPT visits it. (Is this well defined?)
For any set S, prob. of taking an edge from S to V∖S ≥ 1/OPT ⇒ some edge from S to V∖S has prob. mass ≥ 1/(n² OPT)
⇒ for any u & v, a path of length at most n, with edges having "probability mass" at least 1/(n² OPT)
⇒ for any u & v, "hitting time" from u to v ≤ O(n³ OPT)
⇒ adaptivity gap = O(n³ log n).
A summary of our results
–The optimal strategy for Stoch-TSP can be iteratively computed in PSPACE
–Adaptivity gap for deterministic graphs is O(n log n) and Ω(n)
–Adaptivity gap in general = O(n³ log n)
–O(n³ log n)-approximation using static strategies
–O(n)-approximation using dynamic strategies
Joint work with Blum, Kleinberg & McMahan
Open Problems
–Approximations for directed path-planning: Chekuri & Pal give quasi-polytime polylog-approximations
–Techniques for approximating planning problems on MDPs
–Hardness for directed/undirected problems
Graph Partitioning
Partitioning with pairwise constraints
Two kinds of constraints: "join" (the pair belongs together) and "split" (the pair must be separated).
Goal: find a partition that satisfies all constraints.
What if all constraints can't be satisfied simultaneously?
New goal: find a partition that satisfies most constraints; minimize the "cost" of violating constraints.
Correlation Clustering
Cost of partition = # (join) edges outside clusters + # (split) edges inside clusters.
[Figure: example clustering with cost = 4 + 1 = 5]
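Not on the original slide: the cost above (the number of "disagreements") is easy to state in code. A sketch on a hypothetical toy instance:

```python
def cc_cost(clusters, join_edges, split_edges):
    """Correlation-clustering cost: join edges across clusters
    plus split edges inside a cluster (the 'disagreements')."""
    label = {v: i for i, c in enumerate(clusters) for v in c}
    across = sum(1 for u, v in join_edges if label[u] != label[v])
    inside = sum(1 for u, v in split_edges if label[u] == label[v])
    return across + inside

clusters = [{"a", "b"}, {"c"}]
joins = [("a", "b"), ("b", "c")]   # 'b' and 'c' end up separated
splits = [("a", "c"), ("a", "b")]  # 'a' and 'b' end up together
print(cc_cost(clusters, joins, splits))  # 1 + 1 = 2
```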
Multicut
Must cut all (split) edges.
Cost of partition = # (join) edges cut by the partition.
[Figure: example with cost = 5]
Sparsest Cut
For a set S, "demand" D(S) = # (split) edges cut; "capacity" C(S) = # (join) edges cut.
Cost, or sparsity = C(S)/D(S).
Goal: find the cut minimizing sparsity.
[Figure: example with sparsity = 2/5 = 0.4]
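Not on the original slide: on a toy instance the sparsity ratio C(S)/D(S) can be minimized by brute force over all subsets S. All names and edge sets below are illustrative:

```python
from itertools import combinations

def sparsity(S, joins, splits):
    """C(S)/D(S): join (capacity) edges vs. split (demand) edges
    crossing the cut (S, V \\ S)."""
    cross = lambda es: sum(1 for u, v in es if (u in S) != (v in S))
    c, d = cross(joins), cross(splits)
    return c / d if d else float("inf")

def sparsest_cut(V, joins, splits):
    best = float("inf")
    for r in range(1, len(V)):
        for S in combinations(sorted(V), r):
            best = min(best, sparsity(set(S), joins, splits))
    return best

V = {"a", "b", "c", "d"}
joins = [("a", "b"), ("c", "d"), ("b", "c")]
splits = [("a", "c"), ("a", "d"), ("b", "d")]
print(sparsest_cut(V, joins, splits))  # S = {a,b} cuts 1 join, 3 splits: 1/3
```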
Why Partition?
Partitioning problems arise in machine learning, computational geometry, computational biology, divide-and-conquer algorithms, network design, …
Correlation clustering: NLP — coreference analysis, document classification, image classification, authorship disambiguation.
Multicut, Sparsest Cut: divide & conquer, mixing of Markov chains, VLSI layout, graph bisection, low-distortion metric embeddings.
Approximating Correlation Clustering
Minimizing disagreements:
–Unweighted complete graph: O(1) (≈17433) [BBC02], 4 [CGW03], 3 [ACN05]; APX-hard [CGW03]
–General graph: O(log n) [CGW03, EF03, DI03]
Maximizing agreements:
–Unweighted complete graph: PTAS [BBC02]
–General graph: 1.3048 [CGW03], 1.3044 [Swamy04]; APX-hard [BBC02]; hardness factors 29/28 and 1.0087 [CGW03]
Maximizing correlation: O(log n) [CW04]
Fixed # of clusters: PTAS [GG05]
Approximating Multicut & Sparsest Cut
Multicut:
–O(log n) approx via LPs [GVY'96]
–APX-hard [DJPSY'94]; integrality gap of Ω(log n) for LP & SDP [ACMM'05]
–Ω(1) and Ω(log log log n) hardness based on UGC [CKKRS'05]
Sparsest Cut:
–O(log n) for "uniform" demands [LR'88]; O(log n) via LPs [LLR'95, AR'98]
–O(√log n) for uniform demands via SDP [ARV'04]
–O(log^{3/4} n) [CGR'05]; O(√log n · log log n) [ALN'05]
–No hardness previously known; Ω(1) and Ω(log log log n) based on UGC [CKKRS'05]
Coming up…
–An O(log^{3/4} n)-approx for generalized Sparsest Cut
–Hardness of approximation for Multicut & Sparsest Cut
–Conclusions and open problems
Sparsest Cut and metric embeddings
Sparsity of a cut S ⊆ V: Φ(S) = Σₑ c(e)·δ_S(e) / Σ_{x,y} D(x,y)·δ_S(x,y),
where the "cut metric" δ_S(x,y) = 1 if x and y are on different sides of the cut (S, V−S), and 0 otherwise.
More generally, for a metric δ: Φ(δ) = Σₑ c(e)·δ(e) / Σ_{x,y} D(x,y)·δ(x,y).
Finding the sparsest cut = minimizing Φ(δ) over all cut metrics = minimizing Φ(δ) over all ℓ₁ metrics.
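Not on the original slide: the equivalence above rests on the fact that every ℓ₁ (here: line) metric is a nonnegative combination of cut metrics. A numeric check on a hypothetical point set:

```python
# |x - y| on a line = sum over thresholds t of gap_t * delta_{S_t}(x,y),
# where S_t = {points <= t}.  Verify on a toy point set.
pts = {"a": 0.0, "b": 1.5, "c": 4.0}

def delta(S, x, y):               # cut (pseudo)metric of the set S
    return 1 if (x in S) != (y in S) else 0

vals = sorted(set(pts.values()))
cuts = []                          # (weight, S_t) pairs
for lo, hi in zip(vals, vals[1:]):
    S = {p for p, v in pts.items() if v <= lo}
    cuts.append((hi - lo, S))

for x in pts:
    for y in pts:
        recon = sum(w * delta(S, x, y) for w, S in cuts)
        assert recon == abs(pts[x] - pts[y])
print("line metric = weighted sum of cut metrics")
```

This is why minimizing Φ over ℓ₁ metrics is the same as minimizing over cuts: the ratio of two linear functions is minimized at an extreme ray, i.e., at a single cut metric.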
Sparsest Cut and metric embeddings
Finding the sparsest cut = minimizing Φ(δ) over ℓ₁ metrics — NP-hard.
Lemma: minimize Φ over a class ℳ ⊇ ℓ₁, and have an α-distortion embedding from ℳ into ℓ₁ ⇒ α-approx for sparsest cut.
When ℳ = all metrics, obtain an O(log n) approximation [Linial London Rabinovich '95, Aumann Rabani '98].
Cannot do any better with this class [Leighton Rao '88].
Sparsest Cut and metric embeddings
ℳ = "negative-type" metrics (squared-Euclidean, or ℓ₂² metrics): O(√log n) approx [Arora Rao Vazirani '04].
Lemma: minimize Φ over a class ℳ, and have an α-average-distortion embedding from ℳ into ℓ₁ ⇒ α-approx for "uniform-demands" sparsest cut.
Question: can we obtain O(√log n) for generalized sparsest cut, or an O(√log n)-distortion embedding from ℓ₂² into ℓ₁?
Arora et al.'s O(√log n)-approx
Solve an SDP relaxation to get the best ℓ₂² representation.
Key theorem: let δ be a "well-spread-out" ℓ₂² metric (for a constant fraction of pairs (x,y), δ(x,y) > const · diameter). Then there is an embedding φ from δ into a line such that:
–for all pairs (x,y), φ(x,y) ≤ δ(x,y)
–for a constant fraction of (x,y), φ(x,y) ≥ δ(x,y) / O(√log n)
This implies an average distortion of O(√log n).
The general case — issues:
1. Well-spreading does not hold
2. A constant fraction is not enough; we want low distortion for every demand pair
1. Ensuring well-spreading
Divide pairs into groups based on distances: Dᵢ = { (x,y) : 2ⁱ ≤ δ(x,y) < 2ⁱ⁺¹ }.
At most O(log n) groups; each group by itself is well-spread, by definition.
Embed each group individually: a distortion-O(√log n), contracting embedding into a line for each (assume for now).
"Glue" the embeddings appropriately:
–Naïve gluing via concatenation: distortion O(√log n) · O(√log n) = O(log n)
–A better gluing, "measured descent" [Krauthgamer et al. '04], gives distortion O(log^{1/4} n) · O(√log n) = O(log^{3/4} n)
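Not on the original slide: the bucketing into distance scales Dᵢ is a one-liner worth seeing. A sketch with hypothetical pair distances:

```python
import math

# Group demand pairs by distance scale: D_i holds pairs whose distance
# lies in [2^i, 2^(i+1)) -- at most O(log n) nonempty groups.
def scale_groups(pairs, dist):
    groups = {}
    for x, y in pairs:
        i = int(math.floor(math.log2(dist[(x, y)])))
        groups.setdefault(i, []).append((x, y))
    return groups

dist = {("a", "b"): 1.0, ("a", "c"): 3.0, ("b", "c"): 2.5, ("a", "d"): 7.0}
g = scale_groups(dist.keys(), dist)
print({i: sorted(v) for i, v in g.items()})
# scale 0: ('a','b'); scale 1: ('a','c'), ('b','c'); scale 2: ('a','d')
```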
2. Average to worst-case distortion
Arora et al.'s guarantee: a constant fraction of pairs embed with low distortion.
We want: every pair should embed with low distortion.
Idea: re-embed pairs that have high distortion.
Problem: increases the number of embeddings, implying a larger distortion.
A "re-weighting" solution: don't ignore low-distortion pairs completely — keep them around and reduce their importance.
Summarizing…
Start with a solution to the SDP. For every distance scale:
–Use [ARV04] to embed points into a line
–Use re-weighting to obtain good worst-case distortion
Combine distance scales using measured descent.
In practice:
–Write another SDP to find the best embedding into ℓ₂
–Use Johnson–Lindenstrauss to embed into low-dimensional ℓ₂ and then into a cut metric (ℓ₁)
Joint work with Gupta & Räcke
Coming up…
–An O(log^{3/4} n)-approx for generalized Sparsest Cut
–Hardness of approximation for Multicut & Sparsest Cut
–Conclusions and open problems
Hardness of approximation: our results
Use Khot's Unique Games Conjecture (UGC): a certain label-cover problem is NP-hard to approximate.
The following holds for Multicut, Sparsest Cut and Min-2CNF Deletion:
–UGC ⇒ L-hardness for any constant L > 0
–Stronger UGC ⇒ Ω(log log log n)-hardness
Joint work with Krauthgamer, Kumar, Rabani & Sivakumar
A label-cover game
Given: a bipartite graph; a set of labels for each vertex; a relation on labels for each edge.
To find: a label for each vertex, maximizing the number of edges satisfied.
Value of game = fraction of edges satisfied by the best solution.
"Is value = 1, or is value < δ?" is NP-hard.
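Not on the original slide: the "value" of such a game is easy to pin down with brute force on a tiny hypothetical instance (all names illustrative):

```python
from itertools import product

def game_value(left, right, labels, edges):
    """Fraction of edges satisfied by the best labeling.
    edges: (u, v) -> set of allowed (label_u, label_v) pairs."""
    best = 0.0
    for assign in product(labels, repeat=len(left) + len(right)):
        lab = dict(zip(left + right, assign))
        sat = sum(1 for (u, v), rel in edges.items()
                  if (lab[u], lab[v]) in rel)
        best = max(best, sat / len(edges))
    return best

left, right, labels = ["u1", "u2"], ["v1"], [0, 1]
edges = {("u1", "v1"): {(0, 1), (1, 0)},   # labels must differ
         ("u2", "v1"): {(0, 0), (1, 1)}}   # labels must agree
print(game_value(left, right, labels, edges))  # 1.0: fully satisfiable
```

In a *unique* game (next slide), each edge relation is a bijection on the labels, as in the two relations above.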
Unique Games Conjecture
Given: a bipartite graph; a set of labels for each vertex; a bijection on labels for each edge.
To find: a label for each vertex, maximizing the number of edges satisfied.
Value of game = fraction of edges satisfied by the best solution.
UGC: "Is value > 1−ε, or is value < δ?" is NP-hard [Khot'02].
The power of UGC
UGC implies the following hardness results:
–Vertex Cover: 2−ε [KR'03]
–Max-Cut: α_GW ≈ 0.878 [KKMO'04]
–Min 2-CNF Deletion
–Max-k-Cut
–2-Lin-mod-2
–…
The plausibility of UGC (k: # labels, n: # nodes)
Some parameter regimes are known to be solvable in poly time, ruling out the conjecture there [Khot'02, Trevisan'05, CMM'05]; some label-cover hardness is known NP-hard [FR'04]; the regime of the conjecture itself remains open, and the conjecture is plausible [Khot'02].
Strongest plausible version: 1/ε, 1/δ < (min(log k, log log n))^c.
Our results
Use Khot's Unique Games Conjecture: a certain label-cover problem (parameters roughly k ≈ log n, ε, δ ≈ (log log n)⁻ᶜ) is hard to approximate.
The following holds for Multicut, Sparsest Cut and Min-2CNF Deletion:
–UGC ⇒ Ω(log 1/(ε+δ))-hardness, in particular L-hardness for any constant L > 0
–Stronger UGC ⇒ Ω(log log log n)-hardness
The key gadget: the d-dimensional hypercube
Cheapest cut — a "dimension cut": cost = 2^{d−1}.
Most expensive cut — the "diagonal cut": cost = O(√d · 2^d).
Cheap cuts look like dimension cuts — they lean heavily on few dimensions.
[Kahn Kalai Linial '88]: suppose the size of the cut is < x · 2^{d−1}. Then there is a dimension h such that the fraction of edges cut along h is > 2^{−O(x)}.
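Not on the original slide: the two extremes on the hypercube are easy to verify numerically. A sketch for a small d (the "diagonal" cut is taken to be the majority cut, an illustrative choice):

```python
from itertools import product

# Count hypercube edges crossing a given vertex 2-coloring; each edge
# is counted once, at its endpoint with a 0 in the flipped coordinate.
def cut_size(d, side):
    cut = 0
    for v in product((0, 1), repeat=d):
        for i in range(d):
            if v[i] == 0:
                u = v[:i] + (1,) + v[i + 1:]
                cut += side(v) != side(u)
    return cut

d = 6
dim_cut = cut_size(d, lambda v: v[0])           # dimension cut
maj_cut = cut_size(d, lambda v: sum(v) > d / 2) # "diagonal"/majority cut
print(dim_cut, maj_cut)  # dimension cut has exactly 2^(d-1) = 32 edges
```

Only the edges between Hamming weights 3 and 4 cross the majority cut here, yet there are already nearly twice as many of them as in a dimension cut, and the ratio grows like √d.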
Relating cuts to labels
Suppose that "cross-edges" cannot be cut; then each cube must have exactly the same cut.
Picking labels for a vertex: pick a dimension in proportion to its "weight" in the cut.
Cheap cut ⇒ high prob. of picking the most prominent dimension ⇒ high prob. of picking the same dimension on either side.
A recap…
"YES"-instance of UG ⇒ cut < 2^d per cube.
"NO"-instance of UG ⇒ cut > Ω(log 1/(ε+δ)) · 2^{d−1} per cube.
UGC: NP-hard to distinguish between "YES" and "NO" instances of UG
⇒ NP-hard to distinguish whether the cut is above or below Ω(log 1/(ε+δ)) · 2^{d−1} · n
⇒ Ω(log 1/(ε+δ))-hardness for Multicut.
A related result… [Khot Vishnoi '05]
Independently obtain Ω(min(1/ε, log 1/δ)^{1/6}) hardness based on the same assumption.
Use this to develop an "integrality-gap" instance for the Sparsest Cut SDP:
–A graph with low SDP value and high actual value
–Implies that we cannot obtain a better than Ω((log log n)^{1/6}) approximation using SDPs
–Independent of any assumptions!
Open Problems
–Closing the gap for Sparsest Cut: better approximation via SDP? Would have to avoid embedding via ℓ₂
–Improving the hardness — the Fourier analysis is tight; a reduction based on a general 2-prover system?
–Prove/disprove UGC
–Hardness for uniform sparsest cut, min-bisection, cuts in directed graphs, …?
The adaptivity gap: deterministic case — at most O(n log n)
Let the optimal TSP tour be π, with |π| = m. Remove all edges from the graph except those of π.
Policy: at every location, pick an outgoing edge at random.
Claim: for all u & v, the expected time (hitting time) to go from u to v is ≤ mn.
Claim': for all u and e = (u,v), the expected time to return to v starting at v is ≤ m.
Proof of claim: there is a path of length at most n from u to v.
Suppose that, starting from u, we run for time O(mn log n); the probability that some node v is not seen is < 1/poly(n).
⇒ Adaptivity gap = O(n log n).
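Not on the original slide: when the tour is a simple cycle, the hitting times of this random-walk policy have the closed form h(k) = k(m−k) ≤ m²/4 (k = distance along the cycle), in line with the ≤ mn claim. A numeric check by value iteration on a toy cycle:

```python
# Hitting times of the simple random walk on an m-cycle to node 0:
# solve h(k) = 1 + (h(k-1) + h(k+1)) / 2 by value iteration and
# compare with the known closed form h(k) = k * (m - k).
m = 8                       # toy cycle length
h = [0.0] * m               # h[k]: expected steps from node k to node 0
for _ in range(200_000):
    h = [0.0 if k == 0 else
         1.0 + 0.5 * h[(k - 1) % m] + 0.5 * h[(k + 1) % m]
         for k in range(m)]
print([round(v) for v in h])  # [0, 7, 12, 15, 16, 15, 12, 7]
```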
The adaptivity gap: general case
Thm: adaptivity gap = O(n³ log n).
Claim: for any u & v, there is a path of length at most n, with edges having "probability mass" at least 1/(n² OPT).
Claim: for any u & v, the hitting time from u to v is ≤ O(n³ OPT).
The adaptivity gap: general case
Thm: adaptivity gap = O(n³ log n).
Defining the policy: as before, visit each edge/action roughly with the probability that OPT visits it. (Is this well defined?)
For any set S, prob. of taking an edge from S to V∖S ≥ 1/OPT, i.e., some edge from S to V∖S has prob. mass ≥ 1/(n² OPT)
⇒ for any u & v, a path of length at most n, with edges having "probability mass" at least 1/(n² OPT)
⇒ for any u & v, hitting time from u to v ≤ O(n³ OPT)
⇒ adaptivity gap = O(n³ log n).
More formally…
Model as a graph, with distances between locations and "rewards" on locations.
Parameters of interest:
–Timing constraints: single deadline, different deadlines, time-windows
–Reward constraints: a quota on the total reward collected
–Vehicle constraints: a given number of vehicles
–Solution concept: path, tree, cycle
–…
A simple failure model
At each step, fail with some probability γ. Goal: maximize expected reward collected before failure.
A crude heuristic:
–Expected number of steps until failure = 1/γ
–Set deadline = 1/γ and apply the Orienteering algorithm
–Provides no guarantee on reward
Better formulation — "exponential discounting":
–Probability that the robot is alive at time t = γ^t (say γ = ½, giving 2^{−t})
–Thus, if the robot visits a reward at time t, the expected reward collected is (reward) · 2^{−t}
–Maximize the "discounted reward"
[Figure: reward vs. time — fixed deadline (Orienteering) vs. discounted reward]
Can be solved using techniques in AI if reward is collected every time the robot visits a location; the one-time reward case is Discounted-Reward TSP: 6.8-approximation [FOCS'03].
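Not on the original slide: the discounted-reward objective for a fixed route is a one-pass computation. A sketch on a hypothetical instance, with γ = ½ as on the slide:

```python
# Discounted reward of visiting a sequence of locations: a reward
# collected at arrival time t is scaled by gamma**t.
def discounted_reward(path, dist, reward, gamma=0.5):
    t, total = 0, 0.0
    for prev, cur in zip(path, path[1:]):
        t += dist[(prev, cur)]
        total += reward[cur] * gamma ** t
    return total

dist = {("s", "a"): 1, ("a", "b"): 2}
reward = {"a": 4.0, "b": 8.0}
print(discounted_reward(["s", "a", "b"], dist, reward))
# 4*(1/2)^1 + 8*(1/2)^3 = 2 + 1 = 3.0
```

The optimization problem — choosing the route that maximizes this quantity — is the Discounted-Reward TSP above.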
The adaptivity gap: general case
Thm: adaptivity gap = O(n³ log n).
Defining the policy: as before, visit each edge/action roughly with the probability that OPT visits it. Is this well defined?
[Figure: OPT's decision tree unrolled into paths p₁, p₂, p₃, …, p_k]
Label on any edge = total probability of all of OPT's paths through it.
Prob. of taking action a from node v ∝ sum of labels on all (a,v) edges.
The adaptivity gap: general case
Thm: adaptivity gap = O(n³ log n).
Defining the policy: as before, visit each edge/action roughly with the probability that OPT visits it.
Prob. of taking action a from node v = (sum of labels on all (a,v) edges) / OPT.
For any set S, prob. of taking an edge from S to V∖S ≥ 1/OPT, i.e., some edge from S to V∖S has prob. mass ≥ 1/(n² OPT)
⇒ for any u & v, a path of length at most n, with edges having "probability mass" at least 1/(n² OPT)
⇒ for any u & v, hitting time from u to v ≤ O(n³ OPT)
⇒ adaptivity gap = O(n³ log n).
Gluing the groups
Start with an α = O(√log n) embedding for each scale.
A naïve gluing: concatenate all the embeddings and renormalize by dividing by O(√log n); distortion O(√log n) · O(√log n) = O(log n).
A better gluing lemma — "measured descent" by Krauthgamer, Lee, Mendel & Naor (2004) — gives distortion O(log^{1/4} n) · α = O(log^{3/4} n).
Extensions to other problems
Obvious extension to Min-2CNF Deletion: think of edges as 2-variable constraints.
"Bi-criteria" Multicut:
–Allowed to separate only a ¼ fraction of the demand pairs
–The Fourier analysis stays the same: cheap cuts cutting a ¼ of the pairs are close to dimension cuts
–A similar guarantee follows
Sparsest Cut: a simple extension of bi-criteria Multicut.
Weighting-and-watching
Initialize weight = 1 for each pair. Repeat until total weight < 1/k:
–Apply ARV to the weighted instance; a constant fraction of the weight is embedded with low distortion
–For pairs with low distortion, decrease weights by a factor of 2; for other pairs, do nothing
Total weight decreases by a constant factor every iteration ⇒ O(log k) iterations.
Each individual weight decreases from 1 to below 1/k ⇒ each pair contributes to Ω(log k) iterations ⇒ low distortion for every pair.
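Not on the original slide: the counting argument above can be sketched with a mock "embedding oracle" standing in for ARV. Here the oracle simply certifies a heaviest half of the current weight — an arbitrary illustrative choice; the point is only that the loop ends after O(log k) rounds with every pair certified at least once:

```python
# Sketch of the re-weighting loop with a mock low-distortion oracle.
pairs = list(range(16))
k = len(pairs)
weight = {p: 1.0 for p in pairs}
certified = set()
rounds = 0
while sum(weight.values()) >= 1.0 / k:
    heaviest = sorted(pairs, key=lambda p: -weight[p])[: k // 2]
    for p in heaviest:            # "low-distortion" pairs this round
        weight[p] /= 2            # keep them, but halve their importance
        certified.add(p)
    rounds += 1
print(rounds, sorted(certified) == pairs)
```

Halving half the weight shrinks the total by a 3/4 factor per round, so the loop runs O(log k) times; and since no weight reaches 1/k without being halved, every pair is certified along the way.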
Good Multicut ⇒ good labeling
Suppose that "cross-edges" cannot be cut; then each cube must have exactly the same cut.
Picking labels for a vertex: Prob[label = h] = (# edges cut in dimension h) / (total # edges cut in the cube).
If the cut is < x · 2^{d−1}, then by [KKL'88] some dimension carries a 2^{−O(x)} fraction of the cut edges, so Prob[label₁ = h & label₂ = h] > 2^{−O(x)}, which is ≥ δ for x = O(log 1/δ).
⇒ Cut < log(1/δ) · 2^{d−1} per cube ⇒ a δ-fraction of edges can be satisfied.
Conversely, a "NO"-instance of UG ⇒ cut > log(1/δ) · 2^{d−1} per cube.
Good Multicut ⇒ good labeling
Suppose that "cross-edges" cannot be cut; then each cube must have exactly the same cut.
Picking labels for a vertex: pick a dimension in proportion to its "weight" in the cut.
Cheap cut ⇒ high prob. of picking the most prominent dimension ⇒ high prob. of picking the same dimension on either side.
Key claim: if cut < x · 2^{d−1} per cube, most edges are satisfied with prob. ≥ 2^{−O(x)}.
⇒ Cut < log(1/δ) · 2^{d−1} per cube ⇒ a δ-fraction of edges can be satisfied.
Conversely, a "NO"-instance of UG ⇒ cut > log(1/δ) · 2^{d−1} per cube.
Good labeling ⇒ good Multicut
Constructing a good cut given a label assignment: for every cube, pick the dimension cut corresponding to the label of its vertex.
What about unsatisfied edges? Remove the corresponding cross-edges.
With n = # of nodes and m = # of edges in the UG instance: total cost ≤ 2^{d−1} · n (the dimension cuts) + (cost of cross-edges for the ε·m unsatisfied edges) = O(2^d · n), i.e., O(2^d) per cube.
⇒ A "YES"-instance of UG ⇒ cut < 2^d per cube.
Revisiting the "NO" instance
The cheapest multicut may cut cross-edges, but it cannot cut too many cross-edges on average.
For most cube-pairs, few edges are cut ⇒ the cuts on either side are similar, if not the same.
The same analysis as before follows.
Path-planning
Why planning?
–Delivery & distribution problems
–Production planning, supply chain management
–Robot navigation
Studied in Operations Research for 2–3 decades.
Model as a graph, with distances between locations and "rewards" on locations.
Parameters of interest:
–Timing constraints: single deadline, different deadlines, time-windows
–Reward constraints: a quota on the total reward collected
–Vehicle constraints: a given number of vehicles
–Solution concept: path, tree, cycle
–…