1
Locality Sensitive Distributed Computing David Peleg Weizmann Institute
2
Structure of mini-course 1.Basics of distributed network algorithms 2.Locality-preserving network representations 3.Constructions and applications
3
Part 2: Representations 1.Clustered representations Basic concepts: clusters, covers, partitions Sparse covers and partitions Decompositions and regional matchings 2.Skeletal representations Spanning trees and tree covers Sparse and light weight spanners
4
Basic idea of locality-sensitive distributed computing Utilize locality to both simplify control structures and algorithms and reduce their costs Operation performed in large network may concern few processors in small region (Global operation may have local sub-operations) Reduce costs by utilizing “locality of reference”
5
Components of locality theory General framework, complexity measures and algorithmic methodology Suitable graph-theoretic structures and efficient construction methods Adaptation to wide variety of applications
6
Fundamental approach Clustered representation: Impose clustered hierarchical organization on given network Use it efficiently for bounding complexity of distributed algorithms. Skeletal representation: Sparsify given network Execute applications on remaining skeleton, reducing complexity
7
Clusters, covers and partitions Cluster = connected subset of vertices S ⊆ V
8
Clusters, covers and partitions Cover of G(V,E,w) = collection of clusters 𝒮 = {S_1,...,S_m} containing all vertices of G (i.e., s.t. ∪_i S_i = V).
9
Partitions Partial partition of G = collection of disjoint clusters 𝒮 = {S_1,...,S_m}, i.e., s.t. S_i ∩ S_j = ∅. Partition = cover & partial partition
10
Evaluation criteria Locality and Sparsity Locality level: cluster radius Sparsity level: vertex / cluster degrees
11
Evaluation criteria Locality - sparsity tradeoff: locality and sparsity parameters go opposite ways: better sparsity ⇔ worse locality (and vice versa)
12
Evaluation criteria Locality measures Weighted distances: Length of path (e_1,...,e_s) = ∑_{1≤i≤s} w(e_i); dist(u,w,G) = (weighted) length of a shortest u-w path; dist(U,W) = min{ dist(u,w) | u ∈ U, w ∈ W }
13
Evaluation criteria Diameter, radius: as before, except weighted. Denote logD = ⌈log Diam(G)⌉. For a collection of clusters 𝒮: Diam(𝒮) = max_i Diam(S_i), Rad(𝒮) = max_i Rad(S_i)
14
Neighborhoods Γ(v) = neighborhood of v = set of neighbors of v in G (including v itself)
15
Neighborhoods Γ_ρ(v) = ρ-neighborhood of v = vertices at distance ρ or less from v (e.g., Γ_0(v) ⊆ Γ_1(v) ⊆ Γ_2(v))
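To make the definition concrete, here is a minimal sketch of computing Γ_ρ(v) in an unweighted graph by depth-bounded BFS (the function name and the dict-of-sets adjacency representation are illustrative assumptions, not from the slides; a weighted variant would use Dijkstra):

```python
from collections import deque

def gamma(adj, v, rho):
    """Gamma_rho(v): all vertices within rho hops of v, including v.
    adj: dict mapping each vertex to a set of its neighbors."""
    seen = {v}
    queue = deque([(v, 0)])
    while queue:
        u, d = queue.popleft()
        if d == rho:
            continue                     # do not expand past radius rho
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))
    return seen
```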
16
Neighborhood covers For W ⊆ V: 𝒮_ρ(W) = ρ-neighborhood cover of W = { Γ_ρ(v) | v ∈ W } (the collection of ρ-neighborhoods of W's vertices)
17
Neighborhood covers E.g.: 𝒮_0(V) = the partition into singleton clusters
18
Neighborhood covers E.g.: 𝒮_1(W) = cover of the W nodes by their 1-neighborhoods (figure: W = colored nodes, and the cover 𝒮_1(W))
19
Sparsity measures Different representations ⇒ different ways to measure sparsity
20
Cover sparsity measure - overlap deg(v,𝒮) = # occurrences of v in clusters S ∈ 𝒮, i.e., the degree of v in the hypergraph (V,𝒮) (figure: deg(v) = 3). Δ_C(𝒮) = maximum degree of cover 𝒮. Av(𝒮) = average degree of 𝒮 = ∑_{v∈V} deg(v,𝒮) / n = ∑_{S∈𝒮} |S| / n
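A small sketch of these two sparsity measures (the function name and the list-of-sets input format are assumptions):

```python
def cover_degrees(cover, n):
    """Max and average degree of a cover; cover is a list of vertex sets."""
    deg = {}
    for S in cover:
        for v in S:
            deg[v] = deg.get(v, 0) + 1   # deg(v, S) = # clusters containing v
    max_deg = max(deg.values())
    avg_deg = sum(len(S) for S in cover) / n   # = sum_v deg(v, S) / n
    return max_deg, avg_deg
```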
21
Partition sparsity measure - adjacency Intuition: “contract” clusters into super-nodes and look at the resulting cluster graph of 𝒮, Γ(𝒮) = (𝒮, ℰ)
22
Partition sparsity measure - adjacency Γ(𝒮) = (𝒮, ℰ): ℰ = inter-cluster edges = {(S,S') | S,S' ∈ 𝒮, G contains an edge (u,v) with u ∈ S and v ∈ S'}
23
Cluster-neighborhood Def: Given a partition 𝒮 and a cluster S ∈ 𝒮: the cluster-neighborhood of S = the neighborhood of S in the cluster graph Γ(𝒮): Γ_c(S,G) = Γ(S, Γ(𝒮))
24
Sparsity measure Average cluster-degree of partition 𝒮: Av_c(𝒮) = ∑_{S∈𝒮} |Γ_c(S)| / n. Note: n·Av_c(𝒮) ~ # inter-cluster edges
25
Example: A basic construction Goal: produce a partition 𝒮 with: 1. clusters of radius ≤ k, 2. few inter-cluster edges (i.e., low Av_c(𝒮)). Algorithm BasicPart operates in iterations, each constructing one cluster
26
Example: A basic construction At the end of each iteration: add the resulting cluster S to the output collection 𝒮, discard it from V; if V is not empty then start a new iteration
27
Iteration structure Arbitrarily pick a vertex v from V Grow cluster S around v, adding layer by layer Vertices added to S are discarded from V
28
Iteration structure The layer-adding process is repeated until reaching the required sparsity condition: the next layer would increase # vertices by a factor of < n^{1/k} (i.e., |Γ(S)| < |S|·n^{1/k}); see the sketch below
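A minimal sequential sketch of Algorithm BasicPart under these rules (the function name and graph representation are assumptions; the actual algorithm is distributed):

```python
def basic_part(adj, k):
    """Sketch of Algorithm BasicPart: grow clusters layer by layer until
    the next layer would grow the cluster by a factor < n**(1/k).
    adj: dict vertex -> set of neighbors. Returns a list of clusters."""
    n = len(adj)
    remaining = set(adj)
    clusters = []
    while remaining:
        v = next(iter(remaining))                 # arbitrary next center
        S = {v}
        while True:
            layer = {u for x in S for u in adj[x] if u in remaining} - S
            if len(S) + len(layer) >= len(S) * n ** (1.0 / k):
                S |= layer                        # sparsity condition not met
            else:
                break                             # |Gamma(S)| < |S| * n**(1/k)
        clusters.append(S)
        remaining -= S                            # discard clustered vertices
    return clusters
```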
29
Analysis Av-Deg-Partition Thm: Given an n-vertex graph G(V,E) and an integer k≥1, Alg. BasicPart creates a partition 𝒮 satisfying: 1) Rad(𝒮) ≤ k-1; 2) # inter-cluster edges in Γ(𝒮) ≤ n^{1+1/k} (i.e., Av_c(𝒮) ≤ n^{1/k})
30
Analysis (cont) Proof: Correctness: Every S added to 𝒮 is a (connected) cluster. The generated clusters are disjoint (the algorithm erases from V every v added to a cluster) ⇒ 𝒮 is a partition (covers all vertices)
31
Analysis (cont) Property (2): [ # inter-cluster edges ≤ n^{1+1/k} ]: By the termination condition of the internal loop, the resulting S satisfies |Γ(S)| ≤ n^{1/k}·|S| ⇒ (# inter-cluster edges touching S) ≤ n^{1/k}·|S|. This number can only decrease in later iterations, if adjacent vertices get merged into the same cluster ⇒ |ℰ| ≤ ∑_{S∈𝒮} n^{1/k}·|S| = n^{1+1/k}
32
Analysis (cont) Property (1): [ Rad(𝒮) ≤ k-1 ]: Consider an iteration of the main loop. Let J = # times the internal loop was executed, and let S_i = the S constructed on the i'th internal iteration ⇒ |S_i| > n^{(i-1)/k} for 2≤i≤J (by induction on i)
33
Analysis (cont) ⇒ J ≤ k (otherwise |S_J| > n). Note: Rad(S_i) ≤ i-1 for every 1≤i≤J (S_1 is composed of a single vertex; each additional layer increases Rad(S_i) by 1) ⇒ Rad(S_J) ≤ k-1
34
Variant - Separated partial partitions Sep(𝒮) = separation of partial partition 𝒮 = minimal distance between any two of its clusters. When Sep(𝒮) = s, we say 𝒮 is s-separated. Example (figure): a 2-separated partial partition
35
Coarsening Cover 𝒯 = {T_1,...,T_q} coarsens 𝒮 = {S_1,...,S_p} if each 𝒮-cluster is fully subsumed in some 𝒯-cluster
36
Coarsening (cont) The radius ratio of the coarsening = Rad(𝒯) / Rad(𝒮) (figure: radii r and R, ratio R/r)
37
Coarsening (cont) Motivation: Given a “useful” cover 𝒮 with high overlaps: coarsen 𝒮 by merging some clusters together, getting a coarsening cover 𝒯 with larger clusters ⇒ better sparsity, increased radii
38
Sparse covers Goal: For an initial cover 𝒮, construct a coarsening 𝒯 with low overlaps, paying little in cluster radii. Simple goal: low average degree. Inherent tradeoff: lower overlap ⇔ higher radius ratio (and vice versa)
39
Sparse covers Algorithm AvCover operates in iterations; each iteration merges together some clusters into one output cluster Z. At the end of each iteration: add the resulting cluster Z to the output collection 𝒯; discard the merged clusters from 𝒮; if 𝒮 is not empty then start a new iteration
40
Sparse covers Algorithm AvCover – high-level flow
41
Iteration structure Arbitrarily pick a cluster S_0 in 𝒮 (as the kernel Y of the cluster Z constructed next). Repeatedly merge the cluster with intersecting clusters from 𝒮 (adding one layer at a time). Clusters added to Z are discarded from 𝒮
42
Iteration structure The layer-merging process is repeated until reaching the required sparsity condition: adding the next layer increases # vertices by a factor of ≤ n^{1/k} (|Z| ≤ |Y|·n^{1/k}); see the sketch below
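A corresponding sequential sketch of AvCover's iteration (assumed structure; the input cover is a list of vertex sets):

```python
def av_cover(clusters, n, k):
    """Sketch of Algorithm AvCover: grow each output cluster Z around a
    kernel Y by absorbing all remaining clusters that intersect Y, until
    |Z| <= |Y| * n**(1/k)."""
    remaining = [set(S) for S in clusters]
    output = []
    while remaining:
        Z = remaining.pop()                       # arbitrary S_0
        while True:
            Y = Z                                 # current kernel
            hit = [S for S in remaining if S & Y]
            remaining = [S for S in remaining if not (S & Y)]
            Z = Y.union(*hit) if hit else set(Y)  # merge one layer
            if len(Z) <= len(Y) * n ** (1.0 / k):
                break                             # sparsity condition reached
        output.append(Z)
    return output
```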
43
Analysis Thm: Given a graph G(V,E,w), a cover 𝒮 and an integer k≥1, Algorithm AvCover constructs a cover 𝒯 s.t.: 1. 𝒯 coarsens 𝒮; 2. Rad(𝒯) ≤ (2k+1)·Rad(𝒮) (radius ratio ≤ 2k+1); 3. Av(𝒯) ≤ n^{1/k} (low average sparsity)
44
Analysis (cont) Corollary for ρ-neighborhood covers: Given G(V,E,w) and integers k,ρ≥1, there exists a cover 𝒯 = 𝒯_{ρ,k} s.t.: 1. 𝒯 coarsens the neighborhood cover 𝒮_ρ(V); 2. Rad(𝒯) ≤ (2k+1)·ρ; 3. Av(𝒯) ≤ n^{1/k}
45
Analysis (cont) Proof of Thm: Property (1): [ 𝒯 coarsens 𝒮 ]: Holds directly from the construction (each Z added to 𝒯 is a (connected) cluster, since 𝒮 contained connected clusters to begin with)
46
Analysis (cont) Claim: The kernels Y corresponding to the clusters Z generated by the algorithm are mutually disjoint. Proof: By contradiction. Suppose there is a vertex v s.t. v ∈ Y ∩ Y'. W.l.o.g. suppose Y was created before Y'. v ∈ Y' ⇒ there is a cluster S' s.t. v ∈ S' and S' was still in 𝒮 when the algorithm started constructing Y'.
47
Analysis (cont) But S' satisfies S' ∩ Y ≠ ∅ ⇒ the final merge creating Z from Y should have added S' into Z and eliminated it from 𝒮; contradiction.
48
Output clusters and kernels (figure: the output cover and its kernels)
49
Analysis (cont) Property (2): [ Rad(𝒯) ≤ (2k+1)·Rad(𝒮) ]: Consider some iteration of the main loop (starting with cluster S_0). J = # times the internal loop was executed; Z_0 = the initial set; Z_i = the Z constructed on the i'th internal iteration (1≤i≤J); respectively Y_i
50
Analysis (cont) Note 1: |Z_i| > n^{i/k} for every 1≤i≤J-1 ⇒ J ≤ k. Note 2: Rad(Y_i) ≤ (2i-1)·Rad(𝒮) for every 1≤i≤J ⇒ Rad(Y_J) ≤ (2k-1)·Rad(𝒮)
51
Analysis (cont) Property (3): [ Av(𝒯) ≤ n^{1/k} ]: Av(𝒯) = ∑_i |Z_i| / n ≤ ∑_i |Y_i|·n^{1/k} / n ≤ n·n^{1/k} / n (the Y_i's are disjoint) = n^{1/k}
52
Partial partitions Goal: Given an initial cover 𝒮 and an integer k≥1, construct a partial partition 𝒯 subsuming a “large” subcollection 𝒮' ⊆ 𝒮 of clusters, with low radius ratio.
53
Partial partitions (cont) Procedure Part: general structure and iterations similar to Algorithm AvCover, except for two differences. Small difference: the procedure also keeps, for Y and Z, the collections of original clusters merged into them.
54
Partial partitions (cont) Small difference (cont): The sparsity condition concerns the sizes of these cluster collections, i.e., the # of original clusters “captured” by the merge, and not the sizes of Y, Z, i.e., the # of vertices covered. Merging ends when the next iteration increases the # of clusters merged into Z by a factor ≤ |𝒮|^{1/k}.
55
Main difference The procedure removes from 𝒮 all the clusters merged into Z, but takes into the output collection 𝒯 only the kernel Y, not the full cluster Z
56
Main difference Implication: Each selected cluster Y has an additional “external layer” of clusters around it, acting as a “protective barrier” providing disjointness between different clusters Y, Y' added to 𝒯
57
Main difference Note: Not all 𝒮-clusters are subsumed by 𝒯 (e.g., those merged into some external layer will not be subsumed)
58
Analysis Partial Partition Lemma: Given a graph G(V,E,w), a cluster collection 𝒮 and an integer k≥1, the collections 𝒯 and 𝒮' constructed by Procedure Part(𝒮) satisfy: 1. 𝒯 coarsens 𝒮' (as before); 2. 𝒯 is a partial partition (i.e., Y ∩ Y' = ∅ for every Y,Y' ∈ 𝒯) (guaranteed by construction); 3. |𝒮'| ≥ |𝒮|^{1-1/k} (# clusters discarded ≤ |𝒮|^{1/k} · # clusters taken); 4. Rad(𝒯) ≤ (2k-1)·Rad(𝒮) (as before)
59
s-Separated partial partitions Goal: For an initial ρ-neighborhood cover 𝒮 and integers s,k≥1, construct an s-separated partial partition 𝒯 subsuming a “large” subcollection 𝒮' ⊆ 𝒮 of clusters, with low radius ratio.
60
s-Separated partial partitions (cont) Procedure SepPart: Given 𝒮 = { Γ_ρ(v) | v ∈ W } for some W ⊆ V, construct a modified collection 𝒮⁺ of neighborhoods of radius ρ⁺ = ρ + s/2: 𝒮⁺ = { Γ_{ρ⁺}(v) | v ∈ W }
61
s-Separated partial partitions (cont) Example: ρ = 1, s = 2 ⇒ ρ⁺ = 2; 𝒮 = { Γ_1(v) | v ∈ W }, 𝒮⁺ = { Γ_2(v) | v ∈ W }
62
s-Separated partial partitions (cont) Apply Procedure Part to 𝒮⁺, getting a partial partition 𝒯⁺ and a subsumed subcollection of 𝒮⁺. Transform 𝒯⁺ into the required 𝒯 as follows: shrink each cluster T⁺ ∈ 𝒯⁺ into T by eliminating from T⁺ the vertices closer than s/2 to its border; 𝒮' = the input ρ-neighborhoods corresponding to the subsumed 𝒮⁺-neighborhoods
63
Analysis Lemma: Given a graph G(V,E,w), a collection 𝒮 of ρ-neighborhoods and integers s,k, the collections 𝒯 and 𝒮' constructed by Procedure SepPart satisfy: 1. 𝒯 coarsens 𝒮'; 2. 𝒯 is an s-separated partial partition; 3. |𝒮'| ≥ |𝒮|^{1-1/k}; 4. Rad(𝒯) ≤ (2k-1)·ρ + k·s
64
Sparse covers with low max degree Goal: For an initial cover 𝒮, construct a coarsening cover 𝒯 with low maximum degree and low radius ratio. Idea: Reduce to the sub-problem of partial partition
65
Low max degree covers (cont) Strategy: Given an initial cover 𝒮 and an integer k≥1: 1. Repeatedly select low-radius partial partitions, each subsuming many clusters of 𝒮. 2. Their union should subsume all of 𝒮. 3. The resulting overlap = # partial partitions.
66
Low max degree covers (cont) Algorithm MaxCover: Cover the 𝒮-clusters by several partial partitions (repeatedly applying Procedure Part to the remaining clusters, until 𝒮 is exhausted); merge the constructed partial partitions into the desired cover 𝒯
67
Low max degree covers (cont) Max-Deg-Cover Thm: Given G(V,E,w), a cover 𝒮 and an integer k≥1, Algorithm MaxCover constructs a cover 𝒯 satisfying: 1. 𝒯 coarsens 𝒮; 2. Rad(𝒯) ≤ (2k-1)·Rad(𝒮); 3. Δ_C(𝒯) ≤ 2k·|𝒮|^{1/k}
68
Analysis Proof: Define 𝒮_i = the contents of 𝒮 at the start of phase i; r_i = |𝒮_i|; 𝒯_i = the set added to 𝒯 at the end of phase i; 𝒮'_i = the set removed from 𝒮 at the end of the phase.
69
Analysis (cont) Property (1): [ 𝒯 coarsens 𝒮 ]: Since 𝒯 = ∪_i 𝒯_i and 𝒮 = ∪_i 𝒮'_i, and by the Partial Partition Lemma 𝒯_i coarsens 𝒮'_i for every i. Property (2): [ Rad(𝒯) ≤ (2k-1)·Rad(𝒮) ]: Directly by the Partial Partition Lemma
70
Analysis (cont) Property (3): [ Δ_C(𝒯) ≤ 2k·|𝒮|^{1/k} ]: By the Partial Partition Lemma, the clusters in each 𝒯_i are disjoint ⇒ # clusters v belongs to ≤ # phases of the algorithm
71
Analysis (cont) Observation: In every phase i, the # of clusters removed from 𝒮_i satisfies |𝒮'_i| ≥ |𝒮_i|^{1-1/k} (by the Partial Partition Lemma) ⇒ the size of the remaining 𝒮_i shrinks as r_{i+1} ≤ r_i - r_i^{1-1/k}
72
Analysis (cont) Claim: Given the recurrence x_{i+1} = x_i - x_i^δ, 0<δ<1, let f(n) = the least index i s.t. x_i ≤ 1, given x_0 = n. Then f(n) < ((1-δ)·ln 2)^{-1}·n^{1-δ}. Consequently (taking δ = 1-1/k): as r_0 = |𝒮|, 𝒮 is exhausted after ≤ 2k·|𝒮|^{1/k} phases of Algorithm MaxCover ⇒ Δ_C(𝒯) ≤ 2k·|𝒮|^{1/k}. (A small numeric check follows.)
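The claim is easy to sanity-check numerically; the following sketch (names assumed) iterates the recurrence and compares the step count against the stated bound:

```python
import math

def phases_to_exhaust(n, delta):
    """Iterate x_{i+1} = x_i - x_i**delta from x_0 = n until x_i <= 1,
    returning (#steps, claimed bound). A numeric check, not a proof."""
    x, steps = float(n), 0
    while x > 1:
        x -= x ** delta
        steps += 1
    bound = n ** (1 - delta) / ((1 - delta) * math.log(2))
    return steps, bound

# With delta = 1 - 1/k the bound is (k / ln 2) * n**(1/k) <= 2k * n**(1/k).
```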
73
Analysis (cont) Corollary for ρ-neighborhood covers: Given G(V,E,w) and integers k,ρ ≥ 1, there exists a cover 𝒯 = 𝒯_{ρ,k} satisfying: 1. 𝒯 coarsens 𝒮_ρ(V); 2. Rad(𝒯) ≤ (2k-1)·ρ; 3. Δ_C(𝒯) ≤ 2k·n^{1/k}
74
Covers based on s-separated partial partitions Goal: A cover 𝒯 coarsening the neighborhood cover 𝒮_ρ(V), in which the partial partitions are well separated. Method: Substitute Procedure SepPart for Procedure Part in Algorithm MaxCover.
75
Covers based on s-separated partial partitions Thm: Given G(V,E,w) and integers k,ρ ≥ 1, there exists a cover 𝒯 = 𝒯_{ρ,k} s.t.: 1. 𝒯 coarsens 𝒮_ρ(V); 2. Rad(𝒯) ≤ (2k-1)·ρ + k·s; 3. Δ_C(𝒯) ≤ 2k·n^{1/k}; 4. each of the Δ_C(𝒯) layers of partial partitions composing 𝒯 is s-separated.
76
Related graph representations Network decomposition: A partition 𝒮 is a (d,c)-decomposition of G(V,E) if: the radius of its clusters in G is Rad(𝒮) ≤ d, and the chromatic number of the cluster graph is χ(Γ(𝒮)) ≤ c
77
Example (figure): A (2,3)-decomposition: Rad(𝒮) ≤ 2, χ(Γ(𝒮)) ≤ 3
78
Decomposition algorithm Algorithm operates in iterations In each iteration i: - Invoke Procedure SepPart to construct a 2-separated partial partition for V At end of iteration: - Assign color i to all output clusters - Delete covered vertices from V - If V is not empty then start new iteration
79
Decomposition algorithm (cont) Main properties: 1. Uses Procedure SepPart instead of Part (i.e., guaranteed separation = 2, not 1) ⇒ all output clusters of a single iteration can be colored by a single color. 2. Each iteration applies only to the remaining nodes ⇒ clusters generated in different iterations are disjoint.
80
Analysis Thm: Given G(V,E,w) and k ≥ 1, there is a (k, k·n^{1/k})-decomposition. Proof: Note: the final collection is a partition (each collection generated by SepPart is a partial partition, and the vertices covered in iteration i are removed from V)
81
Analysis (cont) An iteration starting with 𝒮 produces an output of size Ω(|𝒮|^{1-1/k}) ⇒ the process continues for ≤ O(k·n^{1/k}) iterations ⇒ we end with O(k·n^{1/k}) colors, and each cluster has O(k) diameter. Picking k = log n: Corollary: Every n-vertex graph G has a (log n, log n)-decomposition.
82
Skeletal representations Spanner: connected subgraph spanning all nodes (Special case: spanning tree) Tree cover: collection of trees covering G
83
Skeletal representations Evaluation criteria Locality level: stretch factor Sparsity level: # edges As for clustered representations, locality and sparsity parameters go opposite ways: better sparsity ⇔ worse locality
84
Stretch Given a graph G(V,E,w) and a spanning subgraph G'(V,E'), the stretch factor of G' is Stretch(G') = max_{u,v∈V} { dist(u,v,G') / dist(u,v,G) } (figure: a graph G and a subgraph G' with Stretch(G') = 2); a brute-force sketch follows
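A brute-force sketch of evaluating this definition (helper names are assumptions; adjacency lists map each vertex to (neighbor, weight) pairs, and G' is assumed connected and spanning):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src; adj[u] = list of (v, weight)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def stretch(adj_G, adj_H):
    """Stretch(H) = max over pairs u,v of dist_H(u,v) / dist_G(u,v)."""
    worst = 1.0
    for u in adj_G:
        dG, dH = dijkstra(adj_G, u), dijkstra(adj_H, u)
        for v, d in dG.items():
            if v != u and d > 0:
                worst = max(worst, dH[v] / d)
    return worst
```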
85
Depth Def: Depth of v in tree T = its distance from the root: Depth_T(v) = dist(v, r_0, T). Depth(T) = max_v Depth_T(v) = the radius w.r.t. the root: Depth(T) = Rad(r_0, T)
86
Sparsity measures Def: Given a subgraph G'(V',E') of G(V,E,w): w(G') = weight of G' = ∑_{e∈E'} w(e); size of G' = # edges = |E'|
87
Spanning trees - basic types MST: minimum-weight spanning tree of G = a spanning tree T_M minimizing w(T_M). SPT: shortest paths tree of G w.r.t. a given root r_0 = a spanning tree T_S s.t. for every v ≠ r_0, the path from r_0 to v in the tree is the shortest possible, i.e., Stretch(T_S, r_0) = 1
88
Spanning trees - basic types BFS: breadth-first tree of G w.r.t. a given root r_0 = a spanning tree T_B s.t. for every v ≠ r_0, the path from r_0 to v in the tree is the shortest possible, measuring path length in # edges
89
Controlling tree degrees deg(v,G) = degree of v in G; Δ(G) = max degree in G. Tree Embedding Thm: For every rooted tree T and integer m ≥ 1, there is an embedded virtual tree S with the same node set and the same root (but a different edge set) s.t.: 1. Δ(S) ≤ 2m; 2. each edge of S corresponds to a path of length ≤ 2 in T; 3. Depth_S(v) ≤ (2·log_m Δ(T) - 1)·Depth_T(v) for every v
90
Proximity-preserving spanners Motivation: How good is a shortest paths tree as a spanner? T_S preserves distances in the graph w.r.t. the root r_0, i.e., achieves Stretch(T_S, r_0) = 1. However, it fails to preserve distances for vertex pairs not involving r_0 (i.e., to bound Stretch(T_S)). Q: Construct an example where two neighboring vertices in G are at distance 2·Depth(T) in the SPT
91
Proximity-preserving spanners k-Spanner: Given graph G(V,E, w ), the subgraph G'(V,E') is a k-spanner of G if Stretch(G') ≤ k Typical goal: Find sparse (small size, small weight) spanners with small stretch factor
92
Example - 2-spanner
94
Tree covers Basic notion: a tree T covering the ρ-neighborhood of v (figure: v, Γ_2(v), and a covering tree T)
95
Tree covers (cont) ρ-tree cover for a graph G = a tree cover for 𝒮_ρ(V) = a collection TC of trees in G s.t. for every v ∈ V there is a tree T ∈ TC (denoted home(v)) spanning the ρ-neighborhood of v. Depth(TC) = max_{T∈TC} Depth(T); Overlap(TC) = max_v # trees containing v
96
Tree covers Algorithm TreeCover(G,k,ρ) 1. Construct the ρ-neighborhood cover of G, 𝒮 = 𝒮_ρ(V). 2. Compute a coarsening cover 𝒯 for 𝒮 as in the Max-Deg-Cover Thm, with parameter k. 3. Select in each cluster R ∈ 𝒯 an SPT T(R) rooted at some center of R and spanning R. 4. Set TC(k,ρ) = { T(R) | R ∈ 𝒯 }
97
Tree covers (cont) Thm: For every graph G(V,E,w) and integers k,ρ ≥ 1, there is a ρ-tree cover TC = TC(k,ρ) with Depth(TC) ≤ (2k-1)·ρ and Overlap(TC) ≤ ⌈2k·n^{1/k}⌉
98
Tree covers (cont) Proof: 1. The TC built by Alg. TreeCover is a ρ-tree cover: Consider v ∈ V. 𝒯 coarsens 𝒮 ⇒ there is a cluster R ∈ 𝒯 s.t. Γ_ρ(v) ⊆ R ⇒ the tree T(R) ∈ TC covers the ρ-neighborhood Γ_ρ(v)
99
Tree covers (cont) 2. Bound on Depth(TC): follows from the radius bound on the clusters of the cover 𝒯, guaranteed by the Max-Deg-Cover Thm, as these trees are SPT's. 3. Bound on Overlap(TC): follows from the degree bound on 𝒯 (Max-Deg-Cover Thm), as |𝒮| = n
100
Tree covers (cont) Relying on this Theorem and the Tree Embedding Thm, and taking m = n^{1/k}: Corollary: For every graph G(V,E,w) and integers k,ρ ≥ 1, there is a (virtual) ρ-tree cover TC = TC(k,ρ) for G with Depth(TC) ≤ (2k-1)²·ρ, Overlap(TC) ≤ ⌈2k·n^{1/k}⌉, and Δ(T) ≤ 2·n^{1/k} for every tree T ∈ TC
101
Tree covers (cont) Motivating intuition: a tree cover TC constructed for a given cluster-based cover 𝒯 serves as a way to “materialize” or “implement” 𝒯 efficiently. (In fact, applications employing covers actually use the corresponding tree cover)
102
Sparse spanners for unweighted graphs Basic lemma: For an unweighted graph G(V,E), a subgraph G' is a k-spanner of G ⇔ for every (u,v) ∈ E, dist(u,v,G') ≤ k. (No need to look at the stretch of every pair u,v; it suffices to consider the stretch of edges)
103
Sparse spanners for unweighted graphs Algorithm UnweightedSpan(G,k) 1. Set the initial partition 𝒮 = 𝒮_0(V) = { {v} | v ∈ V } 2. Build a coarsening 𝒯 using Alg. BasicPart
104
Algorithm UnweightedSpan(G,k) - cont 3. For every cluster T_i construct an SPT rooted at some center c_i of T_i 4. Add all edges of these trees to the spanner G'
105
Algorithm UnweightedSpan(G,k) - cont 5. In addition, for every pair of neighboring clusters T_i, T_j: select a single “intercluster” edge e_{ij} and add it to G'. (A sketch combining steps 1-5 follows.)
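A sequential sketch combining the steps above, reusing the `basic_part` sketch given earlier (an arbitrary cluster vertex stands in for the center, and BFS trees stand in for SPTs, which coincide in the unweighted case):

```python
from collections import deque

def unweighted_span(adj, k):
    """Sketch of Algorithm UnweightedSpan: BFS tree edges inside every
    cluster plus one representative edge per pair of adjacent clusters."""
    clusters = basic_part(adj, k)
    cluster_of = {v: i for i, S in enumerate(clusters) for v in S}
    spanner = set()
    for S in clusters:                            # intra-cluster trees
        root = next(iter(S))
        seen, queue = {root}, deque([root])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w in S and w not in seen:
                    seen.add(w)
                    spanner.add((u, w))
                    queue.append(w)
    inter = {}                                    # one edge per cluster pair
    for u in adj:
        for w in adj[u]:
            i, j = cluster_of[u], cluster_of[w]
            if i != j:
                inter[(min(i, j), max(i, j))] = (u, w)
    spanner.update(inter.values())
    return spanner
```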
106
Analysis Thm: For every unweighted graph G and k ≥ 1, there is an O(k)-spanner with O(n^{1+1/k}) edges
107
Analysis (cont) (a) Estimating # edges in the spanner: 1. 𝒯 is a partition of V ⇒ # edges of the trees built for the clusters ≤ n. 2. Av-Deg-Partition Thm ⇒ # intercluster edges ≤ n^{1+1/k}
108
Analysis (cont) (b) Bounding the stretch: Consider an edge e=(u,w) in G (recall: it is enough to look at edges). If e was selected into the spanner ⇒ stretch = 1. So suppose e is not in the spanner.
109
Analysis (cont) Case 1: the endpoints u,w belong to the same cluster T_i ⇒ the length of the path from u to w through the center c_i is ≤ 2r, where r ≤ k-1 bounds the cluster radius
110
Analysis (cont) Case 2: the endpoints belong to different clusters, u ∈ T_i, w ∈ T_j ⇒ these clusters are connected by an inter-cluster edge e_{ij}
111
Analysis (cont) There is a u-w path going from u to c_i (≤ r steps), from c_i through e_{ij} to c_j (≤ r+1+r steps), and from c_j to w (≤ r steps) ⇒ total length ≤ 4r+1 ≤ 4k-3
112
Stretch factor analysis Fixing k=log n we get: Corollary: For every unweighted graph G(V,E) there is an O(log n)-spanner with O(n) edges.
113
Lower bounds Def: Girth(G) = # edges of the shortest cycle in G (figure: graphs of girth 3, girth 4, and girth ∞)
114
Lower bounds Lemma: For every k ≥ 1, for every unweighted G(V,E) with Girth(G) ≥ k+2, the only k-spanner of G is G itself (no edge can be erased from G)
115
Lower bounds Proof: Suppose, towards contradiction, that G has a k-spanner G' in which the edge e=(u,v) ∈ E is omitted ⇒ G' has an alternative path P of length ≤ k from u to v, with e ∉ P ⇒ P ∪ {e} = a cycle of length ≤ k+1 < Girth(G); contradiction.
116
Size and girth Lemma: For every r ≥ 1, every n-vertex, m-edge graph G(V,E) with Girth(G) ≥ r has m ≤ n^{1+2/(r-2)} + n. For every r ≥ 3, there are n-vertex, m-edge graphs G(V,E) with Girth(G) ≥ r and m ≥ n^{1+1/r} / 4
117
Lower bounds (cont) Thm: For every k≥3, there are graphs G(V,E) for which every (k-2)-spanner requires Ω(n^{1+1/k}) edges
118
Lower bounds (cont) Corollary: For every k≥3, there is an unweighted G(V,E) s.t. (a) for every cover 𝒯 coarsening 𝒮_1(V), if Rad(𝒯) ≤ k then Av(𝒯) = Ω(n^{1/k}); (b) for every partition 𝒯 coarsening 𝒮_0(V), if Rad(𝒯) ≤ k then Av_c(𝒯) = Ω(n^{1/k})
119
Lower bounds (cont) Similar bounds are implied for the average-degree partition problem and for all maximum-degree problems. The radius vs. chromatic number tradeoff for network decomposition presented earlier is also optimal within a factor of k. A lower bound on the radius-degree tradeoff for ρ-regional matchings on arbitrary graphs follows similarly
120
Examples Restricted graph families: Behave better Graph classes with O(n) edges have (trivial) optimal spanner (includes common topologies such as bounded-degree and planar graphs - rings, meshes, trees, butterflies, cube-connected cycles,…) General picture: larger k ⇔ sparser spanner
121
Spanners for weighted graphs Algorithm WeightedSpan(G,k) 1. For every 1 ≤ i ≤ logD: construct a 2^i-tree cover TC(k,2^i) for G using Alg. TreeCover 2. Take all edges of the tree covers into the spanner G'(V,E')
122
Spanners for weighted graphs (cont) Lemma: The spanner G' built by Alg. WeightedSpan(G,k) has (1) Stretch(G') ≤ 2k-1, (2) O(logD·k·n^{1+1/k}) edges
123
Greedy construction Algorithm GreedySpan(G,k) /* a generalization of Kruskal's MST algorithm */ 1. Sort E by nondecreasing edge weight, getting E = {e_1,...,e_m} (sorted: w(e_i) ≤ w(e_{i+1})) 2. Set E' = ∅ (the spanner edges)
124
Greedy construction 3. Scan the edges one by one; for each e_j=(u,v) do: compute P(u,v) = the shortest path from u to v in G'(V,E'); if w(P(u,v)) > k·w(e_j) (the alternative path is too long) then E' ← E' ∪ {e_j} (must include e_j in the spanner) 4. Output G'(V,E'). (A sketch follows.)
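A sequential sketch of GreedySpan, reusing the `dijkstra` helper from the stretch example above (the function name and edge-list format are assumptions):

```python
def greedy_span(vertices, edges, k):
    """Sketch of Algorithm GreedySpan: keep edge (u,v,w) iff the spanner
    built so far has no u-v path of weight <= k*w."""
    adj = {v: [] for v in vertices}
    E_prime = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):   # nondecreasing weight
        if dijkstra(adj, u).get(v, float("inf")) > k * w:
            E_prime.append((u, v, w))            # alternative path too long
            adj[u].append((v, w))
            adj[v].append((u, w))
    return E_prime
```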
125
Analysis Lemma: The spanner G' built by Algorithm GreedySpan(G,k) has Stretch(G') ≤ k. Proof: Consider two vertices x,y of G, and let P_{x,y} = (e_1,...,e_q) = a shortest x-y path in G
126
Analysis (cont) Consider an edge e_j=(u,v) along P_{x,y}. If e_j is not included in G' ⇒ when e_j was examined by the algorithm, E' already contained a u-v path P_j = P(u,v) of length ≤ k·w(e_j)
127
Analysis (cont) This path exists in the final G'. To mimic the path P_{x,y} in G': replace each “missing” edge e_j (not taken into G') by its substitute P_j ⇒ the resulting path has total length ≤ k·w(P_{x,y})
128
Analysis (cont) Lemma: The spanner has Girth(G') > k+1. Proof: Consider a cycle C in G'. Let e_j=(u,v) be the last edge of C added by the algorithm. When the algorithm examined e_j, the spanner E' already contained all the other C edges ⇒ the shortest u-v path P_j constructed by the algorithm satisfies w(P_j) ≤ w(C-{e_j})
129
Analysis (cont) e_j added to E' ⇒ w(P_j) > k·w(e_j) (by the selection rule) ⇒ w(C) > (k+1)·w(e_j) (as w(C) ≥ w(P_j) + w(e_j)). e_j = the heaviest edge in C ⇒ w(C) ≤ |C|·w(e_j) ⇒ |C| > k+1
130
Analysis (cont) Corollary: |E'| ≤ n^{1+2/k} + n. Thm: For every weighted graph G(V,E,w) and k ≥ 1, there is a (2k+1)-spanner G'(V,E') s.t. |E'| < n·⌈n^{1/k}⌉. Recall: for every r ≥ 1, every graph G(V,E) with Girth(G) ≥ r has |E| ≤ n^{1+2/(r-2)} + n
131
Shallow Light Trees Goal: Find a spanning tree T near-optimal in both depth and weight. Candidate 1: SPT. Problem: ∃ G s.t. w(SPT) = Ω(n·w(MST)) (figure: an example G whose SPT is heavy while its MST is light)
132
Shallow Light Trees (cont) Candidate 2: MST. Problem: ∃ G s.t. Depth(MST) = Ω(n·Depth(SPT)) (figure: an example G whose MST is deep while its SPT is shallow)
133
Shallow Light Trees (cont) Shallow-light tree (SLT) for a graph G(V,E,w) and root r_0: a spanning tree T satisfying both Stretch(T, r_0) = O(1) and w(T) / w(MST) = O(1). Thm: Shallow-light trees exist for every graph G and root r_0
134
Light, sparse, low-stretch spanners Algorithm GreedySpan guarantees: Thm: For every graph G(V,E,w) and integer k≥1, there is a spanner G'(V,E') for G with 1. Stretch(G') ≤ 2k+1; 2. |E'| < n·⌈n^{1/k}⌉; 3. w(G') = w(MST(G))·O(n^{1/k})
135
Lower bound Thm: For every k ≥ 3, there are graphs G(V,E,w) s.t. every spanner G'(V,E') for G with Stretch(G') ≤ k-2 requires |E'| = Ω(n^{1+1/k}) and w(G') = Ω(w(MST(G))·n^{1/k}). Proof: By the bound for unweighted graphs
136
Part 3: Constructions and Applications Distributed construction of basic partition Fast decompositions Exploiting topological knowledge: broadcast revisited Local coordination: synchronizers revisited Hierarchical example: routing revisited Advanced symmetry breaking: MIS revisited
137
Basic partition construction algorithm Simple distributed implementation for Algorithm BasicPart Single “thread” of computation (single locus of activity at any given moment)
138
Basic partition construction algorithm Components ClusterCons: procedure for constructing a cluster around a chosen center v. NextCtr: procedure for selecting the next center v around which to grow a cluster. RepEdge: procedure for selecting a representative inter-cluster edge between any two adjacent clusters
139
Analysis Thm: Distributed Algorithm BasicPart requires Time = O(nk) and Comm = O(n²)
140
Efficient cover construction algorithms Goal: Fast distributed algorithm for coarsening a neighborhood cover Known: Randomized algorithms for constructing low (average or maximum) degree cover of G, guaranteeing bounds on weak cluster diameter
141
Efficient decompositions Goal: fast distributed algorithms for constructing a network decomposition. Basic tool: s-separated, r-ruling set ((s,r)-set), a combination of an independent and a dominating set: W = {w_1,...,w_m} ⊆ V in G s.t. dist(w_i,w_j) ≥ s for 1≤i<j≤m, and for every v ∈ V there is some 1≤i≤m s.t. dist(w_i,v) ≤ r
142
Efficient decompositions (s,r)-partition (associated with an (s,r)-set W = {w_1,...,w_m}): a partition of G, 𝒮(W) = {S_1,...,S_m}, s.t. for 1≤i≤m: w_i ∈ S_i and Rad(w_i, G(S_i)) ≤ r (figure: centers ≥ s apart, clusters of radius ≤ r)
143
Distributed construction Using an efficient distributed construction for (3,2)-sets and (3,2)-partitions and a recursive coloring algorithm, one can get: Thm: There is a deterministic distributed algorithm for constructing a (2^λ, 2^λ)-decomposition of a given n-vertex graph in time O(2^λ), for λ = √(c·log n), for some constant c>0
144
Exploiting topological knowledge: Broadcast revisited Delay measure: When broadcasting from a source s, the message delivery to node v suffers delay ρ if it reaches v after ρ·dist(s,v) time. For a broadcast algorithm B: Delay(B) = max_v {Delay(v,B)}
145
Broadcast on a subgraph Lemma: Flood(G') broadcast on a subgraph G' costs Message(Flood(G')) = |E(G')|, Comm(Flood(G')) = w(G'), Delay(Flood(G')) = Stretch(G') (in both the synchronous and asynchronous models)
146
Broadcast (cont) Selecting an appropriate subgraph: For a spanning tree T: Message(Flood(T)) = n-1 (optimal), Comm(Flood(T)) = w(T), Delay(Flood(T)) = Stretch(T, r_0). Goal: Lower both w(T) and Stretch(T, r_0)
147
Broadcast (cont) Using a light, low-stretch tree (SLT): Lemma: For every graph G and source v, there is a spanning tree SLT_v s.t. broadcast by Flood(SLT_v) costs: Message(Flood(SLT_v)) = n-1, Comm(Flood(SLT_v)) = O(w(MST)), Delay(Flood(SLT_v)) = O(1)
148
Broadcasting on a spanner Disadvantage of SLT broadcast: Tree efficient for broadcasting from one source is poor for another, w.r.t Delay Solution 1: Maintain separate tree for every source (heavy memory / update costs, involved control)
149
Broadcasting on a spanner Solution 2: Flood(G') broadcast on a spanner G'. Recall: For every graph G(V,E,w) and integer k≥1, there is a spanner G'(V,E') for G with 1. Stretch(G') ≤ 2k+1; 2. |E'| ≤ n^{1+1/k}; 3. w(G') = w(MST(G))·O(n^{1/k})
150
Broadcasting on a spanner (cont) Setting k = log n: Thm: For every graph G, there is a spanner G' s.t. Algorithm Flood(G') has complexities Message(Flood(G')) = O(n·log n·logD), Comm(Flood(G')) = O(log n·logD·w(MST)), Delay(Flood(G')) = O(log n) (optimal up to polylog factors in all 3 measures)
151
Topology knowledge and broadcast Assumption: No predefined structures exist in G (broadcast is performed “from scratch”). Focus on message complexity. Extreme models of topological knowledge: KT_∞ model: Full knowledge: vertices have full topological knowledge
152
Topology knowledge and broadcast KT_∞ model: full topological knowledge ⇒ broadcast with a minimal # of messages, Message = O(n): 1. Each v locally constructs the same tree T, sending no messages 2. Use the tree broadcast algorithm Flood(T)
153
Topology knowledge and broadcast KT_0 model: “Clean” network: vertices know nothing about the topology. KT_1 model: Neighbor knowledge: vertices know their own and their neighbors' IDs, nothing else
154
Topology knowledge & msg complexity Lemma: In the KT_0 model, every broadcast algorithm must send ≥ 1 message over every edge of G. Proof: Suppose there is an algorithm disobeying the claim: consider a graph G and an edge e=(u,w) s.t. the algorithm broadcasts on G without sending any messages over e
155
Topology knowledge & msg complexity Then G can be replaced by G' as follows (see figure):
156
Clean network model u and w cannot distinguish between the two topologies G and G': no msgs sent on e ⇒ no msgs sent on e_1, e_2
157
Clean network model In executing the algorithm over G', u and w fail to forward the message to u' and w'
158
Clean network model ⇒ u' and w' never receive the message; contradiction
159
Clean network model Thm: Every broadcast protocol Π for the KT_0 model has complexity Message(Π) = Ω(|E|)
160
Msg complexity of broadcast in KT_1 Note: In KT_1, the previous intuition fails! Nodes know the IDs of their neighbors ⇒ not all edges must be used
161
Broadcast in KT 1 (cont) Traveler algorithm “Traveler” (token) performs DFS traversal on G Traveler carries a list L of vertices visited so far.
162
Broadcast in KT_1 (cont) To pick the next neighbor to visit after v: compare L with the list of v's neighbors, and make the next choice only from the neighbors not in L. (If all of v's neighbors were already visited, backtrack from v on the edge to its parent.) (figure: a traversal whose list L grows from {0} to {0,1,3,4,5})
163
Broadcast in KT_1 (cont) Note: The traveler's “forward” steps are restricted to the edges of a DFS tree spanning G; non-tree edges are not traversed ⇒ no need to send messages on every edge!
164
Broadcast in KT_1 (cont) Q: Does the traveler algorithm disprove the Ω(|E|) lower bound on messages? Observe: # basic (O(log n)-bit) messages sent by the algorithm = Ω(n²) >> 2n (the lists carried by the traveler contain up to O(n) vertex IDs) ⇒ traversing an edge requires O(n) basic messages on average
165
Ω(|E|) lower bound for KT_1 Idea: To avoid traversing the edge e=(v,u), the traveler algorithm must inform, say, v, that u already got the message. This can only be done by sending v some message containing ID(u), which is as expensive as traversing e itself… Intuitively, the edge e was “utilized,” just as if a message actually crossed it
166
Lower bound (cont) Def: An edge e=(u,v) ∈ E is utilized during a run of the algorithm on G if one of the following events holds: 1. A message is sent on e 2. u either sends or receives a message containing ID(v) 3. v either sends or receives a message containing ID(u)
167
Lower bound (cont) m = # utilized edges in a run of the protocol on G; M = # (basic) messages sent during the run. Lemma: M = Ω(m). Proof: Consider a message sent over e=(u,v). The message contains O(1) node IDs z_1,...,z_B. Each z_i utilizes ≤ 2 edges, (u,z_i) and (v,z_i) (if they exist). Also, e itself becomes utilized.
168
Lower bound (cont) ⇒ To prove a lower bound on messages, it suffices to prove a lower bound on # edges utilized by algorithm Lemma: Every algorithm for broadcast under the KT 1 model must utilize every edge of G Thm: Every broadcast protocol for the KT 1 model has complexity Message( ) = (|E|)
169
Lower bound (cont) Observation: Thm no longer holds if, in addition to arbitrary computations, we allow protocols with time unbounded in network size. Once such behavior is allowed, one may encode an unbounded number of ID's by the choice of transmission round, and hence implement, say, the “traveler” algorithm. (This relates only to the synchronous model; In asynch model such encoding is impossible!)
170
Hierarchy of partial topological knowledge KT_k model: Known topology to radius k: every vertex knows the topology of the neighborhood of radius k around it, G(Γ_k(v)). Example: In KT_2, v knows the topology of its 2-neighborhood
171
Hierarchy of partial topological knowledge KT_k model: Known topology to radius k: every vertex knows the topology of the subgraph of radius k around it, G(Γ_k(v)). Information-communication tradeoff: for every fixed k ≥ 1, the # of basic messages required for broadcast in the KT_k model = Θ(min{|E|, n^{1+Θ(1)/k}})
172
Hierarchy of partial topological knowledge Lower bound proof: a variant of the KT_1 case. Upper bound idea: v knows all edges at distance ≤ k from it ⇒ v can detect all short cycles (length ≤ 2k) going through it ⇒ it is possible to disconnect all short cycles locally, by deleting one edge in each cycle.
173
KT k model Algorithm k-Flood Assumption: There is some (locally computable) assignment of distinct weights to edges
174
KT_k model Algorithm k-Flood Define the subgraph G*(V,E*) of G: 1. mark the heaviest edge in each short cycle “unusable”; 2. include precisely all unmarked edges in E*. (Only e's endpoints need to know whether e is usable; given partial topological knowledge, the edge deletions are done locally, sending no messages)
175
KT_k model Algorithm k-Flood (cont) Perform broadcast by Alg. Flood(G*) on G* (i.e., whenever v receives the message for the first time, it sends it over all incident usable edges e ∈ E*)
176
Analysis Lemma: G connected ⇒ G* connected too. Consequence of the marking process defining G*: all short cycles are broken ⇒ Lemma: Girth(G*) ≥ 2k+1
177
Analysis Recall: For every r ≥ 1, a graph G(V,E) with Girth(G) ≥ r has |E| ≤ n^{1+2/(r-2)} + n. Corollary: |E*| = O(n^{1+c/k}) for a constant c>0. Thm: For every G(V,E) and k≥1, Algorithm k-Flood performs broadcast in the KT_k model with Message(k-Flood) = O(min{|E|, n^{1+c/k}}) (fixed c>0)
178
Synchronizers revisited Recall: Synchronizers enable transforming an algorithm for synchronous networks into an algorithm for asynchronous networks. They operate in 2 phases per pulse. Phase A (of pulse p): each processor learns (in finite time) that all messages it sent during pulse p have arrived (it is safe). Phase B (of pulse p): each processor learns that all its neighbors are safe w.r.t. pulse p
179
Learning neighbor safety (figure: safe / ready states)
180
Synchronizer costs Goal: A synchronizer capturing reasonable middle points on the time-communication tradeoff scale. (Recall: synchronizer α has C_pulse = O(|E|), T_pulse = O(1); synchronizer β has C_pulse = O(n), T_pulse = O(Diam))
181
Synchronizer γ Assumption: Given a low-degree partition 𝒮: Rad(𝒮) ≤ k-1, # inter-cluster edges in Γ(𝒮) ≤ n^{1+1/k}
182
Synchronizer γ (cont) For each cluster in 𝒮, build a rooted spanning tree. In addition, between any two neighboring clusters designate a synchronization link (figure: spanning trees and synchronization links)
183
Handling safety information (in Phase B) Step 1: For every cluster separately, apply synchronizer β. (By the end of the step, every node knows that every node in its cluster is safe; messages: my_subtree_safe, cluster_safe)
184
Handling safety information (in Phase B) Step 2: Every node incident to a synchronization link sends a message to the other cluster, saying “my cluster is safe” (message: my_cluster_safe)
185
Handling safety information (in Phase B) Step 3: Repeat step 1, but the convergecast performed in each cluster carries different information: whenever v learns that all clusters neighboring its subtree are safe, it reports this to its parent (message: all_clusters_adjacent_to_my_subtree_are_safe)
186
Handling safety information (in Phase B) Step 4: When the root learns that all neighboring clusters are safe, it broadcasts “start new pulse” on the tree (message: all_neighboring_clusters_are_safe). (By the end of the step, every node knows that all its neighbors are safe)
187
Analysis Claim: Synchronizer γ is correct. Claim: 1. C_pulse(γ) = O(n^{1+1/k}) 2. T_pulse(γ) = O(k)
188
Analysis (cont) Proof: Time to implement one pulse: ≤ 2 broadcast/convergecast rounds in the clusters (+ 1 message-exchange step among border vertices in neighboring clusters) ⇒ T_pulse(γ) ≤ 4·Rad(𝒮) + 1 = O(k)
189
Complexity Messages: The broadcast/convergecast rounds, performed separately in each cluster, cost O(n) messages in total (clusters are disjoint). The communication step among neighboring clusters requires n·Av_c(𝒮) = O(n^{1+1/k}) messages ⇒ C_pulse(γ) = O(n^{1+1/k})
190
Synchronizer δ Assumption: Given a sparse k-spanner G'(V,E')
191
Synchronizer δ (cont) Handling safety information (in Phase B): When v learns it is safe for pulse p: For k rounds do: 1. Send “safe” to all spanner neighbors 2. Wait to hear the same from these neighbors
192
Synchronizer δ Lemma: For every 1≤i≤k, once v completes i rounds, every node u at distance dist(u,v,G') ≤ i from v in the spanner G' is safe
193
Analysis Proof: By induction on i. For i=0: Immediate. For i+1: Consider the time v finishes (i+1)st round.
194
Analysis v received i+1 “safe” messages from its neighbors in G'. These neighbors each sent their (i+1)st message only after finishing their i'th round
195
Analysis By the inductive hypothesis, for every such neighbor u, every w at distance ≤ i from u in G' is safe ⇒ every w at distance ≤ i+1 from v in G' is safe too
196
Analysis (cont) Corollary: When v finishes k rounds, each neighbor of v in G is safe (v is ready for pulse p+1). Proof: By the lemma, at that time every processor u at distance ≤ k from v in G' is safe. By the definition of a k-spanner, every neighbor of v in G is at distance ≤ k from v in G' ⇒ every neighbor is safe.
197
Analysis (cont) Lemma: If G has a k-spanner with m edges, then it has a synchronizer δ with T_pulse(δ) = O(k), C_pulse(δ) = O(k·m)
198
Summary On a general n-vertex graph, for parameter k≥1: α: C_pulse = O(|E|), T_pulse = O(1); β: C_pulse = O(n), T_pulse = O(Diam); γ: C_pulse = O(n^{1+1/k}), T_pulse = O(k); δ: C_pulse = O(k·n^{1+1/k}), T_pulse = O(k)
199
Compact routing revisited Tradeoff between stretch and space: Any routing scheme for general n-vertex networks achieving stretch factor k≥1 must use Ω(n^{1+1/(2k+4)}) bits of routing information overall. (The lower bound holds for unweighted networks as well, and concerns total memory requirements)
200
Interval tree routing Goal: Given a tree T, design a routing scheme based on interval labeling. Idea: Label each v by an integer interval Int(v) s.t. for every two vertices u,v: Int(v) ⊆ Int(u) ⇔ v is a descendant of u in T
201
Interval labeling Algorithm IntLab on tree T 1. Perform a depth-first (DFS) tour of T, starting at the root; assign each u ∈ T a depth-first number DFS(u)
202
Interval labeling (cont) Algorithm IntLab on tree T 2. Label node u by the interval [DFS(u), DFS(w)], where w = the last descendant of u visited by the DFS (labels contain ≤ ⌈2·log n⌉ bits). A sketch follows.
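A minimal sketch of IntLab (representing the rooted tree as a child-list dict is an assumption):

```python
def int_lab(children, root):
    """Int(u) = (DFS(u), DFS(last descendant of u)); Int(v) nests inside
    Int(u) exactly when v is a descendant of u."""
    labels = {}
    counter = [0]

    def dfs(u):
        counter[0] += 1
        first = counter[0]                        # DFS number of u
        last = first
        for c in children.get(u, []):
            last = dfs(c)                         # last descendant's number
        labels[u] = (first, last)
        return last

    dfs(root)
    return labels
```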
203
Interval tree routing Data structures: Vertex u stores its own label Int(u) and the labels of its children in T Forwarding protocol: Routes along unique path
204
Interval tree routing Lemma: For every tree T=(V,E,w), the scheme ITR(T) has Dilation(ITR,T) = 1, uses O(Δ(T)·log n) bits per vertex, and O(n·log n) memory in total
205
Interval tree routing (cont) Forwarding protocol: Routing M from u to v: At intermediate w along route: Compare Int(v) with Int(w) Possibilities: 1.Int(w) = Int(v) (w = v): receive M.
206
Interval tree routing (cont) 2. Int(w) ⊂ Int(v) (w is a descendant of v): forward M upwards to the parent
207
Interval tree routing (cont) 3. Disjoint intervals (v and w are in different subtrees): forward M upwards to the parent
208
Interval tree routing (cont) 4. Int(v) ⊂ Int(w) (v is a descendant of w): examine the intervals of w's children, find the unique child w' s.t. Int(v) ⊆ Int(w'), and forward M to w'. (A sketch of the forwarding decision follows.)
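The four cases combine into a single local forwarding decision; a sketch (names assumed, using the labels produced by `int_lab` above):

```python
def itr_next_hop(w, dest_label, labels, parent, children):
    """Return the neighbor to which node w forwards M, or None on arrival."""
    def contains(outer, inner):
        return outer[0] <= inner[0] and inner[1] <= outer[1]

    here = labels[w]
    if here == dest_label:
        return None                               # case 1: w = v, receive M
    if contains(here, dest_label):                # case 4: v below w
        for c in children.get(w, []):
            if contains(labels[c], dest_label):
                return c                          # unique child containing v
    return parent[w]                              # cases 2-3: forward upward
```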
209
ITR for general networks 1. Construct a shortest paths tree T for G, 2. Apply ITR to T. Total memory requirements = O(n·log n) bits. Problems: the stretch may be as high as Rad(G), and the maximum memory per vertex depends on the maximum degree of T
210
Overcoming high max degree problem Recall: For every rooted tree T and integer m ≥ 1, there is an embedded virtual tree S with the same node set and the same root (but a different edge set), s.t. 1. Δ(S) ≤ 2m; 2. each edge of S corresponds to a path of length ≤ 2 in T; 3. Depth_S(v) ≤ (2·log_m Δ(T) - 1)·Depth_T(v) for every v
211
Overcoming high max degree problem Setting m = n^{1/k}, embed in T a virtual tree T' with Δ(T') < 2·n^{1/k} and Depth(T') < (2k-1)·Rad(G)
212
Overcoming high max degree problem Lemma: For every G(V,E,w), the resulting ITR scheme guarantees message delivery in G with communication O(Rad(G)) and uses O(n·log n) memory. Problem: the stretch may be as high as Rad(G)
213
A regional (C,ρ)-routing scheme For every u,v: if dist(u,v) ≤ ρ, the scheme succeeds in delivering M from u to v; else the routing fails and M returns to u. Communication cost ≤ C.
214
A regional (C,ρ)-routing scheme Recall: For a graph G(V,E,w) and integers k,ρ ≥ 1, there is a ρ-tree cover TC = TC(k,ρ) with Depth(TC) ≤ (2k-1)·ρ, Overlap(TC) ≤ 2k·n^{1/k}, and sum of tree sizes = O(k·n^{1+1/k})
215
Data structures 1. Construct the tree cover TC(k,ρ) 2. Assign each tree T in TC(k,ρ) a distinct Id(T) 3. Set up an interval tree routing component ITR(T) on each tree T ∈ TC(k,ρ)
216
Data structures Recall: Every v ∈ V has a home tree T = home(v) in TC(k,ρ), containing its entire ρ-neighborhood. Scheme RS_{k,ρ}: routing label for v: the pair (Id(T), Int_T(v)), where Id(T) = the ID of v's home tree and Int_T(v) = v's routing label in ITR(T)
217
Data structures Forwarding protocol: Routing M from u to v with label (Id(T), Int_T(v)): examine whether u belongs to T. If u is not in T: detect an “unknown destination” failure and terminate the routing procedure. If u is in T: send M using the ITR(T) component
218
Analysis Lemma: For every graph G and integers k,ρ ≥ 1, the scheme RS_{k,ρ} is a regional (O(k·ρ), ρ)-routing scheme and it uses O(k·n^{1+1/k}·log n) memory
219
Analysis (cont) Proof: Stretch: Suppose dist(u,v) ≤ ρ for some u,v. By definition, v ∈ Γ_ρ(u). Let T = the home tree of u. Γ_ρ(u) ⊆ V(T) ⇒ v ∈ T ⇒ ITR(T) succeeds. Also, path length = O(Depth(T)) = O(k·ρ)
220
Analysis (cont) Memory: Each v stores O(Δ(T(C))·log n) bits for each cluster C to which it belongs, where T(C) = the spanning tree constructed for C ⇒ O(k·n^{1+1/k}·log n) memory in total
221
Hierarchical routing scheme RS_k Data structures: For 1 ≤ i ≤ logD: construct a regional (O(k·ρ_i), ρ_i)-routing scheme R_i = RS_{k,ρ_i} for ρ_i = 2^i. Each v belongs to all the regional schemes R_i (it has a home tree home_i(v) in each R_i and a routing label at each level, and stores all the info required for each scheme)
222
Hierarchical routing scheme RS_k Routing label = the concatenation of the regional labels. Forwarding protocol: Routing M from u to v: 1. Identify the lowest-level usable regional scheme R_i (u first checks whether it belongs to the tree home_1(v); if not, it checks the second level, etc.) 2. Forward M to v on the ITR(home_i(v)) component of the regional scheme R_i
223
Analysis Lemma: Dilation(RS_k) = O(k). Proof: Suppose u sends M to v. Let d = dist(u,v) and j = ⌈log d⌉ (so 2^{j-1} < d ≤ 2^j). Let i = the lowest level s.t. u belongs to v's home tree
224
Analysis u must belong to home_j(v) ⇒ the regional scheme R_j is usable (if no lower level was). (Note: the highest-level scheme R_logD always succeeds) ⇒ Comm(RS_k,u,v) ≤ ∑_{i=1}^{j} O(k·2^i) ≤ O(k·2^{j+1}) ≤ O(k)·dist(u,v)
225
Analysis (cont) Thm: For every graph G and integer k≥1, the hierarchical routing scheme RS_k has Dilation(RS_k) = O(k) and Mem(RS_k) = O(k·n^{1+1/k}·log n·logD)
226
Analysis (cont) Proof: The memory required by the hierarchical scheme = logD terms, each bounded by O(k·n^{1+1/k}·log n) ⇒ total memory = O(k·n^{1+1/k}·log n·logD) bits
227
Deterministic decomposition-based MIS Assumption: given a (d,c)-decomposition for G, plus a coloring of the clusters in the cluster graph. MIS computation (c phases): Phase i computes an MIS among the vertices belonging to clusters colored i. (These clusters are non-adjacent, so one may compute an MIS for each independently, in parallel, using a PRAM-based distributed algorithm, in time O(d·log² n).)
228
Deterministic MIS (cont) Note: A vertex joining the MIS must mark all its neighbors as excluded from the MIS, including those of other colors ⇒ not all occupants of clusters colored i participate in phase i; only those not excluded in earlier phases do
229
Deterministic MIS (cont) Procedure DecompToMIS(d,c), code for v: For phase i = 1 through c do: /* each phase consists of O(d·log² n) rounds */ 1. If v's cluster is colored i then do: a. If v has not decided yet (its decision variable x_v = -1) then compute an MIS on the cluster using the PRAM-based algorithm b. If v joined the MIS then inform all neighbors 2. Else, if a neighbor joined the MIS, then decide x_v ← 0
230
Analysis # phases = O(c) ⇒ Time = O(c·d·log² n). Lemma: There is a deterministic distributed algorithm that, given a colored (d,c)-decomposition for G, computes an MIS for G in time O(d·c·log² n). Recall: For every graph G and k ≥ 1, there is a (k, k·n^{1/k})-decomposition
231
Analysis (cont) Taking k = log n, we get: Corollary: Given a colored (log n, log n)-decomposition for G, there is a deterministic distributed MIS algorithm with time O(polylog n). Recall: There is a deterministic algorithm for computing a decomposition in time O(2^λ) for λ = c·√(log n), constant c>0. Corollary: There is a deterministic distributed MIS algorithm with time O(2^{√(c·log n)})