
1 Locality Sensitive Distributed Computing Exercise Set 2 David Peleg Weizmann Institute

2 Basic partition construction algorithm Simple distributed implementation of Algorithm BasicPart Single “thread” of computation (a single locus of activity at any given moment)

3 Basic partition construction algorithm Components ClusterCons : Procedure for constructing a cluster around a chosen center v NextCtr : Procedure for selecting the next center v around which to grow a cluster RepEdge : Procedure for selecting a representative inter-cluster edge between any two adjacent clusters

4 Cluster construction procedure ClusterCons Goal: Invoked at center v, construct cluster and BFS tree (rooted at v) spanning it Tool: Variant of Dijkstra's algorithm.

5 Recall: Dijkstra’s BFS algorithm [Figure: phase p+1 of the BFS construction]

6 Main changes to Algorithm DistDijk 1. Ignoring covered vertices: the global BFS algorithm sends exploration msgs to all neighbors except those known to be in the tree; the new variant also ignores vertices known to belong to previously constructed clusters 2. Bounding depth: the BFS tree is grown to a limited depth, adding new layers tentatively, based on the halting condition (|Γ(S)| < |S|·n^{1/k})
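A minimal Python sketch of this halting test (sequential, with illustrative names; the slide specifies only the condition itself):

```python
def halt_cluster_growth(gamma_S: int, S: int, n: int, k: int) -> bool:
    """Halting condition from the slide: stop adding layers once
    |Gamma(S)| < |S| * n^(1/k), i.e. the neighborhood no longer grows
    the cluster by the required factor."""
    return gamma_S < S * n ** (1.0 / k)
```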

7 Distributed Implementation Before deciding to expand tree T by adding the newly discovered layer L: count the # vertices in L by a convergecast process: Leaf w ∈ T: set Z_w = # new children in L Internal vertex: add and upcast counts.

8 Distributed Implementation Root: compare the final count Z_v to the total # vertices in T (known from the previous phase).
- If the ratio is ≥ n^{1/k}, then broadcast the next Pulse msg (confirm new layer and start next phase)
- Otherwise, broadcast the message Reject (reject new layer, complete current cluster)
The final broadcast step has 2 more goals:
- mark the cluster by a unique name (e.g., ID of root),
- inform all vertices of the new cluster name
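A sequential Python sketch simulating this convergecast and the root's decision (the tree is given as a child-list dict; "ratio" is read as (|T| + Z_v)/|T|, matching the earlier halting condition; all names are illustrative):

```python
def layer_count(children, new_children, v):
    """Convergecast of the Z_w values: a leaf reports its # of new
    children in layer L; an internal vertex adds its subtree counts
    and upcasts the sum."""
    return new_children.get(v, 0) + sum(
        layer_count(children, new_children, c) for c in children.get(v, []))

def root_decision(z_v, t_size, n, k):
    """Root test: confirm the layer (Pulse) iff (|T| + Z_v)/|T| >= n^(1/k);
    otherwise complete the current cluster (Reject)."""
    return "Pulse" if t_size + z_v >= t_size * n ** (1.0 / k) else "Reject"
```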

9 Distributed Implementation (cont) This information is used to define cluster borders, i.e., once a cluster is complete, each vertex in it informs all neighbors of its new residence. ⇒ nodes of the cluster under construction know which neighbors already belong to existing clusters.

10 Center selection procedure NextCtr Fact: The algorithm's “center of activity” is always located at the currently constructed cluster C. Idea: Select as center for the next cluster some vertex v adjacent to C (= v from the rejected layer) Implementation: Via a convergecast process (leaf: pick an arbitrary neighbor from the rejected layer and upcast to parent; internal node: upcast an arbitrary candidate)
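A sequential Python sketch of this convergecast (illustrative names; rejected maps a tree vertex to a rejected-layer neighbor it saw, if any):

```python
def next_center(children, rejected, v):
    """Upcast an arbitrary candidate center: a leaf reports a neighbor
    it saw in the rejected layer (or None); an internal vertex keeps
    its own candidate or takes an arbitrary one from a subtree."""
    candidate = rejected.get(v)
    for c in children.get(v, []):
        if candidate is None:
            candidate = next_center(children, rejected, c)
    return candidate
```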

11 Center selection procedure (NextCtr) Problem: What if the rejected layer is empty? (It might still be that the entire process is not yet complete: there may be some yet-unclustered nodes elsewhere in G)

12 Center selection procedure (NextCtr) Solution: Traverse the graph (using the cluster construction procedure within a global search procedure)

13 Distributed Implementation Use a DFS algorithm for traversing the tree of constructed clusters. Start at the originator vertex r_0, invoke ClusterCons to construct the first cluster. Whenever the rejected layer is nonempty, choose one rejected vertex as the next cluster center. Each cluster center marks a parent cluster in the cluster DFS tree, namely, the cluster from which it was selected.

14 Distributed Implementation (cont) DFS algorithm (cont): Once the search cannot progress forward (the rejected layer is empty), the DFS backtracks to the previous cluster and looks for a new center among neighboring nodes. If no neighbors are available, the DFS process continues backtracking on the cluster DFS tree.
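A sequential Python sketch of this control loop (cluster_cons and pick_rejected stand for the procedures described above; all names are illustrative):

```python
def basic_part_traversal(r0, cluster_cons, pick_rejected):
    """Cluster-level DFS: grow a cluster, advance to a rejected vertex
    when one exists, otherwise backtrack along parent-cluster pointers."""
    parent = {}                        # cluster DFS tree: child -> parent
    current = cluster_cons(r0)         # first cluster, grown around r0
    while True:
        v = pick_rejected(current)     # vertex of the rejected layer, or None
        if v is not None:
            nxt = cluster_cons(v)      # next cluster, grown around v
            parent[nxt] = current      # mark the parent cluster
            current = nxt
        elif current in parent:
            current = parent[current]  # backtrack on the cluster DFS tree
        else:
            return                     # back at the first cluster: done
```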

15 Inter-cluster edge selection RepEdge Goal: Select one representative inter-cluster edge between every two adjacent clusters C and C'. E(C,C') = edges connecting C and C' (known to their endpoints in C, as C vertices know the cluster residence of each neighbor)

16 Inter-cluster edge selection RepEdge A representative edge can be selected by a convergecast process on all edges of E(C,C'). Requirement: C and C' must select the same edge Solution: Use a unique ordering of edges - pick the minimum E(C,C') edge. Q: Define a unique edge order using unique IDs?

17 Inter-cluster edge selection (RepEdge) E.g., define the ID-weight of edge e=(v,w), where ID(v) < ID(w), as the pair ⟨ID(v), ID(w)⟩, and order ID-weights lexicographically; this ensures distinct weights and allows consistent selection of inter-cluster edges
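A minimal Python sketch of this ordering (tuple comparison is lexicographic; names are illustrative):

```python
def id_weight(e):
    """ID-weight of e = (v, w): the pair <min ID, max ID>; Python
    compares such tuples lexicographically, so weights are distinct
    and totally ordered."""
    v, w = e
    return (min(v, w), max(v, w))

def representative_edge(E_CC):
    """C and C' each take the minimum ID-weight edge of E(C, C'),
    so both sides pick the same edge with no extra coordination."""
    return min(E_CC, key=id_weight)
```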

18 Inter-cluster edge selection (RepEdge) Problem: Cluster C must carry out the selection process for every adjacent cluster C' individually Solution: Inform each C vertex of the identities of all clusters adjacent to C by convergecast + broadcast, and pipeline the individual selection processes

19 Analysis (C_1, C_2, ..., C_p) = clusters constructed by the algorithm For cluster C_i: E_i = edges with at least one endpoint in C_i n_i = |C_i|, m_i = |E_i|, r_i = Rad(C_i)

20 Analysis (cont) ClusterCons: The depth-bounded Dijkstra procedure constructs C_i and its BFS tree in O(r_i^2) time and O(n_i·r_i + m_i) messages Q: Prove the O(n) bound ⇒ Time(ClusterCons) = ∑_i O(r_i^2) ≤ ∑_i O(r_i·k) ≤ k·∑_i O(n_i) = O(kn)

21 Analysis (cont) C_i and BFS tree cost: O(r_i^2) time and O(n_i·r_i + m_i) messages ⇒ Comm(ClusterCons) = ∑_i O(n_i·r_i + m_i) Each edge occurs in ≤ 2 distinct sets E_i, hence Comm(ClusterCons) = O(nk + |E|)

22 Analysis (NextCtr) The DFS process on the cluster tree is more expensive than plain DFS: visiting cluster C_i and deciding the next step requires O(r_i) time and O(n_i) comm.

23 Analysis (NextCtr) The DFS visits the clusters of the cluster tree O(p) times The entire DFS process (not counting Procedure ClusterCons invocations) requires: Time(NextCtr) = O(pk) = O(nk) Comm(NextCtr) = O(pn) = O(n^2)

24 Analysis (RepEdge) s_i = # neighboring clusters surrounding C_i Convergecasting the ID of one neighboring cluster C' in C_i costs O(r_i) time and O(n_i) messages For all s_i neighboring clusters: O(s_i + r_i) time (pipelining) O(s_i·n_i) messages

25 Analysis (RepEdge) Pipelined inter-cluster edge selection - similar. As s_i ≤ n, we get Time(RepEdge) = max_i O(s_i + r_i) = O(n) Comm(RepEdge) = ∑_i O(s_i·n_i) = O(n^2)

26 Analysis Thm: Distributed Algorithm BasicPart requires Time = O(nk) Comm = O(n^2)

27 Sparse spanners Example - the m-dimensional hypercube: H_m = (V_m, E_m), V_m = {0,1}^m, E_m = {(x,y) | x and y differ in exactly one bit} |V_m| = 2^m, |E_m| = m·2^{m-1}, diameter m Ex: Prove that for every m ≥ 0, the m-cube has a 3-spanner with # edges ≤ 7·2^m
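A short Python sanity check of the stated edge count (the 3-spanner construction itself is left as the exercise):

```python
def hypercube_edges(m):
    """E_m: pairs of m-bit labels differing in exactly one bit
    (each edge listed once, from the endpoint whose bit is 0)."""
    return [(x, x ^ (1 << i))
            for x in range(2 ** m) for i in range(m)
            if not x & (1 << i)]

for m in range(1, 9):
    assert len(hypercube_edges(m)) == m * 2 ** (m - 1)  # |E_m| = m*2^(m-1)
```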

28 Regional Matchings Locality sensitive tool for distributed match-making

29 Distributed match making Paradigm for establishing a client-server connection in a distributed system (via specified rendezvous locations in the network) Ads of server v: written in locations Write(v) Client u: reads ads in locations Read(u)

30 Regional Matchings Requirement: “read” and “write” sets must intersect: for every v,u ∈ V, Write(v) ∩ Read(u) ≠ ∅ Client u must find an ad of server v

31 Regional Matchings (cont) Distance considerations taken into account: client u must find an ad of server v only if they are sufficiently close ρ-regional matching: “read” and “write” sets RW_ρ = { Read(v), Write(v) | v ∈ V } s.t. for every v,u ∈ V, dist(u,v) ≤ ρ ⇒ Write(v) ∩ Read(u) ≠ ∅
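A minimal Python sketch of this requirement as a checkable predicate (Read/Write as dicts of sets; names are illustrative):

```python
def is_rho_regional(Read, Write, dist, rho, V):
    """Defining property of a rho-regional matching: every pair u, v
    with dist(u, v) <= rho has Write(v) intersecting Read(u)."""
    return all(Write[v] & Read[u]
               for v in V for u in V if dist(u, v) <= rho)
```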

32 Regional Matchings (cont) Degree parameters: Δ_write(RW_ρ) = max_{v∈V} |Write(v)| Δ_read(RW_ρ) = max_{v∈V} |Read(v)|

33 Regional Matchings (cont) Radius parameters: Str_write(RW_ρ) = max_{u,v∈V} { dist(u,v) | u ∈ Write(v) } / ρ Str_read(RW_ρ) = max_{u,v∈V} { dist(u,v) | u ∈ Read(v) } / ρ

34 Regional matching construction [Given graph G and k, ρ ≥ 1, construct a regional matching RW_{ρ,k}] 1. Set S = { Γ_ρ(v) | v ∈ V } (the ρ-neighborhood cover)

35 Regional matching construction 2. Build a coarsening cover 𝒯 as in the Max-Deg-Cover Thm

36 Regional matching construction 3. Select a center vertex r_0(T) in each cluster T ∈ 𝒯

37 Regional matching construction 4. Select for every v a cluster T_v ∈ 𝒯 s.t. Γ_ρ(v) ⊆ T_v

38 Regional matching construction 5. Set Read(v) = { r_0(T) | v ∈ T } Write(v) = { r_0(T_v) } (Figure: v with Γ_ρ(v) ⊆ T_v = T_1; clusters T_1, T_2, T_3 containing v, with centers r_1, r_2, r_3; Read(v) = {r_1, r_2, r_3}, Write(v) = {r_1})
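A minimal Python sketch of steps 3-5 (the cover, its centers, and the ρ-neighborhoods are assumed given as inputs; all names are illustrative):

```python
def build_regional_matching(V, cover, center, ball):
    """cover  : list of clusters (frozensets of vertices), a coarsening cover
    center : dict mapping each cluster T to its chosen center r0(T)
    ball   : dict mapping each v to Gamma_rho(v), its rho-neighborhood"""
    Read, Write = {}, {}
    for v in V:
        # Step 4: a cluster T_v containing the whole neighborhood of v
        T_v = next(T for T in cover if ball[v] <= T)
        # Step 5: read at every cluster containing v; write only at r0(T_v)
        Read[v] = {center[T] for T in cover if v in T}
        Write[v] = {center[T_v]}
    return Read, Write
```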

39 Analysis Claim: The resulting RW_{ρ,k} is a ρ-regional matching. Proof: Consider u,v such that dist(u,v) ≤ ρ. Let T_v be the cluster s.t. Write(v) = {r_0(T_v)}

40 Analysis (cont) By definition, u ∈ Γ_ρ(v). Also Γ_ρ(v) ⊆ T_v ⇒ u ∈ T_v ⇒ r_0(T_v) ∈ Read(u) ⇒ Read(u) ∩ Write(v) ≠ ∅

41 Analysis (cont) Thm: For every graph G(V,E,w) and ρ, k ≥ 1, there is a ρ-regional matching RW_{ρ,k} with Δ_read(RW_{ρ,k}) ≤ 2k·n^{1/k} Δ_write(RW_{ρ,k}) = 1 Str_read(RW_{ρ,k}) ≤ 2k+1 Str_write(RW_{ρ,k}) ≤ 2k+1

42 Analysis (cont) Taking k = log n (so that n^{1/k} = 2) we get Corollary: For every graph G(V,E,w) and ρ ≥ 1, there is a ρ-regional matching RW_ρ with Δ_read(RW_ρ) = O(log n) Δ_write(RW_ρ) = 1 Str_read(RW_ρ) = O(log n) Str_write(RW_ρ) = O(log n)

