
Chapter 4 Downcasts and Upcasts

4.1 Downcasts At first we consider the case where the root holds m distinct items A = {α1, …, αm}, each destined to one specific vertex in the tree.
Lemma
–For every tree T, downcasting m distinct messages on T requires Ω(Depth(T)) time in the worst case.
–Downcasting m distinct messages on an arbitrary tree requires Ω(m) time in the worst case.
Algorithm DOWNCAST
–The root simply sends to each of its children, one message per round on the connecting edge and in arbitrary order, the messages destined to that child's subtree.
–Every intermediate vertex in the tree receives at most one message at each step and passes it on toward its destination.
Lemma
–Algorithm DOWNCAST performs downcasting of m distinct messages on the tree T in time O(m + Depth(T)).
–The proof appears on page 41.
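The schedule above can be checked on a small example. The following is a minimal round-by-round sketch of the DOWNCAST idea; the toy tree, the item destinations, and all helper names are illustrative assumptions, not taken from the text. Each vertex forwards at most one message per round, and the simulation counts the rounds until every item reaches its destination.

```python
from collections import deque

# Illustrative rooted tree: vertex 0 is the root; Depth(T) = 2.
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}

def subtree(v):
    """All vertices in the subtree rooted at v."""
    out, stack = set(), [v]
    while stack:
        u = stack.pop()
        out.add(u)
        stack.extend(children[u])
    return out

def downcast(root, destinations):
    """Deliver item i to destinations[i]; each vertex forwards at most one
    message per round, so the schedule takes at most m + Depth(T) rounds."""
    store = {v: deque() for v in children}
    store[root].extend(enumerate(destinations))
    rounds = 0
    while any(dest != v for v, q in store.items() for _, dest in q):
        rounds += 1
        moves = []
        for v in children:
            # forward one pending item toward the child whose subtree holds its destination
            for idx, (item, dest) in enumerate(store[v]):
                if dest != v:
                    nxt = next(c for c in children[v] if dest in subtree(c))
                    moves.append((v, idx, nxt, (item, dest)))
                    break
        for v, idx, _, _ in moves:
            del store[v][idx]
        for _, _, nxt, msg in moves:
            store[nxt].append(msg)
    return rounds

rounds = downcast(0, [3, 4, 5, 1, 2])
assert rounds <= 5 + 2  # m + Depth(T) with m = 5, Depth(T) = 2
```

The pipelining is visible in the bound: the root emits one message per round, and each message then travels at most Depth(T) further hops unobstructed.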

4.2 Upcasts We now suppose that m data items A = {α1, …, αm} are initially stored at some of the vertices of the tree T. Items may be replicated; namely, each item is stored at one or more vertices.
Lemma
–For every tree T, upcasting m distinct messages on T requires Ω(Depth(T)) time in the worst case.
–Upcasting m distinct messages on an arbitrary tree requires Ω(m) time in the worst case.
–Upcast is not the simple convergecast process, since the items must be sent up to the root individually. In the upcast operation the different messages all “converge” to a single spot in the tree, the root, and hence they tend to disrupt each other more and more.
–Although in the end the upcast operation, too, can be performed in time m + Depth(T), the algorithms achieving this are entirely different from the downcast algorithm: simply “rolling the execution tape backwards” would give a feasible schedule, but not a distributed algorithm the vertices can follow on their own.
–We consider three settings, differing in the assumptions made on the given items.

4.2.1 Ranked items The first setting assumes that the items are taken from an ordered set, and each item is marked by its rank in the set: the items are given in the form of pairs (i, αi) such that αi ≤ αi+1 for 1 ≤ i < m.
Algorithm RANKED_UPCAST
–On each round, forward to the parent the item of minimal rank among those stored locally that have not been upcast so far.
Lemma
–If the ith item belongs to M_v, the set of items originating in the subtree T_v, then at the end of round i + Depth(T) − depth(v) it is stored at v.
–This immediately guarantees that by time Depth(T) + m, all items are collected at the root.
Corollary
–Upcast of m ranked items on a tree T can be performed in time Depth(T) + m.
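A small synchronous simulation illustrates the pipelining: on every round each non-root vertex forwards to its parent the smallest-ranked stored item it has not forwarded yet. The tree shape and the (replicated) item placement below are illustrative assumptions only.

```python
# Toy rooted tree: 0 is the root, Depth(T) = 2; items are ranks 1..5.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
store = {0: set(), 1: {2}, 2: {5}, 3: {1, 4}, 4: {3}, 5: {1}}  # item 1 is replicated
m, root = 5, 0

def upcast():
    sent = {v: set() for v in store}
    rounds = 0
    while store[root] != set(range(1, m + 1)):
        rounds += 1
        msgs = []
        for v, items in store.items():
            if v == root:
                continue
            pending = items - sent[v]
            if pending:
                i = min(pending)  # the ranked/ordered selection rule
                sent[v].add(i)
                msgs.append((parent[v], i))
        for w, i in msgs:
            store[w].add(i)  # sets de-duplicate replicated copies
    return rounds

rounds = upcast()
assert rounds <= m + 2  # the Depth(T) + m bound, with Depth(T) = 2
```

Note that a vertex may receive messages from several children in the same round (they arrive on different edges), but sends at most one message upward per round; this is exactly the point where items "converge" and queue.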

4.2.2 Ordered items The second setting is the slightly more general case where the items are taken from an ordered set, but their ranks are not marked. It is impossible to tell the position of a particular item in the complete sorted list by inspecting the item alone, but it is possible to compare any two items and decide which is the larger.
Algorithm ORDERED_UPCAST
–On each round, forward to the parent the smallest locally stored item that has not been upcast so far.
In this situation an analogue of the previous lemma can still be proved.
Corollary
–Upcast of m ordered items on a tree T can be performed in time Depth(T) + m.

4.2.3 Unordered items The third and most general setting is the case where the items are entirely incomparable.
Algorithm UNORDERED_UPCAST
–On each round, forward to the parent an arbitrary locally stored item that has not been upcast so far.
In this setting the previous lemma no longer holds, and the following claims are needed.
Lemma (proof on page 44)
–Consider a vertex v and an integer t. Suppose that for every 1 ≤ i ≤ k, at the end of round t + i, v stores at least i items. Then at the end of round t + k + 1, v's parent w has received from v at least k items.
Lemma (proof on page 44)
–For every 1 ≤ i ≤ |M_v|, at the end of round i + Depth(T) − depth(v), at least i items are stored at v.
Corollary
–Upcast of m unordered items on a tree T can be performed in time Depth(T) + m.
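The corollary can be exercised with the same kind of toy simulation, now forwarding an arbitrary pending item each round. To stress that the choice does not matter while keeping the run deterministic, this sketch (tree and item placement are illustrative assumptions) always picks the maximum:

```python
# Toy rooted tree: 0 is the root, Depth(T) = 2; five incomparable items.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
store = {0: set(), 1: {"b"}, 2: {"e"}, 3: {"a", "d"}, 4: {"c"}, 5: set()}
all_items = {"a", "b", "c", "d", "e"}
root, depth_T = 0, 2

def upcast():
    sent = {v: set() for v in store}
    rounds = 0
    while store[root] != all_items:
        rounds += 1
        msgs = []
        for v, items in store.items():
            if v == root:
                continue
            pending = items - sent[v]
            if pending:
                x = max(pending)  # an arbitrary (but deterministic) choice
                sent[v].add(x)
                msgs.append((parent[v], x))
        for w, x in msgs:
            store[w].add(x)
    return rounds

rounds = upcast()
assert rounds <= len(all_items) + depth_T  # the Depth(T) + m bound
```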

4.3 Applications
4.3.1 Smallest k-of-m Suppose the vertices of the tree hold m elements from an ordered set, and the goal is for the root to learn the k smallest of them. The traditional way to do this is, first, to find the minimum element and inform all the vertices by broadcasting it throughout the tree; then find the next smallest element by the same method, and so on. This takes O(k · Depth(T)) time.
An alternative and faster method is the following. Throughout the execution, every vertex keeps the elements it knows of in an ordered list, and in each round sends to its parent the smallest element that has not been sent yet.
Lemma
–Upcasting the k smallest elements on a tree T can be performed in Depth(T) + k time.
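The faster method can be sketched directly: run the ordered-upcast rule for Depth(T) + k rounds and check that the root then knows the k smallest elements. The tree and the element placement below are illustrative assumptions.

```python
# Toy rooted tree: 0 is the root, Depth(T) = 2; nine elements, k = 3.
parent = {1: 0, 2: 0, 3: 1, 4: 1}
known = {0: {9}, 1: {4, 8}, 2: {2, 7}, 3: {1, 6}, 4: {3, 5}}
root, depth_T, k = 0, 2, 3

sent = {v: set() for v in known}
for _ in range(depth_T + k):  # run exactly Depth(T) + k rounds
    msgs = []
    for v, items in known.items():
        if v == root:
            continue
        pending = items - sent[v]
        if pending:
            x = min(pending)  # smallest known element not yet sent upward
            sent[v].add(x)
            msgs.append((parent[v], x))
    for w, x in msgs:
        known[w].add(x)

k_smallest = set(sorted(set().union(*known.values()))[:k])
assert k_smallest <= known[root]  # the root has learned the k smallest
```

The key observation is that an element can only be delayed at a vertex by strictly smaller elements, so the k smallest suffer at most k − 1 rounds of delay on top of their Depth(T)-hop journey.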

4.3.2 Information gathering and dissemination Suppose that m data items are initially stored at some of the vertices of the tree T. Items may be replicated; namely, each item is stored at one or more vertices. The goal is to end up with every vertex knowing all the items. The natural way is to collect the items at the root of the tree and then broadcast them one by one: an upcast operation collects the items at the root, and a pipelined downcast broadcasts them to every vertex. Hence the total time is O(m + Depth(T)).

4.3.3 Route-disjoint matching Suppose we are given a network in the form of a rooted tree T, with each vertex knowing the edge leading to its parent and the edges leading to its children in T. A set of 2k vertices W = {w1, …, w2k}, for k ≤ ⌊n/2⌋, is initially marked in the tree. Our goal is to find a matching of these vertices into pairs (wi1, wi2) for 1 ≤ i ≤ k, such that the unique routes Yi connecting wi1 to wi2 in T are all edge-disjoint.
Lemma
–For every tree T and for every set W as above, there exists an edge-disjoint matching as required.
–Furthermore, this matching can be found by a distributed algorithm on T in time O(Depth(T)).
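The existence claim admits a simple bottom-up pairing argument, sketched below on an illustrative tree (the tree, the marked set, and the helper names are assumptions, not from the text): each vertex pairs up the still-unmatched marked vertices of its children's subtrees, so at most one unmatched vertex crosses any tree edge upward, which is why the connecting routes come out edge-disjoint.

```python
# Illustrative rooted tree and marked set W of 2k = 4 vertices.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
W = {3, 4, 5, 6}

def match(v, pairs):
    """Pair up unmatched marked vertices seen in T_v; return the at most one
    vertex left unmatched (it will travel up the edge above v)."""
    pending = [v] if v in W else []
    for c in children[v]:
        u = match(c, pairs)
        if u is not None:
            pending.append(u)
    while len(pending) >= 2:
        pairs.append((pending.pop(), pending.pop()))
    return pending[0] if pending else None

pairs = []
leftover = match(0, pairs)
# |W| is even, so nothing is left unmatched at the root.
assert leftover is None and len(pairs) == len(W) // 2
```

Since each tree edge carries at most one unmatched representative upward, the route of each pair uses any edge at most once across all pairs, matching the lemma's edge-disjointness requirement.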

4.3.4 Token distribution n tokens are initially distributed among the n vertices of the tree, with no more than K at each site. The goal is to redistribute the tokens so that each processor ends up with exactly one token. The cost of the entire redistribution process equals the sum of the distances traversed by the tokens on their way to their destinations, and is bounded below by P = Σ_{u ≠ r} |p_u|, where, for the subtree T_u rooted at u, s_u is the number of tokens initially in T_u, n_u is the number of vertices in T_u, and p_u = s_u − n_u is the net number of tokens that need to be transferred out of T_u (or into it, if negative); the sum ranges over all vertices u other than the root r.
Lemma
–There exists a distributed algorithm for performing token distribution on a tree using an optimal number of messages P and O(n) time, after a preprocessing stage requiring O(Depth(T)) time and O(n) messages.
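The quantity P is easy to compute bottom-up, which is essentially what the preprocessing stage does. The following sketch computes the subtree surpluses p_u = s_u − n_u and their total P on an illustrative tree with an assumed token placement:

```python
# Illustrative tree (0 is the root) with n = 4 vertices and n = 4 tokens,
# at most K = 3 tokens per site.
children = {0: [1, 2], 1: [3], 2: [], 3: []}
tokens = {0: 0, 1: 3, 2: 0, 3: 1}

surplus = {}  # surplus[u] = p_u = s_u - n_u

def totals(v):
    """Return (s_v, n_v) for the subtree T_v and record its surplus p_v."""
    s, n = tokens[v], 1
    for c in children[v]:
        sc, nc = totals(c)
        s, n = s + sc, n + nc
    surplus[v] = s - n
    return s, n

totals(0)
P = sum(abs(surplus[u]) for u in surplus if u != 0)
```

Here every token that must leave or enter a subtree T_u crosses the edge above u at least once, which is why P lower-bounds the message count of any redistribution.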