Distributed Computing 2. Leader Election – ring network Shmuel Zaks ©

Leader election: message passing, asynchronous.

(Leader election)
- motivation
- who starts?
- leader election, maximum finding, spanning tree

(Leader election)
- unidirectional rings
- bidirectional rings
- complete networks
- general networks

(Leader election) Unidirectional ring: phases, unique execution.

Bidirectional ring (Leader election): sense of direction. (Figure: a ring in which every node labels its two links L and R consistently.)

Bidirectional ring (Leader election): no sense of direction. (Figure: a ring in which the L/R labels are not consistent around the ring.)

Sense of direction: for each process p in a bidirectional ring, its left and right neighbors are denoted left(p) and right(p), respectively. If right(left(p)) = p for every p, then the ring has a sense of direction (otherwise it has no sense of direction).
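The definition above can be checked mechanically. A minimal Python sketch (the arrays `left`/`right` and the function name are illustrative, not from the slides):

```python
def has_sense_of_direction(left, right):
    """left[p] / right[p] give the neighbor that process p labels
    'left' / 'right'.  The ring has a sense of direction iff
    right(left(p)) = p for every process p."""
    n = len(left)
    return all(right[left[p]] == p for p in range(n))
```

For a consistently oriented ring (left[p] = p-1, right[p] = p+1 mod n) this returns True; flipping the two labels at a single node breaks the property.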

LeLann’s algorithm

state := candidate;
send(my_id); receive(nid);
while nid ≠ my_id do
    if nid > my_id then state := no_leader;
    send(nid);        { forward every id, larger or smaller }
    receive(nid);
od;
if state = candidate then state := leader.

(LeLann’s algorithm) Example on a ring of 8 processors: messages: 64, time: 8. (Figure: the my_id and nid values at each node.)

Theorem: LeLann’s algorithm terminates, and exactly one processor ends with state = leader.
Message complexity: O(n²) (worst case and average). Time complexity: O(n).
(LeLann’s algorithm)
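As a sanity check on the O(n²) bound, here is a round-based Python sketch of LeLann-style election in which every id circulates the whole ring (the structure and names are mine, not the slides'):

```python
def lelann(ids):
    """Round-based sketch of LeLann-style election on a unidirectional
    ring: every id travels the full ring, each process learns all ids,
    and the maximum wins.  ids must be distinct.
    Returns (leader, message_count)."""
    n = len(ids)
    inbox = [[] for _ in range(n)]
    for i in range(n):                     # every process sends its own id
        inbox[(i + 1) % n].append(ids[i])
    messages = n
    seen = [{ids[i]} for i in range(n)]
    for _ in range(n - 1):                 # n-1 more forwarding rounds
        outbox = [[] for _ in range(n)]
        for i in range(n):
            for nid in inbox[i]:
                seen[i].add(nid)
                if nid != ids[i]:          # forward every id except my own
                    outbox[(i + 1) % n].append(nid)
                    messages += 1
        inbox = outbox
    assert all(s == set(ids) for s in seen)  # everyone knows every id
    return max(ids), messages
```

On a ring of 8 processors this uses 8² = 64 messages and n time units, since each of the n ids makes exactly n hops.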

Chang and Roberts’ algorithm

state := candidate;
send(my_id); receive(nid);
while nid ≠ my_id do
    if nid > my_id then begin
        state := no_leader;
        send(nid);    { forward only ids larger than my own }
    end;
    receive(nid);
od;
if state = candidate then state := leader.

or:

state := candidate;
send(my_id);
while state ≠ leader do
    receive(nid);
    if nid = my_id then state := leader
    else if nid > my_id then send(nid);
od.

(Chang and Roberts’ algorithm)

Example: messages: … = 20, time: 8.

Theorem: Chang and Roberts’s algorithm terminates, and exactly one processor ends with state = leader.
Message complexity: O(n²) (worst case). Time complexity: O(n).
(Chang and Roberts’ algorithm)
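A round-based Python sketch of the algorithm (my own structure, hedged accordingly). The classic worst case is a ring whose ids decrease in the direction of message flow, which costs n(n+1)/2 messages:

```python
def chang_roberts(ids):
    """Chang-Roberts on a unidirectional ring: a process forwards only
    ids larger than its own and swallows the rest; a process whose own
    id returns is the leader.  ids must be distinct.
    Returns (leader, message_count) up to the leader's detection."""
    n = len(ids)
    inbox = [[] for _ in range(n)]
    for i in range(n):                       # every process is a candidate
        inbox[(i + 1) % n].append(ids[i])
    messages = n
    leader = None
    while leader is None:
        outbox = [[] for _ in range(n)]
        for i in range(n):
            for nid in inbox[i]:
                if nid == ids[i]:
                    leader = ids[i]          # my own id came back: I win
                elif nid > ids[i]:
                    outbox[(i + 1) % n].append(nid)  # forward larger ids
                    messages += 1
                # smaller ids are swallowed
        inbox = outbox
    return leader, messages
```

On the descending ring [8, 7, …, 1] id k travels exactly k hops before being swallowed (the maximum travels the full ring), so the total is 8 + 7 + … + 1 = 36 messages.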

Theorem: the average message complexity of Chang and Roberts’s algorithm is O(n log n), assuming all rings are equally probable (for the proof, assume the ids are 1, 2, …, n).
(Chang and Roberts’ algorithm)

P(i, k) – the probability that id i makes exactly k steps; the expected number of steps of id i is Σ_k k·P(i, k).
(Chang and Roberts’ algorithm)

or: consider all rings.
Each id makes a 1st step in every ring.
The identity of P_i makes a 2nd step iff it is the largest among P_i, P_{i+1}, which happens in 1/2 of the rings.
The identity of P_i makes a 3rd step iff it is the largest among P_i, P_{i+1}, P_{i+2}, which happens in 1/3 of the rings, etc.
Summing, the expected number of messages is n·(1 + 1/2 + … + 1/n) = O(n log n).
(Chang and Roberts’ algorithm)
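The counting argument above can be checked by brute force. In this sketch (helper names are mine), the number of steps an id makes is read off directly from the ring layout, and the exact expectation n·H_n is verified over all rings for a small n:

```python
from fractions import Fraction
from itertools import permutations

def cr_messages(ids):
    """Total messages of Chang-Roberts on this ring: each id travels
    until it reaches a larger id (the maximum travels the full ring)."""
    n, total = len(ids), 0
    for i, x in enumerate(ids):
        steps, j = 1, (i + 1) % n          # every id makes a 1st step
        while ids[j] < x:                  # forwarded past smaller ids
            steps += 1
            j = (j + 1) % n
        total += steps
    return total

def expected_messages(n):
    # an id makes a k-th step iff it is the largest of k consecutive
    # ids, probability 1/k, so E = n * H_n = O(n log n)
    return n * sum(Fraction(1, k) for k in range(1, n + 1))
```

Averaging cr_messages over all 5! rings on the ids 1..5 gives exactly expected_messages(5) = 5·H_5 = 137/12.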

Bidirectional rings messages: ? time: ?

Hirschberg and Sinclair’s algorithm
Phases 1, 2, …; the number of processors that start phase k is O(n / 2^k); no. of phases = O(log n); each phase uses O(n) messages, so messages = O(n log n); time = O(n).
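A small Python sketch of who survives each Hirschberg-Sinclair phase (names are mine). A probe token is swallowed by any process with a larger id, so a candidate survives phase k exactly when it holds the maximum id within distance 2^k on both sides:

```python
def hs_phases(ids):
    """Return (winner_index, number_of_phases) for the survivor view of
    Hirschberg-Sinclair: in phase k (k = 0, 1, ...) every candidate
    probes distance 2**k in both directions and survives iff its id is
    the maximum in that neighborhood.  ids must be distinct."""
    n = len(ids)
    active, k = set(range(n)), 0
    while len(active) > 1:
        d = 2 ** k
        nxt = set()
        for i in active:
            others = {(i + j) % n for j in range(1, d + 1)}
            others |= {(i - j) % n for j in range(1, d + 1)}
            others.discard(i)              # never compare a node to itself
            if all(ids[i] > ids[o] for o in others):
                nxt.add(i)
        active, k = nxt, k + 1
    return active.pop(), k
```

Since the probed distance doubles each phase, only O(log n) phases are needed before a single candidate (the global maximum) remains.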

messages: ? time: ? Franklin’s algorithm

(Franklin’s algorithm) (Figures: example rings with messages: 16, 32, 48.)

no. of phases = O(log n); messages = O(n log n); time = O(n).
(Franklin’s algorithm)
Exercise: what is the expected number of active processors after the first phase?
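A Python sketch of Franklin's algorithm at the active-processor level (my own accounting, hedged): in each phase every active id travels to both neighboring active processors, so every ring edge is crossed once in each direction, 2n messages per phase; only local maxima among the active processors survive. The final 2n charges the round in which the last survivor's id travels around and comes back, letting it detect that it is the leader:

```python
def franklin(ids):
    """Active-level simulation of Franklin's bidirectional algorithm;
    relays just pass messages on.  ids must be distinct.
    Returns (leader, message_count)."""
    n = len(ids)
    active = list(ids)            # ids of active processes, in ring order
    messages = 0
    while len(active) > 1:
        messages += 2 * n         # each edge crossed once per direction
        m = len(active)
        # survive iff larger than both neighboring *active* ids
        active = [active[i] for i in range(m)
                  if active[i] > active[i - 1]
                  and active[i] > active[(i + 1) % m]]
    messages += 2 * n             # last survivor's id returns: leader found
    return active[0], messages
```

Since no two adjacent active processors can both be local maxima, each phase at least halves the number of active processors, giving the O(log n) phase bound above.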

Peterson’s 1st algorithm (DKRP: Dolev, Klawe and Rodeh; Peterson)
This algorithm is a modification of Franklin’s algorithm for a unidirectional ring. The basic idea: during a phase, each active process receives the temporary identifier of its nearest active neighbor and that neighbor’s nearest active neighbor’s temporary identifier, and then applies Franklin’s strategy.

Each node maintains four variables:
state ∈ {candidate, relay, leader}
tid – temporary identity
ntid – first id received
nntid – second id received
(Peterson’s 1st algorithm)

state := candidate; tid := id;
while state ≠ relay do begin
    [start phase]
    send(tid); receive(ntid);
    if ntid = id then state := leader;
    if tid > ntid then send(tid) else send(ntid);
    receive(nntid);
    if nntid = id then state := leader;
    if ntid ≥ max(tid, nntid) then tid := ntid else state := relay;
end;
(now state = relay)

while state  leader do begin receive( tid ); if tid = id then state := leader ; send( tid ); end (now state = relay )

(Peterson’s 1st algorithm: example animation; each frame shows tid, ntid, nntid at every node)
phase 1a: candidate [start phase]: send(tid); receive(ntid);
phase 1b: candidate: if tid > ntid then send(tid) else send(ntid); receive(nntid);
phase 1c: candidate: if ntid ≥ max(tid, nntid) then tid := ntid else state := relay;
phase 2a: candidate [start phase]: send(tid); receive(ntid); relay: …
phase 2b: candidate: if tid > ntid then send(tid) else send(ntid); receive(nntid); relay: …
phase 2c: candidate: if ntid ≥ max(tid, nntid) then tid := ntid else state := relay; relay: …
phase 3a: candidate [start phase]: send(tid); receive(ntid); relay: …

Exercises: 1. why send max{tid,ntid}? 2. what happens if n=2? 3. what happens if n=1?

P_max – the processor holding max_id.
Phases 1, 2, …; t_p – the number of non-relay processors starting phase p.
Lemma: during the execution of the algorithm, a candidate processor that becomes relay will never again be in the candidate state.
Lemma: for every p, if t_p ≥ 3 then t_{p+1} ≤ t_p / 2.

Lemma: if t_p ≥ 3, then at the start of phase p:
- for p > 1: the tid of a candidate is equal to the tid of the preceding candidate in the previous phase (corollary: for every p, each identity resides as the tid of at most one candidate processor);
- if the id of P_i resides in P_k, then all processors P_i, P_{i+1}, …, P_{k-1} are relays;
- max_id resides as the tid of some processor.
Lemma: if t_p = 2 or t_p = 1, then the algorithm terminates, with the processor holding max_id as the leader.

Theorem: Peterson’s 1st algorithm always determines a unique processor, the one holding the largest identity, as the leader.
Message complexity ≤ 2n log n; time complexity ≤ 2n − 1.
Exercise: show examples of worst cases and of best cases, in terms of time and in terms of messages.
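The candidate-level behavior of a phase can be simulated directly (a sketch with my own names and an approximate message accounting: the two sends of a phase each cross every ring edge once, 2n messages per phase, plus roughly n for the final notification round). Candidate j survives iff the tid of its preceding candidate is a local maximum, and then adopts that tid:

```python
def peterson_dkr(ids):
    """Candidate-level simulation of the DKRP / Peterson 1st algorithm
    on a unidirectional ring.  ids must be distinct.
    Returns (leader_tid, approximate_message_count)."""
    n = len(ids)
    t = list(ids)                  # candidate tids, in message order
    messages = 0
    while len(t) > 1:
        m = len(t)
        messages += 2 * n          # two message waves per phase
        new = []
        for j in range(m):
            # candidate j sees ntid = t[j-1], nntid = max(t[j-1], t[j-2]);
            # it survives iff ntid >= max(tid, nntid), i.e. t[j-1] is a
            # local maximum, and then adopts tid := t[j-1]
            if t[j - 1] >= t[j] and t[j - 1] >= t[(j - 2) % m]:
                new.append(t[j - 1])
        t = new
    messages += n                  # the surviving tid travels home
    return t[0], messages
```

As in Franklin's algorithm, at most half of the candidates survive each phase, but here each comparison is driven by the two values received from upstream, so the ring never needs to be bidirectional.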

Peterson’s 2nd algorithm: an improvement of Peterson’s 1st algorithm. Instead of comparing its id with both neighbors at the same time, a process first compares itself with its left neighbor, and then with its right neighbor.

Each node maintains three variables:
state ∈ {candidate, relay, leader}
tid – temporary identity
ntid – id received
(Peterson’s 2nd algorithm)

state := candidate; tid := id;
while state ≠ relay do
    begin [compare to left, odd phase]
        send(tid); receive(ntid);
        if ntid = id then state := leader;
        if tid < ntid then state := relay;
    end;
    begin [compare to right, even phase]
        send(tid); receive(ntid);
        if ntid = id then state := leader;
        if tid > ntid then state := relay else tid := ntid;
    end;
od;
(now state = relay)

while state  leader do begin receive( tid ); if tid = id then state := leader ; send( tid ); end (now state = relay )

(Peterson’s 2nd algorithm: example animation; each frame shows tid and ntid at every node)
phase 1a: [compare left] send(tid); receive(ntid); if ntid = id then state := leader; if tid < ntid then state := relay;
phase 1b: [compare right] send(tid); receive(ntid); if ntid = id then state := leader; if tid > ntid then state := relay else tid := ntid; relay: …
phase 2a: [compare left] send(tid); receive(ntid); if ntid = id then state := leader; if tid < ntid then state := relay; relay: …

Theorem: Peterson’s 2nd algorithm always determines a unique processor as the leader.
Message complexity ≤ 1.44… n log n.
Exercise: show an example where a processor whose id is not the largest is elected as the leader.

Phases are numbered p, p−1, …, 1 (the last phase).
t_k – the number of processors that remain candidates after phase k = the number of processors that start phase k−1.
t_{p+1} = n; t_1 = 1; t_2 ≥ 2.

Lemma: t_k ≤ the number of processors that became relay during phase k+1.
Proof: we show that for each processor that remains a candidate after phase k there is a distinct processor that became relay during phase k+1 (the previous phase).

Case a: k is odd.
(Figures: a processor p at the beginning of phase k+1, at the end of phase k+1 = the beginning of phase k, and at the end of phase k; a neighboring candidate q.)
p survived phase k; hence p > q.
If, in the beginning of phase k+1, all of these were already relays, then p would have become a relay, a contradiction.

Lemma: t_k ≤ the number of processors that became relay during phase k+1.
Corollary: t_k ≤ t_{k+2} − t_{k+1}, i.e. t_k + t_{k+1} ≤ t_{k+2}; hence t_k ≥ Fibonacci_{k+1} = …

n = t_{p+1} ≥ Fibonacci_{p+2} = …
no. of phases = p ≤ 1.44… log n
message complexity ≤ np ≤ 1.44… n log n
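The phase bound can be checked numerically. Since n ≥ Fibonacci_{p+2} and Fibonacci numbers grow like φ^k with φ = (1+√5)/2, the number of phases p is at most about log_φ n ≈ 1.44 log₂ n. A small sketch (the loop bound 60 and the sample sizes are arbitrary):

```python
import math

def fib(k):
    """Fibonacci numbers with fib(1) = fib(2) = 1."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

# the largest feasible number of phases p satisfies fib(p+2) <= n
for n in (10, 100, 1000, 10 ** 6):
    p = max(k for k in range(1, 60) if fib(k + 2) <= n)
    assert p <= 1.44 * math.log2(n) + 1
```

For n = 1000, for example, the largest p with fib(p+2) ≤ 1000 is 14 (fib(16) = 987), comfortably below 1.44·log₂ 1000 + 1.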

References
E. Chang and R. Roberts, An improved algorithm for decentralized extrema-finding in circular configurations of processes, Communications of the ACM, 22(5), 1979.
D. Dolev, M. Klawe and M. Rodeh, An O(n log n) unidirectional distributed algorithm for extrema finding in a circle, Journal of Algorithms, 3, 1982.
W. R. Franklin, On an improved algorithm for decentralized extrema finding in circular configurations of processors, Communications of the ACM, 25, 1982.
D. S. Hirschberg and J. B. Sinclair, Decentralized extrema-finding in circular configurations of processors, Communications of the ACM, 23, 1980.
G. LeLann, Distributed systems - towards a formal approach, Information Processing Letters, 1977.
G. L. Peterson, An O(n log n) unidirectional algorithm for the circular extrema problem, ACM Transactions on Programming Languages and Systems, 4(4), Oct. 1982.
N. Santoro, Sense of direction, topological awareness and communication complexity, SIGACT News, 16(2), Summer 1984.