1
Chapter 4: Self-Stabilization. Shlomi Dolev, MIT Press, 2000. Draft of October 2003. © Shlomi Dolev, All Rights Reserved.
2
Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems
3
Token Passing: Converting a Central Daemon to read/write
Distributed Daemon: activates a selected set of processors simultaneously to execute a computation step.
Central Daemon: a special case of the Distributed Daemon in which the chosen set contains exactly one processor.
Synchronous System: a special case in which the set consists of all the processors in the system.
The daemon chooses a set of processors; each processor in the set simultaneously reads from its neighbors, and then all of them write their new states.
[Figure: four processors 1..4 with states state 1 .. state 4, activated together by the daemon.]
4
The use of a Central Daemon
The literature on self-stabilization is rich in algorithms that assume the existence of powerful schedulers. WHY?
1. Dijkstra's choices in the first work in the field.
2. The assumption that a daemon exists lets the designer consider only a subset of the possible executions.
An algorithm designed to work in read/write atomicity can be used in any system in which a daemon exists, but the reverse does not hold.
5
Compiler
An algorithm designed to stabilize in the presence of a distributed daemon must also stabilize in a system with a central daemon. All the above facts are our motivation for designing a compiler:
AL for T (daemon) → Compiler → AL for T (read/write)
6
How does the compiler work?
AL for T (read/write) = AL for T (daemon) ∘ Mutual Exclusion
1. Compose a spanning tree using the Spanning Tree Construction algorithm.
2. Construct an Euler tour on the tree to create a virtual ring for the Mutual Exclusion algorithm.
3. A processor that enters the critical section reads the states of its neighbors, changes its own state and writes it, and then exits the critical section (see the sketch below).
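A minimal sketch of step 3 in Python (read_register, write_register, and transition are hypothetical helpers, not names from the book): while a processor holds the mutual-exclusion token on the virtual ring, it can behave as if a central daemon had scheduled it alone.

def critical_section_step(p, neighbors, read_register, write_register, transition):
    # Read the state of every neighbor, one register per read operation
    # (read/write atomicity).
    neighbor_states = {q: read_register(q, p) for q in neighbors}
    # Compute the new state exactly as the daemon-based algorithm would.
    p.state = transition(p.state, neighbor_states)
    # Write the new state so the neighbors can read it; the token is then released.
    for q in neighbors:
        write_register(p, q, p.state)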
7
Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems
8
Converting Shared Memory to Message Passing
Designing a self-stabilizing algorithm for asynchronous message-passing systems is more subtle than the same task in shared-memory systems. The main difficulty is the messages stored in the communication links: there is no bound on message delivery time and no bound on the number of messages that can be in a link, so there are infinitely many initial configurations from which the system must stabilize.
9
Our main goal is to design a compiler:
AL for T (read/write) → Compiler → AL for T (message passing)
The first step in designing such a compiler is a self-stabilizing data-link algorithm.
10
Definition of a Self-Stabilizing Data-Link Algorithm
Data-link task: messages fetched by the sender from the network layer should be delivered by the receiver to the network layer without duplication, omission, or reordering. One implementation of the data-link task is a token-passing algorithm. The token-passing task TP is a set of executions; a legal execution of TP is a sequence of configurations in which no more than one processor holds the token at a time, and both the sender and the receiver hold the token in infinitely many configurations.
11
Unbounded solution of the TP task
Each message carries an integer label called MsgCounter. The sender (and the receiver) maintains an unbounded local variable called counter. A timeout mechanism ensures that the system does not enter a communication-deadlock configuration.

Sender:
01 upon timeout
02     send(counter)
03 upon message arrival
04 begin
05     receive(MsgCounter)
06     if MsgCounter >= counter then
07     begin
08         counter := MsgCounter + 1
09         send(counter)
10     end
11     else send(counter)
12 end
12
Unbounded solution of the TP task
Receiver:
13 upon message arrival
14 begin
15     receive(MsgCounter)
16     if MsgCounter ≠ counter then        (token arrives)
17         counter := MsgCounter
18     send(counter)                        (token released)
19 end

In a safe configuration of TP and the algorithm, the counter values of all messages in transit and the counters of the sender and the receiver have the same value (Lemma 4.1).
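For concreteness, a direct Python transcription of the two handlers (illustrative only: state is a dictionary holding the local counter, and send is a hypothetical primitive that puts a message on the outgoing link):

def sender_on_timeout(state, send):
    # Timeout retransmission guards against a communication-deadlock
    # configuration in which no message is in transit.
    send(state["counter"])

def sender_on_message(state, msg_counter, send):
    # Lines 05-11: adopt MsgCounter + 1 when the arriving label is not smaller;
    # in either case answer with the current counter.
    if msg_counter >= state["counter"]:
        state["counter"] = msg_counter + 1
    send(state["counter"])

def receiver_on_message(state, msg_counter, send):
    # Lines 15-18: a new label means the token has arrived; echoing the
    # (possibly updated) counter releases the token back to the sender.
    if msg_counter != state["counter"]:
        state["counter"] = msg_counter
    send(state["counter"])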
13
The algorithm is self-stabilizing: for every possible configuration c, every fair execution that starts in c reaches a safe configuration with respect to TP (Theorem 4.1).
Question: can the unbounded counter and label be eliminated from the algorithm?
Answer: NO.
14
Lower Bound on the System Memory
The memory of the system in configuration c is the number of bits needed to encode the state of the sender, the state of the receiver, and the messages in transit.
Weak-Exclusion (WE) task: in every legal execution E there exists a combination of steps, one step for each processor, such that these steps are never executed concurrently.
We will prove that there is no bound on the system memory for the WE task.
15
Lower Bound on the System Memory
Theorem: For any self-stabilizing message-driven protocol for the WE task and for any execution E in WE, all the configurations of E are distinct. Hence, for any t > 0, the size of at least one of the first t configurations in E is at least log2(t) bits: if every one of them could be encoded in fewer bits, fewer than t distinct configurations would be possible.
16
Proof of the Theorem
Any execution E' in which not all the configurations are distinct has a circular sub-execution E = (c1, a2, ..., cm) with c1 = cm.
[Figure: sender Ps and receiver Pr; s1 and r1 are their states in c1; q_s,r(c1) and q_r,s(c1) are the contents of the two links in c1; qS_s,r(E) and qS_r,s(E) are the sequences of messages sent on the two links during E.]
17
Proof – building CE and c_init
Let E be the circular sub-execution and let S_i be the sequence of steps of P_i in E. CE is the set of circular executions obtained by merging the S_i's while keeping their internal order. The initial configuration c_init of every execution in CE is obtained from c1 of E by appending to each link the sequence of messages sent on it during E: the processor states are s1 and r1, and the links hold q_s,r(c1) followed by qS_s,r(E) and q_r,s(c1) followed by qS_r,s(E).
18
Proof – possible executions in CE
Starting from c_init, the receiver alone can perform all of its steps of E: its incoming link already holds q_s,r(c1) followed by qS_s,r(E), which suffices for all the messages qR_s,r(E) it receives during E, and it sends qS_r,s(E). Symmetrically, the sender alone can perform all of its steps of E, receiving qR_r,s(E) and sending qS_s,r(E). Hence the steps of E can be replayed from c_init in any interleaving that respects each processor's internal order.
19
Proof – cont.
Let S_sender = (a1, a2, ..., am) be the sender's steps during E and S_receiver = (b1, b2, ..., bk) be the receiver's steps during E. For any pair ⟨ai, bj⟩ there exists an execution E'' in CE with a configuration c in which both ai and bj are applicable: start in c_init and execute a1, ..., a(i-1) and b1, ..., b(j-1). Such a configuration c is not a safe configuration.
20
Proof – cont.
If a self-stabilizing algorithm AL for the WE task has a circular sub-execution E, then there exists an infinite fair execution E∞ of AL none of whose configurations is safe for WE. Consider an execution E' that starts in c_init, is composed of executions taken from CE(E), and reaches a configuration c1.
21
Proof – cont.
Assume, towards a contradiction, that c1 is safe. Extend E' to E∞ by appending executions E1, E2, ..., Ek, ... taken from CE(E). For each pair ⟨ai, bj⟩ there is a configuration c' in E∞ in which both ai and bj are applicable, so c' is not safe. Hence the execution that follows c1 is not legal for WE, contradicting the assumption that c1 is safe. The proof is complete.
22
Bounded-Link Solution
Let cap be a bound on the number of messages that can be in transit. The algorithm is the same as before, with the counter incremented modulo cap + 1:
08        counter := (MsgCounter + 1) mod (cap + 1)
The sender must eventually introduce a counter value that does not exist in any message in transit.
23
Randomized Solution
The algorithm is the same as the original one, with the counter replaced by a randomly chosen label:
08        label := ChooseLabel(MsgLabel)
At least three labels must be used. The sender repeatedly sends a message with a particular label L until a message carrying the same label L arrives back. The sender then chooses the next label L' at random from the remaining labels, so that L' ≠ L.
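One possible reading of ChooseLabel as a small Python helper (the three-label alphabet {0, 1, 2} is an illustrative assumption; the algorithm only requires at least three labels):

import random

def choose_label(msg_label, labels=(0, 1, 2)):
    # Pick the next label uniformly at random among the labels that differ
    # from the label just acknowledged; with three or more labels, a corrupted
    # message left in transit eventually fails to match the chosen label.
    return random.choice([l for l in labels if l != msg_label])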
24
Self-Stabilizing Simulation of Shared Memory
The heart of the simulation is a self-stabilizing implementation of the read and write operations. The simulation implements these operations using a self-stabilizing token-passing algorithm that runs on the two links connecting every pair of neighbors. In each link, the processor with the larger ID acts as the sender and the other as the receiver (recall that all processors have distinct IDs).
25
Self-Stabilizing Simulation of Shared Memory – cont.
Every time P_i receives a token from P_j, P_i writes the current value of R_ij into the value of the token it sends back. A write operation of P_i into r_ij is implemented by locally writing into R_ij. A read operation of P_i from r_ji is implemented by:
1. P_i receives the token from P_j (and sends it back);
2. P_i receives the token from P_j again and returns the value attached to this token.
26
Self-Stabilizing Simulation of Shared Memory – run
[Figure: P_i holds R_ij, P_j holds R_ji; the token carries a tag t and an attached value.]
Write operation, P_i writes x to r_ij: P_i writes locally, R_ij := x.
Read operation, P_i reads the value y from r_ji:
1. P_i receives the token from P_j (its attached value z may be stale).
2. P_i sends the token to P_j.
3. P_j receives the token from P_i and writes R_ji = y into the value of the token.
4. P_i receives the token from P_j and reads the value y attached to the token.
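A small, self-contained Python sketch of this run (class and function names are illustrative; only the token and register behaviour follows the description above):

from collections import deque

class Proc:
    def __init__(self):
        self.R = None                      # local copy of the simulated register

link_j_to_i = deque()                      # token messages travelling from P_j to P_i
p_i, p_j = Proc(), Proc()

def p_i_write(value):
    p_i.R = value                          # a write of r_ij is a purely local write

def p_j_return_token():
    link_j_to_i.append(p_j.R)              # P_j attaches its current R_ji to the token

def p_i_read():
    link_j_to_i.popleft()                  # 1. token arrives; its value may be stale
    p_j_return_token()                     # 2.-3. P_i sends the token; P_j returns it with a fresh R_ji
    return link_j_to_i.popleft()           # 4. the value on the next arrival is returned

link_j_to_i.append("stale")                # arbitrary message left in transit initially
p_j.R = "y"
print(p_i_read())                          # prints "y"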
27
Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems
28
Converting an Id-based System to a Special-processor System
Our goal is to design a compiler that converts a self-stabilizing algorithm written for a unique-ID system to work in a special-processor system. The ranking task, which realizes this compiler, is to assign each of the n processors in the system a unique identifier in the range 1 to n.
29
Converting an Id-based System to a Special-processor System
We form the self-stabilizing ranking algorithm by running three self-stabilizing algorithms one after the other:
1) self-stabilizing spanning-tree construction (Section 2.5);
2) a self-stabilizing counting algorithm;
3) a self-stabilizing naming algorithm.
Special-processor system → spanning-tree construction → counting algorithm → naming algorithm → unique-IDs system.
30
Self-Stabilizing Counting Algorithm
Assume a rooted spanning-tree system in which every processor knows its parent and its children. P_i has a variable count_i that holds the number of processors in the subtree of which P_i is the root. The correctness proof is by induction on the height of a processor.
31
Self-Stabilizing Counting Algorithm

Root: do forever
    sum := 0
    forall P_j ∈ children(i) do
        lr_ji := read(r_ji)
        sum := sum + lr_ji.count
    od
    count_i := sum + 1
od

Other: do forever
    sum := 0
    forall P_j ∈ children(i) do
        lr_ji := read(r_ji)
        sum := sum + lr_ji.count
    od
    count_i := sum + 1
    write r_{i,parent}.count := count_i
od

Each processor computes count_i as the sum of the count fields in the r_ji registers of its children plus 1 for itself; a non-root processor also writes its count into the communication register read by its parent.
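Since the algorithm converges to the point where every count_i equals the size of P_i's subtree, the following Python sketch of that fixed point (one bottom-up pass over a hypothetical example tree) may help; it is not the algorithm itself, which keeps running forever:

def stabilized_counts(children, root):
    # children: dict mapping each processor to the list of its children
    # in the rooted spanning tree.
    count = {}
    def visit(p):
        # count_p = 1 (the processor itself) + the counts reported by its children
        count[p] = 1 + sum(visit(c) for c in children.get(p, []))
        return count[p]
    visit(root)
    return count

# Example tree: root 1 with children 2 and 3; processor 2 has child 4.
print(stabilized_counts({1: [2, 3], 2: [4]}, 1))   # {4: 1, 2: 2, 3: 1, 1: 4}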
32
Self-Stabilizing Naming Algorithm
The naming algorithm uses the values of the count fields produced by the counting algorithm and assigns unique identifiers to the processors. The identifier of processor P_i is stored in the variable ID_i. The proof of stabilization is by induction on the distance of the processors from the root.
33
Self-Stabilizing Naming Algorithm

Root: do forever
    ID_i := 1
    sum := 0
    forall P_j ∈ children(i) do
        lr_ji := read(r_ji)
        write r_ij.identifier := ID_i + 1 + sum
        sum := sum + lr_ji.count
    od
od

Other: do forever
    sum := 0
    lr_{parent,i} := read(r_{parent,i})
    ID_i := lr_{parent,i}.identifier
    forall P_j ∈ children(i) do
        lr_ji := read(r_ji)
        write r_ij.identifier := ID_i + 1 + sum
        sum := sum + lr_ji.count
    od
od
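A matching Python sketch of the stabilized outcome of the naming algorithm on the same hypothetical tree (the counts are the values the counting algorithm would produce); the resulting identifiers are distinct and range over 1..n:

def stabilized_ids(children, counts, root):
    ids = {root: 1}                          # the root always takes identifier 1
    def assign(p):
        sum_so_far = 0
        for c in children.get(p, []):
            # identifier written to the child: ID_i + 1 + the counts of the
            # children already handled in this iteration
            ids[c] = ids[p] + 1 + sum_so_far
            sum_so_far += counts[c]
            assign(c)
    assign(root)
    return ids

children = {1: [2, 3], 2: [4]}
counts = {1: 4, 2: 2, 3: 1, 4: 1}            # stabilized output of the counting algorithm
print(stabilized_ids(children, counts, 1))   # {1: 1, 2: 2, 4: 3, 3: 4}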
34
Ranking Task – run
[Figure: a run on a spanning tree of eight processors. The counting algorithm stabilizes first, so every processor's count_i equals the size of its subtree and the root obtains count 8; the naming algorithm then propagates identifiers down the tree through the r.identifier registers, the root taking ID 1 and every processor ending with a distinct ID in 1..8.]
35
Counting Algorithm for a Non-rooted Tree

do forever
    forall P_j ∈ N(i) do lr_ji := read(r_ji)
    sum_i := 0
    forall P_j ∈ N(i) do
        sum_j := 0
        forall P_k ∈ N(i) do
            if P_j ≠ P_k then
                sum_j := sum_j + lr_ki.count
        od
        count_i[j] := sum_j + 1
        sum_i := sum_i + sum_j
        write r_ij.count := count_i[j]
    od
    count_i := sum_i + 1
od

Each processor P_i has a variable count_i[j] for every neighbor P_j. The value of count_i[j] is the number of processors in the subtree of T that contains P_i once the edge between P_i and P_j is removed (the processors on P_i's side of the edge). The correctness proof is by induction on the height of the registers r_ij.
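A sketch of the fixed point the non-rooted version converges to, on a hypothetical three-node path; the recursion simply restates what count_i[j] means once the registers are correct:

def stabilized_edge_counts(adj):
    # adj: dict mapping each processor to the list of its tree neighbors.
    def side_size(i, banned):
        # 1 for P_i plus the counts reported by every neighbor except `banned`
        return 1 + sum(side_size(k, i) for k in adj[i] if k != banned)
    return {i: {j: side_size(i, j) for j in adj[i]} for i in adj}

# Path 1 - 2 - 3: removing the edge (2, 3) leaves {1, 2} on P_2's side.
print(stabilized_edge_counts({1: [2], 2: [1, 3], 3: [2]}))
# {1: {2: 1}, 2: {1: 2, 3: 2}, 3: {2: 1}}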
36
Counting Algorithm for a Non-rooted Tree – run
[Figure: a run on a tree of eight processors showing, for each edge, the stabilized values count_i[j]; the two values held across an edge always sum to the total number of processors.]
37
Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems
38
Update – Converting a Special Processor to an Id-based Dynamic System
The task of the update algorithm in an id-based system is to inform each processor of the other processors in its connected component. As a result, every processor in the connected component knows the maximal identifier in the system, and a single leader is elected; the update algorithm is thus a self-stabilizing leader-election algorithm that stabilizes within O(d) cycles (pulses), where d is the diameter. The motivation for the restriction that the update algorithm must work in an id-based system can be seen by examining Dijkstra's self-stabilizing ME algorithm for a ring of processors.
39
Dijkstra's proof
Dijkstra proved that without a special processor it is impossible to achieve ME in a self-stabilizing manner. The impossibility proof considers a composite number of identical processors connected in a ring and activated by a central daemon. ME requires that processor P_i executes the critical section if and only if it is the only processor that can change its state (by reading its neighbors' states) at that point of the execution.
40
Dijkstra's proof – cont.
Consider a ring of four processors in which P1 and P3 start in the same state S0, P2 and P4 start in the same state S0', and the execution order is P1, P3, P2, P4, P1, P3, P2, P4, ... Then P1 and P3 move together to some state S1, and P2 and P4 move together to some state S1'. The symmetry between the states is thus preserved.
41
Conclusions from Dijkstra's proof
Whenever P1 has permission to execute the critical section, so does P3 (and likewise for P2 and P4), so mutual exclusion cannot hold. Without a central daemon, the impossibility result for self-stabilizing ME holds also for a ring with a prime number of identical processors: start in a configuration in which all the processors are in the same state and the contents of all the registers are identical; an execution that preserves this symmetry forever is one in which every processor reads all of its neighbors' registers before any processor writes. The restriction that the update algorithm is designed for an id-based (non-identical) system is thus clear.
42
The Update algorithm – outline
The algorithm constructs n directed BFS trees, one for every processor:
1. Each processor P_i holds a BFS tree rooted at P_i.
2. When a processor P_j at distance k+1 from P_i has more than one neighbor at distance k from P_i, P_j is connected to the neighbor (its parent) with the maximal ID.
3. Each P_i reads from its δ neighbors their sets of tuples ⟨j, x⟩, where j is a processor id and x is the distance of j from the processor that holds the tuple.
4. P_i's tuples are computed from these neighbors' sets, adopting for each j ≠ i the tuple with the smallest x and adding 1 to x. P_i also adds the tuple ⟨i, 0⟩ for itself. At the end of the iteration, P_i removes the floating (false) tuples.
43
Update algorithm

do forever
    Readset_i := ∅
    forall P_j ∈ N(i) do
        Readset_i := Readset_i ∪ read(Processors_j)
    od
    Readset_i := Readset_i \ {⟨i, x⟩}             (drop tuples carrying P_i's own id)
    Readset_i := Readset_i ++ ⟨*, 1⟩              (add 1 to the distance field of every tuple)
    Readset_i := Readset_i ∪ {⟨i, 0⟩}
    forall P_j ∈ processors(Readset_i) do
        Readset_i := Readset_i \ NotMinDist(P_j, Readset_i)
    od
    write Processors_i := ConPrefix(Readset_i)
od
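A hedged Python sketch of a single iteration of this loop for one processor. NotMinDist and ConPrefix are modeled from the outline above (keep the minimal-distance tuple per identifier, then discard floating tuples whose distances do not form a consecutive prefix 0, 1, 2, ...); this is an interpretation for illustration, not the book's exact definitions.

def update_iteration(my_id, neighbor_sets):
    # neighbor_sets: the sets of (id, distance) tuples read from the neighbors.
    readset = set().union(*neighbor_sets) if neighbor_sets else set()
    # Drop tuples carrying our own identifier, then add 1 to every distance.
    readset = {(pid, dist + 1) for (pid, dist) in readset if pid != my_id}
    readset.add((my_id, 0))
    # NotMinDist: keep, for every identifier, only the tuple with minimal distance.
    best = {}
    for pid, dist in readset:
        if pid not in best or dist < best[pid]:
            best[pid] = dist
    # ConPrefix: keep only tuples whose distances form a consecutive prefix
    # 0, 1, 2, ... so that floating (false) tuples are removed.
    kept, d = [], 0
    while True:
        layer = [(pid, dist) for pid, dist in best.items() if dist == d]
        if not layer:
            break
        kept.extend(layer)
        d += 1
    return set(kept)

# Example: neighbors report ids 5 and 9, plus a floating tuple (42, 10).
print(update_iteration(7, [{(5, 0), (9, 1)}, {(9, 0), (42, 10)}]))
# The result contains (7, 0), (5, 1), (9, 1); the floating tuple is discarded.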