chapter 4 - Self-Stabilizing Algorithms for Model Conversions

[Figure: a six-processor graph and the iddistance tables computed by the update algorithm over the first, second, and third rounds; each processor accumulates one ⟨id, distance⟩ tuple per processor, e.g. processor 1 ends the third round with ⟨1,0⟩ ⟨2,1⟩ ⟨3,2⟩ ⟨5,2⟩ ⟨4,3⟩ ⟨6,3⟩.]
Floating Tuples

A floating tuple in ReadSet_i is a tuple carrying the id of a processor that does not exist in the system. Let y be the minimal distance value missing from ReadSet_i; we remove every tuple with distance greater than y from ReadSet_i. The function ConPrefix(ReadSet_i) removes exactly these tuples.
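The rule above is easy to state operationally. The following is a minimal Python sketch of ConPrefix; the function name matches the slides, but the tuple representation (a set of `(id, distance)` pairs) and everything else is an illustrative assumption, not the book's code.

```python
def con_prefix(read_set):
    """Keep only the gap-free prefix of distances.

    y is the minimal distance value missing from read_set; every tuple
    whose distance exceeds y is a potential floating tuple and is removed.
    """
    distances = {d for (_, d) in read_set}
    y = 0
    while y in distances:          # find the minimal missing distance
        y += 1
    return {(pid, d) for (pid, d) in read_set if d < y}

# A floating tuple (99, 5) may survive an arbitrary initial state, but the
# gap at distance 3 makes y = 3, so every tuple beyond the gap is dropped.
rs = {(1, 0), (2, 1), (3, 1), (4, 2), (99, 5)}
print(sorted(con_prefix(rs)))   # -> [(1, 0), (2, 1), (3, 1), (4, 2)]
```

Note that genuine tuples are never removed: in a connected system there is a processor at every distance up to the eccentricity of P_i, so the minimal missing distance lies beyond all correct tuples.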
Correctness

To prove correctness we define a safe configuration as one in which, for every P_i:
1. Processors_i contains n tuples, one tuple ⟨id_j, y⟩ for every processor P_j in the system, where y is the distance of P_j from P_i.
2. The tuples in ReadSet_i will rewrite the same contents to Processors_i.
Tuple Addition (Lemma 4.6)

In every arbitrary execution, following the kth cycle the following holds for each pair of processors P_i and P_j at distance l < min(k, d+1):
Assertion 1 (inclusion): a tuple ⟨id_j, l⟩ appears in Processors_i.
Assertion 2 (constant value): if a tuple ⟨id_x, y⟩ with y ≤ l appears in Processors_i, then there exists a processor P_x at distance y from P_i.
Tuple Addition – cont.

The proof is by induction on k, the number of cycles in the execution.
Base case: k = 1. P_i adds ⟨id_i, 0⟩ to ReadSet_i and increments the distance of every other tuple to a value greater than 0; thus assertion 1 (inclusion) holds. After the increment, only one tuple is left with distance 0, thus assertion 2 (constant value) holds as well.
Induction assumption: assertions 1 and 2 hold after k cycles.
Induction step: by the assumption, at the start of the (k+1)th cycle the Processors variables are correct up to distance l - 1. Cycle k+1 reads all the correct tuples of distance l - 1, so it computes all the tuples of distance l correctly.
Correctness (Lemma 4.7)

In every arbitrary execution, following d+2 cycles it holds for every tuple ⟨id_x, y⟩ in every Processors_i variable that a processor P_x exists in the system.
Correctness – cont.

The proof depends on tuple addition (Lemma 4.6). According to Lemma 4.6, following d+1 cycles every tuple with distance at most d is not a floating tuple. Hence, if a floating tuple ⟨id_x, y⟩ exists after cycle d+1 in a Processors_i variable, then y > d. Cycle d+2 then increments y, thus y > d+1. Since no tuple of distance d+1 exists, the function ConPrefix removes the floating tuple.
Stabilization (Corollary 4.1)

In any execution, any configuration that follows the first d+3 cycles is a safe configuration.
Proof: In accordance with Lemma 4.6, in every configuration that follows the first d+1 cycles no tuple with distance at most d is a floating tuple, and for every P_j and P_i at distance l ≤ d a tuple ⟨id_j, l⟩ appears in Processors_i. In accordance with Lemma 4.7, in every configuration that follows d+2 cycles, no tuple of distance greater than d exists in the Processors_i variables. Therefore, during the (d+3)th cycle, a safe configuration is reached.
Self-Stabilizing Convergecast for Topology Update

In the topology-update algorithm, the information that is convergecast is the local topology (the identities of the descendants' neighbors). We assume that every processor knows its parent and children in the BFS tree rooted at the leader (this can be computed in O(d) cycles with the update algorithm).
Self-Stabilizing Convergecast for Topology Update

[Figure: a tree rooted at processor 0 with processors 1–7; each processor P_i holds a set Up_i of edges, e.g. Up_4 = {4.3; 4.5}, Up_7 = {7.2; 7.1}, and the root accumulates Up_0 = {2.0; 2.3; 2.7; 5.3; 5.4; 5.2; 1.0; 7.1; 4.3}.]

The stabilization of the convergecast is based on the correct information in the leaves and on the direction in which information is collected: from the leaves up. In the convergecast, every processor P_i uses the variable up_i to report to its parent its local topology and (if P_i is not a leaf) its children's topology.
Self-Stabilizing Broadcast for Topology Update

[Figure: the same tree; the root copies Up_0 into Down_0, and the Down set of every processor along the tree equals Down_0 = {2.0; 2.3; 2.7; 5.3; 5.4; 5.2; 1.0; 7.1; 4.3}.]

To inform every processor of the tree topology collected by the leader, we use a self-stabilizing broadcast mechanism:
Root (x): Down_x := Up_x
Everyone else (i): Down_i := Down_j, where j = parent(i)
This is done in O(d) cycles.
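The convergecast/broadcast pair can be sketched in a few lines of Python. This is a centralized, round-free model of the two rules, not the distributed algorithm itself; the tree layout, the `"i.j"` edge encoding, and the function names are illustrative assumptions.

```python
def convergecast_up(tree_children, neighbors, root):
    """Up_i = P_i's local edges plus the union of its children's Up sets."""
    up = {}
    def visit(i):
        up[i] = {f"{i}.{j}" for j in neighbors[i]}
        for c in tree_children.get(i, []):
            visit(c)
            up[i] |= up[c]            # fold the child's topology into Up_i
        return up[i]
    visit(root)
    return up

def broadcast_down(tree_children, up, root):
    """Root: Down_root = Up_root; everyone else copies its parent's Down."""
    down = {root: up[root]}
    stack = [root]
    while stack:
        i = stack.pop()
        for c in tree_children.get(i, []):
            down[c] = down[i]          # Down_c = Down_parent(c)
            stack.append(c)
    return down

# Small example: 0 is the leader; 0-1 and 0-2 are tree edges, 1-2 is a
# non-tree edge of the communication graph.
children = {0: [1, 2]}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
up = convergecast_up(children, neighbors, 0)
down = broadcast_down(children, up, 0)
assert down[1] == down[2] == up[0]    # every processor learns the topology
```

The O(d) bound in the slide comes from the fact that information moves one tree level per cycle, both on the way up and on the way down.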
Adaptive Self-Stabilizing Algorithms

In dynamic systems, parameters of the system such as the diameter and the number of processors are not fixed.
A self-stabilizing algorithm is time-adaptive if the number of cycles until convergence is proportional to the system's parameters.
A self-stabilizing algorithm is memory-adaptive if the amount of memory used in a safe configuration is proportional to the system's parameters.
A silent self-stabilizing algorithm is communication-adaptive if the number of bits that are communicated is proportional to the system's parameters.
The update algorithm stabilizes within O(d) cycles and is therefore time-adaptive. It reads and writes O(n) tuples in the communication registers and is therefore memory-adaptive and communication-adaptive.
Chapter 4: roadmap

4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems
What is a Synchronizer?

A synchronizer is an algorithm that converts a synchronous algorithm so that it can be executed in an asynchronous system. In other words, the task of the synchronizer is to emulate, distributively, a synchronous execution in an asynchronous system.
Why use a Synchronizer?

Designing algorithms for an asynchronous system can be a sophisticated task: in an asynchronous system, every configuration can be extended to a set of executions that differ in the order in which applicable steps are scheduled. In contrast, when the system is synchronous, all the processors change state simultaneously, so a configuration uniquely defines the execution that follows it.
Therefore...

Given a task for an asynchronous system:
1. Design an algorithm solving the task in a synchronous system.
2. Use a synchronizer to convert the algorithm so it can be executed in an asynchronous system.
So, why not always use Synchronizers? Efficiency...

Restricting the scheduling of steps slows the fastest processors down to the speed of the slowest processor. Intuitively, in a non-restricted asynchronous execution the fastest processors may make progress, thereby helping the slow processors to solve the system's task.
Which synchronizers are we going to see?

First, a simple one: the unbounded version of the self-stabilizing alpha-synchronizer. Second, the bounded version of the alpha-synchronizer. Finally, the beta-synchronizer.
The Unbounded Alpha-Synchronizer

We first present the task of the alpha-synchronizer not in the context of synchronization. Later we explain how it is used for synchronization.
Alpha-Synchronizer

The alpha-synchronizer uses the services of a self-stabilizing data-link algorithm: every message sent by a processor is received and acknowledged. Processor P waits for an acknowledgment of a message m sent to Q before sending a new message m' to Q. The acknowledgment itself may carry information concerning the state of Q between the time P started sending m and the arrival of the acknowledgment.
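The stop-and-wait discipline described above can be sketched as follows. This is a minimal single-channel model, not the book's data-link algorithm; the class and method names are illustrative assumptions.

```python
from collections import deque

class DataLink:
    """Stop-and-wait channel from P to Q: at most one unacknowledged message."""
    def __init__(self):
        self.channel = deque()       # the in-flight message, if any
        self.awaiting_ack = False

    def dl_send(self, msg):
        # P must not send m' while m is still unacknowledged.
        assert not self.awaiting_ack, "must wait for the ack of m first"
        self.channel.append(msg)
        self.awaiting_ack = True

    def deliver_and_ack(self, receiver_state):
        # Q consumes the message; the ack piggybacks Q's current state,
        # which is how the synchronizer learns its neighbour's phase.
        msg = self.channel.popleft()
        self.awaiting_ack = False
        return msg, receiver_state

link = DataLink()
link.dl_send("m")
msg, state_of_q = link.deliver_and_ack(receiver_state=7)
link.dl_send("m'")               # legal only after the ack of m arrived
```

The piggybacked state is exactly the information the alpha-synchronizer needs: an ack on a phase message carries Q's phase back to P.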
The Task

Every processor P_i has an integer variable phase_i. The task of the alpha-synchronizer is defined by the set of executions in which:
1. The phase variables of every two neighboring processors differ by no more than one.
2. Each phase variable is incremented infinitely often.
Self-Stabilizing Alpha-Synchronizer: the Unbounded Version

1. do forever
2.   forall P_j ∈ N(i) do received_j := false
3.   do
4.     DLsend(phase_i)
5.     upon DLreceive(ack_j, phase_j)
6.       received_j := true
7.       phase_ji := phase_j
8.     upon DLreceive(phase_j)
9.       phase_ji := phase_j
10.  until ∀ P_j ∈ N(i): received_j = true
11.  if ∀ P_j ∈ N(i): phase_i ≤ phase_ji then
12.    phase_i := phase_i + 1
13. od

If no neighbor lags behind, increase the phase.
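The core rule, increment only when no neighbour lags behind, can be exercised in a small round-based simulation. This is a synchronous abstraction of the asynchronous algorithm (the data-link exchange is collapsed into reading a dictionary), and the chain topology and initial phases are arbitrary illustrative choices.

```python
def alpha_round(phases, neighbors):
    """One round: P_i increments phase_i only if no neighbour is behind."""
    new = dict(phases)
    for i in phases:
        if all(phases[i] <= phases[j] for j in neighbors[i]):
            new[i] = phases[i] + 1       # safe to move to the next phase
    return new

neighbors = {1: [2], 2: [1, 3], 3: [2]}  # a chain P1 - P2 - P3
phases = {1: 5, 2: 0, 3: 2}              # arbitrary (corrupted) start

for _ in range(20):
    phases = alpha_round(phases, neighbors)

# Lemma 1: neighbouring phases end up differing by at most one.
assert all(abs(phases[i] - phases[j]) <= 1
           for i in neighbors for j in neighbors[i])
# Lemma 2: phases keep being incremented, well past the initial maximum.
assert min(phases.values()) > 5
```

Note how the laggard P2 catches up first (P1, holding the largest phase, simply waits), after which the whole chain counts in lockstep.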
Correctness is derived from the following two lemmas:

Lemma 1: In every fair execution there exists a configuration after which the phase values of every two neighboring processors differ by no more than one.
Lemma 2: In every fair execution, every phase variable is incremented infinitely often.
Using the Synchronizer

In a synchronous system, the state of a processor at time t+1 depends on the states of its neighbors at time t. When using the alpha-synchronizer, every processor computes the state of the corresponding processor in the emulated system at the time step indicated by its phase variable. The processor then attaches its previous and current states to the messages it sends. Thus, a processor with value t in its phase variable receives the states of its neighbors at the tth (emulated) pulse and computes its state at time t+1.
Bounded Alpha-Synchronizer

The task remains the same, but the phase variables are bounded: they are incremented modulo M, where M ≥ n (the number of processors). Each processor P_i holds an additional variable, reset_i.
The General Idea

The technique used to stabilize the system is to detect whenever two neighboring phase variables differ by more than one, and upon such detection to set all phase variables to 0, reaching a configuration in which all phase variables are 0. After stabilization, counting is essentially no different from the unbounded version.
Resetting

Whenever a processor P_i discovers a difference greater than one between its phase variable and a neighbor's phase variable, P_i sets its reset variable to 0. This causes the neighboring processors to have a value no greater than one in their reset variables, which in turn causes the neighbors' neighbors to have a value no greater than two in their reset variables, and so on. Thus, a processor that sets its reset variable to 0 initiates a "reset wave" that propagates through the system.
Resetting (cont.)

Roughly speaking, the reset wave sets the phase variables to 0, and counting is resumed only after all phase variables have been reset.
The Algorithm

1  do forever
2    forall P_j ∈ N(i) do received_j := false
3    PhaseUpdate := false
4    ResetUpdate := false
5    do
6      DLsend(phase_i, reset_i)  // start send to all neighbors
7      upon DLreceive(ack_j, phase_j, reset_j)
8        received_j := true
9        UpdatePhase()
10     upon DLreceive(phase_j, reset_j)
11       UpdatePhase()
12   until ∀ P_j ∈ N(i): received_j = true  // all sends terminated
13   if ResetUpdate = false then
14     if ∀ P_j ∈ N(i): reset_i ≤ reset_ji then reset_i := min(2N, reset_i + 1)
15   if PhaseUpdate = false and
16      ∀ P_j ∈ N(i): phase_i ∈ {phase_ji, (phase_ji - 1) mod M} then
17     phase_i := (phase_i + 1) mod M
18 od
The Algorithm (cont.)

19 UpdatePhase()
20   phase_ji := phase_j
21   reset_ji := reset_j
22   if reset_i > reset_ji then
23     reset_i := min(2N, reset_ji + 1)
24     ResetUpdate := true
25   if phase_ji ∉ {(phase_i - 1) mod M, phase_i, (phase_i + 1) mod M} then
26     reset_i := 0
27     ResetUpdate := true
28   if reset_i < 2N then
29     phase_i := 0
30     PhaseUpdate := true
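The reset-wave mechanism can be exercised in a round-based simulation. The sketch below inlines the UpdatePhase bookkeeping and schedules the processors sequentially in each round; the chain topology, M, and the bound N are illustrative assumptions, so this approximates the asynchronous algorithm rather than reproducing it.

```python
N = 3            # bound on the number of processors
M = 5            # phases are counted modulo M, M >= n
neighbors = {1: [2], 2: [1, 3], 3: [2]}   # a chain P1 - P2 - P3
phase = {1: 0, 2: 2, 3: 0}                # arbitrary (corrupted) phases
reset = {1: 6, 2: 6, 3: 6}                # 2N = 6: no reset in progress

def legal(pi, pj):
    """phase_j is within one of phase_i, modulo M."""
    return pj in {(pi - 1) % M, pi, (pi + 1) % M}

def cycle(i):
    reset_upd = phase_upd = False
    for j in neighbors[i]:
        if reset[i] > reset[j]:                  # join an ongoing reset wave
            reset[i] = min(2 * N, reset[j] + 1); reset_upd = True
        if not legal(phase[i], phase[j]):        # inconsistency detected:
            reset[i] = 0; reset_upd = True       # initiate a reset wave
        if reset[i] < 2 * N:                     # still resetting: hold at 0
            phase[i] = 0; phase_upd = True
    if not reset_upd and all(reset[i] <= reset[j] for j in neighbors[i]):
        reset[i] = min(2 * N, reset[i] + 1)
    if not phase_upd and all(phase[i] in {phase[j], (phase[j] - 1) % M}
                             for j in neighbors[i]):
        phase[i] = (phase[i] + 1) % M            # safe to count again

for _ in range(30):
    for i in (1, 2, 3):
        cycle(i)

# After stabilization every two neighbouring phases differ by at most one
# (mod M), matching the synchronizer's task, and no reset is in progress.
assert all(legal(phase[i], phase[j]) for i in neighbors for j in neighbors[i])
assert all(r == 2 * N for r in reset.values())
```

Running it shows the predicted behaviour: P2's illegal phase triggers a wave, all phases are pinned at 0 while the reset variables climb back to 2N, and only then does counting modulo M resume.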
Observation

While a reset wave propagates, no processor that has already reduced its reset variable can increment it beyond 2N, where N is a bound on the number of processors.
Conclusion: because the phase variable is set to 0 whenever the reset variable is smaller than 2N, a configuration in which all phase variables are 0 is reached.
Demonstration of the Observation

[Figure: three processors P1, P2, P3 in a chain, with their reset and phase values shown step by step.]

- P1, line 25: 3 ∉ {(1-1), 1, (1+1)}, so reset_1 := 0; line 28: 0 < 2N, so phase_1 := 0.
- P2, lines 10-11 and 22: 2 > 0, so reset_2 := 0 + 1; line 28: 1 < 2N, so phase_2 := 0.
- P3, lines 10-11 and 22: 5 > 1, so reset_3 := 1 + 1; line 25: 0 ∉ {(4-1), 4, (4+1)}, so reset_3 := 0; line 28: 0 < 2N, so phase_3 := 0.
- All processors then increment their reset variables until they reach 2N, after which all processors resume incrementing their phase variables.
Correctness is proved in steps via three lemmas:

1. Every fair execution in which no processor P_i assigns 0 to reset_i has a suffix in which the value of the reset variable of every processor is 2N in each of its configurations.
2. Every fair execution in which no processor P_i assigns 0 to reset_i reaches a safe configuration.
3. Every fair execution in which some processor P_i assigns 0 to reset_i reaches a safe configuration.
Informal Introduction to the Beta-Synchronizer

The beta-synchronizer uses shared memory. Emulating a synchronous execution by the beta-synchronizer algorithm can be logically divided into three stages:
I. Creation of a rooted spanning tree of the communication graph (by the leader-election and spanning-tree-construction algorithms).
Informal Introduction to the Beta-Synchronizer (cont.)

II. To start emulating a pulse, the root processor initiates a broadcast of a "pulse message" over the rooted spanning tree. Upon receiving a pulse message, a processor P_i reads from the registers (concerning the synchronous algorithm) of its neighbors in the original communication graph.
Example:

[Figure: processors P_t, P_j, P_i, P_m, P_k; the pulse message travels down the tree while P_i reads its neighbors' registers.]

When P_i receives a pulse message from its parent P_j in the rooted spanning tree, it:
- sends it to all its children in the rooted spanning tree;
- reads from the registers of its neighbors P_t and P_m in the original communication graph.
Informal Introduction to the Beta-Synchronizer (cont.)

III. When P_i has completed reading the registers of its neighbors in the original communication graph, P_i reaches a safe state and waits for the arrival of "safe messages" from its children in the rooted spanning tree. When P_i receives the last safe message from its children, it writes to the registers of its neighbors in the original communication graph (concerning the synchronous algorithm) and forwards the safe message to its parent.
Example:

[Figure: P_i with parent P_j and neighbors P_t, P_m; arrows show P_i reading its neighbors' registers.]

Processor P_i reads the registers of its neighbors in the original graph. P_i then reaches a safe state and waits for notification that its children have also reached a safe state.
Example (cont.)

[Figure: safe messages travel up the tree while P_i writes to its neighbors' registers.]

Whenever P_i receives a safe message from both its children P_t and P_m, P_i writes to its neighbors' registers and informs its parent that it has reached a safe state.
Self-Stabilizing Beta-Synchronizer

1. Root: do forever
2.   forall P_j ∊ children(i) do lr_ji := read(r_ji)
3.   if ∀ P_j ∊ children(i): lr_ji.color = color_i then
4.     color_i := (color_i + 1) mod (5n - 3)
5.     forall P_j ∊ children(i) do write r_ij.color := color_i
6. od
Self-Stabilizing Beta-Synchronizer (cont.)

7. Other: do forever
8.   forall P_j ∊ {children(i) ∪ parent} do lr_ji := read(r_ji)
9.   if color_i ≠ lr_parent,i.color then
10.    color_i := lr_parent,i.color
11.    forall P_j ∊ children(i) do write r_ij.color := color_i
12.  else if ∀ P_j ∊ children(i): lr_ji.color = color_i then
13.    write r_i,parent.color := color_i
14. od

Note that a processor forwards a newly adopted color to its children immediately (line 11); otherwise the color would never reach the grandchildren, and the convergecast of line 13 could never complete.
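The coloring scheme can be exercised on a small tree. The sketch below is a round-based Python model under illustrative assumptions: `regs[(i, j)]` stands for the register r_ij written by P_i and read by P_j, the schedule is a fixed sequential order, and the tree and initial (corrupted) colors are arbitrary. A non-root adopts its parent's color and forwards it down, and reports the color up once all its children match, which corresponds to the broadcast/convergecast reading of the algorithm.

```python
n = 4                                   # number of processors
parent = {1: 0, 2: 0, 3: 1}             # a small tree rooted at 0
children = {0: [1, 2], 1: [3], 2: [], 3: []}
color = {0: 0, 1: 5, 2: 3, 3: 7}        # arbitrary initial colors
regs = {(1, 0): 0, (2, 0): 0, (0, 1): 4, (0, 2): 4, (3, 1): 1, (1, 3): 2}

def root_step(i=0):
    # Once every child reports the root's current color, choose a fresh
    # color modulo 5n - 3 and broadcast it to the children's registers.
    if all(regs[(j, i)] == color[i] for j in children[i]):
        color[i] = (color[i] + 1) % (5 * n - 3)
        for j in children[i]:
            regs[(i, j)] = color[i]

def other_step(i):
    # Adopt the parent's color and forward it down; once all children
    # have caught up, report the color to the parent (the convergecast).
    if color[i] != regs[(parent[i], i)]:
        color[i] = regs[(parent[i], i)]
        for j in children[i]:
            regs[(i, j)] = color[i]
    elif all(regs[(j, i)] == color[i] for j in children[i]):
        regs[(i, parent[i])] = color[i]

uniform_rounds = 0
for _ in range(8 * n):                  # O(dn) cycles suffice to stabilize
    root_step()
    for i in (1, 2, 3):
        other_step(i)
    uniform_rounds += len(set(color.values())) == 1

assert uniform_rounds > 0               # uniformly colored configurations recur
```

Each full color change corresponds to one emulated pulse: the new color plays the role of the pulse message on the way down, and the matching reports play the role of the safe messages on the way up.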
A Few Observations

The presented algorithm assumes the existence of a rooted spanning tree. Therefore, the final self-stabilizing beta-synchronizer is a fair composition of:
- the coloring algorithm
- the rooted-spanning-tree construction algorithm (which is itself a fair composition of the spanning-tree-construction algorithm and the leader-election algorithm)
Observations (cont.)

During the execution, the root processor repeatedly checks its children's colors. Whenever the root discovers that the subtrees rooted at its children are colored by its current color, it chooses a new color and communicates the new color to each of its children P_i in the register r_root,i.
Observations (cont.)

For the other processors: the root broadcasts its color down its rooted tree towards the leaves (equivalent to the broadcast of the pulse message by the root processor in the second stage). The leaves' colors are convergecast in the tree towards the root (equivalent to the convergecast of the safe messages from the leaves towards the root in the third stage).
Two Lemmas

Lemma: In every fair execution, the root changes color at least once in every 2d + 1 successive cycles.
Lemma: A configuration in which the colors of all the processors are equal is reached within O(dn) cycles.
Conclusions

From both lemmas we conclude that the algorithm reaches a safe configuration within O(dn) cycles. Every execution starting from a safe configuration is legal (belongs to LE), as described in the informal introduction to the beta-synchronizer.
How is a synchronous step executed?

Just before each time a processor P_i changes its color, it reads the communication registers of its neighbors, as in the synchronous step. Just before each time a non-root processor P_i writes a color to its parent in the tree, P_i writes new values to its communication registers, as in the second part of the synchronous step.