Chapter 4: Self-Stabilization. From: Shlomi Dolev, Self-Stabilization, MIT Press, 2000. Draft of October 2003, © Shlomi Dolev, All Rights Reserved.

Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems

Token Passing: Converting a Central Daemon to read/write
Distributed Daemon – activates a selected set of processors simultaneously to execute a computation step: the daemon chooses a set of processors, each processor in the set simultaneously reads from its neighbors, and then all of them write their new states.
Central Daemon – a special case of the Distributed Daemon in which the chosen set contains exactly one processor.
Synchronous System – a special case in which the set consists of all the processors in the system.

The use of a Central Daemon
The self-stabilization literature is rich in algorithms that assume the existence of powerful schedulers. Why?
1. Dijkstra's choices in the first work in the field.
2. The assumption that a daemon exists lets the designer consider only a subset of the possible executions.
An algorithm designed to work in read/write atomicity can be used in any system in which a daemon exists, but the reverse does not hold.

Compiler
An algorithm designed to stabilize in the presence of a distributed daemon also stabilizes in a system with a central daemon. All the above facts motivate the design of a compiler:
AL for T (daemon) → Compiler → AL for T (read/write)

How does the compiler work?
AL for T (read/write) = AL for T (daemon) ◦ Mutual Exclusion
1. Compose a spanning tree using the Spanning Tree Construction algorithm.
2. Construct an Euler tour on the tree to create a virtual ring for the Mutual Exclusion algorithm (a sketch of this step appears below).
3. A processor that enters the critical section reads the states of its neighbors, changes state and writes, and then exits the critical section.
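A minimal Python sketch of step 2 (an illustration, not the book's code; the tree representation and names are assumptions): given a rooted spanning tree as a map from a node to its children, an Euler tour traverses every tree edge twice and yields the virtual ring along which the mutual-exclusion token travels.

def euler_tour(tree, root):
    # Return the virtual ring: the order in which the token visits processors.
    ring = []
    def visit(node):
        ring.append(node)
        for child in tree.get(node, []):
            visit(child)
            ring.append(node)   # come back to the parent after touring the subtree
    visit(root)
    return ring

# Example: root 1 with children 2 and 3, and 4 a child of 2.
print(euler_tour({1: [2, 3], 2: [4]}, 1))   # [1, 2, 4, 2, 1, 3, 1]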

Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems

Converting Shared Memory to Message Passing
Designing a self-stabilizing algorithm for asynchronous message-passing systems is more subtle than the same task in shared-memory systems.
Main difficulty: the messages stored in the communication links –
– No bound on message delivery time
– No bound on the number of messages that can be in a link
Hence there are infinitely many initial configurations from which the system must stabilize.

Our main goal is to design a compiler:
AL for T (read/write) → Compiler → AL for T (message passing)
The first step in designing such a compiler is a self-stabilizing data-link algorithm.

Definition of a Self-Stabilizing Data-Link Algorithm
Data-Link Algorithm: messages fetched by the sender from the network layer should be delivered by the receiver to the network layer without duplication, omission or reordering.
One implementation of the data-link task is the token-passing algorithm. The token-passing task is a set of executions TP. A legal execution of TP is a sequence of configurations in which:
- No more than one processor holds the token
- Both the sender and the receiver hold the token in infinitely many configurations

Unbounded solution of the TP task
The sender (and the receiver) maintains an unbounded local variable called counter; each message carries an integer label called MsgCounter. A timeout mechanism ensures that the system does not enter a communication-deadlock configuration.
Sender:
01 upon timeout
02   send(counter)
03 upon message arrival
04 begin
05   receive(MsgCounter)
06   if MsgCounter ≥ counter then
07   begin
08     counter := MsgCounter + 1
09     send(counter)
10   end
11   else send(counter)
12 end

Unbounded solution of the TP task
Receiver:
13 upon message arrival
14 begin
15   receive(MsgCounter)        (token arrives)
16   if MsgCounter ≠ counter then
17     counter := MsgCounter
18   send(counter)              (token released)
19 end
In a safe configuration of TP and the algorithm, the counter values of all messages in transit and the counters of the sender and the receiver have the same value (Lemma 4.1).
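The following Python sketch (an illustration, not the book's code; FIFO links are modelled as deques and the timeout fires only when both links are empty) simulates the sender and the receiver above from an arbitrary initial configuration. After a few exchanges the stale counters are flushed from the links and the receiver echoes the sender's current counter.

from collections import deque

def simulate(sender_counter, receiver_counter, link_to_receiver, link_to_sender, rounds=20):
    to_receiver = deque(link_to_receiver)      # sender -> receiver link contents
    to_sender = deque(link_to_sender)          # receiver -> sender link contents
    for _ in range(rounds):
        if not to_receiver and not to_sender:  # timeout: avoid communication deadlock
            to_receiver.append(sender_counter)
        if to_receiver:                        # receiver step (lines 13-19)
            msg = to_receiver.popleft()
            if msg != receiver_counter:
                receiver_counter = msg
            to_sender.append(receiver_counter)
        if to_sender:                          # sender step (lines 03-12)
            msg = to_sender.popleft()
            if msg >= sender_counter:
                sender_counter = msg + 1
            to_receiver.append(sender_counter)
    return sender_counter, receiver_counter, list(to_receiver), list(to_sender)

# Arbitrary (corrupted) initial configuration with stale messages in both links:
print(simulate(5, 9, [3, 7], [9]))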

The algorithm is self-stabilizing
For every possible configuration c, every fair execution that starts in c reaches a safe configuration with relation to TP (Theorem 4.1).
Question: Can the unbounded counter and label be eliminated from the algorithm?
Answer: No.

Lower Bound on the System Memory
The memory of the system in configuration c is the number of bits needed to encode the states of the sender, the receiver, and the messages in transit.
Weak-Exclusion task (WE): in every legal execution E, there exists a combination of steps, one step for each processor, such that these steps are never executed concurrently.
We will prove that there is no bound on the system memory for the WE task.

Lower Bound on the System Memory
Theorem: For any self-stabilizing message-driven protocol for the WE task and for any execution E in WE, all the configurations are distinct. Hence, for any t > 0, the size of at least one of the first t configurations in E is at least log₂(t): t distinct configurations need t distinct encodings, and fewer than log₂(t) bits per configuration would allow fewer than t of them.

Proof of the Theorem
Any execution E′ in which not all the configurations are distinct has a circular sub-execution E = (c₁, a₂, …, c_m) where c₁ = c_m.
[Figure: the sender P_s and the receiver P_r with states s₁, r₁ in c₁, the link contents q_{s,r}(c₁) and q_{r,s}(c₁), and the messages qS_{s,r}(E), qS_{r,s}(E) sent during E.]

Proof – Building CE and c_init
Let E be the circular sub-execution, S_i the sequence of steps of P_i, and CE the set of circular executions. Each execution in CE is a merge of the S_i's that keeps their internal order. The initial configuration c_init of an execution E_c in CE is obtained from c₁ of E together with the sequence of messages sent during E: the processors start in the states s₁ and r₁, and each link holds its queue from c₁, q_{s,r}(c₁) and q_{r,s}(c₁), followed by the messages qS_{s,r}(E) and qS_{r,s}(E) sent during E.

Proof – Possible executions in CE
[Figure: starting from c_init, each processor can replay its own steps of E independently: the receiver receives qR_{s,r}(E) and sends qS_{r,s}(E), and the sender receives qR_{r,s}(E) and sends qS_{s,r}(E).]

Proof – cont.
The sender steps during E are S_sender = {a₁, a₂, …, a_m} and the receiver steps during E are S_receiver = {b₁, b₂, …, b_k}. For any pair (a_i, b_j) there exists E″ ∈ CE in which there is a configuration c, reached from c_init by executing a₁, …, a_{i-1} and b₁, …, b_{j-1}, such that both a_i and b_j are applicable in c ⇒ c is not a safe configuration.

Proof – cont.
If a self-stabilizing algorithm AL for the WE task has a circular sub-execution E, then there exists an infinite fair execution E∞ of AL, none of whose configurations is safe for WE: starting from c_init, concatenate executions from CE(E) one after the other.

Proof – cont.
Assume towards a contradiction that c₁ is safe. Extend E′ to E∞ by E₁, E₂, …, E_k, executions from CE(E). For each pair (a_i, b_j) there is a configuration c′ in E∞ such that both a_i and b_j are applicable in c′ ⇒ c′ is not safe. The proof is complete.

Bounded-Link Solution
Let cap be the bound on the number of messages in transit. The algorithm is the same as presented before, with the counter incremented modulo cap + 1:
08   counter := (MsgCounter + 1) mod (cap + 1)
Intuitively, since there are more possible counter values than messages that can be in transit, the sender must eventually introduce a counter value that does not exist in any message in transit.
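A minimal Python sketch (names are assumptions) of the bounded variant's key property: the counter lives in {0, …, cap}, one more value than the number of stale counters that can sit in the links, so repeated increments eventually produce a value absent from every message in transit.

def next_counter(msg_counter, cap):
    return (msg_counter + 1) % (cap + 1)   # the modified line 08

cap = 4
in_transit = [3, 0, 2, 1]                  # at most cap stale counter values
counter = 3
while counter in in_transit:               # terminates: only cap values can be blocked
    counter = next_counter(counter, cap)
print(counter)                             # 4, a value no stale message carries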

Randomized Solution
The algorithm is the same as the original one, with the counter replaced by a label chosen at random:
08   label := ChooseLabel(MsgLabel)
At least three labels should be used. The sender repeatedly sends a message with a particular label L until a message with the same label L arrives; it then chooses the next label L′ at random from the remaining labels, so that L′ ≠ L.
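A minimal Python sketch (names are assumptions) of the random label choice: once the sender has seen its current label L echoed back, it draws the next label uniformly from the other labels.

import random

LABELS = (0, 1, 2)                     # any label set of size >= 3 works

def choose_label(current_label):
    # Pick the next label uniformly from the labels different from the current one.
    return random.choice([l for l in LABELS if l != current_label])

print(choose_label(2))                 # prints 0 or 1, never 2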

Self-Stabilizing Simulation of Shared Memory
The heart of the simulation is a self-stabilizing implementation of the read and write operations. The simulation implements these operations by using a self-stabilizing token-passing algorithm. The algorithm runs on the two links connecting each pair of neighbors; in each link the processor with the larger ID acts as the sender and the other as the receiver (recall: all processors have distinct IDs).

Self-Stabilizing Simulation of Shared Memory – cont.
Every time P_i receives a token from P_j, P_i writes the current value of R_ij into the value field of the token.
A write operation of P_i into r_ij is implemented by locally writing into R_ij.
A read operation of P_i from r_ji is implemented by:
1. P_i receives the token from P_j.
2. P_i receives the token from P_j again, and returns the value attached to this token (see the sketch below).
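A minimal Python sketch of the two operations, under assumed names. The token arrivals are supplied by the self-stabilizing token-passing algorithm of the previous slides; here they are modelled by a generator that attaches the owner's register value on every round trip. The first arrival is skipped because its value may have been attached before the read began.

class Processor:
    def __init__(self):
        self.R = None              # local copy of the simulated register

    def write(self, value):        # a write operation is purely local
        self.R = value

def token_arrivals(owner):
    # Each round trip: the token visits `owner`, which attaches the current
    # value of its register, and the token then arrives back at the reader.
    while True:
        yield owner.R

def read(arrivals):
    # Read of r_ji by P_i: skip the first arrival, return the second one's value.
    next(arrivals)
    return next(arrivals)

p_j = Processor()
p_j.write("y")                     # P_j writes y into r_ji
print(read(token_arrivals(p_j)))   # P_i's read returns "y"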

Self-Stabilizing Simulation of Shared Memory – Run
[Figure: processors P_i and P_j with registers R_ij and R_ji and a token carrying a value field.]
Write operation: P_i writes x to r_ij by writing to its local R_ij.
Read operation: P_i reads the value y from r_ji as follows:
1. P_i receives the token from P_j.
2. P_i sends the token to P_j.
3. P_j receives the token from P_i and writes R_ji (= y) into the value field of the token.
4. P_i receives the token from P_j and reads the value y attached to the token.

Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems

Converting an Id-based System to a Special-processor System
Our goal is to design a compiler that converts a self-stabilizing algorithm designed for a unique-ID system to work in a special-processor system. The ranking (compiler) task is to assign each of the n processors in the system a unique identifier in the range 1 to n.

Converting an Id-based System to a Special-processor System
We form the self-stabilizing ranking algorithm by running three self-stabilizing algorithms one after the other:
1) Self-stabilizing spanning tree construction (section 2.5)
2) Self-stabilizing counting algorithm
3) Self-stabilizing naming algorithm
[Figure: a special-processor system is turned into a unique-IDs system by running the spanning tree construction, then the counting algorithm, then the naming algorithm.]

Self-Stabilizing Counting Algorithm
Assume a rooted spanning-tree system in which every processor knows its parent and its children. P_i has a variable count_i that holds the number of processors in the subtree rooted at P_i. The correctness proof is by induction on the height of a processor.

Self-Stabilizing Counting Algorithm
Each processor computes count_i by summing the count values in the r_ji registers of its children and adding 1 for itself; a non-root processor then writes its count into the communication register read by its parent.
01 Root: do forever
02   sum := 0
03   forall P_j ∈ children(i) do
04     lr_ji := read(r_ji)
05     sum := sum + lr_ji.count
06   od
07   count_i := sum + 1
08 od
09 Other: do forever
10   sum := 0
11   forall P_j ∈ children(i) do
12     lr_ji := read(r_ji)
13     sum := sum + lr_ji.count
14   od
15   count_i := sum + 1
16   write r_{i,parent}.count := count_i
17 od
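A minimal Python sketch (tree representation and names are assumptions) of the counting rule under a synchronous schedule: starting from arbitrary counts, a processor at height h holds the correct value after h + 1 sweeps, mirroring the induction in the correctness proof.

def counting_sweep(children, count):
    # One sweep of lines 01-17: every processor sets its count to
    # one plus the sum of the counts currently reported by its children.
    return {p: 1 + sum(count[c] for c in children.get(p, [])) for p in count}

children = {1: [2, 3], 3: [4, 5]}                 # root 1; processor 3 has children 4, 5
count = {1: 99, 2: 7, 3: 0, 4: 5, 5: 5}           # arbitrary (corrupted) initial counts
for _ in range(3):                                # tree height is 2, so 3 sweeps suffice
    count = counting_sweep(children, count)
print(count)                                      # {1: 5, 2: 1, 3: 3, 4: 1, 5: 1}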

Self-Stabilizing Naming Algorithm
The naming algorithm uses the values of the count fields computed by the counting algorithm to assign unique identifiers to the processors. The identifier of a processor is stored in the ID_i variable. The proof of stabilization is by induction on the distance of the processors from the root.

Self-Stabilizing Naming Algorithm
01 Root: do forever
02   ID_i := 1
03   sum := 0
04   forall P_j ∈ children(i) do
05     lr_ji := read(r_ji)
06     write r_ij.identifier := ID_i + sum + 1
07     sum := sum + lr_ji.count
08   od
09 od
10 Other: do forever
11   sum := 0
12   lr_{parent,i} := read(r_{parent,i})
13   ID_i := lr_{parent,i}.identifier
14   forall P_j ∈ children(i) do
15     lr_ji := read(r_ji)
16     write r_ij.identifier := ID_i + sum + 1
17     sum := sum + lr_ji.count
18   od
19 od
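A minimal Python sketch (a sequential rendering under assumed names, not the distributed code) of the naming rule once the counts are correct: the root takes identifier 1, and each processor hands its children consecutive identifier ranges sized by their counts, so the ranges never overlap.

def assign_ids(children, count, root):
    ids = {root: 1}
    def push(p):
        handed_out = 0                           # counts of the children already served
        for c in children.get(p, []):
            ids[c] = ids[p] + 1 + handed_out     # lines 06/16: ID_i + sum + 1
            handed_out += count[c]
            push(c)
    push(root)
    return ids

children = {1: [2, 3], 3: [4, 5]}
count = {1: 5, 2: 1, 3: 3, 4: 1, 5: 1}            # counts from the previous sketch
print(assign_ids(children, count, 1))             # {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}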

Ranking Task – Run
[Figure: an example run on a rooted spanning tree of eight processors. After the counting algorithm stabilizes, the root has count_i = 8 and its children have counts 2, 2 and 3; the naming algorithm then assigns the root ID_i = 1 and ranks the remaining processors 2 through 8 by handing each subtree a consecutive range of identifiers.]

Counting Algorithm for a non-rooted tree
Each processor P_i has a variable count_i[j] for every neighbor P_j. The value of count_i[j] is the number of processors in the subtree of T to which P_i belongs and P_j does not, i.e. the size of P_i's side of the tree when the edge between them is removed. The correctness proof is by induction on the height of the registers r_ij.
01 do forever
02   forall P_j ∈ N(i) do lr_ji := read(r_ji)
03   sum_i := 0
04   forall P_j ∈ N(i) do
05     sum_j := 0
06     forall P_k ∈ N(i) do
07       if P_j ≠ P_k then
08         sum_j := sum_j + lr_ki.count
09     od
10     count_i[j] := sum_j + 1
11     sum_i := sum_i + sum_j
12     write r_ij.count := count_i[j]
13   od
14   count_i := sum_i
15 od
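A minimal Python sketch (an undirected tree as an adjacency map; names are assumptions) of the per-edge rule: count_i[j] is one plus the counts reported by P_i's other neighbors, and repeating the sweep from arbitrary register contents stabilizes to the correct side sizes.

def sweep(adj, count):
    new = {}
    for i in adj:
        for j in adj[i]:                                   # compute count_i[j]
            new[(i, j)] = 1 + sum(count[(k, i)] for k in adj[i] if k != j)
    return new

adj = {1: [2, 3], 2: [1], 3: [1, 4], 4: [3]}               # the path 2 - 1 - 3 - 4
count = {(i, j): 0 for i in adj for j in adj[i]}           # arbitrary initial registers
for _ in range(4):
    count = sweep(adj, count)
print(count[(1, 3)], count[(3, 1)])                        # 2 2: two processors on each side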

Counting Algorithm for a non-rooted tree – Run
[Figure: an example run; after stabilization each processor P_i holds, for every neighbor P_j, the value count_i[j], the number of processors on its side of the edge, so for every edge the two counts at its endpoints sum to the number of processors in the tree.]

Chapter 4: roadmap
4.1 Token Passing: Converting a Central Daemon to read/write
4.2 Data-Link Algorithms: Converting Shared Memory to Message Passing
4.3 Self-Stabilizing Ranking: Converting an Id-based System to a Special-processor System
4.4 Update: Converting a Special Processor to an Id-based Dynamic System
4.5 Stabilizing Synchronizers: Converting Synchronous to Asynchronous Algorithms
4.6 Self-Stabilizing Naming in Uniform Systems: Converting Id-based to Uniform Dynamic Systems

Update – Converting a Special Processor to an Id-based Dynamic System
The task of the update algorithm in an id-based system is to inform each processor of the other processors that are in its connected component. As a result, every processor in the connected component knows the maximal identifier in the system, and a single leader is elected; the update algorithm is thus a self-stabilizing leader-election algorithm that stabilizes within O(d) cycles (pulses), where d is the diameter. The motivation for the restriction that the update must work in an id-based system can be seen by examining Dijkstra's self-stabilizing ME algorithm for a ring of processors…

Dijkstra's proof
Dijkstra proved that without a special processor it is impossible to achieve ME in a self-stabilizing manner. The impossibility proof is for a composite number of identical processors connected in a ring and activated by a central daemon. ME requires that processor P_i executes the critical section if and only if it is the only processor that can change its state (by reading its neighbors' states) at that execution point.

Dijkstra's proof
Consider a ring of four processors in which P1 and P3 start in the same state S0, P2 and P4 start in the same state S0′, and the execution order is P1, P3, P2, P4, … Then P1 and P3 will be at S1, and P2 and P4 will be at S1′. We can see that symmetry in state is preserved.

Conclusions from Dijkstra's proof
Whenever P_1 has permission to execute the critical section, so does P_3 (and similarly for P_2 and P_4). Without a central daemon, the impossibility result for self-stabilizing ME holds also for a ring with a prime number of identical processors: we start in a configuration in which all the processors are in the same state and the contents of all the registers are identical, and an execution that preserves that symmetry forever is one in which every processor reads all its neighbors' registers before any writing is done. The restriction that the update algorithm is designed for an id-based (non-identical) system is thus clear.

The Update algorithm – outline
The algorithm constructs n directed BFS trees, one for every processor.
1. Each processor P_i holds a BFS tree rooted at P_i.
2. When a node P_j at distance k+1 from P_i has more than one neighbor at distance k from P_i, P_j is connected to the neighbor (parent) with the maximal ID.
3. Each P_i reads from its δ neighbors their sets of tuples ⟨j, x⟩, where j is a processor id and x is the distance of j from the processor that holds the tuple.
4. P_i's tuples are computed from these neighbors' sets, adopting for each j ≠ i the tuple with the smallest x and adding 1 to x. P_i also adds the tuple ⟨i, 0⟩ indicating the distance from itself. At the end of the iteration, P_i removes the floating (false) tuples.

Update algorithm
01 do forever
02   Readset_i := Ø
03   forall P_j ∈ N(i) do
04     Readset_i := Readset_i ∪ read(Processors_j)
05   Readset_i := Readset_i \ ⟨i, *⟩            (remove the tuples that concern P_i itself)
06   Readset_i := Readset_i ++ ⟨*, 1⟩           (add 1 to the distance field of every tuple)
07   Readset_i := Readset_i ∪ {⟨i, 0⟩}
08   forall P_j ∈ processors(Readset_i) do
09     Readset_i := Readset_i \ NotMinDist(P_j, Readset_i)
10   write Processors_i := ConPrefix(Readset_i)
11 od
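A minimal Python sketch of one iteration per processor, under assumed names and a synchronous schedule. Keeping the minimum distance per identifier stands in for NotMinDist, and dropping tuples whose distance exceeds n is a simplified stand-in for the removal of floating tuples by ConPrefix.

def update_step(i, adj, processors, n):
    readset = {}
    for j in adj[i]:
        for pid, dist in processors[j].items():     # read(Processors_j)
            if pid != i and dist + 1 <= n:          # drop <i,*>; add 1; drop floating tuples
                readset[pid] = min(readset.get(pid, dist + 1), dist + 1)
    readset[i] = 0                                  # add the tuple <i, 0>
    return readset                                  # write Processors_i

adj = {1: [2], 2: [1, 3], 3: [2]}                   # the path 1 - 2 - 3
processors = {i: {} for i in adj}                   # arbitrary (here empty) registers
for _ in range(len(adj)):                           # O(d) iterations suffice
    processors = {i: update_step(i, adj, processors, len(adj)) for i in adj}
print(processors[1])                                # {3: 2, 2: 1, 1: 0}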