Hwajung Lee

Why do we need these notations? Don’t we already know a lot about programming? Well, we need to capture the notions of atomicity, non-determinism, fairness, etc. These concepts are not built into languages like Java or C++!

Structure of a program

In the opening line:
  program <program name>;

To define variables and constants:
  define <variable name> : <type>;
  (ex 1) define n : message;
  (ex 2) type message = record
             a : integer
             b : integer
             c : boolean
         end
         define m : message

To assign an initial value to a variable:
  initially <variable name> = <value>;
  (ex) initially x = 0;

A simple assignment:
  <variable> := <expression>
  (ex) x := E

A compound assignment:
  <variable 1>, <variable 2> := <expression 1>, <expression 2>
  (ex) x, y := m.a, 2
  It is equivalent to x := m.a and y := 2.
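
As an aside, the compound assignment maps directly onto simultaneous (tuple) assignment in Python. The sketch below is only an illustration of the semantics; the Message dataclass is a hypothetical stand-in for the record type defined above.

```python
from dataclasses import dataclass

@dataclass
class Message:          # plays the role of "type message = record ... end"
    a: int
    b: int
    c: bool

m = Message(a=5, b=9, c=True)
x, y = 0, 0             # initially x = 0; y = 0

# Compound assignment x, y := m.a, 2:
# both right-hand sides are evaluated first, then assigned together.
x, y = m.a, 2
print(x, y)             # -> 5 2

# Simultaneous evaluation matters when variables appear on both sides;
# x, y := y, x swaps the two values in one step.
x, y = y, x
print(x, y)             # -> 2 5
```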

Example (We will revisit this program later.)

program uncertain;
define x : integer;
initially x = 0;
do x < 4 → x := x + 1
[] x = 3 → x := 0
od

Guarded Action: Conditional Statement

G → A is equivalent to if G then A.
(The symbol ¬ denotes negation: not.)

Sequential actions: S0; S1; S2; ... ; Sn
Alternative constructs: if ... fi
Repetitive constructs: do ... od

The notation is useful for representing abstract algorithms, not executable code.

Alternative construct

if G1 → S1
[] G2 → S2
   ...
[] Gn → Sn
fi

When no guard is true, skip (do nothing). When multiple guards are true, the choice of the action to be executed is completely arbitrary.

Repetitive construct

do G1 → S1
[] G2 → S2
   ...
[] Gn → Sn
od

Keep executing the actions until all guards are false, at which point the program terminates. When multiple guards are true, the choice of the action is arbitrary.
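
To make the operational meaning concrete, here is a minimal sketch, not part of the original notes, of a do ... od interpreter in Python: guards are predicates over the state, actions are state updates, and on every step one of the currently true guards is chosen arbitrarily.

```python
import random

def run_do_od(state, guarded_actions, max_steps=1000):
    """Execute a do ... od loop: repeat until no guard is true.

    guarded_actions is a list of (guard, action) pairs, where
    guard(state) -> bool and action(state) mutates the state.
    The choice among the enabled guards is arbitrary (here: random).
    """
    for _ in range(max_steps):
        enabled = [action for guard, action in guarded_actions if guard(state)]
        if not enabled:                 # all guards false: the loop terminates
            return state
        random.choice(enabled)(state)   # arbitrary choice among enabled actions
    raise RuntimeError("no termination within max_steps")

# Example: do x < 3 -> x := x + 1 od
state = run_do_od({"x": 0}, [(lambda s: s["x"] < 3,
                              lambda s: s.update(x=s["x"] + 1))])
print(state)                            # {'x': 3}
```

The max_steps cap is only there so that a non-terminating run of the sketch stops; the abstract notation itself has no such bound.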

{program for process i}
do ∃ j ∈ neighbor(i) : c(j) = c(i) → c(i) := 1 - c(i)
od

Will the above computation terminate? There are four processes. The system has to reach a configuration in which no two neighboring processes have the same color.
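
To experiment with the question, here is a rough sketch with assumptions of my own (the four processes sit on a ring, and a serial scheduler picks one enabled process at random); it only reports what a single random run does, which is exactly what the termination question asks us to reason about.

```python
import random

def neighbors(i, n=4):
    return [(i - 1) % n, (i + 1) % n]        # assume a ring of four processes

def run_coloring(max_steps=10_000):
    c = [random.randint(0, 1) for _ in range(4)]
    for step in range(max_steps):
        enabled = [i for i in range(4)
                   if any(c[j] == c[i] for j in neighbors(i))]
        if not enabled:                      # no two neighbors share a color
            return step, c
        i = random.choice(enabled)           # serial scheduler: one action at a time
        c[i] = 1 - c[i]
    return None, c                           # this run did not terminate in time

print(run_coloring())
```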

program uncertain;
define x : integer;
initially x = 0
do x < 4 → x := x + 1
[] x = 3 → x := 0
od

Question. Will the program terminate?
Our goal here is to understand fairness.
A major issue in a distributed computation is global termination.
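
To see why the answer depends on the scheduler, the following self-contained sketch (my own illustration, not part of the notes) runs program uncertain under two schedulers: one that always favors the first enabled guard, and one that always favors the last.

```python
def run_uncertain(choose, max_steps=50):
    """Run 'program uncertain' with a given scheduler.

    choose(enabled) picks one index from the list of enabled guards:
    guard 0 is x < 4 (action x := x + 1), guard 1 is x = 3 (action x := 0).
    """
    x = 0
    for _ in range(max_steps):
        enabled = []
        if x < 4:
            enabled.append(0)
        if x == 3:
            enabled.append(1)
        if not enabled:
            return f"terminated with x = {x}"
        if choose(enabled) == 0:
            x = x + 1
        else:
            x = 0
    return f"still running after {max_steps} steps, x = {x}"

# A scheduler that always favors the first enabled guard drives x to 4 and stops;
# one that picks the second guard whenever x = 3 never lets x reach 4.
print(run_uncertain(lambda enabled: enabled[0]))
print(run_uncertain(lambda enabled: enabled[-1]))
```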

A distributed computation can be viewed as a game between the system and an adversary. The adversary may come up with feasible schedules to challenge the system and cause “bad things”. A correct algorithm must be able to prevent those bad things from happening.

Deterministic Computation
  The behavior remains the same during every run of the program.

Nondeterministic Computation
  The behavior of a program may differ from run to run, since the scheduler may choose a different alternative action each time.

define x : array [0..k-1] of boolean
initially all channels are empty

do ¬empty(c0) → send ACK along c0
[] ¬empty(c1) → send ACK along c1
   ...
[] ¬empty(ck-1) → send ACK along ck-1
od

For example, if all three requests are sent simultaneously, client 2 or client 3 may never get the token with a deterministic scheduler! The outcome could have been different if the server made a non-deterministic choice.

Non-determinism is abundant in the real world (examples?). Determinism is a special case of non-determinism.
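
The same server loop can be sketched in Python; the channel names, the re-sending client, and the two schedulers below are my own assumptions, used only to show how a fixed scan order can starve a channel that a non-deterministic choice would eventually serve.

```python
import random
from collections import deque

def serve(channels, choose, rounds=9):
    """One iteration of the do ... od loop per round: pick an enabled guard
    (a non-empty channel) according to 'choose' and send an ACK on it."""
    served = []
    for _ in range(rounds):
        enabled = [name for name, q in channels.items() if q]
        if not enabled:
            break
        name = choose(enabled)
        channels[name].popleft()          # consume one request
        served.append(name)               # "send ACK along" that channel
        channels["c0"].append("req")      # client 0 keeps re-sending requests
    return served

def make_channels():
    # all three clients have a request pending at the start
    return {"c0": deque(["req"]), "c1": deque(["req"]), "c2": deque(["req"])}

# Deterministic scheduler: always serve the first non-empty channel.
print(serve(make_channels(), lambda enabled: enabled[0]))
# -> c0 is served every time; c1 and c2 starve.

# Non-deterministic scheduler: pick an enabled channel at random.
print(serve(make_channels(), lambda enabled: random.choice(enabled)))
# -> c1 and c2 eventually get their ACKs (with high probability).
```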

Non-determinism is abundant in the real world.
  If multiple processes are ready to execute actions, which one executes first is nondeterministic.
  If message propagation delays are arbitrary, the order of message reception is non-deterministic.

Determinism imposes a specific order and is a special case of non-determinism.

Atomic = all or nothing. Atomic actions = indivisible actions.

do red message → x := 0 {red action}
[] blue message → x := 7 {blue action}
od

Regardless of how nondeterminism is handled, we would expect the value of x to be an arbitrary sequence of 0's and 7's. Right or wrong?

do red message → x := 0 {red action}
[] blue message → x := 7 {blue action}
od

Let x be a 3-bit integer x2 x1 x0, so x := 7 means x2 := 1, x1 := 1, x0 := 1, and x := 0 means x2 := 0, x1 := 0, x0 := 0.

If the assignment is not atomic, then many interleavings are possible, leading to other possible values of x. So, the answer may depend on atomicity.
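
A short sketch of my own makes the point explicit: it enumerates the interleavings of the two non-atomic, bit-by-bit assignments and collects the final values of x that can arise; values other than 0 and 7 do appear.

```python
from itertools import combinations

# Non-atomic versions of the two actions: each writes the bits of x one at a time.
red_writes  = [("x2", 0), ("x1", 0), ("x0", 0)]   # x := 0, bit by bit
blue_writes = [("x2", 1), ("x1", 1), ("x0", 1)]   # x := 7, bit by bit

def final_values():
    results = set()
    n = len(red_writes) + len(blue_writes)
    # Choose which positions of the interleaving are taken by the red writes;
    # within each action, the order of its own writes is preserved.
    for red_positions in combinations(range(n), len(red_writes)):
        bits = {"x2": None, "x1": None, "x0": None}
        r = b = 0
        for pos in range(n):
            if pos in red_positions:
                name, val = red_writes[r]; r += 1
            else:
                name, val = blue_writes[b]; b += 1
            bits[name] = val                       # last writer of each bit wins
        results.add(4 * bits["x2"] + 2 * bits["x1"] + bits["x0"])
    return sorted(results)

print(final_values())   # more values than just 0 and 7 show up
```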

Does hardware guarantee any form of atomicity? Yes! Transactions are atomic by definition (in spite of process failures). Also, critical section code is atomic.

We will assume that G → A is an "atomic operation." Does it make a difference if it is not so?

if x ≠ y → y := x fi
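
A small thread-based sketch (my own, with an artificial delay to widen the race window) shows what can go wrong when the guard test and the action are separate steps: another process may change x between the test x ≠ y and the assignment y := x.

```python
import threading
import time

x, y = 1, 0     # the guard x != y is true initially

def guarded_copy():
    """Non-atomic version of: if x != y -> y := x fi"""
    global y
    if x != y:                       # the guard is evaluated here ...
        seen_by_guard = x
        time.sleep(0.01)             # ... artificial delay: widen the race window
        y = x                        # the action uses the *current* x, not the tested one
        if y != seen_by_guard:
            print("guard saw x =", seen_by_guard, "but the action copied x =", y)

def interfering_neighbor():
    global x
    x = 42                           # a neighbor's action changes x concurrently

t1 = threading.Thread(target=guarded_copy)
t2 = threading.Thread(target=interfering_neighbor)
t1.start(); t2.start()
t1.join(); t2.join()
print("final:", x, y)                # typically x = 42 and y = 42, not y = 1
```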

{Program for P}
define b : boolean
initially b = true

do b → send msg m to Q
[] ¬empty(R,P) → receive msg; b := false
od

Suppose it takes 15 seconds to send the message. After 5 seconds, P receives a message from R. Will it stop sending the remainder of the message? NO.

Scheduler / demon / adversary

Defines the choices or restrictions on the scheduling of actions. No such restriction implies an unfair scheduler. For fair schedulers, the following types of fairness have received attention:
  Unconditional fairness
  Weak fairness
  Strong fairness

program test
define x : integer {initial value unknown}
do true → x := 0
[] x = 0 → x := 1
[] x = 1 → x := 2
od

An unfair scheduler may never schedule the second (or the third) action. So, x may always be equal to zero.

An unconditionally fair scheduler will eventually give every statement a chance to execute, without checking its eligibility. (Example: the process scheduler in a multiprogrammed OS.)

program test
define x : integer {initial value unknown}
do true → x := 0
[] x = 0 → x := 1
[] x = 1 → x := 2
od

A scheduler is weakly fair when it eventually executes every guarded action whose guard becomes true and remains true thereafter.

A weakly fair scheduler will eventually execute the second action, but may never execute the third action. Why?

program test
define x : integer {initial value unknown}
do true → x := 0
[] x = 0 → x := 1
[] x = 1 → x := 2
od

A scheduler is strongly fair when it eventually executes every guarded action whose guard is true infinitely often.

The third statement will be executed under a strongly fair scheduler. Why?
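
Fairness is a property of infinite schedules, so a finite simulation can only illustrate the idea; the sketch below, my own, replays three representative finite schedules of program test and checks whether x ever reaches 2.

```python
def run_test(schedule):
    """Run 'program test' under a fixed schedule of action indices (1, 2, 3).
    An action is skipped if its guard is false.  Returns True if x ever becomes 2."""
    x = 0
    reached_two = False
    for a in schedule:
        if a == 1:                 # guard: true
            x = 0
        elif a == 2 and x == 0:    # guard: x = 0
            x = 1
        elif a == 3 and x == 1:    # guard: x = 1
            x = 2
        reached_two = reached_two or (x == 2)
    return reached_two

# Unfair: only the first action is ever scheduled -> x stays 0.
print(run_test([1] * 100))
# A weakly fair schedule: actions 1 and 2 alternate; action 3's guard (x = 1)
# becomes true again and again but never *stays* true, so weak fairness does
# not force it, and x never reaches 2.
print(run_test([1, 2] * 50))
# A strongly fair scheduler must eventually pick action 3, since its guard
# holds infinitely often; e.g. the cycle 1, 2, 3 lets x reach 2.
print(run_test([1, 2, 3] * 34))
```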

Distributed Scheduler
  Since each individual process has a local scheduler, the scheduling decision is left to these individual schedulers, without attempting any kind of global coordination.

Central Scheduler (or Serial Scheduler)
  It is based on the interleaving of actions. It assumes that an invisible demon finds out all the guards that are enabled, arbitrarily picks any one of them, schedules the corresponding action, and waits for the completion of this action before re-evaluating the guards.

Goal: To make x[(i+1) mod 2] = x[i] {in all processes i}

do x[(i+1) mod 2] ≠ x[i] → x[i] := ¬x[i] od

Will this program terminate?
  using a distributed scheduler
  using a central scheduler
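
The contrast can be sketched as follows (assumptions mine): the central scheduler executes one enabled action at a time, while the distributed scheduler is modeled in its worst case, with both processes evaluating their guards on the old state and then flipping simultaneously.

```python
def central(x, max_steps=100):
    """Serial scheduler: one enabled action at a time."""
    for step in range(max_steps):
        enabled = [i for i in range(2) if x[(i + 1) % 2] != x[i]]
        if not enabled:
            return f"terminated after {step} steps with x = {x}"
        i = enabled[0]
        x[i] = not x[i]
    return f"no termination within {max_steps} steps, x = {x}"

def distributed_worst_case(x, max_steps=100):
    """Both processes read the old state, then both flip in the same step."""
    for step in range(max_steps):
        enabled = [i for i in range(2) if x[(i + 1) % 2] != x[i]]
        if not enabled:
            return f"terminated after {step} steps with x = {x}"
        new_x = x[:]
        for i in enabled:              # all enabled actions fire together
            new_x[i] = not x[i]
        x = new_x
    return f"no termination within {max_steps} steps, x = {x}"

print(central([False, True]))                 # one flip suffices
print(distributed_worst_case([False, True]))  # the two copies keep swapping forever
```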

Example
  Let y[k,i] denote the local copy of the state x[k] of process k, as maintained by a neighboring process i.
  ▪ To evaluate its guard, process i copies the state of each neighbor k, that is, y[k,i] := x[k].
  ▪ Each process evaluates its guard(s) using the local copies of its neighbors' states and decides whether an action will be scheduled.
  ▪ The number of steps allowed for copying the neighbors' states depends on the grain of atomicity.
  ▪ Read-write atomicity (fine-grain atomicity): only one read at a time.
  ▪ Coarse-grain atomicity: all the reads can be done in a single step.
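
A rough sketch of the two grains of atomicity (topology, guard, and interference pattern are my own assumptions): under coarse-grain atomicity all neighbor states are read in one step, while under read-write atomicity the reads are separate steps between which neighbors may change state, so the guard may be evaluated on a stale mixture.

```python
def neighbors(i):
    return [0, 1] if i == 2 else [2]        # assumed tiny topology: 0 - 2 - 1

def evaluate_guard_coarse(i, x, guard):
    """Coarse-grain atomicity: read all neighbors' states in one atomic step."""
    y = {k: x[k] for k in neighbors(i)}     # y[k] plays the role of y[k, i]
    return guard(y)

def evaluate_guard_fine(i, x, guard, interference):
    """Read-write atomicity: one read per step; other processes may act
    between the reads (modeled here by the 'interference' callback)."""
    y = {}
    for k in neighbors(i):
        y[k] = x[k]                         # a single read is atomic
        interference(x)                     # neighbors may change state now
    return guard(y)

# Process 2 wants to act when every neighbor currently has the value 0.
guard = lambda y: all(v == 0 for v in y.values())

x = {0: 0, 1: 0, 2: 0}
print(evaluate_guard_coarse(2, x, guard))        # True, based on a consistent snapshot

def neighbor_moves(state):
    state[0] = 1                                 # neighbor 0 changes right after being read

x = {0: 0, 1: 0, 2: 0}
print(evaluate_guard_fine(2, x, guard, neighbor_moves), x)
# True, yet x[0] is already 1: the decision was based on a stale copy
```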

Advantage
  Correctness proofs are relatively easy.

Disadvantage
  Poor parallelism and poor scalability.

No system functions correctly with a distributed scheduler unless it functions correctly under a central scheduler.

In restricted cases, correct behavior with a central scheduler guarantees correct behavior with a distributed scheduler.

Theorem 4.1 If a distributed system works correctly with a central scheduler, and no enabled guard of a process is disabled by the actions of its neighbors, then the system is also correct with a distributed scheduler.

Proof. Assume that i and j are neighboring processes. Consider the following four events: (1) the evaluation of Gi as true; (2) the execution of Si; (3) the evaluation of Gj as true; and (4) the execution of Sj. A distributed scheduler allows the following schedules:
  ▪ Case 1: (1)(2)(3)(4)
  ▪ Case 2: (1)(3)(4)(2)
  ▪ Case 3: (1)(3)(2)(4)
Since, by the hypothesis, the actions of a neighbor cannot disable an enabled guard, Case 2 and Case 3 can be reduced to Case 1, and Case 1 corresponds to a schedule under a central scheduler. Thus, the theorem is proven.