Distributed Computing


Distributed Computing Adam Morrison Some slides based on “Art of Multiprocessor Programming”

Outline Administration Background Mutual exclusion

Administration Mandatory attendance in 11 of the 13 lectures. https://www.cs.tau.ac.il/~afek/dc19.html Grade: 5% class participation, 40% homework (~5 assignments in the semester), 55% project.

Project Analyze some topic or papers. Submit 2-5 pages summarizing your findings. Give a 15-minute talk.

Outline Administration Background Mutual exclusion

Distributed computing [diagram: several independent machines, each running its own code]

Models Message-passing: communicate by messages over the network.

Models Message-passing: communicate by messages over the network. Shared-memory: communicate by reading/writing shared memory.

Models Message-passing & shared-memory are closely connected. Shared-memory can simulate message-passing; proof: implement message queues in software. Message-passing can simulate shared-memory (under assumptions on the number of failures); proof: “Sharing Memory Robustly in Message-Passing Systems” [Attiya, Bar-Noy, Dolev 1990], which won the Dijkstra Prize.
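The first simulation above can be made concrete. The sketch below is my own illustration (not from the lecture): each directed channel is an append-only list in shared memory; the sender only writes, and the receiver keeps a private cursor into the list.

```python
# A minimal sketch of simulating message-passing on top of shared memory:
# a single-sender, single-receiver FIFO channel built from one shared list.
import threading

class Channel:
    def __init__(self):
        self.cells = []   # shared memory: appended to by the sender only
        self.cursor = 0   # receiver-private read position

    def send(self, msg):
        self.cells.append(msg)          # one shared-memory write

    def receive(self):
        while self.cursor >= len(self.cells):
            pass                        # busy-wait for a new message
        msg = self.cells[self.cursor]
        self.cursor += 1
        return msg

ch = Channel()
out = []

def receiver():
    for _ in range(3):
        out.append(ch.receive())

t = threading.Thread(target=receiver)
t.start()
for m in ("a", "b", "c"):
    ch.send(m)
t.join()
print(out)  # ['a', 'b', 'c']: FIFO delivery
```

Since only the sender writes `cells` and only the receiver advances `cursor`, the channel needs no lock of its own.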

This course We focus on the shared-memory model: processes communicate by reading/writing shared memory (rather than by messages over a network).

This course Foundations of distributed computing. Mainly about communication/synchronization between processors, not about parallelism. Problems of agreement will come up a lot. New this year: (the theory of) blockchains.

Outline Administration Background Mutual exclusion

Shared-memory model [diagram: several processes issuing reads and writes to a shared memory]

Shared-memory model An execution consists of a sequence of steps. Each step is a read/write of some memory location. (We don’t care about local computation!)

Shared-memory model An execution consists of a sequence of steps, each a read/write of some memory location. The system is asynchronous: there are sudden, unpredictable delays, and some scheduler picks the next step in an arbitrary way.

Mutual exclusion A lock is an object (variable) with basic methods: Lock() (acquire) and Unlock() (release). The code between Lock(&L) and Unlock(&L) is a critical section of L. The lock algorithm guarantees mutual exclusion: of all callers to Lock(), only one can finish and enter the critical section, until it exits the CS by calling Unlock().

Lock L;
Lock(&L);
…
Unlock(&L);

Mutual exclusion Process execution consists of: repeat forever { remainder section; entry section; critical section; exit section }. The entry and exit sections are defined by the mutual exclusion algorithm. Progress assumption: a process can only halt while in the remainder section.

Mutual exclusion formalism An interval (a0, a1) is the subsequence of events starting with a0 and ending with a1.

Mutual exclusion formalism Overlapping intervals [diagram: two intervals that overlap in time]

Mutual exclusion formalism Disjoint intervals. We write A -> B (A precedes B) when the end event of A precedes the start event of B. Precedence is a partial order: A->B and B->A might both be false.
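The precedence relation can be checked mechanically. This is my own tiny illustration (not from the slides), representing an interval as a (start, end) pair of event times:

```python
# A -> B holds iff the end event of A precedes the start event of B.
def precedes(a, b):
    return a[1] < b[0]

A = (0, 5)   # disjoint from B, overlaps C
B = (6, 9)
C = (3, 8)

print(precedes(A, B))                  # True:  A -> B
print(precedes(A, C), precedes(C, A))  # False False: a partial order,
                                       # overlapping intervals are unordered
```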

Mutual exclusion property Let CS_i^k be process i's k-th critical section execution and CS_j^m be process j's m-th critical section execution. Then either CS_i^k -> CS_j^m or CS_j^m -> CS_i^k.

Plan 2-process solution N-process solution Fairness Inherent costs

First attempt: flag principle Entry (“lock()”), critical section, and exit (“unlock()”):

P0:                  P1:
flag0 := 1           flag1 := 1
while (flag1) {}     while (flag0) {}
-- CS --             -- CS --
flag0 := 0           flag1 := 0

The flag principle: raise my flag, then enter only when no other flag is up.

Mutual exclusion proof Assume CS_i^k overlaps CS_j^m. Consider each process's last (k-th and m-th) read and write before entering, and derive a contradiction.

Proof From the code:
write0(flag0=true) → read0(flag1==false) → CS0
write1(flag1=true) → read1(flag0==false) → CS1
From the assumption (the critical sections overlap):
read0(flag1==false) → write1(flag1=true)
read1(flag0==false) → write0(flag0=true)
Chaining these four orderings yields a cycle, which is impossible in a total order (of events).

Problem: progress If P0 and P1 each raise their flag before either reads the other's, both spin forever:

P0: flag0 := 1; while (flag1) {} …
P1: flag1 := 1; while (flag0) {} …
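The bad interleaving can be replayed deterministically. This is my own illustration (not from the slides), executing the four steps in the deadlocking order:

```python
# Deterministic replay of the flag-algorithm deadlock: both processes
# raise their flags before either reads the other's flag.
flag = [False, False]

flag[0] = True        # step 1, P0: flag0 := 1
flag[1] = True        # step 2, P1: flag1 := 1
p0_spins = flag[1]    # step 3, P0: while (flag1) {}  -> condition holds
p1_spins = flag[0]    # step 4, P1: while (flag0) {}  -> condition holds

# Neither flag will ever be lowered, so both loops spin forever.
print(p0_spins and p1_spins)  # True: deadlock
```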

Deadlock and progress Deadlock is a state in which no thread can complete its operation, because they’re all waiting for some condition that will never happen. The previous slide is an example. Mutual exclusion progress guarantees:
Deadlock-freedom: if a thread is trying to enter the critical section, then some thread must eventually enter the critical section.
Starvation-freedom: if a thread is trying to enter the critical section, then this thread must eventually enter the critical section.

2nd attempt Set victim first, then the flag:

P0: victim := 0; flag0 := 1; while (flag1 && victim==0) {}; -- CS --; flag0 := 0
P1: victim := 1; flag1 := 1; while (flag0 && victim==1) {}; -- CS --; flag1 := 0

Peterson’s algorithm Set the flag first, then victim:

P0: flag0 := 1; victim := 0; while (flag1 && victim==0) {}; -- CS --; flag0 := 0
P1: flag1 := 1; victim := 1; while (flag0 && victim==1) {}; -- CS --; flag1 := 0
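Peterson's algorithm can be run as real code. The sketch below is mine (not from the slides); it assumes CPython, where the GIL makes plain reads and writes atomic and sequentially consistent, so they can stand in for the atomic registers the algorithm assumes:

```python
# Runnable sketch of Peterson's 2-process mutual exclusion.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waiting is cheap

flag = [False, False]   # flag[i]: process i wants the critical section
victim = 0              # the process that politely waits on a tie
counter = 0             # shared data the lock protects

def lock(i):
    global victim
    flag[i] = True                       # announce interest
    victim = i                           # defer to the other process
    while flag[1 - i] and victim == i:   # spin while the other is
        pass                             # interested and I'm the victim

def unlock(i):
    flag[i] = False

def worker(i, n):
    global counter
    for _ in range(n):
        lock(i)
        counter += 1    # critical section: a non-atomic read-modify-write
        unlock(i)

ITERS = 5000
threads = [threading.Thread(target=worker, args=(i, ITERS)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 10000: mutual exclusion means no increment is lost
```

Without the lock, the two `counter += 1` read-modify-writes can interleave and lose updates; with Peterson's lock the final count is always 2·ITERS.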

Peterson’s mutual exclusion proof

flag[i] := 1
victim := i
while (flag[1-i] && victim==i) {}
-- CS --
flag[i] := 0

Assume both are in the CS, and suppose P0 was the last to write victim (it writes 0). Then P0 read flag1==0, so P1 wrote flag1:=1 after that read. But P1 writes victim:=1 after writing flag1, so P1 wrote victim last: a contradiction.

Peterson’s deadlock-freedom proof A process can be blocked only at the while loop, only if the other's flag==1, and only if it is the victim. In a solo execution the other's flag is false; otherwise, somebody isn't the victim, and that process enters.

Peterson’s starvation-freedom proof Process i is blocked only if process 1-i repeatedly re-enters, keeping flag[1-i]==1 && victim==i. But each time 1-i re-enters, it sets victim to 1-i, so i gets in.

Plan 2-process solution N-process solution Fairness Inherent costs

Filter algorithm A generalization of Peterson’s: N-1 levels (waiting rooms) that a process has to go through to enter the CS. At each level, at least one process enters, and at least one is blocked if many try; i.e., at most N-L processes pass into level L. Only one process makes it to the CS (level N-1).

Filter algorithm

int level[N]   // level of process i
int victim[N]  // victim at level L

for (L = 1; L < N; L++) {
  level[i] = L     // announce intention to enter level L
  victim[L] = i    // give priority to anyone but me
  // wait as long as someone else is at the same or higher level,
  // and I’m the victim
  while ((∃ k!=i: level[k] >= L) && victim[L] == i) {}
}
-- CS --
level[i] = 0

A process takes one level at a time, and enters level L when it completes the loop.
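The Filter lock also runs directly as code. This sketch is mine (not from the slides), again assuming CPython's GIL makes plain reads/writes behave like atomic registers:

```python
# Runnable sketch of the Filter lock for N processes.
import sys
import threading

sys.setswitchinterval(1e-4)   # cheap busy-waiting under the GIL

N = 3
level = [0] * N    # level[i]: highest level process i has announced
victim = [0] * N   # victim[L]: last process to arrive at level L
counter = 0

def lock(i):
    for L in range(1, N):
        level[i] = L      # announce intention to enter level L
        victim[L] = i     # give priority to anyone but me
        # wait while I'm the victim and someone else is at my level or higher
        while victim[L] == i and any(level[k] >= L
                                     for k in range(N) if k != i):
            pass

def unlock(i):
    level[i] = 0

def worker(i, n):
    global counter
    for _ in range(n):
        lock(i)
        counter += 1      # critical section
        unlock(i)

ITERS = 500
threads = [threading.Thread(target=worker, args=(i, ITERS)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # N * ITERS: no lost increments
```

The compound while-condition is re-evaluated on every spin, so reading its two halves at slightly different moments only delays a process, never lets two into the CS.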

[diagram: the level[] and victim[] arrays as processes advance through the levels toward the CS]

Claim Start at level L=0. At most n-L threads enter level L; in particular, mutual exclusion holds at level L=n-1. (Slides © 2003 Herlihy and Shavit, Art of Multiprocessor Programming)

Induction hypothesis No more than n-(L-1) threads reach level L-1. Induction step, by contradiction: assume all of them enter level L. Let A be the last to write victim[L], and let B be any other thread at level L.

Proof structure By way of contradiction, all threads assumed to enter level L-1 (say n-(L-1) = 4) also enter level L, and A is the last to write victim[L]. We show that A must have seen B in level[L] and, since victim[L] == A, could not have entered.

Just like Peterson From the code:
(1) writeB(level[B]=L) → writeB(victim[L]=B)

From the code:
(2) writeA(victim[L]=A) → readA(level[B]) → readA(victim[L])

By assumption:
(3) writeB(victim[L]=B) → writeA(victim[L]=A), since A is the last thread to write victim[L].

Combining observations
(1) writeB(level[B]=L) → writeB(victim[L]=B)
(3) writeB(victim[L]=B) → writeA(victim[L]=A)
(2) writeA(victim[L]=A) → readA(level[B]) → readA(victim[L])
So A read level[B] ≥ L and victim[L] == A, and therefore could not have entered level L!

No starvation The Filter lock satisfies starvation-freedom: it behaves just like the Peterson algorithm at every level, so no one starves. But what about fairness? Threads can be overtaken by others.

Bounded waiting We want stronger fairness guarantees: a thread should not be “overtaken” too much. If A starts before B, then A enters before B? But what does “start” mean? We need to adjust the definitions…

Bounded waiting Divide the entry section into 2 parts: the doorway interval, written D_A, which always finishes in a finite number of steps; and the waiting interval, written W_A, which may take an unbounded number of steps.

r-Bounded waiting For threads A and B: if D_A^k → D_B^m (A's k-th doorway precedes B's m-th doorway), then CS_A^k → CS_B^(m+r) (A's k-th critical section precedes B's (m+r)-th critical section). B cannot overtake A more than r times.

What’s r for Peterson’s algorithm? Answer: r = 0.

What’s r for the Filter lock? Answer: there is no such value of r.

Fairness The Filter lock ensures that no one starves, but gives only very weak fairness: a thread can be overtaken an arbitrary number of times. So being fair is stronger than avoiding starvation, and Filter is pretty lame…

First-come-first-served For threads A and B: if D_A^k → D_B^m (A's k-th doorway precedes B's m-th doorway), then CS_A^k → CS_B^m (A's k-th critical section precedes B's m-th critical section). B cannot overtake A.

Bakery algorithm [Lamport] Provides first-come-first-served for n threads. How? Take a “number” and wait until all lower numbers have been served. Labels are compared in lexicographic order: (a,i) > (b,j) if a > b, or a = b and i > j.
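As a small aside of mine (not from the slides): Python tuples compare lexicographically, which is exactly the (label, id) order defined above.

```python
# Lexicographic comparison of (label, id) pairs via Python tuples.
print((3, 0) > (2, 5))   # True:  label 3 > label 2
print((2, 1) > (2, 0))   # True:  equal labels, the id breaks the tie
print((2, 0) > (2, 0))   # False: a pair never exceeds itself
```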

Bakery algorithm

Bool flag[N]    // initially false
Label label[N]  // initially 0

flag[i] = true                    // I’m interested
t = max(label[0], …, label[N-1])  // doorway: take the next number
label[i] = t+1
// spin while someone is interested whose label is lexicographically lower
while (∃ k: flag[k] && (label[i],i) > (label[k],k)) {}
-- CS --
flag[i] = false   // no longer interested; labels keep increasing

The doorway is everything up to and including label[i] = t+1.
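The Bakery lock, too, runs as real code. This sketch is mine (not from the slides), again leaning on CPython's GIL for sequentially consistent plain reads and writes:

```python
# Runnable sketch of Lamport's Bakery lock for N processes.
import sys
import threading

sys.setswitchinterval(1e-4)   # cheap busy-waiting under the GIL

N = 3
flag = [False] * N   # flag[i]: process i is interested
label = [0] * N      # label[i]: process i's ticket number
counter = 0

def lock(i):
    # doorway: announce interest, then take a number larger than any seen
    flag[i] = True
    label[i] = max(label) + 1
    # wait until no interested process has a lexicographically
    # smaller (label, id) pair
    for k in range(N):
        if k == i:
            continue
        while flag[k] and (label[k], k) < (label[i], i):
            pass

def unlock(i):
    flag[i] = False

def worker(i, n):
    global counter
    for _ in range(n):
        lock(i)
        counter += 1     # critical section
        unlock(i)

ITERS = 500
threads = [threading.Thread(target=worker, args=(i, ITERS)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # N * ITERS
```

Note the labels grow without bound here (to N·ITERS), which is exactly the overflow concern the slides return to below.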

No deadlock There is always one thread with the earliest label. Ties are impossible (why? equal labels are broken by the distinct thread ids).

FCFS If D_A → D_B then A's label is smaller, and:
writeA(label[A]) → readB(label[A]) → writeB(label[B]) → readB(flag[A])
So B sees the smaller label for A and is locked out while flag[A] is true.

Mutex Suppose A and B are in the CS together, and A has the earlier label. When B entered, it must have seen flag[A] == false, or label[A] > label[B]. Could B see label[A] > label[B]? No: by assumption A now has the earlier label, and labels are strictly increasing. So B saw flag[A] == false, giving
LabelingB → readB(flag[A]==false) → writeA(flag[A]=true) → LabelingA
which contradicts A having the earlier label.

Bakery overflow bug Mutual exclusion breaks if a label overflows (and so decreases).
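The overflow bug is easy to demonstrate. In this illustration of mine (not from the slides), labels are bounded to 8 bits and wrap around, so a later ticket compares as smaller:

```python
# With bounded labels that wrap around, a newer ticket can look older,
# breaking the Bakery ordering.
old = 255
new = (old + 1) % 256   # the next label overflows to 0
print(new > old)        # False: the later arrival now compares as earliest
```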

Timestamps The label variable is really a timestamp. We need the ability to read others' timestamps, compare them, and generate a later timestamp. Can we do this without overflow?

The good news One can construct a wait-free (no mutual exclusion) concurrent timestamping system that never overflows. The bad news: this part is hard.

Philosophical question The Bakery algorithm is succinct, elegant, and fair. Q: So why isn't it practical? A: Well, you have to read N distinct variables.

Shared memory variables Shared read/write memory locations are called registers (for historical reasons). They come in different flavors: multi-reader single-writer (like flag[]) and multi-reader multi-writer (like victim[]).

Theorem (lower bound) At least N MRSW (multi-reader/single-writer) registers are needed to solve deadlock-free mutual exclusion.

Proving algorithmic impossibility To show that no algorithm exists: assume by way of contradiction that one does, then show a bad execution that violates its properties. In our case, assume an algorithm for deadlock-free mutual exclusion using < N registers.

Proof Threads are state machines; execution events are transitions. [diagram: a thread as a state machine with states a0…a3]

Proof Each thread must write to some register before entering the CS; otherwise the others can't tell whether A is in the critical section.

Upper bound The Bakery algorithm uses 2N MRSW registers, so the bound is (pretty) tight. But what if we use MRMW registers, like victim[]?

Philosophical motivation With an MRMW register, the hardware implements mutual exclusion for us: two concurrent writes always get ordered. Does this buy us anything?

Bad news theorem At least N MRMW (multi-reader/multi-writer) registers are needed to solve deadlock-free mutual exclusion. (So multiple writers don't help.)

Theorem (for 2 threads) Deadlock-free mutual exclusion for 2 threads requires at least 2 multi-reader multi-writer registers. Proof: assume one register R suffices and derive a contradiction.

Two-thread execution Threads A and B run, reading and writing the single register R. Deadlock-freedom says at least one of them gets into the CS.

Covering state A covering state: at least 1 thread is about to write to each register, but memory looks as if the CS is empty (and no one is trying to enter).

Covering state for 1 register In any protocol, B has to write to the register before entering the CS, so stop it just before that write. The register then looks as if the CS is empty.

Proof: assume cover of 1 A runs, possibly writes to the register, and enters the CS.

Proof: assume cover of 1 B now runs, first obliterating any trace of A with its pending write, and then also enters the critical section: mutual exclusion is violated.

Theorem Deadlock-free mutual exclusion for 3 threads requires at least 3 multi-reader multi-writer registers.

Proof: assume cover of 2 B and C are each about to write (Write(RB), Write(RC)), covering the only 2 registers, and the registers look as if the CS is empty.

Run A solo A writes to one or both registers and enters the CS.

Obliterate traces of A The other threads perform their pending writes, obliterating any evidence that A entered the CS.

Mutual exclusion fails The CS looks empty, so another thread gets in while A is still there.

Constructing a covering state We proved a contradiction starting from a covering state for 2 registers. Claim: a covering state for 2 registers is reachable from any state where the CS is empty and no process is trying to enter it.

Covering state for 2 If we run B through the CS 3 times, B's first write must twice be to the same register, say RB.

Covering state for 2 Start with B covering register RB for the 1st time. Run A until it is about to write to the uncovered register RA. Are we done?

Covering state for 2 NO! A could have written to RB, so the CS no longer looks empty to thread C.

Covering state for 2 Run B, obliterating the traces of A in RB. Then run B again until it is about to write to RB (for the 2nd time). Now we are done.

Inductively we can show There is a covering state where k threads not in the CS cover k distinct registers. The proof follows by taking k = N-1.

Summary In the 1960s, several incorrect solutions to starvation-free mutual exclusion using RW registers were published… Today we know how to solve FIFO N-thread mutual exclusion using 2N RW registers.