O(log n / log log n) RMRs Randomized Mutual Exclusion. Danny Hendler (Ben-Gurion University) and Philipp Woelfel (University of Calgary), PODC 2009.

Talk outline
- Prior art and our results
- Basic algorithm (CC)
- Enhanced algorithm (CC)
- Pseudo-code
- Open questions

Most Relevant Prior Art
- Best upper bound for mutual exclusion: O(log n) RMRs (Yang and Anderson, Distributed Computing '96).
- A tight Θ(n log n) RMRs lower bound for deterministic mutex (Attiya, Hendler and Woelfel, STOC '08).
- Compare-and-swap (CAS) is equivalent to read/write for RMR complexity (Golab, Hadzilacos, Hendler and Woelfel, PODC '07).

Our Results
Randomized mutual exclusion algorithms (for both the CC and DSM models) that have:
- O(log N / log log N) expected RMR complexity against a strong adversary, and
- O(log N) deterministic worst-case RMR complexity.
This gives a separation, in terms of RMR complexity, between deterministic and randomized mutual exclusion algorithms.

Shared-memory scheduling adversary types
- Oblivious adversary: makes all scheduling decisions in advance.
- Weak adversary: sees a process's coin flip only after the process takes its following step; can change future scheduling based on history.
- Strong adversary: can change future scheduling after each coin flip or step, based on history.

Talk outline
- Prior art and our results
- Basic algorithm (CC model)
- Enhanced algorithm (CC model)
- Pseudo-code
- Open questions

Basic Algorithm – Data Structures
(Figure: a tree whose nodes have children numbered 1, 2, …, Δ, with the n processes at the leaves.)
Δ = Θ(log n / log log n).
Key idea: processes apply randomized promotion.
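Assuming the figure indeed shows a Δ-ary tree over the n processes (my reading of the slide residue, not a statement from the paper), its depth matches the same bound:
\[
\log_{\Delta} n \;=\; \frac{\log n}{\log \Delta}
\;=\; \frac{\log n}{\Theta(\log\log n)}
\;=\; \Theta\!\left(\frac{\log n}{\log\log n}\right),
\qquad\text{since}\quad
\log \Delta \;=\; \log\frac{\log n}{\log\log n} \;=\; \Theta(\log\log n).
\]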

Basic Algorithm – Data Structures (cont'd)
Per-node structure: lock ∈ P ∪ {⊥} and an apply[1 … Δ] array (one slot per child).
In addition: a promotion queue (p_i1, p_i2, …, p_ik) and a notified[1 … n] array.
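A minimal sketch, in Java, of how such a per-node record could look. The names (FREE, apply, promotionQueue) and the use of java.util.concurrent.atomic are my own choices for illustration, not the paper's pseudo-code; the notified[1 … n] flags are kept outside the node.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicIntegerArray;

    // Hypothetical per-node record. FREE stands in for the "no owner" value (⊥).
    final class Node {
        static final int FREE = -1;
        final AtomicInteger lock = new AtomicInteger(FREE);        // process id of the owner, or FREE
        final AtomicIntegerArray apply;                            // one application slot per child
        final Deque<Integer> promotionQueue = new ArrayDeque<>();  // accessed only by the lock holder

        Node(int delta) {
            apply = new AtomicIntegerArray(delta);
            for (int c = 0; c < delta; c++) apply.set(c, FREE);
        }
    }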

Basic Algorithm – Key Idea
(Figure: process i registers in a node's apply array while the node's lock is taken.)
Key idea: randomized promotion.

Basic Algorithm – Entry Section
Process i writes i into its apply slot at the node and attempts CAS(lock, ⊥, i); if the CAS succeeds, i has captured the node.
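As a hedged illustration of that step, reusing the hypothetical Node record sketched earlier (method and parameter names are mine, not the paper's):

    // Hypothetical entry-section step: process `me` arrives at node n via child slot `ch`,
    // announces itself, and tries to capture the node's lock with a single CAS.
    final class EntryStep {
        static boolean tryCapture(Node n, int ch, int me) {
            n.apply.set(ch, me);                          // announce: "I am applying at child ch"
            return n.lock.compareAndSet(Node.FREE, me);   // succeeds only if the lock is free (⊥)
        }
    }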

Basic Algorithm – Entry Section: scenario #2
(Figure: the node's lock is held by q, so process i's CAS(lock, ⊥, i) fails.)

Basic Algorithm – Entry Section: scenario #2
After the failed CAS, process i busy-waits: await (n.lock = ⊥ || apply[ch] = ⊥).

Basic Algorithm – Entry Section: scenario #2
Eventually process i is promoted: it busy-waits on await (notified[i] = true) and then enters the critical section.
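A tiny sketch of that final local-spin wait, assuming the notified flags are represented as an AtomicIntegerArray (my representation choice, matched by the exit-section sketch further below):

    import java.util.concurrent.atomic.AtomicIntegerArray;

    // Hypothetical final wait: process `me` spins locally until an exiting process raises its flag.
    final class Await {
        static void awaitNotification(AtomicIntegerArray notified, int me) {
            while (notified.get(me) == 0) {
                // busy-wait; in the CC model the re-reads hit the local cache, so an RMR is
                // incurred only when the notifier actually writes the flag
            }
        }
    }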

Basic Algorithm – Exit Section
Climb up from the leaf until the last node captured in the entry section.
(Figure: the exiting process p holds that node's lock; a lottery follows.)

Basic Algorithm – Exit Section
Perform a lottery on the root.
(Figure: the root's lock is held by p; the lottery winner joins the promotion queue, which here holds processes such as s, t and q.)
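A hedged sketch of what one release-with-lottery step might look like, reusing the hypothetical Node record and the AtomicIntegerArray of notified flags from above. This is my reconstruction of the basic scheme, not the paper's pseudo-code (which, as later slides show, also adds deterministic promotion).

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.atomic.AtomicIntegerArray;

    // Hypothetical exit-section step at a node whose lock the exiting process holds.
    final class ExitStep {
        static void releaseWithLottery(Node n, AtomicIntegerArray notified) {
            // Lottery: pick one apply slot uniformly at random; an applicant found there is promoted.
            int slot = ThreadLocalRandom.current().nextInt(n.apply.length());
            int winner = n.apply.get(slot);
            if (winner != Node.FREE) {
                n.promotionQueue.addLast(winner);
            }
            Integer next = n.promotionQueue.pollFirst();
            if (next != null) {
                notified.set(next, 1);       // hand the node over without releasing its lock
            } else {
                n.lock.set(Node.FREE);       // empty queue: free the lock for a fresh CAS race
            }
        }
    }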

Basic Algorithm – Exit Section
(Figure: the process at the head of the promotion queue is notified; its await (notified[i] = true) completes and it enters the critical section.)

Basic Algorithm – Exit Section (scenario #2)
(Figure: the promotion queue is empty, so the exiting process frees the root's lock.)

Basic Algorithm – Properties
Lemma: mutual exclusion is satisfied.
Proof intuition: when a process exits, it either
- signals a single process without releasing the root's lock, or
- releases the lock, if the promoted-processes queue is empty.
When the lock is free, it is captured atomically by CAS.

Basic Algorithm – Properties (cont'd)
Lemma: the expected RMR complexity is Θ(log N / log log N).
A waiting process spins on await (n.lock = ⊥ || apply[ch] = ⊥) and participates in a lottery after every constant number of RMRs incurred there. The probability of winning a lottery is 1/Δ, so the expected number of RMRs incurred before promotion is Θ(log N / log log N).
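Spelled out, this is a standard geometric-distribution calculation (my own expansion of the slide's claim):
\[
\Pr[\text{win a given lottery}] \;=\; \frac{1}{\Delta}
\quad\Longrightarrow\quad
\mathbb{E}[\#\text{lotteries until promotion}] \;=\; \Delta,
\]
and since each lottery costs O(1) RMRs while \( \Delta = \Theta(\log n / \log\log n) \), the expected number of RMRs incurred before promotion is
\[
O(1)\cdot\Delta \;=\; \Theta\!\left(\frac{\log n}{\log\log n}\right).
\]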

Basic Algorithm – Properties (cont'd)
- Mutual exclusion
- Expected RMR complexity: Θ(log N / log log N)
- Non-optimal worst-case complexity, and (even worse) starvation is possible.

Talk outline
- Prior art and our results
- Basic algorithm (CC)
- Enhanced algorithm (CC)
- Pseudo-code
- Open questions

The enhanced algorithm: key idea
Quit the randomized algorithm after incurring "too many" RMRs, and then execute a deterministic algorithm.
Problems:
- How do we count the number of RMRs incurred?
- How do we "quit" the randomized algorithm?

Enhanced algorithm: counting RMRs problem
The problem: while spinning on await (n.lock = ⊥ || apply[ch] = ⊥), a process may incur an unbounded number of RMRs without being aware of it.

Counting RMRs: solution
Key idea: perform both randomized and deterministic promotion.
- Increment a promotion token whenever releasing a node.
- Perform deterministic promotion according to the promotion token, in addition to the randomized promotion.
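A sketch of how the combined promotion could look, assuming the Node record above is extended with an int token field (the per-node structure shown two slides below); again an illustration of the idea, not the paper's code.

    import java.util.concurrent.ThreadLocalRandom;

    // Hypothetical combined promotion: besides the random lottery slot, also promote the slot
    // pointed to by a round-robin token, so no applicant can be skipped by more than Δ
    // consecutive releases of the node.
    final class CombinedPromotion {
        static void promoteOnRelease(Node n) {
            int det = n.token;                                               // deterministic choice
            n.token = (n.token + 1) % n.apply.length();                      // advance the token
            int rnd = ThreadLocalRandom.current().nextInt(n.apply.length()); // randomized choice
            for (int slot : new int[] { det, rnd }) {
                int p = n.apply.get(slot);
                if (p != Node.FREE && !n.promotionQueue.contains(p)) {
                    n.promotionQueue.addLast(p);
                }
            }
        }
    }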

The enhanced algorithm: quitting problem
Upon exceeding the allowed number of RMRs, why can't a process simply release the locks it has captured and revert to a deterministic algorithm?
Because waiting processes may incur RMRs without participating in lotteries!

Quitting problem: solution
Add a deterministic Δ-process mutex object to each node.
(Per-node structure: lock ∈ P ∪ {⊥}, apply[1 … Δ], MX: a Δ-process mutex, token: the promotion token.)

Quitting problem: solution (cont'd)
After incurring O(log Δ) RMRs on a node, a process competes for the node's MX lock, and then spins trying to capture the node lock.
In addition to the randomized and deterministic promotion, an exiting process also promotes the process that holds the MX lock, if any.

Quitting problem: solution (cont'd)
After incurring O(log Δ) RMRs on a node, a process competes for the MX lock and then spins trying to capture the node lock.
Worst-case number of RMRs: O(Δ log Δ) = O(log n).
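Plugging in Δ = Θ(log n / log log n), the product indeed collapses to O(log n):
\[
\Delta \log \Delta
\;=\; O\!\left(\frac{\log n}{\log\log n}\right)\cdot \log\!\left(\frac{\log n}{\log\log n}\right)
\;=\; O\!\left(\frac{\log n}{\log\log n}\right)\cdot O(\log\log n)
\;=\; O(\log n).
\]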

Talk outline
- Prior art and our results
- Basic algorithm (CC)
- Enhanced algorithm (CC)
- Pseudo-code
- Open questions

Data structures
(Pseudo-code figure, not reproduced here; process i is associated with the i'th leaf.)

The entry section
(Pseudo-code figure, not reproduced here.)

The exit section
(Pseudo-code figure, not reproduced here.)

Open Problems
- Is this the best possible? For a strong adversary? For a weak adversary? For an oblivious adversary?
- Is there an abortable randomized algorithm?
- Is there an adaptive one?