Concurrency: Mutual Exclusion and Synchronization

Concurrency: Mutual Exclusion and Synchronization Chapter 5

Critical Sections & Mutual Exclusion
shared double balance;

Code for DEPOSIT (p1):    balance = balance + amount;
Code for WITHDRAWAL (p2): balance = balance - amount;

Assembly code for p1:         Assembly code for p2:
    load  R1, balance             load  R1, balance
    load  R2, amount              load  R2, amount
    add   R1, R2                  sub   R1, R2
    store R1, balance             store R1, balance
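The danger is in the interleaving of the two load/store sequences: if p2's load happens between p1's load and store, one update is lost. A minimal pthread sketch that typically exposes this lost-update race (file and function names here are hypothetical, not from the slides):

/* race.c - two unsynchronized threads updating a shared balance.
 * Compile with: gcc race.c -pthread -o race
 */
#include <pthread.h>
#include <stdio.h>

static double balance = 0.0;              /* shared, unprotected */

static void *deposit(void *arg) {
    for (int i = 0; i < 1000000; i++)
        balance = balance + 1.0;          /* load, add, store: not atomic */
    return NULL;
}

static void *withdraw(void *arg) {
    for (int i = 0; i < 1000000; i++)
        balance = balance - 1.0;          /* interleaves with deposit */
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, deposit, NULL);
    pthread_create(&p2, NULL, withdraw, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("final balance = %f\n", balance);  /* expected 0.0, usually is not */
    return 0;
}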

Critical Sections (cont.)
Mutual exclusion: only one process can be in the critical section at a time.
Without mutual exclusion, the results of multiple executions are not consistent.
The sections may be defined by different code in different processes, so it is not easy for compilers to detect them.
We need an OS mechanism so the programmer can manage access.
Some possible OS mechanisms: disable interrupts; software solutions such as locks and semaphores.

Disabling Interrupts
shared double balance;
Code for p1: disableInterrupts(); balance = balance + amount; enableInterrupts();
Code for p2: disableInterrupts(); balance = balance - amount; enableInterrupts();
Disabling interrupts guarantees mutual exclusion, but it has disadvantages:
A user process could easily abuse this privilege, so it should not be available to user processes.
Interrupts could stay disabled arbitrarily long.
We only want to prevent p1 and p2 from interfering with one another; this prevents every other process pk from executing.
In a multiprocessor system, disabling interrupts on one processor does not disable them on the other processors, so mutual exclusion is not guaranteed.

Using a Lock Variable
shared boolean lock = FALSE;
shared double balance;

Code for p1:                        Code for p2:
    /* Acquire the lock */              /* Acquire the lock */
    while (lock) /* loop */ ;           while (lock) /* loop */ ;
    lock = TRUE;                        lock = TRUE;
    /* Execute critical section */      /* Execute critical section */
    balance = balance + amount;         balance = balance - amount;
    /* Release lock */                  /* Release lock */
    lock = FALSE;                       lock = FALSE;

Access to lock is not atomic, so things may still go wrong! Is it possible to solve the problem?

New Solution
shared boolean lock = FALSE;
shared list L;

Code for p1:                        Code for p2:
    . . .                               . . .
    get(lock);                          get(lock);
    <execute critical section>;         <execute critical section>;
    release(lock);                      release(lock);

However, the get(lock) and release(lock) operations must be atomic!
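In modern user-level code the atomic get/release pair is usually supplied by the threading library rather than written by hand. A minimal pthread sketch of the same structure (the function names deposit/withdraw are hypothetical, not from the slides):

#include <pthread.h>

static double balance;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void deposit(double amount) {
    pthread_mutex_lock(&lock);        /* plays the role of get(lock)     */
    balance = balance + amount;       /* critical section                */
    pthread_mutex_unlock(&lock);      /* plays the role of release(lock) */
}

void withdraw(double amount) {
    pthread_mutex_lock(&lock);
    balance = balance - amount;
    pthread_mutex_unlock(&lock);
}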

Semaphore Solution
get( ) ≡ P( ) ≡ wait( ) ≡ pthread_mutex_lock( )
release( ) ≡ V( ) ≡ signal( ) ≡ pthread_mutex_unlock( )

struct semaphore {
    int count;
    queueType queue;
};

void get(semaphore s) {
    disable_interrupts();
    s.count--;
    if (s.count < 0) {
        place this process in s.queue;
        block this process;
    }
    enable_interrupts();
}

void release(semaphore s) {
    disable_interrupts();
    s.count++;
    if (s.count <= 0) {
        remove a process P from s.queue;
        place process P on the ready queue;
    }
    enable_interrupts();
}

Another Possible Semaphore Implementation
Using a compare&swap (test&set) instruction.
Figure 5.14a shows the use of a compare&swap instruction. In this implementation, the semaphore is again a structure, but now includes a new integer component, s.flag. Admittedly, this involves a form of busy waiting. However, the semWait and semSignal operations are relatively short, so the amount of busy waiting involved should be minor.
For a single-processor system, it is possible to inhibit interrupts for the duration of a semWait or semSignal operation, as suggested in Figure 5.14b. Once again, the relatively short duration of these operations means that this approach is reasonable.
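Figures 5.14a and 5.14b themselves are not reproduced in this transcript. The following is only a rough sketch of the compare&swap idea using C11 atomics, under the assumption that a busy-waited flag guards the semaphore's count (the struct and function names are hypothetical; the blocking and queueing steps are left as comments):

#include <stdatomic.h>

struct sem {
    atomic_int flag;   /* 0 = free, 1 = held; guards count and queue */
    int count;
    /* queueType queue;  -- omitted in this sketch */
};

void semWait(struct sem *s) {
    int expected = 0;
    /* spin until we atomically change flag from 0 to 1 (compare&swap) */
    while (!atomic_compare_exchange_weak(&s->flag, &expected, 1))
        expected = 0;                 /* brief busy waiting */
    s->count--;
    /* if (s->count < 0): this process should release the guard and block
     * on s->queue (not shown in this sketch) */
    atomic_store(&s->flag, 0);        /* release the guard */
}

void semSignal(struct sem *s) {
    int expected = 0;
    while (!atomic_compare_exchange_weak(&s->flag, &expected, 1))
        expected = 0;
    s->count++;
    /* if (s->count <= 0): move a blocked process to the ready queue (not shown) */
    atomic_store(&s->flag, 0);
}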

Dijkstra's Semaphore
Invented in the 1960s.
A conceptual OS mechanism, with no specific implementation defined (it could be get()/release()).
The basis of all contemporary OS synchronization mechanisms.
A semaphore, s, is a nonnegative integer variable that can only be changed or tested by these two atomic (indivisible / uninterruptible) functions:
P(s): [ while (s == 0) { wait }; s = s - 1; ]
V(s): [ s = s + 1; ]

Shared Account Problem
semaphore mutex = 1;

P0() {                              P1() {
    . . .                               . . .
    /* Enter the CS */                  /* Enter the CS */
    P(mutex);                           P(mutex);
    balance += amount;                  balance -= amount;
    V(mutex);                           V(mutex);
}                                   }

pthread_create(P0, 0);
pthread_create(P1, 0);
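A runnable version of this slide using POSIX semaphores (a sketch; the thread bodies and loop counts are hypothetical, not from the slides):

/* account.c - shared balance protected by a semaphore used as a mutex.
 * Compile with: gcc account.c -pthread -o account
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static double balance = 0.0;
static sem_t mutex;                        /* binary semaphore, initial value 1 */

static void *P0(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);                  /* P(mutex): enter the CS */
        balance += 1.0;
        sem_post(&mutex);                  /* V(mutex): leave the CS */
    }
    return NULL;
}

static void *P1(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);
        balance -= 1.0;
        sem_post(&mutex);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&mutex, 0, 1);                /* count = 1 gives mutual exclusion */
    pthread_create(&t0, NULL, P0, NULL);
    pthread_create(&t1, NULL, P1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("final balance = %f\n", balance);   /* now reliably 0.0 */
    return 0;
}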

Important Considerations for Software Locks
Only the processes competing for a critical section should be considered in deciding who enters it next.
Once a process attempts to enter its critical section, it should not be postponed indefinitely: no starvation. (After requesting entry, only a bounded number of other processes may enter before the requesting process.)
No deadlock or starvation should occur.
A process should not be delayed access to a critical section when no other process is using it.
No assumptions should be made about the relative speeds of processes or the number of competing processes.

Semaphore/Critical Section Example
(Figure not reproduced in this transcript.) Process A is done; what is going to happen now?

Processing Two Critical Sections
shared lock1 = FALSE;
shared lock2 = FALSE;

Code for p1:                        Code for p2:
    . . .                               . . .
    /* Enter CS-1 */                    /* Enter CS-2 */
    get(lock1);                         get(lock2);
    <critical section 1>;               <critical section 2>;
    release(lock1);                     release(lock2);
    <other computation>;                <other computation>;
    /* Enter CS-2 */                    /* Enter CS-1 */
    get(lock2);                         get(lock1);
    <critical section 2>;               <critical section 1>;
    release(lock2);                     release(lock1);

Deadlock may occur if locks are not used properly!
shared boolean lock1 = FALSE;
shared boolean lock2 = FALSE;

Code for p1:                        Code for p2:
    . . .                               . . .
    get(lock1);                         get(lock2);
    <delete element>;                   <update length>;
    /* Enter CS to update length */     /* Enter CS to add element */
    get(lock2);                         get(lock1);
    <update length>;                    <add element>;
    release(lock2);                     release(lock1);
    release(lock1);                     release(lock2);
    . . .                               . . .
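One standard way to rule this deadlock out is to make every process acquire the locks in the same global order. A sketch of that convention with pthreads (the function names are hypothetical; the list operations are left as comments):

#include <pthread.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;   /* protects the list   */
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;   /* protects the length */

/* Both operations take lock1 before lock2, so the circular wait
 * (p1 holds lock1 and waits for lock2 while p2 holds lock2 and
 * waits for lock1) can never form. */

void delete_element(void) {
    pthread_mutex_lock(&lock1);
    /* <delete element> */
    pthread_mutex_lock(&lock2);
    /* <update length> */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
}

void add_element(void) {
    pthread_mutex_lock(&lock1);    /* same order, even though this path touches the length first */
    pthread_mutex_lock(&lock2);
    /* <update length> */
    /* <add element> */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
}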

OS Concerns Related to Concurrency
Synchronization (support for mutual exclusion)
Communication (data sharing / message passing)
Protection of data/resources (access control for sharing)
Deadlock
Resource allocation / deallocation: processor, memory, files, I/O devices

Message Passing
Two important mechanisms are needed to facilitate interaction between concurrent processes:
    synchronization, to enforce mutual exclusion
    communication, to exchange information
Message passing is one approach to providing both of these functions.
It works in distributed systems as well as in shared-memory multiprocessor and uniprocessor systems.
Message passing is generally provided in the form of a pair of primitives:
    send(destination, message)
    receive(source, message)    /* blocking receive */
When processes interact with one another, two fundamental requirements must be satisfied: synchronization and communication. Processes need to be synchronized to enforce mutual exclusion; cooperating processes may need to exchange information. One approach to providing both of these functions is message passing. Message passing has the further advantage that it lends itself to implementation in distributed systems as well as in shared-memory multiprocessor and uniprocessor systems.

Synchronization
The communication of a message between two processes implies some level of synchronization between the two: the receiver cannot receive a message until it has been sent by another process.
When a receive primitive is executed in a process, there are two possibilities:
    if a message has previously been sent, the message is received and execution continues
    if there is no waiting message, either the process is blocked until a message arrives, or the process continues to execute, abandoning the attempt to receive
We also need to specify what happens to a process after it issues a send or receive primitive. Consider the send primitive first: when a send primitive is executed in a process, either the sending process is blocked until the message is received, or it is not. Similarly, when a process issues a receive primitive: (1) if a message has previously been sent, the message is received and execution continues; (2) if there is no waiting message, then either (a) the process is blocked until a message arrives, or (b) the process continues to execute, abandoning the attempt to receive.
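A minimal in-process sketch of send/receive with a blocking receive, built on a mutex and condition variable (the mailbox structure and names are hypothetical; a real OS would implement these primitives in the kernel or over a network transport):

#include <pthread.h>

#define MAILBOX_SIZE 16

struct mailbox {
    int buf[MAILBOX_SIZE];
    int in, out, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty;
};

void mailbox_init(struct mailbox *mb) {
    mb->in = mb->out = mb->count = 0;
    pthread_mutex_init(&mb->lock, NULL);
    pthread_cond_init(&mb->not_empty, NULL);
}

/* send: deposit a message (overflow handling is omitted in this sketch). */
void send(struct mailbox *mb, int message) {
    pthread_mutex_lock(&mb->lock);
    mb->buf[mb->in] = message;
    mb->in = (mb->in + 1) % MAILBOX_SIZE;
    mb->count++;
    pthread_cond_signal(&mb->not_empty);   /* wake one blocked receiver */
    pthread_mutex_unlock(&mb->lock);
}

/* receive: blocking receive - the caller sleeps until a message arrives. */
int receive(struct mailbox *mb) {
    pthread_mutex_lock(&mb->lock);
    while (mb->count == 0)
        pthread_cond_wait(&mb->not_empty, &mb->lock);
    int message = mb->buf[mb->out];
    mb->out = (mb->out + 1) % MAILBOX_SIZE;
    mb->count--;
    pthread_mutex_unlock(&mb->lock);
    return message;
}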

Producer/Consumer Problem
One or more producers generate data and place it in a buffer.
A single consumer takes items out of the buffer one at a time.
Infinite buffer:

Producer:                           Consumer:
while (true) {                      while (true) {
    /* produce item v */                while (in <= out) /* wait */ ;
    b[in] = v;                          w = b[out];
    in++;                               out++;
}                                       /* consume item w */
                                    }

Since the buffer b[] and the "in" and "out" pointers are all shared, these solutions do not work!
Only one producer or consumer should access the buffer at any one time.

Producer/Consumer Using a Circular Buffer
Producer:                               Consumer:
while (true) {                          while (true) {
    /* produce item v */                    while (in == out) /* do nothing */ ;
    while ((in + 1) % n == out)             w = b[out];
        /* do nothing */ ;                  out = (out + 1) % n;
    b[in] = v;                              /* consume item w */
    in = (in + 1) % n;                  }
}

Since the buffer b[] and the "in" and "out" pointers are all shared, these solutions do not work, either!

A Solution to the Bounded-Buffer Producer/Consumer Problem
(The slide's figure and annotated code are not reproduced in this transcript. It shows a bounded buffer shared by producers and consumers, guarded by a lock on the buffer, a semaphore counting the items in the FullPool, and a semaphore counting the empty slots in the EmptyPool.)
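A sketch of the standard semaphore-based solution the slide illustrates (the names empty_pool and full_pool are hypothetical, chosen to echo the slide's EmptyPool/FullPool labels):

#include <pthread.h>
#include <semaphore.h>

#define N 10                               /* buffer capacity */

static int b[N];
static int in = 0, out = 0;

static sem_t empty_pool;                   /* counts empty slots, starts at N  */
static sem_t full_pool;                    /* counts filled slots, starts at 0 */
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

void buffer_init(void) {
    sem_init(&empty_pool, 0, N);
    sem_init(&full_pool, 0, 0);
}

void produce(int v) {
    sem_wait(&empty_pool);                 /* wait for an empty slot */
    pthread_mutex_lock(&buf_lock);         /* one process in the buffer at a time */
    b[in] = v;
    in = (in + 1) % N;
    pthread_mutex_unlock(&buf_lock);
    sem_post(&full_pool);                  /* announce a new item */
}

int consume(void) {
    sem_wait(&full_pool);                  /* wait for an item */
    pthread_mutex_lock(&buf_lock);
    int w = b[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&buf_lock);
    sem_post(&empty_pool);                 /* announce a freed slot */
    return w;
}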

Readers-Writers Problem
(Figure not reproduced: multiple readers and writers contend for a single shared resource.)


First Solution
resourceType *resource;
int readCount = 0;
semaphore mutex = 1;
semaphore writeBlock = 1;

reader() {
    while (TRUE) {
        <other computing>;
        P(mutex);
        readCount++;
        if (readCount == 1)
            P(writeBlock);     /* first reader competes with writers */
        V(mutex);
        /* Critical section */
        access(resource);
        P(mutex);
        readCount--;
        if (readCount == 0)
            V(writeBlock);     /* last reader signals writers */
        V(mutex);
    }
}

writer() {
    while (TRUE) {
        <other computing>;
        P(writeBlock);
        /* Critical section */
        access(resource);
        V(writeBlock);
    }
}

First Solution (cont.)
The first reader competes with writers; the last reader signals writers.
Any writer must wait for all readers to finish.
Readers can starve writers: "updates" can be delayed forever, which is not desirable.
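For comparison, POSIX exposes this reader-preference pattern directly through read-write locks; a small sketch (the variable names are hypothetical, not from the slides):

#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_resource;

void reader(void) {
    pthread_rwlock_rdlock(&rw);            /* many readers may hold the lock at once */
    int v = shared_resource;               /* read-only access */
    (void)v;
    pthread_rwlock_unlock(&rw);
}

void writer(int v) {
    pthread_rwlock_wrlock(&rw);            /* exclusive: waits until all readers leave */
    shared_resource = v;
    pthread_rwlock_unlock(&rw);
}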

Writers Take Precedence
int readCount = 0, writeCount = 0;
semaphore mutex1 = 1, mutex2 = 1;
semaphore readBlock = 1, writeBlock = 1;

reader() {
    while (TRUE) {
        <other computing>;
        P(readBlock);
        P(mutex1);
        readCount++;
        if (readCount == 1)
            P(writeBlock);
        V(mutex1);
        V(readBlock);
        access(resource);
        P(mutex1);
        readCount--;
        if (readCount == 0)
            V(writeBlock);
        V(mutex1);
    }
}

writer() {
    while (TRUE) {
        <other computing>;
        P(mutex2);
        writeCount++;
        if (writeCount == 1)
            P(readBlock);
        V(mutex2);
        P(writeBlock);
        access(resource);
        V(writeBlock);
        P(mutex2);
        writeCount--;
        if (writeCount == 0)
            V(readBlock);
        V(mutex2);
    }
}


Writers Take Precedence (cont.)
Writers can starve readers: "reads" can be delayed forever, which is not desirable either.

Fair to Readers and Writers?
int readCount = 0, writeCount = 0;
semaphore mutex1 = 1, mutex2 = 1;
semaphore readBlock = 1, writeBlock = 1, writePending = 1;

reader() {
    while (TRUE) {
        <other computing>;
        P(writePending);
        P(readBlock);
        P(mutex1);
        readCount++;
        if (readCount == 1)
            P(writeBlock);
        V(mutex1);
        V(readBlock);
        V(writePending);
        access(resource);
        P(mutex1);
        readCount--;
        if (readCount == 0)
            V(writeBlock);
        V(mutex1);
    }
}

writer() {
    while (TRUE) {
        <other computing>;
        P(writePending);
        P(mutex2);
        writeCount++;
        if (writeCount == 1)
            P(readBlock);
        V(mutex2);
        P(writeBlock);
        access(resource);
        V(writeBlock);
        V(writePending);
        P(mutex2);
        writeCount--;
        if (writeCount == 0)
            V(readBlock);
        V(mutex2);
    }
}


Dining Philosophers Problem
No two philosophers can use the same fork at the same time (mutual exclusion).
No philosopher must starve to death (avoid deadlock and starvation).
We now turn to the dining philosophers problem, introduced by Dijkstra [DIJK71]. Five philosophers live in a house, where a table is laid for them. The life of each philosopher consists principally of thinking and eating, and through years of thought, all of the philosophers had agreed that the only food that contributed to their thinking efforts was spaghetti. Due to a lack of manual skill, each philosopher requires two forks to eat spaghetti. The eating arrangements are simple (Figure 6.11): a round table on which is set a large serving bowl of spaghetti, five plates, one for each philosopher, and five forks. A philosopher wishing to eat goes to his or her assigned place at the table and, using the two forks on either side of the plate, takes and eats some spaghetti.
The problem: devise a ritual (algorithm) that will allow the philosophers to eat. The algorithm must satisfy mutual exclusion (no two philosophers can use the same fork at the same time) while avoiding deadlock and starvation (in this case, the term has literal as well as algorithmic meaning!).
This problem may not seem important or relevant in itself. However, it does illustrate basic problems in deadlock and starvation. Furthermore, attempts to develop solutions reveal many of the difficulties in concurrent programming (e.g., see [GING90]). In addition, the dining philosophers problem can be seen as representative of problems dealing with the coordination of shared resources, which may occur when an application includes concurrent threads of execution. Accordingly, this problem is a standard test case for evaluating approaches to synchronization.

Anything wrong with this code?
Figure 6.12 suggests a solution using semaphores. Each philosopher picks up first the fork on the left and then the fork on the right. After the philosopher is finished eating, the two forks are replaced on the table. This solution, alas, leads to deadlock: if all of the philosophers are hungry at the same time, they all sit down, they all pick up the fork on their left, and they all reach out for the other fork, which is not there. In this undignified position, all philosophers starve.
To overcome the risk of deadlock, we could buy five additional forks (a more sanitary solution!) or teach the philosophers to eat spaghetti with just one fork. As another approach, we could consider adding an attendant who only allows four philosophers at a time into the dining room. With at most four seated philosophers, at least one philosopher will have access to two forks.
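Figure 6.12 itself is not reproduced in this transcript. The sketch below follows the same pattern it describes (one semaphore per fork, left fork first, then right), and it exhibits exactly the deadlock discussed above; the function names are hypothetical:

#include <semaphore.h>

#define N 5

static sem_t fork_sem[N];                  /* one binary semaphore per fork */

void init_forks(void) {
    for (int i = 0; i < N; i++)
        sem_init(&fork_sem[i], 0, 1);
}

/* Deadlock: if all five philosophers grab their left fork at the same
 * moment, every right fork is already held and everyone waits forever. */
void philosopher(int i) {
    while (1) {
        /* think(); */
        sem_wait(&fork_sem[i]);            /* pick up left fork   */
        sem_wait(&fork_sem[(i + 1) % N]);  /* pick up right fork  */
        /* eat(); */
        sem_post(&fork_sem[(i + 1) % N]);  /* put down right fork */
        sem_post(&fork_sem[i]);            /* put down left fork  */
    }
}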

Dining Philosophers
(Figure of the table arrangement; not reproduced in this transcript.)

A Second Solution to the Dining Philosophers Problem
Figure 6.13 shows such a solution, again using semaphores: an attendant semaphore admits at most four philosophers to the table at a time, so at least one seated philosopher can always obtain both forks. This solution is free of deadlock and starvation.
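Figure 6.13 is also not reproduced here; the sketch below adds a counting semaphore (called room, a hypothetical name) initialized to four so that at most four philosophers are seated at once:

#include <semaphore.h>

#define N 5

static sem_t fork_sem[N];
static sem_t room;                         /* admits at most N-1 = 4 philosophers */

void init_table(void) {
    for (int i = 0; i < N; i++)
        sem_init(&fork_sem[i], 0, 1);
    sem_init(&room, 0, N - 1);
}

/* With at most four philosophers seated, at least one can always obtain
 * both forks, so the circular wait of the first solution cannot occur. */
void philosopher(int i) {
    while (1) {
        /* think(); */
        sem_wait(&room);                   /* enter the dining room */
        sem_wait(&fork_sem[i]);            /* left fork  */
        sem_wait(&fork_sem[(i + 1) % N]);  /* right fork */
        /* eat(); */
        sem_post(&fork_sem[(i + 1) % N]);
        sem_post(&fork_sem[i]);
        sem_post(&room);                   /* leave the room */
    }
}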

The Barbershop Problem
semaphore max_capacity = 20, sofa = 4, barber_chair = 3, coord = 3;
semaphore cust_ready = 0, finished = 0, leave_b_chair = 0, payment = 0, receipt = 0;

void customer() {
    wait(max_capacity);
    enter_shop();
    wait(sofa);
    sit_on_sofa();
    wait(barber_chair);
    get_up_from_sofa();
    signal(sofa);
    sit_in_barber_chair();
    signal(cust_ready);
    wait(finished);
    leave_barber_chair();
    signal(leave_b_chair);
    pay();
    signal(payment);
    wait(receipt);
    exit_shop();
    signal(max_capacity);
}

void barber() {
    while (true) {
        wait(cust_ready);
        wait(coord);
        cut_hair();
        signal(coord);
        signal(finished);
        wait(leave_b_chair);
        signal(barber_chair);
    }
}

The Barbershop (cont.)
void cashier() {
    while (true) {
        wait(payment);
        wait(coord);
        accept_pay();
        signal(coord);
        signal(receipt);
    }
}

void main() {
    parbegin(customer, . . . 50 times . . ., customer,
             barber, barber, barber, cashier);
}