CSNB334 Advanced Operating Systems 4. Concurrency : Mutual Exclusion and Synchronization.


Concurrency

Concurrency is the simultaneous execution of threads; the system must support it.
 Scheduling deals with the execution of “unrelated” threads.
 Concurrency deals with the execution of “related” threads.

Why is it necessary?
 Cooperation: one thread may need to wait for the result of an operation done by another thread, e.g. “calculate average” must wait until all “data reads” are completed.
 Competition: several threads may compete for exclusive use of resources, e.g. two threads trying to increment the value in a memory location.

Concurrency

Thr A and Thr B each execute:
 load mem, reg
 inc reg
 store reg, mem

Critical section: all three instructions of one thread must be executed before the other thread runs its own sequence; otherwise one of the increments can be lost.

Mutual Exclusion

If one thread is going to use a shared resource (a critical resource), such as:
 a file
 a variable
 a printer
 a register, etc.
then the other threads must be excluded from using the same resource.

Critical resource: a resource for which sharing among threads must be controlled by the system.
Critical section of a program: a part of a program where access to a critical resource occurs.

Concurrency

Concurrency: two problems related to concurrency control

Concurrency: mutual exclusion mechanism

Concurrency: requirements for mutual exclusion

 Only one thread at a time is allowed into its CS, among all threads that have CSs for the same resource.
 It must not be possible for a thread requiring access to a CS to be delayed indefinitely: no deadlock, no starvation.
 When no thread is in a CS, any thread requesting entry to the CS must be granted permission without delay.
 No assumptions are made about the relative thread speeds or the number of processors.
 A thread remains inside its CS for a finite time only.

Concurrency

The responsibility for mutual exclusion can be satisfied in a number of ways:
1. Leave it to the processes themselves.
2. Use special-purpose machine instructions.
3. Provide support within the OS: semaphores, message passing, monitors, etc.

Ideally, it is the responsibility of the OS (not the programmer) to enforce mutual exclusion.

Mutual Exclusion — Hardware Support: Interrupt Disabling

On a single-processor machine, the only way threads can interleave is through interrupts. If it is guaranteed that no interrupt occurs while a thread is in its CS, then no other thread can enter the same CS.

This is the simplest solution, but:
 it is not desirable to give a thread the power to control interrupts;
 in a multiprocessor environment, it does not work.
This approach is often used inside the OS itself, because its critical sections are short.

Mutual Exclusion — OS Support: Semaphores

A semaphore is a non-negative integer variable.
 Its value is initialized.
 Its value can be changed only by two atomic operations:
 WAIT (P): wait until the value is greater than 0, then decrement it by 1 (a thread that has to wait is moved to a wait queue).
 SIGNAL (V): increment the value by 1 (if a thread is waiting on that semaphore, it is woken up and continues).

Semaphores

Synchronization

Only one thread can access the buffer at a time. The order of the WAIT operations is crucial:
 If WAIT(Mutex) came before WAIT(SlotFree) in the producer algorithm, the system would deadlock when the buffer is full.
 Likewise, if WAIT(Mutex) came before WAIT(ItemAvailable) in the consumer algorithm, the system would deadlock when the buffer is empty.

Implementation of Semaphores

 The operations must be atomic.
 There must be a queue mechanism for putting a waiting thread into a queue and waking it up later, so the scheduler must be involved.

Define a semaphore as a record:

typedef struct {
    int value;
    struct process *PList;
} semaphore;

Assume two simple operations:
 block suspends the process that invokes it.
 wakeup(P) resumes the execution of a blocked process P.

Implementation of Semaphores

The semaphore operations are now defined as:

wait(S):
    S.value--;
    if (S.value < 0) {
        add this process to S.PList;
        block;
    }

signal(S):
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.PList;
        wakeup(P);
    }

Note that S.value can be negative with this implementation; the magnitude of a negative value is the number of processes waiting on S.

Semaphores

 The semaphore mechanism is handled by the OS.
 Writing correct semaphore algorithms is a complex task.
 All threads using the same semaphore are assumed to have the same priority; the implementation does not take priority into account.

Readers-Writers

 Reader tasks and writer tasks share a resource, say a database.
 Many readers may access the database simultaneously without fear of data corruption (interference).
 However, only one writer may access the database at a time; all other readers and writers must be “locked” out of the database.

Solutions:
 The simple solution gives priority to readers: readers enter the CS regardless of whether a writer is waiting, so writers may starve.
 The second solution requires that once a writer is ready, it performs its write as soon as possible, so readers may starve.

Readers-Writers Problem (readers have priority)

semaphore mutex = 1, wrt = 1;
int rdrcnt = 0;

Writer:
    wait(wrt);          // get exclusive lock
    ... modify object ...
    signal(wrt);        // release exclusive lock

Reader:
    wait(mutex);        // enter rdrcnt CS
    rdrcnt++;
    if (rdrcnt == 1)
        wait(wrt);      // first reader gets the reader lock
    signal(mutex);      // exit rdrcnt CS
    ... reading is performed ...
    wait(mutex);        // enter rdrcnt CS
    rdrcnt--;
    if (rdrcnt == 0)
        signal(wrt);    // last reader releases the lock
    signal(mutex);      // exit rdrcnt CS

Semaphore solution giving writers priority

No new readers are admitted when any writer intends to write.
 readcount / writecount: track whether one or more readers or writers are active.
 x, y: semaphores protecting readcount and writecount.
 wsem: enforces writing under mutual exclusion.
 rsem: holds readers while writing occurs.
 z: allows only one reader at a time to wait on rsem, so that a writer can enter after the current reader finishes.

int readcount = 0;
int writecount = 0;
semaphore x = 1, y = 1, z = 1;
semaphore wsem = 1, rsem = 1;

Reader protocol:

    wait(z);
    wait(rsem);
    wait(x);
    readcount++;
    if (readcount == 1)
        wait(wsem);
    signal(x);
    signal(rsem);
    signal(z);

    ... reading is performed ...

    wait(x);
    readcount--;
    if (readcount == 0)
        signal(wsem);
    signal(x);

Writer protocol:

    wait(y);
    writecount++;
    if (writecount == 1)
        wait(rsem);
    signal(y);
    wait(wsem);

    ... writing is performed ...

    signal(wsem);
    wait(y);
    writecount--;
    if (writecount == 0)
        signal(rsem);
    signal(y);

Points to note

 The first reader blocks new writers; the last reader allows a new writer in.
 The first writer blocks new readers; the last writer allows new readers in.

Example 1

A simple readers/writers program using a one-word shared memory: read-write-1.c

mmap() system call

To memory-map a file, use the mmap() system call, which is declared as follows:

void *mmap(void *addr, size_t len, int prot, int flags, int fildes, off_t off);

addr  This is the address we want the file mapped into. len  This parameter is the length of the data we want to map into memory. This can be any length you want. (rounded to the page size) prot  The "protection" argument allows you to specify what kind of access this process has to the memory mapped region. PROT_READ, PROT_WRITE, and PROT_EXEC, for read, write, and execute permissions, respectively. flags  MAP_SHARED if you want to share your changes to the file with other processes, or MAP_PRIVATE otherwise. If you set it to the latter, your process will get a copy of the mapped region, so any changes you make to it will not be reflected in the original file--thus, other processes will not be able to see them. fildes  This is the file descriptor opened earlier. off  This is the offset in the file that you want to start mapping from. A restriction: this must be a multiple of the virtual memory page size. This page size can be obtained with a call to getpagesize(). mmap() returns -1 on error, and sets errno. Otherwise, it returns a pointer to the start of the mapped data.

Diagram: a region of the file, starting at offset off and of length len, is mapped into the process address space between the heap and the stack (text, bss, heap, mapped region, stack).

An Example of using mmap()

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

int fd, pagesize;
char *data;

fd = open("foo", O_RDONLY);
pagesize = getpagesize();
data = mmap((caddr_t)0, pagesize, PROT_READ, MAP_SHARED, fd, pagesize);

Once this code stretch has run, you can access the first byte of the mapped section of the file using data[0].

Annotations for read-write-1.c:

The mmap function sets up a shared memory segment and returns the base address of that segment. It has the following form:

base_address = mmap(0, num_bytes, protection, flags, -1, 0);

 The second parameter, num_bytes, specifies the number of bytes to be allocated for the new segment.
 The third parameter, protection, specifies whether the segment may be used for reading, writing, executing, or another purpose. For typical shared memory, both read and write permission are specified using the combination PROT_READ | PROT_WRITE.
 In read-write-1.c, the combination MAP_ANONYMOUS | MAP_SHARED in the fourth parameter indicates that a new memory segment should be allocated (rather than mapping space from a file descriptor) and that all writes to the memory segment should be shared with other processes.
 The value -1 in the next-to-last parameter indicates that no existing file descriptor is used: the segment will not be part of an existing file.

Example 2

A simple readers/writers program using a shared buffer and spin locks: read-write-2.c

Annotations for read-write-2.c:

A logical buffer is allocated in shared memory, and buffer indexes, in and out, identify where data will be stored or read by the writer or reader process. More specifically:
 *in gives the next free place in the buffer for the writer to enter data.
 *out gives the first place in the buffer for the reader to extract data.

Writing to the buffer may continue unless the buffer is full (i.e., (*in + 1) % BUF_SIZE == *out), and reading from the buffer may proceed unless the buffer is empty (i.e., *in == *out). Both conditions are tested in spin locks.