CSNB334 Advanced Operating Systems 4. Concurrency : Mutual Exclusion and Synchronization
Concurrency
Concurrency is the simultaneous execution of threads. The system must support concurrent execution of threads.
Scheduling: deals with the execution of "unrelated" threads.
Concurrency: deals with the execution of "related" threads.
Why is it necessary?
Cooperation: one thread may need to wait for the result of some operation done by another thread, e.g. "Calculate Average" must wait until all "data reads" are completed.
Competition: several threads may compete for exclusive use of resources, e.g. two threads trying to increment the value in a memory location.
Concurrency
Thread A and Thread B each execute: load mem, reg; inc reg; store reg, mem.
Critical Section: all three instructions of one thread must be executed before the other thread runs its own, otherwise an update can be lost.
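A minimal sketch of this lost-update race using POSIX threads (an illustration, not part of the original slides): two threads increment a shared counter with no mutual exclusion, so the final count is usually less than expected.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 1000000

    long counter = 0;                    /* shared memory location */

    void *incr(void *arg)
    {
        for (int i = 0; i < ITERS; i++)
            counter++;                   /* load, inc, store: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, incr, NULL);
        pthread_create(&b, NULL, incr, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 2000000, but interleaved load/inc/store loses updates. */
        printf("counter = %ld\n", counter);
        return 0;
    }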
Mutual Exclusion
If one thread is going to use a shared resource (critical resource) such as a file, a variable, a printer, a register, etc., the other threads must be excluded from using the same resource.
Critical resource: a resource for which sharing by the threads must be controlled by the system.
Critical section of a program: a part of a program where access to a critical resource occurs.
Concurrency
Two problems related to concurrency control: mutual exclusion and synchronization.
Concurrency
Mutual Exclusion Mechanism
Concurrency requirements
Only one thread at a time is allowed into its CS, among all threads that have CSs for the same resource.
It must not be possible for a thread requiring access to a CS to be delayed indefinitely: no deadlock, no starvation.
When no thread is in a CS, any thread requesting entry to the CS must be granted permission without delay.
No assumptions are made about the relative thread speeds or the number of processors.
A thread remains inside its CS for a finite time only.
Concurrency
The requirement of mutual exclusion can be satisfied in a number of ways:
1. Leave it to the processes.
2. Use special-purpose machine instructions.
3. Provide some support within the OS: semaphores, message passing, monitors, etc.
It is the responsibility of the OS (not the programmer) to enforce mutual exclusion.
Mutual Exclusion: Hardware Support
Interrupt Disabling
On a single-processor machine, threads can interleave only by means of interrupts. If it is guaranteed that no interrupt occurs while a thread is in its CS, then no other thread can enter the same CS.
Simplest solution, but it is not desirable to give a thread the power to control interrupts.
In a multiprocessor environment it does not work.
This approach is often used by some OS threads (because their CSs are short).
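As a rough illustration only (not from the slides), the protocol looks like the following sketch; disable_interrupts() and enable_interrupts() are hypothetical kernel-level helpers, since ordinary user programs cannot control interrupts.

    /* Hypothetical kernel-level helpers; user code cannot normally do this. */
    void disable_interrupts(void);
    void enable_interrupts(void);

    void thread_body(void)
    {
        while (1) {
            disable_interrupts();  /* no preemption can occur from here on      */
            /* ... critical section: access the shared resource ...             */
            enable_interrupts();   /* other threads may now be scheduled        */
            /* ... remainder section: work not involving the shared resource    */
        }
    }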
Mutual Exclusion: OS Support (Semaphores)
A semaphore is a non-negative integer variable. Its value is initialized, and can then be changed only by two "atomic" operations:
WAIT (P): wait until the value is greater than 0, then decrement it by 1. (A thread that has to wait is moved to a wait queue.)
SIGNAL (V): increment the value by 1. (If there is a thread waiting on that semaphore, it is woken up and continues.)
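A minimal sketch using POSIX semaphores (sem_wait and sem_post play the roles of WAIT and SIGNAL; this example is not part of the original slides): a binary semaphore initialized to 1 protects the shared counter from the race shown earlier.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define ITERS 1000000

    long counter = 0;
    sem_t mutex;                          /* binary semaphore, initialized to 1 */

    void *incr(void *arg)
    {
        for (int i = 0; i < ITERS; i++) {
            sem_wait(&mutex);             /* WAIT (P): enter critical section   */
            counter++;
            sem_post(&mutex);             /* SIGNAL (V): leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        sem_init(&mutex, 0, 1);           /* initial value 1: one thread at a time */
        pthread_create(&a, NULL, incr, NULL);
        pthread_create(&b, NULL, incr, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);   /* now always 2000000 */
        sem_destroy(&mutex);
        return 0;
    }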
Semaphores
Synchronization
Only one thread can access the buffer at a time.
The order of the WAIT operations is crucial. E.g. if WAIT(Mutex) comes before WAIT(SlotFree) in the producer algorithm, the system would deadlock when the buffer is full; and if WAIT(Mutex) comes before WAIT(ItemAvailable) in the consumer algorithm, the system would deadlock when the buffer is empty.
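A producer/consumer sketch with POSIX semaphores, reusing the semaphore names from the slide (SlotFree, ItemAvailable, Mutex); note that wait on Mutex comes after the counting-semaphore wait in both routines, exactly the ordering the slide says is crucial. This is an illustration, not the course's reference code.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                        /* buffer capacity */

    int buffer[N];
    int in = 0, out = 0;

    sem_t SlotFree;                    /* counts free slots, starts at N           */
    sem_t ItemAvailable;               /* counts stored items, starts at 0         */
    sem_t Mutex;                       /* protects buffer and indexes, starts at 1 */

    void *producer(void *arg)
    {
        for (int item = 1; item <= 20; item++) {
            sem_wait(&SlotFree);       /* first wait for space ...                 */
            sem_wait(&Mutex);          /* ... then lock the buffer                 */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&Mutex);
            sem_post(&ItemAvailable);  /* announce the new item                    */
        }
        return NULL;
    }

    void *consumer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            sem_wait(&ItemAvailable);  /* first wait for an item ...               */
            sem_wait(&Mutex);          /* ... then lock the buffer                 */
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&Mutex);
            sem_post(&SlotFree);       /* announce the free slot                   */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&SlotFree, 0, N);
        sem_init(&ItemAvailable, 0, 0);
        sem_init(&Mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }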
Implementation of Semaphores
The operations must be atomic. There must be a queue mechanism for putting a waiting thread into a queue and waking it up later, so the scheduler must be involved.
Define a semaphore as a record:
    typedef struct {
        int value;
        struct process *PList;   /* list of processes waiting on this semaphore */
    } semaphore;
Assume two simple operations:
block() suspends the process that invokes it.
wakeup(P) resumes the execution of a blocked process P.
Implementation of Semaphores
Semaphore operations are now defined as:
    wait(S):
        S.value--;
        if (S.value < 0) {
            add this process to S.PList;
            block();
        }
    signal(S):
        S.value++;
        if (S.value <= 0) {
            remove a process P from S.PList;
            wakeup(P);
        }
Note that S.value can be negative with this implementation; its absolute value is the number of processes waiting on S.
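For illustration only (not from the slides), the same scheme can be sketched in user space, with a pthread mutex standing in for atomicity and a condition variable standing in for the block/wakeup queue; a real kernel implementation uses the scheduler directly.

    #include <pthread.h>

    typedef struct {
        int value;                    /* may go negative: -value = number of waiters */
        int wakeups;                  /* pending wakeups handed out by signal()      */
        pthread_mutex_t lock;         /* stands in for atomicity                     */
        pthread_cond_t  cond;         /* stands in for the wait queue                */
    } semaphore;

    void sem_init_impl(semaphore *s, int initial)
    {
        s->value = initial;
        s->wakeups = 0;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->cond, NULL);
    }

    void sem_wait_impl(semaphore *s)
    {
        pthread_mutex_lock(&s->lock);
        s->value--;
        if (s->value < 0) {
            /* "add this process to S.PList; block();" */
            while (s->wakeups == 0)
                pthread_cond_wait(&s->cond, &s->lock);
            s->wakeups--;
        }
        pthread_mutex_unlock(&s->lock);
    }

    void sem_signal_impl(semaphore *s)
    {
        pthread_mutex_lock(&s->lock);
        s->value++;
        if (s->value <= 0) {
            /* "remove a process P from S.PList; wakeup(P);" */
            s->wakeups++;
            pthread_cond_signal(&s->cond);
        }
        pthread_mutex_unlock(&s->lock);
    }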
Semaphores
The semaphore mechanism is handled by the OS.
Writing correct semaphore algorithms is a complex task.
All threads using the same semaphore are assumed to have the same priority; the implementation does not take priority into account.
Readers-Writers
Reader tasks and writer tasks share a resource, say a database. Many readers may access the database concurrently without fear of data corruption (interference). However, only one writer may access the database at a time (all other readers and writers must be "locked" out of the database).
Solutions:
The simple solution gives priority to readers: readers enter the CS regardless of whether a writer is waiting, so writers may starve.
The second solution requires that once a writer is ready, it gets to perform its write as soon as possible, so readers may starve.
Readers-Writers Problem (readers have priority)
    semaphore mutex = 1, wrt = 1;
    int rdrcnt = 0;

    Writer:
        wait(wrt);              // get exclusive lock
        ... modify object ...
        signal(wrt);            // release exclusive lock

    Reader:
        wait(mutex);            // enter rdrcnt C.S.
        rdrcnt++;
        if (rdrcnt == 1)
            wait(wrt);          // first reader gets the reader lock
        signal(mutex);          // exit rdrcnt C.S.
        ... reading is performed ...
        wait(mutex);            // enter rdrcnt C.S.
        rdrcnt--;
        if (rdrcnt == 0)
            signal(wrt);        // last reader releases the reader lock
        signal(mutex);          // exit rdrcnt C.S.
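A compact, runnable rendering of this readers-priority protocol using POSIX semaphores and pthreads (an illustration only; the read-write-1.c and read-write-2.c examples discussed later use different mechanisms):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex, wrt;                 /* both initialized to 1 */
    int rdrcnt = 0;
    int shared = 0;                   /* the "database"        */

    void *writer(void *arg)
    {
        sem_wait(&wrt);               /* get exclusive lock     */
        shared++;
        printf("writer wrote %d\n", shared);
        sem_post(&wrt);               /* release exclusive lock */
        return NULL;
    }

    void *reader(void *arg)
    {
        sem_wait(&mutex);             /* enter rdrcnt C.S.      */
        if (++rdrcnt == 1)
            sem_wait(&wrt);           /* first reader locks out writers */
        sem_post(&mutex);

        printf("reader saw %d\n", shared);

        sem_wait(&mutex);             /* enter rdrcnt C.S.      */
        if (--rdrcnt == 0)
            sem_post(&wrt);           /* last reader lets writers in    */
        sem_post(&mutex);
        return NULL;
    }

    int main(void)
    {
        pthread_t r[3], w;
        sem_init(&mutex, 0, 1);
        sem_init(&wrt, 0, 1);
        pthread_create(&w, NULL, writer, NULL);
        for (int i = 0; i < 3; i++)
            pthread_create(&r[i], NULL, reader, NULL);
        pthread_join(w, NULL);
        for (int i = 0; i < 3; i++)
            pthread_join(r[i], NULL);
        return 0;
    }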
Semaphore solution: give writers priority
No new readers are admitted when any writer intends to write.
readcount / writecount : used to see whether one or more readers or writers are active
x, y : semaphores protecting readcount and writecount
wsem : enforces writing under mutual exclusion
rsem : holds readers out while writing occurs
z : allows only one reader at a time to wait on rsem, so that a writer can enter after the current reader finishes
    int readcount = 0;
    int writecount = 0;
    semaphore x = 1, y = 1, z = 1, wsem = 1, rsem = 1;
Reader protocol (to read):
    wait(z);
    wait(rsem);
    wait(x);
    readcount++;
    if (readcount == 1)
        wait(wsem);
    signal(x);
    signal(rsem);
    signal(z);
    ... reading is performed ...
    wait(x);
    readcount--;
    if (readcount == 0)
        signal(wsem);
    signal(x);

Writer protocol (to write):
    wait(y);
    writecount++;
    if (writecount == 1)
        wait(rsem);
    signal(y);
    wait(wsem);
    ... writing is performed ...
    signal(wsem);
    wait(y);
    writecount--;
    if (writecount == 0)
        signal(rsem);
    signal(y);
Points to note
The first reader blocks new writers; the last reader allows a new writer.
The first writer blocks new readers; the last writer allows new readers.
Example 1: A simple readers/writers program using a one-word shared memory: read-write-1.c
mmap() system call To memory map a file, use the mmap() system call, which is defined as follows: void *mmap(void *addr, size_t len, int prot, int flags, int fildes, off_t off);
addr : the address we want the file mapped into.
len : the length of the data we want to map into memory. This can be any length you want (it is rounded up to the page size).
prot : the "protection" argument lets you specify what kind of access this process has to the memory-mapped region: PROT_READ, PROT_WRITE, and PROT_EXEC, for read, write, and execute permission, respectively.
flags : MAP_SHARED if you want to share your changes to the file with other processes, or MAP_PRIVATE otherwise. If you set it to the latter, your process gets a copy of the mapped region, so any changes you make to it will not be reflected in the original file; other processes will therefore not see them.
fildes : the file descriptor opened earlier.
off : the offset in the file that you want to start mapping from. A restriction: this must be a multiple of the virtual memory page size, which can be obtained with a call to getpagesize().
mmap() returns MAP_FAILED (that is, (void *) -1) on error and sets errno; otherwise it returns a pointer to the start of the mapped data.
[Diagram: process address space (text, bss, heap, mapped file region, stack) alongside a file; a region of the file of length len, starting at offset off, is mapped into the process's address space.]
An Example of using mmap()
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    int fd, pagesize;
    char *data;

    fd = open("foo", O_RDONLY);   /* open(), not fopen(): mmap() needs an int descriptor */
    pagesize = getpagesize();
    data = mmap((caddr_t)0, pagesize, PROT_READ, MAP_SHARED, fd, pagesize);
Once this code has run, you can access the first byte of the mapped section of the file (the byte at offset pagesize in "foo") using data[0].
Annotations for read-write-1.c: The mmap procedure (from the library) sets up a shared memory segment and returns the base address for that segment. It has the following form: base_address = mmap(0, num_bytes, protection, flags, -1, 0); The second parameter, num_bytes, specifies the number of bytes to be allocated for the new segment. The third parameter, protection, specifies whether the segment may be used for reading, writing, executing, or other purposes. For typical shared memory, both read and write permission are specified using the combination PROT_READ | PROT_WRITE. In read-write-1.c, the combination MAP_ANONYMOUS | MAP_SHARED in the fourth parameter indicates that a new memory segment should be allocated (rather than mapping space from a file descriptor) and that all writes to the memory segment should be shared with other processes. The value -1 in the next-to-last parameter indicates that no existing file descriptor is used: the segment is not backed by any file.
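Since read-write-1.c itself is not reproduced here, the following is only a guess at the pattern the annotation describes: an anonymous shared word, written by a child process (the writer) and polled by the parent (the reader) after fork().

    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        /* One shared word, visible to both parent and child after fork(). */
        volatile int *word = mmap(0, sizeof(int), PROT_READ | PROT_WRITE,
                                  MAP_ANONYMOUS | MAP_SHARED, -1, 0);
        *word = 0;

        if (fork() == 0) {            /* child: the writer */
            for (int i = 1; i <= 5; i++)
                *word = i;
            _exit(0);
        }

        /* parent: the reader; it simply samples the shared word */
        for (int i = 0; i < 5; i++)
            printf("reader saw %d\n", *word);

        wait(NULL);
        munmap((void *)word, sizeof(int));
        return 0;
    }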
Example 2: A simple readers/writers program using a shared buffer and spin locks: read-write-2.c
Annotations for read-write-2.c: A logical buffer is allocated in shared memory, and buffer indexes, in and out, are used to identify where data will be stored or read by the writer or reader process. More specifically, *in gives the next free place in the buffer for the writer to enter data, and *out gives the first place in the buffer for the reader to extract data. Writing to the buffer may continue unless the buffer is full (i.e., (*in + 1) % BUF_SIZE == *out), and reading from the buffer may proceed unless the buffer is empty (i.e., *in == *out). Both conditions are tested with spin locks (busy-waiting loops).
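Again, read-write-2.c itself is not shown, so this is only a sketch of the scheme the annotation describes (the buffer size, layout of in and out, and use of fork() are assumptions): writer and reader busy-wait on the full and empty conditions over a shared circular buffer.

    #include <sys/mman.h>
    #include <unistd.h>
    #include <stdio.h>

    #define BUF_SIZE 8                              /* assumed buffer size */

    int main(void)
    {
        /* Circular buffer plus the two indexes, all in anonymous shared memory. */
        volatile int *buf = mmap(0, (BUF_SIZE + 2) * sizeof(int),
                                 PROT_READ | PROT_WRITE,
                                 MAP_ANONYMOUS | MAP_SHARED, -1, 0);
        volatile int *in  = &buf[BUF_SIZE];         /* next free slot for the writer */
        volatile int *out = &buf[BUF_SIZE + 1];     /* next slot for the reader      */
        *in = *out = 0;

        if (fork() == 0) {                          /* child: the writer */
            for (int i = 0; i < 20; i++) {
                while ((*in + 1) % BUF_SIZE == *out)
                    ;                               /* spin while the buffer is full  */
                buf[*in] = i;
                *in = (*in + 1) % BUF_SIZE;
            }
            _exit(0);
        }

        for (int i = 0; i < 20; i++) {              /* parent: the reader */
            while (*in == *out)
                ;                                   /* spin while the buffer is empty */
            printf("read %d\n", buf[*out]);
            *out = (*out + 1) % BUF_SIZE;
        }
        return 0;
    }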