Lecture 5: Concurrency: Mutual Exclusion and Synchronization (cont’d)

The Producer/Consumer Problem A producer process produces information that is consumed by a consumer process –Example 1: a print program produces characters that are consumed by a printer –Example 2: an assembler produces object modules that are consumed by a loader We need a buffer to hold items that are produced and eventually consumed Producer/Consumer is a common paradigm for cooperating processes in an OS

P/C: Unbounded Buffer We assume an unbounded buffer consisting of a linear array of elements –in points to the next item to be produced –out points to the next item to be consumed

P/C: Unbounded Buffer We use a semaphore S to perform mutual exclusion on buffer access: only one process at a time can access the buffer We use another semaphore N to synchronize producer and consumer on the number N (= in - out) of items in the buffer: an item can be consumed only after it has been created

P/C: Unbounded Buffer The producer is free to add an item into the buffer at any time –it performs wait(S) before appending and signal(S) afterwards to prevent consumer access –it also performs signal(N) after each append to increment N The consumer must first do wait(N) to see if there is an item to consume, and uses wait(S)/signal(S) to access the buffer

P/C Solution for Unbounded Buffer
Initialization: S.count:=1; N.count:=0; in:=out:=0;

Producer:
repeat
  produce v;
  wait(S);
  append(v);
  signal(S);
  signal(N);
forever

Consumer:
repeat
  wait(N);
  wait(S);
  w := take();
  signal(S);
  consume(w);
forever

Critical sections:
append(v): b[in]:=v; in++;
take(): w:=b[out]; out++; return w;
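The pseudocode above maps almost directly onto counting semaphores in a real language. A minimal sketch in Python (an assumption, since the slides use language-neutral pseudocode): threading.Semaphore stands in for S and N, and a deque plays the role of the unbounded array b[] with its in/out pointers.

```python
import threading
from collections import deque

buffer = deque()                 # unbounded buffer (replaces b[], in, out)
S = threading.Semaphore(1)       # mutual exclusion on buffer access
N = threading.Semaphore(0)       # counts items available to consume
consumed = []

def producer(items):
    for v in items:
        S.acquire()              # wait(S)
        buffer.append(v)         # append(v)
        S.release()              # signal(S)
        N.release()              # signal(N): one more item available

def consumer(n):
    for _ in range(n):
        N.acquire()              # wait(N): block until an item exists
        S.acquire()              # wait(S)
        w = buffer.popleft()     # take()
        S.release()              # signal(S)
        consumed.append(w)       # consume(w)

p = threading.Thread(target=producer, args=(range(5),))
c = threading.Thread(target=consumer, args=(5,))
c.start(); p.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

Starting the consumer first makes the effect of wait(N) visible: the consumer blocks until the producer has signalled at least one item.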

P/C: Unbounded Buffer Remarks: Putting signal(N) inside the CS of the producer (instead of outside) has no effect, since the consumer must always wait for both semaphores before proceeding The consumer must perform wait(N) before wait(S); otherwise deadlock occurs if the consumer enters the CS while the buffer is empty Using semaphores is a difficult art...

P/C: Finite Circular Buffer of Size k We can consume only when the number N of (consumable) items is at least 1 (note that N is no longer in - out, since in and out now wrap around mod k) We can produce only when the number E of empty spaces is at least 1

P/C: Finite Circular Buffer of Size k As before: –we need a semaphore S to have mutual exclusion on buffer access –we need a semaphore N to synchronize producer and consumer on the number of consumable items In addition: –we need a semaphore E to synchronize producer and consumer on the number of empty spaces

Solution of P/C: Finite Circular Buffer of Size k
Initialization: S.count:=1; N.count:=0; E.count:=k; in:=0; out:=0;

Producer:
repeat
  produce v;
  wait(E);
  wait(S);
  append(v);
  signal(S);
  signal(N);
forever

Consumer:
repeat
  wait(N);
  wait(S);
  w := take();
  signal(S);
  signal(E);
  consume(w);
forever

Critical sections:
append(v): b[in]:=v; in:=(in+1) mod k;
take(): w:=b[out]; out:=(out+1) mod k; return w;
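The bounded-buffer solution differs from the unbounded one only by the third semaphore E, initialized to the buffer size k. A runnable Python sketch (again an assumption: the slide pseudocode is language-neutral; variable names inp/outp avoid Python keywords):

```python
import threading

k = 3
buf = [None] * k                 # circular buffer b[0..k-1]
inp = outp = 0                   # in and out pointers
S = threading.Semaphore(1)       # mutual exclusion on buffer access
N = threading.Semaphore(0)       # number of consumable items
E = threading.Semaphore(k)       # number of empty slots
results = []

def producer(items):
    global inp
    for v in items:
        E.acquire()                           # wait(E): block if buffer full
        S.acquire()                           # wait(S)
        buf[inp] = v; inp = (inp + 1) % k     # append(v)
        S.release()                           # signal(S)
        N.release()                           # signal(N)

def consumer(n):
    global outp
    for _ in range(n):
        N.acquire()                           # wait(N): block if buffer empty
        S.acquire()                           # wait(S)
        w = buf[outp]; outp = (outp + 1) % k  # take()
        S.release()                           # signal(S)
        E.release()                           # signal(E): one slot freed
        results.append(w)                     # consume(w)

threads = [threading.Thread(target=producer, args=(range(10),)),
           threading.Thread(target=consumer, args=(10,))]
for t in threads: t.start()
for t in threads: t.join()
print(results)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Producing 10 items through a 3-slot buffer exercises wait(E): the producer blocks whenever it gets 3 items ahead of the consumer.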

Producer/Consumer using Monitors Two types of processes: –producers –consumers Synchronization is now confined within the monitor. Append(.) and Take(.) are procedures within the monitor: they are the only means by which producers and consumers can access the buffer. If these procedures are correct, synchronization will be correct for all participating processes.

ProducerI:
repeat
  produce v;
  Append(v);
forever

ConsumerI:
repeat
  Take(v);
  consume v;
forever

Monitor for the bounded P/C problem Buffer: –buffer: array[0..k-1] of items; Two condition variables: –notfull: csignal(notfull) indicates that the buffer is not full –notempty: csignal(notempty) indicates that the buffer is not empty Buffer pointers and counts: –nextin: points to next item to be appended –nextout: points to next item to be taken –count: holds the number of items in the buffer

Monitor for the Bounded P/C Problem
Monitor boundedbuffer:
  buffer: array[0..k-1] of items;
  nextin:=0, nextout:=0, count:=0: integer;
  notfull, notempty: condition;

Append(v):
  if (count = k) cwait(notfull);
  buffer[nextin] := v;
  nextin := (nextin + 1) mod k;
  count++;
  csignal(notempty);

Take(v):
  if (count = 0) cwait(notempty);
  v := buffer[nextout];
  nextout := (nextout + 1) mod k;
  count--;
  csignal(notfull);
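Python has no monitor construct, but one shared lock plus two Condition variables approximates it closely (this mapping is my assumption, not from the slides). One deliberate difference: the slide's Hoare-style monitor can use "if" before cwait because a signalled process runs immediately; Python conditions are Mesa-style (a signalled waiter re-acquires the lock later), so the sketch must recheck with "while".

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock, two condition variables."""
    def __init__(self, k):
        self.buf = [None] * k
        self.k = k
        self.nextin = self.nextout = self.count = 0
        self.lock = threading.Lock()                    # the monitor's mutual exclusion
        self.notfull = threading.Condition(self.lock)   # cwait/csignal(notfull)
        self.notempty = threading.Condition(self.lock)  # cwait/csignal(notempty)

    def append(self, v):
        with self.lock:
            while self.count == self.k:    # 'while', not 'if': Mesa semantics
                self.notfull.wait()        # cwait(notfull)
            self.buf[self.nextin] = v
            self.nextin = (self.nextin + 1) % self.k
            self.count += 1
            self.notempty.notify()         # csignal(notempty)

    def take(self):
        with self.lock:
            while self.count == 0:
                self.notempty.wait()       # cwait(notempty)
            v = self.buf[self.nextout]
            self.nextout = (self.nextout + 1) % self.k
            self.count -= 1
            self.notfull.notify()          # csignal(notfull)
            return v

bb = BoundedBuffer(2)
out = []
prod = threading.Thread(target=lambda: [bb.append(i) for i in range(6)])
cons = threading.Thread(target=lambda: out.extend(bb.take() for _ in range(6)))
prod.start(); cons.start(); prod.join(); cons.join()
print(out)  # [0, 1, 2, 3, 4, 5]
```

As the slide says, all synchronization lives inside Append/Take; the producer and consumer bodies contain no semaphore calls at all.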

Message Passing Is a general method used for inter-process communication (IPC) –for processes on the same computer –for processes in a distributed system Can be another means to provide process synchronization and mutual exclusion Two basic primitives: –send(destination, message) –receive(source, message) For both operations, the process may or may not be blocked

Synchronization with Message Passing For the sender: typically does not block on send(.,.) –can send several messages to multiple destinations –but the sender usually expects an acknowledgment of message receipt (in case the receiver fails) For the receiver: typically blocks on receive(.,.) –the receiver usually needs the information before proceeding –but must not be blocked indefinitely if the sender fails before performing send(.,.)

Synchronization in Message Passing Other possibilities are sometimes offered Example: blocking send, blocking receive: –both are blocked until the message is received –occurs when the communication link is unbuffered (no message queue) –provides tight synchronization (rendez-vous)

Addressing in Message Passing Direct addressing: –when a specific process identifier is used for source/destination –it might be impossible to specify the source ahead of time (ex: a print server) Indirect addressing (more convenient): –messages are sent to a shared mailbox, which consists of a queue of messages –senders place messages in the mailbox, receivers pick them up

Enforcing Mutual Exclusion with Message Passing Create a mailbox mutex shared by the n processes: –send() is non-blocking –receive() blocks when mutex is empty Initialization: send(mutex, "go"); The first Pi that executes receive() enters its CS; the others block until Pi resends the message.

Process Pi:
var msg: message;
repeat
  receive(mutex, msg);
  CS
  send(mutex, msg);
  RS
forever
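This token-passing scheme can be sketched with queue.Queue as the shared mailbox (an assumption: the slide's mailbox is abstract; Queue gives the required blocking receive and non-blocking send between threads).

```python
import threading, queue

mutex = queue.Queue()      # the shared 'mutex' mailbox
mutex.put("go")            # Initialization: send(mutex, "go")
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        msg = mutex.get()  # receive(mutex, msg): blocks while the mailbox is empty
        counter += 1       # critical section: safe, only the token holder runs it
        mutex.put(msg)     # send(mutex, msg): pass the token back
        # remainder section would go here

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000
```

Exactly one "go" message exists, so at most one thread is ever past its receive() and inside the critical section; the unprotected-looking counter += 1 is therefore safe.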

The Bounded-Buffer P/C Problem with Message Passing The producer places items (inside messages) in the mailbox mayconsume mayconsume acts as the buffer: the consumer can consume an item whenever at least one message is present Mailbox mayproduce is filled initially with k null messages (k = buffer size) The size of mayproduce shrinks with each production and grows with each consumption Can support multiple producers/consumers

The Bounded-Buffer P/C Problem with Message Passing
Producer:
var pmsg: message;
repeat
  receive(mayproduce, pmsg);
  pmsg := produce();
  send(mayconsume, pmsg);
forever

Consumer:
var cmsg: message;
repeat
  receive(mayconsume, cmsg);
  consume(cmsg);
  send(mayproduce, null);
forever
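The two-mailbox scheme can be sketched with two queue.Queue objects (an assumption: the slides' mailboxes are abstract; Queue provides the blocking receive). The k null messages in mayproduce act as slot tokens.

```python
import threading, queue

k = 3
mayproduce = queue.Queue()   # holds 'empty slot' tokens
mayconsume = queue.Queue()   # holds produced items: this IS the buffer
for _ in range(k):
    mayproduce.put(None)     # k null messages = k empty slots

consumed = []

def producer(n):
    for i in range(n):
        mayproduce.get()         # receive(mayproduce, pmsg): wait for a free slot
        mayconsume.put(i)        # pmsg := produce(); send(mayconsume, pmsg)

def consumer(n):
    for _ in range(n):
        item = mayconsume.get()  # receive(mayconsume, cmsg)
        consumed.append(item)    # consume(cmsg)
        mayproduce.put(None)     # send(mayproduce, null): return the slot token

p = threading.Thread(target=producer, args=(8,))
c = threading.Thread(target=consumer, args=(8,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

At any instant the messages in mayproduce plus those in mayconsume total k, which is exactly the bounded-buffer invariant.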

Spin-Locks (Busy Waiting) Inefficient on uniprocessors: they waste CPU cycles. On multiprocessors, cache coherence effects can also make them inefficient.

Problem:
lock = false;                               /* init */
while (TST(lock) == TRUE);                  /* busy waiting to get the lock causes bus contention */
lock = false;                               /* unlock */

Solution (test-and-test-and-set):
lock = false;                               /* init */
while (lock == TRUE || TST(lock) == TRUE);  /* spinning is done in the cache while the lock is busy */
lock = false;                               /* unlock */
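Python cannot issue a real TST instruction, but the test-and-test-and-set structure can be simulated (my assumption: a small lock stands in for the hardware atomicity of TST). The point of the sketch is the two-level loop: spin on a plain read first, attempt the atomic operation only when the lock looks free.

```python
import threading

class SpinLock:
    """Test-and-test-and-set spin lock; TST is simulated, real code
    would use an atomic hardware instruction."""
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def _test_and_set(self):
        # Atomically read the old value and set the flag to True.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while True:
            # First spin on a plain read: on a multiprocessor this spins
            # in the local cache instead of hammering the bus with TSTs.
            while self._flag:
                pass
            if not self._test_and_set():  # TST only when the lock looks free
                return

    def release(self):
        self._flag = False

lock = SpinLock()
total = 0
def work():
    global total
    for _ in range(1000):
        lock.acquire()
        total += 1          # critical section
        lock.release()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(total)  # 2000
```

Under the GIL this is only a functional model, not a performance demonstration; the cache-contention argument applies to real multiprocessor hardware.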

Cache Coherence Effect TST causes cache invalidations even when it fails to acquire the lock Solution: keep spinning in the cache as long as the lock is busy At release, the lock line is invalidated and each processor incurs a read miss; the first processor to resolve the miss acquires the lock Processors that leave the in-cache spin but then fail on TST generate more cache misses Better solution: introduce random delays

Spinning vs. Blocking Spinning is good when no other thread is waiting for the processor or the lock is quickly released Blocking is expensive but necessary to let concurrent threads run (especially the one that holds the lock) Combine spinning with blocking: when a thread fails to acquire a lock, it spins for some time and then blocks If the time spent spinning equals the cost of a context switch, this scheme is 2-competitive More sophisticated adaptive schemes are based on the observed lock-waiting time

Kernel Emulation of Atomic Operations on Uniprocessors The kernel can emulate a read-modify-write instruction in the process address space because it can avoid rescheduling This solution is pessimistic and expensive Optimistic approach: define Restartable Atomic Sequences (RAS) –practically no overhead if no interrupt occurs –recognize when an interrupt occurs and restart the sequence –needs kernel support to register a RAS and to detect whether a thread switch occurred inside a RAS

TST Emulation Using RAS
Test_and_set(p) {
  int result;
  result = 1;
  BEGIN RAS
  if (p == 1)
    result = 0;
  else
    p = 1;
  END RAS
  return result;
}

Unix SVR4 Concurrency Mechanisms To communicate data across processes: –Pipes –Messages –Shared memory To trigger actions by other processes: –Signals –Semaphores

Unix Pipes A shared bounded FIFO queue written by one process and read by another –based on the producer/consumer model –OS enforces mutual exclusion: only one process at a time can access the pipe –if there is not enough room to write, the producer is blocked, else it writes –consumer is blocked if attempting to read more bytes than are currently in the pipe –accessed by a file descriptor, like an ordinary file –processes sharing the pipe are unaware of each other’s existence
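The pipe behavior described above can be observed directly from Python's os module (a sketch under the assumption of a POSIX-like system; os.pipe returns the read and write file descriptors):

```python
import os, threading

r, w = os.pipe()   # kernel-managed bounded FIFO: read end r, write end w

def producer():
    for i in range(3):
        os.write(w, f"msg{i}\n".encode())  # blocks only if the pipe buffer is full
    os.close(w)                            # closing the write end signals EOF

t = threading.Thread(target=producer)
t.start()

received = b""
while True:
    chunk = os.read(r, 4096)   # blocks until data arrives; b"" means writer closed
    if not chunk:
        break
    received += chunk
t.join()
os.close(r)
print(received.decode())       # prints msg0, msg1, msg2 on separate lines
```

The two ends behave exactly like the slide's producer/consumer: the kernel enforces mutual exclusion and blocking, and neither side needs to know who is on the other end.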

Unix Messages A process can create or access a message queue (like a mailbox) with the msgget system call. The msgsnd and msgrcv system calls are used to send and receive messages to/from a queue There is a "type" field in message headers –FIFO access within each message type –each type defines a communication channel A process is blocked (put to sleep) when: –trying to receive from an empty queue –trying to send to a full queue

Shared Memory in Unix A block of virtual memory shared by multiple processes The shmget system call creates a new region of shared memory or returns an existing one A process attaches a shared memory region to its virtual address space with the shmat system call Mutual exclusion must be provided by the processes themselves Fastest form of IPC provided by Unix
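Python's standard library does not expose the System V shmget/shmat calls the slide names, but multiprocessing.shared_memory is a close analogue (this substitution is mine, not from the slides): a named region is created once and other processes attach to it by name. The sketch attaches a second handle in the same process purely to stay self-contained.

```python
from multiprocessing import shared_memory

# Create a named region (analogue of shmget with IPC_CREAT).
shm = shared_memory.SharedMemory(create=True, size=16)

# A second handle attached by name (analogue of shmat in another
# process; done here in the same process to keep the sketch runnable).
other = shared_memory.SharedMemory(name=shm.name)

other.buf[:5] = b"hello"     # write through one mapping: a plain memory store
data = bytes(shm.buf[:5])    # read through the other: no copying via the kernel
print(data)                  # b'hello'

other.close()
shm.close()
shm.unlink()                 # remove the region (analogue of shmctl IPC_RMID)
```

Note what the slide stresses: nothing here provides mutual exclusion; concurrent writers would need a semaphore or lock on top.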

Unix Semaphores A generalization of counting semaphores (more operations are permitted). A semaphore includes: –the current value S of the semaphore –the number of processes waiting for S to increase –the number of processes waiting for S to be 0 There are queues of processes blocked on a semaphore The semget system call creates an array of semaphores The semop system call performs a list of operations on a set of semaphores

Unix Signals Similar to hardware interrupts, but without priorities Each signal is represented by a numeric value. Examples: –02, SIGINT: to interrupt a process –09, SIGKILL: to terminate a process Each signal is maintained as a single bit in the process table entry of the receiving process –the bit is set when the corresponding signal arrives (there are no waiting queues) A signal is processed as soon as the process enters user mode A default action (e.g., termination) is performed unless a signal handler function has been provided for that signal (by using the signal system call)
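The handler-registration mechanism described above can be demonstrated from Python's signal module (a sketch assuming a POSIX system, since SIGUSR1 and os.kill to oneself are POSIX features):

```python
import os, signal

caught = []

def handler(signum, frame):
    # User-provided handler: replaces the default action for this signal.
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)  # register the handler (cf. the signal system call)
os.kill(os.getpid(), signal.SIGUSR1)    # deliver the signal to ourselves
print(caught == [signal.SIGUSR1])       # True: handler ran instead of the default action
```

Without the signal.signal call the default action for SIGUSR1 would terminate the process; with it, delivery simply invokes the Python function, mirroring the slide's description.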