Concurrency: Mutual Exclusion and Synchronization Chapter 5.

1 Concurrency: Mutual Exclusion and Synchronization Chapter 5

2 Concurrency
- Arises from loading multiple processes into memory at the same time (multiprogramming)
- Uniprocessor system: interleaved execution of processes
- Multiprocessor system: interleaved and overlapped execution of processes
- Interleaving and overlapping present the same fundamental problems

3 Problems with concurrency
- Concurrent processes (or threads) often need to share global data and resources
- If there is no controlled access to shared data, some processes will obtain an inconsistent view of this data
- The actions performed by concurrent processes will then depend on the order in which their execution is interleaved

4 An example
- Processes P1 and P2 both run this procedure and have access to the same variable "a":

static char a;

void echo()
{
  cin >> a;
  cout << a;
}

- A process can be interrupted anywhere
- If P1 is first interrupted after the user input and P2 then executes entirely, the character echoed by P1 will be the one read by P2!

5 Process Interaction
- Processes unaware of each other
  - multiple independent processes
  - competition for resources
- Processes indirectly aware of each other
  - do not know each other's process ID
  - cooperation by sharing some object
- Processes directly aware of each other
  - know each other's process ID
  - cooperation by communication

6 Competition Among Processes for Resources
- If two or more processes wish to access a single resource, one process will be allocated the resource and the others will have to wait
- Control problems:
  - mutual exclusion
  - deadlock
  - starvation

7 Critical Section
- Critical resource: a resource to which only one process may be allowed access at a time (mutual exclusion)
- Critical section: the portion of the program that uses the critical resource
  - enforce mutual exclusion on the critical section

8 Cooperation Among Processes by Sharing
- Processes use and update shared data such as shared variables, files, and databases
- They cooperate to ensure data integrity
- Control problems: mutual exclusion, deadlock, starvation
- Only write operations must be mutually exclusive
- New problem: data coherence

9 Data Coherence Example
- Items a and b must satisfy a = b
- Process P1: a = a + 1; b = b + 1;
- Process P2: a = 2 * a; b = 2 * b;
- Problem sequence of execution (starting with a = b = 2):

  a = a + 1;  => a = 3
  a = 2 * a;  => a = 6
  b = 2 * b;  => b = 4
  b = b + 1;  => b = 5

10 Cooperation Among Processes by Communication
- Communication provides a way to synchronize, or coordinate, the various activities
- Nothing is shared: mutual exclusion is not a requirement
- Deadlock is still possible
  - each process waiting for a message from the other process
- Starvation is still possible
  - two processes sending messages to each other while a third process waits for a message

11 Requirements for Mutual Exclusion, I.
- Only one process at a time is allowed in the critical section for a resource
- If a process halts in its noncritical section, it must not interfere with other processes
- A process requiring the critical section must not be delayed indefinitely: no deadlock or starvation

12 Requirements for Mutual Exclusion, II.
- A process must not be delayed access to a critical section when no other process is using it
- No assumptions are made about relative process speeds or the number of processes
- A process remains inside its critical section for a finite time only

13 The critical section problem, I.
- When a process executes code that manipulates shared data (or a resource), we say that the process is in its critical section (CS) for that shared data
- The execution of critical sections must be mutually exclusive: at any time, only one process is allowed to execute in its critical section (even with multiple processors)
- Each process must therefore request permission to enter its critical section (CS)

14 The critical section problem, II.
- The section of code implementing this request is called the entry section
- The critical section (CS) might be followed by an exit section
- The remaining code is the remainder section (RS)
- The critical section problem is to design a protocol that the processes can use so that the requirements of mutual exclusion are met

15 Framework for analysis of solutions
- Each process executes at nonzero speed, but no assumption is made about the relative speed of the n processes
- General structure of a process (for solutions, we need to specify the entry and exit sections):

repeat
  entry section
  critical section
  exit section
  remainder section
forever

- Many processors may be present, but memory hardware prevents simultaneous access to the same memory location
- No assumption is made about the order of interleaved execution

16 Types of solutions
- Software solutions
  - algorithms whose correctness does not rely on any other assumptions (see framework)
- Hardware solutions
  - rely on some special machine instructions
- Operating system solutions
  - provide some functions and data structures to the programmer

17 Types of Solutions
- Note: the solutions presented assume
  - a single processor system, or
  - a multiprocessor system with shared main memory
- Later in Chapter 5, we will examine message passing. This technique lends itself best to
  - distributed systems (clusters)

18 Software solutions
- We consider first the case of 2 processes
  - Algorithms 1 and 2 are incorrect
  - Algorithm 3 is correct (Peterson's algorithm)
- Notation
  - We start with 2 processes: P0 and P1
  - Pi and Pj refer to different processes (i not equal to j)

19 Algorithm 1
- The shared variable turn is initialized (to 0 or 1) before any Pi executes
- Pi's critical section is executed iff turn = i
- Pi busy-waits if Pj is in its CS: mutual exclusion is satisfied
- Drawbacks:
  - processes must strictly alternate in the use of the CS
  - if Pi fails, Pj is blocked forever

Process P0:
repeat
  while (turn != 0) {};
  CS
  turn := 1;
forever

Process P1:
repeat
  while (turn != 1) {};
  CS
  turn := 0;
forever

20 Algorithm 2
- Use one Boolean variable per process: flag[0] and flag[1]
- Pi signals that it is ready to enter its CS by setting flag[i] := true
- Mutual exclusion is satisfied, but deadlock is possible
- If we have the sequence:
  - T0: flag[0] := true
  - T1: flag[1] := true
- then both processes will wait forever to enter their CS: we have a deadlock

Process P0:
repeat
  flag[0] := true;
  while (flag[1]) {};
  CS
  flag[0] := false;
forever

Process P1:
repeat
  flag[1] := true;
  while (flag[0]) {};
  CS
  flag[1] := false;
forever

21 Algorithm 3 (Peterson's algorithm)
- Initialization: flag[0] := flag[1] := false; turn := 0 or 1
- A process declares that it is ready to enter its CS by setting flag[i] := true
- If both processes attempt to enter their CS simultaneously, only one turn value will last
- Exit section: flag[i] := false specifies that Pi is no longer ready to enter its CS

Process P0:
repeat
  flag[0] := true;
  turn := 1;
  while (flag[1] and turn = 1) {};
  CS
  flag[0] := false;
forever

Process P1:
repeat
  flag[1] := true;
  turn := 0;
  while (flag[0] and turn = 0) {};
  CS
  flag[1] := false;
forever

22 Algorithm 3: proof of correctness
- Mutual exclusion is preserved:
  - P0 and P1 can both be in their CS only if flag[0] = flag[1] = true and turn = i for each Pi simultaneously (impossible)
- No mutual blocking:
  - Assume P0 is blocked in its while loop (flag[1] = true and turn = 1). It can enter its CS when either flag[1] = false or turn = 0. Three exhaustive cases:
    1. P1 has no interest in its CS: impossible, because that implies flag[1] = false.
    2. P1 is waiting for its CS: impossible, because if turn = 1, P1 can enter its CS.
    3. P1 is monopolizing access to its CS: cannot happen, because P1 sets turn = 0 before each attempt to enter its CS.

23 Drawbacks of software solutions
- Processes that are requesting to enter their critical section are busy waiting (consuming processor time needlessly)
- If CSs are long, it would be more efficient to block the processes that are waiting

24 Hardware solutions: interrupt disabling
- On a uniprocessor there is no overlapping of execution, so if a process is not interrupted in its CS, mutual exclusion is guaranteed:

Process Pi:
repeat
  disable interrupts
  critical section
  enable interrupts
  remainder section
forever

- Degrades efficiency, because the processor cannot interleave programs while interrupts are disabled
- On a multiprocessor, mutual exclusion is not preserved
- Generally not an acceptable solution

25 Hardware solutions: special machine instructions
- Normally, memory hardware ensures that access to a memory location is mutually exclusive
- Extension: machine instructions that perform two actions atomically (indivisibly) on the same memory location (e.g., reading and testing)
- Not subject to interference from other instructions, because both actions are performed in a single instruction cycle (even with multiple processors)

26 The test-and-set instruction
- A C++ description of test-and-set:

bool testset(int &i)
{
  if (i == 0) {
    i = 1;
    return true;
  } else {
    return false;
  }
}

- An algorithm that uses testset for mutual exclusion:
  - the shared variable b is initialized to 0
  - only the first Pi that sets b enters its CS

Process Pi:
repeat
  repeat {} until testset(b);
  CS
  b := 0;
  RS
forever

27 The test-and-set instruction (cont.)
- Mutual exclusion is preserved: if Pi enters its CS, the other Pj are busy waiting
- Problem: still uses busy waiting
- Processors (e.g., Pentium) often provide an atomic xchg(a,b) instruction that swaps the contents of a and b (a register and a memory location)
- But xchg(a,b) suffers from the same drawbacks as test-and-set

28 Using xchg for mutual exclusion
- The shared variable b is initialized to 0
- Each Pi has a local variable k
- The only Pi that can enter its CS is the one that finds b = 0
- This Pi excludes all the other Pj by setting b to 1
- It sets b back to 0 on leaving its CS

Process Pi:
repeat
  k := 1;
  repeat xchg(k, b) until k = 0;
  CS
  b := 0;
  RS
forever

29 Properties of machine instructions
- Advantages:
  - Applicable to any number of processes
  - Can be used on single-processor as well as multiprocessor machines sharing main memory
  - Simple and easy to verify

30 Properties ... (cont'd)
- Disadvantages:
  - Busy waiting consumes processor time
  - Starvation is possible: selection of the waiting process is arbitrary; some process could be denied access indefinitely
  - Deadlock is possible: P1 is interrupted in its CS; P2, of higher priority, tries to use the same resource and goes into busy waiting; P1 is not dispatched because it is of lower priority

31 Semaphores, I.
- A synchronization tool (provided by the OS) that does not require busy waiting
- Processes cooperate by means of signals; a process can be made to wait on a signal
- A semaphore S is an integer variable with the following operations defined on it:
  - Initialization to a non-negative value (1 for mutual exclusion)
  - wait(S): decrements the value, blocks the process if the value becomes negative
  - signal(S): increments the value, unblocks one process if the value is not positive

32 Semaphores
- Note: semaphores can be used for
  - Synchronization
  - Mutual exclusion
  - Two separate, but similar problems
- The descriptions presented are general
  - Notes are added for mutual exclusion to disallow semaphore.count > 1

33 Semaphores, II.
- Hence, a semaphore is in fact a record (structure):

type semaphore = record
  count: integer;
  queue: list of process
end;
var S: semaphore;

- When a process must wait for a semaphore S, it is blocked and put on the semaphore's queue
- The signal operation removes (according to a fair policy like FIFO) one process from the queue and puts it in the list of ready processes

34 Semaphore's operations (atomic)

wait(S):
  S.count--;
  if (S.count < 0) {
    block this process
    place this process in S.queue
  }

signal(S):
  S.count++;
  if (S.count <= 0) {
    remove a process P from S.queue
    place this process P on the ready list
  }

- S.count must be initialized to a nonnegative value (1 for mutual exclusion)

35 Semaphores: observations, I.
- When S.count >= 0: S.count is the number of processes that can execute wait(S) without being blocked (must not exceed 1 for mutual exclusion)
- When S.count < 0: |S.count| is the number of processes waiting on S
- Atomicity and mutual exclusion: no two processes can be in wait(S) or signal(S) (on the same S) at the same time (even with multiple CPUs)
- Hence the blocks of code defining wait(S) and signal(S) are, in fact, critical sections

36 Semaphores: observations, II.
- The critical sections defined by wait(S) and signal(S) are very short: typically 10 instructions
- Solutions:
  - uniprocessor: disable interrupts during these operations (i.e., for a very short period). This does not work on a multiprocessor machine.
  - multiprocessor: use the previous software or hardware schemes. The amount of busy waiting should be small.

37 Using semaphores for solving critical section problems
- For n processes
- Initialize S.count to 1
- Then only 1 process is allowed into the CS (mutual exclusion)
- To allow k processes into the CS, initialize S.count to k (no longer mutual exclusion)
- When the CS is exited, S.count is incremented and one of the blocked processes is moved to the Ready state

Process Pi:
repeat
  wait(S);
  CS
  signal(S);
  RS
forever

38 Using semaphores to synchronize processes
- We have 2 processes: P1 and P2
- Statement S1 in P1 needs to be performed before statement S2 in P2
- Define a semaphore "synch", initialized to 0
- Proper synchronization is achieved by having in P1:
  S1;
  signal(synch);
- and having in P2:
  wait(synch);
  S2;

39 Binary semaphores
- The semaphores we have studied are called counting (or integer) semaphores
- There are also binary semaphores
  - similar to counting semaphores, except that "count" is Boolean valued (0 or 1)
  - counting semaphores can be implemented with binary semaphores
  - generally more difficult to use than counting semaphores (e.g., they cannot be initialized to an integer k > 1)

40 Binary semaphores

waitB(S):
  if (S.value = 1) {
    S.value := 0;
  } else {
    block this process
    place this process in S.queue
  }

signalB(S):
  if (S.queue is empty) {
    S.value := 1;
  } else {
    remove a process P from S.queue
    place this process P on the ready list
  }

41 Spinlocks
- Counting semaphores that use busy waiting (instead of blocking)
- Useful on multiprocessors when critical sections last only a short time
- We waste a bit of CPU time but save the process-switch overhead

wait(S):
  S--;
  while S < 0 do {};

signal(S):
  S++;

42 Problems with semaphores
- Semaphores provide a powerful tool for enforcing mutual exclusion and coordinating processes
- But the wait(S) and signal(S) calls are scattered among several processes, so it is difficult to understand their effects
- Usage must be correct in all the processes
- One bad (or malicious) process can cause the entire collection of processes to fail

43 The producer/consumer problem
- A producer process produces data items that are consumed by a consumer process
  - E.g., a print program produces characters that are consumed by a printer
- We need a buffer to hold items that have been produced but not yet consumed
- A common paradigm for cooperating processes

44 P/C: unbounded buffer
- We first assume an unbounded buffer consisting of a linear array of elements
- in points to the next item to be produced
- out points to the next item to be consumed

45 P/C: unbounded buffer
- We need a semaphore S to enforce mutual exclusion on the buffer: only 1 process at a time can access the buffer
- We need another semaphore N to synchronize the producer and consumer on the number N (= in - out) of items in the buffer
  - an item can be consumed only after it has been created

46 P/C: unbounded buffer
- The producer is free to add an item to the buffer at any time: it performs wait(S) before appending and signal(S) afterwards to enforce mutual exclusion
- The producer also performs signal(N) after each append to increment N
- The consumer must first do wait(N) to check that there is an item to consume, and uses wait(S)/signal(S) to access the buffer

47 Solution of P/C: unbounded buffer

Initialization: S.count := 1; N.count := 0; in := out := 0;

Producer:
repeat
  produce v;
  wait(S);
  append(v);
  signal(S);
  signal(N);
forever

Consumer:
repeat
  wait(N);
  wait(S);
  w := take();
  signal(S);
  consume(w);
forever

Critical sections:
append(v): b[in] := v; in++;
take(): w := b[out]; out++; return w;

48 P/C: unbounded buffer
- Remarks:
  - Putting signal(N) inside the CS of the producer (instead of outside) has no effect, since the consumer must always wait on both semaphores before proceeding
  - The consumer must perform wait(N) before wait(S): in the reverse order, deadlock occurs if the consumer acquires S and then blocks on wait(N) while the buffer is empty, since the producer can never enter its CS to append an item
- It can be difficult to produce correct designs using semaphores

49 P/C: finite circular buffer of size k
- The consumer can consume only when the number N of (consumable) items is at least 1 (note: N is no longer equal to in - out)
- The producer can produce only when the number E of empty spaces is at least 1

50 P/C: finite circular buffer of size k
- As before:
  - we need a semaphore S for mutual exclusion on buffer access
  - we need a semaphore N to synchronize producer and consumer on the number of consumable items
- In addition:
  - we need a semaphore E to synchronize producer and consumer on the number of empty spaces

51 Solution of P/C: finite circular buffer of size k

Initialization: S.count := 1; N.count := 0; E.count := k; in := 0; out := 0;

Producer:
repeat
  produce v;
  wait(E);
  wait(S);
  append(v);
  signal(S);
  signal(N);
forever

Consumer:
repeat
  wait(N);
  wait(S);
  w := take();
  signal(S);
  signal(E);
  consume(w);
forever

Critical sections:
append(v): b[in] := v; in := (in+1) mod k;
take(): w := b[out]; out := (out+1) mod k; return w;

52 The Dining Philosophers Problem
- 5 philosophers who only eat and think
- Each needs to use the 2 forks on either side in order to eat
- We have only 5 forks
- Enforce mutual exclusion on each fork
- Avoid deadlock and starvation
- A classical synchronization problem
- Illustrates the difficulty of allocating resources among processes without deadlock and starvation

53 The Dining Philosophers Problem
- Each philosopher is a process
- One semaphore per fork:
  - fork: array[0..4] of semaphores
  - Initialization: fork[i].count := 1 for i := 0..4
- A first attempt:

Process Pi:
repeat
  think;
  wait(fork[i]);
  wait(fork[(i+1) mod 5]);
  eat;
  signal(fork[(i+1) mod 5]);
  signal(fork[i]);
forever

- Deadlock if each philosopher starts by picking up his left fork!

54 The Dining Philosophers Problem
- A solution: admit at most 4 philosophers at a time to the table
- Then 1 philosopher can always eat when the other 3 are holding 1 fork each
- Hence, we can use another semaphore T to limit to 4 the number of philosophers "sitting at the table"
- Initialize: T.count := 4

Process Pi:
repeat
  think;
  wait(T);
  wait(fork[i]);
  wait(fork[(i+1) mod 5]);
  eat;
  signal(fork[(i+1) mod 5]);
  signal(fork[i]);
  signal(T);
forever

55 Monitors
- High-level language constructs that provide functionality equivalent to that of semaphores but are easier to control
- Found in many concurrent programming languages
  - Concurrent Pascal, Modula-3, Java, ...
- Now also supported by libraries
- Can be implemented with semaphores

56 Monitor
- A software module containing:
  - one or more procedures
  - an initialization sequence
  - local data variables
- Characteristics:
  - local variables accessible only by the monitor's procedures
  - a process enters the monitor by invoking one of its procedures
  - only one process can be in the monitor at any one time

57 Monitor
- The monitor ensures mutual exclusion: no need to program this constraint explicitly
- Hence, shared data are protected by placing them in the monitor
  - The monitor locks the shared data on process entry
- Process synchronization is done by the programmer using condition variables, which represent conditions a process may need to wait for before executing in the monitor

58 Condition variables
- Local to the monitor (accessible only within the monitor)
- Can be accessed and changed only by two functions:
  - cwait(a): blocks execution of the calling process on condition (variable) a
    - the monitor can now be used by another process
  - csignal(a): resumes execution of some process blocked on condition (variable) a
    - If several such processes exist, choose one of them from the queue
    - If no process is waiting, do nothing (the signal is lost, unlike with a semaphore)

59 Monitor
- Waiting processes are either in the entrance queue or in a condition queue
- A process puts itself into condition queue cn by issuing cwait(cn)
- csignal(cn) brings into the monitor 1 process from the condition cn queue
- Hence csignal(cn) blocks the calling process and puts it in the urgent queue (unless csignal is the last operation of the monitor procedure)

60 Producer/Consumer problem
- Two types of processes:
  - producers
  - consumers
- Synchronization is now confined within the monitor
- append(.) and take(.) are procedures within the monitor: they are the only means by which P/C can access the buffer
- If these procedures are correct, synchronization will be correct for all participating processes

ProducerI:
repeat
  produce v;
  Append(v);
forever

ConsumerI:
repeat
  Take(v);
  consume v;
forever

61 Monitor for the bounded P/C problem
- The monitor needs to hold the buffer:
  - buffer: array[0..k-1] of items;
- It needs two condition variables:
  - notfull: csignal(notfull) indicates that the buffer is not full
  - notempty: csignal(notempty) indicates that the buffer is not empty
- It needs buffer pointers and counts:
  - nextin: points to the next item to be appended
  - nextout: points to the next item to be taken
  - count: holds the number of items in the buffer

62 Monitor for the bounded P/C problem

Monitor boundedbuffer:
  buffer: array[0..k-1] of items;
  nextin := 0, nextout := 0, count := 0: integer;
  notfull, notempty: condition;

Append(v):
  if (count = k) cwait(notfull);
  buffer[nextin] := v;
  nextin := (nextin + 1) mod k;
  count++;
  csignal(notempty);

Take(v):
  if (count = 0) cwait(notempty);
  v := buffer[nextout];
  nextout := (nextout + 1) mod k;
  count--;
  csignal(notfull);

63 Message Passing
- A general method used for interprocess communication (IPC)
  - for processes inside the same computer
  - for processes in a distributed system
- Yet another means to provide process synchronization and mutual exclusion
- We have at least two primitives:
  - send(destination, message)
  - receive(source, message)
- In both cases, the process may or may not be blocked

64 Synchronization in message passing
- For the sender: it is more natural not to be blocked after issuing send(...)
  - can send several messages to multiple destinations
  - but the sender usually expects an acknowledgment of message receipt (in case the receiver fails)
- For the receiver: it is more natural to be blocked after issuing receive(...)
  - the receiver usually needs the information before proceeding
  - but could be blocked indefinitely if the sender process fails before its send(...)

65 Synchronization in message passing
- Hence, other possibilities are sometimes offered
- E.g., blocking send, blocking receive:
  - both are blocked until the message is delivered
  - occurs when the communication link is unbuffered (no message queue)
  - provides tight synchronization (rendezvous)

66 Addressing in message passing
- Direct addressing:
  - a specific process identifier is used for the source/destination
  - but it might be impossible to specify the source ahead of time (e.g., a print server)
- Indirect addressing (more convenient):
  - messages are sent to a shared mailbox, which consists of a queue of messages
  - senders place messages in the mailbox, receivers pick them up

67 Unix SVR4 concurrency mechanisms
- To communicate data across processes:
  - Pipes
  - Messages
  - Shared memory
- To trigger actions by other processes:
  - Signals
  - Semaphores

