
1 Chapter 5 1 Concurrency: Mutual Exclusion and Synchronization Chapter 5

2 2 Problems with concurrent execution n Concurrent processes (or threads) often need to share data (maintained either in shared memory or files) and resources n If there is no controlled access to shared data, execution of the processes on these data can interleave. n The results will then depend on the order in which data were modified (nondeterminism). n A program may give different and sometimes undesirable results each time it is executed

3 Chapter 5 3 An example n Process P1 and P2 are running this same procedure and have access to the same variable “a” u shared variable n Processes can be interrupted anywhere n If P1 is first interrupted after user input and P2 executes entirely n Then the character echoed by P1 will be the one read by P2 !! static char a; void echo() { cin >> a; cout << a; }

4 Chapter 5 4 Global view: possible interleaved execution Process P1 static char a; void echo() { cin >> a; cout << a; } Process P2 static char a; void echo() { cin >> a; cout << a; } CS: Critical Section: part of a program whose execution cannot interleave with the execution of other CSs

5 Chapter 5 5 Race conditions n Situations such as the preceding one, where processes are racing against each other for access to resources (variables, etc.) and the result depends on the order of access, are called race conditions n In this example, there is a race on the variable a n Non-determinism: results don't depend exclusively on input data, they also depend on timing conditions

6 Chapter 5 6 Other examples n A counter that is updated by several processes, if the update requires several instructions (e.g. in machine language) n Threads that work simultaneously on an array, one to update it, the other to extract stats n Processes that work simultaneously on a database, for example in order to reserve airplane seats u Two travellers could get the same seat... n In this chapter, we will normally talk about concurrent processes. The same considerations apply to concurrent threads.
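The lost update on a shared counter can be demonstrated without real concurrency by interleaving the load and store steps of two increments by hand, exactly as a bad scheduling would. A minimal sketch; `lost_update_demo` is an illustrative name, not something from the slides:

```cpp
// Simulate the classic lost-update race on a shared counter by hand:
// both "processes" read the old value before either writes back.
int lost_update_demo() {
    int counter = 0;
    int r1 = counter;   // P1 loads counter (0)
    int r2 = counter;   // P2 loads counter (0) before P1 stores
    counter = r1 + 1;   // P1 stores 1
    counter = r2 + 1;   // P2 stores 1 -- P1's increment is lost
    return counter;     // 1, not the intended 2
}
```

With two real threads the same interleaving is merely possible rather than certain, which is exactly the nondeterminism the slide describes.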

7 Chapter 5 7 The critical section problem n When a process executes code that manipulates shared data (or resources), we say that the process is in a critical section (CS) (for that shared data or resource) n CSs can be thought of as sequences of instructions that are ‘tightly bound’ so no other CSs on the same data or resource can interleave. n The execution of CSs must be mutually exclusive: at any time, only one process should be allowed to execute in a CS for a given shared data or resource (even with multiple CPUs) u the results of the manipulation of the shared data or resource will no longer depend on unpredictable interleaving n The CS problem is the problem of providing special sequences of instructions to obtain such mutual exclusion n These are sometimes called synchronization mechanisms

8 Chapter 5 8 CS: Entry and exit sections n A process wanting to enter a critical section must ask for permission n The section of code implementing this request is called the entry section n The critical section (CS) will be followed by an exit section, u which opens the possibility of other processes entering their CS. n The remaining code is the remainder section repeat entry section critical section exit section remainder section forever

9 Chapter 5 9 Framework for analysis of solutions n Each process executes at nonzero speed but there are no assumption on the relative speed of processes, nor on their interleaving n Memory hardware prevents simultaneous access to the same memory location, even with several CPUs u this establishes a sort of elementary critical section, on which all other synch mechanisms are based

10 Chapter 5 10 Requirements for a valid solution to the critical section problem n Mutual Exclusion u At any time, at most one process can be in its critical section (CS) n Progress u Only processes that are not executing in their Remainder Section are taken into consideration in deciding who will enter the CS next u This decision cannot be postponed indefinitely

11 Chapter 5 11 Requirements for a valid solution to the critical section problem (cont.) n Bounded Waiting u After a process P has made a request to enter its CS, there is a limit on the number of times that the other processes are allowed to enter their CS, before P gets it: no starvation u Of course also no deadlock n It is difficult to find solutions that satisfy all criteria and so most solutions we will see are defective on some aspects

12 Chapter 5 12 3 Types of solutions n No special instructions (book: software approach) u algorithms that don't use instructions designed to solve this particular problem n Hardware solutions u rely on special machine instructions (e.g. lock bit) n Operating System and Programming Language solutions (e.g. Java) u provide specific system calls to the programmer

13 Chapter 5 13 Software solutions n We consider first the case of 2 processes u Algorithms 1 and 2 have problems u Algorithm 3 is correct (Peterson's algorithm) n Notation u We start with 2 processes: P0 and P1 u When presenting process Pi, Pj always denotes the other process (i != j)

14 Chapter 5 14 Process P0: repeat flag[0]:=true; while(flag[1])do{}; CS flag[0]:=false; RS forever Process P1: repeat flag[1]:=true; while(flag[0])do{}; CS flag[1]:=false; RS forever Algorithm 1 global view P0: flag[0]:=true P1: flag[1]:=true deadlock! Algorithm 1 or the excessive courtesy

15 Chapter 5 15 Algorithm 1 or the excessive courtesy n Keep 1 Boolean variable for each process: flag[0] and flag[1] n Pi signals that it is ready to enter its CS by: flag[i]:=true u but it first gives a chance to the other n Mutual exclusion is satisfied n but not the progress requirement: deadlock is possible n If we have the sequence: u P0: flag[0]:=true u P1: flag[1]:=true n Both processes will wait forever to enter their CS: deadlock n Could work in other cases! Process Pi: repeat flag[i]:=true; while(flag[j])do{}; CS flag[i]:=false; RS forever

16 Chapter 5 16 Process P0: repeat while(turn!=0)do{}; CS turn:=1; RS forever Process P1: repeat while(turn!=1)do{}; CS turn:=0; RS forever Algorithm 2 global view Note: turn is a shared variable between the two processes Algorithm 2—Strict Order

17 Chapter 5 17 Algorithm 2—Strict Order n The shared variable turn is initialized (to 0 or 1) before executing any Pi n Pi's critical section is executed iff turn = i n Pi busy-waits if Pj is in its CS: mutual exclusion is satisfied n The progress requirement is not satisfied since it requires strict alternation of CSs n If a process requires its CS more often than the other, it can't get it. Process Pi: // i,j= 0 or 1 repeat while(turn!=i)do{}; CS turn:=j; RS forever do nothing
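Strict alternation can be rendered directly in C++. One assumption of this sketch: the shared `turn` is a `std::atomic<int>` so its loads and stores are well defined between threads (the slides use plain variables); the helper names are illustrative:

```cpp
#include <atomic>
#include <thread>

// Algorithm 2 (strict alternation) for two threads.
std::atomic<int> turn{0};
int shared_counter = 0;           // protected by the turn protocol

void strict_alt(int i, int j, int rounds) {
    for (int r = 0; r < rounds; ++r) {
        while (turn.load() != i) {}   // busy wait ("do nothing")
        ++shared_counter;             // critical section
        turn.store(j);                // hand the CS over strictly
    }
}

int run_strict_alternation(int rounds) {
    std::thread t0(strict_alt, 0, 1, rounds);
    std::thread t1(strict_alt, 1, 0, rounds);
    t0.join(); t1.join();
    return shared_counter;
}
```

Both threads complete only because they request the CS equally often; drop one iteration from either loop and the other spins forever, which is the progress failure the slide points out.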

18 Chapter 5 18 Process P0: repeat flag[0]:=true; // 0 wants in turn:= 1; // 0 gives a chance to 1 while (flag[1]&turn=1)do{}; CS flag[0]:=false; // 0 no longer wants in RS forever Process P1: repeat flag[1]:=true; // 1 wants in turn:=0; // 1 gives a chance to 0 while (flag[0]&turn=0)do{}; CS flag[1]:=false; // 1 no longer wants in RS forever Peterson’s algorithm global view Algorithm 3 (Peterson’s algorithm) (forget about Dekker)

19 Chapter 5 19 Wait or enter? n A process i waits if: u the other process wants in and its turn has come F flag[j] and turn=j n A process i enters if: u it wants to enter or its turn has come F flag[i] or turn=i F in practice, if process i gets to the test, flag[i] is necessarily true, so it's turn that counts

20 Chapter 5 20 Algorithm 3 (Peterson’s algorithm) (forget about Dekker) n Initialization: flag[0]:=flag[1]:=false turn:= 0 or 1 n Interest to enter CS specified by flag[i]:=true n flag[i]:= false upon exit If both processes attempt to enter their CS simultaneously, only one turn value will last Process Pi: repeat flag[i]:=true; // want in turn:=j; // but give a chance... while (flag[j]&turn=j)do{}; CS flag[i]:=false; // no longer want in RS forever
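Peterson's algorithm translates almost line for line into C++. One assumption made here: `flag` and `turn` are sequentially consistent `std::atomic` variables, since on modern hardware and compilers plain variables would not give the ordering the algorithm needs; function names are illustrative:

```cpp
#include <atomic>
#include <thread>

// Peterson's algorithm with seq_cst atomics (the C++ default order).
std::atomic<bool> flag[2] = {{false}, {false}};
std::atomic<int> turn{0};
int counter = 0;                 // shared data protected by the algorithm

void peterson(int i, int rounds) {
    int j = 1 - i;
    for (int r = 0; r < rounds; ++r) {
        flag[i].store(true);     // want in
        turn.store(j);           // but give a chance...
        while (flag[j].load() && turn.load() == j) {}  // busy wait
        ++counter;               // critical section
        flag[i].store(false);    // no longer want in
    }
}

int run_peterson(int rounds) {
    std::thread t0(peterson, 0, rounds);
    std::thread t1(peterson, 1, rounds);
    t0.join(); t1.join();
    return counter;
}
```

If mutual exclusion ever failed, some increments of the plain `counter` would be lost, so checking the final total against 2×rounds is a practical (if not exhaustive) test of the algorithm.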

21 Chapter 5 21 Algorithm 3: proof of correctness n Mutual exclusion holds since: u P0 and P1 are both in CS only if flag[0] = flag[1] = true and turn = i for each Pi (impossible) n We now prove that the progress and bounded waiting requirements are satisfied: u Pi can be prevented from entering its CS only if stuck in while{} with condition flag[ j] = true and turn = j u If Pj is not ready to enter its CS, then flag[ j] = false and Pi can then enter its CS

22 Chapter 5 22 Algorithm 3: proof of correctness (cont.) u If Pj has set flag[ j]=true and is in its while{}, then either turn=i or turn=j u If turn=i, then Pi enters CS. If turn=j then Pj enters CS but will then reset flag[ j]=false on exit: allowing Pi to enter CS u but if Pj then requests its CS again by setting flag[ j]=true, it must also set turn=i u since Pi does not change the value of turn while stuck in while{}, Pi will enter CS after at most one CS entry by Pj (bounded waiting)

23 Chapter 5 23 What about process failures? n If all 3 criteria (ME, progress, bounded waiting) are satisfied, then a valid solution will provide robustness against failure of a process in its remainder section (RS) u since failure in RS is just like having an infinitely long RS. n However, the solution given does not provide robustness against a process failing in its critical section (CS). u A process Pi that fails in its CS without releasing the CS causes a system failure u it would be necessary for a failing process to inform the others, perhaps difficult!

24 Chapter 5 24 Extensions to >2 processes n Peterson algorithm can be generalized to >2 processes n But in this case there is a more elegant solution...

25 Chapter 5 25 n-process solution: bakery algorithm (not in book) n Before entering its CS, each Pi receives a number. The holder of the smallest number enters the CS (like in bakeries, ice-cream stores...) n When Pi and Pj receive the same number: u if i<j then Pi is served first, else Pj is served first n Pi resets its number to 0 in the exit section n Notation: u (a,b) < (c,d) if a < c or if a = c and b < d u max(a0,...ak) is a number b such that F b >= ai for i=0,..k

26 Chapter 5 26 The bakery algorithm (cont.) n Shared data: u choosing: array[0..n-1] of boolean; F initialized to false u number: array[0..n-1] of integer; F initialized to 0 n Correctness relies on the following fact: u If Pi is in CS and Pk has already chosen its number[k]!= 0, then (number[i],i) < (number[k],k) u but the proof is somewhat tricky...

27 Chapter 5 27 The bakery algorithm (cont.) Process Pi: repeat choosing[i]:=true; number[i]:=max(number[0]..number[n-1])+1; choosing[i]:=false; for j:=0 to n-1 do { while (choosing[j]) {}; while (number[j]!=0 and (number[j],j)<(number[i],i)){}; } CS number[i]:=0; RS forever
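The pseudocode above can be rendered in C++ for a fixed number of threads. Assumptions of this sketch: `choosing` and `number` are sequentially consistent `std::atomic` arrays (the slides implicitly assume atomic memory words), and each ticket comparison loads both numbers into locals so one test sees a consistent snapshot:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Bakery algorithm for N threads. "number" grows without bound over a
// long run; that is fine for a short demo.
const int N = 3;
std::atomic<bool> choosing[N];
std::atomic<int> number[N];
int bakery_counter = 0;          // shared data protected by the algorithm

void bakery(int i, int rounds) {
    for (int r = 0; r < rounds; ++r) {
        choosing[i] = true;                     // take a ticket
        int max = 0;
        for (int k = 0; k < N; ++k) { int v = number[k]; if (v > max) max = v; }
        number[i] = max + 1;
        choosing[i] = false;
        for (int j = 0; j < N; ++j) {           // wait for earlier tickets
            while (choosing[j]) {}
            while (true) {
                int nj = number[j], ni = number[i];
                if (nj == 0 || nj > ni || (nj == ni && j >= i)) break;
            }
        }
        ++bakery_counter;                       // critical section
        number[i] = 0;                          // exit section
    }
}

int run_bakery(int rounds) {
    for (int k = 0; k < N; ++k) { choosing[k] = false; number[k] = 0; }
    std::vector<std::thread> ts;
    for (int i = 0; i < N; ++i) ts.emplace_back(bakery, i, rounds);
    for (auto& t : ts) t.join();
    return bakery_counter;
}
```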

28 Chapter 5 28 Drawbacks of software solutions n Complicated logic! n Processes that are requesting to enter their critical section are busy waiting (consuming processor time needlessly) n If CSs are long, it would be more efficient to block processes that are waiting (just as if they had requested I/O) n We will now look at (low-level) solutions that require hardware instructions u the first ones will not satisfy the criteria above

29 Chapter 5 29 A simple hardware solution: interrupt disabling n Observe that in a uniprocessor system the only way a program can interleave with another is if it gets interrupted n So simply disable interrupts n Mutual exclusion is preserved but efficiency of execution is degraded: while in CS, we cannot interleave execution with other processes that are in RS n On a multiprocessor: mutual exclusion is not achieved u Generally not an acceptable solution Process Pi: repeat disable interrupts critical section enable interrupts remainder section forever

30 Chapter 5 30 Hardware solutions: special machine instructions n Normally, access to a memory location excludes other access to that same location n Extension: machine instructions that perform several actions atomically (indivisible) on the same memory location (ex: reading and testing) n The execution of such instruction sequences is mutually exclusive (even with multiple CPUs) n They can be used to provide mutual exclusion but need more complex algorithms for satisfying the other requirements of the CS problem

31 Chapter 5 31 The test-and-set instruction n A C description of test-and-set: n An algorithm that uses testset for Mutual Exclusion: n Shared variable b is initialized to 0 n Only the first Pi that sets b enters CS bool testset(int& i) { if (i==0) { i=1; return true; } else { return false; } } Process Pi: repeat repeat{} until testset(b); CS b=0; RS forever Non Interruptible!
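C++ exposes this non-interruptible primitive portably as `std::atomic_flag::test_and_set`, which atomically sets the flag and returns its OLD value (so `false` means the caller set it and may enter). A sketch of the spinlock above, with illustrative names:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Spinlock built from the atomic test-and-set primitive.
std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;  // plays the role of b = 0
int tas_counter = 0;                            // shared data

void tas_worker(int rounds) {
    for (int r = 0; r < rounds; ++r) {
        while (lock_flag.test_and_set()) {}  // busy wait until old value was 0
        ++tas_counter;                       // critical section
        lock_flag.clear();                   // b = 0
    }
}

int run_tas(int threads, int rounds) {
    std::vector<std::thread> ts;
    for (int t = 0; t < threads; ++t) ts.emplace_back(tas_worker, rounds);
    for (auto& t : ts) t.join();
    return tas_counter;
}
```

Note that the return convention is flipped relative to the slide's C description: the hardware-style primitive reports the old value, and "old value was clear" is the success case.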

32 Chapter 5 32 The test-and-set instruction (cont.) n Mutual exclusion is assured: if Pi enters CS, the other Pj are busy waiting u but busy waiting is not a good use of processor time! n Needs other algorithms to satisfy the additional criteria u When Pi exits CS, the selection of the Pj that will enter CS is arbitrary: no bounded waiting. Hence starvation is possible n Still a bit too complicated to use in everyday code

33 Chapter 5 33 Exchange instruction: similar idea n Processors (ex: Pentium) often provide an atomic xchg(a,b) instruction that swaps the content of a and b. n But xchg(a,b) suffers from the same drawbacks as test-and-set

34 Chapter 5 34 Using xchg for mutual exclusion n Shared variable lock is initialized to 0 n Each Pi has a local variable key n The only Pi that can enter CS is the one that finds lock=0 n This Pi excludes all the other Pj by setting lock to 1 Process Pi: repeat key:=1 repeat xchg(key,lock) until key=0; CS lock:=0; RS forever
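The same idea in C++, with `std::atomic<int>::exchange` standing in for the hardware xchg: it atomically stores the new value and returns the old one, so getting 0 back means the lock was free and is now ours. A sketch, names illustrative:

```cpp
#include <atomic>
#include <thread>

// Spinlock built from atomic exchange (the xchg idea).
std::atomic<int> lock_word{0};   // shared variable lock, initialized to 0
int xchg_counter = 0;            // shared data

void xchg_worker(int rounds) {
    for (int r = 0; r < rounds; ++r) {
        // key:=1; repeat xchg(key,lock) until key=0
        while (lock_word.exchange(1) != 0) {}
        ++xchg_counter;          // critical section
        lock_word.store(0);      // lock:=0
    }
}

int run_xchg(int rounds) {
    std::thread a(xchg_worker, rounds), b(xchg_worker, rounds);
    a.join(); b.join();
    return xchg_counter;
}
```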

35 Chapter 5 35 Solutions based on system calls n Appropriate machine language instructions are the basis of all practical solutions u but they are too elementary n We need instructions allowing us to better structure code n We also need better facilities for preventing common errors, such as deadlocks, starvation, etc. n So there is a need for instructions at a higher level n Such instructions are implemented as system calls

36 Chapter 5 36 Semaphores n A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations: u wait(S) u signal(S) n It is shared by all processes that are interested in the same resource, shared variable, etc. n Semaphores will be presented in two steps: u busy-wait semaphores u semaphores using waiting queues n A distinction is made between counting semaphores and binary semaphores u we'll see the applications

37 Chapter 5 37 Busy Waiting Semaphores (spinlocks) n The simplest semaphores. n Useful when critical sections last for a short time, or we have lots of CPUs. n S initialized to positive value (to allow someone in at the beginning). signal(S): S++; wait(S): while S<=0 do{}; S--; waits if # of proc who can enter is 0 or negative increases by 1 the number of processes that can enter

38 Chapter 5 38 Atomicity aspects n The testing and decrementing sequence in wait is atomic, but not the loop. n Signal is atomic. n No two processes can be allowed to execute atomic sections simultaneously. u This can be implemented by some of the mechanisms discussed earlier

39 Chapter 5 39 Atomicity and Interruptibility [flowchart: wait(S) atomically tests S <= 0 and does S--, signal(S) does S++; another process may run between iterations of the loop] The loop is not atomic to allow another process to interrupt the wait

40 Chapter 5 40 Using semaphores for solving critical section problems n For n processes n Initialize S to 1 n Then only 1 process is allowed into CS (mutual exclusion) n To allow k processes into CS, we initialize S to k n So semaphores can allow several processes in the CS! Process Pi: repeat wait(S); CS signal(S); RS forever

41 Chapter 5 41 Process P0: repeat wait(S); CS signal(S); RS forever Process P1: repeat wait(S); CS signal(S); RS forever Semaphores: the global view Initialize S to 1

42 Chapter 5 42 Using semaphores to synchronize processes n We have 2 processes: P1 and P2 n Statement S1 in P1 needs to be performed before statement S2 in P2 n Then define a semaphore “synch” n Initialize synch to 0 n Proper synchronization is achieved by having in P1: S1; signal(synch); n And having in P2: wait(synch); S2;

43 Chapter 5 43 Semaphores: observations n When S>=0: u the number of processes that can execute wait(S) without being blocked = S F S processes can enter CS F if S>1 is possible, then a second semaphore is necessary to implement mutual exclusion (see producer/consumer example) n When S>1, the process that enters the CS is the one that tests S first (random selection) u this won't be true in the following solution n When S<0: the number of processes waiting on S is |S| wait(S): while S<=0 do{}; S--;

44 Chapter 5 44 Avoiding Busy Wait in Semaphores n To avoid busy waiting: when a process has to wait for a semaphore to become greater than 0, it is put in a blocked queue of processes waiting for this to happen. n Queues can be FIFO, priority, etc.: the OS has control over the order in which processes enter the CS. n wait and signal become system calls to the OS (just like I/O calls) n There is a queue for every semaphore, just as there is a queue for each I/O unit n A process that waits on a semaphore is in the waiting state

45 Chapter 5 45 Semaphores without busy waiting n A semaphore can be seen as a record (structure): type semaphore = record count: integer; queue: list of process end; var S: semaphore; n When a process must wait for a semaphore S, it is blocked and put on the semaphore’s queue n The signal operation removes (by a fair scheduling policy like FIFO) one process from the queue and puts it in the list of ready processes

46 Chapter 5 46 Semaphore’s operations (atomic) wait(S): S.count--; if (S.count<0) { block this process place this process in S.queue } signal(S): S.count++; if (S.count<=0) { remove a process P from S.queue place this process P on ready list } The value to which S.count is initialized depends on the application S was negative: queue nonempty atomic
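A user-level sketch of a blocking semaphore can be built from a mutex and a condition variable. One deviation from the slide, stated up front: instead of letting `count` go negative to count waiters, this version keeps `count >= 0` and lets the condition variable's internal wait queue play the role of `S.queue`; `notify_one` hands the semaphore to one blocked process. Names are illustrative:

```cpp
#include <mutex>
#include <condition_variable>
#include <thread>
#include <vector>

// Blocking (non-busy-wait) semaphore sketch.
class Semaphore {
    int count;
    std::mutex m;
    std::condition_variable cv;   // its wait queue plays the role of S.queue
public:
    explicit Semaphore(int init) : count(init) {}
    void wait() {                 // the slide's wait(S)
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this]{ return count > 0; });  // block, don't spin
        --count;
    }
    void signal() {               // the slide's signal(S)
        { std::lock_guard<std::mutex> lk(m); ++count; }
        cv.notify_one();          // move one waiter to the ready list
    }
};

// Mutual exclusion demo: S initialized to 1, several threads share a counter.
int run_sem_demo(int threads, int rounds) {
    Semaphore S(1);
    int counter = 0;
    std::vector<std::thread> ts;
    for (int t = 0; t < threads; ++t)
        ts.emplace_back([&]{
            for (int r = 0; r < rounds; ++r) { S.wait(); ++counter; S.signal(); }
        });
    for (auto& t : ts) t.join();
    return counter;
}
```

The mutex protecting `count` is itself the short critical section inside wait/signal that slide 48 discusses.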

47 Chapter 5 47 Figure showing the relationship between queue content and value of S

48 Chapter 5 48 Semaphores: Implementation n wait and signal themselves contain critical sections! How to implement them? n Note that they are very short critical sections. n Solutions: u uniprocessor: disable interrupts during these operations (ie: for a very short period). This does not work on a multiprocessor machine. u multiprocessor: use some busy waiting scheme, hardware or software. Busy waiting shouldn’t last long.

49 Chapter 5 49 The producer/consumer problem n A producer process produces information that is consumed by a consumer process u Ex1: a print program produces characters that are consumed by a printer u Ex2: an assembler produces object modules that are consumed by a loader n We need a buffer to hold items that are produced and eventually consumed n A common paradigm for cooperating processes (see Unix Pipes)

50 Chapter 5 50 P/C: unbounded buffer n We assume first an unbounded buffer consisting of a linear array of elements in points to the next item to be produced out points to the next item to be consumed

51 Chapter 5 51 P/C: unbounded buffer n We need a semaphore S to perform mutual exclusion on the buffer: only 1 process at a time can access the buffer n We need another semaphore N to synchronize producer and consumer on the number N (= in - out) of items in the buffer u an item can be consumed only after it has been created

52 Chapter 5 52 P/C: unbounded buffer n The producer is free to add an item into the buffer at any time: it performs wait(S) before appending and signal(S) afterwards to prevent consumer access n It also performs signal(N) after each append to increment N n The consumer must first do wait(N) to see if there is an item to consume and use wait(S)/signal(S) to access the buffer

53 Chapter 5 53 Solution of P/C: unbounded buffer Producer: repeat produce v; wait(S); append(v); signal(S); signal(N); forever Consumer: repeat wait(N); wait(S); w:=take(); signal(S); consume(w); forever Initialization: S.count:=1; //mutual exclusion N.count:=0; //number of items in:=out:=0; //indexes to buffers critical sections append(v): b[in]:=v; in++; take(): w:=b[out]; out++; return w;

54 Chapter 5 54 P/C: unbounded buffer n Remarks: u note that the producer can run arbitrarily faster than the consumer u Putting signal(N) inside the CS of the producer (instead of outside) has no effect since the consumer must always wait for both semaphores before proceeding u The consumer must perform wait(N) before wait(S), otherwise deadlock occurs if the consumer enters its CS while the buffer is empty u disaster will occur if one forgets to do a signal after a wait, e.g. if a producer has an unending loop inside a critical section n Using semaphores has pitfalls...

55 Chapter 5 55 P/C: finite circular buffer of size k n can consume only when number N of (consumable) items is at least 1 (now: N!=in-out) n can produce only when number E of empty spaces is at least 1

56 Chapter 5 56 P/C: finite circular buffer of size k n As before: u we need a semaphore S to have mutual exclusion on buffer access u we need a semaphore N to synchronize producer and consumer on the number of consumable items (full spaces) n In addition: u we need a semaphore E to synchronize producer and consumer on the number of empty spaces

57 Chapter 5 57 Solution of P/C: finite circular buffer of size k Initialization: S.count:=1; //mutual excl. N.count:=0; //full spaces E.count:=k; //empty spaces Producer: repeat produce v; wait(E); wait(S); append(v); signal(S); signal(N); forever Consumer: repeat wait(N); wait(S); w:=take(); signal(S); signal(E); consume(w); forever critical sections append(v): b[in]:=v; in:=(in+1) mod k; take(): w:=b[out]; out:=(out+1) mod k; return w;
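The three-semaphore solution can be sketched end to end in C++. The `Semaphore` here is a mutex-plus-condition-variable stand-in for the system-call semaphore of the slides (an assumption of the sketch), and the consumer sums the items so the result is checkable:

```cpp
#include <mutex>
#include <condition_variable>
#include <thread>

// Bounded-buffer producer/consumer with the slide's three semaphores:
// S (mutual exclusion), N (full slots), E (empty slots).
class Semaphore {
    int count; std::mutex m; std::condition_variable cv;
public:
    explicit Semaphore(int init) : count(init) {}
    void wait()   { std::unique_lock<std::mutex> lk(m);
                    cv.wait(lk, [this]{ return count > 0; }); --count; }
    void signal() { { std::lock_guard<std::mutex> lk(m); ++count; }
                    cv.notify_one(); }
};

int run_bounded_pc(int items) {
    const int k = 4;                     // buffer size
    int b[k]; int in = 0, out = 0;
    Semaphore S(1), N(0), E(k);
    long long sum = 0;
    std::thread prod([&]{
        for (int v = 1; v <= items; ++v) {
            E.wait(); S.wait();
            b[in] = v; in = (in + 1) % k;        // append(v)
            S.signal(); N.signal();
        }});
    std::thread cons([&]{
        for (int i = 0; i < items; ++i) {
            N.wait(); S.wait();
            int w = b[out]; out = (out + 1) % k; // take()
            S.signal(); E.signal();
            sum += w;                            // consume(w)
        }});
    prod.join(); cons.join();
    return (int)sum;
}
```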

58 Chapter 5 58 The Dining Philosophers Problem n 5 philosophers who only eat and think n each needs to use 2 forks for eating n we have only 5 forks n A classical synchronization problem n Illustrates the difficulty of allocating resources among processes without deadlock and starvation

59 Chapter 5 59 The Dining Philosophers Problem n Each philosopher is a process n One semaphore per fork: u fork: array[0..4] of semaphores u Initialization: fork[i].count:=1 for i:=0..4 n A first attempt: n Deadlock if each philosopher starts by picking up his left fork! Process Pi: repeat think; wait(fork[i]); wait(fork[i+1 mod 5]); eat; signal(fork[i+1 mod 5]); signal(fork[i]); forever

60 Chapter 5 60 The Dining Philosophers Problem n A solution: admit only 4 philosophers at a time to the table n Then 1 philosopher can always eat when the other 3 are holding 1 fork each n Hence, we can use another semaphore T to limit to 4 the number of philosophers “sitting at the table” n Initialize: T.count:=4 Process Pi: repeat think; wait(T); wait(fork[i]); wait(fork[i+1 mod 5]); eat; signal(fork[i+1 mod 5]); signal(fork[i]); signal(T); forever
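The T-semaphore solution can be exercised in C++: five threads, one binary semaphore per fork, and a counting semaphore T initialized to 4 so the table never fills. With five philosophers each eating a fixed number of meals, the run terminating at all is evidence of deadlock freedom. `Semaphore` is again a mutex/condition-variable sketch, not a system call:

```cpp
#include <mutex>
#include <condition_variable>
#include <thread>
#include <vector>

class Semaphore {
    int count; std::mutex m; std::condition_variable cv;
public:
    explicit Semaphore(int init = 1) : count(init) {}
    void wait()   { std::unique_lock<std::mutex> lk(m);
                    cv.wait(lk, [this]{ return count > 0; }); --count; }
    void signal() { { std::lock_guard<std::mutex> lk(m); ++count; }
                    cv.notify_one(); }
};

int run_philosophers(int meals) {
    Semaphore fork[5];          // each fork semaphore starts at 1
    Semaphore T(4);             // at most 4 philosophers at the table
    int eaten = 0;
    std::mutex em;              // protects the meal counter only
    std::vector<std::thread> ps;
    for (int i = 0; i < 5; ++i)
        ps.emplace_back([&, i]{
            for (int r = 0; r < meals; ++r) {
                T.wait();                         // sit down
                fork[i].wait();                   // left fork
                fork[(i + 1) % 5].wait();         // right fork
                { std::lock_guard<std::mutex> lk(em); ++eaten; }  // eat
                fork[(i + 1) % 5].signal();
                fork[i].signal();
                T.signal();                       // leave the table
            }});
    for (auto& p : ps) p.join();
    return eaten;
}
```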

61 Chapter 5 61 Binary semaphores n The semaphores we have studied are called counting (or integer) semaphores n There are also binary semaphores u similar to counting semaphores except that “count” is Boolean valued: 0 or 1 u can do anything counting semaphores can do u more difficult to use than counting semaphores: one must use additional counting variables protected by binary semaphores

62 Chapter 5 62 Binary semaphores waitB(S): if (S.value = 1) { S.value := 0; } else { block this process place this process in S.queue } signalB(S): if (S.queue is empty) { S.value := 1; } else { remove a process P from S.queue place this process P on ready list }

63 Chapter 5 63 Advantages of semaphores (w.r.t. previous solutions) n Only one shared variable per critical section n Two simple operations: wait, signal n Avoid busy wait, though not completely n Can be used in the case of many processes n If desired, several processes at a time can be allowed in (see prod/cons) n wait queues are managed by the OS: starvation is avoided if the OS is ‘fair’ (e.g. FIFO queues)

64 Chapter 5 64 Problems with semaphores n wait(S) and signal(S) may be scattered among several processes, yet they must correspond u difficult in complex programs n Usage must be correct in all processes, otherwise global problems can occur n One faulty (or malicious) process can make the entire collection of processes fail, causing deadlock or starvation n Consider the case of a process that has waits and signals in loops with tests...

65 Chapter 5 65 Monitors n Are high-level language constructs that provide equivalent functionality to that of semaphores but are easier to control n Found in many concurrent programming languages F Concurrent Pascal, Modula-3, Java… F Functioning not identical... n Can be constructed from semaphores (and vice-versa) n Very appropriate for OO programming

66 Chapter 5 66 Monitor n Is a software module containing: u one or more procedures u an initialization sequence u local data variables n Characteristics: u local variables accessible only by monitor’s procedures F note O-O approach u a process enters the monitor by invoking one of its procedures u only one process can execute in the monitor at any given time u but several processes can be waiting in the monitor F conditional variables

67 Chapter 5 67 Monitor n The monitor ensures mutual exclusion: no need to program it explicitly n Hence, shared data are protected by placing them in the monitor u the monitor locks the shared data u assures sequential use. n Process synchronization is done by the programmer by using condition variables that represent conditions a process may need to wait after its entry in the monitor

68 Chapter 5 68 Condition variables n accessible only within the monitor n can be accessed and changed only by two functions: u cwait(a): blocks execution of the calling process on condition (variable) a F the process can resume execution only if another process executes csignal(a) u csignal(a): resumes execution of some process blocked on condition (variable) a F If several such processes exist: choose any one (FIFO or priority) F If no such process exists: do nothing

69 Chapter 5 69 Queues in Monitor n Processes await entrance in the entrance queue n A process puts itself into condition queue cn by issuing cwait(cn) n csignal(cn) brings into the monitor one process from the cn condition queue n csignal(cn) blocks the calling process and puts it in the urgent queue (unless csignal is the last operation of the process)

70 Chapter 5 70 Producer/Consumer problem: first attempt n Two processes: u producer u consumer n Synchronization is now confined within the monitor n append(.) and take(.) are procedures within the monitor: are the only means by which P/C can access the buffer n If these procedures are correct, synchronization will be correct for all participating processes n Easy to generalize to n processes n BUT incomplete solution... Producer: repeat produce v; append(v); forever Consumer: repeat take(v); consume v; forever

71 Chapter 5 71 Monitor for the bounded P/C problem n Monitor needs to handle the buffer: u buffer: array[0..k-1] of items; n needs two condition variables: u notfull: csignal(notfull) means that buffer not full u notempty: csignal(notempty) means that buffer not empty n needs buffer pointers and counts: u nextin: points to next item to be appended u nextout: points to next item to be taken u count: holds the number of items in buffer

72 Chapter 5 72 Monitor for the bounded P/C problem Monitor boundedbuffer: buffer: array[0..k-1] of items; nextin:=0, nextout:=0, count:=0: integer; notfull, notempty: condition; //buffer state append(v): if (count=k) cwait(notfull); buffer[nextin]:= v; nextin++ mod k; count++; csignal(notempty); take(v): if (count=0) cwait(notempty); v:= buffer[nextout]; nextout++ mod k; count--; csignal(notfull);
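The monitor above maps naturally onto a C++ class: one mutex gives the "only one process inside the monitor" guarantee, and `notfull`/`notempty` become `std::condition_variable`s. One deliberate change from the slide, worth flagging: C++ condition variables may wake spuriously and do not hand the monitor over atomically, so the single `if` becomes a `while` that re-tests the condition:

```cpp
#include <mutex>
#include <condition_variable>
#include <thread>

// Monitor-style bounded buffer (sketch of the slide's boundedbuffer).
class BoundedBuffer {
    static const int k = 4;
    int buffer[k];
    int nextin = 0, nextout = 0, count = 0;
    std::mutex monitor;                         // one process in the monitor
    std::condition_variable notfull, notempty;  // condition variables
public:
    void append(int v) {
        std::unique_lock<std::mutex> lk(monitor);   // enter the monitor
        while (count == k) notfull.wait(lk);        // cwait(notfull)
        buffer[nextin] = v; nextin = (nextin + 1) % k; ++count;
        notempty.notify_one();                      // csignal(notempty)
    }
    int take() {
        std::unique_lock<std::mutex> lk(monitor);
        while (count == 0) notempty.wait(lk);       // cwait(notempty)
        int v = buffer[nextout]; nextout = (nextout + 1) % k; --count;
        notfull.notify_one();                       // csignal(notfull)
        return v;
    }
};

int run_monitor_pc(int items) {
    BoundedBuffer bb;
    long long sum = 0;
    std::thread prod([&]{ for (int v = 1; v <= items; ++v) bb.append(v); });
    std::thread cons([&]{ for (int i = 0; i < items; ++i) sum += bb.take(); });
    prod.join(); cons.join();
    return (int)sum;
}
```

Note how all synchronization lives inside `append`/`take`: the producer and consumer bodies match the "first attempt" slide exactly.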

73 Chapter 5 73 Conclusions on monitors n Understanding and programming are generally easier than for semaphores n Object Oriented philosophy

74 Chapter 5 74 Message Passing n Is a general method used for interprocess communication (IPC) u for processes inside the same computer u for processes in a distributed system n Yet another means to provide process synchronization and mutual exclusion n We have at least two primitives: u send(destination, message) u receive(source, message) n In both cases, the process may or may not be blocked

75 Chapter 5 75 Blocking, wait n Is the sender or receiver blocked until receiving some sort of answer? n For the sender: it is more natural not to be blocked after issuing send(.,.) (waiting for reception) u it can send several messages to multiple destinations u but the sender usually expects acknowledgment of message receipt (in case the receiver fails) n For the receiver: it is more natural to be blocked after issuing receive(.,.) u the receiver usually needs the info before proceeding u but it could be blocked indefinitely if the sender process fails before send(.,.)

76 Chapter 5 76 Blocking (cont.) n Hence other possibilities are sometimes offered u Ex: blocking send, blocking receive: F both are blocked until the message is received F occurs when the communication link is unbuffered (no message queue) F provides tight synchronization (rendez-vous) n Indefinite blocking can be avoided by the use of timeouts.

77 Chapter 5 77 Addressing in message passing n direct addressing: u when a specific process identifier is used for source/destination u but it might be impossible to specify the source ahead of time (ex: a print server) n indirect addressing (more convenient): u messages are sent to a shared mailbox which consists of a queue of messages u senders place messages in the mailbox, receivers pick them up

78 Chapter 5 78 Mailboxes and Ports n A mailbox can be private to one sender/receiver pair n The same mailbox can be shared among several senders and receivers u the OS may then allow the use of message types (for selection) n Port: is a mailbox associated with one receiver and multiple senders u used for client/server applications: the receiver is the server

79 Chapter 5 79 Ownership of ports and mailboxes n A port is usually owned and created by the receiving process n The port is destroyed when the receiver terminates n The OS creates a mailbox on behalf of a process (which becomes the owner) n The mailbox is destroyed at the owner’s request or when the owner terminates

80 Chapter 5 80 Message format n Consists of header and body of message n control info: u sequence numbers u error correction codes u priority... n Message queuing discipline varies: FIFO, or may also include priorities

81 Chapter 5 81 Enforcing mutual exclusion with message passing n create a mailbox mutex shared by n processes n send() is non-blocking n receive() blocks when mutex is empty n Initialization: send(mutex, “go”); n The first Pi that executes receive() will enter CS. Others will be blocked until Pi resends the msg. n A process can receive its own message Process Pi: var msg: message; repeat receive(mutex,msg); CS send(mutex,msg); RS forever

82 Chapter 5 82 The Global View Process Pi: var msg: message; repeat receive(mutex,msg); CS send(mutex,msg); RS forever Process Pj: var msg: message; repeat receive(mutex,msg); CS send(mutex,msg); RS forever Initialize: send(mutex, go) Note: mutex can be rec’d by same process that sends it hence process can reenter CS immediately
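A mailbox with these semantics can be sketched as a thread-safe message queue: `send` is non-blocking, `receive` blocks while the mailbox is empty. Priming it with a single "go" token then gives mutual exclusion exactly as on the slide. The `Mailbox` class and function names are illustrative, not a real IPC API:

```cpp
#include <mutex>
#include <condition_variable>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// In-process mailbox sketch: a blocking FIFO queue of messages.
class Mailbox {
    std::queue<std::string> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void send(std::string msg) {                 // non-blocking
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(msg)); }
        cv.notify_one();
    }
    std::string receive() {                      // blocks while empty
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this]{ return !q.empty(); });
        std::string msg = q.front(); q.pop();
        return msg;
    }
};

int run_mailbox_mutex(int threads, int rounds) {
    Mailbox mutex_box;
    mutex_box.send("go");                        // initialization: one token
    int counter = 0;
    std::vector<std::thread> ts;
    for (int t = 0; t < threads; ++t)
        ts.emplace_back([&]{
            for (int r = 0; r < rounds; ++r) {
                std::string msg = mutex_box.receive();  // wait for the token
                ++counter;                              // critical section
                mutex_box.send(msg);                    // pass the token on
            }});
    for (auto& t : ts) t.join();
    return counter;
}
```

Since a thread may receive the token it just sent, it can re-enter the CS immediately, matching the note on the slide.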

83 Chapter 5 83 The bounded-buffer P/C problem with message passing n The producer places items (inside messages) in the mailbox mayconsume n mayconsume acts as the buffer: the consumer can consume an item when at least one message is present n Mailbox mayproduce is filled initially with k null messages (k = buffer size) n The size of mayproduce shrinks with each production and grows with each consumption u it acts as a counter of empty places in mayconsume n can support multiple producers/consumers

84 Chapter 5 84 Prod/cons, finite buffer, messages [Figure: producer and consumer connected by the two mailboxes mayprod and maycons] n mayproduce: acts as a counter of empty places in mayconsume n mayconsume: contains the items sent from producer to consumer

85 Chapter 5 85 The bounded-buffer P/C problem with message passing Producer: var pmsg: message; repeat receive(mayproduce, pmsg); pmsg:= produce(); send(mayconsume, pmsg); forever Consumer: var cmsg: message; repeat receive(mayconsume, cmsg); consume(cmsg); send(mayproduce, null); forever To start: send(mayproduce, null) k times n Processes communicate through the mailboxes; the buffer size is the mailbox size
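The two-mailbox scheme above can be sketched with threads and two queues; again this is only an illustration of the slide's pseudocode, with queue.Queue playing the role of a mailbox (None stands in for the null message).

```python
import threading
from queue import Queue

K = 5                          # buffer size = number of initial null messages
mayproduce = Queue()           # counts empty slots in mayconsume
mayconsume = Queue()           # carries the produced items
for _ in range(K):
    mayproduce.put(None)       # to start: send(mayproduce, null) k times

produced = list(range(20))
consumed = []

def producer():
    for item in produced:
        mayproduce.get()               # receive(mayproduce, pmsg): wait for a slot
        mayconsume.put(item)           # send(mayconsume, pmsg)

def consumer():
    for _ in range(len(produced)):
        item = mayconsume.get()        # receive(mayconsume, cmsg): wait for an item
        consumed.append(item)          # consume(cmsg)
        mayproduce.put(None)           # send(mayproduce, null): free one slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == produced)    # True: all items arrive, in FIFO order
```

The producer can never be more than K items ahead of the consumer: it must withdraw a null token from mayproduce before each send, exactly as the slide's counter of empty places dictates.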

86 Chapter 5 86 Conclusions on synchro. mechanisms n A variety of methods exist, and each method has different variations. n The most commonly used are semaphores, monitors, and message passing. n Monitors and message passing are the easiest to use, closest to the programmer’s needs. n Simpler mechanisms can be used to implement more sophisticated ones. n In principle, no mechanism prevents deadlock, starvation, or busy waiting, but the more sophisticated mechanisms make them easier to avoid.

87 Chapter 5 87 Important concepts of Chapter 5 n Critical Section Problem n Software solutions, difficulties n Machine instructions n Semaphores, examples n Monitors, examples n Messages, examples

88 Chapter 5 88 Concurrency Control in Unix Extra-curricular Reading

89 Chapter 5 89 Unix SVR4 concurrency mechanisms n To communicate data across processes: u Pipes u Messages u Shared memory n To trigger actions by other processes: u Signals u Semaphores

90 Chapter 5 90 Unix Pipes n A shared bounded FIFO queue written by one process and read by another u based on the producer/consumer model u the OS enforces mutual exclusion: only one process at a time can access the pipe u if there is not enough room to write, the producer blocks; otherwise it writes u the consumer blocks if it attempts to read more bytes than are currently in the pipe u accessed by a file descriptor, like an ordinary file u processes sharing the pipe are unaware of each other’s existence
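A minimal pipe example, assuming a Unix-like system (os.fork is not available on Windows): the child is the producer, the parent the consumer, and the kernel provides the bounded FIFO queue and its blocking behaviour.

```python
import os

r, w = os.pipe()               # kernel FIFO queue: w is the write end, r the read end
pid = os.fork()
if pid == 0:                   # child: the producer
    os.close(r)
    os.write(w, b"hello pipe") # would block if the pipe buffer were full
    os.close(w)
    os._exit(0)
os.close(w)                    # parent: the consumer
data = os.read(r, 100)         # blocks until bytes are available in the pipe
os.close(r)
os.waitpid(pid, 0)
print(data)                    # b'hello pipe'
```

Note that the two processes interact only through the file descriptors: as the slide says, they need not know anything about each other.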

91 Chapter 5 91 Unix Messages n A process can create or access a message queue (like a mailbox) with the msgget system call. n msgsnd and msgrcv system calls are used to send and receive messages to a queue n There is a “type” field in message headers u FIFO access within each message type u each type defines a communication channel n Process is blocked (put asleep) when: u trying to receive from an empty queue u trying to send to a full queue
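Python's standard library does not expose msgget/msgsnd/msgrcv, so the following is only a toy in-memory model of the selection semantics: FIFO order within each message type, with msgtyp = 0 meaning "first message of any type" (real msgrcv would block instead of returning None, and blocking is omitted here).

```python
from collections import deque

class MessageQueue:
    """Toy model of a SysV message queue: FIFO per message type,
    selected by the msgtyp argument as in msgrcv (no blocking)."""
    def __init__(self):
        self.msgs = deque()

    def msgsnd(self, mtype, body):
        self.msgs.append((mtype, body))       # append preserves FIFO order

    def msgrcv(self, msgtyp=0):
        for i, (mtype, body) in enumerate(self.msgs):
            if msgtyp == 0 or mtype == msgtyp:  # 0 = first message of any type
                del self.msgs[i]
                return body
        return None                             # a real msgrcv would block here

q = MessageQueue()
q.msgsnd(1, "a1"); q.msgsnd(2, "b1"); q.msgsnd(1, "a2")
print(q.msgrcv(2))   # b1: type 2 acts as its own communication channel
print(q.msgrcv(1))   # a1: FIFO within type 1
```

Each type thus behaves like an independent channel multiplexed over one queue, which is how the slide's "each type defines a communication channel" works in practice.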

92 Chapter 5 92 Shared memory in Unix n A block of virtual memory shared by multiple processes n The shmget system call creates a new region of shared memory or returns an existing one n A process attaches a shared memory region to its virtual address space with the shmat system call n Mutual exclusion must be provided by the processes using the shared memory n Fastest form of IPC provided by Unix
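The same create/attach/detach/destroy lifecycle can be illustrated with Python's multiprocessing.shared_memory module, which wraps POSIX rather than SysV shared memory; the correspondence to shmget/shmat noted in the comments is an analogy, not the actual calls.

```python
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def child(name):
    shm = SharedMemory(name=name)          # attach to an existing region (like shmat)
    shm.buf[0] = 42                        # write directly into the shared block
    shm.close()                            # detach

def demo():
    shm = SharedMemory(create=True, size=16)   # create the region (like shmget)
    p = Process(target=child, args=(shm.name,))
    p.start(); p.join()
    value = shm.buf[0]                     # the parent sees the child's write
    shm.close(); shm.unlink()              # destroy the region when done
    return value

if __name__ == "__main__":
    print(demo())                          # 42
```

No kernel call mediates the read and write of shm.buf, which is why this is the fastest IPC form; it is also why any mutual exclusion (e.g. a semaphore around the writes) must be added by the processes themselves.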

93 Chapter 5 93 Unix signals n Similar to hardware interrupts, but without priorities n Each signal is represented by a numeric value. Ex: u 02, SIGINT: to interrupt a process u 09, SIGKILL: to terminate a process n Each signal is maintained as a single bit in the process table entry of the receiving process: the bit is set when the corresponding signal arrives (no waiting queues) n A signal is processed as soon as the process runs in user mode n A default action (e.g. termination) is performed unless a signal handler function is provided for that signal (by using the signal system call)
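Installing a handler to override the default action can be shown with Python's signal module, assuming a Unix-like system (SIGUSR1 and os.kill with arbitrary signals are not available on Windows):

```python
import os
import signal

caught = []

def handler(signum, frame):                # runs instead of the default action
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)     # install handler (the signal call)
os.kill(os.getpid(), signal.SIGUSR1)       # deliver the signal to ourselves
print("caught signal", caught)             # handler ran; process not terminated
```

Without the signal.signal() call, SIGUSR1's default action would terminate the process; with it, the pending-signal bit is consumed by our handler the next time the process runs in user mode.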

94 Chapter 5 94 Unix Semaphores n A generalization of counting semaphores (more operations are permitted). n A semaphore includes: u the current value S of the semaphore u the number of processes waiting for S to increase u the number of processes waiting for S to be 0 n We have queues of processes that are blocked on a semaphore n The system call semget creates an array of semaphores n The system call semop performs a list of operations: one on each semaphore (atomically)

95 Chapter 5 95 Unix Semaphores n Each operation to be done is specified by a value sem_op. n Let S be the semaphore value u if sem_op > 0: F S is incremented and processes waiting for S to increase are awakened u if sem_op = 0: F if S = 0: do nothing F if S != 0: block the current process on the event that S = 0

96 Chapter 5 96 Unix Semaphores u if sem_op < 0 and |sem_op| <= S: F set S := S + sem_op (i.e. S decreases) F then if S = 0: awaken processes waiting for S = 0 u if sem_op < 0 and |sem_op| > S: F the current process is blocked on the event that S increases n Hence: flexibility in usage (many operations are permitted)
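The four sem_op cases can be captured in a toy single-semaphore model; this is not the semop system call (there is no real blocking or wake-up, and no atomic multi-semaphore lists), just the value-update rules from the two slides above, with the return value reporting what would happen to the waiting queues.

```python
class SysVSemaphore:
    """Toy model of one SysV semaphore: applies a sem_op to the value S and
    reports the effect on waiters (no real blocking or process queues)."""
    def __init__(self, value=0):
        self.S = value

    def semop(self, sem_op):
        if sem_op > 0:                         # S increases
            self.S += sem_op
            return "wake waiters for S to increase"
        if sem_op == 0:                        # pure test for S == 0
            return "proceed" if self.S == 0 else "block until S == 0"
        if -sem_op <= self.S:                  # sem_op < 0 and |sem_op| <= S
            self.S += sem_op                   # S decreases
            return "wake waiters for S == 0" if self.S == 0 else "proceed"
        return "block until S increases"       # sem_op < 0 and |sem_op| > S

sem = SysVSemaphore(2)
print(sem.semop(-2))   # S drops to 0: wake processes waiting for S == 0
print(sem.semop(-1))   # |sem_op| > S: the caller would block
print(sem.semop(3))    # S rises to 3: wake processes waiting for S to increase
```

Setting sem_op = -1 and +1 recovers the classical wait/signal pair, while the other cases (decrement by more than one, wait for zero) are the extra flexibility the slide refers to.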

