Basic Synchronization Principles
Independent process: cannot affect or be affected by the other processes in the system, and does not share any data with other processes.
Interacting process: can affect or be affected by the other processes, and shares data with other processes.
We focus on processes that interact through physically or logically shared memory.
Concurrency: threads/processes that logically appear to execute in parallel. The value of concurrency lies in speed and economics, but there are few widely accepted concurrent programming languages (Java is an exception) and few concurrent programming paradigms; each problem requires careful consideration, there is no common model, and the OS tools that support concurrency tend to be "low level".
Synchronization: we need to ensure that interacting processes/threads begin to execute a designated block of code at the same logical time. Using shared resources may cause problems:
- Deadlock
- Critical sections
- Nondeterminacy (no assurance that repeating a parallel program will produce the same results)
Example: synchronizing a set of threads. A parent thread creates N child threads and signals them when they are supposed to halt. Each child thread attempts to synchronize with the parent at the end of each iteration, and proceeds with its computation if the parent has not yet issued the signal.
[Flowchart: the parent initializes, calls CreateThread(…) for each child, waits runTime seconds, sets runFlag = FALSE, and terminates the child threads. Each child loops doing work while runFlag is TRUE and exits when it becomes FALSE.]
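A minimal sketch of this pattern using POSIX threads rather than the Win32 CreateThread() shown in the flowchart; the names runFlag, runTime, and doOneUnitOfWork() are illustrative assumptions, not code from the text:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 4                            /* number of child threads (assumed) */
static volatile int runFlag = 1;       /* parent clears this to stop the children */

static void doOneUnitOfWork(int id) {  /* placeholder for the real work */
    printf("thread %d working\n", id);
    sleep(1);
}

static void *child(void *arg) {
    int id = (int)(long)arg;
    while (runFlag)                    /* synchronize with the parent each iteration */
        doOneUnitOfWork(id);
    return NULL;                       /* exit once the parent signals */
}

int main(void) {
    pthread_t t[N];
    int runTime = 10;                  /* seconds to let the children run (assumed) */

    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, child, (void *)i);

    sleep(runTime);                    /* wait runTime seconds */
    runFlag = 0;                       /* signal the children to halt */

    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}

A plain flag is enough to illustrate the idea; a production version would declare runFlag with an atomic type.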
The critical-section problem: n processes all compete to use some shared data. Each process has a code segment, called its critical section, in which the shared data is accessed. The problem is to ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section; this property is called mutual exclusion.
General structure of process Pi:

repeat
    entry section
        critical section
    exit section
        remainder section
until false;
shared double balance;

Code for p1:                          Code for p2:
    ...                                   ...
    balance = balance + amount;           balance = balance - amount;
    ...                                   ...

[Figure: both operations (balance += amount and balance -= amount) act on the same shared variable balance.]
Execution of p1:                      Execution of p2:
    ...
    load  R1, balance
    load  R2, amount
        <-- timer interrupt
                                          ...
                                          load  R1, balance
                                          load  R2, amount
                                          sub   R1, R2
                                          store R1, balance
                                          ...
        <-- timer interrupt
    add   R1, R2
    store R1, balance
    ...
Race condition: when several processes access and manipulate the same data concurrently, there is a race among them, and the outcome of the execution depends on the particular order in which the accesses take place. This is called a race condition.
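A small pthreads program (not from the text) that exhibits such a race: two threads increment the same unprotected counter, and the final value usually differs from run to run.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000
static long counter = 0;            /* shared, unprotected data */

static void *increment(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                  /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but interleaved updates are usually lost */
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
    return 0;
}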
The six possible orderings of the four operations:

Order 1: edit-a, write-a, edit-b, write-b
Order 2: edit-a, edit-b, write-a, write-b
Order 3: edit-a, edit-b, write-b, write-a
Order 4: edit-b, write-b, edit-a, write-a
Order 5: edit-b, edit-a, write-b, write-a
Order 6: edit-b, edit-a, write-a, write-b
Mutual exclusion: only one process can be in the critical section at a time. There is a race to execute critical sections, and the sections may be defined by different code in different processes, so they cannot easily be detected with static analysis. Without mutual exclusion, the results of multiple executions are not determinate. We need an OS mechanism so the programmer can resolve races.
- Mutual exclusion: if process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
- Progress: if no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
- Bounded waiting: a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Candidate approaches:
- Disable interrupts
- Software solution – locks
- Transactions
- FORK(), JOIN(), and QUIT() [Chapter 2]: terminate processes with QUIT() to synchronize, and create new processes whenever a critical section is complete
- … something new …
In a uniprocessor system, it is an interrupt that causes the race condition: the scheduler can be called from the interrupt handler, resulting in a context switch and data inconsistency.
Process pi:

repeat
    disableInterrupts();   // entry section
        critical section
    enableInterrupts();    // exit section
        remainder section
until false;
shared double amount, balance;   /* shared variables */

Program for p1:                       Program for p2:
    disableInterrupts();                  disableInterrupts();
    balance = balance + amount;           balance = balance - amount;
    enableInterrupts();                   enableInterrupts();
This solution may affect the behavior of the I/O system: interrupts can be disabled for an arbitrarily long time, and permanently if the program contains an infinite loop in its critical section. We only want to prevent p1 and p2 from interfering with each other – this blocks all pi! Try a software approach instead – a shared lock variable. But even this has problems.
Only 2 processes, P1 and P2. General structure of process Pi (the other process is Pj):

repeat
    entry section
        critical section
    exit section
        remainder section
until false;

The processes may share some common variables to synchronize their actions.
Algorithm 1. Shared variables:
    var turn : (0..1);  initially turn = 0
    turn = i  ⇒  Pi can enter its critical section

Process Pi:

repeat
    while turn ≠ i do no-op;
        critical section
    turn := 1 - i;
        remainder section
until false;

Satisfies mutual exclusion, but not progress.
Algorithm 2. Shared variables:
    var flag : array [0..1] of boolean;  initially flag[0] = flag[1] = false
    flag[i] = true  ⇒  Pi ready to enter its critical section

Process Pi (note that j represents the other process):

repeat
    flag[i] := true;
    while flag[j] do no-op;
        critical section
    flag[i] := false;
        remainder section
until false;

Satisfies mutual exclusion, but not the progress requirement.
The same shared variables, but with flag[i] set after the while loop:
    var flag : array [0..1] of boolean;  initially flag[0] = flag[1] = false
    flag[i] = true  ⇒  Pi ready to enter its critical section

Process Pi:

repeat
    while flag[j] do no-op;
    flag[i] := true;
        critical section
    flag[i] := false;
        remainder section
until false;

Does not satisfy mutual exclusion.
Combine the shared variables of algorithms 1 and 2. Process Pi (again, j represents the other process):

repeat
    flag[i] := true;
    turn := j;
    while (flag[j] and turn = j) do no-op;
        critical section
    flag[i] := false;
        remainder section
until false;

Meets all three requirements; solves the critical-section problem for two processes.
Shared variable – a lock:
    boolean lock;  initially lock = FALSE
    lock is FALSE  ⇒  Pi can enter its critical section

Process P1 (P2 is identical):

repeat
    while (lock) do no-op;
    lock = TRUE;
        critical section
    lock = FALSE;
        remainder section
until false;
shared boolean lock = FALSE;
shared double balance;

Code for p1:                          Code for p2:
    /* Acquire the lock */                /* Acquire the lock */
    while (lock) ;                        while (lock) ;
    lock = TRUE;                          lock = TRUE;
    /* Critical section */                /* Critical section */
    balance += amount;                    balance -= amount;
    /* Release the lock */                /* Release the lock */
    lock = FALSE;                         lock = FALSE;
This creates a busy-wait condition: if p1 is interrupted (preempted) while in its critical section, then p2 will spend its entire timeslice executing the while statement. [Figure: p1 sets lock = TRUE, is interrupted, and p2 remains blocked at the while loop until p1 runs again and sets lock = FALSE.]
Another problem: what if a process is interrupted immediately after executing the while, but before setting the lock? Both processes could be in their critical sections at the same time!
shared boolean lock = FALSE;
shared double balance;

Code for p1:                          Code for p2:
    /* Acquire the lock */                /* Acquire the lock */
    while (lock) ;                        while (lock) ;
    lock = TRUE;                          lock = TRUE;
    /* Execute critical section */        /* Execute critical section */
    balance = balance + amount;           balance = balance - amount;
    /* Release the lock */                /* Release the lock */
    lock = FALSE;                         lock = FALSE;

Worse yet … another race condition … Is it possible to solve the problem?
Bound the amount of time that interrupts are disabled; other code can be included to check that it is OK to assign the lock … but this is still overkill …

enter(lock) {
    disableInterrupts();
    /* Loop until the lock is free */
    while (lock) {
        enableInterrupts();    /* Let interrupts occur */
        disableInterrupts();
    }
    lock = TRUE;
    enableInterrupts();
}

exit(lock) {
    disableInterrupts();
    lock = FALSE;
    enableInterrupts();
}
Another, more subtle problem – deadlock: two or more processes/threads get into a state where each is controlling a resource needed by the other.
shared boolean lock1 = FALSE;
shared boolean lock2 = FALSE;
shared list L;

Code for p1:                          Code for p2:
    ...                                   ...
    /* Enter CS to delete elt */          /* Enter CS to update len */
    enter(lock1);                         enter(lock2);
    <delete element>;                     <update length>;
    /* Enter CS to update len */          /* Enter CS to add elt */
    enter(lock2);                         enter(lock1);
    <update length>;                      <add element>;
    /* Exit both CS */                    /* Exit both CS */
    exit(lock1);                          exit(lock2);
    exit(lock2);                          exit(lock1);
    ...                                   ...
shared boolean lock1 = FALSE;
shared boolean lock2 = FALSE;
shared list L;

Code for p1:                          Code for p2:
    ...                                   ...
    /* Enter CS to delete elt */          /* Enter CS to update len */
    enter(lock1);                         enter(lock2);
    <delete element>;                     <update length>;
    /* Exit CS */                         /* Exit CS */
    exit(lock1);                          exit(lock2);
    /* Enter CS to update len */          /* Enter CS to add elt */
    enter(lock2);                         enter(lock1);
    <update length>;                      <add element>;
    /* Exit CS */                         /* Exit CS */
    exit(lock2);                          exit(lock1);
    ...                                   ...
A transaction is a list of operations: when the system begins to execute the list, it must either execute all of them without interruption or execute none at all. Example: a list manipulator that adds or deletes an element from a list and adjusts the list descriptor (e.g., its length). Transactions are too heavyweight here – we need something simpler.
Recall the mechanisms for creating and destroying processes – FORK(), JOIN(), and QUIT(). They may be used to synchronize concurrent processes, but adding a dedicated synchronization operator is the more modern approach.
[Figure: (a) Create/Destroy – processes A, B, and C are created with FORK, merged with JOIN, and terminated with QUIT; (b) Synchronization – the same FORK/JOIN/QUIT operations used only so that A and B can synchronize, with one process waiting for the other.]
Process creation and destruction tend to be costly operations, requiring considerable manipulation of process descriptors, protection mechanisms, and memory management structures. A synchronization operation can be thought of as a resource request and can be implemented much more efficiently. Contemporary operating systems therefore provide dedicated synchronization mechanisms – semaphores.
Semaphores were invented in the 1960s by Edsger Dijkstra. A semaphore is a conceptual OS mechanism with no specific implementation defined (it could be built on enter()/exit()), and it is the basis of all contemporary OS synchronization mechanisms. A semaphore s is a nonnegative integer variable that can only be changed or tested by these two indivisible (atomic) functions:

    V(s): [s = s + 1]
    P(s): [while (s == 0) {wait}; s = s - 1]
Dijkstra was Dutch, so P is short for proberen, "to test", and V is short for verhogen, "to increment". P is the wait operation and V is the signal operation; some texts use P and V, others use wait and signal.
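For reference, POSIX exposes the same two operations as sem_wait() (P) and sem_post() (V); a hedged sketch of the earlier balance example using them, with the shared variables declared locally purely for illustration:

#include <semaphore.h>
#include <pthread.h>

static sem_t s;                           /* the semaphore */
static double balance, amount = 10.0;     /* assumed shared data */

static void *p1(void *arg) {
    sem_wait(&s);                         /* P(s): test/decrement, block if zero */
    balance = balance + amount;           /* critical section */
    sem_post(&s);                         /* V(s): increment, wake a waiter */
    return NULL;
}

int main(void) {
    pthread_t t;
    sem_init(&s, 0, 1);                   /* initial value 1: acts as a mutex */
    pthread_create(&t, NULL, p1, NULL);
    sem_wait(&s);
    balance = balance - amount;           /* the other critical section */
    sem_post(&s);
    pthread_join(t, NULL);
    sem_destroy(&s);
    return 0;
}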
Two types of semaphores:
- Binary semaphore – can only be 0 or 1 (equivalently, false or true)
- Counting (general) semaphore – can take any nonnegative integer value
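A counting semaphore is typically initialized to the number of available resources; a sketch (pool size and thread count are arbitrary assumptions) in which at most POOL_SIZE threads use a resource pool at once:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 3
#define NTHREADS  8

static sem_t pool;                        /* counting semaphore: value = free slots */

static void *worker(void *arg) {
    sem_wait(&pool);                      /* P: blocks when all POOL_SIZE slots are taken */
    printf("thread %ld using a slot\n", (long)arg);
    sleep(1);                             /* pretend to use the resource */
    sem_post(&pool);                      /* V: return the slot */
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    sem_init(&pool, 0, POOL_SIZE);        /* start with POOL_SIZE slots available */
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}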
Processes p0 and p1 enter critical sections. The requirements, restated:
- Mutual exclusion: only one process at a time in the critical section (CS)
- Only processes competing for a CS are involved in resolving who enters it
- Once a process attempts to enter its CS, it cannot be postponed indefinitely
- After requesting entry, only a bounded number of other processes may enter before the requesting process
Let fork(proc, N, arg1, arg2, …, argN) be a command to create a process and have it execute using the given N arguments.

The canonical problem:

proc_0() {                            proc_1() {
    while (TRUE) {                        while (TRUE) {
        <compute section>;                    <compute section>;
        <critical section>;                   <critical section>;
    }                                     }
}                                     }

fork(proc_0, 0);
fork(proc_1, 0);
Assumptions:
- Memory reads and writes are indivisible (simultaneous attempts result in some arbitrary order of access)
- There is no priority among the processes
- The relative speeds of the processes/processors are unknown
- Processes are cyclic and sequential
semaphore mutex = 1;

proc_0() {                            proc_1() {
    while (TRUE) {                        while (TRUE) {
        <compute section>;                    <compute section>;
        P(mutex);                             P(mutex);
        <critical section>;                   <critical section>;
        V(mutex);                             V(mutex);
    }                                     }
}                                     }

fork(proc_0, 0);
fork(proc_1, 0);
semaphore mutex = 1;

proc_0() {                            proc_1() {
    ...                                   ...
    /* Enter the CS */                    /* Enter the CS */
    P(mutex);                             P(mutex);
    balance += amount;                    balance -= amount;
    V(mutex);                             V(mutex);
    ...                                   ...
}                                     }

fork(proc_0, 0);
fork(proc_1, 0);
semaphore s1 = 0;
semaphore s2 = 0;

proc_A() {                            proc_B() {
    while (TRUE) {                        while (TRUE) {
        <…>;                                  /* Wait for proc_A */
        update(x);                            P(s1);
        /* Signal proc_B */                   retrieve(x);
        V(s1);                                <…>;
        <…>;                                  update(y);
        /* Wait for proc_B */                 /* Signal proc_A */
        P(s2);                                V(s2);
        retrieve(y);                          <…>;
    }                                     }
}                                     }

fork(proc_A, 0);
fork(proc_B, 0);
The semaphore principle is logically the same as the busy and done flags in a device controller:
- The driver signals the controller with V(busy), then waits for completion with P(done)
- The controller waits for work with P(busy), then announces completion with V(done)
See Fig 8.17, page 310.
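A sketch of that handshake using POSIX semaphores; the printf calls stand in for the real request preparation and device operation, and none of this is the text's Fig 8.17 code:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t busy, done;                  /* both start at 0 */

static void *driver(void *arg) {
    /* ... prepare the I/O request ... */
    sem_post(&busy);                      /* V(busy): hand work to the controller */
    sem_wait(&done);                      /* P(done): block until the operation completes */
    printf("driver: I/O finished\n");
    return NULL;
}

static void *controller(void *arg) {
    sem_wait(&busy);                      /* P(busy): wait for work */
    printf("controller: performing the operation\n");  /* placeholder for the device work */
    sem_post(&done);                      /* V(done): announce completion */
    return NULL;
}

int main(void) {
    pthread_t d, c;
    sem_init(&busy, 0, 0);
    sem_init(&done, 0, 0);
    pthread_create(&c, NULL, controller, NULL);
    pthread_create(&d, NULL, driver, NULL);
    pthread_join(d, NULL);
    pthread_join(c, NULL);
    return 0;
}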
The bounded-buffer problem (producer/consumer): two processes execute concurrently using shared memory. One produces information (the producer), the other uses it (the consumer). The processes communicate through a fixed number of buffers:
- The producer obtains a buffer from the pool of empty buffers, fills it with information, and places it in the pool of full buffers
- The consumer obtains a buffer from the pool of full buffers, uses the information, and returns the buffer to the pool of empty buffers
We need to ensure that neither process uses the wrong pool.
[Figure: the producer and consumer cycle buffers between the empty pool and the full pool.]
Shared data:

typedef struct {
    ...
} item;

item buffer[n];
int in = 0, out = 0, counter = 0;
Producer process:

repeat
    … produce an item in nextp …
    while counter == n do no-op;
    buffer[in] = nextp;
    in = (in + 1) % n;
    counter++;
until false;
Consumer process:

repeat
    while counter == 0 do no-op;
    nextc = buffer[out];
    out = (out + 1) % n;
    counter--;
    … consume the item in nextc …
until false;
If we let the producer and consumer processes run concurrently, we may get a wrong result. Recall the simple-thread program: each thread works correctly on its own, yet the total balance is not maintained.
Suppose we have one producer and one consumer, and the variable counter is currently 5.

Producer: counter = counter + 1
    P1: load  counter, r1
    P2: add   r1, #1, r2
    P3: store r2, counter

Consumer: counter = counter - 1
    C1: load  counter, r1
    C2: add   r1, #-1, r2
    C3: store r2, counter
One possible execution sequence:

    P1: load  counter, r1
    P2: add   r1, #1, r2
        --- context switch ---
    C1: load  counter, r1
    C2: add   r1, #-1, r2
    C3: store r2, counter
        --- context switch ---
    P3: store r2, counter

What is the value of counter?
Another execution sequence:

    C1: load  counter, r1
    C2: add   r1, #-1, r2
        --- context switch ---
    P1: load  counter, r1
    P2: add   r1, #1, r2
    P3: store r2, counter
        --- context switch ---
    C3: store r2, counter

What is the value of counter this time?
And a third execution sequence:

    C1: load  counter, r1
    C2: add   r1, #-1, r2
    C3: store r2, counter
        --- context switch ---
    P1: load  counter, r1
    P2: add   r1, #1, r2
    P3: store r2, counter
        --- context switch ---

What is the value of counter this time?
We model the empty buffer queue with a semaphore empty and the full buffer queue with another semaphore full. To achieve mutual exclusion when updating the queues, we use a third semaphore, mutex.
producer() {
    bufType *next, *here;
    while (TRUE) {
        produce_item(next);
        /* Claim an empty buffer */
        P(empty);
        /* Manipulate the pool */
        P(mutex);
        here = obtain(empty);
        V(mutex);
        copy_buffer(next, here);
        /* Manipulate the pool */
        P(mutex);
        release(here, fullPool);
        V(mutex);
        /* Signal a full buffer */
        V(full);
    }
}

consumer() {
    bufType *next, *here;
    while (TRUE) {
        /* Claim a full buffer */
        P(full);
        /* Manipulate the pool */
        P(mutex);
        here = obtain(full);
        V(mutex);
        copy_buffer(here, next);
        /* Manipulate the pool */
        P(mutex);
        release(here, emptyPool);
        V(mutex);
        /* Signal an empty buffer */
        V(empty);
        consume_item(next);
    }
}
where:

semaphore mutex = 1;    /* binary semaphore */
semaphore full  = 0;    /* counting semaphore */
semaphore empty = N;    /* counting semaphore */
bufType buffer[N];
The complete program consists of the declarations above, the producer() and consumer() functions shown earlier, and two fork calls:

fork(producer, 0);
fork(consumer, 0);
Now suppose the first two P operations in the consumer are swapped, so it claims the mutex before waiting for a full buffer (the producer and the declarations are unchanged):

consumer() {
    buf_type *next, *here;
    while (TRUE) {
        /* Claim a full buffer */
        P(mutex);
        P(full);
        here = obtain(full);
        V(mutex);
        copy_buffer(here, next);
        P(mutex);
        release(here, emptyPool);
        V(mutex);
        /* Signal an empty buffer */
        V(empty);
        consume_item(next);
    }
}
The counting semaphores serve a dual purpose: they keep count of the full and empty buffers, and they synchronize the two processes by blocking the producer when there are no empty buffers and blocking the consumer when there are no full buffers. The mutex semaphore protects access to the buffer pools.
The order of the P operations is significant. If the first two P operations were switched, a process could obtain the mutex semaphore and then block on the empty/full semaphore; the other process could not proceed, and the result is deadlock.
The readers/writers problem: a resource is shared among readers and writers. A reader process can share the resource with any other reader process but not with any writer process; a writer process requires exclusive access whenever it accesses the resource.
[Figures: readers and writers competing for a shared resource – multiple readers may access it together, while a writer requires exclusive access.]
First readers/writers solution:

resourceType *resource;
int readCount = 0;
semaphore mutex = 1;
semaphore writeBlock = 1;

reader() {
    while (TRUE) {
        …
        P(mutex);
        readCount++;
        if (readCount == 1)
            P(writeBlock);
        V(mutex);
        /* Critical section */
        access(resource);
        P(mutex);
        readCount--;
        if (readCount == 0)
            V(writeBlock);
        V(mutex);
    }
}

writer() {
    while (TRUE) {
        …
        P(writeBlock);
        /* Critical section */
        access(resource);
        V(writeBlock);
    }
}

fork(reader, 0);
fork(writer, 0);
In this solution, the first reader competes with the writers for writeBlock, and the last reader signals the writers by releasing it.
Furthermore, any writer must wait for all current readers to finish, so readers can starve writers: "updates" can be delayed forever. This may not be what we want.
Second readers/writers solution – a waiting writer blocks new readers:

int readCount = 0, writeCount = 0;
semaphore mutex1 = 1, mutex2 = 1;
semaphore readBlock = 1, writeBlock = 1, writePending = 1;

reader() {
    while (TRUE) {
        …
        P(readBlock);
        P(mutex1);
        readCount++;
        if (readCount == 1)
            P(writeBlock);
        V(mutex1);
        V(readBlock);
        access(resource);
        P(mutex1);
        readCount--;
        if (readCount == 0)
            V(writeBlock);
        V(mutex1);
    }
}

writer() {
    while (TRUE) {
        …
        P(mutex2);
        writeCount++;
        if (writeCount == 1)
            P(readBlock);
        V(mutex2);
        P(writeBlock);
        access(resource);
        V(writeBlock);
        P(mutex2);
        writeCount--;
        if (writeCount == 0)
            V(readBlock);
        V(mutex2);
    }
}

fork(reader, 0);
fork(writer, 0);
A refinement that adds a writePending semaphore around the reader's entry protocol:

int readCount = 0, writeCount = 0;
semaphore mutex1 = 1, mutex2 = 1;
semaphore readBlock = 1, writeBlock = 1, writePending = 1;

reader() {
    while (TRUE) {
        …
        P(writePending);
        P(readBlock);
        P(mutex1);
        readCount++;
        if (readCount == 1)
            P(writeBlock);
        V(mutex1);
        V(readBlock);
        V(writePending);
        access(resource);
        P(mutex1);
        readCount--;
        if (readCount == 0)
            V(writeBlock);
        V(mutex1);
    }
}

writer() {
    while (TRUE) {
        …
        P(mutex2);
        writeCount++;
        if (writeCount == 1)
            P(readBlock);
        V(mutex2);
        P(writeBlock);
        access(resource);
        V(writeBlock);
        P(mutex2);
        writeCount--;
        if (writeCount == 0)
            V(readBlock);
        V(mutex2);
    }
}

fork(reader, 0);
fork(writer, 0);
The barber can cut one person's hair at a time; other customers wait in a waiting room. [Figure: the shop has an entrance to the waiting room and an entrance to the barber's room (both sliding doors), plus a shop exit.]
semaphore mutex = 1, chair = N, waitingCustomer = 0;
int emptyChairs = N;

customer() {
    while (TRUE) {
        customer = nextCustomer();
        if (emptyChairs == 0)
            continue;
        P(chair);
        P(mutex);
        emptyChairs--;
        takeChair(customer);
        V(mutex);
        V(waitingCustomer);
    }
}

barber() {
    while (TRUE) {
        P(waitingCustomer);
        P(mutex);
        emptyChairs++;
        takeCustomer();
        V(mutex);
        V(chair);
    }
}

fork(customer, 0);
fork(barber, 0);
Dining philosophers – shared data:

semaphore chopstick[5];   /* each initialized to 1 */

Each philosopher:

while (TRUE) {
    think();
    eat();
}
Philosopher i:

repeat
    P(chopstick[i]);
    P(chopstick[(i + 1) mod 5]);
    … eat …
    V(chopstick[i]);
    V(chopstick[(i + 1) mod 5]);
    … think …
until false;
The problem with this: if all philosophers pick up the chopstick on their right at the same time, then deadlock occurs and they all starve. In the next chapter we will look at techniques to solve this problem.
The cigarette smokers problem:
- Three smokers (processes), each of whom wishes to use tobacco, paper, and matches; each has an unlimited supply of exactly one ingredient, needs all three at once, and only needs them periodically
- One agent (a fourth process) with an unlimited amount of all three
Three processes sharing three resources – solvable, but difficult.
Due to S. S. Patil (1971). The agent and the smokers share a table. The agent randomly generates two ingredients and places them on the table; once they are taken by a smoker, the agent supplies another two. Each smoker, in turn, waits for its two missing ingredients. The smoker who holds the remaining ingredient makes and smokes a cigarette for a while, then returns to the table to wait for its next ingredients.
The problem is that if a semaphore is used for each ingredient, deadlock can occur:
- One smoker has tobacco; a second has matches
- The agent puts matches and paper on the table
- The smoker with matches grabs the paper, while the smoker with tobacco grabs the matches
- Since neither smoker now has both of its missing ingredients, the agent is never signaled – DEADLOCK!
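A sketch of the naive attempt described above: one semaphore per ingredient, with each smoker doing two separate P operations. Because a smoker can end up holding one of its missing ingredients while blocked on the other, the agent may never be signalled. The agentReady semaphore and the function names are assumptions for illustration.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Naive (deadlock-prone) attempt: one semaphore per ingredient on the table. */
static sem_t tobacco, paper, matches, agentReady;

static void smoke(const char *who) { printf("%s smokes\n", who); }

/* Each smoker waits for its two missing ingredients, one at a time. */
static void *has_tobacco(void *a) {            /* needs paper + matches */
    for (;;) { sem_wait(&paper);  sem_wait(&matches);
               smoke("tobacco smoker"); sem_post(&agentReady); }
}
static void *has_paper(void *a) {              /* needs tobacco + matches */
    for (;;) { sem_wait(&tobacco); sem_wait(&matches);
               smoke("paper smoker");   sem_post(&agentReady); }
}
static void *has_matches(void *a) {            /* needs tobacco + paper */
    for (;;) { sem_wait(&tobacco); sem_wait(&paper);
               smoke("matches smoker"); sem_post(&agentReady); }
}

static void *agent(void *a) {
    for (;;) {
        switch (rand() % 3) {                  /* put two ingredients on the table */
        case 0: sem_post(&paper);   sem_post(&matches); break;
        case 1: sem_post(&tobacco); sem_post(&matches); break;
        case 2: sem_post(&tobacco); sem_post(&paper);   break;
        }
        sem_wait(&agentReady);                 /* wait for a smoker to finish */
    }
}

int main(void) {
    pthread_t t[4];
    sem_init(&tobacco, 0, 0); sem_init(&paper, 0, 0);
    sem_init(&matches, 0, 0); sem_init(&agentReady, 0, 0);
    pthread_create(&t[0], NULL, agent, NULL);
    pthread_create(&t[1], NULL, has_tobacco, NULL);
    pthread_create(&t[2], NULL, has_paper, NULL);
    pthread_create(&t[3], NULL, has_matches, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return 0;
}

Run long enough, some interleaving leaves one smoker holding an ingredient another smoker needs, and the program stops making progress.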
Implementation goals:
- Minimize the effect on the I/O system
- Block processes only on their own critical sections (not on critical sections they should not care about)
- If disabling interrupts, be sure to bound the time they are disabled
class semaphore {
    int value;
public:
    semaphore(int v = 1) { value = v; };
    P() {
        disableInterrupts();
        while (value == 0) {
            enableInterrupts();    /* Let interrupts occur */
            disableInterrupts();
        }
        value--;
        enableInterrupts();
    };
    V() {
        disableInterrupts();
        value++;
        enableInterrupts();
    };
};
A binary semaphore via the test-and-set instruction:

TS(m): [ Reg_i = memory[m]; memory[m] = TRUE; ]

Using TS directly:                    Equivalent semaphore form:
    boolean s = FALSE;                    semaphore s = 1;
    ...                                   ...
    while (TS(s)) ;                       P(s);
    s = FALSE;                            V(s);
    ...                                   ...
Test-and-set tests and modifies the contents of a word atomically (indivisibly): the operation cannot be interrupted until it has completed. It behaves as if implemented by this routine, executed as a single instruction:

boolean TestAndSet(boolean &target) {
    boolean tmp = target;
    target = true;
    return tmp;
}
TS(m): [ Reg_i = memory[m]; memory[m] = TRUE; ]

[Figure: (a) before executing TS, memory location m holds FALSE; (b) after executing TS, m holds TRUE, the data register R3 holds the old value FALSE, and the condition-code register reflects the result (=0).]
Mutual exclusion with test-and-set. Shared data:

boolean lock;   /* initially false */

Process Pi:

repeat
    while TestAndSet(lock) do no-op;
        critical section
    lock = false;
        remainder section
until false;
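On real hardware this is what C11's atomic_flag provides; a runnable sketch of the same spin-lock pattern (the shared counter and iteration count are arbitrary):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;       /* clear = lock is free */
static long shared_total = 0;

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        while (atomic_flag_test_and_set(&lock))   /* TS(lock): spin until it was clear */
            ;                                     /* busy wait */
        shared_total++;                           /* critical section */
        atomic_flag_clear(&lock);                 /* lock = false */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("total = %ld\n", shared_total);        /* 2000000 with the lock in place */
    return 0;
}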
A general semaphore using test-and-set:

struct semaphore {
    int value = <initial value>;
    boolean mutex = FALSE;
    boolean hold = TRUE;
};
shared struct semaphore s;

P(struct semaphore s) {
    while (TS(s.mutex)) ;
    s.value--;
    if (s.value < 0) {
        s.mutex = FALSE;
        while (TS(s.hold)) ;     /* block (busy wait) until a V releases hold */
    } else
        s.mutex = FALSE;
}

V(struct semaphore s) {
    while (TS(s.mutex)) ;
    s.value++;
    if (s.value <= 0) {
        while (!s.hold) ;        /* wait for a blocked process to reach its TS(s.hold) */
        s.hold = FALSE;          /* release one waiting process */
    }
    s.mutex = FALSE;
}
A process that must wait blocks at the while(TS(s.hold)) loop in P; both that loop and the loops on s.mutex are busy waits.
To avoid wasting the unused portion of its timeslice, the blocked process could yield to another process: the busy-waiting statement becomes

    while (TS(s.hold)) yield(*, scheduler);
Quiz: why is the while(!s.hold) statement in V necessary?
A race condition can occur in which a thread is blocked in the P procedure, yet the V procedure finds s.hold still TRUE. This happens when consecutive V operations occur before any thread executes a P operation; without the while, the result of one of the V operations could be lost.
Define a semaphore as a structure:

typedef struct {
    int value;
    queue L;      /* queue of blocked processes */
} semaphore;

Assume two simple operations: block suspends the process that invokes it, and wakeup(P) resumes the execution of a blocked process P.
The semaphore operations are now defined as:

P(S): S.value = S.value - 1;
      if S.value < 0
          then begin
              add this process to S.L;
              block;
          end;
V(S): S.value = S.value + 1;
      if S.value <= 0
          then begin
              remove a process P from S.L;
              wakeup(P);
          end;
An acceptable solution to the critical-section problem must meet the following constraints:
- Mutual exclusion
- Progress: if a critical section is free and a set of processes is trying to enter it, only those processes participate in selecting the next process to enter, and the selection cannot be postponed indefinitely
- Bounded waiting: after a process requests entry into its critical section, only a bounded number of other processes may enter their related critical sections before the original process enters its own
Deadlock: two or more processes wait indefinitely for an event that can be caused by only one of the waiting processes. Let S and Q be two semaphores initialized to 1:

P0:             P1:
    P(S);           P(Q);
    P(Q);           P(S);
    V(S);           V(Q);
    V(Q);           V(S);

Starvation (indefinite blocking): a process may never be removed from the semaphore queue in which it is suspended.
A semaphore value can be viewed as a consumable resource:
- The P operation is a resource request; a process/thread that encounters a zero-valued semaphore moves to the blocked state
- The V operation is a resource release; a blocked process/thread moves to the ready state when it detects a positive semaphore value
There is a subtle problem with the semaphore implementation: a process can dominate the semaphore by performing a V operation, continuing to execute, and then performing another P operation before releasing the CPU. This is called a passive implementation of V: the implementation increments the semaphore with no opportunity for a context switch, so bounded waiting may not be satisfied.
An active implementation calls the scheduler as part of the V operation (note that this changes the semantics of the semaphore). In contrast to passive semaphores, a yield or similar procedure is called after incrementing the semaphore so that a context switch can occur inside V, as sketched below.
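A minimal sketch of the two variants, reduced to the value-plus-mutex part of the earlier semaphore structure (the hold flag and the blocking path are omitted); sched_yield() stands in for yield(*, scheduler):

#include <stdatomic.h>
#include <sched.h>                       /* sched_yield() */

struct semaphore {
    int value;
    atomic_flag mutex;                   /* guards value; plays the role of s.mutex */
};

/* Passive V: increment and simply keep running.  The caller can loop
   around and P() the semaphore again before anyone else is scheduled,
   so bounded waiting can be violated. */
void V_passive(struct semaphore *s) {
    while (atomic_flag_test_and_set(&s->mutex)) ;   /* TS(s.mutex) */
    s->value++;
    atomic_flag_clear(&s->mutex);
}

/* Active V: same update, but voluntarily invoke the scheduler so a
   thread waiting on the semaphore gets a chance to run. */
void V_active(struct semaphore *s) {
    while (atomic_flag_test_and_set(&s->mutex)) ;
    s->value++;
    atomic_flag_clear(&s->mutex);
    sched_yield();                       /* the extra step that makes V "active" */
}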