CS 346 – Sect. 5.1-5.2

Process synchronization
– What is the problem?
– Criteria for a solution
– Producer / consumer example
– General problems are difficult because of subtleties
Problem

It’s often desirable for processes/threads to share data
– Can be a form of communication
– One may need data being produced by the other

Concurrent access can lead to data inconsistency. We need to “synchronize”…
– HW or SW techniques to ensure orderly execution

Bartender & drinker analogy:
– Bartender takes an empty glass and fills it
– Drinker takes the full glass and drinks the contents
– What if the drinker is overeager and starts drinking too soon?
– What if the drinker is not finished when the bartender returns?
– Must ensure we don’t spill on the counter.
Key concepts

Critical section = code containing access to shared data
– Looking up a value or modifying it

Race condition = situation where the outcome of code depends on the order in which processes take turns
– The correctness of the code should not depend on scheduling

Simple example: producer / consumer code, p. 204
– Producer adds data to the buffer and executes ++count;
– Consumer grabs data and executes --count;
– Assume count is initially 5.
– Let’s see what could happen…
Machine code

Producer’s ++count becomes:
  1  r1 = count
  2  r1 = r1 + 1
  3  count = r1

Consumer’s --count becomes:
  4  r2 = count
  5  r2 = r2 – 1
  6  count = r2

Does this code work? Yes, if we execute in order 1,2,3,4,5,6 or 4,5,6,1,2,3 – see why? The scheduler may have other ideas!
Alternate schedules

Schedule A:
  1  r1 = count
  2  r1 = r1 + 1
  4  r2 = count
  5  r2 = r2 – 1
  3  count = r1
  6  count = r2

Schedule B:
  1  r1 = count
  2  r1 = r1 + 1
  4  r2 = count
  5  r2 = r2 – 1
  6  count = r2
  3  count = r1

What are the final values of count? How could these situations happen? If updating a single variable is this delicate, you can imagine how hard the general problem is!
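The lost update above is easy to reproduce in Java. This is a minimal sketch (class and method names are just for illustration): two threads run unsynchronized ++count / --count loops, so interleavings like the schedules on this slide occur and some updates are overwritten.

```java
// Two threads race on an unprotected shared counter.
public class RaceDemo {
    static int count = 0;   // shared variable, deliberately unprotected

    // Runs n increments against n decrements; returns the final count.
    // If every ++/-- were atomic, the answer would always be 0.
    public static int run(int n) {
        count = 0;
        Thread producer = new Thread(() -> {
            for (int i = 0; i < n; i++) count++;   // steps 1,2,3
        });
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < n; i++) count--;   // steps 4,5,6
        });
        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;   // often nonzero: some ++ or -- was lost
    }

    public static void main(String[] args) {
        System.out.println("final count = " + run(1_000_000));
    }
}
```

With a large n, the printed value is usually nonzero, for exactly the reason shown in the schedules above.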
Solution criteria

How do we know we have solved a synchronization problem? 3 criteria:

Mutual exclusion – Only 1 process may be inside its critical section at any one time.
– Note: For simplicity we’re assuming there is one zone of shared data, so each process using it has 1 critical section.

Progress – Don’t hesitate to enter your critical section if no one else is in theirs.
– Avoid an overly conservative solution

Bounded waiting – There is a limit on the number of times you may enter your critical section if another process is still waiting to enter theirs.
– Avoid starvation
Solution skeleton

while (true) {
    Seek permission to enter critical section
    Do critical section
    Announce done with critical section
    Do non-critical code
}

BTW, an easy solution is to forbid preemption.
– But this power can be abused.
– Identifying the critical section lets us forbid preemption for a shorter period of time.
CS 346 – Sect. 5.3-5.7

Process synchronization
– A useful example is the “producer-consumer” problem
– Peterson’s solution
– HW support
– Semaphores
– “Dining philosophers”

Commitment
– Compile and run semaphore code from os-book.com
Peterson’s solution

… to the 2-process critical-section problem (p. 204).

while (true) {
    ready[me] = true
    turn = other
    while (ready[other] && turn == other)
        ;   // busy wait
    Do critical section
    ready[me] = false
    Do non-critical code
}

// Don’t memorize, but think: Why does this ensure mutual exclusion?
// What assumptions does this solution make?
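Here is a Java sketch of Peterson’s algorithm (an illustration, not the book’s code; class and method names are my own). The algorithm assumes memory reads and writes are seen in program order, so this sketch uses an atomic array and a volatile field to approximate that guarantee on a real JVM. Thread ids are 0 and 1.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class Peterson {
    private final AtomicIntegerArray ready = new AtomicIntegerArray(2); // 1 = wants in
    private volatile int turn;

    public void lock(int me) {
        int other = 1 - me;
        ready.set(me, 1);   // announce I want to enter
        turn = other;       // give the other thread the tie-break
        while (ready.get(other) == 1 && turn == other)
            ;               // busy-wait while the other may be inside
    }

    public void unlock(int me) {
        ready.set(me, 0);   // announce I'm done with my critical section
    }

    // Demo: two threads each add n to a plain shared counter under the lock.
    public static int demo(int n) {
        Peterson p = new Peterson();
        int[] counter = {0};
        Thread[] ts = new Thread[2];
        for (int id = 0; id < 2; id++) {
            final int me = id;
            ts[id] = new Thread(() -> {
                for (int i = 0; i < n; i++) {
                    p.lock(me);
                    counter[0]++;   // critical section
                    p.unlock(me);
                }
            });
            ts[id].start();
        }
        try { ts[0].join(); ts[1].join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return counter[0];   // 2 * n if mutual exclusion held
    }
}
```

Note the answer to the slide’s second question: the plain algorithm assumes sequentially consistent memory, which is why the sketch needs volatile/atomic variables on a modern JVM.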
HW support

As we mentioned before, we can disable interrupts – no one can preempt me.
– Disadvantages

The usual way to handle synchronization is by careful programming (SW). We require some atomic HW operations:
– A short sequence of assembly instructions guaranteed to be non-interruptible
– This keeps the non-preemption duration to an absolute minimum
– Access to “lock” variables visible to all threads
– e.g. swapping the values in 2 variables
– e.g. get and set some value (aka “test and set”)
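Java exposes this kind of atomic operation through java.util.concurrent.atomic. As a sketch (names are my own), a spinlock can be built directly on test-and-set: AtomicBoolean.getAndSet reads the old value and writes the new one in a single atomic step.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // If the old value was true, someone else holds the lock: keep trying.
        while (locked.getAndSet(true))
            ;   // busy-wait (acceptable only for very short critical sections)
    }

    public void unlock() {
        locked.set(false);
    }

    // Demo: two threads each add n to a shared counter under the spinlock.
    public static int demo(int n) {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable worker = () -> {
            for (int i = 0; i < n; i++) {
                lock.lock();
                counter[0]++;   // critical section
                lock.unlock();
            }
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        try { a.join(); b.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return counter[0];
    }
}
```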
Semaphore

Dijkstra’s solution to the mutual exclusion problem.

Semaphore object
– integer value attribute (> 0 means the resource is available)
– acquire and release methods

Semaphore variants: binary and counting
– Binary semaphore aka “mutex” or “mutex lock”

acquire() {
    if (value <= 0)
        wait/sleep
    --value
}

release() {
    ++value
    // wake sleeper
}
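Java ships this abstraction as java.util.concurrent.Semaphore. A minimal sketch (class name is my own) of a binary semaphore used as a mutex:

```java
import java.util.concurrent.Semaphore;

// A Semaphore initialized to 1 is a binary semaphore (mutex):
// acquire blocks while the value is 0; release increments it
// and wakes a sleeper.
public class SemaphoreMutexDemo {
    private static final Semaphore mutex = new Semaphore(1);
    private static int count = 0;

    public static int demo(int n) {
        count = 0;
        Runnable worker = () -> {
            for (int i = 0; i < n; i++) {
                mutex.acquireUninterruptibly();   // acquire(), minus the checked exception
                try {
                    count++;                      // critical section
                } finally {
                    mutex.release();
                }
            }
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        try { a.join(); b.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return count;   // 2 * n when mutual exclusion holds
    }
}
```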
Deadlock / starvation

After we solve a mutual exclusion problem, we also need to avoid other problems
– Another way of expressing our synchronization goals

Deadlock: 2+ processes waiting for an event that can only be performed by one of the waiting processes
– The opposite of progress

Starvation: being blocked for an indefinite or unbounded amount of time
– e.g. potentially stuck on a semaphore wait queue forever
Bounded-buffer problem

aka “producer-consumer”. See figures 5.9 – 5.10.

Producer class
– run( ) to be executed by a thread
– Periodically calls insert( )

Consumer class
– Also to be run by a thread
– Periodically calls remove( )

BoundedBuffer class
– Creates semaphores (mutex, empty, full): why 3?
– Initial values: mutex = 1, empty = SIZE, full = 0
– Implements insert( ) and remove( ). These methods contain calls to the semaphore operations acquire( ) and release( ).
Insert & remove

public void insert(E item) {
    empty.acquire();
    mutex.acquire();
    // add an item to the buffer...
    mutex.release();
    full.release();
}

public E remove() {
    full.acquire();
    mutex.acquire();
    // remove an item...
    mutex.release();
    empty.release();
}

What are we doing with the semaphores?
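A runnable version of the slide’s scheme, with the elided buffer body filled in (the circular-array details are my own): mutex guards the array, empty counts free slots, full counts filled slots.

```java
import java.util.concurrent.Semaphore;

public class BoundedBuffer<E> {
    private static final int SIZE = 5;
    private final Semaphore mutex = new Semaphore(1);
    private final Semaphore empty = new Semaphore(SIZE);
    private final Semaphore full  = new Semaphore(0);
    private final Object[] buffer = new Object[SIZE];
    private int in = 0, out = 0;

    public void insert(E item) throws InterruptedException {
        empty.acquire();               // wait for a free slot
        mutex.acquire();               // exclusive access to the buffer
        buffer[in] = item;
        in = (in + 1) % SIZE;
        mutex.release();
        full.release();                // announce one more filled slot
    }

    @SuppressWarnings("unchecked")
    public E remove() throws InterruptedException {
        full.acquire();                // wait for a filled slot
        mutex.acquire();
        E item = (E) buffer[out];
        out = (out + 1) % SIZE;
        mutex.release();
        empty.release();               // announce one more free slot
        return item;
    }

    // Demo: a producer thread inserts 0..n-1 while the caller consumes them.
    public static long demo(int n) {
        BoundedBuffer<Integer> buf = new BoundedBuffer<>();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) buf.insert(i);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        long sum = 0;
        try {
            for (int i = 0; i < n; i++) sum += buf.remove();
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;   // sum of 0..n-1
    }
}
```

Answering the slide’s question: empty and full make the producer and consumer wait for a slot, while mutex protects the buffer indices during each update.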
Readers/writers problem

More general than producer-consumer: we may have multiple readers and writers of shared info.

Mutual exclusion requirement:
– Must ensure that writers have exclusive access
– It’s okay to have multiple readers reading

See example solution, Fig. 5.10 – 5.12. Reader and Writer threads periodically want to execute.
– Operations guarded by semaphore operations

Database class (analogous to BoundedBuffer earlier)
– readerCount
– 2 semaphores: one to protect the database, one to protect the updating of readerCount
Solution outline

Reader:
    mutex.acquire();
    ++readerCount;
    if (readerCount == 1)
        db.acquire();
    mutex.release();
    // READ NOW
    mutex.acquire();
    --readerCount;
    if (readerCount == 0)
        db.release();
    mutex.release();

Writer:
    db.acquire();
    // WRITE NOW
    db.release();
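The outline above can be sketched as a runnable class (method names like startRead and the helper accessors are my own, not the book’s): the first reader locks out writers, the last reader lets them back in.

```java
import java.util.concurrent.Semaphore;

public class Database {
    private int readerCount = 0;
    private final Semaphore mutex = new Semaphore(1);  // protects readerCount
    private final Semaphore db = new Semaphore(1);     // writers' exclusive access

    public void startRead() {
        mutex.acquireUninterruptibly();
        if (++readerCount == 1)
            db.acquireUninterruptibly();   // first reader locks the database
        mutex.release();
    }

    public void endRead() {
        mutex.acquireUninterruptibly();
        if (--readerCount == 0)
            db.release();                  // last reader releases it
        mutex.release();
    }

    public void startWrite() { db.acquireUninterruptibly(); }
    public void endWrite()   { db.release(); }

    // Helpers for inspection (illustration only).
    public int readers() { return readerCount; }
    public boolean writerCouldEnter() { return db.availablePermits() == 1; }
}
```

Note this is the “first readers-writers” variant: a steady stream of readers can starve a writer, which connects back to the bounded-waiting criterion.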
Example output

writer 0 wants to write.
writer 0 is writing.
writer 0 is done writing.
reader 2 wants to read.
writer 1 wants to write.
reader 0 wants to read.
reader 1 wants to read.
Reader 2 is reading. Reader count = 1
Reader 0 is reading. Reader count = 2
Reader 1 is reading. Reader count = 3
writer 0 wants to write.
Reader 1 is done reading. Reader count = 2
Reader 2 is done reading. Reader count = 1
Reader 0 is done reading. Reader count = 0
writer 1 is writing.
reader 0 wants to read.
writer 1 is done writing.
CS 346 – Sect. 5.7-5.8

Process synchronization
– “Dining philosophers” (Dijkstra, 1965)
– Monitors
Dining philosophers

Classic OS problem
– Many possible solutions, depending on how foolproof you want the solution to be

Simulates a synchronization situation with several resources and several potential consumers. What is the problem?

Model chopsticks with semaphores – available or not.
– Initialize each to 1

Achieve mutual exclusion:
– Acquire left and right chopsticks (numbered i and i+1)
– Eat
– Release left and right chopsticks

What could go wrong?
DP (2)

What can we say about this solution?

    mutex.acquire();
    Acquire 2 neighboring forks
    Eat
    Release the 2 forks
    mutex.release();

Other improvements:
– Ability to see if either neighbor is eating
– May make more sense to associate semaphores with the philosophers, not the forks. A philosopher should block if it cannot acquire both forks.
– When done eating, wake up either neighbor if necessary.
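One other standard deadlock-free variant (a common fix, not the book’s monitor solution; class name is my own): every philosopher picks up the lower-numbered fork first. With a single global acquisition order, the circular wait of the naive solution can never form.

```java
import java.util.concurrent.Semaphore;

public class DiningTable {
    private final Semaphore[] fork;

    public DiningTable(int n) {
        fork = new Semaphore[n];
        for (int i = 0; i < n; i++) fork[i] = new Semaphore(1);
    }

    public void pickUp(int i) {
        int left = i, right = (i + 1) % fork.length;
        int first = Math.min(left, right);    // everyone agrees on this order
        int second = Math.max(left, right);
        fork[first].acquireUninterruptibly();
        fork[second].acquireUninterruptibly();
    }

    public void putDown(int i) {
        fork[i].release();
        fork[(i + 1) % fork.length].release();
    }

    // Helper for inspection (illustration only).
    public boolean forkFree(int i) {
        return fork[i].availablePermits() == 1;
    }
}
```

Note the last philosopher (n-1) reaches for fork 0 before fork n-1, which is exactly what breaks the symmetry.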
Monitor

Higher level than a semaphore
– Semaphore coding can be buggy

Programming language construct
– Special kind of class / data type
– Hides implementation detail

Automatically ensures mutual exclusion
– Only 1 thread may be “inside” the monitor at any one time
– Attributes of the monitor are the shared variables
– Methods in the monitor deal with a specific synchronization problem. This is where you access shared variables.
– Constructor can initialize shared variables

Supported by a number of HLLs
– Concurrent Pascal, Java, C#
Condition variables

With a monitor, you get mutual exclusion. If you also want to guard against deadlock or starvation, you need condition variables.

Special data type associated with monitors
– Declared with the other shared attributes of the monitor

How to use them:
– No attribute value to manipulate; 2 functions only:
– wait: if you call this, you go to sleep (enter a queue)
– signal: means you release a resource, waking up a thread waiting for it
– Each condition variable has its own queue of waiting threads/processes
Signal( )

A subtle issue for signal… In a monitor, only 1 thread may be running at a time. Suppose P calls x.wait( ); it’s now asleep. Later, Q calls x.signal( ) in order to yield the resource to P. What should happen?

3 design alternatives:
– “Blocking signal” – Q immediately goes to sleep so that P can continue.
– “Nonblocking signal” – P does not actually resume until Q has left the monitor.
– Compromise – Q immediately exits the monitor. Whoever gets to continue running may have to go to sleep on another condition variable.
CS 346 – Sect. 5.9

Process synchronization
– “Dining philosophers” monitor solution
– Java synchronization
– Atomic operations
Monitor for DP

Figure 5.18 on page 228.

Shared variable attributes:
– state for each philosopher
– “self” condition variable for each philosopher

takeForks( )
– Declare myself hungry
– See if I can get the forks. If not, go to sleep.

returnForks( )
– Why do we call test( )?

test( )
– If I’m hungry and my neighbors are not eating, then I will eat and leave the monitor.
Synch in Java

“Thread safe” = data remain consistent even if we have concurrently running threads.

If waiting for a (semaphore) value to become positive:
– Busy waiting loop
– Better: Java provides Thread.yield( ) – a hint to the scheduler to let someone else run

But even “yielding” ourselves can cause livelock
– Continually attempting an operation that fails
– e.g. You wait for another thread to run, but the scheduler keeps scheduling you instead because you have higher priority
Synchronized

Java’s answer to synchronization is the keyword synchronized – a qualifier for a method, as in:

    public synchronized void funName(params) { …

When you call a synchronized method belonging to an object, you obtain a “lock” on that object
– e.g. sem.acquire();
– The lock is automatically released when you exit the method.

If you try to call a synchronized method and the object is already locked by another thread, you are blocked and sent to the object’s entry set.
– Not quite a queue: the JVM may arbitrarily choose who gets in next.
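The ++count race from earlier, repaired with synchronized (class name is my own): each call takes the object’s lock on entry and releases it on return, so the read-modify-write is atomic with respect to the other synchronized methods of the same object.

```java
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() { count++; }   // holds this object's lock
    public synchronized int  get()       { return count; }

    // Demo: two threads each increment n times; the result is exactly 2 * n.
    public static int demo(int n) {
        SyncCounter c = new SyncCounter();
        Runnable worker = () -> {
            for (int i = 0; i < n; i++) c.increment();
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        try { a.join(); b.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return c.get();
    }
}
```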
Avoid deadlock

Producer/consumer example:
– Suppose the buffer is full and the producer is now running.
– Producer calls insert( ). It successfully enters the method, so it holds the lock on the buffer. Because the buffer is full, it calls Thread.yield( ) so that the consumer can eat some data.
– Consumer wakes up, but cannot enter the remove( ) method because the producer still has the lock. We have deadlock.

The solution is to use wait( ) and notify( ).
– When you wait, you release the lock, go to sleep (blocked), and enter the object’s wait set. Not to be confused with the entry set.
– When you notify, the JVM picks a thread T from the wait set and moves it to the entry set. T is now eligible to run, and continues from the point after its call to wait( ).
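A one-slot producer/consumer sketching this fix (class and method names are my own): wait( ) releases the object’s lock and moves the caller to the wait set, so the other thread can get in; notify( ) moves one waiter back to the entry set.

```java
public class Mailbox {
    private Integer slot = null;   // one-item buffer

    public synchronized void put(int v) throws InterruptedException {
        while (slot != null)
            wait();        // buffer full: release the lock and sleep
        slot = v;
        notify();          // wake a waiting consumer, if any
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null)
            wait();        // buffer empty: release the lock and sleep
        int v = slot;
        slot = null;
        notify();          // wake a waiting producer, if any
        return v;
    }

    // Demo: a producer thread sends 0..n-1; the caller receives and sums them.
    public static long demo(int n) {
        Mailbox box = new Mailbox();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) box.put(i);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        long sum = 0;
        try {
            for (int i = 0; i < n; i++) sum += box.take();
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }
}
```

The while loops (rather than if) matter: a thread re-checks the condition after waking, which guards against spurious wakeups and against being woken when the condition is no longer true.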
notifyAll

Puts every waiting thread into the entry set.
– Good idea if you think more than 1 thread may be waiting.
– Now all these threads compete for the next use of the synchronized object.

Sometimes calling plain notify can lead to deadlock
– Book’s doWork example ***
– Threads are numbered.
– doWork has a shared variable turn. You can only do work here if it’s your turn: if turn == your number.
– Thread 3 is doing work, sets turn to 4, and then leaves.
– But thread 4 is not in the wait set. All other threads will go to sleep.
More Java support

See: java.util.concurrent

Built-in ReentrantLock class
– Create an object of this class; call its lock and unlock methods around your critical section (p. 282)
– A fairness option lets you give priority to the longest-waiting thread

Condition interface (condition variable)
– Meant to be used with a lock. What is the goal?
– await( ) and signal( )

Semaphore class
– acquire( ) and release( )
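The one-slot buffer again, this time with java.util.concurrent (class name is my own): a ReentrantLock plays the role of synchronized, and a Condition plays the role of wait/notify — await( ) releases the lock while sleeping, signal( ) wakes one waiter.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockMailbox {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition changed = lock.newCondition();
    private Integer slot = null;

    public void put(int v) throws InterruptedException {
        lock.lock();
        try {
            while (slot != null)
                changed.await();   // releases the lock while sleeping
            slot = v;
            changed.signal();
        } finally {
            lock.unlock();         // always release, even on exception
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (slot == null)
                changed.await();
            int v = slot;
            slot = null;
            changed.signal();
            return v;
        } finally {
            lock.unlock();
        }
    }

    // Demo: same shape as before — producer sends 0..n-1, caller sums them.
    public static long demo(int n) {
        LockMailbox box = new LockMailbox();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) box.put(i);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        long sum = 0;
        try {
            for (int i = 0; i < n; i++) sum += box.take();
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }
}
```

Compared with wait/notify, this answers the slide’s “what is the goal?”: a lock can have several Conditions, so producers and consumers could wait on separate queues instead of sharing one wait set.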
Atomic operations

Behind the scenes, we need to make sure instructions are performed in an appropriate order.

“Transaction” = 1 single logical function performed by a thread
– In this case, involving shared memory
– We want it to run atomically

As we perform individual instructions, things might go smoothly or not
– If all is OK, then commit
– If not, abort and “roll back” to an earlier state of the computation

This is easier if we have fewer instructions in a row to do.
Keeping the order

Schedule 1 (serial):
    T1: Read(A)
    T1: Write(A)
    T1: Read(B)
    T1: Write(B)
    T2: Read(A)
    T2: Write(A)
    T2: Read(B)
    T2: Write(B)

Schedule 2 (interleaved):
    T1: Read(A)
    T1: Write(A)
    T2: Read(A)
    T2: Write(A)
    T1: Read(B)
    T1: Write(B)
    T2: Read(B)
    T2: Write(B)

Are these two schedules equivalent? Why?