1
Outline Part 1 Objectives: Administrative details:
To define the process and thread abstractions. To briefly introduce mechanisms for implementing processes (threads). To introduce the critical section problem. To learn how to reason about the correctness of concurrent programs. Administrative details: Groups are listed with “class pics”. Pictures – make sure name-to-face mapping is correct. Password protected name: cps110 passwd: OntheGo BRING CARDS TO SHUFFLE
2
Concurrency
Multiple things happening simultaneously: logically or physically.
Causes: interrupts; voluntary context switch (system call/trap); shared memory multiprocessor.
3
HW Support for Atomic Operations
Could provide direct support in HW Atomic increment Insert node into sorted list?? Just provide low level primitives to construct atomic sequences called synchronization primitives LOCK(counter->lock); // Wait here until unlocked counter->value = counter->value + 1; UNLOCK(counter->lock); test&set (x) instruction: returns previous value of x and sets x to “1” LOCK(x) => while (test&set(x)); UNLOCK(x) => x = 0;
4
The Basics of Processes
Processes are the OS-provided abstraction of multiple tasks (including user programs) executing concurrently. A Process IS: one instance of a program (which is only a passive set of bits) executing (implying an execution context – register state, memory resources, etc.) OS schedules processes to share CPU.
5
Why Use Processes? To capture naturally concurrent activities within the structure of the programmed system. To gain speedup by overlapping activities or exploiting parallel hardware. From DMA to multiprocessors.
6
Separation of Policy and Mechanism (System Design Principle)
“Why and What” vs. “How” Objectives and strategies vs. data structures, hardware and software implementation issues. Process abstraction vs. Process machinery Can you think of examples?
7
Process Abstraction Unit of scheduling
One (or more*) sequential threads of control program counter, register values, call stack Unit of resource allocation address space (code and data), open files sometimes called tasks or jobs Operations on processes: fork (clone-style creation), wait (parent on child), exit (self-termination), signal, kill. Process-related System Calls in Unix.
8
Threads and Processes Decouple the resource allocation aspect from the control aspect Thread abstraction - defines a single sequential instruction stream (PC, stack, register values) Process - the resource context serving as a “container” for one or more threads (shared address space) Kernel threads - unit of scheduling (kernel-supported thread operations still slow)
9
Threads and Processes Address Space Address Space Thread Thread Thread
10
An Example
A doc formatting process: two threads sharing the doc in one address space.
Editing thread: responding to your typing in your doc
Autosave thread: periodically writes your doc file to disk
11
User-Level Threads To avoid the performance penalty of kernel-supported threads, implement at user level and manage with a run-time system. Contained “within” a single kernel entity (process). Invisible to OS (OS schedules their container, not being aware of the threads themselves or their states). Poor scheduling decisions possible. User-level thread operations can be 100x faster than kernel thread operations, but need better integration / cooperation with OS.
12
Process Mechanisms PCB data structure in kernel memory represents a process (allocated on process creation, deallocated on termination). PCBs reside on various state queues (including a different queue for each “cause” of waiting) reflecting the process’s state. As a process executes, the OS moves its PCB from queue to queue (e.g. from the “waiting on I/O” queue to the “ready to run” queue).
13
Context Switching When a process is running, its program counter, register values, stack pointer, etc. are contained in the hardware registers of the CPU. The process has direct control of the CPU hardware for now. When a process is not the one currently running, its current register values are saved in a process descriptor data structure (PCB - process control block) Context switching involves moving state between CPU and various processes’ PCBs by the OS.
14
Process State Transitions
Create -> Ready
Ready -> Running: scheduled
Running -> Blocked: sleep (due to outstanding request of syscall)
Blocked -> Ready: wakeup (due to interrupt)
Running -> Ready: suspended while another process scheduled
Running -> Done
15
Interleaved Schedules
Interleaving is the logical concept: a multiprocessor implementation can run the threads truly in parallel, while a uni-processor implementation interleaves them via context switch.
16
PCBs & Queues
PCB fields: process state, process identifier, PC, Stack Pointer (SP), general purpose registers, owner userid, open files, scheduling parameters, memory mgt stuff, queue ptrs, ...other stuff...
State queues (e.g. Ready Queue, Wait on Disk Read) are linked lists of PCBs, each with a head ptr and tail ptr.
17
The Trouble with Concurrency in Threads...
Data: x is shared; i and j are private counters.
Thread 0: while(i<10) {x = x+1; i++;}
Thread 1: while(j<10) {x = x+1; j++;}
What is the value of x when both threads leave this while loop?
18
Nondeterminism
What unit of work can be performed without interruption? Indivisible or atomic operations.
Interleavings - possible execution sequences of operations drawn from all threads.
Race condition - final results depend on ordering and may not be “correct”.
The update in while (i<10) {x = x+1; i++;} expands to:
load value of x into reg / yield( ) / add 1 to reg / yield( ) / store reg value at x
19
Interleaving
Thread 0: Load x (value 0); Incr x (value 1 in reg); Store x (value 1); Load x (value 1) for 2nd iteration; Incr x (value 2 in reg); ... ; Store x for 10th iteration (value 10)
Thread 1: Load x (value 0); Incr x (value 1 in reg); Store x (value 1); ... ; Store x for 9th iteration (value 9); Load x (value 1) for 10th iteration; Incr x (value 2 in reg); Store x (value 2) for 10th iteration
20
Reasoning about Interleavings
On a uniprocessor, the possible execution sequences depend on when context switches can occur Voluntary context switch - the process or thread explicitly yields the CPU (blocking on a system call it makes, invoking a Yield operation). Interrupts or exceptions occurring - an asynchronous handler activated that disrupts the execution flow. Preemptive scheduling - a timer interrupt may cause an involuntary context switch at any point in the code. On multiprocessors, the ordering of operations on shared memory locations is the important factor.
21
The Trouble with Concurrency
Two threads (T1,T2) in one address space, or two processes in the kernel, share one counter (Shared Data: count).
Each executes: ld r2, count / add r1, r2, r3 / st count, r1
Timeline: T1 does ld (count) and add; switch; T2 does ld (count), add, st (count+1); T1 resumes and also does st (count+1). One increment is lost.
22
Solution: Atomic Sequence of Instructions
Timeline: T1 begins the atomic sequence: ld (count), add; a switch occurs, but T2 must wait; T1 resumes, st (count+1), end atomic (count is now count+1); T2 then runs: ld (count+1), st (count+2) (count is now count+2).
Atomic Sequence: appears to execute to completion without any intervening operations.
23
Critical Sections If a sequence of non-atomic operations must be executed as if it were atomic in order to be correct, then we need to provide a way to constrain the possible interleavings in this critical section of our code. Critical sections are code sequences that contribute to “bad” race conditions. Synchronization is needed around such critical sections. Mutual Exclusion - goal is to ensure that critical sections execute atomically w.r.t. related critical sections in other threads or processes. How?
24
The Critical Section Problem
Each process follows this template: while (1) { ...other stuff... //processes in here shouldn’t stop others enter_region( ); critical section exit_region( ); } The problem is to define enter_region and exit_region to ensure mutual exclusion with some degree of fairness.
25
Implementation Options for Mutual Exclusion
Disable Interrupts Busywaiting solutions - spinlocks execute a tight loop if critical section is busy benefits from specialized atomic (read-mod-write) instructions Blocking synchronization sleep (enqueued on wait queue) while C.S. is busy Synchronization primitives (abstractions, such as locks) which are provided by a system may be implemented with some combination of these techniques.
26
The Trouble with Concurrency in Threads...
Data: x is shared; i and j are private counters.
Thread 0: while(i<10) {x = x+1; i++;}
Thread 1: while(j<10) {x = x+1; j++;}
What is the value of x when both threads leave this while loop?
27
Range of Answers
Process 0: LD x // x currently 0; Add 1; ST x // x now 1, stored over 9; Do 9 more full loops // leaving x at 10
Process 1: LD x // x currently 0; Add 1; ST x // x now 1; Do 8 more full loops // x = 9; LD x // x now 1; Add 1; ST x // x = 2 stored over 10
28
The Critical Section Problem
while (1) { ...other stuff... critical section exit_region( ); }
29
Proposed Algorithm for 2 Process Mutual Exclusion
Boolean flag[2]; proc (int i) { while (TRUE){ compute; flag[i] = TRUE ; while(flag[(i+1) mod 2]) ; critical section; flag[i] = FALSE; } flag[0] = flag[1]= FALSE; fork (proc, 1, 0); fork (proc, 1,1); Is it correct? Assume they go lockstep. Both set their own flag to TRUE. Both busywait forever on the other’s flag -> deadlock.
30
Proposed Algorithm for 2 Process Mutual Exclusion
enter_region: needin [me] = true; turn = you; while (needin [you] && turn == you) {no_op}; exit_region: needin [me] = false; Is it correct?
31
Interleaving of Execution of 2 Threads (blue and green)
enter_region: needin [me] = true; turn = you; while (needin [you] && turn == you) {no_op}; Critical Section exit_region: needin [me] = false; enter_region: needin [me] = true; turn = you; while (needin [you] && turn == you) {no_op}; Critical Section exit_region: needin [me] = false;
32
needin [blue] = true; needin [green] = true;
blue: turn = green; green: turn = blue; (green's write lands last, so turn == blue)
blue: while (needin [green] && turn == green) -> false -> Critical Section
green: while (needin [blue] && turn == blue) {no_op}; (spins)
blue: needin [blue] = false;
green: while (needin [blue] && turn == blue) -> false -> Critical Section
green: needin [green] = false;
33
Greedy Version (turn = me)
needin [blue] = true; needin [green] = true;
blue: turn = blue; while (needin [green] && turn == green) -> false -> Critical Section
green: turn = green; while (needin [blue] && turn == blue) -> false -> Critical Section
Oooops! Both threads are in the critical section at once.
34
Synchronization We illustrated the dangers of race conditions when multiple threads execute instructions that interfere with each other when interleaved. Goal in solving the critical section problem is to build synchronization so that the sequence of instructions that can cause a race condition are executed AS IF they were indivisible (just appearances) “Other stuff” can be interleaved with critical section code as well as the enter_region and exit_region protocols, but it is deemed OK.
35
Peterson’s Algorithm for 2 Process Mutual Exclusion
enter_region: needin [me] = true; turn = you; while (needin [you] && turn == you) {no_op}; exit_region: needin [me] = false; What about more than 2 processes?
36
Can we extend 2-process algorithm to work with n processes?
Each of the n processes runs the same 2-process protocol: needin [me] = true; turn = you; ... CS
Idea: Tournament. Details: Bookkeeping (left to the reader)
37
Lamport’s Bakery Algorithm
enter_region:
choosing[me] = true;
number[me] = max(number[0:n-1]) + 1;
choosing[me] = false;
for (j=0; j < n; j++) {
while (choosing[j] != 0) ;
while ((number[j] != 0) and ((number[j] < number[me]) or ((number[j] == number[me]) and (j < me)))) ;
}
exit_region:
number[me] = 0;
38
Explanation of Lamport’s Bakery Algorithm
choosing[me] = true;
number[me] = max(number[0:n-1]) + 1;
choosing[me] = false;
/* choosing[i] is false when number[i] is not changing to non-zero */
for (j=0; j < n; j++) {
while (choosing[j] != 0) ;
while ((number[j] != 0) and ((number[j] < number[me]) or ((number[j] == number[me]) and (j < me)))) ;
}
/* While thread i is in this for-loop, number[i] is non-zero; if thread j (j<i) arrives later, while i is examining number[], and sets choosing[j] false, then number[j] > number[i]. Even if thread i has examined j already, i can enter CS(i), and j will NOT enter CS(j) until i leaves CS(i) and sets number[i] to 0. */
39
Interleaving / Execution Sequence with Bakery Algorithm
Initial state: all four threads (0..3) have choosing = False and number = 0.
40
State: threads 0, 1, and 3 have set choosing = True; threads 1 and 3 have picked number 1; thread 0 has not yet picked its number; thread 2 is idle (choosing = False).
41
for (j=0; j < n; j++) {
while (choosing[j] != 0) {skip}
while ((number[j] != 0) and ((number[j] < number[me]) or ((number[j] == number[me]) and (j < me)))) {skip}
}
State: thread 0 still choosing (choosing = True, no number yet); threads 1 and 3 hold number 1; thread 2 idle.
42
State: nobody is choosing; number[0] = 2 (thread 0 picked max+1), number[1] = 1, number[3] = 1. The ticket holders now run the scanning loop above.
43
State: thread 2 is now choosing (choosing = True) and has picked number[2] = 3; number[0] = 2, number[1] = 1, number[3] = 1.
44
Thread 3 Stuck: scanning j = 2, it spins while choosing[2] is still True, and its ticket 1 ties with thread 1's ticket, losing the id tie-break (1 < 3). State unchanged: number[0] = 2, number[1] = 1, number[2] = 3, number[3] = 1.
45
State: thread 1 has left its critical section and reset number[1] = 0; thread 2 is still choosing; number[0] = 2, number[2] = 3, number[3] = 1.
46
State: thread 2 finishes choosing (choosing = False). Remaining tickets: number[3] = 1, number[0] = 2, number[2] = 3, so the threads enter the critical section in that order.
47
Hardware Assistance Most modern architectures provide some support for building synchronization: atomic read-modify-write instructions. Example: test-and-set (loc, reg) [ sets bit to 1 in the new value of loc; returns old value of loc in reg ] Other examples: compare-and-swap, fetch-and-op [ ] notation means atomic
48
Busywaiting with Test-and-Set
Declare a shared memory location to represent a busyflag on the critical section we are trying to protect.
enter_region (or acquiring the “lock”):
waitloop: tsl busyflag, R0 // R0 <- busyflag; busyflag <- 1
bnz R0, waitloop // was it already set?
exit_region (or releasing the “lock”):
busyflag <- 0
49
Pros and Cons of Busywaiting
Key characteristic - the “waiting” process is actively executing instructions in the CPU and using memory cycles. Appropriate when: High likelihood of finding the critical section unoccupied (don’t take context switch just to find that out) or estimated wait time is very short Disadvantages: Wastes resources (CPU, memory, bus bandwidth)
50
Blocking Synchronization
OS implementation involving changing the state of the “waiting” process from running to blocked. Need some synchronization abstraction known to OS - provided by system calls. mutex locks with operations acquire and release semaphores with operations P and V (down, up) condition variables with wait and signal
51
Template for Implementing Blocking Synchronization
Associated with the lock is a memory location (busy) and a queue for waiting threads/processes. Acquire syscall: while (busy) {enqueue caller on lock’s queue} /*upon waking to nonbusy lock*/ busy = true; Release syscall: busy = false; /* wakeup */ move any waiting threads to Ready queue
52
Pros and Cons of Blocking
Waiting processes/threads don’t consume CPU cycles.
Appropriate when the cost of a system call is justified by expected waiting time: high likelihood of contention for the lock; long critical sections.
Disadvantage: OS involvement -> overhead
53
Outline Part 2 Objectives: Administrative details:
Higher level synchronization mechanisms Classic problems in concurrency Administrative details: BRING Forks
54
Semaphores Well-known synchronization abstraction
Defined as a non-negative integer with two atomic operations P(s) - [wait until s > 0; s--] V(s) - [s++] The atomicity and the waiting can be implemented by either busywaiting or blocking solutions.
55
Semaphore Usage Binary semaphores can provide mutual exclusion (solution of critical section problem) Counting semaphores can represent a resource with multiple instances (e.g. solving producer/consumer problem) Signaling events (persistent events that stay relevant even if nobody listening right now)
56
The Critical Section Problem
while (1) {
...other stuff...
P(mutex)
critical section
V(mutex)
}
Semaphore mutex initially 1. Fill in the boxes: P(mutex) before entering, V(mutex) after leaving.
57
When is a code section a critical section?
Thread 0 a = a + c; b = b + c; Thread 1 a = a + c; b = b + c;
58
When is a code section a critical section?
Thread 0 P(mutex) a = a + c; b = b + c; V(mutex) Thread 1 P(mutex) a = a + c; b = b + c; V(mutex)
59
When is a code section a critical section?
Thread 0 P(mutexa) a = a + c; V(mutexa) P(mutexb) b = b + c; V(mutexb) Thread 1 P(mutexa) a = a + c; V(mutexa) P(mutexb) b = b + c; V(mutexb)
60
When is a code section a critical section?
Thread 0 P(mutex0) a = a + c; b = b + c; V(mutex0) Thread 1 P(mutex1) a = a + c; b = b + c; V(mutex1)
61
When is a code section a critical section?
Thread 0 a = a + c; b = b + c; Thread 1 a = a + c; b = b + c; Thread 2 c = a + b;
62
When is a code section a critical section?
Thread 0 P(mutexa) a = a + c; V(mutexa) P(mutexb) b = b + c; V(mutexb) Thread 1 P(mutexa) a = a + c; V(mutexa) P(mutexb) b = b + c; V(mutexb) Thread 2 ? c = a + b;
63
When is a code section a critical section?
Thread 0 P(mutexa) a = a + c; V(mutexa) P(mutexb) b = b + c; V(mutexb) Thread 1 P(mutexa) a = a + c; V(mutexa) P(mutexb) b = b + c; V(mutexb) Thread 2 P(mutexa) P(mutexb) c = a + b; V(mutexa) V(mutexb)
64
When is a code section a critical section?
Thread 0 P(mutex); a = a + c; b = b + c; V(mutex); Thread 1 P(mutex); a = a + c; b = b + c; V(mutex); Thread 2 P(mutex); c = a + b; V(mutex);
65
Classic Problems There are a number of “classic” problems each of which represents a class of synchronization situations Critical Section problem Producer/Consumer problem Reader/Writer problem 5 Dining Philosophers
66
Producer / Consumer
Producer: while(whatever) { locally generate item; fill empty buffer with item }
Consumer: while(whatever) { get item from full buffer; use item }
67
Producer / Consumer (with Counting Semaphores)
Producer: while(whatever) { locally generate item; P(emptybuf); fill empty buffer with item; V(fullbuf); }
Consumer: while(whatever) { P(fullbuf); get item from full buffer; V(emptybuf); use item; }
Semaphores: emptybuf initially N; fullbuf initially 0;
68
What does it mean that Semaphores have persistence
What does it mean that Semaphores have persistence? Tweedledum and Tweedledee Problem Separate threads executing their respective procedures. The code below is intended to cause them to forever take turns exchanging insults through the shared variable X in strict alternation. The Sleep() and Wakeup() routines operate as follows: Sleep blocks the calling thread, Wakeup unblocks a specific thread if that thread is blocked, otherwise its behavior is unpredictable
69
The code shown below exhibits a well-known synchronization flaw. Outline a scenario in which this code would fail, and the outcome of that scenario.
void Tweedledum() { while(1) { Sleep(); x = Quarrel(x); Wakeup(Tweedledee); } }
void Tweedledee() { while(1) { x = Quarrel(x); Wakeup(Tweedledum); Sleep(); } }
Lost Wakeup: if dee runs first, its Wakeup(Tweedledum) is lost (dum isn’t sleeping yet); then both sleep forever.
70
Show how to fix the problem by replacing the Sleep and Wakeup calls with semaphore P (down) and V (up) operations.
semaphore dee = 0; semaphore dum = 0;
void Tweedledum() { while(1) { P(dum); x = Quarrel(x); V(dee); } }
void Tweedledee() { while(1) { x = Quarrel(x); V(dum); P(dee); } }
71
Monitor Abstraction
Encapsulates shared data and operations with mutually exclusive use of the object (an associated lock).
Associated condition variables (e.g. notEmpty) with operations Wait and Signal.
(Diagram: entry queue, monitor_lock, operations enQ / deQ / init, shared data, conditions.)
72
Condition Variables We build the monitor abstraction out of a lock (for the mutual exclusion) and a set of associated condition variables. Wait on condition: releases lock held by caller, caller goes to sleep on condition’s queue. When awakened, it must reacquire lock. Signal condition: wakes up one waiting thread. Broadcast: wakes up all threads waiting on this condition.
73
Monitor Abstraction
EnQ: {acquire (lock);
if (head == null) {head = item; signal (lock, notEmpty);}
else tail->next = item;
tail = item;
release(lock);}
deQ: {acquire (lock);
wait (lock, notEmpty);
item = head;
if (tail == head) tail = null;
head = item->next;
release(lock);}
77
Monitor Abstraction (corrected: re-check the condition in a loop)
EnQ: {acquire (lock);
if (head == null) {head = item; signal (lock, notEmpty);}
else tail->next = item;
tail = item;
release(lock);}
deQ: {acquire (lock);
while (head == null) wait (lock, notEmpty);
item = head;
if (tail == head) tail = null;
head = item->next;
release(lock);}
79
The Critical Section Problem
while (1) { ...other stuff... acquire (mutex); critical section release (mutex); } // conceptually “inside” monitor
80
P&V using Locks & CV (Monitor)
P: {acquire (lock);
while (Sval == 0) wait (lock, nonZero);
Sval = Sval - 1;
release(lock);}
V: {acquire (lock);
Sval = Sval + 1;
signal (lock, nonZero);
release(lock);}
(Diagram: entry queue, lock, operations P / V / init, shared data Sval, condition nonZero.)
81
Nachos-style Synchronization
synch.h, synch.cc
Semaphores: Semaphore::P, Semaphore::V
Locks and condition variables: Lock::Acquire, Lock::Release, Condition::Wait (conditionLock), Condition::Signal (conditionLock), Condition::Broadcast (conditionLock)
82
Design Decisions / Issues
Locking overhead (granularity)
Broadcast vs. Signal
Nested lock/condition variable problem:
LOCK a DO
LOCK b DO
while (not_ready) wait (b, c) // releases b, not a
END
END
My advice - correctness first!
83
Lock Granularity – how much should one lock protect?
(Diagram: a linked list with head and tail pointers, nodes 2, 4, 6, 8, 10; thread A is inserting node 3 while thread B works near the tail.)
84
Lock Granularity – how much should one lock protect?
(Same list diagram as above.) Concurrency vs. overhead. Complexity threatens correctness.
85
Using Condition Variables
while (! required_conditions) wait (m, c); Why we use “while” not “if” – invariant not guaranteed Why use broadcast vs. signal – can arise if we are using one condition queue for many reasons. Waking threads have to sort it out (spurious wakeups). Possibly better to separate into multiple conditions (but more complexity to code).
86
Outline for Part 3 Objective: Administrative details:
Classic synchronization problems. Using higher level synchronization abstractions. Message passing – communication vs. synchronization Administrative details: Signing up for demos – demo scheduler Rules for demos: Each group will meet with any grader at most once – rotate among graders to even out any grading differences. Each member is expected to be prepared to contribute to each demo Ask your grader where the demo will be held if they don’t post to the newsgroup For me, Microsoft lab on the 2nd floor (across from Owen’s office) in LSRC. Or come by my office to get me.
87
5 Dining Philosophers
Each philosopher (0..4): while(food available) {pick up 2 adj. forks; eat; put down forks; think awhile; }
88
Template for Philosopher
while (food available) { /*pick up forks*/ eat; /*put down forks*/ think awhile; }
89
Naive Solution
while (food available) {
P(fork[left(me)]); P(fork[right(me)]); /*pick up forks*/
eat;
V(fork[left(me)]); V(fork[right(me)]); /*put down forks*/
think awhile;
}
Does this work?
90
Simplest Example of Deadlock
Thread 0: P(R1); P(R2); V(R1); V(R2)
Thread 1: P(R2); P(R1); V(R2); V(R1)
Interleaving: T0 does P(R1); T1 does P(R2); then T0's P(R2) waits and T1's P(R1) waits.
R1 and R2 initially 1 (binary semaphores)
91
Conditions for Deadlock
Mutually exclusive use of resources Binary semaphores R1 and R2 Circular waiting Thread 0 waits for Thread 1 to V(R2) and Thread 1 waits for Thread 0 to V(R1) Hold and wait Holding either R1 or R2 while waiting on other No pre-emption Neither R1 nor R2 are forcibly removed from their respective holding Threads.
92
Philosophy 101 (or why 5DP is interesting)
How to eat with your Fellows without causing Deadlock. Circular arguments (the circular wait condition) Not giving up on firmly held things (no preemption) Infinite patience with Half-baked schemes (hold some & wait for more) Why Starvation exists and what we can do about it.
93
Dealing with Deadlock It can be prevented by breaking one of the prerequisite conditions: Mutually exclusive use of resources Example: Allowing shared access to read-only files (readers/writers problem) circular waiting Example: Define an ordering on resources and acquire them in order hold and wait no pre-emption
94
Circular Wait Condition
while (food available) {
if (me == 0) {P(fork[left(me)]); P(fork[right(me)]);}
else {P(fork[right(me)]); P(fork[left(me)]);}
eat;
V(fork[left(me)]); V(fork[right(me)]);
think awhile;
}
95
Hold and Wait Condition
while (food available) {
P(mutex);
while (forks[me] != 2) {blocking[me] = true; V(mutex); P(sleepy[me]); P(mutex);}
forks[leftneighbor(me)]--; forks[rightneighbor(me)]--;
V(mutex);
eat;
P(mutex);
forks[leftneighbor(me)]++; forks[rightneighbor(me)]++;
if (blocking[leftneighbor(me)]) {blocking[leftneighbor(me)] = false; V(sleepy[leftneighbor(me)]);}
if (blocking[rightneighbor(me)]) {blocking[rightneighbor(me)] = false; V(sleepy[rightneighbor(me)]);}
V(mutex);
think awhile;
}
96
Starvation The difference between deadlock and starvation is subtle:
Once a set of processes are deadlocked, there is no future execution sequence that can get them out of it. In starvation, there does exist some execution sequence that is favorable to the starving process although there is no guarantee it will ever occur. Rollback and Retry solutions are prone to starvation. Continuous arrival of higher priority processes is another common starvation situation.
97
5DP - Monitor Style
Boolean eating [5]; Lock forkMutex; Condition forksAvail;
void PickupForks (int i) {
forkMutex.Acquire( );
while ( eating[(i-1)%5] || eating[(i+1)%5] ) forksAvail.Wait(&forkMutex);
eating[i] = true;
forkMutex.Release( );
}
void PutdownForks (int i) {
forkMutex.Acquire( );
eating[i] = false;
forksAvail.Broadcast(&forkMutex);
forkMutex.Release( );
}
98
What about this?
while (food available) {
forkMutex.Acquire( );
while (forks[me] != 2) {blocking[me] = true; forkMutex.Release( ); sleep( ); forkMutex.Acquire( );}
forks[leftneighbor(me)]--; forks[rightneighbor(me)]--;
forkMutex.Release( );
eat;
forkMutex.Acquire( );
forks[leftneighbor(me)]++; forks[rightneighbor(me)]++;
if (blocking[leftneighbor(me)] || blocking[rightneighbor(me)]) wakeup( );
forkMutex.Release( );
think awhile;
}
99
Readers/Writers Problem
Synchronizing access to a file or data record in a database such that any number of threads requesting read-only access are allowed but only one thread requesting write access is allowed, excluding all readers.
100
Template for Readers/Writers
Reader() {while (true) { /*request r access*/ read; /*release r access*/ }}
Writer() {while (true) { /*request w access*/ write; /*release w access*/ }}
101
Template for Readers/Writers
Reader() {while (true) { fd = open(foo, 0); read; close(fd); }}
Writer() {while (true) { fd = open(foo, 1); write; close(fd); }}
102
Template for Readers/Writers
Reader() {while (true) { startRead(); read; endRead(); }}
Writer() {while (true) { startWrite(); write; endWrite(); }}
103
R/W - Monitor Style
Boolean busy = false; int numReaders = 0;
Lock filesMutex; Condition OKtoWrite, OKtoRead;
void startRead () {
filesMutex.Acquire( );
while ( busy ) OKtoRead.Wait(&filesMutex);
numReaders++;
filesMutex.Release( );}
void endRead () {
filesMutex.Acquire( );
numReaders--;
if (numReaders == 0) OKtoWrite.Signal(&filesMutex);
filesMutex.Release( );}
void startWrite() {
filesMutex.Acquire( );
while (busy || numReaders != 0) OKtoWrite.Wait(&filesMutex);
busy = true;
filesMutex.Release( );}
void endWrite() {
filesMutex.Acquire( );
busy = false;
OKtoRead.Broadcast(&filesMutex);
OKtoWrite.Signal(&filesMutex);
filesMutex.Release( );}
104
Semaphore Solution with Writer Priority
int readCount = 0, writeCount = 0; semaphore mutex1 = 1, mutex2 = 1; semaphore readBlock = 1; semaphore writePending = 1; semaphore writeBlock = 1;
105
Reader(){ while (TRUE) {
other stuff;
P(writePending); P(readBlock); P(mutex1);
readCount = readCount + 1;
if (readCount == 1) P(writeBlock);
V(mutex1); V(readBlock); V(writePending);
access resource;
P(mutex1);
readCount = readCount - 1;
if (readCount == 0) V(writeBlock);
V(mutex1);
}}
Writer(){ while (TRUE) {
other stuff;
P(mutex2);
writeCount = writeCount + 1;
if (writeCount == 1) P(readBlock);
V(mutex2);
P(writeBlock);
access resource;
V(writeBlock);
P(mutex2);
writeCount = writeCount - 1;
if (writeCount == 0) V(readBlock);
V(mutex2);
}}
109
Assume the writePending semaphore was omitted in the solution just given. What would happen?
This is supposed to give writers priority. However, consider the following sequence: Reader 1 arrives, executes through P(readBlock); Reader 1 executes P(mutex1); Writer 1 arrives, waits at P(readBlock); Reader 2 arrives, waits at P(readBlock); Reader 1 executes V(mutex1); then V(readBlock); Reader 2 may now proceed…wrong
110
Practice Larry, Moe, and Curly are planting seeds. Larry digs the holes. Moe then places a seed in each hole. Curly then fills the hole up. There are several synchronization constraints: Moe cannot plant a seed unless at least one empty hole exists, but Moe does not care how far Larry gets ahead of Moe. Curly cannot fill a hole unless at least one hole exists in which Moe has planted a seed, but the hole has not yet been filled. Curly does not care how far Moe gets ahead of Curly. Curly does care that Larry does not get more than MAX holes ahead of Curly. Thus, if there are MAX unfilled holes, Larry has to wait. There is only one shovel with which both Larry and Curly need to dig and fill the holes, respectively. Sketch out the pseudocode for the 3 threads which represent Larry, Curly, and Moe using whatever synchronization method you like.
111
Nachos Implementation of Semaphores
Semaphore::Semaphore(char* debugName, int initialValue)
{
name = debugName;
value = initialValue;
queue = new List;
}

void Semaphore::V()
{
Thread *thread;
IntStatus oldLevel = interrupt->SetLevel(IntOff);
thread = (Thread *)queue->Remove();
if (thread != NULL) // make thread ready, consuming the V immediately
scheduler->ReadyToRun(thread);
value++;
(void) interrupt->SetLevel(oldLevel);
}

void Semaphore::P()
{
IntStatus oldLevel = interrupt->SetLevel(IntOff); // disable interrupts
while (value == 0) { // semaphore not available
queue->Append((void *)currentThread); // so go to sleep
currentThread->Sleep();
}
value--; // semaphore available, consume its value
(void) interrupt->SetLevel(oldLevel); // re-enable interrupts
}
112
Shared and Private Data
Data shared across threads may need critical section protection when modified. Data “private” to threads can be used freely.
Private data: register contents (and register variables); variables local to the procedure the forked thread starts in, and to any procedure called by it; elements of shared arrays used by only one thread.
Shared data: globally declared variables; state variables of shared objects.