
1 Outline for Today
Objectives:
–To define the process and thread abstractions.
–To briefly introduce mechanisms for implementing processes (threads).
–To introduce the critical section problem.
–To learn how to reason about the correctness of concurrent programs.
Administrative details:
–Groups are listed on the web site, linked off the main page under Communications.
–Picture makeup – next time.
–Permission numbers – see me.

2 Concurrency
Multiple things happening simultaneously – logically or physically.
Causes:
–Interrupts
–Voluntary context switch (system call/trap)
–Shared memory multiprocessor
[Figure: four processors (P) sharing a common memory]

3 HW Support for Atomic Operations
Could provide direct support in HW:
–Atomic increment
–Insert node into sorted list??
Instead, just provide low-level primitives to construct atomic sequences – called synchronization primitives:
LOCK(counter->lock);
counter->value = counter->value + 1;
UNLOCK(counter->lock);
test&set(x) instruction: returns the previous value of x and sets x to “1”.
LOCK(x) => while (test&set(x));
UNLOCK(x) => x = 0;

4 The Basics of Processes
Processes are the OS-provided abstraction of multiple tasks (including user programs) executing concurrently.
A process is one instance of a program (which by itself is only a passive set of bits) executing – implying an execution context: register state, memory resources, etc.
The OS schedules processes to share the CPU.

5 Why Use Processes?
To capture naturally concurrent activities within the structure of the programmed system.
To gain speedup by overlapping activities or exploiting parallel hardware – from DMA to multiprocessors.
[Figure: four processors (P) sharing a common memory]

6 Separation of Policy and Mechanism
“Why and what” vs. “how”.
Objectives and strategies vs. data structures, hardware, and software implementation issues.
Process abstraction vs. process machinery.
Can you think of examples?

7 Process Abstraction
Unit of scheduling:
–One (or more*) sequential threads of control – program counter, register values, call stack.
Unit of resource allocation:
–address space (code and data), open files
–sometimes called tasks or jobs
Operations on processes: fork (clone-style creation), wait (parent on child), exit (self-termination), signal, kill – the process-related system calls in Unix.

8 Threads and Processes
Decouple the resource-allocation aspect from the control aspect:
–Thread abstraction – defines a single sequential instruction stream (PC, stack, register values).
–Process – the resource context serving as a “container” for one or more threads (shared address space).
Kernel threads – the unit of scheduling (kernel-supported thread operations → still slow).

9 Threads and Processes
[Figure: an address space serving as a container for its threads]

10 An Example
[Figure: a doc formatting process – one address space holding the doc, with two threads:]
–Editing thread: responds to your typing in your doc.
–Autosave thread: periodically writes your doc file to disk.

11 User-Level Threads
To avoid the performance penalty of kernel-supported threads, implement threads at user level and manage them with a run-time system:
–Contained “within” a single kernel entity (process).
–Invisible to the OS (the OS schedules their container, unaware of the threads themselves or their states). Poor scheduling decisions are possible.
User-level thread operations can be 100x faster than kernel thread operations, but need better integration / cooperation with the OS.

12 Process Mechanisms
A PCB data structure in kernel memory represents a process (allocated on process creation, deallocated on termination).
PCBs reside on various state queues (including a different queue for each “cause” of waiting) reflecting the process’s state.
As a process executes, the OS moves its PCB from queue to queue (e.g. from the “waiting on I/O” queue to the “ready to run” queue).

13 Context Switching
When a process is running, its program counter, register values, stack pointer, etc. are contained in the hardware registers of the CPU. The process has direct control of the CPU hardware for now.
When a process is not the one currently running, its current register values are saved in a process descriptor data structure (the PCB – process control block).
Context switching involves the OS moving state between the CPU and various processes’ PCBs.

14 Process State Transitions
[Diagram: states Ready, Running, Blocked, Done, with transitions:]
–Create Process → Ready
–Ready → Running: scheduled
–Running → Ready: suspended while another process is scheduled
–Running → Blocked: sleep (due to an outstanding request of a syscall)
–Blocked → Ready: wakeup (due to interrupt)
–Running → Done

15 Interleaved Schedules
[Figure: the logical concept of concurrently executing threads, realized either by a uniprocessor implementation (context switches interleave the threads on one CPU) or by a multiprocessor implementation (true parallelism).]

16 PCBs & Queues
[Figure: a Ready queue and a “wait on disk read” queue, each with head and tail pointers linking PCBs. Each PCB holds: process state, process identifier, PC, stack pointer (SP), general-purpose registers, owner userid, open files, scheduling parameters, memory mgt stuff, queue ptrs, …other stuff…]

17 The Trouble with Concurrency in Threads...
Shared data: x (initially 0). Thread0 has local i = 0; Thread1 has local j = 0.
Thread0: while (i < 10) { x = x + 1; i++; }
Thread1: while (j < 10) { x = x + 1; j++; }
What is the value of x when both threads leave this while loop?
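To see how an increment can be lost, one bad interleaving can be replayed by hand, with the three machine steps (load, add, store) made explicit; no real threads are needed, we just execute the steps in an order the scheduler could produce:

```python
# Replaying one bad interleaving of "x = x + 1" by hand. r0 and r1 stand
# for the two threads' registers.
x = 0

# Thread0 loads x into its "register" ...
r0 = x            # r0 = 0
# ... context switch: Thread1 runs its whole increment ...
r1 = x            # r1 = 0
r1 = r1 + 1
x = r1            # x = 1
# ... switch back: Thread0 finishes with its stale register value.
r0 = r0 + 1
x = r0            # x = 1, Thread1's increment is lost

print(x)          # 1, not 2
```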

18 Nondeterminism
What unit of work can be performed without interruption? Indivisible or atomic operations.
Interleavings – possible execution sequences of operations drawn from all threads.
Race condition – the final result depends on ordering and may not be “correct”.
One iteration of while (i < 10) { x = x + 1; i++; } can be interrupted at any machine step:
load value of x into reg — yield()
add 1 to reg — yield()
store reg value at x — yield()

19 Reasoning about Interleavings
On a uniprocessor, the possible execution sequences depend on when context switches can occur:
–Voluntary context switch – the process or thread explicitly yields the CPU (blocking on a system call it makes, invoking a Yield operation).
–Interrupts or exceptions – an asynchronous handler is activated that disrupts the execution flow.
–Preemptive scheduling – a timer interrupt may cause an involuntary context switch at any point in the code.
On multiprocessors, the ordering of operations on shared memory locations is the important factor.

20 The Trouble with Concurrency
Two threads (T1, T2) in one address space, or two processes in the kernel, share one counter. Each executes:
ld r2, count
add r1, r2, r3
st count, r1
One possible interleaving (time flows down):
T1: ld (count); add — switch →
T2: ld (count); add; st (count+1) → count becomes count+1 — switch →
T1: st (count+1) → count is still count+1; one increment is lost.

21 Solution: Atomic Sequence of Instructions
Atomic sequence – appears to execute to completion without any intervening operations.
With the increment made atomic, the same schedule becomes:
T1: begin atomic; ld (count); add — switch →
T2: begin atomic … must wait; T1’s sequence is incomplete — switch →
T1: st (count+1); end atomic → count becomes count+1
T2: ld (count+1); add; st (count+2); end atomic → count becomes count+2

22 Critical Sections
If a sequence of non-atomic operations must be executed as if it were atomic in order to be correct, then we need to provide a way to constrain the possible interleavings in this critical section of our code.
–Critical sections are code sequences that contribute to “bad” race conditions.
–Synchronization is needed around such critical sections.
Mutual exclusion – the goal is to ensure that critical sections execute atomically w.r.t. related critical sections in other threads or processes.
–How?

23 The Critical Section Problem
Each process follows this template:
while (1) {
...other stuff... // processes in here shouldn’t stop others
enter_region( );
critical section
exit_region( );
}
The problem is to define enter_region and exit_region to ensure mutual exclusion with some degree of fairness.

24 Implementation Options for Mutual Exclusion
Disable interrupts.
Busywaiting solutions – spinlocks:
–execute a tight loop if the critical section is busy
–benefit from specialized atomic (read-modify-write) instructions
Blocking synchronization:
–sleep (enqueued on a wait queue) while the C.S. is busy
Synchronization primitives (abstractions, such as locks) provided by a system may be implemented with some combination of these techniques.

25 Outline for Today
Objectives:
–The critical section problem.
–To learn how to reason about the correctness of concurrent programs.
–Synchronization mechanisms.
Administrative details:
–Groups again… check to make sure I’ve got them right.
–Picture makeup.

26 The Trouble with Concurrency in Threads... (revisited)
Shared data: x (initially 0); Thread0: while (i < 10) { x = x + 1; i++; } Thread1: while (j < 10) { x = x + 1; j++; }
What is the value of x when both threads leave this while loop?

27 Range of Answers
The maximum is 20 (each thread completes its 10 increments with no lost updates). The minimum is 2, via this interleaving:
–Process 0: LD x // x currently 0
–Process 1: LD x; Add 1; ST x // x now 1; then 8 more full loops // x = 9
–Process 0: Add 1; ST x // x now 1, stored over 9
–Process 1: LD x // x now 1
–Process 0: 9 more full loops // leaving x at 10
–Process 1: Add 1; ST x // x = 2, stored over 10

28 The Critical Section Problem (fill in the blanks)
while (1) {
...other stuff...
enter_region( );   // to be defined
critical section
exit_region( );
}

29 Proposed Algorithm for 2-Process Mutual Exclusion
Boolean flag[2];
proc (int i) {
while (TRUE) {
compute;
flag[i] = TRUE;
while (flag[(i+1) mod 2]) ;
critical section;
flag[i] = FALSE;
}
}
flag[0] = flag[1] = FALSE;
fork(proc, 1, 0);
fork(proc, 1, 1);
Is it correct? Assume the two processes go in lockstep: both set their own flag to TRUE, then both busywait forever on the other’s flag → deadlock.

30 Proposed Algorithm for 2-Process Mutual Exclusion
enter_region:
needin[me] = true;
turn = you;
while (needin[you] && turn == you) {no_op};
exit_region:
needin[me] = false;
Is it correct?

31 Interleaving of Execution of 2 Threads (blue and green)
Both threads run the same code:
enter_region: needin[me] = true; turn = you; while (needin[you] && turn == you) {no_op};
Critical Section
exit_region: needin[me] = false;

32 One interleaving:
needin[blue] = true;        needin[green] = true;
turn = green;               turn = blue;   // turn ends up blue
blue: while (needin[green] && turn == green) → false, blue enters the Critical Section
green: while (needin[blue] && turn == blue) {no_op}   // green spins
blue: needin[blue] = false;   // exit
green: loop test now false → green enters the Critical Section

33 Greedy Version (turn = me)
needin[blue] = true;        needin[green] = true;
turn = blue;
blue: while (needin[green] && turn == green) → false (turn is blue) → blue enters the Critical Section
turn = green;
green: while (needin[blue] && turn == blue) → false (turn is green) → green enters the Critical Section
Oooops! Both threads are in the critical section at once.

34 Synchronization
We illustrated the dangers of race conditions when multiple threads execute instructions that interfere with each other when interleaved.
The goal in solving the critical section problem is to build synchronization so that the sequence of instructions that can cause a race condition is executed AS IF it were indivisible (just appearances).
“Other stuff” can be interleaved with critical section code, as well as with the enter_region and exit_region protocols, and that is deemed OK.

35 Peterson’s Algorithm for 2-Process Mutual Exclusion
enter_region:
needin[me] = true;
turn = you;
while (needin[you] && turn == you) {no_op};
exit_region:
needin[me] = false;
What about more than 2 processes?
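Peterson's entry and exit protocols can be sketched directly in Python. One caveat: the algorithm assumes sequentially consistent memory, which CPython's interpreter happens to approximate; in C/C++ on real hardware you would need memory fences:

```python
import threading

# A sketch of Peterson's 2-process algorithm. needin[] and turn are the
# shared variables from the slide; the iteration count is arbitrary.
needin = [False, False]
turn = 0
count = 0
ITERS = 5000

def enter_region(me):
    global turn
    you = 1 - me
    needin[me] = True
    turn = you
    while needin[you] and turn == you:
        pass                      # busywait (no_op)

def exit_region(me):
    needin[me] = False

def proc(me):
    global count
    for _ in range(ITERS):
        enter_region(me)
        count += 1                # critical section
        exit_region(me)

t0 = threading.Thread(target=proc, args=(0,))
t1 = threading.Thread(target=proc, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)                      # 10000: no lost updates
```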

36 Can we extend the 2-process algorithm to work with n processes?

37 Can we extend the 2-process algorithm to work with n processes?
Idea: a tournament – run the 2-process entry protocol (needin[me] = true; turn = you; while (needin[you] && turn == you) …) pairwise at each round of a tree of matches; the winner at the root enters the CS.
Details: bookkeeping (left to the reader).

38 Lamport’s Bakery Algorithm
enter_region:
choosing[me] = true;
number[me] = max(number[0:n-1]) + 1;
choosing[me] = false;
for (j = 0; j < n; j++) {
while (choosing[j]) {skip}
while ((number[j] != 0) and
((number[j] < number[me]) or
((number[j] == number[me]) and (j < me)))) {skip}
}
exit_region:
number[me] = 0;
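The bakery algorithm can be sketched in Python for N threads. As with Peterson's, this leans on sequential consistency (which CPython's interpreter approximates); the tuple comparison `(number[j], j) < (number[me], me)` encodes the slide's "lower ticket, ties broken by lower id" test:

```python
import threading

# A sketch of Lamport's bakery algorithm. choosing[] and number[] are
# the shared arrays from the slide; N and ITERS are arbitrary demo sizes.
N = 3
choosing = [False] * N
number = [0] * N
count = 0
ITERS = 300

def enter_region(me):
    choosing[me] = True
    number[me] = max(number) + 1          # take a ticket
    choosing[me] = False
    for j in range(N):
        while choosing[j]:                # wait while thread j is picking
            pass
        while number[j] != 0 and (number[j], j) < (number[me], me):
            pass                          # a lower (ticket, id) goes first

def exit_region(me):
    number[me] = 0                        # hand back the ticket

def proc(me):
    global count
    for _ in range(ITERS):
        enter_region(me)
        count += 1                        # critical section
        exit_region(me)

threads = [threading.Thread(target=proc, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(count)                              # 900: mutual exclusion held
```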

39–46 Interleaving / Execution Sequence with the Bakery Algorithm
[Animation: slide-by-slide snapshots of choosing[] and number[] as threads 0–3 run the entry protocol. Initially all choosing = false and all number = 0. Two threads choose concurrently and both draw ticket number 1; another thread later draws 2, and thread 3 draws 3 and is stuck waiting behind the others. Ties between equal tickets are broken by thread id (lower id first); as each thread leaves the critical section it resets its number to 0, letting the next-lowest (ticket, id) pair proceed.]

47 Hardware Assistance
Most modern architectures provide some support for building synchronization: atomic read-modify-write instructions.
Example: test-and-set (loc, reg) [ sets the bit to 1 in the new value of loc; returns the old value of loc in reg ]
Other examples: compare-and-swap, fetch-and-op.
The [ ] notation means atomic.

48 Busywaiting with Test-and-Set
Declare a shared memory location to represent a busyflag on the critical section we are trying to protect.
enter_region (or acquiring the “lock”):
waitloop: tsl busyflag, R0 // R0 ← busyflag; busyflag ← 1
bnz R0, waitloop // was it already set?
exit_region (or releasing the “lock”):
busyflag ← 0

49 Outline for Today
Objectives:
–Higher-level synchronization mechanisms.
–Classic problems in concurrency.
Administrative details:
–Schedule demos on the demo scheduler.

50 Pros and Cons of Busywaiting
Key characteristic – the “waiting” process is actively executing instructions in the CPU and using memory cycles.
Appropriate when:
–There is a high likelihood of finding the critical section unoccupied (don’t take a context switch just to find that out), or the estimated wait time is very short.
Disadvantages:
–Wastes resources (CPU, memory, bus bandwidth).

51 Blocking Synchronization
The OS implementation involves changing the state of the “waiting” process from running to blocked.
Needs some synchronization abstraction known to the OS – provided by system calls:
–mutex locks with operations acquire and release
–semaphores with operations P and V (down, up)
–condition variables with wait and signal

52 Template for Implementing Blocking Synchronization
Associated with the lock are a memory location (busy) and a queue for waiting threads/processes.
Acquire syscall:
while (busy) {enqueue caller on lock’s queue}
/* upon waking to a nonbusy lock */
busy = true;
Release syscall:
busy = false;
/* wakeup */ move any waiting threads to the Ready queue
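The template above can be sketched at user level in Python. The kernel's "block this thread / move it to Ready" is simulated with a per-thread Event, and a tiny internal guard lock stands in for the atomicity the kernel would get by other means (both are assumptions of this sketch, not part of the slide):

```python
import threading
from collections import deque

# A user-level sketch of the blocking-lock template.
class BlockingLock:
    def __init__(self):
        self.busy = False
        self.queue = deque()              # queue of waiting threads
        self.guard = threading.Lock()     # stands in for kernel atomicity

    def acquire(self):
        while True:                       # "while (busy) ..."
            with self.guard:
                if not self.busy:
                    self.busy = True      # got the lock
                    return
                ev = threading.Event()
                self.queue.append(ev)     # enqueue caller on lock's queue
            ev.wait()                     # "blocked" until a release wakes us

    def release(self):
        with self.guard:
            self.busy = False
            if self.queue:                # wakeup: move a waiter to "Ready"
                self.queue.popleft().set()

lock = BlockingLock()
total = 0

def worker():
    global total
    for _ in range(1000):
        lock.acquire()
        total += 1                        # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(total)                              # 4000
```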

53 Pros and Cons of Blocking
Waiting processes/threads don’t consume CPU cycles.
Appropriate when the cost of a system call is justified by the expected waiting time:
–High likelihood of contention for the lock.
–Long critical sections.
Disadvantage: OS involvement → overhead.

54 Outline for Today
Objective:
–To clarify threads and processes.
–To continue talking about the critical section problem and get more practice thinking about possible interleavings.
–Start talking about synchronization primitives.
–Introduce other “classic” concurrency problems.
Administrative details:
–Look on the web for office hours at the beginning of next week for assignment 1.
–New TA and UTA.

55 The Alpha and MIPS 4000 processor architectures have no atomic read-modify-write instructions, i.e., no test-and-set-lock instruction (TS). Atomic update is supported by pairs of load_locked (LDL) and store-conditional (STC) instructions.
The semantics of the Alpha architecture’s LDL and STC instructions are as follows. Executing an LDL Rx, y instruction loads the memory at the specified address (y) into the specified general register (Rx), and holds y in a special per-processor lock register. STC Rx, y stores the contents of the specified general register (Rx) to memory at the specified address (y), but only if y matches the address in the CPU’s lock register. If STC succeeds, it places a one in Rx; if it fails, it places a zero in Rx.
Several kinds of events can cause the machine to clear the CPU lock register, including traps and interrupts. Moreover, if any CPU in a multiprocessor system successfully completes an STC to address y, then every other processor’s lock register is atomically cleared if it contains the value y.
Show how to use LDL and STC to implement safe busy-waiting.

56 Acquire:
LDL R1 flag /* R1 ← flag */
BNZ R1 Acquire /* if (R1 != 0) already locked */
LDI R2 1
STC R2 flag /* try to set flag */
BEZ R2 Acquire /* if STC failed, retry */
Release:
LDI R2 0
STC R2 flag /* reset lock, breaking “lock” on flag */
BEZ R2 Error /* should never happen */

57 Semaphores
A well-known synchronization abstraction.
Defined as a non-negative integer with two atomic operations:
P(s) – [ wait until s > 0; s-- ]
V(s) – [ s++ ]
The atomicity and the waiting can be implemented by either busywaiting or blocking solutions.
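Python's standard library provides this abstraction directly as `threading.Semaphore`, with `acquire()` playing the role of P and `release()` of V. A quick sketch using a binary semaphore (initial value 1) as a mutex:

```python
import threading

# Using a binary semaphore for mutual exclusion.
mutex = threading.Semaphore(1)
balance = 0

def deposit(n):
    global balance
    for _ in range(n):
        mutex.acquire()        # P(mutex): wait until s > 0, then s--
        balance += 1           # critical section
        mutex.release()        # V(mutex): s++

threads = [threading.Thread(target=deposit, args=(2500,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                 # 10000
```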

58 Semaphore Usage
Binary semaphores can provide mutual exclusion (a solution to the critical section problem).
Counting semaphores can represent a resource with multiple instances (e.g. solving the producer/consumer problem).
Signaling events (persistent events that stay relevant even if nobody is listening right now).

59 The Critical Section Problem
Semaphore: mutex, initially 1. Fill in the boxes:
while (1) {
...other stuff...
P(mutex)
critical section
V(mutex)
}

60 Classic Problems
There are a number of “classic” problems that represent a class of synchronization situations:
–Critical Section problem
–Producer/Consumer problem
–Reader/Writer problem
–5 Dining Philosophers

61 Producer / Consumer
Producer:
while (whatever) {
locally generate item
fill empty buffer with item
}
Consumer:
while (whatever) {
get item from full buffer
use item
}

62 Producer / Consumer (with Counting Semaphores)
Semaphores: emptybuf initially N; fullbuf initially 0.
Producer:
while (whatever) {
locally generate item
P(emptybuf);
fill empty buffer with item
V(fullbuf);
}
Consumer:
while (whatever) {
P(fullbuf);
get item from full buffer
V(emptybuf);
use item
}
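The scheme above can be sketched in Python with one producer and one consumer. emptybuf counts free slots and fullbuf counts filled slots; the buffer itself is a deque (whose append/popleft are atomic in CPython, so this single-producer/single-consumer demo needs no extra mutex — an assumption of the sketch, not part of the slide):

```python
import threading
from collections import deque

N = 4                                  # buffer capacity (arbitrary)
emptybuf = threading.Semaphore(N)      # initially N
fullbuf = threading.Semaphore(0)       # initially 0
buffer = deque()
consumed = []

def producer():
    for item in range(20):             # locally generate items 0..19
        emptybuf.acquire()             # P(emptybuf): wait for a free slot
        buffer.append(item)            # fill empty buffer with item
        fullbuf.release()              # V(fullbuf)

def consumer():
    for _ in range(20):
        fullbuf.acquire()              # P(fullbuf): wait for a filled slot
        consumed.append(buffer.popleft())  # get item from full buffer
        emptybuf.release()             # V(emptybuf)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)                        # [0, 1, ..., 19] in order
```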

63 What does it mean that Semaphores have persistence?
The Tweedledum and Tweedledee Problem: separate threads executing their respective procedures. The code below is intended to cause them to forever take turns exchanging insults through the shared variable X in strict alternation.
The Sleep() and Wakeup() routines operate as follows:
–Sleep blocks the calling thread.
–Wakeup unblocks a specific thread if that thread is blocked; otherwise its behavior is unpredictable.

64 The code shown above exhibits a well-known synchronization flaw. Outline a scenario in which this code would fail, and the outcome of that scenario.
void Tweedledum() {
while (1) {
Sleep();
x = Quarrel(x);
Wakeup(Tweedledee);
}
}
void Tweedledee() {
while (1) {
x = Quarrel(x);
Wakeup(Tweedledum);
Sleep();
}
}
Lost wakeup: if dee runs first, its Wakeup(Tweedledum) is lost (since dum isn’t sleeping yet); dee then sleeps, dum sleeps – both sleep forever.

65 Show how to fix the problem by replacing the Sleep and Wakeup calls with semaphore P (down) and V (up) operations.
semaphore dee = 0; semaphore dum = 0;
void Tweedledum() {
while (1) {
P(dum);        // was Sleep()
x = Quarrel(x);
V(dee);        // was Wakeup(Tweedledee)
}
}
void Tweedledee() {
while (1) {
x = Quarrel(x);
V(dum);        // was Wakeup(Tweedledum)
P(dee);        // was Sleep()
}
}
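The semaphore fix can be sketched in Python. Because a V is "remembered" even if nobody is waiting yet (the persistence the earlier slide asks about), the first wakeup is not lost. The insults are just appended to a list so the strict alternation can be checked:

```python
import threading

dum_sem = threading.Semaphore(0)
dee_sem = threading.Semaphore(0)
insults = []                       # stands in for the shared variable X
ROUNDS = 5

def tweedledum():
    for _ in range(ROUNDS):
        dum_sem.acquire()          # P(dum)  -- was Sleep()
        insults.append("dum")      # x = Quarrel(x)
        dee_sem.release()          # V(dee)  -- was Wakeup(Tweedledee)

def tweedledee():
    for _ in range(ROUNDS):
        insults.append("dee")      # x = Quarrel(x)
        dum_sem.release()          # V(dum)  -- was Wakeup(Tweedledum)
        dee_sem.acquire()          # P(dee)  -- was Sleep()

a = threading.Thread(target=tweedledum)
b = threading.Thread(target=tweedledee)
a.start(); b.start(); a.join(); b.join()
print(insults)   # ['dee', 'dum', 'dee', 'dum', ...] in strict alternation
```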

66 Monitor Abstraction
Encapsulates shared data and operations with mutually exclusive use of the object (an associated lock).
Associated condition variables with operations Wait and Signal.
[Figure: a monitor with an entry queue, a monitor_lock, shared data, operations enQ/deQ/init, and a notEmpty condition.]

67 Condition Variables
We build the monitor abstraction out of a lock (for the mutual exclusion) and a set of associated condition variables.
–Wait on condition: releases the lock held by the caller; the caller goes to sleep on the condition’s queue. When awakened, it must reacquire the lock.
–Signal condition: wakes up one waiting thread.
–Broadcast: wakes up all threads waiting on this condition.

68 Monitor Abstraction
EnQ: {acquire(lock);
if (head == null) {head = item; signal(lock, notEmpty);}
else tail->next = item;
tail = item;
release(lock);}
deQ: {acquire(lock);
if (head == null) wait(lock, notEmpty);
item = head;
if (tail == head) tail = null;
head = item->next;
release(lock);}


73 Monitor Abstraction
EnQ: {acquire(lock);
if (head == null) {head = item; signal(lock, notEmpty);}
else tail->next = item;
tail = item;
release(lock);}
deQ: {acquire(lock);
while (head == null) wait(lock, notEmpty);   // note: while, not if
item = head;
if (tail == head) tail = null;
head = item->next;
release(lock);}
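The monitor-style queue can be sketched with Python's Lock + Condition, mirroring EnQ/deQ: `wait()` releases the lock and sleeps, and on wakeup the condition is re-checked in a while loop, as the corrected slide shows:

```python
import threading
from collections import deque

class MonitorQueue:
    def __init__(self):
        self.items = deque()               # shared data
        self.lock = threading.Lock()       # monitor_lock
        self.not_empty = threading.Condition(self.lock)

    def enq(self, item):
        with self.lock:                    # acquire(lock)
            self.items.append(item)
            self.not_empty.notify()        # signal(lock, notEmpty)
                                           # release(lock) on block exit

    def deq(self):
        with self.lock:
            while not self.items:          # while, not if!
                self.not_empty.wait()      # wait(lock, notEmpty)
            return self.items.popleft()

q = MonitorQueue()
results = []

def consumer():
    for _ in range(3):
        results.append(q.deq())            # blocks until items arrive

c = threading.Thread(target=consumer)
c.start()
for i in (10, 20, 30):
    q.enq(i)
c.join()
print(results)   # [10, 20, 30]
```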

74 The Critical Section Problem
while (1) {
...other stuff...
acquire(mutex);
critical section    // conceptually “inside” the monitor
release(mutex);
}

75 P & V using Locks & CVs (Monitor)
Shared data: Sval.
P: {acquire(lock);
while (Sval == 0) wait(lock, nonZero);
Sval = Sval - 1;
release(lock);}
V: {acquire(lock);
Sval = Sval + 1;
signal(lock, nonZero);
release(lock);}
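The slide's P and V can be sketched as a semaphore class built from a Lock and a Condition (nonZero), then checked by using it as a mutex:

```python
import threading

class Semaphore:
    def __init__(self, sval=0):
        self.sval = sval
        self.lock = threading.Lock()
        self.non_zero = threading.Condition(self.lock)

    def P(self):
        with self.lock:                    # acquire(lock)
            while self.sval == 0:
                self.non_zero.wait()       # wait(lock, nonZero)
            self.sval -= 1

    def V(self):
        with self.lock:
            self.sval += 1
            self.non_zero.notify()         # signal(lock, nonZero)

# Quick check: use it as a mutex (initial value 1).
s = Semaphore(1)
count = 0

def worker():
    global count
    for _ in range(1000):
        s.P()
        count += 1                         # critical section
        s.V()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # 4000
```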

76 Nachos-style Synchronization (synch.h, synch.cc)
Semaphores:
–Semaphore::P
–Semaphore::V
Locks and condition variables:
–Lock::Acquire
–Lock::Release
–Condition::Wait(conditionLock)
–Condition::Signal(conditionLock)
–Condition::Broadcast(conditionLock)

77 Design Decisions / Issues
Locking overhead (granularity).
Broadcast vs. Signal.
Nested lock/condition variable problem:
LOCK a DO
LOCK b DO
while (not_ready)
wait(b, c)   // releases b, not a – unseen in the call
END
END
My advice – correctness first!

78 Lock Granularity – how much should one lock protect?
[Figure: a linked list (head, tail) of nodes 2, 4, 6, 8, 10 with a node 3 to be inserted; two lock placements, A and B, covering different portions of the structure.]

79 Lock Granularity – how much should one lock protect?
[Same figure as the previous slide.]
Concurrency vs. overhead.
Complexity threatens correctness.

80 Using Condition Variables
while (!required_conditions) wait(m, c);
Why we use “while”, not “if” – the invariant is not guaranteed to hold when the waiter wakes.
Why use broadcast vs. signal – the need can arise if we are using one condition queue for many reasons. Waking threads have to sort it out (spurious wakeups). It is possibly better to separate into multiple conditions (but more complexity to code).

81 5 Dining Philosophers
[Figure: philosophers 0–4 around a table, a fork between each adjacent pair.]
while (food available) {
pick up 2 adj. forks;
eat;
put down forks;
think awhile;
}

82 Template for Philosopher
while (food available) {
/* pick up forks */
eat;
/* put down forks */
think awhile;
}

83 Naive Solution
while (food available) {
P(fork[left(me)]);    /* pick up forks */
P(fork[right(me)]);
eat;
V(fork[left(me)]);    /* put down forks */
V(fork[right(me)]);
think awhile;
}
Does this work?

84 Simplest Example of Deadlock
R1 and R2 initially 1 (binary semaphores).
Thread 0: P(R1); P(R2); V(R1); V(R2)
Thread 1: P(R2); P(R1); V(R2); V(R1)
Interleaving: Thread 0 P(R1); Thread 1 P(R2); Thread 0 P(R2) waits; Thread 1 P(R1) waits – deadlock.

85 Conditions for Deadlock
Mutually exclusive use of resources:
–Binary semaphores R1 and R2.
Circular waiting:
–Thread 0 waits for Thread 1 to V(R2) and Thread 1 waits for Thread 0 to V(R1).
Hold and wait:
–Holding either R1 or R2 while waiting on the other.
No preemption:
–Neither R1 nor R2 is removed from its respective holding thread.

86 Philosophy 101 (or why 5DP is interesting)
How to eat with your fellows without causing deadlock:
–Circular arguments (the circular wait condition).
–Not giving up on firmly held things (no preemption).
–Infinite patience with half-baked schemes (hold some & wait for more).
Why starvation exists and what we can do about it.

87 Dealing with Deadlock
Deadlock can be prevented by breaking one of the prerequisite conditions:
–Mutually exclusive use of resources. Example: allowing shared access to read-only files (readers/writers problem).
–Circular waiting. Example: define an ordering on resources and acquire them in order.
–Hold and wait.
–No preemption.

88 Breaking the Circular Wait Condition
while (food available) {
if (me != 0) { P(fork[left(me)]); P(fork[right(me)]); }
else { P(fork[right(me)]); P(fork[left(me)]); }
eat;
V(fork[left(me)]); V(fork[right(me)]);
think awhile;
}
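The slide breaks the cycle by making one philosopher reach right-first. A sketch of an equivalent ordering rule, used here because it generalizes cleanly, is "always pick up the lower-numbered fork first"; with it, the philosophers cannot deadlock:

```python
import threading

# Deadlock-free dining philosophers via resource ordering. N and
# MEALS_EACH are arbitrary demo sizes.
N = 5
forks = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N
MEALS_EACH = 50

def philosopher(me):
    left, right = me, (me + 1) % N
    first, second = min(left, right), max(left, right)  # ordering rule
    for _ in range(MEALS_EACH):
        forks[first].acquire()         # always lower-numbered fork first
        forks[second].acquire()
        meals[me] += 1                 # eat
        forks[second].release()
        forks[first].release()
        # think awhile

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [50, 50, 50, 50, 50] -- everyone ate, no deadlock
```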

89 Breaking the Hold and Wait Condition
while (food available) {
P(mutex);
while (forks[me] != 2) {
blocking[me] = true;
V(mutex);
P(sleepy[me]);
P(mutex);
}
forks[leftneighbor(me)]--;
forks[rightneighbor(me)]--;
V(mutex);
eat;
P(mutex);
forks[leftneighbor(me)]++;
forks[rightneighbor(me)]++;
if (blocking[leftneighbor(me)]) {
blocking[leftneighbor(me)] = false;
V(sleepy[leftneighbor(me)]);
}
if (blocking[rightneighbor(me)]) {
blocking[rightneighbor(me)] = false;
V(sleepy[rightneighbor(me)]);
}
V(mutex);
think awhile;
}

90 Starvation
The difference between deadlock and starvation is subtle:
–Once a set of processes is deadlocked, there is no future execution sequence that can get them out of it.
–In starvation, there does exist some execution sequence that is favorable to the starving process, although there is no guarantee it will ever occur.
–Rollback-and-retry solutions are prone to starvation.
–Continuous arrival of higher-priority processes is another common starvation situation.

91 5DP – Monitor Style
Boolean eating[5];
Lock forkMutex;
Condition forksAvail;
void PickupForks(int i) {
forkMutex.Acquire();
while (eating[(i-1)%5] || eating[(i+1)%5])
forksAvail.Wait(&forkMutex);
eating[i] = true;
forkMutex.Release();
}
void PutdownForks(int i) {
forkMutex.Acquire();
eating[i] = false;
forksAvail.Broadcast(&forkMutex);
forkMutex.Release();
}

92 What about this?
while (food available) {
forkMutex.Acquire();
while (forks[me] != 2) {
blocking[me] = true;
forkMutex.Release();
sleep();
forkMutex.Acquire();
}
forks[leftneighbor(me)]--;
forks[rightneighbor(me)]--;
forkMutex.Release();
eat;
forkMutex.Acquire();
forks[leftneighbor(me)]++;
forks[rightneighbor(me)]++;
if (blocking[leftneighbor(me)] || blocking[rightneighbor(me)]) wakeup();
forkMutex.Release();
think awhile;
}

93 Readers/Writers Problem
Synchronizing access to a file or data record in a database such that any number of threads requesting read-only access are allowed, but only one thread requesting write access is allowed, excluding all readers.

94 Template for Readers/Writers
Reader() {
while (true) {
/* request r access */
read
/* release r access */
}
}
Writer() {
while (true) {
/* request w access */
write
/* release w access */
}
}

95 Template for Readers/Writers
Reader() {
while (true) {
fd = open(foo, 0);
read
close(fd);
}
}
Writer() {
while (true) {
fd = open(foo, 1);
write
close(fd);
}
}

96 Template for Readers/Writers
Reader() {
while (true) {
startRead();
read
endRead();
}
}
Writer() {
while (true) {
startWrite();
write
endWrite();
}
}

97 R/W – Monitor Style
Boolean busy = false;
int numReaders = 0;
Lock filesMutex;
Condition OKtoWrite, OKtoRead;
void startRead() {
filesMutex.Acquire();
while (busy) OKtoRead.Wait(&filesMutex);
numReaders++;
filesMutex.Release();
}
void endRead() {
filesMutex.Acquire();
numReaders--;
if (numReaders == 0) OKtoWrite.Signal(&filesMutex);
filesMutex.Release();
}
void startWrite() {
filesMutex.Acquire();
while (busy || numReaders != 0) OKtoWrite.Wait(&filesMutex);
busy = true;
filesMutex.Release();
}
void endWrite() {
filesMutex.Acquire();
busy = false;
OKtoRead.Broadcast(&filesMutex);
OKtoWrite.Signal(&filesMutex);
filesMutex.Release();
}
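The monitor-style readers/writers can be sketched with Python's Lock + Condition; the sequence at the bottom exercises it from one thread (chosen so no call blocks) to show the shared-reader / exclusive-writer states:

```python
import threading

class RWMonitor:
    def __init__(self):
        self.busy = False              # a writer is active
        self.num_readers = 0
        self.files_mutex = threading.Lock()
        self.ok_to_read = threading.Condition(self.files_mutex)
        self.ok_to_write = threading.Condition(self.files_mutex)

    def start_read(self):
        with self.files_mutex:
            while self.busy:
                self.ok_to_read.wait()
            self.num_readers += 1

    def end_read(self):
        with self.files_mutex:
            self.num_readers -= 1
            if self.num_readers == 0:
                self.ok_to_write.notify()

    def start_write(self):
        with self.files_mutex:
            while self.busy or self.num_readers != 0:
                self.ok_to_write.wait()
            self.busy = True

    def end_write(self):
        with self.files_mutex:
            self.busy = False
            self.ok_to_read.notify_all()   # broadcast to readers
            self.ok_to_write.notify()

rw = RWMonitor()
rw.start_read(); rw.start_read()
print(rw.num_readers)      # 2: readers share access
rw.end_read(); rw.end_read()
rw.start_write()
print(rw.busy)             # True: the writer holds exclusive access
rw.end_write()
```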

98 Semaphore Solution with Writer Priority
int readCount = 0, writeCount = 0;
semaphore mutex1 = 1, mutex2 = 1;
semaphore readBlock = 1;
semaphore writePending = 1;
semaphore writeBlock = 1;

99 Reader() {
while (TRUE) {
other stuff;
P(writePending);
P(readBlock);
P(mutex1);
readCount = readCount + 1;
if (readCount == 1) P(writeBlock);
V(mutex1);
V(readBlock);
V(writePending);
access resource;
P(mutex1);
readCount = readCount - 1;
if (readCount == 0) V(writeBlock);
V(mutex1);
}}
Writer() {
while (TRUE) {
other stuff;
P(mutex2);
writeCount = writeCount + 1;
if (writeCount == 1) P(readBlock);
V(mutex2);
P(writeBlock);
access resource;
V(writeBlock);
P(mutex2);
writeCount = writeCount - 1;
if (writeCount == 0) V(readBlock);
V(mutex2);
}}

100–102 Assume the writePending semaphore was omitted. What would happen?
[Animation: the same Reader/Writer code as the previous slide, stepped through with the arrival order Reader 1, Writer 1, Reader 2.]

103 Assume the writePending semaphore was omitted in the solution just given. What would happen?
This is supposed to give writers priority. However, consider the following sequence:
–Reader 1 arrives, executes through P(readBlock);
–Reader 1 executes P(mutex1);
–Writer 1 arrives, waits at P(readBlock);
–Reader 2 arrives, waits at P(readBlock);
–Reader 1 executes V(mutex1), then V(readBlock);
–Reader 2 may now proceed ahead of Writer 1… wrong.

104 Outline for Today
Objective: to define process and thread; introduce the critical section problem; reason about the correctness of concurrent programs.
Administrative details:
–Groups are listed on the web site: groups.html
–In the Makefile.common, in the top level of the code you have installed, look for these two lines:
CC = g++
LD = g++
Change them to:
CC = /usr/pkg/gcc-2.7.2/sun4m_55/bin/g++
LD = /usr/pkg/gcc-2.7.2/sun4m_55/bin/g++

