1 OMSE 510: Computing Foundations 7: More IPC & Multithreading Chris Gilmore Portland State University/OMSE Material Borrowed from Jon Walpole’s lectures
2 Today Classical IPC Problems Monitors Message Passing Scheduling
3 Classical IPC problems Producer Consumer (bounded buffer) Dining philosophers Sleeping barber Readers and writers
4 Producer consumer problem
(Diagram: a pool of 8 buffers with InP and OutP pointers, between a Producer thread and a Consumer thread.)
Producer and consumer are separate threads.
Also known as the bounded buffer problem.
5 Is this a valid solution?

Global variables:
    char buf[n]
    int InP = 0     // place to add
    int OutP = 0    // place to get
    int count

thread producer {
    while(1) {
        // Produce char c
        while (count == n) { no_op }
        buf[InP] = c
        InP = (InP + 1) mod n
        count++
    }
}

thread consumer {
    while(1) {
        while (count == 0) { no_op }
        c = buf[OutP]
        OutP = (OutP + 1) mod n
        count--
        // Consume char
    }
}
6 How about this?

Global variables:
    char buf[n]
    int InP = 0     // place to add
    int OutP = 0    // place to get
    int count

thread producer {
    while(1) {
        // Produce char c
        if (count == n) {
            sleep(full)
        }
        buf[InP] = c
        InP = (InP + 1) mod n
        count++
        if (count == 1)
            wakeup(empty)
    }
}

thread consumer {
    while(1) {
        while (count == 0) {
            sleep(empty)
        }
        c = buf[OutP]
        OutP = (OutP + 1) mod n
        count--
        if (count == n-1)
            wakeup(full)
        // Consume char
    }
}
7 Does this solution work?

Global variables:
    semaphore full_buffs = 0;
    semaphore empty_buffs = n;
    char buf[n];
    int InP, OutP;

thread producer {
    while(1) {
        // Produce char c...
        down(empty_buffs)
        buf[InP] = c
        InP = (InP + 1) mod n
        up(full_buffs)
    }
}

thread consumer {
    while(1) {
        down(full_buffs)
        c = buf[OutP]
        OutP = (OutP + 1) mod n
        up(empty_buffs)
        // Consume char...
    }
}
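The semaphore solution above can be sketched with Python's threading module (an illustrative sketch, not from the slides; note that with a single producer and a single consumer, the InP/OutP counters need no extra mutex because each is touched by only one thread):

```python
import threading

N = 8                                   # buffer slots, as in the diagram
buf = [None] * N
in_p = out_p = 0                        # InP / OutP

empty_buffs = threading.Semaphore(N)    # counts free slots
full_buffs = threading.Semaphore(0)     # counts filled slots

def produce(c):
    global in_p
    empty_buffs.acquire()               # down(empty_buffs)
    buf[in_p] = c
    in_p = (in_p + 1) % N
    full_buffs.release()                # up(full_buffs)

def consume():
    global out_p
    full_buffs.acquire()                # down(full_buffs)
    c = buf[out_p]
    out_p = (out_p + 1) % N
    empty_buffs.release()               # up(empty_buffs)
    return c

# One producer and one consumer thread moving a short message.
result = []
items = list("hello")
t1 = threading.Thread(target=lambda: [produce(c) for c in items])
t2 = threading.Thread(target=lambda: [result.append(consume()) for _ in items])
t1.start(); t2.start(); t1.join(); t2.join()
```

With more than one producer or consumer, the index updates themselves would also need a mutex.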
8 Producer consumer problem
(Same diagram: 8 buffers, InP, OutP, Producer, Consumer.)
Producer and consumer are separate threads.
What is the shared state in the last solution?
Does it apply mutual exclusion? If so, how?
9 Definition of Deadlock
A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.
Usually the event is the release of a currently held resource.
None of the processes can run, release resources, or be awakened.
10 Deadlock conditions
A deadlock situation can occur if and only if the following conditions hold simultaneously:
Mutual exclusion condition – each resource is assigned to at most one process
Hold and wait condition – processes can hold one resource while requesting more
No preemption condition – resources cannot be forcibly taken away from the process holding them
Circular wait condition – a chain of two or more processes, each waiting for a resource held by the next one in the chain
11 Resource acquisition scenarios acquire (resource_1) use resource_1 release (resource_1) Thread A: Example: var r1_mutex: Mutex... r1_mutex.Lock() Use resource_1 r1_mutex.Unlock()
12 Resource acquisition scenarios Thread A: acquire (resource_1) use resource_1 release (resource_1) Another Example: var r1_sem: Semaphore... r1_sem.Down() Use resource_1 r1_sem.Up()
13 Resource acquisition scenarios acquire (resource_2) use resource_2 release (resource_2) Thread A:Thread B: acquire (resource_1) use resource_1 release (resource_1)
14 Resource acquisition scenarios acquire (resource_2) use resource_2 release (resource_2) Thread A:Thread B: No deadlock can occur here! acquire (resource_1) use resource_1 release (resource_1)
15 Resource acquisition scenarios: 2 resources acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) Thread A:Thread B:
16 Resource acquisition scenarios: 2 resources acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) Thread A:Thread B: No deadlock can occur here!
17 Resource acquisition scenarios: 2 resources acquire (resource_1) use resources 1 release (resource_1) acquire (resource_2) use resources 2 release (resource_2) acquire (resource_2) use resources 2 release (resource_2) acquire (resource_1) use resources 1 release (resource_1) Thread A:Thread B:
18 Resource acquisition scenarios: 2 resources acquire (resource_1) use resources 1 release (resource_1) acquire (resource_2) use resources 2 release (resource_2) acquire (resource_2) use resources 2 release (resource_2) acquire (resource_1) use resources 1 release (resource_1) Thread A:Thread B: No deadlock can occur here!
19 Resource acquisition scenarios: 2 resources acquire (resource_1) acquire (resource_2) use resources 1 & 2 release (resource_2) release (resource_1) acquire (resource_2) acquire (resource_1) use resources 1 & 2 release (resource_1) release (resource_2) Thread A:Thread B:
20 Resource acquisition scenarios: 2 resources

Thread A:                      Thread B:
    acquire (resource_1)           acquire (resource_2)
    acquire (resource_2)           acquire (resource_1)
    use resources 1 & 2            use resources 1 & 2
    release (resource_2)           release (resource_1)
    release (resource_1)           release (resource_2)

Deadlock is possible!
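One standard way to break the circular wait shown here is to make every thread acquire the resources in the same global order. A minimal Python sketch (names are mine, for illustration):

```python
import threading

resource_1 = threading.Lock()
resource_2 = threading.Lock()
done = []

def worker(name):
    # Both threads acquire in the same global order (1 then 2),
    # so the circular-wait condition can never arise.
    with resource_1:
        with resource_2:
            done.append(name)

a = threading.Thread(target=worker, args=("A",))
b = threading.Thread(target=worker, args=("B",))
a.start(); b.start(); a.join(); b.join()
```

If one thread instead acquired resource_2 first, the deadlock of this slide becomes possible.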
21 Examples of deadlock
Deadlock occurs in a single program:
Programmer creates a situation that deadlocks
Kill the program and move on
Not a big deal
Deadlock occurs in the Operating System:
Spin locks and locking mechanisms are mismanaged within the OS
Threads become frozen
System hangs or crashes
Must restart the system, killing all applications
22 Dining philosophers problem
Five philosophers sit at a table, with one fork between each pair of philosophers.
Why do they need to synchronize? How should they do it?
Each philosopher is modeled with a thread:

while(TRUE) {
    Think();
    Grab first fork;
    Grab second fork;
    Eat();
    Put down first fork;
    Put down second fork;
}
23 Is this a valid solution?

#define N 5

Philosopher() {
    while(TRUE) {
        Think();
        take_fork(i);
        take_fork((i+1) % N);
        Eat();
        put_fork(i);
        put_fork((i+1) % N);
    }
}
24 Working towards a solution …

#define N 5

Philosopher() {
    while(TRUE) {
        Think();
        take_fork(i);
        take_fork((i+1) % N);
        Eat();
        put_fork(i);
        put_fork((i+1) % N);
    }
}

Replace the pair of take_fork calls with take_forks(i), and the pair of put_fork calls with put_forks(i).
25 Working towards a solution …

#define N 5

Philosopher() {
    while(TRUE) {
        Think();
        take_forks(i);
        Eat();
        put_forks(i);
    }
}
26 Picking up forks

Global variables:
    int state[N]
    semaphore mutex = 1
    semaphore sem[i]

// only called with mutex set!
test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT] != EATING &&
        state[RIGHT] != EATING) {
        state[i] = EATING;
        up(sem[i]);
    }
}

take_forks(int i) {
    down(mutex);
    state[i] = HUNGRY;
    test(i);
    up(mutex);
    down(sem[i]);
}
27 Putting down forks

Global variables:
    int state[N]
    semaphore mutex = 1
    semaphore sem[i]

// only called with mutex set!
test(int i) {
    if (state[i] == HUNGRY &&
        state[LEFT] != EATING &&
        state[RIGHT] != EATING) {
        state[i] = EATING;
        up(sem[i]);
    }
}

put_forks(int i) {
    down(mutex);
    state[i] = THINKING;
    test(LEFT);
    test(RIGHT);
    up(mutex);
}
28 Dining philosophers Is the previous solution correct? What does it mean for it to be correct? Is there an easier way?
29 The sleeping barber problem
30 The sleeping barber problem
Barber:
While there are people waiting for a hair cut, put one in the barber chair and cut their hair
When done, move to the next customer
Else go to sleep, until someone comes in
Customer:
If the barber is asleep, wake him up for a haircut
If someone is getting a haircut, wait for the barber to become free by sitting in a chair
If all the chairs are full, leave the barbershop
31 Designing a solution How will we model the barber and customers? What state variables do we need?.. and which ones are shared? …. and how will we protect them? How will the barber sleep? How will the barber wake up? How will customers wait? What problems do we need to look out for?
32 Is this a good solution?

const CHAIRS = 5
var customers: Semaphore
    barbers: Semaphore
    lock: Mutex
    numWaiting: int = 0

Barber Thread:
    while true
        Down(customers)
        Lock(lock)
        numWaiting = numWaiting - 1
        Up(barbers)
        Unlock(lock)
        CutHair()
    endWhile

Customer Thread:
    Lock(lock)
    if numWaiting < CHAIRS
        numWaiting = numWaiting + 1
        Up(customers)
        Unlock(lock)
        Down(barbers)
        GetHaircut()
    else
        -- give up & go home
        Unlock(lock)
    endIf
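The barber/customer pseudocode above can be sketched in Python (illustrative; the served list, the fixed customer count K, and the bounded barber loop are additions so the demo terminates):

```python
import threading

CHAIRS = 5
customers = threading.Semaphore(0)      # number of waiting customers
barbers = threading.Semaphore(0)        # barber ready to cut hair
lock = threading.Lock()                 # protects num_waiting
num_waiting = 0
served = []

def barber(n):
    global num_waiting
    for _ in range(n):
        customers.acquire()             # Down(customers): sleep if no one waits
        with lock:
            num_waiting -= 1
        barbers.release()               # Up(barbers): call the next customer
        # CutHair() would happen here

def customer(name):
    global num_waiting
    with lock:
        if num_waiting < CHAIRS:
            num_waiting += 1
            customers.release()         # Up(customers): wake the barber
            stay = True
        else:
            stay = False                # shop full: give up and go home
    if stay:
        barbers.acquire()               # Down(barbers): wait to be seated
        served.append(name)             # GetHaircut()

K = 3                                   # fewer customers than chairs
b = threading.Thread(target=barber, args=(K,))
cs = [threading.Thread(target=customer, args=(i,)) for i in range(K)]
b.start()
for c in cs: c.start()
b.join()
for c in cs: c.join()
```

With K kept below CHAIRS no customer is turned away, so the outcome is deterministic for the demo.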
33 The readers and writers problem Multiple readers and writers want to access a database (each one is a thread) Multiple readers can proceed concurrently Writers must synchronize with readers and other writers only one writer at a time ! when someone is writing, there must be no readers ! Goals: Maximize concurrency. Prevent starvation.
34 Designing a solution How will we model the readers and writers? What state variables do we need?.. and which ones are shared? …. and how will we protect them? How will the writers wait? How will the writers wake up? How will readers wait? How will the readers wake up? What problems do we need to look out for?
35 Is this a valid solution to readers & writers?

var mut: Mutex = unlocked
    db: Semaphore = 1
    rc: int = 0

Reader Thread:
    while true
        Lock(mut)
        rc = rc + 1
        if rc == 1
            Down(db)
        endIf
        Unlock(mut)
        ... Read shared data ...
        Lock(mut)
        rc = rc - 1
        if rc == 0
            Up(db)
        endIf
        Unlock(mut)
        ... Remainder Section ...
    endWhile

Writer Thread:
    while true
        ... Remainder Section ...
        Down(db)
        ... Write shared data ...
        Up(db)
    endWhile
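A Python sketch of the reader/writer pseudocode above (illustrative; the data dict and reads list are mine, added so there is something to read and write):

```python
import threading

mut = threading.Lock()            # protects rc
db = threading.Semaphore(1)       # exclusive access to the database
rc = 0                            # number of active readers
data = {"x": 0}
reads = []

def reader():
    global rc
    with mut:
        rc += 1
        if rc == 1:
            db.acquire()          # first reader locks out writers
    reads.append(data["x"])       # read shared data
    with mut:
        rc -= 1
        if rc == 0:
            db.release()          # last reader lets writers back in

def writer():
    with db:                      # one writer at a time, and no readers
        data["x"] += 1

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

Readers overlap freely with each other; each writer runs alone, so the final value is exactly the number of writers.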
36 Readers and writers solution Does the previous solution have any problems? is it “fair”? can any threads be starved? If so, how could this be fixed?
37 Monitors It is difficult to produce correct programs using semaphores correct ordering of up and down is tricky! avoiding race conditions and deadlock is tricky! boundary conditions are tricky! Can we get the compiler to generate the correct semaphore code for us? what are suitable higher level abstractions for synchronization?
38 Monitors Related shared objects are collected together Compiler enforces encapsulation/mutual exclusion Encapsulation: Local data variables are accessible only via the monitor’s entry procedures (like methods) Mutual exclusion A monitor has an associated mutex lock Threads must acquire the monitor’s mutex lock before invoking one of its procedures
39 Monitors and condition variables But we need two flavors of synchronization Mutual exclusion Only one at a time in the critical section Handled by the monitor’s mutex Condition synchronization Wait until a certain condition holds Signal waiting threads when the condition holds
40 Monitors and condition variables Condition variables (cv) for use within monitors wait(cv) thread blocked (queued) until condition holds monitor mutex released!! signal(cv) signals the condition and unblocks (dequeues) a thread
41 Monitor structure (diagram)
Shared data and condition variables x, y are local to the monitor; each condition variable has an associated list of waiting threads.
“Entry” methods can be called from outside the monitor; only one is active at any moment.
Local methods and initialization code are internal to the monitor.
The monitor entry queue is the list of threads waiting to enter the monitor.
42 Monitor example for mutual exclusion

process Producer
begin
    loop
        BoundedBuffer.deposit(c)
    end loop
end Producer

process Consumer
begin
    loop
        BoundedBuffer.remove(c)
    end loop
end Consumer

monitor: BoundedBuffer
    var buffer : ... ;
        nextIn, nextOut : ... ;
    entry deposit(c: char) begin ... end
    entry remove(var c: char) begin ... end
end BoundedBuffer
43 Observations That’s much simpler than the semaphore-based solution to producer/consumer (bounded buffer)! … but where is the mutex? … and what do the bodies of the monitor procedures look like?
44 Monitor example with condition variables

monitor: BoundedBuffer
    var buffer : array[0..n-1] of char
        nextIn, nextOut : 0..n-1 := 0
        fullCount : 0..n := 0
        notEmpty, notFull : condition

    entry deposit(c: char)
    begin
        if (fullCount = n) then
            wait(notFull)
        end if
        buffer[nextIn] := c
        nextIn := (nextIn+1) mod n
        fullCount := fullCount+1
        signal(notEmpty)
    end deposit

    entry remove(var c: char)
    begin
        if (fullCount = 0) then
            wait(notEmpty)
        end if
        c := buffer[nextOut]
        nextOut := (nextOut+1) mod n
        fullCount := fullCount-1
        signal(notFull)
    end remove
end BoundedBuffer
45 Condition variables “Condition variables allow processes to synchronize based on some state of the monitor variables.”
46 Condition variables in producer/consumer “NotFull” condition “NotEmpty” condition Operations Wait() and Signal() allow synchronization within the monitor When a producer thread adds an element... A consumer may be sleeping Need to wake the consumer... Signal
47 Condition synchronization semantics “Only one thread can be executing in the monitor at any one time.” Scenario: Thread A is executing in the monitor Thread A does a signal waking up thread B What happens now? Signaling and signaled threads can not both run! … so which one runs, which one blocks, and on what queue?
48 Monitor design choices Condition variables introduce a problem for mutual exclusion only one process active in the monitor at a time, so what to do when a process is unblocked on signal? must not block holding the mutex, so what to do when a process blocks on wait? Should signals be stored/remembered? signals are not stored if signal occurs before wait, signal is lost! Should condition variables count?
49 Monitor design choices Choices when A signals a condition that unblocks B A waits for B to exit the monitor or blocks again B waits for A to exit the monitor or block Signal causes A to immediately exit the monitor or block (… but awaiting what condition?) Choices when A signals a condition that unblocks B & C B is unblocked, but C remains blocked C is unblocked, but B remains blocked Both B & C are unblocked … and compete for the mutex? Choices when A calls wait and blocks a new external process is allowed to enter but which one?
50 Option 1: Hoare semantics What happens when a Signal is performed? signaling thread (A) is suspended signaled thread (B) wakes up and runs immediately Result: B can assume the condition is now true/satisfied Hoare semantics give strong guarantees Easier to prove correctness When B leaves monitor, A can run. A might resume execution immediately... or maybe another thread (C) will slip in!
51 Option 2: MESA Semantics (Xerox PARC) What happens when a Signal is performed? the signaling thread (A) continues. the signaled thread (B) waits. when A leaves monitor, then B runs. Issue: What happens while B is waiting? can another thread (C) run after A signals, but before B runs? In MESA semantics a signal is more like a hint Requires B to recheck the state of the monitor variables (the invariant) to see if it can proceed or must wait some more
52 Code for the “deposit” entry routine (Hoare semantics)

monitor BoundedBuffer
    var buffer: array[n] of char
        nextIn, nextOut: int = 0
        cntFull: int = 0
        notEmpty: Condition
        notFull: Condition

    entry deposit(c: char)
        if cntFull == N
            notFull.Wait()
        endIf
        buffer[nextIn] = c
        nextIn = (nextIn+1) mod N
        cntFull = cntFull + 1
        notEmpty.Signal()
    endEntry

    entry remove()
        ...
endMonitor
53 Code for the “deposit” entry routine (MESA semantics)

monitor BoundedBuffer
    var buffer: array[n] of char
        nextIn, nextOut: int = 0
        cntFull: int = 0
        notEmpty: Condition
        notFull: Condition

    entry deposit(c: char)
        while cntFull == N
            notFull.Wait()
        endWhile
        buffer[nextIn] = c
        nextIn = (nextIn+1) mod N
        cntFull = cntFull + 1
        notEmpty.Signal()
    endEntry

    entry remove()
        ...
endMonitor
54 Code for the “remove” entry routine (Hoare semantics)

monitor BoundedBuffer
    var buffer: array[n] of char
        nextIn, nextOut: int = 0
        cntFull: int = 0
        notEmpty: Condition
        notFull: Condition

    entry deposit(c: char)
        ...

    entry remove()
        if cntFull == 0
            notEmpty.Wait()
        endIf
        c = buffer[nextOut]
        nextOut = (nextOut+1) mod N
        cntFull = cntFull - 1
        notFull.Signal()
    endEntry
endMonitor
55 Code for the “remove” entry routine (MESA semantics)

monitor BoundedBuffer
    var buffer: array[n] of char
        nextIn, nextOut: int = 0
        cntFull: int = 0
        notEmpty: Condition
        notFull: Condition

    entry deposit(c: char)
        ...

    entry remove()
        while cntFull == 0
            notEmpty.Wait()
        endWhile
        c = buffer[nextOut]
        nextOut = (nextOut+1) mod N
        cntFull = cntFull - 1
        notFull.Signal()
    endEntry
endMonitor
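Python's threading.Condition behaves like a monitor condition variable with MESA-style semantics, which is why the waits below sit inside while loops, just as in the MESA version of the entry routines. A sketch of the whole BoundedBuffer monitor (illustrative):

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: one mutex plus two condition variables."""
    def __init__(self, n):
        self.buf = [None] * n
        self.n = n
        self.next_in = self.next_out = self.cnt_full = 0
        self.lock = threading.Lock()                  # the monitor mutex
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def deposit(self, c):
        with self.lock:                               # enter the monitor
            while self.cnt_full == self.n:            # MESA: recheck after waking
                self.not_full.wait()
            self.buf[self.next_in] = c
            self.next_in = (self.next_in + 1) % self.n
            self.cnt_full += 1
            self.not_empty.notify()                   # notEmpty.Signal()

    def remove(self):
        with self.lock:
            while self.cnt_full == 0:
                self.not_empty.wait()
            c = self.buf[self.next_out]
            self.next_out = (self.next_out + 1) % self.n
            self.cnt_full -= 1
            self.not_full.notify()                    # notFull.Signal()
            return c

bb = BoundedBuffer(4)
out = []
prod = threading.Thread(target=lambda: [bb.deposit(ch) for ch in "monitor"])
cons = threading.Thread(target=lambda: [out.append(bb.remove()) for _ in "monitor"])
prod.start(); cons.start(); prod.join(); cons.join()
```

Because the signal is only a hint, the woken thread re-evaluates its guard before proceeding, exactly the MESA discipline.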
56 “Hoare Semantics” What happens when a Signal is performed? The signaling thread (A) is suspended. The signaled thread (B) wakes up and runs immediately. B can assume the condition is now true/satisfied From the original Hoare Paper: “No other thread can intervene [and enter the monitor] between the signal and the continuation of exactly one waiting thread.” “If more than one thread is waiting on a condition, we postulate that the signal operation will reactivate the longest waiting thread. This gives a simple neutral queuing discipline which ensures that every waiting thread will eventually get its turn.”
57 Implementing Hoare Semantics
Thread A holds the monitor lock
Thread A signals a condition that thread B was waiting on
Thread B is moved back to the ready queue?
B should run immediately, so thread A must be suspended... the monitor lock must be passed from A to B
When B finishes it releases the monitor lock
Thread A must re-acquire the lock
Perhaps A is blocked, waiting to re-acquire the lock
58 Implementing Hoare Semantics Problem: Possession of the monitor lock must be passed directly from A to B and then back to A Simply ending monitor entry methods with monLock.Unlock() … will not work A’s request for the monitor lock must be expedited somehow
59 Implementing Hoare Semantics Implementation Ideas: Consider a thread like A that hands off the mutex lock to a signaled thread, to be “urgent”. Thread C is not “urgent” Consider two wait lists associated with each MonitorLock UrgentlyWaitingThreads NonurgentlyWaitingThreads Want to wake up urgent threads first, if any
60 Brinch-Hansen Semantics Hoare Semantics On signal, allow signaled process to run Upon its exit from the monitor, signaler process continues. Brinch-Hansen Semantics Signaler must immediately exit following any invocation of signal Restricts the kind of solutions that can be written … but monitor implementation is easier
61 Reentrant code
A function/method is said to be reentrant if it may be invoked again before a previous invocation has returned, and will still work correctly
Recursive routines are reentrant
In the context of concurrent programming, a reentrant function can be executed simultaneously by more than one thread, with no ill effects
62 Reentrant Code
Consider this function...

integer seed;
integer rand() {
    seed = seed * 8151 + 3423;
    return seed;
}

What if it is executed by different threads concurrently?
63 Reentrant Code
Consider this function...

integer seed;
integer rand() {
    seed = seed * 8151 + 3423;
    return seed;
}

What if it is executed by different threads concurrently?
The results may be “random”.
This routine is not reentrant!
64 When is code reentrant? Some variables are “local” -- to the function/method/routine “global” -- sometimes called “static” Access to local variables? A new stack frame is created for each invocation Each thread has its own stack What about access to global variables? Must use synchronization!
65 Making this function threadsafe

integer seed;
semaphore m = 1;

integer rand() {
    integer tmp;
    down(m);
    seed = seed * 8151 + 3423;
    tmp = seed;
    up(m);
    return tmp;
}
66 Making this function reentrant

integer rand( integer *seed ) {
    *seed = *seed * 8151 + 3423;
    return *seed;
}
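Both variants can be tried in Python (an illustrative sketch; the modulus is my addition to keep the seed bounded, and is not in the slides):

```python
import threading

SEED0 = 42
seed = SEED0
m = threading.Lock()                    # plays the role of the slide's semaphore m

def rand_threadsafe():
    # Shared seed, protected by a lock: safe, but still one shared stream.
    global seed
    with m:
        seed = (seed * 8151 + 3423) % 2**31   # modulus added to bound the value
        return seed

def rand_reentrant(state):
    # No shared state at all: the caller owns 'state', so any number of
    # threads can call this at once, each with its own state.
    state[0] = (state[0] * 8151 + 3423) % 2**31
    return state[0]

# Two callers with private state get identical, reproducible streams.
mine, yours = [SEED0], [SEED0]
a = [rand_reentrant(mine) for _ in range(5)]
b = [rand_reentrant(yours) for _ in range(5)]
```

The thread-safe version serializes access; the reentrant version needs no synchronization because there is nothing shared.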
67 Message Passing
Interprocess communication that needs no shared memory, so it also works across machine boundaries
Message passing can be used for synchronization or general communication
Processes use send and receive primitives:
receive can block (like waiting on a semaphore)
send unblocks a process blocked on receive (just as a signal unblocks a waiting process)
68 Producer-consumer with message passing The basic idea: After producing, the producer sends the data to consumer in a message The system buffers messages The producer can out-run the consumer The messages will be kept in order But how does the producer avoid overflowing the buffer? After consuming the data, the consumer sends back an “empty” message A fixed number of messages (N=100) The messages circulate back and forth.
69 Producer-consumer with message passing

thread producer
    var c, em: char
    while true
        // Produce char c...
        Receive(consumer, &em)    -- Wait for an empty msg
        Send(consumer, &c)        -- Send c to consumer
    endWhile
end
70 Producer-consumer with message passing

const N = 100                -- Size of message buffer
var em: char
for i = 1 to N               -- Get things started by
    Send(producer, &em)      -- sending N empty messages
endFor

thread consumer
    var c, em: char
    while true
        Receive(producer, &c)    -- Wait for a char
        Send(producer, &em)      -- Send empty message back
        // Consume char...
    endWhile
end
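Python's queue.Queue can stand in for the mailboxes: one queue carries data one way and a second carries the “empty” messages back, exactly as in the pseudocode above (an illustrative sketch):

```python
import queue
import threading

N = 4                                # empty messages in circulation
to_consumer = queue.Queue()          # data: producer -> consumer
to_producer = queue.Queue()          # empties: consumer -> producer

for _ in range(N):                   # get things started by sending
    to_producer.put("empty")         # N empty messages

ITEMS = list("message")
received = []

def producer():
    for c in ITEMS:
        to_producer.get()            # Receive an empty msg (blocks if none)
        to_consumer.put(c)           # Send c to the consumer

def consumer():
    for _ in ITEMS:
        c = to_consumer.get()        # Receive a char (blocks if none)
        received.append(c)           # consume it
        to_producer.put("empty")     # Send the empty message back

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
```

The N circulating empties bound how far the producer can run ahead, playing the same role as the semaphore count earlier.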
71 Design choices for message passing Option 1: Mailboxes System maintains a buffer of sent, but not yet received, messages Must specify the size of the mailbox ahead of time Sender will be blocked if the buffer is full Receiver will be blocked if the buffer is empty
72 Design choices for message passing Option 2: No buffering If Send happens first, the sending thread blocks If Receiver happens first, the receiving thread blocks Sender and receiver must Rendezvous (ie. meet) Both threads are ready for the transfer The data is copied / transmitted Both threads are then allowed to proceed
73 Barriers (a) Processes approaching a barrier (b) All processes but one blocked at barrier (c) Last process arrives; all are let through
74 Quiz What is the difference between a monitor and a semaphore? Why might you prefer one over the other? How do the wait/signal methods of a condition variable differ from the up/down methods of a semaphore? What is the difference between Hoare and Mesa semantics for condition variables? What implications does this difference have for code surrounding a wait() call?
75 Scheduling
Process state model (diagram): New → Ready → Running → Termination, with transitions Running → Blocked (wait for an event), Blocked → Ready (event occurs), and Running → Ready (preemption).
Scheduling is a responsibility of the Operating System.
76 CPU scheduling criteria CPU Utilization – how busy is the CPU? Throughput – how many jobs finished/unit time? Turnaround Time – how long from job submission to job termination? Response Time – how long (on average) does it take to get a “response” from a “stimulus”? Missed deadlines – were any deadlines missed?
77 Scheduler options Priorities May use priorities to determine who runs next amount of memory, order of arrival, etc.. Dynamic vs. Static algorithms Dynamically alter the priority of the tasks while they are in the system (possibly with feedback) Static algorithms typically assign a fixed priority when the job is initially started. Preemptive vs. Nonpreemptive Preemptive systems allow the task to be interrupted at any time so that the O.S. can take over again.
78 Scheduling policies First-Come, First Served (FIFO) Shortest Job First (non-preemeptive) Shortest Job First (with preemption) Round-Robin Scheduling Priority Scheduling Real-Time Scheduling
79 First-Come, First-Served (FIFO) Start jobs in the order they arrive (FIFO queue) Run each job until completion
80 First-Come, First-Served (FIFO)
Start jobs in the order they arrive (FIFO queue). Run each job until completion.

Process   Arrival Time   Processing Time
   1           0               3
   2           2               6
   3           4               4
   4           6               5
   5           8               2
81 First-Come, First-Served (FIFO)
Start jobs in the order they arrive (FIFO queue). Run each job until completion.
Turnaround Time: total time taken, from submission to completion.

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0               3
   2           2               6
   3           4               4
   4           6               5
   5           8               2
96 First-Come, First-Served (FIFO)
Start jobs in the order they arrive (FIFO queue). Run each job until completion.
Turnaround Time: total time taken, from submission to completion.

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0               3             0           3
   2           2               6             1           7
   3           4               4             5           9
   4           6               5             7          12
   5           8               2            10          12
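The FIFO delay and turnaround figures on this slide can be reproduced with a few lines of Python (illustrative):

```python
# Job mix from the slides: process -> (arrival time, processing time)
jobs = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}

def fifo(jobs):
    t = 0
    delay, turnaround = {}, {}
    for pid in sorted(jobs, key=lambda p: jobs[p][0]):  # arrival order
        arrive, burst = jobs[pid]
        start = max(t, arrive)
        delay[pid] = start - arrive          # time spent waiting in the queue
        t = start + burst                    # run to completion
        turnaround[pid] = t - arrive         # submission to completion
    return delay, turnaround

delay, turnaround = fifo(jobs)
```

The computed values match the table: delays 0, 1, 5, 7, 10 and turnarounds 3, 7, 9, 12, 12.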
97 Shortest Job First Select the job with the shortest (expected) running time Non-Preemptive
98 Shortest Job First
Select the job with the shortest (expected) running time. Non-preemptive.
Same job mix:

Process   Arrival Time   Processing Time
   1           0               3
   2           2               6
   3           4               4
   4           6               5
   5           8               2
112 Shortest Job First
Select the job with the shortest (expected) running time. Non-preemptive.

Process   Arrival Time   Processing Time   Delay   Turnaround Time
   1           0               3             0           3
   2           2               6             1           7
   3           4               4             7          11
   4           6               5             9          14
   5           8               2             1           3
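The SJF numbers above can likewise be computed (illustrative sketch; ties on burst length are broken arbitrarily, which does not matter for this job mix):

```python
jobs = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}

def sjf(jobs):
    t = 0
    pending = dict(jobs)
    delay, turnaround = {}, {}
    while pending:
        ready = [p for p, (a, b) in pending.items() if a <= t]
        if not ready:
            t = min(a for a, b in pending.values())   # idle until next arrival
            continue
        pid = min(ready, key=lambda p: pending[p][1]) # shortest burst first
        arrive, burst = pending.pop(pid)
        delay[pid] = t - arrive
        t += burst                                    # non-preemptive: run to end
        turnaround[pid] = t - arrive
    return delay, turnaround

delay, turnaround = sjf(jobs)
```

Note how the short job 5 jumps ahead of jobs 3 and 4 once job 2 finishes at time 9.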
113 Shortest Remaining Time
Preemptive version of SJF.
Same job mix:

Process   Arrival Time   Processing Time
   1           0               3
   2           2               6
   3           4               4
   4           6               5
   5           8               2
127
127 Shortest Remaining Time 05101520 Arrival Processing Turnaround Process Time Time Delay Time 1 0 3 0 2 2 6 7 3 4 4 0 4 6 5 9 5 8 2 0 Preemptive version of SJF
128
128 Shortest Remaining Time
Preemptive version of SJF.

Process  Arrival Time  Processing Time  Delay  Turnaround Time
   1          0              3            0          3
   2          2              6            7         13
   3          4              4            0          4
   4          6              5            9         14
   5          8              2            0          2
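The preemptive variant can be simulated one time unit at a time (again a sketch, not from the slides): at every tick, the ready job with the least work left runs, so a newly arrived short job preempts a longer one.

```python
# Shortest Remaining Time: tick-level simulation, returns turnaround per job.
def srt(jobs):
    remaining = {jid: proc for jid, arr, proc in jobs}  # work left per job
    arrival = {jid: arr for jid, arr, proc in jobs}
    finish, t = {}, 0
    while remaining:
        ready = [j for j in remaining if arrival[j] <= t]
        if not ready:                                   # idle until an arrival
            t += 1
            continue
        j = min(ready, key=lambda x: remaining[x])      # least work left wins
        remaining[j] -= 1
        t += 1
        if remaining[j] == 0:
            finish[j] = t
            del remaining[j]
    # turnaround = finish - arrival (delay is turnaround minus processing time)
    return {j: finish[j] - arrival[j] for j in finish}

jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(srt(jobs))  # turnarounds 3, 13, 4, 14, 2 for jobs 1-5, matching slide 128
```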
129
129 Round-Robin Scheduling
Goal: enable interactivity by limiting the amount of CPU a process can have at one time.
Time quantum: the amount of time the OS gives a process before intervention (the "time slice"). Typically 1 to 100 ms.
130
130 Round-Robin Scheduling
(Slides 130-156 step through the Gantt chart over t = 0 to 20 as the ready list evolves.)

Process  Arrival Time  Processing Time
   1          0              3
   2          2              6
   3          4              4
   4          6              5
   5          8              2
157
157 Round-Robin Scheduling
(Final figures; these delays correspond to a time quantum of 1.)

Process  Arrival Time  Processing Time  Delay  Turnaround Time
   1          0              3            1          4
   2          2              6           10         16
   3          4              4            9         13
   4          6              5            9         14
   5          8              2            5          7
158
158 Round-Robin Scheduling
Effectiveness of round-robin depends on the number of jobs and the size of the time quantum.
A large number of jobs means the time between schedulings of a single job increases: slow responses.
A larger time quantum also increases the time between schedulings of a single job: slow responses.
A smaller time quantum means higher processing rates, but also more overhead!
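A round-robin simulator makes the quantum trade-off concrete (a sketch, not from the slides; it assumes the common convention that jobs arriving during a slice enter the ready queue before the preempted job rejoins it):

```python
from collections import deque

# Round-robin: each dispatch runs for at most one quantum, then requeues.
def rr(jobs, quantum):
    jobs = sorted(jobs, key=lambda j: j[1])            # (id, arrival, processing)
    remaining = {jid: p for jid, a, p in jobs}
    arrival = {jid: a for jid, a, p in jobs}
    ready, finish, t, i = deque(), {}, 0, 0
    while len(finish) < len(jobs):
        while i < len(jobs) and jobs[i][1] <= t:       # admit new arrivals
            ready.append(jobs[i][0]); i += 1
        if not ready:                                  # CPU idle until next arrival
            t = jobs[i][1]; continue
        j = ready.popleft()
        run = min(quantum, remaining[j])
        t += run
        remaining[j] -= run
        while i < len(jobs) and jobs[i][1] <= t:       # arrivals during the slice
            ready.append(jobs[i][0]); i += 1           # ...enqueue before j rejoins
        if remaining[j] == 0:
            finish[j] = t
        else:
            ready.append(j)                            # preempted: back of the line
    return {j: finish[j] - arrival[j] for j in finish} # turnaround times

jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(rr(jobs, quantum=1))  # turnarounds 4, 16, 13, 14, 7, matching slide 157
```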
159
159 Scheduling in general purpose systems
160
160 Priority Scheduling
Assign a priority (number) to each process; schedule processes based on their priority. Higher-priority processes get more CPU time.
Managing priorities:
Can use "nice" to reduce your priority.
Can periodically adjust a process's priority. Prevents starvation of a lower-priority process.
Can improve performance of I/O-bound processes by basing priority on the fraction of the last quantum used.
161
161 Multi-Level Queue Scheduling
(Figure: CPU served from high-priority down to low-priority queues.)
Multiple queues, each with its own priority. Equivalently: each priority has its own ready queue.
Within each queue: round-robin scheduling.
Simplest approach: a process's priority is fixed and unchanging.
164
164 Multi-Level Feedback Queue Scheduling
Problem: fixed priorities are too restrictive; processes exhibit varying ratios of CPU to I/O time.
Dynamic priorities: priorities are altered over time, as process behavior changes!
Issue: when do you change the priority of a process, and how often?
Solution: let the amount of CPU used be an indication of how a process is to be handled.
Expired time quantum: more processing needed.
Unexpired time quantum: less processing needed.
Adjusting quantum and frequency vs. adjusting priority?
165
165 Multi-Level Feedback Queue Scheduling
(Figure: CPU served from high-priority down to low-priority queues.)
n priority levels, round-robin scheduling within a level.
Quanta increase as priority decreases.
Jobs are demoted to lower priorities if they do not complete within the current quantum.
166
166 Multi-Level Feedback Queue Scheduling
Details, details, details...
Starting priority: high priority vs. low priority?
Moving between priorities: how long should the time quantum be?
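One concrete answer to those questions (a sketch only; the level count, doubling quanta, and start-at-top policy are illustrative assumptions, not from the lecture) is to start every job at the top level and double the quantum at each level down:

```python
from collections import deque

# Multi-level feedback queue: highest non-empty level runs; using a full
# quantum demotes the job one level, where the quantum is twice as long.
class MLFQ:
    def __init__(self, levels=3, base_quantum=1):
        self.queues = [deque() for _ in range(levels)]
        self.quanta = [base_quantum * 2 ** i for i in range(levels)]

    def add(self, job, level=0):
        self.queues[level].append(job)   # new jobs start at the highest priority

    def next(self):
        # dispatch from the highest-priority non-empty queue
        for level, q in enumerate(self.queues):
            if q:
                return q.popleft(), level, self.quanta[level]
        return None

    def demote(self, job, level):
        # job used its whole quantum: push it down one level (bottom level is RR)
        self.queues[min(level + 1, len(self.queues) - 1)].append(job)
```

For example, a CPU-bound job that keeps expiring its quantum sinks to the bottom level, while an I/O-bound job that blocks early stays near the top.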
169
169 Lottery Scheduling
Scheduler gives each thread some lottery tickets.
To select the next process to run, the scheduler randomly selects a lottery number; the winning process gets to run.
Example (100 tickets outstanding):
Thread A gets 50 tickets (50% of CPU).
Thread B gets 15 tickets (15% of CPU).
Thread C gets 35 tickets (35% of CPU).
Flexible. Fair. Responsive.
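The draw itself is a few lines (a sketch, not from the slides), using the ticket counts from the example:

```python
import random

# Lottery scheduling pick: choose a winning ticket uniformly at random,
# then walk the holders to find which thread owns it.
def draw(tickets, rng=random):
    total = sum(tickets.values())
    winner = rng.randrange(total)
    for thread, n in tickets.items():
        if winner < n:
            return thread
        winner -= n

tickets = {"A": 50, "B": 15, "C": 35}
wins = {t: 0 for t in tickets}
for _ in range(10000):
    wins[draw(tickets)] += 1
# over many draws, CPU share roughly tracks ticket share (about 50% / 15% / 35%)
```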
172
172 A Brief Look at Real-Time Systems
Assume processes are relatively periodic, with a fixed amount of work per period (e.g. sensor systems or multimedia data).
Two "main" types of schedulers:
Rate-monotonic schedulers: assign a fixed, unchanging priority to each process; no dynamic adjustment of priorities; less aggressive allocation of the processor.
Earliest-deadline-first schedulers: assign dynamic priorities based upon deadlines.
173
173 A Brief Look at Real-Time Systems
Real-time systems typically involve several steps that aren't in traditional systems:
Admission control: all processes must ask for resources ahead of time; if sufficient resources exist, the job is "admitted" into the system.
Resource allocation: upon admission, the appropriate resources are reserved for the task.
Resource enforcement: carry out the resource allocations properly.
174
174 Rate Monotonic Schedulers
For preemptable, periodic processes (tasks), assigns a fixed priority to each task.
T = the period of the task; C = the amount of processing per task period.
Example: process P1 with T = 1 second and C = 1/2 second per period.
In RMS scheduling, the question to answer is: what priority should be assigned to a given task?
175
175 Rate Monotonic Schedulers
(Figure: example schedules for two periodic tasks, P1 and P2.)
176
176 Rate Monotonic Schedulers
(Figure: the two possible fixed-priority assignments for tasks P1 and P2, P1 PRI > P2 PRI versus P2 PRI > P1 PRI.)