Review of processes:
- Number of processes
- The OS switches context from A to B
- The main pattern: call fork(), and have the child process call execvp()
- Can a compiler do something bad by adding privileged instructions?
Lecture 3: Scheduling
How to develop a scheduling policy:
- What are the key assumptions?
- What metrics are important?
- What basic approaches were used in the earliest computer systems?
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
Scheduling Metrics
Performance: turnaround time
T_turnaround = T_completion − T_arrival
Since all jobs arrive at time 0 under our assumptions, T_arrival = 0 and T_turnaround = T_completion.
First In, First Out (FIFO)
FIFO works well under our assumptions. Relax assumption 1 ("Each job runs for the same amount of time") and the convoy effect appears: short jobs get stuck waiting behind one long job.
[Timeline figures (0-120) contrasting FIFO turnaround with and without a long job at the head of the queue.]
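A minimal sketch of how the numbers work out, assuming the classic illustrative workload of one 100-second job followed by two 10-second jobs; the job lengths and the fifo_avg_turnaround helper are assumptions for illustration, not from the slides:

#include <stdio.h>

/* Average turnaround time when jobs run to completion in arrival order (FIFO). */
static double fifo_avg_turnaround(const int runtimes[], int n) {
    double finish = 0.0, total = 0.0;
    for (int i = 0; i < n; i++) {
        finish += runtimes[i];   /* completion time of job i (all arrive at t=0) */
        total  += finish;        /* turnaround = completion - arrival = completion */
    }
    return total / n;
}

int main(void) {
    int convoy[] = {100, 10, 10};   /* long job first: convoy effect */
    int sorted[] = {10, 10, 100};   /* same jobs, shortest first */
    printf("FIFO (100,10,10): %.2f\n", fifo_avg_turnaround(convoy, 3));  /* 110.00 */
    printf("FIFO (10,10,100): %.2f\n", fifo_avg_turnaround(sorted, 3));  /*  50.00 */
    return 0;
}

Reordering the same three jobs drops the average turnaround from 110 to 50, which is exactly the gap the convoy effect describes.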
Shortest Job First (SJF)
If all jobs arrive at the same time, SJF is optimal for average turnaround time. Now relax assumption 2 ("All jobs arrive at the same time"): if B and C arrive just after the long job A has started, non-preemptive SJF still makes them wait for A to finish.
[Timeline figures (0-120): SJF with simultaneous arrivals vs. B and C arriving shortly after A starts.]
Shortest Time-to-Completion First (STCF)
STCF is the preemptive version of SJF, also known as PSJF; it relaxes assumption 3 ("Once started, each job runs to completion"). When B and C arrive, the scheduler preempts A, runs the shorter jobs to completion, and then resumes A.
[Timeline figure (0-120): A is preempted when B and C arrive and resumes after they finish.]
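A small simulation sketch of STCF: at every time unit, run the arrived job with the least remaining time. The job struct and the specific workload (A of length 100 at t=0, B and C of length 10 arriving at t=10) are illustrative assumptions:

#include <stdio.h>

struct job { const char *name; int arrival, runtime, remaining, completion; };

int main(void) {
    struct job jobs[] = {
        {"A",  0, 100, 100, -1},
        {"B", 10,  10,  10, -1},
        {"C", 10,  10,  10, -1},
    };
    int n = 3, done = 0, t = 0;

    while (done < n) {
        /* STCF: among arrived, unfinished jobs, pick the least remaining time. */
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (jobs[i].arrival <= t && jobs[i].remaining > 0 &&
                (pick < 0 || jobs[i].remaining < jobs[pick].remaining))
                pick = i;
        if (pick < 0) { t++; continue; }   /* CPU idle: nothing has arrived yet */
        jobs[pick].remaining--;            /* run the chosen job for one time unit */
        t++;
        if (jobs[pick].remaining == 0) { jobs[pick].completion = t; done++; }
    }

    double total = 0;
    for (int i = 0; i < n; i++)
        total += jobs[i].completion - jobs[i].arrival;   /* per-job turnaround */
    printf("average turnaround = %.2f\n", total / n);    /* 50.00 for this workload */
    return 0;
}

With this workload, B and C preempt A at t=10 and finish at 20 and 30, so the average turnaround is 50; the non-preemptive SJF schedule would be well over 100.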
Scheduling Metrics
Performance: turnaround time
T_turnaround = T_completion − T_arrival (with T_arrival = 0, T_turnaround = T_completion)
Performance: response time
T_response = T_firstrun − T_arrival
Turnaround Time or Response Time?
FIFO, SJF, and STCF are good for turnaround time but poor for response time. Round robin slices the CPU among all ready jobs, which improves response time at the cost of turnaround time.
[Timeline figures contrasting a run-to-completion schedule with a round-robin schedule.]
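A rough sketch of the response-time difference, assuming three jobs of length 5 arriving at t=0 and a round-robin time slice of 1 (the workload and quantum are illustrative assumptions):

#include <stdio.h>

int main(void) {
    int n = 3, runtime = 5;

    /* Run-to-completion (e.g., SJF order with equal jobs): job i first runs at i*runtime. */
    double rtc_total = 0;
    for (int i = 0; i < n; i++)
        rtc_total += i * runtime;          /* response = firstrun - arrival(=0) */

    /* Round robin with a time slice of 1: job i first runs at time i. */
    double rr_total = 0;
    for (int i = 0; i < n; i++)
        rr_total += i;

    printf("avg response, run-to-completion: %.2f\n", rtc_total / n);  /* 5.00 */
    printf("avg response, round robin:       %.2f\n", rr_total / n);   /* 1.00 */
    return 0;
}

Round robin cuts the average response time from 5 to 1 here, but every job now finishes later, so turnaround time gets worse.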
Conflicting Criteria
Minimizing response time requires more context switches when there are many processes, which incurs more scheduling overhead, decreases system throughput, and increases turnaround time. The right scheduling algorithm depends on the nature of the system (batch vs. interactive). Designing a scheduler that is both generic and efficient is difficult.
Incorporating I/O
If the CPU sits idle while a job waits on the disk, resources are used poorly. Treating each CPU burst as a separate job lets another job (B) run while A's I/O is in flight, so overlap allows better use of both the CPU and the disk.
[Timeline figures (CPU and disk rows): A alone, with the CPU idle during its I/O, vs. A and B overlapped.]
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
Multi-Level Feedback Queue (MLFQ)
Goal: optimize turnaround time without a priori knowledge of job lengths, and optimize response time for interactive users.
Rule 1: If Priority(A) > Priority(B), A runs (B doesn't).
Rule 2: If Priority(A) = Priority(B), A & B run in RR.
[Figure: priority queues Q6 (highest) down to Q1 (lowest) holding jobs A, B, C, and D.]
How to Change Priority
Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.
Example
[Timeline figure (Q2/Q1/Q0): a long-running job A uses up its time slices and moves down from Q2 to Q1 to Q0; a later-arriving job B enters at Q2 and runs before A continues.]
Example with I/O
[Timeline figure (Q2/Q1/Q0): interactive job B keeps giving up the CPU before its slice expires and stays at high priority while long-running job A sits below it.]
Problems:
- Starvation
- A program can game the scheduler
- A program may change its behavior over time
Priority Boost
Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
[Timeline figures (Q2/Q1/Q0) comparing a long-running job A without and with the periodic priority boost.]
Gaming the Scheduler
[Timeline figures (Q2/Q1/Q0): under Rules 4a/4b, a job that yields just before its time slice expires keeps its high priority and monopolizes the CPU; with the better accounting that follows, A and B share the CPU.]
Better Accounting
Old Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
Old Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.
New Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
Tuning MLFQ and Other Issues
How to parameterize MLFQ?
- The system administrator configures it. Defaults exist: on Solaris there are 60 queues, with time slices from 20 milliseconds (highest priority) to a few hundred milliseconds (lowest), and priorities are boosted around every 1 second or so.
- Users provide hints, for example with the command-line utility nice.
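As a concrete but purely illustrative picture of what such parameters might look like in code, the queue count, time slices, and boost period below are loosely inspired by the Solaris numbers above; they are assumptions, not any real system's table:

/* Illustrative MLFQ parameters (values are assumptions, not Solaris's actual tables). */
#define NUM_QUEUES 60

struct mlfq_params {
    int num_queues;                  /* number of priority levels */
    int time_slice_ms[NUM_QUEUES];   /* per-level time slice */
    int boost_period_ms;             /* Rule 5: boost everyone this often */
};

static void init_params(struct mlfq_params *p) {
    p->num_queues = NUM_QUEUES;
    p->boost_period_ms = 1000;       /* roughly 1 second priority boost */
    for (int i = 0; i < NUM_QUEUES; i++) {
        /* Highest priority (i = NUM_QUEUES-1) gets ~20 ms; lower levels get longer slices. */
        p->time_slice_ms[i] = 20 + (NUM_QUEUES - 1 - i) * 5;   /* 20 ms up to ~315 ms */
    }
}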
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
MLFQ Rules
Rule 1: If Priority(A) > Priority(B), A runs (B doesn't).
Rule 2: If Priority(A) = Priority(B), A & B run in RR.
Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
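A compact sketch of these rules as a tick-based simulation; the three-level setup, the 10-tick allotment, the 100-tick boost period, and the two jobs are all illustrative assumptions:

#include <stdio.h>

#define LEVELS 3           /* Q2 (highest) .. Q0 (lowest) */
#define ALLOTMENT 10       /* Rule 4: ticks a job may use at one level */
#define BOOST_PERIOD 100   /* Rule 5: boost everyone every S ticks */

struct job { const char *name; int remaining, level, used; };

int main(void) {
    /* Rule 3: new jobs start at the topmost queue (LEVELS - 1). */
    struct job jobs[] = { {"A", 120, LEVELS - 1, 0}, {"B", 30, LEVELS - 1, 0} };
    int n = 2, rr = 0;

    for (int t = 0; ; t++) {
        /* Rule 5: periodically move every job back to the topmost queue. */
        if (t > 0 && t % BOOST_PERIOD == 0)
            for (int i = 0; i < n; i++) { jobs[i].level = LEVELS - 1; jobs[i].used = 0; }

        /* Rules 1 & 2: find the highest non-empty level, round-robin within it. */
        int best = -1;
        for (int i = 0; i < n; i++)
            if (jobs[i].remaining > 0 && (best < 0 || jobs[i].level > jobs[best].level))
                best = i;
        if (best < 0) break;                       /* all jobs finished */
        int pick = -1;
        for (int k = 0; k < n; k++) {              /* simple RR among same-level jobs */
            int i = (rr + k) % n;
            if (jobs[i].remaining > 0 && jobs[i].level == jobs[best].level) { pick = i; break; }
        }
        rr = (pick + 1) % n;

        jobs[pick].remaining--;                    /* run the chosen job for one tick */
        jobs[pick].used++;
        if (jobs[pick].remaining == 0)
            printf("%s finishes at t=%d (level %d)\n", jobs[pick].name, t + 1, jobs[pick].level);

        /* Rule 4: once the allotment at this level is used up, demote the job. */
        if (jobs[pick].used >= ALLOTMENT && jobs[pick].level > 0) {
            jobs[pick].level--;
            jobs[pick].used = 0;
        }
    }
    return 0;
}

Because demotion counts total allotment used at a level rather than whole time slices, a job cannot stay on top simply by yielding just before its slice expires.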
Scheduling Metrics
Performance: turnaround time
T_turnaround = T_completion − T_arrival (with T_arrival = 0, this equals T_completion)
Performance: response time
T_response = T_firstrun − T_arrival
Also: CPU utilization, throughput, fairness
Proportional-Share (Fair-Share) Scheduling
Each job obtains a certain percentage of CPU time. Lottery scheduling uses tickets to represent the share of a resource that a process should receive: if A holds 75 tickets and B holds 25, then A gets 75% and B gets 25% of the CPU (probabilistically). Higher priority => more tickets.
Example draws: winning tickets 63 85 70 39 76 17 29 41 36 39 10 99 68 83 63 62 43 0 49 49 give the schedule A B A A B A A A A A A B A B A A A A A A.
Lottery Code
// counter: used to track whether we've found the winner yet
int counter = 0;
// winner: a random number in [0, totaltickets)
int winner = getrandom(0, totaltickets);
// current: walk the job list to find the winner
node_t *current = head;
while (current) {
    counter += current->tickets;
    if (counter > winner)
        break;                 // found the winner
    current = current->next;
}
// current is the winner: schedule it
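For context, a self-contained sketch of how the surrounding pieces might look. The node_t list, the simple getrandom() stand-in built on rand(), and the 75/25 ticket split are assumptions for illustration, not the actual course code:

#include <stdio.h>
#include <stdlib.h>

typedef struct node { const char *name; int tickets; struct node *next; } node_t;

/* Stand-in for getrandom(): roughly uniform integer in [low, high). */
static int getrandom(int low, int high) {
    return low + rand() % (high - low);
}

int main(void) {
    node_t b = {"B", 25, NULL};
    node_t a = {"A", 75, &b};          /* head -> A (75 tickets) -> B (25 tickets) */
    node_t *head = &a;
    int totaltickets = 100;

    int wins_a = 0, wins_b = 0;
    for (int i = 0; i < 10000; i++) {
        int counter = 0;
        int winner = getrandom(0, totaltickets);
        node_t *current = head;
        while (current) {
            counter += current->tickets;
            if (counter > winner) break;
            current = current->next;
        }
        if (current == &a) wins_a++; else wins_b++;
    }
    /* Expect roughly a 75/25 split over many draws. */
    printf("A won %d times, B won %d times\n", wins_a, wins_b);
    return 0;
}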
Lottery Fairness Study
Ticket Currency
User A: 100 (global currency)
  -> 500 (A's currency) to A1 -> 50 (global currency)
  -> 500 (A's currency) to A2 -> 50 (global currency)
User B: 100 (global currency)
  -> 10 (B's currency) to B1 -> 100 (global currency)
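A tiny sketch of the conversion, assuming each user's local tickets are simply scaled into that user's global allocation (the to_global helper is an illustrative name):

#include <stdio.h>

/* Convert a job's tickets in its user's local currency into global tickets. */
static double to_global(int user_global, int job_local, int user_local_total) {
    return (double)user_global * job_local / user_local_total;
}

int main(void) {
    /* User A holds 100 global tickets and hands out 1000 in A's own currency. */
    printf("A1: %.0f global\n", to_global(100, 500, 1000));  /* 50 */
    printf("A2: %.0f global\n", to_global(100, 500, 1000));  /* 50 */
    /* User B holds 100 global tickets and gives all 10 of B's currency to B1. */
    printf("B1: %.0f global\n", to_global(100, 10, 10));     /* 100 */
    return 0;
}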
More on Lottery Scheduling
- Ticket transfer
- Ticket inflation
- Compensation tickets
How to assign tickets? Why not be deterministic?
Stride Scheduling: A Deterministic Fair-Share Scheduler
Stride scheduling is deterministic, but it requires global state. What happens if a new job enters in the middle of execution?
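A minimal sketch of stride scheduling: each job's stride is a large constant divided by its tickets, and the scheduler always runs the job with the smallest pass value, then advances that pass by the stride. The ticket counts, and the choice to give a newly arriving job the current minimum pass value (one common answer to the question above), are illustrative assumptions:

#include <stdio.h>

#define BIG 10000   /* stride = BIG / tickets */

struct sjob { const char *name; int tickets, stride; long pass; };

int main(void) {
    struct sjob jobs[] = {
        {"A", 100, BIG / 100, 0},   /* stride 100 */
        {"B",  50, BIG / 50,  0},   /* stride 200 */
        {"C", 250, BIG / 250, 0},   /* stride 40  */
    };
    int n = 3;

    for (int step = 0; step < 10; step++) {
        /* Pick the job with the minimum pass value (this is the global state). */
        int pick = 0;
        for (int i = 1; i < n; i++)
            if (jobs[i].pass < jobs[pick].pass) pick = i;
        printf("%s ", jobs[pick].name);
        jobs[pick].pass += jobs[pick].stride;   /* advance its pass by its stride */
    }
    printf("\n");   /* C runs most often, in proportion to its tickets */

    /* A job entering mid-run can be given the current minimum pass value,
     * so it competes fairly without monopolizing the CPU. */
    return 0;
}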
Summary: Scheduling
- Workload assumptions
- Metrics
- MLFQ
- Lottery scheduling and stride scheduling
Next
- Work on PA0
- Reading: chapters 12-16
PA0
PA0, Parts 0-1
0. Step 2: `cs-status | head -1 | sed 's/://g'`
   Step 6: cs-console, then (control-@) OR (control-spacebar)
1. GAS assembly skeleton:
   .section .data
   .section .text
   .globl zfunction
   zfunction:
       pushl %ebp
       movl %esp, %ebp
       ....
       leave
       ret
   Read http://en.wikibooks.org/wiki/X86_Assembly/GAS_Syntax
   In C, we count from 0.
PA0, Parts 2, 3, and 5
2. Try "man end" and see what you can get. Use "kprintf" for output.
3. Read carefully: "Print the address of the top of the run-time stack for whichever process you are currently in, right before and right after you get into the printos() function call." You can use in-line assembly; use ebp and esp.
5. syscallsummary_start() should clear all numbers; syscallsummary_stop() should keep all numbers.
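For part 3, a hedged sketch of how one might read esp and ebp with GCC-style inline assembly on 32-bit x86. The function name, the kprintf prototype, and the format specifiers are assumptions; check them against the PA0 handout and the kernel headers:

/* Illustrative only (32-bit x86, GCC inline assembly).
 * kprintf's real declaration comes from the kernel headers; the prototype
 * below is an assumption so the example is self-contained. */
extern void kprintf(const char *fmt, ...);

void print_stack_pointers(void) {
    unsigned int esp_val, ebp_val;
    asm volatile("movl %%esp, %0" : "=r"(esp_val));   /* stack pointer */
    asm volatile("movl %%ebp, %0" : "=r"(ebp_val));   /* frame (base) pointer */
    kprintf("esp = 0x%08x, ebp = 0x%08x\n", esp_val, ebp_val);
}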
Others
- https://vcl.ncsu.edu/help/files-data/where-save-my-files
- Know how to use VirtualBox? Feel free to share.