Outline: Announcements; Process Scheduling – continued


Outline: Announcements; Process Scheduling – continued (preemptive scheduling algorithms: SJN, priority scheduling, round robin, multilevel queues). Please pick up your quiz if you have not done so, and pick up a copy of Homework #2.

Announcements: About the first quiz. If the first quiz is not the worst among all of your quizzes, it will replace the worst one you have; if it is the worst, it will be ignored in grading.

Announcements – cont. A few things about the first lab: how to check whether a directory is valid or not (using opendir and closedir, or using access()). Note that your program should accept both bare command names and full-path commands: use strchr to check whether '/' occurs in my_argv[0]; if yes, call execv(my_argv[0], my_argv); if no, you need to search through the directories.
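
A minimal sketch of that dispatch logic in C, assuming my_argv has already been built and that the directories to search come from the PATH environment variable (the exact search strategy for the lab is up to you); in the shell this would run in the child after fork():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Run the command in my_argv[0]: use it directly if it contains '/',
   otherwise try each directory in PATH until an execv() succeeds. */
static void run_command(char *my_argv[]) {
    if (strchr(my_argv[0], '/') != NULL) {
        execv(my_argv[0], my_argv);              /* full or relative path given */
    } else {
        char *path = getenv("PATH");
        char *copy = strdup(path ? path : "");
        for (char *dir = strtok(copy, ":"); dir != NULL; dir = strtok(NULL, ":")) {
            char full[4096];
            snprintf(full, sizeof(full), "%s/%s", dir, my_argv[0]);
            if (access(full, X_OK) == 0)
                execv(full, my_argv);            /* returns only on failure */
        }
        free(copy);
    }
    perror("execv");                             /* reached only if nothing ran */
}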

Scheduling - review: The scheduling mechanism is the part of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. The scheduler chooses one of the ready threads to use the CPU when it is available. The scheduling policy determines when it is time for a thread to be removed from the CPU and which ready thread should be allocated the CPU next.

Process Scheduler - review [figure: a new process/job enters the ready list; the scheduler dispatches from the ready list to the CPU; the running process may be preempted or voluntarily yield back to the ready list, request a resource from the resource manager (allocate/request), or finish (done); states shown are "Ready", "Running", and "Blocked"].

Voluntary CPU Sharing: Each process voluntarily shares the CPU by calling the scheduler periodically. This is the simplest approach; it requires a yield instruction to allow the running process to release the CPU.
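
As a user-level illustration of the same idea (not the in-kernel yield instruction the slide describes), POSIX provides sched_yield(), which lets the caller voluntarily give up the processor:

#include <sched.h>
#include <stdio.h>

int main(void) {
    for (int i = 0; i < 5; i++) {
        printf("step %d\n", i);
        sched_yield();   /* voluntarily release the CPU; the scheduler may
                            run another ready thread before returning here */
    }
    return 0;
}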

Involuntary CPU Sharing: Periodic involuntary interruption, through an interrupt from an interval timer device, which generates an interrupt whenever the timer expires. The scheduler is then called in the interrupt handler. A scheduler that uses involuntary CPU sharing is called a preemptive scheduler.

Programmable Interval Timer [figure].

Scheduler as CPU Resource Manager [figure: processes request units of time on a time-multiplexed CPU; a ready-to-run process is dispatched from the ready list and later releases the CPU].

The Scheduler Organization [figure: a ready process arriving from other states is placed on the ready list by the enqueuer; the dispatcher selects a process descriptor from the ready list, and the context switcher switches the CPU to the chosen running process].

Dispatcher: The dispatcher module gives control of the CPU to the process selected by the scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program.

Diagram of Process State [figure].

CPU Scheduler: Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU scheduling decisions may take place when a process: 1. switches from running to waiting; 2. switches from running to ready; 3. switches from waiting/new to ready; 4. terminates.

CPU Scheduler – cont.: Non-preemptive and preemptive scheduling. Scheduling under 1 and 4 only is non-preemptive: a process runs for as long as it likes; in other words, non-preemptive scheduling algorithms allow a process/thread to run to "completion" once it has been allocated the processor. All other scheduling is preemptive: the CPU may be preempted before a process finishes its current CPU burst.

Working Process Model and Metrics: Let P be a set of processes p0, p1, ..., p(n-1), and let S(pi) be the state of pi. t(pi), the service time, is the amount of time pi needs to be in the running state before it is completed. W(pi), the wait time, is the time pi spends in the ready state before its first transition to the running state. TTRnd(pi), the turnaround time, is the amount of time between the moment pi first enters the ready state and the moment the process exits the running state for the last time.
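
For concreteness, a sketch of how these per-process metrics might be recorded in a C simulation (the struct and field names are illustrative, not from the slides):

/* Per-process bookkeeping for the metrics defined above. */
typedef struct {
    double service;      /* t(pi): CPU time needed to complete            */
    double wait;         /* W(pi): time in ready before the first run     */
    double turnaround;   /* TTRnd(pi): enter ready -> leave running last  */
} proc_metrics;

/* Average turnaround time over n processes. */
double avg_turnaround(const proc_metrics p[], int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += p[i].turnaround;
    return n > 0 ? sum / n : 0.0;
}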

Alternating Sequence of CPU and I/O Bursts [figure].

Scheduling Criteria: CPU utilization – keep the CPU as busy as possible. Throughput – number of processes that complete their execution per time unit. Turnaround time – amount of time to execute a particular process. Waiting time – amount of time a process has been waiting in the ready queue. Response time – amount of time from when a request was submitted until the first response is produced, not the output (for time-sharing environments).

First-Come-First-Served: Assigns priority to processes in the order in which they request the processor. [Gantt chart for the running example, p0 through p4, scheduled in arrival order]

Predicting Wait Time in FCFS: In FCFS, when a process arrives, everything already in the ready list will be processed before this job. Let m be the service rate and L be the ready-list length; then Wavg(p) = L*(1/m) + 0.5*(1/m) = L/m + 1/(2m). Compare the predicted wait with the actual wait in the earlier examples.
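
That estimate as a small helper function (a sketch; the names mirror the slide's L and m):

/* Predicted FCFS wait: L jobs already in the ready list, each taking 1/m on
   average, plus roughly half a service time for the job currently running. */
double fcfs_predicted_wait(double L, double m) {
    return L / m + 1.0 / (2.0 * m);
}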

Shortest-Job-Next Scheduling: Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. SJN is optimal: it gives the minimum average waiting time for a given set of processes.
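
A sketch of the selection step in C, choosing among ready processes by predicted next burst (the arrays are illustrative):

/* Return the index of the ready process with the shortest predicted next
   CPU burst, or -1 if nothing is ready. */
int sjn_pick(const double next_burst[], const int is_ready[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (is_ready[i] && (best < 0 || next_burst[i] < next_burst[best]))
            best = i;
    return best;
}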

Nonpreemptive SJN [Gantt chart for the running example: p4 completes at 75, p1 at 200, p3 at 450, p0 at 800, p2 at 1275]

Determining Length of Next CPU Burst: We can only estimate the length. This can be done by using the lengths of previous CPU bursts, via exponential averaging.

Examples of Exponential Averaging: With τ(n+1) = α·t(n) + (1 − α)·τ(n): if α = 0, then τ(n+1) = τ(n) and recent history does not count; if α = 1, then τ(n+1) = t(n) and only the actual last CPU burst counts. If we expand the formula, we get τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0). Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
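
The estimator itself is a one-liner; a sketch in C, where alpha is the weighting parameter and tau_prev the previous estimate (names are illustrative):

/* Exponential averaging: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n). */
double next_burst_estimate(double alpha, double t_actual, double tau_prev) {
    return alpha * t_actual + (1.0 - alpha) * tau_prev;
}

A common choice in textbook examples is alpha = 1/2, which weights recent and past history equally.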

Priority Scheduling: In priority scheduling, processes/threads are allocated the CPU on the basis of an externally assigned priority. A commonly used convention is that lower numbers have higher priority. SJN is priority scheduling where the priority is the predicted next CPU burst time; FCFS is priority scheduling where the priority is the arrival time.

Nonpreemptive Priority Scheduling [figure].

Priority Scheduling – cont.: The starvation problem: low-priority processes may never execute. The solution is aging: as time progresses, increase the priority of waiting processes.
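
A sketch of aging in C, assuming the slide's convention that smaller numbers mean higher priority (the arrays are illustrative):

/* Periodically boost every waiting process one step toward higher priority
   so that low-priority processes cannot starve. */
void age_ready_processes(int priority[], const int is_waiting[], int n) {
    for (int i = 0; i < n; i++)
        if (is_waiting[i] && priority[i] > 0)
            priority[i]--;
}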

Deadline Scheduling: Allocates service by deadline; may not be feasible.
i   t(pi)   Deadline
0   350     575
1   125     550
2   475     1050
3   250     (none)
4   75      200
[Gantt chart of one schedule for this example]

Real-Time Scheduling: Hard real-time systems – required to complete a critical task within a guaranteed amount of time. Soft real-time computing – requires that critical processes receive priority over less fortunate ones.

Preemptive Schedulers [figure: new process → ready list → scheduler → CPU, with preemption or voluntary yield back to the ready list, and Done on exit]: The highest-priority process is guaranteed to be running at all times, or at least at the beginning of a time slice. This is the dominant form of contemporary scheduling, but it is complex to build and analyze.

Preemptive Shortest Job Next: Also called shortest-remaining-job-next. When a new process arrives, its next CPU burst is compared to the remaining time of the running process; if the new arrival's burst is shorter, it preempts the CPU from the currently running process.
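
The preemption test reduces to a single comparison; a sketch (names are illustrative):

/* A new arrival preempts the running process only if its next CPU burst is
   shorter than the running process's remaining time. */
int srtf_should_preempt(double new_burst, double running_remaining) {
    return new_burst < running_remaining;
}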

Example of Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
Schedule (Gantt): P1 0-2, P2 2-4, P3 4-5, P2 5-7, P4 7-11, P1 11-16.
Average time spent in the ready queue = (9 + 1 + 0 + 2)/4 = 3

Comparison of Non-Preemptive and Preemptive SJF
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
SJN (non-preemptive) schedule (Gantt): P1 0-7, P3 7-8, P2 8-12, P4 12-16.
Average time spent in the ready queue = (0 + 6 + 3 + 7)/4 = 4

Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
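
A self-contained C sketch of round robin for the running example used on the following slides (service times 350, 125, 475, 250, 75; quantum 50; all processes ready at time 0; context-switch cost ignored); running it reproduces the turnaround times worked out there:

#include <stdio.h>

#define N 5

int main(void) {
    int remaining[N] = {350, 125, 475, 250, 75};   /* t(pi) for p0..p4 */
    int finish[N] = {0};
    int queue[N], head = 0, tail = 0, count = 0;   /* circular ready queue */
    int time = 0, TQ = 50;

    for (int i = 0; i < N; i++) {                  /* all ready at time 0 */
        queue[tail] = i; tail = (tail + 1) % N; count++;
    }
    while (count > 0) {
        int p = queue[head]; head = (head + 1) % N; count--;
        int slice = remaining[p] < TQ ? remaining[p] : TQ;
        time += slice;
        remaining[p] -= slice;
        if (remaining[p] == 0) {
            finish[p] = time;                      /* TTRnd, since arrival is 0 */
        } else {
            queue[tail] = p; tail = (tail + 1) % N; count++;
        }
    }
    for (int i = 0; i < N; i++)
        printf("TTRnd(p%d) = %d\n", i, finish[i]);
    return 0;
}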

Round Robin (TQ=50): Running example with service times t(p0) = 350, t(p1) = 125, t(p2) = 475, t(p3) = 250, t(p4) = 75, all ready at time 0. [Gantt chart built up step by step over several slides] With a 50-unit quantum the CPU rotates p0, p1, p2, p3, p4, p0, p1, ..., so W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200. Processes drop out of the rotation as they finish: TTRnd(p4) = 475, TTRnd(p1) = 550, TTRnd(p3) = 950, TTRnd(p0) = 1100, TTRnd(p2) = 1275.

Round Robin (TQ=50) – cont.: Wavg = (0+50+100+150+200)/5 = 500/5 = 100; TTRnd_avg = (1100+550+1275+950+475)/5 = 4350/5 = 870. Round robin is equitable, is the most widely used scheduling policy, and fits naturally with an interval timer.

Round Robin – cont.: Performance: if q is large, RR behaves like FIFO; if q is small, the quantum must still be large with respect to the context-switch time, otherwise the overhead is too high.

Turnaround Time Varies with the Time Quantum [figure]

How a Smaller Time Quantum Increases Context Switches [figure]

Round-Robin Time Quantum [figure]

Process/Thread Context [figure: general registers R0 ... Rn, status registers, PC, IR, the control unit, and the ALU/functional unit with left operand, right operand, and result].

Context Switching - review [figure: the CPU state is saved into the old thread's descriptor and loaded from the new thread's descriptor].

Cost of Context Switching: Suppose there are n general registers and m control registers, each register needs b store operations, and each store operation requires K time units; then saving a context costs (n + m) × b × K time units. Note that at least two pairs of context-switch operations (out of one process and into another, by way of the scheduler) occur each time application programs are multiplexed.
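
That cost model as a function (a sketch; parameter names mirror the slide):

/* Time to save one context: (n + m) registers, b store operations per
   register, K time units per store operation. */
double context_save_cost(int n, int m, int b, double K) {
    return (double)(n + m) * b * K;
}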

Round Robin (TQ=50) – cont.: Overhead must be considered. For the same example with context-switch overhead included: W(p0) = 0, W(p1) = 60, W(p2) = 120, W(p3) = 180, W(p4) = 240; Wavg = (0+60+120+180+240)/5 = 600/5 = 120. TTRnd(p4) = 565, TTRnd(p1) = 660, TTRnd(p3) = 1140, TTRnd(p0) = 1320, TTRnd(p2) = 1535; TTRnd_avg = (1320+660+1535+1140+565)/5 = 5220/5 = 1044.

Multi-Level Queues [figure: new processes enter one of several ready lists (Ready List0 through Ready List3) feeding the scheduler and CPU, with preemption or voluntary yield]: All processes at level i run before any process at a lower level j. Within a level, use another policy, e.g. RR.

Multilevel Queues: The ready queue is partitioned into separate queues, e.g. foreground (interactive) and background (batch). Each queue has its own scheduling algorithm: foreground – RR; background – FCFS.

Multilevel Queues – cont.: Scheduling must also be done between the queues. Fixed-priority scheduling (i.e., serve everything from foreground, then from background) carries the possibility of starvation. Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to foreground in RR and 20% to background in FCFS.
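
A sketch of the fixed-priority case in C: always serve the highest-priority non-empty queue (lower index = higher priority, e.g. 0 = foreground; names are illustrative):

/* Return the index of the highest-priority non-empty queue, or -1 if all
   queues are empty. */
int pick_queue(const int queue_len[], int nqueues) {
    for (int q = 0; q < nqueues; q++)
        if (queue_len[q] > 0)
            return q;
    return -1;
}

Because queue 0 is always drained first, processes in lower queues can starve, which is exactly the problem the time-slice split avoids.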

Multilevel Queue Scheduling [figure].

Multilevel Feedback Queue: A process can move between the various queues; aging can be implemented this way. A multilevel-feedback-queue scheduler is defined by the following parameters: the number of queues, the scheduling algorithm for each queue, the method used to determine when to upgrade a process, the method used to determine when to demote a process, and the method used to determine which queue a process enters when it needs service.

Multilevel Feedback Queues – cont. [figure].

Example of Multilevel Feedback Queue: Three queues: Q0 – time quantum 8 milliseconds; Q1 – time quantum 16 milliseconds; Q2 – FCFS. Scheduling: a new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1. At Q1 the job is again served FCFS and receives 16 additional milliseconds; if it still does not complete, it is preempted and moved to queue Q2.
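
A sketch of the demotion rule for this three-queue example in C (the struct and names are illustrative; Q2 has no quantum since it is FCFS):

#define NLEVELS 3
static const int quantum_ms[NLEVELS] = {8, 16, 0};   /* 0 means FCFS: no quantum */

typedef struct {
    int level;          /* 0 = Q0, 1 = Q1, 2 = Q2 */
    int remaining_ms;   /* CPU time the job still needs */
} job_t;

/* Called after a job has run for ran_ms: demote it one level if it used its
   whole quantum without finishing. */
void after_slice(job_t *j, int ran_ms) {
    j->remaining_ms -= ran_ms;
    if (j->remaining_ms > 0 && j->level < NLEVELS - 1 &&
        ran_ms >= quantum_ms[j->level])
        j->level++;
}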

Two-queue scheduling [figure]

Three-queue scheduling [figure]

Multiple-Processor Scheduling: CPU scheduling is more complex when multiple CPUs are available. We assume homogeneous processors within a multiprocessor, which permits load sharing. Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.

Algorithm Evaluation: Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload. Queueing models. Implementation.

Evaluation of CPU Schedulers by Simulation [figure].

Contemporary Scheduling: Involuntary CPU sharing via timer interrupts; the time quantum is determined by the interval timer and is usually fixed for every process using the system (sometimes called the time-slice length). Priority-based process (job) selection: select the highest-priority process, where priority reflects policy. With preemption, and usually a variant of multi-level queues.

Scheduling in real OSs: All use a multiple-queue system. UNIX SVR4: 160 levels, three classes (time-sharing, system, real-time). Solaris: 170 levels, four classes (time-sharing, system, real-time, interrupt). OS/2 2.0: 128 levels, four classes (background, normal, server, real-time). Windows NT 3.51: 32 levels, two classes (regular and real-time). Mach uses "hand-off scheduling".

BSD 4.4 Scheduling: Involuntary CPU sharing; preemptive algorithms; 32 multi-level queues. Queues 0-7 are reserved for system functions, and queues 8-31 are for user-space functions. nice influences (but does not dictate) the queue level.

Windows NT/2K Scheduling: Involuntary CPU sharing across threads; preemptive algorithms; 32 multi-level queues. The highest 16 levels are "real-time"; the next lower 15 are for system/user threads, with the range determined by the process base priority; the lowest level is for the idle thread.

Scheduler in Linux: Implemented in the file kernel/sched.c; the policy is a variant of RR scheduling.

Summary: The scheduler is responsible for multiplexing the CPU among a set of ready processes/threads. It is invoked periodically by a timer interrupt, by a system call, by other device interrupts, or any time the running process terminates. It selects from the ready list according to its scheduling policy, which may be a non-preemptive or a preemptive algorithm.