Operating Systems, 112 Practical Session 4, Scheduling

A quick recap
Quality criteria measures:
1. Throughput – the number of completed processes per time unit.
2. Turnaround time – the time interval between the process submission and its completion.
3. Waiting time – the sum of all time intervals in which the process was in the ready queue.
4. Response time – the time between submitting a command and the generation of its first output.
5. CPU utilization – the percentage of time in which the CPU is not idle.

A quick recap
Two types of scheduling:
- Preemptive scheduling – a task may be rescheduled to run at a later time (for example, the scheduler may preempt it upon the arrival of a "more important" task).
- Non-preemptive (cooperative) scheduling – task switching is performed only through explicitly invoked system services (for example: task termination, an explicit call to yield(), or an I/O operation which changes the process state to blocked).

A quick recap
Scheduling algorithms taught in class:
1. FCFS (First-Come, First-Served). Non-preemptive. Suffers from the convoy effect.
2. SJF (Shortest Job First). Provably optimal with respect to average turnaround time. There is no way of knowing the length of the next CPU burst, but it can be approximated with exponential averaging: T_{n+1} = α·t_n + (1 − α)·T_n (a sketch of this estimate in code follows below). Can be preemptive (Shortest Remaining Time First) or non-preemptive.
3. Round Robin. With large time slices it degenerates to FCFS; with time slices close to the context-switch time, more CPU time is wasted on switches.
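
The exponential-averaging estimate above is easy to compute incrementally. The following C sketch is illustrative only (the function name, the value of α and the sample burst lengths are assumptions, not part of the session material):

#include <stdio.h>

/* Exponential averaging of CPU-burst lengths, as used by SJF to estimate
 * the next burst: T(n+1) = alpha*t(n) + (1-alpha)*T(n).
 * 'predict_next_burst', alpha and the sample data are illustrative. */
static double predict_next_burst(double prev_prediction, double last_burst, double alpha)
{
    return alpha * last_burst + (1.0 - alpha) * prev_prediction;
}

int main(void)
{
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};   /* measured burst lengths (TU) */
    double T = 10.0;                              /* initial guess T(0)          */
    double alpha = 0.5;

    for (int i = 0; i < 7; i++) {
        printf("burst %d: predicted %.2f, observed %.0f\n", i + 1, T, bursts[i]);
        T = predict_next_burst(T, bursts[i], alpha);
    }
    return 0;
}

Larger values of α make the estimate track the most recent burst more closely; α = 0 ignores recent history entirely.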

A quick recap
4. Guaranteed scheduling. Constantly calculates, per process, the ratio between the CPU time it actually received and the CPU time it is entitled to. Guarantees each of the n processes 1/n of the CPU time.
5. Priority scheduling. A generalization of SJF (how?).
6. Multi-Level Queue scheduling. Partitions the ready queue; each partition employs its own scheduling scheme. A process from a lower-priority group may run only if there is no higher-priority process. May cause starvation!

A quick recap
7. Dynamic Multi-Level scheduling. Takes the time spent waiting into account (the notion of aging, used to prevent starvation).
   i.  Highest Response Ratio Next: run the process with the highest ratio (waiting time + expected run time) / expected run time (see the sketch below).
   ii. Feedback scheduling: demote processes that have been running longer; combine with aging to prevent starvation.
8. Two-Level scheduling. One scheduler handles memory-to-CPU scheduling, and another scheduler handles memory-to-disk operations.
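
For reference, here is a minimal C sketch of the HRRN selection rule mentioned in 7.i. The struct, the helper name and the sample jobs are illustrative assumptions:

#include <stdio.h>
#include <stddef.h>

/* Highest Response Ratio Next (HRRN): run the ready process with the
 * largest ratio (waiting time + expected run time) / expected run time. */
struct job {
    double waiting;          /* time spent so far in the ready queue */
    double expected_burst;   /* estimated service time               */
};

static size_t hrrn_pick(const struct job *jobs, size_t n)
{
    size_t best = 0;
    double best_ratio = -1.0;
    for (size_t i = 0; i < n; i++) {
        double r = (jobs[i].waiting + jobs[i].expected_burst)
                   / jobs[i].expected_burst;
        if (r > best_ratio) {
            best_ratio = r;
            best = i;
        }
    }
    return best;
}

int main(void)
{
    /* A long job that has already waited a while beats a short newcomer. */
    struct job jobs[] = { {9.0, 10.0}, {0.0, 2.0} };
    printf("run job %zu\n", hrrn_pick(jobs, 2));   /* job 0: ratio 1.9 vs 1.0 */
    return 0;
}

Because waiting time appears in the numerator, a long job that has waited long enough eventually overtakes short newcomers, which is how HRRN avoids starvation.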

Warm up (1)
Why bother with multiprogramming? Assume processes in a given system wait for I/O 60% of the time.
1. What is the approximate CPU utilization with one process running?
2. What is the approximate CPU utilization with three processes running?

Warm up (1)
1. If a process blocks on I/O 60% of the time, then CPU utilization is only 40%.
2. At a given moment, the probability that all three processes are blocking on I/O is 0.6^3 = 0.216. That means that the CPU utilization is 1 − 0.216 = 0.784, or roughly 78%.
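
A short C sketch of the utilization estimate 1 − p^n used above, where p is the fraction of time a process spends waiting for I/O (the program and its names are illustrative):

#include <stdio.h>
#include <math.h>

/* CPU utilization under multiprogramming: if each of n processes waits for
 * I/O a fraction p of the time (independently), utilization ~ 1 - p^n. */
int main(void)
{
    double p = 0.6;                  /* I/O-wait fraction from the exercise */
    for (int n = 1; n <= 3; n++)
        printf("n = %d -> utilization = %.3f\n", n, 1.0 - pow(p, n));
    return 0;                        /* prints 0.400, 0.640, 0.784 */
}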

Warm up (2)
Assume a single-CPU machine with a non-preemptive scheduler attempts to schedule n independent processes. How many possible schedules exist?
Answer: This is exactly like ordering a set of n different characters to form a word of length n. That is, there are n! different possible schedules.

Round Robin
The following list of processes requires scheduling (run times are given in time units, TU):
P_A – 6 TU
P_B – 3 TU
P_C – 1 TU
P_D – 7 TU
If RR scheduling is used, what quantum size should be used to achieve the minimal average turnaround time? (Assume zero-cost context switches.)

Round Robin
1. Quantum = 1: schedule A B C D A B D A B D A D A D A D D, avg. turnaround (3+9+15+17)/4 = 11 TU
2. Quantum = 2: schedule A A B B C D D A A B D D A A D D D, avg. turnaround (5+10+14+17)/4 = 11.5 TU
3. Quantum = 3: 10.75 TU
4. Quantum = 4: 11.5 TU
5. Quantum = 5: 12.25 TU
6. Quantum = 6: 10.5 TU
7. Quantum = 7: 10.5 TU
The minimal average turnaround (10.5 TU) is therefore achieved with a quantum of 6 or 7.

Round Robin
Turnaround time depends on the size of the time quantum used. Note that it does not necessarily improve as the time quantum size increases!
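
The quantum comparison above can be reproduced with a small simulation. The following C sketch uses the process set from the exercise and assumes zero-cost context switches; the function and variable names are illustrative:

#include <stdio.h>

/* Average Round Robin turnaround for P_A=6, P_B=3, P_C=1, P_D=7 TU,
 * all arriving at t=0, with free context switches. */
#define N 4

static double rr_avg_turnaround(const int burst[N], int quantum)
{
    int remaining[N], done = 0, time = 0;
    double total_ta = 0.0;

    for (int i = 0; i < N; i++)
        remaining[i] = burst[i];

    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0)
                continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            time += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {        /* process completes now */
                total_ta += time;           /* all arrival times are 0 */
                done++;
            }
        }
    }
    return total_ta / N;
}

int main(void)
{
    int burst[N] = {6, 3, 1, 7};
    for (int q = 1; q <= 7; q++)
        printf("quantum %d -> avg turnaround %.2f TU\n",
               q, rr_avg_turnaround(burst, q));
    return 0;
}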

Non-preemptive scheduling (taken from Tanenbaum)
Assume 5 different jobs arrive at a computer center roughly at the same time (same clock tick). Their expected run times are 10, 6, 2, 4 and 8 TU. Their (externally determined) priorities are 3, 5, 2, 1 and 4 respectively:

PID | Priority | Time
P1  |    3     |  10
P2  |    5     |   6
P3  |    2     |   2
P4  |    1     |   4
P5  |    4     |   8

For each of the following scheduling algorithms, determine the mean process turnaround time. Ignore process-switching overhead. All jobs are completely CPU bound.
1. Priority scheduling (non-preemptive). [A higher number means a higher priority]
2. First come, first served (in order 10, 6, 2, 4, 8) (non-preemptive)
3. Shortest job first (non-preemptive)

Answers:
1. Priority scheduling: (6+14+24+26+30)/5 = 20
2. FCFS: (10+16+18+22+30)/5 = 19.2
3. SJF: (2+6+12+20+30)/5 = 14
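
Since all three policies here are non-preemptive and all jobs arrive together, the mean turnaround is just the average of the running completion times in the chosen order. A small C sketch (the orderings are the ones derived above; names are illustrative):

#include <stdio.h>

/* Mean turnaround of a non-preemptive schedule: run the jobs in the given
 * order and average the completion times (all jobs arrive at t=0). */
static double mean_turnaround(const int burst[], int n)
{
    int time = 0;
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        time += burst[i];   /* job i completes at 'time' */
        sum += time;
    }
    return sum / n;
}

int main(void)
{
    int by_priority[] = {6, 8, 10, 2, 4};  /* P2, P5, P1, P3, P4 */
    int fcfs[]        = {10, 6, 2, 4, 8};  /* arrival order      */
    int sjf[]         = {2, 4, 6, 8, 10};  /* shortest first     */

    printf("priority: %.1f\n", mean_turnaround(by_priority, 5)); /* 20.0 */
    printf("FCFS:     %.1f\n", mean_turnaround(fcfs, 5));        /* 19.2 */
    printf("SJF:      %.1f\n", mean_turnaround(sjf, 5));         /* 14.0 */
    return 0;
}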

Preemptive dynamic-priority scheduling (taken from Silberschatz, 5-9)
Consider the following preemptive priority scheduling algorithm with dynamically changing priorities: when a process is waiting for the CPU (in the ready queue, but not running), its priority changes at rate α; when it is running, its priority changes at rate β. All processes are given a priority of 0 when they enter the ready queue. The parameters α and β can be set; a higher priority value means a higher priority.
1. What is the algorithm that results from β > α > 0?
2. What is the algorithm that results from α < β < 0?
3. Is there a starvation problem in 1? In 2? Explain.

Preemptive dynamic-priority scheduling
1. β > α > 0. To get a better feeling for the problem, consider an example: P1, P2, P3 arrive one after the other (at times 0, 1, 2), each lasts 3 TU, and α = 1, β = 2. The priority values per time unit are:

Time | 0  1  2  3  4  5  6  7  8  9
P1   | 0  2  4  6                      (runs 0-3, completes)
P2   |    0  1  2  4  6  8             (runs 3-6, completes)
P3   |       0  1  2  3  4  6  8  10   (runs 6-9, completes)

The resulting schedule is FCFS. Slightly more formally: if a process is running, it must have the highest priority value. While it is running, its priority increases at a rate greater than that of any waiting process, so it continues to run until it completes (or blocks on I/O, for example). All processes in the ready queue increase their priority at the same rate, hence the one which arrived earliest will have the highest priority once the CPU becomes available.

Preemptive dynamic-priority scheduling
2. α < β < 0. We use (almost) the same example as before, but this time α = -2 and β = -1:

Time | 0   1   2   3   4   5   6   7   8   9
P1   | 0  -1  -3  -5  -7  -9 -11 -13 -14 -15   (runs 0-1 and 7-9, completes)
P2   |     0  -1  -3  -5  -7  -8  -9           (runs 1-2 and 5-7, completes)
P3   |         0  -1  -2  -3                   (runs 2-5, completes)

The resulting schedule is LIFO. More formally: if a process is running, it must have the highest priority value. While it runs, its priority decreases at a lower rate than that of any waiting process, so it continues to run until it completes (or blocks on I/O, for example), unless a new process with priority 0 arrives and preempts it. As before, all processes in the ready queue decrease their priority at the same rate, hence the one which arrived latest will have the highest priority once the CPU becomes available.

Preemptive dynamic-priority scheduling
3. In the first case it is easy to see that there is no starvation problem: when the k-th process is introduced, it will wait at most (k−1)·max{time_i} time units. This number might be large, but it is still finite. This is not true for the second case. Consider the following scenario: P1 is introduced and receives CPU time. While it is still running, a second process, P2, is initiated. According to the scheduling rule of the second case, P2 will receive the CPU and P1 will have to wait. As long as new processes keep being introduced, P1 will never get a chance to complete its task.
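
The behaviour analysed in 1 and 2 can be checked with a tiny per-TU simulation. The following C sketch uses the three-process example from the previous slides; the tie-breaking rule (lower index wins) and all names are illustrative assumptions:

#include <stdio.h>

/* Per-TU simulation of the dynamic-priority scheduler: a waiting process
 * gains 'alpha' per TU, the running process gains 'beta' per TU, new
 * processes enter with priority 0 and the highest priority runs. */
#define N 3

static void simulate(double alpha, double beta)
{
    int arrival[N] = {0, 1, 2}, remaining[N] = {3, 3, 3};
    double prio[N] = {0, 0, 0};
    int done = 0;

    for (int t = 0; done < N; t++) {
        int run = -1;
        for (int i = 0; i < N; i++)          /* pick the highest priority */
            if (arrival[i] <= t && remaining[i] > 0 &&
                (run < 0 || prio[i] > prio[run]))
                run = i;
        if (run < 0)
            continue;
        printf("t=%d: P%d runs\n", t, run + 1);
        remaining[run]--;
        if (remaining[run] == 0)
            done++;
        for (int i = 0; i < N; i++)          /* update priorities */
            if (arrival[i] <= t && remaining[i] > 0)
                prio[i] += (i == run) ? beta : alpha;
    }
}

int main(void)
{
    printf("beta > alpha > 0 (expect FCFS order):\n");
    simulate(1.0, 2.0);
    printf("alpha < beta < 0 (expect LIFO order):\n");
    simulate(-2.0, -1.0);
    return 0;
}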

Dynamic Multi-Level scheduling
An OS keeps two queues, Q1 and Q2. Each queue applies the round robin (RR) algorithm to the processes it holds. The OS prioritizes processes in Q1 over those in Q2. When a process is created or returns from an I/O operation, it enters Q1. A process enters Q2 if it just finished running and used up its whole time quantum. A process returning from I/O enters Q1 and has precedence over a process which has not yet started running.
In our problem, we have the following processes:
Process P1 – arrival time = 0, requirements: 1 TU CPU, 1 TU I/O, 3 TU CPU.
Process P2 – arrival time = 2, requirements: 2 CPU, 2 I/O, 2 CPU.
Process P3 – arrival time = 3, requirements: 1 CPU, 3 I/O, 3 CPU.
Draw the Gantt table and compute the average TA and RT (turnaround and response time). Assume that the time quantum in Q1 is 1 TU and the time quantum in Q2 is 2 TU. Further assume that the system has preemption. The RT is measured up to the start of each I/O operation (you may think of the I/O as printing to stdout, with the user waiting for this printout).

Dynamic Multi-Level scheduling
The Gantt table:

Time | 0   1   2   3   4   5   6   7   8   9   10  11  12
P1   | CPU I/O CPU  .   .  CPU CPU
P2   |          .  CPU  .   .   .  CPU I/O I/O CPU  .  CPU
P3   |              .  CPU I/O I/O I/O CPU CPU  .  CPU
( . = ready, waiting for the CPU)

Avg. TA: (7+11+9)/3 = 9
Avg. RT: (1+6+2)/3 = 3
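
For completeness, here is a small C sketch of the two-queue mechanics described above: Q1 before Q2, demotion on a fully used quantum, and a preempted process returning to the queue it came from. The tiny FIFO, the function names and the short demonstration in main() are illustrative assumptions, not course code:

#include <stdio.h>

/* Two-queue policy: Q1 (quantum 1 TU) has priority over Q2 (quantum 2 TU);
 * new processes and processes returning from I/O enter Q1, and a process
 * that exhausts its whole quantum is demoted to Q2. */
#define CAP 32

struct fifo { int buf[CAP]; int head, tail; };
static int  fifo_empty(struct fifo *q)        { return q->head == q->tail; }
static void fifo_push(struct fifo *q, int id) { q->buf[q->tail++] = id; }
static int  fifo_pop(struct fifo *q)          { return q->buf[q->head++]; }

enum { Q1_QUANTUM = 1, Q2_QUANTUM = 2 };

/* Pick the next process to run: Q1 first, then Q2; returns -1 when idle. */
static int pick_next(struct fifo *q1, struct fifo *q2, int *quantum)
{
    if (!fifo_empty(q1)) { *quantum = Q1_QUANTUM; return fifo_pop(q1); }
    if (!fifo_empty(q2)) { *quantum = Q2_QUANTUM; return fifo_pop(q2); }
    return -1;
}

/* Re-queue a process that stopped running without blocking on I/O:
 * full quantum used -> demote to Q2; preempted early -> back to its queue. */
static void requeue(int id, int used_full_quantum,
                    struct fifo *came_from, struct fifo *q2)
{
    if (used_full_quantum)
        fifo_push(q2, id);
    else
        fifo_push(came_from, id);
}

int main(void)
{
    struct fifo q1 = {0}, q2 = {0};
    int quantum;

    fifo_push(&q1, 1);                       /* a process P1 arrives        */
    int p = pick_next(&q1, &q2, &quantum);   /* -> P1, quantum 1 (from Q1)  */
    printf("run P%d for %d TU\n", p, quantum);
    requeue(p, 1, &q1, &q2);                 /* used its whole quantum      */
    p = pick_next(&q1, &q2, &quantum);       /* Q1 empty -> P1 now from Q2  */
    printf("run P%d for %d TU\n", p, quantum);
    return 0;
}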

Guaranteed scheduling (Quiz '05)
Three processes run on an OS with preemptive guaranteed scheduling (switching is done on every round; ties are broken with preference to the lower-ID process). They require the use of both the CPU and I/O:
P1 – 1 TU CPU, 4 TU I/O, 2 TU CPU
P2 – 1 CPU, 2 I/O, 2 CPU
P3 – 2 CPU, 1 I/O, 2 CPU
For questions 1-3, draw a Gantt chart and compute the average TA time:
1. Identical arrival times, I/O is done on separate devices.
2. Identical arrival times, I/O is done on the same device with FCFS scheduling.
3. Arrival times: P1 at 0, P2 at 1 and P3 at 2. I/O is done on separate devices.
4. Would Round Robin produce the same results? Explain your answer.

Guaranteed scheduling
1. The Gantt table:

Time | 0   1   2   3   4   5   6   7   8   9
P1   | CPU I/O I/O I/O I/O CPU CPU
P2   |     CPU I/O I/O CPU  .   .  CPU
P3   |         CPU CPU I/O  .   .   .  CPU CPU
( . = ready, waiting for the CPU)

Avg. TA: (7+8+10)/3 = 8.33

Guaranteed scheduling
2. The Gantt table:

Time | 0   1   2   3   4   5   6   7   8   9   10
P1   | CPU I/O I/O I/O I/O CPU CPU
P2   |     CPU  w   w   w  I/O I/O CPU CPU
P3   |         CPU CPU  w   w   w  I/O  .  CPU CPU
( w = waiting for the shared I/O device, . = ready; the CPU is idle during time unit 4)

Avg. TA: (7+9+11)/3 = 9

Guaranteed scheduling
3. The Gantt table:

Time | 0   1   2   3   4   5   6   7   8   9
P1   | CPU I/O I/O I/O I/O CPU CPU
P2   |     CPU I/O I/O CPU  .   .  CPU
P3   |         CPU CPU I/O  .   .   .  CPU CPU
( . = ready, waiting for the CPU)

The chart is identical to question 1, but turnaround is now measured from each process's (later) arrival time.
Avg. TA: (7+7+8)/3 = 7.33

Guaranteed scheduling
4. No. RR implements fairness only between processes waiting in the ready state, and hence no process will receive two consecutive time slices while another process is waiting. This is in contrast to guaranteed scheduling, which takes into account all the time the process has spent in the system.
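
A C sketch of the guaranteed-scheduling selection rule used in this exercise: run the ready process with the smallest ratio of consumed to entitled CPU time, breaking ties by lower ID. The struct, field names and the snapshot in main() are illustrative assumptions:

#include <stdio.h>
#include <stddef.h>

/* Guaranteed scheduling: each of the n processes is entitled to 1/n of the
 * CPU since it entered the system. Run the ready process whose ratio
 * consumed / entitled is smallest, preferring the lower ID on ties. */
struct gproc {
    int    id;
    int    ready;          /* 1 if currently in the ready queue       */
    double consumed;       /* CPU time received so far                */
    double in_system;      /* time elapsed since the process arrived  */
};

static int pick_guaranteed(const struct gproc *p, size_t n)
{
    int best = -1;
    double best_ratio = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (!p[i].ready)
            continue;
        double entitled = p[i].in_system / (double)n;
        double ratio = entitled > 0.0 ? p[i].consumed / entitled : 0.0;
        if (best < 0 || ratio < best_ratio ||
            (ratio == best_ratio && p[i].id < p[best].id)) {
            best = (int)i;
            best_ratio = ratio;
        }
    }
    return best;            /* index of the next process, -1 if none ready */
}

int main(void)
{
    /* Snapshot of question 1 at t=5, when all three processes are ready. */
    struct gproc ps[3] = {
        {1, 1, 1.0, 5.0},   /* P1 has received 1 TU of CPU */
        {2, 1, 2.0, 5.0},   /* P2 has received 2 TU        */
        {3, 1, 2.0, 5.0},   /* P3 has received 2 TU        */
    };
    printf("next to run: P%d\n", ps[pick_guaranteed(ps, 3)].id);  /* P1 */
    return 0;
}

Running it on the question-1 snapshot at t=5 picks P1, matching the Gantt chart above.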

Multi-core Scheduling
Assume we have a machine with two processors (C1, C2). At any given moment both processors are working, unless there are no more jobs waiting. A set of 13 processes of three types arrives at the system simultaneously:
Type A processes – short processes that complete after one time unit (a single process).
Type B processes – slightly longer processes that complete after two time units (7 processes).
Type C processes – processes that complete after three time units (5 processes).
For each of the two algorithms below, compute:
– What is the average turnaround time, and which algorithm is better by this measure?
– How much CPU time is used, and which algorithm is better by this measure?
– How long does it take to finish the whole computation, and which algorithm is better by this measure?
The algorithms:
– Type A and B processes are directed to processor C1, which runs SJF on them. All type C processes are directed to processor C2. If one processor finishes its work before the other, it handles the remaining processes according to SJF (processor affinity).
– Processes are assigned to the processors according to SJF (global SJF).

Multi-core Scheduling
We draw a Gantt table for the two processors.
For the first algorithm (A and B on C1, C on C2, with affinity):
C1: A(0-1) B(1-3) B(3-5) B(5-7) B(7-9) B(9-11) B(11-13) B(13-15)
C2: C(0-3) C(3-6) C(6-9) C(9-12) C(12-15)
And for the second algorithm (global SJF, each job handed to the first free processor):
C1: A(0-1) B(1-3) B(3-5) B(5-7) C(7-10) C(10-13) C(13-16)
C2: B(0-2) B(2-4) B(4-6) B(6-8) C(8-11) C(11-14)
Now we can answer the questions easily:
– For the first: avg. TA = (64+45)/13 = 109/13 ≈ 8.38. For the second: avg. TA = (55+45)/13 = 100/13 ≈ 7.69. Global SJF is therefore better by this measure (preferable to affinity).
– Clearly, in both methods the amount of CPU time used is identical: 30 TU.
– Surprisingly, despite being better in terms of turnaround, global SJF finishes later, after 16 time units, whereas the first algorithm finishes after 15 time units.
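
The turnaround and makespan of the second (global SJF) policy can be verified with a short computation: hand the SJF-sorted jobs to whichever processor becomes free first. The job mix is the one from the exercise; the program and its names are an illustrative sketch:

#include <stdio.h>

/* Global SJF on two CPUs: jobs sorted by length are handed to whichever
 * CPU frees first. Job mix: 1xA(1 TU), 7xB(2 TU), 5xC(3 TU), all at t=0. */
#define NJOBS 13

int main(void)
{
    int burst[NJOBS] = {1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3}; /* SJF order */
    int free_at[2] = {0, 0};       /* time at which each CPU becomes idle */
    double total_ta = 0.0;
    int makespan = 0;

    for (int i = 0; i < NJOBS; i++) {
        int cpu = (free_at[0] <= free_at[1]) ? 0 : 1;  /* earliest-free CPU */
        free_at[cpu] += burst[i];
        total_ta += free_at[cpu];                      /* arrival time is 0 */
        if (free_at[cpu] > makespan)
            makespan = free_at[cpu];
    }
    printf("avg TA = %.2f TU, makespan = %d TU\n",
           total_ta / NJOBS, makespan);                /* 7.69 TU, 16 TU */
    return 0;
}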

XV6 / Assignment 1
proc.h: struct cpu, enum procstate, struct proc
proc.c: struct ptable, allocproc, fork, exit, wait, scheduler, sched, yield
syscall.h
syscall.c: fetchint, syscalls[], syscall
traps.h
defs.h