Scheduling: Chapter 3


Scheduling: Chapter 3  Process: Entity competing for resources  Process states: New, running, waiting, ready, terminated, zombie (and perhaps more).  See also Fig. 3.2 on page 87.

Partial state diagram [figure]: states Hold, Ready, Running, Waiting, Zombie, Terminated, connected by numbered transition arrows.

States  Hold: waiting for its scheduled time to gain entry  Ready: can compute if the OS allows  Running: currently executing  Waiting: waiting for an event (e.g. an I/O to complete)  Zombie: process has finished  Terminated: process finished and its parent has waited on it.

State Transitions  process gains entry to the system at its specified time (1)  OS gives the process the right to use the CPU (2)  OS removes the process's right to use the CPU (time's up) (3)  process makes a request it must wait for (e.g. issues an I/O request), or issues a wait or pause, among other things (4)

 event has occurred (5) (e.g. I/O completed)  process has finished (6)  parent waited on process (7)  OS suspends process (maybe too much activity and it must reduce the load) (8)

 High-level scheduling (Also long-term): Which programs gain entry to the system or exit it. Essentially 1, 6, and 7 above.  Intermediate-level scheduling: Which processes can compete for CPU. Essentially 4, 5, and 8 above.  Low-level scheduling (Also short-term): Who gets the CPU. Essentially 2 and 3 above.
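The numbered transitions above can be sketched as a small lookup table. This is my own encoding, not the slides': the state names follow the slides, but the assumption that suspension (8) moves a Ready process back to Hold is mine.

```python
# Sketch of the process state machine from the slides. Each numbered
# transition maps a source state to a destination state.
TRANSITIONS = {
    1: ("Hold", "Ready"),        # gains entry to the system
    2: ("Ready", "Running"),     # OS grants the CPU
    3: ("Running", "Ready"),     # time quantum expires
    4: ("Running", "Waiting"),   # process issues I/O request, wait, pause
    5: ("Waiting", "Ready"),     # awaited event occurs
    6: ("Running", "Zombie"),    # process finishes
    7: ("Zombie", "Terminated"), # parent waits on the process
    8: ("Ready", "Hold"),        # OS suspends the process (assumed arc)
}

def next_state(state, transition):
    """Apply a numbered transition if it is legal from `state`."""
    src, dst = TRANSITIONS[transition]
    if src != state:
        raise ValueError(f"transition {transition} not legal from {state}")
    return dst

print(next_state("Ready", 2))   # Running
print(next_state("Running", 4)) # Waiting
```

With this table, high-level scheduling touches arcs 1, 6, and 7; intermediate-level scheduling touches 4, 5, and 8; low-level scheduling touches 2 and 3.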

Goals and thoughts  fairness  maximize throughput  minimize turnaround time  minimize response time  consistency  These goals may be incompatible.

 avoid loads that degrade system  keep resources busy (e.g. I/O controllers) to maximize concurrent activities.  high priority to interactive users  deal with compute-bound vs. I/O bound processes.  keep CPU busy

 should all processes have the same priority?  should OS distinguish between processes that have done a lot so far from those that have done little?  Consider limits.

 Non-preemptive scheduling: once a process gets the CPU, it keeps it until done  Preemptive scheduling: what the OS giveth, the OS can taketh away

PCB (Process Control Block):  Every process has one and it contains:  state  program counter  CPU register values  accounting information

 in general, the process context  Saving the running process's context in its PCB (and restoring another's) is called a context switch; it usually happens when a process gets or loses control of the CPU.

 See also task_struct, the Linux PCB (page 90 of the text)  located at /usr/src/kernels/ el5-xen-i686/include/linux/sched.h (line 834). You can paste the path at the command line by copying it from this document and right-clicking at the prompt.

 Process lists are really PCB lists.

 Can skip 3.4 (forks and IPC). Some of this we did (shared memory); some we'll do later (message passing).  Can skip 3.5 (more message passing)  Can skip 3.6 (sockets and RPCs). Some of that’s done in the networks course

 Chapter 4 deals largely with threads. I will postpone that until a little later when I introduce java threads and synchronization.

Chapter 5: CPU Scheduling  Typically program execution alternates: CPU burst, I/O burst, CPU burst, I/O burst, etc. (Fig. 5.1)  Compute bound: mostly CPU bursts (e.g. simulations, graphics)  I/O bound: mostly I/O bursts (e.g. interactive work, databases)

Scheduling algorithms:  First-come, first-served (FCFS, also FIFO): the process that asks first gets the CPU first, and keeps it until it finishes or requests something it must wait for. The Gantt chart on p. 173 shows average wait time and turnaround time; the averages vary according to which process is at the front of the queue.

inappropriate for many environments  many processes could wait a long time behind a compute-bound process (bad if they are interactive or need to issue their own I/O)  might be OK if most processes are compute bound (e.g. simulations)  sometimes used in conjunction with other methods, or in specialized environments where most tasks are compute bound.
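The Gantt-chart arithmetic for FCFS can be sketched in a few lines; the burst times below are illustrative (a long job followed by two short ones) and need not match the textbook's chart.

```python
def fcfs_wait_times(bursts):
    """Per-process waiting times when all processes arrive at t=0
    in list order and each runs to completion (non-preemptive)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)  # waits until every earlier burst finishes
        clock += b
    return waits

bursts = [24, 3, 3]
w = fcfs_wait_times(bursts)
print(w, sum(w) / len(w))   # [0, 24, 27] 17.0
```

This shows the "convoy" problem in miniature: the two 3-unit jobs sit behind the 24-unit job, pushing the average wait up to 17.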

SJF (Shortest Job First)  Orders processes by the length of their next CPU burst.  Preemptive variant: a newly arriving process may preempt the currently running one.  Useful if the OS wants to give high priority to tasks likely to have short CPU bursts and thus keep I/O controllers busy.
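A sketch of the non-preemptive case, using the same illustrative bursts as the FCFS example above (my choice of data, not the textbook's):

```python
def sjf_wait_times(bursts):
    """Non-preemptive SJF with all processes ready at t=0:
    run bursts in increasing order, accumulate waiting times,
    and report them in original process order."""
    waits, clock = {}, 0
    for pid, b in sorted(enumerate(bursts), key=lambda x: x[1]):
        waits[pid] = clock
        clock += b
    return [waits[i] for i in range(len(bursts))]

print(sjf_wait_times([24, 3, 3]))   # [6, 0, 3]
```

Running the short jobs first drops the average wait from 17 (FCFS order) to 3 on the same workload, which is why SJF is provably optimal for average waiting time when bursts are known.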

 may not know the length of the next CPU burst  can estimate, based on time limits specified in JCL (Job Control Language)  can predict burst length based on previous burst lengths and predictions.

 Possible option: use an exponential average defined by τₙ₊₁ = α·tₙ + (1 − α)·τₙ, where τₙ₊₁ is the predicted value for the next burst, tₙ is the length of the nth burst, and α is a constant with 0 ≤ α ≤ 1.

 In general, expanding the recurrence gives τₙ₊₁ = α·tₙ + (1 − α)·α·tₙ₋₁ + … + (1 − α)ʲ·α·tₙ₋ⱼ + … + (1 − α)ⁿ⁺¹·τ₀  If α = 0, recent history has no effect  If α = 1, only the most recent burst matters.  See Figure 5.3 for an example.  See the Gantt chart on page 176.
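The recurrence above is a one-liner in code. The α = 1/2 and τ₀ = 10 below match the commonly used illustration for this formula; the burst sequence is illustrative.

```python
def predict_bursts(actual, alpha=0.5, tau0=10.0):
    """Exponential averaging: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n.
    Returns the prediction made before each burst (tau0 first)."""
    preds = [tau0]
    for t in actual:
        preds.append(alpha * t + (1 - alpha) * preds[-1])
    return preds

print(predict_bursts([6, 4, 6, 4]))
# [10.0, 8.0, 6.0, 6.0, 5.0]
```

Note how the initial guess of 10 is washed out within a few bursts: each older measurement's weight decays by a factor of (1 − α).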

Priority scheduling  Priority associated with each process and scheduled accordingly.  See Gantt chart on page 177  Indefinite postponement, indefinite blocking, starvation: These are all terms that apply to a process that may wait indefinitely due to low priority.

 NOTE: the textbook cites a rumor that when the IBM 7094 at MIT was shut down in 1973, a low-priority process was found that had been waiting there for years  Can deal with this by periodically increasing the priorities of waiting processes  This is called aging.
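Aging can be sketched as a periodic sweep over the waiting processes; the boost amount, the ceiling, and the "smaller number = higher priority" convention (common in Unix-like schedulers) are my illustrative choices.

```python
def age(priorities, waiting, boost=1, ceiling=0):
    """Raise the priority of every waiting process by `boost`.
    Smaller number = higher priority; `ceiling` caps the climb."""
    return {pid: max(ceiling, prio - boost) if pid in waiting else prio
            for pid, prio in priorities.items()}

prios = {"A": 5, "B": 127, "C": 1}
print(age(prios, waiting={"B"}))   # {'A': 5, 'B': 126, 'C': 1}
```

Called once per clock interval, this guarantees that even a process starting at priority 127 eventually reaches the top and runs, which is exactly the starvation fix the slide describes.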

Round Robin  Processes just take turns.  Gantt chart on page 178  The process at the front of the queue runs until: it finishes; it issues a request for which it must wait (e.g. I/O); or its time quantum (maximum length of uninterrupted execution time) expires.

 Quantum size is an issue.  A large quantum looks more like FCFS: a process waits longer for "its turn".  A small quantum generates frequent context switches (OS intervention): since the OS uses the CPU a higher percentage of the time, the processes use it less.
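The effect of quantum size can be seen directly in a small simulation (context-switch cost ignored, all processes ready at t=0; the bursts are the same illustrative workload used earlier):

```python
from collections import deque

def rr_wait_times(bursts, quantum):
    """Round Robin: per-process waiting time (turnaround minus burst)."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    q, clock = deque(range(len(bursts))), 0
    while q:
        pid = q.popleft()
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid]:
            q.append(pid)      # quantum expired: back of the queue
        else:
            finish[pid] = clock
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(rr_wait_times([24, 3, 3], quantum=4))    # [6, 4, 7]
print(rr_wait_times([24, 3, 3], quantum=100))  # [0, 24, 27]
```

With a huge quantum the result is identical to FCFS, as the slide predicts; with a quantum of 4 the short jobs finish early, but adding a per-switch overhead term would start eating into that gain as the quantum shrinks further.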

[Graph: response time as a function of quantum size]

 Round Robin does not react much to a changing environment – for example more or fewer I/O requests  Treats all processes the same, which may or may not be appropriate.  I/O bound tasks have same priority as CPU bound ones. Does that make sense?

Multilevel Feedback Queue Scheduling  Multiple Qs  highest priority Q has shortest quantum  Lowest priority Q has longest quantum  quanta range from small to large over all Qs  Schedule from highest priority Q that has a ready process

 A process runs until:  it finishes  it issues a request for which it must wait (e.g. I/O); when ready again, it enters the next higher-priority queue (if there is one)  its time quantum (for that queue) expires; it then goes to the next lower-priority queue (if there is one).

 Interactive processes: keep high priority  Compute bound processes typically have low priority.  In the presence of mostly compute bound processes, acts more like FIFO because of the longer quantum  In the presence of mostly I/O bound processes, acts like Round Robin.  Can react to a changing environment!
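The demotion rule above can be sketched as follows. The queue count, the quanta, and the workload are illustrative, and I/O waits (and hence promotion) are not modeled; only the quantum-expiry path is shown.

```python
from collections import deque

def mlfq(bursts, quanta=(4, 8, 16)):
    """Multilevel feedback queue sketch: a process that exhausts its
    quantum drops one level; scheduling always serves the highest
    non-empty level. Returns per-process finish times."""
    levels = [deque() for _ in quanta]
    for pid in range(len(bursts)):
        levels[0].append(pid)          # everyone starts at the top
    remaining = list(bursts)
    clock, finish = 0, [0] * len(bursts)
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)
        pid = levels[lvl].popleft()
        run = min(quanta[lvl], remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid]:
            levels[min(lvl + 1, len(levels) - 1)].append(pid)
        else:
            finish[pid] = clock
    return finish

print(mlfq([24, 3, 3]))   # [30, 7, 10]
```

The 3-unit jobs finish inside the top queue's quantum while the 24-unit job sinks to the bottom queue and runs with long FIFO-like slices, matching the behavior the slide describes.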

Real-time Scheduling  Hard real-time: MUST complete a task in a specified amount of time.  Usually requires special hardware, since virtual memory, paging, and secondary storage can make timing unpredictable.

 Soft real-time:  Critical processes receive priority over non-critical ones.  Can be implemented using Multilevel Feedback Queues where the highest queues are reserved for the real-time processes.

Threads (just a couple of highlights from Chapter 4)  Thread: Lightweight process  Threads in the same process share code, data, files, etc, but have different stacks and registers.  Note the examples on the web site (thread.c and process.c)

 Kernel threads: managed by the OS kernel.  User threads: managed by a thread library (no kernel support) Less kernel overhead

User-kernel thread relationship  Many-to-one: many user threads map to one kernel thread  if one thread blocks, the entire process blocks  cannot run multiple threads in parallel  examples: Green threads (from Solaris), GNU Portable Threads  One-to-one: each user thread maps to a kernel thread  user threads can operate more independently  more flexible, but more burden on the kernel  typical of Windows and Linux

 Three main thread libraries:  POSIX threads – POSIX (Portable Operating System Interface) is an IEEE interface standard with worldwide acceptance  Win32 threads  Java threads (covered later)

Multiple processor scheduling  Asymmetric multiprocessing: All scheduling routines run on the master processor.  Symmetric multiprocessing (SMP): each processor is self-scheduling. Common queue for all processors One queue for each processor  We’ll consider SMP

 If a common queue for all processors then there are issues of multiple processors accessing and updating a common data structure.  There are many issues associated with this type of concurrency, which we cover in the next chapter.

Processor Affinity  May want to keep a process associated with the same processor.  If a process moves to another processor, current processor’s cache is invalidated and new processor’s cache must be updated  Soft affinity: try but no guarantee  Hard affinity: guarantee

Load balancing  Keep workload balanced – i.e. avoid idle processors if there are ready processes.  More difficult if each processor has its own ready queue (typical of most OS’s).  May also run counter to processor affinity.

 Push migration OS task periodically check processor queues and may redistribute tasks to balance the load.  Pull migration An idle processor pulls a task from another processor’s queue  Linux, for example, does push migration several times per second and pull migration if a queue is empty.
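Pull migration is the easier of the two to sketch. The data structures and the "steal from the longest queue" policy below are illustrative, not how Linux actually implements it:

```python
def pull_migrate(queues, idle_cpu):
    """If `idle_cpu`'s run queue is empty, steal one task from the
    longest other queue, provided it has more than one task."""
    if queues[idle_cpu]:
        return None                       # not idle, nothing to do
    donor = max((c for c in queues if c != idle_cpu),
                key=lambda c: len(queues[c]))
    if len(queues[donor]) > 1:
        task = queues[donor].pop()
        queues[idle_cpu].append(task)
        return task
    return None

qs = {0: [], 1: ["t1", "t2", "t3"]}
print(pull_migrate(qs, 0), qs)   # t3 {0: ['t3'], 1: ['t1', 't2']}
```

Push migration would instead be a periodic task scanning all queues and moving work from long queues to short ones; note that either form of migration can run counter to processor affinity, as the previous slide warns.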

Multicore processors (not in text)  One chip – multiple core processors, each with its own register set.  One thread per core seems logical but presents problems.  Memory stall: processor waits for data to become available such as may happen in a cache or TLB miss. Waiting processors mean no work being done.

 Multithreaded processor core: two or more hardware threads assigned to a single core.  Could interleave threads, i.e. while one thread is waiting, the other executes its instruction cycle.  If one thread stalls, the processor can switch to the other thread.

 From the OS point of view, each hardware thread is a separate core capable of running a software thread.  i.e. the OS may see 4 logical processors on a dual-core chip

Windows XP: Read through this:  additional material appears in the text  uses a 32-level priority scheme (the top half are soft real-time priorities)  basically a multilevel feedback queue  each thread has a base priority, and its priority cannot fall below it  Ctrl-Alt-Del opens Task Manager; right-click a task to see its priority and affinity.

 Threads get a priority boost when a wait is over; the amount of the boost depends on what the wait was for.  Threads waiting for keyboard (or mouse) I/O get a larger boost than those waiting for disk I/O.  A boost will NOT put a thread into the real-time range.

Linux  Processes have credits (priority); a high number means low priority  additional material on page 796 and following  Enter the Linux top command to see processes and their priorities.

 Processes also have a nice value, which can affect scheduling. See info nice.  nice values range from -20 (least nice) to 19 (nicest)  There’s also a nice command which runs a process with a specific nice value.  There’s also a renice command which can change nice values of a running process. Its format is renice n pid where n is the nice value.  Usually need to be root to get more favorable treatment.

Example  In the scheduling directory, run the script runall; then enter the top command to see the processes.  On another machine, log in as root and enter the command renice -20 pid or renice 19 pid.  There may or may not be much difference, since nice values are suggestions to the Linux scheduler (see info nice). You may have to do both renice commands.  Note: instead of entering a.out& you can use nice -n value a.out& (to run with a different nice value)  killall a.out will kill the processes