CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready state.
4. Terminates.
Scheduling under 1 and 4 is nonpreemptive. All other scheduling is preemptive.
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request was submitted until the first response is produced, not the completed output (for time-sharing environments)
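The waiting and turnaround criteria can be made concrete with a small sketch. This computes them for nonpreemptive first-come first-served scheduling; the burst times are invented illustration data, not from the slides.

```python
# Hypothetical sketch: computing scheduling criteria for processes run
# first-come first-served. All processes are assumed to arrive at t=0.

def fcfs_metrics(bursts):
    """Return (completion, turnaround, waiting) times per process,
    running each burst to completion in submission order."""
    completion, turnaround, waiting = [], [], []
    clock = 0
    for burst in bursts:
        waiting.append(clock)        # time spent in the ready queue
        clock += burst
        completion.append(clock)
        turnaround.append(clock)     # arrival at t=0, so turnaround == completion

    return completion, turnaround, waiting

bursts = [24, 3, 3]                  # illustrative CPU burst lengths
c, t, w = fcfs_metrics(bursts)
print("avg waiting:", sum(w) / len(w))   # (0 + 24 + 27) / 3 = 17.0
```

The long first job makes the short jobs wait, which is exactly the convoy effect discussed below.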
CPU Scheduling - Classical algorithms
First-come, first-served (FCFS)
  Convoy effect (all processes get bunched up behind a long process)
Shortest job first (shortest next CPU burst)
  "Optimal", but needs an oracle to predict the next burst duration: prediction based on history
Priority: choose higher-priority tasks
  Priority inversion: high-priority tasks wait for lower-priority tasks holding critical resources
  Starvation: low-priority tasks never get their chance
Round robin
Multilevel queue: different schemes for background and interactive tasks
Multilevel feedback queue scheduling
Multiprocessor scheduling
  Gang scheduling, Non-Uniform Memory Access, processor affinity
Real-time scheduling
  Hard and soft real-time systems
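The "oracle" for shortest job first is usually approximated by exponential averaging of measured burst lengths, the standard textbook technique. A sketch, with an illustrative burst history and smoothing factor (tau0 and alpha are assumptions, not values from the slides):

```python
# Next-CPU-burst prediction by exponential averaging:
#   tau_next = alpha * last_burst + (1 - alpha) * tau_old

def predict_bursts(history, tau0=10.0, alpha=0.5):
    """Return the prediction made before each measured burst in history."""
    tau = tau0
    preds = []
    for burst in history:
        preds.append(tau)            # prediction made before this burst runs
        tau = alpha * burst + (1 - alpha) * tau   # fold in the measurement
    return preds

print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
# -> [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0]
```

With alpha = 0.5 the predictor weights recent history and older estimates equally; alpha = 1 would trust only the last burst, alpha = 0 would never adapt.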
Queue Scheduling
The basis of all scheduling is the queue structure. A round-robin scheduler uses a queue but moves cyclically through it at its own speed, instead of waiting for each task in the queue to complete. There are two main types of queue:
First-come first-served (FCFS), also called first-in first-out (FIFO) (simplest and no system overhead)
Sorted queue, in which elements are regularly ordered according to some rule. The most prevalent example of this is shortest job first (SJF) (can cost quite a lot of system overhead because of the sorting; generally used for print queues)
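The two queue types map directly onto standard data structures: a plain FIFO for FCFS and a heap (kept ordered by job length) for the sorted SJF queue. A minimal sketch with invented print jobs:

```python
# FIFO queue vs. sorted queue (shortest job first) over the same jobs.
from collections import deque
import heapq

jobs = [("report", 500), ("memo", 2), ("poster", 90)]   # (name, pages)

# FCFS: serve in arrival order, O(1) per operation.
fifo = deque(jobs)
fifo_order = [fifo.popleft()[0] for _ in range(len(jobs))]

# SJF: keep the queue ordered by size; the heap pays the sorting overhead.
heap = [(pages, name) for name, pages in jobs]
heapq.heapify(heap)
sjf_order = [heapq.heappop(heap)[1] for _ in range(len(jobs))]

print(fifo_order)   # ['report', 'memo', 'poster']  (arrival order)
print(sjf_order)    # ['memo', 'poster', 'report']  (shortest first)
```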
Starvation
Users who print only long jobs do not share this viewpoint: if short jobs keep arriving after one long job, it is possible that the long job will never get printed. Simple queue scheduling is therefore not always desirable.
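Starvation in a strict SJF queue is easy to demonstrate: as long as a short job arrives before each service step, the long job never reaches the head. The job sizes and arrival pattern below are invented for illustration.

```python
# Starvation sketch: one long print job in a shortest-job-first queue,
# with a new short job arriving before every service step.
import heapq

queue = [(1000, "thesis")]           # (pages, name): one long job waiting
served = []

for tick in range(50):
    heapq.heappush(queue, (2, f"memo-{tick}"))   # a short job arrives...
    served.append(heapq.heappop(queue)[1])       # ...and the shortest prints

print("thesis" in served)            # False: the long job starved
```

A common fix (not covered on this slide) is aging: boosting a job's effective priority the longer it waits.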
No Idle CPU States
Utilization in the figure is not too bad, but the problem is that it assumes every program alternates in a cycle between CPU bursts and I/O waits.
Round-Robin
Time-slices (time quanta): no process can hold onto the CPU forever.
Processes which get requeued often (because they spend a lot of time waiting for devices) come around faster, i.e. we don't have to wait for CPU-intensive processes.
The length of the time-slices can be varied so as to give priority to particular processes.
Time sharing is implemented by a hardware timer: on each context switch the timer is loaded for the new process; if it times out, switch to the next process.
Uses a FIFO queue.
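The mechanism above can be sketched as a simulation: a FIFO ready queue, a fixed quantum, and requeueing at the tail when the quantum expires. Burst times and the quantum are illustrative values.

```python
# Round-robin sketch: fixed time quantum over a FIFO ready queue.
from collections import deque

def round_robin(bursts, quantum):
    """Return the finish time of each process (all arrive at t=0)."""
    ready = deque(enumerate(bursts))          # (pid, remaining burst)
    finish = [0] * len(bursts)
    clock = 0
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)         # run until quantum expiry or completion
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))   # requeue at the tail
        else:
            finish[pid] = clock
    return finish

print(round_robin([24, 3, 3], quantum=4))     # [30, 7, 10]
```

Note the trade-off the slide hints at: a smaller quantum improves responsiveness for short jobs but increases the number of context switches.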
Solaris 2 Scheduling
Windows 2000 Priorities
Tools to Play With
FreeBSD: rtprio, idprio
Windows XP: Task Manager
Solaris: prioctl
UNIX: nice, renice, setpriority, ps, top
Procfs - a pseudo file system for process stats: look at the files in /proc/curproc in FreeBSD, /proc in Linux, /proc in Solaris
Processing Within the Kernel
Process model
  Multiple threads within the kernel
  Unique kernel stack per thread - rescheduled and descheduled transparently
  E.g. UNIX
  Easy to use, as blocking is transparent
  Cannot optimize away unwanted stack space
Interrupt model
  Single per-processor stack
  Threads explicitly save state before blocking
  E.g. Quicksilver, V
Possible Solution
User-level threads - one kernel thread for many user-level threads
  At least one kernel-level thread is needed
  Blocked kernel threads still waste resources
Continuations
  Provide code to save state
  Behaves like the process model
  Performance of the interrupt model
  Allows further optimizations
User-Level, Kernel-Level Threads?
User-level threads
  Fast - the kernel does not need to know when there is a switch
  Flexible - each process can use its own scheduler
  Blocking problem - if one thread makes a blocking call, the whole process blocks, since the kernel does not know about the existence of user-level threads
Kernel-level threads
  Can block - the kernel can schedule other kernel-level threads
  Slower - a protection boundary is crossed
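The user-level side can be illustrated with a toy scheduler built on Python generators: switches happen entirely in user space, invisibly to the kernel, which also exposes the drawback that every "thread" must yield voluntarily. This is an illustrative sketch only, not how a real threading library works.

```python
# Toy user-level "thread" scheduler: the kernel sees one thread of control,
# while the user-space loop round-robins over cooperating generators.
from collections import deque

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"          # yield == voluntary context switch

def run(threads):
    """Round-robin over user-level threads until all finish."""
    ready, trace = deque(threads), []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))    # resume the thread until its next yield
            ready.append(t)          # requeue at the tail
        except StopIteration:
            pass                     # thread finished; drop it
    return trace

print(run([worker("a", 2), worker("b", 2)]))   # ['a:0', 'b:0', 'a:1', 'b:1']
```

If one worker called a blocking system call instead of yielding, the whole loop (and thus every "thread") would stall, which is exactly the blocking problem listed above.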
Scheduler Activations
A cooperative mechanism: the kernel informs the user process of the number of virtual "processors" available, as well as of changes in that number; user threads use these processors without informing the kernel of their scheduling decisions.
Scheduling decisions are made by the user-level library; processor allocation, blocked-thread processing etc. are handled by the kernel's scheduler-activation mechanism.
A user thread can request and relinquish virtual processors.
Upcalls go from the kernel to the application; system calls go from the application to the kernel.