Process Scheduling
Scheduling Strategies
Scheduling strategies broadly fall into two categories. Co-operative scheduling is where the currently running process voluntarily gives up the processor to allow another process to run. The obvious disadvantage is that a process may never give up execution, perhaps because a bug has put it into an infinite loop, and consequently nothing else can ever run. Preemptive scheduling is where the running process is interrupted to stop it and allow another process to run. Each process gets a timeslice to run in; at each context switch a timer is reset and will deliver an interrupt when the timeslice is over.
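The co-operative half of this distinction can be sketched in a few lines of Python using generators; the task and scheduler names below are illustrative, not from the slides. Each task runs only until it chooses to yield, so a task that never yields would block everything else, which is exactly the disadvantage noted above.

```python
# Minimal sketch of co-operative scheduling: each task is a Python
# generator that voluntarily yields control back to the scheduler.

def task(name, steps):
    for i in range(steps):
        # Do one unit of work, then voluntarily give up the processor.
        yield f"{name} step {i}"

def cooperative_scheduler(tasks):
    """Run tasks in turn, but only switch at their yield points."""
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))
            tasks.append(t)          # task yielded; requeue it
        except StopIteration:
            pass                     # task finished; drop it
    return trace

trace = cooperative_scheduler([task("A", 2), task("B", 2)])
print(trace)  # tasks interleave only because each one yields
```

A preemptive scheduler would instead rely on a timer interrupt to force the switch, with no cooperation needed from the task.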
First-Come, First-Served (FCFS)
- runs the processes in the order they arrive at the short-term scheduler
- removes a process from the processor only if it blocks (i.e., goes into the Wait state) or terminates
- wonderful for long processes
- terrible for short processes if they are behind a long process
- FCFS is not preemptive
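The cost to short processes stuck behind a long one is easy to see numerically. The sketch below computes FCFS waiting times; the burst values are made-up example numbers, not from the slides.

```python
# Sketch of FCFS: processes run to completion in arrival order.

def fcfs(bursts):
    """Return per-process waiting times for arrival-ordered CPU bursts."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits for everything before it
        elapsed += burst
    return waits

# A long process (24) followed by two short ones (3, 3): the short
# processes wait 24 and 27 units -- the classic convoy effect.
print(fcfs([24, 3, 3]))  # [0, 24, 27]
```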
Round Robin (RR)
- processes are given equal time slices called quanta (or quantums); each process takes a turn of one quantum
- if a process finishes early, before its quantum expires, the next process starts immediately and gets a full quantum (in some implementations, the next process may get only the rest of the quantum)
- RR gives processes with short bursts (interactive work) much better service
- RR gives processes with long bursts somewhat worse service, but they do not need to wait that much longer
- RR is preemptive: the processor is sometimes taken away from a process that can still use it
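A short simulation makes the turn-taking concrete; the quantum and burst values below are illustrative. Note how the two short processes finish early while the long one keeps cycling through the queue.

```python
from collections import deque

# Sketch of Round Robin with quantum q: each process runs at most q
# units per turn, then goes to the back of the queue if unfinished.

def round_robin(bursts, q):
    """Return (pid, start, end) segments of CPU time."""
    queue = deque(enumerate(bursts))
    timeline, now = [], 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(q, remaining)
        timeline.append((pid, now, now + run))
        now += run
        if remaining > run:               # quantum expired: requeue
            queue.append((pid, remaining - run))
    return timeline

# Short processes (3, 3) finish by time 10 even though a long
# process (24) arrived first -- compare with FCFS.
print(round_robin([24, 3, 3], q=4))
```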
Shortest-Job-First (SJF), i.e., shortest next CPU burst first
- assumes we know the length of the next CPU burst of all ready processes; the length of a CPU burst is the length of time a process would continue executing if given the processor and not preempted
- SJF starts with a default expected burst length for a new process and estimates the length of the next burst from the lengths of recent CPU bursts
- very short processes get very good service
- SJF is optimal at finishing the maximum number of CPU bursts in the shortest time, if the estimates are accurate
- SJF cannot handle infinite loops
- SJF is a non-preemptive algorithm
- poor performance for processes with short burst times arriving after a process with a long burst time has started; processes with long burst times may starve
- a process may mislead the scheduler if it previously had short bursts but is now CPU-intensive (the algorithm fails very badly in such a case)
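The slide says the next burst is estimated from recent bursts; the standard technique for this is exponential averaging, tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n. The sketch below assumes that technique, with an illustrative initial estimate and alpha.

```python
# Exponential-averaging estimate of the next CPU burst.
# tau0 (default estimate for a new process) and alpha are
# illustrative values, not from the slides.

def estimate_next_burst(observed, tau0=10.0, alpha=0.5):
    """tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = tau0
    for t in observed:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# The estimate tracks recent history, so a process that suddenly
# turns CPU-intensive drags its estimate up only gradually -- which
# is exactly how SJF gets misled.
print(estimate_next_burst([6, 4, 6, 4]))  # 5.0
```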
Shortest Remaining Time (SRT)
- when a process arrives at the ready queue with an expected CPU burst time that is less than the expected remaining time of the running process, the new one preempts the running process
- very short processes get very good service
- provably gives the highest throughput (number of processes completed) of all scheduling algorithms, if the estimates are exactly correct
- a process may mislead the scheduler if it previously ran quickly but is now CPU-intensive (the algorithm fails very badly in such a case)
- long processes can starve
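SRT is preemptive SJF: a new arrival with a shorter burst takes the CPU immediately. The sketch below assumes exact burst knowledge (the ideal case the slide describes); the (arrival, burst) pairs are illustrative.

```python
import heapq

# Sketch of SRT: always run the ready process with the least
# remaining time; a new arrival can preempt the running process.

def srt(procs):
    """procs: list of (arrival, burst). Returns finish time per pid."""
    events = sorted((a, pid, b) for pid, (a, b) in enumerate(procs))
    ready, finish, now, i = [], {}, 0, 0
    while i < len(events) or ready:
        if not ready:                         # idle until next arrival
            now = max(now, events[i][0])
        while i < len(events) and events[i][0] <= now:
            a, pid, b = events[i]
            heapq.heappush(ready, (b, pid))   # key = remaining time
            i += 1
        rem, pid = heapq.heappop(ready)
        # Run until finished or until the next arrival, whichever is first.
        until = events[i][0] if i < len(events) else now + rem
        run = min(rem, until - now)
        now += run
        if rem - run > 0:                     # preempted: back in the heap
            heapq.heappush(ready, (rem - run, pid))
        else:
            finish[pid] = now
    return finish

# Each later, shorter arrival preempts the long first process,
# so the long process finishes last.
print(srt([(0, 8), (1, 4), (2, 2)]))
```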
Nonpreemptive Priority Algorithm
- the processor is allocated to the ready process with the highest priority
- shortest-job-first (SJF) is a priority algorithm with the priority defined as the expected CPU burst time (p = t)
Preemptive Priority Algorithm
- when a process arrives at the ready queue with a higher priority than the running process, the new one preempts the running process
- low-priority processes can starve, so aging is used to prevent starvation
- aging: if a process has not received service for a long time, its priority is increased
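Aging can be sketched in a few lines: each time the scheduler picks a process, every process still waiting gets a small priority boost. The boost rate and priority values below are illustrative (higher number = higher priority).

```python
# Sketch of aging in a priority scheduler: waiting raises priority,
# so a low-priority process cannot starve forever.

def pick_and_age(ready, boost=1):
    """Pick the highest-priority pid, then age everyone still waiting."""
    chosen = max(ready, key=ready.get)
    for pid in ready:
        if pid != chosen:
            ready[pid] += boost     # aging: each skipped turn adds priority
    return chosen

ready = {"low": 1, "high": 10}
picks = [pick_and_age(ready) for _ in range(12)]
# "high" wins at first, but "low" ages upward and eventually runs.
print(picks.index("low"))
```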
Multilevel Feedback-Queue Scheduling
Unlike the multilevel queue scheduling algorithm, where processes are permanently assigned to a queue, multilevel feedback-queue scheduling allows a process to move between queues. The movement is driven by the process's CPU-burst behaviour.
- a new process is (usually) inserted at the tail of the top-level FIFO queue
- priorities are implicit in the position of the queue the ready process is currently waiting in; when a process is running on the processor, the OS must remember which queue it came from
- if the process voluntarily relinquishes the CPU, it leaves its queue, and when it becomes ready again it is inserted at the tail of the same queue it left earlier
- after a process has executed for one full time quantum, it is moved down one queue and placed at the tail of that queue; each lower-level queue has a larger time quantum than the queue above it
- this scheme leaves I/O-bound and interactive processes in the higher-priority queues
- in addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue; this form of aging helps prevent starvation of lower-priority processes
- the scheduling algorithm for each queue is usually RR
- the scheduler always picks processes from the head of the highest-level queue; only when the highest-level queue is empty will it take a process from the next lower-level queue (there is no starvation, because there is aging)
- this is close to what the traditional UNIX scheduler used
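The demotion rule can be sketched with a two-level queue; the quanta and burst lengths below are illustrative, and for brevity the sketch omits the voluntary-yield and aging rules described above, showing only "use the whole quantum, move down a level".

```python
from collections import deque

# Sketch of a 2-level multilevel feedback queue: level 0 has a short
# quantum, level 1 a longer one; using the full quantum demotes.

def mlfq(bursts, quanta=(2, 4)):
    queues = [deque(enumerate(bursts)), deque()]
    order = []                             # (pid, level) per dispatch
    while any(queues):
        level = 0 if queues[0] else 1      # serve highest level first
        pid, rem = queues[level].popleft()
        q = quanta[level]
        order.append((pid, level))
        if rem > q:                        # used whole quantum: demote
            queues[min(level + 1, len(queues) - 1)].append((pid, rem - q))
    return order

# The CPU-bound process (burst 10) sinks to level 1, while the short
# interactive-style process (burst 1) finishes at level 0.
print(mlfq([10, 1]))
```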
Priority-Based Dynamic Round Robin
An Intelligent Time Slice (ITS) is calculated that allocates a different time quantum to each process, based on its priority, shortest CPU burst time, and context-switch avoidance time.
Fair-Share Scheduling
Fair-share scheduling is a scheduling algorithm for operating systems in which CPU usage is distributed equally among system users or groups, as opposed to equally among processes.
Lottery Scheduling
Lottery scheduling is a probabilistic scheduling algorithm for processes in an operating system. Each process is assigned some number of lottery tickets, and the scheduler draws a random ticket to select the next process to run. The distribution of tickets need not be uniform; granting a process more tickets gives it a relatively higher chance of selection. The technique can be used to approximate other scheduling algorithms, such as shortest job next.
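A draw is a weighted random choice over ticket counts. The sketch below uses illustrative ticket numbers and a seeded generator so the long-run win frequency is reproducible.

```python
import random

# Sketch of a lottery-scheduling draw: the chance a process is
# selected is proportional to its ticket count.

def draw(tickets, rng):
    """tickets: dict pid -> ticket count. Returns the winning pid."""
    total = sum(tickets.values())
    winner = rng.randrange(total)          # pick one ticket uniformly
    for pid, count in tickets.items():
        if winner < count:
            return pid
        winner -= count                    # skip this pid's tickets

rng = random.Random(42)                    # seeded for repeatability
tickets = {"A": 75, "B": 25}
wins = sum(draw(tickets, rng) == "A" for _ in range(10000))
# With 75% of the tickets, A wins roughly 75% of the draws.
print(wins / 10000)
```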
The O(n) Scheduler
The O(n) scheduler was used in the Linux kernel between versions 2.4 and 2.6. It divides processor time into epochs; within each epoch, every task can execute up to its time slice. If a task does not use all of its time slice, the scheduler adds half of the remaining slice to its allowance so it can execute longer in the next epoch. This scheduler was an improvement over the previously used, very simple scheduler based on a circular queue, but if the number of processes is large, the scheduler itself may consume a notable amount of processor time: picking the next task requires iterating through all currently runnable tasks, so the scheduler runs in O(n) time, where n is the number of runnable processes.
The O(1) Scheduler vs. the Completely Fair Scheduler (O(log n))
- O(1) scheduler (2.6 prior to 2.6.23): http://www.bottomupcs.com/scheduling.html
- Completely Fair Scheduler (2.6.23 onward): http://www.linuxjournal.com/magazine/completely-fair-scheduler and https://en.wikipedia.org/wiki/Completely_Fair_Scheduler