Process scheduling Chapter 5.


1 Process scheduling Chapter 5

2 Basic concepts The CPU can execute only one process at a time
The other processes must wait until the CPU is free. The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization.

3 [Flowchart: a process is loaded in memory and assigned to the CPU; while in execution, if it needs I/O it is interrupted (and without another ready process the CPU sits idle), otherwise it continues working.]

4 [Flowchart: P1, P2 and P3 are loaded in memory and assigned to the CPU in turn; each runs until it terminates or needs I/O, at which point it is interrupted and another process executes.]

5 CPU/I/O burst cycle Scheduling is a fundamental function of the OS: all computer resources must be scheduled. Process execution alternates between CPU execution (CPU bursts) and I/O waits (I/O bursts).

6 CPU scheduler Selecting which process should be assigned to the CPU is done by the short-term scheduler. The scheduler selects a process from the processes in the ready queue and allocates the CPU to it. The ready queue is not necessarily implemented as a FIFO queue; it can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered list. All processes in the ready queue are waiting for CPU time.

7 Non Preemptive scheduling
CPU scheduling decisions take place under the following four circumstances: 1. When a process switches from the running state to the waiting state. 2. When a process switches from the running state to the ready state. 3. When a process switches from the waiting state to the ready state. 4. When a process terminates. When scheduling takes place only under circumstances 1 and 4, the scheduling is non-preemptive (there is no choice: the first process in the queue is selected). This approach was used by Windows 3.x and early versions of the Macintosh OS; Windows 95 and all later Windows versions use preemptive scheduling, as do later Mac versions.

8 Preemptive scheduling
Preemptive scheduling can cause problems in the kernel: the kernel may be busy on behalf of a process, reading or modifying shared kernel data structures (for example during a context switch), when the preemptive scheduler selects another process to execute. If the second process then reads or modifies the same structures, they are left in an inconsistent state. One solution is to disable interrupts around such critical sections of kernel code.

9 Preemptive scheduling
CPU scheduling involves: the short-term scheduler, preemptive scheduling, and the dispatcher.

10 Dispatcher The dispatcher gives control of the CPU to the process selected by the short-term scheduler. It has the following roles: switching context; switching to user mode; jumping to the proper location in the user program to restart execution. The dispatcher must be as fast as possible. Dispatch latency is the time the dispatcher needs to stop one process and start another.

11 Scheduling criteria Scheduling criteria measure the performance of a scheduling algorithm. CPU utilization: keep the CPU as busy as possible. Conceptually CPU utilization varies from 0 to 100 percent; in a real system it varies from … ; we want to maximize CPU utilization. Throughput: the number of processes completed per unit of time. For long processes one process can require one hour; for short processes the system can execute ten processes per second. We want to maximize the system throughput.

12 Scheduling criteria (continue)
Turnaround time: an important criterion that measures how long it takes to execute a process. It is defined as the time from the submission of a process to its completion. We want to minimize turnaround time. Waiting time: the amount of time a process spends waiting in the ready queue. It depends on the short-term scheduler and on the number of processes. We want to minimize waiting time. Response time: the time from the submission of a request (process) to the first response. It is an alternative to turnaround time. We want to minimize response time.

13 Scheduling algorithms First come –first served
In this algorithm the process that requests the CPU first is allocated the CPU first. When a process enters the ready queue, its PCB is linked to the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue, and the running process is removed from the queue. The average waiting time is often long. FCFS is non-preemptive scheduling, because the CPU is held by a process until it finishes execution. In FCFS the response time equals the waiting time.

14 Example Consider four processes P1, P2, P3, P4 arriving in the queue at the same time, t = 0, with running times given in milliseconds:

Process   Running time (ms)
P1        24
P2        16
P3        4
P4        5

Under FCFS the processes finish at times 24, 40, 44 and 49 ms, so the waiting times are 0, 24, 40 and 44 ms. Average waiting time = (0 + 24 + 40 + 44) / 4 = 27 ms.
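The example above can be checked with a short simulation (a sketch: the helper name `fcfs_waiting_times` is illustrative, not from the slides):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS when all arrive at t = 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until every earlier one finishes
        clock += burst        # the CPU then runs this process to completion
    return waits

bursts = [24, 16, 4, 5]                 # P1..P4 running times in ms
waits = fcfs_waiting_times(bursts)
print(waits)                            # [0, 24, 40, 44]
print(sum(waits) / len(waits))          # 27.0 (average waiting time in ms)
```

Reordering the list shows how sensitive FCFS is to arrival order: putting the short bursts first lowers the average considerably.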

15 Dynamic FCFS Consider one CPU-bound process (a long process) and three I/O-bound processes (short processes). The I/O-bound processes have short execution times; after their short CPU bursts they return to the waiting queue for I/O, and when the I/O completes they move back to the ready queue. While the long process holds the CPU, the short processes pile up behind it in the ready queue and the I/O devices sit idle, and this continues until the long process finishes its CPU burst (the convoy effect).

16 Short job first scheduling
This algorithm schedules processes based on the length of each process's next CPU burst, choosing the process with the smallest one. Its average waiting time is minimal, so it is considered an optimal algorithm. The difficulty with SJF is estimating the length of the next CPU burst. For short-term scheduling the length of the next CPU burst cannot be known, but it can be predicted (it is expected to be similar to the previous ones). Generally the next CPU burst is estimated as an exponential average of the lengths of the previous CPU bursts, calculated by the formula

τ(n+1) = α · t(n) + (1 − α) · τ(n),  with 0 ≤ α ≤ 1,

where t(n) is the length of the most recent CPU burst, τ(n) is the previous prediction, and τ(n+1) is the predicted length of the next burst.

17 Short job first scheduling calculating the next CPU burst
From the previous equation, the next predicted CPU burst can be slightly smaller or slightly larger than the previous one. If α = 1, then τ(n+1) = t(n): only the most recent burst matters. If α = 1/2, recent history and past history are equally weighted. SJF can be preemptive or non-preemptive. If a new process arrives while a previous one is still executing and the new arrival requires more time than what remains of the current process, the CPU continues executing the current process until it terminates; a preemptive SJF scheduler will instead preempt the current process whenever the new arrival's next CPU burst is shorter than the remaining time of the current one (shortest-remaining-time-first). For processes that arrive at the same time with the same next-CPU-burst length, FCFS is used.
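The exponential-average prediction can be sketched in a few lines (an illustrative helper; the seed τ0 = 10 and the burst history 6, 4, 6, 4 are a common textbook example, not values from these slides):

```python
def predict_next_burst(history, alpha=0.5, tau0=10.0):
    """Fold tau(n+1) = alpha*t(n) + (1 - alpha)*tau(n) over observed bursts."""
    tau = tau0
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 1/2 and initial guess 10, the predictions for bursts
# 6, 4, 6, 4 evolve as 10 -> 8 -> 6 -> 6 -> 5:
print(predict_next_burst([6, 4, 6, 4]))          # 5.0
print(predict_next_burst([7], alpha=1.0))        # 7.0 (only the last burst counts)
print(predict_next_burst([99], alpha=0.0))       # 10.0 (history is ignored)
```

The two extreme calls illustrate the α = 1 and α = 0 cases discussed above.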

18 Priority scheduling algorithm
SJF is a special case of the general priority scheduling algorithm, where the parameter used as the priority is the length of the next CPU burst; the relation is inverse: the longer the next CPU burst, the lower the priority. If two processes arrive with the same priority, FCFS is used. It is important to define what high and low priority mean. Priorities can be internal or external. Internal priorities use measurable quantities such as time limits, memory requirements, the number of open files, and the ratio of average I/O burst to average CPU burst. External priorities are set by criteria outside the OS.

19 Priority scheduling algorithm (continue)
Preemptive priority scheduling: if the newly arrived process has a higher priority than the currently running process, it preempts the current process. Non-preemptive priority scheduling simply puts the new process at the head of the ready queue. The problem with the priority algorithm is starvation: a process with low priority may wait a very long time. Aging is the solution to the starvation problem: the priority of a process that has stayed a long time in the queue is gradually increased.
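A minimal sketch of picking processes by priority with FCFS tie-breaking, using a heap (names and numbers are illustrative; here a lower number means a higher priority, and aging would be implemented by periodically decreasing the priority values of waiting processes):

```python
import heapq

def priority_order(procs):
    """procs: list of (name, priority), lower number = higher priority.
    The arrival index i breaks ties in FCFS order."""
    heap = [(prio, i, name) for i, (name, prio) in enumerate(procs)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)   # pop the highest-priority process
        order.append(name)
    return order

print(priority_order([("P1", 3), ("P2", 1), ("P3", 3), ("P4", 2)]))
# ['P2', 'P4', 'P1', 'P3'] -- P1 before P3: equal priorities resolved FCFS
```

The tuple `(priority, arrival_index, name)` makes the heap comparison itself encode the FCFS tie-break rule.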

20 Round Robin Algorithm It is similar to FCFS
Each process gets a quantum of time to execute, then the CPU switches to the next process. The turnaround time for a long process can be very long. If a process exceeds the quantum, it executes for one quantum and is put at the tail of the ready queue. If there are n processes in the queue and the quantum is q, each process gets 1/n of the CPU time and waits at most (n − 1)·q until its next quantum. If the quantum is too long, RR degenerates into FCFS. If the quantum is very small, RR appears like parallel processing: the n processes seem to be distributed over n processors, each running at 1/n of the speed.
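The quantum behaviour above can be sketched as a small simulation (illustrative; the burst times 24, 3, 3 with q = 4 are a common textbook example, not from these slides):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Completion time of each process under RR, all arriving at t = 0."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)    # exceeded its quantum: back to the tail of the queue
        else:
            done[i] = clock
    return done

print(round_robin([24, 3, 3], quantum=4))    # [30, 7, 10]
```

With a quantum larger than every burst, the same function reproduces FCFS completion times, matching the "too long a quantum degenerates into FCFS" remark.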

21 Problem with Round Robin Algorithm
The problem is the context-switch time. In modern operating systems the quantum is on the order of milliseconds while the switching time is far smaller (well under 10 ms), so switching time < quantum time. Turnaround time also depends on the size of the quantum.

22 Multilevel scheduling
It is used when processes can easily be classified into classes, for example foreground (interactive) processes and background (batch) processes. In multilevel queue scheduling the ready queue is divided into several separate queues, and processes are permanently assigned to one queue based on memory size, process priority, and process type. Each queue has its own scheduling algorithm: the foreground queue may be scheduled using RR, while the background queue is scheduled using FCFS. In addition there is scheduling among the queues, commonly fixed absolute priority (a lower-priority queue runs only when the higher-priority queues are empty) or time slicing, where each queue gets a portion (slice) of the CPU time within which it schedules its own processes.

23 Multilevel feedback algorithm
It gives I/O-bound processes higher priority than CPU-bound processes. If a process with low priority waits a long time in a queue, it can be moved to a higher-priority queue (aging). With queues Q0, Q1 and Q2, the algorithm first executes the processes in Q0; when Q0 is empty it executes the processes in Q1; and only when both are empty are the processes in Q2 executed.

24 Multilevel feedback algorithm (continue)
If a process arrives in Q1 while the CPU is executing processes from Q2, the new process has priority over all processes in Q2 and the CPU will execute it. Consider the case where Q0 has 4 processes and Q1 has 5 processes. The processes of Q0 execute first, using RR; if a process needs more than one quantum to terminate, it is moved to the tail of Q1. When Q0 is empty, the CPU starts executing the processes in Q1, and any process that needs more than one quantum there is moved to the tail of Q2. When Q0 and Q1 are both empty, the CPU executes the processes in Q2 using FCFS.

25 Multilevel feedback algorithm (continue)
A common example uses three queues: Q0 scheduled with RR and a quantum of 8 ms, Q1 with RR and a quantum of 16 ms, and Q2 with FCFS. A multilevel feedback scheduler is defined by the following parameters: the number of queues (the more queues there are, the longer the waiting time); the scheduling algorithm for each queue; the method used to decide when to upgrade a process to a higher-priority queue; the method used to decide when to demote a process to a lower-priority queue; and the method used to determine which queue a process enters when it needs service.
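The three-queue example can be sketched as follows. This is a simplified model: all processes arrive at t = 0, so the preemption-on-arrival rule from the previous slide never triggers, and the burst times are made up for illustration:

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Completion times under a 3-level feedback queue:
    Q0 = RR with quantum 8 ms, Q1 = RR with quantum 16 ms, Q2 = FCFS."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    queues = [deque(range(len(bursts))), deque(), deque()]
    clock = 0
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)  # highest non-empty queue
        i = queues[level].popleft()
        run = remaining[i] if level == 2 else min(quanta[level], remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queues[level + 1].append(i)   # exceeded its quantum: demote
        else:
            done[i] = clock
    return done

print(mlfq([5, 30, 12]))   # [5, 47, 41]
```

Tracing the call: the 5 ms process finishes within Q0's 8 ms quantum; the other two are demoted to Q1, and only the 30 ms process exceeds Q1's 16 ms quantum and ends up in the FCFS queue Q2.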

26 Thread scheduling User level thread Kernel level thread
User-level threads are mapped to kernel-level threads in three models: one-to-one, many-to-one, and many-to-many, so the kernel threads themselves must be scheduled. The scheduling in this context is described by the notion of contention scope.

27 Process Contention Scheduling (PCS) and System Contention Scope (SCS)
When the number of user threads is greater than the number of kernel threads, the thread library schedules the user threads that belong to one process onto the available kernel threads; this competition is process contention scope (PCS), and under PCS the thread with the highest priority runs. After the mapping, the kernel decides which kernel thread gets the CPU; this competition takes place among all threads in the system and is called system contention scope (SCS).

28 Multiprocessor scheduling (homogenous)
The scheduling criteria in a multiprocessor system are different. The problem in multiprocessor scheduling is which process to assign to which processor; for example, if one processor is dedicated to I/O and is busy while some other process in the queue needs I/O, how do we organize that? Multiprocessor scheduling comes in two forms: asymmetric multiprocessing and symmetric multiprocessing.

29 Asymmetric multiprocessing
The master processor is responsible for executing all system processes and makes the scheduling and I/O decisions, so there is no need for data sharing among processors. The slave processors are responsible for executing user-mode processes.

30 Symmetric multiprocessing
Each processor does its own scheduling: it has its own ready queue, examines that queue, and selects which process to execute. Care must be taken so that two processors do not choose the same process and that processes are not lost from the queue. SMP is used by Windows XP, Windows 2000, Solaris, Linux, and Mac OS X.

31 SMP: processor affinity Processor affinity means trying to ensure that if a process is executing on a processor, it keeps executing on that same processor until it terminates. Consider the following scenario: while a process runs on a processor, that processor's cache is populated with the data used by the process, to satisfy fast memory accesses. If the process migrates to another processor, the cache contents of the first processor become invalid and the cache of the second processor must be repopulated with the data the process needs. Because of the high cost of invalidating and repopulating caches, migrating a process between two processors is avoided.

32 Processor affinity Processor affinity takes two forms. Soft affinity: the OS attempts to keep a process on the same processor but does not guarantee that it will not migrate (Solaris). Hard affinity: the OS must ensure that the process does not migrate (Linux).

33 NUMA and CPU Scheduling
Note that memory-placement algorithms can also consider affinity: on a NUMA system, a process should be allocated memory that is close to the processor on which it is scheduled, since access to remote memory is slower.

34 Load balancing In SMP it is important to keep the workload balanced among all processors to avoid the case where one or more processor still idle It is used when each processor has its own queues Private queue: means a queue used for one processor Eligible queue : means a queue used by more than one processor

35 Load balancing Load balancing Push migration Pull migration
Push migration: a specific task periodically checks whether the load is balanced; if it is not, it pushes processes from overloaded processors toward idle or less busy ones. Pull migration: an idle processor pulls a waiting task from a busy processor.

36 Multicore processors Multicore processor allows multithread to run concurrently Multicore means more than one core processor in one chip, each core has its own register set for that it appears to OS as separate physical processor It is faster and consumes less power than each processor in a chip. Memory stall : it happens when a processor enters the memory and still waiting data to be available (cache miss) Cache miss happens when the data needed is not presented in the cache To avoid memory stall, to each core is assigned more than one thread, if one thread is in memory stall the core can switch to the other thread

37 Ultrasparc T2 For operating system hardware each thread appears as logical processor that run software Dual core +two process by core = 4 logical processors 8 core per chip + 4 threads per core =32 processor ……fast Ultrasparc T2

40 Coarse-grained and fine-grained multithreading
Coarse-grained multithreading: a thread executes on a core until a long-latency event (such as a memory stall) happens; the core then switches to another thread. The cost of switching is high, because the instruction pipeline must be flushed. Fine-grained multithreading: the core switches between threads at a much finer granularity; the architecture includes dedicated logic for thread switching, so the cost of switching is low.

41 Scheduling in multicore
Scheduling in multicore is done on two levels. First level: the scheduling decision made by the OS to choose which software thread runs on each hardware thread (any scheduling algorithm can be used). Second level: how each core decides which of its hardware threads to run (round robin can be used). The UltraSPARC discussed above (4 threads per core, 8 cores) uses a simple policy at this level, while the Intel Itanium, a dual-core processor with two hardware threads per core, assigns each thread a priority from 0 (low) to 7 (high) and identifies five events; when one of these events happens, the core switches to the other thread.

42 Virtualization and scheduling
The virtualization software presents one or more virtual CPUs to each virtual machine running on the system and then schedules the use of the physical CPUs among the virtual machines. Typically a virtual machine has one virtual CPU and runs one guest OS. The host operating system creates and manages the virtual machines; each virtual machine has a guest OS, and applications run within that guest.

43 Algorithm evaluation How do we select a CPU scheduling algorithm for a particular system? The criteria used to choose a scheduling algorithm include, for example: maximizing CPU utilization under the constraint that the maximum response time is 1 second; maximizing throughput such that turnaround time is, on average, linearly proportional to total execution time; and minimizing the average waiting time.

