Operating System Concepts, 7th Edition. Abraham Silberschatz, Peter Baer Galvin, Greg Gagne. Prerequisite: CSE212.


1 Operating System Concepts, 7th Edition. Abraham Silberschatz, Peter Baer Galvin, Greg Gagne. Prerequisite: CSE212

2 Chapter 5: CPU Scheduling

3 Chapter 5: CPU Scheduling
Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Thread Scheduling Operating Systems Examples Java Thread Scheduling Algorithm Evaluation

4 Basic Concepts In a single-processor system, only one process can run at a time; any others must wait until the CPU is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. A process executes until it must wait, typically for the completion of some I/O request. In multiprogramming, several processes are kept in memory at one time; when one process has to wait, the OS takes the CPU away from that process and gives it to another process. Maximum CPU utilization is obtained with multiprogramming.

5 Basic Concepts CPU burst – an interval during which the process is executing on the CPU.
I/O burst – an interval during which the process is performing or waiting for I/O. Process execution (CPU–I/O burst cycle) – process execution consists of a cycle of CPU execution (CPU burst) and I/O wait (I/O burst). CPU-burst distributions have been analyzed in order to improve CPU scheduling. A CPU-bound program typically has a few long CPU bursts and little I/O activity; an I/O-bound program typically has many short CPU bursts and a great deal of I/O activity.

6 Alternating Sequence of CPU And I/O Bursts
A process alternates between CPU bursts and I/O bursts throughout its execution, and always ends with a final CPU burst, after which it changes state to terminated.

7 Histogram of CPU-burst Times
Processes differ in how they use CPU bursts, depending on the process itself and on the system's scheduling. Typically a process uses CPU bursts heavily in its early phase and only a little later on (in the later phase most of the work consists of I/O bursts).

8 CPU Scheduler Whenever the CPU becomes idle, the OS selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. The selection is carried out by the short-term scheduler (or CPU scheduler). CPU scheduling decisions may take place when a process: 1. Switches from the running to the waiting state 2. Switches from the running to the ready state (an interrupt occurs) 3. Switches from the waiting to the ready state (completion of I/O) 4. Terminates. Scheduling under 1 and 4 is nonpreemptive; all other scheduling is preemptive.

9 Nonpreemptive/preemptive Scheduling
Nonpreemptive scheduling: switching occurs only from the running state to the waiting state, or on termination. Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. Nonpreemptive scheduling was used by Microsoft Windows 3.x. Preemptive scheduling also allows a process to switch from the running state to the ready state (when an interrupt occurs) and from the waiting state to the ready state (at completion of I/O). Preemptive scheduling was used by Microsoft Windows 95 and Mac OS X (Apple Macintosh). Nonpreemptive scheduling – no task switching: a process has exclusive use of the CPU. Suited to batch processing; inconvenient for interactive use, because if a job is submitted incompletely or incorrectly it must be stopped and resubmitted from the start. Preemptive scheduling – tasks are switched: a process holds the CPU only for a short interval so that CPU time can be shared with other processes. Suited to multitasking; the CPU is not fully utilized because of context-switch overhead, but it is convenient to use, since work can be submitted continuously and a faulty job can be stopped and restarted at that point immediately.

10 Dispatcher A component involved in CPU scheduling.
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. The dispatcher should be as fast as possible, since it is invoked during every process switch. Dispatch latency – the time it takes for the dispatcher to stop one process and start another running

11 Scheduling Criteria Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. The properties of the various algorithms must be considered; the criteria for choosing among scheduling algorithms are: CPU utilization – keep the CPU as busy as possible Throughput – number of processes that complete their execution per time unit Turnaround time – amount of time to execute a particular process (the interval from submission of the job to its completion: time waiting in the ready queue + time waiting for I/O + time spent executing on the CPU) Waiting time – amount of time a process has been waiting in the ready queue Response time – amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments); it is counted from job creation to the first response, and the system should respond as quickly as possible

12 The Goal of Optimization Criteria
In order to reach the goal of optimization, each criterion must be maximized or minimized accordingly, as follows: Maximize CPU utilization (so that the CPU is busy most of the time executing something useful) Maximize throughput (system performance is better if we can get a lot of work done) Minimize turnaround time (make process execution complete sooner) Minimize waiting time (processes do not have to wait long in the ready queue to get the CPU) Minimize response time (make the user happy with a much quicker response from the computer system)

13 CPU Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Examples of scheduling algorithms are: First-Come, First-Served (FCFS) Scheduling (whoever comes first is served first) Shortest-Job-First (SJF) Scheduling (the shortest process is executed first) Priority Scheduling (the highest-priority process is executed first; the rest wait for their turns) Round-Robin (RR) Scheduling (execute each process within a limited time quantum; the scheduler puts an unfinished process back into the round-robin queue and allocates the CPU to the next available process in the queue) FCFS – the simplest scheduling discipline and very fair, because processes are served in queue order.

14 First-Come, First-Served (FCFS) Scheduling
Process Burst Time: P1 24, P2 3, P3 3. Suppose that the processes arrive at time 0 in the order P1, P2, P3. The Gantt chart for the schedule is: | P1 (0–24) | P2 (24–27) | P3 (27–30) |. Waiting time for P1 = 0; P2 = 24; P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17

15 FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: | P2 (0–3) | P3 (3–6) | P1 (6–30) |. Waiting time for P1 = 6; P2 = 0; P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3. Much better than the previous case. Convoy effect: short processes stuck waiting behind a long process.
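The two orderings above can be checked with a short simulation. The following is a minimal sketch in Python (not part of the original slides); the process names and burst times come from the example, and all processes are assumed to arrive at time 0:

```python
def fcfs_waiting_times(bursts):
    """bursts: list of (name, burst), all arriving at time 0 in queue order.
    Under FCFS each process waits for every process ahead of it to finish."""
    waits, clock = {}, 0
    for name, burst in bursts:
        waits[name] = clock      # waits until all earlier jobs complete
        clock += burst
    return waits

w1 = fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)])   # long job first
w2 = fcfs_waiting_times([("P2", 3), ("P3", 3), ("P1", 24)])   # short jobs first
print(sum(w1.values()) / 3)   # 17.0 -- the convoy effect
print(sum(w2.values()) / 3)   # 3.0
```

Running it reproduces the convoy effect: the same three jobs yield an average wait of 17 when the long job goes first, but only 3 when the short jobs go first.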

16 Advantage/disadvantage of FCFS
There are advantages and disadvantages of First-Come, First-Served scheduling: The scheduling algorithm is simple and fast – nothing needs to be evaluated except allocating the next process in the queue to the CPU. The order of the processes' execution times in the queue plays a major role in how the criteria come out: CPU utilization may be maximized if the processes are CPU-bound; throughput is unpredictable; turnaround time is unpredictable; waiting time is unpredictable; response time is unpredictable.

17 Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. Two schemes: nonpreemptive – once the CPU is given to a process it cannot be preempted until it completes its CPU burst; preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF). SJF is optimal – it gives the minimum average waiting time for a given set of processes

18 Example of Non-Preemptive SJF
Process Arrival Time Burst Time: P1 0 7; P2 2 4; P3 4 1; P4 5 4. SJF (nonpreemptive) Gantt chart: | P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |. Waiting time for P1 = 0; P2 = 6; P3 = 3; P4 = 7. Average waiting time = (0 + 6 + 3 + 7)/4 = 4
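The schedule above can be reproduced with a small nonpreemptive-SJF simulator. A hedged sketch in Python; the arrival and burst values are the example data, and ties on burst length are broken by arrival order (an assumption of this sketch):

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns waiting time per process."""
    by_arrival = sorted(procs, key=lambda p: p[1])
    pending, waits = [], {}
    clock, i = 0, 0
    while len(waits) < len(procs):
        # admit everything that has arrived by now
        while i < len(by_arrival) and by_arrival[i][1] <= clock:
            pending.append(by_arrival[i])
            i += 1
        if not pending:                  # CPU idle until the next arrival
            clock = by_arrival[i][1]
            continue
        pending.sort(key=lambda p: p[2])   # shortest next CPU burst first
        name, arrival, burst = pending.pop(0)
        waits[name] = clock - arrival      # started at `clock`, arrived at `arrival`
        clock += burst                     # runs to completion (nonpreemptive)
    return waits

waits = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(waits, sum(waits.values()) / 4)   # average waiting time 4.0
```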

19 Example of Preemptive SJF
Process Arrival Time Burst Time: P1 0 7; P2 2 4; P3 4 1; P4 5 4. SJF (preemptive, SRTF) Gantt chart: | P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |. Waiting time for P1 = 9; P2 = 1; P3 = 0; P4 = 2. Average waiting time = (9 + 1 + 0 + 2)/4 = 3
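The preemptive variant (SRTF) can be simulated one time unit at a time. A sketch in Python using the same four processes; `srtf` is an illustrative helper name, not an API from the text:

```python
def srtf(procs):
    """Shortest-Remaining-Time-First, simulated one time unit at a time.
    procs: list of (name, arrival, burst). Returns waiting time per process."""
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                      # CPU idle: nothing has arrived yet
            clock += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        remaining[cur] -= 1                # run it for one time unit
        clock += 1
        if remaining[cur] == 0:
            finish[cur] = clock
            del remaining[cur]
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}

waits = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(waits, sum(waits.values()) / 4)   # average waiting time 3.0
```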

20 Determining Length of Next CPU Burst
Can only estimate the length. Can be done by using the lengths of previous CPU bursts, with exponential averaging: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the nth actual CPU burst, τ(n+1) is the predicted value for the next CPU burst, and 0 ≤ α ≤ 1.

21 Prediction of the Length of the Next CPU Burst

22 Examples of Exponential Averaging
 =0 n+1 = n Recent history does not count  =1 n+1 =  tn Only the actual last CPU burst counts If we expand the formula, we get: n+1 =  tn+(1 - ) tn -1 + … +(1 -  )j  tn -j + … +(1 -  )n +1 0 Since both  and (1 - ) are less than or equal to 1, each successive term has less weight than its predecessor

23 Advantage/Disadvantage of SJF
There are advantages and disadvantages of Shortest-Job-First scheduling: The job with the least execution time is executed first, so this approach suits an environment with many short jobs. CPU utilization may not be maximized, since time is spent on context switching between jobs. Throughput is likely to be maximized, since short jobs complete and leave the system quickly. Turnaround time is likely to be minimized, but it also depends on the number of jobs in the queue. Waiting time is likely to be minimized, but it also depends on the number of jobs in the queue. Response time is likely to be minimized, but it also depends on the number of jobs in the queue.

24 Priority Scheduling

25 Priority Scheduling A priority number (an integer) is associated with each process. The CPU is allocated to the process with the highest priority (commonly, smallest integer = highest priority, depending on the system policy). Can be preemptive or nonpreemptive. SJF is priority scheduling where the priority is the predicted next CPU burst time. Problem: starvation – low-priority processes may never execute. Solution: aging – as time progresses, increase the priority of the waiting process.
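Aging can be sketched in a few lines. Below is a minimal nonpreemptive illustration in Python (smaller number = higher priority); the process set and the `age_step` parameter are assumptions of this sketch, not values from the slides:

```python
def priority_schedule_with_aging(procs, age_step=1):
    """Nonpreemptive priority scheduling with aging (smaller number = higher
    priority). Each time a process completes, every process still waiting has
    its priority number decreased, so low-priority work cannot starve forever."""
    ready = [[prio, name, burst] for name, prio, burst in procs]
    order = []
    while ready:
        ready.sort(key=lambda p: p[0])        # pick the highest priority
        prio, name, burst = ready.pop(0)
        order.append(name)
        for entry in ready:                   # aging: boost the waiting processes
            entry[0] -= age_step
    return order

order = priority_schedule_with_aging(
    [("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2), ("P4", 5, 1), ("P5", 2, 5)])
print(order)   # ['P2', 'P5', 'P1', 'P3', 'P4']
```

In a real (preemptive) scheduler the boost would be applied on a timer tick rather than on completion, but the effect is the same: a waiting process's priority keeps rising until it is eventually selected.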

26 Round Robin (RR) Scheduling
Each process gets a small unit of CPU time (time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units. Performance: if q is large, RR degenerates to FIFO; if q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

27 Example of RR with Time Quantum = 20
Process Burst Time: P1 53, P2 17, P3 68, P4 24. The Gantt chart is: | P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |. Typically, RR gives higher average turnaround than SJF, but better response.
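The Gantt chart above can be regenerated mechanically. A sketch in Python, assuming all four processes arrive at time 0 (as in the slide's example):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst), all arriving at time 0.
    Returns the Gantt chart as a list of (name, start, end) slices."""
    queue = deque(procs)
    chart, clock = [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run at most one quantum
        chart.append((name, clock, clock + run))
        clock += run
        if remaining > run:                  # unfinished: back of the queue
            queue.append((name, remaining - run))
    return chart

chart = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
print(chart)   # first slice ('P1', 0, 20), last slice ('P3', 154, 162)
```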

28 The effects of the amount of Time Quantum and Context Switch Time

29 Turnaround Time Varies With The Time Quantum

30 Multilevel Queue Scheduling
Ready queue is partitioned into separate queues: foreground (interactive) background (batch) Each queue has its own scheduling algorithm foreground – RR background – FCFS Scheduling must be done between the queues Fixed priority scheduling; (i.e., serve all from foreground then from background). Possibility of starvation. Time slice – each queue gets a certain amount of CPU time which it can schedule among its processes; i.e., 80% to foreground in RR 20% to background in FCFS

31 Multilevel Queue Scheduling

32 Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way Multilevel-feedback-queue scheduler defined by the following parameters: number of queues scheduling algorithms for each queue method used to determine when to upgrade a process method used to determine when to demote a process method used to determine which queue a process will enter when that process needs service

33 Example of Multilevel Feedback Queue
Three queues: Q0 – RR with time quantum 8 milliseconds Q1 – RR time quantum 16 milliseconds Q2 – FCFS Scheduling A new job enters queue Q0 which is served FCFS. When it gains CPU, job receives 8 milliseconds. If it does not finish in 8 milliseconds, job is moved to queue Q1. At Q1 job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
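The three-queue scheme above can be sketched as follows. This is a simplified Python model with stated assumptions: the job set is invented for illustration, and it ignores newly arriving jobs preempting a job already running in a lower queue (a real multilevel feedback scheduler would preempt):

```python
from collections import deque

def mlfq(jobs):
    """Simplified model of the slide's scheme: Q0 = RR q=8, Q1 = RR q=16,
    Q2 = FCFS. jobs: list of (name, burst), all arriving at time 0.
    Returns (name, queue_level, start, end) execution slices."""
    q = [deque(jobs), deque(), deque()]
    quanta = [8, 16, None]                       # None = run to completion (FCFS)
    trace, clock = [], 0
    while any(q):
        level = next(i for i in range(3) if q[i])   # highest nonempty queue
        name, remaining = q[level].popleft()
        run = remaining if quanta[level] is None else min(quanta[level], remaining)
        trace.append((name, level, clock, clock + run))
        clock += run
        if remaining > run:                      # used its whole quantum: demote
            q[min(level + 1, 2)].append((name, remaining - run))
    return trace

trace = mlfq([("A", 30), ("B", 5)])
print(trace)
```

In the trace, the short job B finishes entirely in Q0 while the long job A drifts down to Q2, which is exactly the behavior the scheme is designed for: interactive/short work keeps priority, CPU-bound work sinks to the batch queue.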

34 Multilevel Feedback Queues

35 Multiple-Processor Scheduling
CPU scheduling is more complex when multiple CPUs are available. Homogeneous processors (all processors have the same specification; they are identical) within a multiprocessor. (Heterogeneous processors means the processors have different specifications; they are not identical.) Load sharing must be considered, to make sure all CPUs are utilized equally. Asymmetric multiprocessing – only one processor accesses the system data structures, reducing the need for data sharing. Symmetric multiprocessing – each processor is self-scheduling; all processes may be in a common ready queue, or each processor may have its own private queue of ready processes.

36 Concern regarding to multiprocessing
In a multiprocessing environment, care must be taken in accessing and updating common data structures. Processor affinity: each processor in the multiprocessing environment has its own cache memory, so everything works out well only if a process (or the remainder of its execution) keeps being assigned to the same processor. If the remainder of the process is assigned to a different processor, the information in the first processor's cache must be repopulated in the second processor's cache, and that takes time. Load balancing: it is important to keep the workload balanced among all processors to fully utilize the benefit of having more than one processor. Load balancing attempts to keep the workload evenly distributed across all processors in an SMP (symmetric multiprocessing) system, by push migration or pull migration.

37 Real-Time Scheduling Hard real-time systems – required to complete a critical task within a guaranteed amount of time Soft real-time computing – requires that critical processes receive priority over less fortunate ones

38 Thread Scheduling Local scheduling – how the threads library decides which thread to put onto an available LWP (lightweight process) Global scheduling – how the kernel decides which kernel thread to run next

39 Pthread Scheduling API
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);  /* forward declaration */

int main(int argc, char *argv[])
{
   int i;
   pthread_t tid[NUM_THREADS];
   pthread_attr_t attr;
   /* get the default attributes */
   pthread_attr_init(&attr);
   /* set the scheduling scope to PROCESS or SYSTEM */
   pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
   /* set the scheduling policy - FIFO, RR, or OTHER */
   pthread_attr_setschedpolicy(&attr, SCHED_OTHER);
   /* create the threads */
   for (i = 0; i < NUM_THREADS; i++)
      pthread_create(&tid[i], &attr, runner, NULL);

40 Pthread Scheduling API
   /* now join on each thread */
   for (i = 0; i < NUM_THREADS; i++)
      pthread_join(tid[i], NULL);
   return 0;
}

/* Each thread will begin control in this function */
void *runner(void *param)
{
   printf("I am a thread\n");
   pthread_exit(0);
}

41 Operating System Examples
Solaris scheduling Windows XP scheduling Linux scheduling

42 Solaris 2 Scheduling

43 Solaris Dispatch Table

44 Windows XP Priorities

45 Linux Scheduling Two algorithms: time-sharing and real-time
Prioritized, credit-based – the process with the most credits is scheduled next. Credits are subtracted when a timer interrupt occurs. When a process's credit reaches 0, another process is chosen. When all runnable processes have credit = 0, re-crediting occurs, based on factors including priority and history. Real-time: soft real-time; POSIX.1b compliant – two classes, FCFS and RR; the highest-priority process always runs first.

46 The Relationship Between Priorities and Time-slice length

47 List of Tasks Indexed According to Priorities

48 Algorithm Evaluation How can we select a CPU scheduling algorithm for a particular system? To select an algorithm, our criteria must include several measures, such as: maximizing CPU utilization under the constraint that the maximum response time is 1 second; maximizing throughput such that turnaround time is (on average) linearly proportional to total execution time. Evaluation methods: Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload; Queueing models; Implementation

49 Algorithm Evaluation

50 End of Chapter 5


52 Dispatch Latency


54 Java Thread Scheduling
JVM Uses a Preemptive, Priority-Based Scheduling Algorithm FIFO Queue is Used if There Are Multiple Threads With the Same Priority

55 Java Thread Scheduling (cont)
JVM Schedules a Thread to Run When: The Currently Running Thread Exits the Runnable State A Higher Priority Thread Enters the Runnable State * Note – the JVM Does Not Specify Whether Threads are Time-Sliced or Not

56 Time-Slicing Since the JVM Doesn't Ensure Time-Slicing, the yield() Method May Be Used:
while (true) {
   // perform CPU-intensive task
   . . .
   Thread.yield();
}
This Yields Control to Another Thread of Equal Priority

57 Thread Priorities Priority Comment
Thread.MIN_PRIORITY Minimum Thread Priority Thread.MAX_PRIORITY Maximum Thread Priority Thread.NORM_PRIORITY Default Thread Priority Priorities May Be Set Using setPriority() method: setPriority(Thread.NORM_PRIORITY + 2);

