Operating Systems Part III: Process Management (CPU Scheduling)
CPU-I/O Burst Cycle
– CPU burst: the CPU executes the process
– I/O burst: the process performs I/O
– Processes alternate between these two states
– The frequency curve of CPU burst durations is roughly exponential: a large number of short CPU bursts and a small number of long CPU bursts
– CPU-bound process: few but long CPU bursts
– I/O-bound process: many very short CPU bursts
Preemptive & Non-preemptive Scheduling
CPU scheduling decisions are needed when a process:
– switches from running to waiting (e.g. an I/O request)
– switches from running to ready (e.g. a timer interrupt)
– switches from waiting to ready (e.g. I/O completion)
– terminates
For the 1st and 4th cases there is no choice: a new process must be selected for execution. For the 2nd and 3rd cases there is a choice, and scheduling can be preemptive.
Preemptive & Non-preemptive Scheduling
Non-preemptive scheduling
– Once the CPU is allocated to a process, it is released only when the process terminates or switches to the waiting state
– Does not require special hardware (e.g. a timer)
Preemptive & Non-preemptive Scheduling
Preemptive scheduling
– A running process may be interrupted at any time
– Costly because:
  – a process may be preempted while updating shared data
  – a kernel process may be preempted (e.g. the medium-term scheduler)
  – one solution is to wait for the system call to finish before switching, but this is unsuitable for real-time computing
Dispatcher
Gives control of the CPU to the process selected by the short-term scheduler. It performs:
– context switching
– switching to user mode
– jumping to the proper location in the user program to restart it
Dispatch latency: the time it takes the dispatcher to stop one process and start another running.
Scheduling Criteria
– CPU utilization: keep the CPU as busy as possible (40% to 100%)
– Throughput: number of processes completed per unit time
Maximize these.
Scheduling Criteria
– Waiting time: total time spent waiting in the ready queue
– Turnaround time: time to execute a process, from submission to completion
– Response time: time from submission until the first response is produced; it does not include the time to output that response
Minimize these.
Scheduling Algorithms
Scheduling: which process in the ready queue should be run (allocated the CPU)?
First Come First Served (FCFS)
– Easily managed with a FIFO queue
– Simple to implement and understand
– Waiting times can be long, depending on the order in which processes arrive
– Non-preemptive (a minimal sketch follows)
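Below is a minimal sketch of FCFS waiting times for an assumed workload; the burst lengths are made-up sample values, not taken from the slides.

```python
# FCFS: processes run in arrival order, so each one waits for the total
# burst time of everything ahead of it. Burst times (ms) are assumed values.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)     # wait = work already queued ahead
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])        # P1, P2, P3 arrive in this order
print(waits, sum(waits) / len(waits))         # [0, 24, 27] -> average 17.0 ms
```

If the same three processes arrived in the order 3, 3, 24, the average waiting time would drop to 3 ms, which is why FCFS waiting time depends so strongly on arrival order.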
Scheduling Algorithms
Shortest Job First (SJF)
– The CPU is assigned to the process with the shortest next CPU burst
– FCFS is used to break ties between processes whose next bursts have the same length
– Provably optimal: it gives the minimum average waiting time
– The difficulty is determining the length of the next CPU burst
Scheduling Algorithms
Shortest Job First (continued)
– The length of the next CPU burst is approximated by an exponential average of the lengths of the process's previous CPU bursts
– Let t_n be the length of the nth (most recent) CPU burst and ξ_{n+1} the predicted value for the next burst; then
  ξ_{n+1} = α·t_n + (1 − α)·ξ_n,  with 0 ≤ α ≤ 1
(a sketch of this prediction follows)
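A minimal sketch of the exponential-average prediction; the initial guess ξ₀ = 10 ms, α = 0.5, and the burst history are assumed values.

```python
# Exponential averaging: the next prediction blends the last measured burst
# with the previous prediction. alpha close to 1 weights recent history more.
def predict_next(xi, t, alpha=0.5):
    return alpha * t + (1 - alpha) * xi

xi = 10                           # assumed initial prediction (ms)
for t in [6, 4, 6, 4, 13, 13]:    # assumed measured CPU bursts (ms)
    xi = predict_next(xi, t)
    print(f"measured {t:2d} ms -> next prediction {xi:.2f} ms")
```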
Scheduling Algorithms
Shortest Job First (continued)
– Long-term scheduling (batch systems): the process time limit specified by the user can be used as the length
– Short-term scheduling: SJF cannot be implemented exactly, since there is no way to know the length of the next CPU burst
  – the implementation relies on predicting the length of the next CPU burst using the formula above
Scheduling Algorithms
Shortest Job First (continued)
– May be:
  – Preemptive: also called shortest-remaining-time-first
  – Non-preemptive: the current process is allowed to finish its CPU burst
Scheduling Algorithms
Priority Scheduling
– A priority is associated with each process, and the CPU is allocated to the process with the highest priority; equal-priority processes are scheduled in FCFS (or SJF) order
– SJF is a special case of priority scheduling: the CPU is allocated to the process with the shortest next burst (highest priority), and the process with the longest next burst has the lowest priority
Scheduling Algorithms
Priority Scheduling (continued)
– Priorities can be defined:
  – internally, from measurable quantities (memory requirements, time limits, number of open files, etc.)
  – externally, by criteria outside the OS (importance of the process, funded projects, etc.)
– Scheduling may be either preemptive or non-preemptive
Scheduling Algorithms
Priority Scheduling (continued)
– Indefinite blocking (starvation): low-priority processes may be left waiting indefinitely for the CPU
– Solution: aging, in which the priority of a waiting process gradually increases until it is finally allocated the CPU (a sketch follows)
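A minimal sketch of aging on top of priority scheduling; the priority values, aging step, and process names are assumed (lower number = higher priority).

```python
# Each time a process is dispatched, every process still waiting gains a
# little priority, so even a low-priority process eventually reaches the top.
def pick_and_age(ready, aging_step=1):
    ready.sort()                              # (priority, name), lowest first
    prio, name = ready.pop(0)                 # dispatch the highest priority
    for i, (p, n) in enumerate(ready):
        ready[i] = (p - aging_step, n)        # waiting processes are aged
    return name

ready = [(5, "P1"), (1, "P2"), (9, "P3")]
while ready:
    print("run", pick_and_age(ready))         # P2, then P1, then P3
```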
Scheduling Algorithms
Round-Robin Scheduling
– Designed specifically for time-sharing systems
– Similar to FCFS, but with preemption
– A time quantum (time slice) is defined, typically between 10 and 100 milliseconds
– The CPU cycles around the ready queue, allocating one time quantum to each process
Scheduling Algorithms
Round-Robin Scheduling (continued)
– Two things can happen:
  – the process uses its full time quantum and is preempted, or
  – the process gives up the CPU voluntarily before the quantum expires
Scheduling Algorithms
Round-Robin Scheduling (continued)
– Performance depends heavily on the size of the time quantum q:
  – q very small: called processor sharing; each of n processes appears to run on its own virtual processor at 1/n the real speed, but a smaller quantum means more context switching, so the quantum should be much larger than the context-switch time (which should be under roughly 10% of the quantum)
  – q very large: scheduling degenerates to an FCFS policy
– Rule of thumb: about 80% of CPU bursts should be shorter than the time quantum
(a round-robin sketch follows)
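A minimal round-robin sketch; the quantum and burst times are assumed sample values.

```python
# Each process gets at most one quantum per pass; if it still has work left,
# it is preempted and goes to the back of the ready queue.
from collections import deque

def round_robin(bursts, quantum):
    ready = deque(bursts.items())             # (name, remaining CPU time)
    clock = 0
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))   # preempted
        else:
            print(f"{name} finishes at t={clock}")

round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
# P2 finishes at t=7, P3 at t=10, P1 at t=30
```

With a very large quantum (say 30) the same workload completes in FCFS order; with a very small quantum the number of context switches, and hence the overhead, grows.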
Scheduling Algorithms
Multilevel Queue Scheduling
– The ready queue is partitioned into several separate queues
– Processes are permanently assigned to one queue based on properties such as priority or memory size (e.g. separate queues for foreground and background processes)
– Scheduling between the queues, for example:
  – absolute priority: system -> interactive -> batch -> low-priority
  – time slicing: e.g. 80% of CPU time to the foreground queue, 20% to the background queue
Scheduling Algorithms
Multilevel Feedback Queue Scheduling
– Allows processes to move between queues
– Separates processes with different CPU-burst characteristics:
  – a process that uses too much CPU time is moved to a lower-priority queue
  – I/O-bound and interactive processes are kept in the higher-priority queues
– Aging is used: a process that waits too long in a low-priority queue is moved to a higher-priority queue to prevent starvation
Scheduling Algorithms
Multilevel Feedback Queue (continued)
– General parameters:
  – the number of queues
  – the scheduling algorithm for each queue
  – the method used to upgrade a process to a higher-priority queue
  – the method used to demote a process to a lower-priority queue
  – the method used to determine which queue a process enters initially
– Considered the most general scheme (it can be configured to fit a given system), but also the most complex
(a small sketch follows)
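A minimal two-level feedback-queue sketch; the quanta and burst times are assumed values, and both levels use round robin.

```python
# Queue 0 has a short quantum; a process that uses the whole quantum is
# demoted to queue 1, which runs only when queue 0 is empty.
from collections import deque

def mlfq(bursts, quanta=(4, 8)):
    queues = [deque(bursts.items()), deque()]
    clock = 0
    while any(queues):
        level = 0 if queues[0] else 1
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        clock += run
        if remaining > run:
            queues[min(level + 1, 1)].append((name, remaining - run))  # demote or recycle
        else:
            print(f"{name} finishes at t={clock} (queue {level})")

mlfq({"P1": 20, "P2": 3, "P3": 6})
```

Note that this sketch omits aging and the initial-placement policy; those are two of the configurable parameters listed above.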
Multiprocessor Scheduling
– Scheduling becomes more complex
– Many possibilities have been tried; there is no single best solution
– Heterogeneous system:
  – the processors are different (e.g. a distributed system)
  – a process can only run on the type of processor it was compiled for
Multiprocessor Scheduling
Homogeneous system
– Identical CPUs within a multiprocessor system
– A process can run on any CPU
– Load sharing is possible:
  – with a separate queue per processor, one processor may be busy while another sits idle with an empty queue
  – this is remedied with a common ready queue
Multiprocessor Scheduling
Homogeneous system (continued)
– Load sharing: two possibilities
  – Self-scheduling (symmetric multiprocessing, SMP): each processor examines the common ready queue and selects a process to execute; careful programming is required so that no two processors choose the same process (see the sketch below)
  – Master-slave (asymmetric multiprocessing): one processor is appointed as the scheduler; in some systems the master also performs other system activities, while the slaves execute only user code
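A minimal self-scheduling sketch using threads to stand in for processors; the process names and burst times are assumed values, and the shared queue's internal lock is what keeps two "processors" from picking the same process.

```python
# Each worker repeatedly removes one process from the common ready queue
# and "executes" it; queue.Queue makes the removal atomic.
import queue
import threading
import time

ready = queue.Queue()                              # common ready queue
for name, burst in [("P1", 0.02), ("P2", 0.01), ("P3", 0.03)]:
    ready.put((name, burst))

def cpu(cpu_id):
    while True:
        try:
            name, burst = ready.get_nowait()       # atomic: no double dispatch
        except queue.Empty:
            return
        time.sleep(burst)                          # simulate the CPU burst
        print(f"CPU {cpu_id} ran {name}")

workers = [threading.Thread(target=cpu, args=(i,)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```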
Real-Time Scheduling
Hard real-time
– Resource reservation: the scheduler either admits a process with a guaranteed response time or rejects it
– Such guarantees are impossible on systems with virtual memory or secondary storage
– Typically consists of special-purpose software running on hardware dedicated to the critical processes
Soft real-time (less restrictive)
– Requires only that critical processes receive priority over non-critical ones
– Can support multimedia and high-speed graphics
Algorithm Analysis
Analytic Evaluation
– Uses the algorithm and the system workload to produce a formula or number that evaluates performance
– Deterministic modeling:
  – takes a particular predetermined (sample) workload and computes the performance of each algorithm on that workload
  – simple and fast, and gives exact numbers for the given input
  – requires knowledge that is too specific and too exact to be generally useful, since the processes that run from day to day vary greatly
(a small worked example follows)
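A minimal deterministic-modeling sketch: for one assumed workload (burst times in ms, all processes arriving at time 0), compute the average waiting time under FCFS and under non-preemptive SJF.

```python
# Deterministic modeling gives exact numbers, but only for this one workload.
def average_wait(bursts):
    elapsed, total_wait = 0, 0
    for burst in bursts:
        total_wait += elapsed    # each process waits for everything before it
        elapsed += burst
    return total_wait / len(bursts)

workload = [10, 29, 3, 7, 12]                     # assumed sample bursts
print("FCFS:", average_wait(workload))            # 28.0 ms
print("SJF :", average_wait(sorted(workload)))    # 13.0 ms
```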
Algorithm Analysis
Queueing Models
– A distribution of CPU bursts is used instead of a predetermined workload
Simulations
– Software models of the major system components are used
– Expensive: can require hours of computer time
– Design, coding, and debugging of the simulator are complex
Algorithm Analysis
Implementation
– Simulations still have limited accuracy
– The only way to evaluate an algorithm exactly is to implement it and put it to the test in a real environment
– Disadvantages:
  – cost
  – the reaction of users to a constantly changing operating system