Chapter Four: Processor Management


Topics
- Introduction
- Job Scheduling Versus Process Scheduling
- Process Scheduler: Process Identification, Process Status, Process State, Accounting
- Process Scheduling Policies
- Process Scheduling Algorithms
- Cache Memory
- Interrupts and Interrupt Management

How Does the Processor Manager Allocate CPU(s) to Jobs?
- The Processor Manager performs job scheduling, process scheduling, and interrupt management.
- In single-user systems, the processor is busy only when the user is executing a job; at all other times it is idle, so processor management is simple.
- In a multiprogramming environment, the processor must be allocated to each job in a fair and efficient manner, which requires a scheduling policy and a scheduling algorithm.

Some Important Terms
- Program: an inactive unit, such as a file stored on a disk. To an operating system, a program or job is a unit of work that has been submitted by a user; "job" is usually associated with batch systems.
- Process (task): an active entity that requires a set of resources, including a processor and special registers, to perform its function; a single instance of an executable program.

Important Terms - 2
- Thread of control (thread): a portion of a process that can run independently.
- Processor (CPU, central processing unit): the part of the machine that performs calculations and executes programs.
- Multiprogramming requires that the processor be "allocated" to each job or to each process for a period of time and "deallocated" at an appropriate moment.

Job Scheduling vs. Process Scheduling
The Processor Manager has two sub-managers:
- The Job Scheduler, in charge of job scheduling.
- The Process Scheduler, in charge of process scheduling.

Job Scheduler
- High-level scheduler.
- Selects jobs from a queue of incoming jobs and places them in the process queue (batch or interactive), based on each job's characteristics.
- Its goal is to put jobs in a sequence that uses all of the system's resources as fully as possible.
- It strives for a balanced mix of jobs with heavy I/O interaction and jobs with heavy computation, and tries to keep most system components busy most of the time.

Process Scheduler
- Low-level scheduler: assigns the CPU to execute the processes of those jobs placed on the READY queue by the Job Scheduler.
- After a job has been placed on the READY queue by the Job Scheduler, the Process Scheduler takes over.
- It determines which jobs will get the CPU, when, and for how long; decides when processing should be interrupted; determines which queues the job should be moved to during execution; and recognizes when a job has concluded and should be terminated.

CPU Cycles and I/O Cycles
To schedule the CPU, the Process Scheduler takes advantage of a common trait among most computer programs: they alternate between CPU cycles and I/O cycles.

Poisson Distribution Curve
- I/O-bound jobs (such as printing a series of documents) have many brief CPU cycles and long I/O cycles.
- CPU-bound jobs (such as finding the first 300 prime numbers) have long CPU cycles and shorter I/O cycles.
- The total effect of all CPU cycles, from both I/O-bound and CPU-bound jobs, approximates a Poisson distribution curve (Figure 4.1).

Processor Manager: Middle-Level Scheduler
- In a highly interactive environment there is a third layer, called the middle-level scheduler.
- It removes active jobs from memory to reduce the degree of multiprogramming and allows jobs to be completed faster.

Job and Process Status
Job status: one of the five states that a job takes as it moves through the system:
- HOLD
- READY
- WAITING
- RUNNING
- FINISHED

[Job/process state-transition diagram: a job is admitted from HOLD to READY, dispatched from READY to RUNNING, moved from RUNNING back to READY by an interrupt, from RUNNING to WAITING on an I/O or event wait, from WAITING back to READY on I/O or event completion, and from RUNNING to FINISHED on exit. The HOLD-to-READY transition is handled by the Job Scheduler; the remaining transitions are handled by the Process Scheduler.]

Transition Among Process States
- HOLD to READY: initiated by the Job Scheduler using a predefined policy.
- READY to RUNNING: handled by the Process Scheduler using some predefined algorithm.
- RUNNING back to READY: handled by the Process Scheduler according to some predefined time limit or other criterion.
- RUNNING to WAITING: handled by the Process Scheduler and initiated by an instruction in the job.
- WAITING to READY: handled by the Process Scheduler and initiated by a signal from the I/O device manager that the I/O request has been satisfied and the job can continue.
- RUNNING to FINISHED: handled by the Process Scheduler or the Job Scheduler.
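These transitions form a small state machine. A minimal sketch in Python (the state names follow the slides; the dictionary of allowed transitions is an illustration, not code from the textbook):

```python
from enum import Enum, auto

class JobState(Enum):
    HOLD = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    FINISHED = auto()

# Allowed transitions, following the slide: HOLD->READY (Job Scheduler);
# READY->RUNNING, RUNNING->READY, RUNNING->WAITING, WAITING->READY,
# and RUNNING->FINISHED (Process Scheduler).
ALLOWED = {
    JobState.HOLD:     {JobState.READY},
    JobState.READY:    {JobState.RUNNING},
    JobState.RUNNING:  {JobState.READY, JobState.WAITING, JobState.FINISHED},
    JobState.WAITING:  {JobState.READY},
    JobState.FINISHED: set(),
}

def transition(current: JobState, target: JobState) -> JobState:
    """Return the new state, or raise if the move is not permitted."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```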

Process Control Block (PCB)
A Process Control Block (PCB) is a data structure that contains basic information about the job:
- Process identification
- Process status (HOLD, READY, RUNNING, WAITING)
- Process state (process status word, register contents, main memory info, resources, process priority)
- Accounting (CPU time used, total time in the system, I/O operations, number of input records read, etc.)
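A PCB is essentially a record holding these four groups of fields. A minimal sketch, assuming illustrative field names (a real operating system keeps this in kernel data structures, not a Python dataclass):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    # Process identification
    pid: int
    # Process status: HOLD, READY, RUNNING, or WAITING
    status: str = "HOLD"
    # Process state: enough context to resume the job later
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    main_memory_info: dict = field(default_factory=dict)
    priority: int = 0
    # Accounting
    cpu_time_used: float = 0.0
    time_in_system: float = 0.0
    io_operations: int = 0
    input_records_read: int = 0
```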

PCBs and Queuing
- The PCB of a job is created when the Job Scheduler accepts the job and is updated as the job moves from beginning to termination.
- Queues use PCBs to track jobs: the PCBs, not the jobs themselves, are linked to form the queues. For example, the PCBs of all ready jobs are linked on the READY queue, and the PCBs of all jobs just entering the system are linked on the HOLD queue.
- These queues must be managed by process scheduling policies and algorithms.

Process Scheduling Policies
Before the operating system can schedule all jobs in a multiprogramming environment, it must resolve three limitations of the system:
- a finite number of resources (such as disk drives, printers, and tape drives);
- some resources cannot be shared once they are allocated (such as printers);
- some resources require operator intervention (such as tape drives).

Scheduling Criteria
- CPU utilization: keep the CPU as busy as possible.
- Throughput: the number of processes that complete their execution per time unit.
- Turnaround time: the amount of time needed to execute a particular process.
- Waiting time: the amount of time a process has been waiting in the READY queue.
- Response time: the amount of time from when a request is submitted until the first response is produced, not the output (for time-sharing environments).
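For a single process these criteria reduce to simple arithmetic. A small sketch, assuming the per-process arrival time, CPU burst, first-run time, and completion time are known:

```python
def per_process_metrics(arrival, burst, first_run, completion):
    """Turnaround, waiting, and response time for one process.

    turnaround = completion - arrival
    waiting    = turnaround - burst   (time spent sitting in the READY queue)
    response   = first_run - arrival  (time until the first response, not the output)
    """
    turnaround = completion - arrival
    waiting = turnaround - burst
    response = first_run - arrival
    return turnaround, waiting, response
```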

A Good Scheduling Policy
- Maximizes throughput by running as many jobs as possible in a given amount of time.
- Maximizes CPU efficiency by keeping the CPU busy 100 percent of the time.
- Ensures fairness for all jobs by giving everyone an equal amount of CPU and I/O time.
- Minimizes response time by quickly turning around interactive requests.
- Minimizes turnaround time by moving entire jobs in and out of the system quickly.
- Minimizes waiting time by moving jobs out of the READY queue as quickly as possible.

Interrupts
- There are instances when a job claims the CPU for a very long time before issuing an I/O request. This builds up the READY queue and empties the I/O queues, creating an unacceptable imbalance in the system.
- The Process Scheduler uses a timing mechanism to periodically interrupt running processes when a predetermined slice of time has expired. It suspends all activity on the currently running job and reschedules it into the READY queue.

Preemptive & Non-preemptive Scheduling Policies Preemptive scheduling policy interrupts processing of a job and transfers the CPU to another job. Non-preemptive scheduling policy functions without external interrupts. Once a job captures processor and begins execution, it remains in RUNNING state uninterrupted. Until it issues an I/O request (natural wait) or until it is finished (exception for infinite loops). Understanding Operating Systems

Process Scheduling Algorithms
- First Come First Served (FCFS)
- Shortest Job Next (SJN)
- Priority Scheduling
- Shortest Remaining Time (SRT)
- Round Robin
- Multiple Level Queues

First Come First Served (FCFS)
- Non-preemptive.
- Handles jobs according to their arrival time: the earlier they arrive, the sooner they are served.
- A simple algorithm to implement; it uses a FIFO queue.
- Good for batch systems, not so good for interactive ones.
- Turnaround time is unpredictable.

FCFS Example

Process   CPU burst
A         15 ms
B         2 ms
C         1 ms

If the jobs arrive in the order A, B, C, what does the time line look like, and what is the average turnaround time? (A worked sketch follows below.)
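A quick way to answer both questions is to walk the FIFO queue and accumulate completion times. The sketch below assumes all three jobs arrive at time 0 in the order A, B, C; it prints the time line and an average turnaround of (15 + 17 + 18) / 3 ≈ 16.7 ms. Arriving in the order C, B, A instead would give (1 + 3 + 18) / 3 ≈ 7.3 ms, which is why FCFS turnaround is so unpredictable.

```python
def fcfs(jobs):
    """jobs: list of (name, cpu_burst) in arrival order; all arrive at time 0."""
    clock = 0
    turnarounds = []
    for name, burst in jobs:
        start, clock = clock, clock + burst
        turnarounds.append(clock)            # turnaround = completion - arrival (0)
        print(f"{name}: runs {start}-{clock}")
    print(f"average turnaround: {sum(turnarounds) / len(turnarounds):.2f} ms")

fcfs([("A", 15), ("B", 2), ("C", 1)])
```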

Shortest Job Next (SJN)
- Non-preemptive.
- Handles jobs based on the length of their CPU cycle time: the process with the shortest time is scheduled first.
- Optimal: it gives the minimum average waiting time for a given set of processes, but only when all of the jobs are available at the same time and the CPU estimates are available and accurate.
- Does not work in interactive systems because users do not estimate in advance the CPU time required to run their jobs.
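Reusing the jobs from the FCFS example, SJN simply orders the ready jobs by estimated CPU burst before serving them. A sketch, assuming all jobs are available at time 0 with accurate burst estimates:

```python
def sjn(jobs):
    """jobs: list of (name, estimated_cpu_burst); all available at time 0."""
    clock, turnarounds = 0, []
    for name, burst in sorted(jobs, key=lambda j: j[1]):   # shortest burst first
        clock += burst
        turnarounds.append(clock)
        print(f"{name}: finishes at {clock}")
    print(f"average turnaround: {sum(turnarounds) / len(turnarounds):.2f} ms")

sjn([("A", 15), ("B", 2), ("C", 1)])   # order becomes C, B, A -> (1 + 3 + 18) / 3 = 7.33 ms
```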

Priority Scheduling
- Non-preemptive.
- Gives preferential treatment to important jobs: programs with the highest priority are processed first, and they are not interrupted until their CPU cycles are completed or a natural wait occurs.
- If two or more jobs with equal priority are in the READY queue, the processor is allocated to the one that arrived first (first come first served within priority).
- There are many different methods of assigning priorities, by the system administrator or by the Processor Manager.
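A common non-preemptive implementation keeps the READY queue as a heap ordered by (priority, arrival order), so equal priorities fall back to first come first served. A minimal sketch (smaller numbers mean higher priority here; that convention, and the job names, are assumptions for illustration):

```python
import heapq
from itertools import count

arrival_order = count()          # tie-breaker: earlier arrivals win
ready_queue = []                 # heap of (priority, arrival_seq, job_name)

def admit(job_name, priority):
    heapq.heappush(ready_queue, (priority, next(arrival_order), job_name))

def dispatch():
    """Pick the highest-priority job; FCFS among equal priorities."""
    priority, _, job_name = heapq.heappop(ready_queue)
    return job_name

admit("payroll", 1)
admit("report", 3)
admit("backup", 1)
print(dispatch(), dispatch(), dispatch())   # payroll backup report
```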

Shortest Remaining Time (SRT)
- The preemptive version of the SJN algorithm.
- The processor is allocated to the job closest to completion; that job can be preempted if a newer job in the READY queue has a "time to completion" that is shorter.
- Cannot be implemented in an interactive system because it requires advance knowledge of the CPU time required to finish each job.
- SRT involves more overhead than SJN: the OS monitors the CPU time for all jobs in the READY queue and performs "context switching".
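A toy simulation makes the preemption visible: at every time unit the job with the shortest remaining time among the arrived jobs gets the CPU, so a newly arrived short job preempts a long one. The jobs, arrival times, and bursts below are made up for illustration:

```python
def srt(jobs):
    """jobs: dict of name -> (arrival_time, cpu_burst). One-time-unit granularity."""
    remaining = {name: burst for name, (arrival, burst) in jobs.items()}
    clock, timeline = 0, []
    while any(remaining.values()):
        candidates = [n for n, (arrival, _) in jobs.items()
                      if arrival <= clock and remaining[n] > 0]
        if candidates:
            job = min(candidates, key=lambda n: remaining[n])   # shortest remaining time
            remaining[job] -= 1
            timeline.append(job)
        else:
            timeline.append("-")        # CPU idle, no arrived job is ready
        clock += 1
    print("".join(timeline))

srt({"A": (0, 6), "B": (1, 3), "C": (2, 1)})   # prints ABCBBAAAAA: B and C preempt A
```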

Context Switching Is Required by All Preemptive Algorithms
When Job A is preempted:
- All of its processing information must be saved in its PCB for later, when Job A's execution is continued.
- The contents of Job B's PCB are loaded into the appropriate registers so that it can start running; this is a context switch.
- Later, when Job A is once again assigned to the processor, another context switch is performed: information from the preempted job is stored in its PCB, and the contents of Job A's PCB are loaded into the appropriate registers.

Round Robin
- Preemptive.
- Used extensively in interactive systems because it is easy to implement.
- Not based on job characteristics but on a predetermined slice of time that is given to each job.
- Ensures the CPU is equally shared among all active processes and is not monopolized by any one job.
- The time slice is called a time quantum; its size is crucial to system performance (typically from 100 milliseconds to 1-2 seconds).

If a Job's CPU Cycle < Time Quantum
- If this is the job's last CPU cycle and the job is finished, then all resources allocated to it are released and the completed job is returned to the user.
- If the CPU cycle was interrupted by an I/O request, then information about the job is saved in its PCB and the job is linked at the end of the appropriate I/O queue. Later, when the I/O request has been satisfied, it is returned to the end of the READY queue to await allocation of the CPU.
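A round robin dispatcher can be sketched with a FIFO queue and a fixed quantum: a job that does not finish inside its quantum goes to the back of the READY queue, while a job that finishes early simply releases the CPU. The quantum of 4 ms and the bursts below are illustrative only:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, cpu_burst); all on the READY queue at time 0."""
    ready = deque(jobs)
    clock = 0
    while ready:
        name, remaining = ready.popleft()
        clock += min(quantum, remaining)
        if remaining > quantum:
            ready.append((name, remaining - quantum))   # preempted: back of the queue
        else:
            print(f"{name} finishes at {clock}")        # finished within its quantum

round_robin([("A", 8), ("B", 4), ("C", 9)], quantum=4)  # B at 8, A at 16, C at 21
```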

Time Slices Should Be ...
- Long enough to allow 80 percent of CPU cycles to run to completion.
- At least 100 times longer than the time required to perform one context switch.
- Flexible: the right size depends on the system.

Multiple Level Queues
- Not a separate scheduling algorithm; it works in conjunction with several other schemes where jobs can be grouped according to a common characteristic.
- Examples:
  - A priority-based system with different queues for each priority level.
  - Put all CPU-bound jobs in one queue and all I/O-bound jobs in another, and alternately select jobs from each queue to keep the system balanced.
  - Put batch jobs in a "background" queue and interactive jobs in a "foreground" queue, and treat the foreground queue more favorably than the background queue.

Policies To Service Multi-level Queues
- No movement between queues.
- Move jobs from queue to queue.
- Move jobs from queue to queue while increasing the time quantum for "lower" queues.
- Give special treatment to jobs that have been in the system for a long time (aging).
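One way to combine these policies is a multilevel feedback arrangement: always dispatch from the highest-priority non-empty queue, give lower queues longer quanta, and demote a job one level each time it uses up its quantum. The three-level setup and quanta below are an assumed illustration, not a prescribed configuration:

```python
from collections import deque

# Level 0 is served first; lower levels get longer time quanta (in ms).
queues = [deque(), deque(), deque()]
quanta = [10, 40, 160]

def admit(job):
    queues[0].append(job)                    # new jobs start at the top level

def dispatch():
    """Pick the next job and its quantum from the highest non-empty queue."""
    for level, q in enumerate(queues):
        if q:
            return level, q.popleft(), quanta[level]
    return None                              # nothing is ready

def demote(level, job):
    """Job used its whole quantum: move it down one level (an aging policy may move it back up)."""
    queues[min(level + 1, len(queues) - 1)].append(job)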

Cache Memory
- Cache memory is quickly accessible memory designed to alleviate the speed difference between a very fast CPU and slower main memory.
- It stores a copy of frequently used data in an easily accessible memory area instead of in main memory.

Types of Interrupts
- Page interrupts to accommodate job requests.
- Time quantum expiration interrupts.
- I/O interrupts, issued when the job issues a READ or WRITE command.
- Internal interrupts (synchronous interrupts), which result from the arithmetic operation or job instruction currently being processed, such as illegal arithmetic operations (e.g., dividing by zero) or illegal job instructions (e.g., attempts to access protected storage locations).

Interrupt Handler
- Describe and store the type of interrupt (passed to the user as an error message).
- Save the state of the interrupted process (the value of the program counter, the mode specification, and the contents of all registers).
- Process the interrupt: send the error message and the state of the interrupted process to the user.
- Halt program execution, release any resources allocated to the job, and let the job exit the system.
- The processor then resumes normal operation.
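These steps can be read as straight-line pseudocode. The sketch below is only an outline of that sequence; the PCB fields and the way the "report to user" and "release resources" steps are performed are hypothetical stand-ins, not a real kernel API:

```python
def handle_interrupt(interrupt_type, pcb):
    """Outline of the interrupt-handler steps from the slide; details are illustrative."""
    message = f"Interrupt: {interrupt_type}"      # 1. describe and store the interrupt type
    saved_state = dict(pcb)                       # 2. save the state of the interrupted process
    print(message, saved_state)                   # 3. report the error and saved state to the user
    pcb["resources"] = []                         # 4. halt the job and release its resources
    pcb["status"] = "FINISHED"                    # 5. the job exits the system
    # 6. the processor resumes normal operation with the next job

handle_interrupt("divide by zero", {"pid": 7, "status": "RUNNING", "resources": ["printer"]})
```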

Key Terms: aging, cache memory, context switching, CPU-bound, first come first served, high-level scheduler, I/O-bound, indefinite postponement, internal interrupts, interrupt, interrupt handler, Job Scheduler, job status, low-level scheduler, middle-level scheduler, multiple level queues, multiprogramming, non-preemptive scheduling policy.

Key Terms - 2: preemptive scheduling policy, priority scheduling, process, Process Control Block, Process Scheduler, process scheduling algorithm, process scheduling policy, process status, processor, queue, response time, round robin, shortest job next (SJN), shortest remaining time, synchronous interrupts, time quantum, time slice, turnaround time.