Outline Announcements Process Management – continued

Presentation transcript:

Outline
Announcements
Process Management – continued
Process Scheduling
Non-preemptive scheduling algorithms: FCFS, SJN, Priority scheduling, Deadline scheduling

Announcements
We will have a recitation session tomorrow: we will go over the first quiz and discuss thread creation and thread synchronization through mutexes.
On Oct. 2, Dr. Andy Wang will give the lecture; I will be attending a symposium that day.
On Oct. 16 I need to attend a conference, so I will use Oct. 15 to make up that lecture and use the Oct. 16 class time for a demonstration of the first lab.

Announcements – cont.
The midterm exam will be on Oct. 23, 2003, during the regular class time.
We will have a review on Tuesday, Oct. 21, 2003.
I will answer questions on Wednesday, Oct. 22, 2003, during the recitation sessions.

Hardware Process - Review
[Figure: when the machine is powered up, the bootstrap loader runs initialization and loads the kernel; the hardware process then alternates between the process manager, which schedules and executes threads of P1, P2, ..., Pn, and the interrupt handler, which services interrupts.]

Implementing the Process Abstraction - review
[Figure: the OS multiplexes the single physical machine (control unit, ALU, executable memory) into many abstract machines, one per process: each process Pi, Pj, Pk sees its own CPU and its own address space, bound to executable memory through the OS interface.]

Context Switching - review
[Figure: on a context switch, the CPU state is saved into the old thread's descriptor and loaded from the new thread's descriptor.]

Process Descriptors
The OS creates and manages the process abstraction; the descriptor is the data structure the OS keeps for each process. It records:
the type and location of the resources the process holds
the list of resources it needs
its list of threads
its list of child processes
its security keys
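A minimal C sketch of such a descriptor, assuming array-based fields and invented names (process_descriptor, MAX_THREADS, and so on); it mirrors the list above rather than any real kernel's PCB layout.

    #include <stdio.h>
    #include <sys/types.h>

    #define MAX_THREADS   16
    #define MAX_CHILDREN  32
    #define MAX_RESOURCES 64

    /* Illustrative descriptor; real kernels use linked structures and
     * carry many more fields than the slide lists. */
    struct process_descriptor {
        pid_t pid;                              /* identity of the process         */
        int   state;                            /* ready, running, blocked, ...    */
        int   resources_held[MAX_RESOURCES];    /* type/location of resources held */
        int   resources_needed[MAX_RESOURCES];  /* resources it still needs        */
        int   thread_ids[MAX_THREADS];          /* list of threads                 */
        pid_t children[MAX_CHILDREN];           /* list of child processes         */
        unsigned long security_keys;            /* protection/security keys        */
    };

    int main(void) {
        struct process_descriptor p = { .pid = 1, .state = 0 };
        printf("descriptor for pid %d occupies %zu bytes\n", (int)p.pid, sizeof p);
        return 0;
    }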

System Overview

The Abstract Machine Interface
[Figure: the abstract machine instructions seen by an application program are the user-mode instructions plus OS system calls such as fork(), create(), and open(); a trap instruction crosses from user mode into supervisor mode in the OS.]

Modern Processes and Threads – cont.

The Address Space
[Figure: a process is bound to an address space, and the address space is in turn bound to executable memory, files, and other objects.]

Diagram of Process State

A Process Hierarchy

Process Hierarchies
The parent-child relationship may be significant: the parent controls its children's execution.
[State diagram: Ready-Active, Blocked-Active, Running, Ready-Suspended, and Blocked-Suspended states, with transitions Start, Schedule, Yield, Request, Allocate, Done, Suspend, and Activate.]

UNIX State Transition Diagram
[State diagram: Runnable, Running, Sleeping, Uninterruptible Sleep, Traced or Stopped, and Zombie states, with transitions including Start, Allocate, Schedule, I/O Request, I/O Complete, Resume, Wait by parent, and Done.]

Scheduling
The scheduling mechanism is the part of the process manager that removes the running process from the CPU and selects another process according to a particular strategy.
The scheduler chooses one of the ready threads to use the CPU whenever it becomes available.
The scheduling policy determines when it is time for a thread to be removed from the CPU and which ready thread should be allocated the CPU next.

Process Scheduler Organization
[Figure: a new process (job) enters the ready list ("Ready"); the scheduler allocates the CPU to one of the ready processes ("Running"); a running process either completes (Done), is preempted or voluntarily yields back to the ready list, or makes a resource request and becomes "Blocked" until the resource manager allocates the resource.]

Scheduler as CPU Resource Manager
[Figure: the scheduler hands out units of time on a time-multiplexed CPU; processes that are ready to run wait in the ready list, are dispatched to the CPU, and eventually release it.]

The Scheduler
[Figure: when a process becomes ready (arriving from other states), the ready enqueuer places its process descriptor on the ready list; the dispatcher selects the next descriptor from the ready list, and the context switcher loads it onto the CPU, producing the running process.]

Process/Thread Context
[Figure: the context is the CPU state visible to the thread: the general registers R0 ... Rn, the status registers, the PC and IR in the control unit, and the ALU operand and result registers.]

Context Switching - review
[Figure: on a context switch, the CPU state is saved into the old thread's descriptor and loaded from the new thread's descriptor.]

Dispatcher
The dispatcher module gives control of the CPU to the process selected by the scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
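The three steps cannot be shown literally outside a kernel, but the save-old-context/load-new-context part can be mimicked in user space with the obsolescent but still widely available POSIX ucontext calls (e.g., on Linux with glibc); the names task, task_stack, and the message strings below are invented for this sketch.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[16384];

    /* Stands in for the program the dispatcher is about to run. */
    static void task(void) {
        puts("task: running after being dispatched");
    }   /* when task() returns, control follows uc_link back to main */

    int main(void) {
        getcontext(&task_ctx);                   /* initialize a context            */
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;            /* where to continue after task()  */
        makecontext(&task_ctx, task, 0);

        puts("dispatcher: saving own context, loading the task's");
        swapcontext(&main_ctx, &task_ctx);       /* the context switch itself       */
        puts("dispatcher: control is back");
        return 0;
    }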

Diagram of Process State

CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
CPU scheduling decisions may take place when a process:
1. switches from the running to the waiting state
2. switches from the running to the ready state
3. switches from the waiting (or new) state to the ready state
4. terminates

CPU Scheduler – cont.
Non-preemptive and preemptive scheduling:
Scheduling only under conditions 1 and 4 is non-preemptive: a process runs for as long as it likes. In other words, non-preemptive scheduling algorithms allow any process/thread to run to "completion" once it has been allocated the processor.
All other scheduling is preemptive: the CPU may be taken away before a process finishes its current CPU burst.

Voluntary CPU Sharing
Each process voluntarily shares the CPU by calling the scheduler periodically.
This is the simplest approach; it requires a yield instruction so the running process can release the CPU.
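In user space the same idea is visible through the POSIX sched_yield() call; the two-thread sketch below illustrates voluntary sharing only, not a hardware yield instruction.

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Each thread does a little work, then voluntarily gives up the CPU. */
    static void *worker(void *arg) {
        for (int i = 0; i < 5; i++) {
            printf("thread %ld: step %d\n", (long)arg, i);
            sched_yield();                  /* let another ready thread run */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

Compile with cc -pthread; how the two threads' output interleaves depends on the system scheduler.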

Voluntary CPU Sharing – cont.

Involuntary CPU Sharing
The CPU is shared through periodic involuntary interruption: an interval timer device generates an interrupt whenever the timer expires, and the scheduler is called from the interrupt handler.
A scheduler that uses involuntary CPU sharing is called a preemptive scheduler.
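A user-space sketch of the mechanism, assuming POSIX setitimer() and SIGALRM: the interval timer expires every 10 ms and the signal handler stands in for the interrupt handler that would invoke the scheduler.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    /* Stands in for the interrupt handler that calls the scheduler. */
    static void timer_handler(int sig) {
        (void)sig;
        ticks++;                      /* in a kernel: invoke the scheduler here */
    }

    int main(void) {
        struct sigaction sa;
        sa.sa_handler = timer_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval iv = {
            .it_interval = { .tv_sec = 0, .tv_usec = 10000 },  /* every 10 ms */
            .it_value    = { .tv_sec = 0, .tv_usec = 10000 },
        };
        setitimer(ITIMER_REAL, &iv, NULL);   /* arm the interval timer */

        while (ticks < 100)           /* the "running process" being interrupted */
            pause();                  /* wait for the next timer interrupt       */
        printf("received %d timer interrupts\n", (int)ticks);
        return 0;
    }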

Programmable Interval Timer

Strategy Selection
The scheduling criteria depend in part on the goals of the OS: process priorities, fairness, overall resource utilization, throughput, turnaround time, response time, and deadlines.

Working Process Model and Metrics
P is a set of processes p0, p1, ..., pn-1, and S(pi) is the state of pi.
t(pi), the service time: the amount of time pi needs to be in the running state before it is completed.
W(pi), the waiting time: the time pi spends in the ready state before its first transition to the running state.
TTRnd(pi), the turnaround time: the amount of time between the moment pi first enters the ready state and the moment the process exits the running state for the last time.
(For a non-preemptive schedule in which all processes are ready at time 0, TTRnd(pi) = W(pi) + t(pi).)

Partitioning a Process into Small Processes
A process intersperses computation and I/O requests. If a process makes k different I/O requests during its lifetime, the result is k+1 service-time requests interspersed with the k I/O requests.
For CPU scheduling, pi can therefore be decomposed into k+1 smaller processes pij, each of which executes without doing I/O.

Alternating Sequence of CPU And I/O Bursts

Histogram of CPU-burst Times

Review: Compute-bound and I/O-bound Processes
Compute-bound processes generate I/O requests infrequently and spend most of their time doing computation.
I/O-bound processes spend most of their time doing I/O rather than computation.

Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request is submitted until the first response is produced, not the final output (for time-sharing environments)

Optimization Criteria
Maximize CPU utilization
Maximize throughput
Minimize turnaround time
Minimize waiting time
Minimize response time
Which criterion to use depends on the system's design goal.

Everyday Scheduling Methods
First-come, first-served
Shorter jobs first
Higher-priority jobs first
Job with the closest deadline first
Round-robin

FCFS at the supermarket

SJF at the supermarket

Round-robin scheduling

First-Come-First-Served
Assigns priority to processes in the order in which they request the processor.

First-Come-First-Served – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p0 runs from 0 to 350]
TTRnd(p0) = t(p0) = 350
W(p0) = 0

First-Come-First-Served – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p0 to 350, then p1 to 475]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
W(p0) = 0
W(p1) = TTRnd(p0) = 350

First-Come-First-Served – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p0 to 350, p1 to 475, p2 to 950]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
W(p0) = 0
W(p1) = TTRnd(p0) = 350
W(p2) = TTRnd(p1) = 475

First-Come-First-Served – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p0 to 350, p1 to 475, p2 to 950, p3 to 1200]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
TTRnd(p3) = t(p3) + TTRnd(p2) = 250 + 950 = 1200
W(p0) = 0
W(p1) = TTRnd(p0) = 350
W(p2) = TTRnd(p1) = 475
W(p3) = TTRnd(p2) = 950

First-Come-First-Served – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p0 to 350, p1 to 475, p2 to 950, p3 to 1200, p4 to 1275]
TTRnd(p0) = t(p0) = 350
TTRnd(p1) = t(p1) + TTRnd(p0) = 125 + 350 = 475
TTRnd(p2) = t(p2) + TTRnd(p1) = 475 + 475 = 950
TTRnd(p3) = t(p3) + TTRnd(p2) = 250 + 950 = 1200
TTRnd(p4) = t(p4) + TTRnd(p3) = 75 + 1200 = 1275
W(p0) = 0
W(p1) = TTRnd(p0) = 350
W(p2) = TTRnd(p1) = 475
W(p3) = TTRnd(p2) = 950
W(p4) = TTRnd(p3) = 1200

FCFS Average Wait Time
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p0 | p1 | p2 | p3 | p4, completing at 350, 475, 950, 1200, 1275]
TTRnd(p0) = 350, TTRnd(p1) = 475, TTRnd(p2) = 950, TTRnd(p3) = 1200, TTRnd(p4) = 1275
W(p0) = 0, W(p1) = 350, W(p2) = 475, W(p3) = 950, W(p4) = 1200
Wavg = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5 = 595
FCFS is easy to implement, but it ignores service time and is not a great performer.
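The figures above are easy to check mechanically; this short C sketch just accumulates the service times in arrival order.

    #include <stdio.h>

    int main(void) {
        int t[] = { 350, 125, 475, 250, 75 };   /* service times t(p0)..t(p4) */
        int n = 5, clock = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {           /* FCFS: run in arrival order */
            int wait = clock;                   /* W(pi): time spent ready    */
            clock += t[i];                      /* TTRnd(pi): completion time */
            total_wait += wait;
            printf("p%d: W=%4d  TTRnd=%4d\n", i, wait, clock);
        }
        printf("Wavg = %d/%d = %d\n", total_wait, n, total_wait / n);  /* 2975/5 = 595 */
        return 0;
    }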

Predicting Wait Time in FCFS
In FCFS, when a process arrives, everything already on the ready list will be processed before this job.
Let m be the service rate (so 1/m is the mean service time) and L the length of the ready list. Then
Wavg(p) = L*(1/m) + 0.5*(1/m) = L/m + 1/(2m)
Compare this prediction with the actual waits in the earlier example.
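As a quick sanity check of the formula (the mapping of m onto the earlier example is my own reading, not stated on the slide): the mean service time there is (350+125+250+475+75)/5 = 255, so 1/m = 255.

    #include <stdio.h>

    /* Predicted FCFS wait: L full mean service times (1/m each) plus, on
     * average, half a service time for the job already in service. */
    static double predicted_wait(int L, double m) {
        return L / m + 1.0 / (2.0 * m);
    }

    int main(void) {
        /* L = 4 jobs ahead, mean service time 255: predicted wait 1147.5,
         * versus the actual W(p4) = 1200 in the earlier FCFS example. */
        printf("%.1f\n", predicted_wait(4, 1.0 / 255.0));
        return 0;
    }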

Shortest-Job-Next Scheduling
Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time.
SJN is optimal: it gives the minimum average waiting time for a given set of processes.

Shortest-Job-Next Scheduling – cont.
Two schemes:
Non-preemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-Next (SRTN).

Nonpreemptive SJN

Shortest Job Next – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p4 runs from 0 to 75]
TTRnd(p4) = t(p4) = 75
W(p4) = 0

Shortest Job Next – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p4 to 75, then p1 to 200]
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p4) = t(p4) = 75
W(p1) = 75
W(p4) = 0

Shortest Job Next – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p4 to 75, p1 to 200, p3 to 450]
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450
TTRnd(p4) = t(p4) = 75
W(p1) = 75
W(p3) = 200
W(p4) = 0

Shortest Job Next – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p4 to 75, p1 to 200, p3 to 450, p0 to 800]
TTRnd(p0) = t(p0) + t(p3) + t(p1) + t(p4) = 350 + 250 + 125 + 75 = 800
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450
TTRnd(p4) = t(p4) = 75
W(p0) = 450
W(p1) = 75
W(p3) = 200
W(p4) = 0

Shortest Job Next – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p4 to 75, p1 to 200, p3 to 450, p0 to 800, p2 to 1275]
TTRnd(p0) = t(p0) + t(p3) + t(p1) + t(p4) = 350 + 250 + 125 + 75 = 800
TTRnd(p1) = t(p1) + t(p4) = 125 + 75 = 200
TTRnd(p2) = t(p2) + t(p0) + t(p3) + t(p1) + t(p4) = 475 + 350 + 250 + 125 + 75 = 1275
TTRnd(p3) = t(p3) + t(p1) + t(p4) = 250 + 125 + 75 = 450
TTRnd(p4) = t(p4) = 75
W(p0) = 450
W(p1) = 75
W(p2) = 800
W(p3) = 200
W(p4) = 0

Shortest Job Next – cont.
Service times: t(p0)=350, t(p1)=125, t(p2)=475, t(p3)=250, t(p4)=75
[Gantt chart: p4 | p1 | p3 | p0 | p2, completing at 75, 200, 450, 800, 1275]
TTRnd(p4) = 75, TTRnd(p1) = 200, TTRnd(p3) = 450, TTRnd(p0) = 800, TTRnd(p2) = 1275
W(p4) = 0, W(p1) = 75, W(p3) = 200, W(p0) = 450, W(p2) = 800
Wavg = (450 + 75 + 800 + 200 + 0)/5 = 1525/5 = 305
SJN minimizes wait time, but it may starve large jobs and it must know the service times in advance.
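The same kind of check for SJN: sort the simultaneously arriving jobs by service time and accumulate. Sorting discards the process indices, but the waiting times come out the same.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;   /* ascending service time */
    }

    int main(void) {
        int t[] = { 350, 125, 475, 250, 75 };       /* t(p0)..t(p4)            */
        int n = 5, clock = 0, total_wait = 0;

        qsort(t, n, sizeof t[0], cmp);              /* SJN: shortest job first */
        for (int i = 0; i < n; i++) {
            total_wait += clock;                    /* wait = time already used */
            clock += t[i];
        }
        printf("Wavg = %d/%d = %d\n", total_wait, n, total_wait / n);  /* 1525/5 = 305 */
        return 0;
    }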

Priority Scheduling
In priority scheduling, processes/threads are allocated the CPU on the basis of an externally assigned priority.
A commonly used convention is that lower numbers have higher priority.
Static vs. dynamic priorities: static priorities are computed once, at the beginning, and never changed; dynamic priorities allow a thread to become more or less important depending on how much service it has recently received.
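A minimal sketch of the selection step under the lower-number-is-higher-priority convention; the struct and function names are invented for the example, and the ready list is just an array.

    #include <stdio.h>

    struct thread { int id; int priority; int ready; };

    /* Pick the ready thread with the smallest priority number. */
    static int select_next(struct thread *ts, int n) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (ts[i].ready && (best < 0 || ts[i].priority < ts[best].priority))
                best = i;
        return best;                      /* -1 if nothing is ready */
    }

    int main(void) {
        struct thread ts[] = { {0, 5, 1}, {1, 2, 1}, {2, 1, 0}, {3, 4, 1} };
        int next = select_next(ts, 4);    /* thread 2 is not ready, so thread 1 wins */
        printf("dispatch thread %d (priority %d)\n", ts[next].id, ts[next].priority);
        return 0;
    }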

Priority Scheduling – cont.
There are both non-preemptive and preemptive priority scheduling algorithms.
SJN is priority scheduling in which the priority is the predicted next CPU burst time.
FCFS is priority scheduling in which the priority is the arrival time.

Nonpreemptive Priority Scheduling

Priority Scheduling – cont.

Priority Scheduling – cont.

Priority Scheduling – cont.
Starvation problem: low-priority processes may never execute.
Solution: aging; as time progresses, increase the priority of processes that have been waiting.
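Aging can be sketched as a periodic pass over the ready list that boosts everything still waiting; the field names and the boost of one priority level per pass are arbitrary choices for this illustration.

    #include <stdio.h>

    struct pcb { int id; int priority; int waiting_time; };

    /* One aging pass: every waiting process becomes slightly more important
     * (lower number = higher priority), so nothing starves forever. */
    static void age_ready_list(struct pcb *ready, int n) {
        for (int i = 0; i < n; i++) {
            ready[i].waiting_time++;
            if (ready[i].priority > 0)       /* clamp at the highest priority */
                ready[i].priority--;
        }
    }

    int main(void) {
        struct pcb ready[] = { {0, 9, 0}, {1, 3, 0} };
        for (int tick = 0; tick < 5; tick++)     /* e.g., once per timer tick */
            age_ready_list(ready, 2);
        printf("p0 priority now %d, p1 priority now %d\n",
               ready[0].priority, ready[1].priority);
        return 0;
    }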

Deadline Scheduling
Allocates service by deadline; may not be feasible.
i   t(pi)   Deadline
0   350     575
1   125     550
2   475     1050
3   250     (none)
4   75      200
[Gantt chart with deadlines marked at 200, 550, 575, and 1050; all work completes at 1275]
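One way to test whether a deadline schedule is feasible is to try an earliest-deadline-first ordering and check every completion time; the slide does not specify a test, so the sketch below is only one plausible check.

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct job { int id; int t; int deadline; };       /* deadline = INT_MAX if none */

    static int by_deadline(const void *a, const void *b) {
        const struct job *x = a, *y = b;
        return (x->deadline > y->deadline) - (x->deadline < y->deadline);
    }

    int main(void) {
        struct job js[] = {
            {0, 350, 575}, {1, 125, 550}, {2, 475, 1050},
            {3, 250, INT_MAX}, {4, 75, 200},
        };
        int n = 5, clock = 0, feasible = 1;

        qsort(js, n, sizeof js[0], by_deadline);       /* earliest deadline first */
        for (int i = 0; i < n; i++) {
            clock += js[i].t;
            if (clock > js[i].deadline) feasible = 0;
            printf("p%d finishes at %4d (deadline %d)\n", js[i].id, clock, js[i].deadline);
        }
        printf("schedule is %sfeasible\n", feasible ? "" : "not ");
        return 0;
    }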

Summary
Process/thread scheduler organization
Non-preemptive scheduling algorithms: FCFS, SJN, Priority, Deadline