CS703 – Advanced Operating Systems

Presentation transcript:

CS703 – Advanced Operating Systems By Mr. Farhan Zaidi

Lecture No. 17

Some problems with the multilevel feedback queue concept
- Can't low-priority threads starve? Ad hoc fix: when a thread is skipped over, increase its priority.
- What about when the past doesn't predict the future? e.g., a CPU-bound thread switches to being I/O bound.
- We want past predictions to "age" and count less towards the current view of the world.
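A minimal C sketch of the aging idea, under invented assumptions: the task struct, the skip threshold of 4, and the 0.5 decay factor are all hypothetical choices for illustration, not taken from the slides.

/* Hypothetical illustration of priority aging in a multilevel feedback
 * queue: a task that keeps being skipped gets its priority boosted, and
 * its CPU-usage history decays so old behaviour counts less. */
#include <stdio.h>

#define NQUEUES 8

struct task {
    int priority;       /* 0 = highest, NQUEUES-1 = lowest */
    int skipped;        /* times passed over by the scheduler */
    double cpu_usage;   /* recent CPU-usage estimate */
};

/* Called each time the scheduler passes over a runnable task. */
void on_skipped(struct task *t)
{
    if (++t->skipped >= 4 && t->priority > 0) {
        t->priority--;      /* boost: move toward the high-priority queues */
        t->skipped = 0;
    }
}

/* Called periodically (e.g. every clock tick): exponentially decay the
 * usage history so past behaviour "ages" out of the current estimate. */
void decay_usage(struct task *t, double recent_usage)
{
    t->cpu_usage = 0.5 * t->cpu_usage + 0.5 * recent_usage;
}

int main(void)
{
    struct task t = { .priority = NQUEUES - 1, .skipped = 0, .cpu_usage = 1.0 };
    for (int tick = 0; tick < 12; tick++) {
        on_skipped(&t);
        decay_usage(&t, 0.0);   /* task has turned I/O bound: no recent CPU use */
    }
    printf("priority=%d usage=%.3f\n", t.priority, t.cpu_usage);
    return 0;
}

With this scheme, a formerly CPU-bound task that turns I/O bound sees its usage estimate shrink and its priority recover within a few ticks, rather than being stuck with its old classification.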

Summary
- FIFO: + simple; - short jobs can get stuck behind long ones; poor for I/O.
- RR: + better for short jobs; - poor when jobs are the same length.
- STCF: + optimal (average response time, average time-to-completion); - hard to predict the future; - unfair.
- Multi-level feedback: + approximates STCF; - unfair to long-running jobs.
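To make the FIFO-vs-STCF trade-off concrete, here is a small self-contained C sketch with invented job lengths (one long job and two short ones, all arriving at the same time). It compares average completion time when jobs run in arrival order versus shortest-first.

/* Illustrative comparison (invented job lengths): average completion time
 * when a long job runs first (FIFO arrival order) versus when the shortest
 * job always runs first (STCF with all jobs arriving at t=0). */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

static double avg_completion(const int *len, int n)
{
    double t = 0.0, sum = 0.0;
    for (int i = 0; i < n; i++) {
        t += len[i];        /* job i finishes at the running total */
        sum += t;
    }
    return sum / n;
}

int main(void)
{
    int fifo[] = { 100, 1, 1 };            /* long job arrived first */
    int stcf[] = { 100, 1, 1 };
    int n = 3;

    qsort(stcf, n, sizeof(int), cmp);      /* shortest first */

    printf("FIFO avg completion: %.1f\n", avg_completion(fifo, n));  /* (100+101+102)/3 = 101 */
    printf("STCF avg completion: %.1f\n", avg_completion(stcf, n));  /* (1+2+102)/3 = 35 */
    return 0;
}

The two short jobs stuck behind the 100-unit job drag the FIFO average up to 101, while shortest-first brings it down to 35, which is the "short jobs get stuck behind long ones" point in the summary.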

Some Unix scheduling problems
- How does the priority scheme scale with the number of processes?
- How do we give a process a given percentage of the CPU?
- OS implementation problem: the OS takes precedence over user processes, yet a user process can create lots of kernel work. e.g., many network packets come in and the OS has to process them; when doing a read or write system call, ….

Linux Scheduling
Builds on the traditional UNIX multi-level feedback queue scheduler by adding two new scheduling classes.
Linux scheduling classes:
- SCHED_FIFO: FCFS real-time threads
- SCHED_RR: round-robin real-time threads
- SCHED_OTHER: other, non-real-time threads
Multiple priorities may be used within a class. Priorities in the real-time classes are higher than in the non-real-time class.

Linux Scheduling (2)
Rules for SCHED_FIFO:
1. The system will not interrupt an executing SCHED_FIFO thread except in the following cases:
- Another FIFO thread of higher priority becomes ready.
- The executing FIFO thread blocks, e.g. on I/O.
- The executing FIFO thread voluntarily gives up the CPU, e.g. it terminates or yields.
2. When an executing FIFO thread is interrupted, it is placed in the queue associated with its priority.

Linux Scheduling (3)
The SCHED_RR class is similar to SCHED_FIFO except that a time slice is associated with each RR thread. When the time slice expires, if the thread is still executing it is preempted and another thread from either SCHED_FIFO or SCHED_RR is selected for execution.
The SCHED_OTHER class is managed by the traditional UNIX scheduling algorithm, i.e. the multi-level feedback queue.
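For concreteness, a minimal sketch of how a Linux process can request one of these classes through the standard sched_setscheduler() call. The priority value 10 is an arbitrary choice for illustration, and selecting a real-time class normally requires root privileges or CAP_SYS_NICE.

/* Request the SCHED_FIFO real-time class for the calling process. */
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    struct sched_param param;
    memset(&param, 0, sizeof(param));
    param.sched_priority = 10;          /* real-time priority within the class */

    /* 0 = the calling process; SCHED_RR or SCHED_OTHER could be passed instead
     * (SCHED_OTHER requires sched_priority == 0). */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
        return 1;
    }
    printf("now running under SCHED_FIFO, priority %d\n", param.sched_priority);
    return 0;
}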

Lottery scheduling: random simplicity
Problem: this whole priority thing is really ad hoc. How do we ensure that processes are equally penalized under load?
Lottery scheduling! A very simple idea:
- Give each process some number of lottery tickets.
- On each scheduling event, randomly pick a ticket and run the winning process.
- To give process P n% of the CPU, give it (total tickets) * n% of the tickets.
How to use it?
- Approximate priority: give low-priority processes few tickets, high-priority processes many.
- Approximate STCF: give short jobs more tickets, long jobs fewer.
Key: if a job holds at least one ticket, it will not starve.
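A minimal C sketch of the ticket draw; the job names and ticket counts are invented for illustration, and a real scheduler would keep the tickets inside its run-queue structures rather than a flat array.

/* Lottery scheduling sketch: pick a random ticket in [0, total) and walk
 * the job list until the cumulative ticket count passes the winner. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct job {
    const char *name;
    int tickets;
};

/* Returns the index of the winning job, or -1 if there are no tickets. */
int pick_winner(const struct job *jobs, int njobs)
{
    int total = 0;
    for (int i = 0; i < njobs; i++)
        total += jobs[i].tickets;
    if (total == 0)
        return -1;

    int winner = rand() % total;       /* winning ticket number */
    int counter = 0;
    for (int i = 0; i < njobs; i++) {
        counter += jobs[i].tickets;
        if (winner < counter)
            return i;                  /* this job holds the winning ticket */
    }
    return -1;                         /* not reached */
}

int main(void)
{
    struct job jobs[] = { {"A", 75}, {"B", 20}, {"C", 5} };  /* ~75/20/5% of CPU */
    int wins[3] = { 0 };

    srand((unsigned)time(NULL));
    for (int i = 0; i < 100000; i++) {
        int w = pick_winner(jobs, 3);
        if (w >= 0)
            wins[w]++;
    }

    for (int i = 0; i < 3; i++)
        printf("%s: %.1f%%\n", jobs[i].name, 100.0 * wins[i] / 100000);
    return 0;
}

Over many draws the win frequencies approach the 75/20/5 ticket split, and removing a job automatically rescales the others' shares, which is exactly the "grace under load change" property on the next slide.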

Grace under load change
- Adding or deleting jobs (and their tickets) affects all jobs proportionally.
- Example: give all jobs 1/n of the CPU? With 4 jobs of 1 ticket each, each gets (on average) 25% of the CPU. Delete one job: the rest automatically adjust to 33% of the CPU!
- Easy priority donation: donate tickets to the process you're waiting on. Its CPU share scales with the tickets of all waiters.

Classifications of Multiprocessor Systems
- Loosely coupled or distributed multiprocessor, or cluster: each processor has its own memory and I/O channels.
- Functionally specialized processors: such as an I/O processor; controlled by a master processor.
- Tightly coupled multiprocessing: processors share main memory; controlled by the operating system.

Granularity of parallelism
Coarse and very coarse-grained parallelism:
- Synchronization among processes at a very gross level (after > 2000 instructions on average).
- Good for concurrent processes running on a multiprogrammed uniprocessor.
- Can be supported on a multiprocessor with little change.

Granularity of parallelism (2)
Medium-grained parallelism:
- A single application is a collection of threads.
- Threads usually interact frequently.
Fine-grained parallelism:
- Highly parallel applications.
- A specialized and fragmented area.