Operating Systems 9 – scheduling


Operating Systems 9 – scheduling (Pieter Hartel)

Types of scheduling
- Short-term: which runnable process to run on which CPU
- Medium-term: which processes to swap in (challenge?)
- Long-term: which processes to accept in a batch environment
- I/O scheduling: which pending I/O request to handle on which I/O device
More processes means more scheduling opportunities, but also more resources in use.

Scheduling is managing queues to minimise delays
- User-oriented criteria: response time, deadlines, predictability
- System-oriented criteria: throughput, resource utilisation
- Tension? Good response time for one user probably means poor throughput
- Events include? Interrupts, signals, system calls, etc.

Common scheduling policies
- Round robin requires pre-emption; the quantum (Q) can be varied
- Example arrival & service times: A: 0 & 3; B: 2 & 6; C: 4 & 4; D: 6 & 5; E: 8 & 2 (a round-robin simulation of this example follows below)
- Feedback: each time a process is pre-empted it drops to the next lower priority, and gets double the quantum the next time it runs
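
Not part of the slide: a minimal round-robin simulation of the example above, written as a sketch. The arrival and service times are those on the slide; the quantum is read from the command line (default 1), since the slide only says that it can be varied.

/* rr.c - simulate round-robin scheduling of the example processes */
#include <stdio.h>
#include <stdlib.h>

#define NPROC 5

int main(int argc, char *argv[])
{
  const char *name[NPROC] = { "A", "B", "C", "D", "E" };
  int arrival[NPROC]      = { 0, 2, 4, 6, 8 };
  int service[NPROC]      = { 3, 6, 4, 5, 2 };
  int left[NPROC], finish[NPROC];
  int queue[64], head = 0, tail = 0;        /* FIFO ready queue of process indices */
  int q = (argc > 1) ? atoi(argv[1]) : 1;   /* quantum Q, default 1 */
  int time = 0, done = 0, arrived = 0, i;

  if (q < 1) q = 1;
  for (i = 0; i < NPROC; i++) left[i] = service[i];

  while (done < NPROC) {
    while (arrived < NPROC && arrival[arrived] <= time)
      queue[tail++] = arrived++;            /* admit newly arrived processes */
    if (head == tail) { time++; continue; } /* nothing ready: idle one time unit */
    int cur = queue[head++];                /* dispatch the head of the ready queue */
    int run = (left[cur] < q) ? left[cur] : q;
    time += run;
    left[cur] -= run;
    while (arrived < NPROC && arrival[arrived] <= time)
      queue[tail++] = arrived++;            /* arrivals during the quantum queue first */
    if (left[cur] > 0)
      queue[tail++] = cur;                  /* pre-empted: back of the queue */
    else {
      finish[cur] = time;
      done++;
    }
  }
  for (i = 0; i < NPROC; i++)
    printf("%s: finish=%2d turnaround=%2d\n", name[i], finish[i], finish[i] - arrival[i]);
  return 0;
}

With a quantum of 1 this prints finish times 4, 18, 17, 20 and 15 for A to E; varying the quantum changes the interleaving and the turnaround times.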

Multi-level feedback queue: past behaviour predicts future
- Round robin in RQi, with a quantum of 2^i time units
- Promote waiting processes
- Demote running processes
(a sketch of the demotion rule follows below)
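
Not part of the slide: a tiny sketch of the demotion rule only. A single CPU-bound process (its demand of 20 time units is an arbitrary choice) descends through the feedback queues, receiving a quantum of 2^i time units at level i; promotion of waiting processes is omitted.

/* mlfq.c - a CPU-bound process descending through the feedback queues */
#include <stdio.h>

#define LEVELS 4

int main(void)
{
  int need = 20;                      /* remaining CPU demand (arbitrary) */
  int level = 0, t = 0;

  while (need > 0) {
    int q = 1 << level;               /* quantum in RQi is 2^i time units */
    int run = (need < q) ? need : q;
    printf("t=%2d: runs at level %d for %d time unit(s)\n", t, level, run);
    t += run;
    need -= run;
    if (need > 0 && level < LEVELS - 1)
      level++;                        /* used its full quantum: demote */
  }
  printf("finished at t=%d\n", t);
  return 0;
}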

Linux scheduling (section 10.3)
- Three levels:
  - Real-time FIFO: pre-empted only by a higher-priority real-time FIFO thread
  - Real-time round robin: pre-empted by the clock after quantum expiry
  - Time sharing: lower priority, otherwise as above
- Run queue per CPU with two arrays of 140 queue heads each: active and expired (i.e. out of quantum)
- Dynamic priority rewards interactivity and punishes CPU hogging
- Wait queue for threads waiting for events
- Completely Fair Scheduling (CFS)
Notes: real-time round robin is a misnomer, as no deadlines can be specified; these are simply high-priority threads. CFS basically means that interactive tasks get their fair share of the CPU.
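
Not on the slide: a small probe, assuming a Linux system, that prints the static priority range of the three policies mentioned above. SCHED_FIFO and SCHED_RR report 1 to 99 on Linux, while SCHED_OTHER reports 0, because time-sharing threads are ordered by nice value and CFS virtual runtime rather than by static priority.

/* prio_range.c - print the static priority range of each policy */
#include <stdio.h>
#include <sched.h>

static void show(const char *name, int policy)
{
  printf("%-11s min=%d max=%d\n", name,
         sched_get_priority_min(policy), sched_get_priority_max(policy));
}

int main(void)
{
  show("SCHED_OTHER", SCHED_OTHER);   /* time sharing (CFS) */
  show("SCHED_FIFO",  SCHED_FIFO);    /* real-time FIFO */
  show("SCHED_RR",    SCHED_RR);      /* real-time round robin */
  return 0;
}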

loop

loop.h:

#ifndef _loop_h
#define _loop_h 1
extern void loop(int N);
#endif

loop.c:

#include "loop.h"

#define M 1690

/* Burn about N * 10 ms CPU time */
void loop(int N)
{
  int i, j, k;
  for (i = 0; i < N; i++) {
    for (j = 0; j < M; j++) {
      for (k = 0; k < M; k++) {
      }
    }
  }
}

Compile with: gcc -c loop.c -o loop.o

Write a test program:

#include <stdio.h>
#include "loop.h"

int main(int argc, char *argv[])
{
  int i;
  for (i = 0; i < 10; i++) {
    loop(100);
    printf("%d\n", i);
  }
  return 0;
}
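
Not on the slide: the figure of roughly N * 10 ms per call depends on the machine, so M may need re-tuning, and because the loop bodies are empty the file should be compiled without optimisation (for example gcc -O0 -c loop.c -o loop.o), otherwise the compiler is likely to remove the loops altogether. Running the test program under time, e.g. time ./a.out, is an easy way to check the calibration (it should take roughly 10 seconds).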

SchedXY

SchedXY.c (the policy X and priority Y are supplied with -D at compile time):

#include <stdio.h>
#include <unistd.h>
#include <sched.h>

int main(int argc, char *argv[])
{
  pid_t pid = getpid();
  struct sched_param param;
  param.sched_priority = Y;
  if (sched_setscheduler(pid, X, &param) != 0) {
    printf("cannot setscheduler\n");
  } else {
    for (;;);
  }
  return 0;
}

Build and run:

gcc -o RR80 -DX=SCHED_RR -DY=80 SchedXY.c
sudo ./RR80 &
top
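
Not on the slide: the same experiment can be run without recompiling by using the chrt utility, which wraps sched_setscheduler, for example sudo chrt --rr 80 ./a.out to start a command under SCHED_RR at priority 80; chrt -p PID reports the policy and priority of a running process.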

ThreadSched

ThreadSched.c (the thread function; each thread reports the CPU it starts on and the first migration it observes):

#define _GNU_SOURCE            /* for sched_getcpu() */
#include <stdio.h>
#include <sched.h>
#include <pthread.h>

#define N 8
#define M 1000000

void *tproc(void *ptr)
{
  int k, i = *((int *) ptr);
  int bgn = sched_getcpu();
  printf("thread %d on CPU %d\n", i, bgn);
  for (k = 0; k < M; k++) {
    int now = sched_getcpu();
    if (bgn != now) {
      printf("thread %d to CPU %d\n", i, now);
      break;
    }
    sched_yield();
  }
  pthread_exit(0);
}

Build and run:

gcc ThreadSched.c -lpthread
./a.out
./a.out xx

Output?

+ ./a.out
thread 1 on CPU 0
thread 2 on CPU 2
thread 0 on CPU 3
thread 3 on CPU 3
thread 4 on CPU 0
thread 5 on CPU 3
thread 6 on CPU 2
thread 7 on CPU 3
thread 7 to CPU 7
thread 3 to CPU 7
thread 5 to CPU 1
thread 4 to CPU 4
thread 6 to CPU 4

+ ./a.out xx
thread 0 on CPU 0
thread 4 on CPU 4
thread 1 on CPU 1
thread 5 on CPU 5
thread 6 on CPU 6
thread 7 on CPU 7
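
The main routine is missing from the transcript. A minimal sketch (an assumption, not the original code) that would produce both behaviours shown above: it starts N threads, and when an extra argument is given (the ./a.out xx run) it pins thread i to CPU i before the thread starts, which is why that run reports no migrations.

/* Hypothetical main for ThreadSched.c (not in the transcript) */
int main(int argc, char *argv[])
{
  pthread_t tid[N];
  int arg[N], i;

  for (i = 0; i < N; i++) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    if (argc > 1) {                   /* "./a.out xx": pin thread i to CPU i */
      cpu_set_t set;
      CPU_ZERO(&set);
      CPU_SET(i, &set);
      pthread_attr_setaffinity_np(&attr, sizeof(cpu_set_t), &set);
    }
    arg[i] = i;
    pthread_create(&tid[i], &attr, tproc, &arg[i]);
    pthread_attr_destroy(&attr);
  }
  for (i = 0; i < N; i++)
    pthread_join(tid[i], NULL);
  return 0;
}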

Nice

Nice.c (a fragment; P, Q and R are compile-time constants, and the printf calls that produce the trace below are not shown in the transcript):

#define _GNU_SOURCE            /* for cpu_set_t, CPU_ZERO, CPU_SET, sched_setaffinity */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sched.h>
#include <sys/resource.h>
#include "loop.h"

int main(int argc, char *argv[])
{
  int p, q, r;
  pid_t parent = getpid();       /* declaration not shown in the transcript */
  cpu_set_t cpuset;
  CPU_ZERO(&cpuset);
  CPU_SET(1, &cpuset);           /* pin the parent, and hence all children, to CPU 1 */
  sched_setaffinity(parent, sizeof(cpu_set_t), &cpuset);
  for (p = 0; p < P; p++) {
    for (q = 0; q < Q; q++) {
      pid_t child = fork();
      if (child == 0) {
        setpriority(PRIO_PROCESS, getpid(), p);   /* each batch gets a higher nice value */
        for (r = 0; r < R; r++) {
          loop(100);
        }
        exit(0);
      }
    }
  }
  /* the parent waiting for its children is not shown in the transcript */
}

Build and run:

gcc Nice.c loop.o
./a.out > junk &
top
./a.out
./a.out xx

Output?

$ top
  PID USER   PR NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
18503 pieter 20  0 4696 220 136 R   14  0.0 0:03.06 a.out
18507 pieter 20  0 4696 220 136 R   14  0.0 0:02.16 a.out
18508 pieter 20  0 4696 220 136 R   13  0.0 0:02.10 a.out
18509 pieter 21  1 4696 220 136 R   11  0.0 0:01.66 a.out
18510 pieter 21  1 4696 220 136 R   11  0.0 0:01.62 a.out
18511 pieter 21  1 4696 220 136 R   11  0.0 0:01.60 a.out
18512 pieter 22  2 4696 220 136 R    9  0.0 0:01.28 a.out
18513 pieter 22  2 4696 220 136 R    9  0.0 0:01.26 a.out
18514 pieter 22  2 4696 220 136 R    8  0.0 0:01.24 a.out

cpu=1 parent=9829 policy=0 loop=990000 us
cpu=1 child=9847 prio=0 started.
cpu=1 child=9847 r=0.
cpu=1 child=9848 prio=0 started.
cpu=1 child=9849 prio=0 started.
cpu=1 child=9847 r=1.
cpu=1 child=9850 prio=1 started.
cpu=1 child=9848 r=0.
cpu=1 child=9851 prio=1 started.
cpu=1 child=9853 prio=1 started.
cpu=1 child=9854 prio=2 started.
cpu=1 child=9849 r=0.
cpu=1 child=9855 prio=2 started.
cpu=1 child=9847 r=2.
cpu=1 child=9850 r=0.
cpu=1 child=9848 r=1.
cpu=1 child=9856 prio=2 started.
cpu=1 child=9851 r=0.
cpu=1 child=9849 r=1.
…
cpu=1 child=9850 finished.
cpu=1 child=9856 r=4.
cpu=1 child=9851 finished.
cpu=1 child=9853 r=6.
cpu=1 child=9853 finished.
cpu=1 child=9854 r=5.
cpu=1 child=9855 r=5.
cpu=1 child=9856 r=5.
cpu=1 child=9854 r=6.
cpu=1 child=9854 finished.
cpu=1 child=9856 r=6.
cpu=1 child=9856 finished.
cpu=1 parent=9829 finished.
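
Not on the slide: a minimal companion to Nice.c that raises its own nice value with setpriority and reads it back with getpriority; the value 5 is arbitrary. The NI column of the top listing above shows the corresponding values for the forked children.

/* getprio.c - set a nice value and read it back */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <errno.h>

int main(void)
{
  if (setpriority(PRIO_PROCESS, getpid(), 5) != 0)  /* be nicer: nice value 5 */
    perror("setpriority");
  errno = 0;                       /* getpriority may legally return -1 */
  int nv = getpriority(PRIO_PROCESS, getpid());
  if (errno != 0)
    perror("getpriority");
  else
    printf("pid=%d nice=%d\n", (int) getpid(), nv);
  return 0;
}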

Summary
- Maximising resource usage while minimising delays
- Decisions:
  - Long term: admission of processes
  - Medium term: swapping
  - Short term: CPU assignment to a ready process
- Criteria:
  - Response time: users
  - Throughput: system