CS444/544 Operating Systems II Scheduler Yeongjin Jang 05/28/19

Concurrency Lab 2 Tasks: resolve the deadlock issues (deadlock q1, q2, q3, q4) and implement a simple thread-safe array. Repository: https://gitlab.unexploitable.systems/root/concurrency2 (please 'fork' the repository to start). Due: 6/10

Quiz 2 This week. Attend any lab session to take it, but please do not take it more than once. Please study the sample quiz before taking the quiz: https://os.unexploitable.systems/l/sample_quiz_2_answer.pdf

Recap: Deadlock Theory Deadlock can only happen if all four conditions hold among the threads: mutual exclusion, hold-and-wait, no preemption, and circular wait. We can eliminate deadlock by removing any one of these conditions.

How to Remove Mutual Exclusion Do not use locks; use atomic operations instead. Replace locks with atomic primitives such as compare_and_swap(uint64_t *addr, uint64_t prev, uint64_t value): if *addr == prev, then update *addr = value (the lock cmpxchg instruction in x86). Lock-free version:
void add(int *val, int amt) {
    int old;
    do {
        old = *val;
    } while (!CompAndSwap(val, old, old + amt));
}
Lock-based version:
void add(int *val, int amt) {
    Mutex_lock(&m);
    *val += amt;
    Mutex_unlock(&m);
}
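
A minimal runnable sketch of the lock-free add above, assuming GCC/Clang's __sync_bool_compare_and_swap builtin (which compiles down to lock cmpxchg on x86); the counter/worker names and the iteration count are illustrative, not from the lab code:

#include <stdio.h>
#include <pthread.h>

/* Lock-free add: retry the compare-and-swap until no other thread
 * raced with us. __sync_bool_compare_and_swap returns true only if
 * *val still held 'old' and was atomically updated to old + amt. */
static void add(int *val, int amt)
{
    int old;
    do {
        old = *val;
    } while (!__sync_bool_compare_and_swap(val, old, old + amt));
}

static int counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        add(&counter, 1);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* expect 200000, no mutex used */
    return 0;
}

Compile with -pthread; both threads update the shared counter without ever taking a lock, and a failed compare-and-swap simply retries.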

Hold-and-Wait Definition: threads hold resources allocated to them (e.g., locks they have already acquired) while waiting for additional resources (e.g., locks they wish to acquire). Example:
Mutex_lock(&setA->lock);
Mutex_lock(&setB->lock);

How to Remove Hold-and-Wait Strategy: acquire all locks atomically, at once. A thread can release locks over time, but cannot acquire again until all have been released. How to do this? Use a meta lock, like this:
lock(&meta);
lock(&L1);
lock(&L2);
…
unlock(&meta);
// Critical section code
unlock(…);

Remove Hold-and-Wait
set_t *set_intersection(set_t *s1, set_t *s2) {
    Mutex_lock(&meta_lock);
    Mutex_lock(&s1->lock);
    Mutex_lock(&s2->lock);
    …
    Mutex_unlock(&s2->lock);
    Mutex_unlock(&s1->lock);
    Mutex_unlock(&meta_lock);
}
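
A minimal sketch of the meta-lock pattern with pthread mutexes; set_t, meta_lock, and this set_intersection are illustrative stand-ins for the slide's pseudocode, not the actual lab types:

#include <pthread.h>
#include <stddef.h>

/* Hypothetical set type guarded by a per-set lock (placeholder fields). */
typedef struct {
    pthread_mutex_t lock;
    /* ... set contents ... */
} set_t;

/* One global meta lock that serializes the lock-acquisition phase, so a
 * thread either acquires both set locks or none of them. */
static pthread_mutex_t meta_lock = PTHREAD_MUTEX_INITIALIZER;

set_t *set_intersection(set_t *s1, set_t *s2)
{
    pthread_mutex_lock(&meta_lock);    /* guard the acquisition phase */
    pthread_mutex_lock(&s1->lock);
    pthread_mutex_lock(&s2->lock);
    pthread_mutex_unlock(&meta_lock);  /* both locks held; others may now acquire */

    /* ... compute the intersection while holding both set locks ... */

    pthread_mutex_unlock(&s2->lock);
    pthread_mutex_unlock(&s1->lock);
    return NULL;                       /* placeholder result */
}

int main(void)
{
    set_t a, b;
    pthread_mutex_init(&a.lock, NULL);
    pthread_mutex_init(&b.lock, NULL);
    set_intersection(&a, &b);
    return 0;
}

Releasing meta_lock right after the acquisition phase (as on the previous slide) is enough to break hold-and-wait, because no two threads can be in their acquisition phases at once; holding it across the whole critical section, as the slide above does, is also safe but serializes every caller.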

No Preemption Definition: resources (e.g., locks) cannot be forcibly taken away from the threads that are holding them. Example:
lock(A);
lock(B);
…
If B is already acquired by another thread, the thread keeps holding A while it waits, so all other threads must wait to acquire A.

How to Remove No Preemption Release the lock you already hold if obtaining the next resource fails:
top:
    lock(A);
    if (trylock(B) == -1) {
        unlock(A);   // can't acquire B, so release A and retry
        goto top;
    }
    …
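
A minimal sketch of the same retry idea with pthread primitives; pthread_mutex_trylock returns 0 on success and a nonzero error (EBUSY) when the lock is already held. The names A, B, and acquire_both are illustrative:

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void acquire_both(void)
{
    for (;;) {
        pthread_mutex_lock(&A);
        if (pthread_mutex_trylock(&B) == 0)
            break;                  /* got both A and B */
        /* Could not get B: give A back ("preempt" ourselves) and retry,
         * so we never hold A while blocking on B. */
        pthread_mutex_unlock(&A);
        sched_yield();              /* let the thread holding B make progress */
    }

    /* ... critical section with both A and B held ... */

    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
}

int main(void)
{
    acquire_both();
    return 0;
}

Note that two threads running this loop against each other can livelock (each repeatedly grabs its first lock and gives it up); a short random back-off, or the lock ordering on the next slide, avoids that.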

Circular Wait Definition: there exists a circular chain of threads such that each thread holds a resource (e.g., a lock) that is requested by the next thread in the chain. Example: Thread 1 holds Lock A and wants Lock B, while Thread 2 holds Lock B and wants Lock A.

How to Remove Circular Wait A lock variable is usually referred to by a pointer, so enforce a consistent global order for acquiring locks, e.g.:
if (l1 > l2) {
    Mutex_lock(l1);
    Mutex_lock(l2);
} else {
    Mutex_lock(l2);
    Mutex_lock(l1);
}
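
A minimal sketch of this address-ordering idea with pthread mutexes; lock_pair is a hypothetical helper, and the uintptr_t cast just keeps the pointer comparison well defined:

#include <pthread.h>
#include <stdint.h>

/* Always take the mutex at the lower address first, so every thread
 * acquires any pair of locks in the same global order and no circular
 * wait can form. */
void lock_pair(pthread_mutex_t *l1, pthread_mutex_t *l2)
{
    if (l1 == l2) {                        /* same lock passed twice: take it once */
        pthread_mutex_lock(l1);
        return;
    }
    if ((uintptr_t)l1 < (uintptr_t)l2) {
        pthread_mutex_lock(l1);
        pthread_mutex_lock(l2);
    } else {
        pthread_mutex_lock(l2);
        pthread_mutex_lock(l1);
    }
}

int main(void)
{
    static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

    lock_pair(&m1, &m2);   /* same acquisition order as lock_pair(&m2, &m1) */
    pthread_mutex_unlock(&m1);
    pthread_mutex_unlock(&m2);
    return 0;
}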

Scheduler An algorithm that decides which process to run at a given moment of the system's execution: when does the scheduler switch which program is executing, and which process runs for the next cycle? Goals of the scheduler: efficiency, fairness, etc. Problems to avoid: starvation.

Multi-Programming Run multiple programs on a single CPU "simultaneously." Why? To increase CPU utilization and job throughput, because I/O operations are slow: while JOB 1 is waiting for an I/O operation (e.g., between calling recv() and recv() returning), the CPU can run JOB 2 instead of sitting idle, which gives better CPU utilization.

Scheduler An algorithm in the OS that determines which job (process, thread, etc.) to run at a given moment. The scheduler runs when: an interrupt occurs (preemptive), a job surrenders its execution rights (non-preemptive), or a new job has been created.

Non-preemptive/preemptive In non-preemptive systems, the scheduler waits until the scheduled process surrenders its execution rights (a voluntary context switch), e.g., a switch happens when the program calls recv/read, etc. This means that if a process never calls read/recv and never yields, it could run forever. In preemptive systems, the scheduler can interrupt a running process and take back the execution rights, e.g., on a timer interrupt or any other I/O interrupt. This is more responsive.

Goals of a Scheduler Maximize CPU utilization. Maximize job throughput (# of jobs finished per unit of time). Support responsive execution: minimize turnaround time (Tfinish - Tstart), minimize average wait time, and minimize average response time. The goals depend on the system's purpose: for batch processing of large amounts of data, job throughput is important; for interactive systems handling many small actions, low latency is important.
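
A tiny sketch of these metrics for a single job, assuming all times are in seconds; the struct and field names are illustrative, and the sample numbers correspond to JOB 3 in the FIFO example a few slides below (arrives at 0, first runs at 20, finishes at 22):

#include <stdio.h>

/* Per-job timestamps; field names are placeholders. */
typedef struct {
    int arrival;    /* when the job entered the queue  */
    int first_run;  /* when it first got the CPU       */
    int finish;     /* when it completed               */
    int burst;      /* total CPU time it actually used */
} job_times_t;

int main(void)
{
    job_times_t j = { .arrival = 0, .first_run = 20, .finish = 22, .burst = 2 };

    int turnaround = j.finish - j.arrival;      /* Tfinish - Tstart            */
    int waiting    = turnaround - j.burst;      /* time spent waiting in queue */
    int response   = j.first_run - j.arrival;   /* delay until first scheduled */

    printf("turnaround=%d wait=%d response=%d\n", turnaround, waiting, response);
    return 0;
}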

First-In-First-Out (FIFO) Scheduler A non-preemptive scheduler: schedule jobs in the order of their arrival time. Timeline: JOB 1 arrives, then JOB 2 arrives, then JOB 3 arrives; the jobs run in that order (JOB 1, then JOB 2, then JOB 3).

First-In-First-Out (FIFO) Scheduler A non-preemptive scheduler: schedule jobs in the order of their arrival time. Think about real-world scenarios: a grocery cashier line, or a drive-thru.

FIFO Scheduler Pros: a fair rule ("come earlier if you wish to be scheduled first"). Cons: long average wait time if long jobs and short jobs are mixed.

FIFO Scheduler Example: JOB 1 (10s), JOB 2 (10s), JOB 3 (2s), JOB 4 (2s), JOB 5 (2s); arrival sequence: 1, 2, 3, 4, 5. JOB 1 waits 0 seconds, JOB 2 waits 10 seconds, JOB 3 waits 20 seconds, JOB 4 waits 22 seconds, JOB 5 waits 24 seconds. Average wait time: 15.2 seconds.
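
A small sketch that reproduces the numbers above, assuming all five jobs arrive at time 0 and run in FIFO order; the burst lengths are the ones on the slide:

#include <stdio.h>

int main(void)
{
    /* Burst times in FIFO (arrival) order: JOB 1 .. JOB 5. */
    int burst[] = {10, 10, 2, 2, 2};
    int n = 5, elapsed = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("JOB %d waits %d seconds\n", i + 1, elapsed);
        total_wait += elapsed;   /* wait = time spent behind earlier jobs */
        elapsed += burst[i];     /* this job now runs to completion       */
    }
    printf("AVG: %.1f seconds of wait time\n", (double)total_wait / n);
    return 0;
}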

What if we schedule like this? Same jobs, but run in the order 3, 4, 5, 1, 2 (short jobs first): JOB 3 waits 0 seconds, JOB 4 waits 2 seconds, JOB 5 waits 4 seconds, JOB 1 waits 6 seconds, JOB 2 waits 16 seconds. Average wait time: 5.6 seconds vs. 15.2 seconds for FIFO.

Shortest Job First (SJF) Always schedule the job that will finish earlier than the others. Also called Shortest Remaining Job First (SRJF).

SJF With the same jobs, SJF runs the shortest jobs first (order 3, 4, 5, 1, 2): JOB 3 waits 0 seconds, JOB 4 waits 2 seconds, JOB 5 waits 4 seconds, JOB 1 waits 6 seconds, JOB 2 waits 16 seconds. Average wait time: 5.6 seconds vs. 15.2 seconds for FIFO.
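
The same calculation with an SJF ordering sketch: sort the jobs by burst length before replaying the FIFO arithmetic, assuming all jobs arrive at time 0 (the job_t type and by_burst comparator are illustrative):

#include <stdio.h>
#include <stdlib.h>

typedef struct { int id; int burst; } job_t;

/* Order jobs by burst length, shortest first. */
static int by_burst(const void *a, const void *b)
{
    return ((const job_t *)a)->burst - ((const job_t *)b)->burst;
}

int main(void)
{
    job_t jobs[] = { {1, 10}, {2, 10}, {3, 2}, {4, 2}, {5, 2} };
    int n = 5, elapsed = 0, total_wait = 0;

    qsort(jobs, n, sizeof(job_t), by_burst);   /* SJF: shortest job first */

    for (int i = 0; i < n; i++) {
        printf("JOB %d waits %d seconds\n", jobs[i].id, elapsed);
        total_wait += elapsed;
        elapsed += jobs[i].burst;
    }
    printf("AVG: %.1f seconds of wait time\n", (double)total_wait / n);
    return 0;
}

qsort is not stable, so jobs with equal bursts may print in a different order than the slide, but the 5.6-second average is the same.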

Problems: Job Finish Time How do we know a job's finish time? The user program declares it. What if the program extends it? A simple cheat is possible: declare the finish time as 1 second, extend it during execution (another 1 second), and get scheduled again and again.

Problems: Starvation "I am a job that requires 10 seconds to finish. Let me be scheduled!" Before it gets scheduled, one hundred 1-second jobs arrive, and SJF keeps picking them first, so JOB 1 (10) stays at the back of the line.

Problems: Starvation After 99 of those small jobs finish, another hundred 1-second jobs arrive, so JOB 1 (10) still never reaches the front.

SJF: Starvation A longer job might never be scheduled at all.

Round-Robin - Preemptive A fairer scheduler. Each task gets a fixed time period (a quantum) of execution; if the task does not complete, it goes back to the end of the line. We must pick a time quantum: what happens if the quantum is too long, say, 10 seconds? What happens if it is too short, say, 1 nanosecond?

Round-Robin Example Jobs: JOB 1 (10), JOB 2 (10), JOB 3 (2), JOB 4 (2), JOB 5 (2). The timeline alternates quanta among the jobs; the 2-second jobs finish after a few rounds, while J1 and J2 keep alternating (J1, J2, J1, J2, …) until they finish.
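
A small round-robin simulation sketch with a 1-second quantum, assuming the five jobs above all arrive at time 0; it prints each job's finish time so the effect of the quantum is visible (the variable names are illustrative):

#include <stdio.h>

int main(void)
{
    int remaining[] = {10, 10, 2, 2, 2};   /* JOB 1 .. JOB 5, seconds left */
    int n = 5, quantum = 1, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;                              /* already finished       */
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            time += run;                               /* job i runs one quantum */
            remaining[i] -= run;
            if (remaining[i] == 0) {
                printf("JOB %d finishes at t=%d\n", i + 1, time);
                left--;
            }
        }
    }
    return 0;
}

The short jobs finish early (good response time), while the long jobs pay for the extra rounds; a longer quantum moves the behavior toward FIFO, and a very short one spends most of the time context switching.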

Round-Robin Pros: fair, and no starvation. Cons: many context switches.