Scheduling Non-Preemptive Policies

Scheduling in General

Definition: deciding which job to serve next.

Types of scheduling policies:
- Preemptive (the job in service can be interrupted by another job) vs. non-preemptive (the job in service must complete its service before service starts on another job)
- Aware of job size (e.g., smaller jobs are served first) vs. unaware of job size
- Work-conserving (the server is never idle while there is work in the system) vs. non-work-conserving (e.g., the server waits for a short job before starting on a big job)

Sample policies:
- Non-preemptive & non-size-based: FCFS, LCFS, RANDOM, PRIORITY
- Preemptive & non-size-based: PRIORITY, Processor-Sharing (PS), Preemptive-LCFS, Foreground-Background (FB - the job with the smallest CPU age is served)
- Non-preemptive & size-based: Shortest-Job-First (SJF)
- Preemptive & size-based: Preemptive-Shortest-Job-First (PSJF), Shortest-Remaining-Processing-Time (SRPT)

Properties of Non-Size-Based Non-Preemptive Policies

Theorem 29.2: All such policies have the same distribution of the number of jobs in the system.
- Hence both E[N] and E[T] are identical across these policies.
- The proof relies on the DTMC embedded at departure instants.
Note that Var(T) is NOT the same across policies:
Var(T)_FCFS < Var(T)_RANDOM < Var(T)_LCFS
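Theorem 29.2 and the variance ordering can be sanity-checked by simulation. The sketch below is my own illustration, not part of the lecture (the function and parameter names are assumptions): it simulates an M/M/1 queue under FCFS, non-preemptive LCFS, and RANDOM service orders and estimates the mean and variance of response time T.

```python
import random

def simulate(discipline, lam=0.7, mu=1.0, n_jobs=200_000, seed=1):
    """Simulate M/M/1 under a non-preemptive, non-size-based discipline.
    discipline: 'fcfs', 'lcfs', or 'random'. Returns (mean T, var T)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        arrivals.append((t, rng.expovariate(mu)))  # (arrival time, size)
    queue, free_at, resp, i = [], 0.0, [], 0
    while len(resp) < n_jobs:
        # admit every job that has arrived by the time the server frees up
        while i < n_jobs and arrivals[i][0] <= free_at:
            queue.append(arrivals[i]); i += 1
        if not queue:  # server idles until the next arrival
            queue.append(arrivals[i]); free_at = arrivals[i][0]; i += 1
        if discipline == 'fcfs':
            job = queue.pop(0)           # oldest job present
        elif discipline == 'lcfs':
            job = queue.pop()            # most recent arrival present
        else:
            job = queue.pop(rng.randrange(len(queue)))  # uniformly random
        free_at = max(free_at, job[0]) + job[1]
        resp.append(free_at - job[0])    # response time of this job
    m = sum(resp) / n_jobs
    return m, sum((x - m) ** 2 for x in resp) / n_jobs
```

All three disciplines should produce (statistically) the same mean response time, while the sample variances should order as FCFS < RANDOM < LCFS.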

“Proof” of Theorem 29.2

- In the M/G/1/FCFS queue, we focused on departure instants and analyzed the embedded DTMC with transition probabilities Pij.
- Those probabilities are determined by the number of jobs that arrive during one service time.
- This gave us the limiting probabilities πi for the number of jobs in the system.
- The argument is unchanged if we alter the service order in a manner that is independent of job size; the transition probabilities of the DTMC are unaffected.
- If the service order depended on job size, however, it would affect the distribution of the number of jobs that arrive during one service time.

Analyzing LCFS – (1)

The approach relies on deriving the Laplace transform of the waiting time. Recall the following relations from our analysis of M/G/1/FCFS:
- B_W(s) = W(s+λ–λB(s)): Laplace transform of a busy period started by work with transform W(s), where B(s) is the Laplace transform of an ordinary busy period made up of jobs of size S
- S_e(s) = (1–S(s))/(sE[S]): Laplace transform of the excess service time, where S(s) is the Laplace transform of the service time

For LCFS, conditioning on whether the system is empty or busy on arrival:
TQ_LCFS(s) = (1–ρ)TQ_LCFS(s | idle) + ρTQ_LCFS(s | busy)
where TQ_LCFS(s | idle) = 1 and TQ_LCFS(s | busy) = S_e(s+λ–λB(s)): the waiting time of an arrival that finds the system busy is the duration of a busy period started by the excess S_e, since jobs that arrive later are served first.

Substituting S_e(s) = (1–S(s))/(sE[S]) (from Problem 25.14):
TQ_LCFS(s | busy) = [1–S(s+λ–λB(s))] / [(s+λ–λB(s))E[S]]
= [1–B(s)] / [(s+λ–λB(s))E[S]]   (because S(s+λ–λB(s)) = B(s))

And therefore
TQ_LCFS(s) = (1–ρ) + λ[1–B(s)]/(s+λ–λB(s))

Analyzing LCFS – (2)

Differentiating TQ_LCFS(s) = (1–ρ) + λ[1–B(s)]/(s+λ–λB(s)) twice at s = 0 (and applying L'Hôpital's rule several times) yields
E[TQ²]_LCFS = λ²E[S²]²/(2(1–ρ)³) + λE[S³]/(3(1–ρ)²)
In contrast, we recall that for FCFS
E[TQ²]_FCFS = λ²E[S²]²/(2(1–ρ)²) + λE[S³]/(3(1–ρ))
So that
E[TQ²]_LCFS = E[TQ²]_FCFS/(1–ρ)
The two policies have the same mean waiting time (Theorem 29.2), but LCFS has a strictly larger second moment, and hence a strictly larger variance.
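For exponential job sizes (M/M/1), B(s) has a closed form, which makes the known relation E[TQ²]_LCFS = E[TQ²]_FCFS/(1–ρ) easy to check numerically: the second derivative of the waiting-time transform at s = 0 equals E[TQ²]. A minimal sketch (the helper names are my own), approximating the derivative by a central second difference:

```python
import math

def b_mm1(s, lam, mu):
    """M/M/1 busy-period transform: the root of B = mu / (mu + s + lam - lam*B)."""
    a = s + lam + mu
    return (a - math.sqrt(a * a - 4.0 * lam * mu)) / (2.0 * lam)

def tq_lcfs(s, lam, mu):
    """Waiting-time transform TQ(s) = (1 - rho) + lam*(1 - B(s)) / (s + lam - lam*B(s))."""
    if s == 0.0:
        return 1.0  # the Laplace transform of any distribution equals 1 at s = 0
    b = b_mm1(s, lam, mu)
    return (1.0 - lam / mu) + lam * (1.0 - b) / (s + lam - lam * b)

lam, mu, h = 0.5, 1.0, 1e-3
rho = lam / mu
# E[TQ^2] is TQ''(0); approximate it with a central second difference
m2_lcfs = (tq_lcfs(h, lam, mu) - 2.0 * tq_lcfs(0.0, lam, mu) + tq_lcfs(-h, lam, mu)) / h**2
# For M/M/1 FCFS, TQ = 0 w.p. 1-rho, else Exp(mu-lam), so E[TQ^2] = 2*rho/(mu-lam)^2
m2_fcfs = 2.0 * rho / (mu - lam) ** 2
```

With λ = 0.5 and μ = 1, the finite-difference estimate of E[TQ²]_LCFS should land close to m2_fcfs/(1–ρ).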

Non-Preemptive Priority – (1)

The server always chooses a job from the highest-priority non-empty queue, but the job in service cannot be interrupted.
- λk = λpk: arrival rate of priority-k jobs, with Σk pk = 1
- ρk = λkE[Sk] and ρ = Σk ρk < 1
- Similarly, E[S] = Σk pkE[Sk], E[S²] = Σk pkE[Sk²], and E[Se] = E[S²]/(2E[S])

Consider a "tagged" priority-1 job in a stable system. It waits for the job in service plus the priority-1 jobs already in queue:
E[TQ(1)]_NP = ρE[Se] + E[NQ(1)]E[S1]
= ρE[Se] + λ1E[TQ(1)]_NP E[S1]   (by Little's law, E[NQ(1)] = λ1E[TQ(1)]_NP)
= ρE[Se] + ρ1E[TQ(1)]_NP
So that
E[TQ(1)]_NP = ρE[Se]/(1–ρ1) = λE[S²]/(2(1–ρ1))

Non-Preemptive Priority – (2)

Next, consider a tagged priority-2 job. It waits for the job in service, the jobs already in queues 1 and 2, and the priority-1 jobs that arrive while it waits:
E[TQ(2)]_NP = ρE[Se] + E[NQ(1)]E[S1] + E[NQ(2)]E[S2] + λ1E[TQ(2)]_NP E[S1]
= ρE[Se] + ρ1E[TQ(1)]_NP + ρ2E[TQ(2)]_NP + ρ1E[TQ(2)]_NP
Collecting terms:
E[TQ(2)]_NP (1–ρ1–ρ2) = ρE[Se] + ρ1E[TQ(1)]_NP
= ρE[Se] + ρ1ρE[Se]/(1–ρ1)   (using E[TQ(1)]_NP = ρE[Se]/(1–ρ1))
= ρE[Se]/(1–ρ1)
So that
E[TQ(2)]_NP = ρE[Se]/[(1–ρ1)(1–ρ1–ρ2)] = λE[S²]/[2(1–ρ1)(1–ρ1–ρ2)]

Non-Preemptive Priority – (3)

In general, for a priority-k job:
E[TQ(k)]_NP = ρE[Se]/[(1–Σ{i=1 to k}ρi)(1–Σ{i=1 to k–1}ρi)]
= λE[S²]/[2(1–Σ{i=1 to k}ρi)(1–Σ{i=1 to k–1}ρi)]
≈ (λE[S²]/2) · 1/(1–Σ{i=1 to k}ρi)²
= [(1–ρ)/(1–Σ{i=1 to k}ρi)²] · E[TQ]_FCFS

Comparing to FCFS: at high load, "high"-priority classes have Σ{i=1 to k}ρi << ρ, so that E[TQ(k)]_NP < E[TQ]_FCFS.
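The general formula is a one-liner to implement. Below is a sketch (the function name and argument layout are my own), together with a check against the M/G/1 conservation law Σk ρk·E[TQ(k)] = ρ·E[TQ]_FCFS, which holds for any non-preemptive, work-conserving, size-blind discipline:

```python
def e_tq_np(k, lams, ess, es2s):
    """Mean waiting time of a class-k job (k = 1 is highest priority) under
    non-preemptive priority in M/G/1:
        E[TQ(k)] = lam*E[S^2] / (2*(1 - sum_{i<=k} rho_i)*(1 - sum_{i<=k-1} rho_i))
    lams[i] = arrival rate, ess[i] = E[S_i], es2s[i] = E[S_i^2] of class i+1."""
    lam = sum(lams)
    es2 = sum((l / lam) * s2 for l, s2 in zip(lams, es2s))  # E[S^2] = sum p_k E[S_k^2]
    rho = [l * s for l, s in zip(lams, ess)]
    return lam * es2 / (2.0 * (1.0 - sum(rho[:k])) * (1.0 - sum(rho[:k - 1])))
```

For example, with two exponential classes (λ1 = 0.3, E[S1] = 1, E[S1²] = 2; λ2 = 0.2, E[S2] = 1.5, E[S2²] = 4.5), λE[S²]/2 = 0.75, so class 1 waits 0.75/0.7 and class 2 waits 0.75/(0.7·0.4).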

Shortest-Job-First (SJF)

Priority maps to job size: the smaller the job, the higher its priority. Let f(t) be the p.d.f. of job sizes.
We can model such a system as a multi-class priority queue in which the number n of classes is very large, with (xk–xk-1) → 0 as n → ∞.
- Load of jobs in classes 1 to k, i.e., jobs of size < xk:
Σ{i=1 to k}ρi → λ∫{t=0 to xk} t f(t)dt
- Load of jobs in classes 1 to k–1, i.e., jobs of size < xk-1:
Σ{i=1 to k–1}ρi → λ∫{t=0 to xk-1} t f(t)dt → λ∫{t=0 to xk} t f(t)dt   (as xk-1 → xk)
This implies
E[TQ(x)]_SJF = (λE[S²]/2) · 1/(1–λ∫{t=0 to x} t f(t)dt)²
And therefore
E[TQ]_SJF = (λE[S²]/2) · ∫{x=0 to ∞} f(x)dx/(1–λ∫{t=0 to x} t f(t)dt)²

Comparing SJF to FCFS

Let ρx = λF(x)·∫{t=0 to x} t[f(t)/F(x)]dt = λ∫{t=0 to x} t f(t)dt: the arrival rate of jobs of size < x times the expected size of such jobs.
This allows us to rewrite
E[TQ(x)]_SJF = (λE[S²]/2) · 1/(1–ρx)²
while
E[TQ(x)]_FCFS = E[TQ]_FCFS = (λE[S²]/2) · 1/(1–ρ)
- For small jobs, ρx is close to 0, so E[TQ(x)]_SJF < E[TQ(x)]_FCFS.
- When the job size distribution is heavy-tailed, E[TQ]_SJF << E[TQ]_FCFS because most jobs are small.
- Note, though, that the presence of the term E[S²] in the numerator of E[TQ(x)]_SJF means that small jobs are still affected by large jobs: a small job can still occasionally get stuck behind a large job that is already in service.
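As a throwaway numerical illustration (my own sketch, not from the slides; it assumes S ~ Exp(1), so E[S²] = 2, and uses a simple trapezoid rule), we can evaluate E[TQ]_SJF and compare it against E[TQ]_FCFS = λE[S²]/(2(1–ρ)):

```python
import math

def e_tq_sjf_exp(lam, upper=40.0, n=200_000):
    """E[TQ]_SJF for S ~ Exp(1): (lam*E[S^2]/2) * integral of f(x)/(1 - rho_x)^2,
    where rho_x = lam * int_0^x t e^{-t} dt = lam * (1 - e^{-x}(1 + x))."""
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        rho_x = lam * (1.0 - math.exp(-x) * (1.0 + x))
        w = 0.5 if i in (0, n) else 1.0        # trapezoid-rule endpoint weights
        total += w * math.exp(-x) / (1.0 - rho_x) ** 2
    return lam * total * h                      # lam * E[S^2] / 2 = lam here
```

At λ = 0.7 (so ρ = 0.7), the SJF mean wait comes out noticeably smaller than E[TQ]_FCFS = 7/3 ≈ 2.33, even though exponential sizes are not heavy-tailed; the gap widens as load grows.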