Burke's Theorem, Reversibility, and Jackson Networks of Queues



Reverse CTMC

The basic idea is to run time in reverse: departures become arrivals and vice versa. The reverse chain is also a CTMC.

Some basic properties of the reverse chain:
- The fraction of time spent in state i is the same in the forward and reverse chains: π*_i = π_i
- The rate of transitions from i to j in the reverse chain equals the rate of transitions from j to i in the forward chain: π*_i q*_ij = π_j q_ji
- If a CTMC is time-reversible, i.e., π_i q_ij = π_j q_ji with Σ_i π_i = 1, then the forward and reverse chains are statistically identical and q*_ij = q_ij
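These reverse-chain relations can be checked numerically. The sketch below uses a made-up 3-state generator matrix (purely for illustration, not from the slides): it computes π, builds the reverse rates q*_ij = π_j q_ji / π_i, and verifies that π is also stationary for the reverse chain.

```python
from fractions import Fraction

# Made-up 3-state generator matrix (rows sum to 0), purely for illustration.
Q = [[Fraction(-3), Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(-2), Fraction(1)],
     [Fraction(2), Fraction(2), Fraction(-4)]]
n = len(Q)

def solve(A, b):
    """Gauss-Jordan solve of A x = b over exact rationals."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    k = len(M)
    for c in range(k):
        piv = next(r for r in range(c, k) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(k):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[r][k] for r in range(k)]

# Stationary distribution: pi Q = 0 with sum(pi) = 1
# (drop one balance equation in favor of the normalization constraint).
A = [[Q[j][i] for j in range(n)] for i in range(n)]  # Q transposed
A[n - 1] = [Fraction(1)] * n
b = [Fraction(0)] * (n - 1) + [Fraction(1)]
pi = solve(A, b)

# Reverse-chain rates: q*_ij = pi_j * q_ji / pi_i (diagonal unchanged).
Qrev = [[pi[j] * Q[j][i] / pi[i] for j in range(n)] for i in range(n)]

# pi is also stationary for the reverse chain: pi Q* = 0 componentwise.
flow = [sum(pi[i] * Qrev[i][j] for i in range(n)) for j in range(n)]
print(pi, flow)
```

Since this example chain is not time-reversible, Q* differs from Q, yet both chains share the same stationary distribution, exactly as the properties above state.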

Burke's Theorem

Consider an M/M/k system with arrival rate λ. At steady state:
- The departure process is Poisson(λ)
- At any time t, the number of jobs in the system, N(t), is independent of the sequence of departure times prior to t

Implications:
- Tandem M/M/k queues can be analyzed as independent M/M/k queues
- Acyclic networks of M/M/k queues with probabilistic routing can be analyzed as networks of independent M/M/k queues, since the arrival process to each queue is Poisson
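To illustrate the first implication, a quick sketch (with illustrative rates, not taken from the slides) computes tandem-queue metrics by treating the two stations as independent M/M/1 queues:

```python
# Two M/M/1 queues in tandem with illustrative rates (not from the slides).
lam, mu1, mu2 = 1.0, 2.0, 4.0

# Burke: departures of queue 1 are Poisson(lam), so queue 2 is M/M/1 too,
# and at steady state the two queue lengths are independent.
rho1, rho2 = lam / mu1, lam / mu2
EN1 = rho1 / (1 - rho1)   # mean jobs at queue 1
EN2 = rho2 / (1 - rho2)   # mean jobs at queue 2
EN = EN1 + EN2            # means add by independence
ET = EN / lam             # end-to-end response time, by Little's Law
print(EN1, EN2, EN, ET)
```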

The Case of Cyclic Networks

Burke's Theorem does not hold for cyclic networks (the arrival process is not Poisson). Consider the following example: a very low external arrival rate Poisson(λ≈0), a very fast server with rate μ, and most jobs repeating (feedback with probability 0.99, exit with probability 0.01). The independent-increments property is clearly not valid (the probability of another arrival is much higher just after an arrival), so the arrival process is not Poisson. The system can, however, still be represented by a Markov chain (a resubmitted job is indistinguishable from a new one), so there is hope.

Markov Chain for the Cyclic Network

[Figure: the feedback queue above with Poisson(λ≈0) arrivals, a fast server μ, feedback probability 0.99 and exit probability 0.01, together with its birth-death chain over states 0, 1, 2, 3, 4, …; arrivals occur at rate λ and effective departures at rate 0.01μ]

This is the same Markov chain as a standard M/M/1 queue with ρ = λ/(0.01μ).

Jackson Networks

A Jackson network is a set of k single-server queues (servers 1, 2, …, k) with infinite waiting rooms, probabilistic routing, and exponentially distributed service times. External arrivals, if any (open networks), are Poisson. The total arrival rate at each server is the sum of the external arrival rate and the internal transition rates into that server; these rates can be computed by solving a set of linear equations.

Solving Jackson networks:
- Write the global balance equations: the rate of leaving state (n1,n2,…,nk) (due to an outside arrival, a departure to the outside from some server, or a transfer from one server to another) equals the rate of entering it
- Identify "local" balance equations (the art is in figuring out what should balance what)
- Guess a solution for those balance equations and verify it
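The traffic equations λ_i = r_i + Σ_j λ_j P_ji form a linear system that is easy to solve mechanically. The sketch below (the two-server routing is a hypothetical example) rearranges them as Σ_j (I − P)[j][i] λ_j = r_i and solves exactly by Gauss-Jordan elimination:

```python
from fractions import Fraction as F

def traffic_rates(r, P):
    """Solve lambda_i = r_i + sum_j lambda_j P[j][i] exactly.

    Rearranged as sum_j (I - P)[j][i] * lambda_j = r_i, a k x k system
    solved by Gauss-Jordan elimination over rationals.
    """
    k = len(r)
    M = [[(F(1) if i == j else F(0)) - P[j][i] for j in range(k)] + [r[i]]
         for i in range(k)]
    for c in range(k):
        piv = next(row for row in range(c, k) if M[row][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for row in range(k):
            if row != c and M[row][c] != 0:
                M[row] = [x - M[row][c] * y for x, y in zip(M[row], M[c])]
    return [M[i][k] for i in range(k)]

# Hypothetical 2-server example (a CPU/IO loop): external Poisson(lam)
# into server 1; server 1 exits w.p. p, else goes to server 2; server 2
# always returns to server 1.
lam, p = F(1), F(1, 2)
r = [lam, F(0)]
P = [[F(0), F(1) - p],   # server 1 -> server 2 w.p. 1-p
     [F(1), F(0)]]       # server 2 -> server 1 w.p. 1
lams = traffic_rates(r, P)
print(lams)  # lambda1 = lam/p = 2, lambda2 = lam*(1-p)/p = 1
```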

Jackson Networks – Main Results

An open Jackson network with k servers has the product form
P(n1,n2,…,nk) = ρ1^n1 (1-ρ1) · ρ2^n2 (1-ρ2) · … · ρk^nk (1-ρk)
where ρi = λi/μi and (λ1,λ2,…,λk) is the solution of the set of equations λi = ri + Σj λj Pji (ri is the external arrival rate to server i, and Pji is the fraction of departures from server j going to server i). All queues behave like M/M/1 queues even though the arrival processes are not Poisson.

A closed Jackson network with k servers has the product form
P(n1,n2,…,nk) = C ρ1^n1 ρ2^n2 … ρk^nk
where ρi = λi/μi, C = [Σ_{n1+…+nk=N} ρ1^n1 ρ2^n2 … ρk^nk]^-1 is a normalization constant, and the λi's are any solution of the simultaneous rate equations λi = Σj λj Pji.
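For small closed networks, the normalization constant and state probabilities can be computed by brute force, enumerating all states with n1+…+nk = N. A sketch with assumed loads ρ = (1/2, 1/3) and N = 2 (illustrative values, not from the slides):

```python
from fractions import Fraction as F
from itertools import combinations
from math import prod

def compositions(N, k):
    """All k-tuples of nonnegative integers summing to N (stars and bars)."""
    for bars in combinations(range(N + k - 1), k - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(N + k - 1 - prev - 1)
        yield tuple(parts)

def closed_jackson(rho, N):
    """Brute-force product-form probabilities of a closed Jackson network."""
    w = {n: prod(r**ni for r, ni in zip(rho, n))
         for n in compositions(N, len(rho))}
    C = 1 / sum(w.values())          # normalization constant
    return C, {n: C * wn for n, wn in w.items()}

# Assumed loads rho_i = lambda_i / mu_i for a 2-server network, N = 2 jobs.
C, pi = closed_jackson([F(1, 2), F(1, 3)], 2)
print(C, pi)
```

The state space has (N+k-1 choose k-1) elements, so this direct enumeration only works for small N and k; mean value analysis (later slides) avoids it.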

Open Network Example

A CPU (rate μ1) feeds an I/O server (rate μ2) with probability 1-p; jobs exit after the CPU with probability p. External arrivals to the CPU are Poisson(λ).

Rate equations:
λ1 = λ + λ2 ⇒ λ1 = λ/p ⇒ ρ1 = λ/(pμ1)
λ2 = (1-p)λ1 ⇒ λ2 = λ(1-p)/p ⇒ ρ2 = λ(1-p)/(pμ2)

State probabilities:
π(n1,n2) = ρ1^n1 ρ2^n2 (1-ρ1)(1-ρ2), n1, n2 = 0,1,2,…
E[N] = E[N1] + E[N2] = ρ1/(1-ρ1) + ρ2/(1-ρ2)
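Plugging numbers into these formulas, assuming for illustration λ = 3/10, p = 1/2, and μ1 = μ2 = 1 (values not taken from the slides):

```python
from fractions import Fraction as F

# Illustrative parameters (not from the slides).
lam, p = F(3, 10), F(1, 2)   # external arrival rate, exit probability
mu1, mu2 = F(1), F(1)        # CPU and I/O service rates

lam1 = lam / p               # lambda1 = lam + lambda2  =>  lam/p
lam2 = lam * (1 - p) / p     # lambda2 = (1-p) * lambda1
rho1, rho2 = lam1 / mu1, lam2 / mu2
EN = rho1 / (1 - rho1) + rho2 / (1 - rho2)   # means add (product form)
print(rho1, rho2, EN)        # 3/5, 3/10, 27/14
```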

Closed Network Example

Two servers with μ1 = 5 and μ2 = 4, and N = 2 jobs. Server 1 routes to server 2 with probability 0.4 (and back to itself with probability 0.6); server 2 routes to server 1 with probability 0.5 (and back to itself with probability 0.5).

Rate equations:
λ1 = 0.6λ1 + 0.5λ2 ⇒ λ2 = (4/5)λ1
λ2 = 0.4λ1 + 0.5λ2 ⇒ λ2 = (4/5)λ1 (only one independent equation)
Choose λ1 = 5 and therefore λ2 = 4, so that ρ1 = ρ2 = 1

State probabilities (for N = 2):
π(n1,n2) = C ρ1^n1 ρ2^n2, where n1+n2 = 2, i.e., over the states (0,2), (2,0), (1,1)
π(0,2) + π(2,0) + π(1,1) = 3C = 1 ⇒ C = 1/3
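A minimal brute-force check of this example: enumerate the three states with ρ1 = ρ2 = 1 and normalize.

```python
from fractions import Fraction as F

# The example's loads: lambda1 = 5, mu1 = 5 and lambda2 = 4, mu2 = 4.
rho1, rho2 = F(1), F(1)
states = [(n1, 2 - n1) for n1 in range(3)]            # (0,2), (1,1), (2,0)
weights = {s: rho1**s[0] * rho2**s[1] for s in states}
C = 1 / sum(weights.values())                          # normalization
pi = {s: C * w for s, w in weights.items()}
print(C, pi)  # C = 1/3 and each of the three states has probability 1/3
```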

Extensions – Open Classed Networks

The results extend to classed networks, i.e., networks with k servers and l job classes:
- Same service rate μi for all classes at server i, but jobs can change class after service (from c to c')
- Different external arrival rates: ri(c) for class c at server i
- Different routing probabilities per class: Pij(c)(c') is the probability that, on completing service at server i, a class c job moves to server j as a class c' job (this can be used to emulate different per-class job sizes)

The arrival rate for class c at server i satisfies the arrival rate equations
λi(c) = ri(c) + Σ_{j=1 to k} Σ_{c'=1 to l} λj(c') Pji(c')(c)

Network state: z = (z1,z2,…,zk), where zi = (ci(1),ci(2),…,ci(ni)) and ci(j) is the class of the job in position j, j = 1,2,…,ni, at server i

State probabilities:
π(z1,z2,…,zk) = Π_{i=1 to k} P{state at server i is zi}, where
P{state at server i is zi} = (1-ρi) [λi(ci(1)) λi(ci(2)) … λi(ci(ni))] / μi^ni

Aggregate state probabilities:
P(n1,n2,…,nk) = Π_{i=1 to k} P{ni jobs at server i} = Π_{i=1 to k} ρi^ni (1-ρi),
where ρi = λi/μi and λi = Σ_{c=1 to l} λi(c)

Distribution of Job Classes – Example with Two Classes

In a two-class system, let ρi(c) = λi(c)/μi and ρi = ρi(1) + ρi(2). Summing the classed state probabilities above over the (s+t choose s) orderings of the jobs gives the probability of s class 1 jobs and t class 2 jobs at server i:
P{server i has s class 1 jobs and t class 2 jobs} = (s+t choose s) (1-ρi) ρi(1)^s ρi(2)^t
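A quick numerical check of this two-class formula, with assumed per-class loads (illustrative values): summing over all splits of n jobs recovers the M/M/1 marginal (1-ρ)ρ^n, as the binomial theorem predicts.

```python
from fractions import Fraction as F
from math import comb

# Assumed per-class loads at one server: rho_c = lambda_i(c) / mu_i.
rho1, rho2 = F(1, 4), F(1, 3)
rho = rho1 + rho2

def p_classes(s, t):
    """P{s class-1 jobs and t class-2 jobs at the server}."""
    return comb(s + t, s) * (1 - rho) * rho1**s * rho2**t

# Summing over all splits of n jobs recovers the M/M/1 marginal
# (1 - rho) * rho**n by the binomial theorem.
n = 4
marginal = sum(p_classes(s, n - s) for s in range(n + 1))
print(marginal == (1 - rho) * rho**n)  # True
```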

CPU & I/O Bound System

A CPU (server 1, μ1 = 2) and an I/O device (server 2, μ2 = 1) serve two job classes, CPU-bound (C) and I/O-bound (I), with external rates rC1 = 0.2 (class C at server 1) and rI2 = 0.25 (class I at server 2), and routing probabilities PC1,1 = 0.65, PC1,2 = 0.05, PC1,out = 0.3, PC2,1 = 1, PI1,1 = 0.05, PI1,2 = 0.95, PI2,1 = 0.1, PI2,2 = 0.5, PI2,out = 0.4.

Rate equations:
λ1C = rC1 + λ1C PC1,1 + λ2C PC2,1 ⇒ λ1C = 0.2 + 0.65λ1C + λ2C
λ2C = rC2 + λ1C PC1,2 + λ2C PC2,2 ⇒ λ2C = 0.05λ1C
λ1I = rI1 + λ1I PI1,1 + λ2I PI2,1 ⇒ λ1I = 0.05λ1I + 0.1λ2I
λ2I = rI2 + λ1I PI1,2 + λ2I PI2,2 ⇒ λ2I = 0.25 + 0.95λ1I + 0.5λ2I

Solving those two systems of equations yields:
λ1C = 2/3, λ2C = 1/30, λ1I = 5/76, λ2I = 5/8
λ1 = λ1C + λ1I = 0.7325, ρ1 = λ1/μ1 = 0.3663
λ2 = λ2C + λ2I = 0.6583, ρ2 = λ2/μ2 = 0.6583

This immediately gives, for i = 1,2: E[Ni] = ρi/(1-ρi) and E[Ti] = E[Ni]/λi.

More interestingly, what is E[TC] or E[TI]? Letting E[ViC] be the expected number of visits a class C job makes to server i:
E[TC] = E[V1C] · E[T1] + E[V2C] · E[T2]
E[V1C] = 1 + 0.65 E[V1C] + E[V2C], E[V2C] = 0.05 E[V1C]
⇒ E[V1C] = 3.333, E[V2C] = 0.167, so that E[TC] = 3.117

Similarly, we can compute E[N1C]: E[N1C] = E[N1] · p, where p is the fraction of CPU-bound jobs at server 1, i.e., p = λ1C/(λ1C + λ1I)
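The two 2x2 systems above are simple enough to solve by substitution; the sketch below reproduces the slide's numbers with exact rational arithmetic:

```python
from fractions import Fraction as F

# Class-C (CPU-bound) equations: l1C = 0.2 + 0.65*l1C + l2C, l2C = 0.05*l1C
l1C = F(2, 10) / (1 - F(65, 100) - F(5, 100))   # after substituting l2C
l2C = F(5, 100) * l1C

# Class-I (I/O-bound) equations: l1I = 0.05*l1I + 0.1*l2I,
#                                l2I = 0.25 + 0.95*l1I + 0.5*l2I
ratio = F(10, 100) / (1 - F(5, 100))            # l1I = ratio * l2I
l2I = F(25, 100) / (1 - F(50, 100) - F(95, 100) * ratio)
l1I = ratio * l2I

mu1, mu2 = F(2), F(1)
lam1, lam2 = l1C + l1I, l2C + l2I
rho1, rho2 = lam1 / mu1, lam2 / mu2

# Visit counts and mean time in system for a CPU-bound job.
EV1C = 1 / (1 - F(65, 100) - F(5, 100))         # E[V1C] = 10/3
EV2C = F(5, 100) * EV1C                          # E[V2C] = 1/6
ET1, ET2 = 1 / (mu1 - lam1), 1 / (mu2 - lam2)   # M/M/1 response times
ETC = EV1C * ET1 + EV2C * ET2
print(l1C, l2C, l1I, l2I, float(rho1), float(rho2), float(ETC))
```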

Back to Closed Networks

Recall that in a closed Jackson network with k servers and N jobs, the state probabilities are of the form
P(n1,n2,…,nk) = C ρ1^n1 ρ2^n2 … ρk^nk
where ρi = λi/μi and C = [Σ_{n1+…+nk=N} ρ1^n1 ρ2^n2 … ρk^nk]^-1 is a normalization constant.

Solving for the λi's calls for solving a system of k simultaneous rate equations λi = Σj λj Pji. Computing C calls for adding up one term per state, i.e., (N+k-1 choose k-1) terms, a number that grows exponentially in N and k. We need a better approach.

Arrival Theorem In a closed Jackson network with M > 1 jobs, an arrival to (any) server j sees a distribution of the number of jobs at each server equal to the distribution of the number of jobs at each server in the same network, but with only M – 1 jobs. The mean number of jobs the arrival sees at server j is equal to E[Nj(M – 1)] We can use the Arrival Theorem to derive a recursion for the mean response time at server j

Mean Value Analysis

A simple recursive approach to computing E[Ti(M)] (and E[Ni(M)]) in a system with M > 1 jobs and k servers:
E[Ti(M)] = (1/μi)(1 + pi λ(M-1) E[Ti(M-1)])
where λ(M-1) is the total arrival rate to all servers and pi = λi(M-1)/λ(M-1) is the fraction of those arrivals headed for server i. Note that pi is independent of M: pi = Vi / Σ_{j=1 to k} Vj, where Vi is the number of visits to server i per job completion.

In a system with k servers, λ(M) is given by
λ(M) = Σ_{i=1 to k} λi(M) = M / [Σ_{i=1 to k} pi E[Ti(M)]]   (*)
based on Little's Law and the fact that M = Σ_{i=1 to k} E[Ni(M)].

Mean Value Analysis – Recursion

Initial condition: E[Tj(1)] = 1/μj

Recursive step:
E[Tj(M)] = 1/μj + E[number at server j seen by an arrival at j]/μj
         = 1/μj + E[Nj(M-1)]/μj              – by the Arrival Theorem
         = 1/μj + λj(M-1) E[Tj(M-1)]/μj      – by Little's Law
         = 1/μj + pj λ(M-1) E[Tj(M-1)]/μj    – since pj = λj(M-1)/λ(M-1)

The next step is to compute λ(M-1) using Little's Law and the fact that
M-1 = Σ_{j=1 to k} E[Nj(M-1)] = Σ_{j=1 to k} λj(M-1) E[Tj(M-1)]
    = Σ_{j=1 to k} pj λ(M-1) E[Tj(M-1)] = λ(M-1) Σ_{j=1 to k} pj E[Tj(M-1)]
⇒ λ(M-1) = (M-1) / [Σ_{j=1 to k} pj E[Tj(M-1)]]
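The full recursion is only a few lines of code. The sketch below implements it with exact rational arithmetic and checks it on a two-server cycle with μ1 = 1, μ2 = 2, p1 = p2 = 1/2, and M = 3 jobs:

```python
from fractions import Fraction as F

def mva(mu, p, M):
    """Exact MVA for a closed network: returns (E[T_i(M)] list, lambda(M)).

    mu[i] = service rate at server i; p[i] = fraction of all arrivals
    that go to server i (independent of the population size M).
    """
    k = len(mu)
    ET = [F(1) / m for m in mu]                       # E[T_i(1)] = 1/mu_i
    lam = F(1) / sum(p[i] * ET[i] for i in range(k))  # lambda(1), by (*)
    for m in range(2, M + 1):
        ET = [(1 + p[i] * lam * ET[i]) / mu[i] for i in range(k)]
        lam = F(m) / sum(p[i] * ET[i] for i in range(k))
    return ET, lam

# Two-server cycle: mu1 = 1, mu2 = 2, p1 = p2 = 1/2, M = 3 jobs.
ET, lam = mva([F(1), F(2)], [F(1, 2), F(1, 2)], 3)
EN1 = ET[0] * F(1, 2) * lam    # E[N1(3)] = E[T1(3)] * p1 * lambda(3)
print(ET, lam, EN1)            # [17/7, 11/14], 28/15, 34/15
```

Each population size takes O(k) work, so MVA sidesteps the exponential normalization-constant sum entirely.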

MVA Example

Two servers in a cycle, with μ1 = 1, μ2 = 2, and M = 3 jobs. What are E[N1(3)] and E[N2(3)]?

Note that in this system p1 = p2 = 1/2 (each server sees each job once, and so experiences half of all job visits in the system).

Recursion for E[T1(i)] and E[T2(i)], using E[Ti(M)] = (1/μi)(1 + pi λ(M-1) E[Ti(M-1)]) and λ(M) = M / Σ_{i=1 to k} pi E[Ti(M)] (*):
E[T1(1)] = 1/μ1 = 1, E[T2(1)] = 1/μ2 = 1/2, λ(1) = 4/3 (by (*))
E[T1(2)] = 1 + (1/2 · 4/3 · 1)/1 = 5/3, E[T2(2)] = 1/2 + (1/2 · 4/3 · 1/2)/2 = 2/3, λ(2) = 12/7
E[T1(3)] = 1 + (1/2 · 12/7 · 5/3)/1 = 17/7, E[T2(3)] = 1/2 + (1/2 · 12/7 · 2/3)/2 = 11/14, λ(3) = 28/15

This gives E[N1(3)] = E[T1(3)] λ1(3) = E[T1(3)] p1 λ(3) = 17/7 · 1/2 · 28/15 = 34/15

More on MVA

Note that λ(M) is NOT the system throughput when there are M jobs in circulation; it is the total arrival rate across all servers. The system throughput would be λ1(M). Hence, while Little's Law holds and we have M = λ(M) E[T(M)], E[T(M)] is not the standard system response time; it is simply the quantity defined as E[T(M)] = Σi pi E[Ti(M)].

Consider the case M = 1 in the two-server cycle (μ1 = 1, μ2 = 2):
We found E[T1(1)] = 1/μ1 = 1, E[T2(1)] = 1/μ2 = 1/2, and λ(1) = 1/(1/2 · 1 + 1/2 · 1/2) = 4/3.
We have E[T(1)] = 1/2 · 1 + 1/2 · 1/2 = 3/4, and λ(1) E[T(1)] = 4/3 · 3/4 = 1, as it should according to Little's Law.
However, we also know that the system's response time is E[R] = 1/μ1 + 1/μ2 = 3/2. Applying Little's Law to this system, we get λ1(1) E[R] = p1 λ(1) E[R] = 1/2 · 4/3 · 3/2 = 1, as it again should.