M/G/1 variants and Priority Queue


M/G/1 variants and Priority Queue Cheng-Fu Chou

HW 1: M/G/1 with bulk service Consider an M/G/1 system with bulk service. Whenever the server becomes free, he accepts two customers from the queue into service simultaneously, or, if only one is in the queue, he accepts that one; in either case, the service time for the group (of size 1 or 2) is drawn from B(x). Let qn be the number of customers remaining immediately after the nth service instant, and let vn be the number of arrivals during the nth service. Define B*(s), Q(z), and V(z) as the transforms associated with the random variables x, q, and v as usual. Let ρ = λx̄/2.

(a) Using the method of the imbedded Markov chain, find E[q] in terms of ρ and p0 = P(q=0).
(b) Find Q(z) in terms of B*(·), p0, and p1 = P(q=1).
(c) Express p1 in terms of p0.
Ans:
(a) E[q] = ρ + (2(1−p0) + λ²E[x²] − 4ρ²) / (4(1−ρ))
(b) Q(z) = B*(λ−λz)[p0(1−z²) + p1 z(1−z)] / [B*(λ−λz) − z²]
(c) p1 = 2(1 − p0 − ρ)


HW 2: M/G/1 (service time) Consider an M/G/1 queueing system in which service is given as follows. Upon entry into service, a coin is tossed which has probability p of giving Heads. If the result is Heads, the service time for that customer is 0 seconds. If Tails, the service time is drawn from the uniform distribution
f(x) = 1/(b−a) if a < x < b; f(x) = 0 otherwise.
(a) Find the average service time x̄.
(b) Find the variance of the service time.
(c) Find the expected waiting time.
(d) Find W*(s).
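As a sanity check on (a)–(c), the moments of this coin-toss service time and the Pollaczek–Khinchine waiting time can be computed directly; the parameter values below (p, a, b, and the arrival rate λ) are illustrative, not from the problem.

```python
# Moments of the coin-toss service time and the resulting P-K waiting time.
# Parameter values are illustrative, not from the slides.
p, a, b = 0.25, 1.0, 3.0       # Heads probability, uniform support (a, b)
lam = 0.2                      # Poisson arrival rate (assumed for the example)

x_bar = (1 - p) * (a + b) / 2              # E[x]: 0 w.p. p, uniform mean w.p. 1-p
x2 = (1 - p) * (a*a + a*b + b*b) / 3       # E[x^2] of the mixture
var_x = x2 - x_bar**2                      # Var[x]
rho = lam * x_bar                          # utilization
W = lam * x2 / (2 * (1 - rho))             # Pollaczek-Khinchine waiting time
```

With these numbers, x̄ = 1.5, Var[x] = 1.0, and W = 0.65/1.4 ≈ 0.464.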

M/G/1 with vacations Consider a first-come-first-served M/G/1 queue with the following changes. The server serves the queue as long as someone is in the system. Whenever the system empties, the server goes away on vacation for a certain length of time, which may be a random variable. At the end of his vacation the server returns and begins to serve customers again; if he returns to an empty system, he goes away on vacation again. Let F(z) be the z-transform for the number of customers awaiting service when the server returns from vacation to find at least one customer waiting.

(a) Derive an expression that gives qn+1 in terms of qn, vn+1, and j.
(b) Derive Q(z) in terms of p0.
(c) Show that p0 = (1−ρ)/F′(1), where ρ = λx̄.
(d) Assume now that the vacation ends whenever a new customer enters the empty system. For this case find F(z), and show that substituting it back into the answer for (b) recovers the classical M/G/1 solution.
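Part (d) can be sanity-checked numerically. Assuming the answer to (b) takes the form Q(z) = V(z)·p0·(F(z) − 1)/(z − V(z)) with V(z) = B*(λ − λz) (my derivation, stated as an assumption rather than the slides' notation), setting F(z) = z, so F′(1) = 1 and p0 = 1 − ρ, should reproduce the M/M/1 transform when service is exponential:

```python
# Numerical check of part (d) for exponential service (M/M/1 special case).
lam, mu = 1.0, 2.0
rho = lam / mu

def V(z):
    # V(z) = B*(lam - lam*z) for exponential service with rate mu
    return mu / (lam - lam*z + mu)

def Q_vacation(z, F, dF1):
    # Assumed answer to (b): Q(z) = V(z) p0 (F(z)-1) / (z - V(z)),
    # with p0 = (1 - rho)/F'(1) from part (c)
    p0 = (1 - rho) / dF1
    return V(z) * p0 * (F(z) - 1) / (z - V(z))

def Q_mm1(z):
    # Known M/M/1 queue-length transform: (1 - rho)/(1 - rho*z)
    return (1 - rho) / (1 - rho * z)

# F(z) = z: the vacation ends at the first arrival, F'(1) = 1
for z in (0.1, 0.5, 0.9):
    assert abs(Q_vacation(z, lambda w: w, 1.0) - Q_mm1(z)) < 1e-12
```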


M/G/1 with vacations At the end of each busy period, the server goes on vacation for some random interval of time. A new arrival to an idle system, rather than going into service immediately, waits for the end of the vacation. (Figure: timeline of service times x1, …, x5 interleaved with vacations v1, v2, v3.)

Let v1, v2, … be the durations of successive vacations taken by the server; they are i.i.d. random variables. Observation: a new arrival to the system waits for the completion of the current service or vacation. So the waiting time formula W = R/(1−ρ) is still valid, where R is now the mean residual time for completion of the service or vacation in progress.

By using the same graphical argument: (Figure: residual time r(t) plotted against time t, with triangles for the service times x1, x2, …, xM(t) and the vacations v1, v2, ….)

Residual service time for an M/G/1 system with vacations:
M(t): number of services completed by time t
L(t): number of vacations completed by time t
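Taking limits in this graphical argument yields the standard vacation result W = λE[x²]/(2(1−ρ)) + E[v²]/(2E[v]), where the second term is the mean residual vacation time. A small numeric sketch with illustrative parameters:

```python
# Mean waiting time for M/G/1 with vacations:
#   W = lam*E[x^2]/(2*(1-rho)) + E[v^2]/(2*E[v])
# Numbers are illustrative: deterministic service x = 1, vacations Uniform(0, 2).
lam = 0.5
x_bar, x2 = 1.0, 1.0            # deterministic service: E[x] = 1, E[x^2] = 1
v_bar, v2 = 1.0, 4.0/3.0        # Uniform(0,2): E[v] = 1, E[v^2] = 4/3
rho = lam * x_bar
W = lam * x2 / (2 * (1 - rho)) + v2 / (2 * v_bar)
# first term is the plain M/G/1 delay, second is the vacation penalty
```

Here W = 0.5 + 2/3 = 7/6; without vacations it would be just 0.5.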


M/G/1 with feedback Consider an M/G/1 system in which a departing customer immediately joins the queue again with probability p, or departs forever with probability q = 1 − p. Service is FCFS, and the service time of a returning customer is independent of his previous service times. Let B*(s) be the transform of the service-time pdf and let B*T(s) be the transform of a customer's total service-time pdf.
(a) Find B*T(s) in terms of B*(s), p, and q.
(b) Find QT(z).
(c) Find N, the average number of customers in the system.


(b) In determining the number in the system, we may assume that a customer cycles back directly into service instead of to the tail of the queue. This is allowed because of the "memoryless" selection of a new service time each time a customer returns, in addition to the independence of the feedback decision. Thus we may treat the queue as an M/G/1 system with B*T(s) as the transform of the service time.
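Under this view the total service time is a geometric sum of i.i.d. service times, which is what the answer to (a), B*T(s) = qB*(s)/(1 − pB*(s)), expresses (my derivation, offered as an assumption). A sketch with exponential service makes a convenient self-check, since a geometric sum of Exp(μ) variables is again exponential with rate qμ:

```python
# Feedback queue: the number of passes K is geometric on {1,2,...} with
# success probability q = 1-p, so E[X_T] = E[X]/q and
# E[X_T^2] = (Var[X]/q + p*E[X]^2/q^2) + E[X_T]^2  (Wald-style moments).
lam, mu, p = 0.3, 1.0, 0.4      # illustrative parameters
q = 1 - p

x_bar, var_x = 1/mu, 1/mu**2                    # exponential service moments
xT_bar = x_bar / q                              # mean total service time
xT2 = (var_x/q + p*x_bar**2/q**2) + xT_bar**2   # second moment of total service
assert abs(xT2 - 2/(q*mu)**2) < 1e-12           # matches Exp(q*mu) exactly

rho_T = lam * xT_bar                            # effective utilization
N = rho_T + lam**2 * xT2 / (2 * (1 - rho_T))    # P-K mean number in system
assert abs(N - rho_T/(1 - rho_T)) < 1e-12       # M/M/1 sanity check
```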


Priority Queues M/G/1 system with n different priority classes, class 1 > class 2 > … > class n.
Arrival rate of class k: λk
Mean service time of class k: x̄k = 1/μk
Second moment of service time of class k: E[xk²]

Nonpreemptive priority A customer undergoing service is allowed to complete service without interruption, even if a customer of higher priority arrives in the meantime. A separate queue is maintained for each priority class. Goal: find an equation for the average delay of each of the n priority classes.
NQk: average number in queue for class k
Wk: average queueing time for class k
ρk = λk/μk: system utilization for class k
R: mean residual service time

Assume that the overall system utilization is less than 1, i.e., ρ1 + ρ2 + … + ρn < 1.
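The derivation slide is an image, so for concreteness here is the classical nonpreemptive result, W_k = R/((1 − S_{k−1})(1 − S_k)) with S_k = ρ1 + … + ρk and R = ½·Σ λi·E[xi²], evaluated for two illustrative classes:

```python
# Nonpreemptive priority waiting times (classical M/G/1 result; parameter
# values are illustrative, not from the slides).
lam = [0.2, 0.3]                 # class 1 (high), class 2 (low) arrival rates
x_bar = [1.0, 1.0]               # mean service times
x2 = [2.0, 2.0]                  # second moments (e.g. exponential with mean 1)
rho = [l * x for l, x in zip(lam, x_bar)]
R = 0.5 * sum(l * m2 for l, m2 in zip(lam, x2))   # mean residual service time

W = []                           # W[k] = R / ((1 - S_{k-1}) * (1 - S_k))
S_prev = 0.0
for k in range(len(lam)):
    S_k = S_prev + rho[k]
    W.append(R / ((1 - S_prev) * (1 - S_k)))
    S_prev = S_k
```

For these numbers W = [0.625, 1.25]: the low-priority class waits twice as long.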


HW Consider a nonpreemptive system with 2 customer classes A and B, with respective arrival and service rates λA, μA and λB, μB. If μA > μB, show that the average delay per customer (averaged over both classes),
T = (λA TA + λB TB)/(λA + λB),
is smaller when class A has the higher priority (A > B) than when class B has the higher priority (B > A).
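A numerical illustration of the claim (not a proof), using the standard nonpreemptive M/G/1 priority formulas W_k = R/((1 − S_{k−1})(1 − S_k)) and T_k = W_k + x̄k, with exponential service times:

```python
# Compare overall mean delay T under both priority orderings.
def mean_delay(classes):
    # classes: list of (lam, x_bar, x2) tuples in priority order, highest first
    R = 0.5 * sum(l * m2 for l, _, m2 in classes)
    total_lam = sum(l for l, _, _ in classes)
    T_sum, S_prev = 0.0, 0.0
    for l, xb, _ in classes:
        S_k = S_prev + l * xb
        T_sum += l * (R / ((1 - S_prev) * (1 - S_k)) + xb)
        S_prev = S_k
    return T_sum / total_lam

# Exponential service (x2 = 2*x_bar^2); mu_A = 2 > mu_B = 1, illustrative rates.
A = (0.3, 0.5, 0.5)              # lam_A, 1/mu_A, E[x_A^2]
B = (0.3, 1.0, 2.0)              # lam_B, 1/mu_B, E[x_B^2]
T_A_first = mean_delay([A, B])   # class A (faster) gets priority
T_B_first = mean_delay([B, A])   # class B (slower) gets priority
assert T_A_first < T_B_first     # serving the faster class first wins
```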

Preemptive resume priority Service of a customer is interrupted when a higher-priority customer arrives, and is resumed from the point of interruption once all customers of higher priority have been served.


Case Studies
DTMC for slotted Aloha
CTMC for a wireless handoff model
DTMC for the 802.11 model

Slotted Aloha model The Aloha network was developed to provide radio-based data communication. The system is slotted, and each slot ends in either a collision or a perfect reception. Consider m users, n of which are currently backlogged. Each of the m−n unbacklogged users transmits independently in each slot with probability a, while each backlogged user transmits independently in each slot with probability b.
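The transition probabilities of this backlog chain can be written in the standard textbook form; the sketch below is my reconstruction (the slides' own matrix is on image slides), with Qa(i, n) and Qr(i, n) the binomial probabilities that i new-arrival or i retransmission attempts occur. A slot succeeds iff exactly one user transmits.

```python
# Backlog DTMC for slotted Aloha: state = number of backlogged users.
from math import comb

def aloha_dtmc(m, a, b):
    def Qa(i, n):   # exactly i of the m-n unbacklogged users transmit
        return comb(m - n, i) * a**i * (1 - a)**(m - n - i)
    def Qr(i, n):   # exactly i of the n backlogged users retransmit
        return comb(n, i) * b**i * (1 - b)**(n - i)
    P = [[0.0] * (m + 1) for _ in range(m + 1)]
    for n in range(m + 1):
        for i in range(m - n + 1):
            if i >= 2:                              # >= 2 new arrivals: all collide and back off
                P[n][n + i] += Qa(i, n)
            elif i == 1:                            # 1 new: succeeds iff no backlogged user transmits
                P[n][n + 1] += Qa(1, n) * (1 - Qr(0, n))
                P[n][n] += Qa(1, n) * Qr(0, n)
            else:                                   # 0 new: one backlogged user may succeed
                P[n][n] += Qa(0, n) * (1 - Qr(1, n))
                if n > 0:
                    P[n][n - 1] += Qa(0, n) * Qr(1, n)
    return P
```

Each row of the returned matrix sums to one, and the stationary backlog distribution can then be obtained by solving πP = π.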

DTMC for slotted Aloha (figure slides: state-transition diagram and transition probabilities of the backlog chain).

Wireless handoff model Consider the performance model of a single cell in a cellular wireless communication network.
New calls arrive in a Poisson stream at rate λ1.
Handoff calls arrive in a Poisson stream at rate λ2.
An ongoing call (new or handoff) completes service at rate μ1.
The mobile engaged in a call departs the cell at rate μ2.

More… There is a limited number of channels, n, in the channel pool. When a handoff call arrives and an idle channel is available in the channel pool, the call is accepted and a channel is assigned to it; otherwise the handoff call is dropped. When a new call arrives, it is accepted provided g+1 or more channels are available in the channel pool; otherwise the new call is blocked. Here g is the number of guard channels used to give priority to handoff calls.
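Under these rules the number of busy channels evolves as a birth-death CTMC. The sketch below is my formulation of that chain (the slides' rate diagram is an image slide): the birth rate is λ1+λ2 while more than g channels are free and λ2 otherwise, and each busy channel is released at rate μ1+μ2 (call completion or cell departure).

```python
# Guard-channel CTMC: state k = number of busy channels, 0 <= k <= n.
def handoff_stationary(n, g, lam1, lam2, mu1, mu2):
    mu = mu1 + mu2
    pi = [1.0]
    for k in range(1, n + 1):
        # birth rate out of state k-1: new + handoff calls while k-1 < n-g,
        # handoff calls only once <= g channels remain free
        birth = lam1 + lam2 if k - 1 < n - g else lam2
        pi.append(pi[-1] * birth / (k * mu))    # birth-death balance equations
    total = sum(pi)
    pi = [p / total for p in pi]
    p_block = sum(pi[n - g:])    # new call blocked when <= g channels are free
    p_drop = pi[n]               # handoff dropped only when all channels busy
    return pi, p_block, p_drop
```

For example, `handoff_stationary(5, 1, 2.0, 1.0, 0.5, 0.5)` gives a dropping probability well below the blocking probability, which is the point of the guard channels.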

CTMC for the handoff model (figure slide: state-transition-rate diagram).

DTMC for memory interference in a multiprocessor system The processors' ability to share the entire memory space provides a convenient means of sharing information and gives flexibility in memory allocation. The price of sharing is contention for the shared resource. To reduce contention, the memory is usually split into modules, which can be accessed independently and concurrently with other modules. When more than one processor attempts to access the same module, only one processor is granted access, while the other processors must await their turn in a queue.

The effect of such contention, or interference, is to increase the average memory access time.

(Figure: processors P1, …, Pn connected to memory modules M1, …, Mm through an interconnection network.)

Assumptions
The time to complete a memory access is constant, and all modules are synchronized.
Processors are fast enough to generate a new request as soon as their current request is satisfied.
A processor cannot generate a new request while it is waiting for the current request to complete.

The operation of the system can be visualized as a discrete-time queueing network.

The memory modules are the servers, and the fixed number, n, of processors constitute the jobs or customers circulating in this closed queueing network. Let qi denote the probability that a processor-generated request is directed at memory module i, i = 1, 2, …, m. Consider a system with 2 memory modules and 2 processors.
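For the 2-module, 2-processor case the DTMC can be built explicitly; the construction below is my sketch for this example, tracking the number of outstanding requests at module 1 (the rest are at module 2). Each cycle every nonempty module serves one request, and each served processor re-requests module 1 with probability q1.

```python
# 2-processor / 2-module memory-interference DTMC; states 0, 1, 2 count
# the requests queued at module 1.
def memory_dtmc(q1):
    q2 = 1 - q1
    P = [
        [q2, q1, 0.0],                # state 0: module 2 serves one; it re-requests
        [q2 * q2, 2 * q1 * q2, q1 * q1],  # state 1: both serve; 2 independent re-requests
        [0.0, q2, q1],                # state 2: module 1 serves one; it re-requests
    ]
    pi = [1 / 3.0] * 3
    for _ in range(10000):            # power iteration for the stationary distribution
        pi = [sum(pi[s] * P[s][t] for s in range(3)) for t in range(3)]
    # memory bandwidth = expected number of busy modules per cycle
    bandwidth = pi[1] * 2 + (pi[0] + pi[2]) * 1
    return pi, bandwidth
```

With uniform referencing (q1 = 0.5) the stationary distribution is (1/4, 1/2, 1/4) and the bandwidth is 1.5 accesses per cycle, i.e., contention costs 25% of the peak bandwidth of 2.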


Memory-referencing behavior In the renewal model, the successive intervals between references to a given page are assumed to be i.i.d. random variables. We consider the special case of the renewal model in which the intervals are geometrically distributed. This is known as the independent reference model (IRM) of program behavior.

Independent reference model A program's address space typically consists of contiguous pages represented by the indices 1, 2, …, n. To study a program's reference behavior, it can be represented by the reference string w = x1, x2, …, xt. Successive references are assumed to form a sequence of i.i.d. random variables with P(xt = i) = bi, 1 ≤ i ≤ n. The interval between two successive references to page i is then geometrically distributed with parameter bi.

LRU model Assume that a fixed number, m (1 ≤ m ≤ n), of page frames have been allocated to the program. The state of the paging algorithm at time t, denoted q(t), is an ordered list of the m pages in main memory. If the next page referenced (xt+1) is not in main memory, a page fault occurs, which in general requires the replacement of an existing page from main memory. We will assume that the rightmost (least recently used) page in the ordered list q(t) is the one replaced.

LRU model (cont.) If the next page referenced (xt+1) is in main memory, no page fault (and no replacement) occurs, but q(t) is still updated to q(t+1). The sequence of states q(0), q(1), …, q(t), … forms a DTMC whose state space consists of the n!/(n−m)! ordered m-tuples (permutations) over {1, 2, …, n}. Assume that main memory is preloaded initially with m pages.

LRU model example Consider the LRU paging algorithm with n = 3 and m = 2. Let q(t) be ordered by recency of use, so that q(t) = (i, j) means page i was more recently used than page j, and page j is the candidate for replacement.
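This chain has n!/(n−m)! = 6 states and can be solved numerically. The sketch below is my construction of the transition matrix for the m = 2 case under the IRM, returning the long-run page-fault probability (the probability that the next reference is to a page outside memory):

```python
# LRU DTMC for m = 2 frames under the IRM with reference probabilities b[i].
from itertools import permutations

def lru_fault_rate(b):
    n = len(b)
    states = list(permutations(range(n), 2))     # ordered pairs (MRU, LRU)
    idx = {s: k for k, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for (i, j), s in idx.items():
        for k in range(n):
            if k == i:
                nxt = (i, j)                     # hit on the MRU page: no change
            elif k == j:
                nxt = (j, i)                     # hit on the LRU page: order flips
            else:
                nxt = (k, i)                     # fault: page k replaces page j
            P[s][idx[nxt]] += b[k]
    pi = [1 / len(states)] * len(states)
    for _ in range(5000):                        # power iteration
        pi = [sum(pi[s] * P[s][t] for s in range(len(states)))
              for t in range(len(states))]
    # fault probability: reference a page not currently in the frame pair
    return sum(pi[idx[(i, j)]] * sum(b[k] for k in range(n) if k not in (i, j))
               for (i, j) in states)
```

For the n = 3, m = 2 example with uniform references (b = [1/3, 1/3, 1/3]), every state has exactly one absent page, so the fault rate is 1/3, which the chain reproduces.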
