Performance analysis for high speed switches: Lecture 6


The M/M/1 Queueing System

The M/M/1 Queueing System
The M/M/1 queueing system consists of a single queueing station with a single server. The name M/M/1 reflects standard queueing theory nomenclature, whereby:
– The first letter indicates the nature of the arrival process: M stands for memoryless, which here means a Poisson process; G stands for a general distribution of interarrival times; D stands for deterministic interarrival times.
– The second letter indicates the nature of the probability distribution of the service times.
– The last number indicates the number of servers.

The M/M/1 Queueing System
We have already established, via Little's Theorem, the relations between the basic quantities:
N = average number of customers in the system
T = average customer time in the system
N_Q = average number of customers waiting in queue
W = average customer waiting time in queue
namely N = λT and N_Q = λW. Given these statistics, we will be able to derive the steady-state probabilities
– p_n = probability of n customers in the system, n = 0, 1, …
From these probabilities we can get N = Σ_{n=0}^∞ n p_n and, using Little's Theorem, T = N/λ.

Arrival statistics: the Poisson process
A stochastic process {A(t), t ≥ 0} taking nonnegative integer values is said to be a Poisson process with rate λ if:
– A(t) is a counting process that represents the total number of arrivals that have occurred from 0 to t, and for s < t, A(t) − A(s) equals the number of arrivals in the interval (s, t].
– The numbers of arrivals that occur in disjoint time intervals are independent.
– The number of arrivals in any interval of length τ is Poisson distributed with parameter λτ. That is, for all t, τ > 0,
P{A(t + τ) − A(t) = n} = e^{−λτ} (λτ)^n / n!,  n = 0, 1, …

Arrival statistics: the Poisson process (cont.)
We list some of the properties of the Poisson process that will be of interest:
– Interarrival times are independent and exponentially distributed with parameter λ; that is, if t_n denotes the time of the nth arrival, the intervals τ_n = t_{n+1} − t_n have the probability distribution P{τ_n ≤ s} = 1 − e^{−λs} and are mutually independent.
– For every t ≥ 0 and δ ≥ 0,
P{A(t + δ) − A(t) = 0} = 1 − λδ + o(δ)
P{A(t + δ) − A(t) = 1} = λδ + o(δ)
P{A(t + δ) − A(t) ≥ 2} = o(δ)
where we generically denote by o(δ) a function of δ such that lim_{δ→0} o(δ)/δ = 0.

Arrival statistics: the Poisson process (cont.)
We list some of the properties of the Poisson process that will be of interest:
– If two or more independent Poisson processes are merged into a single process, the latter process is Poisson with a rate equal to the sum of the rates of its components.
– If a Poisson process is split into two other processes by independently assigning each arrival to the first (second) of these processes with probability p (1 − p, respectively), the two arrival processes thus obtained are Poisson.
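The merging and splitting properties are easy to check numerically. The sketch below is not part of the original slides; it uses numpy, and the rates lam1, lam2, the split probability p, and the horizon T are arbitrary illustrative values of my own.

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, p, T = 2.0, 3.0, 0.3, 10_000.0   # illustrative rates, split prob., horizon

def poisson_arrivals(lam, horizon):
    """Arrival times of a Poisson process of rate lam on [0, horizon]."""
    gaps = rng.exponential(1.0 / lam, size=int(2 * lam * horizon))
    times = np.cumsum(gaps)
    return times[times <= horizon]

a1, a2 = poisson_arrivals(lam1, T), poisson_arrivals(lam2, T)

# Merging: the combined arrival rate should be close to lam1 + lam2.
merged = np.sort(np.concatenate([a1, a2]))
print("merged rate", len(merged) / T, "expected", lam1 + lam2)

# Splitting: route each merged arrival to stream A with probability p.
to_a = rng.random(len(merged)) < p
print("split rates", to_a.sum() / T, (~to_a).sum() / T,
      "expected", p * (lam1 + lam2), (1 - p) * (lam1 + lam2))
```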

Service Statistics
Our assumption regarding the service process is that the customer service times have an exponential distribution with parameter μ; that is, if s_n is the service time of the nth customer,
P{s_n ≤ s} = 1 − e^{−μs},  s ≥ 0.
An important fact regarding the exponential distribution is its memoryless character, which can be expressed as
P{τ_n > r + t | τ_n > t} = P{τ_n > r}  and  P{s_n > r + t | s_n > t} = P{s_n > r}
for the interarrival and service times τ_n and s_n, respectively. Verification of the memoryless property follows from the calculation
P{τ_n > r + t | τ_n > t} = P{τ_n > r + t} / P{τ_n > t} = e^{−λ(r+t)} / e^{−λt} = e^{−λr} = P{τ_n > r}.
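As a quick numerical illustration of the memoryless property (not from the slides), the sketch below estimates both sides from simulated exponential service times; the values of mu, t, and r are arbitrary choices of my own.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, t, r = 1.5, 0.7, 0.4                       # illustrative parameters
s = rng.exponential(1.0 / mu, size=1_000_000)  # simulated service times s_n

lhs = np.mean(s[s > t] > t + r)    # estimate of P{s_n > r + t | s_n > t}
rhs = np.mean(s > r)               # estimate of P{s_n > r}
print(lhs, rhs, np.exp(-mu * r))   # all three should nearly coincide
```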

Markov Chain Formulation
Let us focus attention at the times 0, δ, 2δ, …, kδ, …, where δ is a small positive number. We denote
N_k = number of customers in the system at time kδ.
Let P_ij = P{N_{k+1} = j | N_k = i} denote the corresponding transition probabilities. We can show that
P_00 = 1 − λδ + o(δ)
P_ii = 1 − λδ − μδ + o(δ),  i ≥ 1
P_{i,i+1} = λδ + o(δ),  i ≥ 0
P_{i,i−1} = μδ + o(δ),  i ≥ 1.

Derivation of the Stationary Distribution
Consider now the steady-state probabilities p_n = lim_{k→∞} P{N_k = n}. The probability that the system is in state n and makes a transition to n+1 in the next transition is the same as the probability that the system is in state n+1 and makes a transition to n, that is,
p_n λδ + o(δ) = p_{n+1} μδ + o(δ).

Derivation of the Stationary Distribution (Cont.)
By taking the limit in this equation as δ → 0, we obtain
p_n λ = p_{n+1} μ,  n = 0, 1, …
These equations can also be written as
p_{n+1} = ρ p_n,  where ρ = λ/μ.
It follows that p_n = ρ^n p_0, n = 0, 1, … If ρ < 1, the probabilities p_n are all positive and add up to unity, so
1 = Σ_{n=0}^∞ p_n = p_0 Σ_{n=0}^∞ ρ^n = p_0 / (1 − ρ).
Combining the last two equations, we finally obtain
p_n = ρ^n (1 − ρ),  n = 0, 1, …
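The geometric form of p_n can be checked against the Markov chain formulation above. The sketch below is my own (not from the slides): it builds the transition matrix of the discretized chain, truncated at K states, solves for its stationary distribution, and compares with (1 − ρ)ρ^n; all parameter values are illustrative.

```python
import numpy as np

lam, mu, delta, K = 0.8, 1.0, 1e-3, 60   # arrival rate, service rate, step, truncation
rho = lam / mu

P = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K:
        P[i, i + 1] = lam * delta        # one arrival during the slot
    if i > 0:
        P[i, i - 1] = mu * delta         # one departure during the slot
    P[i, i] = 1.0 - P[i].sum()           # no change during the slot

# Solve pi = pi P together with sum(pi) = 1 (stacked least-squares system).
A = np.vstack([P.T - np.eye(K + 1), np.ones((1, K + 1))])
b = np.zeros(K + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

closed_form = (1 - rho) * rho ** np.arange(K + 1)
closed_form /= closed_form.sum()         # renormalize on the truncated state space
print(np.max(np.abs(pi - closed_form)))  # should be ~0 (numerical precision)
```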

Derivation of the Stationary Distribution (Cont.)
We can now calculate the average number of customers in the system in steady state:
N = Σ_{n=0}^∞ n p_n = Σ_{n=0}^∞ n ρ^n (1 − ρ) = ρ / (1 − ρ),
and finally, using ρ = λ/μ, we have N = λ / (μ − λ). The average delay per customer is given by Little's Theorem,
T = N / λ = ρ / (λ(1 − ρ)).
Using ρ = λ/μ, this becomes T = 1 / (μ − λ).

Derivation of the Stationary Distribution (Cont.)
The average waiting time in queue, W, is the average delay T less the average service time 1/μ, so
W = T − 1/μ = ρ / (μ − λ).
By Little's Theorem, the average number of customers in queue is
N_Q = λW = ρ² / (1 − ρ).
[Figure: average number in the system N versus the utilization factor ρ]
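The closed-form quantities above are easy to package in a small helper. This is a sketch of my own (the function and argument names are not from the slides), evaluating the M/M/1 formulas just derived.

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Steady-state M/M/1 metrics for arrival rate lam and service rate mu."""
    if lam >= mu:
        raise ValueError("need lam < mu for a stable M/M/1 queue")
    rho = lam / mu                   # utilization factor
    N = rho / (1 - rho)              # average number in the system
    T = 1 / (mu - lam)               # average time in the system (Little: T = N / lam)
    W = T - 1 / mu                   # average waiting time in queue
    NQ = lam * W                     # average number waiting in queue (Little)
    return {"rho": rho, "N": N, "T": T, "W": W, "NQ": NQ}

print(mm1_metrics(lam=0.8, mu=1.0))  # e.g. N = 4, T = 5, W = 4, NQ = 3.2
```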

Occupancy Distribution upon Arrival
The steady-state occupancy probabilities upon arrival,
a_n = lim_{t→∞} P{N(t) = n | an arrival occurred just after time t},
need not be equal to the corresponding unconditional steady-state probabilities,
p_n = lim_{t→∞} P{N(t) = n}.
It turns out, however, that for the M/M/1 system we have a_n = p_n for all n.

Occupancy Distribution upon Arrival (Cont.)
A formal proof under the preceding assumption:
– Let A(t, t + δ) be the event that an arrival occurs in the interval (t, t + δ], and let a_n(t, t + δ) = P{N(t) = n | A(t, t + δ)}.
– We have, using Bayes' rule,
a_n(t, t + δ) = P{N(t) = n, A(t, t + δ)} / P{A(t, t + δ)} = P{A(t, t + δ) | N(t) = n} P{N(t) = n} / P{A(t, t + δ)}.
– By assumption, the event A(t, t + δ) is independent of the number in the system at time t; therefore P{A(t, t + δ) | N(t) = n} = P{A(t, t + δ)}, and we obtain
a_n(t, t + δ) = P{N(t) = n}.
Taking the limit as δ → 0 and then t → ∞ gives a_n = p_n.

Occupancy Distribution upon Departure
Let us consider the distribution of the number of customers in the system just after a departure has occurred, that is, the probabilities
d_n(t) = P{N(t) = n | a departure occurred just before time t}.
The corresponding steady-state values are denoted d_n = lim_{t→∞} d_n(t). It turns out that d_n = a_n = p_n for all n.

The M/G/1 Queueing System

The M/G/1 System
Let E[X] = 1/μ be the average service time and E[X²] the second moment of the service time. The Pollaczek-Khinchin (P-K) formula is
W = λ E[X²] / (2(1 − ρ)),
where W is the expected customer waiting time in queue and ρ = λ/μ = λ E[X] is the utilization factor. The total waiting time, in queue and in service, is
T = 1/μ + λ E[X²] / (2(1 − ρ)).

The M/G/1 System (Cont.)
Applying Little's formula to W and T, we get the expected number of customers in the queue, N_Q, and the expected number in the system, N:
N_Q = λW = λ² E[X²] / (2(1 − ρ)),  N = λT = ρ + λ² E[X²] / (2(1 − ρ)).
Under exponential service times, i.e., E[X²] = 2/μ², this gives W = ρ / (μ(1 − ρ)), the M/M/1 result. When the service time is deterministic, i.e., E[X²] = 1/μ², we get W = ρ / (2μ(1 − ρ)), half the M/M/1 value.
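A small helper (my own sketch, not from the slides) makes the P-K formula and its two special cases concrete; lam is the arrival rate, and EX, EX2 are the first and second moments of the service time.

```python
def mg1_metrics(lam: float, EX: float, EX2: float) -> dict:
    """P-K quantities for an M/G/1 queue with service moments EX and EX2."""
    rho = lam * EX                           # utilization factor
    if rho >= 1:
        raise ValueError("need rho < 1 for stability")
    W = lam * EX2 / (2 * (1 - rho))          # P-K formula: waiting time in queue
    T = EX + W                               # total time, in queue and in service
    return {"rho": rho, "W": W, "T": T, "NQ": lam * W, "N": lam * T}

mu = 1.0
print(mg1_metrics(0.8, 1 / mu, 2 / mu**2))   # exponential service: W = rho/(mu(1-rho)) = 4
print(mg1_metrics(0.8, 1 / mu, 1 / mu**2))   # deterministic service: W = rho/(2 mu (1-rho)) = 2
```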

The M/G/1 System (Cont.)
Denote
– W_i = waiting time in queue of the ith customer
– R_i = residual service time seen by the ith customer. By this we mean that if customer j is already being served when i arrives, R_i is the remaining time until customer j's service is complete. If no customer is in service (i.e., the system is empty when i arrives), then R_i is zero.
– X_i = service time of the ith customer
– N_i = number of customers found waiting in queue by the ith customer upon arrival
We have
W_i = R_i + Σ_{j=i−N_i}^{i−1} X_j.

The M/G/1 System (Cont.)
By taking expectations and using the independence of the random variables N_i and X_{i−1}, …, X_{i−N_i}, we have
E[W_i] = E[R_i] + E[X] E[N_i] = E[R_i] + (1/μ) E[N_i].
Taking the limit as i → ∞, we obtain
W = R + (1/μ) N_Q,
where R is the mean residual service time, defined as R = lim_{i→∞} E[R_i].

The M/G/1 System (Cont.)
By Little's Theorem, we have N_Q = λW, and by substitution in the waiting time formula, we obtain
W = R + (λ/μ) W = R + ρW,
where ρ = λ/μ is the utilization factor; so finally,
W = R / (1 − ρ).

The M/G/1 System (Cont.)
The time average of the residual service time r(τ) in the interval [0, t] is
(1/t) ∫_0^t r(τ) dτ = (1/t) Σ_{i=1}^{M(t)} X_i² / 2,
where M(t) is the number of service completions within [0, t], and X_i is the service time of the ith customer. We can also write this equation as
(1/t) ∫_0^t r(τ) dτ = (1/2) (M(t)/t) (Σ_{i=1}^{M(t)} X_i² / M(t)),
and assuming the limits below exist, we obtain
R = lim_{t→∞} (1/t) ∫_0^t r(τ) dτ = (1/2) λ E[X²].
Assuming that time averages can be replaced by ensemble averages, we obtain the P-K formula,
W = λ E[X²] / (2(1 − ρ)).

Crossbar Switches
Crossbar switches are an important general architecture for fast switches.
[Figures: a 2 x 2 crossbar switch and a general N x N crossbar switch]

Input Queueing versus Output Queueing
Input queueing -- "If we come in together then we wait together."
Output queueing -- "We wait at the destination (output) together."

Will the queueing be at the input or at the output?
– The switch fabric speed is equal to the input line speed: to avoid collisions on the single-speed switch fabric, only one input line can place a packet on the switch fabric at a time. This requires the other inputs to hold their packets back from entering the switch fabric, which is implemented using a queue at each input.
– The switch fabric speed is N times faster than the input line speed: the internal switch has slot times which are N times as fast as those of the input lines. The packets enter the crossbar switch together and are shifted to the outputs together. This requires queueing at the outputs to avoid collisions.

General Assumptions for Analysis In any given time slot, the probability that a packet will arrive on a particular input is p. Thus p represents the average utilization of each input. Each packet has equal probability 1/N of being addressed to any given output, and successive packets are independent.

Analysis of Output Queueing
p = load. Let A_m = the number of packets arriving at the tagged output queue during a given time slot m. Each of the N inputs independently sends a packet to this output with probability p/N, so A_m is binomially distributed and, as N → ∞, tends to a Poisson distribution with mean p. The switch has a speedup factor of N, so arriving packets reach the targeted output "immediately".

Analysis of the Output Queue Size
Let Q_m be the number of packets in the tagged queue at the end of time slot m. Using a standard approach in queueing analysis,
Q_m = max(0, Q_{m−1} + A_m − 1).
The mean steady-state queue size is
((N − 1)/N) · p² / (2(1 − p)).
The mean queue size for an M/D/1 queue is p² / (2(1 − p)); as N → ∞, the tagged output queue behaves as an M/D/1 queue.
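The tagged output queue is easy to simulate directly: in each slot it receives a Binomial(N, p/N) number of packets and transmits one if it is nonempty. The sketch below is my own (not from the slides; the parameter values are illustrative) and compares the time-average queue size with the closed form above and with its M/D/1 limit.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, slots = 16, 0.8, 1_000_000               # illustrative switch size, load, horizon

arrivals = rng.binomial(N, p / N, size=slots)  # A_m ~ Binomial(N, p/N) per slot
q, total = 0, 0
for a in arrivals:
    q = max(0, q + a - 1)                      # Q_m = max(0, Q_{m-1} + A_m - 1)
    total += q

print("simulated  ", total / slots)
print("closed form", (N - 1) / N * p**2 / (2 * (1 - p)))
print("M/D/1 limit", p**2 / (2 * (1 - p)))
```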

[Figure: state transition diagram for the output queue size, with states 0, 1, 2, …]

The Steady-State Queue Size Probabilities

Analysis of the Packet Waiting Time
The waiting time of a packet consists of two parts: the time slots the packet must wait while packets that arrived in earlier time slots are transmitted, and the time slots the packet must additionally wait until it is randomly selected out of the packet arrivals in its own time slot m.

Analysis of the Packet Waiting Time b: the size of the batch the packet arrives in

Analysis of the Packet Waiting Time
The mean steady-state waiting time is
((N − 1)/N) · p / (2(1 − p)).
The mean waiting time for an M/D/1 queue (the N → ∞ limit) is p / (2(1 − p)).

Analysis of the Packet Waiting Time The steady-state waiting time probabilities:

Analysis of Input Queueing
This is a more complex system to analyze than the output queueing case. In order to analyze it, we make a simplifying assumption of "heavy load", i.e., all input queues are always full. This is a worst-case assumption.

[Figure: internally nonblocking switch with input queues, illustrating head-of-line (HOL) blocking; the "losing" packet cannot access output 2 because it is blocked by the "winning" packet that got there first]

Internally Nonblocking Switch: Dropping Packets
Without dropping, an output line carries a packet with probability ρ_0 = Pr[carry a packet] = p. With dropping,
ρ_0 = Pr[carry a packet] = 1 − (1 − p/N)^N → 1 − e^{−p} for large N.
For p = 1, ρ_0 = 1 − e^{−1} ≈ 0.632.
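A quick numerical check (not from the slides) of how fast the dropping-switch utilization approaches its large-N limit:

```python
import math

p = 1.0                                 # every input offers a packet each slot
for N in (2, 4, 8, 32, 128):
    print(N, 1 - (1 - p / N) ** N)      # Pr[an output line carries a packet]
print("limit", 1 - math.exp(-p))        # ~0.632 for p = 1
```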

[Figure: internally nonblocking switch with the HOL packets labeled (input, output), e.g. (1,2), (2,3), (3,2), (4,4); the HOL packets destined to the same output form fictitious output queues, one per output (Output 1 through Output 4), which are used for the analysis]

– How about small N? p*: the maximum throughput with input queueing.
[Table: maximum throughput p* with input queueing for small N, together with simulation results for large N]

Throughput of the Input-Buffered Switch
– Consider a fictitious queue i. Let B_m^i = the number of packets in fictitious queue i at the start of time slot m, and A_m^i = the number of packets arriving at fictitious queue i at the start of time slot m.
– B_m^i = max(0, B_{m−1}^i − 1) + A_m^i.
– A_m^i is Poisson and independent of B_{m−1}^i as N → ∞.

[Figure: example evolution of fictitious queue i over time slots m−1 and m]

– Under saturation, every input has a HOL packet, so the N HOL packets are spread over the N fictitious queues and the mean number of packets per fictitious queue must equal one. As N → ∞ each fictitious queue behaves as an M/D/1 queue with utilization equal to the saturation throughput p*, so
p* + (p*)² / (2(1 − p*)) = 1,
i.e., (p*)² − 4p* + 2 = 0, whose root in (0, 1) is
p* = 2 − √2 ≈ 0.586.
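The saturation throughput can be checked both from the quadratic above and by brute-force simulation of a saturated input-queued switch with random contention resolution. The sketch below is my own (not from the slides); N and the number of slots are arbitrary illustrative choices.

```python
import math
import numpy as np

print("analytic :", 2 - math.sqrt(2))         # ~0.5858

rng = np.random.default_rng(3)
N, slots = 64, 5_000
hol = rng.integers(0, N, size=N)              # destination of each input's HOL packet
delivered = 0
for _ in range(slots):
    contended = np.unique(hol)                # outputs with at least one HOL packet
    delivered += contended.size               # exactly one packet wins each such output
    for out in contended:
        contenders = np.flatnonzero(hol == out)
        winner = rng.choice(contenders)       # random selection policy
        hol[winner] = rng.integers(0, N)      # saturated queue: a new packet moves to HOL
print("simulated:", delivered / (slots * N))  # approaches the analytic value as N grows
```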

Meaning of Saturation Throughput
p_0 = offered load at each input, p = throughput. For finite buffer size, if p_0 > p*, then at least a fraction (p_0 − p*)/p_0 of the packets are dropped. We must keep p_0 < p*.
[Figure: an input queue with offered load p_0 and throughput p]

Queueing scenario for the delay analysis of the input-buffered switch:
[Figure: an input queue whose HOL packet joins one of the N fictitious output queues (Output 1, …, Output N), each with probability 1/N]
The times spent at the HOL position are independent for successive packets when N is large, and the service times at different fictitious queues are independent.

X0X0 X3X3 X2X2 X 1  X 0  Busy period Idle period Busy period Y t U(t) Arrivals here are considered as arrivals in intervals i-2 Arrivals here are considered as arrivals in intervals i-1 X i-1 XiXi The busy periods and interpretations for delay analysis of an input queue

[Figure: illustration of the random variables used in the delay analysis of an input queue: m_i = 2 prior arrivals, the arrival of the packet of focus, one simultaneous arrival to be served before it (L = 1), the residual time R_i, the service times X_i and X_{i+1}, the waiting time W, and the departure of the packet of focus]

Three Selection Policies
– Random selection policy: if k packets are addressed to a particular output, one of the k packets is chosen at random, each selected with equal probability 1/k.
– Longest queue selection policy: the controller sends the packet from the longest input queue.
– Fixed priority selection policy: the N inputs have fixed priority levels, and of the k packets, the controller sends the one with the highest priority.

[Figure: mean waiting time W versus offered load p_0 for the different policies]
Different contention-resolution policies have different waiting time versus load relationships, but a common maximum load at which the waiting time goes to infinity.

Conclusion
– Mean queue lengths are always greater for queueing on inputs than on outputs.
– Output queues saturate only as the utilization approaches unity.
– Input queues saturate at a utilization that depends on N, but is approximately 2 − √2 ≈ 0.586 when N is large.