EE 122: Network Performance, Queueing Theory, Evaluation

EE 122: Network Performance, Queueing Theory, Evaluation
Computer Science Division, Department of Electrical Engineering and Computer Sciences
University of California, Berkeley, CA 94720-1776
September 7, 2004

Overview Motivations Timing diagrams Queueing Theory Evaluation Techniques

Motivations: understanding network behavior, improving protocols, verifying correctness of implementations, detecting faults, monitoring service level agreements, choosing providers, billing.

Outline Motivations Timing diagrams Queueing Theory Evaluation techniques

Definitions
Link bandwidth (capacity): maximum rate (in bps) at which the sender can send data along the link.
Propagation delay: time it takes the signal to travel from source to destination.
Packet transmission time: time it takes the sender to transmit all bits of the packet.
Queueing delay: time the packet needs to wait before being transmitted because the queue was not empty when it arrived.
Processing time: time it takes a router/switch to process the packet header, manage memory, etc.

Sending One Packet
Link bandwidth: R bps; propagation delay: T sec; packet size: P bits.
Transmission time = P/R.
Propagation delay T = length / speed_of_light; 1/speed ≈ 3.3 ns per meter in free space, 4 ns per meter in copper, 5 ns per meter in fiber.

Sending One Packet: Examples
Example 1: P = 1 Kbyte, R = 1 Gbps, 100 km of fiber => T = 500 usec, P/R = 8 usec, so T >> P/R.
Example 2: P = 1 Kbyte, R = 100 Mbps, 1 km of fiber => T = 5 usec, P/R = 80 usec, so T << P/R.
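To make the comparison concrete, here is a minimal Python sketch (assuming 1 Kbyte = 8,000 bits, matching the 8 usec on the slide, and the 5 ns per meter fiber figure above) that reproduces both cases:

```python
# Sketch: transmission vs. propagation delay (assumes 1 Kbyte = 8,000 bits
# and 5 ns/m of propagation delay in fiber, as on the slides above).
FIBER_S_PER_M = 5e-9   # 5 ns of propagation delay per meter of fiber

def transmission_time(packet_bits, rate_bps):
    """Time to push all bits of the packet onto the link: P/R."""
    return packet_bits / rate_bps

def propagation_delay(length_m, s_per_m=FIBER_S_PER_M):
    """Time for the signal to travel the length of the link: T."""
    return length_m * s_per_m

P = 8_000  # 1 Kbyte packet, in bits

# Example 1: R = 1 Gbps over 100 km of fiber -> T >> P/R
print(transmission_time(P, 1e9), propagation_delay(100e3))  # 8e-06 s, 5e-04 s

# Example 2: R = 100 Mbps over 1 km of fiber -> T << P/R
print(transmission_time(P, 1e8), propagation_delay(1e3))    # 8e-05 s, 5e-06 s
```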

Queueing
Capacity = R bps; propagation delay = T sec; packet size = P bits.
If the queue already holds Q bits when the packet arrives, the packet has to wait for the queue to drain before being transmitted: queueing delay = Q/R.

Queueing Example
P = 1 Kbit; R = 1 Mbps => P/R = 1 ms.
Delay for a packet that arrives at time t: d(t) = Q(t)/R + P/R, where Q(t) is the number of bits in the queue at time t.
With packet arrivals at t = 0, 0.5, 1, 7 and 7.5 ms: packet 1, d(0) = 1 ms; packet 2, d(0.5) = 1.5 ms; packet 3, d(1) = 2 ms.
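A small Python sketch of this FIFO queue (using the packet size, link rate and arrival times from the example; the simulation itself is illustrative and not part of the original slides) reproduces these delays:

```python
# Sketch: FIFO queueing delay d(t) = Q(t)/R + P/R for the example above
# (P = 1 Kbit, R = 1 Mbps, arrivals at t = 0, 0.5, 1, 7, 7.5 ms).
P_BITS = 1_000          # packet size: 1 Kbit
R_BPS = 1_000_000       # link rate: 1 Mbps
ARRIVALS_MS = [0.0, 0.5, 1.0, 7.0, 7.5]

def fifo_delays(arrivals_ms, p_bits, r_bps):
    """Return the delay (queueing + transmission) of each packet, in ms."""
    tx_ms = p_bits / r_bps * 1000.0     # transmission time P/R, in ms
    delays, busy_until = [], 0.0        # time at which the link becomes free
    for t in arrivals_ms:
        start = max(t, busy_until)      # wait for the queue to drain
        busy_until = start + tx_ms
        delays.append(busy_until - t)   # d(t) = Q(t)/R + P/R
    return delays

print(fifo_delays(ARRIVALS_MS, P_BITS, R_BPS))  # [1.0, 1.5, 2.0, 1.0, 1.5]
```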

Router: Store and Forward
A packet is stored (enqueued) in its entirety before being forwarded (sent).
[Figure: timing diagram of a packet traversing links of 10 Mbps, 5 Mbps, 100 Mbps and 10 Mbps between sender and receiver.]

Store and Forward: Multiple Packet Example
[Figure: timing diagram of several packets traversing the same 10 Mbps, 5 Mbps, 100 Mbps, 10 Mbps path.]

Cut-Through
A packet starts being forwarded (sent) as soon as its header is received.
[Figure: timing diagram with R1 = 10 Mbps and R2 = 10 Mbps; the switch begins transmitting on the second link once the header has arrived.]
What happens if R2 > R1?
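The difference can be sketched numerically. The snippet below assumes an illustrative 1 Kbyte packet with a 40-byte header, two 10 Mbps links, and zero propagation and processing delay; these numbers are not from the slides.

```python
# Sketch: store-and-forward vs. cut-through latency over two 10 Mbps links
# (assumes a 1 Kbyte packet with a 40-byte header; propagation and
# processing delays are ignored; the sizes are illustrative only).
P = 8 * 1024        # packet size in bits
H = 8 * 40          # header size in bits
R1 = R2 = 10e6      # link rates in bps

# Store-and-forward: the switch receives the whole packet before sending it.
store_and_forward = P / R1 + P / R2

# Cut-through (R2 <= R1): the switch starts sending after the header arrives.
# If R2 > R1, the outgoing link would drain faster than bits arrive, so the
# switch would have to pause mid-packet (the question on the slide).
cut_through = H / R1 + P / R2

print(store_and_forward, cut_through)  # ~1.64 ms vs. ~0.85 ms
```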

Throughput
Throughput of a connection or link = total number of bits successfully transmitted during some period [t, t + T) divided by T.
Link utilization = throughput of the link / link rate.
Bit-rate units: 1 Kbps = 10^3 bps, 1 Mbps = 10^6 bps, 1 Gbps = 10^9 bps. (For memory sizes, 1 Kbyte = 2^10 bytes = 1024 bytes.)
Some rates are expressed in packets per second (pps) => relevant for routers/switches where the bottleneck is header processing.

Delay
Delay (latency): the time required for a bit (packet, file) to go from A to B.
Jitter: variability in delay.
Round-Trip Time (RTT): two-way delay, from sender to receiver and back.
Bandwidth-delay product: product of bandwidth and delay => the "storage" capacity of the network.

Bandwidth-Delay Product
Window-based flow control: send W bits (the window size), wait for ACKs (i.e., make sure the packets reached the destination), repeat.
Throughput = W/RTT => W = Throughput x RTT.
Numerical example: W = 64 Kbytes, RTT = 200 ms => Throughput = W/RTT = 2.6 Mbps.
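A quick Python check of the numerical example, plus the reverse question of how large a window is needed to fill a given pipe (the 10 Mbps figure in the last line is illustrative, not from the slides):

```python
# Sketch: throughput of window-based flow control, Throughput = W / RTT
# (reproduces the numerical example above; 1 Kbyte taken as 1024 bytes).
W_BITS = 64 * 1024 * 8   # window size: 64 Kbytes, in bits
RTT_S = 0.200            # round-trip time: 200 ms

print(W_BITS / RTT_S / 1e6)      # ~2.6 Mbps

# Conversely, the window needed to keep a pipe full is the
# bandwidth-delay product: W = Throughput x RTT.
print(10e6 * RTT_S / 8 / 1024)   # ~244 Kbytes to fill a 10 Mbps path
```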

Delay: Illustration 1
[Figure: cumulative "latest bit seen by time t" curves at point 1 (sender) and point 2 (receiver); the horizontal gap between the two curves is the delay.]

Delay: Illustration 2
[Figure: arrival times of packets 1 and 2 at point 1 (sender) and at point 2 (receiver); the offset between corresponding arrivals is the delay.]

Overview Motivations Timing diagrams Queueing Theory (Little's Theorem, M/M/1 Queue) Evaluation Techniques

Little’s Theorem Assume a system at which packets arrive at rate λ(t) Let d(i) be the delay of packet i , i.e., time packet i spends in the system What is the average number of packets in the system? d(i) = delay of packet i λ(t) – arrival rate system Intuition: Assume arrival rate is λ = 1 packet per second and the delay of each packet is d = 5 seconds What is the average number of packets N in the system?

Answer: N = λ x d = 5 packets.
[Figure: with one arrival per second and a 5-second delay per packet, at any instant about 5 packets are in the system.]

Little’s Theorem 1 2 Latest bit seen by time t d(i) B(t) time T Sender Receiver d(i) = delay of packet i B(t) = number of bits in transit (in the system) at time t d(i) B(t) time T What is the system occupancy, i.e., average number of packets in transit between 1 and 2 ?

Little’s Theorem 1 2 Latest bit seen by time t d(i) B(t) A= area time Sender Receiver d(i) = delay of packet i B(t) = number of bits in transit (in the system) at time t d(i) B(t) A= area time T Average occupancy = A/T

Little’s Theorem 1 2 Latest bit seen by time t P B(t) A= area time T Sender Receiver d(i) = delay of packet i x(t) = number of packets in transit (in the system) at time t A(m) P d(m-1) A(m-1) B(t) A= area time T A = A(1) + A(2) + … + A(m) = P*(d(1) + d(2) + … + d(m))

Little’s Theorem 1 2 Latest bit seen by time t P B(t) A= area time Sender Receiver d(i) = delay of packet i B(t) = number of bits in transit (in the system) at time t A(m) P d(m-1) A(m-1) B(t) A= area time Average occupancy T A/T = (P*(d(1) + d(2) + … + d(m)))/T = ((P*m)/T) * ((d(1) + d(2) + … + d(m))/m) Arrival rate Average delay

Little’s Theorem 1 2 Latest bit seen by time t λ(i) B(t) A= area time Sender Receiver d(i) = delay of packet i x(t) = number of packets in transit (in the system) at time t A(m) λ(i) A(m-1) d(m-1) B(t) A= area time T Average occupancy = (arrival rate) x (average delay)

Overview Motivations Timing diagrams Queueing Theory (Little's Theorem, M/M/1 Queue) Evaluation Techniques

Problem How many clerks do you need to handle all these people?

What Do You Know?
[Figure: two measured histograms. Customer arrival distribution: for each inter-arrival time (1-11 sec), the number of customers that arrived that long after the previous customer (e.g., 5 sec after). Customer service distribution: for each service-time bin (5-50 sec), the number of customers served within that range (e.g., more than 20 sec but less than 25 sec).]

What Do I Assume?
Basic probability knowledge. Let v be a discrete random variable that takes n distinct values, taking value v_i with probability p_i. Then the expected value of v is E(v) = Σ_{i=1..n} v_i*p_i, where Σ_{i=1..n} p_i = 1.
Let a and b be two random variables. Then E(a+b) = E(a) + E(b).
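A tiny example of the expected-value definition (the values and probabilities below are made up for illustration):

```python
# Sketch: expected value of a discrete random variable, E(v) = sum_i v_i * p_i
# (the values and probabilities are illustrative only).
values = [1, 2, 3]
probs  = [0.5, 0.3, 0.2]            # probabilities sum to 1

E_v = sum(v * p for v, p in zip(values, probs))
print(E_v)                          # 1.7
```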

Queueing System
A queueing system consists of a queue at which packets (customers) arrive and a server that processes (transmits) them.
Parameters: λ = arrival rate; x = service time, i.e., the time to serve (transmit) a customer (packet); µ = 1/E(x) = service rate, where E(x) is the mean service time.
Typical question: what is the time (delay) spent by a customer (packet) in the system?

M/M/1 Queue
One server; arrivals form a Poisson process with rate λ; service times x are exponentially distributed.
Goal: compute the average time d spent by a packet in the system.
Two key properties of the M/M/1 queue make the analysis easy: the memoryless property of the exponential distribution, and the PASTA property of the Poisson process.

Poisson Arrival Process and Exponential Distribution
A Poisson arrival process is one whose inter-arrival times are exponentially distributed: if t_1, t_2, ..., t_n are the arrival instants of n packets, then the inter-arrival times y_i = t_{i+1} - t_i are exponentially distributed.
The exponential distribution is memoryless, i.e., the future does not depend on the past.
Example: assume a Poisson process with an arrival rate of λ = 5 pkts/sec. Then, at any instant t, the expected time until the next packet arrives is 1/λ = 1/5 = 0.2 sec, irrespective of when the previous packet arrived.
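A short simulation sketch of this memoryless property (λ = 5 pkts/sec as in the example; the 0.3-second conditioning threshold is an arbitrary illustration):

```python
# Sketch: memorylessness of exponential inter-arrival times. Given that no
# packet has arrived for s seconds, the expected further wait is still
# 1/lambda (lambda = 5 pkts/sec as in the example; s = 0.3 s is arbitrary).
import random

random.seed(0)
lam, s, n = 5.0, 0.3, 1_000_000

gaps = [random.expovariate(lam) for _ in range(n)]
residuals = [g - s for g in gaps if g > s]   # extra wait, given we reached s

print(sum(gaps) / len(gaps))                 # ~0.2 s = 1/lambda
print(sum(residuals) / len(residuals))       # ~0.2 s again: memoryless
```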

PASTA Principle 1/2
Problem: assume a system characterized by a time-dependent variable X(t), and suppose we want to estimate E(X) from n measurements. When do we need to take these measurements? Does periodic measurement work?
Example: assume you want to measure the variable X(t) below, but you don't know its period. What do you do?
[Figure: a periodic signal X(t) plotted over time 1-8.]

PASTA Principle 2/2
PASTA: Poisson Arrivals See Time Averages.
Performing measurements at times drawn from a Poisson process allows us to compute the mean value of the measured parameter.
Example: let N(t_i) be the number of packets in the queue at time t_i (1 ≤ i ≤ n). If the t_i are given by a Poisson process, then (Σ_{i=1..n} N(t_i))/n → E(N) as n grows.
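A sketch contrasting periodic and Poisson-timed sampling on a simple periodic signal (the square-wave X(t) and the sampling rates are illustrative, not from the slides):

```python
# Sketch: why sampling at Poisson times recovers the time average while
# periodic sampling can fail. X(t) is a square wave of period 1 s whose
# time average is 0.5; the signal and sampling rates are illustrative.
import random

random.seed(0)

def X(t):
    return 1.0 if (t % 1.0) < 0.5 else 0.0   # time average = 0.5

# Periodic sampling at exactly the period of X(t): always hits the same phase.
periodic = [X(0.1 + k * 1.0) for k in range(10_000)]
print(sum(periodic) / len(periodic))          # 1.0 -- badly biased

# Sampling at Poisson times (on average 1 sample/sec).
t, poisson = 0.0, []
for _ in range(10_000):
    t += random.expovariate(1.0)
    poisson.append(X(t))
print(sum(poisson) / len(poisson))            # ~0.5 -- the true time average
```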

M/M/1 Queue
Notation:
p_n = Prob(N(t) = n): probability that the queue size is n at time t
r_n = Prob(N_a(t) = n): probability that an arriving packet finds n packets in the system
x_i = service (transmission) time of packet i
z = remaining transmission time of the packet in service when the new packet arrives
d_n = delay of a packet that finds n packets in the system
d_n = x_1 + x_2 + ... + x_n + z

M/M/1 Queue
d_n = x_1 + x_2 + ... + x_n + z
E(d_n) = n*E(x) + E(z) = (n+1)*E(x), because the exponential distribution is memoryless, so E(z) = E(x).

M/M/1 Queue
E(d_n) = (n+1)*E(x)
d = Σ_{n=0..∞} E(d_n)*r_n = Σ_{n=0..∞} (n+1)*E(x)*p_n, because by PASTA p_n = r_n.

M/M/1 Queue
d = Σ_{n=0..∞} (n+1)*E(x)*p_n = E(x)*Σ_{n=0..∞} (n*p_n + p_n) = E(x)*Σ_{n=0..∞} n*p_n + E(x)*Σ_{n=0..∞} p_n = E(x)*(E(n) + 1).

M/M/1 Queue
d = E(x)*(E(n) + 1)
By Little's result, E(n) = λ*d; by definition, E(x) = 1/µ; let u = λ/µ be the system utilization.
Then d = (1/µ)*(λ*d + 1), which gives d = (1/µ)/(1 - λ/µ) = (1/µ)/(1 - u).
[Figure: d plotted against u; the delay starts at 1/µ for u = 0 and grows without bound as u approaches 1.]
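As a sanity check, a short M/M/1 simulation sketch (with illustrative λ = 4 and µ = 5, i.e., u = 0.8; these values are not from the slides) can be compared against the formula:

```python
# Sketch: M/M/1 average delay, simulated vs. the formula d = (1/mu)/(1 - u)
# (lambda and mu are illustrative; utilization u = lambda/mu must be < 1).
import random

random.seed(0)
lam, mu, n_pkts = 4.0, 5.0, 500_000      # u = 0.8

t = busy_until = total_delay = 0.0
for _ in range(n_pkts):
    t += random.expovariate(lam)                              # Poisson arrivals
    busy_until = max(t, busy_until) + random.expovariate(mu)  # exp. service
    total_delay += busy_until - t                # queueing + service time

u = lam / mu
print(total_delay / n_pkts)        # simulated average delay, ~1.0 s
print((1 / mu) / (1 - u))          # formula: (1/5)/(1 - 0.8) = 1.0 s
```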

Overview Motivations Timing diagrams Queueing Theory Evaluation Techniques

Evaluation Techniques
Measurements: gather data from a real network (e.g., ping www.berkeley.edu); realistic and specific.
Simulations: run a program that pretends to be a real network (e.g., the NS network simulator, the Nachos OS simulator).
Models and analysis: write some equations from which we can derive conclusions; general, but may not be realistic.
Usually a combination of these methods is used.

Evaluation: Putting Everything Together
[Figure: evaluation cycle linking Reality, Model, Conclusion and Hypothesis through abstraction, derivation and prediction, with plausibility, tractability and realism as the corresponding concerns.]
There is a tradeoff between plausibility, tractability and realism: it is better to have a few realistic conclusions than none at all (could not derive) or many conclusions that no one believes (not plausible).