048866: Packet Switch Architectures
Output-Queued Switches: Deterministic Queueing Analysis, Fairness and Delay Guarantees
Dr. Isaac Keslassy, Electrical Engineering, Technion
048866 – Packet Switch Architectures
Outline: Output-Queued Switches; Terminology: Queues and Arrival Processes; Deterministic Queueing Analysis; Output Link Scheduling.
Spring 2006 – Packet Switch Architectures
Generic Router Architecture
[Figure: N line cards; each performs header processing (IP address lookup against an address table, header update) and queues packets in a buffer memory running at N times the line rate.]
Simple Output-Queued (OQ) Switch Model
[Figure: a 4-port OQ switch; each of links 1-4 has an ingress and an egress at line rate R, with a queue at every output.]
How an OQ Switch Works: the Output-Queued (OQ) Switch
OQ Switch Characteristics
Arriving packets are immediately written into the output queue, without intermediate buffering. The flow of packets to one output does not affect the flow to another output.
OQ Switch Characteristics
An OQ switch is work-conserving: an output line is always busy whenever the switch holds a packet destined to it. OQ switches therefore achieve the highest throughput and the lowest average delay. We will also see that the rates of individual flows and the delays of packets can be controlled.
The Shared-Memory Switch
[Figure: a single physical memory device shared by all N links; each link's ingress and egress operate at rate R.]
OQ vs. Shared-Memory: Memory Bandwidth and Buffer Size
Memory Bandwidth (OQ)
[Figure: one output-queue memory, written by the N inputs and read out at rate R; total memory bandwidth (N+1)R.]
Memory Bandwidth
Basic OQ switch: consider an OQ switch with N separate physical memories and all links operating at rate R bits/s. In the worst case, packets may arrive continuously from all inputs, all destined to a single output. The maximum memory bandwidth requirement for each memory is therefore (N+1)R bits/s (N simultaneous writes plus one read).
Shared-memory switch: the maximum memory bandwidth requirement for the single memory is 2NR bits/s (N writes plus N reads).
Buffer Size
In an OQ switch, let Qi(t) be the length of the queue for output i at time t. Let M be the total buffer size of the shared-memory switch. Is a shared-memory switch more buffer-efficient than an OQ switch?
Buffer Size
Answer: it depends on the buffer management policy.
Static queues: same as an OQ switch; for no loss, it needs Qi(t) ≤ M/N for all i.
Dynamic queues: better than an OQ switch (statistical multiplexing effects); for no loss, it only needs Q1(t) + … + QN(t) ≤ M.
How fast can we make a centralized shared memory switch?
[Figure: N ports connected by a 200-byte bus to a shared 5 ns SRAM.] With 5 ns per memory operation and two memory operations per packet (one write, one read), there is an upper bound on the total switch capacity.
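Plugging in the slide's numbers gives the bound; a quick sanity check (the 200-byte bus width and 5 ns cycle time are the figures from the slide, the variable names are mine):

```python
# Capacity bound for the centralized shared-memory switch above:
# a 200-byte-wide bus into 5 ns SRAM, with two memory operations per
# packet (one write on arrival, one read on departure).
bus_width_bytes = 200
op_time_ns = 5
ops_per_packet = 2

# Each 200-byte transfer occupies the memory for 2 * 5 = 10 ns,
# so the aggregate capacity is bounded by 1600 bits / 10 ns.
capacity_gbps = bus_width_bytes * 8 / (ops_per_packet * op_time_ns)
print(capacity_gbps)  # 160.0 (Gb/s, shared across all ports)
```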
Queue Terminology
[Figure: arrivals A(t) at rate λ feed a queue of occupancy Q(t), served with discipline S at rate μ, producing departures D(t).]
Arrival process A(t): in continuous time, usually the cumulative number of arrivals in [0,t]; in discrete time, usually an indicator of whether or not an arrival occurred at time t = nT. λ is the arrival rate: the expected number of arriving packets (or bits) per second.
Queue occupancy Q(t): the number of packets (or bits) in the queue at time t.
Queue Terminology
Service discipline S: indicates the sequence of departures, e.g. FIFO/FCFS, LIFO, …
Service distribution: indicates the time taken to process each packet, e.g. deterministic or exponentially distributed service times. μ is the service rate: the expected number of served packets (or bits) per second.
Departure process D(t): in continuous time, usually the cumulative number of departures in [0,t]; in discrete time, usually an indicator of whether or not a departure occurred at time t = nT.
More terminology
Customer: queueing theory usually refers to queued entities as “customers”. In this class, customers will usually be packets or bits.
Work: each customer is assumed to bring some work which affects its service time. For example, packets may have different lengths, and their service time might be a function of their length.
Waiting time: the time a customer waits in the queue before beginning service.
Delay: the time from when a customer arrives until it has departed.
Arrival Processes
Deterministic arrival processes: e.g. one arrival every second, or a burst of 4 packets every other second. A deterministic sequence may be designed to be adversarial, to expose some weakness of the system.
Random arrival processes:
(Discrete time) Bernoulli i.i.d. arrival process: let A(t) = 1 if an arrival occurs at time t, where t = nT, n = 0, 1, …; A(t) = 1 w.p. p and 0 w.p. 1−p, a series of independent tosses of a p-coin.
(Continuous time) Poisson arrival process: exponentially distributed interarrival times.
Adversarial Arrival Process Example for “Knockout” Switch
[Figure: N inputs at line rate R feeding a switch whose memory write bandwidth is kR < NR.] If our design goal is never to drop packets, a simple discrete-time adversarial arrival process is one in which A1(t) = A2(t) = … = Ak+1(t) = 1, with all packets destined to output t mod N.
Bernoulli arrival process
[Figure: N inputs with arrival processes A1(t), …, AN(t); memory write bandwidth = NR.] Assume Ai(t) = 1 w.p. p, else 0, and assume each arrival picks an output independently, uniformly at random. Some simple results follow:
1. The probability that at time t a packet arrives at input i destined to output j is p/N.
2. The probability that two consecutive packets arrive at input i equals the probability that packets arrive at inputs i and j simultaneously: both are p².
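The first result is easy to check by simulation; a minimal sketch (the function name and parameters are mine, not from the slides):

```python
import random

def estimate_arrival_prob(p, N, slots, seed=1):
    """Estimate P[a packet arrives at a given input destined to a given
    output] under Bernoulli(p) arrivals with an independent, uniform
    output choice. The estimate should approach p/N for large `slots`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(slots):
        arrival = rng.random() < p   # Bernoulli coin toss at this input
        dest = rng.randrange(N)      # uniform output choice
        if arrival and dest == 0:    # count arrivals destined to output 0
            hits += 1
    return hits / slots
```

For example, `estimate_arrival_prob(0.5, 4, 200000)` should come out close to 0.5/4 = 0.125.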
Simple Deterministic Model
[Figure: cumulative bit curves over time, A(t) above D(t), with vertical gap Q(t) and service rate R.]
A(t): cumulative number of bits that arrived up until time t. D(t): cumulative number of bits that departed up until time t.
Properties of A(t), D(t): both are non-decreasing, and A(t) ≥ D(t).
Simple Deterministic Model
[Figure: A(t) above D(t); the vertical gap is Q(t) and the horizontal gap is d(t).]
Queue occupancy: Q(t) = A(t) − D(t). Queueing delay d(t): the time spent in the queue by a bit that arrived at time t (assuming the queue is served FCFS/FIFO).
Discrete-Time Queueing Model
Discrete time: in each time-slot n, first a(n) arrivals occur, then d(n) departures.
Cumulative arrivals: A(n) = a(0) + … + a(n). Cumulative departures: D(n) = d(0) + … + d(n).
Queue size at the end of time-slot n: Q(n) = A(n) − D(n).
Work-Conserving Queue
We saw that an output queue in an OQ switch is work-conserving: it is always busy when there is a packet for it. Let A(n), D(n) and Q(n) denote the arrivals, departures and queue size of some output queue, and let R be the queue's departure rate (the amount of traffic that can depart in each time-slot). After the arrivals at the start of time-slot n, this queue holds Q(n−1) + a(n) worth of traffic.
Work-Conserving Output Link
Case 1: Q(n−1) + a(n) ≤ R ⇒ everything is serviced and nothing is left in the queue.
Case 2: Q(n−1) + a(n) > R ⇒ exactly R worth of traffic is serviced, so Q(n) = Q(n−1) + a(n) − R.
Lindley’s equation: Q(n) = max(Q(n−1) + a(n) − R, 0) = (Q(n−1) + a(n) − R)+
Note: to recover the cumulative departures, use D(n) = A(n) − Q(n).
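Lindley's recursion translates directly into code; a small sketch (names are mine):

```python
def queue_sizes(arrivals, R):
    """Evolve a work-conserving queue by Lindley's equation:
    Q(n) = max(Q(n-1) + a(n) - R, 0)."""
    Q, sizes = 0, []
    for a in arrivals:
        Q = max(Q + a - R, 0)  # serve up to R per slot, never below 0
        sizes.append(Q)
    return sizes

# A burst of 3 in slot 0 drains at R = 1 per slot:
print(queue_sizes([3, 0, 0, 2], R=1))  # [2, 1, 0, 1]
```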
The problems caused by FIFO output-link scheduling
A FIFO queue does not take fairness into account ⇒ it is “unfair”: a source has an incentive to maximize the rate at which it transmits. It is also hard to control the delay of packets through a network of FIFO queues. These are the two themes of this part: fairness guarantees and delay guarantees.
Fairness
[Figure: flow A enters on a 10 Mb/s link and flow B on a 100 Mb/s link; they share a 1.1 Mb/s bottleneck at router R1 toward C, each shown receiving 0.55 Mb/s.] A flow is, e.g., an http flow with a given (IP SA, IP DA, TCP SP, TCP DP). What is the “fair” allocation: (0.55 Mb/s, 0.55 Mb/s) or (0.1 Mb/s, 1 Mb/s)?
Fairness
[Figure: a variant of the previous topology; A (10 Mb/s) and B (100 Mb/s) again share the 1.1 Mb/s link at R1, with an additional 0.2 Mb/s flow toward a second destination D.] What is the “fair” allocation?
Max-Min Fairness A common way to allocate flows
N flows share a link of rate C. Flow f wishes to send at rate W(f), and is allocated rate R(f).
1. Pick the flow f with the smallest requested rate.
2. If W(f) < C/N, then set R(f) = W(f); if W(f) > C/N, then set R(f) = C/N.
3. Set N = N − 1 and C = C − R(f).
4. If N > 0, go to step 1.
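The four steps above can be sketched directly (function and variable names are mine):

```python
def max_min_fair(W, C):
    """Max-min fair allocation: repeatedly give the smallest remaining
    request min(W(f), C/N) of the remaining capacity, then remove it."""
    remaining = dict(W)   # flows still to be allocated
    alloc = {}
    N, cap = len(remaining), C
    while remaining:
        f = min(remaining, key=remaining.get)  # smallest requested rate
        r = min(remaining.pop(f), cap / N)     # W(f) or fair share C/N
        alloc[f] = r
        cap -= r
        N -= 1
    return alloc
```

On the example that follows (requests 0.1, 0.5, 10 and 5 sharing C = 1), this yields allocations 0.1, 0.3, 0.3 and 0.3.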
Max-Min Fairness An example
[Figure: four flows with requests W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10 and W(f4) = 5 share a link of rate C = 1 at router R1.]
Round 1: set R(f1) = 0.1.
Round 2: set R(f2) = 0.9/3 = 0.3.
Round 3: set R(f4) = 0.6/2 = 0.3.
Round 4: set R(f3) = 0.3/1 = 0.3.
Water-Filling Analogy
[Figure: bars showing the requested amounts (0.1, 0.5, 5, 10) for customers sorted by request; the “water level” at 0.3 marks the allocations (0.1, 0.3, 0.3, 0.3).]
Max-Min Fairness
How can an Internet router “allocate” different rates to different flows? First, let’s see how a router can allocate the “same” rate to different flows…
Fair Queueing
Packets belonging to a flow are placed in their own FIFO; this is called per-flow queueing. The FIFOs are scheduled one bit at a time, in round-robin fashion; this is called bit-by-bit fair queueing. [Figure: arriving packets are classified into per-flow FIFOs (Flow 1 … Flow N), which a bit-by-bit round-robin scheduler serves.]
Bit-by-Bit Weighted Fair Queueing (WFQ)
Likewise, flows can be allocated different rates by servicing a different number of bits from each flow during each round. This is also called Generalized Processor Sharing (GPS), with an “infinitesimal amount of flow” in place of bits. [Figure: four queues with rates R(f1) = 0.1 and R(f2) = R(f3) = R(f4) = 0.3 sharing a link of capacity C at R1.] Order of service for the four queues: … f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, …
GPS Guarantees
An output link implements GPS with k sessions, allocated rates R(f1), …, R(fk). Assume session i is continually backlogged, and for all j let Sj(t1,t2) be the amount of service received by session j between times t1 and t2. Then Si(t1,t2) ≥ R(fi) · (t2 − t1), and for all j ≠ i, Si(t1,t2) / Sj(t1,t2) ≥ R(fi) / R(fj) whenever the denominator is non-zero.
Packetized Weighted Fair Queueing (WFQ)
Problem: we need to serve a whole packet at a time.
Solution: determine the time at which a packet p would complete if we served the flows bit-by-bit; call this the packet’s finishing time, Fp. Then serve packets in order of increasing finishing time.
This is also called Packetized Generalized Processor Sharing (PGPS).
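For the simple case where all packets are already backlogged at time 0 (as in the worked examples that follow), a flow's packet finishes in the round given by (cumulative bits of that flow so far) / (flow weight), and PGPS simply sorts by that value. A sketch under that assumption (names are mine; ties here are broken by flow name, which is arbitrary):

```python
def pgps_order(flows):
    """flows: {name: (weight, [packet lengths in bits])}, all packets
    assumed backlogged at t = 0. Returns (flow, packet index) pairs in
    PGPS service order, i.e. sorted by bit-by-bit finish round."""
    entries = []
    for name, (weight, pkts) in flows.items():
        cum = 0
        for i, length in enumerate(pkts):
            cum += length                            # flow bits served so far
            entries.append((cum / weight, name, i))  # finish round of packet
    entries.sort()
    return [(name, i) for _, name, i in entries]
```

With equal weights and packets A1 = 4, B1 = 3, C1 = C2 = 1, D1 = 1, D2 = 2, the one-bit packets C1 and D1 come out first and A1 (finish round 4) last.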
Understanding Bit-by-Bit WFQ
4 queues A-D sharing 4 bits/s of bandwidth, equal weights (1:1:1:1). Initial packets: A1 = 4, B1 = 3, C1 = 1, C2 = 1, D1 = 1, D2 = 2; packets A2 = 2 and C3 = 2 arrive during service. [Figure, rounds 1-2: C1 and D1 depart at round R = 1, as A2 and C3 arrive; C2 departs at R = 2.]
Understanding Bit-by-Bit WFQ (continued, equal weights)
[Figure, rounds 3-6: B1 and D2 depart at R = 3; A1 departs at R = 4; A2 and C3 depart at R = 6. Departure order for packet-by-packet WFQ: sort the packets by their finish round.]
Understanding Bit-by-Bit WFQ
4 queues A-D sharing 4 bits/s of bandwidth, weights 3:2:2:1, same packets as before. [Figure, round 1: C1, C2 and D1 depart at round R = 1.]
Understanding Bit-by-Bit WFQ (continued, weights 3:2:2:1)
[Figure, rounds 2 onward: A1 and B1 depart at R = 2, followed by D2 and C3. Departure order for packet-by-packet WFQ: sort the packets by their finish time.]
WFQ is complex
There may be hundreds of thousands, even millions, of flows, and the linecard must maintain a FIFO per flow. The finishing time must be calculated for each arriving packet, and packets must be sorted by their departure time: naively, with m queued packets, this takes O(log m) time per packet; in practice it can be made O(log N) for N active flows. [Figure: on the egress linecard, each arriving packet gets its Fp calculated, is placed in one of the per-flow queues 1 … N, and the packet with the smallest Fp departs.]
Deficit Round Robin (DRR) [Shreedhar & Varghese, ’95]
An O(1) approximation to WFQ. [Figure: active packet queues, each with a deficit counter; in every round a queue’s counter grows by the quantum size (here 200), and head-of-line packets are sent as long as they fit within the accumulated credit, which is decremented accordingly.] It is easy to implement weighted DRR by using a different quantum size for each queue, an often-adopted solution in practice.
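A minimal DRR sketch (the queue contents and names are illustrative, not the slide's exact numbers):

```python
from collections import deque

def drr(queues, quantum, rounds):
    """Deficit Round Robin: every round, each active queue earns
    `quantum` credits; head-of-line packets are sent while their size
    fits the accumulated deficit. Credits reset when a queue empties,
    so an idle queue cannot hoard service."""
    deficit = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, pkts in queues.items():
            if not pkts:
                continue
            deficit[name] += quantum
            while pkts and pkts[0] <= deficit[name]:
                deficit[name] -= pkts[0]
                sent.append((name, pkts.popleft()))
            if not pkts:
                deficit[name] = 0
    return sent

# Quantum 200: queue x's 300-byte head packet must wait a second round.
print(drr({'x': deque([300, 100]), 'y': deque([150])}, 200, 2))
# [('y', 150), ('x', 300), ('x', 100)]
```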
Recall the problems caused by FIFO output-link scheduling: fairness, addressed above, and delay; it is hard to control the delay of packets through a network of FIFO queues.
Deterministic analysis of a router queue
[Figure: model of a router queue served at rate μ; cumulative bytes A(t) and D(t) versus time, with queue occupancy Q(t) as the vertical gap and FIFO delay d(t) as the horizontal gap between the curves.]
So how can we control the delay of packets?
Assume continuous-time, bit-by-bit flows for a moment. Say we know the arrival process Af(t) of flow f at a router, and the rate R(f) allocated to flow f. Then, in the usual way, we can determine the delay of the packets in f and the buffer occupancy.
WFQ Scheduler
[Figure: flows 1 … N with arrival processes A1(t), …, AN(t) are classified into per-flow queues; a WFQ scheduler serves flow i at rate R(fi), producing departures Di(t).] Assume a WFQ scheduler…
WFQ Scheduler
[Figure: cumulative bytes; Af(t) above Df(t), which increases at slope R(f).] We know the allocated rate R(f) ⇒ if we knew the arrival process, we would know the packet delay. Key idea: constrain the arrival process.
Let’s say we can bound the arrival process
[Figure: cumulative bytes; A1(t) stays below a line with intercept σ and slope ρ.] The number of bytes that can arrive in any period of length t is bounded by σ + ρt. This is called (σ,ρ) regulation.
(σ,ρ)-Constrained Arrivals and a Minimum Service Rate
[Figure: A1(t) bounded by the line σ + ρt; D1(t) increases at rate R(f1); the maximum backlog Bmax and maximum delay dmax are the largest vertical and horizontal gaps between the curves.]
Theorem [Parekh, Gallager ’93]: if flows are leaky-bucket constrained, and routers use WFQ, then end-to-end delay guarantees are possible.
The leaky-bucket (σ,ρ) regulator
[Figure: tokens arrive at rate ρ into a token bucket of size σ; arriving packets wait in a packet buffer and depart by consuming one token per byte (or per packet).]
Making the flow conform to (σ,ρ) regulation: the leaky bucket as a “shaper”
[Figure: a variable-bit-rate source (e.g. compressed video) feeds a regulator with token rate ρ and bucket size σ; the traffic released to the network conforms to the (σ,ρ) envelope.]
Checking up on the flow: the leaky bucket as a “policer”
[Figure: at a router, traffic arriving from the network is checked against a token bucket with rate ρ and size σ.]
QoS Router
[Figure: on each linecard, a policer and classifier feed per-flow queues, which a scheduler serves on the output.] Remember: these results assume an OQ switch!
References
[GPS] A. K. Parekh and R. G. Gallager, “A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case,” IEEE/ACM Transactions on Networking, June 1993.
[DRR] M. Shreedhar and G. Varghese, “Efficient Fair Queueing Using Deficit Round Robin,” ACM SIGCOMM, 1995.
Questions
Do packets always finish in packetized WFQ earlier than (or as late as) in bit-by-bit WFQ? Is DRR with quantum size 1 equal to packetized WFQ?
Answer: NO
Example: 2 queues, equal weights, 1 b/s link. [Timeline: packet A of 6 bits arrives at t = 0; packet B of 1 bit arrives while A is in service.] In packetized WFQ, A is serviced from 0 to 6 (packets cannot be broken), then B is serviced from 6 to 7. In bit-by-bit WFQ, A is serviced from 0 to 2, then B from 2 to 3, then A from 3 to 7.
Second question: is DRR with quantum size 1 equal to packetized WFQ?
Answer: NO
Counterexample: quantum size 1 with n flows, each active queue holding a 100-bit packet. [Figure, steps 1-2: in each round every active queue gains one credit; after 99 rounds a queue has 99 remaining credits, still too few to send its 100-bit packet.]
[Figure, steps 3-4: the scheduler finally starts serving the first flow’s 100-bit packet, and continues to service the other flows in turn; meanwhile a small packet arrives for the last flow.]
Conclusion: under DRR the small packet is serviced only after more than 100·(n−2) slots, whereas packetized WFQ would service it immediately.