Packet Switches with Output Buffers and Shared Buffer
- Packet switches with output buffers or a shared buffer
- Delay guarantees
- Fairness
- Fair Queueing
- Deficit Round Robin
- Random Early Detection
- Fair packet discard
- Weighted Fair Early Packet Discard
Packet Switches with Output Buffers
Packet Switches with Shared Buffer
Delay Guarantees
All flows must police their traffic: each flow may send only a certain amount of data within one policing interval. E.g., a 10 Mbps flow may send 10 Kb within 1 ms. If the output is not overloaded, the data is guaranteed to pass the switch within one policing interval.
Policing Schemes
- The simplest TDM scheme admits one packet every 1/r seconds, where r is the rate. This scheme incurs large delays for bursty traffic.
- Windowing scheme: the counter is initialized to W; it is incremented by 1 exactly W/r seconds after a packet is transmitted; a packet is admitted when the counter is positive, and the counter is then decremented by 1.
- Leaky bucket scheme: the counter is incremented by 1 every 1/r seconds, and its maximum value is W; a packet is admitted when the counter is positive, and the counter is then decremented by 1 (a sketch follows below). Etc.
Many papers calculate the delay that a given scheduling algorithm incurs when the traffic is policed by a leaky bucket.
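A minimal sketch of the leaky bucket counter described above, using fractional credits instead of a periodic timer. The class and parameter names (LeakyBucketPolicer, rate_pps, depth) are illustrative, not from any particular implementation.

```python
import time

class LeakyBucketPolicer:
    """Counter-based policer: the counter gains one credit every 1/r
    seconds up to a maximum of W; a packet is admitted only while a
    full credit is available."""

    def __init__(self, rate_pps: float, depth: int):
        self.rate = rate_pps        # r: credits accrued per second
        self.depth = depth          # W: maximum counter value
        self.credit = float(depth)  # start with a full bucket
        self.last = time.monotonic()

    def admit(self) -> bool:
        now = time.monotonic()
        # Accrue credits for the elapsed time, capped at W.
        self.credit = min(self.depth,
                          self.credit + (now - self.last) * self.rate)
        self.last = now
        if self.credit >= 1.0:
            self.credit -= 1.0      # spend one credit per admitted packet
            return True
        return False                # packet is delayed or dropped
```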
Fairness
When an output is overloaded, its bandwidth should be fairly shared among the flows. What is fair? The widely adopted definition is max-min fairness. The simplest definition (for me) of fair service is bit-by-bit round robin (BR).
Fairness Definitions
1. Max-min fairness:
   1) No user receives more than it requests.
   2) No other allocation scheme has a higher minimum allocation (received service divided by weight w).
   3) Condition (2) holds recursively when the minimal user is removed.
2. Generalized Processor Sharing: if S_i(t_1,t_2) is the amount of traffic of flow i served in (t_1,t_2) and flow i is backlogged throughout (t_1,t_2), then for any flow j it holds
$$ \frac{S_i(t_1,t_2)}{S_j(t_1,t_2)} \ge \frac{w_i}{w_j} $$
Examples
- Link bandwidth 10 Mbps; flow rates: 10 Mbps, 30 Mbps; flow weights: 1, 1; fair shares: 5 Mbps, 5 Mbps.
- Link bandwidth 10 Mbps; flow rates: 10 Mbps, 30 Mbps; flow weights: 4, 1; fair shares: 8 Mbps, 2 Mbps.
- Link bandwidth 10 Mbps; flow capacities: 4 Mbps, 30 Mbps; flow weights: 3, 1; fair shares: 4 Mbps, 6 Mbps.
- Homework: link bandwidth 100 Mbps; flow rates: 5, 10, 20, 50, 50, 100; flow weights: 1, 4, 4, 2, 7, 2; fair shares? (A helper for computing such allocations is sketched below.)
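The examples above can be checked with the standard progressive-filling procedure for weighted max-min fairness. This sketch is not from the lecture; the function name and tolerance are illustrative.

```python
def maxmin_shares(capacity, demands, weights):
    """Weighted max-min fair allocation by progressive filling:
    raise everyone's per-weight share together, freeze flows once
    their demand is met, and recycle the leftover capacity."""
    n = len(demands)
    share = [0.0] * n
    active = set(range(n))
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[i] for i in active)
        inc = remaining / total_w  # tentative per-weight increment
        satisfied = {i for i in active
                     if share[i] + inc * weights[i] >= demands[i]}
        if not satisfied:
            # No flow saturates: hand out the whole increment and stop.
            for i in active:
                share[i] += inc * weights[i]
            break
        # Fill only up to the first saturating flow, then repeat.
        level = min((demands[i] - share[i]) / weights[i] for i in satisfied)
        for i in list(active):
            grant = min(level * weights[i], demands[i] - share[i])
            share[i] += grant
            remaining -= grant
            if demands[i] - share[i] < 1e-9:
                active.discard(i)
    return share
```

For example, `maxmin_shares(10, [4, 30], [3, 1])` returns `[4.0, 6.0]`, matching the third example above.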
Fairness Measure
It is obviously impossible to implement bit-by-bit round robin. Any practical algorithm will not be perfectly fair; there is a trade-off between the protocol complexity and its level of fairness. The fairness measure is defined as
$$ FM(t_1,t_2) = \left| \frac{S_i(t_1,t_2)}{w_i} - \frac{S_j(t_1,t_2)}{w_j} \right| $$
where flows i and j are backlogged during (t_1,t_2), and it should be as low as possible.
Fair Queueing (FQ)
FQ is an emulation of bit-by-bit round robin, proposed by Demers, Keshav and Shenker. Introduce the virtual time R(t), the number of service rounds completed by time t, which increases at rate
$$ \frac{dR}{dt} = \frac{B}{N_{ac}(t)} $$
where B is the link bandwidth and N_{ac}(t) is the number of backlogged (active) flows. Denote by S_i^k the virtual time when packet k of flow i, arriving to the switch at t_i^k, starts service; by F_i^k the virtual time when this packet departs the switch; and by L_i^k its length. It holds
$$ S_i^k = \max\left(F_i^{k-1},\, R(t_i^k)\right), \qquad F_i^k = S_i^k + L_i^k $$
Packets are transmitted in increasing order of their virtual departure times.
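A minimal sketch of the finish-time bookkeeping above. Tracking R(t) itself, whose slope changes with the number of active flows, is left to the caller here; class and method names are illustrative.

```python
import heapq

class FairQueueing:
    """Sketch of FQ packet ordering: each arrival gets a virtual
    finish time F_i^k and packets depart in increasing F order."""

    def __init__(self):
        self.last_finish = {}  # F_i^{k-1} for each flow
        self.heap = []         # min-heap keyed by virtual finish time

    def arrive(self, flow: int, length: int, virtual_now: float):
        # S_i^k = max(F_i^{k-1}, R(t)) ;  F_i^k = S_i^k + L_i^k
        start = max(self.last_finish.get(flow, 0.0), virtual_now)
        finish = start + length
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, flow, length))

    def dequeue(self):
        # Transmit the packet with the smallest virtual finish time.
        return heapq.heappop(self.heap) if self.heap else None
```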
Examples of FQ Performance
The performance of different end-to-end flow control mechanisms passing through switches employing FQ was examined by Demers et al.:
- A generic flow control algorithm uses a sliding window like TCP, and a timeout mechanism where congestion recovery starts after 2·RTT (RTT is an exponentially averaged round-trip time).
- The flow control algorithm proposed by Jacobson and Karels, the TCP Tahoe version, comprises slow start, an adaptive window threshold, and careful estimation of the RTT.
- In the selective DECbit algorithm, switches send congestion messages to the sources using more than their fair shares.
Examples of FQ Performance
Scenario: six FTP sources (F1-F6, 1 KB packets, maximum window size 5) and two Telnet sources (T7, T8, 40 B every 5 s) send over 800 kbps links through a bottleneck gateway B with a 56 kbps output line and a 15-packet buffer.
[Table: per-flow packet counts under G/FIFO, G/FQ, JK/FIFO, JK/FQ, DEC, and selective DEC; the values are garbled in the source.]
Examples of FQ Performance
Scenario: one FTP source (F1, 1 KB packets, maximum window size 5), one Telnet source (T2, 40 B every 5 s), and one ill-behaved source (I3) sending at twice the line bit-rate, over 800 kbps links through gateway B with a 56 kbps output line and a 20-packet buffer.
[Table: packet counts for F1, T2, and I3 under G/FIFO, G/FQ, JK/FIFO, JK/FQ, DEC, and selective DEC; under FIFO the ill-behaved source captures most of the bandwidth, while FQ confines it to its fair share.]
Examples of FQ Performance
Scenario: FTP sources with 1 KB packets and maximum window size 5. Two topologies: a single source S with flow F1 over a 56 kbps line with a 20-packet buffer, and four sources S1-S4 with flows F1-F4 sharing the line.
Per-flow packet counts:
Policy   F1    F2    F3    F4
G/FIFO   2500  2500  2500  2500
G/FQ     1750  1750  1750  1750
JK/FIFO  2500  2500  2500  2500
JK/FQ    1750  1750  1750  1750
DEC      2395  2406  2377  783
Sl DEC   1750  1750  1750  1750
Packet Generalized Processor Sharing (PGPS)
Parekh and Gallager generalized FQ by introducing weights, and simplified it a little. The virtual time V(t) is updated whenever there is an event in the system, an arrival or a departure: between consecutive events at t_{j-1} and t_j it increases at rate
$$ \frac{\partial V}{\partial t} = \frac{1}{\sum_{i \in B_j} w_i} $$
where B_j is the set of flows backlogged in (t_{j-1}, t_j). The virtual arrival and departure times of packet k of flow i, whose last bit arrives at a_i^k, are
$$ S_i^k = \max\left(F_i^{k-1},\, V(a_i^k)\right), \qquad F_i^k = S_i^k + \frac{L_i^k}{w_i} $$
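As an illustration with assumed numbers: suppose flows 1 and 2 are the only backlogged flows, with weights w_1 = 3 and w_2 = 1, so V advances at rate 1/(3+1) = 1/4. If a 1000-bit packet of flow 1 arrives when V(a_1^k) = 10 and the flow's previous packet has F_1^{k-1} = 9, then S_1^k = max(9, 10) = 10 and F_1^k = 10 + 1000/3 ≈ 343.3. A 1000-bit packet of flow 2 arriving at the same instant would get F_2^k = 10 + 1000/1 = 1010, so it is scheduled after flow 1's packet, reflecting the 3:1 weight ratio.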
Properties of PGPS
Theorem: For PGPS it holds that
$$ FM(t_1,t_2) \le \frac{L_{\max}}{w_i} + \frac{L_{\max}}{w_j} $$
where L_max is the maximum packet length and flows i and j are busy in (t_1,t_2).
Deficit Round Robin
Proposed by Shreedhar and Varghese at Washington University in St. Louis. In DRR, flow i is assigned a quantum Q_i proportional to its weight w_i, and a counter c_i, initially set to 0. The number of bits t_i transmitted from queue i in one round-robin round must satisfy t_i ≤ c_i + Q_i, and the counter is then set to c_i = c_i + Q_i - t_i. If the queue empties, c_i = 0. The complexity of this algorithm is O(1), because only a couple of operations need to be performed per packet transmission, provided the algorithm serves a non-empty queue whenever it visits it (a sketch follows below).
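A minimal sketch of the DRR service loop above; the data layout and the `send` callback are illustrative choices, not from the paper.

```python
from collections import deque

def drr_schedule(queues, quanta, send):
    """One DRR run until all queues drain. `queues` maps a flow id to
    a deque of packet lengths, `quanta` maps it to its quantum Q_i,
    and send(flow, length) transmits a packet."""
    deficit = {f: 0 for f in queues}
    while any(queues.values()):
        for flow, q in queues.items():
            if not q:
                deficit[flow] = 0          # counter reset on empty queue
                continue
            deficit[flow] += quanta[flow]  # c_i = c_i + Q_i, once per round
            # Serve head-of-line packets while the deficit covers them.
            while q and q[0] <= deficit[flow]:
                deficit[flow] -= q[0]
                send(flow, q.popleft())
            if not q:
                deficit[flow] = 0          # no credit is carried past idle

# Usage: drr_schedule({1: deque([300, 700]), 2: deque([500])},
#                     {1: 500, 2: 500}, lambda f, l: print(f, l))
```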
Properties of DRR
Theorem: For DRR it holds that
$$ FM(t_1,t_2) \le 3L_{\max} $$
where L_max is the maximum packet length.
Proof: The counter satisfies c_i > 0 only if the head-of-line packet is longer than c_i, so c_i < L_max. It holds that S_i(t_1,t_2) = mQ_i + c_i(0) - c_i(m), where m is the number of round-robin rounds and (t_1,t_2) is a busy interval, and therefore |S_i(t_1,t_2) - mQ_i| < L_max.
Properties of DRR
Proof (cont.): S_i(t_1,t_2)/w_i ≤ (m-1)·Q + Q + L_max/w_i, and S_j(t_1,t_2)/w_j ≥ m'·Q - L_max/w_j, where m' is the number of round-robin rounds for flow j and Q = Q_i/w_i is the quantum per unit of weight. Because m' ≥ m-1, FM ≤ Q + L_max/w_i + L_max/w_j ≤ 3L_max, since w_i, w_j ≥ 1 and Q can be set as low as L_max while keeping complexity O(1). Namely, if Q < L_max it may happen that a queue is not served when the round-robin pointer reaches it, and the complexity of the algorithm becomes larger than O(1): each queue visit incurs a comparison, and up to N queues may be visited per packet transmission.
Properties of DRR
The maximum delay in BR is N·L_max/B. In DRR, an incoming packet might have to wait ∑_i Q_i/B, so its maximum delay is N·Q_max/B. The ratio of the DRR delay to the ideal delay is therefore Q_max/L_max = Q_max/Q_min = w_max/w_min, which may be significant if the fairness granularity must be very fine. Shreedhar and Varghese propose to serve delay-sensitive traffic with reservations, with its traffic policed.
Examples of DRR Performance
Twenty flows in a single-hop topology, each at 10 p/s, one of them misbehaving at 30 p/s; packet lengths randomly distributed over 0-4500 bits; Poisson arrivals. The misbehaving flow gets more bandwidth than the regular flows under FIFO, but only its fair share under DRR. The multihop topology example is not clear to me. It has been shown through examples that fairness is preserved when sources generate packets of different lengths or with different distributions; those examples are also unclear to me.
Packet Discard
The first schemes discard packets arriving to a full buffer, or to a buffer in which the number of queued packets exceeds some specified threshold. They are biased against bursty traffic, because the probability that a packet of a burst is discarded increases with the burst length. TCP sources whose packets are discarded slow down their rates and underutilize the network: all sources become synchronized, the network throughput oscillates, and the efficiency becomes low.
Random Early Detection (RED)
Floyd and Jacobson introduced two thresholds for the queue length in the random early detection (RED) algorithm. When the queue length exceeds the low threshold but is below the high threshold, packets are dropped with a probability that increases with the queue length; the probability is calculated so that the dropped packets are roughly equally spaced. When the queue length exceeds the high threshold, all incoming packets are dropped. The queue length is calculated as an exponentially weighted moving average, so it depends on the instantaneous queue length and on past values of the queue length.
Random Early Detection (RED)
When the average queue length is between the thresholds, an arriving packet is dropped with probability
$$ p_b = p_{\max}\,\frac{\bar q - th_{\min}}{th_{\max} - th_{\min}}, \qquad p_a = \frac{p_b}{1 - U \cdot p_b} $$
where U is the number of unmarked packets since the last marking; with this choice the number of packets between drops becomes uniformly distributed. The queue length is calculated as an exponentially weighted moving average:
$$ \bar q \leftarrow (1 - w_q)\,\bar q + w_q\, q $$
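The drop decision can be sketched as below, using the parameter names from these slides (th_min, th_max, w_q, p_max) and their values from the example slide; the class layout and the exact handling of the counter U are simplified assumptions.

```python
import random

class RedQueue:
    """Sketch of RED's per-arrival drop decision."""

    def __init__(self, th_min=5, th_max=15, w_q=0.002, p_max=1/50):
        self.th_min, self.th_max = th_min, th_max
        self.w_q, self.p_max = w_q, p_max
        self.avg = 0.0   # EWMA of the queue length
        self.count = 0   # packets admitted since the last drop (U)

    def should_drop(self, queue_len: int) -> bool:
        # avg = (1 - w_q) * avg + w_q * q
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
        if self.avg < self.th_min:
            self.count = 0
            return False
        if self.avg >= self.th_max:
            return True  # hard drop above the high threshold
        p_b = self.p_max * (self.avg - self.th_min) / (self.th_max - self.th_min)
        p_a = p_b / max(1e-9, 1 - self.count * p_b)  # spaces drops evenly
        if random.random() < p_a:
            self.count = 0
            return True
        self.count += 1
        return False
```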
Motivation for RED
Global synchronization is avoided by making a softer decision on packet dropping, i.e. by using two thresholds, and by dropping packets evenly. By evenly dropping packets between the thresholds, RED is not biased against bursty traffic. Calculating the queue length as an exponentially weighted moving average allows short-term bursts, because they do not trigger packet drops. The authors also argue that fair queueing is not required, because the flows sending more traffic lose more packets; but subsequent papers showed that the fairness is not satisfactory, because the flows are not isolated.
Examples of RED Performance
RED parameters: th_max = 15, th_min = 5, w_q = 0.002, p_max = 1/50.
- Long-lived TCP flows: 1 KB packets, delay-bandwidth products of 33-112 packets. The instantaneous queue length exceeds 30, while the average queue length stays below 20. Utilization under Drop Tail was shown to be smaller.
- Short TCP flows: window size 8 or 16, 200 packets per connection. Utilization of RED drops to 60%.
- Bursty TCP flows: one flow has window size 8, distance 16 ms, rate 45 Mbps; the other has parameters 12, 1 ms, 100 Mbps. The burstier flow suffers under the Drop Tail and Random Drop schemes.
Severe Criticism of RED
Bonald, May and Bolot severely criticize RED; they analyzed RED and Tail Drop:
- Removing the bias against bursty traffic means higher drop probabilities for UDP traffic, because TCP traffic dominates.
- The average number of consecutively dropped packets is higher for RED, and so (they claim) is the possibility of synchronization.
- They show that the jitter introduced by RED is higher.
Weighted Fair Early Packet Discard (WFEPD)
Our neighbors Racz, Fodor, and Turanyi proposed the WFEPD protocol to ensure fair throughput to different flows. The average rate of flow i is calculated as a moving average
$$ \hat r_i \leftarrow (1 - \alpha)\,\hat r_i + \alpha\,\frac{b_i}{T} $$
where b_i is the number of bytes of flow i that arrived in the last interval of length T (a sketch follows below).
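A per-flow rate estimator of this kind might look as follows; the class name and the smoothing factor alpha are assumptions for illustration, not taken from the WFEPD paper.

```python
class RateEstimator:
    """Per-flow moving-average rate estimate: every T seconds the
    bytes counted in the interval are folded into the running rate."""

    def __init__(self, interval_s: float, alpha: float = 0.25):
        self.T = interval_s
        self.alpha = alpha            # smoothing factor (assumed)
        self.rate_bps = 0.0           # current estimate of r_i
        self.bytes_in_interval = 0    # b_i accumulator

    def on_packet(self, length_bytes: int):
        self.bytes_in_interval += length_bytes

    def on_interval_end(self):
        # r_i <- (1 - alpha) * r_i + alpha * (8 * b_i / T)
        sample = 8 * self.bytes_in_interval / self.T
        self.rate_bps = (1 - self.alpha) * self.rate_bps + self.alpha * sample
        self.bytes_in_interval = 0
```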
Weighted Fair Early Packet Discard (WFEPD)
Violating, non-violating and pending sources are determined based on their rates. Flows are ordered by their normalized rates so that
$$ \frac{\hat r_1}{w_1} \ge \frac{\hat r_2}{w_2} \ge \dots \ge \frac{\hat r_N}{w_N} $$
If the first k-1 flows are violating, and E is the rate in excess, then the bandwidth of the violating flows is what remains of the output capacity after the non-violating flows are served.
Weighted Fair Early Packet Discard (WFEPD)
If k_min is the minimal k for which the inequality is satisfied, then all flows below k_min are violating, and they get shares of the remaining bandwidth in proportion to their weights.
Weighted Fair Early Packet Discard (WFEPD)
Let p_max be the maximal p for which the inequality holds, and p_min the minimal p that satisfies it with a tolerance parameter between 0 and 1. Flows from p_min to p_max are pending, and their packets are dropped with a probability that increases linearly with the flow rate.
Examples of WFEPD Performance
- WFEPD is fair to TCP flows and assigns bandwidth according to the weights, unlike FIFO with the early packet discard (EPD) protocol.
- It isolates misbehaving UDP flows that overload the output port, giving them almost the same shares as TCP flows with equal weights, while FIFO queueing gives the TCP flows only the remaining bandwidth.
- It gives equal shares to TCP flows with different round-trip times (RTT) and equal weights, while FIFO queueing gives three times more bandwidth to the flows with three times shorter RTT.
References
- A. Demers, S. Keshav, and S. Shenker, "Analysis and simulation of a fair queueing algorithm," Internetworking: Research and Experience, vol. 1, 1990.
- A. Parekh and R. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: The single-node case," IEEE/ACM Transactions on Networking, vol. 1, no. 3, June 1993.
- M. Shreedhar and G. Varghese, "Efficient fair queueing using deficit round robin," IEEE/ACM Transactions on Networking, vol. 4, no. 3, 1996.
- J. Bennett and H. Zhang, "Hierarchical packet fair queueing algorithms," IEEE/ACM Transactions on Networking, vol. 5, no. 5, October 1997.
- S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, August 1993, pp. 397-413.