1 Packet Switches with Output and Shared Buffer

2 Packet Switches with Output Buffers and Shared Buffer
Outline:
- Packet switches with output buffers or a shared buffer
- Delay guarantees
- Fairness
- Fair Queueing
- Deficit Round Robin
- Random Early Detection
- Weighted Fair Early Packet Discard

3 Quality of Service: Requirements
[Figure 5-30: How stringent the quality-of-service requirements are.]

4 Buffering
[Figure: Smoothing the output stream by buffering packets.]

5 Quality of Service
- Integrated Services: bandwidth is negotiated and the traffic is policed or shaped accordingly.
- Differentiated Services: traffic is served according to its priority class: expedited forwarding (EF), assured forwarding (AF), and best-effort forwarding (BE).

6 The Leaky Bucket Algorithm
[Figure: (a) A leaky bucket with water. (b) A leaky bucket with packets.]

7 The Leaky Bucket Algorithm
[Figure: (a) Input to a leaky bucket. (b) Output from a leaky bucket. Output from a token bucket with capacities of (c) 250 KB, (d) 500 KB, (e) 750 KB. (f) Output from a 500 KB token bucket feeding a 10-MB/sec leaky bucket.]

8 The Token Bucket Algorithm
[Figure 5-34: The token bucket algorithm. (a) Before. (b) After.]
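To make the mechanism concrete, here is a minimal token-bucket policer sketch in Python (the class and parameter names are mine, not from the slides): tokens accumulate at a fixed rate up to the bucket capacity, and a packet conforms only if enough tokens are available.

```python
import time

class TokenBucket:
    """Minimal token-bucket policer sketch: tokens accumulate at `rate`
    bytes/s up to `capacity` bytes; a packet conforms if enough tokens
    remain to cover its length."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # token fill rate, bytes per second
        self.capacity = float(capacity)  # bucket depth, bytes
        self.tokens = self.capacity      # start with a full bucket
        self.last = time.monotonic()

    def conforms(self, packet_len):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len    # consume tokens; packet conforms
            return True
        return False                     # non-conforming: drop, mark, or queue
```

As on slide 7, a bucket of capacity 500 KB lets a burst of up to 500 KB pass at line rate before the output is throttled down to the token fill rate.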

9 Admission Control
[Figure 5-34: An example of a flow specification.]

10 Packet Switches with Output Buffers

11 Packet Switches with Shared Buffer

12 Delay Guarantees
All flows must police their traffic, i.e. send no more than a certain amount of data within one policing interval; e.g. a 10 Mbps flow should send at most 10 Kb within 1 ms. If the output is not overloaded, it is guaranteed that the data passes the switch within one policing interval.

13 Fairness
When some output is overloaded, its bandwidth should be fairly shared among the competing flows. What is fair? The widely adopted definition is max-min fairness. The simplest definition of fair service is bit-by-bit round-robin (BR).

14 Fairness Definitions
1. Max-min fairness:
   1) No user receives more than it requests.
   2) No other allocation scheme has a higher minimum allocation (received service divided by weight $w$).
   3) Condition (2) recursively holds when the minimal user is removed.
2. Generalized Processor Sharing (GPS): if $S_i(t_1,t_2)$ is the amount of traffic of flow $i$ served in $(t_1,t_2)$ and flow $i$ is backlogged during $(t_1,t_2)$, then for every flow $j$ it holds that
$$\frac{S_i(t_1,t_2)}{S_j(t_1,t_2)} \ge \frac{w_i}{w_j}.$$

15 Examples
- Link bandwidth 10 Mbps; flow rates 10 Mbps and 30 Mbps; flow weights 1, 1; fair shares: 5 Mbps, 5 Mbps.
- Link bandwidth 10 Mbps; flow rates 10 Mbps and 30 Mbps; flow weights 4, 1; fair shares: 8 Mbps, 2 Mbps.
- Link bandwidth 10 Mbps; flow rates 4 Mbps and 30 Mbps; flow weights 3, 1; fair shares: 4 Mbps, 6 Mbps.
- Exercise (see the sketch below): link bandwidth 100 Mbps; flow rates 5, 10, 20, 50, 50, 100 Mbps; flow weights 1, 4, 4, 2, 7, 2; fair shares?
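These shares can be checked with a short progressive-filling sketch in Python (a hedged illustration; the function name and structure are mine): raise a common per-weight rate, capping each flow at its demand, until the link is fully allocated.

```python
def maxmin_shares(capacity, demands, weights):
    """Weighted max-min fair shares by progressive filling: saturate
    flows (share = demand) one by one, splitting the remaining capacity
    among the rest in proportion to their weights."""
    shares = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-9:
        total_w = sum(weights[i] for i in active)
        rate = remaining / total_w          # capacity per unit of weight
        # Flows demanding less than their proportional share saturate.
        saturated = [i for i in active if demands[i] <= weights[i] * rate]
        if not saturated:
            for i in active:                # nobody saturates: split and stop
                shares[i] = weights[i] * rate
            return shares
        for i in saturated:
            shares[i] = demands[i]
            remaining -= demands[i]
            active.remove(i)
    return shares

# The exercise on this slide:
print(maxmin_shares(100, [5, 10, 20, 50, 50, 100], [1, 4, 4, 2, 7, 2]))
# expected output (rounded): [5, 10, 20, ~11.8, ~41.4, ~11.8]
```

Running it on the three worked examples above reproduces the stated shares 5/5, 8/2, and 4/6 Mbps.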

16 Fairness Measure
It is obviously impossible to implement bit-by-bit round-robin. Any practical algorithm will not be perfectly fair; there is a trade-off between the protocol complexity and its level of fairness. The fairness measure is defined as
$$FM = \left|\frac{S_i(t_1,t_2)}{w_i} - \frac{S_j(t_1,t_2)}{w_j}\right|,$$
where flows $i$ and $j$ are backlogged during $(t_1,t_2)$; it should be as low as possible.

17 Fair Queueing (FQ)
FQ is an emulation of bit-by-bit round-robin, proposed by Demers, Keshav and Shenker. Introduce the virtual time $R(t)$, the number of service rounds completed until time $t$, which grows at rate
$$\frac{\partial R}{\partial t} = \frac{C}{N_{ac}(t)},$$
where $C$ is the link rate and $N_{ac}(t)$ is the number of active (backlogged) flows. Denote by $S_i^k$ the virtual time when packet $k$ of flow $i$ starts service, and by $F_i^k$ the virtual time when this packet departs the switch; its length is $L_i^k$ and it arrives at the switch at time $t_i^k$. It holds that
$$S_i^k = \max\!\left(F_i^{k-1}, R(t_i^k)\right), \qquad F_i^k = S_i^k + L_i^k.$$
Packets are transmitted in increasing order of their departure times $F_i^k$.
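A sketch of the tag bookkeeping in Python (class and method names are mine; the round number R(t) is assumed to be maintained elsewhere at rate C/N_ac(t)): each arrival gets a finish tag from the recurrence above, and the scheduler always transmits the queued packet with the smallest tag.

```python
import heapq

class FairQueueing:
    """Sketch of FQ finish-tag bookkeeping; the caller supplies the
    current round number R(t) on each arrival."""

    def __init__(self):
        self.last_finish = {}   # F_i of the previous packet of each flow
        self.heap = []          # (finish_tag, flow_id, packet_len)

    def on_arrival(self, flow_id, packet_len, round_now):
        # S_i^k = max(F_i^{k-1}, R(t)),  F_i^k = S_i^k + L_i^k
        start = max(self.last_finish.get(flow_id, 0.0), round_now)
        finish = start + packet_len
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, flow_id, packet_len))

    def next_packet(self):
        # Transmit packets in increasing order of their finish tags.
        return heapq.heappop(self.heap) if self.heap else None
```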

18 Examples of FQ Performance
The performance of different end-to-end flow-control mechanisms passing through switches employing FQ was examined by Demers et al.:
- The generic flow-control algorithm uses a sliding window, like TCP, and a timeout mechanism where congestion recovery starts after 2 RTT (RTT is an exponentially averaged round-trip time).
- The flow-control algorithm proposed by Jacobson and Karels (the TCP Tahoe version) comprises slow start, an adaptive window threshold, and a more elaborate estimation of RTT.
- In the selective DECbit algorithm, switches send congestion messages to the sources using more than their fair share.

19 Examples of FQ Performance
Scenario: six FTP sources (F1-F6, 1 KB packets, maximum window size 5) and two Telnet sources (T7, T8, 40 B every 5 s) feed, over 800 kbps access links, a bottleneck with a 56 kbps output line and a 15-packet buffer.
[Table: per-flow throughput of the FTP and Telnet flows under the G/FIFO, G/FQ, JK/FIFO, JK/FQ, DEC, and selective DEC policies.]

20 Examples of FQ Performance
Scenario: one FTP source (F1, 1 KB packets, maximum window size 5), one Telnet source (T2, 40 B every 5 s), and one ill-behaved source (I3) sending at twice the line bit-rate feed, over 800 kbps access links, a bottleneck with a 56 kbps output line and a 20-packet buffer.
[Table: per-flow throughput under the G/FIFO, G/FQ, JK/FIFO, JK/FQ, DEC, and selective DEC policies.]

21 Examples of FQ Performance
Scenario: four FTP sources (F1-F4, 1 KB packets, maximum window size 5) cross a multihop chain of switches S1-S4 with 56 kbps links and 20-packet buffers.
[Table: per-flow throughput under the G/FIFO, G/FQ, JK/FIFO, JK/FQ, DEC, and selective DEC policies; under FQ all four flows receive equal throughput (1750), while FIFO splits the bandwidth unevenly (2500 vs. 1000).]

22 Packet Generalized Processor Sharing (PGPS)
Parekh and Gallager generalized FQ by introducing weights and simplified it slightly. Virtual time is updated whenever there is an event in the system (an arrival or a departure): if $B_j$ is the set of backlogged flows between events $j-1$ and $j$, then for $0 \le \tau \le t_j - t_{j-1}$
$$V(t_{j-1}+\tau) = V(t_{j-1}) + \frac{\tau \, r}{\sum_{i \in B_j} w_i}.$$
The virtual arrival and departure times of packet $k$ of flow $i$ are calculated as
$$S_i^k = \max\!\left(F_i^{k-1}, V(t_i^k)\right), \qquad F_i^k = S_i^k + \frac{L_i^k}{w_i}.$$
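The virtual clock is piecewise linear between events, so it can be tracked with a small helper function (a sketch; the function name and arguments are illustrative, not from Parekh and Gallager):

```python
def advance_virtual_time(v_prev, t_prev, t_now, backlogged_weights, r):
    """Advance PGPS virtual time from event time t_prev to t_now:
    V grows at rate r / (sum of weights of currently backlogged flows);
    here the clock is simply frozen when nothing is backlogged."""
    total_w = sum(backlogged_weights)
    if total_w == 0:
        return v_prev                        # idle system: clock frozen
    return v_prev + (t_now - t_prev) * r / total_w

# Tags for packet k of flow i then follow the slide's recurrence:
# S_i^k = max(F_i^{k-1}, V(t_i^k));  F_i^k = S_i^k + L_i^k / w_i
```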

23 Properties of PGPS
Theorem: For every packet $p$ it holds that
$$\hat{F}_p \le F_p + \frac{L_{\max}}{r},$$
where $r$ is the link rate and $F_p$ and $\hat{F}_p$ are the departure times of packet $p$ under ideal GPS and under PGPS, respectively.
Proof: Let $t_k$ and $u_k$ be the departure times of packet $p_k$ under PGPS and GPS, and $a_k$ its arrival time, where $p_k$ is the $k$-th packet served by PGPS. Let $m < k$ be the largest index with $u_m \ge u_k$. Packet $p_m$ begins transmission at $t_m - L_m/r$, and packets $p_{m+1},\dots,p_k$ must arrive after that instant (otherwise PGPS, which serves queued packets in increasing order of GPS finish times, would have served one of them before $p_m$), so $\min\{a_{m+1},\dots,a_k\} > t_m - L_m/r$. GPS must serve all of $p_{m+1},\dots,p_k$ by time $u_k$ at a rate of at most $r$, so
$$u_k \ge \frac{L_k + \dots + L_{m+1}}{r} + t_m - \frac{L_m}{r} = t_k - \frac{L_m}{r},$$
and therefore $t_k \le u_k + L_m/r \le u_k + L_{\max}/r$.

24 Properties of PGPS
Theorem: For PGPS it holds that
$$S_i(0,\tau) - \hat{S}_i(0,\tau) \le L_{\max},$$
where $S_i$ and $\hat{S}_i$ are the amounts of flow $i$'s traffic served under GPS and PGPS, and $L_{\max}$ is the maximum packet length. The complexity of the algorithm is $O(N)$, because up to $N$ packets may arrive within one packet transmission time, each requiring an update of the virtual time.

25 Deficit Round Robin
Proposed by Shreedhar and Varghese at Washington University in St. Louis. In DRR, flow $i$ is assigned a quantum $Q_i$ proportional to its weight $w_i$, and a deficit counter $c_i$, initially set to 0. The number of bits $t_i$ that flow $i$ transmits in a round-robin round must satisfy $t_i < c_i + Q_i$, and the counter is then set to the new value $c_i = c_i + Q_i - t_i$. If the queue is emptied, $c_i = 0$. The complexity of this algorithm is $O(1)$, because only a couple of operations must be performed per packet transmission time, provided the algorithm serves a non-empty queue whenever it visits a queue. A minimal sketch follows.
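A minimal sketch of one DRR round in Python (the queue representation and names are mine): each visit adds the quantum to the flow's deficit counter, transmits head packets while they fit, and zeroes the counter when the queue empties.

```python
from collections import deque

def drr_round(queues, quanta, deficits, transmit):
    """One round-robin round of Deficit Round Robin.
    queues[i]  : deque of packet lengths (bits) for flow i
    quanta[i]  : Q_i, proportional to the flow's weight w_i
    deficits[i]: c_i, the running deficit counter
    transmit   : callback taking (flow_id, packet_len)
    """
    for i, q in enumerate(queues):
        if not q:
            continue                     # skip empty queues in this sketch
        deficits[i] += quanta[i]         # c_i = c_i + Q_i
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()
            deficits[i] -= pkt           # spend the deficit on this packet
            transmit(i, pkt)
        if not q:
            deficits[i] = 0              # an emptied queue forfeits its deficit
```

A production scheduler would keep a separate list of active flows so that empty queues are never visited; that bookkeeping is what makes the per-packet cost O(1).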

26 Properties of DRR
Theorem: For DRR it holds that $FM \le 3L_{\max}$, where $L_{\max}$ is the maximum packet length.
Proof: The counter satisfies $0 \le c_i < L_{\max}$, since service of flow $i$ stops only when its head packet is longer than $c_i$. It holds that $S_i(t_1,t_2) = mQ_i + c_i(0) - c_i(m)$, where $m$ is the number of round-robin rounds and $(t_1,t_2)$ is the busy interval, and therefore $|S_i(t_1,t_2) - mQ_i| < L_{\max}$.

27 Properties of DRR
Proof (cont.): $S_i(t_1,t_2)/w_i \le (m-1)Q + Q + L_{\max}/w_i$, and $S_j(t_1,t_2)/w_j \ge m'Q - L_{\max}/w_j$, where $m'$ is the number of round-robin rounds completed for flow $j$ and $Q_i = w_i Q$. Because $m' \ge m-1$,
$$FM \le Q + \frac{L_{\max}}{w_i} + \frac{L_{\max}}{w_j} \le 3L_{\max},$$
because $w_i, w_j \ge 1$ and $Q = L_{\max}$ may be chosen; $Q \ge L_{\max}$ is required for the protocol to have complexity $O(1)$. Namely, if $Q < L_{\max}$, it may happen that a queue is not served when the round-robin pointer points to it, and the complexity of the algorithm becomes larger than $O(1)$: each queue visit incurs a comparison, and up to $N$ queues may be visited per packet transmission.

28 Properties of DRR
The maximum delay in BR is $NL_{\max}/B$. In DRR, an incoming packet might have to wait $\sum_i Q_i/B$, so its maximum delay is on the order of $NQ_{\max}/B$. The ratio of the DRR delay to the ideal delay is therefore $Q_{\max}/L_{\max} = Q_{\max}/Q_{\min} = w_{\max}/w_{\min}$ (taking $Q_{\min} = L_{\max}$), which may be significant if the fairness granularity must be very fine. Shreedhar and Varghese therefore propose to serve delay-sensitive traffic separately, with reserved and policed bandwidth.

29 Examples of DRR Performance
Twenty flows in a single-hop topology generate 10 packets/s each, while one misbehaving flow sends 30 packets/s; packet lengths are randomly distributed over 0-4500 bits and arrivals are Poisson. The misbehaving flow gets more bandwidth than the regular flows under FIFO, but only its fair share under DRR. It has also been shown through examples that fairness is preserved in a multihop topology, and when sources generate packets of different lengths or with different distributions, though these examples are less clearly presented.

30 Packet Discard
The first discard schemes drop packets that arrive to a full buffer, or to a buffer whose number of queued packets exceeds some specified threshold. They are biased against bursty traffic, because the probability that a packet of a burst is discarded increases with the burst length. TCP sources whose packets are discarded slow down their rates and underutilize the network; if all sources become synchronized, the network throughput oscillates and the efficiency becomes low.

31 Random Early Detection (RED)
Floyd and Jacobson introduced two thresholds for the queue length in the random early detection (RED) algorithm. When the queue length exceeds the low threshold but is below the high threshold, packets are dropped with a probability that increases with the queue length; the probability is calculated so that the dropped packets are roughly equally spaced. When the queue length exceeds the high threshold, all incoming packets are dropped. The queue length used here is calculated as an exponentially weighted moving average, so it depends on the instantaneous queue length and on past values of the queue length.
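A sketch of the RED drop decision in Python (parameter names follow common RED descriptions; the constants are illustrative, not from the slide): the average queue length is an exponentially weighted moving average, and between the two thresholds the drop probability grows linearly and is adjusted so that drops are spread out.

```python
import random

class Red:
    """Sketch of RED's drop decision between min_th and max_th."""

    def __init__(self, min_th, max_th, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p = max_p          # drop probability reached at max_th
        self.w = weight             # EWMA weight for the average queue
        self.avg = 0.0              # averaged queue length
        self.count = 0              # packets accepted since the last drop

    def should_drop(self, queue_len):
        # avg <- (1 - w) * avg + w * q : smooths out short bursts.
        self.avg = (1 - self.w) * self.avg + self.w * queue_len
        if self.avg < self.min_th:
            self.count = 0
            return False                       # accept
        if self.avg >= self.max_th:
            return True                        # drop all arrivals
        # Linear base probability, then adjusted so drops are spaced out.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        self.count += 1
        p = p / max(1e-9, 1 - self.count * p)
        if random.random() < p:
            self.count = 0
            return True
        return False
```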

32 Motivation for RED
Global synchronization is avoided by making a softer decision on packet dropping, i.e. by using two thresholds and by evenly spacing the packet drops between them. Calculating the queue length as an exponentially weighted moving average allows short-term bursts, because they do not trigger packet drops. The authors also argue that fair queueing is not required, because flows sending more traffic lose more packets; however, subsequent papers showed that the resulting fairness is not satisfactory, because the flows are not isolated from each other.

33 Severe Criticism of RED
Bonald, May and Bolot severely criticize RED; they analyzed RED and Tail Drop:
- Removing the bias against bursty traffic means higher drop probabilities for UDP traffic, because TCP traffic dominates.
- The average number of consecutively dropped packets is higher for RED, and so, they claim, is the possibility of synchronization.
- They show that the jitter introduced by RED is higher.

34 Weighted Fair Early Packet Discard (WFEPD)
Racz, Fodor, and Turanyi proposed the WFEPD protocol to ensure fair throughput to different flows. The average rate $\rho_i$ of flow $i$ is calculated as a moving average from $b_i$, the number of bytes arrived in the last interval of length $\tau$.
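One possible reading of that rate measurement is an exponentially weighted moving average, sketched in Python below; the exact update rule and the value of α are assumptions, since the slide only names b_i and τ.

```python
def update_rate(rho, bytes_in_interval, tau, alpha=0.25):
    """EWMA estimate of a flow's rate (assumed form):
    rho <- (1 - alpha) * rho + alpha * (b_i / tau),
    where b_i is the number of bytes that arrived in the
    last measurement interval of length tau seconds."""
    return (1 - alpha) * rho + alpha * (bytes_in_interval / tau)
```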

35 Weighted Fair Early Packet Discard (WFEPD)
Violating, non-violating, and pending sources are determined based on their measured rates. Flows are ordered by decreasing normalized rate, $\rho_1/w_1 \ge \rho_2/w_2 \ge \dots$ If the first $k-1$ flows are violating and $E$ is the rate in excess of the output capacity, then the bandwidth of the violating flows is reduced so that the excess $E$ is absorbed in proportion to their weights.

36 Weighted Fair Early Packet Discard (WFEPD)
If $k_{\min}$ is the minimal $k$ for which the above inequality is satisfied, then all flows with index below $k_{\min}$ are violating, and each of them is allotted a bandwidth proportional to its weight.

37 Weighted Fair Early Packet Discard (WFEPD)
Let $p_{\min}$ be the largest $p$ for which the lower-threshold inequality holds, and $p_{\max}$ the minimal integer that satisfies the corresponding upper-threshold inequality; the threshold parameter lies between 0 and 1. Flows from $p_{\min}$ to $p_{\max}$ are pending, and their packets are dropped with a probability that increases linearly with the flow rate.

38 Examples of WFEPD Performance
- WFEPD is fair to TCP flows, giving them bandwidth according to their weights, unlike the FIFO with early packet discard (EPD) baseline.
- It isolates misbehaving UDP flows that overload the output port, giving them almost the same shares as TCP flows of equal weight, whereas FIFO queueing gives only the remaining bandwidth to the TCP flows.
- It gives equal shares to TCP flows with different round-trip times (RTTs) and equal weights, while FIFO queueing gives three times more bandwidth to the flows with three times shorter RTT.

39 References
A. Demers, S. Keshav, and S. Shenker, "Analysis and simulation of a fair queueing algorithm," Internetworking: Research and Experience, vol. 1, 1990.
A. Parekh and R. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: The single-node case," IEEE/ACM Transactions on Networking, vol. 1, no. 3, June 1993.
M. Shreedhar and G. Varghese, "Efficient fair queueing using deficit round robin," IEEE/ACM Transactions on Networking, vol. 4, no. 3, 1996.
J. Bennett and H. Zhang, "Hierarchical packet fair queueing algorithms," IEEE/ACM Transactions on Networking, vol. 5, no. 5, October 1997.

40 References
S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, August 1993, pp. 397-413.
T. Bonald, M. May, and J.-C. Bolot, "Analytic evaluation of RED performance," INFOCOM 2000, March 2000, pp. 1415-1424.
A. Racz, G. Fodor, and Z. Turanyi, "Weighted fair early packet discard at an ATM switch output port," INFOCOM 1999, pp. 1160-1168.

