CUBIC: A New TCP-Friendly High-Speed TCP Variant (February 2005)
Injong Rhee, Lisong Xu (Member, IEEE)
January 30, 2005, v0.2
Outline
1. Motivation
2. Introduction
3. Performance Evaluation
4. Conclusion
1. Motivation
In the last few years, many TCP variants have been proposed to address the under-utilization problem caused by the slow growth of the standard TCP congestion window (e.g., FAST, HSTCP, STCP, HTCP, SQRT, Westwood, BIC).
While the window growth of these new protocols is scalable, their fairness remains a major challenge (TCP friendliness, RTT fairness, and inter/intra-protocol fairness).
The crux of the problem is to find a "suitable" growth function.
2. Introduction: CUBIC, a New TCP Variant
CUBIC is an enhanced version of BIC: it simplifies the BIC window control using a cubic function and improves BIC's TCP friendliness and RTT fairness.
The window growth function of CUBIC is based on real time (the elapsed time since the last loss event), so it is independent of RTT. Real-time-based window growth was first proposed by [Shorten and Leith, May 2003 Yale workshop], and also later in HTCP.
Because window growth is independent of RTT, this design improves RTT fairness and also TCP friendliness under low delays, as in HTCP and SQRT.
2. Introduction: BIC function
BIC performs very well overall in the evaluation of advanced TCP stacks on fast long-distance production networks by SLAC (Stanford Linear Accelerator Center).
However, the growth function of BIC (and also of HSTCP and STCP) can still be aggressive toward standard TCP, especially under short RTTs or on low-speed networks.
BIC is currently the default TCP stack in Red Hat Linux 2.6, and Microsoft and Sun are considering including it in their OS stacks. A sketch of BIC's growth rule follows below.
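To make the contrast with CUBIC concrete, here is a minimal sketch of BIC's binary-search window growth, one update per loss-free RTT. The parameter value S_MAX and the max-probing rule are illustrative assumptions, not the actual Linux implementation.

```python
# Sketch of BIC window growth (one update per loss-free RTT); illustrative only.
S_MAX = 32.0  # assumed cap on the per-RTT window increment

def bic_update(cwnd, w_max):
    """Return the new window, given the window w_max at the last loss."""
    if cwnd < w_max:
        target = (cwnd + w_max) / 2.0          # binary search toward w_max
        step = min(target - cwnd, S_MAX)       # additive increase when far away
    else:
        step = min(cwnd - w_max + 1.0, S_MAX)  # max probing beyond w_max (simplified)
    return cwnd + step
```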
2. Introduction: CUBIC function
The CUBIC window growth function is

W(t) = C (t - K)^3 + W_max, where K = (W_max β / C)^{1/3},

C is a scaling factor, t is the elapsed time since the last window reduction, W_max is the window size just before that reduction, and β is a constant multiplicative decrease factor (the window drops to (1 - β) W_max on a loss).
[Plot of the cubic curve: the window accelerates when far below W_max, slows down as it approaches W_max (the plateau around t = K), and accelerates again beyond it.]
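A minimal sketch of the growth function above; the default values C = 0.4 and β = 0.2 are the paper's suggested parameters (assumed here), and the function name is ours.

```python
def cubic_window(t, w_max, C=0.4, beta=0.2):
    """CUBIC window size t seconds after the last window reduction."""
    K = (w_max * beta / C) ** (1.0 / 3.0)  # time to climb back to w_max
    return C * (t - K) ** 3 + w_max
```

At t = 0 this gives (1 - β) W_max, the post-loss window, and at t = K it returns exactly to W_max.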
2. Introduction: CUBIC's New TCP Mode
On short-RTT networks, the window growth of CUBIC is slower than that of standard TCP, since CUBIC is independent of RTT. We therefore emulate the TCP window algorithm after a packet loss event.
From the average sending rate of AIMD (standard TCP), the size of the TCP window after time t from a window reduction is

W_tcp(t) = W_max (1 - β) + (3β / (2 - β)) (t / RTT).

If W_tcp(t) > W(t), the window size is set to W_tcp(t); otherwise, the window size is W(t).
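A self-contained sketch of this TCP mode, combining the two formulas above; parameter defaults are the same assumed values as before.

```python
def cubic_with_tcp_mode(t, w_max, rtt, C=0.4, beta=0.2):
    """Effective CUBIC window t seconds after a loss on a path with the given RTT."""
    K = (w_max * beta / C) ** (1.0 / 3.0)
    w_cubic = C * (t - K) ** 3 + w_max                                # cubic curve
    w_tcp = w_max * (1 - beta) + (3 * beta / (2 - beta)) * (t / rtt)  # emulated AIMD window
    return max(w_cubic, w_tcp)  # TCP mode: never grow slower than the TCP estimate
```

On short-RTT paths w_tcp grows quickly (t/rtt is large), so the max() keeps CUBIC at least as fast as standard TCP there.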
3.1 Testbed (Dummynet) Setup
[Testbed diagram] Sender 1, Sender 2, and Background Traffic Generator 1 connect over 1 Gbps links to Router 1 (FreeBSD/Dummynet); Router 2 connects to the Receiver and Background Traffic Generator 2 (Linux). The bottleneck between the routers is 800 Mbps.
Sender 1 runs a high-speed TCP variant (e.g., CUBIC, BIC, FAST, HSTCP, STCP); Sender 2 runs a high-speed TCP variant or TCP SACK.
The RTT is set per path between the senders and the Receiver; RTTs for background traffic follow an exponential distribution (next slide). Background traffic generation is described on the next slide.
3.1 Testbed Setup: Background Traffic Generation
TCP flow RTT: exponential distribution. The mean is set to 66 ms (one-way delay), which makes the CDF very similar to the CDF of the RTT samples reported in "Variability in TCP Round-trip Times" by J. Aikat, J. Kaur, F. D. Smith, and K. Jeffay, ACM SIGCOMM Internet Measurement Conference, 2003.
Inter-arrival time between two successive TCP connections: exponential distribution (as observed by Floyd and Paxson). This is the parameter we use to control the background traffic load.
TCP flow duration: lognormal (body) and Pareto (tail) distributions, using the parameters from "Generating Representative Web Workloads for Network and Server Performance Evaluation" by Paul Barford and Mark Crovella, ACM SIGMETRICS 1998. A sampling sketch follows below.
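A sketch of sampling from this traffic model, assuming NumPy. The lognormal and Pareto parameters and the body/tail split below are placeholders, not the exact values taken from Barford and Crovella.

```python
import numpy as np

rng = np.random.default_rng(1)

def background_flows(n, mean_interarrival_s, mean_one_way_delay_s=0.066):
    """Sample n background flows as (start_time, rtt, duration) tuples."""
    flows = []
    t = 0.0
    for _ in range(n):
        t += rng.exponential(mean_interarrival_s)        # Poisson flow arrivals
        rtt = 2 * rng.exponential(mean_one_way_delay_s)  # exponential one-way delay
        if rng.random() < 0.9:                           # body/tail split (placeholder)
            dur = rng.lognormal(mean=0.0, sigma=1.0)     # lognormal body (placeholder params)
        else:
            dur = rng.pareto(1.2) + 1.0                  # Pareto heavy tail (placeholder params)
        flows.append((t, rtt, dur))
    return flows
```

The mean inter-arrival time is the knob that controls the offered background load.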
3.2 TCP Friendliness
NS simulation: RTT 10 ms, bandwidth 20 Mbps to 1 Gbps.
3.2 TCP Friendliness (cont.)
NS simulation: RTT 100 ms, bandwidth 20 Mbps to 1 Gbps.
3.2 TCP Friendliness (cont.)
TCP friendliness on short RTT (5 ms). [Plot: link utilization (%) vs. background traffic, 80 to 200 Mbps.]
Dummynet testbed: RTT 5 ms, 800 Mbps, router buffer 100% of the BDP, 80 to 200 Mbps background traffic.
3.2 TCP Friendliness (cont.)
TCP friendliness on short RTT (10 ms). [Plot: link utilization (%) vs. background traffic, 80 to 200 Mbps.]
Dummynet testbed: RTT 10 ms, 800 Mbps, router buffer 100% of the BDP, 80 to 200 Mbps background traffic.
3.2 TCP Friendliness (cont.)
TCP friendliness on long RTT (100 ms). [Plot: link utilization (%) vs. background traffic, 80 to 200 Mbps.]
Dummynet testbed: RTT 100 ms, 800 Mbps, router buffer 100% of the BDP, 80 to 200 Mbps background traffic.
3.2 TCP Friendliness (cont.)
TCP friendliness on long RTT (200 ms). [Plot: link utilization (%) vs. background traffic, 80 to 200 Mbps.]
Dummynet testbed: RTT 200 ms, 800 Mbps, router buffer 100% of the BDP, 80 to 200 Mbps background traffic.
3.3 RTT Fairness
Dummynet testbed: RTTs of 40, 120, and 240 ms; 800 Mbps; router buffer 50% of the BDP; 200 Mbps background traffic.
3.4 Stability: NS Simulation Setup
NS simulation: high-speed TCP variants on 220 ms RTT, TCP SACK on 20 ms RTT, 2.5 Gbps link, router buffer 5% of the BDP.
3.4 Stability: NS Simulation Result (cont.)
3.4 Stability: NS Simulation Result (cont.)
NS simulation: high-speed TCP variants on 220 ms RTT, TCP SACK on 20 ms RTT, 2.5 Gbps link, router buffer 5% of the BDP.
* HTCP has some stability issues (this needs to be confirmed with the original authors of HTCP).
3.4 Stability: NS Simulation Result (cont.)
Coefficient of variation of throughput in the stability test (NS simulation); see the sketch of the metric below.
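A sketch of the stability metric, assuming it is the coefficient of variation (standard deviation divided by mean) of a flow's throughput samples taken over fixed intervals; lower values mean a more stable sending rate.

```python
def coefficient_of_variation(throughput_samples):
    """CoV of a sequence of per-interval throughput measurements."""
    n = len(throughput_samples)
    mean = sum(throughput_samples) / n
    var = sum((x - mean) ** 2 for x in throughput_samples) / n
    return (var ** 0.5) / mean  # std / mean
```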
3.4 Stability: Dummynet Testbed Setup (cont.)
Dummynet testbed: high-speed TCP variant flows on 200 ms RTT, long-lived TCP SACK flows on 20 ms RTT, 800 Mbps drop-tail bottleneck, router buffer 100% of the BDP, 200 Mbps background traffic.
[Testbed diagram] Sender 1 (high-speed TCP variant flows) and Sender 2 (long-lived TCP flows) connect over 1000 Mbps drop-tail links through Router 1 (FreeBSD) to Router 2 and the Receiver (Linux). Diagram labels: RTT 5 ms for both senders; RTT 95 ms for Sender 1; RTT 5 ms for Sender 2; background-traffic RTTs follow an exponential distribution.
3.4 Stability: Dummynet Testbed Result (cont.)
[Throughput plots for CUBIC, BIC, STCP, and HSTCP.]
3.4 Stability: Dummynet Testbed Result (cont.)
[Throughput plot for FAST.]
* The throughput of FAST flows was lower than that of TCP, as in the TCP friendliness experiments, due to a small alpha parameter value.
3.5 Evaluation Summary
CUBIC and HTCP showed good TCP friendliness, especially on short-RTT networks; FAST needs alpha parameter tuning.
CUBIC and FAST showed good RTT fairness under both short and long RTT paths.
CUBIC showed the best stability.
4. Discussion
How to define TCP friendliness.
How to measure stability and fairness.
The role of background traffic: what is a realistic traffic mix?
5. Conclusion
A real-time-based protocol seems a good idea.
CUBIC seems a good simplification of BIC, but is there any other choice for the window growth function? What makes a cubic function better than others? Would any odd-order function do as well?
References
[1] H. Bullot, R. Les Cottrell, and R. Hughes-Jones, "Evaluation of Advanced TCP Stacks on Fast Long-Distance Production Networks," Second International Workshop on Protocols for Fast Long-Distance Networks, Argonne, Illinois, USA, February 16-17, 2004.
[2] C. Jin, D. X. Wei, and S. H. Low, "FAST TCP: Motivation, Architecture, Algorithms, Performance," in Proceedings of IEEE INFOCOM 2004, March 2004.
[3] S. Floyd, "HighSpeed TCP for Large Congestion Windows," Internet Draft, draft-floyd-tcp-highspeed-01.txt, 2003.
[4] T. Kelly, "Scalable TCP: Improving Performance in Highspeed Wide Area Networks," ACM SIGCOMM Computer Communication Review, vol. 33, no. 2, pp. 83-91, April 2003.
[5] R. Shorten and D. Leith, "H-TCP: TCP for High-Speed and Long-Distance Networks," Second International Workshop on Protocols for Fast Long-Distance Networks, Argonne, Illinois, USA, February 16-17, 2004.
[6] T. Hatano, M. Fukuhara, H. Shigeno, and K. Okada, "TCP-friendly SQRT TCP for High Speed Networks," in Proceedings of APSITT 2003, pp. 455-460, November 2003.
[7] C. Casetti, M. Gerla, S. Mascolo, M. Y. Sanadidi, and R. Wang, "TCP Westwood: Bandwidth Estimation for Enhanced Transport over Wireless Links," in Proceedings of ACM MobiCom 2001, pp. 287-297, Rome, Italy, July 16-21, 2001.
[8] L. Xu, K. Harfoush, and I. Rhee, "Binary Increase Congestion Control (BIC) for Fast Long-Distance Networks," in Proceedings of IEEE INFOCOM 2004, March 2004.