Evaluation of Advanced TCP stacks on Fast Long-Distance production Networks
Prepared by Les Cottrell & Hadrien Bullot, SLAC & EPFL, for the UltraNet Workshop, FNAL, November 2003
www.slac.stanford.edu/grp/scs/net/talk03/ultranet-nov03.ppt
Partially funded by the DOE/MICS Field Work Proposal on Internet End-to-end Performance Monitoring (IEPM); also supported by IUPAP

Project goals
- Test new advanced TCP stacks and see how they perform on short and long-distance real production WAN links
- Compare & contrast: ease of configuration, throughput, convergence, fairness, stability, etc., for different RTTs, windows and txqueuelen settings
- Recommend "optimum" stacks for data-intensive science (BaBar) transfers using bbftp, bbcp and GridFTP
- Validate simulator & emulator findings & provide feedback
Problems with real networks:
- They do not stay still: routes change, links are upgraded, pings are blocked, hosts go down, host configurations change, routers are misconfigured and/or crash
- Must avoid impacting production traffic; need agreement of remote administrators and/or network folks
- Many of the stacks are alpha or beta releases living in the kernel: the kernel crashes, the developer wants to add a new feature, the measurer wants stability or has to throw out previous measurements
- Some stacks were not available in time (Westwood+, GridDT)

Protocol selection
- TCP only: no rate-based transport protocols (e.g. SABUL, UDT, RBUDP) for the moment; no iSCSI or FC over IP
- Sender-side modifications only: the HENP model is a few big senders and lots of smaller receivers, so this simplifies deployment (only a few hosts at a few sending sites); no DRS
- Runs on production networks: no router modifications (XCP/ECN), no jumbo frames

Protocols Evaluated
- Linux 2.4 New Reno with SACK: single and parallel streams (P-TCP)
- Scalable TCP (S-TCP)
- Fast TCP
- HighSpeed TCP (HS-TCP)
- HighSpeed TCP Low Priority (HSTCP-LP)
- Binary Increase Control TCP (Bic-TCP)
- Hamilton TCP (H-TCP)

Reno single stream
- Low performance on fast long-distance paths
- AIMD: add a=1 packet to cwnd per RTT; decrease cwnd by a factor b=0.5 on congestion
[Figure: SLAC to Florida (RTT ~70 ms), Reno single stream; throughput (Mbps) and RTT (ms) vs. time over 1200 s]
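As a minimal illustration of the AIMD rule above (a sketch of standard Reno behaviour, not the actual Linux 2.4 kernel code):

```python
def aimd_update(cwnd: float, rtt_elapsed: bool, loss: bool,
                a: float = 1.0, b: float = 0.5) -> float:
    """Illustrative AIMD congestion-avoidance step (cwnd in packets).

    a: additive increase per RTT; b: multiplicative decrease factor on loss.
    """
    if loss:
        return max(cwnd * b, 1.0)   # multiplicative decrease on congestion
    if rtt_elapsed:
        return cwnd + a             # additive increase, once per RTT
    return cwnd
```

With a=1 and b=0.5, recovering from a single loss on a path holding thousands of packets in flight takes many RTTs, which is why single-stream Reno struggles on fast long-distance paths.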

Measurements
- 20-minute tests, long enough to see stable patterns
- Iperf reports incremental and cumulative throughputs at 5-second intervals
- Ping interval about 100 ms
- At the sender: 1 host for iperf/TCP, a 2nd for cross-traffic (UDP or TCP), a 3rd for ping
- At the receiver: 1 machine for ping (echo) and TCP, a 2nd for cross-traffic
[Diagram: TCP sender (TCPs) and cross-traffic sender (Xs) at SLAC, TCP receiver (TCPr) and cross-traffic receiver (Xr) at the remote site, sharing a bottleneck; TCP, UDP/TCP cross-traffic and ICMP/ping flows marked]
- Ping traffic goes to TCPr when also running cross-traffic; otherwise it goes to Xr
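The slides do not give the exact command lines; a hedged sketch of how one such run might be driven (iperf 2 client with a 5 s reporting interval and a 20-minute duration, plus a ~100 ms ping) is below. The host name, window size and log file names are illustrative assumptions, not values from the talk:

```python
import subprocess

REMOTE = "receiver.example.org"   # hypothetical remote test host
WINDOW = "8M"                      # socket buffer / window requested from iperf
DURATION = 1200                    # 20-minute test, in seconds

# iperf 2 TCP client: -t duration (s), -i report interval (s), -w window size
iperf_cmd = ["iperf", "-c", REMOTE, "-t", str(DURATION), "-i", "5", "-w", WINDOW]

# ping every ~100 ms to track RTT during the transfer (sub-second intervals
# may require extra privileges on some systems)
ping_cmd = ["ping", "-i", "0.1", "-w", str(DURATION), REMOTE]

ping_proc = subprocess.Popen(ping_cmd, stdout=open("rtt.log", "w"))
subprocess.run(iperf_cmd, stdout=open("iperf.log", "w"))
ping_proc.terminate()
```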

Networks
- 3 main network paths:
  - Short distance: SLAC-Caltech (RTT ~10 ms)
  - Medium distance: UFL and DataTAG Chicago (RTT ~70 ms)
  - Long distance: CERN and University of Manchester (RTT ~170 ms)
- Tests run during nights and weekends to avoid unacceptable impact on production traffic

Windows
- Set large maximum windows (typically 32 MB) on all hosts
- Used 3 different windows with iperf:
  - Small window size, a factor of 2-4 below optimal
  - Roughly optimal window size (~ bandwidth-delay product, BDP)
  - Oversized window
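For reference, the "roughly optimal" window is the bandwidth-delay product of the path. The worked example below assumes a ~1 Gbit/s bottleneck and the ~70 ms medium-distance RTT from the previous slide (the bottleneck rate is my assumption); the result is consistent with the 8 MB window used on the UFl path:

```latex
% BDP worked example (assumed ~1 Gbit/s bottleneck, 70 ms RTT)
\mathrm{BDP} = B \times \mathrm{RTT}
             = 1\,\mathrm{Gbit/s} \times 0.07\,\mathrm{s}
             = 7 \times 10^{7}\,\mathrm{bits}
             \approx 8.75\,\mathrm{MB}
```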

RTT
- Only P-TCP appears to dramatically affect the RTT, e.g. it increases the RTT by ~200 ms (a factor of 20 for short distances)
[Figure: two panels, SLAC-Caltech P-TCP 16 streams and SLAC-Caltech FAST TCP 1 stream; throughput (Mbps) and RTT (ms) vs. time over 1200 s]

Throughput
- Throughput SLAC to remote (Mbps); windows too small (worse for longer distances)
- Legend: cells colour-coded on the slide as poor / reasonable / best performance
- Columns: Reno 16, Scalable, Bic, Fast, HSTCP-LP, H, HS, Reno 1, Avg
  Caltech 256 KB:  395 226 238 233 236 225 239 253
  UFl 1 MB:        451 110 133 136 141 140 129 172
  Caltech 512 KB:  413 377 372 408 374 339 307 362 369
  UFl 4 MB:        428 437 387 340 383 348 431 294 381
  Caltech 1 MB:    434 429 382 284 384
  UFl 8 MB:        442 404 357 351 278
  Average:         427 327 319 313 312 298 295 279 321
  Rank:            1 2 4
- Reno with 1 stream has problems on the medium-distance link (70 ms)

Stability
- Definition: standard deviation of the throughput normalized by the average throughput
- At short RTT (10 ms) stability is usually good (<= 12%)
- At medium RTT (70 ms) P-TCP, Scalable & Bic-TCP appear more stable than the other protocols
[Figure: SLAC-UFl Scalable, 8 MB window, txqueuelen=2000, 380 Mbps, stability ~0.098; SLAC-UFl FAST, 8 MB window, txqueuelen=100, 350 Mbps, stability ~0.28]
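A minimal sketch of this stability metric, computed from the 5-second iperf interval throughputs (function and variable names here are my own):

```python
import statistics

def stability(throughputs_mbps):
    """Stability = standard deviation / mean of the interval throughputs.

    Lower is steadier, e.g. ~0.098 (Scalable) vs ~0.28 (FAST) in the slide.
    """
    mean = statistics.mean(throughputs_mbps)
    return statistics.stdev(throughputs_mbps) / mean
```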

Sinusoidal UDP
- UDP does not back off in the face of congestion; it has a "stiff" behavior
- We modified iperf to let it generate UDP traffic with a sinusoidal time behavior, following an idea from Tom Hacker, to see how TCP responds to varying cross-traffic
- Used 2 periods of 30 and 60 seconds and an amplitude varying from 20 to 80 Mbps
- Sent from the 2nd sending host to the 2nd receiving host while sending TCP from the 1st sending host to the 1st receiving host
- As long as the window size was large enough, all protocols converged quickly and maintained a roughly constant aggregate throughput, especially P-TCP & Bic-TCP
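The modified iperf itself is not shown in the slides; as an illustration, a rate schedule oscillating between 20 and 80 Mbps with the quoted periods might be generated as follows (the interpretation of the 20-80 Mbps range as the oscillation bounds is an assumption):

```python
import math

def sinusoidal_rate(t, period_s=60.0, low_mbps=20.0, high_mbps=80.0):
    """UDP sending rate (Mbps) at time t for sinusoidal cross-traffic.

    Oscillates between low_mbps and high_mbps with the given period,
    mimicking the varying cross-traffic described in the slide.
    """
    mean = (high_mbps + low_mbps) / 2.0
    amplitude = (high_mbps - low_mbps) / 2.0
    return mean + amplitude * math.sin(2.0 * math.pi * t / period_s)

# Example: target rate every 5 seconds over one 60 s period
rates = [round(sinusoidal_rate(t), 1) for t in range(0, 65, 5)]
```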

TCP Convergence against UDP
- Stability is better at short distances
- P-TCP & Bic-TCP are more stable
[Figure: SLAC-UFl H-TCP, stability ~0.27, 276 Mbps, and SLAC-UFl Bic-TCP, stability ~0.11, 355 Mbps; aggregate, TCP and UDP throughput (Mbps) and RTT (ms) vs. time over 1200 s]

Cross TCP Traffic
- Important to understand how fair a protocol is
- For one protocol competing against the same protocol (intra-protocol) we define the fairness F for a single bottleneck (see the formula sketched below)
- All protocols have good intra-protocol fairness (F > 0.98), except HS-TCP (F < 0.94) when the window size > optimal
[Figure: SLAC-Caltech Fast TCP (F ~ 0.997) and SLAC-Florida HS-TCP (F ~ 0.935); aggregate and per-flow throughput (Mbps) and RTT (ms) vs. time over 1200 s]
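The fairness formula itself did not survive the transcript; a plausible reconstruction, consistent with the quoted values F > 0.98 and F < 0.94, is Jain's fairness index for n competing flows with throughputs x_i (this specific form is an assumption on my part):

```latex
% Jain's fairness index (assumed form); n = 2 for the single-bottleneck tests
F = \frac{\left(\sum_{i=1}^{n} x_i\right)^{2}}{n \sum_{i=1}^{n} x_i^{2}},
\qquad
F_{\text{two flows}} = \frac{(x_1 + x_2)^{2}}{2\,(x_1^{2} + x_2^{2})}
```

For two flows, F = 1 when both flows get equal throughput and F = 0.5 when one flow takes everything.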

Fairness (F)
- Most have good intra-protocol fairness (diagonal elements), except HS-TCP
- Inter-protocol: Bic & H-TCP appear more fair against others
- Worst fairness: HSTCP-LP, P-TCP, S-TCP, Fast
- But fairness alone cannot tell who is aggressive and who is timid

Inter-protocol Fairness
- For inter-protocol fairness we introduce the asymmetry between the two throughputs, A = (x1 - x2) / (x1 + x2), where x1 and x2 are the throughput averages of TCP stack 1 competing with TCP stack 2
[Charts: average asymmetry vs. all stacks, two panels]
- Reno 16 is very aggressive at short RTT; Reno & Scalable are aggressive at medium distance
- HSTCP-LP is very timid at medium RTT; HS-TCP is also timid
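A small sketch of this asymmetry measure and of the aggressive/fair/timid reading used on the next slide (the ±0.1 band used for "fair" here is an illustrative threshold, not one given in the slides):

```python
def asymmetry(x1_mbps, x2_mbps):
    """A = (x1 - x2) / (x1 + x2); A > 0 means stack 1 got more than stack 2."""
    return (x1_mbps - x2_mbps) / (x1_mbps + x2_mbps)

def label(a, band=0.1):
    """Classify stack 1's behaviour toward stack 2 (illustrative thresholds)."""
    if a > band:
        return "aggressive"
    if a < -band:
        return "timid"
    return "fair"

# Example: stack 1 averages 300 Mbps while stack 2 averages 100 Mbps -> A = 0.5
a = asymmetry(300.0, 100.0)
print(a, label(a))
```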

Inter-protocol Fairness - UFl (A)
- A = (xm - xc) / (xm + xc), where xm is the major source and xc the cross-traffic; cells colour-coded on the slide as aggressive / fair / timid
- Diagonal = 0 by definition; off-diagonal entries are symmetric; reading down a column shows how the cross-traffic behaves
- Rows: major source + window; columns (cross-traffic): Reno 16, Scalable, Fast, HS, Bic, H, HSTCP-LP, Avg
  Reno 16 + 4 MB:   0.00 0.38 0.26 0.45 0.05 0.12 0.66 0.27
  Reno 16 + 8 MB:   -0.16 0.25 0.35 0.10 0.09 0.61 0.18
  S-TCP + 4 MB:     -0.38 0.33 0.07 0.19 0.65 0.14
  S-TCP + 8 MB:     0.16 0.63 0.56 0.54 0.70 0.46
  Fast TCP + 4 MB:  -0.26 -0.33 -0.29 0.11 0.68 0.03
  Fast TCP + 8 MB:  -0.25 -0.63 0.48
  HS-TCP + 4 MB:    -0.45 -0.07 -0.17 0.37 -0.12
  HS-TCP + 8 MB:    -0.35 -0.65 -0.48 -0.41 0.13 -0.30
  Bic-TCP + 4 MB:   -0.05 -0.19 0.29 -0.10
  Bic-TCP + 8 MB:   -0.56 -0.15 0.31
  H-TCP + 4 MB:     -0.11 0.17 0.01
  H-TCP + 8 MB:     -0.09 -0.54 0.41 0.15
  Average:          -0.24 0.28 -0.01 0.02 0.47
- Scalable & Reno 16 streams are aggressive; HSTCP-LP is very timid; Fast is more aggressive than HS & H; HS is timid

Reverse Traffic
- Cause queuing on the reverse path by using P-TCP with 16 streams: ACKs are lost or come back in bursts (compressed ACKs)
- Fast TCP throughput is 4 to 8 times less than the other TCPs
- Bic-TCP appears to make big changes to the RTT here, contrary to what we said earlier
[Figure: SLAC-Florida Fast TCP and SLAC-Florida Bic-TCP with reverse traffic; throughput (Mbps) and RTT (ms) vs. time over 1200 s]

Preliminary Conclusions
- Advanced stacks behave like single-stream TCP Reno on short distances for paths up to Gbits/s, especially if the window size is limited
- Single-stream TCP Reno has low performance and is unstable on long distances
- P-TCP is very aggressive and impacts the RTT badly
- HSTCP-LP is too gentle; by design it backs off quickly, but otherwise it performs well. This can be important for providing a scavenger service without router modifications
- Fast TCP is very handicapped by reverse traffic
- S-TCP is very aggressive on long distances
- HS-TCP is very gentle and, like H-TCP, has lower throughput than the other protocols
- Bic-TCP performs very well in almost all cases
- Heavy reverse traffic may not often be a problem in today's Internet, but it could become more important in future

More Information
- TCP Stacks Evaluation: www-iepm.slac.stanford.edu/bw/tcp-eval/