
INDIANA UNIVERSITY
Status of FAST TCP and other TCP alternatives
John Hicks, TransPAC HPCC Engineer, Indiana University
APAN Meeting – Hawaii, 30 January 2004

Overview
– Brief introduction to TCP alternatives
– Results from SLAC
– Internet2 information
– TransPAC work
– Future plans
– Questions

Linux 2.4 New Reno
– Low performance on fast, long-distance paths
– AIMD: add a = 1 packet to cwnd per RTT; decrease cwnd by a factor b = 0.5 on congestion
[Chart: Reno throughput (Mbps) over time (s) on a path with RTT ~70 ms]
Information courtesy of Les Cottrell from the SLAC group at Stanford
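The AIMD rule above can be written as a minimal window-update function. This is an illustrative sketch only, not the Linux kernel's implementation; the a and b constants follow the slide, and cwnd is kept in bytes.

```python
def aimd_update(cwnd, mss, loss, a=1.0, b=0.5):
    """One AIMD step in the style of Reno congestion avoidance.

    cwnd : congestion window in bytes
    mss  : maximum segment size in bytes
    a    : segments added to cwnd per RTT (additive increase)
    b    : multiplicative decrease factor on congestion
    """
    if loss:
        # Congestion: cut the window by factor b, never below one segment.
        return max(cwnd * b, mss)
    # No congestion: grow by roughly a segments per RTT.
    return cwnd + a * mss
```

The halving is what hurts on fast long-distance paths: after a single loss, regrowing at one segment per RTT takes cwnd/(2·MSS) round trips, which at 70 ms RTT and Gbit/s windows is many minutes.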

Parallel TCP Reno
– TCP Reno with 16 streams
– Parallel streams are heavily used in HENP and elsewhere to achieve the needed performance, so this is today's de facto baseline
– However, it is hard to optimize both the window size and the number of streams, since the optimal values vary with network capacity, routes, and utilization

FAST TCP
– Based on TCP Vegas
– Uses both queuing delay and packet loss as congestion measures
– Developed at Caltech by Steven Low and collaborators
– Beta code available soon

Scalable TCP
– Uses exponential increase everywhere (in both slow start and congestion avoidance)
– Fixed multiplicative decrease factor b
– Introduced by Tom Kelly of Cambridge
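A sketch of the per-ACK rule, assuming Kelly's published constants (increase of 0.01 segments per ACK, decrease factor b = 0.125); not the actual kernel patch:

```python
def scalable_update(cwnd, event):
    """One Scalable TCP step; cwnd in segments.

    Per-ACK additive step of 0.01 segments means ~1% growth per RTT,
    i.e. exponential growth, while loss backs off by a fixed 12.5%.
    """
    if event == "ack":
        return cwnd + 0.01          # ~1% of the window per RTT
    if event == "loss":
        return cwnd - 0.125 * cwnd  # fixed fractional backoff
    return cwnd
```

The key property is that loss-recovery time becomes roughly constant in RTTs, independent of window size, instead of growing linearly with the window as in Reno.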

HighSpeed TCP
– Behaves like Reno for small values of cwnd
– Above a chosen cwnd threshold (default 38 segments), a more aggressive response function is used
– Uses a lookup table to determine by how much to increase cwnd when an ACK is received
– Available with Web100
– Introduced by Sally Floyd
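The table-driven behaviour can be sketched as a lookup of per-window AIMD parameters. The rows below are a shortened, approximate excerpt in the spirit of the published response-function table, for illustration only:

```python
# Illustrative (cwnd threshold, a(w) segments per RTT, b(w) decrease factor)
# rows, roughly in the spirit of the HighSpeed TCP response-function table.
HSTCP_TABLE = [
    (38,  1, 0.50),   # at or below 38 segments: standard Reno behaviour
    (118, 2, 0.44),
    (221, 3, 0.41),
    (347, 4, 0.38),
]

def hstcp_params(cwnd):
    """Return (a, b) for the current window: the last table row whose
    threshold does not exceed cwnd; Reno's (1, 0.5) below the table."""
    a, b = 1, 0.5
    for threshold, ai, bi in HSTCP_TABLE:
        if cwnd >= threshold:
            a, b = ai, bi
    return a, b
```

Larger windows thus both grow faster (larger a) and back off less (smaller b), while small windows see unmodified Reno, so short or congested flows are unaffected.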

HighSpeed TCP Low Priority (HSTCP-LP)
– A mixture of HS-TCP and TCP-LP (Low Priority)
– Backs off early in the face of congestion by monitoring RTT
– The idea is to provide scavenger service without router modifications
– From Rice University

Binary Increase Control TCP (BIC-TCP)
Combines:
– An additive increase used for large cwnd
– A binary increase used for small cwnd
– Developed by Injong Rhee at NC State University
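The combination can be sketched as BIC's increase-phase target: binary search toward the window where the last loss occurred, clamped so that large gaps are closed additively. This is a simplified sketch; the parameter names and step limits are illustrative.

```python
def bic_target(cwnd, w_max, s_max=32, s_min=0.01):
    """Next-RTT target window for BIC's increase phase (sketch).

    w_max : window at which the last loss occurred
    s_max : cap on per-RTT growth -> additive increase when far away
    s_min : minimum per-RTT growth -> stop of the binary search
    """
    if cwnd < w_max:
        step = (w_max - cwnd) / 2   # binary-search midpoint
        step = min(step, s_max)     # far from w_max: plain additive increase
        step = max(step, s_min)     # always make some progress
        return cwnd + step
    # Past the old loss point: probe for new capacity.
    return cwnd + s_max
```

Binary search near w_max is what makes BIC converge quickly yet gently to the last known safe operating point after a loss.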

Hamilton TCP (H-TCP)
– Similar to HS-TCP in switching to an aggressive mode after a threshold
– Uses a heterogeneous AIMD algorithm
– Developed at the Hamilton Institute, Ireland
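The mode switch can be sketched with H-TCP's published increase polynomial, where the additive-increase factor depends on the time elapsed since the last congestion event (the 1-second Reno-mode threshold is the commonly cited default):

```python
def htcp_alpha(delta, delta_l=1.0):
    """H-TCP additive-increase factor (segments per RTT), sketch.

    delta   : seconds since the last congestion event
    delta_l : threshold below which H-TCP stays Reno-like (assumed 1 s)
    """
    if delta <= delta_l:
        return 1.0                       # Reno-compatible low-speed mode
    t = delta - delta_l
    return 1.0 + 10.0 * t + (t / 2.0) ** 2  # grows with time since loss
```

Because the increase depends on elapsed time rather than on cwnd, flows with different RTTs sharing a bottleneck see the same ramp-up schedule, which is the point of the "heterogeneous" design.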

SLAC TCP Testing
– TCP only
  – No rate-based transport protocols (e.g. SABUL, UDT, RBUDP) at the moment
  – No iSCSI or FC over IP
– Sender modifications only; the HENP model is a few big senders and lots of smaller receivers
  – Simplifies deployment: only a few hosts at a few sending sites
  – No DRS
– Runs on production networks
  – No router modifications (XCP/ECN), no jumbo frames

SLAC preliminary test results
– Advanced stacks behave like single-stream TCP Reno on short distances, even on Gbit/s paths, especially if the window size is limited
– Single-stream TCP Reno has low performance and is unstable over long distances
– P-TCP is very aggressive and impacts the RTT badly
– HSTCP-LP is too gentle, though this matters for providing scavenger service without router modifications; by design it backs off quickly, but otherwise performs well
– FAST TCP is badly handicapped by reverse traffic
– S-TCP is very aggressive over long distances
– HS-TCP is very gentle and, like H-TCP, has lower throughput than the other protocols
– BIC-TCP performs very well in almost all cases

SLAC preliminary test results
– With an optimal window, all stacks are within ~20% of one another, except single-stream Reno on medium and long distances
– P-TCP and S-TCP achieve the best throughput

Internet2 Information
– Stanislav Shalunov developed i2perf, initially written for FAST TCP to measure RTT
– Testing topology:
  – Reno, from Seattle to Atlanta, RTT = 57.6 ms
  – FAST, from Raleigh to Atlanta, RTT = 23.7 ms
  – FAST, from Seattle to Atlanta, RTT = 57.4 ms
  – FAST, from Pittsburgh to Atlanta, RTT = 26.9 ms
– FAST testing planned over TransPAC (waiting on kernel mods from Caltech)
– More information at

TransPAC Work
– Set up a test from SURFnet to APAN
– Setup took three days due to time-zone differences
– The purpose of this test was to establish contact personnel and identify equipment needs
– Only standard tests were done; kernel mods and TCP alternatives require more time to set up
– More testing planned

Future Plans
– Possible future testing to and from the following sites:
  – Indiana University
  – MIT (David Lapsley)
  – L.A. and other Abilene locations
  – StarLight
  – APAN (Tokyo XP and others?)
– Looking for groups interested in testing TCP
– Contact me to help coordinate testing over TransPAC

For More Information
– FAST TCP
– Scalable TCP
– HighSpeed TCP
– HighSpeed TCP Low Priority
– Binary Increase Control (BIC) TCP

Even More Information
– Hamilton TCP
– SLAC TCP
– Internet2 (Stanislav Shalunov)
– TransPAC
– APAN NOC

Questions and discussion
John Hicks, Indiana University