
TCP performance Sven Ubik

FTP throughput vs. path capacity and load:

  server              FTP throughput   capacity   load
  ftp.uninett.no      12.3 Mb/s        1.2 Gb/s    80 Mb/s (6.6%)
  ftp.stanford.edu     1.3 Mb/s        600 Mb/s   180 Mb/s (30%)

Protocols: TCP 95%, UDP 3%, other 2%
International traffic: 30% (April 2002)
Géant, Internet2 vs. 10 Mb/s Ethernet?

TCP flow control & congestion control

BW * delay product. Measured from CESNET:

  server                ping (RTT)   max. throughput for 64 kB owin
  ftp.uninett.no         38 ms       13.8 Mb/s
  ftp.cs.columbia.edu    90 ms        5.8 Mb/s
  ftp.tamu.edu          133 ms        3.9 Mb/s
  ftp.stanford.edu      166 ms        2.6 Mb/s
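These figures are the window-limited bound: with at most one window of outstanding data (owin) per round trip, throughput cannot exceed owin/RTT. A worked example for the UNINETT path, assuming the default 64 kB (65535-byte) window:

$$ \mathrm{throughput} \le \frac{\mathrm{owin}}{RTT} = \frac{65535 \times 8\,\mathrm{b}}{0.038\,\mathrm{s}} \approx 13.8\ \mathrm{Mb/s} $$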

Window Scale TCP Option (RFC 1323): advertised rwnd is shifted internally by 1-14 bits.

1. The OS must support it; for example, Linux 2.4:
     sysctl -w net/ipv4/tcp_adv_win_scale=1
2. The application must use it:
   a) default for all TCP connections:
        sysctl -w net/ipv4/tcp_rmem="<min> <default> <max>"
        sysctl -w net/ipv4/tcp_wmem="<min> <default> <max>"
   b) the application sets its own buffer sizes:
        setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, (char *)&size, sizeof(int));
        setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, (char *)&size, sizeof(int));
      before connect() or listen() (e.g., netperf, modified ncftp+wuftpd)
   c) the OS tunes automatically (dynamic right-sizing)
3. Timestamps + PAWS (Protect Against Wrapped Sequence Numbers)
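A minimal self-contained sketch of option 2b; the peer address, port, and 4 MB buffer size are illustrative assumptions, not from the slides:

  #include <stdio.h>
  #include <string.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
      int sockfd = socket(AF_INET, SOCK_STREAM, 0);
      int size = 4 * 1024 * 1024;  /* 4 MB, illustrative; must fit under the OS maximum */

      /* Set buffers BEFORE connect()/listen(): the window scale factor is
         negotiated only in the SYN segments and cannot be changed later. */
      setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, (char *)&size, sizeof(int));
      setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, (char *)&size, sizeof(int));

      struct sockaddr_in addr;
      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_port = htons(5001);                    /* illustrative port */
      addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* illustrative peer */

      if (connect(sockfd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
          perror("connect");
      close(sockfd);
      return 0;
  }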

TCP congestion control

Initialization: cwnd <= 2*MSS, ssthresh high (rwnd)
Slow start (cwnd < ssthresh): each new ACK received => cwnd = cwnd + MSS
Congestion avoidance (cwnd > ssthresh): each RTT => cwnd = cwnd + MSS, usually approximated per ACK by cwnd = cwnd + MSS*MSS/cwnd
Timeout: ssthresh = max(owin/2, 2*MSS), cwnd = MSS (implies slow start)
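A compact sketch of these rules as straight-line C; the type, function names, and the driver loop are illustrative, not a kernel implementation:

  #include <stdio.h>

  typedef struct {
      double cwnd;      /* congestion window [bytes] */
      double ssthresh;  /* slow-start threshold [bytes] */
      double mss;       /* maximum segment size [bytes] */
  } tcp_cc;

  static void on_new_ack(tcp_cc *c)
  {
      if (c->cwnd < c->ssthresh)
          c->cwnd += c->mss;                     /* slow start: +MSS per ACK */
      else
          c->cwnd += c->mss * c->mss / c->cwnd;  /* congestion avoidance: ~ +MSS per RTT */
  }

  static void on_timeout(tcp_cc *c, double owin)
  {
      c->ssthresh = (owin / 2 > 2 * c->mss) ? owin / 2 : 2 * c->mss;
      c->cwnd = c->mss;                          /* drop back to slow start */
  }

  int main(void)
  {
      tcp_cc c = { 2 * 1460.0, 64 * 1024.0, 1460.0 };  /* cwnd=2*MSS, ssthresh=rwnd */
      for (int i = 0; i < 100; i++)
          on_new_ack(&c);
      printf("cwnd after 100 ACKs: %.0f bytes\n", c.cwnd);
      on_timeout(&c, c.cwnd);
      printf("after timeout: cwnd=%.0f, ssthresh=%.0f\n", c.cwnd, c.ssthresh);
      return 0;
  }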

TCP congestion control throughput limitation

CESNET -> UNINETT: MSS = 1460 bytes, RTT = 44 ms, packet loss rate p ~ 5*10^-6, Timeout = 250 ms

Padhye [1] equation: BW ~ … Mb/s
Mathis [2] equation, BW ~ (MSS/RTT) * (C/sqrt(p)): … Mb/s

Higher MSS to speed up congestion avoidance?

[1] J. Padhye, V. Firoiu, D. Towsley, J. Kurose: "Modeling TCP Throughput: A Simple Model and its Empirical Validation"
[2] M. Mathis, J. Semke, J. Mahdavi: "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm"
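A back-of-envelope check of the Mathis bound with the quoted path parameters, assuming the commonly used constant C = sqrt(3/2) ≈ 1.22:

$$ BW \approx \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}} = \frac{1460 \times 8\,\mathrm{b}}{0.044\,\mathrm{s}} \cdot \frac{1.22}{\sqrt{5 \times 10^{-6}}} \approx 145\ \mathrm{Mb/s} $$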

CESNET -> UNINETT UDP throughput, via Géant and via Teleglobe (plot created by qosplot)

Path capacity estimation tools

pathchar: pathchar to tcp4-ge.uninett.no (…)
   0 localhost (…)
     |   61 Mb/s,   48 us (293 us)
   1 (…)
     |  348 Mb/s,   13 us (354 us)
   2 r21-pos0-0-stm16.cesnet.cz (…)
     |  108 Mb/s,   64 us (594 us)
   3 cesnet.cz1.cz.geant.net (…)
     |  614 Mb/s, 4.15 ms (8.91 ms)
   4 cz.de1.de.geant.net (…)
     |  557 Mb/s,    7 us (8.95 ms)
   5 de1-1.de2.de.geant.net (…)
     | 1229 Mb/s, 10.9 ms (30.7 ms)
   6 de.se1.se.geant.net (…)
     |   ?? b/s,  -13 us (30.6 ms)
   7 nordunet-gw.se1.se.geant.net (…)
     |  805 Mb/s, 3.67 ms (38.0 ms)
   8 no-gw.nordu.net (…)
     |  884 Mb/s,   13 us (38.0 ms)
   9 oslo-gw1.uninett.no (…)
     |  656 Mb/s, 3.39 ms (44.8 ms)
  10 trd-gw.uninett.no (…)
     |   99 Mb/s,   29 us (45.0 ms), 13% dropped
  11 tcp4-ge.uninett.no (…)

pathrate: "Phase I was aborted"
Final capacity estimate: 757 Mbps to 792 Mbps

CESNET -> UNINETT TCP throughput

  max. rwnd [bytes]   CESNET -> UNINETT [Mb/s]   UNINETT -> CESNET [Mb/s]   CESNET -> UNINETT (Teleglobe) [Mb/s]
  …                   …                          …                          …

FTP, 150 MB file: standard FTP: 130 s; with rwnd increased to 4 MB: 5 s
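Converting those FTP transfer times into effective throughput (150 MB ≈ 1200 Mb):

$$ \frac{1200\ \mathrm{Mb}}{130\ \mathrm{s}} \approx 9.2\ \mathrm{Mb/s} \qquad \text{vs.} \qquad \frac{1200\ \mathrm{Mb}}{5\ \mathrm{s}} = 240\ \mathrm{Mb/s} $$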

CESNET -> UNINETT TCP throughput, cont.

Measurement tools:
- simulation (ns-2, sim.cesnet.cz)
- emulation (NIST Net)
- path capacity estimation tools (pathchar, pathrate, …)
- capture + follow-up analysis (tcpdump + tcptrace and others)
- on-the-fly monitoring of TCP state variables (web100)

Capture + follow-up analysis:

  tcpdump -i eth1 -p -s 96 -w trace.log tcp and host tcp4-ge.uninett.no
    (-p: do not switch the interface to promiscuous mode; -s 96: capture only
     the first 96 bytes of each packet; -w: write the raw trace to a file)
  tcptrace -l -f 's_port!=12865' -T -A300 -G trace.log
    (the filter excludes port 12865, the netperf control connection)
  visualization: xplot, tcpplot

On-the-fly monitoring of TCP state variables (web100):
- "Tools for end hosts to automatically achieve high bandwidth"
- kernel data structures (approx. 120 variables), plus a library and userland tools

On-the-fly monitoring of TCP state variables:

  readvars 0.01 CurrentCwnd CurrentRwndRcvd

(presumably sampling the CurrentCwnd and CurrentRwndRcvd variables every 0.01 s)

Parallel TCP (GridFTP, LFTP)

~100 Mb transfer, UNINETT -> CESNET:

  pget -n …   …s    6.5 Mb/s
  pget -n …   …s   12.1 Mb/s
  pget -n …   …s   17.4 Mb/s; to 10s: 9.5 Mb/s
  pget -n …   …s    9.6 Mb/s; to 12s: 7.9 Mb/s
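Why multiple streams can help, as a hedged back-of-envelope extending the Mathis bound above: n parallel streams each see roughly the single-stream loss-limited bound, so the aggregate scales about linearly in n until capacity or fairness limits take over:

$$ BW_{\mathrm{agg}} \approx n \cdot \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}} $$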

Further research

E2E performance: no changes to the network. Is it best-effort? Is it fair?

The ultimate goal of E2E performance work: fully automatic adjustment to network and receiver conditions inside the operating system, to maximize utilization of available resources.

- Use SACKs to avoid slow start after a timeout.

Are you far enough from us? You are welcome!

Sven Ubik