TCP loss sensitivity analysis


TCP loss sensitivity analysis Adam Krajewski, IT-CS-CE, CERN

The original presentation has been modified for the 2nd ATCF 2016. Edoardo Martelli, CERN

Adam Krajewski - TCP Congestion Control

Original problem
IT-DB was backing up data to the Wigner computer centre, and a high transfer rate was required to avoid service degradation. But the transfer rate was:
Initial: ~1 Gbps
After some tweaking: ~4 Gbps
...4 Gbps out of the 10 Gbps available between Meyrin and Wigner. The problem lies within TCP internals!

TCP algorithms (1)
The default TCP algorithm (Reno) behaves poorly. Other congestion control algorithms have been developed, such as:
BIC
CUBIC
Scalable
Compound TCP
Each of them behaves differently, and we want to see how they perform in our case...

Testbed
Two servers connected back-to-back. How does the performance of each congestion control algorithm change with packet loss and delay?
Tested with iperf3. Packet loss and delay emulated using NetEm in Linux (adding 12.5 ms on each host gives a 25 ms RTT):
tc qdisc add dev eth2 root netem loss 0.1% delay 12.5ms
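A minimal sketch of how such a measurement could be driven. The host name `server1` and the interface `eth2` are placeholders; the `-C` option selecting the congestion control algorithm is available in iperf3 on Linux.

```shell
# On the server (placeholder host "server1"):
#   iperf3 -s

# On the client: emulate 0.1% packet loss and 12.5 ms one-way delay
# on the egress interface (needs root; interface name as on the slide)
tc qdisc add dev eth2 root netem loss 0.1% delay 12.5ms

# Run a 30-second throughput test with a chosen congestion algorithm
iperf3 -c server1 -t 30 -C cubic

# Clean up the emulation afterwards
tc qdisc del dev eth2 root
```

Repeating the run while varying the `loss` and `delay` parameters and the `-C` algorithm produces the loss/delay sensitivity curves shown in the following slides.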

0 ms delay, default settings

25 ms delay, default settings. PROBLEM: only 1 Gbps!

Growing the window (1)
We need: W = B × RTT = 10 Gbps × 25 ms ≈ 31.2 MB
Default settings:
net.core.rmem_max = 124K
net.core.wmem_max = 124K
net.ipv4.tcp_rmem = 4K 87K 4M
net.ipv4.tcp_wmem = 4K 16K 4M
net.core.netdev_max_backlog = 1K
The window can't grow bigger than the maximum TCP send and receive buffers, so with the default 4 MB maximum:
B_max = W_max / RTT = 4 MB / 25 ms ≈ 1.28 Gbps
WARNING: the real OS values are plain byte counts, so less readable.
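Both numbers above can be reproduced with simple shell arithmetic (decimal MB and Gbps, as on the slide):

```shell
# Bandwidth-delay product: BDP (bytes) = rate (bit/s) * RTT (s) / 8
rate_bps=10000000000          # 10 Gbit/s
rtt_ms=25
bdp_bytes=$(( rate_bps * rtt_ms / 1000 / 8 ))
echo "$bdp_bytes"             # 31250000 bytes, i.e. ~31.2 MB

# Maximum rate achievable with the default 4 MB window:
wmax_bytes=4000000
bmax_bps=$(( wmax_bytes * 8 * 1000 / rtt_ms ))
echo "$bmax_bps"              # 1280000000 bit/s, i.e. ~1.28 Gbps
```

This is exactly why the default settings cap the transfer near 1.28 Gbps on a 25 ms path, matching the ~1 Gbps observed in the previous slide.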

Growing the window (2)
We tune the TCP settings on both server and client.
Tuned settings:
net.core.rmem_max = 67M
net.core.wmem_max = 67M
net.ipv4.tcp_rmem = 4K 87K 67M
net.ipv4.tcp_wmem = 4K 65K 67M
net.core.netdev_max_backlog = 30K
WARNING: the TCP buffer size doesn't translate directly into window size, because TCP uses part of its buffers for operational data structures. The effective window will therefore be smaller than the configured maximum buffer size.
Helpful link: https://fasterdata.es.net/host-tuning/linux/
Now it's fine.
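One way such settings can be applied at runtime is with `sysctl -w`. The exact byte values below are an assumption (67 MB written out as 64 MiB = 67108864 bytes, as in the fasterdata.es.net guidance); changes made this way do not persist across reboots unless also added to /etc/sysctl.conf.

```shell
# Raise the maximum socket buffer sizes (needs root)
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
# min / default / max TCP receive and send buffers, in bytes
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
# Allow more packets to queue on the receive path
sysctl -w net.core.netdev_max_backlog=30000
```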

25 ms delay, tuned settings

0 ms delay, tuned settings. DEFAULT CUBIC

Bandwidth fairness
Goal: each TCP flow should get a fair share of the available bandwidth.
[Chart: two flows on a 10G link converging to 5G each over time]

Bandwidth competition
How do congestion control algorithms compete?
Tests were done for:
Reno, BIC, CUBIC, Scalable
0 ms and 25 ms delay
Two cases: 2 flows starting at the same time; 1 flow delayed by 30 s
[Diagram: two 10G senders congesting a shared 10G bottleneck]
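A sketch of how the delayed-flow case could be reproduced with two iperf3 streams. The host name `server1` is a placeholder, and `-C` (congestion algorithm selection) assumes Linux:

```shell
# Server side: two iperf3 servers listening on different ports
#   iperf3 -s -p 5201 &
#   iperf3 -s -p 5202 &

# Client side: first flow starts immediately and runs 120 s,
# the second starts 30 s later so both finish together
iperf3 -c server1 -p 5201 -t 120 -C cubic &
sleep 30
iperf3 -c server1 -p 5202 -t 90 -C cubic
```

Swapping the `-C` argument of the second flow (bic, scalable, reno) gives the head-to-head comparisons on the following slides.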

cubic vs cubic + offset

cubic vs bic

cubic vs scalable

TCP (reno) friendliness

cubic vs reno

Why default cubic? (1)
BIC was the default Linux TCP congestion control algorithm until kernel version 2.6.19, when it was changed to CUBIC with the commit message:
"Change default congestion control used from BIC to the newer CUBIC which is the successor to BIC but has better properties over long delay links."
Why?

Why default cubic? (2)
It is friendlier to other congestion control algorithms:
to Reno...
to CTCP (the Windows default)...

Why default cubic? (3)
It has better RTT fairness properties.
[Charts: 25 ms RTT and 0 ms RTT]

Why default cubic? (4)
RTT fairness test:
[Diagram: competing flows with 0 ms RTT and 25 ms RTT sharing the bottleneck]

Setting TCP algorithms (1)
Linux, OS-wide setting (sysctl):
net.ipv4.tcp_congestion_control = reno
Algorithms currently available:
net.ipv4.tcp_available_congestion_control = cubic reno
Per-socket setting: setsockopt() with the TCP_CONGESTION option name; algorithms allowed for unprivileged processes:
net.ipv4.tcp_allowed_congestion_control = cubic reno
Add a new algorithm by loading its kernel module (here, BIC):
# modprobe tcp_bic

Setting TCP algorithms (2)
Windows, OS-wide settings:
C:\> netsh interface tcp show global
Add-On Congestion Control Provider: ctcp
CTCP is enabled by default in Windows Server 2008 and newer. It needs to be enabled manually on client systems:
Windows 7: C:\> netsh interface tcp set global congestionprovider=ctcp
Windows 8/8.1 (PowerShell): Set-NetTCPSetting -CongestionProvider ctcp

TCP loss sensitivity analysis Adam Krajewski, IT-CS-CE