Achieving reliable high performance in LFNs (long-fat networks)

Sven Ubik, Pavel Cimbál, CESNET

End-to-end performance

E2E performance is the result of the interaction of all computer system components:
- network, network adapter, communication protocols, operating system, applications

We decided to concentrate on E2E performance on Linux PCs.

Primary problem areas:
- TCP window configuration
- OS / network adapter interaction
- ssh / TCP interaction

TCP window configuration

Linux 2.4: how big are my TCP windows?
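As a rule of thumb, the TCP windows must cover the bandwidth-delay product of the path; a smaller window caps throughput at rwnd/RTT. A minimal sketch, using the 1 Gb/s, 45 ms path that appears later in these slides:

```python
# Required TCP window = bandwidth-delay product (BDP).
# A window smaller than the BDP limits throughput to rwnd/RTT.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bytes in flight needed to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8

# 1 Gb/s path with 45 ms RTT (the test path used in these slides)
bdp = bdp_bytes(1e9, 0.045)
print(bdp / 1e6)  # -> 5.625 (MB)
```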

Throughput with large TCP windows

Interaction with the network adapter must be considered:
- a large TCP sender window allows large chunks of data to be submitted from IP through the txqueue to the adapter
- a full txqueue -> send_stall() to the application and a context switch
- no problem as long as the txqueue is large enough for one timeslice

For a Gigabit Ethernet adapter and the standard Linux system timer (10 ms timeslice):

txqueuelen > 1 Gb/s * 10 ms / 8 bits / 1500 bytes = 833 packets

ifconfig eth0 txqueuelen 1000
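The txqueuelen sizing rule above can be checked directly: the transmit queue must hold one scheduler timeslice worth of full-size packets at line rate, otherwise the sender stalls on every timer tick.

```python
# Minimal check of the txqueuelen sizing rule from the slide.

def min_txqueuelen(link_bps, timeslice_s, packet_bytes):
    """Packets generated during one scheduler timeslice at line rate."""
    bits_per_timeslice = link_bps * timeslice_s
    return bits_per_timeslice / 8 / packet_bytes

# Gigabit Ethernet, 10 ms timeslice (Linux 2.4, HZ=100), 1500-byte frames
n = min_txqueuelen(1e9, 0.010, 1500)
print(round(n))  # -> 833
```

Hence the slide's recommendation to round up to `txqueuelen 1000`.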

Throughput with large TCP windows, cont.

Using a “buffered pipe” is not good

Router queues must be considered:
- no increase in throughput over using a “wire pipe”
- self-clocking adjusts the sender to the bottleneck speed, but does not stop the sender from accumulating data in queues
- filled-up queues are sensitive to losses caused by cross-traffic
- check throughput (TCP Vegas) or RTT increase?

rwnd <= pipe capacity: bw = rwnd/rtt
rwnd > pipe capacity: bw ~ (mss/rtt) * 1/sqrt(p)

[Plot: flat lower bound RTT = 45 ms, fluctuations up to RTT = 110 ms; bottleneck installed BW = 1 Gb/s, buffer content ~8 MB]
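The two regimes above can be evaluated directly. A small sketch using the slides' path parameters; the loss rate p here is an illustrative value, not a measurement:

```python
import math

# Window-limited regime: bw = rwnd / RTT
# Loss-limited regime (Mathis approximation): bw ~ (MSS/RTT) * 1/sqrt(p)

def bw_window_limited(rwnd_bytes, rtt_s):
    return rwnd_bytes / rtt_s * 8                        # bits per second

def bw_loss_limited(mss_bytes, rtt_s, loss_rate):
    return (mss_bytes / rtt_s) / math.sqrt(loss_rate) * 8

# 45 ms RTT path from the slides; 1460-byte MSS; example loss rate 1e-5
print(bw_window_limited(8e6, 0.045) / 1e6)    # ~1422 Mb/s (window no longer the limit)
print(bw_loss_limited(1460, 0.045, 1e-5) / 1e6)  # ~82 Mb/s (loss is the limit)
```

This is why a window larger than the pipe capacity buys nothing: beyond that point throughput is governed by the loss term, and the extra data only sits in router queues.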

Other configuration problems

The TCP metric cache must be considered:
- owin development, rwin development
- initial ssthresh locked at 1.45 MB by a cached value

To flush the cached TCP metrics:

echo 1 > /proc/sys/net/ipv4/route/flush
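A toy model (not kernel code; numbers are illustrative) of why a stale cached ssthresh hurts: once cwnd reaches the cached ssthresh, growth switches from doubling per RTT to one MSS per RTT, so filling a large window takes thousands of round trips instead of a dozen:

```python
def rtts_to_reach(target_cwnd, ssthresh, mss=1460):
    """RTTs for cwnd to grow from one MSS to target_cwnd:
    doubling per RTT below ssthresh (slow start), then one
    MSS per RTT (congestion avoidance). Toy model only."""
    cwnd, rtts = mss, 0
    while cwnd < target_cwnd:
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + mss
        rtts += 1
    return rtts

target = 8_000_000                                  # 8 MB window from the slides
print(rtts_to_reach(target, ssthresh=target))       # -> 13   (pure slow start)
print(rtts_to_reach(target, ssthresh=1_450_000))    # -> 4466 (cached ssthresh 1.45 MB)
```

At 45 ms per RTT, the cached-ssthresh case needs over three minutes just to open the window, which is why flushing the cache matters for repeated tests.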

Bandwidth measurement and estimation

Test paths: cesnet.cz <--> uninett.no, switch.ch

pathload over one week:
- 27% of measurements too low (50-70 Mb/s)
- 7% of measurements too high (1000 Mb/s)
- 66% of measurements realistic (750-850 Mb/s), but the reported range is sometimes too wide (150 Mb/s)

pathrate: lots of fluctuations
UDP iperf: can stress existing traffic
TCP iperf: more fluctuations for larger TCP windows

Bandwidth measurement and estimation, cont. cesnet.cz -> switch.ch

Bandwidth measurement and estimation, cont. uninett.no -> cesnet.cz

ssh performance

Cesnet -> Uninett, 1.5 MB window, 10.4 Mb/s, 9% CPU load
[Plot: sequence number development - rwin not utilised]

ssh performance, cont.

ssh performance, cont.

BW = 1 Gb/s, RTT = 45 ms, TCP window = 8 MB, Xeon 2.4 GHz

ssh performance, cont.

CHAN_SES_WINDOW_DEFAULT = 40 * 32 kB blocks, 85% CPU load
[Plot: sequence number development - rwin utilised]
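The application-level limit comes from SSH-2 channel flow control: at most one channel window of data may be unacknowledged, so throughput is capped at roughly channel_window / RTT regardless of the TCP window underneath. A rough sketch (assuming the stock OpenSSH window of 4 * 32 kB blocks at the time; values illustrative):

```python
# SSH-2 channel flow control caps throughput at ~channel_window / RTT,
# independently of the TCP window underneath.

def ssh_throughput_cap_mbps(channel_window_bytes, rtt_s):
    return channel_window_bytes / rtt_s * 8 / 1e6

rtt = 0.045                                           # 45 ms path from the slides
print(ssh_throughput_cap_mbps(4 * 32 * 1024, rtt))    # stock window:   ~23 Mb/s
print(ssh_throughput_cap_mbps(40 * 32 * 1024, rtt))   # patched window: ~233 Mb/s
```

This matches the slide: enlarging CHAN_SES_WINDOW_DEFAULT lets ssh finally utilise the receiver window, at the cost of higher CPU load.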

Conclusion

- Large TCP windows require other configuration tasks for good performance
- Buffer autoconfiguration should not just conserve memory and set TCP windows “large enough”, but also “small enough”
- ssh has a significant performance problem that must be resolved at the application level
- The influence of OS configuration and implementation-specific features on performance can be stronger than amendments in congestion control

Thank you for your attention
http://staff.cesnet.cz/~ubik