Debugging end-to-end performance in a commodity operating system
Pavel Cimbál, CTU; Sven Ubik, CESNET


End-to-end performance
- Protective QoS -> proactive QoS
- Interaction of all computer system components (applications, operating system, network adapter, network)
- We decided to concentrate on E2E performance on Linux PCs
- TCP buffers need to be increased, but by how much?
- What "autoconfiguration" do we need?

How big are my TCP windows? (Linux 2.2)

How big are my TCP windows? (Linux 2.4)

Is bigger always better?
- rwnd <= pipe capacity: bw = rwnd/rtt
- rwnd > pipe capacity: AIMD-controlled, bw ~ (MSS/rtt) * 1/sqrt(p)
- Linux does not implement plain RFC slow start + congestion avoidance; it includes many modifications
- Interaction with the network adapter must be considered
- The TCP cache must be considered
- Router queues must be considered
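To make the two regimes concrete, a minimal sketch in Python, using the path parameters from the testing environment below (1 Gb/s bottleneck, 40 ms RTT, 1460-byte MSS); the loss rate p = 1e-6 is a made-up example value, not a measurement:

from math import sqrt

rtt = 0.040    # round-trip time in seconds (CESNET -> Uninett path)
mss = 1460     # maximum segment size in bytes
rate = 1e9     # bottleneck rate in bits per second

pipe = rate * rtt / 8   # wire pipe capacity in bytes (~5 MB)

def bw_window_limited(rwnd):
    # rwnd <= pipe capacity: throughput is simply rwnd/rtt
    return rwnd / rtt

def bw_loss_limited(p):
    # rwnd > pipe capacity: Mathis formula, bw ~ (MSS/rtt) * 1/sqrt(p)
    return (mss / rtt) / sqrt(p)

print("wire pipe: %.1f MB" % (pipe / 1e6))                                 # 5.0 MB
print("1.5 MB window: %.0f Mb/s" % (bw_window_limited(1.5e6) * 8 / 1e6))  # 300 Mb/s
print("p = 1e-6 loss: %.0f Mb/s" % (bw_loss_limited(1e-6) * 8 / 1e6))     # ~292 Mb/s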

Testing environment
- CESNET (CZ) -> Uninett (NO): 12 hops, 1GE + OC-48, rtt = 40 ms, wire pipe capacity <= 5 MB
- tcpdump on a mirrored port captures all headers at > 800 Mb/s
- pathload: 149 measurements: 40 too low (50-70 Mb/s), 10 too high (1000 Mb/s), 99 realistic (… Mb/s), but the reported range was sometimes too wide (150 Mb/s)
- iperf: 50 measurements:
  window  1 MB: 135 Mb/s
  window  2 MB: 227 Mb/s
  window  4 MB: 107 Mb/s
  window  8 MB: 131 Mb/s
  window 16 MB: 127 Mb/s

Gigabit interfaces and txqueuelen
ifconfig eth0 txqueuelen 1000
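A back-of-the-envelope check of why a longer transmit queue is harmless at gigabit speed (Python; the 100-packet default is an assumption about the 2.4 kernels of that era, and full 1500-byte frames are assumed): even 1000 full-size frames drain in about 12 ms, so the larger queue absorbs TCP's bursts without adding meaningful latency.

frame = 1500 * 8          # bits per full-size Ethernet frame
rate = 1e9                # link rate in bits per second

for qlen in (100, 1000):  # assumed 2.4-kernel default vs. increased value
    drain_ms = qlen * frame / rate * 1000
    print("txqueuelen %4d drains in %4.1f ms" % (qlen, drain_ms))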

TCP cache
- The initial ssthresh is cached from earlier connections and can get locked at 1.45 MB
- Flush the route cache to reset it:
echo 1 > /proc/sys/net/ipv4/route/flush

What is the right "pipe capacity"?
- Gigabit routers have very big buffers
- A 150 Mb/s overload buffered for 2 s: 150 Mb/s x 2 s = 300 Mbit = 37.5 MB of queued data
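The distinction drawn in the following slides between the "wire pipe" and the "buffered pipe" follows directly from these numbers; a minimal sketch (Python, using only figures stated in the slides):

rate = 1e9        # bottleneck rate, bits per second
rtt = 0.040       # round-trip time, seconds
queue = 37.5e6    # router buffering observed above, bytes

wire_pipe = rate * rtt / 8
print("wire pipe:     %4.1f MB" % (wire_pipe / 1e6))            #  5.0 MB
print("buffered pipe: %4.1f MB" % ((wire_pipe + queue) / 1e6))  # 42.5 MB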

Using the "buffered pipe" is not good
- No long-term throughput gain over using the "wire pipe"
- Self-clocking adjusts the sender to the bottleneck speed, but does not stop the sender from accumulating data in queues
- Filled-up queues are sensitive to losses caused by cross-traffic

How to use not much more than the "wire pipe"
- Can the sender control how much it fills the pipe by checking RTT?
- Can the receiver better moderate its advertised window?
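One way a sender could act on the first question, in the spirit of delay-based schemes such as TCP Vegas: compare the base (uncongested) RTT with the current RTT to estimate how much of the window is sitting in router queues rather than on the wire. A hypothetical sketch (Python; the functions and threshold are illustrative, not from the talk):

def queued_bytes(cwnd, base_rtt, rtt):
    # Bytes of cwnd estimated to sit in router queues rather than on the wire
    return cwnd * (1 - base_rtt / rtt)

def moderate(cwnd, base_rtt, rtt, threshold=64 * 1024):
    # Shrink the window if the queue estimate exceeds the threshold
    if queued_bytes(cwnd, base_rtt, rtt) > threshold:
        return int(cwnd * base_rtt / rtt)  # aim back at the wire pipe
    return cwnd

# Example: 8 MB window, 40 ms base RTT inflated to 60 ms by queueing
print(moderate(8_000_000, 0.040, 0.060))  # -> about 5.3 MB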

scp: CESNET -> Uninett, 1.5 MB window: 10.4 Mb/s, 9% CPU load

scp, cont.
Patched scp with increased CHAN_SES_WINDOW_DEFAULT:
- set to 20, rwnd = 1.5 MB: 48 Mb/s, 45% CPU load
- set to 40, rwnd = 1.5 MB: 88 Mb/s, 85% CPU load
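A plausible reading of these numbers, sketched below: scp's own channel-level flow control caps throughput at roughly channel window / RTT, independent of the TCP window, so enlarging CHAN_SES_WINDOW_DEFAULT raises the ceiling until the cipher makes the transfer CPU-bound. The stock default of 4 packets and the 32 KB channel packet size are assumptions about the OpenSSH version used, not stated in the slides:

rtt = 0.040         # round-trip time, seconds
packet = 32 * 1024  # assumed SSH channel packet size (CHAN_SES_PACKET_DEFAULT), bytes

for mult in (4, 20, 40):  # assumed stock default, then the two patched values
    window = mult * packet
    cap = window / rtt * 8 / 1e6
    print("CHAN_SES_WINDOW_DEFAULT = %2d -> cap ~ %3.0f Mb/s" % (mult, cap))

The measured rates (10.4, 48, 88 Mb/s) sit at a roughly constant fraction of these caps (about a third), consistent with the channel window, not TCP, being the limiting factor, with the remaining gap attributable to per-window stalls and the CPU cost of the cipher.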

PERT: Performance Enhancement and Response Team
- TF-NGN initiative (TERENA, DANTE, so far 6 NRENs)
- Comparable to CERT
- A support structure that helps users solve performance issues when using applications over a computer network:
  - hierarchical structure
  - accepting and resolving performance cases
  - knowledge dissemination
  - measurement and monitoring infrastructure
- Relations with other organizations
- Pilot project in 2003

Conclusion
- Configuration and interaction of existing components (application, TCP buffers, OS, network adapter, router buffers) may influence E2E performance more than new congestion control algorithms
- Use the "wire pipe" rather than the "buffered pipe"
- Autoconfiguration should not just save memory and set buffers "large enough"; it should also keep them "small enough"
- PERT pilot project started