Performance Measurement on Large Bandwidth-Delay Product Networks
Masaki Hirabaru, NICT Koganei
3rd e-VLBI Workshop, October 6, 2004, Makuhari, Japan

An Example: How Much Speed Can We Get?
[Diagram: sender and receiver, each on GbE, connected across a high-speed backbone with RTT 200 ms; in case a-1 an L2 switch and in case a-2 an L2/L3 switch carries a 100 Mbps segment that forms the bottleneck.]

Average TCP throughput: less than 20 Mbps when the sending rate is limited to 100 Mbps. This is TCP's fundamental behavior on such a path.
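To see why the long-run average stays so far below 100 Mbps, here is a rough worked sketch (not from the original slides; the 1500-byte packet size and NewReno's one-segment-per-RTT growth after a loss are assumptions):

```python
# Rough sketch (assumed values): why standard TCP averages well below
# 100 Mbps on a 200 ms RTT path that sees occasional loss.
rate_bps = 100e6          # bottleneck rate limit
rtt_s = 0.2               # round-trip time
pkt_bytes = 1500          # assumed packet size

# Congestion window needed to fill 100 Mbps at 200 ms RTT
cwnd_pkts = rate_bps * rtt_s / (pkt_bytes * 8)
print(f"window to fill the link: {cwnd_pkts:.0f} packets")    # ~1667

# After a loss, NewReno halves the window and regrows it by roughly
# one packet per RTT, so recovery takes about cwnd/2 RTTs.
recovery_s = (cwnd_pkts / 2) * rtt_s
print(f"time to regrow after one loss: {recovery_s:.0f} s")   # ~167 s
```

With recovery times of minutes per loss event, even a low loss rate keeps the average far below the configured 100 Mbps.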

An Example (2)
[Diagram: case b) sender and receiver connected end-to-end over GbE across the high-speed backbone, RTT 200 ms, with only 900 Mbps available.]

Purposes
Measure, analyze and improve end-to-end performance in high bandwidth-delay product, packet-switched networks:
– to support networked science applications
– to help operations find a bottleneck
– to evaluate advanced transport protocols (e.g. Tsunami, SABUL, HSTCP, FAST, XCP, [ours])
Improve TCP under easier conditions:
– a single TCP stream
– memory to memory
– a bottleneck, but no cross traffic
Goal: consume all the available bandwidth.

TCP on a Path with a Bottleneck
[Diagram: the sender's packets queue at the bottleneck; the queue overflows and a packet is lost.]
– The sender may generate burst traffic.
– The sender recognizes the overflow only after a delay greater than RTT/2.
– The bottleneck may change over time.
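A small illustrative calculation (values assumed: GbE sending rate, 200 ms RTT) of how much extra data the sender can inject before the overflow signal reaches it:

```python
# Illustrative only: data a sender keeps injecting before it can
# notice an overflow, assuming a GbE sending rate and RTT 200 ms.
send_rate_bps = 1e9     # assumed GbE sending rate
rtt_s = 0.2             # assumed round-trip time

# The loss signal needs at least RTT/2 to travel back to the sender.
blind_time_s = rtt_s / 2
extra_bytes = send_rate_bps * blind_time_s / 8
print(f"data sent before reacting: {extra_bytes / 1e6:.1f} MB")  # 12.5 MB
```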

Web100: a kernel patch for monitoring and modifying TCP metrics in the Linux kernel. We need to see TCP's internal behavior to identify a problem.
Iperf: TCP/UDP bandwidth measurement.
bwctl: a wrapper for iperf that adds authentication and scheduling.
tcpplot: a visualizer for Web100 data.
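A hypothetical iperf invocation for the kind of memory-to-memory test described in these slides (the host name is invented; the flags are those of the iperf 2.x of that era):

```
# On the receiver (hypothetical host "recv.example.org"):
iperf -s -w 64M

# On the sender: a 60-second memory-to-memory TCP test with 64 MB windows,
# reporting every 5 seconds.
iperf -c recv.example.org -w 64M -t 60 -i 5
```

bwctl would schedule and authenticate an equivalent iperf test between its daemons on the two hosts.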

1st Step: Tuning a Host with UDP
Remove any bottlenecks on the host: CPU, memory, bus, OS (driver), ...
Dell PowerEdge 1650 (*not enough power):
– Intel Xeon 1.4 GHz x1 (2), memory 1 GB
– Intel Pro/1000 XT onboard, PCI-X (133 MHz)
Dell PowerEdge 2650:
– Intel Xeon 2.8 GHz x1 (2), memory 1 GB
– Intel Pro/1000 XT, PCI-X (133 MHz)
Iperf UDP throughput: 957 Mbps
– GbE wire rate minus headers: UDP (8 B) + IP (20 B) + Ethernet II (38 B)
– Linux (RedHat 9) with Web100
– PE1650: TxIntDelay=0
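A minimal sketch of where the 957 Mbps figure comes from, assuming a 1500-byte IP MTU and the header sizes listed above:

```python
# Sketch of the 957 Mbps figure: maximum UDP goodput on GbE with a
# 1500-byte IP MTU (assumed), counting the per-frame overheads from the
# slide: UDP 8 B, IP 20 B, Ethernet II framing 38 B.
line_rate_bps = 1e9
ip_mtu = 1500
udp_payload = ip_mtu - 20 - 8          # 1472 B of application data
wire_frame = ip_mtu + 38               # 1538 B on the wire per frame
goodput = line_rate_bps * udp_payload / wire_frame
print(f"theoretical UDP goodput: {goodput / 1e6:.0f} Mbps")  # ~957 Mbps
```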

2nd Step: Tuning a Host with TCP
Maximum socket buffer size (TCP window size):
– net.core.wmem_max, net.core.rmem_max (64 MB)
– net.ipv4.tcp_wmem, net.ipv4.tcp_rmem (64 MB)
Driver descriptor length:
– e1000: TxDescriptors=1024, RxDescriptors=256 (default)
Interface queue length:
– txqueuelen=100 (default)
– net.core.netdev_max_backlog=300 (default)
Interface queue discipline:
– fifo (default)
MTU:
– mtu=1500 (IP MTU)
Iperf TCP throughput: 941 Mbps
– GbE wire rate minus headers: TCP (32 B) + IP (20 B) + Ethernet II (38 B)
– Linux (RedHat 9) with Web100
Web100 settings (incl. HighSpeed TCP):
– net.ipv4.web100_no_metric_save=1 (do not store TCP metrics in the route cache)
– net.ipv4.WAD_IFQ=1 (do not send a congestion signal on buffer full)
– net.ipv4.web100_rbufmode=0, net.ipv4.web100_sbufmode=0 (disable auto tuning)
– net.ipv4.WAD_FloydAIMD=1 (HighSpeed TCP)
– net.ipv4.web100_default_wscale=7 (default)
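Likewise, a sketch of the 941 Mbps figure and of why 64 MB socket buffers are ample, assuming a 1500-byte IP MTU, the header sizes listed above, and a 200 ms RTT:

```python
# Sketch of the 941 Mbps figure and the 64 MB socket-buffer choice
# (assumed values where noted).
line_rate_bps = 1e9
ip_mtu = 1500
tcp_payload = ip_mtu - 20 - 32         # 32 B TCP header incl. options (per slide)
wire_frame = ip_mtu + 38               # Ethernet II framing overhead
goodput = line_rate_bps * tcp_payload / wire_frame
print(f"theoretical TCP goodput: {goodput / 1e6:.0f} Mbps")    # ~941 Mbps

# The socket buffer must cover the bandwidth-delay product.
# For GbE at an assumed 200 ms RTT:
bdp_bytes = line_rate_bps * 0.2 / 8
print(f"BDP at 1 Gbps x 200 ms: {bdp_bytes / 2**20:.0f} MiB")  # ~24 MiB, so 64 MB leaves headroom
```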

TransPAC/I2 Test: HighSpeed TCP (60 min), from Tokyo to Indianapolis

Test in a Laboratory, with a Bottleneck
[Diagram: sender and receiver hosts connected through an L2 switch (FES12GCF) and a network emulator configured for 800 Mbps bandwidth, 88 ms delay, and zero loss, over GbE/SX and GbE/T links.]
*BDP = 16 MB (BDP: bandwidth-delay product)
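The slide does not say whether the 88 ms emulator delay is one-way or round-trip; the sketch below assumes it applies in each direction (RTT of roughly 176 ms), which roughly matches the quoted 16 MB figure:

```python
# Interpretation assumed: 88 ms emulated delay per direction -> RTT ~176 ms.
bdp_bytes = 800e6 * 0.176 / 8
print(f"BDP at 800 Mbps x ~176 ms RTT: {bdp_bytes / 1e6:.1f} MB")  # ~17.6 MB, quoted as ~16 MB
```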

Laboratory Tests: 800 Mbps Bottleneck
[Throughput plots: TCP NewReno (Linux) vs. HighSpeed TCP (Web100).]

BIC TCP
[Throughput plots with bottleneck buffer sizes of 100 packets and 1000 packets.]

FAST TCP
[Throughput plots with bottleneck buffer sizes of 100 packets and 1000 packets.]

Identify the Bottleneck
Existing tools: pathchar, pathload, pathneck, etc.
– They estimate the available bandwidth along the path.
– But how large is the buffer at the bottleneck (router)?
pathbuff (under development):
– measures the buffer size at the bottleneck
– sends a packet train, then detects loss and delay

A Method of Measuring Buffer Size
[Diagram: the sender emits a train of n packets over an interval T into a network with a bottleneck of capacity C; the receiver observes the arriving train.]
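The slides do not spell out the estimation formula, so the following is only a plausible sketch of the packet-train idea, not the actual pathbuff algorithm: if the train overflows the bottleneck queue, the queuing delay observed just before the first loss bounds the buffer size.

```python
# Illustrative sketch only (not the pathbuff implementation): estimate
# the bottleneck buffer from a packet train, assuming the bottleneck
# capacity C is known and per-packet one-way delays can be measured.
def estimate_buffer_bytes(capacity_bps, recv_delays_s, saw_loss):
    """recv_delays_s: one-way delays of packets that arrived;
    saw_loss: True if at least one packet of the train was dropped."""
    if not saw_loss or not recv_delays_s:
        return None                       # queue never overflowed
    base = min(recv_delays_s)             # propagation + transmission only
    queued = max(recv_delays_s) - base    # queuing delay just before the drop
    # A full buffer drains in `queued` seconds at the bottleneck rate.
    return capacity_bps * queued / 8

# Example with made-up numbers: 800 Mbps bottleneck, delays rising from
# 88 ms to 89.5 ms before a drop -> roughly a 150 KB buffer estimate.
print(estimate_buffer_bytes(800e6, [0.088, 0.0885, 0.089, 0.0895], True))
```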

Typical Cases of Congestion Points
[Diagram: two cases.]
– Congestion point at a switch with a small buffer (~100 packets): inexpensive, but poor TCP performance on a high bandwidth-delay path.
– Congestion point at a router with a large buffer (>= 1000 packets): better TCP performance on a high bandwidth-delay path.
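As a rough illustration of why roughly 100 packets is far too little for a high bandwidth-delay path (assumptions: the classic "buffer ≈ bandwidth × RTT" rule of thumb, GbE rate, 200 ms RTT, 1500-byte packets):

```python
# Rough illustration (assumed values): buffer needed at the congestion
# point under the classic "buffer ~= bandwidth x RTT" rule of thumb.
rate_bps, rtt_s, pkt_bytes = 1e9, 0.2, 1500
bdp_packets = rate_bps * rtt_s / (pkt_bytes * 8)
print(f"BDP: ~{bdp_packets:.0f} packets")   # ~16700, vs ~100 in a small switch
```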

Summary
– Performance measurement to obtain a reliable result and to identify a bottleneck
– The bottleneck buffer size has an impact on the result
Future Work
– A performance measurement platform operated in cooperation with applications

Network Diagram for e-VLBI and Test Servers
[Map: Kashima and Koganei connect via JGN II (1G/10G) to the Tokyo XP; TransPAC/JGN II (10G, ~9,000 km) crosses to Los Angeles and Chicago and continues over Abilene (10G) toward Indianapolis, Washington DC, and MIT Haystack (2.4G x2), with GEANT/SWITCH a further ~7,000 km beyond. On the Korean side, KOREN (2.5G/10G) links Seoul XP, Daejon, Taegu, Kwangju, and Busan, and a 2.5G SONET APII circuit connects Busan to Fukuoka/Kitakyushu (Genkai XP, 1G/10G). Asterisks mark performance measurement points; bwctl, perf, and e-VLBI servers and a measurement directory are deployed along the path.]