Realization of a stable network flow with high performance communication in high bandwidth-delay product network. Y. Kodama, T. Kudoh, O. Tatebe, S. Sekiguchi.

Presentation transcript:

Realization of a stable network flow with high performance communication in high bandwidth-delay product network
Y. Kodama, T. Kudoh, O. Tatebe, S. Sekiguchi
Grid Technology Research Center, National Institute of Advanced Industrial Science and Technology (AIST)
CHEP 2004, 30 Sep. 2004

Outline
- Background: what is the problem in a high bandwidth-delay product network?
- Smooth traffic shaping
- Hardware network testbed GNET-1
- Experiments: results on a network emulated by GNET-1; results on a transpacific network in BWC03
- Conclusion

Background
Why is traffic on a high bandwidth-delay product network not stable?
[Figure: streams A, B, and C, each averaging 500 Mbps, share a 2.4 Gbps network; each stream peaks at its 1 Gbps line rate.]
The aggregate average of 1.5 Gbps is below the 2.4 Gbps bottleneck, yet packets are sometimes lost, because the coinciding 1 Gbps peaks can momentarily exceed the bottleneck capacity. TCP has software pacing through the self-clocking of ACK packets, but it is not always effective.
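To see why average load below capacity can still lose packets, consider a toy queue model (the overlap duration and function below are illustrative, not from the talk): three streams bursting at their 1 Gbps line rate arrive at 3 Gbps instantaneously, so a 2.4 Gbps bottleneck queues the 600 Mbps excess for as long as the bursts overlap.

```python
def queue_growth_bytes(arrival_bps: float, service_bps: float,
                       overlap_s: float) -> float:
    """Bytes queued at a bottleneck while bursts from several streams overlap."""
    return max(arrival_bps - service_bps, 0) * overlap_s / 8

# Three streams bursting at 1 Gbps each into a 2.4 Gbps bottleneck:
# a 7 ms overlap already queues more than a 512 KB router buffer holds.
print(queue_growth_bytes(3e9, 2.4e9, 0.007))  # -> 525000.0 bytes
```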

Smooth traffic shaping
Limit the bandwidth of each stream rigidly to 1/n of the bottleneck line rate by adjusting the IFG (Inter-Frame Gap). Adjusting the IFG limits the stream bandwidth very smoothly; we realize this in the hardware network testbed GNET-1.
[Figure: on a 1 Gbps link, setting the IFG equal to the frame length spaces the frames so that the average rate is 500 Mbps.]
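As a rough sketch of the arithmetic behind IFG-based shaping (the function name and parameters are illustrative, not the GNET-1 firmware), the achieved rate is the line rate scaled by the fraction of each frame-plus-gap cycle spent transmitting:

```python
def ifg_for_target_rate(frame_len_bytes: int, line_rate_bps: float,
                        target_rate_bps: float) -> float:
    """Inter-frame gap (in byte times) that shapes a stream to target_rate_bps.

    achieved_rate = line_rate * frame_len / (frame_len + ifg)
    => ifg = frame_len * (line_rate / target_rate - 1)
    """
    return frame_len_bytes * (line_rate_bps / target_rate_bps - 1)

# The slide's example: frames on a 1 Gbps link shaped to 500 Mbps need a
# gap equal to one frame length, i.e. IFG = Frame Len.
print(ifg_for_target_rate(1500, 1e9, 500e6))  # -> 1500.0
```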

Adapting smooth traffic shaping
With smooth traffic shaping, traffic on a high bandwidth-delay product network becomes stable.
[Figure: a GNET-1 shapes each of streams A, B, and C to 500 Mbps before they enter the network, so the 1.5 Gbps aggregate stays below the 2.4 Gbps bottleneck without 1 Gbps peaks.]

The Look of GNET-1
[Photograph: the GNET-1 box.]
Width: 19 inch (rack mountable); height: 1U (1.75 inch). GBIC: 4 ports. Control: SNMP agent and USB.

Block Diagram of GNET-1
[Figure: block diagram of GNET-1; the network interfaces connect via GBIC.]

Usage of GNET-1
- Emulation: delay, bit error rate, output bandwidth, buffer control, etc.
- Measurement: precise network statistics; input/output bandwidth every 100 microseconds, with the local clock synchronized by GPS.
- New protocol prototyping: feasibility studies of proposed protocols.
[Figure: GNET-1 boxes deployed at the edges of an Internet path.]
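To illustrate what such fine-grained measurement yields (a sketch under assumed counter semantics, not GNET-1's actual output format), per-interval byte counters sampled every 100 microseconds convert to an instantaneous bandwidth series:

```python
INTERVAL_S = 100e-6  # one byte-counter sample every 100 microseconds

def counters_to_mbps(byte_counts):
    """Convert per-interval byte counts to instantaneous bandwidth in Mbps."""
    return [n * 8 / INTERVAL_S / 1e6 for n in byte_counts]

# e.g. 6250 bytes observed in one 100 us window corresponds to 500 Mbps
print(counters_to_mbps([6250, 12500, 0]))  # -> [500.0, 1000.0, 0.0]
```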

Outline
- Background: what is the problem in a high bandwidth-delay product network?
- Smooth traffic shaping
- Hardware network testbed GNET-1
- Experiments: results on a network emulated by GNET-1; results on a transpacific network in BWC03
- Conclusion

Network emulated by GNET-1
[Figure: PC1 and PC2 send through a switch and a shaping GNET-1, across an emulated bottleneck, to PC3 and PC4.]
- Senders: PC1 runs "iperf -c PC3 -w 8M" and PC2 runs "iperf -c PC4 -w 8M" (with the socket buffer limit raised); standard TCP with the WADIFQ option of Web100.
- Shaping GNET-1: smooth traffic shaping at 250 Mbps per stream; fine-grain bandwidth measurement at 2 ms intervals.
- Bottleneck GNET-1: emulates the bottleneck network with one-way delay 100 ms, bandwidth 500 Mbps, and buffer size 512 KB.
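A minimal sketch of how the two sender PCs could be driven (the hostnames PC3/PC4 and the classic iperf flags follow the slide; the wrapper script itself is illustrative, not part of the experiment):

```python
import subprocess

# Launch the two iperf clients from the slide in parallel:
# -c <host> selects the server, -w 8M requests an 8 MB socket buffer
# (the senders' sockbuf limits must be raised for -w to take effect).
flows = [
    ["iperf", "-c", "PC3", "-w", "8M"],
    ["iperf", "-c", "PC4", "-w", "8M"],
]
procs = [subprocess.Popen(cmd) for cmd in flows]
for p in procs:
    p.wait()
```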

Effects of traffic shaping
[Figure: measured bandwidth traces with no traffic shaping versus with traffic shaping at 250 Mbps per stream; bottleneck: one-way delay 100 ms, 500 Mbps, buffer size 512 KB.]

Transpacific network in BWC03
Bandwidth Challenge at SC'03 (presented in the Computer Fabrics track of CHEP04): the trans-Pacific Gfarm Datafarm testbed.
[Figure: map of the testbed spanning Titech, Univ Tsukuba, KEK, AIST, NII, Indiana Univ, SDSC, and SC2003 Phoenix, connected by SuperSINET (2.4G, via NY), Abilene, APAN/TransPAC via Chicago (OC-12 ATM, 500M) and Los Angeles (2.4G, 1G used), Tsukuba WAN, APAN Tokyo XP, and Maffin. Site clusters range from 7 nodes / 4 TBytes / 0.2 GB/s to 147 nodes / 16 TBytes / 4 GB/s.]
Totals: trans-Pacific theoretical peak 3.9 Gbps; Gfarm disk capacity 70 TBytes; disk read/write 13 GB/s.

Environment
11 PCs at both ends; the LA line was divided into 3 links. HighSpeed TCP with WADIFQ; MTU size: 6000.
[Figure: PCs behind switches and 1G GNET-1 shapers feed three routes, aggregated onto a 10G link: SuperSINET via New York (1G, 285 ms), APAN/TransPAC via Chicago (500M, 250 ms), and APAN/TransPAC via Los Angeles (2.4G, 141 ms).]

Smooth traffic shaping (results of BWC03)
Initial shaping: 930 Mbps in NY, 500 Mbps in Chicago, 800 Mbps in LA3, 750 Mbps in LA2, 800 Mbps in LA1. Retuned shaping: 950 Mbps in NY (+20), 500 Mbps in Chicago, 800 Mbps in LA3, 750 Mbps in LA2, 780 Mbps in LA1 (-20).
Achieved a stable 3.78 Gbps disk-to-disk data transfer on a 3.9 Gbps, 144 ms long-fat network. Currently the shaping bandwidth is set by the user; we will add an automatic tuning facility.
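The aggregate follows directly from the retuned per-route rates (a quick check using the slide's numbers):

```python
# Retuned shaping rates per route, in Mbps, from the slide
rates_mbps = {"NY": 950, "Chicago": 500, "LA3": 800, "LA2": 750, "LA1": 780}
print(sum(rates_mbps.values()) / 1000, "Gbps")  # -> 3.78 Gbps
```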

Conclusion and Future Plan
- Smooth traffic shaping with GNET-1 realizes stable network traffic on a high bandwidth-delay product network.
- Automatic tuning of the bandwidth of each stream is the next challenge.
- Please refer to http://www.gtrc.aist.go.jp/gnet/ for details of GNET-1.
- We are also developing a software pacing method in the network driver (a user-space sketch of the idea follows) and a new tool for 10GbE.
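To make the idea of software pacing concrete (a user-space illustration only; the authors' method lives in the network driver, and this sketch is not their implementation), pacing means spacing out transmissions so the instantaneous rate never bursts above the target:

```python
import socket
import time

def paced_send(sock, dest, payloads, target_bps):
    """Send each payload no earlier than its scheduled slot, so the
    average rate stays at target_bps without line-rate bursts."""
    next_send = time.monotonic()
    for data in payloads:
        # Sleep until this packet's transmission slot.
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        sock.sendto(data, dest)
        next_send += len(data) * 8 / target_bps  # slot width for this packet

# Illustrative use: pace 1000 UDP datagrams of 1 KB at 100 Mbps to a
# hypothetical receiver at 192.0.2.1:5001.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
paced_send(sock, ("192.0.2.1", 5001), [b"x" * 1000] * 1000, 100e6)
```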

Photograph of GNET-10
[Photograph: the GNET-10 box.]
19-inch rack mountable, 2U height. FPGA: XC2VP75 x 2. Memory: 1 GByte x 2. 10GbE: LR, 2 ports. GbE: GBIC, 2 ports.

Sockbuf and WADIFQ effects on a stream
- Measurement every 1 ms. Bottleneck line: 100 ms one-way delay, 1 Gbps, 16 MB buffer; no packet loss on the network.
- WADIFQ: a full interface queue (IFQ) is not counted as congestion, which has the same effect as setting the IFQ very large.
- Required sockbuf size: 100 ms x 2 x 1 Gbps = 25 MB (the round-trip bandwidth-delay product).
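The required socket buffer is simply the round-trip bandwidth-delay product; a quick check of the slide's arithmetic:

```python
def required_sockbuf_bytes(one_way_delay_s: float, rate_bps: float) -> float:
    """Socket buffer needed to keep a TCP pipe full: RTT * bandwidth."""
    rtt_s = 2 * one_way_delay_s
    return rtt_s * rate_bps / 8

# Slide's example: 100 ms one-way delay on a 1 Gbps line
print(required_sockbuf_bytes(0.100, 1e9))  # -> 25000000.0 bytes, i.e. 25 MB
```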