1 Network Tests at CHEP
K. Kwon, D. Han, K. Cho, J.S. Suh, D. Son (Center for High Energy Physics, KNU, Korea)
H. Park (Supercomputing Center, KISTI, Korea)
Representing the HEP Working Group for ANF / the HEP Data Grid WG
The 3rd International Workshop on HEP Data Grid, August 26, 2004, Daegu, Korea
2 Introduction
Network Tests using Iperf: domestic tests; international tests (USA, Europe)
Real File Transfer Tests using bbFTP: domestic tests; international tests (Europe)
Summary & Future Work
3 HEP Data Grid
Implementation of the Tier-1 Regional Data Center for the LHC
Networking:
  Tier-0 (CERN) to Tier-1 (CHEP): ~2.5 Gbps via TEIN
  Tier-1 (CHEP) to Tier-1 (US and Japan): ~Gbps via APII
  Tier-1 (CHEP) to Tier-2/3 (inside Korea): 0.155-1 Gbps via KOREN/KREONET
Computing: 1000-CPU Linux clusters
Data storage capability:
  Storage: 1.1 PB RAID-type disk (Tier-1 + Tier-2)
  Tape drive: ~3.2 PB, HPSS servers
4 Network at CHEP & Available Research Networks
[Network diagram: CHEP servers, clustered PCs, HSM, and a network test PC behind Cisco 7606 and Cisco 6509 switches at the KNU computer center; 1 G links to KOREN and KREONET; external links include APII/Hyunhae-Genkai to Japan, APII/KREONET2 (2 x 622 M) toward the USA, TransPAC (2.5 G), and TEIN (34 M) plus DataTAG (2.5 G) toward CERN/Geneva]
5 Test Tools
Iperf: a tool for measuring TCP bandwidth performance. Data is sent by default from the client's memory to the server's memory.
bbFTP: file-transfer software optimized for large files. Supports multi-stream transfer and big windows.
TCP Reno: the TCP protocol of Linux 2.4.26
6 Factors Affecting TCP Performance
Window size
Number of streams
MTU: not tried yet
txqueuelen: no gain in performance
SACK: no gain in performance
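The window-size factor comes down to the bandwidth-delay product (BDP): a single TCP stream can only fill the pipe if its window is at least RTT x bandwidth. A small sketch (my own helper, not from the slides) that reproduces the BDP figures used in the tests:

```python
def bdp_mb(rtt_s, bw_mbps):
    """Bandwidth-delay product in decimal megabytes: RTT x bandwidth."""
    bits_in_flight = rtt_s * bw_mbps * 1e6  # bits on the wire at any moment
    return bits_in_flight / 8 / 1e6         # convert bits -> MB

# Figures from the test slides (1 Gbps paths):
print(bdp_mb(0.002, 1000))  # KOREN-NOC, RTT ~2 ms   -> 0.25 MB
print(bdp_mb(0.130, 1000))  # Caltech,   RTT ~130 ms -> 16.25 MB (~16 MB)
print(bdp_mb(0.370, 1000))  # CERN,      RTT ~370 ms -> 46.25 MB (~46 MB)
```

A window smaller than the BDP caps throughput at roughly window / RTT, which is why the long-RTT international paths need windows of tens of megabytes.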
7 CHEP - KOREN-NOC Iperf Test
RTT: ~2 ms
BDP = 0.002 s x 1000 Mbps = 0.25 MB
Max throughput: 920 Mbps
Throughput of five streams: 916 Mbps
[Path: CHEP (1 G) to KOREN-NOC (1 G, GigabitEthernet3/1) over KOREN at KNU]
8 Single-Stream Tests between CHEP and Caltech
Duration: 10 min each, 10 min interval, over the KOREN-TransPAC path (1 Gbps); 20 MB window; TCP (Linux 2.4.26)
RTT: ~130 ms
BDP = 0.13 s x 1000 Mbps = 16 MB
Max throughput: 146 Mbps
[Path: KNU (1 G) - Busan - KOREN/Hyunhae-Genkai - Tokyo - TransPAC (2.5 G) - LA - CalREN2 - Caltech (1 G)]
9 Multi-Stream Tests between CHEP and Caltech
Duration: 10 min each over the KOREN-TransPAC path (1 Gbps); (streams x window) <= 100 MB
Monitored links: APII/Genkai (APII-Juniper ge-0/1/0.1), TransPAC LA (TPR2 so-2/0/0.0)
Max throughput: 783 Mbps
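The (streams x window) <= 100 MB constraint trades per-stream window size against stream count: the aggregate window across all streams should cover the path BDP while staying under the 100 MB cap. A hypothetical planning helper (names are mine, not from the slides):

```python
import math

def streams_needed(bdp_mb, window_mb):
    """Minimum number of parallel streams whose aggregate window covers the BDP."""
    return math.ceil(bdp_mb / window_mb)

def within_cap(streams, window_mb, cap_mb=100):
    """The constraint used in the multi-stream tests: streams * window <= cap."""
    return streams * window_mb <= cap_mb

# e.g. a path with a ~46 MB BDP (the CERN case) and 5 MB per-stream windows:
n = streams_needed(46, 5)   # -> 10 streams
print(n, within_cap(n, 5))  # 10 * 5 MB = 50 MB <= 100 MB -> True
```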
10 Single-Stream Tests between CHEP and CERN
Duration: 10 min each, 10 min interval, over the KOREN-TransPAC path; 40 MB window; TCP (Linux 2.4.26)
RTT: ~370 ms
BDP = 0.37 s x 1000 Mbps = 46 MB
Max throughput: 99 Mbps
[Path: KNU (1 G) - Busan - KOREN/Hyunhae-Genkai - Tokyo - TransPAC (2.5 G) - LA - Abilene - Chicago - DataTAG (2.5 G) - CERN]
11 Multi-Stream Tests between CHEP and CERN
Duration: 10 min each over the KOREN-TransPAC path (1 Gbps); (streams x window) <= 100 MB
Monitored links: APII/Genkai (APII-Juniper ge-0/1/0.1), TransPAC LA (TPR2 so-2/0/0.0)
Max throughput: 714 Mbps
12 Other TCP Stacks
Setup for HS-TCP:
  net.ipv4.tcp_rmem = 4096 87380 67108864
  net.ipv4.tcp_wmem = 4096 87380 67108864
  net.ipv4.tcp_mem = 8388608 8388608 67108864
  txqueuelen = 1000
Setup for FAST TCP:
  net.ipv4.tcp_rmem = 4096 33554422 134217728
  net.ipv4.tcp_wmem = 4096 33554422 134217728
  net.ipv4.tcp_mem = 4096 33554422 134217728
  txqueuelen = 1000
13 Real File Transfer between KNU & KOREN over Linux TCP
[root@cluster90 bbftpc]# ./bbftp -V -e 'setrecvwinsize 1024; setsendwinsize 1024; put ams' -u root 203.255.252.26
Password:
>> USER root PASS
<< bbftpd version 3.0.2 : OK
>> COMMAND : setremotecos 0
<< OK : COS set
>> COMMAND : setrecvwinsize 1024
<< OK
>> COMMAND : setsendwinsize 1024
<< OK
>> COMMAND : put ams ams
<< OK
1024000000 bytes send in 19.8 secs (5.06e+04 Kbytes/sec or 395 Mbits/s)
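bbFTP reports its rate in binary megabits (2**20 bits) per second. A quick check (illustrative, my own helper) that reproduces the rate in the transcript above from the byte count and elapsed time:

```python
def bbftp_mbits(nbytes, secs):
    """Throughput the way bbFTP prints it: binary megabits (2**20 bits) per second."""
    return nbytes * 8 / secs / 2**20

rate = bbftp_mbits(1024000000, 19.8)
print(round(rate))  # -> 395, matching "395 Mbits/s" in the transcript
```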
14 I/O Test Run Rules
The maximum file size should be greater than the total physical memory to get accurate results (Iozone file-system benchmark).
Perform 40x the physical RAM size worth of I/O to minimize the error due to I/O being read out of cache (3ware white paper).
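The 40x rule bounds how much of the benchmark the page cache could silently absorb: at most RAM / (total I/O) of the reads can be cache hits. A back-of-the-envelope sketch (function and figures are mine, not from the white paper):

```python
def max_cache_fraction(ram_gb, io_gb):
    """Upper bound on the fraction of benchmark I/O the page cache could serve."""
    return ram_gb / io_gb

# e.g. a machine with 4 GB RAM doing 40x RAM worth of I/O:
print(max_cache_fraction(4, 4 * 40))  # -> 0.025, i.e. at most 2.5% from cache
```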
15 Real File Transfer (100 GB) between KNU & KOREN
AMD Opteron dual, tuned RAID 0 (read 197 MB/s, write 178 MB/s with Iozone) at KNU; Xeon 2 GHz dual with ATA disk drive at KOREN-NOC, Daejeon; 1 G links over KOREN (2.5 G)
Time taken: 1 hour 20 min 58 s
Average throughput: 164 Mbps
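The average throughput follows directly from the file size and elapsed time. A quick check (decimal GB assumed, helper name mine) against the figures on this slide and the later Lustre slide:

```python
def avg_mbps(gbytes, h, m, s):
    """Average throughput in Mbps for gbytes (decimal GB) over h:m:s elapsed."""
    secs = h * 3600 + m * 60 + s
    return gbytes * 1e9 * 8 / secs / 1e6

print(avg_mbps(100, 1, 20, 58))  # ~164.7 Mbps; slide reports 164 Mbps
print(avg_mbps(100, 0, 52, 55))  # ~252 Mbps; Lustre slide reports 251 Mbps
```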
16 Real File Transfer (1 TB) between KNU & KOREN
KNU file server (tuned RAID 0, read 197 MB/s, write 178 MB/s with Iozone) to five machines (A-E) at KOREN-NOC in Daejeon, 1 G, 200 GB each
A: 3 hr 13 min 20 s
B: 3 hr 13 min 31 s
C: 3 hr 14 min 36 s
D: 3 hr 14 min 46 s
E: 3 hr 11 min 22 s
Throughput: 701 Mbps
17 File Transfer (100 GB) with Lustre
Lustre (Linux + Cluster): a distributed file system for large clusters
[Setup: KNU file server as client, 1 G GigE to OST and MDS; 100 GB transferred]
Time taken: 52 min 55 s
Throughput: 251 Mbps
18 Real File Transfer between KNU & CERN over HS-TCP
[kihwan@w01gva bbftpc]$ ./bbftp -V -e 'setrecvwinsize 41024; setsendwinsize 41024; cd /d/Bandwidth/BBftp/bbftp-3.0.2/bbftpd; get ams' -u root cluster90.knu.ac.kr
Password:
>> USER root PASS
<< bbftpd version 3.0.2 : OK
>> COMMAND : setremotecos 0
<< OK : COS set
>> COMMAND : setrecvwinsize 41024
<< OK
>> COMMAND : setsendwinsize 41024
<< OK
>> COMMAND : cd /d/Bandwidth/BBftp/bbftp-3.0.2/bbftpd
<< OK : Current remote directory is /d/Bandwidth/BBftp/bbftp-3.0.2/bbftpd
>> COMMAND : get ams ams
<< OK
1024000000 bytes got in 47.7 secs (2.1e+04 Kbytes/sec or 164 Mbits/s)
19 Summary
A high-bandwidth network is essential for the HEP Data Grid.
Domestic links:
  Only the window size needs to be adjusted to fully utilize the available bandwidth.
  Real file transfers show that speed is limited by physical I/O rather than by the network.
International links:
  Single stream: ~100 Mbps.
  Parallel streams are needed to achieve significant throughput.
  Other TCP stacks may help improve performance.
Further tests and investigation are needed.
20 Future Work
Jumbo frames (9000-byte MTU)
File transfer using RAID disks between KNU and CERN
Tests over lambda networks
21 Terabyte Transfer Test
Equipment: 4 TB RAID, 2 network test machines
CHEP file servers to KISTI over KREONET IPv6 (with lambda), 2.5 Gbps