The transfer performance of iRODS between CC-IN2P3 and KEK


The transfer performance of iRODS between CC-IN2P3 and KEK
Yoshimi Iida, KEK-CC-IN2P3 Workshop, 27-29 November 2007

iRODS
iRODS, which stands for "i Rule Oriented Data System", is the next-generation data-management cyberinfrastructure.
- Rules: control the operations that are performed when a rule is invoked by a particular task. The core set is defined in the "core.irb" file.
- Micro-services: small, well-defined procedures/functions that perform a certain task. The corresponding C functions are called when executing the rule body.
http://www.irods.org/index.php/Main_Page

SRB test system
Two types of SRB system at KEK. [diagram: the Internet connects through the KEK firewall and SINET; the SRB test system sits on the KEK LAN, the production KEK SRB system on the Grid LAN]

Installation on the test system
On the KEK internal network. [diagram: an iRODS client and iRODS server on the KEK LAN behind the KEK firewall, connected over the Internet to the iRODS server with iCAT at CC-IN2P3]

Transfer parameters
The parameters for a data transfer are set in the acSetNumThreads rule in the core.irb file on the server at Lyon:
  acSetNumThreads||msiSetNumThreads(sizePerThrInMb, maxNumThr, windowSize)|nop
- sizePerThrInMb is used to compute the number of threads: numThreads = fileSizeInMb / sizePerThrInMb + 1
- maxNumThr is the maximum number of threads to use (up to 16)
- windowSize is the TCP window size in bytes
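As a concrete illustration (a sketch only; the 32 MB per-thread size is an assumed example value, not the tested configuration), the rule could be instantiated like this:

  # one thread per 32 MB, at most 16 threads, 4 MB (4194304 byte) TCP window
  acSetNumThreads||msiSetNumThreads(32, 16, 4194304)|nop

With these values a 1 GB (1024 MB) file would request 1024/32 + 1 = 33 threads, which maxNumThr caps at 16; the 4 MB window matches the window size used in the tests below.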

iput with several window sizes
The maximum socket buffer size of the server at CC-IN2P3 is 4 MB.

iput test
iput of a 1 GB data file from the KEK internal network to Lyon with 16 threads, repeated over 2 days.
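For reference, such a transfer can be driven from the icommands client roughly as follows (a sketch; the file name and iRODS collection path are placeholders, and the window size comes from the server-side rule above):

  # upload with up to 16 parallel transfer threads
  iput -N 16 test-1GB.dat /tempZone/home/iida/test-1GB.dat

The -N option sets the number of transfer threads; -N 0 would force a single-stream transfer.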

Installation at KEK
The server is on the Grid LAN. [diagram: iRODS servers on the KEK LAN and Grid LAN behind the KEK firewall, connected via SINET and the Internet to the iRODS server with iCAT at CC-IN2P3]

Specifications
- iRODS server at CC-IN2P3: OS Solaris 10; CPU 4 x AMD Opteron 2.6 GHz; Memory 16 GB
- iRODS server at KEK: OS RedHat AS 3; CPU 4 x Intel Xeon 3.0 GHz; Memory 4 GB

iput from KEK to Lyon
1 GB data file transfers over 1 day; window size 4 MB, 16 threads. For comparison, Sput from KEK to Lyon achieved about 2.3 MB/s.

iput from Lyon to KEK
1 GB data file transfers over 12 hours; window size 4 MB, 16 threads. For comparison, Sput from Lyon to KEK achieved about 3.5 MB/s.

From KEK to Lyon
1 GB data transfers; window size 4 MB, 16 parallel streams. bbcp often failed to connect.

From Lyon to KEK
1 GB data transfers; window size 4 MB, 16 parallel streams. iput performs better than bbcp.
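For the bbcp runs, the matching command line would look roughly like this (a sketch; the file name and destination path are placeholders):

  # 16 parallel streams, 4 MB TCP window, mirroring the iput settings
  bbcp -s 16 -w 4M test-1GB.dat ccbbsn09.in2p3.fr:/tmp/test-1GB.dat

Here -s sets the number of parallel streams and -w the TCP window size.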

Plan for iRODS at KEK
Data transfer for the J-PARC project: store the data on the Tokai storage first, then copy it to KEK and delete it from Tokai. Up to 1 PB of data per year; the bandwidth between the two sites will become 10 Gbps. [diagram: Tokai → KEK]

Plan for iRODS at KEK
iRODS provides the means for data registration, replication and purging, and can be set up according to our own policy. High-performance transfer still needs investigation; iRODS is one of the candidate solutions for this data transfer.
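As a rough sketch of how such a policy might be expressed in core.irb (hypothetical only: the trigger rule, the resource names kekResc and tokaiResc, and the micro-service argument lists are assumptions, not a deployed configuration):

  # on ingest: replicate the new object to KEK, then trim the Tokai copy
  acPostProcForPut||msiDataObjRepl($objPath,kekResc,*Status)##msiDataObjTrim($objPath,tokaiResc,null,1,null,*Status)|nop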

Summary
The transfer performance of iRODS is better than that of SRB, and also better than bbcp. Transfer performance will be investigated further, and iRODS will be tested for the planned data transfers.

Bandwidth Lyon-KEK
Measured with iperf, using the options:
-w 4M : TCP window size [bytes]
-P 16 : number of parallel threads
-i 5 : interval between periodic bandwidth reports [sec]
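Put together as full command lines, the measurement would be run roughly as follows (the server hostname is a placeholder):

  iperf -s -w 4M                                  # receiving side
  iperf -c <server hostname> -w 4M -P 16 -i 5     # sending side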

Transfer performance with Sput
Sput of a 1 GB data file between CC-IN2P3 and KEK with 16 threads:
KEK → CC-IN2P3: about 2.3 MB/s
CC-IN2P3 → KEK: about 3.5 MB/s

Network between EU and JP
- From JP to EU, the route passes through the USA.
- JP-USA has at least 2.4 Gbps; EU-USA has 2.5 Gbps or more(?).
- The RTT between KEK and IN2P3 is about 285 ms.

traceroute to ccbbsn09.in2p3.fr (134.158.104.79), 30 hops max, 38 byte packets
 1  130.87.208.203 (130.87.208.203)  0.340 ms  0.245 ms  0.224 ms
 2  ns1kb.kek.jp (130.87.5.11)  0.273 ms  0.311 ms  0.250 ms
 3  keksw1-ns.kek.jp (130.87.4.34)  0.457 ms  0.420 ms  0.397 ms
 4  kekgw.kek.jp (130.87.4.1)  0.554 ms  0.564 ms  0.514 ms
 5  kek-S1-2-2.sinet.ad.jp (150.99.197.9)  0.735 ms  0.779 ms  0.565 ms
 6  tokyo-core1-P10-0.sinet.ad.jp (150.99.197.33)  5.946 ms  6.299 ms  6.247 ms
 7  nii-S1-P4-0.sinet.ad.jp (150.99.197.22)  6.243 ms  6.437 ms  6.311 ms
 8  nii-IX1-P2-0.sinet.ad.jp (150.99.199.174)  6.148 ms  6.230 ms  6.121 ms
 9  NYC-gate1-P3-0.sinet.ad.jp (150.99.198.246)  180.621 ms  180.704 ms  180.666 ms
10  sinet.ny1.ny.geant.net (62.40.103.233)  180.679 ms  180.750 ms  180.729 ms
11  ny.uk1.uk.geant.net (62.40.96.170)  249.457 ms  249.374 ms  249.406 ms
12  so-4-0-0.rt1.par.fr.geant2.net (62.40.112.105)  256.739 ms  256.797 ms  384.238 ms
13  renater-gw.rt1.par.fr.geant2.net (62.40.124.70)  256.639 ms  256.790 ms  256.617 ms
14  lyon-pos6-0.cssi.renater.fr (193.51.179.14)  262.451 ms  262.371 ms  262.214 ms
15  in2p3-lyon.cssi.renater.fr (193.51.181.6)  262.434 ms  262.715 ms  262.433 ms
16  lyon-core.in2p3.fr (134.158.224.3)  263.994 ms  262.435 ms  262.991 ms
17  ccbbsn09.in2p3.fr (134.158.104.79)  262.862 ms  262.470 ms  263.725 ms