Grid Computing for High Energy Physics in Japan Hiroyuki Matsunaga International Center for Elementary Particle Physics (ICEPP), The University of Tokyo.

Grid Computing for High Energy Physics in Japan Hiroyuki Matsunaga International Center for Elementary Particle Physics (ICEPP), The University of Tokyo International Workshop on e-Science for Physics 2008

2 Major High Energy Physics Programs in Japan
KEK-B (Tsukuba)
 – Belle
J-PARC (Tokai)
 – Japan Proton Accelerator Research Complex
 – Operation will start within this year
 – T2K (Tokai to Kamioka) long-baseline neutrino experiment
Kamioka
 – Super-Kamiokande
 – KamLAND
International collaboration
 – CERN LHC (ATLAS, ALICE)
 – Fermilab Tevatron (CDF)
 – BNL RHIC (PHENIX)

3 Grid-Related Activities
ICEPP, University of Tokyo
 – WLCG Tier-2 site for ATLAS
   Regional Center for the ATLAS-Japan group
Hiroshima University
 – WLCG Tier-2 site for ALICE
KEK
 – Two EGEE production sites
   Belle experiment, J-PARC, ILC, …
 – University support
 – NAREGI
Grid deployment at universities
 – Nagoya U. (Belle), Tsukuba U. (CDF), …
Network

4 Grid Deployment at the University of Tokyo
ICEPP, University of Tokyo
 – Involved in international HEP experiments since 1974
 – Operated a pilot system since 2002
 – Current computer system started working last year
 – TOKYO-LCG2, gLite 3 installed
CC-IN2P3 (Lyon, France) is the associated Tier-1 site within the ATLAS computing model
 – Detector data from CERN go through CC-IN2P3
 – Exceptionally long distance for a Tier-1/Tier-2 pair: RTT ~280 ms, ~10 hops
   A challenge for efficient data transfer
 – Data catalog for the files in Tokyo is located at Lyon
ASGC (Taiwan) could be an additional associated Tier-1
 – Geographically nearest Tier-1 (RTT ~32 ms)
 – Operations have been supported by ASGC
 – Neighboring time zone

5 Hardware resources
Tier-2 site plus (non-grid) regional center facility
 – Supports local user analysis by the ATLAS-Japan group
Blade servers
 – 650 nodes (2,600 cores)
Disk arrays
 – 140 boxes (~6 TB/box)
 – 4 Gb Fibre Channel
File servers
 – Each attaches 5 disk arrays
 – 10 GbE NIC
Tape robot (LTO3)
 – 8,000 tapes, 32 drives
(A rough capacity check is sketched below.)
[Table: pledged vs. planned resources – CPU (kSI2k), disk (Tbytes), nominal WAN 2000 Mbits/sec. Photos: tape robot, blade servers, disk arrays]
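As a quick cross-check of the figures above, a short Python snippet sums the raw capacities. The 4 cores per node follow from the slide (650 nodes, 2,600 cores); the ~400 GB per LTO-3 tape is the native cartridge capacity and is an assumption added here.

```python
# Rough capacity check for the Tokyo Tier-2 numbers quoted above.
# Assumptions: 4 cores per blade node (implied by 650 nodes / 2,600 cores)
# and LTO-3 native capacity of ~400 GB per tape; usable capacity is lower.

blade_nodes = 650
cores_per_node = 4
disk_boxes = 140
tb_per_box = 6
tapes = 8000
tb_per_lto3_tape = 0.4

print(f"CPU cores : {blade_nodes * cores_per_node}")               # 2600
print(f"Raw disk  : ~{disk_boxes * tb_per_box} TB")                # ~840 TB
print(f"Raw tape  : ~{tapes * tb_per_lto3_tape:.0f} TB (native)")  # ~3200 TB
```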

6 SINET3
SINET3 (Japanese NREN)
 – Third generation of SINET, in operation since April 2007
 – Provided by NII (National Institute of Informatics)
Backbone: up to 40 Gbps
Major universities connect at 1-10 Gbps
 – 10 Gbps to the Tokyo regional center
International links
 – 2 x 10 Gbps to the US
 – 2 x 622 Mbps to Asia

7 International Links
10 Gbps between Tokyo and CC-IN2P3
 – SINET3 + GEANT + RENATER (French NREN)
 – Public network (shared with other traffic)
1 Gbps link to ASGC (to be upgraded to 2.4 Gbps)
[Map: Tokyo – New York – Lyon over SINET3 (10 Gbps), GEANT (10 Gbps) and RENATER (10 Gbps); separate path to Taipei]

8 Network test with iperf
Memory-to-memory tests performed with the iperf program
Dedicated Linux boxes used at both ends
 – 1 Gbps, limited by the NIC
 – Linux kernel with BIC TCP
 – Window size 8 Mbytes, 8 parallel streams
For Lyon-Tokyo: long recovery time after packet loss due to the long RTT
(A bandwidth-delay-product estimate is sketched below.)
[Plots: throughput over time for Lyon-Tokyo (RTT: 280 ms) and Taipei-Tokyo (RTT: 32 ms)]
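To see why large windows and several parallel streams are needed on this path, a small Python calculation (illustrative only; link speed, RTTs and the 8 MB window are taken from the slide) gives the bandwidth-delay product:

```python
# Bandwidth-delay product (BDP): the amount of data that must be in flight
# to keep a path full. Values taken from the slide; purely illustrative.

def bdp_mbytes(link_gbps: float, rtt_ms: float) -> float:
    """Return the bandwidth-delay product in megabytes."""
    bits_in_flight = link_gbps * 1e9 * (rtt_ms / 1e3)
    return bits_in_flight / 8 / 1e6

for name, rtt in [("Lyon-Tokyo", 280.0), ("Taipei-Tokyo", 32.0)]:
    need = bdp_mbytes(1.0, rtt)   # 1 Gbps host NIC
    # Number of 8 MB TCP windows needed to fill the pipe:
    streams = need / 8
    print(f"{name}: BDP ~{need:.0f} MB -> ~{streams:.1f} x 8 MB windows")

# Lyon-Tokyo: ~35 MB in flight, i.e. roughly four 8 MB windows, so the
# 8 parallel streams used in the test leave headroom for loss recovery.
```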

9 Data Transfer from the Lyon Tier-1 center
Data transferred from Lyon to Tokyo
 – Used the Storage Elements in production
 – ATLAS MC simulation data
Storage Elements
 – Lyon: dCache (>30 gridFTP servers, Solaris, ZFS)
 – Tokyo: DPM (6 gridFTP servers, Linux, XFS)
FTS (File Transfer Service)
 – Main tool for bulk data transfer
 – Executes multiple file transfers (using gridFTP) concurrently; the number of gridFTP streams per file is configurable
 – Used in the ATLAS Distributed Data Management system
(A toy model of this concurrency is sketched below.)
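FTS itself is a gLite service, but its basic scheduling idea, a fixed number of concurrent file transfers each using several gridFTP streams, can be illustrated with a small self-contained Python toy. Here transfer_one only simulates work (it is not a gridFTP call) and the file names are made up:

```python
# Toy illustration of FTS-style channel settings: run at most
# `concurrent_files` transfers at once, each notionally using `streams`
# parallel streams. transfer_one only simulates a transfer.

import time
from concurrent.futures import ThreadPoolExecutor

def transfer_one(filename: str, streams: int) -> str:
    time.sleep(0.1)                     # stand-in for the real transfer time
    return f"{filename} done ({streams} streams)"

def bulk_transfer(files, concurrent_files=20, streams=10):
    # 20 concurrent files x 10 streams each matches the setting quoted
    # on the next slide for the Lyon -> Tokyo tests.
    with ThreadPoolExecutor(max_workers=concurrent_files) as pool:
        futures = [pool.submit(transfer_one, f, streams) for f in files]
        return [f.result() for f in futures]

if __name__ == "__main__":
    results = bulk_transfer([f"mc.simul.{i:04d}.root" for i in range(100)])
    print(len(results), "files transferred")
```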

10 Performance of data transfer
>500 Mbytes/s observed in May 2008
 – File size: 3.5 Gbytes
 – 20 files in parallel, 10 streams each
 – ~40 Mbytes/s for each file transfer
Low activity at CC-IN2P3 during the period (other than ours)
[Plots: aggregate throughput, peaking above 500 Mbytes/s, and throughput per file transfer]

11 Data transfer between ASGC and Tokyo
Transferred 1,000 files per test (1 Gbyte file size)
Tried various numbers of concurrent files / streams
 – From 4/1 up to 25/15
Saturates the 1 Gbps WAN bandwidth
[Plots: throughput for Tokyo -> ASGC and ASGC -> Tokyo at various concurrent-files/streams settings, 4/1 through 25/15]

12 CPU usage in the last year (Sep 2007 – Aug 2008)
3,253,321 kSI2k*hours of CPU time in the last year
 – Most jobs are ATLAS MC simulation
   Job submission is coordinated by CC-IN2P3 (the associated Tier-1)
   Outputs are uploaded to the data storage at CC-IN2P3
 – Large contribution to ATLAS MC production
[Plots: TOKYO-LCG2 CPU time per month; CPU time at large Tier-2 sites]

13 ALICE Tier-2 center at Hiroshima University
WLCG/EGEE site
 – “JP-HIROSHIMA-WLCG”
Possible Tier-2 site for ALICE

14 Status at Hiroshima
Just became an EGEE production site (Aug. 2008)
Associated Tier-1 site will likely be CC-IN2P3
 – No ALICE Tier-1 in the Asia-Pacific region
Resources (a core-count check is sketched below)
 – 568 CPU cores
   Dual-Core Xeon (3 GHz) x 2 CPUs x 38 boxes
   Quad-Core Xeon (2.6 GHz) x 2 CPUs x 32 boxes
   Quad-Core Xeon (3 GHz) x 2 CPUs x 20 blades
 – Storage: ~200 TB next year
Network: 1 Gbps
 – On SINET3
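The quoted 568 cores follow directly from the hardware list above; a one-line check in Python:

```python
# Core count implied by the Hiroshima hardware list.
nodes = [
    (2, 2, 38),  # dual-core Xeon, 2 CPUs per box, 38 boxes
    (4, 2, 32),  # quad-core Xeon, 2 CPUs per box, 32 boxes
    (4, 2, 20),  # quad-core Xeon, 2 CPUs per blade, 20 blades
]
total = sum(cores * cpus * count for cores, cpus, count in nodes)
print(total)  # 568
```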

15 KEK
Belle experiment has been running
 – Need access to the existing petabytes of data
Site operations
 – KEK does not support any LHC experiment
 – Aims to gain experience by operating sites, in order to prepare for a future Tier-1-level Grid center
University support
NAREGI
[Photo: KEK Tsukuba campus with Mt. Tsukuba, the KEKB ring, the linac and the Belle experiment]

16 Grid Deployment at KEK
Two EGEE sites
 – JP-KEK-CRC-1: rather experimental use and R&D
 – JP-KEK-CRC-2: more stable services
NAREGI
 – Beta version used for testing and evaluation
Supported VOs
 – belle (main target at present), ilc, calice, …
 – LCG VOs are not supported
VOMS operation
 – belle (registered in CIC)
 – ppj (accelerator science in Japan), naokek
 – g4med, apdg, atlasj, ail

17 Belle VO
Federation established
 – 5 countries, 7 institutes, 10 sites
   Nagoya Univ., Univ. of Melbourne, ASGC, NCU, CYFRONET, Korea Univ., KEK
 – VOMS is provided by KEK
Activities
 – Submission of MC production jobs
 – Functional and performance tests
 – Interface to the existing petabytes of data

18 [Figure slide; credit: Takashi Sasaki (KEK)]

19 ppj VO
Federated among major universities and KEK
 – Tohoku U. (ILC, KamLAND)
 – U. Tsukuba (CDF)
 – Nagoya U. (Belle, ATLAS)
 – Kobe U. (ILC, ATLAS)
 – Hiroshima IT (ATLAS, computing science)
Common VO for accelerator science in Japan
 – Does NOT depend on specific projects; resources are shared
KEK acts as the GOC
 – Remote installation
 – Monitoring, based on Nagios and a Wiki (a minimal check sketch follows below)
 – Software updates
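The slide does not detail the Nagios setup; as an illustration only, a Nagios-compatible check is simply a program that prints one status line and exits with 0/1/2/3 (OK/WARNING/CRITICAL/UNKNOWN). A hypothetical site check (the path and thresholds are made up, not the KEK configuration) might look like:

```python
#!/usr/bin/env python
# Hypothetical Nagios-style check (illustration only, not the KEK setup):
# warn when free space on a storage path drops too low.
# Nagios plugin convention: print one status line, exit 0/1/2/3.

import os
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_free_space(path: str, warn_gb: float, crit_gb: float) -> int:
    try:
        st = os.statvfs(path)
    except OSError as exc:
        print(f"UNKNOWN - cannot stat {path}: {exc}")
        return UNKNOWN
    free_gb = st.f_bavail * st.f_frsize / 1e9
    if free_gb < crit_gb:
        print(f"CRITICAL - only {free_gb:.1f} GB free on {path}")
        return CRITICAL
    if free_gb < warn_gb:
        print(f"WARNING - {free_gb:.1f} GB free on {path}")
        return WARNING
    print(f"OK - {free_gb:.1f} GB free on {path}")
    return OK

if __name__ == "__main__":
    # Path and thresholds are made-up examples.
    sys.exit(check_free_space("/storage", warn_gb=500.0, crit_gb=100.0))
```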

20 KEK Grid CA
Started in January 2006
Accredited as an IGTF (International Grid Trust Federation) compliant CA
[Table: numbers of issued certificates (personal, host, web server) per Japanese fiscal year: JFY 2006 (Apr. 2006 – Mar. 2007) and JFY 2007 (Apr. 2007 – Mar. 2008)]

21 NAREGI
NAREGI: NAtional REsearch Grid Initiative
 – Host institute: National Institute of Informatics (NII)
 – R&D of Grid middleware for research and industrial applications
 – Main targets are nanotechnology and biotechnology
   More focused on the computing grid; the data grid part was integrated later
Ver. 1.0 of the middleware released in May 2008
 – Software maintenance and user support services will be continued

22 NAREGI at KEK
NAREGI-β versions installed on the testbed
 – 1.0.1: Jun – Nov; manual installation for all the steps
 – 1.0.2: Feb 2007
 – 2.0.0: Oct; apt-rpm installation
 – 2.0.1: Dec
Site federation tests
 – KEK – NAREGI/NII: Oct
 – KEK – National Astronomy Observatory (NAO): Mar
Evaluation of the NAREGI application environment
 – Job submission/retrieval, remote data stage-in/out

23 [Figure slide; credit: Takashi Sasaki (KEK)]

24 Data Storage: Gfarm
Gfarm: a distributed file system
 – The data grid part of NAREGI
 – Data are stored on multiple disk servers
Tests performed:
 – Stage-in and stage-out to the Gfarm storage
 – GridFTP interface, between a gLite site and a NAREGI site
 – File access from applications
   Accessed via FUSE (Filesystem in Userspace), without the need to change the application program (see the sketch below)
   I/O speed is several times slower than local disk
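This is the point of the FUSE approach: once Gfarm is mounted (the mount point /gfarm below is a hypothetical example), an ordinary program reads the data with plain POSIX I/O and needs no Gfarm-specific API calls:

```python
# Ordinary POSIX-style file access; nothing Gfarm-specific in the code.
# The "/gfarm/..." mount point is a hypothetical example path: with a FUSE
# mount of Gfarm, the same code works on local disk and on Gfarm alike.

import hashlib

def checksum(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the MD5 checksum of a file, reading it in 1 MB chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

if __name__ == "__main__":
    print(checksum("/gfarm/belle/mc/sample.dat"))  # hypothetical path
```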

25 Future Plans for NAREGI at KEK
Migration to the production version
Test of interoperability with gLite
Improve the middleware in the application domain
 – Development of a new API for applications
   Virtualization of the middleware for script languages (to be used in the web portal as well)
 – Monitoring: jobs, sites, …

26 Summary
WLCG
 – ATLAS Tier-2 at Tokyo: stable operation
 – ALICE Tier-2 at Hiroshima: just started operation in production
Coordinated effort led by KEK
 – Site operations with the gLite and NAREGI middlewares
 – Belle VO: SRB, to be replaced with iRODS
 – ppj VO: deployment at universities, supported and monitored by KEK
 – NAREGI: R&D, interoperability