Grid related activities at KEK


Grid related activities at KEK Takashi Sasaki

Outline
Strategy
Grid deployment: LCG, NAREGI
R&D: RENKEI, e-Science
Future plans

Strategy
LCG/gLite as the e-Science infrastructure for HEP experiments, e.g. ILC, Belle and Belle II
NAREGI for groups at universities and for supercomputing
  NAREGI is the Japanese e-Science infrastructure
  10 PFlops supercomputer at Kobe: 10 PFlops will be realized by June 2012, production operation starting in April 2013 (a half-year delay due to the budget cut)
R&D in collaboration with people in ICT
iRODS for experiments at J-PARC
SRB is still running for Belle

SINET4
Upgrade of the academic network in Japan operated by the National Institute of Informatics (JFY2011-)
Backbone will be gradually upgraded from 40 Gbps to 120 Gbps (CWDM + dark fibers)
1-40 Gbps connections to the end points
International connections gradually upgraded: 10 Gbps to 20 Gbps between New York and Tokyo, 622 Mbps x 2 to 2.4 Gbps for Asian countries

High Energy Physics in Japan
Major high energy physics activities in Japan
  Active experiments: Belle at KEK, KamLAND at Kamioka, CDF at Fermilab (USA), ATLAS and ALICE at LHC, PHENIX at BNL
  Under construction: J-PARC, T2K experiment at Tokai and Kamioka
  Future plans: SuperB Factory, International Linear Collider (ILC)
HEPnet-J: High Energy Physics network in Japan; KEK provides the network facility, HEPnet-J, on SINET3 (NII)
[Map of HEPnet-J sites on SuperSINET/SINET: KEK (Belle), JAEA Tokai (J-PARC), Tohoku U., Niigata U., Kanazawa U., Shinshu U., Tsukuba U., U. Tokyo, ICRR/Tokyo, Tokyo Metropolitan U., Tokyo Inst. of Tech., Waseda U., RIKEN, Nagoya U., Kyoto U., Osaka U., Osaka City U., Kobe U., Okayama U., Hiroshima U., Nara Women's U., Kamioka (SuperKamiokande, KamLAND), plus links to ATLAS at LHC and CDF at FNAL]

GRID federation of Japanese HEP
LCG/gLite federation: Tohoku, Tsukuba, Nagoya, Kobe, Hiroshima-IT and KEK
NAREGI federation: NAOJ, Hiroshima-IT, NII and KEK; JAXA will join soon
[Map: federation sites connected over the SINET3 20 Gbit/s backbone; Tokyo Tier-2 and Hiroshima Tier-2 indicated]

Virtualization of computing nodes
We have started an evaluation of Platform ISF
Share the same nodes between gLite and NAREGI
Different experiment groups want different software sets with different versions, including Linux kernels
Licensing costs will be a problem; HEP-wide negotiation including LSF?

RENKEI: REsource liNKage for E-science
Funded directly by MEXT during JFY2008-2011
Led by Prof. Ken Miura at NII
NAREGI is the core middleware
Develops the technologies necessary to establish an e-Science community based on Grid technologies
KEK is developing an API to absorb the differences between Grid middlewares

APIs for multiple Grid middlewares, e.g. NAREGI, LCG, … (slide reproduced from ACGRID-II, Go Iwai, KEK/CRC, 14 Nov 2009)

Grid interoperability
Sub-group 2 covers Grid interoperability on the NAREGI side
GIN to PGI
Job exchange among the different Grid middlewares

NAREGI
NAREGI is the Japanese e-Science infrastructure, developed mainly by NII
Small groups at Japanese universities will benefit from using NAREGI because they can expect assistance from their computing centers
LCG operation is very costly in terms of human resources

SAGA: Simple API for Grid Applications
Job handling and file handling
Absorbs middleware differences at the API level, for Grids and also clouds
KEK is working with the C++ implementation while CC-IN2P3 is working with the Java implementation; how can we converge?
KEK wants a Python interface

Python library
We provide commands and a library in Python
Why do we like Python, not Java? We felt difficulties with Java:
  Many VMs running on the same machine
  Memory consumption
  Very sensitive to the version

RNS: Resource Namespace Service
A middleware-independent file catalogue will be provided, developed in sub-group 2 of the RENKEI project
Junctions: other RNS servers can be connected as sub-trees, for higher availability and performance
EPR (End Point Reference): metadata and the URI of the physical file location

RNS: Resource Namespace Service
Hierarchical namespace management that provides name-to-resource mapping (http://www.ogf.org/documents/GFD.101.pdf)
Basic namespace components:
  Virtual directory: non-leaf node in the hierarchical namespace tree
  Junction: name-to-resource mapping that interconnects a reference to any existing resource into the hierarchical namespace
  EPR (End Point Reference), e.g. gsiftp://cc.in2p3.fr/….., gsiftp://desy.de/…..
[Figure: example namespace tree /EXP with MC, RAW and DST virtual directories containing data files, each junction resolving to EPRs]
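To make the catalogue structure concrete, here is a minimal, self-contained Python sketch of the idea: virtual directories form the logical tree and junctions map leaf names to EPRs carrying metadata and physical replica URIs. The class names and file paths are illustrative assumptions, not the RENKEI RNS implementation.

# Illustrative sketch of an RNS-style catalogue (names and paths are
# hypothetical, not the RENKEI/OGF implementation).

class Junction:
    """Leaf entry: maps a logical name to End Point References (EPRs)."""
    def __init__(self, name, epr_uris, metadata=None):
        self.name = name
        self.epr_uris = list(epr_uris)      # physical replica locations
        self.metadata = metadata or {}      # e.g. size, checksum, run number

class VirtualDirectory:
    """Non-leaf node of the hierarchical namespace."""
    def __init__(self, name):
        self.name = name
        self.children = {}                  # name -> VirtualDirectory or Junction

    def add(self, child):
        self.children[child.name] = child
        return child

    def resolve(self, path):
        """Walk a '/'-separated logical path down to a node."""
        node = self
        for part in [p for p in path.split("/") if p]:
            node = node.children[part]
        return node

# Build a tiny namespace resembling the /EXP tree on the slide.
root = VirtualDirectory("EXP")
raw = root.add(VirtualDirectory("RAW"))
raw.add(Junction("file1", ["gsiftp://cc.in2p3.fr/data/file1",
                           "gsiftp://desy.de/data/file1"]))

print(root.resolve("RAW/file1").epr_uris[0])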

Current status
The following SAGA adaptors are implemented in C++:
  NAREGI adaptor (job adaptor + Gfarm adaptor)
  RNS adaptor
  PBS and Torque
Other adaptors are developed or under development in C++: Globus, LSF, cloud (Amazon EC2, Google), gLite
Python interface: Python commands are under design and a prototype has been implemented; demonstrated at SC2010

[Architecture diagram: Python layer on top of SAGA-C++ (OGF standard), with adaptors for gLite, NAREGI, RNS, iRODS, PBS/Torque, LSF, cloud and Globus]
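As a sketch of how the adaptor layer is meant to make back-ends interchangeable, the snippet below reuses the job-submission pattern from the example on the next slide and only changes the service URL; the URL schemes and host names are assumptions for illustration, not a documented part of the KEK bindings.

import saga

# Hypothetical back-end service URLs; the scheme each adaptor accepts
# (naregi://, pbs://, ...) is an assumption for illustration.
backends = [
    "naregi://naregi-host.example.jp",   # NAREGI adaptor
    "pbs://batch.example.jp",            # PBS/Torque adaptor
]

def submit(service_url, script="./test.sh"):
    # Same job-submission pattern as in the example slide;
    # only the service URL changes, the adaptor is chosen behind the API.
    job_service = saga.job.service(saga.url(service_url))
    job_desc = saga.job.description()
    job_desc.executable = script
    job = job_service.create_job(job_desc)
    job.run()
    return job

for url in backends:
    submit(url)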

Simple Python example

import saga
import sys

argvs = sys.argv
argc = len(argvs)

try:
    # Create a job description
    js_url = saga.url(argvs[1])
    job_service = saga.job.service(js_url)
    job_desc = saga.job.description()
    job_desc.executable = './test.sh'
    job_desc.working_directory = '$HOME/workdir'
    job_desc.candidate_hosts = (argvs[2],)

    # Submit a job
    my_job = job_service.create_job(job_desc)
    my_job.run()

except saga.exception, e:
    print "SAGA Error: ", e
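In this example argvs[1] is the job service URL and argvs[2] the candidate execution host, so a hypothetical invocation (service URL and host name are placeholders) would look like: python example.py naregi://naregi-host.example.jp node01.example.jp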

FKPPL + FJPPL: possible joint activities
CC-IN2P3, KISTI and KEK each have a bilateral relationship with the other two laboratories (FKPPL, FJPPL, KEK-KISTI MoU for Belle II)
Establishing a three-way collaboration will be beneficial for everybody

KISTI-KEK
NAREGI deployment: KISTI will introduce a full set of NAREGI soon
Belle II: KISTI will be a Tier-1 center; KISTI has started to make a replica of the current Belle data
Bio-medical: Geant4-based simulation, cancer treatment

Future plans
SAGA-Python commands and libraries will be implemented
Web services