HEP Data Grid in Japan Takashi Sasaki Computing Research Center KEK.


28/07/2004
Contents
– Japanese HENP projects
– Belle GRID
– ATLAS-Japan Regional Center
– Collaboration in Asia Pacific

Major HENP projects in Japan
KEK Proton Synchrotron
– K2K etc.
KEKB
– BELLE experiment: 300 members from 54 institutes in 10 countries
– Super-B is being planned
J-PARC
– new facility under construction at Tokai (~2008)
– particle physics, material science and life science
– T2K
International collaboration
– LHC ATLAS, CERN
– HERA ZEUS, DESY
– RHIC, BNL
– etc.

KEK-B Belle Experiment
(aerial photograph of the KEK-B site, with a 1 km scale bar)

Belle Data
– 15 MB/s data rate at maximum
– 200 TB/year of raw data recorded
– an equivalent number of Monte Carlo events
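As a sanity check on these figures, a quick back-of-envelope calculation; the ~40% effective live time is inferred from the quoted numbers, not stated on the slide:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def yearly_volume_tb(rate_mb_per_s, duty_factor=1.0):
    """Raw-data volume (TB/year) implied by a sustained DAQ rate in MB/s."""
    return rate_mb_per_s * SECONDS_PER_YEAR * duty_factor / 1e6

belle_peak = yearly_volume_tb(15)        # ~473 TB/year at the 15 MB/s peak rate
implied_duty = 200 / belle_peak          # ~0.42: 200 TB/year recorded implies ~40% live time
super_kekb = yearly_volume_tb(250, 0.4)  # ~3.2 PB/year at the Super KEK-B rate of 250 MB/s
```

The same arithmetic shows why the Super KEK-B upgrade discussed below pushes the experiment into the multi-petabyte-per-year regime.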

Belle PC farms

Super KEK-B
– L = 10^35 cm^-2 s^-1 in 2007?
– data rate ~250 MB/s

K2K now upgrading to T2K

T2K (Tokai to Kamioka)
J-PARC
– for particle physics, material and life science
– joint project of JAERI and KEK
– a 100 times more intense neutrino beam
High trigger rate at the near detector
– operational in 2007

LHC-ATLAS
Japanese contributions
– Semiconductor Tracker
– Endcap Muon Trigger System
– Muon Readout Electronics
– Superconducting Solenoid
– DAQ system
– Regional Analysis Facility
ICEPP will be the site of the regional center

HEP GRID related activities
Networking
– domestic/international
BELLE
– data distribution and remote MC production
ATLAS Japan
– collaboration between ICEPP and KEK
Hadron therapy simulation
– application of HEP tools to the medical field

SuperSINET
Major HEP sites have a 1 Gbps DWDM connection to KEK

Current Network Connections around Japan (as of July 2004)
(network map: SuperSINET and SINET domestically, with Tsukuba WAN and a 5 G -> 10 G backbone upgrade; international links via APAN/TransPAC to the USA (NY, Chicago), APII (622 M / 1 G) to Korea, ASNET/TANET2 to Taiwan, CERNET (155 M) to China, plus 155 M to Hawaii, 2 M to Thailand and 0.5 M to Novosibirsk, Russia)

BELLE GRID
Distributed analysis and data sharing among institutes
– experimental data distribution to remote sites
– Monte Carlo production at remote sites, with the events sent back to KEK
– remote job submission
– etc.
Testing SRB with GSI now
– among Australia, Taiwan and KEK
– hopefully Korea soon

Why SRB?
Easy to install, use and manage
– most of Belle's collaborating institutes are middle- or small-sized universities with limited manpower
Useful features
– file replication
– parallel data transfer
– Grid aware: GSI authentication
– command line interface, API, GUI
– fancy Web interface
– etc.
Excellent user support
– quick responses
– seminars
Available today!
– a solution is needed for Belle now

SRB (Storage Resource Broker)
(diagram: within one Zone, SRB clients reach SRB servers over the Internet; the servers front heterogeneous storage -- disk, RAID, tape, NFS and databases -- through a single MCAT metadata catalog)

SRB zone federation
(diagram: two zones, ZoneA and ZoneB, each with its own MCAT catalog and SRB servers over disk, tape, database, NFS and RAID storage, federated over the Internet)

SRB command line interface
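The Scommand client shown on this slide can also be driven from a script. A minimal sketch, assuming the standard SRB Scommands (Sinit, Sput, Sreplicate, Sls, Sexit) are on the PATH; the collection and resource names are illustrative, not the actual KEK ones:

```python
import subprocess

def srb(cmd, *args):
    """Run one SRB Scommand and return its stdout (thin wrapper over any CLI tool)."""
    return subprocess.run([cmd, *args], capture_output=True,
                          text=True, check=True).stdout

def stage_run_file(local_file, collection, remote_resource):
    """Put a Belle run file into an SRB collection and replicate it to a
    remote resource, as in the KEK-ANU federation tests (names hypothetical)."""
    srb("Sinit")                               # session from ~/.srb config (GSI or password)
    srb("Sput", local_file, collection)        # upload into the logical collection
    srb("Sreplicate", "-S", remote_resource,   # second physical copy at the remote zone
        f"{collection}/{local_file}")
    listing = srb("Sls", collection)           # verify the file is visible
    srb("Sexit")
    return listing
```

Because the logical namespace hides the physical storage, the same script works whether the bytes land on a RAID, on NFS or in HPSS.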

SRB test bed system
(diagram: KEK-Australia zone federation over the Internet; KEK side, behind a firewall and DMZ: zones gtsrb13 (gt13, MCAT on DB2) and kekgt15 (gt15, MCAT on PostgreSQL), HPSS 120 TB, and the BELLE computer system with an 800 GB RAID of Belle data NFS-mounted from bcs20; Australia side: zones glsrb01 and anusf with an MCAT on PostgreSQL)

Status
Zone federation between ANU and KEK has been established
– ANU, the University of Melbourne and the University of Sydney are collaborating on BELLE/ATLAS Grid issues
– one MCAT is running at ANU
– data can be stored and retrieved on the Belle data system and also on HPSS at the KEK side
Zone federation between Academia Sinica, Taiwan and KEK has been established
– some problems still need to be solved

BELLE GRID future plan
Participation of Korean sites
Mutual job submission using Globus
LCG2 + SRB (if they wish)
– LCG2 is under testing at KEK, also with help from ICEPP
– because many foreign institutes work both on BELLE and on one of the LHC experiments, they would rather use LCG than vanilla Globus
Baby "tier-0" at KEK and "tier-1" at ICEPP
– SRB and LCG-RLS synchronization will be tried, based on Simon Metson's (Bristol, CMS) work

Grid in ATLAS Japan
Regional analysis center for ATLAS
– ICEPP, the University of Tokyo
Joint collaboration between ICEPP and KEK on Grid deployment

LCG MW Deployment in Japan
RC Pilot Model
– since 2002
– LCG testbed; now LCG2_1_1
LCG2 test
– baby tier-0
– LCG2_1_1
Regional Center Facility
– will be introduced in JFY2006
– aiming at "Tier-1" size resources


SuperSINET (DWDM) Performance Measurement
"A" setting (128 KB window):
– TCP 479 Mbps (-P 1 -t -w 128KB)
– TCP 925 Mbps (-P 2 -t -w 128KB)
– TCP 931 Mbps (-P 4 -t -w 128KB)
– UDP 953 Mbps (-b 1000MB -t -w 128KB)
"B" setting (longer window size, 4096 KB):
– TCP 922 Mbps (-P 1 -t -w 4096KB)
– UDP 954 Mbps (-b 1000MB -t -w 4096KB)
File transfer: "A" setting 104.9 MB/s, "B" setting 110.2 MB/s
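The single-stream TCP numbers follow from the bandwidth-delay product: with a 128 KB window, one stream cannot fill a GbE path once the round-trip time reaches a few milliseconds. A sketch of the arithmetic, using the ~3 ms KEK-ICEPP RTT quoted on a later slide:

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps / 8 * rtt_s

def window_limited_bps(window_bytes, rtt_s):
    """Throughput ceiling of a single TCP stream with a fixed window."""
    return window_bytes * 8 / rtt_s

required = bdp_bytes(1e9, 0.003)                     # 375,000 bytes, well over 128 KB
one_stream = window_limited_bps(128 * 1024, 0.003)   # ~350 Mbps: why -P 1 stays below line rate
```

This is also why either parallel streams (-P 2, -P 4) or the 4096 KB window of the "B" setting each recover ~920-930 Mbps.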

GRID testbed environment with HPSS through GbE-WAN
(diagram: KEK (PBS server with ~100 CPUs, HPSS servers, 120 TB) and ICEPP (PBS server and clients with 6 CPUs, 0.2 TB), ~60 km apart over a 1 Gbps link, with 100 Mbps to user PCs; each site runs a NorduGrid grid-manager and gridftp-server with Globus MDS, and KEK also a Globus replica catalog; SE and CE at both sites)

Aggregate transfer speed vs. number of parallel file transfers
(plot: pftp -> pftp from HPSS mover disk to /dev/null or to client disk, FTP buffer = 64 MB)
– KEK client (LAN), to client disk: 48 MB/s
– ICEPP client (WAN), to client disk: 33 MB/s
Even a 3 ms latency affects the results
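The aggregate-vs-streams behaviour measured here can be mimicked with any transfer backend. A minimal sketch of the parallel scheme, using local file copies as a hypothetical stand-in for pftp sessions:

```python
import os
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

def copy_one(pair):
    """One 'transfer': copy src to dst and return the bytes moved."""
    src, dst = pair
    shutil.copyfile(src, dst)
    return os.path.getsize(dst)

def aggregate_mb_per_s(pairs, n_streams):
    """Run the copies with n_streams concurrent workers, return aggregate MB/s.
    Increasing n_streams raises the aggregate until the bottleneck (disk,
    network or, over a WAN, the per-stream window limit) saturates."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=n_streams) as pool:
        total_bytes = sum(pool.map(copy_one, pairs))
    return total_bytes / 1e6 / (time.monotonic() - start)
```

Sweeping n_streams from 1 upward reproduces the shape of the plot: throughput climbs with parallelism and then flattens at the slower side's limit.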

Microscopic monitoring of network performance
– speed = increment of the sum of TCP data size in every 10 ms
– the window size grows only slowly after a packet loss
– a longer window is not a perfect remedy when you have packet loss
(plots: throughput traces with and without packet loss)
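The loss sensitivity observed here is captured by the standard Mathis et al. approximation for steady-state TCP Reno throughput, BW ~ (MSS/RTT) * sqrt(3/2) / sqrt(p). A sketch; the 1460-byte MSS and the loss rates are illustrative, not measured values from these tests:

```python
import math

def mathis_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. estimate of steady-state TCP Reno throughput (bps)."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

# On a 3 ms path, a loss rate of 1e-4 caps a single stream near ~480 Mbps
# regardless of window size, matching the "longer window is not a perfect
# remedy" observation; with ~no loss the window, not loss, is the limit.
capped = mathis_bps(1460, 0.003, 1e-4)
clean = mathis_bps(1460, 0.003, 1e-6)
```

The formula assumes Reno-style halving on loss, which is exactly the slow window regrowth visible in the monitoring traces.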

FAST TCP
Looks nice and we want to try it, but we haven't because of the patent and IP issues.
– Some people at KEK are afraid that they might have difficulty working on a similar topic once they have seen its source code.
– Is this true?

CA: a key issue of GRID
ICEPP and KEK will run one CA jointly for BELLE and ATLAS
– only for active KEK users, to simplify the procedure
Current situation
– ICEPP depends on a foreign CA to join LCG
– KEK is running a test CA locally
CA management costs are not cheap. Any good ideas?

Storage evaluation
SAN solutions
– IBM SAN File System (aka StorageTank): Linux+AIX tested on the server side; AIX, Solaris and Linux clients tested, with the Linux clients fastest in our tests
– HP Lustre: waiting for beta product delivery

Distributed simulation in advanced radiotherapy
A model for hospitals and computing centers
– hospitals send CT, MRI or PET images (DICOM) with treatment planning data to a computing center as input; higher security is required to protect personal data
– full (parallel) simulation using Geant4 at the computing centers
– analysis results and feedback are returned to the hospitals in DICOM
– validation of treatment planning

Toward Asia-Pacific collaboration
Take advantage of working together with people in neighboring time zones
HEP population in Asia-Pacific
– ATLAS: Australia (7), China (15), Japan (45) and Taiwan (5); 72/1446 = 5.0%
– CMS: China (31), Korea (17) and Taiwan (14); 62/1676 = 3.7%
– ALICE: China (20), Japan (3) and Korea (12); 35/747 = 4.7%
– LHCb: China (19); 19/737 = 2.6%
– Belle: Australia (8), China (11), Japan (122), Korea (15) and Taiwan (13); 169/246 = 67% (excluding Japan: 20%)
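These shares can be reproduced directly from the member counts given on the slide (the Belle fraction actually comes out at 68.7%, which the slide rounds down to 67%):

```python
# (Asia-Pacific members, total collaboration size), numbers as on the slide
asia_pacific = {
    "ATLAS": (7 + 15 + 45 + 5, 1446),
    "CMS":   (31 + 17 + 14, 1676),
    "ALICE": (20 + 3 + 12, 747),
    "LHCb":  (19, 737),
    "Belle": (8 + 11 + 122 + 15 + 13, 246),
}
shares = {exp: round(100 * n / total, 1) for exp, (n, total) in asia_pacific.items()}
belle_excl_japan = round(100 * (169 - 122) / 246, 1)  # ~19%, the slide's "20% excluding Japan"
```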


Summary
Japan is the unique country in the Asia-Pacific region that has large accelerators for HENP
– it should have a "tier-0" center
SRB is under testing for BELLE
ICEPP (U-Tokyo) and KEK are collaborating to build the ATLAS regional center in Japan
We seek collaboration among Asia-Pacific countries
– BELLE, LHC
– more bandwidth is necessary among the sites