Construction Experience and Application of the HEP DataGrid in Korea
Bockjoo Kim, on behalf of the Korean HEP Data Grid Working Group
CHEP2003, UC San Diego, Monday 24 March 2003

Outline
- Korean Institutions and HEP Experiments
- Development of the Korean HEP Data Grid
- Goals of the Korean HEP Data Grid
- Hardware and Software Resources
  - Network
  - CPUs
  - Storage
  - Grid software: EDG testbed, SAMGrid
- Achievements in Y2002
- Prospects for Y2003

Korean Institutions and HEP Experiments
[World map, Korean DataGrid-related experiments only: CHEP and other Korean HEP institutions connected to FNAL (CDF, US), BNL (PHENIX, US), CERN (CMS, Europe, 2007), KEK (K2K/Belle, Japan), and the Space Station (AMS, 2005)]
- 12 institutions are active HEP participants
- Current experiments: Belle/KEK, K2K/KEK, PHENIX/BNL, CDF/Fermilab
- Near-future experiments:
  - AMS / ISS (MIT, NASA, CERN): Y2005
  - CMS (CERN, Europe): Y2007
  - Linear Collider experiment(s)

Development of the Korean HEP Data Grid
- Grid Forum Korea (GFK) was formed in 2001, and the Korean HEP Data Grid Working Group (KHEPDGWG) started with it
- The Korean HEP Data Grid was approved by KISTI / MIC (GFK) on March 22
- Networking: NCA supports CHEP with two international networking utilization projects closely related to the Korean HEP Data Grid, toward Europe and toward Japan/USA
- KT/KOREN-NOC supports CHEP with PC clusters for networking
- Industry: companies such as IBM-Korea and CIES agreed to support CHEP (50 TB tape library and 1 TB disk + servers)
- MOST and CHEP: CHEP itself supports the HEP Data Grid with its own research fund from the Ministry of Science and Technology (MOST)
- Kyungpook Nat'l Univ. supports CHEP with space for the KHEPDGWG
- KOREN/APAN supports the Korean HEP Data Grid with 1 Gbps of bandwidth from CHEP to KOREN (2002)
- Additional networking (one connection at CHEP and the other in Seoul) is under discussion
- The Hyeonhae/Genkai APII (GbE) for HEP project (between Korea and Japan) is in progress
- 1st International HEP Data Grid Workshop held in Nov 2002

Goals of the Korean HEP Data Grid
- Implementation of the Tier-1 Regional Data Center for the LHC-CMS (CERN) experiment in Asia; the Regional Data Center can also serve as a regional data center for other experiments (Belle, CDF, AMS, etc.)
- Networking: a multi-level (tiered) hierarchy of distributed servers (both for data and for computation) to provide transparent access to both raw and derived data
  - Tier-0 (CERN) - Tier-1 (CHEP): ~Gbps via TEIN
  - Tier-1 (CHEP) - Tier-1 (US and Japan): ~Gbps via APII/Hyeonhae
  - Tier-1 (CHEP) - Tier-2 or 3 (participating institutions): 45 Mbps ~ 1 Gbps via KOREN
- Computing: 1000-CPU Linux clusters
- Data storage capability
  - Storage: 1.1 PB RAID-type disk (Tier-1 + Tier-2)
  - Tape: ~3.2 PB
  - HPSS servers
- Software: contribute to grid application package development

Korean HEP Data Grid Network Configuration (2002)
- Network bandwidth between institutions
  - CHEP-KOREN: 1 Gbps (ready for users)
  - SNU-KOREN: 1 Gbps, ready for tests
  - CHEP-SNU: 1 Gbps, ready for tests
  - SKKU-KOREN: 155 Mbps (not yet open to users)
  - Yonsei-KOREN: 155 Mbps (not yet open to users)
- File transfer tests (measurement commands are sketched below):
  - KNU-SNU, KNU-SKKU: ~50 Mbps
  - KNU-KEK, KNU-Fermilab: 17 Mbps (link capacities 155 Mbps and 45 Mbps)
  - KNU-CERN: 8 Mbps (link capacity 10 Mbps)
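The throughput figures above come from transfer tests between the sites. As a minimal sketch of the kind of command typically used for such raw-TCP measurements (an assumption; the slide does not specify the tool), iperf could be run as follows. The host name is a placeholder, not an actual test endpoint.

  # On the receiving host: start an iperf server with a large TCP window
  iperf -s -w 1M
  # On the sending host: 30-second test, 4 parallel streams, 1 MB TCP window
  iperf -c cluster.chep.knu.ac.kr -t 30 -P 4 -w 1M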

Distributed PC-Linux Farm
- Distributed PC-Linux clusters (~206 CPUs so far)
- 10 sites for testbed setup and/or tests
  - Center for High Energy Physics (CHEP): 142 CPUs
  - SNU: 6 CPUs
  - KOREN/NOC: ~40 CPUs (CHEP to KOREN: 1 GbE test established)
  - Yonsei U, SKKU, Konkuk U, Chonnam Nat'l Univ, Ewha WU, Dongshin U: 1 CPU each
- 4 sites outside of Korea: 18 CPUs (KEK, FNAL, CERN, and ETH)

PC-Linux Farm at KNU

Storage and Network Equipment [photo: CHEP/KNU 48 TB storage and network equipment]

Storage Specification
- IBM 3494 tape library system, 48 TB (installed 13-18 Nov 2002), with L12 and S10 frames (16 TB per frame)
- RAID disks
  - FAStT200: 1 TB
  - RAID disks: 1 TB
- Disks on nodes: 4.4 TB
- Software: TSM (HSM)
- HSM server: S7A, 262 MHz, 8-way, 4 GB memory

Grid Software
- All of the grid software is Globus 2 based (a basic usage sketch follows below)
- KNU and SNU each host an EDG testbed, running at full scale within Korea
- Application of the EDG testbed to currently running experiments is configured:
  - EDG testbed for CDF data analysis
  - EDG testbed for Belle data analysis (in progress)
- Worker nodes for SAMGrid (Fermilab, USA) are also installed at KNU for CDF data analysis
- CHEP assigned a few CPUs for an iVDGL testbed setup (Feb 2003)
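Since every component above sits on Globus 2, access to the testbed resources follows the standard GSI workflow. The commands below are a minimal sketch of that workflow, not the actual site configuration; the gatekeeper and storage host names and paths are placeholders.

  # Create a GSI proxy certificate and inspect it
  grid-proxy-init
  grid-proxy-info -all
  # Run a trivial job through a PBS jobmanager (hypothetical gatekeeper name)
  globus-job-run ce.chep.knu.ac.kr/jobmanager-pbs /bin/hostname
  # Copy a file to a storage element over GridFTP (hypothetical SE name and path)
  globus-url-copy file:///tmp/input.dat gsiftp://se.chep.knu.ac.kr/flatfiles/SE00/input.dat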

EDG Testbeds [photos: the EDG testbed at SNU and the EDG testbed at KNU]

Configuration of the EDG Testbed in Korea
[Architecture diagram: a UI (real user) with web services; an RB with LDAP and disk; a CE (VO user) and a big fat disk with CDF and K2K CPUs at KNU/CHEP; SEs and WNs (VO users) at SNU and SKKU; a GDMP server and GDMP clients configured with the new VO; NFS and GSIFTP links throughout; grid-security mapping on disk with maximum security; components marked as "in operation" or "in preparation"]

An Application of the EDG Testbed
- The EDG testbed functionality is extended to include Korean CDF as a VO
- The extension attaches existing CPUs that already carry the CDF software to the EDG testbed
- Add a VO, following the EDG discussion list
- The CE in the EDG testbed is modified:
  - Define a queue on a non-CE machine (a configuration sketch follows below)
  - grid-mapfile, grid-mapfile.que1_on_ce, grid-mapfile.que2_on_nonce (exclusive job submission)
  - ce-static.ldif.que1_on_ce, ce-static.ldif.que1_on_nonce
  - ce-globus.skel
  - globus-script-pbs-submit
  - globus-script-pbs-poll (for queues on the non-CE machine)
- The experiment-specific machine (= the queue on the non-CE machine) is modified:
  - Make a minimal WN configuration without greatly modifying the existing machine (PBS install/setup, pooled accounts, mounting of the grid security area)
  - /etc/hosts.equiv so that pooled-account users can submit jobs to the non-CE queue
- References: [1] [2]
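As a rough illustration of the "queue on a non-CE machine" step above, the lines below sketch how a dedicated PBS queue for the CDF VO and the /etc/hosts.equiv entry could be set up. The queue and host names are placeholders, and the grid-mapfile / ce-static.ldif edits listed above are site-specific and not reproduced here.

  # On the PBS server of the experiment-specific (non-CE) machine:
  # create and enable a separate execution queue for the CDF VO
  qmgr -c "create queue cdf queue_type=execution"
  qmgr -c "set queue cdf enabled=true"
  qmgr -c "set queue cdf started=true"
  qmgr -c "set queue cdf acl_group_enable=true"
  qmgr -c "set queue cdf acl_groups=cdf"
  # Allow the pooled-account users on the CE host to submit jobs remotely
  # (hypothetical CE host name)
  echo "ce.chep.knu.ac.kr" >> /etc/hosts.equiv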

Overview of the EDG Application
[Architecture diagram: a UI (real user) and an RB with /home; a modified CE (VO user) holding queues for the EDG VOs and a dedicated queue for the CDF VO; an SE with /flatfiles/SE00 and the CDF Run 2 software; a local LDAP server listing authorized users (dguser for the RB, VO users for CDF) and the CERN/.fr LDAP servers for the CMS, LHCb, ATLAS, etc. VOs; /etc/grid-security with the grid-mapfile; EDG WNs and a new WN connected through a PBS server and PBS clients; a GDMP server and client with the new VO; NFS and GSIFTP links; connections to another site and to the CAF at the Feynman Computing Center, Fermilab]

Working Sample Files for a CDF Job (submission commands are sketched below)

JDL:
  Executable = "run_cdf_tofsim.sh";
  StdOutput = "run_cdf_tofsim.out";
  StdError = "run_cdf_tofsim.err";
  InputSandbox = {"run_cdf_tofsim.sh"};
  OutputSandbox = {"run_cdf_tofsim.out","run_cdf_tofsim.err",".BrokerInfo"};

Input shell script (run_cdf_tofsim.sh):
  #!/bin/sh
  source ~cdfsoft/cdf2.shrc
  setup cdfsoft int1
  newrel -t 4.9.0int1 test1
  cd test1
  addpkg -h TofSim
  gmake TofSim.all
  gmake TofSim.tbin
  ./bin/Linux2-KCC_4_0/tofSim_test tofSim/test/tofsim_usr_pythia_bbar_dbfile.tcl
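For completeness, a sketch of how such a JDL would be submitted from the UI, assuming the EDG 1.x user-interface command names (dg-job-*; later releases renamed these edg-job-*). The JDL file name and the job ID are placeholders.

  # On the UI, after creating a GSI proxy
  grid-proxy-init
  # Submit the JDL shown above and note the returned job ID
  dg-job-submit run_cdf_tofsim.jdl
  # Poll the job and retrieve the output sandbox once it has finished
  dg-job-status <job_id>
  dg-job-get-output <job_id>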

Web Service for the EDG Testbed
- Built to facilitate access to the EDG testbeds in Korea
- The Mailman Python CGI wrapper is utilized
- The EDG job-related Python commands are modified for the web service (the idea is sketched below)
- At the moment, login is possible through a proxy file
- A logged-in user can see that user's job IDs
- Retrieved job output remains on the web server machine
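The sketch below illustrates the idea only: point the EDG client at the proxy file supplied through the web login and query the job IDs recorded for that user. The actual service wraps the EDG Python commands through the Mailman CGI framework; the paths and the user variable here are hypothetical.

  #!/bin/sh
  # Illustrative CGI sketch (not the production Python implementation)
  echo "Content-type: text/plain"
  echo ""
  # Use the proxy file uploaded by this web user (hypothetical path)
  export X509_USER_PROXY=/var/www/proxies/$REMOTE_USER.proxy
  # Report the status of each job ID recorded for this user
  for id in $(cat /var/www/jobids/$REMOTE_USER.list); do
      dg-job-status "$id"
  done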

Web Service for the EDG Testbed
[Screenshots: login by loading a proxy; job submission by loading a JDL; the job submission result page (1. job status can be checked, 2. a submitted job can be cancelled); the list of job IDs from which output can be retrieved]

SAMGrid Monitoring Home Page
[Screenshot: the DCAF (DeCentralized Analysis Farm) at KNU for SAMGrid]

What the KHEPDG Achieved in Y2002
- Infrastructure: 206 CPUs, 6.5 TB of disk, a 48 TB tape library, and networking infrastructure
- HSM system for the tape library
- KNU and SNU each host an EDG testbed, running at full scale within Korea and accessible via the web
- KNU installed SAMGrid (US Fermilab product) worker nodes (as demonstrated at SC2002)
- CHEP started discussing collaboration with iVDGL
- SNU/KNU implemented an application of the EDG testbed for CDF, and the implementation is working
- Network tests were performed between Korea and the US, Japan, and the EU, and within Korea
- 1st International HEP DataGrid Workshop held at CHEP

Prospects of the KHEPDGWG for Y2003
- More testbed setups (e.g., iVDGL's WorldGrid)
- Extend the application of the EDG testbed to further currently running experiments, e.g., Belle
- Cross-grid tests between EDG and iVDGL in Korea
- Investigate the possibility of Globus 3
- Full operation of HPSS (HSM) with the grid software
- Increase the number of cluster CPUs to 400 or more
- Increase storage to 100 TB
- Participate in the CMS data challenge
- 2nd HEP DataGrid Workshop will be held in August

Summary
- The HEP Data Grid is under consideration at most Korean HEP institutions
- So far the HEP Data Grid project has received excellent support from government, industry, and research institutions
- The EDG testbeds and their application are operational in Korea, and we will expand to other testbeds, e.g., the iVDGL WorldGrid
- The 1st international workshop on the HEP Data Grid was held successfully in November 2002
- CHEP will host the 2nd international workshop on the HEP Data Grid in August 2003