
National HEP Data Grid Project in Korea Kihyeon Cho Center for High Energy Physics (CHEP) Kyungpook National University CDF CAF & Grid Meeting July 12, 2002

CENTER FOR HIGH ENERGY PHYSICS
HEP Data Grid in Korea
- In Korea, the HEP Data Grid Project started in March as a five-year project.
- 30 high energy physicists (Ph.D. level) from 5 experiments (CDF, AMS, CMS, K2K, PHENIX) are involved in this project.
- The project includes building an Asia Regional Data Center (Tier-1) for CMS, AMS, and CDF(?).
- The center will be located at CHEP, Kyungpook National University.
- CHEP itself is involved in the CDF, Belle, CMS, and AMS experiments.

CENTER FOR HIGH ENERGY PHYSICS
Experiments for the HEP Data Grid Project in Korea
- Europe: CERN (CMS)
- US: FNAL (CDF)
- US: BNL (PHENIX)
- Space Station: AMS
- Japan: KEK (K2K)
- Korea: CHEP

CENTER FOR HIGH ENERGY PHYSICS
Final Goal of the Project ( )
Networking
- Multi-leveled hierarchy (both for data and for computation)
Network Services
- Videoconferencing
- OO database access
Data Storage Capability in the Data Center
- Storage: 1100 TB RAID-type disk
- Tape drives: 60 IBM 6590 tape drives
- TSM Server/HPSS: 3 PB
Computing Power (1000 CPUs)
- Need both centralized and distributed Linux clusters
- Parallel computing and parallel storage capability

CENTER FOR HIGH ENERGY PHYSICS
The Goal of the HEP Data Grid Network Project ( )
Processing fabric: 1000 CPUs (commodity PCs), 3 PB storage, Gbit networks, Sundomang supercomputers, national network, LANs
(1) Connection to Data Grids abroad
- Between CERN (Tier-0) and CHEP (Tier-1), Korea: Gbps network
- Between Fermilab, USA and CHEP (Tier-1), Korea: APII, 1 Gbps
- Between KEK, Japan and CHEP (Tier-1), Korea: APII and Hyunhae, 1 Gbps
(2) Domestic Data Grid
- Between CHEP (Tier-1) and Tier-2 sites: 155 Mbps ~ 1 Gbps
- Between CHEP (Tier-1) and other sites: 45 ~ 155 Mbps

CENTER FOR HIGH ENERGY PHYSICS
Network topology, Project ( ): CHEP connects at 1 Gbps, using the KOREN or HPCNET/KREONET topology, to Japan (APII or APAN-ESNET), the USA (ESNET: Fermilab, BNL), Europe (TEIN-155: CERN, DESY), and China (IHEP).


CENTER FOR HIGH ENERGY PHYSICS
Current Status (as of July 12, 2002): Hardware
- CPU: 40-CPU (1.7 GHz) clusters
- HDD: about 3 TB (80 GB IDE x 40) + 100 GB RAID
- 600 GB tape library
- Network: CHEP/KNU -- 1 Gbps -- Sundomang -- 1 Gbps -- StarTap -- 45 Mbps(?) -- Fermilab
- The actual network performance between CHEP and FNAL is 3~5 Mbps (30~50 GB/day).
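The quoted throughput figures are internally consistent: a link sustaining 3~5 Mbps moves roughly 32~54 GB per day, matching the slide's 30~50 GB/day. A quick back-of-the-envelope check (a sketch; only the 3~5 Mbps figures come from the slide):

```python
def mbps_to_gb_per_day(mbps: float) -> float:
    """Convert a sustained link rate in Mbps to data moved per day in GB."""
    seconds_per_day = 86400
    bits_per_day = mbps * 1e6 * seconds_per_day
    return bits_per_day / 8 / 1e9  # bits -> bytes -> GB

low = mbps_to_gb_per_day(3)   # ~32 GB/day
high = mbps_to_gb_per_day(5)  # ~54 GB/day
print(round(low), round(high))
```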

CENTER FOR HIGH ENERGY PHYSICS
Current Status: PC Clusters

Object                    Item                                      Quantity
File Server               Pentium III 866 MHz (dual)                2
Login and Compile Server  Pentium IV 1.7 GHz                        1
                          AMD 1.47 GHz                              1
Main CPU                  Pentium III 1 GHz (dual)
                          Pentium IV 1.7 GHz
                          AMD 1.47 GHz
                          AMD 1 GHz
UPS                       10 kW (30 min)                            1
NAS                       100 GB HDD (RAID) + 600 GB tape library   1
Network Switch            24 x 100 Mbps + 2 x 1 Gbps                1 + 1

By the end of this year, a PC cluster totaling 100 CPUs will be constructed.

CENTER FOR HIGH ENERGY PHYSICS
Current Status: Data Grid Software and Middleware
- Installed Globus 2.0 on 12 PCs.
- Constructed a private CA (Certificate Authority).
- Installed MDS (Metacomputing Directory Service).
- Installed GridFTP, Replica Catalog, and Replica Management.
- Tested the Grid testbed between CHEP (Tier-1) and SNU (Tier-2), and between CHEP and Fermilab.
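A testbed transfer with the Globus 2.0 tools named above would look roughly like the following sketch. The host names and file paths are hypothetical placeholders, not the actual testbed machines:

```shell
# Sketch of a GridFTP transfer test on a Globus 2.0 testbed.
# Host names and paths below are hypothetical examples.

# Obtain a short-lived proxy credential from the user's certificate
# (here assumed to be signed by the site's private CA).
grid-proxy-init

# Copy a file from a Tier-1 node to a Tier-2 node over GridFTP.
globus-url-copy \
    gsiftp://tier1.example.ac.kr/data/testfile \
    gsiftp://tier2.example.ac.kr/data/testfile
```

The proxy step matters because GridFTP authenticates with GSI: each transfer is authorized by the certificate chain rooted at the CA the sites agree to trust, which is why constructing the private CA comes first in the list above.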

CENTER FOR HIGH ENERGY PHYSICS
Future Plan (This Year)
- PC clusters will be constructed by the end of this year.
- A 6 TB HPSS system is expected to arrive.
- Contribute 1 TB of hard disk now, plus 1 TB more in December, to CDF as a network buffer between CHEP and FCC.
- In early November, the International HEP Data Grid Workshop will be held at CHEP, Kyungpook National University, Korea.
- A CAF system will be constructed in Korea following Frank's suggestions; the CAF in Korea and the CAF at FCC will then be connected.
- Any suggestions?