National HEP Data Grid Project in Korea
Kihyeon Cho
Center for High Energy Physics (CHEP), Kyungpook National University
CDF CAF & Grid Meeting, July 12, 2002
CENTER FOR HIGH ENERGY PHYSICS, 7.12.2002

HEP Data Grid in Korea
- The Korean HEP Data Grid Project started in March 2002 as a five-year project (2002-2006).
- 30 high-energy physicists (Ph.D. level) from five experiments (CDF, AMS, CMS, K2K, PHENIX) are involved.
- The project includes building an Asia Regional Data Center (Tier-1) for CMS, AMS, and possibly CDF.
- The center will be located at CHEP, Kyungpook National University.
- CHEP itself participates in the CDF, Belle, CMS, and AMS experiments.
Experiments for the HEP Data Grid Project in Korea
- Europe: CERN (CMS)
- US: FNAL (CDF)
- US: BNL (PHENIX)
- Space Station (AMS)
- Japan: KEK (K2K)
- Korea: CHEP
Final Goal of the Project (2002-2006)
Networking
- Multi-level hierarchy (both for data and for computation)
- Network services: videoconferencing, OO database access
Data storage capability in the Data Center
- Disk storage: 1100 TB of RAID-type disk
- Tape drives: 60 IBM 6590 tape drives
- TSM server/HPSS: 3 PB
Computing power (1000 CPUs)
- Need both centralized and distributed Linux clusters
- Parallel computing and parallel storage capability
The Goal of the HEP Data Grid Network Project (2002-2006)
Processing fabric: 1000 commodity-PC CPUs, 3 PB of storage, Gbit networks, the Sundomang supercomputer, the national network, and LANs.
(1) International Data Grid connections
- Between CERN (Tier-0) and CHEP (Tier-1), Korea: Gbps network
- Between Fermilab, USA and CHEP (Tier-1), Korea: APII, 1 Gbps
- Between KEK, Japan and CHEP (Tier-1), Korea: APII and Hyunhae, 1 Gbps
(2) Domestic Data Grid connections
- Between CHEP (Tier-1) and Tier-2 sites: 155 Mbps to 1 Gbps
- Between CHEP (Tier-1) and other sites: 45 to 155 Mbps
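To put these planned link speeds in perspective, a short back-of-envelope calculation (not from the slides; the 1 TB dataset size is an illustrative assumption, and full sustained line rate with no protocol overhead is assumed) shows how long a transfer would take on each class of link:

```python
# Time to move a hypothetical 1 TB dataset over each planned link class,
# assuming the full line rate is usable and ignoring protocol overhead.
TB = 1e12  # bytes (decimal terabyte)

links_mbps = {
    "International Tier-0/Tier-1 link (1 Gbps)": 1000,
    "Domestic Tier-1 to Tier-2 (155 Mbps)": 155,
    "Domestic Tier-1 to other sites (45 Mbps)": 45,
}

for name, mbps in links_mbps.items():
    seconds = TB * 8 / (mbps * 1e6)  # bits to send / bits per second
    print(f"{name}: {seconds / 3600:.1f} hours")
```

At 1 Gbps the dataset moves in a few hours; at 45 Mbps it takes about two days, which is why the domestic tail links matter for the Tier-1 plan.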
Network Topology, Project (2002-2006)
[Network diagram: CHEP connects at 1 Gbps, using the KOREN or HPCNET/KREONET topology, to Japan (APII or APAN-ESnet), the USA via ESnet (Fermilab, BNL), Europe via TEIN-155 (CERN, DESY), and China (IHEP).]
Current Status (as of July 12, 2002): Hardware
- CPU: 40-CPU (1.7 GHz) clusters
- HDD: about 3 TB (80 GB IDE x 40); 100 GB RAID plus a 600 GB tape library
- Network path: CHEP --(1 Gbps)-- KNU --(1 Gbps)-- Sundomang --(45 Mbps)-- STAR TAP --(?)-- Fermilab
- The actual network performance between CHEP and FNAL is 3-5 Mbps (30-50 GB/day).
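The quoted throughput and the quoted daily volume are consistent with each other, as a quick unit conversion shows (a sketch; sustained utilization of the full measured rate over a whole day is assumed):

```python
# Convert a sustained network rate in Mbps to decimal gigabytes per day.
def mbps_to_gb_per_day(mbps: float) -> float:
    bytes_per_sec = mbps * 1e6 / 8   # megabits/s -> bytes/s
    return bytes_per_sec * 86400 / 1e9  # bytes/day -> GB/day

print(mbps_to_gb_per_day(3))  # ~32 GB/day
print(mbps_to_gb_per_day(5))  # ~54 GB/day
```

So 3-5 Mbps sustained corresponds to roughly 32-54 GB/day, matching the 30-50 GB/day figure on the slide.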
Current Status: PC Clusters

Object                     Item                                      Quantity
File server                Pentium III 866 MHz (dual)                2
Login and compile server   Pentium IV 1.7 GHz                        1
                           AMD 1.47 GHz                              1
Main CPU                   Pentium III 1 GHz (dual)                  1
                           Pentium IV 1.7 GHz                        16
                           AMD 1.47 GHz                              14
                           AMD 1 GHz                                 5
UPS                        10 kW (30 min)                            1
NAS                        100 GB HDD (RAID) + 600 GB tape library   1
Network switch             24 x 100 Mbps + 2 x 1 Gbps                1+1

By the end of this year, a PC cluster with a total of 100 CPUs will be constructed.
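As a rough cross-check of the slide's totals, the per-model quantities above can be summed in a few lines (the counts come from the table; treating each "(dual)" box as two CPUs is an assumption):

```python
# Rough CPU count for the CHEP cluster, from the table above.
# Assumption: "(dual)" machines contribute 2 CPUs each.
nodes = [
    # (label, machines, CPUs per machine)
    ("File server: Pentium III 866 MHz (dual)", 2, 2),
    ("Login/compile: Pentium IV 1.7 GHz",       1, 1),
    ("Login/compile: AMD 1.47 GHz",             1, 1),
    ("Main: Pentium III 1 GHz (dual)",          1, 2),
    ("Main: Pentium IV 1.7 GHz",               16, 1),
    ("Main: AMD 1.47 GHz",                     14, 1),
    ("Main: AMD 1 GHz",                         5, 1),
]
total = sum(machines * cpus for _, machines, cpus in nodes)
print(total)  # 43
```

That gives 43 CPUs in total, in line with the roughly 40 CPUs quoted on the hardware slide.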
Current Status: Data Grid Software and Middleware
- Installed Globus 2.0 on 12 PCs (10 at CHEP, 1 at SNU, 1 at Fermilab).
- Constructed a private CA (Certificate Authority).
- Installed MDS (Metacomputing Directory Service).
- Installed GridFTP, the Replica Catalog, and Replica Management.
- Tested the Grid testbed between CHEP (Tier-1) and SNU (Tier-2), and between CHEP and Fermilab.
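On a Globus 2.0 testbed like this, file movement typically goes through GridFTP's globus-url-copy client. The sketch below only assembles such a command line; the hostnames and paths are hypothetical placeholders, and the command is not executed here:

```python
# Sketch: build a GridFTP transfer command of the kind used on a
# Globus 2.0 testbed. Hostnames and file paths are hypothetical.
def gridftp_copy_cmd(src_host, src_path, dst_host, dst_path):
    """Return the argv list for a server-to-server globus-url-copy."""
    return [
        "globus-url-copy",
        f"gsiftp://{src_host}{src_path}",
        f"gsiftp://{dst_host}{dst_path}",
    ]

cmd = gridftp_copy_cmd("chep.knu.ac.kr", "/data/run1.dat",
                       "fnal.gov", "/buffer/run1.dat")
print(" ".join(cmd))
```

In practice the command would run under a valid GSI proxy certificate issued by the site CA, which is why the private CA on the previous bullet list is a prerequisite.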
Future Plan (This Year)
- A 100-CPU PC cluster will be constructed by the end of this year.
- A 6 TB HPSS system is expected to arrive.
- Contribute 1 TB of hard disk now, plus another 1 TB in December, to CDF as a network buffer between CHEP and FCC.
- In early November, the International HEP Data Grid Workshop will be held at CHEP, Kyungpook National University, Korea.
- A CAF system will be constructed in Korea following Frank's suggestions; the CAF in Korea and the CAF at FCC will then be connected.
- Any suggestions?