CHEP as an AMS Regional Center
The Third International Workshop on HEP Data Grid, CHEP, KNU, August 26-28, 2004
G. N. Kim, J. W. Shin, N. Tasneem, M. W. Lee, D. Son (Center for High Energy Physics, KNU)
H. Park (Supercomputing Center, KISTI)

Outline
- AMS Experiment
- Data Flow from ISS to AMS Remote Centers
- Data Size for the AMS Experiment over 3 Years
- AMS Science Operations Center
- Status of CHEP as an AMS Regional Center
- Status of MC Data Production
- bbFTP Tests
- Summary

AMS (Alpha Magnetic Spectrometer) Experiment
PHYSICS GOALS: a particle physics experiment on the International Space Station for 3 or 4 years, collecting ~10^10 cosmic rays in near-Earth orbit with energies from 300 MeV to 3 TeV.
- To search for antimatter (anti-He, anti-C) in space with a sensitivity 10^3 to 10^4 times better than current limits.
- To search for dark matter: high-statistics precision measurements of the e±, γ and p̄ spectra.
- To study astrophysics: high-statistics precision measurements of the D, 3He, 4He, B, C, 9Be and 10Be spectra.
  * B/C: to understand cosmic-ray propagation in the Galaxy (parameters of the galactic wind).
  * 10Be/9Be: to determine the cosmic-ray confinement time in the Galaxy.

International Collaboration
~200 scientists plus dozens of contractors from 14 countries.
Spokesperson: Samuel C. C. Ting

AMS-02 (Alpha Magnetic Spectrometer)

Data Flow from ISS to Remote AMS Centers
[Diagram: data from the Space Station (five downlink video feeds and telemetry backup) is downlinked to White Sands, NM and, via a commercial satellite service with a 0.9 meter dish, reaches JSC and the MSFC POIC (telemetry, voice, planning, commanding); remote user facilities receive the data over the Internet or a dedicated service/circuit and distribute it locally to conduct science.]

ACOP: AMS Crew Operations Post; POIC: Payload Operations Integration Center; GSC: Ground Support Computers

Table: AMS02 data transmitted to SOC from POIC
Stream | Bandwidth | Data Category | Volume (TB/year)
High Rate | 3-4 Mbit/s | Scientific | 11-15
High Rate | 3-4 Mbit/s | Calibration | 0.01
Slow Rate | 16 kbit/s | House Keeping | 0.06
Slow Rate | 16 kbit/s | NASA Auxiliary Data | 0.01

Table: AMS02 Data Volumes (total AMS02 data volume is about 200 TB)
Origin | Data Category | Volume (TB)
Beam Calibrations | Calibration | 0.3
Preflight Tests | Calibration | -
3 years flight | Scientific | 33-45
3 years flight | Calibration | -
3 years flight | House Keeping | 0.18
Data Summary Files (DST) | Ntuples or ROOT files | -
Catalogs | Flat files, ROOT files or ORACLE | 0.05
Event Tags | Flat files, ROOT files or ORACLE | 0.2
TDV files | Flat files, ROOT files or ORACLE | 0.5
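As a cross-check, the 11-15 TB/year scientific volume follows directly from the 3-4 Mbit/s high-rate downlink; a minimal Python sketch of the arithmetic:

```python
# Rough cross-check: a sustained 3-4 Mbit/s science stream over one year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def yearly_volume_tb(rate_mbit_s: float) -> float:
    """Convert a sustained downlink rate (Mbit/s) into TB per year."""
    return rate_mbit_s * 1e6 / 8 * SECONDS_PER_YEAR / 1e12

for rate in (3.0, 4.0):
    print(f"{rate} Mbit/s -> {yearly_volume_tb(rate):.1f} TB/year")
# 3 Mbit/s -> ~11.8 TB/year, 4 Mbit/s -> ~15.8 TB/year,
# in line with the 11-15 TB/year quoted for the scientific stream.
```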

AMS Science Operations Center (SOC)
Data processing and science analysis:
- receive the complete copy of the data
- science analysis
- primary data storage
- data archiving
- data distribution to AMS universities and laboratories
- MC production
The SOC will provide all of these functions and make it possible to process and analyze all data. The SOC computing facilities should be sufficient to provide data access and analysis for all members of the collaboration.

Science Operations Center Computing Facilities
[Diagram of the SOC computing facilities on the CERN/AMS network: AMS physics services (home directories and registry, consoles and monitors); central data services with shared disk servers (25 TB of disk on 6 PC-based servers) and shared tape servers (tape robots and LTO/DLT tape drives); production facilities of dual-CPU Linux (Intel and AMD) computers for batch data processing; an engineering cluster of 5 dual-processor PCs; data servers and analysis facilities (a Linux cluster of dual-processor PCs and 5 PC servers) for interactive and batch physics analysis; and links to the AMS Regional Centers.]

AMS Science Operations Center Computing Facilities

AMS Regional Center(s)
- Monte Carlo production
- Data storage and data access for the remote stations
AMS Regional Station(s) and Center(s)
- Access to the SOC data storage
  * for detector verification studies
  * for detector calibration purposes
  * for alignment
  * for event visualization
- Access to the SOC to get the detector and production status
- Access to SOC computing facilities for science analysis
- Science analysis using the local computing facilities of universities and laboratories

Connectivity from ISS to CHEP (RC)
[Diagram: downlink from the International Space Station (ISS) (five downlink video feeds and telemetry backup) via a commercial satellite service to White Sands, NM, JSC and the MSFC POIC/POCC (telemetry, voice, planning, commanding); from there through NINS, the MIT LANs (B4207) and vBNS to the Chicago NAP and CERN (SOC), and onward through ISP-1/ISP-2/ISP-3 and the Koreasat (Mugunghwa) satellite to the CHEP Regional Center (RC) and Regional Stations (RS).]
ISP: Internet Service Provider; NAP: Network Access Point; vBNS: very high Broadband Network Service; NINS: NASA Integrated Network Services; POCC: Payload Operations Control Center; POIC: Payload Operations Integration Center

CHEP as an AMS Regional Center
[Diagram of the AMS Regional Center at CHEP: an analysis facility of Linux clusters and cluster servers on Gigabit Ethernet; data storage with a tape library (~200 TB), disk storage (20 TB) and a DB server; an Internet server, hub and display facility; and a connection via the Koreasat (Mugunghwa) satellite to the Ewha AMS Regional Station (RS).]

AMS Analysis Facility at CHEP
1) CHEP AMS Cluster
   Server: 1, dual AMD Athlon MP CPUs, 80 GB disk, Red Hat Linux 9.0
   Linux cluster: 12 nodes, dual AMD Athlon MP CPUs, 80 GB x 12 = 960 GB disk, Red Hat Linux 7.3
2) KT AMS Cluster
   Server: 1, dual Intel Xeon 2 GHz CPUs, 80 GB disk, Red Hat Linux 9.0
   Linux cluster: 4 nodes, dual Intel Xeon 2 GHz CPUs, 80 GB x 4 = 320 GB disk, Red Hat Linux 7.3
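For reference, a minimal sketch that totals the CPU and disk figures quoted above (all per-node numbers are taken from this slide):

```python
# Aggregate the analysis-facility numbers quoted on the slide.
# Each entry: (number of machines, CPUs per machine, disk per machine in GB).
clusters = {
    "CHEP server":        (1, 2, 80),   # dual AMD Athlon MP
    "CHEP Linux cluster": (12, 2, 80),
    "KT server":          (1, 2, 80),   # dual Intel Xeon 2 GHz
    "KT Linux cluster":   (4, 2, 80),
}

total_cpus = sum(n * cpus for n, cpus, _ in clusters.values())
total_disk_gb = sum(n * disk for n, _, disk in clusters.values())
print(f"Total CPUs: {total_cpus}, total disk: {total_disk_gb} GB")
# -> Total CPUs: 36, total disk: 1440 GB
```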

Data Storage
- IBM tape library system (3494-L12 and 3494-S10 frames): 43.7 TB
- RAID disks (FAStT200): 1 TB (RAID 0, striping), used as an intermediate disk pool
[Diagram: the experimental groups (CDF, CMS, AMS, BELLE, PHENIX) write through login machines and an NFS cluster to the FAStT200 RAID disk pool, from which files are migrated to the tape library file system.]

Network Configuration
[Diagram: CHEP servers and PCs connect through Gigabit switches (CHEP) and an IBM 8271 switch to a Cisco Catalyst 6509 L3 switch, with a Gigabit Ethernet/ATM 155 link to the Physics Department and an uplink to KOREN. Research traffic runs over KOREN to CERN via GEANT-TEIN (EU), to Fermilab via KREONET (Gigabit Ethernet) and APII/KREONET2 (US), and to KEK via APII (Japan); other traffic goes through KORNET and Boranet (145 Mbps total). Link speeds marked in the diagram: 8 Mbps, 34 Mbps, 622 Mbps x 2, 1 Gbps x 2, and 2.5 Gbps.]
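To put these link speeds in perspective, a small sketch estimating how long the roughly 200 TB AMS02 data set would take to move at a few of the nominal rates shown in the diagram (wire speed only, ignoring sharing and protocol overhead):

```python
# Naive transfer-time estimate: 200 TB over links at their nominal rates.
# Real throughput would be lower (shared links, TCP overhead, disk limits).
DATASET_TB = 200.0

def transfer_days(rate_mbit_s: float) -> float:
    """Days needed to move DATASET_TB at a sustained rate in Mbit/s."""
    seconds = DATASET_TB * 1e12 * 8 / (rate_mbit_s * 1e6)
    return seconds / 86400

for label, rate in [("34 Mbps", 34), ("622 Mbps", 622), ("1 Gbps", 1000)]:
    print(f"{label:>8}: {transfer_days(rate):7.1f} days")
# 34 Mbps -> ~545 days, 622 Mbps -> ~30 days, 1 Gbps -> ~19 days
```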

Connectivity to the Outside from CHEP
[Diagram of the KOREN topology: CHEP (Regional Center) in Daegu connects at 2.5 Gbps through Daejeon and Seoul to APII Japan (KEK), APII USA (Chicago), TEIN (CERN), APII China (IHEP), and the USA (StarTap, ESNET).]
- Singapore: SingAREN through APII (2 Mbps)
- China (preplanned): CSTNET through APII

Connectivity to the Outside from Korea
[Diagram: links from Korea to Europe (CERN) via TEIN, and to the US (FNAL, StarTap) via APII-TransPac over the Hyunhai/Genkai link.]

MC Production Centers and their Capacities

MC Production Procedure by Remote Client
Step 1: Write the registered info here.
Step 2: Click here.
Step 3: Choose any one of the datasets.
Step 4: Choose the appropriate dataset.

MC Production Procedure by Remote Client (continued)
Step 5: Choose the appropriate CPU type and CPU clock.
Step 6: Put an appropriate CPU time limit.
Step 7: Put the total number of jobs requested.
Step 8: Put the 'Total Real Time Required'.
Step 9: Choose the MC production mode.
Step 10: Click on 'Submit Request'.
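The form fields collected in Steps 1-10 amount to a small request record. As a rough illustration, the sketch below assembles such a request programmatically; the field names and the submission URL are hypothetical placeholders, not the actual interface of the AMS MC production web form.

```python
# Illustrative only: build an MC production request mirroring Steps 1-10.
# Field names and the submission URL are hypothetical, not the real form.
import urllib.parse
import urllib.request

request = {
    "registered_info": "knu-remote-client",   # Step 1 (placeholder value)
    "dataset": "protons",                     # Steps 3-4
    "cpu_type": "PIII",                       # Step 5
    "cpu_clock_mhz": 1000,                    # Step 5
    "cpu_time_limit_h": 24,                   # Step 6
    "n_jobs": 10,                             # Step 7
    "total_real_time_h": 240,                 # Step 8
    "production_mode": "batch",               # Step 9
}

data = urllib.parse.urlencode(request).encode()
req = urllib.request.Request("http://example.invalid/mc/submit", data=data)  # Step 10
# urllib.request.urlopen(req)  # would submit against a real endpoint
print("Would submit:", request)
```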

MC Database Query Form

MC Data Production Status
MC Center | Responsible | GB | %
CIEMAT | J. Casuas | - | -
CERN | V. Choutko, A. Eline, A. Klimentov | - | -
Yale | E. Finch | - | -
Academia Sinica | Z. Ren, Y. Lei | - | -
LAPP/Lyon | C. Goy, J. Jacquemier | - | -
INFN Milano | M. Boschini, D. Grandi | - | -
CNAF & INFN Bologna | D. Casadei | - | -
UMD | A. Malinine | - | -
EKP, Karlsruhe | V. Zhukov | - | -
GAM, Montpellier | J. Bolmont, M. Sapinski | - | -
INFN Siena & Perugia, ITEP, LIP, IAC, SEU, KNU | P. Zuccon, P. Maestro, Y. Lyublev, F. Barao, C. Delgado, Ye Wei, J. Shin | - | -

MC Production Statistics
Particle | Million Events | % of Total
protons | - | -
helium | - | -
electrons | - | -
positrons | - | -
deuterons | - | -
anti-protons | - | -
carbon | - | -
photons | - | -
nuclei (Z = 3 ... 28) | - | -
URL: pcamss0.cern.ch/mm.html
185 days, 1196 computers, 8.4 TB, 250 PIII 1 GHz/day
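For context, the totals quoted on the slide imply the following average production rates (simple arithmetic on the numbers above):

```python
# Average rates implied by the production summary quoted on the slide.
days = 185
computers = 1196
output_tb = 8.4

print(f"Average output: {output_tb * 1e3 / days:.1f} GB/day")                 # ~45 GB/day
print(f"Average output per machine: "
      f"{output_tb * 1e6 / (days * computers):.1f} MB/day")                   # ~38 MB/day
```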

MC Events Production at CHEP
Total number of requested jobs: 229
Completed jobs: 218
Failed jobs: 1
Unchecked jobs (may be errors): 10
Particle | Requested | Completed | Failed | DST size [GB]
Positron | - | - | - | -
Electron | - | - | - | -
Proton | - | - | - | -
Nuclear C | - | - | - | -
He | - | - | - | -
Total | - | - | - | -

Data Handling Program: bbFTP
bbFTP is file transfer software developed by Gilles Farrache at the IN2P3 Computing Center in Lyon.
- Encoded username and password at connection
- SSH and certificate authentication modules
- Multi-stream transfer
- Big windows as defined in RFC 1323
- On-the-fly data compression
- Automatic retry
- Customizable time-outs
- Transfer simulation
- AFS authentication integration
- RFIO interface
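As an illustration of how the multi-stream transfers used in the tests on the following slides might be driven from a script, here is a minimal wrapper around the bbftp command-line client. The flag usage (-u for the remote user, -p for the number of parallel streams, -e for the control command) is our reading of the bbftp documentation and should be checked against the installed version; host, user and paths are placeholders.

```python
# Minimal wrapper around the bbftp command-line client (illustrative sketch).
# Verify the flags against the locally installed bbftp version.
import subprocess
import time

def bbftp_put(local_path: str, remote_path: str, host: str,
              user: str, streams: int = 5) -> float:
    """Transfer one file with bbftp and return the elapsed wall time in seconds."""
    cmd = [
        "bbftp",
        "-u", user,
        "-p", str(streams),                      # number of parallel TCP streams
        "-e", f"put {local_path} {remote_path}", # control command
        host,
    ]
    start = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - start

if __name__ == "__main__":
    elapsed = bbftp_put("mc_run_001.root", "/ams/mc/mc_run_001.root",
                        host="soc.example.org", user="amsuser", streams=10)
    print(f"Transfer took {elapsed:.1f} s")
```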

Data Transmission Test using bbFTP
Using TEIN in December:
- as a function of the number of TCP/IP streams
- as a function of file size

Using APII/KREONET2-StarTap in August:
- as a function of the number of TCP/IP streams
- as a function of file size
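A sketch of how such a scan over stream counts and file sizes could be scripted; it reuses the hypothetical bbftp_put() wrapper from the earlier sketch, and the parameter grids shown are illustrative, not the ones used in the actual TEIN and APII tests.

```python
# Illustrative throughput scan over stream counts and file sizes.
# bbftp_put() is the hypothetical wrapper sketched earlier, assumed here
# to be saved in a local module named bbftp_wrapper.py.
import os

from bbftp_wrapper import bbftp_put

def make_test_file(path: str, size_mb: int) -> None:
    """Create a zero-filled file of the requested size."""
    with open(path, "wb") as f:
        f.write(b"\0" * (size_mb * 1024 * 1024))

def scan(host: str, user: str) -> None:
    """Measure transfer rate for a grid of file sizes and stream counts."""
    for size_mb in (10, 100, 1000):              # illustrative file sizes
        local = f"test_{size_mb}MB.dat"
        make_test_file(local, size_mb)
        for streams in (1, 2, 5, 10, 20):        # illustrative stream counts
            elapsed = bbftp_put(local, f"/tmp/{local}", host, user, streams)
            rate_mbit_s = size_mb * 1024 * 1024 * 8 / 1e6 / elapsed
            print(f"{size_mb:5d} MB, {streams:2d} streams: {rate_mbit_s:6.1f} Mbit/s")
        os.remove(local)
```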

Summary
CHEP as an AMS Regional Center:
- Prepared an analysis facility and data storage
- Producing MC events
- Progress in MC production with Grid tools