CHEP as an AMS Remote Data Center
International HEP DataGrid Workshop, CHEP, KNU, November 8-9, 2002
G.N. Kim, H.B. Park, K.H. Cho, S. Ro, Y.D. Oh, D. Son (Kyungpook)

CHEP as an AMS Remote Data Center
International HEP DataGrid Workshop, CHEP, KNU
G.N. Kim, H.B. Park, K.H. Cho, S. Ro, Y.D. Oh, D. Son (Kyungpook), J. Yang (Ewha), Jysoo Lee (KISTI)

AMS (Alpha Magnetic Spectrometer): a high energy physics experiment on the International Space Station (ISS). Installation of the AMS detector on the ISS in 2005, to run for 3 or 4 years.
PHYSICS GOALS:
- To search for antimatter (He, C) in space with a sensitivity 10³ to 10⁴ better than current limits (< 1.1 x 10⁻⁶).
- To search for dark matter: high statistics precision measurements of the e±, p̄ and γ spectra.
- To study astrophysics: high statistics precision measurements of the D, ³He, ⁴He, B, C, ⁹Be, ¹⁰Be spectra.
  * B/C: to understand cosmic ray propagation in the Galaxy (parameters of the galactic wind).
  * ¹⁰Be/⁹Be: to determine the cosmic ray confinement time in the Galaxy.

[Figures: AMS-02 on the ISS for 3 years; AMS-02 in the Shuttle cargo bay]

[Figure: AMS Crew Operation Post (ACOP); Data Flow from ISS to Remote AMS Centers]

Data Flow from ISS to Remote AMS Centers
[Diagram: data from the Space Station are downlinked via White Sands, NM and a commercial satellite service (0.9 meter dish, five downlink video feeds and telemetry backup) to JSC and the MSFC POIC (telemetry, voice, planning, commanding), and reach the remote user facility over the Internet or a dedicated service/circuit to distribute locally and conduct science.]

Data Flow from ISS to Remote AMS Centers
[Diagram: on the ISS, AMS and ACOP feed the High Rate Frame MUX; data pass through the White Sands, NM facility into the NASA ground infrastructure at MSFC, AL, with the Payload Operations Control Center and the Payload Data Service System (short-term and long-term storage). Real-time, "dump", and White Sands LOR playback data, together with H&S, monitoring, science, and flight ancillary data, flow via external communications and near-real-time file transfer to the GSC, the Science Operations Center, telescience centers, and remote AMS sites.]

AMS Ground Centers
[Diagram: the POCC in AL handles RT data, commanding, monitoring, and NRT analysis via TReK workstations, the HOSC web server and xterm, the "voice" loop, and video distribution, exchanging monitoring and H&S data, flight ancillary data, and selected AMS science data; the Science Operations Center performs NRT data processing, primary storage, archiving, distribution, science analysis, and MC production on a production farm, data server, and analysis facilities (PC farm); the GSC buffers data and retransmits them to the SDC; AMS remote stations and remote centers such as CHEP receive AMS data, NASA data, and metadata over the Internet for MC production and data mirror archiving.]

AMS Science Data Center (SDC): Data Processing and Science Analysis
- receives the complete copy of the data
- science analysis
- primary data storage
- data archiving
- data distribution to AMS universities and laboratories
- MC production
The SDC will provide all of these functions and make it possible to process and analyze all data. The SDC computing facilities should be sufficient to provide data access and analysis for all members of the collaboration.

Data Processing Farm of the SDC
A farm of Pentium (AMD) based systems running Linux is proposed. Depending on the processor clock speed, the farm will contain 25 to 30 nodes.
- Processing node:
  * Processor: dual-CPU or single-CPU 2+ GHz Pentium/AMD
  * Memory: 1 GB RAM
  * Motherboard chipset: Intel or AMD
  * Disk: EIDE (Tbyte scale), 3ware Escalade RAID controller
  * Ethernet adapter: 3x100 Mbit/s - 1 Gbit/s
  * Linux OS
- Server node:
  * dual-CPU Pentium/AMD
  * 2 Gbyte RAM
  * 3 Tbyte of disk space with SCSI UW RAID external tower
  * 3x100 Mbit/s (or 1 Gbit/s) network controllers
  * Linux OS
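The slides quote the farm size (25 to 30 nodes) without showing the sizing arithmetic. Below is a minimal back-of-envelope sketch of how a number of that scale follows from the 3-4 Mbit/s science stream, assuming a per-node reconstruction throughput of 0.1 MB/s (a hypothetical figure, not from the slides).

```python
# Back-of-envelope farm sizing (assumptions marked; numbers are illustrative).

MBIT = 1e6 / 8                      # bytes per megabit

# From the data-volume table: high-rate science stream of 3-4 Mbit/s.
science_rate_bytes_s = 4 * MBIT     # upper end, ~0.5 MB/s

# ASSUMPTION: one node reconstructs roughly 0.1 MB/s of raw data
# (hypothetical figure chosen only to illustrate the scaling).
per_node_throughput_bytes_s = 0.1e6

keep_up = science_rate_bytes_s / per_node_throughput_bytes_s   # keep pace with data taking
total = keep_up * 5                 # headroom for reprocessing and MC production
print(f"keep-up nodes: {keep_up:.0f}, with reprocessing/MC headroom: {total:.0f}")
# -> about 5 and 25 nodes, the same scale as the proposed 25-30 node farm
```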

Analysis Chain: Farms
[Diagram: raw data, from the detector or from event simulation, pass through the event filter (selection & reconstruction) and event reconstruction to produce event summary data; batch physics analysis of the processed data extracts analysis objects by physics topic, which feed interactive physics analysis.]
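As an illustration only, the chain can be read as a sequence of composable stages; the sketch below uses hypothetical placeholder functions and record fields, not the actual AMS software.

```python
# Illustrative only: the analysis chain as composable stages.
# Function names and record fields are hypothetical placeholders, not AMS software.

def event_filter(raw_event):
    """Selection: keep only events that pass the (toy) trigger condition."""
    return raw_event.get("trigger_ok", False)

def event_reconstruction(raw_event):
    """Turn a raw event into (toy) event summary data."""
    return {"tracks": raw_event.get("hits", 0) // 10}

def batch_analysis(esd_events):
    """Extract analysis objects for one physics topic from the ESD."""
    return [e for e in esd_events if e["tracks"] > 0]

raw_stream = [{"trigger_ok": True, "hits": 42}, {"trigger_ok": False, "hits": 7}]
esd = [event_reconstruction(e) for e in raw_stream if event_filter(e)]
analysis_objects = batch_analysis(esd)      # input to interactive physics analysis
print(analysis_objects)                     # [{'tracks': 4}]
```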

Table. AMS02 data transmitted to the SDC from the POIC

  Stream     | Bandwidth   | Data Category        | Volume (TB/year)
  High Rate  | 3-4 Mbit/s  | Scientific           | 11 - 15
  High Rate  | 3-4 Mbit/s  | Calibration          | 0.01
  Slow Rate  | 16 kbit/s   | House Keeping        | 0.06
  Slow Rate  | 16 kbit/s   | NASA Auxiliary Data  | 0.01

Table. AMS02 Data Volumes

  Origin                    | Data Category                      | Volume (TB)
  Beam calibrations         | Calibration                        | 0.3
  Preflight tests           | Calibration                        |
  3 years flight            | Scientific                         | 33 - 45
  3 years flight            | Calibration                        |
  3 years flight            | House Keeping                      | 0.18
  Data Summary Files (DST)  | Ntuples or ROOT files              |
  Catalogs                  | Flat files, ROOT files, or ORACLE  | 0.05
  Event Tags                | Flat files, ROOT files, or ORACLE  | 0.2
  TDV files                 | Flat files, ROOT files, or ORACLE  | 0.5

Total AMS02 data volume is about 200 Tbyte.
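As a sanity check (not part of the slides), the quoted yearly volumes follow directly from the stream bandwidths; the sketch below redoes the arithmetic assuming a duty cycle close to 100%.

```python
# Sanity check: stream bandwidth -> yearly volume (duty cycle assumed ~100%).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def tb_per_year(bits_per_second):
    """Convert a sustained downlink rate into terabytes per year."""
    return bits_per_second / 8 * SECONDS_PER_YEAR / 1e12

print(tb_per_year(3e6), tb_per_year(4e6))   # high-rate stream, 3-4 Mbit/s -> ~12-16 TB/year
print(tb_per_year(16e3))                    # slow-rate stream, 16 kbit/s  -> ~0.06 TB/year
```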

Data Storage of the SDC
Purposes:
- detector verification studies
- calibration
- alignment
- event visualization
- data processing by the general reconstruction program
- data reprocessing
Requirements:
- Tag information for all events during the whole period of data taking must be kept on direct access disks.
- Raw data taken during the last 9 months and 30% of all ESD should be on direct access disks (about 20 TB).
- All taken and reconstructed data must be archived (about 200 TB).
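The ~20 TB figure can be roughly reproduced from the data-volume table; the sketch below assumes the total ESD sample is of the same order as the 3-year raw scientific volume, since the slides do not quote the ESD size explicitly.

```python
# Rough check of the ~20 TB direct-access-disk requirement.
# ASSUMPTION: the total ESD sample is of the same order as the 3-year raw
# scientific volume (33-45 TB); the slides do not quote the ESD size.

raw_per_year = (11, 15)          # TB/year, from the data-volume table
esd_total = (33, 45)             # TB, assumed comparable to the raw scientific volume

nine_months_raw = [0.75 * v for v in raw_per_year]   # 8.25 - 11.25 TB
esd_on_disk = [0.30 * v for v in esd_total]          # 9.9  - 13.5  TB

low = nine_months_raw[0] + esd_on_disk[0]
high = nine_months_raw[1] + esd_on_disk[1]
print(f"direct-access disk: {low:.0f} - {high:.0f} TB")   # roughly 18 - 25 TB
```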

ORACLE Database Organization
- Organization of the database by machine, server, database, and table:
  * flexibility with respect to load, locking, and data volume
  * the loading of machines A and B should be balanced; most probably both machines will be Linux Pentiums
- Backup and replication of the database
[Diagram: Machine A and Machine B each run Server A and Server B, each serving Database A and Database B, which hold tables for Tasks A, B, and C.]
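A minimal sketch, assuming hypothetical host names and a plain round-robin policy, of how client sessions could be balanced across machines A and B as the slide requires; the actual mechanism (Oracle replication, connection-time failover, etc.) is not specified in the slides.

```python
# Sketch of balancing sessions across the two database machines.
# Host names are hypothetical; the slides only require that the load on
# machines A and B be balanced (with backup and replication).

import itertools

MACHINES = ["ams-db-a.example.org", "ams-db-b.example.org"]   # hypothetical hosts
_next_machine = itertools.cycle(MACHINES)

def pick_machine(preferred=None):
    """Return the machine for the next session: simple round-robin, with a
    manual override for when one machine is taken out for backup."""
    return preferred if preferred in MACHINES else next(_next_machine)

for _ in range(4):                 # alternate sessions between the two machines
    print(pick_machine())
```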

AMS Remote Center(s)
- Monte Carlo production
- Data storage and data access for the remote stations
AMS Remote Station(s) and Center(s)
- Access to the SDC data storage
  * for detector verification studies
  * for detector calibration purposes
  * for alignment
  * for event visualization
- Access to the SDC to get the detector and production status
- Access to SDC computing facilities for science analysis
- Science analysis using the local computing facilities of universities and laboratories
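A minimal sketch of the "data storage and data access for the remote stations" role, assuming a hypothetical SDC data server, hypothetical dataset paths, and rsync over ssh as the transfer tool; none of these specifics come from the slides.

```python
# Sketch of mirroring selected SDC datasets to the remote-center disk for the
# remote stations. Host names and paths are hypothetical; rsync over ssh stands
# in for whatever transfer tool is actually chosen.

import subprocess

SDC_HOST = "amssdc.example.org"          # hypothetical SDC data server
DATASETS = ["esd/2005", "tags/2005"]     # hypothetical dataset directories
LOCAL_ROOT = "/data/ams/mirror"

def mirror(dataset):
    """Pull one dataset tree from the SDC, transferring only changed files."""
    src = f"{SDC_HOST}:/ams/data/{dataset}/"
    dst = f"{LOCAL_ROOT}/{dataset}/"
    subprocess.run(["rsync", "-a", "--partial", src, dst], check=True)

if __name__ == "__main__":
    for ds in DATASETS:
        mirror(ds)
```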

AMS Data Production Flow
[Diagram: producers, a raw data server, a catalogues server, and an ESD server interact with the Oracle RDBMS, which holds the Conditions DB, the Tag DB, and the active and nominal tables of hosts, interfaces, producers, and servers.]
Steps:
{I} submit the 1st server
{II} "cold" start
{III} read the "active" tables (available hosts, number of servers, producers, jobs/host)
{IV} submit servers
{V} get "run" info (runs to be processed, ESD output path)
{VI} submit producers (LILO, LIRO, RIRO, ...) and notify the servers
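A minimal sketch of the bootstrap sequence {I}-{VI}, with hypothetical helper functions and an in-memory stand-in for the Oracle bookkeeping tables; it only illustrates the ordering of the steps, not the real production software.

```python
# Sketch of the bootstrap sequence {I}-{VI}; helper names and the in-memory
# "database" are hypothetical stand-ins for the Oracle bookkeeping tables.

def read_active_tables(db):
    """Step {III}: available hosts, number of servers, producers, jobs/host."""
    return db["active"]

def production_cold_start(db, submit):
    submit("server-1")                                  # {I}  submit the 1st server
    active = read_active_tables(db)                     # {II}-{III} "cold" start, read tables
    for i in range(active["extra_servers"]):            # {IV} submit servers
        submit(f"server-{i + 2}")
    runs = db["runs"]                                   # {V}  runs to process, ESD output path
    for host in active["hosts"]:                        # {VI} submit producers, notify servers
        for _ in range(active["jobs_per_host"]):
            submit(f"producer@{host}")
    return runs

toy_db = {"active": {"hosts": ["node01"], "extra_servers": 1, "jobs_per_host": 2},
          "runs": [("run001", "/ams/esd/run001")]}
production_cold_start(toy_db, submit=print)
```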

Connectivity from AMS02 on the ISS to CHEP
[Diagram: downlink from the International Space Station (ISS) via a commercial satellite service (five downlink video feeds and telemetry backup) to White Sands, NM, then over NISN to JSC and the MSFC POIC/POCC (telemetry, voice, planning, commanding); from there via vBNS and the Chicago NAP to the MIT LANs (B4207) and CERN (SDC), and on through ISP-1/ISP-2/ISP-3 and the Mugunghwa (Koreasat) satellite to CHEP and its remote center/station (RC, RS).]
Acronyms: ISP - Internet Service Provider; NAP - Network Access Point; vBNS - very high Broadband Network Service; NISN - NASA Integrated Services Network; POCC - Payload Operations Control Center; POIC - Payload Operations Integration Center.

AMS RC at CHEP
[Diagram: the CHEP analysis facility (about 200 CPUs of Linux clusters with cluster servers on Gigabit Ethernet) connected via a hub to data storage (tape library of ~200 TB, disk storage of 20 TB), a DB server, an Internet server, and a display facility; the Ewha AMS remote station (RS) is linked via the Mugunghwa (Koreasat) satellite.]

Network Configuration (July-August 2002)
[Diagram: CHEP servers and PCs on Gigabit switches and an IBM 8271 switch behind a C6509 L3 switch, connected by Gigabit Ethernet/ATM155 to the Physics Department and by 1 Gbps links to KOREN and KREONET; research traffic reaches CERN over GEANT-TEIN (EU) at 10-45 Mbps in 2002 (toward 10 Gbps), Fermilab over APII (US) at 45 Mbps, and KEK over APII (Japan) at 8 Mbps; other traffic goes via KORNET and Boranet (total 145 Mbps).]
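To put the 2002 link speeds in perspective, the sketch below estimates (pure bandwidth arithmetic, ignoring protocol overhead and link sharing) how long it would take to move one year of science data or the 20 TB disk-resident sample over the 45 Mbps APII (US) link versus a 1 Gbps path.

```python
# Pure bandwidth arithmetic (no protocol overhead, no sharing): how long to move
# AMS-scale samples over the 2002 links versus a 1 Gbps path.

def transfer_days(terabytes, link_mbps):
    """Days needed to move `terabytes` of data over a `link_mbps` link."""
    seconds = terabytes * 1e12 * 8 / (link_mbps * 1e6)
    return seconds / 86400

for tb in (15, 20):                  # one year of science data; the 20 TB disk sample
    for mbps in (45, 1000):          # APII (US) in 2002 vs. a 1 Gbps path
        print(f"{tb} TB over {mbps:4d} Mbps: {transfer_days(tb, mbps):5.1f} days")
```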

Connectivity to the Outside from CHEP
[Diagram: KOREN topology linking Daegu, Daejeon, and Seoul at 1 Gbps; CHEP (regional center) in Daegu connects outward via APII to Japan (KEK), the USA (Chicago: StarTap, ESnet), and China (IHEP), and via TEIN to CERN.]
- Singapore: SingAREN through APII (2 Mbps)
- China (preplanned): CSTNET through APII

[Diagram: international links: Europe (CERN) via TEIN; the US (FNAL) via APII-TransPac and the Hyunhai/Genkai link.]