Status of AMS Regional Data Center
The Second International Workshop on HEP Data Grid, CHEP, KNU, August 22-23, 2003
G. N. Kim, J. W. Shin, N. Tasneem, M. W. Lee, D. Son (on behalf of the HEP Data Grid Working Group)

AMS (Alpha Magnetic Spectrometer) Physics Goals
A particle physics experiment on the International Space Station, operating for 3 or 4 years, which will collect ~10^10 cosmic rays in near-Earth orbit with energies from 300 MeV to 3 TeV.
- To search for antimatter (He, C) in space with a sensitivity 10^3 to 10^4 times better than current limits.
- To search for dark matter: high-statistics precision measurements of the e±, γ, and antiproton spectra.
- To study astrophysics: high-statistics precision measurements of the D, 3He, 4He, B, C, 9Be, 10Be spectra.
  - B/C: to understand cosmic-ray propagation in the Galaxy (parameters of the galactic wind).
  - 10Be/9Be: to determine the cosmic-ray confinement time in the Galaxy.

AMS-02 (Alpha Magnetic Spectrometer)

Data Flow from ISS to Remote AMS Centers
[Diagram: telemetry and the five downlink video feeds (with telemetry backup) pass from the Space Station to the 0.9 meter dish at White Sands, NM, then via a commercial satellite service or a dedicated service/circuit to JSC and the MSFC POIC (telemetry, voice, planning, commanding); data then reach the remote user facility over the Internet, to be distributed locally and used for science.]

Data Flow from ISS to Remote AMS Centers
[Diagram: on the ISS, the AMS crew operation post and ACOP/AMS feed the high-rate frame MUX; real-time, "dump", and health & status (H&S) data flow through the NASA ground infrastructure: the White Sands, NM facility, the Payload Data Service system (short- and long-term storage), and the Payload Operations Control Center at MSFC, AL. Monitoring, science, and flight ancillary data, together with near-real-time file transfer and LOR playback of stored data, are delivered to the Science Operations Center, GSE, telescience centers, and remote AMS sites.]

Connectivity from ISS to CHEP
[Diagram: downlink from the International Space Station (five downlink video feeds and telemetry backup) through White Sands, NM and the commercial satellite service to JSC and the MSFC POIC/POCC (telemetry, voice, planning, commanding), then via NINS, vBNS, and the Chicago NAP to the MIT LANs (B4207) and CERN (SDC), and on to CHEP through ISP-1/ISP-2/ISP-3 and the Mugunghwa (Koreasat) satellite link to the RCRS.]
ISP: Internet Service Provider
NAP: Network Access Point
vBNS: very high Broadband Network Service
NINS: NASA Integrated Network Services
POCC: Payload Operations Control Center
POIC: Payload Operations Integration Center

Table. AMS02 data transmitted to SDC from POIC

  Stream      Bandwidth     Data Category         Volume (TB/year)
  High Rate   3-4 Mbit/s    Scientific            11 - 15
                            Calibration           0.01
  Slow Rate   16 kbit/s     Housekeeping          0.06
                            NASA Auxiliary Data   0.01

Table. AMS02 Data Volumes (total AMS02 data volume is about 200 TB)

  Origin                     Data Category / Format               Volume (TB)
  Beam calibrations          Calibration                          0.3
  Preflight tests            Calibration                          -
  3 years flight             Scientific                           33 - 45
  3 years flight             Calibration                          -
  3 years flight             Housekeeping                         0.18
  Data Summary Files (DST)   Ntuples or ROOT files                -
  Catalogs                   Flat files, ROOT files, or ORACLE    0.05
  Event Tags                 Flat files, ROOT files, or ORACLE    0.2
  TDV files                  Flat files, ROOT files, or ORACLE    0.5
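As a rough cross-check of the table above, the flight scientific volume follows directly from the high-rate downlink bandwidth. A minimal sketch, assuming a continuous 3-4 Mbit/s downlink and decimal units (1 TB = 1e12 bytes):

    # Rough cross-check: scientific data volume implied by the high-rate stream.
    # Assumes a continuous 3-4 Mbit/s downlink and decimal TB (1 TB = 1e12 bytes).
    SECONDS_PER_YEAR = 365 * 24 * 3600

    for mbit_per_s in (3.0, 4.0):
        bytes_per_year = mbit_per_s * 1e6 / 8 * SECONDS_PER_YEAR
        tb_per_year = bytes_per_year / 1e12
        print(f"{mbit_per_s} Mbit/s -> {tb_per_year:.1f} TB/year, "
              f"{3 * tb_per_year:.0f} TB over 3 years")

    # Prints roughly 11.8-15.8 TB/year and 35-47 TB over 3 years, in line with
    # the 11-15 TB/year and 33-45 TB figures quoted in the tables above.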

CHEP as an AMS Regional Center
[Diagram: the AMS RC at CHEP combines data storage (a ~200 TB tape library, 20 TB of disk storage, and a DB server), an analysis facility (Linux clusters and cluster servers on Gigabit Ethernet), an Internet server, and a display facility, linked through a hub and Internet connections, with the Mugunghwa (Koreasat) satellite link and the Ewha AMS RS.]

AMS Analysis Facility at CHEP

1) CHEP AMS Cluster
   Server: 1
     CPU: AMD Athlon MP, dual
     Disk space: 80 GB
     OS: Red Hat 9.0
   Linux clusters: 12
     CPU: AMD Athlon MP, dual
     Disk space: 80 GB x 12 = 960 GB
     OS: Red Hat 7.3

2) KT AMS Cluster
   Server: 1
     CPU: Intel Xeon 2 GHz, dual
     Disk space: 80 GB
     OS: Red Hat 9.0
   Linux clusters:
     CPU: Intel Xeon 2 GHz, dual
     Disk space: 80 GB x 4 = 320 GB - 880 GB
     OS: Red Hat 7.3

MC Production Centers and Their Capacities

MC Production Procedure by Remote Client
Step 1: Write the registered info here.
Step 2: Click here.
Step 3: Choose any one of the datasets.
Step 4: Choose the appropriate dataset.

MC Production Procedure by Remote Client (continued)
Step 5: Choose the appropriate CPU type and CPU clock.
Step 6: Put the appropriate CPU time limit.
Step 7: Put the total number of jobs requested.
Step 8: Put the 'Total Real Time Required'.
Step 9: Choose the MC production mode.
Step 10: Click on 'Submit Request'.
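To make the form fields concrete, here is a minimal sketch of the request a remote client assembles through steps 1-10. The field names and values are illustrative assumptions, not the actual parameter names of the production form:

    # Hypothetical MC production request mirroring the web-form fields above.
    # Field names and example values are illustrative only, not the real form's API.
    mc_request = {
        "user": "registered_user",        # step 1: registered info
        "dataset": "protons.B1",          # steps 3-4: chosen dataset (hypothetical name)
        "cpu_type": "AMD Athlon MP",      # step 5: CPU type
        "cpu_clock_mhz": 1600,            # step 5: CPU clock
        "cpu_time_limit_hours": 48,       # step 6: CPU time limit
        "n_jobs": 100,                    # step 7: total number of jobs requested
        "total_real_time_hours": 72,      # step 8: 'Total Real Time Required'
        "production_mode": "standard",    # step 9: MC production mode
    }

    # Step 10: 'Submit Request' hands a request like this to the production server.
    print(mc_request)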

MC Database Query Form

Status of MC Events at CHEP

1. Photon events with trigger level 1 (data format: Ntuple)
   Columns: Energy [GeV] | N_T [x10^6] | N_S | Size [GB] | Generated by
   [Table of per-energy-bin event counts and sizes; all samples generated or processed at CHEP.]

2. Proton events with trigger level 1 (data format: Ntuple)
   Columns: Energy [GeV] | N_T [x10^6] | N_S | Size [GB] | Generated by
   [Table of per-energy-bin event counts and sizes; samples generated at CHEP and at CERN.]

Data Handling Program: bbFTP
bbFTP is file transfer software developed by Gilles Farrache in Lyon.
- Encoded username and password at connection
- SSH and certificate authentication modules
- Multi-stream transfer
- Big windows as defined in RFC 1323
- On-the-fly data compression
- Automatic retry
- Customizable time-outs
- Transfer simulation
- AFS authentication integration
- RFIO interface
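A minimal sketch of driving a multi-stream bbFTP transfer from Python. The option spellings (-s, -u, -p, -e) and the host and paths are assumptions based on common bbftp usage; check them against the locally installed bbftp documentation before use:

    # Sketch: wrap a multi-stream bbftp transfer with subprocess.
    # Flag names are assumptions; verify against the local bbftp version.
    import subprocess

    def bbftp_put(local_path, remote_path, host, user, streams=8):
        """Send one file to `host` using `streams` parallel TCP streams."""
        cmd = [
            "bbftp",
            "-s",                                     # assumed: use the SSH authentication module
            "-u", user,                               # remote username
            "-p", str(streams),                       # assumed: number of parallel streams
            "-e", f"put {local_path} {remote_path}",  # control command to execute
            host,
        ]
        subprocess.run(cmd, check=True)

    # Example call (hypothetical host and paths):
    # bbftp_put("run123.ntuple", "/ams/data/run123.ntuple", "ams.chep.knu.ac.kr", "amsuser")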

Data Transmission Tests
Transfers over TEIN (10 Mbit/s) were tested in December:
- for a number of TCP/IP streams
- for a range of file sizes
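As a back-of-the-envelope companion to these tests, the effective throughput of a timed transfer is just file size over duration; the file size and duration below are placeholders, not measured TEIN values:

    # Effective throughput from a timed transfer; inputs are placeholders,
    # not measurements from the TEIN tests.
    def throughput_mbit_per_s(file_size_bytes, seconds):
        return file_size_bytes * 8 / 1e6 / seconds

    # e.g. a 100 MB file moved in 2 minutes:
    size = 100 * 1024 * 1024
    print(f"{throughput_mbit_per_s(size, 120):.2f} Mbit/s "
          "out of the 10 Mbit/s TEIN link")

    # Multi-stream clients such as bbFTP aggregate several TCP streams to push
    # the effective rate closer to the link capacity on high-latency paths.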

Summary
CHEP as an AMS Regional Center:
- Prepared an analysis facility and data storage
- Producing MC events
- Progress in MC production with Grid tools