CHEP as an AMS Remote Data Center International HEP DataGrid Workshop CHEP, KNU 2002. 11. 8-9 G.N. Kim, H.B. Park, K.H. Cho, S. Ro, Y.D. Oh, D. Son (Kyungpook)


1 CHEP as an AMS Remote Data Center. International HEP DataGrid Workshop, CHEP, KNU, 2002. 11. 8-9. G.N. Kim, H.B. Park, K.H. Cho, S. Ro, Y.D. Oh, D. Son (Kyungpook), J. Yang (Ewha), Jysoo Lee (KISTI)

2 AMS (Alpha Magnetic Spectrometer). Physics goals: a high-energy experiment on the International Space Station (ISS); installation of the AMS detector on the ISS in 2005, running for 3 or 4 years.
- To search for antimatter (He, C) in space with a sensitivity 10^3 to 10^4 times better than current limits (< 1.1 x 10^-6).
- To search for dark matter: high-statistics precision measurements of the e+-, gamma, and antiproton spectra.
- To study astrophysics: high-statistics precision measurements of the D, 3He, 4He, B, C, 9Be, 10Be spectra.
  * B/C: to understand cosmic-ray propagation in the Galaxy (parameters of the galactic wind).
  * 10Be/9Be: to determine the cosmic-ray confinement time in the Galaxy.
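As a back-of-the-envelope illustration (the current limit and the improvement factor are the slide's; the arithmetic is ours), the quoted 10^3-10^4 improvement on the < 1.1 x 10^-6 antimatter limit implies a sensitivity of roughly 10^-9 to 10^-10:

```python
# Hypothetical sanity check, not from the slides: projected antimatter
# sensitivity after a 10^3 to 10^4 improvement over the current limit.

current_limit = 1.1e-6            # current flux-ratio limit quoted on the slide

best_case = current_limit / 1e4   # 10^4 improvement
worst_case = current_limit / 1e3  # 10^3 improvement

print(f"projected sensitivity: {best_case:.1e} to {worst_case:.1e}")
# i.e. roughly 1.1e-10 to 1.1e-9
```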

3 AMS-02 on the ISS for 3 years; AMS-02 in the cargo bay.

4 AMS Crew Operation Post Data Flow from ISS to Remote AMS Centers

5 Data Flow from ISS to Remote AMS Centers (diagram). The Space Station downlinks via a commercial satellite service to a 0.9-meter dish at White Sands, NM, with five downlink video feeds and telemetry backup. From there, a dedicated service/circuit or the Internet carries telemetry, voice, planning, and commanding to JSC or the MSFC POIC, and on to remote user facilities, which distribute the data locally and conduct science.

6 Data Flow from ISS to Remote AMS Centers (diagram). The AMS high-rate frame MUX on the ISS, together with ACOP, feeds the NASA ground infrastructure: real-time data and H&S (health and status) go from the White Sands, NM facility (GSC) to the Payload Operations Control Center (MSFC, AL); real-time, "dump", and White Sands LOR playback data pass through the Payload Data Service system (short-term and long-term storage) to the Science Operations Center; monitoring and science data, stored data, flight ancillary data, and near-real-time file-transfer playback flow on via external communications to the GSC, telescience centers, and remote AMS sites.

7 AMS Ground Centers (diagram). The POIC@MSFC, AL hosts the HOSC web server and xterm; TReK workstations carry commands, monitoring, H&S data, flight ancillary data, selected AMS science data, the "voice" loop, and video distribution over external communications. The POCC handles RT data, commanding, monitoring, and NRT analysis. The Science Operations Center performs NRT data processing, primary storage, archiving, distribution, science analysis, and MC production on its production farm, data server, and analysis facilities (PC farm). The AMS Remote Center (CHEP, reached via the Internet) performs MC production, data mirroring, and archiving of AMS data, NASA data, and metadata, and serves the AMS remote stations. The GSC buffers data and retransmits it to the SDC.

8 AMS Science Data Center (SDC): data processing and science analysis.
- receives the complete copy of the data
- science analysis
- primary data storage
- data archiving
- data distribution to AMS universities and laboratories
- MC production
The SDC will provide all of these functions and make it possible to process and analyze all data. SDC computing facilities should be sufficient to provide data access and analysis for all members of the collaboration.

9 Data Processing Farm of the SDC. A farm of Pentium (AMD) based systems running Linux is proposed. Depending on the processor clock speed, the farm will contain 25 to 30 nodes.
- Processing node:
  * Processor: dual-CPU 1.5+ GHz or single-CPU 2+ GHz Pentium/AMD
  * Memory: 1 GB RAM
  * Motherboard chipset: Intel or AMD
  * Disk: EIDE 0.5-1.0 TB, 3ware Escalade RAID controller
  * Ethernet adapter: 3 x 100 Mbit/s - 1 Gbit/s
  * Linux OS
- Server node:
  * dual-CPU 1.5+ GHz Pentium/AMD
  * 2 GB RAM
  * 3 TB of disk space with SCSI UW RAID external tower
  * 3 x 100 Mbit/s (or 1 Gbit/s) network controllers
  * Linux OS
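A rough aggregate-capacity check of the proposed farm (node counts and per-node disk are the slide's; the arithmetic and the comparison are ours):

```python
# Aggregate disk of the proposed SDC farm, using only figures from this
# slide; a sketch, not a statement of the actual procurement.

nodes_min, nodes_max = 25, 30      # proposed number of processing nodes
disk_per_node_min_tb = 0.5         # EIDE disk per processing node (TB)
disk_per_node_max_tb = 1.0
server_disk_tb = 3.0               # SCSI UW RAID tower on the server node

farm_min_tb = nodes_min * disk_per_node_min_tb + server_disk_tb
farm_max_tb = nodes_max * disk_per_node_max_tb + server_disk_tb

print(f"aggregate farm disk: {farm_min_tb:.1f}-{farm_max_tb:.1f} TB")
# 15.5-33.0 TB
```

This range brackets the ~20 TB of direct-access disk called for later in the talk.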

10 Analysis Chain: Farms (diagram). Raw data passes through event reconstruction to event summary data (ESD); event simulation produces the equivalent simulated samples. An event filter (selection & reconstruction) yields processed data. Batch physics analysis turns ESD into analysis objects (extracted by physics topic), which feed interactive physics analysis.
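The chain above can be sketched as a toy pipeline (all function names, event fields, and cuts are illustrative, not AMS software):

```python
# Toy model of the analysis chain: raw data -> event reconstruction ->
# ESD -> event filter -> batch analysis -> analysis objects by topic.

def reconstruct(raw_event):
    # stand-in for the general reconstruction program: raw -> ESD
    return {"id": raw_event["id"], "tracks": raw_event["hits"] // 10}

def event_filter(esd_event):
    # selection step: keep only events with at least one track
    return esd_event["tracks"] > 0

def batch_analysis(esd_events, topic):
    # extract analysis objects for one physics topic
    return [{"topic": topic, "id": e["id"]} for e in esd_events]

raw_data = [{"id": i, "hits": i * 7} for i in range(5)]
esd = [reconstruct(e) for e in raw_data]          # event reconstruction
selected = [e for e in esd if event_filter(e)]    # event filter
objects = batch_analysis(selected, "antimatter")  # batch physics analysis

print(len(esd), len(selected), len(objects))  # 5 3 3
```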

11 Table: AMS02 data transmitted to the SDC from the POIC.

Stream | Bandwidth | Data Category | Volume (TB/year)
High Rate | 3-4 Mbit/s | Scientific | 11-15
High Rate | 3-4 Mbit/s | Calibration | 0.01
Slow Rate | 16 kbit/s | Housekeeping | 0.06
Slow Rate | 16 kbit/s | NASA Auxiliary Data | 0.01

Table: AMS02 data volumes. Total AMS02 data volume is about 200 TB.

Origin | Data Category | Volume (TB)
Beam calibrations | Calibration | 0.3
Preflight tests | Calibration | 0.4
3 years flight | Scientific | 33-45
3 years flight | Calibration | 0.03
3 years flight | Housekeeping | 0.18
Data Summary Files (DST) | Ntuples or ROOT files | 165-260
Catalogs | Flat files, ROOT files, or ORACLE | 0.05
Event Tags | Flat files, ROOT files, or ORACLE | 0.2
TDV files | Flat files, ROOT files, or ORACLE | 0.5
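The slide's numbers are internally consistent, which a quick check confirms (all volumes and rates are the slide's; the arithmetic is ours):

```python
# Consistency check of the slide's own figures; nothing here is new data.

SECONDS_PER_YEAR = 3.156e7

def tb_per_year(mbit_per_s):
    # convert a sustained link rate to a yearly volume in TB
    return mbit_per_s * 1e6 / 8 * SECONDS_PER_YEAR / 1e12

# High-rate stream: 3-4 Mbit/s indeed gives the quoted 11-15 TB/year.
print(f"{tb_per_year(3):.1f}-{tb_per_year(4):.1f} TB/year")  # ~11.8-15.8

# Low end of the volume column sums to the quoted ~200 TB total.
low_end = 0.3 + 0.4 + 33 + 0.03 + 0.18 + 165 + 0.05 + 0.2 + 0.5
print(f"total (low end): {low_end:.2f} TB")  # ~199.7 TB
```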

12 Data Storage of the SDC.
Purposes:
- detector verification studies
- calibration
- alignment
- event visualization
- data processing by the general reconstruction program
- data reprocessing
Requirements:
- Tag information for all events during the whole period of data taking must be kept on direct-access disks.
- Raw data taken during the last 9 months and 30% of all ESD should be on direct-access disks (~20 TB).
- All taken and reconstructed data must be archived (~200 TB).

13 ORACLE Database Organization.
- Organization of the database by machine, server, database, and table gives flexibility in loading and in locking data volumes.
- The load on machines A and B should be balanced; most probably both machines will be Linux Pentiums.
- Backup and replication of the database.
(Diagram: Machines A and B each run Servers A and B; each server hosts Databases A and B serving Tasks A, B, and C.)

14 AMS Remote Center(s):
- Monte Carlo production
- data storage and data access for the remote stations
AMS Remote Station(s) and Center(s):
- access to the SDC data storage
  * for detector verification studies
  * for detector calibration purposes
  * for alignment
  * for event visualization
- access to the SDC to get the detector and production status
- access to SDC computing facilities for science analysis
- science analysis using the local computing facilities of universities and laboratories

15 AMS Data Production Flow (diagram). Components: producers; raw data server; Oracle RDBMS holding the Conditions DB and Tag DB; catalogues server; ESD server. Active tables: Hosts, Interfaces, Producers, Servers. Nominal tables: Hosts, Interfaces, Producers, Servers, ...
Steps:
{I} submit the 1st server
{II} "cold" start
{III} read the "active" tables (available hosts, number of servers, producers, jobs/host)
{IV} submit servers
{V} get "run" info (runs to be processed, ESD output path)
{VI} submit producers (LILO, LIRO, RIRO, ...); notify servers
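Steps {I}-{VI} can be sketched as a small orchestration loop. All names and numbers below are illustrative stand-ins; the real system's ORACLE tables and submission tooling are not reproduced here.

```python
# Toy sketch of the production flow: cold start, read active tables,
# submit servers per host, then submit producers per run and job slot.

active_tables = {                     # stand-in for the ORACLE "active" tables
    "hosts": ["node01", "node02", "node03"],
    "jobs_per_host": 2,
}
runs = [{"run": 101, "esd_path": "/esd/101"},   # stand-in "run" info ({V})
        {"run": 102, "esd_path": "/esd/102"}]

def cold_start():
    # {I}-{II}: submit the first server and perform a "cold" start
    return {"servers": [], "producers": []}

def submit_servers(state):
    # {III}-{IV}: read the active tables, then submit one server per host
    for host in active_tables["hosts"]:
        state["servers"].append(f"server@{host}")
    return state

def submit_producers(state):
    # {V}-{VI}: for each run, submit producers and notify the servers
    for run in runs:
        for slot in range(active_tables["jobs_per_host"]):
            state["producers"].append((run["run"], run["esd_path"], slot))
    return state

state = submit_producers(submit_servers(cold_start()))
print(len(state["servers"]), len(state["producers"]))  # 3 4
```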

16 Connectivity from AMS02 on the ISS to CHEP (diagram). The ISS downlinks five video feeds and telemetry backup via a commercial satellite service to White Sands, NM; telemetry, voice, planning, and commanding pass through JSC and the MSFC POIC/POCC over NISN to the MIT LANs (B4207, vBNS), then via vBNS to the Chicago NAP and CERN (SDC), and on to CHEP through ISP-1/ISP-2/ISP-3 and the Koreasat (Mugunghwa) satellite to the RC and RS.
- ISP: Internet Service Provider
- NAP: Network Access Point
- vBNS: very high-speed Backbone Network Service
- NISN: NASA Integrated Services Network
- POCC: Payload Operations Control Center
- POIC: Payload Operations Integration Center

17 CHEP Analysis Facility, AMS RC (diagram):
- tape library (~200 TB), disk storage (20 TB), and DB server
- data storage (20-200 TB)
- Linux clusters (analysis facility, 200 CPUs) and cluster servers on Gigabit Ethernet
- hub, Internet server, and display facility
- the Ewha AMS RS connects via the Koreasat (Mugunghwa) satellite

18 Network Configuration, July-Aug 2002 (diagram). CHEP servers and PCs sit on Gigabit switches (IBM 8271; Gigabit Ethernet/ATM155 to the Physics Department) behind an L3 switch (C6509) onto KOREN.
- CERN via GEANT-TEIN (EU): 10-45 Mbps (2002), toward 10 Gbps
- Fermilab via KREONET: Gigabit Ethernet
- APII (US): 45 Mbps (2002)
- KEK via APII (Japan): 8 Mbps (2002)
- research traffic: 45 Mbps - 1 Gbps; other traffic via KORNET and Boranet (145 Mbps total)
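To see what these 2002-era link speeds mean for the data volumes quoted earlier, a simple transfer-time estimate (link rates are the slide's; 100% sustained utilization is an assumption):

```python
# How long would one year of high-rate science data (~15 TB) take to
# move over the links on this slide? Pure arithmetic; assumes the link
# is fully dedicated to the transfer.

def days_to_transfer(tb, mbps):
    seconds = tb * 1e12 * 8 / (mbps * 1e6)
    return seconds / 86400

for mbps in (8, 45, 1000):
    print(f"{mbps:>5} Mbps: {days_to_transfer(15, mbps):7.1f} days for 15 TB")
# at 45 Mbps this is about a month, which motivates the Gbps upgrades
```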

19 Connectivity to the Outside from CHEP. KOREN topology: CHEP (Regional Center) in Daegu connects at 1 Gbps to Seoul and Daejeon, and from there to APII Japan (KEK), APII USA (Chicago), TEIN (CERN), APII China (IHEP), and the USA (StarTap, ESNET).
- Singapore: SingAREN through APII (2 Mbps)
- China (preplanned): CSTNET through APII

20 International links (diagram): Europe (CERN) via TEIN; the US (FNAL) via APII-TransPac and the Hyunhai/Genkai link.

