
1 AMS-02 Computing and Ground Data Handling. CHEP 2004, September 29, 2004, Interlaken. Alexei Klimentov (Alexei.Klimentov@cern.ch), ETH Zurich and MIT

2 Outline
- AMS, a particle physics experiment: the STS-91 precursor flight and the AMS-02 ISS mission
- Classes of AMS data
- Data flow
- Ground centers
- Data transmission SW
- AMS-02 distributed Monte-Carlo production

3 AMS: a particle physics experiment in space
Physics goals:
- Accurate, high-statistics measurements of charged cosmic-ray spectra in space above 0.1 GV: nuclei and e-/e+ spectra measurement
- The study of dark matter (90%?)
- Determination of the existence or absence of antimatter in the Universe: look for negative nuclei such as anti-Helium and anti-Carbon
- The study of the origin and composition of cosmic rays: measure isotopes D, He, Li, Be, ...
AMS, the Alpha Magnetic Spectrometer, is scheduled for a three-year mission on the International Space Station (ISS).

4 The operation principles of the apparatus were tested in space during a precursor flight: AMS-01
- Magnet: Nd2Fe14B
- TOF: trigger, velocity and Z
- Si Tracker: charge sign, rigidity, Z
- Aerogel Threshold Cherenkov: velocity
- Anticounters: reject multi-particle events
Results:
- Anti-matter search: anti-He/He < 1.1x10^-6
- Charged cosmic-ray spectra: p, D, e-/e+, He, N
- Geomagnetic effects on cosmic rays: under/over geomagnetic cutoff components
100M events recorded, trigger rates 0.1-1 kHz, DAQ live time 90%

6 AMS-02
- Superconducting Magnet (B = 1 Tesla)
- Transition Radiation Detector (TRD): rejects protons at better than the 10^-2 level, lepton identification up to 300 GeV
- Time Of Flight counters (TOF): time-of-flight measurement to an accuracy of 100 ps
- Silicon Tracker: 3D particle trajectory measurement with 10 um coordinate resolution, and energy-loss measurement
- Anti-Coincidence Veto Counters (ACC): reject particles that leave or enter through the shell of the magnet
- Ring Imaging Cherenkov counter (RICH): measures velocity and charge of particles and nuclei
- Electromagnetic Calorimeter (ECAL): measures the energy of gamma-rays, e- and e+, and distinguishes e-/e+ from hadrons

7 DAQ Numbers

Sub-detector   Channels   Total raw Kbits
Tracker        196,608    3,146
ToF+ACC        384        49
TRD            5,248      84
RICH           27,760     348
ECAL           2,592      47
Total          232.6K     ~3,700

Raw data rate: 3.7 Mbit x 200-2000 Hz = 0.7-7 Gbit/s
Data reduction, filtering: 2 Mbit/s
AMS power budget: 2 kW
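As a quick cross-check of the arithmetic above, here is a minimal sketch (not part of the original slides) that recomputes the per-event raw size from the table and the raw data rate over the quoted trigger range:

```python
# Recompute the DAQ numbers from the table above (values in kbit per event).
RAW_KBITS = {
    "Tracker": 3146,
    "ToF+ACC": 49,
    "TRD": 84,
    "RICH": 348,
    "ECAL": 47,
}

event_bits = sum(RAW_KBITS.values()) * 1000        # ~3.7 Mbit per event
print(f"event size: {event_bits / 1e6:.1f} Mbit")
for rate_hz in (200, 2000):                        # quoted trigger-rate range
    print(f"{rate_hz:4d} Hz -> {event_bits * rate_hz / 1e9:.1f} Gbit/s")
# -> 0.7 Gbit/s at 200 Hz and 7.3 Gbit/s at 2000 Hz,
#    matching the quoted 0.7-7 Gbit/s range
```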

8 AMS (figure)

9 AMS-02 Ground Support Centers
Payload Operations Control Center (POCC) at CERN (first 2-3 months in Houston, TX):
- the AMS "counting room" and the usual source of commands
- receives Health & Status (H&S), monitoring and science data in real time
- receives NASA video
- voice communication with NASA flight operations
Science Operations Center (SOC) at CERN (first 2-3 months in Houston, TX):
- receives the complete copy of ALL data
- data processing and science analysis
- data archiving and distribution to Universities and Laboratories
Ground Support Computers (GSC) at Marshall Space Flight Center, Huntsville, AL:
- receive data from NASA -> buffer -> retransmit to the Science Operations Center
Regional Centers (Aachen, ITEP, Karlsruhe, Lyon, Madrid, Milan, MIT, Nanjing, Shanghai, Taipei, Yale -> 19 centers):
- analysis facilities to support geographically close Universities

10 Classes of AMS Data (Health & Status data)
Critical Health and Status Data:
- status of the detector: magnet state (charging, persistent, quenched, ...), input power (1999 W), temperature (low, high), DAQ state (active, stuck)
- rate < 1 Kbit/s
- needed in Real Time (RT) by the AMS Payload Operations and Control Center (POCC), the ISS crew and NASA ground

11 Classes of AMS Data (Monitoring data)
Monitoring (house-keeping, slow control) data:
- all slow control data from all slow control sensors
- data rate ~10 Kbit/s
- needed in Near Real Time (NRT) by the AMS POCC; visible to the ISS crew
- complete copy "later" (close to NRT) for science analysis

12 Classes of AMS Data (Science data)
Science data:
- events and sub-detector calibrations
- samples (approximately 10%) to the POCC to monitor detector performance in RT
- complete copy "later" to the SOC for event reconstruction and physics analysis
- 2 Mbit/s orbit average

13 Classes of AMS Data (Flight Ancillary data)
Flight ancillary data:
- ISS latitude, attitude, speed, etc.
- needed in Near Real Time (NRT) by the AMS POCC
- complete copy "later" (close to NRT) for science analysis
- 2 Kbit/s

14 Commands
Command types:
- simple, fixed (few bytes; require H&S data visibility)
- short, variable length (<1 KByte; require monitoring data)
- files, variable length (KBytes to MBytes; require science data)
In the beginning we may need to command intensively; over the long haul we anticipate:
- a few simple or short commands per orbit
- occasional (days to weeks) periods of heavy commanding
- very occasional (weeks to months) file loading
Command sources:
- Ground: one source of commands, the POCC
- Crew via ACOP: contingency use only, simple or short commands
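The three command classes map naturally onto a size-based classification. The sketch below is illustrative only: the class names, the 8-byte cutoff for "few bytes", and the function name are my assumptions, not the real AMS command protocol.

```python
from enum import Enum

class CmdClass(Enum):
    # Each class is paired with the downlink stream needed to verify it,
    # as listed on the slide.
    SIMPLE = "simple, fixed (requires H&S data visibility)"
    SHORT = "short, variable length (requires monitoring data)"
    FILE = "file, variable length (requires science data)"

def classify_command(payload: bytes) -> CmdClass:
    """Hypothetical size-based classification; thresholds are assumed."""
    if len(payload) <= 8:            # "few bytes" -- assumed cutoff
        return CmdClass.SIMPLE
    if len(payload) < 1024:          # "<1 KByte" per the slide
        return CmdClass.SHORT
    return CmdClass.FILE

print(classify_command(b"\x01\x02"))   # CmdClass.SIMPLE
print(classify_command(b"x" * 4096))   # CmdClass.FILE
```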

16 AMS Crew Operations Post (ACOP)
ACOP is a general-purpose computer; its main duties are to:
- serve as an internal recording device to preserve data
- allow burst-mode playback operations at up to 20 times the original speed, to assist in data management
- allow access to the MRDL link (another path to ground), and enable AMS to take advantage of future ISS upgrades such as 100BaseT MRDL
- potentially provide additional data compression/triggering functions to minimize the data downlink
- serve as an additional command interface: upload of files to AMS (adjust main triggering), direct commanding to AMS

17 AMS Ground Data Handling
How much of the AMS data (and how soon) gets to the ground centers determines how well:
- detector performance can be monitored
- detector performance can be optimized
- detector performance can be turned into physics

18 AMS Ground Data Centers (figure)

19 Ground Support Computers
- at Marshall Space Flight Center (MSFC), Huntsville, AL
- receive data from the NASA Payload Operations and Integration Center (POIC)
- buffer data until retransmission to the AMS Science Operations Center (SOC) and, if necessary, to the AMS Payload Operations and Control Center (POCC)
- run unattended 24 hours/day, 7 days/week
- must buffer about 600 GB (data for 2 weeks)
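A back-of-envelope check of the 600 GB figure, assuming the 2 Mbit/s average science rate from the earlier slides (the sizing rationale is my assumption; the slides do not spell it out):

```python
# Two weeks of data at the 2 Mbit/s average downlink rate.
avg_rate_bit_s = 2e6                  # average science-data rate, 2 Mbit/s
seconds = 14 * 24 * 3600              # two weeks to bridge
gb = avg_rate_bit_s * seconds / 8 / 1e9
print(f"{gb:.0f} GB")                 # ~302 GB; a 600 GB buffer thus carries
                                      # roughly a factor-of-two safety margin
```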

20 Payload Operations and Control Center
- the AMS "counting room"
- the usual source of AMS commands
- receives H&S, monitoring, science and NASA data in real-time mode
- monitors the detector state and performance
- processes about 10% of the data in near-real-time mode to provide fast information to the shift taker
- video distribution "box"
- voice loops with NASA

21 Science Operations Center
- receives the complete copy of ALL data
- data reconstruction and processing: generates event summary data and does event classification
- science analysis
- archives and records ALL raw, reconstructed and H&S data
- data distribution to AMS Universities and Laboratories

22 Regional Centers
- analysis facilities to support local AMS Universities and Laboratories
- Monte-Carlo production
- mirroring of DSTs (ESDs)
- access to the SOC data storage (event visualization, detector and data production status, samples of data, video distribution)

23 Telescience Resource Kit (TReK)
TReK is a suite of software applications that provide:
- local ground support system functions
- an interface with the POIC to utilize POIC remote ground support system services
TReK is suitable for individuals or payload teams that need to monitor and control low/medium data rate payloads.
The initial cost of a TReK system is less than $5,000.
(M. Schneider, MSFC/NASA)

24 ISS Payload Telemetry and Command Flow (diagram: payload data and uplinks flowing between the International Space Station and Space Shuttle, TDRS, the White Sands Complex, SSCC/MCC-H, the POIC (EHS, PDSS, PPS), International Partners, Telescience Support Centers (TSCs), and US investigator sites) (M. Schneider, MSFC/NASA)

25 Telemetry Services
TReK telemetry capabilities:
- receive, process, record, forward and play back telemetry packets
- display, record and monitor telemetry parameters
- view incoming telemetry packets (hex/text format)
- telemetry processing statistics
(M. Schneider, MSFC/NASA)

26 Command Services
TReK command capabilities:
- command system status & configuration information
- remotely initiated commands (command built from the POIC DB)
- remotely generated commands (command built at the remote site)
- command updates
- command responses
- command session recording/viewing
- command track
- command statistics
(M. Schneider, MSFC/NASA)

27 Internet Voice Distribution System (IVoDS) (diagram: EVoDS voice switch and keysets at the MSFC Payload Operations and Integration Center, VoIP telephony gateways, voice loops, administrator/conference/VPN servers, and administrator, PAYCOM and IVoDS user client PCs at remote sites, exchanging encrypted IP voice packets over NASA, research and public IP networks)
- Windows NT/2000 PC with COTS sound card and headset
- web-based for easy installation and use
- PC location very mobile: anywhere on the LAN
- challenge: minor variations in PC hardware and software configurations at remote sites
(K. Nichols, MSFC/NASA)

28 IVoDS User Client Capabilities
- monitor 8 conferences simultaneously
- talk on one of these eight conferences using the spacebar, the 'Click to Talk' button, or 'Mic Lock'
- user selects from an authorized subset of the available voice conferences
- volume control/mute for individual conferences
- talk and monitor privileges assigned per user and conference
- lighted talk-traffic indication per conference
- talk to the crew on Space (Air) to Ground if enabled by PAYCOM
- save and load conference configurations
- set password
(K. Nichols, MSFC/NASA)

29 Data Transmission
Given the long running period (3+ years) and the way data will be transmitted from the detector to the ground centers, high-rate data transfer between MSFC (AL) and the AMS centers (POCC, SOC) will become of paramount importance.

30 Data Transmission SW
- to speed up data transfer
- to encrypt sensitive data and not encrypt bulk data
- to run in batch mode with automatic retry in case of failure
... started looking around and came up with bbftp (still looking for a good network monitoring tool). bbftp was developed in BaBar and is used to transmit data from SLAC to IN2P3@Lyon; we adapted it for AMS and wrote service and control programs.

31 Data Transmission SW (the inside details)
Server:
- copy data files between directories (optional)
- scan data directories and make a list of files to be transmitted
- purge successfully transmitted files and do book-keeping of transmission sessions
Client (see the sketch below):
- periodically connect to the server and check whether new data are available
- bbftp the new data and update the transmission status in the catalogues
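A minimal sketch of the client loop described above. The catalogue functions and the bbftp invocation are placeholders of my own: the slides do not show the actual AMS service and control programs, and the exact bbftp options should be checked against the bbftp documentation.

```python
import subprocess
import time

SERVER = "ams-gsc.example.org"   # placeholder host name
POLL_SECONDS = 300               # assumed polling interval

def check_new_files() -> list[str]:
    """Hypothetical stub: ask the server-side catalogue for new files."""
    return []

def mark_transmitted(path: str) -> None:
    """Hypothetical stub: update the transmission status in the catalogue."""

def transfer(path: str) -> bool:
    """Push one file with the bbftp command-line client (options illustrative)."""
    cmd = ["bbftp", "-u", "amsdaq", "-e", f"put {path} {path}", SERVER]
    return subprocess.run(cmd).returncode == 0   # on failure, retry next cycle

while True:
    for path in check_new_files():
        if transfer(path):
            mark_transmitted(path)
    time.sleep(POLL_SECONDS)
```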

32 Data Transmission Tests (figure)

33 AMS Distributed Data Production
- Computer simulation of the detector response is a good way to study not only detector performance, but also to test the HW and SW solutions that will be used for AMS-02 data processing
- Data are generated in 19 Universities and Laboratories, transmitted to CERN, and are then available for analysis

34 Year 2004 MC Production
- started January 15, 2004
- central MC database
- distributed MC production
- central MC storage and archiving
- distributed access

35 AMS Distributed Data Production
- CORBA client/server for inter-process communication
- Central relational database (ORACLE) to store regional center descriptions, the list of authorized users, the list of known hosts, job parameters, file catalogues, versions of programs and executable files, etc.
- Automated and standalone modes for processing jobs
Automated mode (see the job-file sketch below):
- a job description file is generated on remote user request (via the Web)
- the user submits the job file to a local batch system
- the job requests from the central server: calibration constants, slow control corrections, and service info (e.g. the path to store DSTs)
- the central server keeps the table of active clients and the number of processed events, and handles all interactions with the database and data transmission
Standalone mode:
- a job description file is generated on remote user request (via the Web)
- the user receives a stripped database version and submits the job
- the client doesn't communicate with the central server during job execution
- DSTs and log files are bbftp'ed to CERN by the user
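The slides do not show the job description format, so the sketch below only illustrates the kind of parameters the text says are exchanged (job parameters, program versions, DST paths); every field name and value here is an invented placeholder, and JSON is an arbitrary choice of serialization.

```python
import json

# Hypothetical job description, as might be generated on a remote user's
# Web request in automated mode; all keys and values are placeholders.
job = {
    "job_id": 12345,
    "mode": "automated",           # or "standalone"
    "particle": "protons",
    "events": 100_000,
    "program_version": "v4.00",    # executables version from the central DB
    "dst_path": "/ams/mc/dst",     # service info returned by the central server
}

with open("job12345.json", "w") as f:
    json.dump(job, f, indent=2)
```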

36 MC Production Statistics

Particle             Million events   % of total
protons              7,630            99.9
helium               3,750            99.6
electrons            1,280            99.7
positrons            1,280            100
deuterons            250              100
anti-protons         352.5            100
carbon               291.5            97.2
photons              128              100
nuclei (Z = 3...28)  856.2            85

URL: pcamss0.cern.ch/mm.html
185 days, 1,196 computers, 8.4 TB, 250 PIII 1 GHz/day

37 Y2004 MC Production Highlights
- Data are generated at remote sites, transmitted to AMS@CERN and made available for analysis (only 20% of the data was generated at CERN)
- Transmission, process communication and book-keeping programs have been debugged; the same approach will be used for AMS-02 data handling
- 185 days of running (~97% stability)
- 18 Universities & Labs
- 8.4 TBytes of data produced, stored and archived
- Peak rate 130 GB/day (12 Mbit/s), average 55 GB/day (AMS-02 raw data transfer ~24 GB/day)
- 1,196 computers
- Daily CPU equivalent of 250 1-GHz CPUs running 24h over 184 days
A good simulation of AMS-02 data processing and analysis.
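A quick consistency check of the quoted rates (my own arithmetic, not from the slides):

```python
# 130 GB/day peak expressed as a bit rate, and the raw per-day average.
peak_bits_per_day = 130e9 * 8
print(f"peak: {peak_bits_per_day / 86400 / 1e6:.0f} Mbit/s")   # ~12 Mbit/s
print(f"raw average: {8.4e12 / 185 / 1e9:.0f} GB/day")         # ~45 GB/day
# The quoted 55 GB/day average is presumably computed over active transfer
# days only, which puts it somewhat above the raw 185-day average.
```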

38 List of Acronyms
ACOP: AMS Crew Operations Post
Al: Alabama
AMS: Alpha Magnetic Spectrometer
amsbbftp: AMS ftp (see bbftp)
bbftp: BaBar ftp (see FTP)
bps: bits per second
CA: California
CERN: European Laboratory for Particle Physics, Geneva, CH
CET: Central European Time
DLT: Digital Linear Tape
DST: Data Summary Tape
ESD: Event Summary Data
FTP: File Transfer Protocol
GB: GigaByte
GSC: Ground Support Computers
H&S: Health and Status data
HW: Hardware
Hz: Hertz
ISS: International Space Station
LAN: Local Area Network
LOR: Loss Of Record
LTO: Linear Tape Open
MRDL: Medium Rate Data Link
MSFC: Marshall Space Flight Center, Huntsville, Alabama
NASA: National Aeronautics and Space Administration
NCFTP: see FTP
NM: New Mexico
NRT: Near Real Time
PDSS: Payload Data Service System
POCC: Payload Operations Control Center
POIC: Payload Operations and Integration Center, MSFC
RAID: Redundant Array of Inexpensive Disks
RT: Real Time
RTDS: Real Time Data System
SMP: Symmetric Multi-Processor
SOC: Science Operations Center
STS: Space Shuttle
SW: Software
TB, TByte: TeraByte
TDRSS: Tracking & Data Relay Satellite System
TReK: Telescience Resource Kit

