Slide 1: AMS-02 POCC & SOC
MSFC, 9-Apr-2009
Mike Capell, Avionics & Operations Lead, Senior Research Scientist
ISS: 108 x 80 m, 420 t, 86 kW, 400 km altitude
AMS: 5 x 4 x 3 m, 7 t, 2.3+ kW, 3+ years

Slide 2: AMS-02 Electrical Interfaces on ISS
(Block diagram: AMS on the ISS, linked to Earth via TDRS; UMA/EVA connection, ACOP with disk, S-band and Ku-band ground paths to the AMS POCC; legend distinguishes Data, M&C and Power links.)
- Power: 109-124 VDC, ~2.3 kW
- LRDL (1553B bus) for command & monitoring: 1 kbit/s in, 10 kbit/s out, 10 B/s Critical Health Data (CHD)
- HRDL for event data: TAXI fiber optic (RS-422 while on the STS)
- S-band: CHD and command monitoring
- Data link duty cycle: ~70%; contingency CHD duty cycle: > 90%
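What these rates imply for daily data volumes, as a minimal back-of-the-envelope sketch in Python (the ~4 Mbit/s HRDL science figure is the downlink budget quoted on the next slide; treating the duty cycles as simple averaging factors is an assumption):

```python
# Rough daily data-volume budget for the AMS-02 links on ISS.
# Rates from the slide; the 4 Mbit/s HRDL figure is the downlink
# budget quoted on the next slide, not a measured value.

SECONDS_PER_DAY = 86_400

links = {
    # name: (rate in bit/s, duty cycle)
    "LRDL out (monitoring)":    (10_000,    0.70),
    "LRDL in (commands)":       (1_000,     0.70),
    "CHD (10 B/s)":             (80,        0.90),
    "HRDL science (~4 Mbit/s)": (4_000_000, 0.70),
}

for name, (rate, duty) in links.items():
    gbytes = rate * duty * SECONDS_PER_DAY / 8 / 1e9
    print(f"{name:28s} ~{gbytes:8.3f} GB/day")
```

At a ~70% duty cycle the HRDL science stream alone works out to roughly 30 GB/day.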

Slide 3: Subdetector Requirements: Summary

  Subdetector    Requirements   Channels       Raw kbits
  U: TRD         Gas gain       5,248          84
  S: ToF+ACC     100 ps         48*4*8         49
  T: Tracker     few fC         196,608        3,146
  R: RICH        Single γ       680*16*2       348
  E: ECAL        1:60,000       324*(4*2+1)    47

  Raw kbits/event: 3,674
  × Event rate: ≤ 2 kHz
  = Total raw data rate: ~7 Gbit/s

7 Gbit/s >> 4 Mbit/s ⇒ restrict event rate & size.
Specify, design, develop and produce high-speed, high-capacity, low-power, low-weight, reliable signal & data processing, ON ORBIT!
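The slide's arithmetic can be reproduced directly; a minimal sketch:

```python
# Reproduce the slide's raw-data-rate estimate from the
# per-subdetector raw event sizes in the table above.

raw_kbits = {            # raw kbits per event, from the table
    "TRD":     84,
    "ToF+ACC": 49,
    "Tracker": 3_146,
    "RICH":    348,
    "ECAL":    47,
}

kbits_per_event = sum(raw_kbits.values())   # 3,674 kbit/event
event_rate_hz = 2_000                       # trigger rate <= 2 kHz

raw_rate_gbps = kbits_per_event * 1_000 * event_rate_hz / 1e9
print(f"{kbits_per_event} kbit/event x {event_rate_hz} Hz "
      f"= {raw_rate_gbps:.1f} Gbit/s")      # ~7.3 Gbit/s >> 4 Mbit/s downlink
```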

Slide 4: Examples of Data Reduction (UDR2, TDR2) boards. 70 types of boards, 454 boards in total.

Slide 5: Electronics on mounting jigs, 4 Aug 2008 (photo; CAB indicated).

Slide 6: Electronics cabled on mounting jigs, 5 Nov 2008 (photo).

Slide 7: AMS Operational Locations
1. Pre-integration (done) and integration in the AMS clean room, B867. Operations Center in B892.
2. Beam test at CERN, EHN1. Operations Center co-located or in B892.
3. Thermal-vacuum and electromagnetic-compatibility tests at ESA ESTEC, Noordwijk, NL. Operations Center at ESTEC.
4. Interface verification testing at the SSPF building, Kennedy Space Center (KSC), FL. Operations Center at KSC.
5. End-to-end testing on the launch pad. Operations Centers at KSC and then at Johnson Space Center (JSC), Houston, TX.
6. Inside the Space Shuttle en route to the Station. Operations at JSC.
7. Installation, activation and full operations on the Space Station. Operations at JSC (~3 months), then shifting to CERN B892.
7'. Backup Operations Center in the US (U. Maryland).

Slide 8: AMS Data Flow (ISS) (M. Capell, Oct 2008)
(Data-flow diagram, ISS to ground:)
- On the ISS, the high-rate system sends AMS science data and Monitoring Data A over the HRDL (fiber optic); the low-rate system exchanges Monitoring Data B, Critical Health Data, ISS ancillary data and commands over the LRDL (1553B).
- Science data and Monitoring Data A+B go to ground via TDRS on Ku-band; commands and Critical Health Data use S-band.
- NASA networks route everything through White Sands, NM to the Payload Data Service System / Real Time Data Service (short-term and long-term storage) at the POIC, MSFC, AL.
- AMS ground support computers at the POIC: AMS GSC-N (nominal: all data over UDP) and AMS GSC-R (redundant: all data); playback via file transfer; voice loop; commands routed to GSC-N and GSC-R.
- Across "the Wall", an Internet connection (TCP/IP, FTP, X) carries everything to the POCC (Payload Operations Control Center, a.k.a. "online"), the SOC (Science Operations Center, a.k.a. "offline") and the Regional Centers.
- "All Data" = science data, Monitoring Data A+B, ISS ancillary data and Critical Health Data.
- (The diagram legend distinguishes AMS-provided from NASA-provided elements.)
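As an illustration of the nominal "All Data over UDP" leg from the GSC to the POCC, a minimal sketch; the endpoint address is a stand-in and the real AMS GSC software is not shown on the slide:

```python
# Minimal sketch of the "All Data over UDP" leg: a ground-support
# computer relaying received telemetry frames to the POCC as UDP
# datagrams. The address is a local stand-in for the POCC host;
# the real AMS GSC software is not described on the slide.
import socket

POCC_ADDR = ("127.0.0.1", 20000)   # stand-in for the POCC endpoint

def relay_frames(frames):
    """Send each frame as one UDP datagram (fire-and-forget; the
    redundant GSC-R path covers loss of this nominal stream)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for frame in frames:
            sock.sendto(frame, POCC_ADDR)

if __name__ == "__main__":
    relay_frames([b"\x00" * 1024 for _ in range(10)])  # dummy frames
```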

Slide 9: Ground Support Computers (as of today) (photo).

Slide 10: Ground Support Computers in Clean Room
(Diagram: five PCs bridging the AMS test buses to the network; a backup stream and RS232-USB JMDC terminals are also shown.)
Interfaces: PCI-HRDL/PMC-HRDL cards on an HRDL splitter, USB-422 on RS-422, PCMCIA ACE 1553 cards on the ISS 1553 and STS 1553 buses, and an EPP box on the AMS CAN buses.
Machines and roles:
- PCGSC00 (commanding & recording): NFS server for HRDL/422 frames, command server for HRDL/422
- PCGSC04 (recording): NFS server for HRDL/422 frames (backup stream)
- PCGSC01 (commanding & recording): NFS server for ISS 1553 frames, command server for ISS 1553
- PCGSC03 (commanding & recording): NFS server for STS 1553 frames, command server for STS 1553
- PCGSC02 (commanding & monitoring): AMS CAN buses
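A minimal sketch of the "recording" role these machines share: append incoming frames to files in a directory exported over NFS. The spool path, file naming and length-prefixed framing are assumptions for illustration:

```python
# Sketch of the "recording" role of a clean-room GSC: append incoming
# frames to hourly files in a directory that is exported over NFS.
# The path, naming scheme and framing are illustrative assumptions.
import time
from pathlib import Path

SPOOL = Path("/data/hrdl_frames")   # hypothetical NFS-exported directory

def record(frame: bytes) -> None:
    """Append one frame, length-prefixed, to the current hourly file."""
    SPOOL.mkdir(parents=True, exist_ok=True)
    fname = SPOOL / time.strftime("frames_%Y%m%d_%H.bin")
    with open(fname, "ab") as f:
        f.write(len(frame).to_bytes(4, "big"))  # 4-byte length prefix
        f.write(frame)
```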

Slide 11: AMS POCC at CERN
(Diagram: AMS GSC-N and GSC-R at the POIC, MSFC, AL feed all data, commands and the voice loop across "the Wall" to the POCC at CERN; the ISS SOC consumes the same streams. "All Data" = science data, Monitoring Data A+B, ISS ancillary data and Critical Health Data.)
POCC consoles:
0. Management
1. Shift Leader (alt. Commander)
2. Commander
3. DAQ + Trigger + Run Control
4. Magnet + Cryocoolers
5. Tracker + Laser Alignment + Cooling
6. TRD + TRD Gas
7. TOF + ACC
8. RICH
9. ECAL
10. PDS + GPS + Star Tracker
11. Thermal
plus a Storage Server (+10 TByte), a Production Node and the central Switch.
Notes:
- The diagram shows the functional data flow in the POCC; in reality, everything is done over Ethernet with the Switch at the center of a star configuration.
- Voice loop: an open-source voice loop server handles internal communications and mixes in the voice channel to/from NASA.
- Commands: one specific station can actually send commands; the others can prepare them (see the sketch after this slide).
- Video: live NASA video feeds are just another network service.
- "All Data" is stored, processed and redistributed from the Storage Server; extra processing power is provided by the Production Node.
- A POCC station is a PC + 2 screens + headset + a laptop location.
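The commanding rule (any console prepares, only the Commander console sends) can be illustrated with a minimal sketch; the class and station names are hypothetical:

```python
# Sketch of the POCC commanding rule: any station may prepare
# commands, but only the station currently designated as Commander
# may release them to the GSC uplink. Names are illustrative.

class CommandDesk:
    def __init__(self, commanding_station: str):
        self.commanding_station = commanding_station
        self.prepared = []                  # queue of (station, command)

    def prepare(self, station: str, command: bytes) -> None:
        """Any console may stage a command for review."""
        self.prepared.append((station, command))

    def send(self, station: str, command: bytes) -> None:
        """Only the designated Commander console may uplink."""
        if station != self.commanding_station:
            raise PermissionError(f"{station} may prepare but not send")
        print(f"uplinking {command!r} via GSC")  # stand-in for the real uplink

desk = CommandDesk(commanding_station="Commander")
desk.prepare("Tracker", b"TRK_HV_ON")   # staged by a subdetector console
desk.send("Commander", b"TRK_HV_ON")    # released by the Commander
```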

Slide 12: POCC Commanding and Monitoring Consoles (as of today) (photo; consoles PCAAL12, PCAAL13, PCAAL14).

Slide 13: AMS SOC at CERN
All members of the cluster are connected to two Fibre Channel (FC) switches and have direct, redundant access to all arrays using GFS software. These arrays are also exported via NFS to the production nodes.
(Diagram:)
- Servers on FC Switch 1 and FC Switch 2: File Servers 1-4, DB Servers 1-2, AMS Gateway
- Arrays: RAID 5 40 TB (GFS), RAID 5 10 TB (GFS), three RAID 6 126 TB (GFS)
- Production Nodes 1-6 attached over Ethernet (NFS)
- Status legend: already exist / planned for 2009 / planned for 2010-2012
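Tallying the labelled array capacities, a trivial sketch (RAID overhead and the per-array phasing between "exist", "2009" and "2010-2012" are not specified per box, so this is just the raw sum):

```python
# Sum the raw array capacities labelled on the slide. This ignores
# RAID overhead and does not split the total by deployment phase,
# since the slide does not assign phases to individual arrays.

arrays_tb = {
    "RAID 5, 40 TB":        40,
    "RAID 5, 10 TB":        10,
    "RAID 6, 126 TB (x3)":  3 * 126,
}
print(f"total labelled capacity: {sum(arrays_tb.values())} TB")  # 428 TB
```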

Slide 14: SOC Computers (as of today)
(Photo: DB Server 1, DB Server 2, AMS Gateway, RAID array & FC Switch 1, File Server 1, File Server 2, FC Switch 2, RAID array, UPSes.)

Slide 15: SOC Software
Currently:
- SLC4 (64-bit) operating system on all servers
- Oracle 10g (64-bit) installed, database migrated
- LSF batch system installed and in use (see the sketch below)
- GFS open-source shared file system in use on the disk arrays, with a few glitches
Future: these will be updated following CERN IT best practices.
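A minimal sketch of driving the LSF batch system from Python; `bsub -q` and `-o` are standard LSF options, while the queue name and script are hypothetical:

```python
# Sketch of submitting production work to the LSF batch system
# mentioned on the slide. `bsub -q <queue> -o <logfile>` are standard
# LSF options; the queue name and the script are hypothetical.
import subprocess

def submit(script: str, queue: str = "ams_prod") -> None:
    """Submit one job script to LSF, logging stdout to <script>.log."""
    subprocess.run(
        ["bsub", "-q", queue, "-o", script + ".log", script],
        check=True,
    )

submit("./rerun_chunk_001.sh")   # hypothetical production script
```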

Slide 16: Preintegration Cosmic Ray Data Rerun
To verify the concept of Remote Center data reruns, a trial rerun was done at CNAF/Milano.
- A few initial problems were resolved; e.g., the Data Transfer/Data Management software was modified to cope with differences between data and MC (it had been used since 2005 for MC and had transferred 20 TByte).
- The complete rerun took 30 days, mainly because of CNAF/Castor maintenance (in preparation for the LHC).
Lessons learned:
- Larger bandwidth to the Regional Center may be needed (see the estimate below).
- It is essential to have raw data locally for reruns.
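A rough estimate behind the bandwidth lesson: how long moving the 20 TByte sample takes at various link speeds (the candidate speeds are assumptions, not figures from the slide):

```python
# How long does moving the 20 TByte rerun sample take at various link
# speeds? A simple estimate behind the "larger bandwidth to the RC may
# be needed" lesson; the candidate link speeds are assumptions.

DATASET_BITS = 20e12 * 8                     # 20 TByte in bits

for mbps in (100, 1_000, 10_000):            # hypothetical link speeds
    days = DATASET_BITS / (mbps * 1e6) / 86_400
    print(f"{mbps:>6} Mbit/s -> {days:6.1f} days")
```

At 100 Mbit/s the transfer alone takes more than two weeks, which is the scale of the 30-day rerun.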

