GLAST Large Area Telescope
Gamma-ray Large Area Space Telescope
Instrument Science Operations Center CDR
Section 6: Network and Hardware Architecture
Richard Dubois, SAS System Manager
Outline
SAS Summary Requirements
Pipeline
Networking Requirements
Processing database
Prototype status
Data Storage and Archive
Networking
Proposed Network Topology
Network Monitoring
File exchange
Security
Level III Requirements Summary
Ref: LAT-SS-00020
Basic requirement is to routinely handle ~10 GB/day, arriving in multiple passes from the MOC at 150 Mb/s; a 2 GB pass should take < 2 minutes
Outgoing volume to the SSC is << 1 GB/day
NASA 2810 security regulations: normal security levels for IOCs, as already practiced by computing centers
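As a quick sanity check on those numbers, a minimal sketch of the transfer-time arithmetic, assuming the quoted 150 Mb/s link rate, decimal unit conversions, and no protocol overhead:

```python
# Transfer-time check for the downlink requirement above.
# Assumes 1 GB = 1e9 bytes, 1 Mb = 1e6 bits, and no protocol overhead.

link_rate_mbps = 150      # MOC -> ISOC link rate (Mb/s), from the requirement
pass_size_gb = 2          # single-pass volume (GB)
daily_volume_gb = 10      # total daily volume (GB)

pass_seconds = pass_size_gb * 1e9 * 8 / (link_rate_mbps * 1e6)
daily_minutes = daily_volume_gb * 1e9 * 8 / (link_rate_mbps * 1e6) / 60

print(f"2 GB pass:  {pass_seconds:.0f} s (~{pass_seconds / 60:.1f} min)")  # ~107 s, under 2 min
print(f"10 GB/day:  {daily_minutes:.0f} min of link time per day")         # ~9 min
```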
Pipeline Spec: Functions
The Pipeline facility has five major functions:
  automatically process Level 0 data through reconstruction (Level 1)
  provide near real-time feedback to the IOC
  facilitate the verification and generation of new calibration constants
  produce bulk Monte Carlo simulations
  back up all data that passes through
It must be able to perform these functions in parallel.
Fully configurable, parallel task chains allow great flexibility for use online as well as offline (a minimal sketch follows below)
  Will test the online capabilities during Flight Integration
The pipeline database and server, and the diagnostics database, have been specified (and will need revision after prototype experience!)
  database: LAT-TD-00553
  server: LAT-TD-00773
  diagnostics: LAT-TD-00876
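The sketch below illustrates, in Python, what a configurable, linked chain of applications with per-task state might look like; the task names, states, and example chain are hypothetical illustrations, not the actual pipeline configuration from LAT-TD-00553/00773.

```python
# Minimal sketch of a configurable, linked chain of tasks with per-step state,
# in the spirit of the pipeline description above. The task names, states and
# the example chain are hypothetical, not the real ISOC configuration.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    run: Callable[[str], None]   # action applied to a dataset identifier
    state: str = "PENDING"       # PENDING -> RUNNING -> DONE / FAILED

@dataclass
class TaskChain:
    name: str
    tasks: List[Task] = field(default_factory=list)

    def execute(self, dataset: str) -> None:
        """Run the tasks in order, recording the state of every step."""
        for task in self.tasks:
            task.state = "RUNNING"
            try:
                task.run(dataset)
                task.state = "DONE"
            except Exception:
                task.state = "FAILED"
                break            # downstream steps stay PENDING

# Illustrative chain: Level 0 ingest -> reconstruction -> archive
chain = TaskChain("L0-to-L1", [
    Task("ingest",      lambda ds: print(f"ingest {ds}")),
    Task("reconstruct", lambda ds: print(f"reconstruct {ds}")),
    Task("archive",     lambda ds: print(f"archive {ds}")),
])
chain.execute("downlink_pass_001")
print([(t.name, t.state) for t in chain.tasks])
```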
ISOC Network and Hardware Architecture
[Architecture diagram: the LAT ISOC at SLAC sits behind firewalls on the SLAC network, reaching the MOC and GSSC over the Internet/Abilene network. Elements shown: web server; Linux PCs running ITOS (housekeeping replay, realtime connection, Test Bed); SAS/SP, PVO, FSW and CHS workstations; SCS CPU farm and storage farm; gateway system (Oracle, GINO, FastCopy/DTS); Solaris workstation with VxWorks tools; the LAT Test Bed lab with 1553/LVDS links and spacecraft simulator (SIIS); and an anomaly tracking & notification system.]
Expected Capacity
We routinely made use of processors on the SLAC farm for repeated Monte Carlo simulations lasting weeks
  Expanding the farm network to France and Italy
  Not yet known what our MC needs will be
We are very small compared to our SLAC neighbour BaBar, which the computing center is sized for: CPUs; 300 TB of disk; 6 robotic silos holding ~ GB tapes in total
SLAC computing center has guaranteed our needs for CPU and disk, including maintenance, for the life of the mission
Data rate expanded to ~300 Hz with a fatter pipe and compression
  ~75 CPUs to handle 5 hrs of data in sec/event
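To make the CPU sizing concrete, here is a rough sketch of the arithmetic; only the 300 Hz rate and the 5-hour data span come from the slide, while the per-event time and turnaround below are hypothetical placeholders (the actual figures are not quoted above).

```python
# Rough CPU-sizing arithmetic for one batch of downlinked data.
# event_rate_hz and data_span_hr come from the slide; sec_per_event and
# turnaround_hr are HYPOTHETICAL placeholders chosen only for illustration.

event_rate_hz = 300        # post-compression event rate (from slide)
data_span_hr = 5           # hours of data per batch (from slide)
sec_per_event = 0.25       # assumed reconstruction time per event (placeholder)
turnaround_hr = 5          # assumed time allowed to finish the batch (placeholder)

n_events = event_rate_hz * data_span_hr * 3600        # 5.4e6 events
cpu_seconds = n_events * sec_per_event                # total CPU time required
n_cpus = cpu_seconds / (turnaround_hr * 3600)
print(f"{n_events:.1e} events -> ~{n_cpus:.0f} CPUs")  # ~75 with these placeholders
```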
Straw Budget Profile
Upper limit on needs - approved; dominated by disk/tape costs
Columns are FY05 / FY06 / FY07 / FY08
farm CPU total: 20, 40, 75, 95
farm CPU increment: 35
farm CPU cost: 25, 43.75
compute servers total: 4, 6, 8, 12
compute servers incr: 2
compute srv cost: 3.5, 7
user servers total: 3, 5, 9
user servers incr:
user srv cost: 2.5
pipeline servers total:
pipeline servers incr:
pipeline srv cost:
database server cost: 10
disk (TB) total: 50, 200, 400
disk (TB) incr: 150
disk cost: 125, 600, 800
tapes needed total: 250, 500, 2000, 4000
tapes needed incr: 1500
tape cost: 120, 160
Total cost (k$): 256, 196, 772.25, 994.5
A Possible 10% Solution
Columns are FY05 / FY06 / FY07 / FY08
CPU: 20, 35, 25k, 44k
disk: 25 TB, 25, 40, 200k, 150k, 160k
tape: 20k, 24k, 32k
Total: 245k, 195k, 228k, 217k
Base per flight year of L0 + all digi = ~25 TB, then 10% of 300 Hz recon
Disk is for Flight Int, DC2 and DC3 (WAG)
Pipeline in Pictures
State machine + complete processing record
Expandable and configurable set of processing nodes
Configurable linked list of applications to run
Processing Dataset Catalogue
Processing records
Datasets grouped by task
Dataset info is here
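A minimal sketch of what "datasets grouped by task, each with a processing record" could look like as data structures; the field names, tasks, and paths are illustrative assumptions, not the catalogue schema defined in LAT-TD-00553.

```python
# Minimal sketch of a dataset catalogue keyed by task, with each dataset
# carrying a simple processing record. Field names, tasks and paths are
# illustrative assumptions, not the schema defined in LAT-TD-00553.

from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Dataset:
    name: str        # logical dataset name
    task: str        # pipeline task that produced it
    status: str      # processing record: DONE, FAILED, ...
    location: str    # where the files live (disk path or archive id)

catalogue: Dict[str, List[Dataset]] = defaultdict(list)

def register(ds: Dataset) -> None:
    """Group each newly produced dataset under the task that made it."""
    catalogue[ds.task].append(ds)

register(Dataset("pass_001.digi",  "digitize", "DONE", "/nfs/lat/digi/pass_001"))
register(Dataset("pass_001.recon", "recon",    "DONE", "/nfs/lat/recon/pass_001"))

for task, datasets in catalogue.items():
    print(task, [d.name for d in datasets])
```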
First Prototype - OPUS
Open source project from STScI
In use by several missions
Now outfitted to run the DC1 dataset
Replaced by GINO
OPUS Java managers for pipelines
Gino - Pipeline View
Once we had inserted the Oracle DB and LSF batch, only a small piece of OPUS was left. It is gone now!
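For illustration, a minimal sketch of how a pipeline server might hand one task to LSF batch via bsub; the queue name, job-name convention, and script path are hypothetical placeholders, not the actual Gino submission code.

```python
# Minimal sketch of handing one pipeline task to LSF batch with bsub.
# Queue, job name, log directory and script path are hypothetical
# placeholders, not the actual ISOC configuration.

import subprocess

def submit_to_lsf(task_name: str, script: str, dataset: str,
                  queue: str = "medium", logdir: str = "/tmp/pipeline-logs") -> str:
    """Submit one task to LSF and return bsub's stdout (which contains the job id)."""
    cmd = [
        "bsub",
        "-q", queue,                                   # batch queue
        "-J", f"{task_name}.{dataset}",                # job name, e.g. recon.pass_001
        "-o", f"{logdir}/{task_name}.{dataset}.log",   # stdout/stderr log file
        script, dataset,                               # the task's executable and its argument
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()                       # e.g. "Job <12345> is submitted ..."

# Example (requires an LSF installation):
# print(submit_to_lsf("recon", "/path/to/run_recon.sh", "pass_001"))
```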
Disk and Archives
We expect ~10 GB of raw data per day and assume a comparable volume of events from MC
  Leads to ~40 TB/year for all data types
No longer frightening - keep it all on disk
  Have funding approval for up to 200 TB/yr
Use SLAC's mstore archiving system to keep a copy in the silo
  Already practicing with it and will hook it up to Gino
Archive all data we touch; track it in the dataset catalogue
Not an issue
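A back-of-the-envelope check of the yearly volume, assuming (as a placeholder) a factor-of-four expansion from input data to all derived data types; only the ~10 GB/day raw rate and the comparable MC volume come from the slide.

```python
# Back-of-the-envelope check of the yearly data volume quoted above.
# The expansion factor from input data to all derived data types is an
# assumed placeholder; only the 10 GB/day raw rate and comparable MC
# volume come from the slide.

raw_gb_per_day = 10
mc_gb_per_day = 10            # "comparable volume" of Monte Carlo (from slide)
derived_factor = 4            # assumed ratio of derived products (digi/recon) to input

yearly_tb = (raw_gb_per_day + mc_gb_per_day) * (1 + derived_factor) * 365 / 1000
print(f"~{yearly_tb:.0f} TB/year")   # ~37 TB/yr with these assumptions, same order as the ~40 TB quoted
```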
Network Path: SLAC-Goddard
[Route map: SLAC - Stanford - Oakland (CENIC) - LA - UCAID (Abilene) - Houston - Atlanta - Washington - GSFC; ~77 ms ping]
ISOC Stanford/SLAC Network
SLAC Computing Center
  OC48 connection to the outside world provides data connections to the MOC and SSC
  Hosts the data and processing pipeline
  Transfers MUCH larger datasets around the world for BaBar
  World renowned for network monitoring expertise; we will leverage this to understand our open-internet model
  Sadly, a great deal of expertise with enterprise security as well
Part of the ISOC is expected to be in the new Kavli Institute building on campus
  Connected by fiber (~2 ms ping)
  Mostly monitoring and communicating with processes/data at SLAC
Network Monitoring
Need to understand failover reliability, capacity and latency
LAT Monitoring
Keep track of connections to collaboration sites
  Alerts if they go down
  Fodder for complaints if connectivity is poor
Monitoring nodes at most LAT collaborating institutions
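A minimal sketch of such a connectivity check: ping each monitored site, report the latency, and raise an alert when a site stops answering. The host names are hypothetical placeholders, and a real deployment would feed alerts into a proper notification system rather than printing them.

```python
# Minimal sketch of a connectivity/latency check over a list of collaboration
# sites, raising an alert when a site stops answering pings. Host names are
# hypothetical placeholders; a real deployment would use the actual monitoring
# nodes and a proper notification mechanism instead of print().

import re
import subprocess

SITES = ["lat-node.example.edu", "lat-node.example.fr", "lat-node.example.it"]

def ping_rtt_ms(host: str, count: int = 3) -> float | None:
    """Return the average round-trip time in ms, or None if the host is unreachable."""
    proc = subprocess.run(["ping", "-c", str(count), host],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        return None
    match = re.search(r"= [\d.]+/([\d.]+)/", proc.stdout)   # min/avg/max summary line
    return float(match.group(1)) if match else None

for site in SITES:
    rtt = ping_rtt_ms(site)
    if rtt is None:
        print(f"ALERT: {site} is not responding")
    else:
        print(f"{site}: {rtt:.1f} ms")
```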
File Exchange: DTS & FastCopy
Secure
  No passwords in plain text, etc.
Reliable
  Has to work > 99% of the time (say)
Handle the (small) data volume
  Order 10 GB/day from Goddard (MOC); 0.3 GB/day back to Goddard (SSC)
Keep records of transfers (sketched below)
  Database records of files sent and received
Handshakes
  Both ends agree on what happened
  Some kind of clean error recovery; notification sent out on failures
Web interface to track performance
GOWG is investigating DTS & FastCopy now
  Either will work
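As a sketch of the record-keeping and handshake requirements (not the DTS or FastCopy design), the snippet below logs each outgoing file in a small database and marks it confirmed only when the receiving end reports a matching checksum; the table layout and function names are illustrative assumptions.

```python
# Minimal sketch of the record-keeping/handshake side of a file exchange:
# log each file sent, and mark it CONFIRMED only when the far end reports a
# matching MD5 checksum. Table layout and function names are illustrative
# assumptions, not the DTS or FastCopy design.

import hashlib
import sqlite3
from pathlib import Path

db = sqlite3.connect("transfers.db")
db.execute("""CREATE TABLE IF NOT EXISTS transfers (
                filename TEXT PRIMARY KEY,
                md5      TEXT,
                sent_utc TEXT DEFAULT CURRENT_TIMESTAMP,
                status   TEXT DEFAULT 'SENT')""")

def record_send(path: str) -> str:
    """Log an outgoing file and return the checksum to ship along with it."""
    md5 = hashlib.md5(Path(path).read_bytes()).hexdigest()
    db.execute("INSERT OR REPLACE INTO transfers (filename, md5) VALUES (?, ?)",
               (path, md5))
    db.commit()
    return md5

def confirm_receipt(path: str, remote_md5: str) -> None:
    """Handshake: compare the receiver's checksum with ours and update the record."""
    (local_md5,) = db.execute("SELECT md5 FROM transfers WHERE filename = ?",
                              (path,)).fetchone()
    status = "CONFIRMED" if remote_md5 == local_md5 else "FAILED"
    db.execute("UPDATE transfers SET status = ? WHERE filename = ?", (status, path))
    db.commit()
    if status == "FAILED":
        print(f"ALERT: checksum mismatch for {path}")   # stand-in for a notification
```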
Security
Network security: application vs network
  ssh/VPN among all sites: MOC, SSC and internal ISOC
  A possible avenue is to make all applications themselves secure (i.e. encrypted), using SSL
File and database security
  Controlled membership in disk ACLs
  Controlled access to databases
Depend on SLAC security otherwise
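A minimal sketch of the "secure the application itself" option: wrapping an ordinary TCP connection in TLS so that nothing, passwords included, crosses the wire in plain text. The endpoint below is a hypothetical placeholder, not an actual ISOC service.

```python
# Minimal sketch of the "secure the application itself" option: wrap an
# ordinary TCP client connection in TLS so no traffic (passwords included)
# crosses the wire in plain text. The endpoint is a hypothetical placeholder.

import socket
import ssl

def send_secure_request(host: str, port: int, request: bytes) -> bytes:
    """Open a TLS connection, verify the server certificate, send one request."""
    context = ssl.create_default_context()    # system CA bundle; add cafile= for a private CA
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(request)         # application traffic is now encrypted
            return tls_sock.recv(4096)

# Example (placeholder endpoint):
# print(send_secure_request("isoc-gateway.example.edu", 9443, b"STATUS\n"))
```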
Summary
We are testing out the Gino pipeline as our first prototype
  Getting its first test in Flight Integration support
  Interfaces to the processing database and SLAC batch are done
  Additional practice with DC2 and DC3
We expect to need O(50 TB)/year of disk and ~2-3x that in tape archive
  Not an issue, even if we go up to 200 TB/yr
We expect to use Internet2 connectivity for reliable and fast transfer of data between SLAC and Goddard
  Transfer rates of > 150 Mb/s already demonstrated: < 2 min transfer for a standard downlink - more than adequate
  Starting a program of routine network monitoring to practice
Network security is an ongoing, but largely solved, problem
  There are well-known mechanisms to protect sites
  We will leverage considerable expertise from the SLAC and Stanford networking/security folks