Computing Architecture


1 Computing Architecture
Jeff Kern

2 ngVLA Definition (from a software perspective)
ngVLA (noun): A geographically distributed, large-N array for PI-driven science with high sensitivity, wide instantaneous bandwidth, and low operational cost.

3 ngVLA Computing Architecture Constraints
Geographically distributed
- Antennas must be self-protecting
- Local control for maintenance
- Minimize tight central communication loops

PI-Driven Science
- Need full suite of operations tools (Proposal, Observing Preparation, Data Processing)
- Produce science-ready products

Large N Array (200+)
- Avoid central bottlenecks (logging, monitoring)
- Snapshot observations likely
- Robust to elements entering and leaving the array

High Sensitivity
- High calibration and imaging accuracy

Wide Instantaneous Bandwidth
- RFI concerns
- Large (but not unsupportable) data volumes
- Data volume (again)

Low Operational Cost
- Automation everywhere possible

4 Control and Monitoring
In principle, M&C of the ngVLA is very similar to existing arrays:
- Devices have gotten smarter and interface to software at a higher level
- Use time tags for time-critical operations and do synchronization in HW
- Use standard communication protocols
- No real-time OS

The large number of antennas and the larger geographic area imply:
- Subarrays will be used nearly continuously (esp. during commissioning)
- Antennas need to be very independent, with local monitoring and error detection
- Send sky position rather than encoder commands

Operations optimizations:
- Dynamic scheduling algorithm
- Need a more sophisticated weather model than typically used (conditions are variable across the array)
- Hardware drivers should self-diagnose when updates to parameter data need to be made, or when operating conditions are out of normal
- May need to include more self-check modes in the hardware design
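The "sky position plus time tag" idea above can be sketched in code. This is a minimal illustration, not the ngVLA interface: the message format, field names, and the toy sidereal-time model are all assumptions made for the example. The point is that the central system sends a celestial position and an execution time, and each antenna converts to drive coordinates locally, avoiding a tight central control loop.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PointingCommand:
    """Hypothetical time-tagged pointing command (illustrative only)."""
    ra_deg: float         # ICRS right ascension
    dec_deg: float        # ICRS declination
    exec_time_tai: float  # TAI seconds at which the command takes effect

def local_sidereal_time_deg(time_tai: float, longitude_deg: float) -> float:
    # Placeholder LST model: a real system would use an astrometry
    # library; here we only show the structure of the computation.
    earth_rotation_deg_per_s = 360.0 / 86164.0905  # one sidereal day
    return (time_tai * earth_rotation_deg_per_s + longitude_deg) % 360.0

def to_hour_angle_deg(cmd: PointingCommand, longitude_deg: float) -> float:
    """Antenna-side conversion: sky position + time tag -> hour angle.

    Encoder az/el would follow from hour angle, declination, and the
    site latitude; that step is omitted to keep the sketch short.
    """
    lst = local_sidereal_time_deg(cmd.exec_time_tai, longitude_deg)
    return (lst - cmd.ra_deg) % 360.0

cmd = PointingCommand(ra_deg=83.63, dec_deg=22.01, exec_time_tai=1_000_000.0)
ha = to_hour_angle_deg(cmd, longitude_deg=-107.6)
```

Because the command carries its own execution time, it can be queued at the antenna and applied in hardware-synchronized fashion even over a slow or jittery wide-area link.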

5 Correlator Software
Correlator details are TBD, but assume:
- FX architecture
- Delay tracking performed digitally in correlator HW
- Subarrays will be needed early and often

Correlator Back End: a cluster for processing and formatting
- More conditioning of visibilities in this stage than for the VLA (Tsys, sub-band stitching, TelCal application)
- RFI mitigation (here, or even further upstream)
- Data flagging and blanking
- Data written into the final archiving / processing format

Correlator functionality is easy to add in hardware; the cost is in commissioning and support (firmware and software complexity). The ngVLA should limit commissioned correlator modes to those that have clear use cases.
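Two of the back-end conditioning steps listed above, Tsys application and RFI blanking, can be sketched briefly. The array shapes, units, and calibration convention here are illustrative assumptions, not the ngVLA design; the standard radiometric convention of scaling baseline (i, j) by sqrt(Tsys_i * Tsys_j) is shown.

```python
import numpy as np

def apply_tsys(vis, tsys_i, tsys_j):
    """Scale raw correlator visibilities for baseline (i, j).

    Amplitude calibration: multiply by sqrt(Tsys_i * Tsys_j).
    Units and conventions are illustrative only.
    """
    return vis * np.sqrt(tsys_i * tsys_j)

def blank_rfi(vis, rfi_mask):
    """Zero (blank) samples flagged as RFI; mask shape matches vis."""
    return np.where(rfi_mask, 0.0 + 0.0j, vis)

# Toy spectrum of three channels on one baseline.
vis = np.array([1 + 1j, 2 + 0j, 0 + 3j])
calibrated = apply_tsys(vis, tsys_i=25.0, tsys_j=36.0)  # sqrt(900) = 30
clean = blank_rfi(calibrated, rfi_mask=np.array([False, True, False]))
```

Doing this in the correlator back-end cluster, before archiving, means the stored visibilities are already in a science-usable state, which matters when reprocessing is expensive.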

6 Data Archiving and Storage
Estimated archived data rates are a few TB per hour (visibilities only)
- Not trivial, but possible (esp. with continued improvements in storage cost)
- Baseline design is to store the "raw" visibilities

But the current paradigm doesn't work:
- Moving data, and keeping multiple copies, will likely be prohibitively expensive
- "Filling" data from an archival format to a working format is expensive and unnecessary

Beginning to work on Measurement Set V3 with SKA:
- Designed for massively parallel processing
- Robust to node failures, thread safe
- Supports backwards compatibility with MS V2

The compute cluster should be "close" to the storage; primary processing on PI hardware is unlikely.
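The "few TB per hour" figure can be sanity-checked with back-of-envelope arithmetic. Every parameter below (antenna count, channel count, integration time, sample size) is an illustrative assumption chosen for the example, not an ngVLA design value.

```python
# Back-of-envelope archived visibility data rate.
n_ant = 200                        # "large N" array
n_baselines = n_ant * (n_ant - 1) // 2
n_channels = 10_000                # assumed spectral channels
n_pol = 4                          # full polarization
bytes_per_vis = 8                  # complex64 sample
dump_time_s = 10.0                 # assumed integration per archived record

rate_bytes_per_s = (n_baselines * n_channels * n_pol * bytes_per_vis
                    / dump_time_s)
tb_per_hour = rate_bytes_per_s * 3600 / 1e12
# With these assumptions the rate comes out to roughly 2.3 TB/hour,
# consistent with the "few TB per hour" estimate above.
```

Note how sensitive the result is to the dump time and channel count: a 1 s dump or 10x the channels pushes the rate into the tens of TB per hour, which is why averaging choices matter for the archive design.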

7 Data Processing
- Algorithmic and performance requirements are still uncertain; they need more study now that the system design is stabilizing.
- Preliminary analysis shows that most use cases are tractable, assuming that compute cost follows the historical trend.
- Wide-field, low-frequency (<4 GHz) imaging is likely to be computationally prohibitive in early operations, and is excluded from the baseline for this reason.
- Delivered products for most projects will be science-quality images.
- The Science Ready Data Products (SRDP) project is a precursor to the ngVLA pipeline:
  - ALMA Calibration and Imaging Pipeline
  - VLA Calibration Pipeline
  - VLA Sky Survey Pipeline
- Key differences from SKA: higher frequency and later start date.

8 Beginning End to End Operations
- ngVLA must be a telescope for all astronomers, not just radio "blackbelts".
- The proposal process should focus on desired products, not on hardware functionality.
- Reprocessing will be expensive, so initial generation of the proper science images will be important.
- Archival researchers also need to be supported (next talk).
- Product quality is an observatory deliverable, so the observatory must control the calibration strategy.
- The challenge is to allow the flexibility that traditional radio astronomers expect.

9 Software Architecture Summary
No blockers:
- The RMS community knows how to design, implement, and operate the software to run the ngVLA.
- There are some changes in emphasis because of the large number of antennas and the longer baselines.
- The radio community is gaining experience with full-lifecycle software support (Proposal to Delivery); lessons can be learned from the current generation of NRAO telescopes.
- The SRDP project is pioneering science-quality image production across the frequency range of the ngVLA.


