Mark 6: design and status update
VLBI Technical Meeting, MIT Haystack Observatory, 23 October 2012
Roger Cappallo, Alan Whitney, Chet Ruszczyk



Why Mark 6? Drivers for ever-increasing data rates:
 Sensitivity!
 VLBI2010 – enables smaller antennas & shorter scans
 EHT – enables coherent detection of Sgr A* at mm wavelengths (through a fluctuating atmosphere)

Mark 6 goals
 16 Gbps sustained record capability
 >=32 Gbps burst-mode capability
 support for all common VLBI formats; possibly a general Ethernet packet recorder
 COTS hardware: relatively inexpensive, upgradeable to follow Moore's Law progress
 100% open-source software; Linux OS
Other considerations
 playback as standard Linux files
 e-VLBI support
 smooth user transition from Mark 5
 preserve Mark 5 hardware investments, where possible

Prototype Mark 6

Mark 5 SATA Drive Module upgrade to Mark 6 (diagram labels):
 Front: new front panel with 8x LEDs (1 per drive), cooling slots, easily removable disks; new latch provided (pre-installed on new panel); re-use handle from old module
 Rear: new PCB and power connector; connectors for two eSATA cables

Mark 5 chassis backplane upgrade (diagram labels):
 New drive-module backplane (x2): sequences power to disks; regulates voltage at disk power pins
 New connector board: simple disconnect to allow easy removal of the module tray from the chassis
 Also shown: module tray, cooling fans, module guide rails, connections to chassis power supply

Conduant prices (preliminary)
 Mark 6 system chassis (complete, w/ electronics): ~$10K (final motherboard yet to be determined)
 Mark 6 expansion chassis (complete): $2675
 Cable tray: $55
 Data cable: $85 each
 Mark 5-to-Mark 6 system-chassis upgrade kit (DIY): ~$8K (w/o power supply; can re-use 850 W PS)
 Mark 5-to-Mark 6 expansion-chassis upgrade kit (DIY): $675
 Mark 6 module (empty): $495
 Mark 5-to-Mark 6 module upgrade kit (DIY): $250
Projected late-2012 availability for complete Mark 6 system

People come and go, but the project carries on…
 prototype software (v.0) achieved 16 Gb/s to a RAID array
 the author of the prototype (David Lapsley) left Haystack for greener pastures
 Roger Cappallo and Chet Ruszczyk are writing the production version

Demonstration experiment
 June 2012
 Westford – GGAO
 done with prototype software (v.0)
 16 Gb/s
 4 GHz bandwidth on the sky
 dual polarization with 2 GHz IFs
 processed as four 512 MHz channels
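The 16 Gb/s figure follows directly from the observing setup; a quick sanity check, assuming Nyquist sampling and 2-bit quantization (standard VLBI practice, not stated on the slide):

```python
# Data-rate sanity check for the June 2012 Westford-GGAO demonstration.
# The 2-bit quantization and Nyquist sampling are assumptions typical of
# VLBI backends, used here only to illustrate the arithmetic.
bandwidth_hz = 2e9               # 2 GHz IF bandwidth per polarization
polarizations = 2                # dual polarization (4 GHz on the sky total)
bits_per_sample = 2              # 2-bit quantization
sample_rate = 2 * bandwidth_hz   # Nyquist: 4 Gsamples/s per polarization

rate_bps = sample_rate * bits_per_sample * polarizations
print(rate_bps / 1e9)            # -> 16.0 (Gb/s)
```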

Mark 6 block diagram

c-plane
 control plane
 written in Python
 interface to user (e.g. field system) via the VSI-S protocol
 responsible for high-level functions:
 aggregation of disks into modules
 scan-based setup & record
 error-checking, etc.
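VSI-S control messages are plain ASCII: commands of the form "keyword=param:param;" and queries of the form "keyword?". A minimal parser sketch for messages of this shape (the command names are hypothetical; the real c-plane code and its command set may differ):

```python
# Sketch of a VSI-S-style message parser. Illustration only; not the
# actual Mark 6 c-plane implementation.
def parse_vsis(msg: str):
    """Split a VSI-S-style message into (keyword, type, params)."""
    msg = msg.strip().rstrip(";")
    if msg.endswith("?"):                    # query, e.g. "record?"
        return msg[:-1].strip(), "query", []
    keyword, _, rest = msg.partition("=")    # command, e.g. "record=on:exp1:wf"
    params = [p.strip() for p in rest.split(":")] if rest else []
    return keyword.strip(), "command", params

print(parse_vsis("record=on:exp1:wf;"))  # -> ('record', 'command', ['on', 'exp1', 'wf'])
print(parse_vsis("record?;"))            # -> ('record', 'query', [])
```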

dplane
 data plane
 written in C
 implements the high-speed data flow:
 input from NICs
 output to disks within Mark 6 modules
 manages:
 start and stop of data flow via packet inspection
 organization of data into files
 addition of metadata to files

dplane block diagram

dplane – technical highlights
 pf_ring used for high-speed packet buffering
 efficient use of multiple cores, based on the number of available cores:
 SMP affinity of IRQs
 thread binding to cores
 most of physical RAM grabbed for large ring buffers and locked in memory
 one large ring buffer per stream
 1–4 streams; can be changed dynamically
 ~10 MB blocks scattered to files resident on different disks
 block number prepended to each block for ease of reassembly
 uses faster disks to keep up with the flow, while balancing disk usage as much as possible
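The scatter scheme can be sketched as follows. This is a simplified single-threaded Python illustration of the idea only: the real dplane is multithreaded C on pf_ring, favors faster disks rather than strict round-robin, and the 8-byte (block number, length) header used here is invented for the sketch:

```python
import itertools
import struct

BLOCK_SIZE = 10 * 1024 * 1024   # ~10 MB blocks, as on the slide


def scatter(stream, disk_files):
    """Write numbered blocks round-robin across per-disk files.

    Prepending the block number lets a later 'gather' step restore the
    original order no matter which disk each block landed on. Sketch
    only: the real recorder balances load across disks of differing
    speed instead of cycling strictly.
    """
    for blocknum, target in zip(itertools.count(), itertools.cycle(disk_files)):
        block = stream.read(BLOCK_SIZE)
        if not block:
            break
        # Invented header: 4-byte block number + 4-byte payload length.
        target.write(struct.pack("<II", blocknum, len(block)))
        target.write(block)
```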

Additional features
 capture to ring buffers is kept separate from file writing, to facilitate e-VLBI, etc.
 FIFO design decouples writing from capturing (e.g. writing can continue during a slew)
 all Mark 6 software is open source for the community
 gather uses asynchronous I/O to read n disks into n ring buffers, and writes the blocks in correct order to a single file
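The gather step is essentially a merge by block number: within each disk file the block numbers increase, so an n-way sorted merge restores global order. A simplified sketch (synchronous Python rather than the asynchronous I/O the slide describes, and assuming the same invented 8-byte block header as the scatter sketch above):

```python
import heapq
import struct


def gather(disk_files, out):
    """Merge scattered per-disk files back into one ordered stream.

    Each input holds (4-byte block number, 4-byte length, payload)
    records. heapq.merge performs an n-way merge of the per-file
    generators, emitting blocks in global block-number order. Sketch
    only: the real gather program uses asynchronous I/O and ring
    buffers for efficiency.
    """
    def records(f):
        while True:
            hdr = f.read(8)
            if len(hdr) < 8:
                return
            num, length = struct.unpack("<II", hdr)
            yield num, f.read(length)

    for _, block in heapq.merge(*(records(f) for f in disk_files)):
        out.write(block)
```

Keying the merge on the prepended block number is what makes the scatter side so cheap at record time: blocks can land on whichever disk is ready, and ordering is deferred to playback.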

Software versions and strategies
 v.0 prototype (RAID 0)
 command-line, one-off control
 v.1 operational RAID-based code
 continuous operation
 control via messages
 v.2 with a single output file per stream
 writes (ordinary) Linux files on RAID arrays
 normal file-based correlation can be used directly
 v.2 with multiple files – data need to be reconstructed
 gather program as interim solution: does so very efficiently, but requires an extra step
 a FUSE/mk6 interface could be written
 DiFX will likely implement a native mk6 datastream

Current status
 control plane and v.2 data plane functional
 dplane v.2 works robustly at 8 Gb/s continuously into:
 an SSD module
 an 8-disk module configured as a RAID array
 an 8-disk module with scattered disks
 in progress: performance testing; integration of the two planes

Timeline
 Dec 2011: v.0 prototype achieved 16 Gb/s to four 8-disk RAID arrays
 July 2012: v.1 operational dataplane code using RAID array
 Sept 2012: integration of v.1 control- and data-plane codes
 Sept 2012: v.2 dataplane code with scattered filesystem
 Oct 2012 (now): complete integration of control plane and v.2 dplane
 Oct–Nov 2012: performance testing, assessment, & tuning
 Jan 2013: first operational use for VLBI2010 (8 Gb/s on 1 module)