e-VLBI Development Program at MIT Haystack Observatory
Alan R. Whitney, Chester A. Ruszczyk
MIT Haystack Observatory
13 July 2005, e-VLBI Workshop, Australia


Current Projects at Haystack Observatory

Standardization – VSI-E
– Draft VSI-E standard distributed in January 2004
– Reference implementation released in October 2004

Network interfacing equipment for e-VLBI
– Mark 5 VLBI data system

Network Monitoring
– Evaluation, development and deployment of monitoring systems

Intelligent Applications
– Automation of e-VLBI transfers an ongoing process
– Development of optimization-based algorithms for intelligent applications ongoing (EGAE)
– Intelligent optically-switched networks (DRAGON)

e-VLBI Experiments
– Goal to put e-VLBI into routine use

VSI-E Architecture

VSI-E

Purpose:
– To specify standardized e-VLBI data formats and transmission protocols that allow data exchange between heterogeneous VLBI data systems

Characteristics:
– Based on standard RTP/RTCP high-level protocols
– Allows choice of IP transport protocols (TCP/IP, UDP, FAST, etc.)
– Scalable implementation; supports up to 100 Gbps
– Ability to transport individual data-channel streams as individual packet streams; potentially useful for distributed correlators
– Ability to make use of multicasting to transport data and/or control information in an efficient manner

Status:
– Draft VSI-E specification completed January 2004
– Prototype VSI-E implementation released November 2004
– Practical implementation for K5 and Mark 5 now in progress
– Plan to use VSI-E in real-time demo at SC05, November 2005
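The RTP basis of VSI-E can be illustrated with a minimal sketch. The layout below is the standard 12-byte RTP fixed header from RFC 3550, which VSI-E builds on; the payload-type value and field choices here are illustrative assumptions, not the actual VSI-E packet format.

```python
import struct

def pack_rtp_header(seq, timestamp, ssrc, payload_type=97, marker=0):
    """Pack a 12-byte RTP fixed header (RFC 3550).

    version=2, no padding/extension/CSRC; payload_type 97 is a
    placeholder dynamic type, not an assigned VSI-E value.
    """
    vpxcc = 2 << 6                          # version 2, P=0, X=0, CC=0
    mpt = (marker << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", vpxcc, mpt, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

def unpack_rtp_header(hdr):
    """Recover the fields written by pack_rtp_header."""
    vpxcc, mpt, seq, ts, ssrc = struct.unpack("!BBHII", hdr[:12])
    return {"version": vpxcc >> 6, "payload_type": mpt & 0x7F,
            "seq": seq, "timestamp": ts, "ssrc": ssrc}
```

A VLBI data frame would follow the header as the RTP payload; the RTCP side (reception reports, loss statistics) is what makes the protocol attractive for monitoring heterogeneous transfers.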

Reaching 1024 Mbps with Mark 5

Achieving 1024 Mbps with Mark 5 is challenging. Can move ~1.2 Gbps between StreamStor card and memory via PCI bus, but:
– If the GigE NIC is on the same PCI bus, bus contention slows aggregate transfers to ~ Mbps, depending on motherboard
– A single GigE connection tops out at ~980 Mbps (theoretically and experimentally)
– Typical GigE drivers require interrupt service every Ethernet frame; can generate up to ~100,000 interrupts/sec

Elements of the solution:
– Capable motherboard with multiple independent PCI buses
– Dual 'channel-bonded' GigE links
– Driver or hardware interrupt mitigation; use of 'jumbo frames'
– Careful software structure
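The interrupt-rate arithmetic behind the jumbo-frame recommendation is easy to check. The sketch below assumes standard Ethernet per-frame overhead (8-byte preamble, 18-byte header/FCS, 12-byte interframe gap = 38 bytes); at one interrupt per frame, 1500-byte frames at GigE line rate imply roughly 81,000 interrupts/s per link, while 9000-byte jumbo frames cut that to under 14,000.

```python
def frame_rate(link_bps, frame_bytes, overhead_bytes=38):
    """Frames per second at line rate, one interrupt per frame
    (naive driver, no interrupt coalescing)."""
    return link_bps / ((frame_bytes + overhead_bytes) * 8)

std = frame_rate(1e9, 1500)    # ~81,000 frames/s per GigE link
jumbo = frame_rate(1e9, 9000)  # ~13,800 frames/s per GigE link
```

With dual channel-bonded links the standard-frame interrupt load roughly doubles, consistent with the ~100,000 interrupts/sec figure above; interrupt mitigation plus jumbo frames attacks both factors.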

Mark 5 e-VLBI Connectivity

Mark 5 supports a triangle of connectivity for e-VLBI requirements:
– Data Port/FPDP
– Disc array
– PCI bus/Network (64-bit/66 MHz)

Mark 5 can support several possible e-VLBI modes:
– e-VLBI data buffer (first to disc array, then to network), and vice versa
– Direct e-VLBI (Data Port directly to network), and vice versa
– Data Port simultaneously to disc array and network at ~800 Mbps

Anatomy of a (fairly) modern motherboard (Tyan Thunder i7501 Pro)

Best transfer rates to date

Memory-to-memory transfers between Tyan motherboards: ~1900 Mbps
– Uses dual channel-bonded GigE connection

Mark 5A-to-memory transfer: ~1200 Mbps
– Required major re-working of Mark 5A software to improve efficiency of data transfer to/from the NIC, minimize the number of internal buffer-to-buffer transfers, and support multiple threads

More work still to be done to achieve routine 1024 Mbps Mark 5-to-Mark 5 transfers
– We plan to concentrate our efforts on implementing and optimizing with VSI-E to achieve 1024 Mbps
– There should be no performance difference between Mark 5A and Mark 5B

e-VLBI Network Monitoring

– Use of centralized/integrated network monitoring helped to enable identification of a bottleneck (hardware fault)
– Automated monitoring allows a view of network throughput variation over time; highlights route changes and network outages
– Automated monitoring also helps to highlight throughput issues at end points, e.g. Network Interface Card failures, untuned TCP stacks
– Integrated monitoring provides an overall view of network behavior at a glance
– Also examining performance-monitoring packages such as MonALISA, which would provide better standardization
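The kind of anomaly flagging that automated monitoring enables can be sketched in a few lines. This is a crude stand-in for the monitoring systems described above, not their actual logic: it flags any throughput sample that drops well below the trailing-window average, which is how an outage or route change shows up in a rate time series.

```python
from statistics import mean

def flag_anomalies(samples, window=5, drop_frac=0.5):
    """Return indices of throughput samples (e.g. Mbps) that fall
    below drop_frac of the mean of the preceding `window` samples.
    Thresholds are illustrative, not tuned values."""
    flags = []
    for i in range(window, len(samples)):
        baseline = mean(samples[i - window:i])
        if samples[i] < drop_frac * baseline:
            flags.append(i)
    return flags
```

For example, a steady ~900 Mbps series with one sample at 100 Mbps flags exactly that sample, the sort of event a human would otherwise only catch by staring at a throughput plot.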

Network State DataBase (NSDB)

Tool to keep track of the state of the e-VLBI system:
– Network performance
– Configuration of end systems
– State of end systems

Integrates and builds on standard monitoring tools to provide a single, coherent view of e-VLBI network state:
– Maintains continuous state monitoring of the entire e-VLBI system
– Essential for identifying issues with network/end-system configuration
– Diagnose at a glance (cf. current practice)
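One way to picture an NSDB record is as a small structure combining end-system configuration with last observed state. The field names and tuning thresholds below are illustrative assumptions, not the actual NSDB schema; the point is the "diagnose at a glance" idea of pairing configuration and performance in one record.

```python
from dataclasses import dataclass, field
import time

@dataclass
class EndpointState:
    """One NSDB-style record for an e-VLBI end system
    (fields and thresholds are hypothetical)."""
    host: str
    tcp_window_bytes: int
    mtu: int
    last_throughput_mbps: float
    updated: float = field(default_factory=time.time)

    def at_a_glance(self):
        # Crude health check: jumbo frames enabled and a TCP
        # window of at least 4 MiB for long fat pipes.
        ok = self.mtu >= 9000 and self.tcp_window_bytes >= 4 << 20
        return f"{self.host}: {'OK' if ok else 'CHECK TUNING'}"
```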

NSDB Architecture

e-VLBI Weather Map Web Page (Haystack to Kashima)

Network Layer Statistics

New Application-Layer Protocols for e-VLBI

Based on observed usage statistics of networks such as Abilene, it is clear there is much unused capacity. New protocols are being developed which are tailored to e-VLBI characteristics; for example:
– Can tolerate some loss of data (perhaps 1% or so) in many cases
– Can tolerate delay in transmission of data in many cases

'Experiment-Guided Adaptive Endpoint' (EGAE) strategy being developed at Haystack Observatory under a 3-year NSF grant:
– Will 'scavenge' and use 'secondary' bandwidth
– 'Less than best effort' service will not interfere with high-priority users
– Translates science-user criteria into network constraints
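The "less than best effort" behavior can be sketched as a single rate-control step: back off sharply whenever loss is observed (loss means competing traffic is present, so yield), otherwise probe gently upward into spare capacity. The constants below are illustrative, not the EGAE algorithm.

```python
def scavenger_rate(current_mbps, loss_rate, floor_mbps=10,
                   cap_mbps=1024, backoff=0.5, probe=1.05):
    """One control step for a bandwidth-scavenging sender.

    Any observed loss triggers a multiplicative backoff so that
    high-priority flows are not disturbed; a loss-free interval
    permits a small multiplicative probe for unused capacity.
    """
    if loss_rate > 0.0:
        return max(floor_mbps, current_mbps * backoff)
    return min(cap_mbps, current_mbps * probe)
```

Because e-VLBI can tolerate ~1% data loss and some delivery delay, a sender like this never needs to retransmit aggressively; it simply keeps the pipe as full as the competing traffic allows.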

Automation of e-VLBI Transfers

Based on EGAE, a major effort is now underway to fully automate routine e-VLBI file transfers. Algorithms are being built around use of standardized e-VLBI file-naming conventions (as agreed by Himwich, Koyama, Reynolds, Whitney, Nov 2004); see memo #49 at ftp://web.haystack.edu/pub/e-vlbi/memoindex.html
– We urge universal adoption of standardized e-VLBI file naming for ease of data interchange
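To see why a fixed naming convention enables automation, consider a parser for a hypothetical pattern of the form experiment_station_scan. This pattern is invented for illustration only; the actual agreed format is in memo #49. Once every site emits conforming names, transfer software can route and catalog files without human intervention.

```python
import re

# Hypothetical convention: <experiment>_<2-char station>_<scan>
# (illustrative only -- see e-VLBI memo #49 for the real format).
NAME_RE = re.compile(r"^(?P<exp>[a-z0-9]+)_(?P<station>[a-z0-9]{2})_"
                     r"(?P<scan>[a-z0-9-]+)$")

def parse_evlbi_name(name):
    """Split a conforming file name into its fields, or raise."""
    m = NAME_RE.match(name)
    if not m:
        raise ValueError(f"not a conventional e-VLBI file name: {name}")
    return m.groupdict()
```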

Experimental and Production e-VLBI

August 2004:
– Haystack link upgraded to 2.5 Gbps
– Real-time fringes at 128 Mbps, Westford and GGAO antennas, Haystack correlator

September 2004:
– Real-time fringes at 512 Mbps, Westford and GGAO antennas, Haystack correlator

November 2004:
– Real-time e-VLBI demonstration at SC2004 at 512 Mbps
– Used DRAGON optically-switched light paths

February 2005:
– Real-time fringes Westford-Onsala at 256 Mbps
– Used optically-switched light paths over part of the route

October 2004 – present:
– Regular transfers from Kashima (~300 GB per experiment; ~200 Mbps)

Starting April 2005:
– Routine weekly transfers from Tsukuba (~1.2 TB/transfer)
– Preparing for CONT05 (15 days continuously; ~1 TB/day)

Real-time e-VLBI SC2004 Demo
(diagram: Westford and GGAO/Goddard antennas connected via Bossnet and DRAGON through Haystack to the Pittsburgh Convention Center at 512 Mbps)

DRAGON Project (Dynamic Resource Allocation via GMPLS Optical Networks)

Dynamically-provisioned optically-switched network research project
– U. of Maryland, ISI – PIs

10 Gbps DRAGON network is being installed around the Washington, D.C. area, with connections to Abilene, HOPI and NLR

e-VLBI is the primary demonstration application, using a 2.4 Gbps dedicated connection to Haystack
– Programmatic interfaces to EGAE are under development
– Hope to upgrade the Haystack connection to 10 Gbps in the near future

DRAGON will play a prominent role in e-VLBI demos scheduled for iGrid (Sep 05) and SC05 (Nov 05)

DRAGON Network
(diagram: OSPF control-plane adjacencies linking route engines and wavelength/Ethernet cross-connects at UMCP, GSFC, ISIE, NCSA, MCLN, ARLG and CLPK, with connections to HOPI, ATDnet/Bossnet, Abilene and HAYS)

Movaz Networks iWSS Optical Switch

– MEMS-based switching fabric
– 400 x 400 wavelength switching, scalable to 1000s x 1000s
– 9.23" x 7.47" x 3.28" in size
– Integrated multiplexing and demultiplexing, eliminating the cost and challenge of complex fiber management
– Dynamic power equalization (<1 dB uniformity), eliminating the need for expensive external equalizers
– Ingress and egress fiber channel monitoring outputs to provide sub-microsecond monitoring of channel performance using the OPM
– Switch times < 5 ms

In Summary: Some Lessons Learned

High-performance e-VLBI is still hard to do; cannot count on consistent performance:
– Varying traffic loads
– Network configuration changes
– Equipment failures
– Continuous network monitoring is critical to the success of on-demand real-time e-VLBI

Jumbo-frame support is important at rates above ~256 Mbps on GigE
– Jumbo-frame support is spotty, but improving

Some Challenges

– Network bottlenecks well below advertised rates
– Performance of transport protocols: untuned TCP stacks, fundamental limits of regular TCP
– Throughput limitations of COTS hardware (disk I/O, network)
– Complexity of e-VLBI experiments: currently require significant network expertise to conduct
– Time-varying nature of the network
– Defining standard formats for transfer of data and control information between different VLBI systems
– 'Last-mile' connectivity to telescopes: most telescopes are deliberately placed in remote areas; extensive initiatives in Europe, Japan and Australia to connect, but the U.S. is lagging
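The "untuned TCP stack" problem follows directly from window/round-trip arithmetic: a single TCP stream can have at most one window of data in flight per RTT. A sketch of that ceiling (window sizes chosen for illustration):

```python
def tcp_throughput_limit_mbps(window_bytes, rtt_s):
    """Steady-state throughput ceiling of one TCP stream:
    at most one window of data in flight per round trip."""
    return window_bytes * 8 / rtt_s / 1e6

# A legacy 64 KiB default window on a 100 ms transoceanic path:
untuned = tcp_throughput_limit_mbps(64 * 1024, 0.100)        # ~5.2 Mbps
# A 16 MiB window on the same path:
tuned = tcp_throughput_limit_mbps(16 * 1024 * 1024, 0.100)   # ~1340 Mbps
```

This is why an untuned end host can cap a transcontinental transfer at a few Mbps even over a multi-gigabit path, and why end-system configuration belongs in the monitoring picture alongside the network itself.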

Some Frustrations

Telescope connectivity, particularly in the U.S., remains a significant challenge:
– Westford – 1 Gbps
– GGAO – 1 Gbps
– Arecibo – 155 Mbps
– VLBA – not connected
– GBT – not connected
– CARMA – not connected
– JCMT – not connected
– SMA – not connected

Much difficulty in securing funding support from NSF Astronomy for e-VLBI
– Need to develop a convincing science case

Future Directions

– Further EGAE and VSI-E development and deployment
– Improved IP protocols for e-VLBI
– Optically-switched networks for highly provisioned high-data-rate pipes
– Solving the 'last mile' problem to U.S. telescopes
– Distributed correlation using clusters and/or highly distributed PCs
– Extending to higher bandwidths: Haystack has an NSF Astronomy grant to push for 4 Gbps/station; preparing an NSF proposal to extend to 16 Gbps/station using new digital-filter and recording technology
– Continuing to move e-VLBI into routine practice on a global basis

e-VLBI Technical Working Group

– Established at this e-VLBI workshop as a group of technical experts, David Lapsley chair
– On hold until David Lapsley's replacement is on board
– Hope to re-invigorate at the July e-VLBI workshop in Sydney
– ~2 members from each major e-VLBI geographical area

Objectives:
– Evaluate e-VLBI/VSI-E hardware/software/procedures
– Implement standardized global e-VLBI network performance/monitoring tools
– Provide expert assistance to e-VLBI users

Thank you - THE END Questions?

Antenna/Correlator Connectivity

– JIVE correlator (6 x 1 Gbps)
– Haystack (2.5 Gbps)
– Kashima, Japan (1 Gbps)
– Tsukuba, Japan (1 Gbps)
– GGAO, MD (10 Gbps)
– Onsala, Sweden (1 Gbps)
– Torun, Poland (1 Gbps)
– Westerbork, The Netherlands (1 Gbps)
– Westford, MA (2 Gbps)
– Jodrell Bank (1 Gbps?)
– Arecibo, PR (155 Mbps)
– Wettzell, Germany (~30 Mbps)
– Kokee Park, HI (nominally ~30 Mbps, but problems)
– TIGO (~2 Mbps)

In progress:
– Australia – plan to connect all major antennas at 10 Gbps!
– Hobart – agreement reached to install high-speed fiber
– Ny-Ålesund – work in progress to provide ~200 Mbps link to NASA/GSFC