E-EVN developments in 2006 Arpad Szomoru.

1 e-EVN developments in 2006 Arpad Szomoru

2 Outline The past Current status Expansion of e-EVN
EXPReS: first results Connectivity improvements The future

3 e-VLBI Milestones: 2002
September 2002: 2 × 1 Gbit/s Ethernet links to JIVE
Demonstrations at iGrid2002 and ER2002; UDP data rates over 600 Mbit/s

4 e-VLBI Milestones: 2003
May 2003: first use of FTP for VLBI session fringe checks
July 2003: 10 Gbit/s access GÉANT–SURFnet; 6 × 1 Gbit/s links to JIVE
September 2003: e-VLBI data transfer between Bologna and JIVE at 300 Mb/s
October 2003: first light on the Westerbork–JIVE 1 Gb/s connection
November 2003: Onsala Space Observatory (Chalmers University of Technology, Gothenburg, Sweden) connected at 1 Gb/s
November 2003: Cambridge–Westerbork fringes detected only 15 minutes after the observations were made; 64 Mb/s, with disk buffering at JIVE only

5 e-VLBI Milestones: 2004
January 2004: disk-buffered e-VLBI, On, Wb and Cm at 128 Mb/s, yielding the first e-VLBI image; On–Wb fringes at 256 Mb/s
March 2004: first real-time fringes, Westford–GGAO to Haystack; intercontinental real-time fringes, Wf–On, at 32 Mb/s
April 2004: three-telescope real-time fringes at 64 Mb/s (On, Jb, Wb); first real-time EVN image at 32 Mb/s
June 2004: Torun connected at 1 Gb/s; network stress test (iperf) involving Bologna, Torun, Onsala and JIVE
September 2004: four-telescope real-time e-VLBI (Ar, Cm, Tr, Wb); first fringes to Ar at 32 Mb/s
September 2004: first e-EVN science session (Ar, Cm, Tr, On, Wb); spectral line observations at 32 Mb/s
December 2004: connection of JBO to Manchester at 2 × 1 Gb/s; e-VLBI test with Tr, On and Jb; Jb–Tr fringes at 256 Mb/s

6 e-VLBI Milestones: 2005
January 2005: Huygens descent tracking, salvage of the Doppler experiment; use of a dedicated lightpath Australia–JIVE, data transferred at ~450 Mb/s
February 2005: network transfer test (BWCTL) employing various network monitoring tools, involving Jb, Cm, On, Tr, Bologna and JIVE
March 2005: e-VLBI science session; first continuum science observations at 128 and 64 Mb/s, involving six radio telescopes (Wb, Ar, Jb, Cm, On, Tr)
Summer 2005: trench for the “last mile” connection to Medicina dug
Spring 2006: Metsähovi connected at 10 Gb/s

7 [Map of e-EVN connectivity, with links at 155 Mbps, 1 Gbps, 2.5 Gbps and 10 Gbps]

8 Why bother? (change is bad…)
Target of Opportunity: unscheduled observations triggered by sudden astronomical events; this capability will become much more important when LOFAR comes online
Adaptive observing: use e-VLBI as a finder experiment, or run e-VLBI sessions a few days apart and adapt schedules for later observations based on results (rapid results on a large sample, then focus in detail on the best candidates)
Automatic observing: a small number of telescopes observing for extended periods, doing spectral line observations of large galactic samples
Interface with other real-time arrays: e-MERLIN, LOFAR, SKA… also function as an SKA pathfinder
Bandwidth no longer limited by magnetic media: 10 Gbps technology already becoming mainstream
Because we can…

9 Recent developments
Regular science/test sessions throughout the year
First open calls for e-VLBI science proposals
First science run completely lost, but first-ever real-time fringes to Mc (128 Mbps)
Second and third science runs: many hours of smooth sailing at 128 Mbps. No excitement, no drama. JIVE becoming an observatory?
Fourth run: 16 hours at 256 Mbps; however, nearly 25% of the time was lost to technical problems

10 First e-EVN Astronomy Publications

11 Current status
Technical tests:
6-station fringes at 256 Mbps
first European 512 Mbps fringes (Jb and Wb, May 18)
3-station 512 Mbps fringes (Cm, Wb, On, August 21)
first fringes using the new 5 GHz receiver at Mc
Current connectivity:
Ar: 64 Mbps in the past, but <32 Mbps this year
European telescopes: 128 Mbps always, 256 Mbps often, 512 Mbps to Wb, Jb and On

12

13 Tr connectivity bottleneck – (partially) solved
Black Diamond 6808 switches: new interfaces (10GE) in an old architecture (1GE)
Originally 8 × 1GE interfaces per card; the 10GE NIC is served by 8 × 1GE queues
Queuing regimes: round-robin (packet-based) and flow-based
Flow-based: maximum capacity per flow is 1 Gbit/s, minus background traffic
Packet-based RR reorders packets, and there is no known workaround to solve this problem
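The consequence of flow-based queuing can be sketched in a few lines of Python (the hash function here is purely illustrative; the Black Diamond's actual flow hash is not documented in these slides). Because every packet of a given 5-tuple lands in the same 1GE queue, a single e-VLBI stream can never use more than one queue's capacity, no matter the 10GE line rate:

```python
import hashlib

QUEUE_COUNT = 8             # the 10GE NIC is served by 8 x 1GE queues
QUEUE_CAPACITY_MBPS = 1000  # capacity of one 1GE queue

def queue_for_flow(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Map a flow's 5-tuple to a queue index, as a flow-based
    scheduler does (illustrative hash, not the switch's real one)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % QUEUE_COUNT

# A single stream always hashes to the same queue...
q = queue_for_flow("192.168.1.10", 2630, "192.168.2.20", 2630)
# ...so its throughput is capped by that one queue, minus background traffic.
max_single_flow_mbps = QUEUE_CAPACITY_MBPS
```

Spreading a transfer over several parallel flows sidesteps the cap, at the cost of having to reassemble the streams in order at the receiver.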

14 e-VLBI to South America? SMART-1
SMART-1 factsheet: testing solar-electric propulsion and other deep-space technologies
Name: SMART stands for Small Missions for Advanced Research in Technology.
Description: SMART-1 is the first of ESA’s Small Missions for Advanced Research in Technology. It travelled to the Moon using solar-electric propulsion, carrying a battery of miniaturised instruments. As well as testing new technology, SMART-1 is making the first comprehensive inventory of key chemical elements in the lunar surface. It is also investigating the theory that the Moon was formed following the violent collision of a smaller planet with Earth, four and a half thousand million years ago.
Launched: 27 September
Status: arrived in lunar orbit, 15 November; conducting lunar-orbit science operations.
Notes: SMART-1 is the first European spacecraft to travel to and orbit the Moon. This is only the second time that ion propulsion has been used as a mission's primary propulsion system (the first was NASA's Deep Space 1 probe, launched in October 1998). SMART-1 is looking for water (in the form of ice) on the Moon. To save precious xenon fuel, SMART-1 uses 'celestial mechanics', that is, techniques such as making use of 'lunar resonances' and fly-bys.

15 e-VLBI to South America (2)

16 And other continents…
Australia: telescopes connected; PCEVN–Mk5 interface needed
China: Shanghai Observatory connected at 2.5 Gbps; connection via TEIN (622 Mbps), ORIENT?
Issues with CERNET, CSTNet; direct lightpath Hong Kong–NetherLight?

17 Hybrid networks in the Netherlands..

18 Switch from Cisco to Nortel/Avici equipment has been completed: for now 7 × 1 Gbps, ultimately 16 × 1 Gbps lightpaths, plus a 10 Gbps IP connection

19 ..and across Europe: GÉANT2 network upgrade

20 EXPReS: getting underway
SA1: new hires at JIVE; two software engineers, one network engineer (finally!), one e-VLBI postdoc
Inclusion of e-MERLIN telescopes in the e-EVN
Operational improvements (deliverable-driven): robustness, reliability, speed, ease of operation, station feedback
And still pushing data rates and protocols: UDP, Circuit TCP? Get rid of fairness… better usage of the available bandwidth

21 e-VLBI control interface

22 Interface to station Mk5s

23 Runtime control

24 Integrating fringe display

25 Data status monitor

26 Streamlining of post processing

27 Web-based Post-processing

28 Ongoing
New control computers (Solaris AMD servers): cut down dramatically on (re-)start time; powerful code-development platform
Tightening up of existing code
Other hardware upgrades: SX optics (fibres + NICs), managed switch at JIVE
Mark5A→B: motherboards, memory, power supplies, serial links, CIBs

29 And coming…
FABRIC (Huib Jan van Langevelde): distributed software correlation; high-bandwidth data transport (On part at 4 Gbps); two new hires at JIVE
SCARIe: collaboration with SARA and UvA; distributed software correlation using the Dutch grid; lambda switching and dynamical allocation of lightpaths, in collaboration with the DRAGON project; JIVE postdoc hired, still looking for a UvA postdoc

30 FABRIC components
[Diagram: the observing schedule in VEX format and earth orientation parameters feed the field system, which controls the antenna and acquisition (DBBC, VSI output data, PC-EVN); GRID resource allocation and routing; correlator control, including model calculation, takes the user correlator parameters; FABRIC = the GRID]

31 Connectivity improvements
Martin Swany

32

33 Two heavy-duty gamer PCs
Tyan Thunder K8WE motherboards
Dual AMD Opteron 2.4 GHz processors
4 GB RAM
2 × 1 Gb PCI-Express NICs
First one at Torun, back-to-back to a Mark5
Second one located at Poznan Supercomputing Centre

34 Protocol work in Manchester:
(Richard Hughes-Jones, Ralph Spencer & collaborators)
Protocol investigation for e-VLBI data transfer; protocols considered include: TCP/IP, UDP/IP, DCCP/IP, VSI-E (RTP/UDP/IP), Remote Direct Memory Access, TCP Offload Engines
Work in progress (links to ESLEA, UK e-Science):
vlbi-udp: UDP/IP stability and the effect of packet loss on correlations
tcpdelay: TCP/IP and CBR data
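The core accounting a tool like vlbi-udp performs can be sketched as follows (an illustrative reimplementation, not the Manchester code): the receiver compares the sequence numbers it sees against the number of packets sent, counting losses and reorderings.

```python
def loss_stats(received_seqnos, total_sent):
    """Tally lost and reordered packets from received sequence numbers."""
    lost = total_sent - len(set(received_seqnos))
    reordered = sum(1 for a, b in zip(received_seqnos, received_seqnos[1:])
                    if b < a)
    return lost, lost / total_sent, reordered

# 10 packets sent; packet 4 is dropped, packets 7 and 8 arrive swapped.
lost, frac, reordered = loss_stats([0, 1, 2, 3, 5, 6, 8, 7, 9], 10)
```

Here `lost` is 1, `frac` is 0.1 and `reordered` is 1; on a real link the receiver would also log where in the stream the losses fall, since their pattern determines the effect on the correlation.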

35 Protocols (1) Mix of High Speed and Westwood TCP (Sansa)

36 Protocols (2) Circuit TCP (Mudambi, Zheng and Veeraraghavan)
Meant for Dedicated End-to-End Circuits, fixed congestion window No slow start, no backoff: finally, a TCP rude enough for e-VLBI?
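With no slow start and no backoff, CTCP simply pins the congestion window at the circuit's bandwidth-delay product. A worked example (illustrative numbers, not values from the slides):

```python
def fixed_window_bytes(rate_bps, rtt_s):
    """Congestion window needed to keep a dedicated circuit full:
    the bandwidth-delay product, converted from bits to bytes."""
    return int(rate_bps * rtt_s / 8)

# A 512 Mb/s lightpath with a 20 ms round-trip time needs a window of
# 512e6 * 0.020 / 8 bytes = 1.28 MB, which CTCP holds constant.
window = fixed_window_bytes(512e6, 0.020)
```

On a shared IP path a fixed window this large would be antisocial, which is exactly why CTCP only makes sense on a dedicated circuit.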

37 Protocols (3)
Home-grown version of CTCP using the pluggable TCP congestion-avoidance algorithms in newer Linux kernels (Mark Kettenis)
Rock-steady 780 Mbps transfer using iperf from Mc to JIVE
Serious problem with the new version of the Mk5A software under newer kernels
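On Linux the pluggable framework lets an application select the congestion-avoidance algorithm per socket, by name. A minimal sketch, assuming a kernel with the standard `cubic` module available (a home-grown CTCP module, once loaded, would be selected by its own name in the same way):

```python
import socket

# Select a congestion-control algorithm for one connection.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")

# Read the choice back; the kernel returns a null-padded name.
algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
sock.close()
```

`socket.TCP_CONGESTION` is Linux-only; `/proc/sys/net/ipv4/tcp_available_congestion_control` lists the modules the running kernel offers.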

38 e-EVN: the future Aim: 16 * 1 Gbps production e-EVN network
IP: not possible/affordable 10 Gbps lightpath across Europe: currently ~20k€/year Lightpaths across GÉANT terminating at JIVE If possible, all the way from telescopes. If not, overprovisioned IP connections from telescopes to GÉANT, lightpaths from there on. Guaranteed bandwidth, possibility to use ethernet frames, no more need to worry about congestion.. Towards a true connected-element interferometer

39 Proposed connection Surfnet-JIVE
[Diagram: external lightpaths enter the OME network; a switch feeds the Mk5s over 16 GE ports (N GE in use), alongside a 10 G IP connection; dynamic capabilities through DRAC]

