
Slide 1: The Performance of High Throughput Data Flows for e-VLBI in Europe
Multiple vlbi_udp Flows, Constant Bit-Rate over TCP & Multi-Gigabit over GÉANT2
Richard Hughes-Jones, The University of Manchester
www.hep.man.ac.uk/~rich/ (then Talks)
TERENA Networking Conference, Lyngby, 21-24 May 2007

Slide 2: What is VLBI?
• The VLBI signal wave front is sampled at each telescope and the data are sent over the network to the Correlator.
• Resolution is set by the baseline length; sensitivity depends on the bandwidth B as much as on the integration time τ.
• Can use as many Gigabits as we can get!
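For context, the standard radiometer relation (a textbook result, not spelled out on the slide) shows why bandwidth trades directly against observing time: the noise level falls as

    \Delta S \;\propto\; \frac{1}{\sqrt{B\,\tau}}

so doubling the network bandwidth B is worth as much as doubling the integration time τ.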

Slide 3: European e-VLBI Test Topology
[Map of the test topology. Sites: Onsala, Sweden (Chalmers University of Technology, Gothenburg); Jodrell Bank, UK; Dwingeloo, Netherlands; Medicina, Italy; Torun, Poland; Metsähovi, Finland. Link labels on the map: dedicated DWDM link, Gbit links, 2 × 1 Gbit links.]

Slide 4: vlbi_udp: UDP on the WAN
• The monolithic iGrid2002 code was converted to use pthreads: control, data input and data output threads.
• Work done on vlbi_recv (three variants; a sketch follows below):
  - The output thread polled for data in the ring buffer – burned CPU.
  - The input thread signals the output thread when there is work to do, else the output thread waits on a semaphore – had packet loss at high rate and variable throughput.
  - The output thread uses sched_yield() when there is no work to do.
• Multi-flow network performance – set up in Dec 2006:
  - 3 sites to JIVE: Manchester UKLight; Manchester production; Bologna GÉANT PoP.
  - Measured: throughput, packet loss, re-ordering, 1-way delay.
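A minimal sketch of the handoff described above, combining the two mechanisms mentioned – semaphore signalling plus sched_yield() – is shown here. This is not the original vlbi_recv source: the ring size, packet count and payloads are illustrative, and there is no overrun protection (the real receiver must also handle a full ring).

    /* Input thread posts a semaphore per packet placed in the ring;
     * output thread waits on the semaphore instead of spinning, and
     * calls sched_yield() when it finds nothing to do. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <sched.h>
    #include <stdio.h>

    #define RING_SLOTS 1024           /* illustrative ring-buffer size */
    #define NPACKETS   100000

    static int ring[RING_SLOTS];      /* stand-in for packet buffers */
    static int head, tail;            /* producer / consumer indices */
    static sem_t work;                /* counts filled slots */

    static void *input_thread(void *arg)
    {
        (void)arg;
        for (int i = 0; i < NPACKETS; i++) {
            ring[head] = i;                    /* "receive" a packet */
            head = (head + 1) % RING_SLOTS;
            sem_post(&work);                   /* tell the output thread */
        }
        return NULL;
    }

    static void *output_thread(void *arg)
    {
        (void)arg;
        for (int i = 0; i < NPACKETS; i++) {
            while (sem_trywait(&work) != 0)    /* no work to do ... */
                sched_yield();                 /* ... so yield the CPU */
            int pkt = ring[tail];              /* "write out" the packet */
            tail = (tail + 1) % RING_SLOTS;
            (void)pkt;
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t in, out;
        sem_init(&work, 0, 0);
        pthread_create(&in, NULL, input_thread, NULL);
        pthread_create(&out, NULL, output_thread, NULL);
        pthread_join(in, NULL);
        pthread_join(out, NULL);
        printf("all packets handed off\n");
        return 0;
    }

Build with gcc -pthread. The point of the design is that the consumer burns no CPU while idle but still drains the ring promptly once signalled.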

Slide 5: vlbi_udp: Some of the Problems
• JIVE made Huygens, mark524 (.54) and mark620 (.59) available.
• Within minutes of Arpad leaving, the Alteon NIC of mark524 lost the data network! OK – used mark623 (.62) instead, which has a faster CPU.
• Firewalls needed to allow the vlbi_udp ports – aarrgg! Huygens is SUSE Linux.
• Routing – well, this ALWAYS needs to be fixed!
• The AMD Opteron did not like sched_getaffinity() / sched_setaffinity() – commented out this bit.
• udpmon flows Onsala to JIVE look good.
• udpmon flows from JIVE mark623 to Onsala & Manchester UKLight don't work:
  - Firewall down: stops after 77 udpmon loops.
  - Firewall up: udpmon can't communicate with Onsala.
• CPU load issues on the MarkV systems – they don't seem to be able to keep up with receiving the UDP flow AND emptying the ring buffer.
• The Torun PC / link was lost as the test started.
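For reference, a hedged sketch of the kind of CPU-pinning call that had to be disabled on the Opteron hosts. sched_setaffinity() is the standard Linux API, but the graceful-fallback handling here is illustrative, not the project's actual code:

    /* Illustrative CPU pinning: if sched_setaffinity() fails (as it
     * effectively did on the Opteron hosts), report it and run
     * unpinned instead of crashing. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    static void try_pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)  /* 0 = this process */
            fprintf(stderr, "pinning to CPU %d failed (%s); running unpinned\n",
                    cpu, strerror(errno));
    }

    int main(void)
    {
        try_pin_to_cpu(1);   /* e.g. keep the receive loop off CPU 0 */
        return 0;
    }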

Slide 6: Multiple vlbi_udp Flows
• Gig7 → Huygens over UKLight: 15 µs spacing, 816 Mbit/s (sigma <1 Mbit/s, 1 Mbit/s steps); zero packet loss; zero re-ordering.
• Gig8 → mark623 over the academic Internet: 20 µs spacing, 612 Mbit/s; 0.6% falling to 0.05% packet loss; 0.02% re-ordering.
• Bologna → mark620 over the academic Internet: 30 µs spacing, 396 Mbit/s; 0.02% packet loss; 0% re-ordering.
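The flows above are rate-controlled by inter-packet spacing. A minimal sketch of a paced UDP sender in that style follows; the destination address, port and the busy-wait pacing loop are illustrative assumptions, not the actual vlbi_udp/udpmon code:

    /* Fixed-size datagrams sent with a fixed inter-packet spacing
     * (15 us with ~1500-byte frames gives roughly the 816 Mbit/s
     * of the Gig7 flow above). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <time.h>

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(void)
    {
        const double spacing_us = 15.0;   /* inter-packet gap */
        static char payload[1472];        /* fills a 1500-byte IP packet */

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { 0 };
        dst.sin_family = AF_INET;
        dst.sin_port = htons(5001);                      /* assumed port */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* example address */

        double next = now_us();
        for (long i = 0; i < 1000000; i++) {
            while (now_us() < next)       /* busy-wait to hold the spacing */
                ;
            sendto(s, payload, sizeof(payload), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
            next += spacing_us;           /* absolute schedule: no drift */
        }
        return 0;
    }

Pacing against an absolute schedule (next += spacing) rather than sleeping per packet keeps the mean rate exact even when individual sends are delayed.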

Slide 7: The Impact of Multiple vlbi_udp Flows
• Gig7 → Huygens over UKLight: 15 µs spacing, 800 Mbit/s.
• Gig8 → mark623 over the academic Internet: 20 µs spacing, 600 Mbit/s.
• Bologna → mark620 over the academic Internet: 30 µs spacing, 400 Mbit/s.
[Plots of traffic on the SURFnet, SJ5 and GARR access links.]

Slide 8: e-VLBI: Driven by Science
• 128 Mbit/s from each telescope.
• 4 TBytes of raw sample data over 12 hours.
• 2.8 GBytes of correlated data.
• Microquasar GRS1915+105 (11 kpc), observed on 21 April 2006 at 5 GHz using 6 EVN telescopes during a weak flare (11 mJy); just resolved in the jet direction (PA 140 deg). (Rushton et al.)
• Microquasar Cygnus X-3 (10 kpc), observed on 20 April (a) and 18 May 2006 (b). The source was in a semi-quiescent state in (a) and in a flaring state in (b). The core of the source is probably ~20 mas to the N of knot A. (Tudose et al.)

Slide 9: RR001 – The First Rapid Response Experiment (Rushton & Spencer)
The experiment was planned as follows:
1. Operate 6 EVN telescopes in real time on 29th Jan 2007.
2. Correlate and analyse the results in double-quick time.
3. Select sources for follow-up observations.
4. Observe the selected sources on 1 Feb 2007.
The experiment worked – we successfully observed and analysed 16 sources (weak microquasars), ready for the follow-up run, but we found that none of the sources were suitably active at that time – a perverse universe!

Slide 10: Constant Bit-Rate Data over TCP/IP

Slide 11: CBR Test Setup

Slide 12: Moving CBR over TCP
• Can TCP deliver the data on time? Timely arrival of the data is what matters.
• Effect of loss rate on message arrival time, measured with:
  - TCP buffer 0.9 MB (= BDP), RTT 15.2 ms.
  - TCP buffer 1.8 MB (= BDP), RTT 27 ms.
• When there is packet loss, TCP decreases the rate.
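As a quick check (arithmetic added here, not on the slide): for the 525 Mbit/s CBR rate used in these tests, the bandwidth-delay product on the 15.2 ms path is

    \mathrm{BDP} = R \times \mathrm{RTT} = 525\,\mathrm{Mbit/s} \times 15.2\,\mathrm{ms} \approx 8.0\,\mathrm{Mbit} \approx 1.0\,\mathrm{MB}

consistent with the 0.9 MB buffer quoted above.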

Slide 13:
[Figure: message arrival time vs message number. A packet loss introduces a delay in the stream; arrival times fall behind the expected CBR arrival time until the flow resynchronises.]

Slide 14: CBR over TCP – Large TCP Buffer
• Message size: 1448 Bytes.
• Data rate: 525 Mbit/s.
• Route: Manchester – JIVE; RTT 15.2 ms.
• TCP buffer 160 MB.
• Drop 1 in 1.12 million packets.
• Throughput increases to catch up: peak throughput ~734 Mbit/s, min. throughput ~252 Mbit/s.

Slide 15: CBR over TCP – Message Delay
• Same conditions: message size 1448 Bytes, data rate 525 Mbit/s, Manchester – JIVE, RTT 15.2 ms, TCP buffer 160 MB, drop 1 in 1.12 million packets.
• OK, you can recover, BUT: peak delay ~2.5 s, set by the TCP buffer size and the RTT.
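One plausible reading of the ~2.5 s figure (an interpretation, not stated on the slide) is the time to drain a full 160 MB TCP buffer at the CBR rate:

    t_{\mathrm{drain}} = \frac{160\,\mathrm{MB} \times 8\,\mathrm{bit/byte}}{525\,\mathrm{Mbit/s}} \approx 2.4\,\mathrm{s}

so with a buffer this large the data always arrive, but they can arrive very late.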

Slide 16: Multi-Gigabit Tests over GÉANT
But will 10 Gigabit Ethernet work on a PC?

Slide 17: High-end Server PCs for 10 Gigabit
• Boston/Supermicro X7DBE.
• Two dual-core Intel Xeon Woodcrest 5130, 2 GHz; independent 1.33 GHz front-side buses.
• 530 MHz FB-DIMM memory (serial); parallel access to 4 banks.
• Chipset: Intel 5000P MCH (PCIe & memory) plus ESB2 (PCI-X, GE, etc.).
• PCI: 3 × 8-lane PCIe buses; 3 × 133 MHz PCI-X.
• 2 × Gigabit Ethernet.
• SATA.

Slide 18: 10 GigE Back-to-Back: UDP Latency
• Motherboard: Supermicro X7DBE; chipset: Intel 5000P MCH.
• CPU: 2 × dual-core Intel Xeon 5130, 2 GHz, 4096k L2 cache; memory bus: 2 independent, 1.33 GHz.
• PCIe 8-lane; Linux kernel 2.6.20-web100_pktd-plus.
• Myricom NIC 10G-PCIE-8A-R fibre; myri10ge v1.2.0 + firmware v1.4.10; rx-usecs=0, coalescence OFF, MSI=1, checksums ON, tx_boundary=4096.
• MTU 9000 bytes.
• Latency 22 µs and very well behaved; histogram FWHM ~1-2 µs.
• Latency slope 0.0028 µs/byte; back-to-back expectation 0.00268 µs/byte (memory 0.0004, PCIe 0.00054, 10GigE 0.0008, PCIe 0.00054, memory 0.0004).
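The expected slope is simply the sum of the per-byte costs of the stages listed above:

    m = 0.0004_{\mathrm{mem}} + 0.00054_{\mathrm{PCIe}} + 0.0008_{\mathrm{10GigE}} + 0.00054_{\mathrm{PCIe}} + 0.0004_{\mathrm{mem}} = 0.00268\ \mu\mathrm{s/byte}

close to the measured 0.0028 µs/byte.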

Slide 19: 10 GigE Back-to-Back: UDP Throughput
• Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre; rx-usecs=25, coalescence ON.
• MTU 9000 bytes.
• Max throughput 9.4 Gbit/s – note the rate for 8972-byte packets.
• ~0.002% packet loss in 10 M packets, in the receiving host.
• Sending host: 3 CPUs idle; 1 CPU ~90% in kernel mode, including ~10% soft interrupts.
• Receiving host: 3 CPUs idle; for <8 µs packet spacing, 1 CPU is 70-80% in kernel mode, including ~15% soft interrupts.

Slide 20: 10 GigE UDP Throughput vs Packet Size
• Motherboard: Supermicro X7DBE; Linux kernel 2.6.20-web100_pktd-plus.
• Myricom NIC 10G-PCIE-8A-R fibre; myri10ge v1.2.0 + firmware v1.4.10; rx-usecs=0, coalescence ON, MSI=1, checksums ON, tx_boundary=4096.
• Steps at 4060 and 8160 bytes, within 36 bytes of 2^n boundaries.
• Model the data transfer time as t = C + m × Bytes, where C includes the time to set up transfers. The fit is reasonable: C = 1.67 µs, m = 5.4e-4 µs/byte.
• The steps are consistent with C increasing by 0.6 µs.
• The Myricom driver segments the transfers, limiting each DMA to 4096 bytes – PCIe chipset dependent!
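Evaluating the fit at the largest packet size (a worked example added here, reading the fitted slope as m = 5.4 × 10⁻⁴ µs/byte):

    t = C + m \times \mathrm{Bytes} = 1.67\,\mu\mathrm{s} + 5.4\times10^{-4}\,\mu\mathrm{s/byte} \times 8972 \approx 6.5\,\mu\mathrm{s}

i.e. roughly 11 Gbit/s of host/DMA capability per 8972-byte datagram, so on this reading it is the 10GigE wire, not the PC, that sets the 9.4 Gbit/s limit.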

Slide 21: 10 GigE X7DBE–X7DBE: TCP iperf
• No packet loss; MTU 9000.
• TCP buffer 256k, but BDP ≈ 330k.
• Cwnd: slow start, then slow growth – limited by the sender!
• Duplicate ACKs: one event of 3 DupACKs; some packets re-transmitted.
• iperf TCP throughput: 7.77 Gbit/s.
[Web100 plots of the TCP parameters.]
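The sender limitation above is the classic symptom of a socket buffer smaller than the BDP. A minimal sketch of how a test program would request a larger send buffer follows; SO_SNDBUF is the standard API, but the 512 kB value is illustrative, and on Linux the kernel also caps it at net.core.wmem_max:

    /* Request a TCP send buffer large enough to cover the ~330k BDP,
     * so the sender's window, not its buffer, limits the flow. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <stdio.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int bufsize = 512 * 1024;          /* example value > BDP */
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       &bufsize, sizeof(bufsize)) != 0)
            perror("setsockopt(SO_SNDBUF)");

        /* Read back: Linux reports roughly double the requested value. */
        socklen_t len = sizeof(bufsize);
        getsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize, &len);
        printf("effective send buffer: %d bytes\n", bufsize);
        return 0;
    }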

Slide 22: OK, so it works!!!

Slide 23: ESLEA-FABRIC: 4 Gbit Flows over GÉANT2
• Set up a 4 Gigabit lightpath between GÉANT2 PoPs:
  - Collaboration with DANTE; GÉANT2 testbed London – Prague – London.
  - PCs in the DANTE London PoP with 10 Gigabit NICs.
• VLBI tests:
  - UDP performance: throughput, jitter, packet loss, 1-way delay, stability.
  - Continuous (days-long) data flows – vlbi_udp and udpmon.
  - Multi-gigabit TCP performance with current kernels.
  - Multi-gigabit CBR over TCP/IP.
  - Experience for FPGA Ethernet packet systems.
• DANTE interests:
  - Multi-gigabit TCP performance.
  - The effect of the (Alcatel 1678 MCC 10GE port) buffer size on bursty TCP using bandwidth-limited lightpaths.

Slide 24: The GÉANT2 Testbed
• 10 Gigabit SDH backbone; Alcatel 1678 MCCs; GE and 10GE client interfaces.
• Node locations: London, Amsterdam, Paris, Prague, Frankfurt.
• Lightpath routing is possible, so paths of different RTT can be made.
• Locate the PCs in London.

Slide 25: Provisioning the Lightpath on the Alcatel MCCs
• Some jiggery-pokery was needed with the NMS to force a looped-back lightpath London – Prague – London.
• Manual cross-connects (using the element manager) are possible but hard work: 196 needed, plus other operations!
• Instead, the Resource Manager was used to create two parallel VC-4-28v (single-ended) Ethernet private line (EPL) paths, constrained to transit DE.
• The paths were then manually joined in CZ – only 28 manually created cross-connects required.
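As a sanity check (assuming the nominal VC-4 payload rate of ≈149.76 Mbit/s, a standard SDH figure not quoted on the slide), 28 concatenated VC-4s give

    C = 28 \times 149.76\,\mathrm{Mbit/s} \approx 4.19\,\mathrm{Gbit/s}

which matches the ~4.2 Gbit/s seen in the UDP tests later.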

Slide 26: Provisioning the Lightpath on the Alcatel MCCs (continued)
• The paths come up; (transient) alarms clear.
• Result: a provisioned path of 28 virtually concatenated VC-4s, UK – NL – DE – NL – UK.
• Optical path ~4150 km; with dispersion compensation ~4900 km.
• RTT 46.7 ms.
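The RTT is consistent with simple light propagation around the loop (taking the usual ≈2 × 10⁸ m/s group velocity in fibre, an assumed figure):

    \mathrm{RTT} \approx \frac{2 \times 4900\,\mathrm{km}}{2 \times 10^{8}\,\mathrm{m/s}} = 49\,\mathrm{ms}

close to the measured 46.7 ms.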

Slide 27: Photos at the PoP
[Photos: the 10 GE test-bed, production SDH equipment, SDH optical transport, and the production router.]

Slide 28: 4 Gbit Flows on GÉANT: UDP Throughput
• Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre; rx-usecs=25, coalescence ON.
• MTU 9000 bytes.
• Max throughput 4.199 Gbit/s.
• Sending host: 3 CPUs idle; 1 CPU ~90% in kernel mode, including ~10% soft interrupts.
• Receiving host: 3 CPUs idle; for <8 µs packet spacing, 1 CPU is ~37% in kernel mode, including ~9% soft interrupts.

Slide 29: 4 Gbit Flows on GÉANT: 1-way Delay
• Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre; coalescence OFF.
• 1-way delay stable at 23.435 ms.
• Peak separation 86 µs; ~40 µs extra delay.
• Lab tests give the same: peak separation 86 µs, ~40 µs extra delay.
• The lightpath adds no unwanted effects.

Slide 30: 4 Gbit Flows on GÉANT: Jitter Histograms
• Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre; coalescence OFF.
• Histograms taken at packet separations of 100 µs and 300 µs.
• Peak separation ~36 µs; these peaks are a factor ~100 smaller than the main peak.
• Lab tests agree: the lightpath adds no effects.

Slide 31: 4 Gbit Flows on GÉANT: UDP Flow Stability
• Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre; coalescence OFF.
• MTU 9000 bytes; packet spacing 18 µs; trials send 10 M packets.
• Ran for 26 hours – throughput very stable at 3.9795 Gbit/s.
• Occasional trials have packet loss, ~40 in 10 M – investigating.
• Our thanks go to all our collaborators.
• DANTE really provided Bandwidth on Demand: a record 6 hours! Including driving to the PoP, installing the PCs and provisioning the lightpath.
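The stable rate agrees with the packet spacing (assuming the same 8972-byte datagrams as in the back-to-back tests, which the slide does not restate):

    R = \frac{8972\,\mathrm{bytes} \times 8\,\mathrm{bit/byte}}{18\,\mu\mathrm{s}} \approx 3.99\,\mathrm{Gbit/s}

in good agreement with the measured 3.9795 Gbit/s.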

Slide 32: Any Questions?

Slide 33: Provisioning the Lightpath on the Alcatel MCCs (details)
• Create a virtual network element, VNE2, for a planned (non-existing) port in Prague.
• Define the end points: out – port 3 in the UK & VNE2 in CZ; in – port 4 in the UK & VNE2 in CZ.
• Add a constraint to go via DE – otherwise OSPF picks the route.
• Set the capacity (28 VC-4s).
• The Alcatel Resource Manager allocates the routing of the EXPReS_out VC-4 trails.
• Repeat for EXPReS_ret.
• The same time slots are used in CZ for the EXPReS_out and EXPReS_ret paths.

