FABRIC WP1.2 Broadband Data Path: Protocols and Processor Interface
Bonn, 20/09/07
Ralph Spencer, The University of Manchester


Slide 2: Contents
- Outline
- WP1.2.1 Broadband Protocols
- WP1.2.2 Broadband data processing interface

Slide 3: Outline

WP1.2.1 Protocols
- Investigation of suitable protocols for real-time e-VLBI in the EVN context
- 1 FTE funded from EXPReS; RA: Stephen Kershaw
- Contributed work over the last year funded by the ESLEA project
- Strategic document, May 2006
- Protocols performance report (interim), June 2007

WP1.2.2 Broadband Data Processor interface
- Interface to the e-MERLIN correlator
- 4 Gbps input (from Onsala); 4 x 1 Gbps output to JIVE (SA1, EXPReS)
- 2 FTE (EXPReS + FABRIC): Jonathan Hargreaves (since Dec 2006)
- Using iBOBs: Xilinx Virtex-II FPGAs
- e-MERLIN station boards: Xilinx Virtex-4s

Slide 4: WP1.2.1 Protocols: What's in the report?

TCP_delay: constant-bit-rate data transfer over TCP
- Reaction to lost packets: data delayed
- Can catch up, but needs large data buffers and adequate link bandwidth
- Impractical; an alternative protocol is needed

VLBI_UDP
- UDP-based transfer system using ring buffers
- Allows selective packet dropping
- Implementation on PCs works; tests with the correlator
- Implemented on Mk5As; code diverged (JIVE/JBO)
- Both work at 512 Mbps; 1 Gbps tests to follow

DCCP (Datagram Congestion Control Protocol)
- In the Linux kernel; uses a selectable congestion-control algorithm (CCID)
- Needs a suitable CCID for e-VLBI
- Further work needed if it is to be used in e-VLBI
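The ring-buffer/selective-dropping idea behind VLBI_UDP can be illustrated with a minimal sketch. This is hypothetical code, not the actual VLBI_UDP implementation; a drop-oldest policy is assumed purely for illustration, the point being that a real-time stream keeps its timing by discarding data rather than stalling:

```python
from collections import deque

class RingBuffer:
    """Fixed-capacity buffer for VLBI data frames; when full, it drops
    the oldest frame rather than blocking the real-time stream."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = deque()
        self.dropped = 0

    def push(self, seq, payload):
        if len(self.frames) >= self.capacity:
            self.frames.popleft()   # selective drop: lose data, keep timing
            self.dropped += 1
        self.frames.append((seq, payload))

    def pop(self):
        return self.frames.popleft() if self.frames else None
```

A correlator can tolerate a small fraction of dropped frames, which is why this trade (loss for bounded latency) suits e-VLBI better than TCP's retransmission.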

Slide 5: WP1.2.1 Protocols: What's now/next?
- Work on TCP_delay completed (Stephen)
- VLBI_UDP ideas incorporated into Haro's/Arpad's code; 512 Mbps successful on Mk5As
- Bottleneck in VLBI_UDP identified; selective packet dropping implemented (can run 1024 Mbps VLBI over 1 GE) (Simon)
- Work on multi-destination protocols initiated (Stephen)
- VSI-E implemented; trans-Atlantic tests underway (Tony)
- 10 Gbps tests undertaken on the GÉANT2 research network (Rich)
- Tests to Onsala being planned

Slide 6: 4 Gbit flows over GÉANT2

Set up a 4 Gigabit lightpath between GÉANT2 PoPs
- Collaboration with DANTE: GÉANT2 testbed
- London-Prague-London, and London-Amsterdam-Frankfurt-Prague-Paris-London
- PCs in the DANTE London PoP with 10 Gigabit NICs

VLBI tests: UDP performance
- Throughput, jitter, packet loss, 1-way delay, stability
- Continuous (days-long) data flows: VLBI_UDP and udpmon
- Multi-gigabit TCP performance with current kernels
- Multi-gigabit CBR over TCP/IP
- Experience for FPGA Ethernet packet systems

DANTE interests:
- Multi-gigabit TCP performance
- The effect of (Alcatel 1678 MCC 10GE port) buffer size on bursty TCP using bandwidth-limited lightpaths
- 10 Gigabit London-New York: Alcatel-Ciena interoperability
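The UDP tests above depend on sending precisely paced packet streams, as udpmon does. A minimal illustrative sketch follows (the busy-wait pacing loop and the helper names are assumptions for illustration, not udpmon's actual code; at multi-gigabit rates a busy-wait is used because OS sleep timers are far too coarse):

```python
import socket
import time

def wire_rate_gbps(packet_bytes, spacing_us):
    """Average throughput of a stream of fixed-size packets sent one
    every spacing_us microseconds, in Gbit/s."""
    return packet_bytes * 8 / (spacing_us * 1000)

def send_spaced(dest, payload, spacing_us, count):
    """Send `count` UDP packets to `dest` with fixed inter-packet
    spacing, pacing by busy-waiting on a monotonic clock."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    next_t = time.perf_counter()
    for _ in range(count):
        sock.sendto(payload, dest)
        next_t += spacing_us * 1e-6
        while time.perf_counter() < next_t:
            pass                      # spin until the next send slot
    sock.close()
```

For example, 9000 bytes every 18 µs corresponds to 4.0 Gbit/s, which is the regime of the GÉANT2 trials on the later slides.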

Slide 7: The GÉANT2 Testbed
- 10 Gigabit SDH backbone
- Alcatel 1678 MCCs
- GE and 10GE client interfaces
- Node locations: London, Amsterdam, Paris, Prague, Frankfurt
- Lightpath routing allows paths of different RTT
- PCs located in London

Slide 8: Provisioning the lightpath on the Alcatel MCCs
- Some jiggery-pokery needed with the NMS to force a "looped-back" lightpath London-Prague-London
- Manual cross-connects (XCs, via the element manager) possible but hard work: 196 needed, plus other operations
- Instead used the RM to create two parallel VC-4-28v (single-ended) Ethernet Private Line (EPL) paths, constrained to transit DE
- Then manually joined the paths in CZ
- Only 28 manually created XCs required

Slide 9: Provisioning the lightpath on the Alcatel MCCs (continued)
- Paths come up; (transient) alarms clear
- Result: provisioned a path of 28 virtually concatenated VC-4s, UK-NL-DE-NL-UK
- Optical path ~4150 km; ~4900 km with dispersion compensation
- RTT 46.7 ms
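The measured RTT is consistent with simple fibre propagation over the dispersion-compensated path length. A rough sanity check (the group index of ~1.468 is a typical value for standard single-mode fibre, assumed here, not taken from the slides):

```python
C = 299_792_458.0    # speed of light in vacuum, m/s
GROUP_INDEX = 1.468  # typical group index of standard single-mode fibre (assumed)

def fibre_rtt_ms(one_way_km):
    """Round-trip propagation time through one_way_km of fibre, in ms."""
    one_way_m = one_way_km * 1e3
    return 2 * one_way_m * GROUP_INDEX / C * 1e3
```

For 4900 km this gives roughly 48 ms, close to the measured 46.7 ms, so the delay is essentially all propagation with little queueing.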

Slide 10: Photos at the PoP
[Photos: 10GE test-bed SDH, production SDH, optical transport, production router]

Slide 11: 4 Gbps on GÉANT: UDP throughput
- Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre NIC
- rx-usecs=25, interrupt coalescence ON, MTU 9000 bytes
- Max throughput 4.199 Gbit/s
- Sending host: 3 CPUs idle; one CPU ~90% in kernel mode, including ~10% soft interrupts
- Receiving host: 3 CPUs idle; for <8 µs packet spacing, one CPU is ~37% in kernel mode, including ~9% soft interrupts

Slide 12: 4 Gig flows on GÉANT: 1-way delay
- Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre NIC; coalescence OFF
- 1-way delay stable at 23.435 ms
- Peak separation 86 µs; ~40 µs extra delay
- Lab tests: peak separation 86 µs, ~40 µs extra delay
- Lightpath adds no unwanted effects

Slide 13: 4 Gig flows on GÉANT: jitter histograms
- Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre NIC; coalescence OFF
- Packet separations tested: 100 µs and 300 µs
- Peak separation ~36 µs (a factor of 100 smaller)
- Lab tests: lightpath adds no effects
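A jitter histogram of this kind is simply a binned distribution of packet inter-arrival times around the nominal spacing. A minimal illustrative sketch (function names and binning are assumptions, not the actual analysis code):

```python
from collections import Counter

def interarrival_us(timestamps_us):
    """Gaps between consecutive packet arrival timestamps (microseconds)."""
    return [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]

def jitter_histogram(timestamps_us, bin_us=1):
    """Count inter-arrival times, quantised into bin_us-wide bins; the
    spread of this histogram around the send spacing is the jitter."""
    return Counter(round(d / bin_us) * bin_us
                   for d in interarrival_us(timestamps_us))
```

On a clean path the histogram should be a narrow spike at the send spacing; secondary peaks (such as the ~36 µs separation on the slide) usually point to host effects like interrupt scheduling rather than the network.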

Slide 14: 4 Gig flows on GÉANT: UDP flow stability
- Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R fibre NIC; coalescence OFF; MTU 9000 bytes
- Packet spacing 18 µs; trials send 10 M packets; ran for 26 hours
- Throughput very stable: 3.9795 Gbit/s
- Occasional trials have packet loss, ~40 in 10 M; investigating

Our thanks go to all our collaborators. DANTE really provided "bandwidth on demand": a record 6 hours, including driving to the PoP, installing the PCs, and provisioning the lightpath.

Slide 15: Classic bottleneck
- 10 Gbit/s input, 4 Gbit/s output
- Use udpmon to send a stream of spaced UDP packets
- Measure the packet number of the first lost frame as a function of the packet spacing w

Alcatel buffer size method:
- Slope gives buffer size ~57 kBytes
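The slope method can be understood with simple queue bookkeeping: each packet adds its own bytes to the bottleneck buffer, while the 4 Gbit/s output drains it for the packet duration plus the gap, so the first-loss packet number grows as the spacing widens. A hedged sketch of that model (the 10/4 Gbit/s rates and the ~57 kByte result are from the slide; the fluid model itself is an illustration, not the measurement code):

```python
def queue_growth_per_packet(pkt_bytes, spacing_s, r_in_bps, r_out_bps):
    """Net bytes added to the bottleneck queue per packet: the packet
    arrives in full, but the output drains for the packet time plus gap."""
    t_cycle = pkt_bytes * 8 / r_in_bps + spacing_s
    drained = r_out_bps * t_cycle / 8
    return pkt_bytes - drained

def packets_until_first_loss(buffer_bytes, pkt_bytes, spacing_s,
                             r_in_bps=10e9, r_out_bps=4e9):
    """Predicted packet number of the first loss, or None if the input
    rate never exceeds the output rate (queue never fills)."""
    g = queue_growth_per_packet(pkt_bytes, spacing_s, r_in_bps, r_out_bps)
    if g <= 0:
        return None
    return buffer_bytes / g
```

Plotting the inverse of the first-loss packet number against spacing gives a straight line whose slope is set by the buffer size, which is how the ~57 kByte figure is extracted.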

Slide 16: WP1.2.2 Processor Interface
- UC Berkeley iBOB design (Dan Werthimer)
- 10 tested iBOBs delivered to JBO in June 2007
- Firmware being developed (Jonathan)
- Priority: 10 GE data transfer through the CX4 connector
- iBOB connects via VSI-H to the EVLA/e-MERLIN station board
- Prototype station board tested at Penticton; a new version will be produced
- Delivery of station boards to JBO expected after the end of the year
- Fringe tests will need correlator cards: some time in 2008?

Slide 17: Connection to e-MERLIN

Slide 18: iBOB under test

Slide 19: iBOB test configuration
- iBOB configured as a network testing device
- CX4 10 Gbps link (up to 15 m) to a network PC or switch; optional second CX4
- Local PC: FPGA firmware downloaded over JTAG; local monitoring over RS232 (removed when the firmware is stable)
- 10/100 Ethernet to a remote PC: remote login to the network PC to run tests from JBO, Manchester, or elsewhere

Slide 20: iBOB test set-up

Slide 21: Simulink design for generating bursts of UDP packets

Slide 22: UDP throughput vs. packet spacing

PC:
- Kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R CX4 NIC
- rx-usecs=25, coalescence ON, MTU 9000 bytes
- Max UDP throughput 9.4 Gbit/s

iBOB:
- Packet 8234 bytes: data 8192 + headers 42
- 100 MHz clock; max rate 6.6 Gbit/s; 6.44 Gbit/s seen
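The 42-byte header on the slide is exactly the Ethernet (14) + IPv4 (20) + UDP (8) overhead, so the payload efficiency is 8192/8234, about 99.5%. A small helper makes the bookkeeping explicit (an illustration; the function name and the idea of deriving payload rate from a raw wire rate are assumptions, not the iBOB firmware's arithmetic):

```python
# Per-packet header overhead: Ethernet + IPv4 + UDP, matching the
# 42 bytes quoted on the slide.
ETH_HDR, IP_HDR, UDP_HDR = 14, 20, 8

def payload_rate_gbps(wire_rate_gbps, payload_bytes=8192):
    """Usable payload throughput, given a raw wire rate and the
    per-packet header overhead."""
    frame_bytes = payload_bytes + ETH_HDR + IP_HDR + UDP_HDR
    return wire_rate_gbps * payload_bytes / frame_bytes
```

With 8192-byte payloads the header cost is only ~0.5% of the link, which is why jumbo-sized packets matter at these rates.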

Slide 23: Current status
- Using the network PC to test the 10 Gbps capability of the iBOB
- Can ARP, ping, and send and receive UDP packets using software running on the iBOB's PowerPC
- 10 Gbps packets sent using FPGA hardware

Next few weeks:
- UDP network tests
- Develop VSI-E control protocols using Linux

Next 6 months:
- iBOB-to-iBOB transmission over a network using a modified RTP packet header
- Algorithms to buffer and re-order late packets in the receiver need to be developed and tested
- Develop algorithms on a Xilinx development board to remove the e-MERLIN delay model, remove the n x 10 kHz offset, and filter a 128 MHz band into VLBI-compatible sub-bands
- Implement on the Virtex-4 SX35 chips on the station board
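The buffer-and-re-order task above can be sketched as a sequence-number re-order buffer that releases packets in order and gives up on a missing packet once arrivals run a window's depth ahead of it. This is a hypothetical sketch, not the planned iBOB firmware; the window policy and class shape are assumptions:

```python
class ReorderBuffer:
    """Re-orders packets by sequence number. Packets arriving after
    their slot has been released are counted as late and discarded."""

    def __init__(self, window=64):
        self.window = window   # how far ahead arrivals may run before
                               # a missing sequence number is abandoned
        self.pending = {}      # seq -> payload, awaiting release
        self.next_seq = 0      # next sequence number to emit
        self.late = 0

    def insert(self, seq, payload):
        if seq < self.next_seq:
            self.late += 1     # arrived after its slot was passed
            return
        self.pending[seq] = payload

    def release(self):
        """Return all payloads that can now be emitted in order."""
        out = []
        while self.pending:
            if self.next_seq in self.pending:
                out.append(self.pending.pop(self.next_seq))
                self.next_seq += 1
            elif max(self.pending) - self.next_seq >= self.window:
                self.next_seq += 1   # abandon a packet as lost
            else:
                break                # still within the window: wait
        return out
```

The window trades latency against loss: a deeper window recovers more late packets but delays the in-order stream, which matters for a real-time correlator input.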

Slide 24: Questions?
[Photo: Monty Midnight Maroon, Nov 2006]
Contact: ralph.spencer@manchester.ac.uk
EXPReS is made possible through the support of the European Commission (DG-INFSO), Sixth Framework Programme, Contract #026642

