ESLEA Closing Conference, Edinburgh, March 2007. Protocols: Working with 10 Gigabit Ethernet. Richard Hughes-Jones, The University of Manchester.


Slide 1: Protocols: Working with 10 Gigabit Ethernet
Richard Hughes-Jones, The University of Manchester

Slide 2: Outline
- Introduction
- 10 GigE on SuperMicro X7DBE
- 10 GigE on SuperMicro X5DPE-G2
- 10 GigE and TCP: monitored with web100, disk writes
- 10 GigE and Constant Bit Rate transfers
- UDP + memory access
- GÉANT 4 Gigabit tests

Slide 3: udpmon: Latency & Throughput Measurements
- UDP/IP packets sent between back-to-back systems
  - Similar processing to TCP/IP, but no flow-control or congestion-avoidance algorithms
- Latency
  - Round-trip times using request-response UDP frames
  - Latency as a function of frame size: the slope s is given by mem-mem copy(s) + PCI + Gig Ethernet + PCI + mem-mem copy(s)
  - The intercept indicates processing times + hardware latencies
  - Histograms of 'singleton' measurements
  - Tells us about: the behaviour of the IP stack, the way the hardware operates, interrupt coalescence
- UDP throughput
  - Send a controlled stream of UDP frames spaced at regular intervals
  - Vary the frame size and the frame transmit spacing, and measure: the time of first and last frames received; the number of packets received, lost, and out of order; a histogram of inter-packet spacing of received packets; the packet-loss pattern; 1-way delay; CPU load; number of interrupts
  - Tells us about: the behaviour of the IP stack, the way the hardware operates, and the capacity & available throughput of the LAN / MAN / WAN
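The request-response latency method above can be sketched with a minimal loopback harness. This is not the udpmon implementation; the port number, frame count, and echo-server shape are illustrative:

```python
import socket
import threading
import time

def echo_server(sock):
    # Echo each request frame straight back, like the remote end
    # of a request-response latency test.
    while True:
        data, addr = sock.recvfrom(65535)
        if data == b"STOP":
            return
        sock.sendto(data, addr)

def measure_rtt(n_frames, size, port=14144):
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", port))
    threading.Thread(target=echo_server, args=(rx,), daemon=True).start()

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.settimeout(1.0)
    payload = bytes(size)
    rtts = []
    for _ in range(n_frames):
        t0 = time.perf_counter()
        tx.sendto(payload, ("127.0.0.1", port))
        tx.recvfrom(65535)                              # block until the echo
        rtts.append((time.perf_counter() - t0) * 1e6)   # singleton RTT, µs
    tx.sendto(b"STOP", ("127.0.0.1", port))
    return rtts

rtts = sorted(measure_rtt(200, 64))
print(f"median RTT {rtts[len(rtts) // 2]:.1f} us over {len(rtts)} singletons")
```

Repeating the run for several frame sizes and fitting a straight line through the medians gives the slope and intercept the slide describes.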

Slide 4: Throughput Measurements
- UDP throughput with udpmon
- Send a controlled stream of UDP frames (n bytes each) spaced at regular wait-time intervals
- [Diagram: sender-receiver message sequence. The sender zeroes the remote statistics ("Zero stats" / "OK done"), sends the data frames at regular intervals, signals the end of the test ("OK done"), then fetches the remote statistics: number received, number lost plus the loss pattern, number out of order, CPU load & number of interrupts, 1-way delay. Measured quantities: time to send, time to receive, inter-packet time (histogram).]
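The paced-stream throughput test can likewise be sketched over loopback. Again this is a sketch, not udpmon itself; the packet count, spacing, and receive-buffer size are illustrative choices:

```python
import socket
import threading
import time

def paced_udp_burst(n_pkts=100, size=1200, spacing_us=50, port=14145):
    # Receiver: timestamp each arriving frame until the stream goes quiet.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
    rx.bind(("127.0.0.1", port))
    rx.settimeout(1.0)
    arrivals = []

    def receiver():
        while True:
            try:
                rx.recvfrom(65535)
            except socket.timeout:
                return
            arrivals.append(time.perf_counter())

    t = threading.Thread(target=receiver)
    t.start()

    # Sender: frames of `size` bytes spaced `spacing_us` apart.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(size)
    next_send = time.perf_counter()
    for _ in range(n_pkts):
        while time.perf_counter() < next_send:      # busy-wait pacing
            pass
        tx.sendto(payload, ("127.0.0.1", port))
        next_send += spacing_us * 1e-6
    t.join()

    # Wire time: first to last frame received, as in the diagram.
    wire = arrivals[-1] - arrivals[0] if len(arrivals) > 1 else 0.0
    mbit = (len(arrivals) - 1) * size * 8 / wire / 1e6 if wire else 0.0
    return len(arrivals), n_pkts - len(arrivals), mbit

rx_count, lost, mbit = paced_udp_burst()
print(f"received {rx_count}, lost {lost}, ~{mbit:.0f} Mbit/s on the wire")
```

Sweeping `size` and `spacing_us` reproduces the throughput-versus-spacing surfaces shown in the later slides.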

Slide 5: High-end Server PCs for 10 Gigabit
- Boston/Supermicro X7DBE
- Two dual-core Intel Xeon Woodcrest CPUs
  - Independent 1.33 GHz front-side buses
- 530 MHz FD (serial) memory; parallel access to 4 banks
- Chipsets: Intel 5000P MCH (PCIe & memory), ESB2 (PCI-X, GE etc.)
- PCI: 3 x 8-lane PCIe buses, 3 x 133 MHz PCI-X
- 2 x Gigabit Ethernet
- SATA

Slide 6: 10 GigE Back-to-Back: UDP Latency
- Motherboard: Supermicro X7DBE
- Chipset: Intel 5000P MCH
- CPU: 2 x dual-core Intel Xeon with 4096 kB L2 cache
- Memory bus: 2 independent, 1.33 GHz
- PCIe, 8 lane
- Linux kernel with the web100_pktd-plus patch
- Myricom NIC 10G-PCIE-8A-R, fibre
- myri10ge driver; rx-usecs=0 (coalescence OFF), MSI=1, checksums ON, tx_boundary=4096
- MTU 9000 bytes
- Latency 22 µs and very well behaved
- Latency slope: back-to-back expectation is the sum of the per-byte costs of mem, PCIe, 10 GigE, PCIe, mem
- Histogram FWHM ~1-2 µs

Slide 7: 10 GigE Back-to-Back: UDP Throughput
- Kernel with the web100_pktd-plus patch
- Myricom 10G-PCIE-8A-R, fibre; rx-usecs=25, coalescence ON
- MTU 9000 bytes
- Max throughput 9.4 Gbit/s; notice the rate for 8972-byte packets
- ~0.002% packet loss in 10M packets, in the receiving host
- Sending host: 3 CPUs idle; one CPU ~90% in kernel mode, including ~10% soft interrupts
- Receiving host: 3 CPUs idle; for <8 µs packet spacing, 1 CPU is 70-80% in kernel mode, including ~15% soft interrupts

Slide 8: 10 GigE UDP Throughput vs Packet Size
- Motherboard: Supermicro X7DBE
- Linux kernel with the web100_pktd-plus patch
- Myricom NIC 10G-PCIE-8A-R, fibre; myri10ge driver; rx-usecs=0, coalescence ON, MSI=1, checksums ON, tx_boundary=4096
- Steps at 4060 and 8160 bytes, within 36 bytes of 2^n boundaries
- Model the data-transfer time as t = C + m * bytes, where C includes the time to set up transfers
  - Fit is reasonable: C = 1.67 µs, m = 5.4e-4 µs/byte
  - The steps are consistent with C increasing by 0.6 µs
- The Myricom driver segments the transfers, limiting each DMA to 4096 bytes; this is PCIe-chipset dependent!
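The linear model t = C + m*bytes is a one-variable least-squares fit. A sketch, using synthetic timings generated from the slide's fitted constants rather than the real measured data:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = c + m*x.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return c, m

sizes = [1000, 2000, 4000, 8000]            # packet sizes, bytes
C_true, m_true = 1.67, 5.4e-4               # µs and µs/byte, from the fit
times = [C_true + m_true * b for b in sizes]

c, m = fit_line(sizes, times)
print(f"C = {c:.2f} us, m = {m:.2e} us/byte")
```

On real data the interesting part is the residuals: the 0.6 µs jumps in C at the 4 kB DMA boundaries show up as steps the single straight line cannot absorb.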

Slide 9: 10 GigE via Cisco 7600: UDP Latency
- Motherboard: Supermicro X7DBE
- PCIe, 8 lane
- Linux SMP kernel
- Myricom NIC 10G-PCIE-8A-R, fibre; myri10ge driver; rx-usecs=0, coalescence OFF, MSI=1, checksums ON
- MTU 9000 bytes
- Latency 36.6 µs and very well behaved
- Switch latency: the increase over the 22 µs back-to-back figure
- Switch internal per-byte cost: ~0.0008 µs/byte, comparable to the PCIe / GigE components
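The switch's contribution falls out of the two latency figures by simple subtraction; since the request-response frames cross the switch once in each direction, the difference covers two traversals. The per-traversal figure below is our inference, not a number stated on the slide:

```python
# Median request-response latencies from the slides, in microseconds.
b2b_us = 22.0          # back-to-back, no switch
via_switch_us = 36.6   # same hosts through the Cisco 7600

# Each RTT crosses the switch twice, so halve the difference.
per_traversal_us = (via_switch_us - b2b_us) / 2
print(f"switch adds ~{per_traversal_us:.1f} us per traversal")
```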

Slide 10: The "SC05" Server PCs
- Boston/Supermicro X7DBE
- Two Intel Xeon Nocona 3.2 GHz; 2048 kB cache; shared 800 MHz front-side bus
- DDR2-400 memory
- Chipset: Intel 7520 Lindenhurst
- PCI: 2 x 8-lane PCIe buses, 1 x 4-lane PCIe bus, 3 x 133 MHz PCI-X
- 2 x Gigabit Ethernet

Slide 11: 10 GigE X7DBE to X6DHE: UDP Throughput
- Kernel with the web100_pktd-plus patch
- Myricom 10G-PCIE-8A-R, fibre; rx-usecs=25, coalescence ON
- MTU 9000 bytes
- Max throughput 6.3 Gbit/s
- Packet loss in the receiving host
- Sending host: 3 CPUs idle; 1 CPU is >90% in kernel mode
- Receiving host: 3 CPUs idle; for <8 µs packet spacing, 1 CPU is 70-80% in kernel mode, including ~15% soft interrupts

Slide 12: So now we can run at 9.4 Gbit/s. Can we do any work?

Slide 13: 10 GigE X7DBE to X7DBE: TCP iperf
- No packet loss
- MTU 9000
- TCP buffer 256 kB; BDP ~330 kB
- Cwnd: slow-start, then slow growth; limited by the sender!
- Duplicate ACKs: one event of 3 DupACKs
- Packets re-transmitted
- iperf throughput 7.77 Gbit/s
- Web100 plots of the TCP parameters
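The buffer-versus-BDP comparison on this and the following slides is the standard bandwidth-delay-product calculation. A sketch with illustrative numbers (the RTT here is chosen to reproduce the slide's ~330 kB figure at 10 Gbit/s, not taken from the slide):

```python
def bdp_bytes(rate_bit_s, rtt_s):
    # Bandwidth-delay product: bits in flight on the path, in bytes.
    return rate_bit_s * rtt_s / 8.0

rate = 10e9        # 10 Gbit/s link
rtt = 264e-6       # illustrative RTT giving ~330 kB
bdp = bdp_bytes(rate, rtt)
buf = 256 * 1024   # the slide's 256 kB TCP buffer

print(f"BDP {round(bdp)} bytes vs buffer {buf} bytes")
```

Whenever the socket buffer is smaller than the BDP, the window, not the wire, caps the throughput, which is consistent with the "limited by sender" observation.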

Slide 14: 10 GigE X7DBE to X7DBE: TCP iperf
- Packet loss 1 in 50,000, inserted by a receive-kernel patch
- MTU 9000
- TCP buffer 256 kB; BDP ~330 kB
- Cwnd: slow-start, then slow growth; limited by the sender!
- Duplicate ACKs: ~10 DupACKs for every lost packet
- Packets re-transmitted: one per lost packet
- iperf throughput 7.84 Gbit/s
- Web100 plots of the TCP parameters

Slide 15: 10 GigE X7DBE to X7DBE: CBR/TCP
- Packet loss 1 in 50,000, inserted by a receive-kernel patch
- tcpdelay message 8120 bytes; wait 7 µs
- RTT 36 µs
- TCP buffer 256 kB; BDP ~330 kB
- Cwnd dips as expected
- Duplicate ACKs: ~15 DupACKs for every lost packet
- Packets re-transmitted: one per lost packet
- tcpdelay throughput 7.33 Gbit/s
- Web100 plots of the TCP parameters

Slide 16: B2B UDP with Memory Access
- Send UDP traffic back-to-back over 10 GigE
- On the receiver, run an independent memory-write task
  - L2 cache 4096 kB; 8000 kB blocks; 100% user mode
- Achievable UDP throughput: mean 9.39 Gbit/s (sigma 106); mean 9.21 Gbit/s (sigma 37); mean 9.2 Gbit/s (sigma 30)
- Packet loss: mean 0.04%; mean 1.4%; mean 1.8%
- CPU load:
  Cpu0 :   6.0% us, 74.7% sy, 0.0% ni,   0.3% id, 0.0% wa, 1.3% hi, 17.7% si, 0.0% st
  Cpu1 :   0.0% us,  0.0% sy, 0.0% ni, 100.0% id, 0.0% wa, 0.0% hi,  0.0% si, 0.0% st
  Cpu2 :   0.0% us,  0.0% sy, 0.0% ni, 100.0% id, 0.0% wa, 0.0% hi,  0.0% si, 0.0% st
  Cpu3 : 100.0% us,  0.0% sy, 0.0% ni,   0.0% id, 0.0% wa, 0.0% hi,  0.0% si, 0.0% st

Slide 17: ESLEA-FABRIC: 4 Gbit Flows over GÉANT
- Set up a 4 Gigabit lightpath between GÉANT PoPs
  - Collaboration with Dante
  - GÉANT Development Network, London-London or London-Amsterdam, and the GÉANT Lightpath service, CERN-Poznan
  - PCs in their PoPs with 10 Gigabit NICs
- VLBI tests: UDP performance
  - Throughput, jitter, packet loss, 1-way delay, stability
  - Continuous (days-long) data flows: VLBI_UDP and multi-Gigabit TCP performance with current kernels
  - Experience for FPGA Ethernet packet systems
- Dante interests: multi-Gigabit TCP performance
  - The effect of (Alcatel) buffer size on bursty TCP using bandwidth-limited lightpaths

Slide 18: Options Using the GÉANT Development Network
- 10 Gigabit SDH backbone
- Alcatel 1678 MCC
- Node locations: London, Amsterdam, Paris, Prague, Frankfurt
- Traffic routing is possible, so long-RTT paths can be made
- Available now (2007)
- Less pressure for long-term tests

Slide 19: Options Using the GÉANT Lightpaths
- Set up a 4 Gigabit lightpath between GÉANT PoPs
  - Collaboration with Dante
  - PCs in Dante PoPs
- 10 Gigabit SDH backbone
- Alcatel 1678 MCC
- Node locations: Budapest, Geneva, Frankfurt, Milan, Paris, Poznan, Prague, Vienna
- Traffic routing is possible, so long-RTT paths can be made
- Ideal: London-Copenhagen

Slide 20: Any Questions?

Slide 21: Backup Slides

Slide 22: 10 Gigabit Ethernet: UDP Throughput
- A 1500-byte MTU gives ~2 Gbit/s
- Used the maximum user-length MTU
- DataTAG Supermicro PCs
  - Dual 2.2 GHz Xeon CPUs, FSB 400 MHz
  - PCI-X mmrbc 512 bytes
  - Wire-rate throughput of 2.9 Gbit/s
- CERN OpenLab HP Itanium PCs
  - Dual 1.0 GHz 64-bit Itanium CPUs, FSB 400 MHz
  - PCI-X mmrbc 4096 bytes
  - Wire rate of 5.7 Gbit/s
- SLAC Dell PCs
  - Dual 3.0 GHz Xeon CPUs, FSB 533 MHz
  - PCI-X mmrbc 4096 bytes
  - Wire rate of 5.4 Gbit/s

Slide 23: 10 Gigabit Ethernet: Tuning PCI-X
- 16080-byte packets every 200 µs
- Intel PRO/10GbE LR adapter
- PCI-X bus occupancy vs mmrbc
  - Measured times
  - Times based on PCI-X timings from the logic analyser
  - Expected throughput ~7 Gbit/s; measured 5.7 Gbit/s
- [Plot: traces for mmrbc 512, 1024, 2048 and 4096 bytes (5.7 Gbit/s); phases labelled CSR access, PCI-X sequence, data transfer, interrupt & CSR update]
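Why mmrbc matters can be seen with a rough bus-occupancy model: the DMA for one packet is split into bursts of at most mmrbc bytes, and each burst pays a fixed arbitration/setup cost. The per-burst overhead below is a hypothetical figure for illustration, not a number taken from the logic-analyser traces:

```python
import math

def pcix_efficiency(packet_bytes, mmrbc,
                    overhead_cycles=8, width_bytes=8):
    # Rough model: a 64-bit (8-byte) wide bus moves width_bytes per
    # data cycle; each burst of at most mmrbc bytes adds a fixed
    # overhead (hypothetical overhead_cycles, for illustration only).
    bursts = math.ceil(packet_bytes / mmrbc)
    data_cycles = math.ceil(packet_bytes / width_bytes)
    total_cycles = data_cycles + bursts * overhead_cycles
    return data_cycles / total_cycles

effs = {m: pcix_efficiency(16080, m) for m in (512, 1024, 2048, 4096)}
for m, e in effs.items():
    print(f"mmrbc {m:4d}: {e:.1%} of bus cycles carry data")
```

The trend, fewer bursts and so less fixed overhead as mmrbc grows, matches the slide's observation that throughput rises from mmrbc 512 up to 4096 bytes.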

Slide 24: 10 Gigabit Ethernet: TCP Data Transfer on PCI-X
- Sun V20Z, 1.8 GHz to 2.6 GHz dual Opterons
- Connected via a 6509
- XFrame II NIC
- PCI-X mmrbc 4096 bytes, 66 MHz
- Two 9000-byte packets back-to-back
- Average rate 2.87 Gbit/s
- Bursts of packets; gap between bursts 343 µs
- 2 interrupts per burst
- [Plot: phases labelled CSR access, data transfer]

Slide 25: 10 Gigabit Ethernet: UDP Data Transfer on PCI-X
- Sun V20Z, 1.8 GHz to 2.6 GHz dual Opterons
- Connected via a 6509
- XFrame II NIC
- PCI-X mmrbc 2048 bytes, 66 MHz
- One 8000-byte packet: 2.8 µs for CSRs, 24.2 µs data transfer; effective rate 2.6 Gbit/s
- 2000-byte packets, wait 0 µs: ~200 ms pauses
- 8000-byte packets, wait 0 µs: ~15 ms between data blocks
- [Plot: phases labelled CSR access (2.8 µs), data transfer]

Slide 26: 10 Gigabit Ethernet: Neterion NIC Results
- X5DPE-G2 Supermicro PCs, back-to-back
- Dual 2.2 GHz Xeon CPUs
- FSB 533 MHz
- XFrame II NIC
- PCI-X mmrbc 4096 bytes
- Low UDP rates, ~2.5 Gbit/s
- Large packet loss
- TCP:
  - One iperf TCP data stream: 4 Gbit/s
  - Two bi-directional iperf TCP data streams: 3.8 & 2.2 Gbit/s

Slide 27: SC|05 Seattle-SLAC 10 Gigabit Ethernet
- 2 lightpaths: routed over ESnet; layer 2 over UltraScience Net
- 6 Sun V20Z systems per lambda
- dcache remote disk data access
  - 100 processes per node
  - Each node sends or receives one data stream
- Used Neterion NICs & Chelsio TOE
- Data also sent to StorCloud using fibre-channel links
- Traffic on the 10 GigE link for 2 nodes: 3-4 Gbit/s per node on the trunk