




1 European Topology: NRNs & Geant
SuperJANET4, CERN, UvA, Manc, SURFnet, RAL
DataTAG, CERN, Sep 2002. R. Hughes-Jones, Manchester

2 Gigabit Throughput on the Production WAN
Manc - RAL: 570 Mbit/s, 91% of the 622 Mbit access link between SuperJANET4 and RAL; 1472-byte packets, propagation ~21 µs
Manc - UvA (SARA): 750 Mbit/s via SuperJANET4 + Geant + SURFnet
Manc - CERN: 460 Mbit/s; the CERN PC had a 32-bit PCI bus

3 Gigabit TCP Throughput on the Production WAN
Throughput vs TCP buffer size. TCP window sizes in Mbytes, calculated from RTT * bandwidth:

Link       | RTT (ms) | Window for 1 Gbit/s | Window for UDP BW 750 Mbit/s | Window for UDP BW 460 Mbit/s
Man - Ams  | 14.5     | 1.8                 | 1.36                         | -
Man - CERN | 21.4     | 2.68                | -                            | 1.23
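The window sizes in the table are just the bandwidth-delay product of each path. A minimal sketch of that calculation, assuming "Mbytes" in the table means 10^6 bytes (which matches the quoted figures):

```python
def tcp_window_mbytes(rtt_ms, bw_mbit_s):
    """TCP window (Mbytes) needed to keep a path full: RTT * bandwidth."""
    return (rtt_ms / 1e3) * (bw_mbit_s * 1e6) / 8 / 1e6

# Values from the slide's table:
print(f"Man - Ams,  1 Gbit/s:   {tcp_window_mbytes(14.5, 1000):.2f} Mbytes")  # ~1.8
print(f"Man - Ams,  750 Mbit/s: {tcp_window_mbytes(14.5, 750):.2f} Mbytes")   # ~1.36
print(f"Man - CERN, 1 Gbit/s:   {tcp_window_mbytes(21.4, 1000):.2f} Mbytes")  # ~2.68
print(f"Man - CERN, 460 Mbit/s: {tcp_window_mbytes(21.4, 460):.2f} Mbytes")   # ~1.23
```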

4 Gigabit TCP on the Production WAN: Man - CERN
Throughput vs n-streams
Default buffer size: slope ~25 Mbit/s/stream up to 9 streams, then ~15 Mbit/s/stream
With larger buffers the rate of increase per stream is larger; throughput plateaus at about 7 streams, giving a total of ~400 Mbit/s
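The default-buffer behaviour above can be sketched as a piecewise-linear model. The breakpoint and slopes are taken from the slide; the linear form itself is only an illustration of the observation, not a fit:

```python
def agg_throughput_default(n_streams):
    """Approximate Man-CERN aggregate TCP throughput (Mbit/s) vs streams,
    with default buffers: ~25 Mbit/s/stream up to 9 streams, ~15 after."""
    if n_streams <= 9:
        return 25 * n_streams
    return 25 * 9 + 15 * (n_streams - 9)

print(agg_throughput_default(9))   # 225 Mbit/s at the breakpoint
print(agg_throughput_default(16))  # 330 Mbit/s
```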

5 UDP Throughput: SLAC - Manc
SLAC - Manc: 470 Mbit/s, 75% of the 622 Mbit access link
SuperJANET4 peers with ESnet at 622 Mbit/s in New York

6 Gigabit TCP Throughput: Man - SLAC
Throughput vs n-streams: much less than for the European links
Buffer required: RTT * BW (622 Mbit/s) = ~14 Mbytes
With buffers larger than the default, the rate of increase is ~5.4 Mbit/s/stream; no plateau
Consistent with iperf (Les Cottrell, SLAC)
Why do we need so many streams?
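Inverting the slide's ~14 Mbyte buffer requirement at 622 Mbit/s gives the round-trip time it implies for the Man - SLAC path, a quick sanity check on the bandwidth-delay arithmetic:

```python
# Numbers from the slide; the RTT is derived, not measured here.
buffer_bytes = 14e6   # ~14 Mbytes required buffer
bw_bits = 622e6       # 622 Mbit/s access link

rtt_s = buffer_bytes * 8 / bw_bits
print(f"implied round-trip time ~ {rtt_s * 1e3:.0f} ms")  # ~180 ms
```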

7 iGrid2002 Radio Astronomy data movement (1)
Arrival times: slope corresponds to >2 Gbit/s - not physical!
1.2 ms steps every 79 packets; buffer required ~120 kbytes
Average slope: 560 Mbit/s, which agrees with (bytes received) / (time taken)
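A sketch of why the local slope of the arrival-time plot can exceed the wire rate while the average stays physical: within a burst, packets are delivered back-to-back (e.g. coalesced from a buffer), so the in-burst slope looks unphysically fast, and the 1.2 ms pauses every 79 packets set the true average. The in-burst rate below is an assumed illustration value; packet size, burst length, and pause are from the slide:

```python
PKT_BITS = 1472 * 8      # bits per packet (slide packet size)
PKTS_PER_BURST = 79      # packets between steps, from the slide
PAUSE_S = 1.2e-3         # 1.2 ms step between bursts, from the slide
BURST_RATE = 2.5e9       # assumed apparent in-burst rate, > 2 Gbit/s

burst_bits = PKTS_PER_BURST * PKT_BITS
burst_time = burst_bits / BURST_RATE
avg_rate = burst_bits / (burst_time + PAUSE_S)
print(f"average slope ~ {avg_rate / 1e6:.0f} Mbit/s")  # same order as the slide's 560
```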

8 iGrid2002 Radio Astronomy data movement (2)
Arrival times: slope corresponds to 123 Mbit/s - agrees!
1-way delay is flat
Suggests that the interface/driver are being clever with the interrupts!

9 iGrid2002 UDP Throughput: Intel Pro/1000
Max throughput 700 Mbit/s (1472-byte packets)
Loss only when at wire rate; loss not due to user/kernel data moves
Receiving CPU load ~15%
Motherboard: SuperMicro P4DP6; Chipset: Intel E7500 (Plumas); CPU: dual Xeon Prestonia (2 CPU/die), 2.2 GHz; Slot 4: PCI, 64 bit, 66 MHz; RedHat 7.2, kernel 2.4.18

10 Gigabit iperf TCP, from iGrid2002

11 Work on End Systems. PCI: SysKonnect SK-9843
Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; PCI: 64 bit, 66 MHz; RedHat 7.1, kernel 2.4.14
SK301: 1400 bytes sent, wait 20 µs
SK303: 1400 bytes sent, wait 10 µs; frames are back-to-back
Can drive at line speed; cannot go any faster! Gig Eth frames back to back

12 PCI: Intel Pro/1000
Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; PCI: 64 bit, 66 MHz; RedHat 7.1, kernel 2.4.14
IT66M212: 1400 bytes sent, wait 11 µs; ~4.7 µs on the send PCI bus; PCI bus ~45% occupancy; ~3.25 µs on PCI for data receive
IT66M212: 1400 bytes sent, wait 11 µs; packets lost - action of a pause packet?

13 Packet Loss: Where?
Intel Pro/1000 on 370DLE, 1472-byte packets
Expected loss in the transmitter! (counters from /proc/net/snmp)
[Diagram: sender stack UDPmon - UDP - IP - Eth drv - HW through a Gig switch to receiver stack HW - Eth drv - IP - UDP - UDPmon, with counters N Gen, N Transmit, N Lost, InDiscards, N Received]
No loss at the switch, but a pause packet is seen at the sender
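The diagram's bookkeeping locates loss by differencing counters at each layer: UDPmon's generated/received counts bracket the whole path, and kernel counters such as InDiscards (from /proc/net/snmp) expose drops inside the receiving IP layer. A sketch with invented counter values, just to show the arithmetic:

```python
# All values are made up for illustration; in practice they come from
# UDPmon's own statistics and the kernel's /proc/net/snmp counters.
n_gen = 1_000_000      # packets UDPmon generated (sender application)
n_transmit = 999_980   # packets the sending NIC actually put on the wire
in_discards = 15       # receiver IP-layer discards (InDiscards)
n_received = 999_965   # packets UDPmon counted at the receiver

print("lost in sender:",        n_gen - n_transmit)
print("lost in receiver IP:",   in_discards)
print("lost in the network:",   n_transmit - in_discards - n_received)
```

With these numbers the network term comes out zero, matching the slide's conclusion that the switch itself drops nothing.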

14 High Speed TCP
Gareth & Yee implemented mods to TCP - Sally Floyd's 2002 draft RFC
Congestion avoidance
Interest in exchanging stacks: Les Cottrell (SLAC); Bill Allcock (Argonne)
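The core of Floyd's draft (which later became RFC 3649) is a modified congestion-avoidance response function: above a window of 38 segments, the per-RTT increase a(w) grows and the loss-decrease factor b(w) shrinks, interpolated on a log scale between the standard-TCP point and a high-window target. A sketch of those two functions, using the draft's default parameters:

```python
import math

# HighSpeed TCP defaults from Floyd's draft / RFC 3649:
W_LOW, W_HIGH = 38.0, 83000.0   # window range for the modified response
P_LOW, P_HIGH = 1e-3, 1e-7      # loss rates associated with W_LOW, W_HIGH
B_HIGH = 0.1                    # decrease factor at W_HIGH

def hstcp_b(w):
    """Multiplicative decrease on loss: 0.5 at w<=38, falling to 0.1."""
    if w <= W_LOW:
        return 0.5
    frac = (math.log(w) - math.log(W_LOW)) / (math.log(W_HIGH) - math.log(W_LOW))
    return (B_HIGH - 0.5) * frac + 0.5

def hstcp_a(w):
    """Additive increase per RTT: 1 segment at w<=38, growing with w."""
    if w <= W_LOW:
        return 1.0
    s = (math.log(P_HIGH) - math.log(P_LOW)) / (math.log(W_HIGH) - math.log(W_LOW))
    p = math.exp(math.log(P_LOW) + s * (math.log(w) - math.log(W_LOW)))
    return w * w * p * 2.0 * hstcp_b(w) / (2.0 - hstcp_b(w))

print(hstcp_a(38), hstcp_b(38))          # standard-TCP regime: 1.0, 0.5
print(hstcp_a(83000), hstcp_b(83000))    # a grows to ~72 segments, b falls to 0.1
```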

15 UDP Throughput: Intel Pro/1000, back-to-back, P4DP6
Max throughput 950 Mbit/s; some throughput drop for packets >1000 bytes
Loss is NIC dependent; loss not due to user/kernel data moves
Traced to discards in the receiving IP layer???
Motherboard: SuperMicro P4DP6; Chipset: Intel E7500 (Plumas); CPU: dual Xeon Prestonia (2 CPU/die), 2.2 GHz; Slot 4: PCI, 64 bit, 66 MHz; RedHat 7.2, kernel 2.4.14
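950 Mbit/s is close to the theoretical ceiling for 1472-byte UDP payloads on Gigabit Ethernet once per-packet overhead (UDP and IP headers, Ethernet framing, preamble, inter-frame gap) is accounted for. A quick check of that ceiling:

```python
PAYLOAD = 1472                        # UDP payload bytes (slide packet size)
OVERHEAD = 8 + 20 + 14 + 4 + 8 + 12   # UDP + IP + Eth hdr + FCS + preamble + IFG

wire_bytes = PAYLOAD + OVERHEAD       # 1538 bytes on the wire per packet
max_user_rate = 1e9 * PAYLOAD / wire_bytes
print(f"max user-data rate ~ {max_user_rate / 1e6:.0f} Mbit/s")  # ~957 Mbit/s
```

So the measured 950 Mbit/s means the P4DP6 system is essentially running at wire rate.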

16 Interrupt Coalescence: Latency
Intel Pro/1000 on 370DLE, 800 MHz CPU

17 Interrupt Coalescence: Throughput
Intel Pro/1000 on 370DLE

