Slide 1: L1/HLT trigger farm. Bologna setup 0.
By Gianluca Peco, INFN Bologna
Geneva, 4.9.2003
Slide 2: Hardware

- 2 x PC, each with 3 x 1000Base-T interfaces
  - Motherboard: SuperMicro X5DPL-iGM
    - Dual Pentium IV Xeon 2.4 GHz, 1 GB ECC RAM
    - Chipset: Intel E7501, 400/533 MHz FSB (front side bus)
    - Bus controller hub: Intel P64H2 (2 x PCI-X, 64 bit, 66/100/133 MHz)
    - On-board Ethernet controller: Intel 82545EM, 1 x 1000Base-T interface (supports jumbo frames)
  - Plugged-in PCI-X Ethernet card: Intel PRO/1000 MT Dual Port Server Adapter
    - Ethernet controller: Intel 82546EB, 2 x 1000Base-T interfaces (supports jumbo frames)
- 1000Base-T 8-port switch: HP ProCurve 6108
  - 16 Gbps backplane: non-blocking architecture
  - Latency: < 12.5 µs (LIFO, 64-byte packets)
  - Throughput: 11.9 million pps (64-byte packets)
  - Switching capacity: 16 Gbps
- Cat. 6e cables: rated up to 250 MHz (cf. the 125 MHz used by 1000Base-T)
Slide 3: Hardware II
Slide 4: SuperMicro X5DPL-iGM Motherboard
Slide 5: Benchmark software

We used two benchmark programs:
- Netperf 2.2p14, UDP_STREAM test
- A simple sender & receiver program using UDP and raw IP (a sender sketch follows below)

We found a problem with netperf on Linux: when netperf iterates, it doubles the buffer size at each iteration. (Possibly a Linux problem?)
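For reference, a minimal sketch of the kind of UDP sender such a benchmark uses (hypothetical code, not the original program; the receiver address and port are placeholders, while the 4096-byte datagram size and the 10^6-datagram run length are taken from the later slides):

```c
/* Hypothetical sketch of a UDP benchmark sender: blast fixed-size
 * datagrams at a receiver and count how many sendto() calls succeed.
 * Not the original benchmark; address and port are examples. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *dest_ip = "192.168.0.2"; /* placeholder receiver address */
    const int   port    = 5001;          /* placeholder port             */
    const int   dgram   = 4096;          /* datagram size from slide 9   */
    const long  count   = 1000000;       /* 10^6 datagrams per run       */

    char buf[4096];
    memset(buf, 0xAA, sizeof buf);

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(port);
    inet_pton(AF_INET, dest_ip, &dst.sin_addr);

    long sent = 0;
    for (long i = 0; i < count; i++)
        if (sendto(s, buf, dgram, 0,
                   (struct sockaddr *)&dst, sizeof dst) == dgram)
            sent++;

    printf("sent %ld/%ld datagrams\n", sent, count);
    close(s);
    return 0;
}
```

The receiver side would count datagrams actually delivered; the lost fraction on the following slides is (sent - received) / sent.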
Slide 6: First attempt: benchmark results

- Benchmark results show large fluctuations.
- The result distribution is multi-modal:
  - the first peak is the same for both benchmark programs;
  - the other peaks differ.

[Plot: distribution of lost datagrams / sent datagrams over 2600 netperf runs of 10^6 datagrams each; horizontal scale of order 2.5x10^-4.]
Slide 7: Factors that affect the lost datagram fraction

- Other processes running on the CPU (X11, daemons, etc.)
- Queue and buffer sizes
- The Ethernet switch (compared with a direct connection)
- Ethernet flow control
Slide 8: Tuning of the buffer sizes

              descriptors allocated by the driver   IP buffer size [bytes]
transmitter   4096                                  32768 (x2)   (send buffer)
receiver      4096                                  262144 (x2)  (receive buffer)
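A minimal sketch of how the IP buffer sizes above are requested per socket on Linux, via setsockopt() (hypothetical helper; `s` is assumed to be an open UDP socket). The "(x2)" in the table matches Linux's documented behaviour of doubling the requested value to leave room for bookkeeping overhead:

```c
/* Sketch: request the send/receive buffer sizes from the slide.
 * Linux doubles the requested value internally, which is likely the
 * "(x2)" in the table; getsockopt() reports the doubled value. */
#include <stdio.h>
#include <sys/socket.h>

/* 's' is assumed to be an open UDP socket descriptor. */
static void tune_buffers(int s)
{
    int sndbuf = 32768;    /* sender-side value from the slide   */
    int rcvbuf = 262144;   /* receiver-side value from the slide */
    socklen_t len = sizeof(int);
    int actual;

    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf) < 0)
        perror("SO_SNDBUF");
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf) < 0)
        perror("SO_RCVBUF");

    /* Verify what the kernel actually granted. */
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
        printf("effective receive buffer: %d bytes\n", actual);
}
```

Requests above net.core.wmem_max / net.core.rmem_max are silently capped, so the receive-side sysctl presumably had to be raised to at least 262144 for the value above to take effect.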
Slide 9: Best results

- Maximum transfer rate (UDP, 4096-byte datagrams): 957 Mb/s (a wire-speed check follows below).
- Mean datagram loss fraction (at 957 Mb/s): 3.8x10^-5.
- The distribution is not unimodal: for 870 datagram bunches out of 1396, no datagram is lost.
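As a plausibility check (our arithmetic, not from the slides): assuming a standard 1500-byte MTU, a 4096-byte UDP datagram (plus 8-byte UDP and 20-byte IP headers) is fragmented into three Ethernet frames of 1500, 1500 and 1164 bytes at the IP level, and each frame adds 38 bytes of Ethernet overhead (14 header + 4 FCS + 8 preamble + 12 inter-frame gap), so the achievable application throughput on a 1 Gb/s link is

\[
\frac{4096}{(1500+38)+(1500+38)+(1164+38)} \times 1000\ \text{Mb/s}
= \frac{4096}{4278} \times 1000\ \text{Mb/s} \approx 957.5\ \text{Mb/s},
\]

i.e. the measured 957 Mb/s is essentially wire speed for this datagram size.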
Slide 10: Where datagrams are lost

- The NIC error counters do not increase.
- The IP stack's dropped-datagram counter increases (see the sketch below).
- The switch's dropped-packet counter increases.
- Datagrams seem to be lost to queue overflow in both the kernel IP stack and the switch (not in the Ethernet cables: no CRC errors).
- Datagrams are lost on the loopback interface too.
- Using a direct cable instead of the Ethernet switch, the lost datagram fraction decreases.
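As an illustration of watching the IP-stack drop counter (a sketch, assuming a Linux 2.4-style /proc/net/snmp layout; the "Udp:" value line contains an InErrors field counting datagrams dropped by the stack, e.g. on receive-buffer overflow):

```c
/* Hypothetical sketch: print the kernel's UDP statistics from
 * /proc/net/snmp. The first "Udp:" line is the field header, the
 * second holds the counters, including InErrors (stack drops). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/net/snmp", "r");
    char line[512];
    if (!f) { perror("/proc/net/snmp"); return 1; }
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "Udp:", 4) == 0)
            fputs(line, stdout);
    fclose(f);
    return 0;
}
```

Sampling this counter before and after a run attributes losses to the kernel IP stack rather than to the NIC or the cable.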
Slide 11: Switch datagram dropping

- The switch increases the datagram loss (compared to a direct connection).

[Plot: lost datagram fraction, switch vs. crossed cable.]
Slide 12: Switch datagram dropping (II)

- Datagrams seem to be lost in groups (queue resets?).

[Plot: datagram losses over time, switch vs. crossed cable.]
Slide 13: Benchmark conditions

- Kernel 2.4.20-18.9smp
- Gigabit Ethernet driver: e1000, version 2.2.21-k1
- System disconnected from the public network
- Runlevel 3 (X11 stopped)
- Daemons stopped (crond, atd, sendmail, etc.)
- Flow control on (on NIC and switch)
- Number of descriptors allocated by the driver: 4096
- IP send buffer size: 32768 (x2) bytes
- IP receive buffer size: 262144 (x2) bytes
Slide 14: Best results

- Maximum transfer rate (UDP, 4096-byte datagrams): 957 Mb/s.
- Mean datagram loss fraction (at 957 Mb/s): 3.8x10^-5.
- The distribution is not unimodal: for 870 datagram bunches out of 1396, no datagram is lost.
Slide 15: Conclusions

We think we can improve the system's performance by testing:
- Asynchronous datagram receiving
- Jumbo frames
- Kernel mode
- Linux kernel 2.5 (true zero copy?)
- Interrupt coalescence

Under the present working conditions, the probability of losing one LHCb maxi event is (see the arithmetic below):

25 x 4 x 10^-5 = 10^-3

Is that acceptable?
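A reading of that estimate (our interpretation: it assumes a maxi event is carried by roughly 25 datagrams, each lost independently with probability about 4x10^-5, rounding up the measured 3.8x10^-5; for small per-datagram loss the union bound applies):

\[
P_{\text{event lost}} \;\approx\; 1 - \left(1 - 4\times 10^{-5}\right)^{25}
\;\approx\; 25 \times 4\times 10^{-5} \;=\; 10^{-3}.
\]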