High Rate Event Building with Gigabit Ethernet
A. Barczyk, J-P Dufey, B. Jost, N. Neufeld
CERN, Geneva
RT2003, 22.05.03

Outline: Introduction – Transport protocols – Methods to enhance link utilisation – Test bed measurements – Conclusions
Introduction

Typical applications of Gigabit networks in DAQ:
– Fragment sizes O(kB)
– Fragment rates O(10-100 kHz)
– Good use of protocols (high user-data occupancy)
At higher rates:
– Frame size limited by link bandwidth
– Protocol overheads sizeable
– User-data bandwidth occupancy becomes smaller
We studied the use of Gigabit Ethernet technology for 1 MHz readout:
– Ethernet protocol (Layer 2)
– IP protocol (Layer 3)
Protocols – Ethernet

Ethernet (802.3) frame format:
Preamble (8 B) | Dst. Addr. (6 B) | Src. Addr. (6 B) | Type (2 B) | Payload (46…1500 B) | FCS (4 B)
Total overhead: 26 Bytes (fixed)
At 1 MHz:
Link load [%] | Payload min. [B] | Payload max. [B]
100           | 46               | 99
70            | 46               | 61
Protocols – IP

IP (over Ethernet) frame format:
Ethernet Header (22 B) | IP Header (20 B) | Payload (26…1480 B) | FCS (4 B)
Total overhead: 46 Bytes (26 Eth. + 20 IP)
At 1 MHz:
Link load [%] | Payload min. [B] | Payload max. [B]
100           | 26               | 79
70            | 26               | 41
A consideration for the choice of the switching hardware.
Protocol overheads & occupancy

The maximum fragment payload is given by

payload_max = L × B / F − ov

where
– L = link load [0,1]
– B = link bandwidth (125 × 10^6 B/s for Gigabit Ethernet)
– F = frame rate [Hz]
– ov = protocol overhead [B]
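As a quick cross-check, a short Python sketch (assuming the 125 MB/s Gigabit Ethernet line rate) reproduces the payload limits from the Ethernet and IP slides:

```python
# Maximum fragment payload per frame at a given link load:
# payload_max = L * B / F - ov, with B = 125e6 B/s (1 Gb/s).
GIGABIT_BYTES_PER_S = 125_000_000  # 10^9 b/s / 8

def max_payload(link_load: float, frame_rate_hz: float, overhead_bytes: int) -> int:
    """Largest payload [B] that keeps the link at or below `link_load`."""
    return int(link_load * GIGABIT_BYTES_PER_S / frame_rate_hz) - overhead_bytes

print(max_payload(1.0, 1e6, 26))  # Ethernet, 100% load -> 99 B
print(max_payload(0.7, 1e6, 26))  # Ethernet,  70% load -> 61 B
print(max_payload(1.0, 1e6, 46))  # IP,       100% load -> 79 B
print(max_payload(0.7, 1e6, 46))  # IP,        70% load -> 41 B
```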
Fragment aggregation

Constraints:
– No higher-level protocols (only Layer 2/3)
– Avoid congestion in the switch (packet drop) → lower link occupancy (70%)
– Need to enhance user-data bandwidth occupancy
Two methods:
– Aggregation of consecutive event fragments (vertical aggregation)
– Aggregation of fragments from different sources (horizontal aggregation)
(Diagrams: front-ends (FE) feeding a switch, for each aggregation scheme.)
Vertical aggregation

In the first approach, each event fragment is packed into one Ethernet frame. Aggregating N events into one frame at the source reduces the overhead by (N−1) × ov bytes.
Implementation: front-end hardware (FPGA)
– Higher user-data occupancy ((N−1) × ov bytes less overhead)
– Reduced frame rate (by a factor 1/N)
– Increased latency (the 1st event has to wait for the Nth event before transmission)
– Larger transport delays (longer frames)
– N limited by the maximum Ethernet frame length (segmentation re-introduces overheads)
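The trade-offs can be tabulated with a small sketch (the 46 B fragment payload is an illustrative choice; the 26 B overhead is from the 802.3 slide):

```python
# Effect of vertical aggregation: packing N consecutive event fragments
# into one Ethernet frame. Overhead shrinks by (N-1)*ov bytes and the
# frame rate by 1/N, but the 1st fragment waits (N-1) event periods.
ETH_OVERHEAD = 26        # bytes per Ethernet frame (fixed)
EVENT_RATE = 1_000_000   # Hz (1 MHz readout)
PAYLOAD = 46             # bytes per fragment; illustrative value

def vertical_agg(n: int):
    frame_len = n * PAYLOAD + ETH_OVERHEAD   # one frame carries N fragments
    occupancy = n * PAYLOAD / frame_len      # user-data fraction of the frame
    frame_rate = EVENT_RATE // n             # frames/s after aggregation
    latency_us = (n - 1) / EVENT_RATE * 1e6  # wait for the Nth fragment
    return frame_len, occupancy, frame_rate, latency_us

for n in (1, 2, 4, 8):
    f, occ, rate, lat = vertical_agg(n)
    print(f"N={n}: frame {f} B, user data {occ:.0%}, {rate/1e3:.0f} kHz, +{lat:.0f} us")
```

User-data occupancy rises from ~64% at N=1 toward ~93% at N=8, while the added latency grows linearly with N.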
Horizontal aggregation

– Aggregate fragments from several sources (N:1)
– Increase output bandwidth by using several output ports (N:M multiplexing)
Implementation: dedicated Readout Unit between front-end and switch
– Higher user-data occupancy ((N−1) × ov bytes less overhead)
– Reduced frame rate (by a factor 1/M)
– No additional latency in event building
– Needs dedicated hardware (e.g. Network Processor based) with enough processing power to handle the full input rate
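The per-link consequences of N:M aggregation can be sketched as follows (function and parameter names are ours, not from the slides; the interframe gap is neglected):

```python
# Per-link frame rate and load after N:M horizontal aggregation:
# N fragments (one per source, per event) merge into a single frame,
# which is distributed round-robin over M output links.
GIGABIT_BPS = 1e9

def per_link(n: int, m: int, event_rate_hz: float, frag_payload: int, ov: int = 26):
    frame_rate = event_rate_hz / m   # frames/s on each output link
    frame_len = n * frag_payload + ov
    load = frame_rate * frame_len * 8 / GIGABIT_BPS
    return frame_rate, load

# e.g. 2:2 at 1 MHz with 56 B fragments -> 500 kHz per link, ~55% load
rate, load = per_link(2, 2, 1e6, 56)
print(f"{rate/1e3:.0f} kHz per link, {load:.0%} load")
```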
Case Studies

We studied horizontal aggregation in a test bed using the IBM NP4GS3 Network Processor reference kit, in two configurations:
– 2:2 multiplexing on a single NP
– 4:2 multiplexing with 2 NPs
2:2 Multiplexing – Setup

One NP was used to:
– Aggregate frames from 2 input ports (on ingress): strip off headers, concatenate payloads
– Distribute the combined frames over 2 output ports (round-robin)
A second NP generated frames with variable payload at a variable rate.
(Diagram: generation NP feeding the multiplexing NP; input at 1 MHz, output at 0.5 MHz per port.)
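The ingress steps above can be sketched in a few lines. This is a minimal model of strip-and-concatenate plus round-robin, not the actual NP4GS3 picocode; offsets follow 802.3 as seen by software (no preamble, FCS assumed handled by the MAC):

```python
# Sketch of the 2:2 ingress path: strip the Ethernet header from two
# input fragments, concatenate the payloads behind a single new header,
# and alternate the output port for successive combined frames.
ETH_HDR = 14  # dst (6) + src (6) + type (2) bytes

def aggregate(frame_a: bytes, frame_b: bytes, out_hdr: bytes) -> bytes:
    """Combine two input fragments into one outgoing frame."""
    return out_hdr + frame_a[ETH_HDR:] + frame_b[ETH_HDR:]

def round_robin(n_ports: int = 2):
    """Yield the output-port index for each successive combined frame."""
    port = 0
    while True:
        yield port
        port = (port + 1) % n_ports
```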
2:2 Multiplexing – Results

Link load at 1 MHz input rate:
– Single output port: link load above 70% for input fragment payloads > 30 B
– Two output ports: load per link stays below 70% for payloads up to ~75 B (theory)
Measured up to 56 B payload:
– 500 kHz output rate per link
– 56% link utilisation
– Perfect agreement with calculations
– To be extended to higher payloads
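A back-of-envelope check of these numbers, assuming the 26 B Ethernet overhead and neglecting the interframe gap:

```python
# Check of the 2:2 results: the ~75 B "theory" limit at 70% load,
# and the per-link utilisation at the measured 56 B payload point.
LINE_RATE = 125e6   # B/s on a Gigabit link
OUT_RATE = 0.5e6    # frames/s per output link (round-robin over 2 ports)
OV = 26             # bytes of Ethernet overhead per frame

# 70% load ceiling -> largest input-fragment payload (2 fragments/frame):
max_frame = 0.7 * LINE_RATE / OUT_RATE    # 175 B on the wire
max_input_payload = (max_frame - OV) / 2  # ~74.5 B, i.e. the ~75 B limit
print(max_input_payload)

# Measured point: 56 B input fragments -> per-link utilisation
load = OUT_RATE * (2 * 56 + OV) / LINE_RATE  # ~0.55, close to the 56% measured
print(f"{load:.1%}")
```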
4:2 Multiplexing

Use 2:2 blocks to perform 4:2 multiplexing with 2 NPs. Each processor:
– Aggregates 2 input fragments on ingress
– Sends every 2nd frame to the other NP (over the DASL interconnect)
– Aggregates further on egress (at half the rate, twice the payload)
(Diagram: two NPs, each with a 2:2 ingress and a 2:1 egress stage, connected by DASL; Ethernet input at 1 MHz, output at 0.5 MHz per port.)
4:2 Test bed

– Ran the full code on one NP (ingress & egress processing)
– Used the second processor to generate traffic: 2 × 1 MHz over Ethernet, 1 × 0.5 MHz over DASL (double payload)
– Sustained aggregation at 1 MHz input rate with up to 46 Bytes input payload (output link occupancy: 84% per link)
– Only a fraction of the processor resources was used (8 out of 32 threads on average)
(Diagram: generation NP feeding the NP under test over Ethernet and DASL; output at 0.5 MHz per port.)
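The quoted 84% occupancy follows from the same accounting as before (26 B overhead, interframe gap neglected):

```python
# 4:2 aggregation at the 46 B input-payload point: four fragments merge
# into one frame, sent at 0.5 MHz per output link on a Gigabit link.
frame_len = 4 * 46 + 26                 # 210 B on the wire
occupancy = 0.5e6 * frame_len / 125e6   # fraction of the 125 MB/s line rate
print(f"{occupancy:.0%}")               # -> 84%, matching the quoted figure
```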
Conclusions

– At 1 MHz, protocol overheads eat up a significant fraction of the link bandwidth
– Two methods were proposed to increase the bandwidth fraction available for user data and to reduce packet rates: aggregation of consecutive event fragments, and aggregation of fragments from different sources
– N:M multiplexing increases the total available bandwidth
– Test bed results confirm the calculations for aggregation and multiplexing at 1 MHz