High Rate Event Building with Gigabit Ethernet

A. Barczyk, J-P Dufey, B. Jost, N. Neufeld (CERN, Geneva)
RT2003, Montreal, 22.05.03

Outline:
– Introduction
– Transport protocols
– Methods to enhance link utilisation
– Test bed measurements
– Conclusions

Introduction

Typical applications of Gigabit networks in DAQ:
– Fragment sizes O(kB)
– Fragment rates O(kHz)
– Good use of protocols (high user data occupancy)

At higher rates:
– Frame size limited by link bandwidth
– Protocol overheads sizeable
→ User data bandwidth occupancy becomes smaller

We studied the use of Gigabit Ethernet technology for 1 MHz readout:
– Ethernet protocol (Layer 2)
– IP protocol (Layer 3)

Protocols – Ethernet

Ethernet (802.3) frame format:
Preamble (8 B, incl. SFD) | Dst. Addr. (6 B) | Src. Addr. (6 B) | Type (2 B) | Payload (46…1500 B) | FCS (4 B)

Total overhead: 26 Bytes (fixed)

At 1 MHz:
Link load [%] | Payload min. [B] | Payload max. [B]
100           | 46               | 99
 70           | 46               | 61

Protocols – IP

IP (over Ethernet) frame format:
Ethernet Header | IP Header (20 B) | Payload (26…1480 B) | FCS

Total overhead: 46 Bytes (26 Eth + 20 IP)

At 1 MHz:
Link load [%] | Payload min. [B] | Payload max. [B]
100           | 26               | 79
 70           | 26               | 41

The protocol layer is a consideration for the choice of the switching hardware (Layer 2 vs. Layer 3 switching).

Protocol overheads & occupancy

Max. fragment payload given by

    payload_max = (L × C) / F − ov

where
– L = link load [0,1]
– C = link bandwidth (1 Gbit/s = 125 MB/s for Gigabit Ethernet)
– F = frame rate [Hz]
– ov = protocol overhead [B]
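As a numeric cross-check, a minimal sketch (Python; the function name is ours) that reproduces the payload limits in the Ethernet and IP tables above:

    # Maximum user payload per frame at a given link load and frame rate.
    C = 125_000_000  # Gigabit Ethernet line rate: 1 Gbit/s = 125 MB/s

    def max_payload(load, frame_rate_hz, overhead_bytes):
        """Bytes of user data that fit in each frame at this load."""
        return load * C / frame_rate_hz - overhead_bytes

    # Ethernet (ov = 26 B) and IP over Ethernet (ov = 46 B) at 1 MHz:
    for ov in (26, 46):
        for load in (1.0, 0.7):
            print(f"ov={ov}B load={load:.0%}: {max_payload(load, 1e6, ov):.1f} B")
    # ov=26B: 99.0 B at 100%, 61.5 B at 70%
    # ov=46B: 79.0 B at 100%, 41.5 B at 70%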

Fragment aggregation

No higher level protocols (only Layer 2/3):
→ Avoid congestion in switch (packet drop)
→ Lower link occupancy (70%)

Need to enhance user data bandwidth occupancy. 2 methods:
– Aggregation of consecutive event fragments (vertical aggregation)
– Aggregation of fragments from different sources (horizontal aggregation)

[Diagrams: front-end (FE) links into a switch, illustrating vertical and horizontal aggregation]

Vertical aggregation

In the first approach, each event fragment is packed into one Ethernet frame. Aggregating N events into one frame at the source reduces the overhead by (N−1) × ov bytes.

Implementation: front-end hardware (FPGA)

– Higher user data occupancy ((N−1) × ov Bytes less overhead)
– Reduced frame rate (by factor 1/N)
– Increase in latency (1st event has to wait for the Nth event before transmission)
– Larger transport delays (longer frames)
– N limited by max. Ethernet frame length (segmentation re-introduces overheads)
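A small sketch of the occupancy gain (Python; the 50 B fragment size is an illustrative assumption, not a number from the talk):

    # User-data fraction of the wire traffic when N fragments share one frame.
    ETH_OVERHEAD = 26    # bytes per Ethernet frame
    MAX_PAYLOAD = 1500   # bytes: the frame length limit that bounds N

    def occupancy(fragment_bytes, n):
        payload = n * fragment_bytes
        assert payload <= MAX_PAYLOAD, "N limited by max. Ethernet frame length"
        return payload / (payload + ETH_OVERHEAD)

    for n in (1, 2, 4):
        print(n, round(occupancy(50, n), 2))   # 0.66, 0.79, 0.88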

Horizontal aggregation

Aggregate fragments from several sources (N:1). Increase output bandwidth by use of several output ports (N:M multiplexing).

Implementation: dedicated Readout Unit between Front-End and switch (see the sketch below)

– Higher user data occupancy ((N−1) × ov Bytes less overhead)
– Reduced frame rate (by factor 1/M)
– No additional latency in event building
– Needs dedicated hardware (e.g. Network Processor based) with enough processing power to handle the full input rate
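A minimal software model of the readout-unit logic (Python; class and callback names are ours, and the real implementation ran on a network processor, not in host software):

    from collections import deque

    class ReadoutUnit:
        """Aggregate fragments from N sources, round-robin over M output ports."""

        def __init__(self, n_inputs, m_outputs, transmit):
            self.n = n_inputs
            self.m = m_outputs
            self.transmit = transmit     # callable(port, frame)
            self.pending = deque()       # payloads waiting to be combined
            self.next_port = 0

        def receive(self, payload):
            # Ingress: headers already stripped, only user data is kept.
            self.pending.append(payload)
            if len(self.pending) == self.n:
                frame = b"".join(self.pending)   # concatenate N payloads
                self.pending.clear()
                self.transmit(self.next_port, frame)
                self.next_port = (self.next_port + 1) % self.m  # round-robin

    # 2:2 case: 1 MHz per input -> 0.5 MHz of double-size frames per output.
    ru = ReadoutUnit(2, 2, lambda port, f: print(port, len(f)))
    for _ in range(2):
        ru.receive(b"x" * 56)
        ru.receive(b"y" * 56)    # prints: 0 112, then 1 112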

Case studies

We have studied horizontal aggregation in a test bed using the IBM NP4GS3 Network Processor reference kit.

2 cases:
– 2:2 multiplexing on a single NP
– 4:2 multiplexing with 2 NPs

2:2 Multiplexing – Setup

We used one NP to:
– Aggregate frames from 2 input ports (on ingress): strip off headers, concatenate payloads
– Distribute combined frames on 2 output ports (round-robin)

A second NP generated frames with:
– Variable payload, and at
– Variable rate

[Diagram: generation NP feeding the multiplexing NP at 1 MHz per input link, 0.5 MHz per output link]

2:2 Multiplexing – Results

Link load at 1 MHz input rate:
– Single output port: link load above 70% for input fragment payloads > 30 B
– Two output ports: load per link stays below 70% for payloads up to ~75 B (theory)

Measured up to 56 B payload: 500 kHz output rate per link, 56% link utilisation. Perfect agreement with calculations; to be extended to higher payloads.
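The measured point can be cross-checked against the frame budget (a back-of-the-envelope sketch; 26 B is the fixed Ethernet overhead from the protocol slides):

    # Two 56 B fragments per frame, 0.5 MHz per output link after round-robin:
    frame_bytes = 2 * 56 + 26            # 138 B on the wire
    print(frame_bytes * 0.5e6 / 125e6)   # 0.552 -> ~56% per link, as measured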

4:2 Multiplexing

Use 2:2 blocks to perform 4:2 multiplexing with 2 NPs. Each processor:
– Aggregates 2 input fragments on ingress
– Sends every 2nd frame to "the other" NP (over the DASL switch interface)
– Aggregates further on egress (at half rate, twice the payload)

[Diagram: two NPs, each with an ingress 2:2 stage and an egress 2:1 stage, cross-connected via DASL; 1 MHz Ethernet inputs, 0.5 MHz outputs]
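A highly simplified model of the two-stage flow (Python sketch; the DASL exchange between the NPs is reduced here to a plain function call):

    # Stage 1 (ingress, on each NP): combine two 1 MHz fragment streams.
    def ingress(frag_a, frag_b):
        return frag_a + frag_b           # 2:2 aggregation

    # Stage 2 (egress): combine two ingress frames at half rate.
    def egress(frame_a, frame_b):
        return frame_a + frame_b         # 2:1 aggregation, double payload

    frags = [b"a" * 46, b"b" * 46, b"c" * 46, b"d" * 46]   # 4 sources
    f1 = ingress(frags[0], frags[1])     # on NP 1
    f2 = ingress(frags[2], frags[3])     # on NP 2, shipped over DASL
    print(len(egress(f1, f2)))           # 184 B of user data per output frame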

4:2 Test bed

Ran the full code on one NP (ingress & egress processing). Used the second processor to generate traffic:
– 2 x 1 MHz over Ethernet
– 1 x 0.5 MHz over DASL (double payload)

Sustained aggregation at 1 MHz input rate with up to 46 Bytes input payload (output link occupancy: 84% per link). Only a fraction of the processor resources was used (8 out of 32 threads on average).

[Diagram: generation NP sending 2 x 1 MHz over Ethernet and 1 x 0.5 MHz over DASL into the device under test (ingress 2:2, egress 2:1)]
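The quoted 84% occupancy again follows from the frame budget (same sketch style and assumptions as before):

    # Four 46 B fragments per output frame, 0.5 MHz per output link:
    frame_bytes = 4 * 46 + 26            # 210 B on the wire
    print(frame_bytes * 0.5e6 / 125e6)   # 0.84 -> 84% per link, as quoted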

Conclusions

At 1 MHz, protocol overheads eat up a significant fraction of the link bandwidth. 2 methods were proposed for increasing the bandwidth fraction available for user data and for reducing packet rates:
– Aggregation of consecutive event fragments (vertical)
– Aggregation of fragments from different sources (horizontal)

N:M multiplexing increases the total available bandwidth. Test bed results confirm the calculations for aggregation and multiplexing at 1 MHz.