
Testing and Modeling Ethernet Switches and Networks for Use in ATLAS High-level Triggers
R.W. Dobinson, S. Haas, K. Korcyl, M.J. LeVine, J. Lokier, B. Martin, C. Meirosu, F. Saka, K. Vella
Copyright © 2000 OPNET Technologies, Inc.

DAQ 2000, October 2000

Overview
- Ethernet as a candidate technology for ATLAS LVL2
- ATLAS: the networking requirement
- Parameterized model of an Ethernet switch
- Large network modeling results using the parameterized switch model
- GBE measurements
- Fast Ethernet tester / ROB emulator

ATLAS LVL2 system network
- 1564 readout buffers (each event a 2 Mbyte image distributed across them)
- ~ MIPS processors analyze data from 5% of the buffers
- Trigger rate: 100 kHz
[Diagram: readout buffers (ROBs) connect via Fast Ethernet to concentrating switches, linked by Gigabit Ethernet to a central Gigabit switch serving the processing nodes]

ATLAS LVL2 network traffic
- Network nodes: 1564 ROBs (readout buffers) and processor CPUs
- Message scenario:
  - LVL1 accepts events at 75 kHz [scalable to 100 kHz]
  - the supervisor CPU selects a processor CPU for each event accepted by LVL1
  - the processor CPU sends a ROB_REQUEST to the selected (~5%) of the ROBs
  - the ROBs reply with a ROB_REPLY containing the designated RoI data
- Per ROB: ~11000 request-responses/s; worst-case data rate 4 MB/s
- Per processor node: ~14000 request-responses/s; average data rate 6.6 MB/s
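The rates above can be cross-checked with a back-of-envelope script. The request rates and the 4 MB/s and 6.6 MB/s figures are from the slide; the implied per-reply sizes are derived here purely for illustration and do not appear in the talk.

```python
# Back-of-envelope check of the LVL2 request-reply bandwidth budget.
# Request rates and MB/s figures are from the slide; implied reply
# sizes are derived here for illustration only.

def node_data_rate(req_per_sec: float, reply_bytes: float) -> float:
    """Average payload rate (bytes/s) for a node exchanging request-responses."""
    return req_per_sec * reply_bytes

# A processor node sees ~14000 request-responses/s at 6.6 MB/s average,
# implying replies of roughly 470 bytes each.
implied_reply_proc = 6.6e6 / 14000

# A ROB sees ~11000 request-responses/s at 4 MB/s worst case,
# implying roughly 360-byte replies per ROB in the worst case.
implied_reply_rob = 4e6 / 11000

# Aggregate worst-case offered load from all 1564 ROBs.
aggregate_bytes_per_sec = 1564 * node_data_rate(11000, implied_reply_rob)
```

The aggregate figure (several GB/s) is what motivates the concentrating-switch plus central-Gigabit-switch topology.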

Modeling
- Parameterized model of a switch; motivation:
  - limited number of physical parameters
  - fast execution
  - direct determination by measurement
- Approach: commodity Ethernet switches; a parameterized model of a switch; comparison of the parameterized model with results from measurements
- Parameterized model in a large Ethernet network: the second level trigger (LVL2) of the ATLAS HEP experiment at the LHC at CERN

Commodity Ethernet switches
- Most Ethernet switches on the market today have a hierarchical architecture: ports, modules, backplane
- Store-and-forward mode of operation:
  - a frame is fully stored in the module holding the input port
  - for inter-module transfer, the frame is subsequently stored in the module holding the output port
[Diagram: chassis with modules, ports and backplane]
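The store-and-forward behaviour described above can be captured in a small latency model: each stage must receive the complete frame before it may transmit, so the serialization delay is paid at every hop. The per-hop overhead term is an assumption standing in for switching-logic delays.

```python
def serialization_delay(frame_bytes: int, link_bps: float) -> float:
    """Time (s) to clock a whole frame onto a link; a store-and-forward
    stage waits for the full frame before forwarding it."""
    return frame_bytes * 8 / link_bps

def store_and_forward_latency(frame_bytes: int, link_bps: float,
                              hops: int, per_hop_overhead: float = 0.0) -> float:
    """Total latency across `hops` store-and-forward stages, with an
    assumed fixed per-hop processing overhead (seconds)."""
    return hops * (serialization_delay(frame_bytes, link_bps) + per_hop_overhead)
```

For a 1500-byte frame on Fast Ethernet, each extra store-and-forward stage adds 120 µs of serialization alone, which is why the hierarchical module/backplane structure matters for latency.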

Parameterized model of a switch: inter-module communication
[Diagram: input module (MAC, buffer manager, input buffer) connected through the backplane to output module (buffer manager, output buffer)]
Parameters:
- P1: input buffer length [frames]
- P2: output buffer length [frames]
- P3: max to-backplane throughput [MB/s]
- P4: max from-backplane throughput [MB/s]
- P6: max backplane throughput [MB/s]
- P7: inter-module transfer bandwidth [MB/s]
- P9: inter-module fixed overhead [µs] (not shown in the diagram)
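The parameter set can be sketched as a data structure, with a hypothetical cost model for a single inter-module transfer built from P7 and P9. Combining them as "fixed overhead plus size-proportional transfer" is an assumption consistent with the slide, not the published model.

```python
from dataclasses import dataclass

@dataclass
class SwitchParams:
    """Parameter names follow the slide; units as given there."""
    input_buf_frames: int       # P1, frames
    output_buf_frames: int      # P2, frames
    to_backplane_mbps: float    # P3, MB/s
    from_backplane_mbps: float  # P4, MB/s
    backplane_mbps: float       # P6, MB/s
    xfer_mbps: float            # P7, MB/s
    xfer_overhead_us: float     # P9, microseconds

def inter_module_time_us(p: SwitchParams, frame_bytes: int) -> float:
    """Assumed cost of moving one frame between modules: fixed overhead
    plus transfer time. bytes / (MB/s) conveniently yields microseconds."""
    return p.xfer_overhead_us + frame_bytes / p.xfer_mbps
```

A few such parameters per switch, measurable from the outside, is exactly what makes the model fast to evaluate.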

Switch parameters from latency and rate measurements
- Measure latency as a function of packet size (ping-pong)
- Measure packet rate as a function of packet size
- Three configurations: no switch (back-to-back), intra-module, inter-module
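Parameters such as a transfer bandwidth and a fixed overhead fall out of these measurements by fitting a straight line of latency against packet size: the intercept is the fixed overhead, the slope the inverse bandwidth. A minimal least-squares sketch:

```python
def fit_latency_line(sizes, latencies_us):
    """Least-squares fit of latency = overhead + size / bandwidth.
    Returns (overhead_us, bandwidth_bytes_per_us); note that
    bytes/microsecond is numerically equal to MB/s."""
    n = len(sizes)
    sx = sum(sizes)
    sy = sum(latencies_us)
    sxx = sum(s * s for s in sizes)
    sxy = sum(s * l for s, l in zip(sizes, latencies_us))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # us per byte
    intercept = (sy - slope * sx) / n                   # fixed overhead, us
    return intercept, 1.0 / slope
```

Subtracting the back-to-back (no-switch) fit from the intra- and inter-module fits isolates the switch's own contribution.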

Modeling calculations
- Switch parameters determined from the simple ping-pong and streaming measurements described
- Switch model implemented as C++ code
- Event-driven simulations
- Object-oriented switch model embedded in OPNET (commercial package)
- Calculations also carried out using the same model within Ptolemy
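A toy single-server queue illustrates the kind of calculation such event-driven models perform at every port: frames queue for a shared resource and depart after a service time. This is a sketch of the idea only, not the OPNET or Ptolemy API.

```python
def simulate_fifo(arrivals_us, service_us):
    """Serve frames FIFO at one output port: each frame starts when both
    it has arrived and the port is free. Returns departure times (us)."""
    departures = []
    busy_until = 0.0
    for arrival in sorted(arrivals_us):
        start = max(arrival, busy_until)  # wait for the port if busy
        busy_until = start + service_us
        departures.append(busy_until)
    return departures
```

Queue depth and dwell time under a given offered load fall straight out of such a model, which is what the lost-frame and latency predictions below rest on.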

Parameterized model vs measurement
[Plot: latency vs throughput, model compared with measurement]

Parameterized model vs measurement
[Plot: lost-frame rate vs input data rate, model compared with measurement]

ATLAS LVL2 network performance studies
- Central switch: generic, fully non-blocking

Modeling results
- The parameterized model uses a limited number of parameters, which can be measured on COTS switches
- The parameterized model has been successfully verified on a number of switches
- The parameterized switch model was used to model the large Ethernet network for the ATLAS second level trigger system
- The network model can be used to assess the applicability of different switches currently on the market and their impact on network performance

CPU utilization measurements
- Ping-pong configuration at different packet rates
- Measure CPU utilization for various packet sizes and rates
[Plot: CPU utilization vs rate, with the ATLAS LVL2 requirement marked]

Limitations of PC-based measurements
- A PC with standard software is unable to drive switches at line speed:
  - 100% of line speed for Fast Ethernet only for packet sizes > 500 bytes [half duplex]
  - 35% for Gigabit Ethernet at any packet size, limited by host transfer
- Parameterization of a single switch needs to be tested over the entire operational envelope
- Number of nodes limited to 40 in the present tests:
  - scaling up to ~2000 nodes is not credible
  - the model needs to be verified on a larger test bed
- Conclusion:
  - dedicated traffic generators are needed for FE and GBE in order to fully characterize switches to be used in the ATLAS HLT
  - a larger test bed is needed for greater confidence in the model applied to the full ATLAS HLT system
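The line-speed target can be made concrete. Ethernet's wire overhead (preamble and SFD, header, FCS, inter-frame gap) fixes the frame rate a generator must sustain; a traffic source that cannot reach these rates for small packets cannot fully exercise a switch.

```python
def max_frame_rate(payload_bytes: int, line_bps: float = 100e6) -> float:
    """Theoretical maximum Ethernet frame rate at line speed.
    On the wire each frame costs: preamble + SFD (8 B), the frame itself
    (14 B header + payload + 4 B FCS, minimum 64 B), and the 12 B
    inter-frame gap."""
    frame = max(payload_bytes + 18, 64)
    wire_bytes = frame + 8 + 12
    return line_bps / (wire_bytes * 8)
```

Minimum-size frames on Fast Ethernet already require ~148800 frames/s per port; on Gigabit Ethernet the figure is ten times higher, which is why PCs of the day fell short and dedicated generators were needed.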

Gigabit Ethernet NIC development
- Commercial NIC: Alteon ACEnic
- Inadequate performance using the manufacturer's firmware
- Modifications: custom software developed
  - all traffic local to the NIC (host memory not involved)
  - full-duplex line speed achieved for any packet size
[Diagram: Tigon chip with two CPUs, MAC, two DMA engines, 0.5/1 Mbyte external memory, PHY and PCI interface]

8-port Gigabit switch tester in a single chassis

Latency vs throughput for an 8-port GBE switch
[Plot]

Fast Ethernet tester
- Custom board, built at CERN
- Programmable network traffic to characterize network switches: IP, Ethernet, including QoS
- Why build it instead of buying?
  - economics
  - reprogrammable for ATLAS purposes

FE tester: capabilities & status
- 32 full-duplex Fast Ethernet ports
- Parallel-port connection to the host
- MAC function implemented in an FPGA
- All FPGAs programmed in Handel-C:
  - a high-level language with C syntax
  - enhancements for FPGAs (e.g., arbitrary-width bit fields)
  - simulation facility built into the implementation
- Outgoing packets generated and time-stamped
- Incoming packets time-stamped and CRC-checked
- Queue dwell times histogrammed
- All of this at full line speed
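The CRC check the tester performs on every incoming packet can be sketched in software. Ethernet's FCS is a CRC-32; a standard property of this CRC is that recomputing it over the frame with its (little-endian) FCS appended yields a fixed residue when the frame is intact.

```python
import struct
import zlib

def append_fcs(frame: bytes) -> bytes:
    """Append the Ethernet FCS: CRC-32 of the frame, little-endian."""
    return frame + struct.pack("<I", zlib.crc32(frame) & 0xFFFFFFFF)

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """An intact frame+FCS leaves the fixed CRC-32 residue 0x2144DF1C
    (the post-inversion form of the CRC-32 'magic' constant)."""
    return zlib.crc32(frame_with_fcs) & 0xFFFFFFFF == 0x2144DF1C
```

In the tester this check runs in the FPGA at wire speed on every port; the software form is only to show the arithmetic.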

Fast Ethernet tester board
[Photo]

FE tester as ROB emulator
- Accepts MESH ROB_DATA_REQUESTs
- Generates MESH ROB_DATA_REPLYs
- Programmable latency in the ROB
- Response size depends on RoI size and detector type
- Response contents are not meaningful
- Measures latencies and queue depths
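The emulator's request-reply behaviour amounts to a programmable delay plus a configurable reply size. The class and parameter values below are illustrative assumptions; the MESH message formats themselves are not reproduced.

```python
class RobEmulator:
    """Sketch of the ROB emulator's behaviour: every request is answered
    after a programmable latency with a reply of configurable size
    (in the real emulator the size depends on RoI and detector type)."""

    def __init__(self, latency_us: float, reply_bytes: int):
        self.latency_us = latency_us
        self.reply_bytes = reply_bytes

    def handle_request(self, t_us: float):
        """For a ROB_DATA_REQUEST arriving at t_us, return the time the
        ROB_DATA_REPLY is emitted and its size in bytes."""
        return t_us + self.latency_us, self.reply_bytes
```

Because the response content is not meaningful, the emulator can be this simple while still generating fully realistic traffic patterns for the switches under test.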

Large-scale test bed using the ROB emulator
- Use 8 of these boards [11K CHF each] to provide 256 emulated ROB ports
- Add 64 PCs as supervisors + farm nodes, plus a switch fabric
- Result: a ~15% scale test bed of the ATLAS LVL2 system
- The previous test bed was an order of magnitude smaller

Future activity
- Measurements: improved switch characterization
- Modeling:
  - parameterized model of a GE switch (central switch)
  - detailed modeling of ATLAS LVL2
  - modeling of a unified network to handle LVL2 traffic + Event Filter (LVL3 trigger)

Summary
- TCP/IP is very expensive in terms of CPU utilization at the required rates; MESH will likely be used in the ATLAS LVL2 trigger
- GBE needs custom firmware in the NIC to reach the required performance
- Modeling is in good agreement with single-switch measurements
- A large-scale test bed minimizes the uncertainty associated with behavior extrapolated from a few nodes
- Hardware ROB emulation provides the opportunity to observe network behavior under realistic conditions