Event Building: L1&HLT Implementation Review. Niko Neufeld, CERN-EP, Tuesday, April 29th.

Niko NEUFELD CERN, EP 2 The Event-builder

Niko NEUFELD CERN, EP 3 Event-building
All Readout Units (RUs) send frames to the same destination, based on the event number.
The destination is an NP module, which
– waits for all frames belonging to one event
– concatenates them in the right order
– strips off all unnecessary headers
– sends the completely assembled events to the SFCs
– handles a small amount of reverse-direction traffic
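The bookkeeping this implies is small; the following C++ sketch shows one way to express it. All names (Fragment, EventBuilder, sendToSFC) and the choice of std::map are illustrative assumptions for this transcript, not the actual NP firmware.

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

struct Fragment {
    uint32_t eventNumber;                 // all RUs pick the destination from this
    uint16_t sourceId;                    // which Readout Unit sent the fragment
    std::vector<uint8_t> payload;
};

class EventBuilder {
public:
    explicit EventBuilder(std::size_t numSources) : numSources_(numSources) {}

    // Collect one fragment; once every source has reported for an event,
    // concatenate the payloads in source order and ship the full event.
    void onFragment(const Fragment& f) {
        auto& slot = pending_[f.eventNumber];
        slot[f.sourceId] = f.payload;     // std::map keeps fragments in source order
        if (slot.size() == numSources_) {
            std::vector<uint8_t> event;
            for (const auto& kv : slot)
                event.insert(event.end(), kv.second.begin(), kv.second.end());
            sendToSFC(f.eventNumber, event);
            pending_.erase(f.eventNumber); // free the assembly slot
        }
    }

private:
    void sendToSFC(uint32_t evt, const std::vector<uint8_t>& event) {
        std::printf("event %u assembled, %zu bytes\n", evt, event.size());
    }
    std::size_t numSources_;
    std::map<uint32_t, std::map<uint16_t, std::vector<uint8_t>>> pending_;
};

int main() {
    EventBuilder eb(2);                   // two Readout Units, for illustration
    eb.onFragment({42, 1, {0xBB}});       // fragments may arrive out of order
    eb.onFragment({42, 0, {0xAA, 0xAA}});
}
```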

Niko NEUFELD CERN, EP 4 Architecture
(Diagram, labels only: NP Event Builder, Readout Network Switch, NP, SFC, Storage Controller, Sorter, TFC System)

Niko NEUFELD CERN, EP 5 Optimising the link-load
The input data rate into an event-builder module is chosen to be ~110 MB/s. The output is ~75 MB/s (due to the reduced overheads). Since it is advantageous to minimise the number of sub-farms (and hence SFCs, cf. next presentation), a suitable multiplexing stage using switches brings the link load into an SFC back up to ~110 MB/s; in the above case, 3 event builders feed into 2 sub-farms.
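As a cross-check with the numbers on the next slide: 3 event-builder outputs of ~75.4 MB/s each, multiplexed onto 2 SFC links, give 3 × 75.4 / 2 ≈ 113 MB/s per link, i.e. back near the ~110 MB/s load of the input links.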

Niko NEUFELD CERN, EP 6 Some numbers for the Velo/TT (= baseline) scenario
Output links from Readout Network: 72
Mean output from Readout Network per link: 108.7 MB/s
Fragment rate (L1) per link: 585.6 kHz
Fragment rate (HLT) per link: 8.9 kHz
Event rate (L1) per link: 15.3 kHz
Event rate (HLT) per link: 0.556 kHz
Mean output from event-builder for L1: 56.5 MB/s
Mean output from event-builder for HLT: 18.8 MB/s
Aggregated output from event-builder into sub-farms: 75.4 MB/s
Number of event-builder NPs: 36
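A quick consistency check on these figures: the L1 and HLT event-builder outputs add up, 56.5 MB/s + 18.8 MB/s ≈ 75.4 MB/s, matching the aggregated output quoted into the sub-farms.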

Niko NEUFELD CERN, EP 7 Event-building in the NP
The fragment rate is very low compared to the multiplexing after the front-end: 500 kHz for L1. Technically, fragments are concatenated in the output stage of the NP (large output buffer of 256 MB). From studies of frame-merging it is known that merging rates of well over 800 kHz can easily be achieved. The only technical complication: resulting events can be larger than the allowed Ethernet MTU, so segmentation must be performed (see the sketch below).
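A minimal sketch of that segmentation step, assuming the standard 1500-byte Ethernet MTU; the 16-byte per-segment header is an illustrative assumption, not a number from the slides.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::size_t kMtu = 1500;        // standard Ethernet payload limit
constexpr std::size_t kHeaderSize = 16;   // assumed per-segment header overhead

// Cut one assembled event into payload chunks, each small enough that
// chunk + header still fits into a single Ethernet frame.
std::vector<std::vector<uint8_t>> segment(const std::vector<uint8_t>& event) {
    const std::size_t chunk = kMtu - kHeaderSize;
    std::vector<std::vector<uint8_t>> frames;
    for (std::size_t off = 0; off < event.size(); off += chunk) {
        const std::size_t len = std::min(chunk, event.size() - off);
        frames.emplace_back(event.begin() + off, event.begin() + off + len);
    }
    return frames;
}
```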

Niko NEUFELD CERN, EP 8 Event-Building vs FE muxing
Fragments of the same colour have the same event number. The information necessary for merging is contained in the header. Transport errors (timeouts, CRC errors) are recorded in a trailer. A possible layout is sketched below.
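The slide only states what the header and trailer carry, so every field name and width in this C++ layout is an assumption, not the real data format.

```cpp
#include <cstdint>

// Fragment header: carries the information needed for merging.
struct FragmentHeader {
    uint32_t eventNumber;   // merge key: equal numbers belong to one event
    uint16_t sourceId;      // originating Readout Unit
    uint16_t length;        // payload length in bytes
};

// Fragment trailer: records transport errors seen on the way.
struct FragmentTrailer {
    uint8_t  timeout;       // set if a contributing fragment timed out
    uint8_t  crcError;      // set if a CRC error was detected in transport
    uint16_t reserved;      // padding, keeps the trailer word-aligned
};
```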

Niko NEUFELD CERN, EP 9 Data-format (HLT)

Niko NEUFELD CERN, EP 10 Backup Slides

Niko NEUFELD CERN, EP 11 Data flow in the NP4GS3
(Diagram, labels only: Ingress Event Building, Egress Event Building, DASL, Access to frame data)