Electronics Trigger and DAQ CERN meeting summary.

Synchronous, Pipelined, Fixed-Latency Architecture

Optical Link Studies of the optical data links to be used for command and clock distribution are well advanced. It has been demonstrated that these links can be built using serializer/deserializers (SerDes) that provide deterministic link latency. Preliminary results concerning irradiation effects are very encouraging. The SerDes still needs to be further qualified, and the data-link properties are still under study as well.

Trigger The requirement of keeping the dead time at the 1% level at a 150 kHz trigger rate implies a minimum command spacing of about 70 ns. The sampling rate of the detectors used for triggering is 7 MHz, corresponding to a sampling period of about 140 ns. Interpolation is needed; it would significantly help to bring down the L1 accept rate, especially in view of the potential increase in luminosity. The rate budget is: Bhabha scattering 50 kHz; machine-related background 25 kHz; physics and other background 25 kHz. We have foreseen a 50% contingency on top of this. Detector readout windows as small as 100 ns (to be verified) will be used by some sub-detectors (SVT, PID, IFR). This sets a limit on the trigger timing resolution, which must be better than 50 ns (SVT). Wider sub-detector readout windows may be needed to cope with the best achievable trigger timing resolution. The L1 processing clock will be set to 56 MHz. Still to be done: evaluate what is achievable as L1 time precision; define the maximum affordable sub-detector readout window sizes; understand how to reduce machine backgrounds and backgrounds from Bhabha scattering.
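As a cross-check, a minimal sketch of the arithmetic behind the figures quoted above (the dead-time target, trigger rate and rate contributions are the ones from this summary; nothing else is assumed):

```python
# Back-of-the-envelope check of the trigger numbers quoted above.

DEAD_TIME_BUDGET = 0.01     # 1% dead time
L1_ACCEPT_RATE   = 150e3    # Hz
TRIGGER_SAMPLING = 7e6      # Hz, sampling rate of the trigger detectors

# Minimum spacing between consecutive L1 commands for <= 1% dead time
min_command_spacing = DEAD_TIME_BUDGET / L1_ACCEPT_RATE
print(f"min command spacing ~ {min_command_spacing * 1e9:.0f} ns")        # ~67 ns, i.e. "about 70 ns"

# Sampling period of the trigger detectors
print(f"trigger sampling period ~ {1.0 / TRIGGER_SAMPLING * 1e9:.0f} ns")  # ~143 ns, i.e. "about 140 ns"

# L1 rate budget plus the 50% contingency
rates_khz = {"Bhabha": 50.0, "machine background": 25.0, "physics + background": 25.0}
base = sum(rates_khz.values())
print(f"base rate {base:.0f} kHz, with 50% contingency {base * 1.5:.0f} kHz")  # 100 kHz -> 150 kHz
```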

FEE We estimated that the number of optical data links connecting the FEE to the ROMs amounts to 325; the data links operate at a 1.8 Gb/s rate. The event size at the FEE stage is estimated to be of the order of 500 kB (to be checked). To be studied: optical link occupancies (%), the de-randomizer and transmission output stage, and the throttling mechanisms.
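A back-of-the-envelope sketch of the link-occupancy study listed above, assuming the load is spread evenly over the 325 links and ignoring protocol and encoding overheads (both assumptions are mine, not from the summary):

```python
# Rough estimate of the average FEE -> ROM optical-link occupancy implied
# by the figures above (event size still "to be checked").

EVENT_SIZE   = 500e3 * 8   # bits (500 kB event at the FEE stage)
TRIGGER_RATE = 150e3       # Hz
N_LINKS      = 325
LINK_SPEED   = 1.8e9       # bit/s per optical link

aggregate = EVENT_SIZE * TRIGGER_RATE   # total FEE -> ROM throughput, bit/s
per_link  = aggregate / N_LINKS         # average throughput per link (even spread assumed)
occupancy = per_link / LINK_SPEED

print(f"aggregate throughput: {aggregate / 1e9:.0f} Gb/s")
print(f"per-link throughput : {per_link / 1e9:.2f} Gb/s  ({occupancy:.0%} occupancy)")
```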

ROM Two solutions are foreseen to implement the required ROM functionalities: an FPGA/GBit-NIC one and a PC/CPU based one. ROMs based on custom electronics (FPGA/GBit-NIC) would simply act as a bridge/interface between the FEE and the HLT farm, performing event formatting with minimal data handling. The overall data flux of 500 kB × 150 kHz is assumed not to be an issue for a core network switch connecting the ROMs to the HLT farm; data handling would be postponed to the HLT farm (on the HLT single-event processing time, see below). The PC/CPU implementation requires each PC to be equipped with an optical/PCIe bus interface to get the data stored in the PC RAM. This approach provides maximum flexibility in both FE data processing and transmission.
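For the FPGA/GBit-NIC "bridge" option, the event-formatting step could look schematically as follows; the header layout (magic word, event number, source id, payload length) is purely hypothetical and only illustrates the "format and forward, no data handling" idea:

```python
# Illustrative sketch of ROM bridge-mode operation: re-format an incoming
# FE fragment and forward it untouched, leaving all real data handling to
# the HLT farm.  The header fields are placeholders, not a defined format.
import struct

HEADER = struct.Struct("<IIII")   # magic, event number, source id, payload length

def format_fragment(event_number: int, source_id: int, payload: bytes) -> bytes:
    """Prepend a minimal header to a raw FE fragment (no data processing)."""
    return HEADER.pack(0xCAFEBABE, event_number, source_id, len(payload)) + payload

# Example: wrap a dummy 1 kB fragment coming from link 42 for event 1000
fragment = format_fragment(1000, 42, bytes(1024))
print(len(fragment))   # 16-byte header + 1024-byte payload
```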

FCTS and ECS The FCTS architecture is well understood. Clock and commands are distributed throughout the detector components using pluggable FCTS receiver modules. The ROMs require FCTS mezzanine boards to get the packet destination addresses and the complete event information field. ECS configuration and control of the FE boards is performed by means of SPECS boards and the available protocols at 10 MB/s. Far from the detector, the ECS can be provided with Ethernet connections.

Event Builder A static, round-robin, push mechanism relying on an unreliable protocol (UDP) is the easiest to implement. For such a scheme, DCB/Converged Ethernet could provide a reliable layer-2 transport, effectively turning UDP into a reliable on-the-wire protocol. Alternative protocols could include RDMA over a reliable layer-2 transport (such as DCB/Converged Ethernet). Slow feedback from the PC farm to the system can go through the ECS. Do we need FCTS feedback instead?
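A minimal sketch of the static round-robin UDP push scheme, with hypothetical HLT node names and port; reliability is assumed to be provided underneath by DCB/Converged Ethernet rather than by the application:

```python
# Static round-robin UDP "push" event building: each ROM sends every
# formatted event fragment to the next HLT node in a fixed cyclic order,
# with no acknowledgement at this level.  Node names and port are
# placeholders.
import itertools
import socket

HLT_NODES = [("hlt-node-%02d" % i, 50000) for i in range(1, 5)]   # hypothetical farm nodes

def push_fragments(fragments):
    """Send each fragment to the HLT nodes in strict round-robin order."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    destinations = itertools.cycle(HLT_NODES)
    for fragment in fragments:
        sock.sendto(fragment, next(destinations))   # fire-and-forget UDP push
    sock.close()
```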

HLT An HLT processing time of the order of 10 ms/event at 150 kHz can be managed by means of 1500 CPU cores, which corresponds to of the order of 100 boxes. The HLT farm should therefore be rather small. For comparison, the measured average latency of the BaBar L3 algorithm was 1-2 ms on a 2.2 GHz AMD Opteron (~2006).
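The farm sizing follows directly from the numbers above; the 16 cores per box used here is only an assumption made to recover the "order of 100 boxes" figure:

```python
# Arithmetic behind the HLT farm size quoted above.
HLT_TIME_PER_EVENT = 10e-3    # s/event
L1_ACCEPT_RATE     = 150e3    # Hz
CORES_PER_BOX      = 16       # assumed, only to illustrate the box count

cores_needed = HLT_TIME_PER_EVENT * L1_ACCEPT_RATE
print(f"cores needed ~ {cores_needed:.0f}")                    # 1500
print(f"boxes needed ~ {cores_needed / CORES_PER_BOX:.0f}")    # ~94, i.e. order of 100
```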