LHCb Trigger and Data Acquisition System
Beat Jost, CERN / EP
Presentation given at the 11th IEEE NPSS Real Time Conference, June 14-18, 1999, Santa Fe, NM.


Slide 2: Outline
- Introduction
- General Trigger/DAQ Architecture
- Trigger/DAQ functional components
  - Selected Topics
    - Level-1 Trigger
    - Event-Building Network Simulation
- Summary

Slide 3: Introduction to LHCb
- Special-purpose experiment to measure precisely the CP-violation parameters in the B-Bbar system
- The detector is a single-arm spectrometer with one dipole magnet
- Total b-quark production rate is ~75 kHz
- Expected rate from inelastic p-p collisions is ~15 MHz
- Branching ratios of interesting channels range between ..., giving an interesting physics rate of ~5 Hz
(Table: LHCb in Numbers)

Slide 4: LHCb Detector
(Figure: detector layout)

Slide 5: Typical Interesting Event
(Event display figure)

Slide 6: Trigger/DAQ Architecture
(Architecture diagram; the recoverable labels and figures are listed below.)
- Data path: LHC-B detector (VDET, TRACK, ECAL, HCAL, MUON, RICH) -> Front-End Electronics -> Front-End Multiplexers (FEM) -> front-end links -> Readout Units (RU) -> Read-out Network (RN) -> Sub-Farm Controllers (SFC) -> CPU farm (Trigger Levels 2 & 3, Event Filter) -> Storage
- Trigger levels: Level-0 trigger, 40 MHz -> 1 MHz, fixed latency 4.0 µs; Level-1 trigger, 1 MHz -> 40 kHz, variable latency < 1 ms; Level 2, variable latency ~10 ms; Level 3, ~200 ms
- Data rates: 40 TB/s off the detector, 1 TB/s after Level-0, 2-4 GB/s into the Read-out Network, 4 GB/s for event building, 20 MB/s to storage
- Timing & Fast Control distributes the clock and trigger decisions; a throttle signal from the readout protects the buffers; Control & Monitoring is attached via LAN

Slide 7: Front-End Electronics
- Data buffering for the Level-0 latency
- Data buffering for the Level-1 latency
- Digitization and zero suppression
- Front-end multiplexing onto front-end links
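Because the Level-0 latency is fixed, the required buffer depth follows directly from the numbers on these slides: 4.0 µs at the 40 MHz bunch-crossing rate gives 160 pipeline cells. The C sketch below illustrates such a fixed-latency pipeline buffer; the structure and names are illustrative, not taken from the actual LHCb front-end design.

    /* Fixed-latency pipeline buffer: a sample written now re-emerges
     * exactly one Level-0 latency (160 clock ticks) later, when the
     * trigger decision for that bunch crossing arrives. */
    #include <stdint.h>

    #define L0_LATENCY_CELLS 160   /* 4.0 us * 40 MHz = 160 cells */

    typedef struct {
        uint16_t cell[L0_LATENCY_CELLS]; /* one ADC sample per bunch crossing */
        unsigned wr;                     /* write index, advances every 25 ns */
    } L0Pipeline;

    /* Called every bunch crossing: store the new sample, return the one
     * that is exactly L0_LATENCY_CELLS ticks old. */
    static uint16_t l0_pipeline_clock(L0Pipeline *p, uint16_t sample)
    {
        uint16_t delayed = p->cell[p->wr]; /* oldest cell is overwritten next */
        p->cell[p->wr] = sample;
        p->wr = (p->wr + 1) % L0_LATENCY_CELLS;
        return delayed;
    }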

Slide 8: Timing and Fast Control
- Provide a common, synchronous clock to all components needing it
- Provide the Level-0 and Level-1 trigger decisions
- Provide commands synchronous across all components (resets)
- Provide trigger hold-off capability in case buffers are getting full
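The hold-off capability amounts to a throttle with hysteresis on buffer occupancy: assert when a buffer fills beyond a high watermark, release only once it drains below a low one. The following is a minimal sketch of such logic; the names and watermark scheme are assumptions, not from the LHCb TFC specification.

    #include <stdbool.h>

    typedef struct {
        unsigned occupancy;  /* events currently buffered */
        unsigned hi_mark;    /* assert throttle at or above this */
        unsigned lo_mark;    /* release throttle below this */
        bool throttled;
    } BufferWatch;

    /* Hysteresis keeps the throttle from oscillating when the occupancy
     * hovers around a single threshold. */
    static bool update_throttle(BufferWatch *b)
    {
        if (!b->throttled && b->occupancy >= b->hi_mark)
            b->throttled = true;       /* hold off further triggers */
        else if (b->throttled && b->occupancy < b->lo_mark)
            b->throttled = false;      /* safe to accept triggers again */
        return b->throttled;
    }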

Slide 9: Level-0 Trigger
- Large transverse-energy (calorimeter) trigger
- Large transverse-momentum muon trigger
- Pile-up veto
- Implemented in FPGAs/DSPs, basically hard-wired
- Input rate: 40 MHz; output rate: 1 MHz; latency: 4.0 µs (fixed)
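As an illustration of the kind of decision the calorimeter trigger makes, the sketch below applies a transverse-energy cut, E_T = E sin(theta), in plain C; the real system implements this in FPGA/DSP logic, and the 3.5 GeV threshold here is an assumed placeholder, not the LHCb value.

    #include <math.h>
    #include <stdbool.h>

    /* Accept a calorimeter cluster if its transverse energy exceeds a
     * threshold. Floating point is used for clarity; firmware would use
     * fixed-point lookup tables. */
    static bool l0_calo_accept(double energy_gev, double theta_rad)
    {
        const double et_threshold_gev = 3.5;  /* illustrative threshold */
        double et = energy_gev * sin(theta_rad);
        return et > et_threshold_gev;
    }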

Slide 10: DAQ Functional Components
- Readout Units (RUs)
  - Multiplex front-end links onto Readout Network links
  - Merge input fragments into one output fragment
- Subfarm Controllers (SFCs)
  - Assemble event fragments arriving from the RUs into complete events and send them to one of the connected CPUs
  - Load balancing among the connected CPUs
- Readout Network
  - Provide connectivity between RUs and SFCs for event building
  - Provide the necessary bandwidth (4 GB/s sustained)
- CPU farm
  - Execute the high-level trigger algorithms
    - Level-2 (input rate: 40 kHz, output rate: 5 kHz)
    - Level-3 (input rate: 5 kHz, output rate: ~100 Hz)
  - ~2000 processors (~1000 MIPS each)
Note: there is no central event manager.
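A minimal sketch of the SFC's job under assumptions the talk does not detail (fragment bookkeeping keyed on an event number, a least-loaded choice among the attached CPUs): fragments from all ~100 RUs are collected, and the completed event is pushed to one farm CPU. All names and sizes are illustrative.

    #include <string.h>

    #define N_RU     100  /* ~100 sources, from the slides     */
    #define N_CPU     20  /* CPUs behind one SFC: assumed      */
    #define MAX_OPEN  64  /* events assembled concurrently     */

    typedef struct {
        unsigned event_no;
        unsigned n_frag;          /* fragments received so far */
        void    *frag[N_RU];
        int      in_use;
    } OpenEvent;

    static OpenEvent open_evt[MAX_OPEN];
    static unsigned  cpu_load[N_CPU]; /* decremented when a CPU reports done (omitted) */

    static void send_to_cpu(int cpu, OpenEvent *ev) { (void)cpu; (void)ev; /* network send omitted */ }

    /* Called for every fragment arriving from RU 'ru' for event 'event_no'. */
    void sfc_on_fragment(unsigned event_no, unsigned ru, void *data)
    {
        OpenEvent *ev = NULL;
        for (int i = 0; i < MAX_OPEN; i++)      /* find the open event  */
            if (open_evt[i].in_use && open_evt[i].event_no == event_no)
                { ev = &open_evt[i]; break; }
        if (!ev)                                /* or open a new slot   */
            for (int i = 0; i < MAX_OPEN; i++)
                if (!open_evt[i].in_use) {
                    ev = &open_evt[i];
                    memset(ev, 0, sizeof *ev);
                    ev->in_use = 1;
                    ev->event_no = event_no;
                    break;
                }
        if (!ev) return;                        /* overflow: throttle upstream */

        ev->frag[ru] = data;
        if (++ev->n_frag == N_RU) {             /* event complete        */
            int best = 0;                       /* least-loaded CPU wins */
            for (int c = 1; c < N_CPU; c++)
                if (cpu_load[c] < cpu_load[best]) best = c;
            cpu_load[best]++;
            send_to_cpu(best, ev);
            ev->in_use = 0;
        }
    }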

Slide 11: Control System
- Common, integrated controls system
  - Detector controls (classical 'slow control')
    - High voltage
    - Low voltage
    - Crates
    - Temperatures
    - Alarm generation and handling
    - etc.
  - DAQ controls
    - Classical run control
    - Setup and configuration of all components (front-end, trigger, DAQ)
    - Monitoring
  - Same system for both functions

Slide 12: Level-1 Trigger
- Purpose
  - Select events with detached secondary vertices
- Algorithm
  - Based on the special geometry of the vertex detector (r-stations, phi-stations)
  - Several steps:
    - Track reconstruction in 2 dimensions (r-z)
    - Determination of the primary vertex
    - Search for tracks with large impact parameter relative to the primary vertex
    - Full 3-dimensional reconstruction of those tracks
  - Expected rate reduction by a factor of 25
- Technical problem: 1 MHz input rate, 3 GB/s data rate, small event fragments, latency
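The impact-parameter step can be illustrated with a small calculation: for a track treated as a straight line in the r-z plane (a reasonable simplification near the vertex detector, where there is little field), the impact parameter is the distance of closest approach of that line to the primary vertex. This toy C function is illustrative, not LHCb code.

    #include <math.h>

    typedef struct { double z0, r0, slope; } Track2D; /* r(z) = r0 + slope*(z - z0) */
    typedef struct { double z, r; } Vertex2D;

    /* Perpendicular distance from the primary vertex to the track line,
     * using the standard point-to-line formula |a*z + b*r + c|/sqrt(a^2+b^2)
     * with the line written as -slope*z + r + (slope*z0 - r0) = 0. */
    double impact_parameter(const Track2D *t, const Vertex2D *pv)
    {
        double a = -t->slope, b = 1.0, c = t->slope * t->z0 - t->r0;
        return fabs(a * pv->z + b * pv->r + c) / sqrt(a * a + b * b);
    }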

Slide 13: Level-1 Trigger (2)
- Implementation
  - ~32 sources into a switching network
  - Algorithm running in processors (~200 CPUs)
  - Basic idea is a switching network between the data sources and the processors
  - In principle very similar to the DAQ; however, the 1 MHz input rate poses special problems

Slide 14: Event-Building Network
- Requirements
  - 4 GB/s sustained bandwidth
  - Scalable
  - Expandable
  - ~100 inputs (RUs)
  - ~100 outputs (SFCs)
  - Affordable and, if possible, commercial (COTS, commodity?)
- Readout protocol: pure push-through of complete events to one CPU of the farm
  - Advantages:
    - Simple hardware and software
    - No central control, hence perfect scalability
    - Full flexibility for the high-level trigger algorithms
  - Drawbacks:
    - Large bandwidth needed
    - Buffer overflows must be avoided via a 'throttle' back to the trigger
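The key property of the push protocol is that every RU can compute an event's destination SFC independently, which is why no central event manager is needed. A minimal sketch, assuming a static round-robin assignment keyed on the event number (the actual assignment rule is not specified in the talk):

    #define N_SFC 100   /* ~100 outputs, from the slides */

    /* Evaluated identically in every RU: the same event number (known to
     * all sources, e.g. via Timing and Fast Control) yields the same
     * destination, so all fragments of one event converge on one SFC
     * without any arbitration traffic. */
    unsigned destination_sfc(unsigned event_number)
    {
        return event_number % N_SFC;
    }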

Slide 15: Event-Building Network Simulation
- Simulated technology: Myrinet
  - Nominal link speed 1.28 Gb/s
  - Xon/Xoff flow control
  - Switches:
    - Ideal crossbar
    - 8x8 maximum size (currently)
    - Wormhole routing
    - Source routing
    - No buffering inside switches
- Software used: the Ptolemy discrete-event framework
- Realistic traffic patterns
  - Variable event sizes
  - Event-building traffic

Slide 16: Network Simulation Results
- Results do not depend strongly on the specific technology (Myrinet), but rather on its characteristics (flow control, etc.)
- FIFO buffers between switching levels allow scalability to be recovered
- 50% efficiency: a "law of nature"
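The ~50% efficiency observation is characteristic of head-of-line blocking in unbuffered switching fabrics; for an input-queued crossbar under uniform random traffic the classic large-switch limit is 2 - sqrt(2) ~ 58.6%. The Monte Carlo below illustrates that generic effect for a saturated 8x8 switch; it is not the Ptolemy model behind the talk's results, and event-building traffic differs from the uniform traffic simulated here.

    #include <stdio.h>
    #include <stdlib.h>

    #define N     8        /* 8x8 switch, as in the simulated Myrinet fabric */
    #define SLOTS 100000   /* simulated time slots */

    int main(void)
    {
        int hol[N];        /* head-of-line destination per input, -1 = empty */
        long delivered = 0;
        for (int i = 0; i < N; i++) hol[i] = -1;

        for (long t = 0; t < SLOTS; t++) {
            int granted[N] = {0};
            /* Saturated inputs: each always has a packet with a uniformly
             * random destination; a blocked packet keeps its destination. */
            for (int i = 0; i < N; i++)
                if (hol[i] < 0) hol[i] = rand() % N;
            /* Each output accepts at most one head-of-line packet per slot. */
            for (int i = 0; i < N; i++) {
                if (!granted[hol[i]]) {
                    granted[hol[i]] = 1;
                    hol[i] = -1;       /* delivered, queue advances */
                    delivered++;
                }                      /* else: blocked by contention */
            }
        }
        /* Prints a per-link throughput around 0.6 for N = 8. */
        printf("throughput per link: %.3f\n", (double)delivered / ((double)SLOTS * N));
        return 0;
    }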

Slide 17: Summary
- LHCb is a special-purpose experiment to study CP violation
- Triggering poses special challenges
  - Inelastic p-p interactions look very similar to events containing B mesons
- The DAQ is designed with simplicity and maintainability in mind
  - Push readout protocol: simple, e.g. no central event manager
  - Harder bandwidth requirements on the readout network
  - Simulations suggest that the readout network can be realized by adding FIFO buffers between the levels of switching elements
- Unified approach to controls
  - Same basic infrastructure for detector controls and DAQ controls
  - Both aspects completely integrated but operationally independent