Data Acquisition, Trigger and Control

Definitions. Trigger System: selects in real time the "interesting" events from the bulk of collisions; it decides whether (yes or no) the event should be read out of the detector and stored. Data Acquisition System: gathers the data produced by the detector and stores it (for positive trigger decisions). Control System: performs the overall configuration, control and monitoring.

Trigger, DAQ & Control: overall block diagram. The detector channels feed the front-end electronics, gated by the hardware trigger; the DAQ then performs event building, runs the high-level trigger and writes to storage, all supervised by the monitoring & control system.

LEP & LHC in Numbers

Basic Concepts

Trivial DAQ: the same system seen in three ways. External view: a sensor connected to an ADC card in a CPU with a disk. Physical view: sensor -> ADC -> CPU -> disk. Logical view: sensor -> ADC -> processing -> storage, with a periodic trigger starting the ADC.

Trivial DAQ with a real trigger: the sensor signal goes through a delay to the ADC, while a discriminator forms the trigger that starts the ADC; the processing is interrupt-driven and writes to storage. What if a trigger is produced while the ADC or the processing is busy?

Trivial DAQ with a real trigger (2): a busy logic (a flip-flop set when the ADC is started and cleared when the processing signals "ready") gates the trigger, so that no new conversion starts while the system is busy. Deadtime (%): the ratio between the time the DAQ is busy and the total time.
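To get a feel for the numbers: assuming Poisson-distributed triggers at average rate f and a non-paralyzable busy time tau per accepted trigger, the accepted rate is f / (1 + f*tau). A minimal sketch with purely illustrative numbers (not taken from the slides):

    # Deadtime estimate for a simple, non-buffered DAQ (non-paralyzable model).
    # Illustrative numbers only.
    trigger_rate = 100e3   # Hz, average physics trigger rate
    busy_time    = 10e-6   # s, ADC conversion + processing per accepted trigger

    accepted_rate = trigger_rate / (1.0 + trigger_rate * busy_time)
    deadtime_fraction = 1.0 - accepted_rate / trigger_rate

    print(f"accepted rate: {accepted_rate / 1e3:.1f} kHz")
    print(f"deadtime: {100 * deadtime_fraction:.1f} %")   # -> 50 % for these numbers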

Trivial DAQ with a real trigger (3): a FIFO is inserted between the ADC and the processing; the busy logic now only blocks the trigger when the FIFO is full. Buffers de-randomize the data, decoupling data production from data consumption, and thus give better performance.
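To see why a derandomizing buffer helps, one can simulate Poisson trigger arrivals feeding a fixed-speed processing stage through a FIFO of limited depth; a trigger is only lost when the FIFO is full. A toy simulation, with invented rates and processing time:

    import random

    def simulate(fifo_depth, trigger_rate=100e3, process_time=8e-6, n_triggers=200_000):
        """Return the fraction of triggers lost because the FIFO was full."""
        random.seed(1)
        t = 0.0
        in_flight = []          # departure times of triggers currently buffered or processed
        last_done = 0.0
        lost = 0
        for _ in range(n_triggers):
            t += random.expovariate(trigger_rate)        # Poisson arrivals
            in_flight = [d for d in in_flight if d > t]  # drop triggers already processed
            if len(in_flight) >= fifo_depth:
                lost += 1                                # FIFO full -> deadtime
            else:
                last_done = max(t, last_done) + process_time
                in_flight.append(last_done)
        return lost / n_triggers

    for depth in (1, 2, 4, 16):
        print(f"FIFO depth {depth:2d}: {100 * simulate(depth):5.2f} % triggers lost")

The deeper the FIFO, the smaller the loss for the same average load, which is exactly the de-randomizing effect described above.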

Trivial DAQ in collider mode: the beam-crossing (BX) timing signal starts the ADC, since we know when a collision happens; the trigger (from the discriminator) is now used to "reject" the data, aborting the conversion, while the busy logic and FIFO work as before.

LHC timing: p-p crossing rate 40 MHz (L = 10^33 - 4×10^34 cm^-2 s^-1), i.e. one bunch crossing every 25 ns. Rates at the successive levels: Level 1 at 40 MHz, Level 2 at 100 kHz, Level n at 1 kHz. Consequences: the Level-1 trigger decision time (of the order of µs) exceeds the bunch interval; event overlap & signal pile-up (multiple crossings, since the detector cell memory is longer than 25 ns); very high number of channels.

Trivial DAQ at the LHC: the beam-crossing (BX) clock drives an analog pipeline in which the sensor samples wait while a pipelined trigger produces its accept/reject decision; accepted samples are digitized by the ADC and pass through a FIFO to the processing and storage, gated by the busy logic. Front-end pipelines are needed because the trigger decision time plus the transmission delays are longer than the beam-crossing period.
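Conceptually the front-end pipeline is a ring buffer clocked at 40 MHz: every bunch crossing a new sample is written, and when a Level-1 accept arrives (a fixed latency later) the sample taken that many crossings earlier is extracted. A minimal sketch of the idea, with a made-up latency expressed in bunch crossings:

    from collections import deque

    L1_LATENCY_BX = 128            # trigger latency in bunch crossings (illustrative)

    # Ring buffer: holds exactly the last L1_LATENCY_BX samples.
    pipeline = deque(maxlen=L1_LATENCY_BX)

    def on_bunch_crossing(sample, l1_accept):
        """Called every 25 ns: store the new sample; if the (delayed) Level-1
        decision arriving now is an accept, read out the oldest stored sample."""
        accepted = None
        if len(pipeline) == pipeline.maxlen and l1_accept:
            accepted = pipeline[0]          # sample taken L1_LATENCY_BX crossings ago
        pipeline.append(sample)             # overwrites the oldest entry when full
        return accepted

    # Example: an accept arriving at crossing 200 reads out the sample from crossing 72.
    for bx in range(300):
        out = on_bunch_crossing(sample=bx, l1_accept=(bx == 200))
        if out is not None:
            print(f"BX {bx}: read out sample {out}")   # -> sample 200 - 128 = 72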

Less trivial DAQ: with N channels read out by many ADCs, some processing can proceed in parallel on each group of channels; the pieces are then combined by event building and processed further before storage.

The real thing: 15 million detector channels @ 40 MHz, i.e. roughly 15,000,000 × 40,000,000 bytes/s ≈ 600 TB/s (assuming of the order of one byte per channel per crossing).

Trigger

Trigger System: the trigger system detects whether an event is interesting or not. In a typical ATLAS or CMS* event, some 20 collisions overlap, and this repeats every 25 ns; a Higgs event has to be picked out of this pile-up. *) LHCb isn't much nicer, and in ALICE (Pb-Pb) it will be even worse.

Trigger Levels: since the detector data are not promptly available and the trigger function is highly complex, it is evaluated by successive approximations. Hardware trigger(s): fast triggers that use data from only a few detectors and have a limited time budget (Level 1, sometimes Level 2). Software trigger(s): refine the decisions of the hardware trigger using more detailed data and more complex algorithms; usually implemented as programs running on processors (High Level Triggers, HLT).

Hardware Trigger: luckily, pp collisions produce mainly particles with transverse momentum pt ~ 1 GeV, while interesting physics (old and new) produces particles with large pt. Conclusion: in the first trigger level we need to detect high transverse-momentum particles.

Example: LHCb Calorimeter Trigger. Detect a high energy deposit in a small surface, not forgetting the neighbouring cells, and compare the sum with a threshold (a sketch of such a sliding-window sum follows).
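A hedged sketch of the idea (not the actual LHCb firmware): slide a small window, here 2x2 cells, over the calorimeter transverse-energy map, sum each cell with its neighbours, and compare the largest sum with a threshold. Granularity and threshold below are invented.

    # Toy calorimeter trigger: find the highest 2x2 transverse-energy sum and
    # compare it with a threshold. Cell granularity and threshold are invented.
    ET_THRESHOLD = 3.5   # GeV, illustrative

    def highest_2x2_sum(et):
        """et: 2D list of transverse energies, one entry per calorimeter cell."""
        best = 0.0
        for i in range(len(et) - 1):
            for j in range(len(et[0]) - 1):
                s = et[i][j] + et[i][j + 1] + et[i + 1][j] + et[i + 1][j + 1]
                best = max(best, s)
        return best

    def calo_trigger_accept(et):
        return highest_2x2_sum(et) > ET_THRESHOLD

    # Example: a 4x4 patch with one energetic cluster shared between two cells.
    patch = [[0.1, 0.2, 0.1, 0.0],
             [0.1, 2.1, 1.9, 0.1],
             [0.0, 0.2, 0.1, 0.0],
             [0.0, 0.0, 0.1, 0.0]]
    print(calo_trigger_accept(patch))   # True: the best 2x2 sum is about 4.3 GeV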

LHC: trigger communication loop. The local trigger primitive generators send their data to the global Level-1 trigger, and the accept/reject decision comes back within a latency loop of about 3 µs, which fixes the front-end pipeline delay. Timing and trigger have to be distributed to ALL front-ends: a 40 MHz synchronous digital system. Synchronization at the exit of the pipeline is non-trivial and requires timing calibration.

Trigger Levels in LHCb. Level-0 (4 µs, custom processors): high-pT electrons, muons and hadrons; pile-up veto. HLT (≈ ms, commercial processors): refinement of the Level-0 decision, background rejection, event reconstruction, selection of physics channels; needs the full event. Data flow: detectors at 40 MHz -> trigger level 0 -> 1 MHz -> readout buffers -> event building -> processor farms (high-level trigger) -> 2 kHz -> storage.

Trigger Levels in ATLAS. Level-1 (3.5 µs, custom processors): energy clusters in the calorimeters; muon trigger based on a tracking coincidence matrix. Level-2 (100 ms, specialized processors): only a few Regions of Interest (RoI) relevant to the trigger decision; the selected RoI information is routed by routers and switches to feature extractors (DSPs or specialized processors) and staged local and global processors. Level-3 (≈ ms, commercial processors): reconstructs the event using all data and selects interesting physics channels. Data flow: detectors -> front-end pipelines -> trigger level 1 -> region of interest (RoI) -> readout buffers -> trigger level 2 -> event building -> processor farms (trigger level 3) -> storage.

Data Acquisition

Data Acquisition System: gathers the data produced by the detector and stores it (for positive trigger decisions). Front-end electronics: receive the detector, trigger and timing signals and produce digitized information. Readout network: reads the front-end data and forms complete events (event building). Processing & storage: data processing or filtering, and storage of the event data.

Front-ends: detector dependent (home made). On detector: pre-amplification, discrimination, shaping amplification and multiplexing of a few channels; problems: radiation levels, power consumption. Transmission: long cables (50-100 m), electrical or fibre-optic. In the counting rooms: hundreds of FE crates for reception, A/D conversion and buffering.

Front-end structure: detector -> amplifier -> filter -> shaper -> range compression -> sampling (driven by the clock) -> digital filter -> zero suppression -> pipeline -> feature extraction -> buffer -> format & readout.
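Zero suppression is the step that saves most of the bandwidth: only samples above a (pedestal-subtracted) threshold are kept, together with their channel address. A minimal sketch, with invented pedestal and threshold values:

    # Toy zero suppression: keep only (channel, value) pairs above threshold
    # after pedestal subtraction. Pedestal and threshold are illustrative.
    PEDESTAL  = 100        # ADC counts, taken common to all channels for simplicity
    THRESHOLD = 12         # ADC counts above pedestal

    def zero_suppress(samples):
        """samples: list of raw ADC values, index = channel number."""
        return [(ch, adc - PEDESTAL)
                for ch, adc in enumerate(samples)
                if adc - PEDESTAL > THRESHOLD]

    raw = [101, 99, 103, 187, 100, 98, 142, 100]
    print(zero_suppress(raw))      # -> [(3, 87), (6, 42)]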

DAQ readout: the event data are now digitized, pre-processed and tagged with a unique, monotonically increasing event number, but they are distributed over many read-out boards ("sources"). For the next stage of selection, or even simply to write them to tape, we have to get the pieces together: event building.

Event building: event fragments from the data sources are assembled by the event builder into full events, which are then sent to data storage.
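Conceptually, event building is a gather operation keyed on the event number: fragments from all sources carrying the same event number are concatenated into one full event. A minimal sketch, assuming each source delivers (event number, payload) pairs; names and the fixed source count are illustrative:

    from collections import defaultdict

    N_SOURCES = 4   # number of read-out boards feeding the event builder (illustrative)

    class EventBuilder:
        """Collects fragments per event number and emits an event once all sources
        have contributed (a real system also handles time-outs and lost fragments)."""
        def __init__(self, n_sources):
            self.n_sources = n_sources
            self.pending = defaultdict(dict)     # event number -> {source: payload}

        def add_fragment(self, event_number, source, payload):
            frags = self.pending[event_number]
            frags[source] = payload
            if len(frags) == self.n_sources:     # event complete
                del self.pending[event_number]
                return [frags[s] for s in sorted(frags)]   # full event, in source order
            return None

    eb = EventBuilder(N_SOURCES)
    for src in range(N_SOURCES):
        event = eb.add_fragment(event_number=42, source=src, payload=f"data-{src}")
    print(event)   # -> ['data-0', 'data-1', 'data-2', 'data-3']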

Event filters: the higher trigger levels (3, 4, ...) run after event building. LHC experiments cannot afford to write all acquired data into mass storage, so only useful events should be written out. The event-filter function selects the events that will be used in the data analysis (the selected physics channels). It uses commercially available processors (common PCs), but needs thousands of them running in parallel.

Event building to a CPU farm: the event fragments from the data sources are built into full events and distributed to the event-filter CPUs, which send the accepted events to data storage.

Readout networks: we can build the readout network (connecting data sources to processors) using buses or switches. Switches vs. buses: the total bandwidth of a bus is shared among all the processors, so adding more processors degrades the performance of the others; in general, buses do not scale very well. With switches, N simultaneous transfers can co-exist, and adding more processors does not degrade performance (just use a bigger switch); switches are scalable. The scaling argument is illustrated below.
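Putting the scaling argument in numbers: on a shared bus the per-node throughput is the total bus bandwidth divided by the number of nodes, while on a non-blocking switch each node keeps its full link bandwidth. A small sketch with invented figures:

    # Per-node throughput: shared bus vs non-blocking switch (illustrative numbers).
    BUS_BANDWIDTH  = 10.0   # GB/s, total, shared by everyone on the bus
    LINK_BANDWIDTH = 1.0    # GB/s, per port on the switch

    for n_nodes in (2, 10, 100, 1000):
        per_node_bus    = BUS_BANDWIDTH / n_nodes   # degrades as nodes are added
        per_node_switch = LINK_BANDWIDTH            # stays constant (bigger switch)
        print(f"{n_nodes:5d} nodes: bus {per_node_bus:7.3f} GB/s/node, "
              f"switch {per_node_switch:.3f} GB/s/node")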

LHC readout architecture: the detector front-ends, steered by the Level-1/2 trigger, feed the readout units; an event manager supervises the event builder (network), which sends complete events to the event-filter units and finally to storage.

LHCb DAQ: about 40 GB/s into the farm and about 80 MB/s to tape; average event size 50 kB, average rate into the farm 1 MHz, average rate to tape 3 kHz.

LHC Experiments DAQ:

            Level-1 (kHz)   Event (MByte)   Storage (MByte/s)
  ATLAS          100             1                 200
  CMS            100             1                 200
  LHCb          1000             0.05              150
  ALICE            1            25                1250

Configuration, Control and Monitoring

Experiment Control System: supervises both the Detector Control System (HV, LV, gas, cooling, etc.) and the DAQ chain (detector channels, L0 trigger, TFC, front-end electronics, readout network, HLT farm, monitoring farm, storage), and interfaces to external systems (LHC, technical services, safety, etc.).

Control system tasks. Configuration: loading of parameters (according to run type) and enabling/disabling of parts of the experiment. Partitioning: the ability to run parts of the experiment in stand-alone mode simultaneously. Monitoring, error reporting & recovery: detect and recover from problems as fast as possible. User interfacing.

LHC control systems: based on commercial SCADA (Supervisory Control And Data Acquisition) systems, commonly used for industrial automation, control of power plants, etc. They provide: a configuration database and tools; run-time handling and archiving of monitoring data, including display and trending tools; alarm definition and reporting tools; user-interface design tools.

Control automation: the experiment runs 24 hours a day, 7 days a week, with only two (non-expert) operators, so automation avoids human mistakes and speeds up standard procedures. What can be automated: standard procedures (start of fill, end of fill) and the detection of and recovery from (known) error situations. How: finite-state-machine tools and expert-system-type tools (automated rules); a toy finite-state-machine sketch follows.
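A finite state machine for run control can be sketched in a few lines: each sub-system (or the whole experiment) sits in a well-defined state and only a fixed set of commands moves it between states, which is what makes automation (an "auto pilot") possible. The state and command names below are generic illustrations, not the actual LHCb ones.

    # Toy run-control finite state machine. States and commands are generic
    # illustrations, not the actual LHCb state names.
    TRANSITIONS = {
        ("NOT_READY", "configure"): "READY",
        ("READY",     "start"):     "RUNNING",
        ("RUNNING",   "stop"):      "READY",
        ("READY",     "reset"):     "NOT_READY",
        ("RUNNING",   "error"):     "ERROR",
        ("ERROR",     "recover"):   "NOT_READY",
    }

    class RunControlFSM:
        def __init__(self):
            self.state = "NOT_READY"

        def handle(self, command):
            new_state = TRANSITIONS.get((self.state, command))
            if new_state is None:
                raise ValueError(f"'{command}' not allowed in state {self.state}")
            self.state = new_state
            return self.state

    # An "auto pilot" is then just a set of rules that issues commands automatically:
    fsm = RunControlFSM()
    for cmd in ("configure", "start", "stop"):
        print(cmd, "->", fsm.handle(cmd))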

Monitoring: two types. Monitoring of the experiment's behaviour: automation tools whenever possible, plus a good (homogeneous) user interface. Monitoring of the quality of the data: automatic histogram production and analysis, user-driven histogram analysis, and event displays (raw data).

LHCb Run Control: homogeneous control of all parts of the experiment; stand-alone operation (partitioning); full automation ("Auto Pilot").

LHCb Histogram Presenter: monitors the quality of the data being acquired, compares histograms with references, and runs automatic analyses (a toy comparison sketch follows).
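Automatic histogram analysis can be as simple as a bin-by-bin comparison of the live histogram with a reference, flagging the histogram when the discrepancy exceeds some tolerance. A minimal sketch (not the actual LHCb analysis), with an invented distance measure and tolerance:

    # Toy data-quality check: compare a monitored histogram with a reference.
    # Both are normalized to unit area; the tolerance is an invented number.
    TOLERANCE = 0.05

    def normalized(hist):
        total = float(sum(hist))
        return [x / total for x in hist]

    def histograms_agree(monitored, reference, tolerance=TOLERANCE):
        m, r = normalized(monitored), normalized(reference)
        # mean absolute bin-by-bin difference of the normalized histograms
        distance = sum(abs(a - b) for a, b in zip(m, r)) / len(m)
        return distance < tolerance

    reference = [10, 50, 120, 60, 15]
    good_run  = [22, 98, 241, 118, 31]     # same shape, more entries -> agrees
    bad_run   = [22, 98, 241, 118, 300]    # extra peak in the last bin -> flagged
    print(histograms_agree(good_run, reference))   # True
    print(histograms_agree(bad_run, reference))    # False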

LHCb Online Event Display

Concluding Remarks: Trigger and Data Acquisition systems are becoming increasingly complex. Luckily, the requirements of telecommunications and of computing in general have strongly contributed to the development of standard technologies. Hardware: flash ADCs, analog memories, PCs, networks, helical-scan recording, data compression, image processing, ... Software: distributed computing, software development environments, supervisory systems, ... We can now build a large fraction of our systems from commercial components (customization will still be needed in the front-end). It is essential that we keep up to date with the progress being made by industry.