1
Intro to Triggering and Data Acquisition
Samuel Silverstein, Stockholm University
Trigger/DAQ basics
Collider TDAQ
Signal processing
Dead time
2
In this trigger/DAQ series:
Three segments:
- Trigger/DAQ introduction, concepts
- TDAQ architectures
- New developments
Exercises:
- Programmable logic (lab)
- Dead-time simulation (homework)
3
Introduction
4
Colliders produce a lot of data!
Physics at LHC: 1 interesting event in ~
Need a high collision rate:
- 40 MHz bunch crossing rate
- 25-50 proton collisions per bunch crossing
Collision data volume:
- About 1.5 MB/event (ATLAS)
- ~60 TB/s at 40 MHz!
Too much data!
5
Why is it too much? Can’t analyse it all… Can’t store it all…
(In reverse order…)
Can't analyse it all…
- Would need a million times more processing
Can't store it all…
- 60 TB/s is about 3.6 petabytes/minute
Can't get it all off the detector*
- High-bandwidth data links are expensive, take up space and consume power
- Practical consequences: heat dissipation; cables, power and cooling take up space, leaving dead material and “holes” in detector coverage
*N.B. New link technologies are making this less true today for some systems
6
So, the challenge is: Throw away 99.999% of the collision data
To keep data rates at manageable levels.
But don't throw away:
- Interesting SM processes
- New physics predicted by BSM theories
- Unexpected evidence of new physics
Doing all of this well is hard! And perfection is practically impossible.
Data acquisition must compromise between physics goals and what is technically achievable!
7
Data Acquisition Basics
8
Trivial DAQ example
[Diagram: external, physical and logical views of a trivial DAQ; physically a sensor, an ADC card, a CPU and a disk; logically sensor → ADC → processing → storage]
9
Trivial DAQ
How it works:
- Sensor produces an analog signal
- ADC periodically converts the analog output to digital values
- CPU reads the digital values from the ADC and writes them to disk (readout)
Problem: if the readout rate is much higher than the physics rate, there is a lot of uninteresting data to store and analyze.
Solution: initiate readout only if there is an interesting signal (a trigger). (A toy sketch follows below.)
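As a toy illustration of this idea, here is a minimal Python sketch of a trigger-gated readout loop. All names and the threshold value are hypothetical, chosen only to make the sketch self-contained:

```python
import random

THRESHOLD = 0.9  # hypothetical discriminator threshold

def sample_sensor():
    """Stand-in for the sensor + ADC: returns one digitized sample value."""
    return random.random()

def run_daq(n_cycles, storage):
    """Trivial triggered DAQ: read out only when the 'discriminator' fires."""
    for _ in range(n_cycles):
        value = sample_sensor()
        if value > THRESHOLD:       # trigger decision: "something happened"
            storage.append(value)   # readout: keep only interesting data

events = []
run_daq(100_000, events)
print(f"Kept {len(events)} of 100000 samples")
```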
10
What is a trigger? Wikipedia: “A system that uses simple criteria to rapidly decide which events in a particle detector to keep when only a small fraction of the total can be recorded.”
11
What is a trigger?
Simple: just need to decide whether to keep the data; detailed analysis comes later.
Rapid: data you want to keep may be lost if you wait too long to decide.
Selective: need to achieve a sufficient reduction in readout rate, because “only a small fraction can be recorded”.
12
Trivial DAQ with trigger
[Diagram: sensor → delay → ADC → processing → storage; a discriminator on the sensor signal provides the trigger (“when did something happen?”)]
- Discriminator: a simple pulse indicates the arrival of non-zero data
- Delay the signal so the trigger arrives in time to keep the event
- The trigger starts the ADC, and an interrupt tells the CPU to read out only the data you want to keep
13
Trivial DAQ with trigger
[Same diagram: sensor → delay → ADC → processing → storage, triggered by a discriminator]
Problem: the ADC and processing take time! What if a new trigger arrives while the ADC or processing is already busy?
14
Flow control
New triggers can interfere with the readout if a previous event is still being processed.
Solution: flow control
- The ADC/readout sends a ‘Busy’ signal to temporarily disable the trigger
- Events occurring during the ‘Busy’ period are not recorded (dead time)
15
DAQ with trigger & busy logic
[Diagram: sensor → delay → ADC → processing → storage; the trigger fires only on “discriminator AND NOT busy”; a latch (set when the busy period begins, cleared when processing finishes) holds Busy for the ADC readout + processing time]
Dead time (%): the percentage of time during which the DAQ cannot record new events.
16
Dead time
Problem:
- Triggers occur at random intervals
- But events take about the same time to read out (roughly Gaussian-distributed)
- As the trigger rate approaches the readout rate, dead time increases, and with it, loss of data!
Solution: add a derandomising buffer (FIFO)
17
Buffers reduce dead time
Buffers (FIFOs) receive new events as they arrive and send them out when the processor is ready, “averaging out” the arrival rate of new events.
Dead time depends on the trigger rate, readout time, processing time per event, and FIFO depth.
[Plot: efficiency (1 − dead time) vs. processing time / average trigger interval, for FIFO depths 1, 3 and 5; deeper buffers stay efficient closer to the point where the processing time equals the average trigger interval. Arrival times are random; processing time follows a Gaussian distribution.]
(A small simulation sketch follows below.)
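A minimal event-driven Python sketch of this behaviour, assuming exponentially distributed trigger intervals and Gaussian processing times; all parameter values are illustrative, not taken from the plot:

```python
import random

def efficiency(trigger_rate, mean_proc, depth, n_triggers=200_000):
    """Fraction of triggers accepted with a derandomizing FIFO of depth `depth`.

    Triggers arrive at random (exponential intervals); the processor takes a
    Gaussian-distributed time per event, as described on this slide."""
    t = 0.0
    done_times = []   # completion times of events currently in the FIFO
    accepted = 0
    for _ in range(n_triggers):
        t += random.expovariate(trigger_rate)            # next trigger time
        done_times = [d for d in done_times if d > t]    # drain finished events
        if len(done_times) < depth:                      # FIFO not full: accept
            start = done_times[-1] if done_times else t  # processor frees up
            proc = max(0.0, random.gauss(mean_proc, 0.2 * mean_proc))
            done_times.append(max(start, t) + proc)
            accepted += 1
        # else: FIFO full -> trigger blocked (dead time)
    return accepted / n_triggers

# Efficiency vs. FIFO depth, with processing time = 0.8 x trigger interval:
for depth in (1, 3, 5):
    print(depth, round(efficiency(trigger_rate=1.0, mean_proc=0.8, depth=depth), 3))
```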
18
Buffered DAQ
[Diagram: sensor → delay → ADC → FIFO → processing → storage; the trigger starts the ADC, and the busy logic now blocks triggers only while the FIFO is full]
Busy time = ADC time (as long as the buffer is not full).
A derandomizing buffer (FIFO) decouples ADC readout from data processing → better performance!
19
Triggering on bunch crossings
Particle colliders typically produce collisions at a known bunch-crossing (BX) interval.
The trigger system must:
- Analyse data from every BX
- Produce a yes/no trigger decision
- Read out data to DAQ only for selected BXs
Data from the other BXs are effectively “thrown away”.
20
Trivial collider-type trigger
[Diagram: as before, with the trigger now synchronized to the beam-crossing (BX) clock; the busy logic disables readout when the FIFO is full]
Collisions occur at known intervals (BX), so the trigger must look at every BX interval.
Here the TRIGGER prevents readout of uninteresting BXs.
21
Multi-level triggering
Collider experiments need to reject most of the events (~99.999% at LHC), but keep the valuable physics!
This cannot be achieved with a single trigger:
- Selective algorithms need full detector data
- Too complex to analyze 40M BXs per second
Solution: multi-level triggers
- Start with simple algorithms at high rates, typically implemented in custom hardware
- Perform more detailed algorithms, implemented in CPUs, on the (fewer) events that pass the previous levels
How to implement a multilevel trigger depends on the BX rate…
22
Collider BX rate evolution
23
Trigger/DAQ at LEP
e+e– crossing rate: 45 kHz (LEP-1)
[Diagram: LEP trigger levels: Level 1 (~100 Hz) → Level 2 (~10 Hz) → Level 3 (~8 Hz) to storage; readout ~6 ms]
- Level 1 trigger latency less than the BX interval → no dead time
- No overlapping events at any trigger level
- Most trigger/DAQ electronics located off-detector
24
Trigger/DAQ at LHC
p p crossing rate: 40 MHz (25 ns)
[Diagram: LHC trigger levels: 40 MHz into Level 1 → 100 kHz into Level 2 → HLT/readout at ~1 kHz to storage, with ≈ ms latencies at the higher levels]
- Level 1 trigger latency >> bunch interval
- Up to 10 μs for complete readout of an event
→ Need a pipelined trigger and DAQ
25
Pipelined trigger For every bunch crossing:
- Sample and store all detector data in fixed-length pipeline buffers (~few μs deep)
- Send reduced-granularity data to the Level-1 trigger over a low-latency path
- Level-1 does pipelined processing: many consecutive stages, producing a Level-1 accept decision (L1A) for every BX
- If an L1A is issued for an event, extract that event's data from the end of the pipeline buffers and read it out to DAQ
(See the sketch below.)
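A minimal Python sketch of the pipeline-buffer idea, assuming an illustrative latency of 100 BXs and a toy L1A pattern (both values are hypothetical):

```python
from collections import deque

LATENCY_BX = 100   # trigger latency expressed in bunch crossings (assumed value)

pipeline = deque([None] * LATENCY_BX, maxlen=LATENCY_BX)
readout = []       # data sent to DAQ

def bunch_crossing(sample, l1a):
    """One BX: the L1A decision arriving now refers to the sample that entered
    LATENCY_BX crossings ago, i.e. the one at the end of the pipeline."""
    if l1a:
        readout.append(pipeline[0])   # keep the event before it falls off
    pipeline.append(sample)           # maxlen deque drops pipeline[0] itself

# Toy run: accept BXs 399 and 799 (hypothetical L1A pattern)
for bx in range(1_000):
    bunch_crossing(sample=bx, l1a=(bx % 400 == 399))
print(readout)   # -> [299, 699]: the kept samples lag the accepts by LATENCY_BX
```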
26
Pipeline buffer
[Diagram: new detector samples enter the pipeline buffer every BX (25 ns); a few μs later, the event at the end of the pipeline is kept for readout if an L1A was received]
27
Simple pipelined trigger/DAQ
[Diagram: sensor data, sampled on the beam-crossing clock, enters a pipeline buffer; a pipelined Level-1 trigger issues an accept/reject (L1A) for every BX; accepted events pass through a FIFO to processing and storage, with busy logic on FIFO-full]
- The trigger latency is much longer than the BX interval, so sensor data are stored in a pipeline buffer while awaiting the trigger decision
- Only “interesting” events are selected
- The Level-1 trigger algorithm has a higher latency than the BX period, so it executes in many “pipelined” steps
28
TDAQ for multiple systems
Collider detectors are collections of detector systems working together:
- Tracking detectors
- Calorimeters
- Muon spectrometers
A single central trigger initiates readout for all detectors.
The DAQ collects and records data from all detectors for each triggered event.
29
Multi-system Trigger/DAQ
[Diagram: N channels per detector system, each system with its own ADC and FIFO, all started by a common trigger. Some processing proceeds in parallel per system; event building then assembles the data from the different detector components into a complete event, which is processed as a whole and sent to storage]
30
Trigger/DAQ architectures
31
Trigger/DAQ requirements
Three broad classes:
- High event rate, few relevant channels per event
- Medium event rate, many channels per event
- Lower event rate, high channel occupancy
(S. Cittolin, CERN/EP-CMD, LECC Workshop 2002)
32
LHCb: High rate, small event
L0 trigger rate: 1 MHz
Average event size: ~100 kB
33
ALICE: Low rate, large event
L1 trigger rate: 8 kHz
Average event size: > 40 MB
34
ATLAS and CMS: Medium rate, medium event
L1 trigger rate: ~100 kHz
Average event size: ~1 MB
35
Different DAQ approaches
Lower event rate, many channels (ALICE):
- Simpler trigger; naively: something happened (“activity”)
- Read out the entire event
High event rate, few channels (LHCb):
- Read out only the “active” channels
- Currently: zero-suppression in the DAQ readout
- Upgrading to a so-called “triggerless” DAQ
Medium event rate, many channels (ATLAS, CMS; I will mainly focus on these):
- Sophisticated, multi-level trigger
- More “physics-like” than an activity trigger
- Read out the entire event after a Level-1 trigger
36
ATLAS trigger
[Diagram: detectors → front-end pipelines → Level-1 trigger, producing regions of interest (RoIs) → readout buffers → Level-2 trigger → event building → processor farms (Level 3) → storage]
Level-1 (3.5 μs, custom processors):
- Isolated clusters, jets and ET in the calorimeters
- Muon trigger: tracking coincidence matrix
Level-2 (100 μs, processor farm):
- Guided by Regions of Interest (RoIs) identified by Level-1
- Selected detector data routed to CPUs by routers and switches
- Feature extractors (DSPs or specialized processors) perform refined object-ID algorithms
- Staged local and global processors
Level-3 (≈ ms, commercial processors):
- Reconstruct the event using all data
- Select interesting physics channels
37
LHCb trigger system
[Diagram: detectors at 40 MHz → Level-0 trigger → 1 MHz → front-end buffers → Level-1 trigger → 40 kHz → readout buffers → event building → processor farms (Levels 2 & 3) → storage]
Level-0 (4 μs, custom hardware):
- High-pT electrons, muons, hadrons
- Pile-up veto
Level-1 (1000 μs, specialized processors):
- Vertex topology (primary & secondary vertices)
- Tracking (connecting calorimeter clusters with tracks)
Level-2 (≈ ms, commercial processors):
- Refinement of Level-1; background rejection
Level-3 (≈ ms, commercial processors):
- Event reconstruction; selection of physics channels
38
TDAQ and run control
[Diagram: detector channels → front-end electronics → readout network → processing/filtering → storage, with the trigger driving the front end and a monitoring & run-control system overseeing the whole DAQ chain]
39
Putting it all together…
40
Three more concepts today
Front-end electronics
Signal processing
Dead time
Plus… an introduction to the FPGA lab
41
Front-end electronics and signal processing
42
Front-end electronics
Provide the interface between the detector sensors and the trigger/DAQ, including:
- Sensor signals → digital data
- Interface with the first-level trigger
- Pipeline buffering and readout to DAQ
A critical part of the detector design. Design choices strongly influence:
- Detector resolution and dynamic range
- Signal/noise ratio
- Maximum channel occupancy (pile-up)
- Etc.
43
Front-end electronics
- Input conditioning: convert detector input signals to a form useful for the trigger and readout (amplifiers, shapers, integrators...)
- Sampling and digitization (ADC)
- Buffering and readout
- Signal processing for the trigger: amplitude, timing
The first stages are closely related and often located on-detector, though they are beginning to migrate off-detector; signal processing is usually off-detector.
44
Input conditioning/sampling
Raw pulses (e.g. from a PMT or a Si strip detector) can be fast and have complex shapes:
- Would need fast ADCs to measure them directly
- Expensive, power-hungry, low dynamic range
A solution is pulse shaping:
- Convert fast pulses into a slower, well-defined shape, ideally amplitude-independent
- Match to an affordable ADC with the desired dynamic range (number of bits)
45
Simple charge integrator
(The simplest pulse-shaping circuit)
[Plot: detector pulses vs. time, the integrator output, and the ADC samples]
- Can extract the pulse height, but not a precise time (only to the nearest ADC clock)
- The long decay time is a problem if new pulses arrive too soon (“pile-up”)
46
Differentiator/integrator
A commonly used shaper in particle detectors: the “CR-RC” shaper
- Differentiator (CR, high-pass filter): sets the maximum amplitude of the shaped pulse and the pulse duration (decay time)
- Integrator (RC, low-pass filter): slows down the rise time of the pulse
(A numerical sketch follows below.)
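A small numerical sketch of an idealized CR-RC shaper, assuming equal CR and RC time constants; the time constants and the input pulse are illustrative, not taken from the slides:

```python
import numpy as np

tau = 25.0                 # CR and RC time constants, ns (assumed, equal)
dt = 0.5                   # time step, ns
t = np.arange(0.0, 300.0, dt)

# Impulse response of an idealized CR-RC shaper with equal time constants:
# h(t) = (t / tau) * exp(-t / tau)
h = (t / tau) * np.exp(-t / tau)

# A fast, PMT-like input pulse (short exponential decay)
pulse = np.exp(-t / 3.0)

# Shaper output = convolution of the input pulse with the impulse response
shaped = np.convolve(pulse, h)[: len(t)] * dt
print(f"shaped pulse peaks at t = {t[np.argmax(shaped)]:.1f} ns")
```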
47
ATLAS calorimeter shapers
- Tile (hadronic) calorimeter: unipolar shaper, makes a short PMT pulse longer
- LAr (EM) calorimeter: bipolar shaper, shortens a long ionization signal
48
Pile-up
Broad pulses are good for digital processing:
- Better amplitude and timing estimates
- Less sensitive to random noise
But overly broad pulses increase the pile-up rate.
Need to compromise.
49
Case study
The following is a simulation of a slow-shaper idea for a hadronic calorimeter.
Design constraints:
- Fast input pulse (PMT)
- Unipolar shaper (CR-RC)
- Want a large dynamic range: assumed a 16-bit ADC (13.5 effective bits) with up to 80 MHz sampling rate; oversampling increases the number of effective bits
- Low channel occupancy (low pile-up), so the pulse can be slow (~150 ns)
50
Analog input and shaper
[Circuit diagram: PMT pulse + noise → 50-ohm cable termination → charge integrator → CR high-pass filter (derivative) → 3-pole RC low-pass filter (integral) → differential output stage → to ADC]
51
Transient simulation
[Plots: PMT pulse, integrator output, high-pass filter output, and the shaper output to the ADC, with the 80 MHz ADC samples marked]
52
[Plots: simulated PMT input signal and the corresponding shaper output]
53
Shape is amplitude-independent
[Plot: shaper outputs for several pulse amplitudes, with the same shape]
A linear fit to an ideal shape gives: amplitude, timing and pedestal.
54
Sampled shape (80 MHz)
[Plot: the shaped pulse with its 80 MHz samples]
Multiple samples on the rising and falling edges give good timing estimates.
55
Signal processing
56
Signal processing
From the sampled signal, we need to extract:
- Pulse amplitude
- Timing of the pulse: coarse timing (which BX?) and fine timing (ns level), good for eliminating some backgrounds
A common approach is a digital filter. There are different approaches; I will show a finite-impulse-response (FIR) filter with matched coefficients…
57
FIR filter example
Matched filter:
- Coefficients Cn proportional to the normalized sample heights (including noise)
- C0 is a negative number (pedestal subtraction)
- Filter output is phase-dependent
[Diagram: consecutive samples S0…S5 weighted by the matched coefficients]
Time extraction: coefficients ~ derivative of the pulse shape (a minimal sketch of the amplitude filter follows below)
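A minimal Python sketch of such a matched filter. The pulse-shape values are illustrative only; in practice they would be taken from the measured or simulated shaper output:

```python
import numpy as np

# Normalized pulse shape at the sample positions (illustrative values)
shape = np.array([0.10, 0.55, 1.00, 0.65, 0.25])

# Matched-filter coefficients: Cn proportional to the normalized sample
# heights, with one extra negative coefficient C0 for pedestal subtraction
coeffs = np.concatenate(([-shape.sum()], shape))

def fir_amplitude(samples):
    """Apply the matched filter to six consecutive ADC samples, where
    samples[0] is a pedestal-only sample preceding the pulse."""
    return float(np.dot(coeffs, samples))

# Check: a pulse of amplitude A sitting on pedestal `ped` is recovered
# exactly, independent of the pedestal
A, ped = 40.0, 100.0
samples = ped + np.concatenate(([0.0], A * shape))
print(fir_amplitude(samples) / np.dot(shape, shape))   # -> 40.0
```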
58
Hardware implementation
ATLAS Level-1 trigger preprocessor: ADCs and a preprocessor ASIC on a multi-chip module (PPR MCM).
A peak finder identifies the BX of the pulse: sample n is a peak if S(n) > S(n−1) and S(n) ≥ S(n+1) (a sketch follows below).
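A one-function sketch of the peak finder, assuming the asymmetric comparison above (strict on one side, so a flat-topped pulse is assigned to a single BX):

```python
def peak_bxs(filtered):
    """BX indices where the filter output is a local maximum, using the
    assumed convention S(n) > S(n-1) and S(n) >= S(n+1)."""
    return [n for n in range(1, len(filtered) - 1)
            if filtered[n] > filtered[n - 1] and filtered[n] >= filtered[n + 1]]

print(peak_bxs([0, 1, 5, 9, 9, 4, 1, 0]))   # -> [3]: one peak, one BX
```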
59
FIR response of shaper sim
15 samples with fixed coefficients.
[Plot: FIR output (arbitrary units) vs. pulse time (ns), for a centered pulse (t = n) and for pulses at the previous (t = n−1) and next (t = n+1) ADC clocks]
A local maximum of the filter output gives the coarse pulse time (the closest ADC clock).
60
Fine-timing estimate (offline)
A second FIR filter, with coefficients calculated from the derivative of the pulse shape at each sample.
Sub-ns timing is possible, depending on noise and clock jitter.
[Plot: timing-filter output (arbitrary units) vs. pulse time (ns)]
(A minimal sketch follows below.)
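A minimal sketch of the derivative-coefficient timing filter, using the same illustrative pulse shape as above and leaving out the final calibration to nanoseconds:

```python
import numpy as np

shape = np.array([0.10, 0.55, 1.00, 0.65, 0.25])  # same illustrative shape

# Timing coefficients ~ derivative of the pulse shape at each sample
timing_coeffs = np.gradient(shape)

def fine_time(samples, amplitude):
    """Phase estimate from the derivative filter, for pedestal-subtracted
    samples: ~0 for an on-time pulse and, to first order, proportional to
    the time offset. Calibration to nanoseconds (a slope taken from the
    reference shape) is omitted in this sketch."""
    return float(np.dot(timing_coeffs, samples)) / amplitude

print(fine_time(40.0 * shape, 40.0))   # on-time pulse -> approximately 0
```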
61
Dead Time
62
Dead time: the fraction of time during which the system cannot record data.
Sources of dead time:
- ADC conversion time (less relevant for FADCs): finite time to sample and digitize an event
- Readout dead time: finite time to write ADC samples to readout buffers
- “Busy” condition in derandomising buffers: triggering of new events is inhibited to prevent overflow
- Operational dead time: detector readiness, etc.
63
ADC conversion time Collider experiments use fast, pipelined ADCs
- Digitize every BX and store the samples in a pipeline buffer
- Continuous sampling → no digitization delay, so the ADC is normally not a source of dead time
[Diagram: detector pulse → shaped pulse → 25 ns samples into the pipeline buffer]
64
Readout dead time Read out multiple samples per event
Simpler systems can have problems reading out overlapping events; the result is that you cannot trigger on events that are too close together in time. This can be avoided with digital readout.
[Diagram: two overlapping shaped pulses (events 1 and 2) and their 25 ns samples]
65
“Busy” condition “Leaky bucket” paradigm for derandomizing buffers
[Diagram: sensor → delay → ADC → FIFO → processing → storage; the FIFO asserts BUSY when its “high-water mark” is passed]
“Leaky bucket” paradigm for derandomizing buffers:
- Input arrives at random intervals; output drains at a ~constant rate
- If the “high-water mark” is passed, block the trigger
- Maximum sustainable trigger rate: Rmax = 1/Treadout
- Use sufficiently long buffers to minimize dead time below Rmax
66
Calculating the dead time
An analytic solution* exists, assuming a constant time to read out each event, with variables:
- N − 1 buffers
- R: input (trigger) rate
- τ: readout time
In reality, τ is not always constant and the dead-time model is often more complicated; Monte Carlo simulation provides better accuracy (a sketch follows below).
* Source: G.P. Heath, “Dead time due to trigger processing in a data acquisition system with multiple event buffering”, Nuclear Instruments and Methods in Physics Research A278 (1989)
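A minimal Monte Carlo sketch of the leaky-bucket model, using parameter values like those quoted for the ATLAS LAr example on the next slide. It assumes Poisson trigger arrivals and ignores the LHC bunch structure, so it is only a sketch of the idea:

```python
import random

def dead_time(depth, rate=75e3, readout=10.6e-6,
              min_gap=125e-9, n_triggers=500_000):
    """'Leaky bucket' Monte Carlo: Poisson L1 accepts at `rate`, a FIFO of
    `depth` events drained at one event per `readout` seconds, and a minimum
    spacing `min_gap` between accepts. Returns the fraction of lost triggers."""
    t, last_accept = 0.0, float("-inf")
    done_times = []   # completion times of events in the derandomizer
    vetoed = 0
    for _ in range(n_triggers):
        t += random.expovariate(rate)                  # next L1 accept
        done_times = [d for d in done_times if d > t]  # drain the bucket
        if len(done_times) >= depth or (t - last_accept) < min_gap:
            vetoed += 1                                # trigger lost: dead time
        else:
            start = done_times[-1] if done_times else t
            done_times.append(max(start, t) + readout)
            last_accept = t
    return vetoed / n_triggers

# Dead-time fraction vs. derandomizer depth:
for depth in (1, 2, 4, 8, 16):
    print(depth, f"{dead_time(depth):.4f}")
```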
67
Example: ATLAS LAr calorimeter
Basic parameters:
- BX rate: 40 MHz (25 ns)
- Maximum trigger rate: 75 kHz (upgrading to 100 kHz)
- Readout time (5 samples/event): 10.6 μs
Other complicating factors:
- At least five BXs (125 ns) required between two Level-1 accepts (L1As): readout dead time
- Not all bunches in the LHC are filled: nominally 2808 of 3564 (trains of 72 bunches), and no L1As are expected for empty bunches
- The analog pipeline buffers are 144 samples long, divided between the pipeline and the derandomizer FIFO, so the trigger latency affects the FIFO length (and the dead time)
68
Simulated LAr dead time
[Plot: simulated LAr dead time for several values of the Level-1 latency]
69
In my next two lectures…
Algorithms and architectures:
- More detailed examples of trigger systems and how they are built
- The “tools” available for collider detector triggers, and how they can be used
New directions: SLHC
- SLHC planning and implications
- New technologies and architectures
- TDAQ upgrades for SLHC
70
TDAQ lab exercise Field Programmable Gate Array (FPGA) tutorial
Specify in the VHDL language:
- A simple Boolean logic gate (e.g. AND)
- A coincidence counter
Then implement and test them in real hardware.
71
Homework
Write a Monte Carlo dead-time simulation of a semi-realistic LHC readout (based on the ATLAS LAr), using the ‘leaky bucket’ algorithm.
Parameters:
- 40 MHz bunch-crossing rate (25 ns)
- L1 accept rate (random): 75 kHz
- Readout time per event: 10.6 μs
- Minimum time between L1 accepts: 125 ns (5 bunch crossings)
Investigate: how deep must the derandomiser buffer be in order to keep the dead time below 1%?
72
Questions?