1
High Level Triggering Fred Wickens
2
High Level Triggering (HLT)
Introduction to triggering and HLT systems:
What is triggering?
What is High Level Triggering?
Why do we need it?
Case study of the ATLAS HLT (+ some comparisons with other experiments)
Summary
3
Simple trigger for spark chamber set-up
4
Dead time Experiments frozen from trigger to end of readout
Trigger rate with no deadtime = R per sec
Dead time per trigger = t sec
For 1 second of live time the elapsed time = 1 + Rt seconds
Live time fraction = 1/(1 + Rt)
Real trigger rate = R/(1 + Rt) per sec
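A small worked example of these formulae (Python sketch, purely illustrative; the rates below are made up):

```python
# Dead-time arithmetic from this slide:
# live-time fraction = 1 / (1 + R*t), real trigger rate = R / (1 + R*t)

def live_fraction(raw_rate_hz, dead_time_s):
    """Fraction of wall-clock time the experiment is live."""
    return 1.0 / (1.0 + raw_rate_hz * dead_time_s)

def real_rate(raw_rate_hz, dead_time_s):
    """Trigger rate actually recorded, given a dead time per trigger."""
    return raw_rate_hz * live_fraction(raw_rate_hz, dead_time_s)

# Example (illustrative numbers): 1 kHz raw rate, 1 ms dead time per trigger
print(live_fraction(1000, 1e-3))   # 0.5  -> 50% live
print(real_rate(1000, 1e-3))       # 500.0 Hz actually recorded
```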
5
Trigger systems 1980’s and 90’s
Bigger experiments meant more data per event; higher luminosities meant more triggers per second; both led to increased fractional deadtime.
Use multi-level triggers to reduce dead-time:
first level - fast detectors, fast algorithms
higher levels - can use data from slower detectors and more complex algorithms to obtain better event selection/background rejection
6
Trigger systems 1990’s and 2000’s
Dead-time was not the only problem. Experiments focussed on rarer processes: we need large statistics of these rare events, but it is increasingly difficult to select the interesting events. The DAQ system (and off-line analysis capability) came under increasing strain, limiting the useful event statistics. This is a major issue at hadron colliders, but is also significant at the ILC.
Use the High Level Trigger to reduce the requirements on:
the DAQ system
off-line data storage and off-line analysis
7
Summary of ATLAS Data Flow Rates
From detectors: > 10^14 Bytes/sec
After Level-1 accept: ~ 10^11 Bytes/sec
Into event builder: ~ 10^9 Bytes/sec
Onto permanent storage: ~ 10^8 Bytes/sec (~ 10^15 Bytes/year)
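A quick way to see the per-stage reduction factors implied by these numbers (Python sketch; rates copied from this slide, order-of-magnitude only):

```python
# Per-stage reduction factors implied by the quoted ATLAS data-flow rates
# (bytes/sec, taken from the slide above; order-of-magnitude figures only).
stages = [
    ("from detectors",       1e14),
    ("after Level-1 accept", 1e11),
    ("into event builder",   1e9),
    ("to permanent storage", 1e8),
]
for (name_in, rate_in), (name_out, rate_out) in zip(stages, stages[1:]):
    print(f"{name_in} -> {name_out}: reduction x{rate_in / rate_out:.0f}")
# Overall: 10^14 / 10^8 = 10^6 reduction before storage.
```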
8
TDAQ Comparisons
9
The evolution of DAQ systems
10
Typical architecture 2000+
11
Level 1 (Sometimes called Level-0 - LHCb)
Time: one to a few microseconds.
Standard electronics modules for small systems; dedicated logic for larger systems: ASIC - Application Specific Integrated Circuits; FPGA - Field Programmable Gate Arrays.
Reduced granularity and precision: calorimeter energy sums; tracking by masks.
Event data stored in front-end electronics (at the LHC a pipeline is used, as the time between collisions is shorter than the Level-1 decision time)
12
Level 2
1) few tens of microseconds (10-100 µs): hardwired, fixed algorithm, adjustable parameters
2) few milliseconds (1-100 ms): dedicated microprocessors, adjustable algorithm; 3-D, fine grain calorimetry; tracking, matching; topology. Different sub-detectors handled in parallel; primitives from each detector may be combined in a global trigger processor or passed to the next level
13
Level 2 - cont’d 3) few milliseconds (10-100) - 2006
Processor farm with Linux PCs; partial events received over a high-speed network; specialised algorithms. Each event is allocated to a single processor, with a large farm of processors to handle the rate. If Level 2 is a separate level, the data for each event are stored in many parallel buffers (each dedicated to a small part of the detector)
14
Level 3 millisecs to seconds processor farm
Historically microprocessors/emulators/workstations, now standard server PCs. Full or partial event reconstruction after event building (collection of all data from all detectors). Each event is allocated to a single processor, with a large farm of processors to handle the rate
15
Summary of Introduction
For many physics analyses the aim is to obtain as high statistics as possible for a given process, but we cannot afford to handle or store all of the data a detector can produce! What does the trigger do? It selects the most interesting events from the myriad of events seen, i.e. it makes better use of the limited output bandwidth: throw away less interesting events, keep all of the good events (or as many as possible). But note: we must get it right - any good events thrown away are lost for ever! The high level trigger allows much more complex selection algorithms
16
Case study of the ATLAS HLT system
Concentrate on issues relevant for ATLAS (CMS has very similar issues), but try to address some more general points
17
Starting points for any HLT system
the physics programme for the experiment - what are you trying to measure?
the accelerator parameters - what rates and bunch structures?
the detector and trigger performance - what data is available, and what trigger resources do we have to use it?
18
Physics at the LHC: interesting events are buried in a sea of soft interactions
B physics
High energy QCD jet production
top physics
Higgs production
19
The LHC and ATLAS/CMS
The LHC has:
design luminosity 10^34 cm^-2 s^-1 (in 2008 starting from ?)
bunch separation 25 ns (bunch length ~1 ns)
This results in:
~23 interactions / bunch crossing
~80 charged particles (mainly soft pions) / interaction
~2000 charged particles / bunch crossing
Total interaction rate ~10^9 sec^-1; the b-physics, top and Higgs fractions of this rate are successively many orders of magnitude smaller
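A rough cross-check of these numbers (Python sketch; the ~70 mb inelastic cross-section and ~2808 filled bunches are commonly quoted values, not taken from this slide):

```python
# Rough check of the slide's numbers: interaction rate = L * sigma.
L       = 1e34                    # design luminosity, cm^-2 s^-1 (from the slide)
sigma   = 70e-27                  # ~70 mb inelastic cross-section in cm^2 (assumption)
f_cross = 40e6 * (2808 / 3564)    # ~31.6 MHz of filled 25 ns bunch crossings (assumption)

rate = L * sigma                              # interactions per second
print(f"interaction rate ~ {rate:.1e} /s")    # ~7e8 /s, i.e. of order 10^9
print(f"interactions per crossing ~ {rate / f_cross:.0f}")   # ~22-23
```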
20
Physics programme Higgs signal extraction important but very difficult
Also there is lots of other interesting physics: B physics and CP violation; quarks, gluons and QCD; top quarks; SUSY; 'new' physics.
The programme will evolve with luminosity, HLT capacity and understanding of the detector:
low luminosity (first ~2 years) - high pT programme (Higgs etc.) and b-physics programme (CP measurements)
high luminosity - searches for new physics
21
Trigger strategy at LHC
To avoid being overwhelmed, use signatures with small backgrounds: leptons; high mass resonances; heavy quarks.
The trigger selection looks for events with: isolated leptons and photons; taus; central and forward jets; high ET; missing ET
22
Example Physics signatures
Objects and thresholds -> Physics signatures
Electron: 1e > 25 GeV, 2e > 15 GeV -> Higgs (SM, MSSM), new gauge bosons, extra dimensions, SUSY, W, top
Photon: 1γ > 60 GeV, 2γ > 20 GeV -> Higgs (SM, MSSM), extra dimensions, SUSY
Muon: 1μ > 20 GeV, 2μ > 10 GeV
Jet: 1j > 360 GeV, 3j > 150 GeV, 4j > 100 GeV -> SUSY, compositeness, resonances
Jet > 60 GeV + ETmiss > 60 GeV -> SUSY, leptoquarks
Tau > 30 GeV + ETmiss > 40 GeV -> Extended Higgs models, SUSY
23
ARCHITECTURE: Trigger and DAQ - 40 MHz (~1 PB/s equivalent) in, ~200 Hz out
Three logical levels, hierarchical data-flow:
LVL1 - fastest: only calo and muon; hardwired; on-detector electronics with pipelines; ~2.5 µs
LVL2 - local: LVL1 refinement + track association; event fragments buffered in parallel; ~10 ms
LVL3 - full event: 'offline'-style analysis; full event in a processor farm; ~1 sec
Output: ~200 Hz, ~300 MB/s to physics storage
24
Selected (inclusive) signatures
25
Trigger design - Level-1
sets the context for the HLT; reduces triggers to ~75 kHz; has a very short time budget of a few microseconds (ATLAS/CMS ~2.5 µs - much of it used up in cable delays!)
Detectors used must provide data very promptly and must be simple to analyse:
coarse grain data from the calorimeters
fast parts of the muon spectrometer (i.e. not the precision chambers)
NOT the precision trackers - too slow, too complex (LHCb does use some simple tracking data from their VELO detector to veto events with more than 1 primary vertex)
Proposed FP420 detectors provide data too late
26
ATLAS Level-1 trigger system
Calorimeter and muon triggers on inclusive signatures: muons; em/tau/jet calo clusters; missing and sum ET. Hardware trigger with programmable thresholds; selection based on multiplicities and thresholds
27
ATLAS em cluster trigger algorithm
“Sliding window” algorithm repeated for each of ~4000 cells
28
ATLAS Level 1 Muon trigger
RPC and TGC trigger chambers: measure the muon momentum with very simple tracking in a few planes of trigger chambers. RPC: Resistive Plate Chambers; TGC: Thin Gap Chambers; MDT: Monitored Drift Tubes
29
Level-1 Selection: the Level-1 trigger is an "or" of a large number of inclusive signals, set to match the current physics priorities and beam conditions. The precision of the cuts at Level-1 is generally limited. The overall Level-1 accept rate (and the relative frequency of different triggers) is adjusted by: adjusting thresholds; pre-scaling higher-rate triggers (e.g. only accept every 10th trigger of a particular type), which can also be used to include a low rate of calibration events. The menu can be changed at the start of a run; pre-scale factors may change during the course of a run
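A minimal sketch of how a pre-scale counter behaves (illustrative Python, not the ATLAS implementation):

```python
# Pre-scale counter: only every N-th Level-1 accept of a given trigger type
# is kept (names and structure are illustrative).
class Prescaler:
    def __init__(self, factor):
        self.factor = factor      # e.g. 10 -> keep every 10th trigger
        self.count = 0

    def accept(self):
        self.count += 1
        if self.count >= self.factor:
            self.count = 0
            return True
        return False

# Usage: a 10-fold pre-scale reduces a 12000-trigger sample to 1200 accepts
ps = Prescaler(10)
kept = sum(ps.accept() for _ in range(12000))
print(kept)   # 1200
```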
30
Example Level-1 Menu for 2x10^33
Level-1 signature - Output Rate (Hz)
EM25i - 12000
2EM15i - 4000
MU20 - 800
2MU6 - 200
J200, 3J90, 4J65
J60 + XE60 - 400
TAU25i + XE30 - 2000
MU10 + EM15i - 100
Others (pre-scaled, exclusive, monitor, calibration) - 5000
Total - ~25000
31
Trigger design - Level-2
Level-2 reduces the trigger rate to ~2 kHz. (Note CMS does not have a physically separate Level-2 trigger, but the HLT processors include a first stage of Level-2 algorithms.) The Level-2 trigger has a short time budget: ATLAS ~10 milli-sec on average. Note that for Level-1 the time budget is a hard limit for every event, whereas for the High Level Trigger it is the average that matters, so some events can take several times the average, provided they are a minority. Full detector data is available, but to minimise the resources needed: limit the data accessed; only unpack detector data when it is needed; use information from Level-1 to guide the process; analysis proceeds in steps with the possibility to reject the event after each step; use custom algorithms
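A back-of-the-envelope illustration of why the average (not the maximum) Level-2 latency drives the farm size (Python sketch using the rates quoted in these slides):

```python
# Events are farmed out in parallel, so the farm only needs enough processors
# to keep up on average (Little's law: in-flight events = rate x mean latency).
l1_accept_rate = 75e3      # Hz into Level-2 (from these slides)
mean_latency   = 10e-3     # s average per event (from these slides)

cores_needed = l1_accept_rate * mean_latency
print(f"~{cores_needed:.0f} events in flight, i.e. processors kept busy")  # ~750

# A single slow event (say 50 ms) only occupies one processor for longer;
# it does not dead-time the experiment the way a fixed Level-1 latency would.
```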
32
Regions of Interest: the Level-1 selection is dominated by local signatures (i.e. within a Region of Interest - RoI), based on coarse granularity data from the calorimeters and muon system only. Typically there are 1-2 RoIs per event. ATLAS uses RoIs to reduce the network bandwidth and processing power required
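A rough estimate of the bandwidth saved by the RoI mechanism (Python sketch; the event size and RoI fraction are taken from numbers quoted elsewhere in these slides):

```python
# Network bandwidth with and without the RoI mechanism, using the slides'
# numbers: 1600 fragments of ~1 kB per event, RoI data ~1-2% of the event,
# Level-2 input up to ~100 kHz.
event_size   = 1600 * 1e3     # ~1.6 MB per event
l2_input     = 100e3          # Hz
roi_fraction = 0.02           # Level-2 reads only ~2% of each event

full_readout = l2_input * event_size                  # ~160 GB/s
roi_readout  = l2_input * event_size * roi_fraction   # ~3 GB/s
print(f"full event build at Level-2: ~{full_readout/1e9:.0f} GB/s")
print(f"RoI-based readout:           ~{roi_readout/1e9:.0f} GB/s")
```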
33
Trigger design - Level-2 - cont’d
Processing scheme: extract features from sub-detector data in each RoI; combine the features from one RoI into an object; combine objects to test the event topology.
Precision of Level-2 cuts: the emphasis is on very fast algorithms with reasonable accuracy; they do not include many corrections which may be applied off-line; the calibrations and alignment available to the trigger are not as precise as those available off-line
34
ARCHITECTURE: Trigger / DAQ data flow
Calo, muon trigger chambers and other detectors feed LVL1 (calorimeter and muon triggers, front-end pipelines, 2.5 µs) at 40 MHz (~1 PB/s). On LVL1 accept (75 kHz) the data go via the Read-Out Drivers (RODs) and Read-Out Links (GB/s) into the Read-Out Sub-systems (ROSs) with their Read-Out Buffers (ROBs). RoIs from the RoI Builder (ROIB) steer the HLT: the LVL2 farm (supervisors L2SV, processors L2P, network L2N; ~10 ms per event; RoI data requests = 1-2% of the event, ~2 GB/s). On LVL2 accept (~2 kHz) the Event Builder (EB, ~3 GB/s) assembles full events for the Event Filter (EFP, EFN; ~1 sec per event), which outputs ~200 Hz, ~300 MB/s of physics data
35
CMS Event Building CMS perform Event Building after Level-1
This simplifies the architecture, but places much higher demands on the technology: network traffic ~100 GB/s; Myrinet is used instead of GbE for the EB network; a number of independent slices are planned, with a barrel shifter switching to a new slice for each event. Time will tell which philosophy is better
36
Example for Two electron trigger
HLT strategy: validate step-by-step, check intermediate signatures, reject as early as possible - the sequential/modular approach facilitates early rejection.
LVL1 triggers on two isolated e/m clusters with pT > 20 GeV (possible signature: Z -> ee), giving the Level-1 seed EM20i + EM20i. Then, in time order:
STEP 1: cluster shape -> signature ecand + ecand
STEP 2: track finding -> signature e + e
STEP 3: pT > 30 GeV -> signature e30 + e30
STEP 4: isolation -> signature e30i + e30i
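A minimal sketch of this step-wise validation with early rejection (illustrative Python; the RoI fields and cuts are placeholders, not ATLAS code):

```python
# Step-wise validation of a two-electron signature: each step refines the
# Level-1 seed and the event is rejected as soon as the required signature
# can no longer be satisfied.

def two_electron_chain(rois, pt_cut=30.0):
    """rois: list of dicts with the placeholder keys used below."""
    steps = [
        lambda r: r["cluster_shape_ok"],   # STEP 1: cluster shape
        lambda r: r["has_track"],          # STEP 2: track finding
        lambda r: r["pt"] > pt_cut,        # STEP 3: pT > 30 GeV
        lambda r: r["isolated"],           # STEP 4: isolation
    ]
    surviving = list(rois)
    for step in steps:
        surviving = [r for r in surviving if step(r)]
        if len(surviving) < 2:             # need e30i + e30i: reject early
            return False
    return True

# Example: two Level-1 EM20i seeds, one of which fails isolation
seeds = [
    {"cluster_shape_ok": True, "has_track": True, "pt": 34.0, "isolated": True},
    {"cluster_shape_ok": True, "has_track": True, "pt": 31.0, "isolated": False},
]
print(two_electron_chain(seeds))   # False (rejected at STEP 4)
```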
37
Trigger design - Event Filter / Level-3
The Event Filter reduces the trigger rate to ~200 Hz, with a budget of ~1 sec per event on average. The full event data is available, but to minimise the resources needed: only unpack detector data when it is needed; use information from Level-2 to guide the process; analysis proceeds in steps with the possibility to reject the event after each step; use optimised off-line algorithms
38
Electron slice at the EF
EF electron slice processing chain: TrigCaloRec (wrapper of CaloRec) -> EFCaloHypo -> EF tracking (wrapper of newTracking) -> EFTrackHypo -> TrigEgammaRec (wrapper of EgammaRec; matches electromagnetic clusters with tracks and builds egamma objects) -> EFEgammaHypo
39
HLT Processing at LHCb
40
Trigger design - HLT strategy
Level 2: confirm Level 1; some inclusive, some semi-inclusive, some simple topology triggers; vertex reconstruction (e.g. two-particle mass cuts to select Zs)
Level 3: confirm Level 2; more refined topology selection; near off-line code
41
Example HLT Menu for 2x10^33
HLT signature - Output Rate (Hz)
e25i - 40
- <1
gamma60i - 25
2gamma20i - 2
mu20i
2mu10 - 10
j400
3j165
4j110
j70 + xE70 - 20
tau35i + xE45 - 5
2mu6 with vertex, decay-length and mass cuts (J/psi, psi', B)
Others (pre-scaled, exclusive, monitor, calibration)
Total - ~200
42
Example B-physics Menu for 10^33
LVL1: MU6, rate 24 kHz (note there are large uncertainties in the cross-section); in case of larger rates use MU8 => 1/2 x rate; 2MU6
LVL2: run muFast in the LVL1 RoI (~9 kHz); run ID reconstruction in the muFast RoI; mu6 (combined muon & ID) ~5 kHz; run TrigDiMuon seeded by the mu6 RoI (or MU6); make exclusive and semi-inclusive selections using loose cuts: B(mumu), B(mumu)X, J/psi(mumu); run IDSCAN in the Jet RoI and make a selection for Ds(PhiPi)
EF: redo the muon reconstruction in the LVL2 (LVL1) RoI; redo track reconstruction in the Jet RoI; selections for B(mumu), B(mumuK*), B(mumuPhi), BsDsPhiPi etc.
43
LHCb Trigger Menu
44
Matching problem (diagram): the physics channel, the off-line selection and the on-line (trigger) selection overlap, sitting in a sea of background
45
Matching problem (cont.)
Ideally, the off-line algorithms select a phase space which shrink-wraps the physics channel, and the trigger algorithms shrink-wrap the off-line selection. In practice this doesn't happen, so the off-line algorithm selection needs to be matched. For this reason many trigger studies quote the trigger efficiency with respect to events which pass the off-line selection. BUT off-line can change the algorithm, re-process and recalibrate at a later stage. SO, make sure the on-line algorithm selection is well known, controlled and monitored
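A small sketch of the "efficiency with respect to off-line selection" definition used in such studies (illustrative Python; the event records are stand-ins for real analysis objects):

```python
# Trigger efficiency quoted with respect to events passing the off-line
# selection: count triggered events among the off-line-selected ones.
def trigger_efficiency(events):
    offline = [e for e in events if e["passes_offline"]]
    fired   = [e for e in offline if e["passes_trigger"]]
    return len(fired) / len(offline) if offline else float("nan")

events = [{"passes_offline": True,  "passes_trigger": True},
          {"passes_offline": True,  "passes_trigger": False},
          {"passes_offline": False, "passes_trigger": True}]
print(trigger_efficiency(events))   # 0.5: 1 of 2 offline-selected events triggered
```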
46
Selection and rejection
As selection criteria are tightened, the background rejection improves BUT the event selection efficiency decreases
47
Selection and rejection
Example of a recent ATLAS Event Filter (i.e. Level-3) study of the effectiveness of various discriminants used to select 25 GeV electrons from a background of dijets
48
Other issues for the Trigger
Efficiency and monitoring: in general we need high trigger efficiency, and for many analyses a well-known efficiency. Monitor the efficiency by various means: overlapping triggers; pre-scaled samples of triggers in tagging mode (pass-through). Final detector calibration and alignment constants are not available immediately - keep them as up-to-date as possible, and allow for the lower precision of the trigger cuts when defining trigger menus and in subsequent analyses. Code used in the trigger needs to be very robust: no memory leaks, low crash rate, fast. Beam conditions and HLT resources will evolve over several years (for both ATLAS and CMS): in 2008 the luminosity is low, but the HLT capacity will also be < 50% of the full system (funding constraints)
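A minimal sketch of measuring the efficiency from a pre-scaled pass-through sample (illustrative Python; the counts are invented and the error treatment is the simple binomial one):

```python
# Pass-through (tagging mode): events are accepted regardless of the trigger
# decision, and the recorded decision tells us how often it would have fired.
import math

def efficiency_with_error(n_would_fire, n_passthrough):
    eff = n_would_fire / n_passthrough
    err = math.sqrt(eff * (1 - eff) / n_passthrough)   # simple binomial error
    return eff, err

eff, err = efficiency_with_error(930, 1000)
print(f"efficiency = {eff:.3f} +/- {err:.3f}")   # 0.930 +/- 0.008
```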
49
Summary: High-level triggers allow complex selection procedures to be applied as the data is taken, and thus allow large numbers of events to be accumulated even in the presence of very large backgrounds - especially important at the LHC, but significant at most accelerators.
The trigger stages - in the ATLAS example:
Level 1 uses inclusive signatures: muons; em/tau/jet calo clusters; missing and sum ET
Level 2 refines the Level 1 selection; adds simple topology triggers, vertex reconstruction, etc
Level 3 refines Level 2; adds more refined topology selection
Trigger menus need to be defined taking into account physics priorities, beam conditions and HLT resources, and should include items for monitoring trigger efficiency and calibration.
Must get it right - any events thrown away are lost for ever!
50
Additional Foils
52
The evolution of DAQ systems
53
ATLAS Detector
54
The ATLAS Sub-Detectors
Inner tracker:
pixels (silicon, 3 layers): precision 3-D points; 1.4 x 10^8 channels; occupancy 10^-4
silicon strips (4 layers): precision 2-D points; 5.2 x 10^6 channels; occupancy 10^-2
transition radiation tracker (straw tubes, 40 layers): continuous tracking + electron identification; 4.2 x 10^5 channels; 12-33% occupancy
55
ATLAS Sub-Detectors (cont.)
solenoid (inside the calorimeters): 4 m x 7 m, 1.8 T
calorimetry: electromagnetic - liquid argon (accordion) + lead; hadronic - scintillator tiles & liquid argon + iron; 2.3 x 10^5 channels; occupancy 5-15%
muon system: air-core toroid magnet system; trigger - resistive plate and thin gap chambers; precision - monitored drift tubes; 1.3 x 10^6 channels; occupancy 2-7.5%
56
ATLAS event in the tracker
57
ATLAS event - tracker end-view
58
ATLAS event - tracker end-view
59
Trigger functional design
Level 1: input 40 MHz, accept 75 kHz, latency 2.5 μs. Inclusive triggers based on fast detectors: muon, electron/photon, jet, sum and missing ET triggers; coarse(r) granularity, low(er) resolution data; special purpose hardware (FPGAs, ASICs).
Level 2: input 75 (100) kHz, accept O(1) kHz, latency ~10 ms. Confirm Level 1 and add track information; mainly inclusive but some simple event topology triggers; full granularity and resolution available; farm of commercial processors with special algorithms.
Event Filter: input O(1) kHz, accept O(100) Hz, latency ~secs. Full event reconstruction; confirm Level 2; topology triggers; farm of commercial processors using near off-line code.
60
ATLAS Trigger / DAQ Data Flow
(Diagram: data of events accepted by the first-level trigger leave the ATLAS detector in UX15 via the Read-Out Drivers (RODs, VME) over 1600 dedicated Read-Out Links to ~150 Read-Out Subsystem (ROS) PCs in USA15. The first-level trigger, steered by the Timing Trigger Control (TTC), sends Regions of Interest via the RoI Builder to the LVL2 supervisors and the ~500 LVL2 farm PCs (dual-socket servers) in SDX1; the pROS stores the LVL2 output. The DataFlow Manager and Gigabit Ethernet network switches handle event data requests, requested event data and delete commands. The Event Builder (~100 SubFarm Inputs, SFIs) feeds the ~1600 Event Filter (EF) PCs; accepted events go to ~30 SubFarm Outputs (SFOs) with local storage and then to data storage at the CERN computer centre, at an event rate of ~200 Hz.)
61
Event’s Eye View - step-1
At each beam crossing, data are latched into the detector front end. After processing, the data are put into many parallel pipelines - the data move along the pipeline at every bunch crossing and fall out the far end after 2.5 microsecs. The calo + mu trigger data are also sent to Level-1
62
Event’s Eye View - step-2
The Level-1 Central Trigger Processor combines the information from the muon and calo triggers and, when appropriate, generates the Level-1 Accept (L1A). The L1A is distributed in real time via the TTC system to the detector front-ends, to send the data of the accepted event to the detector RODs (Read-Out Drivers). Note it must arrive before the data has dropped out of the pipeline - hence the hard deadline of 2.5 microsecs. The TTC system (Trigger, Timing and Control) is a CERN system used by all of the LHC experiments; it allows very precise real-time distribution of small data packets. The detector RODs receive the data, process and reformat it as necessary, and send it via fibre links to the TDAQ ROSs
63
Event’s Eye View - Step-3
At L1A the different parts of LVL1 also send RoI data to the RoI Builder (RoIB), which combines the information and sends it as a single packet to a Level-2 Supervisor PC. The RoIB is implemented as a number of VME boards with FPGAs to identify and combine the fragments coming from the same event from the different parts of Level-1
64
Step-4 ATLAS Level-2 Trigger
The Region of Interest Builder (RoIB) passes the formatted information to one of the LVL2 supervisors. The LVL2 supervisor selects one of the processors in the LVL2 farm and sends it the RoI information. The LVL2 processor requests data from the ROSs as needed (possibly in several steps), produces an accept or reject and informs the LVL2 supervisor; for an accept, the result of the processing is stored in the pseudo-ROS (pROS). This reduces the network traffic to ~2 GB/s, c.f. ~150 GB/s if a full event build were done. The LVL2 supervisor passes the decision to the DataFlow Manager (which controls Event Building). Event data for Level-2 is pulled: partial events @ ≤ 100 kHz
65
Step-5 ATLAS Event Building
For each accepted event the DataFlow Manager selects a Sub-Farm Input (SFI) and sends it a request to take care of the building of a complete event. The SFI sends requests to all ROSs for the data of the event to be built; completion of building is reported to the DataFlow Manager. For rejected events, and for events for which event building has completed, the DataFlow Manager sends "clears" to the ROSs (grouped for several events together). Network traffic for Event Building is ~5 GB/s. Event data after Level-2 is pulled: full events @ ~3 kHz
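A quick consistency check of the event-building traffic (Python sketch; the fragment count and size are taken from numbers quoted in these slides):

```python
# Back-of-the-envelope check of the event-building traffic.
n_fragments   = 1600         # Read-Out Links / fragments per event (from the slides)
fragment_size = 1e3          # ~1 kB each -> ~1.6 MB per full event
build_rate    = 3e3          # Hz of Level-2 accepts into the Event Builder

traffic = build_rate * n_fragments * fragment_size
print(f"event-building traffic ~ {traffic/1e9:.1f} GB/s")   # ~4.8 GB/s, i.e. ~5 GB/s
```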
66
Step-6 ATLAS Event Filter
A process (EFD) running in each Event Filter farm node collects each complete event from the SFI and assigns it to one of a number of Processing Tasks in that node. The Event Filter uses more sophisticated algorithms (near or adapted off-line) and more detailed calibration data to select events based on the complete event data. Accepted events are sent to an SFO (Sub-Farm Output) node to be written to disk. Event rate ~200 Hz
67
Step-7 ATLAS Data Output
The SFO nodes receive the final accepted events and write them to disk. The events include "Stream Tags" to support multiple simultaneous files (e.g. express stream, calibration, b-physics stream, etc). Files are closed when they reach 2 GB or at the end of a run. Closed files are finally transmitted via GbE to the CERN Tier-0 for off-line analysis. Event rate ~200 Hz
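Rough numbers for this output stage (Python sketch; the ~1.5 MB event size is an assumption consistent with the fragment counts quoted earlier):

```python
# Output-stage arithmetic: throughput to disk and how often a 2 GB file closes.
event_rate = 200            # Hz written by the SFOs (from the slides)
event_size = 1.5e6          # bytes per event (assumption, ~1600 x 1 kB fragments)
file_limit = 2e9            # files closed at 2 GB (from the slide above)

throughput = event_rate * event_size            # ~300 MB/s to disk
events_per_file = file_limit / event_size       # ~1300 events per file
seconds_per_file = file_limit / throughput      # a new 2 GB file every ~7 s
print(throughput / 1e6, events_per_file, seconds_per_file)
```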
68
ATLAS Trigger / DAQ Data Flow
(Trigger/DAQ data-flow diagram as above, annotated with rates: event data is pulled as partial events @ ≤ 100 kHz for Level-2 and as full events @ ~3 kHz for event building; events accepted by the first-level trigger arrive at ≤ 100 kHz as 1600 fragments of ~1 kByte each over the Read-Out Links.)
69
HLT hardware: part of the DAQ/HLT pre-series system, with a full LVL2 farm rack at the right
70
ATLAS TDAQ Barrack Rack Layout
71
UA1 Trigger
Level 1: <4 µs using hardwired processors; muon track segments; em showers; jets; ET; rate ~30 Hz (reduction factor 10^3 - 10^4); zero deadtime as the decision time < bunch separation.
Level 2: ~7 ms using CPUs; muon tracking using drift time; 3-D calorimetry; position detectors; rate ~3 Hz (reduction factor ~10); deadtime 30 Hz x 0.007 s ≈ 20% - the front end is frozen during the Level 2 decision time.
72
UA1 Level 1
73
UA1 Level 2 and 3
74
UA1 Trigger (cont.)
Level 3: ~100 ms using a 3081E farm; partial event reconstruction; calorimeter and tracking; event topology; reduction factor ~3; deadtime 3 Hz x 0.03 s ≈ 10% - the time to read the data into the processor system (~30 ms).
75
LEP (ALEPH)
luminosity /cm2/s
bunch separation 22 µs -> 45 kHz crossing rate (4 bunches); 11 µs -> 90 kHz (8 bunches)
event rate Hz
channels ~10^6
read-out rate 13 Hz
transfer rate ~10 Mbytes/sec
76
Level 1 ALEPH trigger ~4µs decision time + 6 µs clear time (<11µs )
hardwired processors; calorimeter energy sums and ITC tracks; accept rate 3 - 30 Hz (5 Hz typ.); zero deadtime as the process time < bunch separation
77
Level 2 ALEPH trigger (cont.) 60µs decision plus clear time
hardwired LUT processor for TPC data, operating on L1 track triggers only; accept rate 2 - 6 Hz (2 Hz typ.); deadtime 2 bx x 5 Hz (L1) / 45 kHz = 0.02% for 4 bunches (similarly ~0.03% for 8-bunch running at 90 kHz)
78
Level 3 ALEPH trigger (cont.) readout time ~10ms.
microVAX farm; processing time ~1 s/processor (partially reconstructed data); accept rate 13 Hz (design rate 1 Hz); deadtime for readout 10 ms x 2 Hz (L2) = 2%