Methods of Experimental Particle Physics
Alexei Safonov
Lecture #18
Today
Lecture
Presentations: D0 calorimeter by Jeff
Trigger
Collisions at the LHC
Finding anything at a hadron collider requires first getting rid of the enormous backgrounds from QCD multi-jet production. We cannot even write all of these events to disk, so we need a trigger; more on that later.
[Diagram: LHC collision parameters. 7 TeV proton beams collide, giving a collision energy of 14 TeV (about 14,000 proton masses); the protons travel at 99.999999% of the speed of light; 2808 bunches per beam with about 10^11 (100 billion) protons per bunch; bunch crossings every 25 ns (7.5 m apart), i.e. at 40 MHz; proton collisions (and the parton collisions inside them) occur at roughly 10^9 Hz; new particles (Higgs, SUSY, ...) are produced at rates from about 1 Hz down to 10 µHz (10^-5 Hz).]
Triggering and QCD
There is a reason QCD is called the strong interaction: the cross sections for strong processes are large. Most of these are "soft" QCD events and are not very interesting: we already know about jets, so now they are mostly an obstacle. We need a device that lets us discard the uninteresting events and keep the interesting ones. The two can look alike: even though jets and leptons usually look different, occasionally a jet can fake a lepton, and the initial rate is so large that "occasionally" can still be very frequent in absolute terms.
Making Discoveries Come Faster
Because interesting events are rare, you first have to produce a huge number of uninteresting ones. You can either increase the number of particles per bunch or add more bunches, and both create challenges:
Many particles per bunch: you end up with many overlapping events ("pile-up") within the same crossing, which makes it difficult to disentangle things, lowering efficiency and reducing the discrimination between signal and background.
Many bunches: a short time between collisions means the detector must "recover" from the previous collision within a very short time, and it must also be read out very fast.
Both impose technological limitations on the detector electronics design, and also on the computing resources. In real life one pursues both, keeping a balance of cost and effectiveness.
Many Bunches
At the LHC the time between collisions is 25 ns. The detector needs to recover from the previous interaction and be ready again, or it incurs "dead-time". The readout is also under pressure: 1 MB of data every 25 ns corresponds to a bandwidth of 320 terabits per second, an insane number for current technologies.
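As a rough sanity check on that 320 Tb/s figure, here is a minimal back-of-the-envelope calculation using the event size and crossing interval quoted above (taking 1 MB as 10^6 bytes):

```python
# Back-of-the-envelope readout bandwidth if every crossing were read out in full.
event_size_bits = 1e6 * 8      # 1 MB per event, in bits (1 MB taken as 10^6 bytes)
crossing_period_s = 25e-9      # 25 ns between bunch crossings

bandwidth_bps = event_size_bits / crossing_period_s
print(f"{bandwidth_bps / 1e12:.0f} Tb/s")   # -> 320 Tb/s
```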
Many Overlapping Events
Very high occupancies (hits and particles per unit of detector granularity) create many detector design and performance challenges. Even if the detector can "operate", poor data quality means you will not be able to do much at analysis time.
Many Overlapping Events
Very high occupancies per unit of detector granularity bring many challenges:
Nice, inexpensive detectors, e.g. gas drift chambers, become inefficient because of their long drift times.
Calorimeter measurements suffer: with low granularity the deposits simply sit on top of each other; with high granularity it is still a problem, because you never know which interaction a specific deposit came from.
The only measurements relatively immune to this come from tracking, since tracking lets you tell which track came from which vertex (see the sketch below). But you can't do a physics analysis based on tracks only. Or can you?
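To make the "tracking tells you which vertex a track came from" point concrete, here is a toy sketch that matches each track to the nearest reconstructed vertex along the beam line. The function name, the 0.1 cm window, and the numbers are purely illustrative, not actual experiment code:

```python
# Toy pile-up mitigation: assign each track to the nearest primary vertex in z.
def assign_tracks_to_vertices(track_z0s, vertex_zs, max_dz=0.1):
    """Return, for each track, the index of the closest vertex in z,
    or None if no vertex lies within max_dz (cm)."""
    assignments = []
    for z0 in track_z0s:
        distances = [abs(z0 - vz) for vz in vertex_zs]
        best = min(range(len(vertex_zs)), key=lambda i: distances[i])
        assignments.append(best if distances[best] < max_dz else None)
    return assignments

# Example: three vertices from overlapping interactions, four tracks.
print(assign_tracks_to_vertices([0.02, 5.08, -3.0, 9.9], [0.0, 5.0, -3.05]))
# -> [0, 1, 2, None]
```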
Detectors and Pile-Up
An illustration of overlapping signals. You can get rid of the problem by using very "fast" and finely segmented detectors, but the cost will skyrocket.
Triggering Basics
Two possible paths:
Recognize uninteresting events and discard them. Not very practical: there are many ways an uninteresting event can look, so it is hard to identify all possible modes, and if you don't, whatever is left can still be far too much.
Recognize interesting events and keep them. More practical, because you can build more sophisticated requirements targeting specific topologies. You build many "triggers", each going after a specific type of event, and discard events that are not flagged by any of them (a sketch follows below). The more exclusive you go, the less likely a background event is to pass your requirements, but it is also dangerous: you may miss a discovery, since in this approach you must know what you are looking for. One has to strike a balance between exclusive and inclusive triggers so as not to miss something important.
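The "keep what looks interesting" strategy amounts to an OR over many selections. Here is a minimal sketch of that logic; the path names, thresholds, and event format are invented for illustration and are not the actual CMS or ATLAS menu:

```python
# Toy trigger menu: keep an event if any path in the menu accepts it.
trigger_menu = {
    "SingleElectron_Et25": lambda ev: any(e > 25 for e in ev.get("electron_et", [])),
    "TripleMuon_Pt5":      lambda ev: len([p for p in ev.get("muon_pt", []) if p > 5]) >= 3,
    "DiJet_Et100":         lambda ev: len([e for e in ev.get("jet_et", []) if e > 100]) >= 2,
}

def trigger_decision(event):
    fired = [name for name, path in trigger_menu.items() if path(event)]
    return bool(fired), fired   # keep the event only if at least one path fired

keep, fired = trigger_decision({"muon_pt": [7.0, 6.2, 5.5], "jet_et": [40.0]})
print(keep, fired)   # -> True ['TripleMuon_Pt5']
```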
Boundary Conditions
On the input side:
Bunch crossing rate: 40 MHz.
Interaction rate: 1-10 GHz (depending on how many overlapping events there are).
Data rate: hundreds of terabits per second.
On the output side: events must be written to disk so that the data can be analyzed. With reasonable assumptions about how much you can spend, the affordable writing rate is about 100-300 crossings per second (100-300 Hz); multiplied by a 1 MB event size, that is a few gigabits per second.
What's in between? The trigger!
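Putting the input and output sides next to each other gives the required rejection and the output bandwidth. A minimal calculation with the numbers quoted above (300 Hz to disk, 1 MB per event):

```python
input_rate_hz = 40e6        # bunch crossing rate
output_rate_hz = 300        # affordable rate to disk
event_size_bytes = 1e6      # ~1 MB per event

rejection = input_rate_hz / output_rate_hz
output_bw_gbps = output_rate_hz * event_size_bytes * 8 / 1e9

print(f"rejection ~ 1 in {rejection:,.0f}")              # -> rejection ~ 1 in 133,333
print(f"output bandwidth ~ {output_bw_gbps:.1f} Gb/s")   # -> ~ 2.4 Gb/s
```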
How to Build a Trigger
We need to bring the rate down to something manageable, but we can't lose data. The solution is to delay the full readout until you know the event is interesting: build "pipelines" in the front-end electronics that hold the data, and "go parallel". This is possible if the electronics is highly segmented, with each piece serving a small portion of the coverage of a specific detector system (e.g. one muon chamber). Unless you go nuts segmenting your readout (which would be very expensive), the rates are still too high for any kind of commercial computer, so fast custom electronics is needed. One can use only a fraction of the data (say, reduced granularity to lower the rate) or build a more elaborate electronics system.
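A toy model of such a front-end pipeline is sketched below: data from each crossing is pushed into a fixed-depth buffer and can only be read out if the Level-1 accept arrives before the data falls off the end. The class name, depth, and latency budget are illustrative assumptions, not real firmware parameters:

```python
from collections import deque

# Toy front-end pipeline: hold each crossing's data for up to `depth` crossings
# while waiting for the Level-1 decision; older data is dropped automatically.
class Pipeline:
    def __init__(self, depth=128):            # e.g. 128 crossings x 25 ns = 3.2 us budget
        self.buffer = deque(maxlen=depth)

    def store(self, bx, data):
        self.buffer.append((bx, data))

    def readout(self, bx):
        """Return the data for bunch crossing `bx` if it is still buffered."""
        for stored_bx, data in self.buffer:
            if stored_bx == bx:
                return data
        return None   # decision came too late: the data is already gone

pipe = Pipeline(depth=4)
for bx in range(10):
    pipe.store(bx, f"hits@{bx}")
print(pipe.readout(8), pipe.readout(2))   # -> hits@8 None
```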
Trigger Designs
Conventional trigger systems use three levels:
Ultra-fast electronics (ASICs/FPGAs) and fast connections.
Slower but smarter electronics (or super-fast processors).
A conventional computer farm.
Traditional Approach (ATLAS)
CMS Trigger Design
Algorithmic Considerations
The idea is always the same: do something fast and dirty first to quickly recognize "junk" (and if you do, stop processing); more intelligent, and thus slower, algorithms come later. By then the rate has already been reduced by the "fast and dirty" stage, so you can spend more time per event without creating a bottleneck that leads to dead-time. The deeper the storage pipelines, the more time you have to make a decision, but the system becomes more and more expensive. Again, one needs to strike a balance.
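For intuition, a worked example with illustrative numbers (not taken from the lecture): if the fast step takes t_fast = 1 ms and passes only a fraction f = 1% of events on to a slow step taking t_slow = 100 ms, the average time per event is

<t> = t_fast + f * t_slow = 1 ms + 0.01 * 100 ms = 2 ms,

so the expensive reconstruction time is spent almost exclusively on events that have already survived the cheap filter, and the average stays close to the cost of the fast step.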
Parallelization
Tree-like structure of decision making.
Level-1
CMS and ATLAS do not have tracking at Level-1. This is nothing to be proud of: we can more or less survive now, but it won't last long. The only reason we do it is that we cannot handle the data rates of the current tracker.
CMS Trigger
CMS Level-1 and DAQ
Current system design.
General HLT Sequence
Conventionally it is broken into L2, L2.5, and L3:
L2: repeat the L1 algorithms at full segmentation. Fast, and eliminates the easy-to-eliminate events.
L2.5: add only limited tracking information, i.e. pixel detector hits or (later within L2.5) pixel tracks and vertices. Not as fast, but allows large rejections, although the limited tracking capability means reduced resolution and potential efficiency losses.
L3: add full tracking and particle flow. Slow, but by this point the number of incoming events is hopefully already small enough for it to keep up.
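A minimal sketch of this staged structure is shown below: cheap steps run first and the event stops being processed as soon as one step rejects it. The step names, the event format, and the pass/fail flags are illustrative assumptions, not the actual CMS HLT configuration:

```python
# Toy HLT path: run cheap steps first, stop as soon as one of them rejects the event.
hlt_steps = [
    ("L2: calo/muon at full granularity",  lambda ev: ev["calo_ok"]),
    ("L2.5: pixel tracks and vertices",    lambda ev: ev["pixel_ok"]),
    ("L3: full tracking / particle flow",  lambda ev: ev["full_reco_ok"]),
]

def run_hlt_path(event):
    for name, step in hlt_steps:
        if not step(event):
            return False, name          # rejected here; later (slower) steps never run
    return True, "accepted"

print(run_hlt_path({"calo_ok": True, "pixel_ok": False, "full_reco_ok": True}))
# -> (False, 'L2.5: pixel tracks and vertices')
```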
Trigger Table
A typical experiment has hundreds of what are called trigger paths. Each path is a sequence of requirements at Level-1 and in the HLT. Essentially you are looking for a specific object (e.g. a very energetic electron) or a topology (e.g. three muons with high pT), and you can use earlier paths as bricks when building a new one. Each path has its "owners", who maintain it and continuously improve their trigger. An analysis usually uses one or a few of these trigger paths.
A special group usually deals with allocating the available bandwidth among trigger paths: it reviews proposed triggers and their physics motivation, suggests modifications (say, to improve background rejection or to make a trigger usable for more than one purpose), and allocates the available bandwidth to specific paths based on physics priorities. The result of this allocation is the "trigger table". It is dynamic: needs change and different triggers have different growth terms in their rates, so frequent rebalancing is needed.
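One way to picture a trigger table is as a list of paths with expected rates that must fit inside the output budget. A toy bookkeeping sketch; the path names and rates are invented for illustration:

```python
# Toy trigger table: check that the summed expected rates fit the output budget.
trigger_table_hz = {
    "SingleElectron_Et25": 40.0,
    "SingleMuon_Pt20":     35.0,
    "TripleMuon_Pt5":       5.0,
    "DiJet_Et100":         60.0,
    "MET_100":             25.0,
}
budget_hz = 300.0

total = sum(trigger_table_hz.values())
print(f"allocated {total:.0f} Hz of {budget_hz:.0f} Hz "
      f"({budget_hz - total:.0f} Hz left for new or growing triggers)")
# -> allocated 165 Hz of 300 Hz (135 Hz left for new or growing triggers)
```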
Data Storage
Once the trigger has decided to keep an event, the data is written to disk. The data is sent in several streams based on the type of objects and physics, so that for your analysis you only need to filter events from one stream instead of looping over ten times more events. The data is then processed further before it becomes available for analysis: within a few days the events move from Tier-0 to Tier-1 and beyond as the full event reconstruction is performed. There are various "standard" formats: information that is rarely used gets dropped to make the events smaller, but a full event record is kept somewhere (at one of the Tier-1 centers). Eventually the data, in one of the lighter formats, moves to Tier-2, where it can be accessed by analyzers.
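The stream splitting can be sketched as routing each accepted event to every stream whose defining trigger paths overlap with the paths that fired. The stream definitions below are invented for illustration and reuse the hypothetical path names from the earlier sketches:

```python
# Toy stream routing: write an accepted event to every stream whose
# defining trigger paths overlap with the paths that fired.
streams = {
    "SingleLepton": {"SingleElectron_Et25", "SingleMuon_Pt20"},
    "MultiMuon":    {"TripleMuon_Pt5"},
    "JetMET":       {"DiJet_Et100", "MET_100"},
}

def route_event(fired_paths):
    return [name for name, paths in streams.items() if paths & set(fired_paths)]

print(route_event(["SingleMuon_Pt20", "MET_100"]))   # -> ['SingleLepton', 'JetMET']
```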
Next Time
Monte Carlo event generators
Detector emulation
This lecture borrowed many of its slides from one of the lectures on triggers by Wesley Smith (UW-Madison).