1
The ATLAS Trigger and Data Acquisition: a brief overview of concept, design and realization
John Erik Sloper, ATLAS TDAQ group, CERN Physics Dept.
2
Overview
• Introduction
• Challenges & Requirements
• ATLAS TDAQ Architecture
  - Readout and LVL1
  - LVL2 Triggering & Region of Interest
  - Event Building & Event Filtering
• Current status of installation
3
Introduction
Me:
• John Erik Sloper, originally from Bergen, Norway
• With the TDAQ group for 3½ years
• Computer science background, currently enrolled at the University of Warwick, Coventry, for a PhD in engineering
Today:
• An overview of Trigger and Data AcQuisition (TDAQ) from a practical viewpoint, using the real ATLAS TDAQ system as an example
• We will go through the entire architecture of the TDAQ system, from readout to storage
• A brief overview of the status of the installation
4
Data acquisition and triggering
Data acquisition:
• Gathering: receiving data from all read-out links for the entire detector
• Processing: "building the events", i.e. collecting all the data that correspond to a single event, and serving the triggering system with data
• Storing: transporting the data to mass storage
Triggering:
• The trigger has the job of selecting the bunch-crossings of interest for physics analysis, i.e. those containing interactions of interest
• It tells the data acquisition system which data should be used for further processing
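"Building the events", i.e. gathering every fragment that belongs to one bunch-crossing, is conceptually a keyed merge. The sketch below is a toy illustration of that idea only; the `EventBuilder` class and fragment format are invented for the example and are not the ATLAS dataflow software.

```python
# Minimal sketch of "building the events": collect one fragment per read-out
# link, keyed by the event (bunch-crossing) identifier, and hand the event on
# once every link has reported. Illustrative only -- not the ATLAS dataflow code.

class EventBuilder:
    def __init__(self, n_links):
        self.n_links = n_links
        self.partial = {}                    # event_id -> {link_id: fragment}

    def add_fragment(self, event_id, link_id, fragment):
        frags = self.partial.setdefault(event_id, {})
        frags[link_id] = fragment
        if len(frags) == self.n_links:       # all links present: event is complete
            return self.partial.pop(event_id)
        return None                          # still waiting for more fragments

builder = EventBuilder(n_links=3)
for link in range(3):
    event = builder.add_fragment(event_id=42, link_id=link, fragment=b"payload")
print("built" if event else "still partial")  # "built" after the third fragment
```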
5
What is an “event” anyway?
• In high-energy particle colliders (e.g. Tevatron, HERA, LHC), the particles in the counter-rotating beams are bunched; bunches cross at regular intervals, and interactions only occur during the bunch-crossings
• In this presentation "event" refers to the record of all the products of a given bunch-crossing
• The term "event" is not uniquely defined! Some people use "event" for the products of a single interaction between the incident particles, and people sometimes unwittingly use "event" interchangeably to mean different things
6
Trigger menus
• Typically, trigger systems select events according to a "trigger menu", i.e. a list of selection criteria; an event is selected by the trigger if one or more of the criteria are met
• Different criteria may correspond to different signatures for the same physics process: redundant selections lead to high selection efficiency and allow the efficiency of the trigger to be measured from the data
• Different criteria may reflect the wish to concurrently select events for a wide range of physics studies: HEP "experiments", especially those with large general-purpose "detectors" (detector systems), are really experimental facilities
• The menu has to cover the physics channels to be studied, plus additional event samples required to complete the analysis: measuring backgrounds, checking the detector calibration and alignment, etc.
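The "list of criteria, accept if any one fires" logic can be illustrated with a toy menu. The item names, thresholds and event fields below are invented for the example and are not the real ATLAS menu.

```python
# Toy trigger menu: the event is accepted if at least one menu item fires.
# Item names, thresholds and the event dictionary format are invented.

menu = {
    "MU20":  lambda ev: any(pt > 20.0 for pt in ev.get("muon_pt", [])),
    "EM25I": lambda ev: any(et > 25.0 for et in ev.get("isolated_em_et", [])),
    "XE60":  lambda ev: ev.get("missing_et", 0.0) > 60.0,
}

def passes_menu(event):
    fired = [name for name, cut in menu.items() if cut(event)]
    # Keeping the list of fired items is what lets redundant selections be used
    # to measure trigger efficiencies from the data themselves.
    return bool(fired), fired

accepted, items = passes_menu({"muon_pt": [4.2, 23.1], "missing_et": 12.0})
print(accepted, items)   # True ['MU20']
```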
7
ATLAS/CMS physics requirements
Triggers in the general-purpose proton-proton experiments, ATLAS and CMS, will have to retain as many as possible of the events of interest for the diverse physics programmes of these experiments:
• Higgs searches (Standard Model and beyond), e.g. H → ZZ → leptons, H → γγ; also H → ττ, H → bb
• SUSY searches, with and without R-parity conservation
• Searches for other new physics, using inclusive triggers that one hopes will be sensitive to any unpredicted new physics
• Precision physics studies, e.g. measurement of the W mass
• B-physics studies (especially in the early phases of these experiments)
8
Overview
• Introduction
• Challenges & Requirements
• ATLAS TDAQ Architecture
  - Readout and LVL1
  - LVL2 Triggering & Region of Interest
  - Event Building & Event Filtering
• Current status of installation
9
ATLAS TDAQ
[Picture of the ATLAS detector: 44 m long, 22 m in diameter, weight 7000 t]
10
Particle multiplicity
η = rapidity = -ln tan(θ/2) (the longitudinal dimension)
• Charged particles per unit of η per interaction ≈ 6; η range covered ≈ 7 units
• n_ch = charged particles per interaction = 6 × 7 = 42
• N_ch = total charged particles per bunch crossing = n_ch × 23 ≈ 900
• N_tot = total particles per bunch crossing = N_ch × 1.5 ≈ 1400
… still much more complex than a LEP event: the LHC flushes each detector with ~1400 particles every 25 ns (p-p operation)
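The multiplicity estimate is just a chain of multiplications; the snippet below simply redoes the arithmetic quoted on the slide.

```python
# Arithmetic behind the particle-multiplicity estimate on this slide.
charged_per_unit_eta = 6        # charged particles per unit of eta per interaction
eta_range            = 7        # units of eta covered
n_ch  = charged_per_unit_eta * eta_range   # charged particles / interaction      -> 42
N_ch  = n_ch * 23                          # 23 overlapping interactions / BC     -> ~900
N_tot = N_ch * 1.5                         # add neutrals (factor ~1.5)           -> ~1400
print(n_ch, N_ch, round(N_tot))            # 42 966 1449
```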
11
The challenge
How to extract this (Higgs → 4μ) … from this (the same event overlaid with ~30 minimum-bias interactions) … at a rate of ONE in ten thousand billion, and without knowing where to look: the Higgs could be anywhere up to ~1 TeV, or even nowhere…
12
Global requirements
• No. of overlapping events per 25 ns: 23
• No. of particles in ATLAS per 25 ns: ~1400
• Data throughput:
  - at the detectors (40 MHz): equivalent to 10's of TB/s
  - after LVL1 accept: O(100) GB/s
  - to mass storage: O(100) MB/s
13
Immediate observations
• Very high rate: new data every 25 ns; it is virtually impossible to make real-time decisions at this rate, as there is not even time for signals to propagate through the electronics
• Amount of data: TB/s obviously cannot be stored directly; no hardware or networks exist (or at least none affordable!) that can handle this amount of data, and even if they did, analyzing all the data would be extremely time consuming
• The TDAQ system must therefore reduce the amount of data by several orders of magnitude
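To put "several orders of magnitude" in numbers, the sketch below combines figures quoted elsewhere in this talk (40 MHz input, ~200 Hz and ~300 MB/s to mass storage, ~1.5 MB per event). It is a back-of-the-envelope check, not a specification.

```python
# Reduction the TDAQ chain must provide (back-of-the-envelope, using numbers
# quoted elsewhere in this talk).
input_rate  = 40e6      # Hz, bunch-crossing rate
output_rate = 200.0     # Hz, to mass storage
event_size  = 1.5e6     # bytes per event

rate_reduction = input_rate / output_rate            # ~2e5
input_bw  = input_rate * event_size                  # ~60 TB/s "equivalent"
output_bw = output_rate * event_size                 # ~300 MB/s
print(f"rate reduction ~{rate_reduction:.0e}, "
      f"bandwidth {input_bw/1e12:.0f} TB/s -> {output_bw/1e6:.0f} MB/s")
```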
14
Overview
• Introduction
• Challenges & Requirements
• ATLAS TDAQ Architecture
  - Readout and LVL1
  - LVL2 Triggering & Region of Interest
  - Event Building & Event Filtering
• Current status of installation
15
Global view
• The ATLAS TDAQ architecture is based on a three-level trigger hierarchy: Level 1, Level 2, Event Filter
• It uses a Level 2 selection mechanism based on a subset of the event data --> the Region of Interest
• This reduces the amount of data needed for the LVL2 filtering, and therefore greatly reduces the demand on dataflow power
• Note that ATLAS differs from CMS on this point
16
Multi-level triggers provide
• Rapid rejection of high-rate backgrounds without incurring (much) dead-time
  - fast first-level trigger (custom electronics); needs high efficiency, but the rejection power can be comparatively modest
• High overall rejection power to reduce the output to mass storage to an affordable rate
  - the progressive reduction in rate after each stage of selection allows the use of more and more complex algorithms at affordable cost
  - the final stages of selection, running on computer farms, can use comparatively very complex (and hence slow) algorithms to achieve the required overall rejection power
(Nick Ellis, Seminar, Lecce, October 2007)
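The cascade can be sketched as a chain of stages, each seeing only what survived the previous one. The per-event budgets and rejection factors below are taken loosely from numbers quoted later in this talk; the code is illustrative, not a real implementation.

```python
# Sketch of a three-level cascade: the surviving rate drops at every stage, so
# later stages can afford much slower, more complex algorithms.
# Budgets and rejections are rough numbers quoted elsewhere in this talk.

stages = [
    # (name, time per event [s], rejection factor)
    ("LVL1 (custom electronics)",  2.5e-6, 40e6 / 75e3),
    ("LVL2 (RoI-based software)",  10e-3,  75e3 / 2e3),
    ("EF   (full-event software)", 4.0,    2e3 / 200),
]

rate = 40e6                                   # 40 MHz in
for name, t_event, rejection in stages:
    capacity = rate * t_event                 # crude "processing seconds per second" needed
    rate /= rejection
    print(f"{name}: budget {t_event*1e3:g} ms/event, "
          f"needs ~{capacity:.0f} units of capacity, output ~{rate:.0f} Hz")
```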
17
Overview
• Introduction
• Challenges & Requirements
• ATLAS TDAQ Architecture
  - Readout and LVL1
  - LVL2 Triggering & Region of Interest
  - Event Building & Event Filtering
• Current status of installation
18
ARCHITECTURE (Functional elements and their connections)
[Architecture diagram: the Calo, MuTrCh and other detectors deliver data at 40 MHz into the front-end (FE) pipelines; the LVL1 trigger issues its accept (LVL1 acc) within a latency of 2.5 μs.]
19
LVL1 selection criteria
Features that distinguish new physics from the bulk of the cross-section for Standard Model processes at hadron colliders are:
• In general, the presence of high-pT particles (or jets), e.g. the products of the decays of new heavy particles; in contrast, most of the particles produced in minimum-bias interactions are soft (pT ~ 1 GeV or less)
• More specifically, the presence of high-pT leptons (e, μ, τ), photons and/or neutrinos, e.g. the products (directly or indirectly) of new heavy particles; these give a clean signature cf. the low-pT hadrons of the minimum-bias case, especially if they are "isolated" (i.e. not inside jets)
• The presence of known heavy particles, e.g. W and Z bosons may be produced in Higgs decays; leptonic W and Z decays give a very clean signature, and are also interesting for physics analysis and detector studies
20
LVL1 signatures and backgrounds
LVL1 triggers therefore search for:
• High-pT muons: identified beyond the calorimeters; a pT cut is needed to control the rate from π → μν and K → μν decays, as well as semi-leptonic beauty and charm decays
• High-pT photons: identified as narrow EM calorimeter clusters; need a cut on ET; cuts on isolation and a hadronic-energy veto strongly reduce the rates from high-pT jets
• High-pT electrons: same as photons (a matching track is required in the subsequent selection)
• High-pT taus (decaying to hadrons): identified as narrow clusters in the EM+hadronic calorimeters
• High-pT jets: identified as clusters in the EM+hadronic calorimeters; need to cut at very high pT to control the rate (jets are the dominant high-pT process)
• Large missing ET or total scalar ET
21
Level-1: Only Calorimeters and Muons
[Figure: the detector systems available to the Level-1 trigger]
22
Level-1 latency
• Interactions every 25 ns: in 25 ns particles travel 7.5 m
• Cable lengths of ~100 metres: in 25 ns signals travel 5 m
• Total Level-1 latency = (TOF + cables + processing + distribution) = 2.5 μs
• For those 2.5 μs, all signals must be stored in electronics pipelines (there are ~10^8 channels!)
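The depth of the front-end pipelines follows directly from these two numbers; a one-line check, assuming the 2.5 μs and 25 ns figures above.

```python
# Pipeline depth implied by the Level-1 latency and the bunch spacing.
latency       = 2.5e-6    # s, total Level-1 latency (TOF + cables + processing + distribution)
bunch_spacing = 25e-9     # s, one crossing every 25 ns
print(f"each channel must buffer {latency / bunch_spacing:.0f} bunch crossings")   # 100
```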
23
24
Global Architecture - Level 1
[Diagram: LHC beam crossings feed the detector (inner tracker, calorimeters, muon trigger chambers, muon tracker); calorimetry and muon trigger-chamber data go to the Level-1 Trigger and its Central Trigger Processor, which issues the Level-1 Accept, while the full data wait in the FE pipelines of the data acquisition.]
25
ARCHITECTURE (Functional elements and their connections)
[Architecture diagram, extended: on a LVL1 accept, event data leave the FE pipelines through the Read-Out Drivers (RODs), travel over the Read-Out Links and are stored in the Read-Out Buffers (ROBs).]
26
ATLAS (A Toroidal LHC ApparatuS): trigger rate & event size
LVL1 trigger rates in kHz (2×10^33 / 10^34 cm-2s-1; no safety factor included):
  MU (20): 0.8 / 4.0
  2MU6: 0.2 / 1.0
  EM25I (30): 12.0 / 22.0
  2EM15I (20): 5.0
  J (290), 3J (130), 4J (90)
  J60 + xE ( ): 0.4 / 0.5
  TAU25I + xE (60+60): 2.0
  MU10 + EM15I: 0.1
  Others (pre-scales, calibration, …)
  Total: ~25 / ~40
Read-out (channels / no. of ROLs / fragment size in kB):
  Inner Detector: Pixels 1.4×10^8 / 120 / 0.5; SCT 6.2×10^6 / 92 / 1.1; TRT 3.7×10^5 / 232 / 1.2
  Calorimeters: LAr 1.8×10^5 / 764 / 0.75; Tile 10^4 / 64 / -
  Muons: MDT 3.7×10^5 / 192 / 0.8; CSC 6.7×10^4 / 32 / 0.2; RPC 3.5×10^5 / - / 0.38; TGC 4.4×10^5 / 16 / -
  Trigger: LVL1 - / 56 / 1.2
LVL1 trigger rate (high luminosity) = 40 kHz; total event size = 1.5 MB; total no. of ROLs = 1600
Design the system for 75 kHz --> 120 GB/s (upgradeable to 100 kHz, 150 GB/s)
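The totals on this slide hang together arithmetically; the snippet below checks the event size against the number of Read-Out Links and the LVL1 output bandwidth against the design rate.

```python
# Consistency check of the totals quoted on this slide (no safety factors).
n_rols        = 1600
fragment_size = 1.0e3         # ~1 kB average per ROL fragment
event_size    = 1.5e6         # quoted total event size, bytes
design_rate   = 75e3          # Hz design LVL1 accept rate (upgradeable to 100 kHz)

print(f"{n_rols} x ~1 kB ~ {n_rols*fragment_size/1e6:.1f} MB  (quoted: 1.5 MB)")
print(f"75 kHz x 1.5 MB ~ {design_rate*event_size/1e9:.0f} GB/s  (quoted: 120 GB/s)")
print(f"100 kHz x 1.5 MB ~ {100e3*event_size/1e9:.0f} GB/s  (quoted: 150 GB/s)")
```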
27
Overview
• Introduction
• Challenges & Requirements
• ATLAS TDAQ Architecture
  - Readout and LVL1
  - LVL2 Triggering & Region of Interest
  - Event Building & Event Filtering
• Current status of installation
28
ARCHITECTURE (Functional elements and their connections)
[Architecture diagram, extended: the LVL1 accept rate is 75 kHz, giving 120 GB/s over the Read-Out Links into the Read-Out Buffers (ROBs); in parallel, the LVL1 Region-of-Interest information is sent to the RoI Builder (ROIB).]
29
Region of Interest - Why?
• The Level-1 selection is dominated by local signatures, based on coarse granularity (calorimeters, muon trigger chambers), without access to the inner tracking
• Important further rejection can be gained with local analysis of the full detector data
• The geographical addresses of the interesting signatures identified by LVL1 (the Regions of Interest) allow access to the local data of each relevant detector, sequentially
• Typically there are 1-2 RoIs per event accepted by LVL1 (<RoIs/event> ≈ 1.6)
• The resulting total amount of RoI data is minimal: a few % of the Level-1 throughput
30
RoI mechanism - Implementation
• There is a simple correspondence between an η-φ region and ROB number(s), for each detector --> for each RoI, the list of ROBs holding the corresponding data from each detector is quickly identified (by the LVL2 processors)
• This mechanism provides a powerful and economical way to add an important rejection factor before full Event Building --> the ATLAS RoI-based Level-2 trigger
• … a ReadOut network roughly one order of magnitude smaller … at the cost of higher control traffic
[Diagram: an example event with 4 RoI η-φ addresses; note that this example is atypical, as the average number of RoIs per event is ~1.6]
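The "simple correspondence" between an η-φ region and ROB numbers is essentially a static lookup table built at configuration time. The sketch below shows the idea with an invented, much-simplified granularity and made-up ROB identifiers; the real mapping is per detector and far finer.

```python
# Sketch of the RoI mechanism: a static map from coarse eta-phi bins to the ROBs
# holding the corresponding data, per detector. Granularity and IDs are invented.
import math

N_ETA, N_PHI = 8, 8                       # toy granularity

def region_index(eta, phi):
    ieta = min(N_ETA - 1, int((eta + 2.5) / 5.0 * N_ETA))     # |eta| < 2.5 assumed
    iphi = int((phi % (2 * math.pi)) / (2 * math.pi) * N_PHI)
    return ieta * N_PHI + iphi

# Configuration-time table: region index -> ROB ids, per detector (invented ids).
roi_map = {
    "LAr":   {i: [1000 + i] for i in range(N_ETA * N_PHI)},
    "Pixel": {i: [2000 + 2 * i, 2000 + 2 * i + 1] for i in range(N_ETA * N_PHI)},
}

def robs_for_roi(eta, phi, detectors=("LAr", "Pixel")):
    idx = region_index(eta, phi)
    return {det: roi_map[det][idx] for det in detectors}

print(robs_for_roi(eta=0.7, phi=1.2))   # the L2PU requests data only from these ROBs
```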
31
ARCHITECTURE (Functional elements and their connections)
[Architecture diagram, extended: the RoI Builder (ROIB) and the LVL2 Supervisor (L2SV) assign each accepted event to a LVL2 Processing Unit (L2PU), which requests RoI data from the Read-Out Sub-systems (ROSs) over the L2 network (L2N) and produces the LVL2 accept.]
There are typically only 1-2 RoIs per event. Only the RoI Builder and the ROB inputs see the full LVL1 rate (over the same simple point-to-point links).
32
LVL2 Traffic
[Diagram: detector front-end, Level-1 Trigger, readout buffers; the LVL2 Supervisor distributes events over the L2 network (under the Controls) to the L2 Processors, which return their decisions.]
33
L2 & EB networks: full system on the surface (SDX1)
[Diagram: the ROS systems in USA15 connect via 144 Gb Ethernet links, through a cross switch, to the L2 switch on the surface, which serves the L2SV and the L2PUs (~80).]
34
Level-2 Trigger
Three parameters characterise the RoI-based Level-2 trigger:
• the amount of data required: 1-2% of the total
• the overall time budget in the L2P: 10 ms on average
• the rejection factor: × 30
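Taken together with the 1.5 MB event size and the 75 kHz LVL1 rate quoted earlier, these three parameters fix the LVL2 data traffic and output rate. A rough check, with illustrative numbers only.

```python
# What the three LVL2 parameters imply, using the event size and LVL1 rate
# quoted elsewhere in this talk (1.5 MB/event, 75 kHz). Rough, illustrative numbers.
event_size   = 1.5e6        # bytes
lvl1_rate    = 75e3         # Hz
roi_fraction = 0.015        # "1-2% of the total" -> take ~1.5%
rejection    = 30

roi_bytes_per_event = roi_fraction * event_size          # ~22 kB actually moved per event
roi_bandwidth       = roi_bytes_per_event * lvl1_rate    # ~1.7 GB/s into the LVL2 farm
lvl2_output_rate    = lvl1_rate / rejection              # ~2.5 kHz (quoted as ~2 kHz)
print(f"{roi_bytes_per_event/1e3:.0f} kB/event, {roi_bandwidth/1e9:.1f} GB/s, "
      f"{lvl2_output_rate/1e3:.1f} kHz out")
```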
35
Overview
• Introduction
• Challenges & Requirements
• ATLAS TDAQ Architecture
  - Readout and LVL1
  - LVL2 Triggering & Region of Interest
  - Event Building & Event Filtering
• Current status of installation
36
ARCHITECTURE (Functional elements and their connections)
[Architecture diagram, extended: LVL2 (average latency ~10 ms, RoI data = 1-2% of the event) accepts ~2 kHz; accepted events are assembled by the Event Builder (EB): the Dataflow Manager (DFM) assigns each event to a Sub-Farm Input (SFI), which collects the fragments from the Read-Out Sub-systems over the Event Building network (EBN) and passes the complete event on towards the Event Filter network (EFN).]
37
Event Building
[Diagram: detector front-end, Level-1 + Level-2 triggers, readout systems; the DataFlow Manager and the builder networks, under the Controls, assemble complete events and pass them to the Event Filter.]
38
ARCHITECTURE (Functional elements and their connections)
[Complete architecture diagram with rates and bandwidths:
• detectors: 40 MHz, 10's of TB/s (equivalent)
• LVL1 (2.5 μs latency): accept = 75 kHz --> 120 GB/s over the Read-Out Links into the Read-Out Sub-systems
• LVL2 (RoI data = 1-2%, ~40 ms per event): accept = ~2 kHz; network traffic ~2+4 GB/s for RoI collection and event building (ROIB, L2SV, L2PU, DFM, EBN, SFI)
• Event Filter (EFP, ~4 s per event, ~4 GB/s in): accept = ~0.2 kHz (~200 Hz) --> Sub-Farm Output (SFO) --> mass storage at ~300 MB/s ("1 CD / sec")]
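The bandwidth to mass storage is simply the Event Filter accept rate times the event size; the snippet also shows what that implies per day of continuous running (a derived figure, not one from the slide).

```python
# Output to mass storage implied by the Event Filter accept rate.
ef_rate    = 200       # Hz (~0.2 kHz)
event_size = 1.5e6     # bytes (~1.5 MB/event)
to_storage = ef_rate * event_size                 # ~300 MB/s
print(f"{to_storage/1e6:.0f} MB/s to mass storage, "
      f"~{to_storage*86400/1e12:.0f} TB per day of continuous running")
```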
39
L2 & EB networks: full system on the surface (SDX1)
[Diagram: the ROS systems in USA15 connect via 144 Gb Ethernet links, through a cross switch, to the L2 switch and the EB switch on the surface; the L2 side serves the L2SV and the L2PUs (~80), the EB side serves the DFM and the SFIs (~110), which feed the Event Filter and the SFOs. GbE switches of this size can be bought today.]
40
Overview
• Introduction
• Challenges & Requirements
• ATLAS TDAQ Architecture
  - Readout and LVL1
  - LVL2 Triggering & Region of Interest
  - Event Building & Event Filtering
• Current status of installation
41
TDAQ ARCHITECTURE
[Summary diagram: the Read-Out (ROD, ROB, ROS) feeds the High-Level Trigger (HLT) and the DataFlow; LVL1 (2.5 μs) --> ROIB --> LVL2 (L2SV, L2P, L2N, ~10 ms) --> Event Building (DFM, EBN, SFI) --> Event Filter (EFP, ~seconds) --> SFO.]
42
Trigger / DAQ architecture
[Diagram: in UX15, the first-level trigger selects events; data of accepted events flow from the Read-Out Drivers (RODs; VME, dedicated links) over 1600 Read-Out Links into the Read-Out Subsystems (ROSs, ~150 PCs) at ≤ 100 kHz, as 1600 fragments of ~1 kByte each.]
43
Read-Out System (ROS)
• 153 ROSs installed in USA15 (completed in August 2006): 149 = 100% of those foreseen for the "core" detectors, + 4 hot spares; all fully equipped with ROBINs
• PC properties: 4U, 19" rack-mountable PC; motherboard Supermicro X6DHE-XB; CPU: uni-processor 3.4 GHz Xeon; RAM: 512 MB; redundant power supply; network booted (no local hard disk); remote management via IPMI
• Network: 2 GbE onboard (1 for the control network) + 4 GbE on a PCI-Express card (1 for LVL2 + 1 for event building)
• Spares: 12 full systems in the pre-series as hot spares; 27 PCs purchased (7 for new detectors)
• ROBINs (Read-Out Buffer Inputs): input 3 ROLs (S-Link I/F); output 1 PCI I/F and 1 GbE I/F (for upgrade if needed); buffer memory 64 MB (~32k fragments per ROL)
• ~700 ROBINs installed, plus spares; 70 more ROBINs ordered (for new detectors and more spares)
• System is complete: no further purchasing/procurement foreseen
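One way to read the ROBIN buffer figure: ~32k fragments per ROL corresponds to roughly a third of a second of buffering at the maximum 100 kHz LVL1 rate, i.e. far longer than the ~10 ms average LVL2 decision time. A back-of-the-envelope check, assuming the quoted numbers.

```python
# Buffering time per Read-Out Link offered by one ROBIN at full LVL1 rate.
fragments_per_rol = 32_000     # quoted capacity (~32k fragments per ROL)
lvl1_rate         = 100e3      # Hz, maximum LVL1 accept rate
print(f"~{fragments_per_rol / lvl1_rate:.2f} s of buffering per ROL")   # ~0.32 s
```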
44
Trigger / DAQ architecture
[Diagram, extended to the surface (SDX1): the second-level trigger farm (~500 nodes) receives the Regions of Interest from the LVL2 Supervisor and pulls partial event data from the ROSs over Gigabit Ethernet at ≤ 100 kHz; a pROS stores the LVL2 output; the DataFlow Manager and ~100 Event Builder SubFarm Inputs (SFIs) pull full events at ~3 kHz and pass them to the Event Filter (EF, ~1800 nodes); event data requests, requested event data and delete commands flow through the network switches.]
45
Central Switches
• Data network (event-builder traffic, LVL2 traffic): Force10 E1200, 1 chassis + blades for 1 and 10 GbE as required. For July 2008: 14 blades in 2 chassis, ~700 GbE. Final system: extra blades.
• Back-end network (to the Event Filter and SFOs): Force10 E600, 1 chassis + blades for 1 GbE. For July 2008: blades as required, following the EF evolution. Final system: extra blades.
• Core online/control network (run control, databases, monitoring samplers): Force10 E600, 1 chassis + blades for 1 GbE. For July 2008: 2 chassis + blades, following the full-system evolution. Final system: extra blades.
46
HLT Farms
• Final size for the maximum L1 rate (TDR): ~500 PCs for L2 and ~1800 PCs for the EF (multi-core technology: same number of boxes, more applications). Recently decided: ~900 of the above will be XPUs, connected to both the L2 and EF networks.
• 161 XPU PCs installed, 130 of them with 8 cores: CPU 2 × Intel E5320 quad-core 1.86 GHz; RAM 1 GB per core (8 GB per box); cold-swappable power supply; network booted, with a local hard disk; remote management via IPMI
• Network: 2 GbE onboard, 1 for the control network, 1 for the data network; a VLAN for the connection to both the data and back-end networks
• For July 2008: a total of 9 L2/EF racks, as in the TDR for the 2007 run (1 rack = 31 PCs). Final system: a total of 17 L2/EF racks, of which 28 (of 79) racks with an XPU connection.
47
DAQ/HLT in SDX1: +~90 racks (mostly empty…) [photo]
48
Trigger / DAQ architecture
[Diagram, complete: events accepted by the Event Filter (~200 Hz) go to 6 Local Storage SubFarm Outputs (SFOs) among the dual-CPU nodes in SDX1, and from there to data storage at the CERN computer centre; the Timing, Trigger and Control (TTC) system distributes the first-level trigger decision; the rest of the chain (ROSs, LVL2, Event Builder, EF) is as on the previous slides.]
49
SubFarm Output
• 6 SFO PCs installed (5 + 1 hot spare): 5U, 19" rack-mountable PC; Intel motherboard; CPU 2 × Intel E5320 dual-core 2.0 GHz; RAM 4 GB; quadruple cold-swappable power supply; network booted; local hard disk (for recovery from crashes and for pre-loading events for testing, e.g. the FDR); remote management via IPMI
• Disks: 3 SATA RAID controllers; 24 disks × 500 GB = 12 TB
• Network: 2 GbE onboard (1 for control and IPMI, 1 for data) + 3 GbE on a PCIe card for data
• System is complete, no further purchasing foreseen; the required bandwidth (300 MB/s) is already available, to facilitate detector commissioning and calibrations in the early phase
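The 12 TB of local disk can also be read as a buffer against interruptions of the transfer to the CERN computer centre; a rough estimate at the nominal 300 MB/s output rate (an interpretation, not a statement from the slide).

```python
# How long the SFO local disks can absorb data at the nominal output rate.
total_disk   = 24 * 500e9      # 24 disks x 500 GB = 12 TB
nominal_rate = 300e6           # ~300 MB/s aggregate
print(f"~{total_disk / nominal_rate / 3600:.0f} h of local buffering at 300 MB/s")   # ~11 h
```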
50
CABLING!
51
TDAQ System operational at Point-1
Routinely used for:
• Debugging and standalone commissioning of all detectors after installation
• TDAQ Technical Runs: physics selection algorithms run in the HLT farms on simulated data pre-loaded into the Read-Out System
• Commissioning Runs of the integrated ATLAS: cosmic data taken through the full TDAQ chain (up to Tier-0) after final detector integration
52
53
The M4 Commissioning Run (Aug 23 - Sep 03, 2007)
• Essentially all detectors integrated
• Full DAQ chain (up to Tier-0)
• Tile+RPC+TGC triggering on cosmics
[Screenshots: shifters and systems readiness, Run Control tree, trigger status, event rate]
(B. Gorini, "Integration of the Trigger and Data Acquisition Systems in ATLAS", CHEP 07)
54
Combined algorithms in both LVL2 and EF
Setup: ROIB, 124 ROSs, 29 SFIs, 120 XPUs, SFO
55
EB scalability and stability
• 59 SFIs at 114 MB/s each: 60% of the final EB performance
• Better than the requirements (but with no LVL2 running…)
• The GbE bandwidth is the limit; negligible protocol overhead
[Plots: EB rate vs. integrated run time] (CHEP 07)
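The aggregate event-building throughput of this test follows from the per-SFI figure; computed below from the quoted numbers.

```python
# Aggregate event-building throughput in this test, from the quoted per-SFI rate.
n_sfi, per_sfi_mb_s = 59, 114
print(f"~{n_sfi * per_sfi_mb_s / 1000:.1f} GB/s with {n_sfi} SFIs "
      f"(~60% of the final EB system)")   # ~6.7 GB/s
```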
56
LVL2 timing: accepted events (3%)
[Histograms of the total time per event, the processing time per event, the data collection time per event and the data requests per event; quoted means: ~83 ms, 26.5 ms, >94.3 ms, ~24 requests, ~1 ms per request.]
57
LVL2 timing: rejected events (~97%)
[Histograms: total time per event (mean = 31.5 ms), processing time per event (mean = 25.7 ms), data collection time per event (mean = 6.0 ms), data requests per event (mean = 5.3).]
58
Preliminary EF measurements
[Histogram of the EF processing time per event: mean 1.57 s. Preliminary: only a snapshot of one particular setup, still far from being representative of the final hardware setup, a typical high-luminosity trigger menu, or actual LHC events!]
(B. Gorini, CHEP 07)
59
Combined Cosmic run in June 2007 (M3)
In June we had a 14-day combined cosmic run, with no magnetic field. It included the following systems:
• Muons: RPC (~1/32), MDT (~1/16), TGC (~1/36)
• Calorimeters: EM (LAr, ~50%) and hadronic (Tile, ~75%)
• Tracking: Transition Radiation Tracker (TRT, ~6/32 of the barrel of the final system)
The only systems missing were the silicon strips and pixels and the muon-system CSCs.
60
Conclusions: we have started to operate ATLAS at full-system scale
We seem to be in business: the ATLAS TDAQ system is doing its job, at least so far… but the real test is still to come.
61
Status & Conclusions
• The ATLAS TDAQ system has a three-level trigger hierarchy, making use of the Region-of-Interest mechanism --> an important reduction of data movement
• The system design is complete, but is open to:
  - optimization of the I/O at the Read-Out System level
  - optimization of the deployment of the LVL2 and Event Builder networks
• The architecture has been validated via deployment of full systems on test-bed prototypes and at the ATLAS H8 test beam, and via detailed modeling to extrapolate to full size
62
Thanks for listening!
Big thanks to Livio Mapelli, Giovanna Lehmann-Miotto and Nick Ellis for lending slides and helping with this presentation.