ATLAS Trigger & Data Acquisition system: concept & architecture
Kostas KORDAS, INFN – Frascati
XI Bruno Touschek spring school, Frascati, 19 May 2006
Higgs → 2e+2μ: O(1/hr), against ~25 minimum-bias events (>2k particles) every 25 ns

ATLAS Trigger & DAQ: the need (1)
[Table: LHC vs. Tevatron rates for W → lν, Z → ee, tt̄ and bb̄ production: σ (pb), N/s, N/year at low luminosity (10 fb⁻¹/yr), and totals collected before LHC start-up at LEP, FNAL and Belle/BaBar.]
The total cross section is ~100 mb, while the very interesting physics sits at ~1 nb to ~1 pb, i.e., a ratio of 1:10⁸ to 1:10¹¹.
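This ratio is a direct consequence of the quoted cross sections:

```latex
\frac{\sigma_{\text{tot}}}{\sigma_{\text{interesting}}}
\sim \frac{100\ \text{mb}}{1\ \text{nb}} = 10^{8}
\quad\text{to}\quad
\frac{100\ \text{mb}}{1\ \text{pb}} = 10^{11}.
```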

ATLAS Trigger & DAQ: the need (2)
[Diagram: pp collisions at 40 MHz; full info per event ~1.6 MB every 25 ns, i.e. ~60 TB/s; written out at ~200 Hz, ~300 MB/s.]
Need high luminosity to get to observe the very interesting events; need on-line selection so that mostly interesting events are written to disk.
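Both quoted throughputs follow from the event size:

```latex
\frac{1.6\ \text{MB}}{25\ \text{ns}} = 6.4\times 10^{13}\ \text{B/s} \approx 60\ \text{TB/s},
\qquad
200\ \text{Hz} \times 1.6\ \text{MB} \approx 300\ \text{MB/s}.
```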

ATLAS Trigger & DAQ: architecture
[Architecture diagram: Level 1 (Calo, MuTrCh, other detectors; 2.5 μs) takes 40 MHz down to 100 kHz; the Read-Out Systems (ROD/ROB) absorb 160 GB/s. Level 2 (ROIB, L2SV, L2 network, L2 processors; ~10 ms) pulls RoI data (~2%) and accepts ~3.5 kHz. The Event Builder (DFM, EB network, SFIs; ~3+6 GB/s) feeds the Event Filter (EF processors, EF network; ~1 s), which accepts ~0.2 kHz: ~200 Hz, ~300 MB/s to the SFOs. Full info per event: ~1.6 MB.]

From the detector into the Level-1 Trigger
Interactions every 25 ns: in 25 ns particles travel 7.5 m. Cable lengths are ~100 meters: in 25 ns signals travel 5 m. The total Level-1 latency is 2.5 μs (TOF + cables + processing + distribution), so for 2.5 μs all signals must be stored in electronic pipelines.
[Diagram: the ATLAS detector (weight 7000 t, 44 m × 22 m) feeding the Calo/MuTrCh Level-1 trigger at 40 MHz, with front-end pipelines for all detectors.]
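Since every 25 ns bunch crossing must be held for the full latency, the required pipeline depth follows directly:

```latex
\text{pipeline depth} = \frac{2.5\ \mu\text{s}}{25\ \text{ns}} = 100\ \text{bunch crossings}.
```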

Upon LVL1 accept: buffer data & get RoIs
[Diagram: on a Level-1 accept (100 kHz), data flow from the detector read-out through the Read-Out Drivers (RODs) and over the Read-Out Links (S-LINK) into the Read-Out Buffers (ROBs) of the Read-Out Systems, at 160 GB/s; the RoI records go to the Region of Interest Builder (ROIB).]
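The 160 GB/s into the Read-Out Systems is simply the accept rate times the event size:

```latex
100\ \text{kHz} \times 1.6\ \text{MB} = 160\ \text{GB/s}.
```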

LVL1 finds Regions of Interest for the next levels
[Event display: in this example LVL1 finds 4 Regions of Interest, i.e. 4 RoI (η, φ) addresses: 2 muons and 2 electrons.]

Upon LVL1 accept: buffer data & get RoIs
[Same read-out diagram as above.]
On average, LVL1 finds ~2 Regions of Interest (in η-φ) per event; the data in the RoIs are a few % of the Level-1 throughput.
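That few-% figure is what makes the RoI approach pay off at Level 2:

```latex
\sim 2\% \times 160\ \text{GB/s} \approx 3\ \text{GB/s},
```

i.e., the ~3 GB/s of RoI traffic quoted on the next slide, instead of the full 160 GB/s.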

LVL2: work with "interesting" ROSs/ROBs
For each detector there is a simple correspondence between an η-φ Region of Interest and its ROB(s), so in the LVL2 Processing Units the list of ROBs holding the corresponding data from each detector is quickly identified for each RoI.
[Diagram: the ROIB passes RoIs to the LVL2 Supervisor (L2SV), which dispatches them over the LVL2 network (L2N) to the LVL2 Processing Units (L2Ps); these request the RoI data (~2%, ~3 GB/s) from the Read-Out Buffers.]
An RoI-based Level-2 trigger allows a much smaller read-out network, at the cost of higher control traffic.
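The RoI → ROB correspondence can be pictured as a per-detector map from η-φ towers to buffer IDs, making the lookup a cheap index computation rather than a search. A minimal sketch follows; the names (robs_for_roi, TOWERS), granularities and ROB numbering are invented for illustration and are not the actual ATLAS code:

```python
import math

# Toy sketch of the RoI -> ROB correspondence described above.
# Granularity, detector names and ROB numbering are illustrative only.
TOWERS = {"calo": (0.4, 0.4), "muon": (0.8, 0.8)}  # (d_eta, d_phi) per ROB

def robs_for_roi(eta: float, phi: float, detector: str) -> list[int]:
    """Return the ROB IDs covering an (eta, phi) Region of Interest.

    Each detector's read-out is assumed to be tiled in fixed eta-phi
    towers, so the lookup is an index computation, not a search.
    """
    d_eta, d_phi = TOWERS[detector]
    n_phi = math.ceil(2 * math.pi / d_phi)   # towers around in phi
    i_eta = int((eta + 2.5) / d_eta)         # shift eta from [-2.5, 2.5) to >= 0
    i_phi = int(phi / d_phi) % n_phi
    return [i_eta * n_phi + i_phi]           # one ROB per tower in this toy model

# For each RoI, LVL2 asks only these ROBs for data, a few % of the event.
print(robs_for_roi(eta=1.2, phi=0.7, detector="calo"))
```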

After LVL2: build full events
[Diagram: on a LVL2 accept (~3.5 kHz), the DataFlow Manager (DFM) assigns the event to a Sub-Farm Input (SFI), which collects all fragments from the Read-Out Systems over the Event Building Network (EBN); total traffic ~3+6 GB/s.]
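A plausible reading of the quoted ~3+6 GB/s (an interpretation, not stated explicitly on the slide) is RoI traffic plus event-building traffic:

```latex
\underbrace{\sim 3\ \text{GB/s}}_{\text{RoI requests}}
\;+\;
\underbrace{3.5\ \text{kHz} \times 1.6\ \text{MB} \approx 6\ \text{GB/s}}_{\text{event building}}.
```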

LVL3: the Event Filter deals with full-event info
[Diagram: full events from the Sub-Farm Inputs (SFIs) are processed by the farm of Event Filter Processors (EFPs) over the Event Filter Network (EFN), taking ~1 s per event and accepting ~0.2 kHz (~200 Hz).]

From the Event Filter to local (TDAQ) storage
[Diagram: events accepted by the Event Filter (~0.2 kHz) are written out by the Sub-Farm Outputs (SFOs): ~200 Hz, ~300 MB/s.]

TDAQ, High Level Trigger & DataFlow
[The full architecture diagram again, now grouping the components into the High Level Trigger (Level 2 + Event Filter) and the DataFlow.]

TDAQ, High Level Trigger & DataFlow
High Level Trigger (HLT):
– Algorithms developed offline (with the HLT in mind)
– HLT infrastructure (a TDAQ job): "steers" the order of algorithm execution
– Alternates steps of "feature extraction" & "hypothesis testing" → fast rejection (min. CPU); see the sketch below
– Reconstruction in Regions of Interest → min. processing time & network resources
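The steering idea (alternate cheap feature extraction with a hypothesis test, and stop as soon as the event fails) can be sketched as follows; the names (run_hlt_chain, chain), features and thresholds are invented for illustration and are not the ATLAS steering code:

```python
# Toy sketch of HLT steering: feature extraction alternating with
# hypothesis tests, rejecting as early as possible to save CPU.

def run_hlt_chain(event, chain):
    """Run (extract, test) steps in order; reject on the first failed test."""
    features = {}
    for extract, test in chain:
        features.update(extract(event, features))  # feature extraction (in RoIs)
        if not test(features):                     # hypothesis test
            return False                           # early rejection: skip later steps
    return True                                    # all hypotheses passed: accept

# Example chain for an "electron" trigger (invented numbers):
chain = [
    (lambda ev, f: {"et": ev["calo_et"]},             lambda f: f["et"] > 20.0),
    (lambda ev, f: {"has_track": ev["n_tracks"] > 0}, lambda f: f["has_track"]),
]

print(run_hlt_chain({"calo_et": 25.0, "n_tracks": 2}, chain))  # True (accepted)
print(run_hlt_chain({"calo_et": 5.0,  "n_tracks": 2}, chain))  # False (rejected at step 1)
```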

TDAQ, High Level Trigger & DataFlow
DataFlow:
– Buffers & serves data to the HLT
– Acts according to the HLT result, but otherwise the HLT is a "black box" which gives answers
– Software framework based on C++ code and the STL

High Level Trigger & DataFlow: PCs (Linux)
[Diagram: the same architecture with farm sizes: ~500 nodes for the LVL2 farm, ~100 for the Event Builder, ~150 for the Read-Out Systems, ~1600 for the Event Filter, plus infrastructure for control, communication and databases.]

TDAQ at the ATLAS site
[Layout diagram: in UX15/USA15, the ATLAS detector feeds the first-level trigger and the Read-Out Drivers (RODs); 1600 Read-Out Links carry the data of accepted events (≤100 kHz, 1600 fragments of ~1 kB each) over dedicated links to ~150 Read-Out Subsystem (ROS) PCs, and the RoI Builder forwards Regions of Interest over VME to the LVL2 Supervisors; Timing Trigger Control (TTC) distributes the clock and triggers. In SDX1, Gigabit Ethernet switches connect the LVL2 farm (~500 dual-CPU nodes), the Event Builder's SubFarm Inputs (SFIs, ~100), a pROS storing the LVL2 output, the Event Filter (~1600 dual-CPU nodes) and the SubFarm Outputs (SFOs) for local storage and transfer to the CERN computer centre. Event data are pulled: partial events at ≤100 kHz, full events at ~3 kHz; the recorded event rate is ~200 Hz. A "pre-series" system of ~10% of the final TDAQ is in place.]

Example of worries in such a system: CPU power
At the Technical Design Report we assumed: 100 kHz LVL1 accept rate; 500 dual-CPU PCs for LVL2; 8 GHz per CPU at LVL2. So each L2PU handles 100 Hz, i.e. a 10 ms average latency per event in each L2PU.
8 GHz per CPU will not come, but dual-core dual-CPU PCs show scaling! Test: ROSs preloaded with muon events, running LVL2 on an AMD dual-core, dual 1.8 GHz machine with 4 GB total memory. We should reach the necessary performance per PC at the cost of higher memory needs & latency (a shared-memory model would be better here).
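The 100 Hz / 10 ms figures follow from the TDR assumptions:

```latex
\frac{100\ \text{kHz}}{500 \times 2\ \text{CPUs}} = 100\ \text{Hz per L2PU}
\quad\Rightarrow\quad
\frac{1}{100\ \text{Hz}} = 10\ \text{ms average per event}.
```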

Cosmics in ATLAS in the pit
Last September: cosmics in the Tile hadronic calorimeter, brought through the pre-series system (monitoring algorithms). This July: a cosmic run with the LAr EM + Tile hadronic calorimeters (+ muon detectors?).

Summary
Triggering at hadron colliders:
– Need high luminosity to get rare events
– Cannot write all data to disk; no sense otherwise: offline, we would be wasting our time looking for a needle in the haystack!
ATLAS TDAQ:
– 3-level trigger hierarchy
– Uses Regions of Interest from the previous level: small data movement
– Feature extraction + hypothesis testing: fast rejection → min. CPU power
We are in the installation phase of the system; a cosmic run with the central calorimeters (+ muon system?) takes place this summer. TDAQ will be ready in time for LHC data taking.

Thank you

ReadOut Systems: 150 PCs with special cards
ROS units contain 12 R/O Buffers → ~150 units needed for ATLAS (~1600 ROBs). A ROS unit is implemented as a 3.4 GHz PC housing 4 custom PCI-X cards (ROBINs). 12 ROSs are in place, more arriving. The performance of the final ROS (PC + ROBIN) is above requirements. Note: we also have the ability to access individual ROBs if wanted/needed.
[Plot: LVL2 accept rate (% of input) vs. LVL1 accept rate (kHz), comparing (1) the "hottest" ROS from the paper model with (2) measurements on the real ROS hardware, with the low- and high-luminosity operating regions marked.]
Not all ROSs are equal in their rate of data requests; ROD → ROS re-mapping can reduce the requirements on the busiest ("hottest") ROS.
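The unit count follows from the buffer count (the excess over the minimum presumably gives headroom and spares):

```latex
\frac{\sim 1600\ \text{ROBs}}{12\ \text{ROBs per ROS unit}} \approx 133
\;\Rightarrow\; \sim 150\ \text{ROS units}.
```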

Event Building needs
Throughput requirements: 100 kHz LVL1 accept rate × 3.5% LVL2 accept rate → 3.5 kHz into the EB; at 1.6 MB event size → 3.5 × 1.6 = 5600 MB/s total input.
Network-limited (fast CPUs): event building uses 60-70% of a Gbit network → ~70 MB/s into each Event Building node (SFI).
So we need 5600 MB/s into the EB system / (70 MB/s in each EB node) → ~80 SFIs for full ATLAS. When an SFI also serves the EF, its throughput decreases by ~20% → actually need 80/0.80 = 100 SFIs.
6 prototypes are in place and PCs are being evaluated now; we expect big event-building needs from day 1: >50 PCs by the end of the year.
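Folding the slide's chain of numbers into one expression:

```latex
N_{\text{SFI}} = \frac{100\ \text{kHz} \times 3.5\% \times 1.6\ \text{MB}}
{70\ \text{MB/s} \times 0.80}
= \frac{5600\ \text{MB/s}}{56\ \text{MB/s}} = 100.
```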

Tests of LVL2 algorithms & RoI collection
Setup: di-jet, μ and e simulated events preloaded on the ROSs, with the RoI info on the L2SV; 1 L2SV, 1 DFM, 1 pROS, L2PUs and emulated ROSs, plus 1 online server and 1 MySQL database server.
[Table: per data file (di-jet, μ, e): LVL2 latency (ms), process time (ms), RoI collection time (ms), RoI collection size (bytes), and # requests/event.]
Findings: (1) the majority of events are rejected fast (the electron sample is pre-selected); (2) processing takes ~all of the latency, so the RoI data-collection time is small; (3) the RoI data request per event is small.
Note: neither the trigger menu nor the data files are a representative mix of ATLAS; that is the aim for a late-2006 milestone.

Scalability of the LVL2 system
The L2SV gets the RoI info from the RoIB, assigns an L2PU to work on the event, and load-balances its L2PU sub-farm. Can this scheme cope with the LVL1 rate? Test: RoI info preloaded into the RoIB, which triggers the TDAQ chain, emulating LVL1.
The LVL2 system is able to sustain the LVL1 input rate:
– a 1-L2SV system for a LVL1 rate of ~35 kHz
– a 2-L2SV system for a LVL1 rate of ~70 kHz (50%-50% sharing)
The rate per L2SV is stable within 1.5%. ATLAS will have a handful of L2SVs → can easily manage the 100 kHz LVL1 rate.
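Since the rate per supervisor is stable, the scaling is linear in the number of supervisors, so a handful is indeed plenty:

```latex
n_{\text{L2SV}} \times 35\ \text{kHz} \geq 100\ \text{kHz}
\quad\Rightarrow\quad n_{\text{L2SV}} \geq 3.
```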

Event Filter performance scales with farm size
The previous Event Filter I/O protocol limited the rate for small event sizes (e.g., partially built events); this was changed in the current TDAQ software release.
Test 1, dummy algorithm: always accept, but with a fixed delay; event size 1 MB. Initially CPU-limited, but eventually bandwidth-limited.
Test 2, e/γ & μ selection algorithms: HLT algorithms seeded by the L2Result, with pre-loaded (e & μ) simulated events on 1 SFI emulator serving the EF farm. Results here are for muons: running the muon algorithms shows scaling with EF farm size (still CPU-limited with 9 nodes).
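A simple model of the dummy-algorithm test (δ is the fixed per-event delay, N the number of EF nodes, B the available bandwidth; the symbols are illustrative, not measured values): the throughput rises linearly with the farm until the network saturates,

```latex
R(N) = \min\!\left(\frac{N}{\delta},\ \frac{B}{1\ \text{MB}}\right).
```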

ATLAS Trigger & DAQ: philosophy
LVL1 (40 MHz → ~100 kHz, latency 2.5 μs): hardware-based (FPGA, ASIC); calo/muon with coarse granularity; front-end data held in pipeline memories, read out via the Read-Out Drivers.
LVL2 (→ ~3 kHz, ~10 ms): software (specialised algorithms); uses the LVL1 Regions of Interest; all sub-detectors at full granularity; emphasis on early rejection; reads from the Read-Out Subsystems hosting the Read-Out Buffers.
Event Filter (→ ~200 Hz, ~1 s): offline algorithms, seeded by the LVL2 result; works with the full event; full calibration/alignment info; fed by the event-builder cluster; local storage at ~300 MB/s.
LVL2 + Event Filter = High Level Trigger.

XI Bruno Touschek school, Frascati, 19 May '06ATLAS TDAQ concept & architecture - Kostas KORDAS28 Data Flow and Message Passing

A Data Collection application example: the Event Builder
[Diagram of the SFI as a set of activities: an Input Activity receives event fragments from the ROS & pROS; a Request Activity issues data requests and receives assignments from the Data Flow Manager; an Event Assembly Activity builds the full event from the fragments; an Event Handler Activity hands the assembled event to the trigger (Event Filter); an Event Sampler Activity provides events for monitoring.]
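A toy version of this activity decomposition, with a queue standing in for the message passing between activities; all names (request_activity, assembly_activity, N_ROS) and details are invented for illustration and are not the ATLAS DataFlow code:

```python
# Toy event builder: assemble an event from per-ROS fragments, in the
# spirit of the SFI activities above. All details are illustrative.
from queue import Queue

N_ROS = 4  # toy number of read-out systems

def request_activity(event_id, fragment_queue):
    """Stand-in for the Request Activity: ask every ROS for its fragment."""
    for ros_id in range(N_ROS):  # in ATLAS this is a network request per ROS
        fragment_queue.put((event_id, ros_id, f"data-{event_id}-{ros_id}"))

def assembly_activity(event_id, fragment_queue):
    """Stand-in for Input + Event Assembly: collect all fragments into one event."""
    fragments = {}
    while len(fragments) < N_ROS:
        ev, ros_id, payload = fragment_queue.get()
        if ev == event_id:
            fragments[ros_id] = payload
    return fragments  # the full event, ready for the Event Handler / EF

q = Queue()
request_activity(42, q)          # the DFM has assigned event 42 to this SFI
event = assembly_activity(42, q)
print(sorted(event))             # all N_ROS fragment IDs collected
```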