The ATLAS Data Acquisition & Trigger: concept, design & status. Kostas KORDAS, INFN – Frascati. 10th Topical Seminar on Innovative Particle & Radiation Detectors (IPRD06), Siena, 1-5 Oct. 2006.


ATLAS Trigger & DAQ: concept. The full information per event is ~1.6 MB every 25 ns, i.e. ~60 TB/s off the detector. LVL1 (hardware based, no dead time) reduces the 40 MHz bunch-crossing rate to 100 kHz (160 GB/s); LVL2 and the Event Filter run algorithms on PC farms, each seeded by the previous level, designed to decide fast and to work with the minimum data volume. LVL2 accepts ~3.5 kHz (~3+6 GB/s of RoI plus event-building traffic) and the Event Filter writes ~200 Hz (~300 MB/s) to storage.
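The rate and bandwidth figures on this slide follow from simple arithmetic. A minimal Python sketch using only the numbers quoted above (none of these values is an official parameter set):

    # Back-of-the-envelope check of the rates/bandwidths quoted on this slide.
    event_size_mb = 1.6          # full event size (MB)
    bunch_crossing_hz = 40e6     # LHC bunch-crossing rate seen by ATLAS
    lvl1_hz, lvl2_hz, ef_hz = 100e3, 3.5e3, 200.0   # accept rates of the three trigger levels

    raw_tb_s = event_size_mb * bunch_crossing_hz / 1e6     # off the detector
    lvl1_gb_s = event_size_mb * lvl1_hz / 1e3               # into the Read-Out System
    eb_gb_s = event_size_mb * lvl2_hz / 1e3                 # through the Event Builder
    storage_mb_s = event_size_mb * ef_hz                    # to local storage
    print(f"{raw_tb_s:.0f} TB/s raw, {lvl1_gb_s:.0f} GB/s after LVL1, "
          f"{eb_gb_s:.1f} GB/s after LVL2, {storage_mb_s:.0f} MB/s to storage")
    # -> 64 TB/s raw, 160 GB/s after LVL1, 5.6 GB/s after LVL2, 320 MB/s to storage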

From the detector into the Level-1 Trigger. At 40 MHz, coarse calorimeter and muon trigger-chamber (MuTrCh) data feed LVL1, while the data of all detectors wait in front-end (FE) pipelines during the 2.5 μs LVL1 latency.

Upon LVL1 accept: buffer data & get RoIs. On a LVL1 accept (100 kHz, 160 GB/s), the data of all detectors move from the front-end electronics through the Read-Out Drivers (RODs) and the Read-Out Links (S-LINK) into the Read-Out Buffers (ROBs) of the Read-Out Systems (ROSs).

Region of Interest Builder. In parallel, LVL1 passes its Region-of-Interest records to the RoI Builder (ROIB). On average, LVL1 finds ~2 Regions of Interest (in η-φ) per event.

LVL2: work with the "interesting" ROSs/ROBs. The LVL2 Supervisor (L2SV) receives the assembled RoI record from the ROIB and assigns the event to one of the LVL2 Processing Units (L2PUs), which requests over the LVL2 network (L2N) only the RoI data, ~2% of the full event, i.e. ~3 GB/s in total, from the Read-Out Buffers of the relevant ROSs. The result is a much smaller read-out network, at the cost of higher control traffic.
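A quick comparison of the two options, using the slide's numbers (a sketch, not an official sizing):

    # Why RoI-based readout keeps the LVL2 network small: compare pulling full events
    # at the LVL1 rate with pulling only the ~2% of the event inside the Regions of Interest.
    event_size_mb, lvl1_hz, roi_fraction = 1.6, 100e3, 0.02
    full_gb_s = event_size_mb * lvl1_hz / 1e3        # if LVL2 read out complete events
    roi_gb_s = roi_fraction * full_gb_s              # RoI-driven readout
    print(f"full events: {full_gb_s:.0f} GB/s, RoI data only: {roi_gb_s:.1f} GB/s")
    # -> full events: 160 GB/s, RoI data only: 3.2 GB/s (the slide's ~3 GB/s)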

After LVL2: the Event Builder makes full events. For each event accepted by LVL2 (~3.5 kHz), the DataFlow Manager (DFM) assigns a Sub-Farm Input (SFI) node, which collects the fragments from all Read-Out Systems over the Event Builder network (EBN) and assembles the complete event; together with the LVL2 RoI traffic this amounts to ~3+6 GB/s out of the ROSs.
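A highly simplified sketch of that event-building flow; the class and method names are illustrative, not the actual ATLAS dataflow software:

    # Toy event-building flow: the DFM load-balances LVL2-accepted events over the SFIs,
    # each SFI pulls one fragment per ROS, and the ROS buffers are cleared afterwards.
    from itertools import cycle

    class ROS:
        def get_fragment(self, event_id):
            return (id(self), event_id, b"...")      # placeholder fragment payload
        def clear(self, event_id):
            pass                                     # free the Read-Out Buffers for this event

    class SFI:
        def build(self, event_id, roses):
            fragments = [ros.get_fragment(event_id) for ros in roses]   # pull, not push
            return {"id": event_id, "fragments": fragments}

    roses = [ROS() for _ in range(150)]              # ~150 Read-Out Systems
    sfis = cycle([SFI() for _ in range(100)])        # ~100 SFIs, assigned round-robin by the DFM

    for event_id in range(3):                        # events accepted by LVL2
        sfi = next(sfis)                             # DFM assigns the event to an SFI
        event = sfi.build(event_id, roses)           # SFI collects one fragment per ROS
        for ros in roses:
            ros.clear(event_id)                      # DFM then broadcasts delete commands
        # the built event is served to the Event Filter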

Event Filter: deals with full events. The built events (~3.5 kHz) are passed by the SFIs over the Event Filter network (EFN) to the Event Filter, a farm of PCs whose processors (EFPs) take about a second per event and accept ~200 Hz.

From the Event Filter to local (TDAQ) storage. Events accepted by the Event Filter (~0.2 kHz) are sent to the Sub-Farm Outputs (SFOs) and written to local storage at ~300 MB/s.

TDAQ, High Level Trigger & DataFlow. In this picture, LVL2 and the Event Filter together form the High Level Trigger, while the read-out, event-building and output components form the DataFlow.

TDAQ, High Level Trigger & DataFlow. The High Level Trigger (HLT) algorithms are developed offline (with the HLT in mind), while the HLT infrastructure (a TDAQ job) "steers" the order of algorithm execution: alternating steps of "feature extraction" and "hypothesis testing" give fast rejection with minimal CPU, and reconstruction restricted to the Regions of Interest minimises processing time and network resources (see the sketch below).
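A minimal sketch of this steering pattern: run the cheapest step first, test the hypothesis, and abandon the event as soon as nothing can still pass. The step/chain structure and names are illustrative assumptions, not the actual ATLAS steering framework:

    # Toy HLT steering: alternate feature extraction and hypothesis testing per step,
    # rejecting early so that most events never reach the expensive steps.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Step:
        extract_features: Callable        # cheap or costly reconstruction inside an RoI
        test_hypothesis: Callable         # decision on the extracted features

    @dataclass
    class Chain:
        name: str
        steps: List[Step]                 # ordered from cheapest to most expensive

    def event_passes(rois, chains):
        for chain in chains:
            active = list(rois)
            for step in chain.steps:
                survivors = [roi for roi in active
                             if step.test_hypothesis(step.extract_features(roi))]
                if not survivors:
                    break                 # early rejection: skip the remaining, costlier steps
                active = survivors
            else:
                return True               # one chain passed all its steps: accept the event
        return False                      # no chain survived: reject the event

    # Toy usage: an "electron" chain with a cheap calorimeter step and a costlier tracking step.
    calo = Step(lambda roi: {"et": roi["et"]}, lambda f: f["et"] > 25.0)
    track = Step(lambda roi: {"trk": roi["has_trk"]}, lambda f: f["trk"])
    e_chain = Chain("e25i", [calo, track])
    print(event_passes([{"et": 30.0, "has_trk": True}], [e_chain]))   # True
    print(event_passes([{"et": 10.0, "has_trk": True}], [e_chain]))   # False, rejected at step 1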

High Level Trigger & DataFlow: PCs running Linux. The farms comprise roughly 150 ROS nodes, 500 LVL2 nodes, 100 Event Builder nodes and 1600 Event Filter nodes, plus infrastructure nodes for control, communication and databases.

TDAQ at the ATLAS site. In the cavern (UX15) the detector front-ends are read out over dedicated links, with Timing, Trigger and Control (TTC) distribution, into the Read-Out Drivers (RODs) in the underground counting room USA15, which also houses the first-level trigger and the RoI Builder (fed with the Regions of Interest over VME). About 1600 Read-Out Links carry the data of events accepted by the first-level trigger into ~150 Read-Out Subsystem (ROS) PCs. Gigabit Ethernet connects these to the surface building SDX1, which hosts the second-level trigger (the LVL2 supervisors, a LVL2 farm of ~500 nodes, and a pROS that stores the LVL2 output), the DataFlow Manager, the Event Builder with its ~100 SubFarm Inputs (SFIs), the Event Filter (EF) with ~1600 dual-CPU nodes, and the SubFarm Outputs (SFOs) for local storage, all behind network switches. Event data are pulled: event data requests and delete commands go to the ROSs, which return the requested event data, partial event data at up to 100 kHz (1600 fragments of ~1 kByte each) and full events at ~3 kHz; events finally flow at ~200 Hz towards data storage at the CERN computer centre.
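The dataflow is pull-based: nodes that need data ask the ROSs for it, and the DFM tells the ROSs when an event may be dropped. A toy model of the three message types named in the diagram (illustrative only, not the real ATLAS message formats):

    # Toy ROS request/delete handling: "event data request" -> "requested event data",
    # plus the DFM's "delete" once the event has been built or rejected.
    class ReadOutSubsystem:
        def __init__(self):
            self.buffers = {}                           # event_id -> list of ROB fragments

        def on_lvl1_accept(self, event_id, fragments):
            self.buffers[event_id] = fragments          # data arrive over the Read-Out Links

        def on_data_request(self, event_id, rob_ids=None):
            frags = self.buffers.get(event_id, [])
            if rob_ids is not None:                     # LVL2 asks only for the ROBs in its RoI
                frags = [f for f in frags if f[0] in rob_ids]
            return frags                                # "requested event data" to L2PU or SFI

        def on_delete(self, event_ids):
            for eid in event_ids:                       # DFM's delete command frees the buffers
                self.buffers.pop(eid, None)

    ros = ReadOutSubsystem()
    ros.on_lvl1_accept(42, [(rob, 42, b"...") for rob in range(12)])   # 12 ROBs per ROS
    roi_data = ros.on_data_request(42, rob_ids={3, 4})    # partial, RoI-driven request (LVL2)
    full_data = ros.on_data_request(42)                   # full request (event building)
    ros.on_delete([42])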

TDAQ testbeds. A "pre-series" DataFlow system, about 10% of the final TDAQ, is used for realistic measurements, assessment and validation of the TDAQ dataflow and HLT. Large-scale system tests on PC clusters with ~700 nodes have demonstrated the required performance and scalability of the online infrastructure.

August 2006: first combined cosmic-ray run. A muon section at the feet of ATLAS and the Tile (hadronic) calorimeter took cosmics together, triggered by the muon trigger chambers, in a muon + hadronic-calorimeter run with LVL1. The LVL1 calorimeter, muon and central trigger logic is in the production and installation phase for both hardware and software.

ReadOut Systems: all 153 PCs in place. A ROS unit is a PC housing 12 Read-Out Buffers on 4 custom PCI-X cards (ROBINs), taking input from the detector Read-Out Drivers. All 153 ROSs are installed and have been commissioned standalone.

Of these, 44 ROSs are connected to detectors and fully commissioned: the full LAr barrel (EM), half of the Tile (hadronic) calorimeter and the Central Trigger Processor, all taking data with the final DAQ (event building at the ROS level). Commissioning of the other detector read-outs is expected to be mostly complete by the end of 2006.

EM + HAD calorimeter cosmics run using the installed ROSs.

Event Building needs: bandwidth decides. Gigabit links connect the Read-Out Subsystems (ROSs) through network switches to the Event Builder nodes (SFIs), under the control of the DFM. Throughput requirement: a LVL2 accept rate of 3.5 kHz into the EB with an event size of 1.6 MB gives 5.6 GB/s total input.

We need ~100 SFIs for the full ATLAS. Event building is network limited (the CPUs are fast enough): with event building using 60-70% of a Gigabit link, ~70 MB/s flows into each Event Building node (SFI), so the 5600 MB/s total input calls for of order 100 SFIs.
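The SFI count follows directly from these two numbers; a one-line check in Python (figures from the slide, which rounds the result up to ~100 to leave headroom):

    # Number of event-building nodes needed, from total EB input and per-SFI bandwidth.
    lvl2_accept_hz, event_size_mb = 3.5e3, 1.6
    per_sfi_mb_s = 70.0                               # ~60-70% of a Gbit link usable for building
    total_mb_s = lvl2_accept_hz * event_size_mb       # 5600 MB/s into the Event Builder
    print(f"{total_mb_s:.0f} MB/s total -> ~{total_mb_s / per_sfi_mb_s:.0f} SFIs")
    # -> 5600 MB/s total -> ~80 SFIs (quoted as ~100 SFIs, with headroom)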

For the HLT, CPU power is important. At the time of the TDR we assumed a 100 kHz LVL1 accept rate, 500 dual-CPU PCs for LVL2 so that each CPU has to handle 100 Hz, a 10 ms average latency per event in each CPU, and 8 GHz per CPU at LVL2.

8 GHz per CPU will not come (soon), but dual-core dual-CPU PCs show the required scaling. In a test with ROSs preloaded with muon events, LVL2 was run on an AMD dual-core, dual-CPU 1.8 GHz machine with 4 GB of memory in total: we should reach the necessary performance per PC (and the longer we wait, the better the machines we will get). The TDR arithmetic is sketched below.
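A small sketch of the LVL2 farm arithmetic behind those TDR assumptions (all inputs are the numbers quoted on the slide):

    # LVL2 farm sizing from the TDR assumptions: rate x latency = events in flight = CPUs needed.
    lvl1_accept_hz = 100e3           # LVL1 accept rate
    latency_s = 10e-3                # average LVL2 latency per event in one CPU
    n_pcs, cpus_per_pc = 500, 2      # 500 dual-CPU PCs

    in_flight = lvl1_accept_hz * latency_s
    rate_per_cpu = lvl1_accept_hz / (n_pcs * cpus_per_pc)
    print(f"{in_flight:.0f} events in flight -> {n_pcs * cpus_per_pc} CPUs at {rate_per_cpu:.0f} Hz each")
    # -> 1000 events in flight -> 1000 CPUs at 100 Hz each (each CPU fully busy at 10 ms/event)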

DAQ / HLT commissioning. Online infrastructure: a useful fraction has been operational since last year and is growing according to need; the final network is almost done.

DAQ / HLT commissioning: ~300 machines are on the final network and a first DAQ/HLT-I slice of the final system is expected within weeks: 153 ROSs (done), 47 Event Building + HLT-infrastructure PCs, 20 local file servers, 24 local switches and 20 operations PCs; the pre-series L2 (30 PCs) and EF (12 PCs) racks might be added.

Procurement: the first 4 full racks of HLT machines (~100) come early in 2007, another 500 to 600 machines can be procured within 2007, and the rest later.

With this, TDAQ will provide significant trigger rates (LVL1, LVL2, EF) already in 2007: a LVL1 rate of 40 kHz, an event-building rate of 1.9 kHz and a physics storage rate of up to 85 Hz, with the final bandwidth available for storage and calibration.

Summary. The ATLAS TDAQ design is a 3-level trigger hierarchy in which LVL2 works with Regions of Interest, so data movement is small, and feature extraction plus hypothesis testing gives fast rejection with minimal CPU power. The architecture has been validated through the deployment of testbeds. We are in the installation phase of the system, with cosmic runs using the central calorimeters and the muon system. An initial but fully functional TDAQ system will be installed, commissioned and integrated with the detectors by the end of 2006, and TDAQ will provide significant trigger rates (LVL1, LVL2, EF) in 2007.

Thank you.

ATLAS Trigger & DAQ: RoI concept. In this example LVL1 finds 4 Regions of Interest, 2 muons and 2 electrons, and passes their 4 η-φ addresses on to the next trigger level.

Event-size breakdown (table). ATLAS total event size = 1.5 MB; total number of Read-Out Links (ROLs) = 1600. The table gives, for each sub-detector, the number of channels, the number of ROLs and the fragment size in kB: Pixels, SCT and TRT for the Inner detector; LAr and Tile for the Calorimetry; MDT, CSC, RPC and TGC for the Muon system; plus LVL1.

Scalability of the LVL2 system. The L2SV gets the RoI information from the RoIB, assigns an L2PU to work on the event, and load-balances its L2PU sub-farm. Can this scheme cope with the LVL1 rate? In a test with RoI information preloaded into the RoIB, which triggers the TDAQ chain and so emulates LVL1, the LVL2 system was able to sustain the LVL1 input rate: a 1-L2SV system for a LVL1 rate of ~35 kHz, and a 2-L2SV system for ~70 kHz (50%-50% sharing), with the rate per L2SV stable within 1.5%. ATLAS will have a handful of L2SVs and can therefore easily manage the 100 kHz LVL1 rate.
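The extrapolation behind "a handful of L2SVs" (the linearity assumption is suggested by the 1- and 2-supervisor measurements above; this is a sketch, not a measurement):

    # If one L2SV sustains ~35 kHz and two sustain ~70 kHz, scaling is roughly linear,
    # so a handful of supervisors covers the 100 kHz LVL1 design rate.
    import math
    rate_per_l2sv_khz = 35.0
    lvl1_design_khz = 100.0
    print(f"~{math.ceil(lvl1_design_khz / rate_per_l2sv_khz)} L2SVs for {lvl1_design_khz:.0f} kHz")
    # -> ~3 L2SVs for 100 kHz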

Tests of LVL2 algorithms & RoI collection. Simulated di-jet, μ and e events were preloaded on the ROSs, with the RoI information on the L2SV; the testbed consisted of an L2SV, an L2PU, a pROS, emulated ROSs and 1 DFM, plus 1 online server and 1 MySQL database server. The measurements (a table of LVL2 latency, processing time, RoI collection time, RoI collection size and number of requests per event, in ms and bytes, for each data file) show that (1) the majority of events are rejected fast, (2) processing takes essentially all of the latency, i.e. the RoI data-collection time is small, and (3) the RoI data request per event is small (the electron sample is pre-selected). Note: neither the trigger menu nor the data files are a representative mix for ATLAS; that is the aim of a late-2006 milestone.

ATLAS Trigger & DAQ: the need. We need high luminosity to observe the (rare) very interesting events, and we need on-line selection to write mostly the interesting events to disk: the full information per event is ~1.6 MB every 25 ns, i.e. ~60 TB/s at 40 MHz, whereas only ~200 Hz (~300 MB/s) can go to storage.

ATLAS Trigger & DAQ: LVL1 concept. LVL1 is hardware based and has no dead-time; it uses calorimeter and muon information at coarse granularity, reduces the 40 MHz input to 100 kHz (160 GB/s), and identifies the Regions of Interest for the next trigger level.

ATLAS Trigger & DAQ: LVL2 concept. LVL2 is software (specialised algorithms): it uses the LVL1 Regions of Interest, accesses all sub-detectors at full granularity, puts the emphasis on early rejection, and reduces the 100 kHz LVL1 output to ~3.5 kHz (~3+6 GB/s).

ATLAS Trigger & DAQ: Event Filter concept. The Event Filter runs offline algorithms, seeded by the LVL2 result, working with the full event and the full calibration/alignment information; it reduces the ~3.5 kHz LVL2 output to ~200 Hz (~300 MB/s).

ATLAS Trigger & DAQ: concept summary. LVL1 (hardware based, FPGAs and ASICs, coarse-granularity calorimeter/muon information) reduces the 40 MHz input to ~100 kHz within a 2.5 μs latency, while the data wait in pipeline memories. LVL2 (software, specialised algorithms, using the LVL1 Regions of Interest with all sub-detectors at full granularity and an emphasis on early rejection) brings this to ~3 kHz in ~10 ms, reading from the Read-Out Subsystems hosting the Read-Out Buffers (ROBs). After the Event Builder cluster, the Event Filter farm (offline algorithms, seeded by the LVL2 result, working with the full event and full calibration/alignment information, ~1 s per event) selects ~200 Hz for local storage at ~300 MB/s. LVL2 and the Event Filter together form the High Level Trigger.

ATLAS Trigger & DAQ: design. The design combines the DataFlow and High Level Trigger components shown above, with the overall figures: 40 MHz in, 100 kHz after LVL1 (160 GB/s), ~3.5 kHz after LVL2 (~3+6 GB/s), ~200 Hz after the Event Filter (~300 MB/s), for a full event size of ~1.6 MB.

High Level Trigger & DataFlow: recap of rates and latencies. 40 MHz in; ~100 kHz after LVL1 (2.5 μs latency, 160 GB/s); ~3.5 kHz after LVL2 (~10 ms, ~3+6 GB/s); ~200 Hz after the Event Filter (~1 s, ~300 MB/s).

The overall TDAQ architecture diagram, as shown earlier: the first-level trigger and the Read-Out Drivers (RODs) in USA15, ~1600 Read-Out Links with Timing Trigger Control (TTC) into the Read-Out Subsystems (~150 PCs), the RoI Builder and LVL2 supervisors, the LVL2 farm and pROS, the DataFlow Manager, the Event Builder (SFIs), the Event Filter (EF) and the SubFarm Outputs (SFOs) for local storage in SDX1, connected by Gigabit Ethernet through network switches; event data requests, delete commands and the requested event data flow between them.

Component overview: Read-Out Subsystems (ROSs), Timing Trigger Control (TTC), RoI Builder, L2SVs, L2PUs, DFM, Event Builder (SFIs), Event Filter nodes (EFDs), pROS, SFOs and network switches, connected via S-LINK and Gigabit Ethernet.