Emanuele Leonardi PADME General Meeting - LNF January 2017


TDAQ & Computing
Emanuele Leonardi
PADME General Meeting - LNF, 17-18 January 2017

PADME on-line computing model

Detector front-ends:
- Target: 32 ch FADC
- Spectrometer Veto: 192 ch FADC
- High Energy Positron Veto: 32 ch FADC
- ECAL: 616 ch FADC
- SAC: 49 ch FADC
- TPix: 65536 ch (readout TBD)

On-line chain at the PADME experiment site:
- Trigger Distribution System: BTF Beam, Cosmics, SW/Random triggers
- L0 DAQ: 1 process per DAQ board, ADC zero suppression (?)
- L1 DAQ: event building, temporary disk buffer
- On-line filters: Neutral Filter (Inv), 1 or more ECAL clusters; Charged Filter (Vis), 2 or more tracks
- RAW data shipped to the Central Data Recording Facility: CNAF (+LNF?)

18/01/2017 E. Leonardi - PADME G.M. - TDAQ & Computing
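The "ADC zero suppression (?)" step in the L0 DAQ is still an open question. As an illustration only, a minimal sketch of sample-level zero suppression; the threshold and margin values are hypothetical placeholders, not PADME parameters:

```python
def zero_suppress(samples, pedestal, threshold=15, margin=4):
    """Keep only regions where |sample - pedestal| exceeds `threshold`
    ADC counts, plus `margin` samples on either side.
    Threshold and margin are illustrative values only."""
    keep = [False] * len(samples)
    for i, s in enumerate(samples):
        if abs(s - pedestal) > threshold:
            for j in range(max(0, i - margin), min(len(samples), i + margin + 1)):
                keep[j] = True
    # Return the (index, sample) pairs that survive suppression
    return [(i, s) for i, s in enumerate(samples) if keep[i]]

# Example: flat pedestal at 100 counts with one short pulse
wave = [100] * 20 + [100, 140, 180, 150, 110] + [100] * 20
print(len(zero_suppress(wave, pedestal=100)))  # only samples near the pulse survive
```

Whether suppression runs in the L0 process or in board firmware is exactly the kind of decision the computing model leaves open.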

DAQ data rates

Detector      Channels  ADC boards  DAQ in (MB/s)  DAQ out (MB/s)  RAW data (GB/d)  RAW data (TB/365d)  Raw event (KB)
Target              32           1            2.6            1.26              109                  40            24.6
Calorimeter        616          20           52.8            2.70              234                  85            52.8
SAC                 49           2            5.3            1.08               93                  34            21.1
E veto              96           3            7.9            0.60               51                  19            11.6
P veto               –           –              –            1.31              113                  41            25.6
HEP veto             –           –              –            1.15              100                  36            22.5
TimePix          65536           –           18.8            1.50              130                  47            29.3
Total              919          30           97.9            9.60              830                 303           187.5

Output estimates assume that zero-suppression is active, include trigger ADC samples (25%) and a 1% autopass stream (10%), and do not include the ROOT data-compression factor (testbeam: ≈50%).
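The GB/day and TB/365d columns follow from the DAQ output rates by simple arithmetic; this check (assuming continuous data taking, 86400 s/day, and decimal units, 1 GB = 1000 MB) reproduces the table to within rounding:

```python
# Cross-check of the RAW data volume columns from the DAQ output rates,
# assuming continuous 24 h/day data taking and decimal units.
daq_out_mb_s = {
    "Target": 1.26, "Calorimeter": 2.70, "SAC": 1.08, "E veto": 0.60,
    "P veto": 1.31, "HEP veto": 1.15, "TimePix": 1.50,
}

for det, rate in daq_out_mb_s.items():
    gb_day = rate * 86400 / 1000     # MB/s -> GB/day
    tb_year = gb_day * 365 / 1000    # GB/day -> TB/365 days
    print(f"{det:12s} {gb_day:6.1f} GB/d  {tb_year:5.1f} TB/365d")

total = sum(daq_out_mb_s.values())
print(f"Total: {total:.2f} MB/s")    # 9.60 MB/s, as in the table
```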

2017 Milestones – On-line HW

- Acquire on-line HW:
  - 2 L0 servers
  - 2 L1 servers with storage (~40 TB)
  - 2 network switches
  - Rack and cabling
- Define UPS for the experimental area (computing, electronics, HV)
- Create on-line infrastructure:
  - Define location of L0/L1 servers
  - Define realistic count of network ports needed (servers, trigger, DCS, sensors, …)
  - Lay out cabling
- Create 10 Gbps link to LNF-IT

2017 Milestones – On-line SW

- Define production DAQ procedures
- Define main DAQ strategy:
  - Now: each ADC board is acquired asynchronously
  - Can we use a trigger-oriented DAQ (all boards acquired after each trigger)?
- Multi-server RunControl
- Integrate trigger information
- Integrate TimePix3 data stream:
  - Verify event merger with >2 ADC boards
  - How do we merge TimePix3 data?
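The open question of merging per-board streams into full events can be illustrated with a toy event builder that matches board fragments by trigger number; the fragment format and field names below are hypothetical, not the PADME data model:

```python
from collections import defaultdict

def build_events(fragments, n_boards):
    """Toy event builder: group per-board fragments by trigger number and
    emit an event once all boards have reported (hypothetical data model)."""
    pending = defaultdict(dict)  # trigger number -> {board_id: payload}
    events = []
    for trig, board, payload in fragments:
        pending[trig][board] = payload
        if len(pending[trig]) == n_boards:
            events.append((trig, pending.pop(trig)))
    return events

# Boards deliver fragments asynchronously, interleaved across boards
frags = [(1, "A", b"..."), (2, "A", b"..."), (1, "B", b"..."), (2, "B", b"...")]
evts = build_events(frags, n_boards=2)
print([t for t, _ in evts])  # [1, 2]
```

A real merger would also need timeouts for boards that never report, and a policy for TimePix3 data, which is not naturally organized in per-trigger fragments.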

2017 Milestones – Trigger

- Create trigger/clock infrastructure
- Preliminary requirements for the PADME L0TP are defined
- Deeper understanding of the V1742 board is necessary
- Needs dedicated manpower to start implementation studies and the project soon
- Is an evaluation board + analog mezzanine suited for the PADME L0TP?

2017 Milestones – GRID

- Port PADME SW to the GRID:
  - Install padme-fw to CVMFS
  - Verify interaction with standard G4/ROOT libraries
- Enable PADME support at selected sites (LNF, CNAF, Sofia, Roma1, …):
  - Create VO roles, register production/standard users
- Set up automatic data transfer LNF → CNAF:
  - CNAF already accepting PADME data (manual or automatic transfer)
  - Create GRID-interfaced transfer server at LNF
- Allocate GRID/off-GRID CPU power at LNF Tier1
- Define utilization of KLOE tape library at LNF
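The automatic LNF → CNAF transfer could work along these lines; this is a minimal sketch only, where `do_transfer` stands in for whichever GRID transfer client is eventually chosen, and the `.done`-flag scheme is a hypothetical bookkeeping choice:

```python
import shutil
from pathlib import Path

def do_transfer(path: Path, dest: Path) -> bool:
    """Placeholder for the real GRID transfer to CNAF (the actual tool is
    still to be chosen); here just a local copy for illustration."""
    shutil.copy2(path, dest / path.name)
    return True

def transfer_pass(spool: Path, dest: Path):
    """One pass of a hypothetical transfer daemon: ship every RAW file in
    `spool` that has no `.done` flag yet, then flag it so a later pass
    (or a restart after a crash) never transfers the same file twice."""
    shipped = []
    for raw in sorted(spool.glob("*.raw")):
        flag = raw.with_suffix(".done")
        if not flag.exists() and do_transfer(raw, dest):
            flag.touch()
            shipped.append(raw.name)
    return shipped
```

In production the pass would run in a loop on the GRID-interfaced transfer server, with checksum verification before the local copy is eligible for deletion.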

2017 Milestones – Software

- Finalize MC digitization:
  - Define digi structure and digitization procedure for all detectors
  - Verify processing times / data output
- Release reconstruction SW:
  - Finalize infrastructure
  - Create reconstruction example
  - Define reconstruction procedures
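A digitization procedure of the kind to be defined typically maps simulated energy deposits onto ADC counts; a generic sketch, where pedestal, gain, and noise values are invented placeholders rather than PADME parameters:

```python
import random

def digitize(energy_mev, gain=40.0, pedestal=100, noise_counts=2.0, n_bits=12):
    """Toy digitization: convert a simulated energy deposit (MeV) into an
    ADC count with Gaussian electronic noise, clipped to the n-bit range.
    All constants are illustrative placeholders."""
    counts = pedestal + gain * energy_mev + random.gauss(0.0, noise_counts)
    return max(0, min((1 << n_bits) - 1, round(counts)))

random.seed(1)
hits = [0.0, 5.0, 50.0, 200.0]      # MeV deposits from the MC
print([digitize(e) for e in hits])  # last one saturates at 4095
```

Verifying processing times and output size on samples like this is precisely the second half of the digitization milestone.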

2017 buying list

- 2 L0 servers (HP ProLiant or Dell PowerEdge): 10-15 k€
- 2 L1 servers (Dell PowerEdge): 20-30 k€
- 2 network switches (HPE 5130): 4-6 k€
- Rack & cabling: 0-2 k€

All HW to be ordered before summer.