Controls and Computing
E. Leonardi
PADME Kick-Off Meeting – LNF, April 20-21, 2015

Preliminary DAQ L0 Rate

DAQ data rate, preliminary estimate. Tentative setup: all channels read with Fast ADC, 1024 samples, 12 bits per sample → 1536 bytes per channel per event. Beam rate: 50 Hz.

Data source      # channels   Data rate (MiB/s)
Active target    32           2.3
Hodoscope
Calorimeter
SAC              4            0.3
Positron veto    50           3.7
Total                         ~70 (see Conclusions)
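As a cross-check of the figures in the table, here is a minimal Python sketch of the per-channel rate arithmetic; the constants and channel counts are the ones quoted on this slide.

```python
# Minimal sketch (not from the slides) reproducing the per-channel L0 rate
# arithmetic of the tentative setup: 1024 samples x 12 bits at a 50 Hz beam.
SAMPLES = 1024
BITS_PER_SAMPLE = 12
BEAM_RATE_HZ = 50

BYTES_PER_CHANNEL_EVENT = SAMPLES * BITS_PER_SAMPLE // 8  # 1536 bytes

def l0_rate_mib_s(n_channels: int) -> float:
    """Raw L0 data rate in MiB/s for a detector with n_channels channels."""
    return n_channels * BYTES_PER_CHANNEL_EVENT * BEAM_RATE_HZ / 2**20

# Channel counts taken from the table above
for name, n_ch in [("Active target", 32), ("SAC", 4), ("Positron veto", 50)]:
    print(f"{name:13s}: {l0_rate_mib_s(n_ch):.1f} MiB/s")
```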

RAW data, keeping full ADC samples

After L1 (zero suppression):

Data source      # channels (total / 0-supp)   Data rate (MiB/s)   Data volume (TB/y)
Active target
Hodoscope
Calorimeter
SAC
Positron veto
Total

Notes:
- Occupancies from MC studies
- 1 year = 365 days of data taking (see the sketch below)
- No event selection: all events are written to disk
- ROOT can give a 30-50% data compression factor when writing data to disk
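A minimal sketch of the MiB/s → TB/y conversion implied by the notes above (365 days of data taking, optional ROOT compression). The 3 MiB/s used in the example is only the sustained rate implied by the ~100 TB/y quoted in the Conclusions, not a number from this slide.

```python
# Sketch (assumption, not from the slides) of the MiB/s -> TB/y conversion
# behind the RAW data estimate, with 1 year = 365 days of data taking.
SECONDS_PER_YEAR = 365 * 24 * 3600

def tb_per_year(rate_mib_s: float, compression: float = 1.0) -> float:
    """Annual data volume in TB for a sustained rate in MiB/s.

    compression < 1 models the 30-50% ROOT compression quoted in the notes,
    e.g. compression = 0.7 for a 30% reduction.
    """
    return rate_mib_s * 2**20 * SECONDS_PER_YEAR * compression / 1e12

# ~3 MiB/s sustained corresponds to the ~100 TB/y quoted in the Conclusions
print(tb_per_year(3.0))        # ~99 TB/y uncompressed
print(tb_per_year(3.0, 0.7))   # ~69 TB/y with ~30% ROOT compression
```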

Data formats

RAW data → L2 processing:

Reconstruction format (RECO): energy/time per channel; physics objects (clusters/tracks); O(10 TB/y)
Analysis format (ANAL): physics objects only; O(1 TB/y)

Monte Carlo production - Storage

Production Unit: 2×10^9 bunches of O(5000) e+ each (≈10^13 e+); N.B. 1 bunch = 1 MC event
Event size: O(1 KB/event), including only E and t for each active channel
Storage requirements: ~2 TB/Production Unit
If we want 10× statistics each year: ~20 TB/y
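The per-Production-Unit storage figure follows directly from the event count and event size above; a short sketch of that arithmetic:

```python
# Sketch of the MC storage arithmetic on this slide: one Production Unit is
# 2e9 bunches (1 bunch = 1 MC event) at O(1 KB) per event.
BUNCHES_PER_PRODUCTION_UNIT = 2e9
EVENT_SIZE_BYTES = 1e3  # O(1 KB/event): E and t of active channels only

storage_per_unit_tb = BUNCHES_PER_PRODUCTION_UNIT * EVENT_SIZE_BYTES / 1e12
print(storage_per_unit_tb)       # ~2 TB per Production Unit
print(10 * storage_per_unit_tb)  # 10x statistics per year -> ~20 TB/y
```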

Monte Carlo production - CPU

Full GEANT4 simulation of the experimental setup, running on an HP ProLiant DL560 server:
o 4 Intel Xeon 2.3 GHz (32 cores)
o 128 GB ECC RAM

Current MC production rate: 3 Hz/core (3 bunches per second per core)
10 Production Units per year → 37 days per production
CPU power needed: 210 cores, i.e. 7 servers

Use of GRID for MC production under investigation
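A short sketch of the CPU sizing arithmetic on this slide (3 Hz/core, 10 Production Units of 2×10^9 bunches per year, 32 cores per server):

```python
import math

# Sketch of the MC CPU sizing on this slide.
BUNCHES_PER_PRODUCTION_UNIT = 2e9
RATE_HZ_PER_CORE = 3.0            # 3 bunches per second per core
PRODUCTIONS_PER_YEAR = 10
CORES_PER_SERVER = 32

days_per_production = 365 / PRODUCTIONS_PER_YEAR               # ~37 days
core_seconds = BUNCHES_PER_PRODUCTION_UNIT / RATE_HZ_PER_CORE
cores_needed = core_seconds / (days_per_production * 86400)    # ~210 cores
servers_needed = math.ceil(cores_needed / CORES_PER_SERVER)    # 7 servers

print(days_per_production, round(cores_needed), servers_needed)
```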

Costs

Assumptions:
- Disk cost: 0.2 K€/TB
- Tape cost: 0.1 K€/TB
- CPU: 6 K€ per server (32 cores)
- All data (RAW+RECO+ANAL) written to tape
- 10% RAW + 100% RECO + 100% ANAL kept on disk

        Disk                Tape
        TB/y     K€/y       TB/y     K€/y
Data
MC

CPU: 7 servers → 42 K€
Accessories: racks, networking, cables, … → O(10 K€)
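A minimal sketch of the cost model implied by the unit costs above; the per-item disk and tape volumes are left as parameters since they are not reproduced here.

```python
# Sketch of the cost arithmetic on this slide, using the quoted unit costs.
DISK_KEUR_PER_TB = 0.2
TAPE_KEUR_PER_TB = 0.1
SERVER_KEUR = 6.0

def yearly_storage_cost_keur(disk_tb: float, tape_tb: float) -> float:
    """Yearly storage cost in kEUR for the given disk and tape volumes (TB/y)."""
    return disk_tb * DISK_KEUR_PER_TB + tape_tb * TAPE_KEUR_PER_TB

print(7 * SERVER_KEUR)  # CPU: 7 servers -> 42 kEUR
```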

LNF – Central services

Computing Center: informal preliminary meeting with Massimo Pistoni, head of the LNF IT services, on April 1st. Space, power, cooling, UPS, and network are available. No technical problems to host computing resources for PADME.
→ Allocate computer room resources for PADME ASAP

Caveat: through the new INFN-INAF collaboration, LNF will be asked to host computing resources for the CTA/SKA project.

LNF - Disk

A Storage Area Network (SAN) exists:
1. EMC² CX500 (120 disks) + CX300 (60 disks)
2. Hitachi-StorageTek 9985V (up to 240 disks)
3. 2 Brocade DS4900B FC switches (2×32 ports, 4 Gbps)
It can host storage for PADME, but the current SAN infrastructure (servers and switches) is close to saturation: bottlenecks are possible and an upgrade might be needed.

Cloud-based storage is being investigated by INFN (CNAF, LNGS, Torino):
- Good scaling properties (PB and beyond)
- Automatic and configurable redundancy (not RAID-based)
- Lower cost per TB → SAN ~200 €/TB vs. Cloud ~100 €/TB

LNF - Tapes

Two tape libraries are available:
1. IBM 3494: 4 Magstar 3592 drives, 300 GB/tape, 40 MB/s
2. StorageTek L-series (max 1400 slots): 2 STK 9940 drives, 200 GB/tape, 30 MB/s
Both have spare cartridge slots and can be upgraded with TB-class tape drives.

Alternative: buy a new LTO tape library

Format   Capacity (uncompressed)   Transfer rate (uncompressed)   Cost per tape   Cost per TB   Availability
LTO-6    2.5 TB/tape               160 MB/s                       50 €            20 €          Now
LTO-7    6.4 TB/tape               300 MB/s                       (?)             (?)           End 2015

Cost of tape storage: ≤100 €/TB (less for LTO)
Need to evaluate the cost of new tape drives/libraries for the different solutions.
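As a rough illustration of the LTO option, a sketch of the cost-per-TB figure in the table and of the cartridge count for the ~100 TB/y quoted in the Conclusions (assuming uncompressed capacity and no overhead):

```python
# Sketch of the LTO-6 figures in the table above (uncompressed capacity,
# no filesystem or redundancy overhead assumed).
LTO6_CAPACITY_TB = 2.5
LTO6_COST_EUR = 50
DATA_VOLUME_TB_PER_YEAR = 100   # ~100 TB/y, from the Conclusions slide

print(LTO6_COST_EUR / LTO6_CAPACITY_TB)            # 20 EUR/TB, as in the table
print(DATA_VOLUME_TB_PER_YEAR / LTO6_CAPACITY_TB)  # ~40 cartridges per year
```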

LNF - Networking

Internal:
- BTF control room close to LAT room → existing single-mode fiber, 10 Gbps
- BTF experimental area: currently 1 Gbps → no upgrade planned. Should we require one? (See the sketch below.)
- Computer room: central switch with Gbps connectivity

Geographical:
- ATLAS T2 has a dedicated 10 Gbps link
- Other traffic on a 1 Gbps link → will be upgraded to 10 Gbps soon
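On the question of the BTF experimental-area link, a rough check (an assumption, not a number from the slides) of how much of a 1 Gbps link the preliminary ~70 MiB/s L0 estimate would occupy:

```python
# Rough check: fraction of a 1 Gbps link needed for the ~70 MiB/s L0 estimate
# (protocol overhead and safety margins ignored).
L0_RATE_MIB_S = 70
LINK_GBPS = 1.0

required_gbps = L0_RATE_MIB_S * 2**20 * 8 / 1e9
print(required_gbps)               # ~0.59 Gbps
print(required_gbps / LINK_GBPS)   # ~59% of a 1 Gbps link
```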

LNF - Services

Batch system:
- LSF is commonly used (central license)
- Batch farms are managed by each collaboration, with “some support” from IT
- Uses AFS to manage clients’ common area

Database:
- MySQL available, currently used for LNF web services
- Oracle used by INFN central services
- Cloud-based DB-as-a-service being investigated

GRID computing:
- Used (and managed) by ATLAS T2 → M. Antonelli, E. Vilucchi

Controls

Slow control
DAQ monitoring
→ Investigate solutions used by other experiments

Conclusions

Data production:
- Preliminary estimate of DAQ rate: ~70 MiB/s
- Data production close to 100 TB/y

MC production:
- G4 simulation already quite advanced
- Storage requirements close to 40 TB/y
- Investigate use of GRID for MC production

LNF IT resources:
- Well suited to host PADME computing
- Allocate computer room resources ASAP
- Investigate use of existing SAN and alternatives
- Investigate use of existing tape libraries and alternatives

Controls:
- Controls must allow automatic and unattended running of the experiment