Computing for KM3NeT (Il calcolo per KM3NeT) Pasquale Migliozzi, INFN Napoli

The KM3NeT infrastructure (I) The KM3NeT research infrastructure will comprise a deep-sea neutrino telescope at different sites (ARCA), the neutrino-mass-hierarchy detector ORCA, and nodes hosting instrumentation for the Earth and sea science (ESS) communities. ARCA/ORCA, the cable(s) to shore and the shore infrastructure will be constructed and operated by the KM3NeT collaboration.

The KM3NeT infrastructure (II) Both the detection units of ARCA/ORCA and the ESS nodes are connected to shore via a deep-sea cable network. Note that KM3NeT will be constructed at multiple sites (France (FR), Italy (IT) and Greece (GR)) as a distributed infrastructure; the set-up shown in the figure will be installed at each site.

The detector ARCA and ORCA consist of building blocks (BB) of 115 detection units (DUs), vertical structures each supporting 18 optical modules. Each optical module holds 31 3-inch photomultipliers together with readout electronics and instrumentation inside a glass sphere, so one building block contains approximately 64,000 photomultipliers. Each installation site will host an integer number of building blocks. The data transmitted from the detector to the shore station include the PMT signals (time over threshold and timing), calibration and monitoring data.

Phase | Detector layout | No. of DUs | Start of construction | Full detector
Phase 1 | approx. ¼ BB | 31 DUs | End 2014 (1 DU) | Mid 2016
Phase 2 | 2 BB ARCA / 1 BB ORCA | 345 DUs | 2016 |
Final Phase | 6 building blocks | 690 DUs | |
Reference | 1 building block | 115 DUs | |

The KM3NeT Computing Model The KM3NeT computing model (data distribution and data processing system) is based on the LHC computing model. The general concept is a hierarchical data processing system, commonly referred to as a Tier structure.

Data processing steps at the different tiers

Computing Facility | Processing steps | Access
Tier-0 (at detector site) | triggering, online calibration, quasi-online reconstruction | direct access, direct processing
Tier-1 (computing centres) | calibration and reconstruction, simulation | direct access, batch processing and/or grid access
Tier-2 (local computing clusters) | simulation and analysis | varying
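For illustration only, the mapping in this table can be written down as a small configuration structure. The sketch below is ours (names and layout are assumptions, not part of any KM3NeT software) and simply encodes the rows above:

```python
# Minimal sketch of the KM3NeT Tier hierarchy described in the table above.
# Structure and names are illustrative, not taken from KM3NeT software.
TIERS = {
    "Tier-0": {
        "facility": "at detector site",
        "processing": ["triggering", "online calibration", "quasi-online reconstruction"],
        "access": "direct access, direct processing",
    },
    "Tier-1": {
        "facility": "computing centres",
        "processing": ["calibration and reconstruction", "simulation"],
        "access": "direct access, batch processing and/or grid access",
    },
    "Tier-2": {
        "facility": "local computing clusters",
        "processing": ["simulation", "analysis"],
        "access": "varying",
    },
}

if __name__ == "__main__":
    for tier, info in TIERS.items():
        print(f"{tier} ({info['facility']}): {', '.join(info['processing'])}")
```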

Computing centres and pools provide resources for KM3NeT

Tier | Computing Facility | Main Task | Access
Tier-0 | at detector site | online processing | direct access, direct processing
Tier-1 | CC-IN2P3 | general offline processing and central data storage | direct access, batch processing and grid access
Tier-1 | CNAF | | grid access
Tier-1 | ReCaS | general offline processing, interim data storage |
Tier-1 | HellasGrid | reconstruction of data |
Tier-1 | HOU computing cluster | simulation processing | batch processing
Tier-2 | local computing clusters | simulation and analysis | varying

Detailed Computing Model

Data distribution One of the main tasks is the efficient distribution of the data between the different computing centres. CC-IN2P3 and CNAF will act as central storage, i.e. the output of each processing step is transferred to those centres, and the data storage at the two centres is mirrored. For calibration and reconstruction, processing is performed in batches: the full amount of raw data needed for the processing is transferred to the relevant computing centre before the processing starts; where enough storage capacity is available (as is the case e.g. at ReCaS), a rolling window of data, e.g. the last year of data taking, is kept at the computing centre. Simulation needs negligible input data; its output is stored locally and transferred to the main storage. The most fluctuating access will be on the reconstructed data, retrieved from Tier-2 for data analyses.
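As an illustration of the rolling-buffer idea mentioned above (keeping e.g. the last year of raw data at a centre with sufficient storage), here is a minimal sketch; the staging path, file suffix and helper function are hypothetical and not part of the KM3NeT production system:

```python
# Illustrative sketch of a rolling raw-data buffer at a Tier-1 centre
# (e.g. keep only the last year of data taking on local disk).
# Directory layout, file suffix and retention policy are assumptions.
import time
from pathlib import Path

RETENTION_SECONDS = 365 * 24 * 3600          # "last year of data taking"
STAGING_AREA = Path("/storage/km3net/raw")   # hypothetical local staging path

def purge_old_raw_files(staging_area: Path, retention_s: float) -> list:
    """Remove raw files older than the retention window; return what was deleted."""
    now = time.time()
    removed = []
    for f in staging_area.rglob("*.root"):   # hypothetical file suffix
        if now - f.stat().st_mtime > retention_s:
            f.unlink()
            removed.append(f)
    return removed

if __name__ == "__main__":
    deleted = purge_old_raw_files(STAGING_AREA, RETENTION_SECONDS)
    print(f"purged {len(deleted)} files older than one year")
```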

Overview of computing requirements per year

| size (TB) | computing time (HS06.h) | computing resources (HS06)
One Building Block | 1000 | 350 M | 40 k
Phase 1 | 300 | 60 M | 7 k
- first year of operation | 100 | 25 M | 3 k
- second year of operation | 150 | 40 M | 5 k
Phase 2 | 2500 | 1 G | 125 k
Final Phase | 4000 | 2 G | 250 k
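The two computing columns are consistent with simple arithmetic: dividing the yearly computing time by the number of hours in a year (and assuming near-continuous use of the resources) gives approximately the quoted capacity, e.g. 350 MHS06.h / 8760 h ≈ 40 kHS06 for one building block. A short sketch of this cross-check (our own, for illustration):

```python
# Cross-check: required capacity (HS06) ~ yearly computing time (HS06.h) / hours per year,
# assuming the resources are used essentially continuously.
HOURS_PER_YEAR = 365 * 24  # 8760

yearly_time_hs06h = {        # "computing time (HS06.h)" column of the table above
    "One Building Block": 350e6,
    "Phase 1": 60e6,
    "Phase 2": 1e9,
    "Final Phase": 2e9,
}

for phase, t in yearly_time_hs06h.items():
    capacity = t / HOURS_PER_YEAR
    print(f"{phase}: ~{capacity / 1e3:.0f} kHS06")
# -> roughly 40 k, 7 k, 114 k and 228 k HS06, close to the quoted 40 k / 7 k / 125 k / 250 k
#    (the quoted figures include some headroom).
```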

Detailed expectations of necessary storage and computing time for one building block (per processing and per year)

processing stage | size per proc. (TB) | time per proc. (HS06.h) | size per year (TB) | time per year (HS06.h) | periodicity (per year)
Raw Data
Raw Filtered Data | 300 | - | | | 1
Monitoring and Minimum Bias Data | 150 | | | |
Experimental Data Processing
Calibration (incl. Raw Data) | 750 | 24 M | 1500 | 48 M | 2
Reconstructed Data | | 119 M | | 238 M |
DST | 75 | 30 M | | 60 M |
Simulation Data Processing
Air showers | 100 | 14 M | 50 | 7 M | 0.5
atm. muons | | 1 M | 25 | 638 k |
neutrinos | | 22 k | 20 | 220 k | 10
total | 827 | 188 M | 995 | 353 M |
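Where per-processing values, per-year values and the periodicity are all quoted, they are mutually consistent (per year ≈ per processing × processings per year). A minimal check of the fully quoted rows, for illustration:

```python
# Sanity check: per-year value ~ per-processing value * periodicity (per year).
# Only rows of the table above with all three values quoted are included.
rows = {
    # stage: (per_processing, periodicity, per_year_quoted)
    "Calibration size (TB)":      (750,  2,   1500),
    "Calibration time (MHS06.h)": (24,   2,   48),
    "Air showers size (TB)":      (100,  0.5, 50),
    "Air showers time (MHS06.h)": (14,   0.5, 7),
    "Neutrino time (kHS06.h)":    (22,   10,  220),
}

for stage, (per_proc, periodicity, quoted) in rows.items():
    assert abs(per_proc * periodicity - quoted) < 1e-6, stage
print("per-year values match per-processing values times periodicity")
```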

Detailed expectations of necessary storage and computing time for Phase 1 (per processing and per year)

processing stage | size per proc. (TB) | time per proc. (HS06.h) | size per year (TB) | time per year (HS06.h) | periodicity (per year)
Raw Data
Raw Filtered Data | 85 | - | | | 1
Monitoring and Minimum Bias Data | 43 | | | |
Experimental Data Processing
Calibration (incl. Raw Data) | 213 | 3 M | 425 | 6 M | 2
Reconstructed Data | | 15 M | | 31 M |
DST | 21 | 4 M | | 8 M |
Simulation Data Processing
Air showers | 10 | 14 M | | |
atm. muons | 5 | 1 M | | |
neutrinos | | 22 k | 20 | 220 k |
total | 208 | 37 M | 290 | 60 M |

Networking Rough estimate of the required bandwidth:

phase | connection | average data transfer (MB/s) | peak data transfer (MB/s)
Phase 1 | Tier-0 to Tier-1 | 5 | 25
Phase 1 | Tier-1 to Tier-1 | 15 | 500
Phase 1 | Tier-1 to Tier-2 | |
Building Block: 125, 50, 10
Final Phase: 200, 1000, 5000, 100

Note that the connection from Tier-1 to Tier-2 has the largest fluctuation, driven by the data analyses (i.e. by the users).
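The average figures can be cross-checked against the yearly data volumes: shipping the Phase 1 raw filtered data (85 TB/year) continuously corresponds to roughly 3 MB/s, and the ~300 TB/year of one building block to roughly 10 MB/s, of the same order as the quoted Tier-0 to Tier-1 averages. A sketch of that conversion (our own illustration, not an official estimate):

```python
# Convert a yearly data volume into the average bandwidth needed to ship it continuously.
SECONDS_PER_YEAR = 365 * 24 * 3600

def average_mb_per_s(volume_tb_per_year: float) -> float:
    """Average transfer rate in MB/s for a given yearly volume in TB."""
    return volume_tb_per_year * 1e6 / SECONDS_PER_YEAR   # 1 TB = 1e6 MB

print(f"Phase 1 raw filtered data (85 TB/y): {average_mb_per_s(85):.1f} MB/s")
print(f"One building block (300 TB/y):       {average_mb_per_s(300):.1f} MB/s")
# -> about 2.7 MB/s and 9.5 MB/s, comparable to the few-MB/s averages quoted above.
```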

KM3NeT on the GRID VO Central Services are in place. KM3NeT is just starting on the GRID; first use case: CORSIKA simulation.
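As an illustration of how a CORSIKA production could be split into independent grid jobs, the sketch below writes one steering card per run, varying only the run number and random seeds. The keyword subset (RUNNR, NSHOW, PRMPAR, SEED, EXIT) is standard CORSIKA, but the values, file names and job granularity are placeholders, not the actual KM3NeT production configuration:

```python
# Illustrative sketch: split a CORSIKA air-shower production into independent runs,
# each with its own steering card, so they can be submitted as separate grid jobs.
from pathlib import Path

N_RUNS = 10                  # number of grid jobs (assumption)
SHOWERS_PER_RUN = 100_000    # showers per job (assumption)

def write_steering_card(outdir: Path, run: int) -> Path:
    """Write a minimal, placeholder steering card for one CORSIKA run."""
    card = outdir / f"corsika_run{run:06d}.inp"
    card.write_text(
        f"RUNNR   {run}\n"
        f"NSHOW   {SHOWERS_PER_RUN}\n"
        f"PRMPAR  14\n"                 # primary proton (placeholder choice)
        f"SEED    {run} 0 0\n"
        f"SEED    {run + 1} 0 0\n"
        f"EXIT\n"
    )
    return card

if __name__ == "__main__":
    outdir = Path("cards")
    outdir.mkdir(exist_ok=True)
    for run in range(1, N_RUNS + 1):
        print("prepared", write_steering_card(outdir, run))
```

Each card could then be shipped with a grid job that runs CORSIKA and registers the output on the central storage.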

GRID sites supporting the VO KM3NeT

VO Software Manager (SGM) First trial: CORSIKA production

Warsaw Computing Center

ASTERICS

Summary and Conclusion The data distribution model of KM3NeT is based on the LHC computing model. The estimates of the required bandwidths and computing power are well within current standards. A high-bandwidth Ethernet link to the shore station is necessary for data archival and remote operation of the infrastructure. KM3NeT-INFN has already addressed its requests to CS2. KM3NeT is already operational on the GRID. Negotiations are in progress with important computing centres (e.g. Warsaw). We are also active on future Big Data challenges, e.g. ASTERICS.