1 Il calcolo per KM3NeT (Computing for KM3NeT)
Pasquale Migliozzi, INFN Napoli

2 The KM3NeT infrastructure (I)
The KM3NeT research infrastructure will comprise a deep-sea neutrino telescope at different sites (ARCA), the neutrino mass hierarchy detector ORCA, and nodes with instrumentation for measurements by the earth and sea science (ESS) communities. ARCA/ORCA, the cable(s) to shore and the shore infrastructure will be constructed and operated by the KM3NeT collaboration.

3 The KM3NeT infrastructure (II)
Both the detection units of ARCA/ORCA and the ESS nodes are connected to shore via a deep-sea cable network. Note that KM3NeT will be constructed as a distributed infrastructure at multiple sites: France (FR), Italy (IT) and Greece (GR). The set-up shown in the figure will be installed at each of the sites.

4 The detector
ARCA and ORCA will consist of building blocks (BB), each containing 115 detection units (DUs): vertical structures supporting 18 optical modules each. Each optical module holds 31 3-inch photomultipliers together with readout electronics and instrumentation within a glass sphere. One building block thus contains approximately 65,000 photomultipliers. Each installation site will contain an integer number of building blocks. The data transmitted from the detector to the shore station include the PMT signals (time-over-threshold and timing), calibration and monitoring data.

Phase       | Detector Layout       | No. of DUs | Start of construction | Full Detector
Phase 1     | approx. ¼ BB          | 31 DUs     | End 2014 (1 DU)       | Mid 2016
Phase 2     | 2 BB ARCA / 1 BB ORCA | 345 DUs    | 2016                  |
Final Phase | 6 building blocks     | 690 DUs    |                       |
Reference   | 1 building block      | 115 DUs    |                       |
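The channel counts quoted on this slide can be cross-checked with a few lines of arithmetic; the following is an illustrative sketch only, using the numbers from the slide and table above.

```python
# Cross-check of the channel counts quoted on this slide (illustrative only).

DUS_PER_BUILDING_BLOCK = 115   # detection units per building block
DOMS_PER_DU = 18               # optical modules per detection unit
PMTS_PER_DOM = 31              # 3-inch PMTs per optical module

pmts_per_block = DUS_PER_BUILDING_BLOCK * DOMS_PER_DU * PMTS_PER_DOM
print(f"PMTs per building block: {pmts_per_block:,}")   # 64,170, i.e. ~65,000

# Channel counts for the construction phases listed in the table above
phases = {"Phase 1": 31, "Phase 2": 345, "Final Phase": 690}
for name, n_dus in phases.items():
    print(f"{name}: {n_dus * DOMS_PER_DU * PMTS_PER_DOM:,} PMTs")
```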

5 The KM3NeT Computing Model
The KM3NeT computing model (data distribution and data processing system) is based on the LHC computing model. The general concept is a hierarchical data processing system, commonly referred to as a Tier structure.

6 Data processing steps at the different tiers
Computing Facility               | Processing steps                                             | Access
Tier-0: at detector site         | triggering, online calibration, quasi-online reconstruction | direct access, direct processing
Tier-1: computing centres        | calibration and reconstruction, simulation                   | direct access, batch processing and/or grid access
Tier-2: local computing clusters | simulation and analysis                                      | varying

7 Computing centres and pools provide resources for KM3NeT
Tier   | Computing Facility       | Main Task                                           | Access
Tier-0 | at detector site         | online processing                                   | direct access, direct processing
Tier-1 | CC-IN2P3                 | general offline processing and central data storage | direct access, batch processing and grid access
Tier-1 | CNAF                     |                                                     | grid access
Tier-1 | ReCaS                    | general offline processing, interim data storage    |
Tier-1 | HellasGrid               | reconstruction of data                              |
Tier-1 | HOU computing cluster    | simulation processing                               | batch processing
Tier-2 | local computing clusters | simulation and analysis                             | varying

8 Detailed Computing Model

9 Data distribution
One of the main tasks is the efficient distribution of the data between the different computing centres. CC-IN2P3 and CNAF will act as central storage, i.e. the output data of each processing step are transferred to these centres, and the data storage at the two centres is mirrored.
For calibration and reconstruction, processing is performed in batches. The full amount of raw data needed for a processing is transferred to the relevant computing centre before the processing starts; provided enough storage capacity is available (as is the case e.g. at ReCaS), a rolling subset is kept at the computing centre, e.g. the last year of data taking.
For simulation, only negligible input data is needed. The output data are stored locally and transferred to the main storage. The most strongly fluctuating access will be to the reconstructed data, retrieved from Tier-2 for data analyses.
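As an illustration of the "rolling" raw-data buffer mentioned above, the minimal sketch below selects files older than one year that already have a replica at the central storage and are therefore safe to remove locally. The RawFile record, the paths and the one-year retention window are placeholders for the example, not actual KM3NeT tooling.

```python
# Minimal sketch of a rolling raw-data buffer at a Tier-1 with limited storage:
# keep only the last year of raw data, and drop older files once they have a
# replica at the central storage (CC-IN2P3/CNAF). All names are placeholders.

import datetime as dt
from dataclasses import dataclass

RETENTION = dt.timedelta(days=365)

@dataclass
class RawFile:
    path: str
    run_start: dt.datetime
    replicated_to_central: bool   # True once a copy exists at CC-IN2P3/CNAF

def purge_candidates(files, now):
    """Files outside the retention window that are safe to delete locally."""
    return [f for f in files
            if f.replicated_to_central and now - f.run_start > RETENTION]

# Example with dummy entries
files = [
    RawFile("/data/raw/run00101.root", dt.datetime(2015, 1, 10), True),
    RawFile("/data/raw/run00950.root", dt.datetime(2016, 5, 2), True),
]
for f in purge_candidates(files, now=dt.datetime(2016, 6, 1)):
    print("eligible for local deletion:", f.path)
```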

10 Overview of computing requirements per year
                             | size (TB) | computing time (HS06.h) | computing resources (HS06)
One Building Block           | 1000      | 350 M                   | 40 k
Phase 1                      | 300       | 60 M                    | 7 k
  - first year of operation  | 100       | 25 M                    | 3 k
  - second year of operation | 150       | 40 M                    | 5 k
Phase 2                      | 2500      | 1 G                     | 125 k
Final Phase                  | 4000      | 2 G                     | 250 k
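The "computing resources" column is, to first order, the yearly computing time divided by the number of hours in a year; the sketch below reproduces the quoted figures up to the headroom included in the table (the script and its rounding are illustrative only).

```python
# Back-of-envelope conversion from yearly computing time (HS06.h) to a
# continuously required capacity (HS06): divide by the hours in a year.
# The figures quoted in the table include some headroom on top of this.

HOURS_PER_YEAR = 365.25 * 24   # ~8766 h

yearly_time = {                 # computing time per year in HS06.h
    "One Building Block": 350e6,
    "Phase 1": 60e6,
    "Phase 2": 1e9,
    "Final Phase": 2e9,
}

for name, hs06_hours in yearly_time.items():
    print(f"{name}: ~{hs06_hours / HOURS_PER_YEAR / 1e3:.0f} kHS06 sustained")
# -> ~40, ~7, ~114, ~228 kHS06, to be compared with 40 k, 7 k, 125 k, 250 k
```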

11 Detailed expectations of necessary storage and computing time for one building block (per processing and per year)

processing stage                   | size per proc. (TB) | time per proc. (HS06.h) | size per year (TB) | time per year (HS06.h) | periodicity (per year)
Raw Data
  Raw Filtered Data                | 300                 | -                       |                    |                        | 1
  Monitoring and Minimum Bias Data | 150                 |                         |                    |                        |
Experimental Data Processing
  Calibration (incl. Raw Data)     | 750                 | 24 M                    | 1500               | 48 M                   | 2
  Reconstructed Data               |                     | 119 M                   |                    | 238 M                  |
  DST                              | 75                  | 30 M                    |                    | 60 M                   |
Simulation Data Processing
  Air showers                      | 100                 | 14 M                    | 50                 | 7 M                    | 0.5
  atm. muons                       |                     | 1 M                     | 25                 | 638 k                  |
  neutrinos                        |                     | 22 k                    | 20                 | 220 k                  | 10
total                              | 827                 | 188 M                   | 995                | 353 M                  |
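For the rows where the size and time per processing and the periodicity are all quoted, the per-year columns are simply the per-processing values scaled by the periodicity; a small consistency check under that reading (numbers copied from the table above, illustrative only):

```python
# Per-year values = per-processing values x periodicity, checked for the rows
# where the table above quotes all of them (numbers copied from the table).

rows = {
    # name: (size per proc. in TB or None, time per proc. in HS06.h, periodicity)
    "Calibration (incl. Raw Data)": (750, 24e6, 2),
    "Air showers": (100, 14e6, 0.5),
    "neutrinos": (None, 22e3, 10),
}

for name, (size, time, periodicity) in rows.items():
    size_str = "-" if size is None else f"{size * periodicity:g} TB/yr"
    print(f"{name}: {size_str}, {time * periodicity:.3g} HS06.h/yr")
# -> 1500 TB/yr and 4.8e+07 HS06.h/yr (calibration), 50 TB/yr and 7e+06
#    (air showers), 2.2e+05 HS06.h/yr (neutrinos), as in the table.
```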

12 Detailed expectations of necessary storage and computing time for Phase 1 (per processing and per year)

processing stage                   | size per proc. (TB) | time per proc. (HS06.h) | size per year (TB) | time per year (HS06.h) | periodicity (per year)
Raw Data
  Raw Filtered Data                | 85                  | -                       |                    |                        | 1
  Monitoring and Minimum Bias Data | 43                  |                         |                    |                        |
Experimental Data Processing
  Calibration (incl. Raw Data)     | 213                 | 3 M                     | 425                | 6 M                    | 2
  Reconstructed Data               |                     | 15 M                    |                    | 31 M                   |
  DST                              | 21                  | 4 M                     |                    | 8 M                    |
Simulation Data Processing
  Air showers                      | 10                  | 14 M                    |                    |                        |
  atm. muons                       | 5                   | 1 M                     |                    |                        |
  neutrinos                        |                     | 22 k                    | 20                 | 220 k                  |
total                              | 208                 | 37 M                    | 290                | 60 M                   |

13 Networking Rough estimate of the required bandwidth.
phase          | connection       | average data transfer (MB/s) | peak data transfer (MB/s)
Phase 1        | Tier-0 to Tier-1 | 5                            | 25
               | Tier-1 to Tier-1 | 15                           | 500
               | Tier-1 to Tier-2 |                              |
Building Block | Tier-0 to Tier-1 |                              | 125
               | Tier-1 to Tier-1 | 50                           |
               | Tier-1 to Tier-2 | 10                           |
Final Phase    | Tier-0 to Tier-1 | 200                          | 1000
               | Tier-1 to Tier-1 |                              | 5000
               | Tier-1 to Tier-2 |                              | 100
Note that the connection from Tier-1 to Tier-2 shows the largest fluctuations, driven by the analyses of data (i.e. by users).
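The quoted averages follow, to first order, from spreading the yearly raw-data volume over a calendar year; a rough sketch for the Phase 1 Tier-0 to Tier-1 link, using the 85 TB + 43 TB per year from the Phase 1 table (purely illustrative arithmetic):

```python
# Rough reproduction of the Phase 1 average Tier-0 to Tier-1 rate: spread the
# yearly raw-data volume (raw filtered + monitoring/minimum-bias data,
# 85 TB + 43 TB from the Phase 1 table) evenly over one calendar year.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

raw_tb_per_year = 85 + 43                                  # TB per year to Tier-1
avg_mb_per_s = raw_tb_per_year * 1e6 / SECONDS_PER_YEAR    # 1 TB = 1e6 MB

print(f"average Tier-0 -> Tier-1 rate: {avg_mb_per_s:.1f} MB/s")   # ~4 MB/s vs 5 MB/s quoted
```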

14 KM3NeT on the GRID
VO Central Services
KM3NeT is just starting on the GRID; first use case: CORSIKA simulation.
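As an illustration of the CORSIKA use case, the sketch below shows one common way to split an air-shower production into independent grid jobs: one steering card per job, differing only in run number and random seeds. The physics parameters, file names and job counts are placeholders, not the actual KM3NeT production configuration.

```python
# Sketch of splitting a CORSIKA air-shower production into independent grid
# jobs: one steering card per job, differing only in run number and random
# seeds. Physics parameters, file names and job counts are placeholders.

CARD_TEMPLATE = """\
RUNNR   {run}
NSHOW   {nshow}
PRMPAR  14                      placeholder primary particle code
ERANGE  1.E3 1.E8               placeholder energy range (GeV)
SEED    {seed1} 0 0
SEED    {seed2} 0 0
EXIT
"""

def write_cards(n_jobs, showers_per_job=10000, first_run=1):
    """Write one CORSIKA input card per grid job."""
    for i in range(n_jobs):
        run = first_run + i
        card = CARD_TEMPLATE.format(run=run, nshow=showers_per_job,
                                    seed1=2 * run + 1, seed2=2 * run + 2)
        with open(f"corsika_run{run:06d}.inp", "w") as out:
            out.write(card)

if __name__ == "__main__":
    write_cards(n_jobs=5)   # each card would be shipped with one grid job
```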

15 GRID sites supporting the VO KM3NeT

16 VO Software Manager (SGM)
First trial: CORSIKA production

17 Warsaw Computing Center

18 ASTERICS

19 Summary and Conclusion
The data distribution model of KM3NeT is based on the LHC computing model. The estimates of the required bandwidth and computing power are well within current standards. A high-bandwidth Ethernet link to the shore station is necessary for data archival and remote operation of the infrastructure.
KM3NeT-INFN has already addressed its requests to CS2.
KM3NeT is already operational on the GRID.
Negotiations are in progress with important computing centres (e.g. Warsaw).
We are also active on future Big Data challenges, e.g. ASTERICS.

