A proposal for the KM3NeT Computing Model
Pasquale Migliozzi, INFN - Napoli

Detector at the Capo Passero site (since the beginning of 2015):
- KM3NeT strings: 18 multi-PMT OMs per string, each OM with 31 3" PMTs
- KM3NeT-Ita towers: 8 towers, 14 floors each, with 6 OMs (one 10" PMT each) per floor
(figure: site layout, with 36 m, 8 m and 20 m spacings indicated)
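A quick back-of-the-envelope check of the readout channel counts implied by the numbers above; only the per-string and per-tower figures come from the slide, everything else is arithmetic.

```python
# Readout channels implied by the slide's detector parameters.

# KM3NeT string: 18 multi-PMT OMs, 31 x 3" PMTs each
pmts_per_string = 18 * 31            # = 558 channels per string

# KM3NeT-Ita tower: 14 floors, 6 OMs (one 10" PMT each) per floor
pmts_per_tower = 14 * 6              # = 84 channels per tower

print(f"PMTs per KM3NeT string: {pmts_per_string}")
print(f"PMTs per KM3NeT-Ita tower: {pmts_per_tower}")
print(f"PMTs in the 8 KM3NeT-Ita towers: {8 * pmts_per_tower}")
```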

Optical TriDAS framework: trigger on a full detector snapshot (the 200 ms Time Slice).
- Each Time Slice TS_i covers ΔT = 200 ms, while a muon event spans only ΔT ~ O(1 µs) within it.
- HM (Hit Manager): receives the data stream from a fraction of the detector, for all Time Slices.
- TCPU (Trigger CPU): collects the data from the full detector for one slice of time (i.e. the Time Slice); TCPU_i handles TS_i, TCPU_i+1 handles TS_i+1, and so on.
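A minimal sketch of the Time Slice bookkeeping described above: hits are grouped into 200 ms slices and each complete slice is handed to one TCPU. The names (`Hit` tuples, `slice_index`, `build_time_slices`) are illustrative and not the actual TriDAS API.

```python
from collections import defaultdict

TIME_SLICE_NS = 200_000_000  # 200 ms expressed in nanoseconds

def slice_index(hit_time_ns: int) -> int:
    """Index of the Time Slice a hit belongs to."""
    return hit_time_ns // TIME_SLICE_NS

def build_time_slices(hits):
    """Group (time_ns, pmt_id, charge) tuples into Time Slices.

    In TriDAS each Hit Manager only sees a fraction of the detector;
    here everything is simply merged, mimicking what a TCPU holds once
    all detector fragments for slice i have arrived.
    """
    slices = defaultdict(list)
    for hit in hits:
        slices[slice_index(hit[0])].append(hit)
    return slices

# Toy usage: two hits fall into slice 0, one into slice 1.
hits = [(5_000, 1, 12), (199_999_999, 2, 8), (200_000_001, 3, 20)]
for ts, ts_hits in sorted(build_time_slices(hits).items()):
    print(f"TS {ts}: {len(ts_hits)} hits -> dispatched to TCPU {ts}")
```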

Optical DAQ: all data to shore.
- KM3NeT-Eu hit: PMT info + absolute time + charge = 6 bytes
- KM3NeT-Ita hit: PMT info + absolute time + charge + waveform samples = 46 bytes
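A hedged illustration of the two per-hit sizes quoted above, using Python's struct module. The field widths (1 byte of PMT info, 4 bytes of absolute time, 1 byte of charge, 40 bytes of waveform samples) are assumptions chosen only to reproduce the 6 and 46 byte totals; they are not the official data formats.

```python
import struct

# Assumed layouts (illustrative only):
#   KM3NeT-Eu  hit: PMT info (1 B) + absolute time (4 B) + charge (1 B)  = 6 B
#   KM3NeT-Ita hit: same header + 40 B of waveform samples               = 46 B
EU_HIT = struct.Struct("<B I B")       # 6 bytes
ITA_HIT = struct.Struct("<B I B 40s")  # 46 bytes

print(EU_HIT.size, ITA_HIT.size)       # -> 6 46

packed = ITA_HIT.pack(17, 123_456_789, 42, bytes(40))
pmt, t_abs, charge, waveform = ITA_HIT.unpack(packed)
```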

Filtered data, KM3NeT-Eu vs KM3NeT-Ita: conservative input parameters and the resulting incoming and post-trigger throughputs (the table of values on the slide is not in the transcript).
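The incoming (pre-trigger) throughput follows from hit size x per-PMT rate x number of channels. Since the slide's actual input parameters are not in the transcript, the rates and channel count below are purely illustrative placeholders, not the conservative values used on the slide.

```python
def incoming_throughput_gbps(n_pmts, singles_rate_hz, hit_size_bytes):
    """All-data-to-shore throughput before triggering, in Gb/s."""
    return n_pmts * singles_rate_hz * hit_size_bytes * 8 / 1e9

# Illustrative numbers only (NOT the slide's input parameters):
# e.g. 100_000 channels at a 5 kHz singles rate per PMT.
print(incoming_throughput_gbps(100_000, 5e3, 6))    # 6-byte hits (KM3NeT-Eu format)
print(incoming_throughput_gbps(100_000, 5e3, 46))   # 46-byte hits (KM3NeT-Ita format)
```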

Acoustic DAQ (for the positioning):
- KM3NeT-Eu: 1 piezo sensor per DOM and 1 hydrophone per string base; total = 18 piezos + 1 hydrophone per string
- KM3NeT-Ita: 2 hydrophones per floor and 1 hydrophone per tower base; total = 29 hydrophones per tower
- Constant sampling rate: 12 Mbps per sensor
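The raw acoustic bandwidth follows directly from the sensor counts and the constant 12 Mbps per-sensor rate quoted above; a quick arithmetic check:

```python
MBPS_PER_SENSOR = 12  # constant sampling rate per acoustic sensor (from the slide)

# KM3NeT-Eu: 18 piezo sensors + 1 hydrophone per string
sensors_per_string = 18 + 1
# KM3NeT-Ita: 2 hydrophones x 14 floors + 1 hydrophone at the tower base
sensors_per_tower = 2 * 14 + 1

print(sensors_per_string * MBPS_PER_SENSOR, "Mbps per string")   # 228 Mbps
print(sensors_per_tower * MBPS_PER_SENSOR, "Mbps per tower")     # 348 Mbps
```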

Computing Model: Requirements per Building Block
Based on ANTARES experience: we have to rethink what to store!

Raw Data:
- Raw Filtered Data: 300 TB/year; periodicity 1/year
- Monitoring and Minimum Bias Data: 300 TB/year; periodicity 1/year (assume same size as raw data)
Experimental Data Processing:
- Calibration (incl. Raw Data): periodicity 2/year; from ANTARES: 2.5x raw data size
- Reconstructed Data: periodicity 2/year; from ANTARES: 3.5x raw data size
- DST: periodicity 2/year; from ANTARES: 3x raw data size, 25% of time
Simulation Data Processing:
- Air showers: 50 TB/year, 7 M HS06.h/year; periodicity 0.5/year (10 month livetime)
- Atmospheric muons: 25 TB/year, 638 k HS06.h/year; periodicity 0.5/year (10 month livetime)
- Neutrinos: 20 TB/year, 220 k HS06.h/year; periodicity 10/year (per analysis)
Total: 875 TB for FR; for Phase 1: scaling ~ 1/3
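A minimal sketch of how the ANTARES-derived scaling factors quoted in the table translate into yearly storage, with the Phase-1 ~1/3 scaling applied at the end. Only the factors actually given above (2.5x, 3.5x, 3x raw data size; 300 TB/year of raw filtered data) are used; the numbers it prints are an estimate, not the slide's totals.

```python
RAW_FILTERED_TB_PER_YEAR = 300   # from the table, per building block

# ANTARES-derived scaling factors quoted in the table
antares_factors = {
    "calibration (incl. raw data)": 2.5,
    "reconstructed data": 3.5,
    "DST": 3.0,
}

derived = {k: f * RAW_FILTERED_TB_PER_YEAR for k, f in antares_factors.items()}
for stage, size in derived.items():
    print(f"{stage}: {size:.0f} TB/year")

# Phase 1 is expected to scale by roughly 1/3
phase1 = {k: v / 3 for k, v in derived.items()}
```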

Computing Model Workshop – Bologna, Feb. 5th/6th
Workshop of the Computing and Software WG, joined by experts from the computing centers.
- Aim: prepare a computing model for KM3NeT
- Decision: prepare a proposal based on the preliminary KM3-ITA model
- Mainly GRID based, but include direct/batch access where available
- Use all available resources: CC-Lyon, CNAF, HellasGRID, HOU CC, ReCaS
- The choice of GRID is not totally for free; there is a lot of work to be done

General Scheme
Tier-like structure, GRID + direct (batch) access.
- Tier-0 (FR, IT, at/near the detector site): DAQ, short-term data storage, data transfer to permanent storage at the Tier-1s
- Tier-1 (currently CC-Lyon, CNAF, HOU CC, ReCaS, HellasGRID):
  - permanent data storage (disk, tape) at CC-Lyon and CNAF
  - data processing (CC-Lyon, ReCaS)
  - data reprocessing (CC-Lyon, CNAF, ReCaS)
  - simulation (CC-Lyon, HOU, ReCaS)
- Data transfer between the computing centers is GRID based (where applicable)
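One compact way to capture the tier layout above, e.g. for documentation or for driving site-selection logic in scripts. The site names and roles are taken from the slide; the data structure itself is only an illustration, not part of the proposal.

```python
computing_model = {
    "Tier-0": {
        "sites": ["FR detector site", "IT detector site"],
        "roles": ["DAQ", "short-term storage", "transfer to Tier-1 permanent storage"],
    },
    "Tier-1": {
        "CC-Lyon":    ["permanent storage", "processing", "reprocessing", "simulation"],
        "CNAF":       ["permanent storage", "reprocessing"],
        "ReCaS":      ["processing", "reprocessing", "simulation"],
        "HOU CC":     ["simulation"],
        "HellasGRID": [],  # listed as Tier-1, specific role not detailed on the slide
    },
}

# e.g. list all Tier-1 sites that can run simulation
sim_sites = [s for s, roles in computing_model["Tier-1"].items() if "simulation" in roles]
print(sim_sites)   # ['CC-Lyon', 'ReCaS', 'HOU CC']
```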

KM3NeT-ITA Computing Model – Data

KM3NeT-ITA Computing Model – Simulations

KM3NeT-ITA: User Access

Ingredients to calculate the computing requirements
- Data: 250 Hz and 1100 Hz trigger rate (IT, FR site), based on muon rates (>100 GeV) with 100% trigger efficiency and purity
- Data processing: scale ANTARES numbers; store all we have, but be resource efficient (don't store data that can easily be re-calculated)
- MC: use the numbers provided by the simulations WG
- CPUs are not a problem; disk/tape space might be
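A hedged back-of-the-envelope for the triggered data volume implied by the quoted rates (250 Hz and 1100 Hz). The per-event size is not given on this slide and is left as an explicit, purely illustrative assumption.

```python
SECONDS_PER_YEAR = 3600 * 24 * 365

def triggered_volume_tb_per_year(trigger_rate_hz, event_size_kb):
    """Yearly triggered-data volume in TB for a given rate and event size."""
    return trigger_rate_hz * event_size_kb * 1e3 * SECONDS_PER_YEAR / 1e12

# Illustrative event size of 10 kB (an assumption, not a slide number):
for site, rate in (("IT", 250), ("FR", 1100)):
    print(site, f"{triggered_volume_tb_per_year(rate, 10):.0f} TB/year")
```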

To do list
- Harmonize (symmetric) layout and data flow (in principle: no differences between FR and IT)
- Add direct/batch access to the CC-Lyon and HOU computing clusters
- Add "Tier-2s" (i.e. local computing clusters of the institutes)
- Add networking requirements
Further timeline:
- first draft of the document by the beginning of March
- get approval of the collaboration
- present at the APPEC computing meeting (mid-April, Bologna)
- derive an application for INFN commission II (for 2015, IT part)