Parallel Neutrino Triggers using GPUs for an underwater telescope KM3-NEMO
Bachir Bouhadef, Mauro Morganti, G. Terreni (KM3-Italy Collaboration)
INFN Pisa & Physics Department of Pisa

Outline
- The NEMO Phase II Tower
- The DAQ system for the NEMO Tower
- Feasibility of a CPU-GPU DAQ for online muon-track selection
- A method for parallelizing the online trigger software of the NEMO Phase II Tower
- Tests of the muon triggers on GPUs
- A proposal for the KM3NeT-Italy tower trigger data handling

NEMO Phase II Tower and KM3-NEMO Tower
- NEMO Phase II Tower: 8 floors, 4 PMTs per floor, 8 m floor-arm length (32 PMTs in total).
- KM3-NEMO Tower: 14 floors (Floor 1 ... Floor 14), 6 PMTs per floor, 6 m floor arm (84 PMTs in total).
- Background data rate of the Phase II tower: 32 PMTs × ~55 kHz ≈ 1.7 Mhit/s.

Trigger and Data Acquisition System (NEMO Phase II)
(after T. Chiarusi & F. Simeone, VLVnT 2013)

Trigger and Data Acquisition System (NEMO Phase II)
[Diagram: SC, FC and AFC data streams; onshore, the Hit Managers (HM 0, HM 1) send Time Slices (TS 0, TS 1) through a Gbit switch to the trigger CPUs (TCPU 0, TCPU 1).]
A TS (Time Slice) is 200 ms long. The trigger running in the TCPU time-sorts the PMT hits and applies a charge threshold and time coincidences (simple coincidence, floor coincidence).
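The two seeds named above can be written down compactly. The following is a minimal host-side sketch, not the collaboration's actual TCPU code: it applies a charge threshold and a simple coincidence, here taken as two hits on different PMTs of the same floor within a small window. The 20 ns window and the 3.5 pe threshold are illustrative assumptions.

```cuda
// Minimal sketch of the two basic trigger seeds on time-sorted hits:
// a charge threshold and a same-floor coincidence between different PMTs.
#include <cstdint>
#include <vector>

struct Hit {
    double   t;        // hit time within the Time Slice [ns]
    float    charge;   // integrated charge [photo-electrons]
    uint16_t pmt;      // PMT identifier
    uint16_t floor;    // floor identifier
};

// Hits are assumed to be already time-sorted, as done by the TCPU.
std::vector<size_t> chargeAndSimpleCoincidence(const std::vector<Hit>& hits,
                                               float  chargeThr = 3.5f,
                                               double coincWin  = 20.0 /*ns*/)
{
    std::vector<size_t> tagged;
    for (size_t i = 0; i < hits.size(); ++i) {
        if (hits[i].charge > chargeThr) tagged.push_back(i);   // charge threshold seed
        for (size_t j = i + 1; j < hits.size() &&
                               hits[j].t - hits[i].t <= coincWin; ++j) {
            // simple coincidence: two different PMTs on the same floor
            if (hits[j].floor == hits[i].floor && hits[j].pmt != hits[i].pmt) {
                tagged.push_back(i);
                tagged.push_back(j);
            }
        }
    }
    return tagged;   // indices of hits participating in a trigger seed (may repeat)
}
```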

Why GPUs? A scalable programming model
A GPU uses blocks and threads for parallel programming.
[Diagram labels: SC, AFC, Tower, D.U.]
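As an illustration of the block/thread model mentioned above, the minimal CUDA sketch below launches one thread per PMT hit and tiles the whole hit array with blocks, so the same kernel scales with the data volume. Kernel and variable names are invented for the example.

```cuda
// One thread per hit; blocks cover the full array regardless of its size.
#include <cuda_runtime.h>

__global__ void subtractTimeOffset(const double* tIn, double* tOut,
                                   double offset, int nHits)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < nHits) tOut[i] = tIn[i] - offset;        // one hit per thread
}

int main()
{
    const int nHits = 1 << 20;
    double *dIn, *dOut;
    cudaMalloc(&dIn,  nHits * sizeof(double));
    cudaMalloc(&dOut, nHits * sizeof(double));

    int threads = 256;                                // threads per block
    int blocks  = (nHits + threads - 1) / threads;    // blocks to cover all hits
    subtractTimeOffset<<<blocks, threads>>>(dIn, dOut, 0.0, nHits);
    cudaDeviceSynchronize();

    cudaFree(dIn); cudaFree(dOut);
    return 0;
}
```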

A new parallel trigger for muon detection based on the time differences of the muon hits
- Most of the hits of a muon track fall within a fixed time window (TW).
- At least 5 hits on different PMTs are needed to reconstruct the muon track.
- The trigger therefore looks for at least N hits within a fixed time window.
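One natural way to parallelize this search, sketched below under the assumption of time-sorted hits, is to let each GPU thread open a window at one hit and count the hits that follow inside the window (for example N = 7 hits within 1000 ns, as in the N7TW1000 seed used later). This is an illustration of the parallelization, not the collaboration's exact kernel.

```cuda
// Each thread opens a time window at one hit and counts the hits inside it.
// The requirement that hits come from different PMTs is omitted for brevity.
#include <cuda_runtime.h>

__global__ void nHitsInWindow(const double* t,      // time-sorted hit times [ns]
                              int nHits,
                              double window,        // e.g. 1000 ns
                              int minHits,          // e.g. 7
                              int* flag)            // per-hit trigger flag
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nHits) return;

    int count = 1;                                  // the window-opening hit itself
    for (int j = i + 1; j < nHits && t[j] - t[i] <= window; ++j)
        ++count;

    flag[i] = (count >= minHits) ? 1 : 0;           // candidate muon time window
}
```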


We propose a DAQ system using a CPU-GPU architecture
[Diagram: the data flow through a Gbit switch to the onshore CPU-GPU trigger machines and the storage unit.]

TGPU-CPU
The TCPU is replaced by a TGPU-CPU node; every second the TGPU-CPU receives 5 Time Slices of 200 ms each.

CPU work
Step 1: the 5 TTS (Tower Time Slices, TTS0 ... TTS4) are received from the network thread.
Step 2: 5 CPU threads place the PMT hits into the correct time intervals and time-order the hits per thread; the resulting structure is then ready to be processed on the GPU.
- The number of hits per interval is not fixed, so a new fixed-size structure must be prepared for the GPU.
- The number of threads must be a multiple of 5 and of 32.
- Since the number of hits per thread cannot be predicted, a maximum number of hits per thread is fixed at 3 or 6 times the nominal rate.
- The edge effects between threads and between TTSs are also taken into account.
- Threads with only a few hits should be avoided by choosing an optimal thread time interval.
(A sketch of this repacking step is given after this list.)
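A minimal host-side sketch of the repacking step is given below. The interval length, the capacity factor of 3 times the expected occupancy, and all structure and function names are illustrative assumptions, not the actual KM3NeT-Italy code.

```cuda
// Bin the hits of the five 200 ms Tower Time Slices into fixed-capacity
// per-thread time intervals, so the GPU can address them with a regular layout.
#include <cmath>
#include <cstdint>
#include <vector>

struct Hit { double t; float charge; uint16_t pmt; };   // t measured from the TTS start [ns]

struct GpuHitBuffer {
    int nIntervals;               // total number of per-thread time intervals
    int maxHitsPerInterval;       // fixed capacity, e.g. 3 x expected occupancy
    std::vector<Hit> hits;        // nIntervals * maxHitsPerInterval slots
    std::vector<int> nHits;       // actual number of hits in each interval
};

GpuHitBuffer packForGpu(const std::vector<std::vector<Hit>>& tts,  // 5 TTS, time-sorted
                        double ttsLengthNs   = 0.2e9,              // 200 ms
                        double intervalNs    = 50e3,               // thread time interval
                        double nominalRateHz = 55e3, int nPmts = 32)
{
    GpuHitBuffer buf;
    int intervalsPerTts = static_cast<int>(std::ceil(ttsLengthNs / intervalNs));
    buf.nIntervals = intervalsPerTts * static_cast<int>(tts.size());
    double expected = nominalRateHz * nPmts * intervalNs * 1e-9;   // hits expected per interval
    buf.maxHitsPerInterval = static_cast<int>(3.0 * expected);     // safety factor 3
    buf.hits.resize(static_cast<size_t>(buf.nIntervals) * buf.maxHitsPerInterval);
    buf.nHits.assign(buf.nIntervals, 0);

    for (size_t s = 0; s < tts.size(); ++s) {
        for (const Hit& h : tts[s]) {
            int local = static_cast<int>(h.t / intervalNs);        // interval inside this TTS
            int iv = static_cast<int>(s) * intervalsPerTts
                   + std::min(local, intervalsPerTts - 1);
            int& n = buf.nHits[iv];
            if (n < buf.maxHitsPerInterval)                        // drop overflow beyond capacity
                buf.hits[static_cast<size_t>(iv) * buf.maxHitsPerInterval + n++] = h;
        }
    }
    return buf;
}
```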

GPU work
Step 1: sort all PMT hits with a classical algorithm (shell sort).
Step 2: the structure (TTS0 ... TTS3) is ready for trigger tagging.
Any of the possible triggers can be implemented at L1; according to the L0+L1 efficiencies, the best one is chosen to tag the events to be saved. The tested trigger combinations are (see the sketch after this list):
- 1 (L0): N7TW1000 (at least 7 hits within a 1000 ns time window)
- (L0): SC or AFC
- 2 (L1): N7TW1000 & SC
- 3 (L1): N7TW1000 & AFC
- 4 (L2): N7TW1000 & SC & AFC
- 5 (L2): N7TW1000 & ((SC & AFC) || (SC > 1) || (AFC > 1))
- 6 (L2): N7TW1000 & (SC || AFC)
- 7 (L0): Charge > Charge_THRHD
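Once the per-window primitives (the N7TW1000 flag, the SC and AFC counts, and the charge flag) are available, the listed combinations reduce to simple boolean logic. The sketch below shows one possible encoding of that tagging step; the structure and function names are assumptions, not the actual trigger code.

```cuda
// Evaluate the numbered trigger combinations for one time window and return a bit mask
// (bit 0 -> trigger 1, ..., bit 6 -> trigger 7). The unnumbered "SC or AFC" seed is sc || afc.
#include <cstdint>

struct WindowPrimitives {
    bool n7tw1000;        // >= 7 hits within 1000 ns
    int  scCount;         // number of SC coincidences in the window
    int  afcCount;        // number of AFC coincidences in the window
    bool chargeOverThr;   // at least one hit above the charge threshold
};

uint32_t tagWindow(const WindowPrimitives& p)
{
    bool sc  = p.scCount  > 0;
    bool afc = p.afcCount > 0;
    uint32_t mask = 0;
    if (p.n7tw1000)                                        mask |= 1u << 0;  // 1 (L0)
    if (p.n7tw1000 && sc)                                  mask |= 1u << 1;  // 2 (L1)
    if (p.n7tw1000 && afc)                                 mask |= 1u << 2;  // 3 (L1)
    if (p.n7tw1000 && sc && afc)                           mask |= 1u << 3;  // 4 (L2)
    if (p.n7tw1000 && ((sc && afc) || p.scCount > 1
                                   || p.afcCount > 1))     mask |= 1u << 4;  // 5 (L2)
    if (p.n7tw1000 && (sc || afc))                         mask |= 1u << 5;  // 6 (L2)
    if (p.chargeOverThr)                                   mask |= 1u << 6;  // 7 (L0)
    return mask;
}
```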

Muon trigger tests on GPUs
[Table: the GPU cards used for the tests.]

Muon trigger tests on GPUs
Trigger execution time for one tower of 32 PMTs at ~55 kHz: two different GPU cards were used with one second of raw data for 32 PMTs at a background rate of ~55 kHz, applying all the triggers. The black times were measured with the CPU idle; the red times were measured while the CPU was executing other processes.

Muon trigger tests on GPUs
Using the same two GPU cards, one second of data from 84 PMTs (6 PMTs × 14 floors) at a background rate of ~55 kHz was processed, applying all the triggers. The black numbers are the times measured with the CPU idle; the red numbers are the times measured while the CPU was executing other processes.

We have also tested a new trigger that combines 7 different hits within 1000 ns with a time-space correlation with respect to the first hit. On the GPU this trigger was applied to the data of a tower of 84 PMTs; the measured trigger time is 350 ms on the Tesla C2050. We are also working on muon-track reconstruction on GPUs.
Sorting on the Tesla C2050: 100 × 182 threads (Alessio Bacciarelli).
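The slide does not spell out the time-space correlation; a common form of such a causality cut in underwater telescopes requires the time difference between a hit and the first hit to be compatible with light propagation over their distance, |Δt| ≤ d·n/c plus a margin. The sketch below implements that generic criterion with illustrative constants and should not be read as the collaboration's exact cut.

```cuda
// Generic causality check of a hit against the first hit of the candidate window.
#include <cmath>

struct HitPos { double t; double x, y, z; };          // time [ns], position [m]

bool causallyConnected(const HitPos& first, const HitPos& h,
                       double marginNs = 20.0)       // illustrative tolerance
{
    const double c = 0.299792458;                     // speed of light [m/ns]
    const double n = 1.38;                            // refractive index of sea water (assumed)
    double d = std::sqrt((h.x - first.x) * (h.x - first.x) +
                         (h.y - first.y) * (h.y - first.y) +
                         (h.z - first.z) * (h.z - first.z));
    return std::fabs(h.t - first.t) <= d * n / c + marginNs;
}
```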

Proposed CPU-GPU DAQ for the 8 KM3-Italy towers
[Diagram only.]

Conclusions and future work
- GPUs can also find a place in optical neutrino telescopes.
- More selective online algorithms can still be applied.
- Muon-track reconstruction on GPUs is underway.
- Both the Tesla C2050 and the GTX TITAN are good choices.

Thank you for your attention. I would like to thank all the members of the KM3NeT-Italy Collaboration.