Computing in CBM
Volker Friese, GSI Darmstadt
International Conference on Matter under High Densities, 21 – 23 June 2016, Sikkim Manipal Institute of Technology, Rangpo

Complex Tasks
The Hope: learn from the multitude of emitted particles (the final state) about the state of the matter immediately after the collision (the early state).
The Task: detect the final-state particles as completely as possible and characterise them w.r.t. momentum and identity (π, K, p, …).

Complex Devices
Dipole magnet + Silicon Tracking System, RICH, Muon System, TRD, TOF, PSD

Tasks for reconstruction

„Reconstruction“
Particles → Detector → Raw Data
A raw data message says, for instance: “Hi, I am channel … If you have a DAQ map, you will know that this is strip 348 of module l4u03 of the fourth layer of the main tracker. I just saw a charge of 11 ADC units. My clock reads … ns.”
Reconstruction delivers a list of particles (trajectories) with momenta (and ID), which is the input for the higher-level physics analysis.
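To make the message format concrete, here is a minimal sketch of such a self-triggered raw message and the DAQ-map lookup, with hypothetical type and field names (the actual CBM data formats differ):

#include <cstdint>
#include <map>
#include <string>

// A self-triggered front-end message as described above: a channel
// address, a measured charge and a time stamp.
struct RawMessage {
  uint32_t channelId;  // electronics channel address
  uint16_t adc;        // measured charge in ADC units
  uint64_t timestamp;  // message time in ns
};

// The DAQ map translates the electronics address into a detector
// coordinate, e.g. strip 348 of module l4u03 in layer 4.
struct DetectorAddress {
  std::string module;
  int layer;
  int strip;
};

// Hypothetical lookup; in a real system this mapping would live in a
// configuration database rather than a hard-coded table.
DetectorAddress lookup(const std::map<uint32_t, DetectorAddress>& daqMap,
                       uint32_t channelId) {
  return daqMap.at(channelId);
}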

Reconstruction Tasks
Reconstruction tasks are typically pattern-recognition problems. They act either locally …
– cluster / hit finding (a minimal sketch follows below)
– local track finding
… or globally:
– connect track information from different detector systems
Algorithms are usually developed by experts and executed in a central production. Results are provided to the physics users in some suitable event format; the user normally does not see raw data.
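As an illustration of the local step, a minimal cluster-finding sketch for a strip detector, assuming a simplified digi format (not the actual CbmRoot interfaces): adjacent fired strips are grouped into one cluster, and the charge-weighted centre gives the hit position.

#include <cstddef>
#include <vector>

// Simplified digi: one fired strip with its calibrated charge (> 0).
struct Digi {
  int strip;
  double charge;
};

struct Cluster {
  double position;  // charge-weighted mean strip position
  double charge;    // summed cluster charge
};

// Group adjacent fired strips into clusters. The digis must be sorted
// by strip number.
std::vector<Cluster> findClusters(const std::vector<Digi>& digis) {
  std::vector<Cluster> clusters;
  std::size_t i = 0;
  while (i < digis.size()) {
    double sumQ = 0.0, sumQx = 0.0;
    std::size_t j = i;
    do {  // extend the cluster while strips are contiguous
      sumQ += digis[j].charge;
      sumQx += digis[j].charge * digis[j].strip;
      ++j;
    } while (j < digis.size() && digis[j].strip == digis[j - 1].strip + 1);
    clusters.push_back({sumQx / sumQ, sumQ});
    i = j;
  }
  return clusters;
}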

Example: Track Finding in the Main Tracker
[Event display: Silicon Tracking System with reconstructed tracks]
Track finding: define the sets of measurements (hits) which belong to one and the same trajectory (particle).
Difficulty: large number of hits; fake hits exceed true hits by factors.

Similar: Track Finding in the MUCH and TRD
[Event displays: reconstructed tracks]
Muon System: outside the magnetic field (straight tracks), but absorber layers complicate tracking.
TRD: outside the magnetic field (straight tracks); coordinate resolution worse than in the STS.
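Outside the magnetic field the track model is a straight line, so a simple least-squares line fit can illustrate the fitting step. This is a sketch in one projection with unweighted hits; a production fitter weights hits by their coordinate errors, and in the field region a Kalman filter with a curved track model is typically used instead.

#include <vector>

struct Hit { double z, x; };     // position along the beam axis and one transverse coordinate
struct Line { double x0, tx; };  // track model x(z) = x0 + tx * z

// Unweighted least-squares fit of a straight line to >= 2 hits.
Line fitLine(const std::vector<Hit>& hits) {
  double n = static_cast<double>(hits.size());
  double sz = 0, sx = 0, szz = 0, szx = 0;
  for (const Hit& h : hits) {
    sz += h.z;
    sx += h.x;
    szz += h.z * h.z;
    szx += h.z * h.x;
  }
  // Solve the normal equations for slope and intercept.
  double det = n * szz - sz * sz;
  double tx = (n * szx - sz * sx) / det;
  double x0 = (sx - tx * sz) / n;
  return {x0, tx};
}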

Another Example: Ring Finding in the RICH
Cherenkov light emitted by electrons in the radiator is mirrored and focused into rings on the photodetector plane.
Problems: high hit / ring density; overlapping rings; ring distortions.
[Event display]
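Ring finding is a circle-detection problem, and one standard technique is the Hough transform, sketched here for a single ring of known radius (hypothetical function and parameter names; real ring finders also scan over the radius and disentangle overlapping rings): every hit votes for all possible centres at the ring radius around it, and the most-voted bin is the best centre candidate.

#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

Point houghRingCenter(const std::vector<Point>& hits, double radius,
                      double xMin, double yMin, double binSize,
                      int nBinsX, int nBinsY) {
  const double kPi = 3.14159265358979323846;
  const int nAngles = 64;  // discretisation of the voting circle
  std::vector<int> votes(static_cast<std::size_t>(nBinsX) * nBinsY, 0);
  for (const Point& h : hits) {
    for (int a = 0; a < nAngles; ++a) {
      double phi = 2.0 * kPi * a / nAngles;
      int ix = static_cast<int>((h.x + radius * std::cos(phi) - xMin) / binSize);
      int iy = static_cast<int>((h.y + radius * std::sin(phi) - yMin) / binSize);
      if (ix < 0 || ix >= nBinsX || iy < 0 || iy >= nBinsY) continue;
      ++votes[static_cast<std::size_t>(iy) * nBinsX + ix];
    }
  }
  std::size_t best = 0;
  for (std::size_t i = 1; i < votes.size(); ++i)
    if (votes[i] > votes[best]) best = i;
  // Convert the winning bin back to a centre coordinate.
  return {xMin + (best % nBinsX + 0.5) * binSize,
          yMin + (best / nBinsX + 0.5) * binSize};
}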

Event Reconstruction: A First Upshot
The pattern-recognition problems in event reconstruction from raw data are common to HEP experiments. A variety of methods have been applied in previous and current experiments. There is, however, nothing “off the shelf”; methods and algorithms have to be adapted / tuned / developed for each experiment.
Reconstruction in heavy-ion experiments is more demanding than in particle physics because of the large track multiplicity and the resulting high complexity of the event topology.
There are solutions developed for CBM which fulfil our demands in terms of efficiency and precision.

High Rates and the Data Challenge

The Shopping List
Measurement of very rare probes requires extreme interaction rates. Which rate is possible in heavy-ion reactions?
[Figure: model predictions of particle multiplicities (Au+Au, 25A GeV) – the bulk and the needles in the haystack]

The Rate Landscape
Can one do a MHz heavy-ion experiment?

Limiting Factors
At an interaction rate of 10 MHz (Au+Au), the average time between subsequent events is 100 ns – but many event pairs are spaced much more closely (see the estimate below).
Detectors have to be fast – possible (e.g., solid-state detectors instead of gas drift chambers).
Read-out electronics has to be fast – not trivial, but possible.
The limiting factor is the data rate to be shipped from the detectors to the permanent storage.
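To quantify “much more closely”, a back-of-the-envelope estimate (an addition here, not from the slide): for Poisson-distributed interactions at rate $r$, the time between subsequent events is exponentially distributed, so

$$ P(\Delta t < \tau) = 1 - e^{-r\tau}, \qquad P(\Delta t < 10\,\mathrm{ns})\big|_{r\,=\,10\,\mathrm{MHz}} = 1 - e^{-0.1} \approx 10\,\%. $$

At 10 MHz, roughly every tenth event is thus followed by another one within 10 ns, so events overlap in any detector slower than that.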

The Data Rate Problem
At an interaction rate of 10 MHz (Au+Au), the raw data rate from the detector is about 1 TB/s.
What can we store?
– to tape: several GB/s
– to disk: 100 GB/s, no problem (e.g., GSI Lustre FS)
What do we want to store?
– at 1 GB/s archival rate, the amount of data after 2 months of beam time is 5 PB (see the arithmetic below)
The storage bandwidth is limited by the cost of storage media rather than by technological constraints.
For a given rare observable, 99% of the raw data are physically uninteresting – we would like to trigger on such observables.
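The archival arithmetic, spelled out: two months of beam time is about $5.2 \times 10^{6}$ s, so

$$ 1\,\mathrm{GB/s} \times 2\ \mathrm{months} \approx 1\,\mathrm{GB/s} \times 5.2 \times 10^{6}\,\mathrm{s} \approx 5\,\mathrm{PB}. $$

Conversely, getting from the 1 TB/s raw rate down to the ~1 GB/s archival rate means the online system must reduce the data volume by a factor of about 1000.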

The Online Task
CBM FEE → 1 TB/s → Online Data Processing → ~ GB/s → Mass Storage (at max. interaction rate)

Trigger
A trigger tells the readout electronics when to read out the detector, and tells the electronics whether to send the collected data further or to discard them.

A Trigger Example (Schematic)
[Diagram: the beam hits a target; produced particles traverse the detectors; a trigger particle in a dedicated detector prompts the Data Acquisition System to start/reset the electronics and to tell all electronics to send their data.]
Data are buffered on-chip until the decision whether to use them or not; the low-level decision is evaluated in hardware logic or on FPGAs.
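A toy software model of this buffered, triggered flow (purely illustrative; in reality the buffering and the low-level decision are implemented in hardware): signals are held in an on-chip buffer until the trigger decision arrives, then are either shipped to the DAQ or discarded.

#include <cstdint>
#include <deque>
#include <vector>

struct Signal {
  uint64_t time;
  int channel;
  uint16_t adc;
};

// Toy front-end buffer. In hardware, the buffer depth fixes the
// maximum trigger latency the front-end can tolerate.
class FrontEndBuffer {
 public:
  void record(const Signal& s) { buffer_.push_back(s); }

  // Trigger decision for the signals collected so far: ship them to
  // the DAQ if accepted, discard them otherwise.
  std::vector<Signal> onTrigger(bool accept) {
    std::vector<Signal> out;
    if (accept) out.assign(buffer_.begin(), buffer_.end());
    buffer_.clear();
    return out;
  }

 private:
  std::deque<Signal> buffer_;
};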

Why a Triggered Readout Does Not Work for CBM
The trigger signatures (e.g. the Ω cascade decay) are complex and involve several detector systems at a time (e.g., STS + TOF) – not possible to implement in hardware.
An interaction rate of 1 MHz or above does not allow for high trigger latency: FEE buffering is difficult and expensive, in particular if radiation-hard, so it is not possible to wait for a trigger decision taken in software.

Free-Streaming Readout and Data Acquisition
Continuous readout by the FEE: the FEE sends a data message on each signal above threshold (“self-triggered”). Hit messages come with a time stamp. The DAQ aggregates messages based on their time stamps into “time slices”. Time slices are delivered to the online computing farm (FLES). The decision on data selection is done in the FLES, in software.
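A minimal sketch of the time-slice building, with hypothetical types (the real FLES interfaces differ): time-stamped hit messages are binned into slices of fixed length.

#include <cstdint>
#include <map>
#include <vector>

struct HitMessage {
  uint64_t timestamp;  // in ns
  uint32_t channelId;
  uint16_t adc;
};

// Group self-triggered messages into time slices of fixed length
// (sliceLength in the same units as the time stamps). The real
// time-slice builder also adds overlap between neighbouring slices so
// that collisions at a slice border are not cut apart.
std::map<uint64_t, std::vector<HitMessage>> buildTimeSlices(
    const std::vector<HitMessage>& messages, uint64_t sliceLength) {
  std::map<uint64_t, std::vector<HitMessage>> slices;
  for (const HitMessage& m : messages) {
    slices[m.timestamp / sliceLength].push_back(m);
  }
  return slices;
}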

Triggered and Free-Running Readout
Trigger: snapshots of the detector. Trigger-less: a movie of the detector. N.b.: the movie is too large to be stored; it will be cut into pieces in the photo lab (= the FLES).

DAQ and First-Level Event Selector
FLES prototype: Loewe CSC, Frankfurt

FLES Architecture

Time Slice: Interface to Online Reconstruction

Consequences for Online Computing
CBM tries to achieve high interaction-rate capability with a novel readout paradigm. Data will be continuously read out; the full data volume will be shipped to CPU. Online data selection will happen on the online compute farm (O(10⁵) cores), where data reconstruction will be performed in real time.
The quality criteria for reconstruction algorithms are therefore not only efficiency and precision, but also, and mostly, execution speed. To achieve the required performance, the parallelism offered by modern computer architectures must be exploited. High-performance online software is a necessary prerequisite for the successful operation of CBM.

Where to parallelize?
The FLES is designed as an HPC cluster: commodity hardware with GPGPU accelerators. Chunks of data („time slices“) are distributed to independently operating computing nodes – obvious data parallelism on the event / time-slice level. But each computing node will have a large number of cores, so parallelism is in addition needed within the event / time slice.

Parallelisation within event
[Diagram: task-level parallelism – MVD hit finding, STS cluster finding, STS hit finding, RICH ring finding, TRD hit finding, TOF hit finding and ECAL cluster finding run concurrently; STS track finding / track fitting and TRD track finding / track fitting follow; global tracking, then vertexing and PID complete the chain. Data-level parallelism over the > 1000 sensors.]
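A sketch of the task-level parallelism with std::async (stub task functions, not the CBM framework): the detector-local tasks run concurrently, and the global stage starts once all of them have delivered their results.

#include <future>
#include <vector>

struct Hit { double x, y, z, time; };
using HitVector = std::vector<Hit>;

// Stub detector-local reconstruction tasks (illustration only).
HitVector findStsHits() { return {}; }
HitVector findTrdHits() { return {}; }
HitVector findTofHits() { return {}; }

void runLocalReconstruction() {
  // Task-level parallelism: detector-local pattern recognition is
  // independent and can run concurrently.
  auto sts = std::async(std::launch::async, findStsHits);
  auto trd = std::async(std::launch::async, findTrdHits);
  auto tof = std::async(std::launch::async, findTofHits);

  // The global stage needs the results of all local tasks.
  HitVector stsHits = sts.get();
  HitVector trdHits = trd.get();
  HitVector tofHits = tof.get();
  // ... global tracking, vertexing and PID would follow here ...
}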

Ways to parallelize: multi-threading
“Trivial” event-level parallelism: reconstruction with independent processes. Exploit many-core systems with multi-threading: 1 thread per logical core, 1000 events per core. This gives good scalability.
[Plot: scaling of the CA track finder]
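A minimal sketch of the event-level scheme (assumed event types, not the CBM code): each thread reconstructs its own share of the events, with no synchronisation needed between events.

#include <cstddef>
#include <thread>
#include <vector>

struct Event {};      // raw data of one collision
struct RecoEvent {};  // reconstructed tracks of one collision

// Stub per-event reconstruction (illustration only).
RecoEvent reconstruct(const Event&) { return {}; }

// One thread per logical core; thread t processes events
// t, t + n, t + 2n, ... independently of all other threads.
std::vector<RecoEvent> reconstructAll(const std::vector<Event>& events) {
  std::size_t nThreads = std::thread::hardware_concurrency();
  if (nThreads == 0) nThreads = 1;
  std::vector<RecoEvent> results(events.size());
  std::vector<std::thread> workers;
  for (std::size_t t = 0; t < nThreads; ++t) {
    workers.emplace_back([&events, &results, t, nThreads] {
      for (std::size_t i = t; i < events.size(); i += nThreads)
        results[i] = reconstruct(events[i]);
    });
  }
  for (std::thread& w : workers) w.join();
  return results;
}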

Muon trigger studies
Investigations on several CPU / GPU architectures show strong differences between different computing paradigms on the same architecture.

Offline Computing
Assuming we manage to solve the online computing issues, we will have some 5 PB of data per year on permanent storage. How do we efficiently get physics results out of them?

Computing Tasks
– Reconstruction: from raw data to tracks and particles
– Simulations: transport / detector geometry, detector response, digitisation
– Analysis: from tracks to physics observables
– Online systems: data transport, event building, data distribution and management
– Services: experiment control system, databases

Distributed Computing
Some years ago, the LHC experiments were confronted with a similar problem: the amount of data was too large to be handled at a single site. The answer was Grid computing: data and data processing are distributed over many sites worldwide.
Three factors have to be considered: computing power, storage capacity, and data transfer between sites. Experience shows that the limiting factors are network bandwidth and site administration.

New Paradigms for CBM / FAIR
Today, the entire LHC Grid compute power could easily be concentrated in one large computing centre. The CBM online compute cluster (~10⁵ cores) is commodity hardware; it can be used for offline computing between run periods (¾ of the year). Since reconstruction must be fast (performable in real time!), there is no need to store reconstructed data: analysis can always start from raw data. The strategy tends towards a small number of large centres connected by high-speed links – details still to be worked out.

Thanks for your attention!