First Level Event Selection Package of the CBM Experiment. S. Gorbunov, I. Kisel, I. Kulakov, I. Rostovtseva, I. Vassiliev (for the CBM Collaboration).


Slide 1/12: First Level Event Selection Package of the CBM Experiment. S. Gorbunov, I. Kisel, I. Kulakov, I. Rostovtseva, I. Vassiliev (for the CBM Collaboration). CHEP'09, Prague, March 26, 2009.

Slide 2/12: Tracking Challenge in CBM (FAIR/GSI, Germany)
* Fixed-target heavy-ion experiment
* 10^7 Au+Au collisions/sec
* ~1000 charged particles/collision
* Non-homogeneous magnetic field
* Double-sided strip detectors (85% fake space points)
Track reconstruction in the STS/MVD and displaced-vertex search are required in the first trigger level.

Slide 3/12: Open Charm Event Selection
* D± (cτ = 312 μm): D+ → K− π+ π+ (9.5%)
* D0 (cτ = 123 μm): D0 → K− π+ (3.8%), D0 → K− π+ π+ π− (7.5%)
* Ds± (cτ = 150 μm): Ds+ → K+ K− π+ (5.3%)
* Λc+ (cτ = 60 μm): Λc+ → p K− π+ (5.0%)
No simple trigger primitive, like high pt, is available to tag events of interest. The only selective signature is the detection of the decay vertex. First-level event selection is done in a processor farm fed with data from the event-building network.
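Since the selective signature is purely geometric, a minimal sketch of the kind of detached-vertex cut implied here is given below. The structure, the names and the 4-sigma cut value are illustrative assumptions, not the CBM selection code:

  // Hedged sketch: keep a charm candidate only if its fitted decay vertex
  // lies significantly downstream of the primary (target) vertex.
  struct VertexCandidate {
      float z;    // fitted decay-vertex position along the beam [cm]
      float dz;   // estimated uncertainty of z [cm]
  };

  bool passesDetachmentCut( const VertexCandidate &sv, float zPrimary, float nSigma = 4.f ) {
      // cτ of the D mesons is only ~100-300 μm, so the cut is on the
      // significance of the detachment, not on an absolute distance
      return (sv.z - zPrimary) > nSigma * sv.dz;
  }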

Slide 4/12: Many-core HPC
* Heterogeneous systems of many cores
* Uniform approach to all CPU/GPU families
* Similar programming languages (CUDA, Ct, OpenCL)
* Parallelization of the algorithm (vectors, multi-threads, many-cores)
* On-line event selection
* Mathematical and computational optimization
* Optimization of the detector
Expected speed-up from cores, hardware threads and SIMD width: N_speedup = N_cores * (N_threads / 2) * W_SIMD.
Hardware families considered: Gaming (STI Cell), GP CPU (Intel Larrabee), GP GPU (Nvidia Tesla), CPU (Intel XX-cores), FPGA (Xilinx Virtex), CPU/GPU (AMD Fusion); OpenCL support for several of these was still an open question ("?").
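As a rough illustration of the formula (not from the slide), the sketch below evaluates the estimate for an assumed quad-core CPU with 2 hardware threads per core and 4-wide single-precision SSE; reading N_threads as hardware threads per core is an assumption:

  #include <cstdio>

  int main() {
      // Illustrative hardware assumptions, not values from the slide:
      const int nCores          = 4;  // physical cores
      const int nThreadsPerCore = 2;  // hardware threads per core (Hyper-Threading)
      const int simdWidth       = 4;  // single-precision floats per SSE register
      // Slide formula: N_speedup = N_cores * (N_threads / 2) * W_SIMD
      const double speedUp = nCores * (nThreadsPerCore / 2.0) * simdWidth;
      std::printf( "estimated speed-up over scalar, single-threaded code: %.0f\n", speedUp );  // 16
      return 0;
  }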

Slide 5/12: Standalone Package for Event Selection

Slide 6/12: Kalman Filter for Track Fit
Fit-scheme annotations: arbitrary large initial errors; non-homogeneous magnetic field stored as a large field map (well above the 256 KB of Local Store); multiple scattering in material; small errors and the weight for the update; not enough accuracy in single precision; no correction from measurements in the prediction step.
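To make the filter structure concrete, here is a minimal single-precision sketch of a Kalman filter for a straight track in one projection (state {x, tx}), with a prediction step and a measurement update. It is a toy illustration under simplified assumptions (no field, no material), not the CBM fitter:

  #include <cstdio>

  struct State {
      float x, tx;           // position and slope
      float Cxx, Cxt, Ctt;   // symmetric 2x2 covariance
  };

  // Prediction: straight-line extrapolation over dz.
  void predict( State &s, float dz ) {
      s.x   += s.tx * dz;
      s.Cxx += 2.f * dz * s.Cxt + dz * dz * s.Ctt;
      s.Cxt += dz * s.Ctt;
  }

  // Update with a measurement m of x with variance V.
  void filter( State &s, float m, float V ) {
      float S  = s.Cxx + V;     // residual variance
      float Kx = s.Cxx / S;     // Kalman gain for x
      float Kt = s.Cxt / S;     // Kalman gain for tx
      float r  = m - s.x;       // residual
      s.x  += Kx * r;
      s.tx += Kt * r;
      float Cxx = s.Cxx, Cxt = s.Cxt;   // covariance update: C <- (1 - K H) C
      s.Cxx -= Kx * Cxx;
      s.Cxt -= Kx * Cxt;
      s.Ctt -= Kt * Cxt;
  }

  int main() {
      State s = { 0.f, 0.f, 1e4f, 0.f, 1e4f };   // "arbitrary large" initial errors
      const float z[3] = { 10.f, 20.f, 30.f }, m[3] = { 1.1f, 2.0f, 2.9f }, V = 0.01f;
      float zLast = 0.f;
      for ( int i = 0; i < 3; ++i ) { predict( s, z[i] - zLast ); filter( s, m[i], V ); zLast = z[i]; }
      std::printf( "x = %.3f  tx = %.3f\n", s.x, s.tx );
      return 0;
  }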

Slide 7/12: Code (Part of the Kalman Filter)

  inline void AddMaterial( TrackV &track, Station &st, Fvec_t &qp0 )
  {
      cnst mass2 = 0.1396*0.1396;   // pion mass squared

      Fvec_t tx    = track.T[2];
      Fvec_t ty    = track.T[3];
      Fvec_t txtx  = tx*tx;
      Fvec_t tyty  = ty*ty;
      Fvec_t txtx1 = txtx + ONE;
      Fvec_t h     = txtx + tyty;
      Fvec_t t     = sqrt(txtx1 + tyty);
      Fvec_t h2    = h*h;
      Fvec_t qp0t  = qp0*t;

      cnst c1=0.0136, c2=c1*0.038, c3=c2*0.5, c4=-c3/2.0, c5=c3/3.0, c6=-c3/4.0;

      Fvec_t s0 = (c1 + c2*st.logRadThick + c3*h + h2*(c4 + c5*h + c6*h2))*qp0t;
      Fvec_t a  = (ONE + mass2*qp0*qp0t)*st.RadThick*s0*s0;

      CovV &C = track.C;
      C.C22 += txtx1*a;
      C.C32 += tx*ty*a;
      C.C33 += (ONE + tyty)*a;
  }

Use headers to overload the +, -, *, / operators --> the source code is unchanged!
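That statement is the key design point: the algorithmic code is written against Fvec_t, and a header decides what Fvec_t is. Below is a hedged sketch of a scalar stand-in with the same operator set, so that AddMaterial() compiles unchanged when no SIMD unit is available; the names FvecScalar and USE_SIMD are illustrative assumptions, and F32vec4 is the SSE type from the header shown on the next slide:

  #include <cmath>

  struct FvecScalar {
      float v;
      FvecScalar( float x = 0.f ) : v(x) {}   // implicit conversion, so constants like 0.0136 still work
      friend FvecScalar operator+( FvecScalar a, FvecScalar b ) { return FvecScalar(a.v + b.v); }
      friend FvecScalar operator-( FvecScalar a, FvecScalar b ) { return FvecScalar(a.v - b.v); }
      friend FvecScalar operator*( FvecScalar a, FvecScalar b ) { return FvecScalar(a.v * b.v); }
      friend FvecScalar operator/( FvecScalar a, FvecScalar b ) { return FvecScalar(a.v / b.v); }
      friend FvecScalar sqrt( FvecScalar a )                    { return FvecScalar(std::sqrt(a.v)); }
  };

  // The backend is chosen at compile time; the algorithmic code stays the same.
  #ifdef USE_SIMD
  typedef F32vec4    Fvec_t;   // 4 floats per operation (SSE header, next slide)
  #else
  typedef FvecScalar Fvec_t;   // 1 float per operation (portable fallback)
  #endif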

Slide 8/12: Header (Intel's SSE)

  typedef F32vec4 Fvec_t;

  /* Arithmetic Operators */
  friend F32vec4 operator +( const F32vec4 &a, const F32vec4 &b ) { return _mm_add_ps(a,b); }
  friend F32vec4 operator -( const F32vec4 &a, const F32vec4 &b ) { return _mm_sub_ps(a,b); }
  friend F32vec4 operator *( const F32vec4 &a, const F32vec4 &b ) { return _mm_mul_ps(a,b); }
  friend F32vec4 operator /( const F32vec4 &a, const F32vec4 &b ) { return _mm_div_ps(a,b); }

  /* Functions */
  friend F32vec4 min( const F32vec4 &a, const F32vec4 &b ){ return _mm_min_ps(a, b); }
  friend F32vec4 max( const F32vec4 &a, const F32vec4 &b ){ return _mm_max_ps(a, b); }

  /* Square Root */
  friend F32vec4 sqrt( const F32vec4 &a ){ return _mm_sqrt_ps(a); }

  /* Absolute value */
  friend F32vec4 fabs( const F32vec4 &a ){ return _mm_and_ps(a, _f32vec4_abs_mask); }

  /* Logical */
  friend F32vec4 operator&( const F32vec4 &a, const F32vec4 &b ){ return _mm_and_ps(a, b); }   // mask returned
  friend F32vec4 operator|( const F32vec4 &a, const F32vec4 &b ){ return _mm_or_ps(a, b); }    // mask returned
  friend F32vec4 operator^( const F32vec4 &a, const F32vec4 &b ){ return _mm_xor_ps(a, b); }   // mask returned
  friend F32vec4 operator!( const F32vec4 &a ){ return _mm_xor_ps(a, _f32vec4_true); }         // mask returned
  friend F32vec4 operator||( const F32vec4 &a, const F32vec4 &b ){ return _mm_or_ps(a, b); }   // mask returned

  /* Comparison */
  friend F32vec4 operator<( const F32vec4 &a, const F32vec4 &b ){ return _mm_cmplt_ps(a, b); } // mask returned

SIMD instructions
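Because the comparison and logical operators above return bit masks rather than booleans, per-lane branching is expressed as bit selection. A small usage sketch; the helper select() is an illustrative assumption built only from the operators and _f32vec4_true shown above:

  // per-lane "mask ? a : b": mask holds all-ones bits where the condition is true
  inline F32vec4 select( const F32vec4 &mask, const F32vec4 &a, const F32vec4 &b ) {
      return (mask & a) | ((!mask) & b);
  }

  // Example (illustrative): clamp four residuals to a cut value in one operation
  //   F32vec4 clamped = select( residual < cut, residual, cut );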

Slide 9/12: Kalman Filter Track Fit on Intel Xeon, AMD Opteron and Cell
Motivated by, but not restricted to, the Cell!
Benchmark platforms:
* 2 Intel Xeon processors with Hyper-Threading enabled and 512 kB cache at 2.66 GHz
* 2 dual-core AMD Opteron 265 processors with 1024 kB cache at 1.8 GHz
* 2 Cell Broadband Engines with 256 kB local store at 2.4 GHz
[Figure: fit time per track, Intel P4 vs. Cell, annotated "faster!" for the Cell]
Reference: Comp. Phys. Comm. 178 (2008)

Slide 10/12: Performance of the KF Track Fit on CPU/GPU Systems (CBM Progress Report, 2008)
* Speed-up 3.7 on the Xeon 5140 (Woodcrest) at 2.4 GHz using icc 9.1
* Real-time performance on the quad-core Xeon 5345 (Clovertown) at 2.4 GHz: speed-up 30 with 16 threads
* Real-time performance on different Intel CPU platforms
* Real-time performance on NVIDIA for a single track
Scaling follows cores, hardware threads and SIMD width.
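The scaling with cores, hardware threads and SIMD width comes from packing tracks into SIMD vectors of four and distributing the vectors over threads. The sketch below illustrates that organization; TrackV, fitTrackVector() and the round-robin scheduling are illustrative assumptions (and std::thread is used here only for brevity), not the CBM implementation:

  #include <cstddef>
  #include <thread>
  #include <vector>

  // Placeholder for 4 tracks stored in SoA layout (one Fvec_t per parameter).
  struct TrackV { /* parameters and covariances as Fvec_t, omitted here */ };

  // Stand-in for the SIMD Kalman filter fit of 4 tracks at once.
  void fitTrackVector( TrackV & /*tv*/ ) { /* ... KF fit over the 4 lanes ... */ }

  void fitAll( std::vector<TrackV> &batches, unsigned nThreads ) {
      std::vector<std::thread> pool;
      for ( unsigned t = 0; t < nThreads; ++t )
          pool.emplace_back( [&batches, t, nThreads] {
              // static round-robin assignment of 4-track batches to this thread
              for ( std::size_t i = t; i < batches.size(); i += nThreads )
                  fitTrackVector( batches[i] );
          } );
      for ( auto &th : pool ) th.join();
  }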

Slide 11/12: Cellular Automaton Track Finder
[Figure: timing comparison, annotated "Pentium faster!"]

Slide 12/12: Summary
* Standalone package for online event selection is ready for investigation
* Cellular Automaton track finder takes 5 ms per minimum-bias event
* Kalman Filter track fit is a benchmark for modern CPU/GPU architectures
* SIMDized multi-threaded KF track fit takes 0.1 μs/track on Intel Core i7
* Throughput of 2.2·10^7 tracks/sec is reached on NVIDIA GTX 280