ALICE HLT tracking running on GPU


ALICE HLT tracking running on GPU
S. Gorbunov (1) and I. Kisel (1,2), for the ALICE Collaboration
(1) Kirchhoff Institute for Physics, University of Heidelberg, Germany
(2) Gesellschaft für Schwerionenforschung mbH, Darmstadt, Germany
ALICE/FAIR Workshop, GSI, February 3, 2009

TPC reconstruction scheme

Each of the 36 TPC slices (slice 0 ... slice 35) is reconstructed independently: a Cluster Finder converts the raw data of a slice into clusters, and a Slice Tracker converts the clusters into slice tracks. The Global Merger then combines the slice tracks of all slices into TPC tracks.

The TPC Slice Tracker is the most complicated algorithm:
- combinatorial search
- fit mathematics
- the reconstruction time is crucial
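To make the slice-parallel data flow concrete, here is a minimal C++ sketch; all type and function names are invented for illustration and are not the real HLT component classes.

    #include <vector>

    struct RawSlice {};      // raw data of one TPC slice (illustrative)
    struct Clusters {};      // cluster-finder output (illustrative)
    struct SliceTracks {};   // slice-tracker output (illustrative)
    struct TPCTracks {};     // final merged tracks (illustrative)

    Clusters    RunClusterFinder(const RawSlice&)                 { return {}; }
    SliceTracks RunSliceTracker(const Clusters&)                  { return {}; } // expensive step
    TPCTracks   MergeSliceTracks(const std::vector<SliceTracks>&) { return {}; }

    TPCTracks ReconstructTPC(const std::vector<RawSlice>& raw)    // raw.size() == 36
    {
        std::vector<SliceTracks> perSlice;
        for (const RawSlice& slice : raw) {          // slices are independent,
            Clusters c = RunClusterFinder(slice);    // so this loop is the
            perSlice.push_back(RunSliceTracker(c));  // natural place to parallelise
        }
        return MergeSliceTracks(perSlice);           // merge across slice borders
    }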

Tracking algorithm: the Cellular Automaton method

1. Neighbours finder: for each TPC cluster, find the two neighbours (up & down) which compose the best line.
2. Composing of tracklets: one-to-one linked neighbours are grouped into track segments.
3. Construction of the track candidates: fit of the trajectories, search for the missed parts.
4. Final selection of tracks: competition between the tracks, no shared clusters allowed.
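A hedged sketch of step 1, the neighbours finder, under simplifying assumptions: equidistant padrows (so the midpoint of an up/down pair should coincide with the middle cluster if the three lie on a line) and a brute-force scan of the adjacent rows. The Cluster structure and function names are invented for illustration.

    #include <vector>
    #include <cstddef>

    struct Cluster { float y, z; };   // position on a padrow (row index implicit)

    // For one cluster c, scan the padrow below ("down") and above ("up")
    // and pick the pair whose straight line passes closest to c.
    void FindNeighbours(const std::vector<Cluster>& down, const Cluster& c,
                        const std::vector<Cluster>& up,
                        int& bestDown, int& bestUp)
    {
        float best = 1e30f;
        bestDown = bestUp = -1;                        // -1: no neighbour found
        for (std::size_t i = 0; i < down.size(); ++i)
            for (std::size_t j = 0; j < up.size(); ++j) {
                float dy = 0.5f * (down[i].y + up[j].y) - c.y;
                float dz = 0.5f * (down[i].z + up[j].z) - c.z;
                float d2 = dy * dy + dz * dz;          // squared deviation from a line
                if (d2 < best) { best = d2; bestDown = int(i); bestUp = int(j); }
            }
    }

Step 2 then keeps only the one-to-one links: a cluster keeps its up-neighbour only if that neighbour points back to it as its down-neighbour; the surviving chains form the track segments.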

Use of parallel hardware: GPU

NVIDIA GeForce GTX 280:
- 30 x 8 general-purpose processors; pure calculations can be ~100 times faster than on a CPU
- highly parallel: parallel execution of branches, parallel memory access
- CUDA language: a small extension of C++
- fast access only to a small portion of data (16 kB) at a time; no memory cache
- single-precision floating point ONLY
- parallel calculations
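As an illustration of the CUDA programming model (not tracker code): a toy kernel in the one-thread-per-element style, with the typical guarded index and launch configuration.

    #include <cuda_runtime.h>

    // Toy kernel: one thread per cluster coordinate, single-precision math.
    __global__ void scaleY(const float* yIn, float* yOut, int n, float factor)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n)                                       // guard the last block
            yOut[i] = factor * yIn[i];
    }

    // Host-side launch, ceil(n / 256) blocks of 256 threads each:
    // scaleY<<<(n + 255) / 256, 256>>>(dYIn, dYOut, n, 0.5f);

The fast 16 kB on-chip (shared) memory mentioned above is what a kernel would use to stage the small portion of data it works on; it is omitted here for brevity.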

Porting the HLT tracking code to the GPU

The algorithm evolution:
1. Maximal parallelisation of the CPU tracker (AliRoot).
2. Stand-alone CPU tracker without ROOT (AliRoot -> stand-alone).
3. Development of efficient GPU code (stand-alone CPU -> stand-alone GPU).
4. Making the code hybrid (stand-alone GPU -> stand-alone GPU+CPU).
5. Porting the code back to AliRoot (stand-alone GPU+CPU -> AliRoot).

Result, the HLT TPC tracker:
- Official svn code; compiles and runs offline and in the HLT framework.
- Satisfies the ALICE coding rules (almost).
- Can use the GPU device (not from AliRoot).
- SAME source code for GPU and CPU, same result (see the sketch below).
- At the moment >10.5 times faster on the GPU: a Pb-Pb event takes 1640 ms on the CPU vs 156 ms on the GPU.
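One way the "same source code for GPU and CPU" point can be realised is to hide the CUDA function qualifiers behind a preprocessor macro, so that a single function body compiles both as plain C++ and as device code. The macro name below is illustrative; the actual tracker defines its own variants.

    // Compiled by nvcc, the function is callable on host and device;
    // compiled by a plain C++ compiler, the qualifier vanishes.
    #ifdef __CUDACC__
      #define GPU_HOSTDEV __host__ __device__
    #else
      #define GPU_HOSTDEV
    #endif

    // One function body, two targets; single precision throughout,
    // so CPU and GPU run the same numerics.
    GPU_HOSTDEV inline float ExtrapolateY(float y, float slope, float dx)
    {
        return y + slope * dx;   // straight-line extrapolation over dx
    }

Keeping a single source prevents the two implementations from drifting apart and makes the "same result" comparison on the next slide meaningful.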

Running the ALICE HLT tracker on the GPU cluster at Frankfurt

[Plot: event processing time, CPU vs GPU, same code and same result on both; speed-up: 10.5x]

Summary and plans

Summary:
- The ALICE HLT tracking algorithm has been parallelised to use GPU hardware.
- The new tracker is as fast as before on the CPU and shows a 10x speed-up on the GPU.
- The algorithm and the code are universal for GPU and CPU.
- Committed to svn, running in the HLT.

Plans:
- Further speed-up on the GPU.
- Integration of the GPU tracker into the HLT framework.