Level-3 trigger for ALICE Bergen Frankfurt Heidelberg Oslo

Assumptions
– Need for an online rudimentary event reconstruction for monitoring
– Detector readout rate (i.e. TPC) >> DAQ bandwidth ≥ mass storage bandwidth
– Some physics observables require running detectors at maximum rate (e.g. quarkonium spectroscopy: TPC/TRD dielectrons; jets in p+p: TPC tracking)
– Online combination of different detectors can increase the selectivity of triggers (e.g. jet quenching: PHOS/TPC high-pT γ-jet events)

Data volume and event rate
TPC detector: data volume = 300 Mbyte/event, data rate = 200 Hz
Diagram components: front-end electronics, DAQ – event building, Level-3 system, permanent storage system
Bandwidths quoted: 60 Gbyte/sec, 15 Gbyte/sec, < 1.2 Gbyte/sec, < 2 Gbyte/sec
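A minimal arithmetic sketch (not from the original slides) that reproduces the headline number above: 300 Mbyte/event at 200 Hz gives the quoted 60 Gbyte/sec at the front-end, so roughly a factor 50 of data reduction is needed to fit the quoted storage bandwidth.

// Back-of-the-envelope check of the rates quoted on this slide.
#include <cstdio>

int main() {
    const double eventSizeGB = 0.3;    // 300 Mbyte/event (TPC, zero suppressed)
    const double eventRateHz = 200.0;  // central Pb+Pb trigger rate
    const double storageGBps = 1.2;    // quoted permanent-storage bandwidth

    const double frontEndGBps = eventSizeGB * eventRateHz;  // ~60 Gbyte/sec
    const double reduction    = frontEndGBps / storageGBps; // ~factor 50

    std::printf("front-end rate  : %.0f Gbyte/sec\n", frontEndGBps);
    std::printf("needed reduction: factor ~%.0f\n", reduction);
    return 0;
}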

Readout in ALICE for heavy ion running
– ALICE trigger and readout scenarios for HI running: Pb+Pb central trigger at 180 Hz, highly central at 55 Hz
– Data rates produced by the ALICE detectors (table): the data sizes are based on zero-suppressed raw data readout; min. bias sizes are assumed to be about 25% of central
– One ALICE HI year is 10^6 seconds of beam
– The TPC dominates everything, followed by the TRD
– Need to reduce the data volume on tape

Dielectrons
Dielectron measurement in TRD/TPC/ITS:
– quarkonium spectroscopy needs high rates
– the TPC must operate at > 100 Hz
– the TPC data rate has to be significantly reduced
TRD pre-trigger for the TPC; level-3 trigger system for the TPC:
– partial readout
– e+e− verification: event rejection
Level-3 trigger system: online track reconstruction at 200 Hz:
1) selection of e+e− pairs (ROI)
2) analysis of e+e− pairs (event rejection)

Event flow
Figure: event sizes and number of links (TPC only)

Level-3 tasks
Online (sub-)event reconstruction:
– optimization and monitoring of detector performance
– monitoring of trigger selectivity
– fast check of the physics program
Data rate reduction:
– data volume reduction: regions-of-interest and partial readout, data compression
– event rate reduction: (sub-)event reconstruction and event rejection
p+p program:
– pile-up removal
– charged particle jet trigger

Online event reconstruction
– Optimization and monitoring of detector performance (see STAR: online tracking)
– Monitoring of trigger selectivity (see STAR: event rejection by Level-3 vertex determination)
– Fast check of the physics program (see STAR: the peripheral physics program has to be up and running on day 1)

Data rate reduction
Volume reduction:
– regions-of-interest and partial readout
– data compression: entropy coder, vector quantization, TPC-data modeling
Rate reduction:
– (sub-)event reconstruction and event rejection before event building

TPC event (only about 1% is shown)

Regions-of-interest and partial readout
Example: selection of a TPC sector and η-slice based on a TRD track candidate
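A minimal sketch (hypothetical helper, not the actual HLT code) of how such a region of interest could be derived from a TRD track candidate: the candidate's azimuth selects one of the 18 TPC sectors per side, and its pseudorapidity selects an η-slice; only that sector/slice would then be read out or reconstructed. The slice count is an arbitrary illustrative choice.

// Map a TRD track candidate's (phi, eta) onto a TPC sector and an eta-slice.
#include <algorithm>
#include <cmath>

struct RegionOfInterest { int sector; int etaSlice; };

RegionOfInterest selectROI(double phi, double eta,
                           int nEtaSlices = 10, double etaMax = 0.9) {
    const double twoPi = 2.0 * M_PI;
    phi = std::fmod(phi + twoPi, twoPi);                      // wrap phi into [0, 2pi)
    RegionOfInterest roi;
    roi.sector = static_cast<int>(phi / (twoPi / 18.0));      // 18 sectors of 20 deg in phi
    const double u = (eta + etaMax) / (2.0 * etaMax);         // 0..1 across |eta| < etaMax
    roi.etaSlice = std::min(nEtaSlices - 1,
                            std::max(0, static_cast<int>(u * nEtaSlices)));
    return roi;
}
// Only the pads/time bins of roi.sector overlapping roi.etaSlice would be
// read out or reconstructed, instead of the full TPC.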

Data compression: entropy coder
Variable-length coding: short codes for frequent values, long codes for infrequent values.
Results (Arne Wiebalck, diploma thesis, Heidelberg):
– NA49: compressed event size = 72%
– ALICE: compressed event size = 65%
Figure: probability distribution of 8-bit TPC data
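A minimal sketch of the variable-length coding idea, assuming a plain Huffman code built from the measured probability distribution of the 8-bit ADC values; this is an illustration, not the pipelined FPGA encoder described later in the talk.

// Build a Huffman code table from ADC value frequencies.
#include <cstdint>
#include <queue>
#include <string>
#include <vector>

struct Node {
    uint64_t freq;
    int symbol;        // 0..255 for leaves, -1 for internal nodes
    Node* left;
    Node* right;
};

struct Cmp { bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; } };

static void assignCodes(const Node* n, const std::string& prefix,
                        std::vector<std::string>& codes) {
    if (!n) return;
    if (n->symbol >= 0) { codes[n->symbol] = prefix.empty() ? "0" : prefix; return; }
    assignCodes(n->left,  prefix + "0", codes);
    assignCodes(n->right, prefix + "1", codes);
}

// freq[v] = how often ADC value v occurs (size 256). Tree nodes are
// intentionally leaked in this sketch.
std::vector<std::string> buildHuffmanTable(const std::vector<uint64_t>& freq) {
    std::priority_queue<Node*, std::vector<Node*>, Cmp> pq;
    for (int s = 0; s < 256; ++s)
        if (freq[s] > 0) pq.push(new Node{freq[s], s, nullptr, nullptr});
    while (pq.size() > 1) {
        Node* a = pq.top(); pq.pop();
        Node* b = pq.top(); pq.pop();
        pq.push(new Node{a->freq + b->freq, -1, a, b});
    }
    std::vector<std::string> codes(256);
    if (!pq.empty()) assignCodes(pq.top(), "", codes);
    return codes;   // codes[v] is the bit string written for ADC value v
}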

Data compression: vector quantization
– A sequence of ADC values on a pad is treated as a vector.
– Vector quantization = transformation of vectors into codebook entries (compare each input vector with the codebook and keep the index of the closest entry); the quantization error is the distance between the input vector and its codebook entry.
Results (Arne Wiebalck, diploma thesis, Heidelberg):
– NA49: compressed event size = 29%
– ALICE: compressed event size = 48%–64%
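A minimal sketch of the vector-quantization step, assuming fixed-length ADC vectors and an already trained codebook (codebook training, e.g. with a k-means/LBG type algorithm, is not shown): each pad's ADC sequence is replaced by the index of its nearest codebook entry, and the squared distance is the quantization error.

// Replace an ADC vector by the index of its nearest codebook entry.
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

using AdcVector = std::vector<float>;   // ADC time sequence on one pad

std::pair<std::size_t, float> quantize(const AdcVector& v,
                                       const std::vector<AdcVector>& codebook) {
    std::size_t best = 0;
    float bestErr = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < codebook.size(); ++i) {
        float err = 0.f;
        for (std::size_t k = 0; k < v.size(); ++k) {
            const float d = v[k] - codebook[i][k];
            err += d * d;                          // squared Euclidean distance
        }
        if (err < bestErr) { bestErr = err; best = i; }
    }
    return {best, bestErr};   // store 'best' (a few bits) instead of the raw vector
}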

Data compression: TPC-data modeling
Fast local pattern recognition: simple local track model (e.g. helix) → local track parameters.
Track and cluster modeling: analytical cluster model, comparison to raw data, quantization of deviations from the track and cluster model.
Result: NA49: compressed event size = 7%
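A minimal sketch of the "quantize deviations from the model" idea: instead of storing a cluster position at full precision, only the small residual with respect to the position predicted by the local track/cluster model is stored on a coarse grid. Bit width and step size below are illustrative assumptions, not values from the talk.

// Quantize one residual (measured - model) into a signed 8-bit integer.
#include <algorithm>
#include <cstdint>

int8_t quantizeResidual(float measured, float predicted,
                        float step = 0.05f /* pad/time-bin units */) {
    const float r = (measured - predicted) / step;
    const int   q = static_cast<int>(r + (r >= 0 ? 0.5f : -0.5f));  // round to nearest
    return static_cast<int8_t>(std::clamp(q, -128, 127));
}

// Decoding inverts the mapping; the loss is bounded by step/2.
float decodeResidual(int8_t q, float predicted, float step = 0.05f) {
    return predicted + q * step;
}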

Fast pattern recognition
Essential part of the Level-3 system:
– crude complete event reconstruction → monitoring
– redundant local tracklet finder for cluster evaluation → efficient data compression
– selection of (η, φ, pT)-slices → ROI
– high-precision tracking for selected track candidates → jets, dielectrons, ...

Fast pattern recognition
Sequential approach:
– cluster finder, vertex finder and track follower
– STAR code adapted to the ALICE TPC: reconstruction efficiency, timing results
Iterative feature extraction:
– tracklet finder on raw data and cluster evaluation
– Hough transform

Fast cluster finder (1) Timing: 5 ms per padrow
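A minimal sketch of a padrow cluster finder in the spirit of the one timed above (not the actual Level-3 code): contiguous above-threshold ADC runs along the time direction are found on each pad, runs on neighbouring pads that overlap in time are merged, and the charge-weighted centroid in (pad, time) is computed. Deconvolution of overlapping clusters is deliberately skipped, matching the assumption used for the timing.

// Simple padrow cluster finder with charge-weighted centroids.
#include <cstdint>
#include <vector>

struct Cluster { float pad, time, charge; };

// row[pad][timeBin] holds the zero-suppressed ADC value (0 where suppressed).
using PadRow = std::vector<std::vector<uint16_t>>;

std::vector<Cluster> findClusters(const PadRow& row, uint16_t threshold = 3) {
    struct Run { int t0, t1; float q, qp, qt; };   // one above-threshold run on a pad
    std::vector<Run> open, closed;                 // open = runs on the previous pad
    for (int p = 0; p < (int)row.size(); ++p) {
        std::vector<Run> cur;
        for (int t = 0; t < (int)row[p].size(); ++t) {
            const float adc = row[p][t];
            if (adc < threshold) continue;
            if (!cur.empty() && cur.back().t1 == t - 1) {            // extend the run
                cur.back().t1 = t; cur.back().q += adc;
                cur.back().qp += adc * p; cur.back().qt += adc * t;
            } else {
                cur.push_back({t, t, adc, adc * p, adc * t});        // start a new run
            }
        }
        // merge time-overlapping runs of the previous pad into the current ones
        for (auto& c : cur)
            for (auto& o : open)
                if (o.q > 0 && c.t0 <= o.t1 && o.t0 <= c.t1) {
                    c.q += o.q; c.qp += o.qp; c.qt += o.qt; o.q = 0;
                }
        for (auto& o : open) if (o.q > 0) closed.push_back(o);       // finished clusters
        open = cur;
    }
    for (auto& o : open) if (o.q > 0) closed.push_back(o);

    std::vector<Cluster> clusters;
    for (const auto& c : closed)
        clusters.push_back({c.qp / c.q, c.qt / c.q, c.q});           // centroids
    return clusters;
}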

Fast cluster finder (2)

Fast cluster finder (3) Figure: cluster-finder efficiency compared to the offline efficiency

Fast vertex finder Figure: vertex resolution. Timing result: 19 ms on ALPHA (667 MHz)
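A minimal sketch of one possible fast vertex-z finder (a histogramming scheme; not necessarily the exact algorithm benchmarked here): straight lines through cluster pairs on two reference padrows are extrapolated to the beam axis and the resulting z values are histogrammed, the maximum giving the primary-vertex z.

// Histogram-based vertex-z finder from cluster pairs on two padrows.
#include <vector>

struct Point { float r, z; };   // cluster position: radius and z

float findVertexZ(const std::vector<Point>& inner,
                  const std::vector<Point>& outer,
                  float zRange = 30.f, int nBins = 300) {
    std::vector<int> histo(nBins, 0);
    const float binWidth = 2.f * zRange / nBins;
    for (const Point& a : inner)
        for (const Point& b : outer) {
            if (b.r == a.r) continue;                                  // degenerate pair
            const float z0 = a.z - a.r * (b.z - a.z) / (b.r - a.r);    // line at r = 0
            const int bin = static_cast<int>((z0 + zRange) / binWidth);
            if (bin >= 0 && bin < nBins) ++histo[bin];
        }
    int best = 0;
    for (int i = 1; i < nBins; ++i) if (histo[i] > histo[best]) best = i;
    return -zRange + (best + 0.5f) * binWidth;   // bin centre of the maximum
}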

Fast track finder Tracking efficiency

Fast track finder Timing results

Hough transform (1) Data flow

Hough transform (2) η-slices

Hough transform (3) Transformation and maxima search
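A minimal sketch of a circle Hough transform of the kind used for primary tracks within one η-slice: a track from the vertex is a circle through the origin in the transverse plane, r = 2R·sin(φ − ψ), so each cluster (r, φ) votes along a curve in the (ψ, κ = 1/R) parameter plane and track candidates appear as maxima of the accumulator. Binning and ranges are illustrative assumptions.

// Fill a (psi, kappa) Hough accumulator from TPC clusters.
#include <cmath>
#include <vector>

struct ClusterRPhi { float r, phi; };

std::vector<std::vector<int>>
houghFill(const std::vector<ClusterRPhi>& clusters,
          int nPsi = 180, int nKappa = 100, float kappaMax = 0.02f /* 1/cm */) {
    std::vector<std::vector<int>> acc(nPsi, std::vector<int>(nKappa, 0));
    for (const auto& c : clusters) {
        for (int i = 0; i < nPsi; ++i) {
            const float psi   = -M_PI / 2 + M_PI * (i + 0.5f) / nPsi;  // emission angle bin
            const float kappa = 2.f * std::sin(c.phi - psi) / c.r;     // curvature
            const int   j     = static_cast<int>((kappa + kappaMax) /
                                                 (2.f * kappaMax) * nKappa);
            if (j >= 0 && j < nKappa) ++acc[i][j];
        }
    }
    return acc;   // a maxima search over 'acc' yields (psi, kappa) track seeds
}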

TPC on-line tracking
Assumptions: Bergen fast tracker, DEC Alpha 667 MHz, fast cluster finder excluding cluster deconvolution.
Note: this cluster finder is sub-optimal for the inner sectors and additional work is required there; to get an estimate, the computation requirements were based on the outer padrows. The deconvolution that may be necessary in the inner padrows could require comparably more CPU cycles.
TPC L3 tracking estimate (one ideal processor assumed; the data may not include realistic noise; tracking is to first order linear in the number of tracks provided there are few overlaps):
– cluster finder on a padrow of the outer sector: 5 ms
– tracking of all (Monte Carlo) space points for one TPC sector: 600 ms
– cluster finder on one sector (145 padrows): 725 ms
– processing of a complete sector: 1.325 s
– processing of the complete TPC: 47.7 s
– running at the maximum TPC rate (200 Hz), January figure: CPUs
– assuming 20% overhead (parallel computation, network transfer, additional inner-sector overhead, sector merging etc.): CPUs
– Moore's law (60%/a), 2006 – 1a commissioning: ×10, CPUs
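As a rough cross-check, the snippet below recombines the per-padrow and per-sector timings quoted above into the per-event processing time and the corresponding number of ideal CPUs at the maximum TPC rate; all inputs are the numbers from this slide.

// Back-of-the-envelope CPU estimate from the quoted timings.
#include <cstdio>

int main() {
    const double clusterFinderPerSector = 0.725;  // s, 145 padrows x 5 ms
    const double trackingPerSector      = 0.600;  // s
    const double perSector = clusterFinderPerSector + trackingPerSector;  // 1.325 s
    const double perEvent  = 36 * perSector;      // 36 TPC sectors -> ~47.7 s
    const double rateHz    = 200.0;               // maximum TPC rate
    const double cpus      = perEvent * rateHz;   // CPU-seconds needed per second
    std::printf("per event: %.1f s -> ~%.0f ideal CPUs at %.0f Hz (before overheads)\n",
                perEvent, cpus, rateHz);
    return 0;
}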

Level-3 system architecture
Diagram: inputs from TPC sector #1 ... TPC sector #36, TRD, ITS, detector XYZ, feeding
– local processing (subsector/sector)
– global processing I (2x18 sectors)
– global processing II (detector merging)
– global processing III (event reconstruction)
Outputs of the Level-3 trigger: ROI, data compression, jets, dielectron verification – event rejection, monitoring

Level-3 implementation scenarios
A: Detectors → DAQ-EVB → Level-3 (event #1, 2, ... n)
– simple architecture, trivial parallel processing
– throughput always limited to Hz due to the bandwidth limitation
– cannot fulfill all Level-3 requirements
B: Detectors → Level-3 ((sub)detector #1, 2, ... n) → DAQ-EVB
– minimized data transfer
– scalable: a distributed computing farm ( nodes + network) would do the job

Conclusion
– Need for online (crude/partial/sub-)event reconstruction and event rejection
– Essential task: fast pattern recognition (TPC)
– A distributed computing farm ( nodes) close to the detector readout would do the job

Preprocessing per sector (data flow)
– Detector front-end electronics: raw data, 10-bit dynamic range, zero suppressed
– RCU: Huffman coding and vector quantization
– RORC: fast cluster finder (simple unfolding, flagging of overlapping clusters), fast vertex finder, fast track finder initialization (e.g. Hough transform)
– Receiver node: cluster list, raw data, Hough histograms
– Global node: vertex position

TPC – RCU
TPC front-end electronics system architecture and readout controller unit.
Pipelined Huffman encoding unit, implemented in a Xilinx Virtex 50 chip.*
* T. Jahnke, S. Schoessel and K. Sulimma, EDA group, Department of Computer Science, University of Frankfurt

Processing per sector (data flow)
– From the RORC/receiver node: raw data (10-bit dynamic range, zero suppressed), vertex position, cluster list
– Slicing of the padrow-pad-time space into sheets of pseudo-rapidity, subdividing each sheet into overlapping patches → sub-volumes in (r, φ, η)
– Fast track finder: 1. Hough transformation → seeds; 2. Hough maxima finder; 3. tracklet verification → track segments
– Cluster deconvolution and fitting → updated vertex position, updated cluster list, track segment list

TPC PCI-RORC
Simple PCI-RORC: PCI bridge, glue logic, DIU interface, DIU card on the PCI bus.
Extended version: FPGA coprocessor with SRAM.

TPC PCI-RORC: FPGA co-processor
Fast cluster finder (outer padrows):
– pad: internal 512x10 RAM
– 2 external and 2 internal read accesses per hit
– timing (in clock cycles, e.g. 5 nsec)¹: #(cluster-pixels per pad) / 2 + #hits
– centroid calculation: pipelined array multiplier
Fast vertex finder:
– histograms of cluster centroids
– maxima finding and centroid calculation
Fast track finder: Hough transformations²
– (row, pad, time)-to-(r, φ, η) transformation
– (n-pixel)-to-(circle-parameter) transformation
– 10-60 M transforms/sec (limited by memory access) → msecs for a central Pb+Pb event
Diagram: PCI 66/64 bus, FPGA, (S)RAM.
1. Timing estimates by K. Sulimma, EDA group, Department of Computer Science, University of Frankfurt
2. E.g. see Pattern Recognition Algorithms on FPGAs and CPUs for the ATLAS LVL2 Trigger, C. Hinkelbein et al., IEEE Trans. Nucl. Sci. 47 (2000) 362.

TPC PCI-RORC: FPGA co-processor

Postprocessing (all sectors)
– Input per sector (sector 1, sector 19, sector ...): cluster list, track segment list
– Global nodes: track segment merging, precise distortion corrections, track refitting, vertex fitting; efficient data compression by cluster and track modeling → updated vertex position, updated cluster list, updated track segment list
– Detector information merging (other detectors), Level-3 trigger decision → accept/reject, compressed data

Level-3 TPC task
– Efficient data formatting, Huffman coding and vector quantization: TPC readout controller unit
– Fast cluster finder, fast vertex finder and Hough transformation: FPGA implementation on the PCI receiver card
– Pattern recognition (Hough maxima and track segment finder, cluster evaluation): Level-3 farm, local level
– Cluster and tracklet modelling – data compression: Level-3 farm, local level
– (Sub-)event reconstruction: event rejection or sub-event selection: Level-3 farm, global level

Level-3 TPC pattern recognition scheme
Preprocessing:
– fast cluster finder on a fibre patch scope
– fast vertex finder using all/outer cluster information
– fast tracker (seed finder) working on isolated clusters per sector
Processing:
– defining (r, φ, η) sub-volumes per sector
– dividing the sub-volumes into overlapping patches
– performing track finding on raw ADC data
– finding and unfolding clusters belonging to track segments
– combining track segments on sector level
– modelling clusters and compressing track and cluster information
Postprocessing:
– combining track segments from different sectors
– reconstructing the event
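A purely structural sketch of the per-sector part of this scheme; every type and function name below is a placeholder invented for illustration (none of them come from the actual HLT code), and only the ordering of the steps follows the slide.

// Per-sector processing chain: preprocessing, processing; postprocessing is global.
#include <vector>

struct SectorRawData {};                    // raw ADC data of one sector
struct Cluster {};                          // space point
struct TrackSegment {};                     // sector-level track piece

struct SectorResult {
    std::vector<Cluster>      clusters;
    std::vector<TrackSegment> segments;
    float                     vertexZ = 0.f;
};

// --- preprocessing (stubs) ---
std::vector<Cluster> fastClusterFinder(const SectorRawData&)             { return {}; }
float fastVertexFinder(const std::vector<Cluster>&)                      { return 0.f; }
std::vector<TrackSegment> seedFinder(const std::vector<Cluster>&, float) { return {}; }

// --- processing (stubs): Hough track finding on raw data in (r, phi, eta)
// patches, cluster unfolding, sector-level merging ---
std::vector<TrackSegment> houghTrackFinder(const SectorRawData&, float)  { return {}; }
void unfoldClusters(SectorResult&, const SectorRawData&)                 {}
void mergeWithinSector(SectorResult&)                                    {}

SectorResult processSector(const SectorRawData& raw) {
    SectorResult res;
    res.clusters = fastClusterFinder(raw);                 // preprocessing
    res.vertexZ  = fastVertexFinder(res.clusters);
    res.segments = seedFinder(res.clusters, res.vertexZ);

    auto hough = houghTrackFinder(raw, res.vertexZ);       // processing
    res.segments.insert(res.segments.end(), hough.begin(), hough.end());
    unfoldClusters(res, raw);
    mergeWithinSector(res);
    return res;   // postprocessing (sector merging, event reconstruction) runs globally
}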

Requirements on the RORC design concerning Level-3 tasks
– Level-3 TPC data reduction scheme
– PCI-RORC design

Data volume and event rate
TPC detector: data volume = 300 Mbyte/event, data rate = 200 Hz
Diagram components: front-end electronics, DAQ – event building, realtime data compression & pattern recognition on a PC farm (= 1000 clustered SMP, parallel processing), permanent storage system
Bandwidths quoted: 60 Gbyte/sec, 15 Gbyte/sec, < 1.2 Gbyte/sec, < 2 Gbyte/sec

Data flow
– Efficient data formatting, Huffman coding and vector quantization: TPC readout controller unit
– Fast cluster finder, fast vertex finder and tracker initialization (e.g. Hough transform): FPGA implementation on the PCI receiver card
– Pattern recognition ((Hough maxima and) track segment finder, cluster evaluation): Level-3 farm, local level
– Cluster and tracklet modelling – data compression: Level-3 farm, local level
– (Sub-)event reconstruction: event rejection or sub-event selection: Level-3 farm, global level

Typical Level-3 applications