CWG7 (reconstruction), R.Shahoyan, 12/06/2013

Case of single row Rolling Shutter
- N rows of the sensor are read out sequentially, a single row is read in time τ, the full cycle in T = N·τ (N ~ …, τ ~ 30 ns ⇒ T ~ 20 μs)
- Cycles are indexed; the start time of each cycle is known precisely
- 2 cycles are needed to cover the hits of a single collision
- Collision time (Δt ~ 25 ns << T) is known from the trigger ⇒ ~T effective integration time (for the pile-up…)
[Diagram: new ITS sensor, rows 1 … k, k+1 … N, row in readout at cycle J, readout direction, cycles J and J+1]
- Collision happening during readout of row k at cycle J ⇒ hits on rows k+1:N will be read in cycle J, hits on rows 1:k in cycle J+1; row k itself is inefficient during its readout (see the sketch below)
- No principal decision on the readout type has been taken yet; different readout schemes are under study:
  - simultaneous snapshot of the sensor matrix upon the trigger (not considered here since the event data is already isolated)
  - sequential readout (rolling shutter)
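As a minimal sketch of this row-to-cycle mapping: the helper name and the row count below are illustrative assumptions, not taken from the slide; only the ~30 ns row readout time comes from it.

```cpp
#include <cstdint>

// Illustrative constants: kNRows is an assumed value (the slide does not quote N),
// kTauNs is the ~30 ns row readout time from the slide.
constexpr int    kNRows   = 650;
constexpr double kTauNs   = 30.0;
constexpr double kCycleNs = kNRows * kTauNs;   // full rolling-shutter cycle T = N*tau

// Cycle in which the hit left on 'row' (1..N) by a collision at time 'tCollisionNs'
// (counted from the start of cycle 0) is read out: if the collision arrives while
// row k of cycle J is in readout, rows k+1..N are read still in cycle J,
// rows 1..k only in cycle J+1 (row k itself is inefficient at that moment).
std::int64_t readoutCycleOfHit(double tCollisionNs, int row)
{
  const auto cycleJ  = static_cast<std::int64_t>(tCollisionNs / kCycleNs);
  const int  rowInRO = 1 + static_cast<int>((tCollisionNs - cycleJ * kCycleNs) / kTauNs);
  return (row > rowInRO) ? cycleJ : cycleJ + 1;
}
```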

Continuous readout with Rolling Shutter (case of single row RS)
- Alternative ways of data extraction from the detector upon the trigger signal:
  - Continuous raw data: all cycles are read out w/o interruption; reconstruction is responsible for isolating the triggered collision using the trigger flag (time) as a reference.
  - Only the time frame relevant for the trigger goes to raw data (smallest data size: preferred option?): cycle J (rows k+1:N) + cycle J+1 (rows 1:k)
    - No problem of event separation(?): the minimal time frame covering the triggered event is defined in the DAQ
    - But: need special handling for the case of a 2nd trigger whose data overlaps with the 1st one (e.g. 2nd trigger during readout of row m of the same cycle J); two variants, sketched in the code below:
      - store in the 2nd event the data of J(m+1:N) + J+1(1:k) already stored for the 1st event ⇒ events are still isolated in the raw data, but at high interaction rate (almost every cycle is triggered) the overhead of overlapping time frames may exceed the gain from reading only the triggered cycles
      - store the 2nd event data starting from the last row stored for the 1st event, i.e. from J+1 (row k+1) ⇒ no overhead in the raw data from event overlap, but the events are not isolated: reconstruction needs to do this
- At high rates (always in p-p?) both continuous and "triggered frames" raw data contain the same information, only the format (and hence the handling by reconstruction) is different
[Diagram: time/cycle axis with cycles J, J+1, J+2; trig1 during row k and trig2 during row m of cycle J]
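A sketch of the two overlap-handling variants, assuming a hypothetical RowRange/TimeFrame bookkeeping (these types do not exist in the ALICE software; they only illustrate what ends up in the raw data in each variant):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical bookkeeping of a "triggered frame": which rows of which cycles it stores.
struct RowRange { std::int64_t cycle; int firstRow, lastRow; };   // rows firstRow..lastRow of one cycle
using TimeFrame = std::vector<RowRange>;

constexpr int kNRows = 650;   // assumed number of rows, for illustration only

// Frame for a trigger arriving while row k of cycle J is in readout:
// cycle J rows k+1..N plus cycle J+1 rows 1..k.
TimeFrame makeTriggeredFrame(std::int64_t cycleJ, int rowK)
{
  return { {cycleJ, rowK + 1, kNRows}, {cycleJ + 1, 1, rowK} };
}

// Variant (a): the 2nd trigger (row m of the same cycle J) duplicates the overlap
// J(m+1:N) + J+1(1:k) already stored for the 1st event -> events stay isolated in raw data.
TimeFrame secondFrameDuplicated(std::int64_t cycleJ, int rowM)
{
  return makeTriggeredFrame(cycleJ, rowM);
}

// Variant (b): start the 2nd frame from the last row stored for the 1st event,
// i.e. keep only cycle J+1 rows k+1..m -> no duplication, but events are not isolated.
TimeFrame secondFrameNoOverlap(std::int64_t cycleJ, int rowK, int rowM)
{
  return { {cycleJ + 1, rowK + 1, rowM} };
}
```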

Possible reconstruction schemes
- Clusterization: need to define
  - cluster format
  - container: access level: layer, cycle, "row" (e.g. in-cycle time slice)
  - handling of clusters split between 2 cycles
- Reconstruction: two extreme options
  - Short time frames: reconstruction has the clusters for cycles J, J+1 only (a sketch of this loop follows below)
    1) Find the tracks fully covered by these cycles IF continuous raw data or "triggered frames" merged together
    2) Discard cycle J; if needed, suppress the used clusters of cycle J+1
    3) Fetch the clusters of cycle J+2
    4) Repeat the procedure
    - CPU-time overhead from treating the collisions only partially covered by the fetched cycles as background hits (increases the combinatorics to test, but their tracks will be discarded: they will be reconstructed at the next step)
    - No memory overhead from keeping a large amount of cluster data in scope
  - Large time frames: reconstruction has access to the clusters of an "unlimited" number of cycles; tracks are built with a local check of the clusters' time-slice compatibility
    - No overhead from discarding incomplete track candidates to consider them again at the next step
    - Overhead from storing/accessing many clusters in the reconstruction
- Algorithms: most probably track finding will rely on ITS standalone tracking; at the moment CA is the prime candidate (the CBM CA code is being adapted/assessed)
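A minimal sketch of the "short time-frames" sliding window described above; the Cluster/Track formats and the fetchClusters()/findTracks() stubs are placeholders for exactly the items the slide lists as still to be defined (e.g. a CA-based track finder):

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// Placeholder data types: the actual cluster format and container are still to be defined.
struct Cluster { std::int64_t cycle; float timeSlice; /* layer, position, ... */ };
struct Track   { std::vector<std::size_t> clusterIds; };

// Stubs standing in for the cluster input and for the eventual track finder (e.g. CA-based),
// which would only accept tracks fully covered by the cycles currently in the window.
std::vector<Cluster> fetchClusters(std::int64_t /*cycle*/) { return {}; }
std::vector<Track>   findTracks(const std::deque<std::vector<Cluster>>& /*window*/) { return {}; }

// "Short time-frames" scheme: keep only cycles J and J+1 in memory and slide the window.
void reconstructShortFrames(std::int64_t firstCycle, std::int64_t lastCycle)
{
  std::deque<std::vector<Cluster>> window;
  window.push_back(fetchClusters(firstCycle));           // clusters of cycle J
  window.push_back(fetchClusters(firstCycle + 1));       // clusters of cycle J+1

  for (std::int64_t cycleJ = firstCycle; cycleJ + 1 < lastCycle; ++cycleJ) {
    const auto tracks = findTracks(window);               // 1) tracks fully covered by J, J+1
    (void)tracks;                                         //    ... store them, flag used clusters of J+1 ...
    window.pop_front();                                   // 2) discard cycle J
    window.push_back(fetchClusters(cycleJ + 2));          // 3) fetch clusters of cycle J+2
  }                                                       // 4) repeat
}
```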

TRD (A.Bercuci)
Online reconstruction
- Current offline reconstruction is based on offline tracklets.
- The idea is to use the online tracklets built in the FEE. Main question: quality of the online tracklets (speed vs TRD triggering capabilities)
  - The efficiency of building the online tracklets is satisfactory (>90% wrt offline for pT > 1.5 GeV)
  - The position resolution is also good, but the angular resolution is significantly worse:
    - will not affect the tracking capabilities
    - will affect PID (needs more study)
- Organization of the work is still to be discussed within the TRD
Possible improvement for Run2 (offline)
- Include the TRD data in the track kinematics update (larger lever arm ⇒ improved momentum resolution)
- Needs better calibration/alignment; work on the global calibration/alignment framework to (re)start in July/August
[Plots: online vs offline tracklet performance in pp and Pb-Pb]

MUON (L.Aphecetche)
- Goal: have fully online reconstruction for Run3
- As much as possible for Run2; if full online reco is impossible, then do online preclustering, store it in the hltESDTree and finish the processing offline (much faster)
- Current HLT code does not allow for full track reconstruction (but can be used as a starting point):
  - simplified clustering
  - the station within the dipole is not used
  - trigger information is used (will not be available in Run3)
- Organizational difficulties: not much expertise in HLT (need training); some people who worked on HLT Muon before are not in Muon anymore. Even for the offline code some critical parts (clustering) were written by a person who has left ALICE
- Currently assessing CPU/memory consumption. CPU hotspots identified (in p-Pb):
  - ~80% of the time: clusterization, ~20%: tracking/matching to the trigger
  - tracking is dominated (~64%) by B-field queries (the field map access can be optimized, see the sketch below)
- Short-term plans: once the CPU/memory profiling is done, investigate speed-up solutions for pre-clustering.
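One possible shape of such a field-access optimization, purely as an illustration: the FieldMap/CachedField types below are invented for this sketch and are not the actual AliRoot interfaces. The idea is to reuse the last queried field value while the track stays close to the previous query point, cutting down the number of expensive map lookups.

```cpp
#include <algorithm>
#include <array>

// Hypothetical stand-in for the full field map (in AliRoot this would be the magnetic
// field object); the dummy body just returns a uniform solenoid field in kGauss.
struct FieldMap {
  void Field(const double /*xyz*/[3], double b[3]) const { b[0] = b[1] = 0.; b[2] = -5.; }
};

// Cache the last field value and reuse it while the query point stays within 'tolCm'
// of the previously queried point, instead of interrogating the map at every step.
class CachedField {
 public:
  CachedField(const FieldMap& map, double tolCm) : fMap(map), fTol2(tolCm * tolCm) {}

  void Field(const double xyz[3], double b[3])
  {
    const double d2 = (xyz[0] - fXYZ[0]) * (xyz[0] - fXYZ[0]) +
                      (xyz[1] - fXYZ[1]) * (xyz[1] - fXYZ[1]) +
                      (xyz[2] - fXYZ[2]) * (xyz[2] - fXYZ[2]);
    if (!fValid || d2 > fTol2) {             // cache miss: query the real map
      fMap.Field(xyz, fB.data());
      std::copy(xyz, xyz + 3, fXYZ.begin());
      fValid = true;
    }
    std::copy(fB.begin(), fB.end(), b);      // return the cached (or freshly queried) value
  }

 private:
  const FieldMap&       fMap;
  double                fTol2;
  std::array<double, 3> fXYZ{{0., 0., 0.}};
  std::array<double, 3> fB{{0., 0., 0.}};
  bool                  fValid = false;
};
```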