Tracking for Triggering Purposes in ATLAS. John Baines on behalf of the ATLAS Collaboration. Connecting the Dots Workshop: Vienna, 22-24 Feb 2016.



Overview
- The ATLAS Detector
- The ATLAS Tracker & Trigger
- Trigger Tracking: Challenges & Solutions
- Current & Future: custom fast algorithms; hardware-based tracking: Fast TracKer (FTK); use of co-processors/accelerators
- ATLAS Trigger Tracking for HL-LHC
- Summary

The ATLAS Detector
Si Tracker: 12 silicon detector planes: 4 Pixel layers (3 in Run 1) and 4 double-sided SCT layers, ~100M channels in total. The two sides of each SCT layer are rotated by a 20 mrad stereo angle.

Tracking in the Trigger (1)
- Muon: match Inner Detector and Muon Spectrometer track segments
- Electron: match track and calorimeter cluster
- Lepton isolation: look for reconstructed tracks within a cone around the lepton
- Hadronic tau decays: distinguish hadronic tau decays from the jet background, based on the distribution of tracks in the core and isolation cones

Tracking in the Trigger (2)
- Jets originating from b-quarks (b-jets): identify primary and secondary vertices
- B-physics triggers: invariant-mass cuts, e.g. identify J/ψ(μμ) or B(μμ)
- Pile-up suppression: identify the number of interactions (number of vertices); identify which particles in a jet come from the primary vertex
[Figure: dimuon invariant-mass distribution showing the J/ψ peak]
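For massless particles (a good approximation for muons at trigger energies), the invariant-mass cut mentioned above reduces to a closed-form expression in (pT, η, φ). A minimal Python sketch, illustrative only and not ATLAS trigger code; a real J/ψ selection would window the result around 3.1 GeV:

```python
import math

def inv_mass(p1, p2):
    """Invariant mass of two particles given as (pt, eta, phi) tuples,
    in the massless approximation:
    m^2 = 2 pt1 pt2 (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    pt1, eta1, phi1 = p1
    pt2, eta2, phi2 = p2
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

# Two back-to-back 10 GeV muons at the same eta give m = 20 GeV.
```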

LHC & High-Luminosity LHC
Timeline (schematic): first 7 TeV collision data (first W→eν candidates); Long Shutdown 1: addition of the innermost pixel layer (IBL), trigger upgrades; Run 2: addition of FTK (Fast TracKer: hardware-based tracking); Long Shutdown 2: further trigger upgrades, concurrent processing on many-core CPUs (+ accelerators); Long Shutdown 3: HL-LHC installation at 7.5 x nominal luminosity, upgraded ATLAS detector incl. replacement of the Inner Tracker, upgraded trigger.

Trigger Requirements, Challenges & Solutions
Requirements: high efficiency; low fake rate; excellent track-parameter resolution.
Challenges:
- Event complexity: many superimposed collisions: 45 (Run 1) to 69 (Run 3) to 200 (HL-LHC)
- High rate: 100 kHz in Runs 2 & 3, rising to 400 kHz (1 MHz) at the HL-LHC
- Short time: finite HLT farm size => ~300 ms/event for ALL reconstruction, ~factor 50 faster than offline
- Huge number of hit combinations even at current luminosities (~30 interactions): space-points give ~10^9 triplets, reduced to ~10^4 seeds and ~10^3 tracks (a space-point is a pixel cluster or an SCT cluster-pair combining the two stereo sides)
Solutions for the Trigger:
- Reconstruction in Regions of Interest (RoIs): reduced detector volume reconstructed; knowledge of the L1 trigger type enables optimised reconstruction
- Two-stage tracking: Fast Tracking (initial loose trigger selection using reduced-resolution tracks) followed by Precision Tracking (full-precision tracks for the final trigger selection)
- Limit hit combinations: geometrical constraints using the RoI, beamspot and possibly primary-vertex information
- Hardware trigger: FTK (Runs 2 & 3): pattern matching using a custom ASIC
- Acceleration (future): exploring the use of GPGPUs
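The RoI-based restriction can be sketched as a simple η-φ window test. This is a hypothetical helper, not the ATLAS implementation (the real RoI geometry also has an extent in z along the beamline); the half-width value is illustrative:

```python
import math

def in_roi(track_eta, track_phi, roi_eta, roi_phi, half_width=0.1):
    """Keep only objects inside a small eta-phi window around the
    Level-1 RoI centre; phi differences are wrapped into (-pi, pi]."""
    dphi = (track_phi - roi_phi + math.pi) % (2.0 * math.pi) - math.pi
    return abs(track_eta - roi_eta) <= half_width and abs(dphi) <= half_width
```

The phi wrap matters: two directions at +3.1 and -3.1 rad are only ~0.08 rad apart, so a naive difference would wrongly reject them.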

Trigger in LHC Runs 2 & 3
- Level 1: hardware-based (FPGA); calorimeter & muon (+ specialized triggers); 40 MHz collision rate reduced to 100 kHz within a latency < 2.5 μs
- Fast TracKer (FTK): hardware-based tracking, commissioned during 2016; full-event track information supplied to the HLT
- High Level Trigger: software-based, running on a CPU farm (~4200 CPUs, ~32k cores); precision calorimeter and muon information; precision tracking mainly in RoIs with improved resolution; time budget ~300 ms/event; output ~1 kHz to offline reconstruction & storage
Note: in Run 1 the HLT was split into Level-2 and the Event Filter.

Run-1 Trigger Tracking
Two-step tracking: Fast Tracking followed by Precision Tracking.
1) Fast Tracking in Run 1 used two strategies.
Histogramming method [1]:
- ZFinder: identify the z-positions of vertices by histogramming the beam-line intercepts of pairs/triplets of hits
- HitFilter: identify groups of nearby hits by histogramming in η-φ (uses the vertex z for η)
- Group Cleaner: identify the hits on a track by histogramming in (φ0, 1/pT)
- Clone Removal & Track Fitting: remove duplicate tracks sharing hits; determine the track parameters
- Extension to the Transition Radiation Tracker [2]
Combinatorial method [3]: also used for the GPU prototype (see later); similar to the algorithm used in Run 2 (next slide).
2) Precision Tracking uses an online configuration of offline tools.
Trade-offs:
- Combinatorial method: + does not rely on finding the primary vertex for efficiency (but vertex info can increase speed); + more similar to the offline algorithm; - more rapid scaling of execution time with occupancy
- Histogramming method: + time scales linearly with occupancy; - potential loss of efficiency if the vertex is not found
[1] Sutton, doi: …/j.nima…  [2] Probabilistic Data Association Filter: Emeliyanov, NIM A 566 (2006)  [3] ATL-DAQ-CONF-…
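The ZFinder step can be illustrated with a small histogramming sketch. Assumptions (mine, not from the slides): hits are reduced to (r, z) pairs, a straight-line extrapolation to the beam axis is adequate, and the bin size and z range are illustrative rather than the tuned trigger values:

```python
import numpy as np

def zfinder(hit_pairs, z_range=(-200.0, 200.0), bin_mm=2.0):
    """Histogram the beam-line z-intercept of inner/outer hit pairs
    and return the bin centre of the highest peak (candidate vertex z).

    hit_pairs: iterable of ((r1, z1), (r2, z2)) with r2 > r1, in mm.
    """
    edges = np.arange(z_range[0], z_range[1] + bin_mm, bin_mm)
    hist = np.zeros(len(edges) - 1)
    for (r1, z1), (r2, z2) in hit_pairs:
        # straight-line extrapolation of the hit pair to r = 0
        z0 = z1 - r1 * (z2 - z1) / (r2 - r1)
        idx = np.searchsorted(edges, z0) - 1
        if 0 <= idx < len(hist):
            hist[idx] += 1
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])
```

Pairs from real tracks pile up in the bin at the true vertex z, while random pairings spread roughly uniformly, which is why the peak is robust against combinatorial background.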

Run-2 HLT Tracking
Fast Tracking [1]:
- Seeding: form triplets of hits using bins in r and φ; conformal mapping to estimate the triplet parameters; combinations limited using the RoI geometry
- Track following: extend to all Si layers
- Fast Kalman track fit
Precision Tracking [2]:
- Online configuration of offline tools
- Resolve ambiguous hit assignments; remove duplicate tracks
- Precision track fit (global χ² algorithm [3])
[Plots: impact-parameter resolution and 1/pT resolution vs η]
[1] Martin-Haugh, 2015 J. Phys.: Conf. Ser.  [2] HLTTrackingPublicResults  [3] G. Lutz, NIM A 273 (1988)  [4] ATL-SOFT-PUB-…
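The conformal-mapping estimate of triplet parameters can be sketched as follows (illustrative Python, not the HLT code). The map (x, y) -> (u, v) = (x/r², y/r²) sends circles through the origin to straight lines, so a triplet's radius of curvature (and hence its pT) follows from a cheap linear fit instead of a circle fit:

```python
import numpy as np

def triplet_radius(hits):
    """Estimate the radius of the circle through the origin and three
    hits, via the conformal map (x, y) -> (u, v) = (x/r^2, y/r^2).
    The image is a straight line v = a*u + b whose perpendicular
    distance from the (u, v) origin equals 1/(2R)."""
    pts = np.asarray(hits, dtype=float)        # shape (3, 2): x, y in mm
    r2 = (pts ** 2).sum(axis=1)
    u, v = pts[:, 0] / r2, pts[:, 1] / r2
    a, b = np.polyfit(u, v, 1)                 # line fit in conformal space
    return float(np.sqrt(1.0 + a * a) / (2.0 * abs(b)))   # R in mm
```

Treating the trajectory as a circle through the origin is valid for prompt tracks; displaced tracks acquire an impact-parameter correction.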

Fast TracKer (FTK)
FTK:
- Gets SCT & Pixel data directly from the detector front-end
- Performs pattern matching & track fitting at up to the full Level-1 rate (100 kHz)
- Sends tracks to the HLT: perigee parameters and hits-on-track information (cluster centres)
The HLT has access to FTK tracks in the whole event, HLT refits of FTK tracks, and HLT tracks in RoIs:
- FTK tracks: primary-vertex reconstruction; initial track-based selections in the whole event or large RoIs, incl. hadronic tau decays, B decays including hadrons, lepton isolation, track-based jets
- HLT refit of FTK tracks in RoIs: selections requiring higher parameter precision, incl. primary & secondary vertices for b-jet tagging, impact parameter and invariant mass for B-physics
- HLT tracking in RoIs: single leptons (e, μ, leptonic τ decays); final selections requiring ultimate efficiency & precision

FTK System
High throughput (40M tracks/s) and low latency (~100 μs), achieved by:
- Parallelism: 64 independent towers (4 in η x 16 in φ)
- Hardware implementation: custom ASICs and FPGAs
Two stages:
1) Pattern matching with 8 detector layers: uses Associative Memory (AM) holding 1 billion patterns; reduced granularity: pixels/strips grouped into super-strips
2) Extension to 12 layers: track parameters extracted using Principal Component Analysis => a sum rather than a fit; uses full granularity
AM chip: a special content-addressable memory chip; AM06 is implemented in 65 nm technology; a large chip (168 mm²) performing an enormous number of parallel 16-bit comparisons per second.
Implementation: 8192 AM chips, each with 128k patterns (8 words each), total memory 18 Mbits; 2000 FPGAs.

FTK Patterns & Constants
Determined from a large sample of simulated single muons (~1.5 billion tracks).
Pattern-bank size is determined by the required minimum pT (1 GeV), the d0 & z0 range, and the super-strip size. Larger super-strips => a smaller pattern bank but more fakes => more work for later steps.
Variable-resolution patterns: effectively multiply the super-strip size by 2 (4 or 8) in a given layer of a given pattern, employing a "Don't Care" feature when bit-matching the super-strip address. They allow the optimum combination of large bins (to limit bank size) and small bins (to minimise the fake rate).
Maximizing efficiency: allow 1 missing hit at each of stage 1 and stage 2; an additional Wild Card option treats a layer as if all super-strips had a hit.
Ref: doi: …/ANIMMA…
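The "Don't Care" matching can be illustrated in a few lines. This is a sketch of the idea only, not the AM06 logic: masked low-order address bits are ignored in the comparison, which widens the effective super-strip for that layer:

```python
def matches(pattern_addr, dc_mask, hit_addr):
    """A stored pattern matches a hit super-strip if all address bits
    agree outside the Don't Care mask (set bits in dc_mask are
    ignored). Masking the lowest 1 (2) bits doubles (quadruples) the
    effective super-strip size for that layer of that pattern."""
    return (pattern_addr | dc_mask) == (hit_addr | dc_mask)
```

For example, with the two lowest bits masked, addresses 0b101100 and 0b101110 match, while 0b101000 does not (it differs in an unmasked bit).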

FTK First Stage
- Clusters are formed from the SCT and Pixel input and transformed to coarse-resolution super-strips
- The AM board performs pattern matching using 3 Pixel + 5 SCT layers (6+5 measurements)
- 1st-stage track fitting: input is the addresses of the patterns with hits; uses the full-resolution hit information; uses Principal Component Analysis to estimate χ²; allows 1 missing hit. The χ² is linear in the local hit positions:
  χ² = Σ_i ( Σ_j S_ij x_j + h_i )²
  where S_ij and h_i are pre-calculated constants, x_j are the hit coordinates, j runs over the measured coordinates and i over the components of the χ² calculation.
- 1st-stage Hit Warrior: duplicate-track removal; tracks within the same road are compared and, if they share more than N_max hits, only the higher-quality track is kept (based on N_hits and χ²)
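The linearised χ² above can be written directly in code. A sketch only: in the real system S and h are the pre-computed constants of the matched pattern's sector, evaluated in hardware as sums rather than fits:

```python
import numpy as np

def ftk_chi2(S, h, x):
    """FTK-style linearised goodness-of-fit:
    chi2 = sum_i (sum_j S_ij x_j + h_i)^2,
    i.e. each chi^2 component is a linear combination of the local
    hit coordinates x_j plus an offset h_i."""
    r = S @ x + h          # residual components, linear in the hits
    return float(r @ r)    # sum of squared components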

FTK Second Stage
- Uses the 8-layer track parameters to find the 3 SCT stereo hits + the IBL hit
- 12-layer track fit: parameters & χ², again linear in the hit coordinates:
  p_i = Σ_l C_il x_l + q_i
  where p_i are the track parameters, C_il and q_i are pre-calculated constants and x_l are the hit coordinates.
- All tracks are input to the Hit Warrior to remove duplicates
- Fits 1 track in 1 ns (1 track every 5 ps for the full system)
Boards: Data Formatter; Auxiliary Board (Extrapolator, Track Fitter, Hit Warrior); Processor Unit.
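The second-stage parameter extraction is the same kind of linear evaluation. A sketch, with illustrative constants rather than real sector constants:

```python
import numpy as np

def ftk_track_params(C, q, x):
    """Second-stage linearised fit: each helix parameter p_i is a
    linear function of the full-resolution hit coordinates x_l,
    p_i = sum_l C_il x_l + q_i,
    with C and q pre-computed per detector sector. This reduces the
    'fit' to a matrix-vector multiply, which is why the hardware can
    process one track per nanosecond."""
    return C @ x + q
```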

FTK Status & Schedule
Status:
- Production of chips & boards is underway
- First AM06 chips received; functioning well
- Integration with data acquisition in progress; dataflow to the FTK system was exercised at the end of 2015 data-taking
- Triggers are being developed & tested with simulated data => expected-performance measurements
Schedule:
- Full barrel commissioned mid-2016
- Full coverage (for ⟨μ⟩ = 40): for the start of 2017 data-taking
- Full-spec system (for ⟨μ⟩ = 69): 2018 (luminosity driven)
Ref: Tompkins, ATL-DAQ-SLIDE-…

GPU Prototype
GPUs provide high levels of parallelisation => re-writing of the code is required to exploit it. A GPU prototype has been developed to explore the potential for GPGPUs in the ATLAS Trigger. The goal is to assess the potential benefit in terms of throughput per unit cost. Integration with the ATLAS software uses a client-server technology.
                  Nvidia C2050   Nvidia K80
Cores             448            2x2496
Multiprocessors   14             2x13
Cores/SM(X)       32             192

Clustering on GPU
Pixel clustering is implemented using a cellular-automaton algorithm:
- Each hit is assigned an initial tag
- Tags are replaced for adjacent clusters
- The procedure ends when there are no more adjacent hits
A factor-26 speed-up is obtained running on the GPU compared to 1 CPU core.
Ref: D. Emeliyanov and J. Howard, 2012 J. Phys.: Conf. Ser.
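The cellular-automaton tag propagation can be sketched serially in Python. Assumptions of mine: 8-connectivity between fired pixels, and sequential sweeps standing in for the GPU version, where each pixel maps to one thread and sweeps run in lock-step:

```python
def ca_cluster(pixels):
    """Cellular-automaton clustering sketch: every fired pixel starts
    with its own tag; each sweep replaces a pixel's tag by the minimum
    tag among itself and its 8-connected fired neighbours, until no
    tag changes. Pixels sharing a final tag form one cluster."""
    tags = {p: i for i, p in enumerate(sorted(pixels))}
    changed = True
    while changed:
        changed = False
        for (x, y) in tags:
            best = tags[(x, y)]
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in tags and tags[n] < best:
                        best = tags[n]
            if best != tags[(x, y)]:
                tags[(x, y)] = best
                changed = True
    clusters = {}
    for p, t in tags.items():
        clusters.setdefault(t, []).append(p)
    return list(clusters.values())
```

The algorithm is embarrassingly parallel within a sweep, which is what makes it a good fit for a GPU: no pixel's update depends on another pixel's update in the same sweep having completed.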

Combinatorial Track-finding on GPU
Steps: doublet formation; triplet formation; track extension.

GPU Performance & Outlook
Hardware: Nvidia C2050 GPU vs Intel E5620 CPU (2.4 GHz).
Code speed-up: average factor 12 speed-up for tracking on the GPU c.f. 1 CPU core.
System performance: what increase in throughput does a CPU+GPU system give compared with a CPU-only system? Measurements are in progress with updated hardware (K80 GPU, Intel E5-2695V3 CPU) and updated software (the Run-2 tracking algorithm). Initial measurements with the Seed-Maker suggest a factor-of-two increase in system throughput could be obtained by adding a GPU. Work is in progress to add GPU track following. The prototype also includes calorimeter & muon reconstruction.
Ref: ATL-DAQ-SLIDE-…

Tracking in the Phase-II Trigger
HL-LHC conditions: L = 7.5x10^34 cm^-2 s^-1, 200 pile-up events: ~10 times the current highest luminosity. Upgraded detector including an upgraded trigger and a replacement tracker (ITk: silicon strips + pixels, with coverage extended to |η| < 4).
Tracking in the Phase-II trigger:
- Hardware tracking in RoIs at up to 1 MHz, used to refine the L0 decision
- Hardware full-event tracking at ~100 kHz for selected events, providing primary-vertex information for pile-up suppression
- Precision tracking at the Event Filter, mainly in RoIs: near-offline-quality tracks to maximize selection efficiency
Schematic flow: 40 MHz collision rate; L0 (muon, calorimeter) outputs 1 MHz; L1 (muon, calorimeter, regional fast-tracking) outputs 400 kHz; the Event Filter (muon, calorimeter, full-event fast-tracking, precision tracks in RoIs) outputs ~10 kHz to offline reconstruction & storage.
[Schematic of a possible ITk layout]

Summary
The trigger is a challenging environment for tracking: high rates, short times, and requirements of high efficiency and low fake rates.
Current solutions include:
- Partial event reconstruction: use of Regions of Interest
- Limiting combinations: using the beam constraint and RoI information
- Multi-step reconstruction and selection: fast, lower-resolution tracking followed by slower, full-resolution tracking
- Hardware-based fast tracking based on Associative Memory and FPGAs
Future evolution:
- Evolution of the software framework to exploit concurrent execution of algorithms
- Evolution of algorithms to exploit internal parallelisation
- Possible use of accelerators such as GPGPUs: encouraging results obtained from a prototype
Phase II is even more challenging and will exploit hardware-based tracking (full-event and regional) and CPU-based precision tracking in RoIs (with possible use of accelerators). Solutions used in the trigger could also be useful offline.
Holy Grail: full offline-quality tracking at L0 output rates (400 kHz - 1 MHz). Looking for new ideas to bring us closer to this ideal.

Additional Material

AM06 testing status:
- Received first samples of AM06 (slow corner: -7.5% saturation current)
- 42 out of 50 tested are without production defects
- The defects are localized errors at the pattern level, for a few patterns
- A few samples have been tested intensively: they run fine at full speed (100 MHz)


Example of the complete L2 ID chain implemented on GPU (Dmitry Emeliyanov).
[Table: time (ms) per step for a 0.6x0.6 tau RoI in ttbar events at 2x10^34: C++ on a 2.4 GHz CPU vs CUDA on a Tesla C2050, with the CPU/GPU speed-up per step; steps: data preparation, seeding, seed extension, triplet merging, clone removal, CPU-GPU transfer (n/a on CPU, 0.1 ms on GPU); the remaining numbers are not recoverable from this transcript]
Max. speed-up: x26. Overall speed-up t(CPU)/t(GPU): 12.


Hardware Tracking at Phase II
AM chip: projected to hold about 500k patterns with a 28 nm ASIC using multi-layer masks (MLM).
Regional tracking: reconstruction in Regions of Interest at up to 1 MHz. The estimated number of patterns required to cover the entire ITk up to |η| < 4 for pT > 4 GeV is about 1.5x10^9 (c.f. 1 GeV for FTK). Patterns are duplicated to allow 2 events to be processed in parallel: a total of 3x10^9 patterns => 6,000 AM chips.
Full-event tracking: uses all ITk layers out to |η| < 4.0 and finds tracks down to a pT of 1 GeV at 100 kHz: ~10 billion patterns => 20,000 AM chips.