Algorithm / Data-flow Interface


Algorithm / Data-flow Interface
John Baines

Contents:
- PESA Software Requirements: Algorithms, Interface Layer
- Interfaces
- Data Descriptor
- Data Preparation
- Implementation issues
- Summary

http://hepunx.rl.ac.uk/atlasuk/simulation/level2/meetings/TDAQ-April-01/dataflow.ps
http://hepunx.rl.ac.uk/atlasuk/simulation/level2/meetings/TDAQ-April-01/dataflow.pdf

PESA Software Requirements

PESA software requirements (currently being reviewed):
http://www.hep.ph.rhbnc.ac.uk/atlas/newsw/requirements/
Section 4.8 lists requirements for "Interface to Data Collection".

Assumptions made in that document:
- Algorithms running in LVL2 processors will request data by specifying an abstract "Region" and the required object type for the returned data.
- An interface layer is required between PESA algorithms and data collection, fulfilling requirements and constraints from both sides.
- The ROS may have the capability to perform some data preparation tasks.
- The current requirements specify a higher degree of flexibility than may be required in the final system, because at this stage of the project we need to evaluate various ROS-DC-HLT scenarios, e.g. data preparation in the LVL2 processor compared with the same data preparation performed in the ROS.

Assumed constraints:
- Data collection never looks inside the ROS data.
- The granularity used by the readout system may be coarser than that which algorithms would ideally use to select their data, i.e. data additional to that requested may be returned.
- Data from RODs may vary between predefined formats from event to event, e.g. different levels of compression, full readout / zero-suppressed readout, etc.

Algorithm Requirements

Want a flexible way to specify the requested data, in order to:
- Avoid inefficiency due to part of the trajectory lying outside a simple RoI (e.g. due to track curvature, vertex spread, etc.).
- Specify data in the most natural way for a given detector.
- Minimise the amount of "extra" data, in order to:
  - reduce data-flow;
  - simplify the pattern-recognition task.

Want data returned as a collection of objects of a specified type, i.e. unpacked, with headers removed and converted to the appropriate form => data preparation.

Ideally want the same interface for all algorithms:
=> The same algorithm can be evaluated in different parts of the system during the development phase.
It might also be beneficial to have this flexibility in the final system.

Types of algorithm:
- Data pre-selection: e.g. select the subset of hits from detectors inside a specified region (see the sketch after this list).
- Data preparation: e.g. SCT & pixel cluster formation, SCT stereo association, conversion of hit information to position in space, LAr calibration, etc.
- Feature extraction: e.g. calorimeter cluster finding, muon track finding, etc. (used to be called "pre-processing").
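A minimal sketch of the data pre-selection idea, assuming hypothetical Hit and region types (the concrete Region interface is sketched on the "Data Descriptor Implementation" slide); names and members here are illustrative, not the actual PESA classes:

    #include <vector>

    // Hypothetical unpacked hit: position in eta/phi plus the layer it was found in.
    struct Hit {
      float eta, phi;
      int   layer;
    };

    // Pre-selection: copy only the hits lying inside the requested region.
    // 'RegionT' here stands for any predicate over detector coordinates.
    template <typename RegionT>
    std::vector<Hit> preSelect(const std::vector<Hit>& raw, const RegionT& region) {
      std::vector<Hit> selected;
      for (const Hit& h : raw) {
        if (region.contains(h.eta, h.phi, h.layer)) {
          selected.push_back(h);
        }
      }
      return selected;
    }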

Interface Layer Requirements

The interface layer should:
- Provide a uniform style of data access, regardless of detector or of the level of data preparation requested.
- Provide data unpacked into objects, with a different class for each detector, and different classes for different formats (e.g. SCT hits vs. SCT clusters). The requested class determines the detector and the level of data preparation.
- Where data preparation / pre-selection is required, either initiate it directly in the LVL2 processor, or pass the request on to be initiated at the ROS. The latter may require some parameters to be passed to the data pre-selection algorithm at the ROS.
- Support, via the data descriptor, a range of different pre-defined ways to specify the geometrical region for the requested data.
- Convert the geometrical region to a list of ROBs to be requested from data collection (see the sketch below).
- Allow a level of granularity below the ROB to be specified, e.g. a list of modules. This list would form the parameters used as input to a data pre-selection algorithm running either in the LVL2 processor or at the ROS.
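A minimal sketch of the region-to-ROB conversion, assuming a hypothetical RobDescriptor carrying the coordinate ranges each ROB covers; the select interface anticipates the proposal on the "Data Descriptor Implementation" slide:

    #include <cstdint>
    #include <vector>

    // Assumed properties of one ROB: ID plus the eta/phi range it covers.
    struct RobDescriptor {
      uint32_t robId;
      float etaMin, etaMax, phiMin, phiMax;
    };

    // Abstract region, as proposed later in this talk.
    struct Region {
      virtual ~Region() = default;
      virtual bool select(const RobDescriptor& desc) const = 0;
    };

    // Interface layer: scan the static ROB map (the "LUT") and keep the
    // IDs of all ROBs whose coverage intersects the requested region.
    std::vector<uint32_t> robsForRegion(const std::vector<RobDescriptor>& robMap,
                                        const Region& region) {
      std::vector<uint32_t> ids;
      for (const RobDescriptor& rob : robMap) {
        if (region.select(rob)) ids.push_back(rob.robId);
      }
      return ids;
    }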

Interfaces

[Diagram: data-flow between Steering, FEX Algorithm, Interface Layer, Data Preparation / Pre-Selection Algorithms, Data Collection and ROS.]
- Steering invokes a FEX algorithm with a seed: rc = FexAlgorithm->execute(seed).
- The FEX algorithm requests data from the interface layer by region and data type:
  DataDescriptorType region; DataType data; ErrorCode rc = dataService->get(region, data);
  and receives the data as a collection of objects of the specified type.
- The interface layer uses geometry information and the ROB map (LUT) to translate the region into an event ID plus a list of ROB IDs, with optional data-preparation parameters, and passes the request to data collection.
- Data preparation or pre-selection algorithms convert raw data to prepared data, either in the LVL2 processor or at the ROS; the ROS supplies the raw data, and the interface layer receives raw or prepared data back from data collection.
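A minimal, self-contained sketch of the algorithm-side call shown in the diagram. EtaPhiRegion, SctCluster and the mock DataService are hypothetical stand-ins for a concrete region class, requested data type and the real data service:

    #include <iostream>
    #include <vector>

    struct SctCluster { float eta, phi; };
    using SctClusterCollection = std::vector<SctCluster>;
    enum class ErrorCode { Success, Failure };

    struct EtaPhiRegion {
      float etaMin, etaMax, phiMin, phiMax;
      bool contains(float eta, float phi) const {
        return eta >= etaMin && eta <= etaMax && phi >= phiMin && phi <= phiMax;
      }
    };

    // Mock data service: returns prepared clusters inside the region.
    struct DataService {
      std::vector<SctCluster> all{{0.05f, 0.02f}, {0.5f, 0.5f}};  // fake event data
      ErrorCode get(const EtaPhiRegion& r, SctClusterCollection& out) const {
        for (const SctCluster& c : all)
          if (r.contains(c.eta, c.phi)) out.push_back(c);
        return ErrorCode::Success;
      }
    };

    int main() {
      DataService dataService;
      // Seed at (eta, phi) = (0, 0); request a 0.1 x 0.1 half-width window.
      EtaPhiRegion region{-0.1f, 0.1f, -0.1f, 0.1f};
      SctClusterCollection clusters;
      ErrorCode rc = dataService.get(region, clusters);  // the call from the diagram
      if (rc == ErrorCode::Success)
        std::cout << clusters.size() << " cluster(s) in region\n";
      return 0;
    }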

Data Descriptor

Trigger algorithms use seeds to derive the region of the detector from which data is required. Seeds can be, e.g.:
- LVL1 RoI
- LVL2 cluster
- LVL2 track

[Sketch: RoI cone from the interaction point (I.P.), bounded by η + Δη and η - Δη, with vertex spread Δz.]

A LVL1 RoI can be used to specify a cone (η, Δη, φ, Δφ), but this is not sufficient/appropriate in all cases, e.g.:
- It is crucial to take the vertex spread into account for the pixels and SCT => η, Δη, φ, Δφ, Δz (or ηmin, ηmax, φmin, φmax, zmin, zmax); see the sketch below.
- η, φ are not natural for all systems; e.g. it is natural to use an x-y range for the FCAL.
- The ability to additionally specify layers can reduce data-flow, e.g. by a factor 2 using layer-by-layer sequential selection in the calorimeter (ATL-DAQ-2000-042).
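A minimal sketch of such a descriptor, assuming a hypothetical EtaPhiZRegion built from a RoI centre, half-widths, and the beam-spot z spread (φ wrap-around at ±π is ignored here):

    #include <vector>

    // Hypothetical region in (eta, phi, z), as argued for above:
    // eta/phi half-widths from the RoI, z range from the vertex spread.
    struct EtaPhiZRegion {
      float etaMin, etaMax;
      float phiMin, phiMax;
      float zMin, zMax;
      std::vector<int> layers;  // empty => all layers

      static EtaPhiZRegion fromRoI(float eta, float phi,
                                   float dEta, float dPhi, float dZ) {
        return {eta - dEta, eta + dEta, phi - dPhi, phi + dPhi, -dZ, dZ, {}};
      }
    };

    // Example: 0.1 x 0.1 LVL1 RoI at (eta, phi) = (1.2, 0.4), |z| < 10 cm:
    // EtaPhiZRegion r = EtaPhiZRegion::fromRoI(1.2f, 0.4f, 0.1f, 0.1f, 10.0f);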

Data Descriptor (cont.)

For LVL2 seeds, knowledge of pT and charge sign could significantly reduce the data volume to be read out, especially at low pT. E.g. for confirmation of a muon in the TileCal and Inner Detector, the data volume is reduced w.r.t. a simple RoI by allowing the data to be specified in a road about the trajectory measured in the muon system.
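As an illustration of why pT and charge sign help: the bend of a track in a solenoidal field fixes the azimuthal offset of the trajectory at a given radius, so a narrow road can follow the curve instead of opening a wide, charge-symmetric cone. A minimal sketch, assuming the standard curvature relation R = pT / (0.3 B) (pT in GeV, B in Tesla, R in metres); the function name and the B = 2 T default are illustrative:

    #include <cmath>

    // Azimuthal deflection (radians) of a track of transverse momentum pT (GeV)
    // and charge q (+1/-1) at radius r (m), in an axial field B (Tesla).
    // Uses R = pT / (0.3 * B) and sin(dPhi) = r / (2 R).
    double roadDeltaPhi(double pT, int charge, double r, double B = 2.0) {
      const double R = pT / (0.3 * B);            // curvature radius in metres
      return charge * std::asin(r / (2.0 * R));   // sign follows the bend direction
    }

    // E.g. a 5 GeV muon at r = 1 m in B = 2 T bends by asin(1 / 16.7) ~ 0.06 rad,
    // so the road centre shifts with the charge sign and the road stays narrow.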

Data Descriptor Implementation

It is the task of the Interface Layer to determine which ROBs contain detectors inside the region described by the data descriptor.

Suggestion from Reiner Hauser for an implementation of the data descriptor:
http://www.hep.ph.rhbnc.ac.uk/atlas/newsw/discussion-material/rh-region.txt
- Region is an abstract base class with several derived classes, one for each distinct way of specifying the data.
- Region has a select method which returns true or false according to whether or not a given ROB contains detectors inside the region.
- To change the way a region is specified, e.g. if a more complicated selection is required, a new class is defined, derived from the Region base class.
- The ROB Descriptor contains whatever information is necessary to determine whether the ROB is in the Region, e.g. the η and φ range covered by the ROB, layers, z range, etc.
- Region could also have a method used by a data preparation algorithm to say whether a given detector is inside the region.

    // ROB Descriptor:
    class ROBDescriptor {
      // Properties, e.g. eta, phi, layer etc.
    };

    // Abstract base class:
    class Region {
    public:
      virtual ~Region() = default;
      virtual bool select(const ROBDescriptor& desc) = 0;
    };

    // Implementation of a specific region description:
    class EtaPhiRegion : public Region {
    public:
      bool select(const ROBDescriptor& desc) override {
        return true;  // placeholder: true if desc lies in the eta/phi region
      }
    };
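To illustrate the extension mechanism, a minimal sketch of a second derived class, here a hypothetical x-y region of the kind the earlier slide suggests for the FCAL. It assumes ROBDescriptor has been given x-y extent members (xMin/xMax/yMin/yMax), which are not in the skeleton above:

    // Hypothetical x-y region, e.g. for the FCAL, added without touching
    // the interface layer: it only ever calls Region::select.
    class XYRegion : public Region {
    public:
      XYRegion(float xMin, float xMax, float yMin, float yMax)
        : m_xMin(xMin), m_xMax(xMax), m_yMin(yMin), m_yMax(yMax) {}

      bool select(const ROBDescriptor& desc) override {
        // Assumed descriptor properties: the x-y extent covered by the ROB.
        return desc.xMax >= m_xMin && desc.xMin <= m_xMax &&
               desc.yMax >= m_yMin && desc.yMin <= m_yMax;
      }

    private:
      float m_xMin, m_xMax, m_yMin, m_yMax;
    };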

Data Preparation

Data pre-selection might be performed at the ROS in order to reduce data flow, e.g.:
- Zero suppression of TRT data could reduce the data volume by a factor of ~5 at low luminosity (ATLAS DAQ Note 66), i.e. 128 kByte -> 28 kByte assuming 3% occupancy.
- Selecting the subset of SCT and pixel modules which lie wholly or partially in the RoI gives a factor ~10 data reduction, i.e. ~25 kByte/RoI -> ~2.5 kByte/RoI at L = 10^34 cm^-2 s^-1. For an optimised ROB mapping there are on average ~1.5 pixel ROBs and ~2 SCT ROBs per (small) RoI (Δη x Δφ = 0.1 x 0.1), i.e. on average 72 pixel detectors and 192 SCT detectors are read out per RoI, of which only a small subset of modules lie inside the RoI.

Data preparation at the Interface Layer simplifies the task for the algorithm, provides a uniform style of data access, and hides details of the raw data format:
- Conversion to a format convenient for the FEX algorithm, e.g. unpacking & creation of objects.
- SCT and pixel clustering.
- SCT stereo association.
- LAr calibration.
- Space-point formation: conversion from address to position in space.
- Possibly further data pre-selection, e.g. selecting the subset of space-points lying within the RoI.

Detector numbers (see the volume-estimate sketch below):

Pixel: 2,200 modules, 61k pixel elements per module => 135 million pixels.
- L = 10^33: occupancy ~2 x 10^-5 => ~2,700 hits/ev, ~1.2 hits/module, ~15 kBytes data/ev.
- L = 10^34: occupancy ~1 x 10^-4 => ~13,500 hits/ev, ~6 hits/module, ~75 kBytes data/ev.

SCT: 4,100 modules, 2 x 756 strips per module => 6.2 million channels.
- L = 10^33: occupancy ~1 x 10^-3 => 3,200 hits, ~0.8 hits/module, ~20 kBytes/event.
- L = 10^34: occupancy ~1 x 10^-2 => 32,000 hits, ~8 hits/module, ~200 kBytes/event.

TRT: 320,000 straws (50,000 in the barrel, split into 2 r/o channels; 280,000 in the end-caps) => 420,000 r/o channels; 96+64+96 = 256 ROBs. 18 bits per channel, packed to ~6.5 bits/channel => 34 kBytes per event.
- Occupancy @ 10^33: barrel 1.3 - 3.8%, end-cap 1.5 - 3.6% => ~10,000 hits per event.
- Occupancy @ 10^34: barrel 13 - 38%, end-cap 15 - 36% => ~100,000 hits per event.
- Data volume without zero suppression: 130 kBytes @ 10^33, 330 kBytes @ 10^34 (ATLAS-DAQ-66).
- Data volume with zero suppression: 30 kBytes @ 10^33, 255 kBytes @ 10^34.
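A minimal sketch of the occupancy arithmetic behind these numbers, reproducing the pixel estimates above. The function name and the ~5.5 bytes-per-hit figure (chosen to match the slide's ~15 kBytes at 10^33) are assumptions:

    #include <cstdio>

    // Event data volume from channel count and occupancy.
    // bytesPerHit ~5.5 is an assumption tuned to reproduce the slide's numbers.
    double dataVolumeKB(double channels, double occupancy, double bytesPerHit) {
      const double hits = channels * occupancy;
      return hits * bytesPerHit / 1000.0;
    }

    int main() {
      const double pixels = 135e6;  // 2,200 modules x 61k elements
      // L = 10^33: occupancy ~2e-5 => ~2,700 hits => ~15 kBytes
      std::printf("pixel @1e33: %.0f kB\n", dataVolumeKB(pixels, 2e-5, 5.5));
      // L = 10^34: occupancy ~1e-4 => ~13,500 hits => ~75 kBytes
      std::printf("pixel @1e34: %.0f kB\n", dataVolumeKB(pixels, 1e-4, 5.5));
      return 0;
    }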

Implementation Issues

- Where data is re-arranged (e.g. unpacking), copying could be avoided if the algorithm receives a pointer to a collection of objects and iterates over these. The details of the raw data are then hidden, e.g. whether the hits are in separate ROS fragments.
- The conversion from raw data to offline-type classes could use the code being developed for the Event Builder, provided LVL2 and the EB use the same classes.
- Could data pre-selection and simple data preparation (e.g. SCT clustering) be performed on demand, to avoid copying? E.g.:
  - pre-selection: skip over hits outside the Region;
  - SCT clustering: find the centre of a sequence of adjacent hits (see the sketch below).
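A minimal sketch of the clustering idea just described: given the strip numbers of the hits on one SCT side, sorted, group runs of adjacent strips and take the centre of each run (names and units are illustrative):

    #include <vector>

    // Group runs of adjacent strip numbers (assumed sorted) into clusters
    // and return the centre of each run, in strip units.
    std::vector<float> clusterCentres(const std::vector<int>& strips) {
      std::vector<float> centres;
      std::size_t i = 0;
      while (i < strips.size()) {
        std::size_t j = i;
        // extend the run while the next strip is adjacent to the current one
        while (j + 1 < strips.size() && strips[j + 1] == strips[j] + 1) ++j;
        centres.push_back(0.5f * (strips[i] + strips[j]));  // centre of the run
        i = j + 1;
      }
      return centres;
    }

    // E.g. strips {10, 11, 12, 40} -> centres {11.0, 40.0}.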

Summary

The different requirements of Data Collection and HLT algorithms could be met as follows:

Algorithms specify their data request in the form of:
- a Data Descriptor (Region);
- the Class for the returned data.

An Interface Layer is provided in order to:
- hide details of the raw data and readout architecture (ROS, ROB) from the algorithm;
- hide details of geometrical regions and offline-type Classes from Data Collection.

The Interface Layer converts the data request to a list of ROBs. Data preparation and/or pre-selection are performed at the interface layer as required, determined by the Region and the Class, i.e. the interface layer converts the raw data to a collection of objects of the requested Class type lying within the requested region.

Additionally, pre-selection or data preparation may be requested at the ROS. In this case, the interface layer provides the necessary parameters to be passed to the pre-selection or data preparation algorithm at the ROS.