1
Status of implementation of Detector Algorithms in the HLT framework
Calibration Session – OFFLINE week (16-03-2007)
M. Richter, D. Röhrich, S. Bablok (IFT, University of Bergen)
P.T. Hille (University of Oslo)
M. Ploskon (IKF, University of Frankfurt)
S. Popescu, V. Lindenstruth (KIP, University of Heidelberg)
Indranil Das (Saha Institute of Nuclear Physics)
2
TOC
HLT functionality
HLT interfaces
– HLT–DCS
– HLT–OFFLINE
– HLT interface to AliEve
– Synchronisation via ECS
Status of Detector Algorithms
– general remarks
– TPC
– TRD
– PHOS
– DiMuon
3
HLT functionality
Trigger
– Accept/reject events: verify dielectron candidates, sharpen the dimuon transverse momentum cut, identify jets, ...
Select
– Select regions of interest within an event: remove pile-up in p-p, filter out low momentum tracks, ...
Compress
– Reduce the amount of data required to encode the event as far as possible without losing physics information
4
HLT interfaces
ECS:
– Controls the HLT via well defined states (SMI)
– Provides general experiment settings (type of collision, run number, ...)
DCS:
– Provides the HLT with current detector parameters (voltages, temperatures, ...): Pendolino
– Provides DCS with processed data from the HLT (TPC drift velocity, ...): FED portal (Front-End-Device portal)
OFFLINE:
– Interface to fetch data from the OCDB: Taxi (OFFLINE → HLT)
– Provides OFFLINE with calculated calibration data: Shuttle portal (HLT → OFFLINE)
HOMER:
– HLT interface to AliEve for online event monitoring
5
Data flow in HLT
[Diagram: data flow between the ECS proxy, DCS (PVSS, Archive DB, FED portal with DIM Subscriber, Pendolino portal), OFFLINE (OCDB conditions, AliEn, Taxi, Shuttle portal, FES, MySQL), the local caches HCDB and DA_HCDB, the detector algorithms (DA) in the HLT PubSub chain, and AliEve via HOMER; internal vs. external components and detector vs. framework responsibilities are marked.]
6
Synchronisation via ECS (ECS interface)
[State diagram: OFF → INITIALIZING → INITIALIZED → CONFIGURING → CONFIGURED → ENGAGING → READY → RUNNING → COMPLETING, with reverse transitions via DISENGAGING and DEINITIALIZING; ECS commands: INITIALIZE, SHUTDOWN, CONFIGURE + params, RESET, ENGAGE, DISENGAGE, START, STOP; some transitions are implicit.]
Synchronisation points marked in the diagram:
– Distribution of the current version of the HCDB to the DA nodes (DA_HCDB); DAs request their DA_HCDB
– Pendolino fetches data from the DCS Archive DB and stores it in the DA_HCDB
– Filling of the FileExchange Server (FES) and MySQL DB; the Offline Shuttle can then fetch the data
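The command-to-transition mapping can be made explicit with a small schematic, sketched below as plain C++. It is a reading of the slide's diagram; the exact targets of the implicit transitions are partly assumed, and this is not the actual SMI/TaskManager implementation.

```cpp
#include <string>

// HLT TaskManager states as read off the diagram above.
enum HltState { kOff, kInitializing, kInitialized, kConfiguring, kConfigured,
                kEngaging, kReady, kRunning, kCompleting, kDisengaging, kDeinitializing };

// ECS command -> state transition; the intermediate states decay to their target
// via the implicit transitions mentioned in the diagram (assumed mapping).
HltState onEcsCommand(HltState s, const std::string& cmd) {
  if (cmd == "INITIALIZE" && s == kOff)         return kInitializing;   // implicit -> kInitialized
  if (cmd == "CONFIGURE"  && s == kInitialized) return kConfiguring;    // implicit -> kConfigured
  if (cmd == "ENGAGE"     && s == kConfigured)  return kEngaging;       // implicit -> kReady
  if (cmd == "START"      && s == kReady)       return kRunning;
  if (cmd == "STOP"       && s == kRunning)     return kCompleting;     // implicit -> kReady
  if (cmd == "DISENGAGE"  && s == kReady)       return kDisengaging;    // implicit -> kConfigured
  if (cmd == "RESET"      && s == kConfigured)  return kInitialized;
  if (cmd == "SHUTDOWN")                        return kDeinitializing; // implicit -> kOff
  return s;  // unknown or invalid command: state unchanged
}
```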
7
HLT–DCS interface
FED portal:
– DIM channels (services published on the HLT side), partially implementing the FED API (see the sketch below)
– Subscriber component of the HLT framework
– PVSS panels on the DCS side integrate the data into the DCS system
– DCS related data is stored in the DCS Archive DB (HLT-cluster monitoring [HLT]; TPC drift velocity, ... [detector specific])
Pendolino:
– contacts the DCS Amanda server (DCS Archive DB)
– fetches current running conditions (temperatures, voltages, ...)
– feeds the content into the DA_HCDB
– requests in regular time intervals: three Pendolinos, each with a different frequency (fast, medium, slow)
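A minimal sketch of one such FED-portal channel, assuming the standard DIM C++ server API (dis.hxx). The service name, server name and drift-velocity value are illustrative, not the actual FED API channel names.

```cpp
#include <unistd.h>   // sleep()
#include "dis.hxx"    // DIM server classes (DimService, DimServer)

int main() {
  float driftVelocity = 2.65f;                                // illustrative value (cm/us)
  DimService drift("HLT/TPC/DRIFT_VELOCITY", driftVelocity);  // hypothetical channel name
  DimServer::start("HLT_FED_PORTAL");                         // hypothetical server name, registers with the DIM DNS

  for (;;) {
    // ... recompute driftVelocity from HLT reconstruction results ...
    drift.updateService();   // publish the bound value; PVSS panels on the DCS side subscribe to it
    sleep(60);
  }
  return 0;
}
```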
8
HLT–DCS interface
[Diagram: the DCS vobox (portal-dcs) hosts the Pendolino (including detector preprocessing) and a DIM Subscriber; the PubSub–FED API (DIM) interface connects to the DCS Archive DB and PVSS; the Pendolino fills the DA_HCDB (file catalogue) read by the DAs through the AliRoot CDB access classes; SysMes triggers the synchronisation; detector vs. framework responsibilities are marked.]
9
HLT–DCS interface
Pendolino details:
– Three different frequencies:
  fast Pendolino: 10 s - 1 min
  normal Pendolino: 1 min - 5 min
  slow Pendolino: over 5 min
– Response time: ~13000 values per second
– Remark: the requested values can be up to 2 min old (this is the time that can pass until the data is shifted from the DCS PVSS to the DCS Archive DB)
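A minimal sketch of the three request loops with their different polling frequencies. The intervals are taken from the slide; fetchFromAmandaAndStore() and the group names are hypothetical placeholders for the actual Amanda query and DA_HCDB update.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical placeholder for the real work: query the DCS Amanda server and
// write the fetched values into the DA_HCDB.
void fetchFromAmandaAndStore(const char* group) {
  std::printf("fetching DCS values for group '%s'\n", group);
}

void pendolinoLoop(const char* group, std::chrono::seconds interval) {
  for (;;) {                                   // runs for the duration of the run
    fetchFromAmandaAndStore(group);
    std::this_thread::sleep_for(interval);
  }
}

int main() {
  std::thread fast  (pendolinoLoop, "fast",   std::chrono::seconds(30));   // 10 s - 1 min
  std::thread normal(pendolinoLoop, "normal", std::chrono::seconds(120));  // 1 min - 5 min
  std::thread slow  (pendolinoLoop, "slow",   std::chrono::seconds(600));  // over 5 min
  fast.join(); normal.join(); slow.join();
}
```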
10
HLT–OFFLINE interface
Taxi portal:
– Requests the OCDB and caches its content locally (HCDB)
– Provides calibration objects to the Detector Algorithms (DAs) inside the HLT; copied locally to the DA nodes before each run (DA_HCDB)
– DA_HCDB accessible via the AliRoot CDB access classes (see the sketch below)
Shuttle portal:
– Collects calibration data from the Detector Algorithms
– Provides the data to OFFLINE; fetched after each run by the Offline Shuttle
– Data exchanged via the FileExchangeServer (FES)
– Meta data stored in a MySQL DB
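A minimal sketch of how a DA could read a calibration object from its local DA_HCDB copy through the AliRoot CDB access classes; the storage path and the "TPC/Calib/Pedestals" entry are illustrative.

```cpp
#include "AliCDBEntry.h"
#include "AliCDBManager.h"
#include "TObject.h"

void ReadFromHCDB(Int_t runNumber) {
  AliCDBManager* man = AliCDBManager::Instance();
  man->SetDefaultStorage("local:///opt/HLT/DA_HCDB");   // path to the local DA_HCDB copy, illustrative
  man->SetRun(runNumber);
  AliCDBEntry* entry = man->Get("TPC/Calib/Pedestals"); // example entry path
  if (entry) {
    TObject* calib = entry->GetObject();                // detector-specific calibration object
    // ... hand 'calib' to the detector algorithm ...
    (void)calib;
  }
}
```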
11
HLT–OFFLINE interface, Taxi portal:
[Diagram: the ECS proxy and TaskManager pass the current run number and trigger the update; the Taxi (portal-taxi0 on vobox-taxi0) fetches from the OCDB into the HCDB; the HCDB is copied to the DA nodes as DA_HCDB, where the DAs read it through the AliRoot CDB access classes; SysMes monitors the nodes.]
12
HLT–OFFLINE interface, Shuttle portal:
[Diagram: the DAs write calibration data to the FES and metadata to the MySQL DB through portal-shuttle0 (Subscriber); the ECS proxy / TaskManager notify "collecting finished"; the Offline Shuttle then fetches the data and stores it in the OCDB.]
13
AliEVE – HLT event display (example: TPC)
– existing infrastructure (M. Tadel) adapted to HLT with minimal effort
– connect to HLT from anywhere within the GPN
– ONE monitoring infrastructure for all detectors, using the HOMER data transport abstraction
14
Status Detector Algorithms (general remarks)
– HLT provides the service and infrastructure to run Detector Algorithms, e.g. reconstruction and calibration algorithms
– offline code can run on the HLT if it fulfills the requirements given by the following constraints:
  limited accessibility of (global) AliRoot data structures
  processing of each event is distributed over many nodes
  none of the nodes has the full event data of all stages available
– a Detector Algorithm interfaces to the HLT data chain via a processing component
– the processing component implements the HLT component interface
General principle: Data Input → HLT → Data Output; only the input, DCS and calibration data are available for processing (see the sketch below)
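A minimal schematic of this general principle: a processing component wraps the detector algorithm and works only on the input blocks delivered to its node plus the calibration/DCS data from DA_HCDB, producing output blocks for the next stage. The class, struct and method names below are illustrative, not the actual AliHLT component base-class interface.

```cpp
#include <cstddef>
#include <vector>

// Simplified representation of a data block shipped between HLT components.
struct DataBlock {
  const void* fPtr;
  unsigned    fSize;
};

// Hypothetical wrapper class: the detector algorithm only ever sees its own
// input blocks plus the calibration data, never the full event on all nodes.
class ExampleProcessingComponent {
public:
  int DoEvent(const std::vector<DataBlock>& input, std::vector<DataBlock>& output) {
    for (std::size_t i = 0; i < input.size(); ++i) {
      // run the detector algorithm on input[i];
      // no AliRunLoader, no global event data, no other nodes' data
    }
    // append result blocks to 'output'; the framework ships them to the next component
    (void)output;
    return 0;
  }
};
```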
15
Status Detector Algorithms (general remarks)
HLT chain: HLT processors / Detector Algorithms
– completely identical HLT processing can run both in the online and in the offline framework: online the chain reads from RORC publishers (data from DAQ) and writes via HLT out (data to DAQ), offline it runs through the offline source/sink interfaces inside AliHLTReconstructor (see the macro sketch below)
– dedicated data structures are shipped between components; they can be ROOT TObjects
– DAs must work entirely on incoming data
– dedicated publisher components for special data are possible
– HLT produces ESD files, filled with the data it can reconstruct/provide
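A minimal ROOT macro sketch of running the same HLT processing in the offline framework through the AliReconstructor plugin (AliHLTReconstructor). The file name and option strings are illustrative; the setters are the generic AliReconstruction ones and should be treated as an assumption about the exact steering.

```cpp
// run with:  aliroot -b -q runHLTOffline.C
void runHLTOffline() {
  AliReconstruction rec;
  rec.SetRunLocalReconstruction("HLT");  // run only the HLT plugin (AliHLTReconstructor)
  rec.SetFillESD("HLT");                 // fill the ESD with what the HLT chain provides
  rec.Run("raw.root");                   // raw data input, illustrative file name
}
```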
16
Status Detector Algorithms (general remarks)
1) no access to (global) AliRoot data structures:
  (a) DAs have no AliRunLoader instance
  (b) DAs run as separate processes, no data exchange via global variables
  (c) DAs can only work on incoming data and OCDB data
2) a proper data transport hierarchy must be deployed by the DA, i.e. access to data through global methods/objects from lower hierarchies is penalized
3) structures/objects for data exchange have to be optimized
4) TObjects used for data transport must declare pointer-type members as transient members ( //! ), with the initialization properly handled by the constructor (see the sketch below)
5) in principle all offline code using the AliReconstructor plugin scheme can run on the HLT, if a proper data transport hierarchy is deployed
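A minimal sketch of rule 4, using a hypothetical transport class: the pointer-type member is declared transient with the ROOT "//!" comment so it is not streamed, and the constructor initializes all members.

```cpp
#include "TH1F.h"
#include "TObject.h"

class AliHLTExampleCalibData : public TObject {   // hypothetical class name
public:
  AliHLTExampleCalibData() : TObject(), fGain(1.0), fCache(0) {}
  virtual ~AliHLTExampleCalibData() {}

  Float_t fGain;    // streamed payload member
  TH1F*   fCache;   //! transient pointer: never streamed, rebuilt locally when needed

  ClassDef(AliHLTExampleCalibData, 1)  // needs a ROOT dictionary (ClassImp in the source file)
};
```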
17
Status Detector Algorithms (TPC status)
Status:
– full TPC reconstruction running in the HLT, output in ESD format
– TPC calibration tasks defined by the TPC group
– the TPC group decided to make extensive use of the HLT's computing capabilities for calibration tasks
– several prototype DAs developed
– commissioning of the calibration algorithms starts soon
18
Status Detector Algorithms (TPC task list)
HLT online monitoring for the TPC (an example for the pedestal/noise item is sketched after this list):
– Calibration:
  1-d histograms for pedestal runs and noise calibration
  1-d histograms for pad-by-pad calibration (time offset, gain and width of the time response function) for the pulser run and during normal data taking
  1-d histograms for the gain calibration during the krypton run, cosmics, laser and data taking
  TPC drift velocity
  Data Quality Monitoring (DQM)
– Online monitoring:
  3d view of reconstructed tracks, optionally together with the 3d detector geometry
  drift velocity monitoring
  pad-by-pad signal
  charge per reconstructed track monitoring
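A minimal ROOT sketch of the first calibration item: filling the 1-d pedestal/noise histogram of one pad from its raw ADC samples. The pad indexing and the way the samples arrive are illustrative.

```cpp
#include <cstddef>
#include <vector>
#include "TH1F.h"
#include "TString.h"   // Form()

TH1F* MakePedestalHistogram(int padId, const std::vector<int>& adcSamples) {
  TH1F* h = new TH1F(Form("hPed_pad%d", padId),
                     "Pedestal;ADC counts;entries", 1024, -0.5, 1023.5);
  for (std::size_t i = 0; i < adcSamples.size(); ++i) h->Fill(adcSamples[i]);
  // the histogram mean estimates the pedestal, its RMS the noise of this pad
  return h;
}
```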
19
Status Detector Algorithms (TRD status)
Clusterization algorithm:
– ready and working
– uses the offline clusterizer directly
Standalone tracker:
– almost ready (ready within the next 1-2 weeks)
– HLT component implemented
– still a few fixes to be done within the AliRoot TRD offline code; HLT will run 100% offline code here too
PID component:
– pending; the offline code is in its finalization stage; again, no change of the offline algorithms within the HLT
Triggering scenarios under consideration
20
Status Detector Algorithms (TRD status)
Calibration:
– native AliRoot OCDB calibration data access (provided via the HLT Taxi)
– production of reference data for the calibration algorithms: ready and working, uses offline code directly
Monitoring:
– prototype ready
– integration into AliEve will follow
[Figure: TRD clusters reconstructed on the HLT]
21
Status Detector Algorithms (TRD status)
Calibration:
– histogram production ready and working (MCM tracklet based)
– each HLT component has OCDB access (just like in Offline) via a local (HLT node) storage; the TRD chain uses OCDB data (1:1 offline AliRoot code)
– the TRD preprocessor handles the calculation of the calibration parameters from the input histograms collected on the HLT
TRD local reconstruction on the HLT is almost complete (local tracking still on the way...)
Calibration histograms are produced
First HLT monitoring code emerging soon (also AliEve support)
22
Status Detector Algorithms (TRD task list)
Short term to do:
– PID
– track merging with TPC (and eventually ITS)
Long term to do:
– physics trigger scenarios
23
Status Detector Algorithms (PHOS status)
Current status:
– full PHOS HLT chain (5 modules) running with raw data simulated in AliRoot
– successful test on a simulated HLT "cluster" consisting of 3 laptops
– fast and accurate online evaluation of cell energies
– calibration data: continuous accumulation of per-channel energy distributions; calibration data written to ROOT files at end of run; the histograms have been evaluated visually and look reasonable (a sketch of this accumulation follows below)
– raw data can be written to files untouched by the HLT (HLT mode A)
– calibration data can be accumulated over several runs
– event display: display of events & calibration data for 5 modules using HOMER; collection of data from several nodes to be visualized in a single event display
– the PHOS HLT analysis chain has run successfully distributed over 21 nodes of the HLT cluster at P2
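A minimal ROOT sketch of the calibration accumulation described above: per-channel energy distributions filled continuously and written to a ROOT file at end of run. The channel count, histogram binning and file name are illustrative.

```cpp
#include <vector>
#include "TFile.h"
#include "TH1F.h"
#include "TString.h"   // Form()

const int kNChannels = 64;            // illustrative channel count
std::vector<TH1F*> gEnergyHist;       // one energy distribution per channel

void InitHistograms() {
  gEnergyHist.resize(kNChannels);
  for (int ch = 0; ch < kNChannels; ++ch)
    gEnergyHist[ch] = new TH1F(Form("hEnergy_ch%d", ch),
                               "Cell energy;E (a.u.);entries", 200, 0., 100.);
}

void FillChannel(int ch, double energy) { gEnergyHist[ch]->Fill(energy); }

void WriteAtEndOfRun(const char* fileName = "phos_calib.root") {
  TFile f(fileName, "RECREATE");      // illustrative output file name
  for (int ch = 0; ch < kNChannels; ++ch) gEnergyHist[ch]->Write();
  f.Close();
}
```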
24
Status Detector Algorithms (PHOS task list)
Currently ongoing work:
– implementation of DQM
– integration of end-of-run calibration procedures, DA
– implementation of a fast π0 invariant mass algorithm
– testing and benchmarking of the processing chain on the HLT cluster
– preparations for PDC07
Near future plans:
– integration with ECS, DCS, Shuttle etc.
– testing of the HLT processing chain on beam test data
– making of ESDs to be sent to DAQ with HLT-out
– running of the PHOS HLT processing chain on data files and ROOT files
– minor improvements on the online display
– finalization and documentation of the internal PHOS HLT data format
26
Status Detector Algorithms (DiMuon status and task list)
Present status:
– standalone hit reconstruction is ready and implemented in the HLT environment of the CERN PC farm
– first results of a resolution test of the dHLT chain with raw data generated using AliRoot at CERN
– processing time for multiple events is large compared to the standalone mode
– the full dHLT chain is up and working on the UCT cluster
Future to-do list:
– improvement of the processing time
– integration of the tracker algorithm in the CERN HLT
– implementation of the full chain along with debugging and benchmarking
– preparing the output in ESD format
– efficiency check of the dHLT chain using beam test data
27
Information on the web
http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/ECS-interface
http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/Specification-HLT2OFFLINE-interface
http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/UseCase-Calibration-HLT
http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/Data_path_from_DCS_to_the_HLT
and the talks of the HLT session at the last ALICE week
28
Backup slides
29
Status Detector Algorithms (TRD DA overview)
30
Status Detector Algorithms (TRD status)
31
Resolution of the dHLT hit reconstruction
Note: the resolution in the Y direction is far better than in the X direction due to the detector geometry; the minimum pad size in the bending plane is ~0.5 cm, whereas in the non-bending direction it is ~0.71 cm.
32
HLT–ECS interface
State transition commands from ECS:
– INITIALIZE, CONFIGURE (+ params), ENGAGE, START, ...
– mapping to TaskManager states
CONFIGURE parameters (a parsing sketch follows below):
– HLT_MODE: the mode in which the HLT shall run (A, B or C)
– BEAM_TYPE: pp (proton-proton) or AA (heavy ion)
– RUN_NUMBER: the run number for the current run
– DATA_FORMAT_VERSION: the expected output data format version
– HLT_TRIGGER_CODE: ID defining the current HLT trigger classes
– CTP_TRIGGER_CLASS: the trigger classes in the Central Trigger Processor
– HLT_IN_DDL_LIST: list of DDL cables on which the HLT can expect event data in the coming run; the structure will look like the following: :, :, ...
– HLT_OUT_DDL_LIST: list of DDLs on which the HLT can send data to DAQ
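A hypothetical sketch of handling the CONFIGURE parameters on the HLT side. Only the parameter names come from the slide; the KEY=VALUE formatting and the parseConfigure() helper are assumptions for illustration.

```cpp
#include <map>
#include <sstream>
#include <string>

// Parse a whitespace-separated KEY=VALUE parameter string into a map.
std::map<std::string, std::string> parseConfigure(const std::string& params) {
  std::map<std::string, std::string> cfg;
  std::istringstream in(params);
  std::string token;
  while (in >> token) {
    std::string::size_type eq = token.find('=');
    if (eq != std::string::npos)
      cfg[token.substr(0, eq)] = token.substr(eq + 1);
  }
  return cfg;
}

// Usage sketch:
//   std::map<std::string, std::string> cfg =
//       parseConfigure("HLT_MODE=B BEAM_TYPE=pp RUN_NUMBER=12345");
//   // cfg["RUN_NUMBER"] == "12345"
```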