
1 High Level Trigger Online Calibration framework in ALICE CHEP 2007 – Victoria (02.09. – 07.09.2007)
Sebastian Robert Bablok (Department of Physics and Technology, University of Bergen, Norway) for the ALICE HLT Collaboration (see last slide for complete list of authors)

2 Outline
High Level Trigger (HLT) system
HLT Interfaces
  to the Experiment Control System (ECS)
  to Offline
  to the Detector Control System (DCS)
  to event visualization (AliEve)
Synchronization Timeline
Summary & Outlook

3 High Level Trigger system
Purpose: Online event reconstruction and analysis
  1 kHz event rate (minimum bias PbPb, pp)
  Provision of trigger decisions
  Selection of regions of interest within an event
  Performance monitoring of the ALICE detectors
  Lossless compression of event data
  Online production of calibration data
[Diagram: raw event data flows from the Trigger and detectors via DAQ into the HLT; analyzed events / trigger decisions go back to DAQ and Mass Storage; ECS and DCS are connected to all systems.]

4 High Level Trigger system
Hardware Architecture:
  PC cluster with up to 500 computing nodes: AMD Opteron 2 GHz (dual board, dual core), 8 GByte RAM, 2 x 1 GBit Ethernet
  InfiniBand backbone
  Infrastructure nodes for maintaining the cluster
  8 TByte AFS file server
  Dedicated portal nodes for interaction with the other ALICE systems
  Cluster organization matches the structure of the ALICE detectors

5 High Level Trigger system
Software Architecture:
  Dynamic software data transport framework (Publish-Subscriber)
  Cluster management (TaskManager)
  Analysis software (AliRoot)

6 HLT - Interfaces
[Diagram: the HLT cluster and its interfaces. The ECS (Experiment Control System) controls the HLT via the ECS-proxy; the DCS (Detector Control System) delivers DCS values through the DCS-portal (Pendolino, FED) and receives calculated values back; the FEE (Front-End-Electronics) streams event data over DDLs into the FEPs; processed events and trigger decisions go to the DAQ (Data Acquisition); new calculated calibration data reach the OCDB (Offline Conditions DB) through the OFFLINE-portal (Shuttle, Taxi); AliEve (ALICE Event Monitor) connects via HOMER for online event monitoring.]
FEP = Front-End-Processor; DDL = Direct Data Link; HOMER = HLT Online Monitoring Environment including ROOT

7 HLT ↔ ECS interface
ECS Proxy:
  Steering of the HLT via the Experiment Control System (ECS)
  Synchronizes the HLT with the other online and offline systems in ALICE
  Provides the HLT with the upcoming run setup (run number, type of collision, trigger classes, …)
  Consists of a state machine (implemented in SMI++) with well-defined states and transition commands; see the sketch below
[Diagram: the ECS controls the ECS-proxy via states and transition commands; the proxy steers the TaskManager inside the HLT cluster.]
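The following is a minimal C++ sketch of such a transition table, using the state and command names from the backup slide on the ECS proxy; the real proxy is written in SMI++, and the collapsing of intermediate states (INITIALIZING, CONFIGURING, …) is a simplification for illustration.

```cpp
// Sketch of the ECS-proxy state machine: only transitions listed in the
// table are legal; everything else is rejected.
#include <map>
#include <string>
#include <utility>

enum State { kOff, kInitialized, kConfigured, kReady, kRunning };

class EcsProxy {
public:
    EcsProxy() : fState(kOff) {
        // Allowed ECS transition commands per state (intermediate states
        // such as INITIALIZING are collapsed for brevity).
        fTable[{kOff,         "INITIALIZE"}] = kInitialized;
        fTable[{kInitialized, "CONFIGURE" }] = kConfigured;  // + run parameters
        fTable[{kConfigured,  "ENGAGE"    }] = kReady;
        fTable[{kReady,       "START"     }] = kRunning;
        fTable[{kRunning,     "STOP"      }] = kReady;       // via COMPLETING
        fTable[{kReady,       "DISENGAGE" }] = kConfigured;
        fTable[{kConfigured,  "RESET"     }] = kInitialized;
        fTable[{kInitialized, "SHUTDOWN"  }] = kOff;
    }
    bool Transition(const std::string& cmd) {
        auto it = fTable.find(std::make_pair(fState, cmd));
        if (it == fTable.end()) return false;  // command illegal in this state
        fState = it->second;
        return true;
    }
    State Current() const { return fState; }
private:
    State fState;
    std::map<std::pair<State, std::string>, State> fTable;
};
```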

8 OFFLINE → HLT interface (Taxi)
Taxi portal:
  Fetches the latest available calibration settings from the OCDB: calibration objects pre-produced or calculated in former runs, enveloped in ROOT files (can be used directly by the ALICE analysis framework via the AliRoot CDB access classes → transparent access for analysis code whether running online or offline; see the sketch below)
  Caches calibration data locally in the HLT (HLT Conditions DataBase - HCDB)
  Retrieval is performed asynchronously to the run
  HCDB is distributed to the nodes hosting the Detector Algorithms (DAs): copied locally to the DA nodes before each run → fixed version per run
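A minimal sketch of how a detector algorithm could read from the local HCDB cache through the AliRoot CDB access classes; the storage URI and the calibration path are illustrative placeholders, not the real HLT paths.

```cpp
// Read a calibration object from the locally cached HCDB. Because the
// AliRoot CDB manager abstracts the storage, the same analysis code runs
// unchanged against the Grid OCDB offline.
#include "AliCDBManager.h"
#include "AliCDBEntry.h"

void LoadCalibration(Int_t runNumber)
{
    AliCDBManager* man = AliCDBManager::Instance();
    // Point the manager at the local HCDB copy (assumed mount point).
    man->SetDefaultStorage("local:///path/to/HCDB");
    man->SetRun(runNumber);

    // Fetch a calibration object for the current run (illustrative path).
    AliCDBEntry* entry = man->Get("TPC/Calib/Parameters");
    if (entry) {
        TObject* calib = entry->GetObject();
        // ... hand the object to the detector algorithm ...
        (void)calib;
    }
}
```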

9 OFFLINE → HLT interface (Taxi)
[Diagram: the external Taxi fetches from the OFFLINE OCDB into the Taxi-HCDB on the portal-taxi node; before each run the ECS-proxy passes the current run number to the TaskManager, which updates the DA_HCDB copies on the analysis cluster nodes, where the DAs read them through the AliRoot CDB access classes. DA = Detector Algorithm; framework components and detector responsibility are distinguished.]

10 DCS → HLT interface (Pendolino)
Purpose:
  Requests current run conditions (like temperatures, voltages, …) permanently measured by the Detector Control System (DCS)
  Frequent requests for "current" values (three different time intervals)
  Provision of the latest available values to the Detector Algorithms (DAs)
  Preparation / enveloping of the data → transparent access to the data for the DAs (online and offline); see the sketch below
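A minimal sketch of the Pendolino polling idea, assuming three fetcher instances with different periods; all function names here are illustrative, not the real HLT framework API.

```cpp
// One Pendolino fetcher: pulls "current" DCS values at a fixed period
// and publishes them towards the DA nodes' HCDB copies.
#include <chrono>
#include <thread>

struct Pendolino {
    std::chrono::seconds period;  // e.g. fast / medium / slow interval
    void FetchAndEnvelope() {
        // 1. query the DCS archive for values newer than the last fetch
        // 2. wrap them into ROOT objects ("enveloping")
        // 3. publish the update to the DA nodes' HCDB copies
    }
};

void RunPendolino(Pendolino p, volatile bool& runActive)
{
    while (runActive) {              // Pendolinos only run during a run
        p.FetchAndEnvelope();
        std::this_thread::sleep_for(p.period);
    }
}
// Three instances with different periods would cover the "three different
// time intervals" mentioned above.
```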

11 DCS → HLT interface (Pendolino)
[Diagram: the Pendolino on the portal-Pendolino node frequently queries the DCS Archive DB (fed by PVSS) during the run, prepares / envelopes the values into the Pendolino-HCDB, from which the DA_HCDB copies on the analysis cluster nodes are updated. DA = Detector Algorithm; framework components and detector responsibility are distinguished.]

12 HLT → DCS interface (FED)
FED portal:
  Collects data produced inside the HLT during the run
  Writes data back to the DCS (TPC drift velocity, …)
  Uses the Front-End-Device (FED) API, common among all detectors in ALICE
  PVSS panels integrate the data into the DCS; see the sketch below
[Diagram: DAs on the analysis cluster nodes feed the FED-portal, which talks via the FED API to PVSS and the DCS Archive DB. DA = Detector Algorithm.]
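A sketch of the write-back direction under stated assumptions: the function name below is invented for illustration; the real FED API is the common ALICE front-end-device interface, whose exact signatures are not given on the slide.

```cpp
// Hand a processed value (e.g. the TPC drift velocity) to the DCS side,
// where a PVSS panel picks it up. fedPublishValue is a hypothetical
// stand-in for the corresponding FED API call.
extern "C" int fedPublishValue(const char* service,
                               const void* data, unsigned size);

void PublishDriftVelocity(double vDrift)
{
    // Service name is illustrative; each detector defines its own channels.
    fedPublishValue("TPC/DriftVelocity", &vDrift, sizeof(vDrift));
}
```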

13 HLT → OFFLINE interface (Shuttle)
Purpose:
  Collects online calculated calibration settings from the DAs inside the HLT
  Calibration objects are stored in a File EXchange Server (FXS); meta data are stored in a MySQL DB (run number, detector, timestamp, …); see the sketch below
  Feeds new calibration data back to the OCDB
[Diagram: the DAs on the cluster nodes fill the portal-shuttle (FXS-Subscriber); the TaskManager notifies the ECS-proxy "collecting finished"; the OFFLINE Shuttle, synchronized via the ECS, fetches from the FXS and the MySQL DB and feeds the OCDB.]
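A minimal sketch of the record the Shuttle portal keeps per calibration object; the field and table names are assumptions derived from the bullet list above, not the actual schema.

```cpp
// Per-object bookkeeping: the object itself lives in the FXS, while this
// metadata record in MySQL lets the Offline Shuttle locate it.
#include <string>

struct FxsMetadata {
    int         runNumber;  // run the object was produced in
    std::string detector;   // e.g. "TPC"
    std::string fileId;     // identifier of the file in the FXS
    long        timestamp;  // production time
};

// An illustrative SQL statement the portal might issue:
//   INSERT INTO fxs_meta (run, detector, file_id, ts)
//     VALUES (12345, 'TPC', 'tpc_vdrift_12345', 1188691200);
```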

14 HLT → AliEve interface (ALICE Event Visualization)
  Integrates the HLT into the existing visualization infrastructure of AliEve (M. Tadel)
  Connection to the HLT from anywhere within the CERN network
  ONE visualization infrastructure for all detectors
  Results are shipped from the DAs via HOMER to AliEve for visualization; see the sketch below
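A sketch of a HOMER client in the role AliEve plays: connect to an HLT portal node and pull reconstructed event blocks for display. The class and method names follow the AliRoot HOMER reader as remembered, and the host name and port are placeholders; treat the whole block as an assumption-laden illustration.

```cpp
// Pull event blocks from the HLT over HOMER for online monitoring.
#include "AliHLTHOMERReader.h"

void MonitorEvents()
{
    // Any machine inside the CERN network can connect to an HLT portal
    // node (host and port below are invented placeholders).
    AliHLTHOMERReader reader("hlt-portal.example.cern.ch", 49152);

    while (reader.ReadNextEvent() == 0) {      // 0 = success
        for (unsigned long i = 0; i < reader.GetBlockCnt(); ++i) {
            const void* block = reader.GetBlockData(i);
            // ... hand the block to the AliEve display code ...
            (void)block;
        }
    }
}
```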

15 Synchronization timeline
[Diagram: sequence of the HLT phases driven by the ECS-proxy (Init, SoR, EoR, Complete). Before the run the Offline Taxi fetches asynchronously; at Start-of-Run the cluster is initialized and the HCDB distributed; during the run FEE data stream in, the DCS-Pendolino, FED, AliEve-HOMER and the outstream to DAQ are active while events are analyzed and calibration data calculated; after End-of-Run data are collected for the Shuttle. Contact with the various other systems in ALICE is shown per phase: Before Run, At Start-of-Run, During Run, After End-of-Run.]

16 Summary & Outlook
The HLT consists of a large computing farm matching the structure of the ALICE detectors. Interfaces with the other systems of ALICE (online and offline) enable data exchange with the connected parts. The HLT allows for online detector performance and physics monitoring as well as online calibration. With the presented mechanism the HLT will be able to provide all required conditions and calibration settings to the analysis software for first physics in ALICE in 2008.

17 ALICE HLT Collaboration
Sebastian Robert Bablok, Oystein Djuvsland, Kalliopi Kanaki, Joakim Nystrand, Matthias Richter, Dieter Roehrich, Kyrre Skjerdal, Kjetil Ullaland, Gaute Ovrebekk, Dag Larsen, Johan Alme (Department of Physics and Technology, University of Bergen, Norway) Torsten Alt, Volker Lindenstruth, Timm M. Steinbeck, Jochen Thaeder, Udo Kebschull, Stefan Boettger, Sebastian Kalcher, Camilo Lara, Ralf Panse (Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg, Germany) Harald Appelshaeuser, Mateusz Ploskon (Institute for Nuclear Physics, University of Frankfurt, Germany) Haavard Helstrup, Kirstin F. Hetland, Oystein Haaland, Ketil Roed, Torstein Thingnaes (Faculty of Engineering, Bergen University College, Norway) Kenneth Aamodt, Per Thomas Hille, Gunnar Lovhoiden, Bernhard Skaali, Trine Tveter (Department of Physics, University of Oslo, Norway) Indranil Das, Sukalyan Chattopadhyay (Saha Institute of Nuclear Physics, Kolkata, India) Bruce Becker, Corrado Cicalo, Davide Marras, Sabyasachi Siddhanta (I.N.F.N. Sezione di Cagliari, Cittadella Universitaria, Cagliari, Italy) Jean Cleymans, Artur Szostak, Roger Fearick, Gareth de Vaux, Zeblon Vilakazi (UCT-CERN, Department of Physics, University of Cape Town, South Africa)

18 Backup slides

19 HLT interfaces
ECS:
  Controls the HLT via well defined states (SMI++)
  Provides general experiment settings (type of collision, run number, …)
DCS:
  Provides the HLT with current detector parameters (voltages, temperatures, …) → Pendolino (DCS → HLT)
  Provides the DCS with processed data from the HLT (TPC drift velocity, …) → FED-portal (HLT → DCS)
OFFLINE:
  Interface to fetch calibration settings from the OCDB → Taxi (OFFLINE → HLT)
  Feeds new calibration data back to the OCDB → Shuttle-portal (HLT → OFFLINE)
HOMER: HLT interface to AliEve for online (event) monitoring
FEE: stream-in of raw event data via H-RORCs
DAQ: stream-out of processed data and trigger decisions to the DAQ-LDCs

20 Synchronization timeline
Before Run (continuously):
  Taxi requests the latest available calibration settings from the OCDB (asynchronously to the actual run)
At Start-of-Run (SoR) - Initial Setting:
  ECS → HLT: initialization of the HLT cluster (run number, beam type, trigger classes, …)
  Fixing the HCDB for the run; distribution of the HCDB to the DA nodes
[Timeline diagram as on slide 15.]

21 Synchronization timeline
During Run (after SoR):
  DCS → HLT: current environment/condition values are fetched via the Pendolino; update of the HCDB on the DA nodes with the new DCS data
  HLT → DCS: processed, DCS-relevant values are written back via the FED-portal (e.g. TPC drift velocity)
  HLT → DAQ: analyzed events go to permanent storage (includes a certain time period after the End-of-Run)
  HLT → AliEve: online event monitoring in the ALICE Control Room
After End-of-Run (EoR):
  HLT → OFFLINE: collection of freshly produced calibration objects in the FXS of the Shuttle portal; the OFFLINE Shuttle fetches these objects from the HLT (synchronization via the ECS); see the sketch below
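The per-run sequence above, condensed into pseudocode; every call name here is invented for illustration and does not correspond to a real HLT framework API.

```cpp
// Condensed sketch of one run cycle as described on slides 20 and 21.
void RunCycle(int runNumber)
{
    TaxiUpdateHCDB();           // before run: asynchronous OCDB fetch (Taxi)
    DistributeHCDB(runNumber);  // SoR: fix HCDB version, copy to DA nodes

    while (RunIsActive()) {
        PendolinoUpdate();      // DCS -> HLT: fresh condition values
        ProcessEvents();        // analyze events, produce calibration data
        FedPublish();           // HLT -> DCS: e.g. TPC drift velocity
        HomerServe();           // HLT -> AliEve: online monitoring
    }

    FillFxs(runNumber);         // EoR: collect calibration objects in the FXS
    NotifyEcsComplete();        // ECS lets the Offline Shuttle fetch them
}
```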

22 ECS Proxy state machine
[Diagram: states and transition commands of the ECS interface. OFF -INITIALIZE-> INITIALIZING <<intermediate state>> (preparing of the HCDB for the DAs) -implicit transition-> INITIALIZED -CONFIGURE + params-> CONFIGURING <<intermediate state>> -implicit transition-> CONFIGURED -ENGAGE-> ENGAGING <<intermediate state>> (Pendolino fetches DCS data for the DAs) -implicit transition-> READY -START-> RUNNING (event processing) -STOP-> COMPLETING (collecting data for the Shuttle portal; Offline Shuttle can fetch data) -implicit transition-> READY. Reverse path: READY -DISENGAGE-> DISENGAGING <<intermediate state>> -implicit transition-> CONFIGURED -RESET-> INITIALIZED -SHUTDOWN-> DEINITIALIZING <<intermediate state>> -implicit transition-> OFF.]

23 TAXI and Pendolino data paths
[Diagram: the external TAXI fetches from the OCDB via AliEn into the TAXI-HCDB on AFS; the TAXI-HCDB (read-only) is released before SoR, and the Init-HCDB script, triggered by the ECS-proxy via the TaskManager, sets links from the TAXI-HCDB to the HCDB (read-only) used by the DAs. During the run the Pendolino-portal pulls DCS values from the DCS Archive DB via Amanda, a prediction processor prepares them, and the HCDB is released after each update; PubSub is notified of each new release and ships the update info through the chain. DA = Detector Algorithm.]

24 Introducing the HLT system
[Photo: HLT counting room in the ALICE pit.]

25 HLT overview
ALICE data rates (example TPC), central Pb+Pb collisions:
  Event rates: ~200 Hz (past/future protected)
  Event sizes: ~75 MByte (after zero-suppression)
  Data rates: ~15 GByte/s
  The TPC data rate alone exceeds by far the total DAQ bandwidth of 1.25 GByte/s
  → Event selection based on a software trigger, plus efficient data compression
  The TPC is the largest data source, with 557,568 channels, 512 timebins and a 10 bit ADC value.
[Diagram: Detectors → DAQ / HLT → Mass storage.]
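Worked check of the quoted rate: ~200 events/s × ~75 MByte/event ≈ 15 GByte/s from the TPC alone, i.e. roughly twelve times the 1.25 GByte/s total DAQ bandwidth, which is why online event selection and compression are needed.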

26 Architecture

27 Architecture
[Diagram: raw event data flow from the Trigger and detectors via DAQ into the HLT; analyzed events / trigger decisions go to Mass Storage; ECS and DCS are connected.]

