Slide 1/22 – ALICE High Level Trigger Interfaces and Data Organisation

CHEP 2006 – Mumbai

Sebastian Robert Bablok, Matthias Richter, Dieter Roehrich, Kjetil Ullaland (Department of Physics and Technology, University of Bergen, Norway)
Torsten Alt, Volker Lindenstruth, Timm M. Steinbeck, Heinz Tilsner (Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg, Germany)
Harald Appelshaeuser (Institute for Nuclear Physics, University of Frankfurt, Germany)
Haavard Helstrup (Faculty of Engineering, Bergen University College, Norway)
Bernhard Skaali, Thomas Vik (Department of Physics, University of Oslo, Norway)
Cvetan Cheshkov (CERN, Geneva, Switzerland)

for the ALICE Collaboration

Slide 2/22 – Outline

- HLT Overview
- HLT - ECS Interface
- HLT - DAQ Data Format
- HLT - GRID Interface
- Summary & Outlook

Slide 4/22 – HLT Overview

Purpose:
- online event reconstruction and analysis
- provides trigger decisions to DAQ
- monitoring and performance analysis of the ALICE detectors
- use of its spare computing power (HLT cluster) for the GRID

Architecture:
- ca. 500-node cluster (similar to a Tier 2 centre)
- dual-SMP boards with dual-core CPUs (up to 2000 processing units in total)
- Gigabit Ethernet
- includes up to 300 FEP (Front-End Processing) nodes receiving data from the detectors
- dynamic software data transport framework (PubSub and TaskManager system)

Slide 5/22 – HLT Overview

[Architecture diagram: up to 300 FEP nodes receiving detector data via DDL]

Slide 6/22 – HLT Overview

Interfaces:
- raw data input copied via D-RORC from DAQ
- output of processed data and trigger decisions to the DAQ LDCs
- ECS interface for integration into the ALICE control system
- DCS interface to receive configuration data
- AliRoot interface to analysis code and for online monitoring
- AliEn interface to use idle nodes for GRID computing
- internal node interfaces using the PubSub and TaskManager system

[Context diagram: HLT cluster connected to Detector, Event Building / DAQ, Trigger (L2 Accept / HLT Accept), GRID, DCS, Online/Offline Software, ECS, Monitoring and Logging]

Slide 8/22 – HLT - ECS Interface

Control of the HLT through the Experiment Control System (ECS) for initialisation and runtime operations.

Interface:
- represented as well-defined states and transitions managed by Finite State Machines (FSM)
- ECS controls the HLT via the HLT proxy
- implemented in SMI++ (external communication via DIM)
- internal communication uses the interface library of the TaskManager control system
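
The slides do not show any code; as an illustration, here is a minimal sketch of how a DIM command channel such as the one the HLT proxy exposes to ECS could be set up in C++. Only the DimCommand/DimServer classes belong to the actual DIM API; the service name HLT/PROXY/CONTROL and the command strings are assumptions.

```cpp
// Minimal sketch of a DIM command receiver, as the HLT proxy might
// expose one to ECS. Service and command names are hypothetical.
#include <dis.hxx>   // DIM server-side classes (DimCommand, DimServer)
#include <cstdio>
#include <cstring>
#include <unistd.h>

class ProxyControl : public DimCommand {
public:
    // "C" declares the command payload as a character string
    ProxyControl() : DimCommand("HLT/PROXY/CONTROL", "C") {}

    // Called by DIM whenever ECS sends a command to this service
    void commandHandler() override {
        const char* cmd = getString();
        if (std::strcmp(cmd, "initialize") == 0) {
            std::printf("entering RAMPING_UP_&_INITIALIZING\n");
        } else if (std::strcmp(cmd, "configure") == 0) {
            std::printf("entering CONFIGURING\n");
        }
        // ... engage, start, stop, disengage, reset, shutdown
    }
};

int main() {
    ProxyControl control;            // register the command service
    DimServer::start("HLT_PROXY");   // publish under this DIM server name
    for (;;) pause();                // DIM dispatches commands in background
}
```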

Slide 9/22 – HLT - ECS Interface

HLT proxy states and transitions:
- subdivided into two types of states:
  - "stable" states (OFF, INITIALIZED, CONFIGURED, READY, RUNNING, ERROR): transitions only via well-defined commands
  - intermediate states (RAMPING_UP_&_INITIALIZING, CONFIGURING, ENGAGING, DISENGAGING, RAMPING_DOWN): longer transition procedures; the transition out is implicit once the procedure finishes

The configuration is passed as parameters of the configure command (a parsing sketch follows below):
- mode of the HLT:
  - "A" (no HLT): no output to DAQ
  - "B" (data processing only): processed event data to DAQ, HLT trigger decisions ignored by DAQ
  - "C" (full HLT functionality): HLT trigger and pre-processed event data provided to DAQ
- output data format: version number of the data format sent back to DAQ
- trigger classes: tag identifying the desired physics triggers
- list of DDLs: on which H-RORCs receive event data for the HLT
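
As an illustration of what the configure command could carry, here is a hedged C++ sketch. The key=value wire format, the parameter names, and the example values are assumptions for illustration, not the actual ECS syntax.

```cpp
// Hypothetical container for the configure-command parameters listed
// above; the key=value encoding is an assumption, not the ECS format.
#include <sstream>
#include <string>
#include <vector>

struct HltConfiguration {
    char mode = 'A';                          // 'A' no HLT, 'B' processing only, 'C' full HLT
    int  outputFormatVersion = 0;             // format version of data sent back to DAQ
    std::vector<std::string> triggerClasses;  // tags of the desired physics triggers
    std::vector<int> ddlList;                 // DDLs on which H-RORCs receive event data
};

// Parse a string such as
//   "mode=C format=1 triggers=dimuon,dielectron ddls=512,513,514"
HltConfiguration parseConfigure(const std::string& params) {
    HltConfiguration cfg;
    std::istringstream in(params);
    std::string token;
    while (in >> token) {
        auto eq = token.find('=');
        if (eq == std::string::npos) continue;
        std::string key   = token.substr(0, eq);
        std::string value = token.substr(eq + 1);
        std::istringstream list(value);
        std::string item;
        if (key == "mode" && !value.empty())  cfg.mode = value[0];
        else if (key == "format")             cfg.outputFormatVersion = std::stoi(value);
        else if (key == "triggers")
            while (std::getline(list, item, ',')) cfg.triggerClasses.push_back(item);
        else if (key == "ddls")
            while (std::getline(list, item, ',')) cfg.ddlList.push_back(std::stoi(item));
    }
    return cfg;
}
```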

Slide 10/22 – HLT proxy - States

[State diagram: the commands initialize, configure, engage, start, stop, disengage, reset and shutdown move the proxy between the stable states (OFF, INITIALIZED, CONFIGURED, READY, RUNNING, ERROR); each intermediate state (RAMPING_UP_&_INITIALIZING, CONFIGURING, ENGAGING, DISENGAGING, RAMPING_DOWN) ends in an implicit transition to the next stable state.]
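
A compact way to read the diagram is as a transition table. Below is a minimal C++ sketch; the slides do not spell out the error and reset edges, so those targets are assumptions.

```cpp
// Minimal sketch of the HLT proxy state machine from the diagram above.
// The ERROR and reset edges are assumptions; the slides omit them.
#include <string>

enum class State { OFF, RAMPING_UP, INITIALIZED, CONFIGURING, CONFIGURED,
                   ENGAGING, READY, RUNNING, DISENGAGING, RAMPING_DOWN, ERROR };

// Explicit transitions, triggered by well-defined ECS commands
State onCommand(State s, const std::string& cmd) {
    if (cmd == "initialize" && s == State::OFF)         return State::RAMPING_UP;
    if (cmd == "configure"  && s == State::INITIALIZED) return State::CONFIGURING;
    if (cmd == "engage"     && s == State::CONFIGURED)  return State::ENGAGING;
    if (cmd == "start"      && s == State::READY)       return State::RUNNING;
    if (cmd == "stop"       && s == State::RUNNING)     return State::READY;
    if (cmd == "disengage"  && s == State::READY)       return State::DISENGAGING;
    if (cmd == "reset" && (s == State::CONFIGURED || s == State::ERROR))
        return State::INITIALIZED;                      // assumed reset target
    if (cmd == "shutdown")                              return State::RAMPING_DOWN;
    return State::ERROR;                                // assumed: bad command
}

// Intermediate states finish on their own ("implicit transition")
State onProcedureDone(State s) {
    switch (s) {
        case State::RAMPING_UP:   return State::INITIALIZED;
        case State::CONFIGURING:  return State::CONFIGURED;
        case State::ENGAGING:     return State::READY;
        case State::DISENGAGING:  return State::CONFIGURED;
        case State::RAMPING_DOWN: return State::OFF;
        default:                  return s;
    }
}
```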

Slide 11/22 – HLT - ECS Interface

Implementation:
- SMI++ (common among all ALICE systems)
- single instance, not partitioned
- the HLT proxy is only active during the (de-)initialisation and configuration phases; in the RUNNING state the cluster processes event data autonomously

Internal communication:
- interface library of the TaskManager (TM) control system
- the HLT proxy contacts multiple master TMs of the cluster (majority vote for fault tolerance, avoiding a single point of failure)
- master TMs are networked with all slave TMs
- slave TMs control the PubSub system (RORC publisher and others)
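
How the majority vote over several master TaskManagers could look is sketched below; sendToMaster() is a hypothetical stand-in for the TaskManager interface library, which the slides do not show.

```cpp
// Hedged sketch of fan-out with majority vote across master TaskManagers.
// sendToMaster() is hypothetical; it represents the TM interface library.
#include <cstddef>
#include <string>
#include <vector>

bool sendToMaster(const std::string& host, const std::string& cmd);  // hypothetical

// Forward a command to every master TM; succeed if a strict majority
// acknowledges, so one failed master does not stall the proxy.
bool broadcastCommand(const std::vector<std::string>& masters,
                      const std::string& cmd) {
    std::size_t acks = 0;
    for (const auto& host : masters)
        if (sendToMaster(host, cmd)) ++acks;
    return acks * 2 > masters.size();
}
```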

Slide 14/22 – HLT - DAQ Data Format

HLT output to DAQ:
- DAQ handles HLT output like data from any other detector (except for the trigger decision).
- HLT output is received by the DDLs on D-RORCs (DAQ ReadOut Receiver Cards) located on the LDCs (Local Data Concentrators).

Slide 15/22 – HLT - DAQ Data Format

Output data format:
- trigger decision:
  - list of DDLs to read (detector DDLs [e.g. RoI] and HLT DDLs)
  - Event Summary Data (ESD) in case of an accept
  - candidate ID (ID from the trigger class used)
- preprocessed event data [optional]:
  - (partially) reconstructed events
  - compressed raw data
  - preprocessed detector-specific data (e.g. Photon Spectrometer pulse-shape analysis)
- Start-Of-Data (SOD) and End-Of-Data (EOD)
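
A possible in-memory layout for the trigger-decision part is shown below, purely for illustration; the field names and types are assumptions, not the actual HLT-DAQ wire format, which is versioned via the output-format parameter of the configure command.

```cpp
// Illustrative layout of an HLT trigger decision as described above.
// Names and types are assumptions; the real format is version-controlled.
#include <cstdint>
#include <vector>

struct HltTriggerDecision {
    bool accept = false;                 // HLT accept / reject
    std::uint32_t candidateId = 0;       // ID of the trigger class that fired
    std::vector<std::uint32_t> ddlsToRead;  // detector DDLs (e.g. RoI) + HLT DDLs
    std::vector<std::uint8_t>  esd;      // Event Summary Data, only on accept
};
```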

Slide 17/22 – HLT - GRID Interface

The HLT cluster is similar to a Tier 2 centre: up to 2000 processing units.

Cluster usage:
- HLT processing (ALICE data-taking periods)
- GRID job execution (non-data-taking periods and unused nodes during data taking)
- priority lies on the HLT functionality

The GRID middleware in ALICE is AliEn (Alice Environment):
- the HLT cluster offers Computing Element (CE) functionality to AliEn
- due to the priority of HLT tasks, no (permanent) Storage Element (SE)

Slide 18/22 – HLT - GRID Interface

A dedicated node acts as the GRID portal node:
- has AliEn installed
- contacts the AliEn master
- takes care of certificates and authentication
- administers the GRID nodes:
  - includes unused nodes into the GRID
  - informs the AliEn master about the estimated Time-To-Live (TTL) of included nodes
  - excludes nodes required by the HLT from the GRID
- uses Condor as the batch system underneath (Condor is also installed on each node)
- GRID applications are shared inside the cluster via the network

Slide 19/22 – HLT - GRID Interface

Free-node administration (1):
- nodes unused by the HLT are collected in a "buffered" pool
- the pool consists of three regions divided by two thresholds (a sketch of this logic follows slide 20):
  - GRID usage: nodes above the soft-reserve threshold are offered for GRID jobs
  - below the soft reserve: no more nodes are offered to the GRID; nodes freed by the HLT are only collected into the reserve; nodes finishing GRID jobs are recollected into the reserve ("soft" stop); nodes needed by the HLT are taken from the pool
  - below the hard reserve: the number of reserve nodes is below the safety minimum; GRID nodes are immediately recollected into the reserve, cancelling their GRID jobs ("hard" stop), until the hard-reserve threshold is exceeded again

[Diagram: pool with soft-reserve and hard-reserve thresholds]

Slide 20/22 – HLT - GRID Interface

Free-node administration (2):
- aim of the hard reserve: spare nodes immediately available for the HLT
- aim of the soft reserve: avoiding "hard" stops as far as possible
- a more sophisticated algorithm will estimate the TTL of free nodes (it will evolve with the experience gathered from operating the HLT cluster)
- non-data-taking periods are, in this context, just special cases of the normal pool administration
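
The two-threshold policy from slides 19 and 20, condensed into a hedged C++ sketch; the threshold values and the function and action names are assumptions made for illustration.

```cpp
// Hedged sketch of the two-threshold pool policy from slides 19-20.
// Threshold values and identifiers are assumptions.
#include <cstddef>

constexpr std::size_t kHardReserve = 10;  // assumed safety minimum
constexpr std::size_t kSoftReserve = 30;  // assumed soft threshold

enum class PoolAction {
    OfferToGrid,     // above soft reserve: nodes may run GRID jobs
    HoldInReserve,   // between thresholds: only collect nodes ("soft" stop)
    RecallFromGrid   // below hard reserve: cancel GRID jobs ("hard" stop)
};

// Decide what to do given the current number of free (non-GRID) nodes
// held in the reserve.
PoolAction administerPool(std::size_t freeNodes) {
    if (freeNodes < kHardReserve) return PoolAction::RecallFromGrid;
    if (freeNodes < kSoftReserve) return PoolAction::HoldInReserve;
    return PoolAction::OfferToGrid;
}
```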

Slide 22/22 – Summary & Outlook

HLT:
- will consist of a large PC cluster (similar to a Tier 2 centre), internally managed by the TaskManager control system
- the overall system is configured and controlled by ECS:
  - via well-defined states and transitions (commands)
  - interface implemented in SMI++
  - internal communication to multiple master TaskManagers via its own interface library
- will provide (pre-)processed data and trigger decisions to DAQ
- planned to offer unused nodes for GRID computing:
  - organised in a "buffered" pool (three regions divided by two thresholds)
  - interface to AliEn, with Condor as the batch system
  - utilisation of significant computing power for multiple applications

Slide 23/22 (backup) – HLT proxy – States [diagram]

Slide 24/22 (backup) – HLT - GRID Interface [diagram]