HLT – Interfaces (ECS, DCS, Offline) (ALICE week – HLT workshop, 06-03-2007) S. Bablok (IFT, University of Bergen)


TOC
– HLT interfaces
– HLT ↔ ECS
– HLT ↔ DCS
– HLT ↔ OFFLINE
– HLT calibration (use case)
– Additional slides

HLT interfaces
ECS:
– Controls the HLT via well-defined states (SMI)
– Provides general experiment settings (type of collision, run number, …)
DCS:
– Provides the HLT with current detector parameters (voltages, temperatures, …) → Pendolino
– Provides DCS with processed data from the HLT (TPC drift velocity, …) → FED portal (Front-End-Device portal)
OFFLINE:
– Interface to fetch data from the OCDB (OFFLINE → HLT)
– Provides OFFLINE with calculated calibration data (HLT → OFFLINE)

[Diagram: data flow in the HLT — the ECS proxy receives commands from ECS; the DCS portal (DIM subscriber) and the Pendolino (with its Pendolino-PubSub data processor) connect to DCS (PVSS, Archive DB); the Taxi fills the local cache (HCDB, read via the AliRoot CDB access classes) from the OCDB (Conditions) on AliEn; the Shuttle portal (FES, MySQL) serves the OFFLINE Shuttle; DAs and data sinks (subscribers) exchange data over the PubSub framework. A legend distinguishes detector responsibility, framework components and interfaces.]

HLT ↔ ECS interface
State transition commands from ECS:
– INITIALIZE, CONFIGURE (+ PARAMS), ENGAGE, START, …
– Mapped to TaskManager states
CONFIGURE parameters (a parsing sketch follows below):
– HLT_MODE: the mode in which the HLT shall run (A, B or C)
– BEAM_TYPE: pp (proton-proton) or AA (heavy ion)
– RUN_NUMBER: the run number of the current run
– DATA_FORMAT_VERSION: the expected output data format version
– HLT_TRIGGER_CODE: ID defining the current HLT trigger classes
– CTP_TRIGGER_CLASS: the trigger classes in the Central Trigger Processor
– HLT_IN_DDL_LIST: list of DDLs on which the HLT can expect event data in the coming run; the structure will look like: …:…, …:…, …
– HLT_OUT_DDL_LIST: list of DDLs on which the HLT can send data to DAQ
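A minimal sketch of how such a CONFIGURE parameter string could be parsed on the HLT side, assuming a simple "KEY=VALUE" wire format (an assumption; the struct and function names are hypothetical, only the parameter keys come from this slide):

#include <sstream>
#include <string>

// Hypothetical container for the CONFIGURE parameters listed above.
struct HltConfigureParams {
    std::string hltMode;            // HLT_MODE: A, B or C
    std::string beamType;           // BEAM_TYPE: pp or AA
    unsigned long runNumber = 0;    // RUN_NUMBER
    std::string dataFormatVersion;  // DATA_FORMAT_VERSION
    std::string hltTriggerCode;     // HLT_TRIGGER_CODE
    std::string ctpTriggerClass;    // CTP_TRIGGER_CLASS
    std::string hltInDdlList;       // HLT_IN_DDL_LIST (kept opaque here)
    std::string hltOutDdlList;      // HLT_OUT_DDL_LIST (kept opaque here)
};

// Parses "KEY=VALUE KEY=VALUE ..." as it might accompany a CONFIGURE
// transition; unknown or malformed tokens are ignored.
HltConfigureParams parseConfigureCommand(const std::string& params) {
    HltConfigureParams cfg;
    std::istringstream in(params);
    std::string token;
    while (in >> token) {
        std::string::size_type eq = token.find('=');
        if (eq == std::string::npos) continue;
        std::string key = token.substr(0, eq);
        std::string val = token.substr(eq + 1);
        if      (key == "HLT_MODE")            cfg.hltMode = val;
        else if (key == "BEAM_TYPE")           cfg.beamType = val;
        else if (key == "RUN_NUMBER")          cfg.runNumber = std::stoul(val);
        else if (key == "DATA_FORMAT_VERSION") cfg.dataFormatVersion = val;
        else if (key == "HLT_TRIGGER_CODE")    cfg.hltTriggerCode = val;
        else if (key == "CTP_TRIGGER_CLASS")   cfg.ctpTriggerClass = val;
        else if (key == "HLT_IN_DDL_LIST")     cfg.hltInDdlList = val;
        else if (key == "HLT_OUT_DDL_LIST")    cfg.hltOutDdlList = val;
    }
    return cfg;
}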

[State diagram: on the ECS side the HLT moves through OFF, INITIALIZED, CONFIGURED, READY, RUNNING and COMPLETING, driven by INITIALIZE, CONFIGURE + params, ENGAGE, DISENGAGE, START, STOP, RESET and SHUTDOWN, with transient INITIALIZING, DEINITIALIZING, CONFIGURING, ENGAGING and DISENGAGING states and implicit transitions in between. These map onto the internal TaskManager states slaves_dead/off, processes_dead, local_ready, ready and running/busy, driven by start_slaves, kill_slaves, start + params, stop, connect, disconnect, start_run and stop_run.]

HLT ↔ DCS interface
FED portal:
– DIM channels (services published on the HLT side)
– Implements the FED API partially
– Subscriber component
– PVSS panels on the DCS side integrate the data into the DCS system
Pendolino:
– Contacts the DCS AMANDA server
– Fetches the current running conditions
– Publisher component
– Three Pendolinos, each with a different frequency (fast, medium, slow)

[Diagram: HLT components interfacing DCS — the DCS portal (DIM subscriber) implements the PubSub–FED API (DIM) interface towards the DCS FED API, PVSS and Archive DB; the Pendolino retrieves archived data and hands it to the Pendolino-PubSub data processor via the PubSub–Pendolino (AliRoot) interface. A legend distinguishes detector responsibility, framework components and interfaces.]

HLT → DCS dataflow
Purpose:
– Storing DCS-related data in the DCS Archive DB (HLT cluster monitoring [HLT]; TPC drift velocity, … [detector-specific])
HLT side:
– One node provides a special PubSub framework component implementing (partly) the FED API (the DCS portal node)
DCS side:
– Different PVSS panels: a dedicated panel for the HLT cluster monitoring
– Detector-specific processed data is integrated into the PVSS panels of the according detector

HLT → DCS dataflow
Purpose:
– Storing DCS-related data in the DCS Archive DB (HLT cluster monitoring [HLT]; TPC drift velocity, … [detector-specific])
The HLT cluster has a dedicated DCS portal node:
– The DCS portal acts as a DIM server
– DIM channels to the detectors' PVSS panels and the HLT PVSS panels (DIM clients)
– Implements a FedServer (DIM server) [partly]: the "ConfigureFeeCom" command channel (setting the log level) and the service channels (single and grouped service channels, message channel, ACK channel)
– All located in one DIM-DNS domain
Two DCS PCs for the HLT PVSS panels:
– Worker node: PVSS panels receiving the monitored cluster data; this node also connects to the Pendolino (connection in the opposite direction, see below)
– Operator node: PVSS panels to watch the data (counting room and outside world)
HLT-cluster-internal data is transported via the PubSub system (a minimal DIM publishing sketch follows below).
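A minimal sketch of publishing one value as a DIM service, as the DCS portal node does for the PVSS-side DIM clients; the service and server names and the readLatestEstimate() helper are hypothetical, only the use of the DIM C++ server API (dis.hxx) reflects the slides:

#include <dis.hxx>   // DIM server-side C++ API
#include <unistd.h>  // sleep()

// Hypothetical stand-in for the component computing the value.
static float readLatestEstimate() { return 2.65f; /* dummy value */ }

int main() {
    float driftVelocity = readLatestEstimate();

    // Publish the value under a (made-up) service name; PVSS-side DIM
    // clients subscribe to it by this name.
    DimService service("HLT/TPC/DRIFT_VELOCITY", driftVelocity);
    DimServer::start("HLT_DCS_PORTAL");  // register with the DIM DNS

    for (;;) {                    // sketch: update forever
        driftVelocity = readLatestEstimate();
        service.updateService();  // push the new value to subscribers
        sleep(10);
    }
    return 0;
}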

[Diagram: HLT → DCS dataflow (hardware and software components) — within one DIM-DNS domain the HLT-DCS portal node runs the FEDServer (DIM) with channels for cluster monitoring, online calibration, etc.; connections inside the HLT cluster are based on the PubSub framework. Ordinary detector DCS nodes (TPC, TRD, HLT) run FEDClients (PVSS) and connect to the HLT portal in addition to their normal tasks; the services of one detector are offered in single and/or grouped service channels and can be requested by the PVSS of the according detector via DIM (common detector–DCS integration over PVSS). The HLT-DCS nodes in the DCS counting room are a worker node (PVSS panels receiving the monitored data) and an operator node (PVSS panels to watch the data, remotely from the counting room and the outside world).]

DCS → HLT dataflow
The HLT needs DCS data (temperatures, current configuration, …) for:
– Online analysis
– Calibration data processing
The required data can be acquired from the DCS Archive DB:
– Retrieval via AMANDA
– The Pendolinos request data at regular time intervals
– About three Pendolinos with different frequencies are foreseen (three different frequencies, requesting different types of data); a polling sketch follows below
HLT-internal data is distributed via the PubSub framework.
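An illustrative sketch (not from the slides) of three Pendolino-like pollers running at the fast/normal/slow intervals quoted two slides further on; fetchFromAmanda() is a hypothetical stand-in for the actual AMANDA request:

#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stand-in for an AMANDA request for one group of channels.
static void fetchFromAmanda(const std::string& group) {
    std::cout << "fetching DCS values: " << group << std::endl;
}

// One poller: fetch, then sleep for its period, forever (sketch only).
static void pendolino(const std::string& group, std::chrono::seconds period) {
    for (;;) {
        fetchFromAmanda(group);
        std::this_thread::sleep_for(period);
    }
}

int main() {
    using std::chrono::seconds;
    std::vector<std::thread> pollers;
    pollers.emplace_back(pendolino, "fast-changing values",   seconds(30));   // 10 s - 1 min
    pollers.emplace_back(pendolino, "medium-changing values", seconds(120));  // 1 - 5 min
    pollers.emplace_back(pendolino, "slow-changing values",   seconds(600));  // > 5 min
    for (std::thread& t : pollers) t.join();
    return 0;
}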

[Diagram: DCS → HLT dataflow (hardware and software components) — the Pendolinos request data via AMANDA (PVSS DataManager) from the DCS Archive; the AMANDA server for the HLT runs on a worker node (wn), and the data is distributed to the HLT worker nodes over PubSub connections inside the cluster.]

DCS → HLT dataflow
Pendolino details:
Three different frequencies:
– fast Pendolino: 10 s - 1 min
– normal Pendolino: 1 min - 5 min
– slow Pendolino: over 5 min
Response time:
– AMANDA delivers roughly 13000 values per second
– E.g. if a Pendolino needs data for N channels over a period of X seconds, and the channels change at a rate of Y Hz (with Y smaller than 1 Hz!), it will take (N * X * Y) / 13000 seconds to read back the data (figures given by Peter Chochula)
Remark:
– The requested values can be up to 2 min old (this is the time that can pass until the data is shifted from the DCS PVSS to the DCS Archive DB)
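To get a feel for the numbers (an illustrative calculation, not from the slides): reading back N = 1000 channels for a period of X = 600 s with channels changing at Y = 0.1 Hz amounts to (1000 * 600 * 0.1) / 13000 ≈ 4.6 seconds.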

DCS data → HLT — remarks:
– AMANDA/Pendolino can only request data that is included in the DCS Archive DB
– Values needed at a higher frequency than ~0.1 Hz require a different connection
– Only data from the current run is requested; older data can be fetched from the OCDB (OFFLINE interface, see below)
– Large amounts of data should be requested via the FES of DCS (an additional portal, which will be similar to the OFFLINE FES)

HLT ↔ OFFLINE interface
Taxi portal:
– Requests the OCDB and caches its content locally (HCDB)
– Provides calibration objects to the Detector Algorithms (DAs) inside the HLT
– The HCDB is accessible via the AliRoot CDB access classes
Shuttle portal:
– Provides calibration data to OFFLINE (OCDB)
– Data is exchanged via the FileExchangeServer (FES, FXS)
– Metadata is stored in a MySQL DB
– Fetched by OFFLINE at the end of the run

[Diagram: Taxi portal — the Taxi instances on portal-taxi0 and portal-taxi1 pull from the OCDB into local caches (HCDB0, HCDB1); the ECS proxy passes the current run number, which triggers the update; the DAs read the DA_HCDB through the AliRoot CDB access classes.]

HLT ↔ OFFLINE interface
How to access data from the HCDB (the storage URL, object path and run number below are placeholders):

string hcdbURL   = "local://path/to/HCDB";   // placeholder: URL of the local cache
string calibObj  = "DET/Calib/SomeObject";   // placeholder: path of the calibration object
Int_t  runNumber = 12345;                    // placeholder: the current run number

AliCDBManager *man  = AliCDBManager::Instance();
AliCDBStorage *hcdb = man->GetStorage(hcdbURL.c_str());
hcdb->QueryCDB(runNumber);
Int_t latestVersion = hcdb->GetLatestVersion(calibObj.c_str(), runNumber);
AliCDBEntry *calibObject = hcdb->Get(calibObj.c_str(), runNumber, latestVersion);
// ... and that's it! An AliCDBEntry represents a calibration object;
// its payload is available via calibObject->GetObject().

[Diagram: Shuttle portal — the DAs write to portal-shuttle0 and portal-shuttle1 (subscribers), each with its own FES and MySQL instance; the OFFLINE Shuttle collects the files and metadata from there and feeds the OCDB.]

HLT ↔ OFFLINE interface
Shuttle portal — metadata fields in the MySQL DB (a matching record sketch follows below):

Field name     | Description                                          | Provider                       | Type
run            | the run number                                       | HLT (framework)                | INT
detector       | the detector name                                    | HLT (DA = Detector Algorithm)  | CHAR(3)
fileId         | file identifier                                      | HLT (DA, checked by framework) | VARCHAR(128)
DDLnumbers     | DDL numbers the data originates from                 | HLT (DA, framework?)           | CHAR(64)
filePath       | full path to access the file on the FES              | HLT (framework)                | VARCHAR(256)
time_created   | timestamp when the file has been copied to the FES   | HLT (framework)                | DOUBLE
time_processed | timestamp when OFFLINE has processed the file        | OFFLINE                        | DOUBLE
time_deleted   | timestamp when HLT has deleted the file from the FES | HLT (framework)                | DOUBLE
size           | size of the file                                     | HLT (framework)                | INT
fileChecksum   | checksum of the file                                 | HLT (framework)                | CHAR(64)
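A minimal sketch of how an HLT-side component might hold one such metadata record in memory before inserting it into the MySQL DB; the struct name is hypothetical, the fields follow the table above:

#include <string>

// Hypothetical in-memory representation of one Shuttle metadata record.
struct ShuttleFileRecord {
    int         run;            // INT          - the run number
    std::string detector;       // CHAR(3)      - detector name, e.g. "TPC"
    std::string fileId;         // VARCHAR(128) - file identifier from the DA
    std::string ddlNumbers;     // CHAR(64)     - originating DDL numbers
    std::string filePath;       // VARCHAR(256) - full path on the FES
    double      timeCreated;    // DOUBLE       - copied to the FES (timestamp)
    double      timeProcessed;  // DOUBLE       - processed by OFFLINE
    double      timeDeleted;    // DOUBLE       - deleted from the FES by HLT
    int         size;           // INT          - file size
    std::string fileChecksum;   // CHAR(64)     - file checksum
};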

Information on the web

Additional slides

[Diagram: HLT interfaces overview — ECS controls the HLT proxy (run number, …); FEE event data enters the HLT cluster over DDLs into the FEPs; the DCS portal exchanges DCS values and processed data with PVSS and the Archive DB via the Pendolino; the Taxi portal fetches conditions from the OCDB; the OFFLINE Shuttle collects the processed calibration data; processed events go to DAQ.]

HLT condition dataflow / use case
Framework for data exchange:
– Initial settings (before Start-of-Run (SoR)):
ECS → HLT (over SMI: run number, beam type, mode, etc.)
OFFLINE → HLT (run and experiment conditions from the OCDB; local cache → HCDB)
– During the run (after SoR):
DCS → HLT (current environment/condition values via the AMANDA Pendolino)
HLT → DCS (processed data via DIM-PVSS, e.g. drift velocity)
Processed data back to DAQ (also for a certain period after End-of-Run)
– After the run (after End-of-Run (EoR)):
HLT → OFFLINE (the OFFLINE Shuttle requests data via the MySQL DB and the File Exchange Server (FES))

[Timing diagram: activity of ECS, DAQ, DCS, HLT and OFFLINE from initialisation through SoR to EoR; after EoR the SHUTTLE pre-processors run.]

HLT dataflow / remarks
Goal: the framework components shall be independent of the data:
– Data definitions can be changed later without changing the model design and the framework implementation
– Use of already proven technologies (AMANDA, PVSS, DIM, AliRoot, PubSub framework)
Detectors / detector algorithms can define the required data later on:
– BUT: they have to make sure that the data they request is available in the connected systems (OCDB, DCS Archive, event data stream (from the FEE))
– They should limit their requests to the actually required amount of data (performance)