

Status of the Shuttle Framework
Alberto Colla, Jan Fiete Grosse-Oetringhaus
ALICE Offline Week, 2-6 October 2006

Slide 2: Content
– The Shuttle
– Recent developments
– Demo
– Todo list

Slide 3: Concept
– Online machinery is shielded by firewalls
– Conditions data produced online has to be extracted online
→ The Shuttle collects data at the end of each run and puts it into the Conditions DB

Slide 4: Schema
(Diagram: the ECS signals End-Of-Run to the Shuttle; the Shuttle runs one preprocessor per detector (Preprocessor Det 1, 2, 3, ...), collects data from the online systems DAQ, DCS and HLT, and writes the results to the Conditions DB and the Grid File Catalog on the offline side.)

Slide 5: Sub-detector preprocessor
– Runs inside the Shuttle
– Is executed once for each finished run
– Gets information from the DAQ logbook and DCS
– Retrieves files produced by calibration routines running on the DAQ, DCS and HLT online machines
– Processes the data (ROOTifying, summarizing, fitting, ...)
– Writes the results into the Conditions DB
There is no alternative system to extract data (including online calibration results) between data taking and the first reconstruction pass!
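As an illustration of what such a preprocessor looks like, here is a minimal sketch in AliRoot style. It assumes an AliPreprocessor base class with a Process(TMap*) entry point and a GetFile() helper for fetching files from the file exchange server; those names are not spelled out in these slides, only the Store() call (described on a later slide) is. The detector, file id, source and histogram are invented examples.

```cpp
// Minimal sketch of a sub-detector preprocessor (illustrative only; the base
// class name, Process() signature, GetFile() helper and kDAQ constant are
// assumptions, not taken from these slides).
#include "AliPreprocessor.h"
#include "AliShuttleInterface.h"
#include "AliCDBMetaData.h"
#include "TMap.h"
#include "TH1F.h"

class AliTOFPreprocessorSketch : public AliPreprocessor {
public:
  AliTOFPreprocessorSketch(AliShuttleInterface* shuttle)
    : AliPreprocessor("TOF", shuttle) {}   // register under the online name

  virtual UInt_t Process(TMap* dcsAliasMap) {
    // The Shuttle passes the DCS data points for this run in dcsAliasMap.

    // Retrieve a file written by the online calibration routine from the
    // DAQ file exchange server ("PEDESTALS" and "LDC0" are example names).
    const char* fileName = GetFile(kDAQ, "PEDESTALS", "LDC0");
    if (!fileName) return 1;               // non-zero return flags a failure

    // Process the data: ROOTify, summarize, fit, ... (details omitted).
    TH1F* pedestals = new TH1F("pedestals", "TOF pedestals", 100, 0., 100.);

    // Write the result into the Conditions DB via the Shuttle (return type
    // of Store() assumed to be Bool_t).
    AliCDBMetaData metaData;
    metaData.SetResponsible("TOF expert");
    Bool_t ok = Store("Calib", "Pedestals", pedestals, &metaData);
    return ok ? 0 : 1;                     // 0 means success
  }
};
```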

Slide 6: Sequence diagram
(Diagram: on the EoR notification from the ECS (SoR for calibration runs), the Shuttle interfaces with the info providers DAQ, DCS (archive DB) and HLT, loops over all detector preprocessors (ACORDE, EMCAL, HMPID, FMD, ITS, MUON, PHOS, PMD, T0, TOF, TPC, TRD, V0, ZDC) and registers the resulting condition files in the OCDB.)

Slide 7: Developments since June
– DAQ File eXchange Server (FES = FXS) interfaced; DCS and HLT agreed, but not implemented yet
– Checksum added to FXS tables
– Support for calibration runs
– Log file per sub-detector
– Online/offline naming conversion (see next slides)
– Status system / crash recovery (see next slides)
– Introduction of reference data (see next slides)

Slide 8: Reference Data
Data stored in the OCDB is divided into two logical categories.
Conditions data (CD)
– Data needed for reconstruction
– E.g. calculated calibration values (pedestals, drift velocities)
– Replicated to all sites participating in reconstruction
(Diagram: the preprocessor writes calibration values to the Conditions storage and histograms to the Reference data storage.)

Slide 9: Reference Data (2)
Reference data (RD)
– Optional data that can be used to understand the origin of the calibration values
– E.g. input for the calculation of the calibration values (histograms, ...)
Usually size(CD) << size(RD): estimated sizes of both are to be submitted to Yves Schutz for the calibration readiness tables (please check this).
Both are stored safely on the Grid and are accessible through the CDB framework.

Slide 10: Status System / Crash Recovery
Detailed status is stored locally
– Processing status per sub-detector (e.g. receiving DCS values, running preprocessor)
– Crash recovery at startup (only sub-detectors that were not yet processed are processed again)
– Crash count: after a maximum number of crashes the run is skipped for this sub-detector
(Diagram: state machine with states Started, DCS started, DCS done, Preprocessor started, Preprocessor error, Done and Failed; the crash count is incremented on each preprocessor error.)
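Locally, such a per-detector status could be kept as a simple state value. The sketch below only illustrates the idea, using the state names from the diagram; the enum, the helper and the retry limit of 3 are assumptions, not the Shuttle code.

```cpp
// Illustrative per-detector status following the state diagram above.
// Names, helper and retry limit are assumptions, not actual Shuttle code.
enum EShuttleDetStatus {
  kStarted,              // run picked up for this sub-detector
  kDCSStarted,           // retrieving DCS values
  kDCSDone,              // DCS values retrieved
  kPPStarted,            // detector preprocessor running
  kPPError,              // preprocessor crashed or returned an error
  kDone,                 // conditions stored successfully
  kFailed                // maximum number of crashes exceeded, run skipped
};

const int kMaxCrashCount = 3;  // example value only

// Crash recovery at startup: only sub-detectors that are not yet in a final
// state are processed again, and a crash counter limits the retries.
bool NeedsProcessing(EShuttleDetStatus status, int crashCount) {
  if (status == kDone || status == kFailed) return false;
  return crashCount < kMaxCrashCount;
}
```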

Slide 11: Status System / Crash Recovery (2)
Global status is stored in a MySQL table in DAQ
– Contains a simplified status per run and sub-detector: Unprocessed, Inactive, Done, Failed
– Could be included in the global ALICE monitoring

Slide 12: Changes in the interface
Store(pathLevel2, pathLevel3, object, metaData, validityStart, validityInfinite)
– Full path can be specified: <detector>/<pathLevel2>/<pathLevel3>
– validityStart dates the validity back w.r.t. the current run (default: 0)
– validityInfinite makes the data valid until infinity (or until a newer version is stored), needed for calibration runs (default: kFALSE)
StoreReferenceData(pathLevel2, pathLevel3, object, metaData)
– Used to store reference data, stored for the current run
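Put together, the two calls might be used from inside a preprocessor like this (an illustrative fragment in the style of the earlier sketch; the paths "Calib/Pedestals" and "Calib/RawHistos", the histograms and the metadata are invented examples):

```cpp
// Illustrative fragment, e.g. inside a preprocessor's Process() method.
TH1F* pedestals = new TH1F("pedestals", "fitted pedestals", 100, 0., 100.);
TH1F* rawHistos = new TH1F("rawHistos", "raw ADC spectra", 100, 0., 100.);

AliCDBMetaData metaData;
metaData.SetResponsible("Detector expert");

// Conditions data with the defaults: validityStart = 0, validityInfinite = kFALSE.
Store("Calib", "Pedestals", pedestals, &metaData);

// Calibration run: validity starts 2 runs back and lasts until a newer
// version is stored (validityInfinite = kTRUE).
Store("Calib", "Pedestals", pedestals, &metaData, 2, kTRUE);

// Reference data, stored for the current run.
StoreReferenceData("Calib", "RawHistos", rawHistos, &metaData);
```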

Slide 13: Online vs. Offline Naming
Preprocessors are to be implemented following the online naming
– Put the online name in the constructor of your preprocessor
Several preprocessors for 3 sub-detectors:
– ITS: SPD, SDD, SSD
– MUON: MCH (muon tracker), MTR (muon trigger)
– PHOS: PHS, CPV (charged particle veto)
The Store() function of the Shuttle converts the naming → e.g. AliITSSPDPreprocessor::Store saves in the namespace of ITS
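A minimal sketch of how this looks for the SPD, assuming the same base-class constructor as in the earlier example (the class body is omitted):

```cpp
// Illustrative: the preprocessor is constructed with the online name "SPD";
// the Shuttle's Store() then saves the objects in the offline ITS namespace.
class AliITSSPDPreprocessor : public AliPreprocessor {
public:
  AliITSSPDPreprocessor(AliShuttleInterface* shuttle)
    : AliPreprocessor("SPD", shuttle) {}   // online name, not "ITS"
  // Process(), etc. as in any other preprocessor
};
```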

Slide 14: Open Issues
Is it needed that a preprocessor reads from the OCDB?
– A preprocessor that needs data from the OCDB can only run when a Grid connection is available
– Data from the previous run might not yet be available (no strict ordering of runs currently)
Is validityStart needed?
– Data from an older run might overwrite data from a newer run (no strict ordering of runs currently)
– How would a sub-detector define the value for validityStart? (e.g. the magnetic field could have been changed before this run)

Slide 15: Demo
2 preprocessors, 2 runs
– For the second run one of the preprocessors will fail because a file is missing in the file exchange server
4 runs of the Shuttle
– 1. We simulate that the Grid is not working; the resulting files are stored locally
– 2. The Grid is working; the already created files are uploaded
– The second run for one preprocessor is repeated because it failed before, until the maximum number of retries is exceeded

Slide 16: Todo list
– All information from the DAQ Logbook available to the preprocessor (run type, ...)
– Monitoring process that aborts the Shuttle if a preprocessor gets stuck
– Shuttle status website
  – Status of Shuttle processing per run
  – Log files per sub-detector
  – Mail sent to the sub-detector expert if a preprocessor fails
– Full test system by the end of this year
  – Nightly runs with the preprocessors from AliRoot HEAD

Slide 17: Reminder to sub-detectors
Provide a preprocessor
– Communicate the size of the produced data to Yves Schutz
– Commit the preprocessor into CVS
– Communicate the list of DCS data points to the Shuttle team
Provide input data for the full test setup
– Give input files to the Shuttle team
– Give a simulated list of DCS values to the Shuttle team

Slide 18: Backup

Slide 19: High-level dataflow
(Diagram: overall online/offline data flow. On the online side the DAQ (LDCs and GDCs), HLT and DCS produce raw data and condition files; condition files go to the DAQ, HLT and DCS file exchange servers, run information to the DAQ Logbook DB and the DCS DB. The Shuttle collects the condition files and publishes them in AliEn. Raw data pass through the CASTOR disk cache (240 TB) into CASTOR and are transferred to the T1s via FTS/SRM; a publish agent registers the data in the ALICE File Catalogue (lfn → guid → SEs), which makes the files accessible, e.g. to CAF worker nodes via xrootd.)

Slide 20
(Diagram: Shuttle interaction with ECS, trigger (CTP/LTU), DCS, DAQ and HLT. 1. EOR notification from the ECS; 2. query the run logbook (RUN LgBk) for the run number; 3. query the FS table for the file paths; 4. collect the files from the online file systems (DAQ LDCs/GDC, DCS, HLT); 5. process and store the files in the CDB / Grid file catalogue; 6. send the status flag.)
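The numbered steps can be summarized as a pseudocode-style outline of one Shuttle cycle; every type and function name below is an invented placeholder, not the actual Shuttle implementation.

```cpp
// Illustrative outline of one Shuttle cycle for a finished run (steps 1-6 of
// the diagram above). All names are invented placeholders.
#include <string>
#include <vector>
#include <iostream>

struct RunInfo  { int number; std::string type; };
struct CondFile { std::string path; };

// Placeholder info providers (would talk to the run logbook, FXS tables, ...).
RunInfo QueryLogbook(int run) { return {run, "PHYSICS"}; }
std::vector<std::string> QueryFxsTable(int /*run*/, const std::string& /*det*/) { return {}; }
std::vector<CondFile> CollectFiles(const std::vector<std::string>& /*paths*/) { return {}; }
bool RunPreprocessor(const std::string& /*det*/, const RunInfo&,
                     const std::vector<CondFile>&) { return true; }
void SendStatusFlag(int run, const std::string& det, bool ok) {
  std::cout << det << " run " << run << (ok ? ": Done" : ": Failed") << "\n";
}

// 1. Called on the EOR notification from the ECS.
void ProcessEndOfRun(int run) {
  RunInfo info = QueryLogbook(run);                        // 2. run metadata
  for (const std::string& det : {"TOF", "TPC", "SPD"}) {   // loop over detectors
    auto paths = QueryFxsTable(run, det);                  // 3. file paths
    auto files = CollectFiles(paths);                      // 4. collect files
    bool ok = RunPreprocessor(det, info, files);           // 5. process, store in CDB
    SendStatusFlag(run, det, ok);                          // 6. report status
  }
}
```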