Commissioning: Preliminary thoughts from the Offline side
ATLAS Computing Workshop, December 2004
Rob McPherson, Hans von der Schmitt

Slide 2: ATLAS Commissioning
- Commissioning = "Just Installed" → "Operational"
- Referring primarily to activities at Point 1 (+ integration)
- Broken into 4 phases:
  - Phase 1: subsystem standalone commissioning
    - DCS: LV, HV, cooling, gas, safety systems; record in DB, retrieve from DB
    - DAQ: pedestal runs, electronic calibration, write data, analyze
  - Phase 2: integrate systems into the full detector
  - Phase 3: cosmic rays
    - Take data, record, analyze/understand them, distribute to remote sites
  - Phase 4: single beam, 1st collisions
    - Same, with increasingly higher rates
- Phases will overlap
  - Some systems may take cosmics while others are still installing
- Starts very soon
  - Barrel calos start "phase 1" electronics commissioning → Mar 2005

Slide 3: ATLAS Commissioning Structure
The names are the current commissioning contact people:
- Overall ATLAS: G. Mornacchi, P. Perrodo
- Offline: R. McPherson, H. von der Schmitt
- Databases: R. Hawkings, T. Wenaus
- TDAQ: G. Mornacchi (DAQ: G. Mornacchi; LVL1: T. Wengler; HLT: F. Wickens)
- Central DCS: H. Burckhart
- Inner Detector: P. Wells (Pixels: L. Rossi; SCT: S. McMahon; TRT: H. Danielsson, P. Lichard)
- LAr: L. Hervas
- Tiles: B. Stanek
- Muon Barrel: L. Pontecorvo
- Muon Endcap: S. Palestini
- Magnets: H. ten Kate
- Cryogenics: G. Passardi
- Detector cooling, gas: J. Godlewski
- Cooling, ventilation: B. Pirollet
- Safety: G. Benincasa
- Luminosity, beam-pipe, shieldings, etc.

Slide 4: Offline commissioning
- The work program will start with detector debugging/monitoring in the early stages, then move to cosmics, beam halo, beam gas, and finally 1st collisions
- Many issues for detector online+offline software, databases, simulation, data distribution, remote reconstruction, ...
- Will have meetings as needed
- Contact people have been requested from detectors and related groups, with some response so far:
  - ID: Maria Costa
  - LAr: ?
  - Tiles: Sasha Solodkov
  - Muons: ?
  - DB: Richard Hawkings and Torre Wenaus
  - Simulation: ?
  - Physics: ?

Slide 5: Detector commissioning: offline view
[Block diagram: detector front-end → ROD → ROS → SFI → EF → SFO → bytestream files, read by ATHENA, with LVL1/LVL2 triggers and an RCC/VME readout path; DCS/Controls (HV, LV, temperature sensors, alignment, cryogenics) together with the configuration and conditions database(s); a TDAQ/system workstation (GNAM in the CTB); the online system with presenter and Online Histogramming Service.]

Slide 6: non-Event data access
- DCS and other controls data
  - If needed offline, the natural access is via the conditions DB interface (see the illustrative sketch below)
  - Can we assume an evolution of the current PVSS manager is sufficient? Probably yes. Assume it will move to Oracle at some point.
    - But must watch custom DB use in case additional central support tools are required
    - And must also watch the data volume going into the relational DB...
  - Note that most "DCS" monitoring is done in PVSS et al. (not called "offline" here)
- Configuration information
  - Again, assume the necessary information is written either to the event stream or to the conditions DB
- Calibration / alignment constants
  - Obviously need the conditions DB for these
  - Will we have a common system in time for commissioning? (POOL et al.?)
  - Can fall back to CTB systems... not very nice...
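To make the conditions-DB access pattern above concrete, here is a minimal, self-contained C++ sketch of an interval-of-validity (IOV) lookup, the basic operation any conditions interface (the Lisbon CondDB, a PVSS-to-Oracle archive, etc.) has to provide to offline code. The ConditionsFolder class, the channel name and the values are invented for illustration and are not the ATLAS conditions API; the point is only the access pattern, where the offline job asks for the value of a named channel that was valid at the time of the event it is processing.

```cpp
// Toy interval-of-validity (IOV) store: illustrates the lookup pattern an
// offline job uses to fetch DCS values (HV, temperatures, ...) valid at the
// time of a given event.  Purely illustrative; not the ATLAS conditions API.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct IOVRecord {
  long long since;   // validity start (e.g. run|lumiblock or a timestamp)
  long long until;   // validity end (exclusive)
  double    value;   // the stored condition (e.g. an HV setting in volts)
};

class ConditionsFolder {
public:
  void store(const std::string& channel, IOVRecord rec) {
    folder_[channel].push_back(rec);           // assume non-overlapping IOVs
  }
  // Return the value valid at 'when', or nullptr if no IOV covers it.
  const IOVRecord* retrieve(const std::string& channel, long long when) const {
    auto it = folder_.find(channel);
    if (it == folder_.end()) return nullptr;
    for (const IOVRecord& rec : it->second)
      if (rec.since <= when && when < rec.until) return &rec;
    return nullptr;
  }
private:
  std::map<std::string, std::vector<IOVRecord>> folder_;
};

int main() {
  ConditionsFolder dcs;
  dcs.store("LAr/HV/Channel042", {1000, 2000, 2000.0});   // hypothetical channel
  dcs.store("LAr/HV/Channel042", {2000, 3000, 1995.5});   // new setting later on

  long long eventTime = 2150;                              // event timestamp
  if (const IOVRecord* rec = dcs.retrieve("LAr/HV/Channel042", eventTime))
    std::cout << "HV valid at t=" << eventTime << " : " << rec->value << " V\n";
}
```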

Slide 7: Event Data Access (1)
1) Running ATHENA on ByteStream / EventStorage files
  - Easiest way for offline code to access data (see the sketch below)
  - Would we want to maintain a "commissioning branch" like the CTB?
  - Would we want this branch built on non-AFS like the CTB?
  - Can re-use a lot of monitoring tools developed for the CTB
  - Ideally the ROD → ROB → ROS chain is working, but can also do RCC → BS
  - Limited number of channels: ROB/ROS → PC with FILAR card (needed for subdetectors without full VME readout)
2) In the Event Filter
  - Requires more of the DAQ system running
  - Experience from the CTB: not always possible to keep code up to date → uncouple detector monitoring from online software releases etc. as much as possible?
  - Need to review handling of "incidents" (asynchronous interrupts) passed into the ATHENA job
    - Histogram reset under certain conditions...
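Option 1 above boils down to a plain event loop over recorded files. The sketch below is a self-contained toy of that pattern (the length-prefixed file layout, readEventBlob and monitorEvent are invented for illustration and are not the real EventStorage or ATHENA interfaces): read serialized event blobs one by one and hand each to a monitoring routine that fills histograms.

```cpp
// Toy "monitoring on bytestream files" event loop: read length-prefixed
// event blobs from a recorded file and hand each to a monitoring hook.
// The file layout and function names are invented for illustration.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

using EventBlob = std::vector<std::uint8_t>;

// Read one event: a 4-byte size (host byte order, purely illustrative)
// followed by the payload.  Returns false at end of file.
bool readEventBlob(std::ifstream& in, EventBlob& blob) {
  std::uint32_t size = 0;
  if (!in.read(reinterpret_cast<char*>(&size), sizeof(size))) return false;
  blob.resize(size);
  return static_cast<bool>(in.read(reinterpret_cast<char*>(blob.data()), size));
}

// Stand-in for a monitoring algorithm: histogram the first byte of each
// event as if it were a detector channel identifier.
void monitorEvent(const EventBlob& blob, std::vector<long>& channelHist) {
  if (!blob.empty()) ++channelHist[blob.front()];
}

int main(int argc, char** argv) {
  if (argc < 2) { std::cerr << "usage: monitor <bytestream-file>\n"; return 1; }
  std::ifstream in(argv[1], std::ios::binary);
  std::vector<long> channelHist(256, 0);

  EventBlob blob;
  long nEvents = 0;
  while (readEventBlob(in, blob)) {      // the event loop
    monitorEvent(blob, channelHist);
    ++nEvents;
  }
  std::cout << "processed " << nEvents << " events\n";
  return 0;
}
```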

Slide 8: Event Data Access (2)
3) "Online" workstation (a la GNAM in the CTB)
  - Can take the (ethernet) data stream via the ROS (or RCC)?
  - Need to review this ethernet data stream and how to read it from an ATHENA job
  - Also need to review running ATHENA on lower-level (ROD?) fragments (is all the needed information available?)
  - If we run ATHENA here, we require:
    - the possibility of a "light-weight" ATHENA with only converters and histogramming
  - If we don't run ATHENA here:
    - we require duplication of converters and parallel maintenance (see the sketch below)
    - limited monitoring is possible at this level unless we also want to duplicate cabling/mapping services, database interaction, etc.
    - but we will want to match the histogram ROOT tree to ATHENA in any case, to use the same plots / macros / etc.
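The duplication worry in option 3 is essentially about who owns the raw-data decoding. A common way to avoid maintaining two copies is to keep the decoder behind one small interface that both the full ATHENA path and any light-weight online tool link against. The sketch below only illustrates that design choice; the interface and class names (IRawToCellConverter, etc.) are invented, not existing ATLAS code.

```cpp
// Sketch of the "single converter, two clients" idea: the raw-fragment
// decoder lives behind one interface, so a light-weight online monitor and
// the full offline reconstruction share the same decoding code instead of
// maintaining two copies.  All names here are illustrative.
#include <cstdint>
#include <iostream>
#include <vector>

struct RawFragment { std::vector<std::uint32_t> words; };
struct CalorimeterCell { int channel; double energy; };

// The one interface both online and offline code depend on.
class IRawToCellConverter {
public:
  virtual ~IRawToCellConverter() = default;
  virtual std::vector<CalorimeterCell> convert(const RawFragment& frag) const = 0;
};

// Single concrete implementation, maintained in one place.
class SimpleRawToCellConverter : public IRawToCellConverter {
public:
  std::vector<CalorimeterCell> convert(const RawFragment& frag) const override {
    std::vector<CalorimeterCell> cells;
    for (std::size_t i = 0; i < frag.words.size(); ++i)
      cells.push_back({static_cast<int>(i), 0.1 * frag.words[i]});  // toy decoding
    return cells;
  }
};

// A light-weight "online workstation" client: just prints/histograms energies.
void onlineMonitor(const IRawToCellConverter& conv, const RawFragment& frag) {
  for (const auto& cell : conv.convert(frag))
    std::cout << "channel " << cell.channel << " E=" << cell.energy << "\n";
}

int main() {
  SimpleRawToCellConverter converter;
  RawFragment frag{{10, 20, 30}};
  onlineMonitor(converter, frag);   // offline reconstruction would reuse 'converter'
}
```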

Slide 9: ATHENA "online"
- Direct access to the TDAQ Information Service (IS) is essential
  - Had limited use in CTB04 monitoring (e.g. beam energy for histograms)
  - Found this a weak point that could use review
- Need a structured monitoring/histogramming environment that matches online use
  - Dynamic booking / rebooking of histograms
  - Zero histograms based on some external input
    - e.g. the shift crew presses a "reset" button... or a change in some condition is picked up via the Information Service (see the sketch below)
  - Can work features into the AthenaMonitoring package, once we understand what features are wanted
  - Need a "state model" for the online system, mapped/implemented in ATHENA
- Need a "smaller" build
  - Strong feeling on the subdetector/TDAQ side that ATHENA is too hard to use for the "GNAM" environment
  - Hard to use, crashes in obscure places (say, in ByteStreamSvc somewhere due to corrupt events? How to debug this? It will happen a lot during commissioning!)
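Below is a minimal sketch of the "zero histograms on external input" idea, written as self-contained toy C++ in the spirit of the ATHENA/Gaudi incident mechanism rather than using the real classes (Incident, IncidentService and HistogramMonitor here are invented names): monitoring objects register as listeners, and firing a "ResetHistograms" incident (a shifter pressing reset, or a condition change seen via IS) zeroes their histograms between events.

```cpp
// Toy incident/listener model for "zero histograms on external input".
// Monitoring objects subscribe to an incident service; a "ResetHistograms"
// incident (shifter button, run-state change picked up from IS, ...) zeroes
// their histograms.  Names are illustrative, not the Gaudi/ATHENA classes.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Incident { std::string type; };

class IIncidentListener {
public:
  virtual ~IIncidentListener() = default;
  virtual void handle(const Incident& inc) = 0;
};

class IncidentService {
public:
  void addListener(IIncidentListener* l) { listeners_.push_back(l); }
  void fire(const Incident& inc) {
    for (auto* l : listeners_) l->handle(inc);
  }
private:
  std::vector<IIncidentListener*> listeners_;
};

class HistogramMonitor : public IIncidentListener {
public:
  explicit HistogramMonitor(std::size_t nbins) : bins_(nbins, 0) {}
  void fill(std::size_t bin) { if (bin < bins_.size()) ++bins_[bin]; }
  void handle(const Incident& inc) override {
    if (inc.type == "ResetHistograms")
      std::fill(bins_.begin(), bins_.end(), 0);   // zero everything
  }
  long entries() const {
    long n = 0; for (long b : bins_) n += b; return n;
  }
private:
  std::vector<long> bins_;
};

int main() {
  IncidentService incSvc;
  HistogramMonitor mon(100);
  incSvc.addListener(&mon);

  mon.fill(3); mon.fill(42);                       // some events processed
  std::cout << "before reset: " << mon.entries() << " entries\n";
  incSvc.fire({"ResetHistograms"});                // shifter presses "reset"
  std::cout << "after reset:  " << mon.entries() << " entries\n";
}
```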

Slide 10: Summary thoughts on tools
- Databases
  - DCS databases and offline access seem OK for early commissioning
  - Calibration/alignment databases need rationalization
    - It would be very nice to have one recommended/supported solution before these are seriously required and used
    - Some CTB04 solutions (Nova, writing significant data into the CondDB) won't scale
  - Want to archive histograms etc. from the commissioning phase in a DB??
- If we use ATHENA-based event stream monitoring
  - Many of the CTB tools can be migrated to commissioning: monitoring Algorithms/AlgTools, ROOT macros, etc.
  - Need to think about detailed plots etc. for full ATLAS
    - There have been ridiculous monitoring-histogram extrapolations from CTB → ATLAS... must review these
  - There is a "histogram checker" framework in place (Monitoring/MonHighLevel from Manuel Diaz), but it needs clients (see the sketch below)
- If we also use non-ATHENA-based event stream monitoring
  - Surely would still want a common framework
  - Can still recycle many ROOT macros etc. from the CTB
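As an illustration of what a "histogram checker" client might do (the actual Monitoring/MonHighLevel interface is not reproduced here; checkAgainstReference and its threshold are invented), the simplest useful check compares a monitored histogram to a reference bin by bin and flags large relative deviations:

```cpp
// Toy histogram-checker client: compare a monitored histogram against a
// reference, bin by bin, and flag bins whose relative deviation exceeds a
// threshold.  Illustrative only; not the Monitoring/MonHighLevel API.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

struct CheckResult { bool ok; std::vector<std::size_t> badBins; };

CheckResult checkAgainstReference(const std::vector<double>& hist,
                                  const std::vector<double>& reference,
                                  double maxRelativeDeviation = 0.20) {
  CheckResult result{true, {}};
  const std::size_t n = std::min(hist.size(), reference.size());
  for (std::size_t i = 0; i < n; ++i) {
    if (reference[i] == 0.0) continue;                 // nothing to compare to
    double rel = std::fabs(hist[i] - reference[i]) / reference[i];
    if (rel > maxRelativeDeviation) {
      result.ok = false;
      result.badBins.push_back(i);
    }
  }
  return result;
}

int main() {
  std::vector<double> reference = {100, 200, 300, 200, 100};
  std::vector<double> monitored = {105, 195, 150, 210,  98};   // bin 2 is off

  CheckResult r = checkAgainstReference(monitored, reference);
  std::cout << (r.ok ? "histogram OK" : "histogram FAILED check") << "\n";
  for (std::size_t bin : r.badBins)
    std::cout << "  deviating bin: " << bin << "\n";
}
```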

Slide 11: Phase 3 and beyond...
- Fully simulated cosmics, beam halo and beam gas samples are available for detector studies
  - Some use so far:
    - Tiles: commissioning trigger rate studies
    - Muons: tracking package for non-pointing, out-of-time events
    - LAr and ID: some rate and reconstruction studies
  - Do we want dedicated samples with special detector configurations? Or more statistics for the samples we have?
  - So far only a G3 simulation of the overburden/cavern. Want G4?
  - Need to review the readiness of the subdetector reconstruction software for these non-standard events
- Once we're taking data with the full TDAQ chain in place
  - Data distribution to "Tier0" and remote computing centres is planned for cosmics and single-beam data
  - Considering this is not currently the highest priority, but must keep in mind that it will run in parallel with DC3

Slide 12: Summary
- ATLAS commissioning at Point 1 starts in a few months
- Initially we may need fallback DB solutions, but need to work to avoid these if possible
  - Must watch the data rate and volume written into the relational DB
- Will use ATHENA for detailed data analysis
  - Must think about the "AthenaMonitoring" environment for non-developers: smaller, faster, simpler, robust...
  - Maybe need to define an incident path so ATHENA can react to changing external conditions matching the TDAQ states
  - Also must verify that all subdetectors implement BS fragment versioning: formats will evolve significantly during detector commissioning
- Must consider whether ATHENA is also OK for "in the pit while plugging in a board" monitoring and then the subsequent standard online monitoring
  - TDAQ event stream → ATHENA?
  - Regardless, it would still be good to maintain code in only one place
- Also need to review detector reconstruction for cosmics etc.
- And eventually also the best timescale for distributing commissioning events to external computing centres