THIS MORNING
(Start an) informal discussion to:
- Clearly identify all open issues, categorize them and build an action plan
- Possibly identify (new) contributing people to help attack each problem
- Make a plan, specifically setting dates for dry runs
(to be expanded and continued offline...)

RUN CONTROL INTEGRATION, INITIALIZATION, ETC.
- Is the system initialization complete and reliable?
- Do we need new features in the run control?
- Does the present XML file configuration scheme require some change (e.g. incremental file handling, tolerance of missing parameters)? See the sketch after this list.
- Is the information available in the run database sufficient (e.g. number of triggers in the burst and other EOB information)?
- Do we need more tools, besides the web page, to query/write information from/to this database (e.g. burst data quality flags written by offline reprocessing)?
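
As a concrete illustration of the incremental-handling and missing-parameter questions above, here is a minimal C++ sketch (not the actual RunControl code; Config, merge and getOr are invented names) of overlaying a partial per-run file on top of defaults instead of failing on missing keys:

// Hypothetical sketch of incremental configuration handling: a base
// parameter set is overlaid with a (possibly partial) per-run file,
// and missing parameters fall back to defaults instead of aborting.
#include <iostream>
#include <map>
#include <string>

using Config = std::map<std::string, std::string>;

// Overlay 'increment' on top of 'base': keys present in the increment
// override the base, everything else keeps its default value.
Config merge(const Config& base, const Config& increment) {
    Config result = base;
    for (const auto& [key, value] : increment)
        result[key] = value;
    return result;
}

// Tolerant lookup: warn and use a default instead of failing the run.
std::string getOr(const Config& cfg, const std::string& key,
                  const std::string& fallback) {
    auto it = cfg.find(key);
    if (it != cfg.end()) return it->second;
    std::cerr << "warning: parameter '" << key
              << "' missing, using default '" << fallback << "'\n";
    return fallback;
}

int main() {
    Config defaults = {{"ClockPhase", "0"}, {"TriggerMask", "0xFF"}};
    Config runFile  = {{"TriggerMask", "0x3F"}};  // partial per-run file
    Config cfg = merge(defaults, runFile);
    std::cout << "TriggerMask = " << getOr(cfg, "TriggerMask", "0xFF")
              << ", ClockPhase = " << getOr(cfg, "ClockPhase", "0") << "\n";
}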

TDC/TEL62 GENERIC SYSTEM
- Do we have evidence of any hardware failures?
- Are clock phase settings completely defined?
- Which input rate limitations were observed?
- Which (instantaneous/integrated) rate measurements do we have at our disposal? How is this information used?
- What is the hit distribution model (in time and among channels) for CEDAR, CHOD, CHANTI, RICH, LAV?
- What are the known limitations in the FW?
- Which trigger rate limitations were observed?
- Which output rate limitations were observed? Up to which data rate towards the farm was the system tested? (A back-of-the-envelope estimate is sketched after this list.)
- Is all required diagnostics information recorded in the EOB? Or elsewhere?
- Are all possible errors logged? How should we keep the log information across bursts?
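
To make the rate questions concrete, a back-of-the-envelope sketch; all numbers below (channel count, hit rate, word size, duty cycle) are placeholders to be replaced by the measured figures for each detector, not TEL62 specifications:

// Illustrative rate estimate: given an assumed per-channel hit rate,
// channel count and word size, estimate the data rate a single readout
// board must sustain toward the farm.
#include <cstdio>

int main() {
    const double hitRatePerChannel = 1.0e6;   // 1 MHz, assumed average
    const int    channels          = 512;     // channels per board, assumed
    const double bytesPerHit       = 4.0;     // one 32-bit word per hit, assumed
    const double dutyCycle         = 0.3;     // spill fraction of the cycle, assumed

    const double instantaneous = hitRatePerChannel * channels * bytesPerHit;
    const double sustained     = instantaneous * dutyCycle;

    printf("instantaneous: %.2f Gbit/s\n", instantaneous * 8 / 1e9);
    printf("sustained:     %.2f Gbit/s\n", sustained * 8 / 1e9);
    // Compare against the available link bandwidth to see where the
    // output rate limitation would first appear.
}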

- Which kinds of errors are possible during a burst? Are all error conditions recorded? How should the system react to such errors? Which ones must be fatal, and which ones can be handled within the system to avoid stopping the data taking?
- How does the system fail (input data, trigger) when it is overloaded beyond design specs? How should the data input be limited in this case?
- How do we monitor the possible data loss due to the filling of the TDC single-channel buffer during a single-channel instantaneous rate peak? For which systems can this occur (channel signal shapes and dead times for each detector)? A toy estimate is sketched after this list.
- Is some online data processing (e.g. channel remapping, time offset subtraction) required at the beginning of the DAQ chain, both for the main data flow and for the trigger primitive generation input?
- Do we still have FW compilation issues in different places?
- Do we have solid evidence of changes in functionality depending on compilation? Can we implement incremental compilation to avoid this?
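
For the single-channel buffer question, a toy model may help frame the discussion: assuming Poisson hit arrivals and a buffer of depth B drained once per readout window T, the loss probability is P(N > B) with N ~ Poisson(rate x T). The depth and window used below are illustrative, not the real TDC parameters:

// Simplified overflow estimate under the Poisson assumption above.
#include <cmath>
#include <cstdio>

double overflowProbability(double rateHz, double windowSec, int depth) {
    const double mu = rateHz * windowSec;   // expected hits per window
    double term = std::exp(-mu);            // P(N = 0)
    double cumulative = term;
    for (int n = 1; n <= depth; ++n) {      // accumulate P(N <= depth)
        term *= mu / n;
        cumulative += term;
    }
    return 1.0 - cumulative;                // P(N > depth)
}

int main() {
    // Scan instantaneous rates up to an assumed 10 MHz peak, for an
    // assumed buffer depth of 256 hits and a 25.6 us readout window.
    for (double rate = 2e6; rate <= 10e6; rate += 2e6)
        printf("rate %.0f MHz -> P(overflow) = %.3g\n",
               rate / 1e6, overflowProbability(rate, 25.6e-6, 256));
}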

OTHER DAQ SYSTEMS
- What is missing for the integration of the other DAQ systems (STRAWS, GTK, SAC/IRC) in the common framework?
- Do we at present have any evidence of limitations in the LKr readout?
- What kind of error monitoring and data checking is available there?

DIGITAL TRIGGER
- How deeply was the trigger primitive generation FW tested, and how? Up to which data input rate?
- Which monitoring information is available from the primitive generation FW (in the EOB)?
- Communication between TEL62s: hardware? Common FW? Specific FW for LAV and RICH?
- LKr trigger: how far was it tested, and how? What kind of diagnostics/monitoring do we have available from it?

L0TP
- What are the intrinsic limitations of the L0TP(s)?
- What kind of diagnostics is available to verify the correctness of the number of triggers sent? (A minimal cross-check is sketched after this list.)
- Is all the useful information from the L0TP made available in the offline data? Does the farm processing need to exploit part of it?
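
One minimal form the trigger-count diagnostics could take is an end-of-burst cross-check between the L0TP counters and the farm; the BurstSummary fields below are hypothetical, not the real EOB record layout:

// Illustrative end-of-burst consistency check: compare the number of
// triggers the L0TP reports having sent with the number of events
// actually assembled by the farm for the same burst.
#include <cstdint>
#include <cstdio>

struct BurstSummary {
    uint32_t burstId;
    uint64_t l0tpTriggersSent;   // from the L0TP end-of-burst block
    uint64_t farmEventsReceived; // counted by the event builder / farm
};

bool checkBurst(const BurstSummary& b) {
    if (b.l0tpTriggersSent == b.farmEventsReceived) return true;
    const long long missing =
        (long long)b.l0tpTriggersSent - (long long)b.farmEventsReceived;
    fprintf(stderr, "burst %u: %lld trigger(s) unaccounted for "
            "(sent %llu, received %llu)\n", b.burstId, missing,
            (unsigned long long)b.l0tpTriggersSent,
            (unsigned long long)b.farmEventsReceived);
    return false;
}

int main() {
    BurstSummary good = {1001, 150000, 150000};
    BurstSummary bad  = {1002, 150000, 149998};
    checkBurst(good);
    checkBurst(bad);   // would flag the burst for further inspection
}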

DATA AND FARM
- What is missing to perform a detector time alignment in a fast and stable way?
- What kind of diagnostics is available to check data integrity? Internal data consistency? Consistency between detectors? Absence of data transmission failures? When is such information used? When is a burst marked as bad?
- What kinds of errors were identified in data? Do we have the tools to identify the source of the errors?
- Do we have a common official automatic checking routine? When and where is it run (e.g. L1)? Who maintains it? How are shifters informed? (A possible structure is sketched after this list.)
- Is it useful to extract some information from the EOB events to fill/update a database? Where and when should this be done?
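
A common automatic checking routine could be structured as a list of named pass/fail checks whose combined result decides the burst flag; the check names and thresholds below are invented for illustration and would in practice come from the detector groups:

// Sketch of a burst quality routine: run every check, log each
// failure, and return the overall good/bad flag for the burst.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Burst { int id; double eventLossFraction; bool eobPresent;
               bool timestampsMonotonic; };

struct Check { std::string name; std::function<bool(const Burst&)> pass; };

bool evaluateBurst(const Burst& b, const std::vector<Check>& checks) {
    bool good = true;
    for (const auto& c : checks)
        if (!c.pass(b)) {
            printf("burst %d: FAILED %s\n", b.id, c.name.c_str());
            good = false;   // keep going: record every failure, not just the first
        }
    return good;  // the flag would then be written to the run database
}

int main() {
    std::vector<Check> checks = {
        {"EOB block present",    [](const Burst& b){ return b.eobPresent; }},
        {"event loss < 1%",      [](const Burst& b){ return b.eventLossFraction < 0.01; }},
        {"monotonic timestamps", [](const Burst& b){ return b.timestampsMonotonic; }},
    };
    Burst b{2042, 0.03, true, true};
    printf("burst %d marked %s\n", b.id,
           evaluateBurst(b, checks) ? "GOOD" : "BAD");
}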

L1/L2
- Where is the information on the version and cuts applied in L1/L2 for a given run stored and made available?
- Do we have a procedure to process (offline) the collected data through L1 and L2?
- Do we have a procedure to process the MC data through L1 and L2?
- Do we have a flexible downscaling mechanism in L1/L2? Where are its settings set/recorded? (A minimal scheme is sketched below, after the next list.)

RECONSTRUCTION/MONITORING
- Do we have a standard official reconstruction code?
- Do we have a standard official online version to be used for online monitoring? How should this be improved?
- Do the existing programs need further integration?
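
On the downscaling question in the L1/L2 list above, a minimal per-trigger-mask scheme (accept one event in every N, with N taken from the recorded run configuration so the setting stays with the run conditions) could look as follows; mask names and factors are illustrative:

// Minimal per-trigger-mask downscaling sketch.
#include <cstdio>
#include <map>
#include <utility>

class Downscaler {
    std::map<int, int>  factor_;  // trigger mask -> downscaling factor N
    std::map<int, long> count_;   // events seen so far per mask
public:
    explicit Downscaler(std::map<int, int> factors)
        : factor_(std::move(factors)) {}
    // Returns true if the event should be kept at L1/L2.
    bool accept(int mask) {
        const long n = count_[mask]++;
        const int f = factor_.count(mask) ? factor_[mask] : 1; // default: keep all
        return n % f == 0;
    }
};

int main() {
    Downscaler ds({{0 /*physics*/, 1}, {1 /*control*/, 100}});
    int keptControl = 0;
    for (int i = 0; i < 1000; ++i)
        if (ds.accept(1)) ++keptControl;
    printf("control triggers kept: %d of 1000\n", keptControl);  // -> 10
}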

TESTS
- Which tools are available to stress the system as a whole without beam? What else is needed, and how can it be implemented? (One possible approach is sketched after this list.)
- Which significant test procedures can be performed without beam?
- What is the best and most effective schedule for collective tests at CERN in the coming weeks? What should be tested when?
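
One possible shape for a beam-off stress tool is a synthetic burst generator that injects Poisson-distributed hits at a ramping rate until errors or data loss appear; sendHit below is a stand-in for whatever injection path (pattern generators, FW playback, etc.) is actually available:

// Sketch of a beam-off stress driver: generate synthetic bursts of
// random hits at a configurable rate and push them through the chain.
#include <cstdio>
#include <random>

// Placeholder for whatever actually feeds data into the system under test.
void sendHit(int channel, double timeNs) { (void)channel; (void)timeNs; }

void generateBurst(double rateHz, double burstSec, int channels,
                   unsigned seed) {
    std::mt19937 rng(seed);
    std::exponential_distribution<double> gap(rateHz);     // Poisson process
    std::uniform_int_distribution<int> chan(0, channels - 1);
    long hits = 0;
    for (double t = gap(rng); t < burstSec; t += gap(rng)) {
        sendHit(chan(rng), t * 1e9);
        ++hits;
    }
    printf("injected %ld hits at nominal %.1f MHz\n", hits, rateHz / 1e6);
}

int main() {
    // Ramp the injected rate until errors or data loss appear.
    for (double rate = 1e6; rate <= 8e6; rate *= 2)
        generateBurst(rate, 0.1 /*s*/, 128, 12345);
}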