Overview of DAQ at CERN experiments. E. Radicioni, INFN. MICE DAQ and Controls Workshop, 1 September 2005.

Overview: DAQ architectures, implementations, software and tools.

Some basic parameters

Overall architecture. This is LHCb, but at this level they all look the same: buffering on the front-ends while waiting for the Level-0/1 decision, an event-building network, and HLT filtering before storage.

But they DO differ right after the front-ends. In ALICE the entry point is already a standard PC with Ethernet, with a single event-building level; in CMS it is the entry point into a customized Myrinet network, with two event-building levels.

DAQ: taking care of the data flow, building, processing and storage. Get the data onto some network as fast as possible. Custom vs. standard network technology: CMS and ALICE are the two extremes; the others are in between.

CMS approach: only one hardware trigger level, the rest is done in the HLT, which requires high DAQ bandwidth. A flexible trigger, but a rigid DAQ; partitioning is also less flexible. Barrel-shifter approach: deterministic but rigid, on a customized network, with two-stage building.
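As an illustration of the barrel-shifter idea (a minimal sketch, not CMS code): every readout unit sends the fragments of event N to builder unit N mod M in the same fixed rotation, so the traffic pattern across the switch is fully deterministic.

```cpp
// Illustrative sketch of deterministic destination assignment in a
// barrel-shifter event builder; nBuilders is an assumed parameter.
#include <cstdio>

int main() {
    const unsigned nBuilders = 8;                      // assumed number of builder units
    for (unsigned eventId = 0; eventId < 24; ++eventId) {
        unsigned destination = eventId % nBuilders;    // same fixed rotation on every readout unit
        std::printf("event %u -> builder unit %u\n", eventId, destination);
    }
    return 0;
}
```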

ALICE approach: the HLT is embedded in the DAQ. There is more than one hardware trigger level, so changing the trigger is not straightforward, but the DAQ can be very flexible using standard technology. Easy to partition down to the level of a single front-end. (Figure label: HLT / Monitoring.)

ALICE HLT

List of DAQ functions (one may expect):
– Run control (state machine) with a GUI: configure the DAQ topology, select the trigger, start/stop (by the user or by the DCS), communicate status to the DCS.
– Partitioning; its importance cannot be stressed enough.
– A minimal set of hardware-access libraries (VME, USB, S-LINK), and ready-to-use methods to initialize the interfaces.
– Data flow: push (or pull) data from the front-ends to storage via one or more layers of event building.
– DAQ performance checks with a GUI.
– Data quality monitoring (or a framework to do it); the GUI is most likely external.
– Logging of DAQ-generated messages, with a GUI to select and analyze the logs.

What can you expect to be able to use out of these systems? MICE is essentially a test-beam system, so: who provides the best "test beam" system? Reduced scale, but keeping the rich functionality; software already available; not only a framework, but also ready-to-use applications and GUIs.

All experiments more or less implement this: the main data flow runs at the kHz scale, with a spied subset of the data used for monitoring. This is also good for test beams (~1 kHz). Support for test beams varies from one experiment to the other, from a bare-bones system (just the framework) to full-fledged support.

CMS: a public-domain framework called XDAQ. It is just a framework (data and message passing, event builder); for the rest, you are on your own. ALICE tends to support its detector teams with a full set of DAQ tools: partitioning, data transport, monitoring, logging. ATLAS is similar to ALICE in this respect; however, at the time of the HARP construction it was not yet ready for release to external groups.

Readout: a clear separation of the readout and recording functions. The readout runs at high priority (or in real time), the recorder at low priority (quasi-asynchronously). A large memory buffer accommodates fluctuations.
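A minimal sketch of this readout/recorder split, assuming a simple in-memory queue (illustrative only, not any experiment's actual code): the readout thread pushes events into a deep buffer and the recorder drains it to storage at its own pace, so the readout never blocks on the disk.

```cpp
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

struct Event { std::vector<char> payload; };

class EventBuffer {                                    // the "large memory buffer"
public:
    void push(Event ev) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push_back(std::move(ev));
        cv_.notify_one();
    }
    Event pop() {                                      // blocks until an event is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Event ev = std::move(q_.front());
        q_.pop_front();
        return ev;
    }
private:
    std::deque<Event> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    EventBuffer buffer;
    std::thread readout([&] {                          // would run at high / real-time priority
        for (int i = 0; i < 10; ++i) buffer.push(Event{std::vector<char>(1024)});
    });
    std::thread recorder([&] {                         // would run at low priority, quasi-asynchronously
        for (int i = 0; i < 10; ++i) {
            Event ev = buffer.pop();
            std::printf("recorded %zu bytes\n", ev.payload.size());
        }
    });
    readout.join();
    recorder.join();
    return 0;
}
```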

User-provided functions: a clear, simple way for the user to initialize and read out their own hardware.
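In practice this usually means the framework owns the run loop and the detector group only fills in a few equipment-specific callbacks. A hedged sketch, with an invented interface name (UserReadout) and methods that merely illustrate the idea:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Invented interface, for illustration only: the framework calls these hooks,
// the detector group implements them for its own hardware (VME, USB, S-LINK, ...).
struct UserReadout {
    virtual ~UserReadout() = default;
    virtual void armHardware() = 0;                              // once at start of run
    virtual std::size_t readEvent(std::uint8_t* buf,
                                  std::size_t maxLen) = 0;       // fill one event fragment
    virtual void disarmHardware() = 0;                           // once at end of run
};

// Hypothetical example of what a detector group would supply:
struct DummyEquipment : UserReadout {
    void armHardware() override {}
    std::size_t readEvent(std::uint8_t* buf, std::size_t maxLen) override {
        const std::size_t n = maxLen < 16 ? maxLen : 16;         // pretend 16 bytes came from the hardware
        std::memset(buf, 0, n);
        return n;
    }
    void disarmHardware() override {}
};
```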

Event builder: running on a standard PC, able to perform at the same time:
– global or partial on-demand building, with event-building strategies matched to the trigger configurations;
– event consistency checks;
– recording to disk;
– serving events to subscribers (e.g. monitoring), with basic data selections;
– possibly with multi-staging after the event building.
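A minimal, illustrative event-building sketch (not a real implementation): fragments from several front-ends are grouped by event number, and once every expected source has contributed, the complete event is handed on for recording or for serving to monitoring subscribers.

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

struct Fragment  { std::uint32_t eventId; std::uint32_t sourceId; std::vector<char> data; };
struct FullEvent { std::uint32_t eventId; std::vector<Fragment> fragments; };

class EventBuilder {
public:
    explicit EventBuilder(std::size_t nSources) : nSources_(nSources) {}

    // Returns true and fills 'out' when all expected fragments of an event have arrived.
    bool addFragment(Fragment f, FullEvent& out) {
        auto& frags = pending_[f.eventId];
        frags.push_back(std::move(f));
        if (frags.size() < nSources_) return false;              // event still incomplete
        out.eventId   = frags.front().eventId;
        out.fragments = std::move(frags);
        pending_.erase(out.eventId);
        return true;                                             // complete: record it, serve it to monitors
    }
private:
    std::size_t nSources_;
    std::map<std::uint32_t, std::vector<Fragment>> pending_;     // fragments waiting for their siblings
};
```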

Event format: headers and payloads, one payload per front-end. Precise time-stamping, numbering and IDs in the header of each payload.
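For illustration only (this is not any experiment's real format), a fragment header might look like the following; the fields mirror what the slide asks for, so that the event builder can align and validate fragments.

```cpp
#include <cstdint>

#pragma pack(push, 1)
struct FragmentHeader {
    std::uint32_t magic;         // fixed marker, to detect corruption or misalignment
    std::uint32_t eventNumber;   // event numbering, assigned at the trigger / readout
    std::uint32_t sourceId;      // which front-end produced this payload
    std::uint64_t timestamp;     // precise time stamp, used for cross-checks during building
    std::uint32_t payloadSize;   // number of payload bytes that follow this header
};
#pragma pack(pop)
```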

Ready-to-use GUIs. Run control should be implemented as a state machine for proper handling of state changes: configure and partition; set the run parameters and connect; select the active processes and start them; start/stop.
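A minimal run-control state machine sketch; the states and transition names are illustrative, not those of any particular framework. The point is that every command is only legal from a well-defined state, and anything else drops the system into an error state.

```cpp
#include <stdexcept>

enum class RunState { Idle, Configured, Connected, Running, Error };

class RunControl {
public:
    void configure() { require(RunState::Idle);       state_ = RunState::Configured; }  // configure & partition
    void connect()   { require(RunState::Configured); state_ = RunState::Connected;  }  // set run parameters, connect
    void start()     { require(RunState::Connected);  state_ = RunState::Running;    }  // start the active processes
    void stop()      { require(RunState::Running);    state_ = RunState::Connected;  }
    RunState state() const { return state_; }
private:
    void require(RunState expected) {
        if (state_ != expected) {
            state_ = RunState::Error;                            // any illegal command drops us into Error
            throw std::runtime_error("illegal run-control transition");
        }
    }
    RunState state_ = RunState::Idle;
};
```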

Run control: one run-control agent per DAQ "actor".

Run-control partitioning. Warning: to take advantage of DAQ partitioning, the trigger also has to support partitions; this is a requirement on the TRIGGER system.
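As a sketch of what a partition definition has to carry (the structure and field names are invented for illustration): a partition is a named subset of DAQ actors together with the trigger sources it is allowed to use, which is exactly why the trigger system must support partitioning too.

```cpp
#include <string>
#include <vector>

// Invented structure, for illustration: if the trigger cannot deliver triggers
// per partition, the last field cannot be honoured no matter how flexible the DAQ is.
struct Partition {
    std::string name;                          // e.g. a hypothetical "tracker-standalone"
    std::vector<std::string> frontEnds;        // readout / event-building actors in this partition
    std::vector<std::string> triggerSources;   // triggers this partition is allowed to use
};
```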

Logging: informative logging with an effective user interface; log filtering and archiving. Run-statistics collection and reporting are also useful.
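A hedged sketch of the logging side (invented classes, not a real message system): DAQ processes emit tagged messages, and a central collector filters them by severity, archives them for later analysis, and presents them to the operator.

```cpp
#include <iostream>
#include <string>
#include <vector>

enum class Severity { Info, Warning, Error, Fatal };

struct LogMessage { Severity sev; std::string source; std::string text; };

class LogCollector {
public:
    explicit LogCollector(Severity threshold) : threshold_(threshold) {}
    void report(const LogMessage& m) {
        if (m.sev < threshold_) return;                    // filtering: drop messages below the threshold
        archive_.push_back(m);                             // archiving: keep what was shown, for later analysis
        std::cerr << m.source << ": " << m.text << '\n';   // a real system would feed a GUI instead
    }
private:
    Severity threshold_;
    std::vector<LogMessage> archive_;
};
```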

Monitoring: a ready-to-use monitoring architecture, with a monitoring library as the software interface. Depending on the DAQ system, a global monitoring GUI (to be extended for specific needs) might already be available.
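The clean separation between DAQ services and monitoring clients can be sketched as follows (illustrative API, not a real library): the DAQ publishes a prescaled subset of the built events, and monitoring clients simply register callbacks, so they never sit in the main data path.

```cpp
#include <functional>
#include <vector>

struct FullEvent {};                           // stands in for a fully built event

class MonitoringHub {
public:
    using Consumer = std::function<void(const FullEvent&)>;

    void subscribe(Consumer c) { consumers_.push_back(std::move(c)); }

    // Called by the DAQ for every built event; only a prescaled subset is spied.
    void publish(const FullEvent& ev) {
        if (++seen_ % prescale_ != 0) return;  // sampling keeps monitoring out of the critical path
        for (auto& c : consumers_) c(ev);
    }
private:
    std::vector<Consumer> consumers_;
    unsigned long seen_ = 0;
    const unsigned prescale_ = 100;            // assumed sampling factor
};
```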

Recent trends: the ECS. An overall controller of the complete status of the experiment, including the DCS. A partitionable state machine and configuration databases. Usually interfaced to PVSS, but EPICS should also be possible.

Conclusions: never underestimate...
– Users are not experts: provide them the tools to work and to report problems effectively.
– Flexible partitioning.
– Event building with accurate fragment alignment and validity checks, state reporting, and reliability.
– Redundancy / fault tolerance.
– A proper run control with a state machine, simplifying for the users the tasks of partitioning, configuring, trigger selection, start, and stop.
– A good monitoring framework, with a clear-cut separation between DAQ services and monitoring clients.
– Extensive and informative logging.
– GUIs.