Dynamic configuration of the CMS Data Acquisition Cluster
Hannes Sakulin, CERN/PH, on behalf of the CMS DAQ group


Slide 1: Dynamic configuration of the CMS Data Acquisition Cluster
Hannes Sakulin, CERN/PH, on behalf of the CMS DAQ group. Presented at CHEP '09, 23 March 2009.
Part 1: Configuring the CMS DAQ Cluster
Part 2: Creating DAQ Configurations

Slide 2: The Compact Muon Solenoid Experiment
[Detector figure, labels: Drift-Tube Chambers, Cathode Strip Chambers, Resistive Plate Chambers, Iron Yoke, 4 T Superconducting Coil, Tracker (Si Strip, Si Pixel), Electromagnetic Calorimeter, Hadronic Calorimeter]
- LHC: p-p collisions, E_CM = 14 TeV, bunch-crossing frequency 40 MHz
- CMS: multi-purpose detector with a broad physics programme
- 55 million channels to read out, 1 MB event size after zero suppression
- 100 kHz Level-1 accept rate = DAQ input rate
- High-Level Trigger implemented in software running on the DAQ cluster
- DAQ input throughput: 100 GB/s

Slide 3: CMS Central DAQ System
[Diagram] Data from the detector flows through the Front-End Drivers (~500) and ~650 Readout Links into the Super-Fragment Builder (Myrinet, 8x8 switch fabric); Fast Merging Modules (FMM) implement the Trigger Throttling System feeding back to the Level-1 Trigger.
- Input stage: custom electronics controlled by PCs through Compact PCI
- Event Builder slices (8x): software on a PC farm interconnected by Gigabit Ethernet

Slide 4: Event Builder Slice
[Diagram] One slice consists of an Event Manager (EVM), Readout Units (RU, numbered 0 to ~71) and Builder/Filter units (BF, numbered 0 to ~71) connected by a Gigabit Ethernet switch with multiple rails, plus Storage Managers (SM). Super-fragments arrive over Myrinet; accepted events go to mass storage and on to Tier-0.
- Builder/Filter Unit: 8 cores running 1 builder process and 7 filter (= High-Level Trigger) processes

Slide 5: Event Builder Slice (continued)
Same diagram as slide 4; the software components on all hosts are based on the XDAQ infrastructure.

Slide 6: CMS Central DAQ Online Software
The DAQ online software is based on a common infrastructure, XDAQ (see the talk by J. Gutleber at the start of the session):
- XDAQ executives (= processes)
- XDAQ applications, one or more per executive
- data transport protocols
- hardware access library
- highly configurable through XML documents
The XML document given to an executive determines:
- the role of the executive
- the libraries to be loaded
- the applications and their parameters
- the network connections
- the collaborating applications
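To make the role of these XML documents concrete, here is a minimal sketch, in Java with only the JDK DOM parser, of what such an executive configuration could look like and how a tool might read it. The element names, library name, application class, parameter and host names are illustrative placeholders, not the actual XDAQ schema.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ExecutiveConfigSketch {
    // Hypothetical configuration document for one executive: which shared
    // libraries to load, which applications to instantiate with which
    // parameters, and which network endpoints to open.
    static final String XML = """
        <executive host="ru-c2d16-01" port="1972">
          <module>libptatcp.so</module>
          <application class="RU" instance="0">
            <property name="fragmentFIFOCapacity">128</property>
          </application>
          <endpoint network="gbe0" peer="bu-c2d33-07:1972"/>
        </executive>
        """;

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(XML.getBytes(StandardCharsets.UTF_8)));
        NodeList apps = doc.getElementsByTagName("application");
        for (int i = 0; i < apps.getLength(); i++) {
            Element app = (Element) apps.item(i);
            System.out.println("instantiate " + app.getAttribute("class")
                + " instance " + app.getAttribute("instance"));
        }
    }
}
```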

Slide 7: Dynamic Configuration of Hosts
[Diagram] Run Control (Java / Tomcat) loads the configuration from the Resource Service (an Oracle database, accessed via JDBC) and sends each host's XML configuration via SOAP to a job control service running on every host in the CMS DAQ cluster; job control starts the XDAQ executives and their applications. Application libraries are installed locally on all PCs.
The XDAQ configuration determines:
- the configuration of the custom hardware
- the configuration of the Super-Fragment Builder
- the event data flow topology in the Event Builders
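As an illustration of this mechanism, the sketch below sends a SOAP command of this kind to a job control service using the standard SAAJ API (javax.xml.soap, bundled with Java 8). The command name, message fields, URLs and port are assumptions made for illustration; the slide does not show the actual job control protocol.

```java
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPMessage;

public class StartExecutiveSketch {
    public static void main(String[] args) throws Exception {
        // Build a SOAP request asking job control to start one executive.
        SOAPMessage msg = MessageFactory.newInstance().createMessage();
        SOAPElement cmd = msg.getSOAPBody()
            .addChildElement("startXdaqExe", "jc", "urn:jobcontrol");  // hypothetical command
        cmd.addChildElement("execPath").addTextNode("/opt/xdaq/bin/xdaq.exe");
        cmd.addChildElement("configURL").addTextNode("http://runcontrol:8080/cfg/ru-01.xml");

        // Deliver it to the (assumed) job control endpoint on the target host.
        SOAPConnection conn = SOAPConnectionFactory.newInstance().createConnection();
        conn.call(msg, "http://ru-c2d16-01:9999");
        conn.close();
    }
}
```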

Slide 8: Configuration Structure in the Resource Service
[Diagram] A configuration is a hierarchy of Run Control Function Managers (FM 1, FM 2, ...). Function managers are loadable Java modules, each defined by a URL, a source URL and parameters, that control parts of the DAQ system. Through control connections and job control, each function manager drives a set of XDAQ executives; each executive is described by an XML configuration document listing its applications and their properties, as well as the XDAQ I2O & SOAP connections between applications. The whole structure exists both as a relational database model and as a Java object model.
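A minimal sketch of how the Java side of this dual relational / object model might look; the class and field names are assumptions based on the entities named on the slide (function managers with URL, source URL and parameters; executives described by XML with their applications).

```java
import java.util.List;
import java.util.Map;

// Function managers carry a URL, a source URL and parameters, as on the slide;
// everything below them mirrors the executive/application structure.
record Application(String className, int instance, Map<String, String> properties) {}
record Executive(String host, int port, List<String> libraries, List<Application> applications) {}
record FunctionManager(String url, String sourceUrl, Map<String, String> parameters,
                       List<FunctionManager> children, List<Executive> executives) {}
record Configuration(String name, int version, FunctionManager root) {}
```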

Slide 9: Resource Service Organization
- Configurations are organized in a tree
- Configurations are versioned
- XML configuration documents are stored in a relational scheme
- Serialized Java objects are additionally stored in the database for performance reasons
- The Resource Service and Run Control are used by all CMS subsystems
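A sketch of the "serialized Java objects for performance" idea: alongside the relational rows, the assembled object tree is written once as a serialized blob, so loading a configuration is a single read instead of many joins. The table and column names here are assumptions for illustration.

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class ConfigCache {
    // Store a pre-built, serialized configuration object next to its
    // relational representation (hypothetical table config_blob).
    static void cache(Connection db, String name, int version, Serializable config)
            throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(config);  // the whole object tree in one blob
        }
        try (PreparedStatement stmt = db.prepareStatement(
                "INSERT INTO config_blob (name, version, data) VALUES (?, ?, ?)")) {
            stmt.setString(1, name);
            stmt.setInt(2, version);
            stmt.setBytes(3, bytes.toByteArray());
            stmt.executeUpdate();
        }
    }
}
```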

Slide 10: Dynamic Configuration of the DAQ Cluster
A typical central DAQ configuration comprises:
- 5 Run Control Tomcat servers
- 50 Function Managers
- 1500 hosts
- 6500 executives (processes)
- 10000 applications
System startup takes 35 seconds (further optimizations are on the way), covering:
- loading of the configuration from the Resource Service
- loading and creation of the Function Managers
- starting all XDAQ executives and applications on all hosts

Slide 11: Part 2: How do we fill the Resource Service?

Slides 12-16: The Problem: Configuration Needs to Be Flexible
(Slides 12 to 16 build up the following points step by step.)
The system shown so far is an ideal system. In real life there are always hardware problems in a small fraction of the system, so we need to quickly adapt the configuration.
We are not always using the full system:
- staged deployment
- maintenance of part of the cluster
- parallel test runs / partitioned operation
so we need to scale the configuration, and may also change the roles of hosts.
Software keeps evolving and parameters are optimized:
- super-fragment composition
- number of network rails, buffer sizes, ...
so we need to adjust the configuration.
In short: we frequently need to change the configuration.

Slide 17: Quick Analysis: How Do Configurations Differ?
Largely constant across configurations (captured in a Configuration Template):
- the control structure, although some parts need to be duplicated for each Event Builder slice
- the composition of functional units (e.g. a Readout Unit), which evolves slowly with new software releases
Varying frequently (captured in a High-Level Configuration Layout):
- the multiplicity, location and connectivity of functional units
Requiring algorithmic computation:
- parameters of the custom electronics: which Readout Links and Merging Modules to include (crate, slot), which channels to include or mask out
- routing of the Myrinet Super-Fragment Builder: which input addresses send data to which output addresses, respecting the constraints of the switch fabric in order to be non-blocking
- network connections in the Event Builder: for each host, the collaborating hosts and the network to use for each connection

Slide 18: The Solution: DAQ Configurator
[Diagram] The DAQ Configurator, a standalone Java Web Start application, combines a High-Level DAQ Configuration Layout (from the Hardware & Configuration DB) with a Configuration Template (from the Configuration Template DB) and writes the resulting XML DAQ configuration to the Resource Service DB.

Slide 19: The Solution: DAQ Configurator (continued; same diagram as slide 18)

Slide 20: Configuration Template DB
- Configuration templates are organized in a tree
- Template control structure: a hierarchy of Run Control Function Manager templates (Template FM 1, Template FM 2, ...)
- Template functional units: map to XDAQ executives and group the template XDAQ applications
- Template application parameters: placeholders, filled from the high-level configuration layout or by computation
- Template connections: I2O / SOAP, single / multi rail, global / local to a slice

Slide 21: Configuration Templates Are Parameterizable
Variable substitution: ${varName}, ${arithmeticExpression}
Three levels (sets of variables):
- site settings (production system, test beds): location of libraries, addresses of services, etc.
- account settings: run control installation, port ranges, etc.
- user input before creation of the configuration: dataset label, test options (e.g. drop data at a certain stage)
As a result, the same configuration template can be used for the test setups and for the real DAQ system.
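A minimal sketch of this three-level substitution (arithmetic expressions omitted for brevity). The variable names, values and host name are made up, but the layering follows the slide: site settings are overridden by account settings, which are overridden by user input.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplateSubstitution {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)}");

    // Later variable sets override earlier ones:
    // site settings < account settings < user input.
    @SafeVarargs
    static String substitute(String template, Map<String, String>... levels) {
        Map<String, String> vars = new LinkedHashMap<>();
        for (Map<String, String> level : levels) vars.putAll(level);
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = vars.get(m.group(1));
            // Leave unresolved placeholders untouched so they are easy to spot.
            m.appendReplacement(out, Matcher.quoteReplacement(value != null ? value : m.group()));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String url = substitute(
            "http://${host}:${port}/urn:xdaq-application:lid=${lid}",
            Map.of("host", "ru-c2d16-01", "port", "1972"),  // site settings
            Map.of("port", "2972"),                         // account settings (override)
            Map.of("lid", "11"));                           // user input
        System.out.println(url);  // http://ru-c2d16-01:2972/urn:xdaq-application:lid=11
    }
}
```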

Slide 22: Typical CMS DAQ Software Template
[Diagram] Template control structure: a DAQ FM on top; below it, duplicated per slice, a SLICE FM with FEDBuilder FM, TTS FM, RUBuilder FM and HLTS FM.
Template functional units and their XDAQ applications:
- FRL Controller: Front-End Readout Link crate
- FMM Controller: Fast Merging Module crate
- Readout Unit: FBO, RU, pheaps, ptATCP
- Event Manager: FBO, EVM, pheaps, ptATCP
- Builder Unit: BU, ptATCP
- Event Processor: Filter Unit, Resource Broker
- Storage Manager
Template connections use ptATCP.

Slide 23: DAQ Configurator Workflow (overview; same diagram as slide 18)

Slide 24: DAQ Hardware & Configuration DB
Three layers, stored in a relational scheme (with equipment sets for different DAQ setups and their evolution over time):
- Equipment Set: hosts & networks, custom hardware, all cabling. Maintained with an admin tool; updated when hardware or cabling changes (on average every 2 weeks so far).
- Super-Fragment Builder Set (filled by a Java filler): the Front-End Drivers to read out and their grouping. Updated to balance the Super-Fragment Builders.
- DAQ Partition Set (filled by a Java filler): the definition of the slices: Event Manager, Readout Units, Builder/Filter Units, Storage Managers. Updated to scale the system or to exclude faulty hardware.
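The three layers could be modeled roughly as below (all type and field names assumed; the real relational scheme on the slide is richer). Each layer references the one beneath it, so scaling the system or excluding faulty hardware only touches the top layer.

```java
import java.util.List;

// Equipment Set: what exists (hosts, networks, custom hardware cabling).
record EquipmentSet(List<String> hosts, List<String> networks, List<String> cabling) {}

// Super-Fragment Builder Set: which Front-End Drivers to read out, grouped
// into super-fragments, defined on top of one Equipment Set.
record SuperFragmentBuilderSet(EquipmentSet equipment, List<List<Integer>> fedGroups) {}

// DAQ Partition Set: the slice definition (EVM, RUs, BUs, SMs) on top of one
// Super-Fragment Builder Set; edited to scale the system or exclude hardware.
record DaqPartitionSet(SuperFragmentBuilderSet sfbSet, String eventManager,
                       List<String> readoutUnits, List<String> builderUnits,
                       List<String> storageManagers) {}
```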

Slide 25: CMS DAQ Hardware and Configuration Database Organization
The database is organized into A) Equipment Sets, B) Super-Fragment Builder Sets and C) DAQ Partition Sets. Directories may be used for grouping at all levels.

Slide 26: DAQ Configurator Workflow (detail)
The DAQ Configurator (Java) processes a generic high-level layout (Slice / Unit / SubUnit / Parameter, taken from the Hardware & Configuration DB plus user input) in four steps:
- Duplicator: duplicates the template functional units for each unit in the high-level layout, parameterizing the template with parameters from the layout.
- Compute configuration details: computes the configuration of the custom electronics using connectivity information from the Equipment Set: Readout Links (hosts, slots, masks, expected IDs), Trigger Throttling (hosts, slots, masks, mappings), Super-Fragment Builder (input & output addresses).
- Router: connects sources with targets, globally or within a slice, over SOAP / I2O, on single or multiple rails, with different routing algorithms.
- XML Generator: generates an XML file for each XDAQ executive and writes the configuration to the Resource Service DB.
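A schematic sketch of the duplicate / route / generate steps. The class names, the naive all-to-all routing and the XML stub are simplifications of what the slide describes; the real Router supports several routing algorithms, rail assignment and per-slice scope.

```java
import java.util.ArrayList;
import java.util.List;

public class ConfiguratorPipeline {
    record TemplateUnit(String name, String applicationClass) {}
    record LayoutEntry(String unitName, String host, int port) {}
    record ExecutiveConfig(String host, int port, String applicationClass, List<String> peers) {}

    // 1) Duplicator: stamp out one executive per unit in the high-level layout,
    //    taking host/port parameters from the layout entry.
    static List<ExecutiveConfig> duplicate(List<TemplateUnit> templates, List<LayoutEntry> layout) {
        List<ExecutiveConfig> execs = new ArrayList<>();
        for (LayoutEntry entry : layout)
            for (TemplateUnit t : templates)
                if (t.name().equals(entry.unitName()))
                    execs.add(new ExecutiveConfig(entry.host(), entry.port(),
                            t.applicationClass(), new ArrayList<>()));
        return execs;
    }

    // 2) Router: connect sources with targets (naive all-to-all here).
    static void route(List<ExecutiveConfig> sources, List<ExecutiveConfig> targets) {
        for (ExecutiveConfig src : sources)
            for (ExecutiveConfig dst : targets)
                src.peers().add(dst.host() + ":" + dst.port());
    }

    // 3) XML Generator: one document per executive (content elided).
    static String toXml(ExecutiveConfig exec) {
        return "<executive host=\"" + exec.host() + "\" port=\"" + exec.port() + "\"> ... </executive>";
    }
}
```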

Slide 27: Creating a Configuration Step by Step
1) Select a DAQPartitionSet from the Hardware & Configuration DB
2) Select site settings
3) Select account settings
4) Select a configuration template (a graphical editor for software templates is available)
5) Optionally adjust variables
6) Select the output location in the Resource Service configuration
7) Go!
8) Optionally visualize & edit the configuration
Timing report: typical time to create a full 8-slice DAQ configuration (10000 applications): seconds, depending on database load.

Slide 28: Configuration Display (mid-size configuration)
Useful for debugging small configurations.

Slide 29: The Configurator in Practice
The Configurator has been used successfully to create all central DAQ configurations:
- for the DAQ test setups (development & integration)
- for CMS commissioning and cosmic data taking since 2006
- for the first LHC beam data taking in September 2008
[Event displays] Debris from particles hitting the collimator blocks (first LHC beam, September 2008); a cosmic muon crossing CMS (Magnet Test, August 2006).

Slide 30: Summary
Configuration of the CMS DAQ cluster needs to be highly flexible.
The Run Control System configures the DAQ cluster dynamically at the start of a session:
- configuration data is loaded from the Resource Service database and distributed via SOAP / XML
- online software is loaded dynamically depending on the configuration
We developed a powerful tool to create configurations for the CMS DAQ system, the CMS DAQ Configurator:
- high-level configuration layout + configuration template = configuration
- algorithmic computation of configuration details using an underlying database of connectivity information
- fast turn-around time
- successfully in use since 2006