EPS 2007 Alexander Oh, CERN 1 The DAQ and Run Control of CMS EPS 2007, Manchester Alexander Oh, CERN, PH-CMD On behalf of the CMS-CMD Group

EPS 2007 Alexander Oh, CERN 2 DAQ

EPS 2007 Alexander Oh, CERN 3 DAQ requirements and Architecture
–Collision rate: 40 MHz
–Two-level trigger system
–Level-1 maximum trigger rate: 100 kHz
–Average event size: 1 MByte
–HLT acceptance: 1-10%
–No. of In-Out units: 500
–Network aggregate throughput: 1 Terabit/s
–Event filter computing power: … MIPS
–Data production: Tbyte/day
–No. of PCs: 3000

EPS 2007 Alexander Oh, CERN 4 Two-Stage Event Builder
First stage
–Combine 8 fragments into 1 super-fragment
Second stage
–Combine 72 super-fragments into 1 event

EPS 2007 Alexander Oh, CERN 5 Two-Stage Event Builder
Sizes & multiplicities:
–~600 fragments, average size ~2 kB
–72 super-fragments, average size ~16 kB
–1 event, average size ~1 MB
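To make the two assembly stages concrete, the following is a minimal sketch in Java (chosen only because the Run Control slides later mention Java); it is not the actual XDAQ event-builder code, all class and method names are invented, and it only models the concatenation of fragments into super-fragments and events with the multiplicities and sizes quoted above.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of two-stage event building; not the real XDAQ code. */
public class TwoStageEventBuilderSketch {

    /** Stage 1 (FED builder): merge 8 front-end fragments (~2 kB each) into one super-fragment (~16 kB). */
    static byte[] buildSuperFragment(List<byte[]> fedFragments) {
        int total = fedFragments.stream().mapToInt(f -> f.length).sum();
        byte[] superFragment = new byte[total];
        int offset = 0;
        for (byte[] f : fedFragments) {
            System.arraycopy(f, 0, superFragment, offset, f.length);
            offset += f.length;
        }
        return superFragment;
    }

    /** Stage 2 (RU builder): merge 72 super-fragments into one full event (~1 MB). */
    static byte[] buildEvent(List<byte[]> superFragments) {
        return buildSuperFragment(superFragments); // same concatenation logic
    }

    public static void main(String[] args) {
        // 72 x 8 = 576 fragments of ~2 kB -> 72 super-fragments of ~16 kB -> 1 event of ~1 MB
        List<byte[]> superFragments = new ArrayList<>();
        for (int sf = 0; sf < 72; sf++) {
            List<byte[]> fragments = new ArrayList<>();
            for (int i = 0; i < 8; i++) fragments.add(new byte[2 * 1024]);
            superFragments.add(buildSuperFragment(fragments));
        }
        byte[] event = buildEvent(superFragments);
        System.out.printf("event size ~ %d kB%n", event.length / 1024);
    }
}
```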

EPS 2007 Alexander Oh, CERN 6 Two-Stage Event Builder: Super Fragment Builder
Myrinet NIC
–Common technology in high-performance computing
–2 bi-directional optical links, 2 Gb/s
–200 m cable from cavern to DAQ PCs
–RISC processor, custom programmed
Cross-bar switches (32x32)
–Wormhole routing
–Low latency
–Link-level flow control with backpressure
–Loss-less transmission
–Switch not buffered: head-of-line blocking

EPS 2007 Alexander Oh, CERN 7 Two-Stage Event Builder: Super Fragment Builder (pictures of the Myrinet NIC and the cross-bar switches)

EPS 2007 Alexander Oh, CERN 8 Performance: Super Fragment Builder
Plot: throughput tested with different fragment-size distributions.
–500 MB/s line speed (two rails)
–200 MB/s needed
–~300 MB/s at 2 kB fragments (working point)
–Loss with respect to line speed due to head-of-line blocking
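As a rough cross-check of the 200 MB/s figure (my arithmetic, not taken from the slide): each FED Builder input has to sustain the maximum Level-1 rate times the average fragment size, i.e. 100 kHz × 2 kB = 200 MB/s, which is why the working point sits at 2 kB fragments, well below the 500 MB/s two-rail line speed.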

EPS 2007 Alexander Oh, CERN 9 Two-Stage Event Builder: Event Builder
PCs act as "Intelligent Buffers"
–PCI-E Intel-based NIC with 4 GbE ports
–Myrinet input from the Super Fragment Builder
–Gb Ethernet to assemble events and output to the filter farm
–Protocol is TCP/IP
Gb Ethernet switches
–2x Force 10 E1200 switches
–Line cards with 90 GbE ports
–4 ports per machine
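A rough estimate of why four GbE ports per PC are used (my arithmetic, not stated on the slide): with 8 DAQ slices sharing the 100 kHz Level-1 rate, each slice runs at 12.5 kHz, so a readout PC has to forward ~16 kB super-fragments at 12.5 kHz ≈ 200 MB/s ≈ 1.6 Gb/s, which already exceeds a single GbE link before TCP/IP overhead and the output traffic to the filter farm are counted.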

EPS 2007 Alexander Oh, CERN 10 Two-Stage Event Builder: Event Builder
–PC model: Dell PE2950 with 2 dual-core CPUs; ~700 PCs installed
–Gb Ethernet switches: Force 10 E1200

EPS 2007 Alexander Oh, CERN 11 Performance: Event Builder
First test with the final hardware
–64 input PCs x 64 output PCs, 4 GbE ports per PC
–Super-fragments generated in the input PCs
–Events dropped in the output PCs
–Constant fragment size
Requirement well fulfilled!

EPS 2007 Alexander Oh, CERN 12 Two-Stage Event Builder
The two-stage event builder allows a staged installation:
–At each stage the full functionality is provided.
–The performance scales with the available hardware.
The number of Event Builder slices can be varied from 1 to 8:
–Scalability: from 1/8 of the final performance up to the full performance.
–Reliability: the Event Builder slices are functionally redundant.

EPS 2007 Alexander Oh, CERN 13 DAQ Configurations
The configuration of the DAQ is adapted to the commissioning schedule:
–Now: first global runs with some sub-detectors
–Sep '07: Global DAQ commissioning (full read-out, 4 Event Builder slices, reduced filter farm)
–Nov '07 - Apr '08: Global runs with technical and cosmic triggers (full read-out, 2 Event Builder slices, reduced filter farm)
–Jul '08: Physics run (full read-out, 4 Event Builder slices, nominal filter farm)
–Later: High-luminosity runs

EPS 2007 Alexander Oh, CERN 14 Structure: Installed 100 kHz EVB, Read-Out only (2007)
–Data to Surface COMPLETED 100%: 676 FEDs, 512 FRLs, 1536 links, 72 FED Builders
–640 PCs PE2950 (2x 2-core 2 GHz Xeon, 4 GB); ultimate function: EVB-RU at 100 kHz (8 DAQ slices of 12.5 kHz each)
More on commissioning: "Status and Commissioning of the CMS Experiment", Claudia Wulz, today 11h15

EPS 2007 Alexander Oh, CERN 15 Run Control

EPS 2007 Alexander Oh, CERN 16 Requirements & Architecture
Run Control tasks
–Configure
–Control
–Monitor
–Diagnostics
–Interactivity
The framework provides a uniform API to common tasks
–DB for configuration
–State machines for control
–Access to the monitoring system
Objects to manage: ~10000 distributed applications on ~2000 PCs

EPS 2007 Alexander Oh, CERN 17 Architecture The experiment is controlled by a tree of Finite State Machines. "Function Managers" implement the finite state machines. A set of services supports the Function Managers.
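As an illustration of the finite-state-machine idea, here is a minimal sketch in Java; it is not the RCMS framework code, the class and method names are invented, and the state and input names are only plausible run-control commands, not necessarily the exact ones used by CMS.

```java
import java.util.EnumMap;
import java.util.Map;

/** Illustrative finite state machine for a Function Manager; not the real RCMS framework API. */
public class FunctionManagerSketch {

    enum State { INITIAL, HALTED, CONFIGURED, RUNNING, PAUSED, ERROR }
    enum Input { INITIALIZE, CONFIGURE, START, PAUSE, RESUME, STOP, HALT }

    private State state = State.INITIAL;
    private final Map<State, Map<Input, State>> transitions = new EnumMap<>(State.class);

    FunctionManagerSketch() {
        addTransition(State.INITIAL,    Input.INITIALIZE, State.HALTED);
        addTransition(State.HALTED,     Input.CONFIGURE,  State.CONFIGURED);
        addTransition(State.CONFIGURED, Input.START,      State.RUNNING);
        addTransition(State.RUNNING,    Input.PAUSE,      State.PAUSED);
        addTransition(State.PAUSED,     Input.RESUME,     State.RUNNING);
        addTransition(State.RUNNING,    Input.STOP,       State.CONFIGURED);
        addTransition(State.CONFIGURED, Input.HALT,       State.HALTED);
    }

    private void addTransition(State from, Input in, State to) {
        transitions.computeIfAbsent(from, s -> new EnumMap<>(Input.class)).put(in, to);
    }

    /** Apply an input: move to the next state, or to ERROR if the input is not allowed in the current state. */
    public synchronized State fire(Input in) {
        State next = transitions.getOrDefault(state, Map.of()).get(in);
        state = (next != null) ? next : State.ERROR;
        // A real Function Manager would also forward the command to its child FMs and online processes here.
        return state;
    }

    public State getState() { return state; }
}
```

In the real system such state machines are arranged in a tree: a command sent to the top-level Function Manager is propagated down to the sub-system Function Managers and from there to the online processes.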

EPS 2007 Alexander Oh, CERN 18 Architecture: Services
–SECURITY SERVICE: login and user account management
–RESOURCE SERVICE (RS): information about DAQ resources and partitions
–INFORMATION AND MONITOR SERVICE (IMS): collects messages and monitor data and distributes them to the subscribers
–JOB CONTROL: starts, monitors and stops the software elements of RCMS, including the DAQ components

EPS 2007 Alexander Oh, CERN 19 Run Control Implementation & Technologies
Implementation in Java as a Web Application
–Java
–Application server: Tomcat
–Development tool: Eclipse IDE
Database support
–Oracle 10g
–MySQL 5
Web Service interfaces
–WSDL, specified using Axis
–Tested clients: Java, Perl, LabView
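For illustration of what such a Web Service client call can look like with Axis, here is a hypothetical snippet; the endpoint URL, service path and operation name are placeholders invented for the example and do not describe the real RCMS interface.

```java
import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

/** Hypothetical Axis 1.x dynamic-invocation client; URL and operation names are placeholders. */
public class RunControlClientSketch {
    public static void main(String[] args) throws Exception {
        Service service = new Service();
        Call call = (Call) service.createCall();
        // Placeholder endpoint, not the actual RCMS service address.
        call.setTargetEndpointAddress(new java.net.URL("http://rcms-host:8080/rcms/services/FunctionManager"));
        // Placeholder namespace and operation name.
        call.setOperationName(new QName("urn:rcms", "execute"));
        Object reply = call.invoke(new Object[] { "Configure" });
        System.out.println("reply: " + reply);
    }
}
```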

EPS 2007 Alexander Oh, CERN 20 Run Control Framework
Resource Service
–Stores and delivers configuration information of online processes
Function Manager framework
–Finite State Machine engine
–Error handlers
–Process facades
GUI
–Generic JSP-based GUI
–Basic command and state display
Log Collector
–Collects, stores and forwards log4c and log4j messages
Job Control
–Starts, monitors and stops Unix processes

EPS 2007 Alexander Oh, CERN 21 Run Control Deployment
–10 PCs allocated to Run Control; each runs on average two Tomcat instances; a DNS alias makes relocation transparent (load balancing)
–One common online DB shared by all online processes: account separation per sub-system; configuration management with Global Configuration Keys
–The final machines are installed and are being used for the July "Global Run"
(Diagram: Run Control PCs hosting the sub-system instances csc, daq, dt, ecal, es, hcal, pixel, rpc, top, tracker, trigger, dqm, all connected to the Oracle 10g online DB.)

EPS 2007 Alexander Oh, CERN 22 Run Control In Use Run Control has been used successfully for taking data with cosmic muons during the magnet test (MTCC) and during Local and Global Runs for CMS commissioning. More on MTCC: "First Cosmic Data Taking with CMS", Hannes Sakulin, today 11h30

EPS 2007 Alexander Oh, CERN 23 Run Control In Use Typical Times to start a “Run” in the MTCC-II

EPS 2007 Alexander Oh, CERN 24 Finis
DAQ
–The two-stage trigger (Lvl-1 and HLT) requires a 100 GB/s event builder
–Two-stage event building provides a flexible design
–The hardware has been installed and is being commissioned
Run Control
–Hierarchical state machine model
–Built around Web technologies
–In production at test beams and Global Runs

EPS 2007 Alexander Oh, CERN 25 EXTRA

EPS 2007 Alexander Oh, CERN 26 Architecture
The FMs are grouped by level in the control hierarchy:
–Level 0 is the entry point to the experiment and to the Run Control System; the user interacts with it through a Web Browser (GUI).
–Level 1 is the entry point to each sub-system and is standardized: Level 1 FMs interface to the Level 0 FM and have to implement a standard set of inputs and states.
–Level 2 FMs (and additional levels, which are optional) are sub-system specific custom implementations.

EPS 2007 Alexander Oh, CERN 27 Structure
Sep '07 (Global DAQ commissioning)
–640 PCs, 2x 2-core 2 GHz Xeon
–4 DAQ slices (72x72): 288 RUs x 288 BU/FUs
–Maximum Level-1 rate: 50 kHz
–HLT event rate per PC: ~180 Hz (ignoring I/O on the BU/FU PCs)
–22 TB local storage at 1 GB/s

EPS 2007 Alexander Oh, CERN 28 Structure
Nov '07 - Apr '08 (global runs: technical, cosmic)
–640 PCs, 2x 2-core 2 GHz Xeon
–2 DAQ slices (72x200): 144 RUs x 400 BU/FUs
–Maximum Level-1 rate: 20 kHz
–HLT event rate per PC: ~50 Hz (ignoring I/O on the BU/FU PCs)
–22 TB local storage at 1 GB/s

EPS 2007 Alexander Oh, CERN 29 Structure
Jul '08 (physics run)
–4 DAQ slices (72x288)
–Maximum Level-1 rate: 50 kHz
–HLT event rate per PC: ~40 Hz
>2009: high-luminosity runs
–8 DAQ slices, 100 kHz
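For orientation (my arithmetic, not stated on the slides), the quoted per-PC HLT rates follow from dividing the maximum Level-1 rate by the number of BU/FU PCs: 50 kHz / 288 ≈ 174 Hz, 20 kHz / 400 = 50 Hz and 50 kHz / (4 × 288) ≈ 43 Hz, consistent with the ~180 Hz, ~50 Hz and ~40 Hz figures above.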

EPS 2007 Alexander Oh, CERN 30 Architecture: Function Manager
The FM controls a set of resources
–Classic definition: a control system determines its outputs depending on its inputs.
–Resources are online processes implemented in C++.
FM components
–Input module
–Output module
–Data access module: fetches configuration data
–Processor module: processes incoming messages
–Finite State Machine
Standardization
–Sub-detectors implement a defined Finite State Machine
–Facilitates integration

EPS 2007 Alexander Oh, CERN 31 DAQ Event Building
–Event fragments: event data fragments are stored in separate physical memory systems (the sources).
–Event builder: the physical system interconnecting the data sources with the data destinations; it has to move all fragments of an event to the same destination.
–Full events: full event data are stored in one physical memory system associated with a processing unit (the destination).

EPS 2007 Alexander Oh, CERN 32 Trigger Level-1
Level-1 trigger: reduce 40 MHz to 10^5 Hz; downstream, the High-Level Trigger still needs to get to ~10^2 Hz.
"Traditional" architecture: 3 physical trigger levels (Lvl-1, Lvl-2, Lvl-3) acting between detectors, front-end pipelines, readout buffers, switching network and processor farms.
CMS: 2 physical levels (Lvl-1 and HLT) with the same chain of detectors, front-end pipelines, readout buffers, switching network and processor farms.