MPD Data Acquisition System: Architecture and Solutions


MPD Data Acquisition System: Architecture and Solutions
Ilya Slepnev and the VBLHEP DAQ Group
Joint Institute for Nuclear Research, Dubna, Russia
NICA days 2015, 3 – 7 November 2015, Warsaw

Outline

MPD DAQ Intro
- System Requirements
- MPD DAQ Parameters

Readout Architecture
- Two Architectures
- WR Switch: VLANs, Priorities
- Readout Card Base Design
- Hardware Networking Stack
- Some of the PCBs Produced

Computing Architecture
- Data Processing Pipeline
- Distributed Event Building
- White Rabbit and Data Networks
- Data Network Topology
- Status of the Online Cluster

Active Topics

MPD DAQ: System Requirements

Properties:
- Reliable data transfer
- Diagnostics; data integrity checks at all levels
- Fault tolerance and self-recovery
- Distributed and scalable

Operation modes:
- Multiple hardware trigger classes
- High-Level Software Trigger: Off / Analysis / Filtering

MPD DAQ Parameters

(Diagram: Data Acquisition System with Control, Trigger, Timing, and Raw Data paths.)

MPD Stage-1 DAQ parameters:

  Beam                   Au–Au at 9 GeV
  Trigger rate           7 kHz
  Event size             500 kB
  Raw data rate          3.5 GB/s
  Data taking time       8 months/year
  Beam available         50% of time
  Annual raw data size   38 PB
  Compression factor     1:5 – 1:30
  Annual storage size    1 – 8 PB

NICA: collision energy 4 – 11 GeV, beams from p to Au, luminosity 10^27 cm^-2 s^-1 (Au–Au). A quick consistency check of these numbers follows below.
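The headline figures are mutually consistent; a minimal back-of-the-envelope check, assuming 30-day months and taking the duty-cycle numbers from the table:

```python
# Sanity check of the Stage-1 DAQ figures quoted above.
trigger_rate = 7e3               # events/s
event_size = 500e3               # bytes/event
raw_rate = trigger_rate * event_size
print(raw_rate / 1e9)            # 3.5 GB/s, as quoted

# 8 months/year of data taking, beam available 50% of the time
live_seconds = 8 * 30 * 86400 * 0.5
annual_raw = raw_rate * live_seconds
print(annual_raw / 1e15)         # ~36 PB/year, close to the quoted 38 PB

# 1:5 ... 1:30 compression brackets the quoted 1 - 8 PB annual storage
print(annual_raw / 5 / 1e15, annual_raw / 30 / 1e15)  # ~7.3 and ~1.2 PB
```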

DAQ phase-space (figure)

MPD DAQ: Readout Architectures

Based on Common Readout Units (CRU):
- Readout of the TPC
- The CRU is a link aggregator
- Timing, Trigger and Control distributed by the CRU
- Intelligence concentrated in the CRU

Based on a White Rabbit network:
- Readout of ECal, TOF, ZDC, etc.
- Intelligence in the readout electronics
- Local Trigger and Control Units
- Scales well from 1 to 1000 boards

Take the best of both for MPD:
- WR-based electronics performed very well in the BMN data-taking run in March 2015
- Note: the CRU and the WR core are not radiation hard and require protection
- Both designs still have much to implement before they are ready for the MPD run

DAQ Architecture: Interconnects

Common Readout Units:
- Trigger, Timing, Control
- Data compression (clustering)
- Aggregate custom data links
- TCP/IP interface to the DAQ

White Rabbit network:
- All traffic on the same network fibers
- Traffic carried at different priority levels (see the sketch below)

Readout electronics boards:
- Data compression
- TCP/IP interface to the DAQ
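One way to picture the priority separation on the shared WR fibers is as a mapping from traffic class to an IEEE 802.1p priority code point. The class names and values below are assumptions for illustration; the presentation only states that the traffic classes coexist at different priorities:

```python
# Illustrative mapping of DAQ traffic classes to 802.1p priorities on the
# shared White Rabbit fibers. Names and values are assumptions, not the
# actual MPD configuration.
PRIORITY_MAP = {
    "wr_timing": 7,   # White Rabbit sync: must never queue behind data
    "trigger":   6,   # trigger and control messages
    "slow_ctrl": 2,   # monitoring and diagnostics
    "raw_data":  0,   # bulk readout: best effort, fills what remains
}

def pcp_for(traffic_class: str) -> int:
    """802.1p priority code point to tag frames of this class with."""
    return PRIORITY_MAP[traffic_class]
```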

DAQ Architecture: CRU based (figure)

DAQ Architecture: WR Network (figure)

WR Network Streams (figure)

Detector Readout Electronics (figures: DRE Link Streams; DRE Board Structure)

HWIP: IPv4 Network Stack on FPGA
- IPv4 stack implemented on an FPGA (~10,000 lines of Verilog)
- ARP, DHCP, ICMP, UDP, LLDP
- M-Stream: reliable transfer protocol, FIFO streaming
- M-Link: register I/O protocol
- Direct interface from a readout electronics board to the computer cluster; no special drivers or interface cards required
- Standards compliant
- Works in tandem with the White Rabbit Node Core
- Automatic device discovery
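Because the boards speak standard IPv4/UDP, a host-side register access can be ordinary socket code. The presentation does not document the M-Link wire format, so the port number and packet layout below are invented for illustration:

```python
# Hypothetical host-side register read over a UDP-based register I/O
# protocol like M-Link. The port and packet layout are assumptions for
# this sketch; the real M-Link format is not given in the presentation.
import socket
import struct

MLINK_PORT = 33333  # assumed port number

def read_register(board_ip: str, addr: int) -> int:
    """Request one 32-bit register from a readout board and return its value."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    # request: operation code (0 = read) + register address
    sock.sendto(struct.pack("!BI", 0, addr), (board_ip, MLINK_PORT))
    # reply: operation code + register address + register value
    reply, _ = sock.recvfrom(64)
    _, _, value = struct.unpack("!BII", reply[:9])
    return value
```

The point of the "no special drivers" bullet is visible here: any host on the network can talk to the boards with plain sockets, with no kernel module or interface card in between.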

Some of the MPD DAQ Electronics (figure)

MPD DAQ: Computing Architecture (figure)

Event Building Data Flow

Distributed and parallel data processing:
- MPD produces over 3 GB of raw data every second
- Event building is the process of sorting data fragments from the subdetectors and assembling complete event data, ready for physics analysis (a sketch follows below)
- Reliability: single errors, data dropouts, corrupted data, timeouts, and restarts of detector subsystems or processing servers are handled without interrupting event building
- Integrity checks along the whole data path; CRCs inserted by the readout cards
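A minimal sketch of the fragment-sorting idea. The fragment fields, subdetector set, and the use of CRC-32 are assumptions for illustration; the transcript only states that readout cards insert CRCs:

```python
# Minimal event-builder sketch: group per-subdetector fragments by event
# number and verify a per-fragment checksum before assembling the event.
import zlib
from collections import defaultdict
from dataclasses import dataclass

SUBDETECTORS = {"TPC", "TOF", "ECal", "ZDC"}  # example source set

@dataclass
class Fragment:
    event_id: int
    source: str      # which subdetector produced it
    payload: bytes
    crc: int         # checksum inserted by the readout card

pending: dict[int, dict[str, bytes]] = defaultdict(dict)

def on_fragment(frag: Fragment) -> None:
    """Accumulate one fragment; emit the event once all sources arrived."""
    if zlib.crc32(frag.payload) != frag.crc:
        return  # corrupted fragment: drop; timeout handling recovers later
    pending[frag.event_id][frag.source] = frag.payload
    if set(pending[frag.event_id]) == SUBDETECTORS:
        build_event(frag.event_id, pending.pop(frag.event_id))

def build_event(event_id: int, parts: dict[str, bytes]) -> None:
    print(f"event {event_id} complete: {sum(map(len, parts.values()))} bytes")
```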

Distributed Event Building (figure)

WR and Data Networks (figure)

MPD DAQ: Network Topology Upgrade

Core – Distribution – Access topology:
- Used by service providers
- Bandwidth limited by the distribution switches
- Optimal for a small number of links
- Good for vertical (client–server) traffic
- Suits a small DAQ, up to 40 Gb/s

Leaf – Spine (Clos) topology:
- Used in parallel applications / Big Data
- The number of links defines the bandwidth (see the toy calculation below)
- Low and predictable latency
- Good for horizontal traffic (cloud applications)
- Scales to a multi-terabit DAQ
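To make the "number of links defines the bandwidth" point concrete, a toy sizing of a two-tier Clos fabric (the switch counts and link speed are illustrative, not the actual MPD network):

```python
# Toy leaf-spine sizing: in a two-tier Clos fabric each leaf has one uplink
# to every spine, so the fabric's bisection bandwidth grows with the number
# of spines. All numbers here are illustrative.
leaves = 16
spines = 8
uplink_gbps = 40

bisection_gbps = leaves * spines * uplink_gbps / 2
print(f"bisection bandwidth: {bisection_gbps / 1000:.2f} Tb/s")  # 2.56 Tb/s
```

Adding a spine (and a matching uplink on every leaf) scales the fabric without replacing existing switches, whereas in a core-distribution-access design the distribution layer caps the total bandwidth.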

Online Computing Cluster in 2015

One rack put online for the BMN run:
- 8 compute nodes in 4U: 160 × 3 GHz CPU cores, 1024 GB RAM
- 4 storage nodes in 16U: Ceph 0.94.3 "Hammer"; data on 4 TB HDDs, journals on NVM Express SSDs; triple replication, so 430 TB raw yields 144 TB usable (roughly raw capacity divided by the replication factor)
- 10GbE network in another rack: 2 × Cisco Nexus 5548 as a high-availability pair, 2 × Cisco 4500X as a VSS pair

Active Topics

DAQ Hardware:
- PCB schematic and routing
- Assembly and testing
- Preparation for installation

DAQ Computing:
- Distributed processing
- Detector data compression
- Software-defined storage
- Cloud networking
- Database programming

FPGA Engineering:
- Digital signal processing
- Network interfaces
- Data compression
- Embedded CPUs

Thank You!

Extra slides

Summary
- MPD DAQ: System Requirements
- Au+Au events, NICA
- DAQ phase-space
- MPD DAQ: Readout Architectures
- DAQ Architecture: CRU based
- DAQ Architecture: WR Network
- Detector Readout Electronics
- WR Network Streams
- Readout Electronics for MPD
- MPD DAQ: Data Flow
- Computing Architecture
- WR and Data Networks
- Computing Cluster

Readout Card: FPGA Connections (figure)

Au+Au events, NICA
- Luminosity L = 10^27 cm^-2 s^-1
- Total inelastic cross-section σ = 6 b (6 × 10^-24 cm^2)
- Event rate = L × σ = 6,000 Hz (min. bias)
- Multiplicity: central event n_c = 500, average n = 100
- Average TPC track size: 50 ionization centers (clusters)
- One cluster induces signals on 6 pads
- 10 bytes per pad signal (3 ADC samples + time + channel number)
- Min-bias TPC event size: 300,000 bytes; ECal ~120,000 bytes; others < 50,000 bytes (see the check below)
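Both the event rate and the event size follow from the inputs above. Note that the quoted 300,000 bytes is reproduced only if the 10 bytes are counted once per pad signal, which is the reading used here:

```python
# Reconstructing the min-bias rate and TPC event size from the slide.
L = 1e27              # luminosity, cm^-2 s^-1
sigma = 6e-24         # total inelastic cross-section, cm^2
print(L * sigma)      # 6000 Hz min-bias event rate

tracks = 100          # average multiplicity
clusters_per_track = 50
pads_per_cluster = 6
bytes_per_pad = 10    # 3 ADC samples + time + channel number

event_size = tracks * clusters_per_track * pads_per_cluster * bytes_per_pad
print(event_size)     # 300000 bytes, the quoted min-bias TPC event size
```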

References
- NICA technical parameters
- MPD CDR v1.4, JINR, 2013
- MPD TDR (DAQ, TPC, TOF, ZDC), JINR, 2015

Presentations:
- Kevin Black, "Trigger and Data Acquisition at the LHC", Harvard University
- Niko Neufeld, "High throughput DAQ", CERN, 23–29 October 2011