Modeling event building architecture for the triggerless data acquisition system for the PANDA experiment at the HESR facility at FAIR/GSI. Krzysztof Korcyl, Institute of Nuclear Physics, Polish Academy of Sciences & Cracow University of Technology

Agenda
- PANDA experiment: detectors and requirements for the DAQ system
- the push-only architecture
- Compute Node in the ATCA standard
- data flow in the architecture
- short introduction to discrete event modeling
- modeling results: latency, queue dynamics, load monitoring
- summary

PANDA detectors
Experiment at HESR (High Energy Storage Ring) in the FAIR (Facility for Antiproton and Ion Research) complex at GSI, Darmstadt.
- Tracking detectors: Micro Vertex Detector, Central Tracker, Gas Electron Multiplier stations, Forward Tracker
- Particle identification: DIRC (Detection of Internally Reflected Cherenkov light), Time of Flight system, Muon Detection System, Ring Imaging Cherenkov Detector
- Calorimetry: electromagnetic calorimeter

PANDA DAQ requirements
- interaction rate: up to 20 MHz (luminosity 2x10^32 cm^-2 s^-1)
- typical event size: ~4 kB
- expected throughput: 80 GB/s (20 MHz x ~4 kB), with 100 GB/s as the design figure
- the rich physics program requires high flexibility in event selection
- front-end electronics work in continuous sampling mode
- no hardware trigger signal

The push-only architecture
SODA (Synchronization Of Data Acquisition) Time Distribution System: a passive point-to-multipoint bidirectional fiber network that provides a time reference with precision better than 20 ps and synchronizes data taking.

ATCA crate and backplane
ATCA: Advanced Telecommunications Computing Architecture. Backplane: one possible configuration is a full mesh, which connects each pair of modules with a dedicated point-to-point bidirectional link.

Compute Node
Each board is equipped with 5 Virtex-4 FX60 FPGAs. High-bandwidth connectivity is provided by 8 optical links connected to RocketIO ports (6.5 Gb/s each). In addition, the board is equipped with five Gbit Ethernet links.

Inter-crate wiring
The module in slot N at the FEE level connects, with trunks of 2 links, to the modules in slot N at the CPU level. Packets of odd-numbered events at the FEE level are first routed via the backplane and then outbound to the CPU level via the proper trunk. Packets of even-numbered events go outbound to the CPU level first and then use the backplane to change the slot, as sketched below.
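Written out as code, the alternating two-hop rule looks as follows; this is an illustrative sketch of the routing decision described above, not code from the model:

    // Illustration only: odd-numbered events cross the FEE-level
    // backplane first, even-numbered events take the trunk to the CPU
    // level first, spreading the load over both path types.
    enum class FirstHop { Backplane, Trunk };

    FirstHop first_hop(unsigned long event_number) {
        return (event_number % 2 == 1) ? FirstHop::Backplane
                                       : FirstHop::Trunk;
    }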

Inter-crate routing animation (diagram)

Onboard Virtex connections
Virtex 0 handles all communication to/from the backplane; Virtex 1-4 each manage 2 input and 2 output ports at the front panel.

Discrete event modeling
A model is a computer program simulating the system dynamics in time (with support from the SystemC library). Unlike fixed-time-step simulation, a discrete event simulation advances directly from one event to the next: the state of the system remains constant between events, and processing the system in a state may lead to scheduling a new event in the future.
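The core mechanism can be sketched in a few lines of plain C++ (independent of the SystemC API): an agenda of time-stamped events processed in order, where each event may schedule further events.

    #include <functional>
    #include <queue>
    #include <vector>

    // Minimal discrete event loop: simulated time jumps from one event
    // to the next; the system state is constant in between.
    struct Event {
        double time;                   // simulated time in ns
        std::function<void()> action;
        bool operator>(const Event& e) const { return time > e.time; }
    };

    int main() {
        std::priority_queue<Event, std::vector<Event>,
                            std::greater<Event>> agenda;
        double now = 0.0;

        // Processing an event may schedule a new event in the future.
        std::function<void()> tick = [&]() {
            if (now < 1000.0) agenda.push({now + 100.0, tick});
        };
        agenda.push({0.0, tick});

        while (!agenda.empty()) {
            Event e = agenda.top(); agenda.pop();
            now = e.time;              // advance directly to event time
            e.action();
        }
    }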

Parameterization of ports
- SendFifo: occupation can grow if multiple writes are allowed, if the link speed is smaller than the write speed, or if the recent packets are smaller than the former ones.
- ReceiveFifo: occupation can grow if the queue head cannot be transferred because the destination is busy with another transfer, or if the recent packets are smaller than the former ones.
- The transfer speed is a parameter; in the simulations it was set to 6.5 Gb/s (RocketIO).
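A hedged sketch of such a parameterized port, tracking SendFifo occupancy between events; the class is illustrative, not the model's actual code, and the factor 10 anticipates the 8b/10b conversion mentioned on the next slide:

    #include <algorithm>
    #include <cstddef>

    // Illustrative occupancy model of a SendFifo: data enters at write
    // events and drains at the parameterized link speed (6.5 Gb/s
    // RocketIO in the simulations). Occupancy grows whenever writes
    // outpace the drain.
    struct SendFifo {
        double link_gbps = 6.5;        // drain rate: Gb/s == bits per ns
        double occupancy_bits = 0.0;
        double last_update_ns = 0.0;

        void advance(double now_ns) {  // drain since the last event
            double drained = (now_ns - last_update_ns) * link_gbps;
            occupancy_bits = std::max(0.0, occupancy_bits - drained);
            last_update_ns = now_ns;
        }
        void write(double now_ns, std::size_t bytes) {
            advance(now_ns);
            occupancy_bits += bytes * 10.0;  // 8b/10b: 10 line bits/byte
        }
    };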

Model of data source
The data source, a Data Concentrator:
- simulates the operation of the accelerator: a burst is 2 us of interactions followed by a 400 ns gap; a superburst is 10 bursts;
- generates a data packet with a size proportional to the number of interactions, drawn from a Poisson distribution with a 20 MHz average rate;
- uses a uniform data size for all Data Concentrators; two load scenarios were simulated: 100 GB/s (average link usage 37%) and 173 GB/s (average link usage 70%);
- simulates the 8b/10b conversion, tags packets with the destination CPU number, and pushes them into the architecture.
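The fragment-size generation can be sketched as follows. The 18 bytes per interaction is a made-up illustration constant, chosen only so that 416 fragments add up to roughly the ~300 kB burst quoted on the next slide; it is not a PANDA parameter:

    #include <cstddef>
    #include <random>

    // Illustrative per-burst fragment size of one Data Concentrator:
    // a Poisson-distributed interaction count (20 MHz average over the
    // 2 us active window, i.e. mean 40) times a hypothetical
    // bytes-per-interaction constant.
    std::size_t burst_fragment_bytes(std::mt19937& rng) {
        const double mean_interactions = 20e6 * 2e-6;   // = 40
        std::poisson_distribution<int> n(mean_interactions);
        const std::size_t BYTES_PER_INTERACTION = 18;   // hypothetical
        return static_cast<std::size_t>(n(rng)) * BYTES_PER_INTERACTION;
    }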

Model of data sink
The data sink, an event-building CPU, simulates event building from 416 fragments with the same tag. Size of a burst: ~300 kB; size of a super-burst: ~3 MB.
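A minimal sketch of the sink's bookkeeping, assuming each fragment carries a burst tag (a hypothetical structure, for illustration only):

    #include <cstddef>
    #include <unordered_map>

    // Illustrative event-builder bookkeeping: count fragments per burst
    // tag and declare the burst complete once all 416 sources have
    // contributed.
    struct EventBuilder {
        static const std::size_t N_SOURCES = 416;
        std::unordered_map<unsigned, std::size_t> fragments;  // tag -> count

        bool add(unsigned tag) {       // returns true when burst complete
            if (++fragments[tag] < N_SOURCES) return false;
            fragments.erase(tag);      // burst fully assembled
            return true;
        }
    };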

Event building latency
The latency is stable in time, which indicates that the architecture can switch not only 100 GB/s but up to 173 GB/s.

Load distribution between CPUs
The CPU for the next event is calculated with the formula: N(t+1) = (N(t) + 79) mod 416.
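Written out as code: since gcd(79, 416) = 1, this stride visits every CPU exactly once per 416 events, so the load spreads uniformly. The check below is illustrative:

    #include <bitset>
    #include <cassert>

    // Next-destination rule from the slide: stride-79 round robin over
    // 416 event-building CPUs.
    unsigned next_cpu(unsigned current) { return (current + 79) % 416; }

    int main() {
        std::bitset<416> seen;
        unsigned cpu = 0;
        for (int i = 0; i < 416; ++i) { seen.set(cpu); cpu = next_cpu(cpu); }
        assert(seen.all());   // gcd(79,416)=1: every CPU visited once
    }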

Monitoring queues' evolution
Averaged maximal queue length in the input ports at the FEE level; the averaging was done over ports with the same index in all FEE modules.

Monitoring links' load
Load on the fiber links connecting output ports at the FEE level with input ports at the CPU level. The homogeneous load indicates a proper routing scheme, including the trunking.

Monitoring queues
Average of the maximal length of the input queues at the CPU level; the average was taken over ports with the same index on all modules.

Monitoring Virtex links' occupation
At the CPU level, packets heading for odd-numbered CPUs go via the backplane to the slot with the destination CPU.

Summary
We propose an event building architecture for the triggerless DAQ system of the PANDA experiment. The architecture uses Compute Node modules in the ATCA standard. We built simplified models of the components using the SystemC library and ran full-scale simulations to demonstrate the required performance and to analyze the dynamics of the system. The push-only mode provides 100 GB/s throughput, which makes it possible to perform burst/super-burst building and to run selection algorithms on fully assembled data. With the input links loaded up to 70% of their nominal capacity, the architecture can handle up to 173 GB/s.

Basic networking
Implementations of various higher-level protocols:
- network recognition (ARP, DHCP, ICMP)
- transmission of data to the event builders over UDP
- multi-frame packets (up to 64 kB per packet)
- dynamic addressing of multiple event builders
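For illustration, the frame count for one multi-frame packet, assuming 9000-byte jumbo-frame payloads (an assumption for this sketch; the firmware's actual framing is not specified here):

    #include <cstddef>

    // Illustration only: number of link-layer frames needed for one
    // multi-frame packet (up to the 64 kB UDP datagram limit), assuming
    // a 9000-byte jumbo-frame payload per frame.
    std::size_t frames_needed(std::size_t packet_bytes,
                              std::size_t frame_payload = 9000) {
        return (packet_bytes + frame_payload - 1) / frame_payload;
    }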

TCP extension
Implementation of a "TCP bypass mode" channel: it allows injecting TCP frames received by the hardware into the Linux kernel, and transmitting TCP frames generated by the kernel. Development of a "true" TCP implementation is ongoing; a Telnet protocol implementation is in progress.

General features
Tested on Virtex-4 FX and Virtex-5 LX; tests on Virtex-5 FX, Lattice ECP2M, and Lattice ECP3M are planned. Implemented to work over an optical link or copper cable. Maximum transmission speed (when data are available): 118 MB/s. Jumbo frames and VLAN support; an easily configurable set of protocols.