Modeling event building architecture for the triggerless data acquisition system for PANDA experiment at the HESR facility at FAIR/GSI Krzysztof Korcyl.


Modeling event building architecture for the triggerless data acquisition system for PANDA experiment at the HESR facility at FAIR/GSI Krzysztof Korcyl Institute of Nuclear Physics Polish Academy of Sciences & Cracow University of Technology

Agenda
- PANDA experiment: detectors and requirements for the DAQ system
- the push-only architecture
- Compute Node in the ATCA standard
- data flow in the architecture
- short introduction to discrete event modelling
- modelling results: latency, queue dynamics, load monitoring
- summary

PANDA detectors Experiment at HESR (High Energy Storage Ring) in the FAIR (Facility for Antiproton and Ion Research) complex at GSI, Darmstadt.
Tracking detectors: Micro Vertex Detector, Central Tracker, Gas Electron Multiplier Stations, Forward Tracker
Particle identification: DIRC (Detection of Internally Reflected Cherenkov light), Time of Flight System, Muon Detection System, Ring Imaging Cherenkov Detector
Calorimetry: Electromagnetic Calorimeter

PANDA DAQ requirements
- interaction rate: up to 20 MHz (luminosity 2×10^32 cm^-2 s^-1)
- typical event size: ~4 kB
- expected throughput: 80 GB/s (design goal 100 GB/s)
- rich physics program requires high flexibility in event selection
- front-end electronics working in continuous sampling mode
- no hardware trigger signal
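The quoted throughput follows directly from the rate and event size; a quick sanity-check sketch (only the 20 MHz rate and ~4 kB event size come from the slide):

```python
# Back-of-the-envelope check of the quoted DAQ throughput.
interaction_rate_hz = 20e6   # up to 20 MHz interaction rate
event_size_bytes = 4e3       # typical event size ~4 kB

throughput_bytes_per_s = interaction_rate_hz * event_size_bytes
print(throughput_bytes_per_s / 1e9, "GB/s")  # -> 80.0 GB/s
```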

The push-only architecture SODA Time Distribution System: a passive point-to-multipoint bidirectional fiber network that provides a time reference with precision better than 20 ps and synchronizes data taking.

ATCA crate and backplane ATCA – Advanced Telecommunications Computing Architecture. Backplane: one possible configuration is a full mesh, which connects each pair of modules with a dedicated point-to-point bidirectional link.
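A full-mesh backplane needs one link per unordered pair of slots, so the link count grows quadratically; a small sketch (the 14-slot count is the common ATCA shelf size, an assumption here rather than a figure from the slide):

```python
def full_mesh_links(n_slots: int) -> int:
    # Each unordered pair of slots gets one dedicated bidirectional link.
    return n_slots * (n_slots - 1) // 2

print(full_mesh_links(14))  # a typical 14-slot ATCA shelf -> 91 links
```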

Compute Node Each board is equipped with 5 Virtex-4 FX60 FPGAs. High-bandwidth connectivity is provided by 8 Gbit optical links connected to RocketIO ports (6.5 Gb/s each). In addition, the board is equipped with five Gbit Ethernet links.

Inter-crate wiring The module in slot N at the FEE level connects, with 2-link trunks, to the modules in slots N at the CPU level. Packets of odd-numbered events at the FEE level are first routed via the backplane and then sent to the CPU level via the appropriate trunk. Packets of even-numbered events go to the CPU level first and then use the backplane there to change slot.
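The odd/even split can be sketched as a toy routing function (the path representation and function names are illustrative, not from the slides; only the ordering of backplane and trunk hops follows the description above):

```python
def route(event_id: int, fee_slot: int, cpu_slot: int):
    """Toy sketch of the two-hop inter-crate route.

    Odd-event packets change slot over the FEE-level backplane first,
    then take the trunk; even-event packets take the trunk to the CPU
    level first and change slot over the CPU-level backplane.
    """
    if event_id % 2 == 1:
        return [("FEE backplane", fee_slot, cpu_slot),
                ("trunk", cpu_slot, cpu_slot)]
    return [("trunk", fee_slot, fee_slot),
            ("CPU backplane", fee_slot, cpu_slot)]
```

Either way the packet crosses exactly one backplane and one trunk, which is what balances the load between the two paths.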

Inter-crate routing animation

Onboard Virtex connections
- Virtex 0: handles all communication to/from the backplane
- Virtex 1-4: each manages 2 input and 2 output ports at the front panel

Discrete event modeling Model: a computer program simulating the system dynamics in time (supported by the SystemC library). Two approaches: fixed time step vs. discrete events [timeline diagrams]. In discrete event simulation the state of the system remains constant between events; processing the system in a given state may schedule new events in the future.
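A minimal discrete-event kernel illustrating the idea (plain Python rather than SystemC, purely to show "state constant between events"; the event names are invented):

```python
import heapq

def simulate(initial_events, horizon):
    """Minimal discrete-event loop: pop the earliest event, process it,
    possibly schedule new events in the future. Nothing happens between
    events, so simulated time jumps from one event to the next."""
    queue = list(initial_events)      # (time, name) pairs
    heapq.heapify(queue)
    log = []
    while queue:
        t, name = heapq.heappop(queue)
        if t > horizon:
            break
        log.append((t, name))
        if name == "packet_arrival":
            # processing this event schedules a future one
            heapq.heappush(queue, (t + 5, "packet_departure"))
    return log

print(simulate([(0, "packet_arrival")], horizon=20))
# -> [(0, 'packet_arrival'), (5, 'packet_departure')]
```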

Parameterization of ports
- SendFifo: occupancy can grow if multiple writes are allowed, OR the link speed is lower than the write speed, OR the recent packets are smaller than the former ones.
- ReceiveFifo: occupancy can grow if the queue head cannot be transferred because the destination is busy with another transfer, OR the recent packets are smaller than the former ones.
- The transfer speed is a parameter; in the simulations it was set to 6.5 Gb/s (RocketIO).
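The "link speed lower than write speed" condition can be illustrated with a simple fluid model of SendFifo occupancy (the 10 Gb/s write rate and 8000-bit packets are arbitrary illustration values; only the 6.5 Gb/s link rate comes from the slide):

```python
def send_fifo_backlog(packet_bits, write_gbps, link_gbps=6.5):
    """Fluid sketch of SendFifo occupancy: while a packet is being
    written at write_gbps, the link drains at link_gbps; any excess
    accumulates as backlog (in bits)."""
    backlog, trace = 0.0, []
    for bits in packet_bits:
        write_time = bits / (write_gbps * 1e9)   # seconds spent writing
        drained = link_gbps * 1e9 * write_time   # bits drained meanwhile
        backlog = max(0.0, backlog + bits - drained)
        trace.append(backlog)
    return trace

# Writing at 10 Gb/s into a 6.5 Gb/s link: backlog grows with every packet.
print(send_fifo_backlog([8000, 8000, 8000], write_gbps=10.0))
```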

Models of source and sink of data
Data source (Data Concentrator):
- generates a data packet with a size proportional to the number of interactions, drawn from a Poisson distribution with an average rate of 20 MHz
- burst: 2 µs of interactions + 400 ns silence gap; super-burst: 10 bursts
- simulates the 8b/10b conversion, tags packets with the destination CPU number and pushes them into the architecture
Data sink (event-building CPU):
- simulates event building from 416 fragments with the same tag
- size of a burst: ~300 kB; size of a super-burst: ~3 MB
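A sketch of the data-source statistics in stdlib Python (Poisson sampling via Knuth's method, since `random` has no Poisson; only the 20 MHz rate, 2 µs burst and 10-burst super-burst come from the slide):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm: count uniforms until their product drops below e^-lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def burst_interactions(rng, rate_hz=20e6, burst_s=2e-6):
    # Mean interactions per 2 us burst at 20 MHz: 20e6 * 2e-6 = 40.
    return poisson(rate_hz * burst_s, rng)

def super_burst_interactions(rng, n_bursts=10):
    # A super-burst is 10 bursts (each burst is followed by a 400 ns gap).
    return sum(burst_interactions(rng) for _ in range(n_bursts))

rng = random.Random(1)
print(super_burst_interactions(rng))  # ~400 on average
```

The fragment size a Data Concentrator pushes out would then be this interaction count times some bytes-per-interaction constant, which the slides do not quote.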

Event building latency A latency that stays constant in time indicates the architecture's ability to switch not only 100 GB/s but also 173 GB/s.

Load distribution between CPUs The CPU for the next event is calculated with the formula: N_{t+1} = (N_t + 79) mod 416
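Because 79 and 416 share no common factor, this stride visits all 416 CPUs before repeating, spreading the load evenly; a quick check:

```python
def next_cpu(n, stride=79, n_cpus=416):
    # N_{t+1} = (N_t + 79) mod 416
    return (n + stride) % n_cpus

n, seen = 0, set()
for _ in range(416):
    seen.add(n)
    n = next_cpu(n)

print(len(seen))  # -> 416: every CPU is used exactly once per cycle
```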

Monitoring queues’ evolution Average of the maximal queue lengths in input ports at the FEE level. The average was taken over the ports with the same index in all FEE modules.

Monitoring links’ load Load on the fiber links connecting output ports at the FEE level with input ports at the CPU level. Homogeneous load indicates a proper routing scheme – also for trunking.

Monitoring queues Average of the maximal lengths of input queues at the CPU level. The average was taken over the ports with the same index on all modules.

Monitoring Virtex links’ occupation At the CPU level, the packets heading for odd-numbered CPUs travel via the backplane to the slot with the destination CPU.

Summary
- We propose an event building architecture for the triggerless DAQ system of the PANDA experiment. The architecture uses Compute Node modules in the ATCA standard.
- We built simplified models of the components using the SystemC library and ran full-scale simulations to demonstrate the required performance and to analyse the dynamics of the system.
- The push-only mode offers 100 GB/s throughput, which allows burst/super-burst building and running selection algorithms on fully assembled data.
- With the input links loaded up to 70% of their nominal capacity, the architecture can handle 173 GB/s.