

LECC2003, Amsterdam. Matthias Müller
A RobIn Prototype for a PCI-Bus based Atlas Readout-System
B. Gorini, M. Joos, J. Petersen (CERN, Geneva); A. Kugel, R. Männer, M. Müller, M. Yu (University of Mannheim); B. Green (Royal Holloway University London); G. Kieft (NIKHEF, Amsterdam)

Slide 2: Outline
- Overview
- The Atlas Readout Sub-System (ROS)
- PCI-based Atlas ROS
- The RobIn Prototype
- Measurements
- Conclusions

Slide 3: Overview
- The PCI-based ROS is one of the two implementation options for the Atlas ROS.
- It uses a custom PCI board, the RobIn, for receiving and buffering data.
- The host is a PC with multiple PCI buses and a Gigabit Ethernet connection to LVL2 and the EF.
- The PC runs multi-threaded software with a master-DMA based PCI messaging scheme.
- Data request rates were measured; the full-scale system achieves an LVL1 rate above the Atlas requirement (with GE network I/O).

Slide 4: Atlas Readout Subsystem Overview
- Buffers detector data while LVL2 computes the trigger decision.
- 1600 links from the detector; up to 160 MB/s input bandwidth per link, 100 kHz input rate.
- 2 kHz output to LVL2 on request, via Gigabit Ethernet.
- Output to the Event Filter on event accept (~3 kHz).
(Diagram: data flow between the ROS, LVL2 and the EF.)

Slide 5: Atlas Readout Subsystem
- 90 ROD crates (~40 racks), located in USA15 (underground); a ROD Crate Processor on the VME bus handles config & control, event sampling and calibration data.
- 1600 Read-Out Links (ROLs; HOLA S-link, 160 MByte/s per link).
- 144 4U ROS PCs (~15 racks), located in SDX15 (at the surface); each hosts ROBINs and a NIC on its PCI buses, with Gigabit Ethernet links to the LVL2 and Event Builder networks, plus alternative data paths.
(Diagram: RODs feed the ROBINs via the ROLs.)

Slide 6: PCI-based Atlas ROS: Hardware
- Available: 2 GHz, 2.4 GHz and 3 GHz Xeon PCs.
- OS: Linux, CERN RedHat 7.3, patched kernel.
(Block diagram: 2.4 GHz CPU with DDR RAM, ~2 GB/s memory bandwidth; two 64-bit/66 MHz PCI buses at 532 MB/s each; six slots hosting the RobIn, Gigabit Ethernet, SCSI and 2x FE/GE.)

Slide 7: PCI-based Atlas ROS: Software
- The ROS software is multi-threaded.
- A Fragment Manager interface provides hardware abstraction for the RobIn.
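The Fragment Manager idea can be sketched as follows. This is a hypothetical illustration, not the actual ATLAS ROS API: all class and method names are invented for clarity. The point is that the multi-threaded ROS core issues data and clear requests through an abstract interface, so the concrete readout hardware (RobIn, MPRACE, or a software emulator) can be swapped without touching the core.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of a Fragment Manager interface (names invented).
struct EventFragment {
    uint32_t level1Id;           // LVL1 event identifier
    std::vector<uint32_t> data;  // raw fragment payload
};

class FragmentManager {
public:
    virtual ~FragmentManager() = default;
    // Data request: fetch the fragment for one LVL1 ID.
    virtual EventFragment requestFragment(uint32_t level1Id) = 0;
    // Clear request: free buffer space for a batch of processed events.
    virtual void clearFragments(const std::vector<uint32_t>& ids) = 0;
};

// Toy in-memory emulator standing in for a RobIn-backed implementation.
class EmulatedFragmentManager : public FragmentManager {
public:
    EventFragment requestFragment(uint32_t id) override {
        return EventFragment{id, std::vector<uint32_t>(256, id)};
    }
    void clearFragments(const std::vector<uint32_t>&) override {}
};
```

With this split, the measurement programme described later (running the same request code against MPRACE1 instead of the RobIn Prototype) amounts to swapping the concrete implementation behind the interface.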

Slide 8: The RobIn Prototype

Slide 9: The RobIn Prototype (2)
- Requests are sent to the RobIn by PCI single cycles (data requests) and by PLX bus-master DMA (clear requests).
- Event data from the RobIn: the FPGA sends the fragment without its first word; the first word is transmitted last, to signal end-of-transfer.
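The end-of-transfer convention above can be modelled in a few lines. This is a simplified sketch using C++ atomics in place of real DMA writes, not the actual firmware or driver code: the producer (standing in for the RobIn FPGA) writes the fragment body first and the first word last, so the host only has to poll word 0, which doubles as the completion flag. The zero "not yet written" marker is an assumption of this sketch.

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Word 0 is written last, so it serves as the end-of-transfer flag.
// We assume 0 means "not yet written", so a real fragment's first
// word must be non-zero in this model.
constexpr uint32_t kNotWritten = 0;

// Producer side: body first, first word last (release makes the body
// visible to the consumer before the flag).
void fpgaSide(std::atomic<uint32_t>* buf, const std::vector<uint32_t>& frag) {
    for (size_t i = 1; i < frag.size(); ++i)
        buf[i].store(frag[i], std::memory_order_relaxed);
    buf[0].store(frag[0], std::memory_order_release);
}

// Consumer side: spin on word 0, then read the complete fragment.
std::vector<uint32_t> hostSide(std::atomic<uint32_t>* buf, size_t n) {
    while (buf[0].load(std::memory_order_acquire) == kNotWritten)
        std::this_thread::yield();
    std::vector<uint32_t> out(n);
    for (size_t i = 0; i < n; ++i)
        out[i] = buf[i].load(std::memory_order_relaxed);
    return out;
}
```

The attraction of the scheme is that the host needs no interrupt and no separate status register: completion is encoded in the data buffer itself.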

Slide 10: Measurements
- The RobIn Prototype was initially not available; all presented measurements (except one) use alternative RobIn hardware: MPRACE1.
- MPRACE1 is a general-purpose PCI-based FPGA co-processor; its FPGA and PCI bridge are identical to those of the RobIn Prototype.
- It is an FPGA-only board, so no PowerPC processor is available.
- It implements the same PCI messaging scheme as the RobIn Prototype.
- Measurements were made on three different PCs: a 2 GHz Xeon, a 2.4 GHz Xeon and a 3 GHz Xeon.

Slide 11: Measurements: Multi-threading
- Bare data request performance with one RobIn and no I/O to Gigabit Ethernet.
- Varying the number of Request Handler threads shows a maximum at 14.
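Why more handler threads help (up to a point) can be illustrated with a minimal worker-pool sketch. This is an assumed structure, not the ATLAS DataFlow code: several handler threads pop request IDs from a shared queue, so while one thread waits out its (here emulated) PCI round-trip, the others keep the RobIn busy. The thread count and the fake latency are illustrative assumptions.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal request-handler pool illustrating the multi-threading scheme.
class RequestHandlerPool {
public:
    explicit RequestHandlerPool(int nThreads) {
        for (int i = 0; i < nThreads; ++i)
            workers_.emplace_back([this] { run(); });
    }
    void submit(uint32_t level1Id) {
        { std::lock_guard<std::mutex> lk(m_); queue_.push(level1Id); }
        cv_.notify_one();
    }
    // Drain the queue, then stop all handlers.
    void shutdown() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    int served() const { return served_.load(); }

private:
    void run() {
        for (;;) {
            uint32_t id;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
                if (queue_.empty()) return;  // done_ set and queue drained
                id = queue_.front(); queue_.pop();
            }
            // Stand-in for the PCI data request round-trip to the RobIn.
            std::this_thread::sleep_for(std::chrono::microseconds(10));
            served_.fetch_add(1);
            (void)id;
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<uint32_t> queue_;
    std::vector<std::thread> workers_;
    std::atomic<int> served_{0};
    bool done_ = false;
};
```

In such a model, throughput grows with the thread count until context-switch and lock overheads dominate, which is one plausible reading of the measured optimum at 14 threads.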

Slide 12: Measurements: Fragment Size Dependency
- MPRACE: up to 512 bytes, the fixed request overheads overlap the returning fragment data transfers from the RobIn, so the fragment size dependency is very small.
- RobIn Prototype: the comparison with MPRACE appears valid; up to 1 kB there is no fragment size dependency.
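A rough model makes the flat region plausible: if the fixed per-request overhead and the returning data transfer overlap, the request time is approximately the larger of the two, so fragment size only matters once the transfer time exceeds the overhead. The 5 µs overhead below is an illustrative assumption; the 266 MB/s figure is the on-board local-bus limit quoted on the MPRACE1 slide.

```cpp
#include <algorithm>

// Toy model: overhead and data transfer overlap, so request time is
// roughly max(overhead, transfer). 5 us overhead is an assumption;
// 266 MB/s is the MPRACE1 local-bus limit from the backup slide.
double requestTimeUs(double fragmentBytes,
                     double overheadUs = 5.0,
                     double busMBperS = 266.0) {
    double transferUs = fragmentBytes / busMBperS;  // B / (MB/s) = us
    return std::max(overheadUs, transferUs);
}
```

With these assumed numbers, a 512 B transfer takes ~1.9 µs and a 1 kB transfer ~3.9 µs, both hidden under the fixed overhead, which is consistent with the observed lack of size dependency up to about 1 kB.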

Slide 13: Measurements: Influence of DC I/O
- 4 ROLs per RobIn (MPRACE) emulated.
- Network I/O to LVL2 and the EF reduces performance by a factor of 3.

Slide 14: Measurements: DC I/O and CPU Scalability
- 4 ROLs per RobIn (MPRACE) emulated.
- Moving to a 3 GHz PC improves performance by ~25%.

Slide 15: Conclusions
- Maximum request performance per RobIn is 170 kHz (1 kB fragment size).
- A "standalone" ROS can handle 12 ROLs on 3 RobIns at a 300 kHz LVL1 input rate.
- The full-scale ROS system (3 GHz Xeon PC) handles a 130 kHz LVL1 input rate, above the Atlas requirements.
- First measurements with the RobIn Prototype confirm the results obtained with the earlier prototype (MPRACE).

Slide 16: RobIn (MPRACE1)
- Parts common to the RobIn Prototype: the PLX PCI bridge, the local bus and the FPGA.
- The firmware implements the RobIn Prototype's message-passing protocol.
- The on-board "local" bus is limited to 266 MB/s, half of the maximum PCI throughput.
(Block diagram: PLX9656 PCI bridge on a 64-bit/66 MHz PCI bus; 32-bit/66 MHz local bus to the Xilinx Virtex-II FPGA; control PLD, expansion connector, 2 MB ZBT SRAM and an SDRAM socket.)

Slide 17: Measurements: Influence of DC I/O
- 4 ROLs per RobIn (MPRACE) emulated.
- Network I/O to LVL2 and the EF reduces performance to 1/3.
- Large EB fractions: performance is limited by the Gigabit Ethernet line speed.
- Small EB fractions: performance is limited by the PC's computing power.
(Plot annotations: 100 kHz; Gigabit Ethernet line speed; 3 kHz Atlas baseline.)

Slide 18: Measurements: Multiple PCI Buses
- The request rate decreases even though the PCI bus is not saturated.
- Low parallelism in the software?