The PHENIX Event Builder David Winter Columbia University for the PHENIX Collaboration DNP 2004 Chicago, IL.



29 Oct 2004, David Winter, The PHENIX Event Builder
Slide 2: Overview
Introduction
– Quarks & gluons at the extreme: heavy-ion collisions at RHIC
– The challenge: the PHENIX experiment and its DAQ
The Event Builder
– Software & hardware
– System design
– Monitoring & performance
Present and future development
Summary

Slide 3: Torturing the Nucleus: Heavy Ion Collisions
“Cartoon” of what we imagine to be the phase diagram of hadronic matter (temperature vs. baryon density)
Lattice QCD calculations have long indicated the existence of a phase transition
[Figure: lattice QCD entropy density vs. temperature]

Slide 4: RHIC
Two independent rings, 3.83 km circumference
Capable of colliding ~any nuclear species on ~any other species
Center-of-mass energy:
– 500 GeV for p-p
– 200 GeV for Au-Au (per N-N collision)
Luminosity (design):
– Au-Au: 2 x 10^26 cm^-2 s^-1
– p-p: 2 x 10^32 cm^-2 s^-1 (polarized)
Two central arms for measuring hadrons, photons, and electrons
Two forward arms for measuring muons
Event characterization detectors in the center

Slide 5: Data Collection: The Challenge
Blue and Yellow beams cross at ~10 MHz; collisions occur at ~10 kHz (hadrons, photons, leptons)
High rates + large event sizes (Run-4: >200 kB/event) + interest in rare physics processes => big headache
How do we address these challenges?
– Level-1 triggering
– Buffering & pipelining: “deadtime-less” DAQ
– High bandwidth (Run-4: ~400 MB/s archiving)
– Fast processing (e.g. Level-2 triggering)
Run-4: 1.5 billion events, ~200 kB/event, ~400 MB/s archiving
Run-5 goal: ~200 kB/event at a 5 kHz rate => 1 GB/s!

Slide 6: Event Builder Overview
[Diagram: data streams from each granule enter the SEBs, which feed the ATPs and storage through a Gigabit switch; control is provided by Run Control; two partitions are shown, with component multiplicities labeled x1 (EBC), x52, and x39]
Three functionally distinct programs derived from the same basic object:
– SubEvent Buffer (SEB): collects data for a single subsystem (a “subevent”)
– Event Builder Controller (EBC): receives event notification, assigns events, flushes the system
– Assembly Trigger Processor (ATP): assembles events by requesting data from each SEB, writes assembled events to short-term storage; can also provide a Level-2 trigger environment

Slide 7: Software & Hardware
Software environment: Run-4 to Run-5 paradigm shift
– New platform: Windows NT/2k -> Linux 2.4.x (FNAL's Scientific Linux 3.0.2)
– New compiler: Visual C++ -> GCC
– Unchanged: Iona Orbix (CORBA), Boost template library
105 1U rack-mounted dual-CPU x86 servers
– 1.0 GHz PIII & 2.4 GHz P4 Xeon
– Gigabit NIC (Intel PRO/1000 MT Server)
Foundry FastIron 1500 Gigabit switch
– 480 Gbps total switching capacity
– 15 slots, 10 in use (includes 96 Gigabit ports)
JSEB: custom-designed PCI card
– Interface between the EvB and the incoming data stream
– Dual 1 MB memory banks (allows simultaneous read/write)
– Programmable FPGA
– Latest firmware enables DMA burst: up to 100 MB/s I/O

Slide 8: Basic Component Design
A state machine whose transitions are controlled by CORBA methods: Booted -> Configured -> Initialized -> Running
– Configuration data comes from Run Control
– Monitor data is served to client(s)
Specific EvB components are derived from this model
– Event-driven, multithreaded
– UDP sockets for data, TCP sockets for control

Slide 9: Performance Monitoring
Each component keeps track of various statistics
Data are served via CORBA calls
A Java client displays the stats in “real time”
– Strip charts display data as a function of time
– Histograms display data as a function of component

Slide 10: Run Control

Slide 11: Where does the future lie?
How do we break the 2.5 kHz boundary? The most important improvement we can make: port to Linux
– Win32 was the right platform when using ATM, but ATM was (completely) replaced by Gigabit Ethernet in Run-4
– We are at the limit of what Win32 can provide us
Growing pains while porting:
– Thread safety: replacing Interlocked operations (who said writing atomic operations in assembly isn't fun?)
– Replacing overlapped socket I/O with synchronous I/O (Linux and AIO? Maybe in our lifetime...)
– Event synchronization: Win32 Events vs. condition variables
– Timeout mechanisms (e.g. dropped packets/events)

Slide 12: The Impact of a Linux Port
A picture is worth a thousand words: in simple socket tests, Linux beats Win32 hands-down
[Figure: write bandwidth (bytes/s) compared for Win32 asynchronous, Win32 synchronous, Win32 completion-port, and Linux synchronous sockets]

Slide 13: The Payoff
Tests with multiple granule partitions have been performed
– Fake-data mode with 1 & 2 granules: 26 kHz (0.240 kB/event) down to 10 kHz (~150 kB/event)
– Clock triggers with 1, 2, & 4 granule partitions: kHz-scale rates, with little to no dependence on event size

Slide 14: Summary
Heavy-ion collisions at RHIC are an ideal laboratory for the study of hot, dense quark matter
The PHENIX experiment is designed to make high-statistics measurements of a variety of physics processes, especially rare signatures
The PHENIX Event Builder lies at the heart of a parallel, pipelined DAQ, enabling high archiving rates
– Three multithreaded programs originally implemented on Win32
– The Win32 EvB has done a respectable job so far, but we need more
Linux is the future of the PHENIX EvB
– Synchronous I/O superior even to Win32's overlapped I/O
– OS overheads much lower (in general)
– Various issues when porting from Win32 to Linux: I/O, timers, threading
Bottom line: Run-5 will have a Linux Event Builder that early tests show will improve performance by as much as a factor of 10. The goal of archiving up to 1 GB/s at 5 kHz is well within reach.

Slide 15: Backup Slides

Slide 16: The PHENIX DAQ
[Diagram: the PHENIX DAQ chain with the EvB highlighted]
A granule is a detector subsystem, including its readout electronics
A partition is one or more granules that receive the same triggers & busies

Slide 17: JSEB Interface Card
Interfaces the DAQ to the SEB
– Data transmitted via a 50-pair, 32-bit parallel cable (input for the JSEB cable from the DAQ)
– PCI 2.1 interface: 32-bit, 33 MHz, with 3.3 V or 5 V signaling
– PLX 9080 PCI controller
– 2 banks of 1 MB static RAM
– Altera FLEX10k FPGA, with a JTAG port for programming
– (Pseudo) driver provided by Jungo
– Latest firmware provides DMA burst mode
[Figure: transfer-rate curves for DMA burst mode off, 4-word DMA burst, and continuous DMA burst]

Slide 18: CORBA
Common Object Request Broker Architecture: the networking protocol by which the run control software components talk to each other
Based on a client/server architecture across a heterogeneous computing environment
– VxWorks, Linux, Windows NT/2000
Servers implement “CORBA objects” that execute the functionality described in the member functions of the object
Clients invoke a local CORBA object's methods; this causes the server to execute the code of its corresponding member function

Slide 19: Characterizing PCI Bus Interactions
[Figure: JSEB contention read test, without network writing (top curves) and with network writing (bottom curves)]