
PHENIX DAQ RATES

RHIC Data Rates at Design Luminosity

PHENIX max = 25 kHz: every FEM must send its data within 40 us. Since we multiplex by 2, the limit is 12.5 kHz and 80 us.

For Au-Au, the Level-1 rate is ~1400 Hz, so we sample every event in Level-2. For p-p, the interaction rate is ~10 MHz, so we need very good Level-1 rejection.

Design luminosities:
- Au-Au: L = 2 x 10^26 cm^-2 s^-1
- p-p: L = 2 x 10^32 cm^-2 s^-1
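The headline numbers can be cross-checked with a few lines of arithmetic. This sketch takes the RHIC design luminosities to be 2 x 10^26 (Au-Au) and 2 x 10^32 (p-p) cm^-2 s^-1, and assumes round interaction cross sections (~7 b for Au-Au, ~40 mb for p-p inelastic); neither cross section is given on the slide.

```python
# Cross-check of the headline rates. The cross sections are assumed
# round numbers, not PHENIX-official figures.
us = 1e-6
barn = 1e-24                        # cm^2

# FEM readout: 40 us per event; multiplexing by 2 doubles it to 80 us
max_rate_single = 1.0 / (40 * us)   # 25 kHz
max_rate_mux2   = 1.0 / (80 * us)   # 12.5 kHz

# Interaction rate = luminosity x cross section
L_auau = 2e26                       # cm^-2 s^-1 (design)
L_pp   = 2e32                       # cm^-2 s^-1 (design)
sigma_auau = 7.0 * barn             # ~7 b Au-Au (assumed)
sigma_pp   = 40e-3 * barn           # ~40 mb p-p inelastic (assumed)

print(max_rate_mux2)                # 12500.0
print(L_auau * sigma_auau)          # ~1400 Hz -> sample everything in Level-2
print(L_pp * sigma_pp)              # ~8e6 Hz -> need strong Level-1 rejection
```

The p-p product comes out near 10^7 Hz, which is why good Level-1 rejection matters there while Au-Au can be sampled completely.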

Run Segments ("borrowed" from Bill's RBUP)

Assumptions:
- A total period of 25 weeks of beam availability
- RHIC duty factor = 50%
- PHENIX duty factor = 50%
- Collision region rms = 20 cm
- All running at

Au-Au segment of 17 weeks:
- First 7 weeks: RHIC commissioning from ~10% to 100% of design luminosity; PHENIX commissioning, calibrating, start of data-taking
- Last 10 weeks: running at ~100% of design luminosity (2 x 10^26 cm^-2 s^-1)
- 300 µb^-1 (possibly 600) of integrated luminosity recorded

p-p segment of 8 weeks:
- 2 weeks to commission collisions
- 1 week to commission polarization (> ~50%)
- 5 weeks of polarized running at 5 x 10^30 cm^-2 s^-1
- 3.5 pb^-1 of integrated luminosity recorded
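The integrated-luminosity projections follow directly from the assumptions above. This sketch uses the combined 50% x 50% duty factor, a 2 x 10^26 (Au-Au) and 5 x 10^30 (p-p) cm^-2 s^-1 instantaneous luminosity, and the conversion 1 µb^-1 = 10^30 cm^-2:

```python
# Sanity check of the integrated-luminosity projections.
week = 7 * 24 * 3600                   # seconds in a week
duty = 0.5 * 0.5                       # RHIC duty x PHENIX duty

# Au-Au: last 10 weeks at design luminosity
int_auau = 2e26 * 10 * week * duty     # cm^-2
print(int_auau / 1e30)                 # ~302 ub^-1, i.e. the quoted 300

# p-p: 5 weeks of polarized running
int_pp = 5e30 * 5 * week * duty        # cm^-2
print(int_pp / 1e36)                   # ~3.8 pb^-1, close to the quoted 3.5
```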

Data Volumes

Data Type                                   Size (bytes)
UnSuppressed                                2.93 x 10^6
Au-Au Central (10% occ)                     4.02 x 10^5
Au-Au mbias (2.5% occ)                      1.61 x 10^5
Au-Au mbias (2.5% occ, min FEM header)      1.26 x 10^5
p-p mbias (0% occupancy, min FEM header)    4.30 x 10^4

Overhead

Data Type                                   Size (bytes)
Packet Header                               > 22 x 10^3
FEM Header                                  53.2 x 10^3
Minimal FEM Header (clk, evt, summary)      18.1 x 10^3

Is the minimal FEM header really "the" minimal? Packet and FEM headers can be dropped from empty packets, but not from non-empty ones. We are currently running with all headers and packets...
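The two Au-Au mbias rows differ by almost exactly the FEM-header overhead, which is a useful consistency check on the tables (all sizes in bytes, taken from the rows above):

```python
# Header bookkeeping: swapping the full FEM headers for the minimal
# ones should account for the gap between the two Au-Au mbias rows.
full_fem_header    = 53.2e3
minimal_fem_header = 18.1e3

auau_mbias_full = 1.61e5       # 2.5% occ, full headers
auau_mbias_min  = 1.26e5       # 2.5% occ, minimal FEM header

saving = full_fem_header - minimal_fem_header
print(saving)                           # 35100.0 bytes saved per event
print(auau_mbias_full - saving)         # ~1.26e5, matching the table
```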

PHENIX Hardware Throughput

- The front end spews out data at 25 kHz*; the Level-1 decision (GL1 + LL1) must be made within 40 beam crossings.
  *EMC needs the short-format FPGA, and TOF needs extra controllers, to get under the 40 us output time.
- FEM fiber runs at 40 MHz for DC and TEC, 20 MHz for all else (i.e., 160 or 80 MB/s).
- The DCM FPGA reads in the FEM data and zero-suppresses it.
- DSPs 1-4 process the data; DSP 5 (one per DCB) joins the data with further processing and sends it over a 20 MB/s link port.
- The Partitioner joins the data streams, with further processing, on a 160 MB/s (40 MHz) token-passing bus.
- An LVDS cable carries the output to the Event Builder.
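The fiber clock speeds and byte rates quoted above are consistent if each clock ships one 32-bit word; the word size is an assumption on my part, since the slide only gives the two sets of numbers:

```python
# FEM fiber bandwidth = word rate x word size (32-bit words assumed).
word_bytes = 4
fast_link = 40e6 * word_bytes    # DC, TEC
slow_link = 20e6 * word_bytes    # everything else
print(fast_link / 1e6, slow_link / 1e6)   # 160.0 80.0 (MB/s, as quoted)
```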

PHENIX Hardware Throughput (Event Builder)

- LVDS cables (from the PAR) bring the DCM data into JSEB input cards (PCI, 132 MB/s).
- 26? SEBs (NT 4) buffer the data; each has a 155 Mbit/s ATM NIC into a Fore ASX-1000 ATM switch (10 Gbit/s aggregate, 5 each way).
- 28? ATPs (NT 4) assemble events and run the Level-2 trigger, then ship events over 28? x 100 Mbit ethernet.
- Buffer boxes: dual buffer storage in RAID arrays (30-40? MB/s) running on a Linux box (one writes to HPSS, one reads from the ATPs).
- HPSS accepts 20 MB/s.

Each SEB can write out at 155 Mbit/s, so the max event rate is (155 Mbit/s - overhead) / event size. The 20 MB/s into HPSS sets the sustained max overall output rate; the rate into the buffer boxes is an instantaneous rate.
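The per-SEB ceiling formula can be sketched as follows. The 22% protocol-overhead fraction is an assumed placeholder, chosen so that the usable bandwidth lands near the ~15 MB/s figure used elsewhere in this talk; the 26.3 kB example stream is the DC minbias size quoted later:

```python
# Per-SEB ceiling: (link speed - overhead) / event size.
atm_link = 155e6 / 8                  # 155 Mbit/s -> ~19.4 MB/s
usable   = atm_link * (1 - 0.22)      # ~15.1 MB/s (overhead fraction assumed)

def max_event_rate(bytes_per_event):
    # The largest data stream through a single SEB limits the system.
    return usable / bytes_per_event

print(max_event_rate(26.3e3))         # ~575 Hz for a 26.3 kB stream per SEB
```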

Current Status: event size = 120 kBytes, event rate ~ 600 Hz; the MVD is the largest granule.

Some Personal Thoughts... For the Au-Au running, we want to be at 1400 Hz, and it would be even better to get to 2100 Hz if we want the coherent peripheral program to succeed. We are currently somewhere around 600 Hz, but we're still working on it. Given the limit of 155 Mbit/s per SEB due to the ATM NIC (~15 MB/s usable), the Level-1 limit is set by the DC: at 2.5% occupancy the DC event size is 26.3 kB, spread over two granules, so 2 x 15 MB/s / 26.3 kB => a max rate of 1140 Hz. If we go to minimal FEM headers, the DC size drops to 21.8 kB, for a max rate of ~1400 Hz. If we want coherent peripheral, we'd need to split the DC granules further. Still a lot of work to do for p-p running, and none of it seems certain: minimize headers, drop packets, Level-1 boards...
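The DC-limited ceilings quoted above can be reproduced with a short calculation. It assumes the DC data is split evenly over two SEBs, which is what makes the quoted 1140 Hz come out of the 26.3 kB event size:

```python
# Level-1 rate ceilings implied by the DC granule sizes.
usable_per_seb = 15e6                  # bytes/s after ATM overhead

def max_L1_rate(dc_bytes, n_granules=2):
    # Each granule rides its own SEB; the biggest stream limits the rate.
    return usable_per_seb / (dc_bytes / n_granules)

print(max_L1_rate(26.3e3))                # ~1141 Hz with full headers
print(max_L1_rate(21.8e3))                # ~1376 Hz with minimal FEM headers
print(max_L1_rate(26.3e3, n_granules=4))  # ~2281 Hz if the granules split again
```

The last line shows why splitting the DC granules is the natural route to the ~2100 Hz wanted for the coherent peripheral program.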