1 The PHENIX Experiment in the RHIC Run 7
Martin L. Purschke, Brookhaven National Laboratory, for the PHENIX Collaboration
(with some references to Run 6...)
[Slide image: RHIC from space, Long Island, NY]

2 Our best run ever!
Or, much simpler: (OK, the DOE continuing budget resolution cut our running time down to 13 cryo-weeks, and we would otherwise have gotten > 1 PB of data, but even so we outdid the previous runs.)

3 RHIC/PHENIX at a glance
RHIC:
- 2 independent rings, one beam clockwise, the other counterclockwise
- sqrt(s_NN) = 500 GeV * Z/A: ~200 GeV for heavy ions, ~500 GeV for proton-proton (polarized)
PHENIX:
- 4 spectrometer arms, 15 detector subsystems, 500,000 detector channels, lots of readout electronics
- uncompressed event size typically ... KB for Au+Au, Cu+Cu, p+p
- data rate ~5 kHz (Au+Au)
- front-end data rate ... GB/s
- data logging rate ~400 MB/s, 700 MB/s max

4 ...and 4 new detector systems in Run 7: TOF-W, RXNP, HBD, MPC-N

5 Building up to record speed
Over the previous runs we have been adding improvements. The last runs had lighter systems (d+Au, p+p, Cu+Cu), less of a challenge than 200 GeV Au+Au.
Ingredients:
- distributed data compression (Run 4)
- multi-event buffering (Run 5)
- mostly consolidating the achievements, tuning, etc. in Run 6, plus lots of improvements in operations (increased uptime)
- 10G network upgrade in Run 7, added Level-2 filtering
With the increased luminosity, we saw the previously demonstrated 600+ MB/s data rate in earnest for the first time.

6 Data Compression
We found that the raw data are still gzip-compressible after zero suppression and other data reduction techniques, so we introduced a compressed raw data format that supports a late-stage compression.
The scheme (a diagram on the slide): the uncompressed buffer is compressed with the LZO algorithm and wrapped in a new buffer, with a new buffer header and the compressed data as payload; this is what a file then looks like. On readback, the LZO step is undone and the original uncompressed buffer is restored, so a file looks as it normally does. All of this is handled completely in the I/O layer; the higher-level routines just receive a buffer as before.
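A minimal sketch of such a late-stage compression wrapper, assuming the standard LZO1X API (lzo1x_1_compress / lzo1x_decompress); the BufferHeader layout and function names are illustrative, not the actual PHENIX raw data format:

```cpp
#include <lzo/lzo1x.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical wrapper header; the real PHENIX buffer format differs.
struct BufferHeader {
  uint32_t compressed_length;  // payload size after LZO compression
  uint32_t original_length;    // size needed to restore the original buffer
};

// Wrap an uncompressed buffer: LZO-compress it and prepend a new header.
// (lzo_init() must have been called once at startup.)
std::vector<unsigned char> compress_buffer(const unsigned char* in, lzo_uint in_len)
{
  static thread_local std::vector<lzo_align_t> wrkmem(LZO1X_1_MEM_COMPRESS / sizeof(lzo_align_t) + 1);
  // LZO may expand incompressible input slightly; reserve the documented worst case.
  std::vector<unsigned char> out(sizeof(BufferHeader) + in_len + in_len / 16 + 64 + 3);

  lzo_uint out_len = 0;
  lzo1x_1_compress(in, in_len, out.data() + sizeof(BufferHeader), &out_len, wrkmem.data());

  BufferHeader hdr{static_cast<uint32_t>(out_len), static_cast<uint32_t>(in_len)};
  std::memcpy(out.data(), &hdr, sizeof(hdr));
  out.resize(sizeof(hdr) + out_len);
  return out;  // new buffer: header + compressed payload
}

// On readback: undo the LZO step and restore the original uncompressed buffer.
std::vector<unsigned char> uncompress_buffer(const unsigned char* in)
{
  BufferHeader hdr;
  std::memcpy(&hdr, in, sizeof(hdr));
  std::vector<unsigned char> out(hdr.original_length);
  lzo_uint out_len = out.size();
  lzo1x_decompress(in + sizeof(hdr), hdr.compressed_length, out.data(), &out_len, nullptr);
  return out;
}
```

The key property is that the wrapping stays invisible above the I/O layer: readers call the same buffer-fetching routine and simply get the restored, uncompressed buffer back.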

7 Distributed Compression
The compression is handled in the Assembly and Trigger Processors (ATPs) and can therefore be distributed over many CPUs -- that was the breakthrough.
The slide shows the data flow: the SEBs feed event fragments through a Gigabit crossbar switch to the ATPs, whose output goes to the buffer boxes and from there to HPSS. The Event Builder has to cope with the uncompressed data flow, e.g. 600 MB/s ... 1200 MB/s; the buffer boxes and the storage system only see the compressed data stream, 350 MB/s ... 650 MB/s.
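A toy illustration of the idea, not the actual Event Builder code: each ATP-like worker pulls assembled events from a shared queue and compresses them independently, so the aggregate compression throughput scales with the number of CPUs. compress_buffer() here is only a stand-in for the LZO wrapping sketched above.

```cpp
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Buffer = std::vector<unsigned char>;

// Stand-in for the LZO wrapping step from the previous sketch.
Buffer compress_buffer(const Buffer& in) { return in; }

// Minimal thread-safe queue of event buffers.
class EventQueue {
public:
  void push(Buffer b) { std::lock_guard<std::mutex> l(m_); q_.push(std::move(b)); }
  bool pop(Buffer& b) {
    std::lock_guard<std::mutex> l(m_);
    if (q_.empty()) return false;
    b = std::move(q_.front());
    q_.pop();
    return true;
  }
private:
  std::mutex m_;
  std::queue<Buffer> q_;
};

// One ATP-like worker: pull uncompressed events (from the SEB side),
// compress them, and push the result toward the buffer boxes.
void atp_worker(EventQueue& in, EventQueue& out) {
  Buffer event;
  while (in.pop(event)) out.push(compress_buffer(event));
}

// Drain a pre-filled input queue with ncpu workers in parallel;
// the compression load is spread over all of them.
void run_atps(EventQueue& in, EventQueue& out, int ncpu) {
  std::vector<std::thread> atps;
  for (int i = 0; i < ncpu; ++i) atps.emplace_back(atp_worker, std::ref(in), std::ref(out));
  for (auto& t : atps) t.join();
}
```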

8 Multi-Event Buffering: DAQ Evolution
PHENIX is a rare-event experiment, after all -- you don't want to go down this path. [Plot on slide: DAQ performance without MEB]

9 MEB: trigger delays by analog memory
- The trigger electronics needs to buy some time to make its decision; this is done by storing the signal charge in an analog memory unit (AMU).
- The memory keeps the state of some 40 us worth of bunch crossings.
- When the trigger decision arrives, the FEM goes back a given number of analog memory cells and digitizes the contents of that memory location.
- Multi-event buffering means starting the AMU sampling again while the current sample is still being digitized; the trigger busy is released much earlier and the deadtime is greatly reduced.
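A toy model of the scheme, assuming nothing about the real FEM firmware; the depths and latencies below are illustrative numbers only. The point it encodes: with multi-event buffering, busy is tied to a small queue of pending digitizations rather than to the digitization itself, so sampling and triggering continue while earlier samples are converted.

```cpp
#include <array>
#include <queue>

constexpr int kAmuDepth     = 64;  // cells covering ~40 us of bunch crossings (illustrative)
constexpr int kTriggerDelay = 40;  // Level-1 latency in crossings (illustrative)
constexpr int kMaxBuffered  = 5;   // multi-event buffer depth (illustrative)

// Analog memory unit: a ring of cells written on every beam crossing.
struct AMU {
  std::array<double, kAmuDepth> cell{};
  int write_addr = 0;

  void sample(double charge) {           // called every beam crossing
    cell[write_addr] = charge;
    write_addr = (write_addr + 1) % kAmuDepth;
  }

  int triggered_cell() const {           // cell holding the triggered crossing
    return (write_addr - kTriggerDelay + kAmuDepth) % kAmuDepth;
  }
};

// Front-end module: busy is asserted only when the queue of cells waiting for
// digitization is full, so AMU sampling continues while earlier samples are converted.
struct FEM {
  AMU amu;
  std::queue<int> pending;               // the "multi-event buffer"

  bool busy() const { return pending.size() >= kMaxBuffered; }

  void on_level1_accept() {              // latch the cell address, release busy early
    if (!busy()) pending.push(amu.triggered_cell());
  }

  void digitize_one() {                  // ADC conversion, runs in parallel with sampling
    if (!pending.empty()) pending.pop();
  }
};
```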

10 The Multi-Event Buffering Effect

11 ~600 MB/s
This shows the aggregated data rate from the DAQ to disk over a RHIC fill (the slide annotations mark the decay of the RHIC luminosity and the length of a DAQ run). We are very proud of this performance... It's not the best fill, it's one where I was there; the best RHIC fill went up to 650 MB/s.

12 Event statistics
5.7 billion events in ~650 TB of data (Run 6 pp: –)

13 Online Filtering and Reconstruction
- We ran Level-2 triggers in the ATPs in so-called filter mode: the Lvl2 triggers don't reject anything, but fish out interesting events for priority reconstruction.
- Filtered data were sent to IN2P3 in France, where resources were available AND where the people most interested in the filtered dataset are.
- ~10% of the min bias data were sent to Vanderbilt University, where computing resources were available, to reconstruct the data set, find problems in the reconstruction and the new detectors' software, make early DSTs available, and gear up for "real" production.
- A valuable tool to get a reading on how you are doing, as well as preliminary physics signals to check calibrations etc.
- Also used to refine our GRID file transfer procedures to "new" remote sites (not that much data volume was transferred during this run, ~70 TB -- Run ...: ... TB).
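A sketch of the filter-mode idea (the types and writer interfaces are hypothetical, not the PHENIX Level-2 framework): every event is written to the main stream, and events flagged by any Level-2 algorithm are additionally copied to a filtered stream for priority reconstruction.

```cpp
#include <functional>
#include <vector>

struct Event { /* assembled event data */ };
using Lvl2Algorithm = std::function<bool(const Event&)>;

// Filter mode: nothing is ever rejected. Interesting events are "fished out"
// into a separate stream, e.g. the filtered dataset shipped to IN2P3.
void process_event(const Event& evt,
                   const std::vector<Lvl2Algorithm>& lvl2_algorithms,
                   const std::function<void(const Event&)>& write_main,
                   const std::function<void(const Event&)>& write_filtered)
{
  write_main(evt);                       // every event goes to the main data stream
  for (const auto& trigger : lvl2_algorithms) {
    if (trigger(evt)) {                  // any firing Lvl2 algorithm tags the event
      write_filtered(evt);               // copy to the priority-reconstruction stream
      break;                             // one copy is enough, even if several triggers fire
    }
  }
}
```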

14 Summary
- Very successful run: 650 TB of data on tape despite the short run due to DOE budget woes
- Can do > 600 MB/s
- 4 new detector systems, which still needed some "shakedown"
- Reached a 5 kHz event rate in Au+Au with a larger event size
- Successful filtering effort for priority reconstruction
- First iterations of min bias data production at a remote site (Vanderbilt University)

15 Where we are w.r.t. others
[Chart on slide: approximate data logging rates of ATLAS, CMS, LHCb and ALICE, all in MB/s, all approximate, with values of roughly ~25, ~40, ~100 and ~300 MB/s.] Rates of several hundred MB/s are not so Sci-Fi these days.