
1 Performance and Upgrade Plans for the PHENIX Data Acquisition System. Martin L. Purschke, Brookhaven National Laboratory, for the PHENIX Collaboration. [Photo: RHIC seen from space, Long Island, NY]

2 RHIC/PHENIX at a glance
RHIC: 2 independent rings, one beam clockwise, the other counterclockwise. sqrt(s_NN) = 500 GeV x Z/A: ~200 GeV for heavy ions, ~500 GeV for (polarized) proton-proton.
PHENIX: 4 spectrometer arms, 15 detector subsystems, 500,000 detector channels, lots of readout electronics.
Uncompressed event size typically 280 / 220 / 130 kB for Au+Au, Cu+Cu, p+p. Data rate ~8 kHz (Au+Au). Front-end data rate 0.5 - 1.1 GB/s. Data logging rate ~500 MB/s, 700 MB/s max.
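To see how the numbers on this slide hang together, here is some back-of-the-envelope arithmetic (a minimal sketch; the accepted-rate and compression-factor values are assumptions for illustration, not official PHENIX figures):

```python
# Illustrative arithmetic only: how event size and accepted trigger rate
# combine into the front-end and logging bandwidths quoted on the slide.
event_size_kb = 280.0    # uncompressed Au+Au event size (slide value)
lvl1_rate_khz = 4.0      # assumed accepted Lvl1 rate; the slide quotes up to ~8 kHz
compression   = 0.55     # assumed size ratio of the compressed raw-data format

front_end_gb_s = event_size_kb * 1e3 * lvl1_rate_khz * 1e3 / 1e9
logging_mb_s   = front_end_gb_s * 1e3 * compression
print(f"front end: ~{front_end_gb_s:.1f} GB/s, logging: ~{logging_mb_s:.0f} MB/s")
```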

3 ...and 3 new detector systems on board. [Detector view with labels: TOF-W, RXNP, MPC-N, HBD]

4 Need for Speed: Where we are
[Chart: approximate data-logging bandwidths, all in MB/s, for ATLAS, CMS, LHCb, ALICE and PHENIX; values range from ~25 to ~1250 MB/s, with PHENIX at ~600 MB/s.]
Lvl1 triggers in heavy ions have a notoriously low rejection factor; that's because so many events have something that's interesting (different from the LHC). But hey, we could write out almost everything that RHIC gave us, so why bother... This approach has served us really well. It also opened up access to processes that you can't exactly trigger on; it "just" takes some more work offline.

5 Building up to record speed
Over the previous runs we have been adding improvements. Ingredients:
Distributed data compression (Run 4)
Multi-event buffering (Run 5)
Mostly consolidating the achievements, tuning, etc. in Run 6; also lots of improvements in operations (increased uptime)
10G network upgrade in Run 7; added Lvl2 filtering
We had lighter systems (d+Au, p+p, Cu+Cu) in the last runs, less of a challenge than 200 GeV Au+Au (the most challenging). With increased luminosity, we saw the previously demonstrated 600++ MB/s data rate in earnest for the first time.

6 Data Compression
We found that the raw data are still gzip-compressible after zero-suppression and other data reduction techniques, so we introduced a compressed raw data format that supports a late-stage compression.
The buffer is compressed with the LZO algorithm, and a new buffer is built with the compressed one as payload (add a new buffer header). This is what a file then looks like, versus what a file normally looks like.
On readback, the payload is LZO-unpacked and the original uncompressed buffer is restored.
All this is handled completely in the I/O layer; the higher-level routines just receive a buffer as before.
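A minimal sketch of this buffer-wrapping scheme, assuming Python with zlib standing in for LZO and an invented 16-byte header (marker, payload length, original length); the real PHENIX buffer format is different, but the idea of handling everything in the I/O layer is the same:

```python
import struct
import zlib  # stand-in for LZO; the wrapping scheme is what matters here

MAGIC = 0xC0DE  # hypothetical marker identifying a compressed buffer

def wrap(buffer_bytes: bytes) -> bytes:
    """Compress a raw buffer and prepend a small header, making a new
    buffer with the compressed original as payload."""
    payload = zlib.compress(buffer_bytes)
    header = struct.pack("<IIQ", MAGIC, len(payload), len(buffer_bytes))
    return header + payload

def unwrap(stored_bytes: bytes) -> bytes:
    """On readback: detect the header, decompress, and hand the original
    buffer to the higher-level routines unchanged."""
    magic, paylen, origlen = struct.unpack_from("<IIQ", stored_bytes)
    if magic != MAGIC:
        return stored_bytes        # uncompressed buffer: pass through as-is
    original = zlib.decompress(stored_bytes[16:16 + paylen])
    assert len(original) == origlen
    return original

if __name__ == "__main__":
    raw = b"zero-suppressed event data " * 1000
    stored = wrap(raw)
    print(len(raw), "->", len(stored), "bytes on disk")
    assert unwrap(stored) == raw   # higher-level code sees the same buffer
```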

7 Distributed Compression
The compression is handled in the "Assembly and Trigger Processors" (ATPs) and can thus be distributed over many CPUs -- that was the breakthrough.
The Event Builder (SEBs and ATPs connected by a Gigabit crossbar switch) has to cope with the uncompressed data flow, e.g. 600 MB/s ... 1200 MB/s. The buffer boxes and the storage system (HPSS) see the compressed data stream, 350 MB/s ... 700 MB/s.
[Diagram: SEB -> Gigabit crossbar switch -> ATP -> buffer boxes -> HPSS]
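An illustrative way to picture spreading the compression work over many CPUs (not the actual ATP code; it simply applies a stand-in per-event compression across a process pool):

```python
# Illustrative only: distribute per-event compression over many CPUs,
# the way the ATPs spread the compression work across the farm.
import zlib
from multiprocessing import Pool

def compress_event(event: bytes) -> bytes:
    # stand-in for the per-event LZO compression done in an ATP
    return zlib.compress(event)

if __name__ == "__main__":
    # fake "assembled events" coming out of the event builder, ~280 kB each
    events = [bytes([i % 256]) * 280_000 for i in range(64)]
    with Pool() as pool:                       # one worker per core
        compressed = pool.map(compress_event, events)
    before = sum(map(len, events))
    after = sum(map(len, compressed))
    print(f"{before/1e6:.0f} MB in -> {after/1e6:.2f} MB out")
```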

8 Multi-Event Buffering: DAQ Evolution
Multi-event buffering means starting the AMU sampling again while the current sample is still being digitized. The trigger busy is released much earlier, so the deadtime is greatly reduced.
Without MEB, the DAQ stays busy for the whole digitization; PHENIX is a rare-event experiment, after all -- you don't want to go down this path.
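A back-of-the-envelope feel for the effect, using a simple non-paralyzable dead-time model where the live fraction is 1/(1 + rate x busy time); the busy-time numbers below are assumptions for illustration, not the measured PHENIX values:

```python
# Simple non-paralyzable dead-time model; the busy times below are
# illustrative assumptions, not measured PHENIX values.
def live_fraction(trigger_rate_hz: float, busy_time_s: float) -> float:
    return 1.0 / (1.0 + trigger_rate_hz * busy_time_s)

rate = 8_000.0            # ~8 kHz Au+Au Lvl1 rate (slide above)
busy_without_meb = 40e-6  # assumed: busy until digitization finishes
busy_with_meb = 5e-6      # assumed: busy only until AMU sampling can restart

print(f"without MEB: {live_fraction(rate, busy_without_meb):.0%} live")
print(f"with    MEB: {live_fraction(rate, busy_with_meb):.0%} live")
```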

9 The Multi-Event Buffering Effect

10 600++ MB/s
This shows the aggregated data rate from the DAQ to disk over a RHIC fill (annotations: decay of the RHIC luminosity; length of a DAQ run). We are very proud of this performance... It's not the best fill, it's one where I was there... the best RHIC fill went up to 650 MB/s.

11 Run 9 event statistics (so far)
p+p 500 GeV (done): 3.5 billion events, 200 TB.
p+p 200 GeV (in 3rd week): 1.5 billion events, 80 TB so far.
Typical is 500...700 TB in a Run. "Physics at the Peta-Scale" (a recent workshop title) has long arrived for us: 58% of a petabyte of data on tape, and Run 9 is still ongoing as we speak...
[Chart: PHENIX raw data - 280 TB total: 200 TB (500 GeV) and 80 TB (200 GeV)]
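A quick sanity check one can do with the slide's own numbers, giving the average size of an event on tape after compression (illustrative arithmetic only):

```python
# Average on-tape event size from the Run 9 statistics quoted above.
datasets = [("p+p 500 GeV", 3.5e9, 200e12),   # events, bytes on tape
            ("p+p 200 GeV", 1.5e9, 80e12)]
for name, n_events, n_bytes in datasets:
    print(f"{name}: ~{n_bytes / n_events / 1e3:.0f} kB per event on tape")
```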

12 Upgrade Programs
RHIC will give us several luminosity and beam-livetime upgrades. The era where we could mostly write out "everything" is coming to an end.
The future: we will add detectors in the central region which will significantly increase our data volume.

13 Upgrades
3 main new detectors (that's in addition to the ones I showed before as "on board"): the Vertex / Forward Vertex detectors (VTX / FVTX), a muon trigger upgrade, RPC detectors.

Detector  | DCM groups | Occupancy     | Event size (kbyte) | Data rate (Gbps)*
VTX strip | 2          | 4.5% / 2.5%   | 90                 | 1.25
VTX pixel | 3          | 0.54% / 0.16% | 39                 | 1.92
FVTX      | 6          | 2.8%          | 100                | 1.07
Total     | 11         |               | 239                |

Triples the current event size - the 800-pound gorilla.

14 Upgrades
All new detectors have electronics with high rate capability. However, the older detector readout limits the Level-1 rate, with no way to upgrade any time soon - $$$$$. We will need to focus more on rare events.
[Diagram: CMS trigger/DAQ chain - detectors -> front-end pipelines -> Lvl-1 (40 MHz -> 100 kHz) -> readout buffers -> switching network -> processor farms / HLT (-> ~100 Hz)]
Remember, our Lvl1 is not the LHC Lvl1... ours is before digitization, so an HLT is no solution for us. Hence: the FVTX has a Lvl1 trigger "hookup" for displaced-vertex triggers, and the other upgrade is a trigger to begin with (W -> muon).
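To put numbers on the rejection factors in that (CMS-style) chain versus a heavy-ion Lvl1 with low rejection, here is a trivial sketch; the heavy-ion input and accepted rates are assumptions for illustration:

```python
# Rejection factors: the CMS-style chain quoted on the slide versus an
# assumed heavy-ion Lvl1 where most events are accepted.
def rejection(rate_in_hz: float, rate_out_hz: float) -> float:
    return rate_in_hz / rate_out_hz

print("CMS Lvl-1 :", rejection(40e6, 100e3))   # 40 MHz -> 100 kHz
print("CMS HLT   :", rejection(100e3, 100.0))  # 100 kHz -> ~100 Hz
print("HI  Lvl-1 :", rejection(8e3, 5e3))      # assumed: most events pass
```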

15 Prototype RPC + μ Trigger
[Photos/diagram: prototype RPC; RPC2 and RPC3 stations; view from the back; RPCs detect μ+/-]

16 Need for Speed
The total event-building bandwidth is several times the average throughput, but individual detectors/components are close to the max (network, processing).
Replace the Data Collection Module (DCM) with the DCM II - packaging / zero-suppression of the data. The DCM was modern in its day, DSP-based plus some FPGAs; the DCM II uses the latest FPGA technology, with the FPGA as the main component.
[Diagram: Event Builder - SEB -> Gigabit / 10G crossbar switch -> ATP -> buffer box, with 10G links]
10G networks are slowly becoming a commodity. They allow better use of multi-core machines and save money, power, and A/C in the end. In the same spirit, replace PCI with PCI Express.
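A rough illustration of why the 10G upgrade and multi-core hosts go together: the number of links needed to carry the uncompressed event-builder traffic drops by an order of magnitude (the usable-capacity fraction is an assumption):

```python
# Illustrative: how many network links the uncompressed event-builder
# traffic needs at 1 Gbit/s versus 10 Gbit/s.
import math

uncompressed_mb_s = 1200.0   # peak event-builder throughput (slide value)
usable = 0.8                 # assumed usable fraction of raw link speed

for link_gbit in (1, 10):
    link_mb_s = link_gbit * 1e9 / 8 / 1e6 * usable
    links = math.ceil(uncompressed_mb_s / link_mb_s)
    print(f"{link_gbit:>2} Gbit/s links: at least {links} needed")
```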

17 Outlook
New hardware components to help maintain our current speed: PCI Express, 10 Gb/s networks, the DCM II upgrade.
The End

18 W Production Basics
No fragmentation! [Diagram: quark-antiquark (u, d) annihilation producing a W]
A similar expression for W- gives Δū and Δd. Since the W is maximally parity violating, large Δu and Δd imply large measured asymmetries.
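The W+ expression referred to here did not survive the transcript; for reference, the standard leading-order parton-level form of the longitudinal single-spin asymmetry (sign conventions vary between references) is:

```latex
A_L^{W^+} \;=\; \frac{\Delta u(x_1)\,\bar{d}(x_2) \;-\; \Delta\bar{d}(x_1)\,u(x_2)}
                     {u(x_1)\,\bar{d}(x_2) \;+\; \bar{d}(x_1)\,u(x_2)}
```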

19 Muon Trigger and RPC upgrade
W physics with polarized protons. The trigger will allow enhancing the sample of high-momentum / straight-track muons. The RPC adds timing to reduce the large background from non-collision muons (beam, cosmics). [Diagram: RPCs and μ+/- tracks]

20 FVTX
The fitted track provides a DCA to the primary vertex (measured by the central-arm barrel VTX detector), distinguishing prompt muons. Pinpoint the decay vertex to eliminate backgrounds! The endcap detects the following by the displaced vertex (Δr, Δz) of the muons: D (charm) -> μ + X; B (beauty) -> μ + X; B -> J/ψ + X -> μ+ μ-.

21 FVTX Section View (slide credit: Jon S. Kapustinsky, 2008 IEEE NSS-MIC, Dresden, 21 October 2008)
4 discs of Si sensor in the acceptance of each Muon Arm. Microstrips to accurately measure the R coordinate of the track. Scheduled to be installed in FY11.
[Drawing: section view, ~80 cm; barrel; two endcap halves; ½ of one endcap; ½ disks]

