Slide 1
Niko Neufeld, CERN/PH
Slide 2
- Online data filtering and processing: (quasi-)realtime data reduction for high-rate detectors
- High-bandwidth networking for data acquisition: Terabit/s networks
- Data transfer and storage
Slide 3
| Parameter                                    | LHCb Run 2        | LHCb Run 3  |
|----------------------------------------------|-------------------|-------------|
| Max. inst. luminosity [cm^-2 s^-1]           | 4 x 10^32         | 2 x 10^33   |
| Event size (mean, zero-suppressed) [kB]      | ~60 (L0-accepted) | ~100        |
| Event-building rate [MHz]                    | 1                 | 40          |
| # read-out boards                            | ~330              | 400 - 500   |
| Link speed from detector [Gbit/s]            | 1.6               | 4.5         |
| Output data rate per read-out board [Gbit/s] | 4                 | 100         |
| # detector links per read-out board          | up to 24          | up to 48    |
| # farm nodes                                 | ~1600             | 1000 - 4000 |
| # 100 Gbit/s links (from event-builder PCs)  | n/a               | 400 - 500   |
| Final output rate to tape [kHz]              | 12                | 20 - 100    |
Slide 4
[Diagram: LHCb Run 3 online architecture — detector front-end electronics in UX85B send data over ~8800 Versatile Links to PCIe40 readout cards feeding the event-builder network and event-builder PCs (+ software LLT) on the Point 8 surface; TFC distributes clock & fast commands (with throttle back from the PCIe40) and ECS provides controls; 6 x 100 Gbit/s links connect each of the ~80 subfarm switches of the event-filter farm, plus online storage.]
Slide 5
- 40 MHz == collision rate
- LHCb abandons Level 1 for an all-software trigger
- O(100) Tbit/s networks required
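A quick sanity check on that network figure, using the Run 3 numbers from the table on slide 3 (40 MHz event-building rate, ~100 kB mean event size):

  40 MHz x 100 kB = 4 TB/s = 32 Tbit/s

of raw event data; with protocol overhead, headroom and the all-to-all traffic pattern of event building, this motivates networks on the O(100) Tbit/s scale.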
Slide 6
Partners: CERN, Intel
Goal: apply Intel technologies to Online computing challenges
Work packages:
1. Interconnect for DAQ (Stormlake)
2. Knights Landing for software triggers (HLT and L1.5)
3. Reconfigurable logic (Xeon/FPGA)
Use the LHCb use-case as a concrete example problem, but try to keep solutions and reports as widely applicable as possible
Slide 7
1) “Level-1” using (more/all) COTS hardware ✔
2) “Data Acquisition” ✔
3) “High Level Trigger” ✔
4) “Controls”
5) “Online data processing” ✔
6) “Exporting data”
✔ == covered in HTCC
Slide 12
The Level 1 trigger is implemented in hardware (FPGAs and ASICs):
- difficult / expensive to upgrade or change; maintenance by experts only
- decision time: ~ a few microseconds
- uses simple, hardware-friendly signatures, so it loses interesting collisions
- each sub-detector has its own solution; only the uplink is standardized
Slide 13
Can we do this in software, using GPGPUs / Xeon Phis?
- We need low and near-deterministic latency
- We need an efficient interface to the detector hardware: a CPU/FPGA hybrid?
Slide 15
[Diagram: detector, separated by 100 m of rock from the surface — ~10000 custom radiation-hard links from the detector feed ~1000 Readout Units; DAQ (“event-building”) links over some LAN (10/40/100 GbE / InfiniBand); links into ~3000 Compute Units, typically 1/10 Gbit/s.]
- Every Readout Unit has a piece of the collision data
- All pieces must be brought together into a single Compute Unit
- The Compute Unit runs the software filtering (High Level Trigger, HLT)
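Purely to make the event-building logic above concrete, here is a minimal C++ sketch (the types, names and the N_SOURCES value are hypothetical illustrations, not LHCb code): each Readout Unit contributes one fragment per collision, and the Compute Unit only hands an event to the HLT once all fragments for that event ID have arrived.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <vector>

// Toy event building: N_SOURCES readout units each hold one piece of
// every collision; a compute unit assembles the pieces into a complete
// event before running the software trigger (HLT).
constexpr std::size_t N_SOURCES = 4;  // real systems: ~1000 readout units

struct Fragment {
    std::uint64_t event_id;   // which collision this piece belongs to
    std::uint16_t source_id;  // which readout unit produced it
    std::vector<std::byte> payload;
};

class EventBuilder {
public:
    // Called for every fragment arriving over the DAQ network; returns
    // the complete event once all sources have reported for its ID.
    std::optional<std::vector<Fragment>> add(Fragment f) {
        auto& parts = pending_[f.event_id];
        parts.push_back(std::move(f));
        if (parts.size() == N_SOURCES) {        // event is complete
            auto done = std::move(parts);
            pending_.erase(done.front().event_id);
            return done;
        }
        return std::nullopt;                    // still waiting for pieces
    }
private:
    std::map<std::uint64_t, std::vector<Fragment>> pending_;
};

int main() {
    EventBuilder eb;
    for (std::uint16_t src = 0; src < N_SOURCES; ++src) {
        if (auto ev = eb.add({42, src, {}})) {
            std::cout << "event 42 complete: " << ev->size()
                      << " fragments -> run HLT\n";
        }
    }
}
```

A real event builder additionally shards events across the many compute units and handles lost fragments and timeouts; the essential point is that completion is tracked per event ID.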
Slide 16
- Transport large amounts of data (multiple Terabit/s at the LHC) reliably and cost-effectively
- Integrate the network closely and efficiently with compute resources (be they classical CPUs or “many-core”)
- Multiple network technologies should seamlessly co-exist in the same integrated fabric (“the right link for the right task”), as an end-to-end solution from online processing to the scientist’s laptop (e.g. light sources)
Slide 19
Can be much more complicated: lots of tracks / rings, curved / spiral trajectories, spurious measurements and various other imperfections
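For readers unfamiliar with the problem, a toy C++ sketch of the simplest version of this task — straight-track finding with a Hough transform (an illustrative example, not an LHCb algorithm); the complications listed above are exactly what make production pattern recognition much harder than this:

```cpp
#include <array>
#include <cmath>
#include <iostream>
#include <vector>

// Hough transform for straight tracks: each hit (x, y) votes for every
// line r = x*cos(theta) + y*sin(theta) it could lie on; peaks in the
// (theta, r) accumulator are track candidates.
struct Hit { double x, y; };

int main() {
    const double PI = std::acos(-1.0);
    // Three collinear hits on the line y = x, plus one noise hit.
    std::vector<Hit> hits = {{1, 1}, {2, 2}, {3, 3}, {0.5, 2.7}};

    constexpr int N_THETA = 180, N_R = 100;
    constexpr double R_MAX = 10.0;
    std::array<std::array<int, N_R>, N_THETA> votes{};  // accumulator

    for (const auto& h : hits) {
        for (int t = 0; t < N_THETA; ++t) {
            double theta = t * PI / N_THETA;
            double r = h.x * std::cos(theta) + h.y * std::sin(theta);
            int rbin = static_cast<int>((r + R_MAX) / (2 * R_MAX) * N_R);
            if (rbin >= 0 && rbin < N_R) ++votes[t][rbin];
        }
    }

    // The most-voted (theta, r) cell is the best track candidate.
    int best_t = 0, best_r = 0;
    for (int t = 0; t < N_THETA; ++t)
        for (int r = 0; r < N_R; ++r)
            if (votes[t][r] > votes[best_t][best_r]) { best_t = t; best_r = r; }

    std::cout << "best line candidate: theta = " << best_t * 180.0 / N_THETA
              << " deg, r = " << 2.0 * R_MAX * best_r / N_R - R_MAX
              << ", hits = " << votes[best_t][best_r] << "\n";
}
```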
Slide 20
- Existing code base: 5 MLOC, mostly C++
- Almost all algorithms are single-threaded (only a few exceptions)
- Current processing time on an X5650: several tens of ms per event per process (hyper-thread)
- Currently between 100k and 1 million events per second are filtered online in each of the 4 LHC experiments
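These numbers hang together: taking the ~1600 farm nodes from the Run 2 column of the table on slide 3, and assuming for illustration ~24 hyper-threads per node and 30 ms per event (assumed values, consistent with the “several tens of ms” above):

  1600 nodes x 24 threads / 30 ms per event ≈ 1.3 x 10^6 events/s

which is the same order of magnitude as the quoted online filtering rates.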
Slide 21
- Make the code-base ready for multi/many-core (this is not Online-specific!)
- Optimize the online processing compute in terms of cost, power and cooling
- Find the best architecture integrating “standard servers”, many-core systems and a high-bandwidth network
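As a minimal sketch of the first bullet — an illustrative pattern under assumed names, not the experiments’ actual framework — a common first step is to keep each algorithm single-threaded but process many events concurrently from a shared work counter:

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

// Stand-in for the legacy single-threaded reconstruction + selection.
bool process_event(std::uint64_t event_id) {
    return event_id % 1000 == 0;  // toy "trigger decision"
}

int main() {
    constexpr std::uint64_t N_EVENTS = 1'000'000;
    std::atomic<std::uint64_t> next{0};      // shared work counter
    std::atomic<std::uint64_t> accepted{0};

    // Each worker claims the next unprocessed event atomically, so the
    // per-event code itself needs no changes to run on many cores.
    auto worker = [&] {
        for (std::uint64_t id; (id = next.fetch_add(1)) < N_EVENTS; )
            if (process_event(id)) ++accepted;
    };

    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n_threads; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();

    std::cout << "accepted " << accepted << " / " << N_EVENTS << " events\n";
}
```

This inter-event parallelism is the cheap first step; sharing large data structures (geometry, conditions) across threads and parallelizing within an event are the harder parts of the migration.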