Slide 1
HTCC for LHCb
5th LHCb Computing Workshop, May 19th 2015
Niko Neufeld, CERN/PH-Department, niko.neufeld@cern.ch
Slide 2
Apply upcoming Intel technologies in an Online context: data acquisition and event building, and accelerator-assisted processing for the high-level trigger. This is about:
- Intel's upcoming new network technology, Omni-Path (100 Gigabit/s)
- Accelerators, here meaning Intel Xeon Phi (a.k.a. MIC, Knights Landing, Knights Corner, etc.)
- The Intel Xeon/FPGA concept: an FPGA sitting in a CPU socket together with a Xeon, programmable as an accelerator (OpenCL, HDL, as needed)
Slide 3
Intel has announced plans for the first Xeon with a cache-coherent FPGA, providing new capabilities. We want to explore this to:
- Move from firmware to software: custom hardware becomes commodity
- Meet real-time requirements: algorithms must decide in O(10) microseconds or force default decisions (interesting for experiments keeping a first-level trigger; a minimal sketch of this pattern follows below)
For LHCb, study:
- The muon trigger / muon reconstruction: could be used as a "software" LLT or simply assist the HLT
- Comparing the performance of kernelized algorithms between Xeon, FPGA and Xeon Phi; the FPGA can be hand-tuned if necessary (preferably not, for maintainability)
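A minimal C++ sketch of the deadline/default-decision pattern mentioned above: run a (hypothetical) accelerator-assisted muon algorithm asynchronously and fall back to a default decision if it has not answered within the time budget. The names (runMuonTrigger, Decision, decideWithDeadline) are illustrative assumptions, not actual LHCb code.

```cpp
#include <chrono>
#include <future>

enum class Decision { Accept, Reject };

// Stand-in for an accelerator-assisted algorithm (e.g. muon reconstruction
// offloaded to the Xeon/FPGA); hypothetical, for illustration only.
Decision runMuonTrigger() { return Decision::Reject; }

// Decide within the given budget, or force the default decision.
Decision decideWithDeadline(std::chrono::microseconds budget,
                            Decision fallback = Decision::Accept) {
  auto result = std::async(std::launch::async, runMuonTrigger);
  if (result.wait_for(budget) == std::future_status::ready)
    return result.get();  // algorithm answered in time
  return fallback;        // deadline missed: default decision
  // Note: the future's destructor still waits for the task to finish;
  // a real system would hand the slot back to a worker pool instead.
}

int main() {
  return decideWithDeadline(std::chrono::microseconds(10)) == Decision::Accept ? 0 : 1;
}
```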
Slide 4
Crucial for the Online upgrade, but not so relevant for this session:
- Explore Intel's new fabric, Omni-Path, to build the next generation of data-acquisition systems
- Use CPU-fabric integration to minimise transport overheads
- Use Omni-Path to integrate Xeon, Xeon Phi and the Xeon/FPGA concept as compute units in optimal proportions
This is potentially very relevant for LHCb in defining the optimal HLT "farm" of the future: if we need 1000 servers for the processing, who says we also need 1000 Xeon Phi cards rather than 400, 350 or 621? If everything is connected by a high-speed network we can choose and distribute load as we need (a toy sketch follows below). The framework of the future must support this, but it can already be done with the current GPUManager (maybe not in the most efficient way).
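A toy C++ sketch of that idea: once the compute units sit on a common high-speed fabric, events can be dispatched to whatever mix of Xeon, Xeon Phi or Xeon/FPGA nodes has free capacity, instead of fixing a one-to-one server-to-accelerator ratio. The proportions and the pickUnit policy are invented for illustration.

```cpp
#include <iostream>
#include <string>
#include <vector>

struct ComputeUnit {
  std::string kind;  // "Xeon", "XeonPhi", "Xeon/FPGA", ...
  int freeSlots;     // events it can still take
};

// Naive policy: send the event to the unit with the most free capacity,
// regardless of its type.
ComputeUnit* pickUnit(std::vector<ComputeUnit>& farm) {
  ComputeUnit* best = nullptr;
  for (auto& u : farm)
    if (u.freeSlots > 0 && (!best || u.freeSlots > best->freeSlots))
      best = &u;
  return best;
}

int main() {
  // Invented proportions: nothing forces one accelerator per server.
  std::vector<ComputeUnit> farm{{"Xeon", 1000}, {"XeonPhi", 400}, {"Xeon/FPGA", 200}};
  for (int event = 0; event < 5; ++event)
    if (ComputeUnit* u = pickUnit(farm)) {
      --u->freeSlots;
      std::cout << "event " << event << " -> " << u->kind << '\n';
    }
}
```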
Slide 5
Intel's answer to GPGPUs: a stand-alone server with
- 60+ x86 cores (single-core performance up 3x from the original KNC)
- Large, fast on-chip memory (16 GB) plus lots of DDR4
- An integrated interconnect (Omni-Path)
We will check if and how much this improves existing GPGPU codes (PatPixel), and benchmark LHCb code. But even here we will only be able to profit for at least moderately parallelised algorithms (see the sketch below).
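A minimal sketch of the kind of "at least moderately parallelised" loop that can actually exploit the many cores and wide vector units of such a part; purely sequential code would see little benefit. The function and its data are illustrative assumptions, not taken from LHCb code.

```cpp
#include <cstddef>
#include <vector>

// Thread-level parallelism across the cores, SIMD within each core.
// Build with the compiler's OpenMP flag (e.g. -qopenmp for icc, -fopenmp for gcc).
void transformHits(std::vector<float>& x, float a, float b) {
  #pragma omp parallel for simd
  for (std::size_t i = 0; i < x.size(); ++i)
    x[i] = a * x[i] + b;
}

int main() {
  std::vector<float> hits(1 << 20, 1.f);
  transformHits(hits, 2.f, 0.5f);
  return hits.front() == 2.5f ? 0 : 1;
}
```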
Slide 6
- 2 fellows in LBC: Christian Faerber (physicist) + ??? (waiting for the result from the AFC) (computer scientist)
- 1 technical student in LBC: Karel Ha (computer scientist)
- 1 staff in IT/CF: ??? (board completed) (computer scientist)
Slide 7
- Participate in whatever optimisation team LHCb sets up
- Participate in code reviews
- Try out new approaches for CPU-intensive algorithms (tracking? particle ID? others?)
- Consultancy and training for developers
- Port the LHCb code base to icc and maintain the build (needed for Xeon Phi and to have a 2nd compiler; see the sketch after this list)
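One small, hypothetical illustration of what maintaining a second compiler means in practice: compiler-specific pragmas and hints have to be guarded so the same source builds with both gcc and icc. The macro and function names are invented for the example.

```cpp
// Guard compiler-specific vectorisation hints behind one macro.
#if defined(__INTEL_COMPILER)
#  define LHCB_IVDEP _Pragma("ivdep")
#elif defined(__GNUC__)
#  define LHCB_IVDEP _Pragma("GCC ivdep")
#else
#  define LHCB_IVDEP
#endif

// The loop body is the same for every compiler; only the hint differs.
void scale(float* out, const float* in, int n, float a) {
  LHCB_IVDEP
  for (int i = 0; i < n; ++i)
    out[i] = a * in[i];
}
```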
Slide 8
Simulation and generator optimisation are dealt with in a different Intel project (with Geant-V).