Intel/CERN High Throughput Computing Collaboration (openlab Open Day)
Niko Neufeld, CERN/PH-Department

Apply upcoming Intel technologies in an Online / Trigger & DAQ context. Application domains: L1-trigger, data acquisition and event-building, and accelerator-assisted processing for the high-level trigger.

15 million sensors, each giving a new value 40 million times per second:
~15,000,000 × 40,000,000 bytes ≈ 600 TB/sec (running 16 out of 24 hours, 120 days a year).
We can (afford to) store about O(1) GB/s.
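As a back-of-the-envelope check of the arithmetic quoted above (the sensor count, sampling rate and 1 GB/s storage budget are simply the numbers from the slide, not exact detector figures):

```python
# Back-of-the-envelope check of the data-rate arithmetic (illustrative only).
sensors = 15_000_000           # ~15 million sensors, one byte each per sample
sampling_rate_hz = 40_000_000  # a new value 40 million times per second (40 MHz)

raw_rate = sensors * sampling_rate_hz   # bytes per second
storable_rate = 1e9                     # O(1) GB/s we can afford to store

print(f"raw rate        : {raw_rate / 1e12:.0f} TB/s")        # ~600 TB/s
print(f"reduction needed: ~{raw_rate / storable_rate:,.0f}x")  # ~600,000x
```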

Selection systems are called "Triggers" in high-energy physics. The data reduction happens in three stages:
1. Thresholding and tight encoding
2. Real-time selection based on partial information
3. Final selection using the full information of the collisions
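A purely illustrative sketch of this staged reduction is shown below: each stage sees only what the previous one accepted and uses progressively more of the event. The event content, cuts and rejection factors are invented for illustration, not LHC figures.

```python
# Illustrative three-stage selection ("trigger") chain; all numbers are made up.
import random

def make_event():
    """Fake 'collision': a list of channel amplitudes."""
    return [random.gauss(0, 1) for _ in range(100)]

def stage1_threshold(event, cut=2.0):
    """Thresholding + tight encoding: keep only (index, amplitude) above cut."""
    return [(i, a) for i, a in enumerate(event) if a > cut]

def stage2_partial(sparse_event, min_hits=5):
    """Real-time selection on partial information: a cheap multiplicity cut."""
    return len(sparse_event) >= min_hits

def stage3_full(sparse_event, min_sum=12.0):
    """Final selection using the full (here: summed) event information."""
    return sum(a for _, a in sparse_event) >= min_sum

accepted = 0
for _ in range(10_000):
    ev = stage1_threshold(make_event())
    if stage2_partial(ev) and stage3_full(ev):
        accepted += 1
print(f"kept {accepted} of 10000 events")
```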

A combination of (radiation-hard) ASICs and FPGAs processes the data of "simple" sub-systems with "few" (O(10000)) channels in real time; the other channels need to buffer their data on the detector.
- This only works well for "simple" selection criteria
- Long-term maintenance issues with custom hardware and low-level firmware
- Crude algorithms miss a lot of interesting collisions

Intel has announced plans for the first Xeon with a coherent FPGA, providing new capabilities. We want to explore this to:
- Move from firmware to software
- Move from custom hardware → commodity
Rationale:
- HEP has a long tradition of using FPGAs for fast, online processing
- We need real-time characteristics: algorithms must decide in O(10) microseconds or force default decisions (even detectors without real-time constraints will profit)

- Port the existing (Altera) FPGA-based LHCb muon trigger to Xeon/FPGA: it currently uses 4 crates with > 400 Stratix II FPGAs → move to a small number of FPGA-enhanced Xeon servers
- Study ultra-fast track reconstruction techniques for 40 MHz tracking ("track-trigger")
- Collaboration with Intel DCG IPAG-EU (Data Center Group, Innovation Pathfinding Architecture Group, EU)

[Diagram: Detector → readout units → DAQ network → compute units; annotated multiplicities ×~1000 and ×~3000]
- Pieces of collision data are spread out over many links and received by O(100) readout units
- All pieces must be brought together into one of thousands of compute units → requires a very fast, large switching network
- The compute units run complex filter algorithms
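To make the event-building step concrete, here is a minimal sketch of the bookkeeping a builder node has to do: collect one fragment per data source for each event ID and hand complete events to a filter (compute) unit. It illustrates the concept only, not LHCb's implementation; the class, names and numbers are invented.

```python
# Minimal event-builder bookkeeping sketch (illustrative only).
# Fragments arrive in any order from many sources; an event is complete when
# one fragment per source has been received, and is then handed to a filter unit.
from collections import defaultdict

N_SOURCES = 100  # hypothetical number of readout units

class EventBuilder:
    def __init__(self, n_sources=N_SOURCES):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)   # event_id -> {source_id: fragment}

    def add_fragment(self, event_id, source_id, fragment):
        """Store one fragment; return the complete event once all sources arrived."""
        frags = self.pending[event_id]
        frags[source_id] = fragment
        if len(frags) == self.n_sources:
            return self.pending.pop(event_id)   # complete event, ready for filtering
        return None

def filter_unit(event):
    """Placeholder for the complex selection algorithms run on a compute unit."""
    return sum(len(f) for f in event.values()) > 0

# Usage sketch: feed fragments as they arrive from the network.
builder = EventBuilder(n_sources=3)
for ev_id in range(2):
    for src in range(3):
        complete = builder.add_fragment(ev_id, src, fragment=b"\x00" * 64)
        if complete is not None:
            print(ev_id, "accepted" if filter_unit(complete) else "rejected")
```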


Explore Intel's new OmniPath interconnect to build the next-generation data acquisition systems:
- Build a small demonstrator DAQ
- Use CPU-fabric integration to minimise transport overheads
- Use OmniPath to integrate Xeon, Xeon/Phi and the Xeon/FPGA concept in optimal proportions as compute units
- Work out a flexible concept
- Study smooth integration with Ethernet ("the right link for the right task")

Pack the knowledge of tens of thousands of physicists and decades of research into a huge, sophisticated algorithm:
- Several lines of code
- Takes (only!) a few milliseconds per collision
"And this, in simple terms, is how we find the Higgs boson"


It can be much more complicated: lots of tracks / rings, curved / spiral trajectories, spurious measurements and various other imperfections.

- Complex algorithms; hot spots are difficult to identify → cannot be accelerated by optimising 2-3 kernels alone
- Classical algorithms are very "sequential"; parallel versions need to be developed and their correctness (same physics!) needs to be demonstrated
- A lot of throughput is necessary → high memory bandwidth, strong I/O
- There is a lot of potential for parallelism, but the SIMT kind (GPGPU-like) is challenging for many of our problems
HTCC will use the next-generation Xeon/Phi (KNL) and port critical online applications as demonstrators (see the sketch below):
- LHCb track reconstruction ("Hough transformation & Kalman filtering")
- Particle identification using RICH detectors
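For orientation, the core of the Hough-transform idea used in track finding fits in a few lines: every hit votes for all track parameters it is compatible with, and peaks in the accumulator correspond to track candidates. The sketch below is a generic straight-line version in Python/NumPy, not the LHCb code; the binning, threshold and test data are illustrative assumptions.

```python
# Generic Hough-transform sketch for straight-line track finding (illustrative only;
# the real LHCb reconstruction works on a detector-specific parametrisation).
import numpy as np

def hough_lines(hits, n_theta=180, n_r=200, r_max=100.0, min_votes=8):
    """hits: array of (x, y) points; returns (r, theta) of accumulator peaks."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    r_edges = np.linspace(-r_max, r_max, n_r + 1)
    acc = np.zeros((n_r, n_theta), dtype=np.int32)

    for x, y in hits:
        # Each hit votes for every line r = x*cos(theta) + y*sin(theta) through it.
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_bin = np.digitize(r, r_edges) - 1
        ok = (r_bin >= 0) & (r_bin < n_r)
        acc[r_bin[ok], np.nonzero(ok)[0]] += 1

    peaks = np.argwhere(acc >= min_votes)
    return [(0.5 * (r_edges[i] + r_edges[i + 1]), thetas[j]) for i, j in peaks]

# Usage sketch: 20 hits on the line y = 0.5*x + 10, plus some noise hits.
xs = np.arange(20.0)
track_hits = np.column_stack([xs, 0.5 * xs + 10.0])
noise = np.random.uniform(-50, 50, size=(30, 2))
candidates = hough_lines(np.vstack([track_hits, noise]))
print(f"{len(candidates)} track candidate bin(s) found")
```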

- The LHC experiments need to reduce 100 TB/s to ~25 PB/year
- Today this is achieved with massive use of custom ASICs, in-house-built FPGA boards and x86 computing power
- Finding new physics requires a massive increase of processing power, much more flexible algorithms in software and much faster interconnects
- The CERN/Intel HTC Collaboration will explore Intel's Xeon/FPGA concept, Xeon/Phi and OmniPath technologies for building future LHC TDAQ systems