Status report on DAQ. E. Dénes – Wigner RCF/NFI. ALICE Bp meeting – March 30, 2012.


1 Status report on DAQ. E. Dénes – Wigner RCF/NFI. ALICE Bp meeting – March 30, 2012.

2 Visit of the ALICE DAQ group at Wigner RCF, March 6-7, 2012
CERN participants: Paolo Giubellino (ALICE spokesperson), Pierre Vande Vyvre (head of the ALICE DAQ group), Filippo Costa (member of the ALICE DAQ group)
From Wigner RCF: Péter Lévai, Ervin Dénes, Tivadar Kiss, György Rubin, Tamás Tölyhi
Students: Kristóf Blutman (BME), Gábor Kiss (ELTE), Hunor Melegh (BME)

3 Main talks
- Péter Lévai: About Wigner RCF
- Paolo Giubellino: ALICE upgrade strategy (incl. the detector upgrades)
- Pierre Vande Vyvre: ALICE online upgrade*
- Two half-day discussions
- Self-introduction of the students
- Roadmap*
- Visit to Cerntech
* See the following slides

4 Present Online Architecture
[Architecture diagram: the CTP distributes the L0/L1a/L2 triggers via LTU/TTC to the detector front-end electronics (FERO/RCU); event fragments flow over 360 DDLs into D-RORCs hosted in the detector LDCs, with a branch over H-RORCs to the HLT farm (FEP nodes) and back to the HLT LDCs; sub-events are assembled over the Event Building Network into the GDCs, and full events are written via the Storage Network and the TDSM to the TDS, with archiving on tape in the Computing Centre (Meyrin). Approximate scale: 430 D-RORCs, 125 detector LDCs, 10 HLT LDCs, 75 GDCs, 30 TDSMs, 18 DSS, 60 DA/DQM, 75 TDS.]

5 ALICE Upgrade Strategy
The strategy recently approved by ALICE presents a global and coherent plan to upgrade the experiment for 2018 (Long Shutdown 2, LS2): "Upgrade Strategy for the ALICE Central Barrel".
Key concepts for running the experiment at high rate:
- Pipelined electronics; triggering the TPC would limit this rate → continuous readout
- Reduce the data volume by topological trigger and online reconstruction
- Major upgrade of the detector electronics and of the online systems
Online upgrade design:
- Data buffer and processing off detector
- Fast Trigger Processor (FTP): 2 hardware trigger levels to accommodate the various detector latencies and the maximum readout rate
- HLT: 2 software trigger levels: ITS, TRD, TOF, EMC (reduce the rate before building the TPC event); final decision using the data from all detectors
- Common DAQ/HLT farm to minimize cost while preserving the present flexibility of running modes

6 Trigger Rates
Key concepts for running ALICE at high rate:
- Hardware trigger-less architecture whenever needed and affordable
- Pipelined electronics and continuous readout
- Reduction of the data volume by topological trigger and a 2-step online reconstruction

Trigger level | Detectors                  | pp beams: frequency (kHz), latency | Pb-Pb beams: frequency (kHz), latency
No trigger    | ITS, TPC, TRD, EMCal, PHOS | continuous read-out at 10 MHz      | continuous read-out at 10 MHz
Level 0 (hw)  | TOF (Pb-Pb)                |                                    |
Level 1 (hw)  | TOF (p-p), Muon            |                                    |
Level 2 (sw)  |                            |                                    |
Level 3 (sw)  |                            |                                    |

7 Present TPC Readout
[Diagram: TPC front-end cards (FEC 1-13) read out via RCUs over 216 DDLs at 2.0 Gb/s to D-RORCs (DAQ) and H-RORCs (HLT); trigger and timing distributed via CTP/TTC.]
Present readout:
- Links: DDL at 2 Gb/s; 216 DDLs for the TPC, used at 1.6 Gb/s
- PC adapters: D-RORC and H-RORC
- Up to 6 RORCs per PC
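As a rough cross-check of these figures, the aggregate TPC readout bandwidth follows directly from the link count and the per-link rate. A minimal sketch (Python), using only the numbers quoted on this slide:

```python
# Back-of-envelope check of the present TPC readout bandwidth,
# using only the figures quoted on the slide above.

n_ddl = 216            # DDLs serving the TPC
nominal_gbps = 2.0     # nominal DDL link speed
used_gbps = 1.6        # effective per-link rate quoted on the slide

nominal_total = n_ddl * nominal_gbps   # Gb/s
effective_total = n_ddl * used_gbps    # Gb/s

print(f"nominal TPC readout capacity : {nominal_total:.0f} Gb/s")
print(f"effective TPC readout rate   : {effective_total:.0f} Gb/s"
      f" (~{effective_total / 8:.0f} GB/s)")
```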

8 Readout Upgrade
[Diagram: front-end cards (FEC2) connected over ~7000 links at 10 Gb/s to First-Level Processors (FLPs), which feed the common DAQ and HLT systems (EPNs) over a 10 or 40 Gb/s network.]
Upgraded readout:
- Non-zero-suppressed TPC data: 57 Tb/s (570 kchannels x 10 bits x 10 MHz)
- ~7000 links at 10 Gb/s for the TPC; ~7800 links for the whole experiment
- TBD: FEC2 characteristics (GEM readout, very simple FEC vs. more links)
DDL3 and RORC3 for the LS2 upgrade (ALICE common solution):
- Address the needs of the new architecture for the period after LS2 (2018)
- DDL3 links at 10 Gb/s
- Exact requirements to be addressed (radiation tolerance/hardness)
- Different physical layers possible (CERN GBT, Ethernet, FCS, wavelength multiplexing)
First-Level Processors (FLPs):
- ~650 FLPs needed: 12 detector links at 10 Gb/s, 1 network link at 10 or 40 Gb/s
- Data readout and first-level data processing: zero suppression, cluster finding and compression could be performed by the RORC FPGA
- Regional online reconstruction
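The ~7000-link estimate for the TPC follows from the raw data rate divided by the usable bandwidth per link. A minimal sketch (Python) of that arithmetic; the 0.8 payload efficiency is an assumption standing in for protocol overhead and margin, not a number from the slide:

```python
# Rough sizing of the upgraded TPC readout using the slide's assumptions:
# 570k channels, 10 bits per sample, 10 MHz continuous sampling.

channels = 570_000
bits_per_sample = 10
sample_rate_hz = 10e6

raw_rate_tbps = channels * bits_per_sample * sample_rate_hz / 1e12
print(f"non-zero-suppressed TPC data rate: ~{raw_rate_tbps:.0f} Tb/s")   # ~57

link_speed_gbps = 10.0
payload_efficiency = 0.8   # assumed headroom for protocol overhead and margin
links = raw_rate_tbps * 1e3 / (link_speed_gbps * payload_efficiency)
print(f"10 Gb/s links needed for the TPC : ~{links:.0f}")                # ~7100
```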

9 Future evolution of DDL & RORC
New common DAQ/HLT DDL2 and RORC2:
- Prototype under design by Heiko Engel (Udo Kebschull's team, Frankfurt) and Tivadar Kiss (Budapest); ready in 2012
- It will address the upgrade needs with the present architecture for the period between Long Shutdown 1 (LS1) and LS2
- Includes 12 DDL2 links at 6 Gb/s; 6 links to the DAQ LDC → 36 Gb/s
- PCIe Gen2, 8 lanes (500 MB/s/lane) → 32 Gb/s of I/O capacity
- Data processing in the FPGA (e.g. cluster finding)
Link evolution:
- Currently: 5 links at 2 Gb/s per PC → 10 Gb/s of I/O capacity
- Prototype under development: 12 links at 6 Gb/s; 6 links to the DAQ LDC → 36 Gb/s; PCIe Gen2, 8 lanes (500 MB/s/lane) → 32 Gb/s of I/O capacity
- Final system: 12 links at 10 Gb/s per PC; PCIe Gen3, 16 lanes → 128 Gb/s of I/O capacity
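To see how the link and host-interface figures compare, the I/O arithmetic on this slide can be reproduced in a few lines. A minimal sketch (Python) using only numbers quoted above; note that the 36 Gb/s of DDL2 input slightly exceeds the 32 Gb/s of PCIe Gen2 x8 capacity, while PCIe Gen3 x16 covers the final 120 Gb/s with margin:

```python
# Consistency check of the RORC2 I/O numbers quoted on this slide.

def pcie_gbps(lanes: int, mb_per_s_per_lane: float) -> float:
    """PCIe payload capacity in Gb/s, from the per-lane figure on the slide."""
    return lanes * mb_per_s_per_lane * 8 / 1000

# Prototype: 6 DDL2 links at 6 Gb/s towards the DAQ LDC vs PCIe Gen2 x8
link_in = 6 * 6.0                      # 36 Gb/s
host_bw = pcie_gbps(8, 500)            # 32 Gb/s
print(f"prototype: {link_in:.0f} Gb/s in vs {host_bw:.0f} Gb/s PCIe Gen2 x8")

# Final system: 12 links at 10 Gb/s vs PCIe Gen3 x16 (~128 Gb/s on the slide)
link_in_final = 12 * 10.0              # 120 Gb/s
print(f"final    : {link_in_final:.0f} Gb/s in vs 128 Gb/s PCIe Gen3 x16")
```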

10 Upgrade Online (after LS2)
[Architecture diagram: the detectors (TPC, TRD, Muon, TOF, ITS, PHOS, EMC) are read out over ~7800 DDL3 links at 10 Gb/s into ~650 FLPs; the FLPs feed ~1250 EPNs of the common DAQ and HLT farm over a 10 or 40 Gb/s network; a Fast Trigger Processor (FTP) fed by the trigger detectors provides the hardware levels L0/L1, with the software levels L2/L3 applied in the farm.]

11 Next Steps
Major upgrade foreseen for the online systems during LS2:
- Improve the Strategy Note (Mar '12)
- More detailed and optimized cost estimate (Mar '12)
- Prototyping, some using the present online systems ('12)
- Continue to gain experience with using HLT online results as the basis of offline reconstruction, e.g. the cluster finding; essential to reach the necessary data reduction factors after LS2
R&D program ('12-'16):
- Fast Trigger Processor
- Readout links
- Event-building and HLT dataflow network
- Online reconstruction: online use of "offline software"
- Efficient use of new computing platforms (many-cores and GPUs)
Online Conceptual Design Report (TBD):
- Requirements from detectors
- Key results from R&D

12 Short- and Mid-Term SW, FW, HW Tasks (for us)
During LS1, some detectors will upgrade to the C-RORC:
- C-RORC prototype: June 2012, Tivadar
- C-RORC firmware: September 2012, Filippo
- C-RORC software: Q4 2012, Ervin
- Tailoring of the DDG for the C-RORC: Q4 2012, Filippo, Ervin
Technology research project (members: Pierre, Filippo, Tivadar, Ervin, Gyuri, Gusty?, students?):
- Preparing a list of technological companies, deadline: end of April
- Preparing questionnaires for the meetings with companies: end of May
- Scheduling meetings with industrial companies: at CERN: middle of June; in the US: October
- Preparation of a spreadsheet for comparing the different solutions: preliminary: end of September; final: Q1 2013
- Setting up and running demonstrations in Budapest: PCIe-over-fibre, 2012
Recommended capital investments:
- High-end server motherboard
- PCIe expansion box, over-fibre connection

13 Longer-term tasks
- Building demonstrations with selected technologies
- Deadline for defining the common interface for the detectors: June 2014
- Deadline for a Conceptual Design Report: ~end of 2014
- Deadline for delivering prototypes of "DDL3" for the detectors: Q
- Deliver prototype software: Q4 2015
- Production of DDL3: Q – Q2 2017
- Release production software: Q2 2017
- Detector installation and commissioning at Point 2: Q3 2017
- A small DAQ with limited performance (but full functionality) has to be installed by Q3 2017

14 Thank you for your attention

15 Reserved slides

16 A quick look at the past: the DDL and the D-RORC
- Project started in 1995
- Strong push from the management: George Vesztergombi and now Péter Lévai
- Work of a solid and competent team: Ervin Dénes, Tivadar Kiss, György Rubin, Csaba Soós, and many others contributed to this successful project
- The Budapest team delivered to the ALICE experiment a common and reliable solution for the data transfer from the detectors to the online systems: the DDL and the D-RORC

17 DDL and D-RORC
- ~500 DDLs and ~450 D-RORCs
- Collect all the data of ALICE (~2 PB/year)
- Configure the electronics of several detectors, in particular the biggest one (TPC)
Goal for the future: repeat this success story!

18 Continuous Readout
[Diagram: time axis (t-2x ... t+3x) showing the readout of TRD, EMC, TPC, ITS and TOF for successive events n, n+1, n+2, feeding the FLPs and the event building & processing.]
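To make the continuous-readout idea concrete, here is a toy sketch (Python, purely illustrative and not ALICE software): samples arrive as a timestamped stream with no event boundaries, and the event-building step has to associate them with interaction times afterwards. The function name, the association window and the example data are all assumptions:

```python
# Toy illustration of continuous readout (not ALICE code): samples carry only
# a timestamp; overlapping interactions can share the same detector data.

from collections import defaultdict

WINDOW_US = 100.0   # assumed association window around an interaction time

def associate(samples, interactions, window_us=WINDOW_US):
    """Group continuously read-out samples by the interaction(s) they overlap."""
    events = defaultdict(list)
    for t_sample, payload in samples:
        for t_int in interactions:
            if abs(t_sample - t_int) <= window_us:
                events[t_int].append(payload)
    return events

# Continuous stream: (timestamp in us, payload), with no event boundaries.
stream = [(10.0, "tpc-a"), (55.0, "tpc-b"), (120.0, "tpc-c"), (180.0, "tpc-d")]
triggers = [50.0, 150.0]   # interaction times seen by the triggered detectors

for t_int, data in associate(stream, triggers).items():
    print(f"interaction at {t_int:5.1f} us -> {data}")
```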

19 Online reconstruction
- 50 kHz Pb-Pb collisions inspected with the least possible bias using topological triggers and online particle identification
- HI run 2011: online cluster finding gave a data compression factor of 4 for the TPC
Two HLT scenarios for the upgrade:
1. Partial event reconstruction: a further factor of 5 → overall reduction factor of 20; close to what could be achieved and tested now; rate to tape: 5 kHz
2. Full event reconstruction: overall data reduction by a factor of 100; rate to tape: 25 kHz
- HI run 2018: min. bias event size ~75 MB → ~4-1 MB after data volume reduction; throughput to mass storage: 20 GB/s
- The target rate after LS2 can only be reached with online reconstruction
- Build on the success of last year with TPC cluster finding
- Extremely ambitious: online calibration and data processing
- Essential to gain experience with using online results as the basis of offline reconstruction, e.g. the cluster finding; mandatory to reach the necessary data reduction factor after LS2
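The consistency of the two scenarios with the quoted ~20 GB/s to mass storage can be checked directly. A minimal sketch (Python) using the figures from this slide; attributing the reduced event sizes (~4 MB and ~1 MB) to the two scenarios is my reading of the numbers:

```python
# Cross-check of the data-reduction scenarios and the mass-storage throughput.

minbias_event_mb = 75.0   # HI run 2018 min. bias event size (slide figure)

scenarios = {
    # name: (overall reduction factor, rate to tape in kHz)
    "partial event reconstruction": (20, 5),
    "full event reconstruction":    (100, 25),
}

for name, (reduction, rate_khz) in scenarios.items():
    event_out_mb = minbias_event_mb / reduction
    rate_hz = rate_khz * 1000
    throughput_gb_s = event_out_mb * rate_hz / 1000   # MB/s -> GB/s
    print(f"{name}: ~{event_out_mb:.2f} MB/event, "
          f"~{throughput_gb_s:.0f} GB/s to mass storage")
```

Both scenarios land at roughly 19 GB/s, in line with the ~20 GB/s quoted above.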

20 Network Routers and Switches for the Event Building
- Present DAQ network (Force10, now Dell): ExaScale E1200i, 3.5 Tb/s; the capacity of the present DAQ router is not adequate
- Technology 1: Ethernet (Brocade MLX series): MLX32e, up to 15.3 Tb/s, 32 line-card slots; up to 256x10GbE ports or 32x100GbE ports; line cards: 2x100 Gb/s or 8x10 Gb/s
- Technology 2: InfiniBand (Mellanox): MIS6500, up to 51.8 Tb/s, 648 ports at 40 Gb/s

21 Processing Power
Current HLT:
- Current HLT processing rates with full TPC reconstruction: either ~200 Hz central events (65 MB) or ~800 Hz min. bias events
- 216 front-end nodes (equivalent to the FLPs): TPC cluster finder on the H-RORC FPGA and coordinate transformation by the CPU
- 60 tracking nodes with GPU (equivalent of the EPNs)
HLT after the upgrade, assuming linear scaling of rates and using current HLT technology:
- Level 2 (reconstruction using ITS, TRD, EMC): 50 kHz min. bias events (15 MB, ~4 times less CPU) → 50 kHz / 800 Hz x 60 nodes x 1/4 = 1250 nodes (CPU+GPU)
- Level 3 (full reconstruction incl. TPC): 25 kHz central events → 25 kHz / 200 Hz x 60 nodes = 7500 nodes (CPU+GPU)
- 8750 nodes of 2011 → 1250 nodes of 2018
Event Processing Nodes (~1250 EPNs):
- Current HLT processing nodes: Intel Xeon E5520, 2.2 GHz, 4 cores (global track merging + reconstruction, some trigger algorithms); Nvidia GTX480/580 (~90% of the tracking)
- Overall chip performance gain of 33%/yr during 7 years → factor 7 from 2011 to 2018
- → Significant R&D on computing and algorithm concepts
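The node-count extrapolation is a simple scaling exercise and can be reproduced as below. A minimal sketch (Python) using the slide's inputs; note that the slide quotes ~1250 nodes for Level 2 and 8750 in total, somewhat above the bare products, so some margin is presumably included:

```python
# Reproduction of the node-count scaling argument on this slide
# (inputs from the slide; results are approximate).

nodes_2011_tracking = 60    # current HLT tracking nodes (CPU+GPU)
rate_minbias_hz = 800       # current min. bias rate with full TPC reconstruction
rate_central_hz = 200       # current central-event rate

# Level 2: 50 kHz min. bias, ~4x less CPU per event (no TPC reconstruction)
level2_nodes = 50_000 / rate_minbias_hz * nodes_2011_tracking / 4
# Level 3: 25 kHz with full reconstruction including the TPC
level3_nodes = 25_000 / rate_central_hz * nodes_2011_tracking

total_2011_tech = level2_nodes + level3_nodes
gain_2011_to_2018 = 7       # quoted on the slide (33%/yr over 7 years)
nodes_2018 = total_2011_tech / gain_2011_to_2018

print(f"Level 2: ~{level2_nodes:.0f} nodes with 2011 technology")
print(f"Level 3: ~{level3_nodes:.0f} nodes with 2011 technology")
print(f"Total  : ~{total_2011_tech:.0f} -> ~{nodes_2018:.0f} nodes with 2018 technology")
```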