ALICE, ATLAS, CMS & LHCb joint workshop on DAQ


Summary Thoughts
ALICE, ATLAS, CMS & LHCb joint workshop on DAQ
Château de Bossey, 14 March 2013
Beat Jost / CERN

First of All
- Many thanks to the session organizers and the speakers, who went through a lot of work gathering and assembling all the material.
- Many thanks also to David, Frans and Pierre, who did most of the nitty-gritty organisation.
- Thanks also to the PH Department for taking over the charge for the venue and providing the coffee/tea breaks.
- Last but not least, thank YOU all for coming and making this event a success.

Personal note: Indico statistics — participants: ALICE 22, ATLAS 38, CMS 24, LHCb 10, other 3; total 97. Not bad…

Summary Thoughts, 14 March 2013

From David’s Introductory Remarks

Systems Today
- In general everyone is quite happy with their systems up to now.
- Upgrades during LS1, i.e. for Run 2, are driven more by replacing old/obsolete hardware and technologies (PCI-X) than by really new requirements.
- New technologies, e.g. ≥10 GbE/IB, allow simplifications and rationalizations, especially for the event-building networks (CMS, ATLAS uniform network).

Systems Tomorrow (Tomorrow ⇒ Run 3+)
- ATLAS and CMS: probably no major upgrades in the DAQ systems, besides the standard 5-year replacement cycle (maybe taking advantage of newer technologies).
- ALICE and LHCb are foreseeing a major upgrade for this period:
  - ALICE: ~2 orders of magnitude higher data rates
    - Continuous TPC readout
    - Full online reconstruction, throwing away the raw data (ambitious/daring)
    - Making the online computing infrastructure a general-purpose ALICE computing centre, used online and offline
  - LHCb: eliminate the hardware trigger in the long run (LLT as intermediate stage)
    - ~40 times higher data rate
    - No architectural changes foreseen, just application of modern technologies, if they work… R&D
- As usual, higher rate to tape (a somehow recurring theme)

Systems Next Week
- Major steps for ATLAS and CMS:
  - Introduction of a new trigger level (track trigger) after the high-pT trigger
  - Higher readout rates (500–1000 kHz, compared with 100 kHz today)
- And again… higher rate to tape
- Personal note: I would hope that the two experiments can come together and find a solution based on a common effort. This is a unique opportunity.

HLT Developments
- Demands on CPU power will always be increasing:
  - Higher multiplicity
  - Doing more in the HLT to increase sensitivity/purity
- What can we do to match?
  - Moore’s law still helps: not in clock speed, but in more cores per chip.
  - Memory is the bottleneck (amount and bandwidth). Forking (checkpointing) and copy-on-write (COW) help for some time (surely till LS2).
  - One event, one process (maybe with some parallel algorithms) is still the most efficient.
  - We have to make sure that we use the capabilities of the CPUs to their maximum.
  - ‘Coprocessors’ (GPU, Xeon Phi)… very sexy (fashion only?). They need a lot of work to get the code running efficiently and producing identical results.
  - Use the ‘non-physics’ time of the LHC (deferred triggering).
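The fork/COW point above can be sketched in a few lines of Python. This is a hypothetical, POSIX-only illustration (all names invented; real HLT farms use checkpointed C++ processes): large read-only conditions data is loaded once before forking, so the worker processes share those pages copy-on-write and physical memory is duplicated only for pages a child actually writes.

```python
import os

def load_conditions():
    # Stand-in for loading geometry/calibration constants (read-only after init).
    return list(range(100_000))

def process_event(conditions, event_id):
    # Trivial stand-in for per-event reconstruction.
    return conditions[event_id % len(conditions)] + event_id

def run_workers(n_workers=4, n_events=100):
    conditions = load_conditions()  # allocated once, BEFORE forking
    pids = []
    for w in range(n_workers):
        pid = os.fork()
        if pid == 0:
            # Child: reads `conditions` through COW-shared pages;
            # events are distributed round-robin over the workers.
            for e in range(w, n_events, n_workers):
                process_event(conditions, e)
            os._exit(0)
        pids.append(pid)
    for pid in pids:
        _, status = os.waitpid(pid, 0)
        assert os.WEXITSTATUS(status) == 0

if __name__ == "__main__":
    run_workers()
```

The key design point is the ordering: anything expensive and immutable is set up in the parent, and only per-event state lives in the children, which is why this scheme buys memory headroom "for some time" as core counts grow.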

Links and Boards
- Parallel buses are dead:
  - Already for a long time for data (performance); now also for controls.
- Fashion for boards nowadays: xTCA
  - Features: support for high-power boards (cooling, power); board-to-board communication paths (point-to-point, star/mesh topologies)
  - Only used in trigger applications
  - Very small form factor
- Links:
  - If the GBT comes in time and works, it will most likely be used for transferring data out of the detector.
  - A lot of effort invested. Performance?

Other Topics
- Storage:
  - Disks get bigger and bigger.
  - The problem is organizing the data over the disks (RAID, ODS).
  - Recovery from failures.
  - In general we will always be able to write out our data; maybe not for free, but surely affordable.
- System monitoring and management:
  - A very wide field and a myriad of approaches, often fashion-driven.
  - Looks as if convergence will happen eventually, maybe, with some luck, to some degree.

Conclusion (1)
- I think the workshop was useful, if only to ‘learn’ what others are doing. Actually I learned a lot even about what happens in LHCb…
- Can we find common solutions to common problems? Yes and no…
  - There seems to be some new motivation to do it in the young generation.
  - LS1: share experience (e.g. on common tools)
  - LS2: further common discussion about choices
- Some topics have clearly been suggested:
  - DQM/histogramming
  - Controls
  - Technology tracking and tools testing
  - PON
  - Dataflow monitoring

Conclusion (2)
- Should we repeat it? Well… up to you!
  - Maybe not every 2 weeks, but maybe in 12 months (well before the end of LS1), or in 18 months (just before beam)?
- Groups should talk to each other between now and the next edition:
  - either on specific topics as suggested during the workshop,
  - or via a joint wrap-up brainstorming between a few people to propose the next steps.
- I was missing the controls (DCS) community. Maybe a consequence of the title of the workshop.