
ECFA HL-LHC Workshop
PG7: Trigger, Online, Offline and Computing Preparatory Group
Wesley H. Smith, U. Wisconsin - Madison
Graeme Stewart, U. Glasgow
June 13, 2014

Membership
ALICE: Latchezar Betev, Mikolaj Krzewicki, Pierre Vande Vyvre
ATLAS: Graeme Stewart, Benedetto Gorini, Nikos Konstantinidis, Imma Riu, Stefano Veneziano
CMS: Wesley Smith, Maria Girone, David Lange, Frans Meijers
LHCb: Peter Clarke, Vava Gligorov, Niko Neufeld

Preparatory Groups: Charge
The preparatory groups are charged to identify the key topics to be discussed and to organize the corresponding session at the workshop, in conjunction with the programmes of the other groups (to avoid overlaps).
They will propose the detailed agenda within that session and propose speakers to the workshop Steering Committee.
The chairs of the PGs will write a document of a few pages summarizing the main outcomes of the workshop and proposing further steps on each topic.
See the 2013 charge at:

Preparatory Groups: Modus Operandi
The chairs of the PGs compile mailing lists of all those willing to contribute, and organise dedicated meetings and/or mini-workshops, advertised within the HL-LHC community, to fulfil the objectives outlined above.
A common TWiki page should be used to collect and circulate information and progress of each group, as last time.
The meetings of the groups should normally be fully open, although a mechanism to review talks and discuss material still being approved for release should be organised by the relevant PG chairs.
The lists from last time, along with internal lists within the theory, accelerator and individual experiment communities, can be used to find volunteers:
list of the PG members:
list of the PG chairs:
list of the workshop SC:

Conclusions from '13 TDAQ (W. Smith)
ATLAS & CMS L1 trigger scenario: μs-scale latency and L1 accept rates of 0.2-1 MHz; L1 track trigger.
ATLAS & CMS Phase 2 DAQ/HLT design: accept 1 MHz of 4 MB events with pile-up = 140, with an output of 10 kHz.
ALICE & LHCb trigger & DAQ: moving to a "triggerless" architecture.
R&D programme: FPGAs, links, telecom technology, associative memories, GPUs, new processors, heterogeneous architectures.
More powerful tools require more investment to exploit!
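
A quick back-of-the-envelope check of the bandwidth implied by the Phase 2 DAQ/HLT numbers above (a minimal sketch in Python; the rates and event size come from the slide, the variable names are ours):

```python
# Throughput implied by the quoted Phase 2 scenario:
# 1 MHz L1 accept rate, 4 MB events at pile-up ~140, 10 kHz HLT output.
l1_rate_hz = 1e6         # L1 accept rate
event_size_bytes = 4e6   # ~4 MB per event
hlt_output_hz = 1e4      # 10 kHz written to storage

input_bw = l1_rate_hz * event_size_bytes      # bytes/s into the HLT farm
output_bw = hlt_output_hz * event_size_bytes  # bytes/s out to storage

print(f"HLT input : {input_bw / 1e12:.0f} TB/s")   # ~4 TB/s
print(f"HLT output: {output_bw / 1e9:.0f} GB/s")   # ~40 GB/s
```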

Conclusions on '13 Technology (N. Neufeld)
Big data and cloud computing drive the market for commercial off-the-shelf IT equipment. HEP profits from that, but depends on the economy at large.
We expect current performance rates (and price/performance improvements) to grow at historic rates until at least LS2:
25% performance improvement per year in computing at constant cost;
local area network and link technology sufficient for all HL-LHC needs, also beyond LS2;
wide area network growth sufficient for LHC needs, provided sufficient funding;
20% price drop at constant capacity expected for disk storage.
Far beyond LS2 the technical challenges for further evolution seem daunting. Nevertheless, the proven ingenuity and creativity of the IT industry justify cautious optimism.
The fruits of technology are there, but hard work is needed to make the best of them.
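
To make those rates concrete, a hedged illustration of what they compound to over a multi-year horizon (the 25% and 20% figures come from the slide; the seven-year window is our illustrative assumption):

```python
# Compound effect of the technology-watch figures above:
# ~25%/year more computing per unit cost, ~20%/year cheaper disk
# at constant capacity. The 7-year horizon (roughly 2013 to the
# LS2 era) is an illustrative choice, not from the slide.
years = 7

cpu_gain = 1.25 ** years     # computing per unit cost
disk_cost = 0.80 ** years    # relative cost per unit capacity

print(f"Computing per unit cost: x{cpu_gain:.1f}")      # ~x4.8
print(f"Disk cost per unit capacity: {disk_cost:.0%}")  # ~21% of today's
```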

Conclusions on '13 Computing (P. Buncic)
The resources needed for computing at the HL-LHC are going to be large, but not unprecedented.
Data volume is going to grow dramatically.
Projected CPU needs are reaching the Moore's law ceiling; Grid growth of 25% per year is far from sufficient.
Speeding up the simulation is very important.
Parallelization at all levels (application, I/O) is needed to improve performance; this requires re-engineering of the experiment software.
New technologies such as clouds and virtualization may help to reduce complexity and to fully utilize the available resources.

Conclusions on '13 Software (D. Rousseau)
HL-LHC: high pile-up and high read-out rate → a large increase in processing needs.
With flat resources (in euros), and even with Moore's law holding true (likely, provided we maintain/improve efficient use of processors), this is not enough, by one-half to one order of magnitude → large software improvements needed.
Future evolution of processors: many cores with less memory per core, more sophisticated processor instructions (micro-parallelism), possibly specialised cores → optimisation of software to use high-level processor instructions, especially in identified hot spots (an expert task).
A parallel framework to distribute algorithms to cores, semi-transparently to the regular physicist software developer.
The LHC experiments' code base is more than 15 million lines of code, written by more than 3000 people → a whole community to engage, starting essentially now, and new blood to inject.
We are already sharing effort and software. We can do much more: the concurrency forum.
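
A purely illustrative sketch of the "parallel framework" idea above, in Python: independent per-event algorithms are dispatched to a worker pool by a scheduler, while the algorithm author writes ordinary sequential code. All function and variable names here are hypothetical; the experiments' real frameworks are far more elaborate.

```python
from concurrent.futures import ProcessPoolExecutor

def run_tracking(event):      # stand-in for a real reconstruction algorithm
    return ("tracks", sum(event) % 97)

def run_calorimetry(event):   # another independent algorithm
    return ("clusters", max(event))

def process_event(event, pool):
    # The "framework" part: independent algorithms run concurrently
    # and their results are gathered; the physicist only wrote the
    # sequential functions above.
    futures = [pool.submit(alg, event)
               for alg in (run_tracking, run_calorimetry)]
    return dict(f.result() for f in futures)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        print(process_event([3, 1, 4, 1, 5, 9], pool))
```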

Next Steps and Upcoming Events
o Meetings of each PG
 - Define the topics to be addressed at the workshop
 - Meeting among all chairs to compare proposed talk areas by the end of June
o Meeting of the PGs with the Workshop SC
 - A half-day meeting by the end of July
 - Presentation of progress from each PG
 - Develop the workshop agenda
 - Session titles, number of talks and durations agreed and entered into Indico with draft talk titles
o September: preview of the final workshop session content
 - Chairs report to the workshop SC
 - Names of possible speakers to be discussed with the SC
 - Final talk titles to be entered into Indico with speakers' names

Documentation
Some useful references:
ALICE LoI for Run 3 upgrade: LHCC
ATLAS Phase 2 upgrade LoI: LHCC
CMS Phase 2 plan and cost: RRB
LHCb Phase 2 LoI: LHCC
ECFA Workshop:
ECFA Workshop Report:
TDOC '13 Workshop:
Please send us other useful references.

Discussion
Community involvement: engagement with experts outside the experiments is really critical.
What topics to focus on? What has changed since last year?
Planning for mini-workshops? Within specific communities, or general?
White papers?
Preparations for the June meeting of PG chairs: Trigger, Online and Offline Computing, tentatively 120' on Thursday afternoon.
Proposed talk areas.