Ian Bird, Overview Board; CERN, 8th March 2013

Background
Requested by the LHCC in December: need to see updated computing models before Run 2 starts.
A single document to:
- Describe the changes since the original TDRs (2005) in assumptions, models, technology, etc.
- Emphasise what is being done to adapt to new technologies, improve efficiency, and be able to adapt to new architectures
- Describe the work that still needs to be done
- Use common formats, tables, assumptions, etc.
One document rather than five.

Timescales
The document should describe the period from LS1 – LS2:
- Estimates of evolving resource needs
In order to prepare for 2015, a good draft needs to be available in time for the Autumn 2013 RRB, and therefore needs to be discussed at the LHCC in September:
- Solid draft by the end of summer 2013 (!)
Work has started:
- Informed by all of the existing work from the last two years (Technical Evolution groups, Concurrency Forum, Technology Review of 2012)

Opportunities
This document gives a framework to:
- Describe significant changes and improvements already made
- Stress commonalities between experiments, and drive strongly in that direction
  - There is significant willingness to do this
  - Describe the models in a common way, calling out differences
- Make a statement about the needs of WLCG in the next 5 years (technical, infrastructure, resources)
- Potentially review the organisational structure of the collaboration
- Review the implementation: scale and quality of service of sites/Tiers; archiving vs processing vs analysis activities
- Raise concerns: e.g. staffing issues, missing skills

Draft ToC
- Preamble/introduction
- Experiment computing models
- Technology review and outlook
- Challenges – the problem being addressed
- Distributed computing
- Computing services
- Software activities and strategies
- Resource needs and expected evolution
- Collaboration organisation and management

Experiment computing models
- Data models – types of data, event sizes, relationships, etc. (an illustrative sketch follows below)
- Anticipated event rates and data streams
- Data flows
- Differences for Pb-Pb or p-Pb running
- Non-event data
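As a hedged illustration of how such a data-model description might be parameterised, the sketch below encodes a few representative data types (RAW, ESD, AOD) with assumed per-event sizes and their derivation relationships; all names and numbers are placeholders, not any experiment's real figures.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataType:
    """One data type in an illustrative experiment data model."""
    name: str
    event_size_mb: float                  # assumed average size per event (MB)
    parent: Optional["DataType"] = None   # the type it is derived from, if any

def derivation_chain(d: DataType) -> str:
    """Render the derivation relationship, e.g. 'AOD <- ESD <- RAW'."""
    chain = [d.name]
    while d.parent is not None:
        d = d.parent
        chain.append(d.name)
    return " <- ".join(chain)

# Placeholder types and sizes -- not any experiment's real figures.
raw = DataType("RAW", event_size_mb=1.0)
esd = DataType("ESD", event_size_mb=0.5, parent=raw)
aod = DataType("AOD", event_size_mb=0.1, parent=esd)

for d in (raw, esd, aod):
    print(f"{d.name:4s} {d.event_size_mb:4.1f} MB/event   derived: {derivation_chain(d)}")
```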

Technology review
Use (an update of) the report from 2012.
What are the likely technologies in the next 5 years?
- CPU: e.g. Intel vs ARM vs GPU vs ??
- Storage
- Clouds/virtualisation
- Likely evolution of networks
- Etc.

Challenges
What problem are we addressing?
- Need to make the best possible use of available resources
- Major investment in software needed
  - But are skills, people, tools and infrastructure missing?
- Need for flexibility in adapting the models to changing technologies
- …

Distributed computing
Use cases:
- Calibration, reconstruction, re-processing, stripping, analysis use cases and strategies for prompt vs delayed analysis, simulation
Functions implemented at Tier 0, 1, 2 and the HLT (see the sketch after this slide):
- Include how we would use opportunistic resources
- Review functions: perhaps distinguish between archive needs and data distribution needs, QoS
Networking:
- What will our needs be?
- What topologies to interconnect the tiers?
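As a rough illustration of the functional split the slide refers to, here is a deliberately simplified mapping of tier levels to roles, loosely following the traditional WLCG model; the real per-experiment assignment (and the archive vs distribution distinction mentioned above) is exactly what the document would need to spell out.

```python
# Simplified, generic mapping of tier levels to functions -- loosely based on
# the traditional WLCG roles; real assignments differ per experiment and are
# part of what the computing-model document has to define.
tier_functions = {
    "HLT":    ["online selection", "opportunistic offline use outside data taking"],
    "Tier-0": ["RAW archiving", "prompt calibration", "first-pass reconstruction"],
    "Tier-1": ["custodial RAW copy", "re-processing", "data distribution"],
    "Tier-2": ["Monte Carlo simulation", "end-user analysis"],
}

def tiers_providing(keyword):
    """Return the tier levels whose (illustrative) role list mentions a keyword."""
    return [tier for tier, roles in tier_functions.items()
            if any(keyword in role for role in roles)]

for tier, roles in tier_functions.items():
    print(f"{tier:7s} {', '.join(roles)}")
print("Archival roles at:", tiers_providing("archiv") + tiers_providing("custodial"))
```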

Computing services
Emphasise commonalities between experiments (and justify differences):
- Workflow management – use of pilots (see the sketch below), implementations, needs at sites
- Data management – strategies for the use of tapes, disks and other storage; services such as FTS, data popularity, data federation
  - Use of HEP-specific vs standard protocols
- Distributed computing services (aka "grid middleware")
  - Describe the services required, central vs distributed deployments; where do experiment needs diverge?
  - Federated identity management?
- Infrastructure services: operations, monitoring, accounting, security, etc.
- Other new technologies
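To make the "use of pilots" concrete, the toy sketch below shows the general pilot pattern: a pilot starts on a worker node and pulls matching payloads from a central task queue until no more work appears. It uses an in-memory queue and stands in for, rather than reproduces, frameworks such as DIRAC or PanDA.

```python
import queue
import time

# Toy "central task queue" standing in for a real workload-management
# service (the role played by systems such as DIRAC or PanDA).
task_queue = queue.Queue()
for i in range(3):
    task_queue.put({"id": i, "type": "simulation", "events": 100})

def run_payload(task):
    """Stand-in for fetching and running the real experiment payload."""
    time.sleep(0.1)  # pretend to process task["events"] events
    return {"id": task["id"], "status": "done"}

def pilot(site="EXAMPLE-SITE", max_idle_polls=2):
    """A pilot: pull work from the central queue until it stays empty."""
    idle = 0
    while idle < max_idle_polls:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            idle += 1
            time.sleep(0.1)  # back off, then poll again
            continue
        idle = 0
        report = run_payload(task)
        print(f"[{site}] task {report['id']} -> {report['status']}")

pilot()
```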

Application Software
- Current and recent activities – gains already achieved
- Strategies for the future
- How to address the use of new architectures, etc. (see the parallelism sketch below)
- Common tools and libraries (ROOT, GEANT, etc.)
- Possibility of common frameworks between experiments, a prerequisite for:
  - Setting up a consultancy/optimisation team (incl. testing infrastructure) to guide optimisation efforts (memory use, I/O, parallel code, etc.)
- What is the coordination and management needed for this?
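As a minimal sketch of the "parallel code" item, assuming nothing about any experiment framework, the example below applies event-level parallelism with Python's multiprocessing to a trivial stand-in for reconstruction; the real optimisation questions (memory sharing between workers, I/O, vectorisation) are the ones a consultancy/optimisation team would address.

```python
from multiprocessing import Pool

def reconstruct(event_id: int) -> float:
    """Trivial stand-in for per-event reconstruction work."""
    return sum((event_id * k) % 97 for k in range(10_000)) / 10_000

if __name__ == "__main__":
    events = range(1_000)
    # Event-level parallelism: independent events are dispatched to worker
    # processes. Memory grows per process unless explicitly shared -- one of
    # the optimisation topics the slide refers to.
    with Pool(processes=4) as pool:
        results = pool.map(reconstruct, events, chunksize=50)
    print(f"processed {len(results)} events")
```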

Resource Needs & Evolution
- Assumptions on running conditions
- Event parameters
- Summary tables of resource requirements (derivation sketched below)
  - Tape, disk, CPU, bandwidths
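To show the shape of such a summary table, the sketch below derives tape, disk and CPU figures from a handful of assumed running conditions; every input is a placeholder chosen only to illustrate the arithmetic, not a figure from any experiment's model.

```python
# Illustrative resource estimate -- all inputs are placeholders, not the
# parameters of any experiment's actual computing model.
live_seconds        = 5.0e6   # assumed LHC live time per year
seconds_per_year    = 3.15e7  # wall-clock seconds in a year
trigger_rate_hz     = 400     # events recorded per live second
raw_size_mb         = 1.0     # MB per RAW event
aod_size_mb         = 0.1     # MB per AOD event
cpu_hs06s_per_event = 20      # HS06-seconds per event (order of magnitude only)
processing_passes   = 2       # prompt reconstruction + one re-processing

events   = trigger_rate_hz * live_seconds
tape_pb  = events * raw_size_mb / 1e9                       # custodial RAW copy
disk_pb  = events * aod_size_mb * processing_passes / 1e9   # AOD versions on disk
cpu_hs06 = events * cpu_hs06s_per_event * processing_passes / seconds_per_year

print(f"events/year : {events:.2e}")
print(f"tape        : ~{tape_pb:.1f} PB")
print(f"disk (AOD)  : ~{disk_pb:.1f} PB")
print(f"CPU (avg)   : ~{cpu_hs06:.0f} HS06, continuous")
```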

Collaboration Organisation & Management
- Opportunity to review the WLCG organisational structure and associated bodies
- Stress the need for building collaborative activities, in order to find the effort for needed developments, operations, etc.
- Describe the anticipated interactions with e-infrastructures (EGI/Horizon 2020, OSG, etc.)
- Interaction with other HEP experiments – i.e. should the scope of WLCG broaden to support HEP more widely?