Scientific Computing Strategy (for HEP)


Scientific Computing Strategy (for HEP)
Ian Bird
3rd Informal Discussion with Funding Agencies
CERN, 21st March 2017

Preamble & disclaimer
- When LHC computing started there were no large Internet companies (Google, Amazon, ...)
  - HEP had to build its own infrastructure and tools
  - Starting today might look very different ...
- The following is intended to provoke discussion - it is not a plan

Some drivers from HL-LHC
- Need to fit computing within constrained costs, but being asked to do much more ...
- Often need to co-locate (intimately) with other HEP and other sciences in our data centres
  - Especially with astronomy/astro-particle and other physics, e.g. photon science, etc.

Key components of an evolved infrastructure
- A general infrastructure and services (data stores, compute facilities, networking, etc.)
  - Associated services such as AAA, security, base monitoring, operational support
  - Needs to be capable of supporting different computing models and agile to technology changes
- "Software": the full stack from workflow and data management tools to the application level
  - Common tools, libraries, etc.
  - BUT: a set of optional tools, contributed, developed and maintained by the community
  - Common R&D and support tools (technology tracking, software tooling)
  - This is essentially what the HSF is mandated to do
- A steering group structure to organize and evolve the above - mandated by e.g. ICFA
  - Constituted from major global HENP facilities, experiments, observers (astro-particle)
- Experiment/facility-specific (optional) processes, e.g. WLCG for LHC resource management, etc.

Infrastructure
- Networks
  - LHCONE - global HENP (and astro-particle) overlay network (LHCOPN is LHC-specific)
  - Address the need for access to commercial clouds via NRENs
  - How to manage a HEP data cloud network (multi-Tb interconnect, SDN)
- AAA services (illustrative sketch below)
  - Federated identities, authorization mechanisms, accounting
  - Policies
- Operational services
  - Support, incident response, monitoring, etc.
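The AAA bullet above is where a concrete interface eventually has to exist. As a purely illustrative aid, the sketch below shows what a token-based check of a federated identity could look like at a site service. It is not the actual WLCG/EGI AAI; the library choice (PyJWT), the claim names, the audience string and the shared-secret signing are all assumptions made for the example.

```python
"""Minimal sketch of token-based federated authorization (not the real
WLCG/EGI AAI): a site service validates a token issued by a federated
identity provider before granting access, and records an accounting entry.

Assumptions (illustrative only): PyJWT for token handling, an HS256 shared
secret instead of the issuer's real public key, and invented claim names
("sub", "groups") and audience ("hep-data-cloud").
"""
import logging
import jwt  # pip install PyJWT

AUDIENCE = "hep-data-cloud"      # hypothetical service identifier
SECRET = "demo-shared-secret"    # stand-in for the identity provider's key

accounting_log = logging.getLogger("accounting")
logging.basicConfig(level=logging.INFO)

def authorize(token: str, required_group: str) -> dict:
    """Validate the token, check group-based authorization, log accounting."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"], audience=AUDIENCE)
    if required_group not in claims.get("groups", []):
        raise PermissionError(f"user {claims['sub']} not in group {required_group}")
    # Accounting: record who used the service and in which role.
    accounting_log.info("access granted: user=%s group=%s", claims["sub"], required_group)
    return claims

if __name__ == "__main__":
    # A federated IdP would normally issue this token after home-institute login.
    demo_token = jwt.encode(
        {"sub": "jdoe@home-institute.example", "groups": ["atlas"], "aud": AUDIENCE},
        SECRET,
        algorithm="HS256",
    )
    print(authorize(demo_token, required_group="atlas")["sub"])
```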

Possible model for a future HL-LHC computing infrastructure
[Diagram: simulation resources and the experiment online farms (ATLAS HLT, CMS HLT, LHCb HLT/turbo, ALICE O2) feed a reconstruction facility (calibration, alignment, reconstruction); the output is archived at "AOD" level in an LHC data cloud providing storage and compute, and distributed from there to cloud users for analysis.]

Build a "data cloud"
- Few - O(5-10) - large centres
  - Multi-Tb private (SDN) network between them
  - Treat as a single "virtual data centre"
  - Policy replicates data inside for security and performance - think of RAID across data centres (see the sketch below)
  - Store all of the "AOD" data here; do not replicate data to physics institutes worldwide (a major cost)
- Pluggable compute capacity:
  - HEP resources at these centres and other large centres
  - Commercial compute
- The model allows commercial data centres
  - For storage: enough redundancy that a commercial centre could unplug
  - For compute: relies on networking and GÉANT/ESnet etc. connections to commercial entities, and on policy
- Users access data in this cloud remotely
  - Eventually download "ntuples" - or equivalent
- All organised processing is done in this model
- Enables new analysis models: all data can be seen as co-located
  - Get away from the "event loop": queries, machine learning, etc.
  - This idea has been discussed in the WLCG community (e.g. see I. Fisk's CHEP plenary)
- Hybrid model: HEP resources at a level we guarantee to fill (cost-effective), commercial resources for "elasticity"
  - Needs new funding models
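To make the "RAID across data centres" idea a little more concrete, the sketch below shows a toy replica-placement policy in Python: each dataset is deterministically assigned to n distinct centres out of a small pool, so the loss (or commercial "unplugging") of one centre leaves a usable copy. The centre names, replication factor and dataset identifier are invented for the example; a real system would live in the data-management layer (e.g. on top of EOS or dCache federations) and would likely use erasure coding and network-cost awareness rather than simple full replicas.

```python
"""Toy replica-placement sketch for a multi-site "data cloud".

Illustrative only: centre names and the replication factor are invented;
a real deployment would also handle erasure coding, capacity, network
cost and rebalancing.
"""
import hashlib

CENTRES = ["CERN", "FNAL", "BNL", "KIT", "IN2P3"]   # hypothetical O(5-10) sites
REPLICAS = 2                                        # survive one site "unplugging"

def place_replicas(dataset_id: str, centres=CENTRES, n=REPLICAS):
    """Deterministically choose n distinct centres for a dataset.

    Uses rendezvous (highest-random-weight) hashing, so adding or removing
    a centre only moves the replicas that actually have to move.
    """
    scored = sorted(
        centres,
        key=lambda c: hashlib.sha256(f"{dataset_id}:{c}".encode()).hexdigest(),
        reverse=True,
    )
    return scored[:n]

def surviving_copies(dataset_id: str, offline_centre: str):
    """Replicas still reachable if one centre drops out of the cloud."""
    return [c for c in place_replicas(dataset_id) if c != offline_centre]

if __name__ == "__main__":
    ds = "AOD/Run3/period_X/dataset_0001"   # hypothetical dataset name
    print("placement:", place_replicas(ds))
    print("if FNAL unplugs:", surviving_copies(ds, "FNAL"))
```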

What about Tier 2s, opportunistic resources, ... ?
- Depends on scale
  - Large Tier 2s and HPC could be pluggable compute facilities; they need connectivity to the private data network
- The most effective use of medium and small centres is for simulation
  - Simulation loads are huge (~50%)
  - Centres upload MC samples to the cloud
- Ideally want full-chain, pre-emptable simulation workflows (see the sketch after this list)
  - To make best use of HPC, opportunistic, volunteer resources, etc.
  - Some HPC is potentially best suited to event generation, which becomes a significant compute load
- Must consider the best way to use what we have, not focus on labels/prestige etc.
- Easily adapts to the situation where a country may replace academic computing clusters with "tokens" for commercial cloud providers
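Since pre-emptable simulation workflows are singled out above as the key to exploiting HPC, opportunistic and volunteer resources, here is a minimal sketch of what "pre-emptable" can mean in practice: the job checkpoints its progress and reacts to the SIGTERM that typically precedes slot revocation, so it can resume on the next available slot. The checkpoint format, file names and event-loop stand-in are invented for the example; a real workflow would wrap the experiment's generation/simulation chain and upload the finished MC samples to the data cloud.

```python
"""Sketch of a pre-emptable simulation task for opportunistic resources.

Illustrative only: the "simulate one event" stand-in and checkpoint file
are invented; a real job would drive the experiment's generator/Geant4
chain and stage its output to the data cloud.
"""
import json, os, random, signal

CHECKPOINT = "sim_checkpoint.json"   # hypothetical checkpoint file
N_EVENTS = 10_000
stop_requested = False

def request_stop(signum, frame):
    # Opportunistic/volunteer slots are typically revoked with SIGTERM.
    global stop_requested
    stop_requested = True

signal.signal(signal.SIGTERM, request_stop)

def load_checkpoint() -> int:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_event"]
    return 0

def save_checkpoint(next_event: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_event": next_event}, f)

def simulate_event(i: int) -> dict:
    # Stand-in for generation + detector simulation of one event.
    return {"event": i, "weight": random.random()}

def run() -> None:
    start = load_checkpoint()
    for i in range(start, N_EVENTS):
        simulate_event(i)
        if i % 100 == 0:
            save_checkpoint(i + 1)    # cheap, frequent checkpoints
        if stop_requested:
            save_checkpoint(i + 1)    # resume from here on the next slot
            return
    save_checkpoint(N_EVENTS)

if __name__ == "__main__":
    run()
```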

Software
- The HSF (HEP Software Foundation) was set up in response to the recognition that software will be key to success for HL-LHC and the future

Software investment
- There are some (funded) activities in some countries
- Need to engage effort globally and coherently across HEP
- Could use the mechanism of the original LCG project:
  - Define project(s) with clear timescales and outcomes
  - Ask for contributions (of effort, not cash)
- Use the HSF/CWP as the framework - we already have community engagement
- NB: software and infrastructure must be treated together - the separation is a source of inefficiency

Steering / governance
- Would need buy-in from all potential stakeholders
- Would like steering from the IT heads of major HENP labs, projects, facilities, experiments
- Should have a globally recognized mandate
  - E.g. from ICFA (we have had such a mandate before, for grids); needed to get acceptance from some countries
- Governance should be very lightweight, through consensus of the community, informing the steering group
- Role is to ensure, for infrastructure and software, that:
  - Direction and evolution are suitable for the community
  - Funding and effort get direct feedback to funding agencies or labs
  - It can use the HSF, HEPiX, specific projects, working groups
- AND: community-wide licensing, procurements, agreements
  - E.g. on commercial clouds - economy of scale
- Address political issues - e.g. how to evolve funding models

WLCG/HEP
- We could build a prototype of some of this now
  - With HEP technology (EOS etc.) - small demonstrators exist (EOS and dCache federations)
  - Opportunity of a new data centre in Prévessin
- Software initiatives
  - HEP put the HSF in place, recognising software as a crucial area
  - Must ensure that we invest sufficiently in people and skill building/retention

Prototyping
- Building a prototype data cloud/science cloud now would also allow us to provide additional value-added services to HEP:
  - (Web-)FTS, CERNBox, data archiving, DBOD, data preservation (Zenodo) and an open-access platform
  - All of these "as-a-service"
- Could demonstrate this to other sciences - to eventually contribute to a science cloud
  - And then include other collaborative services (Indico, Vidyo, etc.)
- Leverages CERN's and HEP's key competencies and experience
- This would be a clear, unique contribution of HEP to an EOSC infrastructure ...