HEP Forum for Computational Excellence (HEP-FCE) Kick Off Meeting Salman Habib & Rob Roser (Co-Directors)

Goals of this Introductory Talk − Provide background on how this all started, and context for why its success is critical to HEP as we move forward − Establish a shared understanding within this group of what the FCE goals are − Understand what the FCE is NOT

HEP Computing Frontier − Computing is an essential enabling and empowering component of almost all aspects of HEP science − HEP computing has a long history (~60 years) including notable contributions to High Performance Computing (HPC), High Throughput Computing (HTC), and large-scale Data Science − Substantial resources are devoted to computation and data science as an essential aspect of HEP’s scientific enterprise − Winds of Change: New challenges posed by hardware evolution and rapid increase in data rates/volumes in an era of flat/declining budgets − Wide recognition that concerted response leveraging HEP community expertise and best practices from the outside world is needed (HEP Topical Panel, Snowmass, P5)

From the P5 Report… − Recommendation 29: “Strengthen the global cooperation among laboratories and universities to address computing and scientific software needs, and provide efficient training in next-generation hardware and data-science software relevant to particle physics.” − Driver for a set of HEP-FCE goals in collaboration and community-building efforts, and in R&D focused on building cross-cutting dedicated expertise in architecture and data science evolution − “Investigate models for the development and maintenance of major software within and across research areas, including long-term data and software preservation. (p. 21)” − Driver for a set of HEP-FCE goals in R&D, cross-cut software, and data preservation

What is the HEP-FCE? − Official DOE HEP response to P5 recommendations, Snowmass, and the HEP Topical Panel on Computing and Simulation (Dec 2013): “Computing Frontier” as a key strength of HEP − Cross-cut exchange to promote excellence in HEP computing, simulation, data transfer/storage/preservation and promote the flow of information and best practices across HEP Frontiers − Identify and undertake focused R&D tasks addressing frontier challenges posed by rapidly evolving computer architectures, next-generation data-intensive computing, and development of shared software elements − Undertake information gathering tasks – community-based studies and surveys − Access point for agency/office partnerships (ASCR, NSF, NASA…) and HEP connections (CERN, DPHEP, G4, HSF, OSG, USQCD...)

Why NOW and WHY HEP-FCE? − The present computing model in HEP handles virtually all computing within experiments/frontiers − Need to strengthen and expand the horizontal component to best utilize HEP resources, avoid duplication, optimize technology, develop external resources & strengthen partnerships − Rapidly evolving computer architectures and exponentially increasing data volumes REQUIRE effective responses & cross-cut solutions − Evolution of commodity hardware and rise of cloud computing − Enhance opportunities for interfacing with ASCR − Understanding HEP’s core competencies and leveraging that expertise moving forward [Figure: HEP-FCE cutting across the P5 science drivers – the Higgs boson as a tool for discovery, dark matter, physics associated with neutrino mass, dark energy and inflation, and exploring the unknown (new particles and fields)] Given flat/declining budgets, it is increasingly important to leverage work that has already been done and to use expertise that already exists

Roles that the FCE can play… − Computing-related HEP community planning − Coordinate HEP-FCE R&D portfolio focused on cross-cutting initiatives and tasks − Advice on computational support for smaller/new experiments − Provide cross-cutting knowledge/capability base − Provide training/workshop activities − Engage/reward junior researchers (training, travel/visiting awards) − Disseminate information regarding data archiving/curation/management − Strategic connectivity for DOE HEP program; serve as a community hub − Work with existing groups to leverage capabilities – OSG, SciDAC teams, and others Note: HEP-FCE is an intellectual provider, not a service provider; the main roles above relate to facilitation and incubation As these activities mature, we can expand the role further

We will talk more about these this afternoon

HEP-FCE: Where we are today I − HEP-FCE is still in its early days − Success of HEP-FCE is tied to the success of collaboration and leveraging − Embarked on a pilot project in mid-2014, starting out by launching 3 Working Groups. Based on the Snowmass, HEP Computing Topical Panel, and P5 reports, along with community input, the WGs were tasked to identify concrete, actionable opportunities in the HEP computational landscape: 1) Systems (hardware and systems software), 2) Tools and Libraries, 3) Applications Software. Working Group Reports: reports in hand; major general conclusions: Systems − focus on 1) data storage and data access technologies, 2) future computer architectures. Tools and Libraries − 1) adopt sharing across experiments, 2) step beyond local solutions, 3) avoid systemic shortcomings. Applications Software − 1) top-down consolidation not required, 2) need for recognition of software importance, on par with major detector R&D

HEP-FCE: Where we are today II − Community Support and Collaboration Development: web presence established; initial team of experts and volunteers in place − Recently completed coordinating the exascale requirements activity with DOE ASCR

Newly Established HEP-FCE Advisory Panel: Mayly Sanchez (Iowa State), Paul Avery (Florida), Maria Spiropulu (Caltech), Kaushik De (Texas), Andy Connolly (Washington), Steve Gottlieb (Indiana)

Enhance and Expand the HEP/ASCR Connection I 1. SciDAC (Scientific Discovery through Advanced Computing): Lattice QCD; accelerator modeling (parallel computing came to this community via a DOE Grand Challenge followed by SciDAC); computational cosmology. All three of these cases have solid success stories and are ongoing. 2. ESnet: partnership essential for the Energy Frontier, also for the Cosmic and Intensity Frontiers. 3. ALCC awards: Cosmic Frontier End-Station, LHC Event Simulation, Theory. 4. INCITE awards: Lattice QCD, Accelerators, Computational Cosmology. 5. Other partnerships, showing directions forward but not yet at the same level as the ones above, include work on refactoring HEP software and building new data-intensive capabilities. 6. Partnerships include work at all three major ASCR facilities: ALCF, NERSC, OLCF

Enhance and Expand the HEP/ASCR Connection II 7. Leveraging HEP capabilities: a) in the vein of “do what we do best,” but learn/adopt best practices from the outside world; b) help promote the success of OSG and identify further opportunities; c) use our own internal expertise and external connections (with ASCR) to encourage and help others in HEP. Examples include: Hoeche et al., work on Sherpa; LeCompte et al., running ATLAS event simulation on a production basis on ANL HPC machines; Dark Energy Survey data management pipelines running at NERSC. 8. “Mini-Apps”: benchmark and improve our core software apps; help HEP and ASCR make intelligent decisions for resource acquisition. 9. Fully engage in the Exascale Initiative: initial engagement complete, but we need to stay engaged as a full partner

The Impact of HPC on HEP − Not all problems can be solved using HPC systems, but many can (accelerators, cosmology, event generation/simulation, QCD…) − Next generation of ASCR HPC machines (staging begins 2016, ends in 2018) will sum to ~500 petaflops of compute capability − If HEP experiments use just 5% of that, i.e. 25 petaflops, it is ~25 times what the Grid will provide − Learning how to leverage these resources to seamlessly supplement/enhance current capability is important − New possibilities opened up by HPC platforms will offer unique computational opportunities
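The capacity comparison on this slide can be reproduced with back-of-envelope arithmetic. A minimal sketch, assuming the numbers as stated on the slide; the ~1 PF grid figure is inferred from the claimed ~25x ratio and is not an official number:

```python
# Back-of-envelope check of the slide's HPC capacity comparison.
# grid_capacity_pf is inferred from the stated ~25x ratio, not an
# official figure.
ascr_total_pf = 500.0    # projected next-generation ASCR HPC capability
hep_share = 0.05         # notional 5% HEP allocation
grid_capacity_pf = 1.0   # implied aggregate grid capability

hep_hpc_pf = ascr_total_pf * hep_share
ratio = hep_hpc_pf / grid_capacity_pf
print(hep_hpc_pf, ratio)  # 25.0 PF, 25x the grid
```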

HPC for HEP Experiments − ATLAS has about a year and a half of experience with production HPC use (Comp. HEP pilot project) − ALCC award equivalent to several percent of the grid − Used to run event generators − Highlighted areas for code improvement − Running at scales of 100K-1M parallel processes (close collaboration between HEP/ALCF/code authors) − HPC fraction can grow, and the path is easier for new experiments (e.g., IF) [Figure: event display for a simulated Z+5 jets event, generated on Mira (ALCF) with Alpgen, simulated on Edison (NERSC) with Yoda, reconstructed at CERN]

Next-Generation Architectures: HEP Mini-Apps − Real applications are usually too large and complex to work with in design studies − Kernels alone are often inadequate − Mini-apps (Heroux/Barrett): compact, self-contained proxies that adequately portray the computational loads and dataflow, but are not meant to be scientifically realistic − An HEP mini-app collection (thousands of lines of code each) will be a good way to develop a design suite for next-generation architectures − Mini-app instances can cover all HEP application areas [Figure: co-design loop using mini-apps to approximate the workload of the parent application (LLNL STR 2013)]
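To make the mini-app idea concrete, here is a minimal, hypothetical sketch: a toy particle-push loop loosely reminiscent of accelerator-modeling codes, reduced to the memory-access pattern and arithmetic with deliberately trivial physics, plus the figure-of-merit reporting that lets architects compare platforms. The function name and parameters are illustrative, not from any real HEP mini-app suite.

```python
# Hypothetical mini-app proxy: mimics the computational pattern of a
# larger code (a toy 1D particle push) without scientific realism.
import time

def particle_push(n_particles, n_steps, dt=1e-3):
    """Advance a toy particle ensemble under a fixed restoring force.

    The loop structure and data layout stand in for the parent
    application's workload; the physics is deliberately trivial.
    """
    x = [0.0] * n_particles
    v = [1.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            v[i] += -0.5 * x[i] * dt  # toy force kick
            x[i] += v[i] * dt         # position drift
    return x, v

start = time.perf_counter()
x, v = particle_push(n_particles=1000, n_steps=100)
elapsed = time.perf_counter() - start
# A mini-app reports a figure of merit (here particle-steps/second)
# for comparison across candidate architectures.
print(f"{1000 * 100 / elapsed:.3e} particle-steps/s")
```

A real mini-app would add realistic data sizes, parallelism, and I/O, but would keep this same property: small enough to port to a new architecture in days, faithful enough that its performance predicts the parent code's.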

The Realities of HEP Computing Moving Forward − Due to funding constraints, we will have to optimize computing resources for “average” demand and not peak − Need to find creative solutions for those instances where we need more than we have and need it fast (“pledged” vs. “non-pledged” resources) − Importance of leveraging HPC facilities – already demonstrated at the level of several percent of ATLAS production computing, equivalent to all of Spain’s contribution (LeCompte et al.) − Computing in the Cloud will be an important player in the future − Helping each other across Frontiers/projects – sharing core competencies is essential − Finding ways to not just develop key software, but to maintain and evolve it going forward

The Success of FCE − It’s not automatic! − We need to work as a collective unit to start bringing some of these ideas into reality; community-based approach needed − Computing has always been a strength of the HEP community – the FCE will help leverage that competency across the various scientific enterprises and take advantage of the existing expertise to solve outstanding problems − It will only succeed if we want it to – and put in the effort to make it so. − By the end of this meeting, we need to have a collective vision of what the FCE is, and action plans to launch it forward − Concrete, well-defined, time-bound activities − We also need people to start talking about it and letting our colleagues know that this is happening

BACKUPS

FCE Next Steps − Next-Generation Architectures/Supercomputing Applications – FCE will help foster and build collaborations to enable a new set of supercomputer users within HEP − Data-Intensive/Cloud Computing – virtualization and software containers offer elasticity of compute resources and handle complex software stacks − Cross-Cut Software Development – developing a strong cross-cut software base by functioning as a helpful go-between across experiments, institutions, and frontiers − High-Speed Networking – engagement with entities such as ESnet, HEP and ASCR facilities, and software teams that are focused on managing data movement − ASCR/HEP Interactions and Workshops – joint meetings with HEP and ASCR to shape future collaborative activities − HEP-FCE Infrastructure Support and Community Development – develop and maintain a HEP community infrastructure for sharing ideas, algorithms, codes, and evolving sets of best practices