High Performance Computing Activities at Fermilab
James Amundson
Breakout Session 5C: Computing
February 11, 2015

High Performance vs. High Throughput Computing

Much of the computing in HEP relies on High Throughput Computing (HTC)
–One task per processor on many processors
–Trivial to run in parallel

This talk is about High Performance Computing (HPC)
–One large task spread across many processors
–Tightly-coupled communications requiring advanced networking: low latency and high bandwidth (a minimal sketch of such a communication pattern follows below)
–Includes Linux clusters with specialized networking as well as supercomputers
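To make "tightly coupled" concrete, here is a minimal C++/MPI halo-exchange sketch (a hypothetical illustration, not taken from the talk): every rank must trade boundary data with its neighbors on every step, so overall progress hinges on the low-latency, high-bandwidth networking described above.

```cpp
// Hypothetical sketch of tightly coupled HPC communication (not from the talk):
// each MPI rank exchanges halo cells with its neighbors every step, so the whole
// job advances only as fast as the network allows. HTC jobs, by contrast, never
// need to communicate at all.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1024;                          // local slice of a 1D domain, plus 2 halo cells
    std::vector<double> u(n + 2, rank);
    int left  = (rank - 1 + size) % size;        // periodic neighbors
    int right = (rank + 1) % size;

    for (int step = 0; step < 100; ++step) {
        // Halo exchange: send edge cells to neighbors, receive their edges into the halos.
        MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 0,
                     &u[0], 1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left,  1,
                     &u[n + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int i = 1; i <= n; ++i)             // simple smoothing update that uses neighbor data
            u[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1]);
    }
    if (rank == 0) std::printf("completed 100 coupled steps on %d ranks\n", size);
    MPI_Finalize();
    return 0;
}
```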

Cooperative Work in High Performance Computing (HPC)

Lab facilities
–HPC clusters
–Next-generation HPC testbeds

Lab competencies
–Scientific: Computational Physics on HPC
–Technical: HPC programming and optimization; HPC cluster support

Relevant lab science topics
–Lattice QCD
–Cosmology
–Accelerator Simulation

Much of this work is funded by DOE HEP (CompHEP) through SciDAC.

Shared HPC Facilities

Lattice QCD
–Three clusters, ~25,000 CPU cores total
–Two of the clusters include GPUs
–Infiniband interconnects

Accelerator Simulation and Cosmology
–Two clusters, ~2,500 CPU cores total
–Infiniband interconnects

Next-generation research
–Two clusters with 72 traditional cores
–Phi cluster: 16 Intel Xeon Phi 5110P accelerators
–GPU cluster: 2 NVIDIA Tesla Kepler K20m GPUs and 2 K40m GPUs
–Infiniband interconnects

Sharing HPC Facilities

Facilities are operated by Computing (CS); their use is shared among Particle Physics (PPD), the Center for Particle Astrophysics (FCPA) and CS.

All three science areas also use leadership-class supercomputing facilities, for example
–ALCF at Argonne (Mira: BlueGene/Q), with resources obtained through the INCITE program
–NERSC at LBL (Edison: Cray XC30), with resources obtained through the SciDAC program

Shared Competencies

HPC cluster support
–The LQCD group has developed highly specialized expertise in the acquisition, deployment and support of HPC Linux clusters; the difficulty of each is easy to underestimate.
–Exotic hardware and drivers.
–This expertise is shared with Accelerator Simulation and Cosmology through the support of specialized clusters.

Computational Physics on HPC
–Many overlaps between LQCD, Accelerator Simulation and Cosmology
–Numerical algorithms: particle-in-cell techniques, linear algebra and spectral methods (a small sketch follows below)
–Parallelization algorithms: load balancing, data layout, communication avoidance, etc.
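As one example of the shared numerical toolkit, the deposition step of a particle-in-cell solver can be sketched as below. This is a hypothetical 1D cloud-in-cell illustration, not code from any of the lab projects; the function name and toy parameters are invented for the example.

```cpp
// Hypothetical 1D cloud-in-cell (CIC) charge deposition, illustrating the
// particle-in-cell technique shared by beam-dynamics and cosmology codes.
// Not actual lab code; names and parameters are invented for this sketch.
#include <vector>
#include <cmath>
#include <cstdio>

// Deposit unit-charge macroparticles onto a uniform grid of spacing dx starting at x0.
std::vector<double> deposit_charge(const std::vector<double>& positions,
                                   int ncells, double x0, double dx) {
    std::vector<double> rho(ncells, 0.0);
    for (double x : positions) {
        double s = (x - x0) / dx;                 // position in grid units
        int i = static_cast<int>(std::floor(s));
        double frac = s - i;                      // fractional distance past grid point i
        if (i < 0 || i + 1 >= ncells) continue;   // ignore particles outside the grid
        rho[i]     += (1.0 - frac) / dx;          // linear weights: the nearer point gets more charge
        rho[i + 1] += frac / dx;
    }
    return rho;   // charge density; the next PIC step would be a field solve (e.g. FFT or multigrid)
}

int main() {
    std::vector<double> bunch = {0.26, 0.50, 0.74};          // toy particle positions
    std::vector<double> rho = deposit_charge(bunch, 8, 0.0, 0.125);
    for (double r : rho) std::printf("%g ", r);
    std::printf("\n");
    return 0;
}
```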

Shared Competencies, continued

HPC programming and optimization
–Similar technical issues are faced by all HPC efforts at the lab.
–Groups meet in a regular series of technical seminars (“NEAT Topics”).
–LQCD and Accelerator Simulation submitted a SciDAC cross-cutting project.
–Accelerator Simulation and Cosmology share technical personnel.

Lattice-QCD computing and USQCD

Lattice-QCD applications cover all science cross-cuts, e.g. hadronic contributions to (g−2)μ, the nucleon axial-vector form factor for the CCQE cross section, and quark masses and αs for Higgs predictions.
–Calculations require supercomputers (provided by DOE’s LCFs) and even more flops on medium-scale computing clusters (naturally provided by DOE laboratories).

Fermilab deploys and operates large computing clusters for the U.S. lattice gauge theory community (organized by the USQCD Collaboration), which is mostly based at universities.
–The USQCD effort is over 100 strong; its hardware is funded by the DOE HEP and NP offices.
–Fermilab hosts the largest share of USQCD’s computing hardware and has the highest user satisfaction among the three USQCD facilities.

Fermilab plays leading roles in the U.S. lattice effort, including
–Paul Mackenzie: USQCD spokesperson and project PI
–Bill Boroski (office of the CIO): LQCD project manager

Computing and PPD interactions are crucial for successful science!

Accelerator Simulation and ComPASS

Community Project for Accelerator Science and Simulation (ComPASS) collaboration
–Mission is to develop and deploy state-of-the-art accelerator modeling.
–DOE CompHEP+ASCR funding through SciDAC
–Collaboration includes national labs, universities and one company

Fermilab is the lead institution on ComPASS.
–More discussion of accelerator simulation to follow…

Science on HPC

The point of these shared facilities and competencies is producing science.
–Lattice QCD and Cosmology are covered in other cross-cutting sessions.

I will focus on Accelerator Simulation.
–Accelerator simulation serves HEP by enabling accelerator technology, so its directions must be in sync with HEP priorities.
–Work is done in collaboration with Fermilab’s Accelerator Division and other accelerator facilities, especially CERN.

Accelerator Simulation Directions Set by Lab Goals

We choose simulation topics based on Lab goals, which are ultimately driven by P5 goals.
–Extend the scientific reach of existing accelerator facilities: simulate the existing Booster, Delivery Ring, Recycler and Main Injector.
–Launch a test facility to enable “transformative” accelerator science: simulate the Integrable Optics Test Accelerator (IOTA), an experimental machine to explore using intrinsically nonlinear dynamics to overcome intensity limitations inherent to ordinary linear machines.
–Establish Fermilab as an essential contributor to future large accelerators: simulate the LHC injectors for HL-LHC.

Synergia

A beam dynamics framework developed at Fermilab with support from the SciDAC ComPASS collaboration (CompHEP+ASCR); Fermilab leads ComPASS.

Advanced capabilities
–Collective effects
–High precision through high statistics (many particles)
–Single- and multi-bunch physics; the multi-bunch capability is unique to Synergia
–Runs on everything from laptops to supercomputers

Synergia is also used to teach students from around the world at the US Particle Accelerator School (e.g., two weeks ago).

Synergia and Current-Generation HPC

Synergia was constructed for HPC
–Core model is MPI + (optional) OpenMP (a minimal sketch of this hybrid pattern follows below)
–Strong scaling over 1000x
–Weak scaling to 100,000+ cores

Scaling results shown on the slide:
–Single-bunch strong scaling from 16 to 16,384 cores (32x32x1024 grid, 105M particles)
–Weak scaling from 1M to 256M particles on 128 to 32,768 cores
–Weak scaling from 64 to 1024 bunches on 8,192 to 131,072 cores, up to over … particles
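The MPI + (optional) OpenMP core model can be illustrated by the minimal sketch below. It is a hypothetical example of the hybrid pattern, not Synergia source code; the drift update and the diagnostic reduction are stand-ins.

```cpp
// Hypothetical sketch of the MPI + (optional) OpenMP hybrid pattern (not Synergia code):
// each MPI rank owns a slice of the bunch, threads its local particle loop with OpenMP,
// and a collective reduction combines a bunch-wide diagnostic across ranks.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::vector<double> x(100000, 0.0), xp(100000, 1.0e-3);  // this rank's slice of the particles
    const double length = 1.0;                               // drift length, arbitrary units

    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)          // thread-level parallelism within a rank
    for (long i = 0; i < static_cast<long>(x.size()); ++i) {
        x[i] += length * xp[i];                              // stand-in for the per-particle update
        local_sum += x[i];
    }

    double global_sum = 0.0;                                 // rank-level parallelism via MPI
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("mean x over all ranks: %g\n",
                    global_sum / (static_cast<double>(size) * x.size()));

    MPI_Finalize();
    return 0;
}
```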

Synergia and Next-Generation HPC

Dramatic changes are coming in near-future HPC architectures, in supercomputers and clusters alike
–OLCF’s Summit: GPU + PowerPC
–NERSC’s Cori: Intel MIC

Synergia GPU and MIC ports are in progress, using shared expertise and facilities.
–Results shown on the slide: first production-level GPU results and a vectorization benchmark for MIC (a vectorization sketch follows below)
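The kind of per-particle loop targeted by the MIC vectorization work can be sketched as follows. The structure-of-arrays layout, the `drift` routine, and the `omp simd` hint are illustrative assumptions for this sketch, not Synergia's actual implementation.

```cpp
// Hypothetical vectorization sketch in the spirit of the MIC benchmark mentioned above
// (not Synergia code): a structure-of-arrays particle layout plus an OpenMP SIMD hint
// lets the compiler emit wide vector instructions for the per-particle drift update.
#include <vector>
#include <cstdio>

struct Bunch {                        // structure-of-arrays: each coordinate stored contiguously
    std::vector<double> x, xp, y, yp;
};

void drift(Bunch& b, double length) {
    const long n = static_cast<long>(b.x.size());
    double* x  = b.x.data();
    double* xp = b.xp.data();
    double* y  = b.y.data();
    double* yp = b.yp.data();
    #pragma omp simd                  // ask the compiler to vectorize this loop
    for (long i = 0; i < n; ++i) {
        x[i] += length * xp[i];
        y[i] += length * yp[i];
    }
}

int main() {
    Bunch b;
    const std::size_t n = 1 << 20;                 // ~1M toy particles
    b.x.assign(n, 0.0);  b.xp.assign(n, 1.0e-3);
    b.y.assign(n, 0.0);  b.yp.assign(n, -1.0e-3);
    drift(b, 2.0);
    std::printf("x[0] = %g, y[0] = %g\n", b.x[0], b.y[0]);
    return 0;
}
```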

Synergia Applications

Lab goal: extend the scientific reach of existing accelerator facilities.

Simulate the Main Injector and Recycler under high intensities
–Work done with Accelerator Division Main Injector personnel
–Multi-bunch capabilities in Synergia used to simulate slip stacking, which uses RF to combine multiple bunches to increase intensity

Synergia Applications, continued

Resonant extraction in the Delivery Ring for Mu2e
–Work done with Accelerator Division Muon personnel
–Important collective effects

Multi-bunch instability in the Booster
–Work done with AD Proton Source personnel
–Requires multi-bunch physics with collective effects

Other Synergia Applications

Lab goal: establish Fermilab as an essential contributor to future large accelerators.
–Synergia benchmarking exercise and simulations for HL-LHC
–Work done with CERN accelerator personnel
–Scale of the HL-LHC injector simulation (the arithmetic is checked below):
 71 steps/turn
 7,100,000 steps (~7 million)
 4,194,304 particles (~4 million)
 29,779,558,400,000 particle-steps (~30 trillion)
 1,238,158,540,800,000 calls to “drift” (~1 quadrillion)

Lab goal: enable “transformative” accelerator science.
–Synergia extended to handle the nonlinear lens for IOTA
–Work done with Accelerator Physics Center personnel
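As a consistency check, the quoted totals fit together as shown below; note that the 100,000-turn figure and the per-turn drift count are inferred from the totals and are not stated on the slide.

\[
\begin{aligned}
\text{steps} &= 71\ \tfrac{\text{steps}}{\text{turn}} \times 100{,}000\ \text{turns} = 7{,}100{,}000,\\
\text{particle-steps} &= 7{,}100{,}000 \times 4{,}194{,}304 = 29{,}779{,}558{,}400{,}000 \approx 3\times10^{13},\\
\frac{\text{drift calls}}{\text{particles}\times\text{turns}} &= \frac{1{,}238{,}158{,}540{,}800{,}000}{4{,}194{,}304 \times 100{,}000} = 2{,}952 \text{ drift applications per particle per turn}.
\end{aligned}
\]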

Conclusions

Three scientific areas at the lab use significant HPC resources
–Lattice QCD, Cosmology, Accelerator Simulation

Share facilities
–Acquired, deployed and operated by Computing
–Used by all

Share competencies
–Computational Physics
–HPC programming and optimization

Produce science aligned with lab priorities
–Accelerator simulation in support of P5/Lab goals