Page 1: High Energy Physics Greenbook Presentation
Robert D. Ryne, Lawrence Berkeley National Laboratory
NERSC User Group Meeting, June 25, 2005

Page 2: Outline
- Lattice QCD
- Accelerator Physics
- Astrophysics (see D. Olson’s presentation)

Page 3: Lattice QCD

Page 4: Goals
- Determine a number of basic parameters of the Standard Model
- Make precise tests of the Standard Model
- Obtain a quantitative understanding of the physical phenomena controlled by the strong interactions

Page 5: Impact on determination of the CKM matrix
- Improvements in lattice errors obtained with computers sustaining 0.6, 6, and 60 Tflops for one year

Page 6: Computing Needs: Approach
- Two-pronged approach:
  - Use of national supercomputer centers such as NERSC
  - Build dedicated computers using special-purpose hardware for QCD
    - QCDOC
    - Optimized clusters
- Special-purpose hardware is used to perform the majority of the lattice calculations
- Supercomputer centers are used for a combination of lattice calculations and data analysis

Page 7: Computational Issues
- Lattice calculations utilize a 4D grid (a sketch of the resulting communication pattern follows below)
- Need highest possible single-processor performance
- Communication is nearest-neighbor
- Don’t need large memory
- Do need high-speed networks
  - International Lattice Data Grid formed to share computationally expensive data
  - Need to move ~1 petabyte in 24 hrs (a sustained rate of roughly 12 GB/s)
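The following is a minimal, hypothetical sketch of the communication pattern these bullets describe: the 4D lattice is distributed over a 4D Cartesian grid of MPI ranks, and each rank exchanges only the faces of its local volume with its nearest neighbors. It assumes mpi4py and NumPy (production lattice QCD codes are typically written in C/C++); the lattice size and field contents are placeholders.

```python
# Toy 4D halo exchange; run with: mpirun -n <P> python halo4d.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), 4)         # factor the ranks into a 4D process grid
cart = comm.Create_cart(dims, periods=[True] * 4)   # periodic lattice boundary conditions

local = 4                                            # local lattice sites per direction (toy size)
field = np.random.rand(local, local, local, local)   # stand-in for the local field data

for mu in range(4):                                  # the four lattice directions (x, y, z, t)
    src, dst = cart.Shift(mu, 1)                     # neighbor ranks in the -mu / +mu directions
    # send the +mu face of the local volume, receive the matching halo from the -mu neighbor
    send_face = np.ascontiguousarray(np.take(field, -1, axis=mu))
    recv_halo = np.empty_like(send_face)
    cart.Sendrecv(send_face, dest=dst, recvbuf=recv_halo, source=src)
    # a real code would now use recv_halo when applying the Dirac operator near the boundary
```

Because every message goes only to an adjacent rank, the pattern rewards a fast, low-latency network rather than large per-node memory, which is exactly the balance the slide calls out.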

Page 8: Lattice QCD Computational Roadmap
- Lattice community presently sustains Tflop/sec-level performance
  - Has allowed determination of a limited number of key quantities to ~few percent accuracy
  - Has allowed development & testing of new formulations that will significantly improve the accuracy of future calculations
- In the next few years, need to sustain higher Tflop/sec rates to:
  - Calculate weak decay constants & form factors
  - Determine the phase diagram of high-temperature QCD, calculate the EOS of the quark-gluon plasma
  - Obtain a quantitative understanding of the internal structure of strongly interacting particles
- Need to sustain ~1 petaflop/sec by end of decade

Page 9: Accelerator Physics

Page 10: Goals
- Large-scale modeling is essential for:
  - Improving/upgrading existing accelerators
  - Designing next-generation accelerators
  - Exploring/discovering new methods of acceleration
    - Laser/plasma-based concepts

Page 11: Accelerator modeling is very diverse
- Many models
  - Maxwell
  - Vlasov/Poisson
  - Vlasov/Maxwell
  - Fokker-Planck
  - Liénard-Wiechert
- Single- & multi-species
- Particle-based codes
- Mesh-based codes: regular, irregular, AMR, …
- Combined particle/mesh codes (a toy particle/mesh sketch follows below)
- Runs of various sizes (up to ~1000 PEs and beyond)
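As a concrete, deliberately toy illustration of the combined particle/mesh pattern listed above, the sketch below advances a 1D electrostatic particle-in-cell model: particles are deposited onto a mesh, a periodic FFT Poisson solve on the mesh gives the field, and the field is gathered back to push the particles. It assumes only NumPy; the grid size, particle count, and normalizations are arbitrary and are not taken from any of the production codes referenced in this talk.

```python
import numpy as np

ng, npart, L, dt = 64, 10_000, 1.0, 1e-3
x = np.random.uniform(0, L, npart)                 # particle positions
v = np.random.normal(0, 0.1, npart)                # particle velocities

for step in range(10):
    # deposit charge on the mesh (nearest-grid-point weighting), with neutralizing background
    rho, _ = np.histogram(x, bins=ng, range=(0, L))
    rho = rho / rho.mean() - 1.0
    # solve Poisson's equation d^2(phi)/dx^2 = -rho with an FFT on the periodic mesh
    k = 2 * np.pi * np.fft.fftfreq(ng, d=L / ng)
    k[0] = 1.0                                     # avoid division by zero for the k=0 mode
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_hat))    # E = -d(phi)/dx
    # gather the field at particle positions and push (leapfrog-style update)
    Ep = E[(x / L * ng).astype(int) % ng]
    v += Ep * dt
    x = (x + v * dt) % L
```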

Page 12: Advanced Computing: an imperative to help assure the success and best performance of a ~$20B investment
- SciDAC budget is < 0.02% of this amount
- A small investment in computing can have huge financial consequences

Page 13: Accelerator Modeling Roadmap
- Current resources: ~3M NERSC hours
- In the next few years, will need ~20M hrs/yr
  - Design of proposed machines: Linear Collider, RIA, hadron machines (proton drivers, muon/neutrino systems, VLHC)
  - Simulation of existing & near-term machines: LHC, RHIC, PEP-II, SNS
  - Design of advanced concepts: 1 GeV stage, plasma afterburner
  - Design of 4th-generation light sources
- By the end of the decade, will need ~60M hrs/yr
  - Full-scale electron-cloud modeling
  - Multi-slice, multi-IP, strong-strong beam-beam
  - Interaction of space-charge effects, wakefields, and machine nonlinearities in boosters and accumulator rings
  - First-principles Langevin modeling of electron cooling systems
  - CSR effects with realistic boundary conditions
- Goal is end-to-end modeling of complete systems

Page 14: Algorithmic & Software Needs
- Continued close collaboration with ASCR-supported researchers is essential
  - Linear solvers, eigensolvers, PDE solvers, meshing technologies, visualization
  - Performance monitoring and enhancement, version control & build tools, multi-language support
- Multi-scale methods are becoming increasingly important
- We need robust, easy-to-use parallel programming environments & parallel scientific software libraries (see the solver-library sketch below)
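A minimal sketch of the solver-library reuse this slide argues for: the application assembles its discretized operator and delegates the linear and eigenvalue solves to a library routine. SciPy is used here purely as a stand-in (an assumption on my part; at scale a code would call parallel, ASCR-supported solver libraries), and the 1D Poisson matrix is just a placeholder problem.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# 1D Poisson operator with Dirichlet boundaries: tridiag(-1, 2, -1)
A = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Krylov solve (conjugate gradient) -- the kind of routine the "linear solvers" bullet refers to
x, info = spla.cg(A, b)
assert info == 0, "CG did not converge"

# the same libraries typically expose eigensolvers as well
vals, vecs = spla.eigsh(A, k=4, sigma=0)   # a few smallest eigenvalues via shift-invert
```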

Page 15: Parallel optimization promises to be well suited for design problems on 10’s of thousands of processors
- Machine design always involves multiple runs
- Up to now the community has learned how to run large problems on ~a thousand processors
- In the future, it will be desirable to run multiple ~1000-processor runs in a single optimization step (see the ensemble sketch below)
- Will allow scaling up to 10’s of thousands of processors for machine design problems
- NOTE: not all problems are design problems; fast interprocessor communication is needed for the very largest “single point” runs
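A hedged sketch of the ensemble pattern described above, assuming mpi4py: the global communicator is split into sub-communicators, each sub-communicator runs one (placeholder) parallel simulation for one candidate design point, and the root rank compares the objective values. The "optimizer" here is a one-shot parameter scan; a real design loop would iterate on the results.

```python
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
n_groups = 4                                   # simultaneous design evaluations
color = world.Get_rank() % n_groups            # which design point this rank works on
group = world.Split(color, key=world.Get_rank())

def run_simulation(comm, design_param):
    """Placeholder for a large parallel accelerator simulation on one sub-communicator."""
    local = (design_param - 0.3) ** 2 + 0.01 * comm.Get_rank()
    return comm.allreduce(local, op=MPI.SUM) / comm.Get_size()

candidates = np.linspace(0.0, 1.0, n_groups)   # toy 1D parameter scan
objective = run_simulation(group, candidates[color])

# each group's leader reports back; rank 0 of the world picks the best design point
results = world.gather((candidates[color], objective) if group.Get_rank() == 0 else None, root=0)
if world.Get_rank() == 0:
    best = min((r for r in results if r is not None), key=lambda t: t[1])
    print("best design parameter so far:", best[0])
```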

Page 16: Diversity of accelerator modeling problems demands a mix of capacity & capability, and a mix of system parameters
- Some problems are well suited to <=500 processors, but we typically need to run a large number of simulations
  - Design studies, parameter scans
- Some problems demand large simulations (>=1000 procs) and involve regular, near-neighbor communication
  - Electromagnetic PIC
- Some problems demand large simulations and involve global, irregular communication
  - Modeling geometrically complex electromagnetic structures

Page 17: THE END