
Slide 1: High Energy Physics Greenbook Presentation
Robert D. Ryne, Lawrence Berkeley National Laboratory
NERSC User Group Meeting, June 25, 2005

Slide 2: Outline
- Lattice QCD
- Accelerator Physics
- Astrophysics (see D. Olson's presentation)

Slide 3: Lattice QCD

Slide 4: Goals
- Determine a number of basic parameters of the Standard Model
- Make precise tests of the Standard Model
- Obtain a quantitative understanding of the physical phenomena controlled by the strong interactions

Slide 5: Impact on determination of the CKM matrix
- Improvements in lattice errors obtained with computers sustaining 0.6, 6, and 60 Tflops for one year

Slide 6: Computing Needs: Approach
- Two-pronged approach:
  - Use of national supercomputer centers such as NERSC
  - Build dedicated computers using special-purpose hardware for QCD (QCDOC, optimized clusters)
- Special-purpose hardware is used to perform the majority of the lattice calculations
- Supercomputer centers are used for a combination of lattice calculations and data analysis

Slide 7: Computational Issues
- Lattice calculations utilize a 4D grid
- Need highest possible single-processor performance
- Communication is nearest-neighbor
- Don't need large memory
- Do need high-speed networks
  - International Lattice Data Grid formed to share computationally expensive data
  - Need to move ~1 petabyte in 24 hrs (see the sketch below)
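
To put rough numbers on the points above, the short Python sketch below works out the sustained bandwidth implied by moving ~1 petabyte in 24 hours, and the halo (surface) data one processor's sub-lattice would exchange with its nearest neighbors on a 4D grid. The sub-lattice size and bytes per site are illustrative assumptions, not figures from the talk.

```python
# Rough, illustrative numbers only; the lattice sizes below are assumptions,
# not values quoted in the presentation.

PETABYTE = 1e15  # bytes

# Sustained bandwidth needed to move ~1 PB in 24 hours.
seconds_per_day = 24 * 3600
bandwidth_gb_s = PETABYTE / seconds_per_day / 1e9
print(f"~1 PB / 24 hr requires ~{bandwidth_gb_s:.1f} GB/s sustained")

# Halo-exchange volume for one processor's sub-lattice on a 4D grid.
# Assume a hypothetical 16^4 local sub-lattice with 24 complex doubles
# (384 bytes) of field data per site.
local = (16, 16, 16, 16)
bytes_per_site = 24 * 16

sites = 1
for n in local:
    sites *= n

# Each of the 4 dimensions has 2 faces; a face omits one dimension.
halo_sites = 0
for d in range(4):
    face = sites // local[d]
    halo_sites += 2 * face

print(f"local sites: {sites}, halo sites exchanged per step: {halo_sites}")
print(f"halo traffic per exchange: {halo_sites * bytes_per_site / 1e6:.2f} MB")
```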

Slide 8: Lattice QCD Computational Roadmap
- The lattice community presently sustains 0.5-1 Tflop/sec
  - Has allowed determination of a limited number of key quantities to ~few percent accuracy
  - Has allowed development & testing of new formulations that will significantly improve the accuracy of future calculations
- In the next few years, need to sustain 50-100 Tflop/sec
  - Calculate weak decay constants & form factors
  - Determine the phase diagram of high-temperature QCD; calculate the EOS of the quark-gluon plasma
  - Obtain a quantitative understanding of the internal structure of strongly interacting particles
- Need to sustain ~1 petaflop/sec by the end of the decade

Slide 9: Accelerator Physics

Slide 10: Goals
- Large-scale modeling is essential for:
  - Improving/upgrading existing accelerators
  - Designing next-generation accelerators
  - Exploring/discovering new methods of acceleration (laser/plasma-based concepts)

Slide 11: Accelerator modeling is very diverse
- Many models:
  - Maxwell
  - Vlasov/Poisson
  - Vlasov/Maxwell
  - Fokker-Planck
  - Liénard-Wiechert
- Single & multi-species
- Particle-based codes
- Mesh-based codes: regular, irregular, AMR, ...
- Combined particle/mesh codes (a minimal sketch follows below)
- Runs of various sizes (up to ~1000 PEs and beyond)
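
To make the "combined particle/mesh" category concrete, here is a minimal, illustrative 1D electrostatic particle-in-cell loop in Python: particles deposit charge on a mesh, a Poisson solve yields the electric field, and the field is interpolated back to push the particles. It is a toy sketch of the technique, not code from any production accelerator code; the grid size, particle count, and time step are arbitrary assumptions.

```python
import numpy as np

# Toy 1D electrostatic PIC sketch on a periodic domain; all parameters are
# illustrative assumptions, not values from the presentation.
ng, npart, L, dt, nsteps = 64, 10_000, 2 * np.pi, 0.1, 100
dx = L / ng

rng = np.random.default_rng(0)
x = rng.uniform(0, L, npart)          # particle positions
v = rng.normal(0.0, 1.0, npart)       # particle velocities
q_over_m = -1.0                       # electrons, normalized units
weight = L / npart                    # charge per macroparticle

k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)   # wavenumbers for spectral Poisson solve
k[0] = 1.0                                  # avoid divide-by-zero; zero mode zeroed below

for step in range(nsteps):
    # 1) Deposit charge to the mesh (nearest-grid-point weighting).
    cells = (x / dx).astype(int) % ng
    rho = np.bincount(cells, minlength=ng) * weight / dx
    rho -= rho.mean()                       # uniform neutralizing background

    # 2) Field solve: d^2(phi)/dx^2 = -rho, then E = -d(phi)/dx, done spectrally.
    rho_k = np.fft.fft(rho)
    phi_k = rho_k / k**2
    phi_k[0] = 0.0
    E = np.fft.ifft(-1j * k * phi_k).real

    # 3) Gather the field to the particles and push them.
    v += q_over_m * E[cells] * dt
    x = (x + v * dt) % L

print("mean square velocity:", (v**2).mean())
```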

Slide 12: Advanced Computing: an imperative to help assure the success and best performance of a ~$20B investment
- The SciDAC budget is < 0.02% of this amount
- A small investment in computing can have huge financial consequences

Slide 13: Accelerator Modeling Roadmap
- Current resources: ~3M hrs/yr @ NERSC
- In the next few years, will need ~20M hrs/yr
  - Design of proposed machines: Linear Collider, RIA, hadron machines (proton drivers, muon/neutrino systems, VLHC)
  - Simulation of existing & near-term machines: LHC, RHIC, PEP-II, SNS
  - Design of advanced concepts: 1 GeV stage, plasma afterburner
  - Design of 4th-generation light sources
- By the end of the decade, will need ~60M hrs/yr
  - Full-scale electron-cloud simulations
  - Multi-slice, multi-IP, strong-strong beam-beam
  - Interaction of space-charge effects, wakefields, and machine nonlinearities in boosters and accumulator rings
  - First-principles Langevin modeling of electron cooling systems
  - CSR effects with realistic boundary conditions
- Goal is end-to-end modeling of complete systems

Slide 14: Algorithmic & Software Needs
- Continued close collaboration with ASCR-supported researchers is essential
  - Linear solvers, eigensolvers, PDE solvers, meshing technologies, visualization
  - Performance monitoring and enhancement, version control & build tools, multi-language support
- Multi-scale methods are becoming increasingly important
- We need robust, easy-to-use parallel programming environments & parallel scientific software libraries (see the sketch below)
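
As one small illustration of the library reuse this slide calls for, the sketch below hands a sparse operator to an off-the-shelf eigensolver (SciPy here, standing in for the parallel ASCR-supported libraries mentioned above) to find the lowest eigenmodes of a 2D Laplacian, a toy stand-in for a cavity-eigenmode style problem. The grid size and model problem are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy stand-in for an eigenmode problem: lowest eigenpairs of a 2D Laplacian
# on an n x n grid with Dirichlet boundaries. The point is the pattern
# (build a sparse operator, call a library eigensolver), not the physics;
# n is an arbitrary assumption.
n = 100
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
T = sp.diags([off, main, off], [-1, 0, 1], format="csr")   # 1D Laplacian
I = sp.identity(n, format="csr")
A = sp.kron(I, T) + sp.kron(T, I)                          # 2D Laplacian (Kronecker sum)

# Ask the library for the 6 eigenvalues nearest zero (shift-invert mode).
vals, vecs = eigsh(A, k=6, sigma=0)
print("lowest eigenvalues:", np.sort(vals))
```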

Slide 15: Parallel optimization promises to be well suited to design problems on tens of thousands of processors
- Machine design always involves multiple runs
- Up to now the community has learned how to run large problems on ~a thousand processors
- In the future, it will be desirable to run multiple ~1000-processor runs in a single optimization step (a skeleton of this pattern is sketched below)
- This will allow scaling up to tens of thousands of processors for machine design problems
- NOTE: not all problems are design problems; fast interprocessor communication is needed for the very largest "single point" runs
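
A hypothetical skeleton of that pattern: each optimization step launches an ensemble of independent simulation runs (cheap stand-ins here, evaluated in parallel worker processes) and the optimizer uses the returned figures of merit to choose the next set of design points. The objective, parameter names, and population size are invented for illustration; in practice each evaluation would itself be a ~1000-processor simulation submitted to the batch system.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_simulation(design):
    """Stand-in for one large accelerator simulation.

    In practice this would launch a ~1000-processor job and return a figure
    of merit (emittance growth, beam loss, ...). Here it is a cheap analytic
    toy so the optimization loop itself can run anywhere.
    """
    quad_strength, cavity_phase = design
    return (quad_strength - 1.3) ** 2 + 0.5 * (cavity_phase - 0.2) ** 2

def optimize(n_steps=20, population=16, sigma=0.1, seed=0):
    """Simple random-search optimizer: each step evaluates a whole population
    of candidate designs concurrently, mimicking 'many large runs per
    optimization step'."""
    rng = np.random.default_rng(seed)
    best = np.array([1.0, 0.0])
    best_val = run_simulation(best)
    with ProcessPoolExecutor() as pool:
        for _ in range(n_steps):
            candidates = best + sigma * rng.standard_normal((population, 2))
            values = list(pool.map(run_simulation, candidates))
            i = int(np.argmin(values))
            if values[i] < best_val:
                best, best_val = candidates[i], values[i]
    return best, best_val

if __name__ == "__main__":
    design, value = optimize()
    print("best design:", design, "figure of merit:", value)
```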

Slide 16: Diversity of accelerator modeling problems demands a mix of capacity & capability, and a mix of system parameters
- Some problems are well suited to <=500 processors, but we typically need to run a large number of simulations
  - Design studies, parameter scans
- Some problems demand large simulations (>=1000 processors) and involve regular, near-neighbor communication
  - Electromagnetic PIC
- Some problems demand large simulations and involve global, irregular communication
  - Modeling geometrically complex electromagnetic structures

Slide 17: THE END

