Very Large Scale Computing In Accelerator Physics
Robert D. Ryne, Los Alamos National Laboratory
…with contributions from members of the
- Grand Challenge in Computational Accelerator Physics
- Advanced Computing for 21st Century Accelerator Science and Technology project
Outline
- Importance of Accelerators
- Future of Accelerators
- Importance of Accelerator Simulation
- Past Accomplishments: Grand Challenge in Computational Accelerator Physics
  - electromagnetics
  - beam dynamics
  - applications beyond accelerator physics
- Future Plans: Advanced Computing for 21st Century Accelerator S&T
Accelerators have enabled some of the greatest discoveries of the 20th century
"Extraordinary tools for extraordinary science":
- high energy physics
- nuclear physics
- materials science
- biological science
Accelerator Technology Benefits Science, Technology, and Society
- electron microscopy
- beam lithography
- ion implantation
- accelerator mass spectrometry
- medical isotope production
- medical irradiation therapy
Accelerators have been proposed to address issues of international importance
- Accelerator transmutation of waste
- Accelerator production of tritium
- Accelerators for proton radiography
- Accelerator-driven energy production
Accelerators are key tools for solving problems related to energy, national security, and quality of the environment.
Future of Accelerators: Two Questions
- What will be the next major machine beyond the LHC?
  - linear collider
  - neutrino factory / muon collider
  - rare isotope accelerator
  - 4th-generation light source
- Can we develop a new path to the high-energy frontier?
  - Plasma/laser systems may hold the key
Example: Comparison of Stanford Linear Collider and Next Linear Collider
Possible Layout of a Neutrino Factory
Importance of Accelerator Simulation
- The next generation of accelerators will involve:
  - higher intensity, higher energy
  - greater complexity
  - increased collective effects
- Large-scale simulations are essential for:
  - design decisions and feasibility studies: evaluate/reduce risk, reduce cost, optimize performance
  - accelerator science and technology advancement
Cost Impacts
- Without large-scale simulation: cost escalation
  - SSC: a 1 cm increase in aperture, made for lack of confidence in the design, resulted in a $1B cost increase
- With large-scale simulation: cost savings
  - NLC: large-scale electromagnetic simulations have led to a $100M cost reduction
DOE Grand Challenge in Computational Accelerator Physics (1997-2000)
Goal: "to develop a new generation of accelerator modeling tools on High Performance Computing (HPC) platforms and to apply them to present and future accelerator applications of national importance."
- Beam Dynamics: LANL (S. Habib, J. Qiang, R. Ryne); UCLA (V. Decyk)
- Electromagnetics: SLAC (N. Folwell, Z. Li, V. Ivanov, K. Ko, J. Malone, B. McCandless, C.-K. Ng, R. Richardson, G. Schussman, M. Wolf); Stanford/SCCM (T. Afzal, B. Chan, G. Golub, W. Mi, Y. Sun, R. Yu)
- Computer Science & Computing Resources: NERSC & ACL
New parallel applications codes have been applied to several major accelerator projects
- Main deliverables: 4 parallel applications codes
- Electromagnetics:
  - Omega3P: 3D parallel eigenmode code
  - Tau3P: 3D parallel time-domain electromagnetics code
- Beam Dynamics:
  - IMPACT: 3D parallel Poisson/Vlasov code
  - LANGEVIN3D: 3D parallel Fokker-Planck code
- Applied to SNS, NLC, PEP-II, APT, ALS, CERN/SPL
This new capability has enabled simulations 3-4 orders of magnitude larger than previously possible.
Parallel Electromagnetic Field Solvers: Features
- C++ implementation with MPI
- Reuse of existing parallel libraries (ParMetis, AZTEC)
- Unstructured grids for conformal meshes
- New solvers for fast convergence and scalability
- Adaptive refinement to improve accuracy and performance
- Omega3P: 3D finite elements with linear and quadratic basis functions
- Tau3P: unstructured Yee grid
Why is Large-Scale Modeling Needed? Example: NLC Rounded Damped Detuned Structure (RDDS) Design
- highly three-dimensional structure
- detuning + damping manifold for wakefield suppression
- accelerating-frequency accuracy of 0.01% required to maintain efficiency
- simulation mesh size close to the fabrication tolerance (on the order of microns)
- available 3D codes on desktop computers cannot deliver the required accuracy and resolution
NLC - RDDS Cell Design (Omega3P)
Accelerating mode: frequency accuracy to 1 part in 10,000 is achieved.
NLC - RDDS 6-Cell Section (Omega3P)
[Figure: computed mode-frequency deviations for the six-cell section, ranging from -2.96 MHz to +13.39 MHz]
Robert Ryne18 NLC - RDDS Output End (Tau3P)
PEP-II, SNS, and APT Cavity Design (Omega3P)
Omega3P Mesh Refinement: Peak Wall Loss in the PEP-II Waveguide-Damped RF Cavity

  refined mesh size:     5 mm            2.5 mm          1.5 mm
  # elements:            23390           43555           106699
  degrees of freedom:    142914          262162          642759
  peak power density:    1.2811 MW/m^2   1.3909 MW/m^2   1.3959 MW/m^2
Parallel Beam Dynamics Codes: Features
- split-operator-based 3D parallel particle-in-cell (PIC)
- canonical variables
- variety of implementations (F90/MPI, C++, POOMA, HPF)
- particle manager, field manager, dynamic load balancing
- 6 types of boundary conditions for the field solvers: open/circular/rectangular transverse; open/periodic longitudinal (a toy field-solve sketch follows below)
- reference trajectory and transfer maps computed "on the fly"
- philosophy: do not take tiny steps to push particles; take tiny steps to compute maps, then push particles with the maps
- LANGEVIN3D: self-consistent damping/diffusion coefficients
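A minimal sketch of the kind of grid-based space-charge field solve such a code performs, assuming the simplest of the supported boundary conditions (fully periodic); the function name and units are illustrative, and this is not the actual IMPACT solver, which also handles open and conducting-pipe boundaries.

```python
# Toy FFT-based solve of the Poisson equation  div(grad(phi)) = -rho/eps0
# on a 3D grid with periodic boundary conditions (illustrative only).
import numpy as np

def solve_poisson_periodic(rho, dx, dy, dz, eps0=8.8541878128e-12):
    """Return the potential phi for the charge density rho on a periodic box."""
    nx, ny, nz = rho.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    kz = 2.0 * np.pi * np.fft.fftfreq(nz, d=dz)
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = 1.0                            # avoid dividing by zero for the k = 0 mode
    phi_hat = np.fft.fftn(rho) / (eps0 * k2)     # -k^2 phi_hat = -rho_hat/eps0
    phi_hat[0, 0, 0] = 0.0                       # mean of phi is arbitrary; set it to zero
    return np.real(np.fft.ifftn(phi_hat))
```

In a parallel production code the grid is decomposed across processors and the transforms require global communication, which is one reason a dedicated field manager and dynamic load balancing matter.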
Why is Large-Scale Modeling Needed? Example: Modeling Beam Halo in High-Intensity Linacs
- Future high-intensity machines will have to operate with ultra-low losses
- A major source of loss: low-density, large-amplitude halo
- Large-scale simulations (~100M particles) are needed to predict halo
The maximum beam size does not converge in small-scale PC simulations (up to 1M particles).
Mismatch-Induced Beam Halo
Matched beam: x-y cross section. Mismatched beam: x-y cross section.
Vlasov Code or PIC Code?
- Direct Vlasov:
  - bad: very large memory
  - bad: subgrid-scale effects
  - good: no sampling noise
  - good: no collisionality
- Particle-based (PIC):
  - good: low memory
  - good: subgrid resolution OK
  - bad: statistical fluctuations
  - bad: numerical collisionality
(A toy PIC cycle is sketched below.)
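To make the particle-based column concrete, here is a toy one-dimensional, periodic PIC cycle showing the deposit, field-solve, gather, and push stages. It is an illustration under simplified assumptions (unit macroparticle charge, electrostatic, periodic box), not code from IMPACT or any code discussed in the talk.

```python
# One toy PIC cycle in 1D: deposit -> field solve -> gather -> push.
import numpy as np

def pic_step(x, v, q_over_m, grid_n, length, dt, eps0=1.0):
    dx = length / grid_n
    cell = np.floor(x / dx).astype(int) % grid_n
    frac = (x / dx) - np.floor(x / dx)
    # 1) Deposit particle charge onto the grid with linear (cloud-in-cell) weights.
    rho = np.zeros(grid_n)
    np.add.at(rho, cell, 1.0 - frac)
    np.add.at(rho, (cell + 1) % grid_n, frac)
    rho /= dx
    # 2) Solve Gauss's law dE/dx = rho/eps0 in Fourier space (neutralizing background removed).
    k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_hat = np.fft.fft(rho - rho.mean())
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * k[1:] * eps0)
    E = np.real(np.fft.ifft(E_hat))
    # 3) Gather the field back to the particles, then kick and drift.
    E_p = (1.0 - frac) * E[cell] + frac * E[(cell + 1) % grid_n]
    v = v + q_over_m * E_p * dt
    x = (x + v * dt) % length
    return x, v
```

The finite number of macroparticles is exactly where the statistical fluctuations and numerical collisionality in the right-hand column come from.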
How to Turn Any Magnetic Optics Code into a Tracking Code with Space Charge: Split-Operator Methods
- Magnetic optics: H = H_ext, with map M = M_ext
- Multi-particle simulation: H = H_sc, with map M = M_sc
- Combined: H = H_ext + H_sc, with map
    M(t) = M_ext(t/2) M_sc(t) M_ext(t/2) + O(t^3)
(Arbitrary order is possible via Yoshida's construction; a minimal sketch of one step follows below.)
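A minimal sketch of one second-order split-operator step as written above: a half step of the external map, a full space-charge kick, then another half step. The callables external_map and space_charge_kick are hypothetical placeholders standing in for whatever optics and field-solve routines a particular code provides.

```python
# One second-order split-operator step: M(dt) ~ M_ext(dt/2) M_sc(dt) M_ext(dt/2).
import numpy as np

def split_operator_step(coords, dt, external_map, space_charge_kick):
    """coords: (N, 6) array of canonical phase-space coordinates."""
    coords = external_map(coords, 0.5 * dt)   # M_ext(dt/2): magnetic-optics transfer map
    coords = space_charge_kick(coords, dt)    # M_sc(dt): momentum kick from the self-field
    coords = external_map(coords, 0.5 * dt)   # M_ext(dt/2)
    return coords                             # local error per step is O(dt^3)
```

Higher-order schemes follow by composing such steps with Yoshida's coefficients, which is the "arbitrary order" remark on the slide.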
Development of IMPACT has Enabled the Largest, Most Detailed Linac Simulations Ever Performed
- Model of the SNS linac used 400 accelerating structures
- Simulations run with up to 800M particles on a 512^3 grid
- Approaching the real-world number of particles (900M for SNS)
- 100M-particle runs are now routine (5-10 hrs on 256 PEs)
- An analogous 1M-particle simulation using a legacy 2D code on a PC requires a weekend: a 3-order-of-magnitude increase in simulation capability
100x larger simulations performed in 1/10 the time.
Comparison: Old vs. New Capability
- 1980s: 10K-particle, 2D serial simulations typical
- Early 1990s: 10K-100K-particle, 2D serial simulations typical
- 2000: 100M-particle runs routine (5-10 hrs on 256 PEs); more realistic treatment of beamline elements
[Figures: SNS linac, 500M particles; LEDA halo experiment, 100M particles]
Intense Beams in Circular Accelerators
- Previous work emphasized high-intensity linear accelerators
- New work treats intense beams in bending magnets
- Issue: the vast majority of accelerator codes use arc length ("z" or "s") as the independent variable
- Simulation of intense beams requires solving the Poisson equation, ∇²φ = -ρ/ε₀, at fixed time
The split-operator approach, applied to linear and circular systems, will soon make it possible to "flip a switch" to turn space charge on/off in the major accelerator codes.
[Figure: x-z plots, based on data from an s-code, at 8 different times]
Collaboration and Impact Beyond Accelerator Physics
- Modeling collisions in plasmas: new Fokker-Planck code
- Modeling astrophysical systems: starting with IMPACT, developing an astrophysical PIC code; also a testbed for scripting ideas
- Modeling stochastic dynamical systems: new leapfrog integrator for systems with multiplicative noise
- Simulations requiring solution of large eigensystems: new eigensolver developed by SLAC/NMG and Stanford SCCM
- Modeling quantum systems: spectral and De Raedt-style codes to solve the Schrödinger, density-matrix, and Wigner-function equations
First-Ever Self-Consistent Fokker-Planck
- Self-consistent Langevin-Fokker-Planck requires the analog of thousands of space-charge calculations per time step
  - "…clearly such calculations are impossible…" NOT!
  - DEMONSTRATED, thanks to modern parallel machines and intelligent algorithms
[Figures: diffusion coefficients; friction coefficient / velocity]
(A toy Langevin update is sketched below.)
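For orientation, here is a toy Langevin particle update with externally supplied friction and diffusion coefficients (a plain Euler-Maruyama step). It only shows where the coefficients enter; the self-consistent computation of those coefficients from the evolving beam, which is the hard part demonstrated here, is not shown.

```python
# Toy Langevin update: dv = -F(v) v dt + sqrt(2 D(v) dt) dW  (Euler-Maruyama, illustrative).
import numpy as np

def langevin_step(v, friction, diffusion, dt, rng=None):
    """Advance velocities v of shape (N, 3). friction(v) and diffusion(v) return (N,)
    arrays; they are hypothetical callables supplied by the caller."""
    rng = np.random.default_rng() if rng is None else rng
    F = friction(v)[:, None]                 # drag coefficient per particle
    D = diffusion(v)[:, None]                # diffusion coefficient per particle
    noise = rng.standard_normal(v.shape)
    return v - F * v * dt + np.sqrt(2.0 * D * dt) * noise
```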
Schrödinger Solver: Two Approaches
- Spectral: FFTs; global communication
- Field-theoretic / discrete: nearest-neighbor communication
(A minimal split-step spectral sketch follows below.)
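A minimal sketch of the spectral approach in one dimension, using the standard split-step Fourier method; the units, grid, and potential are illustrative assumptions, and this is not the actual solver described in the talk.

```python
# Split-step Fourier solve of  i*hbar dpsi/dt = [-(hbar^2/2m) d^2/dx^2 + V(x)] psi  (1D, periodic).
import numpy as np

def split_step_schrodinger(psi, V, dx, dt, hbar=1.0, m=1.0):
    """Advance the wavefunction psi by one time step."""
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    psi = psi * np.exp(-0.5j * V * dt / hbar)          # half step in the potential
    # Full kinetic step, diagonal in Fourier space; the FFTs are where the global
    # communication of a parallel spectral implementation comes from.
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / m) * np.fft.fft(psi))
    psi = psi * np.exp(-0.5j * V * dt / hbar)          # second half step in the potential
    return psi
```

A field-theoretic or finite-difference discretization, by contrast, couples only neighboring grid points, hence the nearest-neighbor communication pattern noted above.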
Conclusion: "Advanced Computing for 21st Century Accelerator Science & Technology"
- Builds on the foundation laid by the Accelerator Grand Challenge
- Larger collaboration: presently LANL, SLAC, FNAL, LBNL, BNL, JLab, Stanford, UCLA
- Project goal: develop a comprehensive, coherent accelerator simulation environment
- Focus areas: Beam Systems Simulation, Electromagnetic Systems Simulation, Beam/Electromagnetic Systems Integration
- With a view toward near-term impact on: NLC, neutrino factory (driver, muon cooling), laser/plasma accelerators
Acknowledgement
Work supported by the DOE Office of Science:
- Office of Advanced Scientific Computing Research, Division of Mathematical, Information, and Computational Sciences
- Office of High Energy and Nuclear Physics
- Division of High Energy Physics, Los Alamos Accelerator Code Group