Very Large Scale Computing In Accelerator Physics Robert D. Ryne Los Alamos National Laboratory.

Robert Ryne2 …with contributions from members of
- Grand Challenge in Computational Accelerator Physics
- Advanced Computing for 21st Century Accelerator Science and Technology project

Robert Ryne3 Outline
- Importance of Accelerators
- Future of Accelerators
- Importance of Accelerator Simulation
- Past Accomplishments: Grand Challenge in Computational Accelerator Physics
  - electromagnetics
  - beam dynamics
  - applications beyond accelerator physics
- Future Plans: Advanced Computing for 21st Century Accelerator S&T

Robert Ryne4 Accelerators have enabled some of the greatest discoveries of the 20th century
- "Extraordinary tools for extraordinary science"
  - high energy physics
  - nuclear physics
  - materials science
  - biological science

Robert Ryne5 Accelerator Technology Benefits Science, Technology, and Society
- electron microscopy
- beam lithography
- ion implantation
- accelerator mass spectrometry
- medical isotope production
- medical irradiation therapy

Robert Ryne6 Accelerators have been proposed to address issues of international importance
- Accelerator transmutation of waste
- Accelerator production of tritium
- Accelerators for proton radiography
- Accelerator-driven energy production
Accelerators are key tools for solving problems related to energy, national security, and quality of the environment.

Robert Ryne7 Future of Accelerators: Two Questions
- What will be the next major machine beyond LHC?
  - linear collider
  - neutrino factory / muon collider
  - rare isotope accelerator
  - 4th generation light source
- Can we develop a new path to the high-energy frontier?
  - Plasma/laser systems may hold the key

Example: Comparison of Stanford Linear Collider and Next Linear Collider

Possible Layout of a Neutrino Factory

Robert Ryne10 Importance of Accelerator Simulation
- Next generation of accelerators will involve:
  - higher intensity, higher energy
  - greater complexity
  - increased collective effects
- Large-scale simulations essential for:
  - design decisions & feasibility studies: evaluate/reduce risk, reduce cost, optimize performance
  - accelerator science and technology advancement

Robert Ryne11 Cost Impacts
- Without large-scale simulation: cost escalation
  - SSC: a 1 cm increase in aperture due to lack of confidence in the design resulted in a $1B cost increase
- With large-scale simulation: cost savings
  - NLC: large-scale electromagnetic simulations have led to a $100M cost reduction

Robert Ryne12 DOE Grand Challenge in Computational Accelerator Physics ( )
Goal: "to develop a new generation of accelerator modeling tools on High Performance Computing (HPC) platforms and to apply them to present and future accelerator applications of national importance."
- Beam Dynamics: LANL (S. Habib, J. Qiang, R. Ryne), UCLA (V. Decyk)
- Electromagnetics: SLAC (N. Folwell, Z. Li, V. Ivanov, K. Ko, J. Malone, B. McCandless, C.-K. Ng, R. Richardson, G. Schussman, M. Wolf), Stanford/SCCM (T. Afzal, B. Chan, G. Golub, W. Mi, Y. Sun, R. Yu)
- Computer Science & Computing Resources: NERSC & ACL

Robert Ryne13 New parallel applications codes have been applied to several major accelerator projects
- Main deliverables: 4 parallel applications codes
- Electromagnetics:
  - 3D parallel eigenmode code Omega3P
  - 3D parallel time-domain EM code Tau3P
- Beam Dynamics:
  - 3D parallel Poisson/Vlasov code, IMPACT
  - 3D parallel Fokker/Planck code, LANGEVIN3D
- Applied to SNS, NLC, PEP-II, APT, ALS, CERN/SPL
New capability has enabled simulations 3-4 orders of magnitude greater than previously possible.

Robert Ryne14 Parallel Electromagnetic Field Solvers: Features
- C++ implementation w/ MPI
- Reuse of existing parallel libraries (ParMetis, AZTEC)
- Unstructured grids for conformal meshes
- New solvers for fast convergence and scalability
- Adaptive refinement to improve accuracy & performance
- Omega3P: 3D finite element w/ linear & quadratic basis functions
- Tau3P: unstructured Yee grid
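To make the eigenmode computation concrete, here is a minimal illustrative sketch (Python/NumPy-SciPy, not the C++/MPI production code) of the kind of problem a cavity eigenmode solver like Omega3P addresses: assemble a discrete operator and extract the lowest resonant frequencies from a sparse generalized eigenproblem K e = λ M e. The 1D finite-difference cavity model, cavity length, and grid size below are hypothetical stand-ins for the 3D finite-element formulation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 1D model of a closed cavity: solve -d^2 E/dx^2 = (omega/c)^2 E with E = 0
# at the walls, posed as a sparse generalized eigenproblem K e = lambda M e.
c = 2.99792458e8        # speed of light [m/s]
L = 0.1                 # cavity length [m] (illustrative)
n = 2000                # interior grid points
h = L / (n + 1)

# "Stiffness" matrix (finite-difference Laplacian) and identity "mass" matrix
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
M = sp.identity(n, format="csr")

# Lowest few eigenpairs via shift-invert about zero
vals, vecs = spla.eigsh(K.tocsc(), k=5, M=M, sigma=0.0, which="LM")

freqs = c * np.sqrt(vals) / (2.0 * np.pi)   # mode frequencies [Hz]
print(freqs / 1e9)                          # expect ~ c*m/(2L) = 1.5, 3.0, ... GHz
```

In the production setting the matrices come from 3D finite elements on unstructured, conformal meshes and are distributed across processors, but the algebraic structure of the eigenproblem is the same.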

Robert Ryne15 Why is Large-Scale Modeling Needed? Example: NLC Rounded Damped Detuned Structure (RDDS) Design
- highly three-dimensional structure
- detuning + damping manifold for wakefield suppression
- requires 0.01% accuracy in accelerating frequency to maintain efficiency
- simulation mesh size close to fabrication tolerance (order of microns)
- available 3D codes on desktop computers cannot deliver the required accuracy and resolution

Robert Ryne16 NLC - RDDS Cell Design (Omega3P)
Accelerating mode: frequency accuracy to 1 part in 10,000 is achieved.

Robert Ryne17 NLC - RDDS 6 Cell Section (Omega3P)
(Table of computed cell frequencies in MHz; numerical values not preserved in the transcript.)

Robert Ryne18 NLC - RDDS Output End (Tau3P)

Robert Ryne19 PEP II, SNS, and APT Cavity Design (Omega3P)

Robert Ryne20 Peak Wall Loss in PEP-II Waveguide-Damped RF Cavity: Omega3P Mesh Refinement
Refined mesh size: 5 mm, 2.5 mm, 1.5 mm
(# elements, degrees of freedom, and peak power density in MW/m² for each mesh; numerical values not preserved in the transcript.)

Robert Ryne21 Parallel Beam Dynamics Codes: Features
- split-operator-based 3D parallel particle-in-cell
- canonical variables
- variety of implementations (F90/MPI, C++, POOMA, HPF)
- particle manager, field manager, dynamic load balancing
- 6 types of boundary conditions for field solvers: open/circular/rectangular transverse; open/periodic longitudinal
- reference trajectory + transfer maps computed "on the fly"
- philosophy:
  - do not take tiny steps to push particles
  - do take tiny steps to compute maps; then push particles w/ maps
- LANGEVIN3D: self-consistent damping/diffusion coefficients

Robert Ryne22 Why is Large-Scale Modeling Needed? Example: Modeling Beam Halo in High-Intensity Linacs
- Future high-intensity machines will have to operate with ultra-low losses
- A major source of loss: low-density, large-amplitude halo
- Large-scale simulations (~100M particles) needed to predict halo
The maximum beam size does not converge in small-scale PC simulations (up to 1M particles).

Robert Ryne23 Mismatch-Induced Beam Halo
(Figure panels: matched beam, x-y cross-section; mismatched beam, x-y cross-section.)

Robert Ryne24 Vlasov Code or PIC Code?
- Direct Vlasov:
  - bad: very large memory
  - bad: subgrid-scale effects
  - good: no sampling noise
  - good: no collisionality
- Particle-based:
  - good: low memory
  - good: subgrid resolution OK
  - bad: statistical fluctuations
  - bad: numerical collisionality

Robert Ryne25 Split-Operator Methods: How to turn any magnetic optics code into a tracking code with space charge
- Magnetic optics: H = H_ext, with map M = M_ext
- Multi-particle simulation: H = H_sc, with map M = M_sc
- Combined: H = H_ext + H_sc, with map M(t) = M_ext(t/2) M_sc(t) M_ext(t/2) + O(t^3)
(Arbitrary order possible via Yoshida.)
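A minimal sketch of the split-operator recipe on this slide, written in Python for clarity rather than in the F90/C++ production codes: each step sandwiches a full space-charge kick between two half steps of the external (magnetic-optics) map. The specific focusing map and the toy kick below are hypothetical placeholders.

```python
import numpy as np

def external_half_step(x, p, tau, k=1.0):
    # Placeholder external map: exact solution of a linear focusing channel for
    # half a step; a real code would apply the transfer map of the beamline element.
    c = np.cos(0.5 * tau * np.sqrt(k))
    s = np.sin(0.5 * tau * np.sqrt(k))
    return c * x + (s / np.sqrt(k)) * p, -np.sqrt(k) * s * x + c * p

def space_charge_kick(x, p, tau, strength=1e-3):
    # Placeholder space-charge kick; a real code solves the Poisson equation on a
    # grid and interpolates the self-field back to the particles.
    return x, p + tau * strength * np.sign(x)

def split_operator_step(x, p, tau):
    # M(tau) = M_ext(tau/2) M_sc(tau) M_ext(tau/2) + O(tau^3)
    x, p = external_half_step(x, p, tau)
    x, p = space_charge_kick(x, p, tau)
    x, p = external_half_step(x, p, tau)
    return x, p

rng = np.random.default_rng(0)
x = rng.normal(scale=1e-3, size=100_000)   # toy 1D phase-space coordinates
p = rng.normal(scale=1e-3, size=100_000)
for _ in range(1000):
    x, p = split_operator_step(x, p, tau=1e-2)
print(x.std(), p.std())
```

The composition is second-order accurate in the step size, and higher order is possible via Yoshida's construction, as noted on the slide.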

Robert Ryne26 Development of IMPACT has Enabled the Largest, Most Detailed Linac Simulations Ever Performed
- Model of SNS linac used 400 accelerating structures
- Simulations run w/ up to 800M particles on a grid
- Approaching the real-world # of particles (900M for SNS)
- 100M-particle runs now routine (5-10 hrs on 256 PEs)
- An analogous 1M-particle simulation using a legacy 2D code on a PC requires a weekend
  - 3 order-of-magnitude increase in simulation capability
100x larger simulations performed in 1/10 the time.

Robert Ryne27 Comparison: Old vs. New Capability
- 1980s: 10K-particle, 2D serial simulations typical
- Early 1990s: 10K-100K-particle, 2D serial simulations typical
- 2000: 100M-particle runs routine (5-10 hrs on 256 PEs); more realistic treatment of beamline elements
(Figure panels: SNS linac, 500M particles; LEDA halo experiment, 100M particles.)

Robert Ryne28 Intense Beams in Circular Accelerators
- Previous work emphasized high-intensity linear accelerators
- New work treats intense beams in bending magnets
- Issue: the vast majority of accelerator codes use arc length ("z" or "s") as the independent variable
- Simulation of intense beams requires solving the Poisson equation ∇²φ = −ρ/ε₀ at fixed time
The split-operator approach, applied in linear and circular systems, will soon make it possible to "flip a switch" to turn space charge on/off in the major accelerator codes.
(Figure: x-z plot reconstructed from x-φ data from an s-code, plotted at 8 different times.)
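The fixed-time space-charge solve mentioned above can be illustrated with a small spectral Poisson solver. The sketch below (Python/NumPy, periodic boundaries, nearest-grid-point deposition, made-up grid and bunch parameters) is only a schematic of the field solve inside a PIC step, not the algorithm used in the production codes.

```python
import numpy as np

# Deposit charge on a grid and solve grad^2 phi = -rho/eps0 with periodic
# boundaries using FFTs.
eps0 = 8.8541878128e-12
n = 64
L = 0.01                              # box size [m] (illustrative)
h = L / n

rng = np.random.default_rng(1)
pos = rng.normal(0.0, L / 10, size=(100_000, 3)) % L   # particle positions

# Nearest-grid-point charge deposition (one elementary charge per macroparticle)
rho = np.zeros((n, n, n))
idx = (pos / h).astype(int) % n
np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
rho *= 1.602e-19 / h**3               # charge density [C/m^3]

# Spectral solve: phi_k = rho_k / (eps0 * k^2)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                     # avoid division by zero at k = 0
phi_k = np.fft.fftn(rho) / (eps0 * k2)
phi_k[0, 0, 0] = 0.0                  # zero-mean potential (neutralizing background)
phi = np.real(np.fft.ifftn(phi_k))
print(phi.max(), phi.min())
```

The electric field and the resulting momentum kicks would follow from finite-difference or spectral derivatives of φ; the open boundary conditions listed on slide 21 require a modified Green-function treatment rather than this purely periodic solve.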

Robert Ryne29 Collaboration/Impact Beyond Accelerator Physics
- Modeling collisions in plasmas
  - new Fokker/Planck code
- Modeling astrophysical systems
  - starting w/ IMPACT, developing an astrophysical PIC code
  - also a testbed for testing scripting ideas
- Modeling stochastic dynamical systems
  - new leap-frog integrator for systems w/ multiplicative noise
- Simulations requiring solution of large eigensystems
  - new eigensolver developed by SLAC/NMG & Stanford SCCM
- Modeling quantum systems
  - spectral and DeRaedt-style codes to solve the Schrodinger, density-matrix, and Wigner-function equations

Robert Ryne30 First-Ever Self-Consistent Fokker/Planck
- Self-consistent Langevin-Fokker/Planck requires the analog of thousands of space-charge calculations per time step
  - "…clearly such calculations are impossible…." NOT!
  - DEMONSTRATED, thanks to modern parallel machines and intelligent algorithms
(Figure panels: diffusion coefficients; friction coefficient / velocity.)
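Schematically, a Langevin treatment adds a friction drag and a stochastic diffusion kick to each particle's momentum every step. The toy update below uses constant, scalar coefficients purely for illustration; the point of the self-consistent calculation on this slide is that LANGEVIN3D recomputes velocity-dependent friction and diffusion coefficients from the evolving distribution itself, which is what costs the equivalent of thousands of space-charge solves per step.

```python
import numpy as np

def langevin_kick(p, dt, friction, diffusion, rng):
    # Euler-Maruyama update: dp = -gamma*p*dt + sqrt(2*D*dt)*N(0,1)
    drift = -friction * p * dt
    noise = np.sqrt(2.0 * diffusion * dt) * rng.normal(size=p.shape)
    return p + drift + noise

rng = np.random.default_rng(2)
p = rng.normal(size=(20_000, 3))          # toy momentum coordinates
for _ in range(2000):
    p = langevin_kick(p, dt=1e-2, friction=0.5, diffusion=0.5, rng=rng)
print(p.var(axis=0))                      # equilibrium variance ~ D/gamma = 1
```

With constant coefficients the momentum distribution relaxes to a Gaussian of variance D/γ, which is a convenient sanity check on the integrator.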

Robert Ryne31 Schrodinger Solver: Two Approaches
- Spectral: FFTs; global communication
- Field-theoretic / discrete: nearest-neighbor communication
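For the spectral approach, a standard split-step Fourier integrator alternates half steps in the potential with a full kinetic step applied in Fourier space; the FFTs are what make this approach global-communication-heavy in parallel. The 1D harmonic-oscillator example below (Python/NumPy, ħ = m = 1) is a minimal illustration, not the project's 3D solver.

```python
import numpy as np

# Split-step Fourier solver for i d(psi)/dt = [-(1/2) d^2/dx^2 + V(x)] psi
n, L, dt = 512, 40.0, 1e-3
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x**2                                   # harmonic potential

psi = np.exp(-0.5 * (x - 2.0) ** 2).astype(complex)   # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))

half_V = np.exp(-0.5j * dt * V)                  # half step in the potential
full_T = np.exp(-0.5j * dt * k**2)               # full step in kinetic energy (k-space)

for _ in range(5000):
    psi = half_V * psi
    psi = np.fft.ifft(full_T * np.fft.fft(psi))
    psi = half_V * psi

print(np.sum(np.abs(psi) ** 2) * (L / n))        # norm should remain ~1
```

A DeRaedt-style discrete scheme would instead update the wavefunction with local stencil operations, which is why that approach needs only nearest-neighbor communication.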

Robert Ryne32 Conclusion: "Advanced Computing for 21st Century Accelerator Sci. & Tech."
- Builds on the foundation laid by the Accelerator Grand Challenge
- Larger collaboration: presently LANL, SLAC, FNAL, LBNL, BNL, JLab, Stanford, UCLA
- Project goal: develop a comprehensive, coherent accelerator simulation environment
- Focus areas: Beam Systems Simulation, Electromagnetic Systems Simulation, Beam/Electromagnetic Systems Integration
- View toward near-term impact on: NLC, neutrino factory (driver, muon cooling), laser/plasma accelerators

Robert Ryne33 Acknowledgement
- Work supported by the DOE Office of Science
  - Office of Advanced Scientific Computing Research, Division of Mathematical, Information, and Computational Sciences
  - Office of High Energy and Nuclear Physics
  - Division of High Energy Physics, Los Alamos Accelerator Code Group