Update on GHOST Code Development


Update on GHOST Code Development
Balša Terzić, Department of Physics and Center for Accelerator Science (CAS), Old Dominion University
JLEIC Collaboration Meeting, Jefferson Lab, October 5, 2016

Interdisciplinary Collaboration
Jefferson Lab (CASA) collaborators: Vasiliy Morozov, He Zhang, Fanglei Lin, Yves Roblin, Todd Satogata
Old Dominion University (Center for Accelerator Science):
Professors: Balša Terzić, Alexander Godunov (Physics); Mohammad Zubair, Desh Ranjan (Computer Science)
Students: Kamesh Arumugam, Ravi Majeti (Computer Science); Chris Cotnoir, Mark Stefani (Physics)

Outline
Motivation and challenges: importance of beam synchronization; computational requirements and challenges.
GHOST code development, status update: review of what we reported last time; toward new results; implementation of beam collisions on GPUs; "gear change" on the horizon.
Checklist and timetable.

Motivation: Implications of “Gear Change”
Synchronization: highly desirable; smaller magnet movement, smaller RF adjustment.
Detection and polarimetry: highly desirable; cancellation of systematic effects associated with bunch charge and polarization variation (a great reduction of systematic errors, sometimes more important than statistics); simplified electron polarimetry (only the average polarization is needed, much easier than a bunch-by-bunch measurement).
Dynamics: open question; the possibility of an instability needs to be studied (Hirata & Keil 1990; Hao et al. 2014).
(Figure: schematic of the fast and slow beams in the gear-change scheme)

Computational Requirements
Perspective: with the current layout of JLEIC, 1 hour of machine operation time ≈ 400 million turns.
Requirements for long-term beam-beam simulations of JLEIC: high-order symplectic particle tracking; speed; beam-beam collisions; "gear change".
We employ approximations that speed up the computations: one-turn maps; an approximate beam-beam collision model (Bassetti-Erskine).
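As a rough sanity check on that figure, a minimal back-of-the-envelope sketch in Python; the circumference used here is an illustrative assumption, not a quoted JLEIC design value.

```python
# Rough check of "1 hour of operation = hundreds of millions of turns".
# The circumference below is an assumed, illustrative value (not a JLEIC
# design number); the beam is taken to be ultra-relativistic (v ~ c).
c = 299_792_458.0                 # speed of light [m/s]
circumference = 2_250.0           # assumed ring circumference [m]

revolution_period = circumference / c          # [s]
turns_per_hour = 3600.0 / revolution_period

print(f"revolution period ~ {revolution_period * 1e6:.2f} us")
print(f"turns per hour    ~ {turns_per_hour:.2e}")   # a few times 1e8
```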

GHOST: Outline
GHOST: GPU-accelerated High-Order Symplectic Tracking.
GHOST resolves the computational bottlenecks by using one-turn maps for particle tracking, employing the Bassetti-Erskine approximation for collisions, and implementing the code on a massively parallel GPU platform.
GPU implementation yields the best returns when the same instruction is applied to multiple data (particle tracking) and no communication is required (particle tracking; parallel collisions).
Two main parts: particle tracking and beam collisions.
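To make the data-parallel structure concrete, here is a small sketch (not GHOST code, and linear rather than high-order): every particle is pushed by the same one-turn map with no inter-particle communication, which is the pattern that maps well onto one GPU thread per particle.

```python
# Sketch of why one-turn-map tracking is GPU-friendly: identical work per
# particle, no communication. A linear 4D one-turn matrix is applied to all
# particles with one batched operation per turn. (GHOST itself uses
# high-order, symplectified maps; this is only an illustration.)
import numpy as np

n_particles = 100_000
coords = np.random.default_rng(0).normal(scale=1e-3, size=(n_particles, 4))  # x, px, y, py

# Block-diagonal linear one-turn map: simple phase advances in x and y
mux, muy = 2 * np.pi * 0.22, 2 * np.pi * 0.16
def rot(mu):
    return np.array([[np.cos(mu), np.sin(mu)], [-np.sin(mu), np.cos(mu)]])
one_turn = np.zeros((4, 4))
one_turn[:2, :2], one_turn[2:, 2:] = rot(mux), rot(muy)

for _ in range(1000):            # 1000 turns; each turn is one batched push
    coords = coords @ one_turn.T
```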

Reported at the Last Meeting
Tracking: finished. Long-term simulations require symplectic tracking; GHOST implements symplectic tracking just as COSY does, and equivalence of the results has been demonstrated. GHOST tracking results match those of elegant (elegant tracks element by element, GHOST uses a one-turn map). The GPU implementation speeds up execution: running on 1 GPU is more than 280 times faster than on 1 CPU.
Collisions: started. Collisions in GHOST were done on a CPU only; the hourglass effect was reproduced, and benchmarking against BeamBeam3D showed excellent agreement.

GHOST: Beam Collisions, Bassetti-Erskine Approximation
Beams are treated as 2D transverse Gaussian slices (a good approximation for JLEIC); the Poisson equation then reduces to a complex error function. The finite length of the beams is simulated by using multiple slices.
We generalized the "weak-strong" Bassetti-Erskine formalism to include "strong-strong" collisions (each beam evolves) and various beam shapes (the original covers only flat beams).
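For reference, a minimal stand-alone sketch of the Bassetti-Erskine field of a single bi-Gaussian slice, written with the Faddeeva function w(z); it is not code from GHOST. It assumes σx > σy and a unit line density, and it omits the N·r0/γ prefactor and species-dependent sign that turn the field into a beam-beam kick.

```python
# Illustrative Bassetti-Erskine field of a 2D bi-Gaussian charge slice,
# using the Faddeeva function w(z) = exp(-z^2) erfc(-iz). Not GHOST code.
import numpy as np
from scipy.special import wofz
from scipy.constants import epsilon_0

def bassetti_erskine_field(x, y, sigx, sigy, lam=1.0):
    """Transverse field (Ex, Ey) of a Gaussian slice with line density lam.
    Assumes sigx > sigy; other aspect ratios follow by exchanging the planes."""
    # Work in the first quadrant and restore the signs at the end
    sx, sy = np.sign(x), np.sign(y)
    x, y = np.abs(x), np.abs(y)

    s = np.sqrt(2.0 * (sigx**2 - sigy**2))
    w1 = wofz((x + 1j * y) / s)
    w2 = wofz((x * sigy / sigx + 1j * y * sigx / sigy) / s)
    gauss = np.exp(-x**2 / (2 * sigx**2) - y**2 / (2 * sigy**2))

    # E_y + i E_x = lam / (2 eps0 sqrt(2 pi (sigx^2 - sigy^2))) * (w1 - gauss*w2)
    f = lam / (2 * epsilon_0 * np.sqrt(np.pi) * s) * (w1 - gauss * w2)
    return sx * f.imag, sy * f.real   # (Ex, Ey)

Ex, Ey = bassetti_erskine_field(1e-4, 5e-5, sigx=2e-4, sigy=5e-5)
print(Ex, Ey)
```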

GHOST Benchmarking: Collisions
Code calibration and benchmarking: convergence with increasing number of slices M; comparison to BeamBeam3D (Qiang, Ryne & Furman 2002).
(Figures: GHOST, 1 cm bunch, 40k particles; BeamBeam3D & GHOST, 10 cm bunch, 40k particles)
The finite bunch length is accurately represented, and the agreement with BeamBeam3D is excellent.
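The slides do not spell out how the longitudinal slices are chosen; one common convention, sketched here purely as an illustration, is to cut the Gaussian longitudinal profile into M slices of equal charge.

```python
# Hypothetical illustration of splitting a Gaussian bunch into M longitudinal
# slices of equal charge (GHOST's actual slicing scheme is not specified here).
# Slice boundaries sit at equal quantiles of the Gaussian profile.
import numpy as np
from scipy.special import erfinv

def equal_charge_slices(sigma_z, n_slices):
    """Return slice boundaries and charge-weighted slice centers [m]."""
    q = np.linspace(0.0, 1.0, n_slices + 1)                 # cumulative charge fractions
    edges = np.sqrt(2.0) * sigma_z * erfinv(2.0 * q - 1.0)  # Gaussian quantiles (+-inf at ends)
    # Charge-weighted center of each slice: <z> = M * sigma_z^2 * (rho(a) - rho(b))
    rho = lambda z: np.exp(-z**2 / (2 * sigma_z**2)) / (np.sqrt(2 * np.pi) * sigma_z)
    centers = (rho(edges[:-1]) - rho(edges[1:])) * sigma_z**2 * n_slices
    return edges, centers

edges, centers = equal_charge_slices(sigma_z=0.01, n_slices=5)  # 1 cm bunch, 5 slices
print(centers)
```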

GHOST Benchmarking: Hourglass Effect
When the bunch length σz is comparable to β* at the IP, the luminosity suffers a geometric reduction, the hourglass effect (Furman 1991).
(Figure: GHOST, 128k particles, 10 slices) Good agreement with theory.
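For comparison, a short sketch of the analytic reduction factor one would check against, assuming round beams, equal β* in both planes, and head-on collisions (Furman 1991); flat beams or a crossing angle require a different expression, and this is not code from GHOST.

```python
# Analytic hourglass reduction factor for head-on collisions of round
# Gaussian beams with equal beta* in both planes:
#   L/L0 = sqrt(pi) * t * exp(t^2) * erfc(t),   t = beta* / sigma_z.
import numpy as np
from scipy.special import erfcx   # erfcx(t) = exp(t^2) * erfc(t), avoids overflow

def hourglass_factor(beta_star, sigma_z):
    t = beta_star / sigma_z
    return np.sqrt(np.pi) * t * erfcx(t)

for ratio in (0.5, 1.0, 2.0, 5.0):
    print(f"beta*/sigma_z = {ratio:4.1f}  ->  L/L0 = {hourglass_factor(ratio, 1.0):.3f}")
```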

“Gear Change” with GHOST: Approach
The "gear change" provides beam synchronization for JLEIC: non-pair-wise collisions of beams with different numbers of bunches (N1, N2) in the two collider rings (for JLEIC, N1 = N2 + 1 ≈ 3420). If N1 and N2 are mutually prime, all combinations of bunches collide (see the toy sketch below).
The information for all bunches must be stored, a large memory load; the load can be alleviated by implementation on GPUs.
Approach: implement the single-bunch collision correctly and fast, then collide multiple bunches on a predetermined schedule, with Nbunch different pairs of collisions on each turn.
(Figure: fast beam / slow beam gear-change schematic)
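A toy sketch (not GHOST's actual scheduling code) of the combinatorics: labelling bunch crossings at the IP by an index k, the pair that meets is (k mod N1, k mod N2), and all N1·N2 combinations occur exactly when gcd(N1, N2) = 1 (Chinese remainder theorem).

```python
# Toy illustration of why mutual primality matters for the gear change.
from math import gcd

def count_distinct_pairs(n1, n2, n_crossings):
    """Count distinct (beam-1 bunch, beam-2 bunch) pairs seen at the IP."""
    pairs = {(k % n1, k % n2) for k in range(n_crossings)}
    return len(pairs)

for n1, n2 in [(7, 6), (8, 6)]:     # small stand-ins for the ~3420-bunch rings
    total = n1 * n2
    seen = count_distinct_pairs(n1, n2, 10 * total)
    print(f"N1={n1}, N2={n2}, gcd={gcd(n1, n2)}: "
          f"{seen}/{total} bunch pairs ever collide")
```

With N1 = 7 and N2 = 6 every one of the 42 pairs eventually collides; with N1 = 8 and N2 = 6 only half of them do.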

“Gear Change” with GHOST: Toward Results
Technical implementation: FINISHED. Validation and benchmarking: UNDERWAY.
Gaining confidence in the results: (1) do the single collision right; (2) compare to BeamBeam3D; (3) reproduce the hourglass effect; (4) collide multiple pairs of bunches at different turns, with a reasonableness check (is 1-1 the same as 10-10?); (5) conduct convergence studies.
We will share the first results only upon completing all five steps.

“Gear Change” with GHOST: Preliminaries
How GHOST scales with bunches and particles on 1 GPU: linear in the number of bunches, and linear in the number of particles per bunch.

GHOST: Checklist and Timetable
Stage 1: Particle tracking (Year 1: COMPLETED). High-order symplectic tracking optimized on GPUs; benchmarked against COSY with an exact match; a 400-million-turn tracking-only simulation completed; submitted for publication (NIM A: waiting since June!).
Stage 2: Beam collisions (Year 1: COMPLETED or UNDERWAY). Single-bunch collision implemented on GPUs; multiple-bunch collision implemented on a single GPU (arbitrary Nbunch); multiple-bunch validation, checking, benchmarking and optimization; multiple-bunch collision implemented on multiple GPUs.
Stage 3: Other effects to be implemented (Year 2 and beyond). Systematic simulations of JLEIC; other collision methods (fast multipole); space charge, synchrotron radiation, IBS.

Backup Slides

Last Time: Symplectic Tracking Needed
Symplectic tracking is essential for long-term simulations.
(Figures: x-px phase space after 500,000 iterations with a 3rd-order map) Non-symplectic tracking: energy is not conserved and the particle will soon be lost. Symplectic tracking: energy is conserved.

Speedup

6 slices, 1 turn (varying number of particles)
               CPU                          GPU
Npart      Tracking     Collision     Tracking     Collision     Speedup
1000       0.644896     13.2116       0.644768     15.4794       0.85
10000      1.02157      129.49        1.04451      17.9388       7.22
100000     5.86016      1287.17       5.91194      29.8827       43
1000000    54.5349      12851         54.8268      147.746       86

10k particles (varying number of turns)
               CPU                          GPU
Nturn      Tracking     Collision     Tracking     Collision     Speedup
1          1.04202      129.479       1.03523      17.823        7.26
10         0.953088     131.204       0.96128      17.7718       7.38
100        0.965376     143.975       0.961472     17.4446       8.25
–          0.951872     119.376       0.989312     12.4215       9.61

1 million particles (varying number of slices)
               CPU                          GPU
Nslices    Tracking     Collision     Tracking     Collision     Speedup
–          54.473       2235.67       54.8347      30.738        72
2          54.4848      4396.9        54.7464      54.4933       81
3          54.4546      6480.04       54.7209      75.2644       –
4          54.4835      8612.99       54.8068      99.6275       –
5          54.5129      10708.2       54.7883      125.001       –
6          54.4469      12843.6       54.7732      147.913       87

Last Time: High-Order Symplectic Maps
Higher-order symplecticity reveals more about the dynamics.
(Figures: 2nd-, 3rd-, 4th-, and 5th-order symplectic maps, 5,000 turns)

Last Time: Symplectic Tracking with GHOST
Symplectic tracking in GHOST is done the same way as in COSY Infinity (Makino & Berz 1999): start with a one-turn map and enforce the symplecticity criterion at each turn. This involves solving an implicit set of non-linear equations, which introduces a significant computational overhead.
(Diagram: one-turn map taking initial coordinates to final coordinates)
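As a small illustration of the criterion being enforced (for the linear part of a map only; the high-order case needs the implicit solve mentioned above), a transfer matrix M is symplectic when Mᵀ S M = S, with S the standard symplectic form. The sketch below is not GHOST code.

```python
# Symplecticity check for a linear map in (x, px): M is symplectic when
# M^T S M = S. A drift plus thin quadrupole is symplectic by construction;
# an artificially damped map is not.
import numpy as np

S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                 # symplectic form in (x, px)

def is_symplectic(M, tol=1e-12):
    return np.allclose(M.T @ S @ M, S, atol=tol)

drift = np.array([[1.0, 2.0],               # drift of length L = 2 m
                  [0.0, 1.0]])
quad = np.array([[1.0, 0.0],                # thin quadrupole, 1/f = 0.4 1/m
                 [-0.4, 1.0]])
one_turn = quad @ drift

print("one-turn map symplectic:", is_symplectic(one_turn))        # True
print("damped map symplectic:  ", is_symplectic(0.99 * one_turn)) # False
```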

Last Time: Symplectic Tracking with GHOST
Symplectic tracking in GHOST is the same as in COSY Infinity (Makino & Berz 1999).
(Figures: non-symplectic and symplectic tracking with a 3rd-order map, COSY vs. GHOST, 100,000 turns) Perfect agreement!

Last Time: GHOST Symplectic Tracking Validation
Dynamic aperture comparison to elegant (Borland 2000). 400-million-turn simulation (truly long-term).
(Figures: GHOST vs. elegant dynamic aperture, symplectic tracking, 4th-order map, 1,000 turns) Excellent agreement!

GHOST: GPU Implementation
(Figures: GHOST 3rd-order tracking; 1 GPU with varying number of particles; 100k particles with varying number of GPUs)
Speedup on 1 GPU over 1 CPU: more than 280 times. 400 million turns in a JLEIC ring for a bunch of 100k particles: under 7 hours for non-symplectic tracking, about 4.5 days for symplectic tracking. Performance improves with each new GPU architecture.

GHOST: GPU Implementation
(Figure: GHOST tracking on 1 GPU)

JLEIC Design Parameters Used

“Gear Change” with GHOST
The "gear change" provides beam synchronization for JLEIC: non-pair-wise collisions of beams with different numbers of bunches (N1, N2) in the two collider rings (for JLEIC, N2 = N1 - 1 ≈ 3420). It simplifies detection and polarimetry. Beam-beam collision partners precess: if N1 and N2 are incommensurate, all combinations of bunches collide. Can this create linear and non-linear instabilities? (Hirata & Keil 1990; Hao et al. 2014)
The information for all bunches must be stored, a huge memory load; the load can be alleviated by implementation on GPUs.
Approach: implement the single-bunch collision correctly and fast, then collide multiple bunches on a predetermined schedule, with Nbunch different pairs of collisions on each turn.