TUPEC057 Advances With Merlin – A Beam Tracking Code
J. Molson, R.J. Barlow, H.L. Owen, A. Toader
www.cockcroft.ac.uk  www.manchester.ac.uk

Abstract

MERLIN is a highly abstracted particle tracking code written in C++ that provides many unique features and is simple to extend and modify. We have investigated the addition of high-order wakefields to this tracking code and their effects on bunches, particularly with regard to collimation systems for both hadron and lepton accelerators. Updates have also been made to increase the compatibility of the code base with current compilers, and speed enhancements have been made via the addition of multi-threading, allowing operation on computing clusters and the grid. This in turn allows simulations with large numbers of particles to take place. Instructions for downloading the new code base are given.

Introduction

Merlin is a beam tracking code developed in C++ by N. Walker et al. for particle tracking in the ILC linac. It is easy to extend thanks to its modular process structure, and the code is clean, structured C++. Merlin is provided as a set of supporting library functions: the user writes their own simulation program and makes use of this simulation system. This gives great flexibility in what the code can do, as demonstrated by the example files included in the Merlin distribution. We have taken responsibility for developing and maintaining the code, and have added several new features and enhancements.

Acknowledgements

We would like to thank Nick Walker and Andy Wolski for their assistance with the development of the Merlin source code. We thank the CSED staff at STFC Daresbury Laboratory for providing computational resources.

Source Code access

The current release of the source code is available from SourceForge at http://sourceforge.net/projects/merlin-pt/. We actively encourage new developers to join the Merlin project. As part of our development efforts we have switched to the git distributed version control system, which allows individual developers to make their own branches and track their changes without modifying the main tree.

Scattering enhancements

Scattering physics for protons has been added to the code. The scattering processes include: multiple Coulomb, elastic proton-nucleus, inelastic proton-nucleus, elastic proton-nucleon, quasi-elastic single-diffractive proton-nucleon, and Rutherford scattering.

[Figures: comparison between the new proton scattering code in Merlin and FLUKA for a 0.5 m long copper jaw collimator.]

Code Improvements

A large number of minor code warnings have been fixed, and the code base has been tested with gcc 4.5 builds to ensure compatibility with current compilers. Many minor code design and layout enhancements have also taken place, along with additional comments explaining what is occurring in the code. We feel that full documentation is important in tracking codes, and this is a work in progress for Merlin.

Previously, collimator settings had to be added to the Merlin input files (MAD format) by hand. These are now read in from a user-defined collimator database file, allowing collimator settings to be changed easily. In a similar way, there is now a unified material class and a material database class. Material objects are created, filled with the relevant material properties, and then pushed back onto a C++ vector for easy access and searching. Because this data does not change between runs, it is held within the source code itself rather than in an external configuration file. Given the correct material properties and cross sections, it is now trivial to add new materials to Merlin.
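As an illustration of this pattern, a minimal sketch of a material record and database is given below. The class names, members, and property values are assumptions made for this example and are not Merlin's actual API.

#include <string>
#include <vector>

// Illustrative material record (hypothetical names; only a subset of the properties needed).
struct Material
{
    std::string symbol;   // e.g. "Cu"
    double A;             // atomic mass [g/mol]
    double Z;             // atomic number
    double density;       // [g/cm^3]
};

// Illustrative database: materials are pushed onto a vector and searched by symbol.
class MaterialDatabase
{
public:
    MaterialDatabase()
    {
        // Data is compiled into the source rather than read from a configuration file.
        Material cu = {"Cu", 63.55, 29.0, 8.96};
        Material c  = {"C",  12.01,  6.0, 2.26};
        db.push_back(cu);
        db.push_back(c);
    }

    const Material* FindMaterial(const std::string& symbol) const
    {
        for (size_t i = 0; i < db.size(); ++i)
            if (db[i].symbol == symbol)
                return &db[i];
        return 0;   // not found
    }

private:
    std::vector<Material> db;
};

Keeping the data in a searchable vector like this places the material properties directly alongside the scattering code that consumes them.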
We have also implemented bunch save and load functions, providing a checkpoint feature for long simulation runs in which the bunch must be saved and reloaded at a later time in exactly the same state (a minimal sketch is given below).

Increasing tracking speed

We have implemented multi-threaded code in Merlin in order to speed up tracking. Initially OpenMP was used to parallelize the particle transport routines; we then moved to MPI. Below is a sample of the OpenMP parallel tracking code.

#pragma omp parallel for
for(size_t i = 0; i < bunch.size(); i++)
{
    amap->Apply(bunch.GetParticles()[i]);
}

The table below shows the execution time of 10 laps of the V6.503 LHC lattice with 100k particles for the OpenMP code.

Number of processor cores    Execution time (seconds)
1                            1067
2                             738
3                             632
4                             569

For the MPI code we have developed particle distribution routines in order to split tracking over multiple physical computers. All tracking, collimation, and other independent processes take place on individual CPU nodes, with particle exchange occurring only for collective effects. Collective processes include initial bunch creation, wakefield effects, and emittance calculations. We have implemented a new MPI_PARTICLE derived type in order to transfer particles between physical systems (a sketch of this approach is given below). We have also implemented a load-balancing particle distribution system for use on shared or heterogeneous clusters.

[Figure: the MPI-based tracking design for the Merlin code.]

[Figure: scalability of the MPI Merlin code in a simulation of the V6.503 LHC lattice, with 100k particles over 10 laps, ranging from 2 to 128 CPUs.]

Resistive Wakefield enhancements

Previous versions of Merlin had a fixed macrocharge for each particle in the bunch. We have added a new ParticleBunchQ class, which allows each particle to have its own macrocharge. This lets us give core beam particles a higher macrocharge whilst adding a halo with a lower macrocharge, giving a more accurate simulation of the effect of wakefields on halo particles, with the core beam charge producing the field that acts on the halo. In addition, the WakefieldProcess has been enhanced to work with the MPI code, allowing transfers from other compute nodes, so that the collective wakefield from multiple systems is calculated.
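A minimal sketch of the per-particle macrocharge idea described above is shown below; the class and member names are assumptions for illustration and do not reproduce the actual ParticleBunchQ implementation.

#include <vector>

// Assumed 6D phase-space coordinates for one particle.
struct Particle
{
    double x, xp, y, yp, ct, dp;
};

// Illustrative bunch in which every particle carries its own macrocharge,
// so a dense core and a sparse halo can be represented in one bunch.
class WeightedParticleBunch
{
public:
    void AddParticle(const Particle& p, double macrocharge)
    {
        particles.push_back(p);
        charges.push_back(macrocharge);
    }

    // The charge driving a wakefield kick is the sum of the per-particle macrocharges.
    double TotalCharge() const
    {
        double q = 0.0;
        for (size_t i = 0; i < charges.size(); ++i)
            q += charges[i];
        return q;
    }

private:
    std::vector<Particle> particles;
    std::vector<double> charges;
};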

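Returning to the MPI tracking described under "Increasing tracking speed", the sketch below shows one way a particle exchange type could be registered with MPI. It is illustrative only and is not the actual MPI_PARTICLE definition used in Merlin.

#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    // Assume each particle is exchanged as six contiguous doubles (x, x', y, y', ct, dp).
    MPI_Datatype particle_type;
    MPI_Type_contiguous(6, MPI_DOUBLE, &particle_type);
    MPI_Type_commit(&particle_type);

    // Particles can now be transferred between nodes with the usual MPI calls, e.g.
    // MPI_Send(buffer, nparticles, particle_type, dest, tag, MPI_COMM_WORLD);

    MPI_Type_free(&particle_type);
    MPI_Finalize();
    return 0;
}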

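Similarly, the bunch save and load (checkpoint) feature mentioned under "Code Improvements" could, in its simplest form, write and read the particle coordinates to and from a binary file, as in the sketch below. The file format and function names are assumptions for this example, not Merlin's actual implementation.

#include <cstdio>
#include <vector>

// Assumed minimal particle record for this checkpoint example.
struct Particle
{
    double x, xp, y, yp, ct, dp;
};

// Write the whole bunch to a binary checkpoint file.
bool SaveBunch(const std::vector<Particle>& bunch, const char* filename)
{
    std::FILE* f = std::fopen(filename, "wb");
    if (!f) return false;
    std::size_t n = bunch.size();
    std::fwrite(&n, sizeof(n), 1, f);
    if (n) std::fwrite(&bunch[0], sizeof(Particle), n, f);
    std::fclose(f);
    return true;
}

// Reload the bunch at a later time in the same state.
bool LoadBunch(std::vector<Particle>& bunch, const char* filename)
{
    std::FILE* f = std::fopen(filename, "rb");
    if (!f) return false;
    std::size_t n = 0;
    if (std::fread(&n, sizeof(n), 1, f) != 1) { std::fclose(f); return false; }
    bunch.resize(n);
    if (n) std::fread(&bunch[0], sizeof(Particle), n, f);
    std::fclose(f);
    return true;
}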