TUPEC057 Advances With Merlin – A Beam Tracking Code
J. Molson, R.J. Barlow, H.L. Owen, A. Toader
www.cockcroft.ac.uk | www.manchester.ac.uk


Abstract
MERLIN is a highly abstracted particle tracking code written in C++ that provides many unique features and is simple to extend and modify. We have investigated the addition of high-order wakefields to this tracking code and their effects on bunches, particularly with regard to collimation systems for both hadron and lepton accelerators. Updates have also been made to increase the compatibility of the code base with current compilers, and the code has been sped up through the addition of multi-threading, allowing cluster operation on the grid. This in turn allows simulations with large numbers of particles to take place. Instructions for downloading the new code base are given.

Introduction
Merlin is a beam tracking code developed in C++ by N. Walker et al. for particle tracking in the ILC linac. It is easy to extend due to its modular process design, and the code is clean, structured C++. Merlin exists as a set of supporting library functions: one writes one's own simulation program and makes use of the provided simulation system. This gives great flexibility in what the code can do, as demonstrated in the example files available in the Merlin distribution. We have taken responsibility for developing and maintaining the code, and have added several new features and enhancements.

Acknowledgements
We would like to thank Nick Walker and Andy Wolski for their assistance with the development of the Merlin source code. We thank CSED staff at STFC Daresbury Laboratory for providing computational resources.

[Plot: execution time (seconds) against number of processor cores]

The plot above shows the scalability of the MPI Merlin code in a simulation of the V6.503 LHC lattice, with 100k particles over 10 laps; it ranges over 2 to 128 CPUs.
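The modular, library-style design described above can be illustrated with a small self-contained sketch. The class names here (Particle, Component, Drift, Track) are purely illustrative and are not Merlin's actual API: a beamline is a sequence of components, and tracking applies each component's map to every particle in the bunch.

```cpp
#include <functional>
#include <vector>

// Illustrative sketch of a modular tracking library in the spirit described
// above; these names are hypothetical, not Merlin's actual classes.
struct Particle { double x, xp; };          // transverse position and angle
using Bunch = std::vector<Particle>;
using Component = std::function<void(Particle&)>;

// A drift of length L: the position advances by L * x'.
Component Drift(double L) {
    return [L](Particle& p) { p.x += L * p.xp; };
}

// The user's simulation program assembles a beamline and tracks a bunch
// through it, component by component.
void Track(const std::vector<Component>& beamline, Bunch& bunch) {
    for (const Component& c : beamline)
        for (Particle& p : bunch)
            c(p);
}
```

Because every component is an independent object with the same interface, adding a new physics process is a matter of adding a new component type, which is what makes the code easy to extend.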
The plots above and below compare the new proton scattering code in Merlin with FLUKA for a 0.5 m long copper collimator jaw.

Source Code access
The current release of the source code is available from SourceForge. We actively encourage new developers to join the Merlin project. As part of our development efforts we have switched to the git distributed version control system, which allows individual developers to create their own branches and track their changes without modifying the main tree.

Scattering physics for protons has been added to the code. The scattering processes include: multiple Coulomb, elastic proton-nucleus, inelastic proton-nucleus, elastic proton-nucleon, quasi-elastic single-diffractive proton-nucleon, and Rutherford scattering.

A large number of minor code warnings have been fixed, and we have tested the code base with gcc 4.5 builds to ensure compatibility with current compilers. Many minor design and layout enhancements have also been made, along with additional comments explaining what is occurring in the code. We feel full documentation is important in tracking codes; this is a work in progress for Merlin.

Previously, collimator settings had to be added by hand to the Merlin input files (MAD format). They are now read from a user-defined collimator database file, allowing easy changes to collimator settings.

Similarly, there is now a unified material class and a material database class. Material objects are created, filled with the relevant material properties, and pushed onto a C++ vector for easy access and searching. Because this data does not change between runs, it is held within the source code itself rather than in an external configuration file. Given the correct material properties and cross sections, it is now trivial to add new materials to Merlin.
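A minimal sketch of such a material database follows. The class and member names are hypothetical rather than Merlin's actual ones, and the cross-section values are illustrative placeholders, not Merlin's data tables; only Z, A, and density are standard reference values.

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical sketch of a compiled-in material database; the real Merlin
// classes and property set differ.
struct Material {
    std::string name;
    double Z;            // atomic number
    double A;            // atomic mass [g/mol]
    double density;      // [g/cm^3]
    double sigma_el;     // elastic p-nucleus cross section [barn] (illustrative)
    double sigma_inel;   // inelastic p-nucleus cross section [barn] (illustrative)
};

class MaterialDatabase {
public:
    MaterialDatabase() {
        // Held in source rather than a configuration file, since the data
        // does not change between runs. Cross sections are placeholders.
        materials.push_back({"Copper",   29, 63.55, 8.96, 1.58, 0.77});
        materials.push_back({"Graphite",  6, 12.01, 2.21, 0.73, 0.33});
    }
    const Material& Find(const std::string& name) const {
        for (const Material& m : materials)
            if (m.name == name) return m;
        throw std::runtime_error("Unknown material: " + name);
    }
private:
    std::vector<Material> materials;  // linear scan is fine for a few entries
};
```

Adding a new material then amounts to one more push_back with the correct properties and cross sections.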
We have also implemented bunch save and load functions, providing a checkpoint facility for long simulation runs, where the bunch must be saved and reloaded at a later time in the same state.

Increasing tracking speed
We have implemented multi-threaded code in Merlin in order to speed up tracking. Initially OpenMP was used to parallelize the particle transport routines, but we have since moved to MPI. The table below shows the execution time of 10 laps of the V6.503 LHC lattice with 100k particles for the OpenMP code.

We have developed particle distribution routines in order to split tracking over multiple physical computers. Tracking, collimation, and other independent processes take place on individual CPU nodes, with particle exchange occurring only for collective effects. Collective processes include initial bunch creation, wakefield effects, and emittance calculations. We have implemented a new MPI_PARTICLE derived type in order to transfer particles between physical systems. The image above shows the MPI-based tracking design for the Merlin code.

Resistive wakefield enhancements
Previous versions of Merlin used a fixed macrocharge for each particle in the bunch. We have added a new ParticleBunchQ class, which allows each particle to carry its own macrocharge. This lets us give core beam particles a higher macrocharge whilst adding a halo with a lower macrocharge, giving a more accurate simulation of the effect of wakefields on halo particles: the core beam charge produces the field that acts on the halo. In addition, the WakefieldProcess has been enhanced to work with the MPI code, allowing transfers from other compute nodes so that the collective wakefield from multiple systems is calculated.
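A checkpoint of this kind can be sketched as a binary dump and reload of the bunch's phase-space coordinates. This is an illustration only, assuming a particle is six doubles; Merlin's actual bunch I/O interface and file format may differ.

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Illustrative particle record: six phase-space coordinates.
struct PSvector { double x, xp, y, yp, ct, dp; };

// Write the bunch size followed by the raw particle records.
void SaveBunch(const std::vector<PSvector>& bunch, const std::string& path) {
    std::ofstream out(path, std::ios::binary);
    std::size_t n = bunch.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof n);
    out.write(reinterpret_cast<const char*>(bunch.data()),
              n * sizeof(PSvector));
}

// Read the bunch back in exactly the state it was saved.
std::vector<PSvector> LoadBunch(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::size_t n = 0;
    in.read(reinterpret_cast<char*>(&n), sizeof n);
    std::vector<PSvector> bunch(n);
    in.read(reinterpret_cast<char*>(bunch.data()), n * sizeof(PSvector));
    return bunch;
}
```

A binary round trip like this reproduces every coordinate bit-for-bit, which is what "reloaded in the same state" requires for a long run to resume deterministically.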
We have also implemented a load-balancing particle distribution system for use on shared or heterogeneous clusters.

Below is a sample of the OpenMP parallel tracking code:

    #pragma omp parallel for
    for(size_t i = 0; i < bunch.size(); i++)
    {
        amap->Apply(bunch.GetParticles()[i]);
    }
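The load-balancing distribution can be sketched as follows, assuming each node advertises a relative speed weight. The function name and the weighting scheme are illustrative, not Merlin's actual routine.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Split nParticles across nodes in proportion to each node's relative
// speed, so faster nodes receive proportionally more particles.
std::vector<std::size_t> DistributeParticles(
    std::size_t nParticles, const std::vector<double>& speeds) {
    double total = std::accumulate(speeds.begin(), speeds.end(), 0.0);
    std::vector<std::size_t> counts(speeds.size());
    std::size_t assigned = 0;
    for (std::size_t i = 0; i < speeds.size(); ++i) {
        counts[i] =
            static_cast<std::size_t>(nParticles * speeds[i] / total);
        assigned += counts[i];
    }
    // Integer truncation may leave a few particles unassigned; give the
    // remainder to the fastest node.
    std::size_t fastest =
        std::max_element(speeds.begin(), speeds.end()) - speeds.begin();
    counts[fastest] += nParticles - assigned;
    return counts;
}
```

On a homogeneous cluster all weights are equal and this reduces to an even split; on a shared or heterogeneous cluster the weights can be measured from a short timing run before tracking starts.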