Evan Li, Brown University (Class of 2012)
Xiaobiao Huang, SLAC National Accelerator Laboratory
August 12, 2010

 The Accelerator Toolbox (AT) is a toolbox for MATLAB, specifically oriented toward solving problems in computational accelerator physics
 Combines mature, well-tested numerical methods and algorithms for beam tracking with the user-friendliness of the MATLAB interface
 Memory management and datatype conversion are handled automatically, allowing for increased efficiency and flexibility in beam tracking
 Combines existing codes and frameworks into a simple "all-in-one" package

 Parallel computing (as opposed to serial computing) executes a problem by running computations simultaneously on multiple processors
 Allows large data sets to be managed and overall processing speed to be increased
 Computing hardware has steadily shifted toward this model
 To keep AT in step with modern standards, a parallel implementation is both logical and essential
(Figure: serial vs. parallel computing models)
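To make the idea concrete, here is a minimal sketch (illustrative, not code from the talk) of the natural way to parallelize particle tracking: split the N particles across P workers, each of which tracks its own slice independently.

% Hypothetical illustration: divide N particles among P workers.
N = 1000;                              % number of particles
P = 4;                                 % number of processors
edges = round(linspace(0, N, P + 1));  % slice boundaries
for w = 1:P
    mySlice = (edges(w) + 1):edges(w + 1);  % columns owned by worker w
    % worker w would track beam(:, mySlice) independently of the others
end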

 The beam is represented as a 6×N array: it contains N particles, each represented as a 6-dimensional column vector
 Each particle populates a 6-dimensional phase space:
 x₁ and x₁′: horizontal displacement and its derivative (with respect to the path length s)
 x₂ and x₂′: vertical displacement and its derivative
 z: axial displacement
 delta: relative momentum deviation (off-momentum)
 Coordinates are observed in a coordinate system co-moving with the beam
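For concreteness, a minimal sketch (an illustration with assumed beam sizes, not the talk's actual setup) of building such a beam array in MATLAB:

% Hypothetical example: a beam of N particles as a 6xN array.
% Rows: x1, x1', x2, x2', z, delta (co-moving coordinates).
N = 5000;
beam = zeros(6, N);
beam(1, :) = 1e-3 * randn(1, N);   % x1: horizontal displacement [m]
beam(2, :) = 1e-4 * randn(1, N);   % x1': horizontal slope
beam(3, :) = 1e-3 * randn(1, N);   % x2: vertical displacement [m]
beam(4, :) = 1e-4 * randn(1, N);   % x2': vertical slope
% rows 5 and 6 (z, delta) left zero: on-momentum, no axial offset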

 The beam trajectory is evolved as the beam passes through the ring's lattice of cavities and magnets
 Most basic lattice elements:
 Drift space: just vacuum, no electromagnetic fields
 Bending element: magnetic dipole, used to steer the beam
 Focusing element: quadrupole, used to focus the beam
 The pass methods for these elements can be expressed simply as linear transfer matrices
 Motion of the beam in a lattice containing only dipoles and quadrupoles can be solved analytically (see the sketch below)
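As an example of such a pass method (illustrative, reusing the beam array from the sketch above), a drift space of length L acts on each transverse plane through a 2×2 transfer matrix, so propagating the whole beam is a single matrix multiplication:

% Hypothetical illustration: drift-space pass method as a transfer matrix.
L = 1.5;                            % drift length [m]
M = [1 L; 0 1];                     % transfer matrix for one transverse plane
beam(1:2, :) = M * beam(1:2, :);    % horizontal: x1 += L*x1', slope unchanged
beam(3:4, :) = M * beam(3:4, :);    % vertical plane, same map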

 Unfortunately, the pass methods for sextupoles and higher-order magnets cannot be described by a simple transfer matrix
 When these magnets are introduced, the beam's behavior becomes chaotic at large amplitudes, and the beam optics must be optimized to reduce this chaotic behavior
 To test the beam optics, we calculate the dynamic aperture
 The dynamic aperture is a limit on the transverse displacement of each particle
 Particles outside the dynamic aperture follow unstable orbits and eventually collide with the physical aperture, i.e., the walls of the vacuum chamber
 Calculating the dynamic aperture is a computationally intensive process:
 There is no analytical solution for the beam motion, so particle trajectories must be calculated individually
 A trajectory must be followed for thousands of turns before it can be declared stable
 Thousands of particles must be tracked to obtain even a rough visualization of the dynamic aperture
 The dynamic aperture must be recalculated every time the ring optics are adjusted, which can mean days of calculation before an optimal design is found

 ringpass calculates trajectories based on the beam lattice RING, the beam's initial coordinates Rin, and the specified number of turns NT
 Used to track particle trajectories over a huge number of revolutions: calculations are done numerically and reiterated millions of times
 Used in determining the dynamic aperture, and hence in aperture optimization
 Individual calls to ringpass can take several hours to complete
 Parallelizing the problem allows multiple particle trajectories to be calculated simultaneously, dramatically increasing the efficiency of the process (see the sketch below)
(Figure: Poincaré map)
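A minimal sketch of how ringpass can drive a dynamic aperture scan; the scan grid, the survival test, and the lattice variable THERING are illustrative assumptions, not the talk's exact procedure:

% Assumes an AT lattice cell array THERING is already defined.
NT = 1000;                          % number of turns to track
xs = linspace(-0.02, 0.02, 41);     % horizontal launch offsets to scan [m]
stable = false(size(xs));
for i = 1:length(xs)
    Rin = [xs(i); 0; 0; 0; 0; 0];   % single particle at offset x
    Rout = ringpass(THERING, Rin, NT);
    % a particle whose coordinates stay finite for all NT turns is
    % counted here as inside the dynamic aperture
    stable(i) = all(isfinite(Rout(:)));
end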


 MATLAB isn't open source, so developing tools for it can be difficult; much of the project's difficulty lies in finding methods for communication and message passing between nodes
 Parallel Computing Toolbox (PCT)
 Commercially developed by MathWorks
 Each separate node requires a MATLAB Distributed Computing Server (MDCS) license; AT would not use enough of the software's features to justify the enormous cost
 Introducing PCT as a system requirement would also detract from AT's utility as a widely available toolbox for computational accelerator physics
 Many other approaches were tried, with plenty of misdirection along the way
 MatlabMPI
 An open-source set of MATLAB scripts developed by MIT Lincoln Laboratory
 Implements functions from the Message Passing Interface (MPI) library, the standard tool for communication between programs running on multiple processors
 Not as user-friendly: requires lower-level "housekeeping" tasks involving memory management and data parsing, which made running and debugging programs a massive headache
 Uses file I/O to communicate between separate processes
 Works with both shared- and distributed-memory computing architectures (see the sketch below)
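A minimal sketch of how tracking might be partitioned with MatlabMPI, reusing beam, THERING, and NT from the earlier sketches. The division of work and variable names are assumptions; MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize are MatlabMPI's actual primitives:

% Hypothetical MatlabMPI worker script: each rank tracks a slice of the beam.
MPI_Init;
comm   = MPI_COMM_WORLD;
nProc  = MPI_Comm_size(comm);
myRank = MPI_Comm_rank(comm);          % ranks run 0..nProc-1

edges   = round(linspace(0, size(beam, 2), nProc + 1));
mySlice = (edges(myRank + 1) + 1):edges(myRank + 2);
Rout = ringpass(THERING, beam(:, mySlice), NT);   % track local particles

if myRank > 0
    MPI_Send(0, myRank, comm, Rout);   % workers ship results to rank 0
else
    for src = 1:nProc - 1
        % rank 0 gathers results; MatlabMPI moves them via file I/O
        partial = MPI_Recv(src, src, comm);
    end
end
MPI_Finalize;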

Processing times were measured using MATLAB's tic and toc functions:
The parallelization algorithms showed around 90% efficiency for each additional processor
Processing speed increased by a factor of more than 3.7 by parallelizing on a local office computer
We successfully demonstrated dramatic increases in processing speed by dividing AT's pass methods into multiple threads running on a small number of local processors
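The timing methodology itself is plain MATLAB; a sketch of the kind of measurement described, where run_parallel_tracking stands in for a hypothetical parallel driver script (not a function from the talk):

% Serial baseline, timed with MATLAB's tic/toc.
P = 4;                                  % local processors used
tic;
Rout = ringpass(THERING, beam, NT);     % serial tracking
tSerial = toc;

tic;
run_parallel_tracking;                  % hypothetical parallel driver
tParallel = toc;

speedup    = tSerial / tParallel;       % e.g. > 3.7 on a 4-core machine
efficiency = speedup / P;               % ~0.9 per the measurements above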

With the success of parallelization efforts at SSRL, including those involving AT, there has been discussion of gaining access to larger-scale computing clusters for tasks requiring high-performance processing.
With plans to upgrade the SPEAR3 storage ring in the near future, SSRL will be using AT heavily to produce accurate simulations and predictions of beam behavior.
Dynamic aperture optimization algorithms are essential to maintaining stable beam dynamics.
On a 128-node cluster, a computation that would take over 8 days could be reduced to roughly 15 minutes.
This would be an enormous upgrade to the computing power of the accelerator physics group at SSRL.

 AT's pass methods can be improved in a number of ways:
 Better compensation for "fringe field" effects
 More methods to simulate particle interactions within the beam
 Algorithms for dynamic aperture optimization:
 Implementing more efficient algorithms, such as genetic algorithms for lattice optimization
 Documentation:
 While AT continues to improve, its documentation has not been updated in recent years to reflect these upgrades

Acknowledgments:
Xiaobiao Huang
Department of Energy SULI program and SLAC National Accelerator Laboratory
Steve Rock, Eric Shupert, Shannon Ferguson, Christine Green