Simbuca [1], using a graphics card to simulate Coulomb interactions in a Penning trap
Simon Van Gorp
[1]: S. Van Gorp et al. (2011), Nuclear Instruments and Methods in Physics Research A, 638.

Simbuca motivation
The WITCH experiment searches for physics beyond the Standard Model by measuring the recoil-energy distribution of the nucleus after β⁻ decay. This is done by comparing the expected, simulated energy distribution (of the recoil ions after β decay) with the experimentally obtained one -> any mismatch would hint at new physics.
Without Simbuca (i.e. assuming a perfectly cooled ion cloud) the Standard Model value was found to be 4σ off its expected value [2]. Simulations are important!
Therefore the source of ions in the Penning trap (stored there for up to a few seconds) has to be simulated properly. -> The duration of a computer simulation is dominated by the Coulomb interaction calculation.
[2]: S. Van Gorp et al. (2012), to be submitted to PRC (PRL?)

Simbuca overview
Simbuca is a modular Penning trap simulation package:
- Reading external fieldmaps (Comsol, SIMION)
- Trap excitations
- 3 different integrators
- 3 buffer gas routines
- Can run on CPU and GPU
- Compiles with g++ or icpc
- Several analysis tools are provided
- A Makefile is provided
- Support by me

Integrators and buffer gas models
Integrators:
- 4th- and 5th-order Runge-Kutta with adaptive step size and error control
- 1st-order (predictor-corrector) Gear method
- Boris algorithm [3] (a minimal sketch follows after the references)
Buffer gas models:
- Langevin or polarizability model (= for all masses)
- Ion-mobility-based model (≈ for all masses)
- HS1 SIMION model
- IonCool model [4]
[3]: Boris J.P. (1970), Proc. 4th Conf. Num. Sim. Plasmas, 3-67
[4]: Schwarz S. et al. (2006), NIM A 566 2
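As an illustration of the last integrator, here is a minimal sketch of one Boris velocity push for a single ion in static E and B fields. This is my own sketch of the standard algorithm [3], not Simbuca's implementation; all identifiers are invented:

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]};
}

// One Boris push: advances the velocity v by dt under fields E and B by
// splitting the electric half-kicks around a pure magnetic rotation.
void borisPush(Vec3& v, const Vec3& E, const Vec3& B,
               double q, double m, double dt) {
    const double qmdt2 = 0.5 * q * dt / m;
    Vec3 vMinus, t, s;
    for (int i = 0; i < 3; ++i) vMinus[i] = v[i] + qmdt2 * E[i];  // first half E-kick
    for (int i = 0; i < 3; ++i) t[i] = qmdt2 * B[i];              // rotation vector
    const double t2 = t[0]*t[0] + t[1]*t[1] + t[2]*t[2];
    for (int i = 0; i < 3; ++i) s[i] = 2.0 * t[i] / (1.0 + t2);
    const Vec3 vmt = cross(vMinus, t);
    Vec3 vPrime, vPlus;
    for (int i = 0; i < 3; ++i) vPrime[i] = vMinus[i] + vmt[i];
    const Vec3 vps = cross(vPrime, s);
    for (int i = 0; i < 3; ++i) vPlus[i] = vMinus[i] + vps[i];    // magnetic rotation
    for (int i = 0; i < 3; ++i) v[i] = vPlus[i] + qmdt2 * E[i];   // second half E-kick
}
```

The appeal of the Boris scheme for trap simulations is that the magnetic rotation is energy-conserving by construction, which keeps the fast cyclotron motion stable over long runs.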

Coulomb interactions
Simulation time scales as O(N²). Tree methods (Barnes-Hut, PM, P³M, PIC, FMM) reduce this to O(N log N):
- Space is divided into nodes, which are subdivided recursively.
- A node carries the total charge and mass of its particles and is located at their centre of mass.
- The long-range force is approximated by aggregating distant particles into one pseudo-particle and using the force of that single particle.
A scaled Coulomb force puts more weight on the charge of one ion in order to simulate more ions. This works well [5,6,7]. (A direct-summation sketch with such a scaling factor follows below.)
[5]: D. Beck et al. (2001), Hyperfine Interactions, 132, 469-474
[6]: S. Sturm et al. (2009), AIP Conference Proceedings, 1114(1), 185-190
[7]: S. Van Gorp (2012), PhD thesis, Leuven
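For concreteness, a minimal CPU-side sketch of the brute-force O(N²) pairwise sum with such a scaling factor (illustrative only; the identifiers are invented and this is not Simbuca's actual code):

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Ion { double x, y, z, q; };  // position [m], charge [C]

// Direct-summation O(N^2) Coulomb forces. 'scale' is the scaled-Coulomb
// factor from the slide: it weights each pairwise force so that N simulated
// ions mimic a larger cloud [5,6,7].
std::vector<std::array<double, 3>>
coulombForces(const std::vector<Ion>& ions, double scale) {
    constexpr double k = 8.9875517873681764e9;  // 1/(4*pi*eps0) [N m^2 C^-2]
    std::vector<std::array<double, 3>> F(ions.size(), {0.0, 0.0, 0.0});
    for (std::size_t i = 0; i < ions.size(); ++i) {
        for (std::size_t j = 0; j < ions.size(); ++j) {
            if (i == j) continue;
            const double dx = ions[i].x - ions[j].x;
            const double dy = ions[i].y - ions[j].y;
            const double dz = ions[i].z - ions[j].z;
            const double r2 = dx * dx + dy * dy + dz * dz;
            const double f = scale * k * ions[i].q * ions[j].q
                             / (r2 * std::sqrt(r2));
            F[i][0] += f * dx;
            F[i][1] += f * dy;
            F[i][2] += f * dz;
        }
    }
    return F;
}
```

It is exactly this double loop that the GPU (and, for larger N, the tree method) replaces.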

Benefits of a GPU
1. Parallelism due to multiple stream processors (a Geforce 8800 GTX has 8 x 16 stream processors, each comparable to a single processor)
2. SIMD structure (pipelining, comparable to a conveyor belt with the threads being the workers)
3. Very fast floating-point calculations
4. CUDA programming language (2007)
(A minimal CUDA kernel illustrating this programming model follows below.)
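As a minimal, self-contained illustration of this model (generic CUDA, not Simbuca code): one kernel launch spreads the work over all stream processors, with each thread handling one array element.

```cuda
#include <cuda_runtime.h>

// Each GPU thread scales one array element: the same instruction runs
// on many stream processors in parallel (the SIMD/conveyor-belt picture).
__global__ void scaleKernel(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    scaleKernel<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // 4096 blocks x 256 threads
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```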

Chamomile scheme
The Chamomile scheme by Hamada and Iitaka (2007) [8] calculates the gravitational interaction between particles precisely on the GPU:
- Each set of i-particles is coupled to its own conveyor belt.
- The j-particles are sequentially presented to each conveyor belt.
- At the end, the results of all conveyor belts are summed to obtain the force between the particles.
(A generic tiled CUDA kernel illustrating this layout follows below.)
[8]: T. Hamada and T. Iitaka (2007), arXiv.org:astro-ph/
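The following kernel sketches the same tiling idea in the conveyor-belt picture. It is a generic shared-memory N-body kernel of my own, hypothetical and not the actual Chamomile code:

```cuda
#include <cuda_runtime.h>

// Tiled O(N^2) force kernel: each thread accumulates the force on one
// i-particle while tiles of j-particles are staged through shared memory,
// one "belt position" at a time.
// Launch with: pairForces<<<blocks, threads, threads * sizeof(float4)>>>(...)
__global__ void pairForces(const float4* pos,   // x,y,z = position, w = charge
                           float3* force, int n, float k) {
    extern __shared__ float4 tile[];
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    const float4 pi = (i < n) ? pos[i] : make_float4(0, 0, 0, 0);
    float3 acc = make_float3(0.0f, 0.0f, 0.0f);
    for (int base = 0; base < n; base += blockDim.x) {
        const int j = base + threadIdx.x;
        tile[threadIdx.x] = (j < n) ? pos[j] : make_float4(0, 0, 0, 0);
        __syncthreads();                       // tile of j-particles is loaded
        for (int t = 0; t < blockDim.x && base + t < n; ++t) {
            const float4 pj = tile[t];
            const float dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
            const float r2 = dx*dx + dy*dy + dz*dz + 1e-12f;  // softening;
            const float f = k * pi.w * pj.w * rsqrtf(r2) / r2; // also zeroes i==j
            acc.x += f * dx; acc.y += f * dy; acc.z += f * dz;
        }
        __syncthreads();                       // belt advances to the next tile
    }
    if (i < n) force[i] = acc;
}
```

The point of the tiling is memory reuse: each j-particle is read from global memory once per block instead of once per thread.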

Chamomile scheme: practical usage
The force calculation is a black-box function provided by Hamada and Iitaka: gravitational force ≈ Coulomb force, up to a conversion coefficient (see the note below).
Needed:
- 64-bit Linux
- an NVIDIA graphics card that supports CUDA
- CUDA environment v…
Not needed:
- CUDA knowledge
- …
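The slide's conversion coefficient is a formula image that did not survive; the following is the standard way to map one force law onto the other, stated as a sketch and not necessarily Simbuca's exact convention. Since

F_grav = G · m_i · m_j / r²   and   F_Coulomb = q_i · q_j / (4π · ε₀ · r²),

feeding the gravitational kernel the effective masses m̃_i = q_i / sqrt(4π · ε₀ · G) makes it return exactly the Coulomb force magnitude; only the overall sign (gravity attracts, like charges repel) must be handled by the caller.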

GPU vs CPU
The GPU blows the CPU away, and the effect becomes more pronounced as more particles are simulated. The benchmark is a quadrupole excitation for N ions, moving for 100 ms with buffer gas: this takes 3 days with a GPU compared to 3-4 years with a CPU!
[Figures: GPU improvement factor; CPU and GPU simulation time]

Simbuca: usage by other groups
1. WITCH: behavior of large ion clouds; energy and position distribution
2. Smiletrap (Stockholm): highly charged ions; stochastic cooling processes
3. ISOLTRAP (CERN): in-trap decay [9]; influence of Coulomb repulsion between ions in a Penning trap
4. Piperade (Orsay and MPI Heidelberg): simulate mass separation of ion clouds
5. ISOLTRAP (Greifswald): isobaric buncher, mass separation and negative mass effect [10]
6. CLIC accelerator (CERN): simulate bunches of the beam
[9]: A. Herlert, S. Van Gorp et al., Recoil-ion trapping for precision mass measurements, to be published
[10]: Wolf, R. et al. (2011), Hyperfine Interactions, 199, 115-122

Tree codes on a GPU
Improvements:
- Octgrav v1.7, the first tree code on the GPU [11]; under construction, close to being finished [12]
- The step size Δt defines the speed of the program (Δt ~ ωc⁻¹). Bringing B out of the equation of motion (valid if ωc is constant) lifts this limit on Δt [3,13,14] (see the note below).
- MPI-ing the code: just started, by parallelizing the force calculation
[11]: Gaburov, E. et al. (2010), Procedia Computer Science, 1(1), 1119-1127
[12]: Iitaka, T. (2012), A novel tree code for the GPU, private communication
[13]: Spreiter, Q. & Walter, M. (1999), Journal of Computational Physics, 152(1), 102-119
[14]: Herfurth, F. et al. (2006), Hyperfine Interactions, 173, 93-101
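For orientation (standard Penning-trap relations, not from the slide): the cyclotron frequency is ωc = qB/m, and an explicit integrator must resolve it, i.e. Δt ≪ 2π/ωc. As an arbitrary illustration, a singly charged mass-35 ion in a 6 T field has ωc/2π ≈ 2.6 MHz, so Δt must stay well below the ≈ 380 ns cyclotron period; removing B from the equation of motion lifts exactly this limit.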

An example
When trapping a large amount of ions, the cloud's own electric field will create an E x B drift for the ions, with drift velocity v = (E x B) / B² (the standard E x B drift relation).
[Figure: ion cloud in the X-Y plane after 50 μs, without and with Coulomb interaction]
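As a quick sanity check of the drift relation (with arbitrary illustrative numbers, not values from the slide): for E ⊥ B the drift speed is |v| = E/B, so a space-charge field of E = 100 V/m in a B = 6 T trap gives |v| ≈ 17 m/s, independent of the ion's charge and mass.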

Conclusion
1. A versatile Penning trap simulation package, Simbuca, is presented.
2. It is the first program that uses a GPU to calculate Coulomb interactions.
3. GPU computing is a new field of which we have barely scratched the surface.
4. Simbuca will continue to develop in the future.

Thank you for your attention

Starting up a simulation (1)
Download the program on a Linux/Windows PC (…) and compile the program….

Starting up a simulation (2)
Change variables in main.cpp according to what simulation you want to do.

Starting up a simulation (3)
Change variables in main.cpp (a hypothetical example follows below).
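A purely illustrative sketch of what such settings in main.cpp might look like. The slides do not show the actual Simbuca variable names, so every identifier below is invented:

```cpp
// Hypothetical simulation settings in main.cpp (invented names):
const int    nIons     = 10000;    // number of simulated ions
const double ionMass   = 35.0;     // ion mass [u]
const double ionCharge = 1.0;      // charge state
const double simTime   = 100e-3;   // total simulated time [s]
const bool   useGPU    = true;     // Coulomb force on the GPU
const bool   bufferGas = true;     // enable a buffer-gas cooling model
```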

Compile
Change the variables in the Makefile and compile the program.

Execute
Go to the directory with the executable and execute the program. Check the logfile and the output file.

Run the simulation
Post-process with the functions in parser.cpp, or with ROOT / the Linux bash.

Backup slides

Simulation motivation
WITCH compares the simulated spectra with the experimentally obtained spectra to determine the β-ν angular correlation coefficient a. Without Simbuca (i.e. assuming a perfectly cooled ion cloud) the Standard Model value was found to be 4σ off its expected value [2], so the source of ions in the Penning trap has to be simulated properly.
Separating ion species in a Penning trap is strongly distorted by the mutual Coulomb repulsion: both a broadening and a shift of the excitation frequency have been observed [3].
The duration of a computer simulation is dominated by the Coulomb interaction calculation.
[2]: S. Van Gorp et al., to be submitted to PRC (PRL?)
[3]: A. Herlert et al., Hyperfine Interactions, 199, 211-220

Movie time
Cs⁺ ions in helium buffer gas (p = 10⁻⁴ mbar, T = 293 K), with Coulomb interaction; a scaled Coulomb factor of … is used to mimic 10⁷ ions.
1. Dipole excitation (0.5 V amplitude, 5 ms duration, at the ω₋ frequency)
2. … ms cooling