1 MaryLie/IMPACT: Status and Future Plans
Robert D. Ryne
Center of Beam Physics, Lawrence Berkeley National Laboratory
SciDAC II, COMPASS collaboration meeting, Tech-X Corp., Boulder, CO, Oct. 5-6, 2009

2 MaryLie/IMPACT: Hybrid Parallel Beam Dynamics Code
Combines features of MaryLie (U. Md.) + IMPACT (LBNL) + new features
Multiple capabilities in a single unified environment:
—Map generation, map analysis, particle tracking w/ 3D space charge, envelope tracking, fitting and optimization
MAD-style input (Standard Input Format, enhanced as needed)
Uses split-operator method to combine high-order optics w/ parallel particle-in-cell
Split-Operator Methods:
—H = H_ext + H_sc
—M = M_ext (magnetic optics); M = M_sc (parallel PIC space charge)
—M(t) = M_ext(t/2) M_sc(t) M_ext(t/2) + O(t^3)
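The split-operator factorization above is easy to prototype. The following is a minimal, self-contained sketch (not ML/I source): a simple drift stands in for the external map M_ext and a toy linear kick for the space-charge map M_sc, and all routine and variable names are illustrative assumptions.

      program split_demo
        ! Sketch of the second-order (Strang) split-operator step:
        !   M(tau) = M_ext(tau/2) M_sc(tau) M_ext(tau/2) + O(tau^3)
        ! A drift plays the role of M_ext; a toy linear kick plays the role of M_sc.
        implicit none
        double precision :: z(6), tau
        integer :: i
        z = (/ 1.d-3, 0.d0, 1.d-3, 0.d0, 0.d0, 0.d0 /)   ! (x,px,y,py,t,pt)
        tau = 0.1d0
        do i = 1, 10
           call drift_map(z, 0.5d0*tau)   ! M_ext(tau/2)
           call sc_kick(z, tau)           ! M_sc(tau)
           call drift_map(z, 0.5d0*tau)   ! M_ext(tau/2)
        end do
        write(6,*) 'final x, px = ', z(1), z(2)
      contains
        subroutine drift_map(z, t)
          double precision, intent(inout) :: z(6)
          double precision, intent(in) :: t
          z(1) = z(1) + t*z(2)   ! x <- x + t*px
          z(3) = z(3) + t*z(4)   ! y <- y + t*py
        end subroutine drift_map
        subroutine sc_kick(z, t)
          double precision, intent(inout) :: z(6)
          double precision, intent(in) :: t
          double precision, parameter :: ksc = 0.05d0   ! toy defocusing strength
          z(2) = z(2) + t*ksc*z(1)   ! px <- px + t*ksc*x
          z(4) = z(4) + t*ksc*z(3)   ! py <- py + t*ksc*y
        end subroutine sc_kick
      end program split_demo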

3 MaryLie/IMPACT: Activities & Improvements
Ported to ANL/BG systems
Applied to CERN PS2 dynamic aperture studies, space-charge studies, comparisons w/ IMPACT
Encapsulated D. Abell's 5th order nonlinear map for RF cavities for inclusion in other codes
New serial scan capability (10-dimensional parameter space)
New parallel scan capability (10D parameter space) on up to 80,000 processors on ANL/Intrepid
New multi-bunch capability
Enhanced diagnostics
Auxiliary scripts to produce animations
Used in USPAS class (along with Synergia)

4 Encapsulation of 5th order nonlinear rf cavity model
(in collaboration w/ Dan Abell, who wrote the NLRF routines)
About 20,000 lines of code for everything (map generation, tracking through the map, and all ancillary routines)

      call get_egengrad(pp1,cpp1)
      ! loop over cavity slices; each slice is treated whole or cut into two halves
      do jslice=1,nslices
        if(icutslicesinhalf.eq.0)then
          slfrac=1.d0/nslices
          write(6,*)'(nohalf) slfrac=',slfrac
          ihalf=0
          call get_nlrf(pp2,cpp2,h,xmh,jslice,nslices,slfrac,ihalf)
          ! either concatenate the slice map or track particles through it
          if(idotracking.eq.0)call concat(ah,amh,h,xmh,ah,amh)
          if(idotracking.eq.1)call dotrack(h,xmh,ntaysym,norder,ntrace)
        endif
        if(icutslicesinhalf.eq.1)then
          slfrac=0.5d0/nslices
          write(6,*)'(yeshalf) slfrac=',slfrac
          ihalf=1
          call get_nlrf(pp2,cpp2,h,xmh,jslice,nslices,slfrac,ihalf)
          if(idotracking.eq.0)call concat(ah,amh,h,xmh,ah,amh)
          if(idotracking.eq.1)call dotrack(h,xmh,ntaysym,norder,ntrace)
          ihalf=2
          call get_nlrf(pp2,cpp2,h,xmh,jslice,nslices,slfrac,ihalf)
          if(idotracking.eq.0)call concat(ah,amh,h,xmh,ah,amh)
          if(idotracking.eq.1)call dotrack(h,xmh,ntaysym,norder,ntrace)
        endif
      enddo

5 Parallel Scan Test Problem: the "GapTest" with scans of the quad strengths
Perform 25x25=625 "point simulations" using 128 processors per point simulation; parcel out the point simulations to each group of 128 procs.
For example, if NP=625x128=80000, each group of 128 procs does one point simulation. If NP=40000, each group of 128 procs does 2 point simulations in succession. … and so on.

#comments
#menu
beam: beam, particle=proton, ekinetic=250.d-3, bfreq=700.d6, bcurr=0.1d0
units: units, type=dynamic, l=1.0d0, w=2.d0*pi*700.d6
multiscan: mdscan, pmin1= 6.0, pmax1= 7.0, np1=25, pname1=fquad[2] &
           pmin2=-6.0, pmax2=-7.0, np2=25, pname2=dquad[2], nprocspptsim=128
dr: drift, l=0.10 slices=4
fquad: quadrupole, l=0.15 g1= 6.00 lfrn=0. tfrn=0. slices=6
dquad: quadrupole, l=0.30 g1=-6.00 lfrn=0. tfrn=0. slices=12
gapa1: rfgap,freq=7.e8,escale=4.e7,phasedeg=45.,file=rfdata1,steps=100,slices=5
gapb1: rfgap,freq=7.e8,escale=4.e7,phasedeg=-1.,file=rfdata1,steps=100,slices=5
pois: poisson, nx=64,ny=64,nz=64   !Poisson solver with variable grid size
pois_fixed: poisson, nx=32,ny=32,nz=64 gridsize=fixed, &
            xmin=-.0025, xmax=.0025, ymin=-.0029, ymax=.0029, zmin=-.0095, zmax=.0095
raysin: raytrace, type=readonly file1=partcl.data
slice: autoslice, control=local                   !this is actually the default
slice_global: autoslice, control=global, l=0.01   !approx 1cm slice thickness
dotrack: autotrack, type=taylor1
post: autoapply, name=prntall
prntmoms: moments, precision=9, nunits=1
prntref: reftraj, precision=9, nunits=1
prntmax: maxsize, precision=9, nunits=1
fileout: pmif, itype=1, isend=3
end: end
cell, line= (fquad dr gapa1 dr dquad dr gapb1 dr fquad)
!prntall, line=(prntmoms,prntref,prntmax)
prntall, line=(prntmoms)
#labor
slice
pois       !set poisson solver parameters
raysin     !read in some rays
prntall    !print moments and reference trajectory
dotrack    !tell the code to do autotracking
post       !after every slice, print moments
cell       !here is the beamline
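As an illustration of the parceling described above, the sketch below (not ML/I source; the program and variable names are assumptions) splits MPI_COMM_WORLD into groups of 128 ranks with MPI_COMM_SPLIT and assigns point simulations to the groups in round-robin fashion.

      program scan_groups
        use mpi
        implicit none
        integer, parameter :: nprocspptsim = 128, npoints = 625
        integer :: ierr, myrank, nprocs, ngroups, igroup, lgroup, ipoint
        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
        ! for simplicity, assume nprocs is a multiple of nprocspptsim
        ngroups = nprocs/nprocspptsim      ! e.g. 80000/128 = 625 groups
        igroup  = myrank/nprocspptsim      ! group to which this rank belongs
        ! one communicator per group of 128 procs; each point simulation runs on lgroup
        call MPI_COMM_SPLIT(MPI_COMM_WORLD, igroup, myrank, lgroup, ierr)
        do ipoint = igroup, npoints-1, ngroups
           ! run point simulation number ipoint on the group communicator lgroup
        end do
        call MPI_COMM_FREE(lgroup, ierr)
        call MPI_FINALIZE(ierr)
      end program scan_groups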

6 Use of a methodical coding style helps the transition to the parallel scan capability
Store MPI communication variables in a module
Everywhere in your code, refer to the pre-set, pre-stored variables:
—lworld (not MPI_COMM_WORLD)
—idproc (don't compute it "on the fly" in each routine)
Write to *named files* (never to unnamed files, i.e. never to a file that gets a default name like "fort.10")
If you code in this style, then enhancing your code to do parallel scans is easy. But meeting the prerequisites might be tedious if you have a large code that is not already written in this style!
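A minimal sketch of this style (lworld and idproc follow the slide; the module and routine names are assumptions): the communicator and rank are set once at startup, and every other routine simply uses the module.

      module parallel_vars
        implicit none
        integer :: lworld   ! communicator used throughout the code instead of MPI_COMM_WORLD
        integer :: idproc   ! rank of this process within lworld, set once at startup
      end module parallel_vars

      subroutine init_parallel
        use mpi
        use parallel_vars
        implicit none
        integer :: ierr
        call MPI_INIT(ierr)
        lworld = MPI_COMM_WORLD   ! for parallel scans, lworld can later point to a group communicator
        call MPI_COMM_RANK(lworld, idproc, ierr)
      end subroutine init_parallel

With this arrangement, routines write diagnostics to files whose names they construct (e.g. including idproc or a scan-point index) rather than relying on default names like "fort.10".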

7 MaryLie/IMPACT: Next Year Plans
Prepare tutorial for collaborators on parallel scans w/ MPI groups
Extend nonlinear rf cavity capability to deflecting modes
Extend parallel scan capability to prototype parallel optimization capability
Continued applications to CERN PS2
Improved user manual and example files