1
EPAC08 - Genova
- Participants > 1300
- The last EPAC → IPAC (Kyoto, IPAC10)
- Next: PAC09 in Vancouver
- Three-year cycle Asia / Europe / North America, plus PAC in North America in odd years (2011: Valencia & NY)
- 3 ILC talks:
  - Akira Yamamoto: Co-ordinated Global R&D Effort for the ILC Linac Technology
  - James Clarke: Design of the Positron Source for the ILC
  - Toshiaki Tauchi: The ILC Beam Delivery System Design and R&D Programme
2
Advanced Computing Tools and Models for Accelerator Physics
"We cannot foresee what this kind of creativity in physics will bring…"
Robert D. Ryne, Lawrence Berkeley National Laboratory
June 26, 2008, Genoa, Italy
3
Overview of High-Performance Computing for Accelerator Physics
- SciDAC (2001-06), the DOE program Scientific Discovery through Advanced Computing; AST (Accelerator Science and Technology)
- SciDAC2 (2007-11); COMPASS: The Community Petascale Project for Accelerator Science and Simulation
- Results shown mainly from the first SciDAC program
- National Energy Research Scientific Computing Center (NERSC), Berkeley: e.g. Franklin, Seaborg (decommissioned Jan 08, 6080 CPUs)
- ATLAS cluster at LLNL (~1000-node Linux cluster)
4
Same overview slide as above, now flagging "petascale" as the new buzzword.
5
Two weeks ago, the petaflop announcement: IBM "Roadrunner", with 100 million times the performance of computers at the time of the 1971 High Energy Accelerator Conference!
6
The same announcement slide, now with the hardware behind it: 6480 AMD dual-core Opterons plus one Cell processor (the PlayStation 3 chip) per core, at Los Alamos National Laboratory.
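As a sanity check of the slide's claim, the following is just arithmetic on the two numbers given (a factor of 10^8 between 1971 and 2008); it implies a performance doubling roughly every 17 months, consistent with Moore's-law-style growth.

```python
# Arithmetic check of the slide's claim only; no new data.
import math

factor = 1.0e8         # performance ratio quoted on the slide
years = 2008 - 1971    # 1971 HEACC -> 2008 petaflop announcement

doublings = math.log2(factor)                       # ~26.6 doublings of performance
print(f"doublings: {doublings:.1f}")
print(f"implied doubling time: {12 * years / doublings:.0f} months")
```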
7
GPUs gaining popularity
- NVIDIA Tesla C1060 PCIe card (Graphics Processing Unit), presented June 2008: 1 teraflop, 4 GB, 1.4 billion transistors, 240 cores, $1700
- It's called TESLA and runs at 1.3 GHz
- For comparison: the photo shown at PAC 2001 was Seaborg, at 3.4 Tflops!
8
What to do with all that computing power?
- Beam dynamics, multiparticle interactions, beams in plasma
- Component design (e.g. cavities)
- Codes, e.g. IMPACT, BeamBeam3D (Tevatron beam-beam), T3P/Omega3P (time- and frequency-domain solvers) … HPC, parallelisation (a toy sketch of the underlying tracking pattern follows below)
- The collaborative effort makes it possible to combine codes and to define common interfaces
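As background to what beam-dynamics codes like IMPACT actually compute, here is a minimal, self-contained sketch (purely illustrative, not the talk's code) of the split-operator macroparticle tracking pattern: alternate a linear transport map with a thin collective kick computed from the binned charge density. The 1D toy field, particle counts and step sizes are assumptions for illustration only.

```python
# Minimal sketch of split-operator macroparticle tracking: half step of external
# optics, a thin collective kick, another half step.  Toy 1D model; all numbers
# are illustrative and not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                      # macroparticles
x  = rng.normal(0.0, 1e-3, N)    # transverse position [m]
xp = rng.normal(0.0, 1e-4, N)    # transverse angle [rad]

def drift(x, xp, L):
    """Linear transport through a drift of length L."""
    return x + L * xp, xp

def space_charge_kick(x, xp, strength, nbins=64):
    """Thin 1D space-charge-like kick from a binned charge density (toy model)."""
    hist, edges = np.histogram(x, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Toy "field": cumulative charge to the left minus half the total,
    # which pushes particles away from the dense core.
    efield = np.cumsum(hist) - 0.5 * hist.sum()
    kick = strength * np.interp(x, centers, efield / hist.sum())
    return x, xp + kick

L_step, n_steps = 0.1, 100
for _ in range(n_steps):
    x, xp = drift(x, xp, 0.5 * L_step)        # half step: external optics
    x, xp = space_charge_kick(x, xp, 1e-6)    # full step: collective kick
    x, xp = drift(x, xp, 0.5 * L_step)        # half step: external optics

print(f"rms beam size after {n_steps} steps: {x.std():.3e} m")
```

Production codes replace the toy field with a parallel 3D Poisson solve and the drift with the machine's transfer maps, but the step structure is the same.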
9
Evolution of accelerator codes, 1970-2000 (timeline chart; partial list only, many codes not shown): from single-particle optics through 1D/2D and 3D collective effects to 3D self-consistent multi-physics, with parallelization beginning along the way. Codes and methods shown include Transport, MaryLie (Dragt-Finn), MAD, MAD-X/PTC, PARMILA (2D space charge), PARMELA, PARMTEQ, IMPACT-Z, IMPACT-T, ML/I, Synergia, ORBIT, BeamBeam3D, MXYZPTLK, COSY-Infinity, WARP, SIMPSONS, frequency maps, rms equations, normal forms, symplectic integration, DA, GCPIC, 3D space charge. Examples follow.
10
Modeling the FERMI@Elettra linac with IMPACT-Z using 1 billion macroparticles (100 MeV, 1.2 GeV). Ji Qiang, LBNL.
11
Accurate prediction of the uncorrelated energy spread in a linac for a future light source (Ji Qiang): final longitudinal phase space from IMPACT-Z simulations using 10M and 1B particles.
12
Final uncorrelated energy spread versus the number of macroparticles (10M, 100M, 1B, 5B): IMPACT-Z results and the microbunching instability gain function. Ji Qiang, M. Venturini.
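The reason the macroparticle count matters is the standard shot-noise argument (not spelled out on the slide): the artificial density noise of a sampled bunch scales as 1/sqrt(macroparticles per slice), and that numerical noise can seed the microbunching instability and inflate the predicted uncorrelated energy spread. A small sketch of the scaling, using an illustrative flat-top bunch and slice count:

```python
# Sketch of the shot-noise scaling only; the flat-top bunch and 1000 slices are
# illustrative assumptions, not the FERMI@Elettra setup.
import numpy as np

rng = np.random.default_rng(1)
nbins = 1000  # longitudinal slices along the bunch

# Measure the relative density noise directly for a modest sample ...
n_demo = 10_000_000
hist, _ = np.histogram(rng.uniform(-1, 1, n_demo), bins=nbins, range=(-1, 1))
print(f"measured relative noise at N=1e7 : {hist.std() / hist.mean():.2e}")
print(f"expected 1/sqrt(N/nbins)         : {1 / np.sqrt(n_demo / nbins):.2e}")

# ... and extrapolate the 1/sqrt(N) scaling to the particle counts on the slide.
for n in (1e7, 1e8, 1e9, 5e9):
    print(f"N = {n:.0e}: relative shot noise ~ {1 / np.sqrt(n / nbins):.1e}")
```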
13
BeamBeam3D simulation and visualization of the beam-beam interaction at the Tevatron, at 400 times the usual intensity (Eric Stern et al., FNAL).
14
Simulations of chains of cavities and of a full cryomodule
- 1.75 M quadratic elements, 10 M DOFs, 47 min per ns on 1024 CPUs of Seaborg with 173 GB memory, using CG with an incomplete Cholesky preconditioner (see the sketch below)
- 1 hour CPU time, 1024 processors, 300 GB memory at NERSC
- Sorry, could not get the movies.
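For readers unfamiliar with the solver named on the slide, here is a toy preconditioned conjugate-gradient example in SciPy. It is only a sketch: the matrix is a small 2D Laplacian rather than a finite-element cavity model, and SciPy's incomplete LU (spilu) stands in for the incomplete Cholesky factorization used by the production codes.

```python
# Toy preconditioned CG: 2D Laplacian stand-in for the cavity FEM matrix,
# incomplete LU (spilu) stand-in for incomplete Cholesky.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100                                                # grid points per side (toy size)
I = sp.identity(n, format="csc")
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()            # 2D Laplacian: symmetric positive definite
b = np.ones(A.shape[0])                                # arbitrary right-hand side

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)     # incomplete factorization of A
M = spla.LinearOperator(A.shape, matvec=ilu.solve)     # preconditioner M ~ A^-1

x, info = spla.cg(A, b, M=M)                           # preconditioned conjugate gradient
print("converged" if info == 0 else f"info={info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```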
15
Cavity coupler kicks (wakefield and RF): 6 posters
- Studies for the ILC (main linac/RTML) and FLASH
- Two numerical calculations of the RF kick: M. Dohlus (MAFIA) and V. Yakovlev (HFSS), ~30% different
- MOPP042, N. Solyak et al. (Andrea Latina, PLACET)
- TUPP047, D. Kruecker et al. (MERLIN)
16
Wakefields in periodic structures versus the number of cavities (M. Dohlus et al., MOPP013)
- Computations ranging from 1 CPU (without error estimate) up to 402 CPUs for 7 days
- The discussion of wakefield kicks started at 20 V/nC; the effect becomes smaller (see the back-of-envelope sketch below)
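For scale, a back-of-envelope estimate of what a 20 V/nC coupler-wake kick means: the kick voltage is the wake per unit charge times the bunch charge, and its ratio to the beam energy sets the deflection. The 0.6 nC bunch charge is the FLASH value from the next slide; the 1 GeV reference energy is an assumption for illustration only.

```python
# Back-of-envelope only; the 1 GeV reference energy is an assumed, illustrative value.
wake_per_charge = 20.0     # V/nC, the value the wakefield-kick discussion started from
bunch_charge = 0.6         # nC, bunch charge quoted for FLASH on the next slide
beam_energy_eV = 1.0e9     # eV, assumed reference energy

kick_voltage = wake_per_charge * bunch_charge      # ~12 V
relative_kick = kick_voltage / beam_energy_eV      # deflection relative to beam energy
print(f"kick ~ {kick_voltage:.0f} V, i.e. ~{relative_kick:.1e} of a 1 GeV beam")
```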
17
FLASH: simulation vs. measurement (0.6 nC)
- BPM11DBC2, OTR screens
- Coupler wakefield calculations from I. Zagorodnov and M. Dohlus
- E. Prat et al., TUPP018 (ELEGANT)