‘Computer power’ budget for the CERN Space Charge Group
Alexander Molodozhentsev
For the CERN-ICE ‘space-charge’ group meeting, March 16, 2012
LIU project

LIST of the CERN machines under consideration by the ICE space-charge group, in the frame of the LHC Injectors Upgrade Project:
- PS Booster: 4 people
- PS: 3
- SPS: 2
- LEIR: 2
- ‘RCS’ study: 1

Main items to study
‘SHORT-term’ tracking …
- Convergence study: PSB, PS, SPS, LEIR
- Injection process: PSB, PS, SPS, LEIR
- Optimization of the machine operation (‘working point’ scan): PSB, PS, SPS, LEIR
- Implementation of the machine imperfections: all
- MT extraction, bunch-splitting mechanism: PS
‘LONG-term’ tracking …
- Simulations for a full cycle of the machine with the ‘optimized’ set of the main parameters: all
… Current status

Computational tools
PTC-ORBIT code …
- compiled for the ‘lxplus’ machines …
- ‘batch’ runs … using ‘lxbst2001…2010’
CPU time depends on (2.5D model):
- number of the space-charge nodes around the machine
- number of the transverse mesh points for the Poisson solver
- number of the macro-particles representing the ‘real’ beam
- computational approach used to simulate the space-charge kick
… these should be OPTIMIZED for each machine to minimize the required CPU time per turn without artificial effects …
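These dependences can be folded into a rough per-turn cost model. The sketch below is purely illustrative and is not part of the PTC-ORBIT code: the coefficients a, b, c and the macro-particle count are hypothetical placeholders that would have to be fitted to measured timings such as the ~3 sec/turn (PSB) and ~12 sec/turn (PS) quoted on the next slides.

import math

# Illustrative cost model for the CPU time per turn of a 2.5D PIC space-charge simulation.
# a, b, c are hypothetical coefficients; only the scaling with the number of space-charge
# nodes, transverse mesh points and macro-particles is the point of this sketch.
def cpu_time_per_turn(n_sc_nodes, n_mesh, n_macro,
                      a=2.0e-6, b=1.0e-8, c=1.0e-8):
    tracking = a * n_macro                                  # lattice tracking of the macro-particles
    poisson = n_sc_nodes * b * n_mesh * math.log2(n_mesh)   # FFT Poisson solve at each SC node
    sc_kick = n_sc_nodes * c * n_macro                      # charge deposition + SC kick at each node
    return tracking + poisson + sc_kick

# PSB-like settings from the next slide: 199 SC nodes, 128x128 mesh; 5e5 macro-particles is an assumption.
print(cpu_time_per_turn(199, 128 * 128, 500_000))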

CERN PS Booster / CPU time
Lxplus (interactive):
- NO ‘Space charge’ module: 2 sec per turn
- WITH ‘Space charge’ module (PIC & FFT with chamber): 3 sec per turn
N_SP = 199 (???), N_mesh = 128x128, N_bin = 128 … should be optimized!
Processes, NOT processors!!! … always 8 processors ONLY!

CERN PS / CPU time
Lxplus (interactive):
- WITH ‘Space charge’ module (PIC & FFT with chamber): 12 sec per turn
N_SP = 70, N_mesh = 128x128, N_bin = 128 … should be optimized!

Required budget (1) …
- ‘Code setting’ optimization … convergence study …
‘Short-term’ tracking … at least one synchrotron period, i.e. ~1000 turns:
- PS Booster: 3 sec × 1000 turns ≈ 1 hour … at least 15 runs ≈ 15 hours
- PS: 12 sec × 1000 turns ≈ 3.4 hours … at least 10 runs ≈ 34 hours
ASSUMPTION: ‘waiting’ time is ZERO!
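These wall-clock estimates follow from straightforward arithmetic; the minimal helper below reproduces them (the function name and the zero-waiting-time assumption mirror the slide and are not part of any existing tool).

def campaign_hours(sec_per_turn, turns, runs):
    # Wall-clock hours for 'runs' sequential tracking runs, assuming zero queue waiting time.
    return sec_per_turn * turns * runs / 3600.0

# Convergence study, 'short-term' tracking over ~1 synchrotron period (~1000 turns):
print(campaign_hours(3.0, 1000, 15))    # PSB: ~12.5 h, quoted as ~15 h after rounding each run up to 1 h
print(campaign_hours(12.0, 1000, 10))   # PS:  ~33.3 h, quoted as ~34 h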

Required budget (2) …
- ‘Tune-scan’ analysis (FULL SCAN): multi-particle tracking during a few synchrotron periods … ~2,000 turns …
  ΔQ_SCAN ≈ 0.5 … ΔQ_STEP … → (20 × 20) points = 400 points
- PS Booster: 3 sec × 2000 turns ≈ 1.7 hours (one point) … 400 points × 1.7 h ≈ 28 days …
- PS: 12 sec × 2000 turns ≈ 6.7 hours (one point) … 400 points × 6.7 h ≈ 111 days …
ASSUMPTION: ‘waiting’ time is ZERO!
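The 28-day and 111-day totals follow directly from the per-point times; the short check below reproduces them and makes explicit that the scan is trivially parallel over the 400 working points (function name is ours, for illustration only).

def tune_scan_budget(sec_per_turn, turns, points):
    # Hours per working point and total days if the full (20 x 20) scan is run serially.
    hours_per_point = sec_per_turn * turns / 3600.0
    return hours_per_point, points * hours_per_point / 24.0

print(tune_scan_budget(3.0, 2000, 400))   # PSB: ~1.7 h per point, ~28 days run serially
print(tune_scan_budget(12.0, 2000, 400))  # PS:  ~6.7 h per point, ~111 days run serially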

Required budget (3) …
TOTAL CPU budget for the CERN Space Charge Group for the year 2012 (8 calendar months ≈ 6000 h):
50…60 processors per RUN → 5,000 ÷ 8,000 hours of the dedicated cores of the ‘lxplus’ cluster …
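One way to sanity-check these figures (our reading, not stated explicitly on the slide): 8 calendar months correspond to roughly 6000 wall-clock hours, and spreading the 400-point scans over ~50 concurrent cores brings the serial 28-day and 111-day totals down to a few days of elapsed time per scan.

# ~8 calendar months of wall-clock time (assuming ~730 h per month):
print(8 * 730)                        # ~5840 h, i.e. the ~6000 h quoted above

# Elapsed time of a 400-point tune scan if ~50 points run concurrently
# (one working point per core, zero waiting time) -- an assumed usage pattern:
concurrent = 50
print(400 / concurrent * 1.7 / 24)    # PSB: ~0.6 days per full scan
print(400 / concurrent * 6.7 / 24)    # PS:  ~2.2 days per full scan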

Additional comparison (‘batch’ regime …):
- mpich1 vs mpich2
- lxplus (engpara) vs cs-ccr-beabt1
Engpara: a set of 40 batch worker nodes equipped with low-latency 10 Gb Ethernet cards for batch runs of MPI applications; the nodes are Viglen CPU servers with 8-core Intel "Nehalem" L5520 chips and 48 GB of memory.
Cs-ccr-beabt1: no information available on the Internet.
v.0330

CERN PS / CPU time: Lxplus (‘batch’ regime), machine lxbsu2311 (timings in sec).

CERN PS / CPU time: N_p = 22.