1
Timestepping and Parallel Computing in Highly Dynamic N-body Systems Joachim Stadel stadel@physik.unizh.ch University of Zürich Institute for Theoretical Physics
2
Astrophysical N-body Simulations
[Figure: applications spanning LSS surveys, galaxy formation and solar system formation, with particle numbers ranging over many orders of magnitude (roughly 10^2 up to 10^13).]
Physics: gravity, hydrodynamics, collisions, near-integrable solar-system stability.
3
Outline
- Collisionless Simulations and Resolution
- Parallel Computers
- Tree Codes / Tree Codes on Parallel Computers
- PKDGRAV (and Gasoline)
- Applications / Various Movies
- Warm Dark Matter
- Multistepping Part 1
- New Parallelization Problems
- Multistepping Part 2
- Initial Conditions / Shells
- Blackhole "Mergers"
- Fast Multipole Method
- PKDGRAV2
- Cosmo Initial Conditions
- GHALO Simulation
- GHALO Prelim. Results: Density Profile, Phase-Space Density, Subhalos & Reionization
- What next?
4
WMAP Satellite 2003: Fluctuations in the Microwave Background Radiation. These are the initial conditions for structure formation. The Universe is completely smooth to one part in 100,000 at z=1000.
5
Green Bank radio galaxy survey (1990): 31,000 galaxies. At z=0, and on the very largest scales, the distribution of galaxies is in fact homogeneous.
6
On 'smaller' scales: redshift surveys
7
Numerical Simulation From the microwave background fluctuations to the present day structure seen in galaxy redshift surveys.
8
N-body simulations as models of stellar systems

Equations of motion:
$$\ddot{\mathbf{x}}_i = -\sum_{j \neq i}^{N} \nabla \Phi(\mathbf{x}_i, \mathbf{x}_j)$$

Typically $N_{\rm simulation} \ll N_{\rm real}$, so the equation above is NOT the one we should be solving. Instead we model the collisionless Boltzmann equation (CBE):
$$\frac{\partial f}{\partial t} + [f, H] = 0, \qquad \int f \, d\mathbf{z} = 1.$$

The CBE is a 1st-order non-linear PDE; such equations can be solved by the method of characteristics. The characteristics are the paths along which information propagates; for the CBE they are defined by
$$\frac{d\mathbf{x}}{dt} = \mathbf{v}, \qquad \frac{d\mathbf{v}}{dt} = -\nabla \Phi.$$
But these are the equations of motion we had above! $f$ is constant along the characteristics, so each particle carries a piece of $f$ along its trajectory.
9
The only difficulty is in evaluating $\Phi$. In terms of the distribution function,
$$\Phi(\mathbf{x}) = -GM \int d\mathbf{z}' \, \frac{f(\mathbf{z}')}{|\mathbf{x} - \mathbf{x}'|}.$$

Monte Carlo: for any reasonable function $g(\mathbf{z})$,
$$\int d\mathbf{z} \, g(\mathbf{z}) = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \frac{g(\mathbf{z}_i)}{f_s(\mathbf{z}_i)},$$
where the $\mathbf{z}_i$ are randomly chosen with sampling probability density $f_s$. Applying this to the Poisson integral,
$$\Phi(\mathbf{x}) \approx -\frac{GM}{N} \sum_{i=1}^{N} \frac{f(\mathbf{z}_i)/f_s(\mathbf{z}_i)}{|\mathbf{x} - \mathbf{x}_i|}.$$
In a conventional N-body simulation $f_s(\mathbf{z}) = f(\mathbf{z})$, so the particle density represents the underlying phase-space density.
10
Softening

The singularity at $\mathbf{x} = \mathbf{x}'$ in the Poisson integral causes very large scatter in the estimation of $\Phi$. This results in fluctuations in the potential, $\delta\Phi$, which have two effects.

1. A change in the particle's energy along its orbit:
$$\frac{dE}{dt} = \dot{\mathbf{x}} \cdot \frac{\partial E}{\partial \mathbf{x}} + \dot{\mathbf{v}} \cdot \frac{\partial E}{\partial \mathbf{v}} + \frac{\partial E}{\partial t} = \frac{\partial \Phi}{\partial t}.$$
Fluctuations in $\Phi$ due to discrete sampling cause a random walk in energy for the particle: this is two-body relaxation.

2. Mass segregation: if more and less massive particles are present, the less massive ones will typically recoil from an encounter with more velocity than a massive particle.

Softening, either explicitly introduced or as part of the numerical method, lessens these effects.
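To make this concrete, here is a minimal sketch in C of a Plummer-softened pair interaction; the particle struct, function name and softening parameter `eps` are illustrative choices, not PKDGRAV's actual API. Replacing $1/r$ by $1/\sqrt{r^2 + \epsilon^2}$ bounds the force at small separations and so suppresses the relaxation effects described above.

```c
#include <math.h>

/* Illustrative particle type; not PKDGRAV's actual data structure. */
typedef struct { double x[3], v[3], m; } Particle;

/* Plummer-softened acceleration of particle i due to particle j:
 * a_i += -G m_j (x_i - x_j) / (r^2 + eps^2)^(3/2).
 * The softening eps bounds the force at small separations, suppressing
 * the large scatter (and two-body relaxation) caused by the singular
 * point-mass interaction. */
static void pair_accel(const Particle *pi, const Particle *pj,
                       double G, double eps, double a[3])
{
    double dx[3], r2 = eps * eps;
    for (int k = 0; k < 3; k++) {
        dx[k] = pi->x[k] - pj->x[k];
        r2 += dx[k] * dx[k];
    }
    double inv_r3 = 1.0 / (r2 * sqrt(r2));
    for (int k = 0; k < 3; k++)
        a[k] -= G * pj->m * dx[k] * inv_r3;
}
```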
11
All N-body simulations of the CBE suffer from two-body relaxation! This is even more important for cosmological simulations, where all structures formed from smaller initial objects: all particles experienced a large relative degree of relaxation in the past. (Diemand and Moore 2002)
12
Increasing Resolution
- Cluster resolved: 67,500 particles
- Galaxy halos resolved: 1,300,000 particles
- Dwarf galaxy halos resolved: 10,500,000 particles
14
zBox (Stadel & Moore, 2002)
288 AMD MP2200+ processors, 144 GB RAM, 10 TB disk. Compact, easy to cool and maintain. Very fast Dolphin/SCI interconnects: 4 Gbit/s, microsecond latency. A teraflop computer for $500,000 ($250,000 with MBit). Roughly one cubic meter, one ton, and requiring 40 kilowatts of power.
15
Parallel supercomputing
16
500 CPUs / 640 GB RAM / ~100 TB of disk. A parallel computer is currently still mostly wiring. The human brain (Garry Kasparov) is no exception. However, wireless CPUs are now under development, which will revolutionize parallel computer construction.
17
Tree structures: spatial binary tree; k-D tree; spatial binary with "squeeze".
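A minimal sketch of what such a tree cell might look like in C (field names are illustrative, not PKDGRAV's actual structures). The "squeeze" refers to shrinking each cell's bounds to the particles it actually contains:

```c
/* Axis-aligned bounding box. */
typedef struct { double min[3], max[3]; } Bound;

/* Illustrative tree cell: a spatial binary tree splits the box in half
 * along one axis; a k-D tree splits at the particle median; a "squeezed"
 * spatial binary additionally shrinks `bnd` to the smallest box that
 * contains the cell's particles, tightening the force error bounds. */
typedef struct Cell {
    Bound  bnd;           /* (possibly squeezed) cell bounds           */
    int    iLower;        /* index of first child cell, or -1 if leaf  */
    int    iUpper;        /* index of second child cell, or -1 if leaf */
    int    pFirst, pLast; /* contiguous particle range in this cell    */
    double com[3];        /* center of mass                            */
    double mass;          /* total mass (monopole moment)              */
    /* higher-order multipole moments would follow here ...            */
} Cell;
```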
18
Forces are calculated using 4th-order multipole expansions. The Ewald summation technique is used to introduce periodic boundary conditions (also based on a 4th-order expansion). Work is tracked and fed back into the domain decomposition.
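For orientation, here is the generic form of a cell's multipole expansion, written only through quadrupole order (the slide does not give the expressions; PKDGRAV carries the expansion through 4th, i.e. hexadecapole, order):

$$\Phi_{\rm cell}(\mathbf{x}) \approx -\frac{GM}{r} - \frac{G}{2}\,\frac{\hat{\mathbf{r}}^{\mathsf T}\mathbf{Q}\,\hat{\mathbf{r}}}{r^{3}} - \cdots, \qquad Q_{ij} = \sum_k m_k \left( 3\, x^{(k)}_i x^{(k)}_j - |\mathbf{x}^{(k)}|^2 \delta_{ij} \right),$$

where $\mathbf{r}$ is measured from the cell's center of mass, so that the dipole term vanishes identically.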
19
Compute time vs. Accuracy
20
Parallelizing Gravity (PKDGRAV)
Spatial locality = computational locality (the interaction falls off as 1/r^2), so it is beneficial to divide space in order to achieve load balance; this also minimizes communication with other processors. But we must add a constraint on the number of particles per processor: memory is limited! Domain decomposition is a global optimization of these requirements, which is solved dynamically at every step. [Figure: example division of space for 8 processors.]
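The sketch below shows one classic way to do such a spatial division, orthogonal recursive bisection weighted by per-particle work; it is illustrative of the idea, not PKDGRAV's actual implementation, and all names are made up for the example. Weighting the split by the cost measured in the previous step is what feeds work tracking back into the decomposition.

```c
#include <stdlib.h>

/* Illustrative particle with a work weight from the previous step. */
typedef struct { double x[3]; double work; } Particle;

static int cmp_axis;  /* axis used by cmp(); set before each qsort */
static int cmp(const void *a, const void *b) {
    double d = ((const Particle *)a)->x[cmp_axis]
             - ((const Particle *)b)->x[cmp_axis];
    return (d > 0) - (d < 0);
}

/* Recursively assign particles p[0..n) to processors [proc0, proc0+nproc),
 * splitting space so that each side receives a share of the total work
 * proportional to its number of processors. owner[i] is the processor
 * of p[i] in the final (sorted) array order. */
static void orb(Particle *p, int n, int axis,
                int proc0, int nproc, int *owner)
{
    if (nproc == 1) {                     /* one domain left: assign */
        for (int i = 0; i < n; i++) owner[i] = proc0;
        return;
    }
    cmp_axis = axis;
    qsort(p, n, sizeof *p, cmp);          /* order along the split axis */
    double total = 0.0, half = 0.0;
    for (int i = 0; i < n; i++) total += p[i].work;
    int nl = nproc / 2, split = 0;
    double target = total * nl / nproc;   /* work share of left half */
    while (split < n && half < target) half += p[split++].work;
    orb(p, split, (axis + 1) % 3, proc0, nl, owner);
    orb(p + split, n - split, (axis + 1) % 3,
        proc0 + nl, nproc - nl, owner + split);
}
```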
21
Other decomposition strategies...
22
How are non-local parts of the tree walked by PKDGRAV? Low-latency message passing between CPUs is combined with a local cache of remote data elements. PKDGRAV does not attempt to determine in advance which data elements are going to be required in a step (as a Locally Essential Tree (LET) approach does). The hit rate in the cache is very good with as little as 10 MB.
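A minimal sketch of such a software cache in C, purely illustrative (line count, hash, and the `FetchFn` callback are assumptions, not PKDGRAV's actual scheme); in a real code the fetch callback would issue a low-latency message to the owning CPU and wait for the reply:

```c
#define CACHE_LINES 4096
#define CELL_BYTES  128        /* size of one cached cell, illustrative */

typedef struct {
    int  valid, cpu, idx;      /* tag: owning CPU and remote cell index */
    char data[CELL_BYTES];     /* cached copy of the remote cell        */
} CacheLine;

typedef void (*FetchFn)(int cpu, int idx, void *buf);  /* remote request */

static CacheLine cache[CACHE_LINES];   /* zero-initialized: all invalid */

/* Return a pointer to a locally cached copy of remote cell (cpu, idx). */
static const void *cache_lookup(int cpu, int idx, FetchFn fetch)
{
    unsigned h = ((unsigned)cpu * 2654435761u ^ (unsigned)idx) % CACHE_LINES;
    CacheLine *ln = &cache[h];
    if (!ln->valid || ln->cpu != cpu || ln->idx != idx) {
        fetch(cpu, idx, ln->data);     /* miss: request from remote CPU */
        ln->valid = 1; ln->cpu = cpu; ln->idx = idx;
    }
    return ln->data;                   /* hit: no communication needed  */
}
```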
23
PKDGRAV Scaling (PKDGRAV: Joachim Stadel & Thomas Quinn)
On the T3E it was possible to obtain 80% of linear scaling on 512 processors.
24
GASOLINE: Wadsley, Stadel & Quinn, NewA 2003
A fairly standard SPH formulation is used in GASOLINE (Evrard 1988; Benz 1989; Hernquist & Katz 1989; Monaghan 1992). SPH is very well matched to a particle-based gravity code like PKDGRAV, since all the core data structures and many of the same algorithms can be used. For example, the neighbor searching can simply use the parallel distributed tree structure.
25
Algorithms within GASOLINE
We perform two nearest-neighbor (NN) operations: 1. find the 32 NN and calculate densities; 2. calculate forces in a second pass. For active particles we do a gather on the k-NN, and a scatter from the k-inverse-NN. We never store the nearest neighbors (Springel 2001 is similar). Cooling, heating and ionization are quite efficient.
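A minimal sketch of the density gather step, assuming the classic cubic-spline kernel of Monaghan (1992) with support radius 2h; the neighbor arrays and function names are illustrative, not GASOLINE's API:

```c
#include <math.h>

/* Cubic spline kernel W(r,h) (Monaghan 1992), support radius 2h. */
static double kernel_w(double r, double h)
{
    double q = r / h, s = 1.0 / (M_PI * h * h * h);
    if (q < 1.0) return s * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
    if (q < 2.0) { double t = 2.0 - q; return s * 0.25 * t * t * t; }
    return 0.0;
}

/* Density "gather": kernel-weighted sum over the k nearest neighbors
 * (positions xn, masses mn, found via the tree) of a particle at xi. */
static double sph_density(const double xi[3], double h,
                          const double (*xn)[3], const double *mn, int k)
{
    double rho = 0.0;
    for (int j = 0; j < k; j++) {
        double dx = xi[0] - xn[j][0];
        double dy = xi[1] - xn[j][1];
        double dz = xi[2] - xn[j][2];
        rho += mn[j] * kernel_w(sqrt(dx*dx + dy*dy + dz*dz), h);
    }
    return rho;
}
```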
26
The Large Magellanic Cloud (LMC) in gas and stars. Chiara Mastropietro (University of Zürich). With a fully dynamical Milky Way halo (dark matter, hot gas, stellar disk and bulge), which is not shown here. Both tidal and ram-pressure stripping of gas are taking place.
27
Collisional Physics (Derek C. Richardson)
Gravity with hard spheres, including surface friction, coefficients of restitution and aggregates; the Euler equations for rigid bodies.
28
Asteroid Collisions
29
Part of an asteroid disk, in which the outcomes of the asteroid impact simulations are included.
31
Movies of 1000 years of evolution.
32
The power spectrum of density fluctuations in three different dark matter models. [Figure: power spectrum from the CMB/horizon scale down through large scales (galaxy clusters) to small scales (dwarf galaxies).]
33
40 Mpc box, N = 10^7, Andrea Macciò et al.: CDM (T = GeV)
34
40 Mpc box, N = 10^7, Andrea Macciò et al.: WDM (T = 2 keV)
35
40 Mpc box, N = 10^7, Andrea Macciò et al.: WDM (T = 0.5 keV)
40
CDM: ~500 satellites; 1 keV WDM: ~10 satellites. This is a very strong constraint on the lowest-mass WDM candidate: we need to form at least one Draco-sized substructure halo. Halo density profiles are unchanged; Liouville's constraint gives cores ≲ 50 pc.
41
Subhalo abundances: CDM, n(M) ∝ M^-2; WDM, n(M) ∝ M^-1; data, n(L) ∝ L^-1.
42
With fixed timesteps these codes all scale very well. However, this is no longer the only measure, since the scaling of a very "deep" multistepping run can be a lot worse. How do we do multistepping now, and why does it have problems?
43
Drift-Kick-Drift (DKD) Multistepping Leapfrog
[Diagram: Drift and Kick operations on rungs 0, 1 and 2 over time, with Select operations in between.] Note that none of the Kick tick marks align, meaning that gravity is calculated for a single rung at a time, despite the fact that the tree is built for all particles. The Select operators are performed top-down until all particles end up on appropriate timestep rungs. Operator sequences: 0: DSKD, 1: DS(DSKDDSKD)D, 2: DS(DS(DSKD...
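A tiny, purely illustrative C sketch that generates these operator strings recursively, making the nesting explicit: each rung opens with a half Drift and a Select; the deepest rung is Kicked, otherwise two half-length sub-steps recurse to the next rung; a closing half Drift finishes the step.

```c
#include <stdio.h>

/* Print the DKD multistepping operator sequence for a given rung depth. */
static void dkd_step(int rung, int max_rung)
{
    printf("D");                 /* opening half drift    */
    printf("S");                 /* select timestep rungs */
    if (rung == max_rung) {
        printf("K");             /* kick the deepest rung */
    } else {
        printf("(");
        dkd_step(rung + 1, max_rung);   /* first half-step  */
        dkd_step(rung + 1, max_rung);   /* second half-step */
        printf(")");
    }
    printf("D");                 /* closing half drift */
}

int main(void)
{
    /* Prints 0: DSKD, 1: DS(DSKDDSKD)D, 2: DS(DS(DSKDDSKD)DDS(DSKDDSKD)D)D */
    for (int m = 0; m <= 2; m++) {
        printf("%d: ", m);
        dkd_step(0, m);
        printf("\n");
    }
    return 0;
}
```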
44
Kick-Drift-Kick (KDK) Multistepping Leapfrog
[Diagram: as above, with Select operations between steps.] This method is more efficient, since it performs half the number of tree-build operations. It also exhibits somewhat lower errors than the standard DKD integrator. It is the only scheme used in production at present.
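For reference, a minimal single-rung KDK step in C (uniform timestep; names illustrative). The `accel` callback, recomputed between the drift and the closing half kick, is where the tree build and gravity walk would happen; the multistepping version interleaves such steps across rungs as sketched above.

```c
/* One kick-drift-kick leapfrog step for n particles sharing timestep dt. */
typedef struct { double x[3], v[3], a[3]; } Body;

void kdk_step(Body *p, int n, double dt,
              void (*accel)(Body *, int))  /* gravity: tree build + walk */
{
    for (int i = 0; i < n; i++)            /* opening half kick */
        for (int k = 0; k < 3; k++) p[i].v[k] += 0.5 * dt * p[i].a[k];
    for (int i = 0; i < n; i++)            /* full drift        */
        for (int k = 0; k < 3; k++) p[i].x[k] += dt * p[i].v[k];
    accel(p, n);                           /* recompute accelerations */
    for (int i = 0; i < n; i++)            /* closing half kick */
        for (int k = 0; k < 3; k++) p[i].v[k] += 0.5 * dt * p[i].a[k];
}
```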
45
Choice of Timestep
We want a criterion which commutes with the Kick operator and is Galilean invariant, so it should not depend on velocities. We can take the minimum of any or all of these criteria: a local one, and a non-local one based on the maximum acceleration in moderate densities.
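The slide does not spell out the formulas, but two representative velocity-independent criteria of this kind are an acceleration-and-softening based local bound and a dynamical-time (density) bound:

$$\Delta t_i \le \eta \sqrt{\frac{\epsilon_i}{|\mathbf{a}_i|}}, \qquad \Delta t_i \le \frac{\eta'}{\sqrt{G \rho_i}},$$

with $\eta, \eta'$ dimensionless accuracy parameters, $\epsilon_i$ the softening and $\rho_i$ a local density estimate; the timestep actually used is the minimum over all active criteria.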
46
Multistepping: the real parallel computing challenge.
The natural timestep scales as T ~ 1/sqrt(Gρ), even more dramatically in SPH. This implies N_active << N, so a global approach to load balancing fails: there is less compute per unit of communication, and too many synchronization points between all processors. We want all algorithms of the simulation code to scale as O(N_active log N)! Everything that doesn't introduces a fixed cost which limits the speed-up attainable from multistepping.
47
The Trends
- Parallel computers are getting ever more independent computing elements, e.g. BlueGene (100,000s of them), multicore CPUs.
- Our simulations are always increasing in resolution, and hence we need many more timesteps than were required in the past.
- Multistepping methods have ever more potential to speed up calculations, but introduce new complexities into codes, particularly for large parallel machines.
48
What can be done?
- Tree repair instead of rebuild.
- Don't drift all particles; only drift terms that appear on the interaction list!
- Do smart updates of local cache information instead of flushing at each timestep.
- Use some local form of load balancing, perhaps scheduling? Remote walks?
- Allow different parts of the simulation to get somewhat out of sync?
- Use O(N^2) for very active regions.
- Hybrid methods: Block+Symba.
49
"Take-away's" on Parallel Computing in N-body Simulations Multistepping is a key ingredient of higher resolution simulations. Multistepping is a key ingredient of higher resolution simulations. Multistepping creates challenging parallel computing problems, particularly as machines as machines grow in number of CPUs. Multistepping creates challenging parallel computing problems, particularly as machines as machines grow in number of CPUs. Multistepping must also be done carefully with algorithms that try to preserve time reversal or other symmetries. Multistepping must also be done carefully with algorithms that try to preserve time reversal or other symmetries. As adaptivity in space pushes us to hybrid approaches, adaptivity in time also push us to hybrid techniques (TreeSymba later). As adaptivity in space pushes us to hybrid approaches, adaptivity in time also push us to hybrid techniques (TreeSymba later).