Large scale simulations of astrophysical turbulence

Large scale simulations of astrophysical turbulence
Axel Brandenburg (Nordita, Copenhagen), Wolfgang Dobler (Univ. Calgary), Anders Johansen (MPIA, Heidelberg), Antony Mee (Univ. Newcastle), Nils Haugen (NTNU, Trondheim), et al.
Talk given at the Workshop on Large Scale Computation in Astrophysics, Oct 14, 2004. (...just google for Pencil Code)

Overview
- History: as many versions as there are people?
- Example of a cost-effective MPI code, ideal for Linux clusters
- Pencil formulation (advantages, headaches)
- (Radiation: as a 3-step process)
- How to manage the contributions of 20+ people: development issues, CVS maintenance
- Numerical issues: high-order schemes, tests
- Peculiarities on big Linux clusters
- Online data processing/visualization

Pencil Code
- Started in Sept. 2001 with Wolfgang Dobler
- High order (6th order in space, 3rd order in time)
- Cache and memory efficient
- MPI, can run PacxMPI (across countries!)
- Maintained/developed by many people (CVS!)
- Automatic validation (overnight or any time)
- Max resolution so far: 1024³ on 256 procs

Range of applications
- Isotropic turbulence: MHD (Haugen), passive scalar (Käpylä), cosmic rays (Snod, Mee)
- Stratified layers: convection, radiative transport (T. Heinemann)
- Shearing box: MRI (Haugen), planetesimals, dust (A. Johansen), interstellar (A. Mee)
- Sphere embedded in a box: fully convective stars (W. Dobler), geodynamo (D. McMillan)
- Other applications and future plans: homochirality (models of the origin of life, with T. Multamäki), spherical coordinates

Pencil formulation
- In Cray days: worked with full chunks f(nx,ny,nz,nvar); now, on SGI, nearly 100% cache misses
- Instead, work with f(nx,nvar), i.e. one nx-pencil: no cache misses, negligible work space, just 2N
- Can keep all components of derivative tensors
- Communication before each sub-timestep; then evaluate all derivatives, e.g. call curl(f,iA,B)
- Vector potential A=f(:,:,:,iAx:iAz), result B=B(nx,3)
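
Schematically, the update then becomes a loop over all (m,n) = (y,z) positions, touching one x-pencil of all variables at a time. The runnable sketch below shows only this loop structure; the array shapes follow the slide, but the right-hand side is a trivial stand-in and none of the names are the actual Pencil Code API:

```fortran
! Minimal sketch of the pencil ("mn") loop: the full 3D array f holds
! all variables, but work arrays are only one x-pencil long, so the
! working set stays cache resident.  (Illustrative only.)
program pencil_demo
  implicit none
  integer, parameter :: nx=32, ny=8, nz=8, nvar=2
  real :: f(nx,ny,nz,nvar)   ! full data cube, all variables
  real :: df(nx,nvar)        ! work space: just one pencil (the "2N")
  integer :: m, n
  call random_number(f)
  do n = 1, nz               ! loop over all pencils in z ...
    do m = 1, ny             ! ... and in y
      df = f(:,m,n,:)**2             ! stand-in for the real RHS terms
      f(:,m,n,:) = f(:,m,n,:) + 1e-3*df
    end do
  end do
  print *, 'done; f(1,1,1,1) =', f(1,1,1,1)
end program pencil_demo
```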

A few headaches
- All operations must be combined: curl(curl) or max5(smooth(divu)) must be done in one go; out-of-pencil exceptions are possible
- rms and max values for monitoring: call max_name(b2,i_bmax,lsqrt=.true.), call sum_name(b2,i_brms,lsqrt=.true.)
- Similar routines for toroidal averages, etc.
- Online analysis (spectra, slices, vectors)

CVS maintained
- pserver (password protected, port 2301): non-public (ci/co, 21 people), public (check-out only, 127 registered users)
- Set of 15 test problems in the auto-test
- Nightly auto-test (different machines, results on the web)
- Before check-in: run the auto-test yourself
- mpi and nompi dummy modules for single-processor machines (or use LAM/MPI on laptops)

Switch modules
- magnetic or nomagnetic (e.g. just hydro)
- hydro or nohydro (e.g. kinematic dynamo)
- density or nodensity (burgulence)
- entropy or noentropy (e.g. isothermal)
- radiation or noradiation (solar convection, discs)
- dustvelocity or nodustvelocity (planetesimals)
- coagulation, reaction equations
- homochirality (reaction-diffusion-advection equations)

Features, problems
- Namelists (can freely introduce new parameters)
- Upgrades forgotten on no-modules (caught by the auto-test)
- SGI namelist problem (see the Pencil Code FAQs)

Pencil Code check-ins

High-order schemes
- Alternative to spectral or compact schemes: efficiently parallelized, no transpose necessary; no restriction on boundary conditions; curvilinear coordinates possible (except at singularities)
- 6th order central differences in space
- Non-conservative scheme: allows use of logarithmic density and entropy; copes well with strong stratification and temperature contrasts
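
As a concrete, standalone illustration (not code from the Pencil Code itself), the 6th-order central first derivative on a periodic grid reads f'_i = [45(f_{i+1} - f_{i-1}) - 9(f_{i+2} - f_{i-2}) + (f_{i+3} - f_{i-3})]/(60 dx):

```fortran
! 6th-order central first derivative on a periodic grid, verified
! against d(sin x)/dx = cos x.
program deriv6
  implicit none
  integer, parameter :: nx = 64
  real, parameter :: twopi = 6.2831853
  real :: f(nx), df(nx), dx, err
  integer :: i
  dx = twopi/nx
  f = [( sin((i-1)*dx), i = 1, nx )]
  do i = 1, nx
    df(i) = ( 45.0*(f(ip(i,1)) - f(ip(i,-1)))  &
             - 9.0*(f(ip(i,2)) - f(ip(i,-2)))  &
             +      f(ip(i,3)) - f(ip(i,-3)) ) / (60.0*dx)
  end do
  err = maxval(abs(df - [( cos((i-1)*dx), i = 1, nx )]))
  print *, 'max error:', err   ! scales as (k dx)^6 for a single mode
contains
  pure integer function ip(i, s)   ! periodic index shift
    integer, intent(in) :: i, s
    ip = modulo(i-1+s, nx) + 1
  end function ip
end program deriv6
```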

(i) High-order spatial schemes. Main advantage: low phase errors.

Wavenumber characteristics

Higher order – less viscosity

Less viscosity – also in shocks

(ii) High-order temporal schemes. Main advantage: low amplitude errors. 2N-RK3 scheme (Williamson 1980). [figure: amplitude error for 1st-, 2nd-, and 3rd-order schemes]
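
A minimal sketch of the 2N idea: a third-order Runge-Kutta that needs only two storage registers per variable. The coefficients below are the Williamson (1980) values commonly quoted for this scheme; the test problem du/dt = -u is just for illustration:

```fortran
! 2N-RK3 (Williamson 1980): third-order accuracy with only two
! registers per variable (u and w), here integrating du/dt = -u.
program rk3_2n
  implicit none
  real, parameter :: alpha(3) = [ 0.0, -5.0/9.0, -153.0/128.0 ]
  real, parameter :: beta(3)  = [ 1.0/3.0, 15.0/16.0, 8.0/15.0 ]
  integer, parameter :: nt = 100
  real :: u, w, dt
  integer :: it, is
  u  = 1.0
  dt = 1.0/nt
  do it = 1, nt
    w = 0.0
    do is = 1, 3
      w = alpha(is)*w + dt*(-u)   ! accumulate the RHS in register w
      u = u + beta(is)*w          ! update the solution register u
    end do
  end do
  print *, 'u(t=1) =', u, '  exact:', exp(-1.0)
end program rk3_2n
```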

Shock tube test

Hydromagnetic turbulence and subgrid scale models?
- Want to shorten the diffusive subrange (a waste of resources) and prolong the inertial range: Smagorinsky (LES), hyperviscosity, ...
- Focus on the essential physics (i.e. the inertial range)
- Reasons to be worried about hyperviscosity: shallower spectra; wrong amplitudes of the resulting large scale fields
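
For concreteness: hyperviscosity replaces the ordinary Laplacian diffusion operator by a higher power of it, concentrating dissipation at the smallest scales (schematically, with n = 3 a common choice):

```latex
\nu\,\nabla^{2}\mathbf{u} \;\longrightarrow\; (-1)^{n+1}\,\nu_n\,\nabla^{2n}\mathbf{u},
\qquad n = 3:\;\; \nu_3\,\nabla^{6}\mathbf{u}
```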

Simulations at 512³ (cf. Biskamp & Müller 2000). [figure: spectra with normal diffusivity vs. with hyperdiffusivity]

The bottleneck is a physical effect. [figure: compensated spectra] Porter, Pouquet & Woodward (1998) using PPM, 1024³ mesh points; Kaneda et al. (2003) on the Earth Simulator, 4096³ mesh points (dashed: Pencil Code at 1024³).
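
A "compensated spectrum" here means the energy spectrum scaled by its expected Kolmogorov inertial-range form, so that a k^(-5/3) range shows up as a plateau:

```latex
E_{\rm comp}(k) \;=\; \epsilon^{-2/3}\,k^{5/3}\,E(k)
```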

Bottleneck effect: 1D vs. 3D spectra. [figure: compensated spectra, 1D vs. 3D]

Relation to ‘laboratory’ 1D spectra

Hyperviscous, Smagorinsky, normal (Haugen & Brandenburg, PRE, astro-ph/0402301): height of the bottleneck increased; onset of the bottleneck at the same position; inertial range unaffected by artificial diffusion.

256-processor run at 1024³

Structure function exponents agree with She-Leveque (third moment). [figure]
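
For reference, the She-Leveque (1994) prediction for the structure-function scaling exponents, which is exact at the third moment (ζ₃ = 1):

```latex
\zeta_p \;=\; \frac{p}{9} + 2\left[\,1 - \left(\frac{2}{3}\right)^{p/3}\right]
```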

Helical dynamo saturation with hyperdiffusivity: for ordinary hyperdiffusion, the ratio is 125 instead of 5. [figure]

Slow-down explained by magnetic helicity conservation (at the molecular value!). [figure]
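
A sketch of the underlying result, following Brandenburg (2001): after the kinematic phase, helicity conservation forces the mean field to saturate only on the resistive time scale set by the molecular η (B_eq is the equipartition field, k_f the forcing wavenumber, k_1 the box wavenumber):

```latex
\langle\mathbf{B}^2\rangle \;\simeq\; B_{\rm eq}^2\,\frac{k_{\rm f}}{k_1}
\left[\,1 - e^{-2\eta k_1^2 (t - t_{\rm sat})}\right]
```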

MHD equations: the magnetic induction equation (in vector potential form), plus the momentum and continuity equations. [equations shown as images on the slide]
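
The equations on this slide were images; the following is a sketch of the system in the vector-potential form the labels suggest, written here in its isothermal version (the gauge choice and the omission of the entropy equation are assumptions for compactness):

```latex
\frac{\partial\mathbf{A}}{\partial t} = \mathbf{u}\times\mathbf{B} + \eta\,\nabla^2\mathbf{A},
\qquad \mathbf{B} = \nabla\times\mathbf{A}, \qquad \mu_0\mathbf{J} = \nabla\times\mathbf{B},
\\[4pt]
\frac{\mathrm{D}\mathbf{u}}{\mathrm{D}t} = -c_{\rm s}^2\,\nabla\ln\rho
  + \frac{\mathbf{J}\times\mathbf{B}}{\rho} + \mathbf{F}_{\rm visc},
\qquad
\frac{\mathrm{D}\ln\rho}{\mathrm{D}t} = -\nabla\cdot\mathbf{u}
```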

Vector potential: B = curl A; advantage: div B = 0 is exact. J = curl B = curl(curl A) = curl² A. Not a disadvantage: consider Alfvén waves in the B-formulation vs. the A-formulation; taking a 2nd derivative once is better than a 1st derivative twice!
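
The double curl reduces to familiar operators through the standard vector identity, so no new machinery is needed:

```latex
\mathbf{J} \;=\; \nabla\times(\nabla\times\mathbf{A})
\;=\; \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A}
```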

Comparison of A and B methods

Wall-clock time versus processor number: nearly linear scaling. 100 Mb/s shows limitations; 1-10 Gb/s shows no limitation. [figure]

Sensitivity to layout on Linux clusters (Gigabit uplink, 100 Mbit links only, 24 procs per hub):

  yproc x zproc    relative speed
  4 x 32           1 (reference)
  8 x 16           3 times slower
  16 x 8           17 times slower

Why this sensitivity to layout? [diagram: 2D processor grid] All processors need to communicate with processors outside the group of 24.

Use exactly 4 columns: then only 2 x 4 = 8 processors need to communicate outside the group of 24, making optimal use of the speed ratio between the 100 Mb ethernet switch and the 1 Gb uplink. [diagram: 2D processor grid]

Fragmentation over many switches

Pre-processed data for animations

Ma=3 supersonic turbulence

Animation of B vectors

Animation of energy spectra: very long run at 512³ resolution

MRI turbulence (MRI = magnetorotational instability): 256³ without hyperviscosity, t = 600 = 20 orbits; 512³ without hyperviscosity, Δt = 60 = 2 orbits.

Fully convective star

Geodynamo simulation

Homochirality: competition of left- and right-handed forms; reaction-diffusion equation. [equation shown as image on the slide]

Conclusions
- Subgrid scale modeling can be unsafe (for some problems): shallower spectra, longer time scales, different saturation amplitudes (in helical dynamos)
- High order schemes: low phase and amplitude errors; need less viscosity
- 100 Mb link close to bandwidth limit; comparable to, and now faster than, an Origin; 2x faster with a Gb switch; 100 Mb switches with a Gb uplink roughly optimal

The transfer equation and parallelization. Along each ray direction the solve proceeds in three steps: (1) an intrinsic calculation within each processor's subdomain, (2) communication of the accumulated solution across processor boundaries along the ray, and (3) a final intrinsic calculation that folds in the communicated boundary values. [diagrams: processors, analytic solution, ray direction]
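
The "analytic solution" on these slides is the formal solution of the transfer equation along a ray; it is what lets each processor precompute its intrinsic contribution before the upstream boundary intensity arrives:

```latex
\frac{\mathrm{d}I}{\mathrm{d}\tau} = S - I
\quad\Longrightarrow\quad
I(\tau) = I(0)\,e^{-\tau} + \int_0^{\tau} S(\tau')\,e^{-(\tau-\tau')}\,\mathrm{d}\tau'
```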

Current implementation
- Plasma composed of H and He
- Only hydrogen ionization
- Only H⁻ opacity, calculated analytically: no need for look-up tables
- Ray directions determined by the grid geometry: no interpolation needed

Convection with radiation