Simulation of Gravitational Collapse in General Relativity
Collaborators: Matthew Choptuik (CIAR/UBC), Eric Hirschmann (BYU), Steve Liebling (LIU)
Analysis, Computation and Collaboration, Simon Fraser University, July 27, 2001
Frans Pretorius, UBC
http://laplace.physics.ubc.ca/People/fransp/
2 Outline
Brief description of current topics of interest in numerical relativity:
–gravitational waves from astrophysically relevant sources
–gravitational collapse and critical phenomena
Challenges in simulating spacetime:
–singularity avoidance
–choice of coordinates
–computational complexity
3 Outline
Techniques and tools in our approach to the problem:
–RNPL
–AMR
–DAGH/GrACE
–data collection, analysis and visualization
4 Gravitational Waves and Black Holes General relativity is a theory of space and time, and of how matter interacts with them. What matter experiences as the force of gravity is a consequence of the matter existing in a curved spacetime; in turn, it is the matter that causes spacetime to curve. Two of the more intriguing consequences of general relativity are black holes and gravitational waves.
5 Black Holes When enough matter/energy is compressed into a region smaller than its Schwarzschild radius, spacetime undergoes gravitational collapse, forming a black hole.
6 Black Holes A black hole is a region of spacetime that is causally disconnected from the rest of the universe, i.e. anything inside that moves at speeds equal to or less than the speed of light cannot escape. Classically, spacetime singularities always form inside black holes — bad for numerics.
7 Gravitational Waves Gravitational waves are ripples in the geometry of spacetime, travelling at the speed of light. In the weak-field approximation, there are 2 linearly independent polarizations (commonly denoted h+ and h×): Figure from A. Abramovici et al., Science (1992)
8 Gravitational Waves Gravitational wave “observatories”, which are large laser interferometers, are currently being built, and should start gathering data within the next couple of years.
9 Gravitational Waves Optimistic estimates for the strength of gravitational waves reaching Earth from plausible astrophysical sources suggest a dimensionless strain h = ΔL/L of at most about 10^-21. Current laser interferometry technology may be able to detect changes in length ΔL on the order of 10^-18 m (about 1/1000 the diameter of the nucleus of an atom!); thus arm lengths L on the order of 1-10 km are required.
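A quick consistency check of these numbers (illustrative arithmetic; the 4 km arm length, roughly that of the LIGO detectors, is an assumption, not from the slide):

\[ \Delta L = h\,L \approx 10^{-21} \times 4\,\mathrm{km} \approx 4\times 10^{-18}\,\mathrm{m} \]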
10 Sources of Gravitational Waves
To produce measurable quantities of gravitational radiation, large, compact distributions of energy must move around at speeds close to the speed of light c. Here, compact means that the radius of the object is comparable to its Schwarzschild radius R_s = 2GM/c^2 (see the worked value below).
Possible sources:
–mergers and collisions of black holes/neutron stars
–supernovae explosions
–early-universe phenomena?
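For concreteness, the worked value referred to above (standard constants, not from the slide), for one solar mass:

\[ R_s = \frac{2GM_\odot}{c^2} \approx \frac{2\,(6.67\times 10^{-11})\,(2.0\times 10^{30})}{(3.0\times 10^{8})^{2}}\ \mathrm{m} \approx 3\ \mathrm{km} \]

So a neutron star (radius ~10 km) qualifies as compact in this sense, while an ordinary star like the Sun (radius ~7×10^5 km) does not.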
11 Binary Black Hole Merger Waveform Estimate Figure from A. Abramovici et al., Science (1992)
12 Head-on Black Hole Collision (figure: part of the gravitational wave)
13 “Waveform Extraction”
14 Critical Phenomena Near the threshold of black hole formation, the spacetime geometry and matter fields exhibit critical behavior, discovered numerically by Choptuik in 1993. The threshold of black hole formation can be found by fine-tuning an appropriate one-parameter (p) family of initial data. The threshold solution is denoted by p=p*. For p>p* a black hole will form during evolution of the data, while for p<p* no black hole forms.
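In practice the fine-tuning is a bisection search on p: evolve the data, test whether a black hole forms, and narrow the bracket until p is known to near machine precision. A minimal C++ sketch of that search, with the full numerical evolution replaced by a toy stand-in (this is not the code from the talk):

#include <cstdio>

// Toy stand-in for a full numerical evolution: reports whether a black
// hole (apparent horizon) forms for parameter p.  In the real code this
// is an entire axisymmetric evolution; here p* is artificially 0.37.
static bool evolve_forms_black_hole(double p) { return p > 0.37; }

// Bisection search for the critical parameter p*, given a bracket
// [p_lo, p_hi] that is subcritical at p_lo and supercritical at p_hi.
static double find_p_star(double p_lo, double p_hi, int n_iter)
{
    for (int i = 0; i < n_iter; ++i) {
        double p_mid = 0.5 * (p_lo + p_hi);
        if (evolve_forms_black_hole(p_mid))
            p_hi = p_mid;   // black hole formed: p* is at or below p_mid
        else
            p_lo = p_mid;   // no black hole: p* is above p_mid
    }
    return 0.5 * (p_lo + p_hi);
}

int main()
{
    // ~50 halvings shrink the bracket by a factor 2^50 ~ 10^15,
    // i.e. down to roughly machine precision in double.
    printf("p* ~= %.15f\n", find_p_star(0.0, 1.0, 50));
    return 0;
}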
15 Critical Phenomena In the supercritical regime (p>p*) for so-called Type II critical phenomena, the resulting black hole masses are found to scale as M_BH ∝ (p - p*)^γ, where γ is a universal exponent. The spacetime geometry and matter fields approach a unique solution as p approaches p*.
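Equivalently (a standard step, not spelled out on the slide), γ is read off as the slope of a log-log plot of black hole mass against parameter distance from threshold:

\[ \ln M_{\rm BH} = \gamma\,\ln(p - p^*) + \mathrm{const} \]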
16 Type II Critical Behavior — SU(2) Yang-Mills field. Top evolution is sub-critical, bottom one is super-critical. W vs log(1+r). Choptuik, Hirschmann and Marsa (1999)
17 Type I Critical Behavior. Weak-field evolution. W(r) vs r. Choptuik, Hirschmann and Marsa (1999)
18 Type I Critical Behavior. Top evolution is sub-critical, bottom one is super-critical. (W-1)/r vs log(r). Choptuik, Hirschmann and Marsa (1999)
19 Numerical Challenges in Simulating Spacetime
Coordinate freedom — because we are trying to solve for the structure of spacetime, coordinates are merely labels, and do not, a priori, have any geometric significance. A poor choice of coordinates can lead to coordinate singularities, and hence code crashes.
Singularity avoidance — black holes contain physical singularities that must be “avoided”:
–use a singularity-avoiding slicing condition (plagued by “grid-stretching”)
–black hole excision
20 Black Hole Excision Axisymmetric scalar field collapse to a black hole. The animation shows a geometric variable which diverges like 1/r inside the black hole.
21 Numerical Challenges
Large computational requirements — back-of-the-envelope calculations suggest that the 2 black hole collision problem (in 3D, using finite-difference techniques) will require on the order of 1 CPU week on a 1 TFLOP/s system (see the arithmetic note after this list). Reasons:
–there are typically ~100 variables per grid point
–some of the more complicated evolution/constraint equations have hundreds or even thousands of terms each
22 Numerical Challenges
–the problem has a large range of dynamical scales:
a black hole's radius is 2M (in geometric units); this region needs to be well resolved for stable excision
wavelengths of gravitational waves ~ 20M
the outer boundary needs to be at least as far out as R~100M for accurate waveform extraction, due to the non-linear nature of the interactions close to the merger
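For scale, the CPU-week budget quoted above corresponds to (simple arithmetic on the slide's own numbers):

\[ 1\,\mathrm{TFLOP/s} \times 1\,\mathrm{week} \approx 10^{12}\,\mathrm{s^{-1}} \times 6\times 10^{5}\,\mathrm{s} \approx 6\times 10^{17}\ \text{floating-point operations} \]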
23 Our Axisymmetric Gravitational Collapse Code
By assuming axial symmetry, we reduce the complexity of the problem, and so can expect to obtain good results on modest computer systems. In addition, we are still able to study head-on black hole collisions and critical phenomena.
Unigrid code stats: with 512MB RAM on one of UBC's vn compute nodes (850 MHz PIII), we can use grids of up to 400x400, and runtime is on the order of a few hours to a week for typical problems.
24 RNPL The code is written in a combination of Fortran and RNPL (Rapid Numerical Prototyping Language), a language designed by Matt Choptuik and Robert Marsa: http://laplace.physics.ubc.ca/People/matt/Rnpl/
RNPL takes care of reading parameters, memory allocation, file I/O etc., and also provides a convenient mechanism for implementing finite-difference discretizations of hyperbolic-type equations (which in this particular code are the scalar field matter source and a couple of metric variables).
25 RNPL example
Wave equation in 2D, cylindrical coordinates: phi_tt = phi_rhorho + phi_rho/rho + phi_zz

########################################################################
# RNPL source for axisymmetric wave-equation in cylindrical coordinates
# (rho,z).  Second order form and single uniform mesh (including (0,0))
# used.  Radiation conditions imposed on outer-rho and both z-boundaries.
# Regularity imposed at rho=0.  Scheme seems to be stable.
#
# Second order form:
#   phi_tt = phi_zz + phi_rhorho + phi_rho / rho
#
# Copyright 1996, Matthew W Choptuik, The University of Texas at Austin
########################################################################
26
########################################################################
# Parameters, grid and grid functions
########################################################################
.......................................................................
parameter float rhomin
parameter float rhomax
parameter float zmin
parameter float zmax
parameter float amp
parameter float delta
parameter float r0
parameter float z0 := 0.5
parameter float rho0 := 1
parameter float epsdis := 0
.......................................................................
rec coordinates t,rho,z
uniform rec grid g1 [1:Nrho][1:Nz] {rhomin:rhomax} {zmin:zmax}
float Phi on g1 at -1,0,1
float r on g1 at 0
.......................................................................
27
########################################################################
# Difference operators
########################################################################
operator D_FW(f,t) := (<1>f[0][0] - <0>f[0][0])/dt
operator D_BW(f,t) := (<0>f[0][0] - <-1>f[0][0])/dt
operator D_LF(f,t,t) := (<1>f[0][0] - 2*<0>f[0][0] + <-1>f[0][0])/(dt^2)
operator D_LF(f,rho,rho) := (<0>f[1][0] - 2*<0>f[0][0] + <0>f[-1][0])/(drho^2)
operator D_LF(f,z,z) := (<0>f[0][1] - 2*<0>f[0][0] + <0>f[0][-1])/(dz^2)
.......................................................................
########################################################################
# Residual definitions (equations of motion)
########################################################################
evaluate residual Phi {
  [1:1][1:Nz]         := QFIT(Phi,rho);
  [Nrho:Nrho][2:Nz-1] := r[0][0]*D_BW2(Phi,t) + rho*D_BW2(Phi,rho) + z*D_CADV(Phi,z);
  [2:Nrho-1][2:Nz-1]  := D_LF(Phi,t,t) = D_LF(Phi,z,z) + D_LF(Phi,rho,rho) + D_LF(Phi,rho)/rho;
  [2:Nrho-1][1:1]     := r[0][0]*D_BW2(Phi,t) + rho*D_CADV(Phi,rho) + z*D_FW2(Phi,z);
  [2:Nrho-1][Nz:Nz]   := r[0][0]*D_BW2(Phi,t) + rho*D_CADV(Phi,rho) + z*D_BW2(Phi,z);
}
28
########################################################################
# Initializations and update structure
#
# Note: RNPL generated initialization routine will generate only
# time symmetric data.
########################################################################
initialize r   { [1:Nrho][1:Nz] := sqrt(rho^2+z^2) }
initialize Phi { [1:Nrho][1:Nz] := amp*exp(-((sqrt(rho^2+z^2)-r0)/delta)^2) }

looper iterative
auto update Phi

Sample evolution produced by program compiled from the above RNPL code
29 RNPL Can also call custom routines from within RNPL to update specified variables. In our axisymmetric code, we use a multigrid solver written in Fortran to solve for the 4 variables that satisfy elliptic-type equations.
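Each V-cycle of a multigrid solver smooths the error on the current grid, restricts the residual to a coarser grid, solves (recursively) for a correction there, and interpolates the correction back. A small, self-contained 1D model of this (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation) is sketched below; the actual solver in our code is a 2D Fortran multigrid for the elliptic metric equations, so this is only illustrative:

#include <cstdio>
#include <vector>

// Illustrative multigrid V-cycle for the 1D model problem -u'' = f,
// u(0)=u(1)=0, with n interior points (n = 2^k - 1), spacing h = 1/(n+1).
static void smooth(std::vector<double> &u, const std::vector<double> &f,
                   double h, int sweeps)
{
    const int n = (int)u.size();
    const double w = 2.0 / 3.0;                          // weighted Jacobi
    for (int s = 0; s < sweeps; ++s) {
        std::vector<double> un = u;
        for (int j = 0; j < n; ++j) {
            double l = (j > 0) ? u[j - 1] : 0.0;
            double r = (j < n - 1) ? u[j + 1] : 0.0;
            un[j] = (1 - w) * u[j] + w * 0.5 * (l + r + h * h * f[j]);
        }
        u = un;
    }
}

static void vcycle(std::vector<double> &u, const std::vector<double> &f, double h)
{
    const int n = (int)u.size();
    if (n == 1) { u[0] = 0.5 * h * h * f[0]; return; }   // exact coarse solve
    smooth(u, f, h, 3);                                  // pre-smooth
    std::vector<double> res(n);                          // residual of -u'' = f
    for (int j = 0; j < n; ++j) {
        double l = (j > 0) ? u[j - 1] : 0.0;
        double r = (j < n - 1) ? u[j + 1] : 0.0;
        res[j] = f[j] - (2 * u[j] - l - r) / (h * h);
    }
    int nc = (n - 1) / 2;                                // restrict (full weighting)
    std::vector<double> rc(nc), ec(nc, 0.0);
    for (int i = 0; i < nc; ++i)
        rc[i] = 0.25 * (res[2 * i] + 2 * res[2 * i + 1] + res[2 * i + 2]);
    vcycle(ec, rc, 2 * h);                               // coarse-grid correction
    for (int j = 0; j < n; ++j) {                        // prolong and correct
        if (j % 2 == 1) u[j] += ec[j / 2];
        else u[j] += 0.5 * ((j > 0 ? ec[j / 2 - 1] : 0.0) +
                            (j < n - 1 ? ec[j / 2] : 0.0));
    }
    smooth(u, f, h, 3);                                  // post-smooth
}

int main()
{
    const int n = 127;                                   // 2^7 - 1 interior points
    const double h = 1.0 / (n + 1);
    std::vector<double> u(n, 0.0), f(n, 1.0);            // -u'' = 1  =>  u = x(1-x)/2
    for (int cycle = 0; cycle < 10; ++cycle) vcycle(u, f, h);
    printf("u(0.5) = %.6f  (exact: 0.125)\n", u[n / 2]);
    return 0;
}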
30 AMR
We recently started work on an adaptive driver for the code, based upon an implementation of Berger and Oliger's (1984) AMR algorithm (without rotation of subgrids):
–computational domain is dynamically decomposed into a hierarchy of overlapping, uniform grids
–regridding via local truncation error estimates, using a “self-shadow” hierarchy
–using a clusterer written by Reid Guenther, Mijan Huq and Dale Choi, based upon the signature-line algorithm of Berger and Rigoutsos (1991)
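The control flow of Berger and Oliger's scheme is a recursion in time: the coarse level takes one step, then each finer level takes two half-sized steps (itself recursively) before its solution is injected back into the parent grid. A toy C++ sketch of that ordering only (no actual grid data, regridding or injection, and not the adaptive driver itself):

#include <cstdio>

// Toy illustration of Berger-Oliger recursive time stepping with a 2:1
// refinement ratio in both space and time.  A real driver would advance
// grid functions, estimate truncation error, regrid, and inject (restrict)
// fine-grid data back onto the parent level where indicated.
struct Hierarchy {
    int num_levels;

    void take_single_step(int lev, double t, double dt) {
        printf("advance level %d from t=%.4f by dt=%.4f\n", lev, t, dt);
    }

    void recursive_step(int lev, double t, double dt) {
        take_single_step(lev, t, dt);                     // coarse step first
        if (lev + 1 < num_levels) {
            recursive_step(lev + 1, t, dt / 2);           // two fine steps...
            recursive_step(lev + 1, t + dt / 2, dt / 2);  // ...to catch up to the parent
            // here: inject the fine solution onto level 'lev'
        }
    }
};

int main() {
    Hierarchy h{3};                    // 3 levels of 2:1 refinement
    h.recursive_step(0, 0.0, 1.0);     // one coarse step drives the whole hierarchy
    return 0;
}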
31 AMR
–current runtime parameters for a typical near-critical collapse:
base grid 64x128 (with a 32x64 shadow)
up to 12 additional levels of 2:1 refinement, or the “equivalent” of a 262 144 x 524 288 uniform grid (arithmetic below)
runtime ~ 1 day to 1 week, using 512MB of memory
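The quoted “equivalent” resolution is just the base grid scaled by the twelve 2:1 refinements:

\[ 64 \times 2^{12} = 262\,144, \qquad 128 \times 2^{12} = 524\,288 \]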
32 2D AMR “near” critical example Spherically symmetric scalar field collapse... Initial hierarchy, 3 levels + shadow
33 2D AMR “near” critical example Spherically symmetric scalar field collapse... After ~ 1.5 echoes, 12 levels + shadow
34 2D AMR “near” critical example Spherically symmetric scalar field collapse... After ~ 1.5 echoes, 12 levels + shadow
35 2D AMR “near” critical example Spherically symmetric scalar field collapse... After ~ 1.5 echoes, 12 levels + shadow
36 2D AMR “near” critical example Prolate scalar field collapse... ~ 1.5 echoes; sub-critical; up to 14 levels
37 2D AMR “near” critical example Prolate scalar field collapse... ~ 1.5 echoes; sub-critical; level 10
38 Parallel Execution
We intend to add parallel support in the future, possibly using DAGH/GrACE, written by Manish Parashar (Rutgers) and J.C. Browne (UT Austin):
–http://www.caip.rutgers.edu/~parashar/DAGH/
–http://www.caip.rutgers.edu/~parashar/TASSL/Projects/GrACE/
39 Sample Uni-Grid Parallelization. Figure labels: ghost zones; 5-point finite difference stencil; grid split over 4 compute nodes.
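To make the picture concrete, here is a minimal sketch of the ghost-zone exchange such a decomposition needs before each application of the 5-point stencil, written with plain MPI (illustrative only: DAGH performs this communication automatically, and the strip sizes here are made up):

#include <mpi.h>
#include <vector>

// Ghost-zone (halo) exchange for a 2D grid split into horizontal strips,
// one strip per rank.  Each rank stores its interior rows 1..local_ny plus
// one ghost row above (local_ny+1) and one below (0), as required by a
// 5-point finite-difference stencil.
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nx = 64, local_ny = 16;                    // assumed strip size
    std::vector<double> u((local_ny + 2) * nx, rank);    // row-major storage

    int below = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int above = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // Send my first interior row down; receive my top ghost row from above.
    MPI_Sendrecv(&u[1 * nx], nx, MPI_DOUBLE, below, 0,
                 &u[(local_ny + 1) * nx], nx, MPI_DOUBLE, above, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Send my last interior row up; receive my bottom ghost row from below.
    MPI_Sendrecv(&u[local_ny * nx], nx, MPI_DOUBLE, above, 1,
                 &u[0], nx, MPI_DOUBLE, below, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // ...apply the 5-point stencil to interior rows 1..local_ny here...

    MPI_Finalize();
    return 0;
}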
40 DAGH - key features
–transparent access to scalable distributed dynamic arrays, grids, grid-hierarchies
–shadow grid-hierarchy for efficient error estimation in AMR
–automatic dynamic partitioning and load distribution
–locality in the face of multi-level data (space-filling curves; see the sketch after this list)
–some support for multi-grid
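The space-filling-curve idea above is easy to illustrate: a Morton (Z-order) key interleaves the bits of a block's integer coordinates, so blocks that are close in 2D tend to receive close keys, and partitioning the sorted keys across processors preserves locality. (A generic sketch of the idea, not DAGH's actual implementation.)

#include <cstdint>
#include <cstdio>

// Spread the 16 bits of x so they occupy the even bit positions of a
// 32-bit word (standard bit-interleaving trick).
static uint32_t spread_bits(uint16_t x)
{
    uint32_t v = x;
    v = (v | (v << 8)) & 0x00FF00FFu;
    v = (v | (v << 4)) & 0x0F0F0F0Fu;
    v = (v | (v << 2)) & 0x33333333u;
    v = (v | (v << 1)) & 0x55555555u;
    return v;
}

// Morton (Z-order) key: interleave the bits of the block coordinates (i,j).
static uint32_t morton2d(uint16_t i, uint16_t j)
{
    return spread_bits(i) | (spread_bits(j) << 1);
}

int main()
{
    for (uint16_t j = 0; j < 4; ++j)
        for (uint16_t i = 0; i < 4; ++i)
            printf("block (%u,%u) -> key %u\n",
                   (unsigned)i, (unsigned)j, (unsigned)morton2d(i, j));
    return 0;
}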
41 DAGH The DAGH driver must be written in C++, though computational routines written in C, C++ or Fortran can be called from the driver. Uses MPI for parallel support.
42 Schematic DAGH example
Unigrid wave equation in 2D, Cartesian coordinates:

#include "GrACE.h"
#include "GrACEIO.h"

bb[0]=xmin; bb[1]=xmax; bb[2]=ymin; bb[3]=ymax;
shape[0]=Nx; shape[1]=Ny;

GridHierarchy GH(2,NON_CELL_CENTERED,1);
GH.ACE_SetBaseGrid(bb, shape);
GH.ACE_ComposeHierarchy();
GH.ACE_IOType(ACEIO_HDF_RNPL);

BEGIN_COMPUTE
GridFunction(2) phi("phi",1,1,GH,ACEComm,ACENoShadow);

for(step++;step<=iter;step++)
{
   forall(phi,tc,lev,c)
      update(...)
   end_forall
   phi.GF_Sync(tc+idt,lev,ACE_Main);
}
43 Sample 16-node parallel run
44 Data collection, analysis and visualization
We are using a custom program, called the Data Vault (DV):
–central grid repository with a GUI front-end
–users (i.e. numerical codes) send single grids to the repository, of arbitrary shape and in arbitrary order; the DV is responsible for composing these grids into an appropriate hierarchy, called a register
–data analysis functions operate on registers
–interactive visualization component to view registers, or parts of a register (currently works with 2D uniform grid-based registers)
45 DV screenshot