Ryuji Morishima (UCLA/JPL)

N-body code = gravity solver + integrator.
The gravity solver must be fast and handle close encounters:
- Special hardware, O(N^2): GRAPE, GPUs
- Tree codes, O(N log N): PKDGRAV, Gadget
The integrator must take a large time step with good accuracy:
- Bulirsch-Stoer
- Hermite (often used with GRAPE)
- Mixed Variable Symplectic (MVS) integrators: SYMBA, Mercury
This work: implementation of SYMBA into PKDGRAV.
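For orientation, the sketch below shows the kind of direct-summation O(N^2) force loop that special-purpose hardware accelerates and that tree codes replace with an O(N log N) approximation. It is a minimal illustration in units with G = 1, not code from any of the packages named above.

```python
import numpy as np

def direct_gravity(pos, mass, eps=0.0):
    """Direct-summation accelerations, O(N^2), in units with G = 1.

    pos  : (N, 3) array of positions
    mass : (N,) array of masses
    eps  : optional softening length
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                              # vectors from particle i to all others
        r2 = np.einsum('ij,ij->i', dr, dr) + eps**2    # squared (softened) separations
        r2[i] = 1.0                                    # dummy value to avoid division by zero
        inv_r3 = r2**-1.5
        inv_r3[i] = 0.0                                # exclude the self-interaction
        acc[i] = np.sum((mass * inv_r3)[:, None] * dr, axis=0)
    return acc
```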

PKDGRAV: developed by Stadel (2001); the source is open on the astro-code wiki.
- Tree gravity: 4th-order multipole moments
- Adapts to various parallel environments (shared memory, MPI)
- Different functions and integrators:
  - Collisions (Richardson et al. 2000)
  - SPH (Wadsley et al. 2004)
  - Fragmentation (Leinhardt & Richardson 2005)
  - SYMBA integrator (Morishima et al. 2010)

Multipole expansion up to 4th order (hexadecapole); error estimation from cosmological simulations.
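As a reminder of what the orders mean: the potential of a tree cell expanded about its center of mass reads, schematically (this is the textbook expansion, not PKDGRAV's exact formulation),

$$
\Phi(\mathbf{r}) \simeq -\frac{GM}{r} - \frac{G}{2r^{3}}\,Q_{ij}\,\hat r_i \hat r_j - \cdots,
\qquad
Q_{ij} = \sum_a m_a \left( 3\,x_{a,i}\,x_{a,j} - |\mathbf{x}_a|^2 \delta_{ij} \right),
$$

where the dipole term vanishes because the expansion is about the center of mass, and the octupole and hexadecapole terms fall off as $r^{-4}$ and $r^{-5}$. Keeping terms through 4th order (hexadecapole) lets larger cells be accepted in the tree walk for a given force accuracy.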

k-D tree vs. spatial binary tree: a spatial binary tree can reduce the higher-order multipole moments (its cells stay more compact), and it is as efficient as a k-D tree for neighbor searches.
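A minimal sketch of the distinction, under the usual conventions (a k-D tree splits at the particle median along an axis, while a spatial binary tree bisects the longest side of the cell); the function names are illustrative, not PKDGRAV's actual tree-build code.

```python
import numpy as np

def split_kd(pos, axis):
    """k-D-tree-style split: at the median particle along the chosen axis.
    Balances particle counts, but cells can become long and thin."""
    median = np.median(pos[:, axis])
    return pos[pos[:, axis] <= median], pos[pos[:, axis] > median]

def split_spatial(pos, lo, hi):
    """Spatial-binary-tree-style split: bisect the longest side of the cell
    bounded by lo and hi. Cells stay compact, which keeps the higher-order
    multipole moments small."""
    axis = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[axis] + hi[axis])
    return pos[pos[:, axis] <= mid], pos[pos[:, axis] > mid]
```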

Master layer: controls the overall flow of the program.
Processor Set Tree (PST) layer: assigns tasks to processors.
Parallel KD (PKD) layer: executes tasks on each core.
Machine-dependent layer: interface to the parallel primitives (e.g., MPI calls).
One needs to understand the PST format, but not the parallel primitives themselves (such as MPI calls).
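The sketch below is an illustrative outline of that layering and of how a request flows down it; all class and function names are hypothetical, not PKDGRAV's actual API.

```python
class PKD:
    """Parallel KD layer: the particles (and tree) owned by one core."""
    def __init__(self, particles):
        self.particles = particles

    def do_work(self, task):
        # Purely local, serial execution of one task on this core's particles.
        print(f"core runs {task} on {len(self.particles)} particles")

class PST:
    """Processor Set Tree node: fans a task out over a set of processors."""
    def __init__(self, pkd=None, lower=None, upper=None):
        self.pkd, self.lower, self.upper = pkd, lower, upper

    def dispatch(self, task):
        if self.lower and self.upper:     # internal node: split the processor set
            self.upper.dispatch(task)     # in the real code this goes over MPI
            self.lower.dispatch(task)
        else:                             # leaf: hand the task to the local PKD
            self.pkd.do_work(task)

class Master:
    """Master layer: controls the overall flow and issues whole steps."""
    def __init__(self, pst):
        self.pst = pst

    def step(self):
        for task in ("build_tree", "gravity", "drift"):
            self.pst.dispatch(task)

# Usage: two cores, one owning three particles and the other two.
Master(PST(lower=PST(pkd=PKD([1, 2, 3])), upper=PST(pkd=PKD([4, 5])))).step()
```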

Mixed Variable Symplectic (MVS) integrators:
- Specialized for systems with a massive central body
- Mixed variables: Cartesian and Keplerian coordinates
- A large time step along the Keplerian orbit
- Time-reversible (no secular error)
- Handle close encounters: SYMBA (Duncan et al. 1998), Mercury (Chambers 1999)
Most N-body simulations of planet formation in the last decade have been performed with these two codes, but both use N^2 gravity calculations.

Democratic heliocentric coordinates: heliocentric positions + barycentric velocities.
H_kep >> H_int (if there is no close encounter), and H_kep >> H_sun.
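In these variables the Hamiltonian splits into three parts (this is the standard splitting of Duncan, Levison & Lee 1998, written in conventional notation rather than taken from the slides):

$$
H = H_{\rm kep} + H_{\rm int} + H_{\rm sun},
$$
$$
H_{\rm kep} = \sum_{i=1}^{N}\left(\frac{|\mathbf{p}_i|^2}{2 m_i} - \frac{G m_\odot m_i}{r_i}\right),\qquad
H_{\rm int} = -\sum_{i<j}\frac{G m_i m_j}{r_{ij}},\qquad
H_{\rm sun} = \frac{1}{2 m_\odot}\left|\sum_{i=1}^{N}\mathbf{p}_i\right|^2,
$$

where $r_i$ are heliocentric distances and $\mathbf{p}_i$ barycentric momenta. Because $H_{\rm kep}$ dominates, $H_{\rm int}$ and $H_{\rm sun}$ can be advanced with kicks over a time step much longer than a direct integration of the full Hamiltonian would allow.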

Potential (or force) decomposition based on the mutual distance normalized by the Hill radius. A higher-order potential component is calculated with a smaller, block-sized time step: kick (F_0), then Kepler drift with F_1.
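A minimal sketch of the decomposition idea, assuming the smooth partition function $K(x)=x^2/(2x^2-2x+1)$ and illustrative shell boundaries in Hill radii (the boundaries and the cascade below are placeholders, not the exact SYMBA coefficients):

```python
import numpy as np

def partition(x):
    """Smooth switch K(x): 0 for x <= 0, 1 for x >= 1, monotonic in between."""
    x = np.clip(x, 0.0, 1.0)
    return x * x / (2.0 * x * x - 2.0 * x + 1.0)

def force_components(r_over_rhill, f_total, shells=(3.0, 1.0, 0.3)):
    """Split a pairwise force into components F_0, F_1, F_2, ... by distance.

    r_over_rhill : separation in units of the mutual Hill radius
    f_total      : the full pairwise force (scalar or vector)
    shells       : shell boundaries in Hill radii; each deeper component is
                   integrated with a smaller block-sized time step
    The components sum to f_total by construction.
    """
    comps, remaining = [], f_total
    for edge in shells:
        w = partition(r_over_rhill / edge - 1.0)   # fraction felt at this level
        comps.append(w * remaining)
        remaining = (1.0 - w) * remaining
    comps.append(remaining)                        # innermost, smallest time step
    return comps
```

Only the component belonging to the current time-step level is applied at that level, so well-separated pairs are handled entirely by the cheap F_0 kick.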

The time step size needs to be determined by the minimum mutual distance during the particle drift. This distance must be estimated from the particle coordinates at the beginning and end of the drift, treated symmetrically (e.g., Hut et al. 1995).
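One simple way to realize such a symmetric criterion is sketched below: the pair separation is evaluated at both endpoints of the drift and the smaller value selects the time-step level, so the forward and time-reversed steps pick the same level. The thresholds and halving rule are illustrative placeholders, not the actual PKDGRAV-SYMBA criterion.

```python
def encounter_timestep(d_begin, d_end, r_hill, dt_max, shrink=3.0, max_levels=30):
    """Pick a time step from the pair distance at the beginning and end of the
    drift, treated symmetrically (min() gives the same answer forward or
    backward in time), loosely in the spirit of Hut et al. (1995).

    d_begin, d_end : separations before and after the drift
    r_hill         : mutual Hill radius
    dt_max         : base (largest) time step
    shrink         : outer shell boundary in Hill radii (illustrative)
    """
    d = min(d_begin, d_end)          # symmetric in the two endpoints
    dt = dt_max
    for _ in range(max_levels):
        if d > shrink * r_hill:
            return dt
        dt *= 0.5                    # descend one rung of the time-step hierarchy
        shrink *= 0.5                # next, smaller shell boundary (illustrative)
    return dt
```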

One full step of PKDGRAV-SYMBA:
1. Half kick (τ_0/2) due to the Sun's motion
2. Half kick (τ_0/2) due to the force F_0 from the other particles
3. Kepler drift (τ_0) for all particles
4. Tree build and neighbor search (after drift)
5. Particles in close encounters:
   1. are sent back to their pre-drift positions and velocities,
   2. are put onto a single core (domain decomposition),
   3. are advanced with SYMBA multiple time stepping,
   4. and collisions are also handled here.
6. Tree build, gravity calculation, and neighbor search (before drift)
7. Steps 2 and 1 again (the closing half kicks)
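Put together as pseudocode, one base step might look like the sketch below; all function names are placeholders standing in for the real operations, not PKDGRAV's API.

```python
# Placeholder stubs for the real operations (names are hypothetical).
def kick_sun(sim, dt): pass                 # momentum kick from H_sun
def kick_interaction(sim, dt): pass         # kick with the far-field force F_0
def kepler_drift(sim, dt): pass             # analytic drift along Kepler orbits
def build_tree(sim): pass
def find_neighbors(sim): pass
def compute_gravity(sim): pass
def close_encounter_groups(sim): return []  # groups of bodies within a few Hill radii
def restore_predrift_state(group): pass
def move_to_single_core(group): pass
def symba_substeps(group, dt): pass         # recursive SYMBA multiple time stepping
def resolve_collisions(group): pass

def pkdgrav_symba_step(sim, tau0):
    """One base step, following the outline above (illustrative only)."""
    kick_sun(sim, tau0 / 2)                    # 1. half kick, Sun's motion
    kick_interaction(sim, tau0 / 2)            # 2. half kick, F_0 from other bodies
    kepler_drift(sim, tau0)                    # 3. Kepler drift of all particles
    build_tree(sim); find_neighbors(sim)       # 4. after-drift tree + neighbor search
    for group in close_encounter_groups(sim):  # 5. close-encounter handling
        restore_predrift_state(group)          #    5.1 rewind to pre-drift state
        move_to_single_core(group)             #    5.2 put onto one core
        symba_substeps(group, tau0)            #    5.3 multiple time stepping
        resolve_collisions(group)              #    5.4 collisions handled here
    build_tree(sim); compute_gravity(sim); find_neighbors(sim)  # 6. before-drift pass
    kick_interaction(sim, tau0 / 2)            # 7. closing half kicks (2, then 1)
    kick_sun(sim, tau0 / 2)
```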

[Figure: simulation results with gas and without gas]

[Figure: accretion history compared with Martian meteorite constraints; curves for accretion truncated at 14 Myr and for extrapolated accretion]

In a 3-body encounter, the time stepping of the three bodies is synchronized (for time symmetry). If the system's number density is high, all particles end up sharing the same time step…

PKDGRAV-SYMBA works as desired unless the system's number density is so high that most bodies are in multi-body encounters. For such systems, time symmetry may need to be sacrificed.