Reactive Molecular Dynamics: Algorithms, Software, and Applications
Ananth Grama, Computer Science, Purdue University

Outline
1. Introduction
   - Molecular Dynamics Methods vs. ReaxFF
2. Sequential Realization: SerialReax
   - Algorithms and Numerical Techniques
   - Validation
   - Performance Characterization
3. Parallel Realization: PuReMD
   - Parallelization: Algorithms and Techniques
   - Performance and Scalability
4. Applications
   - Strain Relaxation in Si/Ge/Si Nanobars [Strachan et al.]
   - Water-Silica Interface [Pandit et al.]
   - Oxidative Stress in Lipid Bilayers [Pandit et al.]
5. Conclusions

Molecular Dynamics Methods vs. ReaxFF
[Figure: simulation methods arranged by accessible time scale (ps, ns, µs, ms) and length scale (nm, µm, mm): ab-initio methods at the smallest scales, then classical MD methods, with statistical & continuum methods at the largest; ReaxFF bridges the gap between ab-initio and classical MD.]

ReaxFF vs. Classical MD
ReaxFF [van Duin and Goddard] is classical MD in spirit:
- basis: atoms (electrons & nuclei)
- parameterized interactions: fit to DFT (ab-initio) calculations
- atom movements: Newtonian mechanics
- many common interactions: bonds, valence angles (a.k.a. 3-body interactions), dihedrals (a.k.a. 4-body interactions), hydrogen bonds, van der Waals, Coulomb

ReaxFF vs. Classical MD
Static vs. dynamic bonds:
- reactivity: bonds are formed and broken during the simulation
- dynamic 3-body & 4-body interactions
- complex formulations of bonded interactions
- additional bonded interactions:
  - multi-body: lone pair & over/under-coordination (due to imperfect coordination)
  - 3-body: coalition & penalty (for special cases such as double bonds)
  - 4-body: conjugation (for special cases)
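The slide stops short of the bond-order expression itself. For reference, a commonly cited form of the uncorrected bond order from the ReaxFF literature (van Duin et al.), with force-field constants p_bo1..p_bo6 and sigma/pi/double-pi bond radii, is:

$$
BO'_{ij} = \exp\!\left[p_{bo1}\left(\frac{r_{ij}}{r_o^{\sigma}}\right)^{p_{bo2}}\right]
         + \exp\!\left[p_{bo3}\left(\frac{r_{ij}}{r_o^{\pi}}\right)^{p_{bo4}}\right]
         + \exp\!\left[p_{bo5}\left(\frac{r_{ij}}{r_o^{\pi\pi}}\right)^{p_{bo6}}\right]
$$

These raw pairwise bond orders are subsequently corrected for over- and under-coordination before the bonded energy terms are evaluated.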

ReaxFF vs. Classical MD
Static vs. dynamic charges:
- charges are redistributed by solving a large sparse linear system (n x n, where n is the number of atoms), with updates at every step!
- shorter time-steps (tenths of femtoseconds vs. the femtoseconds typical of classical MD)
- fully atomistic
Result: a more complex and expensive force field

ReaxFF Software: State of the Art
- Original Fortran Reax code: the reference code, widely distributed and used; experimental: sequential, slow; static interaction lists: large memory footprint
- LAMMPS-REAX: parallel engine: LAMMPS; kernels: from the original Fortran code; local optimizations only; static interaction lists: large memory footprint
- Parallel Fortran Reax code from USC: good parallel scalability; poor per-timestep running time; accuracy issues; no public-domain release

Sequential Realization: SerialReax
- Excellent per-timestep compute time: efficient generation of neighbor lists; elimination of bond order derivative lists; cubic spline interpolation for non-bonded interactions; highly optimized linear solver for charge equilibration
- Linear-scaling memory footprint: fully dynamic and adaptive interaction lists
Related publication: Reactive Molecular Dynamics: Numerical Methods and Algorithmic Techniques. H. M. Aktulga, S. A. Pandit, A. C. T. van Duin, A. Y. Grama. SIAM Journal on Scientific Computing, 2010 (under review).

SerialReax Software Organization
Inputs: system geometry, control parameters, force field parameters. Outputs: trajectory snapshots, system status updates, program log file.
Per-timestep pipeline:
- Initialization: read input data, initialize data structures
- Neighbor generation: 3D-grid based O(n) neighbor generation; optimizations for improved performance
- Init forces: initialize the QEq coefficient matrix; compute uncorrected bond orders; generate H-bond lists
- Compute bonds: corrections are applied after all uncorrected bonds are computed
- Bonded interactions: 1. bonds, 2. lone pairs, 3. over/under-coordination, 4. valence angles, 5. hydrogen bonds, 6. dihedral/torsion
- Charge equilibration: large sparse linear system; PGMRES(50) or PCG; ILUT-based preconditioners give good performance
- van der Waals & electrostatics: single pass over the far-neighbor list after charges are updated; interpolation with cubic splines for speed-up
- Evolve the system: F_total = F_nonbonded + F_bonded; update x & v with velocity Verlet; NVE, NVT and NPT ensembles
- Reallocate: fully dynamic and adaptive memory management: efficient use of resources; large systems on a single processor

Generation of Neighbors List
[Figure: atoms are binned into a 3D neighbors grid, and the atom list is reordered by grid cell.]

Neighbors List Optimizations
- The size of the neighbor grid cell is important: with r_nbrs the neighbors cut-off distance, set the cell size to half of r_nbrs
- Reordering reduces look-ups significantly: when searching for the neighbors of the atoms in a given cell, only the surrounding cells need to be inspected, and reordered storage makes those look-ups contiguous
- Atom-to-cell distance pruning: if the distance d from an atom to a cell exceeds r_nbrs, there is no need to look into that cell
- Verlet lists for delayed re-neighborings
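To make the grid scheme concrete, here is a minimal Python sketch of cell-list neighbor generation; the names and the simplified cell size are illustrative, not PuReMD's actual data structures:

```python
import numpy as np

def build_neighbor_list(pos, box, r_nbrs):
    """O(n) cell-list neighbor search: bin atoms into a 3D grid, then
    compare each atom only against atoms in its own and adjacent cells.
    For brevity this sketch uses cells of side ~r_nbrs and scans 27 cells;
    the slide's scheme uses cells of r_nbrs/2 plus atom-to-cell distance
    pruning. Assumes at least 3 cells per axis (periodic wrap)."""
    pos = np.asarray(pos, float)
    box = np.asarray(box, float)
    n = len(pos)
    dims = np.maximum((box / r_nbrs).astype(int), 1)    # cells per axis
    cell_of = (pos / box * dims).astype(int) % dims     # cell of each atom
    cells = {}                                          # (i,j,k) -> atom ids
    for a, c in enumerate(map(tuple, cell_of)):
        cells.setdefault(c, []).append(a)
    nbrs = [[] for _ in range(n)]
    for a in range(n):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    c = tuple((cell_of[a] + (dx, dy, dz)) % dims)
                    for b in cells.get(c, ()):
                        if b <= a:                      # store each pair once
                            continue
                        d = pos[a] - pos[b]
                        d -= box * np.round(d / box)    # minimum image
                        if d @ d < r_nbrs * r_nbrs:
                            nbrs[a].append(b)
    return nbrs
```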

Elimination of Bond Order Derivative Lists
- BO_ij: the bond order between atoms i and j, at the heart of all bonded interactions
- dBO_ij/dr_k: the derivative of BO_ij with respect to the position of atom k; non-zero for all atoms sharing a bond with i or j; storing these terms makes bonded force computations costly and requires a large memory space
- Eliminate the dBO list: delay the computation of the dBO terms; accumulate the force-related coefficients into CdBO_ij; at the end of the step, compute each dBO_ij/dr_k once and apply f_k += CdBO_ij x dBO_ij/dr_k
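A schematic of the delayed-evaluation idea in Python; coef_terms, dbo, and affected are hypothetical stand-ins for the real interaction kernels, the analytic derivative, and the bond adjacency query:

```python
import numpy as np

def bonded_forces(bonds, coef_terms, dbo, affected, forces):
    """Two-pass scheme that eliminates the dBO derivative lists.
    bonds:      list of (i, j) atom pairs
    coef_terms: callable(b) -> the dE/dBO_ij contributions of every bonded
                interaction touching bond b (hypothetical placeholder)
    dbo:        callable(b, k) -> dBO_ij/dr_k as a length-3 vector
                (hypothetical placeholder for the analytic derivative)
    affected:   callable(b) -> atoms sharing a bond with i or j
    """
    CdBO = np.zeros(len(bonds))
    # Pass 1: each bonded interaction adds only a scalar coefficient;
    # no derivative terms are computed or stored here.
    for b in range(len(bonds)):
        CdBO[b] = sum(coef_terms(b))
    # Pass 2: compute each dBO_ij/dr_k exactly once, at the end of the
    # step, and apply f_k += CdBO_ij * dBO_ij/dr_k.
    for b in range(len(bonds)):
        for k in affected(b):
            forces[k] += CdBO[b] * dbo(b, k)
    return forces
```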

Look-up Tables
- Non-bonded force computations are expensive: a large number of interactions (r_nonb ~ 10-12 Å vs. r_bond ~ 4-5 Å), but the interactions themselves are simple and pair-wise
- Reduce non-bonded force computations: create look-up tables; cubic spline interpolation for non-bonded energies & forces
- Experiment with a 6540-atom bulk water system: significant performance gain, high accuracy
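A minimal sketch of the table-plus-spline idea using SciPy's CubicSpline; the grid bounds and the Lennard-Jones stand-in kernel are illustrative assumptions, and PuReMD builds its own spline tables:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def lj_energy(r, eps=0.1, sigma=3.0):
    """Stand-in pairwise potential (Lennard-Jones here); any expensive
    pairwise energy kernel could be tabulated the same way."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Build the table once: sample the expensive kernel on a fine grid
# out to the non-bonded cutoff (~10-12 Angstrom) ...
r_grid = np.linspace(1.0, 12.0, 2048)
E_tab = CubicSpline(r_grid, lj_energy(r_grid))
F_tab = E_tab.derivative()          # dE/dr; force magnitude is -dE/dr

# ... then every per-pair evaluation during the run is a cheap lookup.
r = 4.37
energy = E_tab(r)
force = -F_tab(r)
```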

Linear Solver for Charge Equilibration
- The QEq method is used for equilibrating charges: an approximation for distributing partial charges, whose solution via Lagrange multipliers yields two linear systems
- N: the number of atoms; H: an N x N sparse matrix
- s & t: fictitious charges used to determine the actual charge q_i
- Too expensive for direct methods: use iterative (Krylov subspace) methods
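The equations elided from this slide can be reconstructed from the companion SerialReax/PuReMD papers: with electronegativities χ and the hardness/shielded-Coulomb matrix H, enforcing charge neutrality through a Lagrange multiplier splits QEq into two linear systems that share the same matrix,

$$
H\mathbf{s} = -\boldsymbol{\chi}, \qquad H\mathbf{t} = -\mathbf{1}, \qquad
q_i = s_i - \frac{\sum_k s_k}{\sum_k t_k}\, t_i,
$$

so one pair of solves per step yields the actual charges q_i.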

Basic Solvers for QEq
Sample systems:
- bulk water: 6540 atoms, liquid
- lipid bilayer system: 56,800 atoms, biological system
- PETN crystal: 48,256 atoms, solid
Solvers: CG and GMRES
- H has a heavy diagonal: diagonal preconditioning
- slowly evolving environment: extrapolation from previous solutions

Basic Solvers for QEq
Poor performance: noticeable even at a tolerance level that is fairly satisfactory, and much worse at a tighter tolerance; cache effects are more pronounced there.
[Table: # of iterations (= # of matrix-vector multiplications), actual running time in seconds, and fraction of total computation time, per system and solver.]

ILU-based Preconditioning
ILU-based preconditioners (no fill-in, drop tolerance):
- effective (considering only the solve time)
- no fill-in + threshold: nice scaling with system size
[Plots: solve times for the bulk water and bilayer systems; cache effects are still evident.]

ILU-based Preconditioning
ILU-based preconditioners (no fill-in, drop tolerance) reduce the solve time, but the ILU factorization itself is expensive.
[Table: time to compute the preconditioner, solve time (s), and total time (s) per system/solver; for bulk water, GMRES+diagonal needs essentially no preconditioner setup (~0, with roughly 0.11 s of solve time), whereas GMRES+ILU pays a substantial setup cost.]

ILU-based Preconditioning
Observation: the ILU factorization cost can be amortized. In a slowly changing simulation environment, preconditioners are re-usable across steps:
- PETN crystal (solid): re-usable for 1000s of steps!
- bulk water (liquid): re-usable for far fewer steps, since the liquid environment changes faster!
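A sketch of the amortization pattern using SciPy's spilu and restarted GMRES; get_H, the refactorization period, and the drop tolerance are illustrative assumptions rather than PuReMD's actual policy:

```python
from scipy.sparse.linalg import spilu, gmres, LinearOperator

def qeq_loop(get_H, b, n_steps, refactor_every=100):
    """Amortize the ILU factorization cost across MD steps. Because the
    simulation environment changes slowly, an ILU computed at step k keeps
    preconditioning the solves at steps k+1, k+2, ...  get_H(step) is a
    hypothetical callback returning the current sparse QEq matrix."""
    M, x = None, None
    for step in range(n_steps):
        H = get_H(step).tocsc()
        if step % refactor_every == 0:
            # expensive factorization, done rarely;
            # fill_factor=1 roughly mimics "no fill-in"
            ilu = spilu(H, drop_tol=1e-4, fill_factor=1)
            M = LinearOperator(H.shape, matvec=ilu.solve)
        # warm start from the previous step's solution (slowly evolving system)
        x, info = gmres(H, b, x0=x, M=M, restart=50)   # cf. PGMRES(50)
    return x
```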

Memory Management
- Compact data structures: the neighbor list, QEq matrix, and 3-body interaction lists are stored in CSR format; the bond and hydrogen-bond lists use a modified CSR layout that reserves extra space after each atom's entries to absorb growth
- Dynamic and adaptive lists: initially, allocate after estimation; at every step, monitor and re-allocate if necessary
- Low memory footprint, linear scaling with system size

Validation
Hexane (C6H14) structure comparison: good agreement!

Comparison to MD Methods
Comparisons using hexane systems of various sizes:
- Ab-initio MD: CPMD
- Classical MD: GROMACS
- ReaxFF: SerialReax

Comparison to LAMMPS-Reax
- Time per time-step comparison
- QEq solver performance (LAMMPS uses CG with no preconditioner)
- Memory footprints
- The two codes use different QEq formulations but produce similar results

Outline
1. Introduction
   - Molecular Dynamics Methods vs. ReaxFF
2. Sequential Realization: SerialReax
   - Algorithms and Numerical Techniques
   - Validation
   - Performance Characterization
3. Parallel Realization: PuReMD
   - Parallelization: Algorithms and Techniques
   - Performance and Scalability
4. Applications
   - Strain Relaxation in Si/Ge/Si Nanobars
   - Water-Silica Interface
   - Oxidative Stress in Lipid Bilayers

Parallel Realization: PuReMD
- Built on the SerialReax platform: excellent per-timestep running time; linear-scaling memory footprint
- Extends SerialReax's capabilities to large systems and longer time-scales
- Scalable algorithms and techniques; demonstrated scaling to over 3K cores
Related publication: Parallel Reactive Molecular Dynamics: Numerical Methods and Algorithmic Techniques. H. M. Aktulga, J. C. Fogarty, S. A. Pandit, A. Y. Grama. Parallel Computing, 2010 (submitted).

Parallelization: Decomposition
Domain decomposition over a 3D torus of processors.

Parallelization: Outer-Shell
[Figure: four ghost-region ("outer-shell") schemes, each showing a sub-domain of side b with its exchange region of depth r (or r/2): full shell, half shell, midpoint shell, and tower-plate shell.]

Parallelization: Outer-Shell
Among the full-shell, half-shell, midpoint-shell, and tower-plate schemes, we choose the full shell: dynamic bonding requires it, despite the communication overhead.

Parallelization: Boundary Interactions
r_shell = max(3 x r_bond_cut, r_hbond_cut, r_nonb_cut)

Parallelization: Messaging

Parallelization: Messaging Performance
Performance comparison: PuReMD with direct vs. staged messaging.

Parallelization: Optimizations
- Truncate bond-related computations: double computation of bonds at the outer shell hurts scalability as the sub-domain size shrinks; trim all bonds that are farther than 3 hops
- Scalable parallel solver for the QEq problem: GMRES with ILU factorization cannot be parallelized easily; use CG + diagonal preconditioning; a good initial guess is important; redundant computations to avoid reverse communication; iterate both systems, H s = -χ and H t = -1, together
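A serial sketch of the QEq strategy in the last bullet: diagonal-preconditioned CG applied to both right-hand sides in lockstep, so each iteration performs a single shared matrix-vector product (one communication phase in the parallel code). This is illustrative, not PuReMD's implementation:

```python
import numpy as np

def qeq_solve(H, chi, tol=1e-6, max_iter=200):
    """Solve H s = -chi and H t = -1 together with diagonally
    preconditioned CG, then combine q_i = s_i - (sum s / sum t) t_i.
    H is assumed symmetric positive definite (n x n, sparse or dense)."""
    n = H.shape[0]
    Minv = 1.0 / H.diagonal()                 # diagonal preconditioner
    X = np.zeros((n, 2))                      # columns: s and t
    B = np.column_stack([-chi, -np.ones(n)])
    R = B - H @ X
    Z = Minv[:, None] * R
    P = Z.copy()
    rz = np.einsum('ij,ij->j', R, Z)          # per-column <r, z>
    for _ in range(max_iter):
        HP = H @ P                            # one mat-vec serves both systems
        alpha = rz / np.einsum('ij,ij->j', P, HP)
        X += alpha * P                        # broadcast: per-column updates
        R -= alpha * HP
        if np.all(np.linalg.norm(R, axis=0) < tol):
            break
        Z = Minv[:, None] * R
        rz_new = np.einsum('ij,ij->j', R, Z)
        P = Z + (rz_new / rz) * P
        rz = rz_new
    s, t = X[:, 0], X[:, 1]
    return s - (s.sum() / t.sum()) * t        # actual charges q
```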

PuReMD: Performance and Scalability
Tests: weak scaling, strong scaling, and comparison to LAMMPS-REAX.
Platform: Hera cluster at LLNL
- 4 AMD Opterons per node
- 800 batch nodes
- 127 TFLOPS/sec
- 32 GB memory per node
- InfiniBand interconnect
- 42nd on the TOP500 list as of Nov 2009

PuReMD: Weak Scaling
Bulk water: 6540 atoms in a 40x40x40 Å³ box per core.

PuReMD: Weak Scaling
[Plots: QEq scaling; parallel efficiency.]

PuReMD: Strong Scaling
Bulk water in an 80x80x80 Å³ box.

PuReMD: Strong Scaling
[Plots: efficiency and throughput.]

PuReMD: Comparison to LAMMPS-Reax
Weak scaling on the bulk water system: PuReMD is 4-5 times faster.

PuReMD: Comparison to LAMMPS-Reax
Strong scaling on the bulk water system: PuReMD is 3-4 times faster.

Outline
1. Introduction
   - Molecular Dynamics Methods vs. ReaxFF
2. Sequential Realization: SerialReax
   - Algorithms and Numerical Techniques
   - Validation
   - Performance Characterization
3. Parallel Realization: PuReMD
   - Parallelization: Algorithms and Techniques
   - Performance and Scalability
4. Applications
   - Strain Relaxation in Si/Ge/Si Nanobars
   - Water-Silica Interface
   - Oxidative Stress in Lipid Bilayers

Si/Ge/Si nanoscale bars
Motivation:
- Si/Ge/Si nanobars are ideal for MOSFETs
- as produced they carry biaxial strain, whereas uniaxial strain is desirable
- understanding the strain behavior is important for design & production
[Figure: Si/Ge/Si bar of width W and height H with periodic boundary conditions; crystal axes [100] transverse, [001] vertical, [010] longitudinal.]
Related publication: Strain relaxation in Si/Ge/Si nanoscale bars from molecular dynamics simulations. Y. Park, H. M. Aktulga, A. Y. Grama, A. Strachan. Journal of Applied Physics 106, 1 (2009).

Si/Ge/Si nanoscale bars
Key result: when the Ge section is roughly square shaped, it is under almost uniaxial strain!
[Plots: average transverse Ge strain as a function of bar width W (nm); average strains for Si & Ge in each dimension; a simple strain model derived from the MD results.]

Water-Silica Interface
Motivation:
- a-SiO2 is widely used in nano-electronic devices
- biological screening devices are used for in-vivo screening
- understanding the interaction with water is critical for the reliability of such devices
Related publication: A Reactive Simulation of the Silica-Water Interface. J. C. Fogarty, H. M. Aktulga, A. van Duin, A. Y. Grama, S. A. Pandit. Journal of Chemical Physics 132, (2010).

Water-Silica Interface
[Plots: water model validation; silica model validation.]

Water-Silica Interface
Key result: silica surface hydroxylation, as evidenced by experiments, is observed.
Proposed reaction: H2O + 2Si + O → 2 SiOH
[Figure: snapshot of the silica-water interface with Si, O, and H atoms and the surface OH groups labeled.]

Water-Silica Interface
Key result: hydrogen penetration is observed: some H atoms penetrate through the slab, whereas ab-initio simulations predict whole-molecule penetration. We propose that water diffuses through a thin silica film via hydrogen hopping: rather than diffusing as whole units, water molecules dissociate at the surface, and the hydrogens diffuse through, combining with other dissociated water molecules at the opposite surface.

Oxidative Damage in Lipid Bilayers
Motivation: modeling reactive processes in biological systems; ROS → oxidative stress → cancer & aging.
System preparation: 200 POPC lipids + 10,000 water molecules, and the same system with 1% H2O2 mixed in; mass spectrograph: a lipid molecule weighs 760 u.
Key result: oxidative damage is observed: 40% of lipids damaged in pure water vs. 75% damaged in 1% peroxide.

Conclusions
- An efficient and scalable realization of ReaxFF, through the use of algorithmic and numerical techniques
- Detailed performance characterization; comparison to other methods
- Applications on various systems
- Beta version released to: Goddard (CalTech), Buehler (MIT), van Duin (PSU), Strachan (Purdue), Pandit (USF), Germann (LANL), Quenneville (Spectral Sciences)

References
- Reactive Molecular Dynamics: Numerical Methods and Algorithmic Techniques. H. M. Aktulga, S. A. Pandit, A. C. T. van Duin, A. Y. Grama. SIAM Journal on Scientific Computing, 2010 (submitted).
- Parallel Reactive Molecular Dynamics: Numerical Methods and Algorithmic Techniques. H. M. Aktulga, J. C. Fogarty, S. A. Pandit, A. Y. Grama. Parallel Computing, 2010 (submitted).
- Strain relaxation in Si/Ge/Si nanoscale bars from molecular dynamics simulations. Y. Park, H. M. Aktulga, A. Y. Grama, A. Strachan. Journal of Applied Physics 106, 1 (2009).
- A Reactive Simulation of the Silica-Water Interface. J. C. Fogarty, H. M. Aktulga, A. van Duin, A. Y. Grama, S. A. Pandit. Journal of Chemical Physics 132, (2010).