Mid-Year Review Template
March 2, 2010, Purdue University

Reactive Atomistics
Metin Aktulga and Ananth Grama

Accomplishments
Purdue ReaxFF represents a unique capability: simulating reactive systems with 10^6 atoms and beyond at high accuracy. The speed and scale of such simulations are well beyond competing implementations.

Accomplishments
- V2.0 of the serial ReaxFF code released
- V1.0 of the parallel ReaxFF code released
- Initial third-party benchmarking (Goddard et al.) shows Purdue Reax is approximately 10x faster than the competing/collaborating implementation
- Initial third-party benchmarking shows the parallel version to be stable and scalable to 1K cores

Continuing Workplan
- Benchmark scalability on larger configurations
- Address known scalability bottlenecks (QEq solver)
- Fully integrate into LAMMPS
- Continue development of FFOpt

Purdue ReaxFF
Optimizations in every part of the code:
- Efficient generation of neighbor and interaction lists
- Fully dynamic memory management for all lists
- Fast computation of bond-related forces
- Computation of van der Waals and Coulomb interactions with cubic spline interpolation (see the sketch below)
- Efficient QEq solver: GMRES with an ILU-based preconditioner
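
The spline optimization replaces repeated evaluation of expensive nonbonded kernels with a cheap table lookup. Below is a minimal C sketch of the idea under simplifying assumptions: it tabulates a bare 1/r Coulomb kernel on a uniform grid and evaluates it with natural cubic splines. All names and constants (pair_energy, make_spline, spline_eval, N_TABLE, R_MIN, R_CUT) are illustrative, not the PuReMD API; the actual code tabulates the shielded, tapered ReaxFF kernels and their derivatives as well.

    #include <stdio.h>

    #define N_TABLE 1024     /* number of grid points              */
    #define R_MIN   0.5      /* inner cutoff, avoids r -> 0        */
    #define R_CUT   10.0     /* nonbonded cutoff (Angstroms)       */

    static double h;             /* grid spacing                   */
    static double y[N_TABLE];    /* tabulated energies             */
    static double y2[N_TABLE];   /* second derivatives at knots    */

    /* Toy pair energy: bare Coulomb kernel, kcal/mol units. */
    static double pair_energy(double r) { return 332.0637 / r; }

    /* Natural cubic spline setup on a uniform grid
       (classic tridiagonal solve for the second derivatives). */
    static void make_spline(void)
    {
        double u[N_TABLE];
        h = (R_CUT - R_MIN) / (N_TABLE - 1);
        for (int i = 0; i < N_TABLE; i++)
            y[i] = pair_energy(R_MIN + i * h);

        y2[0] = u[0] = 0.0;                      /* natural boundary */
        for (int i = 1; i < N_TABLE - 1; i++) {  /* forward sweep    */
            double p = 0.5 * y2[i - 1] + 2.0;
            y2[i] = -0.5 / p;
            u[i]  = (y[i + 1] - 2.0 * y[i] + y[i - 1]) / h;
            u[i]  = (3.0 * u[i] / h - 0.5 * u[i - 1]) / p;
        }
        y2[N_TABLE - 1] = 0.0;
        for (int i = N_TABLE - 2; i >= 0; i--)   /* back substitution */
            y2[i] = y2[i] * y2[i + 1] + u[i];
    }

    /* O(1) lookup + cubic interpolation, replacing the expensive kernel. */
    static double spline_eval(double r)
    {
        int i = (int)((r - R_MIN) / h);
        if (i > N_TABLE - 2) i = N_TABLE - 2;
        double a = (R_MIN + (i + 1) * h - r) / h, b = 1.0 - a;
        return a * y[i] + b * y[i + 1]
             + ((a*a*a - a) * y2[i] + (b*b*b - b) * y2[i + 1]) * h * h / 6.0;
    }

    int main(void)
    {
        make_spline();
        printf("E(2.5) = %.6f  exact = %.6f\n", spline_eval(2.5), pair_energy(2.5));
        return 0;
    }

Each evaluation costs a handful of multiply-adds per pair regardless of how expensive the underlying kernel is; accuracy is controlled by the grid spacing.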

Purdue ReaxFF: Results
- Approximately 10x faster than the competing Reax code
- 10-20x smaller memory footprint, adaptive to resource and problem requirements

Purdue Reax
- Y. Park, H. Aktulga, A. Grama, A. Strachan, "Strain relaxation in Si/Ge/Si nanoscale bars from MD simulations," J. Appl. Phys. 106, 034304 (2009).
- J. Fogarty, H. Aktulga, A. van Duin, A. Grama, S. Pandit, "A Reactive Simulation of the Silica-Water Interface," J. Comp. Phys. (2010).
- J. Fogarty, H. Aktulga, A. Grama, S. Pandit, "Oxidative Damage in Lipid Bilayers: A Reactive Molecular Dynamics Study," Biophys. Soc. (2010).

Purdue Reax: Performance
Reference system: 6540-atom bulk water.
QEq tolerance = 10^-6 (refactor every 100 steps); QEq tolerance = 10^-10 (refactor every 30 steps).

                          tol = 10^-6         tol = 10^-10
    solver                matvecs  time (s)   matvecs  time (s)
    CG + diagonal            31      0.18        95      0.54
    GMRES + diagonal         18      0.11        81      0.49
    CG + ILU(10^-2)           9      0.06        18      0.13
    GMRES + ILU(10^-2)        6      0.04        15      0.11

ILU-based preconditioners: 3x better performance! QEq now takes as little as 6-7% of total time.
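
For reference, here is a minimal C sketch of the baseline solver in the table, CG with a diagonal (Jacobi) preconditioner, run on a small dense toy system. The names (pcg, matvec, dot) and the 4x4 matrix are illustrative, not PuReMD code, which stores the QEq matrix in sparse form and swaps in the GMRES and ILU variants measured above. The function returns the matvec count, which is the quantity the table reports.

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    #define N 4   /* toy size; the real QEq system has one unknown per atom */

    static double dot(const double *a, const double *b, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += a[i] * b[i];
        return s;
    }

    static void matvec(const double A[N][N], const double *x, double *y)
    {
        for (int i = 0; i < N; i++) {
            y[i] = 0.0;
            for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
        }
    }

    /* Solve A x = b with Jacobi-preconditioned CG to a relative
       residual tolerance; returns the number of matvecs performed. */
    static int pcg(const double A[N][N], const double *b, double *x, double tol)
    {
        double r[N], z[N], p[N], q[N];
        int matvecs = 0;

        matvec(A, x, q); matvecs++;
        for (int i = 0; i < N; i++) r[i] = b[i] - q[i];     /* r = b - A x */
        for (int i = 0; i < N; i++) z[i] = r[i] / A[i][i];  /* z = M^-1 r  */
        memcpy(p, z, sizeof p);
        double rz = dot(r, z, N);
        double bnorm = sqrt(dot(b, b, N));

        while (sqrt(dot(r, r, N)) > tol * bnorm) {
            matvec(A, p, q); matvecs++;
            double alpha = rz / dot(p, q, N);
            for (int i = 0; i < N; i++) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
            for (int i = 0; i < N; i++) z[i] = r[i] / A[i][i];
            double rz_new = dot(r, z, N);
            for (int i = 0; i < N; i++) p[i] = z[i] + (rz_new / rz) * p[i];
            rz = rz_new;
        }
        return matvecs;
    }

    int main(void)
    {
        double A[N][N] = { {4,1,0,0}, {1,4,1,0}, {0,1,4,1}, {0,0,1,4} };
        double b[N] = {1,1,1,1}, x[N] = {0,0,0,0};
        printf("converged in %d matvecs\n", pcg(A, b, x, 1e-10));
        return 0;
    }

The diagonal preconditioner costs one division per entry per iteration; ILU preconditioners cut the iteration count further at the price of a factorization, which is amortized across timesteps via the "refactor every N steps" setting above.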

Purdue Parallel Reax
- Inherits major parts of the code from SerialReax
- Slower QEq solver: CG with a diagonal preconditioner
- Larger memory footprint: conservative allocation plus communication buffers (see the allocation sketch below)
- Internal release: Feb 16, 2010
- Will be used in PRISM metal-dielectric contact simulations
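
To illustrate the footprint difference, here is a minimal C sketch of grow-on-demand list allocation in the spirit of the serial code's dynamic memory management, as opposed to conservative up-front allocation. The names (list_t, grow_list, GROWTH_FACTOR) are illustrative, not the PuReMD API.

    #include <stdlib.h>

    #define GROWTH_FACTOR 1.2   /* small safety margin over observed need */

    typedef struct {
        int *entries;   /* flattened interaction list */
        int  n;         /* entries currently in use   */
        int  cap;       /* allocated capacity         */
    } list_t;

    /* Reallocate only when the observed count exceeds capacity, keeping
       the footprint close to what the current configuration needs. */
    static int grow_list(list_t *l, int needed)
    {
        if (needed <= l->cap) return 0;
        int new_cap = (int)(needed * GROWTH_FACTOR);
        int *p = realloc(l->entries, new_cap * sizeof *p);
        if (!p) return -1;   /* caller handles allocation failure */
        l->entries = p;
        l->cap = new_cap;
        return 0;
    }

    int main(void)
    {
        list_t nbrs = { NULL, 0, 0 };
        if (grow_list(&nbrs, 1000) == 0)   /* estimate from initial geometry */
            nbrs.n = 1000;
        grow_list(&nbrs, 1500);            /* atoms moved; list outgrew capacity */
        free(nbrs.entries);
        return 0;
    }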

Purdue Parallel Reax: Performance

Bulk water system (6540 atoms):
    executable               cores   time/step (s)   QEq time/step (s)
    SerialReax (icc -fast)     1         0.74             0.12
    ParallelReax (icc -O3)     1         1.46             0.55

Bilayer system (56800 atoms):
    executable               cores   time/step (s)   QEq time/step (s)
    SerialReax (icc -fast)     1         7.76             1.34
    ParallelReax (icc -O3)     1        13.30             6.30

Performance degrades mostly due to the parallel QEq solver; working with Dr. Manguoglu on a SPIKE-based QEq solver.
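
Part of the parallel QEq cost comes from the global reductions inside CG, in addition to the halo exchange for the sparse matvec. Below is a minimal MPI sketch in C of a distributed dot product, assuming each rank owns a contiguous slice of the vectors; dist_dot is an illustrative name, not the PuReMD API. CG performs a few such reductions per iteration, and each one is a global synchronization point.

    #include <mpi.h>
    #include <stdio.h>

    /* Each rank owns n_local entries of the distributed vectors. */
    static double dist_dot(const double *a, const double *b, int n_local)
    {
        double local = 0.0, global = 0.0;
        for (int i = 0; i < n_local; i++) local += a[i] * b[i];
        /* One global synchronization per dot product; at scale these
           reductions, not the local arithmetic, dominate solver time. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        return global;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        double a[2] = { 1.0, 2.0 }, b[2] = { 3.0, 4.0 };  /* local slices */
        double d = dist_dot(a, b, 2);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) printf("global dot = %f\n", d);
        MPI_Finalize();
        return 0;
    }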

Scaling Results

Integration Efforts
- Initial QEq integration into LAMMPS needs to be redone
- Changes to the LAMMPS interface
- Changes to Purdue Reax
- Integration of Purdue Reax 2.0 into LAMMPS