Development of the Nanoconfinement Science Gateway


Development of the Nanoconfinement Science Gateway
Suresh Marru, Vikram Jadhao — Intelligent Systems Engineering
Gateway: https://nanoconfinement.sciencegateways.iu.edu/
Jadhao Lab: https://jadhaolab.engineering.indiana.edu/

Self-assembly of Nanoparticles Self-assembly of nanoparticles is important in the design of advanced materials. For charged nanoparticles, self-assembly is governed by the distribution of ions between the nanoparticles.

Computing the distribution of ions accurately Confinement formed by nanoparticles: ions (blue and green circles) are confined by nanoparticle surfaces, which are approximated as planar interfaces because the assembling nanoparticles are much larger than the ions. The distribution is determined by the electrostatic and entropic interactions between the ions. For unpolarizable interfaces with no dielectric mismatch, standard molecular dynamics (MD), based on Newton's laws of motion, produces good results. In general, however, the nanoparticles and the surrounding solution (water, in this case) have different dielectric properties; for such polarizable interfaces, an advanced technique is needed: a novel application of Car-Parrinello molecular dynamics (CPMD). The Jadhao Lab proposed this theory, and the current simulations build on that prior work.

Computing Specifications Both standard MD and the advanced CPMD are computationally intensive: simulating 500 ions with MD for 1 million time steps (one nanosecond of real-time dynamics) takes 12 hours on a single processor, and CPMD, which additionally calculates the induced charges, is about 5 times slower than MD. These costs motivate using OpenMP and MPI to scale and parallelize the simulation codes and to make effective use of local and national computing resources. Currently the simulations run with OpenMP across multiple cores, yielding a 10x speedup.
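The OpenMP speedup comes from work-sharing the dominant cost, the pairwise force computation. A minimal sketch of that pattern, in reduced units with the Coulomb constant set to 1 (this is an illustration of the technique, not the lab's production code):

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Ion { double x, y, z, q; };

// Pairwise Coulomb forces on each ion. The outer loop is the natural OpenMP
// work-sharing target; the pragma is ignored when compiled without -fopenmp,
// so the same code also runs serially.
std::vector<std::array<double, 3>> computeForces(const std::vector<Ion>& ions) {
    const int n = static_cast<int>(ions.size());
    std::vector<std::array<double, 3>> f(n);  // value-initialized to zeros
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (j == i) continue;
            const double dx = ions[i].x - ions[j].x;
            const double dy = ions[i].y - ions[j].y;
            const double dz = ions[i].z - ions[j].z;
            const double r2 = dx * dx + dy * dy + dz * dz;
            const double s = ions[i].q * ions[j].q / (r2 * std::sqrt(r2));
            f[i][0] += s * dx;  // F_i = q_i q_j (r_i - r_j) / r^3
            f[i][1] += s * dy;
            f[i][2] += s * dz;
        }
    }
    return f;
}
```

Each iteration of the outer loop writes only to `f[i]`, so threads never race on shared state; that independence is what makes the 10x OpenMP acceleration attainable.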

Why a new gateway? This is an application-specific gateway built on top of a standard, general-purpose gateway framework — a specialized gateway targeting soft-matter researchers: physicists, chemists, materials engineers, and chemical engineers. While the MD parts could be built over general-purpose software like LAMMPS, the MD code here is highly tuned and optimized for this problem, and CPMD is a new technique that will be made available through the gateway. This is an active research area that requires the community at large to participate and iterate on improving the simulations; that feedback, together with method improvements and user support, enables a user-tested, user-informed, and user-friendly tool to be deployed on nanoHUB (through the newly awarded Engineered nanoBIO node).

Goals of the Nanoconfinement Science Gateway To provide a web-based platform with a sophisticated yet user-friendly computing environment that engages the nanoconfinement community by empowering them to launch and monitor simulations. To enable users to explore the effects of nanoscale confinement on the distribution of ions. To build application-specific input generation, output visualization, and data production to study the behavior of ions near nanoparticle surfaces and related material phenomena. To aid related computational tool development for deployment on nanoHUB, based on iterative user feedback.

Gateway high level architecture

Gateway implementation details Built over the Apache Airavata science gateway framework and utilizes SciGaP services (Airavata hosted in the cloud). Users input two categories of parameters: physical parameters (confinement width, ion valency, salt concentration, etc.) and simulation job parameters (simulation timestep, wall-time limit, etc.). Output: ionic distributions and movies of ion dynamics in confinement.
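The two parameter categories can be pictured as two plain records that the gateway fills from the web form before staging a job. Field names and default values below are illustrative assumptions, not the gateway's actual schema:

```cpp
// Hypothetical sketch of the gateway's two input categories; the real
// gateway defines its own schema, these names are for illustration only.
struct PhysicalParameters {
    double confinementWidthNm = 3.0;   // separation of the confining surfaces
    int ionValency = 1;                // e.g. 1, 2, or 3
    double saltConcentrationM = 0.1;   // salt concentration in molars
};

struct SimulationJobParameters {
    double timestep = 0.001;           // MD integration timestep
    int wallTimeLimitMinutes = 60;     // scheduler wall-time limit
};
```

Keeping the physics inputs separate from the scheduler inputs lets the gateway validate each independently — a bad wall-time limit can be rejected without touching the science parameters.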

Sample output Researcher's view of a sample MD-simulation result on the gateway: distributions of ions described by the density profile n(z) (in units of molars) as a function of z (nm). Ions are confined by two surfaces at -1.5 nm and 1.5 nm; different symbols denote ions of valencies 1, 2, and 3.
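A profile like n(z) is built by binning ion z-coordinates between the two surfaces over the trajectory. A minimal sketch (converting counts to molars would further divide by Avogadro's number times the bin volume in liters and average over frames; this version returns raw per-bin counts):

```cpp
#include <vector>

// Histogram of ion z-coordinates (nm) between the confining surfaces at
// zLo and zHi; this is the raw-count version of the n(z) profile shown
// on the gateway.
std::vector<int> densityProfile(const std::vector<double>& z,
                                double zLo, double zHi, int nBins) {
    std::vector<int> hist(nBins, 0);
    const double binWidth = (zHi - zLo) / nBins;
    for (double zi : z) {
        const int b = static_cast<int>((zi - zLo) / binWidth);
        if (b >= 0 && b < nBins) ++hist[b];  // ignore ions outside the slab
    }
    return hist;
}
```

For the gateway's geometry one would call it with `zLo = -1.5`, `zHi = 1.5`, matching the surfaces at -1.5 nm and 1.5 nm.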

Current Status The current codes, with OpenMP parallelization across 16 threads, run in about 1 hour for a mid-size candidate system of 500 ions simulated for 1 million time steps. Work is in progress to make the method more efficient using MPI, and to extend the current output-data download to browser-based interactive visualizations within the gateway. The focus so far has been on CPU-based resources on the SDSC Comet and IU Big Red II clusters; we plan to explore other hardware to further improve simulation run times.
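One common shape for the planned MPI work is a block decomposition: each rank owns a contiguous slice of the ion array, computes forces for its slice, and the per-rank partial results are then combined (e.g. with an MPI_Allreduce). The sketch below shows only the slice arithmetic, with the MPI calls omitted; it is an assumed strategy, not the project's committed design:

```cpp
#include <algorithm>
#include <utility>

// Split nIons ions as evenly as possible across nRanks MPI ranks; the first
// (nIons % nRanks) ranks get one extra ion. Returns the half-open index
// range [lo, hi) owned by the given rank.
std::pair<int, int> ionRange(int nIons, int rank, int nRanks) {
    const int base = nIons / nRanks;
    const int rem = nIons % nRanks;
    const int lo = rank * base + std::min(rank, rem);
    const int hi = lo + base + (rank < rem ? 1 : 0);
    return {lo, hi};
}
```

For the 500-ion benchmark system on 16 ranks, the first 4 ranks would own 32 ions each and the rest 31, so the N^2 force loop's cost splits nearly evenly.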

Road Map

Grant Acknowledgements This work is partially supported by the National Science Foundation under the Network for Computational Nanotechnology (NCN) Engineered nanoBIO Node (award 1720625) and Science Gateways Platform as a Service (award 1339774). Thanks to the Apache Airavata Project Management Committee. Computational resources were provided by an XSEDE startup allocation (grant TG-DMR170089) and the Big Red II supercomputer at Indiana University.

MD Simulation of confined ions [Figure: schematic of the simulation loop for a 1:1 salt at 0.1 M in water — initialize ions, compute forces, move ions, repeat — with the ions between two confining interfaces.]