National Institute of Advanced Industrial Science and Technology
Flexible, robust, and efficient multiscale QM/MD simulation using GridRPC and MPI
Yoshio Tanaka, Hiroshi Takemiya (National Institute of AIST, Japan)
Shuji Ogata (Nagoya Institute of Technology, Japan)

Outline
- Target simulation: Atomic Force Microscope Tip-Induced Anodic Oxidation
  - Multiscale hybrid QM/Classical simulation
  - Behavior and requirements
- Implementation
  - GridRPC + MPI
  - Strategy for the long run
- Ongoing experiments
  - Experimental environments
  - Live status and demonstration
- Summary and future work

National Institute of Advanced Industrial Science and Technology Target simulation - Atomic Force Microscope Tip Induced Anodic Oxidation -

Mechanical and Chemical Reactions with Scanning Probe Microscopy
- AFM nano-rubbing: polymer film on substrate, e.g., locally oriented liquid crystal; aggregation of molecules
- Atomic-scale friction of MEMS, e.g., stick-slip process, under smaller vs. larger pressure; furrows
- AFM anodic oxidation, e.g., lithography: e- adsorption and water form local oxides (SiO2) on H-saturated Si

Relations between external strain, microscopic structure, and oxidation
1. Atomic-scale commensuration of tip and substrate
2. Direction of motion
3. Tip pressure
4. Inserted molecules (humidity)
5. Electron transfer
[Figure: nanoscale tip under strain and motion; oxidation at the contact region of H-saturated Si(100)]

Hybrid QM(DFT)-CL(MD) Simulation Scheme
- Hybrid Coarse-Grained-Particles/MD simulation scheme
- Hybrid QM(DFT)-CL(MD) simulation scheme
  - seamless coupling with the buffered-cluster method
  - adaptive choice of the QM region
Financial support: ACT-JST (year ), JST-CREST (2005-present)

Hybrid QM-CL Simulation Run
[Simulation snapshots at 15 fs, 300 fs, and 525 fs with a zoom-out view; tip slides at v = 0.009 Å/fs; atom colors distinguish QM-Si, CL-Si, QM-H, and CL-H; bottom substrate layer fixed]
- Si-Si dimers along the slide direction
- Formation of Si-Si bonds between tip and substrate
- Detachment of saturation-H atoms (detached QM-H atom)
- Expansion of the QM region

Requirements of the simulation
Flexibility
- Adaptive expansion of the QM region: the number of atoms in a QM region may increase or decrease, and the number of QM regions may increase or decrease
Robustness
- The simulation needs to continue for more than a few weeks or months, so it should be capable of fault recovery
Efficiency
- Each compute-intensive QM simulation runs on hundreds of CPUs
- Each (independent) QM simulation runs on a different cluster

National Institute of Advanced Industrial Science and Technology Implementation - GridRPC + MPI - - Strategy for the long run -

Algorithm and Implementation
MD part (MPI_MD_WORLD):
- initial set-up
- calculate MD forces of the QM+MD regions
- send data of QM atoms to the QM servers via GridRPC; receive QM forces
- calculate MD forces of the QM region
- update atomic positions and velocities
QM part (MPI_QM_WORLD, one per QM server):
- calculate the QM force of its QM region
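The interplay of the two layers can be made concrete with a short sketch. Below is a minimal client-side loop in C, assuming the standard GridRPC API (grpc_call_async / grpc_wait_all, as provided by Ninf-G) and a hypothetical remote function qm/calc_force with an invented argument layout; the real code's data exchange and force mixing are more involved.

```c
/* Sketch only: "qm/calc_force", "client.conf", and the data layout
 * are illustrative, and error handling is omitted for brevity. */
#include <mpi.h>
#include <grpc.h>        /* Ninf-G's GridRPC client API */

#define NQM   2          /* number of QM regions (illustrative)  */
#define NATOM 64         /* max atoms per QM region (illustrative) */

int main(int argc, char **argv)
{
    grpc_function_handle_t h[NQM];
    grpc_sessionid_t sid[NQM];
    static double coords[NQM][3 * NATOM];  /* QM-atom positions      */
    static double forces[NQM][3 * NATOM];  /* QM forces from servers */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    grpc_initialize("client.conf");        /* Ninf-G client configuration */

    /* One handle per QM region; in the real run the server name
     * comes from the hosts-information file shown later. */
    for (int i = 0; i < NQM; i++)
        grpc_function_handle_init(&h[i], "example-cluster.org",
                                  "qm/calc_force");

    for (int step = 0; step < 1000; step++) {
        /* ... MD forces of the QM+MD regions (MPI_MD_WORLD) ... */

        if (rank == 0) {
            /* Dispatch every QM region asynchronously, then wait. */
            for (int i = 0; i < NQM; i++)
                grpc_call_async(&h[i], &sid[i], 3 * NATOM,
                                coords[i], forces[i]);
            grpc_wait_all();
        }
        MPI_Bcast(forces, NQM * 3 * NATOM, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* ... combine MD and QM forces, update positions/velocities ... */
    }

    for (int i = 0; i < NQM; i++)
        grpc_function_handle_destruct(&h[i]);
    grpc_finalize();
    MPI_Finalize();
    return 0;
}
```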

Does the implementation satisfy the requirements?
Flexibility
- GridRPC enables dynamic join/leave of QM servers.
- GridRPC enables dynamic expansion of a QM server.
Robustness
- GridRPC detects errors, and the application can implement its own recovery code.
Efficiency
- GridRPC can easily handle multiple clusters.
- Local MPI provides high performance within a cluster through fine-grained parallelism.
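To illustrate the robustness point: GridRPC calls return error codes, so the client itself can re-bind a failed handle to another cluster and retry. A minimal sketch, reusing the hypothetical function name from the loop above and an invented helper pick_next_cluster():

```c
#include <grpc.h>

extern char *pick_next_cluster(void);   /* hypothetical: consults the
                                           hosts-information file     */

/* Sketch only: issue one QM call, retrying on another cluster
 * whenever the GridRPC layer reports an error. */
void call_qm_with_recovery(grpc_function_handle_t *h,
                           int natom, double *coords, double *forces)
{
    grpc_error_t err = grpc_call(h, natom, coords, forces);
    while (err != GRPC_NO_ERROR) {
        grpc_function_handle_destruct(h);   /* drop the failed server */
        grpc_function_handle_init(h, pick_next_cluster(),
                                  "qm/calc_force");
        err = grpc_call(h, natom, coords, forces);   /* retry elsewhere */
    }
}
```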

Strategy for the long run
- It is impossible to run the simulation for a few months on a fixed set of clusters, so a QM simulation will migrate to another cluster, either intentionally or unintentionally.
  - Intentional migration: the maximum runtime for the cluster is exceeded, or the reservation period has expired.
  - Unintentional migration: an error/fault is detected.
- The next cluster is selected either by reservation or by a simple selection algorithm (sketched below) that considers the number of available CPUs, the number of requested CPUs, and records of past utilization.
- The simulation reads a hosts-information file at every time step, so a cluster can join or leave the experiment on the fly.
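A sketch of what such a selection algorithm could look like; the host_info fields and the scoring formula are invented for illustration, not taken from the actual implementation:

```c
#include <stddef.h>

/* Sketch only: fields and weights are illustrative. */
typedef struct {
    char addr[256];        /* cluster front-end address     */
    int  cpu_avail;        /* CPUs currently available      */
    int  success, failure; /* records of past utilization   */
} host_info;

/* Pick the cluster that can serve cpu_req CPUs and has the best
 * combination of free CPUs and past success rate. */
const char *select_cluster(const host_info *hosts, int n, int cpu_req)
{
    const char *best = NULL;
    double best_score = -1.0;
    for (int i = 0; i < n; i++) {
        if (hosts[i].cpu_avail < cpu_req)
            continue;                        /* not enough CPUs */
        double runs  = hosts[i].success + hosts[i].failure + 1.0;
        double score = hosts[i].cpu_avail *
                       (hosts[i].success + 1.0) / runs;
        if (score > best_score) {
            best_score = score;
            best = hosts[i].addr;
        }
    }
    return best;    /* NULL if no cluster can serve the request */
}
```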

Examples of hosts information

NAME SDSC
ID 2
ADDR rocks-52.sdsc.edu
FROM 2005/4/18/12/30/30
TO 2006/9/18/12/30/30
MAX_AVAIL
CPU_MAX 32
CPU_INIT 32

NAME F32-2
ID 9
ADDR fsvc001.asc.hpcc.jp
FROM 2005/10/7/9/0/0
TO 2006/10/11/12/0/0
MAX_AVAIL
CPU_MAX 128
CPU_INIT 64
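For illustration, a simplified C reader for one record of this keyword/value format; FROM, TO, and MAX_AVAIL are skipped, and the record is assumed to end at CPU_INIT as in the excerpt above:

```c
#include <stdio.h>
#include <string.h>

typedef struct {
    char name[64], addr[256];
    int  id, cpu_max, cpu_init;
} host_entry;

/* Sketch only: scans "KEY value" tokens until the record's last
 * field (CPU_INIT); unrecognized tokens (dates etc.) are skipped. */
int read_host(FILE *fp, host_entry *h)
{
    char key[32];
    while (fscanf(fp, "%31s", key) == 1) {
        if      (strcmp(key, "NAME") == 0)    fscanf(fp, "%63s",  h->name);
        else if (strcmp(key, "ID") == 0)      fscanf(fp, "%d",    &h->id);
        else if (strcmp(key, "ADDR") == 0)    fscanf(fp, "%255s", h->addr);
        else if (strcmp(key, "CPU_MAX") == 0) fscanf(fp, "%d",    &h->cpu_max);
        else if (strcmp(key, "CPU_INIT") == 0) {
            fscanf(fp, "%d", &h->cpu_init);
            return 1;                       /* one complete record read */
        }
    }
    return 0;                               /* end of file */
}
```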

National Institute of Advanced Industrial Science and Technology Ongoing experiment - Experimental environments - - Live status and demonstration -

Experimental Environments (as of Oct. 19)

#    Cluster    Site           Used #CPU    Physical #CPU
1    F32-2      AIST                        (2 x 68)
2    F32-3      AIST                        (2 x 132)
3    P32        AIST                        (2 x 128)
4    M64        AIST           64           256 (4 x 64)
5    ISTBS      U. Tokyo                    (2 x 170)
6    POOL       Tokushima U                 (1 x 47)
7    ALAB       TITECH         32           60 (2 x 30)
8    Rocks-52   SDSC           16           120 (4 x 30)
9    AMATA      KU             8            8 (1 x 12)
10   ASE        NCHC           8            8 (2 x 8)
11   UME        AIST           8            8 (2 x 14)
12   TGC        NCSA           8            8 (4 x 12)

Used #CPU is decided based on memory size, busyness, and stability of launching MPI processes.

Summary and future work
- GridRPC + MPI enables flexible, robust, and high-performance Grid applications:
  - flexible: allows dynamic resource allocation / migration
  - robust: detects errors and recovers from faults
  - efficient: manages hundreds to thousands of CPUs
- A joint experiment with TeraGrid is planned:
  - SIMOX (Separation by Implantation of Oxygen) simulation
  - a run of more than 1 week on 5 x 128-CPU clusters reserved in advance
- Research issues:
  - load balancing between QM simulations
  - a more clever scheduling algorithm
  - ...