National Institute of Advanced Industrial Science and Technology
Status report on the large-scale long-run simulation on the grid - Hybrid QM/MD simulation -
Grid Technology Research Center, AIST
Hiroshi Takemiya, Yoshio Tanaka

Goal of the experiment
To verify the effectiveness of our programming approach for large-scale, long-run grid applications:
- Flexibility
- Robustness
- Efficiency

Friction simulation
- A nano-scale prober moves on the Si substrate
- Requires hundreds of CPUs
- Requires a long simulation time, over a few months
- The number of QM regions and the number of QM atoms change dynamically
- 2 QM regions with … QM atoms (… atoms in total)
[Figure: friction simulation snapshot at 525 fs; prober velocity v = 0.009/fs]

Gridifying the application
- Using GridRPC + MPI (see the sketch below)
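To make the GridRPC + MPI structure concrete, here is a minimal sketch of a gridified main loop using the standard GridRPC C API (the API that Ninf-G implements). This is not the actual simulation code: the server names, the remote entry name "qm/force_calc", its argument list, and the step count are hypothetical placeholders; the MD update itself runs locally (in the real application, under MPI).

```c
/* Minimal sketch of the GridRPC + MPI structure (hypothetical names).
 * Each QM region is dispatched to a remote cluster via GridRPC while
 * the MD time-stepping runs locally. Uses the standard GridRPC C API
 * (grpc_*) as implemented by Ninf-G. */
#include <grpc.h>   /* GridRPC API header (Ninf-G) */

#define N_QM_REGIONS 2   /* number of QM regions (changes dynamically) */

int main(int argc, char *argv[])
{
    grpc_function_handle_t handles[N_QM_REGIONS];
    grpc_sessionid_t ids[N_QM_REGIONS];
    static char *servers[N_QM_REGIONS] =
        { "qm-cluster-a.example.org", "qm-cluster-b.example.org" };
    int step, i;

    if (grpc_initialize("client.conf") != GRPC_NO_ERROR) return 1;

    /* One function handle per QM region. */
    for (i = 0; i < N_QM_REGIONS; i++)
        grpc_function_handle_init(&handles[i], servers[i], "qm/force_calc");

    for (step = 0; step < 1000; step++) {
        /* ... MD update of all atoms (locally, under MPI) ... */

        /* Launch all QM force calculations asynchronously. */
        for (i = 0; i < N_QM_REGIONS; i++)
            grpc_call_async(&handles[i], &ids[i] /*, region data ... */);

        /* Wait for every QM region before the next MD step. */
        grpc_wait_all();

        /* ... merge the QM forces back into the MD system ... */
    }

    for (i = 0; i < N_QM_REGIONS; i++)
        grpc_function_handle_destruct(&handles[i]);
    grpc_finalize();
    return 0;
}
```

Binding each QM region to its own function handle is what gives the approach its flexibility: handles can be destructed and re-initialized between time steps as QM regions appear, disappear, or grow.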

Testbed for the Friction Simulation
Used 11 clusters with 632 CPUs in total, in 8 organizations.
- PRAGMA clusters: SDSC (32 CPUs), KU (8 CPUs), NCSA (8 CPUs), NCHC (8 CPUs), Titech-1 (8 CPUs), AIST (8 CPUs)
- AIST Super Cluster: M64 (128 CPUs), F32-1 (128 CPUs)
- Japan clusters: U-Tokyo (128 CPUs), Tokushima-U (32 CPUs), Titech-2 (16 CPUs)
[Figure: map of the testbed sites: M64, F32, NCHC, NCSA, SDSC, U-Tokyo, Titech-2, Tokushima-U, Titech-1, AIST, KU]

Result of the Friction Simulation
- Experiment time: … days
- Longest calculation time: 22 days
- Manual restarts: 2
- Execution failures: 165; succeeded in recovering from all of these failures (see the retry sketch below)
- Changed the number of CPUs in use 18 times; succeeded in adjusting the number of CPUs to the number of QM regions/QM atoms
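The recovery behind these numbers can be sketched as a retry loop around each GridRPC call: when a call fails, the client drops the dead handle, re-binds the QM region to another cluster, and reissues the call. A minimal sketch, assuming a hypothetical helper pick_backup_server() that returns the name of another available cluster:

```c
/* Hedged sketch of the per-region retry loop behind the recovered
 * failures; pick_backup_server() and "qm/force_calc" are hypothetical
 * names, and error handling is simplified. */
#include <grpc.h>

extern char *pick_backup_server(void);  /* hypothetical helper */

grpc_error_t call_qm_with_retry(grpc_function_handle_t *handle)
{
    grpc_sessionid_t id;
    grpc_error_t rc;

    rc = grpc_call_async(handle, &id /*, region data ... */);
    if (rc == GRPC_NO_ERROR)
        rc = grpc_wait(id);

    while (rc != GRPC_NO_ERROR) {
        /* The call or the server failed: drop the dead handle,
           re-bind the region to another cluster, and retry. */
        grpc_function_handle_destruct(handle);
        grpc_function_handle_init(handle, pick_backup_server(),
                                  "qm/force_calc");
        rc = grpc_call_async(handle, &id /*, region data ... */);
        if (rc == GRPC_NO_ERROR)
            rc = grpc_wait(id);
    }
    return rc;
}
```

Because a failed call loses only one region's work for one time step, recovery stays local and cheap, which is consistent with 165 failures being absorbed with only 2 manual restarts.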

Summary and future work
- Our approach is effective for running large-scale grid applications over long periods.
- Need more grid services:
  - Getting information on available resources
  - Resource reservation
  - Coordination with resource managers/schedulers
- Need a cleaner MPI:
  - mpich quits leaving processes and IPC resources behind
  - Use GridMPI in place of mpich