17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
COMPLAINTS TO RESOURCE GROUP
Habibah A Wahab, Suhaini Ahmad, Nur Hanani Che Mat
School of Pharmaceutical Sciences, Universiti Sains Malaysia

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
MIGRATING AMBER TO THE GRID: SYSTEM REQUIREMENTS
–Software: Globus 2.x, 3.x or 4.x; Fortran 90 compiler
–Hardware: ~50 GB of disk space; Linux on a 32-bit Intel machine
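These prerequisites are easy to verify up front on each candidate head node. Below is a minimal sketch of such a check (ours, not from the deck), assuming only that the standard Globus client tools and a Fortran 90 compiler should be on the PATH and that roughly 50 GB of disk must be free, as listed above.

import os
import shutil

REQUIRED_TOOLS = ["globus-job-run", "globus-url-copy"]   # Globus client CLIs
F90_COMPILERS = ["ifort", "pgf90", "gfortran", "f90"]    # any one will do
MIN_FREE_BYTES = 50 * 1024**3                            # ~50 GB, as on the slide

def check_prerequisites(workdir="."):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for tool in REQUIRED_TOOLS:
        if shutil.which(tool) is None:
            problems.append(f"Globus tool not found on PATH: {tool}")
    if not any(shutil.which(fc) for fc in F90_COMPILERS):
        problems.append("no Fortran 90 compiler found (needed to build AMBER)")
    stat = os.statvfs(workdir)
    free = stat.f_bavail * stat.f_frsize
    if free < MIN_FREE_BYTES:
        problems.append(f"only {free / 1024**3:.1f} GB free, ~50 GB recommended")
    return problems

if __name__ == "__main__":
    for p in check_prerequisites() or ["all AMBER-on-grid prerequisites look OK"]:
        print(p)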

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
HOW WE BEGAN…
We contacted Cindy for testing resources. Allocated resources:
–USM – hawk.usm.my
–USM – aurora.cs.usm.my
–ROCK-52 – rock-52.sdsc.edu
–ASCC – pragma001.grid.sinica.edu.tw
–IOIT-HCM – venus.ioit-hcm.ac.vn
–UNAM – malicia.super.unam.mx
–Thank you, Cindy!
Contacting the system administrators is fine, but is there any system to which we could just submit our jobs without worrying about where they will be executed?

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
WHAT WE ENCOUNTERED…
Hardware:
–Heterogeneous architectures across clusters
Globus authentication:
–Requires a user account on every cluster
–Globus user certificate set up on each cluster
–The certificate needs to be signed by the institution's CA admin
–The user has to know every cluster in PRAGMA (host address and number of nodes at each site)
–Certain ports cannot be accessed, e.g. the gsiftp port used for file transfer
This is okay, a lot of work, but we wish this process could be simpler…
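In practice, each of these requirements had to be verified by hand at every site. The following is a hedged sketch (ours, not part of the deck) of what that per-cluster check amounts to with the standard Globus client tools: refresh a proxy with grid-proxy-init, test GRAM authentication with globus-job-run, and test the gsiftp port with globus-url-copy. The host list is taken from the earlier slide; everything else is illustrative.

import subprocess

# Cluster head nodes allocated on the testbed (from the slides)
CLUSTERS = [
    "hawk.usm.my",
    "aurora.cs.usm.my",
    "rock-52.sdsc.edu",
    "pragma001.grid.sinica.edu.tw",
    "venus.ioit-hcm.ac.vn",
    "malicia.super.unam.mx",
]

def run(cmd, timeout=120):
    """Run a command, return (ok, combined output)."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return out.returncode == 0, (out.stdout + out.stderr).strip()
    except subprocess.TimeoutExpired:
        return False, "timed out"

def check_site(host):
    # GRAM authentication: run a trivial job on the gatekeeper
    gram_ok, _ = run(["globus-job-run", host, "/bin/hostname"])
    # gsiftp: stage a small file to /tmp on the remote side
    gsiftp_ok, _ = run(["globus-url-copy", "file:///etc/hostname",
                        f"gsiftp://{host}/tmp/pragma_port_check"])
    return gram_ok, gsiftp_ok

if __name__ == "__main__":
    # A valid proxy is needed first (prompts for the certificate passphrase)
    subprocess.run(["grid-proxy-init"], check=False)
    for host in CLUSTERS:
        gram_ok, gsiftp_ok = check_site(host)
        print(f"{host:35s} GRAM={'ok' if gram_ok else 'FAIL'} "
              f"gsiftp={'ok' if gsiftp_ok else 'FAIL'}")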

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
MORE ENCOUNTERS…
MPICH/MPI:
–No standard parallel software on the grid, e.g. MPICH (ASCC, UNAM, hawk, IOIT-HCM, aurora) vs LAM (rocks-52)
–The user needs to know whether MPICH/LAM is configured to use rsh or ssh
rsh or ssh?
–Setting up passwordless rsh/ssh between execution nodes
–Non-standardised use of rsh/ssh on the grid: some clusters use rsh, others use ssh
–e.g. rsh – IOIT-HCM; ssh – hawk, aurora, ASCC, UNAM, rocks-52
How we wish there were one standard parallel library and one remote shell running on all the clusters in the PRAGMA testbed…
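Until such a standard exists, the only way to find out what a cluster provides is to ask it. A rough probe sketch follows (ours, not the group's code), using globus-job-run to run small shell commands on each head node; the MPICH P4_RSHCOMMAND check is a heuristic, and the host list is the one above.

import subprocess

CLUSTERS = ["hawk.usm.my", "aurora.cs.usm.my", "rock-52.sdsc.edu",
            "pragma001.grid.sinica.edu.tw", "venus.ioit-hcm.ac.vn",
            "malicia.super.unam.mx"]

def remote(host, command):
    """Run a shell command on a cluster head node through GRAM."""
    out = subprocess.run(["globus-job-run", host, "/bin/sh", "-c", command],
                         capture_output=True, text=True, timeout=120)
    return out.stdout.strip()

if __name__ == "__main__":
    for host in CLUSTERS:
        # Which MPI launcher is first on the PATH?
        mpi = remote(host, "which mpirun mpicc 2>/dev/null | head -1") or "none found"
        # Does the environment hint at rsh or ssh for MPICH ch_p4?  (heuristic)
        rsh = remote(host, "env | grep -E 'P4_RSHCOMMAND|RSHCOMMAND' ; "
                           "which rsh ssh 2>/dev/null")
        print(f"{host}\n  MPI launcher : {mpi}\n  rsh/ssh hints: {rsh or 'unknown'}\n")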

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
STILL MORE…
Compiling parallel AMBER:
–Unable to compile with MPICH/LAM on the clusters
–Able to compile AMBER with MPICH on rocks-52, BUT…
1. it CANNOT BE EXECUTED USING GLOBUS (Figure 1), or
2. it CAN BE EXECUTED USING GLOBUS, but runs on one node only
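For reference, the usual way to get a Globus-launched job onto more than one processor is to request an MPI job explicitly in the GRAM RSL, using the standard count and jobtype=mpi attributes. The sketch below builds such a request and hands it to globusrun; the sander.MPI path, working directory and file names are placeholders, not the setup actually used here.

import subprocess

def submit_mpi_amber(host, nodes, workdir, mdin="md.in"):
    """Ask GRAM for an MPI job so AMBER runs on `nodes` processors, not just one."""
    # RSL: (count=N) requests N processes, (jobtype=mpi) tells the GRAM job
    # manager to start the executable with the local MPI launcher.
    rsl = (f"&(executable={workdir}/sander.MPI)"       # placeholder path
           f" (count={nodes})"
           " (jobtype=mpi)"
           f" (directory={workdir})"
           f" (arguments=-O -i {mdin} -o md.out -p prmtop -c prmcrd -x md.trj)"
           f" (stdout={workdir}/gram.out) (stderr={workdir}/gram.err)")
    return subprocess.run(["globusrun", "-b", "-r", host, rsl],
                          capture_output=True, text=True)

# Example: 8-way run on rocks-52 (hypothetical working directory)
# submit_mpi_amber("rock-52.sdsc.edu", 8, "/home/someuser/amber_run")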

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
BUT THERE IS HOPE FOR US…
Executables can be copied between clusters with a similar architecture and MPICH configuration:
–executables copied from hawk to UNAM, aurora, IOIT-HCM (MPICH configured with rsh)
–executables copied from rocks-52 to ASCC (MPICH configured with ssh)
Wilfred said that Gfarm can overcome this problem… Is it true, Tatebe-san?

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
TESTING AMBER WITH GLOBUS
Tested execution on each cluster, using Globus from hawk to all sites.
Tested gsiftp for sending and receiving files between hawk and the other clusters.
Network condition:
–Globus submission depends on the network condition
–Globus submission may fail, yet the user will not know…
Cluster reliability:
–unexpected cluster problems: a system may be down or unreachable for many reasons, or… Globus was just not working
Cindy, Sue gave up. Instead of working on the 6 clusters you allocated to us (USM – aurora.cs.usm.my, ROCK-52 – rock-52.sdsc.edu, ASCC – pragma001.grid.sinica.edu.tw, IOIT-HCM – venus.ioit-hcm.ac.vn, UNAM – malicia.super.unam.mx), she just worked with 4 clusters: Aurora – 300K, ASCC – 373K and 500K, IOIT-HCM – 400K, UNAM – 473K. I think you know why…
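Because a submission can fail without telling the user, the practical workaround is to poll every job and resubmit by hand when something dies. A hedged sketch of that loop with the GT2 client tools globus-job-submit and globus-job-status follows; the cluster list mirrors the slide (only one ASCC temperature is shown), and the remote wrapper script run_amber.sh is hypothetical.

import subprocess
import time

# Temperature (K) -> cluster actually used (from the slide)
JOBS = {
    "300K": "aurora.cs.usm.my",
    "373K": "pragma001.grid.sinica.edu.tw",
    "400K": "venus.ioit-hcm.ac.vn",
    "473K": "malicia.super.unam.mx",
}

def submit(host, temperature):
    """Submit one AMBER run and return the GRAM job contact URL."""
    # Hypothetical remote wrapper that launches sander for one temperature.
    out = subprocess.check_output(
        ["globus-job-submit", host, "/home/someuser/run_amber.sh", temperature],
        text=True)
    return out.strip()

def status(contact):
    out = subprocess.run(["globus-job-status", contact],
                         capture_output=True, text=True)
    return out.stdout.strip()   # PENDING / ACTIVE / DONE / FAILED

if __name__ == "__main__":
    contacts = {t: submit(h, t) for t, h in JOBS.items()}
    pending = dict(contacts)
    while pending:
        for temp, contact in list(pending.items()):
            state = status(contact)
            print(f"{temp:5s} {JOBS[temp]:30s} {state}")
            if state in ("DONE", "FAILED"):
                del pending[temp]   # FAILED jobs were resubmitted by hand
        time.sleep(60)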

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
WEB INTERFACE?
–Too many commands to remember and things to do to run AMBER on the grid
–A web interface is more user-friendly
–But it must dynamically process the user's commands to run them on the grid
–And it must understand the application (AMBER) workflow and input files
–With this, the user can simply run and concentrate on the simulation (see the sketch below)
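As a rough illustration of what such a front end must do (this is our sketch, not the AMEXg or GridSphere code the group was building), a minimal CGI handler could accept an uploaded mdin file and a target cluster, stage the input over gsiftp and submit the job through GRAM. Every form field, path and wrapper script name here is hypothetical.

#!/usr/bin/env python3
# Minimal CGI sketch: upload an AMBER mdin file, pick a cluster, submit the job.
import cgi
import subprocess
import tempfile

print("Content-Type: text/plain\n")

form = cgi.FieldStorage()
cluster = form.getfirst("cluster", "aurora.cs.usm.my")   # hypothetical form field
mdin = form["mdin"].file.read()                          # uploaded input file

# Save the upload locally, then stage it to the cluster over gsiftp.
with tempfile.NamedTemporaryFile(delete=False, suffix=".in") as f:
    f.write(mdin)
    local = f.name
remote = f"/tmp/{cluster}_md.in"                         # placeholder remote path
subprocess.run(["globus-url-copy", f"file://{local}",
                f"gsiftp://{cluster}{remote}"], check=True)

# Submit the (hypothetical) remote wrapper that runs sander on that input.
contact = subprocess.check_output(
    ["globus-job-submit", cluster, "/home/someuser/run_amber.sh", remote],
    text=True).strip()
print(f"Job submitted to {cluster}; GRAM contact: {contact}")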

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
AMBER WORK FLOW
[Workflow diagram] The user supplies structure coordinates (PDB, XYZ or internal coordinates) to the force field & topology creator, which produces prmtop and prmcrd; together with the mdin control file these drive the minimiser/MD simulator (the simulator engine, launched through the grid middleware), which writes md.out, en.out and the trajectory files consumed by the trajectory analyser. Junk in, junk out!
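The middle of that pipeline is what actually runs on each cluster. A minimal sketch of the minimise-then-MD stage is shown below using sander's standard command-line flags (-i, -o, -p, -c, -r, -x); the file names and the use of the serial sander binary are placeholders.

import subprocess

def sander(mdin, prmtop, incoord, prefix):
    """One sander stage: returns the restart file to feed into the next stage."""
    restart = f"{prefix}.rst"
    subprocess.run(
        ["sander", "-O",
         "-i", mdin,             # control input (minimisation or MD settings)
         "-p", prmtop,           # topology from the creator step
         "-c", incoord,          # starting coordinates / previous restart
         "-o", f"{prefix}.out",  # energies and progress (md.out / en.out)
         "-r", restart,          # final coordinates, input to the next stage
         "-x", f"{prefix}.trj"], # trajectory for the analyser
        check=True)
    return restart

if __name__ == "__main__":
    # Minimise first, then run MD from the minimised structure (placeholder files).
    minimised = sander("min.in", "prmtop", "prmcrd", "min")
    sander("md.in", "prmtop", minimised, "md")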

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
[Architecture diagram] The user uploads files and submits jobs through the user interface on hawk, and downloads and views results there; hawk moves inputs and results to and from Rocks-52, ASCC, Aurora and IOIT-HCM with gsiftp, and submits jobs to each of them with Globus.

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
TESTING…
Thermo-effects of Methionine Aminopeptidase: Molecular Dynamics Studies

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
GLOBUS-JOB-SUBMIT…
Submitted 5 jobs (5 different temperatures of the same system) to 4 different clusters; each job occupies whichever cluster is free. List of clusters and jobs:
–Aurora – 300K
–ASCC – 373K, 500K
–IOIT-HCM – 400K
–UNAM – 473K
Simulation time: 20 ps

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
BENCHMARKING AMEXg
Benchmark: submit 4 different temperatures of the same system to 4 different clusters. List of clusters and jobs:
–Aurora – 300K [running on 16 nodes]
–ASCC – 373K [running on 4 nodes]
–IOIT-HCM – 400K [running on 8 nodes]
–UNAM – 473K [running on 8 nodes]
Simulation time: 20 ps

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Checking… Transferring input files from hawk to other clusters

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Checking… Aurora cluster: receiving files from hawk; job submitted from hawk

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Checking… ASCC cluster: receiving files from hawk; job submitted from hawk

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Checking… IOIT-HCM cluster: receiving files from hawk; job submitted from hawk

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Checking… UNAM cluster: receiving files from hawk; job submitted from hawk

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Checking… Receiving files from hawk; transferring/copying output files from the clusters to hawk

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Interface displayed after uploading input files using AMEXg

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Aurora cluster: transferring output files to hawk

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
ASCC cluster: transferring output files to hawk (cont.)

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
IOIT-HCM cluster: transferring output files to hawk (cont.)

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
UNAM cluster: transferring output files to hawk (cont.)

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
List of output files: results of the MD simulation

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
BENCHMARKING
[Benchmark plots: Aurora – 300K, ASCC – 373K, IOIT-HCM – 400K, UNAM – 473K]
This is far from perfect… We are working with GridSphere, together with Chan Huah Yong, but we are extremely happy that we can run our applications on the grid. If it is okay, we would like to run the applications from time to time on the testbed… But soon we will need to think about the licensing issue, because AMBER is not free…

17th October 2006, PRAGMA 11, Beautiful Osaka, Japan
Sipadan Island, Sabah, Malaysia. Thank you!