7 Stages Heating System Amber Job Submission in P-GRADE Portal
Nazarudin Wijee and Mohd Sidek Salleh, Grid Computing Lab, MIMOS Berhad (www.mimos.my)
© 2010 MIMOS Berhad. All Rights Reserved.


7 Stages Heating System Amber Job Submission in P-GRADE Portal
Nazarudin Wijee, Mohd Sidek Salleh
Grid Computing Lab, MIMOS Berhad

Background
User: Universiti Sains Malaysia
Amber Version: 9
Job Type: MPI
No. of processors: 32 (4 nodes x 8 CPUs, per the PBS script on the next slide)
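For reference, this is roughly what the manual equivalent of the portal's submission step looks like on a PBS Pro cluster; the script file name is illustrative, not from the slides:

qsub amber_stage1.pbs    # submit the stage-1 PBS script shown on the next slide
qstat -u $USER           # check the status of the submitted job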

Amber 7 Stages Heating System Workflow

#PBS -N Amber
#PBS -l select=4:ncpus=8

STAGE=1
PROJECT_DIR=job_execution_directory
EXE=/share/apps/amber9/exe/sander

GBIN=$PROJECT_DIR/gbin$STAGE
PRMTOP=$PROJECT_DIR/prmtop$STAGE
INPCRD=$PROJECT_DIR/inpcrd$STAGE
RESTRT=$PROJECT_DIR/restrt$STAGE
TRAJECTORY=$PROJECT_DIR/trajectory$STAGE
MDINF=$PROJECT_DIR/mdinfo$STAGE
MDOUT=$PROJECT_DIR/mdout$STAGE
MDEN=$PROJECT_DIR/mden$STAGE
MDVEL=$PROJECT_DIR/mdvel$STAGE

cd $PBS_O_WORKDIR
mpirun -np 32 -machinefile $PBS_NODEFILE $EXE -O \
    -i $GBIN -p $PRMTOP -c $INPCRD -r $RESTRT -x $TRAJECTORY \
    -inf $MDINF -o $MDOUT -e $MDEN -v $MDVEL

The RESTRT file from Stage 1 heating is used as the INPCRD for Stage 2 heating.
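The full workflow chains seven such stages, each consuming the previous stage's restart file. A minimal sketch of the chaining logic, assuming the file-naming convention above (the loop itself is illustrative and not part of the slides):

# illustrative: run 7 heating stages in sequence, feeding each stage's
# RESTRT file in as the next stage's INPCRD
PROJECT_DIR=job_execution_directory
for STAGE in 1 2 3 4 5 6 7
do
    if [ $STAGE -gt 1 ]; then
        PREV=$((STAGE - 1))
        # restart coordinates from the previous stage become this stage's input
        cp $PROJECT_DIR/restrt$PREV $PROJECT_DIR/inpcrd$STAGE
    fi
    # ... run sander for stage $STAGE exactly as in the script above ...
done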

Amber 7 Stages Heating System Workflow (Monitor Job)

STAGE=1
# PROJECT_DIR is assumed to be set to the same directory as in the submit job
PROJECT_DIR=job_execution_directory
. /etc/pbs.conf
. executor.info

echo "PBS_SERVER = $PBS_SERVER"
echo "PBS_JOBID = $PBS_JOBID"
echo "EXEC_NODE = `hostname`"
echo "MONITOR_JOB_DIR = `pwd`"
echo

# begin monitoring
FINISH_STATUS="0"
until [[ $FINISH_STATUS -eq "1" ]]
do
    WC=`ssh $PBS_SERVER "tracejob -n 30 $PBS_JOBID | grep 'dequeuing from' | wc -l"`
    if [[ $WC -eq 1 ]]; then
        FINISH_STATUS="1"
    else
        FINISH_STATUS="0"
    fi
done
echo $WC > tracejob.out
echo "Job $PBS_JOBID has finished..."

RESTRT=$PROJECT_DIR/restrt$STAGE
TRAJECTORY=$PROJECT_DIR/trajectory$STAGE
MDINF=$PROJECT_DIR/mdinfo$STAGE
MDOUT=$PROJECT_DIR/mdout$STAGE
MDEN=$PROJECT_DIR/mden$STAGE
MDVEL=$PROJECT_DIR/mdvel$STAGE

# in case some files were not produced...
touch restrt$STAGE
touch trajectory$STAGE
touch mdinfo$STAGE
touch mdout$STAGE
touch mden$STAGE
touch mdvel$STAGE

cp $RESTRT .
cp $TRAJECTORY .
cp $MDINF .
cp $MDOUT .
cp $MDEN .
cp $MDVEL .
exit 0

What does the Monitor job do?
1. It receives the PBS Job ID from the Submit job.
2. During runtime, it connects to the cluster head node and runs PBS Pro's tracejob to check whether the given PBS Job ID has finished.
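One caveat with the loop above: it polls tracejob in a tight loop over ssh. A minimal variant that pauses between checks, assuming a 30-second interval is acceptable for this workflow (the sleep is an addition, not from the slides):

# illustrative: same termination check, but pause between polls
# so the head node is not hammered with ssh/tracejob calls
FINISH_STATUS="0"
until [[ $FINISH_STATUS -eq "1" ]]
do
    sleep 30
    WC=`ssh $PBS_SERVER "tracejob -n 30 $PBS_JOBID | grep 'dequeuing from' | wc -l"`
    if [[ $WC -eq 1 ]]; then
        FINISH_STATUS="1"
    fi
done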

THANK YOU

© 2010 MIMOS Berhad. All Rights Reserved.