E-science grid facility for Europe and Latin America
gLite MPI Tutorial for Grid School
Daniel Alberto Burbano Sefair, Universidad de Los Andes (Bogotá, Colombia)
UNAM, Querétaro, México, 28 September to 10 October 2009

gLite MPI (Outline)
Requirements to submit jobs.
Get information from the Grid.
Structure of an MPI job on the Grid with mpi-start.
Structure of an MPI job on the Grid without mpi-start.

Requirements to submit jobs
MPI-1 is provided by MPICH and LAM; MPI-2 is provided by OpenMPI and MPICH2.
A valid proxy certificate.
Permission to submit jobs to the chosen site.
MPI jobs can run only within a single Grid site; they do not span sites.
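A minimal sketch of obtaining the proxy certificate before submission, assuming membership in the prod.vo.eu-eela.eu VO used throughout these slides (substitute your own VO):

# Create a VOMS proxy for the VO used in these examples.
voms-proxy-init --voms prod.vo.eu-eela.eu
# Check the proxy attributes and remaining lifetime.
voms-proxy-info --all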

Get information from the Grid
Find the sites that support MPICH and MPICH2:

~]$ lcg-info --vo prod.vo.eu-eela.eu --list-ce --query 'Tag=MPICH'
- CE: ce-eela.ciemat.es:2119/jobmanager-lcgpbs-prod_eela
- CE: ce01.eela.if.ufrj.br:2119/jobmanager-lcgpbs-prod
- CE: grid001.cecalc.ula.ve:2119/jobmanager-lcgpbs-prod
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-oneday
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-sixhour
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-thirtym
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-threeday
- CE: kuragua.uniandes.edu.co:2119/jobmanager-lcgpbs-prod

~]$ lcg-info --vo prod.vo.eu-eela.eu --list-ce --query 'Tag=MPICH2'
- CE: grid001.cecalc.ula.ve:2119/jobmanager-lcgpbs-prod
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-oneday
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-sixhour
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-thirtym
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-threeday

Get information from the Grid
Find the sites that support MPICH and their available CPUs:

~]$ lcg-info --vo prod.vo.eu-eela.eu --list-ce --query 'Tag=MPICH' --attrs 'CE,FreeCPUs,TotalCPUs'
- CE: ce-eela.ciemat.es:2119/jobmanager-lcgpbs-prod_eela
  - CE         ce-eela.ciemat.es:2119/jobmanager-lcgpbs-prod_eela
  - FreeCPUs
  - TotalCPUs
- CE: ce01.eela.if.ufrj.br:2119/jobmanager-lcgpbs-prod
  - CE         ce01.eela.if.ufrj.br:2119/jobmanager-lcgpbs-prod
  - FreeCPUs
  - TotalCPUs
- CE: grid001.cecalc.ula.ve:2119/jobmanager-lcgpbs-prod
  - CE         grid001.cecalc.ula.ve:2119/jobmanager-lcgpbs-prod
  - FreeCPUs   22
  - TotalCPUs  24
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-oneday
  - CE         gridgate.cs.tcd.ie:2119/jobmanager-pbs-oneday
  - FreeCPUs
  - TotalCPUs
- CE: kuragua.uniandes.edu.co:2119/jobmanager-lcgpbs-prod
  - CE         kuragua.uniandes.edu.co:2119/jobmanager-lcgpbs-prod
  - FreeCPUs   71
  - TotalCPUs  84

Get information from the Grid
Find the sites that have a shared home directory for MPI.
–What happens when the site has an MPI shared directory? The executable is compiled on one WN and can then be read directly by the other WNs of the same site.
–What happens when there is no MPI shared directory? The executable is compiled on one WN and must then be copied to the other WNs.

~]$ lcg-info --vo prod.vo.eu-eela.eu --list-ce --query 'Tag=MPI_SHARED_HOME'
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-oneday
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-sixhour
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-thirtym
- CE: gridgate.cs.tcd.ie:2119/jobmanager-pbs-threeday

The same query with Tag=MPI-START lists the sites that publish mpi-start support.
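The lcg-info options shown on the previous slides can also be combined; a sketch that lists the CEs publishing mpi-start together with their free and total CPUs (the tag and attribute names are the ones used above):

~]$ lcg-info --vo prod.vo.eu-eela.eu --list-ce --query 'Tag=MPI-START' --attrs 'CE,FreeCPUs,TotalCPUs'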

Structure of an MPI job on the Grid with mpi-start
An mpi-start job is built from four files: mpi-start-wrapper.sh, mpi-hooks.sh, the JDL file and the MPI source file.

JDL file:
JobType = "Normal";
CPUNumber = 16;
Executable = "mpi-start-wrapper.sh";
Arguments = "mpi-test MPICH2";
StdOutput = "mpi-test.out";
StdError = "mpi-test.err";
InputSandbox = {"mpi-start-wrapper.sh","mpi-hooks.sh","mpi-test.c"};
OutputSandbox = {"mpi-test.err","mpi-test.out"};
Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) && Member("MPICH2", other.GlueHostApplicationSoftwareRunTimeEnvironment);

–InputSandbox: mpi-start-wrapper.sh, mpi-hooks.sh and mpi-test.c are shipped with the job; OutputSandbox: mpi-test.out and mpi-test.err are returned.
–mpi-start-wrapper.sh: sets up the environment for a specific MPI implementation.
–mpi-hooks.sh: run before and after the execution of the MPI program. Pre-hook: download data and compile the MPI .c file. Post-hook: analyse and save data.
–mpi-test.c: the MPI (parallel) code.
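A minimal sketch of submitting this JDL with the standard gLite WMS command-line tools, assuming the JDL is saved as Job.jdl as on the next slide (the job-ID file and output directory names are illustrative):

# Submit with automatic proxy delegation (-a) and store the job ID in a file.
glite-wms-job-submit -a -o jobid.txt Job.jdl
# Poll the job status until it reaches Done.
glite-wms-job-status -i jobid.txt
# Retrieve mpi-test.out and mpi-test.err from the OutputSandbox.
glite-wms-job-output -i jobid.txt --dir ./mpi-test-output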

Structure of an MPI job on the Grid with mpi-start: mpi-start-wrapper.sh
The JDL Arguments = "mpi-test MPICH2" in Job.jdl are passed to the wrapper as $1 (executable name) and $2 (MPI flavour). $I2G_MPI_START is an environment variable declared on the WN and points to /opt/i2g/bin/mpi-start; the I2G_MPI_* variables exported below are used inside the mpi-start script.

#!/bin/bash
# Pull in the arguments.
MY_EXECUTABLE=`pwd`/$1
echo " ======================= "
pwd
echo " ======================= "
MPI_FLAVOR=$2
# Convert flavor to lowercase for passing to mpi-start.
MPI_FLAVOR_LOWER=`echo $MPI_FLAVOR | tr '[:upper:]' '[:lower:]'`
# Pull out the correct paths for the requested flavor.
eval MPI_PATH=`printenv MPI_${MPI_FLAVOR}_PATH`
# Ensure the prefix is correctly set. Don't rely on the defaults.
eval I2G_${MPI_FLAVOR}_PREFIX=$MPI_PATH
export I2G_${MPI_FLAVOR}_PREFIX
# Touch the executable. It must exist for the shared file system check.
# If it does not, then mpi-start may try to distribute the executable
# when it shouldn't.
touch $MY_EXECUTABLE
# Setup for mpi-start.
export I2G_MPI_APPLICATION=$MY_EXECUTABLE
export I2G_MPI_APPLICATION_ARGS=
export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER
export I2G_MPI_PRE_RUN_HOOK=mpi-hooks.sh
export I2G_MPI_POST_RUN_HOOK=mpi-hooks.sh
# If these are set then you will get more debugging information.
export I2G_MPI_START_VERBOSE=1
#export I2G_MPI_START_DEBUG=1
# Invoke mpi-start.
$I2G_MPI_START
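To make the variable indirection concrete, this is roughly what the wrapper resolves to for Arguments = "mpi-test MPICH2", assuming the site publishes the MPI installation path in MPI_MPICH2_PATH (the path shown is illustrative, not from the slides):

# $2 = MPICH2, so the eval/printenv lines effectively become:
MPI_PATH=`printenv MPI_MPICH2_PATH`   # e.g. /opt/mpich2-1.0.4 (illustrative path)
export I2G_MPICH2_PREFIX=$MPI_PATH    # installation prefix handed to mpi-start
export I2G_MPI_TYPE=mpich2            # flavour name in lowercase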

Structure of an MPI job on the Grid with mpi-start: mpi-hooks.sh
(mpi-start-wrapper.sh is shown on the previous slide; the pre-run hook below compiles the MPI C program.)

#!/bin/sh
# This function will be called before the MPI executable is started.
# You can, for example, compile the executable itself.
pre_run_hook () {
  # Compile the program.
  echo "Compiling ${I2G_MPI_APPLICATION}"
  # Actually compile the program.
  echo " "
  pwd
  echo " "
  cmd="mpicc ${MPI_MPICC_OPTS} -o ${I2G_MPI_APPLICATION} ${I2G_MPI_APPLICATION}.c"
  echo $cmd
  $cmd
  if [ ! $? -eq 0 ]; then
    echo "Error compiling program. Exiting..."
    exit 1
  fi
  # Everything's OK.
  echo "Successfully compiled ${I2G_MPI_APPLICATION}"
  return 0
}

# This function will be called after the MPI executable has finished.
# A typical use case is to upload the results to a storage element.
post_run_hook () {
  echo " "
  pwd
  echo "Executing post hook."
  echo "Finished the post hook."
  return 0
}
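The post_run_hook above is where results would typically be saved. A minimal sketch of a post-hook that copies and registers an output file on a Storage Element with lcg-cr, assuming a result file result.dat, a destination SE se.example.org and an LFN path of your choice (all three names are illustrative, not from the slides; the VO is the one used throughout):

post_run_hook () {
  echo "Uploading result.dat to the Grid..."
  # Copy-and-register the file: -d destination SE, -l logical file name.
  lcg-cr --vo prod.vo.eu-eela.eu -d se.example.org \
         -l lfn:/grid/prod.vo.eu-eela.eu/$USER/result.dat \
         file:$PWD/result.dat || return 1
  return 0
}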

MPI Program: mpi-test.c (parallel code)
numprocs is the number of processes belonging to the same communicator; procnum is the ID (rank) of the calling process.

/* hello.c
 * Simple "Hello World" program in MPI.
 */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
  int numprocs;   /* Number of processes */
  int procnum;    /* Process ID (rank) */

  /* Initialize MPI */
  MPI_Init(&argc, &argv);

  /* Find the ID of the process */
  MPI_Comm_rank(MPI_COMM_WORLD, &procnum);

  /* Find the number of processes */
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  printf("Hello world! from processor %d out of %d\n", procnum, numprocs);

  /* Shut down MPI */
  MPI_Finalize();
  return 0;
}
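Before submitting to the Grid it can be useful to check the program on any machine with an MPI installation; a sketch, assuming mpicc and mpirun are available locally:

# Compile the hello-world program.
mpicc -o mpi-test mpi-test.c
# Run it locally with 4 processes.
mpirun -np 4 ./mpi-test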

Job output (mpi-test.out)

=======================
/home/eelapr045/gram_scratch_yt24szh4sR/https_3a_2f_2fviani.uniandes.edu.co_3a9000_2fUHe6be5cmis7XbLc1agm6w
=======================
************************************************************************
UID     = eelapr045
HOST    = wn017.grid.cs.tcd.ie
DATE    = Wed Sep 30 14:53:38 IST 2009
VERSION =
************************************************************************
mpi-start [INFO ]: search for scheduler
mpi-start [INFO ]: activate support for pbs
mpi-start [INFO ]: activate support for mpich2
mpi-start [INFO ]: call backend MPI implementation
mpi-start [INFO ]: start program with mpirun
Compiling /home/eelapr045/gram_scratch_yt24szh4sR/https_3a_2f_2fviani.uniandes.edu.co_3a9000_2fUHe6be5cmis7XbLc1agm6w/mpi-test
/home/eelapr045/gram_scratch_yt24szh4sR/https_3a_2f_2fviani.uniandes.edu.co_3a9000_2fUHe6be5cmis7XbLc1agm6w
mpicc -m64 -o /home/eelapr045/gram_scratch_yt24szh4sR/https_3a_2f_2fviani.uniandes.edu.co_3a9000_2fUHe6be5cmis7XbLc1agm6w/mpi-test /home/eelapr045/gram_scratch_yt24szh4sR/https_3a_2f_2fviani.uniandes.edu.co_3a9000_2fUHe6be5cmis7XbLc1agm6w/mpi-test.c
Successfully compiled /home/eelapr045/gram_scratch_yt24szh4sR/https_3a_2f_2fviani.uniandes.edu.co_3a9000_2fUHe6be5cmis7XbLc1agm6w/mpi-test
=[START]================================================================
Hello world! from processor 3 out of 4
Hello world! from processor 0 out of 4
Hello world! from processor 1 out of 4
Hello world! from processor 2 out of 4
=[FINISHED]=============================================================
/home/eelapr045/gram_scratch_yt24szh4sR/https_3a_2f_2fviani.uniandes.edu.co_3a9000_2fUHe6be5cmis7XbLc1agm6w
Executing post hook.
Finished the post hook.

Structure of an MPI job on the Grid without mpi-start

test-mpi.jdl:
Type = "Job";
JobType = "Normal";
CPUNumber = 4;
Executable = "test-mpi.sh";
Arguments = "test-mpi";
StdOutput = "test-mpi.out";
StdError = "test-mpi.err";
InputSandbox = {"test-mpi.sh","test-mpi.c"};
OutputSandbox = {"test-mpi.err","test-mpi.out","mpiexec.out"};
Requirements = Member("MPICH2", other.GlueHostApplicationSoftwareRunTimeEnvironment);

test-mpi.sh:
#!/bin/sh -x
# The binary to execute.
EXE=$1
echo "*********************************************"
echo "Running on: $HOSTNAME"
echo "As: " `whoami`
echo "*********************************************"
echo "Compiling binary: $EXE"
echo mpicc -o ${EXE} ${EXE}.c
mpicc -o ${EXE} ${EXE}.c
echo "*************************************"
# Build the list of hosts allocated by the local batch system (PBS or LSF).
if [ "x$PBS_NODEFILE" != "x" ] ; then
  echo "PBS Nodefile: $PBS_NODEFILE"
  HOST_NODEFILE=$PBS_NODEFILE
fi
if [ "x$LSB_HOSTS" != "x" ] ; then
  echo "LSF Hosts: $LSB_HOSTS"
  HOST_NODEFILE=`pwd`/lsf_nodefile.$$
  for host in ${LSB_HOSTS}
  do
    echo $host >> ${HOST_NODEFILE}
  done
fi
if [ "x$HOST_NODEFILE" = "x" ]; then
  echo "No hosts file defined. Exiting..."
  exit
fi
echo "*************************************************"
CPU_NEEDED=`cat $HOST_NODEFILE | wc -l`
echo "Node count: $CPU_NEEDED"
echo "Nodes in $HOST_NODEFILE: "
cat $HOST_NODEFILE
echo "************************************************"
echo "Checking ssh for each node:"
NODES=`cat $HOST_NODEFILE`
for host in ${NODES}
do
  echo "Checking $host..."
  ssh $host hostname
done
echo "***********************************************"
echo "Executing $EXE with mpiexec"
chmod 755 $EXE
mpiexec `pwd`/$EXE > mpiexec.out 2>&1
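The final mpiexec call relies on the site's MPICH2 installation being integrated with the batch system to discover the allocated nodes. If that integration is missing, the node file assembled above could be passed explicitly; a sketch (whether these options are needed, and their exact spelling, depends on the site's MPI implementation):

# Launch one process per allocated slot, using the node file built above.
mpiexec -machinefile $HOST_NODEFILE -np $CPU_NEEDED `pwd`/$EXE > mpiexec.out 2>&1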

References
– Gridification FAQ

Questions…