Advanced Topics: MPI jobs

The EPIKH Project (Exchange Programme to advance e-Infrastructure Know-How)
Advanced Topics: MPI jobs
Diego Scardaci (diego.scardaci@ct.infn.it), INFN Catania
Joint CHAIN/GISELA/EPIKH Grid School for Application Porting
Valparaiso, 29th November - 9th December 2010
www.epikh.eu

Table of Contents

MPI and its implementations
Wrapper script for mpi-start
Hooks for mpi-start
Defining the job and executable
Running the MPI job
References

MPI and its implementations

The Message Passing Interface (MPI) is a de-facto standard for writing parallel applications.
There are two versions of MPI: MPI-1 and MPI-2.
Two implementations of MPI-1: LAM and MPICH.
Two implementations of MPI-2: OpenMPI and MPICH2.
Individual sites may choose to support only a subset of these implementations, or none at all.
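Before touching the Grid it is worth compiling and running the code locally with whatever MPI implementation is installed on your UI or desktop. A minimal smoke test, assuming an MPICH or OpenMPI toolchain and the mpi-test.c example shown later in this talk:

# Build the example with the MPI C compiler wrapper
mpicc -o mpi-test mpi-test.c

# Run it locally on 2 processes
mpirun -np 2 ./mpi-test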

Requirements to submit MPI jobs

A valid proxy certificate.
Permission to submit jobs to the specific site.
MPI jobs can run only within a single Grid site.
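A typical session covering the first two requirements looks like the following sketch; the VO name is a placeholder to be replaced with your own:

# Create a VOMS proxy for your VO
voms-proxy-init --voms <your_vo>

# Check the proxy lifetime and attributes
voms-proxy-info --all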

Get information from the information system

Find the sites that support MPICH and MPICH2.
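A sketch of the corresponding queries with lcg-info; the VO name is a placeholder and the exact option syntax may vary slightly with the UI version:

# CEs publishing MPICH support for your VO
lcg-info --vo <your_vo> --list-ce --query 'Tag=MPICH'

# CEs publishing MPICH2 support
lcg-info --vo <your_vo> --list-ce --query 'Tag=MPICH2'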

Get information from the information system

Find the sites that support MPICH and their available CPUs.
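One way to do this, again hedged on the UI version, is to ask lcg-info for the free and total CPU attributes of each matching CE:

lcg-info --vo <your_vo> --list-ce --query 'Tag=MPICH' --attrs 'CE,FreeCPUs,TotalCPUs'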

Get information from the information system

Find the sites that have a shared home directory for MPI (Tag=MPI-START is the runtime-environment tag published by sites with mpi-start installed).
What happens when the site has a shared MPI directory? The executable is compiled on one WN and then read directly by the other WNs of the same site.
What happens when there is no shared MPI directory? The executable is compiled on one WN and must then be copied to the other WNs.
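A hedged sketch of the corresponding queries; MPI_SHARED_HOME is the tag conventionally published by sites with a shared home area, but treat the tag name as an assumption and check what your target sites actually advertise:

# CEs advertising mpi-start
lcg-info --vo <your_vo> --list-ce --query 'Tag=MPI-START'

# CEs advertising a shared home directory for MPI (assumed tag name)
lcg-info --vo <your_vo> --list-ce --query 'Tag=MPI_SHARED_HOME'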

mpi-start

mpi-start is the recommended solution for hiding the implementation details of MPI job submission.
The design of mpi-start focuses on making MPI job submission as transparent as possible with respect to the cluster details.
It was developed within the Int.EU.Grid project.
The RPM to be installed on all WNs can be found here.
Using mpi-start requires the user to define a wrapper script that sets the environment variables, plus a set of hooks.

Wrapper script for mpi-start

#!/bin/bash

# Pull in the arguments.
MY_EXECUTABLE=`pwd`/$1
MPI_FLAVOR=$2

# Convert flavor to lowercase for passing to mpi-start.
MPI_FLAVOR_LOWER=`echo $MPI_FLAVOR | tr '[:upper:]' '[:lower:]'`

# Pull out the correct paths for the requested flavor.
eval MPI_PATH=`printenv MPI_${MPI_FLAVOR}_PATH`

# Ensure the prefix is correctly set. Don't rely on the defaults.
eval I2G_${MPI_FLAVOR}_PREFIX=$MPI_PATH
export I2G_${MPI_FLAVOR}_PREFIX

# Touch the executable. It must exist for the shared file system check.
# If it does not, then mpi-start may try to distribute the executable
# when it shouldn't.
touch $MY_EXECUTABLE

# Set up for mpi-start.
export I2G_MPI_APPLICATION=$MY_EXECUTABLE
export I2G_MPI_APPLICATION_ARGS=
export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER
export I2G_MPI_PRE_RUN_HOOK=mpi-hooks.sh
export I2G_MPI_POST_RUN_HOOK=mpi-hooks.sh

# If these are set then you will get more debugging information.
export I2G_MPI_START_VERBOSE=1
#export I2G_MPI_START_DEBUG=1

# Invoke mpi-start.
$I2G_MPI_START

Hooks for mpi-start /1

The user may write a script that is called before and after the MPI executable is run.
The pre-run hook can be used, for example, to compile the executable itself or to download data.
The post-run hook can be used to analyse the results or to save them on the Grid.
The pre- and post-run hooks may be defined in separate files, but the functions must be named exactly "pre_run_hook" and "post_run_hook".

Hooks for mpi-start /2

#!/bin/sh

# This function will be called before the MPI executable is started.
pre_run_hook () {
  # Compile the program.
  echo "Compiling ${I2G_MPI_APPLICATION}"

  # Actually compile the program.
  cmd="mpicc ${MPI_MPICC_OPTS} -o ${I2G_MPI_APPLICATION} ${I2G_MPI_APPLICATION}.c"
  echo $cmd
  $cmd
  if [ ! $? -eq 0 ]; then
    echo "Error compiling program. Exiting..."
    exit 1
  fi

  # Everything's OK.
  echo "Successfully compiled ${I2G_MPI_APPLICATION}"
  return 0
}

# This function will be called after the MPI executable has finished.
# A typical use case is to upload the results to a Storage Element.
post_run_hook () {
  echo "Executing post hook."
  echo "Finished the post hook."
  return 0
}

Defining the job and executable /1

Running the MPI job itself is not significantly different from running a standard Grid job.

JobType = "Normal";
CpuNumber = 2;
Executable = "mpi-start-wrapper.sh";
Arguments = "mpi-test MPICH";
StdOutput = "mpi-test.out";
StdError = "mpi-test.err";
InputSandbox = {"mpi-start-wrapper.sh", "mpi-hooks.sh", "mpi-test.c"};
OutputSandbox = {"mpi-test.err", "mpi-test.out"};
Requirements =
    Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment) &&
    Member("MPICH", other.GlueHostApplicationSoftwareRunTimeEnvironment);

The JobType must be "Normal" and the CpuNumber attribute must be defined.
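Before submitting, you can check which CEs satisfy the Requirements expression. A minimal sketch, assuming the JDL above is saved in a file called mpi-test.jdl (an illustrative name):

# List the CEs matching the job requirements, using automatic proxy delegation
glite-wms-job-list-match -a mpi-test.jdl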

Structure of an MPI job in the Grid with mpi-start (diagram)

Defining the job and executable /2

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numprocs;   /* Number of processors */
    int procnum;    /* Processor number */

    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Find this processor number */
    MPI_Comm_rank(MPI_COMM_WORLD, &procnum);

    /* Find the number of processors */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    printf("Hello world! from processor %d out of %d\n", procnum, numprocs);

    /* Shut down MPI */
    MPI_Finalize();
    return 0;
}

Hook Helpers

The shell variable $MPI_START_SHARED_FS can be checked to find out whether the current site has a shared file system or not.
The mpi_start_foreach_host shell function can be used to iterate over all the machines available in the current run:

do_foreach_node () {
    # the first parameter $1 contains the hostname
}

post_run_hook () {
    ...
    mpi_start_foreach_host do_foreach_node
}
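A minimal sketch of how the two helpers might be combined to gather per-node output when there is no shared file system. The results.dat file name, the use of scp between WNs and the assumption that MPI_START_SHARED_FS is 0 on non-shared sites are all illustrative, not guaranteed by mpi-start:

copy_from_node () {
    # $1 is the hostname of one worker node in the allocation
    if [ "x$MPI_START_SHARED_FS" = "x0" ]; then
        # Assumed: password-less scp between WNs and a hypothetical
        # per-node output file called results.dat
        scp "$1:$PWD/results.dat" "results.$1.dat"
    fi
}

post_run_hook () {
    mpi_start_foreach_host copy_from_node
    return 0
}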

Running the MPI job

Running the MPI job is no different from running any other Grid job. If the job ran correctly, the standard output should contain something like the following:

-<START PRE-RUN HOOK>-------------------------------------------
[...]
-<STOP PRE-RUN HOOK>--------------------------------------------
=[START]========================================================
Hello world! from processor 1 out of 2
Hello world! from processor 0 out of 2
=[FINISHED]=====================================================
-<START POST-RUN HOOK>------------------------------------------
-<STOP POST-RUN HOOK>-------------------------------------------
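A sketch of the full submit/monitor/retrieve cycle with the gLite WMS commands; the JDL file name mpi-test.jdl and the job-id file jobid are illustrative placeholders:

# Submit the job (automatic proxy delegation) and store the job ID in a file
glite-wms-job-submit -a -o jobid mpi-test.jdl

# Check the status until the job is Done
glite-wms-job-status -i jobid

# Retrieve the output sandbox (mpi-test.out, mpi-test.err)
glite-wms-job-output -i jobid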

Running an MPI job in the Grid without mpi-start (diagram)

References

GISELA MPI FAQ: http://applications.eu-eela.eu/grid_faq_MPICH.php?l=40&n=14
EGEE MPI guide [link]
EGEE MPI WG [link]
MPI-START documentation [link]
Site configuration for MPI [link]

Thank you for your kind attention! Any questions?