MPI support in gLite
Enol Fernández, CSIC
EMI INFSO-RI-261611

MPI on the Grid

Submission/Allocation (CREAM/WMS):
– Definition of job characteristics
– Search for and selection of adequate resources
– Allocation (or co-allocation) of resources for the job

Execution (MPI-Start):
– File distribution
– Batch system interaction
– MPI implementation details

Allocation / Submission

Type = "Job";
CPUNumber = 23;
Executable = "my_app";
Arguments = "-n 356 -p 4";
StdOutput = "std.out";
StdError = "std.err";
InputSandbox = {"my_app"};
OutputSandbox = {"std.out", "std.err"};
Requirements =
    Member("OPENMPI",
           other.GlueHostApplicationSoftwareRunTimeEnvironment);

The process count is specified with the CPUNumber attribute.
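A typical submission cycle for this description with the gLite WMS command-line tools is sketched below; job.jdl and jobid.txt are illustrative file names:

    # Save the description above as job.jdl, then submit it through the WMS
    # (-a delegates the proxy automatically, -o stores the returned job ID)
    glite-wms-job-submit -a -o jobid.txt job.jdl

    # Poll the job until it reaches the Done status
    glite-wms-job-status -i jobid.txt

    # Retrieve the output sandbox (std.out and std.err)
    glite-wms-job-output -i jobid.txt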

MPI-Start

– Provides a single interface to the upper middleware layers for running MPI jobs
– Allows support for new MPI implementations without modifications to the Grid middleware
– Supports "simple" file distribution
– Provides some support to help users manage their data

(Diagram: the Grid middleware drives MPI-Start, which in turn drives the MPI implementation on the resources.)

MPI-Start Design Goals

Portable
– The program must be able to run under any supported operating system
Modular and extensible
– Plugin/component architecture
Relocatable
– Must be independent of absolute paths, to adapt to different site configurations
– Supports remote "injection" of mpi-start along with the job
"Remote" debugging features

MPI-Start Architecture

(Diagram: a CORE component with three plugin families.)
– Execution plugins: Open MPI, MPICH2, LAM, PACX
– Scheduler plugins: PBS/Torque, SGE, LSF
– Hooks plugins: local, user, compiler, file distribution
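To make the plugin split concrete: a scheduler plugin's essential job is to detect its batch system and hand mpi-start the list of allocated hosts. The following is not the real PBS/Torque plugin, just a minimal shell approximation assuming the standard PBS_NODEFILE variable; the MPI_START_MACHINEFILE name is hypothetical:

    #!/bin/sh
    # Hypothetical sketch of what a PBS/Torque scheduler plugin must do
    if [ -n "$PBS_NODEFILE" ]; then
        # PBS/Torque lists the allocated nodes in $PBS_NODEFILE; mpi-start
        # normalizes this into its default machinefile format
        export MPI_START_MACHINEFILE="$PBS_NODEFILE"   # hypothetical variable
    else
        echo "Not running under PBS/Torque" >&2
        exit 1
    fi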

Using MPI-Start (I)

$ cat starter.sh
#!/bin/sh
# This is a script to call mpi-start

# Set the environment variables needed by mpi-start
export I2G_MPI_APPLICATION=/bin/hostname
export I2G_MPI_APPLICATION_ARGS=
export I2G_MPI_TYPE=openmpi
export I2G_MPI_PRECOMMAND=time

# Execute mpi-start
$I2G_MPI_START

stdout:
Scientific Linux CERN SLC release 4.5 (Beryllium)
lflip30.lip.pt
lflip31.lip.pt

stderr:
real 0m0.731s
user 0m0.021s
sys  0m0.013s

The corresponding JDL:
JobType = "Normal";
CpuNumber = 4;
Executable = "starter.sh";
InputSandbox = {"starter.sh"};
StdOutput = "std.out";
StdError = "std.err";
OutputSandbox = {"std.out", "std.err"};
Requirements =
    Member("MPI-START",
           other.GlueHostApplicationSoftwareRunTimeEnvironment)
    && Member("OPENMPI",
           other.GlueHostApplicationSoftwareRunTimeEnvironment);
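Before submitting a description like this one, it can be useful to check which computing elements actually advertise the required tags. A minimal sketch using the standard lcg-info client; the VO name myvo is a placeholder:

    # List CEs that publish both the MPI-START and OPENMPI runtime tags
    lcg-info --vo myvo --list-ce \
             --query 'Tag=MPI-START,Tag=OPENMPI' \
             --attrs 'CE'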

Using MPI-Start (II)

JDL fragment:
…
CpuNumber = 4;
Executable = "mpi-start-wrapper.sh";
Arguments = "userapp OPENMPI some app args…";
InputSandbox = {"mpi-start-wrapper.sh"};
Environment = {"I2G_MPI_START_VERBOSE=1", …};
…

mpi-start-wrapper.sh:
#!/bin/bash
MY_EXECUTABLE=$1
shift
MPI_FLAVOR=$1
shift
export I2G_MPI_APPLICATION_ARGS=$*

# Convert the flavor to lowercase for passing to mpi-start.
MPI_FLAVOR_LOWER=`echo $MPI_FLAVOR | tr '[:upper:]' '[:lower:]'`

# Pull out the correct paths for the requested flavor.
eval MPI_PATH=`printenv MPI_${MPI_FLAVOR}_PATH`

# Ensure the prefix is correctly set. Don't rely on the defaults.
eval I2G_${MPI_FLAVOR}_PREFIX=$MPI_PATH
export I2G_${MPI_FLAVOR}_PREFIX

# Set up mpi-start.
export I2G_MPI_APPLICATION=$MY_EXECUTABLE
export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER

# Invoke mpi-start.
$I2G_MPI_START
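With this wrapper, the JDL Arguments string maps directly onto the wrapper's positional parameters: the first token is the executable, the second the MPI flavour, and everything left becomes I2G_MPI_APPLICATION_ARGS. An equivalent local invocation, using the names from the JDL above:

    # $1 = userapp, $2 = OPENMPI, remaining args -> I2G_MPI_APPLICATION_ARGS
    ./mpi-start-wrapper.sh userapp OPENMPI some app args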

MPI-Start Hooks (I)

File distribution methods
– Copy the files needed for execution using the most appropriate method (shared filesystem, scp, mpiexec, …)
Compiler flag checking
– Checks the correctness of compiler flags for 32/64-bit builds and changes them accordingly
User hooks:
– Build applications
– Data staging

MPI-Start Hooks (II)

myhooks.sh:
#!/bin/sh
pre_run_hook () {
    # Compile the program.
    echo "Compiling ${I2G_MPI_APPLICATION}"
    cmd="mpicc ${MPI_MPICC_OPTS} -o ${I2G_MPI_APPLICATION} ${I2G_MPI_APPLICATION}.c"
    $cmd
    if [ ! $? -eq 0 ]; then
        echo "Error compiling program. Exiting..."
        exit 1
    fi
    # Everything's OK.
    echo "Successfully compiled ${I2G_MPI_APPLICATION}"
    return 0
}

The corresponding JDL fragment:
…
InputSandbox = {…, "myhooks.sh", …};
Environment = {…, "I2G_MPI_PRE_HOOK=myhooks.sh"};
…
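Data staging can be handled symmetrically in a post-run hook. The sketch below is illustrative only: the output file, storage element and LFN are invented, the lcg-cr client is assumed to be available on the worker node, and the hook is assumed to be registered through a variable analogous to I2G_MPI_PRE_HOOK:

    #!/bin/sh
    post_run_hook () {
        echo "Staging out results..."
        # Copy the output file to a storage element and register it in the
        # catalogue (all names below are placeholders)
        lcg-cr --vo myvo \
               -d se.example.org \
               -l lfn:/grid/myvo/results/output.dat \
               file://$PWD/output.dat
        return $?
    }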

MPI-Start: More Features

Remote injection
– mpi-start can be sent along with the job: just unpack it, set the environment, and go
Interactivity
– A pre-command can be used to "control" the mpirun call:
  $I2G_MPI_PRECOMMAND mpirun …
– This command can redirect I/O, redirect network traffic, or perform accounting
Debugging
– Three debugging levels:
  VERBOSE: basic information
  DEBUG: internal flow information
  TRACE: set -x at the beginning; full trace of the execution
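For example, the debugging levels can be switched on from the environment before mpi-start is invoked. I2G_MPI_START_VERBOSE appears earlier in these slides; the DEBUG and TRACE variable names below are assumed to follow the same naming pattern:

    # In the wrapper script, before invoking $I2G_MPI_START:
    export I2G_MPI_START_VERBOSE=1   # basic information
    export I2G_MPI_START_DEBUG=1     # internal flow information (assumed name)
    export I2G_MPI_START_TRACE=1     # full trace via set -x (assumed name)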

Future Work (I)

New JDL attributes for parallel jobs (proposed by the EGEE MPI TF), as sketched below:
– WholeNodes (True/False): whether or not full nodes should be reserved
– NodeNumber (default = 1): number of nodes requested
– SMPGranularity (default = 1): minimum number of cores per node
– CPUNumber (default = 1): number of job slots (processes/cores) to use
The CREAM team is working on how to support these attributes.
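Under the proposal, a request for two whole nodes with at least four cores each might read as follows. This is a sketch of the proposed syntax only, not something current CREAM/WMS releases are guaranteed to accept:

    cat > parallel.jdl <<'EOF'
    // Sketch of the proposed parallel-job attributes (EGEE MPI TF)
    WholeNodes     = True;  // reserve full nodes
    NodeNumber     = 2;     // two nodes requested
    SMPGranularity = 4;     // at least four cores per node
    CPUNumber      = 8;     // eight job slots in total
    EOF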

Future Work (II)

Management of non-MPI jobs
– New execution environments (e.g. OpenMP)
– Generic parallel job support
Support for new schedulers
– Condor and SLURM
Exploring support for new architectures
– FPGAs, GPUs, …
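If the planned OpenMP environment reuses the existing interface, running a threaded application could look roughly like the sketch below; the I2G_MPI_TYPE value openmp is speculative, taken from the plan above rather than from any released version:

    # Speculative sketch: reuse the mpi-start interface for an OpenMP run
    export I2G_MPI_APPLICATION=./my_openmp_app   # placeholder application
    export I2G_MPI_TYPE=openmp                   # planned, not yet released
    $I2G_MPI_START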

More Info…

– gLite MPI PT
– MPI-Start trac: contains user, admin and developer documentation
– MPI TCD

MPI-Start Execution Flow

1. START: check whether a scheduler plugin exists for the current environment; if not, dump the environment and EXIT
2. Ask the scheduler plugin for a machinefile in the default format
3. Check whether a plugin exists for the selected MPI; if not, dump the environment and EXIT
4. Activate the MPI (execution) plugin
5. Trigger pre-run hooks
6. Prepare mpirun
7. Start mpirun
8. Trigger post-run hooks
(Plugins involved: scheduler plugin, execution plugin, hooks plugins.)
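The same flow rendered as shell-style pseudocode; all function names are invented for readability:

    #!/bin/sh
    # Pseudocode rendering of the mpi-start execution flow above
    scheduler_plugin_available           || { dump_env; exit 1; }
    machinefile=$(scheduler_plugin_machinefile)   # default format
    mpi_plugin_available "$I2G_MPI_TYPE" || { dump_env; exit 1; }
    activate_mpi_plugin "$I2G_MPI_TYPE"
    trigger_pre_run_hooks
    prepare_mpirun "$machinefile"
    start_mpirun
    trigger_post_run_hooks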