gLite MPI Job Amina KHEDIMI CERIST


gLite MPI Job
Amina KHEDIMI (a.khedimi@dtri.cerist.dz), CERIST
Africa 6 - Joint CHAIN/EPIKH/EUMEDGRID Support event, School on Application Porting, Rabat, June 8, 2011

Outline
- MPI and its implementations
- Basic Structures of MPI Programs
- Structure of an MPI job in the Grid without mpi-start
- Wrapper script and hooks for mpi-start
- Structure of an MPI job in the Grid with mpi-start
- Defining the job and the executable
- Running the MPI job

MPI and its implementations
The Message Passing Interface (MPI) is a de-facto standard for writing parallel applications.
There are two versions of the standard, MPI-1 and MPI-2, and each version has different implementations:
- Two implementations of MPI-1: LAM and MPICH.
- Two implementations of MPI-2: OpenMPI and MPICH2.
Some implementations are hardware-related: for example, InfiniBand networks require the MVAPICH v1 or v2 libraries.
Individual sites may choose to support only a subset of these implementations, or none at all.

Goals of the MPI standard
MPI's prime goals are:
- to provide source-code portability;
- to allow efficient implementation across a range of architectures.
It also offers a great deal of functionality, and the user need not cope with communication failures: such failures are dealt with by the underlying communication subsystem.
In a typical run, a "master" node starts the "slave" processes by establishing SSH sessions; all processes can share a common workspace and/or exchange data based on send() and receive() routines.

A bit of history ...
The Message Passing Interface (MPI) is a standard developed by the Message Passing Interface Forum (MPIF). It specifies a portable API for writing message-passing programs in Fortran, C and C++.
MPIF (http://www.mpi-forum.org/), with the participation of more than 40 organizations, started working on the standard in 1992.
The first draft (Version 1.0), published in 1994, was strongly influenced by work at the IBM T. J. Watson Research Center.
MPIF then enhanced the first version to produce a second version (MPI-2) in 1997. The latest release of the first version (Version 1.2) is offered as an update to the previous release and is contained in the MPI-2 document.

Basic Structures of MPI Programs
- Header files
- Initializing MPI
- MPI Communicator
- MPI function format
- Communicator size
- Process rank
- Finalizing MPI

1. Header files
Every program unit that contains calls to MPI subroutines MUST include the MPI header file.
C: #include <mpi.h>
Fortran: include 'mpif.h'
The header file contains the definitions of MPI constants, MPI types and functions.

2. Initializing MPI
The first MPI routine called in any MPI program must be the initialisation routine MPI_INIT. Every MPI program must call this routine once, before any other MPI routine; making multiple calls to MPI_INIT is erroneous.
The C version of the routine takes the addresses of argc and argv as arguments:
int MPI_Init(int *argc, char ***argv);
The Fortran version takes no arguments other than the error code:
MPI_INIT(IERROR)

3. MPI Communicator
A communicator is a variable identifying a group of processes that are allowed to communicate with each other.
There is a default communicator, MPI_COMM_WORLD, which identifies the group of all processes.
The processes are ordered and numbered consecutively from 0 (in both Fortran and C); the number of each process is known as its rank. The rank identifies each process within the communicator.
[Figure: the predefined communicator MPI_COMM_WORLD for 7 processes; the numbers indicate the rank of each process.]

5. Communicator Size
How many processes are associated with a communicator?
C: MPI_Comm_size(MPI_Comm comm, int *size);
Fortran:
INTEGER COMM, SIZE, IERR
CALL MPI_COMM_SIZE(COMM, SIZE, IERR)
Output: SIZE

6. Process Rank
What is the ID of a process in a group?
C: MPI_Comm_rank(MPI_Comm comm, int *rank);
Fortran:
INTEGER COMM, RANK, IERR
CALL MPI_COMM_RANK(COMM, RANK, IERR)
Output: RANK

7. Finalizing MPI
An MPI program should call the MPI routine MPI_FINALIZE when all communications have completed. This routine cleans up all MPI data structures, etc. Once it has been called, no other MPI routines may be called.
C: int MPI_Finalize(void);
Fortran:
INTEGER IERR
CALL MPI_FINALIZE(IERR)
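Putting the pieces above together, a minimal C program (an illustrative sketch, not taken from the slides) includes the header, initialises MPI, queries the size of MPI_COMM_WORLD and the rank of the calling process, and finalises:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int size, rank;

        /* Initialize the MPI environment; must be the first MPI call. */
        MPI_Init(&argc, &argv);

        /* How many processes belong to MPI_COMM_WORLD? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* What is the rank of this process within MPI_COMM_WORLD? */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        printf("Hello from process %d of %d\n", rank, size);

        /* Clean up all MPI state; no MPI calls are allowed after this. */
        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper compiler (e.g. mpicc -o hello hello.c) and launched with mpirun -np 4 ./hello, it prints one line per process.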

Get information from the Grid
Find the sites that support MPICH and MPICH2:

[amina@ui01 ~]$ lcg-info --vo eumed --list-ce --query 'Tag=MPICH'
CE: ce-grid.narss.sci.eg:8443/cream-pbs-eumed
CE: ce-grid.obspm.fr:2119/jobmanager-pbs-eumed
CE: ce0.m3pec.u-bordeaux1.fr:2119/jobmanager-pbs-eumed
CE: ce01.grid.cynet.ac.cy:8443/cream-pbs-eumed
CE: cream-ce-grid.obspm.fr:8443/cream-pbs-eumed

[amina@ui01 ~]$ lcg-info --vo eumed --list-ce --query 'Tag=MPICH2'
CE: ce-grid.narss.sci.eg:8443/cream-pbs-eumed
CE: ce-grid.obspm.fr:2119/jobmanager-pbs-eumed
CE: ce01.grid.arn.dz:8443/cream-pbs-eumed
CE: ce-02.roma3.infn.it:8443/cream-pbs-eumed
CE: ce01.grid.hiast.edu.sy:8443/cream-pbs-eumed
CE: ce01.grid.um.edu.mt:8443/cream-pbs-eumed
CE: ce03.grid.arn.dz:8443/cream-pbs-eumed

Get information from the Grid

[amina@ui01 ~]$ lcg-info --vo eumed --list-ce --query 'Tag=MPICH' --attrs 'CE,FreeCPUs,TotalCPUs'
- CE: ce-grid.narss.sci.eg:8443/cream-pbs-eumed
  - CE          ce-grid.narss.sci.eg:8443/cream-pbs-eumed
  - FreeCPUs    4
  - TotalCPUs   4
- CE: ce-grid.obspm.fr:2119/jobmanager-pbs-eumed
  - CE          ce-grid.obspm.fr:2119/jobmanager-pbs-eumed
  - FreeCPUs    73
  - TotalCPUs   112
- CE: ce0.m3pec.u-bordeaux1.fr:2119/jobmanager-pbs-eumed
  - CE          ce0.m3pec.u-bordeaux1.fr:2119/jobmanager-pbs-eumed
  - FreeCPUs    52
  - TotalCPUs   384
- CE: ce01.grid.cynet.ac.cy:8443/cream-pbs-eumed
  - CE          ce01.grid.cynet.ac.cy:8443/cream-pbs-eumed
  - FreeCPUs    42
  - TotalCPUs   44
- CE: cream-ce-grid.obspm.fr:8443/cream-pbs-eumed
  - CE          cream-ce-grid.obspm.fr:8443/cream-pbs-eumed

Get information from the Grid
Find the sites that have a shared MPI home directory.
What happens when the site has an MPI shared home directory? The executable is compiled on one WN and can then be read directly by the other WNs of the same site.
What happens when there is no MPI shared home directory? The executable is compiled on one WN and must then be copied to the other WNs.
(Sites advertising mpi-start support publish the tag MPI-START.)

[amina@ui01 ~]$ lcg-info --vo eumed --list-ce --query 'Tag=MPI_SHARED_HOME'
CE: ce-grid.narss.sci.eg:8443/cream-pbs-eumed
CE: ce-grid.obspm.fr:2119/jobmanager-pbs-eumed
CE: ce0.m3pec.u-bordeaux1.fr:2119/jobmanager-pbs-eumed
CE: ce01.grid.arn.dz:8443/cream-pbs-eumed
CE: ce2.cnrst.magrid.ma:8443/cream-pbs-eumed
CE: ce01.grid.um.edu.mt:8443/cream-pbs-eumed
CE: ce03.grid.arn.dz:8443/cream-pbs-eumed

Requirements to submit MPI jobs
- The JobType attribute in the JDL file must be set to "Normal" in order to run MPI jobs via the mpi-start scripts.
- CPUNumber must correspond to the number of desired nodes.
- The Executable attribute has to point to the wrapper script (mpi-start-wrapper.sh in this case).
- The Arguments are the MPI binary and the MPI flavour that it uses. Note that Executable + Arguments form the command line executed on the WN.
- mpi-start allows user-defined extensions via hooks (described below).

Structure of an MPI job in the Grid without mpi-start
[Figure omitted from the transcript.]

mpi-start-wrapper.sh: wrapper script for mpi-start
Users typically use a wrapper script that sets up paths and other internal settings and then initiates the mpi-start processing. The script (named "mpi-start-wrapper.sh") is generic and should not need significant modifications.
The script first sets up the environment for the chosen flavour of MPI, using environment variables supplied by the system administrator. It then defines the executable, the arguments, the MPI flavour and the location of the hook scripts for mpi-start. Lastly, the wrapper invokes mpi-start itself.
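The transcript does not include the script itself. The sketch below follows the generic wrapper circulated with the EGEE/gLite MPI documentation; the MPI_<FLAVOUR>_PATH and I2G_* variable names are the usual mpi-start conventions, but they should be checked against the MPI-START documentation referenced at the end:

    #!/bin/bash
    # mpi-start-wrapper.sh -- sketch based on the standard EGEE/gLite example.
    # First argument: name of the MPI executable; second argument: MPI flavour.
    MY_EXECUTABLE=`pwd`/$1
    MPI_FLAVOR=$2

    # mpi-start expects the flavour in lower case (e.g. mpich2, openmpi).
    MPI_FLAVOR_LOWER=`echo $MPI_FLAVOR | tr '[:upper:]' '[:lower:]'`

    # Pick up the installation path for the requested flavour from the
    # site-provided environment variables and export the matching prefix.
    eval MPI_PATH=`printenv MPI_${MPI_FLAVOR}_PATH`
    eval I2G_${MPI_FLAVOR}_PREFIX=$MPI_PATH
    export I2G_${MPI_FLAVOR}_PREFIX

    # Make sure the executable exists for the shared-file-system check.
    touch $MY_EXECUTABLE

    # Tell mpi-start what to run and which hooks to call.
    export I2G_MPI_APPLICATION=$MY_EXECUTABLE
    export I2G_MPI_APPLICATION_ARGS=
    export I2G_MPI_TYPE=$MPI_FLAVOR_LOWER
    export I2G_MPI_PRE_RUN_HOOK=mpi-hooks.sh
    export I2G_MPI_POST_RUN_HOOK=mpi-hooks.sh

    # Optional: more verbose output for debugging.
    export I2G_MPI_START_VERBOSE=1

    # Finally, invoke mpi-start itself.
    $I2G_MPI_START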

mpi-hooks.sh: hooks for mpi-start
The user may write a script that is called before and after the MPI executable is run. The pre-hook can be used, for example, to compile the executable itself or to download data. The post-hook can be used to analyse the results or to save them on the Grid.
The pre- and post-hooks may be defined in separate files, but the functions must be named exactly "pre_run_hook" and "post_run_hook".
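Again the transcript omits the listing; a sketch modelled on the standard EGEE example is shown below. The pre_run_hook compiles mpi-test.c with mpicc (MPI_MPICC_OPTS is assumed to be provided by the site), and the post_run_hook is left as a stub where result handling would go:

    #!/bin/sh
    # mpi-hooks.sh -- sketch based on the standard EGEE/gLite example.

    # Called by mpi-start before the MPI executable is started:
    # here it compiles the C source shipped in the input sandbox.
    pre_run_hook () {
      echo "Compiling ${I2G_MPI_APPLICATION}"
      cmd="mpicc ${MPI_MPICC_OPTS} -o ${I2G_MPI_APPLICATION} ${I2G_MPI_APPLICATION}.c"
      echo $cmd
      $cmd
      if [ ! $? -eq 0 ]; then
        echo "Error compiling program. Exiting..."
        exit 1
      fi
      echo "Successfully compiled ${I2G_MPI_APPLICATION}"
      return 0
    }

    # Called by mpi-start after the MPI executable has finished:
    # a real job would analyse the results or copy them to storage here.
    post_run_hook () {
      echo "Executing post hook."
      echo "Finished the post hook."
      return 0
    }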

Structure of an MPI job in the Grid with mpi-start
The JDL file ships the wrapper script (mpi-start-wrapper.sh), the hook script (mpi-hooks.sh) and the MPI source file in the InputSandbox, and retrieves mpi-test.out and mpi-test.err through the OutputSandbox:

JobType = "Normal";
CPUNumber = 16;
Executable = "mpi-start-wrapper.sh";
Arguments = "mpi-test MPICH2";
StdOutput = "mpi-test.out";
StdError = "mpi-test.err";
InputSandbox = {"mpi-start-wrapper.sh","mpi-hooks.sh","mpi-test.c"};
OutputSandbox = {"mpi-test.err","mpi-test.out"};
Requirements = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
            && Member("MPICH2", other.GlueHostApplicationSoftwareRunTimeEnvironment);

mpi-start-wrapper.sh: sets up the environment for a specific MPI implementation.
mpi-hooks.sh: used before and after the execution of the MPI program (pre-hook: download data and compile the MPI source; post-hook: analyse and save data).
mpi-test.c: the MPI code itself.
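Once the JDL and the three files in the InputSandbox are in place, the job is submitted and retrieved with the usual gLite WMS commands. A plausible session from the UI (the JDL file name mpi-test.jdl is assumed; the options are the standard glite-wms-job-* ones, shown as an illustrative sketch):

    # Submit the job, delegating a proxy automatically and saving the job ID.
    [amina@ui01 ~]$ glite-wms-job-submit -a -o jobid.txt mpi-test.jdl

    # Follow the job until its status reaches Done (Success).
    [amina@ui01 ~]$ glite-wms-job-status -i jobid.txt

    # Retrieve mpi-test.out and mpi-test.err from the OutputSandbox.
    [amina@ui01 ~]$ glite-wms-job-output -i jobid.txt --dir ./mpi-test-output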

Structure of an MPI job in the Grid with mpi-start
[Figures omitted from the transcript.]

MPI Program (mpi-test.c)
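The listing shown on this slide is not reproduced in the transcript. A small mpi-test.c in the same spirit (an illustrative sketch, not the author's exact code) that also reports which worker node each process runs on, via MPI_Get_processor_name, could be:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, namelen;
        char processor_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Name of the worker node this process is running on. */
        MPI_Get_processor_name(processor_name, &namelen);

        printf("Hello world! Process %d of %d running on %s\n",
               rank, size, processor_name);

        MPI_Finalize();
        return 0;
    }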

Job output
[Screenshot omitted from the transcript.]

Documentation
- EGEE MPI guide: http://wiki.egee-see.org/index.php/SG_MPI_Guide
- MPI-START documentation: http://egee-uig.web.cern.ch/egee-uig/production_pages/MPIJobs.html

Hands-on
http://applications.eumedgrid.eu/app_software_parallel.php?l=290

Questions ...